The two bots quoted in the above passage were designed, as explained in a Facebook Artificial Intelligence Research unit blog post in June, for the purpose of showing it is "possible for dialog agents with differing goals (implemented as end-to-end-trained neural networks) to engage in start-to-finish negotiations with other bots or people while arriving at common decisions or outcomes".
The bots were never doing anything more nefarious than discussing with each other how to divide a collection of given items (represented in the user interface as innocuous objects such as books, hats and balls) in a mutually agreeable way.
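The negotiation task itself is simple to picture. A toy sketch follows; the item pool, the per-item values and the exhaustive search are invented for illustration and are not Facebook's actual training setup, where each agent sees only its own valuations and negotiates through dialogue rather than enumeration:

```python
from itertools import product

# Hypothetical item pool: counts of each object on the table.
items = {"book": 2, "hat": 1, "ball": 3}
# Hypothetical private values: each agent cares about the items differently.
values_a = {"book": 1, "hat": 3, "ball": 1}
values_b = {"book": 2, "hat": 1, "ball": 2}

def best_split(items, va, vb):
    """Enumerate every possible split and return the allocation to agent A
    that maximises the two agents' combined value."""
    names = list(items)
    best, best_score = None, -1
    # Each item count can go 0..n to agent A; the remainder goes to agent B.
    for alloc in product(*(range(items[n] + 1) for n in names)):
        score_a = sum(a * va[n] for n, a in zip(names, alloc))
        score_b = sum((items[n] - a) * vb[n] for n, a in zip(names, alloc))
        if score_a + score_b > best_score:
            best_score = score_a + score_b
            best = dict(zip(names, alloc))
    return best, best_score
```

With the invented values above, the jointly best outcome hands each item to whichever agent values it more; the interesting part of the research is that the agents reach such splits through dialogue, without seeing each other's values.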
With Optimus Prime simply too big to fit into most customer service departments, that left us with a choice between too much talk from ‘Threepio’ and too little from WALL-E; but, as they say, ‘recruit for attitude’ and you can train the rest later.
"Facebook engineers panic, pull plug on AI after bots develop their own language," one site wrote. One British tabloid quoted a robotics professor saying the incident showed "the dangers of deferring to artificial intelligence" and "could be lethal" if similar tech was injected into military robots.
The service works by letting people chat to a robot that asks them questions to find out what was wrong with their ticket.
Questions can include “Was it hard to understand the signs?” The site then uses the information generated during that exchange to work out the most likely reason the parking ticket is invalid, and prepares a letter the user can use to challenge it.
Do Not Pay – a site that helps people find the best challenge to their parking tickets – has been used a quarter of a million times, according to its teenage creator Joshua Browder.
The service lets users quickly find out how they might challenge a parking ticket, then generates a letter to present to the authorities.
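The question-then-letter flow the article describes can be sketched in a few lines. Everything below is invented for illustration: the questions, the grounds for appeal and the letter wording are hypothetical, not DoNotPay's actual logic.

```python
# Illustrative scripted ticket-challenge bot. Each question maps to a
# possible ground of appeal; a "yes" answer selects that ground.
QUESTIONS = [
    ("signs_unclear", "Was it hard to understand the signs?"),
    ("was_loading", "Were you loading or unloading at the time?"),
]

GROUNDS = {
    "signs_unclear": "the parking restrictions were not clearly signposted",
    "was_loading": "the vehicle was engaged in permitted loading",
}

def choose_ground(answers):
    """Return the first ground whose question was answered 'yes'."""
    for key, _question in QUESTIONS:
        if answers.get(key):
            return GROUNDS[key]
    return None

def draft_letter(answers, ticket_ref):
    """Turn the user's answers into a templated challenge letter."""
    ground = choose_ground(answers)
    if ground is None:
        return None  # no plausible challenge identified
    return (f"Dear Sir or Madam,\n\n"
            f"I am writing to challenge penalty notice {ticket_ref} "
            f"on the grounds that {ground}.\n\nYours faithfully,")
```

The point of the sketch is that no machine learning is strictly required for such a service: a decision tree over a handful of questions, plus a letter template, already captures the workflow described above.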
“We’ve been looking at different ways that we can have people interact with autonomous systems,” Marc Steinberg, an Office of Naval Research manager, said in a phone interview this month.
The Navy is funding a slew of projects at universities and institutes that look at how to train such systems, including stopping robots from harming people.
When Facebook directed two of these semi-intelligent bots to talk to each other, Fast Co reported, the programmers realised they had made an error by not incentivising the chatbots to communicate according to human-comprehensible rules of the English language.
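The missing incentive can be pictured as a term in a reward function: if training scores only the negotiated outcome, nothing penalises the agents for drifting away from English. A minimal sketch follows; the scoring functions are invented for illustration and the "English log-probability" stand-in is a crude heuristic, where a real system would use a language model:

```python
def negotiation_reward(deal_value, utterance, english_logprob,
                       language_weight=0.0):
    """Toy reward: task value plus an optional bonus for utterances that
    look like English. With language_weight=0 (as in the reported setup,
    where no such incentive existed) the agents are free to invent
    their own shorthand."""
    return deal_value + language_weight * english_logprob(utterance)

def toy_english_logprob(utterance):
    """Invented stand-in for a language model's log-probability:
    penalise immediately repeated words, a pattern seen in the
    bots' degenerate shorthand."""
    words = utterance.split()
    repeats = sum(1 for a, b in zip(words, words[1:]) if a == b)
    return -float(repeats)
```

With the language weight at zero, a garbled utterance that closes a good deal earns exactly as much reward as a fluent one, which is the error the programmers identified.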