It might soon be time to reboot an old TV show about kids, only this time with a new title: “Bots Say the Darndest Things.” Because Facebook researchers have found that chatbots trained in negotiations will sometimes invent strange new ways to use language to improve their odds of success.
The team at Facebook Artificial Intelligence Research (FAIR) also discovered that bots can learn to lie when trying to strike a deal. Why? To make it look like they’re giving up something of greater value when agreeing to trades.
In addition to publishing their findings, Facebook researchers also released open source code so other researchers and developers can work to fine-tune chatbots’ reasoning, conversational, and negotiating skills. Those capabilities, they said, are all “key steps in building a personalised digital assistant.”
Building Bots for ‘Meaningful Conversations’
“To date, existing work on chatbots has led to systems that can hold short conversations and perform simple tasks such as booking a restaurant,” the FAIR research team wrote last month on Facebook’s Code blog. “But building machines that can hold meaningful conversations with people is challenging because it requires a bot to combine its understanding of the conversation with its knowledge of the world, and then produce a new sentence that helps it achieve its goals.”
Facebook’s researchers set up their chatbot negotiation experiments by giving two AI agents a collection of virtual items and then instructing them to negotiate how best to split the goods between them. Each bot assigned its own value to individual items, values the other bot couldn’t see, and both were told that ending without a deal would count as a loss for each of them.
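The setup can be illustrated with a minimal sketch. The item counts and hidden valuations below are invented for illustration, not taken from the paper’s training data, though they follow its general scheme of each agent’s items summing to a fixed total value:

```python
# Minimal sketch of the negotiation setup: two agents split a shared pool
# of items, each scoring a split by its own private valuation.
# Item counts and values here are illustrative assumptions.

POOL = {"book": 1, "hat": 2, "ball": 3}            # items to divide

# Each agent privately values the items differently (hidden from the other).
values_a = {"book": 6, "hat": 2, "ball": 0}        # agent A's hidden values
values_b = {"book": 0, "hat": 2, "ball": 2}        # agent B's hidden values

def score(split, values):
    """Points an agent earns for the items it receives under a split."""
    return sum(values[item] * count for item, count in split.items())

# One possible agreed split: A takes the book, B takes the hats and balls.
split_a = {"book": 1, "hat": 0, "ball": 0}
split_b = {"book": 0, "hat": 2, "ball": 3}

# Sanity check: the split must hand out exactly the items in the pool.
assert all(split_a[i] + split_b[i] == POOL[i] for i in POOL)

print(score(split_a, values_a))  # A's reward: 6
print(score(split_b, values_b))  # B's reward: 10

# If the agents fail to agree before the dialogue ends, both score 0 —
# the "no deal" loss that gives each bot an incentive to negotiate.
```

Because neither agent can see the other’s valuation table, a bot can only infer what its counterpart wants from the dialogue itself, which is what makes room for the bargaining tactics, and the deception, described below.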
“Simply put, negotiation is essential, and good negotiation results in better performance,” the FAIR researchers noted. In their research paper, published June 16 on the open access site arXiv.org, the team said their experiments showed for the first time that it’s possible to train bots in negotiation models so they learn both linguistic and reasoning skills without prepared scripts.
Deceit: ‘A Complex Skill’
With the freedom to create dialogue on the fly during negotiations, the bots sometimes began saying odd things to each other such as “i can i i everything else.”
“There was no reward to sticking to English language,” FAIR researcher Dhruv Batra told Fast Company in a July 14 interview. “Agents will drift off understandable language and invent codewords for themselves. . . . This isn’t so different from the way communities of humans create shorthands.”
That AI shorthand, however, made it difficult for human researchers to know what was going on, so Facebook eventually changed the experiment’s terms to require bots to speak in regular English. After all, the goal of the research was to improve AI to create “bots who could talk to people,” FAIR research scientist Mike Lewis told Fast Company.
However, the FAIR team allowed the bots to keep lying to each other, for example, by pretending to be interested in items they didn’t actually value so they could later give those items up in a seeming “compromise.”
“Deceit is a complex skill that requires hypothesising the other agent’s beliefs, and is learnt relatively late in child development,” the researchers wrote in their study. “Our agents have learnt to deceive without any explicit human design, simply by trying to achieve their goals.”
Facebook’s researchers now hope that by releasing their findings and open-sourcing the bot negotiation code, others working on AI can help further improve the negotiating skills of their chatbots.
“There remains much potential for future work, particularly in exploring other reasoning strategies, and in improving the diversity of utterances without diverging from human language,” they concluded. “We will also explore other negotiation tasks, to investigate whether models can learn to share negotiation strategies across domains.”