Brigham Young University computer science professors Jacob Crandall and Michael Goodrich, along with a team of researchers from MIT and other international universities, have created an algorithm that enables machines to compromise and cooperate rather than compete.
The researchers hope the work can ultimately improve the way humans interact with each other.
The full study is published in the journal Nature Communications.
“The success of the algorithm we studied in forging cooperative relationships with people suggests that artificial intelligence may be able to help improve our abilities to cooperate with each other,” said Crandall, lead researcher in the study.
“While humans are often good at cooperating, human relationships still frequently break down,” he continued. “People that were friends for years suddenly become enemies. Relationships between nations are often less than ideal. Additionally, many potential human relationships never develop because of our inabilities to resolve perceived differences. We hope that future work can continue to address how artificial intelligence can help people get along with each other.”
In the study, the researchers programmed machines with an algorithm called S# and had them play multiple two-player games, testing machine-machine, human-human, and human-machine pairings to see how each would fare. In most cases, the machines were better than the humans at compromising and finding solutions that benefited both parties.
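The article does not detail the games themselves, but the setting is a repeated two-player game in which each side can cooperate or exploit the other. The toy sketch below, an iterated prisoner's dilemma with assumed payoff values and a simple reciprocating policy, is an illustration of how cooperation can be sustained once it emerges; it is not the S# algorithm from the study.

```python
# Illustrative sketch only: a toy iterated prisoner's dilemma, not the S# algorithm.
# The payoff values and the "reciprocate" policy are assumptions chosen to show how
# cooperation can pay off in repeated two-player games.

PAYOFFS = {  # (my_move, partner_move) -> (my_score, partner_score)
    ("C", "C"): (3, 3),   # mutual cooperation
    ("C", "D"): (0, 5),   # I cooperate, partner defects
    ("D", "C"): (5, 0),   # I defect, partner cooperates
    ("D", "D"): (1, 1),   # mutual defection
}

def reciprocate(history):
    """Cooperate first, then mirror the partner's previous move (tit-for-tat)."""
    return "C" if not history else history[-1][1]

def always_defect(history):
    """A purely self-interested baseline partner."""
    return "D"

def play(rounds, strategy_a, strategy_b):
    """Play a repeated game and return the two players' total scores."""
    history_a, history_b = [], []          # each entry: (my_move, partner_move)
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a)
        move_b = strategy_b(history_b)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        history_a.append((move_a, move_b))
        history_b.append((move_b, move_a))
    return score_a, score_b

if __name__ == "__main__":
    # Two reciprocating players settle into mutual cooperation and both do well.
    print(play(20, reciprocate, reciprocate))    # (60, 60)
    # Against a constant defector, reciprocation limits the damage after round one.
    print(play(20, reciprocate, always_defect))  # (19, 24)
```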
“Two humans, if they were honest with each other and loyal, would have done as well as two machines,” Crandall said in a statement. “As it is, about half of the humans lied at some point. So essentially, this particular algorithm is learning that moral characteristics are good. It’s programmed to not lie, and it also learns to maintain cooperation once it emerges.”
Additionally, the researchers programmed the machines to use “cheap talk” phrases that reflected their partners’ behavior. When the humans cooperated well, the machines might say “Sweet, we are getting rich!” or “I accept your last proposal.” When the humans deceived them, the machines responded with phrases such as “Curse you!,” “You will pay for that,” or “In your face!”
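As a rough illustration of how such a mechanism could be wired up, the sketch below maps the partner's last action to one of the phrases quoted in the article. The trigger logic and the function name are assumptions made for illustration, not the study's actual implementation.

```python
import random

# The phrases are the examples quoted in the article; the selection rule is assumed.
COOPERATION_PHRASES = ["Sweet, we are getting rich!", "I accept your last proposal."]
BETRAYAL_PHRASES = ["Curse you!", "You will pay for that", "In your face!"]

def cheap_talk(partner_cooperated: bool) -> str:
    """Return a short message reflecting the partner's most recent action."""
    pool = COOPERATION_PHRASES if partner_cooperated else BETRAYAL_PHRASES
    return random.choice(pool)

# Example: respond after a round in which the partner defected.
print(cheap_talk(partner_cooperated=False))
```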
For years, machines have been shown to beat humans in zero-sum games such as chess, checkers, and poker, Crandall noted, and artificial intelligence can already cooperate with humans when both parties share the same end goal. What previous research had explored far less is a machine’s ability to compromise when working with humans.
“We have felt that research in artificial intelligence for scenarios in which a machine repeatedly interacts with a human or other machine when compromise is necessary and non-trivial is less developed,” said Crandall. “Thus, for many years we have been studying what one could term the ‘mathematics of cooperation.’ ”
The research suggests not only that machines can, in these settings, be more dependable cooperators than people, but also that humans may be able to learn compromise and cooperation skills from them.
The study is not the end of the line, however; the researchers see many potential next steps for expanding the work.
“First, we have been working on learning how ‘the way a robot talks’ impacts its ability to forge cooperative, long-term relationships with people,” said Crandall. “For example, will a robot be more successful in forging cooperative relationships with people by demonstrating ‘tough love’ in the way it speaks, or should it be more polite and empathetic? Second, we believe there is much space to combine these efforts in artificial intelligence with other disciplines, including business, psychology, sociology, and the medical field, to create solutions that help people solve complex social and economic problems.”
Despite the rapid advancements in robotics, Crandall believes it is important for humans to understand how artificial intelligence is developed and to recognize the limits of what it should, and should not, do for people.