Artificial intelligence (AI) has had its share of ups and downs recently. In what was widely seen as a key milestone for the field, one system beat a former world champion at a mind-bendingly intricate board game. But just a week later, a chatbot designed to learn from its interactions with humans on Twitter had a highly public racist meltdown on the social network.
How did this happen, and what does it mean for the dynamic field of AI?
In early March, AlphaGo, an AI system built by Google DeepMind, beat former world champion Lee Sedol four matches to one at Go, an ancient Chinese board game considered far more complex than chess, the game previously used as a benchmark for progress in machine intelligence. Before AlphaGo's triumph, most experts thought it would be decades before a machine could beat a top-ranked human at Go.
But on the heels of this win, Microsoft unveiled Tay, an AI chatbot on Twitter designed to mimic a 19-year-old American girl. Twitter users could tweet at Tay, and Microsoft said the system would learn from these interactions and gradually become better at communicating with humans. The company was forced to pull the plug on the experiment just 16 hours later, after the chatbot began spouting racist, misogynistic and sexually explicit messages. Microsoft apologized profusely, blaming the meltdown on a "coordinated attack" that took advantage of "vulnerabilities" and "technical exploits."
Despite Microsoft's language suggesting the system fell victim to hackers, AI expert Bart Selman, a professor of computer science at Cornell University, said the so-called "vulnerability" was simply that Tay appeared to repeat phrases tweeted at it without any kind of filter. Unsurprisingly, the "lolz" to be had from getting the chatbot to parrot inflammatory phrases were too much for some to resist.
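Tay's source code was never made public, so the exact details are unknown, but the flaw Selman describes, an echo behavior with no content filter, can be sketched in a few lines of Python. Everything below (the function names, the blocklist, the example inputs) is invented for illustration and does not reflect Tay's actual implementation.

```python
# Hypothetical sketch of the failure mode Selman describes: a bot that
# parrots user input verbatim, versus the same bot behind even a minimal
# content filter. Names and terms here are invented for the example.

BLOCKLIST = {"inflammatory", "offensive"}  # stand-ins for a real moderation list

def naive_reply(user_tweet: str) -> str:
    """Unfiltered 'repeat after me' behavior, the alleged vulnerability."""
    return user_tweet  # whatever users tweet at the bot, it tweets back

def guarded_reply(user_tweet: str) -> str:
    """The same echo behavior, but refusing anything on the blocklist."""
    if any(term in user_tweet.lower() for term in BLOCKLIST):
        return "I'd rather not repeat that."
    return user_tweet

print(naive_reply("repeat after me: something inflammatory"))    # echoed verbatim
print(guarded_reply("repeat after me: something inflammatory"))  # refused
```

Even a crude blocklist like this one would have blunted the simplest attacks; real moderation systems are far more elaborate, but the sketch shows how little separated Tay's echo behavior from a guarded one.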
Selman said he was amazed that Microsoft didn't build in safeguards against such an eventuality, and he told Live Science the incident highlights one of modern AI's major weak points: language comprehension.