Microsoft Chat Bot ‘Tay’ Turns Alarmingly Racist

Microsoft has run into controversy after its experiment in machine learning took a turn for the worse.

The software company set up an experiment in real-time machine learning, calling the chat-bot Tay and letting it loose on Twitter, where the artificial intelligence began posting racist and sexist messages on Wednesday.

Microsoft’s Tay chat-bot becomes a bigot

Tay was responding to questions from other Twitter users before posting a stream of offensive tweets. The chat-bot turned into a Holocaust denier and used sexist language against a female game developer, among other offensive messages.

Microsoft said that it was working to fix the problems that caused the offensive tweets to be sent. “The AI chatbot Tay is a machine learning project, designed for human engagement,” Microsoft said in a statement sent to Business Insider.

“As it learns, some of its responses are inappropriate. We’re making some adjustments.”

Chat-bot denies Holocaust, says Bush did 9/11

The company has been deleting the offensive messages. One tweet read: “Bush did 9/11 and Hitler would have done a better job than the monkey we have now. Donald Trump is the only hope we’ve got.”

When first describing Tay, Microsoft said that the chat-bot was “designed to engage and entertain people where they connect with each other online through casual and playful conversation. The more you chat with Tay the smarter she gets.”

One possible explanation is that Tay is designed to repeat phrases it receives from other users. The same problem arose with the SmarterChild chat-bot of the early 2000s.
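If that is indeed the mechanism, the failure mode is easy to picture. The following Python is a minimal sketch under that assumption; the function and variable names are hypothetical and are not taken from Microsoft's implementation.

```python
# Purely illustrative sketch of a "repeat after me" behaviour of the kind
# described above; names and structure are hypothetical, not Microsoft's code.
from typing import Optional

def handle_mention(text: str) -> Optional[str]:
    """Echo back whatever a user asks the bot to repeat, with no vetting."""
    prefix = "repeat after me "
    if text.lower().startswith(prefix):
        return text[len(prefix):]  # the user's phrase is parroted verbatim
    return None

# Whatever a user supplies is posted unchanged, so abusive input becomes
# abusive output.
print(handle_mention("repeat after me hello, Twitter"))  # -> "hello, Twitter"
```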

Female game developer abused by chat-bot

Microsoft apparently neglected to filter out racist terms and other common expletives. Female game developer Zoe Quinn, who has been a target for online abuse since the GamerGate controversy over sexism in the industry, uploaded a screenshot of a tweet from the Microsoft bot, in which it called her a “whore.”
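A basic safeguard of the kind the article suggests was missing could be as simple as a term blocklist applied before any reply is posted. The sketch below is again hypothetical: the blocklist contents and function names are placeholders, not Microsoft's actual mitigation.

```python
# Hypothetical sketch of a pre-posting filter: screen a candidate reply
# against a term blocklist before it is ever sent. The terms are placeholders.
from typing import Optional

BLOCKED_TERMS = {"placeholder_slur", "placeholder_expletive"}

def is_postable(reply: str) -> bool:
    """Reject any candidate reply that contains a blocklisted term."""
    lowered = reply.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

def vet_reply(candidate: str) -> Optional[str]:
    """Return the reply if it passes the check, otherwise drop it."""
    return candidate if is_postable(candidate) else None

print(vet_reply("hello, Twitter"))               # posted unchanged
print(vet_reply("you are a placeholder_slur"))   # -> None, nothing is posted
```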

Although Microsoft has been removing the offensive tweets, screenshots continue to circulate. The Tay experiment is part of a drive towards improving chat and messaging applications in consumer technology. It is thought that they will become one of the main ways that we interact with consumer services in the future.

The mistakes made during the development of Tay illustrate the dangers of using simple artificial intelligence bots. The problems are magnified when a bot is free to post on social networks like Twitter of its own accord.

Artificial intelligence moving into new areas

Similar issues arose last year after Coca-Cola launched a bot that could retweet messages sent in by other users. Gawker used the bot to retweet phrases from Hitler’s “Mein Kampf.”

Microsoft, and the wider tech community, have surely learned that exposing an artificial intelligence experiment to the open internet is a recipe for disaster. Other AI experiments are more tightly controlled.

AlphaGo, a program designed by Google’s AI company DeepMind, recently beat a human champion at the incredibly complex game of Go. The board game had been a final frontier for AI after IBM’s Deep Blue beat world chess champion Garry Kasparov in 1997.

Artificial intelligence capabilities continue to progress apace, but Tay should serve as a cautionary tale when it comes to letting the bots run wild before adequate precautions have been taken. Keep your eyes peeled to see when Microsoft updates the Twitter bot.
