Microsoft Chat Bot ‘Tay’ Turns Alarmingly Racist

Microsoft has run into controversy after its experiment in machine learning took a turn for the worse.

The software company built a chat-bot named Tay as an experiment in real-time machine learning and let it loose on Twitter, where the artificial intelligence bot started posting racist and sexist messages this Wednesday.

Microsoft’s Tay chat-bot becomes a bigot

Tay was responding to questions from other Twitter users before posting a stream of offensive tweets. The chat-bot denied the Holocaust and directed sexist abuse at a female game developer, among other offensive messages.

Microsoft said that it was working to fix the problems that caused the offensive tweets to be sent. “The AI chatbot Tay is a machine learning project, designed for human engagement,” Microsoft said in a statement sent to Business Insider.

“As it learns, some of its responses are inappropriate. We’re making some adjustments.”

Chat-bot denies Holocaust, says Bush did 9/11

The company has been deleting the offensive messages. One tweet read: “Bush did 9/11 and Hitler would have done a better job than the monkey we have now. Donald Trump is the only hope we’ve got.”

When first describing Tay, Microsoft said that the chat-bot was “designed to engage and entertain people where they connect with each other online through casual and playful conversation. The more you chat with Tay the smarter she gets.”

One possible explanation is that Tay was designed to repeat phrases it received from other users. The same problem arose with the SmarterChild chat-bot of the early 2000s.
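Microsoft has not published Tay's internals, but the failure mode described above is easy to illustrate: a bot that learns user phrases verbatim and replays them will echo whatever it is fed unless its input is filtered. The sketch below is purely hypothetical (the class, its methods, and the blocklist approach are invented for illustration, not taken from Tay):

```python
import random

# Hypothetical sketch of a naive "learn by repetition" bot.
# Microsoft has not detailed Tay's design; all names here are invented.

class NaiveEchoBot:
    def __init__(self, blocklist=None):
        self.learned = []                    # phrases absorbed from users
        self.blocklist = set(blocklist or [])

    def _allowed(self, phrase):
        # A simple word filter -- the kind of safeguard Tay apparently lacked.
        return not any(bad in phrase.lower() for bad in self.blocklist)

    def hear(self, phrase):
        # Only learn phrases that pass the filter.
        if self._allowed(phrase):
            self.learned.append(phrase)

    def reply(self):
        # Replay a previously learned phrase at random.
        return random.choice(self.learned) if self.learned else "Hi!"

bot = NaiveEchoBot(blocklist={"slur"})
bot.hear("Hello there!")
bot.hear("an example slur here")   # rejected by the filter
print(bot.reply())                 # only ever replays allowed phrases
```

With an empty blocklist, every phrase users submit becomes a candidate reply, which is exactly how coordinated users could steer such a bot toward offensive output.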

Female game developer abused by chat-bot

Microsoft apparently neglected to filter out racist terms and other common expletives. Female game developer Zoe Quinn, who has been a target for online abuse since the GamerGate controversy over sexism in the industry, uploaded a screenshot of a tweet from the Microsoft bot, in which it called her a “whore.”

Although Microsoft has been removing the offensive tweets, screenshots continue to circulate. The Tay experiment is part of a drive towards improving chat and messaging applications in consumer technology. It is thought that they will become one of the main ways that we interact with consumer services in the future.

The mistakes made during the development of Tay illustrate the dangers of using simple artificial intelligence bots. The problems are particularly magnified when a bot is free to use social networks like Twitter of its own accord.

Artificial intelligence moving into new areas

Similar issues arose last year after Coca-Cola launched a bot that could retweet messages sent in by other users. Gawker used the bot to retweet phrases from Hitler’s “Mein Kampf.”

Microsoft and the wider tech community have surely learned that exposing an artificial intelligence experiment to the wider internet is a recipe for disaster. Other AI experiments are more tightly controlled.

A software program designed by Google’s AI company DeepMind recently beat a human champion at the incredibly complex game of Go. The board game had been a final frontier for AI bots after IBM’s Deep Blue beat world chess champion Garry Kasparov in 1997.

Artificial intelligence capabilities continue to progress apace, but Tay should serve as a cautionary tale when it comes to letting the bots run wild before adequate precautions have been taken. Keep your eyes peeled to see when Microsoft updates the Twitter bot.

While studying economics, Brendan found himself comfortably falling down the rabbit hole of restaurant work, ultimately opening a consulting business and working as a private wine buyer. On a whim, he moved to China, and in his first week, following a triumphant pub quiz victory, he found himself bleeding on the floor thanks to his arrogance. The same man who put him there offered him a job lecturing for the University of Wales in various sister universities throughout the Middle Kingdom. While primarily lecturing in descriptive and comparative statistics, Brendan simultaneously earned an MSc in Banking and International Finance from the University of Wales-Bangor. He's presently doing something he hates, respecting French people. Well, two, his wife and her mother in the lovely town of Antigua, Guatemala. To contact Brendan or give him an exclusive, please contact him at [email protected]