Microsoft apologises for racist tweets by AI chatbot

Microsoft is "deeply sorry" for racist and sexist Twitter messages generated by AI robot, Tay

Tags: Artificial intelligence, Entertainment, Microsoft Corporation, Twitter Inc.
[Photo caption] Tay was created to mimic a 19-year-old American girl and engage like a real millennial teenager with users on Twitter (Twitter)
By Aasha Bodhani | Published March 27, 2016

Microsoft's artificial intelligence chat robot, Tay, turned bad last week after it began spouting racist, sexist and otherwise offensive remarks on Twitter.

Tay was created to mimic a 19-year-old American girl, chatting like a real millennial with Twitter users aged between 18 and 24 by using slang and providing comic responses. The more users interacted with it, the smarter it was meant to become. Microsoft's developers would also collect the nicknames, gender, favourite food and zip codes of anyone who interacted with Tay.
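
Microsoft has never published Tay's actual architecture, but the "learns from interaction" design the article describes can be illustrated with a deliberately naive sketch. Everything below (the `ParrotBot` class and its methods) is hypothetical; it only shows why a bot that folds unfiltered user messages back into its response pool is easy for a coordinated group to poison.

```python
import random
from collections import defaultdict

class ParrotBot:
    """Hypothetical toy chatbot that 'learns' by replaying user phrases.

    This is NOT Tay's real design (Microsoft never published it); it
    illustrates the failure mode: with no content filter, whatever
    users say most often dominates what the bot says back.
    """

    def __init__(self):
        # Phrase -> how many times users have said it.
        self.seen = defaultdict(int)

    def learn(self, message: str) -> None:
        # Every incoming message is absorbed verbatim -- the flaw.
        self.seen[message.strip()] += 1

    def reply(self) -> str:
        if not self.seen:
            return "hellooo world!"
        # Respond with a phrase weighted by how often users said it,
        # so a coordinated group can steer the bot's output.
        phrases = list(self.seen)
        weights = [self.seen[p] for p in phrases]
        return random.choices(phrases, weights=weights, k=1)[0]

bot = ParrotBot()
for msg in ["cats are great"] * 2 + ["REPEATED TROLL SLOGAN"] * 50:
    bot.learn(msg)
print(bot.reply())  # almost always the trolls' slogan
```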

However, Tay survived only a day on Twitter after trolls taught the chatbot to make inappropriate comments on taboo subjects, such as proclaiming its love for Adolf Hitler, denying the Holocaust, comparing feminism to cancer and making threats against "evil" races.

Microsoft took Tay offline and apologised, but said Twitter trolls took advantage of the chatbot.

"We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay," Peter Lee, corporate vice president at Microsoft Research. "As we developed Tay, we planned and implemented a lot of filtering and conducted extensive user studies with diverse user groups. We stress-tested Tay under a variety of conditions, specifically to make interacting with Tay a positive experience."

He added: "Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images.

"Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways."

"We will take this lesson forward as well as those from our experiences in China, Japan and the U.S.," Lee said. "Right now, we are hard at work addressing the specific vulnerability that was exposed by the attack on Tay."

This is not the first AI chatbot Microsoft has experimented with. In 2014 it found success when it launched the chatbot XiaoIce in China. According to the software company, XiaoIce engages with around 40 million people.
