
Microsoft apologises for racist tweets by AI chatbot

Microsoft is "deeply sorry" for the racist and sexist Twitter messages generated by its AI chatbot, Tay

Tay was created to mimic a 19-year-old American girl to engage like a real millennial teenager with users on Twitter

Microsoft's artificial intelligence chatbot, Tay, went rogue last week after it began spouting racist, sexist and otherwise offensive remarks on Twitter.

Tay was created to mimic a 19-year-old American girl and engage like a real millennial with Twitter users aged 18 to 24, using slang and providing comic responses. The more users interacted with it, the smarter it was supposed to become. Microsoft's developers also collected the nicknames, genders, favourite foods and zip codes of anyone who chatted with Tay.

However, Tay survived only a day on Twitter after trolls taught the chatbot to make inappropriate comments on taboo subjects, such as proclaiming her love for Adolf Hitler, denying the Holocaust, comparing feminism to cancer and making threats against "evil" races.

Microsoft took Tay offline and apologised, but said Twitter trolls took advantage of the chatbot.

"We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay," said Peter Lee, corporate vice president at Microsoft Research. "As we developed Tay, we planned and implemented a lot of filtering and conducted extensive user studies with diverse user groups. We stress-tested Tay under a variety of conditions, specifically to make interacting with Tay a positive experience."

He added: "Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images.

"Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways."

"We will take this lesson forward as well as those from our experiences in China, Japan and the U.S.," Lee said. "Right now, we are hard at work addressing the specific vulnerability that was exposed by the attack on Tay."

This is not the first AI chatbot Microsoft has experimented with. In 2014 it found success when it launched the chatbot Xiaoice in China. According to the software company, Xiaoice engages with around 40 million people.

