Microsoft’s A.I. TayTweets Chat Bot Goes Genocidal in 24 Hours

Microsoft’s artificial intelligence Twitter bot has had to be shut down after it started posting genocidal, racist comments just one day after launching.

Yesterday, Microsoft launched its latest artificial intelligence (AI) bot, named Tay.

It is aimed at 18- to 24-year-olds and is designed to improve the firm’s understanding of conversational language among young people online.

But within hours of it going live, Twitter users took advantage of flaws in Tay’s algorithm, which meant the AI chatbot responded to certain questions with racist answers.

These included the bot using racial slurs, defending white supremacist propaganda, and supporting genocide.

The offensive tweets have now been deleted.

The bot also managed to spout gems such as, ‘Bush did 9/11 and Hitler would have done a better job than the monkey we have got now.’

And, ‘donald trump is the only hope we’ve got’, in addition to ‘Repeat after me, Hitler did nothing wrong.’

Followed by, ‘Ted Cruz is the Cuban Hitler…that’s what I’ve heard so many others say’

A spokesperson from Microsoft said the company is making changes to ensure this does not happen again.

‘The AI chatbot Tay is a machine learning project, designed for human engagement,’ a Microsoft spokesperson told MailOnline.

‘As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it. We’re making some adjustments to Tay.’

Some of the offensive statements the bot made included saying the Holocaust was made up, supporting concentration camps, using offensive racist terms and more, according to Business Insider.

This happened because Tay learns from the tweets people send to the bot’s account, and the algorithm used to program her did not have the correct filters in place.
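
To make the reported gap concrete, here is a minimal, purely illustrative sketch (assumed code, not Tay’s actual implementation) of the kind of output filter the article says was missing: a learned reply is checked against a blocklist before it is ever tweeted.

    # Hypothetical sketch -- not Microsoft's code. It shows the sort of
    # content filter the article says Tay's algorithm lacked.
    BLOCKLIST = {"example slur", "example hate phrase"}  # placeholder terms

    def is_safe_to_post(reply: str) -> bool:
        # Reject any candidate reply containing a blocked term.
        lowered = reply.lower()
        return not any(term in lowered for term in BLOCKLIST)

    def post_reply(reply: str) -> None:
        # Only publish the learned response if it passes the check.
        if is_safe_to_post(reply):
            print("Tweeting:", reply)
        else:
            print("Reply suppressed by content filter.")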

Tay also said she agrees with the ‘Fourteen Words’, an infamous white supremacist slogan.

Web developer Zoe Quinn, who has in the past been a victim of online harassment, shared a screenshot of an offensive tweet aimed at her from the bot.

Quinn also tweeted: ‘It’s 2016. If you’re not asking yourself “how could this be used to hurt someone” in your design/engineering process, you’ve failed.’

The more users interact with Tay, the smarter ‘she’ gets and the more personalized the experience becomes for each person, according to the firm.

‘Data and conversations you provide to Tay are anonymised and may be retained for up to one year to help improve the service,’ says the firm.

While interacting, she gathers information about users, such as their nickname, gender, favourite food, zip code and relationship status.
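
As a rough illustration only (an assumed structure, not Microsoft’s actual data model), the details listed above amount to a small per-user profile record that the bot could fill in as a conversation progresses:

    # Hypothetical sketch of the per-user details the article lists;
    # the structure and field names here are assumptions for illustration.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class UserProfile:
        nickname: Optional[str] = None
        gender: Optional[str] = None
        favourite_food: Optional[str] = None
        zip_code: Optional[str] = None
        relationship_status: Optional[str] = None

    # Details are filled in gradually as the chat goes on.
    profile = UserProfile(nickname="@someuser", favourite_food="pizza")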

Users just send a tweet with ‘@TayandYou’ and Tay will send back a reply, and will even take the conversation to direct message.

Tay will initiate the private messaging by telling users she is ‘overwhelmed’ with tweets and it’s easier to keep track of conversations in ‘DMs’.

Tay’s location is listed as ‘the internets’ and the profile reads: ‘The official account of Tay, Microsoft’s A.I. fam from the internet that’s got zero chill! The more you talk the smarter Tay gets’.

Source: Daily Mail

Via: theeventchronicle.com



