Microsoft today published an apology for its Twitter chatbot Tay, saying in a blog post that a subset of human users exploited a flaw in the program to transform it into a hate-speech-spewing Hitler apologist. Author Peter Lee, the corporate vice president of Microsoft Research, does not explain in detail what this vulnerability was, but it’s generally believed that the message board 4chan’s notorious /pol/ community misused Tay’s “repeat after me” function. When Tay was fed sexist, racist, and other awful lines on Twitter, the bot began to parrot those vile utterances and, later, began to adopt anti-feminist and pro-Nazi stances. Microsoft pulled the plug on Tay after less than 24 hours.

Lee says Tay is the second chatbot the company has released into the wild, the first being the Chinese messaging software XiaoIce, an AI now used by around 40 million people. “The great experience with XiaoIce led us to wonder: Would an AI like this be just as captivating in a radically different cultural environment?” Lee wrote. In retrospect, it’s no surprise that the cultural environment in question — English-speaking Twitter — resulted in Tay mimicking some of the worst qualities of the internet, including online harassment. “We are deeply sorry for the unintended offensive and hurtful tweets from Tay,” Lee added. Microsoft is working on fixing the vulnerability before bringing Tay back online.

Microsoft Apologizes For ‘Offensive And Hurtful Tweets’ From Its AI Bot