Hmm: Microsoft’s A.I. bot ‘Tay’ taken offline after learning Holocaust denial, racism from Web users


[Ed. – That’s a limitation of A.I. that will never change.  You can program responses and moral decision trees, but you can’t actually enable it to think both independently and morally.  Only humans can do that.]

Microsoft’s newly launched A.I.-powered bot called Tay, which was responding to tweets and chats on GroupMe and Kik, has already been shut down over concerns about its inability to recognize when it was making offensive or racist statements. Of course, the bot wasn’t coded to be racist, but it “learns” from those it interacts with. And naturally, given that this is the Internet, one of the first things online users taught Tay was how to be racist, and how to spout back ill-informed or inflammatory political opinions. [Update: Microsoft now says it’s “making adjustments” to Tay in light of this problem.]

In case you missed it, Tay is an A.I. project built by the Microsoft Technology and Research and Bing teams, in an effort to conduct research on conversational understanding. That is, it’s a bot that you can talk to online. The company described the bot as “Microsoft’s A.I. fam from the internet that’s got zero chill!”, if you can believe that. …

As Twitter users quickly came to understand, Tay would often repeat back racist tweets with her own commentary.


