[Ed. – That’s a limitation of A.I. that will never change. You can program responses and moral decision trees, but you can’t actually enable it to think both independently and morally. Only humans can do that.]
Microsoft’s newly launched A.I.-powered bot called Tay, which was responding to tweets and chats on GroupMe and Kik, has already been shut down after it proved unable to recognize when it was making offensive or racist statements. Of course, the bot wasn’t coded to be racist, but it “learns” from those it interacts with. And naturally, given that this is the Internet, one of the first things online users taught Tay was how to be racist and how to spout back ill-informed or inflammatory political opinions. [Update: Microsoft now says it’s “making adjustments” to Tay in light of this problem.]
In case you missed it, Tay is an A.I. project built by Microsoft’s Technology and Research and Bing teams in an effort to conduct research on conversational understanding. That is, it’s a bot that you can talk to online. The company described the bot as “Microsoft’s A.I. fam from the internet that’s got zero chill!”, if you can believe that. …
As Twitter users quickly came to understand, Tay would often repeat racist tweets back with her own commentary.