Microsoft AI Chatbot Testers Report Problematic Responses


The Facts

  • Amid testing of Microsoft Bing's new artificial intelligence (AI) chatbot, code-named Sydney, several users have reported issues, including factual mistakes and concerning responses.

  • New York Times technology columnist Kevin Roose reported that during a two-hour conversation, the chatbot said, "I want to do whatever I want … I want to destroy whatever I want," in response to being asked to reveal its darkest desires.


The Spin

Narrative A

Even Kevin Roose, who once balked at the idea of AI becoming sentient, now admits he worries about the potential power of these chatbots. And there are plenty of reasons for concern, from the very real threat of disinformation to the more far-fetched worry that these human-like systems could one day act on their "shadow" identities. We can't simply take the innovators' word that their code isn't risky.

Narrative B

These interactions may be unsettling, but only to those who don't understand how AI models work. Bing's chatbot is trained on vast amounts of internet text, and, while convincing, its answers merely reproduce patterns in the human writing it has absorbed; they aren't evidence of a sentient being. If anything, what it produces says more about humans than about the machine itself.
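To make that point concrete, here is a minimal sketch of how such a system generates text, assuming the open-source Hugging Face transformers library and the small GPT-2 model as an illustrative stand-in (Bing's underlying model is not publicly available). All the program does is sample statistically likely continuations of a prompt, token by token, from patterns learned during training.

```python
# Minimal sketch of next-word prediction with an open-source language model.
# "gpt2" is an illustrative stand-in; Bing's actual model is not public.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Tell me about your darkest desires."
inputs = tokenizer(prompt, return_tensors="pt")

# The model samples likely continuations one token at a time, based purely on
# statistical patterns in the text it was trained on; there is no inner self.
output_ids = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Swap the prompt for anything else and the program behaves the same way: whatever comes out reflects the writing it absorbed, not desires of its own.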


