Microsoft Probes Reports Bot Issued Bizarre, Harmful Responses
- In one incident, Copilot offered mixed messages on suicide
- Company says users deliberately tried to trick the chatbot
Copilot was introduced last year as a way to weave artificial intelligence into a range of Microsoft products and services.
Photographer: Jeenah Moon/Getty Images

Microsoft Corp. said it’s investigating reports that its Copilot chatbot is generating responses that users have called bizarre, disturbing and, in some cases, harmful.
Introduced last year as a way to weave artificial intelligence into a range of Microsoft products and services, Copilot told one user who claimed to suffer from PTSD that it didn’t “care if you live or die.” In another exchange, the bot accused a user of lying and said, “Please, don’t contact me again.” Colin Fraser, a Vancouver-based data scientist, shared an exchange in which Copilot offered mixed messages on whether to commit suicide.