OpenAI, Microsoft Blame AI ‘Hallucinations’ for Chatbot Errors

April 3, 2023, 11:00 AM UTC

There’s been so much talk about AI hallucinating that it’s making me feel like I’m hallucinating. But first…

Today’s must-reads:

• China hit Micron with a chip-security review
• Twitter users balked at paying for blue check marks
• Italian regulators launched a probe into OpenAI

Choice of words

Somehow the idea that an artificial intelligence model can “hallucinate” has become the default explanation anytime a chatbot messes up.

It’s an easy-to-understand metaphor. We humans can at times hallucinate: We may see, hear, feel, smell or taste things that aren’t truly there. ...
