Artificial intelligence-driven chatbots are giving users problematic medical advice about half the time, according to a new study, highlighting the health risks of the technology that’s becoming increasingly integral in day-to-day life.
Researchers from the US, Canada and the UK evaluated five popular platforms — ChatGPT, Gemini, Meta AI, Grok and DeepSeek — by asking each of them 10 questions across five health categories. Out of the total responses, about 50% were deemed problematic, including almost 20% that were highly problematic, according to findings published this week in medical journal BMJ Open.
The chatbots performed relatively better on closed-ended prompts ...