ChatGPT Poses Propaganda and Hacking Risks, Researchers Say

Jan. 11, 2023, 4:00 PM UTC

Ever since OpenAI’s viral chatbot was unveiled late last year, detractors have lined up to flag potential misuse of ChatGPT by email scammers, bots, stalkers and hackers.

The latest warning is particularly eye-catching: It comes from OpenAI itself. Two of its policy researchers were among the six authors of a new report that investigates the threat of AI-enabled influence operations. (One of them has since left OpenAI.)

“Our bottom-line judgment is that language models will be useful for propagandists and will likely transform online influence operations,” according to a blog post accompanying the report, which was published on Wednesday morning.

Concerns ...
