OpenAI: Planning as Though New AI Models Can Pose ‘High’ Risk

December 10, 2025, 8:13 PM UTC

OpenAI is planning on the assumption that its new AI models can pose a ‘high’ risk.

  • Company is strengthening models for defensive cybersecurity tasks
  • Training models to safely respond to harmful requests
  • Strategy is to ‘mitigate risk through layered safety stack’
  • OpenAI to establish ‘Frontier Risk Council’
  • Advisory group to bring experienced cyber defenders and security practitioners into close collaboration with OpenAI’s teams


To contact the reporter on this story:
Marisa Coulton in Toronto at mcoulton1@bloomberg.net

To contact the editor responsible for this story:
Ilya Banares at ibanares@bloomberg.net

© 2025 Bloomberg L.P. All rights reserved.
