OpenAI is preparing for the possibility that new AI models could pose a ‘high’ cybersecurity risk.
- Company is strengthening models for defensive cybersecurity tasks
- Training models to safely respond to harmful requests
- Strategy is to ‘mitigate risk through layered safety stack’
- OpenAI to establish ‘Frontier Risk Council’
- Advisory group to bring experienced cyber defenders and security practitioners into close collaboration with its teams
© 2025 Bloomberg L.P. All rights reserved.
