Advocates of strict guardrails on AI developers to prevent their models from causing catastrophic harms, such as releasing biological weapons, are split over initial recommendations from a working group assembled by California Gov. Gavin Newsom (D).
Some safety groups back the preliminary proposals the team of academics issued on March 18, such as third-party audits to assess a model's risk. Other tech critics say the group should add tougher compliance rules to hold AI companies more accountable.
State Sen. Scott Wiener (D) wants to shape his new bill (SB 53) based on the report, after Newsom ...