AI Rules Can Draw on Approach to Cyber, White House Adviser Says

Jan. 12, 2024, 12:17 AM UTC

Senior Biden administration officials flocked to CES in Las Vegas this week to see the cutting-edge technologies rolling out this year and weigh in on the government’s role in the artificial intelligence boom.

Anne Neuberger, the deputy national security adviser for cyber and emerging technologies, has played a key role in the White House response to AI, involved in crafting President Joe Biden’s sweeping executive order that directs agencies to set rules to help ensure the technology is deployed safely.

Companies and the federal government should work together to encourage innovation but keep consumers safe from cyberattacks, for example, the top cyber official said at a CES panel on Thursday. Separately, she announced the US is entering an agreement with the European Union on adopting cyber labeling standards to protect smart devices from cyberattacks.

Bloomberg Government caught up with Neuberger afterward to discuss the latest AI policy developments unfolding in Washington.

This transcript has been edited for length and clarity.

Q: Where do you see Congress stepping in to regulate AI?

A: There are three parts to the work you’ve been watching happen on AI. The first part was the president negotiating the voluntary commitments with companies to say—companies, you’re building this technology, you’re accountable for trust and safety, and you need to make commitments to how you’re thinking about that trust and safety.

That was a bridge to the president’s executive order, which covered a lot, both in harnessing the promise of AI in areas like potentially education ... and in addressing the perils of bias in models used in loans and hiring. So that was the president’s executive order, which went right up to the line of what could be done under his authority.

Now that’s a bridge to where we need new laws, and that’s the work you see happening on the Hill. I think Leader Chuck Schumer has tried to take a different approach in convening classified sessions, open sessions, and closed-door sessions in a bipartisan way to say, this is really a technology that has both promise and risk, and we want to do whatever we can to ensure America is leading on the promise, sharing that obviously with allies and partners, but also making real steps on the risks.

Q: How closely is the White House engaged with lawmakers on AI?

A: The White House talks to the Hill a lot. We provide technical assistance, where the Hill will say, here’s some of the legislation we’re thinking about, give us your thoughts. So we talk regularly.

This is clearly a Senator Schumer, Hill-led process. But as you know, in a new and quickly evolving technology area, we’ll brief on how we see adversaries potentially using it, we’ll brief on our take on how companies are implementing trust and safety, and on how the market is evolving. There’s a lot of different thinking on large-scale frontier models versus smaller, more focused trained models, and on how we ensure that a regulatory framework allows for innovation in both.

Q: A lot of the conversation has become about striking that balance of mitigating AI’s risks and promoting innovation. It seems as though because AI touches everything, it’ll be hard to tackle.

A: I think that as we look at the field of AI, there’s a lot we can learn from the field of cybersecurity. For example, how you test components before they go to market, requiring that they must be tested before they go, and then additional controls on use. There are different kinds of use that worry us more. If an AI model is deployed to ensure optimum efficiency in railroad signaling, clearly we’ll want a different level of risk mitigation assessment and regular checking than we would for an AI model that’s used to, you know, help consumers write a first draft of a resume. So it’s about thinking through the lifecycle of the technology and where the key risks are, so that there’s, as we say, defense in depth. From a cybersecurity perspective, there are real lessons here to think about.

The AI executive order had a particular tasking, for example, directing critical infrastructure sectors to assess, by the end of this month, what new risks AI deployments could bring, and to come back with recommendations on how to tackle them. We didn’t want to do a one-size-fits-all approach, and wanted to bring in those regulators who really know each sector best and how to think about it.

Q: What are your main concerns around AI and how would you prioritize them?

A: You saw it laid out in the executive order: ensuring that we’re protecting consumers from discrimination, and that in the day-to-day ways AI may be built into our services, decisions can be explained to ensure that they’re fair, that they’re just, that they’re right.

That moves all the way to critical services, as we talked about: when AI is deployed in a rail signaling system, in an energy grid, in water optimization, or in adding chemicals to keep water safe. And it goes all the way through to the national security side of how an adversary could potentially use AI models to accelerate development of cyber weapons or bioweapons.

All of those are really a focus. And given, as you noted, the transformative potential of the technology, we’re really moving out on all of them because the technology is evolving quickly. We’re watching and we’re working through that, because we also want to make sure we glean the promise.

Q: On the promise, what are some of the cool things you’ve been learning about AI and its potential to do good?

A: The promise is really a big one, and you see that when you walk around. I’ll give one example and that relates to CES here as well.

We see with climate change potentially bigger floods, and potentially drier, hotter temperatures as well. We want to ensure that food production remains the same or even grows for a growing global population. So bringing together predictive weather models, whether on rain, on winds, on heat, together with planting cycles and food production to predict: Should we plant earlier this year? Should different types of products be planted in different regions as some regions become drier and some become wetter? Should food and agriculture adjust?

Another really interesting area is the use of computer vision models to detect cancers more quickly. And then I think a final one, which particularly post-Maui we really want to see happen quickly, is emergency management. If you can do image identification faster from satellite collection, combined with prediction of winds and how fire is moving, and then knowledge of where cars are and where people are evacuating, you can help guide people evacuating in a disaster. Those pieces coming together quickly during an urgent time can save lives.

To contact the reporter on this story: Oma Seddiq at oseddiq@bloombergindustry.com

To contact the editors responsible for this story: John Hewitt Jones at jhewittjones@bloombergindustry.com; Robin Meszoly at rmeszoly@bgov.com
