An advisory commission is recommending that the U.S. government weigh risks to privacy and threats of bias from using tools like facial recognition for national security.
Set to be finalized in March, the recommendations from the National Security Commission on Artificial Intelligence urge intelligence agencies, the Department of Homeland Security, and the Federal Bureau of Investigation to review and mitigate such effects.
The recommendations call for the government to assess how AI is used in the national security context and its impact on privacy and civil rights, in an effort to ease public concerns over potential misuse of the technology.
The commission, led by former Google chairman Eric Schmidt, is advising Congress and the executive branch on using AI for national security purposes. Its work emphasizes the growing importance of AI, including a call to double federal research and development spending on the technology each year until it reaches $32 billion annually by 2026.
Assessing Impacts
Beyond the technology itself, the recommendations signal a shift from broad AI principles to more procedural mechanisms, such as impact assessments.
Impact assessments for AI could help get at the issue of bias in automated decision-making, which hasn’t traditionally been part of privacy law.
“Having a commission at this level focus on that problem is important,” said Justin Antonipillai, founder and chief executive of WireWheel, a firm that specializes in data privacy compliance. Antonipillai previously led international negotiations on data privacy and security during the Obama administration.
Federal agencies already conduct privacy impact assessments that are similar to those required of companies that must comply with European Union data protection rules. The assessments include looking at what kind of personal data is involved, what the data is used for, and how it’s protected.
Such assessments are often seen as box-checking exercises, according to Eric Horvitz, a commission member and chief scientific officer at Microsoft Corp.
“AI technologies that go from data to models to deployment require a new kind of impact assessment when it comes to privacy and civil liberties,” Horvitz said.
‘Black Box’
The commission acknowledges concerns that have been raised over a lack of transparency into AI’s use and impacts, which could affect public and political backing for the technology.
One challenge to convincing the public to trust AI is that its use isn’t always disclosed, according to John Davisson, senior counsel at the nonprofit Electronic Privacy Information Center. So publishing AI impact assessments could be a good “first step,” he said.
“That makes the argument for transparency,” Davisson said.
DHS and the FBI should review whether people receive adequate notice that AI is used in decision-making, and whether AI systems can be audited to trace how a contested decision was reached, according to the commission’s report. Such audits would seek to address AI’s so-called “black box” problem: it can be hard to tell how the technology arrives at its decisions.
These recommendations are part of a proposed system of redress for people harmed by AI errors, such as being denied a visa or placed on a no-fly list. Concerns have also been raised about AI tools such as facial recognition being used to bring criminal charges, potentially leading to improper arrests.
The commission is urging the U.S. attorney general to issue guidance on AI and due process under the law, so that people have the right to challenge a decision made against them.
Civil Rights
The commission’s report also calls for a task force focused on the privacy and civil liberties implications of AI to review current policy and pinpoint any “legal gaps” for existing and emerging technologies.
“There’s this tension, as in almost any technological development, between concerns about the misuse of technology and people that are saying we’re behind,” said Tom Stefanick, a visiting fellow in the foreign policy program at the Brookings Institution.
The commission is seeking a larger role for the Privacy and Civil Liberties Oversight Board, which is tasked with making sure the federal government’s efforts to prevent terrorism are balanced with the need to protect privacy and civil liberties.
“Looking at the civil rights implications of AI is important,” said James Lewis, a former government official who’s now a senior vice president and director of a technology program at the Center for Strategic and International Studies. “That’s going to condition some of the political acceptance of how comfortable people feel using this technology.”
The board should be given more visibility and technical insight into AI systems such as surveillance used for intelligence purposes and facial recognition used in security for air travel, according to the recommendations.