- Federal agencies will have to examine commercial data use
- Biden called on Congress for federal privacy legislation
President Joe Biden’s executive order on artificial intelligence puts federal agencies at the center of a plan to strengthen protections for Americans’ privacy against the risks posed by AI.
AI technologies create a litany of risks to Americans’ privacy and civil liberties, including “making it easier to extract, re-identify, link, infer, and act on sensitive information about people’s identities, locations, habits, and desires,” the executive order notes.
The order, signed by Biden on Monday, encourages agencies to “use their full range of authorities” to protect consumers from threats to privacy and other risks from AI. While not explicitly directing agencies to regulate the technology in new ways, the executive order says agencies should “consider emphasizing or clarifying where existing regulations and guidance apply to AI,” and directs them to put safeguards on their own use of the technology.
The executive order builds on pledges made by the Justice Department, Consumer Financial Protection Bureau, and Equal Employment Opportunity Commission earlier this year to use their existing authorities to prevent unfair and deceptive practices.
It represents a “green light” for agencies to issue additional guidance on AI, including handling of user data, said Rebecca Engrav, a partner at Perkins Coie LLP.
“It would not surprise me to begin to see more statements, guidance about how AI should be used by companies subject to certain agencies’ remit,” she said.
One significant step in the White House plan is requiring federal agencies to evaluate how they procure, maintain, and use commercially available data that contains personally identifiable information. The executive order follows a recent slew of agency actions trying to rein in potential harms facilitated by the data broker industry. Notably, data used for national security purposes is exempted.
The executive order also directs federal agencies to rethink their existing processes for examining the privacy impacts of incorporating AI.
Within 180 days of the order, the Justice Department must issue a request for information on potential updates to the E-Government Act of 2002, which dictates the management and governance of online government services. Among other things, the request will examine how to improve privacy impact assessments conducted by federal agencies to better mitigate privacy harms—including those posed by AI.
“Many of the privacy components of this order are going to improve privacy protections for Americans and at the same time lay down those human rights safeguards on the use of AI, so it’s serving two purposes,” said John Davisson, director of litigation at the nonprofit Electronic Privacy Information Center.
The order will also have broad implications for how federal agencies develop and deploy privacy-enhancing technologies, such as encryption.
Congressional Action?
Davisson said that while there are limits on what the executive order can do, the frameworks it sets up show the administration is aligned with the underlying principles he thinks should be included in comprehensive federal privacy legislation.
“It’s not the end-all-be-all solution but it’s a major step forward and it should be a kick in the seat for Congress to move ahead with comprehensive privacy legislation,” he said.
Biden renewed his call for Congress to pass bipartisan data privacy legislation during his signing of the executive order. Comprehensive privacy legislation stagnated in the last Congress, and piecemeal bills haven’t advanced this year. Senate Majority Leader Chuck Schumer (D-N.Y.) is leading a series of forums to develop a bipartisan approach to AI; he said Monday that legislative movement is likely still months away.
Support from the White House is “important” to push Congress toward action, said Brandon Pugh, director of cybersecurity and emerging threats at the R Street Institute. “I hope the follow-on act is that the White House continues to keep the momentum and the pressure on congressional leaders to act on that.”
Engrav, who co-chairs Perkins Coie’s Artificial Intelligence & Machine Learning industry group, said while there isn’t a comprehensive federal privacy law, AI is still governed by the existing federal and state data security laws, even if they are “patchwork.”
“Privacy, of all issues that AI might present, is not really one where there’s a gap in coverage of laws,” Engrav said. “While there may be plenty of reasons why it would help the ecosystem in America to have a federal privacy statute in general, I don’t see AI itself as being the reason why we need a federal privacy statute.”
Others said AI adds urgency to the need for a comprehensive privacy law, and a broader suite of legislation to rein in the harms of the fast-growing technology.
“This executive order is a real step forward, but we must not allow it to be the only step,” Maya Wiley, president and CEO of The Leadership Conference on Civil and Human Rights, said in a statement. “We need comprehensive civil rights legislation that takes into account unintentional, group-based harms, ensures privacy, enables the public to have meaningful ways to see and know that AI is safe for use, and ensures all consumers have real protections.”
Companies will also be looking for more guidance from Congress, said Daren Orzechowski, partner at Allen & Overy LLP.
“A lot of the executive order is focusing on where the government has leverage,” said Orzechowski, who is a member of the firm’s AI advisory group. “There’s a huge part of the market that is going to wait and see if Congress is going to pass something.”