Daily Labor Report®

Border Agency Eye Scan Flaws Expose Government AI Risks: Report

Feb. 18, 2020, 11:00 AM

U.S. Customs and Border Protection ditched a plan to use iris scanning to track people coming in and out of the country after a federal contractor couldn’t explain flaws in the technology. The agency switched gears instead to facial recognition, a move that independent reviewers say highlights the risks and opportunities that come with the growing use of artificial intelligence in the federal government.

The border agency wasn’t able to fully understand what went wrong with the scans, meant to use unique patterns in travelers’ irises to confirm their identities against their identification documents, according to internal agency records. That’s because the unnamed contractor that created the system didn’t want to divulge proprietary information.
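The report doesn’t say how the system encoded irises, but iris matchers commonly follow the Daugman-style approach: each iris is reduced to a binary code, and two codes are compared by the fraction of valid bits on which they disagree. A minimal illustrative sketch in Python (the code length, masks, and threshold below are hypothetical, not CBP’s):

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fraction of mutually valid bits on which the two iris codes disagree."""
    valid = mask_a & mask_b          # ignore bits occluded by eyelids or glare
    if valid.sum() == 0:
        return 1.0                   # nothing comparable; treat as a non-match
    return np.count_nonzero((code_a ^ code_b) & valid) / valid.sum()

# Hypothetical enrollment vs. live capture: 2048-bit codes with validity masks.
rng = np.random.default_rng(0)
enrolled = rng.integers(0, 2, 2048, dtype=np.uint8)
live = enrolled.copy()
live[rng.choice(2048, 100, replace=False)] ^= 1   # ~5% sensor noise
mask = np.ones(2048, dtype=np.uint8)

THRESHOLD = 0.32   # illustrative decision threshold, not CBP's
dist = hamming_distance(enrolled, live, mask, mask)
print(f"distance={dist:.3f} -> {'match' if dist < THRESHOLD else 'no match'}")
```

Diagnosing a misfiring system of this kind requires access to exactly these internals: the encoding, the masking, and the threshold, all of which a contractor can withhold as proprietary.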

“If CBP fails to understand the flaws in its own technology, it can expose itself to known vulnerabilities and fail to detect adversarial attacks,” researchers said in a first-of-its-kind examination of federal government use of artificial intelligence set for release Tuesday. “More broadly, agencies that lack access to a contractor’s proprietary technology may be unable to troubleshoot and adapt their own systems.”

The border agency is now implementing facial recognition technology to scan passengers at airports, seaports, and land entry points and compare the images against known photographs to help confirm travelers’ identities and root out visa fraud and overstays. But the researchers found that the agency’s continued reliance on contractors for artificial intelligence technology poses security risks.
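The report likewise doesn’t detail CBP’s matching pipeline, but face verification systems of this kind typically convert each image into a numeric embedding and declare a match when the live capture’s embedding is close enough to the document photo’s. A minimal sketch, assuming hypothetical precomputed embeddings and an illustrative threshold (neither reflects CBP’s actual system):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(live_embedding, gallery_embeddings, threshold=0.6):
    """True if the live capture matches any known photo.

    The threshold is illustrative; operational systems tune it to trade
    false matches against false non-matches.
    """
    return any(cosine_similarity(live_embedding, g) >= threshold
               for g in gallery_embeddings)

# Hypothetical 512-dimensional embeddings, standing in for the output of a
# face-recognition model run on the live photo and the passport photo.
rng = np.random.default_rng(1)
passport = rng.normal(size=512)
live = passport + rng.normal(scale=0.2, size=512)   # same face, capture noise
print(verify(live, [passport]))                     # True under this toy setup
```

The threshold is exactly the kind of tunable, contractor-set parameter the researchers flag: shifting it trades false matches against false non-matches, and an agency that can’t inspect it can’t fully explain its own decisions.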

“CBP’s use of facial recognition reflects a bigger concern with contractor-provided AI tools,” Stanford University professor David Freeman Engstrom, one of the report’s lead authors, told Bloomberg Law. “The algorithms are not just technically opaque. The companies that make them may also be able to use trade secret protections to block disclosure of their technical details in court,” which Engstrom said could make it difficult for agencies to explain how they’re using machine learning.

And while the report didn’t provide technical details about the iris scanning failures or the circumstances surrounding the testing, the researchers said the agency’s use of facial recognition technology still faces challenges in obtaining traveler consent as well as in areas like data privacy, security, and surveillance.

The report, commissioned by the Administrative Conference of the United States (ACUS), reviewed the use of AI at nearly 150 agencies across the federal government. Researchers at Stanford University and New York University explained how the federal government has employed the technology to do everything from catching securities fraudsters and tracking severe weather to combing through public comments on proposed regulations and developing statistics on workplace injuries.

Liability for use of artificial intelligence is among dozens of legal and political issues that federal agencies must address as they explore ways to harness advanced algorithms and machine learning to carry out their missions, the researchers concluded.

‘Basic Tensions’

Already, AI significantly assists agencies in two core tasks: enforcing regulatory mandates and adjudicating benefits and privileges, the report said. Machine learning is also being employed in regulatory analysis, personnel management, citizen engagement, and service delivery.

But there are downsides.

“Such programs raise privacy and security risks and reveal basic tensions between the goals of law enforcement and agency transparency,” the report said.

When public officials deny benefits to an individual or make decisions affecting the public’s rights, the law generally requires them to explain why an action was taken; yet many of the more advanced AI tools are not, by their structure, fully explainable, the report said.

Technologies associated with the burgeoning field of artificial intelligence aren’t well known to the public at large, and their adoption by the federal government is often not disclosed publicly. That makes them ripe for consideration by ACUS, which is made up of experts from the public and private sectors who recommend ways to promote efficiency, equity, and greater public participation in regulatory policy and agency-level administration.

For the report, researchers selected 142 of the government’s largest agencies, bureaus, and offices, excluding 21 military and intelligence organizations. Researchers documented 157 use cases across 64 agencies, and most of the AI uses evaluated were concentrated among a small number of agencies.

The Securities and Exchange Commission, for example, is using AI to police insider trading, but questions about how enforcement decisions are being made inevitably arise. Despite legal gray areas, many agencies are experimenting with AI, with nearly half of the agencies studied planning, piloting, or implementing such techniques.

The National Oceanic and Atmospheric Administration, for example, is using AI to refine high-impact weather tracking systems to improve decision-making in real time. The Transportation Security Administration is exploring the use of image recognition technology to screen passenger luggage for explosive devices. The Centers for Medicare and Medicaid Services is developing AI-based tools to predict instances of health care fraud. And the Department of Housing and Urban Development deployed a prototype chatbot to enable citizens to acquire information about rental assistance, agency programs, and procedures for filing civil rights complaints.

“Managed well, algorithmic governance tools can modernize public administration, promoting more efficient, accurate, and equitable forms of state action,” the report said. Managed poorly, government deployment of AI tools can increase undesirable opacity in public decision-making and heighten concerns about arbitrary government action and power, it said.

Legal Framework

AI also raises a number of legal concerns for agencies to navigate.

In one case noted in the report, government agencies considering adopting autonomous or self-driving vehicles face an uncertain regulatory future on issues such as tort liability and data privacy. The U.S. Postal Service has been testing AI applications for transporting mail and parcels since 2014, and now has two pilot-phase projects: autonomous delivery vehicles and autonomous long-haul trucks, the report said.

There is not yet a comprehensive federal regulatory framework for autonomous vehicles, so the full legal implications of the technology remain unclear, the report said. Although automated vehicles are predicted to reduce accident frequency, they will not eliminate collisions, so the most immediate legal implication relates to the Postal Service’s tort liability for vehicle accidents, according to the report.

Despite the legal risks, the federal government is investing heavily in AI. The White House is seeking to double spending across the government on AI and quantum information science research by fiscal 2022. The president’s budget request for fiscal 2021, released earlier this month, calls for an increase in nondefense spending on AI and quantum information science to $2 billion by fiscal 2022, from $1 billion in fiscal 2020. The aim is to maintain global leadership in research into quantum technologies that can process data millions of times faster than today’s supercomputers.

“In the face of intense global competition, the FY 2021 budget affirms the importance of technology leadership to America’s economic strength and national security,” Michael Kratsios, the U.S. chief technology officer, said in a statement on the proposal.

This summer, Kratsios said, the Office of Science and Technology Policy will issue a report showcasing total nondefense AI research and development funding enacted in fiscal 2020. This report will mark the first time an administration has publicly documented how much has been spent specifically on AI at individual agencies across the federal government.

With assistance from Rebecca Kern

To contact the reporters on this story: Michaela Ross in Washington at mross@bgov.com; Cheryl Bolen in Washington at cbolen@bgov.com

To contact the editors responsible for this story: John Lauinger at jlauinger@bloomberglaw.com; Chris Opfer at copfer@bloomberglaw.com
