The contracting community is split over facial recognition technology, even as the federal government appears willing to keep paying for it.
The government uses private sector facial recognition tools for activities like streamlining entry at security checkpoints or identifying suspects during criminal investigations.
Two major contractors, Idemia and Clearview AI, say they plan to keep offering facial recognition, while Microsoft announced last month that it would limit its suite of artificial intelligence offerings.
Idemia partners with the Department of Homeland Security to identify and approve travelers at airports. It holds more unclassified facial recognition contract revenue than any other government vendor.
Clearview AI offers a searchable database of faces to agencies like the Department of Defense and Homeland Security as well as to local police forces.
Microsoft, which participates in the federal marketplace mainly to sell IT services to customers like the Departments of Defense and Homeland Security, is pushing for responsible use of facial recognition. The tech giant called for “government regulation and responsible industry measures” in 2018, and in 2020 it barred police from using its facial recognition tool.
Last month, Microsoft said it would discontinue facial analysis tools that purport to detect a person’s emotional state, citing inaccuracies and discriminatory applications. It will grant access to its remaining facial recognition tools only for a narrow set of approved use cases.
Government Is a Big Buyer
The government has awarded $76 million worth of unclassified facial recognition-related contracts in the last two decades, according to Bloomberg Government data—and that doesn’t cover the likely trove of classified contract dollars involving federal law enforcement or the military.
The Transportation Security Administration exemplifies procurement officers’ attitude toward the technology. “While our use of facial recognition technology remains in the pilot phase of development, it seems to be showing promise as a method of accurately verifying identity,” an agency spokesperson said in a statement to Bloomberg Government. “TSA has no plans to limit our current use of facial recognition.”
Advocates say that if the government is to continue buying facial recognition tools from the private sector, it first needs to institute more regulations on where the technology can be applied and establish better standards for evaluating it. Some also want new laws putting limits on the technology.
The AI industry suffers from a lack of uniformity and coordination, said Emrah Gultekin, CEO and co-founder of facial recognition contractor Chooch. His company’s facial recognition helps military analysts sort through millions of images from drones and satellites.
AI standards require constant upgrades to keep up with changes in the tech. Because standards are inconsistent across the industry, the government is “reliant on what commercial suppliers say their software can do,” said Ellen Broad, associate professor at the Australian National University’s School of Cybernetics, in an interview.
“There needs to be real scrutiny of the technologies being used and clear rules and mechanisms in place about how they’re used,” she said.
Contractors maintain that how the tool is used matters most. “If you have the proper use case and the proper controls around the technology, then you can justify it in a positive way,” Clearview AI CEO and co-founder Hoan Ton-That said in an interview.
Law Enforcement Is Most Dicey
Critics point to law enforcement as one of the most problematic uses of facial recognition. Gultekin conceded that law enforcement applications can “get very hairy very quickly.”
Facial recognition by police “is most often deployed in communities targeting Black and Brown people in ways that allow police surveillance of Black and Brown communities,” Caitlin Seeley George, a director at the digital rights advocacy group Fight for the Future, said in an interview.
The systems use tools like mugshot databases, “which are disproportionately filled with Black and Brown faces, as sources of comparison,” she said.
Much depends on how the system is set up. Ton-That said Clearview’s algorithms have never led to a wrongful arrest. The database doesn’t include a percentage match for accuracy, so officers can’t claim a near-certain match as evidence in court. He countered that facial recognition can offer “a fairer system overall,” since suspects’ defense teams can use the technology to identify witnesses or corroborate testimony.
There are applications that earn approval even from critics. Broad said facial recognition could be used so autonomous vehicles can distinguish human faces from objects, or to help people with visual impairments identify visitors in their homes.
Building Datasets
Datasets are the biggest obstacle to building workable, unbiased facial recognition tools. When human biases are encoded into machine learning models, the models fail. For example, an AI tool trained exclusively on White faces can’t accurately recognize people of color.
“The technology is inherently biased, both in its current algorithms, which we know are worse at identifying Black and Brown faces, faces of women, gender nonconforming people, basically anyone who’s not a White male,” Seeley George said.
Companies can reduce bias by introducing more data into their datasets, but that can cause its own problems.
Ton-That touts Clearview AI’s facial recognition search engine, which engineers trained on 70 million images; typical models use just five million faces. As the company added more data, engineers saw bias and inaccuracy decrease.
“To collect more data, it’s also a pitfall because you’re basically collecting other people’s information,” Gultekin said. “It’s this cycle that you need more data to lessen the bias, but you also need to create guardrails to make sure that we’re all working through a very similar system in order to deploy these computer vision models.”
Some companies opt not to build datasets from publicly scraped images at all.
The practice is “not something that we will vouch for,” Teresa Wu, vice president of innovation and client engagement at Idemia, told Bloomberg Government. “It’s not ethical. You have no consent of the subject and it’s not legally obtained data.”
Feds Tread Lightly
There is no federal legislation guiding the development of facial recognition technology, despite the myriad concerns around privacy and bias. Some cities and states have passed bans or rules governing its use, like Illinois’s Biometric Information Privacy Act, which creates standards for companies regarding the collection of biometric data.
The most direct attempt by agencies to guide private industry on biometric identification standards comes from the National Institute of Standards and Technology, which has worked with the industry since the 1990s. NIST conducts the Face Recognition Vendor Test, which rates Clearview AI and Idemia at the top of the industry with close to 100 percent accuracy.
Lawmakers have since called for additional standards and regulations.
Congress failed last year to advance a bill.