Contractors anxious to understand how federal agencies will rate their artificial intelligence-based tools can find the next signpost for this evolving industry in a new facial recognition standard that sets higher expectations for image quality.
Wide adoption of the new standard across government will change how vendors develop tools aimed at federal customers, requiring a higher level of accuracy and potentially smoothing out problems with bias embedded in AI tech.
The National Institute of Standards and Technology is collaborating with the Department of Homeland Security to develop a “vendor-agnostic assessment tool for determining how well an image would be useful for biometric identification purposes,” a DHS spokesperson said in an email.
U.S. Customs and Border Protection and the Transportation Security Administration aim to use face scans to verify travelers’ identities and are trying to assure the public the effort will make travel safer and more efficient.
NIST computer scientist Patrick Grother said work on the standard will conclude by the end of 2023.
The Face Image Quality Standard will include a number of requirements, such as appropriate lighting, to ensure a front-facing photograph passes muster for use in a facial recognition model.
Ensuring photos are properly lit could go a long way toward enhancing companies’ ability to match and recognize Black and Brown faces. Facial recognition tools have historically underperformed with subjects of color.
“One of the well-known problems with some face photographs is underexposure and overexposure,” Grother said. “Such photos are linked to demographic properties, phenotypic properties of skin. Face recognition engines ultimately can fail with grossly overexposed or grossly underexposed photos.”
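Grother’s point about exposure is the kind of property an automated image-quality check can flag before a photo ever reaches a matching engine. The Python sketch below is a minimal illustration of that idea only: it counts how many pixels sit near pure black or pure white in a photo’s luminance histogram. The thresholds, the 5% cutoff, and the file name are invented for illustration and are not drawn from the NIST standard.

```python
# Illustrative sketch of a crude exposure check for a face photo.
# Thresholds and cutoffs below are assumptions for demonstration,
# not values taken from the NIST Face Image Quality Standard.
import numpy as np
from PIL import Image

def exposure_flags(path, dark_thresh=30, bright_thresh=225, max_fraction=0.05):
    """Flag a photo as under- or overexposed if too many pixels sit near
    the extremes of the luminance range."""
    gray = np.asarray(Image.open(path).convert("L"), dtype=np.float32)
    total = gray.size
    dark_fraction = np.count_nonzero(gray <= dark_thresh) / total
    bright_fraction = np.count_nonzero(gray >= bright_thresh) / total
    return {
        "underexposed": dark_fraction > max_fraction,
        "overexposed": bright_fraction > max_fraction,
        "dark_fraction": round(dark_fraction, 3),
        "bright_fraction": round(bright_fraction, 3),
    }

# Example (hypothetical file): flag a submitted travel photo before matching.
print(exposure_flags("passport_photo.jpg"))
```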
Standards as Evaluation Criteria
DHS said the standard will “drive the development and implementation of best practices for face image capture,” but the agency would not specify whether the standard would be tied to contract evaluation criteria.
Donnie Scott, CEO of Idemia, which sells facial recognition and other biometric identification programs, said NIST standards should be directly linked to which companies receive contracts.
“Very few jurisdictions have tied any result of a NIST evaluation to a policy, to a buying criteria, to an evaluation criteria, or to its use,” Scott said. “The government has done a great job with NIST as a body that can effectively evaluate technologies. And it’s almost stopped there.”
NIST standards, like the in-development Face Image Quality Standard and the existing Facial Recognition Vendor Test, “can be leveraged in government procurements as a way to make sure that we have a common bar and threshold set that the algorithms chosen, the providers used, and how the technologies were developed was responsible, ethical, highly effective, and very useful,” Scott told Bloomberg Government.
Vendors’ technology could be rated against relevant NIST standards on a pass-fail basis in qualifying for a government contract, he said. Alternatively, agencies could use a score derived from a standard to determine whether a contractor is rated as exceptional, very good, satisfactory, marginal, or unsatisfactory.
“We’ve yet to see the standards translate into an objective measure in government proposals,” Scott said.
‘A Still-Maturing Industry’
Government use of artificial intelligence continues to skyrocket, while regulation and standardization lag.
Federal spending on AI is forecast to reach $1.4 billion by the end of fiscal year 2022, according to a Bloomberg Government analysis.
Meanwhile, regulatory bodies from Congress to the Federal Trade Commission to NIST have been racing to keep up, establishing new standards, strategies, and policy prescriptions to shape the use of AI.
NIST also plans to publish a first full version of its AI Risk Management Framework and companion playbook in January to help designers in the public and private sectors create AI products that are accurate, explainable, reliable, secure, and unbiased.
“We don’t yet have clear mechanisms to assure ourselves that what is being offered does what it says on the tin,” Ellen Broad, associate professor at the Australian National University’s School of Cybernetics, said.
AI ethicists continue to warn that machine learning models are often built using data scraped from the internet, which leads to bias and inaccuracy; that AI can be used for mass surveillance programs that impinge on personal privacy; and that the tech can be deployed as a catch-all solution rather than for narrow, well-defined use cases.
“We don’t have consistent standards by which we can measure and evaluate the design of facial recognition software or the ongoing performance of it,” Broad said. “We’re really reliant on what commercial suppliers say their software can do. And so, to me, that’s a sign of a still-maturing industry.”