- Settlement signals agency’s approach to biometric technology
- Lawyers say companies should audit their systems for bias
The Federal Trade Commission sent a warning to companies using AI systems with its move prohibiting Rite Aid Corp. from using facial recognition surveillance for five years.
Experts say the settlement, outlined in a proposed stipulated order filed with the FTC’s complaint Tuesday in the US District Court for the Eastern District of Pennsylvania, marks a milestone in artificial intelligence-related enforcement by the consumer protection agency.
“I want industry to understand that this Order is a baseline for what a comprehensive algorithmic fairness program should look like,” Commissioner Alvaro Bedoya said in a statement accompanying it. “In the future, companies that violate the law when using these systems should be ready to accept the appointment of an independent assessor to ensure compliance.”
The move follows previous warnings the FTC issued about both algorithmic fairness and misusing biometric technologies.
“No one should walk away from this settlement thinking that this Commission affirmatively supports the use of biometric surveillance in commercial settings,” Bedoya wrote.
The proposed settlement would ban Rite Aid from using facial recognition surveillance for five years and requires it to delete all biometric data collected in connection with its surveillance and to implement new safeguards.
Going forward, Rite Aid would have to notify consumers if they’re enrolled in any facial-recognition system, tell anyone when it takes action against them based on the system, and allow them to contest the action in a timely manner. It would mandate that Rite Aid discontinue use of any automated biometric systems in the future “if it cannot control potential risks to consumers.”
The FTC’s action “signals to companies you really have to do some due diligence when you’re adopting an AI system,” said Ben Winters, senior counsel at the Electronic Privacy Information Center, a privacy advocacy group.
The settlement also addresses allegations that Rite Aid violated a 2010 FTC security order by requiring the company to implement a stronger information security program.
The stipulated order will go into effect once approved by the federal district court as well as the court overseeing Rite Aid’s bankruptcy proceedings.
While Rite Aid agreed to the settlement, the drugstore chain took issue with the agency’s assessment of its use of the AI-based technology.
“We respect the FTC’s inquiry and are aligned with the agency’s mission to protect consumer privacy,” Rite Aid said in a statement. “However, we fundamentally disagree with the facial recognition allegations in the agency’s complaint.”
“Rite Aid stopped using the technology in this small group of stores more than three years ago, before the FTC’s investigation regarding the Company’s use of the technology began,” the company said.
AI Harms
The settlement’s requirements, which include ongoing audits, hew closely to the National Institute of Standards and Technology’s voluntary AI risk management framework that many companies have used in developing their AI governance practices. The FTC action signals to companies that failing to meet those standards could land them in enforcement trouble, attorneys said.
“What’s different here is the rubber is meeting the road,” said Jevan Hutson, an associate at Hintze Law LLC. “We are now seeing for the first time a much more detailed layout of expectations for organizations to facilitate if they are operating AI and machine-learning tools that significantly impact or reasonably harm consumers.”
That should give organizations pause as they race to adopt new technologies, said Winters.
“You really have to think carefully about if using a system like that is worth it, given the actual harm risks, but also clearly the risks of regulatory intervention,” he said. “It’s not worth the potential shoplifting benefit if you’re going to have to pay millions of dollars and stop your use.”
The FTC’s complaint alleges that Rite Aid failed to mitigate harms from misidentifying consumers using facial recognition between 2012 and 2020, including “heightened risks to certain consumers because of their race or gender.” The company also failed to test the accuracy of its facial recognition system before and after deploying it or to train employees how to use it, the complaint says.
The regulator noted that the technology was largely deployed in “plurality non-White areas” and that “Black, Asian, Latino, and women consumers” were at the greatest risk of surveillance and misidentification. As a result of thousands of false matches, customers—including an 11-year-old girl—were stopped and searched, according to the complaint.
“These are very high-risk technologies. And when you’re using such a high-risk technology, you need to make sure they’re being fairly used,” said Tatiana Rice, senior counsel at the Future of Privacy Forum.
Rice noted that even off-the-shelf technologies that have been tested for bias and accuracy are continuously learning and can be affected by new data input by companies. As a result, companies need to engage in continuous testing, she said.
Future AI Enforcement
The complaint against Rite Aid marks the first time the FTC has used what’s known as its unfairness authority for enforcement action in relation to an AI system—an area for which privacy experts and advocacy groups have called on the agency to initiate rulemaking.
The Rite Aid case could position the agency to more explicitly tackle AI system bias in the future, Rice said.
Issuing the order is just another way that the FTC is lapping Congress on AI regulatory issues, said Winters.
“But they can only do so much,” he said. “They can only take it case by case. Legislation is what’s needed to codify some of these things.”
Federal lawmakers have introduced legislation on algorithmic bias, but no bills have reached the House or Senate floor. Meanwhile, state regulators in Colorado, California, and elsewhere have pushed for rules on when companies are required to notify customers about the use of AI, similar to the stipulated order’s requirements.
In lieu of federal law, the FTC has explored a rulemaking process to regulate commercial surveillance. The Rite Aid settlement could hint at how the agency may answer some of the questions about fairness and accuracy obligations raised in its August 2022 advance notice of proposed rulemaking, said Hutson.
The action also highlights the agency’s growing interest in another tool in its belt: model deletion. The order marks the sixth time in recent history, and the third in the past year, that the FTC has required a company to delete ill-gotten data used in algorithms and associated products.
“It is now clear going into the new year that the FTC is going to incorporate model deletion into privacy, security, and AI-related orders,” Hutson said. “Reasonable algorithmic fairness practices are going to be important for organizations to undertake if they wish to avoid model deletion.”
The case is FTC v. Rite Aid Corp., E.D. Pa., No. 2:23-cv-5023, complaint filed 12/19/23.
