While facial recognition system (FRS) technology has developed rapidly, so have the legal issues surrounding its use. Companies deploying FRS have faced legal challenges and public criticism. Privacy remains the foremost legal issue, though governmental use of FRS has also exposed racial bias inherent in these systems, raising civil rights concerns.
Without a federal regulatory regime governing the use of FRS, the U.S. legal landscape resembles a patchwork of state laws, federal industry-specific laws, guidelines, and general best practices. While gaps remain in the protection of facial and other biometric information, recent developments suggest heightened scrutiny of, and growing interest in regulatory oversight of, the use of FRS.
Recently, both companies and governments have acted. On Nov. 2, 2021, the social media platform Facebook (now Meta Platforms Inc.) announced that it would end use of its FRS and delete more than a billion users’ facial recognition templates, citing privacy and regulatory challenges, ongoing government investigations, and a class-action lawsuit as factors in its decision.
Facebook’s decision comes amid rulings by governmental agencies and watchdogs in the U.K. (Nov. 29, 2021), Australia (Nov. 3, 2021), and Canada (Feb. 3, 2021), all of which ordered and/or fined the international facial recognition company Clearview AI, directing it to cease collecting images for its database and to destroy more than three billion collected images, on the grounds that the company breached privacy by collecting and sharing face-identification information without consent and by unfair means.
Most recently, on Dec. 16, 2021, France’s Commission Nationale de l’Informatique et des Libertés (CNIL) found that Clearview AI had breached the GDPR, Europe’s data protection law, giving the company two months to delete the collected photos and personal information and to stop “unlawful processing” of the data.
Current U.S. Legal Regime
Several states and localities have either enacted or proposed their own laws having varying degrees of scope and protection:
Governmental Use: Vermont and Virginia have enacted laws that generally ban governmental use of FRS, except for specific uses expressly authorized through new legislation (e.g., use in commercial airports for Virginia and use of FRS on drone-captured images when taken pursuant to a warrant in Vermont).
New York, California, New Hampshire, Oregon, and Utah have enacted laws that partially ban governmental use of FRS. Many of these laws protect against specific uses, such as the use of images for law enforcement, immigration enforcement, or traffic enforcement.
Further, major cities such as San Francisco; Boston; Portland, Ore.; and New Orleans have enacted full bans on governmental use of FRS.
By contrast, some states, such as Massachusetts, Washington, and Utah, and cities such as New York, Seattle, Pittsburgh, and Nashville, Tenn., have taken a more tempered approach by passing laws that regulate, rather than ban, governmental use of FRS.
Commercial Use: Illinois, Texas, and Washington have enacted biometrics laws that regulate the commercial use of FRS, while California, Colorado, and Virginia have general data privacy regulations in place for a broad range of commercial data collection, including through FRS.
A common theme among these state-level biometrics laws is that they require commercial operators to obtain consent before collection and to provide consumers with detailed privacy notices explaining what data is collected, how it is used and shared, and how consumers can exercise their personal data rights, including obtaining a copy of their data, deleting it, or opting out of any sale of the data.
These laws generally exempt operators from providing notice or obtaining consent when data is collected for security or fraud prevention.
Portland, Ore., and Baltimore are among the first cities to enact laws that generally ban the commercial use of FRS. Portland’s law restricts the use of FRS in places of public accommodation. Baltimore’s law offers broader protection against the use of FRS, but it is set to expire at the end of 2022, suggesting an experimental intent behind the law.
The Road Ahead for 2022
Use of FRS will admittedly continue to increase despite these recent developments. With data breaches, cybercrime, and new surreptitious uses of biometric information being revealed every day, we can expect some form of comprehensive federal legislation regulating the use of FRS within the next year or so. Its success, however, may largely depend on the midterm elections.
A number of legislative proposals in Congress demonstrate the growing demand for FRS regulations.
Notable proposals include the Facial Recognition and Biometric Technology Moratorium Act of 2021 and 2020, the Ethical Use of Facial Recognition Act of 2020, the Facial Recognition Technology Warrant Act of 2019, the FACE Protection Act of 2019, and the Commercial Facial Recognition Privacy Act of 2019.
A federal regulatory framework governing FRS must, at a minimum, offer privacy safeguards, consistency with constitutional protections, and transparency surrounding the specific uses of FRS. The overarching significance of consent and enhanced transparency for governing use of sensitive biometric data will continue to be supported by other legislative proposals on data privacy.
Enforcement will be a key issue in any regulation of FRS. Some federal proposals suggest enforcement through the removal of federal funding, or of eligibility for funding, for governmental and commercial operators. Other proposals would create private rights of action allowing affected citizens, individually or collectively, to bring civil actions for injunctive relief, declaratory relief, and/or monetary damages. Any legislation should include clear enforcement mechanisms to push companies to comply.
Until a comprehensive federal regime is established, we will continue living with this patchwork of state and local laws, which may help build consensus on the requirements for a federal regulatory regime. Meanwhile, companies using FRS should maintain privacy and data security measures that comply with state and local laws, and develop their own policies and procedures to protect sensitive information.
This column does not necessarily reflect the opinion of The Bureau of National Affairs, Inc. or its owners.
Palash Basu is a member of the Intellectual Property practice at Nixon Peabody LLP and focuses on artificial intelligence/machine learning technologies. He counsels clients on intellectual property procurement strategy, portfolio development, and patent litigation matters.
Jenny Holmes is the deputy leader of Nixon Peabody LLP’s Cybersecurity & Privacy team. She counsels clients on the development, implementation, and maintenance of efficient and effective cybersecurity programs.