Madison Square Garden Entertainment Corp. recently drew considerable criticism over its use of facial recognition technology to identify opposing counsel and prevent their access to its properties and entertainment venues.
The entertainment conglomerate justifies its policy by saying it doesn’t exclude attorneys “based on the protected classes identified in state and federal civil rights laws.”
While the policy may follow the letter of New York City's law, it evades its spirit by using scanning software to single out attorneys at law firms litigating against the company, including attorneys not directly involved in that litigation.
This has spurred an important debate over which has more merit: a corporation’s property rights or an attorney’s privacy and First Amendment rights.
Inequity and Bias
The company that operates Madison Square Garden and Radio City Music Hall claims it employs a narrowly tailored system, "trained" on images of attorneys taken from the websites of the targeted law firms.
But the technology still can't guarantee freedom from algorithmic discrimination. Used in this manner, it could chill both free speech and access to justice if attorneys decline to represent clients in order to avoid falling within the policy's reach.
The conglomerate’s policy fails to account for the inequities and algorithmic biases that may be inherent in that technology.
Facial recognition systems that aren’t adequately trained with diverse data can lead to algorithmic biases. Unfairness then seeps into a wide range of applications—from seeking employment or securing a loan to being misidentified for a crime.
Without regulation, such dystopian applications of the technology may proliferate. The ability to create and technologically enforce an enemies list is inimical to a healthy democracy.
In 2019, Congress was poised to take action on the Commercial Facial Recognition Privacy Act, which would have prohibited entities from collecting, processing, storing, or controlling facial recognition data unless they provided documentation explaining the technology's capabilities and limitations and obtained end users' explicit affirmative consent after notice about how facial-recognition data would be used.
The bill stalled once the pandemic diverted congressional attention to other matters. While federal legislation remains pending, New York advanced its own regulation to curb potential discriminatory practices.
The New York State Bar Association is also studying the impact of facial recognition software and has formed a working group to investigate the technology’s effect on a lawyer’s ability to represent clients without fear of retribution.
Regulation is necessary to strike a balance between the rights of the few and the public good. Such regulation can proscribe the deliberate targeting of an individual’s right to free speech, privacy, access to justice, and personal safety.
Facial recognition owes its genesis to the development of computer vision in the 1960s. In the early 1990s, two US government agencies inaugurated a facial recognition program to encourage development in the commercial market. The project created a database of 2,413 still facial images representing 856 people.
A sample set that small and homogeneous invites algorithmic bias. For instance, when Google launched facial recognition in its photo application, it miscategorized a Black couple in a White person's photo album as "gorillas."
In 2019, the National Institute of Standards and Technology studied several facial recognition algorithms and found they misidentified Black faces at rates 10 to 100 times higher than White faces. While the technology may have evolved over the last few years, it clearly isn't infallible.
Benefits and Risks
Proponents note that the technology has beneficial applications. For example, doctors have used the technology to diagnose certain diseases detectable through facial expressions.
The Department of Homeland Security deploys the technology at airports to enforce the No Fly List. Mobile phone manufacturers use the technology for biometric access to smartphone devices. A professor has used facial recognition to detect boredom among his students.
Though widely deployed across a variety of disciplines and industries, facial recognition technology remains largely unregulated. As a result, these systems have the potential to impinge on the constitutional right to privacy, surveilling individuals’ movements without their knowledge or consent.
These systems have also been deployed for nefarious purposes, such as online harassment and cyber-stalking. Hackers have manipulated facial recognition to commit fraud and identity theft. Law enforcement has used these systems to target protesters, posing a threat to the constitutional right to peaceful assembly.
Increasing Private-Sector Use
In the wake of George Floyd’s murder in 2020, several states and municipalities banned or suspended the use of facial recognition systems.
A handful of large technology companies, including IBM Corp., Amazon.com Inc., and Microsoft Corp., also announced temporary bans on sales to police departments. Further, Clearview AI announced that it wouldn't offer the technology to certain countries known to abuse it.
While law enforcement use has been curtailed because of these actions, facial recognition technology has exploded in the private sector.
Leaders in the field such as NEC Corp. and Clearview AI have deployed their systems worldwide to various groups, including immigration control agencies, banks, entertainment and conference venues, and stadiums.
Increasing, unregulated use by private companies underscores the urgency of federal and state legislative action. Without legal guardrails, the technology's benign uses may be outweighed by more ominous applications that impinge on privacy rights and lead to discrimination.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.
Vivian Wesson is executive vice president, corporate secretary, and general counsel for the Board of Pensions of the Presbyterian Church (U.S.A.). She also chairs the New York State Bar Association’s Committee on Attorney Professionalism.