If there’s been a constant in the tech industry’s approach to the idea of regulating new products, it’s been the twin mantras of “leave us alone, we’ll take care of it” and “don’t make rules that will harm innovation.” But now there’s an exception: facial recognition. Governments around the world -- with the huge exception of China -- are also pondering rules to rein in this branch of artificial intelligence. Questions include whether any regulations would go as far as privacy advocates argue is needed -- and whether changes could be made before the technology is too ubiquitous to put back in the bottle.
1. Who in tech wants regulations for facial recognition?
Alphabet Inc. chief Sundar Pichai recently said it was important that government and regulation “tackles it sooner rather than later.” Microsoft Corp. was among the first of the tech giants to urge governments to push ahead with regulation of facial recognition, advocating for human review and oversight of the technology in some critical cases as a way to mitigate the risks of biased outcomes and intrusions into privacy and democratic freedoms. Amazon backed that call. But both companies have opposed bills they feel go too far.
2. Why does facial recognition stir up such reactions?
It doesn’t always. There are uses of it that have been seen as benign, from unlocking smartphones to finding missing children. It’s the locking people up and no-privacy-anywhere aspects that are alarming people. Law enforcement agencies across the world are rapidly adopting the technology, including as a real-time tool that they say helps them to quickly sweep large crowds for criminals. Then there’s the example of China. It’s been accused of human rights abuses in the province of Xinjiang, where hundreds of thousands of Uighurs, a mostly Muslim ethnic minority, have been identified via facial-recognition scanning systems and then locked up in clandestine camps, while millions more are monitored through the technology in their daily lives. In the U.S., reports that some police departments were using technology from a startup called Clearview AI have caused a backlash from privacy groups and lawmakers in recent weeks. The startup had scraped billions of photos from social media accounts without people’s consent, using them to build a massive database with the aim of helping law enforcement find suspects who have no criminal records.
3. Is the technology really that good?
No, and that’s part of the problem. Women and people with darker skin tones are particularly susceptible to false positives. A 2018 study from the Massachusetts Institute of Technology found that commercially available facial-analysis programs were inaccurate 35% of the time for darker-skinned women. In the hands of law enforcement, such a tool could potentially lead to incarcerating innocent people, civil society activists warn. A group of artificial intelligence researchers last April called on Amazon.com Inc. to stop selling its facial-recognition software Rekognition to police departments after other AI researchers found the company’s software had much higher error rates when predicting the gender of darker-skinned women in images, compared with lighter-skinned men -- findings Amazon contested. More recently, a study published in December 2019 by the National Institute of Standards and Technology found that most of the commercially available programs had higher rates of false identifications for blacks and Asians than for whites.
4. What do law-enforcement groups say?
They warn against banning a tool that can make societies safer. The chief information officer of the Belgian police, speaking at a Brussels conference in January, noted that society usually accepts that police officers can sometimes make mistakes, but that when it comes to facial recognition, “there’s zero tolerance.”
5. Who’s making all this stuff?
U.S. and Chinese companies dominate the industry. In the U.S., Microsoft and Amazon get a lot of the attention, but other players like NEC and startup Clearview AI hold many of the police and government contracts for facial-recognition software. Microsoft pitches its product with features including “person identification that matches an individual in your private repository of up to 1 million people” and “perceived emotion recognition that detects a range of facial expressions like happiness, contempt, neutrality, and fear.” Google, by contrast, has held back on developing a general-purpose application programming interface for the technology because of the risks involved. From China, Hangzhou Hikvision Digital Technology Co. and Zhejiang Dahua Technology Co. control one-third of the global market for video surveillance, according to a report by Deutsche Bank AG, even keeping watch over London’s subway system. Megvii’s Face++ is another heavyweight in the region. Those products power the kinds of real-time facial-recognition programs in use everywhere from Detroit and Chicago to London and Xinjiang.
6. Who’s doing what about it?
In Europe, the bloc’s strict privacy rules, the General Data Protection Regulation, already regulate the technology to some degree. Deploying remote biometric identification without consent from the targeted individual is not allowed under the EU’s rules, albeit with certain exceptions. The EU in the coming months will look to further define what those exceptions should be. The European Commission, the bloc’s executive body, is inviting comments from the public as part of a wider consultation on its plans to legislate artificial intelligence, to be unveiled Wednesday. Depending on the outcome of the public consultation, companies or agencies wishing to deploy such facial-recognition software in Europe could have to submit the software’s data sets to public authorities and undergo audits.
7. What about in the U.S.?
There’s more action on the state and local level than by the federal government. Bills involving facial recognition have been introduced in 11 state legislatures, ranging from a proposed ban on real-time use of the technology in Michigan to a requirement that stores in Vermont notify consumers if a facial recognition system is in use. A handful of cities across the country, including San Francisco and Cambridge, Mass., have banned the use of the technology by their police or other agencies altogether. At the federal level, congressional lawmakers from both major parties have discussed bills to force a moratorium on adoption of facial recognition systems by government agencies. But passage is far from a sure thing -- the Democratic chairman of the House Homeland Security Committee said that while he wants to add privacy protections, he would oppose a moratorium.
8. What kind of regulations have a realistic chance?
With prominent companies and some law enforcement officials calling for guidelines governing the use of facial-recognition technology, there’s a good chance they’ll be willing to accept at least some regulation. The question is at what level, and whether that would satisfy privacy and civil-society activists. Companies have pushed back against the idea of bans, which would mean forgoing lucrative contracts for the emerging technology. In the state of Washington, Microsoft supported a bill that would have required prominent notices in public places where the technology was being used. Another bill, opposed by the company, would have prohibited its use by the government unless officials proved the technology would not lead to discrimination. Amazon opposed both measures, neither of which passed. Governments might also lack an incentive to set strict rules, given that their national agencies have benefited from loose oversight of the technology. U.S. Customs and Border Protection is expanding its use of facial recognition to screen people entering the country, a government document showed last year.
The Reference Shelf
- QuickTakes about artificial intelligence and facial recognition.
- A Bloomberg News story about the legal requirements the EU is mulling for facial recognition technology.
- A 2019 Human Rights Watch report about the mass surveillance app used by police in Xinjiang, China.
- A New York Times story about the controversial facial recognition startup Clearview AI.
- A QuickTake video on facial recognition.
To contact the editors responsible for this story: John O’Neil, Grant Clark
© 2020 Bloomberg L.P. All rights reserved. Used with permission.