Bloomberg Law
May 31, 2019, 8:01 AM

INSIGHT: AI Regulations Aim at Eliminating Bias

Ben Kelly
Baker McKenzie
Yoon Chae
Baker McKenzie

Artificial intelligence, machine learning technologies, and algorithms are used in a wide variety of applications across industries. Challenges associated with developing, procuring, and using AI are becoming more apparent, including the extent to which algorithms may be biased or discriminatory.

For example, in some states, machine learning tools are used to assist with pre-trial bail determinations by attempting to categorize each defendant as a “low,” “medium,” or “high” risk of failing to reappear in court or of committing another crime. Some oppose the use of such tools on the grounds that the algorithms, even when designed to remove racial and other biases, can end up amplifying them.

The use of such tools in the criminal justice context is a troubling example of how a biased algorithm could affect one’s civil rights. But algorithms are also used in a variety of other ways that could potentially result in discrimination. Those uses include determinations regarding school or job applications and decisions about who gets a mortgage, loan, credit card, or insurance (and on what terms).

One can quickly imagine the legal, moral, and ethical concerns with an algorithm that is—even unintentionally—biased or discriminatory.

Trend Toward Gov’t Regulation of Algorithms, Facial Recognition Tech

For these reasons, city, state, and federal governments are increasingly considering regulation of AI, with the stated goals of eliminating discrimination and providing public transparency.

For example, in early 2018, New York City enacted the first algorithm accountability law—“A Local Law in relation to automated decision systems used by agencies”—in the United States. It created a task force to recommend criteria for identifying automated decisions used by city agencies, a procedure for determining whether those automated decisions disproportionately impact protected groups, and a proposal for making information publicly available that would allow the public to “meaningfully assess” how those systems function.

Notably, the law only permits making technical information about the system publicly available “where appropriate” and expressly states that there is no requirement to disclose any “proprietary information.”

Similarly, earlier this year, an algorithm accountability bill was introduced in the Washington State House of Representatives. The bill seeks to “protect consumers, improve transparency, and create more market predictability” by establishing guidelines for government procurement and use of automated decision systems. (The original House Bill 1655 was introduced on Jan. 25, and the Substitute House Bill 1655 was introduced on Feb. 22.)

It provides that Washington’s chief information officer must create an inventory of all automated decision systems that are being used, developed, or procured by state agencies and provide to the legislature a report for each automated decision system stating, among other things, whether the system “has a known bias, or is untested for bias,” and whether “the automated decision system makes decisions affecting the constitutional or legal rights, duties, or privileges of any Washington resident.”

On a slightly different note, San Francisco is expected to become the first major American city to adopt a ban on the use of facial recognition software by the police and other agencies. At the time of this writing, the ordinance is expected to pass in days. Similar ordinances are being considered in Oakland, Calif., and Somerville, Mass., and can be expected in a growing number of cities.

Similarly, in Massachusetts, a bill (S.1385) was introduced earlier this year seeking to establish a moratorium on the use of face recognition and remote biometric surveillance systems by state and local law enforcement. Many of these efforts expressly reference the goal of eliminating gender and racial bias.

Although these local laws and bills generally apply only to use by government agencies, those agencies typically contract with technology companies to develop or procure the technologies at issue.

Federal Bills Not Limited to Government Agencies

In 2019, lawmakers in Congress introduced legislation that would regulate certain aspects of AI at the national level. Unlike the measures addressed above, however, these bills would not be limited to systems used by government agencies.

The Algorithmic Accountability Act of 2019 would require large companies to audit their algorithms for potential bias and discrimination and to submit impact assessments to FTC officials. The reports would have to address the accuracy, fairness, bias, discrimination, privacy, and security issues of any high-risk systems being used. The reports would also have to describe to the FTC how each system was developed and what data it uses.

As currently written, the act targets large companies by limiting its application to those with more than $50 million in gross annual revenue or those possessing personal information on more than 1 million consumers or consumer devices.
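The bill does not prescribe how such a bias audit would be performed. Purely as an illustrative sketch, using hypothetical data and one commonly cited fairness measure (the “disparate impact” ratio), a basic check might compare how often different protected groups receive a favorable outcome:

```python
# Illustrative only -- the bill specifies no audit method. Hypothetical data.
from collections import defaultdict

def favorable_rates(decisions):
    """decisions: iterable of (group_label, favorable: bool) pairs."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        favorable[group] += int(outcome)
    return {g: favorable[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest group's favorable rate to the highest; 1.0 means parity."""
    return min(rates.values()) / max(rates.values())

# Hypothetical audit data: (protected-class label, loan approved?)
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]

rates = favorable_rates(sample)
print(rates)                          # A ~= 0.67, B ~= 0.33
print(disparate_impact_ratio(rates))  # 0.5
```

A ratio well below 1.0, as in this toy example, would flag a disparity worth investigating and documenting in an impact assessment.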

The Commercial Facial Recognition Act of 2019, introduced in March (S. 847), would generally ban the commercial use of facial recognition technology to “identify or track an end user” without obtaining their consent.

It would also prohibit (1) using “the facial recognition technology to discriminate against an end user”; (2) repurposing “facial recognition data for a purpose that is different from those presented to the end user”; and (3) sharing “the facial recognition data with an unaffiliated third party without affirmative consent.” Under the bill, with some limited exceptions, facial recognition technology that is available as an online service must also be made available for independent third party testing “for accuracy and bias.”

Greater public interest in AI is also reflected in other recent government activities. In February, President Donald Trump signed an executive order titled Maintaining American Leadership in Artificial Intelligence, directing federal agencies to prioritize and fund investments in AI research, promotion, and training.

In addition, the Growing Artificial Intelligence Through Research (GrAITR) Act, introduced in April by Rep. Daniel Lipinski (D-Ill.), seeks to boost funding for AI research and development, but has not yet passed.

In 2017, the FUTURE of Artificial Intelligence Act (H.R. 4625, S. 2217) was introduced. Although never passed, it would have required the Department of Commerce to establish a new committee to advise on topics related to the development and implementation of AI.

Congress also considered the SELF DRIVE Act (H.R. 3388) and AV START Act (S. 1885), which were not enacted, but would have established a framework for a federal role in ensuring the safety of autonomous vehicles.

In 2016, the Obama administration released three reports directed to strategies and planning regarding the growing use of AI in the United States: (1) Preparing for the Future of Artificial Intelligence; (2) The National Artificial Intelligence Research and Development Strategic Plan; and (3) Artificial Intelligence, Automation, and the Economy.

What’s Next for Tech Companies Developing Algorithms?

While the specifics of the regulations that will ultimately be adopted are not yet certain, recent legislative trends show an increasing likelihood of government regulation of algorithms and facial recognition technologies that affect citizens and consumers. These regulations could have significant compliance and intellectual property implications for technology companies going forward.

First, such legislation could significantly affect technology companies’ disclosure obligations. Their attorneys will need to examine the regulations’ requirements, identify all covered algorithms and technologies, and prepare compliant reports for submission to government agencies.

Moreover, depending on how Congress and other legislative bodies ultimately balance public transparency against companies’ need to protect proprietary information and intellectual property, companies will need to consider carefully how to meet their disclosure obligations lawfully while safeguarding those assets.

In addition, counsel may need to partner with technical teams during the development or procurement of algorithms and technologies to, among other things, anticipate areas of concern or potential inquiry by regulators and make appropriate changes to the systems early.

This column does not necessarily reflect the opinion of The Bureau of National Affairs, Inc. or its owners.

Author Information

Ben Kelly is a partner in Baker McKenzie’s IP and Technology practice group. He manages litigation involving patent, trade secret, and commercial disputes with a focus on enforcing and protecting a wide range of technologies, particularly in the electrical, communications, networking, and video display fields.

Yoon Chae is an associate in Baker McKenzie’s IP and Technology practice group, where he focuses on patent litigation and IP advisory work. He also regularly writes and speaks about IP law and ethics issues relating to AI and autonomous systems.
