Artificial intelligence (AI) and machine learning technology moved forward at a rapid pace in 2019, while the legal and regulatory framework in the United States continued to catch up. That gap will begin to narrow in 2020 as the federal government, along with states and municipalities, moves forward on regulatory approaches.
While private companies have been busy innovating in AI, the time for regulatory engagement is fast approaching. The year 2020 will be a crucial one for determining whether governments move toward a careful, risk-based approach to AI—acknowledging its many potential benefits—or whether AI experiences a “techlash” that pushes governments toward overregulation of the technology itself out of concern for its downsides.
The Key Issues for AI
AI is generally defined as technology that has the ability to learn to solve problems. Rather than being designed to solve a problem in a predetermined way, an AI algorithm is designed to learn as it goes. AI can be deployed in an enormously broad range of ways—for example, AI and machine learning can be used to predict health trends from large data sets and assess credit risk for individuals who lack traditional credit indicators. The potential to help consumers across a range of applications is enormous.
Policymakers and regulators have raised a number of potential issues with AI. One key issue is “explainability,” or the ability of the AI technology to describe the reason for an outcome in a way that is understandable for humans.
A further concern is that AI will result in certain harmful biases, such as a negative impact on a protected class.
Other key issues that have been raised are whether the use of AI should be transparent in certain circumstances, whether data sets are sufficiently robust to train AI systems, and when humans should be involved and held accountable for decisions an AI system makes.
Federal Government Action on AI in 2020
The federal government will be looking more closely at AI over the coming year, in multiple forums. The attention to AI across the government was kick-started in February 2019 by an executive order on AI, which directed governmental resources and attention to advancing AI.
First, the National Institute of Standards and Technology (NIST) at the U.S. Department of Commerce will continue its work on developing AI standards, likely focusing on explainability and bias early in the new year. In August 2019, NIST released a plan for federal engagement on AI standards, which called for greater industry engagement as NIST helped to develop standards with industry participation.
While NIST is not a regulatory body, its work on standards can directly inform regulatory approaches, for example by outlining standards that can be incorporated into government procurement contracts or that form the framework for regulatory expectations. NIST’s deeper dive into explainability and bias will likely involve public workshops and comment periods to facilitate industry engagement.
Second, the U.S. Office of Management and Budget (OMB), pursuant to the president’s executive order on AI, has released a draft memorandum on federal agencies’ regulatory approaches to AI, which is open for comment until March 13. The OMB memorandum provides guidance on how agencies should adopt regulations that affect AI, and such regulations can be subject to review by the OMB’s Office of Information and Regulatory Affairs (OIRA). That review gives OMB considerable power over AI regulatory approaches across the government.
Third, work on AI will continue in other parts of the government, including at individual agencies that are considering how AI impacts policy issues in their jurisdiction. For example, the U.S. Department of Housing and Urban Development is considering a proposal to exempt certain kinds of algorithmic decision making from liability under a disparate impact theory in fair housing law, under certain circumstances.
Additionally, the Federal Trade Commission (FTC) held a 2018 workshop on algorithmic decision-making and may release further guidance in addition to its 2016 Big Data report—and indeed, one commissioner has specifically called for the FTC to police “data abuses” like algorithmic bias under its enforcement authority.
State and Local Activity
States and localities have also been active in specific areas of AI regulation. For example, Illinois in 2019 passed the Artificial Intelligence Video Interview Act, which requires an employer that asks applicants to record video interviews, and that uses AI analysis of an applicant-submitted video in hiring decisions, to provide notice and obtain consent to the AI evaluation.
In the area of AI-enabled facial recognition, numerous municipalities, including San Francisco, Oakland, Calif., and Somerville, Mass., have banned the government use of facial recognition technology.
Another front to watch in 2020 is state privacy law. States have increasingly considered and passed their own privacy laws, resulting in a patchwork of privacy and data governance obligations on companies that do business nationally.
Illinois, Washington, and Texas, for example, have biometric-specific privacy laws that could impact the use of AI-powered facial recognition in certain circumstances.
And the California Consumer Privacy Act (CCPA), which went into effect on Jan. 1, 2020, also covers collection and use of biometric information, as well as the collection and use of data sets that can be used by AI more generally. Moreover, the California Attorney General has indicated that enforcement of the law will begin after July 1, 2020, and that the office may scrutinize business practices dating back to the law’s Jan. 1 effective date. Given the Attorney General’s enforcement discretion, and the broad impact of the law on companies’ data collection and governance obligations, it remains to be seen to what extent AI applications will be targeted in connection with privacy law enforcement.
AI technology has enormous potential benefits, from improving health outcomes to enhancing cybersecurity to making our lives more efficient, but concern about potential harmful effects will continue to drive scrutiny by regulators. This will be an important year for determining whether the regulatory approach veers toward overregulation, or instead focuses on practical approaches while allowing innovative uses of the technology to flourish.
This column does not necessarily reflect the opinion of The Bureau of National Affairs, Inc. or its owners.
Duane C. Pozza is a partner in Wiley Rein’s Telecom, Media & Technology; Privacy, Cyber & Data Governance; and Fintech practices. He advises clients on complex legal and regulatory issues involving emerging technology, consumer protection, and FTC enforcement and previously served as assistant director in the Division of Financial Practices at the FTC’s Bureau of Consumer Protection.