Bloomberg Law
Oct. 19, 2020, 8:00 AM

Bias in Artificial Intelligence: Is Your Bot Bigoted?

Heather J. Meeker
O'Melveny & Myers LLP
Amit Itai
O'Melveny & Myers LLP

The era of artificial intelligence has made the seemingly impossible possible. Want to create a model of how a new drug will affect cancer survival rates? AI can help. Want to predict who will pay bills and who will default? AI offers you a crystal ball.

However, using AI does not come without risk. Earlier this year, a group of African American plaintiffs filed a class action against YouTube over the way it allegedly employs AI. The plaintiffs claimed that YouTube's AI filtering tools automatically profile, censor, and discriminate against users' content based "wholly or in part" on race.

AI systems are only as good as they are designed to be, and unfortunately, our human biases can often creep into AI. Lawmakers, regulators, and civil activists have begun to focus on AI biases and how they might affect our society. As they do so, they have demanded that businesses be held accountable for their use of AI.

Put simply, bias in AI has now become a legal issue that companies must address.

What Is Bias?

We are just starting to understand how much of our human bias makes its way into AI. For example, the data used to train an AI system may be selected in a biased way, and a model trained on skewed data will reproduce that skew in its outputs. So even if companies building AI systems do not intend to discriminate, the tools they use can still have discriminatory outcomes. And because software controls so much of our day-to-day lives, the result is systemic bias that can be challenging to eradicate.

Being aware of the risk of bias and working to mitigate it should be a top priority for anyone designing automated systems. As companies work to develop AI systems that we can trust, it is critical to ensure that AI algorithms and systems can be easily audited.

If the AI is a "black box" that cannot be re-engineered, the only solution may be to throw it away and start over, discarding useful insights along with the biases, or to avoid the technology altogether.

But improving technological design is not enough. Companies that implement and rely on AI systems need to take proactive measures to avoid bias—whether intentional or not. Many companies today are adopting corporate AI compliance policies with a view to bias prevention and the proper use of AI.

Companies should also consider preparing an AI incident plan to address and mitigate any biases as soon as they are uncovered by internal or external stakeholders.

Legal Risks Due to AI Bias

AI bias is a legal issue that companies, board members, and investors must address now or face significant consequences later. In fact, state and federal legislators have already started introducing laws regarding the regulation of AI.

As AI-related legislation becomes prevalent and regulators become more active in this field, companies should expect increased scrutiny with respect to how they are deploying their AI as well as the direct and indirect effects of their AI systems.

Moreover, due to the broad implications of AI-related biases, companies should expect class actions to be filed as these laws are enacted across more states. Class action and civil rights lawyers may also attempt to use existing anti-discrimination laws to sue over AI biases, as YouTube recently learned.

The extensive deployment of AI by companies only exacerbates this risk, and, as a separate lawsuit filed by LGBT creators against YouTube demonstrates, where one AI-discrimination lawsuit is filed, more will soon follow.

Finally, in addition to the legal consequences, companies should be aware that AI bias incidents could have far-reaching PR implications.

Best Practices and Solutions

Companies that are deploying AI can take proactive measures and adopt tools to prevent and address potential biases. For larger companies, this usually includes adopting a written AI policy and an AI incident plan.

Companies can establish general guidelines to be shared with internal stakeholders such as R&D, management, legal, and marketing.

The AI policy should cover steps such as the following:

  • Evaluate the development processes used for AI systems and the system outputs;
  • Develop training programs for those engaged in AI development and data processing to raise awareness of inherent biases in the data and its collection;
  • Establish a diversity board that includes AI experts, people of color, women, and members of other underrepresented groups to examine the company's internal AI practices;
  • Implement an audit system that regularly checks the input and output data generated by the AI (a minimal sketch of such a check appears after this list);
  • Document key decision-making and participants in AI software development;
  • Develop AI tools that improve the traceability and explainability of AI decisions to provide real-time insights into how decisions are made;
  • Increase transparency to consumers regarding data and AI use; and
  • Be extra careful when deploying AI hiring tools. Ensure constant auditing as well as human review and intervention (“Human-in-the-loop”).
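
As an illustration of the audit item above, the following Python sketch shows one generic way to check a model's outputs for disparate impact across groups. The DataFrame, the column names ("group" and "selected"), and the 0.8 threshold (borrowed from the EEOC's "four-fifths" guideline for employment selection rates) are illustrative assumptions, not a prescribed methodology; any real audit should be designed with counsel and domain experts.

```python
# Minimal sketch of a periodic output audit for an AI system.
# Assumes a pandas DataFrame of model decisions with hypothetical columns:
#   "group"    - a protected attribute used only for auditing purposes
#   "selected" - 1 if the model approved/selected the person, 0 otherwise
import pandas as pd


def audit_selection_rates(decisions: pd.DataFrame) -> pd.DataFrame:
    """Compute the model's selection rate for each group."""
    return (
        decisions.groupby("group")["selected"]
        .mean()
        .rename("selection_rate")
        .reset_index()
    )


def disparate_impact_ratio(decisions: pd.DataFrame) -> float:
    """Ratio of the lowest to the highest group selection rate.

    A common rule of thumb (the EEOC "four-fifths" guideline) flags
    ratios below 0.8 for further human review.
    """
    rates = decisions.groupby("group")["selected"].mean()
    return rates.min() / rates.max()


if __name__ == "__main__":
    # Toy data for illustration only.
    sample = pd.DataFrame({
        "group": ["A", "A", "A", "B", "B", "B"],
        "selected": [1, 1, 0, 1, 0, 0],
    })
    print(audit_selection_rates(sample))
    ratio = disparate_impact_ratio(sample)
    print(f"Disparate impact ratio: {ratio:.2f}")
    if ratio < 0.8:
        print("Flag for human review (human-in-the-loop).")
```

A check like this does not prove or disprove bias on its own; it simply surfaces statistical disparities so that the human reviewers contemplated in the policy can investigate the cause.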

AI Incident Plan

In addition to preventive measures, companies should also establish an AI incident response plan that determines how the company will respond if an AI bias incident occurs.

A response plan should coordinate the gathering of facts, an assessment of whether bias has occurred, an assessment of any resulting harm, and the evaluation and implementation of remediation measures. The plan should coordinate the response of technology teams, outside evaluators (if appropriate), internal counsel, and corporate communications.

This column does not necessarily reflect the opinion of The Bureau of National Affairs, Inc. or its owners.

Author Information

Heather J. Meeker is a partner in O’Melveny’s Silicon Valley office and a leader of O’Melveny’s Technology Transactions group. She focuses on intellectual property and technology, with a particular focus on open source software. She is also a Founding Portfolio Partner at OSS Capital, which invests in early stage open source software companies.

Amit Itai is an associate in O’Melveny’s Silicon Valley office and a member of O’Melveny’s Data Security & Privacy group. He advises on privacy compliance, data breach laws, and the regulation of artificial intelligence. He also represents emerging technology companies in transactional and general corporate matters.
