Bloomberg Law
July 10, 2020, 8:01 AM

INSIGHT: Tech and Black Lives—Firms Can Mitigate Discrimination in Tech

Betsy Popken
Orrick Herrington & Sutcliffe
Kelly Newsome
Orrick Herrington & Sutcliffe

Recent police brutality and the resulting Black Lives Matter protests have shined a spotlight on discrimination against Black people and inequality in America and around the world. This has been compounded by the disparate impact of Covid-19 on communities of color.

Companies have responded by making statements in support of Black Lives Matter, increasing their focus on diversity, equity, and inclusion efforts, committing millions of dollars to racial and social justice initiatives, recognizing Juneteenth as a company holiday, improving their anti-racism education, updating their policies, and renaming products with racially offensive names. However, forward-looking companies will also meet the moment by ensuring that their business and operations—including the technologies they develop and sell—do not unintentionally contribute to discrimination or inequality.

This article discusses international human rights laws governing anti-discrimination and equality, examples of technologies that can disparately impact people based on race, color, and/or other protected characteristics, and framework steps that a company can take to assess and remediate any unintended disparate impacts of its technologies.

International Human Rights Law Protections Specific to Anti-Discrimination and Equality

International human rights law prohibits discrimination and protects the rights and freedoms of all people without regard to race—and the U.N. Guiding Principles on Business and Human Rights extend this responsibility to companies.

Specifically, Article 2 of the Universal Declaration of Human Rights provides in part that, “Everyone is entitled to all the rights and freedoms set forth in this Declaration, without distinction of any kind, such as race, colour, sex, language, religion, political or other opinion, national or social origin, property, birth or other status.”

Article 7 states that, “All are equal before the law and are entitled without any discrimination to equal protection of the law. All are entitled to equal protection against any discrimination in violation of this Declaration and against any incitement to such discrimination.”

Still other Articles protect equal dignity and rights (Art. 1), full equality to a fair and public hearing by an independent and impartial tribunal (Art. 10), equal rights to marriage (Art. 16), equal access to public services (Art. 21(2)), equal suffrage (Art. 21(3)), right to equal pay for equal work (Art. 23(2)), and equal access to education (Art. 26(1)). Jurisdictions around the world have codified these rights to varying degrees in legislation ranging from civil rights laws to employment laws.

While the responsibility to protect these human rights ultimately lies with states, the U.N. Guiding Principles on Business and Human Rights (UNGPs) instruct companies to “respect” human rights by “avoid[ing] infringing on the human rights of others and…address[ing] adverse human rights impacts with which they are involved.”

Therefore, in addition to jurisdiction-specific anti-discrimination and equality laws, this body of soft law can help guide companies seeking to ensure that their technologies do not have a disparate impact on people on the basis of their race, color, or another protected characteristic.

Technologies Can Have Disparate Impact on Basis of Race, Color, Other Protected Characteristics

While technologies can help advance human rights, they can also—often inadvertently—contribute to discrimination and inequality. For instance, facial recognition technology has been found to disproportionately impact people of color through misidentification and increased surveillance. In the wake of the most recent calls for police reform, several technology companies committed to stopping or pausing sales of their facial recognition technology to law enforcement.

Fintech companies have also begun exploring ways to identify and remove bias in decision-making algorithms that can disproportionately flag people of color as credit risks, preventing them from obtaining loans, purchasing homes, or finding employment.
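To make this concrete, one widely used screen for such bias is the "four-fifths rule" from U.S. employment-discrimination guidance: if a protected group's selection rate falls below 80% of the most favored group's rate, the outcome is conventionally flagged for further review. The sketch below is purely illustrative and is not drawn from any company's actual system; all group names and counts are hypothetical.

```python
# Illustrative sketch of the "four-fifths rule" screen for disparate impact.
# All applicant groups and counts below are hypothetical examples.

def selection_rate(approved: int, total: int) -> float:
    """Fraction of applicants in a group who received a favorable outcome."""
    return approved / total

def disparate_impact_ratio(protected_rate: float, reference_rate: float) -> float:
    """Ratio of the protected group's selection rate to the reference group's.
    Values below 0.8 are conventionally flagged for further review."""
    return protected_rate / reference_rate

# Hypothetical loan-approval counts for two applicant groups.
reference_rate = selection_rate(approved=120, total=200)  # 0.60
protected_rate = selection_rate(approved=80, total=200)   # 0.40

ratio = disparate_impact_ratio(protected_rate, reference_rate)
flagged = ratio < 0.8  # four-fifths threshold
```

A ratio this far below 0.8 would not itself prove discrimination, but it would signal that the algorithm's outcomes warrant the kind of closer assessment described below.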

And some experts warn that the use of Covid-19 contact tracing applications may provide a tool for law enforcement to track protesters, identify immigrants, and further police communities of color.

Steps to Assess, Remediate Unintentional Disparate Impacts

Although we have known about the potential disparate impacts of these and other technologies for years, with increased focus by company leaders globally on issues of discrimination, the time is ripe for companies to revisit whether and how their technologies might inadvertently contribute to discrimination or inequality.

A human rights impact assessment—increasingly common in the tech sector—would seek to “identify and assess any actual or potential adverse human rights impacts” of a given technology throughout the product lifecycle and “integrate the findings from [the] impact assessment[] across the relevant internal functions and processes” within the company (UNGPs 18-19).

This process should “involve meaningful consultation with potentially affected groups and other relevant stakeholders,” such as representatives from Black communities, anti-discrimination experts, human rights NGOs, and legal experts (UNGP 18). With the input of these stakeholders, companies can then build processes to monitor the technology’s impact on such rights moving forward (UNGP 20).

According to the UNGPs, if a company’s technologies are assessed to have contributed to discrimination, inequality, or another rights violation, that company should then remediate those harms—both by making affected persons whole to the extent possible and by ensuring such violations do not recur. The creation of grievance mechanisms through which individuals can report such harms can help companies both quickly identify and remediate human rights violations (UNGP 29).

For instance, in the wake of allegations of racial discrimination, one company committed to increasing the diversity of its workforce, hired former U.S. Attorney General Eric Holder to create a nondiscrimination policy, partnered with the NAACP to attract users from communities of color, and consulted rightsholders before rolling out new policies.

Efforts by tech companies to develop learning programs to educate their workforce about the responsible use of technology will also contribute in the mid- and long-term to mitigating any discriminatory impacts of technology.

This column does not necessarily reflect the opinion of The Bureau of National Affairs, Inc. or its owners.

Author Information

Betsy Popken is special counsel in the San Francisco office of Orrick and co-leads the firm’s global Business and Human Rights Practice, focusing on advising companies on best practices to uncover, address, and remediate human rights issues. She has an extensive background advising parties in international human rights matters, serves as a term member of the Council on Foreign Relations, and is currently a lecturer at Stanford Law School.

Kelly Newsome is an associate in Orrick’s New York office and a member of the firm’s white collar and government investigations practice. She advises clients on a broad range of anti-corruption compliance measures globally.
