Orrick’s Alexandra Stathopoulos assesses the fast-moving AI sector and how corporate counsel should review processes and workplace adoption to avoid risk and ensure effective implementation.
Business leaders and their legal departments are navigating a new workplace reconfigured by artificial intelligence. It’s impacting hiring, employer decision-making, and how employees do their jobs. In response, state and local governments are enacting a patchwork of AI-related laws, producing a dizzying landscape of regulation in the absence of a uniform federal approach.
These state and local laws have far-reaching consequences, affecting transparency and accountability of AI systems, potential bias and discrimination in AI algorithms, data privacy and protection, workforce displacement, compliance and reporting obligations, and the ethical use of AI. Existing legal frameworks related to intellectual property may also have profound implications for employee use of generative AI.
Employers and in-house counsel must prepare now for these fast-changing legal requirements so they can adapt quickly as new laws are enacted. Here is a roadmap.
Form an AI Working Group
To lay the groundwork for a robust AI compliance program that addresses differing laws and regulations across multiple locations, employers should establish a cross-functional AI working group to identify and track all the ways the company uses AI. With the involvement of in-house counsel, this group should gather information about existing AI systems and ones the company plans to deploy, assess their purpose and potential impact on employees, customers, and other stakeholders, consider whether there are AI tools the company should license, and decide whether to restrict or limit employees' use of other AI tools.
The working group should also assess the degree to which employees are independently deciding to use AI tools in the course and scope of their employment, and how they are using them. This initial step will help employers understand the specific functionalities and capabilities of each AI system, including any data inputs, algorithms, and decision-making processes, and will enable quicker responses to new AI laws and regulations.
Review, Update Policies and Practices
Conduct periodic reviews of AI-related policies and practices to identify any gaps or inconsistencies with applicable laws and regulations, with an eye towards changes on the horizon. This review may include:
- Policies and practices directly related to AI procurement, development, deployment, and monitoring processes.
- Contracts with third-party AI vendors, paying particular attention to data privacy and security provisions and any indemnification clauses.
- Policies and practices affected by the use of AI, such as confidentiality and IP assignment agreements, recruitment and hiring, job descriptions, codes of conduct, and data collection, storage, and retention.
- Whether it is appropriate for employees to use generative AI tools, in light of the need to commercially exploit, and to establish ownership of, any company assets created with the assistance of such tools.
- Insurance policies, to assess whether there is coverage for potential AI-related liabilities.

Employers and in-house counsel may need to create bespoke AI policies or addenda to comply with the unique laws and regulations of each jurisdiction.
Assess Data Collection, Storage, Usage
Review data collection, storage, and usage practices in relation to AI systems. Applicable laws and regulations may impose restrictions on the collection, use, and storage of certain types of data, such as biometric data (e.g., data gleaned from facial-recognition technology), sensitive personal information, or information relating to protected class characteristics.
Employers and their third-party vendors should obtain appropriate consent from applicants and employees for the collection and use of their data in relation to AI systems where required by applicable law (and there are many jurisdiction-specific nuances). In addition, review employee notices or disclosures to ensure they are clear and provide sufficient information about how the AI systems will be used, what data is collected, and how any automated decisions are made and reviewed.
Employers should also consider carrying out a privacy impact assessment to help determine any risks associated with the use of AI systems and identify ways to mitigate any identified risk.
Supervised by counsel, employers may also want to conduct periodic privileged reviews of AI algorithms to test for potential bias, both in terms of input data and output results. Taking proactive steps to ensure that the algorithms avoid discrimination or bias against protected groups is likely to put an employer in a better position to comply with the forthcoming wave of legislation focusing on this issue.
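As one illustration of what a privileged bias review might examine, the sketch below applies the "four-fifths rule," a common screening test for adverse impact drawn from the EEOC Uniform Guidelines, to hypothetical selection outcomes from an AI screening tool. The group names, numbers, and 0.8 threshold interpretation are illustrative assumptions, not a definitive audit methodology, and any real review should be designed with counsel.

```python
# Hypothetical sketch of a four-fifths-rule check on an AI screening
# tool's outcomes. Group labels and counts are illustrative assumptions.

def selection_rate(selected: int, total: int) -> float:
    """Fraction of applicants in a group that the tool selected."""
    return selected / total if total else 0.0

def adverse_impact_ratio(group_rate: float, reference_rate: float) -> float:
    """Ratio of a group's selection rate to the highest group's rate.
    Ratios below 0.8 are conventionally flagged for closer review."""
    return group_rate / reference_rate if reference_rate else 0.0

# Illustrative outcomes (assumed numbers, not real data).
outcomes = {
    "group_a": {"selected": 48, "total": 100},
    "group_b": {"selected": 30, "total": 100},
}

rates = {g: selection_rate(v["selected"], v["total"]) for g, v in outcomes.items()}
reference = max(rates.values())

for group, rate in rates.items():
    ratio = adverse_impact_ratio(rate, reference)
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} -> {flag}")
```

A check of this kind is only a first-pass screen; a flagged ratio signals the need for deeper statistical and legal analysis, not a conclusion of discrimination.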
Provide Employee Training and Awareness
Employees play a critical role in the use of AI in the workplace, and training and awareness programs can help an employer stay on top of changing laws and regulations. Employers should provide comprehensive training and awareness programs to educate employees on their responsibilities and obligations related to the use of AI systems.
This may include training on:

- when it is appropriate to use generative AI tools in the performance of duties;
- ethical use of AI;
- understanding the limitations and potential biases of AI algorithms, including "hallucinations" (false AI outputs);
- steps to take to preserve intellectual property rights in company assets created using AI tools;
- promoting transparency and accountability in using AI for decision-making; and
- the nuances of applicable laws and regulations that apply to the use of AI.
To reinforce efforts to establish transparency and accountability, employers should consider developing an incident response plan and training managers and HR on how to address any issues or complaints related to the use of AI, including procedures for escalating potential issues to in-house legal counsel, and investigating and addressing any allegations of discrimination, bias, or other potential legal or policy violations.
Stay Informed on Regulations
Track the evolving legal landscape. This includes monitoring proposed and enacted laws at the federal, state, and local levels that specifically address the use of AI in the workplace. Employers should keep a close eye on legislative developments, including proposed bills, amendments, and regulatory guidance, and seek legal counsel to interpret and analyze the implications for their specific industry and operations in each jurisdiction.
With over 20 states enacting or developing new laws that will shape AI workplace regulation, more changes are inevitable. But these basic approaches will help your team stay ahead of the curve as AI becomes further ingrained in the workplace.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.
Author Information
Alexandra Stathopoulos is a California-based partner in Orrick’s employment practice.