How AI Can Be Used Ethically to Monitor Worker Productivity

December 13, 2022, 9:00 AM UTC

Chief technology officers should follow the “do no harm” mantra of the Hippocratic Oath when incorporating artificial intelligence software into company platforms.

While an overarching goal of introducing AI is to increase efficiency or remove bias, well-intentioned ideas often carry unexpected consequences that cause harm.

For example, use of facial recognition technology to identify criminal suspects can sometimes result in the arrest (or worse) of an innocent person. Or a weaponized drone developed for the military can fall into the wrong hands and be used in ways that stray far from the developer’s original intention.

Here is how technology companies, and the businesses that use their products, might look beyond the intended uses of AI to identify potential unforeseen consequences.

Critical Questions to Ask

There are some important questions to ask when evaluating the ethics of a new AI solution, which can differ from industry to industry and company to company.

  • Do you understand the solution?
  • Do you understand the market for it?
  • Do you understand the potential unintended uses for the solution?
  • Even if it is legal, is it worth it? Is it ethical? Is it aligned with your company’s culture and values?
  • What might be the impact to your company’s reputation?

Most of corporate America likely won’t be using facial recognition software or weaponized drones. However, many companies rolled out monitoring software during the pandemic to better keep track of where and how their employees were working remotely.

For some, AI was a great resource for building safety models that helped bring the workforce back to the office in a post-pandemic world. But for others, monitoring productivity in a remote workforce seemed like an invasion of privacy rather than an ethical use of AI tools.

Purposeful, Legal Use

It is important to be transparent with employees when debuting a monitoring tool, to avoid the Big Brother vibes that cultivate distrust in the workforce.

If your company decides to use AI in the workplace, it should be for a good reason. Good reasons include reviewing job descriptions to eliminate unconscious bias, tracking driver safety, and conducting temperature and social-distancing checks, to name a few.

But for companies that operate in multiple locations, AI use raises other legal concerns.

What if the practice is illegal in jurisdictions where you operate but legal in others? Do you use the AI solution where it is legal? And if so, does this raise a fundamental employee fairness question?

For example, will some employees be subject to adverse employment action as a result of the monitoring, while other employees engaging in similar conduct go undetected and face none?

Certainly, creating a disengaged, distrustful, or unfair workforce is not an acceptable consequence of using this technology.

Engage All Departments

Ensuring responsible use of AI is a team sport that requires engagement from the business, marketing, security, privacy, legal, compliance, and HR departments.

Using an established governance process when implementing AI can be integral to its correct use, as the reputational impact of getting this wrong can be significant. The business sponsor or owner of the platform, along with a cross-functional team, should be able to address the following threshold questions:

  • Does the solution do what you think it is going to do?
  • Are your communications or marketing materials accurate and transparent?
  • Who is the accountable “keeper” of the algorithm?
  • Are there privacy and security risks that need to be managed?
  • Is the impact to your employees fair and reasonable?
  • Is there any potential for this to create a health or safety concern?
  • Are you satisfied with the potential ethical and/or reputational risks?

If the answers are satisfactory, ensure that there is accountability to manage any identified risks, and that the business sponsor or owner is required to periodically report back on the status of the development and rollout of the solution.

Proactive companies have created governance or steering committees to review and approve guiding principles, discuss solutions that fall within the gray space, create individual management action plans to ensure accountability for each solution, and outline escalation paths.

This group should report to the board of directors or a board subcommittee, which can be informed of these principles and sign off on some of the riskier solutions; senior management and the board are responsible for setting the company’s risk appetite.

AI tools offer many companies potential solutions for time-consuming projects, ways to eliminate human bias, and opportunities to strengthen safety protocols. However, it is important to consider the potential unintended consequences when evaluating and rolling out these new tools.

This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.


Author Information

Amy E. Schuh is a partner at Morgan Lewis who focuses on corporate ethics and compliance counseling, internal and government investigations, and mergers and acquisitions due diligence and integration.
