AI Use Desperately Needs Proactive Guardrails Across Industries

Sept. 21, 2023, 8:00 AM UTC

Last year at a Washington forum, Henry Kissinger likened artificial intelligence to the threat of nuclear weapons, noting that without intervention, AI “is simply a mad race for some catastrophe.” While predicting catastrophe may overstate the problem, there is ample evidence for concern.

Recently, a New York attorney’s AI-generated legal brief filed in a federal district court was found to be rife with nonexistent case law and fake judicial opinions. Elsewhere, faked scientific research has surfaced, and false, defamatory information about citizens has spread, all thanks to the advent of generative AI technology such as ChatGPT, which draws on existing training data not just to answer questions, but to “generate” new content.

Over the next few years, AI promises to transform virtually every industry sector. Several leading AI companies, including Meta, Google, Microsoft, and Amazon, recently committed to voluntary safeguards against these risks, but little has been said about how consumers of this technology can protect themselves.

Although AI offers great promise, it can also pose significant risks, and it’s critical for individuals and companies that increasingly rely on AI to develop frameworks to manage them.

Those risks are numerous. AI technology can be used as an instrument of threat activity by criminals, state actors, or disgruntled groups. The AI systems themselves can be targeted with malicious intent. Even well-intentioned use can produce profound political, economic, ethical, and other potentially destabilizing effects.

Generative AI could enable threat actors to develop increasingly realistic social engineering campaigns, or be used as a tool to spread false or misleading information. And the greater a company’s dependence on AI, the greater the likelihood it will be targeted by threat actors—whether to illicitly obtain sensitive information, disrupt core business functions for extortion purposes, or make a political statement.

Furthermore, AI can only perform tasks based on the information we give it, which creates risk on both the input and output sides. On the input side, sensitive data can leak into prompts and the underlying models, and poorly constructed prompts can degrade the quality of what the system produces.

There are also output risks, such as the explicit and implicit biases contained in the source data on which the models were trained, especially when that data is not carefully curated. Particularly troubling are AI hallucinations: absurd conclusions generated by AI applications as a result of algorithmic flaws, poorly constructed prompts, or bad data.

It is essential that organizations consider how to implement guardrails using a three-pronged approach: data, disclosure, and decision-making. For data, the focus should be on both accuracy and securing the legal right to use the data, for example, whether it was screen-scraped without the owner’s consent.

Additionally, organizations displaying text or video content developed by AI should disclose that fact. One of the White House’s 2023 Voluntary AI Commitments is for AI companies to develop provenance mechanisms that can determine whether a particular piece of content was created with their systems. Finally, organizations should never leave a final decision that affects a human being to AI. A human should always be in the loop as the final adjudicator.

As this powerful tool advances, and until the legal and regulatory issues it presents are addressed, someone must be accountable for making sure that the models being used are appropriate for the task. We need boundaries in place to prevent misuse of these models, boundaries that are stress-tested to discover and address unintended consequences in a timely manner.

Because right now, if AI is fed false data, it has no way to fact-check itself. And when AI spreads those falsehoods, it’s accountable to no one. We’re the only ones who can change that.

This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.

Author Information

Michael Chertoff served as the secretary of homeland security under President George W. Bush. He is a special adviser on the American Bar Association AI Task Force, and executive chairman and co-founder of The Chertoff Group.
