Attorney Smita Rajmohan explains how state and federal laws can help, and hinder, attempts to prevent the spread of misinformation online, including misinformation generated by artificial intelligence.
This presidential election year in the US has policymakers, companies, and citizens worried about how generative AI will accelerate the creation and dissemination of disinformation. It’s important to understand what laws exist to fight against disinformation, and how other laws may get in the way of that objective.
Politicians and candidates who buy fake followers on social media to boost engagement, or who make false statements about their business successes, might have to tread carefully. In some cases, lying can be deemed an unfair and deceptive trade practice.
The Federal Trade Commission has broad powers to investigate companies that engage in unfair and deceptive trade practices, including misrepresentations, under Section 5 of the FTC Act. For example, oil and gas companies claiming to have sustainable products have been reprimanded by the FTC for essentially lying to the public.
With regard to artificial intelligence’s potential role in disinformation, the FCC has prohibited the use of AI-generated voices in unsolicited robocalls. Generative AI chatbots such as OpenAI’s ChatGPT can generate content quickly in response to prompts and have been shown to produce incorrect and potentially damaging information about individuals.
Generative AI technologies rely on large datasets that are often scraped from corners of the internet and provide responses based on word distributions in their training datasets. They’re not necessarily concerned with accuracy because they have no innate understanding of true or false statements (although it’s possible to partially mitigate this problem through reinforcement learning at the training stage).
Bias in datasets, insufficient data, or overfitting can degrade model quality, leading to inaccurate responses, sometimes called hallucinations. A chatbot may also reproduce verbatim a scurrilous or defamatory statement that appears in its training data.
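To illustrate the mechanics in the simplest possible terms, here is a minimal, hypothetical sketch (a toy bigram model in Python, not drawn from any actual product) showing how text generated purely from word frequencies in training data will reproduce a false statement just as readily as a true one:

```python
import random
from collections import defaultdict

# Toy training text: the false claim is treated exactly like the true ones.
corpus = (
    "the senator voted for the bill . "
    "the senator embezzled campaign funds . "   # false statement in the data
    "the bill passed the senate . "
).split()

# Build a bigram model: record which word follows which.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=8):
    """Sample words purely from the learned word distribution."""
    word, out = start, [start]
    for _ in range(length):
        candidates = follows.get(word)
        if not candidates:
            break
        word = random.choice(candidates)  # probability ~ frequency in corpus
        out.append(word)
    return " ".join(out)

print(generate("the"))
# May emit "the senator embezzled campaign funds" simply because that
# sequence appears in the data; the model has no notion of truth or falsity.
```

Real generative AI systems are vastly more sophisticated, but the underlying point is the same: output reflects patterns in the training data, not verified facts.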
Victims of such disparaging statements may rely on tort laws such as defamation for recourse. A radio host in Georgia sued OpenAI for defamation when ChatGPT claimed incorrectly that he had embezzled funds from a non-governmental organization. Political candidates may be able to bring similar claims if they’re defamed by deepfakes and other AI output.
The First Amendment protects free speech and, with limited exceptions, bars laws that impede that right. This makes it challenging to enact laws requiring social media companies to take down content perceived as disinformation.
During the Covid-19 pandemic, the Biden administration asked multiple social media companies to take down false information about Covid treatments and vaccine effects. The US Supreme Court is now being asked to decide whether such requests violated the First Amendment.
Similarly, states have passed several children’s privacy and content-safety laws, some of which have been enjoined on the ground that they interfere with the First Amendment. Those court decisions are likely to shape the prospects of proposed disinformation laws.
However, it’s unclear whether the First Amendment would cover generative AI content (arguably AI doesn’t have a right to free speech) and whether requesting social media companies to prevent the amplification of generative AI output would violate the First Amendment.
If a generative AI model has been trained on data intended to be private (such as text messages or photos), then its output may result in the processing of personal data in a manner that is a privacy violation.
Some states allow claims for intrusion on privacy and violation of a person’s reasonable expectation of privacy. In the EU, processing of personal data requires a legal basis, which may be difficult to establish. Public figures are already seeing their privacy compromised through location tracking and deepfakes.
No federal legislation covers deepfakes and their proliferation; state legislation focuses mainly on deepfake pornography. Several states have proposed bills to counteract the effects of deepfakes in political discourse; some would require that all AI-generated content be ‘watermarked’ and would impose criminal penalties on bad actors.
One proposed bill would establish a federal right of publicity; that right is currently state-specific and protects a person’s name or likeness from being used for commercial benefit. Celebrities often rely on this recourse when their pictures or voices are used for advertising without permission or compensation.
The Communications Decency Act’s Section 230 provides companies a ‘safe harbor’ from liability for content posted by their users. Many experts oppose repealing Section 230, viewing the provision as a protection for online free speech. Attempts to repeal or revise it are likely to affect governments’ ability to ask social media companies to remove generative AI content or to hold them accountable for disinformation on their platforms.
It’s unclear whether Section 230 protections will cover generative AI companies that aren’t hosts of third-party content. AI outputs often have no owner or source, making it harder for AI providers to argue they were not the “publishers” of such output.
Even Meta Platforms Inc. has been held accountable for housing discrimination stemming from its advertising algorithms and was denied the benefit of Section 230, on the ground that it was a co-developer of the discriminatory ads. Predictably, some bills propose to expressly exclude such AI vendors from Section 230 protections.
While tackling disinformation has always been challenging, the problems are amplified by the onset of generative AI. Voters and politicians should be aware of their rights and avenues of recourse as they wade into these untested waters.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.
Author Information
Smita Rajmohan is an AI and technology attorney in Silicon Valley and serves on the AI policy committee for the Institute of Electrical and Electronics Engineers.