Meta, Snap Design Suits Over Child Harm Take on Liability Shield

Sept. 9, 2024, 9:03 AM UTC

New Mexico’s latest tech lawsuit, accusing Snap Inc. of designing features used for child sexual exploitation, signals a new front in the burgeoning legal battle challenging the limits of Big Tech’s federal content liability protections.

Section 230 of the Communications Decency Act, which protects platforms from liability for user-generated content, is the front line of defense for tech companies against a growing wave of litigation over not just what users post, but how platforms circulate and connect users with potentially harmful content. New Mexico’s case—filed Sept. 5 in state court—mirrors a December 2023 lawsuit against Meta over virtually identical conduct, in which a New Mexico judge denied the social media giant’s invocation of Section 230’s shield against tools and content used for trafficking.

That order offered plaintiffs a possible turning point in litigation against Big Tech that’s long been smothered by the shield.

Section 230 protects platforms from their users’ content, but not necessarily from their own “algorithms and the other things that they do to prioritize information,” said Laura Draper, associate program director for the Tech, Law, and Security Program at American University’s Washington College of Law.

An undercover New Mexico Justice Department investigation found more than 10,000 records related to child sexual abuse materials connected to Snap on dark web sites, the internet’s seedy underbelly often home to illicit activity. New Mexico investigators also set up a profile posing as a 14-year-old girl; accounts with names like “child.rape” and “pedo_lover10” exchanged messages with the profile and attempted to coerce the agent into sharing explicit photos.

While the complaints against Meta and Snap point specifically to the proliferation of child sexual abuse materials by the platforms’ users—a clear legal violation if proved—they also say that how tech companies design the user experience is driving harm. Similar arguments have picked up steam in class actions targeting everything from online bullying to sexual extortion of teens.

Whether a potential Section 230 defense by Snap suffers the same fate as Meta’s is still unclear. But any decision will provide a key data point in a growing deluge of cases asking a question that courts across the country are still grappling with.

“All of these cases are going to require some pretty creative lawyering and some pretty interesting departures in the judicial landscape,” said Draper. “Which isn’t to say that they won’t be successful, it’s just to say it’s pretty untested water.”

A shift by plaintiffs in targeting “design features” over content is “designed to mask the assumption” that how a company chooses to promote content is an editorial decision that should be covered by Section 230, said Eric Goldman, a professor at Santa Clara University School of Law and an expert on Section 230.

“Plaintiffs are trying to sidestep that by trying to make their targets even more abstract by saying things like you shouldn’t have allowed people to find each other,” he said.

Beyond New Mexico

Courts have started to grapple with the question in class action litigation.

In July 2022, a Texas district court dismissed a lawsuit accusing Snap of negligently designing its app to allow predators to interact with teens on the grounds that Snap was protected by Section 230. The US Court of Appeals for the Fifth Circuit agreed. The Supreme Court also declined to review the case though Justice Clarence Thomas dissented, saying the court failed to address “whether social-media platforms—some of the largest and most powerful companies in the world—can be held responsible for their own misconduct.”

Last month, the Ninth Circuit upheld a district court ruling that design-related product liability claims in a cyberbullying case involving YOLO, an app formerly owned by Snap, were blocked by Section 230.

Meta was sued by the attorneys general of several states in October over allegations it harms kids with negligent design features.

“It’s still a cutting-edge issue,” said Goldman. “Literally, every day we’re getting new rulings on that topic.”

And algorithms aren’t the only features under attack. Judges will also need to consider whether it’s reasonable to expect social media platforms to read and monitor private messages, a question implicated in the New Mexico case, Goldman said.

Such privacy protections are ones that law enforcement, including some attorneys general, have balked at. In January, the state of Nevada sued Meta for rolling out end-to-end encryption for all users, claiming it facilitated online child exploitation.

It all amounts to a cottage industry of social media harm lawsuits working their way through various levels of the judicial system.

“The courts are all over the map right now,” said Goldman. There are “dozens or hundreds of other cases where the exact same theories are being tested,” he said.

To contact the reporter on this story: Tonya Riley in Washington at triley@bloombergindustry.com

To contact the editors responsible for this story: Kartikay Mehrotra at kmehrotra@bloombergindustry.com; Adam M. Taylor at ataylor@bloombergindustry.com
