Social Media Verdicts Spur Online Product Safety Reassessments

March 27, 2026, 4:53 PM UTC

A pair of landmark verdicts this week finding Meta and Google negligent in their platform design is forcing companies to refocus products on kids’ safety, with an emphasis on features such as age verification.

The verdicts are bellwethers for the disposition of more than 1,000 similar complaints along with litigation from more than 40 other attorneys general. They’re also a warning that platforms will be held accountable for how their designs impact consumers, even if the law doesn’t spell out exact design requirements.

“I think it’s going to cause some waves because companies are going to be reassessing their platforms in terms of safety and making sure they understand the platform,” said Amy Mushahwar, partner at Lowenstein Sandler.

“The signal that it sets for other social media platforms and also generative AI platforms is you can have policies but you need to have the engineering pipelines to prove the policies,” she said. “You should have metrics if you have a safety program.”

State Law Void

A Los Angeles jury found Meta and Google negligent in how they designed their products, awarding $6 million in damages to a woman who claimed the platforms knowingly deployed addictive features that caused her mental harm. The verdict followed a $375 million win for the New Mexico attorney general in a lawsuit accusing Meta of deceiving users about kids’ safety.

The verdicts coincide with a new wave of state laws that require platforms to bake user safety into their product designs as well as restrict access to features that could cause harm to kids.

While many of those state laws have been blocked by the courts, the recent verdicts demonstrate that legislation isn’t the only avenue to remedy company behavior. New Mexico’s suit calls for Meta to “enact effective age verification” as part of its injunctive relief.

“The common theme is still if you’re going to build products responsibly and in a way that matches what both regulators and users expect, you need to be able to build products that are either just not available to minors or have different experiences for minors,” said Roman Karachinsky, chief product officer at Incode, an identity verification company.

Next Steps

It’s not entirely clear, however, what product choices companies will make to address claims like those in the Los Angeles lawsuit over Instagram’s and YouTube’s allegedly addictive features.

Platforms could explore “simpler options” to reduce risk, such as warning labels about product harms, instead of invasive age gates, Jesse Saivar, chair of Greenberg Glusker’s Technology Group, wrote in an email.

“I think one of the most fascinating things about these cases will be seeing how the platforms adapt,” he wrote.

Trials could force expectations for platforms to change, even as state laws implementing age-appropriate design codes and other limitations on social media features get held up in federal court challenges.

“We could see more of an expectation that the platform itself is designed appropriately rather than designed for consent to access,” said Casey Waughn, senior associate at Armstrong Teasdale.

Meta said it disagrees with the New Mexico and Los Angeles verdicts and plans to appeal. Google also plans to appeal the Los Angeles ruling.

To contact the reporter on this story: Tonya Riley in Washington at triley@bloombergindustry.com

To contact the editors responsible for this story: Jeff Harrington at jharrington@bloombergindustry.com; Michelle M. Stein at mstein1@bloombergindustry.com
