Meta to Reduce Role of Outside Content Moderators in Favor of AI

March 19, 2026, 4:00 PM UTC

Meta Platforms Inc. will soon cut back on its use of third-party vendors to help with content moderation, relying instead on advanced artificial intelligence systems to detect and remove posts that violate the company’s terms of service.

Meta, which owns Facebook and Instagram, has used AI for years to detect spam and abusive posts at scale on its networks, and has also paid human moderators from companies like Accenture Plc to manually review and remove inappropriate posts.

The social media giant recently started testing more advanced AI tools built on large language models to help sift through posts and enforce ...
