Meta Platforms Inc. on Friday introduced new measures to curb impersonation and clarify what constitutes “original content” on Facebook, following widespread criticism that the platform had become flooded with low-quality AI-generated posts.
The updates build on last year’s crackdown targeting spammy and unoriginal material, including repeated use of others’ photos, videos, or text.
Meta said the initiatives aim to elevate authentic creator content in users’ feeds and restore trust in the platform’s quality, countering what it described as the “AI slop” that had been weighing down Facebook’s reputation.
The move is crucial for Facebook’s ongoing success as a platform for creators. Industry experts note that if unoriginal content and AI-generated posts overwhelm authentic voices and limit monetization opportunities, creators may migrate to competing platforms.
Meta reported that its previous initiatives have already had a measurable impact: views of, and time spent on, original content roughly doubled in the second half of 2025 compared with the same period in 2024.
The company also reported progress in tackling impersonation, saying it removed 20 million accounts last year and saw a 33 per cent decline in impersonation reports targeting high-profile creators.
Facebook is testing upgrades to its content protection tools, which let creators take action when impersonators repost their reels elsewhere on the platform.
Through a centralized dashboard, creators can flag duplicated content, and Meta says an upcoming update will streamline the reporting process, enabling submissions from a single interface.
The company noted, however, that the current system primarily identifies duplicate content and does not yet detect unauthorized use of a creator’s likeness, a gap the platform still aims to address.
