Meta Cracks Down on Unoriginal Content and Fake Accounts on Facebook

The crackdown arrives amid growing user frustration over Meta’s moderation practices, especially on Instagram

Menlo Park, CA – In a bid to preserve content authenticity and support original creators, Meta has announced a sweeping set of new policies aimed at curbing the spread of unoriginal and spam-like content on Facebook.

In a statement released Monday, the tech giant revealed it has already removed approximately 10 million accounts in 2025 for impersonating popular content creators, while a further 500,000 accounts were penalized for spam behavior or generating fake engagement.

“These accounts will face reduced reach and be barred from Facebook’s monetization programs,” Meta confirmed. Repeat violators risk losing distribution privileges altogether.

Targeting Duplicate and Low-Quality Media

The initiative specifically targets accounts that habitually repost others’ videos, photos, and text without meaningful changes or attribution. Meta emphasized that the policy does not affect users who engage creatively — such as through commentary, reactions, or participation in trends — but rather those who merely recycle content to boost engagement artificially.

To promote original content, Facebook will now demote duplicate videos in user feeds and is testing a new feature that links reposted content back to the original source, ensuring credit and visibility for the original creator.

AI Content and “Slop” Media Under Scrutiny

Though the announcement did not mention artificial intelligence directly, Meta cautioned creators against piecing together clips or overlaying simple watermarks on borrowed material — practices commonly associated with mass-produced, low-effort “AI slop” videos.

The company is urging creators to focus on authentic storytelling and meaningful captions, implicitly discouraging the growing reliance on unedited, AI-generated content.

Creators Voice Concerns Over Enforcement

The crackdown comes as user frustration with Meta’s moderation practices mounts, particularly on Instagram, where creators have complained about wrongful takedowns and a lack of support. A recent petition demanding transparency and improvements in enforcement has gathered nearly 30,000 signatures.

To address transparency, Meta is introducing new post-level insights via the Professional Dashboard, allowing users to monitor content performance and receive alerts about potential policy violations.

Combating Fake Accounts at Scale

Meta also disclosed new figures from its transparency report, revealing that 3% of Facebook’s monthly active users are fake accounts. From January to March 2025 alone, the platform took action against one billion fraudulent profiles.

Shifting away from internal fact-checking, Meta is piloting a U.S.-based Community Notes system — modeled after a similar tool used by X (formerly Twitter) — that enables users to assess and verify the accuracy of posts.

The company says the rollout of these new enforcement tools and policy updates will be gradual, giving content creators time to adapt to the changes.