YouTube to tighten monetization rules to combat AI-generated content

YouTube is set to roll out new monetization rules aimed at curbing the spread of “inauthentic” and mass-produced content on its platform—much of it made easier by the rise of generative AI tools.

The company will update its YouTube Partner Program policies on July 15, clarifying which kinds of content are eligible for monetization.

Although the exact policy language has yet to be released, YouTube has clarified that the update is intended to help creators better understand what counts as “inauthentic” today. According to YouTube’s support documentation, monetized videos must be “original” and “authentic,” criteria that some creators argue have become increasingly difficult to define in the AI era.

The change comes amid growing concerns over a flood of low-quality, AI-generated content—dubbed “AI slop”—spreading across YouTube. These videos often feature robotic voiceovers layered over stock images, reused video clips, or entirely fabricated news segments. Some AI-generated music and storytelling channels have attracted millions of views, raising alarm about the platform’s content quality and credibility.

While some creators feared the update might threaten formats such as reaction videos or those using licensed clips, YouTube’s Head of Editorial and Creator Liaison, Rene Ritchie, said the policy update is a clarification rather than a new restriction. In a video published Tuesday, Ritchie assured creators that formats like reactions and commentary remain safe—as long as they demonstrate originality and add value.

“This is a minor update to long-standing policy,” he said, emphasizing that mass-produced, repetitive content has long been ineligible for monetization.

Still, with AI making it easier than ever to churn out low-effort content, YouTube appears to be drawing a firmer line. The platform is likely preparing for stricter enforcement, especially as AI-generated misinformation and scams continue to threaten its credibility.