Some AI-generated YouTube videos will now require a label

The policy was first announced by the company last November, but according to a new update from YouTube posted on Monday, the policy and its required compliance tools are now launching and will continue to roll out over the coming weeks.

"We’re introducing a new tool in Creator Studio requiring creators to disclose to viewers when realistic content – content a viewer could easily mistake for a real person, place, or event – is made with altered or synthetic media, including generative AI," says YouTube.




YouTube is requiring that creators mark certain AI video content so the platform can affix an "altered or synthetic content" label on it. However, not all AI video content will need to be labeled.

An example of what the AI label looks like on the YouTube mobile app video player. Credit: YouTube

According to YouTube, the policy only covers AI-made digital alterations or renderings of a realistic person, altered footage of real events or places, or the complete generation of a realistic-looking scene.

YouTube also explains what type of AI-generated content is exempt. For the most part, these exemptions are minor alterations that were possible well before the generative AI boom of recent years. These include videos that use beauty filters, special effects like blur or a vintage overlay, or color correction.


Potential pitfalls from YouTube's AI labeling policy

There is one interesting and glaring exemption from YouTube's new AI-labeling policy: animated AI content.

According to YouTube, animated content is "clearly unrealistic," so it does not need to be labeled. The policy is meant to curb misinformation and potential legal issues that could arise from generated versions of real people. It is not meant to be quality control, warning viewers when low-effort, AI-generated junk starts playing on their screens.

However, as Wired points out, YouTube is arguably dropping the ball here, because the policy leaves out the bulk of kids' content: animated video.



Disturbing kids' videos on YouTube have made headlines over the years, and the company has made moves to deal with the problem. These videos often appear to be pumped out as quickly as possible, without any educational intent or even steps to ensure age-appropriateness.

Kid-oriented content on YouTube will be affected by this policy if creators push misinformation, since that would fall under the "realistic" portion of the new rules. However, bulk-generated, AI-animated junk, usually aimed at the youngest demographics, will not. YouTube seems to be missing an opportunity to have this type of content labeled so parents can easily filter it out.

All in all, YouTube's new AI policy is a step in the right direction. Generative AI content that could be mistaken for reality will be labeled, while filmmakers' and creators' use of AI to enhance high-quality content won't be affected.

Still, YouTube doesn't yet appear to be dealing with the potential for low-quality, AI-generated content to flood the site and fundamentally change the platform. It may be forced to confront that reality, though, if and when it arises.

