YouTube says the label is not required for all AI-generated content. (Express Photo)
With generative AI now widely accessible, it can be hard to tell whether the images and videos you see on video-sharing platforms like YouTube are real or artificially generated.
To combat misinformation and make viewers aware when the content they are seeing has been altered or generated synthetically, YouTube requires creators to disclose when they use realistic AI-generated or altered media. The company has now introduced a new tool in Creator Studio that lets creators make these disclosures; disclosure is not required for content that is "unrealistic, animated, includes special effects, or has used generative AI for production assistance."
In a blog post, YouTube says these disclosures will appear as labels in the expanded video description, but for sensitive topics like news, elections, finance or health, the label will be displayed on the video itself. Creators must disclose when a video uses the likeness of a realistic person, alters footage of real events or places, or contains realistic-looking scenes created with AI-powered tools.