YouTube announces new measures for AI-generated content

YouTube has outlined its commitment to responsible AI innovation, emphasizing a proactive approach to safeguarding against the misuse of AI-generated content.

In the coming months, YouTube will roll out several updates designed to address these concerns and improve transparency around AI-generated content.

Disclosure Requirements and Content Labels

To counter the potential for misleading content, YouTube will introduce disclosure requirements for creators who use AI tools to produce realistic altered or synthetic material.

Creators must explicitly disclose such content during the upload process. This includes videos depicting events that never occurred or individuals saying or doing things they didn’t do. Failure to comply may result in content removal, suspension from the YouTube Partner Program, or other penalties.

Viewer Awareness

YouTube aims to keep viewers informed about altered or synthetic content in two ways.

  • First, a label in the description panel will indicate the presence of such content.
  • Second, for sensitive topics, a more prominent label will be applied to the video player.

In cases where a label alone is insufficient to mitigate harm, YouTube may remove synthetic media violating Community Guidelines.

New Options for Creators, Viewers, and Artists

Responding to community feedback, YouTube plans to let users request the removal of AI-generated or other synthetic content that simulates an identifiable individual, including their face or voice.

Factors considered include whether the content is parody or satire, whether the person making the request can be uniquely identified, and whether it features a public official or well-known individual. Music partners will also be able to request the removal of AI-generated music that mimics an artist’s voice.

AI Technology for Content Moderation

YouTube pairs AI technology with more than 20,000 human reviewers worldwide for content moderation. AI classifiers help detect potentially violative content faster and more accurately, reducing reviewers’ exposure to harmful material.

YouTube says it continuously refines these tools to keep pace with emerging threats, including using generative AI to expand the set of information its classifiers are trained on.

Building Responsibility into AI Tools

YouTube asserts a commitment to responsible AI development, prioritizing accuracy over speed. Ongoing efforts include developing guardrails to prevent AI tools from generating inappropriate content.

Acknowledging the likelihood of bad actors attempting to bypass these measures, YouTube actively incorporates user feedback and conducts adversarial testing through dedicated teams.

Announcing the updates, Jennifer Flannery O’Connor and Emily Moxley, Vice Presidents of Product Management at YouTube, said:

We’re at the start of our journey to unlock new possibilities for innovation and creativity on YouTube with generative AI. We’re incredibly excited about this technology’s potential, and we recognize that what comes next will echo throughout the creative industries for years.

Balancing these benefits with our community’s ongoing safety is crucial at this pivotal moment. We’re committed to working closely with creators, artists, and others across the creative industries to shape a future that benefits everyone.

