YouTube is taking steps to combat AI-generated fake content by introducing new rules that require creators to label uploaded videos as synthetic or altered. During the upload process, creators will check a box to indicate the nature of their content, and the system will display a notification to inform viewers.
As examples, YouTube cites content that uses AI to simulate events that never occurred, or videos that show individuals saying or doing things they never actually did.
Creators found to be in violation of these rules may face content removal or even a ban, although YouTube has not specified during the upload process which penalties apply in which cases.
In addition, YouTube is implementing a reporting system for individuals whose faces or voices have been manipulated in videos, allowing them to request removal of the content. YouTube's criteria for evaluating such requests may vary depending on the circumstances.
TLDR: YouTube is addressing AI-generated fake content with new labeling rules and a reporting system for manipulated faces and voices. Violators may have their content removed or face a ban.