Meta isn’t the only company grappling with the rise of AI-generated content and how it affects its platform. YouTube also quietly implemented a policy change in June that allows people to request the removal of AI-generated or other synthetic content that mimics their face or voice. The change lets people request the takedown of this type of AI content under YouTube’s privacy request process, an expansion of the responsible AI agenda the company first announced in November.
Instead of requesting that content be removed for being misleading, such as a deepfake, YouTube wants affected parties to request its removal directly as a privacy violation. According to YouTube’s recently updated Help documentation on the subject, first-party claims are required, with a handful of exceptions, such as when the affected individual is a minor, lacks access to a computer, is deceased, or in other similar situations.
However, simply submitting a removal request does not guarantee that the content will come down. YouTube notes that it will make its own judgment about the complaint based on a number of factors.
For example, it may consider whether the content is disclosed as synthetic or AI-generated, whether it uniquely identifies an individual, and whether the content could be considered parody, satire, or something else with value and public interest. The company further notes that it may consider whether the AI content features a public figure or other well-known individual, and whether it shows them engaging in “sensitive behavior,” such as criminal activity, violence, or endorsing a product or political candidate. The latter is particularly concerning in an election year, when AI-generated endorsements could potentially sway votes.
YouTube says it will also give the uploader of the content 48 hours to act on the complaint. If the content is removed before that time elapses, the complaint will be closed. Otherwise, YouTube will initiate a review. The company warns users that removal means fully taking the video down from the site and, if applicable, removing the individual’s name and personal information from the video’s title, description, and tags. Uploaders can also blur the faces of people in their videos, but they can’t simply make the video private to comply with the removal request, as the video could be switched back to public at any time.
However, the company did not widely publicize the policy change. In March, it introduced a tool in Creator Studio that allows creators to disclose when realistic-looking content was made with altered or synthetic media, including generative AI. More recently, it began testing a feature that lets users add collaborative notes providing additional context on videos, such as whether a video is intended as a parody or is misleading in some way.
YouTube is not against the use of AI, as it has already experimented with generative AI features itself, including a comment summarizer and a conversational tool for asking questions about a video or getting recommendations. However, the company has previously warned that simply labeling AI content as such won’t necessarily protect it from removal, as it will still have to comply with YouTube’s Community Guidelines.
In the case of privacy complaints about AI material, YouTube won’t be quick to penalize the creator of the original content.
“For creators, if you receive a notification of a privacy complaint, please note that privacy violations are separate from Community Guidelines strikes and receiving a privacy complaint will not automatically result in a strike,” a company representative shared last month on the YouTube Community site, where the company directly updates creators about new policies and features.
In other words, YouTube’s Privacy Guidelines are distinct from its Community Guidelines, and some content may be removed as a result of a privacy request even if it doesn’t violate the Community Guidelines. While the company won’t apply a penalty, such as an upload restriction, when a creator’s video is removed following a privacy complaint, YouTube tells us it may take action against accounts with repeated violations.
Updated July 1, 2024 at 4:17 p.m. ET with more information about actions YouTube may take in the event of privacy violations.