After content creators, politicians and journalists, YouTube will also give celebrities access to its likeness detection tool, allowing them to remove deepfakes and stop unauthorized impersonation on the platform.
YouTube’s biometric likeness technology scans for AI-generated videos that match a verified user’s appearance. The feature functions similarly to Content ID, a tool that helps detect and remove copyrighted material on the platform, according to a company blog post.
Users can verify their identity by submitting a government ID and a self-recorded video for facial biometric matching. Those enrolled in the program receive alerts when potential matches surface and can request removal if the content violates YouTube’s privacy policy.
The feature was first offered to creators in the YouTube Partner Program last year, while in March it was expanded to government officials, journalists and political candidates. Expansion to other famous people is the next logical step.
YouTube says it has collaborated with talent agencies and management companies, such as CAA, UTA, WME, and Untitled Management, to provide the tech to entertainers.
While some celebrities have recoiled at the thought of seeing deepfakes of themselves online, others see the technology as a money-making tool. Talent agency CAA, for instance, has built a database with AI developer Veritone that stores its clients’ digital likenesses and voices, giving them control and compensation when their likenesses are used by AI.
YouTube is not the only company introducing measures to protect famous people from unauthorized deepfakes, as AI-generated videos fuel scams, political misinformation and reputational manipulation.
Last year, OpenAI committed to “strengthen guardrails around replication of voice and likeness when individuals do not opt-in,” after Breaking Bad star Bryan Cranston raised concerns with the actors’ union SAG-AFTRA over unauthorized AI-generated versions of his likeness.
Meanwhile, both OpenAI and YouTube have voiced support for the proposed NO FAKES Act, a U.S. federal bill designed to protect individuals’ voices and visual likenesses from unauthorized, AI-generated digital replicas.