New Delhi: YouTube has rolled out a new likeness-detection tool for creators in the YouTube Partner Program, designed to identify AI-generated videos that use their face or voice without consent and let them request the content's removal.
The feature follows a successful pilot phase and marks the first wave of a global rollout. Affected creators received notification emails on Tuesday morning, according to a YouTube spokesperson.
“The technology is aimed at protecting creators from the misuse of their likeness in content that could falsely endorse products, spread misinformation, or harm reputations,” the spokesperson said.
The introduction comes after several high-profile incidents of unauthorised AI use, including the case of electronics company Elecrow, which used an AI-generated version of YouTuber Jeff Geerling’s voice to promote its products.
On its Creator Insider channel, YouTube explained how the tool works. Creators open a new "Likeness" tab, agree to data processing terms, and scan a QR code with a smartphone. The code leads to an identity verification page that requires a photo ID and a short selfie video.
Once verified, creators gain access to a dashboard listing every video detected as using their likeness. From there, they can submit a removal request under YouTube's privacy policies, file a copyright complaint, or archive the video for record-keeping.
Creators retain full control over participation in the programme. Those who opt out will have data scanning halted within 24 hours.
YouTube began testing the technology earlier in 2025, in collaboration with Creative Artists Agency (CAA), to help celebrities, athletes, and influencers detect deepfake content on the platform.
In April, YouTube expressed support for the proposed NO FAKES Act, legislation intended to curb the creation and distribution of deceptive AI-generated replicas that imitate individuals for malicious or misleading purposes.