New Delhi: A new wave of ethical scrutiny has engulfed the tech industry after Google confirmed that YouTube videos were used to train its AI-powered video generator, Veo 3, without explicit consent or compensation from creators.
The revelation has triggered widespread concern across the creator economy, raising critical questions about ownership, consent, and the use of user-generated content in artificial intelligence development.
Veo 3 is positioned as a generative video model capable of producing cinematic-quality clips from simple text prompts. It includes features like realistic visuals, ambient sound, and dialogue, and is slated for integration into YouTube Shorts later this year.
YouTube CEO Neal Mohan, speaking at Cannes Lions 2025, described Veo 3 as a transformative tool for democratising short-form storytelling, especially as YouTube Shorts now sees more than 200 billion daily views.
But behind Veo’s innovation lies a contentious development: the model was partially trained on YouTube’s vast content library, reportedly over 20 billion videos, without creators’ active knowledge. In a statement to CNBC, a YouTube spokesperson said, “We’ve always used YouTube content to make our products better, and this hasn’t changed with the advent of AI,” maintaining that usage aligns with YouTube’s creator agreements.
However, creators expressed shock, noting they were unaware their videos could be used to train AI systems that might ultimately compete with their own content. Updated Terms of Service from September 2024 grant YouTube broad rights to use uploaded content, including for “machine learning and AI applications,” but do not offer an opt-out for training by Google’s own models, only for third parties like Apple or Anthropic.
YouTube’s AI training practices reflect a broader trend among technology companies racing to develop generative models. OpenAI faced backlash in 2024 over allegations it transcribed more than a million hours of YouTube content. Nvidia reportedly trained its AI using decades' worth of YouTube footage, and firms including Meta, Salesforce, and Apple have turned to public video data to power their own AI systems.
Meta, in particular, acknowledged using Facebook and Instagram posts to train its LLaMA models, prompting legal action from affected users. Meanwhile, Apple has faced pushback from publishers over AI licensing deals lacking clarity on attribution or compensation.
The issue has also reached emerging AI firms like Perplexity AI, recently valued at $14 billion, whose web-scraping practices for its AI-powered search engine have drawn fire from outlets like Forbes for allegedly reproducing articles without permission. Collectively, these cases highlight a growing ethical concern: are tech firms using publicly accessible content under “fair use” at the expense of content creators?
For creators, Veo 3 presents both promise and peril. While some welcome AI as a tool for simplifying production, others fear being sidelined by machine-generated content. Luke Arrigoni, CEO of Loti, said, “It’s plausible that they’re taking data from creators who’ve spent years building their channels, only to create a synthetic version—a poor facsimile—of their work.”
Technology firm Vermillio used its Trace ID tool to evaluate Veo-generated content. In one case, a Brodie Moss video scored 71% visual similarity and over 90% audio match, raising concerns that AI tools may replicate creators’ unique styles or likenesses. YouTube has introduced protective tools for public figures in collaboration with Creative Artists Agency, but smaller creators remain unprotected.
The economic consequences are also mounting. Over 25% of creators in the YouTube Partner Program earn revenue through Shorts. As AI-generated videos grow in volume, fears are mounting that algorithmic preference and audience fatigue could marginalise human-made content.
The debate is also unfolding in legal circles. The US Copyright Office’s 2024 report asserted that mass use of copyrighted material for AI training without compensation “is not fair use under current law.” Legal experts point to recent lawsuits, such as Disney and Universal’s case against Midjourney over the use of iconic characters, as early examples of what may become a wave of litigation against generative AI platforms.
To reassure users, Google has introduced indemnification for Veo 3, promising to cover legal liabilities arising from copyright disputes involving its outputs. YouTube also allows creators to request takedowns of infringing AI content. However, critics argue these safeguards are reactive, not preventive, and creators are still denied the choice to opt out entirely from AI training datasets.
Concerns are even more pronounced in the European Union, where data usage regulations are stricter. Experts suggest that Google’s reliance on creator content for AI training could breach upcoming AI Act provisions, which mandate transparency and traceability of training data.
The Veo 3 controversy has amplified industry-wide calls for ethical reform in AI development. Key proposals gaining traction include:
- Transparency and consent: Notifying users when their content is used in training datasets, with opt-out rights.
- Compensation models: Establishing royalty or licensing systems, similar to those used in music streaming.
- Regulatory oversight: Strengthening copyright and data protection laws for AI-era content use.
- Creator tools: Expanding access to detection and identity protection features like Trace ID to all creators.
As AI becomes more deeply integrated into content platforms, the clash between innovation and creator rights intensifies. The companies driving the AI revolution, including Google, Meta, and Apple, built their platforms on the backs of creators. Now, as they seek to redefine content creation through machine learning, creators are demanding a seat at the table.
Without meaningful protections and compensation, industry experts warn, the generative AI boom risks undermining the very communities that made platforms like YouTube successful in the first place.