Creators flag compliance load as AI labels turn “back-end” tool use into “front-end” disclosure

Anmol Sachar and chartered accountant-creator Isha Jaiswal say labels will reshape workflows and contracts, while platform verification and a three-hour takedown timeline could hit time-sensitive posts and smaller creators

Shilpashree Mondal

New Delhi: India’s creator economy is preparing for stricter disclosure and faster takedowns after the Ministry of Electronics and Information Technology (MeitY) notified amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, bringing “synthetically generated information” within the due diligence framework. 

The changes come into force on February 20, 2026, and require prominent labelling and verification measures on platforms, alongside a three-hour takedown timeline when content is flagged by a competent authority or court.

The explanatory note to the amendments sets out the intent: create a legal basis for labelling, traceability and accountability for AI-generated or modified information, including deepfakes, amid concerns over misinformation, fraud and reputational harm.

Creators said the biggest shift is that AI use, earlier embedded quietly in production, now becomes a visible signal on the final post. 


“Earlier, AI was invisible in the workflow. Now it becomes visible in the final product. That alone changes creative decisions,” said creator Anmol Sachar. “If a label is going to sit on my content, I have to ask myself whether AI is actually adding value to the idea or just complicating how the audience will read it.”


Isha Jaiswal, a chartered accountant and digital content creator, said the rule will push creators towards a more structured process even before the camera rolls. “We’ll need to classify content intent before creation,” she said, arguing that Indian formats like Hinglish make labelling decisions harder. 

She flagged “tool segregation” as another pressure point, because “most affordable AI tools used by Indian creators bundle these features together.”

Under the amendments described in MeitY’s note, intermediaries enabling the creation or modification of synthetically generated information must ensure the content is labelled or embedded with a permanent unique identifier, and the label must be displayed prominently or made audible. 

The note also said the label should cover at least 10% of the surface area of a visual display, or appear during the initial 10% of an audio clip’s duration.

Creators expect brand briefs and contracts to get tighter as liability and reputational risk shift upstream to campaign planning. “Yes, brands will ask for more clarity because nobody wants to be the one blamed later,” Sachar said. “Creators will adapt, but the industry needs to be careful not to overcomplicate something that thrives on fast, intuitive workflows.”

Jaiswal said brands will “absolutely” seek formal clauses, but predicted confusion early on. 

“Indian brand teams themselves don’t fully understand AI yet. There’ll be a 6-month chaos period of brands asking for disclosures without knowing what to do with them,” she said. She added that while top creators will adapt quickly, smaller and regional creators could struggle with compliance overhead. “Most creators are one-person armies managing shoot, edit, post, and now compliance documentation,” she said.

On publishing, both creators flagged delays and friction as platforms move to user declarations and verification. MeitY’s explanatory note stated that significant social media intermediaries will be required to obtain a user declaration on whether uploaded information is synthetically generated, and deploy reasonable technical measures to verify such declarations.

“There will definitely be some collateral damage. Platforms are evolving every day. Algorithms aren’t perfect, and creators will get caught in the middle,” Sachar said. “The real impact is on time-sensitive content. If your reel is about a moment, and it gets stuck in review, the moment is already gone.”

Jaiswal said the issue is amplified in India’s real-time commentary categories. “For time-sensitive content (Budget reactions, RBI policy analysis, SEBI changes), this is catastrophic. If I can't publish within 30 minutes, I've lost relevance,” she said. “In India's price-sensitive creator economy, where CPM rates are already 1/10th of US rates, even a 20% reach drop means many creators fall below earning thresholds,” she added.

She also warned of a “language barrier in moderation” that could drive false positives for non-English formats. “Most AI detection tools are trained on English. How will they handle Hinglish? Code-switching? Regional languages? I fear Indian language creators will face disproportionate false positives,” she said.

The three-hour takedown provision is also prompting creators to rework how they handle live moments and clipped formats. Platforms will have to act within three hours on content flagged by a competent authority or court, a tighter window than earlier timelines.

“In live campaigns, three hours is basically no time. One misinterpretation and your content is gone before you can course-correct,” Sachar said. “It doesn’t make me less creative, but it makes me more cautious about formats that can be clipped out of context.”

Jaiswal said the timeline could reshape how creators plan spontaneity and risk. “The three-hour window will be devastating for India's live-content culture,” she said, citing moments such as Budget, IPL integrations and festival launches. She warned that takedowns during peak moments can directly hit renewals because “promised reach” is not delivered. 

She also argued it may “formalise the divide” between creators with agency and legal backing and solo creators, and push more creators into exclusive agency deals for compliance support.

On trust, creators argued that disclosure itself may not be the problem, but the absence of it can be. “I don’t think labels kill trust; deception does. If audiences know upfront that something used AI, they’re more forgiving,” Sachar said. “The backlash only happens when people feel they were tricked into believing something was real.”

Jaiswal said transparent labels could normalise over time, with varying reactions across cohorts. She argued Indian audiences have already lived through misinformation and morphed content, so “mandatory AI labels might actually increase trust through transparency.”

She also said India’s “value-over-authenticity culture” could mean labels do not automatically hurt educational creators, while metro audiences may scrutinise AI-labelled posts more than Tier 2/3 audiences. “When ASCI mandated #ad disclosures, everyone predicted doom. Today it's normalised. I expect a 12-month adjustment, then business as usual,” she said.

Both creators also drew a practical line between AI assistance and content that needs clear disclosure. “If AI is helping me execute my idea faster, it’s an assistance. If AI is changing what the audience believes happened, it needs disclosure. That’s the line for me,” Sachar said.

Jaiswal said a binary label may be too blunt for India’s multilingual workflows. She proposed a more granular taxonomy that states where AI was used, such as for script assistance, audio enhancement or translation, similar to “FSSAI labels.” 

“Instead of binary ‘AI/Not AI,’ Indian creators need culturally contextualised taxonomy,” she said, suggesting disclosures like “This content used AI for: [Script assistance 30%] [Visual generation 0%] [Audio enhancement 60%] [Hindi-English translation 40%]”.
