New Delhi: The Parliamentary Standing Committee on Communications and Information Technology called for stronger accountability among content creators, both human and AI-driven, and recommended licensing, labelling and stricter fines to curb the spread of misinformation across media platforms.
The Committee, chaired by Dr Nishikant Dubey, presented its 22nd report titled “Review of Mechanism to Curb Fake News” in the Lok Sabha.
The report observed that the digital boom has turned almost every individual into a creator of information, with social media enabling rapid and unchecked dissemination of content. Traditional editorial filters, it said, have weakened, allowing misinformation to spread through posts, videos and influencer-driven platforms.
Individual creators and the rise of unverified content
Stakeholders told the Committee that creators today range from established media outlets to independent influencers, YouTubers and digital storytellers, many of whom operate without verification mechanisms. The Editors Guild of India (EGI) said fake news is often spread by individuals or groups who deliberately manipulate content to mislead or harm.
The News Broadcasters and Digital Association (NBDA) added that misinformation also travels through creators incentivised by engagement and advertising revenue, not accuracy.
The report underlined that misinformation disguised as authentic news can sway public opinion, harm reputations and disrupt democratic processes. It said creators should bear direct responsibility for the content they publish or promote.
Proposals for licensing and fines
To bring order to the fast-growing creator ecosystem, the Committee proposed licensing norms for content creators and mandatory labelling for all AI-generated or manipulated media.
It said such a system would help differentiate verified content from synthetic material and make creators legally accountable for deliberate misinformation.
The panel also noted that penalties under existing media laws are inconsistent and inadequate.
The Press Council of India can only censure print publications, while television channels face advisories or warnings. Under the Information Technology Act, digital publishers and creators can face limited fines or blocking of content.
To address this, the Committee recommended graded penalties modelled on those of the News Broadcasting and Digital Standards Authority (NBDSA), which can impose fines of up to Rs 25 lakh or 1% of a network’s annual turnover for repeated violations.
It suggested escalating consequences for creators and publishers who spread or monetise fake news, including suspension or cancellation of accreditation and blocking of repeat offenders.
The AI factor in fake news
While focusing on creators, the report also flagged the growing role of Artificial Intelligence (AI) in amplifying misinformation. It said AI tools and deepfake technology have made it easier to create realistic videos, audio clips and images that mislead audiences and erode trust.
The Ministry of Information and Broadcasting (MIB) told the Committee that AI could assist in detecting and flagging fake content but cannot yet replace human verification. The Editors Guild of India suggested combining AI detection systems with editorial checks to avoid errors and overreach.
The Committee acknowledged the dual role of AI, as both a problem and a potential solution. It recommended developing AI-driven detection tools with human oversight and urged platforms to label synthetic or AI-generated media clearly.
Government action on deepfakes
The Ministry of Electronics and Information Technology (MeitY) informed the Committee that it has formed a nine-member panel to study deepfakes and funded two research projects, one on fake speech detection and another on deepfake video analysis. Prototype tools developed by C-DAC centres in Hyderabad and Kolkata are currently under testing.
MeitY has also issued advisories reminding social media intermediaries to monitor and remove deepfake and synthetic content under the IT Rules, 2021.
Algorithmic responsibility and platform oversight
The Committee said the spread of fake news is also linked to how platforms promote sensational content through algorithms that prioritise engagement. It backed reviewing the “safe harbour” clause in the IT Act that protects intermediaries from liability. The panel said platforms must share responsibility for content accuracy and disclose how their algorithms recommend or amplify posts.
It also supported introducing independent audits of social media platforms to ensure transparency in handling misinformation.
Call for coordinated response and media literacy
The Committee asked the government to strengthen coordination between the Ministry of Information and Broadcasting, MeitY and the Department of Telecommunications to create a unified framework against fake news. It suggested that the Ministry of Education explore incorporating media literacy into school curricula to help young users identify and question misinformation.
In its concluding observations, the Committee said creators, both individuals and AI-driven systems, must be held accountable for the content they produce and share. It emphasised that misinformation, whether spread by humans or machines, poses a direct risk to public trust and democratic communication.
The panel recommended the introduction of licensing norms, clearer labelling, higher fines and AI-assisted monitoring as part of a broader legal and technological roadmap to curb fake news in India.