New Delhi: X, the social media platform formerly known as Twitter, is preparing to introduce a “Made with AI” label that allows creators to disclose when their posts contain artificially generated or manipulated content.
The feature was first identified by app researcher Nima Owji, who observed a new toggle in the platform’s posting interface. The option enables users to indicate whether text, images or videos in a post were created or altered using artificial intelligence tools.
BREAKING: X is working on a "Made with AI" label!
Users will soon be able to label their posts as AI-generated content!
Most probably, not labeling them will go against the X rules when this feature launches! pic.twitter.com/CMXafCUjTZ
— Nima Owji (@nima_owji) February 22, 2026
The move comes amid increasing difficulty in distinguishing authentic content from synthetic material online. AI-generated images, edited videos and machine-written text have become widespread across social media platforms, prompting calls for clearer labelling and transparency.
Under the proposed system, creators can voluntarily activate the label when publishing AI-assisted content. Screenshots shared by Owji show a visible marker attached to posts once the option is selected. However, the reliance on user disclosure raises questions about enforcement, as creators could choose not to apply the label.
The platform already applies watermarks to images and videos produced through its chatbot Grok. In January 2026, X also introduced a “Manipulated Media” tag designed to automatically flag deceptive edits that could cause harm or mislead viewers.
According to Owji, posts that contain synthetic media but are not labelled may eventually be treated as violations of the platform’s rules, suggesting that enforcement measures could accompany the new disclosure feature.
The development comes at a time when regulators are pressing technology companies to introduce stronger provenance signals for AI-generated content. In India, the Ministry of Electronics and Information Technology has directed platforms such as Meta, YouTube and Facebook to clearly label AI-generated material and include embedded identifiers rather than relying solely on visible tags.
Under the revised IT compliance framework, platforms must remove deepfake or AI-generated content within three hours of receiving a government or court order. Companies are also required to deploy automated tools capable of identifying and preventing the spread of illegal or deceptive synthetic media.
Several platforms have already introduced similar disclosure mechanisms. Meta, for instance, applies “Made with AI” labels to images, audio and video where detection signals or user disclosures indicate the presence of AI-generated elements.