New Delhi: Instagram chief executive Adam Mosseri has warned that the rapid rise of artificial intelligence is making it increasingly difficult for users to distinguish real content from synthetic material, potentially reshaping how trust and authenticity function online.
In a series of posts shared towards the end of the year on Instagram and Threads, Mosseri reflected on how AI-generated images and videos are altering long-established assumptions about visual media. He suggested that the traditional cues people relied on to assess credibility are weakening as AI tools become more accessible and sophisticated.
“Authenticity is becoming infinitely reproducible,” Mosseri wrote, describing a shift that he believes is already underway as platforms head into 2026.
According to Mosseri, creative qualities that once set individuals apart are no longer exclusive. “Everything that made creators matter, the ability to be real, to connect, to have a voice that couldn’t be faked, is now accessible to anyone with the right tools,” he said.
He noted that advances in deepfake technology and generative AI are eroding the long-held assumption that photos and videos represent direct records of reality.
Mosseri linked this development to earlier changes brought about by the internet, when falling distribution costs shifted influence away from institutions and towards individuals.
“Individuals, not publishers or brands, established that there’s a significant market for content from people,” he observed. At the same time, declining trust in institutions has driven audiences towards content shared by creators they personally follow and trust.
While criticism of low-quality AI output is widespread, Mosseri cautioned against dismissing all AI-generated content. “There’s a lot of amazing AI content,” he said.
However, he acknowledged that even the most polished AI visuals currently have identifiable characteristics. “Even the quality AI content has a look,” he noted, citing overly smooth skin and highly processed visuals.
He added that these signs are unlikely to last. “That will change. We’re going to see more realistic AI content.”
Rather than sidelining creators, Mosseri suggested that the growing volume of AI material could increase the value of distinctive voices. “The bar is shifting from ‘can you create?’ to ‘can you make something that only you could create?’” he wrote.
He also challenged how Instagram itself is perceived. For many users, particularly older ones, the platform remains associated with curated photo feeds. “That feed is dead,” Mosseri said, explaining that personal sharing has largely moved into private messages. Blurry photos and shaky videos of daily experiences, shoe shots and unflattering candids, he said, now dominate how users document their lives.
This more unpolished style, he said, is increasingly visible in public content as well. In that context, Mosseri argued that parts of the imaging industry are prioritising the wrong aesthetic. “They’re competing to make everyone look like a pro photographer from 2015,” he said, adding that flawless visuals are no longer compelling. “Flattering imagery is cheap to produce and boring to consume.”
Instead, he suggested that imperfection has taken on a new role. “In a world where everything can be perfected, imperfection becomes a signal.” He described rawness as both expressive and defensive. “Rawness isn’t just aesthetic preference anymore; it’s proof. It’s defensive. A way of saying, ‘This is real because it’s imperfect.’”
Mosseri acknowledged, however, that this distinction may not hold indefinitely. As AI improves, even imperfect aesthetics could be convincingly simulated. When that happens, he believes the emphasis will shift again. “We’ll need to shift our focus to who says something instead of what is being said.”
Reflecting on broader behavioural change, Mosseri said users are likely to move from assuming content is genuine to approaching it with scepticism. “This will be uncomfortable. We’re genetically predisposed to believing our eyes,” he wrote.
He said platforms including Instagram will continue efforts to detect and label AI-generated material, though he acknowledged the limits of that approach. “It will be more practical to fingerprint real media than fake media,” he suggested, referencing methods such as cryptographic signing at the point of capture.
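The idea of fingerprinting real media at the point of capture can be illustrated with a minimal sketch. Everything here is an assumption for illustration, not Instagram's actual implementation: real provenance systems (such as the C2PA standard) use public-key signatures embedded in metadata, whereas this toy example uses a keyed HMAC with a hypothetical per-device secret.

```python
import hashlib
import hmac

# Hypothetical per-device secret, provisioned at manufacture (illustrative only;
# real systems would use an asymmetric key pair so verifiers never hold the secret).
DEVICE_KEY = b"example-device-secret"

def sign_at_capture(media_bytes: bytes) -> str:
    """Return a hex fingerprint binding the media bytes to the capture device."""
    return hmac.new(DEVICE_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify(media_bytes: bytes, fingerprint: str) -> bool:
    """Check that the media is unchanged since capture (constant-time compare)."""
    expected = sign_at_capture(media_bytes)
    return hmac.compare_digest(expected, fingerprint)

photo = b"\x89PNG...raw sensor data..."
tag = sign_at_capture(photo)
print(verify(photo, tag))            # unmodified media verifies: True
print(verify(photo + b"edit", tag))  # any alteration breaks the fingerprint: False
```

The asymmetry Mosseri describes falls out of this design: a signature can prove a file is an unaltered capture, but the absence of one proves nothing, which is why fingerprinting real media is more practical than detecting fakes.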
Ultimately, Mosseri argued that technical labels alone will not resolve questions of trust. “We need to surface much more context about the accounts sharing content so people can make informed decisions,” he wrote.
In an online environment defined by what he described as “infinite abundance and infinite doubt”, Mosseri said creators who retain credibility will be those able to sustain trust through consistency, transparency and a recognisable personal voice.