In the age of generative AI, disinformation stretches beyond mere falsehoods – it is about storytelling. Viral posts or conspiracy theories spread not only because they contain inaccuracies, but because they resonate emotionally with audiences.
Recent research indicates that while AI can help detect these manipulative narratives, human judgment and emotional literacy remain critical to understanding – and countering – the digital spread of misleading content.
AI authorship
A study by Zhao, Zhou, and Wang tested 1,600 participants across three experiments to measure how AI authorship labels and content type affect readers’ responses.
They found that when news was labelled as AI-generated, it consistently received lower cognitive and affective evaluations – irrespective of factual accuracy.
However, emotionally charged content, including conspiratorial or partisan news, still elicited stronger engagement and sharing intentions even when attributed to AI.
The authors remarked that “Emotional resonance can override source scepticism […], with new challenges especially in curbing emotionally manipulative AI-generated content”.
This illustrates a crucial point: emotion can override scepticism about AI authorship, so simple labelling alone is not enough to halt the viral spread of manipulative content.
Source attribution
Luttrell, Davis, and Welch examined source attribution in AI-era journalism and found that traditional methods are increasingly inadequate. AI-generated text can mimic human writing so convincingly that automated systems often fail to detect it.
Their research emphasises that no single technical fix can secure journalism against AI-era disinformation. Robust defence requires a layered strategy combining detection, provenance verification, and human-in-the-loop editorial judgement.
The authors point to the Theory of Content Consistency (ToCC) as a framework for AI developers “to design more nuanced evaluation sets grounded in journalistic values”.
This highlights the need for human editorial oversight alongside AI detection to identify subtle manipulations.
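As a rough illustration of what such a layered defence could look like in practice, the minimal Python sketch below shows a triage step that checks provenance first, then runs a detector, and escalates doubtful items to a human editor. All names, thresholds, and the placeholder detector are assumptions made for the example, not details from the study.

```python
# Illustrative sketch only: a layered review step in the spirit of
# "detection + provenance verification + human-in-the-loop".
# Function names and the 0.8 threshold are hypothetical.
from dataclasses import dataclass

@dataclass
class Article:
    text: str
    declared_source: str   # e.g. a newsroom or wire byline
    provenance_ok: bool    # e.g. a verified content credential (assumed input)

def ai_likelihood(text: str) -> float:
    """Stand-in for a statistical AI-text detector (stylometric or model-based)."""
    return 0.5  # a real detector would return a calibrated probability

def review(article: Article) -> str:
    """Layered triage: provenance first, then detection, then human escalation."""
    if not article.provenance_ok:
        return "escalate: provenance cannot be verified"
    if ai_likelihood(article.text) > 0.8:
        return "escalate: likely machine-generated, route to a human editor"
    return "pass: continue routine editorial checks"

print(review(Article("...", "Example Wire", provenance_ok=True)))
```

The point of the layering is that no single check decides the outcome: a failed provenance check or a high detector score only triggers human review, it never replaces it.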
Journalists’ perspectives
A survey of 504 journalists in the Basque Country by Peña-Alonso, Peña-Fernandez, and Meso-Ayerdi found that almost 90 per cent believe AI will significantly increase disinformation risks.
The survey also showed that experienced journalists perceive higher risks than less-experienced colleagues, and that journalists who frequently use AI tools have more nuanced perceptions, recognising both potential benefits and risks.
The authors conclude that although concern about AI-driven disinformation is nearly unanimous, experience and familiarity with AI influence perception, highlighting the importance of professional literacy and training.
Journalists highlighted challenges, including detecting deepfakes, falsified datasets, and content that blends fact with manipulative narrative – a reminder that human expertise remains indispensable.
Weaponised storytelling
Research at Florida International University shows how AI can detect “weaponised storytelling”, where disinformation uses persona cues, culturally loaded symbols, and narrative sequences to manipulate audiences.
AI tools analyse usernames and handles to infer credibility or affiliation, post sequences to detect manipulated timelines, and culturally specific symbols to uncover subtle patterns of influence that traditional fact-checking might miss.
This demonstrates that disinformation is not just about false facts, but about the way stories are told to evoke trust, fear, or other emotions.
The authors illustrate their point with the following sentence: “The woman in the white dress was filled with joy.”
While the phrase evokes a happy image in the West, in parts of Asia it carries the opposite connotation, because white symbolises death or mourning.
The research found that training AI on diverse cultural narratives enhances its sensitivity to such nuances.
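To make the idea concrete, the toy Python sketch below extracts the kinds of signals described above: a persona cue from a handle, a burst-posting check on a timeline, and a culture-dependent reading of a loaded symbol. This is not the Florida International University method; the handle pattern, the two-minute threshold, and the symbol table are invented for the example.

```python
# Toy illustration of narrative-signal extraction; all rules are hypothetical.
import re
from datetime import datetime, timedelta

# Invented lookup: the same symbol read against different cultural contexts.
SYMBOL_CONNOTATION = {
    ("white dress", "western"): "celebration",
    ("white dress", "east_asian"): "mourning",
}

def persona_cue(handle: str) -> str:
    # Handles ending in long digit strings often indicate bulk-registered accounts.
    return "suspicious" if re.search(r"\d{4,}$", handle) else "unremarkable"

def burst_posting(timestamps: list[datetime]) -> bool:
    # Many posts inside a short window can signal a coordinated, manipulated timeline.
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return bool(gaps) and max(gaps) < timedelta(minutes=2)

def read_symbol(symbol: str, audience: str) -> str:
    return SYMBOL_CONNOTATION.get((symbol, audience), "unknown")

handle = "news_truth_48291734"
times = [datetime(2025, 1, 1, 12, 0) + timedelta(seconds=30 * i) for i in range(6)]
print(persona_cue(handle), burst_posting(times), read_symbol("white dress", "east_asian"))
```

Even in this toy form, the signals are only cues: a human analyst still has to decide whether a suspicious handle, a rapid posting burst, and a culturally loaded symbol add up to a manipulative narrative.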
Transparency and literacy
Across these studies, a consistent theme emerges: AI is powerful, but it cannot replace human judgement.
While AI can scan thousands of posts, detect stylistic markers, and flag inconsistencies, humans are needed to interpret emotional nuance, cultural context, and ethical considerations.
There are also limits to binary AI labels. Emotional content drives affective and behavioural responses more than authorship cues, making simple disclosure inadequate.
Platforms, policymakers, and journalists must combine AI detection with human analysis, editorial judgement, and public digital literacy.
In a digital ecosystem where stories travel faster than verification, understanding the interplay between AI, emotion, and narrative is critical.
Fact-checking alone is no longer sufficient; combating disinformation requires a multi-layered approach that combines technological tools with human expertise and emotional intelligence. Failing to do so risks letting manipulation, rather than facts, set the agenda.
(BM)