NEW DELHI — Artificial intelligence-generated images and videos are fueling a new wave of anti-Muslim hate across India’s social media ecosystem, according to a new report by the Center for the Study of Organized Hate (CSOH), which warns that AI has become a powerful amplifier of anti-minority narratives.
CSOH, a Washington D.C.-based non-profit, non-partisan think-tank that researches organized hate, published “AI-Generated Imagery and the New Frontier of Islamophobia in India” on Sept. 29 on its website. The 60-page report identified 1,326 AI-generated hateful posts targeting Muslims in India, published by 297 public accounts across X, Instagram and Facebook between January 2024 and April 2025.
“That’s only a microcosm of what’s happening at a much larger scale across digital platforms in India,” Raqib Hameed Naik, founder and executive director of CSOH, told Nikkei Asia.
The report says that while activity was minimal in 2023, it spiked sharply in mid-2024 — a surge the study links to the growing accessibility of AI image-generation tools such as Stable Diffusion, Midjourney and DALL-E.
“AI-generated images are being weaponized to dehumanize Muslims, propagate conspiracy theories, aestheticize violence, and normalize misogynistic and Islamophobic narratives,” the report says.
The report has drawn attention in India for revealing how AI is accelerating the spread of communal propaganda online. It also warns of a rise in sexualized AI-generated images of Muslim women.
Its release coincides with "I Love Muhammad" protests that began in the northern city of Kanpur in September and soon spread across India. Tensions reportedly flared when members of a right-wing Hindu group objected to an "I Love Muhammad" banner. Police in several states have since charged thousands of Muslims with unlawful assembly, though organizers insist the rallies were peaceful expressions of faith.
The CSOH report said that, in present-day India, even “common incidents of protests or localized conflicts in neighborhoods can quite easily be reframed through the lens of ethnic or sectarian strife by bad-faith actors.”
It added, “AI-generated images can be conveniently mobilized along these lines.”
The findings come amid growing concern that India lacks both the legal and cultural capacity to address the flood of AI-generated disinformation.
Srinivas Kodali, a digital privacy rights researcher, told Nikkei Asia that the real danger lies in how deeply such content can shape perceptions and beliefs across different communities. “The effects of AI-generated media aren’t linear,” Kodali said. “They vary depending on who’s viewing the content and how they interpret it.”
According to the Indian government’s 2025 roadmap, AI is to be made “open, affordable and accessible” to ensure that innovation “leads inclusive growth, empowers citizens and builds a globally-competitive digital economy.”
Kodali added that governments have long justified regulating speech to prevent harm, but AI blurs those lines further, functioning as both a creative and a deceptive tool. “There are traditional forms of harmful speech — defamation, misinformation, hate speech — but AI has introduced a new layer of complexity,” he said.
A core issue, Naik said, lies with the “upstream model developers — the companies building these AI systems … These models have been trained on vast amounts of unfiltered data, including harmful conspiracy theories and hate content,” he said. “Anyone can prompt them to produce hateful images, and they’ll do it — instantly.”
The CSOH report concludes with a list of nine recommendations, including ones for policymakers and AI model developers. Such companies “should implement and continually refine strong detection, reporting, and moderation systems to identify misuse in real time,” it said.
On Wednesday, India's Ministry of Electronics and Information Technology reportedly proposed new rules requiring social media platforms to have users declare any AI-generated content.
This article was published on asia.nikkei.com.

