In October 2025, the Ministry of Electronics and Information Technology (MeitY) published the draft Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2025 (draft) for stakeholder comments. The draft is a clear indication of the government’s commitment to legislate for an internet reshaped by artificial intelligence (AI). Its objectives are straightforward – to curb the harm that convincingly realistic synthetic media, including deepfakes, voice clones and AI-generated images, can inflict on privacy, reputation, elections and financial integrity. The MeitY’s explanatory note classifies these aims under the broader themes of labelling, traceability and accountability.
However, good regulatory intentions only go so far. The draft introduces three regulatory levers: a new definition, mandatory labelling and metadata requirements, and additional duties for significant platforms. As drafted, these risk overreach, inconsistent enforcement and significant compliance burdens.
The draft defines "synthetically generated information" as information "created, generated, modified or altered using a computer resource, in a manner that reasonably appears to be authentic or true". On its face, the definition captures deepfakes. However, it also catches everyday edits, AI-assisted copy corrections, filters, stylised animation and many chatbot outputs. The phrase "reasonably appears" is subjective. Without operational guidelines, the provisions are likely to be applied inconsistently.
The draft further provides that intermediaries enabling synthetic content must embed permanent identifiers that are visibly displayed and cover at least 10% of the surface area of images, or appear during the first 10% of the duration of any audio. The idea is novel but blunt. A blanket 10% metric is arbitrary. It will impair creative content, disrupt user interfaces and user experiences, and impose complex technical demands. Small platforms and startups will be disproportionately affected. Even large platforms will struggle with live streams and with mixed content that is part synthetic and part authentic.
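The arithmetic behind the 10% metric shows how mechanical it is: a full-width strip whose height is 10% of the image height covers exactly 10% of the surface area, and for audio the identifier must occupy the first 10% of the runtime, whatever the content. The sketch below is purely illustrative; the draft prescribes the percentages but not the function, placement or units assumed here.

```python
def visible_label_specs(width_px: int, height_px: int,
                        audio_seconds: float,
                        coverage: float = 0.10) -> tuple[int, float]:
    """Illustrative calculation of the draft's 10% labelling metric.

    A full-width strip of height = coverage * image height covers
    exactly `coverage` of the surface area (width * 0.10 * height).
    For audio, the identifier must play during the first `coverage`
    of the total duration. Function and parameter names are
    assumptions for illustration, not taken from the draft.
    """
    strip_height_px = max(1, round(coverage * height_px))
    audio_window_s = coverage * audio_seconds
    return strip_height_px, audio_window_s
```

On a 1920x1080 image, the rule demands a 108-pixel-high banner; on a one-hour live stream, an identifier through the first six minutes. The same formula applies to a thumbnail and a cinema master alike, which is the bluntness the article criticises.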
Some provisions of the draft are aimed at generative AI platforms on which synthetic content may be created or modified. However, because generative AI platforms do not simply transmit and store data on behalf of a user, a threshold question arises as to whether such generative AI platforms are indeed intermediaries.
Specific obligations and duties are imposed on significant social media intermediaries (SSMIs). The draft requires that SSMIs obtain user declarations regarding synthetically generated information, deploy reasonable and appropriate technical measures to verify those declarations, and prevent unlabelled publication. Verification poses technical challenges because AI detection tools are imperfect, particularly for audio featuring divergent accents, making errors unavoidable. Requiring users to declare synthetic content before every upload or live stream is also impractical, and it exposes SSMIs to legal risk for misdeclarations they cannot realistically verify.
The draft does contain worthy provisions. For example, it provides statutory backing for labelling, focuses on traceability and metadata, and removes from platforms the defence of ignorance. Even so, the balance between safety, freedom of expression and innovation has to be fine-tuned, and pragmatic changes will make the draft more effective without stifling innovation. These must begin with the definition of synthetically generated information, which should be narrower, focusing on deepfakes and other synthetic media that are misleading or likely to cause harm. Clear, objective standards would bring precision to what "reasonably appears" authentic. The rigid 10% labelling rule should be replaced with a risk-based approach, in which metadata ensures back-end traceability by default and visible labels are required only when content crosses a defined threshold.
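The risk-based alternative described above can be sketched in code: provenance metadata travels with every piece of synthetic content by default, while a visible label is triggered only past a defined risk threshold. The field names, threshold value and function names below are assumptions for illustration; the draft prescribes none of them.

```python
import hashlib
from datetime import datetime, timezone

def provenance_record(content: bytes, tool: str, model: str) -> dict:
    """Back-end metadata for traceability by default.

    A sketch of the kind of record a platform could attach to
    synthetic content; the schema is hypothetical, not drawn
    from the draft or any standard.
    """
    return {
        "sha256": hashlib.sha256(content).hexdigest(),  # content fingerprint
        "generator": {"tool": tool, "model": model},     # how it was made
        "created_utc": datetime.now(timezone.utc).isoformat(),
        "synthetic": True,
    }

def needs_visible_label(risk_score: float, threshold: float = 0.7) -> bool:
    """Risk-based gating: show a visible label only above a defined
    threshold (the 0.7 figure is an arbitrary placeholder)."""
    return risk_score >= threshold
```

Under this model, a stylised filter or chatbot reply would carry traceable metadata but no intrusive banner, while a realistic deepfake of a public figure would cross the threshold and be visibly labelled, which is closer to the proportionality the article argues for.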
Regulation of synthetic media is urgently needed. The MeitY has identified the problem correctly. However, the challenge is in the solution. Good policy protects citizens from the most harmful uses of AI without imposing overly restrictive constraints on creativity, speech and competition. The draft starts the conversation, but it should not be the final chapter.
Ashima Obhan is a senior partner and Arnav Joshi is an associate at Obhan & Associates
Obhan & Associates
Advocates and Patent Agents
N – 94, Second Floor
Panchsheel Park
New Delhi 110017, India
Contact details:
Ashima Obhan
T: +91 98 1104 3532
E: email@obhans.com | ashima@obhans.com