
India Tightens Rules on Deepfakes and AI-Generated Content


Generative artificial intelligence (GenAI) has transformed online media, making content creation fast and accessible and increasing productivity and outreach. It has also fuelled misinformation, identity-related fraud and non-consensual synthetic media, better known as deepfakes.

The use of deepfakes to spread false information gained widespread attention in 2023, when a deepfake video of the Indian celebrity Rashmika Mandanna went viral on social media, causing great concern among others in the public eye. The prime minister even referred to such misuse as a crisis. Since then, well-known figures have sought protection, usually from the Delhi High Court, against AI-generated content such as AI chatbots, AI-generated videos and pornographic material created with deepfake technology. Courts have granted relief, holding content creators responsible and directing intermediary platforms to take immediate corrective action.


To prevent the misuse of such technology, the government has amended the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (IT rules), in relation to synthetically generated information. The amendment, which came into force on 15 November 2025, addresses the threats posed by the misuse of artificially generated content, including deepfakes, misinformation and other unlawful content. It strengthens the due diligence obligations in rule 3 of the IT rules for social media intermediaries (SMIs) and significant social media intermediaries (SSMIs), as defined in rules 2(1)(w) and 2(1)(v) respectively.

The amendment defines synthetically generated information as “information that is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that appears reasonably authentic or true”. This is India’s first legislative definition of such content and a welcome step, comparable to the European Union’s AI Act, which provides similar identification measures and safeguards. China has also recently rolled out AI labelling rules, under which content providers must display clear labels identifying AI-generated content.

The amendment requires platforms that allow the creation and dissemination of AI content to ensure that such content is prominently labelled or embedded with permanent, unique identifiers or metadata. For visual content, the label or disclaimer must cover at least 10% of the total surface area; for audio content, the warning must play during the first 10% of the total duration.


SSMIs must require users to declare whether uploaded content is synthetically generated, and must have in place “reasonable and appropriate technical measures”, including automated tools or other suitable mechanisms, to verify the accuracy of such declarations. Where the declaration or technical verification confirms that the content is synthetically generated, SSMIs must ensure that a disclaimer is clearly and prominently displayed by way of an appropriate label or notice.

In a significant change, the removal of synthetically generated content no longer depends on receipt of a court order or a notification from an appropriate government agency; SSMIs must now make reasonable efforts to remove such content themselves. SSMIs that fail to comply may lose their safe harbour protection under section 79 of the Information Technology Act, 2000 (IT act). This ensures that no synthetically generated information is published without a declaration. Various social media platforms had already adopted such disclaimers voluntarily, but making them mandatory under the intermediary guidelines should prove more effective.

The amendment is welcome because it places responsibility squarely on SMIs and SSMIs. However, leaving assessments to social media platforms may lead to varied standards. The distinction between deepfakes and creative content made with AI is sometimes unclear, and an inter-ministerial coordinating body should be established to address such questions. This body should also consider requiring AI content creators to be licensed and AI-generated videos and content to be labelled.

Guidelines under the IT act are difficult to implement because the purpose of the act is to provide intermediaries with safe harbour protection. Precise legal and technological standards are urgently needed to identify and prosecute those responsible for spreading AI-generated deepfakes.

Dhruv Anand is a partner and Dhananjay Khanna is an associate at Anand and Anand

Anand and Anand
First Channel Building
Plot No. 17A
Sector 16A, Film City
Noida, Uttar Pradesh 201301, India
https://www.anandandanand.com/
Contact details:
T: +91 120 405 9300
