
India’s IT Intermediary Rules 2026 Amendment on AI‑Generated Content: A Legal Analysis

  • Writer: Tuhin Batra


I. Introduction


The rapid proliferation of artificial intelligence‑generated content, particularly deepfakes, synthetic audio‑visual material, and algorithmically altered images, has posed unprecedented regulatory challenges for governments worldwide. In India, these concerns have translated into a significant regulatory intervention through amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“IT Rules”). The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, represent the Indian government’s most direct attempt to bring AI‑driven misinformation and impersonation risks within the statutory framework of intermediary regulation under the Information Technology Act, 2000.


This article examines the legal architecture of the amendment, its substantive obligations on intermediaries, its interaction with Section 79 safe‑harbour protections, and the broader constitutional and compliance implications for platforms, AI developers, and users.


II. Regulatory Context and Legislative Basis


The IT Rules are issued under Sections 69A, 79, and 87 of the Information Technology Act, 2000. While the original 2021 Rules focused on due diligence obligations, grievance redressal, and digital media ethics, they did not explicitly address AI‑generated or synthetic content. The amendment fills this gap by:

  • introducing an express regulatory category for synthetically generated information;

  • imposing heightened transparency and takedown obligations; and

  • tightening enforcement timelines in cases involving AI‑enabled harm.


Notably, the amendment does not amend the IT Act itself but operates through delegated legislation, thereby raising important questions of proportionality, reasonableness, and constitutional validity, particularly under Articles 14 and 19(1)(a) of the Constitution of India.


III. Definition and Scope of AI‑Generated Content


A key feature of the amendment is the formal recognition of synthetically generated information. Broadly, this covers content that is:

  • created, modified, or altered using computer resources or algorithms;

  • presented in a manner that makes it appear authentic, realistic, or attributable to a real person or event; and

  • capable of misleading users as to its origin or veracity.


The breadth of this definition is intentional. It allows the Rules to capture not only malicious deepfakes but also AI‑altered images, cloned voices, and manipulated videos, even where no explicit intent to deceive is proven. From a regulatory perspective, this shifts the focus from intent to effect and risk.


IV. Due Diligence Obligations on Intermediaries


The most legally consequential aspect of the amendment lies in the expansion and recalibration of due diligence obligations imposed on intermediaries. Unlike earlier iterations of the IT Rules, which largely relied on reactive notice-and-takedown mechanisms, the amended framework introduces proactive, continuous, and technology-facing compliance duties specifically tailored to AI-generated content.


A. Mandatory Labelling and Disclosure of AI-Generated Content


Intermediaries are now under a statutory obligation to ensure that AI-generated or synthetically altered content is clearly, prominently, and continuously identifiable as such to end users. This obligation operates irrespective of whether the content is otherwise lawful, satirical, artistic, or informational in nature.


From a legal standpoint, this marks a decisive shift from regulation based on the legality of content to regulation based on transparency about its origin.


The key components of this obligation include:


User-facing disclosures: Platforms must display visible labels or notices at the point of access or consumption, indicating that the content has been generated or materially altered using AI tools.


Embedded traceability mechanisms: Content must carry persistent metadata, watermarks, or unique digital identifiers that can establish its synthetic origin, even if re-uploaded or reshared.


Integrity safeguards: Intermediaries are required to take reasonable steps to prevent the removal, manipulation, or circumvention of such identifiers.


This effectively introduces a chain-of-custody concept for digital content, where origin transparency must survive downstream dissemination.
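By way of illustration only, the sketch below shows one way a platform might attach a persistent provenance record to a media file and verify it after a re-upload. The record schema, the sidecar-JSON approach, and the helper functions are assumptions made for this example; the amended Rules do not prescribe any particular technical implementation.

```python
# Illustrative sketch only: a simplified provenance record for synthetic media.
# The schema, field names, and sidecar-file approach are assumptions for this
# example; the amended Rules do not mandate any specific format.

import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path


def content_hash(path: Path) -> str:
    """Return a SHA-256 digest of the media file's bytes."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def attach_provenance(media: Path, generator: str, declared_by_user: bool) -> Path:
    """Write a sidecar JSON record identifying the content as synthetically generated."""
    record = {
        "content_sha256": content_hash(media),
        "synthetically_generated": True,
        "generator_tool": generator,            # e.g. name of the AI tool used
        "user_declaration": declared_by_user,   # declaration captured at upload
        "labelled_at": datetime.now(timezone.utc).isoformat(),
    }
    sidecar = media.parent / (media.name + ".provenance.json")
    sidecar.write_text(json.dumps(record, indent=2))
    return sidecar


def verify_provenance(media: Path, sidecar: Path) -> bool:
    """Check that a re-uploaded file still matches its provenance record."""
    record = json.loads(sidecar.read_text())
    return record.get("content_sha256") == content_hash(media)
```

A detachable, hash-based record of this kind would not survive re-encoding or cropping of the file, which is precisely why the Rules speak of embedded watermarks and persistent metadata rather than records that can simply be stripped away.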


B. User Declarations at the Point of Upload


Social media intermediaries are further required to obtain affirmative declarations from users at the time of content upload, disclosing whether the content is AI-generated or synthetically modified. While framed as a disclosure requirement, this provision has deeper legal implications.


First, it creates a self-reporting obligation on users, shifting part of the compliance burden away from platforms. Second, it provides intermediaries with a documentary basis to demonstrate good faith and due diligence in the event of regulatory scrutiny or litigation.


However, the Rules do not treat user declarations as conclusive. A false declaration does not absolve the intermediary of responsibility where reasonable technological measures could have detected synthetic content.
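To make the interaction between user declarations and platform-side checks concrete, the following sketch shows a hypothetical upload workflow in which the declaration is recorded as evidence of due diligence, but an automated classifier can still require labelling of an undisclosed synthetic upload. The detector, its threshold, and the data fields are invented for illustration and are not drawn from the Rules.

```python
# Illustrative sketch of an upload workflow combining a user declaration with a
# platform-side synthetic-media check. The classifier and threshold are hypothetical.

from dataclasses import dataclass


@dataclass
class Upload:
    content_id: str
    declared_ai_generated: bool   # affirmative declaration captured at upload
    detector_score: float         # 0.0-1.0 output of a synthetic-media classifier


def label_decision(upload: Upload, detector_threshold: float = 0.8) -> dict:
    """Decide whether content must carry an AI-generated label.

    The user declaration is recorded as evidence of due diligence, but a high
    detector score triggers labelling even where the user declared otherwise.
    """
    detected = upload.detector_score >= detector_threshold
    must_label = upload.declared_ai_generated or detected
    return {
        "content_id": upload.content_id,
        "label_as_ai_generated": must_label,
        "basis": "user_declaration" if upload.declared_ai_generated
                 else ("automated_detection" if detected else "none"),
        "escalate_for_review": detected and not upload.declared_ai_generated,
    }


# Example: a user declines to declare, but the detector flags the upload.
print(label_decision(Upload("vid-001", declared_ai_generated=False, detector_score=0.93)))
```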


C. Reasonable Efforts and Technological Measures


Perhaps the most nuanced, and potentially contentious, element of the amendment is the requirement that intermediaries deploy reasonable efforts and appropriate technical measures to identify and label AI-generated content.


Legally, the phrase “reasonable efforts” is significant. It mirrors the standard used in intermediary jurisprudence to avoid imposing an absolute monitoring obligation, which would be inconsistent with Section 79 of the IT Act. At the same time, it raises the compliance threshold well beyond passive hosting.


In practice, this may include AI-based detection tools for deepfakes and synthetic media; pattern recognition systems trained to identify manipulated audio-visual content; internal escalation protocols for flagged AI content; and periodic audits of AI-content moderation systems.


What is critical is that the obligation is contextual and proportional. Large social media platforms will be expected to deploy far more sophisticated systems than small or niche intermediaries. Nonetheless, the absence of bright-line standards creates interpretive uncertainty and potential regulatory discretion.
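One way to picture this proportionality is as a tiering of technical measures keyed to platform scale. In the sketch below, the 50 lakh (5 million) registered-user figure is the threshold notified under the 2021 Rules for designating significant social media intermediaries; the measures listed in each tier are assumptions for illustration, not requirements spelt out in the amendment.

```python
# Illustrative sketch only: mapping platform scale to a tier of technical measures.
# The 50 lakh registered-user figure is the notified threshold for "significant
# social media intermediaries" under the 2021 Rules; the per-tier measures are
# assumptions for illustration, not text of the amendment.

SSMI_USER_THRESHOLD = 5_000_000  # 50 lakh registered users in India

BASELINE_MEASURES = [
    "user declaration at upload",
    "visible AI-generated labels",
    "notice-based takedown workflow",
]

ENHANCED_MEASURES = BASELINE_MEASURES + [
    "automated deepfake / synthetic-media detection",
    "persistent watermark or metadata checks on re-uploads",
    "internal escalation protocol for flagged AI content",
    "periodic audit of AI-content moderation systems",
]


def expected_measures(registered_users: int) -> list[str]:
    """Return an illustrative set of measures scaled to platform size."""
    if registered_users >= SSMI_USER_THRESHOLD:
        return ENHANCED_MEASURES
    return BASELINE_MEASURES


if __name__ == "__main__":
    for users in (200_000, 80_000_000):
        print(users, "->", expected_measures(users))
```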


D. No General Monitoring, But a Narrow Corridor


While the amendment does not formally impose a general obligation to monitor all content, it undeniably narrows the safe corridor previously enjoyed by intermediaries. Courts are likely to be called upon to determine whether AI-detection and labelling duties amount to de facto general monitoring.


The regulatory intent appears to be to carve out an exception for high-risk content categories, justified by the scale and speed of AI-driven harm. Whether this balancing act survives constitutional scrutiny remains to be seen.


E. Contractual and Policy-Level Implications


The expanded due diligence regime will require intermediaries to revisit:


  • terms of service and acceptable use policies;

  • content upload workflows and user interfaces;

  • representations and warranties obtained from users and advertisers; and

  • internal compliance manuals and crisis response playbooks.


Failure to align contractual documentation with regulatory obligations could expose platforms to both public law penalties and private claims.


V. Due Diligence Obligations in Practice: Recent Judicial and Regulatory Developments


The expanded due diligence framework under the amendment does not operate in a vacuum. Over the last two years, Indian courts and regulators have increasingly confronted the real-world harms caused by AI-generated and synthetic content. These developments provide crucial context for understanding why the amendment places such emphasis on proactive labelling, traceability, and rapid response.


A. Judicial Recognition of Harm from AI-Generated Content


Indian courts have begun to expressly recognise AI-generated deepfakes as a distinct and serious category of legal harm, particularly where they implicate privacy, dignity, and personality rights.


In Suniel V Shetty vs John Doe S Ashok Kumar (Bombay High Court, 2025), the court granted interim relief against the circulation of AI-generated videos and endorsements falsely attributed to the actor. The Court treated the misuse of AI tools as an aggravated violation of personality rights, directing platforms to promptly remove the content and restrain further dissemination. Significantly, the Court emphasised the speed at which such content spreads and the inadequacy of delayed remedial action, an observation that directly mirrors the policy rationale behind shortened takedown timelines under the amended Rules.


Similarly, in Kamya Buch v. JIX5A & Ors. (Delhi High Court, 2025), the Court granted urgent injunctions against the circulation of non-consensual, AI-manipulated images. The judgment grounded relief in Article 21 of the Constitution, framing synthetic sexual imagery as a violation of dignity and bodily autonomy. Although decided prior to the amendment, the case illustrates judicial intolerance for intermediary inaction in cases involving synthetic media abuse.


Earlier law enforcement action in the widely reported Rashmika Mandanna deepfake incident (2024) further demonstrates that even before a bespoke AI regulatory framework, authorities were willing to invoke provisions of the IT Act and the IPC to address identity theft and impersonation through AI-generated content. The amendment effectively codifies and systematises this enforcement instinct.


B. Regulatory Shift from Reactive to Preventive Compliance


These judicial trends reveal a consistent theme: harm caused by AI-generated content is often irreversible once it achieves virality. The amendment’s insistence on mandatory labelling, embedded identifiers, and user declarations reflects an attempt to prevent such harm at the dissemination stage rather than relying solely on post-facto remedies.


The introduction of a three-hour takedown requirement for unlawful AI-generated content must be understood against this backdrop. Courts have repeatedly noted that conventional notice-and-takedown timelines are ill-suited to synthetic media, where reputational and psychological damage can occur within minutes.


C. Intermediary Liability and the Narrowing Safe Harbour Corridor


Taken together, recent cases suggest a judicial willingness to scrutinise intermediary conduct more closely where platforms possess the technical ability to detect or mitigate AI-driven harm but fail to act expeditiously. While courts have stopped short of imposing a general monitoring obligation, they have signalled that technological capacity is a relevant factor in assessing due diligence.


The amendment formalises this trend by embedding a reasonableness standard that is sensitive to platform size and capability, but unmistakably forward-looking. Safe harbour under Section 79 increasingly depends not merely on responding to notices, but on demonstrable, system-level preparedness to deal with AI-generated risks.


VI. Accelerated Takedown and Grievance Redressal


Perhaps the most consequential change is the drastic reduction in takedown timelines:

  • unlawful or prohibited AI‑generated content must be removed or disabled within three hours of receiving a lawful notice;

  • in certain urgent or sensitive cases, timelines may be further compressed; and

  • grievance redressal acknowledgements and resolutions are subject to shortened statutory deadlines.

These timelines reflect the government’s view that AI‑generated misinformation spreads at a velocity that renders traditional 24–36 hour response windows ineffective. However, from a compliance standpoint, they impose significant operational burdens, particularly on smaller intermediaries.
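As a purely operational illustration, the sketch below computes the removal deadline from the time a lawful notice is received and flags a breach where the content remains live beyond the window. The three-hour figure is taken from the amendment as described above; the notice fields and breach-checking logic are assumptions for this example.

```python
# Illustrative sketch: tracking the three-hour removal window from receipt of a
# lawful notice. Field names and the breach check are assumptions for illustration.

from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

TAKEDOWN_WINDOW = timedelta(hours=3)  # window for unlawful AI-generated content


@dataclass
class TakedownNotice:
    content_id: str
    received_at: datetime

    @property
    def deadline(self) -> datetime:
        return self.received_at + TAKEDOWN_WINDOW

    def is_breached(self, now: datetime | None = None) -> bool:
        """True if the content is still live past the removal deadline."""
        return (now or datetime.now(timezone.utc)) > self.deadline


# Example: a notice received at 09:00 UTC must be actioned by 12:00 UTC.
notice = TakedownNotice("vid-042", datetime(2026, 3, 1, 9, 0, tzinfo=timezone.utc))
print(notice.deadline, notice.is_breached(datetime(2026, 3, 1, 12, 30, tzinfo=timezone.utc)))
```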


VII. Categories of Prohibited AI‑Generated Content


The amendment clarifies that AI‑generated content falling within existing categories of illegality attracts immediate action. This includes:

  • child sexual abuse material (including synthetic or morphed imagery);

  • non‑consensual intimate images or deepfake pornography;

  • impersonation, fraud, or deceptive representations of real individuals; and

  • content misleading users about real‑world events, elections, or public order.

Importantly, the synthetic nature of the content does not dilute liability; rather, it aggravates regulatory scrutiny.


VIII. Safe Harbour Under Section 79: Preserved, But Conditional


The government has expressly clarified that intermediaries acting in good faith and in compliance with the amended Rules will continue to enjoy safe‑harbour protection under Section 79 of the IT Act. However, non‑compliance, particularly failure to label, delay in takedown, or inadequate grievance handling, may result in loss of such protection.


In practical terms, safe harbour is no longer a passive shield but a compliance‑contingent privilege.


IX. Constitutional and Policy Implications


From a constitutional perspective, the amendment raises three key concerns:

1. Freedom of Speech: Mandatory labelling and rapid takedown obligations may have a chilling effect on satire, parody, and artistic expression using AI tools.

2. Delegated Legislation: The breadth of executive discretion in defining and enforcing AI content norms may invite judicial scrutiny.

3. Proportionality: Whether three‑hour takedown mandates are proportionate across all categories of intermediaries remains debatable.

At the same time, the amendment aligns India with a global regulatory trend favouring transparency‑based AI governance rather than outright bans.


X. Practical Takeaways for Stakeholders


  • Platforms must invest in AI‑content detection, metadata management, and rapid response compliance systems.

  • AI tool providers should design outputs with default labelling and traceability features.

  • Businesses and creators using AI for marketing or media must ensure disclosures are accurate and platform‑compliant.

  • Legal and compliance teams should revisit intermediary policies, user terms, and crisis‑response protocols.


XI. Conclusion


The IT Intermediary Rules amendment on AI‑generated content marks a decisive shift in India’s digital regulation, from reactive content moderation to proactive transparency and accountability. While the framework seeks to curb genuine harms posed by deepfakes and synthetic misinformation, its long‑term legitimacy will depend on balanced enforcement, judicial oversight, and continuous engagement with technological realities.


For intermediaries and AI‑enabled businesses, the message is clear: AI innovation is permissible, but opacity is not.
