India's AI Governance Landscape: Principles, Guidelines, and the Path Ahead
- Tuhin Batra

No Dedicated AI Law Yet
India lacks a dedicated AI law as of early 2026, with no standalone statute regulating AI models, automated decision-making, or algorithmic governance across sectors. The law minister confirmed in Parliament that no formal policy or specific legal framework exists yet for AI use in courts or judicial processes. Instead, AI governance relies on existing frameworks like data protection, cybersecurity, and intermediary rules, supplemented by guidelines that promote responsible development and deployment.
Absence of Clear Accountability Allocation
A critical gap in India’s current framework is the absence of explicit statutory allocation of responsibility between model developers, system integrators, and end users. Unlike the EU AI Act, which assigns obligations across the AI value chain, India presently offers no clear differentiation between foundation model providers, application developers, and deploying enterprises. This creates uncertainty around attribution of fault, standard of care, and evidentiary thresholds in AI-related disputes.
Foundational Principles for Responsible AI
India's approach to AI has been shaped by ethical principles since 2018, starting with NITI Aayog's National Strategy for Artificial Intelligence (#AIforAll), which focuses on applications in healthcare, agriculture, education, smart cities, and mobility. In 2021, NITI Aayog released its Principles for Responsible AI, outlining key tenets such as safety and reliability, inclusivity and non-discrimination, privacy, transparency and explainability, accountability, protection of positive human values, and compliance with applicable laws. These principles guide ethical AI design and deployment without imposing binding rules.
India AI Governance Guidelines (2025)
A major step forward came in November 2025, when the Ministry of Electronics & Information Technology (MeitY) issued the India AI Governance Guidelines. The framework is principle-based rather than statutory: it adopts a risk-based approach meant to prevent unchecked deployment of high-risk AI, emphasizing risk assessments, institutional oversight, data privacy, cybersecurity, and bias mitigation. It builds on existing laws such as the Information Technology Act and the Digital Personal Data Protection Act, 2023 (DPDP Act), prioritizing innovation and seeking to build trust while keeping regulatory burdens minimal.
Key Institutional and Sectoral Initiatives
Supporting these efforts are institutional initiatives, including the INDIAai portal, which showcases AI developments, resources, research, and policies to build awareness of responsible AI. In January 2025, MeitY announced plans for an IndiaAI Safety Institute to advance ethical, safe, India-centric AI research and standards. Upcoming bodies such as the Artificial Intelligence Governance Group (AIGG) and the Technology & Policy Expert Committee (TPEC) will provide policy guidance, while sectoral frameworks, such as the RBI's FREE-AI principles for finance, address risks in specific domains.
Cross-Cutting Legal Frameworks
Cross-cutting laws fill the current gaps: the DPDP Act regulates the data handling essential to AI, the Information Technology Act and its rules cover intermediary liability and cybersecurity, and sectoral regulations in telecom, finance, and consumer protection apply to AI use cases. Judicial use of AI remains unregulated at the national level, with no formal policy; some courts, such as those in Kerala and Andhra Pradesh, have issued directives restricting AI in judicial reasoning or urging caution.
Practical Compliance Reality for AI Deployers
In the absence of a dedicated AI statute, compliance responsibility in India currently falls on AI developers, deployers, and enterprises through indirect legal exposure. Organizations deploying AI systems must self-map risks across data protection, cybersecurity, consumer protection, contract law, and sectoral regulations; there is no unified compliance checklist or statutory safe harbor for responsible AI practices. As a result, governance today is largely contractual and internal: companies rely on AI policies, model risk assessments, vendor due diligence, audit rights, data processing agreements, and internal ethics frameworks to manage exposure. Liability for AI-driven harm is likely to arise through traditional causes of action (negligence, deficiency of service, breach of contract, product liability, or data protection violations) rather than AI-specific provisions.
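Because there is no unified checklist, organizations typically build this risk mapping themselves. The sketch below is a minimal, hypothetical illustration of such an internal register in Python: every trigger, attribute, and framework label is an assumption chosen for illustration, not a statement of the law or of any mandated compliance tool.

```python
from dataclasses import dataclass, field

# Hypothetical mapping from deployment attributes to the cross-cutting Indian
# frameworks discussed above. Illustrative only; not legal advice.
FRAMEWORK_TRIGGERS = {
    "processes_personal_data": "DPDP Act, 2023 (consent, notice, security safeguards)",
    "consumer_facing": "Consumer Protection Act, 2019 (deficiency of service)",
    "hosts_third_party_content": "IT Act and intermediary rules (due diligence)",
    "regulated_sector_finance": "RBI directions, including FREE-AI principles",
    "automated_decisions_affect_users": "Contract and tort exposure (negligence)",
}

@dataclass
class AIDeployment:
    name: str
    attributes: set = field(default_factory=set)

def map_obligations(deployment: AIDeployment) -> list:
    """Return the indicative frameworks triggered by a deployment's attributes."""
    return [
        framework
        for trigger, framework in FRAMEWORK_TRIGGERS.items()
        if trigger in deployment.attributes
    ]

if __name__ == "__main__":
    chatbot = AIDeployment(
        name="customer-support-llm",
        attributes={"processes_personal_data", "consumer_facing",
                    "automated_decisions_affect_users"},
    )
    for obligation in map_obligations(chatbot):
        print(f"{chatbot.name}: review against {obligation}")
```

In practice, the output of such a register would feed the contractual tools mentioned above: vendor due diligence questionnaires, data processing agreements, and audit-rights clauses.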
Current Approach
Overall, India's AI strategy is principles-first rather than rules-first, risk-based and sectoral in nature, innovation-friendly, and reliant on tools like the DPDP Act, IT rules, consumer protections, and judicial directives. While no dedicated law exists, robust guidelines, planned institutions, and existing regulations form an emerging ecosystem.
Regulatory Philosophy: India vs Global Models
India's approach differs materially from the European Union's prescriptive, rights-heavy AI Act model. Instead of ex-ante licensing and prohibited-use categories, India favors a soft-law, innovation-first framework that relies on post-facto enforcement through existing legislation. Compared with the US's market-led model and the UK's regulator-driven approach, India is positioning itself as a middle path: principle-based governance with sectoral supervision, seeking to avoid early overregulation while building institutional capacity.
Likely Trajectory
Over the next few years, India is expected to move gradually from principle-based guidance toward selective statutory intervention, particularly for high-risk use cases such as finance, healthcare, employment, surveillance, and critical infrastructure. Rather than a single comprehensive AI Act, India is more likely to adopt incremental regulation through sectoral rules, delegated legislation under the IT Act or DPDP Act, and mandatory governance standards for large AI deployers. Courts will also play a significant role in shaping accountability through evolving jurisprudence on algorithmic decision-making, bias, and automated harm.

