Global AI Governance Law and Policy: India
This article, part of a series co-sponsored by HCLTech, analyzes the laws, policies, and broader contextual history and developments relevant to AI governance in India. The full series can be accessed here.
Published: July 2025
From a single computer at Kolkata’s Indian Statistical Institute in the 1950s to powering the world’s digital infrastructure, India’s technological journey has been transformative. In the 1990s, the government focused on the information technology sector by encouraging the export of software services. As a result, India emerged as a global IT hub, first as an outsourcing destination and later as the birthplace of digital public infrastructure that offers a new form of governance built on population-scale digital architecture. Today, India has the world’s largest digital identity system, the biggest digital payments system by volume and a population that is, for the most part, digital by default.
This dramatic digital transformation has naturally fuelled the adoption of AI. According to the Stanford Artificial Intelligence Index Report 2025, India ranks second in the list of countries with the highest AI skill penetration from 2015 to 2024 and is among the top 10 countries in the world that received the most private investment in AI from 2013 to 2024.
India's AI policy has two different facets. One set of policy initiatives focuses on promoting AI adoption amid rapid advances in generative AI. The government has pledged to invest USD1.25 billion in AI development, and, to that end, launched the IndiaAI Mission. Additional initiatives promote the integration of AI across a range of use cases and sectors. The other set of policy initiatives relates to the governance of AI and the risks that it could pose. While India currently does not have standalone AI legislation, existing intellectual property, data protection, cybersecurity and content regulations are being adapted to apply to AI.
Historically, India’s digital space was exclusively regulated by the Information Technology Act, 2000, an omnibus law that addresses online contracts, data protection, cybercrimes and digital harms, such as phishing and identity theft. The IT Act has undergone numerous amendments to respond to new threats in the digital space. Originally intended to regulate computers and electronic records, it has been used to regulate a range of digital products and services due to the expansive definition of computer resources. AI systems and models will likely fall within the purview of its legislative scope.
Various subordinate legislations were enacted under the IT Act, including the IT (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011, which governed how entities should process personal data. Until they were replaced by the Digital Personal Data Protection Act, 2023, these rules served as the primary data protection regulation. Other laws, such as the Bharatiya Nyaya (Second) Sanhita, 2023, and the Indian Copyright Act, 1957, broadly extend to the digital space and also apply to AI.
In 2018, India’s National Institution for Transforming India, the government's apex policy think tank, released the National Strategy on Artificial Intelligence. The policy took an "AI for all" approach and aimed to address challenges of accessibility, affordability and skilled expertise.
The NSAI set out four key areas. The first concentrated on boosting core and applied research in the AI field. The second focused on reskilling the current workforce to facilitate large-scale employment generation through AI; this initiative called for realigning the education sector to harness the potential of AI. The third promoted investments in AI and product development to enhance AI adoption. The fourth focused on managing concerns around ethics, privacy and security, such as through the use of privacy-preserving technologies.
In 2021, the IndiaAI Mission was launched to develop a comprehensive ecosystem to foster AI innovation by democratising access to compute, enhancing data quality, developing indigenous AI capabilities, attracting top AI talent and promoting ethical AI. The comprehensive national-level program houses initiatives like the IndiaAI Compute Capacity, which intends to scale AI computing infrastructure by deploying over 18,000 graphics processing units through strategic public-private partnerships.
The Ministry of Electronics and Information Technology indicated in 2023 that it was looking to replace the IT Act with new legislation, tentatively titled the Digital India Act. Since the Indian government is unlikely to introduce standalone AI legislation, the Digital India Act is likely to also apply to potential AI risks.
The MeitY issued an AI advisory addressed to all intermediaries and platforms on 15 March 2024. It required them to ensure that their use of AI models, large language models and generative AI does not enable users to share unlawful content on the platform. It also required them to ensure the use of AI technology does not result in bias or discrimination or threaten the integrity of the electoral process.
All platforms and intermediaries were obliged to test their AI models; if the models were unreliable in any way, they had to be appropriately labeled as such. Users had to be informed, through terms of service or user agreements, that their accounts could be terminated if they dealt in unlawful information.
The MeitY has recently constituted a subcommittee to examine whether the IT Act, the Bharatiya Nagarik Suraksha Sanhita, 2023, various content laws, the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, cyber security laws, DPDPA and India’s intellectual property regulations are sufficient to regulate AI. Since AI spans multiple sectors, the view was that a fragmented regulatory approach may result in inefficiencies. To address this concern, the committee recommended an integrated government approach, calling on the MeitY and principal scientific adviser, who advises the prime minister on matters of science and technology, to establish an empowered mechanism to coordinate on AI governance.
While there is no official guidance on how existing intellectual property laws apply to AI, the NSAI notes that the IP regime must enable AI developers to innovate. The policy therefore recommended the creation of a task force, comprising the Ministry of Corporate Affairs and the Department for Promotion of Industry and Internal Trade, to consider the issue.
The Consumer Protection Act, 2019, offers broad protections to consumers that could extend to AI harms. The government recently issued the "Guidelines for Prevention and Regulation of Dark Patterns," prohibiting the use of dark patterns on the grounds that they amount to deceptive advertising.
Sector-specific AI regulations
Various sectoral regulators are looking to regulate AI within the specific domains they supervise. Some key initiatives are listed below:
Financial sector
The Reserve Bank of India constituted the Framework for Responsible and Ethical Enablement of Artificial Intelligence, or FREE-AI, committee to study AI adoption by financial institutions and review AI regulation in the context of the global financial sector. The committee has yet to release its report.
The Securities and Exchange Board of India now requires that all registered mutual fund offerings using AI or machine learning applications and systems file a report with the board on a quarterly basis. These reports will detail how the AI/machine learning project is implemented, the safeguards put in place to prevent abnormal behavior of the AI/machine learning system, and whether key controls in the AI/machine learning system comply with SEBI’s cybersecurity control requirements.
Health care sector
The Indian Council of Medical Research released the Ethical Guidelines for Application of AI in Biomedical Research and Healthcare in 2023. Some of these guidelines include ensuring a "Human in the Loop" model, minimizing risk by implementing safety standards, securing data privacy and protection, defining accountability and liability for AI actions and promoting accessibility, equity and inclusiveness.
Under the Telemedicine Practice Guidelines, 2020, any technology platform based on AI or machine learning is prohibited from counselling patients or prescribing medication. However, the technology could be used to assist and support a registered medical practitioner in carrying out patient evaluation, diagnosis or management.
Telecommunication sector
The Telecommunication Engineering Centre, a technical wing of the Department of Telecommunication, has recognized that AI enables real-time decision making and will significantly influence upcoming technologies, such as satellite broadband, drone communication and the metaverse. In this context, TEC published a report on AI system fairness and invited stakeholder input on developing new standards to assess and rate the robustness of AI systems in telecom networks and digital infrastructure.
Defense
The Artificial Intelligence in Defence report sets out a risk-based assessment framework to integrate AI applications into defense operations. In this report, the Indian Department of Defence recognizes the wide array of AI applications that are critical to the defense sector.
Cybersecurity
The Indian Computer Emergency Response Team is the national nodal agency for responding to cybersecurity incidents. It has previously released an advisory on the security implications of using AI language-based applications.
Foundation models
India has developed indigenous foundation AI models, including both large and small language models. Sarvam-1, India’s first indigenous large language model, is trained on datasets in languages other than English to perform multilingual tasks, while BharatGen is a government-funded LLM for Indian languages.
According to the MeitY subcommittee, multiple stakeholders are involved in the lifecycle of a foundation model, such as data principals, data providers, AI developers including model builders and AI deployers including app builders and distributors. Accordingly, it is critical that the distribution of responsibilities between different players is clear.
Agentic AI
While the Indian government has not specifically addressed agentic AI in any of its released policies and guidelines, the broader principles on responsible and trustworthy AI are likely to apply to the development of agentic AI as well. Various tech companies in India have deployed AI agents to optimize their logistics services, and Infosys Limited, an Indian multinational technology company, has developed generative AI agents for client applications.
Enforcement
Presently, AI-related enforcement is carried out under existing legal frameworks. For instance, the Bharatiya Nyaya Sanhita outlines offenses related to cybercrimes, the creation and dissemination of deepfakes and AI-generated misinformation, impersonation-based cheating and privacy violations, particularly when deepfakes exploit a person’s image. These are common risks that may arise from the use and deployment of AI systems and models.
The privacy rules and DPDPA address any violations of an individual’s privacy rights. Accordingly, any AI system that uses user data would need to ensure compliance with the requirements under these laws. Similarly, since AI systems and models are trained on large datasets that may contain copyrighted material or other intellectual property, they must comply with the ICA and other intellectual property laws.
The Digital India Act has not been released for public consultation and it is unlikely that any AI law will materialize this year.
An Inter-Ministerial AI Coordination Committee/Governance Group is likely to be set up to develop a common roadmap and a whole-of-government approach to regulating AI. Key goals for this committee would include strengthening existing laws to minimize AI-related risks and harms, harmonizing existing efforts and initiatives around common technologies, providing legal clarity on the development and use of AI and creating a policy environment that enables the use of AI for beneficial use cases.
In parallel, various developments are underway to address sector-specific considerations and challenges posed by AI, such as the RBI’s FREE-AI committee report, TEC’s standards on AI, and policy frameworks released by the healthcare and defense sectors. The government is also deliberating on how existing legal frameworks, such as copyright law, pose challenges to the advancement of AI.
Articles in this series, co-sponsored by HCLTech, dive into the laws, policies, broader contextual history and developments relevant to AI governance across different jurisdictions. The selected jurisdictions are a small but important snapshot of distinct approaches to AI governance regulation in key global markets.
Each article provides a breakdown of the key sources and instruments that govern the strategic, technological and compliance landscape for AI governance in the jurisdiction through voluntary frameworks, sectoral initiatives or comprehensive legislative approaches.
Global AI Governance Law and Policy
Jurisdiction Overviews 2025
The overview page for this series can be accessed here.
- Australia
- Canada
- China
- European Union
- India
- Japan
- Singapore
- South Korea
- United Arab Emirates
- United Kingdom
- United States