India's foray into regulating AI
India has taken initial steps toward regulating artificial intelligence through two advisories issued by the Ministry of Electronics and Information Technology, the government body responsible for policymaking on internet-related issues.

Leading up to this month's general elections, the Indian government has amplified efforts to tackle misinformation spread through deepfakes and AI. The focus remains on intermediaries and platforms; however, various stakeholders have voiced concern over the advisories' applicability, scope and legality. While the issuance of advisories does not constitute binding rules under Indian law, they do offer insights into the government's approach toward such emerging technologies.

First advisory

On 1 March, MeitY issued its initial notification advising intermediaries or platforms to ensure compliance with their due diligence requirements under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. This came in continuation of an advisory dated 26 Dec. 2023, which highlighted the growing concern of misinformation spread through deepfakes.

The March advisory stated that untested or unreliable AI models, large language models or generative AI systems would require explicit permission before being deployed to users on the "Indian Internet."

A consent pop-up mechanism was suggested for informing users about the fallibility of output generated by such systems. Further, any information created, generated or modified using such software that could potentially be characterized as misinformation or a deepfake was to be labelled so users could identify the intermediary or first originator of the information. A clear ban was also prescribed against any use of AI that permits discrimination, bias or threats to the integrity of the electoral process.

Notably, all intermediaries were requested to submit an "Action Taken-cum-Status Report" within a short span of 15 days to MeitY.

The advisory placed heavy emphasis on compliance by intermediaries and platforms, warning of potential penal consequences under the Information Technology Act, 2000 and other laws.

Subsequent clarifications

Soon thereafter, the Minister of State for Electronics and Information Technology, Rajeev Chandrasekhar, provided certain clarifications on social media platform X. He stated the advisory only applied to significant or large platforms, not startups.

The legality of the minister's comments posted on a personal X account notwithstanding, several experts raised concerns regarding the impracticable approach to regulating AI. The discrepancies between the official notification and the minister's subsequent comments raised further questions about the notification's applicability. Experts voiced concerns that the government mandate for explicit permission before deployment of AI systems would stifle startups and companies working on AI. The lack of uniform definitions for terms used in the advisory — such as "untested," "significant" and "Indian Internet" — amplified the ambiguities.

Second advisory

On 15 March, MeitY released a second advisory to replace the first, doing away with the mandate for prior government permission before AI system deployment and the submission of status reports by platforms. The revised advisory emphasized that platforms and intermediaries should ensure the use of AI models, large language models or generative AI software or algorithms by end users does not facilitate any unlawful content stipulated under Rule 3(1)(b) of the IT Rules, in addition to any other laws. It thus applies to almost all stakeholders in the AI ecosystem in India.

Untested or unreliable AI models, or those in development, could only be made available to the public after the output was properly labelled as inherently fallible or unreliable. The revised advisory maintained the ban on uses that threaten the electoral process and expanded platforms' and intermediaries' responsibility to ensure they do not facilitate such unlawful use.

In keeping with the initial advisory, platforms and intermediaries were tasked with informing users about terms of service and the removal of any information found to be in violation.

The second advisory did, however, expand the consequences for noncompliance. Failure to comply could expose intermediaries, platforms and users to prosecution under the Information Technology Act, 2000 and other criminal laws like the Indian Penal Code, 1860.

Implications

AI systems are being developed and deployed rapidly in India. The use of AI in health care, education and law enforcement is already underway. Its transformative effect on commerce, government service delivery and human interactions can hardly be overstated. Prior to the MeitY advisories, the public policy think tank of the government of India, NITI Aayog, released a national strategy on AI in 2018, along with several publications, including discussion papers on a framework for operationalizing its vision of responsible AI.

The government has taken a proactive approach toward developing frameworks for ethical and responsible deployment of AI systems. India stands at a unique position to leverage such emerging technologies for dynamic growth.

Nevertheless, MeitY's recent advisories raised eyebrows, particularly due to concerns of over-regulation. By removing the mandate to obtain explicit permission and easing the rollout of AI models, the second advisory alleviated those concerns.

The ministry has taken a rather balanced approach, weighing user safety against innovation. In April, a committee comprising experts from various ministries recommended taking a "whole-of-government" approach toward regulating AI. The use of AI systems by bad actors has become prominent, creating the need for guardrails. The revised advisory reflects the Indian government's considered approach of seeking to foster the growth of AI while cautioning against possible misuse.
