Although it has been three whole weeks since the IAPP Europe Data Protection Congress 2024 in Brussels, it still feels like it was yesterday, with my head bursting with a thousand thoughts from the two-day proceedings.

While I was there, big news emerged from India on a fine against Meta. In what is considered an unprecedented move, the Competition Commission of India issued an 18 Nov. order imposing a fine of INR2.13 billion — about USD26 million — along with certain requirements. The quantum is significant by Indian standards.

The "story" began in 2021 when WhatsApp rolled out a privacy notice update in India that enabled it to share users' data with other Meta companies without obtaining their informed consent. CCI took up the matter of its own accord at a time when a number of other lawsuits were filed.

Since the WhatsApp notice change did not give consumers a choice, the only "hypothetical" option they had was to switch to a different platform. However, given WhatsApp's dominant position, users did not really have any alternative to switch to.

The CCI concluded this was unfair as it was, in effect, forcing users to stay on the platform and thereby letting their data be accessed by other Meta companies. Further, the CCI said this data sharing created entry barriers for competitors. Hence, the order prohibits WhatsApp from sharing user data with other Meta companies for advertising purposes for five years.

What is interesting about the order is that it came from the CCI and not from a technology or sector-specific regulator. In fact, the CCI seems to be getting more active in general; it recently opened an investigation into Google over alleged anti-competitive practices as well.

Meanwhile, the Indian Parliament's winter session is currently underway. When Parliament is in session, we get access to questions asked by various parliamentarians and the responses provided by the concerned ministers and departments, which give us an indication of what lies ahead.

On artificial intelligence governance, Member of Parliament Derek O'Brien asked whether the government is developing a set of voluntary guidelines for organizations working on AI and, if so, the number of stakeholders consulted thus far, as well as those planned to be consulted.

MP Dola Sen asked whether the government has "formulated any law aiming to formulate ethics regarding" the use of AI or generative AI like ChatGPT and whether the government is planning to introduce legislation similar to the EU AI Act.

Responding to O'Brien, the Minister of State for Electronics and Information Technology, Jitin Prasada, said the government's goal is "to create a supportive environment that encourages organizations to follow good practices voluntarily" and that the National Association of Software and Service Companies, or NASSCOM, "along with stakeholders from industry is working on voluntary guidelines to promote safe, secure and trustworthy development and deployment of AI."

The emphasis on "voluntary" continues with the impressions given earlier that no stringent law or regulation around governance of AI is expected.

Incidentally, in keeping with the above, NASSCOM launched "The Developer's Playbook for Responsible AI in India" in late November. The playbook provides a detailed, sector-agnostic framework for developing safe, trusted and inclusive AI — the principles stated in the IndiaAI mission — across the AI life cycle.

Prasada's response further emphasized the government's expected stance that existing laws and regulations are to be leveraged to address the risks associated with AI. For example, in the context of deepfakes, he said the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 "cast specific legal obligations on intermediaries, including social media intermediaries and platforms, to ensure their accountability towards safe and trusted internet including their expeditious action towards removal of the prohibited misinformation, patently false information and deepfakes." Similarly, he referred to India's Digital Personal Data Protection Act for addressing privacy risks.

He also mentioned the government has constituted an Advisory Group on AI for an India-specific regulatory AI framework with diverse stakeholders from academia, industry and government. The group's objective is to address all issues related to development of a responsible AI framework for the safe and trusted development and deployment of AI. Note the terminology used is "framework" and not law or regulation.

In a different context, we saw Minister for Electronics and IT Ashwini Vaishnaw comment on the regulation of "vulgar content" on social media. In response to a question raised by MP Arun Govil, he said, "The countries from which these social media platforms hail, their sensitivities and our country's sensitivities are very different. Such a debate is ongoing in almost every country. Previously, the press used to have editorial checks on content to decide whether published content was accurate; that check has now ended."

He urged the Parliamentary Standing Committee on IT to look into this, which we understand it is currently doing. Content currently comes under the ambit of the Ministry of Information and Broadcasting.

While debates and discussions by legislative bodies continue, the courts forge ahead via rulings. In a recent case, the Delhi High Court held that the right to privacy includes the right to be forgotten. The case involved an acquitted individual who asked the court to mask his name in records relating to the case.

The court observed "there is no reason why an individual who has been duly cleared of any guilt by law should be allowed to be haunted by the remnants of such accusations easily accessible to the public. Such would be contrary to the individual's right to privacy which includes the right to be forgotten, and the right to live with dignity guaranteed under Article 21 of the Constitution of India." Further, it said the individual could ask the relevant platforms and public search engines to do the same.

In another interesting development in the context of digital governance, Asian News International sued OpenAI for copyright infringement. ANI claimed OpenAI used its content to train its large language models without authorization. It also alleged OpenAI attributed to ANI content about events that never actually took place, thus threatening the agency's reputation. This is the first time an Indian news agency has sued an AI company over copyright.

Amid all this action, there is still no news about the DPDPA rules. There was hope the rules would be discussed during the current winter session of Parliament.

Let's hope 2024 brings some good news on this front.

Shivangi Nadkarni is co-founder of Arrka, a Persistent Company.