As I write this, the monsoon is here in India and there is heavenly rain pouring outside my window while I relish a hot cup of masala chai. However, I am acutely aware that I am one of the lucky ones who gets to enjoy this. Many are bearing the brunt — losing their loved ones and homes as the monsoon wreaks havoc in different corners of India.
Like every year, the experts keep reminding us this is a result of destroying the ecology in so many places over the years — rampant greed taking precedence over long-term concern. Yes, there are laws and regulations. Have they been enough? Have they been enforced as required? The answers lie in the reality around us.
Parallels of this scenario are playing out in the digital realm, too. Our data continues to be sucked from us and further processed in ways we don't often understand or control — even as laws and regulations play catchup. It is not too far in the future that we will see the proverbial "monsoon deluge" equivalent in the cyberworld, too. If we as individuals stand by helplessly as the harms play out around us, they will directly impact many of us greatly. Unless we act, and act fast.
The reality of the regulatory aspect of this is captured in my new favorite term: digital entropy. I first heard it from the various reports during the IAPP's annual leadership retreat in June, and again a few weeks later as IAPP Research and Insights Director Joe Jones kicked off the Asia Privacy Forum 2024 in Singapore.
As we in India settle in with the new government at the center, following the recently concluded general elections, we are already seeing the concept of digital entropy play out in the country. Various aspects of governance are being looked at with gusto, covering digital competition, artificial intelligence governance, dark patterns, telecommunications, financial tech, and more.
Before we dig in, let me first mention what is on top of everyone's minds: the Digital Personal Data Protection Act rules. A version of the much-awaited rules has been leaked; however, per media reports, the draft rules are expected to be officially released in two weeks. With the same Minister Ashwini Vaishnaw at the helm of the Ministry of Electronics and Information Technology, there is continuity there and, hence, reasonable confidence in this conjecture.
Meanwhile, there is much action around AI, just as in the rest of the world. However, the tidbits around what to expect regarding governance and risk management are few and far between. At a 16 July event in Bengaluru, MeitY Secretary S Krishnan discussed responsible use of AI and the importance of building a democratized, multistakeholder approach to address the risks. Krishnan specifically mentioned how important it is for developers, deployers and users — the three primary stakeholders in the AI ecosystem — to come together to address this and build a robust framework.
The courts are getting their share of cases dealing with the downstream outcomes of AI use. Popular singer Arijit Singh is the latest celebrity to take action, claiming transgression of his personality rights by multiple entities using AI. In a 31 July order, the High Court of Bombay directed about 38 entities, including at least five AI platforms, to stop commercially exploiting Singh's name, image, voice and other characteristics in an unauthorized manner. He is the most recent in a line of celebrities who have approached the courts, and I am sure there will be many more to come.
Also beginning to be taken seriously are dark patterns. On 30 Nov., the Central Consumer Protection Authority of India issued the Guidelines for Prevention and Regulation of Dark Patterns and a report published by the Advertising Standards Council of India and the UI/UX design agency Parallel offered startling data points on the current reality.
From the 12,000 screens studied across 53 apps covering nine different industries, key results emerged:
- Almost 80% of apps manipulate users into unknowingly sharing more personal data than intended.
- About 45% of apps highlight certain parts of the interface and hide others, misdirecting users into taking a particular action.
- Approximately 43% of apps indulge in "drip pricing," wherein additional fees are gradually revealed throughout the purchase process, resulting in the final price being higher than quoted.
- About 32% of apps create a false sense of urgency by projecting limited time or stock availability, manipulating users into making rushed decisions.
For me, the only heartening takeaway from this study is the fact that it was conducted in the first place, bringing focus to an issue the industry needs to take seriously.
As always, the Reserve Bank of India continues to be active. It recently published its Report on Currency and Finance 2023-24. Three aspects mentioned in this comprehensive report in the domain of digital trust made me feel positive.
Digital trust and fintechs. To say that the fintech sector in India is exploding is an understatement. Per the Department for Promotion of Industry and Internal Trade, there are over 8,000 fintech entities in India, with revenues expected to reach USD 190 billion by 2030. The sector has brought tremendous benefits, especially in ensuring the reach of financial products to previously unreached sections of society.
The RBI emphasized the critical ways in which the fintech ecosystem needs to improve, particularly in areas such as cybersecurity, dark patterns and predatory lending practices misusing technology. For example, its report specifically mentions phishing, card skimming, man-in-the-middle attacks, malware and ransomware, among others, as areas of high risk. Similarly, the report talks of the use of dark patterns like forced subscriptions, false urgency and hidden fees.
Big Tech dominance. While the report acknowledged that the integration between financial services and nonfinancial Big Tech platforms impacts the financial sector positively, it noted that it can also threaten the sector. One concern involves how this can lead to reduced competition, owing to bundling and cross-subsidizing products, thereby making Big Tech platforms the sole conduit for financial institutions to access their customers. The other is the risk of those platforms becoming single points of failure in markets.
AI-related risks. The report talks of the huge impact AI and large language models can have on the banking and financial services sector. At the same time, they pose a whole set of risks, as well. In the digital realm, these include privacy risks, risks of algorithmic bias, risks of model hallucinations, and digital disinformation and misinformation.
In a first from a regulator in India, the report further lays out 10 principles for designing responsible and ethical AI solutions for financial services: fairness, transparency, accuracy, consistency, data privacy, explainability, accountability, robustness, monitoring and updating, and human oversight.
In the midst of all this, the average consumer is waking up.
Minister of State for Rural Development and Communications Chandra Sekhar Pemmasani told members of Parliament that, in 2023 alone, telecommunications organizations in India received 1.22 million complaints about spam calls from unregistered telemarketers.
An EY report said 77% of Indian consumers expressed "profound concern" about the possibility of data breaches while shopping online, while 73% worried about their personal information being disclosed. This was based on a poll of 1,000 Indian participants.
So, there is hope yet.
Cheers.
Shivangi Nadkarni, CIPT, is co-founder and CEO at Arrka.