
Examining India's efforts to balance AI, data privacy

Enterprises and individuals worldwide find themselves at a historic tipping point as artificial intelligence continues to transform how organizations and customers conduct business. AI-based technologies are poised to reshape industries, enhance operational efficiencies and improve overall quality of life. However, as AI integration becomes more pervasive, it also brings significant privacy concerns that demand careful consideration.

From a legal and regulatory perspective, the recent enactment of India's Digital Personal Data Protection Act has prompted organizations across the country to kick-start their compliance journeys. Organizations are charting their privacy maturity frameworks: mapping personal data flows, revisiting user interfaces to identify where to show pop-up notices, updating privacy policies and vendor contracts, and training employees.

Furthermore, in 2022, the Indian government proposed enacting the Digital India Act to provide contemporary legal standards catering to the country's evolving digital ecosystem. The proposed law seeks to regulate AI, given its widespread use in critical fields such as health care, banking and aviation. Balancing the benefits of AI with the protection of personal data and privacy is therefore a critical challenge the government must address to ensure a sustainable and ethical digital future.

Proliferation of AI-based technologies 

AI is an all-encompassing concept that includes machine learning, a set of techniques and tools that enable computers to "think" by building mathematical models from accumulated data, and deep learning, in which systems built on a known set of training data assist self-learning algorithms in performing a task.
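To make the idea of "learning from accumulated data" concrete, the following is a minimal, hypothetical sketch in Python: a simple classifier is fit to historical examples and then asked to label a new one. The data, feature names and library choice are illustrative assumptions, not drawn from this article.

```python
# A minimal, hypothetical sketch of machine learning "from accumulated data":
# a classifier is fit to historical examples and then labels a new one.
# Feature names and values are illustrative only.
from sklearn.linear_model import LogisticRegression

# Accumulated data: [monthly_income, existing_debt] -> loan repaid (1) or defaulted (0)
X_train = [[52000, 4000], [31000, 15000], [78000, 2000], [24000, 18000]]
y_train = [1, 0, 1, 0]

model = LogisticRegression()
model.fit(X_train, y_train)  # the algorithm "learns" a mathematical model from the data

print(model.predict([[45000, 6000]]))  # label for a new, unseen applicant
```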

Recognizing the importance of these technologies, the Ministry of Electronics and Information Technology developed Bhashini, an AI-powered language translation platform that supports the digital inclusion of diverse languages in India. The goal is to ensure services such as banking and health care are accessible to the entire population. Additionally, INDIAai and Meta signed a memorandum of understanding to establish a framework for collaboration on AI and emerging technologies.

Private enterprises, including those in retail, logistics, sales and marketing, fintech, and health care, are also adopting AI-based solutions, and successful adoption could have a measurable economic impact. A report recently published by Boston Consulting Group found that, in India alone, successful adoption of AI could add 1.4% annually to the country's gross domestic product growth.

As AI is applied to better understand human biology at a genetic level by leveraging patient data, patient intake and diagnoses may be improved. Further, it can create operational efficiencies by automating routine clinic tasks, such as intake and billing, enabling health care professionals to focus on critical decision-making and quality of care.

The fintech sector has embraced applications of AI, integrating it with cross-platform connectivity to deploy various communication channels, such as WhatsApp, chatbots and interactive voice response, and to tailor impactful customer acquisition, debt collection and support strategies. Functions such as know-your-customer verification and credit score calculation also leverage AI for enhanced convenience and faster turnaround times.

Moreover, the field of sales and marketing is attempting to stay ahead of the curve with "Trinity," touted as the world's first sales simulator with an AI-generated persona, which will help organizations train their sales forces digitally by replicating a real-life sales pitch.

Over time, AI-based services have raised customer expectations around customization, efficiency and accuracy.

Privacy considerations for AI-based technologies                  

AI-based technologies depend entirely on the data used to train their algorithms, and the guiding mantra for those algorithms is "the more, the merrier" when it comes to data sets. While the benefits of AI are manifold, organizations must be cognizant of individuals' right to decide how their data is used, which raises data privacy challenges in developing and using AI.

Organizations are obligated to process personal data within the parameters of applicable laws and to be transparent with individuals about aspects such as the purpose of processing, profiling and AI-based decision-making. Additionally, they must maintain data accuracy, obtain consent before processing data, retain personal data only for a predetermined period and implement security measures such as access control and encryption. Existing privacy compliance and maturity frameworks would therefore require review to factor in the obligations that come with the use of AI.
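As an illustration of how such obligations can be reflected in engineering practice, the following is a minimal, hypothetical Python sketch that gates processing on recorded consent for a specific purpose and on a predetermined retention period. The record structure, field names and retention value are assumptions for illustration, not requirements of any particular law.

```python
# Hypothetical sketch: allow processing only with consent for the stated purpose
# and within an assumed retention period. Field names are illustrative.
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=365)  # assumed retention period for this purpose

def may_process(record: dict, purpose: str) -> bool:
    """Return True only if consent covers the purpose and the data is within retention."""
    consented = purpose in record.get("consented_purposes", [])
    age = datetime.now(timezone.utc) - record["collected_at"]
    return consented and age <= RETENTION

record = {
    "consented_purposes": ["credit_scoring"],
    "collected_at": datetime.now(timezone.utc) - timedelta(days=30),
}
print(may_process(record, "credit_scoring"))  # True: consented and within retention
print(may_process(record, "marketing"))       # False: no consent for this purpose
```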

The decisions made by AI-based platforms may be laced with human bias if the experiences, political and social ethos, and philosophical inclinations of the developers and trainers seep into the training models. This could lead to race- or gender-based discrimination and contradict the fairness principle underpinning the right to privacy. Organizations would be required to reexamine the criteria and calculations used to develop their algorithms.

Considering the large volumes of personal data required to develop AI-based services and products, the data minimization principle could be a challenge. This principle requires the use of personal data to be adequate, relevant and limited to what is necessary for achieving the purpose of processing. While training algorithms, however, it is difficult to predetermine the purpose of processing, as predicting what the algorithm will learn can be arduous. The training itself may also affect the purpose of processing, which can change as the machine learns and develops. Entities would benefit from assessing the risk posture of their processing activities and deleting personal data no longer needed to achieve the predetermined business purpose.
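One way to operationalize data minimization in a training pipeline is sketched below, assuming a hypothetical mapping of purposes to permitted fields: only the fields needed for the stated purpose ever reach the model, and everything else is dropped at ingestion. The purpose and field names are invented for illustration.

```python
# Hypothetical sketch of data minimization: keep only the fields a stated purpose
# requires before the record enters any training or scoring pipeline.
FIELDS_NEEDED = {"credit_scoring": {"income", "existing_debt", "repayment_history"}}

def minimize(record: dict, purpose: str) -> dict:
    """Return a copy of the record restricted to fields needed for the purpose."""
    allowed = FIELDS_NEEDED[purpose]
    return {k: v for k, v in record.items() if k in allowed}

raw = {"name": "A. Kumar", "phone": "98xxxxxxx", "income": 52000,
       "existing_debt": 4000, "repayment_history": "on_time"}
print(minimize(raw, "credit_scoring"))
# {'income': 52000, 'existing_debt': 4000, 'repayment_history': 'on_time'}
```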

Organizations should conduct a data protection impact assessment before personal information is processed to identify privacy and security-related risks. Another approach is privacy by design, which calls for building privacy protections such as encryption, differential privacy, federated learning and generative adversarial networks into the system so data is safeguarded by its default settings.
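To give a sense of one of the techniques named above, the following is a minimal Python sketch of differential privacy: Laplace noise calibrated to a counting query's sensitivity is added to an aggregate statistic before it is released. The epsilon value and the data are assumptions for illustration only.

```python
# Hypothetical sketch of differential privacy for a counting query:
# Laplace noise with scale sensitivity/epsilon is added before release.
import numpy as np

rng = np.random.default_rng()

def dp_count(values, epsilon=1.0):
    """Release a noisy count; the sensitivity of a counting query is 1."""
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return len(values) + noise

patients_with_condition = ["p1", "p2", "p3", "p4", "p5"]
print(dp_count(patients_with_condition))  # e.g. 5.7: near the true count of 5,
                                          # without exposing any one individual
```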

As the AI ecosystem in India grows, so does the potential for misuse of personal information and invasion of privacy. The challenge lies in creating a regulatory and compliance framework that protects individuals against adverse effects from the use of personal information in AI without unduly restricting AI's development, given its vast array of applications and socioeconomic benefits.

