
Asia Pacific Dashboard Digest | Notes from the Asia-Pacific region, 6 Oct. 2023



Good day, fellow privacy professionals.

The public launch of ChatGPT in November 2022 ignited the world's imagination, as an unprecedented number of individuals and organizations experimented with generative artificial intelligence for the first time. Over the course of 2023, multiple reports, consultations and guidelines have been put forth by regulators, law firms and consultancies.

There is also a growing number of scholarly papers on various topics from "prompt engineering" to the importance of data hygiene to the potential impacts of generative AI on the economy — including both the chance for productivity gains and the risk of workforce displacement. Consumers, professionals and executives, alike, are presented with a persistent stream of news and op-eds focused on the economic and social implications of large language models. 

Many of us, as knowledge workers, have experienced viscerally — perhaps for the first time — anxiety about competing with machines. Imminently, generative AI will be integrated "horizontally" into a vast array of existing software products and services. Indeed, market leaders such as Microsoft are forging ahead with enterprise-grade solutions that incorporate LLMs in their tech stack.

From a privacy perspective, pressing concerns relate to the absence of legal bases for processing personal data, use of personal data to train platform models, a lack of user notification, lack of transparency, security (e.g., "prompt injection attacks") and the creation of false information. Legal and ethical issues surrounding fraud and impersonation, cheating (in exams, for example), infringing intellectual property rights, and defamation are worrying too.

There has been intense scrutiny in Europe, with countries including France, Germany, Italy, Poland and Spain opening investigations into ChatGPT's compliance with the EU General Data Protection Regulation. The EU currently has a draft AI Act which seeks to regulate AI systems according to fundamental rights and values.

Certain Asian regulators are also troubled by privacy concerns around ChatGPT and other generative AI platforms. China issued Interim Measures for the Management of Generative AI Services, which apply to generative AI tools aimed at Chinese residents and address, among other things, content moderation and liability, transparency and user protection.

In the same vein, South Korea's Personal Information Protection Commission released guidance for the safe use of personal information in AI environments and seeks to further regulate AI tools with its draft bill on "Fostering AI Industry and Establishing Trust." This is in addition to Korea's efforts to establish new standards for copyright in AI-generated content. Japan's Personal Information Protection Commission warned OpenAI not to collect data without consent and noted it may take action if necessary.

In contrast, Singapore chose to embrace AI with its Model AI Governance Framework and AI Verify initiative, promulgated by the Infocomm Media Development Authority. The former prescribes the high-level principles that AI-assisted decision-making should be explainable, transparent and fair, and that AI systems should be human-centric and safe. The latter provides "a self-testing toolkit to demonstrate responsible deployment of AI."

Regardless of the approach taken by different countries, AI, and generative AI in particular, will remain a pressing and evolving privacy topic for the foreseeable future.

On this note, I have been working with Singapore Management University's Centre for AI & Data Governance and Microsoft on an Industry Playbook on generative AI. The Industry Playbook aims to provide an end-to-end, deep-dive assessment of the generative AI landscape. It takes the sudden ubiquity of LLMs and the immaturity of the applications built on them, such as OpenAI's ChatGPT, as its starting point. It then examines the comparative legal, ethical, privacy and security issues arising from that landscape. Examples of industry-specific and cross-sector use cases will also be explained. Finally, the playbook will illustrate how these principles can be implemented in practice and consider the future direction of generative AI.

To summarize, I am excited about the future of privacy with the rapidly changing landscape of technology. I believe generative AI and ChatGPT are only a glimpse of the thrilling times ahead of us.
