Editor's note: The IAPP is policy neutral. We publish contributed opinion pieces to enable our members to hear a broad spectrum of views in our domains.
Warm greetings to all my fellow privacy and digital governance professionals.
At the recent AI Action Summit in Paris, Singapore unveiled a Global AI Assurance Pilot, which brings together emerging norms and best practices around the technical testing of generative artificial intelligence applications. Notably, the pilot does not extend to underlying foundation models, which fall outside the scope of its technical testing.
The use cases in scope for the pilot are live, or soon-to-launch, generative AI applications that affect individuals through automated or semi-automated recommendations or decisions.
A non-exhaustive list of risk dimensions covered by the testing includes safety and health risks, unfair treatment of individuals, lack of transparency and recourse, inappropriate data disclosure, malicious use and other security risks, trust and reputation concerns, financial loss, lack of appropriate human oversight, and breaches of industry-specific regulations or other internal compliance requirements.
As for the pilot's timeline, organizations can submit expressions of interest to participate by registering this month. Thereafter, technical testing will be performed on participating generative AI applications from March to April, followed by a showcase of insights at the Asia Tech x Singapore 2025 conference in May.
Singapore has also released a Joint Testing Report with Japan that aims to make large language models safer in different linguistic environments. The testing covered 10 non-English languages, including French, Japanese, Korean, Malay and Mandarin Chinese, as well as several harm categories spanning violent and nonviolent crime, intellectual property, privacy and jailbreaking. These tests are aimed at reinforcing evaluation capabilities and methodological standards with a view to mitigating associated risks.
Turning to Malaysia, the National AI Office announced a set of specialized working groups, each of which will focus on a specific area needed to shape the nation's AI strategy: AI sovereignty, AI regulation and policy, AI security, AI governance and ethics, AI safety, AI talent and AI advisory, the last of which is intended to provide insights on emerging AI trends and real-world implementation challenges.
Needless to say, AI continues to be a hot-button topic across Southeast Asian digital and data regulation.
Charmian Aw, CIPP/A, CIPP/E, CIPP/US, CIPM, FIP, is a partner at Hogan Lovells.