When Singapore Minister for Digital Development and Information Josephine Teo announced the country's artificial intelligence regulatory sandbox was going live in the summer of 2025, she framed the development as one reaching beyond the country's small borders.
According to Teo, the initiative was necessary because of the speed and scale at which AI is being adopted around the world, with regulation unlikely to keep up.
"There are many stages to go through. In Singapore at least, we have taken the critical first steps to grow the ecosystem for testing and assurance," she said during Singapore's Personal Data Protection Week in July. "Our hope is that industry players will join us to initiate 'soft' standards that can be the basis for the eventual establishment of formal standards."
Singapore is ranked 11th among countries for responsible AI governance practices by the Global Index on Responsible AI and is the only Southeast Asian country currently participating in the International Network of AI Safety Institutes.
As a relatively small AI deployment market — its private investment in the technology and its number of AI companies were middle of the pack globally, according to Stanford University's 2025 AI Index Report — Singapore is taking the route of providing testing infrastructure built on international standards rather than hard-and-fast regulation.
Opportunity to lead
In a February 2025 report, the Datasphere Initiative found 23 countries were planning to create sandbox programs specifically for AI.
Singapore's pilot launch marked a more proactive approach than those of more prominent jurisdictions. U.S. Sen. Ted Cruz, R-Texas, proposed a bill in September to create an AI sandbox program that would offer exemptions from federal regulations in exchange for participation. The U.K. announced 21 Oct. that it is exploring an AI sandbox program of its own.
But whereas those proposals focus on each country's own laws, Singapore's sandbox takes a wider approach: it sits under the country's AI Verify Foundation and takes its cues from the foundation's testing framework.
That framework has 11 principles familiar to AI governance professionals — including transparency, safety, repeatability, fairness and human oversight — and is mapped to international frameworks. Those include the U.S. National Institute of Standards and Technology's Artificial Intelligence Risk Management Framework, the Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems and the International Organization for Standardization's ISO/IEC 42001.
Sang Hao Chung, deputy director of AI governance and safety at Singapore's Infocomm Media Development Authority, or IMDA, said the country's sandbox program serves three key purposes.
"Firstly, it is to reduce testing-related barriers to (generative AI) adoption through both practical guidance and access to specialist testing service providers," he said. "Secondly, the output of the sandbox provides valuable inputs into the development of and eventual technical testing standards for GenAI applications. Last but not least, the sandbox supports the growth of a viable AI assurance market."
Chung indicated the program's international outlook is intentional: no single country has a monopoly on the best way to address AI risks, or on all the identified use cases. "It is thus essential to harness the global community's efforts to identify, test and mitigate risks associated with generative AI applications," he said.
Sandbox vs. regulation
Unlike some of the bigger developer and deployer markets, Singapore is small and highly connected to other countries, and therefore unable to insulate itself from AI developments, according to Future of Privacy Forum Managing Director for the Asia-Pacific Josh Lee.
Such a posture requires more careful consideration of how the country chooses to regulate AI. Moving too quickly could push companies toward jurisdictions with more favorable rules, putting Singapore at risk of losing out on growth opportunities.
Singapore has taken some concrete steps to regulate election-related deepfakes, such as the Elections (Integrity of Online Advertising) (Amendment) Act, but otherwise it chooses "to clarify where it can, to test where it should, and to legislate where it must," Lee said.
By launching a sandbox, Singapore can offer itself as an international testing environment and thus build its own relevance in the AI world, Lee said. That feeds into the country's overall AI strategy and its objective of encouraging homegrown development efforts.
Singapore has taken a similar approach to regulating other sectors. The Monetary Authority of Singapore created the fairness, ethics, accountability and transparency, or FEAT, principles in 2018, which laid the groundwork for how it governs AI in the fintech sector. It runs a regulatory sandbox dedicated to fintech, as well as another focused on privacy-enhancing technologies, or PETs.
'Controls across multiple geographies'
For businesses, the benefit of a sandbox is the chance to test a product or method before it hits the wider market, securing some degree of regulatory certainty before committing to an idea.
Mastercard participated in the Singapore PETs sandbox in 2023, using the experience to explore how homomorphic encryption could be used to share information between multiple countries. Its findings became a significant part of the company's 2024 white paper on PETs.
For Mastercard Deputy Chief Privacy Officer for AI Derek Ho, CIPP/US, CIPM, having a contained environment in the PETs sandbox allowed the company to consult with the many parties involved in the regulatory process at once. It not only led to useful insights but sped up the process and created documentation of use cases for others to refer to when considering their own adoption.
"Singapore's approach of looking at NIST and ISO allows businesses to ensure that we are able to apply a consistent set of controls across multiple geographies, which allows us to scale and reduce the need for separate governance structures for each jurisdiction," he said.
Standard Chartered, a British multinational bank, reported a similar experience when participating in the AI sandbox's pilot program, during which it worked with an independent firm engaged by the country to evaluate a generative AI-driven application. Mohammed Rahim, the bank's group chief data officer, said the exercise let Standard Chartered benchmark its approach against the other third parties in the assessment and evaluate the large language model in a systematic way.
The company has worked with the IMDA before, leaving Rahim confident the project "would be well managed, comprehensive, and potentially provide meaningful results." The application tested during the program was still in the pilot stage as of this fall as the bank incorporates its learnings from the sandbox, he said.
The sandbox will not stay static. IMDA's Chung said he anticipates the risk dimensions explored in the program will evolve as new AI technologies and use cases appear.
The lessons learned will inform the country's overall legislative approach to AI, including flagging concerns around generative AI for lawmakers and helping identify gaps in AI testing. He said this could lead to better research and development methods down the road, especially for domestic and international AI testing standards.
Caitlin Andrews is a staff writer for the IAPP.
