As AI agents become more common in the years ahead, they can be tested under a pioneering AI assurance effort in Singapore that aims to gauge the accuracy of generative AI and develop better guardrails.
The country’s infocomm regulator said today that it would add agentic AI, along with risks such as data leakage and vulnerability to prompt injection, to a new Global AI Assurance sandbox, as part of its push to create more trustworthy AI.
The sandbox builds on the Global AI Assurance pilot, set up earlier this year by Singapore’s Infocomm Media Development Authority (IMDA) and its non-profit subsidiary, the AI Verify Foundation, for organisations to test their AI technologies for potential risks.
A number of organisations have since used the system to test their AI deployments. Changi General Hospital in Singapore, for example, checked how well AI summarised its medical reports, while Taiwan-based human resource firm Mind-Interview tested its AI-enabled screening tool for bias and privacy risks.

Minister for Digital Development and Information Josephine Teo speaking at the Personal Data Protection Week event on July 7, 2025.
The new sandbox, announced today at Singapore’s Personal Data Protection Summit, reflects the growing complexity of AI technologies and the difficulty of ensuring they deliver accurate, balanced responses.
In less than three years, AI has evolved from chatbots that merely mimicked human language into semi-autonomous AI agents, though guardrails are often not deployed in advance.
At the same time, AI’s expanded use brings new risks, such as cyber attackers injecting malicious prompts that cause AI systems to generate false or erroneous responses for unsuspecting users.
Speaking at the data protection summit this morning, Minister for Digital Development and Information Josephine Teo pointed to the need for stricter testing of AI applications that are becoming increasingly common.
“A lot of the things that we use on a day-to-day basis, such as the appliances in our homes, the vehicles that take us to the workplace – we would not use them if they had not been properly tested,” she noted.
“And yet, on a day-to-day basis, AI applications are being used on us without having been properly tested,” she added. “So this is a lacuna, a serious gap that needs to be filled.”
She said the aim of AI testing sandboxes is to build consensus on data protection and AI governance, with subject matter experts and testers weighing in.
Standards urgently need to be developed and agreed upon, she noted, but there are also many stages to go through.
“In Singapore at least, we have taken the critical first steps to grow the ecosystem for testing and assurance,” she stressed.
“Our hope is that industry players will join us to initiate ‘soft’ standards that can be the basis for the eventual establishment of formal standards,” she added.