Singapore has proposed a new international standard to test generative AI systems that could help organisations better assess the safety of the new technology and build trust over time.
The first of its kind, it seeks to bring common benchmarking and red teaming methodologies to testing. This means organisations can replicate results and compare them to determine if their AI systems are safe and reliable.

The proposed standard – called ISO/IEC 42119-8 – is yet to be confirmed. It will be discussed at a meeting that brings together more than 35 national bodies and over 250 AI experts from around the world in Singapore this week.
Organised by the Infocomm Media Development Authority (IMDA) and Enterprise Singapore, the 17th ISO/IEC JTC 1/SC 42 plenary meeting is being held in Southeast Asia for the first time.
Singapore’s push for an AI standard comes as AI has become more embedded across industries and societies. As a small country that leads in AI adoption and practices, it believes globally recognised standards are needed to ensure AI is deployed safely.
The proposal builds on Singapore’s previous AI assurance work through IMDA, which includes the AI Verify Toolkit, the Starter Kit for Testing of LLM-Based Applications for Safety and Reliability, and the Global AI Assurance Sandbox.
These efforts are part of Singapore’s broader push to advance international AI standards. This is seen in the national adoption and accreditation programme of the ISO/IEC 42001 standard (for AI system management) led by EnterpriseSG. The country has also contributed practical use cases to support ISO/IEC TR 24030, which documents real-world AI applications.
Singapore is not alone in pushing for standards for AI safety and assurance. The United States has published the NIST AI Risk Management Framework and its generative AI profile, which are often used to incorporate trustworthiness considerations into the design, development, use and evaluation of AI products, services and systems.
