
The Monetary Authority of Singapore (MAS) has proposed a set of guidelines calling for AI oversight and robust risk management for financial institutions, as the technology becomes increasingly prevalent in the industry.
In a consultation paper it put out today, the government regulator argues for a more structured approach to how AI is used and managed, while it seeks industry feedback on its proposals.
Chief among them is an expectation that the board and senior management of financial institutions would govern and oversee AI-related risk across a range of areas.
MAS is proposing that they establish frameworks and structures to identify and inventorise the use of AI, assess its risks, and put in place governance and risk management. They should also manage the risks of AI throughout its lifecycle.
Financial institutions, it proposes, should identify where AI is used. They should also establish and maintain an accurate and up-to-date inventory of AI use cases, systems or models throughout the AI lifecycle.
The proposals recognise that AI is not a one-off implementation but a continuous journey, says MAS. Throughout an AI’s life cycle, financial institutions should plan for robust controls based on the risks involved, according to the regulator.
This should be assessed on areas such as data management, fairness, explainability and auditability. Among other questions, the MAS is seeking comments on the standards, processes and controls that should be applied here.
The proposed guidelines point to some basic requirements that the regulator expects of all financial institutions. At the least, they should set up basic policies for the use of AI commensurate with their level of AI adoption, it proposes.
These policies should address who is responsible for overseeing AI use, what uses of AI are allowed and disallowed, as well as how such guidelines are communicated, checked and reviewed, the regulator adds in the paper.
It also points to the possible amplification of existing AI risks by generative AI. The greater complexity of GenAI, it notes, can give rise to even greater uncertainty and unexpected behaviour.
The unstructured nature of GenAI inputs and outputs, and a lack of established techniques in this area, also make it harder to evaluate and test, as well as to understand and explain, GenAI's behaviour and outputs, it adds.
Plus, the diverse and often opaque data sources used in GenAI training, coupled with difficulties in evaluating bias in its outputs, could result in decisions that lead to unfair customer outcomes, it warns.
It even lists specific risks, such as prompt injection and data poisoning, which can lead to compromised responses from a GenAI model.
Privacy risks from a leakage of customer data to third-party GenAI tools, as well as potential copyright infringement from GenAI usage, are other risks that the MAS has highlighted in possibly its most comprehensive AI-focused policy paper to date.
Ho Hern Shin, the MAS' deputy managing director, said the proposed guidelines set clear expectations for financial institutions as they make use of AI in their operations.
Calling the guidelines “proportionate” and “risk-based”, she said they would enable responsible innovation by financial institutions with safeguards against AI risks.
The call for feedback closes on January 31, 2026. Financial institutions can expect a 12-month transition period to get their house in order after the final guidelines are issued.
