The rush to gain the upper hand in AI, whether between geopolitical rivals or companies in a competitive sector, has taken on a somewhat reckless character since ChatGPT first broke onto the scene three years ago. Move fast and break things has been the mantra.
Yet, as they now grapple with the many issues that AI has caused, even as most of its promises remain unrealised, organisations are beginning to ask whether they skipped the foundational steps of governance and security along the way.
Since the first leaks of sensitive data exposed AI's vulnerabilities early on, the cautionary tales have piled up. A careless AI deployment risks not just cybersecurity breaches but also legal trouble, as clearer regulations take shape across Asia-Pacific.
Consider shadow AI, where employees use or install unsanctioned AI tools for their work and expose their companies to security and compliance risks.
Amazon, for example, has found its internal data surfacing in ChatGPT. Now imagine a hospital employee carelessly leaking patient data, or a government staffer exposing citizen information for hackers to steal.
Then there is the governance challenge. Not only does AI need to be secure, it has to be unbiased and accurate in delivering outcomes.
Can someone poison your training data so that your models make the wrong decisions? And have you done enough to ensure your models are not biased to begin with? Most organisations have trouble confirming they are on solid ground.
“Governance and innovation are often positioned as opposites but in GenAI you need to bake in governance from the get-go,” said Guy Hilton, vice-president for strategy and go-to-market at Amdocs, which helps build digital services for large companies.
Making sure guardrails are in place at the start might slow you down initially, but a lack of proper controls will cause AI to falter badly eventually, he cautioned.
“Put guardrails in when you start so you create responsible AI, something you can moderate, record, analyse and roll back,” he advised.
“If you turn AI on and let it loose, with no governance or retraining or traceability, you are not going to get to the optimal place you want to get to,” he told Techgoondu in an interview at the recent Singapore FinTech Festival.
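What “moderate, record, analyse and roll back” could look like in practice is an audit trail wrapped around every model call. Here is a minimal sketch in Python, where call_model() and the keyword screen are hypothetical stand-ins for whatever model and moderation policy an organisation actually uses:

```python
import json
import time
import uuid

AUDIT_LOG = "ai_audit_log.jsonl"  # append-only record of every interaction
BLOCKED_TERMS = ["internal use only", "patient id"]  # illustrative screen

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for a real model call."""
    return f"(model response to: {prompt})"

def moderate(text: str) -> bool:
    """Return True if the text passes a simple keyword screen."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

def governed_call(prompt: str, user: str) -> str | None:
    """Run a model call behind moderation, leaving an auditable record."""
    record = {
        "id": str(uuid.uuid4()),
        "time": time.time(),
        "user": user,
        "prompt": prompt,
    }
    if not moderate(prompt):
        record["outcome"] = "blocked"
        response = None
    else:
        response = call_model(prompt)
        record["outcome"] = "allowed"
        record["response"] = response
    # Every call is recorded, so it can be analysed or traced back later.
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")
    return response
```

The keyword list is a placeholder; the point is that every interaction leaves a record that can be moderated, analysed and, if a decision proves wrong, traced and rolled back.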
Agentic AI, which proponents say will soon be a virtual labour force to tap on, is another potential hazard. After all, AI agents will attempt to reason without any prompting – they will interpret a user’s intent and take action semi-autonomously.
Again, this underlines the importance of guardrails, which will ensure that an AI’s autonomy is within the bounds set by human users, said Hilton.
The question of trust will come up repeatedly as companies embark on their agentic AI journeys, he noted, because it has to be built on clear AI controls and governance.
With trust still not quite there, much of today's agentic AI is deployed internally or in the back office, say, to assist in data migration or to manage cybersecurity alerts, he noted.
It is not ready for full-blown interactions with customers, such as assessing an insurance claim and explaining to them why it was approved or rejected, he added.
Unsurprisingly, governance is expected to be a top agenda item in the coming years, starting with governments themselves. By 2028, 80 per cent of governments will have their adoption and ongoing monitoring of AI independently audited, according to research firm Gartner.
That said, even if organisations wish to bake governance into their AI efforts, it is not as simple as putting up a list of rules to follow. Even government regulators have trouble keeping up.
“The challenge in governing AI is about the speed at which AI is moving,” said Greg Clark, a director of product management at OpenText, which provides information management software to businesses.
“It is evolving so fast that most governance models cannot catch up,” he noted, pointing to the different regulatory regimes across the world, such as the European Union’s AI Act.
The trouble for a lot of companies is that they don’t know where their sensitive data – their crown jewels – are, he pointed out.
A related issue, he added, is explainability – if an insurance company is using AI to automatically update customers’ credit scores, it has to be able to explain how AI is coming up with the numbers.
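One simple form of that explainability is to decompose a score into per-feature contributions, which is trivial for a linear model. A toy sketch, with feature names, weights and the base score all invented for illustration:

```python
# A toy linear scoring model: score = base + sum of (weight * feature value).
# Feature names, weights and the base score are invented for illustration.
WEIGHTS = {
    "payment_history": 2.5,
    "outstanding_debt": -1.8,
    "account_age_years": 0.9,
}
BASE_SCORE = 600.0

def explain_score(features: dict[str, float]) -> float:
    """Print each feature's contribution and return the final score."""
    score = BASE_SCORE
    print(f"base score: {BASE_SCORE:.1f}")
    for name, weight in WEIGHTS.items():
        contribution = weight * features[name]
        score += contribution
        print(f"  {name}: {contribution:+.1f}")
    print(f"final score: {score:.1f}")
    return score

explain_score({
    "payment_history": 30.0,
    "outstanding_debt": 12.0,
    "account_age_years": 5.0,
})
```

Real scoring models are rarely this simple, and non-linear ones need attribution techniques such as SHAP, but the principle holds: a number handed to a customer should decompose into reasons.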
Singapore's approach – innovation with intent – is a great blueprint for the region, he noted, ensuring that AI innovation does not outpace governance.
AI agents, he said, have to be treated like human users accessing data and governed in real time – if they access data at an unusual time or with suspicious frequency, an alert should be triggered.
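A crude version of those rules, with the thresholds invented for illustration, might look like this:

```python
from collections import defaultdict, deque
from datetime import datetime

WINDOW_SECONDS = 60            # look-back window for the rate check
MAX_ACCESSES_PER_WINDOW = 20   # illustrative frequency threshold
WORK_HOURS = range(8, 20)      # accesses outside 08:00-19:59 are unusual

recent_accesses: dict[str, deque] = defaultdict(deque)

def check_agent_access(agent_id: str, when: datetime) -> list[str]:
    """Apply the same anomaly rules to an AI agent as to a human user."""
    alerts = []
    if when.hour not in WORK_HOURS:
        alerts.append(f"{agent_id}: access at unusual time {when:%H:%M}")
    window = recent_accesses[agent_id]
    window.append(when)
    # Drop accesses that have fallen out of the look-back window.
    while window and (when - window[0]).total_seconds() > WINDOW_SECONDS:
        window.popleft()
    if len(window) > MAX_ACCESSES_PER_WINDOW:
        alerts.append(f"{agent_id}: {len(window)} accesses in {WINDOW_SECONDS}s")
    return alerts
```

In production this would sit behind an identity system that gives each agent its own credentials, so that its behaviour can be monitored and revoked like any other user's.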
Indeed, many of these considerations are only coming to the fore now, as organisations take stock of their AI efforts after a few years of breakneck deployments, mostly made under pressure from top-down AI mandates.
The result: Many don’t even know where they have gone wrong with AI, and some are now looking at the areas where they might be exposed after the initial rush.
Unlike the move to the cloud, which was largely planned with a lot of thinking and budgeting, AI has not been deployed in the same controlled manner, pointed out Tomer Avni, vice-president for product and go-to-market at security firm Tenable.
“With AI, it’s not controlled – they (users) are using what they want,” he told Techgoondu. “It’s hard to grasp everything and to block everything when AI is moving so fast.”
To get their AI deployment back under control, organisations have to know where to look, by first finding out what they have in their digital environment, he added.
One large organisation in Asia-Pacific, he noted, found employees using Copilot and ChatGPT for important decisions, when the AI is not ready for them.
There is a fine line between using AI to assist in decision making and depending on it heavily to do so, he noted, adding that the risks need to be weighed based on the tasks and departments involved.
“Everyone is very eager to adopt AI, asking how to leverage AI,” he said. “The eagerness is there but the monitoring part has ways to go – there is a gap.”

