If AI is so good, why aren’t more businesses reaping the rewards?

Alfred Siew
ILLUSTRATION: Adapted from Unsplash image.

AI is going to take your job. Or perhaps, it has already taken yours, if you are among the unfortunate thousands laid off by companies eager to prove that a computer does your job better, faster and cheaper.

A couple of weeks ago, Jensen Huang, the head honcho of Nvidia, which is making billions selling AI chips, said that programmers not using AI would lose their job to someone who uses AI.

“You can’t raw dog it,” he said at an AI summit in the United States, referring to those still manually coding their software programs.

In the same week, closer to home, Tan Su Shan, the chief executive of DBS, talked up AI at a conference as well. Even her own top job is at risk from AI, she claimed.

Perhaps she should ask how the 4,000 contract staff the bank is looking to cut in the next three years would feel. Would they think they were in the same boat as corporate leaders like her?

As an early AI adopter, DBS says it has created more than 350 use cases for AI and has 1,500 models in production. Among the tasks being taken on by AI? Content generation and writing, Southeast Asia’s largest bank revealed at an AI conference.

Surely, with such overwhelming evidence, there should be no argument by now that AI is the game changer everyone has been predicting.

Companies must be doing so much better now that they have got rid of the unnecessary humans, right? After all, they have taken the advice from AI gurus to jump in fast and gain an edge over others in this existential race.

Well, the evidence from companies adopting AI in the past 18 months may paint a vastly different picture.

Just a couple of weeks ago, automation company ServiceNow shared a study that showed organisations around the world have dropped 9 points in terms of AI maturity.

Organisations in Singapore, Japan, Australia, and India all reported year-over-year declines in AI spending as a percentage of their overall technology budgets, the company revealed.

“While the ambition is big, the foundation is not there,” said ServiceNow chief marketing officer Colin Fleming, who pointed to fragmented systems, siloed operations and added complexity as common problems as organisations rushed to deploy AI.

Describing many of today’s experimental efforts as a “hornet’s nest of complexity” and “PoC (proof of concept) purgatory”, he said efforts are often disjointed and uncoordinated.

ServiceNow, which is also selling AI technology, isn’t the first to call for a reality check on the AI hype. Earlier in June, research firm Gartner warned that more than 40 per cent of agentic AI projects would be cancelled by the end of 2027.

Most damning perhaps is how it described technology companies contributing to the hype by engaging in “agent washing”.

Yes, it’s referring to the rebranding of existing products, such as AI assistants, robotic process automation (RPA) and chatbots that lack substantial agentic capabilities, as the next big thing: agentic AI. Gartner estimates only about 130 of the thousands of agentic AI vendors are “real”.

Indeed, the same reality check is needed for anything from vibe coding to graphic design to copywriting. The idea that you don’t need experts any more is clearly unsound.

Cutting-edge AI tools actually slowed down experienced software developers when they were working in codebases familiar to them, instead of supercharging the work, a study from non-profit METR revealed.

Using AI increased the time needed to complete tasks by 20 per cent, the study found. The study’s authors, believing initially that AI would speed things up, were shocked by the result, Reuters reported.

This is not to mention the many issues that people who use AI extensively have reported. Hallucinations, for example, can give you really bad answers.

Google “privacy law Singapore amendment 2024”, for example, and the AI summary would likely give you some authoritative-sounding text of the various changes made in the last year.

The AI even lists its sources, which could include links to Singapore’s online law website and analyses by respected law firms. Yet, go through each of these links carefully and you’d realise the AI has merged the changes made in Malaysia’s set of laws with Singapore’s.

Doing research on this recently, I looked through the Singapore Statutes website and couldn’t find the amendments mentioned by the AI. Sure, the law firm citations seemed legit, but then I realised they were referring to the Malaysian version of the Personal Data Protection Act, which has the same name as the law in Singapore!

In other words, the AI simply found a few sources online, decided to merge them and presented its findings as authoritative fact. It’s like a student who has studied for an exam and is ready to regurgitate everything, whether or not it answers the question correctly.

I use AI for research regularly and most of the time, it’s been really helpful. However, if I had not been careful and done my checks, the serious errors in this instance would have been devastating to my work.

If simple chatbots and large language models (LLMs) are already so problematic, what more the semi-autonomous AI agents of the future?

Surely, what organisations need is a better AI foundation, be it data quality, cloud infrastructure or simply the human expertise to check, check, check.

After all, if an AI incorrectly approves an insurance claim, it would be hard for the insurer to claw back the money by blaming a faulty AI agent. Similarly, it has to be ready to explain its AI agent’s decisions to reject certain claims.

All said, why is AI so good yet so dangerously unfit for real work so often? Why are people being laid off in the name of AI when AI clearly isn’t ready to take over?

Actually, these vastly different scenarios can be true at the same time. For starters, some companies that were gung-ho on AI have reassessed their earlier optimism.

For one, Klarna Group, a Swedish buy-now-pay-later company, has decided to hire people again to speak to customers, after earlier saying AI agents could do their job.

Chief executive Sebastian Siemiatkowski even declared in 2023 that he wanted his company to be OpenAI’s “favourite guinea pig”.

Yet, despite being more realistic about AI, his new approach is what many early adopters have since pivoted towards: putting experts in place to manage, teach and correct the new AI systems and agents being deployed.

Someone has to be responsible for training and overseeing the AI – it can’t be that the CEO gets sacked if an AI agent makes a mistake.

It is also not true that AI is all hype. Businesses that have successfully adopted AI often already have a firm foundation, perhaps in simpler forms of process automation in the past. They also start small and win small before scaling up projects with real lessons learnt.

For example, if your company already has in place automated processes, say, for new employees to be onboarded with their digital IDs, devices and network access, then it’d be a lot easier to get agents to take over more of these processes in future.
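To make the point concrete, here is a toy sketch of such an onboarding pipeline. It is purely illustrative, with made-up step names (`create_digital_id`, `assign_device`, `grant_network_access`) rather than any real HR or IT system’s API:

```python
# Hypothetical onboarding pipeline: each step is a small, well-defined function.
# In a real company these would call identity, asset and network systems.

def create_digital_id(employee: str) -> str:
    return f"id:{employee.lower()}"       # e.g. create a corporate login

def assign_device(employee: str) -> str:
    return f"laptop:{employee.lower()}"   # e.g. reserve and ship a laptop

def grant_network_access(employee: str) -> str:
    return f"vpn:{employee.lower()}"      # e.g. enable VPN and Wi-Fi access

def onboard_new_hire(employee: str) -> list[str]:
    """Run the standard onboarding steps in order and record what was done."""
    steps = (create_digital_id, assign_device, grant_network_access)
    return [step(employee) for step in steps]
```

Once every step is already a callable interface like this, handing an AI agent the same hooks is a far smaller leap than trying to automate everything from scratch.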

Similarly, if you already have great customer service with many processes automated today through interconnected systems, say, across logistics and service quality, then you can more easily get AI agents to do some of the legwork.

Got someone calling in for a broadband problem? A smart agent with access to customer data and network applications can do a quick speed test and even roll back a software update to enable a customer to get online again quickly.

If the problem can’t be solved, the AI agent can set an appointment for a technician to turn up on location to check further, because it would have the access and intelligence to handle the tasks.
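The troubleshooting flow described above boils down to a simple decision tree. This is a hypothetical sketch, assuming a crude “healthy speed” threshold and made-up action names, not any real operator’s system:

```python
def handle_broadband_fault(speed_mbps: float, plan_mbps: float,
                           had_recent_update: bool) -> str:
    """Decide the next step for a customer reporting a broadband problem."""
    if speed_mbps >= 0.8 * plan_mbps:
        return "no_fault_found"       # measured speed is close to the plan speed
    if had_recent_update:
        return "rollback_update"      # likely culprit: the latest software push
    return "book_technician"          # can't fix remotely, send someone on-site
```

In a real deployment, each return value would trigger an action through the operator’s own systems; the agent only adds value because those systems are already accessible to it.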

This is not the same as sticking an AI agent or chatbot in front of a customer service hotline or chat and denying easy access to real help. If your customer service is bad, sticking AI on it will probably make it even worse. Yes, Singapore telecom operators, do take note.

In a nutshell, you can say that AI is making a difference in some businesses – the industry leaders – but most are still justifying their efforts in this grand experiment. Many efforts will be stuck in proofs of concept, mere experiments.

This is no comfort for those who have lost their jobs at organisations that have taken the opportunity to cut costs, even if AI isn’t the answer. They are the unfortunate guinea pigs of their employers’ lab trials with AI.

Perhaps they can find some cheer in knowing that these organisations have set back their own AI journeys by getting rid of the experts that should have guided and managed these AI agents of tomorrow.

After all, successful AI adopters will not only need the technology foundation in place but also humans and processes to integrate well for real results. Like all technology, AI is no magic bullet.

Let’s also not forget that AI lacks something that humans have: intuition and the sparks of creativity that come from collaboration. No, not the seamless exchange of information and tasks between AI agents, but the friction of disagreement between people. That friction is what you need to create true innovation.

Alfred is a writer, speaker and media instructor who has covered the telecom, media and technology scene for more than 20 years. Previously the technology correspondent for The Straits Times, he now edits the Techgoondu.com blog and runs his own technology and media consultancy.