AI deepfakes are being weaponised and people are not ready

Alfred Siew

Just for fun, I made an innocuous video of a baby with Google’s free AI tools earlier this week. I asked the AI to create a clip of a cute baby cooing and smiling at her parents.

The result, which I showed to friends and family, surprised many, who did not initially recognise it as AI-generated. A friend even joked that I might be thinking of having another child.

Despite seeing many examples of deepfakes in the news and on social media, it is difficult for most people to discern real from AI-generated video and audio. Text can be fake, sure, but we have trusted our eyes and ears all our lives. Now, seeing is not believing.

Worryingly, people often think they can spot a deepfake until they are confronted by one. In other words, they are poorly prepared to face the deluge of deepfake scams that are surely coming their way in the months and years ahead.

AI video generated with Google. VIDEO: Alfred Siew/Google AI

Earlier this month, Singapore’s Cyber Security Agency (CSA) revealed that only one in four people here can distinguish a deepfake video from a legitimate one. Yet, a majority say they are confident they can do so.

They should know that deepfake technology is quickly being weaponised, as many security experts have warned in recent years.

Last week came news that an impersonator used AI to generate the voice of United States Secretary of State Marco Rubio while making calls to three foreign ministers in June.

And, already, deepfake scams have been costly. Last year, a finance officer in Hong Kong was duped by a deepfake video call with several “colleagues” into wiring HK$200 million to hackers who had carefully researched and targeted the victim.

“Over the past year, deepfake scams have advanced at an alarming pace, driven by the commoditisation of AI-based tools and the growing availability of deepfakes as a service,” said Steven Scheurmann, regional vice-president of Asean at cybersecurity firm Palo Alto Networks.

There are entire marketplaces offering deepfake video and audio tools, with services ranging from free to premium subscriptions of up to US$249 per month, he told me.

“These capabilities are no longer the domain of sophisticated threat actors alone; criminals of any skill level can now produce highly convincing impersonations of high-profile executives, celebrities, and even employees in an organisation,” he warned.

Indeed, the familiar North Korean IT worker scam, where hackers join organisations as remote workers to later attack them from the inside, now involves deepfake video feeds to fool recruiters during interviews over video calls.

How hard is it to make such deepfakes? A Palo Alto Networks researcher with no expertise in image manipulation was able to create a synthetic identity in just 70 minutes using freely available tools and inexpensive consumer hardware.

I spent only a few minutes making the cute baby video with Google. If I were to access some of the tools available to make deepfakes, perhaps another hour on the PC would enable me to generate a convincing fake identity.

Given that people in Singapore already lose S$1.1 billion a year to scams, it is time to raise the alarm on the convincing deepfake scams that will start reaching them on their phones and PCs, via their social media feeds, WhatsApp chats and e-mails.

The authorities here have warned of increasingly convincing deepfakes that are hard to detect even for the trained eye. Besides advising people to analyse a video, say, for stiff movements, they also call for tools that help authenticate content provenance.

These tools may be the key to “mass inoculation” against an upcoming wave of fake content. Just like scam call filters and fake SMS ID blocking can keep out a large number of cyber criminals today, easily available content analysis tools should be used to help the majority of consumers discern real from fake.

In other words, use AI to fight AI because, frankly, you can’t trust what you’re seeing with your naked eye any more. Like e-mail filters that block possible phishing messages, a content filter can flag what appears suspicious.

Of course, AI is prone to bias too. Grok, the AI on social network X, is often called on to verify facts, but after a recent revamp, it assumed the persona of “MechaHitler” and tried to push views friendly to X owner Elon Musk.

Perhaps AI is best left to state the facts, as reported by reputable sources, and to offer differing opinions for a human user to decide which is more persuasive.

Then again, facts, like statistics, can be spun into damned lies. That’s because we read into them what we want, especially in an era of information overload. Sometimes, we even want to believe what we know to be untrue.

Unfortunately, this also means deepfakes will always find a way in, like fake news today that has made people doubt the safety of vaccines and even believe the Earth is flat.

The best bet, then, may be to build up herd immunity, as with Covid vaccines. Deepfakes, like physical viruses, can’t be eradicated, but as long as most of the public don’t end up believing fake news or getting scammed out of their life savings, that may be the best outcome to expect.

Alfred is a writer, speaker and media instructor who has covered the telecom, media and technology scene for more than 20 years. Previously the technology correspondent for The Straits Times, he now edits the Techgoondu.com blog and runs his own technology and media consultancy.