Q&A: For now, humans still needed to make decisions AI cannot, says Appier

June 27th, 2019 | by Alfred Siew
Dr Min Sun, chief AI scientist at Appier.

Depending on who you ask, artificial intelligence (AI) could be the greatest thing to come along in recent years or the biggest threat to human existence.

No one seems to have a clear answer on how things will pan out, even as AI becomes more sophisticated, makes automated decisions, and learns and teaches itself, sometimes without human intervention.

At least for the foreseeable future, however, humans will still be needed to make decisions that a machine is uncertain about making, said Dr Min Sun, the chief AI scientist of Taiwan-based AI firm Appier.

Humans, too, are needed to explain and communicate findings from AI and to make big decisions that only humans can make, such as a doctor advising a patient about an illness, he noted.

New guidelines will be needed for AI in future, he said, pointing to companies and countries that have already started drafting such rules.

These rules will have to provide clarity on how AI will work in future without curtailing innovation, he added, in the latest Q&A with Techgoondu.

NOTE: Replies have been edited for brevity and house style.

Q: When people speak about AI, there’s still a misconception that it refers to humanoid robots. Has that changed in recent years?

A: AI has certainly become sensationalised somewhat in the mainstream media, and people might picture a world of autonomous robots where humans have become obsolete!

However, most business and government leaders are now aware of advanced AI and data technology, and understand that, at its core, AI describes technology systems that exhibit intelligent behaviours, such as understanding text, images and other data.

We are quite some way from humans being taken over by computers. In fact, we’re much more likely to find ourselves working alongside AI technology systems in our daily work. AI helps us to be more efficient and effective. 

We call these systems "human-centred AI": they are designed from the outset with the human user in mind, complementing the things humans are good at (emotion, compassion and creativity) with the things AI is good at (logic, scale and speed).

Humans and AI can work as a highly efficient team, achieving more together than either could alone.

Q: In your view, which industries are taking up AI the fastest?

A: The retail sector is one field that is further ahead than others in the implementation of AI.

A recent survey conducted by Forrester and Appier found that 56 per cent of retailers in Asia-Pacific have either implemented or are expanding AI initiatives.

Retail is a broad sector, with players both large and small, online and offline, and often with a large amount of data to leverage.

This means there are multiple ways to adapt technology, such as streamlining payment processes with facial recognition; unifying online and offline shopping; and using AI to effectively target consumers with the most relevant content during their customer journey. 

Q: People have been told that AI is here to augment, not replace humans in the workforce. However, machines are beginning to teach other machines without human intervention, like with Google’s AI built to play Go. Do we still need a person to make decisions or train machines in future?

A: Overall, AI technology still lacks many of the attributes that we typically ascribe to humanity, such as the ability to feel compassion or empathy.

For big decisions – a medical diagnosis, for example – AI can be immensely helpful in saving a doctor time, but she will still need to look at the data, consider past experience and education, make the final decision and communicate the information to the patient in a compassionate way. 

The case of AlphaGo is an example of "reinforcement learning", in which software learns which actions to take in a given environment or situation to earn a reward, or positive outcome (in the case of AlphaGo, winning the game).

For businesses, this breakthrough can support an improvement in areas that involve resource allocation. For example, technology companies with large data centres need to ensure consistent quality while reducing power consumption.

Reinforcement learning can automatically decide which machines should perform a task, while adjusting the appropriate cooling settings at the same time.

As in the game of Go, it’s about “where to place a stone” for the best outcome. However, we will still need humans for the foreseeable future to ensure that when the AI system is uncertain about its decision, a human can quickly get in the loop of decision making with enough information about the situation.
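The reward-driven idea described above can be sketched in a few lines of code. This is a hypothetical toy, not Appier's or DeepMind's system: an agent repeatedly assigns a task to one of three machines whose hidden "efficiencies" are made-up numbers, and learns purely from reward feedback which machine to prefer. It uses a one-state (bandit) setting, the simplest special case of reinforcement learning.

```python
import random

random.seed(42)

true_efficiency = [0.3, 0.8, 0.5]  # hidden chance each machine completes the task well
estimates = [0.0, 0.0, 0.0]        # the agent's learned value estimate per machine
counts = [0, 0, 0]                 # how often each machine has been tried
epsilon = 0.1                      # exploration rate

for step in range(5000):
    # Occasionally explore a random machine; otherwise exploit the best-known one.
    if random.random() < epsilon:
        machine = random.randrange(3)
    else:
        machine = max(range(3), key=lambda m: estimates[m])

    # Reward of 1 if the chosen machine handles the task well, else 0.
    reward = 1 if random.random() < true_efficiency[machine] else 0

    # Incrementally update the running average estimate for that machine.
    counts[machine] += 1
    estimates[machine] += (reward - estimates[machine]) / counts[machine]

best = max(range(3), key=lambda m: estimates[m])
print("Learned best machine:", best)
```

After enough trials, the agent settles on the most efficient machine without ever being told the efficiencies, which is the essence of learning "where to place a stone" from outcomes alone. A system like AlphaGo adds states, lookahead and deep neural networks on top of this same reward principle.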

Q: Some countries such as Singapore have set up a panel to consider the ethical use of AI. Briefly, what issues do you foresee will arise from AI and ethics in the years ahead?

A: Ethics in AI is important because business is about optimisation of decisions, but it is also about trust among all parties involved. Ethics allows every party to know what to trust and what to expect to create a healthy environment that fosters true business success.

In the past several years, most large technology companies (such as Facebook, Amazon, Microsoft) have created committees to come up with a set of ethics guidelines or principles.

Governing bodies, such as the European Union, have drafted similar guidelines. This is the first step in ethics in AI, and the second step, which we can expect in the coming years, will be determining how to turn the guidelines into rules and how they will be enforced.

That said, this next phase cannot be rushed or approached without great care. Enforcing rules around AI will have consequences – in terms of impact, gains and losses.

The people who ultimately drive this forward will need to be highly educated on every aspect, so that the rules are neither enforced so strictly that innovation is stifled, nor so loosely that confusion results.

The rules are then likely to be tested in small areas (industries or geographies) to determine possible consequences and then, over time, applied more broadly. 
