© 2023 Goondu Media Pte Ltd. All Rights Reserved.
Q&A: APAC firms are keen on AI, but should tread carefully, says Cloudera

Ai Lei Tao
Published: August 18, 2023 | Last updated: August 18, 2023 at 4:53 PM
9 Min Read
Daniel Hand, field CTO for APJ, Cloudera. PHOTO: Cloudera

Organisations in the Asia-Pacific are using or plan to use AI applications as a way to keep ahead of the technological curve.

Some 88 per cent of Asia-Pacific organisations are using or plan to use AI applications in the next 12 months, according to IDC. Generative AI, in particular, is growing in popularity, with nearly two-thirds of organisations investing in or planning to invest in it by 2023. 

Businesses in the region are working to protect sensitive information and to safely and economically get value from large language models (LLMs). LLMs are being used for everything from improving developer efficiency to summarising complex, dense reports for analysts and improving the efficiency and effectiveness of customer call centres.

Usage policies are being carefully developed and self-hosted LLMs are increasingly being deployed to complement the use of SaaS-based LLMs. There is also a focus on ethical and responsible AI with governments and regulatory bodies playing an increasingly important role. 

Regulatory bodies are now under pressure to address issues around data privacy and security, intellectual property rights, and the potential misuse of AI-generated content. Countries like India are drafting the Digital India Bill to regulate AI and keep their digital citizens safe.

Singapore has launched its AI Verify Foundation to promote the development of tools for responsible AI usage, and boost AI testing capabilities to meet the needs of companies and regulators globally.

With this nascent technology, organisations have to consider key risks and limitations of AI today, even as they pursue its benefits, says Daniel Hand, field chief technology officer for APJ at Cloudera, which provides data analysis tools for cloud-based data.

In this month’s Q&A, he calls for organisations to better understand the risks of AI and consider how to carefully innovate for successful AI implementations. 

NOTE: Responses have been edited for style and clarity.

Q: What are some of the key risks and limitations of AI today, in terms of enterprise use? 

A: Ethical issues (especially bias and discrimination), data privacy, data security, transparency and explainability, and concerns around the accuracy and relevance of answers are significant risks associated with AI models.

These risks can impact an organisation’s brand and service reputation. A larger, contextually relevant training dataset leads to better outcomes, but suboptimal or misleading results may occur if suitable context is missing or if data lineage is questionable.

AI models can be influenced by bias, often due to poor data preparation during the model training process. This can result in negative outcomes like lost service or revenue, and legal consequences. There have been several high-profile cases of bias influencing credit limits and insurance policies.

AI-supported decisions made within an opaque black box where there is a lack of explainability and transparency can introduce risks that may violate industry guidelines and data protection regulations. 

An example is the dismissal of workers in the Netherlands without either suitable human intervention or transparency and explainability in AI supported processes. The employer was found to have violated article 22 of General Data Protection Regulation (GDPR).

There have been significant advances in AI and ML algorithms and in the performance of LLMs. However, few organisations have the resources to train these models. They can either consume closed-source proprietary models as public SaaS services or host open source models in a trusted environment.

The risks include a lack of transparency, biases, and sharing incorrect information or worse, sensitive data. Some reported cases have led to the organisations tightening usage policies, often with a blanket ban on using public SaaS-based LLMs.

Plus, generative AI models often lack contextual understanding of enterprise questions, leading to incorrect or inaccurate responses.

For example, a chatbot replying to a query on warranty duration can fail to provide important context. That causes confusion and misunderstanding, especially when dealing with issues outside warranty coverage. This can negatively impact customer satisfaction, credibility, and trust in the business.

Q: What can organisations do to mitigate such risks?

A: Let’s focus on the risks of data privacy, contextual-related performance, and ethical or responsible AI.

Data privacy risks are crucial for organisations to mitigate. To ensure data privacy, organisations should classify data and provide clear guidelines on usage. For instance, using a SaaS-based LLM for sensitive internal documents may violate data management policies.

Besides putting in place policies, guidelines, and technology to control data privacy, organisations need to augment SaaS-based solutions with their own privately hosted solutions that provide comparable performance.

To ensure contextual relevance and performance, organisations should control access to the prompt and inject relevant context through Retrieval-Augmented Generation (RAG).
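The RAG pattern described above can be sketched in a few lines. This is a minimal, illustrative example: the corpus, keyword-overlap scoring, and prompt template are stand-ins for a real vector store, embedding model, and LLM call, which are not detailed in the interview.

```python
# Minimal sketch of Retrieval-Augmented Generation (RAG): retrieve the
# documents most relevant to a query and inject them into the prompt as
# context, so the model answers from enterprise data rather than guessing.

def score(query: str, doc: str) -> int:
    """Toy relevance score: count of query words appearing in the doc."""
    return sum(1 for w in query.lower().split() if w in doc.lower())

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the top-k documents by relevance to the query."""
    return sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Inject retrieved context ahead of the user question."""
    context = "\n".join(retrieve(query, corpus))
    return (f"Context:\n{context}\n\n"
            f"Question: {query}\nAnswer using only the context above.")

# Illustrative internal documents, e.g. for the warranty chatbot scenario.
corpus = [
    "Warranty coverage lasts 24 months from the date of purchase.",
    "Refunds are processed within 14 business days.",
    "Out-of-warranty repairs are billed at standard service rates.",
]
prompt = build_prompt("How long is the warranty coverage?", corpus)
```

In production the retriever would be an embedding-based similarity search over a governed document store, but the control point is the same: the organisation decides what context reaches the prompt.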

Responsible or ethical AI is multifaceted, with bias being a significant element. To address bias, organisations should understand the bias in the training data and the in-built biases in pre-trained models. Connecting with governing bodies within their industry, such as the Monetary Authority of Singapore (MAS) for financial institutions in Singapore, is recommended.

There are two main approaches to benefiting from LLMs: Public SaaS-based LLMs and privately-hosted LLMs based on open source models. A combination of data sensitivity and economic efficiency determines whether it’s appropriate to consume SaaS-based LLMs.
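The decision rule described here, routing requests to a SaaS or privately hosted LLM based on data sensitivity, can be expressed as a simple gate. The classification labels and routing policy below are hypothetical, not a Cloudera API:

```python
# Hypothetical routing gate: decide whether a request may use a public
# SaaS LLM or must go to a privately hosted open source model, based on
# the data classification of its input. Labels are illustrative.

SAAS_ALLOWED = {"public", "internal"}          # low-sensitivity classes
PRIVATE_ONLY = {"confidential", "restricted"}  # must stay in-house

def route_llm(classification: str) -> str:
    """Return which LLM deployment a request of this class may use."""
    c = classification.lower()
    if c in SAAS_ALLOWED:
        return "saas"      # externally hosted, typically cheaper
    if c in PRIVATE_ONLY:
        return "private"   # self-hosted open source model
    raise ValueError(f"unclassified data: {classification!r}")
```

Economic efficiency can be layered on the same gate, for example by also routing high-volume, low-sensitivity workloads to whichever deployment is cheaper per token.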

Achieving trusted data across the entire data lifecycle, spanning public and private clouds, is essential for supporting broader AI and ML use cases.

Q. What are some best practices for organisations to take note of when adopting enterprise AI?

A: I would start with a clear usage policy, strong data management controls, and a scalable, reliable approach to machine learning operations (MLOps). These are crucial for analytical use cases like data warehousing and predictive analytics. AI models, particularly ML and Deep Learning, perform better with high-quality data.

Next, data ethics and responsible AI should be influenced by relevant industry bodies.

Organisations should have clear data usage policies, which require classification and approval of data, algorithms, models, and services. Regular training and updates are essential for personnel to understand licensing and fair usage policies.
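The classification-and-approval requirement above can be enforced with a registry check before any data set, model, or service is used. The registry entries and asset names below are made up for illustration:

```python
# Sketch of a policy enforcement check: data sets, models, and services
# must be registered and approved before use. Registry contents are
# illustrative, not real assets.

APPROVED: set[tuple[str, str]] = {
    ("model", "starcoder-private"),
    ("dataset", "support-tickets-redacted"),
    ("service", "internal-code-assist"),
}

def is_approved(kind: str, name: str) -> bool:
    """Check an asset against the approval registry before use."""
    return (kind, name) in APPROVED

def require_approval(kind: str, name: str) -> None:
    """Raise if the asset has not been classified and approved."""
    if not is_approved(kind, name):
        raise PermissionError(f"{kind} {name!r} is not approved for use")
```

In practice the registry would live in a data catalogue or governance tool rather than in code, but the gate at the point of use is the same.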

For example, SaaS-based developer productivity services may be restricted to only a subset of non-sensitive development projects. However, with the introduction of a privately hosted LLM based on the open source StarCoder LLM, the policy can be extended to include this new capability for sensitive development projects.

Finally, most AI models struggle to get out of the lab and into production efficiently and at scale. One solution is machine learning operations (MLOps), which covers everything from data exploration, data engineering, model training and model tuning to making those models available for consumption.

It also includes the process of monitoring model performance and retraining models when appropriate with suitable human oversight.
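The monitoring-and-retraining loop can be sketched as a windowed performance check that flags a model for human review. The accuracy metric, threshold, and window below are illustrative assumptions:

```python
# Illustrative MLOps monitoring check: track model accuracy over a recent
# window of predictions and flag the model for retraining (with human
# sign-off) when performance drops below a threshold.

def needs_retraining(recent_correct: list[bool], threshold: float = 0.9) -> bool:
    """Flag the model when windowed accuracy falls below the threshold."""
    if not recent_correct:
        return False  # no evidence yet; do not flag
    accuracy = sum(recent_correct) / len(recent_correct)
    return accuracy < threshold

# 80% accuracy over the last 10 predictions -> flagged for review.
window = [True] * 8 + [False] * 2
flag = needs_retraining(window)
```

A real pipeline would monitor drift in input distributions as well as output accuracy, and the flag would open a review task rather than retrain automatically, keeping the human oversight the interview calls for.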

Q. What are enterprises in Asia-Pacific doing to prepare their business to be “AI-ready”?

A: Organisations in the Asia-Pacific are focusing on data management, data platform capabilities, and AI-readiness.

An example is OCBC Bank, which has developed strong data management and platform capabilities, such as integrating LLMs into its on-premises environment.

They have successfully replaced existing developer code assist tools with privately-hosted services based on StarCoder LLM. This has reduced operating costs and enhanced the service to be more contextually specific with their own coding standards.

Besides a strong technology capability, building strong data science and data engineering skills is essential to take advantage of the available algorithms and models.

TAGGED: AI risks, Asia-Pacific, Cloudera, generative AI, LLM

By Ai Lei Tao
Ai Lei is a writer who has covered the technology scene for more than 20 years. She was previously the editor of Asia Computer Weekly (ACW), the only regional IT weekly in Asia. She has also written for TechTarget's ComputerWeekly, and was editor of CMPnetAsia and Associate Editor at Computerworld Singapore.
Techgoondu.com is published by Goondu Media Pte Ltd, a company registered and based in Singapore.


Started in June 2008 by technology journalists and ex-journalists in Singapore who share a common love for all things geeky and digital, the site now includes segments on personal computing, enterprise IT and Internet culture.
