IT security leaders in the Asia-Pacific are optimistic about the potential of agentic AI, with most identifying at least one security concern that AI agents could help address. However, significant hurdles remain, according to a global study released by Salesforce last week.
In a survey of more than 2,000 IT security leaders globally – including 588 from the Asia-Pacific – half of the regional respondents said their data foundation was not ready to optimise the use of agentic AI, while 57 per cent were not confident about having necessary guardrails in place for effective AI agent deployment.
Quality, trusted data is fundamental to the successful use of AI in organisations. Many IT security leaders believe they lack the quality data needed to underpin agents, or doubt they can deploy the technology with the right permissions, policies, and guardrails in place. Still, progress is being made, according to the Salesforce study.
“Organisations can only trust AI agents as much as they trust their data,” said Gavin Barfield, vice-president and chief technology officer for solutions for Asean at Salesforce, which provides customer relationship management (CRM) technologies.
He stressed that robust data governance is essential, as 62 per cent of security leaders in Asia-Pacific say that customers are hesitant about AI adoption due to security and privacy concerns.

“IT teams that establish strong data governance frameworks will find themselves uniquely positioned to harness AI agents for their security operations, all while ensuring data protection and compliance standards are met,” he added.
Today, some 45 per cent of IT security teams use agents in day-to-day operations, and this is expected to grow to 74 per cent within two years, as agents are deployed for threat detection and even sophisticated auditing of AI model performance.
Key security concerns
AI is reshaping both sides of cybersecurity. While security teams use autonomous agents to reduce manual work, bad actors are harnessing the same technology to find vulnerabilities.
The Salesforce survey found that data poisoning, in which malicious actors tamper with the data sets used to train AI models, is a key security concern, alongside familiar risks such as cloud security threats, malware, and phishing attacks. In response, some 76 per cent of organisations expect to increase their security budgets in the coming year.
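The mechanics of data poisoning are easy to demonstrate. The short Python sketch below is not drawn from the survey; it uses an invented synthetic dataset, a simple classifier, and an assumed 20 per cent label-flipping rate purely to illustrate how a tampered training set degrades model accuracy.

```python
# A minimal, hypothetical sketch of label-flipping data poisoning.
# The dataset, model, and 20% poisoning rate are illustrative assumptions;
# requires scikit-learn and NumPy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic binary classification data standing in for a real training set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def train_and_score(labels):
    # Train on the given labels, evaluate on untouched test data.
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return model.score(X_test, y_test)

# Baseline accuracy on clean training labels.
print("clean accuracy:   ", train_and_score(y_train))

# Simulate an attacker who flips 20% of the training labels.
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.2 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]
print("poisoned accuracy:", train_and_score(poisoned))
```

The poisoned run typically scores noticeably worse than the clean baseline, which is why the data governance practices Barfield describes matter before any model is trained.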
Complex regulatory environments pose another challenge to AI implementation. Some 82 per cent of IT security leaders believe AI agents offer compliance opportunities, such as improving adherence to global privacy laws.
However, they acknowledge that AI agents also present compliance challenges. This is partly driven by the growing complexity of regulations across regions and industries, and made worse by compliance processes that are still largely manual and error-prone.
As a result, just 52 per cent of Asia-Pacific respondents are confident they can deploy AI agents in compliance with regulations and standards, and 85 per cent say they have yet to fully automate their compliance processes.
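As an illustration of what automating one such check might look like, the hypothetical Python sketch below scans customer records for a missing consent flag or an expired retention window. The record schema, field names, and three-year retention rule are all invented for the example and do not come from the survey.

```python
# A hypothetical sketch of automating a single compliance check:
# flag records lacking consent or held past an assumed retention policy.
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=365 * 3)  # assumed three-year retention policy

# Invented records standing in for a customer data store.
records = [
    {"id": "c-001", "consent": True,
     "collected": datetime(2021, 1, 5, tzinfo=timezone.utc)},
    {"id": "c-002", "consent": False,
     "collected": datetime(2024, 6, 1, tzinfo=timezone.utc)},
]

def audit(records, now=None):
    # Return (record id, issue) pairs for every rule violation found.
    now = now or datetime.now(timezone.utc)
    findings = []
    for rec in records:
        if not rec["consent"]:
            findings.append((rec["id"], "missing consent"))
        if now - rec["collected"] > RETENTION:
            findings.append((rec["id"], "past retention window"))
    return findings

for rec_id, issue in audit(records):
    print(f"{rec_id}: {issue}")
```

Even a check this simple runs continuously and consistently, which is the gap respondents point to when they describe manual compliance processes as error-prone.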
Findings by research firm IDC align with the Salesforce survey. An IDC study from March 2025 revealed that 70 per cent of Asia-Pacific organisations expect agentic AI to disrupt business models within the next 18 months.
Similarly, IDC noted that while agentic AI presents great opportunities, it also raises challenges around explainability, governance, and data security. This highlights the need for robust frameworks, dynamic pipelines, and scalable architectures.
“Apart from customer care, which is the earliest adopter of agents, ITOps and research and development are the top two areas in which agentic AI will be integrated across the enterprise,” said Surjyadeb Goswami, research director for AI and automation at IDC Asia-Pacific.