Contact us
Need a problem solved?
We work with governments, regulators, and industry to design digital and AI governance frameworks that are globally trusted and locally relevant.
This content was originally posted on ccapac.asia. The Coalition for Cybersecurity in Asia-Pacific (CCAPAC) is a group of industry stakeholders dedicated to positively shaping the cybersecurity environment in Asia through policy analysis, engagement, and capacity building. Access Partnership is the secretariat for CCAPAC.
The rise of agentic artificial intelligence marks a fundamental shift in organisational automation and decision-making. AI agents are designed to enhance efficiency by autonomously handling complex tasks and learning from interactions, enabling organisations and individuals to achieve more with less. This promises unprecedented efficiency gains, but it also introduces security vulnerabilities that most organisations are not addressing.
Recent survey data from PwC reveals that 79% of 300 senior executives report AI agent adoption already underway in their companies. Yet, according to Sailpoint, only 44% of these early adopters have established security policies specifically designed to counter agent-related threats. This means most organisations deploying this transformative technology are operating without adequate protections.
Agentic AI systems represent a distinctive change from conventional AI applications. While traditional AI models operate in isolation and react to predefined inputs, agentic systems can autonomously set goals, plan multi-step processes, and execute complex tasks within defined environments. They distribute work across autonomous agents that communicate and collaborate, enabling faster execution, continuous innovation, and self-healing networks.
These systems synthesise information from multiple sensors and databases to establish contextual awareness, supporting sophisticated autonomous decision-making. This shift fundamentally changes both the opportunities and the risks organisations face.
Agentic AI presents compelling opportunities for enhancing cybersecurity itself. Research demonstrates that autonomous AI agents can automate 98% of security alerts and reduce threat containment time to under five minutes. Industry experts already predict that 2026 will witness significant expansion of agent deployment in security operations.
However, the Coalition for Secure AI warns that these same systems introduce novel attack surfaces and security challenges extending beyond traditional software security paradigms. Security experts describe this transition from assistive to agentic AI as creating “autonomous chaos,” a landscape where defensive advantages come bundled with clear risks. The Open Worldwide Application Security Project (OWASP) GenAI Security Project has identified fifteen distinct threat vectors specific to agentic AI systems. Among the most concerning:
Finally, organisations must secure the human-AI interaction layer itself, where agents remain vulnerable to prompt injections and model poisoning through manipulated inputs. This includes training users to identify when agents hallucinate or have been compromised by other agents in multi-agent systems.
These concerns are no longer theoretical. Survey data indicates that 23% of IT professionals have already witnessed incidents where AI agents were deceived into revealing access credentials. More strikingly, 80% of companies report situations where autonomous agents executed unintended actions, demonstrating that agent security issues constitute operational realities, not distant possibilities.
The threat extends to offensive operations. One notable case involved a malicious actor utilising Claude Code, Anthropic’s agentic AI coding assistant, to conduct comprehensive data extortion targeting at least 17 organisations across multiple economic sectors. This incident demonstrated how agentic AI can be weaponised for sophisticated cybercriminal activities, transforming theoretical vulnerabilities into concrete attack vectors.
Recognising that emerging AI security risks demand fundamentally new defensive approaches, the cybersecurity industry is rapidly developing specialised countermeasures. Investment patterns reflect this urgency: the AI security market, valued at USD 20.19 billion in 2023, is projected to reach USD 141.64 billion by 2032. Current solutions include containment technologies, zero trust agent frameworks, AI security posture management systems specifically designed for autonomous operations, and, finally, human risk management platforms.
Governments worldwide have similarly mobilised, establishing national AI strategies that address, to varying degrees, security threats posed by AI agents. For example, French and German cybersecurity agencies have recommended applying zero trust architecture to agentic AI deployments, while Thailand has advocated control measures including kill chain monitoring and regulated Software Bills of Materials.
The Coalition for Cybersecurity in Asia Pacific (CCAPAC), a group of industry stakeholders strengthening cybersecurity policy through analysis, engagement, and capacity building, addresses these challenges in its upcoming 2025 annual report.
On 24 October 2025, Access Partnership and CCAPAC members will convene stakeholders at the Australian High Commission in Singapore to launch this report, which examines emerging AI security risks, including agentic AI threats and AI-enabled phishing.
The report analyses nascent industry solutions and government responses while offering five major recommendations to ensure accountable AI autonomy through sustained investment in technical capabilities and evidence-based security frameworks.
As agentic AI transitions from experimental technology to operational reality, such collaborative efforts become essential for organisations navigating the tension between innovation and security: determining whether AI agents will ultimately represent our greatest opportunity or most pressing vulnerability.
We work with governments, regulators, and industry to design digital and AI governance frameworks that are globally trusted and locally relevant.