OWASP unveils first Top 10 risks for agentic AI use
Cyber security community OWASP has published its first Top 10 list for agentic AI applications, setting out the main security risks it sees emerging as organisations deploy autonomous artificial intelligence systems across core operations.
The new framework focuses on agentic AI, a class of systems that can make decisions and carry out actions without direct human instruction. These agents can connect to business systems, initiate workflows and interact with external services, which expands the potential impact of errors, misconfigurations or malicious interference.
OWASP, best known in corporate security teams for its long-running Top 10 list of web application risks, is extending its methodology to AI agents as businesses move beyond static chatbots and question-answering tools. Security practitioners view this shift as a structural change in how organisations use AI inside production environments.
Keren Katz, Co-Lead of the Top 10 for Agentic AI Applications project at OWASP and Senior Group Manager of AI Security at Tenable, said the pace of adoption has outstripped many organisations’ understanding of the underlying risks.
“Many organisations are rapidly adopting agentic AI without fully appreciating the shift it represents. These systems do more than answer questions; they initiate tasks, chain decisions, and execute actions that previously required a human operator. That capability can unlock tremendous efficiency and give employees the space to focus on higher-value work. But it also introduces a new class of operational and security risk that most enterprises are not yet structured to handle,” said Katz.
The Top 10 list sets out the main categories of risk that OWASP sees across agent-based architectures. These include the ways agents ingest prompts and contextual information, how they authenticate to systems, how they store and recall memory, and how they interact with external tools, APIs and other agents.
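The tool- and API-facing behaviours in that list can be made concrete with a small guard layer. The sketch below is one hedged illustration of such a control, a per-agent tool allowlist that validates every call before it reaches a business system; the agent names, tool names and `ToolCall` structure are invented for the example and are not part of the OWASP framework.

```python
# Illustrative sketch: a per-agent tool allowlist, one control an
# organisation might place between an agent and its tools.
# Agent and tool names here are hypothetical, not from the OWASP list.
from dataclasses import dataclass, field


@dataclass
class ToolCall:
    agent_id: str
    tool: str
    arguments: dict = field(default_factory=dict)


# Each agent is granted only the tools its role requires,
# limiting the blast radius of a compromised or misbehaving agent.
AGENT_TOOL_ALLOWLIST = {
    "invoice-agent": {"read_invoice", "flag_invoice"},
    "support-agent": {"search_kb", "draft_reply"},
}


def authorise(call: ToolCall) -> bool:
    """Reject any tool call outside the agent's granted set."""
    allowed = AGENT_TOOL_ALLOWLIST.get(call.agent_id, set())
    return call.tool in allowed


# A call inside the allowlist passes; anything else is denied,
# including tools that exist but were never granted to this agent.
ok = authorise(ToolCall("invoice-agent", "read_invoice", {"id": "INV-7"}))
blocked = authorise(ToolCall("invoice-agent", "wire_funds", {"amount": 10_000}))
```

The design choice is deny-by-default: an unknown agent identity receives an empty set, so the guard fails closed rather than open.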
Security teams have raised concerns that these behaviours turn AI agents into active participants in corporate workflows. An error or compromise can therefore result in data being moved, transactions being executed or system settings being changed, rather than simply an incorrect answer.
Katz said security leaders should treat agentic AI as a structural addition to their workforce model, not as a passive software feature.
“The biggest takeaway is that in practice, agentic AI functions as a new class of digital workforce. It can navigate core systems, initiate transactions, access sensitive data, orchestrate complex processes, and exercise autonomous judgment at unmatched speed. When it falters, the outcome is not a simple error but a consequential action that usually comes with direct operational impact. One misaligned decision can trigger a cascade felt across the enterprise,” said Katz.
OWASP’s framework highlights several specific technical and operational concerns. These range from prompt-based attacks and injection of hostile context, through to weaknesses in identity and access management for AI agents, and exposures in the software supply chain that supports agentic operations.
Risk scenarios extend beyond data loss and privacy breaches. Autonomous agents can initiate financial transactions, modify records inside enterprise applications or trigger follow-on processes in connected systems. A single flawed decision or compromised agent can therefore generate knock-on effects across multiple business units.
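A common mitigation for scenarios like these is to route high-impact actions through human review before execution. The following sketch is a hedged illustration of such a gate; the action categories and the monetary threshold are assumptions chosen for the example, not prescriptions from the OWASP list.

```python
# Illustrative sketch: a human-in-the-loop gate that pauses an agent
# before consequential actions such as payments or record changes.
# Action names and the 1,000 threshold are hypothetical examples.

HIGH_IMPACT_ACTIONS = {"execute_payment", "modify_record", "delete_data"}


def requires_approval(action: str, amount: float = 0.0) -> bool:
    """Return True when the action must wait for a human reviewer."""
    if action in HIGH_IMPACT_ACTIONS:
        return True
    # Even nominally low-impact actions are escalated above a
    # monetary threshold, so a flawed decision cannot cascade silently.
    return amount > 1_000.0


def dispatch(action: str, amount: float = 0.0) -> str:
    """Execute immediately or queue for approval, never both."""
    if requires_approval(action, amount):
        return f"QUEUED for human approval: {action}"
    return f"EXECUTED: {action}"
```

In this pattern the agent retains its speed for routine work while consequential actions, the kind the article describes rippling across business units, are held at a checkpoint a human can audit.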
Katz pointed to an expanded attack surface that aligns closely with how users and systems interact with AI agents during normal work.
“Whether it’s risky user prompts, injected context memory, external agentic connection payloads enabled simply by user interactions, privilege-escalation pathways and identity gaps, or supply chain weaknesses, the risk surface spans nearly every layer of agentic AI. If data exfiltration, output manipulation, or workflow hijacking is alarming on its own, the situation can quickly escalate, from rogue agents acting as malicious insiders to cascading failures that disrupt entire operations, all captured in the Agentic Top 10,” said Katz.
Security professionals expect the list to inform internal governance policies, procurement criteria and technical design patterns for AI deployments. The framework collates known attack types, misconfigurations and gaps in current controls that become more exposed when AI agents operate across several systems at once.
Governance gap
The emergence of agentic AI has coincided with a fragmented regulatory landscape. Many jurisdictions are still developing AI-specific rules, while existing data protection and cyber security regulations do not explicitly address autonomous AI behaviour inside enterprise workflows.
Katz said many organisations now face a policy and skills gap as they put agents into production environments.
“The hype and promise of AI, combined with the lack of guidelines and regulations, have created a knowledge gap, leaving organisations with no clear path forward to secure this emerging threat. We created the OWASP Top 10 for Agentic AI Applications to bridge the gap, simplifying and collating existing guidance so organisations can confidently forge ahead in their AI journey,” said Katz.
Security teams and AI engineering groups are expected to use the new Top 10 list as a reference as they assess current deployments and plan future agentic AI projects.