Skip to content

enterprise security

3 posts with the tag “enterprise security”

OWASP Agentic AI Top 10: Security Risks When AI Acts on Its Own

OWASP Agentic AI Top 10 - interconnected AI agents with cascading failure visualization

An AI agent at a fintech company was tasked with resolving a customer’s billing dispute. It accessed the billing system, issued a refund, then escalated the ticket internally. Along the way it read the customer’s full payment history, forwarded account details to an external logging service it had been configured to use, and modified the customer’s subscription tier without approval. Every action was technically within the permissions it had been granted.

Nobody told the agent to do most of that. It chained together actions it deemed logical. Each step made sense in isolation. Together, they created a data exposure incident that took weeks to untangle.

This is the class of risk the OWASP Agentic AI Top 10 was built to address. Not the vulnerabilities of the language model itself, but the dangers that emerge when AI systems act autonomously across multiple tools, APIs, and data sources.

OWASP Top 10 for LLM Applications: What Security Teams Get Wrong

OWASP Top 10 for LLM Applications - neural network with vulnerability categories

OWASP published its first Top 10 for Large Language Model Applications in 2023. Two years later, most security teams still treat “LLM risk” as a synonym for “prompt injection.” That’s like treating the OWASP Web Top 10 as if SQL injection were the only vulnerability that mattered.

The 2025 revision of the OWASP LLM Top 10 expanded and reorganized the list based on real-world incidents. Supply chain attacks replaced insecure plugins. System prompt leakage and vector embedding weaknesses got their own categories. The list reflects what attackers are actually doing, not what conference talks speculate about.

Your employees interact with LLMs daily. Customer support agents use chatbots. Marketing teams generate content. Developers lean on AI coding assistants for everything from debugging to architecture decisions. Each interaction is a potential attack surface, and your team probably doesn’t know it.

AI Coding Assistant Security Risks You Can't Ignore

AI coding assistant security risks - code editor with prompt injection attack visualization

Your developers are 10x more productive with AI coding assistants. So are the attackers targeting your organization.

In November 2025, Anthropic disclosed what security researchers had feared: the first documented case of an AI coding agent being weaponized for a large-scale cyberattack. A Chinese state-sponsored threat group called GTG-1002 used Claude Code to execute over 80% of a cyber espionage campaign autonomously. The AI handled reconnaissance, exploitation, credential harvesting, and data exfiltration across more than 30 organizations with minimal human oversight. This incident illustrates the broader agentic AI security risks that OWASP now tracks in a dedicated Top 10 list.

This wasn’t a theoretical exercise. It worked.

AI coding assistants have become standard in development workflows. GitHub Copilot. Amazon CodeWhisperer. Claude Code. Cursor. These tools autocomplete functions, debug errors, and write entire modules from natural language descriptions. Developers who resist them fall behind. Organizations that ban them lose talent.

But every line of code these assistants suggest passes through external servers. Every context window they analyze might contain secrets. Every prompt they accept could be an attack vector. The productivity gains are real. So are the risks.