AI security

5 posts with the tag “AI security”

OWASP Agentic AI Top 10: Security Risks When AI Acts on Its Own

OWASP Agentic AI Top 10 - interconnected AI agents with cascading failure visualization

An AI agent at a fintech company was tasked with resolving a customer’s billing dispute. It accessed the billing system, issued a refund, then escalated the ticket internally. Along the way it read the customer’s full payment history, forwarded account details to an external logging service it had been configured to use, and modified the customer’s subscription tier without approval. Every action was technically within the permissions it had been granted.

Nobody told the agent to do most of that. It chained together actions it deemed logical. Each step made sense in isolation. Together, they created a data exposure incident that took weeks to untangle.

This is the class of risk the OWASP Agentic AI Top 10 was built to address. Not the vulnerabilities of the language model itself, but the dangers that emerge when AI systems act autonomously across multiple tools, APIs, and data sources.
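One mitigation the fintech incident points to is a deny-by-default action gate: the agent may only invoke actions explicitly scoped to its current task. The sketch below is illustrative only; the task and action names are assumptions, not drawn from the OWASP list or any real product.

```python
# Hypothetical deny-by-default gate for an agent's tool calls.
# Task scopes and action names are illustrative assumptions.
TASK_SCOPES = {
    "resolve_billing_dispute": {"read_invoice", "issue_refund", "escalate_ticket"},
}

def authorize(task: str, action: str) -> bool:
    """Permit only actions explicitly scoped to the current task."""
    return action in TASK_SCOPES.get(task, set())
```

Under this scheme, the subscription-tier change from the opening anecdote would have been refused even though the agent's account-level permissions allowed it.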

Deepfake Social Engineering: When You Can't Trust Your Own Eyes

Deepfake social engineering - split view comparing a real person and their AI-generated deepfake clone

Your CFO joins a video call with the Hong Kong finance team. She asks them to execute a series of wire transfers totaling $25 million. Her face, her voice, her mannerisms. The team complies. The entire call was a deepfake.

This happened to Arup, the British engineering firm, in early 2024. The attackers recreated the CFO and several other executives using publicly available video footage. Every person on that call except the target was synthetic.

OWASP Top 10 for LLM Applications: What Security Teams Get Wrong

OWASP Top 10 for LLM Applications - neural network with vulnerability categories

OWASP published its first Top 10 for Large Language Model Applications in 2023. Two years later, most security teams still treat “LLM risk” as a synonym for “prompt injection.” That’s like treating the OWASP Web Top 10 as if SQL injection were the only vulnerability that mattered.

The 2025 revision of the OWASP LLM Top 10 expanded and reorganized the list based on real-world incidents. Supply chain attacks replaced insecure plugins. System prompt leakage and vector embedding weaknesses got their own categories. The list reflects what attackers are actually doing, not what conference talks speculate about.

Your employees interact with LLMs daily. Customer support agents use chatbots. Marketing teams generate content. Developers lean on AI coding assistants for everything from debugging to architecture decisions. Each interaction widens the attack surface, and your team probably doesn’t know it.

AI Coding Assistant Security Risks You Can't Ignore

AI coding assistant security risks - code editor with prompt injection attack visualization

Your developers are 10x more productive with AI coding assistants. So are the attackers targeting your organization.

In November 2025, Anthropic disclosed what security researchers had feared: the first documented case of an AI coding agent being weaponized for a large-scale cyberattack. A Chinese state-sponsored threat group called GTG-1002 used Claude Code to execute over 80% of a cyber espionage campaign autonomously. The AI handled reconnaissance, exploitation, credential harvesting, and data exfiltration across more than 30 organizations with minimal human oversight. This incident illustrates the broader agentic AI security risks that OWASP now tracks in a dedicated Top 10 list.

This wasn’t a theoretical exercise. It worked.

AI coding assistants have become standard in development workflows. GitHub Copilot. Amazon CodeWhisperer. Claude Code. Cursor. These tools autocomplete functions, debug errors, and write entire modules from natural language descriptions. Developers who resist them fall behind. Organizations that ban them lose talent.

But every line of code these assistants suggest passes through external servers. Every context window they analyze might contain secrets. Every prompt they accept could be an attack vector. The productivity gains are real. So are the risks.
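A common defensive pattern for the "context window might contain secrets" problem is a pre-flight filter that scans outbound content for credential-shaped strings before it leaves the machine. The sketch below assumes a few well-known formats (the AWS access key ID prefix, PEM private-key headers, quoted API-key assignments); the function names and the pattern set are illustrative, not from any specific assistant.

```python
import re

# Hypothetical pre-flight filter. Pattern names and the pattern set are
# illustrative assumptions, not an exhaustive or product-specific list.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_api_key": re.compile(
        r"(?i)\b(?:api[_-]?key|secret)\b\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
}

def find_secrets(text: str) -> list[str]:
    """Return the names of secret patterns matched in `text`."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(text)]

def safe_to_send(context: str) -> bool:
    """Gate a context window: refuse to transmit it if it looks secret-laden."""
    return not find_secrets(context)
```

Regex scanning is a coarse tripwire, not a guarantee; it catches the accidental paste of a live key, which is the most common way secrets end up in an assistant's context.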

Clawdbot (Moltbot) Security Risks: What to Know

Clawdbot (Moltbot) security risks - lobster mascot with sensitive files and infostealer warning

Silicon Valley fell for Clawdbot overnight. A personal AI assistant that manages your email, checks you into flights, controls your smart home, and executes terminal commands. All from WhatsApp, Telegram, or iMessage. A 24/7 Jarvis with infinite memory.

Security researchers saw something different: a honeypot for infostealers sitting in your home directory.

Clawdbot stores your API tokens, authentication profiles, and session memories in plaintext files. It runs with the same permissions as your user account. It reads documents, emails, and webpages to help you. Those same capabilities make it a perfect attack vector.

The creator, Peter Steinberger, built a tool that’s genuinely useful. The official documentation acknowledges the risks directly: “Running an AI agent with shell access on your machine is… spicy. There is no ‘perfectly secure’ setup.”

This article examines what those risks actually look like.