
AI-Powered Phishing: How LLMs Help Attackers Write Better Lures

AI-powered phishing - LLM neural network generating targeted phishing emails to multiple victims

A phishing email arrives in your inbox. It references a project you’re working on, names your manager correctly, mimics the writing style of your IT department, and asks you to verify your credentials after a “suspicious login from São Paulo.” No typos. No awkward phrasing. No generic “Dear Customer” greeting. It reads exactly like a legitimate message from your company.

Two years ago, writing this email required a human attacker who spent hours researching your organization, your role, and your communication patterns. Today, an LLM produces it in seconds. Feed it a few LinkedIn profiles and a sample company email, and it generates dozens of personalized variants, each tailored to a different target, in any language.

This is why traditional phishing detection advice about spotting grammatical errors and suspicious formatting is becoming unreliable. The signals employees were trained to look for are disappearing.
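When prose quality no longer separates real mail from fake, detection has to lean on signals a language model cannot forge, such as the email authentication results (SPF, DKIM, DMARC) that receiving servers record in the standard `Authentication-Results` header (RFC 8601). As a minimal sketch, the naive parser below flags any mechanism that did not report `pass`; the header value shown is made up for the example, and real headers can be messier than this regex assumes:

```python
import re

def auth_failures(auth_results_header: str) -> list[str]:
    """Return the authentication mechanisms (spf, dkim, dmarc) that did not pass.

    Expects the value of an Authentication-Results header, e.g.
    "mx.example.com; spf=pass ...; dkim=fail ...; dmarc=fail ..."
    Naive parsing: first occurrence of each mechanism only.
    """
    failures = []
    for mech in ("spf", "dkim", "dmarc"):
        m = re.search(rf"\b{mech}=(\w+)", auth_results_header)
        if m is None or m.group(1).lower() != "pass":
            failures.append(mech)
    return failures

# A lure that spoofs your IT department's domain typically cannot produce
# a valid DKIM signature for that domain, however polished its prose is.
header = "mx.corp.example; spf=pass smtp.mailfrom=corp.example; dkim=fail header.d=corp.example; dmarc=fail"
print(auth_failures(header))  # ['dkim', 'dmarc']
```

Authentication checks are not a complete answer (attackers can send fully authenticated mail from lookalike domains), but unlike grammar-based heuristics they do not degrade as the attacker's writing improves.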

OWASP Agentic AI Top 10: Security Risks When AI Acts on Its Own

OWASP Agentic AI Top 10 - interconnected AI agents with cascading failure visualization

An AI agent at a fintech company was tasked with resolving a customer’s billing dispute. It accessed the billing system, issued a refund, then escalated the ticket internally. Along the way it read the customer’s full payment history, forwarded account details to an external logging service it had been configured to use, and modified the customer’s subscription tier without approval. Every action was technically within the permissions it had been granted.

Nobody told the agent to do most of that. It chained together actions it deemed logical. Each step made sense in isolation. Together, they created a data exposure incident that took weeks to untangle.

This is the class of risk the OWASP Agentic AI Top 10 was built to address. Not the vulnerabilities of the language model itself, but the dangers that emerge when AI systems act autonomously across multiple tools, APIs, and data sources.
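One common mitigation for the fintech scenario above is a policy gate between the agent and its tools: read-only calls go through, but state-changing actions require explicit human approval, so the agent cannot chain a refund, a data export, and a subscription change on its own. The sketch below is illustrative only; the tool names and the `SENSITIVE` set are invented for the example, not taken from any real agent framework:

```python
# Minimal sketch of a human-approval gate for agent tool calls.
# Tool names and the SENSITIVE set are hypothetical examples.

SENSITIVE = {"issue_refund", "modify_subscription", "export_data"}

class ApprovalRequired(Exception):
    """Raised when an agent attempts a sensitive action without sign-off."""

def gated_call(tool_name: str, args: dict, approved: bool = False) -> dict:
    """Dispatch a tool call, refusing sensitive ones without explicit approval."""
    if tool_name in SENSITIVE and not approved:
        raise ApprovalRequired(f"{tool_name} needs human sign-off")
    # In a real system this would invoke the actual tool; here we just
    # record that the call was allowed to proceed.
    return {"tool": tool_name, "args": args, "status": "executed"}

# The agent can read freely, but state-changing actions stop at the gate.
print(gated_call("read_ticket", {"id": 42})["status"])  # executed
try:
    gated_call("issue_refund", {"amount": 120})
except ApprovalRequired as e:
    print(e)  # issue_refund needs human sign-off
```

The point of the design is that "technically within its permissions" stops being the only control: each step the agent deems logical still passes through a checkpoint a human can see.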

Deepfake Social Engineering: When You Can't Trust Your Own Eyes

Deepfake social engineering - split view comparing a real person and their AI-generated deepfake clone

Your CFO joins a video call with the Hong Kong finance team. She asks them to execute a series of wire transfers totaling $25 million. Her face, her voice, her mannerisms. The team complies. The entire call was a deepfake.

This happened to Arup, the British engineering firm, in early 2024. The attackers recreated the CFO and several other executives using publicly available video footage. Every person on that call except the target was synthetic.

Shadow IT: The Security Risks Hiding in Your SaaS Stack

Shadow IT security risks - unauthorized cloud apps orbiting a corporate server, connected by warning-flagged data flows

A product manager signs up for an AI writing tool using her corporate email. She pastes the company’s Q3 roadmap into it to help draft a press release. The tool’s terms of service allow it to use input data for model training. Three months later, a competitor’s analyst finds fragments of that roadmap in the tool’s outputs.

Nobody approved the tool. Nobody reviewed its privacy policy. Nobody even knew it existed on the network until the legal team got a call.
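Shadow IT discovery usually starts with the logs you already have: compare the SaaS domains employees actually reach (from DNS or proxy logs) against the approved list. A minimal sketch, where the approved set, log format, and domains are all made up for the example, and the two-label domain heuristic is deliberately naive:

```python
# Illustrative sketch: flag unapproved SaaS domains seen in proxy/DNS logs.
# The approved list, log format, and domains are hypothetical.

APPROVED = {"salesforce.com", "slack.com", "github.com"}

def unapproved_domains(log_lines: list[str]) -> list[str]:
    """Return registrable domains from the logs that are not on the approved list."""
    seen = set()
    for line in log_lines:
        domain = line.strip().split()[-1].lower()
        # Reduce to the registrable domain (naive two-label heuristic;
        # real tooling would use the Public Suffix List).
        parts = domain.split(".")
        if len(parts) >= 2:
            seen.add(".".join(parts[-2:]))
    return sorted(seen - APPROVED)

logs = [
    "2024-03-01T09:12 alice slack.com",
    "2024-03-01T09:15 bob api.ai-writer.example",
    "2024-03-01T09:20 carol github.com",
]
print(unapproved_domains(logs))  # ['ai-writer.example']
```

A sweep like this would have surfaced the AI writing tool the week it appeared, months before the legal team's phone rang.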

GDPR Training for Employees: Beyond the Annual Checkbox

GDPR employee training - compliance document with interactive training scenarios

A marketing manager adds a customer’s email to a campaign list without checking consent records. A support agent shares a user’s account details with someone claiming to be their spouse. A developer copies production data containing real names and addresses into a staging environment.

None of these people intended to violate the GDPR. All of them did.

The General Data Protection Regulation has been enforceable since May 2018. Years on, fines keep climbing. The Irish Data Protection Commission fined Meta EUR 1.2 billion in 2023 for illegal data transfers to the US. The Italian Garante fined OpenAI EUR 15 million in late 2024 over ChatGPT’s privacy violations. These headlines grab attention, but the pattern behind them is consistent: organizations that treated GDPR as a legal department problem instead of a company-wide responsibility.

Your lawyers can’t prevent the marketing manager from misusing consent data. Your DPO can’t watch every developer’s staging environment. The only thing that scales is training, and most GDPR training programs are doing it wrong.
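Training reduces the intent errors; technical controls catch what slips through. The staging-environment scenario above, for instance, can be blunted by pseudonymizing direct identifiers before data leaves production. A minimal sketch, where the field names and salt are assumptions for the example:

```python
import hashlib

# Minimal sketch: pseudonymize direct identifiers before copying records
# from production to staging. Field names and the salt are illustrative.

PII_FIELDS = {"name", "email", "address"}

def pseudonymize(record: dict, salt: str = "staging-salt") -> dict:
    """Replace PII values with truncated salted SHA-256 digests; keep other fields."""
    out = {}
    for key, value in record.items():
        if key in PII_FIELDS:
            digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()
            out[key] = digest[:12]  # stable token, so joins across tables still work
        else:
            out[key] = value
    return out

prod_row = {"id": 7, "name": "Ada Lovelace", "email": "ada@example.com", "plan": "pro"}
print(pseudonymize(prod_row))
```

One caveat worth teaching alongside the tooling: under the GDPR (Recital 26), pseudonymized data is still personal data, because it can be re-identified with additional information. Hashing reduces exposure in staging; it does not make the data fair game.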