AI-Powered Phishing: How LLMs Help Attackers Write Better Lures
A phishing email arrives in your inbox. It references a project you’re working on, names your manager correctly, mimics the writing style of your IT department, and asks you to verify your credentials after a “suspicious login from São Paulo.” No typos. No awkward phrasing. No generic “Dear Customer” greeting. It reads exactly like a legitimate message from your company.
Two years ago, writing this email required a human attacker who spent hours researching your organization, your role, and your communication patterns. Today, an LLM produces it in seconds. Feed it a few LinkedIn profiles and a sample company email, and it generates dozens of personalized variants, each tailored to a different target, in any language.
This is why the traditional anti-phishing advice of watching for grammatical errors and suspicious formatting is becoming unreliable. The signals employees were trained to spot are disappearing.