Prompt injection against an AI agent with calendar and email access can exfiltrate meeting notes, send phishing emails from a trusted internal address, and chain tool calls to escalate privileges — all triggered by a malicious document the agent was asked to summarize. This is not theoretical. It has happened in production environments in 2025. Security teams are not ready.

The New Threat Model

Traditional endpoint security operates on the assumption that malicious behavior originates from code execution. AI agents break this assumption — the agent executes legitimate code in response to adversarial inputs that look like normal text. No process injection, no file write, no network anomaly. The OWASP LLM Top 10 lists prompt injection as the #1 risk. Most security teams haven't read it.