The OWASP Prompt Injection Cheat Sheet You Should Bookmark
OWASP published a comprehensive prompt injection prevention cheat sheet. If you're building anything with LLMs, bookmark it.
If you’re building applications on top of LLMs, you need to know about prompt injection. It’s the SQL injection of the AI era — and most teams are still learning the hard way.
OWASP (Open Worldwide Application Security Project) published a solid cheat sheet that covers exactly this:
LLM Prompt Injection Prevention Cheat Sheet
What It Covers
The guide catalogs 13 attack categories — from direct injection and Unicode smuggling (which I wrote about recently) to RAG poisoning and agent-specific exploits like tool call manipulation.
More importantly, it lays out practical defenses:
- Input validation and sanitization — detecting dangerous patterns before they reach the model
- Structured prompts — formatting that separates instructions from user data
- Output monitoring — catching signs of successful injection in responses
- Human-in-the-loop controls — flagging high-risk requests for review
- Least privilege — limiting what the model can actually do when compromised
The core insight is one you’ve probably heard before but bears repeating: use layered defense. No single technique stops prompt injection. You stack mitigations — input filtering, output validation, privilege restrictions, monitoring — so that bypassing one layer doesn’t give an attacker everything.
Why It Matters
The cheat sheet is honest about the state of things. It acknowledges that research shows current defenses have limitations against persistent, sophisticated attackers. That’s not a reason to skip defenses — it’s a reason to take them seriously and build in depth.
If you’re shipping LLM-powered features to users, go read it. Bookmark it. Share it with your team.