Hello, Injection
Your first prompt injection. A chatbot is guarding a secret word - can you make it spill?
Free AI Security Labs
7 free labs to practice prompt injection, RAG poisoning, tool exploitation, and defense bypass with real LLMs.
Each lab is a hands-on prompt injection exercise tied to a concept from the learning modules. You interact with a real LLM and try to make it do something it shouldn't - extract a secret, fabricate information, or trigger an unauthorized action.
The Context Trace panel shows exactly what the model receives - system prompt, RAG documents, tool definitions, and your input - so you can see how prompt injection attacks work from the inside. Labs are grouped by module and progress from beginner to advanced, aligned with the OWASP Top 10 for LLM Applications and the OWASP Top 10 for Agentic AI.
All labs are completely free. Sign in with GitHub or Google to track your progress. Ready for a bigger challenge? Try the Gauntlet — progressively harder AI defenses, from basic rules to LLM classifiers.
Direct prompt injection on unguarded models
RAG poisoning and indirect prompt injection
Tool abuse and indirect prompt injection
Bypassing system-level protections
MCP, A2A, and why agents amplify every vulnerability
An AI agent that builds charts from company data. The chart renderer is sandboxed, but the dashboard still trusts status messages from that renderer.
An AI agent that generates architecture diagrams from system descriptions. It uses Mermaid 11.6-style sequence diagrams with legacy KaTeX label measurement.
An AI agent that creates SVG graphics from design specs. The output goes through a regex sanitizer before being inserted into a sandboxed preview.