Free AI Security Labs

Prompt Injection Labs

7 free labs to practice prompt injection, RAG poisoning, and tool exploitation with real LLMs.

Each lab is a hands-on prompt injection exercise tied to a concept from the learning modules. You interact with a real LLM and try to make it do something it shouldn't - extract a secret, fabricate information, or trigger an unauthorized action.

The Context Trace panel shows exactly what the model receives - system prompt, RAG documents, tool definitions, and your input - so you can see how prompt injection attacks work from the inside. Labs are grouped by module and progress from beginner to advanced, aligned with the OWASP Top 10 for LLM Applications and the OWASP Top 10 for Agentic AI.

All labs are completely free. Sign in with GitHub or Google to track your progress. Ready for a bigger challenge? Try the Gauntlet - 15 levels of progressively harder AI defenses.