Free Learning Path
Learn AI Security & Prompt Injection
A free, 8-module course covering LLM security from fundamentals to advanced attacks. Start with how LLMs work, then attack and defend real AI systems in hands-on labs. No prior security experience required. Aligned with the OWASP Top 10 for LLM Applications and the OWASP Top 10 for Agentic AI.
AI Fundamentals
Understand how LLMs work before you break them
How LLMs Actually Work
Learn how large language models process tokens, context windows, and text generation - the foundation for understanding prompt injection attacks
System Prompts & the Context Window
How developers instruct LLMs with system prompts, why they're fragile, and how prompt injection exploits this fundamental weakness
RAG: When LLMs Read External Data
How Retrieval-Augmented Generation works, where trust boundaries break, and why RAG poisoning is a critical LLM security risk
Tools & Function Calling
How LLMs invoke external tools and APIs, and why tool exploitation and excessive agency are top LLM security risks
Security Modules
Attack and defend real LLM systems
The Bare LLM
Direct prompt injection against unprotected LLMs - extract system prompts, override instructions, and learn why 'ignore previous instructions' works
LLM + External Data
Indirect prompt injection through RAG poisoning - how attackers embed malicious instructions in knowledge bases to manipulate LLM outputs
LLM + Tools
Tool exploitation and excessive agency in LLM systems - discover hidden tools, abuse function calling, and inject through AI-generated output
LLM + Defenses
Bypass LLM security defenses - keyword filters, instruction hierarchy, self-check prompts, and code-level guards. Learn what works and what doesn't