FREE AI SECURITY TRAINING

Learn prompt injection through hands-on labs.

Master LLM security through prompt injection, AI red teaming, RAG poisoning, and tool exploitation with real LLMs. See the full prompt stack with Context Trace. Free, no experience required.

100% free - sign in with GitHub or Google to play labs

7

Hands-On Labs

15

CTF Levels

8

Learning Modules

5

LLM Providers

Free

No Paid Tiers

How It Works

01

See

Full prompt stack visibility with Context Trace. System instructions, RAG context, tool definitions, and user input - no abstractions.

02

Attack

Execute real prompt injection exploits in sandboxed labs. Practice RAG poisoning, tool abuse, and defense bypass against real LLMs.

03

Understand

Learn why prompt-level defenses fail and what actually works. Every lab explains the vulnerability and the fix. Aligned with the OWASP Top 10 for LLMs and Agentic AI.

Frequently Asked Questions

What is prompt injection?

Prompt injection is an attack where user input overrides the system instructions given to an LLM. Attackers craft messages that make the AI ignore its rules and follow new instructions instead - extracting secrets, fabricating information, or triggering unauthorized actions. It ranks #1 on the OWASP Top 10 for LLM Applications.

Is PromptTrace free?

Yes, completely free. All learning modules are free to read without an account. Labs and the Gauntlet require a free sign-in (GitHub or Google) to track your progress. There are no paid tiers or enterprise licensing.

What is the Context Trace?

The Context Trace is PromptTrace's core feature. It shows you the complete prompt stack sent to the LLM in real-time - system prompt, RAG documents, tool definitions, and your input. By seeing exactly what the model sees, you understand how prompt injection attacks work and why defenses fail.

What's the difference between direct and indirect prompt injection?

Direct prompt injection is when attackers type malicious instructions into the chat input. Indirect prompt injection hides malicious instructions in external data (documents, web pages, emails) that the LLM processes through RAG or tool calling. Indirect injection is harder to detect and defend against.

Do I need a cybersecurity background?

No. PromptTrace starts from fundamentals - how LLMs work, what context windows are, how system prompts function. The 8 learning modules assume no prior security knowledge. Security experience helps but isn't required.

What is RAG poisoning?

RAG poisoning is an attack where adversaries inject malicious content into the knowledge base that a Retrieval-Augmented Generation system uses. When the RAG pipeline retrieves this poisoned content, the embedded instructions can override the LLM's system prompt - a form of indirect prompt injection.

How does the Gauntlet CTF work?

The Gauntlet is a 15-level capture-the-flag challenge. Each level has a progressively harder AI defense system protecting a secret passphrase. Defenses progress from prompt-level rules (levels 1-7) to code-level guards (8-11) to LLM classifiers (12-15). Your attempts and completion times are tracked on the leaderboard.