About
About PromptTrace
A free, hands-on AI security training platform for learning prompt injection, RAG poisoning, and LLM red teaming. No paid tiers, no enterprise licensing - 100% free for individuals and teams.
What is PromptTrace?
PromptTrace is a hands-on training platform for AI security and LLM security. It lets you practice prompt injection, RAG poisoning, and tool exploitation against real LLMs - not simulations - with full visibility into what the model actually receives.
The core idea is the Context Trace: a real-time view of the complete prompt stack sent to the LLM, including the system prompt, retrieved documents, tool definitions, and your input. By seeing exactly what the model sees, you develop an intuition for how attacks work and why defenses fail.
Who built this
PromptTrace is built by Abdelrahman Adel, AI Security Researcher and Founder of AIRED Lab (AI Research, Education & Defense).
Abdelrahman brings over 9 years of offensive security experience to AI red teaming. He is a Cyber Security Tech Leader and a Top 100 Bug Bounty Hunter on Bugcrowd, holding the OSCP, CREST CRT, and WAPTX certifications.
His transition to AI security is a natural evolution - the same adversarial mindset that finds injection flaws in web applications now targets prompt injection, RAG poisoning, and tool exploitation in LLM systems. PromptTrace was built to make this new attack surface accessible to the security community.
The curriculum is informed by the OWASP Top 10 for LLM Applications, the OWASP Top 10 for Agentic AI, MITRE ATLAS framework for adversarial ML, and real-world prompt injection research. Every lab scenario is based on documented attack patterns observed in production AI systems.
Why this exists
Most AI security education is theoretical. Papers describe attacks, blog posts explain defenses, but rarely can you see the full picture - the actual tokens flowing through the system during an attack.
PromptTrace was built to close that gap. Every lab shows you the invisible layers: how system prompts are assembled, how RAG documents get injected, how tool calls are formatted, and where trust boundaries exist in the prompt context.
How it works
- 1.8 learning modules teach foundational concepts - how LLMs process context, what system prompts do, how RAG pipelines work, and how tool calling creates attack surfaces.
- 2.7 interactive prompt injection labs put you in front of a real LLM with a specific security objective. Labs cover direct prompt injection, RAG poisoning, tool exploitation, and defense bypass.
- 3.The Gauntlet - a 15-level prompt injection CTF with progressively stronger defenses: prompt guards, code guards, and LLM classifiers. Track your best time on the community leaderboard.
Platform at a glance
Read more
Security & responsible disclosure
If you discover a vulnerability in the PromptTrace platform itself (not the intentional lab challenges), please report it through the chat widget or via the security.txt contact. We take all reports seriously and will acknowledge receipt within 48 hours.
Contact
Questions, feedback, or vulnerability reports - reach out through the chat widget in the bottom-right corner or visit the leaderboard to see the community.