Frameworks & Standards
OWASP, NIST, MITRE, and industry security frameworks
OWASP AI Exchange
Open-source framework for AI security controls, governance, and threat classification maintained by OWASP.
MITRE ATLAS
MITRE-maintained knowledge base of adversary tactics, techniques, and procedures (TTPs) targeting AI systems.
OWASP Top 10 for LLM Applications
Taxonomy of the ten most critical LLM security risks, including prompt injection and insecure output handling.
ISO/IEC 27090
Emerging international standard providing guidance on addressing security threats to AI systems and managing related risks.
OpenCRE
Cross-reference engine mapping security standards and controls across OWASP, NIST, ISO, and more.
AI Security Matrix
OWASP framework for identifying threats across analytical, discriminative, and generative AI systems.
AI Program Quickstart (G.U.A.R.D)
Enterprise governance framework providing a structured approach to building AI security programs.
Dual LLM Pattern
Architectural defense pattern that separates a privileged, tool-using LLM from a quarantined LLM that handles untrusted content, mitigating prompt injection attacks; see the sketch below.
Arcanum Prompt Injection Taxonomy
Open-source taxonomy classifying prompt injections across intent, technique, evasion, and input vectors.
AI Supply Chain Management
OWASP framework for managing vendor and model risk across the AI supply chain lifecycle.
AI Security Testing
OWASP methodology and tool guidance for systematic AI security testing and evaluation.
AI Privacy Section
OWASP guidance on data protection and GDPR compliance for AI systems processing personal data.
Data Poisoning (Dev-time)
OWASP framework covering training data integrity threats and defenses during model development.
Evasion Attacks (Input)
OWASP classification of input-based evasion attacks that manipulate AI decision boundaries.
AI Security Essentials
OWASP summary of essential AI security principles and minimum viable controls.
Cisco AI Security Taxonomy
Cisco's integrated safety and security taxonomy for classifying AI defense requirements.
Composite Detection Guide
Framework for building correlated attack chain detections across multiple AI security signals.
AgentBench
Multi-dimensional benchmark for evaluating LLM agent performance and security characteristics.
Pangea Attack Taxonomy
Taxonomy mapping LLM attack types to specific remediation strategies.
AI Risk Taxonomy
MIT-maintained risk mapping framework covering AI security threats across deployment scenarios.
Know a resource we're missing?
This directory is community-curated. Submit a pull request to add your favorite AI security resources.
Contribute on GitHub