Risk & Threat Modelling is the systematic process of identifying potential threats specific to AI systems and developing mitigation strategies. Unlike traditional cybersecurity threats, AI systems face unique risks, including:

- **Adversarial attacks**: deliberately crafted inputs designed to fool the model
- **Data poisoning**: corrupting training data to compromise the model
- **Model extraction**: stealing intellectual property through repeated API queries
- **Prompt injection**: manipulating AI behaviour through carefully crafted inputs
- **Inference attacks**: deducing sensitive information from model outputs

Addressing these risks requires structured threat modelling frameworks adapted for AI contexts.
Traditional security approaches may not adequately address these AI-specific vulnerabilities, which can be subtle and difficult to detect. This dimension assesses whether your organisation systematically identifies and mitigates AI-specific threats.
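Systematic identification starts with a simple risk register that scores each AI-specific threat. The sketch below is illustrative only: the threat entries, the 1-5 likelihood/impact scales, and the mitigations are assumptions chosen for demonstration, not an authoritative assessment.

```python
from dataclasses import dataclass

@dataclass
class AIThreat:
    """One entry in an AI risk register (illustrative structure)."""
    name: str
    description: str
    likelihood: int  # 1 (rare) .. 5 (almost certain) -- assumed scale
    impact: int      # 1 (negligible) .. 5 (severe)   -- assumed scale
    mitigation: str

    @property
    def risk_score(self) -> int:
        # Classic likelihood x impact scoring; organisations may weight differently.
        return self.likelihood * self.impact

# Hypothetical example entries for a customer-facing LLM service.
register = [
    AIThreat("Prompt injection", "Manipulating behaviour via crafted inputs",
             4, 4, "Input filtering, output validation, least-privilege tool access"),
    AIThreat("Data poisoning", "Corrupting training data to compromise the model",
             2, 5, "Data provenance checks, anomaly detection on training sets"),
    AIThreat("Model extraction", "Stealing IP through repeated API queries",
             3, 3, "Rate limiting, query monitoring"),
]

# Triage: review highest-scoring threats first.
for t in sorted(register, key=lambda t: t.risk_score, reverse=True):
    print(f"{t.risk_score:>2}  {t.name}: {t.mitigation}")
```

The multiplication-based score is a common convention; the point is that each threat carries an explicit likelihood, impact, and owner-actionable mitigation rather than an ad hoc mention.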
## Why It Matters
AI systems are vulnerable to attacks that traditional cybersecurity controls were not designed to address: a single successful data-poisoning or prompt-injection incident can compromise model integrity, leak sensitive data, or erode user trust.
## Maturity Levels
| Basic | Standard | Advanced | Leading |
|---|---|---|---|
| No risk assessment for AI systems. | Basic risk mapping for AI initiatives. | Structured threat modelling (e.g., STRIDE, MITRE ATT&CK) applied to AI systems. | Continuous horizon scanning for emerging AI threats, with proactive mitigation strategies. |
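At the Advanced level, established frameworks such as STRIDE can be reinterpreted for AI systems. The mapping below is a discussion aid under stated assumptions: the pairings of AI threats to STRIDE categories are illustrative, not a definitive taxonomy.

```python
# Illustrative mapping of AI-specific threats onto the six STRIDE categories.
# The pairings are assumptions for workshop discussion, not an authoritative list.
STRIDE_AI_MAP = {
    "Spoofing": ["Deepfake-based identity attacks"],
    "Tampering": ["Data poisoning", "Model weight tampering"],
    "Repudiation": ["Untraceable model decisions (missing audit logs)"],
    "Information disclosure": ["Inference attacks", "Training data leakage"],
    "Denial of service": ["Resource-exhausting adversarial inputs"],
    "Elevation of privilege": ["Prompt injection escalating tool access"],
}

def threats_for(category: str) -> list[str]:
    """Return the AI-specific threats mapped to one STRIDE category."""
    return STRIDE_AI_MAP.get(category, [])
```

Walking each system component through the six categories in turn is what makes the exercise "structured": every category must be explicitly considered, even if the answer for a given component is "no applicable threat".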
## 📥 Related Resources & Templates
Downloadable templates, examples, and frameworks to help you implement this dimension.
- **AI Risk Assessment** (Premium): Comprehensive risk assessment template for evaluating AI-specific risks, their likelihood and impact, and mitigation strategies.
- **AI Threat Modelling** (Premium): Threat modelling framework and templates for identifying AI-specific security threats, attack vectors, and defences.