AI Readiness Assessment
15-Dimension AI Readiness Framework
15 Dimensions. 4 Maturity Levels. One honest picture of your organisation's AI readiness.
Most AI readiness assessments tell you what you want to hear. Ours tells you what you need to know: where you stand today, where the gaps are, and what it would realistically take to close them.
Four Maturity Levels
Each dimension is assessed against four maturity levels. Most organisations are not at "Leading" across the board, and that is perfectly fine. The point is to know where you are and decide where you need to be.
Basic
Ad hoc, reactive, no formal approach
Standard
Documented processes, some consistency
Advanced
Measured, managed, continuously improving
Leading
Optimised, predictive, industry-leading
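For readers who like numbers: the four levels map naturally onto a 1–4 scale, and dimension scores can be averaged per theme. The short Python sketch below is purely illustrative; the example scores are invented and the roll-up is not the scoring methodology used in the toolkit or the assessment.

```python
from statistics import mean

# Illustrative only: map each dimension (scored 1=Basic .. 4=Leading) to its
# theme and average per theme. The workbook uses its own scoring guides.
scores = {
    "Policy & Governance": 2,
    "Compliance & Risk": 1,
    "Use Case Management": 3,
    "Human Oversight": 2,
}
themes = {
    "Governance & Compliance": ["Policy & Governance", "Compliance & Risk"],
    "Operations & Management": ["Use Case Management", "Human Oversight"],
}

for theme, dims in themes.items():
    print(f"{theme}: {mean(scores[d] for d in dims):.1f} / 4")
```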
The 15 Dimensions
The 15 dimensions are organised into five themes, each capturing a specific area of AI readiness. Together, they give your board a complete and honest picture.
Governance & Compliance
Policy & Governance
No AI policy. No governance structure. Teams adopt AI tools without oversight or approval.
Draft AI policy exists. Informal governance through IT or a single sponsor. Some visibility into AI use.
Board-approved AI policy. Formal governance board with cross-functional representation. Decision framework for AI adoption.
Living policy with regular reviews. AI governance integrated into corporate governance. Metrics tracked and reported to board.
Compliance & Risk
AI risks not formally identified. No AI-specific risk register. Regulatory exposure unknown.
Key risks identified informally. Some awareness of AI Act / GDPR implications. Risk is discussed but not systematically tracked.
AI-specific risk register maintained. Impact assessments conducted for new use cases. Regulatory mapping to EU AI Act risk categories.
Continuous compliance monitoring. Automated risk scoring. Regulatory horizon scanning. Third-party audits of AI systems.
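As an illustration of regulatory mapping, each registered use case can be tagged with an EU AI Act risk category. The categories in this sketch reflect the Act's structure; the example use cases and their tags are assumptions, and real classification needs legal review.

```python
from enum import Enum

class AIActRisk(Enum):
    UNACCEPTABLE = "Unacceptable risk (prohibited practices)"
    HIGH = "High risk (Annex III use cases, conformity obligations)"
    LIMITED = "Limited risk (transparency obligations)"
    MINIMAL = "Minimal risk"

# Hypothetical register entries; real classification requires legal review.
use_case_register = {
    "CV screening assistant": AIActRisk.HIGH,
    "Customer-facing chatbot": AIActRisk.LIMITED,
    "Internal meeting summariser": AIActRisk.MINIMAL,
}

for use_case, risk in use_case_register.items():
    print(f"{use_case}: {risk.value}")
```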
Operations & Management
Use Case Management
Teams experiment independently. No central register of AI use cases. Duplicated effort across departments.
Some use cases documented. Informal prioritisation. IT has partial visibility into what teams are doing with AI.
Centralised use case register. Prioritisation framework based on value, risk, and feasibility. Stage-gate process for new use cases.
Use case portfolio managed like a product backlog. Measured outcomes feeding future priorities. Reuse of components across use cases.
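A prioritisation framework based on value, risk, and feasibility can start as a simple weighted score. The sketch below is illustrative only; the weights, scales, and example use cases are assumptions rather than our scoring model.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    value: int        # expected business value, 1-5
    feasibility: int  # technical and organisational feasibility, 1-5
    risk: int         # delivery and compliance risk, 1-5 (higher = riskier)

def priority_score(uc: UseCase, weights=(0.5, 0.3, 0.2)) -> float:
    """Weighted score; risk counts against the use case."""
    wv, wf, wr = weights
    return wv * uc.value + wf * uc.feasibility - wr * uc.risk

backlog = [
    UseCase("Contract clause extraction", value=4, feasibility=3, risk=2),
    UseCase("HR policy chatbot", value=3, feasibility=4, risk=3),
]
for uc in sorted(backlog, key=priority_score, reverse=True):
    print(f"{uc.name}: {priority_score(uc):.2f}")
```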
Human Oversight
No formal review processes. AI outputs used directly without human verification. No escalation paths defined.
Informal expectation that users check AI outputs. Some teams have review steps. Escalation handled ad-hoc.
Risk-based oversight model. High-risk outputs require human approval. Defined escalation paths with named owners.
Automated confidence scoring triggers oversight. Feedback loops from human review improve AI quality. Oversight burden measured and optimised.
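"Automated confidence scoring triggers oversight" can be as simple as a routing rule: high-risk outputs always go to a human, low-confidence outputs are queued for review, and the rest are released with audit logging. The threshold and function below are illustrative assumptions, not a prescribed implementation.

```python
def route_output(confidence: float, high_risk: bool,
                 approval_threshold: float = 0.85) -> str:
    """Decide whether an AI output can be released or needs human review.

    Illustrative routing only: high-risk outputs always require approval,
    lower-confidence outputs are queued, everything else is released
    with the usual audit logging.
    """
    if high_risk:
        return "require_human_approval"
    if confidence < approval_threshold:
        return "queue_for_review"
    return "release_with_audit_log"

print(route_output(confidence=0.72, high_risk=False))  # -> queue_for_review
```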
Technology & Architecture
Individual SaaS subscriptions (ChatGPT, Copilot). No central platform. Data leaving the organisation via consumer AI tools.
Some enterprise AI tools deployed (Azure OpenAI, Copilot for M365). Limited integration between tools. Partial data sovereignty.
Governed AI platform with in-tenant deployment. Centralised model access. VNet isolation. Private endpoints. Full data sovereignty.
Multi-model platform with automated model selection. API-first architecture enabling workflow integration. Infrastructure-as-code for repeatable deployments.
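Automated model selection usually means routing each request to the cheapest model that meets its quality bar. The catalogue, costs, and routing rule below are assumptions, included only to show the shape of the idea.

```python
# Hypothetical catalogue: model name -> (relative cost per 1k tokens, capability tier)
MODELS = {
    "small-fast": (0.2, 1),
    "general-purpose": (1.0, 2),
    "frontier-reasoning": (5.0, 3),
}

def select_model(required_tier: int) -> str:
    """Pick the cheapest model that meets the required capability tier."""
    candidates = [(cost, name) for name, (cost, tier) in MODELS.items()
                  if tier >= required_tier]
    return min(candidates)[1]

print(select_model(required_tier=1))  # -> small-fast
print(select_model(required_tier=3))  # -> frontier-reasoning
```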
Training & Awareness
No AI training programme. Employees self-teach via YouTube and articles. Mixed understanding and significant fear.
Ad-hoc training sessions or lunch-and-learns. Some guidance documents circulated. Awareness varies widely by team.
Structured AI literacy programme. Role-based training paths (users, administrators, leadership). Mandatory AI awareness for all staff.
Continuous learning culture with embedded AI skills development. Internal champions network. Training effectiveness measured and iterated.
Technical Foundation
Monitoring & Performance
No monitoring in place. No metrics tracked. Nobody knows if AI is delivering value or how much it costs.
Basic usage metrics tracked (logins, queries). Cost visible at subscription level. No accuracy or outcome measurement.
Dashboards tracking usage, cost, and accuracy per assistant/use case. Performance baselines established. Regular reporting to stakeholders.
Automated anomaly detection. Model drift monitoring. ROI quantified per use case. Continuous improvement driven by performance data.
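A per-assistant dashboard does not need to be elaborate to be useful: aggregate usage, cost, and accuracy per use case and report the result. The event fields and figures in this sketch are invented for illustration.

```python
from collections import defaultdict

# Hypothetical usage events as they might arrive from platform logs.
events = [
    {"assistant": "policy-qa", "cost_gbp": 0.004, "correct": True},
    {"assistant": "policy-qa", "cost_gbp": 0.003, "correct": False},
    {"assistant": "contract-summary", "cost_gbp": 0.015, "correct": True},
]

totals = defaultdict(lambda: {"queries": 0, "cost_gbp": 0.0, "correct": 0})
for e in events:
    t = totals[e["assistant"]]
    t["queries"] += 1
    t["cost_gbp"] += e["cost_gbp"]
    t["correct"] += int(e["correct"])

for assistant, t in totals.items():
    accuracy = t["correct"] / t["queries"]
    print(f"{assistant}: {t['queries']} queries, "
          f"£{t['cost_gbp']:.3f}, accuracy {accuracy:.0%}")
```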
Risk & Threat Modelling
AI-specific threats not assessed. Security team not involved in AI decisions. No understanding of prompt injection or data leakage risks.
Awareness of AI security risks exists. Some controls in place (e.g. DLP rules). No formal AI threat model.
AI-specific threat models documented. OWASP Top 10 for LLMs reviewed. Input/output guardrails implemented. Red team testing conducted.
Continuous adversarial testing. Automated guardrails with real-time monitoring. Security integrated into AI development lifecycle.
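Input/output guardrails range from simple pattern checks to dedicated classifier services. The sketch below shows the idea at its most basic; the patterns are illustrative and nowhere near sufficient against real prompt injection.

```python
import re

# Naive illustrative patterns only; production guardrails rely on dedicated
# tooling and model-based classifiers, not regexes.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal (your )?system prompt",
]
OUTPUT_PATTERNS = [
    r"\b\d{16}\b",                 # possible card number
    r"[\w.+-]+@[\w-]+\.[\w.]+",    # email address
]

def check_input(prompt: str) -> bool:
    return not any(re.search(p, prompt, re.IGNORECASE) for p in INJECTION_PATTERNS)

def check_output(text: str) -> bool:
    return not any(re.search(p, text) for p in OUTPUT_PATTERNS)

print(check_input("Please ignore previous instructions"))      # False
print(check_output("The summary contains no personal data."))  # True
```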
Vendor & Model Due Diligence
Teams choose AI tools based on personal preference or marketing. No vendor evaluation criteria. No understanding of model differences.
IT involved in some AI vendor decisions. Basic security questionnaire applied. Limited understanding of model capabilities and limitations.
Formal AI vendor evaluation framework. Model comparison across accuracy, cost, latency, and data handling. Contractual review of training data usage.
Continuous vendor monitoring. Model benchmarking on your data. Strategic vendor relationships with negotiated enterprise terms. Exit strategies defined.
Data & Ethics
Data Lifecycle Management
Data unclassified. No understanding of what data AI systems can access. Users paste sensitive data into consumer AI tools.
Data classification exists but not AI-specific. Some access controls. Awareness of data sensitivity but inconsistent enforcement.
AI-specific data governance policies. Classified data mapped to permitted AI use cases. Retention policies for AI training data and conversation logs.
Automated data classification feeding AI access controls. Data quality monitoring for AI inputs. Full audit trail from source data to AI output.
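"Classified data mapped to permitted AI use cases" can be expressed as a small policy table checked before data reaches an AI destination. The classification scheme is a common one and the mapping below is an assumption for illustration, not a recommended policy.

```python
# Illustrative policy: which data classifications each AI destination may receive.
PERMITTED = {
    "public":       {"consumer_ai", "enterprise_ai", "in_tenant_ai"},
    "internal":     {"enterprise_ai", "in_tenant_ai"},
    "confidential": {"in_tenant_ai"},
    "restricted":   set(),  # never sent to AI systems
}

def is_permitted(classification: str, destination: str) -> bool:
    return destination in PERMITTED.get(classification, set())

print(is_permitted("confidential", "enterprise_ai"))  # False
print(is_permitted("internal", "in_tenant_ai"))       # True
```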
Ethics & Explainability
No ethical framework for AI. Outputs taken at face value. No consideration of bias, fairness, or transparency.
Awareness of AI ethics issues. Some guidance on appropriate use. No systematic approach to bias detection or explainability.
Ethical AI principles adopted. Source citations required for all AI outputs. Bias testing for high-risk use cases. Transparency statements for users.
Embedded ethical review in AI lifecycle. Automated fairness testing. External ethics advisory input. Published AI transparency reports.
Enterprise Integration
Incident Response & Escalation
No AI incident plan. Problems handled ad-hoc. No defined escalation path when AI produces harmful or incorrect outputs.
AI incidents handled through general IT incident process. Some awareness that AI failures differ from system outages. Informal escalation.
AI-specific incident response procedures. Defined severity levels for AI failures. Named owners for escalation. Post-incident reviews.
AI-specific runbooks with automated detection. Simulated incident exercises. Lessons learned feeding model and prompt improvements. Regulatory notification process.
Enterprise Architecture & Integration
AI runs in browser tabs. No integration with business systems. Users copy-paste AI outputs into emails and documents.
AI acknowledged in the enterprise architecture roadmap. Some Copilot/embedded AI features in M365. No API integration with line-of-business apps.
AI systems integrated via APIs into core workflows. Power Automate, ServiceNow, or ERP consuming AI outputs. Architecture standards for AI integration defined.
AI embedded across enterprise architecture. Event-driven AI triggers. Structured API endpoints returning typed data consumed by multiple systems. AI as infrastructure, not a tool.
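Structured, typed API endpoints are what let other systems consume AI outputs without copy-paste. A minimal sketch using FastAPI and Pydantic follows; the endpoint, fields, and placeholder summariser are assumptions, not a description of our platform.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class SummaryRequest(BaseModel):
    document_id: str
    text: str

class SummaryResponse(BaseModel):
    document_id: str
    summary: str
    confidence: float

@app.post("/summarise", response_model=SummaryResponse)
def summarise(req: SummaryRequest) -> SummaryResponse:
    # Placeholder for a call to a governed AI platform; downstream systems
    # (Power Automate, ServiceNow, ERP) consume the typed JSON response.
    summary_text = req.text[:200]  # stand-in for the model output
    return SummaryResponse(document_id=req.document_id,
                           summary=summary_text,
                           confidence=0.9)
```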
Model Lifecycle Management
No model management. Whatever model the SaaS tool uses. No awareness of model versions, deprecations, or alternatives.
Awareness of model choices. Some documentation of which models are in use. Ad-hoc model updates when vendors deprecate versions.
Model registry tracking all models in production. Evaluation criteria for model selection. Planned transitions when new models release. Regression testing.
Automated model benchmarking on your data. A/B testing for model upgrades. Cost-performance optimisation. Model governance integrated into change management.
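A model registry does not have to start as a product. A structured record of what is running where, plus an evaluation gate before switching versions, captures the idea; the names, scores, and threshold below are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class ModelRecord:
    name: str
    version: str
    use_cases: list[str]
    eval_score: float           # score on the organisation's own benchmark
    deprecation_date: str | None = None

registry = [
    ModelRecord("gpt-class-model", "2024-05", ["policy-qa"], eval_score=0.87),
    ModelRecord("gpt-class-model", "2025-01", [], eval_score=0.91),
]

def approve_upgrade(current: ModelRecord, candidate: ModelRecord,
                    min_gain: float = 0.02) -> bool:
    """Only switch versions if the candidate beats the incumbent on your own evals."""
    return candidate.eval_score >= current.eval_score + min_gain

print(approve_upgrade(registry[0], registry[1]))  # True
```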
Stakeholder Engagement
AI driven by one individual or team. No cross-functional input. Business leaders either unaware or sceptical. End users not consulted.
Some cross-functional awareness. IT leads AI with limited business input. Executive sponsor exists but involvement is light-touch.
AI steering group with cross-functional representation. Business cases co-owned by business and IT. User champions in each department.
AI strategy owned at board level with federated execution. Active community of practice. External stakeholder engagement (customers, regulators, partners).
Three Ways to Get Started
Choose the depth that fits your situation. Every option uses the same 15-dimension framework; the difference is how much support you need.
Self-Assessment Toolkit
£997 +VAT
The complete framework as a structured workbook. Your team runs the assessment independently using our methodology, scoring guides, and reporting templates.
- 15-dimension scoring workbook
- Maturity level descriptors and examples
- Board-ready summary template
- Gap analysis worksheet
AI Readiness Assessment
£12,500 +VAT
Comprehensive expert-led assessment across all 15 dimensions. Includes stakeholder interviews, independent maturity scoring, board-ready report, and executive presentation.
- Expert-led stakeholder interviews
- Independent dimension scoring
- 20-page board-ready report
- 90-minute executive presentation
Assessment + Platform
Year 1: £78,497 +VAT
The complete journey: assessment, governed AI platform deployment, and ongoing support. Everything your organisation needs to move from assessment to governed AI operations.
- Full AI Readiness Assessment
- Platform setup and configuration
- Governed AI assistants for your teams
- Ongoing support and quarterly reviews
Find Out Where You Stand
An honest assessment is the first step toward a credible AI strategy. No hype, no pressure, just a clear picture of your readiness and what to do about it.
Request an assessment
We will respond within two working days with next steps and a scoping conversation.