AI Readiness Assessment

15-Dimension AI Readiness Framework

15 Dimensions. 4 Maturity Levels. One honest picture of your organisation's AI readiness.

Most AI readiness assessments tell you what you want to hear. Ours tells you what you need to know: where you stand today, where the gaps are, and what it would realistically take to close them.

Four Maturity Levels

Each dimension is assessed against four maturity levels. Most organisations are not at "Leading" across the board, and that is perfectly fine. The point is to know where you are and decide where you need to be.

Basic

Ad hoc, reactive, no formal approach

Standard

Documented processes, some consistency

Advanced

Measured, managed, continuously improving

Leading

Optimised, predictive, industry-leading

The 15 Dimensions

Organised across five themes, each dimension captures a specific area of AI readiness. Together, they give your board a complete and honest picture.

Governance & Compliance

Policy & Governance

Basic

No AI policy. No governance structure. Teams adopt AI tools without oversight or approval.

Standard

Draft AI policy exists. Informal governance through IT or a single sponsor. Some visibility into AI use.

Advanced

Board-approved AI policy. Formal governance board with cross-functional representation. Decision framework for AI adoption.

Leading

Living policy with regular reviews. AI governance integrated into corporate governance. Metrics tracked and reported to board.

Compliance & Risk

Basic

AI risks not formally identified. No AI-specific risk register. Regulatory exposure unknown.

Standard

Key risks identified informally. Some awareness of AI Act / GDPR implications. Risk is discussed but not systematically tracked.

Advanced

AI-specific risk register maintained. Impact assessments conducted for new use cases. Regulatory mapping to EU AI Act risk categories.

Leading

Continuous compliance monitoring. Automated risk scoring. Regulatory horizon scanning. Third-party audits of AI systems.

Operations & Management

Use Case Management

Basic

Teams experiment independently. No central register of AI use cases. Duplicated effort across departments.

Standard

Some use cases documented. Informal prioritisation. IT has partial visibility into what teams are doing with AI.

Advanced

Centralised use case register. Prioritisation framework based on value, risk, and feasibility. Stage-gate process for new use cases.

Leading

Use case portfolio managed like a product backlog. Measured outcomes feeding future priorities. Reuse of components across use cases.

Human Oversight

Basic

No formal review processes. AI outputs used directly without human verification. No escalation paths defined.

Standard

Informal expectation that users check AI outputs. Some teams have review steps. Escalation handled ad-hoc.

Advanced

Risk-based oversight model. High-risk outputs require human approval. Defined escalation paths with named owners.

Leading

Automated confidence scoring triggers oversight. Feedback loops from human review improve AI quality. Oversight burden measured and optimised.

Technology & Architecture

Basic

Individual SaaS subscriptions (ChatGPT, Copilot). No central platform. Data leaving the organisation via consumer AI tools.

Standard

Some enterprise AI tools deployed (Azure OpenAI, Copilot for M365). Limited integration between tools. Partial data sovereignty.

Advanced

Governed AI platform with in-tenant deployment. Centralised model access. VNet isolation. Private endpoints. Full data sovereignty.

Leading

Multi-model platform with automated model selection. API-first architecture enabling workflow integration. Infrastructure-as-code for repeatable deployments.

Training & Awareness

Basic

No AI training programme. Employees self-teach via YouTube and articles. Mixed understanding and significant fear.

Standard

Ad-hoc training sessions or lunch-and-learns. Some guidance documents circulated. Awareness varies widely by team.

Advanced

Structured AI literacy programme. Role-based training paths (users, administrators, leadership). Mandatory AI awareness for all staff.

Leading

Continuous learning culture with embedded AI skills development. Internal champions network. Training effectiveness measured and iterated.

Technical Foundation

Monitoring & Performance

Basic

No monitoring in place. No metrics tracked. Nobody knows if AI is delivering value or how much it costs.

Standard

Basic usage metrics tracked (logins, queries). Cost visible at subscription level. No accuracy or outcome measurement.

Advanced

Dashboards tracking usage, cost, and accuracy per assistant/use case. Performance baselines established. Regular reporting to stakeholders.

Leading

Automated anomaly detection. Model drift monitoring. ROI quantified per use case. Continuous improvement driven by performance data.

Risk & Threat Modelling

Basic

AI-specific threats not assessed. Security team not involved in AI decisions. No understanding of prompt injection or data leakage risks.

Standard

Awareness of AI security risks exists. Some controls in place (e.g. DLP rules). No formal AI threat model.

Advanced

AI-specific threat models documented. OWASP Top 10 for LLMs reviewed. Input/output guardrails implemented. Red team testing conducted.

Leading

Continuous adversarial testing. Automated guardrails with real-time monitoring. Security integrated into AI development lifecycle.

Vendor & Model Due Diligence

Basic

Teams choose AI tools based on personal preference or marketing. No vendor evaluation criteria. No understanding of model differences.

Standard

IT involved in some AI vendor decisions. Basic security questionnaire applied. Limited understanding of model capabilities and limitations.

Advanced

Formal AI vendor evaluation framework. Model comparison across accuracy, cost, latency, and data handling. Contractual review of training data usage.

Leading

Continuous vendor monitoring. Model benchmarking on your data. Strategic vendor relationships with negotiated enterprise terms. Exit strategies defined.

Data & Ethics

Data Lifecycle Management

Basic

Data unclassified. No understanding of what data AI systems can access. Users paste sensitive data into consumer AI tools.

Standard

Data classification exists but not AI-specific. Some access controls. Awareness of data sensitivity but inconsistent enforcement.

Advanced

AI-specific data governance policies. Classified data mapped to permitted AI use cases. Retention policies for AI training data and conversation logs.

Leading

Automated data classification feeding AI access controls. Data quality monitoring for AI inputs. Full audit trail from source data to AI output.

Ethics & Explainability

Basic

No ethical framework for AI. Outputs taken at face value. No consideration of bias, fairness, or transparency.

Standard

Awareness of AI ethics issues. Some guidance on appropriate use. No systematic approach to bias detection or explainability.

Advanced

Ethical AI principles adopted. Source citations required for all AI outputs. Bias testing for high-risk use cases. Transparency statements for users.

Leading

Embedded ethical review in AI lifecycle. Automated fairness testing. External ethics advisory input. Published AI transparency reports.

Enterprise Integration

Incident Response & Escalation

Basic

No AI incident plan. Problems handled ad-hoc. No defined escalation path when AI produces harmful or incorrect outputs.

Standard

AI incidents handled through general IT incident process. Some awareness that AI failures differ from system outages. Informal escalation.

Advanced

AI-specific incident response procedures. Defined severity levels for AI failures. Named owners for escalation. Post-incident reviews.

Leading

AI-specific runbooks with automated detection. Simulated incident exercises. Lessons learned feeding model and prompt improvements. Regulatory notification process.

Enterprise Architecture & Integration

Basic

AI runs in browser tabs. No integration with business systems. Users copy-paste AI outputs into emails and documents.

Standard

AI acknowledged in EA roadmap. Some Copilot/embedded AI features in M365. No API integration with line-of-business apps.

Advanced

AI systems integrated via APIs into core workflows. Power Automate, ServiceNow, or ERP consuming AI outputs. Architecture standards for AI integration defined.

Leading

AI embedded across enterprise architecture. Event-driven AI triggers. Structured API endpoints returning typed data consumed by multiple systems. AI as infrastructure, not a tool.

Model Lifecycle Management

Basic

No model management. Whatever model the SaaS tool uses. No awareness of model versions, deprecations, or alternatives.

Standard

Awareness of model choices. Some documentation of which models are in use. Ad-hoc model updates when vendors deprecate versions.

Advanced

Model registry tracking all models in production. Evaluation criteria for model selection. Planned transitions when new models release. Regression testing.

Leading

Automated model benchmarking on your data. A/B testing for model upgrades. Cost-performance optimisation. Model governance integrated into change management.

Stakeholder Engagement

Basic

AI driven by one individual or team. No cross-functional input. Business leaders either unaware or sceptical. End users not consulted.

Standard

Some cross-functional awareness. IT leads AI with limited business input. Executive sponsor exists but involvement is light-touch.

Advanced

AI steering group with cross-functional representation. Business cases co-owned by business and IT. User champions in each department.

Leading

AI strategy owned at board level with federated execution. Active community of practice. External stakeholder engagement (customers, regulators, partners).

What Your Scorecard Looks Like

Every assessment produces a clear, dimension-by-dimension picture. Here is an illustrative example of what a mid-sized organisation's scorecard might reveal.

Sample Readiness Scorecard

Policy & Governance
Standard
Compliance & Risk
Basic
Use Case Management
Basic
Human Oversight
Basic
Technology & Architecture
Advanced
Training & Awareness
Basic
Monitoring & Performance
Basic
Data Lifecycle Management
Standard
Ethics & Explainability
Standard

This is a simplified illustration. The full assessment covers all 15 dimensions with detailed findings, gap analysis, and prioritised recommendations.

Three Ways to Get Started

Choose the depth that fits your situation. Every option uses the same 15-dimension framework; the difference is how much support you need.

Self-Assessment Toolkit

£997 +VAT

The complete framework as a structured workbook. Your team runs the assessment independently using our methodology, scoring guides, and reporting templates.

  • -- 15-dimension scoring workbook
  • -- Maturity level descriptors and examples
  • -- Board-ready summary template
  • -- Gap analysis worksheet
View details

AI Readiness Assessment

£12,500 +VAT

Comprehensive expert-led assessment across all 15 dimensions. Includes stakeholder interviews, independent maturity scoring, board-ready report, and executive presentation.

  • -- Expert-led stakeholder interviews
  • -- Independent dimension scoring
  • -- 20-page board-ready report
  • -- 90-minute executive presentation
Request Assessment

Assessment + Platform

Year 1: £78,497 +VAT

The complete journey: assessment, governed AI platform deployment, and ongoing support. Everything your organisation needs to move from assessment to governed AI operations.

  • -- Full AI Readiness Assessment
  • -- Platform setup and configuration
  • -- Governed AI assistants for your teams
  • -- Ongoing support and quarterly reviews
View pricing breakdown

Find Out Where You Stand

An honest assessment is the first step toward a credible AI strategy. No hype, no pressure, just a clear picture of your readiness and what to do about it.

Request an assessment

We will respond within two working days with next steps and a scoping conversation.