5. 90-Day Launch Plan

A practical roadmap to stand up your AI CoE and start delivering value within three months.

Standing up an AI Centre of Excellence doesn’t happen overnight, but it doesn’t require years of planning either. This 90-day launch plan balances quick wins with foundational work, ensuring your CoE starts delivering value while building sustainable structures.


Phase 1: Foundations (Days 0–30)

Goal: Establish leadership, baseline current state, and define scope.

Week 1: Appoint Leadership and Secure Sponsorship

Actions:

  • ✅ Appoint Head of AI CoE and Executive Sponsor(s)
  • ✅ Define reporting structure (CIO, CDO, CTO, or Chief AI Officer)
  • ✅ Secure initial budget and headcount approvals
  • ✅ Communicate the CoE vision to the organisation

Outputs:

  • Executive mandate and charter
  • Initial team roster (even if not yet hired)
  • Communication plan

Key risks:

  • Lack of executive buy-in → Escalate early; show quick wins
  • Unclear reporting line → Define before announcing CoE

Week 2–3: Baseline AI Readiness

Actions:

  • ✅ Conduct AI Readiness Assessment across 15 dimensions
  • ✅ Inventory existing AI initiatives (shadow IT, pilots, production systems)
  • ✅ Identify gaps in governance, risk, and technology
  • ✅ Interview stakeholders (IT, Risk, Legal, Business) to understand pain points

Outputs:

  • AI maturity heatmap (where you are today)
  • Inventory of existing AI projects
  • Prioritised gap analysis

Key risks:

  • Incomplete inventory → Engage Finance and Procurement to track AI spend
  • Resistance to assessment → Frame as “improvement, not blame”

Week 3–4: Define Non-Negotiables and Initial Standards

Actions:

  • ✅ Draft initial non-negotiables (identity, secrets, logging, data contracts)
  • ✅ Define security baselines (threat modelling, least privilege, encryption)
  • ✅ Establish privacy-by-design principles (DPIA, data minimisation)
  • ✅ Create intake and prioritisation rubric (ROI, strategic fit, feasibility, risk)

Outputs:

  • Non-negotiables document (v0.1)
  • Intake form template
  • Prioritisation rubric

Key risks:

  • Standards too restrictive → Start lean; evolve based on feedback
  • Ignored by delivery teams → Embed standards in golden paths
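The intake and prioritisation rubric above can be sketched as a simple weighted score. The criteria weights, the 1–5 scale, and the sample proposals below are illustrative assumptions, not a prescribed standard — adapt them to your own business case process.

```python
# Illustrative weighted-scoring sketch for the intake rubric.
# Weights, the 1-5 scale, and the sample proposals are assumptions to adapt.
WEIGHTS = {"roi": 0.35, "strategic_fit": 0.25, "feasibility": 0.25, "risk": 0.15}

def score_use_case(scores: dict[str, int]) -> float:
    """Return a weighted 1-5 score; 'risk' is inverted so lower risk scores higher."""
    adjusted = dict(scores)
    adjusted["risk"] = 6 - adjusted["risk"]  # invert: 1 (high risk) -> 5
    return round(sum(WEIGHTS[k] * adjusted[k] for k in WEIGHTS), 2)

proposals = {
    "invoice-triage": {"roi": 4, "strategic_fit": 5, "feasibility": 4, "risk": 2},
    "churn-model": {"roi": 3, "strategic_fit": 3, "feasibility": 5, "risk": 1},
}
ranked = sorted(proposals.items(), key=lambda kv: score_use_case(kv[1]), reverse=True)
for name, s in ranked:
    print(name, score_use_case(s))
```

Even a lightweight score like this forces consistent conversations at intake: every proposal is assessed against the same criteria before it competes for CoE resources.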

Week 4: Stand Up MVP Platform

Actions:

  • ✅ Provision cloud environment (AWS, Azure, GCP)
  • ✅ Deploy foundational tools: experiment tracking (MLflow), model registry, CI/CD
  • ✅ Set up identity and access management (IAM, SSO)
  • ✅ Implement logging and secrets management

Outputs:

  • MVP AI platform (basic but functional)
  • Access provisioning process
  • Platform documentation (wiki or Confluence)

Key risks:

  • Over-engineering the platform → Start with essentials; add features incrementally
  • Vendor lock-in → Use open-source where possible; negotiate exit clauses

Week 4: Pick 2–3 High-Value Pilots

Actions:

  • ✅ Review existing AI use cases and select 2–3 for pilot acceleration
  • ✅ Criteria: High business value, manageable risk, sponsorship, data availability
  • ✅ Assign Product Owners and delivery teams
  • ✅ Define success metrics (business outcomes, not just technical metrics)

Outputs:

  • Pilot project briefs
  • Success criteria and timelines
  • Resource allocations

Key risks:

  • Pilots too complex → Start with medium complexity; demonstrate value early
  • Lack of executive sponsorship → Ensure each pilot has C-level backing

Phase 2: Enablement and Governance (Days 31–60)

Goal: Build golden paths, implement governance cadence, and run pilots.

Week 5–6: Build Golden Paths

Actions:

  • ✅ Document reference architectures for common patterns (batch inference, real-time scoring, LLM applications)
  • ✅ Create templates: data pipeline, model training, deployment, monitoring
  • ✅ Implement evaluation gates (accuracy, bias, robustness thresholds)
  • ✅ Publish golden paths to internal wiki or platform docs

Outputs:

  • 3–5 golden paths (documented and tested)
  • Template library
  • Evaluation framework

Key risks:

  • Golden paths ignored → Demonstrate value via pilot projects
  • Documentation out of sync → Automate doc generation where possible
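An evaluation gate from a golden path can be as simple as a threshold check that blocks promotion until every metric clears its limit. The metric names and threshold values below are assumptions for the sketch, not mandated limits.

```python
# Illustrative evaluation gate: block model promotion unless metrics clear thresholds.
# Metric names and limits are assumptions; set them per use case and risk tier.
THRESHOLDS = {
    "accuracy": ("min", 0.85),
    "bias_gap": ("max", 0.05),   # max allowed disparity between groups
    "robustness": ("min", 0.80), # score under perturbed inputs
}

def evaluate_gate(metrics: dict[str, float]) -> tuple[bool, list[str]]:
    """Return (passed, failure reasons); a missing metric also fails the gate."""
    failures = []
    for name, (direction, limit) in THRESHOLDS.items():
        value = metrics.get(name)
        if value is None:
            failures.append(f"{name}: metric missing")
        elif direction == "min" and value < limit:
            failures.append(f"{name}: {value} < {limit}")
        elif direction == "max" and value > limit:
            failures.append(f"{name}: {value} > {limit}")
    return (not failures, failures)

passed, reasons = evaluate_gate({"accuracy": 0.91, "bias_gap": 0.03, "robustness": 0.77})
print(passed, reasons)
```

Wiring a check like this into CI makes the gate self-enforcing: a model that fails any threshold cannot reach the deployment stage, and the failure reasons feed straight into the review record.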

Week 6–7: Establish Governance Cadence

Actions:

  • ✅ Schedule recurring meetings: Executive Steering, Portfolio Council, Architecture Review, MLOps CAB
  • ✅ Create agenda and minutes templates
  • ✅ Define RACI for key decisions
  • ✅ Launch RAID log (Risks, Assumptions, Issues, Dependencies)

Outputs:

  • Meeting calendar for next 6 months
  • Decision log and action tracker
  • RACI matrix

Key risks:

  • Meeting overload → Keep meetings focused and time-boxed
  • Low attendance → Ensure value is demonstrated; rotate participants
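A RAID log need not start as a heavyweight tool; a structured record per item is enough to track ownership and filter what still needs attention. The field names, statuses, and sample entries below are assumptions for the sketch.

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative RAID-log record; categories mirror Risks, Assumptions, Issues,
# Dependencies. Fields and the sample entries are assumptions for the sketch.
class Category(Enum):
    RISK = "risk"
    ASSUMPTION = "assumption"
    ISSUE = "issue"
    DEPENDENCY = "dependency"

@dataclass
class RaidItem:
    category: Category
    description: str
    owner: str
    open: bool = True

log = [
    RaidItem(Category.RISK, "Model vendor contract expires Q3", "Head of CoE"),
    RaidItem(Category.ISSUE, "Pilot team lacks data access", "Platform Lead"),
    RaidItem(Category.DEPENDENCY, "IAM rollout by IT", "CoE PM", open=False),
]
open_items = [item for item in log if item.open]
print(len(open_items))
```

The point of the structure is the review cadence it enables: every governance meeting can open with the filtered list of open items and their owners, rather than a free-text status round.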

Week 7–8: Run Proof-of-Value (PoV) on Pilots

Actions:

  • ✅ Pilot teams adopt golden paths for their solutions
  • ✅ Conduct architecture and model risk reviews
  • ✅ Gather feedback on golden paths and platform
  • ✅ Track time-to-value and adoption metrics

Outputs:

  • PoV results (technical feasibility, business value)
  • Feedback for golden path improvements
  • Lessons learned

Key risks:

  • Pilot failures → Treat as learning; document what didn’t work
  • Insufficient business engagement → Product Owners must drive adoption

Week 8: Launch Community of Practice

Actions:

  • ✅ Schedule first Community of Practice (CoP) meeting
  • ✅ Invite all AI practitioners (engineers, scientists, product owners)
  • ✅ Agenda: Intro to CoE, demo of golden paths, Q&A
  • ✅ Create Slack/Teams channel for async collaboration

Outputs:

  • First CoP meeting (recorded)
  • Community engagement plan
  • Collaboration channel

Key risks:

  • Low engagement → Showcase quick wins and celebrate contributors
  • One-way communication → Make it interactive; solicit feedback

Phase 3: Production Readiness and Measurement (Days 61–90)

Goal: Launch first production solutions, implement monitoring, and start tracking benefits.

Week 9–10: Take 1–2 Pilots to Production

Actions:

  • ✅ Pilot teams complete production readiness checklist
  • ✅ Conduct final architecture, security, privacy, and model risk reviews
  • ✅ Deploy to production with monitoring and rollback plans
  • ✅ Communicate launches internally

Outputs:

  • 1–2 AI solutions live in production
  • Monitoring dashboards and runbooks
  • Case studies for future reference

Key risks:

  • Rushed deployments → Enforce stage-gate discipline
  • Lack of monitoring → No production deployment without observability

Week 11: Implement Monitoring and Playbooks

Actions:

  • ✅ Deploy drift detection and alerting for production models
  • ✅ Create runbooks for common incidents (model degradation, API outages, data quality)
  • ✅ Establish on-call rotation and incident response process
  • ✅ Test rollback procedures

Outputs:

  • Monitoring dashboards (uptime, latency, drift)
  • Runbooks and playbooks
  • Tested incident response

Key risks:

  • Alert fatigue → Tune thresholds; avoid noisy alerts
  • Unclear escalation → Document on-call procedures and contacts
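One common way to implement the drift detection above is the Population Stability Index (PSI), which compares a production score distribution against a training-time baseline. The equal-width binning and the 0.2 alert threshold below are common rules of thumb, not fixed policy; the sample distributions are fabricated for illustration.

```python
import math

# Illustrative drift check using the Population Stability Index (PSI).
# Bin strategy and the 0.2 alert threshold are rules of thumb, not fixed policy.
def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """PSI between a baseline ('expected') and current ('actual') distribution."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]
    def bucket_fracs(values):
        counts = [0] * bins
        for v in values:
            counts[min(sum(v > e for e in edges), bins - 1)] += 1
        return [max(c / len(values), 1e-6) for c in counts]  # avoid log(0)
    e_frac, a_frac = bucket_fracs(expected), bucket_fracs(actual)
    return sum((a - e) * math.log(a / e) for e, a in zip(e_frac, a_frac))

baseline = [i / 100 for i in range(100)]           # training-time score distribution
shifted = [min(1.0, v + 0.3) for v in baseline]    # production scores drifted upward
score = psi(baseline, shifted)
if score > 0.2:
    print(f"ALERT: drift detected (PSI={score:.2f})")
```

In practice a check like this runs on a schedule against recent production data, and a breach raises an alert that points the on-call engineer at the relevant runbook.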

Week 12: Launch Benefits Ledger (v1)

Actions:

  • ✅ Define value tracking framework (ROI, cost savings, time saved, revenue uplift)
  • ✅ Establish baseline metrics before AI implementation
  • ✅ Track actual outcomes vs. predicted benefits
  • ✅ Create portfolio dashboard for Executive Steering

Outputs:

  • Benefits ledger (spreadsheet or tool)
  • Portfolio dashboard (value, risk, adoption)
  • Quarterly business review template

Key risks:

  • Benefits overstated → Use conservative assumptions; validate with Finance
  • Lack of baseline → Capture “before” metrics for all future initiatives
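The benefits ledger can start as a handful of structured entries that compare realised value against the pro-rated business-case prediction. The field names and figures below are assumptions for the sketch; in practice the realised numbers would be validated with Finance, as noted above.

```python
from dataclasses import dataclass

# Illustrative benefits-ledger entry comparing predicted and realised value.
# Field names and the sample figures are assumptions for the sketch.
@dataclass
class BenefitEntry:
    use_case: str
    predicted_annual_value: float  # from the signed-off business case
    realised_to_date: float        # validated with Finance
    months_live: int

    def realisation_rate(self) -> float:
        """Realised value vs pro-rated prediction (1.0 = on track)."""
        expected_so_far = self.predicted_annual_value * self.months_live / 12
        return round(self.realised_to_date / expected_so_far, 2) if expected_so_far else 0.0

ledger = [
    BenefitEntry("invoice-triage", 600_000, 120_000, 3),
    BenefitEntry("churn-model", 400_000, 40_000, 2),
]
for entry in ledger:
    print(entry.use_case, entry.realisation_rate())
```

A realisation rate well below 1.0 is exactly the conservative signal the portfolio dashboard should surface to Executive Steering, rather than the headline predicted figure.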

Week 12: Retrospective and Roadmap Update

Actions:

  • ✅ Conduct CoE retrospective: What went well? What didn’t?
  • ✅ Gather feedback from stakeholders (delivery teams, executives, risk/compliance)
  • ✅ Update golden paths, standards, and governance cadence based on lessons learned
  • ✅ Draft 6-month roadmap: next use cases, platform enhancements, training plans

Outputs:

  • Retrospective report
  • Updated standards and golden paths
  • 6-month roadmap

Key risks:

  • Ignoring feedback → Act on lessons learned; communicate changes
  • Roadmap overcommitment → Be realistic about capacity

90-Day Checklist

| Milestone | Complete? |
| --- | --- |
| Head of AI CoE appointed | ☐ |
| Executive Sponsor secured | ☐ |
| AI Readiness Assessment conducted | ☐ |
| Non-negotiables defined | ☐ |
| MVP platform deployed | ☐ |
| 2–3 pilots selected | ☐ |
| Golden paths documented | ☐ |
| Governance cadence established | ☐ |
| Community of Practice launched | ☐ |
| 1–2 pilots live in production | ☐ |
| Monitoring and runbooks in place | ☐ |
| Benefits ledger (v1) launched | ☐ |

Post-90-Day: Sustaining Momentum

Once the CoE is stood up, focus shifts to:

✓ Scaling adoption — Onboard more use cases and teams
✓ Maturing the platform — Add advanced features (AutoML, feature stores, advanced monitoring)
✓ Building talent — Hire, train, and certify AI practitioners
✓ Measuring value — Quarterly business reviews with executives
✓ Evolving standards — Regular updates to golden paths and non-negotiables


Next Steps

With the CoE launched, the final step is understanding success metrics and avoiding anti-patterns that can derail even well-intentioned initiatives.