An effective AI CoE runs on a structured cadence of governance meetings. Each forum has a specific purpose, attendees, inputs, and outputs. Together, they create a feedback loop that balances speed, quality, and risk.
Executive Steering Committee
Frequency: Monthly · Duration: 60–90 minutes · Chair: Executive Sponsor or Head of AI CoE
Purpose
Set strategic direction, approve funding, remove organisational blockers, and review portfolio performance.
Attendees
- Executive Sponsor(s)
- Head of AI CoE
- CIO/CDO/CTO
- CFO or Finance representative
- Chief Risk Officer or Risk Lead
- Business domain executives (rotating)
Inputs
- Portfolio performance dashboard (value, risk, adoption)
- Strategic initiatives requiring executive decision
- Escalated blockers (budget, resourcing, policy conflicts)
Outputs
- Strategic decisions (approved, deferred, rejected)
- Budget allocations and funding approvals
- Action items for blockers
Success Metrics
- Decisions made within one meeting cycle
- Clear escalation path for unresolved issues
- Transparency of portfolio health
Portfolio Council
Frequency: Fortnightly · Duration: 60 minutes · Chair: Head of AI CoE or PMO Lead
Purpose
Triage new AI use case proposals, prioritise the backlog, allocate resources, and manage stage-gate approvals.
Attendees
- Head of AI CoE
- AI Product Owners
- Domain Leads
- PMO Analyst
- Risk and Compliance representatives
Inputs
- New use case intake forms
- Stage-gate review results (PoC, pilot, production)
- Resource availability and capacity planning
- Prioritisation rubric (ROI, strategic fit, feasibility, risk)
Outputs
- Ranked backlog of AI initiatives
- Stage-gate decisions (go/no-go/conditional)
- Resource allocations to active initiatives
Success Metrics
- Transparent prioritisation criteria
- Predictable stage-gate review times
- Balanced portfolio (quick wins + strategic bets)
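The prioritisation rubric above can be sketched as a simple weighted score. The criteria mirror the rubric's four dimensions (ROI, strategic fit, feasibility, risk), but the weights, the 1–5 rating scale, and the example use cases are illustrative assumptions, not prescribed values:

```python
# Illustrative weighted-scoring sketch for the prioritisation rubric.
# Weights and the 1-5 rating scale are assumptions; tune to your portfolio.
WEIGHTS = {"roi": 0.35, "strategic_fit": 0.30, "feasibility": 0.20, "risk": 0.15}

def score_use_case(ratings: dict) -> float:
    """Each rating is 1-5; risk is inverted so lower risk scores higher."""
    adjusted = dict(ratings)
    adjusted["risk"] = 6 - adjusted["risk"]  # invert: low risk -> high score
    return round(sum(WEIGHTS[k] * adjusted[k] for k in WEIGHTS), 2)

def rank_backlog(use_cases: dict) -> list:
    """Return use-case names ordered by descending rubric score."""
    return sorted(use_cases,
                  key=lambda name: score_use_case(use_cases[name]),
                  reverse=True)

# Hypothetical backlog entries for illustration only.
backlog = {
    "churn-model": {"roi": 5, "strategic_fit": 4, "feasibility": 4, "risk": 2},
    "doc-summariser": {"roi": 3, "strategic_fit": 5, "feasibility": 5, "risk": 1},
}
```

Publishing the weights alongside the ranked backlog is one way to meet the "transparent prioritisation criteria" metric: anyone can recompute a score and see why an initiative ranked where it did.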
Architecture Review Board
Frequency: Fortnightly · Duration: 90 minutes · Chair: Lead AI/ML Architect
Purpose
Review solution designs for adherence to architectural standards, security, scalability, and reusability.
Attendees
- AI/ML Architects
- Security Architect
- MLOps Lead
- Domain Architect (as needed)
- Solution designer presenting
Inputs
- Architecture design documents
- Data flow diagrams
- Technology stack proposals
- Compliance and security checklists
Outputs
- Approval, conditional approval (with changes), or decline
- Recommended patterns or alternatives
- Action items for security or compliance gaps
Success Metrics
- Review turnaround time <5 business days
- Reuse of reference architectures
- Consistency across solutions
Model Risk & Ethics Committee
Frequency: Monthly · Duration: 60 minutes · Chair: Model Risk & Ethics Lead
Purpose
Assess model risk, evaluate bias and fairness, review red-team results, and approve model launches.
Attendees
- Model Risk & Ethics Lead
- Privacy Counsel
- Compliance Lead
- Applied Scientists / ML Engineers (presenting)
- Domain Product Owner
Inputs
- Model evaluation reports (accuracy, bias, robustness)
- Red-teaming and adversarial testing results
- Data Protection Impact Assessments (DPIAs)
- Explainability and transparency documentation
Outputs
- Go/no-go decision for model launch
- Risk mitigation actions
- Documentation of ethical considerations
Success Metrics
- Zero unmitigated high-risk models in production
- Documented rationale for all launch decisions
- Timely review (within 10 business days of request)
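A go/no-go decision with a documented rationale can be made mechanical for the quantitative part of the review. The sketch below assumes evaluation reports carry accuracy, bias-gap, and robustness figures; the threshold values themselves are placeholders the committee would set, and qualitative ethics findings still sit outside any such check:

```python
# Illustrative launch-gate sketch; thresholds are assumptions, not policy.
THRESHOLDS = {"min_accuracy": 0.85, "max_bias_gap": 0.05, "min_robustness": 0.80}

def launch_decision(report: dict) -> tuple:
    """Return ('go'|'no-go', rationale) so every decision is documented."""
    issues = []
    if report["accuracy"] < THRESHOLDS["min_accuracy"]:
        issues.append("accuracy %.2f below %.2f"
                      % (report["accuracy"], THRESHOLDS["min_accuracy"]))
    if report["bias_gap"] > THRESHOLDS["max_bias_gap"]:
        issues.append("bias gap %.2f exceeds %.2f"
                      % (report["bias_gap"], THRESHOLDS["max_bias_gap"]))
    if report["robustness"] < THRESHOLDS["min_robustness"]:
        issues.append("robustness %.2f below %.2f"
                      % (report["robustness"], THRESHOLDS["min_robustness"]))
    if issues:
        return ("no-go", issues)
    return ("go", ["all quantitative evaluation thresholds met"])
```

Because the rationale list is returned with the decision, filing it satisfies the "documented rationale for all launch decisions" metric automatically.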
Data Governance Forum
Frequency: Monthly · Duration: 60 minutes · Chair: Data Governance Lead or CDO representative
Purpose
Ensure data quality, compliance, and availability for AI use cases. Manage data access requests, SLAs, and lineage.
Attendees
- Data Governance Lead
- Data Engineers
- Privacy Counsel
- Domain Data Owners
- AI Product Owners (as needed)
Inputs
- Data access requests
- Data quality incident reports
- SLA adherence metrics
- Lineage and cataloguing updates
Outputs
- Approved or declined data access requests
- Data quality improvement actions
- Updated data contracts and SLAs
Success Metrics
- Data request turnaround time <7 days
- SLA adherence >95%
- Zero unauthorised data access incidents
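The 7-day turnaround metric is easy to monitor from the request queue itself. A minimal sketch, assuming each request records a submission date and is checked against either its closure date or today:

```python
from datetime import date

TURNAROUND_TARGET_DAYS = 7  # from the success metric above

def breaches_turnaround(submitted: date, closed_or_today: date) -> bool:
    """True when a data access request has been open longer than the target."""
    return (closed_or_today - submitted).days > TURNAROUND_TARGET_DAYS
```

Running this over all open requests before each forum gives the chair a ready-made list of items at risk of missing the SLA.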
MLOps Change Advisory Board (CAB)
Frequency: Weekly · Duration: 30–45 minutes · Chair: MLOps Lead
Purpose
Approve production deployments, coordinate releases, manage rollback plans, and track incidents.
Attendees
- MLOps Engineers
- Applied Scientists / ML Engineers
- Security representative
- Domain Product Owner
- On-call support engineer
Inputs
- Deployment requests with testing evidence
- Rollback plans and risk assessments
- Previous week’s incident log
- Change calendar (what’s deploying when)
Outputs
- Approved/deferred deployments
- Updated change calendar
- Incident follow-up actions
Success Metrics
- Deployment success rate >98%
- Mean time to rollback <15 minutes
- Zero unplanned production outages
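The CAB's first check is whether a deployment request arrives with complete evidence. A sketch of that triage step, assuming the three evidence fields named in the inputs above; the field names are illustrative:

```python
# Illustrative CAB triage sketch; the evidence field names are assumptions.
REQUIRED_EVIDENCE = ("test_results", "rollback_plan", "risk_assessment")

def triage_deployment(request: dict) -> str:
    """Approve only when every required piece of evidence is attached."""
    missing = [field for field in REQUIRED_EVIDENCE if not request.get(field)]
    if missing:
        return "deferred: missing " + ", ".join(missing)
    return "approved"
```

Deferring incomplete requests before the meeting keeps the 30–45 minute slot for genuine risk discussion rather than chasing paperwork.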
Delivery Sync
Frequency: Weekly · Duration: 30 minutes · Chair: Head of AI CoE or PMO Analyst
Purpose
Track active initiatives, surface dependencies, manage impediments, and coordinate cross-team work.
Attendees
- AI Product Owners
- Domain Leads
- MLOps and Data Engineering leads
- PMO Analyst
Inputs
- Sprint progress and burndown
- Blockers and impediments
- Dependencies on other teams
- RAID log (Risks, Assumptions, Issues, Dependencies)
Outputs
- Updated status dashboard
- Action items for blockers
- Escalations to Portfolio Council or Executive Steering
Success Metrics
- Blockers resolved within 1 week
- Transparency on progress and risks
- High initiative completion rate
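The "blockers resolved within 1 week" metric doubles as an escalation trigger: any blocker still open past that age goes up to the Portfolio Council or Executive Steering. A minimal sketch, assuming each RAID item carries a raised date and an open/resolved flag:

```python
from datetime import date, timedelta

ESCALATION_AGE = timedelta(days=7)  # mirrors the one-week resolution target

def needs_escalation(raised_on: date, resolved: bool, today: date) -> bool:
    """Flag open blockers older than the escalation threshold."""
    return not resolved and (today - raised_on) > ESCALATION_AGE
```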
Community of Practice (CoP)
Frequency: Fortnightly · Duration: 60 minutes · Format: Open forum, demos, and knowledge sharing
Purpose
Foster learning, share reusable patterns, celebrate wins, and discuss lessons learned.
Attendees
- Open to all AI practitioners (engineers, scientists, product owners)
- Occasional external speakers or vendor demos
Agenda Examples
- Demo of a new golden path or tool
- Post-mortem from a recent launch
- Presentation on emerging AI techniques
- Q&A with external experts
Outputs
- Updated playbooks and templates
- Reusable code libraries and patterns
- Action items for platform improvements
Success Metrics
- Attendance and engagement
- Number of reusable assets contributed
- Cross-team collaboration
Incident & Quality Review
Frequency: As needed (within 48 hours of major incidents) · Duration: 60–90 minutes · Chair: Head of AI CoE or Risk Lead
Purpose
Conduct post-mortems for production incidents, model drift, or quality regressions. Identify root causes and corrective actions.
Attendees
- Incident owner
- MLOps and on-call engineers
- Model Risk Lead
- Domain Product Owner
- PMO Analyst (to track follow-ups)
Inputs
- Incident timeline and logs
- Monitoring and alerting data
- Preliminary root cause hypothesis
Outputs
- Root Cause Analysis (RCA) report
- Corrective and preventive actions (CAPA)
- Updates to runbooks or playbooks
Success Metrics
- RCA completed within 5 business days
- Corrective actions tracked to completion
- Incident recurrence rate <5%
Vendor & Procurement Forum
Frequency: Monthly · Duration: 60 minutes · Chair: Vendor Manager or Head of AI CoE
Purpose
Review vendor health, roadmap alignment, cost management, and contract renewals. Identify risks and negotiation opportunities.
Attendees
- Vendor Manager
- Procurement Lead
- Head of AI CoE
- Legal/Contracts representative
- Domain stakeholders using vendor solutions
Inputs
- Vendor scorecards (performance, cost, support)
- Contract renewal dates
- Vendor roadmap updates
- Cost vs. budget analysis
Outputs
- Vendor health status (green/amber/red)
- Renegotiation or exit actions
- Cost optimisation opportunities
Success Metrics
- Vendor SLA adherence >95%
- Cost variance <10% of budget
- Fair contract terms (exit clauses, IP rights)
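The green/amber/red status can be derived directly from the two quantitative success metrics above (SLA adherence >95%, cost variance <10%). The amber band in this sketch is an assumption; the forum would set its own intermediate thresholds:

```python
def vendor_rag(sla_adherence: float, cost_variance: float) -> str:
    """RAG status from SLA adherence and cost variance (both as fractions).

    Green thresholds come from the success metrics above;
    the amber band is an illustrative assumption.
    """
    if sla_adherence >= 0.95 and cost_variance <= 0.10:
        return "green"
    if sla_adherence >= 0.90 and cost_variance <= 0.20:
        return "amber"
    return "red"
```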
Compliance & Audit Prep
Frequency: Quarterly · Duration: 60–90 minutes · Chair: Compliance Lead
Purpose
Ensure audit readiness, maintain evidence packs, and track compliance obligations (AI Act, GDPR, sector-specific regulations).
Attendees
- Compliance Lead
- Legal/Privacy Counsel
- Head of AI CoE
- Internal Audit representative
- PMO Analyst
Inputs
- Compliance obligations tracker
- Evidence packs (policies, approvals, logs)
- Upcoming audit schedule
- Regulatory changes
Outputs
- Audit readiness status
- Evidence gaps and remediation actions
- Updated compliance tracker
Success Metrics
- Audit findings: zero critical, <3 medium
- Evidence completeness >95%
- Timely response to regulatory queries
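The ">95% evidence completeness" metric can be computed straight from the evidence pack. A minimal sketch, assuming the pack is tracked as a mapping from required item to its attached evidence (empty meaning missing); the item names are illustrative:

```python
def evidence_completeness(evidence_pack: dict) -> float:
    """Fraction of required items with evidence attached (truthy values)."""
    present = sum(1 for item in evidence_pack.values() if item)
    return present / len(evidence_pack)

def audit_ready(evidence_pack: dict, target: float = 0.95) -> bool:
    """Compare completeness against the success-metric threshold."""
    return evidence_completeness(evidence_pack) >= target

# Hypothetical pack: one item (model cards) still missing evidence.
pack = {"dpia": "filed", "approvals": "filed", "model_cards": "", "access_logs": "filed"}
```

Anything below target falls out as an "evidence gap and remediation action" for the next session.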
Meeting Templates and Tools
To streamline governance, the CoE should maintain:
- Agenda and minutes templates for each forum (available at /templates/meeting-agenda-minutes)
- Decision log tracking all key decisions, owners, and dates
- Action tracker for follow-ups and accountability
- RAID log for risks, assumptions, issues, and dependencies
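A decision log needs very little structure to be useful across forums. One possible shape, with field names chosen here for illustration:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    """One entry in the cross-forum decision log (illustrative schema)."""
    forum: str            # which governance forum made the call
    summary: str          # what was decided
    owner: str            # who carries it forward
    decided_on: str       # ISO date string, e.g. "2024-03-01"
    status: str = "open"  # open | actioned | superseded
```

Keeping the log forum-agnostic means the Executive Steering Committee can review one consolidated list rather than per-forum minutes.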
Next Steps
With governance in place, the next step is defining golden paths and standards that accelerate delivery while ensuring quality.