
Founder
Rich Bushnell
Independent AI governance for CEOs and boards who need clarity on commercial AI decisions, not another vendor pitch.
Partner in the Loop exists to close the gap between AI opportunity and board-level accountability in regulated industries.
Why I Started Partner in the Loop
I founded Partner in the Loop because I kept seeing the same problem from the inside. As Head of AI Centre of Excellence at SEFE Group, I built and led the company’s first AI CoE, engaging over 200 people across nine AI domains. I partnered with Microsoft on enterprise-scale Copilot deployment. I aligned governance with the EU AI Act before most organisations had started thinking about it. And through all of that, one thing became clear: the biggest risk to AI adoption was not the technology. It was the gap between what leadership teams were being sold and what they actually understood about the decisions they were making.
Regulated industries face a particular version of this problem. Vendors arrive with confident promises. Internal teams push for speed. Boards hear about competitive pressure and feel they need to act. But nobody in the room is asking the hard questions about accountability, about what happens when an AI system makes a decision that a regulator wants explained, or when a customer is harmed by an automated process that nobody fully owns. The governance conversation either happens too late or gets delegated to compliance teams who lack the commercial context to make it useful.
My background gave me a perspective that sits between these worlds. I studied Physics at the University of Manchester, spent years in enterprise technology delivery, and led a major programme replacing legacy billing systems at SEFE Energy. That combination of scientific rigour, hands-on technical experience, and commercial programme leadership means I understand both the engineering reality and the boardroom reality. I have sat in both rooms, and I know how rarely they speak the same language.
AI governance is a leadership challenge, not a technology challenge. The organisations that get this right will be the ones whose leaders ask the right questions before committing resources, whose boards understand what they are accountable for, and whose governance frameworks are built around commercial reality rather than abstract principles. That is what Partner in the Loop exists to support: helping senior leaders make defensible AI decisions with clarity about the trade-offs involved.
We work independently because independence is the point. No vendor partnerships, no referral fees, no technology preferences. When we assess an AI decision, the only interest we represent is yours. That is not a marketing position. It is the structural foundation of everything we do.
Career Timeline
A path from physics to AI governance, built through hands-on delivery at every stage.
Foundation
BSc Physics, University of Manchester
Analytical rigour and first-principles thinking applied to every problem since. Physics teaches you to question assumptions and follow evidence.
Early Career
Enterprise Technology Delivery
Years spent building, deploying, and supporting enterprise systems. Learned how technology decisions play out in practice, not just on architecture diagrams.
Programme Leadership
Enterprise Programme Delivery, SEFE Energy
Led large-scale programmes replacing legacy billing systems and introducing enterprise integration. Delivered against tight commercial timelines with real accountability for outcomes.
AI Leadership
Head of AI Centre of Excellence, SEFE Group
Built the company's first AI CoE from scratch. Engaged 200+ people across 9 AI domains. Partnered with Microsoft on enterprise Copilot rollout. Aligned governance with EU AI Act requirements.
Current
Partner in the Loop Founded
Independent AI governance advisory for CEOs and boards. Applying everything learned from the inside to help leaders make defensible decisions from the outside.
Structurally Independent
Independence is not a value statement. It is an operational structure that removes conflicts of interest from every recommendation.
No Vendor Partnerships
No commercial relationships with technology vendors. Recommendations are based on your needs, not referral agreements.
No Referral Fees
No commissions from implementation partners, software vendors, or service providers. If we recommend a tool, it is because it fits your situation.
No Tech Stack Bias
No preferred platforms, no certifications that create loyalty to a particular ecosystem. Assessment is platform-agnostic by design.
No Retainer Dependency
Engagements are scoped to specific decisions. The goal is to build your internal capability, not create long-term advisory dependency.
What This Is and What This Is Not
What This Is
- ✓ Independent perspective on AI decisions with governance implications
- ✓ Structured assessment of commercial trade-offs and enterprise risk
- ✓ Board-ready analysis that speaks the language of accountability
- ✓ Honest evaluation, including "do not proceed" when appropriate
- ✓ A founder who has done this work from the inside at enterprise scale
What This Is Not
- × A consultancy that sends junior staff after the sales meeting
- × An implementation shop looking for the next project
- × A vendor reseller with "independent" branding
- × A compliance tick-box service disconnected from commercial reality
- × Anyone who will tell you what you want to hear
Assess. Deploy. Improve.
A methodology built for decisions that need governance oversight, not speed-to-market enthusiasm.
Assess
Understand the decision context before recommending anything.
- Map the AI opportunity against governance requirements
- Identify regulatory, reputational, and operational risks
- Evaluate vendor claims against evidence
- Produce a written assessment with clear recommendations
Deploy
Support the decision through implementation with governance guardrails.
- Define governance checkpoints for the implementation plan
- Prepare board briefing materials at each milestone
- Provide independent oversight as decisions are executed
- Flag emerging risks before they become incidents
Improve
Review outcomes and strengthen governance for the next decision.
- Conduct post-implementation governance review
- Update risk assessments based on real-world outcomes
- Build internal capability for ongoing governance
- Prepare the organisation for the next AI decision
Start With an Assessment
If you are facing an AI decision that requires governance clarity, an initial assessment will tell you where you stand and what to do next.
Request an assessment
No obligation. Straightforward conversation about your situation.