Ethics & Explainability addresses the moral principles guiding AI use and the ability to explain how AI systems make decisions. Ethical considerations include fairness (avoiding discrimination), privacy (protecting personal information), accountability (who is responsible when things go wrong), and broader societal impacts. Explainability means being able to describe, in terms stakeholders can understand, why an AI system produced a particular output or decision. This includes technical explainability for specialists and accessible explanations for affected individuals, along with maintaining decision logs for audit purposes.
Many AI systems—particularly modern machine learning models—operate as “black boxes” where even their creators cannot fully explain individual decisions. This creates challenges when decisions affect people’s lives, when legal or ethical questions arise, or when building trust with stakeholders. This dimension evaluates whether your organisation has established ethical principles, explainability standards, and mechanisms to communicate AI decision-making to affected individuals.
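To make the idea of technical explainability concrete, the minimal sketch below uses scikit-learn's permutation importance to rank which input features most influence a model's predictions and prints them as a plain-language summary. The loan-approval scenario, feature names, and wording are illustrative assumptions, not part of this framework; the same pattern applies to whatever estimator and explanation method your organisation standardises on.

```python
# Sketch: ranking feature influence with permutation importance and turning it
# into a readable explanation. Model and feature names are illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Hypothetical training data: four named features for a loan-approval model.
feature_names = ["income", "credit_history_years", "existing_debt", "age"]
X, y = make_classification(n_samples=500, n_features=4, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Rank features by mean importance and emit an accessible summary.
ranked = sorted(
    zip(feature_names, result.importances_mean),
    key=lambda pair: pair[1],
    reverse=True,
)
print("Factors that most influenced the model's decisions (global view):")
for name, score in ranked:
    print(f"  - {name}: importance {score:.3f}")
```

Note that a global ranking like this mainly serves specialists validating model behaviour; explaining an individual decision to an affected person typically requires per-prediction methods (such as SHAP or LIME) whose output is then translated into non-technical language.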
Why It Matters
Opaque AI systems erode trust, can harm individuals through decisions they cannot understand or challenge, and expose organisations to legal and reputational risk when those decisions are questioned.
Industry Frameworks for Responsible AI
Several industry frameworks provide structured approaches to AI ethics. One prominent example is Microsoft’s Responsible AI Standard, which defines six core principles:
- Fairness — AI systems should treat all people fairly and avoid discrimination
- Reliability & Safety — AI systems should perform reliably and safely under diverse conditions, including adversarial threats
- Privacy & Security — AI systems should protect sensitive data and respect privacy
- Inclusiveness — AI systems should empower and engage people, serving society broadly
- Transparency — People should understand how AI systems work and make decisions
- Accountability — There should be clear responsibility when AI systems impact people’s lives
These principles align closely with this dimension’s focus on ethical AI use and explainability. Organisations can adopt established frameworks like Microsoft’s Responsible AI Standard as a starting point for developing their own ethical guidelines.
Further Reading: Responsible AI Frameworks
Microsoft’s white paper “Colocation: Build a Scalable Cloud Foundation for AI” (page 8) discusses their Responsible AI Standard and human-centred AI approaches.
While such frameworks provide valuable ethical guidance, organisations must still implement concrete mechanisms for explainability, decision logging, stakeholder communication, and ethical oversight—the practical capabilities assessed in this dimension.
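One such mechanism is a decision log: a structured, append-only record of every consequential AI decision that captures the inputs, output, model version, and rationale so auditors and affected individuals can later ask why a result was produced. The sketch below illustrates the idea only; the field names, JSON Lines storage format, and `log_decision` helper are assumptions, not a prescribed standard.

```python
# Sketch of rationale logging for audit purposes. Field names, the JSON Lines
# format, and the log_decision() helper are illustrative assumptions.
import json
import uuid
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("ai_decision_log.jsonl")  # append-only audit trail

def log_decision(model_id: str, model_version: str, inputs: dict,
                 output, rationale: str, human_reviewer: str | None = None) -> str:
    """Append one auditable record per AI decision and return its ID."""
    record = {
        "decision_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "inputs": inputs,                  # what the model saw
        "output": output,                  # what it decided
        "rationale": rationale,            # explanation in stakeholder-readable terms
        "human_reviewer": human_reviewer,  # accountability: who signed off, if anyone
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record["decision_id"]

# Hypothetical usage: record a credit decision together with its explanation.
decision_id = log_decision(
    model_id="credit-risk-scorer",
    model_version="2.3.1",
    inputs={"income": 52000, "existing_debt": 18000},
    output="declined",
    rationale="Debt-to-income ratio above policy threshold of 0.30.",
    human_reviewer="jane.doe",
)
print(f"Logged decision {decision_id}")
```

Keeping each record immutable and including the model version is what makes the log usable in an audit: it lets you reconstruct which system produced a given decision and on what basis, long after the model has been retrained.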
Maturity Levels
| Basic | Standard | Advanced | Leading |
|---|---|---|---|
| Ethics ignored; no transparency or explainability mechanisms. | Basic transparency (e.g., disclaimers that AI is in use). | Explainability standards and decision logs maintained for audits. | Ethics embedded in organisational culture, with continuous stakeholder engagement and transparency reporting. |
📥 Related Resources & Templates
Downloadable templates, examples, and frameworks to help you implement this dimension.
AI Ethics Policy
Policy template covering ethical AI principles, responsible AI practices, and organisational commitments.
Explainability Checklist
Checklist for assessing AI model explainability requirements and implementation approaches.
Explainability Standards
Standards and guidelines for implementing explainability features in AI systems, including rationale logging.