The Architecture

Three-Layer Governance & Delivery System

NIST AI RMF sets the organisational standard. Most organisations stop there. This architecture provides the two layers beneath it that translate governance into execution — and the board layer above it that directs all three.

Board-Level Diagnostic™ · Directing Layer
CAA™ · Runs Across All Layers

Layer 01 · Enterprise · NIST AI RMF · Governs the Organisation
Alignment ↓ · Assurance ↑
Layer 02 · Initiative · 7P Compass™ · Governs the Initiative
Execution ↓ · Evidence ↑
Layer 03 · Delivery · 7P-DOM · Governs the Delivery

3 · Governance layers, from board to delivery
7 · Compass pillars, applied at initiative level
6 · CAA™ lifecycle stages, for every AI agent
5 · Board decisions that cannot be delegated
Layer 01 · Enterprise Governance · The Foundation

NIST AI RMF — The Organisational Standard

The NIST AI Risk Management Framework is the recognised enterprise governance standard. It sets the organisational boundaries within which all AI activity must operate.

Most organisations implement NIST AI RMF and consider their governance obligations met. What NIST does not provide is the initiative-level decision framework that tells leaders how to deploy AI within those boundaries — or the delivery model that tells teams how to execute.

The Gap This Architecture Closes

NIST governs the organisation. The 7P Compass™ governs the initiative. The 7P-DOM governs the delivery. Alignment flows down through each layer. Assurance and evidence flow back up.

Policies · Acceptable use, ethical boundaries, compliance requirements
Roles · Who is accountable for AI decisions at organisational level
Risk Appetite · What categories of risk are acceptable, and at what level
Culture · Organisational norms around AI adoption and human oversight
Oversight · Governance bodies, review processes, escalation paths
Layer 02 · Initiative Governance · The Decision Framework

7P Compass™ — Seven Pillars. One Framework.

Translates enterprise governance into initiative-specific decisions across seven interdependent dimensions — moving organisations from accidental Frankenstein stitching to intentional orchestration.

Purpose · The Anchor
Identify the Friction Tax. What genuine value does this unlock?

🧭 People · The North Star
SMEs as orchestrators. From users to operators.

🏗️ Platforms · The Enablers
If SMEs think about the tech, the platform failed.

🧵 Pipelines · The Sutures
Messy data into decision-ready intelligence.

📦 Products · The Outcomes
Solve a specific person's specific problem.

🛡️ Principles · The Outer Ring
Governance as a trust engine. Built in.

💓 Performance · The Pulse
Measure AI, humans, and outcomes together.
Operational Engine · Runs Across All Layers

Continuous Agentic Assurance™ (CAA™)

Day 2 is where most AI systems fail. CAA™ is the governance engine that runs vertically across all three layers — governing every AI agent from Commission through Retirement.

"No agent without an owner. No decision without a trail. No deployment without a governed lifecycle."

01 · Commission
Define the agent's role, scope, success criteria, and named human owner before deployment begins.

02 · Onboard
Establish performance baselines. Validate outputs against ground truth. Set a probation period.

03 · Supervise
Continuous oversight by the named human owner — the agent's line manager. Real-time monitoring.

04 · Develop
Expand capability and autonomy as trust is earned through demonstrated, verified performance.

05 · Review
Periodic performance assessment. Drift detection. Regression testing. Override rate analysis.

06 · Retire
Governed offboarding when no longer fit for purpose. Documented. Accountable. Evidence retained.
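The six stages read as a simple state machine. As an illustrative sketch only (the class and method names below are invented, not part of the CAA™ specification, and a real Review stage could loop back to Supervise rather than proceed straight to Retire), the lifecycle and its two guarantees, a named owner and a decision trail, might look like:

```python
from enum import Enum


class Stage(Enum):
    """The six CAA lifecycle stages, in order."""
    COMMISSION = 1
    ONBOARD = 2
    SUPERVISE = 3
    DEVELOP = 4
    REVIEW = 5
    RETIRE = 6


class GovernedAgent:
    """Hypothetical agent record: no agent without an owner,
    no stage transition without recorded evidence."""

    def __init__(self, name, owner):
        if not owner:
            raise ValueError("No agent without an owner")
        self.name = name
        self.owner = owner          # the named human line manager
        self.stage = Stage.COMMISSION
        self.trail = []             # "No decision without a trail"

    def advance(self, evidence):
        """Move to the next stage only when evidence is recorded.
        Simplified to a linear progression for brevity."""
        if self.stage is Stage.RETIRE:
            raise RuntimeError(f"{self.name} is retired")
        if not evidence:
            raise ValueError("Stage transitions require recorded evidence")
        self.trail.append(f"{self.stage.name}: {evidence}")
        self.stage = Stage(self.stage.value + 1)
        return self.stage
```

A retired agent refuses further transitions, and the trail retains one evidence entry per stage it passed through, mirroring "Documented. Accountable. Evidence retained."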

Why CAA™ Runs Across All Three Layers

An agent deployed under NIST AI RMF governance (Layer 1), with a 7P Compass™ initiative framework (Layer 2), still needs day-to-day operational governance (CAA™) and a delivery accountability structure (Layer 3). CAA™ is the connective tissue that keeps all three layers coherent over time.

The Digital Workforce Analogy

No responsible organisation lets human employees operate without performance standards, named line managers, and a governed exit process. Yet most organisations are doing exactly this with their AI agents — deploying and forgetting. CAA™ applies workforce governance logic to the digital workforce.

In Regulated Industries

An agent making decisions that affect customers, patients, or citizens without a governance lifecycle is not just an operational risk. In regulated industries, it is a potential fiduciary exposure that boards have not yet addressed.

Layer 03 · Delivery Governance · Forthcoming 2026

7P Delivery Operating Model (7P-DOM)

Operationalises the 7P Compass™ decisions into the roles, rituals, governance gates, and artefacts that delivery teams need to execute AI transformation at enterprise scale.

Covers: Principles
AI Governance Lead
  • Ethics & legal compliance
  • Proportionality assessment
  • Safeguard design
  • Regulatory alignment
Covers: Purpose + People
AI Domain Lead
  • Problem definition
  • Domain correctness
  • Human oversight design
  • Suitability assessment
Covers: Platforms + Pipelines
AI Systems Architect
  • Modularity & reversibility
  • Lineage map
  • Drift triggers
  • Evaluation framework
Covers: Products + Performance
AI Delivery Lead
  • Workflow integration
  • Value measurement
  • Safety accountability
  • Continuous improvement

Full 7P-DOM specification — roles, rituals, governance gates, and artefact library — publishing 2026

DM "7P" to be notified on release

Ready to Build the Right Architecture?

Start with the Board Diagnostic — the five decisions that must be made before anything else.