GeraWitness in the UK 2026 — Human Oversight of High-Risk AI in Britain
Published 21 April 2026 · 9 min read
Quick answer. GeraWitness is a human-in-the-loop review layer for high-risk AI decisions. For the UK it sits at the intersection of the EU AI Act Article 14 spillover (which applies to UK vendors placing systems on the EU market), the ICO's UK GDPR Article 22 guidance, and the Online Safety Act duty of care for user-to-user platforms. Not shipping yet. This post sets out what British organisations should think about when designing defensible human oversight in 2026.
The UK's “principles-based” AI posture in plain English
The UK Government's White Paper (“A pro-innovation approach to AI regulation”) asked existing regulators (ICO, FCA, CMA, Ofcom, MHRA, HSE, CQC) to apply their existing tools to AI rather than creating a new AI regulator immediately. In practice that means British organisations working with AI must read the output of at least seven regulators, not one. GeraWitness is a workflow designed to satisfy the strictest of those overlapping duties.
The UK regulatory stack that shapes GeraWitness
- UK GDPR Article 22 — restricts solely automated decisions with legal or similarly significant effects. Human review with meaningful intervention power is the safe path.
- EU AI Act Article 14 (spillover) — UK vendors serving EU customers must implement human oversight on high-risk systems. Oversight evidence is contemporaneous, not after-the-fact.
- ICO Children's Code — under-18 contexts demand higher oversight thresholds.
- Online Safety Act 2023 — duty of care requires human review of edge-case takedown and moderation.
- FCA Consumer Duty — where AI drives financial outcomes, the firm must evidence fair value and support. Witness logs contribute.
- AI Security Institute (AISI, formerly the AI Safety Institute) — the UK's evaluations body. The GeraWitness architecture aligns with its evaluation expectations: observability, auditability, reproducibility.
What GeraWitness actually does
- Defines risk tiers per AI workflow (low, medium, high)
- Routes medium/high decisions into a review queue with SLA
- Employs trained reviewers with declared expertise and accountability
- Records the reviewer action, reasoning and time-to-decision
- Feeds aggregate outcomes back into model-performance monitoring
- Exports UK-regulator-ready evidence packs (ICO, FCA, CQC, Ofcom)
UK use cases
- GeraClinic — AI-assisted symptom triage reviewed by a GMC-registered clinician above a clinical-risk threshold
- GeraSure — loan-adjacent insurance decisions reviewed for Consumer Duty fairness
- GeraCompliance — DSAR decisions reviewed before release
- GeraJobs — ATS shortlisting reviewed for Equality Act fairness
- GeraMarket — counterfeit-listing takedowns reviewed for Consumer Rights Act accuracy
UK pricing (when it ships)
- Starter (under 1,000 decisions/month): £99/month
- Growth (up to 100k decisions/month): £499/month
- Enterprise (high-risk AI, sector-regulated): bespoke, including onboarded reviewer teams
Who British organisations will compare us with
- In-house trust & safety teams — right for the biggest platforms; cost-prohibitive for mid-sized UK firms.
- Scale Surge / Invisible / outsourced RLHF vendors — data-labelling first, oversight second.
- Anthropic constitutional AI / OpenAI moderation APIs — content classifiers, not human review.
- GeraWitness — UK-focused, regulator-ready evidence packs, tiered review SLAs.
What GeraWitness is not doing
- Not replacing your Data Protection Officer or Nominated Officer
- Not making regulatory determinations on your behalf
- Not pretending human review is a silver bullet — model quality still matters
Related UK reading
- GeraNexus UK
- GeraMind UK
- GeraCompliance — UK GDPR Article 22 and EU AI Act workflows
Help design agent safety that scales.
Join the waitlist