US · Vision

What GeraWitness Means for US Tech by 2030 — Human-in-the-Loop Agent Safety for America

Published 21 April 2026 · 7 min read

Coming soon — join the waitlist

Quick answer. GeraWitness is a protocol and service for human-in-the-loop review of high-risk AI agent actions. In the US, the design pressure comes from NIST AI RMF 1.0 (voluntary but widely adopted), the Colorado AI Act of 2024 (high-risk AI duties, effective 2026), NYC Local Law 144 (AEDT bias audits), California ADMT rulemaking under the CPRA, and FTC deceptive-practices enforcement. Status: the spec is in public drafting, with reference deployments targeted through 2027.

Why US AI needs human witnesses

As US organisations move from “AI recommends” to “AI acts,” regulators and the public need confidence that consequential actions are reviewable. Automated eligibility denials, automated clinical triage, automated hiring decisions, automated fraud flags: each has real consequences, and each sits squarely within current US regulatory attention. GeraWitness is the protocol layer that defines when a human must see an action, how fast, with what context, and with what accountability; the sketch below illustrates those four dimensions.
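
Purely as an illustration (the type and field names below, like ReviewRequirement and slaSeconds, are assumptions for this post, not the published spec), those four dimensions could be captured in a schema along these lines:

```typescript
// Hypothetical sketch only; not from a published GeraWitness spec.

// When must a human see the action?
type ReviewTrigger = "never" | "sampled-post-hoc" | "pre-execution";

// A review requirement attached to a single agent action.
interface ReviewRequirement {
  trigger: ReviewTrigger;          // when a human must see it
  slaSeconds: number | null;       // how fast (null = no real-time deadline)
  context: {                       // what the reviewer sees
    actionDescription: string;     // plain-language summary of the action
    affectedParty: string;         // who the action impacts
    modelRationale: string;        // the agent's stated reasoning
  };
  reviewerOfRecord: string | null; // accountability: who signed off
}
```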

US regulatory grounding

  • NIST AI Risk Management Framework 1.0: “Manage” function explicitly anticipates human-in-the-loop for high-risk use cases. GeraWitness packages evidence automatically.
  • Colorado AI Act (SB 24-205, effective February 2026): duty to avoid algorithmic discrimination, risk management programme, impact assessments. Human oversight is a natural control.
  • New York City Local Law 144 (AEDT): bias audit is the baseline; human-review escalation extends it.
  • California CPRA ADMT rulemaking: consumer right to opt out of automated decisions for specific use cases; GeraWitness routes opted-out decisions into a human-review queue.
  • Illinois Artificial Intelligence Video Interview Act: notice and consent before AI analysis of video interviews.
  • Sectoral rules: HIPAA, GLBA, FCRA, ECOA all already expect humans in certain loops.
  • FTC Section 5: deceptive “AI reviewed by humans” claims are enforceable; GeraWitness generates auditable evidence (see the sketch after this list).
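
To make “auditable evidence” concrete, here is a hedged sketch of what an exported evidence record might contain. The WitnessEvidenceRecord type, its field names, and the withinSla helper are all assumptions for illustration, not a published GeraWitness schema.

```typescript
// Hypothetical evidence record an audit export might produce; nothing
// here comes from a published GeraWitness schema.
interface WitnessEvidenceRecord {
  actionId: string;           // stable ID of the agent action under review
  riskTier: 0 | 1 | 2 | 3;    // tier assigned at the point of action
  reviewOutcome: "approved" | "rejected" | "escalated" | "sampled-ok";
  reviewerId: string | null;  // null if no human ever saw the action
  requestedAt: string;        // ISO 8601: when review was requested
  decidedAt: string | null;   // ISO 8601: when the reviewer decided
  frameworkTags: string[];    // e.g. ["NIST-AI-RMF:Manage", "CO-SB24-205"]
  contextHash: string;        // hash of the context shown to the reviewer
}

// An auditor (or the FTC) can check that a human actually decided within
// the promised SLA, which is the substance of an "AI reviewed by humans"
// claim.
function withinSla(rec: WitnessEvidenceRecord, slaSeconds: number): boolean {
  if (rec.decidedAt === null) return false;
  const elapsedSeconds =
    (Date.parse(rec.decidedAt) - Date.parse(rec.requestedAt)) / 1000;
  return elapsedSeconds <= slaSeconds;
}
```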

Protocol sketch

Each agent action is assigned a risk tier at the point of execution:

  • Tier 0: low-consequence, fully automatic
  • Tier 1: executed and logged, with post-hoc sampled human review
  • Tier 2: near-real-time human review under an SLA
  • Tier 3: the action is blocked until a human explicitly approves it

Reviewer accountability is logged, and a sample of Tier 0 decisions is audited at a defined rate to catch drift.
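
As a minimal sketch of how that routing could work (the Dispatcher interface, sampling rates, and SLA values below are illustrative assumptions, not the spec):

```typescript
// Illustrative tier routing; names and thresholds are assumptions.
type Tier = 0 | 1 | 2 | 3;

interface AgentAction {
  id: string;
  description: string;
  tier: Tier;
}

// The host platform supplies these hooks; the protocol only defines
// when each one must fire.
interface Dispatcher {
  execute(action: AgentAction): void;
  log(action: AgentAction): void;
  sampleForPostHocReview(action: AgentAction, rate: number): void;
  queueForHumanReview(action: AgentAction, slaSeconds: number): void;
  blockUntilApproved(action: AgentAction): Promise<boolean>;
}

async function route(action: AgentAction, d: Dispatcher): Promise<void> {
  switch (action.tier) {
    case 0: // fully automatic; a small sample is still audited for drift
      d.execute(action);
      d.sampleForPostHocReview(action, 0.01); // 1% audit rate (illustrative)
      break;
    case 1: // executes immediately; logged; sampled post-hoc human review
      d.execute(action);
      d.log(action);
      d.sampleForPostHocReview(action, 0.1); // 10% sample (illustrative)
      break;
    case 2: // executes, but a human must review within a near-real-time SLA
      d.execute(action);
      d.log(action);
      d.queueForHumanReview(action, 15 * 60); // 15-minute SLA (illustrative)
      break;
    case 3: // blocked until a human explicitly approves
      if (await d.blockUntilApproved(action)) {
        d.execute(action);
      }
      d.log(action); // the approval or rejection is logged either way
      break;
  }
}
```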

US comparisons

  • Anthropic Constitutional AI + Safety team: in-house model-level safety
  • OpenAI moderation / content-policy teams: in-house content review
  • Scale AI and Surge AI (RLHF / expert human-review services): commercial human review, but tied to each vendor's stack
  • Trust and Safety teams inside platforms: functional but bespoke

GeraWitness stands out as an open, cross-platform protocol with reviewer accountability and external auditability.

Roadmap

  • 2026: spec v0.1, reference implementation for the GeraClinic US triage flow
  • 2027: Colorado AI Act-compliant deployments across two US Gera verticals
  • 2028–2030: third-party certification programme

Cross-links

GeraNexus, GeraCompliance, GeraMind.

US sources

  • NIST AI Risk Management Framework 1.0 (2023)
  • Colorado General Assembly — SB 24-205
  • NYC DCWP — AEDT implementation
  • California Privacy Protection Agency — ADMT rulemaking
  • FTC — deceptive AI claims enforcement actions

Help design agent safety that scales.

Join the waitlist