Submitted to: National Institute of Standards and Technology (NIST), AI Risk Management Framework programme
Submission date: March 2026
Status: Submitted — frozen document; do not edit.
License: CC-BY-SA-4.0 (per AEGIS public-content convention)

What it is

A formal position statement responding to NIST's AI Risk Management Framework (AI RMF), arguing that the framework's risk-management goals require runtime governance at the architectural layer in addition to the model-layer practices it currently emphasizes. The submission introduces the AEGIS architecture as a reference for implementing that runtime governance, and grounds the argument in the empirical evidence produced by the AEGIS edge laboratory.

Position summary

Status

The submission is on file with NIST and is one of two NIST-track engagements (the other is the NCCoE response on AI agent identity and authorization). Both are formally submitted, peer-validated, and treated as frozen documents in the AEGIS Initiative repositories; substantive edits would require a version bump and re-submission rather than amendment in place.

Canonical text

The authoritative submission lives at aegis-governance/docs/position-papers/nist/ in both Markdown and PDF form. The PDF is the version of record for citation; the Markdown is reference-only.

Relationship to other AEGIS work

NIST policy context

Shapira et al.'s Agents of Chaos (2026) explicitly names NIST's February 2026 AI Agent Standards Initiative (§16.5, p. 43) as the policy context for empirical agent-failure work. The AEGIS submission predates that initiative; once complete, the Round 2 replication will be the artifact most relevant to the initiative's empirical-evidence needs, and a separate follow-up note to the NIST AI RMF programme is planned at that point.