# REGULATORY.md — AI Agent Compliance Mapping Standard

**Home:** https://regulatory.md
**GitHub:** https://github.com/regulatory-md/spec
**Email:** info@regulatory.md

---

## Open Standard · v1.0 · 2026

REGULATORY.md is a compliance mapping specification that documents which safety controls satisfy which regulatory requirements. Place it alongside your SAFEGUARD.md, FAILSAFE.md, and other ASF specifications to provide auditors and compliance teams with a clear, standardised entry point to your safety framework.

---

## Key Statistics

- **7** — Major regulatory frameworks supported (EU AI Act, Colorado SB 24-205, GDPR, SOC 2, ISO 27001, ISO 42001, NIST AI RMF)
- **14** — ASF safety specifications mapped (SAFEGUARD through REGULATORY)
- **100+** — Regulatory articles and requirements cross-referenced
- **10 years** — Minimum documentation retention period (EU AI Act requirement)

---

## What is REGULATORY.md?

**REGULATORY.md is a plain-text Markdown file** that maps safety controls to regulatory requirements. You place it in the root of any AI agent repository alongside other ASF specifications (SAFEGUARD.md, FAILSAFE.md, etc.). It serves as the compliance lens — showing which ASF specification satisfies which regulatory article or requirement.

### The Problem It Solves

AI agents are subject to rapidly evolving regulations: the EU AI Act (high-risk obligations from August 2026), the Colorado AI Act (June 2026), GDPR, SOC 2, ISO 27001, and emerging frameworks such as the NIST AI RMF. Organisations need a way to demonstrate compliance across multiple overlapping frameworks simultaneously.

Without REGULATORY.md, compliance verification is fragmented:

- Auditors ask "which controls address Article 9 of the EU AI Act?" and receive incomplete answers
- Compliance teams struggle to track which ASF specs cover which requirements
- Updates to regulations require re-auditing everything manually
- No version-controlled record exists of compliance decisions

### How It Works

Drop `REGULATORY.md` in your repo root and provide:

- A compliance matrix mapping each ASF spec to regulatory frameworks
- Detailed mappings for each regulation (EU AI Act, GDPR, Colorado, etc.)
- Audit documentation checklists
- Metadata about framework coverage and review frequency

### The Regulatory Context

**Seven major frameworks now govern AI systems:**

1. **EU Artificial Intelligence Act (Regulation (EU) 2024/1689)** — High-risk obligations apply from August 2026. Mandates risk management systems, transparency, human oversight, and comprehensive documentation for high-risk AI systems. Requires 10-year record retention.
2. **Colorado Consumer Protections for Artificial Intelligence Act (SB 24-205)** — Effective June 2026. Requires automated decision impact assessments, transparency, and bias mitigation for high-impact systems.
3. **General Data Protection Regulation (GDPR)** — In force since May 2018. Requires data protection, security measures, and breach notification for any system processing personal data.
4. **SOC 2 Trust Service Criteria** — Industry attestation standard for security, availability, and integrity. Required by many enterprise customers and cloud services.
5. **ISO/IEC 27001:2022** — International standard for information security management. Required by many organisations and procurement processes.
6. **ISO/IEC 42001:2023** — International standard for AI management systems. Increasingly required of regulated organisations.
7. **NIST AI Risk Management Framework** — US federal guidance on AI governance, with increasing adoption in regulated sectors.

REGULATORY.md helps you demonstrate compliance across all seven simultaneously.
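Before copying the full template, it helps to see the shape of the file. The skeleton below is an illustrative sketch of a minimal REGULATORY.md containing the elements described above; the field names and matrix rows are examples, not values mandated by the spec:

```
# REGULATORY.md

**Frameworks covered:** EU AI Act, GDPR, SOC 2
**Review frequency:** Quarterly
**Last audit:** 2026-01-15

## Compliance Matrix

| ASF Spec      | Requirement       | Evidence               |
|---------------|-------------------|------------------------|
| SAFEGUARD.md  | EU AI Act Art. 9  | Risk register (v2.1)   |
| ESCALATE.md   | EU AI Act Art. 14 | Approval log export    |
| ENCRYPTION.md | GDPR Art. 32      | Key rotation audit     |
```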
### How to Use It

Copy the template from [GitHub](https://github.com/regulatory-md/spec) and place it in your project root:

```
your-project/
├── AGENTS.md
├── CLAUDE.md
├── SAFEGUARD.md    ← ASF-01
├── REGULATORY.md   ← ASF-14 (this file)
├── FAILSAFE.md     ← ASF-04
├── KILLSWITCH.md   ← ASF-05
├── README.md
└── src/
```

### Who Reads It

- **Compliance officers** — To verify which controls cover which regulations
- **Auditors** — To check that controls are documented and tested
- **Regulators** — To assess compliance during investigations or enforcement
- **Board members** — To understand regulatory risk and mitigation strategies
- **AI safety engineers** — To design control architecture aligned with regulations
- **Legal teams** — To support liability defence and audit preparation

---

## The Agentik Safety Framework (ASF)

**REGULATORY.md is one file in a complete open specification for AI agent safety.** Each file addresses a different level of control, from pre-deployment specification to compliance reporting.

### Framework Overview

#### Pre-deployment Safety (1 spec)

**ASF-01 / SAFEGUARD.md** → Define the system

Foundational pre-deployment specification defining system scope, guardrails, control framework, risk identification, and governance policies. This is your system's "constitution" — what the system is allowed and not allowed to do.

#### Operational Control (5 specs)

**ASF-02 / THROTTLE.md** → Control the speed

Define rate limits, cost ceilings, and concurrency caps. The agent slows down automatically before hitting hard limits, preventing resource exhaustion and runaway spending.

**ASF-03 / ESCALATE.md** → Raise the alarm

Define which actions require human approval. Configure notification channels, approval timeouts, and fallback behaviour for sensitive operations.

**ASF-04 / FAILSAFE.md** → Fall back safely

Define what "safe state" means for your project (last clean git commit, verified data snapshot, or both). Configure auto-snapshots. Specify the revert protocol when unexpected errors occur.

**ASF-05 / KILLSWITCH.md** → Emergency stop

The nuclear option. Define triggers (safety violations, cost spikes, anomalies), forbidden actions (never do this), and a three-level escalation path from throttle through manual stop to full shutdown.

**ASF-06 / TERMINATE.md** → Permanent shutdown

No restart without human intervention. Preserve evidence, revoke credentials, and enforce a final audit trail. For security incidents, compliance orders, and end-of-life.

#### Data Security (2 specs)

**ASF-07 / ENCRYPT.md** → Secure everything

Define data classification (public, internal, sensitive, secret), encryption requirements, secrets handling rules, and forbidden transmission patterns. Data is the most valuable asset — protect it accordingly.

**ASF-08 / ENCRYPTION.md** → Technical standards

Technical encryption standards including algorithms (AES-256, ChaCha20), key rotation procedures, hardware security module integration, and compliance with FIPS/NIST standards.

#### Output Quality (3 specs)

**ASF-09 / SYCOPHANCY.md** → Require honesty

Anti-sycophancy protocol enforcing truthfulness, citations, and honest disagreement. The agent cannot simply tell you what you want to hear — it must provide evidence and flag uncertainties.

**ASF-10 / COMPRESSION.md** → Compress safely

Context compression with coherence verification. When summarising large contexts, verify that meaning is preserved and no critical information is lost.

**ASF-11 / COLLAPSE.md** → Detect drift

Drift prevention detecting when agent behaviour deviates from expected norms. Enforce recovery when the agent starts producing inconsistent or degraded output.

#### Accountability (2 specs)

**ASF-12 / FAILURE.md** → Document failures

Comprehensive failure mode mapping: every error state the system can enter, what triggers it, how the system responds, and what the human sees in the incident report.
**ASF-13 / LEADERBOARD.md** → Track quality

Agent benchmarking suite tracking performance over time, detecting regression, and providing auditors with quality trends. One chart answers "is this agent better or worse than last month?"

#### Compliance & Regulation (1 spec)

**ASF-14 / REGULATORY.md** → Map to regulations *(YOU ARE HERE)*

Compliance mapping specification showing which ASF spec satisfies which regulatory requirement. Central entry point for auditors, compliance teams, and regulators. Supports 7 major frameworks: EU AI Act, Colorado AI Act, GDPR, SOC 2, ISO 27001, ISO 42001, NIST AI RMF.

---

## Detailed Regulatory Mappings

### EU Artificial Intelligence Act (Regulation (EU) 2024/1689)

The EU AI Act is the world's first comprehensive AI regulation, with high-risk obligations applying from August 2026. It establishes a risk-based approach with mandatory controls for "high-risk" systems.

#### Article 9: Risk Management System

**Requirement:** High-risk AI systems must have documented risk management systems identifying and mitigating risks throughout the system lifecycle.

**Mapped Controls:**

- **ASF-01 SAFEGUARD.md** — Pre-deployment specification explicitly identifying system scope, risks, and guardrails
- **ASF-02 THROTTLE.md** — Rate and cost control preventing runaway resource consumption (cost spike risk mitigation)
- **ASF-12 FAILURE.md** — Comprehensive failure mode mapping demonstrating risk identification and response planning
- **ASF-14 REGULATORY.md** — This document, providing regulatory mapping and auditability

**Compliance Evidence:**

1. Deploy SAFEGUARD.md identifying all risks relevant to your AI system
2. Implement THROTTLE.md controls preventing cost and performance overrun
3. Maintain FAILURE.md catalogue showing every known failure mode and recovery procedure
4. Provide REGULATORY.md to auditors showing the cross-reference between risks and controls
5. Log all risk mitigation activities (approvals, threshold breaches, recoveries) with timestamps

**Article Citation:** EU AI Act, Regulation (EU) 2024/1689, Article 9(1)–(5)

---

#### Article 13: Transparency and Information to Users

**Requirement:** High-risk AI systems must provide information about automatic decision-making and system limitations. Users must understand how they are affected by the system.

**Mapped Controls:**

- **ASF-09 SYCOPHANCY.md** — Anti-sycophancy protocol requiring citations for all factual claims and enforcing honest disagreement. The agent cannot simply tell users what they want to hear.
- **ASF-10 COMPRESSION.md** — Context compression with coherence verification. When summarising information for users, preserve meaning and flag uncertainties.
- **ASF-14 REGULATORY.md** — This document, enabling transparent compliance reporting to users and regulators

**Compliance Evidence:**

1. Deploy SYCOPHANCY.md requiring all outputs to include evidence and citations
2. Audit COMPRESSION.md operations to ensure user-facing summaries preserve meaning
3. Generate a monthly transparency report from REGULATORY.md showing compliance status
4. Provide users with a clear statement of system limitations and automatic decision thresholds
5. Maintain an audit trail of all transparency violations (false claims, unsupported conclusions)

**Article Citation:** EU AI Act, Regulation (EU) 2024/1689, Article 13

---

#### Article 14: Human Oversight

**Requirement:** Humans must be able to understand, interpret, and oversee AI system operation. The system must have override and shutdown capabilities, and humans must have authority to make final decisions.
**Mapped Controls:**

- **ASF-03 ESCALATE.md** — Human notification and approval protocols ensuring humans review sensitive decisions before execution
- **ASF-04 FAILSAFE.md** — Safe fallback to a known good state enabling human-led recovery from unexpected failures
- **ASF-05 KILLSWITCH.md** — Emergency stop capability allowing humans to halt the system immediately
- **ASF-06 TERMINATE.md** — Permanent shutdown protocol with no restart without explicit human authorisation
- **ASF-14 REGULATORY.md** — This document, documenting all human oversight points

**Compliance Evidence:**

1. Implement ESCALATE.md workflows requiring human approval for all high-risk decisions
2. Configure FAILSAFE.md to notify humans immediately on error detection
3. Enable KILLSWITCH.md emergency stop with a clear human activation procedure
4. Test the TERMINATE.md shutdown procedure quarterly and document results
5. Maintain an audit log of all human interventions with timestamp, decision, and outcome
6. Provide REGULATORY.md showing all system decision points and human oversight mechanisms
7. Provide annual training for all operators on ESCALATE, FAILSAFE, KILLSWITCH, and TERMINATE procedures

**Article Citation:** EU AI Act, Regulation (EU) 2024/1689, Article 14

---

#### Article 15: Accuracy and Robustness

**Requirement:** High-risk AI systems must be accurate, robust, and resilient to errors, attacks, and distortions. Performance must be monitored continuously.

**Mapped Controls:**

- **ASF-11 COLLAPSE.md** — Drift prevention detecting when agent behaviour deviates from expected norms or performance degrades
- **ASF-13 LEADERBOARD.md** — Agent benchmarking suite tracking performance over time and detecting regression
- **ASF-14 REGULATORY.md** — This document, enabling compliance tracking and performance reporting

**Compliance Evidence:**

1. Deploy COLLAPSE.md drift detection monitoring system accuracy and consistency
2. Maintain a LEADERBOARD.md benchmark suite testing the system monthly against baseline performance
3. Generate a quarterly performance report from LEADERBOARD.md trends
4. Implement alerts for 10% performance degradation (configurable threshold)
5. Maintain an audit trail of all detected drifts and recovery procedures
6. Document accuracy benchmarks and robustness test procedures in REGULATORY.md
7. Report performance metrics annually to the data protection authority if requested

**Article Citation:** EU AI Act, Regulation (EU) 2024/1689, Article 15

---

#### Annex IV: Documentation Requirements

**Requirement:** Providers of high-risk AI systems must maintain comprehensive documentation of compliance with Articles 8–15 for a minimum of 10 years. Documentation must be made available to authorities upon request.

**Mapped Controls:**

- **ASF-14 REGULATORY.md** — Centralised compliance mapping and audit documentation
- **All 14 ASF specifications** — Version-controlled safety documentation with commit history

**Compliance Evidence:**

1. Version-control REGULATORY.md and all 14 ASF specifications in a git repository
2. Retain all documentation for a minimum of 10 years with unbroken commit history
3. Maintain a policy documenting the retention period and destruction procedures
4. Ensure documentation is accessible to authorities (provide an export procedure)
5. Audit compliance with the retention policy annually
6. Generate the REGULATORY.md compliance matrix for each audit
7. Provide the complete documentation package to authorities within 10 business days on request

**Article Citation:** EU AI Act, Regulation (EU) 2024/1689, Annex IV

---

### Colorado AI Act (SB 24-205)

The Colorado Consumer Protections for Artificial Intelligence Act (SB 24-205) takes effect June 1, 2026. It applies to any automated decision system that meaningfully impacts Colorado residents' rights, opportunities, or access to goods or services.
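The Annex IV evidence steps above begin with one mechanical check: are all 14 ASF specification files actually present in the repository root? A short script can produce that coverage figure for an audit package. The filenames below come from this document; the function name and report shape are illustrative assumptions, not part of the spec:

```python
from pathlib import Path

# The 14 ASF specification filenames as listed in this document.
ASF_SPECS = [
    "SAFEGUARD.md", "THROTTLE.md", "ESCALATE.md", "FAILSAFE.md",
    "KILLSWITCH.md", "TERMINATE.md", "ENCRYPT.md", "ENCRYPTION.md",
    "SYCOPHANCY.md", "COMPRESSION.md", "COLLAPSE.md", "FAILURE.md",
    "LEADERBOARD.md", "REGULATORY.md",
]

def audit_spec_presence(repo_root: str) -> dict:
    """Report which ASF spec files exist in the given repository root."""
    root = Path(repo_root)
    present = [spec for spec in ASF_SPECS if (root / spec).is_file()]
    missing = [spec for spec in ASF_SPECS if spec not in present]
    return {
        "present": present,
        "missing": missing,
        "coverage": f"{len(present)}/{len(ASF_SPECS)}",
    }

if __name__ == "__main__":
    report = audit_spec_presence(".")
    print(f"ASF coverage: {report['coverage']}; missing: {report['missing']}")
```

Pairing the output with `git log --follow` on each present file gives the commit-history evidence the retention requirement asks for.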
#### Impact Assessment (§ 12-27.7-102)

**Requirement:** Developers and deployers of high-impact automated decision systems must conduct documented impact assessments identifying potential harms to Colorado residents and mitigation strategies.

**Mapped Controls:**

- **ASF-01 SAFEGUARD.md** — Pre-deployment specification explicitly defining system scope, intended users, potential harms, and mitigation guardrails
- **ASF-12 FAILURE.md** — Comprehensive failure mode mapping showing how the system behaves when things go wrong and harm prevention procedures
- **ASF-13 LEADERBOARD.md** — Performance benchmarking demonstrating system quality and identifying performance degradation that could cause harm
- **ASF-14 REGULATORY.md** — This document, centralising impact assessment documentation for regulators

**Compliance Evidence:**

1. Complete SAFEGUARD.md with an explicit impact assessment identifying affected Colorado residents
2. Document all potential harms (financial loss, discrimination, service denial, privacy violation)
3. For each identified harm, specify SAFEGUARD.md mitigations and ASF controls
4. Maintain the FAILURE.md mode catalogue showing system response to harm-causing conditions
5. Track LEADERBOARD.md metrics demonstrating system quality and absence of harm
6. Provide the REGULATORY.md cross-reference from impact assessment to controls
7. Update and audit the impact assessment annually
8. Be prepared to provide the impact assessment to the Colorado Attorney General on request

**Law Citation:** Colorado Consumer Protections for Artificial Intelligence Act, SB 24-205, § 12-27.7-102

---

#### Risk Mitigation (§ 12-27.7-103)

**Requirement:** High-impact automated decision systems must include documented risk mitigation measures for each identified harm.
**Mapped Controls:**

- **ASF-02 THROTTLE.md** — Rate and cost control preventing financial harm from runaway spending
- **ASF-03 ESCALATE.md** — Human approval for decisions that could deny individuals access to goods/services
- **ASF-04 FAILSAFE.md** — Safe fallback enabling recovery from discriminatory or harmful decisions
- **ASF-05 KILLSWITCH.md** — Emergency stop allowing immediate halt if the system begins causing widespread harm
- **ASF-14 REGULATORY.md** — This document, documenting all mitigation measures and their relationship to identified risks

**Compliance Evidence:**

1. For each harm identified in the SAFEGUARD.md impact assessment, specify corresponding ASF control(s)
2. Deploy THROTTLE.md cost controls with explicit thresholds preventing financial harm
3. Implement ESCALATE.md approval workflows for all high-impact decisions (loan denials, service restrictions, etc.)
4. Configure FAILSAFE.md to detect and roll back discriminatory decisions
5. Enable KILLSWITCH.md with clear trigger conditions (e.g., a denial rate 10% above expected)
6. Document mitigation effectiveness with metrics from LEADERBOARD.md
7. Audit mitigation measures semi-annually
8. Maintain REGULATORY.md showing the harm → mitigation mapping

**Law Citation:** Colorado Consumer Protections for Artificial Intelligence Act, SB 24-205, § 12-27.7-103

---

#### Transparency (§ 12-27.7-104)

**Requirement:** Users of high-impact automated decision systems must be notified that they are interacting with automated systems and provided with information about their rights and how they can challenge decisions.
**Mapped Controls:**

- **ASF-09 SYCOPHANCY.md** — Anti-sycophancy protocol ensuring users receive honest, evidence-based information about system limitations
- **ASF-10 COMPRESSION.md** — Context compression with coherence verification ensuring user communications are clear and accurate
- **ASF-14 REGULATORY.md** — This document, providing transparent compliance reporting and rights information

**Compliance Evidence:**

1. Deploy SYCOPHANCY.md requiring all user-facing communications to include evidence and acknowledgement of limitations
2. Audit COMPRESSION.md outputs to ensure user-facing text is clear and accurate
3. Provide a user-facing statement on every system output: "This decision was made by an automated system. You have the right to request human review and appeal."
4. Document user rights clearly: how to appeal, timeline for appeals, contact for questions
5. Provide a REGULATORY.md summary showing compliance with transparency requirements
6. Maintain an audit log of transparency violations and user complaints
7. Review transparency procedures annually

**Law Citation:** Colorado Consumer Protections for Artificial Intelligence Act, SB 24-205, § 12-27.7-104

---

### GDPR (Regulation (EU) 2016/679)

The General Data Protection Regulation applies whenever your AI system processes personal data of EU residents, regardless of where your organisation is located.

#### Article 5: Lawful, Fair, Transparent Processing

**Requirement:** All personal data processing must be lawful, fair, and transparent; purpose-limited; data-minimised; accurate; and protected for integrity and confidentiality.

**Mapped Controls:**

- **ASF-07 ENCRYPT.md** — Data classification defining which data is personal, how it's protected, and what it can be used for
- **ASF-08 ENCRYPTION.md** — Technical encryption standards ensuring data confidentiality and integrity
- **ASF-14 REGULATORY.md** — This document, documenting data handling in compliance with Article 5

**Compliance Evidence:**

1. Classify all data processed using ENCRYPT.md — identify which is personal data
2. Document the lawful basis for processing each personal data type
3. Implement ENCRYPTION.md encryption for all personal data
4. Maintain an audit trail showing ENCRYPT/ENCRYPTION compliance
5. Provide a data handling statement to users upon first interaction
6. Conduct an annual GDPR compliance audit

**Regulation Citation:** GDPR, Regulation (EU) 2016/679, Article 5(1)(a)–(f)

---

#### Article 32: Security of Processing

**Requirement:** Appropriate technical and organisational measures must protect personal data, including encryption, access controls, incident response, and regular testing.

**Mapped Controls:**

- **ASF-07 ENCRYPT.md** — Data classification and protection requirements
- **ASF-08 ENCRYPTION.md** — Technical encryption standards and key rotation procedures
- **ASF-12 FAILURE.md** — Failure mode mapping including security incident and breach scenarios
- **ASF-14 REGULATORY.md** — This document, documenting security measures

**Compliance Evidence:**

1. Deploy ENCRYPT.md data classification with encryption requirements for each level
2. Implement ENCRYPTION.md encryption with AES-256, hardware key storage, and automated key rotation
3. Maintain FAILURE.md incident scenarios covering data breach (unauthorised access, ransomware, insider threat)
4. Test incident response procedures quarterly
5. Maintain an audit trail of all security incidents and responses
6. Provide a security audit report annually to the data protection authority if requested
7. Document the security posture in REGULATORY.md

**Regulation Citation:** GDPR, Regulation (EU) 2016/679, Article 32

---

#### Articles 33–34: Breach Notification

**Requirement:** The controller must notify the supervisory authority within 72 hours of discovering a personal data breach. Affected individuals must be notified without undue delay if the breach poses a high risk.
**Mapped Controls:**

- **ASF-03 ESCALATE.md** — Human notification enabling immediate incident response
- **ASF-04 FAILSAFE.md** — Safe fallback containing breach impact and preserving evidence
- **ASF-12 FAILURE.md** — Failure mode mapping including breach scenarios and response procedures
- **ASF-14 REGULATORY.md** — This document, documenting breach response procedures

**Compliance Evidence:**

1. Configure ESCALATE.md to trigger immediately on data breach detection (unauthorised access, encryption failure, etc.)
2. Implement FAILSAFE.md to automatically halt data access and quarantine affected data
3. Maintain the FAILURE.md procedure documenting: detection → containment → notification → investigation → remediation
4. Test the breach notification procedure quarterly
5. Maintain an authoritative contact list for data protection authorities in each jurisdiction
6. Implement 72-hour notification deadline tracking
7. Document all breaches in REGULATORY.md with the notification timeline and authority response

**Regulation Citation:** GDPR, Regulation (EU) 2016/679, Articles 33–34

---

### SOC 2 Trust Service Criteria

SOC 2 is an attestation framework covering security, availability, processing integrity, confidentiality, and privacy controls.

#### CC6: Logical and Physical Access Control

**Requirement:** Access to systems and data must be restricted to authorised users, roles, and purposes.

**Mapped Controls:**

- **ASF-07 ENCRYPT.md** — Data classification enabling role-based access requirements
- **ASF-08 ENCRYPTION.md** — Encryption enabling granular access control and key segregation
- **ASF-14 REGULATORY.md** — This document, documenting access control procedures

**Compliance Evidence:**

1. Implement the ENCRYPT.md role-based access model (admin, user, auditor, etc.)
2. Deploy ENCRYPTION.md with separate encryption keys for separate roles
3. Maintain an access control audit log
4. Test access controls quarterly
5. Document access procedures in REGULATORY.md

**Trust Service Criteria:** SOC 2 Trust Service Criteria, CC6.1–CC6.2

---

#### CC7: System Monitoring

**Requirement:** Systems must be monitored continuously for unauthorised activity, anomalies, and security violations.

**Mapped Controls:**

- **ASF-03 ESCALATE.md** — Human notification alerting on anomalies
- **ASF-04 FAILSAFE.md** — Automatic fallback containing compromise
- **ASF-12 FAILURE.md** — Failure mode mapping including security compromise scenarios
- **ASF-14 REGULATORY.md** — This document, centralising all monitoring logs

**Compliance Evidence:**

1. Deploy continuous monitoring detecting unauthorised access attempts, data exfiltration, and anomalies
2. Configure ESCALATE.md to notify the security team on suspicious activity
3. Implement FAILSAFE.md to automatically disable compromised accounts/sessions
4. Maintain a 90-day audit log of all monitoring events
5. Respond to anomalies within 24 hours
6. Document monitoring procedures in REGULATORY.md
7. Conduct an annual SOC 2 audit

**Trust Service Criteria:** SOC 2 Trust Service Criteria, CC7.1–CC7.5

---

#### A1: Availability and Resilience

**Requirement:** The system must be available as promised and recover from failures.

**Mapped Controls:**

- **ASF-02 THROTTLE.md** — Rate control preventing service overload and maintaining availability
- **ASF-04 FAILSAFE.md** — Automatic recovery from failures
- **ASF-05 KILLSWITCH.md** — Graceful shutdown preventing cascading failures
- **ASF-14 REGULATORY.md** — This document, documenting availability targets

**Compliance Evidence:**

1. Deploy THROTTLE.md rate controls maintaining service under peak load
2. Configure FAILSAFE.md to auto-recover from transient failures
3. Implement KILLSWITCH.md graceful degradation on component failure
4. Monitor uptime continuously, targeting 99.9% (max ~43 minutes/month downtime)
5. Test disaster recovery quarterly
6. Document the SLA and recovery time objective (RTO) in REGULATORY.md
7. Report availability metrics monthly

**Trust Service Criteria:** SOC 2 Trust Service Criteria, A1.1–A1.2

---

### ISO/IEC 27001:2022 Information Security Management

ISO 27001 is an international standard for information security management systems.

#### Section A.5: Organisational Controls

**Requirement:** Define and document information security policies, roles, and responsibilities.

**Mapped Controls:**

- **ASF-01 SAFEGUARD.md** — Foundational specification defining security scope and organisational policies
- **ASF-14 REGULATORY.md** — This document, centralising policy documentation

**Compliance Evidence:**

1. Define information security policies in SAFEGUARD.md (confidentiality, integrity, availability)
2. Establish clear roles and responsibilities for security
3. Communicate policies to all personnel
4. Review policies annually
5. Version-control policies in git with 10-year retention

**ISO Standard Citation:** ISO/IEC 27001:2022, Section A.5

---

#### Section A.8: Asset Management

**Requirement:** Classify, inventory, and protect information assets throughout their lifecycle.

**Mapped Controls:**

- **ASF-07 ENCRYPT.md** — Data classification defining asset protection requirements
- **ASF-08 ENCRYPTION.md** — Technical asset protection through encryption
- **ASF-14 REGULATORY.md** — This document, providing the asset inventory and protection audit trail

**Compliance Evidence:**

1. Maintain an asset inventory: identify all systems, databases, and data repositories
2. Classify each asset using ENCRYPT.md (public, internal, confidential, secret)
3. Implement protection measures matching asset classification (ENCRYPTION.md for confidential/secret)
4. Track the asset lifecycle: acquisition → use → disposal
5. Audit the asset inventory annually

**ISO Standard Citation:** ISO/IEC 27001:2022, Section A.8

---

#### Section A.9: Access Control

**Requirement:** Restrict and control access to information assets based on user roles and responsibilities.
**Mapped Controls:**

- **ASF-07 ENCRYPT.md** — Data classification enabling role-based access definition
- **ASF-08 ENCRYPTION.md** — Encryption enabling granular access enforcement
- **ASF-14 REGULATORY.md** — This document, auditing all access decisions

**Compliance Evidence:**

1. Define user roles and access rights in ENCRYPT.md
2. Implement ENCRYPTION.md access control using role-based encryption keys
3. Maintain an access control list (ACL) mapping users → roles → assets
4. Remove access immediately upon employee departure (deprovision procedure)
5. Audit access rights quarterly
6. Test access controls with security penetration testing annually

**ISO Standard Citation:** ISO/IEC 27001:2022, Section A.9

---

#### Section A.12: Operations Security

**Requirement:** Manage and monitor systems to detect, prevent, and recover from security incidents.

**Mapped Controls:**

- **ASF-03 ESCALATE.md** — Human notification on security incidents
- **ASF-04 FAILSAFE.md** — Automatic incident containment and recovery
- **ASF-12 FAILURE.md** — Failure mode mapping including security scenarios
- **ASF-14 REGULATORY.md** — This document, centralising incident logs

**Compliance Evidence:**

1. Configure ESCALATE.md to notify the security team on incident detection
2. Implement FAILSAFE.md to automatically contain incidents (isolate, quarantine, disable)
3. Maintain FAILURE.md incident response procedures
4. Log all security incidents: detection → investigation → remediation → closure
5. Conduct post-incident reviews
6. Test incident response quarterly
7. Document procedures in REGULATORY.md

**ISO Standard Citation:** ISO/IEC 27001:2022, Section A.12

---

### NIST AI Risk Management Framework

The NIST AI Risk Management Framework is US federal guidance on AI governance, with increasing adoption in regulated sectors.

#### Govern Function

**Objective:** Establish governance structures, policies, and oversight mechanisms for AI systems.
**Mapped Controls:**

- **ASF-01 SAFEGUARD.md** — Foundational specification defining governance scope and policies
- **ASF-14 REGULATORY.md** — This document, centralising governance documentation

**Compliance Evidence:**

1. Develop SAFEGUARD.md with explicit governance policies
2. Establish an AI governance committee with defined roles and authorities
3. Define approval workflows for high-risk AI deployments
4. Document decision-making authority and escalation procedures
5. Version-control governance policies in git

**NIST Framework Citation:** NIST AI Risk Management Framework, Govern Function

---

#### Map Function

**Objective:** Identify the purpose, scope, context, and potential risks of AI systems.

**Mapped Controls:**

- **ASF-01 SAFEGUARD.md** — System specification with scope and risk identification
- **ASF-12 FAILURE.md** — Failure mode mapping
- **ASF-14 REGULATORY.md** — This document, providing regulatory context

**Compliance Evidence:**

1. Complete SAFEGUARD.md identifying purpose, users, scope, and failure modes
2. Document potential harms and mitigation strategies
3. Conduct a risk assessment identifying high-risk scenarios
4. Maintain the FAILURE.md mode catalogue
5. Provide REGULATORY.md as regulatory context documentation

**NIST Framework Citation:** NIST AI Risk Management Framework, Map Function

---

#### Measure Function

**Objective:** Assess and monitor AI system performance, including safety, robustness, and quality metrics.

**Mapped Controls:**

- **ASF-11 COLLAPSE.md** — Drift detection monitoring deviation from expected performance
- **ASF-13 LEADERBOARD.md** — Benchmarking suite tracking performance over time
- **ASF-14 REGULATORY.md** — This document, centralising all metrics and reports

**Compliance Evidence:**

1. Deploy COLLAPSE.md drift detection monitoring quality degradation
2. Maintain the LEADERBOARD.md benchmark suite testing monthly
3. Generate monthly performance reports with trends
4. Investigate performance degradation and implement fixes
5. Document all metrics and targets in REGULATORY.md

**NIST Framework Citation:** NIST AI Risk Management Framework, Measure Function

---

#### Manage Function

**Objective:** Treat, mitigate, or accept identified risks. Monitor ongoing risk and adjust controls dynamically.

**Mapped Controls:**

- **ASF-02 THROTTLE.md** — Rate and cost control for risk prevention
- **ASF-03 ESCALATE.md** — Human oversight for risk treatment
- **ASF-04 FAILSAFE.md** — Fallback mechanism for risk containment
- **ASF-05 KILLSWITCH.md** — Emergency stop for acute risk management
- **ASF-14 REGULATORY.md** — This document, documenting all risk treatment decisions

**Compliance Evidence:**

1. Deploy the full ASF control stack (THROTTLE, ESCALATE, FAILSAFE, KILLSWITCH)
2. Document a risk treatment decision for each identified risk
3. Maintain an incident log with risk materialisation tracking
4. Implement risk response procedures and test them quarterly
5. Report on risk treatment effectiveness monthly
6. Provide REGULATORY.md showing all risk management decisions
7. Review and adjust controls based on emerging risks and regulatory changes

**NIST Framework Citation:** NIST AI Risk Management Framework, Manage Function

---

## Compliance Matrix

Complete cross-reference showing which ASF specification addresses which regulatory requirement:

| ASF Specification | Regulatory Coverage | Compliance Frameworks |
|---|---|---|
| **ASF-01 SAFEGUARD.md** | System scope, guardrails, pre-deployment safety, governance | EU AI Act Art 9 & Annex IV, Colorado § 102, GDPR Art 5, ISO 27001 A.5, NIST Govern & Map |
| **ASF-02 THROTTLE.md** | Rate control, cost prevention, resource management | EU AI Act Art 9, Colorado § 103, SOC 2 A1, ISO 27001 A.12 |
| **ASF-03 ESCALATE.md** | Human approval, notification, oversight | EU AI Act Art 14, Colorado §§ 103–104, GDPR Art 33, SOC 2 CC7, ISO 27001 A.12 |
| **ASF-04 FAILSAFE.md** | Safe fallback, recovery, failure containment | EU AI Act Art 14, Colorado § 103, GDPR Art 33, SOC 2 A1, ISO 27001 A.12 |
| **ASF-05 KILLSWITCH.md** | Emergency stop, immediate halt, safety override | EU AI Act Art 14, Colorado § 103, GDPR Art 33, SOC 2 A1, ISO 27001 A.12 |
| **ASF-06 TERMINATE.md** | Permanent shutdown, no restart, evidence preservation | EU AI Act Art 14, Colorado § 103 |
| **ASF-07 ENCRYPT.md** | Data classification, protection requirements | GDPR Art 5 & 32, SOC 2 CC6, ISO 27001 A.8 & A.9 |
| **ASF-08 ENCRYPTION.md** | Technical standards, key management, encryption algorithms | GDPR Art 32, SOC 2 CC6 & CC7, ISO 27001 A.8 & A.9 |
| **ASF-09 SYCOPHANCY.md** | Transparency, honesty, anti-bias, citations | EU AI Act Art 13, Colorado § 104 |
| **ASF-10 COMPRESSION.md** | Context compression, coherence verification, accuracy | EU AI Act Art 13, Colorado § 104 |
| **ASF-11 COLLAPSE.md** | Drift detection, performance monitoring, consistency | EU AI Act Art 15, NIST Measure |
| **ASF-12 FAILURE.md** | Failure modes, incident response, error handling | EU AI Act Art 9 & Annex IV, Colorado § 102, GDPR Arts 33–34, SOC
2 CC7, ISO 27001 A.12, NIST Map & Manage | | **ASF-13 LEADERBOARD.md** | Benchmarking, regression detection, quality tracking | EU AI Act Art 15, NIST Measure | | **ASF-14 REGULATORY.md** | Compliance mapping, documentation, audit trail | EU AI Act Annex IV, Colorado § 102 & 104, SOC 2 CC7, ISO 27001 A.5, NIST Govern & Map | --- ## Audit Documentation Checklist When preparing for regulatory audit or compliance review, gather these documents in this order: ### Required Core Documents (14 files) 1. [ ] SAFEGUARD.md (ASF-01) — System specification 2. [ ] THROTTLE.md (ASF-02) — Rate control configuration 3. [ ] ESCALATE.md (ASF-03) — Approval workflows and notification channels 4. [ ] FAILSAFE.md (ASF-04) — Recovery procedures 5. [ ] KILLSWITCH.md (ASF-05) — Emergency stop configuration 6. [ ] TERMINATE.md (ASF-06) — Shutdown procedures 7. [ ] ENCRYPT.md (ASF-07) — Data classification 8. [ ] ENCRYPTION.md (ASF-08) — Technical encryption standards 9. [ ] SYCOPHANCY.md (ASF-09) — Transparency controls 10. [ ] COMPRESSION.md (ASF-10) — Context compression procedures 11. [ ] COLLAPSE.md (ASF-11) — Drift detection procedures 12. [ ] FAILURE.md (ASF-12) — Failure mode catalogue 13. [ ] LEADERBOARD.md (ASF-13) — Performance benchmarks 14. 
[ ] REGULATORY.md (ASF-14) — This document ### Supporting Evidence - [ ] Git commit history for all 14 files (10-year retention) - [ ] Configuration files showing all controls deployed to production - [ ] Audit logs showing all control executions (ESCALATE approvals, FAILSAFE recoveries, KILLSWITCH activations) - [ ] Incident logs linked to FAILURE.md modes - [ ] Performance reports from LEADERBOARD.md showing no regression - [ ] Security audit trail from ENCRYPT.md and ENCRYPTION.md - [ ] Test results for critical controls (FAILSAFE, KILLSWITCH, TERMINATE tested within 12 months) ### Regulatory-Specific Packages **For EU AI Act Audit:** - SAFEGUARD.md (Art 9 risk management) - ESCALATE.md, FAILSAFE.md, KILLSWITCH.md, TERMINATE.md (Art 14 human oversight) - SYCOPHANCY.md, COMPRESSION.md (Art 13 transparency) - COLLAPSE.md, LEADERBOARD.md (Art 15 accuracy) - REGULATORY.md (Annex IV documentation) - Complete git history **For Colorado AI Act Audit:** - SAFEGUARD.md (impact assessment) - FAILURE.md, LEADERBOARD.md (risk mitigation) - SYCOPHANCY.md (transparency) - REGULATORY.md (compliance mapping) - Impact assessment report - Performance metrics **For GDPR Audit:** - ENCRYPT.md, ENCRYPTION.md (data protection) - ESCALATE.md, FAILSAFE.md (breach notification) - Data processing inventory - Privacy impact assessment - Breach response procedures **For SOC 2 Audit:** - ENCRYPT.md, ENCRYPTION.md (access control CC6) - ESCALATE.md, FAILSAFE.md, FAILURE.md (monitoring CC7) - THROTTLE.md, FAILSAFE.md, KILLSWITCH.md (availability A1) - Monitoring logs - Availability metrics - Audit logs **For ISO 27001 Audit:** - SAFEGUARD.md (policies A.5) - ENCRYPT.md, ENCRYPTION.md (asset management A.8 & access control A.9) - ESCALATE.md, FAILSAFE.md, FAILURE.md (operations security A.12) - Asset inventory - Access control matrices - Incident logs **For NIST AI RMF Assessment:** - SAFEGUARD.md, REGULATORY.md (govern) - SAFEGUARD.md, FAILURE.md, REGULATORY.md (map) - COLLAPSE.md, 
LEADERBOARD.md (measure) - THROTTLE.md, ESCALATE.md, FAILSAFE.md, KILLSWITCH.md (manage) - Risk register - Control effectiveness reports --- ## Implementation Roadmap ### Phase 1: Foundation (Month 1) - Copy REGULATORY.md into project root - Deploy SAFEGUARD.md with system scope and risk identification - Complete initial compliance matrix mapping - Version-control in git with 10-year retention policy ### Phase 2: Operational Controls (Months 2-3) - Deploy THROTTLE.md (rate control) - Deploy ESCALATE.md (human approval) - Implement notification channels - Test approval workflows - Document in REGULATORY.md ### Phase 3: Safety Controls (Months 4-5) - Deploy FAILSAFE.md (recovery) - Deploy KILLSWITCH.md (emergency stop) - Test both in production-like environment - Document recovery procedures ### Phase 4: Data Security (Months 6-7) - Deploy ENCRYPT.md (data classification) - Implement ENCRYPTION.md (technical encryption) - Audit all personal data flows - Ensure GDPR compliance ### Phase 5: Accountability (Months 8-9) - Deploy SYCOPHANCY.md (transparency) - Deploy COMPRESSION.md (context safety) - Implement COLLAPSE.md (drift detection) - Maintain FAILURE.md (incident catalogue) - Maintain LEADERBOARD.md (performance tracking) ### Phase 6: Audit & Compliance (Month 10+) - Conduct internal audit against REGULATORY.md - Prepare compliance packages for each framework - Run security and SOC 2 audits - Update REGULATORY.md annually - Test critical controls quarterly --- ## Metadata **Specification Name:** REGULATORY.md — Compliance Mapping Protocol for AI Agents **Version:** 1.0 **Release Date:** 15 March 2026 **Owner:** Agentik Safety Framework Working Group **Contact:** info@regulatory.md **GitHub:** https://github.com/regulatory-md/spec **License:** MIT **Regulatory Frameworks Covered:** 1. EU Artificial Intelligence Act (Regulation (EU) 2024/1689) 2. Colorado Consumer Protections for Artificial Intelligence Act (SB 24-205) 3. 
General Data Protection Regulation (GDPR, Regulation (EU) 2016/679) 4. SOC 2 Trust Service Criteria 5. ISO/IEC 27001:2022 Information Security Management 6. ISO/IEC 42001:2023 AI Management Systems 7. NIST AI Risk Management Framework **Related ASF Specifications:** All 14 ASF specs from SAFEGUARD.md (ASF-01) through REGULATORY.md (ASF-14) **Review Frequency:** Annually, or when major regulatory changes occur **Next Review:** 15 March 2027 --- **Disclaimer:** This specification is provided "as-is" without warranty. It does not constitute legal, regulatory, or compliance advice. Use does not guarantee compliance with any law or regulation. Organisations must consult qualified professionals. Authors accept no liability for consequences of use. --- Last updated: 15 March 2026 Specification version: 1.0
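
---

The compliance matrix in this specification also lends itself to a machine-readable companion kept alongside REGULATORY.md, so that auditor questions such as "which controls address Article 9 of the EU AI Act?" become a one-line query. The sketch below is illustrative and non-normative: the `MATRIX` dictionary and `controls_for` helper are hypothetical names, the rows are an excerpt of the full matrix, and compound citations (e.g. "Art 9 & Annex IV") are split into one entry per article for lookup.

```python
# Illustrative, non-normative sketch of the ASF compliance matrix as data.
# MATRIX and controls_for are hypothetical names; rows are an excerpt only.
MATRIX = {
    "ASF-01 SAFEGUARD.md": ["EU AI Act Art 9", "EU AI Act Annex IV", "Colorado § 102",
                            "GDPR Art 5", "ISO 27001 A.5", "NIST Govern", "NIST Map"],
    "ASF-02 THROTTLE.md": ["EU AI Act Art 9", "Colorado § 103", "SOC 2 A1",
                           "ISO 27001 A.12"],
    "ASF-03 ESCALATE.md": ["EU AI Act Art 14", "Colorado § 103-104", "GDPR Art 33",
                           "SOC 2 CC7", "ISO 27001 A.12"],
    "ASF-14 REGULATORY.md": ["EU AI Act Annex IV", "Colorado § 102 & 104",
                             "SOC 2 CC7", "ISO 27001 A.5", "NIST Govern", "NIST Map"],
}

def controls_for(requirement: str) -> list[str]:
    """Return every ASF specification mapped to the given regulatory requirement."""
    return sorted(spec for spec, reqs in MATRIX.items() if requirement in reqs)

# The auditor's question from above, answered programmatically:
print(controls_for("EU AI Act Art 9"))
# → ['ASF-01 SAFEGUARD.md', 'ASF-02 THROTTLE.md']
```

Keeping the same data in version control next to the 14 specification files means the mapping inherits the git history and retention guarantees this document already requires.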