# Standard Intelligence — EU AI Act Technical Documentation (Full Text) > This file contains the complete plain-text content of all 673 documentation articles. > For the structured index, see /llms.txt > Source: https://docs.standardintelligence.com > Generated: 2026-03-05 --- # Getting Started --- ## Data Flow URL: https://docs.standardintelligence.com/data-flow Breadcrumb: Getting Started › Workflow › Data Flow Last updated: 28 Feb 2026 Data Flow AISDP module(s): 3 (Architecture and Design) Regulatory basis: Annex IV (2)(b–e); Article 12 The AISDP requires architectural documentation that traces the path of data through the system from ingestion to output. The data flow is structured around an eight-layer reference architecture , each layer providing specific protections against intent drift and outcome drift. Data enters at the Data Ingestion Layer (Layer 1), where schema validation, input range enforcement, and prohibited feature blocking intercept malformed or out-of-distribution inputs. The Feature Engineering Layer (Layer 2) transforms validated data into the representation used for inference, enforcing training-serving consistency and maintaining a feature registry with proxy variable flags. The Model Inference Layer (Layer 3) applies the model with version pinning, confidence thresholding, and output constraint enforcement. Post-inference, the Post-Processing Layer (Layer 4) applies business rules and threshold calibration, with fairness re-evaluation on production data. The Explainability Layer (Layer 5) generates explanations using methods appropriate to the model architecture. The Human Oversight Interface (Layer 6) presents the system's recommendation alongside the explanation for operator review, with automation bias countermeasures and override capability. The Logging and Audit Layer (Layer 7) captures immutable, append-only records of events at Layers 3, 4, and 6, using cryptographic hash chains. The Monitoring Layer (Layer 8) consumes these logs to compute performance, fairness, data drift , operational, and human oversight metrics against AISDP-declared thresholds. The data flow diagram is essential for demonstrating Article 12 compliance (record-keeping) and for enabling traceability analysis throughout the system's lifecycle. Key outputs Data flow diagram (part of AISDP Module 3 ) System context, container, and component diagrams (C4 model) --- ## Delivery Process URL: https://docs.standardintelligence.com/delivery-process Breadcrumb: Getting Started › Delivery Process Last updated: 28 Feb 2026 The delivery process organises compliance activities into seven phases, each with a defined owner, outputs, and governance gate. Phase 1 covers discovery and classification. Phase 2 addresses risk assessment and the fundamental rights impact assessment . Phase 3 establishes architecture and design. Phase 4 manages development and testing with CI/CD integration. Phase 5 conducts pre-deployment validation through three assessment workstreams. Phase 6 handles registration and deployment with per-jurisdiction checklists. Phase 7 establishes operational monitoring with a continuous feedback loop from post-market monitoring through to AISDP updates. ℹ Each phase includes a governance gate that must be passed before proceeding. The gates are designed to prevent compliance debt from accumulating. --- ## Domain Expertise URL: https://docs.standardintelligence.com/domain-expertise Breadcrumb: Getting Started › Workflow › Domain Expertise Last updated: 28 Feb 2026 Domain Expertise AISDP module(s): All (contextual) Regulatory basis: N/A (guidance) Different roles require different domain expertise. The table below maps each role to its priority sections and key focus areas, enabling targeted engagement with the material. Role Priority sections Key focus areas AI Governance Lead Getting Started, Conformity Assessment , Regulator Interaction, Strategic Synthesis Residual risk acceptability thresholds, non-conformity management , Declaration of Conformity liability implications, end-of-life planning AI System Assessor Risk Assessment , Conformity Assessment, all technical sections (working familiarity) Full risk assessment methodology, Annex VI walkthrough, documentation standards, delivery process Technical SME / Engineering Lead Model Selection , Data Governance , Development Architectures, Version Control , CI/CD Pipeline s, Cybersecurity, Post-Market Monitoring Level 1 monitoring, break-glass procedures , oversight interface requirements, technical shutdown procedures Legal and Regulatory Advisor Getting Started, Conformity Assessment, Regulator Interaction, Copyright & IP Exposure Article 6(3) exception, FRIA , penalty framework, serious incident reporting , IP risk, liability and insurance , end-of-life regulatory basis Conformity Assessment Coordinator Conformity Assessment, Certification Assessment execution methodology , non-conformity register, multi-system coordination, documentation finalisation DPO Liaison Data Governance, Post-Market Monitoring GDPR alignment, special category data , PMM data retention Executive Leadership Getting Started, Strategic Synthesis Strategic significance, regulatory timeline, reputational risk , escalation culture, penalty exposure provides a consolidated cross-reference index mapping every cited Article, Annex, and AISDP Module to the sections that address it. Key outputs None (contextual article) --- ## EU AI Act Overview URL: https://docs.standardintelligence.com/eu-ai-act-overview Breadcrumb: Getting Started › Introduction › EU AI Act Overview Last updated: 28 Feb 2026 EU AI Act Overview AISDP module(s): All (contextual) Regulatory basis: Regulation (EU) 2024/1689 (full text) The EU AI Act (Regulation (EU) 2024/1689) is the first comprehensive regulatory framework governing artificial intelligence systems at scale. It becomes fully enforceable for high-risk systems on 2 August 2026, requiring every high-risk AI system placed on the Union market to carry a completed conformity assessment , a signed Declaration of Conformity , CE marking where applicable, and a registration entry in the EU database . AI systems present a distinctive regulatory challenge because of their relationship with time. A traditional application behaves tomorrow the way it behaves today unless someone deliberately changes it. An AI system, by contrast, is designed to improve through learning, and that learning introduces continuous change that conventional software governance was never built to handle. Models are retrained on new data; feature distributions shift as the population served by the system evolves; fine-tuning adjusts behaviour in ways that may be subtle and difficult to document after the fact. The evidentiary backbone for meeting the Act's requirements is a single artefact: the AI System Documentation Package (AISDP). A national competent authorit y will open the AISDP first during any inquiry. A notified body will scrutinise it for technical rigour. Internal governance, legal counsel, and engineering teams will consult it throughout the system's operational life, and it must remain retrievable for ten years after the system is placed on the market. The Act tells organisations what they must document. It does not tell them how. Articles 8 through 15 set out the substantive requirements; Annex IV specifies the technical documentation contents. The gap between a regulatory requirement and the engineering workflow that satisfies it is where most compliance programmes stall. The AISDP preparation process described in these articles occupies that gap, translating every material obligation into concrete engineering practices, governance processes, and organisational structures. Key outputs None (contextual article) --- ## Four Risk Tiers URL: https://docs.standardintelligence.com/four-risk-tiers Breadcrumb: Getting Started › Introduction › Four Risk Tiers Last updated: 28 Feb 2026 Four Risk Tiers AISDP module(s): 1 (System Identity), 6 (Risk Management System) Regulatory basis: Articles 5, 6, 7, 50; Annex III The EU AI Act establishes a four-tier risk classification framework that determines the obligations attaching to each AI system. Understanding where a system falls within this framework is the precondition for every subsequent compliance activity. Tier 1: Prohibited Practices ( Article 5 ). Systems that deploy subliminal manipulation, exploit vulnerabilities of specific groups, implement social scoring by public authorities, or perform untargeted facial recognition scraping are prohibited outright. Emotion recognition in workplaces and educational institutions (except where intended for medical or safety reasons), criminal risk assessment solely through profiling, and real-time remote biometric identification in publicly accessible spaces (outside narrow law enforcement exceptions) also fall within this tier. These systems cannot proceed through the AISDP process; their existence triggers immediate escalation and cessation. Tier 2: High-Risk Systems (Annex III and Article 6). Systems falling within the eight Annex III domains (biometrics; critical infrastructure; education and vocational training; employment, workers management and access to self-employment; access to and enjoyment of essential private services and essential public services and benefits; law enforcement; migration, asylum and border control management; administration of justice and democratic processes) or constituting safety components of products governed by Annex I harmonisation legislation require the full AISDP comprising all twelve modules, conformity assessment , CE marking , and EU database registration . Tier 3: Limited Risk. Systems triggering transparency obligations, such as chatbots, emotion recognition systems, biometric categorisation systems, and systems generating or manipulating synthetic content, require a Standard AISDP addressing transparency measures. Tier 4: Minimal Risk. Systems that do not trigger any of the above categories require a Baseline AISDP confirming the classification rationale. The Article 6(3) exception allows certain systems that would otherwise be classified as high-risk to be treated as lower risk, provided two conditions are both satisfied: the system performs narrow procedural tasks, improves previously completed human activities, or detects decision-making patterns without replacing human assessment; and the system does not pose a significant risk of harm to the health, safety, or fundamental rights of natural persons. Both criteria must be met, and any reliance on the exception must be rigorously documented. Key outputs Classification Decision Record (CDR) with risk tier determination Article 6(3) exception assessment (where applicable) --- ## Introduction URL: https://docs.standardintelligence.com/introduction Breadcrumb: Getting Started › Introduction Last updated: 28 Feb 2026 The introduction provides the regulatory context for the AISDP . The EU AI Act overview summarises the regulation's scope, structure, and timeline. The four risk tiers explain the classification framework from prohibited practices through high-risk to limited and minimal risk. The penalty structure documents the graduated fines. Key concepts defines the foundational terms used throughout the documentation. ℹ This section provides the regulatory foundation. Readers already familiar with the EU AI Act may proceed directly to the Workflow section. --- ## Key Concepts URL: https://docs.standardintelligence.com/key-concepts Breadcrumb: Getting Started › Introduction › Key Concepts Last updated: 28 Feb 2026 Key Concepts AISDP module(s): All (contextual) Regulatory basis: Article 3; Articles 8–15; Annex IV Several concepts recur throughout AISDP preparation and must be understood precisely. AI system (Article 3(1)): a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers from input how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Provider : the entity that develops an AI system or has one developed and places it on the market or puts it into service under its own name or trademark. Deployer : the entity that uses the system under its authority. Many organisations hold both roles simultaneously, which triggers dual obligations. Intended purpose : the use for which the provider intends the system, as specified in the instructions for use, technical documentation, and Declaration of Conformity . Substantial modification : a change after placing on the market that was not foreseen or planned by the provider and that affects compliance or modifies the intended purpose; provide quantitative thresholds for identifying these. Placing on the market : the first making available on the Union market, which starts the ten-year documentation retention clock. The AISDP itself is structured as twelve modules, each traceable to source evidence: System Identity, Development Process, Architecture and Design, Data Governance , Testing and Validation, Risk Management System, Human Oversight, Transparency and User Information, Robustness and Cybersecurity, Record-Keeping, FRIA , and Post-Market Monitoring and Change History. Every claim in the AISDP requires a supporting artefact. The approach described across these articles generates that evidence as a natural byproduct of the engineering workflow. Key outputs None (contextual article) --- ## Outcomes URL: https://docs.standardintelligence.com/outcomes Breadcrumb: Getting Started › Workflow › Outcomes Last updated: 28 Feb 2026 Outcomes AISDP module(s): All Regulatory basis: Articles 8–15, 43, 47, 48, 71, 72 The seven-phase delivery process produces a defined set of compliance outcomes. On completion, the organisation holds a signed Declaration of Conformity , a CE-marked system, a registration entry in the EU database, and a complete AISDP with all twelve modules populated and traceable to source evidence. The AISDP becomes a living document. Each material change to the system, its documentation, or its operational context creates a new version. The version history demonstrates the organisation's continuous compliance discipline throughout the system's operational lifetime and during the ten-year post-market retention period. The thirteen domains addressed in this documentation ( risk assessment , model selection , data governance , development architectures, version control , CI/CD pipeline s, cybersecurity, conformity assessment , certification, regulator interaction, post-market monitoring , operational oversight, and technical delivery) are deeply interdependent. A deficiency in one propagates through others, often surfacing as a compliance failure far from the original gap. The strategic synthesis in maps these dependencies and provides a maturity model , common pitfalls, and a readiness assessment framework to help organisations evaluate their compliance posture holistically. Key outputs Complete AISDP (twelve modules) Signed Declaration of Conformity CE marking evidence EU database registration confirmation Post-market monitoring plan (operational) --- ## Penalty Structure URL: https://docs.standardintelligence.com/penalty-structure Breadcrumb: Getting Started › Introduction › Penalty Structure Last updated: 28 Feb 2026 Penalty Structure AISDP module(s): All (contextual) Regulatory basis: Article 99 The AI Act establishes a graduated penalty framework under Article 99, calibrated to the severity of the violation. National competent authorit ies hold investigative powers under Article 74 to detect and pursue non-compliance. Three penalty tiers apply. The first tier covers prohibited AI practices ( Article 5 ) and carries fines of up to EUR 35 million or 7% of global annual turnover, whichever is higher. This is the highest penalty tier in the EU regulatory landscape, exceeding the GDPR 's maximum of EUR 20 million or 4%. The second tier covers breaches of high-risk system obligations under Articles 8 through 15, Articles 16 and 17, Articles 25 through 27, and Articles 43 through 49, with fines reaching EUR 15 million or 3% of global annual turnover. The third tier applies to providing incorrect, incomplete, or misleading information to notified bodies or competent authorities, carrying fines of up to EUR 7.5 million or 1% of turnover. For SMEs and start-ups, Article 99(6) provides that the lower of the two amounts (absolute figure or turnover percentage) applies. Enforcement action may be triggered by proactive market surveillance , complaints from affected persons or deployers, the provider's own serious incident notifications under Article 73 , cross-border referrals from other member state authorities, or media and civil society reporting. The AISDP is central to enforcement; a competent authority's first request will typically be for the complete technical documentation, and an AISDP that is incomplete or inconsistent with the deployed system is itself a non-compliance finding. Article 99(7) directs competent authorities to consider mitigating factors when determining penalty amounts. A thorough AISDP, a functioning post-market monitoring system, responsive incident reporting, and a cooperative posture toward authorities will materially reduce penalty exposure. The AISDP is therefore both the document under review and part of the evidence that determines consequences. Key outputs None (contextual article) --- ## Phase 1 — Discovery & Classification URL: https://docs.standardintelligence.com/phase-1--discovery-and-classification Breadcrumb: Getting Started › Delivery Process › Phase 1 Last updated: 28 Feb 2026 Phase 1: Discovery & Classification — Owner & Outputs AISDP module(s): 1 (System Identity) Regulatory basis: Articles 3(1), 5, 6, 7, 50; Annex III Phase 1 runs during Weeks 1 to 3. The AI System Assessor owns this phase. The objective is to determine whether the system falls within the AI Act's scope, classify its risk tier, and produce the Classification Decision Record . The Assessor examines the system against the Article 3(1) definition by asking three questions: (1) is the system designed to operate with varying levels of autonomy; (2) may it exhibit adaptiveness after deployment; and (3) does it infer from inputs how to generate outputs such as predictions, content, recommendations, or decisions? All three elements must be present for the system to fall within scope. The Assessor then classifies in-scope systems against the four risk tiers: prohibited (Article 5), high-risk (Articles 6–7 and Annex III), limited risk, or minimal risk. For systems falling within Annex III categories, the Assessor evaluates the Article 6(3) exception by testing the functional criterion and the risk criterion separately. The functional criterion covers whether the system performs narrow procedural tasks, improves previously completed human activities, or detects decision-making patterns without replacing human assessment. The risk criterion covers whether the system poses a significant risk of harm to health, safety, or fundamental rights. The Classification Reviewer independently reviews the Assessor's determination. Disagreements are escalated to the AI Governance Lead . This independent review is a structural safeguard against classification bias, where the team developing a system may have incentives to classify it at a lower risk tier. Key outputs Classification Decision Record (CDR) Initial risk profile identifying triggered regulatory obligations Evidence pack with source materials informing the classification Phase 1: Governance Gate (CDR Approval) AISDP module(s): 1 (System Identity), 6 (Risk Management System) Regulatory basis: Articles 5, 6, 7; Annex III Phase 1 concludes with a governance gate: the AI Governance Lead must approve the Classification Decision Record before Phase 2 begins. This gate serves several purposes. It ensures that the classification has been independently reviewed and that any disagreement between the AI System Assessor and the Classification Reviewer has been resolved. It confirms that the CDR is current, that no reclassification trigger s have been activated, and that the classification rationale remains sound given the system's deployment context. Where the Article 6(3) exception is claimed, both the Legal and Regulatory Advisor and the AI Governance Lead must review the claim. The gate also establishes the scope of subsequent work. A high-risk classification triggers the full twelve-module AISDP, conformity assessment , CE marking , and EU database registration . A limited-risk classification triggers a Standard AISDP: a reduced documentation package addressing the transparency obligations under Article 50 , covering system identity, intended purpose, transparency measures, and the classification rationale. A minimal-risk classification requires only a Baseline AISDP: a lightweight record confirming the classification analysis, the system's intended purpose, and the date of determination. Misclassification at this stage propagates through the entire lifecycle; a system incorrectly classified as limited-risk that later proves to be high-risk will lack the documentation, testing, and governance infrastructure that should have been in place from the outset. A system that has drifted from its intended purpose into a higher-risk domain since its original classification requires reclassification before risk assessment proceeds. The gate enforces this check explicitly. Key outputs Approved CDR with AI Governance Lead sign-off Confirmed scope for subsequent AISDP preparation --- ## Phase 2 — Risk Assessment & FRIA URL: https://docs.standardintelligence.com/phase-2--risk-assessment-and-fria Breadcrumb: Getting Started › Delivery Process › Phase 2 Last updated: 28 Feb 2026 Phase 2: Risk Assessment & FRIA — Owner & Outputs AISDP module(s): 6 (Risk Management System), 11 (FRIA) Regulatory basis: Articles 9, 27 Phase 2 runs during Weeks 2 to 6. The Technical SME and AI System Assessor jointly own this phase. The objective is to conduct the comprehensive risk assessment that informs all subsequent design and development decisions. Article 9 (2)(a) requires the risk management system to identify and analyse the known and the reasonably foreseeable risks that the high-risk AI system can pose to health, safety, or fundamental rights, including risks arising from the system's intended use and conditions of reasonably foreseeable misuse. The team applies the five-method risk identification approach: Failure Mode and Effects Analysis (FMEA), stakeholder consultation, regulatory gap analysis, adversarial red-teaming, and horizon scanning. The risk register is established, and each risk is scored across four dimensions: health and safety, fundamental rights, operational integrity, and reputational exposure. Residual risk acceptability is assessed against Article 9(4)'s standard, which requires elimination or reduction of risks "as far as possible through adequate design and development," with any remaining residual risk judged acceptable in relation to the system's intended purpose and the persons or groups of persons on whom it is intended to be used. This standard does not require zero residual risk; it requires evidence that the organisation has pursued risk reduction to the point where further reduction would be disproportionate, technically infeasible, or counterproductive. For deployers of high-risk systems that fall within the categories specified in Article 27 (1) (bodies governed by public law, private entities providing public services, and deployers conducting creditworthiness evaluation or insurance risk pricing), the Fundamental Rights Impact Assessment under Article 27 is conducted in parallel. Article 27 places the FRIA obligation on these deployer categories specifically; where the organisation is both provider and deployer, the FRIA is nonetheless a deployer-capacity activity, and the assessment should reflect the deployer's knowledge of the deployment context rather than the provider's design assumptions alone. The FRIA examines the impact on all potentially affected EU Charter rights, with particular attention to intersectional effects where multiple vulnerability factors compound in the same individuals. The reputational risk framework is also applied, assessing customer, market, regulatory, shareholder, and employee dimensions for each technical risk. provide detailed guidance on FRIA methodology and documentation requirements. Key outputs Risk register (populates AISDP Module 6 ) FRIA report (populates AISDP Module 11 ) Reputational risk assessment Risk mitigation plan with assigned owners and timelines Phase 2: Governance Gate (Residual Risk Acceptance) AISDP module(s): 6 (Risk Management System) Regulatory basis: Article 9(4) Phase 2 concludes with a governance gate: the AI Governance Lead reviews the risk register and formally accepts the residual risk profile before development proceeds. This is a governance decision, not a technical one. For each risk above the acceptance threshold, the Assessor must document the mitigations already implemented, the residual risk rating after those mitigations, the alternative mitigations that were considered and rejected, and the rationale for rejection. That rationale must address cost relative to the system's economic value and the severity of the risk. Claims of technical infeasibility require supporting evidence from the Technical SME. Claims that an alternative mitigation would degrade performance must quantify the degradation and explain why the current performance level is necessary for the intended purpose. Each acceptance is signed by the AI Governance Lead. These sign-offs are retained as part of the AISDP evidence pack for the full ten-year retention period. Residual risk is communicated to deployers through the Instructions for Use ( Module 8 ), specifying which subgroups are affected, the magnitude of the risk, the conditions under which it may materialise, and the compensating controls that deployers should apply. Residual risk acceptability is not a one-time determination. The quarterly risk register review must re-assess acceptability in light of changes to deployment scale, affected population, new evidence, or tightened regulatory standards. Key outputs Signed residual risk acceptance decisions Deployer risk communication documentation Phase 2: Identification Methods & Scoring AISDP module(s): 6 (Risk Management System) Regulatory basis: Article 9 Risk identification for AI systems requires a multi-method approach because no single technique captures every class of risk. The recommended methodology combines five complementary methods. Failure Mode and Effects Analysis (FMEA) is the workhorse. For each system component, the team enumerates failure modes (including data drift , concept drift, adversarial manipulation, distributional shift, label noise propagation, and emergent biases), the effects of each failure, and the severity of those effects. Each failure mode receives a Risk Priority Number (RPN) based on severity, occurrence probability, and detectability, scored on scales of 1 to 10. RPNs above a defined threshold trigger mandatory mitigation. A threshold of 100 on the 1,000-point scale is a common starting point, but the threshold must be calibrated per system: a safety-critical system in healthcare may warrant a lower threshold (e.g., 80), while an internal analytics tool may accept a higher one (e.g., 150). The chosen threshold, the calibration rationale, and the individuals involved in setting it should be documented in the risk register and reviewed at each annual calibration workshop. Stakeholder consultation surfaces experiential risks invisible to the engineering team. Structured consultation with deployers, affected persons, civil society representatives, and domain experts produces documented and actionable insights. Regulatory gap analysis maps the system against every obligation in Articles 9 through 15 to identify shortfalls. Adversarial red-teaming subjects the system to deliberate misuse scenarios, with the MITRE ATLAS threat taxonomy as the reference framework and tools such as Microsoft PyRIT for LLM-based systems. Horizon scanning reviews incidents and enforcement actions from comparable systems, drawing on the OECD AI Policy Observatory, Stanford HAI, and the AI Incident Database. Risk scoring uses four impact dimensions: health and safety, fundamental rights, operational integrity, and reputational exposure. Annual calibration workshops ensure scoring consistency across assessors. High-uncertainty risks may employ semi-quantitative Bayesian scoring to make uncertainty visible rather than concealing it behind point estimates. Key outputs Populated risk register with RPNs and four-dimension scoring FMEA analysis documentation Red-team exercise reports Stakeholder consultation records Calibration workshop records --- ## Phase 3 — Architecture & Design URL: https://docs.standardintelligence.com/phase-3--architecture-and-design Breadcrumb: Getting Started › Delivery Process › Phase 3 Last updated: 28 Feb 2026 Phase 3: Architecture & Design — Owner & Outputs AISDP module(s): 2 (Development Process), 3 (Architecture and Design), 4 (Data Governance), 9 (Robustness and Cybersecurity) Regulatory basis: Annex IV (2); Articles 9–15 Phase 3 runs during Weeks 4 to 8. The Technical Owner leads, with contributions from the Technical SME and AI System Assessor . The objective is to design the system architecture informed by the risk assessment , select the model approach, and establish the data governance framework. The phase begins with the Statement of Business Intent, documenting the system's purpose, constraints, prohibited outcomes, ethical framework, and transparency commitment across four audiences: deployers, affected persons, regulators, and internal stakeholders. Model selection follows the compliance criteria described above, evaluating each candidate against six dimensions: documentability, testability, auditability, bias detectability, maintainability, and determinism. The full spectrum of decisioning approaches is assessed, including heuristic systems, statistical models, neural networks, and LLMs. Model origin risk, copyright risk, and nation-alignment risk are all evaluated. The layered architecture is designed with per-layer compensating controls against intent and outcome drift. The data governance framework is established, and version control strategy, CI/CD pipeline design, and infrastructure-as-code approach are defined. The cybersecurity threat model is developed using STRIDE/PASTA methodology. Key outputs Statement of Business Intent (approved by AI Governance Lead and Business Owner) Model selection rationale document (AISDP Module 3 ) System architecture document with dependency maps (AISDP Module 3) Data governance plan (AISDP Module 4 ) Version control and CI/CD design (AISDP Module 2 ) Cybersecurity threat model (AISDP Module 9 ) Phase 3: Governance Gate (Architecture Review) AISDP module(s): 3 (Architecture and Design) Regulatory basis: Annex IV(2); Articles 9–15 Phase 3 concludes with a governance gate: the Technical SME , Legal and Regulatory Advisor , and AI Governance Lead conduct a formal architecture review with sign-off confirming that the design satisfies the risk mitigation plan. This review verifies that every risk identified in Phase 2 has a corresponding architectural control. The eight-layer reference architecture should demonstrate per-layer protections against both intent drift and outcome drift. The model selection rationale must address compliance criteria scores, and any model origin risks or IP exposure must have documented mitigations. The Legal and Regulatory Advisor confirms that the architecture supports all applicable regulatory requirements, that the data governance plan addresses Article 10 obligations, and that the cybersecurity threat model aligns with Article 15 . The review should also verify that the version control and CI/CD design will produce compliance evidence as a byproduct of the engineering workflow, rather than leaving documentation as a retrospective exercise. Architectural decisions made at design time have downstream implications for the system's eventual decommissioning . Systems designed with clear infrastructure-as-code definitions, isolated credential namespaces, and modular data storage are substantially easier to decommission in a controlled and auditable manner. The architecture review should consider decommission-readiness as a non-functional requirement. Key outputs Signed architecture review record Confirmation that design satisfies the risk mitigation plan Phase 3: Eight-Layer Reference Architecture & Per-Layer Controls AISDP module(s): 3 (Architecture and Design), 7 (Human Oversight) Regulatory basis: Articles 9, 12, 13, 14, 15; Annex IV(2) The reference architecture structures a high-risk AI system as eight layers, each providing specific compensating protections. Layer 1 (Data Ingestion) enforces schema validation, input range enforcement based on training data distributions, prohibited feature blocking as a hard technical control, and data minimisation for GDPR compliance. Distribution monitoring at this layer computes real-time summary statistics against the training baseline. Layer 2 (Feature Engineering) maintains training-serving consistency through feature stores and a single computation specification, monitors feature distributions against the training baseline, and maintains a feature registry with proxy variable flags and justifications. Layer 3 (Model Inference) enforces model version pinning with cryptographic hash verification, confidence thresholding (below-threshold cases routed to human review), and output constraint enforcement using schema validation. Layer 4 (Post-Processing) applies documented business rules with override logging, monitors threshold stability, and re-evaluates fairness on production data with periodic threshold recalibration. Layer 5 (Explainability) generates explanations using methods appropriate to the model (SHAP, LIME, GradCAM, attention), validates explanation fidelity against model sensitivity, and provides audience-appropriate abstraction for operators and affected persons. Layer 6 (Human Oversight Interface) enforces mandatory review workflows that prevent auto-acceptance, deploys automation bias countermeasures (data-first display, dwell time enforcement, calibration cases), captures override rationale, and monitors override rates and sub-60-second review times. Layer 7 (Logging and Audit) captures immutable, append-only records with cryptographic hash chains across nine event types, supports log-based drift detection, and provides on-demand regulatory export in NCA-specified formats. Layer 8 (Monitoring) operates intent alignment dashboards comparing real-time metrics against AISDP thresholds, performs statistical anomaly detection with severity-based escalation, and monitors five drift dimensions: input, output, fairness, error, and override. Key outputs Per-layer architecture specification (AISDP Module 3) Per-layer control documentation --- ## Phase 4 — Development & Testing URL: https://docs.standardintelligence.com/phase-4--development-and-testing Breadcrumb: Getting Started › Delivery Process › Phase 4 Last updated: 28 Feb 2026 Phase 4: Development & Testing — Owner & Outputs AISDP module(s): 2 (Development Process), 4 ( Data Governance ), 5 (Testing and Validation), 9 (Robustness and Cybersecurity), 10 (Record-Keeping) Regulatory basis: Articles 9–15; Annex IV (2–3) Phase 4 runs during Weeks 6 to 18. The Engineering Team , led by the Technical Owner, owns this phase. The objective is to build the system in accordance with the approved architecture, generating compliance evidence as a natural byproduct of the engineering workflow. Development follows version-controlled code, model, and data artefacts. The CI/CD pipeline enforces quality gates at every commit: static analysis (including AI-specific rules), unit testing for every component type, contract testing between services, dependency and licence scanning, and secret detection. Data engineering follows the pre-step/post-step capture methodology, documenting each transformation before execution and verifying it afterwards. Dataset documentation is maintained continuously as datasets are assembled, cleaned, and transformed. Model training, validation, and testing follow the documented methodology, with performance, fairness, robustness, and calibration metrics computed and recorded. The model validation gate blocks promotion of any model that fails AISDP-declared thresholds. The human oversight interface is developed with automation bias countermeasures, mandatory review workflows, and override capability. The explainability layer is implemented with fidelity validation. Cybersecurity testing runs throughout: SAST and DAST in the CI pipeline, dependency scanning, container image scanning, and infrastructure-as-code scanning. Adversarial ML testing covers adversarial examples, data poisoning simulations, and prompt injection testing where applicable. Key outputs Version-controlled code, model artefacts, and dataset versions Automated test reports (unit, integration, regression, fairness, robustness) Auto-generated model card s Data quality reports and training pipeline logs Cybersecurity scan results and remediation records Feature registry with proxy variable assessments Phase 4: Governance Gate (Sprint-Level Compliance Review) AISDP module(s): All (incremental) Regulatory basis: Articles 9–15 Phase 4 employs multiple governance gates operating at different cadences rather than a single end-of-phase gate. An automated model validation gate blocks any model that fails performance, fairness, or robustness thresholds. A manual security review gate applies for the first deployment. The integration test suite must pass before any promotion. The recommended approach is embedding compliance activities in the sprint cadence itself. Each sprint should include updating the relevant AISDP modules for any design decisions made during the sprint, running the full test suite (including fairness and robustness gates) as part of the sprint's definition of done, reviewing any new risks identified during development and adding them to the risk register , and updating the evidence pack with artefacts produced during the sprint. The sprint retrospective should include a compliance dimension: what evidence was generated, what gaps remain, and what risks were introduced. This approach ensures that the AISDP is assembled incrementally throughout development. Module 1 (System Identity) is completed during Phase 1. Module 6 (Risk Management) is drafted during Phase 2 and updated continuously. Module 3 (Architecture) is populated during Phase 3 and refined as the architecture evolves. Module 4 (Data Governance) grows as the data engineering work progresses. By the time Phase 5 arrives, the AISDP should be substantially complete, requiring only final review and consistency checking. Feature flags that enable new model versions, data sources, or decision pathways are themselves system changes that the AI System Assessor assesses against the substantial modification thresholds. Feature flag configurations are version-controlled, and activation events are logged in the deployment ledger . Key outputs Sprint-level compliance review records Incrementally assembled AISDP modules Feature flag configuration and activation logs Phase 4: CI/CD Gates & Incremental AISDP Population AISDP module(s): 2 (Development Process), 5 (Testing and Validation), 9 (Robustness and Cybersecurity), 10 (Record-Keeping) Regulatory basis: Articles 9, 10, 12, 15; Annex IV(2–3) The CI/CD pipeline is the mechanism through which compliance evidence is produced as a byproduct of development. For AI systems, CI/CD extends beyond traditional software pipelines to operate on multiple artefact types (code, data, models, configurations) with multiple interconnected build processes. A compliance-grade pipeline defines discrete, auditable stages. Data preparation ingests from documented sources, applies quality checks, and produces a versioned dataset. Feature engineering transforms the dataset with lineage captured at each step. Model training records all metadata: duration, resource consumption, convergence metrics, random seed. Model evaluation computes all performance, fairness, robustness, and calibration metrics declared in the AISDP; the stage fails if any metric breaches its declared threshold. Four model validation gates enforce compliance boundaries. Gate 1 (Performance) verifies that accuracy, precision, recall, and other metrics meet AISDP-declared thresholds. Gate 2 (Fairness) evaluates selection rate ratios, equalised odds, and calibration across protected characteristic subgroups. Gate 3 (Robustness) tests resilience to adversarial examples and input perturbation. Gate 4 (Drift) compares the candidate model's behaviour against the production and baseline models. Any gate failure halts the pipeline. The pipeline definition itself is a compliance artefact, version-controlled alongside code and configuration. Changes to the pipeline definition constitute changes to the development process documented in AISDP Module 2 . Each pipeline stage should be idempotent (same inputs produce same outputs), observable (emitting structured logs and metrics), and recoverable (resumable from the failed stage without re-executing completed stages). Pipeline orchestration tools such as Apache Airflow, Kubeflow Pipelines, Dagster, or Prefect manage dependencies and sequencing. Key outputs Pipeline execution records and metadata Gate pass/fail evidence Auto-generated model cards and evaluation reports Versioned pipeline definition --- ## Phase 5 — Pre-Deployment Validation URL: https://docs.standardintelligence.com/phase-5--pre-deployment-validation Breadcrumb: Getting Started › Delivery Process › Phase 5 Last updated: 28 Feb 2026 Phase 5: Pre-Deployment Validation — Owner & Outputs AISDP module(s): All (compilation and assessment) Regulatory basis: Articles 8–15, 17, 43; Annex IV ; Annex VI Phase 5 runs during Weeks 16 to 20. The Conformity Assessment Coordinator owns this phase, with support from the AI System Assessor and Technical SME. The objective is to validate the complete system in a production-representative environment and compile the AISDP. The system is deployed to staging, where end-to-end inference tests, regression tests, and chaos/fault injection tests are executed against production-representative data. Performance, fairness, and robustness metrics are computed and compared against AISDP-declared thresholds. The AISDP is compiled from the artefacts produced during development. Each module is populated from corresponding engineering artefacts rather than written from scratch. The Conformity Assessment Coordinator reviews each module for completeness and consistency. The internal conformity assessment (Annex VI) is then conducted in three workstreams : a QMS assessment verifying compliance with Article 17 per Annex VI(a), a technical documentation assessment examining the AISDP to assess whether the system complies with Articles 8 through 15 per Annex VI(b), and a consistency assessment tracing from the AISDP to source artefacts. Non-conformities are recorded and remediated. The operational oversight framework is also established during this phase. Monitoring infrastructure is configured, alerting thresholds are set, escalation procedures are documented, break-glass procedures are tested, and operator training is completed. Key outputs Complete AISDP (all twelve modules) Internal conformity assessment report Non-Conformity Register (all items resolved or accepted with rationale) Assessment evidence register Operational oversight readiness confirmation Operator training and certification records Phase 5: Governance Gate (Critical NC Resolution) AISDP module(s): All Regulatory basis: Articles 8–15, 43, 47; Annex V; Annex VI Phase 5 concludes with a governance gate: the AI Governance Lead reviews the assessment report and non-conformity register before signing the Declaration of Conformity . Non-conformities are classified by severity. A critical non-conformity means the system cannot be placed on the market until it is resolved. A major non-conformity allows continued operation under a defined remediation timeline; the Technical SME must resolve it or secure an approved remediation plan with a timeline that does not extend beyond deployment. A minor non-conformity is an improvement opportunity that does not block deployment. Each non-conformity carries a root cause analysis, a corrective action plan, an assigned owner, a deadline, and a verification step confirming that the corrective action was effective. The AI Governance Lead must confirm that all critical non-conformities have been resolved and that major non-conformities have approved remediation plans before signing the Declaration of Conformity. The Declaration is a legally binding statement issued under Article 47, containing the eight elements specified in Annex V: provider identity, system identification, a statement of sole responsibility, a conformity statement citing the AI Act and any other applicable Union law, a statement of GDPR and data protection compliance where personal data is processed, references to relevant harmonised standards or common specifications used, notified body details where applicable, and the signatory's name, function, date, and signature. Where harmonised standards under Article 40 have not yet been published for a given requirement, common specifications adopted under Article 41 may be applied instead; the Declaration should reference whichever instruments were used. Signing carries material legal consequences. The Declaration must be retained for ten years after the system is placed on the market. Key outputs Signed Declaration of Conformity Fully resolved Non-Conformity Register Phase 5: Three Assessment Workstreams AISDP module(s): All (assessment) Regulatory basis: Article 43; Annex VI The internal conformity assessment under Annex VI proceeds through three workstreams, typically sequenced across five execution phases. Workstream 1: QMS Assessment. Verifies that all Article 17 elements are operational. The assessor examines document control , change management , non-conformity management, and continual improvement mechanisms. The QMS is the organisational framework that ties all technical controls into a governed, auditable process. ISO/IEC 42001:2023 (AI Management System) provides the most directly relevant framework, with a control set that aligns with the EU AI Act's requirements. Workstream 2: Technical Documentation Assessment. Examines the AISDP against Articles 8 through 15 and Annex IV. For each requirement, the assessor records the evidence demonstrating compliance, the determination (conformant, non-conformant, or partially conformant), and any conditions or recommendations. The assessment checklist should be granular; each sub-requirement of each Article should be a separate item with its own evidence requirement. Workstream 3: Consistency Assessment. Traces from the AISDP to source artefacts. The assessor verifies that each referenced artefact exists, is accessible, is the correct version, and supports the claim it is cited for. Live system verification examines whether the deployed system's behaviour matches the documentation. Stakeholder interviews with the Technical SME, Business Owner, and Operators verify architecture, testing, deployment, intended purpose, and override capability. Assessment documentation is distinct from the AISDP itself. It comprises the assessment plan, the structured assessment checklist, the evidence register, the Non-Conformity Register, and the formal assessment report. A pre-assessment readiness review determines whether the system is mature enough to undergo assessment, avoiding the demoralising cycle of premature assessment, mass non-conformity, and re-assessment. provide detailed guidance on readiness criteria and the pre-assessment review process. Key outputs QMS assessment findings Technical documentation assessment findings Consistency assessment findings Assessment report with overall determination --- ## Phase 6 — Registration & Deployment URL: https://docs.standardintelligence.com/phase-6--registration-and-deployment Breadcrumb: Getting Started › Delivery Process › Phase 6 Last updated: 28 Feb 2026 Phase 6: Registration & Deployment — Owner & Outputs AISDP module(s): 8 (Transparency and User Information), 12 ( Post-Market Monitoring and Change History) Regulatory basis: Articles 13, 48, 49, 71; Annex VIII Phase 6 runs during Weeks 20 to 22. The Conformity Assessment Coordinator owns this phase. The objective is to register the system in the EU database, affix the CE marking, and deploy to production. The Coordinator registers the provider and system in the EU database under Article 71, submitting the Annex VIII information. For multi-jurisdiction deployment s, registration information reflects all deployment member states. Systems in sensitive domains (law enforcement, migration, border control) are registered in the secure, non-public section. The CE marking is affixed to the user interface and accompanying documentation. Deployment follows the CI/CD pipeline 's compliance controls: staging validation, canary or shadow deployment, a human approval gate, and deployment logging. The AI Governance Lead reviews validation results and authorises production deployment for the initial release. The deployment event is recorded in the immutable deployment ledger . Deployers are provided with the Instructions for Use ( Article 13 ), which include the system's intended purpose, capabilities and limitations, performance characteristics, human oversight requirements, and maintenance obligations. These instructions must be specific enough that deployers understand the residual risk s they are inheriting and the compensating controls they should apply. Key outputs EU database registration confirmation CE marking evidence (screenshots in UI and documentation) Deployment ledger entry Deployer communication records (Instructions for Use) Signed Declaration of Conformity (filed with the AISDP) Phase 6: Governance Gate (Registration & CE Verification) AISDP module(s): 8 (Transparency), 12 (Post-Market Monitoring) Regulatory basis: Articles 48, 49, 71 Phase 6 concludes with a three-part governance gate. First, EU database registration must be confirmed. The Conformity Assessment Coordinator verifies that the registration is complete and accurate, covering all deployment jurisdictions. Second, CE marking is verified. The marking must be visible, legible, and indelible on the system's user interface and documentation, and must comply with Article 48 's requirements. Third, production deployment must be authorised and logged. Registration under Article 71 is a mandatory precondition for placing a high-risk system on the market. The database is publicly accessible (with the exception of the restricted section for sensitive domains), and the information submitted becomes a matter of public record. Inaccurate registration data is a Tier 3 violation under the penalty framework, carrying fines of up to EUR 7.5 million or 1% of turnover. CE marking under Article 48 is the provider's visible declaration that the system conforms to all applicable requirements. It is affixed only after the conformity assessment is complete and the Declaration of Conformity is signed. Affixing the CE marking without a completed conformity assessment is itself a non-compliance finding. The deployment authorisation confirms that the system deployed to production matches the system that passed conformity assessment. The immutable deployment ledger provides the traceability link between the assessed artefacts and the production deployment. Key outputs Verified EU database registration CE marking verification record Authorised production deployment record Phase 6: Per-Jurisdiction Deployment Checklist AISDP module(s): 8 (Transparency), 12 (Post-Market Monitoring) Regulatory basis: Articles 13, 48, 49, 71, 73 Organisations deploying across multiple member states face coordination challenges that compound with each additional jurisdiction. Although the AI Act is a regulation applying uniformly, the implementation ecosystem (competent authorities, guidance documents, inspection practices, enforcement priorities) varies by member state. For each deployment jurisdiction, the following actions are required before deployment. The Legal and Regulatory Advisor identifies the national competent authorit y and market surveillance authority, reviews jurisdiction-specific guidance for conflicts with the existing compliance posture, and translates the Declaration of Conformity if the member state requires it. The Conformity Assessment Coordinator translates Instructions for Use into the member state's official language and verifies that EU database registration covers the new jurisdiction. The AI Governance Lead pre-identifies the serious incident reporting channel for the member state. At deployment, the AI Governance Lead briefs deployers in the new jurisdiction on their Article 26 obligations, and the Legal and Regulatory Advisor adds the jurisdiction to the quarterly guidance monitoring cycle. Additional considerations include pre-translating incident report templates where authorities require the national language, confirming data residency and sovereignty compliance for the jurisdiction, and managing divergent interpretive guidance across member states. A single internal coordination point, typically the Conformity Assessment Coordinator, maintains a register of all relevant authorities across deployment jurisdictions. This register captures the competent authority, market surveillance authority, data protection authority, sector-specific regulators, published guidance, and preferred communication channels for each member state. Key outputs Per-jurisdiction deployment checklist (completed) Jurisdiction register Translated Instructions for Use and Declarations of Conformity --- ## Phase 7 — Operational Monitoring URL: https://docs.standardintelligence.com/phase-7--operational-monitoring Breadcrumb: Getting Started › Delivery Process › Phase 7 Last updated: 28 Feb 2026 Phase 7: Operational Monitoring — Owner & Outputs AISDP module(s): 12 (Post-Market Monitoring and Change History) Regulatory basis: Articles 72, 73; Annex IV (4) Phase 7 is ongoing and begins at deployment. The AI Governance Lead owns this phase. The post-market monitoring system operates continuously, collecting metrics across five dimensions: performance, fairness, data drift , operational health, and human oversight. Alerts are generated and triaged according to the severity framework described above, which define three tiers. The AI Governance Lead convenes quarterly PMM review meetings examining monitoring trends, operator escalation patterns, deployer feedback, complaint volumes, and the non-conformity register . The Internal Audit Assurance Lead conducts an annual oversight audit, testing monitoring infrastructure, escalation pathways, break-glass procedures , training currency, and non-retaliation commitments. Serious incidents are detected, triaged, reported, investigated, and remediated in accordance with the Article 73 process, with evidence preserved and systems left unaltered prior to authority notification. System changes flow through the version control and CI/CD framework. Each change is assessed against the substantial modification thresholds. Changes crossing the threshold trigger a new conformity assessment cycle (returning to Phase 5). Changes below the threshold are documented in the AISDP change history ( Module 12 ). Regulatory developments are monitored by the Legal and Regulatory Advisor and assessed for impact. The AISDP is maintained as a living document; each material change creates a new version, and the version history demonstrates continuous compliance discipline. Key outputs Monthly PMM reports Quarterly review meeting minutes and action items Annual oversight audit report Serious incident reports (as required, within mandated timelines) AISDP version updates Updated risk register entries Regulatory horizon scanning summaries Phase 7: Feedback Loop (PMM → Decision → Action → Validation → AISDP) AISDP module(s): 6 (Risk Management System), 12 (Post-Market Monitoring) Regulatory basis: Article 72 The PMM feedback loop is the operational mechanism that ensures monitoring findings translate into system improvements. Its value depends entirely on execution; findings that accumulate in dashboards without triggering action represent a compliance failure. The loop follows a defined cycle: a PMM finding (alert, report, or deployer feedback) is identified; a decision authority determines the appropriate action; engineering implements the fix; validation gates confirm the fix is effective; the AISDP is updated; and the evidence pack records the complete cycle as a traceable record. Decision authority is tiered by impact. Threshold adjustments can be authorised by the Technical SME. Model retraining on updated data requires Technical Owner authorisation with notice to the AI Governance Lead. Architecture changes or hyperparameter shifts require AI Governance Lead approval and a substantial change assessment. System suspension or withdrawal requires AI Governance Lead sign-off with immediate notice to the Legal and Regulatory Advisor and affected deployers. PMM-triggered remediation competes with feature development and other engineering priorities. Organisations should establish a PMM action backlog separate from the general engineering backlog. Critical actions (compliance threshold breaches, serious incident corrective actions) override all other engineering work. Warning-level actions are scheduled within the next development sprint. The feedback loop is itself monitored through meta-metrics: time from finding to decision, time from decision to completed fix, the share of findings resulting in system changes versus those accepted as within tolerance, and the share of fixes that successfully resolve the originating finding. A feedback loop with a median response time of six months is materially different from one with a median of two weeks, and the difference directly affects the organisation's ability to maintain compliance under Article 72. Key outputs Feedback loop cycle records (per finding) PMM action backlog Feedback loop meta-metrics dashboard --- ## Roles URL: https://docs.standardintelligence.com/roles Breadcrumb: Getting Started › Workflow › Roles Last updated: 28 Feb 2026 Roles AISDP module(s): All (cross-cutting) Regulatory basis: Articles 9, 11, 17, 43; Annex VI AISDP preparation requires clearly assigned roles with documented responsibilities. Ten roles are defined across this documentation, though smaller organisations may combine them provided responsibilities are explicitly allocated. The AI Governance Lead holds ultimate accountability: reviewing and approving the AISDP, signing the Declaration of Conformity , managing competent authority relationships, and holding authority to compel remediation or halt deployment. The AI System Assessor handles discovery, classification, risk assessment , and AISDP compilation, combining regulatory and technical understanding. The Technical SME is the subject-matter expert for the system's technical design, data, and operational behaviour, providing engineering evidence across architecture, model evaluation, data governance , and testing. The Technical Owner (typically an engineering lead) ensures that design, implementation, and testing satisfy Articles 9 through 15. The Business Owner (product manager or business unit head) ensures that intended purpose, deployment context, and human oversight measures are correctly documented. The Conformity Assessment Coordinator manages the end-to-end certification workflow, non-conformity register , Declaration of Conformity preparation, and EU database registration . The Legal and Regulatory Advisor reviews evidence for legal sufficiency and advises on novel or ambiguous requirements. The DPO Liaison confirms consistency between data governance documentation and GDPR obligations. The Internal Audit Assurance Lead provides independent verification that the certification process was followed correctly and that evidence is complete and authentic. The Classification Reviewer independently reviews the AI System Assessor's risk tier determination for each system, providing a structural safeguard against classification bias; disagreements are escalated to the AI Governance Lead. Organisational scale determines team composition. Small organisations (5 to 10 AI systems) may combine the Assessor and Conformity Assessment Coordinator roles, with legal, DPO, and audit support on a consultancy basis. Medium organisations (10 to 30 systems) field a dedicated governance team. Large enterprises (30+ systems) operate a full AI Compliance Office with domain-organised assessors and embedded legal and audit functions. Key outputs Role assignment register for each AI system Documented responsibility matrix --- ## Steps URL: https://docs.standardintelligence.com/steps Breadcrumb: Getting Started › Workflow › Steps Last updated: 28 Feb 2026 Steps AISDP module(s): All (cross-cutting) Regulatory basis: Articles 8–15 (collectively) The AISDP preparation follows a seven-phase delivery workflow spanning approximately 20 to 28 weeks for a medium-complexity high-risk system. Phases overlap; risk assessment informs architecture, which informs development, and development may begin before risk assessment is fully complete. Phase 1: Discovery and Classification (Weeks 1–3). Determine scope, classify risk tier, produce the Classification Decision Record . Phase 2: Risk Assessment and FRIA (Weeks 2–6). Conduct the five-method risk identification , establish the risk register, perform the Fundamental Rights Impact Assessment . Phase 3: Architecture and Design (Weeks 4–8). Design the system architecture informed by the risk assessment, select the model approach, establish data governance and version control frameworks. Phase 4: Development and Testing (Weeks 6–18). Build the system with compliance evidence generated as a byproduct of the engineering workflow. Phase 5: Pre-Deployment Validation (Weeks 16–20). Validate the complete system, compile the AISDP, conduct internal conformity assessment . Phase 6: Registration and Deployment (Weeks 20–22). Register in the EU database , affix CE marking , deploy to production. Phase 7: Operational Monitoring (Ongoing). Maintain the system's compliance posture through continuous monitoring, periodic review, and responsive action. Each phase produces defined artefacts and concludes with a governance gate that must be passed before proceeding. The timeline assumes that foundational infrastructure (version control, CI/CD, monitoring) is in place before the system-specific workflow commences. Factors that increase effort include third-party GPAI models with limited disclosures (add 3 to 6 weeks), brownfield system s (add 4 to 10 weeks), biometric identification requiring third-party assessment (add 6 to 12 weeks), and organisations without existing compliance infrastructure (add 8 to 16 weeks). Fully loaded costs for a medium-complexity system range from EUR 150,000 to EUR 400,000 for initial preparation, with annual ongoing costs of EUR 50,000 to EUR 150,000. Key outputs Phase-gated delivery plan Resource and timeline estimate --- ## Workflow URL: https://docs.standardintelligence.com/workflow Breadcrumb: Getting Started › Workflow Last updated: 28 Feb 2026 The workflow section defines who does what, in what order, with what data, and to what end. Roles describes the ten governance and technical roles. Domain expertise maps the specialist knowledge required. Steps outlines the workflow sequence. Data flow (Article 8) traces information through the process. Outcomes specifies the deliverables. ℹ This section provides the operational framework. It should be read before the Delivery Process section. --- # Development --- ## Access Control — CI/CD Promotes; No Manual Promotion URL: https://docs.standardintelligence.com/access-control-cicd-promotes-no-manual-promotion Breadcrumb: Development › Version Control › Model Registry › Access Control — CI/CD Promotes; No Manual Promotion Last updated: 28 Feb 2026 Access Control — CI/CD Promotes; No Manual Promotion AISDP module(s): Module 10 (Record-Keeping), Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 12 , Article 15 For high-risk systems, only the CI/CD pipeline should be able to promote a model to the production stage. Manual promotion is prohibited because it bypasses the automated validation gates and creates a pathway for untested models to reach production. This access control is enforced at the registry level. The CI/CD pipeline authenticates with a service account that has the specific permission to transition models from staging to production. Human users, including administrators, do not have this permission. If a human needs to intervene (for example, to roll back to a previous version in an emergency), the rollback is itself a governed event that triggers the validation pipeline and is logged. The access control configuration should be auditable: the registry's access control settings, the service account's permissions, and the authentication mechanism are all documented and verifiable. Penetration testing should specifically test whether manual promotion paths exist that could circumvent the CI/CD pipeline. Key outputs Registry access control restricting production promotion to CI/CD service accounts Prohibition of manual promotion with documented enforcement mechanism Emergency rollback procedure with governance logging Module 10 and Module 9 AISDP documentation --- ## AI-Specific Custom Rules — Hardcoded Threshold Detection URL: https://docs.standardintelligence.com/ai-specific-custom-rules-hardcoded-threshold-detection Breadcrumb: Development › CI › CD Pipelines › Static Analysis › AI-Specific Custom Rules — Hardcoded Threshold Detection Last updated: 28 Feb 2026 AI-Specific Custom Rules — Hardcoded Threshold Detection AISDP module(s): Module 10 (Record-Keeping), Module 3 (Architecture and Design) Regulatory basis: Article 12 , Article 3(23) Hardcoded thresholds, such as if score > 0.65 embedded directly in code, undermine version control and change tracking. When a threshold is embedded in application code, changing it requires a code change that may be reviewed as a software fix rather than as a compliance-relevant configuration change. The threshold's history becomes entangled with the code's history, making it difficult to isolate threshold changes for substantial modification assessment. The Semgrep rule for hardcoded threshold detection flags magic numbers in decision logic, matching patterns where a score or prediction variable is compared against a literal float value. The rule directs the developer to define the threshold in a version-controlled configuration file, where threshold changes are tracked independently and subject to their own governance review. This rule supports the broader principle that all compliance-relevant parameters should be externally configurable, version-controlled, and subject to explicit governance. A hardcoded threshold that is changed in a large code commit may escape the scrutiny it deserves; a threshold change in a dedicated configuration file is visible, reviewable, and assessable against the substantial modification framework. Key outputs Semgrep rule for hardcoded threshold detection Developer guidance directing thresholds to configuration files Integration with pre-commit hooks and CI pipeline Module 10 and Module 3 documentation --- ## AI-Specific Custom Rules — Missing Logging Detection (Art. 12) URL: https://docs.standardintelligence.com/ai-specific-custom-rules-missing-logging-detection-art-12 Breadcrumb: Development › CI › CD Pipelines › Static Analysis › AI-Specific Custom Rules — Missing Logging Detection (Art. 12) Last updated: 28 Feb 2026 AI-Specific Custom Rules — Missing Logging Detection (Art. 12) AISDP module(s): Module 10 (Record-Keeping) Regulatory basis: Article 12 Article 12 requires automatic recording of events during the system's operation. Any inference code path that can execute without emitting a log event is a compliance gap. The missing logging detection rule flags inference code paths that do not call the logging instrumentation. The rule identifies function definitions or code blocks within the inference pipeline that lack calls to the tracing or logging framework (OpenTelemetry span creation, structured log emission, or equivalent). A flag does not necessarily mean the logging is absent; it may mean the logging is implemented at a different layer (for example, through framework-level instrumentation rather than application-level calls). The flag triggers a review to confirm that logging coverage is complete. Combined with the comprehensive event coverage requirement described above, this rule ensures that logging gaps are detected during development rather than discovered during a regulatory inspection. The rule configuration and its findings are retained as Module 10 evidence, demonstrating that the organisation actively verifies logging completeness as part of its development process. Key outputs Semgrep rule for missing logging detection in inference paths Integration with pre-commit hooks and CI pipeline Review process for flagged code paths Module 10 AISDP evidence --- ## AI-Specific Custom Rules — Model Registry Bypass Detection URL: https://docs.standardintelligence.com/ai-specific-custom-rules-model-registry-bypass-detection Breadcrumb: Development › CI › CD Pipelines › Static Analysis › AI-Specific Custom Rules — Model Registry Bypass Detection Last updated: 28 Feb 2026 AI-Specific Custom Rules — Model Registry Bypass Detection AISDP module(s): Module 10 (Record-Keeping), Module 3 (Architecture and Design) Regulatory basis: Article 12 Direct model file loading, such as torch.load('model.pt') or joblib.load('model.pkl') , bypasses the model registry entirely and breaks the traceability chain. If a model is loaded directly from the file system rather than through the registry, the composite version identifier may not reflect the actual model being served, the version pinning control is circumvented, and the integrity verification (hash check at load time) is skipped. The Semgrep rule for model registry bypass detection flags direct loading function calls for common ML frameworks: torch.load , joblib.load , pickle.load , tf.saved_model.load , and keras.models.load_model . The rule produces an error-level finding (not merely a warning), because registry bypass is a structural compliance risk rather than a stylistic concern. The rule directs the developer to load models through the registry client (for example, mlflow.pyfunc.load_model ), which ensures that the model is loaded from the registry, the version is recorded, and the integrity hash is verified. This control is version-controlled in the Semgrep configuration and enforced in both pre-commit hooks and the CI pipeline. Key outputs Semgrep rule for model registry bypass detection (error severity) Developer guidance directing model loading through the registry client Integration with pre-commit hooks and CI pipeline Module 10 and Module 3 documentation --- ## AI-Specific Custom Rules (Semgrep) — Demographic Feature Flagging URL: https://docs.standardintelligence.com/ai-specific-custom-rules-semgrep-demographic-feature Breadcrumb: Development › CI › CD Pipelines › Static Analysis › AI-Specific Custom Rules (Semgrep) — Demographic Feature Flagging Last updated: 28 Feb 2026 AI-Specific Custom Rules (Semgrep) — Demographic Feature Flagging AISDP module(s): Module 4 (Data Governance), Module 6 (Risk Management System) Regulatory basis: Article 10 , Article 9 Standard linting tools do not catch AI-specific compliance risks. Custom static analysis rules, implemented in Semgrep or equivalent, flag coding patterns that are permissible in general software but problematic in high-risk AI systems. The first such rule category flags the direct use of protected characteristic columns (gender, age, ethnicity, disability status) in feature engineering or model training code. The flag does not mean the code is wrong; it means the use requires documented justification in the feature registry and approval through the CODEOWNERS mechanism. The Semgrep rule pattern matches direct column access on protected characteristic names and produces a warning referencing the relevant AISDP section. This automated flagging ensures that no use of demographic features enters the codebase without triggering a review. It converts what would otherwise be a procedural expectation ("developers should flag demographic feature use") into a technical control that fires consistently, regardless of whether the developer remembers the policy. The rule configuration is version-controlled in the repository and referenced in the AISDP as part of the data governance controls. Key outputs Semgrep rule for demographic feature use detection Integration with pre-commit hooks and CI pipeline Linkage to CODEOWNERS review for flagged code Module 4 and Module 6 AISDP documentation --- ## Architecture Artefacts URL: https://docs.standardintelligence.com/architecture-artefacts Breadcrumb: Development › Architectures › Artefacts Last updated: 28 Feb 2026 Statement of Business Intent (Signed) System Architecture Document (C4 Diagrams) Data Flow & Deployment Diagrams Dependency Maps Per-Layer Control Specifications Human Oversight Interface Specification --- ## Auditability URL: https://docs.standardintelligence.com/auditability Breadcrumb: Development › Model Selection › Compliance Criteria Scoring › Auditability Last updated: 28 Feb 2026 Auditability AISDP module(s): 3, 10 Regulatory basis: Article 12 Auditability asks whether the model produces outputs that can be logged, traced, and attributed in accordance with Article 12. Can individual decisions be reconstructed from the logs? Models that require only the input and the model version for output reconstruction are strongly auditable: the audit trail is compact, and any decision can be verified by replaying the input through the documented model version. Models where the output depends on runtime conditions (session state, conversation history, retrieval-augmented generation context) require more sophisticated logging. The assessment specifies what must be logged for the candidate architecture to achieve auditability. For RAG-based systems, auditability requires logging not only the input and output, but also the retrieved documents, their relevance scores, and the prompt assembled from the retrieved context. For agentic systems, the entire chain of reasoning and action must be captured. The logging payload size and storage requirements should be estimated as part of the assessment. The auditability score reflects the ease with which individual decisions can be reconstructed and the completeness of the audit trail that the architecture naturally supports. Key outputs Auditability score per candidate model Logging requirements specification --- ## Automated Documentation URL: https://docs.standardintelligence.com/automated-documentation Breadcrumb: Development › CI › CD Pipelines › Automated Documentation Last updated: 28 Feb 2026 Model Cards (Per Build) AISDP module(s): Module 5 (Testing and Validation), Module 3 (Architecture and Design) Regulatory basis: Annex IV (2), Annex IV(3) Model cards are auto-generated from the evaluation metrics stored in the experiment tracker and model registry as part of the CI pipeline's documentation generation stage. When a new model version is registered, the pipeline generates a model card containing the model's architecture summary, training data version, evaluation metrics disaggregated by subgroup, intended use statement, and known limitations. The model card template is version-controlled and maintained by the Conformity Assessment Coordinator. Template changes require review to ensure continued alignment with AISDP module requirements and Annex IV expectations. Each generated model card references the template version used in its creation. Google's Model Cards Toolkit provides an established framework for this generation. The data that populates the card is drawn from the pipeline's own outputs (evaluation metrics, registry metadata, configuration values), ensuring that the model card always reflects the actual model. Auto-generation eliminates the risk of model cards being written from memory weeks after training, which introduces inaccuracies. The generated model card is stored as a pipeline artefact with a ten-year retention policy. Key outputs Auto-generated model card per model version Version-controlled template with governance review for changes Pipeline artefact storage with ten-year retention Module 5 and Module 3 AISDP evidence Test Reports (Per Build) AISDP module(s): Module 5 (Testing and Validation) Regulatory basis: Annex IV(3) Test reports covering all unit, integration, regression, fairness, and robustness tests are auto-generated as part of each CI pipeline execution. The report aggregates the results from every test category and the model validation gate s into a single, navigable document. The report should present results at two levels: a summary view showing pass/fail status per test category (suitable for the AI Governance Lead 's review), and a detailed view showing individual test results, failure messages, and metric values (suitable for the Technical SME's investigation). The report carries a timestamp, the pipeline execution ID, the composite version identifier, and the data version used for evaluation. The currency mechanism ensures that each auto-generated report carries a source reference linking it to the specific pipeline run that produced it. A governance dashboard should display, for each AISDP module, the date of the last auto-generated update and the date of the last human review. Modules where the auto-generated content is newer than the last human review are flagged for attention. Key outputs Auto-generated test report per pipeline execution Summary and detail views for different audiences Timestamp, pipeline execution ID, and composite version linkage Module 5 AISDP evidence SBOMs (CycloneDX/SPDX, ML-Specific Components) AISDP module(s): Module 9 (Robustness and Cybersecurity), Module 3 (Architecture and Design) Regulatory basis: Article 15 , Annex IV(2) Software Bills of Materials (SBOMs) list all third-party dependencies with versions and licence information. For AI systems, the SBOM must extend beyond traditional software dependencies to include ML-specific components: model framework versions, pre-trained model provenance, dataset processing library versions, and any third-party model APIs or embedding services. CycloneDX and SPDX are the two standard formats for SBOMs. The SBOM is auto-generated as part of the CI pipeline using tools such as Syft, CycloneDX CLI, or SPDX tools. The generated SBOM provides the foundation for dependency scanning and licence compliance scanning, ensuring that the vulnerability and licence assessments operate on an accurate and complete inventory. The SBOM is retained as both Module 3 evidence (documenting the system's technical composition) and Module 9 evidence (supporting the cybersecurity assessment). Each deployment should reference the SBOM version that corresponds to the deployed container image. Over the system's lifecycle, the sequence of SBOMs provides a history of the system's dependency evolution, useful for investigating supply chain incidents and for demonstrating proactive dependency management. Key outputs Auto-generated SBOM per pipeline build (CycloneDX or SPDX format) ML-specific component inclusion (frameworks, pre-trained models, APIs) SBOM linkage to the deployed container image version Module 3 and Module 9 AISDP evidence AISDP Section Updates AISDP module(s): Module 10 (Record-Keeping), all affected modules Regulatory basis: Annex IV The CI pipeline should generate draft updates to the AISDP modules most affected by a model or system change. Using Jinja2 templates or equivalent templating engines, the pipeline populates draft AISDP module sections from pipeline metadata. A Module 5 draft, for instance, draws from the model registry for model architecture and training configuration, the experiment tracker for evaluation metrics, the fairness evaluation for subgroup metrics, and the deployment ledger for deployment date and version. The draft is not the final AISDP; it is a starting point that the AI Governance Lead reviews, augments with narrative context, and approves. This review-and-augment workflow reduces the human effort from writing documentation from scratch to verifying and enriching an already-accurate draft. The generated drafts carry the pipeline execution ID and timestamp, linking them to the specific build that produced them. Documentation quality assurance should verify four properties: completeness (no missing sections or empty fields), accuracy (metrics match the raw evaluation data), consistency (terminology aligns with AISDP style conventions), and currency (the generation timestamp matches the artefact it describes). A quarterly manual review of auto-generated documentation against the source data catches systematic generation errors. Key outputs Auto-generated AISDP module drafts per pipeline execution Version-controlled Jinja2 templates maintained by the Conformity Assessment Coordinator AI Governance Lead review and approval workflow Module 10 documentation --- ## Automated Test Reports URL: https://docs.standardintelligence.com/automated-test-reports Breadcrumb: Development › CI › CD Pipelines › Artefacts › Automated Test Reports Last updated: 28 Feb 2026 Automated Test Reports AISDP module(s): Module 5 (Testing and Validation) Regulatory basis: Annex IV (3) This artefact comprises the complete set of auto-generated test reports produced by the CI pipeline across the system's lifecycle. Each report aggregates unit test results, integration and end-to-end test results, and model validation gate results into a single, timestamped document. The collection of test reports provides longitudinal evidence of the system's quality trajectory. An assessor can review reports from successive pipeline runs to observe how the system's performance, fairness, and robustness metrics have evolved over time. Trends in the test results, such as gradually declining performance or intermittent fairness gate failures, are visible in the report history. Each report is linked to its pipeline execution ID, the composite version identifier, and the data version used for evaluation. Reports are stored as pipeline artefacts with the ten-year retention policy mandated by Article 18 . The Conformity Assessment Coordinator maintains an index of reports, enabling rapid retrieval for conformity assessment, market surveillance , or incident investigation. Key outputs Complete test report archive across the system's lifecycle Per-report linkage to pipeline execution, version, and data Ten-year retention with indexed retrieval Module 5 AISDP evidence --- ## Bias Detectability URL: https://docs.standardintelligence.com/bias-detectability Breadcrumb: Development › Model Selection › Compliance Criteria Scoring › Bias Detectability Last updated: 28 Feb 2026 Bias Detectability AISDP module(s): 3, 4 Regulatory basis: Article 10 Bias detectability asks whether fairness metrics can be computed at the subgroup level, whether the model can be interrogated for proxy variable effects, and whether the architecture supports fairness-aware training or post-hoc calibration. The assessment determines whether the candidate architecture supports feature attribution methods (SHAP, LIME, integrated gradients) that can identify proxy variable effects. Models with calibrated probability score outputs are more amenable to fairness analysis than models producing only ranked outputs or categorical labels, since probability scores enable threshold-based fairness metrics such as equalised odds and calibration within groups. For ensemble methods, SHAP values provide strong proxy variable detection at the individual prediction level. For deep neural networks, feature attribution is possible through KernelSHAP or DeepSHAP, though with lower precision. For LLMs, bias detection typically relies on benchmarking across demographic categories in the test set rather than per-prediction feature attribution; the assessment specifies which bias detection methodologies are applicable and their limitations. The score reflects the combined strength of proxy variable detection, disaggregated fairness evaluation, and the availability of fairness-aware training or post-hoc calibration methods for the candidate architecture. Key outputs Bias detectability score per candidate model Applicable fairness evaluation methodology --- ## Bias Evaluation Reports (Pre-Training & Post-Training) URL: https://docs.standardintelligence.com/bias-evaluation-reports-pre-training-and-post-training Breadcrumb: Development › Data Governance › Artefacts › Bias Evaluation Reports (Pre-Training & Post-Training) Last updated: 28 Feb 2026 Bias Evaluation Reports (Pre-Training & Post-Training) AISDP module(s): 4 ( Data Governance and Dataset Documentation ), 5 (Testing and Validation) Regulatory basis: Article 10(2)(f); Article 9 Bias Evaluation Reports consolidate the complete fairness assessment chain: pre-training analysis and post-training evaluation. They present the end-to-end bias story for the system, from data examination through model evaluation to fairness concept prioritisation. The pre-training section includes the distributional analysis, label bias assessment, proxy variable detection, and intersectional analysis. The post-training section includes the five fairness metrics (selection rate ratio, equalised odds, predictive parity, calibration within groups, counterfactual fairness), the threshold compliance status for each metric, and the fairness concept prioritisation decision. The report also includes the bias mitigation section: which techniques were applied, their measured effectiveness, the residual bias after mitigation, and the compensating controls. The report concludes with the AI Governance Lead 's acceptance of any residual bias risk. This artefact is the central fairness evidence document for AISDP Modules 4 and 5. It is reviewed during conformity assessment and should be retrievable if a competent authority or notified body requests evidence of the organisation's fairness practices. Key outputs Bias Evaluation Report (combined pre-training and post-training) Mitigation effectiveness assessment AI Governance Lead residual risk acceptance --- ## Data Lineage & Version Control URL: https://docs.standardintelligence.com/bias-mitigation--data-lineage-and-version-control Breadcrumb: Development › Data Governance › Bias Mitigation › Data Lineage & Version Control Last updated: 28 Feb 2026 Data Lineage & Version Control --- ## Bias Mitigation URL: https://docs.standardintelligence.com/bias-mitigation Breadcrumb: Development › Data Governance › Bias Mitigation Last updated: 28 Feb 2026 Pre-Processing Techniques (Oversampling, Undersampling, Reweighting, Synthetic Data) In-Processing Techniques (Fairness Constraints, Adversarial Debiasing, Invariant Representations) Post-Processing Techniques (Threshold Calibration, Score Adjustment, Reject Option) Compensating Controls — Mandatory Human Review & Enhanced Monitoring Compensating Controls — Deployment Restrictions & Residual Bias Acceptance Data Lineage & Version Control --- ## Calibration Within Groups — Reliability Diagrams URL: https://docs.standardintelligence.com/calibration-within-groups-reliability-diagrams Breadcrumb: Development › Data Governance › Post-Training Bias Evaluation › Calibration Within Groups — Reliability Diagrams Last updated: 28 Feb 2026 Calibration Within Groups — Reliability Diagrams AISDP module(s): 4 ( Data Governance and Dataset Documentation ), 5 (Testing and Validation) Regulatory basis: Article 10(2)(f); Article 9 Calibration within groups tests whether the model's confidence scores carry consistent meaning across protected subgroups. If the model assigns a 70% probability to applicants from one group and those applicants are indeed successful 70% of the time, the model is well-calibrated for that group. If the same 70% probability corresponds to only 55% actual success in another group, the model is poorly calibrated, and operators relying on the confidence score will be systematically misled for that subgroup. Reliability diagrams are the standard visualisation tool. They plot predicted probability against observed frequency, with a separate curve for each subgroup. A perfectly calibrated model produces a diagonal line. Deviations from the diagonal indicate miscalibration: overconfident predictions (where predicted probability exceeds actual frequency) or underconfident predictions (the reverse). Fairlearn and AI Fairness 360 both support per-subgroup calibration analysis. The AISDP includes the reliability diagrams as visual evidence, alongside the numerical calibration metrics: Brier score decomposition (reliability, resolution, uncertainty) per subgroup, and the maximum calibration error across all subgroups. Calibration is particularly important for systems where the confidence score is presented to operators as part of the oversight interface. If operators use the confidence score to decide how much scrutiny to apply to a recommendation, miscalibration for specific subgroups means those groups receive inappropriate levels of oversight, undermining the Article 14 human oversight framework. Key outputs Reliability diagrams per protected subgroup Calibration metrics (Brier score decomposition, maximum calibration error) Calibration impact assessment for operator oversight --- ## Chaos & Fault Injection Testing (Gremlin, Litmus) — Graceful Degradation URL: https://docs.standardintelligence.com/chaos-and-fault-injection-testing-gremlin-litmus-graceful Breadcrumb: Development › CI › CD Pipelines › Integration Testing › Chaos & Fault Injection Testing (Gremlin, Litmus) — Graceful Degradation Last updated: 28 Feb 2026 Chaos & Fault Injection Testing (Gremlin, Litmus) — Graceful Degradation AISDP module(s): Module 9 (Robustness and Cybersecurity), Module 5 (Testing and Validation) Regulatory basis: Article 15 Chaos and fault injection tests simulate failures at each layer of the system (data source unavailable, model serving timeout, post-processing misconfiguration, network partition) to verify that the system degrades gracefully. Graceful degradation means no data loss, no silent accuracy degradation, proper error handling, correct logging of the failure event, and activation of failsafe mechanisms. Gremlin, Litmus, and Chaos Monkey provide infrastructure for injecting failures in a controlled manner. The tests should cover pod crashes, network partitions between services, dependency outages ( model registry unavailable, logging backend down), and resource exhaustion (CPU, memory, disk). Each test verifies that the system's behaviour under failure matches the failsafe behaviour documented in the disaster recovery plan. Chaos testing is conducted before every major release and periodically in production during controlled, off-peak windows. The test results are retained as Module 5 and Module 9 evidence. A system that fails ungracefully, producing incorrect outputs without error indication, represents a compliance risk for high-risk systems where every inference affects an individual's rights. Key outputs Fault injection test scenarios covering each architectural layer Graceful degradation verification (error handling, failsafe activation, logging) Pre-release and periodic production chaos testing schedule Module 5 and Module 9 AISDP evidence --- ## CI/CD Artefacts URL: https://docs.standardintelligence.com/cicd-artefacts Breadcrumb: Development › CI › CD Pipelines › Artefacts Last updated: 28 Feb 2026 Automated Test Reports Model Cards SBOMs Security Scan Results & Remediation Records Deployment Ledger Entries Exception Approval Records --- ## CI/CD Pipelines URL: https://docs.standardintelligence.com/cicd-pipelines Breadcrumb: Development › CI › CD Pipelines (S.7) Last updated: 28 Feb 2026 The CI/CD pipeline for a high-risk AI system enforces compliance at every stage, from static analysis through deployment. Static analysis extends conventional linting with AI-specific Semgrep rules that detect demographic feature handling violations, hardcoded thresholds, missing logging, and model registry bypasses. Unit testing covers every layer of the eight-layer reference architecture , from data pipeline boundary cases through explainability coverage and human oversight interface bypass prevention. Integration testing validates end-to-end inference paths, regression against golden datasets, and system resilience under load and fault injection. Model validation gate s enforce four non-negotiable quality checks: performance, fairness, robustness, and documentation completeness. Automated documentation generates model cards, test reports, SBOMs, and AISDP section updates per build. Compliance-gated deployment requires all four gates plus human approval before production promotion, with canary or shadow deployment phases and immutable deployment ledger entries. The section concludes with the artefacts produced. ℹ This section corresponds to the CI/CD Pipelines section and feeds primarily into AISDP Module 2 (Development Process) and Module 5 (Testing and Validation). --- ## Code Version Control URL: https://docs.standardintelligence.com/code-version-control Breadcrumb: Development › Version Control › Code Version Control Last updated: 28 Feb 2026 Git Repository Management & Branch Protection AISDP module(s): Module 10 (Record-Keeping) Regulatory basis: Article 12 Compliance-grade version control requires that every version is immutable once committed, attributable to a named individual with a verified identity, timestamped from a trusted source, and retrievable for the full ten-year retention period. These requirements shape the repository management and branch protection configuration. The main branch, from which production deployments are made, must be protected. Direct commits are prohibited; all changes flow through a pull request workflow requiring at least one reviewer who was not the author, automated CI pipeline success before the merge is permitted, and AI Governance Lead approval for changes affecting fairness metrics, model architecture, intended purpose, or any AISDP-documented parameter. Branch protection rules are enforced at the repository platform level (GitHub, GitLab, or Bitbucket), not merely by convention. Signed commits (GPG or SSH) cryptographically bind each commit to a verified identity, providing assurance that the change history is authentic. This matters for accountability under the QMS and for incident investigation. Write access follows the principle of least privilege: data scientists access model training code, data engineers access pipeline code, and infrastructure engineers access IaC definitions. Administrative access is restricted to designated repository administrators, and all administrative actions are logged. Key outputs Branch protection configuration (review requirements, CI gates, signed commits) Access control matrix following least-privilege principles Administrative action logging Module 10 AISDP documentation Mandatory Code Review AISDP module(s): Module 10 (Record-Keeping), Module 2 (Development Process) Regulatory basis: Article 12, Annex IV (2) Every change to the AI system's codebase must pass through a mandatory code review before reaching the main branch. The review requirement is enforced by the branch protection rules described in : a pull request cannot be merged without at least one approved review from a qualified reviewer. Code review serves multiple compliance functions. It provides a second pair of eyes on changes that may affect the system's behaviour, fairness characteristics, or security posture. It creates a documented record of the review (the reviewer's identity, comments, and approval decision) that forms part of the Module 10 audit trail. It also creates an opportunity for the AI Governance Lead or their delegate to assess whether a change has compliance implications that require further analysis. The review process should be structured to ensure that reviewers have the context they need. Pull request descriptions should explain the purpose of the change, its expected impact on the system's behaviour, and any AISDP modules affected. For changes that touch fairness-sensitive code (feature engineering, thresholds, post-processing rules), the review should specifically address the fairness implications and reference any updated evaluation results. Key outputs Mandatory code review enforced via branch protection Pull request template with compliance-relevant fields Review records retained as Module 10 and Module 2 evidence CODEOWNERS Enforcement (Fairness Code, Thresholds, Feature Engineering) AISDP module(s): Module 10 (Record-Keeping) Regulatory basis: Article 12, Article 9 CODEOWNERS files add a compliance-specific layer to the code review process by designating which roles must review changes to particular file paths. This is a machine-enforced control: a pull request that modifies a fairness-sensitive path cannot be merged until the designated reviewer approves it. A reference CODEOWNERS configuration is provided below. AISDP threshold configurations and protected attribute definitions require AI Governance Lead approval. Model source code requires both AI Governance Lead and Technical SME review. Data handling code requires Technical SME and DPO Liaison review. Infrastructure and Kubernetes configurations require Technical SME and platform team review. CI/CD pipeline definitions and security policies require security team and Technical SME review. The CODEOWNERS file transforms what would otherwise be a procedural expectation ("fairness code should be reviewed by the governance lead") into an architectural constraint ("the platform will not permit a merge until the governance lead approves"). This distinction matters for conformity assessment : an assessor can verify that the constraint is in place and that the platform enforces it, providing stronger assurance than a policy document that may or may not be followed in practice. Key outputs CODEOWNERS file mapping paths to required reviewers Platform configuration enforcing CODEOWNERS on protected branches Evidence of enforcement (blocked merges, approval records) Module 10 AISDP documentation --- ## Commercial APIs — Contractual Terms & SLAs URL: https://docs.standardintelligence.com/commercial-apis-contractual-terms-and-slas Breadcrumb: Development › Model Selection › Model Origin Risk › Commercial APIs — Contractual Terms & SLAs Last updated: 28 Feb 2026 Commercial APIs — Contractual Terms & SLAs AISDP module(s): 3, 9 Regulatory basis: Articles 11, 15; Annex IV (2) Models licensed from commercial API providers present a different risk profile from open-source components. The provider may refuse to disclose training data composition, model architecture details, or fairness evaluation results, citing trade secrets. This creates documentation gaps in the AISDP that must be addressed. The vendor due diligence questionnaire (prepared during Phase 1) should capture the provider's willingness to supply the information required by Annex IV. Where disclosures are insufficient, the AI System Assessor records the gaps as non-conformities and assesses whether the organisation can compensate through its own testing and evaluation of the model's outputs. The Article 25 (3) information request framework provides the legal basis for requesting specific information from GPAI providers. Contractual terms carry compliance implications. Service level agreements should address availability, latency, and throughput guarantees relevant to the system's operational requirements under Article 15 . Terms of service may grant the provider broad data usage rights, limit the provider's liability, or disclaim responsibility for downstream use. The Legal and Regulatory Advisor reviews these terms and assesses the resulting gap in risk allocation. The AISDP documents the provider's contractual commitments, the organisation's assessment of their adequacy, and the residual risk s where contractual protections are insufficient. Change notification commitments are particularly important: if the provider may silently update the model within a version identifier, the organisation faces uncontrolled behavioural drift that undermines the AISDP's traceability . Key outputs Vendor due diligence questionnaire responses Contractual terms analysis Documentation gap assessment with compensating controls --- ## Commercial APIs — Provider Data Handling & Geographic Considerations URL: https://docs.standardintelligence.com/commercial-apis-provider-data-handling-and-geographic Breadcrumb: Development › Model Selection › Model Origin Risk › Commercial APIs — Provider Data Handling & Geographic Considerations Last updated: 28 Feb 2026 Commercial APIs — Provider Data Handling & Geographic Considerations AISDP module(s): 3, 4, 9 Regulatory basis: Articles 10, 15; GDPR Many commercial AI API providers collect data from their customers' usage. This may include the inputs submitted, the outputs generated, usage patterns and metadata, and feedback signals. The AISDP must document these practices and the controls applied to manage the resulting risks. Data collection risks include the provider using customer data to improve its own models, potentially incorporating the organisation's proprietary data and personal data of affected individuals into the provider's training corpus. The provider's retention and processing practices may conflict with GDPR requirements. Aggregated or anonymised data may be shared with third parties. Module 3 records the provider's data collection practices, the data processing agreement in place, measures taken to prevent personal data leakage (such as pseudonymisation of inputs before API calls), and residual risk s. Where the organisation processes personal data of EU residents through an API hosted outside the EU, the GDPR data transfer implications must be assessed; the DPO Liaison should verify the lawful basis for any cross-border data transfer. Geographic considerations extend beyond data handling to model behaviour. Models trained predominantly on data from a particular jurisdiction may perform poorly when applied to EU populations. The Technical SME should evaluate the model's performance across EU member state populations where the system will be deployed and document any geographic performance variations in the AISDP. Infrastructure hosting arrangements also matter. If the model's inference infrastructure is hosted outside the EU, the risk that foreign governments could compel access under their domestic laws (the US CLOUD Act, China's National Intelligence Law) must be assessed and documented in Module 9 . Key outputs Provider data handling assessment Data processing agreement review Geographic performance evaluation results Infrastructure sovereignty assessment --- ## Compensating Controls — Deployment Restrictions & Residual Bias Acceptance URL: https://docs.standardintelligence.com/compensating-controls-deployment-restrictions-and-residual Breadcrumb: Development › Data Governance › Bias Mitigation › Compensating Controls — Deployment Restrictions & Residual Bias Acceptance Last updated: 28 Feb 2026 Compensating Controls — Deployment Restrictions & Residual Bias Acceptance AISDP module(s): 4 ( Data Governance and Dataset Documentation ), 6 (Risk Management System) Regulatory basis: Article 10(2)(f); Article 9 Where neither mitigation techniques nor mandatory human review fully address the identified bias, the AISDP documents the residual bias and the deployment restrictions or risk acceptance applied. Deployment restrictions limit the system's use to contexts where the residual bias is acceptable. A system that performs unfairly for a specific geographic population may be restricted to deployment only in regions where the data is representative. A system with insufficient data for reliable fairness assessment on a particular intersectional subgroup may be restricted to advisory use only (with mandatory human decision-making) for cases involving that subgroup. Restrictions are documented in the Instructions for Use (AISDP Module 8 ) so that deployers understand the system's limitations. Residual bias acceptance is a formal decision by the AI Governance Lead , documented with a signed acceptance record. The record specifies the nature of the residual bias (which subgroups, which metrics, what magnitude), the mitigations attempted and their measured effectiveness, the compensating controls in place, the residual risk level, and the conditions under which the acceptance would be revisited (such as availability of additional data, new mitigation techniques, or changes in the deployment context). The residual bias acceptance is not a permanent decision. It is revisited at each scheduled risk review and whenever post-market monitoring reveals material changes in the fairness profile. If the residual bias worsens, or if new mitigation options become available, the acceptance is reassessed. Key outputs Deployment restriction specification (where applicable) Residual bias acceptance record (signed by AI Governance Lead) Review schedule and reassessment triggers --- ## Compensating Controls — Mandatory Human Review & Enhanced Monitoring URL: https://docs.standardintelligence.com/compensating-controls-mandatory-human-review-and-enhanced Breadcrumb: Development › Data Governance › Bias Mitigation › Compensating Controls — Mandatory Human Review & Enhanced Monitoring Last updated: 28 Feb 2026 Compensating Controls — Mandatory Human Review & Enhanced Monitoring AISDP module(s): 4 ( Data Governance and Dataset Documentation ), 7 (Human Oversight) Regulatory basis: Article 10(2)(f); Article 14 When bias mitigation techniques do not fully eliminate identified bias, compensating controls are applied. Mandatory human review and enhanced monitoring are the primary compensating controls for residual bias. Mandatory human review requires that all decisions affecting members of disadvantaged subgroups are reviewed by a human operator before being actioned. This control is most effective when the operator has access to the system's explanation, the affected person's complete file, and clear guidelines for when to override the system's recommendation. The AISDP documents the subgroups subject to mandatory review, the review criteria, the operator qualifications and training, and the override rate monitoring. Enhanced monitoring tracks fairness metrics for the disadvantaged subgroup at a higher frequency than standard post-market monitoring . Where the standard PMM cycle may review fairness metrics quarterly, enhanced monitoring may compute fairness metrics weekly or even daily, with automated alerts when metrics breach the declared thresholds. The AISDP documents the enhanced monitoring configuration: metrics tracked, frequency, alert thresholds, escalation procedures, and the responsible role. Both controls impose operational cost. Mandatory human review creates a capacity requirement; enhanced monitoring creates a data infrastructure requirement. These costs are factored into the system's operational planning and documented in the AISDP alongside the controls themselves. Key outputs Mandatory human review specification (subgroups, criteria, operator requirements) Enhanced monitoring configuration Operational cost assessment --- ## Completeness Assessment URL: https://docs.standardintelligence.com/completeness-assessment Breadcrumb: Development › Data Governance › Completeness Assessment Last updated: 28 Feb 2026 Population Representativeness AISDP module(s): 4 ( Data Governance and Dataset Documentation ) Regulatory basis: Article 10(3) Population representativeness asks whether the training, validation, and testing datasets adequately represent the full range of individuals on whom the system will operate. Article 10(3) requires that datasets be "sufficiently representative," and this requirement demands a structured assessment rather than a general assertion. The Technical SME defines the deployment population: the specific demographic, geographic, and contextual characteristics of the people the system will serve or affect. The definition is derived from the system's intended purpose (AISDP Module 1 ) and the conditions of use. A recruitment screening system intended for use by employers across the EU/EEA has a deployment population spanning all EU/EEA member states and the demographic diversity within them. A medical diagnostic system intended for use in a specific hospital has a narrower deployment population defined by the hospital's patient demographics. The representativeness assessment compares the dataset's composition against the deployment population across all measured dimensions: demographic subgroups, geographic regions, temporal periods, and any domain-specific stratification relevant to the system's purpose. Statistical tests (chi-squared for categorical distributions, Kolmogorov-Smirnov for continuous distributions) quantify the alignment between data composition and deployment population. Where the representativeness assessment reveals gaps, the compensating controls documented in Article 65 (synthetic data augmentation, transfer learning, stratified sampling, deployment restrictions) apply. The assessment results, including the statistical test outputs and the compensating control specifications, are retained as Module 4 evidence. Key outputs Population representativeness assessment Statistical comparison of dataset composition against deployment population Compensating control specifications for identified gaps Underrepresented Subgroup Identification AISDP module(s): 4 (Data Governance and Dataset Documentation) Regulatory basis: Article 10(2)(f), 10(3) The population representativeness assessment identifies gaps at the aggregate level. Underrepresented subgroup identification drills into specific groups that are most at risk of inadequate model performance, extending the analysis to intersectional combinations. The Technical SME examines each protected characteristic subgroup's representation in the training data, computing the ratio of the subgroup's dataset proportion to its deployment population proportion. Subgroups with ratios substantially below 1.0 are flagged. The analysis then extends to intersectional combinations: female applicants over 55, disabled applicants from ethnic minority backgrounds, and so on. These intersectional subgroups frequently have critically small cell sizes even when each individual characteristic is adequately represented. Cell size thresholds determine when reliable analysis is possible. A common threshold is 30 instances for basic performance metrics, with 100 or more needed for reliable fairness metrics with meaningful confidence intervals. Subgroups below these thresholds are documented as data-insufficient, and the AISDP states this limitation rather than reporting unreliable metrics. For each underrepresented subgroup, the documentation records the group definition, the current representation level, the deployment population proportion (where available), the impact on model performance (if measurable with the available data), and the mitigation applied (oversampling, synthetic augmentation, model selection favouring architectures with lower data requirements, or deployment restrictions limiting use for the affected population). Key outputs Underrepresented subgroup register Cell size analysis for intersectional subgroups Mitigation strategy per underrepresented subgroup Pre-Training Bias Assessment --- ## Compliance Criteria Scoring URL: https://docs.standardintelligence.com/compliance-criteria-scoring Breadcrumb: Development › Model Selection › Compliance Criteria Scoring Last updated: 28 Feb 2026 Six compliance criteria are scored against each candidate model architecture during selection. Together they form the quantitative basis of the Model Selection Record and the Compliance Criteria Scoring Matrix. Documentability Testability Auditability Bias Detectability Maintainability Determinism --- ## Compliance-Gated Deployment URL: https://docs.standardintelligence.com/compliance-gated-deployment Breadcrumb: Development › CI › CD Pipelines › Compliance-Gated Deployment Last updated: 28 Feb 2026 All Four Gates Passed Requirement AISDP module(s): Module 5 (Testing and Validation), Module 2 (Development Process) Regulatory basis: Article 15 No model may be deployed to production without passing all four validation gates: performance, fairness, robustness, and documentation. This requirement is enforced architecturally through the CI/CD pipeline , not merely by policy. The deployment step is gated by a policy engine (OPA/Rego or equivalent) that verifies all four gate results before allowing the deployment to proceed. The gate architecture is layered and sequential. Performance runs first, because a model that fails basic performance is not worth evaluating for fairness or robustness. Fairness runs second, because a model that passes performance but fails fairness is rejected regardless of its robustness characteristics. Robustness runs third. The documentation gate runs last. If any gate fails, execution halts and no subsequent gates run. A reference OPA policy is provided ( deployment_compliance.rego ) that encodes this requirement. The policy also verifies that the AISDP version in the deployment matches the model's assessed version, that human approval has been recorded within the last 48 hours, and that staging tests have passed on the exact version being deployed. Deny reasons are generated for debugging failed deployments. Key outputs CI/CD pipeline enforcing sequential four-gate passage OPA/Rego policy encoding all deployment prerequisites Deny-reason generation for failed deployment attempts Module 5 and Module 2 AISDP evidence Human Approval for Production Promotion AISDP module(s): Module 7 (Human Oversight), Module 2 (Development Process) Regulatory basis: Article 14 Article 14's human oversight requirement extends to the deployment decision itself. Deployment of high-risk AI systems cannot be fully automated. The pipeline pauses at the human approval step, presenting the deployment's metadata (model version, validation gate results, staging test results) to the designated approver. For routine releases, the Technical SME is the designated approver. For releases affecting fairness metrics, the model architecture, or the intended purpose, the AI Governance Lead approves. The approval is logged with the approver's identity, timestamp, the evidence reviewed, and the composite version identifier being deployed. Rejection halts the pipeline and logs the rejection reason. GitHub Actions manual approval, GitLab manual jobs, and Jenkins input steps all support this pattern. The approval log is retained as part of the AISDP evidence pack and feeds into the deployment ledger. The OPA deployment policy verifies that approval has been recorded within the last 48 hours, preventing stale approvals from being used for deployments that occurred long after the review. Key outputs Human approval gate in the deployment pipeline Role-based approver designation (Technical SME or AI Governance Lead) Approval logging with identity, timestamp, evidence, and version Module 7 and Module 2 AISDP evidence Canary or Shadow Deployment Phase AISDP module(s): Module 2 (Development Process), Module 12 (Post-Market Monitoring) Regulatory basis: Article 15, Article 72 Progressive delivery reduces the blast radius of a deployment that causes problems despite passing staging validation. In a canary deployment, the new version receives a small percentage of production traffic (typically 1–5%) while the existing version handles the remainder. Automated analysis compares the canary's metrics against the existing version. If the metrics diverge beyond a threshold, the canary is automatically rolled back. If the metrics are acceptable, the canary's traffic share is gradually increased until the new version handles 100% of traffic. Argo Rollouts and Flagger automate this process on Kubernetes, with configurable analysis steps and automatic rollback. Shadow deployment is more conservative: the new version processes production data but its outputs are not delivered to users, allowing evaluation on real data without risk to affected persons. Shadow deployment is particularly valuable for initial deployments of high-risk systems, where the consequences of an error are severe and confidence in staging validation is limited. The canary percentage, canary duration, analysis metrics, and rollback criteria are defined by the Technical Owner in the deployment policy and documented in the AISDP. The canary or shadow analysis results are retained as Module 12 evidence. Key outputs Canary or shadow deployment configuration Automated metric comparison and rollback triggers Deployment policy documenting canary percentage, duration, and criteria Module 2 and Module 12 AISDP evidence Immutable Deployment Ledger Entry AISDP module(s): Module 10 (Record-Keeping), Module 12 (Post-Market Monitoring) Regulatory basis: Article 12 Every deployment event, including canary promotions, full rollouts, and rollbacks, is recorded in the immutable deployment ledger described above. The entry captures the deployment timestamp, the composite version deployed (model version, configuration version, code version), which service versions changed, the identity of the deployer and approver, the validation evidence (gate reports, staging results, canary analysis), and the deployment outcome. For GitOps deployments (ArgoCD, Flux), the deployment ledger is naturally produced through the Git workflow: every deployment change is a Git commit, providing an immutable audit trail. For non-GitOps deployments, the engineering team implements a custom append-only log using WORM storage (S3 Object Lock, Azure Immutable Blob Storage) or cryptographic hash chains. Rollback events are themselves recorded as deployment ledger entries, capturing the reason for the rollback, the version restored to, and any incident reference. The deployment ledger feeds directly into AISDP Module 10 (Record-Keeping) and Module 12 (Change History). Inspectors and notified bodies may request deployment ledger entries for specific time periods during market surveillance . Key outputs Immutable ledger entry per deployment event (including rollbacks) Composite version, approver, evidence, and outcome per entry GitOps audit trail or custom WORM-based implementation Module 10 and Module 12 AISDP evidence Failure Handling — Severity-Based Blocking & Exception Process AISDP module(s): Module 2 (Development Process), Module 6 (Risk Management System) Regulatory basis: Article 15 The Technical SME classifies test failures by severity to determine the appropriate response. Critical failures (any test that exercises a compliance-relevant property, such as fairness, human oversight bypass, or Article 12 logging completeness) block the pipeline unconditionally. No exception process applies to critical failures; the issue must be resolved before deployment can proceed. High-severity failures (end-to-end accuracy regression, latency threshold breach) block the pipeline unless the AI Governance Lead approves an exception with documented justification. The exception approval records the approver's identity, the justification, the compensating controls in place, and the conditions under which the exception expires. Medium-severity failures (non-critical UI tests, documentation formatting) generate warnings and are tracked in the non-conformity register but do not block deployment. This severity classification prevents two failure modes. Without it, every test failure blocks deployment equally, leading to compliance fatigue where teams lose urgency about genuine compliance failures because they are overwhelmed by minor issues. With excessively permissive handling, genuine compliance issues may be dismissed as low severity. The classification is documented in the AISDP and reviewed periodically to ensure it remains calibrated to the system's risk profile. Key outputs Severity classification (critical, high, medium) per test category Unconditional blocking for critical failures Exception approval process for high-severity failures Module 2 and Module 6 AISDP documentation --- ## Composite Versioning Scheme URL: https://docs.standardintelligence.com/composite-versioning-scheme Breadcrumb: Development › Version Control › Composite Versioning Scheme Last updated: 28 Feb 2026 Version Components (Code SHA, Data, Model, Config, Prompt) AISDP module(s): Module 10 (Record-Keeping), Module 12 ( Post-Market Monitoring ) Regulatory basis: Article 12 , Article 3(23) Traceability underpins the entire AISDP. The composite version identifier captures the specific combination of artefact versions that constitute the deployed system at any point in time. For a high-risk AI system, the composite version comprises five components: the code version (Git commit SHA), the data version (DVC, Delta Lake, or LakeFS reference), the model version ( model registry identifier), the configuration version (threshold values, feature flags, business rules), and, for systems using LLMs, the prompt version (system instructions and prompt templates). Each component has its own versioning mechanism and repository, but they are linked through cross-references. A model registry entry references the Git commit and the data version that produced it; a Git commit references the data version it was validated against. This linkage makes the version control "compliance-grade": given any deployed model version, the organisation can trace back to the exact code, data, configuration, and pipeline execution that produced it. Without a composite version identifier, the organisation cannot demonstrate which version of the system was deployed at any given time, what changed between versions, whether a change constitutes a substantial modification, or that the system assessed during conformity assessment is the same system deployed in production. The composite version is the version recorded in the AISDP, the EU database registration , and the Declaration of Conformity . Key outputs Composite version identifier schema Cross-reference linkage between code, data, model, config, and prompt repositories Module 10 and Module 12 documentation of the versioning scheme Inference Request Tagging with Composite ID AISDP module(s): Module 10 (Record-Keeping) Regulatory basis: Article 12 Every inference request processed by the system must be tagged with the composite version identifier at the point of execution. This tag is embedded in the log record and cannot be modified after the fact. From the composite ID, the full provenance chain is one lookup away: the model registry entry, the training data version, the code commit, and the pipeline execution that produced the model. The tag should be injected by the serving infrastructure, not by the model code. This ensures it cannot be accidentally omitted by a developer who forgets to include it. The serving framework (Triton, TensorFlow Serving, TorchServe, or a custom implementation) attaches the composite version to each request before it enters the inference pipeline, and the logging layer captures it as part of the structured trace. Inference request tagging enables incident investigation. When an adverse outcome is reported, the investigator retrieves the inference ID, extracts the composite version from the log, and queries the model registry, code repository, and data versioning system to reconstruct the complete provenance. OpenLineage with Marquez provides this as a standardised service; for simpler tooling, a provenance query script chaining lookups across Git, DVC, and MLflow achieves the same result. The Technical SME tests this query capability periodically and retains the results as Module 10 evidence. Key outputs Serving infrastructure configuration for composite ID injection Log schema including the composite version field Provenance query capability (OpenLineage/Marquez or custom script) Module 10 AISDP evidence --- ## Configuration & Prompt Versioning URL: https://docs.standardintelligence.com/configuration-and-prompt-versioning Breadcrumb: Development › Version Control › Configuration & Prompt Versioning Last updated: 28 Feb 2026 Decision Thresholds & Feature Flags AISDP module(s): Module 10 (Record-Keeping), Module 3 (Architecture and Design) Regulatory basis: Article 12 , Article 3(23) Decision thresholds and feature flags are configuration artefacts that materially affect the system's behaviour. A change to a decision threshold (for example, raising the shortlisting score from 65 to 70) alters which individuals are affected by the system's outputs. A feature flag that enables or disables a processing pathway changes the system's functional behaviour. Both types of change must be version-controlled with the same rigour as code and model changes. Configuration artefacts should be stored as configuration-as-code in version-controlled repositories, separate from application code. Each configuration change produces a new version with a timestamp, the identity of the author, and a description of the change. The configuration version forms one of the five components of the composite version identifier described above. Threshold changes are assessed against the substantial modification framework. A threshold change that shifts the system's fairness profile or significantly alters its selection rates may constitute a substantial modification, even though no code or model has changed. The AI Governance Lead reviews threshold changes before deployment, and the review record is retained as Module 10 evidence. Key outputs Version-controlled configuration files for thresholds and feature flags Change tracking with author identity, timestamp, and rationale Integration into the composite version identifier AI Governance Lead review records for threshold changes Business Rules AISDP module(s): Module 10 (Record-Keeping), Module 3 (Architecture and Design) Regulatory basis: Article 12, Article 9 Business rules applied in the post-processing layer are configuration artefacts that shape the system's outputs. Each rule modifies the model's raw output and changes the outcome that affected persons experience. Business rules must therefore be version-controlled, with each change tracked, reviewed, and assessable against the substantial modification thresholds. Like decision thresholds, business rules should be stored as configuration-as-code, separate from the application code that executes them. This separation ensures that rule changes are visible as discrete, reviewable events rather than buried within broader code changes. The configuration file records each rule's definition, its rationale, and its fairness impact assessment. Changes to business rules follow the same governance pathway as threshold changes. The Technical SME assesses the fairness impact of the proposed change, the AI Governance Lead reviews and approves, and the change is deployed through the CI/CD pipeline with the updated configuration version reflected in the composite version identifier. The complete history of business rule changes is retained as Module 10 and Module 12 evidence. Key outputs Version-controlled business rule configuration files Per-rule definition, rationale, and fairness impact assessment Governance review and approval records Module 10 and Module 12 AISDP documentation LLM Prompts & System Instructions AISDP module(s): Module 3 (Architecture and Design), Module 10 (Record-Keeping) Regulatory basis: Article 12, Article 3(23) For AI systems incorporating large language models, the system prompt is one of the most consequential design artefacts. It defines the model's behaviour: its constraints, persona, output format, safety boundaries, and domain-specific instructions. A change to the system prompt can alter the system's outputs as materially as a change to the model's weights. Despite this significance, system prompts are frequently managed outside formal version control. They may be edited by product managers without engineering review or stored in configuration files not subject to change control. This governance gap creates compliance risk: a prompt change that materially alters the system's behaviour may constitute a substantial modification under Article 3(23) without being assessed as such. System prompts for high-risk AI systems should be treated as version-controlled, governed artefacts. They are stored in Git or a dedicated prompt registry with equivalent versioning and audit logging. Every prompt change produces a new version with a timestamp, the author's identity, and a change description. Changes that could alter the system's intended purpose or compliance-relevant behaviour are assessed against the substantial modification thresholds. Changes should trigger re-execution of the sentinel dataset test to verify outputs remain within declared thresholds. Module 3 records the current prompt version and content; Module 10 records the prompt governance process and change history. Key outputs Version-controlled prompt artefacts in Git or a dedicated prompt registry Prompt change approval process (Technical SME and AI Governance Lead review) Sentinel test re-execution on prompt changes Module 3 and Module 10 AISDP documentation --- ## Contract Tests (Service-to-Service) URL: https://docs.standardintelligence.com/contract-tests-service-to-service Breadcrumb: Development › CI › CD Pipelines › Integration Testing › Contract Tests (Service-to-Service) Last updated: 28 Feb 2026 Contract Tests (Service-to-Service) AISDP module(s): Module 5 (Testing and Validation), Module 3 (Architecture and Design) Regulatory basis: Article 15 , Annex IV (3) Contract tests validate that each service's outputs conform to the expectations of its consumers. As described above, consumer-driven contract testing (Pact) and statistical contract testing (Great Expectations) detect silent breaking changes that integration testing may miss. In the CI/CD pipeline , contract tests run for every service change. A contract test failure blocks deployment. The tests validate data schemas (types, field names, required fields), value ranges and distributions (statistical contract testing), response latency and throughput, error handling and fallback behaviour, and format compliance for any regulatory data flows such as logging output destined for the audit layer. The contract test suite is version-controlled alongside the services and referenced in the AISDP as part of the quality management documentation. For microservice architectures, contract tests are the primary mechanism for ensuring that changes to one service do not silently degrade the behaviour of dependent services. The test results are retained as Module 5 evidence, and any contract violations are logged with the resolution actions taken. Key outputs Consumer-driven contracts (Pact) per service interface Statistical contracts (Great Expectations) per data interface CI pipeline integration with deployment blocking on failure Module 5 AISDP evidence --- ## Copyright & IP Exposure URL: https://docs.standardintelligence.com/copyright-and-ip-exposure Breadcrumb: Development › Model Selection › Copyright & IP Exposure Last updated: 28 Feb 2026 Training Data Copyright Assessment AISDP module(s): 3, 4 Regulatory basis: Article 53 (1)(c); Directive (EU) 2019/790 The training data used to develop AI models, particularly large language models and generative systems, may include copyrighted material. The legal landscape is evolving rapidly, with active litigation in multiple jurisdictions. For high-risk AI systems, the AISDP must document the copyright status of the training data. The assessment identifies whether the training data includes copyrighted text, images, audio, or other works and documents the legal basis relied upon: licence, consent, the text and data mining exception under Directive (EU) 2019/790, or another basis. It records the measures taken to identify and exclude material where rights holders have exercised an opt-out. Procedures for responding to copyright claims from rights holders are also documented. For systems incorporating pre-trained models from third parties, the organisation should obtain contractual representations regarding the copyright status of the model's training data. Where such representations are unavailable or qualified, the AI System Assessor records the risk in the risk register and assesses potential regulatory and reputational impact. The Code of Practice for GPAI providers under Article 56 includes copyright compliance commitments; however, the Code's content and signatory coverage continue to evolve. The downstream provider should cross-reference any Code of Practice commitments against the information actually received, and should not treat Code of Practice participation as a substitute for direct contractual representations where those can be obtained. Copyright risk is distinct from data protection risk. A dataset may be GDPR-compliant (no personal data, or personal data processed with lawful basis) yet still infringe copyright. The IP and Licensing Analysis artefact should address both dimensions. Key outputs Training data copyright assessment Legal basis documentation per data source Opt-out compliance records Rights holder response procedures Personal Data Consent Verification AISDP module(s): 4 Regulatory basis: Article 10 ; GDPR Articles 6, 9 Where training, validation, or testing data includes personal data, the AISDP must document the lawful basis for processing under GDPR Article 6. Consent under Article 6(1)(a) is one possible basis, though legitimate interests under Article 6(1)(f) or public interest under Article 6(1)(e) may be more appropriate depending on the context. The appropriate lawful basis for AI model training is an area of active regulatory debate across EU member states; data protection authorities have taken divergent positions on whether legitimate interests can support large-scale model training, and enforcement practice continues to evolve. The Legal and Regulatory Advisor and DPO Liaison should monitor developments and revisit the lawful basis determination if regulatory guidance shifts. The verification should confirm that the lawful basis is appropriate for the specific processing activity (model training may require a different basis from production inference), that data subjects were informed of the processing in accordance with GDPR Articles 13 and 14, that any consent obtained meets the GDPR's requirements for being freely given, specific, informed, and unambiguous, and that the organisation can demonstrate compliance (accountability under GDPR Article 5 (2)). For special category data (racial or ethnic origin, political opinions, religious beliefs, trade union membership, genetic data, biometric data, health data, sex life or sexual orientation), the stricter requirements of GDPR Article 9 apply. The AISDP must document the specific exemption relied upon under Article 9(2), which may include explicit consent or the substantial public interest exemption, alongside the safeguards applied. Where models are trained on data obtained from third parties, the organisation must verify the third party's data governance, including the lawful basis, the consent mechanisms, and the data processing agreements in place. address third-party data validation in detail. Key outputs Lawful basis documentation per dataset and processing activity Data subject notification records Third-party data governance verification records Special category data exemption documentation (where applicable) Residual IP Risk Documentation AISDP module(s): 3, 6 Regulatory basis: Articles 9, 11; Annex IV After completing copyright assessment, personal data consent verification, and licence compatibility review, residual intellectual property risks may remain. These risks aggregate across the system's model components, training data, and third-party dependencies. The IP and Licensing Analysis artefact consolidates these residual risk s. For each risk, the document records the source (copyright uncertainty in training data, ambiguous licence terms, provider refusal to disclose training data composition), the potential impact (regulatory sanctions, injunctive relief, reputational damage, deployment restrictions), the mitigations applied (contractual representations, copyright filtering, alternative data sourcing), and the residual risk rating. Residual IP risk is communicated to the AI Governance Lead for formal acceptance and may also need to be communicated to deployers through the Instructions for Use if it affects the deployer's own compliance position. The risk register entries for IP risk are subject to periodic review, particularly as the legal landscape around AI training data copyright evolves. Key outputs IP and Licensing Analysis (consolidated artefact) Residual IP risk register entries AI Governance Lead risk acceptance (where applicable) --- ## Counterfactual Fairness Testing URL: https://docs.standardintelligence.com/counterfactual-fairness-testing Breadcrumb: Development › Data Governance › Post-Training Bias Evaluation › Counterfactual Fairness Testing Last updated: 28 Feb 2026 Counterfactual Fairness Testing AISDP module(s): 4 ( Data Governance and Dataset Documentation ), 5 (Testing and Validation) Regulatory basis: Article 10(2)(f); Article 9 Counterfactual fairness is the most direct test of whether a model uses protected characteristics in its decisions. For individual predictions, the protected characteristic is changed (while all other features are held constant) and the model's output is observed. If flipping gender from male to female changes the prediction, the model is using gender or its proxies in the decision. The Technical SME applies counterfactual testing to a representative sample of the evaluation dataset. The proportion of predictions that change under counterfactual manipulation is reported, along with the direction and magnitude of the changes. Alibi Explain's counterfactual explanations are particularly useful for this analysis, as they answer the question "what would need to be different for the outcome to change?" in concrete, per-instance terms. Counterfactual testing is computationally tractable for tabular models where the protected characteristic is a discrete input feature. For models where the protected characteristic is entangled with other features (in text or image data where gender may be expressed through language patterns or visual features, not as a discrete input), the testing methodology becomes more complex. The AISDP documents the testing methodology, its applicability to the model architecture, and any limitations. The results are interpreted in context. A small proportion of changed predictions may be acceptable if the changes are small in magnitude and the model's overall fairness profile (as measured by the other metrics) is satisfactory. A large proportion of changed predictions, or changes concentrated in specific subgroups, indicates that the model is relying on protected characteristics and requires mitigation. Key outputs Counterfactual test methodology documentation Proportion and direction of changed predictions Subgroup-level analysis of counterfactual sensitivity --- ## Data Flow & Deployment Diagrams URL: https://docs.standardintelligence.com/data-flow-and-deployment-diagrams Breadcrumb: Development › Architectures › Artefacts › Data Flow & Deployment Diagrams Last updated: 28 Feb 2026 Data Flow & Deployment Diagrams AISDP module(s): Module 3 (Architecture and Design), Module 10 (Record-Keeping) Regulatory basis: Annex IV (2)(d), Article 12 The Data Flow Diagram traces the path of data through the system from ingestion to output, showing raw input data entering the system, validation and preprocessing steps, feature computation, model inference, post-processing and threshold application, explanation generation, output delivery to the human oversight interface, and logging at each stage. This diagram is essential for demonstrating Article 12 compliance and for enabling traceability analysis. The Deployment Diagram shows the physical or cloud infrastructure: the container orchestration platform, the cloud provider and region, node types and resource allocations, the network topology (VPC, subnets, security groups), and external service endpoints. It supports Annex IV's requirement to describe the hardware and software environment and feeds directly into the cybersecurity documentation in Module 9 . Both diagrams must use consistent notation (C4, UML, or ArchiMate) and be version-controlled. Infrastructure-as-code definitions (Terraform, Pulumi) can generate deployment diagrams automatically, ensuring the documentation remains current. The data flow diagram should annotate each stage with the logging events that are captured, demonstrating that the Article 12 traceability requirement is satisfied end-to-end. Key outputs Data Flow Diagram with logging annotations at each stage Deployment Diagram showing infrastructure topology Version-controlled diagram files Module 3 and Module 10 AISDP evidence --- ## Data Governance Artefacts URL: https://docs.standardintelligence.com/data-governance-artefacts Breadcrumb: Development › Data Governance › Artefacts Last updated: 28 Feb 2026 Dataset Documentation Cards Distributional Analysis Reports Bias Evaluation Reports (Pre-Training & Post-Training) Mitigation Effectiveness Assessments Data Lineage Records Feature Registry with Proxy Variable Assessments DPIA (Where Required) --- ## Data Governance URL: https://docs.standardintelligence.com/data-governance Breadcrumb: Development › Data Governance (S.4) Last updated: 28 Feb 2026 Data governance addresses the EU AI Act's Article 10 requirements for training, validation, and testing data. This section covers 43 articles across nine subsections, spanning the full data lifecycle from initial documentation through bias assessment, mitigation, lineage tracking, and specialised governance for RAG architectures. The subsections are organised to mirror the compliance workflow. Dataset documentation establishes provenance and composition. Completeness assessment evaluates representativeness. Pre-training bias assessment examines data for distributional imbalances, label bias, and proxy variables before any model is trained. Post-training bias evaluation applies five fairness metrics to the trained model's outputs. Bias mitigation documents the techniques applied and their effectiveness. Data lineage and version control ensure every transformation is traceable and every dataset version retrievable for ten years. Special category data handling addresses the Article 10(5) provisions for processing sensitive personal data in bias detection. RAG-specific governance extends Article 10's requirements to knowledge bases, embeddings, and multilingual performance. The section concludes with the artefacts produced. ℹ This section corresponds to the Data Governance section and feeds primarily into AISDP Module 4 (Data Governance and Dataset Documentation). --- ## Data Lineage & Version Control URL: https://docs.standardintelligence.com/data-lineage-and-version-control Breadcrumb: Development › Data Governance › Data Lineage & Version Control Last updated: 28 Feb 2026 Transformation Documentation (Pre-Step / Post-Step) AISDP module(s): 4 (Data Governance and Dataset Documentation ) Regulatory basis: Article 10(2); Annex IV (2)(d) Data lineage requires documenting every data engineering step with a pre-step record (captured before execution) and a post-step record (captured after execution). This methodology creates an audit trail demonstrating that data engineering was deliberate and considered. The pre-step record includes the input datasets referenced by version identifier, the intended transformation (what the step will do), the rationale for the transformation (what data quality, completeness, or fairness problem it addresses), the expected output characteristics (schema, record count, distribution properties), and the validation criteria to be applied to the output. The post-step record includes the actual output dataset referenced by version identifier, the actual output characteristics, a comparison against pre-step expectations noting any deviations and their explanation, the impact on data quality metrics, the impact on fairness-relevant distributions, and the identity and date of the person who executed the step. Data lineage operates at three levels of granularity. Pipeline-level lineage captures the macro view using DAG-based orchestration tools (Airflow, Prefect, Dagster). Transformation-level lineage captures the logic within each step, requiring version-controlled code (dbt for SQL, tracked Python scripts for other transforms). Column-level lineage tracks how each column in the output relates to columns in source datasets, which is essential for proxy variable analysis; OpenLineage and Marquez provide an open standard for emitting and collecting lineage events at all three levels. Great Expectations integrates naturally with the pre-step/post-step methodology: an expectation suite defines expected output characteristics, and the validation result (pass/fail with specifics) serves as the post-step record. Key outputs Pre-step and post-step records per data engineering step Lineage event records (pipeline, transformation, and column level) Deviation analysis documentation Data Versioning Tooling (DVC, Delta Lake, LakeFS, Manual Snapshots) AISDP module(s): 4 (Data Governance and Dataset Documentation) Regulatory basis: Article 10 ; Annex IV(2)(d) Every dataset used in the system's lifecycle must be versioned with an immutable identifier that allows the exact dataset to be retrieved at any future point. Dataset versions are linked to model versions so that the AISDP can state precisely which data was used to train each model version. DVC (Data Version Control) is the most widely adopted open-source tool. It extends Git to track large files and datasets, storing the data in a configured backend (S3, GCS, Azure Blob) while Git tracks the metadata and version pointers. DVC enables branch-based data experimentation and ensures that the data used for any model version is reproducible from the Git commit history. Delta Lake provides ACID transactions on top of data lakes, supporting time-travel queries that retrieve the exact dataset as it existed at any point in time. It is well-suited to Spark-based pipelines and large-scale data environments. LakeFS provides Git-like branching and versioning for data lakes, supporting isolated data experimentation without affecting production datasets. Cloud-native versioning (S3 object versioning, for example) provides a simpler alternative for smaller datasets. For organisations with manual or semi-automated data workflows, manual snapshots (timestamped copies of the dataset stored in a defined location with a version identifier) provide minimum viable versioning. The AISDP must reference the versioning mechanism, the storage location, the retention policy, and the access controls governing the versioned datasets. The choice of tooling should align with the system's data infrastructure and the team's capabilities. The AISDP documents the selected tool, the versioning scheme, and the integration with the model registry (so that model-to-data traceability is maintained). Key outputs Data versioning tool selection and configuration Versioning scheme documentation Model-to-data version linkage specification Ten-Year Retention Planning — Storage & Lifecycle Policies AISDP module(s): 4 (Data Governance and Dataset Documentation) Regulatory basis: Article 18 ; GDPR Article 5 (1)(e) Article 18 of the AI Act requires that technical documentation, including information about training data, be retained for ten years after the system is placed on the market or put into service. The GDPR's storage limitation principle requires that personal data be kept no longer than necessary. Reconciling these obligations is a core data governance challenge. The resolution lies in retaining the documentation about the data, not necessarily the underlying personal data. In practice, this requires retaining metadata (provenance records, quality metrics, distributional statistics, versioning records, schema documentation, bias assessment results) after the personal data itself has been deleted or anonymised. The data architecture must be designed so that compliance-relevant information about training data can survive deletion of the individual records it describes. The retention plan specifies, for each data category (training data, validation data, test data, inference inputs, inference outputs, operator interaction logs): the retention period, the justification for the period (regulatory requirement, reproducibility need, audit trail obligation, retraining schedule), the storage tier and cost implications (hot storage for active use, warm for periodic access, cold for archival), and the deletion or anonymisation process at the end of the retention period. The DPO Liaison reviews the retention plan against GDPR requirements, confirming that personal data retention periods are justified and that deletion/anonymisation procedures are technically verified. At system end-of-life , the retention framework faces its most demanding test; the DPO Liaison should review the retention schedule against the decommission circumstances. Key outputs Data retention plan per data category Storage tier and lifecycle policy DPO Liaison review and confirmation Third-Party Data Validation — Contracts, Ingestion Checks & Quarantine AISDP module(s): 4 (Data Governance and Dataset Documentation) Regulatory basis: Article 10(2); Annex IV(2)(d) Many high-risk AI systems rely on data from external sources: commercial data brokers, GPAI provider training corpora, feature enrichment services. The organisation bears full Article 10 compliance responsibility for this data regardless of its origin. The third-party data governance framework operates on three layers: contractual, technical, and ongoing monitoring. The contractual layer establishes baseline expectations. Data supplier agreements should address provenance disclosure (collection methodology, lawful basis, populations represented, known biases), data quality specifications (completeness thresholds, accuracy guarantees, timeliness requirements, consistency standards), bias and representativeness warranties (demographic composition statistics to the extent disclosable), change notification (30 to 90 days before material changes), and audit rights (direct inspection or third-party auditor access at a risk-proportionate frequency). The technical layer validates every delivery regardless of contractual promises. The intake validation pipeline verifies schema compliance, completeness, range and distribution checks against the historical baseline, and anomaly detection. Great Expectations or Soda Core can define a dedicated expectation suite per supplier. Deliveries that fail validation are quarantined: the data sits in a holding area, the supplier is notified, and the data does not enter the training pipeline until the failure is resolved. The quarantine log is retained as Module 4 evidence. The ongoing monitoring layer detects silent changes. Statistical monitoring of incoming deliveries compares each delivery's distributional profile against the historical baseline, flagging sudden shifts that may indicate undisclosed methodology changes. Periodic re-assessment (at least annually) evaluates whether the data remains suitable for the system's intended purpose given evolving deployment populations and available alternatives. Key outputs Third-party data governance framework documentation Supplier contract provisions summary Intake validation pipeline configuration Quarantine log and resolution records Special Category Data (Art. 10(5)) --- ## Data Lineage Records URL: https://docs.standardintelligence.com/data-lineage-records Breadcrumb: Development › Data Governance › Artefacts › Data Lineage Records Last updated: 28 Feb 2026 Data Lineage Records AISDP module(s): 4 (Data Governance and Dataset Documentation ) Regulatory basis: Article 10 ; Annex IV (2)(d) Data Lineage Records consolidate the lineage documentation from Article 88 into a queryable audit trail. They demonstrate, for any data point used in training, validation, or inference, where it came from and what happened to it along the way. The records operate at three levels: pipeline-level lineage (which steps ran, in what order, with what inputs and outputs), transformation-level lineage (the logic within each step, version-controlled as code), and column-level lineage (how each feature in the model's input relates to columns in source datasets). OpenLineage provides the open standard for emitting and collecting lineage events; Marquez, DataHub, or Apache Atlas provide the server and query infrastructure. Feature stores (Feast, Tecton, Hopsworks) contribute to the lineage chain by centralising feature definitions, computation logic, and versioned feature values. They enforce consistency between training and inference features, eliminating training-serving skew. The lineage records are retained for the full AISDP evidence period (ten years under Article 18 ). They support multiple compliance activities: demonstrating Article 10 compliance, supporting GDPR data subject rights (identifying whether a specific individual's data was used in training), enabling post-incident investigation (tracing a faulty prediction to its data origins), and supporting the substantial modification assessment (determining whether a data change alters the system's behaviour materially). Key outputs Data Lineage Records (pipeline, transformation, column level) Feature store integration documentation Retention and access control specification --- ## Data Pipeline Tests (Normal, Boundary, Pathological, Schema, Distribution, Property-Based) URL: https://docs.standardintelligence.com/data-pipeline-tests-normal-boundary-pathological-schema Breadcrumb: Development › CI › CD Pipelines › Unit Testing › Data Pipeline Tests (Normal, Boundary, Pathological, Schema, Distribution, Property-Based) Last updated: 28 Feb 2026 Data Pipeline Tests (Normal, Boundary, Pathological, Schema, Distribution, Property-Based) AISDP module(s): Module 5 (Testing and Validation), Module 2 (Development Process) Regulatory basis: Annex IV (3), Article 15 Each data transformation step in the pipeline requires unit tests that go beyond verifying correct output for a handful of known inputs. Data pipeline tests must validate that the transformation produces the expected output for normal inputs, that edge cases (null values, empty strings, extreme values, malformed records) are handled correctly, that the transformation preserves data types and schemas, and that the transformation's effect on data distributions is within expected bounds. Property-based testing with Hypothesis is particularly valuable for data pipelines. The developer defines properties that should hold for any valid input, such as "the output of the normalisation step should have mean approximately 0 and standard deviation approximately 1 for any input distribution." Hypothesis generates hundreds of random inputs to test the property, catching edge cases that hand-written tests miss: empty datasets, single-row datasets, datasets with all null values, datasets with extreme values. Great Expectations complements Hypothesis by validating schema contracts and distribution expectations against actual data. Together, these tools provide comprehensive coverage across the spectrum from structural correctness (schema, types) to statistical correctness (distributions, ranges). The test results are retained as Module 5 evidence and referenced in the AISDP's test strategy documentation. Key outputs Unit tests covering normal, boundary, and pathological inputs per transformation step Property-based tests (Hypothesis) for distribution-level properties Schema and distribution validation (Great Expectations) Module 5 and Module 2 AISDP evidence --- ## Data Version Control URL: https://docs.standardintelligence.com/data-version-control Breadcrumb: Development › Version Control › Data Version Control Last updated: 28 Feb 2026 Data Versioning Tooling (DVC, Delta Lake, LakeFS) AISDP module(s): Module 4 (Data Governance), Module 10 (Record-Keeping) Regulatory basis: Article 10 , Article 12, Article 18 Data versioning ensures that, for any model version, the organisation can retrieve the exact dataset used to train it. Without deliberate versioning, the data that trained last quarter's model may have been silently overwritten by this quarter's data pipeline. Three tools address this requirement with different architectural trade-offs. DVC (Data Version Control) works alongside Git. The dataset is stored in remote storage (S3, GCS, Azure Blob), and DVC creates a small metadata file in the Git repository recording the storage location, content hash, and version. Checking out a Git commit allows DVC to retrieve the corresponding dataset. DVC fits into existing Git workflows and is the most widely adopted option, though it tracks whole files, meaning even small changes require storing a complete new copy. Delta Lake, built on Apache Spark, provides ACID transactions on data lakes with time-travel capability for querying historical versions. It handles incremental changes efficiently but depends on the Spark ecosystem. LakeFS provides Git-like semantics (branches, commits, merges) directly on object storage, working with any S3-compatible tool. All three tools must support the ten-year retention requirement under Article 18, which has infrastructure implications: durable storage, surviving access credentials, and budgeted storage costs for a decade. Key outputs Selected data versioning tool with deployment configuration Integration with the code repository (cross-referencing data versions to Git commits) Ten-year retention infrastructure (long-term storage, lifecycle policies) Module 4 and Module 10 AISDP documentation Dataset Manifest per Version (Count, Schema, Hash, Creator, Transformations) AISDP module(s): Module 4 (Data Governance), Module 10 (Record-Keeping) Regulatory basis: Article 10, Article 12 Each dataset version must be accompanied by a manifest that records the essential metadata needed for traceability and reproducibility. The manifest serves as the dataset's identity card, enabling the organisation to verify that the correct dataset was used for a given training run and to detect any unauthorised modifications. The manifest records the version identifier, the creation date, the creator's identity, the record count, the column schema (field names, types, and formats), the SHA-256 content hash of each data file, the source description, and any transformations applied since the previous version. The content hash is particularly important: it provides a tamper-evident fingerprint that can be verified at any future point to confirm the dataset has not been altered. For organisations using DVC, the manifest information is partially captured in the .dvc metadata files. For Delta Lake and LakeFS, the transaction log provides equivalent information. Regardless of tooling, the manifest should be stored alongside the dataset version in a human-readable format (YAML or JSON) and cross-referenced from the model registry entry that used the dataset for training. This cross-reference completes the data-to-model traceability chain. Key outputs Dataset manifest template (YAML or JSON) Content hash computation per data file Cross-reference to model registry entries Module 4 and Module 10 evidence Manual Alternative (Filename Versioning, YAML Manifests, Restricted Access) AISDP module(s): Module 4 (Data Governance), Module 10 (Record-Keeping) Regulatory basis: Article 10, Article 12 For organisations that cannot deploy DVC, Delta Lake, or LakeFS, data versioning reverts to manual snapshot management. Each dataset version becomes a complete copy stored with a naming convention (for example, training_data_v2.4_2026-02-15/ ) and accompanied by a manifest file in YAML or JSON format as described above. The manual approach requires strict operational discipline. Storage must have access controls and no-delete policies in place. The model registry entry (or tracking spreadsheet) must cross-reference the dataset version identifier. Each version must be a complete, unmodified snapshot; partial updates or in-place modifications undermine the versioning guarantee. This approach sacrifices several capabilities that automated tooling provides: incremental storage efficiency (every version is a full copy), automated hash verification on retrieval, and integration with Git for code-data cross-referencing. For datasets above approximately 10GB, the storage cost and manual management burden become substantial, and adopting tooling becomes strongly advisable. DVC is open-source and free; the real cost is the engineering time to integrate it. Key outputs Naming convention and directory structure for dataset snapshots YAML or JSON manifest per version Access controls and no-delete storage policies Module 4 and Module 10 documentation Ten-Year Retention (Long-Term Storage, Lifecycle Policies, Credential Survivability) AISDP module(s): Module 4 (Data Governance), Module 10 (Record-Keeping) Regulatory basis: Article 18 Article 18 of the EU AI Act requires that technical documentation be retained for ten years from the date the system is placed on the market. For data versioning, this means that older dataset versions must remain retrievable for the entire period. Many organisations underestimate the infrastructure implications. The versioning backend's storage must be durable, with replication and backup to protect against data loss. The access credentials must survive personnel changes; a dataset versioning system that runs on a team's cloud account and is forgotten when the team reorganises fails the retention test. The storage cost must be budgeted for a decade. Versioned datasets should be stored in the organisation's long-term compliance storage (S3 Glacier, Azure Archive, or equivalent), with lifecycle policies preventing accidental deletion. The AI Governance Lead is responsible for ensuring that the ten-year retention obligation is reflected in the organisation's infrastructure planning and budgeting. Credential survivability means that the credentials needed to access archived data are managed through the organisation's central secrets management, not held by individuals. Lifecycle policies should prevent both accidental deletion and premature archival to storage tiers that make retrieval impractically slow for incident response purposes. Key outputs Long-term storage configuration (S3 Glacier, Azure Archive, or equivalent) Lifecycle policies preventing accidental deletion Credential management through central secrets management Module 4 and Module 10 AISDP documentation --- ## Completeness Assessment URL: https://docs.standardintelligence.com/dataset-documentation--completeness-assessment Breadcrumb: Development › Data Governance › Dataset Documentation › Completeness Assessment Last updated: 28 Feb 2026 Completeness Assessment --- ## Dataset Documentation Cards URL: https://docs.standardintelligence.com/dataset-documentation-cards Breadcrumb: Development › Data Governance › Artefacts › Dataset Documentation Cards Last updated: 28 Feb 2026 Dataset Documentation Cards AISDP module(s): 4 (Data Governance and Dataset Documentation ) Regulatory basis: Article 10 ; Annex IV (2)(d) Dataset Documentation Cards are the consolidated artefacts that present the information in a structured, reviewable format for each dataset in the system's lifecycle. They follow the Datasheets for Datasets framework, extended with the EU AI Act-specific requirements. Each card covers the seven standard sections (motivation, composition, collection process, preprocessing, uses, distribution, maintenance) along with the additional sections required for AISDP compliance: GDPR lawful basis, protected characteristic distributions, representativeness assessment, known limitations contextualised to the intended purpose, and versioning metadata. The cards are treated as living artefacts. Version bumps to the underlying dataset trigger corresponding updates to the card. Tools such as OpenMetadata, DataHub, or a Markdown file co-located with the dataset in the versioning system provide version-controlled documentation that evolves alongside the data. The AI System Assessor verifies that the cards are current at each phase gate and during conformity assessment . Documentation depth is proportionate to the dataset's role: training datasets warrant comprehensive cards; static reference datasets warrant lighter treatment. The proportionality rationale is itself documented. Key outputs Dataset Documentation Card per dataset Proportionality rationale document Version tracking for card updates --- ## Dataset Documentation URL: https://docs.standardintelligence.com/dataset-documentation Breadcrumb: Development › Data Governance › Dataset Documentation Last updated: 28 Feb 2026 Source & Acquisition Method Record Count, Schema & Version Identifier Temporal & Geographic Scope Demographic Composition Known Limitations Completeness Assessment --- ## Deep Neural Networks — Candid Explainability Limitations Assessment URL: https://docs.standardintelligence.com/deep-neural-networks-candid-explainability-limitations Breadcrumb: Development › Model Selection › Full-Spectrum Evaluation › Deep Neural Networks — Candid Explainability Limitations Assessment Last updated: 28 Feb 2026 Deep Neural Networks — Candid Explainability Limitations Assessment AISDP module(s): 3, 7 Regulatory basis: Articles 13, 14 Where a deep neural network is selected for a high-risk system, the AISDP must include a candid assessment of the explainability limitations, distinct from the technical description of the explanation method itself. This assessment is a compliance-critical artefact because it sets the foundation for the compensating controls that the oversight framework must provide. The assessment should address fidelity risk: the extent to which the post-hoc explanation method reflects the model's actual reasoning rather than producing a plausible but potentially misleading narrative. LIME, for instance, fits a local linear model around each prediction point; if the true decision boundary is highly non-linear in that region, the LIME explanation may be unfaithful. Stability risk is also relevant, covering whether the explanation changes if the input is perturbed slightly. An explanation method that produces materially different attributions for nearly identical inputs undermines operator confidence and complicates audit. Coverage limitations should be documented. Not every prediction may receive a full explanation due to computational cost constraints. The AISDP must state the explanation coverage: the proportion of predictions receiving full explanations, the method used for the remainder (such as top-three features only), and the computational overhead involved. For systems processing thousands of predictions daily, generating full SHAP explanations for every case may be infeasible; practical approaches include full explanations for a random sample, lightweight explanations for all predictions, and pre-computed explanations for common input patterns. The assessment should conclude with the compensating controls selected. These typically include enhanced human oversight (more intensive review, longer dwell times, calibration cases), output validation against known-good references, constrained output spaces, and explanation quality monitoring in production (periodic fidelity testing, explanation pattern monitoring, and human evaluation sampling). The AI Governance Lead reviews and accepts the residual explainability risk with a formal sign-off, retained in the evidence pack . Key outputs Explainability limitations assessment document Compensating controls specification AI Governance Lead acceptance of residual explainability risk --- ## Deep Neural Networks — Types & Post-Hoc Explanation Methods URL: https://docs.standardintelligence.com/deep-neural-networks-types-and-post-hoc-explanation-methods Breadcrumb: Development › Model Selection › Full-Spectrum Evaluation › Deep Neural Networks — Types & Post-Hoc Explanation Methods Last updated: 28 Feb 2026 Deep Neural Networks — Types & Post-Hoc Explanation Methods AISDP module(s): 2, 3 Regulatory basis: Articles 13, 14, 15; Annex IV (2)(b) Convolutional networks, recurrent networks, and transformer architectures achieve state-of-the-art performance on unstructured data tasks such as image classification, natural language processing, and speech recognition. Their selection for high-risk systems introduces specific compliance challenges that the AISDP must address candidly. Post-hoc explanation methods exist for deep neural networks: SHAP (via KernelSHAP or DeepSHAP), LIME (Local Interpretable Model-agnostic Explanations), GradCAM (for vision models), and attention visualisation (for transformer architectures). Each method approximates the model's reasoning rather than exposing it directly. Their fidelity to the model's actual decision process is debated in the academic literature, and the AISDP must document the chosen method, its known limitations, and the fidelity validation performed. For the compliance criteria, deep neural networks score weakly on documentability, since a transformer with billions of parameters cannot have its learned representations enumerated. The architecture can be described, yet the documentation gap must be addressed through behavioural characterisation. Testability is adequate; standard evaluation methodologies exist, though stochastic outputs (in generative models) may require statistical testing frameworks. Auditability varies depending on logging infrastructure; models where output depends on runtime conditions (conversation history, retrieval context) require more sophisticated logging. Bias detectability is adequate, as feature attribution methods can identify proxy effects, though with lower precision than for simpler models. Maintainability is weak to adequate, since deep networks can exhibit large behavioural shifts from small data changes. Determinism varies; some architectures are deterministic given fixed seeds, while others are inherently stochastic. Where deep learning is chosen for a high-risk system, the AISDP must describe the compensating controls applied to address the explainability gap, including more intensive human oversight, output validation against known-good references, or constrained output spaces. Key outputs Post-hoc explanation method selection and justification Compensating controls for explainability limitations Compliance criteria scoring for neural network candidates --- ## Demographic Composition URL: https://docs.standardintelligence.com/demographic-composition Breadcrumb: Development › Data Governance › Dataset Documentation › Demographic Composition Last updated: 28 Feb 2026 Demographic Composition AISDP module(s): 4 ( Data Governance and Dataset Documentation ) Regulatory basis: Article 10(2)(f), 10(3) The demographic composition of a dataset determines whether the model will perform equitably across the populations on whom it operates. Article 10(2)(f) requires that training, validation, and test data be examined, in light of the intended purpose of the AI system, in view of possible biases that are likely to affect the health and safety of persons, have a negative impact on fundamental rights, or lead to discrimination prohibited under Union law. The Technical SME presents composition statistics both in aggregate and disaggregated by relevant subgroups. The documentation records the distribution of protected characteristics within the dataset: age bands, gender, ethnicity, disability status, and any other characteristics relevant to the system's deployment context. These distributions are compared against the deployment population to identify over-representation and under-representation. Where protected characteristic data is not directly available in the dataset, the documentation records this as a limitation and describes any proxy-based or inferred demographic analysis conducted (with appropriate caveats about the reliability of proxy-based inference). Population completeness is assessed: does the dataset represent the full range of persons and groups on whom the system will operate? Underrepresentation of specific subgroups degrades the model's performance for those groups and creates fairness risk. Where the deployment population spans multiple EU member states with different demographic profiles, the completeness assessment should address each target member state individually. The demographic composition feeds directly into the pre-training bias assessment. Distributional imbalances identified at this stage inform the Technical SME's bias mitigation strategy and may influence the model selection decision (a model requiring less training data may be preferable when certain subgroups are underrepresented). Key outputs Demographic composition report with protected characteristic distributions Deployment population comparison Under-representation identification and flagging --- ## Dependency Maps URL: https://docs.standardintelligence.com/dependency-maps Breadcrumb: Development › Architectures › Artefacts › Dependency Maps Last updated: 28 Feb 2026 Dependency Maps AISDP module(s): Module 3 (Architecture and Design) Regulatory basis: Annex IV (2)(b) Dependency maps show how the AI system relates to its external dependencies: the data sources it consumes, the APIs it calls, the infrastructure services it relies upon, and the downstream systems that consume its outputs. For microservice architectures, the dependency map also captures internal service-to-service relationships. The dependency map classifies each dependency by criticality (would the system fail if this dependency became unavailable?), data sensitivity (does personal data flow to or from this dependency?), and change risk (how frequently does this dependency change, and what is the notification mechanism?). This classification informs the risk assessment and the disaster recovery planning described above. Both declared dependencies (captured in a service catalogue such as Backstage) and observed dependencies (discovered through distributed tracing with Jaeger, Zipkin, or cloud-native tracing services) should be documented. The discrepancy between declared and observed dependencies is itself a finding that warrants investigation. The dependency map is regenerated periodically, aligned with the risk review cadence, and compared against the previous version to detect undocumented changes. Key outputs Dependency map with criticality, data sensitivity, and change risk classification s Service catalogue entries (Backstage or equivalent) Distributed tracing validation of declared dependencies Module 3 AISDP evidence --- ## Dependency Scanning (Snyk, Dependabot, pip-audit, OWASP) URL: https://docs.standardintelligence.com/dependency-scanning-snyk-dependabot-pip-audit-owasp Breadcrumb: Development › CI › CD Pipelines › Static Analysis › Dependency Scanning (Snyk, Dependabot, pip-audit, OWASP) Last updated: 28 Feb 2026 Dependency Scanning (Snyk, Dependabot, pip-audit, OWASP) AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 Every third-party dependency (Python packages, npm modules, system libraries) must be scanned against known vulnerability databases (CVE, OSV) at every build. The scan should fail the pipeline if any dependency has a known critical or high-severity vulnerability without an approved exception. Snyk, Dependabot, and pip-audit scan the project's dependency tree and alert on vulnerable versions. OWASP Dependency-Check provides an open-source alternative with NIST NVD integration. The scanner runs on every commit and blocks merges if critical vulnerabilities are found. For vulnerabilities without available patches, the AI Governance Lead may approve a time-limited exception with documented justification and compensating controls. Dependency scanning is essential for supply chain security . An AI system's inference behaviour depends on the correctness of its entire dependency tree; a compromised library could alter model outputs, exfiltrate data, or introduce backdoors. The scan results are retained as Module 9 evidence, and the dependency vulnerability status is reviewed as part of the periodic security assessment. Key outputs Dependency scanning tool configuration (Snyk, Dependabot, pip-audit, or OWASP) CI pipeline integration with merge blocking on critical/high vulnerabilities Exception approval process for unpatched vulnerabilities Module 9 AISDP evidence --- ## Deployment Ledger Entries URL: https://docs.standardintelligence.com/deployment-ledger-entries Breadcrumb: Development › CI › CD Pipelines › Artefacts › Deployment Ledger Entries Last updated: 28 Feb 2026 Deployment Ledger Entries AISDP module(s): Module 10 (Record-Keeping), Module 12 ( Post-Market Monitoring ) Regulatory basis: Article 12 This artefact is the materialised deployment ledger described above, viewed as a CI/CD pipeline output. Each pipeline execution that results in a deployment (or rollback) produces a deployment ledger entry. The collection of entries across the system's lifecycle constitutes the system's complete deployment history. The deployment ledger entries from the CI/CD pipeline complement the broader deployment ledger by providing the specific pipeline execution context: which pipeline run triggered the deployment, what tests were executed, what gate results were produced, and what approval was recorded. This level of detail enables precise reconstruction of the deployment decision for any point in the system's history. The entries are stored in append-only storage and retained for the ten-year period. The Conformity Assessment Coordinator maintains an index enabling retrieval by date range, model version, or deployment outcome. Key outputs Deployment ledger entries per pipeline-triggered deployment Pipeline execution context (tests, gates, approval) per entry Append-only storage with ten-year retention Module 10 and Module 12 AISDP evidence --- ## Determinism URL: https://docs.standardintelligence.com/determinism Breadcrumb: Development › Model Selection › Compliance Criteria Scoring › Determinism Last updated: 28 Feb 2026 Determinism AISDP module(s): 3, 10 Regulatory basis: Articles 12, 15 Determinism asks whether the model produces the same output for the same input consistently across executions. This property directly affects reproducibility for conformity assessment , auditability for Article 12 logging, and the testing methodology required for Article 15 evaluation. The assessment determines whether the candidate architecture is inherently deterministic or inherently stochastic. Linear models, decision trees, and most ensemble methods are deterministic: given the same input and model version, the output is identical. LLMs, diffusion models, and other generative architectures are inherently stochastic, with outputs varying across invocations even for identical inputs. For stochastic architectures, the assessment specifies the controls needed to achieve sufficient reproducibility for compliance purposes. Temperature clamping reduces variance. Seed fixing enables reproduction in testing environments. Output logging captures the actual output for each inference, compensating for the inability to reproduce it deterministically. The assessment should also evaluate the performance cost of these controls, since setting temperature to zero may degrade output quality for tasks where diversity is valuable. The determinism score reflects the architecture's natural reproducibility and the feasibility and cost of the controls needed to achieve compliance-grade reproducibility where the architecture is not inherently deterministic. Key outputs Determinism score per candidate model Reproducibility control specification for stochastic architectures --- ## Development Architectures URL: https://docs.standardintelligence.com/development-architectures Breadcrumb: Development › Development Architectures (S.5) Last updated: 28 Feb 2026 Development architectures translate the system's compliance requirements into a concrete technical design. This section covers 50 articles across four subsections: the statement of business intent, the eight-layer reference architecture , infrastructure design, and the artefacts produced. The statement of business intent establishes the system's purpose, prohibited outcomes, ethical framework, and transparency commitments before any architectural work begins. The eight-layer reference architecture provides per-layer controls spanning data ingestion, feature engineering, model inference, post-processing, explainability, human oversight, logging, and monitoring. Infrastructure design addresses deployment topology, containerisation, and data sovereignty. The artefacts subsection documents the deliverables. ℹ This section corresponds to the Development Architectures section and feeds primarily into AISDP Module 3 (Architecture and Design). --- ## Distributional Analysis Reports URL: https://docs.standardintelligence.com/distributional-analysis-reports Breadcrumb: Development › Data Governance › Artefacts › Distributional Analysis Reports Last updated: 28 Feb 2026 Distributional Analysis Reports AISDP module(s): 4 ( Data Governance and Dataset Documentation ) Regulatory basis: Article 10(2)(f) Distributional Analysis Reports consolidate the outputs of Article 70 (distributional analysis), Article 73 (proxy variable detection), and Article 75 (intersectional pre-training analysis) into a single evidence artefact per dataset. Each report contains the distributional analysis output matrix (features vs protected characteristics, with test statistics and p-values), the flagged features register with the Technical SME's assessment of each flagged feature, the proxy variable correlation matrix with the justification review outcomes, and the intersectional analysis with cell sizes and reliability assessments. The report is generated as part of the data preparation pipeline and stored as a Module 4 evidence artefact. It is reviewed by the AI System Assessor for completeness and by the AI Governance Lead for the acceptability of any identified biases. The report is updated whenever the dataset is modified. Key outputs Distributional Analysis Report per dataset Reviewer sign-off records --- ## Distributional Analysis — Statistical Tests & Output Matrix URL: https://docs.standardintelligence.com/distributional-analysis-statistical-tests-and-output-matrix Breadcrumb: Development › Data Governance › Pre-Training Bias Assessment › Distributional Analysis — Statistical Tests & Output Matrix Last updated: 28 Feb 2026 Distributional Analysis — Statistical Tests & Output Matrix AISDP module(s): 4 ( Data Governance and Dataset Documentation ) Regulatory basis: Article 10(2)(f) Before any model is trained, the Technical SME examines the data for bias through distributional analysis. This analysis computes the distribution of each feature across protected characteristic subgroups, identifying significant differences that may indicate historical disparities the model would learn and perpetuate. For categorical features, the chi-squared test of independence tests whether the feature's distribution is independent of the protected characteristic. For continuous features, the Kolmogorov-Smirnov test compares cumulative distribution functions across subgroups, and the Mann-Whitney U test detects location shifts where one subgroup's values are systematically higher or lower. The analysis should cover every feature, not only those the team suspects are problematic. The practical output is a matrix: features on one axis, protected characteristics on the other, with each cell showing the test statistic and p-value. Features with statistically significant distributional differences (p Flagged features require a documented assessment: is the distributional difference an artefact of historical disparity that the model should not learn, or is it a legitimate difference that the model should capture? A recruitment dataset where female applicants have systematically lower years of experience due to historical workforce participation patterns contains a distributional difference that, if weighted heavily by the model, would reproduce that disparity. The Technical SME's assessment of each flagged feature is retained as a Module 4 artefact. Key outputs Distributional analysis output matrix Flagged features with assessment rationale Statistical test parameters and correction methodology --- ## Documentability URL: https://docs.standardintelligence.com/documentability Breadcrumb: Development › Model Selection › Compliance Criteria Scoring › Documentability Last updated: 28 Feb 2026 Documentability AISDP module(s): 3 Regulatory basis: Annex IV (2)(b) Documentability is the first criterion. The question is: can the model's architecture, hyperparameters, and decision process be described precisely enough to satisfy Annex IV, Section 2? Could a qualified reviewer reproduce the training process from the documentation alone? The assessment examines the model architecture and determines whether its structure can be expressed in a technical specification document. A logistic regression model scores strongly: every parameter is a named coefficient with a clear interpretation. A gradient-boosted tree ensemble scores adequately: the architecture (number of trees, depth, splits) can be described, though enumerating every learned split across thousands of trees is impractical. A transformer with billions of parameters scores weakly: the architecture can be described at a structural level, yet the learned representations cannot be enumerated. The assessment identifies documentation gaps that would need compensating measures, such as detailed behavioural characterisation in lieu of parameter-level documentation. The score (strong, adequate, or weak) is recorded in the compliance criteria scoring matrix alongside the evidence supporting the determination. The assessment is not merely a label; it must specify what can be documented, what cannot, and what compensating measures would be required if the architecture were selected. Key outputs Documentability score per candidate model Gap identification with compensating measures --- ## DPIA (Where Required) URL: https://docs.standardintelligence.com/dpia-where-required Breadcrumb: Development › Data Governance › Artefacts › DPIA (Where Required) Last updated: 28 Feb 2026 DPIA (Where Required) AISDP module(s): Module 4 ( Data Governance ) Regulatory basis: GDPR Article 35, EU AI Act Article 27 A Data Protection Impact Assessment is required under GDPR Article 35 whenever processing is likely to result in a high risk to individuals' rights and freedoms. For AI systems that process personal data, this obligation is triggered in most high-risk deployments. The DPIA should follow the methodology set out in the EDPB's guidelines (WP 248 rev.01, as endorsed by the EDPB), which specify the minimum content requirements and the criteria for determining when a DPIA is required. The DPIA is a distinct exercise from the Fundamental Rights Impact Assessment required under AI Act Article 27, though the two overlap considerably and should be coordinated to avoid duplication. The DPIA focuses specifically on data protection risks: confidentiality, integrity, and availability of personal data, along with the broader risks to data subjects' rights arising from the processing. Findings from the DPIA should feed into the FRIA, since data protection risks are a subset of fundamental rights risks. Conversely, fairness concerns surfaced during the FRIA may carry data protection implications that the DPIA must address. Module 4 of the AISDP records how the two assessments are coordinated, cross-references their findings, and confirms that both remain current throughout the system's lifecycle. The DPO Liaison is responsible for ensuring the DPIA reflects the specific technical characteristics of the AI system, including the lawful basis for processing training data, data subject rights implications (particularly the right to erasure and the right not to be subject to solely automated decision-making), and data retention tensions between GDPR's storage limitation principle and the AI Act's ten-year documentation retention obligation under Article 18 . Organisations should not treat the DPIA as a one-time exercise. Changes to the system's data processing activities, including retraining on new datasets or expanding to new deployer contexts, may require the DPIA to be revisited. Key outputs Completed DPIA document covering the AI system's personal data processing Cross-reference mapping between DPIA findings and FRIA findings Documented coordination methodology between the two assessments Evidence of DPO Liaison sign-off --- ## Eight-Layer Reference Architecture URL: https://docs.standardintelligence.com/eight-layer-reference-architecture Breadcrumb: Development › Architectures › Eight-Layer Reference Architecture Last updated: 28 Feb 2026 The eight-layer reference architecture provides a structured approach to designing high-risk AI systems with compliance controls embedded at every stage of the data and inference pipeline. Each layer addresses specific EU AI Act requirements and produces specific evidence for the AISDP . Layer 1 (Data Ingestion, ) enforces schema contracts and detects source drift. Layer 2 (Feature Engineering, ) ensures training-serving consistency and monitors feature distributions. Layer 3 (Model Inference, ) handles version pinning, confidence thresholding, and output constraints. Layer 4 (Post-Processing, ) documents business rules, monitors threshold stability, and re-evaluates fairness on production data. Layer 5 (Explainability, ) generates and validates human-readable explanations. Layer 6 (Human Oversight Interface, ) enforces mandatory review workflows and counters automation bias. Layer 7 (Logging & Audit, ) provides immutable logging and regulatory export. Layer 8 (Monitoring, ) delivers intent alignment dashboards and multi-dimensional drift detection. ℹ All eight layers are fully populated. --- ## Embedding Bias & Representational Risk URL: https://docs.standardintelligence.com/embedding-bias-and-representational-risk Breadcrumb: Development › Data Governance › RAG-Specific Governance › Embedding Bias & Representational Risk Last updated: 28 Feb 2026 Embedding Bias & Representational Risk AISDP module(s): 4 ( Data Governance and Dataset Documentation ) Regulatory basis: Article 10(2)(f), 10(2)(g) Embedding models encode semantic associations from their training data into the geometry of the vector space. Research has consistently shown that models trained on broad web corpora encode societal biases: associations between professions and gender, between names and ethnicity, between geographic locations and socioeconomic status. In a high-risk AI system, embedding bias manifests as differential retrieval quality. A RAG-based recruitment system using biased embeddings may retrieve systematically different reference materials for candidates whose profiles contain demographic markers. A semantic search system for legal case matching may retrieve different precedents depending on the ethnicity or socioeconomic background expressed in the case description. These effects are subtle, difficult to detect through aggregate performance metrics, and fall squarely within Article 10(2)(f) on examination for possible biases. The Technical SME assesses embedding bias through intrinsic and extrinsic evaluation. Intrinsic evaluation examines the embedding space directly for known bias patterns using methods such as WEAT (Word Embedding Association Test) and its sentence-level extensions. Extrinsic evaluation, which is more directly relevant to compliance, tests whether retrieval quality differs across demographic subgroups by submitting paired queries that differ only in demographic markers and measuring whether retrieval results differ systematically. The retrieval bias test suite is run at initial deployment and as part of the PMM programme. Statistically significant differences in retrieval results across protected dimensions indicate embedding bias that requires mitigation, whether through fine-tuning the embedding model on debiased data, applying post-hoc bias correction to the embedding space, or selecting an alternative embedding model with a better bias profile. Key outputs Intrinsic embedding bias evaluation results (WEAT or equivalent) Extrinsic retrieval bias test results Mitigation specification (where bias is identified) --- ## Embedding Version Control URL: https://docs.standardintelligence.com/embedding-version-control Breadcrumb: Development › Data Governance › RAG-Specific Governance › Embedding Version Control Last updated: 28 Feb 2026 Embedding Version Control AISDP module(s): 4 ( Data Governance and Dataset Documentation ) Regulatory basis: Article 12 ; Article 15 Embedding models produce vector representations specific to the model version. When the embedding model is updated, the new version may produce different vectors for the same input text. If the knowledge base was indexed using one version but queries are embedded using another, the vector spaces are no longer aligned, and retrieval quality degrades. In extreme cases, retrieval may return entirely irrelevant documents. This version mismatch risk requires coordination between the embedding model version and the knowledge base index version. The Technical SME maintains a version record linking each knowledge base index to the embedding model version used to generate it. Any change to the embedding model version triggers a re-indexing of the knowledge base. For API-accessed embedding models, version pinning prevents the provider from silently updating the model. For downloaded models, content hashing (SHA-256) ensures the deployed version matches the documented version. Sentinel testing provides an additional safeguard: a fixed set of test queries is submitted to the embedding model, and the results are compared against the expected outputs. If the sentinel results deviate beyond a defined tolerance, the model version has changed, and re-indexing is triggered. The AISDP documents the version control mechanism, the sentinel testing configuration, and the re-indexing procedure. The version linkage record forms part of the traceability chain required by Article 12. Key outputs Embedding model version record linked to knowledge base index version Sentinel testing configuration Re-indexing trigger specification --- ## End-to-End Inference Path Tests (Known Input → Expected Output + Logs) URL: https://docs.standardintelligence.com/end-to-end-inference-path-tests-known-input-expected-output Breadcrumb: Development › CI › CD Pipelines › Integration Testing › End-to-End Inference Path Tests (Known Input → Expected Output + Logs) Last updated: 28 Feb 2026 End-to-End Inference Path Tests (Known Input → Expected Output + Logs) AISDP module(s): Module 5 (Testing and Validation) Regulatory basis: Article 12 , Article 15 End-to-end inference path tests exercise the complete chain from data ingestion through feature engineering, model inference, post-processing, explanation generation, and output delivery. A curated test dataset with known expected outcomes is submitted to the system's external interface. The tests validate the system's end-to-end accuracy, latency, and output format. The most compliance-critical aspect of end-to-end testing is the verification that the correct log entries were created at each stage. A test that confirms the correct output was produced but does not verify that the logging layer captured the inference event, the feature values, the model version, and the post-processing decisions is incomplete from an Article 12 perspective. End-to-end tests should therefore validate both the output and the audit trail. The test dataset should include cases that exercise each branch of the post-processing logic, each explanation type, and each output format. It should also include cases from each protected characteristic subgroup, ensuring that the end-to-end pipeline produces correct results for all subgroups. The test dataset is version-controlled and expanded over time as new edge cases are discovered through production operation. Key outputs Curated test dataset with known expected outcomes per subgroup End-to-end tests validating output correctness and audit trail completeness Latency and format validation Module 5 AISDP evidence --- ## Ensemble Methods URL: https://docs.standardintelligence.com/ensemble-methods Breadcrumb: Development › Model Selection › Full-Spectrum Evaluation › Ensemble Methods Last updated: 28 Feb 2026 Ensemble Methods AISDP module(s): 2, 3 Regulatory basis: Articles 13, 14; Annex IV (2)(b) Gradient-boosted decision tree ensembles (XGBoost, LightGBM, CatBoost) and random forests offer a strong balance between predictive performance and explainability. They frequently represent the best compliance trade-off for tabular data tasks in high-risk domains. The primary explainability mechanism for ensemble methods is SHAP (SHapley Additive exPlanations). SHAP values provide theoretically grounded feature attribution at the individual prediction level, decomposing each output into the contribution of each input feature. For tree-based models, the TreeExplainer algorithm computes exact SHAP values efficiently, enabling per-decision explanations that satisfy the Article 14 human oversight requirement for most applications. Operators reviewing a system recommendation can see which features drove the ranking and how confident the system is. On the compliance criteria, ensemble methods score adequately to strongly across all six dimensions. Documentability is adequate: the architecture can be described precisely (number of trees, depth, feature splits), though the learned parameters across hundreds or thousands of trees cannot be enumerated individually. Testability is strong, with standard evaluation methodologies well-established. Auditability is strong given SHAP-based attribution. Bias detectability is strong, as SHAP values can identify proxy variable effects at the individual prediction level. Maintainability is strong; gradient-boosted trees produce stable, predictable changes when retrained on augmented data. Determinism is strong, since outputs are fully reproducible for a given model version and input. The fidelity of SHAP explanations for ensemble methods is generally high, as TreeExplainer computes exact (not approximate) Shapley values. The AISDP should nonetheless include fidelity validation results, confirming that perturbing the features identified as most important by SHAP produces corresponding changes in the model's output. Key outputs SHAP-based attribution methodology documentation Fidelity validation results for explanation method Compliance criteria scoring for ensemble candidates --- ## Equalised Odds — TPR & FPR Parity URL: https://docs.standardintelligence.com/equalised-odds-tpr-and-fpr-parity Breadcrumb: Development › Data Governance › Post-Training Bias Evaluation › Equalised Odds — TPR & FPR Parity Last updated: 28 Feb 2026 Equalised Odds — TPR & FPR Parity AISDP module(s): 4 ( Data Governance and Dataset Documentation ), 5 (Testing and Validation) Regulatory basis: Article 10(2)(f); Article 9 Equalised odds requires that the model's true positive rate (TPR) and false positive rate (FPR) are consistent across protected subgroups. Differences in error rates mean the model makes systematically different types of mistakes for different groups, constituting unfair treatment even if overall accuracy is similar. If a credit scoring model correctly identifies 90% of creditworthy applicants in one ethnic group but only 70% in another, it has unequal true positive rates. The group with the lower TPR is systematically disadvantaged: creditworthy individuals from that group are disproportionately denied. Similarly, if the model incorrectly classifies 5% of non-creditworthy applicants as creditworthy in one group but 15% in another, the FPR disparity means one group bears disproportionate cost from false approvals. Computing equalised odds requires access to ground truth labels, which may not be available for all predictions in production. This makes it primarily a development-time and periodic-review metric. The AISDP records the TPR and FPR per protected subgroup, the parity thresholds applied, and any subgroups that fall outside the acceptable range. The metric is computed as part of the fairness evaluation suite integrated into the CI pipeline. Fairlearn's MetricFrame computes per-subgroup TPR and FPR and reports the disparity. Threshold breaches trigger the bias mitigation process and block deployment through the fairness gate. Key outputs TPR and FPR per protected subgroup Parity assessment and threshold compliance Deployment gate status --- ## Exception Approval Records URL: https://docs.standardintelligence.com/exception-approval-records Breadcrumb: Development › CI › CD Pipelines › Artefacts › Exception Approval Records Last updated: 28 Feb 2026 Exception Approval Records AISDP module(s): Module 6 (Risk Management System), Module 10 (Record-Keeping) Regulatory basis: Article 9 This artefact comprises the collection of exception approvals granted through the severity-based failure handling process and the fairness gate override process described above. Each record documents an instance where a deployment proceeded despite a test or gate failure. The record captures the specific failure that triggered the exception, the severity classification, the approver's identity, the justification for proceeding, the compensating controls in place, the conditions under which the exception expires, and the remediation plan. For fairness gate overrides, the record additionally includes the root cause analysis and the time-bound commitment for deploying a remediated model. Exception approval records are sensitive compliance evidence. A pattern of frequent exceptions may indicate that the system's thresholds are miscalibrated, that the development process is under excessive pressure, or that the governance framework is being circumvented. The AI Governance Lead reviews exception frequency and patterns as part of the periodic governance review, and the review findings are documented in the risk register. Key outputs Exception approval records per instance Justification, compensating controls, and remediation plan per record Exception frequency and pattern analysis Module 6 and Module 10 AISDP evidence --- ## Explainability Tests (Coverage, Attribution Sums, Fidelity, Format) URL: https://docs.standardintelligence.com/explainability-tests-coverage-attribution-sums-fidelity Breadcrumb: Development › CI › CD Pipelines › Unit Testing › Explainability Tests (Coverage, Attribution Sums, Fidelity, Format) Last updated: 28 Feb 2026 Explainability Tests (Coverage, Attribution Sums, Fidelity, Format) AISDP module(s): Module 5 (Testing and Validation), Module 7 (Human Oversight) Regulatory basis: Article 13 , Article 14 The explanation generation component requires unit tests verifying four properties. Coverage tests confirm that explanations are produced for every inference, with no silent omissions. Attribution sum tests verify that, for additive explanation methods such as SHAP, the feature attributions sum to the expected value (the difference between the model's output and the base value). Rounding errors or implementation bugs can cause attribution sums to diverge. Fidelity tests verify that the explanation accurately represents the model's actual decision-making process. The fidelity metric measures how well the explanation (a simplified representation) approximates the model's actual behaviour. If the fidelity score falls below a defined threshold, the explanation may be misleading, which undermines Article 13's transparency objective and Article 14's human oversight requirement. Format tests verify that explanations are correctly structured for their target audience. Operator-facing explanations (detailed, technical) and affected-person-facing explanations (plain language, accessible) have different format requirements. The tests confirm that each format meets its specification, that mandatory fields are populated, and that the explanation's content is consistent with the inference output it describes. Key outputs Coverage tests confirming explanation generation for every inference Attribution sum validation for additive methods Fidelity threshold tests per explanation method Format validation for operator and affected-person audiences --- ## Fairness Concept Priority Decision & Documented Rationale URL: https://docs.standardintelligence.com/fairness-concept-priority-decision-and-documented-rationale Breadcrumb: Development › Data Governance › Post-Training Bias Evaluation › Fairness Concept Priority Decision & Documented Rationale Last updated: 28 Feb 2026 Fairness Concept Priority Decision & Documented Rationale AISDP module(s): 4 ( Data Governance and Dataset Documentation ) Regulatory basis: Article 10(2)(f); Article 9 The five post-training fairness metrics (selection rate ratio, equalised odds, predictive parity, calibration within groups, and counterfactual fairness) capture different fairness concepts, and they can conflict with each other. A model that achieves equalised odds may fail predictive parity. A model that achieves calibration within groups may violate the four-fifths rule. Mathematical impossibility results (Chouldechova, 2017; Kleinberg et al., 2016) demonstrate that perfect satisfaction of multiple fairness criteria simultaneously is impossible except in trivial cases. The organisation must decide which fairness concept takes priority for its specific system, and document the rationale. This decision is not purely technical; it is an ethical and policy choice that the AI Governance Lead makes with input from the Technical SME, the Legal and Regulatory Advisor, and the Business Owner. The rationale should consider the system's intended purpose and the nature of the decisions it supports, the consequences of different types of errors for different subgroups, the regulatory and legal expectations in the deployment domain (employment law may emphasise selection rate parity; financial services regulation may emphasise calibration), and the preferences of affected persons and stakeholders to the extent ascertainable. The AISDP records the prioritised fairness concept, the rationale, the acceptance thresholds for the prioritised metric, and the monitoring approach for the non-prioritised metrics (which remain relevant even if they are not the primary target). The decision is revisited at each major review cycle and whenever post-market monitoring reveals fairness-relevant changes. Key outputs Fairness concept prioritisation decision Rationale document with stakeholder input Acceptance thresholds per fairness metric --- ## Fairness Tooling (Fairlearn, AI Fairness 360) URL: https://docs.standardintelligence.com/fairness-tooling-fairlearn-ai-fairness-360 Breadcrumb: Development › Data Governance › Post-Training Bias Evaluation › Fairness Tooling (Fairlearn, AI Fairness 360) Last updated: 28 Feb 2026 Fairness Tooling (Fairlearn, AI Fairness 360) AISDP module(s): 4 ( Data Governance and Dataset Documentation ), 5 (Testing and Validation) Regulatory basis: Article 10(2)(f) The fairness evaluation suite integrates all five post-training metrics into a single evaluation report that runs as part of the CI pipeline. The AISDP documents the tooling selected, its configuration, and the integration into the development workflow. Fairlearn's MetricFrame is the most flexible tool for this purpose. The developer defines the metrics, the sensitive features, and the dataset; MetricFrame produces a structured report with per-subgroup values for all metrics. It supports intersectional analysis by accepting multiple sensitive features and computing metrics for every combination. Fairlearn integrates with scikit-learn estimators and supports both evaluation (MetricFrame) and mitigation (ExponentiatedGradient, ThresholdOptimizer). AI Fairness 360 (IBM) provides a broader toolkit with additional bias detection and mitigation algorithms, including the disparate impact remover, reweighting preprocessor, and calibrated equalised odds post-processor. It also includes dataset bias metrics that complement the pre-training analysis. The fairness evaluation report is stored as a Module 4 and Module 5 evidence artefact. It is compared against the declared thresholds established in Article 81. Any threshold breach blocks deployment through the fairness gate in the CI/CD pipeline . The tooling configuration (metric definitions, sensitive feature specifications, threshold values) is version-controlled alongside the model code, ensuring reproducibility. Key outputs Fairness tooling selection and configuration documentation CI pipeline integration specification Sample fairness evaluation report --- ## Feature Engineering Tests (Registry Match, Determinism, Imputation, Range) URL: https://docs.standardintelligence.com/feature-engineering-tests-registry-match-determinism Breadcrumb: Development › CI › CD Pipelines › Unit Testing › Feature Engineering Tests (Registry Match, Determinism, Imputation, Range) Last updated: 28 Feb 2026 Feature Engineering Tests (Registry Match, Determinism, Imputation, Range) AISDP module(s): Module 5 (Testing and Validation) Regulatory basis: Annex IV (3), Article 15 Each feature computation must have unit tests verifying four properties. First, the feature's output must match the specification in the feature registry ; if the registry declares that a feature is computed as a ratio of two source columns, the test verifies that the computation produces the documented ratio. Second, the computation must be deterministic: the same input must produce the same output across repeated executions and across training and serving environments. Third, the feature must handle missing input values according to the documented imputation strategy. If the feature registry specifies median imputation for a given feature, the test verifies that null inputs are replaced with the training-set median, not the test-set median or zero. Fourth, the feature's output range must fall within the expected bounds documented in the registry. A feature expected to produce values between 0 and 1 that occasionally produces 1.001 due to floating-point arithmetic may cause downstream issues. These tests serve a dual purpose. They verify the correctness of the feature engineering code, and they verify that the feature engineering code is consistent with the feature registry documentation. A mismatch between the two, where the code does one thing and the registry documents another, is a traceability gap. Key outputs Registry-match tests verifying code against feature registry specifications Determinism tests across repeated executions and environments Imputation strategy tests per documented imputation method Output range validation tests --- ## Feature Registry with Proxy Variable Assessments URL: https://docs.standardintelligence.com/feature-registry-with-proxy-variable-assessments Breadcrumb: Development › Data Governance › Artefacts › Feature Registry with Proxy Variable Assessments Last updated: 28 Feb 2026 Feature Registry with Proxy Variable Assessments AISDP module(s): 4 ( Data Governance and Dataset Documentation ) Regulatory basis: Article 10(2)(f) The Feature Registry is a centralised reference documenting every feature used in the model, its definition, its source, its proxy variable assessment, and its lineage to source data. It serves as a single point of reference for reviewers, auditors, and the internal team. For each feature, the registry records the feature name and description, the source dataset and field, the transformation applied to derive the feature (referencing the lineage records), the proxy variable assessment (correlation with each protected characteristic, the justification review outcome from Article 74, and the retention or removal decision), the feature importance (SHAP-based or permutation importance) in the current model version, and any fairness-related observations (such as the feature contributing disproportionately to predictions for specific subgroups). The registry is maintained as a living artefact, updated whenever features are added, removed, or modified. Feature store tools (Feast, Tecton, Hopsworks) can partially automate the registry by centralising feature definitions and metadata. For organisations without feature store infrastructure, a structured spreadsheet or database provides the minimum viable registry. Key outputs Feature Registry (complete, covering all model features) Proxy variable assessment per feature Version tracking for registry updates --- ## Fine-Tuning Provider Boundary URL: https://docs.standardintelligence.com/fine-tuning-provider-boundary Breadcrumb: Development › Model Selection › Fine-Tuning Provider Boundary (Art. 25) Last updated: 28 Feb 2026 Substantial Modification Assessment AISDP module(s): 3 Regulatory basis: Article 3(23) ; Article 25(1)(b) Fine-tuning a GPAI model for use in a high-risk system raises the question of whether the modification constitutes a substantial modification under Article 3(23), and whether it triggers the provider boundary shift under Article 25(1)(b). This assessment is central to determining the organisation's regulatory obligations. The Legal and Regulatory Advisor assesses the fine-tuning activity against three criteria. First, does the fine-tuning change the model's intended purpose as documented by the GPAI provider? A general-purpose text generation model fine-tuned for medical triage or credit assessment has undergone a change of intended purpose. Second, does the fine-tuning alter the model's risk profile? Fine-tuning on domain-specific data may introduce new bias patterns, failure modes, or accuracy characteristics absent from the base model. Third, does the fine-tuning affect the model's compliance with the GPAI provider's own obligations under Articles 51 to 56? Fine-tuning may void safety evaluations or alignment testing conducted on the base model. Where any criterion is satisfied, the modification is likely substantial. The organisation should treat itself as a provider with full Article 16 obligations and prepare the AISDP accordingly. The assessment and its reasoning are documented in the Fine-Tuning Provider Boundary Determination artefact. Key outputs Substantial modification assessment with three-criteria analysis Provider status determination Provider Obligation Transfer Determination AISDP module(s): 3 Regulatory basis: Article 25(1)(b); Article 16 Where the substantial modification assessment determines that the organisation has assumed provider status, the full set of provider obligations under Article 16 transfers to the fine-tuning organisation. These include maintaining the AISDP ( Article 11 ), conducting conformity assessment , signing the Declaration of Conformity , affixing CE marking , registering in the EU database , establishing post-market monitoring , and reporting serious incident s. The determination document should clearly delineate which obligations are satisfied by the GPAI provider's existing compliance artefacts and which fall exclusively on the downstream organisation. For example, the GPAI provider's technical documentation may partially satisfy AISDP Module 3 requirements for the base model architecture, but the fine-tuning organisation bears full responsibility for documenting the fine-tuning process, the evaluation results, and the system-level integration. The AI Governance Lead reviews and approves the obligation transfer determination, ensuring that all assumed obligations are assigned to specific roles with clear timelines for fulfillment. This determination should be made during Phase 3 (Architecture and Design), since it materially affects the scope and cost of the remaining delivery phases. Key outputs Provider obligation transfer determination document Obligation-to-role assignment matrix AI Governance Lead approval Decision Flow for Borderline Cases AISDP module(s): 3 Regulatory basis: Article 25(1)(b); Article 3(23) Not all fine-tuning activities produce a clear determination. Some cases are genuinely borderline: the fine-tuning may narrow the model's domain without clearly changing its intended purpose, or the risk profile may shift modestly without crossing a definitive threshold. The decision flow for these cases must be documented to demonstrate a rigorous and defensible analysis. The decision flow should proceed through sequential questions. Has the model's intended purpose as stated by the GPAI provider been changed? If yes, provider status is triggered. If no, has the model's risk profile changed materially, considering new bias patterns, new failure modes, altered accuracy characteristics, or new deployment populations? If yes, provider status is likely triggered. If no, has the GPAI provider's own safety or alignment testing been invalidated by the fine-tuning? If yes, provider status is triggered. For genuinely ambiguous cases where none of these questions produces a clear answer, the Legal and Regulatory Advisor should apply the precautionary principle: treat the organisation as a provider. The compliance cost of assuming provider obligations unnecessarily is materially lower than the enforcement risk of incorrectly claiming deployer status for a system that a competent authority later determines should have been treated as provider-level. The decision flow, including the reasoning at each step and the individuals involved in the determination, is retained in the evidence pack. If the organisation determines that provider status is not triggered, the justification must be specific and evidence-based, not merely a statement that the modification was minor. Key outputs Borderline case decision flow documentation Precautionary principle application record (where applicable) --- ## Foundation Models & LLMs — Article 53 GPAI Obligations URL: https://docs.standardintelligence.com/foundation-models-and-llms-article-53-gpai-obligations Breadcrumb: Development › Model Selection › Full-Spectrum Evaluation › Foundation Models & LLMs — Article 53 GPAI Obligations Last updated: 28 Feb 2026 Foundation Models & LLMs — Article 53 GPAI Obligations AISDP module(s): 3, 6 Regulatory basis: Articles 25, 51, 53, 55, 56 When a high-risk system incorporates a general-purpose AI model, the downstream provider bears full responsibility for compliance under Article 16, yet has limited visibility into the GPAI model's training data, architecture, and behavioural characteristics. Understanding the GPAI provider's obligations under Article 53 is essential for managing this asymmetry. Article 53 requires GPAI model providers to draw up and make available technical documentation, provide information and documentation to downstream providers integrating the model into high-risk systems, establish a copyright compliance policy, and publish a summary of the training data content. The specific content requirements for GPAI provider documentation are set out in Annex XI (general GPAI models) and Annex XII (models presenting systemic risk). Article 25 (3) entitles the downstream provider to request specific information so that the high-risk system can comply with the Act. If a GPAI provider refuses or fails to respond to a properly formulated Article 25(3) request, the downstream provider should document the refusal and consider reporting it to the AI Office or relevant national competent authorit y, since the refusal may itself constitute non-compliance by the GPAI provider. The AI System Assessor should submit a structured information request covering training data governance (sources, methodology, geographic and demographic coverage, known biases, copyright measures), model architecture and behaviour (architecture family, parameter count, alignment approach, known failure modes), versioning and change policy (deprecation policy, change notification commitments), data handling practices (whether inference inputs are retained, whether they are used for further training), and safety and security (red-teaming methodology, vulnerability disclosure policy). Where the GPAI provider participates in the Code of Practice under Article 56, the downstream provider can reasonably expect compliance with its transparency commitments. The first general-purpose AI Code of Practice was published on 4 August 2025; organisations should verify whether their GPAI provider has signed the Code and assess adherence against its specific commitments. Where the Code of Practice has not yet matured into a stable compliance benchmark, or where the provider does not participate, information gaps are likely to be wider and compensating controls more demanding. In either case, the downstream provider should not rely on the Code of Practice alone; the structured information request under Article 25(3) remains the primary mechanism for obtaining the disclosures needed for AISDP compliance. For GPAI models classified as presenting systemic risk under Article 51, the provider bears additional obligations under Article 55, including model evaluations, adversarial testing, serious incident reporting , and cybersecurity protection. Article 51(2) establishes a rebuttable presumption that a GPAI model presents systemic risk when the cumulative amount of computation used for its training, measured in floating point operations (FLOPs), exceeds 10^25. The Commission may update this threshold by delegated act. The downstream provider should request access to the GPAI provider's systemic risk documentation and assess which inherited risks are covered by the provider's own controls. Contractual risk transfer must also be assessed. Where the GPAI provider's terms of service limit liability or disclaim responsibility for downstream use, the resulting gap in risk allocation is recorded in the risk register . Key outputs Structured GPAI provider information request record GPAI disclosure register (per Code of Practice commitment area) Inherited risk analysis (AISDP Module 6 ) Contractual risk gap documentation --- ## Foundation Models & LLMs — Fine-Tuning Records URL: https://docs.standardintelligence.com/foundation-models-and-llms-fine-tuning-records Breadcrumb: Development › Model Selection › Full-Spectrum Evaluation › Foundation Models & LLMs — Fine-Tuning Records Last updated: 28 Feb 2026 Foundation Models & LLMs — Fine-Tuning Records AISDP module(s): 2, 3 Regulatory basis: Article 25 (1)(b); Annex IV (2) Organisations that fine-tune a foundation model for use in a high-risk system must document the fine-tuning process with the same rigour applied to in-house model development. This obligation arises because fine-tuning typically changes the model's intended purpose and may trigger provider status under Article 25(1)(b). Fine-tuning records should capture the fine-tuning data governance, addressing Article 10 requirements for the fine-tuning dataset, including provenance, composition, demographic representativeness, bias assessment, and known limitations. The fine-tuning methodology must be documented: the approach (full fine-tuning, LoRA, QLoRA, prefix tuning, adapters), hyperparameters, training duration, convergence metrics, and random seed. The evaluation results compare the fine-tuned model against the AISDP-declared performance and fairness thresholds. A clear delineation is required between the base model's characteristics inherited from the GPAI provider and the fine-tuned model's characteristics that fall under the organisation's responsibility. This boundary determines which compliance obligations are addressed by the provider's own documentation and which the fine-tuning organisation must satisfy independently. For parameter-efficient fine-tuning methods, the compliance boundary does not depend on the volume of parameters modified; it depends on whether the modification changes the model's intended purpose or risk profile. A LoRA adapter that redirects a general-purpose model toward a high-risk use case triggers the same Article 25(1)(b) analysis as full fine-tuning. The fine-tuning records are stored in the model registry alongside the provenance metadata for the base model. The Model Selection Record should document the base model selection as a GPAI integration decision and the fine-tuning as a development decision, with separate risk assessment s for each. Key outputs Fine-tuning data governance documentation (AISDP Module 4 ) Fine-tuning methodology and hyperparameter record (AISDP Module 2 ) Base model / fine-tuned model responsibility boundary documentation Evaluation results comparing fine-tuned model against declared thresholds --- ## Foundation Models & LLMs — Provenance Documentation URL: https://docs.standardintelligence.com/foundation-models-and-llms-provenance-documentation Breadcrumb: Development › Model Selection › Full-Spectrum Evaluation › Foundation Models & LLMs — Provenance Documentation Last updated: 28 Feb 2026 Foundation Models & LLMs — Provenance Documentation AISDP module(s): 2, 3 Regulatory basis: Articles 11, 53; Annex IV (2) Large language models and foundation models used as components within high-risk systems introduce documentation challenges that go beyond those of conventional models. AISDP Module 3 must record the base model's provenance with a level of detail that enables a competent authority to assess the compliance implications of the model choice. Provenance documentation for foundation models covers several dimensions. The model's origin must be recorded: the provider, the model family and version identifier, the date of access or download, and the access mechanism (API, downloaded weights, or fine-tuned variant). The training data provenance should be documented to the extent available from the provider; where the provider's disclosures are insufficient, the gaps are recorded as non-conformities, and the compensating controls applied (such as sentinel testing) are described. The model's architecture family, parameter count, training methodology, and known limitations should be captured. For models accessed via API, the provider's versioning policy is documented, including whether the provider may silently update the model within a version identifier. The provider's data handling practices are recorded. The licensing terms and their compatibility with the system's commercial and regulatory context are assessed. For models downloaded from public repositories such as Hugging Face, best practice is to download the model once, compute a SHA-256 cryptographic hash, store the model and hash in the internal model registry , and reference only the internal copy thereafter. This prevents silent changes if the repository updates the model under the same identifier. Hugging Face's revision parameter supports pinning to a specific Git commit SHA for this purpose. The provenance record for each model version should capture, at minimum: origin, training data version, training code commit, hyperparameters, pipeline execution ID, evaluation metrics, content hash, and digital signature. This record is attached as structured metadata in the model registry and referenced by AISDP Module 3. Key outputs Foundation model provenance record Provider disclosure gap assessment Cryptographic hash and version pinning records --- ## Foundation Models & LLMs — Stochastic Output Handling URL: https://docs.standardintelligence.com/foundation-models-and-llms-stochastic-output-handling Breadcrumb: Development › Model Selection › Full-Spectrum Evaluation › Foundation Models & LLMs — Stochastic Output Handling Last updated: 28 Feb 2026 Foundation Models & LLMs — Stochastic Output Handling AISDP module(s): 3, 10 Regulatory basis: Articles 12, 15 The stochastic nature of LLM outputs, where the same input may produce different outputs across invocations, requires specific attention in the context of Article 15 's accuracy requirements and Article 12 's logging requirements. The AISDP must document the controls applied to manage this stochasticity. For accuracy compliance under Article 15, the organisation must establish that the system's outputs fall within acceptable bounds despite stochasticity. Temperature clamping reduces output variance; setting temperature to zero or near-zero produces more deterministic behaviour, though it may reduce output quality for generative tasks. The AISDP documents the temperature setting, the rationale for the chosen value, and the measured impact on output variance and quality. For record-keeping under Article 12, each inference must be logged with sufficient context to enable reconstruction. Unlike deterministic models where the input and model version are sufficient to reproduce the output, stochastic models require logging the actual output alongside the input, the model version, and any runtime parameters (temperature, top-p, random seed if applicable). This ensures that every decision can be examined after the fact, even though it cannot be reproduced deterministically. Seed fixing supports reproducibility in testing and evaluation environments. By fixing the random seed, the organisation can reproduce specific outputs for validation, debugging, and conformity assessment . The AISDP documents the seed management approach and clarifies that production inference may not use fixed seeds (to avoid gaming or predictability risks), while evaluation and testing environments do. Output logging must capture not merely the final output but any intermediate reasoning steps, chain-of-thought outputs, or retrieval context that contributed to the response. For RAG-based systems, the retrieved documents and their relevance scores form part of the audit trail. The logging infrastructure must handle the substantially larger payload sizes that LLM outputs generate compared to traditional model predictions. Key outputs Stochastic output handling specification Temperature and sampling parameter documentation Logging architecture for LLM inference (AISDP Module 10 ) --- ## Full-Spectrum Evaluation URL: https://docs.standardintelligence.com/full-spectrum-evaluation Breadcrumb: Development › Model Selection › Full-Spectrum Evaluation Last updated: 28 Feb 2026 Heuristic & Rule-Based Systems Statistical & Econometric Models Ensemble Methods Deep Neural Networks — Types & Post-Hoc Explanation Methods Deep Neural Networks — Candid Explainability Limitations Assessment Foundation Models & LLMs — Provenance Documentation Foundation Models & LLMs — Fine-Tuning Records Foundation Models & LLMs — Stochastic Output Handling Foundation Models & LLMs — Article 53 GPAI Obligations Hybrid Architectures — Component Documentation & Conflict Resolution --- ## GDPR Status of Stored Embeddings URL: https://docs.standardintelligence.com/gdpr-status-of-stored-embeddings Breadcrumb: Development › Data Governance › RAG-Specific Governance › GDPR Status of Stored Embeddings Last updated: 28 Feb 2026 GDPR Status of Stored Embeddings AISDP module(s): 4 (Data Governance and Dataset Documentation ) Regulatory basis: GDPR ; Article 10 Dense vector embeddings derived from text containing personal data may themselves constitute personal data if the original information can be recovered through inversion techniques. The DPO Liaison assesses whether the stored embeddings constitute personal data by applying the Recital 26 test: whether re-identification is achievable using means reasonably likely to be used, taking into account available technology, the cost of identification, and the time required. The CJEU's Breyer ruling (C-582/14) confirms that the availability of legal means to obtain identifying information is relevant to this assessment. The assessment considers the embedding model's dimensionality (higher dimensions preserve more information and increase inversion risk), the availability of inversion techniques for the specific model architecture, and whether the embeddings are stored alongside metadata (document identifiers, timestamps, user identifiers) that could facilitate re-identification. The state of the art in embedding inversion techniques evolves; the assessment must reflect current capabilities. Where the DPO Liaison determines that embeddings constitute personal data, the full GDPR compliance framework applies. A lawful basis must be identified for storing the embeddings. The retention policy must specify a deletion schedule. Data subject access and erasure requests must be serviceable, which may require the ability to identify and delete specific embeddings from the vector store. The DPIA must address the embedding-specific risks. The practical challenge is that vector databases are optimised for similarity search, not record-level deletion. The Technical SME assesses the vector database's deletion capabilities at architecture design time and documents the approach for servicing erasure requests. Where the database does not support efficient single-record deletion, a mapping between embeddings and source documents enables erasure at the next scheduled re-indexing. For systems where embeddings are determined to constitute personal data, the Technical SME implements monitoring for embedding inversion attacks: access logging for the vector database, anomaly detection on query patterns, and periodic reassessment of the inversion risk landscape. Key outputs GDPR status determination for stored embeddings DPO Liaison assessment record Erasure request handling specification Inversion monitoring specification (where applicable) --- ## Heuristic & Rule-Based Systems URL: https://docs.standardintelligence.com/heuristic-and-rule-based-systems Breadcrumb: Development › Model Selection › Full-Spectrum Evaluation › Heuristic & Rule-Based Systems Last updated: 28 Feb 2026 Heuristic & Rule-Based Systems AISDP module(s): 2, 3 Regulatory basis: Article 3(1); Annex IV (2)(b) Many organisations operate data-driven decisioning systems that predate the machine learning era. These include expert systems encoding domain knowledge as decision trees or rule sets, business rules engines processing structured if-then-else conditions against customer or transaction data, scoring models based on weighted criteria defined by subject-matter experts, and threshold-based systems triggering actions when observed values cross predefined boundaries. These systems may fall within the Article 3(1) definition of an AI system if they are designed to operate with varying levels of autonomy, may exhibit adaptiveness after deployment, and infer from inputs how to generate outputs. Their principal compliance advantage is transparency: every decision pathway is deterministic and documentable. The rules can be enumerated, each rule tested in isolation, and the system's behaviour explained by reference to the specific rule that fired. For high-risk domains where explainability is paramount, such as credit decisioning, benefits eligibility, or judicial risk assessment , heuristic approaches may satisfy regulatory requirements more naturally than opaque machine learning models. The compliance disadvantage is that heuristic systems can embed their designers' biases without the statistical tools available to detect and mitigate those biases in learned models. A manually designed scoring model may assign weights to features that correlate with protected characteristics without the designer recognising the correlation. The risk assessment for heuristic systems must therefore include a retrospective bias audit, testing the system's historical decisions for disparate impact across protected groups. When evaluating heuristic approaches against the six compliance criteria, they score strongly on documentability, testability, auditability, and determinism; adequately on maintainability; and variably on bias detectability depending on how feature weights were derived. Key outputs Compliance criteria scoring for heuristic candidates Retrospective bias audit results (if selected) --- ## Human Oversight Interface Specification URL: https://docs.standardintelligence.com/human-oversight-interface-specification Breadcrumb: Development › Architectures › Artefacts › Human Oversight Interface Specification Last updated: 28 Feb 2026 Human Oversight Interface Specification AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Article 14 The Human Oversight Interface Specification is the design document for the Layer 6 interface through which operators review, accept, override, or reject the system's outputs. It consolidates the requirements into a single artefact that guides interface development and serves as compliance evidence. The specification should include wireframes or screenshots annotated to show the information presented, the actions available, and the workflow enforced. It must document the mandatory review workflow and the technical mechanisms preventing bypass, the four automation bias countermeasures and their configuration, the override capability and rationale capture design, and the monitoring instrumentation for override rates and review times. Human interface specifications should also document the sequence of interactions for critical patterns (case presentation, data-first display, recommendation reveal, operator decision, rationale capture) through sequence diagrams. These diagrams demonstrate to an assessor exactly what information is presented to the operator, in what order, and what happens when the operator accepts, modifies, or rejects the recommendation. Key outputs Annotated wireframes or screenshots of the oversight interface Sequence diagrams for critical interaction patterns Configuration parameters for all countermeasures Module 7 AISDP evidence --- ## Human Oversight Interface Testing (Selenium/Playwright/Cypress Automation) URL: https://docs.standardintelligence.com/human-oversight-interface-testing-seleniumplaywrightcypress Breadcrumb: Development › CI › CD Pipelines › Integration Testing › Human Oversight Interface Testing (Selenium/Playwright/Cypress Automation) Last updated: 28 Feb 2026 Human Oversight Interface Testing (Selenium/Playwright/Cypress Automation) AISDP module(s): Module 7 (Human Oversight), Module 5 (Testing and Validation) Regulatory basis: Article 14 Human oversight interface testing is frequently neglected but compliance-critical. Automated UI testing tools (Selenium, Playwright, or Cypress) verify that the oversight interface behaves correctly in an integrated environment, complementing the unit tests described above. The automated tests verify that the interface displays the required information (case data, model recommendation, confidence score, explanation), that the approval and override workflows function correctly end-to-end, that minimum dwell time enforcement works under realistic interaction patterns, and that operator actions are correctly logged in the audit trail. These tests interact with the actual interface, simulating operator actions through browser automation. Interface testing should run on every interface change and on a periodic schedule (weekly) to catch regressions introduced by infrastructure changes, browser updates, or dependency updates that do not directly modify the interface code. The test results are retained as Module 5 and Module 7 evidence. For high-risk systems, a failed oversight interface test is a critical failure that blocks deployment. Key outputs Automated UI tests (Selenium, Playwright, or Cypress) for the oversight interface Workflow verification (approval, override, dwell time, logging) Weekly regression schedule in addition to change-triggered runs Module 5 and Module 7 AISDP evidence --- ## Human Oversight Interface Tests (Bypass Prevention, Rationale, Confidence, Countermeasures) URL: https://docs.standardintelligence.com/human-oversight-interface-tests-bypass-prevention-rationale Breadcrumb: Development › CI › CD Pipelines › Unit Testing › Human Oversight Interface Tests (Bypass Prevention, Rationale, Confidence, Countermeasures) Last updated: 28 Feb 2026 Human Oversight Interface Tests (Bypass Prevention, Rationale, Confidence, Countermeasures) AISDP module(s): Module 7 (Human Oversight), Module 5 (Testing and Validation) Regulatory basis: Article 14 The human oversight interface is a compliance-critical component that requires dedicated unit tests. The mandatory review workflow must not be bypassable: tests confirm that there is no API endpoint, configuration flag, or administrative override that allows system outputs to be applied without human review. Override functionality must work correctly, and the rationale capture must record the required information. Confidence indicator tests verify that the confidence score displayed to the operator matches the model's actual output confidence, and that the visual representation (colour coding, gauge, or equivalent) accurately reflects the configured thresholds. Automation bias countermeasure tests verify that the data-first display pattern works correctly (case data shown before the model's recommendation), that the minimum dwell time enforcement functions as configured, and that calibration cases are injected at the specified rate. These tests should run on every interface change and on a periodic schedule to catch regressions. For high-risk systems, the bypass prevention tests are critical failures: any test path that allows auto-acceptance of the model's recommendation without human review is a compliance gap that blocks deployment. Key outputs Bypass prevention tests confirming no auto-acceptance pathway exists Override and rationale capture functional tests Confidence indicator accuracy tests Automation bias countermeasure verification tests --- ## Hybrid Architectures — Component Documentation & Conflict Resolution URL: https://docs.standardintelligence.com/hybrid-architectures-component-documentation-and-conflict Breadcrumb: Development › Model Selection › Full-Spectrum Evaluation › Hybrid Architectures — Component Documentation & Conflict Resolution Last updated: 28 Feb 2026 Hybrid Architectures — Component Documentation & Conflict Resolution AISDP module(s): 3 Regulatory basis: Annex IV (2)(b–e) Many production systems combine multiple decisioning approaches: a rules engine for hard constraints, a statistical model for initial scoring, a machine learning model for fine-grained ranking, and perhaps an LLM for generating explanatory text. Hybrid architectures are documented as integrated systems in the AISDP. Module 3 must describe how each component contributes to the final output, how conflicts between components are resolved, and which component bears primary responsibility for each aspect of the system's behaviour. Conflict resolution logic is a compliance-critical element. When a rules engine rejects a case that the ML model would have approved, the resolution logic determines the system's output. This logic must be documented, tested, and monitored. The AISDP describes the precedence hierarchy (which component overrides which), the conditions under which each component's output prevails, and the logging of conflict events for audit purposes. The Model Selection Record should include a separate entry for each model component, proportionate to that component's influence on the final output. A primary decision model warrants full evaluation against all six compliance criteria. An embedding model in a RAG pipeline warrants focused assessment covering provenance, linguistic performance, known biases, and version pinning. An auxiliary safety classifier warrants documentation of its accuracy, failure modes, and consequences for the primary system's compliance profile. The AI System Assessor verifies that the Model Selection Record is complete with respect to the system's architecture diagram. Any model component visible in the architecture that lacks a corresponding entry in the Record is a documentation gap. The per-component documentation also supports the substantial modification assessment : changes to any component, including auxiliary models, may trigger a reassessment depending on that component's influence on the system's outputs. Key outputs Hybrid architecture documentation with component interaction maps Conflict resolution logic specification Per-component Model Selection Record entries --- ## Immutable Versioning — Unique Non-Reusable IDs URL: https://docs.standardintelligence.com/immutable-versioning-unique-non-reusable-ids Breadcrumb: Development › Version Control › Model Registry › Immutable Versioning — Unique Non-Reusable IDs Last updated: 28 Feb 2026 Immutable Versioning — Unique Non-Reusable IDs AISDP module(s): Module 10 (Record-Keeping) Regulatory basis: Article 12 Each registered model version must be assigned a unique, non-reusable identifier. This requirement ensures that a model version, once registered, cannot be overwritten, replaced, or confused with another version. The identifier is the anchor for the entire traceability chain: it is referenced in inference logs, deployment records, the AISDP, and the Declaration of Conformity . Immutable versioning means that once a model artefact is registered under a given identifier, neither the artefact nor its identifier can be changed. If the model needs to be updated, a new version is registered under a new identifier. The previous version remains in the registry under its original identifier, available for retrieval throughout the ten-year retention period. This approach prevents the scenario in which a model is silently replaced in the registry, invalidating all references to the original version. The immutability guarantee should be enforced at the registry level, not merely by policy. MLflow, SageMaker, and Vertex AI model registries all support version immutability through their native access control mechanisms. Organisations should verify that the registry's configuration prevents overwriting or deletion of registered versions, and that administrative overrides are logged and auditable. A model version identifier that can be reused or overwritten undermines the entire compliance record. Key outputs Registry configuration enforcing version immutability Verification that identifiers cannot be reused or overwritten Administrative override logging Module 10 AISDP evidence --- ## In-Processing Techniques (Fairness Constraints, Adversarial Debiasing, Invariant Representations) URL: https://docs.standardintelligence.com/in-processing-techniques-fairness-constraints-adversarial Breadcrumb: Development › Data Governance › Bias Mitigation › In-Processing Techniques (Fairness Constraints, Adversarial Debiasing, Invariant Representations) Last updated: 28 Feb 2026 In-Processing Techniques (Fairness Constraints, Adversarial Debiasing, Invariant Representations) AISDP module(s): 4 ( Data Governance and Dataset Documentation ), 2 (Development Process) Regulatory basis: Article 10(2)(f) In-processing mitigations modify the model's training procedure to incorporate fairness objectives. They are more technically demanding than pre-processing techniques and require careful hyperparameter tuning to balance fairness and accuracy. Fairlearn's ExponentiatedGradient is the most practically accessible approach. It solves a constrained optimisation problem, maximising accuracy subject to a fairness constraint such as demographic parity or equalised odds. The algorithm trains many candidate models with different constraint levels and returns the one that best balances the two objectives. It integrates with scikit-learn estimators and requires minimal additional code. Adversarial debiasing (Zhang et al., 2018) trains an adversary network that attempts to predict the protected characteristic from the model's internal representations. The main model is penalised for leaking information about protected characteristics. This technique is effective for deep learning models but requires careful tuning; the adversary's learning rate relative to the main model critically affects the fairness-accuracy trade-off. Learning fair representations (Zemel et al., 2013) takes a more aggressive approach, learning a new feature space that is explicitly uninformative about protected characteristics while remaining predictive for the target variable. The disparate impact remover (Feldman et al., 2015) modifies feature values to reduce correlations with protected characteristics while preserving predictive value. The AISDP documents the specific technique selected, the mathematical formulation of the fairness constraint, the observed trade-off between fairness and accuracy, and the hyperparameter choices. The selection rationale should explain why in-processing was chosen over pre-processing or post-processing, considering the specific bias patterns identified in the pre-training analysis. Key outputs In-processing technique selection and configuration Mathematical formulation of fairness constraint Fairness-accuracy trade-off analysis with hyperparameter documentation --- ## Infrastructure Design URL: https://docs.standardintelligence.com/infrastructure-design Breadcrumb: Development › Architectures › Infrastructure Design Last updated: 28 Feb 2026 Cloud Deployment (Multi-AZ, Multi-Region) AISDP module(s): Module 3 (Architecture and Design), Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 , Annex IV (2)(f) For cloud-hosted high-risk AI systems, the AISDP must specify the cloud provider, the deployment region, the specific services used, and the instance types and resource allocations for inference workloads. Where the system processes personal data, deployment should take place within the EU/EEA unless a valid GDPR Chapter V transfer mechanism is in place; adequacy decisions under GDPR Article 45 (including the EU-US Data Privacy Framework for certified US organisations), standard contractual clauses under GDPR Article 46(2)(c), and binding corporate rules under GDPR Article 47 may each provide a lawful basis for third-country processing. The chosen mechanism, along with an assessment of its suitability for the specific data categories and processing activities, must be documented in the AISDP. Multi-availability-zone deployment ensures that the system survives the failure of a single data centre within a region. Multi-region deployment provides resilience against the failure of an entire cloud region, though it introduces additional complexity around data consistency and latency. The AISDP must document the resilience architecture, including the failover mechanism (active-active or active-passive), the expected failover time, and the data consistency model during failover. Cloud provider data processing agreements must be in place and referenced in the AISDP. These agreements confirm that the provider processes data only on documented instructions, that appropriate security measures are in place, and that the provider assists with GDPR compliance obligations. For systems using managed AI services (SageMaker, Vertex AI, Azure Machine Learning, Databricks), the AISDP must additionally document which managed services are used, what data flows through them, the provider's data handling practices, availability SLAs, and fallback strategies. Key outputs Cloud deployment specification (provider, region, services, instance types) Resilience architecture documentation (multi-AZ, multi-region, failover) Cloud provider data processing agreements Managed AI service dependency documentation Containerisation AISDP module(s): Module 3 (Architecture and Design) Regulatory basis: Annex IV(2)(f), Article 15 Containerisation with Docker and orchestration with Kubernetes provide the infrastructure for reproducible, versioned deployment environments. A Docker container image is immutable once built: it captures the exact operating system, libraries, framework versions, and application code that constitute the runtime environment. This immutability is valuable for compliance because the container image tested during conformity assessment is exactly the image that runs in production. Module 3 of the AISDP captures the container image build process (Dockerfile, base images, build arguments), the container registry (a private registry with access controls and image signing), the orchestration configuration (Kubernetes manifests, Helm charts, deployment strategies), and resource limits and scaling policies. Each container image is tagged with the corresponding code and model version and stored in the private registry with access logging. The supply chain risk in containerisation is the base image. A container built from python:3.11-slim inherits whatever is in that base image at build time. The mitigation is to pin the base image to a specific digest (a SHA-256 hash), scan the built image for vulnerabilities with Trivy, Grype, or Snyk Container, sign it with Docker Content Trust or Sigstore cosign, and store it in a private registry such as Harbor, AWS ECR, Azure ACR, or Google Artifact Registry. The CI pipeline should fail if the container scan reveals critical or high-severity vulnerabilities. Key outputs Dockerfiles with pinned base image digests Private container registry with access controls and image signing Kubernetes manifests or Helm charts for orchestration Container vulnerability scan results as Module 3 evidence Edge Deployment Considerations AISDP module(s): Module 3 (Architecture and Design), Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15, Annex IV(2)(f) Systems embedded in physical products, deployed in air-gapped environments, or operating at the network edge present distinct infrastructure documentation requirements. The AISDP must specify the target hardware platform, the model optimisation techniques used (quantisation to INT8 or FP16, structured pruning, knowledge distillation, framework-specific compilation with TensorRT, ONNX Runtime, TensorFlow Lite, or Core ML) and their measured impact on accuracy and fairness. Each optimisation technique alters the model's behaviour to some degree. The AISDP must document which techniques were applied, the performance and fairness evaluation results for the optimised model (not merely the original), and any subgroups for which the optimisation disproportionately affects accuracy. The evaluation of the optimised model, not the pre-optimisation model, is the compliance-relevant evidence. Edge-deployed models require an over-the-air update mechanism supporting four capabilities: version verification through cryptographic signatures, rollback capability if on-device validation fails, staged rollout to a subset of devices before full fleet deployment, and update logging recording which version each device runs. Logging and monitoring in disconnected environments requires on-device log buffering, log integrity during buffering, batch upload when connectivity is restored, and graceful degradation of monitoring capabilities. Physical security measures (tamper-evident enclosures, secure boot, encrypted storage) must also be documented. Key outputs Target hardware and optimisation technique documentation Performance and fairness evaluation of the optimised model Over-the-air update mechanism specification Disconnected monitoring and physical security documentation Data Sovereignty & Residency AISDP module(s): Module 3 (Architecture and Design), Module 4 ( Data Governance ) Regulatory basis: Article 15, GDPR Chapter V Organisations deploying across multiple EU member states must map each data category to its residency constraints and document how the infrastructure enforces those constraints. Different member states may impose data residency requirements through national legislation, sector-specific regulation, or competent authority guidance. Health data in certain jurisdictions must remain within the member state's borders; financial data may be subject to sector-specific localisation requirements. The data sovereignty analysis must distinguish between training data flows and inference data flows. Training typically occurs in a central location; inference occurs wherever the system is deployed. For systems where inference inputs contain personal data, the inference infrastructure must process the data within the jurisdiction where the data subject resides, or the organisation must have a valid GDPR Chapter V transfer mechanism. Deploying the same model across multiple jurisdictions also raises the question of whether the model itself constitutes a data transfer. If the model was trained on personal data from one jurisdiction and deployed in another, regulators may consider the model's learned parameters to be derived personal data. The EDPB's guidance on anonymisation techniques (Opinion 05/2014) and the Recital 26 "means reasonably likely to be used" test provide the current analytical framework for assessing whether learned parameters retain personal data character. The organisation should document its position on this question and the analysis supporting it, referencing the applicable framework. Systems deployed across multiple regions must also validate inference consistency, running a standardised test suite across all deployment regions to verify that outputs are identical or within defined tolerance bounds. Key outputs Data residency mapping per data category and jurisdiction Infrastructure enforcement mechanisms (region-locked storage policies) Cross-border model deployment analysis Multi-region inference consistency validation results Disaster Recovery Planning (RTO, DR Region) AISDP module(s): Module 3 (Architecture and Design), Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 High-risk AI systems must be resilient to infrastructure failures. Module 3 captures the recovery point objective (RPO) and recovery time objective (RTO), the backup strategy for model artefacts, configuration, and critical data, the failover architecture, the disaster recovery testing schedule and results, and the degraded-mode behaviour. For systems where the AI component is safety-critical, the failsafe behaviour must be explicitly documented: when the AI system fails, what default behaviour takes over? For a recruitment screening system, the failsafe might be routing all applications to human review. For a medical diagnostic system, the failsafe might be displaying a warning that the AI assessment is unavailable. The choice of failsafe behaviour is a design decision with compliance implications, as it determines how the system behaves when it cannot fulfil its intended purpose. The disaster recovery plan is tested periodically, and the test results are retained as Module 3 evidence. The test should verify that the system can be restored within the declared RTO, that the restored system serves the correct model version, that no data is lost beyond the declared RPO, and that the failsafe behaviour activates correctly when the primary system is unavailable. Key outputs RPO and RTO specifications Failover architecture and failsafe behaviour documentation Disaster recovery test schedule and results Module 3 and Module 9 AISDP evidence --- ## Integration Testing URL: https://docs.standardintelligence.com/integration-testing Breadcrumb: Development › CI › CD Pipelines › Integration Testing Last updated: 28 Feb 2026 Contract Tests (Service-to-Service) End-to-End Inference Path Tests (Known Input → Expected Output + Logs) Regression Tests — Golden Dataset with Per-Subgroup Cases Human Oversight Interface Testing (Selenium/Playwright/Cypress Automation) Load Testing (Locust, k6) — Latency & Throughput Under Load Chaos & Fault Injection Testing (Gremlin, Litmus) — Graceful Degradation --- ## Intersectional Pre-Training Analysis — Subgroups & Cell Size Thresholds URL: https://docs.standardintelligence.com/intersectional-pre-training-analysis-subgroups-and-cell Breadcrumb: Development › Data Governance › Pre-Training Bias Assessment › Intersectional Pre-Training Analysis — Subgroups & Cell Size Thresholds Last updated: 28 Feb 2026 Intersectional Pre-Training Analysis — Subgroups & Cell Size Thresholds AISDP module(s): 4 ( Data Governance and Dataset Documentation ) Regulatory basis: Article 10(2)(f) Standard bias analysis examines each protected characteristic in isolation. Intersectional analysis examines combinations, such as female applicants over 55 or disabled applicants from ethnic minority backgrounds. A dataset may be adequate for each characteristic individually yet have critically small cell sizes for intersectional subgroups, making reliable bias detection impossible for those groups. The Technical SME identifies the intersectional subgroups relevant to the system's deployment context. The selection should be informed by the system's intended purpose, the deployment population demographics, and any domain-specific knowledge about which intersectional groups face heightened risk. Cell sizes are reported for all examined intersectional subgroups. Where cell sizes fall below the minimum threshold for meaningful statistical analysis (commonly 30 instances for basic metrics, 100 or more for reliable fairness metrics), the AISDP states this limitation explicitly rather than reporting unreliable metrics. Fairlearn's MetricFrame supports intersectional analysis by accepting multiple sensitive features, computing metrics for every combination, and reporting confidence intervals. Wide confidence intervals signal insufficient data for reliable conclusions. The practical consequence of small cell sizes is that the organisation cannot verify the model's fairness for those subgroups through data-driven testing alone. Compensating controls include synthetic data augmentation targeted at the underrepresented intersections, enhanced post-deployment monitoring with longer observation windows to accumulate sufficient data, mandatory human review for decisions affecting individuals from intersectional subgroups with insufficient testing data, and conservative deployment restrictions. These controls are documented in the AISDP alongside the cell size analysis. Key outputs Intersectional subgroup definition and cell size report Reliability assessment per intersectional subgroup Compensating controls for insufficient cell sizes --- ## Knowledge Base Completeness & Currency URL: https://docs.standardintelligence.com/knowledge-base-completeness-and-currency Breadcrumb: Development › Data Governance › RAG-Specific Governance › Knowledge Base Completeness & Currency Last updated: 28 Feb 2026 Knowledge Base Completeness & Currency AISDP module(s): 4 (Data Governance and Dataset Documentation ) Regulatory basis: Article 10(2), 10(3) In a RAG architecture, the knowledge base functions as the information source that directly shapes the system's outputs. The LLM generates its response based on retrieved documents; if the knowledge base is incomplete, outdated, or biased, the outputs reflect those deficiencies regardless of how well the LLM performs. The prudent compliance approach is to apply Article 10 's data governance requirements to the knowledge base, adapted for inference-time retrieval. Completeness requires the knowledge base to be representative of the domain the system serves. A medical decision-support system whose knowledge base covers only English-language guidelines from US institutions will produce systematically different responses for patients in EU member states where national clinical guidelines differ. A legal research system that underrepresents case law from smaller member states will produce less reliable results for queries concerning those jurisdictions. The Technical SME assesses completeness against the system's intended deployment context and documents coverage gaps. Currency requires a defined staleness threshold: the maximum acceptable age for documents, which varies by domain. Medical guidelines may have a short threshold (updated annually); foundational legal texts may have a longer one. Documents exceeding the threshold are flagged for review, update, or removal. The staleness monitoring process is documented in the PMM plan . The Technical SME implements an automated knowledge base quality pipeline that validates new documents before addition, checking format, metadata, currency, deduplication, and incremental coverage. Documents failing validation are quarantined for manual review. Key outputs Knowledge base completeness assessment Staleness threshold definition per document category Automated quality pipeline specification --- ## Known Limitations URL: https://docs.standardintelligence.com/known-limitations Breadcrumb: Development › Data Governance › Dataset Documentation › Known Limitations Last updated: 28 Feb 2026 Known Limitations AISDP module(s): 4 ( Data Governance and Dataset Documentation ) Regulatory basis: Article 10(2)(f), 10(3); Annex IV (2)(d) Every dataset has limitations. The compliance value lies in documenting them candidly rather than concealing them behind aggregate statistics. The AISDP must record the known gaps, biases, and limitations for each dataset, addressing both what the data contains and what it does not. The limitations record should cover subgroup under-representation (which demographic groups have insufficient data for reliable model performance), temporal biases (whether data was collected during an unusual period that may not generalise), geographic biases (whether data was collected predominantly from certain member states or regions), label quality concerns (whether outcome labels reflect human biases or historical discrimination), missing features (whether features logically required for the system's purpose are absent, forcing reliance on proxy variables), and data quality issues (error rates, missing value patterns, and their potential impact on model behaviour). The Datasheets for Datasets framework's "uses" section requires explicitly stating the limitations relevant to the system's intended purpose. A dataset that is adequate for one application may be unsuitable for another; the limitations assessment must be contextualised against the specific use case. Known limitations feed into two downstream processes. First, they inform the risk assessment (AISDP Module 6 ), where data limitations may translate into risk register entries. Second, they inform the Instructions for Use (AISDP Module 8 ), where deployers must be told about limitations that may affect the system's performance in their deployment context. Key outputs Known limitations record per dataset Limitation-to-risk mapping (feeding AISDP Module 6) Deployer-relevant limitations summary (feeding AISDP Module 8) --- ## Label Bias Analysis — Ground Truth Contamination Assessment URL: https://docs.standardintelligence.com/label-bias-analysis-ground-truth-contamination-assessment Breadcrumb: Development › Data Governance › Pre-Training Bias Assessment › Label Bias Analysis — Ground Truth Contamination Assessment Last updated: 28 Feb 2026 Label Bias Analysis — Ground Truth Contamination Assessment AISDP module(s): 4 ( Data Governance and Dataset Documentation ) Regulatory basis: Article 10(2)(f) Ground truth contamination occurs when the labels used for training are themselves the product of a biased process that the AI system is intended to replicate or improve upon. This is distinct from annotation bias; it concerns the systemic nature of the outcomes the labels represent. In a criminal justice context, the label "re-offended" may reflect differential policing rather than differential behaviour: communities that are policed more heavily generate more arrests, which inflates the apparent re-offence rate for those communities. In a healthcare context, labels derived from historical treatment decisions may reflect access disparities rather than clinical need. In a credit scoring context, historical default data reflects the outcomes of previous credit policies, which may themselves have been discriminatory. The assessment examines the process that generated the labels. Was the process subject to human discretion that could introduce bias? Were there structural factors (differential enforcement, access disparities, historical discrimination) that could contaminate the labels? The Technical SME documents the label generation process, the known or suspected sources of contamination, and their potential impact on the model. Where ground truth contamination is identified, the AISDP documents the compensating controls. These may include using proxy labels that are less susceptible to human bias (though proxy labels carry their own risks), applying bias-aware label smoothing, training the model with fairness constraints that explicitly counteract the known bias direction, or excluding the most contaminated data segments and compensating through synthetic augmentation. Where contamination cannot be adequately mitigated, this is recorded as a residual risk and communicated to the AI Governance Lead for acceptance. Key outputs Ground truth contamination assessment Label generation process documentation Compensating controls for identified contamination --- ## Label Bias Analysis — Inter-Rater Reliability & Relabelling URL: https://docs.standardintelligence.com/label-bias-analysis-inter-rater-reliability-and-relabelling Breadcrumb: Development › Data Governance › Pre-Training Bias Assessment › Label Bias Analysis — Inter-Rater Reliability & Relabelling Last updated: 28 Feb 2026 Label Bias Analysis — Inter-Rater Reliability & Relabelling AISDP module(s): 4 ( Data Governance and Dataset Documentation ) Regulatory basis: Article 10(2)(f) Label bias arises when the outcome labels used as ground truth for training reflect the biases of the humans or processes that generated them. In a recruitment context, the label "hired/not hired" encodes the decisions of human recruiters who may have been influenced by conscious or unconscious bias. Training a model on biased labels teaches the model to replicate that bias. Inter-rater reliability analysis is the primary detection method. Multiple independent labellers rate the same instances, and agreement is measured using Cohen's kappa (for two raters) or Krippendorff's alpha (for multiple raters). Low inter-rater agreement indicates subjective or inconsistent labelling, which increases the risk that labels encode individual biases rather than objective ground truth. The AISDP documents the inter-rater reliability statistics, the labeller qualifications and training, the annotation guidelines provided, and the method used to resolve disagreements. Where inter-rater reliability is low, or where analysis reveals systematic patterns in labelling differences across protected characteristic subgroups, relabelling by diverse panels provides a corrective dataset. The relabelled subset is compared against the original labels to quantify the label bias. The AISDP records the relabelling methodology, the panel composition, the divergence between original and relabelled outcomes, and the decision on how to handle the divergence (replace original labels, use relabelled data as a validation benchmark, or apply bias-aware label smoothing). The annotation process should also be assessed for conditions that support quality. Annotators should be compensated fairly and working under conditions that do not incentivise speed over accuracy. Annotation quality directly affects label accuracy, which directly affects model fairness and performance. Key outputs Inter-rater reliability statistics (Cohen's kappa, Krippendorff's alpha) Label bias assessment (subgroup-level analysis) Relabelling methodology and results (where applicable) --- ## Layer 1 — Data Ingestion URL: https://docs.standardintelligence.com/layer-1--data-ingestion Breadcrumb: Development › Architectures › Eight-Layer Architecture › Layer 1 Last updated: 28 Feb 2026 Schema Contracts & Quality Specification AISDP module(s): Module 4 (Data Governance), Module 3 (Architecture and Design) Regulatory basis: Article 10 , Article 12 The data ingestion layer is the system's first contact with external data, and schema contracts are the primary mechanism for ensuring that only conforming data enters the pipeline. Every data source must have a defined contract specifying a schema (field names, types, and formats), a quality specification (acceptable missing-value rates, value ranges, and distributional properties), and a freshness requirement. The ingestion pipeline enforces these contracts before data enters the system. For batch ingestion, tools such as Great Expectations or Soda Core run expectation suites against each incoming dataset. For streaming ingestion, Apache Kafka's Schema Registry enforces schema validation on every message. Records that fail validation are rejected with a logged error rather than being silently coerced, ensuring that malformed or out-of-distribution data does not corrupt the training or inference pipeline. This control addresses the risk of intent drift at the data source. Upstream systems change: a CRM vendor may modify a field's enumeration values, a data provider may alter a date encoding, or a partner organisation may change how it computes a derived field. Without boundary validation, these changes enter the pipeline undetected. With it, they are caught, quarantined, and investigated. The investigation log becomes Module 4 evidence in the AISDP. Key outputs Schema contract per data source (field names, types, formats, value ranges) Quality specification per data source (missing-value thresholds, distributional properties) Validation pipeline configuration (Great Expectations, Soda Core, or Kafka Schema Registry) Investigation and resolution logs for rejected records Freshness Requirements AISDP module(s): Module 4 (Data Governance) Regulatory basis: Article 10 Each data source feeding the AI system must have a documented freshness requirement: the maximum acceptable age of records at the point of ingestion. This requirement forms part of the schema contract described in Article 110 and ensures that the system does not make decisions based on stale information. Freshness requirements vary by data source and use case. A real-time fraud detection system may require transaction data no older than seconds; a quarterly workforce planning model may accept data that is weeks old. The Technical SME defines the freshness threshold based on the system's intended purpose and the rate at which the underlying data changes. Records that exceed the freshness threshold are flagged or rejected at the ingestion layer. The freshness requirement also has implications for data distribution monitoring. If the temporal profile of incoming data shifts (for example, because a batch feed is delayed), the ingestion layer should detect this and raise an alert. Stale data that passes schema validation but falls outside the expected temporal window may introduce subtle biases, particularly if the staleness affects some subgroups more than others. Key outputs Freshness threshold per data source, documented in the schema contract Alerting configuration for freshness violations Evidence of Technical SME rationale for each threshold Boundary Validation Tooling (Great Expectations, Soda Core, Kafka Schema Registry) AISDP module(s): Module 4 (Data Governance), Module 3 (Architecture and Design) Regulatory basis: Article 10, Annex IV (2)(d) Boundary validation tooling implements the schema contracts and quality specifications described in Article 110 as automated, repeatable checks. Three tools are identified suited to different ingestion patterns. Great Expectations is an open-source framework for defining data quality expectations as code. Expectation suites declare what properties a dataset must satisfy (column types, value ranges, uniqueness constraints, distributional bounds) and are executed against each incoming batch. Results are logged, and failures trigger quarantine workflows. Soda Core offers similar capabilities with a SQL-based syntax that integrates well with warehouse-centric architectures. For streaming architectures, Apache Kafka's Schema Registry validates every message against a registered Avro, Protobuf, or JSON schema before it is committed to a topic. Messages that fail validation are rejected to a dead-letter queue. The dead-letter queue is not a discard mechanism; it is an investigation queue. Records routed there must be examined, the root cause identified, and the resolution documented. The choice of tooling depends on the system's ingestion pattern (batch, streaming, or hybrid) and the organisation's existing data infrastructure. Regardless of which tool is selected, the validation layer must be automated, version-controlled alongside the pipeline code, and produce audit-grade logs of every validation run. Key outputs Configured validation tooling (Great Expectations, Soda Core, and/or Kafka Schema Registry) Expectation suites or schema definitions version-controlled in the repository Dead-letter queue configuration and investigation procedures Validation run logs as Module 4 evidence Dead-Letter Queue for Non-Conforming Records AISDP module(s): Module 4 (Data Governance), Module 10 (Record-Keeping) Regulatory basis: Article 10, Article 12 When the ingestion layer rejects a data record for failing schema validation, range checks, or freshness requirements, the record must not be silently discarded. It is routed to a dead-letter queue: a holding area for non-conforming records that preserves them for investigation. The dead-letter queue serves two compliance functions. First, it provides evidence that the validation controls are operating correctly; a queue that is never populated may indicate that validation rules are too permissive. Second, it creates an investigation trail. For each record in the queue, the data engineering team documents what failed, why, and how the issue was resolved (correction and re-ingestion, permanent rejection with rationale, or escalation to the data source owner). Investigation records from the dead-letter queue feed into AISDP Module 4 as evidence of data governance diligence. They also support post-market monitoring by revealing patterns in data quality issues. A sustained increase in dead-letter queue volume from a particular source, for instance, may indicate an upstream change that requires contractual or technical remediation. Key outputs Dead-letter queue configuration in the ingestion pipeline Investigation and resolution procedures for queued records Periodic summary reports on queue volume and root causes Module 4 evidence records Intent Drift Control — Source Change Detection & Quarantine AISDP module(s): Module 4 (Data Governance), Module 12 (Post-Market Monitoring) Regulatory basis: Article 10, Article 72 Intent drift at the data source is one of the most insidious risks to a high-risk AI system. Upstream systems change without notice: a CRM vendor modifies enumeration values, a data provider alters a field format, or a partner organisation changes how it computes a derived metric. Each of these changes alters the data the model receives, potentially degrading performance or shifting fairness profiles without triggering any visible error. The ingestion layer's boundary validation catches structural changes such as schema violations and out-of-range values. Source change detection goes further by monitoring for subtler distributional shifts that pass schema validation. The ingestion layer computes real-time summary statistics (mean, variance, quantile distributions) for incoming data and compares them against the training data baseline. Statistically significant shifts are reported to the monitoring layer. When a source change is detected, the affected data is quarantined pending investigation. The quarantine prevents potentially compromised data from entering the training or inference pipeline. The investigation determines whether the change is benign (a natural evolution in the underlying population), material (requiring model revalidation or retraining), or indicative of a data quality failure at the source. The resolution is documented in the AISDP and feeds into the post-market monitoring record. Key outputs Source change detection configuration (statistical tests, thresholds, baseline definitions) Quarantine procedures for flagged data Investigation and resolution log template Integration with post-market monitoring alerting --- ## Layer 2 — Feature Engineering URL: https://docs.standardintelligence.com/layer-2--feature-engineering Breadcrumb: Development › Architectures › Eight-Layer Architecture › Layer 2 Last updated: 28 Feb 2026 Training-Serving Consistency — Feature Stores & Single Computation Spec AISDP module(s): Module 3 (Architecture and Design), Module 5 (Testing and Validation) Regulatory basis: Article 15 , Annex IV (2)(b) Training-serving skew is a pernicious failure mode in which the features used during production inference are computed differently from those used during training. This often occurs because the training feature pipeline and the serving feature pipeline are maintained by different teams, use different code paths, or run on different infrastructure. A model trained on features computed with one normalisation scheme, then served features computed with a slightly different scheme, will produce silently degraded predictions. Feature stores are the standard mitigation. Feast, Tecton, and Hopsworks each centralise feature definitions so that each feature has a single computation specification used for both training and serving. The store also versions feature values, making the exact features that trained a given model version retrievable for audit purposes. Feast is open-source and integrates with most cloud and on-premises data infrastructure; Tecton and Hopsworks are commercial offerings with additional real-time computation and monitoring capabilities. The single computation specification is a compliance requirement as well as an engineering best practice. If the features entering the model at inference time differ from those it was trained on, the validation results documented in AISDP Module 5 are no longer representative of the system's actual behaviour. Training-serving consistency is therefore a prerequisite for the validity of the conformity assessment . Key outputs Feature store deployment (Feast, Tecton, or Hopsworks) Single computation specification per feature, version-controlled Parity verification tests confirming training and serving feature equivalence Module 3 and Module 5 documentation of the consistency mechanism Feature Distribution Monitoring vs Training Baseline AISDP module(s): Module 5 (Testing and Validation), Module 12 (Post-Market Monitoring) Regulatory basis: Article 15, Article 72 Computed feature values in production must be monitored for distributional shift against the training baseline. Drift in individual features can cause localised performance degradation that aggregate metrics may miss entirely. A feature whose distribution shifts may push the model into a regime it was not trained for, with potentially subgroup-specific consequences. Evidently AI provides automated feature distribution monitoring, computing drift metrics such as Population Stability Index (PSI), Kolmogorov-Smirnov tests, and Jensen-Shannon divergence on a per-feature basis. Thresholds are configured for each metric, and breaches trigger alerts that feed into the post-market monitoring framework. This monitoring should run continuously on production data. The distinction between feature-level drift and aggregate output drift is significant. A system may continue to produce acceptable aggregate accuracy while individual features shift in ways that degrade performance for specific subgroups. Feature distribution monitoring catches these shifts early, before they manifest as fairness violations in the post-processing layer. The monitoring results are documented in AISDP Module 12 as part of the ongoing post- market surveillance record. Key outputs Per-feature drift monitoring configuration (metrics, thresholds, alerting) Baseline feature distributions from the training dataset Integration with post-market monitoring dashboards Module 12 evidence of continuous feature drift surveillance Feature Registry — Proxy Variable Flags & Justifications AISDP module(s): Module 4 (Data Governance), Module 6 (Risk Management System) Regulatory basis: Article 10 , Article 9 Every feature used by the system must be defined in a central feature registry . The registry records each feature's name, source, transformation logic, data type, expected distribution, business justification for inclusion, and an assessment of its proxy variable risk. New features cannot be added to the production system without registry approval. Proxy variable risk is a particular concern for high-risk systems. A feature that correlates strongly with a protected characteristic (such as postcode correlating with ethnicity) may introduce indirect discrimination even when the protected characteristic itself is excluded from the model's inputs. The feature engineering layer computes each feature's correlation with protected characteristics and records the result in the registry. Features exceeding a defined correlation threshold are reviewed by the Technical SME and the AI Governance Lead . Where such a feature is retained, the registry must include a documented justification explaining why the feature's predictive value outweighs the proxy risk. This justification must be specific and evidence-based, not a generic assertion that the feature improves accuracy. The feature registry feeds into AISDP Module 4 (data governance) and Module 6 (risk management) and supports the post-training fairness evaluation described above. Key outputs Central feature registry with all required metadata fields Proxy variable correlation analysis per feature Documented justifications for retained high-correlation features AI Governance Lead sign-off on proxy variable decisions Intent Drift Control — Upstream Normalisation Change Detection AISDP module(s): Module 4 (Data Governance), Module 12 (Post-Market Monitoring) Regulatory basis: Article 10, Article 72 The feature engineering layer is vulnerable to a specific class of intent drift: upstream normalisation changes. When a source system alters the way it computes or formats a field, the raw data may still pass schema validation at the ingestion layer, but the features derived from that data may shift in meaning. A field that previously represented a percentage expressed as a decimal (0.00–1.00) reinterpreted as a whole number (0–100) would produce wildly different feature values without triggering a schema error. Detection of these changes requires monitoring the distribution of computed features against the training baseline, combined with transformation versioning. Feature transformation logic is version-controlled alongside model code, and each model version is explicitly linked to the specific transformation version that produced its training features. If the feature values in production diverge from the expected distribution for the active transformation version, the system flags the anomaly. When a normalisation change is detected, the affected feature pipeline is investigated. The resolution may involve updating the transformation logic to accommodate the upstream change, reverting to the previous source format through coordination with the data provider, or retraining the model on features computed under the new normalisation. The investigation and resolution are documented in the AISDP, contributing to Module 4 and Module 12 evidence. Key outputs Transformation versioning linked to model versions Feature distribution anomaly detection at the engineering layer Investigation and resolution procedures for normalisation changes AISDP Module 4 and Module 12 evidence records --- ## Layer 3 — Model Inference URL: https://docs.standardintelligence.com/layer-3--model-inference Breadcrumb: Development › Architectures › Eight-Layer Architecture › Layer 3 Last updated: 28 Feb 2026 Model Version Pinning & Cryptographic Hash Verification AISDP module(s): Module 3 (Architecture and Design), Module 10 (Record-Keeping) Regulatory basis: Article 15 , Article 12 The inference layer must serve a specific, immutable model version. Model version pinning ensures that the exact model validated during testing is the model that runs in production. Switching to a different version constitutes a deployment event requiring human approval and CI/CD pipeline validation; it cannot happen silently through an automatic update. The model registry enforces this through stage management. Models progress through defined stages: experimental, staging, production, and archived. Only models in the production stage can be loaded by the inference service, and promotion to production requires documented approval. MLflow, SageMaker, and Vertex AI model registries all support this pattern. Cryptographic hash verification adds a further assurance layer. Each model artefact is hashed at the point of registration, and the inference service verifies the hash on load. This confirms that the model binary has not been tampered with between registration and deployment. The hash, together with the model version identifier and the deployment approval record, is logged as part of the Article 12 audit trail, enabling precise reconstruction of which model was serving at any given point in time. Key outputs Model registry with stage management (experimental → staging → production → archived) Cryptographic hash computation at registration and verification at load Deployment event logging with version, hash, and approval evidence Module 10 audit trail entries Confidence Thresholding — Below-Threshold → Human Review AISDP module(s): Module 7 (Human Oversight), Module 3 (Architecture and Design) Regulatory basis: Article 14 , Article 15 Every classifier produces some form of confidence estimate: a probability score, a softmax output, or a distance from the decision boundary. Confidence thresholding routes predictions that fall below a defined threshold to human review before they are acted upon. This control prevents the system from acting on uncertain predictions, which are the most likely to diverge from intended behaviour. The Technical SME calibrates the threshold carefully, using the validation dataset. The threshold is set at the level where the model's error rate for below-threshold predictions is unacceptably high. Too high a threshold, and most predictions are sent to humans, defeating the purpose of automation. Too low, and uncertain predictions slip through with potential adverse consequences for affected persons. The threshold value, the calibration methodology, and the resulting human review volume are all documented in the AISDP. The review volume has operational implications: the human oversight interface must be designed to handle the expected caseload without creating bottlenecks that tempt operators to reduce review thoroughness. Confidence thresholding is also subject to ongoing monitoring; if the distribution of confidence scores shifts in production, the threshold may need recalibration. Key outputs Defined confidence threshold with calibration methodology Documentation of expected human review volume at the chosen threshold Integration with human oversight interface workflow Module 7 and Module 3 AISDP entries Output Constraint Enforcement (Pydantic Schema Validation) AISDP module(s): Module 3 (Architecture and Design), Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 Output constraint enforcement is the last-resort guard at the inference layer. It enforces hard bounds on what the model can output: scores must fall within a defined range, classifications must be drawn from a defined set, and generated text must conform to length and format constraints. This prevents pathological model behaviour, such as extreme score values from adversarial inputs or hallucinated classification labels, from propagating downstream. Pydantic schema validation provides a clean implementation. The output schema is defined as a Pydantic model specifying field types, value ranges, and enumeration constraints. Every inference output is validated against this schema, and outputs that do not conform are rejected. The rejection is logged with the original output, the validation failure reason, and the request context, creating an audit record that supports both debugging and compliance evidence. This control is particularly important for robustness under adversarial conditions. Adversarial inputs may cause the model to produce outputs far outside its expected range. Without output constraints, these outputs would propagate to the post-processing layer, the explainability layer, and ultimately to human operators or affected persons. Output constraint enforcement ensures that even if the model misbehaves, the system as a whole remains within its documented operational bounds. Key outputs Pydantic output schema definition Validation pipeline integrated into the inference service Rejection logging configuration Module 3 and Module 9 documentation of the constraint mechanism Intent Drift Control — Production-Stage Models Only AISDP module(s): Module 3 (Architecture and Design), Module 10 (Record-Keeping) Regulatory basis: Article 15, Article 12 A specific form of intent drift at the inference layer occurs when a model that has not completed the full validation and approval pipeline is inadvertently served in production. This can happen through misconfigured deployment scripts, manual overrides during debugging, or automation that promotes models between stages without the required governance checks. The control is architectural: the inference service is configured to load only models that the model registry marks as being in the production stage. Promotion to production requires documented approval through the CI/CD compliance gates. The inference service verifies the model's stage on load, and any attempt to serve a model in experimental, staging, or archived status is blocked and logged as a security event. This constraint ensures that every model serving predictions has passed the conformity checks, fairness evaluations, and governance approvals documented in the AISDP. It also simplifies audit: the model registry's stage history, combined with the inference service's load logs, provides a complete record of which validated model was active at any point. Penetration testing should specifically verify that this constraint cannot be bypassed. Key outputs Inference service configuration restricting loads to production-stage models Model registry stage management with approval requirements Security event logging for attempted non-production model loads Penetration test verification of the constraint --- ## Layer 4 — Post-Processing URL: https://docs.standardintelligence.com/layer-4--post-processing Breadcrumb: Development › Architectures › Eight-Layer Architecture › Layer 4 Last updated: 28 Feb 2026 Business Rule Application — Documentation & Override Logging AISDP module(s): Module 3 (Architecture and Design), Module 6 (Risk Management System) Regulatory basis: Article 9 , Annex IV (2)(b) Many high-risk AI systems apply business rules after model inference: minimum score thresholds, hard rejection criteria, fairness calibrations, and score adjustments. Each rule modifies the model's raw output and changes the outcome that the affected person experiences. The post-processing layer is where these rules are applied, and each one must be transparently documented. The Technical SME documents every post-processing rule in the AISDP with three elements: the rule itself (what it does), the rationale (why it exists), and the fairness impact assessment (how the rule affects different subgroups). A rule that automatically rejects applicants without a university degree, for instance, may disproportionately affect certain demographic groups; the Technical SME acknowledges and assesses this impact rather than treating the rule as a neutral business requirement. Override logging at this layer is essential. Every instance where a business rule or fairness calibration changes the model's raw output is logged with the original output, the modified output, and the specific rule that triggered the modification. This log enables retrospective analysis: if the fairness profile shifts in production, the organisation can determine whether the shift originates in the model's predictions or in the post-processing rules. Key outputs Documented catalogue of all post-processing business rules Per-rule rationale and fairness impact assessment Override logging capturing original output, modified output, and triggering rule Module 3 and Module 6 AISDP entries Threshold Stability Monitoring AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 72 , Article 15 Where the post-processing layer applies decision thresholds (for example, "shortlist candidates with scores above 65"), the proportion of inputs crossing each threshold must be monitored over time. Changes in the crossing rate may indicate that the model's score distribution has shifted, even if the threshold itself has not been altered. A stable threshold applied to a shifting score distribution produces shifting outcomes. If the model begins producing higher scores on average, a fixed threshold will admit more candidates, potentially changing the system's effective behaviour without any configuration change. Conversely, a downward score drift may cause the system to reject more candidates than intended. Both scenarios represent outcome drift. Threshold stability monitoring tracks the crossing rate, identifies statistically significant changes, and triggers alerts when the rate deviates beyond configured bounds. The alert feeds into the post-market monitoring framework and may prompt a threshold recalibration exercise. The monitoring results are documented in AISDP Module 12 as part of the continuous surveillance record. Key outputs Threshold crossing rate monitoring per decision threshold Statistical significance testing and alerting configuration Integration with post-market monitoring dashboards Module 12 evidence of threshold stability surveillance Fairness Re-Evaluation on Production Data AISDP module(s): Module 5 (Testing and Validation), Module 12 (Post-Market Monitoring) Regulatory basis: Article 9, Article 72 Fairness metrics computed during development reflect the system's behaviour on the validation and test datasets. Production data may differ from those datasets in distribution, composition, and temporal characteristics. The fairness profile observed during development is therefore not guaranteed to hold in production. The post-processing layer periodically recomputes the fairness metrics that were evaluated during development, using production data that has passed through the complete pipeline. This re-evaluation catches drift that affects the final outputs, not just the model's raw predictions. A subgroup that received equitable treatment during testing may experience disparate outcomes in production if the input population has shifted or if business rules interact differently with production data patterns. Fairlearn's ThresholdOptimizer can be applied at this layer to find subgroup-specific thresholds that satisfy a fairness constraint while maximising accuracy. The optimised thresholds should be re-validated periodically, because distributional shifts in the model's raw outputs can render previously optimal thresholds sub-optimal. The re-evaluation results feed into AISDP Module 5 (as updated validation evidence) and Module 12 (as post-market monitoring records). Key outputs Periodic fairness re-evaluation schedule and methodology Production fairness metrics compared against development baselines Threshold re-optimisation records where applicable Module 5 and Module 12 AISDP evidence Outcome Drift Control — Periodic Threshold Recalibration AISDP module(s): Module 12 (Post-Market Monitoring), Module 6 (Risk Management System) Regulatory basis: Article 72, Article 9 Threshold recalibration is the corrective action triggered when threshold stability monitoring or fairness re-evaluation identifies that the system's post-processing thresholds are no longer producing the intended outcomes. Recalibration adjusts the thresholds to restore alignment between the system's actual behaviour and its documented intent. The recalibration process follows a structured workflow. The Technical SME analyses the drift signal to determine whether the root cause lies in the model's raw output distribution, in the production input distribution, or in both. The appropriate corrective action depends on the root cause: if the model's outputs have shifted, retraining may be required; if the input distribution has shifted, threshold adjustment may suffice. Any threshold change is a material modification to the system's behaviour and must pass through the governance framework. The change is documented in the AISDP with the analysis that prompted it, the old and new threshold values, the expected impact on fairness and accuracy metrics, and the approval record. The updated thresholds are validated on recent production data before deployment, and post-deployment monitoring confirms that the recalibration achieved its intended effect. Key outputs Root cause analysis for threshold drift Updated threshold values with supporting evidence Governance approval for the threshold change Post-deployment validation of recalibration effectiveness Module 6 and Module 12 AISDP entries --- ## Layer 5 — Explainability URL: https://docs.standardintelligence.com/layer-5--explainability Breadcrumb: Development › Architectures › Eight-Layer Architecture › Layer 5 Last updated: 28 Feb 2026 Explanation Methods (SHAP, LIME, GradCAM, Attention) AISDP module(s): Module 3 (Architecture and Design), Module 7 (Human Oversight) Regulatory basis: Article 14 , Article 13 , Article 86 The explainability layer generates human-readable explanations of individual predictions, supporting the Article 14 requirement that operators be able to "correctly interpret the system's output." The choice of explanation method depends on the model architecture and the computational constraints of the production environment. SHAP TreeExplainer is suited to gradient-boosted tree models (XGBoost, LightGBM) and runs in near-linear time, making it viable for per-prediction explanation at high throughput. KernelSHAP and DeepSHAP handle neural networks but are significantly more expensive, often requiring hundreds or thousands of model evaluations per explanation. LIME (Local Interpretable Model-agnostic Explanations) offers a model-agnostic alternative by fitting a local surrogate model to each prediction. GradCAM and attention-based methods are specific to convolutional and transformer architectures respectively. Fiddler AI provides production-grade explainability infrastructure, hooking into the serving pipeline to compute feature attributions per prediction, store them alongside the inference log, and provide monitoring dashboards. The AISDP must document the explanation method selected, its computational cost, the latency impact on the inference pipeline, and the validation performed to confirm that the explanations are faithful to the model's actual reasoning. Key outputs Selected explanation method(s) with justification for the choice Computational cost and latency impact assessment Integration with the inference pipeline and logging infrastructure Module 3 and Module 7 documentation Fidelity Validation — Attribution vs Model Sensitivity AISDP module(s): Module 5 (Testing and Validation), Module 7 (Human Oversight) Regulatory basis: Article 14 An explanation that attributes a decision to Feature A when the model actually relied on Feature B is worse than no explanation at all, because it misleads the human overseer. Fidelity validation tests whether the explainability layer's attributions accurately reflect the model's actual behaviour. The Technical SME validates explanations by comparing the explanation's feature attributions against the model's sensitivity to feature perturbations. If the explanation claims that Feature A was the dominant driver, perturbing Feature A should produce a larger change in the model's output than perturbing other features. Systematic disagreement between the attributions and the perturbation analysis indicates that the explanation method is unreliable for the given model. Fidelity validation should be performed during development (as part of the initial model validation), at deployment (to confirm that the production explanation pipeline matches the development results), and periodically during operation (to catch cases where explanation fidelity degrades due to input distribution shifts). The validation results are documented in AISDP Module 5 and feed into the human oversight design documented in Module 7. Key outputs Fidelity validation test suite (attribution vs perturbation analysis) Validation results at development, deployment, and periodic intervals Documented limitations where fidelity is imperfect Module 5 and Module 7 AISDP evidence Audience-Appropriate Abstraction (Operators vs Affected Persons) AISDP module(s): Module 7 (Human Oversight), Module 8 (Transparency and User Information) Regulatory basis: Article 14, Article 13, Article 86 Explanations must be tailored to their audience. Technical operators require precise feature contributions and confidence indicators to evaluate the system's output and decide whether to accept or override it. Affected persons require plain-language explanations that avoid jargon and focus on the factors most relevant to their individual situation. The Technical SME designs at least two explanation formats: one for operators (detailed, quantitative, showing feature attributions and confidence scores) and one for affected persons (narrative, accessible, focusing on the key reasons for the outcome). The AISDP must document both formats, the rationale for the abstraction choices, and the validation performed to confirm comprehensibility. Comprehensibility validation involves testing the explanations with representative users from each audience. Can operators use the detailed explanation to form an independent judgement? Can affected persons understand the plain-language explanation well enough to identify potential errors or challenge the outcome? The results of this validation feed into AISDP Module 7 (human oversight design) and Module 8 (transparency and user information), and they also inform the ongoing explanation consistency monitoring described above. Key outputs Operator-facing explanation format with feature attributions and confidence indicators Affected-person-facing explanation format in plain language Comprehensibility validation results for each audience Module 7 and Module 8 documentation Explanation Consistency Monitoring AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 72 The explanations provided by the explainability layer should remain consistent over time for similar inputs. If the dominant features in explanations shift without a corresponding model update, this may indicate that the model's internal behaviour is changing in response to input distribution shifts, or that the explanation method itself is unstable. Fiddler AI and similar monitoring tools track explanation patterns over time, identifying changes in the ranking or magnitude of feature attributions across the prediction population. An alert on explanation pattern change serves as a valuable early warning signal for outcome drift, sometimes surfacing issues before they are detected by aggregate performance or fairness metrics. Explanation consistency monitoring feeds into the post-market monitoring framework documented in AISDP Module 12. It complements the feature distribution monitoring at Layer 2 and the threshold stability monitoring at Layer 4, providing a multi-layered view of the system's behavioural stability. When an explanation consistency alert fires, the investigation should determine whether the root cause is input drift, model drift, or an artefact of the explanation method. Key outputs Explanation pattern tracking configuration Alerting thresholds for attribution shifts Integration with post-market monitoring dashboards Investigation procedures for explanation consistency alerts Comprehensibility Validation Records AISDP module(s): Module 5 (Testing and Validation), Module 7 (Human Oversight) Regulatory basis: Article 14, Article 13 Comprehensibility validation confirms that the explanations generated by the system are understandable to their intended audiences. This is distinct from fidelity validation, which tests whether explanations are accurate. An explanation can be faithful to the model's reasoning yet incomprehensible to the person reading it. The validation typically involves presenting explanations to representative users from each target audience and assessing whether they can correctly interpret the explanation's meaning. For operators, the test is whether the explanation supports independent judgement: can the operator identify which factors drove the outcome and assess whether those factors are reasonable? For affected persons, the test is whether the explanation enables them to understand the key reasons for the decision and identify grounds for challenge. The AISDP must retain records of comprehensibility validation, including the methodology used, the participant profiles, the scenarios tested, the results, and any design changes made in response to the findings. These records form part of Module 5 (validation evidence) and Module 7 (human oversight design), demonstrating that the organisation has taken active steps to ensure its explanations are fit for purpose. Key outputs Comprehensibility validation methodology and participant profiles Test scenarios and results Design changes made in response to findings Module 5 and Module 7 AISDP evidence records --- ## Layer 6 — Human Oversight Interface URL: https://docs.standardintelligence.com/layer-6--human-oversight-interface Breadcrumb: Development › Architectures › Eight-Layer Architecture › Layer 6 Last updated: 28 Feb 2026 Mandatory Review Workflow — Auto-Acceptance Prevention AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Article 14 The human oversight interface must enforce a review step before any system output is acted upon. For high-risk systems, auto-acceptance configurations, where the system's outputs are applied without human review, must be technically prevented. This is an architectural constraint, not a policy constraint. The deployment infrastructure is designed so that the only path from model inference to consequential action passes through the human review interface. There should be no API endpoint, configuration flag, or administrative override that allows system outputs to bypass human review. Penetration testing should specifically test for human oversight bypass paths. This control directly implements Article 14's requirement that high-risk AI systems be designed to be effectively overseen by natural persons. A system that offers oversight as an option but permits bypass under certain conditions does not meet this standard. The mandatory review workflow is documented in AISDP Module 7, including the technical mechanisms that enforce it and the testing performed to verify that bypass is not possible. Key outputs Architectural design ensuring all outputs pass through human review Verification that no bypass paths exist (API, configuration, or administrative) Penetration test results confirming bypass prevention Module 7 AISDP documentation Automation Bias Countermeasures (Data-First Display, Dwell Time, Calibration Cases) AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Article 14 Article 14(4) requires that oversight measures enable individuals, as appropriate to the circumstances, to "properly understand the relevant capacities and limitations of the AI system" and to "correctly interpret the system's output." An interface that presents the system's recommendation with a prominent "Accept" button and a small "Override" link does not satisfy this requirement, even if the operator technically has override capability. Effective countermeasures against automation bias are rooted in human factors research. Four specific techniques have evidence behind them. Data-before-recommendation display shows the underlying case data (the applicant's profile, the patient's history, the transaction details) before revealing the system's recommendation, forcing the operator to begin forming their own assessment before being anchored by the system's suggestion. Minimum dwell time prevents the operator from accepting the recommendation until a minimum period has elapsed, typically 15 to 60 seconds depending on case complexity, blocking rapid bulk-acceptance without review. Confidence visualisation displays the system's confidence level prominently, with uncertainty highlighted rather than hidden. A prediction at 52% confidence should look visually different from one at 98% confidence. Calibration cases are injected at random intervals, presenting the operator with cases where the correct answer is known (drawn from the golden test dataset) and recording whether the operator agrees with the system. Operators who agree with the system on cases where the system is wrong are exhibiting automation bias, and this signal feeds into operator training and oversight review. Key outputs Interface design implementing all four countermeasures Dwell time configuration per case complexity tier Calibration case injection schedule and golden dataset specification Module 7 documentation of countermeasure design and rationale Override Capability — Rationale Capture AISDP module(s): Module 7 (Human Oversight), Module 10 (Record-Keeping) Regulatory basis: Article 14, Article 12 Operators must have the ability to override any system recommendation. This capability is a core requirement of Article 14 and must be genuinely accessible, not hidden behind multiple menu levels or discouraged through interface design. When an operator exercises an override, the system captures a structured record of the event. The override record includes the operator's identity, the original system recommendation, the override decision (what the operator chose instead), and the stated rationale for the override. The rationale field should be structured to capture meaningful information; a free-text field that operators routinely fill with "disagree" provides little analytical value. Drop-down selections for common override reasons, supplemented by a free-text field for unusual cases, strike a practical balance. Override records serve multiple compliance purposes. They contribute to the Article 12 audit trail documented in Module 10. They feed into override rate monitoring, which detects patterns indicative of automation bias or system degradation. They also provide evidence for the ongoing adequacy of human oversight: a consistently high override rate may indicate that the system's recommendations are not meeting quality expectations, prompting investigation and potential model retraining. Key outputs Override interface design with structured rationale capture Override record schema (operator, recommendation, decision, rationale) Integration with logging infrastructure and Module 10 audit trail Module 7 AISDP documentation Override Rate Monitoring (Aggregate, Per-Deployer, Per-Operator) AISDP module(s): Module 12 (Post-Market Monitoring), Module 7 (Human Oversight) Regulatory basis: Article 72 , Article 14 The percentage of system recommendations that operators override is a signal of system health and oversight quality. The monitoring layer tracks override rates at three levels of granularity: aggregate (across all operators and deployers), per-deployer (to identify deployer-specific patterns), and per-operator (to identify individual operators who may be exhibiting automation bias or, conversely, overriding excessively). Consistently low override rates may indicate automation bias: operators are accepting the system's recommendations without meaningful review. Suddenly increasing override rates may indicate that the system's outputs are degrading. Divergent rates between operators or deployers may reveal training gaps, differing operational contexts, or inconsistent application of the review workflow. Review time monitoring complements override rate monitoring. Average review time per case is a proxy for review thoroughness. Operators consistently reviewing cases in under 60 seconds are unlikely to be performing meaningful oversight; this threshold is flagged for investigation. Together, override rates and review times provide a comprehensive picture of whether the human oversight measures documented in AISDP Module 7 are functioning as intended in practice. The results feed into Module 12 as post-market monitoring evidence. Key outputs Override rate dashboards at aggregate, per-deployer, and per-operator levels Review time monitoring with sub-60-second flagging Alerting thresholds for abnormal override rates and review times Module 7 and Module 12 AISDP evidence Review Time Monitoring — Sub-60-Second Flagging AISDP module(s): Module 7 (Human Oversight), Module 12 (Post-Market Monitoring) Regulatory basis: Article 14, Article 72 Average review time per case serves as a proxy for review thoroughness. Operators who consistently review cases in under 60 seconds are unlikely to be performing meaningful oversight, regardless of how many cases they process. This metric complements the override rate monitoring described in and together the two provide a comprehensive picture of whether human oversight is functioning as intended. The monitoring layer tracks per-operator review times and flags cases where the elapsed time between case presentation and operator action falls below the 60-second threshold. The threshold may be adjusted based on case complexity; straightforward cases may legitimately require less time, while complex cases should require more. The system should categorise cases by complexity tier and apply tier-appropriate review time thresholds. When sub-threshold review times are detected, the investigation may reveal legitimate explanations (experienced operators making rapid but well-informed decisions) or concerning patterns (operators bulk-accepting without review). The response depends on the finding: additional training, interface redesign, workload adjustment, or, in persistent cases, escalation through the governance framework. Review time monitoring results are documented in AISDP Module 7 and Module 12 as evidence that the organisation actively monitors the quality of human oversight. Key outputs Per-operator review time tracking with sub-60-second flagging Complexity-tiered review time thresholds Investigation procedures for flagged patterns Module 7 and Module 12 AISDP evidence --- ## Layer 7 — Logging & Audit URL: https://docs.standardintelligence.com/layer-7--logging-and-audit Breadcrumb: Development › Architectures › Eight-Layer Architecture › Layer 7 Last updated: 28 Feb 2026 Immutable Logging — Append-Only & Cryptographic Hash Chains AISDP module(s): Module 10 (Record-Keeping) Regulatory basis: Article 12 Article 12 requires automatic recording of events during the system's operation. The word "automatic" is significant: logging must be a structural property of the system, not something that depends on application code remembering to write a log entry. OpenTelemetry provides the architectural pattern, instrumenting the application at the framework level to capture traces, spans, and structured log events automatically. Immutability is enforced at the storage layer. The simplest approach is WORM (Write Once Read Many) storage. AWS S3 Object Lock in compliance mode prevents deletion or modification of log objects for a defined retention period; no user, including the account administrator, can override the lock. Azure Immutable Blob Storage and Google Cloud Logging retention locks offer equivalent capabilities. For organisations requiring the highest assurance, cryptographic hash chains add tamper evidence. Each log entry includes a hash of the preceding entry, creating a chain that breaks visibly if any entry is modified or deleted. This is computationally inexpensive and can be implemented as a thin layer on top of any logging backend. The combination of WORM storage and hash chains provides both prevention (logs cannot be altered) and detection (alterations are visible), satisfying the immutability requirement for Article 12 compliance. Key outputs OpenTelemetry instrumentation across all system layers WORM-configured log storage (S3 Object Lock, Azure Immutable Blob, or equivalent) Cryptographic hash chain implementation (where required) Module 10 documentation of the immutability mechanism Comprehensive Event Coverage (Nine Event Types) AISDP module(s): Module 10 (Record-Keeping) Regulatory basis: Article 12, Annex IV (3) Gaps in logging create blind spots that undermine auditability. The logging layer must capture every material event in the system's operation. A minimum event set is specified comprising nine event types. Data ingestion events record the source, timestamp, record count, and quality check result. Feature computation events record the feature version and computation status. Inference events record the input hash, model version, raw output, and confidence score. Post-processing events record the rules applied, the original output, and the modified output. Explanation events record the method used and the feature attributions generated. Operator events record the review timestamp, the decision made, and the override rationale if applicable. Configuration change events record what changed, who changed it, and when. Deployment events record the version deployed and the approval evidence. Monitoring alert events record the alert type, severity, and initial response. Each event must include a correlation ID that ties it to the specific inference request, enabling end-to-end trace retrieval. The comprehensiveness of the event coverage is validated through audit exercises that attempt to reconstruct the full history of a sample of inference requests. Key outputs Logging schema covering all nine event types Correlation ID implementation for end-to-end traceability Audit exercise results confirming coverage completeness Module 10 AISDP documentation Log-Based Drift Detection AISDP module(s): Module 10 (Record-Keeping), Module 12 (Post-Market Monitoring) Regulatory basis: Article 12, Article 72 Aggregated log data feeds the monitoring layer's drift detection algorithms. Changes in inference patterns, error rates, or operator behaviour that are detectable in the logs provide early warning of outcome drift. The logging layer is therefore not merely an archival function; it is an active input to the system's ongoing compliance monitoring. Log-based drift detection analyses trends across the nine event types described above. An increase in data ingestion quality check failures may indicate upstream source changes. A shift in the distribution of model confidence scores may indicate input drift. A change in operator override rates may signal degradation in the model's recommendations. Each of these trends is detectable through statistical analysis of the structured log data. The detection algorithms operate on aggregated log data, not individual records. They compute rolling statistics, compare them against historical baselines, and trigger alerts when statistically significant deviations are detected. These alerts feed into the severity-based escalation framework described above, ensuring that log-derived signals receive appropriate attention and response. Key outputs Log aggregation pipeline feeding drift detection algorithms Statistical baselines and deviation thresholds per event type Alert integration with the post-market monitoring framework Module 10 and Module 12 evidence records Regulatory Export Capability — On-Demand NCA Format Conversion AISDP module(s): Module 10 (Record-Keeping) Regulatory basis: Article 12, Article 72 The logging layer must support export of logs in formats suitable for regulatory inspection. National competent authorit ies (NCAs) may request access to system logs as part of market surveillance , incident investigation, or routine inspection. The export must be available on demand, within the response timelines expected by the relevant authority. The regulatory export capability converts the internal log format into a structured, portable format that an external reviewer can consume without requiring access to the organisation's logging infrastructure. This typically means exporting to standardised formats such as JSON, CSV, or XML, with accompanying metadata describing the schema, the time period covered, and the completeness of the export. The export process should support filtering by time range, event type, and correlation ID, so that the organisation can provide precisely the records requested without disclosing unrelated operational data. Access to the export function is restricted to authorised personnel, and every export event is itself logged, creating an audit trail of regulatory data disclosures. The Technical SME tests the export capability periodically to confirm that it produces complete, accurate, and timely results. Key outputs Log export function supporting NCA-compatible formats (JSON, CSV, XML) Filtering by time range, event type, and correlation ID Export event logging for audit trail Periodic testing of export completeness and accuracy --- ## Layer 8 — Monitoring URL: https://docs.standardintelligence.com/layer-8--monitoring Breadcrumb: Development › Architectures › Eight-Layer Architecture › Layer 8 Last updated: 28 Feb 2026 Intent Alignment Dashboards — Real-Time vs AISDP Thresholds AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 72 , Article 9 The monitoring layer provides dashboards that show the system's current behaviour relative to its documented intended purpose. If the AISDP declares that the system must achieve AUC-ROC ≥ 0.80 and fairness ratios ≥ 0.90, the dashboard displays these metrics in real time with a clear indication of whether the system is within specification. Intent alignment dashboards serve two audiences. The engineering team needs operational visibility into the system's technical metrics: latency, throughput, error rates, feature distributions, and model performance. The governance team needs compliance visibility: are all declared thresholds being met, when was the last threshold breach, and what was the response? A layered monitoring approach addresses both audiences from the same underlying data. At the base layer, infrastructure monitoring (Prometheus with Grafana, or Datadog) tracks system health. Above it, ML-specific monitoring (Evidently AI, NannyML, Fiddler AI, or Arize AI) tracks feature drift, prediction drift, performance estimation, and fairness metrics. Above that, the governance layer aggregates the ML metrics into compliance-relevant summaries. This layered approach ensures that each audience sees the information it needs without being overwhelmed by detail intended for others. Key outputs Intent alignment dashboards with real-time threshold comparison Layered monitoring stack (infrastructure, ML-specific, governance) Dashboard access configured for engineering and governance audiences Module 12 documentation of monitoring architecture Statistical Anomaly Detection & Severity-Based Escalation AISDP module(s): Module 12 (Post-Market Monitoring), Module 6 (Risk Management System) Regulatory basis: Article 72, Article 9 Statistical anomaly detection algorithms identify unusual patterns in the system's inputs, outputs, or operational metrics. Anomalies may indicate data quality problems, model degradation, adversarial inputs, or infrastructure failures. The monitoring layer runs these algorithms continuously on production data. When anomalies are detected, severity-based escalation ensures they receive an appropriate response. Three severity tiers are defined. Low-severity anomalies are logged and reviewed during the next scheduled monitoring cycle. Medium-severity anomalies trigger immediate alerts to the technical team and are investigated within a defined timeframe. High-severity anomalies trigger automatic escalation to the AI Governance Lead and may activate break-glass mechanisms if the anomaly indicates an imminent risk to affected persons. The escalation thresholds are calibrated to balance sensitivity against alert fatigue. Thresholds that are too sensitive will flood the team with false positives, eroding responsiveness. Thresholds that are too permissive will miss genuine problems. The calibration is documented in the AISDP and reviewed periodically based on operational experience. The anomaly detection methods, escalation rules, and response procedures collectively form a critical part of the post-market monitoring plan documented in Module 12. Key outputs Anomaly detection algorithms configured for inputs, outputs, and operational metrics Three-tier severity classification with defined escalation paths Calibrated alert thresholds with documented rationale Module 6 and Module 12 AISDP evidence Multi-Dimensional Drift Monitoring (Input, Output, Fairness, Error, Override) AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 72 Single-dimension monitoring may miss drift that manifests across multiple dimensions without crossing any individual threshold. The monitoring layer tracks drift across five dimensions simultaneously: input feature distributions, output score distributions, fairness metrics, error rates, and operator override rates. Each dimension provides a different perspective on the system's behavioural stability. Input drift (monitored at Layer 2) indicates that the population the system serves is changing. Output drift (monitored at Layer 4) indicates that the system's decisions are shifting. Fairness drift indicates that the system's impact on different subgroups is evolving. Error rate changes indicate that the system's accuracy is degrading. Override rate changes indicate that human operators are finding the system's recommendations less reliable. Correlating drift signals across dimensions yields diagnostic insight. Input drift accompanied by output drift suggests the model is responding appropriately to a changing population. Output drift without input drift suggests the model itself is changing, perhaps through an undetected configuration change or a feedback loop effect. The monitoring layer should include specific checks for feedback loops, where the system's outputs influence the data that is subsequently used to evaluate or retrain the system. These checks require comparing the training data distribution against the production data distribution while controlling for the system's own influence. Key outputs Five-dimensional drift monitoring configuration Cross-dimensional correlation analysis Feedback loop detection methodology Module 12 post-market monitoring evidence Feedback Loop Detection — Training vs Production Distribution AISDP module(s): Module 12 (Post-Market Monitoring), Module 6 (Risk Management System) Regulatory basis: Article 72, Article 9 Feedback loops occur when the system's outputs influence the data that is subsequently used to evaluate or retrain the system. A recruitment screening system that reduces the diversity of candidates presented to hiring managers may, over time, generate training data that reflects the system's own biases rather than genuine hiring outcomes. This self-reinforcing dynamic can amplify small initial biases into significant disparate impact. Detecting feedback loops requires comparing the system's training data distribution against the production data distribution while controlling for the system's own influence on that distribution. If the production data is increasingly shaped by the system's past decisions, the distributions will converge in ways that mask real-world changes. The monitoring layer includes specific statistical tests for this convergence pattern. When a feedback loop is detected, the response may involve collecting external validation data that is not influenced by the system's outputs, adjusting the retraining methodology to account for the system's selection effects, or pausing automated retraining until the loop is broken. The detection methodology, response procedures, and any corrective actions taken are documented in AISDP Module 12 and feed into the risk management system documented in Module 6. Key outputs Feedback loop detection methodology (training vs production distribution comparison) Statistical tests for distribution convergence Response procedures for confirmed feedback loops Module 6 and Module 12 AISDP evidence --- ## Licence Compliance Scanning (FOSSA, Black Duck, pip-licenses) URL: https://docs.standardintelligence.com/licence-compliance-scanning-fossa-black-duck-pip-licenses Breadcrumb: Development › CI › CD Pipelines › Static Analysis › Licence Compliance Scanning (FOSSA, Black Duck, pip-licenses) Last updated: 28 Feb 2026 Licence Compliance Scanning (FOSSA, Black Duck, pip-licenses) AISDP module(s): Module 3 (Architecture and Design), Module 9 (Robustness and Cybersecurity) Regulatory basis: Annex IV (2) Automated licence compliance scanning prevents the organisation from inadvertently using libraries with licence terms that conflict with the system's deployment model. An ML system that uses an AGPL-licensed library may be required to open-source its own code; a system using a library with a non-commercial licence cannot be deployed commercially. These conflicts can emerge deep in the dependency tree, invisible without automated scanning. FOSSA and Black Duck provide comprehensive automated licence analysis and conflict detection. For a lightweight approach, pip-licenses enumerates all Python dependency licences for review, and the pre-commit configuration can be set to fail on prohibited licence types (for example, AGPL-3.0 or GPL-3.0 where incompatible with the system's distribution model). The licence audit is particularly relevant for AI systems incorporating open-source model components, where licence terms may impose obligations on downstream use. The Technical SME documents the licence audit and retains it as Module 3 evidence. Any licence conflicts identified are resolved before deployment, either by replacing the conflicting dependency or by obtaining appropriate permissions. Key outputs Licence scanning tool configuration (FOSSA, Black Duck, or pip-licenses) Prohibited licence list aligned with the system's deployment model CI pipeline integration blocking builds on licence conflicts Module 3 and Module 9 evidence --- ## Lineage Tracking (Data → Code → Pipeline → Model) URL: https://docs.standardintelligence.com/lineage-tracking-data-code-pipeline-model Breadcrumb: Development › Version Control › Model Registry › Lineage Tracking (Data → Code → Pipeline → Model) Last updated: 28 Feb 2026 Lineage Tracking (Data → Code → Pipeline → Model) AISDP module(s): Module 10 (Record-Keeping), Module 3 (Architecture and Design) Regulatory basis: Article 12 , Annex IV (2) Lineage tracking links each model version to the specific data version, code version, and pipeline execution that produced it. Given any deployed model, the organisation must be able to trace backwards through this chain to reconstruct the complete provenance: what data was used, what code processed it, what pipeline orchestrated the execution, and what validation results were produced. This four-link chain (data → code → pipeline → model) is the backbone of the technical traceability described above. The model registry entry references the data version and code commit; the code commit references the pipeline definition; the pipeline execution log records the entire workflow. OpenLineage with Marquez provides a standardised service for capturing and querying this lineage, with each pipeline component emitting lineage events that Marquez stores and exposes through a traversal API. For organisations using simpler tooling, the lineage chain can be implemented through cross-references in the model registry metadata combined with a provenance query script that chains lookups across Git, DVC, and MLflow. The Technical SME tests this query capability periodically by running sample provenance queries and verifying that the results are complete and correct. The test results are retained as Module 10 evidence. Key outputs Lineage chain linking data, code, pipeline, and model versions OpenLineage/Marquez integration or equivalent provenance query capability Periodic lineage query tests with retained results Module 10 and Module 3 AISDP evidence --- ## Linting & Type Checking URL: https://docs.standardintelligence.com/linting-and-type-checking Breadcrumb: Development › CI › CD Pipelines › Static Analysis › Linting & Type Checking Last updated: 28 Feb 2026 Linting & Type Checking AISDP module(s): Module 2 (Development Process), Module 5 (Testing and Validation) Regulatory basis: Annex IV (2) Standard linting and type checking form the foundation of the static analysis toolchain. Tools such as flake8, pylint, Ruff, and ESLint enforce coding standards, while type checkers such as mypy and pyright verify type annotations. Complexity analysis (cyclomatic complexity, cognitive complexity) flags code that exceeds defined thresholds and may introduce maintenance and auditability risks. These tools run as pre-commit hooks (catching issues before code enters the repository) and as CI pipeline stages (catching issues that bypassed the hooks). Code that fails linting or type checking is blocked from merging into the main branch. The enforcement is automatic and applies equally to all contributors. For high-risk AI systems, the compliance value of standard code quality tools lies in their contribution to maintainability and auditability. Code that is poorly structured, inconsistently typed, or excessively complex is harder to review, harder to test, and harder for a notified body assessor to evaluate. Clean, well-typed code supports the broader goal of demonstrating that the system's implementation is comprehensible and traceable. Key outputs Linting configuration (flake8, pylint, Ruff, or ESLint) Type checking configuration (mypy or pyright) Complexity thresholds and enforcement Module 2 and Module 5 documentation --- ## Load Testing (Locust, k6) — Latency & Throughput Under Load URL: https://docs.standardintelligence.com/load-testing-locust-k6-latency-and-throughput-under-load Breadcrumb: Development › CI › CD Pipelines › Integration Testing › Load Testing (Locust, k6) — Latency & Throughput Under Load Last updated: 28 Feb 2026 Load Testing (Locust, k6) — Latency & Throughput Under Load AISDP module(s): Module 5 (Testing and Validation), Module 3 (Architecture and Design) Regulatory basis: Article 15 The AISDP declares the system's performance characteristics, including latency at the p50, p95, and p99 percentiles and throughput capacity. These declarations are compliance commitments that must hold under production load. Load testing verifies that the deployed system meets these commitments under realistic traffic conditions. Locust and k6 are modern load testing tools that generate configurable traffic patterns and report percentile latency distributions. Load tests should simulate the expected request rate, request size distribution, and concurrent user count. The tests are run against the staging environment before every production deployment, and periodically against production during off-peak windows to confirm that the system's performance characteristics have not degraded. The test results, including the exact test configuration and load profile, are retained as Module 5 evidence. If the load test reveals that the system's latency exceeds the declared threshold at the expected production load, the deployment is blocked until the performance issue is resolved or the AISDP's performance declarations are updated through the governance process. Key outputs Load test configuration simulating production traffic patterns Percentile latency measurements (p50, p95, p99) under load Throughput capacity verification Module 5 and Module 3 AISDP evidence --- ## Long-Term Retrieval (Ten-Year Archive) URL: https://docs.standardintelligence.com/long-term-retrieval-ten-year-archive Breadcrumb: Development › Version Control › Model Registry › Long-Term Retrieval (Ten-Year Archive) Last updated: 28 Feb 2026 Long-Term Retrieval (Ten-Year Archive) AISDP module(s): Module 10 (Record-Keeping) Regulatory basis: Article 18 Archived models must be retrievable for the full ten-year retention period mandated by Article 18. This is not merely a storage requirement; it is a retrieval requirement. The organisation must be able to load, inspect, and if necessary re-evaluate a model that was archived years ago. The registry's underlying storage must be durable, with replication and backup to protect against data loss. The storage tier must balance cost against retrieval latency: deep archive storage (S3 Glacier Deep Archive, Azure Archive) is cost-effective but may have retrieval times measured in hours. For models that may need to be retrieved for incident response , a storage tier with faster retrieval is appropriate, at least for the most recent archived versions. The retrieval process must account for framework versioning. A model serialised with PyTorch 1.x may not load with PyTorch 3.x a decade later. Organisations should consider archiving the container image alongside the model artefact, preserving the complete runtime environment needed to load and execute the model. The ten-year archive infrastructure should be tested periodically by retrieving a sample of archived models and confirming they can be loaded and evaluated. Key outputs Archive storage with ten-year durability guarantees Retrieval latency specification per archive tier Container image archival alongside model artefacts (recommended) Periodic retrieval testing with retained results --- ## Maintainability URL: https://docs.standardintelligence.com/maintainability Breadcrumb: Development › Model Selection › Compliance Criteria Scoring › Maintainability Last updated: 28 Feb 2026 Maintainability AISDP module(s): 3 Regulatory basis: Article 15 Maintainability asks whether the model can be retrained, fine-tuned, or recalibrated in response to post-market monitoring findings without triggering a substantial modification under Article 3(23) . It also asks whether the model's behaviour is stable across minor updates. The assessment evaluates the model's sensitivity to retraining. Gradient-boosted trees and logistic regression produce stable, predictable changes when retrained on augmented data; the performance shift is typically proportional to the data change and can be estimated in advance. Deep neural networks can exhibit large behavioural shifts from small data changes, making incremental maintenance more difficult without triggering a substantial modification assessment. The assessment also considers the operational effort required for maintenance. Models that can be retrained and revalidated through the existing CI/CD pipeline score higher than models requiring manual intervention, custom infrastructure, or lengthy retraining cycles. The availability of parameter-efficient fine-tuning methods (LoRA, adapters) for the candidate architecture may improve the maintainability score by enabling targeted updates without full retraining. Quantitative substantial modification thresholds should be estimated for each candidate: what magnitude of retraining-induced performance change would cross the threshold? Architectures where normal maintenance frequently crosses the threshold impose a heavy governance burden. Key outputs Maintainability score per candidate model Estimated substantial modification sensitivity --- ## Manual Alternative (Directories, Spreadsheet, Signed Approval) URL: https://docs.standardintelligence.com/manual-alternative-directories-spreadsheet-signed-approval Breadcrumb: Development › Version Control › Model Registry › Manual Alternative (Directories, Spreadsheet, Signed Approval) Last updated: 28 Feb 2026 Manual Alternative (Directories, Spreadsheet, Signed Approval) AISDP module(s): Module 3 (Architecture and Design), Module 10 (Record-Keeping) Regulatory basis: Article 12 For organisations that cannot deploy a model registry tool, model management reverts to manual file management with a tracking spreadsheet. A dedicated directory or cloud storage bucket is organised by model name and version (for example, models/recruitment_screener/v2.3/ ), with access controls preventing unauthorised modification. The model tracking spreadsheet serves as the registry's metadata layer. It should include columns for model name, version, storage location, content hash (SHA-256), training data version, code commit, training date, evaluation metrics (performance and fairness), stage (experimental, staging, production, or archived), approval evidence (approver name and date), and deployment date. Stage transitions from staging to production require a signed approval entry by the AI Governance Lead . No model artefact may be deleted; archived models are moved to a separate archive directory. The Technical SME follows the provenance chain manually: look up the model version in the spreadsheet, find the code commit, find the data version. This approach is manageable for a single system with infrequent model updates but becomes burdensome for multiple systems or frequent retraining. MLflow is open-source and free; the cost is engineering time for integration. Key outputs Directory structure with access controls Model tracking spreadsheet with all required metadata columns Signed approval entries for production stage transitions No-delete policy for archived model artefacts --- ## Metadata Attachment (Dataset, Code, Pipeline, Hash, Hyperparameters, Metrics, Approval) URL: https://docs.standardintelligence.com/metadata-attachment-dataset-code-pipeline-hash Breadcrumb: Development › Version Control › Model Registry › Metadata Attachment (Dataset, Code, Pipeline, Hash, Hyperparameters, Metrics, Approval) Last updated: 28 Feb 2026 Metadata Attachment (Dataset, Code, Pipeline, Hash, Hyperparameters, Metrics, Approval) AISDP module(s): Module 3 (Architecture and Design), Module 10 (Record-Keeping) Regulatory basis: Article 12 , Annex IV (2) Each model version in the registry must carry structured metadata that links it to every artefact in its provenance chain. This metadata transforms the registry from a simple artefact store into the navigable index that makes end-to-end traceability practical. The minimum metadata set includes the training dataset version (DVC, Delta Lake, or LakeFS reference), the training code version (Git commit SHA), the pipeline execution ID (identifying the specific CI/CD run that built and tested the model), the content hash (SHA-256 of the model artefact for integrity verification), the hyperparameter configuration, the full set of performance and fairness metrics from the validation gates, and the approval status. A reference MLflow implementation is provided below demonstrating how these metadata fields are attached as model version tags. Each tag uses a namespaced key (e.g. aisdp.training_data_version , aisdp.code_commit , aisdp.content_hash ) to distinguish compliance metadata from operational metadata. The metadata attachment should be automated as part of the CI/CD pipeline 's model registration step, ensuring that no model enters the registry without its complete provenance record. Key outputs Metadata schema defining all required fields per model version Automated metadata attachment in the CI/CD registration step Validation that all required fields are populated before registration completes Module 3 and Module 10 AISDP documentation --- ## Mitigation Effectiveness Assessments URL: https://docs.standardintelligence.com/mitigation-effectiveness-assessments Breadcrumb: Development › Data Governance › Artefacts › Mitigation Effectiveness Assessments Last updated: 28 Feb 2026 Mitigation Effectiveness Assessments AISDP module(s): 4 ( Data Governance and Dataset Documentation ) Regulatory basis: Article 10(2)(f) Each bias mitigation technique applied (pre-processing, in-processing, or post-processing) requires an effectiveness assessment documenting whether it achieved its intended objective and what trade-offs it introduced. The assessment compares the fairness metrics before and after mitigation, using the same evaluation methodology and the same test dataset. The comparison should cover all five post-training metrics, not only the targeted metric, since mitigation of one fairness dimension may adversely affect others. The accuracy impact is measured: what performance was lost in exchange for the fairness improvement? The subgroup-level impact is examined: did the mitigation improve fairness for the targeted subgroup without degrading performance for other subgroups? Where multiple mitigation techniques were applied in combination, the assessment should decompose the contribution of each technique where possible, enabling the organisation to understand which techniques are most effective for its specific context. This information feeds into future model development cycles and into the broader organisational learning about bias mitigation. Key outputs Per-technique effectiveness assessment Before/after fairness metric comparison Accuracy-fairness trade-off documentation --- ## Model Cards URL: https://docs.standardintelligence.com/model-cards Breadcrumb: Development › CI › CD Pipelines › Artefacts › Model Cards Last updated: 28 Feb 2026 Model Cards AISDP module(s): Module 5 (Testing and Validation), Module 3 (Architecture and Design) Regulatory basis: Annex IV (2), Annex IV(3) This artefact comprises the collection of model cards generated across the system's lifecycle. Each model card documents a specific model version: its architecture, training data, evaluation metrics, intended use, and known limitations. The model card for the currently deployed model version is the primary reference for Module 5. The model card collection provides a history of the system's model evolution. Reviewing model cards from successive versions reveals how the model architecture, training data, and performance characteristics have changed over time. For substantial modification assessments, comparing model cards between versions provides a structured, readable summary of what changed. Model cards are stored alongside the model registry entries and cross-referenced by model version. The card for each production model version should be readily accessible to the AI Governance Lead , notified body assessors, and national competent authorit ies. Archived model cards are retained for the ten-year period alongside their corresponding model artefacts. Key outputs Model card collection across all model versions Cross-referencing with model registry entries Ready accessibility for governance and regulatory audiences Module 5 and Module 3 evidence --- ## Model Inference Tests (Registry Load, Format, Determinism, Latency, Degradation) URL: https://docs.standardintelligence.com/model-inference-tests-registry-load-format-determinism Breadcrumb: Development › CI › CD Pipelines › Unit Testing › Model Inference Tests (Registry Load, Format, Determinism, Latency, Degradation) Last updated: 28 Feb 2026 Model Inference Tests (Registry Load, Format, Determinism, Latency, Degradation) AISDP module(s): Module 5 (Testing and Validation) Regulatory basis: Article 15 , Annex IV (3) The model serving component requires unit tests confirming five properties. The model must load correctly from the model registry using the registry client. Inference must produce outputs in the expected format and range. For deterministic architectures, inference must be deterministic for a given model version and input. The model's latency must fall within the documented Service Level Agreement. A test that submits a representative input and measures the inference duration confirms that the serving path meets the performance declaration in the AISDP. If the latency exceeds the declared threshold, the test fails, preventing deployment of a version that would immediately violate a documented commitment. Error handling must produce graceful degradation rather than silent failures. When the model receives malformed input, when a dependency is unavailable, or when the serving infrastructure is under stress, the test verifies that the system returns an appropriate error response, logs the event, and does not produce a plausible but incorrect output. Silent failures, where the system returns a default value without indicating an error, are particularly dangerous for high-risk systems because they may go undetected. Key outputs Registry load tests confirming model loads via the registry client Output format and range validation tests Latency tests against declared SLA thresholds Graceful degradation tests for error conditions --- ## Model Origin Risk URL: https://docs.standardintelligence.com/model-origin-risk Breadcrumb: Development › Model Selection › Model Origin Risk Last updated: 28 Feb 2026 Open-Source Models — Training Data Provenance Open-Source Models — Licensing Compatibility Open-Source Models — Development Governance Gaps Open-Source Models — Bias Testing & Adversarial Evaluation History Open-Source Models — Residual Non-Conformity Risk Commercial APIs — Contractual Terms & SLAs Commercial APIs — Provider Data Handling & Geographic Considerations --- ## Model Registry URL: https://docs.standardintelligence.com/model-registry Breadcrumb: Development › Version Control › Model Registry Last updated: 28 Feb 2026 Tooling (MLflow, W&B, SageMaker, Vertex AI) Immutable Versioning — Unique Non-Reusable IDs Metadata Attachment (Dataset, Code, Pipeline, Hash, Hyperparameters, Metrics, Approval) Lineage Tracking (Data → Code → Pipeline → Model) Stage Management (Experimental, Staging, Production, Archived) Access Control — CI/CD Promotes; No Manual Promotion Long-Term Retrieval (Ten-Year Archive) Manual Alternative (Directories, Spreadsheet, Signed Approval) --- ## Model Selection Artefacts URL: https://docs.standardintelligence.com/model-selection-artefacts Breadcrumb: Development › Model Selection › Artefacts Last updated: 28 Feb 2026 Five artefacts are produced during the model selection process. Together they provide the documented evidence trail from candidate evaluation through to the formal selection decision. Model Selection Record AISDP module(s): 2, 3 Regulatory basis: Annex IV (2)(b) The Model Selection Record is a core component of AISDP Module 3 , documenting the organisation's rationale for its model architecture choice. Annex IV requires describing "the key design choices and their rationale," and the Model Selection Record is the primary artefact that satisfies this requirement. The Record covers the system's functional requirements, the compliance requirements derived from the risk assessment (including minimum explainability, testability, and auditability standards), the candidate architectures evaluated (including traditional heuristic and statistical approaches), the evaluation methodology (datasets, metrics, compliance criteria scoring ), the evaluation results presented as a comparison table, the recommended selection with rationale including trade-offs accepted, and the governance approval record. The scope extends beyond the primary decision-making model. Every learned component in the system architecture requires an entry: embedding models, re-ranking models, classification heads, auxiliary monitoring models, and safety classifiers. Each entry is proportionate to the component's influence on the final output. The AI System Assessor verifies that the Record is complete against the architecture diagram; any model component visible in the architecture without a corresponding entry is a documentation gap. This documentation serves two audiences. The Technical SME reviews it for technical soundness. The Classification Reviewer and any notified body review it for evidence that the organisation made a considered, risk-aware choice and did not simply default to the most complex available model. Key outputs Model Selection Record (complete, covering all model components) Compliance criteria comparison table Governance approval record Model Origin Risk Assessments AISDP module(s): 3, 6 Regulatory basis: Articles 9, 11 The Model Origin Risk Assessment is an artefact documenting the provenance risk profile of each model component in the system. It consolidates the analysis into a structured assessment per component. For each model component, the assessment records the origin category (in-house, open-source, or proprietary), the provenance documentation available, the gaps in provenance documentation, the compensating controls applied (sentinel testing, output filtering , continuous monitoring), and the residual origin risk after controls. The assessment also records the GPAI provider due diligence performed (where applicable), including the Article 25 (3) information request and the provider's response. In-house models offer the greatest control over documentation and governance; the risk profile centres on process discipline and whether the development team followed the documented methodology. Open-source models carry provenance, governance, and testing gaps that must be compensated. Proprietary models may have documentation gaps where the provider refuses disclosure, requiring compensating evaluation. The assessment rates each component on a consistent scale and aggregates the ratings into an overall model origin risk profile for the system. Key outputs Per-component model origin risk assessment Aggregated system-level origin risk profile IP & Licensing Analysis AISDP module(s): 3, 4 Regulatory basis: Article 53 (1)(c); Annex IV(2) The IP and Licensing Analysis is a consolidated artefact addressing copyright, personal data, and licence risks across all model components, training data sources, and third-party dependencies. It brings together the assessments into a single document. The analysis covers training data copyright status and legal basis for each data source, open-source component licence terms and compatibility with the system's commercial and regulatory context, proprietary model contractual representations regarding IP, personal data consent verification status, and residual IP risks with ratings and compensating controls. The document is maintained as a living artefact. Changes to model components, data sources, or licence terms trigger updates. The Legal and Regulatory Advisor reviews the analysis at each major phase gate, and the AI Governance Lead signs off on residual IP risk acceptance. The analysis is referenced from AISDP Modules 3 and 4, and it forms part of the evidence pack reviewed during conformity assessment . Key outputs IP and Licensing Analysis document Legal and Regulatory Advisor review record AI Governance Lead residual risk acceptance Fine-Tuning Provider Boundary Determination AISDP module(s): 3 Regulatory basis: Article 25(1)(b); Article 3(23) The Fine-Tuning Provider Boundary Determination is the artefact documenting whether the organisation's fine-tuning of a GPAI model triggers provider status under Article 25(1)(b). It consolidates the analysis into a formal determination. The artefact records the base model identified (provider, model name, version), the fine-tuning activity performed (methodology, data, scope), the three-criteria assessment (intended purpose change, risk profile change, safety testing invalidation), the determination reached (provider status triggered or not triggered), and the rationale with supporting evidence. Where the case is borderline, the decision flow documentation is incorporated by reference. If provider status is triggered, the artefact also records the obligation transfer determination: which Article 16 obligations are assumed, which are partially satisfied by the GPAI provider's existing artefacts, and which fall exclusively on the downstream organisation. The role assignments and timelines for fulfilling assumed obligations are documented. The determination is approved by the AI Governance Lead and the Legal and Regulatory Advisor. It is retained in the evidence pack and referenced from AISDP Module 3. If challenged by a competent authority, this artefact provides the organisation's documented reasoning for its compliance posture. Key outputs Fine-Tuning Provider Boundary Determination document AI Governance Lead and Legal and Regulatory Advisor approval Obligation transfer matrix (where provider status is triggered) Compliance Criteria Scoring Matrix AISDP module(s): 3 Regulatory basis: Annex IV(2)(b) The Compliance Criteria Scoring Matrix is the artefact that brings together the six compliance criterion scores for each candidate model architecture evaluated during model selection. It is the quantitative backbone of the Model Selection Record. The matrix presents each candidate architecture as a row, with columns for documentability, testability, auditability, bias detectability, maintainability, and determinism. Each cell contains the score (strong, adequate, or weak) and a brief evidence-based justification. The matrix includes weighting: for a high-risk system in the employment domain where human oversight is paramount, explainability-adjacent criteria (documentability, auditability, bias detectability) carry higher weight; for a safety-critical system, testability and determinism carry higher weight. The weights are documented and approved by the AI Governance Lead before the evaluation begins, preventing post-hoc rationalisation of a preferred choice. The matrix also includes non-compliance columns: performance metrics (accuracy, precision, recall), cost estimates, and the IP risk profile for each candidate. This enables the selection decision to weigh compliance criteria alongside traditional engineering criteria in a single view. The completed matrix, with the weights, scores, justifications, and the resulting recommendation, is stored in the evidence pack and forms the core of the model selection rationale presented in AISDP Module 3. It should be retrievable if a notified body or competent authority asks why a particular model architecture was chosen over alternatives. Key outputs Compliance Criteria Scoring Matrix (completed) Weighting rationale approved by AI Governance Lead Selection recommendation derived from the matrix --- ## Model Selection URL: https://docs.standardintelligence.com/model-selection Breadcrumb: Development › Model Selection (S.3) Last updated: 28 Feb 2026 Model selection is the first major technical decision in the AISDP lifecycle. The choice of model architecture determines the system's compliance profile across six criteria: documentability, testability, auditability, bias detectability, maintainability, and determinism. This section covers the full evaluation process, from candidate architecture assessment through to the formal Model Selection Record. The evaluation begins with a full-spectrum review of candidate architectures, spanning heuristic systems through to foundation models. It then addresses model origin risk for open-source and commercial components, copyright and IP exposure, the fine-tuning provider boundary under Article 25 , and the six compliance criteria scored against each candidate. The section concludes with the artefacts produced: the Model Selection Record, origin risk assessment s, IP analysis, provider boundary determination, and the compliance criteria scoring matrix. ℹ This section corresponds to the Model Selection section and feeds primarily into AISDP Modules 2 (Development Process) and 3 (Architecture and Design). --- ## Model Validation Gates URL: https://docs.standardintelligence.com/model-validation-gates Breadcrumb: Development › CI › CD Pipelines › Model Validation Gates Last updated: 28 Feb 2026 Performance Gate (AUC-ROC, F1, Precision, Recall, Brier, Calibration) AISDP module(s): Module 5 (Testing and Validation) Regulatory basis: Article 15 , Annex IV (3) The performance gate is the first of the four sequential validation gates in the CI pipeline. It computes the model's accuracy metrics on the holdout test set and compares them against the thresholds declared in the AISDP. Any metric falling below its declared threshold fails the gate, and subsequent gates do not run. Two subtleties require attention. First, the holdout set must be truly held out: it must not have been used during training, hyperparameter tuning, or feature selection. If the holdout set has leaked into the training process, the performance gate is testing on training data and the results are unreliable. Second, the Technical SME computes the metrics with confidence intervals (bootstrap or cross-validation), and the threshold comparison uses the lower bound of the confidence interval rather than the point estimate. A model achieving 0.86 AUC-ROC with a 95% confidence interval of [0.82, 0.90] is compared against the threshold using 0.82, because that represents the worst-case plausible performance. The gate produces a structured report recording the gate name, each metric's value, the threshold, the confidence interval, and the pass/fail determination. This report is stored as a pipeline artefact and retained as Module 5 evidence. Key outputs Performance metrics computed on a genuinely held-out test set Confidence interval computation with lower-bound threshold comparison Structured gate report stored as a pipeline artefact Module 5 AISDP evidence Fairness Gate — Non-Negotiable (SRR, Equalised Odds, Predictive Parity, Calibration) AISDP module(s): Module 5 (Testing and Validation), Module 6 (Risk Management System) Regulatory basis: Article 9 , Article 10 The fairness gate is the second validation gate, running only after the performance gate passes. It computes the full fairness evaluation suite (selection rate ratios, equalised odds, predictive parity, calibration within groups) across all measured protected characteristic subgroups. Any subgroup metric breaching its threshold fails the gate. This gate is non-negotiable; it cannot be overridden without the process described above. The fairness gate's most common failure mode involves small subgroups. The model may pass fairness for all subgroups except one with a small sample size, where the metric is unreliable due to statistical noise. Gate design must handle this: either the small-subgroup metrics are reported with confidence intervals and compared using the lower bound, or the gate explicitly flags subgroups below the minimum sample size (typically 30 observations) as "insufficient data, manual review required." The fairness gate report disaggregates results by subgroup, providing per-subgroup metric values, confidence intervals, threshold comparisons, and pass/fail determinations. This disaggregated report is essential Module 5 evidence; an aggregate fairness metric that masks disparate impact within subgroups does not satisfy Article 10's requirements. Key outputs Per-subgroup fairness metrics (SRR, equalised odds, predictive parity, calibration) Small-subgroup handling (confidence intervals or manual review flagging) Disaggregated fairness gate report Module 5 and Module 6 AISDP evidence Fairness Gate Override — AI Governance Lead Approval (Logged as Risk Acceptance) AISDP module(s): Module 6 (Risk Management System), Module 5 (Testing and Validation) Regulatory basis: Article 9 The fairness gate cannot be overridden through standard deployment processes. If a candidate model fails the fairness gate, the only pathway to deployment is a formal risk acceptance approved by the AI Governance Lead . This approval is logged as a risk acceptance decision, not as a routine exception. The risk acceptance record must document which specific subgroup metrics breached which thresholds, the root cause analysis (why the model fails fairness for this subgroup), the remediation plan (what steps will be taken to address the fairness gap), the interim risk assessment (what is the impact on affected persons during the period before remediation), and the time-bound commitment (when will the remediated model be deployed). The Conformity Assessment Coordinator retains the risk acceptance record as Module 6 evidence. Risk acceptance is not a permanent state; it is a time-limited acknowledgement that the system falls short of its declared fairness standards, with a documented plan to close the gap. The post-market monitoring system ( Module 12 ) tracks progress against the remediation plan and escalates if the timeline slips. Key outputs Risk acceptance approval by the AI Governance Lead Root cause analysis and remediation plan Time-bound commitment for remediated model deployment Module 6 and Module 5 AISDP evidence Robustness Gate (Adversarial Examples, Input Perturbation) AISDP module(s): Module 9 (Robustness and Cybersecurity), Module 5 (Testing and Validation) Regulatory basis: Article 15 The robustness gate is the third validation gate, testing the model's stability under input perturbation. Article 15(3) requires high-risk AI systems to be resilient to errors, faults, and inconsistencies, and Article 15(5) requires that robustness measures be proportionate to the relevant circumstances and take due account of the state of the art as reflected in relevant harmonised standards or common specifications. IBM's Adversarial Robustness Toolbox (ART) provides a comprehensive set of adversarial attack methods and defence evaluations. Perturbations are calibrated to realistic input noise levels, and the gate verifies that the model's accuracy does not degrade beyond a defined tolerance. For tabular models, feature perturbation at realistic noise levels is the practical approach: ±5% on continuous features and random category flips at a 1% rate are typical starting points. For neural networks, ART supports FGSM, PGD, C&W, and DeepFool attack methods. For image or text models, ART's attack methods provide systematic adversarial evaluation. The perturbation configuration is version-controlled alongside the threshold configuration. Performance degradation exceeding the defined tolerance fails the gate. The gate report records the perturbation methods applied, the perturbation magnitudes, the model's performance under each perturbation, and the tolerance comparison. A model that passes the performance and fairness gates but fails the robustness gate may be accurate under normal conditions but fragile to input variation, posing a risk in production environments where inputs are noisier than in controlled evaluation datasets. Key outputs Adversarial and perturbation testing using ART or equivalent Perturbation configuration aligned to realistic input noise levels Robustness gate report with per-perturbation performance results Module 9 and Module 5 AISDP evidence Documentation Gate (Model Card Completeness, AISDP Currency) AISDP module(s): Module 5 (Testing and Validation), Module 10 (Record-Keeping) Regulatory basis: Annex IV The documentation gate is the fourth and final validation gate. It verifies that the compliance documentation is complete and current for the candidate model version. A model that passes all technical gates but lacks a complete model card , or whose AISDP sections have not been updated to reflect the candidate's characteristics, should not be deployed. The gate checks that the model card has been generated and contains all required sections (architecture summary, training data version, evaluation metrics disaggregated by subgroup, intended use, known limitations). It verifies that the AISDP sections most closely tied to the model (Modules 3, 5, and 10) have been updated or re-generated to reflect the candidate version. It confirms that the evidence pack (model card, gate reports, data quality reports) is complete and that integrity hashes have been computed. Auto-generated documentation simplifies this gate: if the model card and AISDP drafts are generated by the pipeline, the documentation gate primarily verifies that the generation succeeded and that the generated documents contain no empty or missing sections. For manually maintained documentation, the gate checks timestamps to confirm that the documentation has been updated since the candidate model was registered. Key outputs Model card completeness verification AISDP currency verification for affected modules Evidence pack integrity hash verification Module 5 and Module 10 AISDP evidence --- ## Multilingual Performance URL: https://docs.standardintelligence.com/multilingual-performance Breadcrumb: Development › Data Governance › RAG-Specific Governance › Multilingual Performance Last updated: 28 Feb 2026 Multilingual Performance AISDP module(s): 4 ( Data Governance and Dataset Documentation ) Regulatory basis: Article 10(2)(f), 10(3) Most widely available embedding models perform best on English-language text. For high-risk systems deployed across multiple EU member states, uneven embedding performance across languages could cause the system to retrieve more relevant information for queries in some languages than others. This constitutes a materially different quality of service to users in different member states. The Technical SME evaluates the embedding model's retrieval performance across all languages in which the system will operate. The evaluation should use language-specific retrieval benchmarks (MIRACL for multilingual information retrieval, MTEB for multilingual text embedding evaluation) and domain-specific test queries in each language. Performance gaps exceeding a defined threshold are documented as known limitations. Compensating controls for multilingual performance gaps include language-specific fine-tuning of the embedding model, translation preprocessing for underperforming languages (translating queries into the language where the model performs best before retrieval, then translating results back), or deploying separate embedding models optimised for specific language families. Each approach carries trade-offs: translation introduces latency and potential meaning loss; separate models increase infrastructure complexity. The AI Governance Lead , in consultation with the Technical SME, sets the performance gap threshold. The threshold should reflect the deployment context: a system deployed only in English and French may tolerate larger gaps than a system serving all 24 official EU languages. Key outputs Multilingual retrieval performance evaluation results Performance gap identification per language Compensating controls for underperforming languages --- ## Open-Source Models — Bias Testing & Adversarial Evaluation History URL: https://docs.standardintelligence.com/open-source-models-bias-testing-and-adversarial-evaluation Breadcrumb: Development › Model Selection › Model Origin Risk › Open-Source Models — Bias Testing & Adversarial Evaluation History Last updated: 28 Feb 2026 Open-Source Models — Bias Testing & Adversarial Evaluation History AISDP module(s): 3, 5 Regulatory basis: Articles 9, 10, 15 For open-source models incorporated into high-risk systems, the AISDP must document whether the model has undergone bias testing and adversarial evaluation, and the extent to which the downstream provider can rely on the results. Many open-source models publish evaluation results on standard benchmarks, yet these benchmarks rarely include the disaggregated fairness analysis that Article 10 requires. A model evaluated on GLUE or SuperGLUE demonstrates general capability; it does not demonstrate equitable performance across the demographic subgroups relevant to the deployment context. The AI System Assessor should record what evaluations exist, assess their relevance to the intended purpose and deployment population, and identify the testing gaps that the downstream provider must fill. Adversarial evaluation history is similarly important. Some open-source models have undergone red-teaming by the developer or community; others have not. For LLMs, prompt injection resilience, jailbreak resistance, and content safety evaluation are particularly relevant. The MITRE ATLAS threat taxonomy provides the reference framework for adversarial evaluation. Where the model lacks adversarial evaluation history, the downstream provider must conduct its own testing programme. The AISDP documents the complete testing provenance: evaluations conducted by the original developer (with citations), evaluations conducted by the community (with citations), evaluations conducted by the downstream provider (with full methodology and results), and the residual testing gaps with justification for why they were accepted. This transparency about the testing chain is more credible to a competent authority than presenting only the downstream provider's results without acknowledging the inherited testing gaps. Key outputs Inherited evaluation history assessment Downstream provider's supplementary testing plan and results Residual testing gap documentation --- ## Open-Source Models — Development Governance Gaps URL: https://docs.standardintelligence.com/open-source-models-development-governance-gaps Breadcrumb: Development › Model Selection › Model Origin Risk › Open-Source Models — Development Governance Gaps Last updated: 28 Feb 2026 Open-Source Models — Development Governance Gaps AISDP module(s): 3, 6 Regulatory basis: Articles 9, 10; Annex IV (2) Open-source models are frequently developed without the governance structures that the EU AI Act expects for high-risk system components. The development process may lack formal version control discipline, structured experiment tracking, documented evaluation methodology, or bias and fairness testing across protected characteristic subgroups. These governance gaps create inherited risk that the downstream provider must assess and mitigate. The AI System Assessor should examine the open-source model's available documentation, including model card s, dataset descriptions, evaluation reports, and community discussion, to identify which governance practices were followed and which were absent. Common gaps include the absence of disaggregated performance metrics across demographic subgroups, no adversarial robustness evaluation, incomplete documentation of hyperparameter selection rationale, and no formal change management process between model versions. Each identified gap becomes a risk register entry. The downstream provider must determine whether the gap can be compensated through the organisation's own evaluation and testing, or whether it represents a non-conformity risk that cannot be adequately mitigated. A model with no published fairness evaluation, for example, requires the downstream provider to conduct comprehensive bias testing against the deployment population; the cost and feasibility of this testing should factor into the model selection decision. The risk assessment should distinguish between gaps that are inherent to the open-source development model (and therefore predictable and manageable) and gaps that indicate poor development practices (and therefore signal higher inherent risk). A model from a well-maintained repository with active community review and published evaluation methodology presents a different risk profile from a model uploaded without documentation by an unknown contributor. Key outputs Development governance gap assessment per open-source component Risk register entries for identified gaps Compensating evaluation plan --- ## Open-Source Models — Licensing Compatibility URL: https://docs.standardintelligence.com/open-source-models-licensing-compatibility Breadcrumb: Development › Model Selection › Model Origin Risk › Open-Source Models — Licensing Compatibility Last updated: 28 Feb 2026 Open-Source Models — Licensing Compatibility AISDP module(s): 3 Regulatory basis: Annex IV (2)(b) The licensing terms attached to open-source models carry compliance implications that extend beyond intellectual property law into the regulatory domain. The AISDP must document the licence under which each open-source component is used and confirm that the terms are compatible with the system's commercial context, distribution model, and regulatory requirements. Common licensing considerations include whether the licence permits commercial use, whether it imposes copyleft obligations that would require the organisation to open-source its own modifications or the broader system, whether it restricts the model's use in specific domains (some model licences prohibit use in surveillance, military, or law enforcement applications), and whether the licence terms conflict with the organisation's data processing obligations under GDPR . Automated licence compliance scanning (as part of the CI pipeline) should cover all model dependencies, including ML framework versions, third-party libraries, and pre-trained model components. The SBOM (Software/ML Bill of Materials) generated using SPDX or CycloneDX standards documents these dependencies with their licence terms, enabling both vulnerability scanning and licence compliance checking. Where licence terms are ambiguous or potentially incompatible, the Legal and Regulatory Advisor should review the specific provisions and document the organisation's interpretation and risk acceptance. This analysis is retained in the evidence pack as part of the IP and Licensing Analysis artefact. Key outputs Licence compatibility assessment per open-source component SBOM with licence terms Legal review of ambiguous licence provisions (where applicable) --- ## Open-Source Models — Residual Non-Conformity Risk URL: https://docs.standardintelligence.com/open-source-models-residual-non-conformity-risk Breadcrumb: Development › Model Selection › Model Origin Risk › Open-Source Models — Residual Non-Conformity Risk Last updated: 28 Feb 2026 Open-Source Models — Residual Non-Conformity Risk AISDP module(s): 3, 6 Regulatory basis: Articles 9, 11; Annex IV After completing provenance assessment, licence review, governance gap analysis, and bias and adversarial evaluation, a residual non-conformity risk profile remains for each open-source model component. This residual profile aggregates the risks that the downstream provider's compensating controls cannot fully eliminate. The AI System Assessor documents each residual risk with its source (provenance gap, governance gap, testing gap, or licence uncertainty), its potential impact on the system's compliance posture, the compensating controls applied, and the residual risk rating after those controls. The AI Governance Lead reviews the aggregate residual profile and makes a formal risk acceptance decision, retained in the evidence pack . The residual non-conformity risk should factor into the model selection decision. A model with high residual non-conformity risk may be inappropriate for a high-risk system even if its technical performance is superior, because the documentation and compliance gaps may be difficult or impossible to close. The model selection rationale document should record this trade-off explicitly, demonstrating that compliance risk was weighted alongside performance in the selection process. Residual risk from open-source components is subject to the same periodic review as all risk register entries. Changes in the open-source community (new evaluation results, disclosed vulnerabilities, licence amendments, provider cessation) may alter the residual risk profile and trigger reassessment. Key outputs Residual non-conformity risk profile per open-source component AI Governance Lead risk acceptance decision Periodic review schedule for open-source component risks --- ## Open-Source Models — Training Data Provenance URL: https://docs.standardintelligence.com/open-source-models-training-data-provenance Breadcrumb: Development › Model Selection › Model Origin Risk › Open-Source Models — Training Data Provenance Last updated: 28 Feb 2026 Open-Source Models — Training Data Provenance AISDP module(s): 3, 4 Regulatory basis: Article 10 ; Annex IV (2)(d) Open-source models from repositories such as Hugging Face, GitHub, or academic publications offer accessibility and community validation. They also introduce training data provenance risks that the AISDP must address. The training data may be unknown or poorly documented. It may include copyrighted material, personal data processed without consent, or data unrepresentative of the intended deployment population. The development process may not have included the bias testing, adversarial evaluation, or governance records that the AI Act requires. Any organisation using an open-source model as a component of a high-risk system inherits these documentation gaps. Module 3 records which open-source components are incorporated, the due diligence performed on each, the licensing terms and their regulatory compatibility, and the residual risk s arising from provenance gaps. Where provenance documentation is unavailable, the organisation must compensate through its own testing and evaluation. Sentinel datasets exercising the risk dimensions that the training data disclosures do not address provide one mechanism. Output filtering and validation layers constraining the model's outputs to the acceptable range for the intended purpose provide another. Continuous monitoring of the model's behaviour for drift, with automated alerting when output distributions shift beyond defined tolerances, closes the loop. The SLSA (Supply-chain Levels for Software Artifacts) framework, originally designed for software supply chain security , adapts well to ML artefacts. Level 2 (automated build process with verifiable provenance metadata) is the minimum practical target. For models downloaded from public repositories, best practice is to download once, compute a cryptographic hash, and store in the internal model registry , ensuring all subsequent references use the internal copy. Key outputs Open-source model provenance assessment Due diligence documentation per component Compensating control specifications for provenance gaps --- ## Per-Layer Control Specifications URL: https://docs.standardintelligence.com/per-layer-control-specifications Breadcrumb: Development › Architectures › Artefacts › Per-Layer Control Specifications Last updated: 28 Feb 2026 Per-Layer Control Specifications AISDP module(s): Module 3 (Architecture and Design), Module 6 (Risk Management System) Regulatory basis: Article 9 , Article 15 , Annex IV (2)(b) The Per-Layer Control Specifications document consolidates the compensating controls implemented at each of the eight architectural layers described above. For each layer, the specification records the controls implemented, the intent drift risk each control addresses, the outcome drift risk each control addresses, the configuration parameters and thresholds, and the monitoring and alerting mechanisms. This artefact serves as both a design document and a compliance checklist. During the conformity assessment , the assessor can verify that each layer has the controls documented in the specification and that the controls are configured as described. During production operation, the specification serves as the reference against which the monitoring layer checks the system's behaviour. The specification must be version-controlled and updated whenever a control is added, modified, or removed. Changes to control specifications should be assessed for their impact on the system's overall compliance posture and may trigger a substantial modification assessment. The Per-Layer Control Specifications feed into both Module 3 and Module 6 of the AISDP. Key outputs Control specification per architectural layer Mapping of controls to intent drift and outcome drift risks Configuration parameters and monitoring thresholds Module 3 and Module 6 AISDP evidence --- ## Post-Processing Techniques (Threshold Calibration, Score Adjustment, Reject Option) URL: https://docs.standardintelligence.com/post-processing-techniques-threshold-calibration-score Breadcrumb: Development › Data Governance › Bias Mitigation › Post-Processing Techniques (Threshold Calibration, Score Adjustment, Reject Option) Last updated: 28 Feb 2026 Post-Processing Techniques (Threshold Calibration, Score Adjustment, Reject Option) AISDP module(s): 4 ( Data Governance and Dataset Documentation ), 2 (Development Process) Regulatory basis: Article 10(2)(f) Post-processing mitigations modify the model's outputs after inference, avoiding model retraining. They are the simplest to implement but may be criticised as cosmetic corrections that mask underlying model bias without addressing root causes. Threshold calibration adjusts the decision threshold for each subgroup to equalise selection rates or error rates. Fairlearn's ThresholdOptimizer automates this, finding per-subgroup thresholds that satisfy a given fairness constraint while maximising accuracy. The AISDP documents the per-subgroup thresholds, the fairness constraint targeted, and the resulting impact on the overall accuracy and per-subgroup error rates. Score adjustment applies additive or multiplicative corrections to the model's raw outputs for specific subgroups. Calibrated equalised odds (Pleiss et al., 2017) adjusts probability scores per subgroup to achieve both calibration and equalised odds simultaneously. The adjustment parameters are documented along with the mathematical formulation. Reject option classification routes borderline predictions (where the model's confidence is low) to human review, reducing the chance that uncertain predictions disproportionately harm one subgroup. The confidence threshold for routing to human review is documented, along with the expected volume of cases routed and the capacity of the human review process to handle them. The AISDP must document why post-processing was chosen over root-cause mitigation. Valid reasons include that the root-cause mitigation would require protected characteristic data the organisation cannot lawfully obtain, that root-cause mitigation would reduce accuracy below declared thresholds, or that the bias is an artefact of historical data that cannot be corrected within the training data. The AI Governance Lead signs off on the choice. Key outputs Post-processing technique selection and rationale Adjusted parameters (thresholds, scores) per subgroup AI Governance Lead sign-off on technique choice --- ## Post-Processing Tests (Thresholds, Calibration, Business Rules, Edge Cases) URL: https://docs.standardintelligence.com/post-processing-tests-thresholds-calibration-business-rules Breadcrumb: Development › CI › CD Pipelines › Unit Testing › Post-Processing Tests (Thresholds, Calibration, Business Rules, Edge Cases) Last updated: 28 Feb 2026 Post-Processing Tests (Thresholds, Calibration, Business Rules, Edge Cases) AISDP module(s): Module 5 (Testing and Validation) Regulatory basis: Article 15 , Annex IV (3) Threshold application, score calibration, business rule application, and output formatting each require unit tests confirming correctness, edge case handling, and consistency with the documented behaviour. The post-processing layer transforms the model's raw output into the decision that affects individuals; errors in post-processing can negate even a well-performing model. Threshold tests should verify the threshold value itself (confirming it matches the version-controlled configuration), the behaviour at exactly the threshold (boundary case), and the logging of the decision. If a rule rejects applicants below a threshold of 0.65, the test should verify what happens at 0.65, at 0.6499, and at 0.6501. Calibration tests should verify that the calibrated scores produce the expected probability distribution on a reference dataset. Business rule tests should verify that each rule produces the documented effect, that rules are applied in the correct sequence, and that the override logging captures the required information when a rule modifies the model's raw output. If a fairness calibration adjusts thresholds per subgroup, the test should verify that the adjusted thresholds produce the expected selection rate ratios on a reference dataset. Edge cases, such as a score that falls exactly on a subgroup-specific threshold, deserve dedicated test cases. Key outputs Threshold boundary tests at and around each configured threshold Calibration validation on a reference dataset Business rule sequence and effect verification tests Fairness calibration tests verifying per-subgroup selection rate ratios --- ## Bias Mitigation URL: https://docs.standardintelligence.com/post-training-bias-evaluation--bias-mitigation Breadcrumb: Development › Data Governance › Post-Training Bias Evaluation › Bias Mitigation Last updated: 28 Feb 2026 Bias Mitigation --- ## Post-Training Bias Evaluation URL: https://docs.standardintelligence.com/post-training-bias-evaluation Breadcrumb: Development › Data Governance › Post-Training Bias Evaluation Last updated: 28 Feb 2026 Selection Rate Ratio (Four-Fifths Rule) Equalised Odds — TPR & FPR Parity Predictive Parity Calibration Within Groups — Reliability Diagrams Counterfactual Fairness Testing Fairness Concept Priority Decision & Documented Rationale Fairness Tooling (Fairlearn, AI Fairness 360) Bias Mitigation --- ## Pre-Processing Techniques (Oversampling, Undersampling, Reweighting, Synthetic Data) URL: https://docs.standardintelligence.com/pre-processing-techniques-oversampling-undersampling Breadcrumb: Development › Data Governance › Bias Mitigation › Pre-Processing Techniques (Oversampling, Undersampling, Reweighting, Synthetic Data) Last updated: 28 Feb 2026 Pre-Processing Techniques (Oversampling, Undersampling, Reweighting, Synthetic Data) AISDP module(s): 4 ( Data Governance and Dataset Documentation ) Regulatory basis: Article 10(2)(f) Pre-processing mitigations modify the training data before the model encounters it. They are the most accessible class of bias mitigation techniques and are documented in the AISDP with their trade-offs. Oversampling creates additional copies or synthetic examples of underrepresented subgroups. SMOTE (Synthetic Minority Over-sampling Technique) generates synthetic examples by interpolating between existing minority records, reducing overfitting risk compared to simple duplication. ADASYN (Adaptive Synthetic Sampling) focuses synthetic generation on boundary regions where the classifier struggles. The risk with oversampling is overfitting to the minority class; the risk with simple duplication is even greater. Undersampling removes records from overrepresented subgroups to balance the dataset. The risk is discarding potentially useful data, reducing the model's overall performance. Both oversampling and undersampling are validated by comparing the model's performance on an unaltered holdout set to ensure the technique has not degraded generalisation. Reweighting assigns higher training weights to underrepresented subgroups, ensuring each subgroup contributes equally to the loss function. AI Fairness 360's reweighting preprocessor computes optimal weights automatically. Reweighting preserves all data while adjusting the model's attention, making it generally preferable to undersampling for high-risk systems. Synthetic data augmentation uses generative techniques (SDV, Gretel.ai, MOSTLY AI) to create additional training examples for underrepresented subgroups. The AISDP documents the generation algorithm, the validation of synthetic data against real distributions, the proportion of synthetic data in the final training set, and the risks of over-reliance on synthetic data (which may not capture real-world complexity). Key outputs Pre-processing technique selection and rationale Trade-off analysis (fairness improvement vs accuracy impact) Validation results on unaltered holdout set --- ## Post-Training Bias Evaluation URL: https://docs.standardintelligence.com/pre-training-bias-assessment--post-training-bias-evaluation Breadcrumb: Development › Data Governance › Pre-Training Bias Assessment › Post-Training Bias Evaluation Last updated: 28 Feb 2026 Post-Training Bias Evaluation --- ## Pre-Training Bias Assessment URL: https://docs.standardintelligence.com/pre-training-bias-assessment Breadcrumb: Development › Data Governance › Pre-Training Bias Assessment Last updated: 28 Feb 2026 Distributional Analysis — Statistical Tests & Output Matrix Label Bias Analysis — Inter-Rater Reliability & Relabelling Label Bias Analysis — Ground Truth Contamination Assessment Proxy Variable Detection — Correlation Methods & Thresholds Proxy Variable Detection — Justification Review for Retained Proxies Intersectional Pre-Training Analysis — Subgroups & Cell Size Thresholds Post-Training Bias Evaluation --- ## Predictive Parity URL: https://docs.standardintelligence.com/predictive-parity Breadcrumb: Development › Data Governance › Post-Training Bias Evaluation › Predictive Parity Last updated: 28 Feb 2026 Predictive Parity AISDP module(s): 4 ( Data Governance and Dataset Documentation ), 5 (Testing and Validation) Regulatory basis: Article 10(2)(f); Article 9 Predictive parity asks whether positive predictions are equally accurate across subgroups. If the model's positive predictions are correct 85% of the time for one subgroup but only 65% for another, individuals in the second subgroup face a higher risk of being incorrectly subjected to the system's consequences. This metric is particularly important for high-stakes decisions such as credit denial, job rejection, or benefits eligibility, where a false positive imposes a real cost on the affected person. In a recruitment screening system, a positive prediction (candidate recommended for interview) that is less reliable for one demographic group means that group experiences a higher rate of "wasted" interviews or false encouragement, while negative predictions (candidate not recommended) that are less reliable mean qualified candidates are disproportionately screened out. The AISDP records the positive predictive value (precision) per protected subgroup, the parity thresholds applied, and the disparity between the best-performing and worst-performing subgroups. The report should contextualise the metric by explaining what predictive parity means for the specific deployment: which real-world consequences flow from false positives and false negatives, and how disparities in predictive accuracy translate into differential harm. Predictive parity can conflict with equalised odds; a model that achieves one may fail the other. Article 81 addresses the fairness concept prioritisation decision that resolves such conflicts. Key outputs Positive predictive value per protected subgroup Parity assessment and disparity measurement Contextualised impact analysis --- ## Proxy Variable Detection — Correlation Methods & Thresholds URL: https://docs.standardintelligence.com/proxy-variable-detection-correlation-methods-and-thresholds Breadcrumb: Development › Data Governance › Pre-Training Bias Assessment › Proxy Variable Detection — Correlation Methods & Thresholds Last updated: 28 Feb 2026 Proxy Variable Detection — Correlation Methods & Thresholds AISDP module(s): 4 ( Data Governance and Dataset Documentation ) Regulatory basis: Article 10(2)(f) Proxy variables are features that are not themselves protected characteristics but correlate strongly enough with protected characteristics to serve as surrogates. Postcode correlates with ethnicity and socioeconomic status. University name correlates with social class. Name correlates with gender and ethnicity. A model that excludes protected characteristics from its inputs but includes strong proxies can still discriminate. The detection method computes correlation between each feature and each protected characteristic using the appropriate measure: Pearson for continuous-continuous pairs, point-biserial for continuous-binary pairs, Spearman for ordinal pairs, and mutual information as a non-linear alternative for any pair type. Features with correlation coefficients above a defined threshold are flagged for review. A threshold of 0.3 is a common starting point, though the Technical SME calibrates this to the domain; in domains where even modest proxy effects carry serious consequences (employment, credit), a lower threshold may be appropriate. The output is a correlation matrix: each feature against each protected characteristic, with the correlation statistic and confidence interval. Features exceeding the threshold are flagged, but the flag does not automatically mean removal. The Technical SME conducts a justification review for each flagged feature. Some features may have strong predictive value for the legitimate intended purpose and may be retainable if the proxy risk is mitigated through fairness constraints during training. Column-level lineage supports proxy detection by revealing indirect relationships. A derived feature such as "risk_score" may not correlate directly with a protected characteristic, yet if its constituent source features (for example, postcode and annual income) are themselves proxies, the derived feature inherits the proxy risk. Lineage enables the Technical SME to trace these indirect pathways. Key outputs Proxy variable correlation matrix Flagged features register (exceeding threshold) Correlation methodology and threshold documentation --- ## Proxy Variable Detection — Justification Review for Retained Proxies URL: https://docs.standardintelligence.com/proxy-variable-detection-justification-review-for-retained Breadcrumb: Development › Data Governance › Pre-Training Bias Assessment › Proxy Variable Detection — Justification Review for Retained Proxies Last updated: 28 Feb 2026 Proxy Variable Detection — Justification Review for Retained Proxies AISDP module(s): 4 ( Data Governance and Dataset Documentation ) Regulatory basis: Article 10(2)(f) When a feature is flagged as a potential proxy variable, the Technical SME conducts a justification review to determine whether the feature should be retained, decorrelated, or removed. The review balances the feature's predictive value against its proxy risk, and the reasoning is documented for each flagged feature. The review considers three questions. First, is the feature's predictive value for the legitimate intended purpose substantive and difficult to replace with alternative features that carry lower proxy risk? A feature that marginally improves accuracy but strongly correlates with a protected characteristic is harder to justify than a feature that is essential for the system's core function. Second, can the proxy risk be mitigated through in-processing techniques (fairness constraints, adversarial debiasing) without unacceptable accuracy loss? If fairness constraints can neutralise the proxy effect during training, retention may be defensible. Third, does the feature's removal introduce other risks, such as reduced model performance for the subgroup the proxy represents? For each flagged feature, the AISDP records the correlation statistic, the predictive importance (SHAP-based or permutation importance), the justification for the retention decision, and the mitigation applied if the feature is retained. Features that are removed are also documented, with the rationale for removal and any impact on model performance. The feature registry maintains the complete set of proxy variable assessments, providing a single reference for reviewers and auditors. The registry is updated whenever features are added, removed, or modified. Key outputs Justification review documentation per flagged feature Retention/removal decision with rationale Mitigation specification for retained proxy variables --- ## Data Governance Artefacts URL: https://docs.standardintelligence.com/rag-specific-governance--data-governance-artefacts Breadcrumb: Development › Data Governance › RAG-Specific Governance › Data Governance Artefacts Last updated: 28 Feb 2026 Data Governance Artefacts --- ## RAG-Specific Governance URL: https://docs.standardintelligence.com/rag-specific-governance Breadcrumb: Development › Data Governance › RAG-Specific Governance Last updated: 28 Feb 2026 Knowledge Base Completeness & Currency Embedding Bias & Representational Risk Multilingual Performance GDPR Status of Stored Embeddings Embedding Version Control Data Governance Artefacts --- ## Record Count, Schema & Version Identifier URL: https://docs.standardintelligence.com/record-count-schema-and-version-identifier Breadcrumb: Development › Data Governance › Dataset Documentation › Record Count, Schema & Version Identifier Last updated: 28 Feb 2026 Record Count, Schema & Version Identifier AISDP module(s): 4 ( Data Governance and Dataset Documentation ) Regulatory basis: Article 10(2); Annex IV (2)(d) The composition section of the dataset documentation captures the structural characteristics that enable a reviewer to understand the dataset's scale, shape, and technical format. For each dataset, the Technical SME records the total record count, the number of features (columns), the storage size, the schema (field names, data types, value formats), and the immutable version identifier. The version identifier is critical for traceability . It links the dataset to the model versions trained on it, enabling the AISDP to state precisely which data was used to train each model version. Tools such as DVC, Delta Lake, and LakeFS assign immutable version identifiers to dataset snapshots. Cloud-native versioning (S3 object versioning, for example) provides an alternative for simpler architectures. Schema documentation should be sufficiently detailed for a technical reviewer to reconstruct the data's structure without accessing the data itself. Each field entry records the field name, data type, permitted values or value range, a brief description of the field's meaning, and whether it contains personal data or special category data . Automated schema validation (using Pandera for Pandas DataFrames or dbt's built-in tests for SQL pipelines) should enforce consistency between the documented schema and the actual data, catching structural drift before it affects model training. Documentation depth should be proportionate to the dataset's role. Training datasets for high-risk systems warrant comprehensive treatment; static reference datasets warrant a lighter approach. The AI System Assessor documents the standard applied to each dataset category and the rationale for the proportionality decision. Key outputs Composition record per dataset (count, schema, version) Schema validation configuration Proportionality rationale for documentation depth --- ## Regression Tests — Golden Dataset with Per-Subgroup Cases URL: https://docs.standardintelligence.com/regression-tests-golden-dataset-with-per-subgroup-cases Breadcrumb: Development › CI › CD Pipelines › Integration Testing › Regression Tests — Golden Dataset with Per-Subgroup Cases Last updated: 28 Feb 2026 Regression Tests — Golden Dataset with Per-Subgroup Cases AISDP module(s): Module 5 (Testing and Validation) Regulatory basis: Article 9 , Article 10 , Article 15 A golden dataset of historical inputs with known correct outputs serves as the regression baseline. Every candidate release is evaluated against this dataset to detect behavioural regression. The golden dataset is distinct from the training or evaluation datasets; it is a curated collection of cases selected specifically for regression detection. The golden dataset must include cases drawn from each protected characteristic subgroup. This ensures that regressions do not disproportionately affect vulnerable populations. A candidate model that maintains overall accuracy but degrades accuracy for a specific demographic group would pass a naive regression test but fail a subgroup-aware regression test. The per-subgroup structure of the golden dataset makes this visible. The golden dataset is version-controlled and expanded over time as new edge cases are discovered through production operation, incident investigation, or user feedback. Cases that previously caused errors, near-misses, or fairness concerns should be added to the golden dataset to prevent recurrence. The regression test results, including per-subgroup breakdowns, are retained as Module 5 evidence and feed into the model validation gate s. Key outputs Golden dataset with per-subgroup case coverage Version-controlled dataset expanded over time with discovered edge cases Per-subgroup regression analysis for each candidate release Module 5 AISDP evidence --- ## SBOMs URL: https://docs.standardintelligence.com/sboms Breadcrumb: Development › CI › CD Pipelines › Artefacts › SBOMs Last updated: 28 Feb 2026 SBOMs AISDP module(s): Module 9 (Robustness and Cybersecurity), Module 3 (Architecture and Design) Regulatory basis: Article 15 , Annex IV (2) This artefact comprises the collection of SBOMs generated across the system's lifecycle. Each SBOM captures the complete dependency inventory for a specific build, including ML-specific components. The SBOM collection provides a dependency evolution history. When a supply chain vulnerability is disclosed, the organisation can search historical SBOMs to determine which deployed versions were affected and whether the vulnerability was present during periods when the system was processing personal data. This retrospective analysis capability supports incident response and regulatory reporting obligations. Each SBOM is linked to the container image version it describes and to the deployment ledger entry that recorded the image's deployment. The SBOM for the currently deployed version is the primary reference for Module 9's cybersecurity documentation. Archived SBOMs are retained for the ten-year period. Key outputs SBOM collection across all builds Linkage to container image versions and deployment ledger entries Retrospective vulnerability search capability Module 9 and Module 3 evidence --- ## Secret Detection (Pre-Commit Hooks & CI Steps) URL: https://docs.standardintelligence.com/secret-detection-pre-commit-hooks-and-ci-steps Breadcrumb: Development › CI › CD Pipelines › Static Analysis › Secret Detection (Pre-Commit Hooks & CI Steps) Last updated: 28 Feb 2026 Secret Detection (Pre-Commit Hooks & CI Steps) AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 Credentials, API keys, database connection strings, and personal data must never appear in the version control history. Accidentally committed secrets are a persistent security risk: Git history retains the secret even after the offending commit is amended or removed. Tools such as git-secrets, truffleHog, GitLeaks, and detect-secrets scan for credential patterns in code and configuration files. Secret detection runs at two points. Pre-commit hooks catch secrets before they enter the repository, providing the fastest feedback loop. CI pipeline steps catch secrets that bypassed the hooks, either because the hooks were not installed on a developer's machine or because the pattern was not matched locally. Both layers are necessary for defence in depth. The security team treats any committed secret as compromised and rotates it immediately, regardless of whether the commit was subsequently removed. In production, secrets are sourced from a dedicated secrets manager (HashiCorp Vault, AWS Secrets Manager, Azure Key Vault), never from the repository. The secret detection configuration, scan results, and any incident response records for committed secrets are retained as Module 9 evidence. Key outputs Secret detection tool configuration (detect-secrets, truffleHog, or GitLeaks) Pre-commit hook and CI pipeline integration Incident response procedure for committed secrets Module 9 AISDP evidence --- ## Security Scan Results & Remediation Records URL: https://docs.standardintelligence.com/security-scan-results-and-remediation-records Breadcrumb: Development › CI › CD Pipelines › Artefacts › Security Scan Results & Remediation Records Last updated: 28 Feb 2026 Security Scan Results & Remediation Records AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 This artefact comprises the results from dependency scanning, licence compliance scanning, secret detection, and container vulnerability scanning . Each scan result is timestamped and linked to the pipeline execution that produced it. Alongside the scan results, remediation records document how identified vulnerabilities were addressed. For each vulnerability, the record captures the vulnerability identifier (CVE or equivalent), the severity, the affected component, the remediation action (patch, upgrade, replacement, or exception), the date of remediation, and the identity of the person who performed or approved the remediation. For vulnerabilities that were accepted through the exception process, the remediation record includes the exception justification, the compensating controls, and the expiry date. The collection of scan results and remediation records demonstrates proactive security management to assessors and regulators, showing that vulnerabilities are identified, tracked, and resolved systematically. Key outputs Security scan result archive across all pipeline executions Remediation records per identified vulnerability Exception records with justification and expiry Module 9 AISDP evidence --- ## Selection Rate Ratio (Four-Fifths Rule) URL: https://docs.standardintelligence.com/selection-rate-ratio-four-fifths-rule Breadcrumb: Development › Data Governance › Post-Training Bias Evaluation › Selection Rate Ratio (Four-Fifths Rule) Last updated: 28 Feb 2026 Selection Rate Ratio (Four-Fifths Rule) AISDP module(s): 4 ( Data Governance and Dataset Documentation ), 5 (Testing and Validation) Regulatory basis: Article 10(2)(f); Article 9 For binary classification systems, the selection rate ratio is the simplest and most widely understood fairness metric. It computes the positive outcome rate for each protected subgroup, divides each by the positive outcome rate for the majority group, and flags any ratio below 0.80 (the four-fifths rule). This metric has regulatory heritage from US employment law, where it has served as a screening device for adverse impact for decades. The four-fifths rule does not have specific regulatory status under EU law; the AI Act does not prescribe fairness thresholds. The 0.80 threshold is used here as an industry convention with broad practitioner acceptance, and organisations should calibrate it to their system's risk profile and deployment context. The AISDP reports selection rate ratios for all measured subgroups, with a clear indication of which subgroups meet the 0.80 threshold and which do not. The report should include confidence intervals to indicate the statistical reliability of the ratios, particularly for smaller subgroups where sampling variation may produce misleading values. The four-fifths rule's limitation is that it measures only outcome rates, not outcome quality. A model that gives positive outcomes to the same proportion of each subgroup but makes systematically worse predictions for one subgroup (more false positives, more false negatives) would pass the four-fifths test while still being unfair. The selection rate ratio is therefore a necessary but insufficient fairness check; it is complemented by the equalised odds, predictive parity, calibration, and individual fairness metrics. Where the selection rate ratio falls below 0.80 for any subgroup, the finding triggers the bias mitigation process. The threshold is configurable; some organisations may adopt a stricter threshold (0.90) based on the AI Governance Lead 's assessment of the system's risk profile. Key outputs Selection rate ratio report per protected subgroup Threshold compliance indication per subgroup Confidence intervals for smaller subgroups --- ## Service Dependency Management URL: https://docs.standardintelligence.com/service-dependency-management Breadcrumb: Development › Version Control › Service Dependency Management Last updated: 28 Feb 2026 Microservice Dependency Mapping AISDP module(s): Module 3 (Architecture and Design) Regulatory basis: Annex IV (2)(b), Article 12 High-risk AI systems built on microservice architectures require a current dependency map showing how each service communicates with every other service, the data contracts between them, the sequence in which services process data for a given inference request, and the failure modes that propagate across service boundaries. This is a compliance artefact, not an architectural convenience. Without a dependency map, the organisation cannot assess whether a change to one service constitutes a substantial modification to the system as a whole. A modification to the data ingestion service that alters how missing values are handled will change the feature vectors produced by the feature engineering service, which will change the model's inference behaviour. The Technical SME must be able to trace this chain for every proposed change. Before any service is updated, a change impact analysis traces the change's effects through the dependency map. The analysis references the specific AISDP modules affected and assesses whether the combined effect crosses the substantial modification threshold. The composite version identifier captures the specific combination of service versions currently deployed; a deployment event that changes one microservice changes the composite version, even if the other services remain unchanged. Key outputs Service dependency map with communication paths, data contracts, and failure modes Change impact analysis template Integration with the composite version identifier Module 3 AISDP evidence Contract Tests (Consumer Expectation Validation) AISDP module(s): Module 5 (Testing and Validation), Module 3 (Architecture and Design) Regulatory basis: Article 15 , Annex IV(3) Contract testing addresses a failure mode that integration testing misses: the silent breaking change. When a data provider modifies an API response format, or a feature computation service changes its rounding behaviour, the dependent system may continue operating without errors yet produce incorrect results. Contract testing detects these breaks before they reach production. Consumer-driven contract testing (Pact) works by having each consumer of a service define a contract: "I expect to send this request and receive a response with these fields, of these types, within these value ranges." The contracts are stored in a broker and verified against the provider on every provider build. If the provider makes a change that violates a consumer's contract, the provider's build fails before the change is deployed. Statistical contract testing (Great Expectations applied to data interfaces) extends the concept to data quality. A data consumer defines statistical expectations: "I expect the income column to have no null values, to be non-negative, and to have a mean within 10% of the historical mean." Statistical contracts are particularly important for ML systems, because a delivery that satisfies the schema contract but violates the statistical contract may be silently accepted and degrade model performance. Contract tests run as part of the CI pipeline for every service, and a failure blocks deployment. Key outputs Consumer-driven contract definitions (Pact or equivalent) Statistical contract definitions (Great Expectations or equivalent) CI pipeline integration with deployment blocking on failure Module 5 and Module 3 AISDP evidence --- ## Source & Acquisition Method URL: https://docs.standardintelligence.com/source-and-acquisition-method Breadcrumb: Development › Data Governance › Dataset Documentation › Source & Acquisition Method Last updated: 28 Feb 2026 Source & Acquisition Method AISDP module(s): 4 ( Data Governance and Dataset Documentation ) Regulatory basis: Article 10(2); Annex IV (2)(d) Every dataset used in the system's lifecycle, whether for training, validation, testing, calibration, or fine-tuning, requires provenance documentation that specifies how and from where the data was acquired. Provenance must be specific: "data collected from deployer ATS systems between January 2021 and December 2023 under data processing agreements" is acceptable; "data from various sources" is not. For each dataset, the Technical SME records the original collection methodology, identifying whether the data was collected through direct observation, user interaction, sensor capture, manual entry, or automated scraping. The legal basis under GDPR Article 6 for the collection must be documented, along with any consent mechanisms used. Where the data was licensed from a third party, the licensing terms and their compatibility with the intended use are recorded. The Datasheets for Datasets framework (Gebru et al., 2021) provides a structured template. Its seven sections cover motivation, composition, collection process, preprocessing, uses, distribution, and maintenance. For EU AI Act compliance, the collection process section requires additional depth beyond the standard template, specifically the GDPR lawful basis and the data processing agreements governing cross-organisational transfers. Dataset documentation is treated as a living artefact. A version bump, whether from new records, modified features, or changed quality rules, triggers a corresponding documentation update. Tools such as OpenMetadata and DataHub support attaching structured documentation to dataset versions with change tracking. For lighter approaches, a Markdown file co-located with the dataset in the versioning system (DVC, Delta Lake) provides version-controlled documentation that evolves alongside the data. Key outputs Source and acquisition record per dataset GDPR lawful basis documentation Data processing agreement references (where applicable) --- ## Special Category Data URL: https://docs.standardintelligence.com/special-category-data Breadcrumb: Development › Data Governance › Special Category Data (Art. 10(5)) Last updated: 28 Feb 2026 Legal Basis & Purpose Limitation AISDP module(s): 4 ( Data Governance and Dataset Documentation ) Regulatory basis: Article 10(5); GDPR Article 9 Article 10(5) permits the processing of special category personal data (race, ethnicity, health, sexual orientation, religious belief, trade union membership, genetic and biometric data) strictly to support bias monitoring and detection, subject to specific conditions. This provision resolves a tension: meaningful bias detection is frequently impossible without access to the demographic data that data protection law restricts. Before processing real special category data, the organisation must demonstrate that such processing is strictly necessary for the purposes of ensuring bias monitoring, detection, and correction, and that the purpose cannot be achieved through less intrusive means. This strict necessity test requires the organisation to actually attempt the alternatives, evaluate their adequacy, and document the results. Synthetic data frequently falls short because it fails to capture the correlational structure between protected characteristics and other features with sufficient fidelity. Anonymised data may not preserve the subgroup structure needed for disaggregated fairness analysis. The assessment should compare bias metrics computed on synthetic/anonymised data against metrics computed on a small, carefully governed sample of real data to quantify the adequacy gap. Where the sufficiency test concludes that alternatives are insufficient, the organisation may process special category data under Article 10(5), but the purpose must be strictly limited to bias monitoring and detection. The data must not be used for model training, feature engineering, or any other purpose. The governance workflow requires a Special Category Data Processing Request from the Technical SME, a GDPR Article 9 compliance review by the DPO Liaison, and approval from the AI Governance Lead . Key outputs Sufficiency test results (synthetic and anonymised alternatives) Special Category Data Processing Request DPO Liaison compliance review AI Governance Lead approval Pseudonymisation & Automatic Deletion AISDP module(s): 4 (Data Governance and Dataset Documentation) Regulatory basis: Article 10(5); GDPR Article 9 Processing special category data under Article 10(5) requires rigorous technical and organisational safeguards. The AISDP documents the five-layer safeguard architecture and the automatic deletion or anonymisation process. Isolation requires the special category data to be stored in a dedicated, physically or logically separated environment, inaccessible from the main development and production data stores. Pseudonymisation replaces direct identifiers with pseudonymous keys, with the mapping table stored separately under stricter access controls (HashiCorp Vault or equivalent). Encryption applies AES-256 at rest and TLS 1.3 in transit. Access control restricts processing to named individuals with documented business need, with all access events logged in an immutable audit trail. For the highest assurance, confidential computing (Azure SGX, AWS Nitro, Google Confidential VMs) runs the bias computation within a hardware-secured enclave. Automatic deletion is triggered after the bias detection purpose is complete. The DPO Liaison verifies deletion technically, confirming removal from all storage locations including backups, caches, and derived datasets. A simple delete command is insufficient; verification must confirm that no residual copies exist. Where anonymisation rather than deletion is applied, a re-identification risk assessment confirms that individuals cannot reasonably be re-linked. Module 4 records whether special category data was processed, the specific categories and purpose, the safeguards applied, the processing dates and scope, the deletion or anonymisation schedule, the verification results, and the DPO Liaison's attestation of compliance. Key outputs Five-layer safeguard implementation documentation Deletion/anonymisation verification record DPO Liaison compliance attestation RAG-Specific Governance --- ## Stage Management (Experimental, Staging, Production, Archived) URL: https://docs.standardintelligence.com/stage-management-experimental-staging-production-archived Breadcrumb: Development › Version Control › Model Registry › Stage Management (Experimental, Staging, Production, Archived) Last updated: 28 Feb 2026 Stage Management (Experimental, Staging, Production, Archived) AISDP module(s): Module 3 (Architecture and Design), Module 10 (Record-Keeping) Regulatory basis: Article 12, Article 15 Models progress through defined stages in the registry: experimental (initial registration after training), staging (under validation), production (approved for deployment), and archived (retired). Stage transitions are governed events requiring documented approval; they are not automatic or self-service. The stage management workflow ensures that no model reaches production without passing through the full validation pipeline. Promotion from staging to production requires approval from a named role, typically the AI Governance Lead or a delegate. The approval event is logged with the approver's identity and timestamp. This approval log is Module 10 evidence demonstrating that every production model has received governance sign-off. When a system reaches end-of-life, the production model transitions to the archived stage. The AI System Assessor verifies the artefact's cryptographic signature one final time before archival, and the registry records the decommission date, the end-of-life trigger , and the archive storage location. The archived model must remain retrievable for the ten-year retention period under Article 18 . Key outputs Stage management workflow with defined transition criteria Approval requirements and logging for each stage transition End-of-life archival procedure with final integrity verification Module 3 and Module 10 AISDP evidence --- ## Statement of Business Intent (Signed) URL: https://docs.standardintelligence.com/statement-of-business-intent-signed Breadcrumb: Development › Architectures › Artefacts › Statement of Business Intent (Signed) Last updated: 28 Feb 2026 Statement of Business Intent (Signed) AISDP module(s): Module 1 (System Identity) Regulatory basis: Annex IV (1) The Statement of Business Intent is the foundational artefact for the system's compliance documentation. It captures the precise articulation of what the system is intended to achieve, for whom, and within what constraints, as described in Article 10 6. The signed version represents the Business Owner's formal approval of the intent and its alignment with the organisation's values, the EU AI Act's requirements, and the fundamental rights of affected persons. This artefact should include the system's intended purpose in specific, measurable terms; the target beneficiaries and affected persons; the constraints within which the system operates (including the prohibited outcomes from Article 107 and the ethical framework from Article 108); and the transparency commitments from Article 109. The Business Owner's signature confirms that the intent has been assessed and approved through the governance process. The signed Statement of Business Intent serves as the reference point against which subsequent design decisions are measured throughout the system's lifecycle. It feeds into AISDP Module 1 and forms part of the evidence pack for the conformity assessment . Any material change to the business intent requires a new governance approval and a reassessment of whether the change constitutes a substantial modification . Key outputs Signed Statement of Business Intent Business Owner approval record Module 1 AISDP entry --- ## Statement of Business Intent URL: https://docs.standardintelligence.com/statement-of-business-intent Breadcrumb: Development › Architectures › Statement of Business Intent Last updated: 28 Feb 2026 System Purpose & Constraints AISDP module(s): Module 1 (System Identity), Module 3 (Architecture and Design) Regulatory basis: Annex IV(1), Article 9 Before any architectural work begins, the Business Owner must articulate a precise statement of business intent. This statement defines what the system is intended to achieve, for whom it operates, and within what constraints. Precision matters here: "to assist human recruiters in screening high-volume applications by ranking candidates against role-specific competency profiles" is adequate. "To improve recruitment efficiency" is too vague to constrain design decisions or enable meaningful compliance assessment. The Business Owner assesses the business intent for alignment with the organisation's values, the EU AI Act's requirements, and the fundamental rights of affected persons. If the intent cannot be satisfied without creating unacceptable risks to fundamental rights, the organisation must modify the intent or decline to develop the system. This assessment is documented in the AISDP as a precondition to Module 1. The statement of business intent also serves as the reference point against which every subsequent design decision is measured. Architectural choices, feature selection, threshold calibration, and post-processing rules should all trace back to this statement. Where a design decision cannot be justified in terms of the stated purpose and constraints, it requires either revision of the decision or revision of the intent statement through the appropriate governance gate. Key outputs Statement of business intent with specific purpose, beneficiaries, and constraints Business Owner's assessment of alignment with organisational values and regulatory requirements Documented precondition to AISDP Module 1 Prohibited Outcomes AISDP module(s): Module 1 (System Identity), Module 6 (Risk Management System) Regulatory basis: Article 5, Article 9 The ethical framework established before development must explicitly identify outcomes that the system is prohibited from producing. These prohibitions translate high-level principles into concrete design constraints that the engineering team can implement and test against. The development team addresses several foundational questions when defining prohibited outcomes. What are the potential harms this system could cause? Who bears those harms? Are the harms distributed equitably? The answers inform a set of boundaries that the system must never cross, regardless of what the model's raw outputs might suggest. For a recruitment screening system, a prohibited outcome might be that no protected characteristic subgroup receives a selection rate below 90% of the highest-performing group. For a credit scoring system, it might be that no applicant is rejected solely on the basis of postcode. These prohibitions become testable acceptance criteria, embedded in the CI/CD pipeline and monitored in production. The AISDP must demonstrate the translation from principles to constraints. "The system must not discriminate" is a principle; a specific selection rate ratio threshold is a design constraint. The distinction is important because principles alone cannot be verified through testing, while constraints can. Key outputs Enumerated list of prohibited outcomes with measurable thresholds Mapping from ethical principles to testable design constraints Integration points with CI/CD acceptance criteria and production monitoring Ethical Framework — Design Constraints & Non-Deployment Thresholds AISDP module(s): Module 1 (System Identity), Module 6 (Risk Management System) Regulatory basis: Article 9, Article 14 The ethical framework documented before development begins should reference recognised principles such as the EU's Ethics Guidelines for Trustworthy AI, the OECD AI Principles, or the organisation's own responsible AI framework. Its purpose is to translate those principles into concrete design constraints and, critically, to establish non-deployment thresholds: the conditions under which the system must not proceed to production. Design constraints derived from the ethical framework address questions of harm distribution and redress. What mechanisms allow affected persons to understand, challenge, and seek redress for the system's decisions? What safeguards ensure the system serves its intended beneficiaries without unfairly disadvantaging others? These questions yield specific requirements for the explainability layer, the human oversight interface, and the post-processing rules. Non-deployment thresholds define the performance and fairness boundaries below which the system is considered unfit for production. If the system's fairness metrics, accuracy measures, or robustness scores fall below these thresholds during pre-deployment validation, deployment is blocked. The AI Governance Lead approves the ethical framework and the thresholds it establishes before development resources are committed. Key outputs Ethical framework document referencing recognised principles Design constraints derived from the framework, with measurable criteria Non-deployment thresholds for performance, fairness, and robustness AI Governance Lead approval record Transparency Commitment (Deployer, Affected Person, Regulator, Internal) AISDP module(s): Module 8 (Transparency and User Information) Regulatory basis: Article 13 , Article 50 , Annex IV(1)(c) Before development begins, the organisation commits to a level of transparency appropriate to the system's risk tier and the expectations of its stakeholders. For high-risk systems, this commitment spans four dimensions. Transparency to deployers specifies what information will be provided about the system's capabilities, limitations, and operational requirements. This feeds directly into the Instructions for Use documentation required under Article 13. Transparency to affected persons specifies how individuals will be informed of the system's involvement in decisions that affect them, and how they can obtain explanations of individual outcomes. This dimension intersects with the right to explanation under AI Act Article 86 and the right not to be subject to solely automated decision-making under GDPR Article 22 (see also Recital 71). Transparency to regulators specifies how the AISDP and its evidence base will be made available for inspection by national competent authorit ies. The commitment should address response timelines and export formats. Internal transparency specifies how the development team, governance leads, and organisational leadership will maintain ongoing visibility into the system's behaviour during both development and production operation. These commitments are documented and approved by the AI Governance Lead before development resources are committed. They become the basis for the transparency measures documented in AISDP Module 8 and the monitoring dashboards described above. Key outputs Transparency commitment document covering all four dimensions AI Governance Lead approval record Mapping to Module 8 deliverables and Article 13 requirements --- ## Static Analysis URL: https://docs.standardintelligence.com/static-analysis Breadcrumb: Development › CI › CD Pipelines › Static Analysis Last updated: 28 Feb 2026 Linting & Type Checking AI-Specific Custom Rules (Semgrep) — Demographic Feature Flagging AI-Specific Custom Rules — Hardcoded Threshold Detection AI-Specific Custom Rules — Missing Logging Detection (Art. 12) AI-Specific Custom Rules — Model Registry Bypass Detection Dependency Scanning (Snyk, Dependabot, pip-audit, OWASP) Licence Compliance Scanning (FOSSA, Black Duck, pip-licenses) Secret Detection (Pre-Commit Hooks & CI Steps) --- ## Statistical & Econometric Models URL: https://docs.standardintelligence.com/statistical-and-econometric-models Breadcrumb: Development › Model Selection › Full-Spectrum Evaluation › Statistical & Econometric Models Last updated: 28 Feb 2026 Statistical & Econometric Models AISDP module(s): 2, 3 Regulatory basis: Article 3(1); Articles 13, 14; Annex IV (2)(b) Logistic regression, linear regression, generalised linear models, and survival models occupy a middle ground between heuristic systems and machine learning. They learn from data, yet their structure is transparent and their parameters directly interpretable. A logistic regression model for credit scoring produces coefficients corresponding to the contribution of each input variable to the probability of default. These coefficients can be documented, challenged, and explained to affected persons. Statistical models are well-suited to domains with established modelling conventions and regulatory expectations, such as insurance pricing, credit risk assessment , and epidemiological modelling. Regulatory bodies in these sectors have decades of experience reviewing statistical models, and they will evaluate AI Act compliance against that baseline of expectations. The AISDP documentation for statistical models is straightforward: the model specification (equation form, feature set, coefficient values) can be presented to a qualified technical reviewer with complete transparency. On the compliance criteria, statistical models score strongly on documentability (every parameter is a named coefficient), testability (standard evaluation methodologies are well-established), auditability (individual decisions can be reconstructed from inputs and coefficients), and determinism (outputs are fully reproducible). Bias detectability is also strong, since the model's reliance on specific features is visible directly from the coefficients, enabling proxy variable identification. Maintainability depends on the stability of the underlying domain; models retrained on updated data typically produce predictable, incremental changes. Where statistical models fall short is predictive performance on complex, non-linear tasks. The model selection rationale should document whether the performance gap relative to more complex alternatives is material to the system's intended purpose and whether the compliance advantages of interpretability justify the performance trade-off. Key outputs Compliance criteria scoring for statistical model candidates Performance comparison against more complex alternatives --- ## Substantial Modification Detection URL: https://docs.standardintelligence.com/substantial-modification-detection Breadcrumb: Development › Version Control › Substantial Modification Detection Last updated: 28 Feb 2026 Modification Threshold Framework AISDP module(s): Module 6 (Risk Management System), Module 12 (Post-Market Monitoring) Regulatory basis: Article 3(23) Article 3(23) defines substantial modification as a change "not foreseen or planned in the initial conformity assessment " that affects compliance or the intended purpose. The definition is qualitative; the regulation does not specify numeric thresholds. The organisation must therefore define its own thresholds, document the rationale for each, and encode them in automated gates. Several starting points are recommended. A change in AUC-ROC exceeding ±0.03, any subgroup fairness metric breaching its established threshold, a change in the model's top-five feature importance ranking, the introduction or removal of input features, a change in the intended purpose or deployment context, or a modification to the human oversight architecture would all be candidates for substantial modification assessment. These thresholds are calibrated by the Technical SME to the system's specific risk profile. A recruitment screening system processing thousands of candidates per month requires tighter thresholds than an internal document classification system. The calibration rationale is documented in the risk register and reviewed annually. The thresholds are encoded as assertions in the CI/CD pipeline , and tools such as Evidently AI or NannyML can generate profile-comparison reports with configurable pass/fail conditions. Key outputs Defined quantitative thresholds per measurable dimension of change Calibration rationale documented in the risk register CI/CD pipeline encoding of thresholds as automated assertions Module 6 and Module 12 AISDP documentation Cumulative Baseline Tracking AISDP module(s): Module 12 (Post-Market Monitoring), Module 6 (Risk Management System) Regulatory basis: Article 3(23), Article 72 A series of individually sub-threshold changes that collectively alter the system's behaviour significantly may constitute a cumulative substantial modification. The system's performance may degrade by 0.005 AUC-ROC with each minor update; after ten updates, the cumulative drift of 0.05 exceeds the threshold, even though no individual change triggered the flag. The mitigation is to maintain a baseline snapshot captured at the time of the last conformity assessment. This baseline records the evaluation metrics, the output distribution profile, the fairness metrics, and the feature importance rankings from the assessed version. Every subsequent version is compared against both the preceding version (to detect individual-change threshold breaches) and the baseline (to detect cumulative drift). Two parallel metric tracks implement this. The version-to-version comparison runs in the CI pipeline for every candidate release. The baseline comparison runs on a scheduled basis, quarterly being aligned with the risk review cadence, and additionally whenever a new version is deployed. Evidently AI supports this through its time-series monitoring capability, comparing each subsequent dataset or model version against the reference and alerting when cumulative drift exceeds a configured threshold. Key outputs Baseline snapshot from the last conformity assessment (metrics, distributions, feature importance) Version-to-version comparison in the CI pipeline Baseline comparison on a scheduled and per-deployment basis Module 6 and Module 12 AISDP evidence Decision Flow for Borderline Cases AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Article 3(23) When the automated quality gates flag a change that approaches or exceeds a substantial modification threshold, the determination follows a defined decision flow. The Technical SME conducts an initial assessment, documenting which metrics have changed, by how much, and why. The AI Governance Lead then determines whether the change constitutes a substantial modification under Article 3(23). For borderline cases, the Legal and Regulatory Advisor provides input, particularly where the change involves the intended purpose or deployment context. The determination is a documented decision with three possible outcomes: the change is a substantial modification (triggering a new conformity assessment), the change is within acceptable bounds (documented with supporting evidence in Module 12), or the cumulative baseline comparison has been triggered (requiring the full substantial modification assessment even though the individual change was sub-threshold). If a substantial modification is confirmed, the consequence is significant: a new conformity assessment is required before the modified system can be placed on the market or put into service. The system re-enters Phase 5 of the delivery process. Organisations should design their change management processes to anticipate this possibility, assessing changes against the thresholds before implementation rather than after. Key outputs Documented decision flow with role assignments Initial assessment template for the Technical SME Legal and Regulatory Advisor consultation for borderline cases Determination records with rationale and evidence Trigger: Re-Assessment Under the Act AISDP module(s): Module 6 (Risk Management System), Module 12 (Post-Market Monitoring) Regulatory basis: Article 3(23), Article 43 When a change is determined to be a substantial modification, the regulatory consequence is that a new conformity assessment must be completed before the modified system can be placed on the market or put into service. This is the operational trigger that connects the version control and change management framework to the conformity assessment process. The re-assessment follows the same conformity assessment process as the initial assessment, though it may be scoped to focus on the aspects of the system affected by the modification. The assessor evaluates the modified system against the requirements of the EU AI Act, taking into account the nature of the modification and its impact on the system's compliance posture. The CI/CD pipeline should provide early warning when a change-in-progress is trending toward the substantial modification threshold. This enables the team to adjust the change, seek governance approval in advance, or prepare for a new conformity assessment. The substantial modification determination record, including the assessment of whether a new conformity assessment was required and the outcome of any such assessment, is retained in Module 12 as part of the system's change history. Key outputs Re-assessment trigger documentation Scoping guidance for modification-focused conformity assessment Early warning mechanism in the CI/CD pipeline Module 6 and Module 12 AISDP evidence --- ## System Architecture Document (C4 Diagrams) URL: https://docs.standardintelligence.com/system-architecture-document-c4-diagrams Breadcrumb: Development › Architectures › Artefacts › System Architecture Document (C4 Diagrams) Last updated: 28 Feb 2026 System Architecture Document (C4 Diagrams) AISDP module(s): Module 3 (Architecture and Design) Regulatory basis: Annex IV (2)(b-e) The System Architecture Document provides a layered description of the system's structure using the C4 model. It comprises a System Context Diagram (Level 1) showing the system within its broader environment, a Container Diagram (Level 2) showing the major technical building blocks within the system boundary, and Component Diagrams (Level 3) for sufficiently complex containers. Each diagram serves a different audience. The Context and Container diagrams are appropriate for the AI Governance Lead , Legal and Regulatory Advisor, and notified body reviewers. Component and Sequence diagrams serve the Technical SME and engineering team. The document is structured in layers of increasing detail so that each reader can access the level appropriate to their role. The architecture document must be version-controlled alongside the code and model artefacts. A diagram that shows the architecture as designed six months ago rather than the architecture as deployed today constitutes a non-conformity. Tooling that generates diagrams from code or infrastructure definitions (Structurizr for C4, Terraform graph for deployment diagrams) reduces the risk of documentation drift. The architecture document feeds into AISDP Module 3 and is a primary reference during the conformity assessment . Key outputs System Context Diagram (C4 Level 1) Container Diagram (C4 Level 2) Component Diagrams (C4 Level 3) for complex containers Version-controlled architecture document as Module 3 evidence --- ## Temporal & Geographic Scope URL: https://docs.standardintelligence.com/temporal-and-geographic-scope Breadcrumb: Development › Data Governance › Dataset Documentation › Temporal & Geographic Scope Last updated: 28 Feb 2026 Temporal & Geographic Scope AISDP module(s): 4 ( Data Governance and Dataset Documentation ) Regulatory basis: Article 10(2)(f), 10(3) The temporal and geographic scope of a dataset directly affects its suitability for training a high-risk AI system deployed in the EU. Article 10(3) requires datasets to be "relevant, sufficiently representative, and to the best extent possible, free of errors and complete." Temporal and geographic coverage are core dimensions of representativeness. Temporal coverage records the start and end dates of the data collection period. The Technical SME assesses whether the period is sufficient to capture seasonal, cyclical, and trend variations relevant to the system's intended purpose. A model trained on twelve months of data may miss multi-year patterns; a model trained during a period of unusual economic conditions (a pandemic, a financial crisis) may not generalise to normal conditions. The assessment is documented with the conclusion and supporting rationale. Geographic scope records the jurisdictions, regions, or member states from which the data originates. For systems intended for deployment across the EU/EEA, the data should reflect the deployment population's geographic diversity. A credit scoring model trained predominantly on UK financial behaviour data may not generalise to markets in other member states with different consumer protection frameworks and lending practices. Geographic gaps are documented as known limitations. Where temporal or geographic coverage is insufficient, the AISDP records the compensating controls: synthetic data augmentation, transfer learning from related domains, stratified sampling to ensure small subgroups are represented in validation and test sets, or deployment restrictions limiting the system's use to populations the data adequately represents. Key outputs Temporal coverage assessment per dataset Geographic scope assessment per dataset Gap documentation and compensating controls --- ## Testability URL: https://docs.standardintelligence.com/testability Breadcrumb: Development › Model Selection › Compliance Criteria Scoring › Testability Last updated: 28 Feb 2026 Testability AISDP module(s): 3, 5 Regulatory basis: Article 15 Testability asks whether the model architecture supports the testing required by Article 15. Can accuracy, robustness, and fairness be evaluated meaningfully? Can adversarial robustness be tested systematically? The assessment determines whether standard evaluation methodologies exist for the candidate architecture and whether they are sufficient for the system's risk profile. Decision trees and linear models produce deterministic outputs that simplify testing: a given input always produces the same output, enabling straightforward pass/fail comparison against expected values. LLMs and diffusion models produce stochastic outputs, requiring statistical testing frameworks that evaluate output distributions rather than individual predictions; the assessment specifies the testing methodology needed and estimates the testing effort. Adversarial robustness testing varies by architecture. Tabular models can be tested through feature perturbation. Image classification models have well-established adversarial example generation methods. LLMs require prompt injection testing, jailbreak evaluation, and content safety assessment. The assessment identifies which adversarial testing methods are applicable and whether they are mature enough to produce reliable results. The testability score reflects the combined ease and reliability of performance testing, fairness testing, and adversarial robustness testing for the candidate architecture. Key outputs Testability score per candidate model Required testing methodology specification --- ## Tooling (MLflow, W&B, SageMaker, Vertex AI) URL: https://docs.standardintelligence.com/tooling-mlflow-wandb-sagemaker-vertex-ai Breadcrumb: Development › Version Control › Model Registry › Tooling (MLflow, W&B, SageMaker, Vertex AI) Last updated: 28 Feb 2026 Tooling (MLflow, W&B, SageMaker, Vertex AI) AISDP module(s): Module 3 (Architecture and Design), Module 10 (Record-Keeping) Regulatory basis: Article 12 , Annex IV (2) The model registry is the central repository for trained model artefacts, serving the same role for models that the code repository serves for source code. Four tooling options are recommended: MLflow Model Registry, Weights & Biases Model Registry, Amazon SageMaker Model Registry, and Vertex AI Model Registry. The choice between these tools should be informed by the organisation's existing infrastructure, the level of integration with the CI/CD pipeline , and the registry's support for immutable versioning and access control. MLflow is open-source and vendor-neutral, making it suitable for organisations that need flexibility. Weights & Biases offers strong experiment tracking integration. SageMaker and Vertex AI integrate deeply with their respective cloud ecosystems. Regardless of the tool selected, the model registry must support six capabilities: immutable versioning, metadata attachment, lineage tracking, stage management, access control, and long-term retrieval. Organisations that self-host their registry should ensure the underlying storage meets the durability and availability requirements for a compliance-critical artefact store. The registry's contents, specifically the metadata for each production model version, are themselves evidence artefacts for the conformity assessment evidence pack. Key outputs Selected model registry tool with deployment configuration Verification that all six required capabilities are supported Module 3 and Module 10 AISDP documentation --- ## Traceability URL: https://docs.standardintelligence.com/traceability Breadcrumb: Development › Version Control › Traceability Last updated: 28 Feb 2026 Technical Traceability (Model, Code, Data, Infrastructure, Input — All Hash-Verified) AISDP module(s): Module 10 (Record-Keeping) Regulatory basis: Article 12 , Annex IV (2) Technical traceability answers the question: for a given system output, what produced it? The answer must identify the exact model version (serialised model file referenced by hash), the exact code version (Git commit SHA for every service involved), the exact data versions (training dataset version, feature transformation version, configuration version), the exact infrastructure state (container image versions, environment configuration), and the exact input data (the specific record processed, captured in the logging layer). This traceability chain enables the engineering team to reproduce any historical inference, diagnose the root cause of unexpected outputs, and demonstrate to a technical auditor that the system's behaviour is deterministic and traceable. The model registry , code repository, data versioning system, container registry, and logging infrastructure must be integrated so that a single query against the composite version identifier retrieves the complete provenance chain. The serving infrastructure tags every inference request with the composite system version at the point of execution. This tag is embedded in the log record and cannot be modified after the fact. Model artefacts are stored with cryptographic hashes verified at load time. Feature transformation code is shared between training and serving pipelines to eliminate training-serving skew. Deployment events are recorded in an immutable deployment ledger. Key outputs End-to-end traceability chain from inference to full provenance Composite version tagging on every inference request Cryptographic hash verification at model load Module 10 AISDP documentation Business & Outcome Traceability (Alignment, Experience, Satisfaction, Overrides, Complaints) AISDP module(s): Module 12 (Post-Market Monitoring), Module 1 (System Identity) Regulatory basis: Article 72 , Annex IV(1) Business traceability answers a different question from technical traceability: is the system achieving the outcomes it was designed to achieve, and are those outcomes aligned with the organisation's stated intent? This dimension is owned by product management and business stakeholders, not by the engineering team. It requires different metrics, cadences, and reporting formats. The product manager or business owner should track five dimensions. Outcome alignment asks whether the system's actual deployment outcomes are consistent with the intended purpose documented in AISDP Module 1. Affected person experience asks whether individuals are receiving the transparency, explanations, and redress pathways documented in Module 8 . Deployer satisfaction assesses whether deployer organisations find the system useful, trustworthy, and aligned with their own compliance obligations. Override and intervention patterns track what proportion of recommendations are modified by human operators and what those modifications reveal. Complaint and escalation volumes track whether affected persons are raising concerns and whether those concerns are being resolved. A translation layer between technical metrics and business outcomes is needed: a 0.02-point AUC-ROC drop is a technical fact whose business significance depends on its real-world impact on affected persons and deployers. Key outputs Business traceability metrics across five dimensions Translation layer between technical and business metrics Periodic business outcome reporting Module 12 and Module 1 AISDP evidence Deployment Ledger (Before/After State, Authoriser, Evidence, Immutable Record) AISDP module(s): Module 10 (Record-Keeping), Module 12 (Post-Market Monitoring) Regulatory basis: Article 12 The deployment ledger is an immutable record of every deployment event in the system's lifecycle. Each entry captures the before-state (the version of each artefact prior to the deployment), the after-state (the version of each artefact after the deployment), the identity of the person who authorised the deployment, and the evidence reviewed as part of the authorisation (validation gate results, governance approvals, substantial modification determinations). The ledger provides the definitive record of the system's version history in production. Given any point in time, the ledger identifies exactly which combination of code, model, data, configuration, and infrastructure was deployed. Combined with the inference logging, this enables precise reconstruction of the system's state for any historical inference. The deployment ledger should be implemented as an append-only data structure, either in the version control system (as tagged commits with structured metadata) or in a dedicated audit log with immutability protections (WORM storage, cryptographic hash chains). GitOps tools such as ArgoCD and Flux naturally produce a deployment ledger through their Git-based workflow: every deployment change is a Git commit, providing an immutable audit trail of what was deployed, when, by whom, and through which approval process. Key outputs Immutable deployment ledger with before/after state records Authoriser identity and evidence references per entry Append-only implementation (GitOps, WORM storage, or hash chains) Module 10 and Module 12 AISDP evidence --- ## Unit Testing URL: https://docs.standardintelligence.com/unit-testing Breadcrumb: Development › CI › CD Pipelines › Unit Testing Last updated: 28 Feb 2026 Data Pipeline Tests (Normal, Boundary, Pathological, Schema, Distribution, Property-Based) Feature Engineering Tests (Registry Match, Determinism, Imputation, Range) Model Inference Tests (Registry Load, Format, Determinism, Latency, Degradation) Post-Processing Tests (Thresholds, Calibration, Business Rules, Edge Cases) Explainability Tests (Coverage, Attribution Sums, Fidelity, Format) Human Oversight Interface Tests (Bypass Prevention, Rationale, Confidence, Countermeasures) --- ## Version Control Artefacts URL: https://docs.standardintelligence.com/version-control-artefacts Breadcrumb: Development › Version Control › Artefacts Last updated: 28 Feb 2026 Version-Controlled Code, Data, Model, Config AISDP module(s): Module 2 (Development Process), Module 10 (Record-Keeping) Regulatory basis: Article 12, Article 18 This artefact encompasses the complete version-controlled estate of the AI system: the code repositories (Git), the data versions (DVC, Delta Lake, or LakeFS), the model registry entries, and the configuration-as-code repositories. Together, these form the evidential backbone of the AISDP's traceability claims. The artefact's compliance value lies in its completeness and its linkage. Each artefact type is version-controlled individually, but the cross-references between them (a model entry referencing its training data version and code commit, a code commit referencing the data version it was validated against) create the navigable traceability chain. The composite version identifier ties these cross-references together at the system level. For AISDP purposes, the evidence comprises the version control governance policy and branch protection configuration (Module 2), sample merge request records demonstrating the approval workflow in practice, repository configuration exports, and the version history excerpts for each artefact type. The complete version-controlled estate must be retained for the ten-year period under Article 18. Key outputs Complete version-controlled code, data, model, and config artefacts Cross-referencing between artefact types Version control governance policy and configuration exports Module 2 and Module 10 evidence Model Registry with Compliance Metadata AISDP module(s): Module 3 (Architecture and Design), Module 10 (Record-Keeping) Regulatory basis: Article 12, Annex IV (2) This artefact is the model registry itself, populated with the compliance metadata described above. It represents the authoritative record of every model version that has been trained, evaluated, deployed, or archived throughout the system's lifecycle. The registry's value as a compliance artefact derives from the completeness and accuracy of its metadata. For each production model version, the registry should contain the full provenance chain (data version, code commit, pipeline execution), the complete validation gate results (performance, fairness, robustness, drift), the stage transition history with approval records, and the content hash for integrity verification. The registry content feeds into Module 3 (as evidence of the model architecture and selection rationale) and Module 10 (as the record-keeping foundation for model-related traceability). A worked traceability example, demonstrating end-to-end provenance retrieval for a specific inference, should be prepared and retained as evidence that the traceability chain is functional. Key outputs Populated model registry with compliance metadata per version Stage transition history with approval records Worked traceability example demonstrating end-to-end provenance Module 3 and Module 10 evidence Substantial Modification Assessment Records AISDP module(s): Module 12 ( Post-Market Monitoring ), Module 6 (Risk Management System) Regulatory basis: Article 3(23) Every change assessed against the substantial modification thresholds produces a determination record. This artefact comprises the collection of all such records, forming the system's change assessment history. Each record documents which metrics changed and by how much, the root cause of the change, whether the cumulative baseline comparison was triggered, the determination (substantial modification or not), the rationale for the determination, the evidence reviewed (validation gate reports, baseline comparisons, impact analyses), and, if the change was determined to be a substantial modification, the re-assessment outcome. The records are retained for the ten-year period regardless of whether the determination was positive or negative. The collection of records serves two purposes. For the organisation, it demonstrates a consistent and documented approach to change assessment, which is a QMS requirement. For regulatory inspectors and notified bodies , it provides transparency into how the system has evolved and how the organisation has governed that evolution. A gap in the assessment records, where changes were made without documented assessment, is a non-conformity. Key outputs Determination records for every change assessed against thresholds Supporting evidence (gate reports, baseline comparisons, impact analyses) Ten-year retention of all records Module 6 and Module 12 AISDP evidence Contract Test Results AISDP module(s): Module 5 (Testing and Validation) Regulatory basis: Article 15 , Annex IV(3) Contract test results from the consumer-driven and statistical contract testing described in are retained as evidence that the system's interfaces are functioning within their documented specifications. Each CI pipeline run produces contract test results; these are stored as pipeline artefacts and referenced in the AISDP. The artefact comprises the contract definitions themselves (what each consumer expects from each provider), the test execution results for each pipeline run (pass/fail per contract, with details for failures), and any contract violations that were detected and the resolution actions taken. The contracts serve as executable documentation of the system's interface assumptions; the test results demonstrate that those assumptions are verified continuously. For AISDP Module 5, the contract test results complement the model validation gate results by demonstrating that the system's components interact correctly, not just that the model produces acceptable outputs in isolation. A model that passes all validation gates but receives malformed inputs due to a broken upstream contract may still produce non-compliant outputs in production. Key outputs Contract definitions version-controlled alongside system code Contract test execution results per CI pipeline run Contract violation logs with resolution actions Module 5 AISDP evidence Deployment Ledger Entries AISDP module(s): Module 10 (Record-Keeping), Module 12 (Post-Market Monitoring) Regulatory basis: Article 12 The deployment ledger entries are the materialised output of the deployment ledger described above. Each entry records a single deployment event: the before and after system state, the authoriser, the evidence reviewed, and the timestamp. The collection of entries forms the system's deployment history. For AISDP Module 10, the most recent deployment entries demonstrate the current system state and the governance that produced it. For Module 12, the complete deployment history provides the change log that tracks the system's evolution. Inspectors and notified bodies may request deployment ledger entries for specific time periods to understand what changed, when, and under whose authority. The entries must be immutable once created. Any correction or amendment to a deployment record is itself a new record referencing the original, not an in-place modification. This immutability ensures that the deployment history is a reliable audit trail, not a revisable narrative. Key outputs Immutable deployment ledger entries per deployment event Before/after state, authoriser, evidence, and timestamp per entry Complete deployment history accessible by time period Module 10 and Module 12 AISDP evidence --- ## Version Control URL: https://docs.standardintelligence.com/version-control Breadcrumb: Development › Version Control (S.6) Last updated: 28 Feb 2026 Version control for high-risk AI systems extends well beyond conventional source code management. The composite versioning scheme assigns each release an identifier combining code SHA, dataset hash, model version, configuration hash, and prompt version. Code version control enforces branch protection and mandatory review with CODEOWNERS for fairness-critical paths. Data version control applies DVC, Delta Lake, or LakeFS to ensure every dataset version is immutable and retrievable for ten years. The model registry tracks each model from experimental through staging to production with full metadata, lineage, and access controls. Configuration and prompt versioning treats decision thresholds, business rules, and LLM system instructions as first-class versioned artefacts. Substantial modification detection implements a cumulative baseline tracking framework that triggers re-assessment under the Act when changes cross defined thresholds. Service dependency management maps microservice interactions and enforces contract tests. Traceability links technical artefacts to business outcomes. The section concludes with the artefacts produced. ℹ This section corresponds to the Version Control section and feeds primarily into AISDP Module 2 (Development Process) and Module 10 (Record-Keeping). --- # Security --- ## Additional Threat-Specific Testing URL: https://docs.standardintelligence.com/additional-threat-specific-testing Breadcrumb: Security › Testing Programme › Additional Threat-Specific Testing Last updated: 28 Feb 2026 Output Validation Testing AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 For systems where model outputs are consumed by downstream components (web interfaces, databases, APIs, workflow engines), the testing programme verifies that no model output can trigger a secondary vulnerability. Test cases include generating outputs containing SQL injection payloads, cross-site scripting vectors, command injection strings, and malformed data structures, then verifying that the output validation layer neutralises each payload before it reaches the downstream component. The testing should cover every downstream consumption path identified in the threat assessment. Each path requires dedicated test cases because the injection mechanisms and encoding requirements differ: HTML encoding for web rendering, parameterised queries for database consumption, shell escaping for command execution. A test that verifies XSS protection does not also verify SQL injection protection. The Technical SME updates the test suite whenever a new downstream integration is added. The test results are documented as Module 9 evidence. A finding that any model output can reach a downstream component without validation is a critical failure requiring immediate remediation. Key outputs Injection payload testing across all downstream consumption paths Per-path test cases (XSS, SQL injection, command injection, malformed data) Test suite updates on new downstream integration Module 9 AISDP evidence Denial of Service Testing AISDP module(s): Module 9 (Robustness and Cybersecurity), Module 5 (Testing and Validation) Regulatory basis: Article 15 Denial-of-service testing verifies the system's resilience to resource exhaustion attacks. Three test categories are required. Sustained high-volume testing submits requests at rates exceeding the expected peak load by at least 3x, verifying that rate limiting activates correctly and the system maintains service for requests within the limit. Adversarial input testing submits inputs designed to maximise inference time (unusual dimensions, extreme values, pathological structures), verifying that timeouts terminate long-running inferences. Combined testing submits high-volume legitimate requests simultaneously with adversarial inputs, simulating a realistic attack scenario. The pass criteria are that the system maintains the documented latency and throughput targets under load, that rate limiting and timeout enforcement function correctly, and that the system recovers automatically after the attack ceases. Recovery time should be measured and compared against the declared recovery objective. The test configuration (load profile, adversarial input specifications, duration) should be documented so that the test is repeatable. The test results are documented as Module 9 and Module 5 evidence, and fed into the load testing results from for a complete picture of the system's performance under stress. Key outputs Three-category DoS testing (volume, adversarial input, combined) Pass criteria including latency maintenance, control activation, and recovery Repeatable test configuration Module 9 and Module 5 AISDP evidence Plugin/Tool Security Testing AISDP module(s): Module 7 (Human Oversight), Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 14 , Article 15 For systems where the AI component invokes external tools or plugins, the testing programme verifies four properties. The tool allowlist is enforced: attempts to invoke unlisted tools are rejected. Parameter validation prevents the system from passing malicious or out-of-scope parameters to authorised tools. Human approval gates function correctly for high-impact actions. Comprehensive logging captures every tool invocation with its parameters and outcome. Test cases include attempting to invoke disallowed tools through crafted model outputs, passing boundary and malformed parameters to allowed tools, verifying that the human approval workflow cannot be bypassed through rapid sequential requests, and confirming that tool invocation logs are complete and accurate. For agentic systems, this testing is particularly critical because the system's action space directly affects real-world outcomes. The Technical SME conducts plugin/tool security testing after every change to the system's tool integrations or permission model. The test results are documented as Module 7 and Module 9 evidence. A finding that the allowlist can be bypassed or that human approval can be circumvented is a critical failure. Key outputs Allowlist enforcement testing Parameter validation and boundary testing Human approval bypass testing Module 7 and Module 9 AISDP evidence Excessive Agency Testing AISDP module(s): Module 1 (System Identity), Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 Testing verifies that the system's actual capabilities do not exceed its documented intended scope. Three test categories address this. Permission boundary testing attempts to access resources, APIs, or data stores that the system should not be able to reach, confirming that the principle of least privilege is technically enforced. Privilege escalation testing attempts to increase the system's permissions through its own actions. Scope creep testing presents the system with tasks that fall outside its documented intended purpose and verifies that it declines or escalates rather than attempting to fulfil them. For an agentic system designed to manage customer support tickets, scope creep testing might present a request to modify a financial transaction, verifying that the system refuses the action. For agentic systems as described in Article 43, this testing is particularly important. The Technical SME conducts it after every change to the system's tool integrations or permission model. Any finding that the system can access resources or perform actions beyond its documented scope is a critical non-conformity, because it represents a gap between the AISDP's intended purpose declaration and the system's actual capability. Key outputs Permission boundary testing (resource, API, data store access) Privilege escalation testing Scope creep testing with out-of-purpose task presentation Module 1 and Module 9 AISDP evidence --- ## Adversarial ML Test Results URL: https://docs.standardintelligence.com/adversarial-ml-test-results Breadcrumb: Security › Artefacts › Adversarial ML Test Results Last updated: 28 Feb 2026 Adversarial ML Test Results AISDP module(s): Module 9 (Robustness and Cybersecurity), Module 5 (Testing and Validation) Regulatory basis: Article 15 The adversarial ML test results archive contains the structured reports from every adversarial testing execution. Each report documents the testing methodology, the attack types tested, the perturbation budgets used, the success rates achieved, the comparison against declared robustness thresholds, and any findings that exceeded the risk acceptance threshold. The archive is organised chronologically, enabling trend analysis: are adversarial success rates improving or degrading over successive model versions? Are new attack types revealing previously unknown vulnerabilities? The trend data informs the risk register and the threat model update cycle. Remediation records are linked to specific findings, documenting the actions taken (model retraining, architecture changes, control strengthening) and the re-testing results confirming effectiveness. The archive is retained for the ten-year period. Key outputs Chronological archive of adversarial ML test reports Per-report methodology, results, and threshold comparison Linked remediation records with re-testing verification Module 9 and Module 5 AISDP evidence --- ## Adversarial ML Testing Frequency URL: https://docs.standardintelligence.com/adversarial-ml-testing-frequency Breadcrumb: Security › Testing Programme › Adversarial ML Testing › Adversarial ML Testing Frequency Last updated: 28 Feb 2026 Adversarial ML Testing Frequency AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 The Technical SME conducts the full adversarial ML testing suite at least biannually and additionally after any significant model change. Significant changes include model retraining on new data, architecture modifications, changes to the system's input or output format, changes to the guardrails or safety constraints, and changes to the model's deployment context or intended purpose. The biannual cadence ensures that the testing results remain current even for systems that do not undergo frequent changes. Between full testing cycles, the CI pipeline's robustness gate provides continuous verification using a subset of the adversarial testing suite. The robustness gate does not replace the full adversarial ML testing programme; it provides early warning of regressions. The testing frequency, the trigger conditions for additional testing, and the relationship between the full testing programme and the CI pipeline's robustness gate are documented in Module 9. The Technical SME documents all adversarial testing results in structured reports, stored as Module 9 evidence, and fed back into the threat model and risk register . Key outputs Biannual full adversarial ML testing suite Change-triggered additional testing for significant modifications CI pipeline robustness gate providing continuous subset verification Module 9 AISDP documentation --- ## Adversarial ML Testing URL: https://docs.standardintelligence.com/adversarial-ml-testing Breadcrumb: Security › Testing Programme › Adversarial ML Testing Last updated: 28 Feb 2026 Evasion/Adversarial Examples — White-Box & Black-Box Adversarial Testing by Modality Data Poisoning Simulation Prompt Injection Testing (LLM Systems) Model Extraction Testing Membership Inference Testing Adversarial ML Testing Frequency --- ## Adversarial Testing by Modality URL: https://docs.standardintelligence.com/adversarial-testing-by-modality Breadcrumb: Security › Testing Programme › Adversarial ML Testing › Adversarial Testing by Modality Last updated: 28 Feb 2026 Adversarial Testing by Modality AISDP module(s): Module 9 (Robustness and Cybersecurity), Module 5 (Testing and Validation) Regulatory basis: Article 15 Adversarial testing methodologies vary by the model's input modality. For tabular models, the testing protocol perturbs input features at realistic noise levels (±5% on continuous features, random category flips at 1% rate) and records the prediction change rate. This approach reflects real-world attack vectors: an applicant slightly modifying their reported income or a data entry error altering a critical field. For image models, ART generates adversarial images at varying perturbation magnitudes using FGSM, PGD, and C&W methods. The perturbation budget (measured in L2 or L∞ norms) should reflect the threat model 's assessment of realistic attack capabilities. For text models, TextAttack provides character-level (typos, homoglyph substitutions), word-level (synonym replacement), and sentence-level (paraphrase) perturbation attacks. The testing results for each modality are reported with the attack methods used, the perturbation magnitudes tested, the success rates at each magnitude, and the comparison against the declared robustness thresholds. The Technical SME selects the attack methods most relevant to the system's modality and deployment context, documenting the selection rationale in Module 9. Key outputs Modality-specific adversarial testing (tabular, image, text) Perturbation budgets reflecting realistic attack capabilities Per-modality success rate reporting against declared thresholds Module 9 and Module 5 AISDP evidence --- ## AI-Specific Rules URL: https://docs.standardintelligence.com/ai-specific-rules Breadcrumb: Security › DevSecOps Integration (S.8.3) › AI-Specific Rules Last updated: 28 Feb 2026 AI-Specific Rules AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 Standard security scanning tools do not detect AI-specific security risks. AI-specific rules augment the SAST, DAST, and SCA scanning with checks tailored to machine learning systems. These rules overlap with the compliance-focused AI-specific rules described in but are oriented toward security rather than governance. Security-oriented AI-specific rules should flag direct model file loading that bypasses signature verification (complementing the model registry bypass rule in with a security dimension), unencrypted transmission of model artefacts or training data, hardcoded API keys or credentials in ML pipeline code (complementing the secret detection in ), and inference endpoints without authentication or rate limiting . SIEM correlation rules provide the runtime equivalent: a sudden increase in inference API calls from a single consumer may indicate model extraction , a pattern of systematically varied inputs may indicate adversarial probing, and changes to model artefact files outside the CI/CD pipeline indicate unauthorised modification. The AI-specific rules and SIEM correlation rules are documented in Module 9. Key outputs AI-specific security rules in the SAST/DAST pipeline SIEM correlation rules for AI-specific attack patterns Integration with the broader scanning and monitoring framework Module 9 AISDP documentation --- ## AI-Specific Threat Categories URL: https://docs.standardintelligence.com/ai-specific-threat-categories Breadcrumb: Security › Threat Modelling › AI-Specific Threats Last updated: 28 Feb 2026 This section covers the following topics: OWASP LLM01: Prompt Injection OWASP LLM02: Sensitive Info Disclosure OWASP LLM03: Supply Chain OWASP LLM04: Data and Model Poisoning OWASP LLM05: Improper Output Handling Plugin Security (cf. LLM06 Agency) OWASP LLM06: Excessive Agency OWASP LLM09: Misinformation OWASP LLM10: Unbounded Consumption Model Theft Beyond OWASP Top 10 --- ## API Logging & Audit URL: https://docs.standardintelligence.com/api-logging-and-audit Breadcrumb: Security › API Security (S.8.2.2) › API Logging & Audit Last updated: 28 Feb 2026 API Logging & Audit AISDP module(s): Module 10 (Record-Keeping), Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 12 , Article 15 Every inference request and response is logged with sufficient detail for forensic analysis. The log record includes the consumer identity (API key or OAuth subject), the input (or a hash of the input for privacy-sensitive systems), the output, the model version, the inference latency, and any validation or filtering actions taken. These logs serve dual purposes. For compliance, they satisfy Article 12's automatic recording requirement and provide the raw data for the post-market monitoring system ( Module 12 ). For security, they enable forensic analysis of suspected attacks: model extraction attempts leave distinctive patterns in the query logs; prompt injection attempts may be identifiable through anomalous input characteristics. The logs are stored in immutable, append-only storage and retained for the ten-year period. Access to inference logs is restricted to authorised monitoring and audit personnel. Where inference logs are used for model retraining, the data governance controls described in apply to the retraining dataset derived from those logs. Key outputs Comprehensive inference logging (consumer, input/hash, output, version, latency, actions) Immutable storage with ten-year retention Restricted access to authorised personnel Module 10 and Module 9 AISDP evidence --- ## API Security URL: https://docs.standardintelligence.com/api-security Breadcrumb: Security › API Security (S.8.2.2) Last updated: 28 Feb 2026 Authentication — API Keys & Per-Consumer Identity Rate Limiting Input Validation & Sanitisation Output Filtering Inference Timeout Enforcement API Versioning & Deprecation API Logging & Audit --- ## API Versioning & Deprecation URL: https://docs.standardintelligence.com/api-versioning-and-deprecation Breadcrumb: Security › API Security (S.8.2.2) › API Versioning & Deprecation Last updated: 28 Feb 2026 API Versioning & Deprecation AISDP module(s): Module 9 (Robustness and Cybersecurity), Module 3 (Architecture and Design) Regulatory basis: Article 12 When a model is updated, the API version should change to prevent consumers from unknowingly receiving outputs from a different model version. API versioning ensures that consumers can pin to a specific version and receive consistent behaviour until they explicitly migrate to a newer version. Deprecated API versions are retired on a documented schedule, with consumers notified in advance. The notification period should be sufficient for consumers to test and migrate to the new version. For deployers of high-risk AI systems, migration to a new API version may require the deployer to update their own documentation and processes; the deprecation timeline should account for this. The API versioning scheme, the deprecation policy, and the notification process are documented in Module 3 (as part of the system's architectural specification) and Module 9 (as the change may affect security properties). Each API version change is recorded in the deployment ledger. Key outputs API version changes aligned with model updates Documented deprecation policy with consumer notification timeline Deployment ledger entries for API version changes Module 3 and Module 9 AISDP documentation --- ## Applicable Regimes URL: https://docs.standardintelligence.com/applicable-regimes Breadcrumb: Security › Cross-Regulatory Mapping (S.8.1) › Applicable Regimes Last updated: 28 Feb 2026 Regime Determination AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 , NIS2 , CRA , DORA The first step in the cross-regulatory mapping is determining which cybersecurity regimes apply to the specific AI system. The AI Act applies to all high-risk AI systems. NIS2 applies if the deploying entity is an essential or important entity under Directive (EU) 2022/2555 (covering sectors including energy, transport, health, digital infrastructure, public administration, and ICT service management). The CRA applies if the system is a product with digital elements placed on the EU market. DORA applies if the deploying entity is a financial entity under Regulation (EU) 2022/2554. The regime determination is produced by the AI System Assessor during Phase 2 ( Risk Assessment ) and reviewed by the Legal and Regulatory Advisor. For each regime, the determination records whether it applies, the basis for the determination (entity classification, product classification, sector), and any borderline cases with the reasoning for the conclusion. Where a determination is borderline, treating the system as within scope is the safer position. The regime determination shapes the entire Module 9 structure: which cross-regulatory mapping tables are needed, whether the CRA deemed compliance pathway applies, which incident reporting streams are required, and which third-party risk management requirements must be satisfied. Key outputs Per-regime applicability determination (AI Act, NIS2, CRA, DORA) Basis and reasoning for each determination Borderline case documentation Module 9 AISDP documentation --- ## Art. 73(9) Simplification for NIS2/DORA Entities URL: https://docs.standardintelligence.com/art-739-simplification-for-nis2dora-entities Breadcrumb: Security › Incident Response › Integrated Plan › Art. 73(9) Simplification for NIS2/DORA Entities Last updated: 28 Feb 2026 Art. 73(9) Simplification for NIS2/DORA Entities AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 73(9) Article 73(9) provides a simplification for sectors where equivalent reporting obligations exist. Entities subject to NIS2 , DORA , or sector-specific legislation with equivalent reporting requirements are limited to reporting fundamental rights infringements under Article 3(49)(c) through the AI Act. Other serious incident categories are reported through the sector-specific regime. This reduces the AI Act reporting burden without eliminating it. Fundamental rights infringements, including systematic discrimination in credit decisions or denial of essential services based on protected characteristics, remain reportable under Article 73 even when the same incident is already reported under NIS2 or DORA. The triage process must assess the fundamental rights dimension separately. Whether the simplification applies in the entity's jurisdiction depends on whether the relevant member state's transposition of NIS2 (or the DORA implementation) covers the incident categories that Article 73 would otherwise require. The Legal and Regulatory Advisor documents the determination, including the specific sector-specific reporting obligations relied upon, and records it in Module 9. Key outputs Article 73(9) applicability determination by Legal and Regulatory Advisor Fundamental rights reporting preserved regardless of simplification Documentation of sector-specific obligations relied upon Module 9 AISDP documentation --- ## Attack Surface Identification URL: https://docs.standardintelligence.com/attack-surface-identification Breadcrumb: Security › Threat Modelling › Attack Surfaces & Actors › Attack Surface Identification Last updated: 28 Feb 2026 Attack Surfaces (Eight Categories) AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 The first stage of threat modelling scopes the system's attack surface: every point where external input enters the system and every point where the system produces output that affects decisions. Eight categories are identified of attack surface relevant to high-risk AI systems. Data ingestion APIs accept raw data from external sources and are vulnerable to data poisoning , schema manipulation, and injection attacks. Model serving endpoints accept inference requests and are vulnerable to adversarial inputs, model extraction , and denial of service. Operator interfaces present the human oversight layer and are vulnerable to session hijacking, privilege escalation, and interface manipulation. Administrative endpoints provide system management and are vulnerable to unauthorised access and configuration tampering. Inter-service communication channels carry data between microservices and are vulnerable to man-in-the-middle attacks and data interception. Training pipelines process data into model artefacts and are vulnerable to data poisoning and code injection. Configuration stores hold thresholds, feature flags, and business rules, and are vulnerable to unauthorised modification. External integrations connect to third-party APIs, model providers, and data enrichment services, and are vulnerable to supply chain attacks and data exfiltration. Each attack surface point is assessed against the combined STRIDE + ATLAS threat taxonomy, with identified threats scored and documented in the threat model. Key outputs Eight-category attack surface inventory Per-surface threat enumeration using STRIDE + ATLAS Risk scoring per identified threat Module 9 AISDP documentation --- ## Authentication & Access Control URL: https://docs.standardintelligence.com/authentication-and-access-control Breadcrumb: Security › Cybersecurity Foundations › Authentication & Access Control Last updated: 28 Feb 2026 MFA, RBAC & Service-to-Service mTLS AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 Multi-factor authentication (MFA) should be mandatory for all operator accounts and all administrative access to the AI system's infrastructure. This covers the human oversight interface, the model registry , the data pipeline administration, the monitoring dashboards, and any other system that a human accesses. MFA prevents credential compromise from granting immediate access. Role-based access control (RBAC) enforces the principle of least privilege. Each user and service account is assigned only the permissions required for their current function. The RBAC model should distinguish between roles with different access needs: operators (read access to inference outputs and explanations), data engineers (read/write access to data pipelines), model developers (read/write access to training code and experiment tracking), the AI Governance Lead (read access to all modules, approval authority for deployments), and administrators (infrastructure management). Service-to-service communication uses mutual TLS (mTLS) or equivalent, ensuring that both the client and server authenticate each other. Combined with the identity-based access described above, mTLS prevents man-in-the-middle attacks and ensures that only authorised services communicate with each other. The access control configuration is documented in Module 9 and subject to quarterly review. Key outputs MFA enforcement for all human access RBAC model with role definitions and permission matrices mTLS for all service-to-service communication Module 9 AISDP documentation Model Artefact, Training Data & Config Access AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 Access to model artefacts, training data, and system configuration requires specific controls beyond the general RBAC framework. These three artefact categories are uniquely sensitive: model artefacts are valuable intellectual property and a potential attack vector; training data may contain personal information and is the primary target for data poisoning ; configuration changes can alter the system's behaviour as materially as code or model changes. The security team restricts access to model artefacts to the CI/CD pipeline service account (for deployment), the model registry administrators (for maintenance), and named personnel with a documented business need (for investigation or audit). Training data access is restricted to authorised data engineers and model developers, with every access event (read, write, delete) logged in an immutable audit trail. Configuration access follows the same governance as code changes: modifications require pull request review and approval. Access to all three artefact categories is auditable. The audit logs record who accessed what, when, from where, and for what purpose. The logs are immutable and retained for the ten-year period. Penetration testing should specifically test whether unauthorised access to model artefacts, training data, or configuration is possible through any path. Key outputs Access controls specific to model artefacts, training data, and configuration Immutable audit logging for all access events Quarterly access reviews Module 9 AISDP documentation --- ## Authentication — API Keys & Per-Consumer Identity URL: https://docs.standardintelligence.com/authentication-api-keys-and-per-consumer-identity Breadcrumb: Security › API Security (S.8.2.2) › Authentication — API Keys & Per-Consumer Identity Last updated: 28 Feb 2026 Authentication — API Keys & Per-Consumer Identity AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 Every inference endpoint should require authentication, even for internal consumers. API keys or OAuth tokens identify each consumer, enabling per-consumer rate limiting , usage tracking, and forensic attribution. Without per-consumer identity, the organisation cannot distinguish legitimate high-volume usage from model extraction attacks, and it cannot attribute anomalous query patterns to specific consumers. The authentication mechanism should support revocation (disabling a compromised key without affecting other consumers), rotation (periodically replacing keys), and granular permissions (different consumers may have different rate limits or access to different model versions). For external consumers (deployers, integrators), the authentication credentials are provisioned through a governed onboarding process and documented in the deployer agreement. Per-consumer identity also supports the contractual controls against model extraction: if a deployer is suspected of systematic querying for extraction purposes, the organisation can review that consumer's query history. The authentication configuration, the per-consumer rate limits, and the credential management process are documented in Module 9. Key outputs Mandatory authentication on all inference endpoints Per-consumer identity with revocation and rotation support Governed credential provisioning for external consumers Module 9 AISDP documentation --- ## Beyond OWASP Top 10 URL: https://docs.standardintelligence.com/beyond-owasp-top-10 Breadcrumb: Security › Threat Modelling › AI-Specific Threats › Beyond OWASP Top 10 Last updated: 28 Feb 2026 Adversarial Examples — Attack Vectors & Controls (Adversarial Training, Input Validation, Ensemble Methods) AISDP module(s): Module 5 (Testing and Validation), Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 Adversarial examples are inputs crafted with imperceptible perturbations that cause the model to produce incorrect outputs with high confidence. This threat affects image classification, speech recognition, and other perceptual AI systems, as well as tabular data models through feature manipulation. A loan applicant who slightly modifies their reported income to cross a decision boundary is executing a real-world adversarial attack. Controls include adversarial training, which incorporates adversarial examples in the training data to improve the model's robustness to perturbations. Input validation detects out-of-distribution inputs that may indicate adversarial manipulation, flagging inputs whose feature distributions fall outside the training data's range. Ensemble methods, which aggregate predictions from multiple models, are more robust to adversarial perturbations than single models because an adversarial example crafted for one model is unlikely to fool all models in the ensemble. Regular adversarial testing as part of the CI pipeline robustness gate provides ongoing verification. The adversarial robustness evaluation methodology, the attack types tested against (FGSM, PGD, C&W for neural networks; feature perturbation for tabular models), the model's measured robustness, and the residual risk for attack types where full robustness cannot be achieved are documented in Module 9. Module 5 should include adversarial robustness metrics alongside standard accuracy metrics. Key outputs Adversarial training integration (where applicable) Input validation for out-of-distribution detection CI pipeline adversarial testing (robustness gate integration) Module 5 and Module 9 AISDP evidence Model Inversion — Controls (Output Granularity Restriction, Differential Privacy, Probing Monitoring) AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15, GDPR Model inversion attacks use the model's outputs, including confidence scores and probability distributions, to reconstruct information about the training data. In a classification model, inversion can recover representative examples of each class. For models trained on personal data, this can expose sensitive information about individuals in the training set. Restricting the granularity of output information is the most effective countermeasure. Returning only the top prediction or a coarsened confidence band, rather than full probability distributions, reduces the information available to an attacker. For classification systems, returning a binary decision (approve/reject) with a broad confidence category (high/medium/low) rather than a precise probability score significantly limits the inversion attack surface. Differential privacy during training provides a formal guarantee that the model's outputs do not reveal disproportionate information about any individual training record. Monitoring output patterns for signs of systematic probing, where a consumer submits inputs designed to explore the model's decision boundary, supports early detection. Module 9 captures the model inversion threat, the output granularity restrictions in place, and any differential privacy parameters applied during training. Key outputs Output granularity restriction policy (coarsened confidence bands) Differential privacy parameters (if applied) Probing pattern monitoring and alerting Module 9 AISDP documentation Federated/Distributed Training Risks (Poisoned Gradients, Data Inference, Aggregation Manipulation) AISDP module(s): Module 5 (Testing and Validation), Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 Organisations using federated learning or distributed training across multiple data holders face threats specific to these architectures. Malicious participants can submit poisoned gradient updates that corrupt the global model, infer information about other participants' data from gradient exchanges, or exploit the aggregation protocol to manipulate the training outcome. Controls include secure aggregation protocols that prevent the central coordinator from seeing individual gradient updates, differential privacy applied to gradient updates to limit information leakage from any single participant, Byzantine-robust aggregation methods that detect and exclude anomalous gradient updates, and participant authentication and access controls ensuring that only authorised parties contribute to the training process. Audit trails must record every gradient exchange and aggregation step to support forensic investigation. Organisations using federated learning should document the architecture, the security controls, the trust model (which participants are trusted, what verification mechanisms are in place), and the residual risks in both Module 5 (Architecture) and Module 9 (Cybersecurity). If the system does not use federated or distributed training, this threat category is documented as not applicable in the threat model . Key outputs Federated/distributed training architecture documentation (if applicable) Secure aggregation, differential privacy, and Byzantine-robust aggregation controls Participant authentication and audit trail implementation Module 5 and Module 9 AISDP evidence --- ## Consolidated Mapping URL: https://docs.standardintelligence.com/consolidated-mapping Breadcrumb: Security › Cross-Regulatory Mapping (S.8.1) › Consolidated Mapping Last updated: 28 Feb 2026 Mapping Table AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 , NIS2 , CRA , DORA The consolidated mapping table maps seven cybersecurity domains across all applicable regimes, identifying where a single implementation satisfies multiple requirements and where regime-specific work is needed. The seven domains are risk management, incident reporting, vulnerability management , supply chain security , penetration testing , security monitoring, and business continuity. For each domain, the table records the AI Act requirement and Article reference, the NIS2 requirement and Article reference (if applicable), the CRA requirement and Article reference (if applicable), the DORA requirement and Article reference (if applicable), and the integration approach. The integration approach states whether a single control satisfies all applicable regimes, or whether regime-specific extensions are needed. For example, risk management can use whichever framework is broadest (NIS2 or DORA) as the baseline, extending with AI-specific threat categor ies. Incident reporting requires parallel streams because authorities, timelines, and content differ. The AI System Assessor produces a system-specific version of this mapping, tailored to the regimes that apply to the specific system. Module 9 holds the system-specific mapping; the Legal and Regulatory Advisor reviews it. Key outputs Seven-domain cross-regulatory mapping table Per-domain integration approach (single control or regime-specific extensions) System-specific tailoring by the AI System Assessor Module 9 AISDP documentation --- ## CRA Deemed Compliance Pathway URL: https://docs.standardintelligence.com/cra-deemed-compliance-pathway Breadcrumb: Security › Cross-Regulatory Mapping (S.8.1) › CRA Deemed Compliance Pathway Last updated: 28 Feb 2026 CRA Scope & Product Classification AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: CRA Article 12 , AI Act Article 15 The CRA scope determination addresses whether the AI system qualifies as a product with digital elements. Standalone software installed on the deployer's infrastructure qualifies. An AI system embedded in a physical product (medical device, industrial control system, autonomous vehicle component) qualifies through the product. A purely cloud-hosted SaaS system, consumed entirely via API, may fall outside scope; Commission interpretation of the SaaS boundary is still evolving as of early 2026. Products within scope are classified as default, important (Class I or Class II), or critical. Default products use self-assessment for CRA conformity. Important products require EU-type examination or production quality assurance modules. Critical products require European cybersecurity certification. High-risk AI systems in critical infrastructure, healthcare, or industrial control may qualify as important or critical under the CRA, creating a conformity assessment interaction with the AI Act's Annex VI internal assessment. Module 9 records the CRA scope determination (including the system's delivery model and the reasoning for or against CRA applicability), the product classification, the resulting CRA conformity assessment route, and the coordination plan with the AI Act conformity assessment. If the determination is borderline, treating the system as within scope is the safer position. Key outputs CRA scope determination with delivery model analysis Product classification (default, important, critical) CRA conformity assessment route identification Module 9 AISDP documentation CRA Condition — AI-Specific Threat Coverage AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: CRA Article 12, CRA Recital 51 CRA Article 12 provides that high-risk AI systems which are also products with digital elements and which comply with the CRA's essential cybersecurity requirements ( Annex I , Parts I and II) shall be deemed to comply with the cybersecurity requirements of AI Act Article 15. This deemed compliance covers cybersecurity specifically; accuracy and robustness under Article 15 remain independently governed by the AI Act. Crucially, CRA Recital 51 requires that the CRA conformity assessment also consider AI-specific attack vectors, including adversarial attacks and training data poisoning . A CRA assessment that evaluates network security , update mechanisms, and vulnerability handling, without evaluating the AI system's resilience to adversarial inputs, data poisoning, or model extraction , does not fully satisfy the condition. Verification that the CRA assessment scope explicitly includes the AI-specific threat categor ies from the threat model is the Conformity Assessment Coordinator 's responsibility. The categories include adversarial examples, data poisoning, prompt injection , model extraction, membership inference , and information disclosure. If the CRA assessment does not cover these categories, deemed compliance cannot be relied upon for them, and Module 9 must address them independently. The Conformity Assessment Coordinator documents which AI-specific threats are covered by the CRA assessment and which require independent Module 9 treatment. This analysis forms part of the two-layer Module 9 structure described above. Key outputs Verification of CRA assessment coverage of AI-specific threat categories Documentation of covered and uncovered categories Deemed compliance determination per threat category Module 9 AISDP documentation Module 9 Two-Layer Structure AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: CRA Article 12, AI Act Article 15 When the CRA deemed compliance pathway applies, the AI System Assessor structures Module 9 in two layers. The first layer references the CRA conformity assessment and identifies which Article 15 sub-requirements are satisfied by the CRA evidence. Traditional cybersecurity work (network security, encryption, access control, vulnerability management ) that is already covered by CRA conformity evidence does not need to be duplicated. The second layer documents the AI-specific cybersecurity measures that extend beyond the CRA's scope: adversarial ML testing , data poisoning controls, model-specific threat modelling, prompt injection defences, model extraction protections, and the other AI-native threats covered in the threat modelling section. Both the AI Act competent authority and the CRA notified body (if applicable) can then see clearly which requirements are addressed by which evidence, without duplication or ambiguity. This structure reduces documentation effort whilst maintaining full compliance coverage. If the CRA does not apply (per the scope determination above), Module 9 uses a single-layer structure covering all cybersecurity requirements independently. Key outputs Two-layer Module 9 structure (CRA cross-reference layer, AI-specific layer) Clear mapping of requirements to evidence sources Elimination of duplication between CRA and AI Act evidence Module 9 AISDP documentation --- ## CRA Scope Determination & Product Classification URL: https://docs.standardintelligence.com/cra-scope-determination-and-product-classification Breadcrumb: Security › Artefacts › CRA Scope Determination & Product Classification Last updated: 28 Feb 2026 CRA Scope Determination & Product Classification AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: CRA Article 12 , AI Act Article 15 The CRA scope determination and product classification is retained as a standalone Module 9 artefact. The artefact documents the system's delivery model, the reasoning for CRA applicability (or non-applicability), the product classification (default, important, critical), the CRA conformity assessment route, and the deemed compliance analysis. If the CRA applies, the artefact also documents the coordination plan between the CRA conformity assessment and the AI Act conformity assessment. If the CRA does not apply, the artefact documents the reasoning for exclusion, ensuring that the determination can be defended if challenged. The artefact is reviewed whenever the system's delivery model changes or when the Commission issues guidance that affects the scope determination. If the system is not a product with digital elements, the artefact documents the non-applicability determination. Key outputs CRA scope determination and product classification document Deemed compliance analysis (covered and uncovered threat categories) Coordination plan for dual conformity assessment (where applicable) Module 9 AISDP evidence --- ## Cross-Regulatory Mapping Tables URL: https://docs.standardintelligence.com/cross-regulatory-mapping-tables Breadcrumb: Security › Artefacts › Cross-Regulatory Mapping Tables Last updated: 28 Feb 2026 Cross-Regulatory Mapping Tables AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 , NIS2 , CRA , DORA The cross-regulatory mapping tables are retained as a standalone Module 9 artefact. The system-specific mapping identifies which regimes apply, maps requirements across seven cybersecurity domains, documents the integration approach for each domain, and records the Legal and Regulatory Advisor's review and approval. The mapping tables are updated when the regulatory landscape changes (new implementing acts, Commission guidance, member state transposition updates), when the system's deployment context changes (new deployment in a different member state or sector), or when the regime determination is revised. Each update is version-controlled with the update rationale documented. The mapping tables enable efficient preparation for inspections or audits from different regulatory perspectives. An AI Act market surveillance authority, a NIS2 auditor, a CRA notified body , and a DORA financial supervisor may all examine the same system's cybersecurity controls; the mapping tables show each authority which controls address their specific requirements. Key outputs System-specific cross-regulatory mapping (seven domains, all applicable regimes) Version-controlled with update rationale Multi-authority inspection readiness Module 9 AISDP evidence --- ## Cross-Regulatory Mapping URL: https://docs.standardintelligence.com/cross-regulatory-mapping Breadcrumb: Security › Cross-Regulatory Mapping (S.8.1) Last updated: 28 Feb 2026 This section covers the following topics: Applicable Regimes Consolidated Mapping CRA Deemed Compliance Pathway NIS2 Interaction DORA Interaction Emerging Interactions --- ## Cybersecurity Foundations URL: https://docs.standardintelligence.com/cybersecurity-foundations Breadcrumb: Security › Cybersecurity Foundations (S.8.2) Last updated: 28 Feb 2026 Cybersecurity foundations establish the baseline security posture for the AI system's infrastructure and operations. Network security covers dedicated VPC segmentation, ingress/egress restriction with WAF, and DDoS protection. Zero trust architecture implements identity-based access with SPIFFE/SPIRE, microsegmentation, and continuous verification. Authentication and access control enforces MFA, RBAC, and service-to-service mTLS alongside granular access controls for model artefacts, training data, and configuration. Encryption applies AES-256 at rest and TLS 1.3 in transit with key management through HSM or cloud KMS. Vulnerability management maintains a centralised register with severity-based SLAs. Patch management defines patching cadences and emergency procedures. ℹ This section corresponds to the Cybersecurity Foundations section and feeds primarily into AISDP Module 9 (Robustness and Cybersecurity). --- ## Cybersecurity Testing Programme URL: https://docs.standardintelligence.com/cybersecurity-testing-programme Breadcrumb: Security › Cybersecurity Testing Programme (S.8.5) Last updated: 28 Feb 2026 The cybersecurity testing programme validates the security controls documented throughout the AISDP . Penetration testing requires annual independent assessments with severity-based remediation SLAs. Vulnerability scanning implements continuous automated scanning across four layers with a centralised vulnerability management register. Adversarial ML testing addresses AI-specific attack vectors including evasion, data poisoning , model extraction , membership inference , prompt injection , and supply chain attacks. Additional threat-specific testing covers OWASP LLM categories not already addressed by adversarial ML testing. Red team exercises simulate multi-stage attack scenarios against the complete system. Test result mapping links findings to specific AISDP controls and regulatory requirements. ℹ This section corresponds to the Cybersecurity Testing section and feeds primarily into AISDP Module 9 (Robustness and Cybersecurity). --- ## DAST URL: https://docs.standardintelligence.com/dast Breadcrumb: Security › DevSecOps Integration (S.8.3) › DAST Last updated: 28 Feb 2026 DAST AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 Dynamic Application Security Testing (DAST) tests running application instances for vulnerabilities by sending requests and analysing responses. Unlike SAST, which examines source code, DAST exercises the deployed application and can detect vulnerabilities that arise from configuration, deployment, or runtime behaviour. DAST scans should cover all internet-facing endpoints (inference API, human oversight interface) and internal endpoints (administrative interfaces, inter-service APIs). OWASP ZAP provides open-source DAST capability. The DAST scan is integrated into the deployment pipeline, running against the staging environment before production deployment. DAST findings are prioritised by CVSS score and tracked in the vulnerability management register with the same remediation SLAs as other vulnerability findings. The scan configuration should be tuned to the AI system's specific endpoints and traffic patterns to minimise false positives. DAST results are retained as Module 9 evidence. Key outputs DAST scanning of all internet-facing and internal endpoints CI/CD integration running against the staging environment Findings tracked in the vulnerability management register Module 9 AISDP evidence --- ## Data Poisoning Simulation URL: https://docs.standardintelligence.com/data-poisoning-simulation Breadcrumb: Security › Testing Programme › Adversarial ML Testing › Data Poisoning Simulation Last updated: 28 Feb 2026 Data Poisoning Simulation AISDP module(s): Module 4 (Data Governance), Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 10 , Article 15 Data poisoning simulation tests the model's resilience to corrupted training data. ART's poisoning modules provide simulation capabilities. The test inserts known poisoned records into a copy of the training dataset, retrains the model, and evaluates whether the poisoned model's behaviour deviates from the clean model's behaviour on the poisoned trigger inputs and on legitimate inputs. The simulation should test at multiple poisoning rates (for example, 0.1%, 0.5%, 1%, and 5% of the training data) to determine the minimum poisoning rate that produces a detectable effect. This threshold informs the data integrity monitoring sensitivity: the anomaly detection on the data pipeline (Great Expectations, Evidently AI) must be configured to detect modifications at or below this rate. The simulation also validates the effectiveness of the data integrity controls described above. If the poisoned records should have been caught by the anomaly detection, but were not, the detection configuration needs tuning. The simulation results are documented as Module 9 evidence and fed into the risk register . Key outputs Poisoning simulation at multiple rates using ART Minimum detectable poisoning rate determination Data integrity control effectiveness validation Module 4 and Module 9 AISDP evidence --- ## Data Security in ML Pipelines URL: https://docs.standardintelligence.com/data-security-in-ml-pipelines Breadcrumb: Security › Data Security in ML Pipelines (S.8.2.3) Last updated: 28 Feb 2026 Training Data Security Feature Store Security Model Artefact Security Inference Log Security Vector Database Security — Write/Read Separation Vector Database Security — Adversarial Document Injection Vector Database Security — Bulk Extraction Monitoring --- ## Dependency Management URL: https://docs.standardintelligence.com/dependency-management Breadcrumb: Security › Supply Chain Security › Dependency Management Last updated: 28 Feb 2026 Version Pinning & Private Repositories AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 Every dependency is pinned to an exact version in a lock file: Poetry lock, pip freeze, or npm shrinkwrap. Version pinning prevents automatic upgrades to compromised versions, ensuring that the deployed system contains exactly the dependencies that were tested and validated. Unpinned or loosely pinned dependencies introduce non-determinism and a supply chain attack vector. Dependencies are fetched from a private repository (JFrog Artifactory, Sonatype Nexus, or equivalent) that caches approved packages, not directly from public registries. The private repository provides two benefits: it acts as a curated supply, preventing typosquatting and dependency confusion attacks; and it ensures that packages remain available even if the public registry experiences outages or removes packages. All cached packages are scanned for known vulnerabilities (Snyk, Trivy), and packages with critical vulnerabilities are rejected. For model artefacts sourced externally (pre-trained models from Hugging Face or similar), the revision parameter pins to a specific Git commit SHA. The engineering team computes and records the SHA-256 content hash at download and verifies it before use. The version pinning policy, private repository configuration, and model artefact verification process are documented in Module 9. Key outputs Exact version pinning in lock files for all dependencies Private repository caching with vulnerability scanning Model artefact pinning to commit SHA with hash verification Module 9 AISDP documentation Signature Verification & Continuous Vulnerability Scanning AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 Sigstore cosign provides cryptographic signing for both container images and model artefacts. The provider signs the artefact at build time; the consumer verifies the signature before deployment or use. Any artefact that fails signature verification is rejected by the pipeline, and the event triggers a security alert. This prevents deployment of tampered artefacts, whether the tampering occurred in transit, in storage, or through a compromised build process. Continuous vulnerability scanning extends beyond the CI pipeline to monitor the deployed system's actual dependency tree. Snyk Monitor tracks the resolved versions in the production container against continuously updated vulnerability databases. A vulnerability disclosed after deployment triggers an alert within hours, closing the gap between disclosure and detection. Automated dependency monitoring tools (Dependabot, Renovate) watch the dependency manifest and can automatically open pull requests to update vulnerable dependencies. For critical vulnerabilities (CVSS 9.0+), the remediation SLA is 24–72 hours. The automated PR from Dependabot or Renovate accelerates the response by eliminating the manual step of identifying the vulnerable package and preparing the update. The pipeline's validation gates run on the update before it reaches production, ensuring that the fix does not introduce regressions. Key outputs Sigstore cosign signing and verification for images and model artefacts Continuous vulnerability monitoring of deployed dependencies (Snyk Monitor) Automated update PRs (Dependabot, Renovate) for vulnerable dependencies Module 9 AISDP evidence --- ## Detection & Triage — Cross-Regime Assessment URL: https://docs.standardintelligence.com/detection-and-triage-cross-regime-assessment Breadcrumb: Security › Incident Response › Integrated Plan › Detection & Triage — Cross-Regime Assessment Last updated: 28 Feb 2026 Detection & Triage — Cross-Regime Assessment AISDP module(s): Module 9 (Robustness and Cybersecurity), Module 12 ( Post-Market Monitoring ) Regulatory basis: Article 73 , NIS2 , DORA , CRA At the triage stage of any cybersecurity incident, a multi-regime decision tree is activated. Four questions are answered in sequence. Does the incident meet DORA's major ICT-related incident criteria? If yes, the four-hour clock starts. Does it meet NIS2's significant incident criteria? If yes, the 24-hour clock starts. Does it involve an actively exploited vulnerability in a CRA-scoped product? If yes, the CRA 24-hour clock starts. Does it meet Article 3(49)'s serious incident definition? If yes, the applicable Article 73 clock starts (2, 10, or 15 days depending on severity). The triage process classifies the incident across four dimensions: the model-related, data-related, infrastructure-related, or human-oversight-related nature of the incident (determining the response team composition); the severity (determining the escalation path); the affected regime(s) (determining the reporting obligations); and the fundamental rights dimension (determining whether Article 73 applies even when other regime reporting covers the event). This multi-dimensional triage should be rehearsed through tabletop exercises at least annually. A shared incident fact sheet is prepared immediately, containing fields common to all regimes. Regime-specific annexes are attached as each reporting deadline approaches. The incident management platform should support tagging incidents with applicable regimes and tracking each regime's deadline independently. Key outputs Multi-regime decision tree activated at triage Four-question sequential regime assessment Shared fact sheet with regime-specific annexes Module 9 and Module 12 AISDP documentation --- ## DevSecOps Integration URL: https://docs.standardintelligence.com/devsecops-integration Breadcrumb: Security › DevSecOps Integration (S.8.3) Last updated: 28 Feb 2026 SAST (Bandit, SonarQube, Semgrep) DAST SCA/Dependency & Container Image Scanning IaC Security Scanning AI-Specific Rules Manual Security Code Review SBOM Generation — CycloneDX/SPDX with ML Components --- ## DORA Interaction URL: https://docs.standardintelligence.com/dora-interaction Breadcrumb: Security › Cross-Regulatory Mapping (S.8.1) › DORA Interaction Last updated: 28 Feb 2026 DORA ICT Risk Extension AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: DORA, Article 15 DORA's requirements overlap substantially with the AI Act's cybersecurity requirements, covering ICT risk management (Article 6), incident classification and reporting (Articles 17–19), digital operational resilience testing (Articles 24–27), and third-party risk management (Articles 28–30). The practical challenge is satisfying both regimes through integrated controls. The AI system's risk management satisfies DORA for the AI-specific components; broader ICT risk management covering non-AI systems is separate. Incident classification can use a unified severity taxonomy, though classification criteria differ between DORA (major ICT-related incident under Article 18 ) and the AI Act (serious incident under Article 3(49) ). DORA's testing programme can incorporate AI-specific testing ( adversarial ML , data poisoning ), with the combined scope documented. For TLPT under Article 26 (significant financial entities), a single testing exercise can serve both regimes if the scope explicitly includes AI-specific attack scenarios. DORA's third-party risk management requirements are more prescriptive and should be used as the baseline, extended with AI-specific controls. If the system is not subject to DORA, this article is documented as not applicable. Key outputs DORA–AI Act requirement mapping across five domains Integrated control approach with regime-specific extensions TLPT scope alignment (where applicable) Module 9 AISDP documentation --- ## DORA Third-Party Register & Risk Assessments URL: https://docs.standardintelligence.com/dora-third-party-register-and-risk-assessments Breadcrumb: Security › Artefacts › DORA Third-Party Register & Risk Assessments Last updated: 28 Feb 2026 DORA Third-Party Register & Risk Assessments AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: DORA Articles 28–30 The DORA third-party register is retained as a standalone Module 9 artefact for financial entities. Each entry contains the provider identity, service description, criticality classification, risk assessment outcome, contractual provisions summary, concentration risk assessment, and monitoring status. The associated vendor risk assessments are retained alongside the register. Each assessment documents the provider's security certifications, data handling commitments, financial stability evaluation, business continuity capabilities, and the assessment date. Assessments are reviewed annually and re-conducted when the provider's service scope or security posture changes. If the system is not subject to DORA, this artefact is documented as not applicable. For non-DORA entities, the equivalent artefact is a simplified third-party register satisfying Annex IV 's component documentation requirements without DORA's prescriptive contractual and risk fields. Key outputs DORA third-party register with structured per-entry information Associated vendor risk assessments with annual review DORA Article 28(3) compliance documentation Module 9 AISDP evidence --- ## DORA Third-Party Register URL: https://docs.standardintelligence.com/dora-third-party-register Breadcrumb: Security › Supply Chain Security › DORA Third-Party Register Last updated: 28 Feb 2026 All AI-Related Service Providers AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: DORA Article 28(3), Annex IV Every AI-related service provider should appear in the third-party register. The register covers five provider categories. Foundation model providers supply the core model, whether accessed via API or downloaded for local deployment. Embedding model providers supply the embedding models used in vector search, semantic similarity, or retrieval-augmented generation. Annotation and labelling services provide human labelling for training data. Cloud infrastructure providers host the AI workloads, including compute, storage, networking, and managed services. Managed ML services (SageMaker, Vertex AI, Azure ML) provide platform-level capabilities including training orchestration, model hosting, and experiment tracking. For DORA-scoped entities, the register satisfies DORA Article 28(3)'s requirement for a comprehensive ICT third-party register. For all organisations, it satisfies the Annex IV requirement to document the system's components and third-party dependencies. The register should be maintained as a structured dataset, not embedded in prose, to support querying, reporting, and automated compliance checks. Key outputs Five-category third-party register (foundation models, embeddings, annotation, cloud, managed ML) Structured dataset format for querying and reporting DORA Article 28(3) compliance (where applicable) Module 9 AISDP evidence Per-Entry Contractual & Risk Information AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: DORA Articles 28–30, Annex IV Each entry in the third-party register carries structured risk and contractual information. The risk information includes the vendor risk assessment outcome, the provider's criticality classification (critical, important, or standard), the concentration risk assessment (whether multiple services depend on this provider), and the most recent security posture evaluation. The contractual information includes the contract reference, the contractual provisions covering the six domains described above, the contract renewal date, the exit strategy summary, and the status of any audit rights exercised. For DORA-scoped entities, additional fields capture the DORA-specific contractual requirements: the provider's cooperation obligation with the financial supervisor, the sub-outsourcing notification status, and the provider's participation in resilience testing. The register is reviewed annually. The AI Governance Lead reviews the register to confirm that all current providers are listed, that risk assessments are current, that contractual provisions remain adequate, and that ongoing monitoring arrangements are operational. The review outcome is documented as Module 9 evidence. Key outputs Per-entry risk assessment, criticality classification, and concentration risk Per-entry contractual provisions across six domains DORA-specific fields where applicable Module 9 AISDP evidence --- ## Emerging Interactions URL: https://docs.standardintelligence.com/emerging-interactions Breadcrumb: Security › Cross-Regulatory Mapping (S.8.1) › Emerging Interactions Last updated: 28 Feb 2026 EU Data Act & EHDS Overlays AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: EU Data Act, EHDS Two additional regulatory instruments have cybersecurity relevance for specific system categories. The EU Data Act (Regulation (EU) 2023/2854), applicable from 12 September 2025, governs access to and use of data generated by connected products. AI systems embedded in IoT devices or connected products may face data-sharing obligations that require Module 9's security architecture to accommodate secure data export mechanisms, access-controlled sharing interfaces, and audit logging of shared data. The European Health Data Space (EHDS) Regulation establishes rules for the secondary use of electronic health data, including for AI training and development. Healthcare AI systems ( Annex III , Area 5(a)) using health data must comply with EHDS security requirements for data access environments: data minimisation, access controls, audit trails, and prohibition on re-identification. Healthcare AI systems face a particularly dense regulatory overlay: the AI Act, NIS2 , the CRA (if embedded in a medical device), the Medical Devices Regulation, GDPR , and the EHDS. Module 9 for such systems will be among the most complex. The AI System Assessor monitors the development of both instruments and documents their applicability and any additional security requirements in Module 9. Key outputs EU Data Act applicability assessment for IoT/connected product AI systems EHDS applicability assessment for healthcare AI systems Additional security requirements identified from each instrument Module 9 AISDP documentation --- ## Encryption URL: https://docs.standardintelligence.com/encryption Breadcrumb: Security › Cybersecurity Foundations › Encryption Last updated: 28 Feb 2026 Data at Rest (AES-256) & Data in Transit (TLS 1.3) AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 The engineering team encrypts all data at rest using AES-256 or equivalent, and all data in transit using TLS 1.3. This applies to every data category in the AI system: training data, evaluation data, feature stores, model artefacts, inference logs, monitoring data, configuration data, and the AISDP documentation itself. Encryption at rest protects against storage-level access: a compromised cloud account, a misconfigured storage bucket, or an exfiltrated backup yields only encrypted data without the corresponding encryption keys. Encryption in transit protects against network-level interception: even within the VPC, mTLS ensures that inter-service communication is encrypted. For AI systems processing personal data, encryption is both a cybersecurity control and a GDPR safeguard. The encryption configuration should be documented in Module 9, including the encryption algorithm, the key length, the scope of encryption (which data stores, which communication channels), and any exceptions with their justification. Cloud-native encryption services (AWS KMS, Azure Key Vault, Google Cloud KMS) provide managed encryption with HSM-backed key storage. Key outputs AES-256 encryption at rest for all data stores TLS 1.3 encryption in transit for all communication channels Encryption scope documentation with any exceptions justified Module 9 AISDP documentation Key Management AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 Encryption is only as strong as the key management that underpins it. The security team manages encryption keys through a dedicated key management service (AWS KMS, Azure Key Vault, Google Cloud KMS, or HashiCorp Vault) with rotation policies, access logging, and separation of duties. Key rotation ensures that keys are periodically replaced, limiting the exposure if a key is compromised. The rotation schedule should be documented: annual rotation for master keys, more frequent rotation for data encryption keys if supported by the KMS. Separation of duties ensures that no single individual can both access the encrypted data and manage the encryption keys. Access to key management operations is restricted to named security personnel and logged. For the ten-year retention period, key management must account for credential survivability. Keys used to encrypt archived data must remain available and functional for the full retention period. If a key is rotated, the previous key must be retained (in a disabled state, available only for decryption) until all data encrypted with it has been re-encrypted or has passed its retention expiry. The key management policy and configuration are documented in Module 9. Key outputs Dedicated KMS with documented rotation policies Separation of duties for key management operations Credential survivability planning for the ten-year retention period Module 9 AISDP documentation --- ## Evasion/Adversarial Examples — White-Box & Black-Box URL: https://docs.standardintelligence.com/evasionadversarial-examples-white-box-and-black-box Breadcrumb: Security › Testing Programme › Adversarial ML Testing › Evasion/Adversarial Examples — White-Box & Black-Box Last updated: 28 Feb 2026 Evasion/Adversarial Examples — White-Box & Black-Box AISDP module(s): Module 9 (Robustness and Cybersecurity), Module 5 (Testing and Validation) Regulatory basis: Article 15 Adversarial example testing evaluates the model's susceptibility to input perturbations designed to cause incorrect predictions. IBM's Adversarial Robustness Toolbox (ART) provides the most comprehensive library. Testing should span both white-box attacks (FGSM, PGD, C&W, DeepFool), which use knowledge of the model's gradients to craft minimal perturbations, and black-box attacks, which work without gradient access by querying the model and using the responses to guide perturbation. White-box testing represents a worst-case scenario: an attacker with full knowledge of the model's architecture and parameters. Black-box testing represents a more realistic scenario for externally facing systems: an attacker who can only query the model's API. Both should be included because a model that is robust to black-box attacks but fragile to white-box attacks is vulnerable to any attacker who gains internal access. The test results report the attack success rate at each perturbation magnitude and compare against the robustness thresholds declared in the AISDP. Findings are documented in a structured report and fed back into the threat model and risk register . The robustness gate in the CI pipeline provides ongoing verification using a subset of the adversarial testing suite. Key outputs White-box testing (FGSM, PGD, C&W, DeepFool) using ART Black-box testing without gradient access Attack success rates compared against declared thresholds Module 9 and Module 5 AISDP evidence --- ## Evidence Preservation — No System Alteration Prior to Notification URL: https://docs.standardintelligence.com/evidence-preservation-no-system-alteration-prior-to Breadcrumb: Security › Incident Response › Integrated Plan › Evidence Preservation — No System Alteration Prior to Notification Last updated: 28 Feb 2026 Evidence Preservation — No System Alteration Prior to Notification AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 73(6) Article 73(6) explicitly prohibits altering the AI system in a way that could affect subsequent evaluation of the causes before informing the competent authorities. This requirement means that the evidence preservation procedure must be executed before any remediation action, unless the system is actively causing harm (in which case the break-glass procedure is activated simultaneously with evidence capture). Upon incident detection, an automated snapshot script captures the currently deployed model version from the model registry , the current configuration from the config management system, the inference logs for the incident period from the logging infrastructure, the monitoring metrics for the incident period from the monitoring platform, and the current data pipeline state from the orchestration tool. These snapshots are written to immutable storage (S3 Object Lock, Azure Immutable Blob Storage) immediately. The evidence preservation procedure is tested periodically as part of disaster recovery testing to confirm that it captures all required evidence and that the captured evidence is retrievable and intact. The procedure, the immutable storage configuration, and the test results are documented in Module 9. Key outputs Automated evidence snapshot script triggered on incident detection Immutable storage for captured evidence Execution before any system modification (per Article 73(6)) Module 9 AISDP evidence --- ## Feature Store Security URL: https://docs.standardintelligence.com/feature-store-security Breadcrumb: Security › Data Security in ML Pipelines (S.8.2.3) › Feature Store Security Last updated: 28 Feb 2026 Feature Store Security AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 Feature stores aggregate and serve pre-computed features for model training and inference. They can become single points of compromise: an attacker who can modify feature values can influence model outputs without touching the model itself. A corrupted feature store produces corrupted inputs for every inference request. Feature stores should enforce four controls. Write access controls ensure that only authorised pipeline components can write features. Integrity checks using checksums or cryptographic signatures on feature values detect unauthorised modifications. Versioning ensures that every feature value change is recorded with a timestamp and provenance. Read access controls ensure that only authorised model serving components can read features. The feature store security configuration is documented in Module 9. The integrity checking mechanism should be tested by the security team periodically: introduce a known modification to a feature value and verify that the integrity check detects it. This test confirms that the control is operational, not merely configured. Key outputs Write access controls on feature store ingestion Integrity checks (checksums or cryptographic signatures) Feature versioning with provenance tracking Module 9 AISDP documentation --- ## Fundamental Rights Dimension Assessment URL: https://docs.standardintelligence.com/fundamental-rights-dimension-assessment Breadcrumb: Security › Incident Response › Integrated Plan › Fundamental Rights Dimension Assessment Last updated: 28 Feb 2026 Fundamental Rights Dimension Assessment AISDP module(s): Module 9 (Robustness and Cybersecurity), Module 6 (Risk Management System) Regulatory basis: Article 73 , Article 9 Every incident triage must separately assess the fundamental rights dimension, regardless of whether the incident is also reportable under NIS2 , DORA , or the CRA . Article 73(9) may simplify reporting for entities subject to other regimes, but it explicitly preserves the requirement to report fundamental rights infringements under Article 3(49)(c). The fundamental rights assessment evaluates whether the incident has caused or could cause systematic discrimination in the system's decisions (affecting protected characteristic groups disproportionately), denial of essential services or benefits based on the system's outputs, or harm to an individual's health, safety, or other fundamental right as a direct consequence of the system's operation. A data poisoning attack on a credit scoring model, for example, may constitute both a DORA-reportable ICT incident and an Article 73-reportable fundamental rights infringement if the poisoned model systematically denies credit to a protected group. The fundamental rights dimension is assessed by the Legal and Regulatory Advisor in consultation with the AI Governance Lead , and the assessment is documented in the incident record. Key outputs Mandatory fundamental rights dimension assessment per incident Evaluation of discrimination, denial of services, and individual harm Legal and Regulatory Advisor assessment with documentation Module 9 and Module 6 AISDP evidence --- ## IaC Security Scanning URL: https://docs.standardintelligence.com/iac-security-scanning Breadcrumb: Security › DevSecOps Integration (S.8.3) › IaC Security Scanning Last updated: 28 Feb 2026 IaC Security Scanning AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 Infrastructure-as-code security scanning validates that infrastructure definitions follow security best practices before deployment. Checkov, tfsec, and KICS scan Terraform, Kubernetes manifests, CloudFormation templates, and other IaC definitions for security misconfigurations: open security groups, unencrypted storage, overly permissive IAM policies, missing logging configurations, and non-compliant data residency settings. IaC scanning runs in the CI pipeline on every infrastructure change, catching misconfigurations before they reach production. Reference OPA/Rego policies that enforce compliance-specific constraints such as mandatory tags on AI infrastructure resources and EU data residency enforcement. IaC scanning complements the cloud security posture management (CSPM) described above. CSPM detects configuration drift in deployed infrastructure; IaC scanning prevents misconfigurations from being deployed in the first place. Both layers are necessary for defence in depth. Scan results are retained as Module 9 evidence. Key outputs IaC scanning in the CI pipeline (Checkov, tfsec, KICS) Compliance-specific policy enforcement (OPA/Rego) Complementary to CSPM for deployed infrastructure Module 9 AISDP evidence --- ## Incident Response Plan with Decision Tree URL: https://docs.standardintelligence.com/incident-response-plan-with-decision-tree Breadcrumb: Security › Artefacts › Incident Response Plan with Decision Tree Last updated: 28 Feb 2026 Incident Response Plan with Decision Tree AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 73 , NIS2 , DORA , CRA The incident response plan is a standalone Module 9 artefact containing the complete AI-specific incident response procedure. It includes the AI-specific incident category definitions (model performance degradation, fairness drift, data poisoning , adversarial exploitation, privacy breach, human oversight failure), the detection mechanisms for each category, the severity classification matrix, and the response procedures. The multi-regime decision tree is embedded in the plan, providing the triage team with a step-by-step process for determining which reporting obligations apply. The plan includes the pre-drafted reporting templates, the evidence preservation procedure, the role assignments with named alternates, and the escalation paths. The plan is tested through tabletop exercises annually and live simulation exercises biannually. Exercise results, including identified weaknesses and improvement actions, are appended to the plan's revision history. The plan is retained for the ten-year period. Key outputs Complete AI-specific incident response plan Multi-regime decision tree and pre-drafted templates Annual tabletop and biannual live simulation testing Module 9 AISDP evidence --- ## Incident Response URL: https://docs.standardintelligence.com/incident-response Breadcrumb: Security › Incident Response (S.8.6) Last updated: 28 Feb 2026 Incident response for high-risk AI systems must account for multiple overlapping reporting obligations under the AI Act, GDPR , DORA (Regulation (EU) 2022/2554), NIS2 (Directive (EU) 2022/2555), and the CRA (Regulation (EU) 2024/2847). The integrated incident response plan defines a single workflow that satisfies all applicable regimes, with parallel reporting streams, pre-drafted templates, evidence preservation procedures, and cross-regime severity assessment. The regulator contact register maintains per-jurisdiction authority contacts for rapid notification. ℹ This section corresponds to the Incident Response section and feeds primarily into AISDP Module 9 (Robustness and Cybersecurity). --- ## Inference Log Security URL: https://docs.standardintelligence.com/inference-log-security Breadcrumb: Security › Data Security in ML Pipelines (S.8.2.3) › Inference Log Security Last updated: 28 Feb 2026 Inference Log Security AISDP module(s): Module 9 (Robustness and Cybersecurity), Module 10 (Record-Keeping) Regulatory basis: Article 12 , Article 15 Inference logs contain the system's production inputs and outputs, which may include personal data, commercially sensitive information, or data subject to legal privilege. Access to inference logs is restricted by the security team to authorised monitoring and audit personnel. Logs are encrypted at rest and retained according to the documented retention policy. Where inference logs are used for model retraining (a common practice for continuous improvement), the data governance controls described in apply to the retraining dataset derived from those logs. This means the retraining dataset requires its own data governance assessment, including legal basis evaluation, purpose limitation, and data minimisation. The inference log security configuration, including access controls, encryption settings, and the boundary between log access for monitoring purposes and log access for retraining purposes, is documented in Module 9 and Module 10. Key outputs Access controls restricting inference log access to authorised personnel Encryption at rest for all inference logs Governance boundary between monitoring access and retraining access Module 9 and Module 10 AISDP documentation --- ## Inference Timeout Enforcement URL: https://docs.standardintelligence.com/inference-timeout-enforcement Breadcrumb: Security › API Security (S.8.2.2) › Inference Timeout Enforcement Last updated: 28 Feb 2026 Inference Timeout Enforcement AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 Inference timeout enforcement sets a maximum execution time per request, terminating any request that exceeds it. This prevents a single adversarially crafted input, designed to trigger worst-case computational complexity, from consuming resources indefinitely. The timeout should be set above the p99 latency for legitimate requests and below the threshold where a single request materially impacts other users. The reference nginx configuration uses proxy_read_timeout 30s for the inference endpoint. The appropriate timeout value depends on the model's architecture and typical inference latency. For a model with p99 latency of 500ms, a timeout of 5 seconds provides generous headroom; for a large language model with p99 latency of 10 seconds, the timeout must be higher. Timed-out requests receive a structured error response (HTTP 504 or equivalent) and the timeout event is logged with the request metadata. A high rate of timeouts may indicate an adversarial denial-of-service attempt or a legitimate performance degradation; the monitoring layer should alert on timeout rate anomalies. The timeout configuration is documented in Module 9 and tested as part of the denial-of-service testing described above. Key outputs Inference timeout enforcement above p99, below impact threshold Structured error responses for timed-out requests Timeout rate monitoring and alerting Module 9 AISDP documentation --- ## Input Validation & Sanitisation URL: https://docs.standardintelligence.com/input-validation-and-sanitisation Breadcrumb: Security › API Security (S.8.2.2) › Input Validation & Sanitisation Last updated: 28 Feb 2026 Input Validation & Sanitisation AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 Model endpoints validate inputs against a strict schema before they reach the model. Input dimensions, data types, value ranges, and content length are enforced by the serving infrastructure. Inputs that fail validation are rejected with a structured error response and the rejection is logged. For text inputs, injection pattern detection filters known adversarial patterns, including prompt injection payloads for LLM-based systems. For image inputs, format validation, dimension checks, and anomaly detection on pixel distributions can identify adversarial perturbations. For tabular inputs, range checks and type enforcement prevent out-of-specification values from reaching the model. Request size limits (the nginx client_max_body_size directive or equivalent) prevent oversized adversarial inputs designed to consume excessive memory or processing time. The input validation schema is derived from the model's documented input specification and updated whenever the input format changes. Validation is enforced at the serving infrastructure level, not within the model code, ensuring it cannot be bypassed. Key outputs Strict input schema validation at the serving infrastructure level Injection pattern detection for text inputs Request size limits preventing oversized inputs Module 9 AISDP documentation --- ## Integrated Incident Response Plan URL: https://docs.standardintelligence.com/integrated-incident-response-plan Breadcrumb: Security › Incident Response › Integrated Plan Last updated: 28 Feb 2026 Detection & Triage — Cross-Regime Assessment Fundamental Rights Dimension Assessment Parallel Reporting Streams — DORA Parallel Reporting Streams — NIS2 Parallel Reporting Streams — CRA Parallel Reporting Streams — AI Act Art. 73 Pre-Drafted Reporting Templates Art. 73(9) Simplification for NIS2/DORA Entities Evidence Preservation — No System Alteration Prior to Notification --- ## Manual Security Code Review URL: https://docs.standardintelligence.com/manual-security-code-review Breadcrumb: Security › DevSecOps Integration (S.8.3) › Manual Security Code Review Last updated: 28 Feb 2026 Manual Security Code Review AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 Beyond automated scanning, the Technical SME conducts manual security code review for security-critical components. Automated tools catch known vulnerability patterns but miss logic flaws and design-level vulnerabilities. Manual review provides the human judgement needed to assess whether the code's logic is correct, whether the security boundaries are properly enforced, and whether the architecture's trust assumptions are sound. Security-critical components requiring manual review include the authentication and authorisation logic, the model serving and API gateway code, the data validation and sanitisation logic, the logging and audit trail implementation, and any custom cryptographic implementations. The review should follow a structured checklist that includes the AI-specific concerns from the threat model . Manual security code review findings are tracked alongside automated findings in the vulnerability management register, with the same severity classification and remediation SLAs. The review is conducted at least annually for the security-critical components and additionally when those components are modified. Review records are retained as Module 9 evidence. Key outputs Manual security code review for security-critical components Structured review checklist including AI-specific concerns Findings tracked in the vulnerability management register Module 9 AISDP evidence --- ## Membership Inference Testing URL: https://docs.standardintelligence.com/membership-inference-testing Breadcrumb: Security › Testing Programme › Adversarial ML Testing › Membership Inference Testing Last updated: 28 Feb 2026 Membership Inference Testing AISDP module(s): Module 4 ( Data Governance ), Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 , GDPR Membership inference testing evaluates whether an attacker can determine if a specific individual's data was included in the training set. ML Privacy Meter implements state-of-the-art membership inference attacks. The testing protocol trains an attack model on a shadow dataset (data from the same distribution as the training data) and evaluates its ability to distinguish training members from non-members. If the attack achieves significantly better than random accuracy, the model is leaking membership information. The recommended starting threshold is an attack AUC-ROC below 0.55 (only marginally better than chance) as a starting threshold. An attack AUC-ROC above this threshold indicates that the model retains sufficient information about individual training records to enable membership determination, which has direct GDPR implications. If the membership inference test fails (attack AUC-ROC above threshold), the controls described in should be strengthened: differential privacy parameters tightened, output granularity further restricted, or the model architecture revised to reduce memorisation. The test results are documented as Module 4 and Module 9 evidence, with the chosen threshold and its justification. Key outputs Membership inference testing using ML Privacy Meter Attack AUC-ROC measurement against defined threshold ( Remediation triggers for above-threshold results Module 4 and Module 9 AISDP evidence --- ## Model Artefact Security URL: https://docs.standardintelligence.com/model-artefact-security Breadcrumb: Security › Data Security in ML Pipelines (S.8.2.3) › Model Artefact Security Last updated: 28 Feb 2026 Model Artefact Security AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 Trained model files are valuable intellectual property and a potential attack vector. Model artefacts are stored in encrypted, access-controlled repositories with immutable versioning. The engineering team implements cryptographic signing of model artefacts, enabling the inference infrastructure to verify that the model loaded for production serving matches the model that passed the validation gates. Any model artefact that fails signature verification is rejected by the pipeline, and the event triggers a security alert. Docker Content Trust and Sigstore cosign provide the signing and verification infrastructure. The signing key is managed through the key management service with restricted access. Model artefact security also covers the model in transit: from the registry to the serving infrastructure, model files are transferred over encrypted channels (TLS 1.3 or mTLS). Backup copies of model artefacts are encrypted and stored with the same access controls as the primary copies. The model artefact security controls are documented in Module 9. Key outputs Encrypted, access-controlled model artefact storage Cryptographic signing with verification at load time Rejection and alerting on signature verification failure Module 9 AISDP documentation --- ## Model Extraction Testing URL: https://docs.standardintelligence.com/model-extraction-testing Breadcrumb: Security › Testing Programme › Adversarial ML Testing › Model Extraction Testing Last updated: 28 Feb 2026 Model Extraction Testing AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 Model extraction testing evaluates whether an attacker can reconstruct the model's decision boundaries through systematic querying. The test protocol allocates a query budget (for example, 10,000 queries), submits systematic inputs designed to explore the model's behaviour, collects the model's outputs, trains a surrogate model on the collected input-output pairs, and evaluates the surrogate's fidelity to the original. The test reports the fidelity achieved at the allocated query budget, quantifying the extraction risk. A surrogate that achieves high fidelity at a low query budget indicates that the model is vulnerable to extraction. The fidelity metric informs the rate limiting configuration: if 10,000 queries are sufficient for meaningful extraction, the daily per-consumer query cap must be set well below this level. The test also evaluates whether the rate limiting and anomaly detection controls detect and respond to the extraction attempt. If the test completes its query budget without triggering any alert, the detection configuration needs tuning. The extraction testing results are documented as Module 9 evidence. Key outputs Extraction testing with defined query budget Surrogate model fidelity measurement Rate limiting and anomaly detection effectiveness validation Module 9 AISDP evidence --- ## Model Theft URL: https://docs.standardintelligence.com/model-theft Breadcrumb: Security › Threat Modelling › AI-Specific Threats › Model Theft Last updated: 28 Feb 2026 Model Theft — Attack Vectors (Extraction via Querying, Infrastructure Compromise, Artefact Exfiltration) AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 Model theft encompasses attacks that extract the model's parameters, architecture, or decision boundaries. Three primary vectors are identified. Extraction via querying involves the attacker submitting thousands to millions of queries, collecting input-output pairs, and training a surrogate model that approximates the original. The surrogate may not match the original precisely, but it can replicate its functionality without the development cost, training data, or compliance controls. Infrastructure compromise gives the attacker direct access to the model artefact files. A compromised cloud account, a misconfigured storage bucket, or an insider with excessive access can exfiltrate the serialised model directly. Artefact exfiltration from the supply chain occurs when model artefacts are intercepted during distribution (from the model registry to the serving infrastructure) or through backup systems. For high-risk AI systems, the consequences extend beyond intellectual property loss. The adversary obtains a model without the associated compliance controls, monitoring, and governance, and may deploy it in contexts that the original AISDP's risk assessment did not contemplate. For API-accessed systems where deployers have legitimate query access, the extraction risk is elevated because the deployer already has an authorised query channel. Key outputs Assessment of extraction, infrastructure compromise, and exfiltration vectors Likelihood scoring per vector based on the system's access model Compliance impact assessment (uncontrolled deployment of extracted models) Module 9 AISDP documentation Model Theft — Controls (Rate Limiting, Encrypted Storage, Watermarking, Segmentation) AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 Four control layers address the model theft vectors described above. Rate limiting is the first defence against extraction via querying. Rate limits that accommodate legitimate usage patterns but cap total query volume per client over longer time windows (daily, weekly) make extraction prohibitively slow. The limits are enforced per authenticated identity, not merely per IP address, to prevent circumvention through distributed querying. Query patterns suggesting extraction behaviour trigger automated alerts. Network segmentation restricts which systems can access the model serving endpoint. The inference service is accessible only through the application layer, not directly from the internet. Kubernetes NetworkPolicies and service mesh mTLS (Istio, Linkerd) ensure authenticated, encrypted connections. Encrypted model storage protects artefacts at rest in the registry, deployment storage, and backups using a key management service (AWS KMS, Azure Key Vault, Google Cloud KMS) with restricted key access. Model watermarking is a detection control rather than a prevention control. It embeds a detectable signal in the model's behaviour (specific output patterns on trigger inputs) that survives the extraction process. Backdoor-based watermarking is the most practical current approach. The watermark specification is stored securely and independently of the model artefact. Contractual controls (usage limits, prohibitions on systematic querying) complement the technical controls for API-accessed systems. Key outputs Rate limiting with anomaly detection for extraction patterns Network segmentation and encrypted model storage Model watermarking specification (where applicable) Module 9 AISDP evidence --- ## Module 9 Test Summary Table URL: https://docs.standardintelligence.com/module-9-test-summary-table Breadcrumb: Security › Artefacts › Module 9 Test Summary Table Last updated: 28 Feb 2026 Module 9 Test Summary Table AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 The Module 9 test summary table is retained as the primary navigation artefact for Module 9's testing evidence. It maps each test type to its most recent execution date, scope, finding count by severity, remediation status, and next scheduled execution date. The table covers penetration testing , vulnerability scanning , adversarial ML testing , additional threat-specific testing, red team exercises , and manual security code reviews. The summary table is the first document an assessor, competent authority inspector, or notified body reviewer examines when evaluating Module 9's testing evidence. From the summary table, they navigate to the detailed reports in the evidence pack . A current, complete summary table demonstrates an active, ongoing testing programme; gaps in the table indicate areas where testing has lapsed. The summary table is updated after each test execution and reviewed quarterly by the AI Governance Lead . It is retained alongside the detailed reports for the ten-year period. Key outputs Single-page summary covering all Module 9 test types Navigation index to detailed reports in the evidence pack Quarterly governance review Module 9 AISDP evidence --- ## Network Security URL: https://docs.standardintelligence.com/network-security Breadcrumb: Security › Cybersecurity Foundations › Network Security Last updated: 28 Feb 2026 Dedicated VPC with Segmentation AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 The AI system's infrastructure should be deployed within a dedicated Virtual Private Cloud (VPC) or equivalent network isolation boundary, separate from the organisation's general-purpose infrastructure. Network segmentation within the VPC isolates different system components (data ingestion, feature engineering, model serving, human oversight interface, logging) into distinct subnets with controlled communication paths. Segmentation reduces the blast radius of a compromise. If an attacker gains access to the data ingestion subnet, segmentation prevents lateral movement to the model serving infrastructure, the model artefact store, or the logging backend. The security team defines network policies as allowlists: traffic not explicitly permitted is denied by default. Security groups, network ACLs, and Kubernetes NetworkPolicies provide the enforcement mechanisms. The VPC configuration is documented as infrastructure-as-code and subject to version control and security scanning. Cloud security posture management (CSPM) tools continuously verify that the deployed network configuration matches the declared baseline. Drift from the baseline triggers automated alerts. The VPC architecture, segmentation policy, and CSPM configuration are documented in Module 9. Key outputs Dedicated VPC with subnet segmentation per system component Allowlist-based network policies (security groups, NetworkPolicies) CSPM continuous verification against the declared baseline Module 9 AISDP documentation Ingress/Egress Restriction & WAF AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 Ingress restrictions control which external traffic can reach the AI system's components. Only the inference API endpoint and the human oversight interface should be accessible from outside the VPC; all other components (data pipelines, model registry , logging infrastructure, administrative interfaces) should be internal only. A Web Application Firewall (WAF) sits in front of internet-facing endpoints and filters common attack patterns (SQL injection, XSS, request smuggling). Egress restrictions control which external destinations the AI system's components can reach. The model serving infrastructure should have no outbound internet access unless it needs to call a third-party API (such as a cloud-hosted LLM). Egress restrictions prevent data exfiltration: even if an attacker compromises an internal component, they cannot transmit data to an external destination if egress is blocked. Both ingress and egress rules are defined as infrastructure-as-code and enforced at the VPC level. The WAF configuration should be tuned to the AI system's specific traffic patterns to minimise false positives that could block legitimate inference requests. WAF logs are integrated with the SIEM for correlation and alerting. The ingress/egress configuration and WAF rules are documented in Module 9. Key outputs Ingress restrictions limiting external access to inference API and oversight interface Egress restrictions preventing unauthorised outbound connections WAF configuration tuned to AI system traffic patterns Module 9 AISDP documentation DDoS Protection AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 Distributed denial-of-service attacks target the availability of internet-facing endpoints. For high-risk AI systems, unavailability may have compliance implications: if the system is required to process decisions within defined timeframes (for example, credit decisions, employment screening), prolonged unavailability may constitute a failure to meet the system's intended purpose. Cloud-native DDoS protection services (AWS Shield, Azure DDoS Protection, Google Cloud Armor) provide volumetric attack mitigation at the network edge. These services absorb traffic floods before they reach the AI system's infrastructure. Application-layer DDoS protection combines the rate limiting described in with traffic analysis that distinguishes legitimate inference requests from attack traffic. The DDoS protection configuration should be documented in Module 9, including the protection tier (basic or advanced), the traffic thresholds that trigger mitigation, the expected behaviour during an attack (partial degradation, queueing, or rejection of excess requests), and the integration with the incident response plan . The system's behaviour during DDoS attacks should be tested periodically through controlled load testing. Key outputs Cloud-native DDoS protection configuration Application-layer rate limiting integration Expected system behaviour during DDoS events Module 9 AISDP documentation --- ## NIS2 Interaction URL: https://docs.standardintelligence.com/nis2-interaction Breadcrumb: Security › Cross-Regulatory Mapping (S.8.1) › NIS2 Interaction Last updated: 28 Feb 2026 NIS2 Scope, Dual Reporting & Simplification AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: NIS2, Article 73(9) NIS2 applies to essential and important entities across sectors including energy, transport, health, digital infrastructure, public administration, and ICT service management. The AI system's cybersecurity controls should be built on top of the organisation's NIS2 compliance framework, adding AI-specific threat modelling , AI-specific testing, and AI-specific incident response procedures as extensions. Module 9 references the organisation's NIS2 risk management measures where they apply and documents the AI-specific extensions. Dual reporting coordination is required because a single cybersecurity event can trigger both NIS2 and AI Act reporting obligations. NIS2's 24-hour early warning and Article 73 's 2/10/15-day timelines run in parallel. Content across both reports must be consistent; the shared incident fact sheet and regime-specific annexes provide this consistency. Article 73(9) provides a simplification: entities subject to NIS2 are limited to reporting fundamental rights infringements under Article 3(49)(c) through the AI Act; other serious incident categories are reported through NIS2. The Legal and Regulatory Advisor confirms whether the NIS2 transposition in the relevant member state covers the incident categories that Article 73 would otherwise require, and documents the determination. If the entity is not subject to NIS2, this article is documented as not applicable. Key outputs NIS2 scope determination and framework integration Dual reporting coordination with content consistency Article 73(9) simplification analysis and documentation Module 9 AISDP documentation --- ## Output Filtering URL: https://docs.standardintelligence.com/output-filtering Breadcrumb: Security › API Security (S.8.2.2) › Output Filtering Last updated: 28 Feb 2026 Output Filtering AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 Model outputs pass through a filtering layer before reaching the consumer. For classification models, confidence scores below a minimum threshold trigger a "low confidence" flag rather than a definitive classification, routing the decision to human review. For generative models, output filters detect and redact personally identifiable information, detect content that falls outside the system's intended purpose, and enforce output length limits. The output filtering layer implements the "untrusted output" principle described in : all model outputs are treated as potentially containing content that could cause harm if consumed without validation. The filtering logic is implemented as a dedicated middleware or service on the inference output path, architecturally separate from the model itself. This separation ensures that filtering cannot be bypassed and that changes to the filtering logic are visible as discrete, reviewable events. The output filtering configuration is version-controlled and subject to the same governance as other configuration changes. Changes that alter which content is filtered or how filtering decisions are made should be assessed against the substantial modification thresholds. The filtering logic, its configuration, and the filtering rates are documented in Module 9. Key outputs Confidence-based routing for low-confidence outputs PII redaction and content scope filtering for generative models Dedicated filtering middleware on the inference output path Module 9 AISDP documentation --- ## OWASP LLM01: Prompt Injection URL: https://docs.standardintelligence.com/owasp-llm01-prompt-injection Breadcrumb: Security › Threat Modelling › AI-Specific Threats › OWASP LLM01: Prompt Injection Last updated: 28 Feb 2026 Prompt Injection — Attack Vectors (Direct, Indirect, Multi-Turn) AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 Prompt injection is the most widely discussed threat for systems incorporating large language models. Attackers craft inputs that cause the model to deviate from its intended behaviour, ignore its instructions, or execute actions beyond its authorised scope. The attack manifests in three primary vectors. Direct prompt injection occurs when the attacker provides malicious input directly through the system's user interface or API. The input is designed to override the system prompt's instructions, causing the model to produce outputs outside its intended scope. Indirect prompt injection is more insidious: the attacker plants malicious content in data sources that the model consults during retrieval-augmented generation. When the model retrieves the poisoned content, it follows the embedded instructions rather than the system prompt. Multi-turn injection exploits conversational context to gradually erode the system prompt's constraints over successive interactions. For high-risk AI systems, prompt injection is a compliance risk: an injected prompt that causes the system to produce outputs inconsistent with its declared intended purpose effectively changes the system's behaviour without any authorised modification. The threat model must document the injection vectors relevant to the specific system, assess their likelihood and impact, and map them to the controls described above. Key outputs Assessment of direct, indirect, and multi-turn injection vectors Likelihood and impact scoring per vector Documentation of system-specific injection risk factors Module 9 AISDP evidence Prompt Injection — Controls (Sanitisation, Validation, Privilege Separation, Anchoring, Monitoring) AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 Five control categories address the prompt injection vectors described above. Input sanitisation filters or escapes known injection patterns before they reach the model. This includes stripping or encoding control characters, detecting known jailbreak patterns, and validating input structure against expected formats. Sanitisation is a necessary but insufficient defence; novel injection techniques will bypass pattern-based filters. Output validation verifies that the model's response falls within the expected output space. If the system is designed to produce structured classification outputs, any response that deviates from the expected format is flagged and blocked. Privilege separation ensures that the LLM component cannot access resources or execute actions beyond its documented scope; even if injection succeeds, the damage is limited. Instruction anchoring uses system prompt design techniques (clear delimiters, repeated instructions, explicit refusal patterns) to make the prompt more resistant to override. Input-output monitoring provides detection rather than prevention: it flags anomalous patterns that may indicate injection attempts, enabling investigation and response even when other controls do not prevent the injection. The combination of these five layers provides defence in depth. Module 9 captures the specific controls deployed, the testing performed (including adversarial prompt injection testing), and the residual risk . Key outputs Five-layer control implementation (sanitisation, validation, privilege separation, anchoring, monitoring) Adversarial prompt injection test results Residual risk documentation Module 9 AISDP evidence --- ## OWASP LLM02: Sensitive Info Disclosure URL: https://docs.standardintelligence.com/owasp-llm02-sensitive-info-disclosure Breadcrumb: Security › Threat Modelling › AI-Specific Threats › OWASP LLM02: Sensitive Info Disclosure Last updated: 28 Feb 2026 Information Disclosure — Attack Vectors (Memorisation, Membership Inference, Property Inference) AISDP module(s): Module 4 ( Data Governance ), Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 , GDPR The model may leak sensitive information from its training data through its outputs. This risk manifests in three forms. Memorisation occurs when the model has memorised specific training examples and can be prompted to reproduce them; this is most acute for large language models, which can reproduce verbatim passages including personal information. Membership inference allows an attacker to determine whether a specific individual's data was included in the training set, violating that individual's privacy even if the model does not reveal their specific data. Property inference enables an attacker to deduce aggregate properties of the training data (such as the proportion of a specific demographic group) that the organisation intended to keep confidential. For high-risk AI systems processing personal data, information disclosure has direct GDPR implications. A model that leaks personal data from its training set may constitute an unauthorised disclosure under GDPR Article 5 (1)(f). The threat assessment should evaluate which disclosure vectors are relevant to the specific system, considering the volume and sensitivity of personal data in the training set, the model architecture's propensity for memorisation, and the access patterns of the system's consumers. Key outputs Assessment of memorisation, membership inference, and property inference risks Sensitivity analysis based on training data contents GDPR impact assessment for identified disclosure risks Module 4 and Module 9 AISDP documentation Information Disclosure — Controls (Differential Privacy, Output Filtering, Membership Inference Testing) AISDP module(s): Module 4 (Data Governance), Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15, GDPR Differential privacy techniques during training (OpenDP, TensorFlow Privacy, Opacus) limit the model's memorisation of individual training records. Differentially private stochastic gradient descent (DP-SGD) clips per-example gradients and adds Gaussian noise during training, providing a mathematical guarantee parameterised by epsilon (ε). Lower epsilon provides stronger privacy at the cost of model utility; the chosen epsilon, the resulting accuracy trade-off, and the rationale are documented in AISDP Module 6 . Output filtering (Microsoft Presidio, spaCy NER) provides a runtime defence for generative models. Before generated text is returned, a PII detection pipeline scans for personal names, addresses, phone numbers, email addresses, and national ID numbers. Detected PII is redacted or replaced with placeholder tokens. The Technical SME monitors the false positive rate to balance privacy protection against system utility. Membership inference testing (ML Privacy Meter) evaluates the model's susceptibility by training an attack model on a shadow dataset and evaluating its ability to distinguish training members from non-members. If the attack achieves significantly better than random accuracy (the recommended starting threshold is an attack AUC-ROC below 0.55 as a starting threshold), the model is leaking membership information and further controls are required. Data minimisation in training reduces the volume of sensitive data available for the model to memorise. Key outputs Differential privacy implementation with documented epsilon and accuracy trade-off Output filtering pipeline (Presidio or equivalent) with false positive monitoring Membership inference testing results against defined thresholds Module 4 and Module 9 AISDP evidence --- ## OWASP LLM03: Supply Chain URL: https://docs.standardintelligence.com/owasp-llm03-supply-chain Breadcrumb: Security › Threat Modelling › AI-Specific Threats › OWASP LLM03: Supply Chain Last updated: 28 Feb 2026 Supply Chain — Attack Vectors (Dependencies, Pre-Trained Models, Third-Party Services) AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 The AI system's supply chain has three layers, each with distinct attack vectors. The software dependency layer (ML frameworks, data processing libraries, serving frameworks) is vulnerable to typosquatting, dependency confusion, and compromised package updates. An attacker who publishes a malicious package with a name similar to a legitimate dependency can compromise the system if the developer installs the wrong package. The model component layer (pre-trained models, tokenisers, embedding models) is vulnerable to model backdoors and poisoned weights. These artefacts are distributed without the same signing and verification infrastructure that software packages enjoy. A compromised pre-trained model can introduce systematic biases or hidden triggers that are invisible during standard evaluation. The infrastructure layer (container base images, operating system packages, cloud service configurations) is vulnerable to compromised base images and misconfigured cloud services. This assessment should be read alongside (dependency scanning), (licence compliance), (SBOMs), and, which provide the operational framework for ongoing supply chain monitoring. The supply chain risk assessment , SBOM generation and review process, and vendor security assessment results feed into Module 9. Key outputs Three-layer supply chain risk assessment Per-layer attack vector identification Integration with SBOM, dependency scanning, and licence compliance processes Module 9 AISDP documentation --- ## OWASP LLM04: Data and Model Poisoning URL: https://docs.standardintelligence.com/owasp-llm04-data-and-model-poisoning Breadcrumb: Security › Threat Modelling › AI-Specific Threats › OWASP LLM04: Data and Model Poisoning Last updated: 28 Feb 2026 Data Poisoning — Attack Vectors (Targeted, Untargeted, Label Flipping, Backdoor) AISDP module(s): Module 4 (Data Governance), Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 10 , Article 15 An attacker who can manipulate the training data can introduce biases, backdoors, or performance degradation into the trained model. Data poisoning manifests in several forms. Targeted poisoning affects the model's behaviour for specific inputs (for example, causing a specific individual's application to always be approved) whilst leaving general performance intact. Untargeted poisoning degrades overall model performance, making the system unreliable. Label flipping changes the correct labels on training examples, teaching the model incorrect associations. Backdoor insertion embeds a hidden trigger in the training data; the model performs normally on clean inputs but produces attacker-controlled outputs when the trigger is present. For RAG-based systems, adversarial document injection into the knowledge base is a form of poisoning that can influence the model's outputs without modifying the model itself. The threat assessment should evaluate which poisoning vectors are relevant to the specific system. Systems that retrain on production data (where outputs are labelled by deployers or affected persons) are more vulnerable to label flipping than systems trained on curated, internally labelled datasets. Systems using RAG are vulnerable to document injection. The assessment informs the controls described above. Key outputs Assessment of relevant poisoning vectors (targeted, untargeted, label flipping, backdoor, document injection) Likelihood scoring based on the system's data ingestion architecture Integration with the overall threat model Module 4 and Module 9 AISDP documentation Data Poisoning — Controls (Provenance Tracking, Anomaly Detection, Sentinel Testing, Access Controls) AISDP module(s): Module 4 (Data Governance), Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 10, Article 15 Four control layers address the data poisoning vectors described above. Data provenance tracking, using the data lineage infrastructure described in Section 4 and the version control described in Section 6, enables detection of unauthorised data modifications. Every modification to training data is logged with the modifier's identity, timestamp, description, and rationale. DVC and Delta Lake provide version-controlled data storage where every change is attributed. Statistical anomaly detection on the data pipeline (Great Expectations, Evidently AI) identifies suspicious records or distributional shifts before training. Isolation forests and distributional tests flag unusual data points. The challenge is that sophisticated poisoning attacks may modify only a small fraction of records, keeping the overall distribution within normal bounds. For high-risk systems, periodic manual review of random training record samples provides a complementary human verification layer. Sentinel input testing after each retraining cycle checks for unexpected changes in outputs for known inputs, detecting whether the model's behaviour has been altered by poisoned data. Access controls on training data repositories restrict modification to named individuals with documented business needs, with every access event logged in an immutable audit trail. The training data integrity controls belong jointly in Module 4 and Module 9. Key outputs Data provenance tracking with immutable modification logs Statistical anomaly detection on the data pipeline Sentinel input testing after each retraining cycle Access controls with audit logging on training data repositories --- ## OWASP LLM05: Improper Output Handling URL: https://docs.standardintelligence.com/owasp-llm05-improper-output-handling Breadcrumb: Security › Threat Modelling › AI-Specific Threats › OWASP LLM05: Improper Output Handling Last updated: 28 Feb 2026 Insecure Output — Attack Vectors (XSS, SQL Injection, Command Injection via Output) AISDP module(s): Module 9 (Robustness and Cybersecurity), Module 3 (Architecture and Design) Regulatory basis: Article 15 Model outputs that are passed to downstream systems without validation can trigger secondary vulnerabilities. If model outputs are rendered in web interfaces, they may contain cross-site scripting (XSS) payloads. If outputs are incorporated into database queries, they may contain SQL injection payloads. If outputs are passed to system shells, they may contain command injection payloads. This threat is distinct from the model itself being compromised; it arises from the way downstream systems consume model outputs. A model that produces correct, well-intentioned outputs can still introduce vulnerabilities if those outputs happen to contain characters or patterns that downstream systems interpret as executable code. For generative models, this risk is particularly acute because the model's output is free-form text that may contain any character sequence. The threat assessment should identify every downstream system that consumes the model's output and evaluate the injection risk for each consumption path. Web rendering, database queries, file system operations, email generation, and API calls to other services are all potential injection vectors. The assessment feeds into Module 9 and informs the output handling controls described above. Key outputs Downstream system inventory with injection risk per consumption path XSS, SQL injection, and command injection vector assessment Integration with the overall threat model Module 9 and Module 3 AISDP documentation Insecure Output — Controls (Untrusted Treatment, Encoding, Schema Validation, Sandboxing) AISDP module(s): Module 9 (Robustness and Cybersecurity), Module 3 (Architecture and Design) Regulatory basis: Article 15 The foundational principle is that all model outputs are treated as untrusted input by downstream systems. This architectural decision eliminates an entire class of vulnerabilities by ensuring that no downstream component assumes model outputs are safe. Output encoding ensures that special characters in model outputs are escaped before they reach consuming systems. For web rendering, HTML entity encoding prevents XSS. For database operations, parameterised queries prevent SQL injection. For system commands, shell escaping or, preferably, avoiding shell invocation entirely prevents command injection. Schema validation, enforced by a dedicated validation middleware on the inference output path, verifies that every output conforms to the expected structure before it is passed downstream. Outputs that fail validation are replaced with safe default responses and the failure is logged. Sandboxed execution environments provide a final layer of defence. If model outputs must be executed (for example, code generation systems), the execution occurs in an isolated sandbox with no access to production systems, no network access, and resource limits. The output validation layer is enforced at the infrastructure level, not within the model's code, ensuring it cannot be bypassed. Module 3 should describe the output validation layer as a distinct architectural component. Key outputs Untrusted-output architectural principle documented in Module 3 Output encoding per downstream consumption path Schema validation middleware on the inference output path Sandboxed execution for code-generating systems --- ## OWASP LLM06: Excessive Agency URL: https://docs.standardintelligence.com/owasp-llm06-excessive-agency Breadcrumb: Security › Threat Modelling › AI-Specific Threats › OWASP LLM06: Excessive Agency Last updated: 28 Feb 2026 Excessive Agency — Attack Vectors & Controls (Least Privilege, Permission Inventory, Access Reviews) AISDP module(s): Module 1 (System Identity), Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 When the AI system is granted more autonomy, permissions, or capabilities than its intended purpose requires, unnecessary risk surfaces are created. Excessive agency is not an attack in the traditional sense; it is a design flaw that amplifies the impact of any other vulnerability. A prompt injection attack against a system with minimal permissions causes limited damage; the same attack against an over-permissioned system can be catastrophic. The principle of least privilege, applied to the AI system's access rights, API permissions, and action capabilities, is the primary control. The system's authorised scope is documented in the AISDP (Module 1 defines the intended scope of autonomy) and enforced through technical controls, not merely policy. Every access right, API credential, and action capability should be documented in a permission inventory with a justification for each. Regular access reviews (quarterly at minimum) confirm that the system's permissions remain proportionate to its documented purpose. Permissions added during development or testing that are not needed in production should be removed. Any gap between the system's technical capabilities and its documented intended purpose is an excessive agency risk that the AISDP must acknowledge and control. Module 9 documents the permission inventory and the access review schedule. Key outputs Permission inventory with per-permission justification Least-privilege enforcement through technical controls Quarterly access reviews confirming proportionate permissions Module 1 and Module 9 AISDP documentation Overreliance — Controls (Human Oversight Enforcement, Automation Bias Countermeasures) AISDP module(s): Module 7 (Human Oversight), Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 14 , Article 15 Overreliance occurs when users or downstream systems treat AI outputs as authoritative without adequate verification, leading to propagating errors, hallucinations, or biased outputs. For high-risk systems, this threat undermines Article 14's human oversight requirement, because oversight that defers uncritically to the model provides no actual safeguard. This threat is addressed primarily through the human oversight measures: mandatory review workflows, automation bias countermeasures (data-first display, minimum dwell time, confidence visualisation, calibration cases), and override capability with rationale capture. The cybersecurity dimension is the technical enforcement of human review: the system should not be configurable to operate without human oversight for high-risk decisions. Module 9 should document the technical controls that prevent bypass of human oversight, including how the system enforces mandatory review workflows and prevents operators from bulk-approving recommendations without individual assessment. Module 7 should cross-reference Module 9 for the enforcement mechanisms. Penetration testing should specifically test for human oversight bypass paths. Key outputs Technical enforcement preventing bypass of human oversight Cross-reference between Module 7 (human oversight) and Module 9 (enforcement) Penetration testing scope including oversight bypass testing Module 7 and Module 9 AISDP evidence --- ## OWASP LLM09: Misinformation URL: https://docs.standardintelligence.com/owasp-llm09-misinformation Breadcrumb: Security › Threat Modelling › AI-Specific Threats › OWASP LLM09: Misinformation Last updated: 28 Feb 2026 ℹ This topic is covered within the parent article. See the full AI-Specific Threat Categories page. --- ## OWASP LLM10: Unbounded Consumption URL: https://docs.standardintelligence.com/owasp-llm10-unbounded-consumption Breadcrumb: Security › Threat Modelling › AI-Specific Threats › OWASP LLM10: Unbounded Consumption Last updated: 28 Feb 2026 Model DoS — Attack Vectors & Controls (Rate Limiting, Timeouts, Cost Caps) AISDP module(s): Module 9 (Robustness and Cybersecurity), Module 5 (Testing and Validation) Regulatory basis: Article 15 An attacker submits inputs designed to consume excessive computational resources, degrading or denying service to legitimate users. AI systems are particularly vulnerable because individual inference requests can be computationally expensive; a large transformer model may require seconds of GPU time per request. A volumetric attack that would be trivially absorbed by a web server can exhaust an inference service. Three control layers address this threat. Rate limiting on inference endpoints enforces a maximum request rate per client, identified by API key, IP address, or authenticated identity. Kong, NGINX, and cloud API gateways all support configurable rate limiting. The rate limit should accommodate legitimate peak usage with a margin; excess requests receive an HTTP 429 response. For neural networks, input complexity analysis can detect and reject inputs designed to trigger pathological computation paths. Inference timeout enforcement sets a maximum execution time per request, terminating any request that exceeds it. The timeout should be set above the p99 latency for legitimate requests and below the threshold where a single request materially impacts other users. Autoscaling with cost caps provides the third layer: the system scales up to handle increased load but will not exceed a defined cost ceiling, preventing sustained attacks from generating unbounded cloud bills. Module 9 records the rate limiting configuration, timeout thresholds, and autoscaling boundaries. Module 5 states the system's expected throughput under normal and adversarial load conditions. Key outputs Rate limiting configuration per client identity type Inference timeout enforcement (above p99, below impact threshold) Autoscaling with cost caps Module 9 and Module 5 AISDP documentation --- ## Parallel Reporting Streams — AI Act Art. 73 URL: https://docs.standardintelligence.com/parallel-reporting-streams-ai-act-art-73 Breadcrumb: Security › Incident Response › Integrated Plan › Parallel Reporting Streams — AI Act Art. 73 Last updated: 28 Feb 2026 Parallel Reporting Streams — AI Act Art. 73 AISDP module(s): Module 9 (Robustness and Cybersecurity), Module 12 ( Post-Market Monitoring ) Regulatory basis: Article 73 Serious incidents meeting the Article 3(49) definition are reported to the market surveillance authority of the member state where the incident occurred. The timelines follow a tiered structure under Article 73(2)–(4): the default deadline is fifteen days after the provider or deployer becomes aware of the serious incident. A shortened two-day deadline applies to widespread infringements and to serious and irreversible disruption of critical infrastructure (Article 3(49)(b)). In the event of death, the deadline is ten days. Providers may submit an initial incomplete report under Article 73(5) , followed by a complete report as the investigation progresses. Article 73-specific fields include the suspected causal link between the AI system and the harm, the fundamental rights dimension, and the system's EU database registration reference. These fields are prepared as a regime-specific annex to the shared incident fact sheet. The Legal and Regulatory Advisor coordinates the Article 73 report to ensure consistency with any parallel reports to other authorities. Article 73(6) prohibits altering the AI system in a way that could affect subsequent evaluation of the causes before informing the competent authorities. The evidence preservation procedure must be executed before any system modification. Key outputs Article 73 reporting stream (2d/10d/15d) to market surveillance authority Causal link, fundamental rights dimension, and EU database reference No-alteration obligation until authorities notified Module 9 and Module 12 AISDP documentation --- ## Parallel Reporting Streams — CRA URL: https://docs.standardintelligence.com/parallel-reporting-streams-cra Breadcrumb: Security › Incident Response › Integrated Plan › Parallel Reporting Streams — CRA Last updated: 28 Feb 2026 Parallel Reporting Streams — CRA AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: CRA Article 14 For products with digital elements within the CRA's scope, actively exploited vulnerabilities are reported to ENISA through a single reporting platform. The timeline is aggressive: a 24-hour early warning, a 72-hour vulnerability notification, and a 14-day final report. CRA reporting is triggered by actively exploited vulnerabilities, not by all incidents; the distinction matters for triage. Pre-drafted templates for the ENISA early warning and the full vulnerability notification should be maintained by the Legal and Regulatory Advisor. Ongoing CRA vulnerability management obligations also affect the AISDP's maintenance cycle: the vulnerability management register serves as evidence for both CRA compliance and Module 9. If the system is not within the CRA's scope (for example, a purely cloud-hosted SaaS system), this article is documented as not applicable, with the scope determination reasoning recorded. Key outputs CRA reporting stream (24h/72h/14d) to ENISA Triggered by actively exploited vulnerabilities specifically Pre-drafted ENISA notification templates Module 9 AISDP documentation --- ## Parallel Reporting Streams — DORA URL: https://docs.standardintelligence.com/parallel-reporting-streams-dora Breadcrumb: Security › Incident Response › Integrated Plan › Parallel Reporting Streams — DORA Last updated: 28 Feb 2026 Parallel Reporting Streams — DORA AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: DORA Article 19 For financial entities subject to DORA, major ICT-related incidents are reported to the competent financial authority (national financial supervisor). The reporting follows a structured timeline: an initial notification within four hours of classifying the incident as major (or within 24 hours of becoming aware of the incident, whichever is earlier), an intermediate report within 72 hours, and a final report within one month. DORA's four-hour deadline is the most aggressive of all applicable regimes and drives the operational cadence of incident response . The initial notification requires the incident classification under Article 18 's criteria and the financial impact assessment. Pre-drafted templates reduce preparation time. Content must be consistent with any parallel reports to other authorities; contradictory statements create legal exposure. If the system is not subject to DORA (because the deploying entity is not a financial entity), this article is documented as not applicable. Key outputs DORA reporting stream (4h/72h/1mo) to financial supervisor Pre-drafted templates with financial-sector-specific fields Content consistency with parallel reporting streams Module 9 AISDP documentation --- ## Parallel Reporting Streams — NIS2 URL: https://docs.standardintelligence.com/parallel-reporting-streams-nis2 Breadcrumb: Security › Incident Response › Integrated Plan › Parallel Reporting Streams — NIS2 Last updated: 28 Feb 2026 Parallel Reporting Streams — NIS2 AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: NIS2 Article 23 For entities subject to NIS2, significant incidents are reported to the national CSIRT or competent authority. The reporting follows a tiered timeline: an early warning within 24 hours of becoming aware of a significant incident, an incident notification within 72 hours providing an updated severity and impact assessment, and a final report within one month. NIS2-specific fields include the number of users affected, the cross-border impact, and the CSIRT-specific format required by the relevant member state's transposition. These fields are prepared as a regime-specific annex to the shared incident fact sheet. NIS2's 24-hour early warning is typically the second-earliest deadline after DORA 's four hours (if applicable). For entities subject to both NIS2 and the AI Act but not DORA, the NIS2 early warning drives the preparation cadence. If the entity is not subject to NIS2, this article is documented as not applicable. Key outputs NIS2 reporting stream (24h/72h/1mo) to national CSIRT NIS2-specific fields (users affected, cross-border impact, CSIRT format) Content consistency with parallel reporting streams Module 9 AISDP documentation --- ## Patch Management URL: https://docs.standardintelligence.com/patch-management Breadcrumb: Security › Cybersecurity Foundations › Patch Management Last updated: 28 Feb 2026 Documented Schedule, Zero-Day Process & Staging Testing AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 Operating system, framework, and dependency patches are applied by the engineering team on a documented schedule. The patch management policy specifies the cadence for routine patches (monthly or aligned with the CI/CD release cycle), the process for applying patches, and the testing requirements before production deployment. Emergency patches for zero-day vulnerabilities follow an expedited process. When a zero-day affecting any system component is disclosed, the security team assesses the exposure, the engineering team prepares the patch, and the patch is tested in the staging environment before production deployment. The expedited process has a shorter SLA (24–72 hours for critical zero-days) and may bypass certain non-essential pipeline stages, though the core validation gates (performance, fairness, robustness) should still run to confirm the patch does not introduce regressions. All patches, including emergency patches, are tested in the staging environment before production deployment. The staging test confirms that the patch resolves the vulnerability, that no regressions have been introduced, and that the system's declared performance and fairness metrics remain within thresholds. The patch management schedule, the zero-day process, and the staging test results are documented in Module 9. Key outputs Documented patch schedule with cadence and process Zero-day expedited process with shortened SLAs Mandatory staging testing for all patches Module 9 AISDP documentation --- ## Penetration Test Reports URL: https://docs.standardintelligence.com/penetration-test-reports Breadcrumb: Security › Artefacts › Penetration Test Reports Last updated: 28 Feb 2026 Penetration Test Reports AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 The penetration test report archive contains the full reports from each annual penetration test, including findings, CVSS scores, affected AISDP modules, recommended remediations, remediation records, and re-testing verification. The archive also contains the engagement briefs specifying the testing scope, the threat model entries to exercise, and the OWASP LLM Top 10 categories to cover. For DORA -scoped entities, the archive includes TLPT reports alongside standard penetration test reports. TLPT reports that are shared with the financial supervisor are structured to serve both DORA and AI Act purposes. The archive enables the organisation to demonstrate a continuous programme of security testing, not isolated annual exercises. Each report references the previous report's open findings, showing the remediation trajectory. The archive is retained for the ten-year period with immutable timestamps and integrity hashes. Key outputs Annual penetration test report archive TLPT reports where applicable Remediation trajectory across successive tests Module 9 AISDP evidence --- ## Penetration Testing URL: https://docs.standardintelligence.com/penetration-testing Breadcrumb: Security › Testing Programme › Penetration Testing Last updated: 28 Feb 2026 Annual Independent Penetration Testing AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 Penetration testing is conducted annually by an independent firm with expertise in both traditional application security and AI-specific threats. The scope must cover all attack surfaces identified in the threat model : internet-facing APIs, operator interfaces, administrative endpoints, inter-service communication, and model serving infrastructure. For AI systems, the penetration test scope extends beyond traditional targets. Model API endpoints are tested for model extraction through repeated querying and information leakage through output analysis. Data pipeline endpoints are tested for data injection or poisoning through input manipulation. The human oversight interface is tested for privilege escalation, session hijacking, or interface manipulation that could cause operators to approve harmful outputs. AI-specific penetration testing requires specialist expertise. Traditional penetration testing firms may not have ML security experience. Firms with documented AI security capabilities are engaged by the AI Governance Lead for the AI-specific components. The engagement brief references the threat model and the OWASP Top 10 for LLM Applications, specifying which threats the test should exercise. Testing frequency is annual at minimum and additionally after every substantial modification . Key outputs Annual penetration test by an independent firm Scope covering traditional and AI-specific attack surfaces Engagement brief referencing the threat model and OWASP LLM Top 10 Module 9 AISDP evidence Penetration Test Reporting & Remediation SLAs AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 The penetration testing firm provides a structured report mapping each finding to a severity rating (CVSS score), the affected AISDP module, and a recommended remediation. The report format should enable direct traceability between the finding, the threat model entry it exercises, and the AISDP module it affects. Critical findings have a remediation SLA of 30 days; high-severity findings have a remediation SLA of 90 days. The Technical SME verifies remediation through re-testing, confirming that the vulnerability is no longer exploitable. Findings that cannot be remediated within the SLA are escalated to the AI Governance Lead and recorded in the risk register with a documented justification, compensating controls, and a revised remediation timeline. For AI-specific findings, remediation may require model retraining, architecture changes, or updates to the human oversight interface. These remediation paths are typically longer than patching a software vulnerability, and the SLA should account for the validation gate cycle that any model or architecture change must pass before deployment. The penetration test report and all remediation records are retained as Module 9 evidence. Key outputs Structured penetration test report with CVSS scoring per finding Remediation SLAs (critical: 30 days; high: 90 days) Re-testing verification of remediation Module 9 AISDP evidence --- ## Plugin Security (cf. LLM06 Agency) URL: https://docs.standardintelligence.com/plugin-security-cf-llm06-agency Breadcrumb: Security › Threat Modelling › AI-Specific Threats › Plugin Security (cf. LLM06 Agency) Last updated: 28 Feb 2026 Plugin Security — Attack Vectors & Controls (Allowlists, Validation, Human Approval, Logging) AISDP module(s): Module 7 (Human Oversight), Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 14 , Article 15 For systems where the AI model interfaces with external tools, APIs, or plugins, insufficient validation of the model's tool usage can lead to unauthorised actions. A model that can call an API to modify a database, send emails, or execute code expands the system's risk surface significantly. The attack vector is the model generating tool calls that the system executes without adequate validation. Four controls address this threat. Tool call allowlists restrict the model to a defined set of permitted actions with permitted parameters. Any tool call not explicitly on the allowlist is rejected. Parameter validation verifies that each tool call's arguments fall within expected ranges and formats; a model that attempts to call a database API with a crafted SQL payload is blocked before the call reaches the database. Human approval for high-impact actions ensures that consequential tool calls (data modifications, financial transactions, external communications) require operator confirmation before execution. Comprehensive logging of all tool invocations enables post-hoc review and forensic investigation. Module 9 records the tool and plugin inventory, the permission model for each tool, and the validation controls. Module 7 captures which tool actions require human approval. This threat is closely related to the agentic AI system design controls addressed in the plugin security section. Key outputs Tool call allowlist with permitted actions and parameters Parameter validation on all tool invocations Human approval workflow for high-impact actions Comprehensive tool invocation logging --- ## Pre-Drafted Reporting Templates URL: https://docs.standardintelligence.com/pre-drafted-reporting-templates Breadcrumb: Security › Incident Response › Integrated Plan › Pre-Drafted Reporting Templates Last updated: 28 Feb 2026 Pre-Drafted Reporting Templates AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 73 , NIS2 , DORA , CRA Pre-drafted dual-regime and multi-regime reporting templates are maintained by the incident response team. Shared fields common to all regimes (entity identity, system identity, timeline of events, nature and scope, containment actions, initial impact assessment) are populated once from the shared incident fact sheet. Regime-specific fields are completed separately as each reporting deadline approaches. The templates reduce preparation time when multiple reporting deadlines are running concurrently. For a financial entity subject to DORA, NIS2, and the AI Act simultaneously, the four-hour DORA deadline leaves no time for drafting from scratch. DORA requires the incident classification under Article 18 's criteria and the financial impact assessment. NIS2 requires the number of users affected and the cross-border impact. Article 73 requires the suspected causal link and the fundamental rights dimension. CRA requires the vulnerability details and affected product versions. The Legal and Regulatory Advisor reviews and approves all templates. Templates are tested during the annual tabletop exercises to confirm they remain current and complete. Key outputs Pre-drafted templates with shared fields and per-regime annexes Templates for DORA, NIS2, CRA, and Article 73 reporting Annual testing during tabletop exercises Module 9 AISDP documentation --- ## Prompt Injection Testing (LLM Systems) URL: https://docs.standardintelligence.com/prompt-injection-testing-llm-systems Breadcrumb: Security › Testing Programme › Adversarial ML Testing › Prompt Injection Testing (LLM Systems) Last updated: 28 Feb 2026 Prompt Injection Testing (LLM Systems) AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 For systems incorporating LLMs, prompt injection testing uses both automated scanning and manual assessment. Garak (NVIDIA) provides automated scanning, sending a battery of prompt injection payloads and recording the model's responses. The payload categories include direct injection (overriding system prompt instructions), indirect injection via document content (embedding instructions in retrieved documents), jailbreak prompts (persuading the model to bypass safety constraints), and system prompt extraction attempts. The automated testing should be supplemented with custom injection payloads derived from the system's specific context. If the LLM processes user-uploaded documents, the test embeds injection prompts within documents and verifies that the system's guardrails detect and reject them. If the system uses RAG, injection payloads are placed in the knowledge base to test indirect injection resilience. Prompt injection testing is conducted at least biannually and after any significant model change, guardrail update, or system prompt modification. The test results document the payload categories tested, the success rates per category, and the controls that prevented successful injections. Payloads that successfully bypass controls are escalated for immediate remediation. Key outputs Automated prompt injection testing (Garak) Custom context-specific injection payloads Biannual testing cadence with change-triggered additional runs Module 9 AISDP evidence --- ## Rate Limiting URL: https://docs.standardintelligence.com/rate-limiting Breadcrumb: Security › API Security (S.8.2.2) › Rate Limiting Last updated: 28 Feb 2026 Rate Limiting AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 Rate limiting on inference endpoints prevents denial of service and model extraction attacks. A reference nginx configuration is provided with two layers: per-consumer rate limiting (keyed to the API key header) and global rate limiting across all consumers. Both layers allow short bursts whilst enforcing sustained rate ceilings. The rate limits should be calibrated to the system's legitimate usage patterns. The per-consumer limit should accommodate the consumer's expected peak request rate with a reasonable margin; the global limit should accommodate the expected total request rate across all consumers. Anti-extraction measures extend beyond simple rate limiting: application-level middleware can track unique input patterns per consumer per hour, flagging consumers who submit systematically varied inputs that suggest extraction behaviour. Rate limit configuration is version-controlled as infrastructure-as-code and documented in Module 9. The limits are reviewed periodically and adjusted as usage patterns evolve. Rate limit enforcement is tested as part of the denial-of-service testing described above. Key outputs Per-consumer and global rate limiting on inference endpoints Anti-extraction monitoring for systematic input variation Version-controlled rate limit configuration Module 9 AISDP documentation --- ## Red Team Exercise Reports URL: https://docs.standardintelligence.com/red-team-exercise-reports Breadcrumb: Security › Artefacts › Red Team Exercise Reports Last updated: 28 Feb 2026 Red Team Exercise Reports AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 The red team report archive contains the full reports from each annual red team exercise . Each report documents the scenarios tested, the attack chains attempted, the successful and unsuccessful attacks, the exploited vulnerabilities, and the recommended mitigations. Findings are cross-referenced to the threat model entries they exercise. The archive provides evidence of a realistic, ongoing security testing programme that goes beyond automated scanning and penetration testing . Red team exercises test the organisation's detection and response capabilities, not just the system's technical controls. A red team that successfully corrupts a data source without triggering any monitoring alert reveals a detection gap that no automated scan would find. The archive is retained for the ten-year period with immutable timestamps. Remediation records linked to red team findings show the organisation's response to identified weaknesses. Key outputs Annual red team exercise report archive Scenario documentation with attack chain analysis Cross-reference to threat model entries Module 9 AISDP evidence --- ## Red Team Exercises URL: https://docs.standardintelligence.com/red-team-exercises Breadcrumb: Security › Testing Programme › Red Team Exercises Last updated: 28 Feb 2026 Annual Red Team — Scenarios AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 Annual red team exercises simulate realistic threat scenarios combining technical attacks with social engineering. Five scenario categories are relevant for high-risk AI systems. Data source corruption attempts to manipulate the system's outputs by corrupting a data source it depends upon, testing the data integrity controls and anomaly detection. Automation bias exploitation attempts to cause the human oversight layer to approve harmful outputs by exploiting operator trust in the model's recommendations. Sensitive information extraction attempts to retrieve personal or confidential information from the model through carefully crafted queries. Denial of service via adversarial inputs attempts to trigger resource exhaustion through inputs designed to maximise computational cost. Model manipulation attempts to alter the system's behaviour through infrastructure compromise, configuration tampering, or supply chain exploitation. The Technical SME conducts red team exercises with personnel who were not involved in the system's development and who have realistic threat actor capabilities. For financial-sector systems subject to DORA , the scenarios should include sector-specific threats such as adversarial manipulation of credit scoring outputs or data poisoning to influence lending decisions. Key outputs Five-category red team scenario coverage Independent exercise personnel with realistic threat actor capabilities Sector-specific scenarios where applicable Module 9 AISDP evidence TLPT Alignment (DORA Art. 26, TIBER-EU) AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15, DORA Article 26 Significant financial entities subject to DORA Article 26 must conduct threat-led penetration testing (TLPT) using intelligence-led methodologies such as TIBER-EU. For AI systems deployed within such entities, the TLPT scope should explicitly include AI-specific attack scenarios. The threat intelligence phase should address AI-specific threat actors and techniques, using MITRE ATLAS alongside MITRE ATT&CK. The TLPT scope should encompass adversarial inputs designed to manipulate financial decisions, model extraction attempts, data poisoning scenarios targeting the training pipeline, and prompt injection attacks for LLM-based systems. This ensures that the TLPT exercises the AI-specific attack surface in addition to the traditional infrastructure and application targets. TLPT reports, shared with the financial supervisor, also serve as Module 9 evidence for the AI Act, provided they are structured to address both regimes' expectations. The engagement brief should reference both the DORA TLPT requirements and the AI Act threat model , specifying which threats from each regime the test should exercise. If the system is not subject to DORA, this article is documented as not applicable. Key outputs TLPT scope including AI-specific attack scenarios (if DORA applicable) MITRE ATLAS alongside MITRE ATT&CK in threat intelligence phase Dual-regime report structure serving both DORA and AI Act Module 9 AISDP evidence Red Team Outputs AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 Red team exercises produce a detailed report containing findings, exploited vulnerabilities, the attack chain for each successful scenario, and recommended mitigations. The report should map each finding to the threat model entry it exercises and the AISDP module it affects, enabling direct traceability from the exercise to the compliance documentation. Findings are tracked in the vulnerability management register with the same severity classification and remediation SLAs as other security findings. Critical findings from red team exercises, such as a successful data source corruption that alters the model's outputs without triggering any alert, require immediate remediation. The remediation actions and their verification are documented alongside the original findings. The red team report also feeds into the threat model update cycle. Attack chains that were not anticipated in the threat model indicate gaps in the threat assessment. Successful attacks that bypassed controls that were assumed to be effective indicate control weaknesses. Both findings result in threat model updates and potential revisions to the risk register . Key outputs Detailed red team report with findings, attack chains, and recommendations Findings tracked in the vulnerability management register Threat model and risk register updates from exercise findings Module 9 AISDP evidence --- ## Regulator Contact Register URL: https://docs.standardintelligence.com/regulator-contact-register Breadcrumb: Security › Incident Response › Regulator Contact Register Last updated: 28 Feb 2026 Per-Jurisdiction Authority Contacts AISDP module(s): Module 9 (Robustness and Cybersecurity), Module 11 (Regulator Interaction) Regulatory basis: Article 73 , NIS2 , DORA , CRA A regulator contact register is maintained by the AI Governance Lead , listing for each member state where the system is deployed: the AI Act market surveillance authority, the NIS2 competent authority or CSIRT, the DORA competent financial authority (if applicable), the ENISA reporting portal for CRA notifications, and the contact details, preferred communication channels, and reporting portals for each. The register enables the incident response team to initiate reporting immediately without first researching which authority to contact. During a multi-regime incident with DORA's four-hour deadline running, the team cannot afford to spend time identifying the correct authority and contact channel. The register is updated whenever authority designations change and tested during annual tabletop exercises. For inspection readiness , the organisation should anticipate that different authorities may examine the same system from different regulatory perspectives. Module 9's evidence pack should serve both NIS2 audits and AI Act inspections without requiring reorganisation under time pressure. Key outputs Regulator contact register per jurisdiction and per regime Contact details, communication channels, and reporting portals Annual testing during tabletop exercises Module 9 and Module 11 AISDP documentation --- ## SAST (Bandit, SonarQube, Semgrep) URL: https://docs.standardintelligence.com/sast-bandit-sonarqube-semgrep Breadcrumb: Security › DevSecOps Integration (S.8.3) › SAST (Bandit, SonarQube, Semgrep) Last updated: 28 Feb 2026 SAST (Bandit, SonarQube, Semgrep) AISDP module(s): Module 9 (Robustness and Cybersecurity), Module 2 (Development Process) Regulatory basis: Article 15 Static Application Security Testing (SAST) scans source code for common vulnerability patterns, including injection flaws, authentication weaknesses, and insecure defaults. Bandit provides Python-specific security analysis; SonarQube provides multi-language analysis with quality and security rules; Semgrep provides pattern-based scanning with custom rule support. SAST runs in the CI pipeline for every code change and blocks merges if critical or high-severity findings are identified. The AI-specific custom rules described in (demographic feature flagging, hardcoded threshold detection, missing logging detection, model registry bypass detection) extend the SAST scope to cover compliance-relevant patterns unique to high-risk AI systems. SAST findings are tracked in the vulnerability management register alongside findings from other scanning tools. The remediation SLAs from apply. SAST scan results are retained as Module 9 evidence. Key outputs SAST integration in the CI pipeline (Bandit, SonarQube, Semgrep) AI-specific custom rules extending standard SAST coverage Merge blocking on critical/high findings Module 9 and Module 2 AISDP evidence --- ## SBOM Generation — CycloneDX/SPDX with ML Components URL: https://docs.standardintelligence.com/sbom-generation-cyclonedxspdx-with-ml-components Breadcrumb: Security › DevSecOps Integration (S.8.3) › SBOM Generation — CycloneDX/SPDX with ML Components Last updated: 28 Feb 2026 SBOM Generation — CycloneDX/SPDX with ML Components AISDP module(s): Module 9 (Robustness and Cybersecurity), Module 3 (Architecture and Design) Regulatory basis: Article 15 , Annex IV (2) SBOM generation is automated by the engineering team and integrated into the CI pipeline. Syft scans container images and code repositories, producing an SBOM in CycloneDX or SPDX format. CycloneDX is the more ML-friendly format: it supports component types beyond software libraries, including machine learning models, datasets, and services. CycloneDX's ML extension allows the SBOM to reference model artefacts with their provenance metadata. For ML systems, the SBOM extends beyond traditional software dependencies to include the ML framework version (TensorFlow, PyTorch, scikit-learn), pre-trained model components (base models, embedding models, tokenisers), and external API dependencies (third-party model APIs, data enrichment services). The complete SBOM, covering both software and ML components, is stored by the Conformity Assessment Coordinator in the evidence register and updated on every deployment. The SBOM serves three compliance functions: vulnerability management (input to scanning tools), licence compliance (input to licence analysis), and provenance documentation (Annex IV evidence). The SBOM is attached to the container image as a cosign attestation, linking it to the specific image version. covers the per-build SBOM as a CI/CD artefact; this article addresses the generation process and ML-specific extension. Key outputs Automated SBOM generation (Syft) in CycloneDX or SPDX format ML-specific component inclusion via CycloneDX ML extension Cosign attestation linking SBOM to container image Module 9 and Module 3 AISDP evidence --- ## SBOM (Per Build) URL: https://docs.standardintelligence.com/sbom-per-build Breadcrumb: Security › Artefacts › SBOM (Per Build) Last updated: 28 Feb 2026 SBOM (Per Build) AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 , CRA The SBOM archive contains the SBOM generated for every build. Each SBOM is linked to the specific container image version it describes through cosign attestation, creating a verifiable chain between the deployed artefact and its dependency inventory. The archive enables retrospective vulnerability analysis: when a new CVE is disclosed, the organisation can search the SBOM archive to determine immediately which deployed versions (current and historical) are affected. This capability is essential for the incident response process and for CRA vulnerability management obligations. The SBOM archive also supports dependency evolution analysis, showing how the system's supply chain has changed over time. New dependencies introduced, dependencies removed, and version changes across the system's lifetime are all visible. The archive is retained for the ten-year period. Key outputs Per-build SBOM archive with cosign attestation linking Retrospective vulnerability search capability Dependency evolution analysis over the system's lifetime Module 9 AISDP evidence --- ## SBOM URL: https://docs.standardintelligence.com/sbom Breadcrumb: Security › Supply Chain Security › SBOM Last updated: 28 Feb 2026 SBOM — Standard & ML-Specific Components AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 , CRA The AI system's supply chain extends well beyond traditional software dependencies. The SBOM must capture five component categories. Software dependencies include ML frameworks (TensorFlow, PyTorch, scikit-learn), data processing libraries (Pandas, NumPy, Apache Spark), serving frameworks (Triton, TorchServe), and their transitive dependencies. Model components include pre-trained foundation models, embedding models, tokenisers, and any third-party model weights. Infrastructure components include container base images, operating system packages, and cloud service configurations. Data processing components include annotation tools, data labelling services, and data enrichment APIs. External service dependencies include third-party model APIs, vector database services, and managed ML platform services (SageMaker, Vertex AI, Azure ML). CycloneDX's ML extension supports all five categories, allowing the SBOM to reference model artefacts with their provenance metadata alongside traditional software components. For CRA-scoped products, the SBOM format and content may need to align with implementing guidance from the Commission. The AI System Assessor monitors guidance updates and adjusts the SBOM generation accordingly. addresses the generation process; this article addresses the content scope. Key outputs Five-category SBOM scope (software, model, infrastructure, data, services) CycloneDX ML extension for model component documentation CRA alignment where applicable Module 9 AISDP evidence SBOM — CI/CD Integration & Format AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15, CRA The SBOM is generated automatically by the CI pipeline for every build, ensuring that each release has a current, accurate dependency inventory. Syft scans the container image and code repository, producing the SBOM in CycloneDX or SPDX format. CycloneDX is the preferred format for ML systems because its ML extension supports model components, datasets, and services alongside traditional software libraries. SPDX (an ISO/IEC standard) may be required for CRA compliance depending on implementing guidance. The generated SBOM is stored as a pipeline artefact alongside the container image it describes. Cosign attestation links the SBOM to the specific container image version, creating a verifiable chain between the image and its dependency inventory. The pipeline fails if SBOM generation fails, ensuring that no release proceeds without an up-to-date dependency record. For CRA-scoped products, three alignment points require verification. Implementing guidance from the Commission may impose specific SBOM format requirements. The CRA requires ongoing SBOM updates throughout the product lifecycle; automated generation on every build satisfies this. The CRA may require SBOM delivery to deployers; Module 8 's transparency documentation should reference the delivery mechanism. The AI System Assessor monitors CRA guidance updates and adjusts the SBOM generation accordingly. Key outputs Automated SBOM generation per build in the CI pipeline CycloneDX or SPDX format with cosign attestation Pipeline failure on SBOM generation failure Module 9 AISDP evidence --- ## SCA/Dependency & Container Image Scanning URL: https://docs.standardintelligence.com/scadependency-and-container-image-scanning Breadcrumb: Security › DevSecOps Integration (S.8.3) › SCA/Dependency & Container Image Scanning Last updated: 28 Feb 2026 SCA/Dependency & Container Image Scanning AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 Software Composition Analysis (SCA) scans the project's dependency tree against known vulnerability databases (CVE, OSV). Container image scanning examines base images, installed packages, and system libraries for known vulnerabilities. Together, these tools protect against supply chain attacks and ensure that the deployed system does not contain known-vulnerable components. SCA tools (Snyk, Dependabot, pip-audit) run on every code commit via the CI pipeline. Container image scanning tools (Trivy, Grype, Snyk Container) run on every container build. Both layers also run periodically against deployed systems (daily or weekly) to catch vulnerabilities disclosed after deployment. The four-layer scanning architecture covers application dependencies, container images, infrastructure configurations, and operating system packages. Findings are prioritised by severity and tracked in the vulnerability management register. Critical and high-severity findings block merges (for CI scans) or trigger expedited remediation (for production scans). The scanning configuration, results, and remediation records are retained as Module 9 evidence. Key outputs SCA scanning on every commit; container scanning on every build Periodic scanning of deployed systems Four-layer scanning architecture Module 9 AISDP evidence --- ## Pre-Drafted Reporting Templates URL: https://docs.standardintelligence.com/security-artefacts--pre-drafted-reporting-templates Breadcrumb: Security › Artefacts › Pre-Drafted Reporting Templates Last updated: 28 Feb 2026 Pre-Drafted Reporting Templates AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 73 , NIS2 , DORA , CRA The pre-drafted reporting templates are retained as standalone Module 9 artefacts. The template set includes the shared incident fact sheet (common fields across all regimes), the DORA initial notification template (four-hour deadline), the NIS2 early warning template (24-hour deadline), the CRA early warning template (24-hour deadline), and the Article 73 initial report template (2/10/15-day deadline). Each template is pre-populated with static fields (entity identity, system identity, EU database registration reference) that do not change between incidents. Dynamic fields (timeline, scope, impact, containment actions) are clearly marked for completion during an incident. Regime-specific fields are clearly labelled. The templates are reviewed and updated annually by the Legal and Regulatory Advisor, and tested during tabletop exercises. Any change to reporting requirements, authority designations, or reporting portal specifications triggers a template update. Key outputs Pre-drafted templates for all applicable reporting regimes Static field pre-population with dynamic field markers Annual review and tabletop exercise testing Module 9 AISDP evidence --- ## Regulator Contact Register URL: https://docs.standardintelligence.com/security-artefacts--regulator-contact-register Breadcrumb: Security › Artefacts › Regulator Contact Register Last updated: 28 Feb 2026 Regulator Contact Register AISDP module(s): Module 9 (Robustness and Cybersecurity), Module 11 (Regulator Interaction) Regulatory basis: Article 73 , NIS2 , DORA , CRA The regulator contact register is retained as a standalone artefact. For each member state where the system is deployed, the register lists the AI Act market surveillance authority, the NIS2 competent authority or CSIRT (if applicable), the DORA competent financial authority (if applicable), the ENISA reporting portal for CRA notifications, and the contact details, preferred communication channels, and reporting portals for each. The register is maintained by the AI Governance Lead and updated whenever authority designations change. It is tested during annual tabletop exercises to confirm that the contact information is current and that the reporting portals are accessible. The register enables immediate reporting without research delays during a multi-regime incident. It sits alongside the regulator engagement documentation described above. Key outputs Per-jurisdiction, per-regime authority contact register Annual testing during tabletop exercises Immediate reporting enablement without research delays Module 9 and Module 11 AISDP evidence --- ## Security Artefacts URL: https://docs.standardintelligence.com/security-artefacts Breadcrumb: Security › Artefacts Last updated: 28 Feb 2026 Threat Model (Living Document) Cross-Regulatory Mapping Tables CRA Scope Determination & Product Classification DORA Third-Party Register & Risk Assessments Adversarial ML Test Results Penetration Test Reports Vulnerability Management Register SBOM (Per Build) Red Team Exercise Reports Incident Response Plan with Decision Tree Pre-Drafted Reporting Templates Regulator Contact Register Security Code Review Records Supply Chain Risk Assessments Module 9 Test Summary Table --- ## Security Code Review Records URL: https://docs.standardintelligence.com/security-code-review-records Breadcrumb: Security › Artefacts › Security Code Review Records Last updated: 28 Feb 2026 Security Code Review Records AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 The security code review records archive contains the documentation from each manual security code review. Each record identifies the component reviewed, the review date, the reviewer identity, the checklist used, the findings (including severity and affected code locations), and the remediation status. The archive demonstrates that security-critical components (authentication logic, model serving code, data validation, logging implementation, cryptographic implementations) receive human review beyond automated scanning. The review cadence (annually and on modification) is evidenced by the record dates. Findings from manual reviews are tracked in the vulnerability management register alongside automated findings, ensuring a single tracking mechanism with consistent severity classification and remediation SLAs. The archive is retained for the ten-year period. Key outputs Per-review documentation (component, reviewer, checklist, findings) Coverage evidence for security-critical components Findings tracked in the vulnerability management register Module 9 AISDP evidence --- ## Supply Chain Risk Assessments URL: https://docs.standardintelligence.com/supply-chain-risk-assessments Breadcrumb: Security › Artefacts › Supply Chain Risk Assessments Last updated: 28 Feb 2026 Supply Chain Risk Assessments AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 , DORA Articles 28–30 The supply chain risk assessment archive contains the vendor risk assessments for every third-party component, the ongoing monitoring records, and the sentinel test results that verify provider behaviour stability. Each assessment is dated, version-controlled, and linked to the corresponding entry in the third-party register. The archive enables trend analysis of supply chain risk: are providers improving or degrading their security posture? Are new dependencies introducing concentration risk? The trend data informs the annual supply chain risk reassessment. For DORA-scoped entities, the archive also contains the DORA-specific contractual provision documentation, the concentration risk assessments, and the critical provider contingency plans. The archive is retained for the ten-year period. Key outputs Vendor risk assessment archive per third-party component Ongoing monitoring records and sentinel test results DORA-specific documentation where applicable Module 9 AISDP evidence --- ## Supply Chain Security URL: https://docs.standardintelligence.com/supply-chain-security Breadcrumb: Security › Supply Chain Security (S.8.7) Last updated: 28 Feb 2026 Supply chain security addresses the risks introduced by third-party components: open-source libraries, pre-trained models, commercial APIs, and cloud infrastructure services. SBOM management generates and maintains software bills of materials in CycloneDX or SPDX format with ML-specific component metadata. Dependency management enforces version pinning, signature verification, and continuous vulnerability scanning . Third-party model provider assessment applies the AI Act's model origin risk framework alongside DORA 's third-party risk requirements. The DORA third-party register maintains the structured register of ICT third-party service providers required for financial services deployers. ℹ This section corresponds to the Supply Chain Security section and feeds primarily into AISDP Module 9 (Robustness and Cybersecurity). --- ## Test Result Mapping URL: https://docs.standardintelligence.com/test-result-mapping Breadcrumb: Security › Testing Programme › Test Result Mapping Last updated: 28 Feb 2026 Summary Table per Test Type AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 All cybersecurity testing produces a summary table that maps each test type to its most recent execution date, the scope covered, the number of findings by severity, the remediation status, and the next scheduled execution date. This summary provides the governance team and assessors with a single-page view of the cybersecurity testing programme's status. The table should cover penetration testing , vulnerability scanning , adversarial ML testing , additional threat-specific testing, and red team exercises . For each test type, the table indicates whether the testing is current (executed within the scheduled cadence), whether findings remain open, and what the overall risk posture is. The summary table is updated after each test execution and reviewed by the AI Governance Lead at the quarterly governance review. It is the primary navigation aid for Module 9 evidence: an assessor starts with the summary table, identifies areas of interest, and navigates to the detailed reports. Key outputs Single-page summary table covering all cybersecurity test types Per-test-type status (execution date, findings, remediation, next scheduled) Quarterly governance review Module 9 AISDP evidence Detailed Reports in Evidence Pack AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 Behind the summary table, detailed test reports and remediation records are maintained in the evidence pack with immutable timestamps. Each detailed report captures the test methodology, the scope, the specific findings with evidence (screenshots, logs, reproduction steps), the severity classification, and the recommended remediation. Remediation records capture the action taken, the verification method, and the verification date. The evidence pack is organised by test type and date, enabling rapid retrieval for conformity assessment , market surveillance , or incident investigation. Each report carries a hash for integrity verification, ensuring that the report has not been modified since it was produced. The retention period is ten years from the system's placement on the market. The Conformity Assessment Coordinator maintains an index of all test reports, linked to the summary table and the threat model . This three-layer structure (summary table, detailed reports, threat model) provides navigable, auditable evidence of the cybersecurity testing programme's coverage and effectiveness. Key outputs Detailed test reports per test execution with immutable timestamps Remediation records per finding Hash-based integrity verification Module 9 evidence pack --- ## Third-Party Model Provider Assessment URL: https://docs.standardintelligence.com/third-party-model-provider-assessment Breadcrumb: Security › Supply Chain Security › Third-Party Model Providers Last updated: 28 Feb 2026 AI Act Model Origin Risk Assessment AISDP module(s): Module 3 (Architecture and Design), Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 , Annex IV Every third-party model component undergoes a vendor risk assessment before adoption, proportionate to its criticality. For foundation model providers, the assessment covers the provider's data governance practices (training data composition, personal data inclusion, copyright compliance), security certifications (SOC 2, ISO 27001), data handling commitments (whether inference inputs are used for training, retention periods), contractual commitments regarding model versioning and change notification, and incident response capabilities. For data providers, the assessment covers data provenance and licensing, data quality controls, data handling and security practices, and compliance with applicable data protection legislation. For embedding model and tokeniser providers, the assessment focuses on provenance verification, version stability, and the potential for silent behavioural changes. The model origin risk assessment integrates with the model selection process described above. The selection record should document both the functional evaluation (fitness for intended purpose, performance characteristics, architectural suitability) and the security/compliance evaluation (vendor risk, supply chain exposure, contractual coverage). The security team retains vendor risk assessments as Module 9 evidence, reviewed annually and re-conducted when the vendor's service scope or security posture changes materially. Key outputs Pre-adoption vendor risk assessment per third-party model component Combined functional and security/compliance evaluation Annual review with change-triggered re-assessment Module 3 and Module 9 AISDP evidence DORA Third-Party Risk AISDP module(s): Module 9 (Robustness and Cybersecurity), Module 3 (Architecture and Design) Regulatory basis: DORA Articles 28–30 Financial entities subject to DORA face more prescriptive third-party risk management requirements than the AI Act alone imposes. DORA requires a comprehensive register of all ICT third-party service providers, pre-contractual risk assessments, specific contractual clauses, and ongoing monitoring of providers classified as critical. For AI systems consuming third-party model APIs, the model provider is an ICT third-party service provider under DORA. The model selection record must therefore satisfy DORA's pre-contractual assessment requirements in addition to the AI Act's model origin risk analysis. The DORA risk assessment covers financial stability, business continuity, security certifications, data handling, and concentration risk. Concentration risk is particularly relevant for AI systems: if multiple critical financial services depend on the same foundation model provider, provider failure affects all of them simultaneously. Where a financial entity designates an AI model provider as a critical ICT third-party service provider, DORA's enhanced oversight requirements apply: more intensive ongoing monitoring, enhanced contractual protections, and contingency planning for provider failure. Module 3 should address the contingency plan, including multi-provider strategies or fallback to internally hosted models. If the system is not subject to DORA, this article is documented as not applicable. Key outputs DORA-compliant pre-contractual risk assessment per provider Concentration risk assessment for shared model providers Contingency planning for critical provider failure Module 9 and Module 3 AISDP documentation Contractual Provisions AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15, DORA Articles 28–30 Contractual provisions with third-party providers address six domains. Audit rights grant the organisation (and, where applicable, the financial supervisor) the right to audit the provider's security practices, data handling, and compliance controls. Security SLAs define the provider's commitments regarding availability, incident response, vulnerability management , and security certifications. Data location provisions specify where the provider stores and processes the organisation's data, ensuring compliance with EU data residency requirements. Sub-outsourcing restrictions require the provider to notify the organisation of any sub-processing arrangements and obtain approval before engaging sub-processors that handle the organisation's data. Exit strategy provisions define the process for transitioning away from the provider, including data portability, model artefact return, and transition timeline commitments. For DORA-scoped systems, the contractual provisions must satisfy the specific requirements of Articles 28–30. DORA requires that the contract address the right to terminate in the event of significant performance shortfalls, the provider's obligation to cooperate with the financial supervisor, and the provider's obligation to participate in the entity's resilience testing programme. The contractual provisions are documented in Module 9. Key outputs Six-domain contractual framework (audit, SLAs, data location, sub-outsourcing, exit, DORA-specific) DORA-compliant clauses where applicable Provider notification and approval requirements Module 9 AISDP documentation Ongoing Provider Monitoring AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 Supply chain risk does not remain static. Foundation model providers update their models, sometimes changing behaviour in ways that affect downstream systems. Data providers may alter their data collection practices or experience data breaches. Ongoing provider monitoring ensures that changes in the supply chain are detected, assessed, and addressed before they affect the system's compliance posture. Four monitoring activities are required. Subscribing to the provider's changelog via RSS, webhook, or email notifications detects version changes, feature modifications, and deprecation announcements. Running sentinel tests at regular intervals detects behavioural changes that the provider may not announce. Reviewing the provider's terms of service periodically detects material changes to data handling, availability commitments, or liability terms. Tracking the provider's security posture through incident disclosures, compliance certifications, and audit reports detects security degradation. The Technical SME assesses any material change for its impact on the downstream system's compliance profile. A provider that changes its content filtering, modifies its API behaviour, or retrains its model may alter the downstream system's outputs without any change to the downstream system's code. Material changes are documented in the risk register with an impact assessment. The MITRE ATLAS navigator provides a structured way to track the evolving threat landscape for AI supply chain attacks. Key outputs Four-activity monitoring programme (changelog, sentinel tests, ToS review, security tracking) Material change impact assessment documented in the risk register MITRE ATLAS threat landscape tracking Module 9 AISDP evidence --- ## Threat Actor Profiles URL: https://docs.standardintelligence.com/threat-actor-profiles Breadcrumb: Security › Threat Modelling › Attack Surfaces & Actors › Threat Actor Profiles Last updated: 28 Feb 2026 Threat Actors AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 The threat model must identify the threat actors whose capabilities, motivations, and access levels shape the threat landscape. Four primary categories are identified relevant to high-risk AI systems. External attackers range from opportunistic attackers exploiting known vulnerabilities to sophisticated adversaries conducting targeted campaigns against the AI system. Their motivations may include financial gain, competitive intelligence, ideological disruption, or state-sponsored espionage. Malicious insiders have legitimate access to one or more system components and can exploit that access to modify training data, tamper with model artefacts, exfiltrate intellectual property, or sabotage the system's outputs. Compromised deployers have legitimate access to the system through the deployer relationship but may use that access for purposes beyond the intended scope, including model extraction or systematic querying. Adversarial users interact with the system through its intended interfaces but submit inputs designed to exploit the model's behaviour, such as adversarial examples, prompt injection , or systematic probing to discover the model's decision boundaries. Each threat actor category has different capabilities, different access levels, and different motivations. The threat model should assess each identified threat against the relevant actor categories to determine realistic likelihood scores. An attack that requires insider access scores differently from one that can be executed by an anonymous external attacker. Key outputs Threat actor profiles with capabilities, motivations, and access levels Mapping of threats to relevant actor categories Likelihood scoring informed by actor capability assessment Module 9 AISDP documentation --- ## Threat Model (Living Document) URL: https://docs.standardintelligence.com/threat-model-living-document Breadcrumb: Security › Artefacts › Threat Model (Living Document) Last updated: 28 Feb 2026 Threat Model (Living Document) AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 The threat model is the primary security artefact for Module 9. It is a living document, version-controlled in the documentation repository, combining the methodology (STRIDE + MITRE ATLAS + OWASP LLM within a PASTA framework), the attack surface inventory (eight categories), the threat actor profiles (four categories), the enumerated threats with risk scores, the mitigations for each threat above the risk acceptance threshold, and the residual risk s. The threat model is produced using structured tooling (IriusRisk, OWASP Threat Dragon, or equivalent) and maintained by the Technical SME. It is reviewed annually and updated whenever the system's architecture, data sources, deployment context, or threat landscape changes materially. The threat model feeds directly into the cybersecurity testing programme: every identified threat should be exercised by at least one test. The threat model is retained for the ten-year period. Each version is preserved, enabling an assessor to understand how the threat landscape evolved over the system's lifetime and how the organisation responded. Key outputs Living threat model document (version-controlled, structured tooling) Annual review with change-triggered updates Ten-year retention with version history Module 9 AISDP evidence --- ## Threat Modelling — Attack Surfaces & Actors URL: https://docs.standardintelligence.com/threat-modelling--attack-surfaces-and-actors Breadcrumb: Security › Threat Modelling › Attack Surfaces & Actors Last updated: 28 Feb 2026 This section covers the following topics: Attack Surface Identification Threat Actor Profiles --- ## Threat Modelling — Methodology URL: https://docs.standardintelligence.com/threat-modelling--methodology Breadcrumb: Security › Threat Modelling › Methodology Last updated: 28 Feb 2026 STRIDE (Traditional Software Threats — Six Categories) AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 STRIDE is a threat classification framework that categorises traditional software threats into six categories: Spoofing (impersonating a user or system), Tampering (modifying data or code), Repudiation (denying an action), Information Disclosure (exposing protected data), Denial of Service (making a system unavailable), and Elevation of Privilege (gaining unauthorised access). For high-risk AI systems, STRIDE provides the baseline threat taxonomy covering the system's traditional software attack surface. Each attack surface point (data ingestion APIs, operator interfaces, administrative endpoints, inter-service communication, model serving infrastructure) is assessed against all six STRIDE categories. However, STRIDE was not designed for machine learning systems and does not address threats that exploit the model's learning and inference processes. Data poisoning , adversarial examples, model extraction , and prompt injection fall outside STRIDE's scope. The threat modelling exercise therefore uses STRIDE as one component of a combined framework. STRIDE covers the traditional software threats; MITRE ATLAS covers the AI-specific threats; OWASP Top 10 for LLM Applications provides a focused checklist for LLM-based systems. The combined taxonomy ensures comprehensive coverage. The Technical SME documents the threat model as a structured artefact using IriusRisk or OWASP Threat Dragon and maintains it as a living document. Key outputs STRIDE analysis per attack surface point Structured threat model artefact (IriusRisk or OWASP Threat Dragon) Integration with MITRE ATLAS and OWASP LLM Top 10 Module 9 AISDP documentation MITRE ATLAS (AI-Specific Threat Taxonomy — Seven Phases) AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 MITRE ATLAS (Adversarial Threat Landscape for AI Systems) provides the taxonomic foundation for AI-specific threats. Analogous to MITRE ATT&CK for traditional cyber threats, ATLAS catalogues real-world adversarial techniques against ML systems, organised into seven phases: reconnaissance (discovering model architecture and training data characteristics), resource development (building adversarial capabilities), initial access (gaining query access to the model), execution (submitting adversarial inputs), persistence (maintaining access or influence), evasion (avoiding detection), and impact (the actual harm achieved). Each technique in the ATLAS matrix has real-world case studies and documented mitigations. The taxonomy provides a structured vocabulary for discussing AI-specific threats and ensures that the threat model covers the full range of adversarial techniques, not only those the team has encountered or read about. ATLAS is particularly valuable for threat modelling sessions involving participants with varying levels of ML security expertise, as the matrix provides a systematic checklist. For the AISDP, the threat modelling exercise enumerates threats at each attack surface point using the combined STRIDE + ATLAS taxonomy. Threats above the risk acceptance threshold (scored using the likelihood × impact methodology) receive documented mitigations mapped to the compensating controls. The ATLAS-derived threats and their mitigations feed directly into Module 9. Key outputs ATLAS-based enumeration of AI-specific threats per attack surface Risk scoring (likelihood × impact) for each identified threat Mitigation mapping to Section 8 compensating controls Module 9 AISDP documentation OWASP Top 10 for LLM Applications AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 The OWASP Top 10 for LLM Applications (2025, v2.0) provides a focused checklist for systems incorporating large language models. It covers ten threat categories: Prompt Injection (LLM01), Sensitive Information Disclosure (LLM02), Supply Chain (LLM03), Data and Model Poisoning (LLM04), Improper Output Handling (LLM05), Excessive Agency (LLM06), System Prompt Leakage (LLM07), Vector and Embedding Weaknesses (LLM08), Misinformation (LLM09), and Unbounded Consumption (LLM10). Each category maps to specific AISDP modules. Prompt injection, improper output handling, and system prompt leakage affect Module 9. Data and model poisoning affects both Module 4 ( Data Governance ) and Module 9. Excessive agency has significant overlaps with Module 7 (Human Oversight) and Module 1 (System Identity). Vector and embedding weaknesses are particularly relevant for RAG-based systems, overlapping with the RAG-specific governance in and Module 4. Misinformation overlaps with Module 7 (Human Oversight) through automation bias countermeasures. Detailed coverage of each category appears in, with attack vectors, practical control strategies, and documentation requirements for the AISDP. Two categories new to the 2025 edition, system prompt leakage and vector and embedding weaknesses, should receive specific threat model coverage for any system incorporating LLMs or RAG architectures. For systems that do not incorporate LLMs, several categories remain relevant to any ML system: data and model poisoning, supply chain vulnerabilities, sensitive information disclosure, and unbounded consumption. The Technical SME should assess which categories apply to the specific system and document the determination in the threat model. Key outputs Assessment of each OWASP LLM category against the specific system Mapping of applicable categories to AISDP modules Determination and rationale for non-applicable categories Module 9 AISDP documentation PASTA (Process for Attack Simulation and Threat Analysis) AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 PASTA (Process for Attack Simulation and Threat Analysis) is a risk-centric threat modelling methodology that provides a structured seven-stage process: define objectives, define the technical scope, application decomposition, threat analysis, vulnerability analysis, attack modelling and simulation, and risk and impact analysis. Unlike STRIDE (which is threat-classification-focused) and ATLAS (which is taxonomy-focused), PASTA emphasises the attacker's perspective and the business impact of each threat. PASTA's risk-centric approach aligns well with the EU AI Act's emphasis on risk management ( Article 9 ). Each identified threat is assessed not only for technical severity but for its potential impact on fundamental rights, affected persons, and the system's compliance posture. This business-impact dimension is often absent from purely technical threat modelling exercises. For AISDP purposes, PASTA can serve as the overarching methodology that organises the threat modelling exercise, with STRIDE and ATLAS providing the threat taxonomies used within PASTA's threat analysis stage. The four-stage approach described in (scope attack surfaces, enumerate threats using STRIDE + ATLAS, assess risk using likelihood × impact scoring, define mitigations) is compatible with PASTA's structure. The choice of methodology should be documented in Module 9 alongside the resulting threat model. Key outputs PASTA methodology applied to the AI system's threat model Integration of STRIDE and ATLAS within the PASTA framework Risk-centric assessment with business and fundamental rights impact Module 9 AISDP documentation --- ## Threat Modelling Artefacts URL: https://docs.standardintelligence.com/threat-modelling-artefacts Breadcrumb: Security › Threat Modelling › Artefacts Last updated: 28 Feb 2026 Threat Model Document (Living) AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 The threat model document is the central artefact produced by the threat modelling exercise described above. It is a living document, version-controlled and reviewed whenever the system's architecture, data sources, deployment context, or threat landscape changes materially. At minimum, the document is reviewed annually, aligned with the risk register review cadence. The document should include the methodology used (STRIDE + ATLAS + OWASP LLM , within a PASTA framework or equivalent), the attack surface inventory, the threat actor profiles, the enumerated threats with risk scores, the mitigations for each threat above the risk acceptance threshold, and the residual risk s for threats where full mitigation is not achievable. The threat model should be structured so that each identified threat maps to a specific control in the AISDP and to a specific test in the cybersecurity testing programme. The Technical SME produces the threat model using structured tooling (IriusRisk, OWASP Threat Dragon, or equivalent) and stores it in the version control system. The threat model feeds directly into the cybersecurity testing programme: penetration testing and adversarial ML testing should exercise the threats identified in the model. The document is Module 9's primary reference and is reviewed by the notified body or competent authority during conformity assessment . Key outputs Structured threat model document (living, version-controlled) Methodology, attack surfaces, threat actors, threats, mitigations, residual risks Annual review schedule with change-triggered updates Module 9 AISDP evidence Per-Threat Control Mapping AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 The per-threat control mapping is a structured table or matrix that links each identified threat from the threat model to the specific controls that mitigate it, the AISDP module that documents the control, and the test that verifies the control's effectiveness. This mapping serves as the navigable index between the threat model and the rest of the AISDP. The mapping enables an assessor to trace from any threat to its mitigations, from any mitigation to its documentation, and from any test to the threat it exercises. It also enables gap analysis: a threat without a mapped control is an unmitigated risk; a control without a mapped test is an unverified mitigation. The mapping should cover both traditional software threats (STRIDE-derived) and AI-specific threats (ATLAS and OWASP LLM-derived). The mapping is maintained alongside the threat model and updated whenever threats, controls, or tests change. It feeds into the cybersecurity testing programme's scope definition and the conformity assessment's evidence trail. The per-threat control mapping is retained as Module 9 evidence. Key outputs Threat-to-control-to-test mapping matrix Gap analysis for unmitigated threats and unverified controls Maintained alongside the threat model document Module 9 AISDP evidence Bow-Tie Diagrams (Top 10 Risks) AISDP module(s): Module 9 (Robustness and Cybersecurity), Module 6 (Risk Management System) Regulatory basis: Article 9 , Article 15 Bow-tie diagrams provide a visual representation of the relationship between threats, barriers (preventive controls), the hazardous event, consequences, and recovery controls. For the top ten risks identified in the threat model, bow-tie diagrams present the risk management story in a format accessible to both technical and governance audiences. Each diagram shows the threat sources on the left, the preventive barriers (controls that reduce the likelihood of the threat materialising), the central hazardous event, the consequence pathways on the right, and the recovery barriers (controls that reduce the impact if the event occurs). This structure makes the defence-in-depth strategy visible: an assessor can see how many independent barriers stand between each threat and its consequences, and what happens if one barrier fails. The bow-tie diagrams complement the per-threat control mapping by providing a visual, narrative format that is more accessible for governance reviews and stakeholder communication. They are particularly useful for communicating risk management decisions to the AI Governance Lead , Legal and Regulatory Advisor, and senior stakeholders who may not engage with the full threat model detail. The diagrams are version-controlled and updated alongside the threat model. Key outputs Bow-tie diagrams for the top ten risks from the threat model Visual mapping of preventive and recovery barriers per risk Governance-accessible format for risk communication Module 6 and Module 9 AISDP evidence --- ## Threat Modelling URL: https://docs.standardintelligence.com/threat-modelling Breadcrumb: Security › Threat Modelling (S.8.3, S.8.4) Last updated: 28 Feb 2026 Threat modelling for AI systems combines traditional software security analysis with AI-specific threat taxonomies. Methodology applies STRIDE for conventional software threats, MITRE ATLAS for AI-specific attack patterns, the OWASP Top 10 for LLM Applications, and PASTA for attack simulation and threat analysis. Attack surface identification catalogues eight exposure categories spanning training data, model artefacts, inference APIs, human oversight interfaces, feature stores, vector databases, configuration stores, and monitoring infrastructure. Threat actor profiling assesses capabilities and motivation across four actor categories. AI-specific threat categor ies provide per-threat analysis of attack vectors and controls for each of the OWASP LLM Top 10 plus additional AI-specific threats including adversarial examples, model inversion, and federated training risks. The section concludes with artefacts : the living threat model document, per-threat control mappings, and bow-tie diagrams for the top ten risks. ℹ This section corresponds to the Threat Modelling section and feeds primarily into AISDP Module 9 (Robustness and Cybersecurity). --- ## Training Data Security URL: https://docs.standardintelligence.com/training-data-security Breadcrumb: Security › Data Security in ML Pipelines (S.8.2.3) › Training Data Security Last updated: 28 Feb 2026 Training Data Security AISDP module(s): Module 4 (Data Governance), Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 10 , Article 15 Training datasets often contain the most sensitive data in the ML pipeline. Access is restricted by the security team to authorised data engineers and model developers, with access logged and reviewed. Training data is encrypted at rest and in transit. Where training data includes personal data, the encryption key management aligns with the GDPR retention and deletion requirements documented in the data governance section. Immutable audit logs record every access to training data, enabling the organisation to demonstrate that data handling complied with the documented governance framework. The audit trail captures read, write, and delete events with the user identity, timestamp, and scope of access. This audit trail is essential both for data poisoning detection and for GDPR accountability. The training data security controls are documented jointly in Module 4 (data governance) and Module 9 (cybersecurity). The security team conducts quarterly access reviews to confirm that only currently authorised personnel have access, and that access permissions are proportionate to each person's role. Key outputs Access controls restricting training data to authorised personnel Encryption at rest and in transit Immutable access audit trail Module 4 and Module 9 AISDP documentation --- ## Vector Database Security — Adversarial Document Injection URL: https://docs.standardintelligence.com/vector-database-security-adversarial-document-injection Breadcrumb: Security › Data Security in ML Pipelines (S.8.2.3) › Vector Database Security — Adversarial Document Injection Last updated: 28 Feb 2026 Vector Database Security — Adversarial Document Injection AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 Adversarial document injection is a novel attack surface introduced by vector databases. An attacker who can insert documents into the knowledge base can craft documents designed to be retrieved for specific target queries, allowing indirect manipulation of the LLM's output without modifying the model itself. A document containing misleading safety information, crafted to be semantically similar to common queries about a product, would be retrieved and presented to the LLM as authoritative context. Controls include strict access control on the indexing pipeline, content validation and provenance verification for all documents entering the knowledge base, anomaly detection on newly indexed documents (flagging documents whose embeddings are unusually close to high-frequency queries), and monitoring retrieval patterns for sudden changes in which documents are being retrieved for stable queries. The adversarial document injection threat and its controls are documented in the threat model and in Module 9. Testing should include attempts to inject crafted documents through the indexing pipeline and verification that the controls detect and reject them. Key outputs Content validation and provenance verification on all indexed documents Anomaly detection on newly indexed document embeddings Retrieval pattern monitoring for sudden changes Module 9 AISDP documentation --- ## Vector Database Security — Bulk Extraction Monitoring URL: https://docs.standardintelligence.com/vector-database-security-bulk-extraction-monitoring Breadcrumb: Security › Data Security in ML Pipelines (S.8.2.3) › Vector Database Security — Bulk Extraction Monitoring Last updated: 28 Feb 2026 Vector Database Security — Bulk Extraction Monitoring AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 An attacker with query access to the vector database could systematically probe the embedding space to reconstruct or infer the contents of the knowledge base. Sequential queries that systematically scan the embedding space, or queries with unusual patterns that suggest automated probing, indicate a bulk extraction attempt. Controls include rate limiting on vector search queries, anomaly detection on query patterns (particularly sequential queries that explore the embedding space systematically), and audit logging of all queries. The monitoring should track per-consumer query volumes and patterns, flagging consumers whose behaviour deviates from the established baseline. Bulk extraction monitoring complements the model theft controls described above. Where the model itself is protected by rate limiting and network segmentation, the knowledge base requires its own parallel protections. The extraction monitoring configuration and alerting thresholds are documented in Module 9. Key outputs Rate limiting on vector search queries Anomaly detection on query patterns for extraction behaviour Per-consumer query monitoring against established baselines Module 9 AISDP documentation --- ## Vector Database Security — Write/Read Separation URL: https://docs.standardintelligence.com/vector-database-security-writeread-separation Breadcrumb: Security › Data Security in ML Pipelines (S.8.2.3) › Vector Database Security — Write/Read Separation Last updated: 28 Feb 2026 Vector Database Security — Write/Read Separation AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 Systems using retrieval-augmented generation, semantic search, or embedding-based matching store dense vector embeddings in specialised databases (Pinecone, Weaviate, Qdrant, Milvus, pgvector, Chroma). Access control must enforce separation between write access (used during knowledge base indexing) and read access (used during inference-time retrieval). The indexing pipeline authenticates as a dedicated service identity with write permissions; the inference service authenticates as a separate identity with read-only permissions. Administrative operations (index deletion, schema changes, bulk exports) require elevated privileges and produce audit log entries. This separation ensures that a compromised inference service cannot modify the knowledge base, and a compromised indexing pipeline cannot exfiltrate query patterns. Encryption at rest protects stored embeddings. As discussed in Article 97 ( GDPR status of stored embeddings), embeddings derived from documents containing personal data may themselves constitute personal data under GDPR. The encryption, retention, and deletion requirements that apply to the source documents therefore extend to the embeddings. The vector database security configuration is documented in Module 9. Key outputs Write/read access separation with dedicated service identities Elevated privilege requirements for administrative operations Encryption at rest for stored embeddings Module 9 AISDP documentation --- ## Vulnerability Management Register URL: https://docs.standardintelligence.com/vulnerability-management-register Breadcrumb: Security › Artefacts › Vulnerability Management Register Last updated: 28 Feb 2026 Vulnerability Management Register AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 The vulnerability management register is retained as a standalone Module 9 artefact. It provides the complete history of every vulnerability discovered across all scanning layers and testing activities, together with the remediation status, SLA compliance, and exception records. The register enables three compliance functions. Current posture assessment shows the number of open vulnerabilities by severity and the remediation timeline for each. Trend analysis shows whether the vulnerability discovery rate, remediation speed, and SLA compliance are improving or degrading over time. Exception audit shows every vulnerability that was accepted through the exception process, with the justification, compensating controls, and expiry date. The register is the primary input for the Module 9 compliance metrics reported to the governance team. The register is retained for the ten-year period. Key outputs Complete vulnerability history with remediation tracking Current posture, trend analysis, and exception audit capability Module 9 compliance metrics input Module 9 AISDP evidence --- ## Vulnerability Management URL: https://docs.standardintelligence.com/vulnerability-management Breadcrumb: Security › Cybersecurity Foundations › Vulnerability Management Last updated: 28 Feb 2026 Continuous Scanning & Remediation SLAs AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 Continuous vulnerability scanning covers all system components: application code dependencies, container images, infrastructure configurations, and operating system packages. Scans run both in the CI pipeline (catching vulnerabilities before deployment) and against production environments on a scheduled basis (catching vulnerabilities disclosed after deployment). Critical and high-severity vulnerabilities have documented remediation timelines. The recommended timelines are 72 hours for critical findings and 30 days for high-severity findings. These SLAs are documented in the AISDP and tracked in a vulnerability management register. The register records each vulnerability's identifier (CVE or equivalent), severity, affected component, discovery date, remediation deadline, remediation status, and the identity of the person responsible. The remediation SLAs are compliance commitments. A critical vulnerability that remains unpatched beyond the SLA is a non-conformity tracked in the non-conformity register . The vulnerability count by severity and the remediation status are reported to the governance team as Module 9 compliance metrics. The security team retains scan results and remediation records as Module 9 evidence. Key outputs Continuous scanning across all component layers Documented remediation SLAs (critical: 72 hours; high: 30 days) Vulnerability management register with tracking per finding Module 9 AISDP evidence --- ## Vulnerability Scanning URL: https://docs.standardintelligence.com/vulnerability-scanning Breadcrumb: Security › Testing Programme › Vulnerability Scanning Last updated: 28 Feb 2026 Continuous Automated Scanning — Four Layers AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 The vulnerability scanning programme operates across four layers, each targeting a distinct component category. Application dependency scanning (Snyk, Dependabot, pip-audit) runs on every code commit via the CI pipeline, alerting on vulnerabilities in the project's dependency tree. Container image scanning (Trivy, Grype, Snyk Container) runs on every container build and periodically on deployed images, catching vulnerabilities in base images and OS packages disclosed after the image was built. Infrastructure-as-code scanning (Checkov, tfsec, KICS) runs on every IaC change, catching security misconfigurations before deployment. Operating system scanning (Qualys, Nessus, OpenVAS) runs periodically on the deployed infrastructure, catching OS-level vulnerabilities that arise from unpatched systems or newly disclosed exploits. Each layer has a defined remediation SLA: 24–72 hours for critical findings, one to two weeks for high-severity findings, and the next planned maintenance window for medium-severity findings. The four-layer architecture ensures that no component category is a blind spot. Findings from all four layers feed into the unified vulnerability management register. Key outputs Four-layer scanning architecture (dependencies, containers, IaC, OS) Continuous scanning in CI pipeline and periodic production scanning Per-layer remediation SLAs Module 9 AISDP evidence CI Pipeline & Production Scanning AISDP module(s): Module 9 (Robustness and Cybersecurity), Module 2 (Development Process) Regulatory basis: Article 15 Vulnerability scanning operates on two cadences. CI pipeline scanning catches vulnerabilities before deployment: every code commit, container build, and infrastructure change triggers the relevant scanning layers. Findings that meet the blocking threshold (critical or high severity) prevent the build from proceeding, ensuring that known-vulnerable components do not reach production. Production scanning catches vulnerabilities disclosed after deployment. A vulnerability in a dependency that was clean at build time may be disclosed days, weeks, or months later. Daily or weekly scans of deployed container images, dependency manifests, and infrastructure configurations against continuously updated vulnerability databases detect these post-deployment disclosures. Snyk Monitor provides continuous monitoring of the deployed dependency tree, alerting within hours of a new disclosure. The two cadences are complementary. CI scanning prevents introduction; production scanning detects emergence. Both feed findings into the vulnerability management register with the same severity classification and remediation SLAs. The scanning configuration, cadence, and blocking thresholds are documented in Module 9 and Module 2. Key outputs CI pipeline scanning with merge/build blocking on critical/high findings Production scanning on a daily or weekly cadence Continuous monitoring (Snyk Monitor) for post-deployment disclosures Module 9 and Module 2 AISDP evidence Vulnerability Management Register AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 The vulnerability management register is a structured record that tracks every identified vulnerability across all scanning layers and testing activities. Each entry records the vulnerability identifier (CVE or equivalent), the severity (CVSS score), the affected component, the discovery source (which scanning tool or test identified it), the discovery date, the remediation deadline (derived from the severity-based SLA), the remediation status, the responsible person, and the verification date (when re-testing confirmed remediation). The register serves as the single source of truth for the organisation's vulnerability posture. The current vulnerability count by severity and the remediation status are reported to the governance team as Module 9 compliance metrics. Trends in vulnerability discovery and remediation rates provide leading indicators of the security programme's effectiveness. Vulnerabilities that are accepted through an exception process (where remediation is not feasible within the SLA) are documented with the exception justification, compensating controls, and an expiry date. The register is a Module 9 evidence artefact, retained for the ten-year period. Key outputs Structured vulnerability register with per-finding tracking Severity-based remediation SLAs and status tracking Exception documentation for accepted vulnerabilities Module 9 AISDP evidence --- ## Zero Trust Architecture URL: https://docs.standardintelligence.com/zero-trust-architecture Breadcrumb: Security › Cybersecurity Foundations › Zero Trust Architecture Last updated: 28 Feb 2026 Independent Service Authentication & Authorisation AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 A zero trust architecture assumes no implicit trust based on network location and verifies every access request regardless of its origin. For AI systems, this means that every service component, from data ingestion to model inference to post-processing, authenticates and authorises independently. The model serving layer should not trust the data pipeline simply because both run within the same VPC. Each component validates inputs against expected schemas, authenticates the calling service, and verifies authorisation before processing. Service identities are managed through SPIFFE/SPIRE, cloud-native workload identity, or equivalent mechanisms. Human identities flow through the entire request chain, so that audit logs capture which operator's action triggered which model inference. Session tokens carry the minimum claims necessary, and token lifetimes are short enough to limit the window of compromise. A model serving endpoint that normally processes fifty requests per second should trigger additional verification if it suddenly receives five hundred. The zero trust architecture, identity and access management framework, and continuous verification mechanisms feed into both Module 9 (cybersecurity) and Module 3 (architecture). Key outputs Independent authentication and authorisation per service component Service identity management (SPIFFE/SPIRE or cloud-native workload identity) Human identity propagation through the request chain Module 9 and Module 3 AISDP documentation Identity-Based Access (SPIFFE/SPIRE) AISDP module(s): Module 9 (Robustness and Cybersecurity), Module 3 (Architecture and Design) Regulatory basis: Article 15 In a zero trust architecture, identity-based access replaces network-based trust. Service identities managed through SPIFFE (Secure Production Identity Framework for Everyone) and SPIRE (SPIFFE Runtime Environment), cloud-native workload identity, or equivalent mechanisms authenticate each microservice in the AI system's architecture. Every component presents a cryptographically verifiable identity, eliminating the assumption that services within the same network are trustworthy. SPIFFE assigns each service a unique identity (a SPIFFE ID) and provides short-lived X.509 certificates or JWT tokens that the service uses to authenticate itself to other services. SPIRE manages the lifecycle of these identities: registration, attestation (verifying that the service is running on authorised infrastructure), certificate issuance, and rotation. For cloud-native deployments, managed workload identity services (AWS IAM Roles for Service Accounts, Azure Workload Identity, GCP Workload Identity) provide equivalent functionality integrated with the cloud provider's IAM layer. Human identities should flow through the entire request chain so that audit logs capture which operator's action triggered which model inference. Session tokens carry the minimum claims necessary, and token lifetimes are kept short enough to limit the window of compromise. The identity management architecture is documented in both Module 9 and Module 3. Key outputs Service identity management using SPIFFE/SPIRE or cloud-native workload identity Short-lived certificates with automated rotation Human identity propagation through request chains Module 9 and Module 3 AISDP documentation Microsegmentation AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 Microsegmentation at the workload level restricts lateral movement within the AI system's infrastructure. Even if an attacker compromises the feature engineering service, microsegmentation prevents them from reaching the model artefact store, the training data repository, or the logging infrastructure. This granular isolation limits the blast radius of any individual compromise to the affected component. The security team defines network policies as allowlists; any traffic not explicitly permitted is denied by default. Kubernetes NetworkPolicies provide pod-level enforcement, restricting communication between pods based on labels, namespaces, and ports. A service mesh (Istio or Linkerd) adds mutual TLS to every service-to-service connection, providing both authentication and encryption at the transport layer. The microsegmentation policy should reflect the AI system's architectural boundaries. The data ingestion layer communicates with the feature engineering layer; the feature engineering layer communicates with the model serving layer; the model serving layer communicates with the post-processing layer and the logging layer. Each of these communication paths is explicitly permitted; all other paths are denied. The policy is defined as code, version-controlled, and subject to the same review process as infrastructure changes. Key outputs Allowlist-based network policies per workload Kubernetes NetworkPolicies and/or service mesh enforcement Policy-as-code with version control and review Module 9 AISDP documentation Continuous Verification AISDP module(s): Module 9 (Robustness and Cybersecurity) Regulatory basis: Article 15 Continuous verification replaces one-time authentication. Access decisions are re-evaluated based on context: the requesting service's current security posture, the sensitivity of the requested resource, the time of day, and the anomaly score of the request pattern. A model serving endpoint that normally processes fifty requests per second should trigger additional verification if it suddenly receives five hundred. This principle extends beyond network access to data access, model artefact access, and administrative operations. A service account that normally reads feature data from the feature store should trigger an alert if it attempts to write to the model registry . An administrator who normally accesses the logging infrastructure during business hours should trigger additional authentication if access is attempted at 03:00. The continuous verification framework integrates with the SIEM to correlate access patterns with known baselines and flag anomalies. The framework is documented in Module 9, including the baseline definitions, the anomaly thresholds, and the escalation procedures for triggered alerts. Key outputs Context-based re-evaluation of access decisions Baseline definitions per service and user role Anomaly detection integration with SIEM Module 9 AISDP documentation --- # Governance --- ## Agile Adaptation URL: https://docs.standardintelligence.com/agile-adaptation Breadcrumb: Governance › Delivery › Agile Adaptation Last updated: 28 Feb 2026 Sprint-Level Compliance Activities AISDP module(s): All modules (incremental) Regulatory basis: Articles 8–15 The compliance framework integrates with agile practices rather than imposing a waterfall overlay. The Technical Owner embeds compliance activities in the sprint cadence as native tasks. Each sprint includes updating relevant AISDP modules for design decisions made during the sprint, running the full test suite (including fairness and robustness gates) as part of the definition of done, reviewing new risks identified during development and adding them to the risk register , and updating the evidence pack with artefacts produced during the sprint. The sprint retrospective includes a compliance dimension: what evidence was generated, what gaps remain, what risks were introduced. This cadence ensures that compliance evidence accumulates naturally through the development process rather than being assembled retrospectively under time pressure. Compliance tasks are visible in the sprint backlog, estimated alongside feature work, and tracked through the same workflow. A separate compliance workstream that runs in parallel but disconnected from the sprint cadence creates a documentation lag that compounds over multiple sprints. Key outputs Compliance tasks embedded in sprint backlog and definition of done Per-sprint AISDP updates and evidence pack additions Sprint retrospective compliance dimension Continuous evidence accumulation through development Incremental AISDP Assembly AISDP module(s): All 12 modules Regulatory basis: Articles 8–15, Annex IV The AI System Assessor assembles the AISDP incrementally throughout development, beginning from Phase 1. Module 1 (System Identity) is completed during Phase 1. Module 6 (Risk Management) is drafted during Phase 2 and updated continuously. Module 3 (Architecture) is populated during Phase 3 and refined as the architecture evolves. Module 4 ( Data Governance ) grows as the data engineering work progresses. By the time Phase 5 arrives, the AISDP should be substantially complete, requiring only final review and consistency checking. This incremental approach avoids the common failure mode of attempting to write the entire AISDP in the weeks before deployment, when time pressure leads to superficial documentation and missed requirements. The module-by-phase mapping provides a clear schedule: each module has a phase in which it is primarily authored, subsequent phases in which it is refined, and a final review in Phase 5. The Conformity Assessment Coordinator tracks module completion status against this mapping, flagging modules that fall behind schedule. Key outputs Module-by-phase authoring schedule Substantially complete AISDP by Phase 5 Completion status tracking by Conformity Assessment Coordinator Avoidance of last-minute documentation sprint Feature Flags Within Compliance Boundaries AISDP module(s): Module 2 (Development Process), Module 12 ( Post-Market Monitoring ) Regulatory basis: Article 3(23) Agile teams frequently use feature flags to deploy partially complete features behind toggles. For high-risk AI systems, feature flags operate within the compliance framework. A flag that enables a new model version, a new data source, or a new decision pathway is a system change that the AI System Assessor assesses against the substantial modification thresholds (Article 3(23)). Feature flag configuration is version-controlled alongside the system's code and configuration. The engineering team logs each flag activation in the deployment ledger . The assessment against substantial modification thresholds occurs before the flag is activated in production, not after; a flag activation that constitutes a substantial modification triggers the conformity re-assessment pathway. Feature flags that control non-AI aspects of the system (UI presentation, logging verbosity, non-model configuration) are managed through standard engineering governance and do not require compliance assessment. Key outputs Feature flags assessed against substantial modification thresholds before activation Version-controlled flag configuration Flag activations logged in deployment ledger Non-AI flags excluded from compliance assessment Continuous Conformity Assessment — CI/CD as Automated Checking AISDP module(s): Module 2 (Development Process), Module 6 (Risk Management System) Regulatory basis: Annex VI The organisation checks conformity continuously throughout development, spreading the assessment workload across the full lifecycle. The CI/CD pipeline 's quality gates provide automated continuous checking: every commit triggers static analysis , testing, and compliance verification. Manual assessment activities (documentation review, evidence verification) are conducted by the Conformity Assessment Coordinator at defined milestones rather than concentrated at the end. This approach reduces the risk of discovering fundamental non-conformities late in the development cycle when remediation is costly and time-constrained. A non-conformity identified in sprint 3 costs a fraction of the same non-conformity identified during Phase 5's formal assessment. The continuous checking model complements the formal Annex VI assessment; it does not replace it. The formal assessment in Phase 5 examines the complete AISDP against all requirements. Continuous checking ensures that the formal assessment is unlikely to uncover surprises. Key outputs Automated conformity checking through CI/CD quality gates Milestone-based manual assessment throughout development Early non-conformity detection reducing remediation cost Continuous checking complementing (not replacing) formal Annex VI assessment --- ## AI Governance Lead — Responsibilities & Authority URL: https://docs.standardintelligence.com/ai-governance-lead-responsibilities-and-authority Breadcrumb: Governance › Delivery › Organisational Roles › AI Governance Lead — Responsibilities & Authority Last updated: 28 Feb 2026 AI Governance Lead — Responsibilities & Authority AISDP module(s): All modules (approval authority) Regulatory basis: Article 17 The AI Governance Lead holds ultimate accountability for the organisation's AI compliance programme. Responsibilities include reviewing and approving the AISDP, signing the Declaration of Conformity , managing relationships with competent authorities, and having the authority to compel remediation, halt deployment, and allocate resources. The AI Governance Lead is accountable (A in RACI) for risk classification , risk assessment , FRIA , data governance , conformity assessment , PMM operation, serious incident reporting , and break-glass authorisation. The role should be held by a senior leader (CRO, CTO, or Head of AI Governance) with sufficient organisational authority to override commercial pressure when compliance requires it. For small organisations with five to ten AI systems, the AI Governance Lead may combine with other senior leadership responsibilities. For large enterprises with thirty or more systems, the role leads a dedicated AI Compliance Office reporting to the board or executive committee. Key outputs Ultimate accountability for AI compliance programme Declaration of Conformity signatory Authority to halt deployment and compel remediation RACI "A" across all compliance domains --- ## AI Office & European-Level Oversight URL: https://docs.standardintelligence.com/ai-office-and-european-level-oversight Breadcrumb: Governance › Regulator Interaction › AI Office Last updated: 28 Feb 2026 AI Office Functions AISDP module(s): Module 10 (Compliance Record) Regulatory basis: Articles 64–68, Article 88 The European AI Office coordinates the consistent application of the AI Act across member states. Its functions include developing guidelines, templates, and codes of practice; overseeing GPAI model compliance and enforcement under Article 88; managing the scientific panel of independent experts; administering the EU database ; and monitoring implementation across member states. For providers of high-risk systems, the AI Office's most relevant outputs are implementing acts, delegated acts, and guidance documents that clarify ambiguous requirements. The AI Office also has direct enforcement powers over GPAI model providers, creating a dual regulatory relationship for high-risk systems incorporating GPAI models: the national competent authorit y oversees the high-risk system, and the AI Office oversees the underlying model. Organisations should monitor the AI Office's publications systematically. Published guidelines carry significant interpretive weight; departing from them without documented justification is a compliance risk. Key outputs AI Office publication monitoring as standing activity Dual regulatory relationship awareness for GPAI-incorporating systems Guidance integration into AISDP compliance posture Module 10 AISDP documentation Engagement Strategy AISDP module(s): Module 10 (Compliance Record) Regulatory basis: Articles 64–68 Organisations engage with the AI Office through three channels. Monitoring: systematically tracking publications, guidelines, codes of practice, and enforcement actions. Participation: contributing to public consultations on draft guidelines and codes of practice, shaping the regulatory framework and anticipating forthcoming requirements. Contribution: sharing expertise through the AI Office's stakeholder engagement mechanisms, building a reputation for constructive cooperation. The AI Governance Lead or Legal and Regulatory Advisor maintains awareness of open consultations and contributes where the organisation's expertise is relevant. Consultation responses are documented in the NCA engagement log alongside national authority interactions. Engagement with the AI Office is a long-term investment. Organisations that contribute constructively to the regulatory framework's development build credibility that serves them during subsequent compliance interactions. Key outputs Three-channel engagement (monitoring, participation, contribution) Consultation responses documented in NCA engagement log Long-term credibility investment Module 10 AISDP documentation --- ## AI System Assessor — Classification, AISDP, Independence URL: https://docs.standardintelligence.com/ai-system-assessor-classification-aisdp-independence Breadcrumb: Governance › Delivery › Organisational Roles › AI System Assessor — Classification, AISDP, Independence Last updated: 28 Feb 2026 AI System Assessor — Classification, AISDP, Independence AISDP module(s): All modules (compilation) Regulatory basis: Article 17 , Annex VI The AI System Assessor conducts discovery, classification, risk assessment , and AISDP compilation for each system. The Assessor examines each system against the Article 3(1) definition, classifies within risk tiers, performs gap assessment for brownfield system s, and assembles the AISDP from engineering artefacts. The role must combine regulatory and technical understanding. The Assessor is responsible (R in RACI) for risk classification , risk assessment, and conformity assessment . The Assessor is consulted (C) on FRIA , architecture review, PMM operation, and serious incident reporting . Functional independence from the development team is required during the conformity assessment phase. For small organisations, one to two Assessors cover the portfolio. For medium organisations, two to four Assessors are dedicated. For large enterprises, multiple Assessors are organised by business domain. The Assessor's competence framework ensures regulatory, technical, and audit methodology knowledge. Key outputs Classification, risk assessment, and AISDP compilation responsibility Functional independence from development during assessment Competence across regulatory, technical, and audit domains RACI "R" for classification, risk, and conformity assessment --- ## Annex I Product Integration — Three Coordination Models URL: https://docs.standardintelligence.com/annex-i-product-integration-three-coordination-models Breadcrumb: Governance › Conformity Assessment › Notified Bodies › Annex I Product Integration — Three Coordination Models Last updated: 28 Feb 2026 Annex I Product Integration — Three Coordination Models AISDP module(s): Module 6 (Risk Management System), Module 3 (Architecture) Regulatory basis: Article 43(3), Annex I For AI systems that are safety components of products covered by Annex I harmonisation legislation, the conformity assessment landscape involves coordination between the product notified body and the AI Act notified body (or internal assessor). Article 43(3) provides that the AI system conformity assessment may be carried out as part of the product conformity assessment. Three coordination models are emerging. In the single-body model, one notified body is designated under both the product legislation and the AI Act and conducts an integrated assessment; this is simplest but requires dual competence, which is currently rare. In the sequential model, the product notified body conducts its assessment first, and the AI Act notified body then assesses AI-specific requirements using the product assessment as input; this preserves specialist expertise but extends the timeline. In the parallel model, both assessments proceed concurrently with a defined coordination protocol ensuring findings are shared. The Conformity Assessment Coordinator engages with both bodies early to agree the coordination model, document exchange arrangements, and timeline dependencies. The AISDP and the product technical file are maintained as separate, cross-referenced documents rather than merged, to avoid version control complications and audience confusion. Key outputs Coordination model selection (single-body, sequential, or parallel) Early engagement with both bodies for scope agreement Separate AISDP and product technical file with bidirectional cross-references Timeline dependency management --- ## Annex VI Internal Assessment URL: https://docs.standardintelligence.com/annex-vi-internal-assessment Breadcrumb: Governance › Conformity Assessment › Annex VI Internal Assessment Last updated: 28 Feb 2026 ℹ Awaiting content from a subsequent batch (v13). Awaiting content. --- ## Annex VII Procedural Mapping URL: https://docs.standardintelligence.com/annex-vii-procedural-mapping Breadcrumb: Governance › Conformity Assessment › Notified Bodies › Annex VII Procedural Mapping Last updated: 28 Feb 2026 Annex VII Procedural Mapping AISDP module(s): All 12 modules Regulatory basis: Annex VII Annex VII establishes a five-point sequence running from initial application through ongoing surveillance. The organisation maps each Annex VII point to the corresponding AISDP section, responsible role, and evidence artefact. Point 2 provides the overview: QMS examined per point 3, technical documentation per point 4. Points 3.1 through 3.4 cover the QMS assessment: application contents, NB assessment against Article 17 , approved QMS maintenance, and change notification. Points 4.1 through 4.7 cover the technical documentation assessment: separate application per system, application contents, NB examination with dataset access, supplementary evidence requests, model access with IP protections, certificate issuance, and change assessment. Points 5.1 through 5.3 cover ongoing surveillance: QMS compliance verification, premises access, periodic audits. For Annex I product AI systems, Article 43(3) specifies that points 4.3, 4.4, 4.5, and the fifth paragraph of point 4.6 apply even where the conformity assessment is conducted as part of the product-level assessment. The Conformity Assessment Coordinator flags these points to the product notified body. Key outputs Per-point Annex VII mapping to AISDP sections and evidence Responsible role identification per requirement Annex I product integration flagging Assessment preparation documentation --- ## Article 6(3) Exception Assessment URL: https://docs.standardintelligence.com/article-63-exception-assessment Breadcrumb: Governance › Risk Assessment › Article 6(3) Exception Assessment Last updated: 28 Feb 2026 Art. 6(3) Functional Criterion AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Article 6(3) Article 6(3) allows certain systems that would otherwise be classified as high-risk to be treated as lower risk if two conditions are both satisfied. The first is the functional criterion. The system's function must fall within one of the specified categories: performing narrow procedural tasks, improving the results of previously completed human activities, or detecting decision-making patterns without replacing human assessment. The functional criterion requires the AI System Assessor to analyse what the system actually does in its deployment context, not its theoretical capability. A system that could replace human assessment but is deployed solely to assist human decision-makers may satisfy the functional criterion; the same system deployed to make autonomous decisions would not. The analysis must be grounded in the system's actual deployment configuration, operational procedures, and the contractual commitments governing its use. Satisfying the functional criterion alone is insufficient; the risk criterion must also be met. The AI System Assessor documents the functional analysis with specific evidence addressing which specified category applies and why. Key outputs Functional analysis against Article 6(3) specified categories Grounding in actual deployment configuration (not theoretical capability) Evidence-based determination with documented reasoning Module 6 AISDP documentation Art. 6(3) Risk Criterion — No Significant Risk AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Article 6(3) The second condition for the Article 6(3) exception is the risk criterion: the system must not pose a significant risk of harm to the health, safety, or fundamental rights of natural persons. This assessment considers the severity of potential harm, the number of persons potentially affected, the vulnerability of those persons, the reversibility of harm, and the availability of redress mechanisms. The risk criterion analysis should treat the exception as a hypothesis to be tested against evidence, not as a convenient exit from compliance obligations. A conservative approach is warranted: if the analysis is borderline, treating the system as high-risk is the safer position. The consequences of incorrectly claiming the exception (deploying a non-compliant high-risk system) are substantially more severe than the cost of unnecessarily complying with high-risk requirements. Both the Legal and Regulatory Advisor and the AI Governance Lead must review and approve any claim of the Article 6(3) exception. Their approval confirms that the risk analysis is thorough, that the conclusion is defensible, and that the organisation accepts the residual risk of the classification decision. Key outputs Risk criterion analysis (severity, population, vulnerability, reversibility, redress) Hypothesis-testing approach with conservative default Legal and Regulatory Advisor and AI Governance Lead dual approval Module 6 AISDP documentation Review & Treatment — Hypothesis Tested Against Evidence AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Article 6(3) The Article 6(3) exception assessment is treated as a hypothesis to be tested, not a conclusion to be confirmed. The AI System Assessor assembles evidence both supporting and challenging the exception claim. Supporting evidence might include the system's narrow procedural scope, the availability of human override, and the low severity of potential harm. Challenging evidence might include the system's influence on consequential decisions, the vulnerability of affected populations, and the difficulty of detecting errors. The assessor weights the evidence and reaches a determination. If the determination favours the exception, both criteria are documented with the specific evidence supporting each. If the determination does not favour the exception, the system is treated as high-risk and the full AISDP proceeds. The analysis is retained regardless of outcome; an assessor or competent authority reviewing the CDR should be able to see that the exception was genuinely tested. This rigorous treatment protects the organisation against enforcement risk. A competent authority that encounters a system claiming the Article 6(3) exception will scrutinise the analysis carefully; a superficial or one-sided analysis will not survive that scrutiny. Key outputs Evidence assembled both supporting and challenging the exception Weighted determination with documented reasoning Full analysis retained regardless of outcome Module 6 AISDP documentation --- ## Assessment Checklist URL: https://docs.standardintelligence.com/assessment-checklist Breadcrumb: Governance › Conformity Assessment › Artefacts › Assessment Checklist Last updated: 28 Feb 2026 Assessment Checklist AISDP module(s): All 12 modules Regulatory basis: Articles 8–15, Article 17 , Annex IV The completed assessment checklist is retained as the detailed record of which requirements were assessed, what evidence was examined, and what conclusion was reached for each. The checklist provides granular traceability that the Assessment Report summarises. A competent authority conducting a targeted review of a specific Article (for example, Article 10 data governance) should be able to locate the relevant checklist items and follow the trail to the underlying evidence without needing to read the entire Assessment Report. Key outputs Completed per-requirement checklist with evidence and determinations Granular traceability for targeted review Ten-year retention All 12 modules covered --- ## Assessment Execution Methodology URL: https://docs.standardintelligence.com/assessment-execution-methodology Breadcrumb: Governance › Conformity Assessment › Assessment Execution Methodology Last updated: 28 Feb 2026 ℹ is populated. are awaiting content from a subsequent batch. Phase 5: Synthesis & Reporting AISDP module(s): All 12 modules Regulatory basis: Annex VI Phase 5 consolidates findings from all preceding phases into the formal Assessment Report. The assessor classifies each non-conformity by severity (critical, major, minor), reconciles findings across phases to avoid duplication, and reaches an overall assessment conclusion. Typical duration is two to three days. The conclusion takes one of three forms. "Conformity demonstrated" means the system satisfies all applicable requirements with no outstanding non-conformities. "Conformity demonstrated subject to remediation" means the system satisfies the requirements in substance, with outstanding major or minor non-conformities that have documented remediation plans and deadlines. "Conformity not demonstrated" means critical non-conformities remain unresolved, or the cumulative weight of major non-conformities is such that the Declaration of Conformity cannot be justified. The Assessment Report is signed by the lead assessor and reviewed by the AI Governance Lead . It summarises the assessment scope, methodology, assessor team, findings by phase, the Non-Conformity Register summary, and the overall conclusion. The report is retained for the ten-year period as the evidential foundation for the Declaration of Conformity. Key outputs Assessment Report with scope, methodology, findings, and conclusion Non-conformity classification by severity Three-form conclusion (demonstrated, demonstrated subject to remediation, not demonstrated) Two-to-three-day typical duration --- ## Assessment Failure Pathways URL: https://docs.standardintelligence.com/assessment-failure-pathways Breadcrumb: Governance › Certification › Assessment Failure Pathways Last updated: 28 Feb 2026 Remediation & Re-Assessment AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Annex VI Remediation and re-assessment is the most common pathway when the conformity assessment identifies non-conformities that cannot be resolved within the planned timeline. The non-conformities are remediated, the affected AISDP modules updated, and the remediated areas re-assessed. A full re-assessment is not necessary unless the remediation affected the system's architecture or intended purpose; the AI System Assessor scopes the re-assessment to the remediated areas. This pathway is appropriate when the non-conformities are bounded and the remediation is technically feasible within a reasonable timeframe. The Conformity Assessment Coordinator tracks each non-conformity through the remediation workflow, with the assessor verifying each remediation before the assessment can conclude. Assessment failure should be treated as a normal part of the compliance process, not as a crisis. Organisations undertaking AISDP preparation for the first time should expect at least one remediation cycle. The assessment timeline should include contingency for remediation. Key outputs Scoped re-assessment of remediated areas Non-conformity remediation tracked through standard workflow Contingency time built into the assessment timeline Module 6 AISDP documentation Deployment Deferral — Fundamental Issues AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Annex VI If non-conformities are fundamental, where the system's architecture does not support required human oversight, the training data cannot be shown to be representative, or the model's explainability is insufficient for the deployment context, remediation may require rearchitecture or redevelopment. The system cannot be deployed until the fundamental issues are resolved. The AI Governance Lead communicates the deferral to the Business Owner, including the deployment timeline impact, the resource requirements for remediation, and the business case for proceeding with remediation versus alternative approaches. Deployment deferral is a significant business decision. Deploying a non-conforming system carries greater risk than deferral: Tier 2 penalties of up to EUR 15 million or 3% of global turnover, reputational harm, and potential enforcement action. The deferral decision is documented in the Assessment Report with the specific non-conformities that triggered it, the remediation plan, and the revised timeline. The AISDP is updated to reflect the system's deferred status. Key outputs Deployment deferral for fundamental non-conformities Business impact communication to Business Owner Documented remediation plan with revised timeline Module 6 AISDP documentation System Withdrawal — Irremediable Within Constraints AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Annex VI If non-conformities are irremediable within the system's economic or technical constraints, the system may need to be withdrawn from AISDP preparation entirely. This may lead to decommissioning the system, replacing it with an alternative that can achieve conformity, or reclassifying the system if the assessment reveals that the actual risk profile differs from the initial classification. The AI Governance Lead documents the withdrawal decision with the rationale, the non-conformities that triggered it, and the alternatives considered. The withdrawal record is retained in the AISDP evidence pack. If the system was already operational (a brownfield system undergoing retrospective compliance), the withdrawal triggers the end-of-life procedures described above, including deployer notification, EU database status update, and ongoing documentation retention. System withdrawal is a governance outcome, not a failure. A system that cannot achieve conformity should not be forced into compliance theatre. Key outputs Documented withdrawal decision with rationale and alternatives Decommissioning, replacement, or reclassification pathway End-of-life procedures triggered for operational systems Module 6 AISDP evidence Notified Body Rejection — Budget for 2–3 Cycles AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Article 43, Annex VII For systems subject to third-party conformity assessment, the notified body may decline to certify. A rejection carries greater consequence than internal assessment failure: the rejection is documented in the body's records, may be communicated to the competent authority, and for mandatory assessments (biometric identification under Annex III , point 1), the provider cannot self-certify as an alternative. When a notified body identifies non-conformities, it typically provides a detailed report specifying the deficiencies. The provider treats this report as a remediation plan, addresses each deficiency, and resubmits. Multiple rounds of review and remediation are common. Organisations should budget for at least two to three assessment cycles when planning for notified body engagement. The financial and timeline implications of a rejection are significant. Each additional cycle adds weeks to the timeline and incremental fees. The assessment timeline and budget should include contingency for rejection and resubmission. Key outputs Notified body rejection treated as remediation opportunity Detailed deficiency report as remediation plan Budget and timeline contingency for 2–3 assessment cycles Module 6 AISDP documentation --- ## Assessment Plan URL: https://docs.standardintelligence.com/assessment-plan Breadcrumb: Governance › Conformity Assessment › Artefacts › Assessment Plan Last updated: 28 Feb 2026 Assessment Plan AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Annex VI The Assessment Plan is prepared before the assessment begins and approved by the AI Governance Lead . It defines the scope (system, version, AISDP modules), the assessment methodology, the assessor team composition and qualifications, the assessment schedule, and the acceptance criteria. The plan is the governing document for the assessment; deviations from the plan are documented with justification. The plan also records the pre-assessment readiness review findings and the go/no-go recommendation. Where the readiness review identified gaps, the plan notes whether those gaps were resolved before the assessment commenced. Key outputs Scope, methodology, team, schedule, and acceptance criteria AI Governance Lead approval before commencement Readiness review findings incorporated Ten-year retention --- ## Assessment Tools & Technology URL: https://docs.standardintelligence.com/assessment-tools-and-technology Breadcrumb: Governance › Conformity Assessment › QMS Framework › Assessment Tools & Technology Last updated: 28 Feb 2026 ℹ This topic is covered within the parent article. See the full QMS Framework page. --- ## Assessor Independence & Competence Records URL: https://docs.standardintelligence.com/assessor-independence-and-competence-records Breadcrumb: Governance › Conformity Assessment › Artefacts › Assessor Independence & Competence Records Last updated: 28 Feb 2026 Assessor Independence & Competence Records AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Article 17 , Annex VI The assessor records archive contains the conflict of interest declarations, the competence evidence (qualifications, training records, CPD logs), and the independence arrangements for each assessment. The archive demonstrates that the assessment was conducted by competent, independent assessors. For each assessment cycle, the records identify the assessors, their qualifications, their independence from the development team, and their CPD status. A competent authority questioning the assessment's credibility will examine these records alongside the Assessment Report. Key outputs Per-assessor conflict of interest declarations Competence evidence (qualifications, CPD logs) Independence arrangement documentation Retained per assessment cycle --- ## Assessor Independence & Competence URL: https://docs.standardintelligence.com/assessor-independence-and-competence Breadcrumb: Governance › Conformity Assessment › Assessor Independence & Competence Last updated: 28 Feb 2026 Conflict of Interest Declarations AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Article 17 , Annex VI Each assessor participating in the internal conformity assessment completes a conflict of interest declaration before the assessment begins. The declaration confirms that the assessor has no direct involvement in the system's development, no financial interest in the assessment outcome, and no personal relationship with the development team that could compromise objectivity. The declaration is a formal document, signed and dated, retained alongside the Assessment Plan. Where a potential conflict is identified, the AI Governance Lead assesses whether it is material and, if so, replaces the assessor or implements mitigating measures (such as additional review by an independent party). The conflict assessment and its outcome are documented. For organisations using external consultants as assessors, the declaration should also cover the consulting firm's commercial relationships with the organisation, ensuring that the assessment is not compromised by a desire to maintain a lucrative advisory relationship. Key outputs Signed conflict of interest declaration per assessor Material conflict assessment with documented outcome External consultant commercial relationship coverage Retained alongside the Assessment Plan Functional Independence from Development AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Article 17, Annex VI The AI Act does not require external assessors for internal conformity assessment, but it does require that the assessment be credible. The assessor team must have no direct involvement in the system's development, to avoid self-review bias. A developer assessing their own work will unconsciously confirm their own assumptions and overlook gaps that an independent reviewer would catch. Functional independence means the assessor does not report to the same management chain as the development team for the purposes of the assessment. An engineer from a different business unit, an internal audit professional, or an external consultant can all satisfy this requirement. The independence arrangement is documented in the Assessment Plan and verified by the AI Governance Lead. For smaller organisations where complete separation is impractical, compensating measures include peer review arrangements with other organisations, external consultants for critical assessment phases, or additional oversight by the Internal Audit Assurance Lead. Key outputs Functional independence from the development team Independence arrangement documented in the Assessment Plan Compensating measures for smaller organisations AI Governance Lead verification Competence Framework AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Article 17, Annex VI The credibility of an internal conformity assessment depends on the assessor's competence. The organisation defines a competence framework specifying the knowledge, skills, and experience required for assessment roles, covering three domains. Regulatory knowledge: assessors must have a working understanding of the AI Act's requirements for high-risk systems (Articles 8–15, Article 17, Annex IV , Annex VI), the conformity assessment procedures, and the interaction between the AI Act and related regulations ( GDPR , NIS2 , sector-specific legislation). Technical knowledge: assessors must be able to read and evaluate technical documentation for the types of AI system they assess, including model architectures, training methodologies, evaluation metrics, fairness measures, and data governance practices, at a level sufficient to verify accuracy and identify unsupported claims. Audit methodology: assessors should have training in structured assessment methodology (evidence collection, sampling, verification, non-conformity classification, reporting), with ISO 19011 (guidelines for auditing management systems) as a suitable foundation. The competence framework is documented in the QMS and applied during assessor selection for each assessment cycle. Key outputs Three-domain competence framework (regulatory, technical, audit methodology) ISO 19011 foundation for audit methodology Application during assessor selection QMS documentation CPD & Refresher Training AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Article 17 The AI Act's regulatory landscape is evolving. Harmonised standards are under development, AI Office and national competent authorit y guidance is being published, and enforcement practice is emerging. Assessors participate in continuing professional development (CPD) that keeps their regulatory and technical knowledge current. A minimum of 20 hours per year of AI Act-relevant CPD is a reasonable expectation. CPD activities include attending AI Act training courses, participating in industry working groups, reviewing published enforcement actions and AI Office guidance, and studying emerging harmonised standards. The organisation tracks CPD hours and topics per assessor, retaining the records as competence evidence. New assessors complete a calibration exercise and an orientation covering the organisation's AISDP framework, QMS, and assessment methodology before conducting their first live assessment. Refresher training is triggered when significant regulatory changes occur, such as the publication of harmonised standards or new AI Office guidance that affects the assessment methodology. Key outputs Minimum 20 hours per year AI Act-relevant CPD per assessor CPD tracking with hours and topics recorded Orientation and calibration for new assessors Refresher training on significant regulatory changes --- ## Brownfield Compliance URL: https://docs.standardintelligence.com/brownfield-compliance Breadcrumb: Governance › Delivery › Brownfield Compliance Last updated: 28 Feb 2026 Gap Assessment — Per Module Documentation Reconstruction — Transparent Labelling Retrofitting Version Control — Baseline Capture Retrofitting Testing — Comprehensive Retrospective Phased Compliance (A: Critical, B: Documentation, C: Infrastructure) Milestones Before August 2026 --- ## CE Marking URL: https://docs.standardintelligence.com/ce-marking Breadcrumb: Governance › Certification › CE Marking Last updated: 28 Feb 2026 Affixation Requirements AISDP module(s): Module 10 (Compliance Record) Regulatory basis: Article 48 Article 48 requires high-risk AI systems to bear the CE marking after the Declaration of Conformity is signed. For digital systems, the marking is displayed in the user interface and accompanying documentation. It must be visible, legible, and indelible. The Conformity Assessment Coordinator affixes the marking before the system is placed on the market or put into service. The CE marking signals to deployers, users, and authorities that the system has undergone conformity assessment and that the provider has declared compliance. For software-only systems, "indelible" means the marking cannot be removed or obscured through normal use of the system; embedding it in the system's interface and documentation satisfies this requirement. Where the system is also subject to other Union harmonisation legislation ( Annex I products), a single CE marking indicates conformity with all applicable legislation. The Conformity Assessment Coordinator confirms that the marking covers the AI Act and all other applicable regulations before affixation. Key outputs CE marking displayed in user interface and documentation Visible, legible, indelible placement Affixed before market placement or service entry Module 10 AISDP documentation Affixing to Non-Conforming System — Offence (Art. 48, Art. 99) AISDP module(s): Module 10 (Compliance Record) Regulatory basis: Article 48, Article 99, Regulation (EC) No 765/2008 Article 30 Affixing the CE marking to a system that has not completed conformity assessment, or that is non-conforming, constitutes a breach of the CE marking requirements under Article 48 and the general principles in Article 30 of Regulation (EC) No 765/2008, carrying penalties under Article 99. The offence falls within Tier 2 (breach of obligations under Articles 43–49), exposing the organisation to fines of up to EUR 15 million or 3% of global annual turnover, and creating personal accountability for the individual who authorises affixation. The AI Governance Lead or Conformity Assessment Coordinator must be confident that the conformity assessment supports the marking before authorising it. The safeguard against improper affixation is the assessment process itself. The Declaration of Conformity must be signed before the CE marking is affixed. The Declaration can only be signed when the Assessment Report supports it. The Assessment Report is supported by the assessment evidence. This chain of dependency ensures that the CE marking cannot lawfully be affixed without a complete, documented basis. Organisations should establish a formal CE marking approval step in their deployment workflow, requiring explicit confirmation from the Conformity Assessment Coordinator that the Declaration has been signed and the marking is authorised. Key outputs Formal CE marking approval step in deployment workflow Chain of dependency (assessment, report, declaration, marking) Offence and penalty awareness for authorising personnel Module 10 AISDP documentation --- ## Certification, Standards & Legal URL: https://docs.standardintelligence.com/certification-standards-and-legal Breadcrumb: Governance › Certification, Standards & Legal (S.10) Last updated: 28 Feb 2026 Certification, standards, and legal obligations intersect to define the compliance pathway for high-risk AI systems. The harmonised standards landscape covers the CEN/CENELEC JTC 21 programme, interim reference standards, the presumption of conformity under Article 40, and the re-mapping exercise required when new standards are published. CE marking documents the affixation requirements and the relationship between the marking and the underlying conformity assessment . The declaration of conformity specifies the document content requirements, maintenance obligations, and multilingual provisions. Liability and insurance addresses the AI Liability Directive interaction, product liability under the revised PLD, insurance requirements, and deployer liability. Assessment failure pathways defines the remediation workflow when conformity assessment identifies non-conformities that cannot be resolved before the intended market placement date. ℹ This section corresponds to the Certification, Standards & Legal section and feeds primarily into AISDP Module 11 (Certification and Legal). --- ## Change Management (S.6 Integration) URL: https://docs.standardintelligence.com/change-management-s6-integration Breadcrumb: Governance › Conformity Assessment › QMS Framework › QMS Framework › Change Management (S.6 Integration) Last updated: 28 Feb 2026 Change Management (S.6 Integration) AISDP module(s): Module 2 (Development Process), Module 6 (Risk Management System) Regulatory basis: Article 17 Change management requires that every change to the system, whether to code, data, model, configuration, or infrastructure, flows through a controlled process with defined approval authority. The version control governance in, deployment controls in, and substantial change thresholds in collectively satisfy this requirement. The QMS integrates these controls into a single change management framework: a change request is submitted, assessed for impact (including whether it constitutes a substantial modification requiring new conformity assessment ), approved by the appropriate authority, implemented through the controlled deployment pipeline, and verified through post-deployment checks. The AISDP is updated to reflect the change. The integration with Section 6 is deliberate. Change management for AI systems is not a separate governance process; it is embedded in the development and deployment pipeline. The QMS documents the change management policy; the development infrastructure enforces it. Key outputs Integrated change management framework across code, data, model, and infrastructure Substantial modification assessment at each change AISDP update requirement per change Section 6 development pipeline integration --- ## Classification Decision Record URL: https://docs.standardintelligence.com/classification-decision-record Breadcrumb: Governance › Risk Assessment › Classification Decision Record Last updated: 28 Feb 2026 CDR Content AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Articles 5, 6 The Classification Decision Record (CDR) is the formal artefact documenting the system's risk tier determination. It contains the system's description (intended purpose, deployment context, affected population, input/output specification), the classification determination (tier, Annex III domain or Annex I legislation if applicable, Article 50 category if applicable), the Article 6(3) exception analysis (if claimed, with both functional and risk criteria addressed), and the supporting evidence for the determination. The CDR also records the Classification Reviewer's independent assessment, the AI Governance Lead 's approval, and the date of the determination. Where the determination is borderline, the CDR documents the reasoning for the chosen classification, the arguments for the alternative classification, and the factors that tipped the decision. The CDR is a living document in the sense that reclassification trigger s can require its revision. Each revision creates a new version; all versions are retained for the ten-year period required by Article 18 . The CDR is the first document an assessor examines when reviewing the AISDP; it sets the context for everything that follows. Key outputs System description, classification determination, and Article 6(3) analysis Independent review and AI Governance Lead approval Version-controlled with all versions retained Module 6 AISDP evidence Independent Review by Classification Reviewer AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Articles 5, 6 The CDR is subject to independent review by a Classification Reviewer, a role that requires functional independence from the system's development team. The reviewer assesses whether the classification analysis is thorough, whether the evidence supports the conclusion, whether alternative classifications were fairly considered, and whether the Article 6(3) exception analysis (if present) meets the rigorous standard described above. The Classification Reviewer does not need to agree with every aspect of the analysis; they need to confirm that the analysis is defensible. If the reviewer identifies gaps, inconsistencies, or unsupported conclusions, these are documented as findings that must be addressed before the CDR is approved. The reviewer's assessment, including any findings and their resolution, is recorded in the CDR. The independence requirement is critical. A classification review conducted by a member of the development team, or by a person with a commercial interest in a lower classification, lacks the objectivity needed to ensure the analysis is rigorous. Key outputs Independent Classification Reviewer assessment Functional independence from the development team Findings documented with resolution required before approval Module 6 AISDP evidence Disagreement Escalation to AI Governance Lead AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Articles 5, 6 Where the Classification Reviewer disagrees with the AI System Assessor 's classification determination, the disagreement is escalated to the AI Governance Lead for resolution. The AI Governance Lead reviews both positions, may request additional analysis or evidence, and makes a binding determination. The escalation process is documented: the original determination, the reviewer's objection with its reasoning, any additional analysis requested, and the AI Governance Lead's final decision with its rationale. The documentation ensures that classification disagreements are resolved through governance, not through informal pressure or hierarchy. Classification disagreements are a healthy sign, indicating that the review process is genuinely independent. An organisation that never experiences a classification disagreement should examine whether its review process is sufficiently rigorous. Key outputs Formal escalation process for classification disagreements AI Governance Lead binding determination Complete documentation of positions, analysis, and resolution Module 6 AISDP evidence --- ## Classification Reviewer — Independent CDR Validation URL: https://docs.standardintelligence.com/classification-reviewer-independent-cdr-validation Breadcrumb: Governance › Delivery › Organisational Roles › Classification Reviewer — Independent CDR Validation Last updated: 28 Feb 2026 Classification Reviewer — Independent CDR Validation AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Articles 5, 6 The Classification Reviewer independently reviews the AI System Assessor 's classification determination before the CDR is approved by the AI Governance Lead . The Reviewer is an experienced compliance or risk professional, functionally independent from both the development team and the Assessor. The review assesses the classification's defensibility: would the determination withstand scrutiny by a competent authority? Disagreements between the Assessor and the Reviewer are escalated to the AI Governance Lead for binding determination. The complete documentation of positions, analysis, and resolution is retained in the CDR. This independent review mechanism prevents classification errors from propagating through the entire AISDP process. For small organisations, the Classification Reviewer may double as the holistic AISDP reviewer. For medium and large organisations, a dedicated Reviewer (or team of Reviewers) provides consistent classification standards across the portfolio. Key outputs Independent CDR review for defensibility Disagreement escalation with documented resolution Functional independence from development and Assessor Module 6 AISDP evidence --- ## Communication Protocols URL: https://docs.standardintelligence.com/communication-protocols Breadcrumb: Governance › Regulator Interaction › Communication Protocols Last updated: 28 Feb 2026 Proactive Engagement & Consistent Messaging AISDP module(s): Module 10 (Compliance Record) Regulatory basis: Article 70 Organisations should introduce themselves to relevant competent authorities early in the compliance process, providing a brief overview of their AI portfolio, compliance approach, and contact points. This is particularly valuable in member states where the authority is newly established. Early engagement builds a constructive relationship before it is needed in a crisis. Communication with authorities follows a consistent messaging framework: the organisation's AI governance approach, its compliance methodology, and its contact structure are described in the same terms regardless of which authority is addressed. Inconsistent messaging across jurisdictions creates compliance risk if authorities compare notes. Incident communication pre-defines the channel, format, signatory authority, and follow-up schedule for each jurisdiction. Routine reporting obligations (where member states require periodic compliance reporting beyond the Act's minimum) are tracked in the jurisdiction register and submitted on schedule. Key outputs Early proactive engagement with competent authorities Consistent messaging framework across jurisdictions Pre-defined incident communication protocol per jurisdiction Module 10 AISDP documentation NCA Engagement Log AISDP module(s): Module 10 (Compliance Record) Regulatory basis: Article 70 The NCA engagement log records every substantive interaction with competent authorities, the AI Office , and notified bodies . Each entry documents the date, the authority, the nature of the interaction (proactive introduction, consultation response, incident report, inspection, information request, sandbox participation), a summary of the discussion, any commitments made, and follow-up actions. The log serves as the organisation's institutional memory of regulatory interactions and as evidence of cooperative engagement. During an enforcement proceeding, the log demonstrates the organisation's track record of constructive cooperation, which is a mitigating factor under Article 99(7). The log is maintained by the Conformity Assessment Coordinator, with entries contributed by the AI Governance Lead , Legal and Regulatory Advisor, and any other personnel who interact with authorities. It is reviewed quarterly alongside the jurisdiction register. Key outputs Per-interaction documentation (date, authority, nature, summary, commitments) Evidence of cooperative engagement for penalty mitigation Quarterly review alongside jurisdiction register Module 10 AISDP evidence --- ## Conflicting Guidance Position Papers URL: https://docs.standardintelligence.com/conflicting-guidance-position-papers Breadcrumb: Governance › Regulator Interaction › Artefacts › Conflicting Guidance Position Papers Last updated: 28 Feb 2026 Conflicting Guidance Position Papers AISDP module(s): Module 10 (Compliance Record) Regulatory basis: Article 70 Where conflicting member state guidance was identified and a position adopted, the position paper documents the conflict, the interpretations considered, the position chosen, the reasoning, the evidence supporting the position, and any advisory opinions sought. Position papers demonstrate good faith compliance effort. Where a conflict is later resolved (by the AI Office , a court, or harmonised standards ), the resolution assessment and any AISDP updates are appended to the position paper. Key outputs Per-conflict position paper with reasoning and evidence Resolution assessments appended when conflicts are resolved Good faith compliance effort evidence Module 10 AISDP evidence --- ## Conflicting Guidance URL: https://docs.standardintelligence.com/conflicting-guidance Breadcrumb: Governance › Regulator Interaction › Conflicting Guidance Last updated: 28 Feb 2026 Identifying, Resolving & Documenting Conflicting Positions AISDP module(s): Module 10 (Compliance Record) Regulatory basis: Article 70 In the early implementation period, conflicting guidance from different member states is a real risk. The Legal and Regulatory Advisor maintains a conflicting guidance register recording, for each identified conflict, the specific provision in dispute, the conflicting interpretations, the affected AISDP modules, and the organisation's chosen position. The resolution approach adopts the more conservative interpretation unless doing so would conflict with a third authority's guidance or create a technical impossibility. If the conflict is material (affecting the system's design, human oversight model, or intended purpose scope), the organisation considers raising the issue with the AI Office . An advisory opinion from the competent authority in the primary deployment jurisdiction may also be sought. Regardless of resolution method, the organisation documents its interpretation, reasoning, the guidance considered, and supporting evidence. A well-documented position, even if later proved incorrect, demonstrates good faith compliance effort, which is a mitigating factor under Article 99(7). Key outputs Conflicting guidance register with per-conflict documentation Conservative interpretation as default resolution approach AI Office referral for material conflicts Module 10 AISDP evidence --- ## Conformity Assessment Artefacts URL: https://docs.standardintelligence.com/conformity-assessment-artefacts Breadcrumb: Governance › Conformity Assessment › Artefacts Last updated: 28 Feb 2026 Internal Assessment Report Non-Conformity Register Evidence Register Assessment Checklist Assessor Independence & Competence Records Assessment Plan Stakeholder Interview Records --- ## Conformity Assessment Coordinator — Gates, Evidence, Registration URL: https://docs.standardintelligence.com/conformity-assessment-coordinator-gates-evidence Breadcrumb: Governance › Delivery › Organisational Roles › Conformity Assessment Coordinator — Gates, Evidence, Registration Last updated: 28 Feb 2026 Conformity Assessment Coordinator — Gates, Evidence, Registration AISDP module(s): Module 10 (Compliance Record), Module 12 ( Post-Market Monitoring ) Regulatory basis: Article 17 , Articles 47–49 The Conformity Assessment Coordinator manages the end-to-end certification workflow: coordinating the assessment plan, managing the Non-Conformity Register , preparing the Declaration of Conformity for signature, managing EU database registration , and maintaining the evidence register and its currency tracking. The Coordinator is responsible (R) for conformity assessment execution, Declaration of Conformity preparation, and EU database registration. The role manages logistics across all assessment phases, tracks non-conformity remediation deadlines, coordinates notified body interactions (where applicable), and maintains the portfolio assessment calendar. For small organisations, the Coordinator may be the same individual as the AI System Assessor . For medium and large organisations, a dedicated Coordinator manages the assessment process independently from the assessment itself, ensuring that the process is followed and deadlines are met. Key outputs End-to-end certification workflow management Evidence register and currency tracking ownership EU database registration and Declaration preparation RACI "R" for conformity assessment process and registration --- ## Conformity Assessment URL: https://docs.standardintelligence.com/conformity-assessment Breadcrumb: Governance › Conformity Assessment (S.9) Last updated: 28 Feb 2026 Conformity assessment is the process by which the provider demonstrates that the high-risk AI system meets the requirements of the EU AI Act. Annex VI internal assessment is the default pathway for most high-risk systems, enabling the provider to conduct the assessment without notified body involvement. Assessment execution methodology defines the five-phase assessment process. Pre-assessment readiness addresses evidence currency, the evidence register , and per-requirement assessment checklists. Assessor independence and competence establishes qualifications, conflict-of-interest declarations, and ongoing training. Non-conformity management defines the resolution workflow from finding through corrective action to closure verification. Notified bodies covers the external assessment pathway required for biometric systems and other designated categories. Multi-system assessment, continuous assessment and surveillance, and assessment tools and technology address scaling and efficiency. The QMS framework documents the quality management system required by Article 17 . The section concludes with artefacts. ℹ are populated. (Annex VI Internal Assessment, Assessment Execution Methodology start) are awaiting content from a subsequent batch. --- ## Continual Improvement URL: https://docs.standardintelligence.com/continual-improvement Breadcrumb: Governance › Conformity Assessment › QMS Framework › QMS Framework › Continual Improvement Last updated: 28 Feb 2026 Continual Improvement AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Article 17 Continual improvement requires mechanisms for learning from incidents, assessment findings, and operational experience, then feeding those lessons back into the system's design and governance. Post-incident reviews and periodic risk register reviews are the primary vehicles. The improvement cycle connects assessment findings to systemic change. A major non-conformity in one system's fairness testing might lead to a revision of the organisation's fairness testing standard that applies to all systems. A serious incident might reveal a gap in the human oversight design that triggers a review of the oversight framework across the portfolio. The AI Governance Lead tracks improvement actions arising from assessments, incidents, and governance reviews, ensuring that lessons learned are translated into concrete changes rather than archived in reports. The improvement record (actions identified, actions taken, outcomes observed) is retained as QMS evidence. Key outputs Incident, assessment, and operational learning feeding back into governance Cross-system lesson propagation Improvement action tracking with outcome measurement QMS evidence --- ## Continuous Assessment & Surveillance URL: https://docs.standardintelligence.com/continuous-assessment-and-surveillance Breadcrumb: Governance › Conformity Assessment › Multi-System & Continuous Assessment › Continuous Assessment & Surveillance Last updated: 28 Feb 2026 Continuous Assessment & CI/CD as Continuous Checking AISDP module(s): All modules (cross-cutting) Regulatory basis: Articles 9, 18, 72 The internal conformity assessment under Annex VI is often understood as a point-in-time exercise. The ongoing obligations under Articles 9, 18, and 72 require a more sustained approach. The continuous assessment model operates on three cadences. Monthly automated checks verify technical compliance: monitoring systems operational, evidence artefacts current, PMM metric thresholds unbreached, non-conformities within remediation deadlines. The engineering team automates these checks (Airflow or GitHub Actions scripts querying the monitoring infrastructure, evidence repository, and non-conformity register ) and produces a structured report. Quarterly governance reviews bring the AI Governance Lead , technical leads, and DPO Liaison together to review monthly reports, assess the overall compliance posture, and make governance decisions. Annual formal reassessment repeats the full Annex VI assessment on the updated AISDP and evidence pack. Trigger-based assessment supplements the calendar: a substantial modification , a serious incident , a regulatory enforcement action, new harmonised standards , or a material deployment context change triggers an unscheduled assessment of the affected areas. Key outputs Three-cadence continuous assessment (monthly automated, quarterly governance, annual formal) Automated compliance checking via Airflow or GitHub Actions Trigger-based unscheduled assessment Sustained conformity assurance between formal assessment cycles GRC Platforms, Evidence Repos, NC Tracking, Currency Monitoring AISDP module(s): All modules (cross-cutting) Regulatory basis: Article 17 Organisations with larger AI portfolios invest in tooling that supports structured assessment. The tooling landscape spans four categories. Compliance management platforms (OneTrust, ServiceNow GRC, Archer, IBM OpenPages) or AI-specific platforms (Credo AI, Holistic AI, Monitaur) host the assessment checklist, track non-conformities, manage evidence register s, and generate assessment reports. Key requirements include structured checklist management with Article-level traceability , non-conformity tracking with severity classification and remediation workflow, evidence register with metadata tagging and expiry monitoring, and role-based access control with audit trail. Evidence repositories (Git-based for code and configuration, SharePoint or Confluence for narrative documentation, S3/Azure Blob/GCS for large binary artefacts) enforce immutability for submitted evidence and retain artefacts for the ten-year period. Non-conformity tracking (Jira with custom workflows, ServiceNow) enforces the remediation workflow. Currency monitoring (scheduled scripts comparing evidence register dates against freshness requirements) generates gap reports for overdue artefacts. For smaller organisations, a Confluence or SharePoint space with structured templates, a Jira project for non-conformity tracking, and scheduled scripts for automated checks is viable, though it scales poorly. Key outputs Platform selection guidance (general GRC vs. AI-specific) Evidence repository with immutability and ten-year retention Non-conformity workflow automation Automated currency monitoring with gap reports --- ## Data Access Protocol URL: https://docs.standardintelligence.com/data-access-protocol Breadcrumb: Governance › Conformity Assessment › Notified Bodies › Data Access Protocol Last updated: 28 Feb 2026 Data Access Protocol AISDP module(s): Module 4 ( Data Governance ), Module 9 (Cybersecurity) Regulatory basis: Annex VII , points 4.3, 4.5 Annex VII points 4.3 and 4.5 grant the notified body access to training, validation, and testing datasets, and to trained models including parameters, where necessary for the assessment. This access must be managed through a defined protocol that balances the assessment's information needs against intellectual property and data protection requirements. The data access protocol specifies the access mechanism (API access, remote desktop, on-site inspection, or anonymised dataset provision), the scope of access (which datasets, which model parameters, which training infrastructure components), the confidentiality arrangements (NDA, data handling commitments, return or destruction of data after assessment), and the data protection measures (particularly where datasets contain personal data). For datasets containing personal data, the data access protocol must be consistent with the system's DPIA and the applicable data processing agreements. The DPO Liaison reviews the protocol before it is shared with the notified body. Key outputs Defined data access protocol for NB assessment Access mechanism, scope, confidentiality, and data protection DPO Liaison review for personal data considerations Annex VII points 4.3 and 4.5 compliance --- ## Data Sovereignty Constraints URL: https://docs.standardintelligence.com/data-sovereignty-constraints Breadcrumb: Governance › Regulator Interaction › Multi-Jurisdiction Deployment › Data Sovereignty Constraints Last updated: 28 Feb 2026 Data Sovereignty Constraints AISDP module(s): Module 4 ( Data Governance ), Module 9 (Cybersecurity) Regulatory basis: GDPR , National data protection provisions Multi-jurisdiction deployment intersects with data residency requirements. Personal data processed by the AI system in one member state may be subject to additional national data protection provisions beyond the GDPR. The Technical SME documents these constraints in the AISDP's data governance module, and the infrastructure architecture enforces them. Data residency constraints may require per-jurisdiction data processing infrastructure, restricting where training data, inference inputs, and monitoring data are stored and processed. The architecture documentation ( Module 3 ) and data governance documentation (Module 4) reflect these constraints with clear per-jurisdiction data flow diagrams. Where data sovereignty requirements conflict with the system's centralised architecture, the Technical SME assesses architectural options (data localisation, federated processing, anonymisation before centralisation) and documents the chosen approach with its compliance rationale. Key outputs Per-jurisdiction data residency documentation Infrastructure enforcement of data sovereignty constraints Architectural options assessment for conflicting requirements Module 4 and Module 9 AISDP documentation --- ## Declaration of Conformity URL: https://docs.standardintelligence.com/declaration-of-conformity Breadcrumb: Governance › Certification › Declaration of Conformity Last updated: 28 Feb 2026 Eight Mandatory Content Points AISDP module(s): Module 10 (Compliance Record) Regulatory basis: Article 47, Annex V Annex V specifies eight mandatory content points for the Declaration of Conformity. Point 1: AI system name, type, and unambiguous identification reference. Point 2: provider name and address (or authorised representative ). Point 3: statement that the Declaration is issued under the provider's sole responsibility. Point 4: statement of conformity with the AI Act and any other applicable Union law. Point 5: where personal data is processed, statement of GDPR , EUDPR, and Law Enforcement Directive compliance. Point 6: references to harmonised standards or other specifications applied. Point 7: where applicable, notified body details and certificate identification. Point 8: place, date, signatory name and function, and signature. Each point must be populated with accurate, traceable information drawn from the AISDP and the conformity assessment record. The Conformity Assessment Coordinator assembles the Declaration by extracting each field from its documented source; the Legal and Regulatory Advisor reviews for legal accuracy before signature. Key outputs Eight mandatory content points populated from AISDP evidence Per-point verification against source documentation Legal review before signature Module 10 AISDP evidence Each Point Traced to AISDP Evidence AISDP module(s): Module 10 (Compliance Record) Regulatory basis: Article 47, Annex V Each of the eight Annex V content points is traced to its specific AISDP evidence source and verified by the responsible role. Point 1 traces to Module 1 (System Description) and the model registry ; the Conformity Assessment Coordinator cross-checks against the EU database entry. Point 2 traces to corporate records and the authorised representative mandate (for third-country providers). Point 4 traces to the conformity assessment report; the AI System Assessor confirms an unqualified conformity finding. Point 5 traces to the DPIA and data governance documentation; the DPO Liaison confirms currency. Point 6 traces to the standards compliance register. Point 7 traces to the notified body engagement record and Annex VII procedural mapping. A pre-signature checklist covering all eight fields, with confirmation that the source evidence is current and the field content accurate, is completed and retained as part of the conformity assessment record. This traceability ensures that the Declaration is defensible under regulatory scrutiny. Key outputs Per-point evidence source identification and verification Pre-signature checklist with currency confirmation Traceability from Declaration fields to underlying evidence Module 10 AISDP evidence Combined DoC for Multiple Regulations (Art. 47(3)) AISDP module(s): Module 10 (Compliance Record) Regulatory basis: Article 47(3) Where the high-risk AI system is also subject to other Union harmonisation legislation requiring a Declaration of Conformity, Article 47(3) permits the provider to draw up a single combined Declaration. The combined Declaration must contain all information required to identify each applicable regulation and must satisfy the content requirements of each. The Conformity Assessment Coordinator maintains a mapping from each regulation's Declaration requirements to the combined Declaration's content, ensuring no requirement is omitted in the consolidation. For Annex I product systems, this mapping is particularly important: the product legislation's Declaration requirements and the AI Act's Annex V requirements must both be fully satisfied. The combined Declaration should clearly identify which regulatory requirements it addresses. A deployer reading the combined Declaration should understand which regulations have been assessed and can refer to the corresponding conformity assessment records for details. Key outputs Single combined Declaration covering all applicable regulations Cross-regulation requirement mapping maintained by Conformity Assessment Coordinator Clear identification of each regulation addressed Module 10 AISDP evidence --- ## Delivery Artefacts URL: https://docs.standardintelligence.com/delivery-artefacts Breadcrumb: Governance › Delivery › Artefacts Last updated: 28 Feb 2026 Programme Plan & Milestone Calendar AISDP module(s): Cross-cutting Regulatory basis: Articles 8–15 The programme plan documents the seven-phase delivery sequence for each system, the milestone calendar with gate dates, the resource allocation across phases, and the dependencies between parallel tracks. The plan is maintained by the Conformity Assessment Coordinator and approved by the AI Governance Lead . For organisations with multiple systems, the programme plan incorporates the portfolio prioritisation, shared resource scheduling, and staggered governance gates. The plan is reviewed monthly at the portfolio status review. Key outputs Per-system seven-phase delivery sequence Milestone calendar with gate dates Portfolio-level coordination for multi-system organisations Monthly review at portfolio status meetings Phase Gate Approval Records AISDP module(s): Cross-cutting Regulatory basis: Articles 8–15 Phase gate approval records document each governance gate: the gate ( CDR approval, risk acceptance, architecture sign-off, Declaration signing, deployment authorisation), the date, the approver, the evidence reviewed, the decision (approved, approved with conditions, rejected), and any conditions imposed. Records are retained for the ten-year period. Key outputs Per-gate approval documentation Evidence reviewed and decision recorded Conditions tracked to completion Ten-year retention Resource Allocation Plan AISDP module(s): Cross-cutting Regulatory basis: Article 17 The resource allocation plan maps each role's availability against the portfolio's milestone calendar, identifying bottlenecks and mitigation strategies. The plan is updated quarterly at the resource review and adjusted when systems enter or exit the delivery pipeline. Key outputs Per-role availability mapped to milestones Bottleneck identification and mitigation Quarterly update cycle Portfolio-level resource management Portfolio Prioritisation Matrix AISDP module(s): Cross-cutting Regulatory basis: Articles 8–15 The portfolio prioritisation matrix scores each system on four axes (risk tier, deployment timeline, deployment scale, compliance readiness) and produces a sequencing recommendation. The matrix is reviewed quarterly and updated as circumstances change. It provides the evidence base for the AI Governance Lead's prioritisation decisions. Key outputs Four-axis scoring per system Sequencing recommendation Quarterly review and update AI Governance Lead decision support Brownfield Gap Assessment & Remediation Plan AISDP module(s): All 12 modules Regulatory basis: Articles 8–15 For each brownfield system , the gap assessment and remediation plan document the per-module gap analysis, the phased remediation plan (A/B/C), the milestone schedule, the responsible owners, and the progress tracking. The plan demonstrates to a competent authority that the organisation has a structured, time-bound approach to achieving compliance for legacy systems. Key outputs Per-module gap analysis for brownfield systems Phased remediation plan with milestones Progress tracking toward August 2026 deadline All 12 modules covered --- ## Deployer Communication Records URL: https://docs.standardintelligence.com/deployer-communication-records Breadcrumb: Governance › Regulator Interaction › Artefacts › Deployer Communication Records Last updated: 28 Feb 2026 Deployer Communication Records AISDP module(s): Module 8 (Transparency), Module 11 (Deployer Obligations) Regulatory basis: Article 13 , Article 26 The deployer communication records archive documents all provider-to-deployer communications: Instructions for Use versions and translations, Article 26 obligation briefings, FRIA notification support, system update notifications, and residual risk communications. Records are maintained per jurisdiction and per deployer. Key outputs Per-jurisdiction, per-deployer communication archive Translated Instructions for Use versions Article 26 briefing records Module 8 and Module 11 AISDP evidence --- ## Deployer Communications per Member State URL: https://docs.standardintelligence.com/deployer-communications-per-member-state Breadcrumb: Governance › Regulator Interaction › Multi-Jurisdiction Deployment › Deployer Communications per Member State Last updated: 28 Feb 2026 Deployer Communications per Member State AISDP module(s): Module 8 (Transparency), Module 11 (Deployer Obligations) Regulatory basis: Article 13 , Article 26 Deployers in different member states may have different expectations, capabilities, and legal obligations under national implementations. The provider's deployer communication framework accommodates this variability. Instructions for Use are translated into each deployment language. Deployer briefings on Article 26 obligations are calibrated to the deployer's regulatory context. Where a member state has published specific guidance on deployer obligations, the provider's deployer communications reflect that guidance. For example, if one member state requires deployers to maintain specific operator training records while another does not specify this, the provider's Instructions for Use for the first jurisdiction should reference the training requirement. Deployer communication records are maintained per jurisdiction, documenting what was communicated, when, and to whom. These records are Module 11 evidence. Key outputs Jurisdiction-calibrated deployer communications Translated Instructions for Use per deployment language Per-jurisdiction communication records Module 8 and Module 11 AISDP evidence --- ## Deployer Registration (Art. 49(3), Annex VIII-C) URL: https://docs.standardintelligence.com/deployer-registration-art-493-annex-viii-c Breadcrumb: Governance › Regulator Interaction › EU Database Registration › Deployer Registration (Art. 49(3), Annex VIII-C) Last updated: 28 Feb 2026 Deployer Registration (Art. 49(3), Annex VIII-C) AISDP module(s): Module 11 (Deployer Obligations) Regulatory basis: Article 49 (3), Annex VIII Section C Public authority deployers (or persons acting on their behalf) register themselves and their use of the system. Section C requires the deployer's identity and contact details, the submitter's details, the URL of the system's existing EU database entry (linking to the provider's registration), a summary of the FRIA findings under Article 27 , and a summary of the DPIA under GDPR Article 35 where applicable. The deployer registration creates a chain from the provider's system entry to each public authority's specific use. This enables oversight of how high-risk systems are deployed across the public sector. For organisations that are both provider and deployer, both Section A and Section C registrations are required. The FRIA and DPIA summaries in the deployer registration should be substantive enough to demonstrate that the assessments were conducted, without disclosing sensitive operational detail. The Legal and Regulatory Advisor reviews the summaries before submission. Key outputs Deployer registration linking to provider's EU database entry FRIA and DPIA summaries included Dual registration for provider-deployer organisations Module 11 AISDP evidence --- ## Document Control URL: https://docs.standardintelligence.com/document-control Breadcrumb: Governance › Conformity Assessment › QMS Framework › QMS Framework › Document Control Last updated: 28 Feb 2026 Document Control AISDP module(s): All modules (cross-cutting) Regulatory basis: Article 17 , Article 18 Document control requires that every AISDP module, procedure, and evidence artefact has a defined owner, a version history, a review cycle, and a defined retention period. Version control records every change with the changer's identity, the timestamp, and the ability to retrieve any historical version. Git-based documentation repositories provide the strongest version control: every change is a commit with attribution and a complete diff against the previous version. Confluence and SharePoint provide adequate version control for non-technical teams. For long-term retention, documents are archived to cold storage (S3 Glacier, Azure Archive Storage, Google Archive Storage) with lifecycle policies preventing deletion before the retention period expires. The ten-year retention challenge deserves explicit planning. Over a decade, cloud accounts may be migrated, storage services deprecated, and file formats may become obsolete. Retention planning addresses storage durability through geographic redundancy, format longevity through open formats (Markdown, PDF/A, JSON, CSV), access continuity through credentials not tied to individuals, and index maintenance through the evidence register . A biennial retention health check verifies all elements. Key outputs Per-document ownership, version history, review cycle, retention period Git-based or equivalent version control with full attribution Ten-year retention with cold storage archival Biennial retention health check --- ## Documentation Reconstruction — Transparent Labelling URL: https://docs.standardintelligence.com/documentation-reconstruction-transparent-labelling Breadcrumb: Governance › Delivery › Brownfield Compliance › Documentation Reconstruction — Transparent Labelling Last updated: 28 Feb 2026 Documentation Reconstruction — Transparent Labelling AISDP module(s): All 12 modules Regulatory basis: Articles 11, 18, Annex IV For brownfield system s, some documentation will need to be reconstructed from available artefacts. Training data that was not version-controlled may need characterisation through statistical analysis of the deployed model's behaviour. Model architecture details that were not formally documented may need extraction from the codebase. Design decisions that were never recorded may need recovery through interviews with the development team. The AISDP should clearly indicate where documentation has been reconstructed rather than generated contemporaneously. Transparency about the reconstruction process is more credible to a competent authority than retroactive documentation that claims to be original. Each reconstructed section should note the reconstruction date, the method used, and the sources from which the information was recovered. A competent authority reviewing a brownfield AISDP will expect to see gaps in the historical record. What it will not accept is fabrication: documentation that purports to describe a contemporaneous process that never occurred. Key outputs Documentation reconstruction from available artefacts Transparent labelling of reconstructed content (date, method, sources) Credibility through honesty about historical gaps All 12 modules covered where reconstruction is needed --- ## DPO Liaison — DPIA & Special Category Data URL: https://docs.standardintelligence.com/dpo-liaison-dpia-and-special-category-data Breadcrumb: Governance › Delivery › Organisational Roles › DPO Liaison — DPIA & Special Category Data Last updated: 28 Feb 2026 DPO Liaison — DPIA & Special Category Data AISDP module(s): Module 4 (Data Governance) Regulatory basis: GDPR, Article 10 The DPO Liaison confirms that data governance documentation is consistent with GDPR obligations, verifies that DPIAs are complete and current, and ensures that special category data handling follows the documented procedures. The role is responsible (R) for FRIA data protection elements and is consulted (C) on data governance, PMM operation, and serious incident reporting (where personal data is involved). The DPO Liaison validates Module 4 content, ensuring that the AISDP's data governance documentation aligns with the organisation's GDPR compliance posture. Where the system processes special category data under Article 10's exception, the DPO Liaison confirms that the legal basis, safeguards, and processing justification are documented and defensible. The role serves as the bridge between the AI compliance programme and the data protection function, preventing the two programmes from diverging. Key outputs DPIA currency and completeness verification Special category data handling validation Module 4 alignment with GDPR compliance posture RACI "R" for FRIA data protection elements --- ## End-to-End Technical Delivery URL: https://docs.standardintelligence.com/end-to-end-technical-delivery Breadcrumb: Governance › End-to-End Technical Delivery (S.14) Last updated: 28 Feb 2026 End-to-end technical delivery translates the AISDP 's compliance requirements into a practical project execution framework. The seven-phase delivery framework maps each phase from discovery through operational monitoring to specific AISDP modules. Agile adaptation demonstrates how compliance activities integrate into sprint cycles, backlog management, and iterative development without creating a waterfall bottleneck. Brownfield compliance addresses the challenge of bringing existing AI systems into conformity, covering gap assessment, documentation reconstruction, and retrofit prioritisation. Parallel track coordination manages the relationship between development, compliance, and governance workstreams. Organisational roles defines eight delivery roles and their responsibilities across phases. Resource estimation provides a framework for estimating the effort required for compliance activities. The section concludes with delivery artefacts. ℹ This section corresponds to the End-to-End Technical Delivery section and supports project planning and execution across all AISDP modules. --- ## Enforcement & Penalties URL: https://docs.standardintelligence.com/enforcement-and-penalties Breadcrumb: Governance › Regulator Interaction › Enforcement & Penalties Last updated: 28 Feb 2026 Penalty Tiers (Art. 99) AISDP module(s): Module 10 (Compliance Record) Regulatory basis: Article 99 Article 99 establishes three penalty tiers calibrated to violation severity. Tier 1 covers prohibited AI practices ( Article 5 ), with maximum fines of EUR 35 million or 7% of global annual turnover. Tier 2 covers high-risk system obligation breaches (Articles 8–15, 16, 17, 25–27, 43–49), with maximum fines of EUR 15 million or 3% of turnover. Tier 3 covers providing incorrect, incomplete, or misleading information, with maximum fines of EUR 7.5 million or 1% of turnover. The higher of the two amounts (absolute figure or turnover percentage) applies, except for SMEs and start-ups where Article 99(6) provides that the lower amount applies. Tier 1's maximum exceeds GDPR 's penalty ceiling, signalling the Act's prioritisation of fundamental rights protection. Tier 2 covers the obligations most directly relevant to AISDP preparation: inadequate technical documentation, failure to conduct conformity assessment , missing EU database registration , inadequate human oversight, absent post-market monitoring , and delayed serious incident reporting. Article 99(4) applies broadly to non-compliance with any requirement or obligation under the Regulation, extending beyond the commonly enumerated articles. Key outputs Three-tier penalty structure documented with thresholds Tier 2 obligations mapped to AISDP activities SME and start-up reduced threshold awareness Module 10 AISDP documentation Enforcement Triggers AISDP module(s): Module 10 (Compliance Record) Regulatory basis: Article 74, Article 99 Competent authorities may initiate enforcement proceedings through several channels: proactive market surveillance (systematic review of AI systems in the jurisdiction), reactive investigation following a complaint from an affected person, deployer, or competitor, the provider's own Article 73 serious incident notification prompting broader investigation, cross-border referral from another member state's authority, or media and civil society reporting drawing attention to a system's behaviour. The AISDP is central to the enforcement process. The authority's first request will typically be for the complete technical documentation. An AISDP that is incomplete, inconsistent with the deployed system, or unsupported by evidence is itself a non-compliance finding triggering Tier 2 penalties. The AISDP is therefore both the object of scrutiny and the organisation's primary defence. Maintaining inspection readiness is the practical response to enforcement trigger risk. Key outputs Five enforcement trigger channels identified AISDP as primary evidence in enforcement proceedings Inspection readiness as trigger risk mitigation Module 10 AISDP documentation Mitigating Factors (Art. 99(7)) AISDP module(s): Module 10 (Compliance Record) Regulatory basis: Article 99(7) Article 99(7) directs competent authorities to consider mitigating and aggravating factors when determining penalty amounts. Mitigating factors include the nature, gravity, and duration of the infringement; whether corrective actions were taken promptly; the degree of cooperation with the authority; the technical and organisational measures the provider had implemented; whether the provider proactively brought the infringement to the authority's attention; and the size, annual turnover, and market share of the operator committing the infringement. Aggravating factors include the deliberate nature of the infringement, failure to take corrective action after authority identification, and a history of previous infringements. The practical implication is that the quality of the compliance programme is itself a mitigating factor. An organisation that can demonstrate a thorough AISDP, a functioning PMM system, responsive incident reporting, and a cooperative posture will face materially lower penalty exposure. The AISDP is both the document the authority reviews and part of the evidence determining the consequence of any deficiency found. Key outputs Mitigating factors mapped to AISDP activities Thorough AISDP as penalty reduction evidence Cooperative posture and proactive disclosure as mitigation Module 10 AISDP documentation --- ## EU Database Registration Confirmation URL: https://docs.standardintelligence.com/eu-database-registration-confirmation Breadcrumb: Governance › Regulator Interaction › Artefacts › EU Database Registration Confirmation Last updated: 28 Feb 2026 EU Database Registration Confirmation AISDP module(s): Module 10 (Compliance Record) Regulatory basis: Article 49 , Article 71 The EU database registration confirmation is retained as Module 10 evidence. It documents the registration date, the submitted information, the post-submission verification (confirming the published entry matches the submission), and the registration-to-AISDP mapping table. Updates to the registration are tracked with their trigger (material change, new jurisdiction, status update) and the corresponding AISDP version. Key outputs Registration confirmation with post-submission verification Registration-to-AISDP mapping table Update history with triggers Module 10 AISDP evidence --- ## EU Database Registration URL: https://docs.standardintelligence.com/eu-database-registration Breadcrumb: Governance › Regulator Interaction › EU Database Registration Last updated: 28 Feb 2026 Provider Registration (Art. 49(1), Annex VIII-A) Non-High-Risk Provider Registration (Art. 49(2), Annex VIII-B) Deployer Registration (Art. 49(3), Annex VIII-C) Real-World Testing Registration (Art. 60, Annex IX) Sensitive Domains — Non-Public Section Multi-Jurisdiction Registration Registration Data Quality Assurance --- ## Evidence Register URL: https://docs.standardintelligence.com/evidence-register Breadcrumb: Governance › Conformity Assessment › Artefacts › Evidence Register Last updated: 28 Feb 2026 Evidence Register AISDP module(s): All 12 modules Regulatory basis: Articles 11, 18, Annex IV The evidence register is retained as the master index linking every compliance claim to its supporting artefact. Each entry records artefact ID, AISDP module, Article reference, description, current version, storage location, last updated date, freshness requirement, next update due, and responsible person. The register serves multiple functions: assessment navigation (directing assessors to evidence), currency tracking (identifying stale artefacts), and retrieval during regulatory inspection (enabling rapid response to information requests). It is maintained continuously, not only during assessment cycles. The register is retained for ten years. Key outputs Master index of all compliance evidence Currency tracking and inspection retrieval functions Continuous maintenance Ten-year retention --- ## Fee Structures & Budget URL: https://docs.standardintelligence.com/fee-structures-and-budget Breadcrumb: Governance › Conformity Assessment › Notified Bodies › Fee Structures & Budget Last updated: 28 Feb 2026 Fee Structures & Budget AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Article 43 Notified body assessment is a commercial service. Common fee models include fixed fee for a defined scope (providing cost certainty but limited scope flexibility), time-and-materials (charging for actual hours spent), and hybrid models (fixed base fee for the desktop review with time-and-materials for the technical assessment phase). The AI Governance Lead budgets for the assessment as a distinct line item in the compliance programme, separate from internal assessment costs. Early-stage fee variability is expected as the notified body ecosystem matures. Organisations should obtain quotes from multiple bodies where possible, comparing not only cost but assessment methodology, timeline, and industry experience. The budget should account for the full assessment lifecycle, including pre-engagement (body selection, scope agreement, contract negotiation), desktop review, gap remediation, technical assessment, and final reporting. Organisations should also budget for at least two to three assessment cycles, as multiple rounds of review and remediation are common. Key outputs Assessment budgeted as a distinct compliance programme line item Fee model comparison across available notified bodies Full lifecycle costing (pre-engagement through certification) Budget for multiple assessment cycles --- ## Five-Method Risk Identification URL: https://docs.standardintelligence.com/five-method-risk-identification Breadcrumb: Governance › Risk Assessment › Five-Method Risk Identification Last updated: 28 Feb 2026 FMEA — Structured Failure Mode Analysis AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Article 9 Failure Mode and Effects Analysis (FMEA), following the IEC 60812 framework, is the workhorse of AI risk identification. For each system component (data pipeline, feature engineering, model inference, post-processing, user interface), the team enumerates the ways the component can fail, the effects of each failure, and the severity of those effects. In an AI context, failure modes extend beyond traditional software failures. They include data drift , concept drift, adversarial manipulation, distributional shift in input data, label noise propagation, and emergent biases. The practical approach starts with the system's architecture diagram, walking through each component and asking three questions at each node: what can go wrong with the data entering this component, what can go wrong with the processing within this component, and what can go wrong with the output leaving this component? Each failure mode is assigned a Risk Priority Number (RPN): Severity (1–10) × Occurrence (1–10) × Detectability (1–10). RPNs above the defined threshold (often 100 on a 1,000-point scale, though this is system-specific) trigger mandatory mitigation. The completed FMEA worksheet is a Module 6 evidence artefact. Dedicated software (Relyence, Jama Connect) provides structured worksheets with automatic RPN calculation; spreadsheet templates serve smaller teams. Key outputs Component-by-component FMEA with AI-specific failure modes RPN scoring (Severity × Occurrence × Detectability) per failure mode Mandatory mitigation for RPNs above threshold Module 6 AISDP evidence Stakeholder Consultation AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Article 9 Risks to fundamental rights are frequently invisible from within the engineering team. Structured consultation with deployers, affected persons, civil society representatives, domain experts, and legal advisors surfaces risks that technical analysis alone will miss. The AI System Assessor schedules consultations at multiple points in the development lifecycle; a single pre-deployment review is insufficient. Consultations are documented with attendees, the questions posed, the responses received, and the actions taken in response. Each stakeholder concern is traced to a risk register entry or a documented rationale for exclusion. Generic stakeholder statements are insufficient; the consultation must produce actionable insights that inform the risk assessment. Stakeholder recruitment should prioritise persons and groups who will be directly affected by the system's decisions. For a recruitment screening system, this includes job applicants, HR professionals, diversity officers, and employment law specialists. For a credit scoring system, this includes consumer advocates, financial inclusion organisations, and data protection specialists. The AI System Assessor documents the recruitment methodology and any limitations on representativeness. Key outputs Structured stakeholder consultation at multiple lifecycle points Documentation (attendees, questions, responses, actions) Traceability from each concern to the risk register or exclusion rationale Module 6 AISDP evidence Regulatory Gap Analysis AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Articles 8–15 The team systematically maps the system's characteristics against every obligation in Articles 8 through 15, identifying areas where current design, testing, or operational practice falls short. Each gap becomes a risk entry in the risk register. The gap analysis should also consider the Master Compliance Questionnaire, working through each applicable question to ensure comprehensive coverage. Credo AI's policy engine can automate the Article-to-control mapping and flag unmatched requirements. For manual execution, a structured walkthrough of the Articles proceeds requirement by requirement, asking for each: is this requirement addressed in the current system design? If so, where is the evidence? If not, what is the gap and what is the associated risk? The regulatory gap analysis is particularly valuable early in the development lifecycle, when gaps can be addressed through design changes rather than retrospective remediations. It should be repeated after any substantial modification to confirm that no new gaps have been introduced. Key outputs Systematic Article 8–15 mapping against current system state Gap-to-risk-entry traceability Master Compliance Questionnaire walkthrough Module 6 AISDP evidence Adversarial Red-Teaming AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Article 9 A dedicated adversarial testing programme subjects the system to deliberate misuse scenarios, input manipulation, data poisoning attempts, and social engineering of the human oversight layer. Red-team exercises are conducted by the Technical SME with personnel who were not involved in the system's development, ensuring fresh perspectives and reduced confirmation bias. MITRE ATLAS provides the threat taxonomy, cataloguing known adversarial techniques against AI systems (evasion, poisoning, extraction, inference) with real-world case studies. The red team works through the ATLAS matrix, assessing which techniques are applicable and attempting to execute them in a controlled environment. Microsoft's PyRIT automates portions of this for LLM-based systems; for non-LLM systems, the red team manually crafts adversarial inputs. Each red-team finding becomes a risk entry recording the attack description, the severity, and the required mitigation. Red-teaming for risk identification purposes (this article) is distinct from the cybersecurity red-team exercises described in the security section; the former is oriented towards understanding the risk landscape, the latter towards testing the controls that mitigate those risks. Key outputs Adversarial red-teaming by independent personnel MITRE ATLAS threat taxonomy coverage Per-finding risk register entries Module 6 AISDP evidence Horizon Scanning AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Article 9 Horizon scanning reviews incidents, enforcement actions, and published risk assessments from comparable AI systems in the same or adjacent domains. A recruitment screening system provider should review enforcement actions against algorithmic hiring tools, published bias audits of similar systems, and academic literature on fairness in automated selection. This method identifies risks that the organisation might not have encountered internally. Three curated sources provide the foundation: the OECD AI Policy Observatory for regulatory developments, the Stanford HAI tracker for policy and research, and the AI Incident Database (maintained by the Responsible AI Collaborative) for real-world incident scenarios. The AI System Assessor performs horizon scanning at least quarterly and before every risk register review. Each horizon scanning finding is assessed for its relevance to the system's specific risk profile. A finding from a comparable system in the same domain warrants a risk register entry; a finding from a dissimilar system in a different domain may warrant a watching brief. The AI Governance Lead reviews horizon scanning findings as part of the quarterly governance review. Key outputs Quarterly horizon scanning using OECD, Stanford HAI, and AI Incident Database Relevance assessment per finding Risk register entries or watching briefs as appropriate Module 6 AISDP evidence --- ## Four-Tier Framework Overview URL: https://docs.standardintelligence.com/four-tier-framework-overview Breadcrumb: Governance › Risk Assessment › Risk Classification › Four-Tier Framework Overview Last updated: 28 Feb 2026 Four-Tier Framework Overview AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Articles 5, 6, 50 The EU AI Act establishes a four-tier risk classification framework that determines the obligations attaching to each AI system. Understanding where a system falls within this framework is the precondition for every subsequent risk assessment activity. Tier 1 covers prohibited practices under Article 5; systems falling here must cease operation immediately. Tier 2 covers high-risk systems under Annex III and Article 6, requiring the full AISDP (all 12 modules), conformity assessment , CE marking , and EU database registration . Tier 3 covers limited-risk systems under Article 50 , requiring a standard AISDP addressing transparency measures for chatbots, emotion recognition, biometric categorisation, and synthetic content generation. Tier 4 covers minimal-risk systems that trigger no specific obligations, requiring only a baseline AISDP confirming the classification rationale. The classification determination is the first step in the risk assessment process. Before conducting any detailed risk analysis, the assessor verifies that the system's Classification Decision Record (CDR) is current, that no reclassification trigger s have been activated, and that the classification rationale remains sound given the system's current deployment context. Key outputs Four-tier classification determination CDR currency verification before risk assessment proceeds AISDP scope determination (full, standard, or baseline) Module 6 AISDP documentation --- ## Fundamental Rights Impact Assessment URL: https://docs.standardintelligence.com/fundamental-rights-impact-assessment Breadcrumb: Governance › Risk Assessment › FRIA Last updated: 28 Feb 2026 ℹ is populated. are awaiting content from a subsequent batch. FRIA Scope & EU Charter Rights AISDP module(s): Module 11 (Deployer Obligations), Module 6 (Risk Management System) Regulatory basis: Article 27 Article 27 requires certain categories of deployer to conduct a Fundamental Rights Impact Assessment (FRIA) before putting the system into service. The obligation applies to deployers that are bodies governed by public law, private entities providing public services, and deployers using high-risk systems for creditworthiness evaluation or risk assessment and pricing in life and health insurance (as specified in Article 27(1), cross-referencing Article 26(10)). Other deployers are not subject to the Article 27 FRIA obligation, although conducting a voluntary FRIA is strongly recommended as a governance best practice. The assessment must cover all fundamental rights recognised under the EU Charter that could plausibly be affected by the system's operation. The FRIA must not be treated as a template exercise; the AI System Assessor tailors it to the specific system, deployment context, and affected populations. For a recruitment screening system, relevant Charter rights include non-discrimination (Charter Article 21), freedom to choose an occupation (Charter Article 15 ), protection of personal data (Charter Article 8), the right to an effective remedy (Charter Article 47), and the right to good administration (Charter Article 41). For a credit scoring system, the right to property (Charter Article 17 ) and the prohibition of discrimination in access to services become relevant. Each system requires a bespoke rights analysis. The FRA's 2023 methodology template provides the most applicable framework, structuring the assessment around six steps: describe the AI system and context; identify rights at stake; assess risks to those rights; evaluate existing safeguards; assess proportionality and necessity; and identify additional mitigation measures. Each step produces documentation feeding into AISDP Module 11. Key outputs Bespoke EU Charter rights analysis per system and deployment context Six-step FRA methodology application Documentation per step feeding into Module 11 Module 6 and Module 11 AISDP evidence --- ## Gap Assessment — Per Module URL: https://docs.standardintelligence.com/gap-assessment-per-module Breadcrumb: Governance › Delivery › Brownfield Compliance › Gap Assessment — Per Module Last updated: 28 Feb 2026 Gap Assessment — Per Module AISDP module(s): All 12 modules Regulatory basis: Articles 8–15 Many organisations must bring existing AI systems into compliance. The first step for brownfield system s is a gap assessment comparing the existing system against each AISDP module. The AI System Assessor examines what documentation exists, what is missing, what testing has been performed, what testing is needed, what governance controls are in place, and what controls are absent. The gap assessment produces a remediation plan with priorities, owners, and timelines. Priorities are set by compliance criticality: gaps in human oversight capability, serious incident reporting , and basic PMM are more urgent than documentation formatting deficiencies. The gap assessment should be conducted with realistic expectations. Systems developed before the AI Act's requirements were well understood will have significant gaps. The purpose of the assessment is to quantify those gaps and plan their remediation, not to achieve compliance in a single step. Key outputs Per-module gap assessment against AISDP requirements Remediation plan with priorities, owners, and timelines Realistic gap quantification for brownfield systems All 12 modules assessed --- ## GPAI Model Risk Assessment URL: https://docs.standardintelligence.com/gpai-model-risk-assessment Breadcrumb: Governance › Risk Assessment › GPAI Model Risk Last updated: 28 Feb 2026 ℹ Awaiting content from a subsequent batch (v13). Awaiting content. --- ## Harmonised Standards Landscape URL: https://docs.standardintelligence.com/harmonised-standards-landscape Breadcrumb: Governance › Certification › Harmonised Standards Landscape Last updated: 28 Feb 2026 CEN/CENELEC JTC 21 AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Article 40 CEN/CENELEC Joint Technical Committee 21 (JTC 21) is responsible for developing the harmonised standards under the EU AI Act. As of early 2026, the standards development process remains ongoing, with the first tranche anticipated but not yet finalised. The working groups are developing standards that will map to the AI Act's requirements across risk management, data governance , transparency, human oversight, cybersecurity, and conformity assessment . Organisations should monitor the JTC 21 working groups closely. Participation in national mirror committees (the national standards bodies that feed into CEN/CENELEC) provides early visibility into emerging standards and an opportunity to influence their content. The Legal and Regulatory Advisor maintains a standards monitoring register recording each relevant working group, its current status, the anticipated publication timeline, and the AISDP modules it will affect. Once harmonised standards are published in the Official Journal, the compliance landscape changes significantly. Organisations that have built their compliance posture on international reference standards will need to conduct a re-mapping exercise to align with the harmonised standards. Key outputs JTC 21 monitoring through national mirror committee participation Standards monitoring register with per-working-group status Anticipated impact assessment on AISDP modules Module 6 AISDP documentation Interim Reference Standards AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Article 40 In the absence of harmonised standards, reference to international standards provides a credible compliance framework. Six standards form the interim foundation: ISO/IEC 42001:2023 (AI management systems), ISO/IEC 23894:2023 (AI risk management), ISO/IEC 25012:2008 (data quality), ISO/IEC 25010:2023 (system and software quality), ISO/IEC 27001:2022 (information security management), and ISO/IEC 38507:2022 (governance implications of AI). These standards do not carry the presumption of conformity that harmonised standards provide under Article 40. They do provide a structured, internationally recognised framework that demonstrates a credible compliance approach. An assessor or competent authority will view compliance with relevant international standards more favourably than an ad hoc approach, even if the standards do not trigger the formal burden shift. The AISDP's standards compliance register records each standard applied, the specific clauses relied upon, and the mapping to the corresponding AI Act requirements. Where only part of a standard was applied, the register identifies the specific clauses and the rationale for partial application. Key outputs Six interim reference standards applied and documented Standards compliance register with per-clause mapping Partial application documented with rationale Module 6 AISDP documentation Presumption of Conformity (Art. 40) — Burden Shift AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Article 40 Article 40 provides that high-risk AI systems conforming to harmonised standards published in the Official Journal shall be presumed to conform to the requirements covered by those standards. This presumption significantly strengthens the provider's compliance posture by shifting the burden of proof: rather than the provider demonstrating compliance from first principles, the competent authority must demonstrate non-compliance despite adherence to the harmonised standard. The presumption is rebuttable. A competent authority can challenge the provider's claim of standard compliance if the evidence does not support it. The presumption also covers only the requirements addressed by the specific harmonised standard; requirements not covered by the standard must still be demonstrated through other evidence. For the AISDP, the practical implication is that compliance with harmonised standards should be documented meticulously. Each standard's scope should be mapped to the AI Act requirements it covers, and evidence of compliance should be structured around the standard's requirements. Where a harmonised standard does not fully cover an AI Act requirement, the gap and the supplementary evidence addressing it should be explicitly documented. Key outputs Burden shift from provider demonstration to authority challenge Per-standard scope mapping to AI Act requirements Gap identification for requirements not covered by standards Module 6 AISDP documentation Re-Mapping Exercise When Standards Published AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Article 40 The transition from interim international standards to harmonised standards requires a structured re-mapping exercise. Organisations trace their existing compliance evidence to the new standard's requirements, identify gaps where the harmonised standard introduces requirements beyond the international standard, and plan remediation for any gaps identified. The re-mapping exercise proceeds in three steps. First, the AI System Assessor maps each harmonised standard requirement to the existing compliance evidence, identifying which evidence satisfies the new requirement and which does not. Second, the Conformity Assessment Coordinator assesses the gaps and estimates the remediation effort. Third, the AI Governance Lead approves a remediation plan and timeline. The re-mapping should be completed within six months of publication of the harmonised standard in the Official Journal, allowing sufficient time to benefit from the presumption of conformity. The exercise is documented as a Module 6 evidence artefact, demonstrating the organisation's proactive response to the evolving standards landscape. Key outputs Three-step re-mapping (evidence mapping, gap assessment , remediation plan) Six-month completion target from Official Journal publication Gap remediation planned and tracked Module 6 AISDP evidence --- ## Incident Reporting Across Borders URL: https://docs.standardintelligence.com/incident-reporting-across-borders Breadcrumb: Governance › Regulator Interaction › Multi-Jurisdiction Deployment › Incident Reporting Across Borders Last updated: 28 Feb 2026 Incident Reporting Across Borders AISDP module(s): Module 10 (Compliance Record) Regulatory basis: Article 73 A serious incident under Article 73 must be reported to the market surveillance authority of the member state where the incident occurred. For systems deployed across multiple member states, the incident response plan must include pre-identified reporting channels for every deployment jurisdiction, pre-translated incident report templates where the authority requires the national language, and a coordination procedure ensuring the same incident is reported consistently to all relevant authorities. Where an incident occurs in one jurisdiction but affects persons in another, the coordination procedure addresses which authorities receive which reports and in what sequence. The parallel reporting framework described in provides the operational basis, extended to cover all deployment jurisdictions. The incident response plan identifies the authority contact, reporting format, language requirement, and deadline for each deployment jurisdiction. This information is pre-populated in the regulator contact register and tested annually during tabletop exercises. Key outputs Pre-identified reporting channels per deployment jurisdiction Pre-translated incident report templates Cross-border coordination procedure Module 10 AISDP documentation --- ## Inspection Readiness Drill Records URL: https://docs.standardintelligence.com/inspection-readiness-drill-records Breadcrumb: Governance › Regulator Interaction › Artefacts › Inspection Readiness Drill Records Last updated: 28 Feb 2026 Inspection Readiness Drill Records AISDP module(s): Module 10 (Compliance Record) Regulatory basis: Article 74 The drill records archive documents each annual inspection rehearsal: the date, the mock inspector team, the requests made, the time to fulfil each request, the gaps identified, and the remediation actions taken. The archive demonstrates continuous improvement in inspection readiness over time. Key outputs Per-drill documentation with timing and gap identification Remediation actions tracked and closed Continuous improvement trajectory visible Module 10 AISDP evidence --- ## Inspection Readiness URL: https://docs.standardintelligence.com/inspection-readiness Breadcrumb: Governance › Regulator Interaction › Inspection Readiness Last updated: 28 Feb 2026 Readiness Capability — AISDP Retrieval & Drills AISDP module(s): Module 10 (Compliance Record) Regulatory basis: Article 74 The organisation maintains an "inspection-ready" posture at all times. The AISDP and evidence pack are current. Evidence is organised and accessible; an inspector should not need to wait for someone to locate it. Monitoring dashboards are operational and displaying current data. The human oversight interface can be demonstrated on request. Annual rehearsal exercises (the "30-minute drill") test whether the team can produce each category of requested artefact within 30 minutes. Mock inspectors request specific artefacts using regulatory language (for example, "please provide the records required under Annex IV point 2(b)"), ask probing questions, and test the team's ability to explain the risk management process and fairness methodology. A pre-configured IAM role ("regulatory-inspector") provides read-only access to the evidence repository, monitoring dashboards, logging infrastructure, model registry , and AISDP documentation. Proprietary source code, commercial contracts, and unrelated information are excluded. The Legal and Regulatory Advisor tests the role monthly. Key outputs Inspection-ready posture maintained continuously Annual 30-minute drill with mock inspectors Pre-configured regulatory access IAM role Drill results documented as Module 10 evidence During Inspection — Spokesperson, SME, Access, Logging AISDP module(s): Module 10 (Compliance Record) Regulatory basis: Article 74 When an inspection is initiated, the AI Governance Lead serves as primary point of contact. A designated Inspection Coordinator manages logistics: scheduling interviews, retrieving documents, arranging system access, maintaining a log of every document provided and every question asked. The inspection log serves as the organisation's record of the inspection. The organisation provides everything within the lawful scope of the inspection promptly and cooperatively. Obstructing or delaying carries penalties under Article 99(5). Where a request touches on commercially sensitive information beyond the regulatory scope, the Legal and Regulatory Advisor engages with inspectors to agree confidentiality protections. Key personnel (AI Governance Lead, Technical SME, Legal and Regulatory Advisor) are available at short notice. Their roles during inspection are predefined: the AI Governance Lead addresses governance and strategic questions, the Technical SME addresses technical architecture and testing questions, and the Legal and Regulatory Advisor addresses regulatory interpretation and data protection questions. Key outputs AI Governance Lead as primary contact, Inspection Coordinator for logistics Inspection log of all documents provided and questions asked Predefined roles for key personnel Module 10 AISDP evidence Post-Inspection Actions AISDP module(s): Module 10 (Compliance Record) Regulatory basis: Article 74 Following an inspection, the authority may issue findings, recommendations, or corrective action requirements. The Conformity Assessment Coordinator enters each finding into the Non-Conformity Register , assigns a responsible person, and tracks remediation within the required timeline. Remediation evidence is documented and, where the authority requests confirmation, submitted. Inspection findings may reveal systemic weaknesses affecting other systems in the portfolio. The AI Governance Lead assesses whether findings indicate organisation-wide gaps and, if so, initiates a broader remediation programme. A finding about inadequate evidence currency for one system, for example, may indicate a process weakness affecting all systems. The post-inspection record (findings received, remediation actions, evidence of closure, authority confirmation) is retained as Module 10 evidence for the ten-year period. Key outputs Inspection findings entered into Non-Conformity Register Cross-portfolio systemic weakness assessment Post-inspection record retained for ten years Module 10 AISDP evidence Dual Readiness (NIS2 & AI Act) AISDP module(s): Module 9 (Cybersecurity), Module 10 (Compliance Record) Regulatory basis: Article 74, NIS2 Directive Organisations subject to both the AI Act and NIS2 may face inspections from different authorities under different legal bases. The inspection readiness framework should accommodate both regimes. The regulatory access IAM role includes both AI Act evidence (AISDP, assessment records, monitoring dashboards) and NIS2 evidence (security policies, incident logs, vulnerability management records, supply chain documentation). The cross-regulatory mapping tables demonstrate how controls satisfy both regimes simultaneously. During an AI Act inspection, the organisation can show that its cybersecurity controls also satisfy NIS2 requirements; during a NIS2 audit, the organisation can show that its AI-specific security measures are part of a comprehensive programme. The 30-minute drill should include NIS2-specific requests alongside AI Act requests, testing the team's ability to serve both regulatory audiences from the same evidence infrastructure. Key outputs Dual-regime inspection readiness (AI Act and NIS2) Shared regulatory access IAM role covering both regimes Cross-regulatory mapping tables as dual-purpose evidence Module 9 and Module 10 AISDP documentation --- ## Interaction Protocol URL: https://docs.standardintelligence.com/interaction-protocol Breadcrumb: Governance › Conformity Assessment › Notified Bodies › Interaction Protocol Last updated: 28 Feb 2026 Interaction Protocol AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Annex VII A formal interaction protocol is established with the notified body before the assessment begins. The protocol covers the single point of contact on each side (the Conformity Assessment Coordinator for the provider, the lead assessor for the notified body), the communication channels and document exchange mechanisms (encrypted transfer for sensitive materials), the scope of access required (source code repositories, training infrastructure, production systems), the confidentiality arrangements for proprietary model architectures and commercially sensitive data, and the dispute resolution procedure. The Conformity Assessment Coordinator maintains a formal interaction log recording every substantive communication: meeting minutes, document submissions, questions raised, responses provided, interim findings received. This log serves as evidence of cooperative engagement, which is a mitigating factor under Article 99(7) if compliance issues arise later. Internal SLAs for responding to Requests for Information (RFIs) and Requests for Evidence (RFEs) are established: five business days for routine queries and two business days for urgent queries. Delayed responses extend the assessment timeline and signal inadequate internal coordination. Key outputs Formal interaction protocol with SPOC, channels, access scope, confidentiality Interaction log as compliance evidence Internal SLAs for RFI/RFE response (5 days routine, 2 days urgent) Dispute resolution procedure --- ## Internal Assessment Report URL: https://docs.standardintelligence.com/internal-assessment-report Breadcrumb: Governance › Conformity Assessment › Artefacts › Internal Assessment Report Last updated: 28 Feb 2026 Internal Assessment Report AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Annex VI The Internal Assessment Report is the formal output of the conformity assessment . It summarises the assessment scope, methodology, assessor team and their qualifications, findings by phase (desktop review, evidence verification, live system verification, stakeholder interviews), the Non-Conformity Register summary by severity, and the overall conclusion (conformity demonstrated, conformity demonstrated subject to remediation, or conformity not demonstrated). The report is signed by the lead assessor and reviewed by the AI Governance Lead . It is retained for ten years as the evidential foundation for the Declaration of Conformity . A competent authority reviewing the Declaration will expect to see the Assessment Report that supports it. Key outputs Formal Assessment Report with scope, methodology, findings, and conclusion Lead assessor signature and AI Governance Lead review Ten-year retention Module 6 AISDP evidence --- ## Internal Audit Assurance Lead — Annual Audit URL: https://docs.standardintelligence.com/internal-audit-assurance-lead-annual-audit Breadcrumb: Governance › Delivery › Organisational Roles › Internal Audit Assurance Lead — Annual Audit Last updated: 28 Feb 2026 Internal Audit Assurance Lead — Annual Audit AISDP module(s): All modules (assurance) Regulatory basis: Article 17 The Internal Audit Assurance Lead provides independent verification that the certification process was followed correctly, evidence is complete and authentic, and no material deficiencies were overlooked. The role conducts the annual oversight audit (testing monitoring infrastructure, escalation pathways, break-glass procedures , training currency, and non-retaliation commitments) and reports findings to the audit committee. The Assurance Lead is informed (I) during the certification process and provides an independent assurance layer after the assessment is complete. The role tests whether the assessment was conducted with adequate rigour and whether the evidence supports the Declaration of Conformity . For organisations with a dedicated internal audit function, this role integrates naturally. For smaller organisations, external consultants or peer review arrangements provide the independent assurance function. Key outputs Independent verification of certification process integrity Annual oversight audit with board/audit committee reporting Post-assessment assurance layer RACI "I" during assessment, independent review after --- ## ISO 42001:2023 — Foundation URL: https://docs.standardintelligence.com/iso-420012023-foundation Breadcrumb: Governance › Conformity Assessment › QMS Framework › QMS Framework › ISO 42001:2023 — Foundation Last updated: 28 Feb 2026 ISO 42001:2023 — Foundation AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Article 17 ISO/IEC 42001:2023 (Artificial Intelligence Management System) provides the most directly relevant framework for the QMS. Published in December 2023, it specifies requirements for establishing, implementing, maintaining, and continually improving an AI management system. Its control set aligns with the AI Act's requirements, covering risk management, data management, system engineering, verification, validation, deployment, operation, and monitoring. Certification to ISO 42001 does not constitute EU AI Act conformity assessment ; the two are distinct processes with different legal significance. ISO 42001 provides a structured foundation that makes conformity assessment significantly more efficient by establishing the governance processes, documentation practices, and review cycles that the AI Act's QMS requirements demand. For organisations already ISO-aligned (for example, through ISO 27001 for information security or ISO 9001 for quality management), extending to ISO 42001 leverages existing management system infrastructure and reduces the incremental effort. The AISDP's QMS documentation should cross-reference the ISO 42001 controls to the corresponding AI Act requirements. Key outputs ISO 42001:2023 as QMS foundation (not conformity assessment substitute) Control set alignment with AI Act Article 17 requirements Cross-reference to existing ISO certifications QMS documentation --- ## Iterative Risk Management URL: https://docs.standardintelligence.com/iterative-risk-management Breadcrumb: Governance › Risk Assessment › Iterative Risk Management Last updated: 28 Feb 2026 ℹ Awaiting content from a subsequent batch (v13). Awaiting content. --- ## Jurisdiction-Specific Guidance & Quarterly Monitoring URL: https://docs.standardintelligence.com/jurisdiction-specific-guidance-and-quarterly-monitoring Breadcrumb: Governance › Regulator Interaction › Multi-Jurisdiction Deployment › Jurisdiction-Specific Guidance & Quarterly Monitoring Last updated: 28 Feb 2026 Jurisdiction-Specific Guidance & Quarterly Monitoring AISDP module(s): Module 10 (Compliance Record) Regulatory basis: Article 70 The Legal and Regulatory Advisor maintains a jurisdiction register capturing, for each deployment member state, the designated competent authority, market surveillance authority, data protection authority, sector-specific regulators, published guidance and interpretive notes, preferred communication channels, and language requirements. The register is reviewed quarterly as new guidance is published and authority structures evolve. When a national competent authorit y publishes new guidance, the Legal and Regulatory Advisor assesses whether it is consistent with other authorities' positions and with AI Office publications. Inconsistencies are documented in the conflicting guidance register with the organisation's chosen position and rationale. Systematic monitoring uses the IAPP EU AI Act Regulatory Directory, the Future of Life Institute's national implementation tracker, and direct subscriptions to NCA publication feeds where available. Key outputs Per-jurisdiction register with authority details and published guidance Quarterly review and consistency assessment Systematic monitoring through IAPP, FLI, and NCA feeds Module 10 AISDP documentation --- ## Keeping Registration Current URL: https://docs.standardintelligence.com/keeping-registration-current Breadcrumb: Governance › Regulator Interaction › Keeping Registration Current Last updated: 28 Feb 2026 Updates on Material Changes & Version Alignment AISDP module(s): Module 10 (Compliance Record) Regulatory basis: Article 49 The AI Act requires registration information to be kept up to date throughout the system's operational lifetime. Any change to the registered information, including system status, intended purpose, deployment member states, or certification details, must be reflected in the database. The Conformity Assessment Coordinator establishes a change management process that triggers database updates whenever relevant changes occur. This process integrates with the broader AISDP change management framework. When a substantial modification or other material change is processed through the change management workflow, the Conformity Assessment Coordinator assesses whether the change affects any registered information and submits an update if so. The registration version should align with the AISDP version; a discrepancy indicates that a registration update was missed. Key outputs Registration updates triggered by change management process Alignment between registration and AISDP versions Conformity Assessment Coordinator responsibility for ongoing updates Module 10 AISDP documentation --- ## Language & Translation URL: https://docs.standardintelligence.com/language-and-translation Breadcrumb: Governance › Regulator Interaction › Multi-Jurisdiction Deployment › Language & Translation Last updated: 28 Feb 2026 Language & Translation AISDP module(s): Module 8 (Transparency), Module 10 (Compliance Record) Regulatory basis: Article 13 (3)(b)(ii), Article 47, Article 73 Different compliance documents have different language requirements. The AISDP itself is maintained in the provider's working language (typically English). Instructions for Use must be translated into the official language of each deployment member state under Article 13(3)(b)(ii). The Declaration of Conformity follows member state requirements; some accept English, others require the national language. Serious incident reports follow the receiving authority's language requirements. Translation quality requires domain expertise across EU regulatory language, AI terminology, and the application domain. Mistranslation of a performance threshold or limitation can have compliance consequences. The workflow is: human translation by a domain-aware translator, technical review by a bilingual domain expert, and retention as a controlled document. A standardised glossary mapping key terms across deployment languages ensures consistency. For a five-language deployment, initial translation costs typically fall between EUR 10,000 and EUR 30,000 per system, with annual maintenance of EUR 3,000 to EUR 10,000. Timeline planning allows two to four weeks for initial translation and one to two weeks for updates. Key outputs Per-document-type language requirements mapped Domain-expert translation with technical review Standardised multi-language glossary Module 8 and Module 10 AISDP documentation --- ## Legal & Regulatory Advisor — Provider Boundary, IP, Cross-Regulatory URL: https://docs.standardintelligence.com/legal-and-regulatory-advisor-provider-boundary-ip-cross Breadcrumb: Governance › Delivery › Organisational Roles › Legal & Regulatory Advisor — Provider Boundary, IP, Cross-Regulatory Last updated: 28 Feb 2026 Legal & Regulatory Advisor — Provider Boundary, IP, Cross-Regulatory AISDP module(s): Cross-cutting (legal review) Regulatory basis: Article 17 The Legal and Regulatory Advisor reviews evidence for legal sufficiency, advises on novel or ambiguous regulatory interpretations, and reviews the Declaration of Conformity for accuracy. The role is consulted (C) on risk classification , risk assessment , conformity assessment , and serious incident reporting . The Advisor is responsible (R) for FRIA oversight and Declaration of Conformity legal review. The Advisor manages cross-regulatory coordination (AI Act, GDPR , NIS2 , sector-specific legislation), insurance review, translation quality oversight, conflicting guidance resolution, and the jurisdiction register for multi-state deployments. The role also advises on provider-deployer boundary questions, intellectual property issues, and the legal implications of model selection decisions. For small organisations, legal counsel contributes on a consultancy basis during certification cycles. For medium and large organisations, dedicated legal capacity is provided during assessment periods. Key outputs Legal sufficiency review across all compliance domains Cross-regulatory coordination (GDPR, NIS2, sector-specific) Declaration of Conformity legal review before signature RACI "R" for FRIA oversight and "C" across most domains --- ## Liability & Insurance URL: https://docs.standardintelligence.com/liability-and-insurance Breadcrumb: Governance › Certification › Liability & Insurance Last updated: 28 Feb 2026 Legal Significance — Binding Statement & Personal Exposure AISDP module(s): Module 10 (Compliance Record) Regulatory basis: Article 47, Article 99(5) The Declaration is a formal legal assertion that the system conforms to each applicable requirement at the time of signing. If the system is subsequently found to be non-conforming, the Declaration becomes evidence that the provider either knew or should have known of the non-conformity. A Declaration signed in the face of unresolved non-conformities exposes the signatory to personal liability and the organisation to Tier 3 penalties under Article 99(5) of up to EUR 7.5 million or 1% of global annual turnover for providing misleading information. The AI Governance Lead , who typically signs the Declaration, must ensure that the internal conformity assessment is complete, all critical non-conformities are resolved, remaining non-conformities have documented remediation plans, and the assessment report supports the Declaration's claims. The Declaration is not a statement of intent or aspiration; it is a binding commitment. The signing ceremony should be treated with appropriate gravity. The AI Governance Lead confirms awareness of the legal implications before signing. The Legal and Regulatory Advisor witnesses the signature and confirms the Declaration's legal sufficiency. Key outputs Legal significance communicated to signatory Pre-signature confirmation that assessment supports claims Signing ceremony with Legal and Regulatory Advisor review Module 10 AISDP evidence D&O Insurance Exposure AISDP module(s): Module 10 (Compliance Record) Regulatory basis: Article 99 The AI Governance Lead who signs the Declaration of Conformity may face personal liability if the Declaration is found to be inaccurate. Directors' and officers' (D&O) insurance is reviewed by the Legal and Regulatory Advisor to confirm that AI Act compliance decisions fall within the policy's coverage. Some D&O policies exclude regulatory fines; the Legal and Regulatory Advisor assesses this exclusion in light of the Article 99 penalty framework. If the policy excludes regulatory penalties, supplementary coverage may be needed, or the organisation may need to negotiate a policy amendment. The D&O review should also consider whether the policy covers defence costs in the event of a competent authority investigation. The D&O review findings are documented and shared with the AI Governance Lead before the Declaration is signed. The AI Governance Lead should understand their personal exposure and the insurance coverage available before accepting the signing responsibility. Key outputs D&O policy review for AI Act coverage Regulatory fine exclusion assessment Personal exposure communicated to signatory before signing Module 10 AISDP documentation Professional Indemnity & Product Liability AISDP module(s): Module 10 (Compliance Record) Regulatory basis: Directive (EU) 2024/2853 (Product Liability) The revised Product Liability Directive includes software and AI systems within its scope. Defective AI system outputs that cause damage to individuals may give rise to product liability claims. The provider's product liability insurance is reviewed by the Legal and Regulatory Advisor to confirm that AI system outputs are within scope and that policy limits are adequate for the system's deployment scale. For SaaS-based high-risk AI systems, professional indemnity insurance may be more relevant than product liability. The policy should cover claims arising from fairness deficiencies, inaccurate outputs, and failures of human oversight mechanisms. The Legal and Regulatory Advisor assesses whether existing coverage extends to AI-specific failure modes or whether supplementary coverage is needed. The insurance review should also consider cyber insurance coverage for AI-specific incidents ( model extraction , data poisoning , adversarial attacks), which may fall outside traditional cyber policies. The organisation confirms that cyber insurance covers AI-specific incident types and that the policy's incident response provisions align with the AI Act incident response plan . Key outputs Product liability review for AI system output coverage Professional indemnity review for SaaS-based systems Cyber insurance review for AI-specific incident types Module 10 AISDP documentation Insurance Review Before Signing AISDP module(s): Module 10 (Compliance Record) Regulatory basis: Article 99 The Legal and Regulatory Advisor conducts the insurance review during Phase 3 (Architecture and Design) of the delivery process, when the system's risk profile is sufficiently defined to inform the coverage assessment. The review covers D&O, product liability, professional indemnity, and cyber insurance across four dimensions: coverage scope (do the policies cover AI Act-related claims?), exclusions (are regulatory fines, AI-specific incidents, or compliance decisions excluded?), policy limits (are the limits adequate for the system's deployment scale and penalty exposure?), and notification requirements (do the policies require early notification of potential claims, and are the organisation's incident response procedures aligned with these requirements?). The review findings are documented and shared with the AI Governance Lead and the organisation's risk management function. Any coverage gaps are escalated as risk register entries. The insurance review is completed before the Declaration of Conformity is signed, ensuring the signatory understands the insurance protection available. Key outputs Four-dimension insurance review (scope, exclusions, limits, notification) Completed during Phase 3 before Declaration signing Coverage gaps escalated to the risk register Module 10 AISDP documentation --- ## Maintaining NB Certification URL: https://docs.standardintelligence.com/maintaining-nb-certification Breadcrumb: Governance › Conformity Assessment › Notified Bodies › Maintaining NB Certification Last updated: 28 Feb 2026 Maintaining NB Certification AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Article 44, Annex VII points 5.1–5.3 Notified body certification is not permanent. Article 44 provides for periodic reassessment, and the notified body may conduct unannounced audits. Annex VII points 5.1 through 5.3 require ongoing surveillance: QMS compliance verification, premises access, and periodic audits with possible additional testing. Changes to the system that constitute substantial modifications under Article 3(23) may trigger a supplementary assessment. The Conformity Assessment Coordinator maintains a change notification log, submitting notifications to the notified body when changes to the QMS or the system list occur. The notified body then decides whether reassessment is needed. Organisations must maintain the same documentation discipline and evidence currency after certification as during the initial assessment. A certification that was hard-won through a rigorous assessment process can be lost through post-certification complacency. The continuous assessment model provides the framework for maintaining certification-grade compliance on an ongoing basis. Key outputs Periodic reassessment and unannounced audit readiness Change notification log submitted to the notified body Post-certification documentation discipline maintained Continuous assessment model as certification maintenance framework --- ## Milestones Before August 2026 URL: https://docs.standardintelligence.com/milestones-before-august-2026 Breadcrumb: Governance › Delivery › Brownfield Compliance › Milestones Before August 2026 Last updated: 28 Feb 2026 Milestones Before August 2026 AISDP module(s): All 12 modules Regulatory basis: Article 113 The August 2026 deadline applies to the Chapter 2 requirements (Articles 8–15) and the conformity assessment , registration, and CE marking obligations for Annex III high-risk systems. Annex I product systems benefit from an extended deadline of August 2027. Article 4 (AI literacy) and Article 5 (prohibited practices) already apply. GPAI obligations under Articles 51–56 apply from August 2025. For brownfield compliance , the AI Governance Lead sets interim milestones. A reasonable schedule for a medium-complexity system: Phase A complete by Q4 2025 (critical controls operational), Phase B complete by Q1 2026 (AISDP substantially assembled), Phase C complete by Q2 2026 (infrastructure gaps closed), and formal conformity assessment conducted in Q2–Q3 2026 with the Declaration of Conformity signed before August 2026. Organisations with large portfolios may not achieve full compliance for all systems by August 2026. In this case, the portfolio prioritisation framework determines which systems are addressed first, and the organisation documents its compliance programme's scope and timeline for the remaining systems. Key outputs Milestone schedule aligned with August 2026 deadline Phased A/B/C completion targets by quarter Portfolio prioritisation for organisations with large portfolios Compliance programme documentation for systems beyond the deadline --- ## Multi-Jurisdiction Checklist URL: https://docs.standardintelligence.com/multi-jurisdiction-checklist Breadcrumb: Governance › Regulator Interaction › Artefacts › Multi-Jurisdiction Checklist Last updated: 28 Feb 2026 Multi-Jurisdiction Checklist AISDP module(s): Module 10 (Compliance Record) Regulatory basis: Articles 49, 70, 73 Completed per-jurisdiction deployment checklists are retained as Module 10 evidence, documenting that all pre-deployment steps were completed for each deployment jurisdiction. The checklists form a deployment audit trail showing systematic expansion across the single market. Key outputs Completed checklist per deployment jurisdiction Deployment audit trail Module 10 AISDP evidence --- ## Multi-Jurisdiction Cost Implications & Phased Rollout URL: https://docs.standardintelligence.com/multi-jurisdiction-cost-implications-and-phased-rollout Breadcrumb: Governance › Regulator Interaction › Multi-Jurisdiction Deployment › Multi-Jurisdiction Cost Implications & Phased Rollout Last updated: 28 Feb 2026 Multi-Jurisdiction Cost Implications & Phased Rollout AISDP module(s): Module 10 (Compliance Record) Regulatory basis: Articles 49, 70 Multi-jurisdiction deployment adds incremental cost across several categories: translation, regulatory monitoring (scaling with the number of jurisdictions), local legal counsel for jurisdiction-specific guidance review, incident response capability across time zones and languages, and deployer support varying by jurisdiction. The AI Governance Lead estimates incremental costs per jurisdiction and factors them into the deployment business case. A common approach is to prioritise deployment to a small number of member states initially, building operational maturity before expanding. This phased rollout allows the organisation to validate its compliance infrastructure in a manageable number of jurisdictions, refine processes based on early experience, and scale incrementally as capacity grows. The phased approach also mitigates the risk of early enforcement: an organisation that has achieved mature compliance in three jurisdictions is better positioned than one that has achieved thin compliance across fifteen. Key outputs Per-jurisdiction incremental cost estimation Phased rollout strategy (small initial deployment, expand with maturity) Cost factored into deployment business case Module 10 AISDP documentation --- ## Multi-Jurisdiction Deployment URL: https://docs.standardintelligence.com/multi-jurisdiction-deployment Breadcrumb: Governance › Regulator Interaction › Multi-Jurisdiction Deployment Last updated: 28 Feb 2026 Language & Translation Jurisdiction-Specific Guidance & Quarterly Monitoring Deployer Communications per Member State Incident Reporting Across Borders Data Sovereignty Constraints Mutual Recognition & Single Market Third-Country Providers — Authorised Representative (Art. 22) Per-Jurisdiction Deployment Checklist Multi-Jurisdiction Cost Implications & Phased Rollout --- ## Multi-Jurisdiction Registration URL: https://docs.standardintelligence.com/multi-jurisdiction-registration Breadcrumb: Governance › Regulator Interaction › EU Database Registration › Multi-Jurisdiction Registration Last updated: 28 Feb 2026 Multi-Jurisdiction Registration AISDP module(s): Module 10 (Compliance Record) Regulatory basis: Article 49 , Article 71 The EU database is a single European database; a provider's registration covers the EU market. The coordination challenge arises with deployer registration (which is jurisdiction-specific for public authorities) and with supporting deployers in different member states who may request different information or assistance with their own registration obligations. The Conformity Assessment Coordinator maintains a register of all deployment jurisdictions and confirms that the EU database registration accurately reflects the member states where the system has been placed on the market, put into service, or made available. Updates are submitted when the system is deployed in additional jurisdictions. For multi-jurisdiction deployment s, the registration data must be consistent across all interactions with authorities. A discrepancy between the EU database entry and information provided to a national competent authorit y creates a compliance vulnerability. Key outputs Single provider registration covering all deployment jurisdictions Jurisdiction register maintained by Conformity Assessment Coordinator Consistency between EU database and national authority interactions Module 10 AISDP documentation --- ## Multi-System & Continuous Assessment URL: https://docs.standardintelligence.com/multi-system-and-continuous-assessment Breadcrumb: Governance › Conformity Assessment › Multi-System & Continuous Assessment Last updated: 28 Feb 2026 This section covers the following topics: Multi-System Assessment Continuous Assessment & Surveillance --- ## Multi-System Assessment URL: https://docs.standardintelligence.com/multi-system-assessment Breadcrumb: Governance › Conformity Assessment › Multi-System & Continuous Assessment › Multi-System Assessment Last updated: 28 Feb 2026 Multi-System Assessment Coordination AISDP module(s): All modules (cross-cutting) Regulatory basis: Article 17 , Annex VI Organisations with multiple high-risk AI systems coordinate their assessments to avoid duplication and maintain consistency. Three mechanisms support this coordination. A shared evidence strategy: evidence artefacts applying to multiple systems (QMS documentation, organisational policies, infrastructure security configurations, training records) are assessed once by the Conformity Assessment Coordinator and referenced by each system's assessment. The evidence register for each system distinguishes between system-specific and shared evidence with clear version references. A staggered assessment calendar: a quarterly rolling schedule distributes the assessor workload, ensuring continuous compliance verification and avoiding an annual compliance sprint. Cross-system findings analysis: the AI Governance Lead reviews the aggregate Non-Conformity Register across all systems quarterly, identifying patterns suggesting organisational gaps rather than system-specific problems. Recurring non-conformities across systems (persistent training deficiencies, common documentation omissions, evidence currency issues) signal systemic weaknesses in the QMS that require organisational remediation, not system-by-system fixes. Key outputs Shared evidence strategy with clear version references Staggered quarterly assessment calendar Cross-system non-conformity pattern analysis Organisational gap identification and remediation --- ## Mutual Recognition & Single Market URL: https://docs.standardintelligence.com/mutual-recognition-and-single-market Breadcrumb: Governance › Regulator Interaction › Multi-Jurisdiction Deployment › Mutual Recognition & Single Market Last updated: 28 Feb 2026 Mutual Recognition & Single Market AISDP module(s): Module 10 (Compliance Record) Regulatory basis: Regulation (EU) 2024/1689 (direct applicability as EU Regulation); Regulation (EC) No 765/2008 Article 30 ( CE marking general principles) The AI Act is a Regulation, meaning it applies directly and uniformly across all member states without requiring national transposition. A system that has undergone conformity assessment and bears the CE marking is accepted across the single market without additional national conformity assessment. Member states cannot impose further conformity requirements on compliant systems, as this would contradict the Regulation's direct applicability and the general internal market principles under the TFEU. In practice, mutual recognition may be tested as national competent authorit ies develop their own interpretive approaches. The AISDP serves as the universal compliance evidence package; its completeness and rigour determine whether mutual recognition operates smoothly. An organisation with a thorough, well-evidenced AISDP can demonstrate compliance to any member state's authority, regardless of where the initial assessment was conducted. Where a competent authority challenges the system's compliance despite CE marking and valid Declaration, the organisation's response is grounded in the AISDP evidence pack. The Legal and Regulatory Advisor manages such challenges, referencing the AI Act's direct applicability and the supporting evidence. Key outputs Single market principle with CE marking acceptance AISDP as universal compliance evidence for all member states Article 23 reference for challenging national barriers Module 10 AISDP documentation --- ## National Competent Authority Landscape URL: https://docs.standardintelligence.com/national-competent-authority-landscape Breadcrumb: Governance › Regulator Interaction › NCA Landscape Last updated: 28 Feb 2026 NCA Maturity Levels AISDP module(s): Module 10 (Compliance Record) Regulatory basis: Article 70 National competent authorities are at varying stages of operational readiness as of early 2026. Frontrunner member states (Spain with AESIA, Ireland, the Netherlands) have designated authorities, published procedures, and in some cases launched regulatory sandbox es. Progressing member states (Germany, France) have draft legislation or designated partial authority structures. Lagging member states (fourteen as of late 2025) have not yet designated any competent authority. This fragmentation creates practical challenges: organisations cannot complete certain compliance steps (identifying the correct authority for incident reports, for example) until the relevant member state has designated its authorities. The Legal and Regulatory Advisor monitors the IAPP EU AI Act Regulatory Directory and the Future of Life Institute's national implementation tracker, establishing contact with designated authorities as early as possible. Ireland's distributed model (15 sector-specific authorities and 9 fundamental rights authorities) illustrates the complexity: organisations must identify the correct sector-specific authority for their system type. Germany's Bundesnetzagentur designation and France's multi-authority approach present different coordination challenges. Key outputs NCA maturity classification (frontrunner, progressing, lagging) per jurisdiction Monitoring through IAPP and FLI trackers Early authority contact establishment Module 10 AISDP documentation Engagement Strategies by Maturity AISDP module(s): Module 10 (Compliance Record) Regulatory basis: Article 70 The organisation calibrates its engagement strategy to each authority's maturity level. With mature authorities (frontrunners), proactive engagement and early dialogue build a constructive relationship; these authorities have the capacity to provide substantive feedback. With developing authorities (progressing), monitoring and preparation are appropriate; the organisation prepares its compliance documentation in formats that can be adapted as the authority's procedures are published. With silent authorities (lagging), a conservative compliance posture is prudent, following the most demanding interpretation available from other jurisdictions. Where an authority has published guidance, the Legal and Regulatory Advisor assesses it for consistency with other authorities' positions and with AI Office publications. Where no guidance has been published, the organisation defaults to the AI Office's guidance supplemented by the most conservative position from designated authorities. The engagement strategy is documented in the jurisdiction register and reviewed quarterly as the NCA landscape evolves. Key outputs Maturity-calibrated engagement strategy per jurisdiction Conservative default where no guidance is published Quarterly review as NCA landscape evolves Module 10 AISDP documentation --- ## NB Evidence Pack URL: https://docs.standardintelligence.com/nb-evidence-pack Breadcrumb: Governance › Conformity Assessment › Notified Bodies › NB Evidence Pack Last updated: 28 Feb 2026 NB Evidence Pack AISDP module(s): All 12 modules Regulatory basis: Article 43, Annex VII Notified body documentation expectations are materially more demanding than for internal assessment. Where internal assessment might accept a cross-reference to a test result stored in MLflow, a notified body expects the result to be extracted, contextualised, and presented as a self-standing evidence artefact. The NB evidence pack supplements the standard evidence register with narrative summaries explaining the significance of each artefact, a traceability matrix mapping every AISDP claim to its supporting evidence, test result summaries that include methodology, sample sizes, statistical significance, and limitations, and a glossary of organisation-specific terminology. The pack should be self-contained: a notified body assessor should be able to evaluate the system's compliance using the pack alone, without requiring access to internal systems for basic understanding. The AI System Assessor should request the notified body's published assessment methodology before engagement, as this shapes how the evidence pack is structured. Different bodies may emphasise different aspects of the assessment. Key outputs Self-contained evidence pack with narrative summaries Traceability matrix (AISDP claims to evidence) Test result summaries with methodology and limitations Glossary of organisation-specific terminology --- ## NCA Engagement Log URL: https://docs.standardintelligence.com/nca-engagement-log Breadcrumb: Governance › Regulator Interaction › Artefacts › NCA Engagement Log Last updated: 28 Feb 2026 NCA Engagement Log AISDP module(s): Module 10 (Compliance Record) Regulatory basis: Article 70 The NCA engagement log archive is the master record of all regulatory interactions. It is retained for the ten-year period, demonstrating the organisation's regulatory engagement history. Entries from all sources ( AI Governance Lead , Legal and Regulatory Advisor, Conformity Assessment Coordinator) are consolidated in a single chronological record. Key outputs Consolidated chronological regulatory interaction record Ten-year retention Mitigating factor evidence for enforcement proceedings Module 10 AISDP evidence --- ## Non-Conformity Management URL: https://docs.standardintelligence.com/non-conformity-management Breadcrumb: Governance › Conformity Assessment › Non-Conformity Management Last updated: 28 Feb 2026 Critical NC — Definition & Effect AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Article 17 , Annex VI A critical non-conformity indicates a fundamental failure to meet a requirement that could result in serious harm, a violation of fundamental rights, or a material misstatement in the Declaration of Conformity . The system cannot be placed on the market or continue in service until the non-conformity is resolved. Remediation must begin immediately and be verified by the assessor before the assessment can conclude. The Declaration of Conformity cannot be signed while any critical non-conformity remains open. Examples include a complete absence of human oversight capability for a system requiring it, fabricated or falsified evidence, a fundamental rights impact assessment that was never conducted, or a risk register that does not exist. Critical non-conformities are rare when the pre-assessment readiness review ('s evidence currency checks) is conducted properly. Their identification during the formal assessment typically indicates that the readiness review was either not conducted or was not sufficiently rigorous. Key outputs Blocks Declaration of Conformity and market placement Immediate remediation with assessor verification required Root cause analysis mandatory Non-Conformity Register documentation Major NC — Definition & Effect AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Article 17, Annex VI A major non-conformity indicates a significant gap that weakens the compliance posture without presenting an immediate risk of serious harm. The system may proceed to market with a documented remediation plan and a defined deadline, typically 30 to 90 days. Remediation must be verified by the assessor, and the AISDP must be updated to reflect the corrected state. Examples include fairness testing that omits a relevant protected characteristic, a PMM plan that defines metrics but has no alerting thresholds, cybersecurity testing conducted more than eighteen months ago, a risk register that exists but has not been reviewed since the initial assessment, or an Instructions for Use document that does not adequately communicate known limitations. The Conformity Assessment Coordinator tracks each major non-conformity to closure. The assessment conclusion may read "conformity demonstrated subject to remediation of [N] major non-conformities," with the remediation plan and deadlines appended to the Assessment Report. Key outputs Documented remediation plan with 30–90 day deadline Assessor verification of remediation required Permits market placement with conditions Non-Conformity Register documentation Minor NC — Definition & Effect AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Article 17, Annex VI A minor non-conformity is a documentation deficiency or minor inconsistency that does not affect the system's substantive compliance. Remediation is recorded and tracked, with a deadline of up to six months. Minor non-conformities do not block the Declaration of Conformity or prevent market placement. Examples include typographical errors in the AISDP, a cross-reference that points to the wrong evidence artefact, a minor version discrepancy between the AISDP and the evidence register , or an organisational chart that does not reflect a recent personnel change. These findings are individually trivial, but an accumulation of minor non-conformities may signal a broader documentation discipline problem. The Conformity Assessment Coordinator reviews minor non-conformities at each assessment cycle. A pattern of recurring minor non-conformities in the same area (for example, persistent cross-reference errors in Module 4 ) may warrant escalation to a major non-conformity if the pattern suggests a systemic documentation management failure. Key outputs Up to six-month remediation window Does not block Declaration of Conformity Pattern analysis for escalation to major Non-Conformity Register documentation Remediation Workflow AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Article 17 Each non-conformity follows a seven-step workflow. Identification and logging by the assessor. Assignment to the responsible person by the Conformity Assessment Coordinator. Root cause analysis to ensure remediation addresses the underlying cause. Remediation action by the responsible person. Evidence of remediation as documented proof. Verification by the assessor confirming the remediation is effective and complete. Closure, with the non-conformity marked as resolved in the register with the closure date and verification evidence. The workflow is consistent regardless of severity; the urgency and scrutiny applied vary by classification. Critical non-conformities require immediate action with escalation to the AI Governance Lead . Major non-conformities follow the defined timeline with regular progress tracking. Minor non-conformities are tracked to closure at the next assessment cycle. Non-conformities that remain open beyond their deadline require escalation to the AI Governance Lead with a documented justification for the delay and a revised timeline. Jira or ServiceNow with pre-configured non-conformity workflows support this process, though a spreadsheet-based register is adequate for smaller portfolios. Key outputs Seven-step workflow (log, assign, root cause, remediate, evidence, verify, close) Severity-appropriate urgency and scrutiny Escalation for overdue non-conformities Non-Conformity Register documentation Root Cause Analysis for Critical & Major AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Article 17 Root cause analysis is mandatory for critical and major non-conformities. The analysis ensures that remediation addresses the underlying cause rather than the symptom. A fairness testing gap (the symptom) might have a root cause in the test plan's scope definition process, in the assessor's competence framework, or in the data availability for the omitted characteristic. The root cause analysis is documented alongside the non-conformity entry. It records the symptom (the non-conformity as identified), the investigation method (five-whys analysis, fishbone diagram, or structured review), the root cause identified, the corrective action (addressing the root cause), and the preventive action (preventing recurrence). The preventive action may affect the QMS, the assessment methodology, or the organisation's training programme. Root cause analysis for critical non-conformities should involve the AI Governance Lead, as the root cause may indicate a governance failure rather than a technical one. A critical non-conformity caused by fabricated evidence, for instance, has a root cause in the organisation's integrity culture, not in a technical process. Key outputs Mandatory root cause analysis for critical and major NCs Documented investigation method and findings Corrective and preventive actions AI Governance Lead involvement for critical NCs --- ## Non-Conformity Register URL: https://docs.standardintelligence.com/non-conformity-register Breadcrumb: Governance › Conformity Assessment › Artefacts › Non-Conformity Register Last updated: 28 Feb 2026 Non-Conformity Register AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Article 17 , Annex VI The Non-Conformity Register is a compliance artefact in its own right. It records every identified gap, deficiency, or inconsistency with a unique identifier, severity classification (critical, major, minor), description, affected AISDP module and Article, required remediation, responsible person, remediation deadline, verification method, and closure date with verification evidence. The register demonstrates the organisation's ability to identify, classify, and resolve gaps, which is itself a QMS requirement. A register showing a history of identified and resolved non-conformities is stronger compliance evidence than a register showing no non-conformities, which may suggest inadequate assessment rigour. The register is retained for ten years. Key outputs Per-NC structured documentation with severity, remediation, and closure Compliance artefact demonstrating gap management capability Ten-year retention Module 6 AISDP evidence --- ## Non-High-Risk Provider Registration (Art. 49(2), Annex VIII-B) URL: https://docs.standardintelligence.com/non-high-risk-provider-registration-art-492-annex-viii-b Breadcrumb: Governance › Regulator Interaction › EU Database Registration › Non-High-Risk Provider Registration (Art. 49(2), Annex VIII-B) Last updated: 28 Feb 2026 Non-High-Risk Provider Registration (Art. 49(2), Annex VIII-B) AISDP module(s): Module 10 (Compliance Record) Regulatory basis: Article 49 (2), Annex VIII Section B Providers who have concluded under Article 6(3) that their Annex III system is not high-risk must still register. They submit provider identification, system trade name and identification, intended purpose, the conditions under Article 6(3) justifying the non-high-risk determination, a short summary of the grounds for this conclusion, and the system's status. This registration is particularly important because it creates a public record of the provider's self-assessment. If a market surveillance authority later disagrees with the classification, the registered justification becomes central evidence. The Article 6(3) justification must therefore be thorough and defensible, consistent with the CDR analysis. A superficial registration that states the conclusion without supporting reasoning invites scrutiny. The Legal and Regulatory Advisor reviews the registration justification before submission, confirming it accurately reflects and is supported by the CDR. Key outputs Article 6(3) justification registered as public record Consistency with CDR analysis verified Legal review before submission Module 10 AISDP evidence --- ## Notified Bodies URL: https://docs.standardintelligence.com/notified-bodies Breadcrumb: Governance › Conformity Assessment › Notified Bodies Last updated: 28 Feb 2026 When NB Required NB Evidence Pack Data Access Protocol Annex VII Procedural Mapping Interaction Protocol Fee Structures & Budget Timeline Planning Annex I Product Integration — Three Coordination Models Maintaining NB Certification --- ## Organisational Roles URL: https://docs.standardintelligence.com/organisational-roles Breadcrumb: Governance › Delivery › Organisational Roles Last updated: 28 Feb 2026 AI Governance Lead — Responsibilities & Authority AI System Assessor — Classification, AISDP, Independence Conformity Assessment Coordinator — Gates, Evidence, Registration Technical SME — Risk, Architecture, Testing Legal & Regulatory Advisor — Provider Boundary, IP, Cross-Regulatory Classification Reviewer — Independent CDR Validation Internal Audit Assurance Lead — Annual Audit DPO Liaison — DPIA & Special Category Data --- ## Parallel Track Coordination URL: https://docs.standardintelligence.com/parallel-track-coordination Breadcrumb: Governance › Delivery › Parallel Track Coordination Last updated: 28 Feb 2026 Portfolio Prioritisation — Four Axes AISDP module(s): Cross-cutting Regulatory basis: Articles 8–15 Organisations with multiple high-risk systems cannot address all systems simultaneously. The AI Governance Lead prioritises the portfolio on four axes. Risk tier: highest-risk systems (those in the most sensitive Annex III domains, those with the largest affected populations) take priority. Deployment timeline: systems approaching deployment deadlines are addressed before those in early development. Deployment scale: systems affecting more people carry greater enforcement risk and should be prioritised accordingly. Compliance readiness: systems with less existing documentation require more effort and should start earlier to avoid deadline pressure. The prioritisation produces a portfolio sequencing plan: which systems enter the seven-phase delivery workflow in which order, and how shared resources are allocated across parallel tracks. The plan is reviewed quarterly and adjusted as circumstances change (a regulatory enforcement action may reprioritise a specific system; a deployment deferral may free resources for another). Key outputs Four-axis prioritisation (risk tier, timeline, scale, readiness) Portfolio sequencing plan Quarterly review and adjustment AI Governance Lead decision Shared Resource Planning AISDP module(s): Cross-cutting Regulatory basis: Article 17 The AI Governance Lead, Legal and Regulatory Advisor, Conformity Assessment Coordinator, and Internal Audit Assurance Lead are typically shared across the portfolio. Their availability is planned against the portfolio's milestone calendar. Governance gates ( CDR approval, risk register acceptance, Declaration of Conformity signing) are staggered by the AI Governance Lead to avoid queuing. If multiple systems reach Phase 5 simultaneously, the assessment workload may exceed available capacity. The resource plan identifies these bottlenecks in advance and either staggers the phase entries or secures additional assessment capacity (external consultants, temporary secondments from the internal audit function). Cross-system synergies reduce per-system effort. Systems sharing common components (the same GPAI model, data sources, or deployment infrastructure) can share compliance artefacts. A GPAI model risk assessment conducted for one system is reused, with system-specific adaptation, for another. Data governance documentation for shared sources is written once and referenced by multiple AISDs. Key outputs Shared resource availability mapped to portfolio milestones Governance gate staggering to avoid bottlenecks Cross-system synergy identification for artefact reuse Bottleneck mitigation planning Staggered Governance Gates AISDP module(s): Cross-cutting Regulatory basis: Articles 8–17 The portfolio governance cadence operates above individual system cadences. Monthly portfolio status reviews track each system's progress against phase milestones. Quarterly resource reviews assess whether planned resource allocation is sufficient. Annual strategic reviews assess the portfolio's overall compliance posture and plan for the coming year. Governance gates for individual systems are scheduled into the portfolio calendar. The AI Governance Lead blocks out time for each gate (CDR approval, risk acceptance, Declaration signing) weeks in advance. Where two systems' gates would coincide, the lower-priority system's gate is moved to avoid splitting the AI Governance Lead's attention. This disciplined scheduling prevents the common failure mode where the AI Governance Lead is asked to review and approve multiple systems' Declarations of Conformity in the same week, leading to superficial review and elevated risk. Key outputs Portfolio governance cadence (monthly, quarterly, annual) Individual system gates scheduled into the portfolio calendar Gate staggering to prevent simultaneous review overload AI Governance Lead time allocation planned in advance --- ## Per-Jurisdiction Deployment Checklist URL: https://docs.standardintelligence.com/per-jurisdiction-deployment-checklist Breadcrumb: Governance › Regulator Interaction › Multi-Jurisdiction Deployment › Per-Jurisdiction Deployment Checklist Last updated: 28 Feb 2026 Per-Jurisdiction Deployment Checklist AISDP module(s): Module 10 (Compliance Record) Regulatory basis: Articles 49, 70, 73 Each additional deployment jurisdiction requires a structured pre-deployment checklist. The eight steps, each completed before deployment, are: identify national competent authorit y and market surveillance authority; review jurisdiction-specific guidance for conflicts with existing compliance posture; translate Instructions for Use into the member state's official language; translate Declaration of Conformity if required; verify EU database registration covers the new jurisdiction; pre-identify serious incident reporting channel; pre-translate incident report template if the authority requires the national language; and confirm data residency and sovereignty compliance. At deployment, two additional steps: brief deployers in the new jurisdiction on their Article 26 obligations, and add the jurisdiction to the quarterly guidance monitoring cycle. The checklist is maintained by the Conformity Assessment Coordinator and completed per jurisdiction. Completed checklists are retained as Module 10 evidence. Key outputs Ten-step per-jurisdiction deployment checklist Completed before and at deployment Conformity Assessment Coordinator responsibility Module 10 AISDP evidence --- ## Phase 1: Discovery & Classification (Weeks 1–3) URL: https://docs.standardintelligence.com/phase-1-discovery-and-classification-weeks-1-3 Breadcrumb: Governance › Delivery › Seven-Phase Framework › Phase 1: Discovery & Classification (Weeks 1–3) Last updated: 28 Feb 2026 Phase 1: Discovery & Classification (Weeks 1–3) AISDP module(s): Module 1 (System Identity), Module 6 (Risk Management System) Regulatory basis: Articles 3, 5, 6, 7 Phase 1 determines whether the system falls within the AI Act's scope, classifies its risk tier, and produces the Classification Decision Record . The AI System Assessor examines the system against the Article 3(1) definition of an AI system. If the system meets the definition, the Assessor classifies it against the four risk tiers: prohibited under Article 5, high-risk under Articles 6–7 and Annex III, limited risk under Article 50 , or minimal risk. For systems falling within Annex III categories, the Assessor evaluates the Article 6(3) exception by testing the functional criterion and the risk criterion separately. The Classification Reviewer independently reviews the determination. Disagreements are escalated to the AI Governance Lead for binding resolution. Phase 1 produces the CDR (with classification determination, rationale, Article 6(3) assessment if applicable, and supporting evidence), an initial risk profile identifying triggered regulatory obligations, and the evidence pack informing the classification. The AI Governance Lead approves the CDR before Phase 2 begins. This gate prevents wasted effort: if the system is not high-risk, the subsequent phases follow a lighter pathway. Key outputs Classification Decision Record approved by AI Governance Lead Initial risk profile and triggered obligations Evidence pack supporting classification Gate: CDR approval before Phase 2 --- ## Phase 2: Risk Assessment & FRIA (Weeks 2–6) URL: https://docs.standardintelligence.com/phase-2-risk-assessment-and-fria-weeks-2-6 Breadcrumb: Governance › Delivery › Seven-Phase Framework › Phase 2: Risk Assessment & FRIA (Weeks 2–6) Last updated: 28 Feb 2026 Phase 2: Risk Assessment & FRIA (Weeks 2–6) AISDP module(s): Module 6 (Risk Management System), Module 11 (Deployer Obligations) Regulatory basis: Article 9 , Article 27 Phase 2 conducts the comprehensive risk assessment that informs all subsequent design and development decisions. The Technical SME and AI System Assessor conduct the five-method risk identification (FMEA, stakeholder consultation, regulatory gap analysis, adversarial red-teaming, horizon scanning). The risk register is established, with each risk scored across four dimensions. Residual risk acceptability is assessed against Article 9(4). For deployers of high-risk systems, the FRIA is conducted in parallel. It examines the impact on all potentially affected EU Charter rights, with attention to intersectional effects. The reputational risk framework assesses customer, market, regulatory, shareholder, and employee dimensions. Phase 2 produces the risk register (populating Module 6), the FRIA report (populating Module 11), the reputational risk assessment, and the risk mitigation plan with assigned owners and timelines. The AI Governance Lead reviews the risk register and accepts the residual risk profile before development proceeds. This gate ensures that design decisions in Phase 3 are informed by a complete risk picture. Key outputs Risk register with four-dimension scoring FRIA report and reputational risk assessment Risk mitigation plan with owners and timelines Gate: risk profile acceptance before Phase 3 --- ## Phase 3: Architecture & Design (Weeks 4–8) URL: https://docs.standardintelligence.com/phase-3-architecture-and-design-weeks-4-8 Breadcrumb: Governance › Delivery › Seven-Phase Framework › Phase 3: Architecture & Design (Weeks 4–8) Last updated: 28 Feb 2026 Phase 3: Architecture & Design (Weeks 4–8) AISDP module(s): Module 2 (Development Process), Module 3 (Architecture), Module 4 ( Data Governance ), Module 9 (Cybersecurity) Regulatory basis: Articles 9–15 Phase 3 designs the system architecture informed by the risk assessment , selects the model approach, and establishes the data governance framework. The Statement of Business Intent is drafted and approved. Model selection uses the compliance criteria (documentability, testability, auditability, bias detectability, maintainability, determinism), evaluating the full spectrum from heuristic systems to LLMs. Model origin risk, copyright risk, and nation-alignment risk are assessed. The layered architecture is designed with per-layer compensating controls. The data governance framework is established, including dataset documentation , data lineage infrastructure, fairness assessment methodology, and special category data handling. Version control strategy, CI/CD pipeline design, and infrastructure-as-code approach are defined. The cybersecurity threat model is developed using STRIDE/PASTA. The insurance review is conducted during this phase, when the risk profile is sufficiently defined. Phase 3 produces the Statement of Business Intent, model selection rationale, system architecture document with dependency maps, data governance plan, version control and CI/CD design, and cybersecurity threat model. Architecture review by the Technical SME, Legal and Regulatory Advisor, and AI Governance Lead confirms the design satisfies the risk mitigation plan. Key outputs Statement of Business Intent, model selection rationale, architecture document Data governance plan, CI/CD design, cybersecurity threat model Insurance review completed Gate: architecture review sign-off --- ## Phase 4: Development & Testing (Weeks 6–18) URL: https://docs.standardintelligence.com/phase-4-development-and-testing-weeks-6-18 Breadcrumb: Governance › Delivery › Seven-Phase Framework › Phase 4: Development & Testing (Weeks 6–18) Last updated: 28 Feb 2026 Phase 4: Development & Testing (Weeks 6–18) AISDP module(s): Modules 2–5, 7, 9 (Development, Architecture, Data, Testing, Transparency, Cybersecurity) Regulatory basis: Articles 9–15 Phase 4 builds the system in accordance with the approved architecture, with compliance evidence generated as a natural byproduct of the engineering workflow. Development uses version-controlled code, model, and data artefacts. The CI/CD pipeline enforces quality gates at every commit: static analysis (including AI-specific rules), unit testing , contract testing, dependency and licence scanning, and secret detection. Data engineering follows the pre-step/post-step capture methodology, with each transformation documented before execution and verified after. Model training, validation, and testing follow the documented methodology; performance, fairness, robustness, and calibration metrics are computed and recorded. The model validation gate blocks promotion of any model that fails AISDP-declared thresholds. The human oversight interface is developed with automation bias countermeasures, mandatory review workflows, and override capability. Cybersecurity testing is integrated throughout: SAST and DAST in the pipeline, dependency scanning, container image scanning, infrastructure-as-code scanning, and adversarial ML testing . Phase 4 produces continuously: version-controlled artefacts, automated test reports, model card s (auto-generated), data quality reports, training pipeline logs, and cybersecurity scan results. Key outputs Version-controlled code, model, and data artefacts with full audit trail Automated test reports (unit, integration, regression, fairness, robustness) Model cards, data quality reports, cybersecurity scan results Gates: model validation (automated), security review (manual), integration test pass --- ## Phase 5: Pre-Deployment Validation (Weeks 16–20) URL: https://docs.standardintelligence.com/phase-5-pre-deployment-validation-weeks-16-20 Breadcrumb: Governance › Delivery › Seven-Phase Framework › Phase 5: Pre-Deployment Validation (Weeks 16–20) Last updated: 28 Feb 2026 Phase 5: Pre-Deployment Validation (Weeks 16–20) AISDP module(s): All 12 modules Regulatory basis: Articles 8–17, Annex IV , Annex VI Phase 5 validates the complete system in a production-representative environment and compiles the AISDP. The system is deployed to staging; end-to-end inference, regression, and chaos/fault injection tests are executed. Performance, fairness, and robustness metrics are computed against AISDP-declared thresholds. The AISDP is compiled from artefacts produced during development, not written from scratch. Each module is populated from engineering artefacts. The Conformity Assessment Coordinator reviews for completeness and consistency. The internal conformity assessment (Annex VI) is conducted: QMS assessment verifies Article 17 elements, technical documentation assessment examines Articles 8–15, and consistency assessment traces from AISDP to source artefacts. Non-conformities are recorded and remediated. The operational oversight framework is established: monitoring infrastructure configured, alerting thresholds set, escalation procedures documented, break-glass procedures tested, operator training completed. Phase 5 produces the complete AISDP, internal assessment report, Non-Conformity Register , assessment evidence register , oversight readiness confirmation, and operator training records. The AI Governance Lead signs the Declaration of Conformity once all critical non-conformities are resolved. Key outputs Complete AISDP (all 12 modules) Internal conformity assessment report and Non-Conformity Register Declaration of Conformity signed by AI Governance Lead Gate: Declaration signing after assessment review --- ## Phase 6: Registration & Deployment (Weeks 20–22) URL: https://docs.standardintelligence.com/phase-6-registration-and-deployment-weeks-20-22 Breadcrumb: Governance › Delivery › Seven-Phase Framework › Phase 6: Registration & Deployment (Weeks 20–22) Last updated: 28 Feb 2026 Phase 6: Registration & Deployment (Weeks 20–22) AISDP module(s): Module 10 (Compliance Record), Module 8 (Transparency) Regulatory basis: Articles 48, 49, 71 Phase 6 registers the system in the EU database, affixes the CE marking , and deploys to production. The Conformity Assessment Coordinator submits the Annex VIII registration information, ensuring it reflects all deployment member states. For sensitive domain systems, registration goes to the non-public section. The CE marking is affixed to the user interface and documentation. Deployment follows the CI/CD pipeline 's compliance controls: staging validation, canary or shadow deployment, human approval gate, and deployment logging. The AI Governance Lead (for initial deployment) reviews validation results and authorises production deployment. The deployment event is recorded in the immutable deployment ledger . Deployers receive the Instructions for Use ( Article 13 ), covering intended purpose, capabilities and limitations, performance characteristics, human oversight requirements, and maintenance obligations. Phase 6 produces the EU database registration confirmation, CE marking evidence, deployment ledger entry, deployer communication records, and the filed Declaration of Conformity . Key outputs EU database registration confirmed CE marking affixed and evidenced Production deployment authorised and logged Deployer Instructions for Use delivered --- ## Phase 7: Operational Monitoring (Ongoing) URL: https://docs.standardintelligence.com/phase-7-operational-monitoring-ongoing Breadcrumb: Governance › Delivery › Seven-Phase Framework › Phase 7: Operational Monitoring (Ongoing) Last updated: 28 Feb 2026 Phase 7: Operational Monitoring (Ongoing) AISDP module(s): Module 12 ( Post-Market Monitoring ), all modules (living document updates) Regulatory basis: Articles 9, 18, 72, 73 Phase 7 maintains the system's compliance posture throughout its operational lifetime. The PMM system operates continuously across five dimensions: performance, fairness, data drift , operational, and human oversight. Alerts are triaged according to the severity framework. Quarterly PMM review meetings examine monitoring trends, operator escalation patterns, deployer feedback, and the non-conformity register . The annual oversight audit tests monitoring infrastructure, escalation pathways, break-glass procedures , and training currency. Serious incidents are detected, triaged, reported under Article 73 , investigated, and remediated. System changes are managed through the version control framework; each change is assessed against substantial modification thresholds. Changes crossing the threshold trigger a return to Phase 5 for new conformity assessment . Regulatory developments are monitored and assessed for impact. The AISDP is maintained as a living document. Each material change creates a new version. The version history demonstrates continuous compliance discipline. Phase 7 produces monthly PMM reports, quarterly review minutes, annual audit reports, serious incident reports, AISDP version updates, risk register updates, and regulatory horizon scanning summaries. Key outputs Continuous PMM across five dimensions Quarterly governance reviews and annual oversight audit Serious incident management under Article 73 AISDP maintained as living document with version history --- ## Phased Compliance (A: Critical, B: Documentation, C: Infrastructure) URL: https://docs.standardintelligence.com/phased-compliance-a-critical-b-documentation-c Breadcrumb: Governance › Delivery › Brownfield Compliance › Phased Compliance (A: Critical, B: Documentation, C: Infrastructure) Last updated: 28 Feb 2026 Phased Compliance (A: Critical, B: Documentation, C: Infrastructure) AISDP module(s): All 12 modules Regulatory basis: Articles 8–15 Brownfield compliance need not be achieved in a single effort. A phased approach may be appropriate, structured in three phases. Phase A addresses critical gaps: human oversight controls, serious incident reporting capability, and basic PMM. These are the capabilities whose absence creates the greatest immediate compliance and safety risk. Phase B addresses documentation gaps: assembling the AISDP from existing and reconstructed artefacts, conducting the gap assessment remediation, and establishing the evidence register . Phase C addresses infrastructure gaps: establishing version control , extending the CI/CD pipeline , and building the monitoring infrastructure. The phased plan is documented and approved by the AI Governance Lead , with milestones that demonstrate progress toward full compliance. The August 2026 deadline for high-risk system obligations provides the outer boundary. The AI Governance Lead should set interim milestones that create accountability and prevent a last-minute compliance rush. Phase A should be achievable within three months for most systems. Phases B and C vary depending on the system's existing documentation and infrastructure maturity. Key outputs Three-phase brownfield compliance plan (critical, documentation, infrastructure) AI Governance Lead approval with milestones August 2026 outer boundary for high-risk obligations All 12 modules addressed across the three phases --- ## Pre-Assessment Readiness URL: https://docs.standardintelligence.com/pre-assessment-readiness Breadcrumb: Governance › Conformity Assessment › Pre-Assessment Readiness Last updated: 28 Feb 2026 Evidence Currency — 60-Day Maximum & Staleness Tracking AISDP module(s): All 12 modules Regulatory basis: Articles 11, 18 Evidence artefacts have a freshness requirement: each artefact in the evidence register specifies how frequently the responsible role must refresh it. Model evaluation reports are refreshed with every model update; PMM reports monthly; penetration test reports annually. Evidence that has exceeded its freshness window is stale and cannot support a conformity claim. A scheduled script (running monthly via Airflow or GitHub Actions) scans the evidence register, compares each artefact's last-updated date against its freshness requirement, and generates a gap report listing overdue artefacts. The gap report is sent to the AI Governance Lead and the responsible team members. Overdue artefacts are treated as non-conformities and tracked in the Non-Conformity Register . As a general rule, no evidence artefact relied upon in the conformity assessment should be more than 60 days old at the point the Declaration of Conformity is signed. Evidence older than 60 days raises the question of whether the documentation reflects the system's current state. The Conformity Assessment Coordinator confirms evidence currency as part of the pre-signature checklist. Key outputs Per-artefact freshness requirements in the evidence register Automated monthly staleness scanning (Airflow, GitHub Actions) 60-day maximum currency at Declaration signing Overdue artefacts tracked as non-conformities Evidence Register AISDP module(s): All 12 modules Regulatory basis: Articles 11, 18, Annex IV The evidence register catalogues every artefact reviewed during the assessment, with its location, version, date, and the assessment finding it supports. It serves as the bridge between the assessment findings and the underlying proof. Each entry records the artefact identifier (a unique reference), the AISDP module it supports, the EU AI Act Article it demonstrates compliance with, the artefact's current version and location, the date it was last updated, and the freshness requirement. The register is maintained as a structured dataset in Airtable, a Notion database, a SharePoint list, or a YAML file in the documentation repository. Free-form text in a document is insufficient; the register must be queryable so that an assessor can identify all evidence supporting a specific Article, all evidence owned by a specific role, or all evidence that is overdue for refresh. The register distinguishes between system-specific evidence and shared evidence (where the organisation operates multiple high-risk systems). Shared evidence artefacts are assessed once and referenced by each system's register with clear version references. Key outputs Structured evidence register with per-artefact metadata AISDP module and Article traceability per entry Queryable format (Airtable, Notion, SharePoint, YAML) Distinction between system-specific and shared evidence Assessment Checklist — Per Art. 8–15 Sub-Requirement AISDP module(s): All 12 modules Regulatory basis: Articles 8–15, Article 17 , Annex IV The assessment checklist maps every requirement of Articles 8 through 15, Article 17, and Annex IV to specific questions, evidence expectations, and pass/fail criteria. The checklist must be granular; a single line item such as " Article 10 compliance" is insufficient. Each sub-requirement of Article 10 (relevance, representativeness, freedom from errors, completeness, statistical properties, bias detection measures, special category data processing) is a separate checklist item with its own evidence requirement. The checklist is prepared before the assessment begins as part of the Assessment Plan. During the assessment, the assessor works through each item, recording the evidence examined, the determination (conformant, non-conformant, partially conformant), and any observations. Partially conformant items include an explanation of what is present and what is missing. The completed checklist is a core assessment artefact. An assessor or competent authority reviewing the assessment should be able to work through the checklist and understand, for each requirement, what evidence was examined and what conclusion was reached. Key outputs Granular per-sub-requirement checklist (Articles 8–15, 17, Annex IV) Per-item evidence expectations and pass/fail criteria Completed checklist with determinations and observations Core assessment artefact retained for ten years --- ## Procedural Alternative for Small Portfolios URL: https://docs.standardintelligence.com/procedural-alternative-for-small-portfolios Breadcrumb: Governance › Conformity Assessment › QMS Framework › QMS Framework › Procedural Alternative for Small Portfolios Last updated: 28 Feb 2026 Procedural Alternative for Small Portfolios AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Article 17 A QMS is fundamentally a set of documented procedures, not a software platform. ISO 42001 certification can be achieved with paper-based or spreadsheet-based documentation for organisations with a small number of AI systems. The minimal QMS comprises a QMS manual describing policies, procedures, roles, and responsibilities; a document control register tracking every controlled document with its version, owner, review date, and retention period; a non-conformity register tracking every identified gap; internal audit schedules and records; and quarterly management review meeting minutes. Workflow automation, dashboard views, and integrated reporting are lost. Manual QMS management works for organisations with one to three AI systems; for larger portfolios, a platform becomes justified. The trade-off is explicit: lower licensing cost in exchange for higher manual effort and greater risk of process breakdown as complexity increases. Key outputs Spreadsheet and document-based QMS for small portfolios Minimal artefact set (manual, registers, audit records, meeting minutes) Viable for one to three systems Explicit trade-off documentation (cost vs. scalability) --- ## Provider Registration (Art. 49(1), Annex VIII-A) URL: https://docs.standardintelligence.com/provider-registration-art-491-annex-viii-a Breadcrumb: Governance › Regulator Interaction › EU Database Registration › Provider Registration (Art. 49(1), Annex VIII-A) Last updated: 28 Feb 2026 Provider Registration (Art. 49(1), Annex VIII-A) AISDP module(s): Module 10 (Compliance Record) Regulatory basis: Article 49 (1), Annex VIII Section A Providers of high-risk systems under Annex III (excluding critical infrastructure under Annex III, point 2) register in the EU database before placing the system on the market. Annex VIII Section A specifies twelve information items: provider identity and contact details, submitter identity (if different), authorised representative details (if applicable), system trade name and identification, intended purpose description, system status, notified body certificate details, deployment member states, comparable database URLs, additional information URLs, concise data collection description, and electronic instructions for use. Each item demands preparation. The "description of the intended purpose" must be precise and consistent with AISDP Module 1 wording. The "concise description of data collection means" must be factually accurate without disclosing commercially sensitive detail. The Technical SME prepares the electronic instructions for use in a format suitable for digital publication. The Conformity Assessment Coordinator completes registration through the Commission's online platform, with the Technical SME reviewing technical content and Legal counsel reviewing legal content before submission. Key outputs Twelve-item Annex VIII-A registration completed before market placement Intended purpose and data collection descriptions consistent with AISDP Electronic instructions for use prepared for digital publication Module 10 AISDP evidence --- ## Non-Conformity Management URL: https://docs.standardintelligence.com/qms-framework--non-conformity-management Breadcrumb: Governance › Conformity Assessment › QMS Framework › QMS Framework › Non-Conformity Management Last updated: 28 Feb 2026 Non-Conformity Management AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Article 17 When a gap between the system's actual state and its declared compliance state is identified, whether through monitoring, assessment, or incident, the Conformity Assessment Coordinator logs it, assesses it, assigns it to an owner, tracks it to closure, and verifies the fix. This process applies to non-conformities identified during formal assessment and to gaps identified through continuous monitoring between assessment cycles. The non-conformity management process within the QMS framework is distinct from the assessment-specific remediation workflow, though both follow the same seven-step pattern. The QMS process applies continuously; the assessment workflow applies during and immediately after formal assessment events. In practice, the same Non-Conformity Register serves both purposes, with entries categorised by their source (formal assessment, continuous monitoring, incident response , deployer complaint). Jira or ServiceNow with pre-configured non-conformity workflows support this process. The non-conformity register is itself a compliance artefact demonstrating the organisation's ability to identify and resolve gaps. Key outputs Continuous non-conformity management (not assessment-only) Single register serving both assessment and ongoing monitoring Per-entry source categorisation QMS documentation and compliance evidence --- ## QMS Framework URL: https://docs.standardintelligence.com/qms-framework Breadcrumb: Governance › Conformity Assessment › QMS Framework › QMS Framework Last updated: 28 Feb 2026 ISO 42001:2023 — Foundation Document Control Change Management (S.6 Integration) Non-Conformity Management Continual Improvement Procedural Alternative for Small Portfolios --- ## Real-World Testing Registration (Art. 60, Annex IX) URL: https://docs.standardintelligence.com/real-world-testing-registration-art-60-annex-ix Breadcrumb: Governance › Regulator Interaction › EU Database Registration › Real-World Testing Registration (Art. 60, Annex IX) Last updated: 28 Feb 2026 Real-World Testing Registration (Art. 60, Annex IX) AISDP module(s): Module 10 (Compliance Record) Regulatory basis: Article 60 , Annex IX Article 60 permits providers to test high-risk Annex III systems in real-world conditions before market placement, subject to specific safeguards. Before commencing, the provider registers the test in the EU database using Annex IX's five information categories: a unique test identification number (assigned by the database), provider and deployer contact details, system description and intended purpose, a summary of the testing plan, and information on any suspension or termination. The Article 60 regime applies when the system processes real inputs from, or produces outputs that affect, persons under real or near-real operational conditions. Internal testing on historical or synthetic data does not trigger the obligation. The AI System Assessor documents the threshold determination for each pilot deployment. The real-world testing plan addresses objectives, duration, geographical scope, test subject demographics, data collection and retention, informed consent procedures, human oversight during testing, suspension/termination criteria, and serious incident reporting . Testing results feed into the risk assessment , fairness evaluation, and PMM baseline. Key outputs Annex IX five-category registration before testing commences Article 60 threshold determination documented per pilot Real-world testing plan with safeguards and informed consent Module 10 AISDP evidence --- ## Real-World Testing Registration URL: https://docs.standardintelligence.com/real-world-testing-registration Breadcrumb: Governance › Regulator Interaction › Artefacts › Real-World Testing Registration Last updated: 28 Feb 2026 Real-World Testing Registration AISDP module(s): Module 10 (Compliance Record) Regulatory basis: Article 60 , Annex IX Where real-world testing was conducted under Article 60, the registration and associated documentation are retained: the Annex IX submission, the real-world testing plan, informed consent records, testing results, and any suspension or termination records. The final test report is incorporated into the AISDP evidence pack . Key outputs Annex IX registration and testing plan Informed consent records Testing results and final report Module 10 AISDP evidence --- ## Reclassification Triggers URL: https://docs.standardintelligence.com/reclassification-triggers Breadcrumb: Governance › Risk Assessment › Reclassification Triggers Last updated: 28 Feb 2026 Reclassification Triggers AISDP module(s): Module 6 (Risk Management System), Module 12 ( Post-Market Monitoring ) Regulatory basis: Article 6, Article 9 A system that has drifted from its intended purpose into a higher-risk domain since classification requires reclassification before any further risk assessment proceeds. Reclassification triggers include a material change in the system's intended purpose or deployment context, expansion to a new sector or jurisdiction that alters the system's risk profile, post-market monitoring data revealing risks not anticipated in the original classification, competent authority guidance or enforcement action affecting the classification of comparable systems, and amendments to the AI Act's Annexes that change which systems qualify as high-risk. The risk assessment cycle includes a classification confirmation step at every quarterly review, verifying that no reclassification triggers have been activated. The PMM feedback loop specifically monitors for deployment context changes that could affect classification. Where a reclassification trigger is identified, the AI System Assessor conducts a fresh classification analysis, produces a revised CDR , and submits it for independent review. If the reclassification moves the system to a higher tier, the AISDP must be extended to meet the higher tier's requirements before the system continues operating. Key outputs Defined reclassification triggers (purpose drift, sector expansion, PMM findings, regulatory change) Classification confirmation at every quarterly review Fresh CDR analysis and independent review on trigger activation Module 6 and Module 12 AISDP documentation --- ## Registration Data Quality Assurance URL: https://docs.standardintelligence.com/registration-data-quality-assurance Breadcrumb: Governance › Regulator Interaction › EU Database Registration › Registration Data Quality Assurance Last updated: 28 Feb 2026 Registration Data Quality Assurance AISDP module(s): Module 10 (Compliance Record) Regulatory basis: Article 49 , Article 71 The EU database registration is publicly accessible; errors are visible to authorities, deployers, affected persons, competitors, and the media. A structured internal review process ensures accuracy before submission. The review follows four steps: the Conformity Assessment Coordinator prepares the registration data by extracting fields from the AISDP; the Technical SME reviews technical content (system description, data collection, intended purpose) for accuracy and AISDP consistency; Legal counsel reviews legal content (provider identification, authorised representative , non-high-risk justification) for accuracy; and the AI Governance Lead approves submission. A mapping table traces each registration field to its AISDP source, enabling rapid consistency verification whenever either is updated. Post-submission, the Conformity Assessment Coordinator verifies the published entry within one week, checking for display formatting, character encoding, and field truncation discrepancies. Errors are corrected immediately. Key outputs Four-step internal review (prepare, technical review, legal review, approval) Registration-to-AISDP mapping table for consistency verification Post-submission verification within one week Module 10 AISDP evidence --- ## Regulator Interaction & Registration URL: https://docs.standardintelligence.com/regulator-interaction-and-registration Breadcrumb: Governance › Regulator Interaction & Registration (S.11) Last updated: 28 Feb 2026 Regulator interaction spans the full lifecycle of a high-risk AI system, from initial EU database registration through ongoing reporting to inspection readiness . EU database registration covers the Article 71 requirements, data fields, timing, provider versus deployer responsibilities, and GPAI model registration. Keeping registration current addresses the obligation to update entries when material changes occur. The AI Office and European-level oversight describes the central coordination body's role. The national competent authorit y landscape maps the member-state-level authorities. Regulatory sandbox es provide a framework for controlled testing with regulatory support. Inspection readiness prepares the organisation for announced and unannounced inspections. Multi-jurisdiction deployment addresses the complexities of operating across multiple member states, including lead authority identification, mutual recognition, language obligations, deployer coordination, and local counsel engagement. Conflicting guidance provides a resolution framework. Enforcement and penalties documents the graduated penalty structure. Communication protocols define internal and external communication standards. The section concludes with artefacts. ℹ This section corresponds to the Regulator Interaction section and feeds primarily into AISDP Module 11 (Certification and Legal). --- ## Regulator Interaction Artefacts URL: https://docs.standardintelligence.com/regulator-interaction-artefacts Breadcrumb: Governance › Regulator Interaction › Artefacts Last updated: 28 Feb 2026 EU Database Registration Confirmation Real-World Testing Registration Deployer Communication Records Inspection Readiness Drill Records NCA Engagement Log Translation Records Multi-Jurisdiction Checklist Conflicting Guidance Position Papers --- ## Regulatory Sandbox URL: https://docs.standardintelligence.com/regulatory-sandbox Breadcrumb: Governance › Regulator Interaction › Regulatory Sandbox Last updated: 28 Feb 2026 Strategic Benefits & Practical Considerations (Regulatory Sandbox) AISDP module(s): Module 10 (Compliance Record) Regulatory basis: Article 57 Sandbox participation provides direct regulatory feedback on the system's compliance approach before the conformity assessment , reducing the risk of failed assessment or post-market enforcement. It creates a documented track record of regulatory cooperation, strengthens credibility with market surveillance authorities, and may benefit from Article 57(8)'s regulatory flexibility provisions. Sandbox programmes typically run six to twelve months and require dedicated effort: application preparation, regular progress reporting, test result sharing, and issue escalation. Organisations should reserve sandbox participation for their highest-risk or most novel systems, where regulatory uncertainty is greatest. Lower-risk systems with well-understood compliance pathways are better served by standard internal conformity assessment. Sandbox findings and supervisory feedback are integrated into the AISDP. Where the competent authority has reviewed and accepted specific aspects of the system's design, the Legal and Regulatory Advisor documents this acceptance as supporting evidence. Sandbox exit reports are valuable Module 10 artefacts. Key outputs Sandbox participation for highest-risk or most novel systems Six-to-twelve-month commitment with regular reporting Supervisory feedback integrated into the AISDP Module 10 AISDP evidence Sandbox Does Not Constitute Conformity Assessment AISDP module(s): Module 10 (Compliance Record) Regulatory basis: Article 57 Sandbox participation, however constructive, does not constitute conformity assessment under Annex VI or Annex VII . A system that has completed a sandbox programme must still undergo the full conformity assessment before it can be placed on the market. Supervisory feedback received during the sandbox is evidence that supports the assessment; it does not replace the assessment. Organisations should be explicit about this distinction internally. A Business Owner who believes that sandbox completion equates to market readiness will be disappointed. The Conformity Assessment Coordinator clarifies the relationship between sandbox participation and conformity assessment during the delivery planning phase. The sandbox's value lies in de-risking the conformity assessment by identifying and resolving compliance issues early, not in bypassing the assessment altogether. Key outputs Clear distinction between sandbox participation and conformity assessment Sandbox as assessment de-risking, not replacement Internal communication to Business Owner on the distinction Module 10 AISDP documentation --- ## Reputational Risk Assessment URL: https://docs.standardintelligence.com/reputational-risk-assessment Breadcrumb: Governance › Risk Assessment › Reputational Risk Assessment Last updated: 28 Feb 2026 Five Reputational Risk Dimensions AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Article 9 Reputational risk, though not explicitly within the AI Act's scope, is among the most consequential risks an organisation faces when deploying AI systems. Five dimensions structure the reputational risk assessment. Customer reputational risk considers how deployers and end users would respond to a publicised system failure; customer attrition in AI-dependent services tends to be abrupt. Market reputational risk considers the broader market perception, which is amplified for organisations in regulated sectors. Regulatory reputational risk considers the organisation's visibility to national competent authorit ies; early enforcement actions will attract disproportionate media attention. Shareholder and investor reputational risk considers ESG rating impacts and cost-of-capital effects. Employee reputational risk considers the effect on talent recruitment and retention; engineers and data scientists increasingly evaluate employers' AI governance practices. For each identified technical, fairness, and compliance risk, the AI System Assessor assesses the reputational dimension using five factors: the probability of public discovery, the narrative severity, the stakeholder groups affected, the organisation's ability to contain damage, and the likely duration of the reputational effect. Reputational risk mitigations include proactive transparency measures, crisis communication planning, and deployer notification procedures. Key outputs Five-dimension reputational risk assessment per identified risk Five-factor reputational severity analysis Reputational mitigations (transparency, crisis planning, notification procedures) Module 6 AISDP documentation --- ## Residual Risk & Acceptability URL: https://docs.standardintelligence.com/residual-risk-and-acceptability Breadcrumb: Governance › Risk Assessment › Residual Risk & Acceptability Last updated: 28 Feb 2026 Operationalising "As Far As Possible" AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Article 9 (4) Article 9(4) requires that risks be eliminated or reduced "as far as possible through adequate design and development," with any remaining residual risk judged acceptable in relation to the system's intended purpose and the persons or groups of persons on whom it is intended to be used. This standard does not require zero residual risk; it requires evidence that the organisation has pursued risk reduction to the point where further reduction would be disproportionate, technically infeasible, or counterproductive. For each risk above the acceptance threshold, the assessor documents the mitigations already implemented, the residual risk rating after those mitigations, the alternative mitigations considered and rejected, and the rationale for rejection. The rationale must be specific: "Too expensive" is insufficient without context on the cost relative to the system's economic value and the severity of the risk. "Not technically feasible" requires the Technical SME to provide supporting evidence that the alternative was investigated. "Would degrade performance" must quantify the degradation and explain why the current performance level is necessary. This structured approach creates a defensible record showing that the organisation genuinely pursued risk reduction, rather than accepting risk through inattention or convenience. The record is examined during conformity assessment and must withstand scrutiny. Key outputs Documented risk reduction journey per risk above threshold Alternative mitigations considered with specific rejection rationale Evidence that reduction was pursued to the disproportionality boundary Module 6 AISDP evidence Alternative Mitigations Considered & Rejection Rationale AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Article 9(4) For each risk where residual risk remains above the acceptance threshold after primary mitigations, the AI System Assessor documents the alternative mitigations that were considered and the rationale for their rejection. This documentation is distinct from the primary mitigation documentation; it specifically addresses what else could have been done and why it was not. Each alternative mitigation entry records the proposed measure, its expected risk reduction effect, its cost (financial, performance, operational complexity), any adverse effects it would introduce (for example, a stricter input filter that reduces the system's utility for legitimate users), and the specific reason for rejection. Rejection rationales must address cost, technical feasibility, and adverse effects substantively. The alternative mitigations documentation is a direct response to the "as far as possible" standard. An assessor reviewing the AISDP should be able to see that the organisation explored multiple risk reduction options and made informed, defensible decisions about which to implement and which to reject. Key outputs Per-risk documentation of alternative mitigations considered Rejection rationale addressing cost, feasibility, and adverse effects Defensible record of informed risk treatment decisions Module 6 AISDP evidence Formal Risk Acceptance — Signed Attestation AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Article 9(4) Residual risk acceptance is a governance decision. Each acceptance is a formal determination, signed by the AI Governance Lead , confirming that the residual risk is proportionate to the system's benefits and that no reasonably practicable further mitigation is available. The signed attestation is retained in the AISDP evidence pack. The attestation records the risk identifier, the residual risk score (after mitigations), the mitigations in place, the alternative mitigations considered and rejected, and the AI Governance Lead's determination that the residual level is acceptable. For risks with residual scores at or near the treatment threshold, the attestation should include a re-review trigger (for example, the residual risk will be reassessed if the deployment scale exceeds a specified level or if monitoring data indicates the risk is materialising at a higher rate than predicted). The AI Governance Lead cannot delegate the risk acceptance decision to the development team. This separation of authority ensures that risk acceptance decisions are made with full organisational awareness of their implications. Key outputs Signed risk acceptance attestation per residual risk above threshold Residual score, mitigations, alternatives considered, and determination Re-review triggers for borderline acceptances Module 6 AISDP evidence Communicating Residual Risk to Deployers AISDP module(s): Module 6 (Risk Management System), Module 8 (Transparency) Regulatory basis: Article 9(4), Article 13 Residual risks that deployers inherit must be communicated through the Instructions for Use (Module 8). The communication must be specific: stating that "the system has residual fairness risk" is inadequate. The deployer must know which subgroups are affected, the magnitude of the risk, the conditions under which the risk is most likely to materialise, and the compensating controls the deployer should apply. For each residual risk communicated to deployers, the Instructions for Use should describe the risk in terms the deployer can understand (avoiding unnecessary jargon), specify the conditions or input characteristics that increase the risk's likelihood, recommend specific compensating controls the deployer should implement, and state the monitoring or oversight measures the deployer should maintain. The deployer-facing residual risk communication is subject to continuous monitoring through the post-market monitoring plan ( Module 12 ). If monitoring data indicates that a residual risk is materialising at a higher rate than predicted, the communication may need updating and deployers notified of the change. Key outputs Specific, actionable residual risk communication per risk Deployer-facing description, conditions, compensating controls, and monitoring guidance PMM-driven update triggers for residual risk communication Module 6 and Module 8 AISDP documentation Periodic Residual Risk Review AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Article 9 Residual risk acceptability changes over time. A risk that was acceptable at deployment may become unacceptable if the deployment scale increases, exposing more affected persons. It may become unacceptable if the affected population changes, for instance when the system is deployed in a new jurisdiction with different demographic composition. New evidence may emerge, such as academic research identifying a failure mode not previously anticipated. The regulatory standard may tighten through new AI Office guidance. The quarterly risk register review must re-assess residual risk acceptability, not merely confirm that the original acceptance remains on file. Each residual risk acceptance is re-examined in light of current monitoring data, current deployment context, and current regulatory expectations. If the re-assessment determines that a previously accepted residual risk is no longer acceptable, the risk is escalated for additional mitigation. Trigger-based reassessment supplements the quarterly cadence. A serious incident , a substantial modification , or a significant change in the deployment context triggers an immediate re-examination of all affected residual risk acceptances. Key outputs Quarterly re-assessment of residual risk acceptability Re-examination in light of current data, context, and regulatory expectations Escalation for additional mitigation when acceptability changes Module 6 AISDP documentation --- ## Resource Estimation URL: https://docs.standardintelligence.com/resource-estimation Breadcrumb: Governance › Delivery › Resource Estimation Last updated: 28 Feb 2026 FTE per System AISDP module(s): Cross-cutting Regulatory basis: Article 17 Resource estimation for a single medium-complexity high-risk system typically requires approximately 0.5 FTE for the AI System Assessor (classification, risk assessment , AISDP compilation, conformity assessment ), 0.3 FTE for the Technical SME (engineering evidence, technical queries, testing), 0.1 FTE for the Legal and Regulatory Advisor (legal review, cross-regulatory coordination, Declaration review), and 0.1 FTE for the AI Governance Lead (governance decisions, gate approvals, Declaration signing). These estimates cover the initial AISDP preparation period (20–28 weeks). Ongoing compliance requires approximately 0.2 FTE for the Assessor (PMM review, AISDP updates, annual re-assessment), 0.1 FTE for the Technical SME (monitoring support, change assessment), and smaller allocations for legal and governance oversight. Factors that increase effort include GPAI model integration with limited disclosures (add 3–6 weeks), brownfield system s with limited documentation (add 4–10 weeks), biometric identification requiring notified body assessment (add 6–12 weeks), and multi-jurisdiction deployment (add 2–4 weeks). Factors that decrease effort include well-documented explainable models (reduce Phase 3 by 1–2 weeks) and reusable artefacts from comparable systems (reduce total effort by 20–30%). Key outputs Per-role FTE estimates for initial preparation and ongoing compliance Effort adjustment factors (increasing and decreasing) Basis for programme budgeting Cross-cutting resource planning Duration: 20–28 Weeks AISDP module(s): Cross-cutting Regulatory basis: Articles 8–15 Total elapsed time from initiation to production deployment is typically 20 to 28 weeks for a medium-complexity high-risk system with cooperative stakeholders. Phases overlap: risk assessment informs architecture, which informs development, which begins before risk assessment is fully complete. The timeline assumes the organisation has established foundational infrastructure ( version control , CI/CD, monitoring) before commencing the system-specific workflow. Where foundational infrastructure must be built concurrently, add 8 to 16 weeks; this investment benefits all subsequent systems. The timeline also assumes that stakeholders (Business Owner, Technical SME, Legal counsel) are available when needed; stakeholder availability bottlenecks are one of the most common causes of timeline overrun. For brownfield systems, the timeline depends on the gap assessment results. A system with substantial existing documentation may require 12 to 16 weeks of remediation; a system with minimal documentation may require 24 to 36 weeks. Key outputs 20–28 week baseline for medium-complexity greenfield systems Adjustment factors for infrastructure build, brownfield, and stakeholder availability Phase overlap enabling parallel execution Basis for deployment timeline planning Cost: €150K–400K Initial; €50K–150K Annual AISDP module(s): Cross-cutting Regulatory basis: Articles 8–15 The fully loaded cost (personnel, tooling, infrastructure, external support) for preparing an AISDP for a medium-complexity high-risk system ranges from EUR 150,000 to EUR 400,000 for initial preparation, with annual ongoing compliance costs of EUR 50,000 to EUR 150,000. These figures vary widely by jurisdiction, organisation size, and system complexity. The initial cost includes personnel time (the largest component), tooling licences (GRC platforms, monitoring infrastructure, testing frameworks), external support (legal counsel, notified body fees where applicable, translation), and infrastructure (evidence repository, monitoring stack, CI/CD enhancements). Annual ongoing costs include PMM operation, quarterly governance reviews, annual re-assessment, regulatory monitoring, evidence currency maintenance, and operator training refreshers. The AI Governance Lead validates these estimates against the organisation's specific circumstances during Phase 1. For organisations with existing GRC infrastructure, monitoring capabilities, and compliance teams, the incremental cost may fall at the lower end. For organisations building compliance capability from scratch, the cost may exceed the upper estimate. Key outputs EUR 150K–400K initial preparation cost range EUR 50K–150K annual ongoing compliance cost range Cost components identified (personnel, tooling, external, infrastructure) AI Governance Lead validation during Phase 1 Multi-Jurisdiction Incremental Costs AISDP module(s): Cross-cutting Regulatory basis: Articles 49, 70, 73 Multi-jurisdiction deployment adds incremental costs per jurisdiction: translation of Instructions for Use and the Declaration of Conformity (EUR 10,000–30,000 initial per five-language deployment, EUR 3,000–10,000 annual), regulatory monitoring (scaling with the number of jurisdictions tracked), local legal counsel for jurisdiction-specific guidance review, incident response capability across time zones and languages, and deployer support varying by jurisdiction. The AI Governance Lead estimates incremental costs per jurisdiction and factors them into the deployment business case. A phased rollout strategy (deploying to a small number of member states first, then expanding) manages cost exposure while building operational maturity. The total multi-jurisdiction premium for a five-state deployment typically adds 15–25% to the base compliance cost. Organisations deploying across ten or more states should budget for a dedicated multi-jurisdiction coordination function. Key outputs Per-jurisdiction incremental cost estimation 15–25% premium for five-state deployment Phased rollout as cost management strategy Dedicated coordination function for 10+ state deployments --- ## Retrofitting Testing — Comprehensive Retrospective URL: https://docs.standardintelligence.com/retrofitting-testing-comprehensive-retrospective Breadcrumb: Governance › Delivery › Brownfield Compliance › Retrofitting Testing — Comprehensive Retrospective Last updated: 28 Feb 2026 Retrofitting Testing — Comprehensive Retrospective AISDP module(s): Module 5 (Testing and Validation) Regulatory basis: Articles 9, 15 Systems that were not subject to the full test suite described in undergo comprehensive retrospective testing. This includes fairness testing across all protected characteristic subgroups, robustness testing (adversarial examples, input perturbation), performance benchmarking against the thresholds that the AISDP will declare, and security testing. The results of retrospective testing become the baseline against which future changes are evaluated. The testing methodology and results are documented in Module 5, with clear indication that the testing was conducted retrospectively on the deployed system rather than during development. Where retrospective testing reveals performance or fairness deficiencies that the AISDP thresholds cannot accommodate, remediation is required before the Declaration of Conformity can be signed. This may involve model retraining, threshold adjustment, additional mitigations, or in severe cases, system withdrawal. Key outputs Comprehensive retrospective testing (fairness, robustness, performance, security) Baseline establishment for future change evaluation Transparent labelling as retrospective testing Module 5 AISDP documentation --- ## Retrofitting Version Control — Baseline Capture URL: https://docs.standardintelligence.com/retrofitting-version-control-baseline-capture Breadcrumb: Governance › Delivery › Brownfield Compliance › Retrofitting Version Control — Baseline Capture Last updated: 28 Feb 2026 Retrofitting Version Control — Baseline Capture AISDP module(s): Module 2 (Development Process) Regulatory basis: Article 11 , Article 18 Systems that were not developed under version control can be brought into the framework from the current point forward. The Technical SME captures the current state of all artefacts (code, models, data, configuration) as a baseline version. From this baseline, all subsequent changes are fully version-controlled with attribution, timestamping, and complete diff history. The AISDP documents the date from which formal version control was established and acknowledges that the version history prior to that date is incomplete. This acknowledgement is essential: a version history that purports to cover the system's entire development but was actually created retroactively is misleading evidence. For model artefacts, the baseline capture includes the model's current parameters, the training configuration, and the evaluation metrics at the point of capture. For data artefacts, the baseline includes the current dataset versions, their statistical profiles, and their storage locations. Key outputs Baseline version capture of all current artefacts Version control established from baseline date forward Explicit acknowledgement of incomplete pre-baseline history Module 2 AISDP documentation --- ## Risk Assessment Artefacts URL: https://docs.standardintelligence.com/risk-assessment-artefacts Breadcrumb: Governance › Risk Assessment › Artefacts Last updated: 28 Feb 2026 ℹ Awaiting content from a subsequent batch (v13). Awaiting content. --- ## Risk Assessment for Specific Categories URL: https://docs.standardintelligence.com/risk-assessment-for-specific-categories Breadcrumb: Governance › Risk Assessment › Specific Categories Last updated: 28 Feb 2026 ℹ Awaiting content from a subsequent batch (v13). Awaiting content. --- ## Risk Assessment URL: https://docs.standardintelligence.com/risk-assessment Breadcrumb: Governance › Risk Assessment (S.2) Last updated: 28 Feb 2026 Risk assessment under the EU AI Act begins with classification and extends through identification, scoring, mitigation, and iterative review. Risk classification applies the four-tier framework, assessing prohibited practices, high-risk categorisation under Annex III and Annex I, and the full obligation set. The Article 6(3) exception assessment evaluates whether a system that falls within Annex III may qualify for an exception. The classification decision record documents the determination with supporting evidence. Reclassification trigger s define the events that require reassessment. Five-method risk identification combines structured workshops, historical analysis, regulatory checklists, adversarial analysis, and stakeholder interviews. Risk scoring and calibration applies likelihood-impact matrices with calibration against documented precedent. Reputational risk extends the assessment beyond regulatory harm. Residual risk and acceptability documents the risk remaining after controls, with deployer communication and periodic review. The fundamental rights impact assessment maps the system's effects against the EU Charter. Risk assessment for specific categories addresses biometric, critical infrastructure, employment, and law enforcement contexts. GPAI model risk assessment covers systemic risk evaluation. Iterative risk management ensures risk assessment is a continuous process. The section concludes with artefacts. ℹ are populated. (FRIA continuation, specific categories, GPAI risk, iterative management, artefacts) are awaiting content from a subsequent batch. --- ## Risk Classification URL: https://docs.standardintelligence.com/risk-classification Breadcrumb: Governance › Risk Assessment › Risk Classification Last updated: 28 Feb 2026 Four-Tier Framework Overview Tier 1: Prohibited Practices (Art. 5) — Seven Categories & Immediate Cessation Tier 2: High-Risk (Annex III) — Eight Domains Tier 2: Annex I Safety Components Tier 2: Full Obligation Set (AISDP, Conformity Assessment, CE, EU DB) Tier 3: Limited Risk (Art. 50) — Transparency Obligations Tier 4: Minimal Risk — Baseline AISDP Only --- ## Risk Scoring & Calibration URL: https://docs.standardintelligence.com/risk-scoring-and-calibration Breadcrumb: Governance › Risk Assessment › Risk Scoring & Calibration Last updated: 28 Feb 2026 Four Scoring Dimensions AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Article 9 Risks are scored using a likelihood-impact matrix. Impact is assessed against four dimensions: health and safety, fundamental rights, operational integrity, and reputational exposure. Each dimension has a calibrated five-point rubric. Health and safety scores range from negligible (1, no measurable consequence) to catastrophic (5, irreversible harm to life or safety affecting a large and vulnerable population). Fundamental rights scores range from negligible (1, no discernible Charter right effect) to catastrophic (5, large-scale or irreversible infringement, or infringement affecting a right of particular sensitivity such as human dignity or non-discrimination). Operational integrity scores range from negligible (1, no effect on availability or accuracy) to catastrophic (5, total system failure or integrity compromise). Reputational exposure scores range from negligible (1, internal awareness only) to catastrophic (5, sustained public attention, political scrutiny, and regulatory enforcement). Likelihood is scored separately on a five-point scale: rare (1), unlikely (2), possible (3), likely (4), and almost certain (5). Each score must be accompanied by a written rationale citing specific evidence. Key outputs Four-dimension impact assessment (health/safety, rights, operational, reputational) Five-point calibrated rubrics per dimension Separate likelihood scoring with evidence-based rationale Module 6 AISDP documentation Composite Scoring & Documented Weighting Rationale AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Article 9 The composite risk score is the product of the likelihood rating and the highest impact rating across the four dimensions. This "worst-case dimension" approach ensures that a risk with low operational impact but catastrophic fundamental rights impact is not diluted by averaging. The AI System Assessor records all four impact ratings; the composite score drives treatment priority, but the individual dimension scores inform the type of mitigation required. Risks scoring above the organisation's defined threshold (typically 12 or above on a 25-point scale) require specific, documented mitigation measures. Those scoring below the threshold may be accepted, with the acceptance recorded and signed by the AI Governance Lead . The threshold itself should be documented with its rationale and reviewed periodically; a threshold set too high leaves material risks unmitigated, while one set too low creates an unmanageable mitigation burden. The weighting rationale (why the worst-case dimension approach was chosen over averaging or other composite methods) is documented in Module 6. This rationale enables an assessor to understand the scoring methodology and evaluate its appropriateness for the system's risk profile. Key outputs Composite score = Likelihood × Highest Impact Dimension Treatment threshold documented with rationale All four dimension scores retained alongside the composite Module 6 AISDP documentation Calibration Workshops — Reference Scenarios & Anchors AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Article 9 Scoring is inherently subjective. Calibration workshops present assessors with five to ten reference scenarios drawn from published enforcement actions, the AI Incident Database, or internal near-miss events. Assessors score the scenarios independently, then compare results. Divergences are discussed and the group agrees on reference scores for each scenario; these become calibration anchors. When scoring a new risk, assessors compare it to the anchored scenarios, grounding their scoring in concrete reference points rather than abstract rubric definitions. Systematic divergences (one assessor consistently scoring likelihood higher than another) are identified by the AI Governance Lead and addressed through shared reference cases and discussion. Calibration workshops should precede each assessment cycle. New assessors complete a calibration exercise before conducting their first live assessment. Where the organisation has multiple high-risk systems, cross-system calibration ensures that a "Significant" rating carries the same meaning across the portfolio, enabling meaningful portfolio-level risk reporting. The calibration results are retained as Module 6 compliance evidence. Key outputs Annual calibration workshops with 5–10 reference scenarios Calibration anchors agreed by the assessor group Cross-system calibration for portfolio consistency Module 6 AISDP evidence Semi-Quantitative Bayesian Scoring AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Article 9 For high-uncertainty risks where the team cannot confidently distinguish between likelihood levels, semi-quantitative Bayesian scoring offers a more defensible approach than forcing a single point estimate. Each assessor provides a probability distribution across the five likelihood levels, for example: 10% rare, 30% unlikely, 40% possible, 15% likely, 5% almost certain. The distributions are aggregated across assessors, and the resulting expected value and confidence interval are reported alongside the risk. This makes uncertainty visible rather than concealing it behind a point estimate. A risk with a narrow confidence interval around "Possible" represents a different level of confidence than a risk with a wide distribution spanning "Unlikely" to "Almost Certain," even if both have the same expected value. Most GRC platforms do not natively support distributional scoring. Implementation may require a custom tool: a Python script or a simple web form that collects distributions and computes aggregates. Semi-quantitative Bayesian scoring is recommended for the system's top-ten risks and for any risk where assessors disagree by more than one point on the standard scale. The distributions and aggregation methodology are documented in Module 6. Key outputs Probability distributions across likelihood levels per assessor Aggregated expected values and confidence intervals Uncertainty made visible for high-uncertainty risks Module 6 AISDP evidence Written Rationale per Score AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Article 9 Every risk score must be accompanied by a written rationale citing specific evidence. "Medium likelihood" is insufficient; the assessor must explain why medium rather than high, citing the frequency of a particular failure mode observed during testing, the exposure of the affected population, comparable incidents in similar systems, or the maturity of the mitigations in place. The written rationale serves two functions. During the assessment, it forces the assessor to ground their judgement in evidence rather than intuition. During conformity assessment or regulatory inspection, it enables a reviewer to evaluate whether the score is defensible and to challenge it if the evidence does not support the conclusion. Scoring patterns across the register are reviewed by the AI Governance Lead to identify systematic inconsistencies before the assessment is finalised. A register where all risks cluster at the same score suggests that the rubric is not being applied with sufficient granularity. Key outputs Written rationale per risk score citing specific evidence Evidence grounding for both likelihood and impact dimensions AI Governance Lead review of scoring patterns for consistency Module 6 AISDP evidence --- ## Sensitive Domains — Non-Public Section URL: https://docs.standardintelligence.com/sensitive-domains-non-public-section Breadcrumb: Governance › Regulator Interaction › EU Database Registration › Sensitive Domains — Non-Public Section Last updated: 28 Feb 2026 Sensitive Domains — Non-Public Section AISDP module(s): Module 10 (Compliance Record) Regulatory basis: Article 49 (4) High-risk AI systems under Annex III points 1, 6, and 7 (biometric identification for law enforcement, migration/asylum/border control) are registered in a secure, non-public section of the EU database. Only the Commission and nationally designated authorities under Article 74(8) can access this section. The information submitted is a subset of the full Section A requirements. Critical infrastructure systems under Annex III, point 2 are registered at national level, outside the EU database entirely, reflecting national security sensitivities. The Conformity Assessment Coordinator confirms the correct registration pathway for each system based on its Annex III classification. The non-public registration follows the same data quality assurance process as public registration. The reduced visibility does not reduce the accuracy requirement; competent authorities will scrutinise non-public registrations with the same rigour as public ones. Key outputs Non-public database section for Annex III points 1, 6, 7 National-level registration for Annex III point 2 Same data quality standard as public registration Module 10 AISDP evidence --- ## Seven-Phase Delivery Framework URL: https://docs.standardintelligence.com/seven-phase-delivery-framework Breadcrumb: Governance › Delivery › Seven-Phase Framework Last updated: 28 Feb 2026 Phase 1: Discovery & Classification (Weeks 1–3) Phase 2: Risk Assessment & FRIA (Weeks 2–6) Phase 3: Architecture & Design (Weeks 4–8) Phase 4: Development & Testing (Weeks 6–18) Phase 5: Pre-Deployment Validation (Weeks 16–20) Phase 6: Registration & Deployment (Weeks 20–22) Phase 7: Operational Monitoring (Ongoing) --- ## Stakeholder Interview Records URL: https://docs.standardintelligence.com/stakeholder-interview-records Breadcrumb: Governance › Conformity Assessment › Artefacts › Stakeholder Interview Records Last updated: 28 Feb 2026 Stakeholder Interview Records AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Annex VI The stakeholder interview records from Phase 4 of the assessment are retained as evidence. Each record documents the interviewee (role, not personal identification where GDPR considerations apply), the interview date, the questions asked, the responses given, and any findings arising. Interview records from the Technical SME, Business Owner, and Operators collectively demonstrate that the assessment verified the system's compliance not only through documentation but through direct engagement with the persons responsible for building, governing, and operating it. Findings from interviews (training gaps, communication gaps, process gaps) are recorded in the Non-Conformity Register where they constitute non-conformities. Key outputs Per-interview documentation (interviewee role, questions, responses, findings) Verification of compliance through direct stakeholder engagement Interview findings linked to the Non-Conformity Register Ten-year retention --- ## Technical SME — Risk, Architecture, Testing URL: https://docs.standardintelligence.com/technical-sme-risk-architecture-testing Breadcrumb: Governance › Delivery › Organisational Roles › Technical SME — Risk, Architecture, Testing Last updated: 28 Feb 2026 Technical SME — Risk, Architecture, Testing AISDP module(s): Modules 2–5, 9, 10 (Development, Architecture, Data, Testing, Cybersecurity) Regulatory basis: Articles 9–15 The Technical SME provides engineering evidence: architecture documentation, model evaluation results, data governance artefacts, and testing reports. Typically the engineering lead or senior ML engineer, the Technical SME is the primary source of truth for the system's technical design, data, and operational behaviour. The Technical SME is responsible (R) for architecture review, data governance, and (alongside the AI System Assessor ) risk assessment . The role responds to technical queries during conformity assessment , provides evidence for the NB evidence pack where applicable, and supports inspection readiness by explaining the system's technical details to assessors and inspectors. The Technical SME works with the Technical Owner (the engineering lead or CTO who ensures design and testing satisfy Articles 9–15) and the Business Owner (the product manager who ensures intended purpose and deployment context are correctly documented). Key outputs Engineering evidence provision for AISDP compilation Technical queries during assessment and inspection RACI "R" for architecture, data governance, and testing evidence Collaboration with Technical Owner and Business Owner --- ## Third-Country Providers — Authorised Representative (Art. 22) URL: https://docs.standardintelligence.com/third-country-providers-authorised-representative-art-22 Breadcrumb: Governance › Regulator Interaction › Multi-Jurisdiction Deployment › Third-Country Providers — Authorised Representative (Art. 22) Last updated: 28 Feb 2026 Third-Country Providers — Authorised Representative (Art. 22) AISDP module(s): Module 10 (Compliance Record) Regulatory basis: Article 22 Providers established outside the EU who place AI systems on the EU market must appoint an authorised representative established in the EU. The representative's responsibilities include maintaining a copy of the technical documentation, cooperating with competent authorities, and providing information to demonstrate compliance. The authorised representative should have technical understanding sufficient to respond meaningfully to authority inquiries. A purely legal appointment, where the representative holds documentation but cannot explain technical details, may prove inadequate during an inspection. The representative should have access to technical support from the provider's engineering team and understand the AISDP at a level sufficient to navigate routine authority interactions independently. The representative's written mandate must be on file and cover the AI Act scope. The mandate is referenced in the Declaration of Conformity (Annex V, point 2) and the EU database registration . Key outputs Authorised representative with technical understanding Written mandate covering AI Act scope Access to technical support from provider's team Module 10 AISDP evidence --- ## Tier 1: Prohibited Practices (Art. 5) — Seven Categories & Immediate Cessation URL: https://docs.standardintelligence.com/tier-1-prohibited-practices-art-5-seven-categories-and Breadcrumb: Governance › Risk Assessment › Risk Classification › Tier 1: Prohibited Practices (Art. 5) — Seven Categories & Immediate Cessation Last updated: 28 Feb 2026 Tier 1: Prohibited Practices (Art. 5) — Seven Categories & Immediate Cessation AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Article 5 Article 5 prohibits eight categories of AI practice. Subliminal, manipulative, or deceptive techniques that materially distort behaviour. Exploitation of vulnerabilities arising from age, disability, or social or economic situation. Social scoring by public authorities or on their behalf. Untargeted facial recognition scraping for database building. Emotion recognition in workplaces or educational institutions (outside narrow medical and safety exceptions). Risk assessment of natural persons for criminal offending based solely on profiling. Biometric categorisation systems that individually categorise natural persons based on biometric data to deduce or infer sensitive attributes such as race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation (outside narrow law enforcement exceptions). Real-time remote biometric identification in publicly accessible spaces (outside narrow law enforcement exceptions). Systems falling within any of these categories cannot proceed through the AISDP process. Their identification triggers immediate escalation to the AI Governance Lead and the Legal and Regulatory Advisor, followed by cessation of operation. The risk assessment must screen for prohibited practices before any other analysis begins. The screening should be thorough. Some systems may inadvertently perform a prohibited function through a secondary capability or an emergent behaviour. The AI System Assessor documents the screening analysis, confirming which prohibited categories were considered and why the system does not fall within any of them. Key outputs Screening against all eight prohibited practice categories Immediate escalation and cessation procedure if triggered Documented screening analysis confirming non-applicability Module 6 AISDP documentation --- ## Tier 2: Annex I Safety Components URL: https://docs.standardintelligence.com/tier-2-annex-i-safety-components Breadcrumb: Governance › Risk Assessment › Risk Classification › Tier 2: Annex I Safety Components Last updated: 28 Feb 2026 Tier 2: Annex I Safety Components AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Article 6(1), Annex I AI systems that constitute safety components of products governed by Annex I harmonisation legislation are classified as high-risk regardless of whether they fall within an Annex III domain. Annex I covers Union harmonisation legislation including the Machinery Regulation, the Medical Devices Regulation, the Radio Equipment Directive, civil aviation safety regulations, motor vehicle type-approval regulations, and marine equipment requirements. A safety component is an AI system that, if it malfunctions or fails, can endanger the health or safety of persons. The determination requires both a product-level analysis (is the product governed by Annex I legislation?) and a component-level analysis (does the AI system perform a safety function within that product?). An AI system that optimises an industrial robot's trajectory is a safety component; an AI system that schedules maintenance for the same robot may not be. For Annex I safety components, the AI Act conformity assessment must coordinate with the product-level conformity assessment under the relevant harmonisation legislation. The Conformity Assessment Coordinator documents both assessments and their interaction. Key outputs Product-level Annex I legislation identification Component-level safety function determination Coordinated conformity assessment planning Module 6 AISDP documentation --- ## Tier 2: Full Obligation Set (AISDP, Conformity Assessment, CE, EU DB) URL: https://docs.standardintelligence.com/tier-2-full-obligation-set-aisdp-conformity-assessment-ce Breadcrumb: Governance › Risk Assessment › Risk Classification › Tier 2: Full Obligation Set (AISDP, Conformity Assessment, CE, EU DB) Last updated: 28 Feb 2026 Tier 2: Full Obligation Set (AISDP, Conformity Assessment, CE, EU DB) AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Articles 8–15, 43, 47, 49 High-risk systems (whether Annex III or Annex I) bear the full obligation set under the AI Act. A complete AISDP covering all 12 modules as specified by Article 11 and Annex IV. A conformity assessment under Annex VI (internal control, the default for most Annex III systems) or Annex VII (involving a notified body, required for real-time remote biometric identification and certain Annex I products). A CE marking affixed to the system or its documentation under Article 48 , signifying conformity. Registration in the EU database under Article 49 , providing public transparency regarding the system's existence and key characteristics. A Declaration of Conformity under Article 47 and Annex V, constituting the provider's legally binding statement of compliance. The full obligation set also includes post-market monitoring under Article 72 , serious incident reporting under Article 73 , and quality management system requirements under Article 17 . The risk assessment documents this obligation set and confirms that the AISDP is structured to satisfy each element. Key outputs Full 12-module AISDP requirement confirmation Conformity assessment route determination (Annex VI or VII) CE marking, EU database registration, and Declaration of Conformity planning Module 6 AISDP documentation --- ## Tier 2: High-Risk (Annex III) — Eight Domains URL: https://docs.standardintelligence.com/tier-2-high-risk-annex-iii-eight-domains Breadcrumb: Governance › Risk Assessment › Risk Classification › Tier 2: High-Risk (Annex III) — Eight Domains Last updated: 28 Feb 2026 Tier 2: High-Risk (Annex III) — Eight Domains AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Article 6, Annex III Annex III defines eight domains within which AI systems are classified as high-risk. Biometrics covers remote biometric identification and biometric categorisation. Critical infrastructure covers AI components in the management and operation of critical digital infrastructure, road traffic, and the supply of water, gas, heating and electricity. Education and vocational training covers systems determining access to or outcomes within educational institutions. Employment, workers management and access to self-employment covers systems used in recruitment, candidate screening, performance evaluation, and task allocation. Access to and enjoyment of essential private services and essential public services and benefits covers systems determining eligibility for public assistance, creditworthiness assessment, insurance pricing, and emergency services dispatch. Law enforcement covers systems used for risk assessment of individuals, polygraphs, evidence reliability assessment, and crime prediction. Migration, asylum and border control management covers systems used for asylum application assessment, border surveillance, and security risk assessment. Administration of justice and democratic processes covers systems used for legal research assistance and alternative dispute resolution. The AI System Assessor maps the system's intended purpose against all eight domains. A system may fall within multiple domains if its use spans different contexts. The domain determination drives the specific risk factors that the subsequent risk assessment must address. Key outputs Mapping of intended purpose against all eight Annex III domains Multi-domain identification where applicable Domain-specific risk factor identification Module 6 AISDP documentation --- ## Tier 3: Limited Risk (Art. 50) — Transparency Obligations URL: https://docs.standardintelligence.com/tier-3-limited-risk-art-50-transparency-obligations Breadcrumb: Governance › Risk Assessment › Risk Classification › Tier 3: Limited Risk (Art. 50) — Transparency Obligations Last updated: 28 Feb 2026 Tier 3: Limited Risk (Art. 50) — Transparency Obligations AISDP module(s): Module 6 (Risk Management System), Module 8 (Transparency) Regulatory basis: Article 50 Systems triggering Article 50 transparency obligations include chatbots and conversational AI (which must inform users they are interacting with an AI system), emotion recognition systems (which must inform exposed persons), biometric categorisation systems (which must inform categorised persons), and systems generating or manipulating synthetic content including deepfakes (which must label outputs as artificially generated or manipulated). These systems require a standard AISDP addressing the specific transparency measures applicable to their category. The standard AISDP is lighter than the full high-risk AISDP; it focuses on the transparency controls, the technical mechanisms for delivering the required disclosures, and the evidence that the disclosures are effective and comprehensible. A system may be both limited-risk (triggering Article 50 obligations) and high-risk (falling within Annex III). In such cases, the full high-risk obligation set applies, and the Article 50 transparency obligations are subsumed within the Module 8 transparency documentation. Key outputs Article 50 category determination Standard AISDP scoped to transparency obligations Dual classification handling where both limited and high-risk apply Module 6 and Module 8 AISDP documentation --- ## Tier 4: Minimal Risk — Baseline AISDP Only URL: https://docs.standardintelligence.com/tier-4-minimal-risk-baseline-aisdp-only Breadcrumb: Governance › Risk Assessment › Risk Classification › Tier 4: Minimal Risk — Baseline AISDP Only Last updated: 28 Feb 2026 Tier 4: Minimal Risk — Baseline AISDP Only AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Residual (no specific AI Act obligation) Systems that do not trigger any prohibited practice, high-risk classification, or Article 50 transparency obligation fall into the minimal-risk tier. The AI Act imposes no specific requirements on these systems, though Recital 95 encourages the application of voluntary codes of conduct. Even for minimal-risk systems, a baseline AISDP is recommended confirming the classification rationale. The baseline AISDP documents the system's intended purpose, the classification analysis (including why the system does not fall within Annex III or Article 50), and the date of the classification. This documentation protects the organisation against subsequent challenges to the classification and provides the foundation for reclassification if the system's deployment context changes. The baseline AISDP is a lightweight document, typically a few pages, retained for the system's operational lifetime. It demonstrates that the organisation conducted a deliberate classification analysis rather than defaulting to minimal risk through inattention. Key outputs Baseline AISDP with classification rationale Documented analysis confirming non-applicability of higher tiers Retention for the system's operational lifetime Module 6 AISDP documentation --- ## Timeline Planning URL: https://docs.standardintelligence.com/timeline-planning Breadcrumb: Governance › Conformity Assessment › Notified Bodies › Timeline Planning Last updated: 28 Feb 2026 Timeline Planning AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Article 43 Timeline planning accounts for the full assessment lifecycle. Pre-engagement (body selection, scope agreement, contract negotiation) typically requires four to eight weeks. Desktop review requires four to twelve weeks depending on the body's workload and the AISDP's readiness. Gap remediation adds two to eight weeks. Technical assessment requires two to four weeks of active engagement. Final reporting and certification adds two to four weeks. The total timeline from initial engagement to certification typically spans four to eight months. Organisations should begin the notified body engagement process at least nine months before their target deployment date. This timeline is for a well-prepared organisation; organisations with significant documentation gaps may require longer. For mandatory assessments (biometric identification under Annex III , point 1), timeline overruns directly delay deployment. The AI Governance Lead incorporates assessment timeline risk into the overall deployment plan, with contingency provisions for additional remediation cycles. Key outputs Four-to-eight-month total timeline from engagement to certification Nine-month minimum lead time before target deployment Phase-by-phase timeline allocation Contingency for additional remediation cycles --- ## Translation Records URL: https://docs.standardintelligence.com/translation-records Breadcrumb: Governance › Regulator Interaction › Artefacts › Translation Records Last updated: 28 Feb 2026 Translation Records AISDP module(s): Module 8 (Transparency), Module 10 (Compliance Record) Regulatory basis: Article 13 (3)(b)(ii) The translation records archive documents each translation commissioned: the source document, target language, translator credentials, technical reviewer, review outcome, and the controlled document version of the final translation. The standardised multi-language glossary is maintained as a standing translation resource. Key outputs Per-translation documentation with quality assurance trail Standardised glossary maintained across languages Controlled document versioning for translations Module 8 and Module 10 AISDP evidence --- ## When NB Required URL: https://docs.standardintelligence.com/when-nb-required Breadcrumb: Governance › Conformity Assessment › Notified Bodies › When NB Required Last updated: 28 Feb 2026 When NB Required AISDP module(s): Module 6 (Risk Management System) Regulatory basis: Article 43(1) Article 43(1) establishes the conformity assessment regime for high-risk AI systems referred to in Annex III , point 1 (biometric identification), in so far as those systems are used for the purposes of law enforcement, migration, asylum, and border control management. Where the provider has applied harmonised standards or common specifications, the provider may choose between internal control under Annex VI or third-party assessment under Annex VII involving a notified body. Where harmonised standards have not been applied, or do not exist, or common specifications are unavailable, the provider must follow the Annex VII procedure, which requires notified body involvement. Biometric identification systems used outside of these specific domains follow the standard Annex VI internal control procedure applicable to other Annex III high-risk systems. For Annex VII assessments where the system is intended for use by law enforcement, immigration or asylum authorities, or EU institutions, the market surveillance authority acts as the notified body rather than a freely chosen one. This does not exempt such systems from third-party assessment; it designates a specific entity to perform it. For all other Annex III high-risk systems (points 2 to 8), internal control under Annex VI is the required procedure, without notified body involvement. The designation of notified bodies under the AI Act is proceeding gradually. As of early 2026, only a small number of bodies have been formally designated. Organisations anticipating mandatory or voluntary third-party assessment should monitor the NANDO database for AI Act-designated bodies and engage early to understand assessment methodology, timeline, and fees. Key outputs Annex III point 1 (biometrics): Annex VI or Annex VII depending on harmonised standard application Annex VII mandatory where harmonised standards not applied or unavailable Law enforcement systems: market surveillance authority acts as notified body Annex III points 2–8: internal control under Annex VI (no NB involvement) NANDO database monitoring for designated bodies Voluntary NB engagement available for non-biometric high-risk systems --- # Operations --- ## AI Literacy URL: https://docs.standardintelligence.com/ai-literacy Breadcrumb: Operations › Oversight › AI Literacy Last updated: 28 Feb 2026 Tiered Programme — Five Levels AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Article 4 The AI Governance Lead tiers AI literacy training to the individual's role in the oversight pyramid . Level 1 (Engineering): deep technical training on model behaviour, failure modes, monitoring tools, and incident response . Level 2 (Operators): practical training on the specific system, capabilities and limitations, confidence indicators, override procedures, and escalation pathways. Level 3 (Product Management): AI compliance obligations, business-compliance metric relationships, deployer management, and affected person rights. Level 4 (Compliance, Legal, DPO): EU AI Act requirements, AISDP structure, conformity assessment , and GDPR interaction. Level 5 (Executive): portfolio overview, risk posture, compliance status, and regulatory environment. Each tier receives training calibrated to the decisions and actions required of that role. Generic "AI awareness" training satisfies the spirit of Article 4 for no tier; each needs targeted content. Key outputs Five-tier training programme aligned to oversight pyramid Role-specific content for each tier Generic AI awareness insufficient for any tier Module 7 AISDP documentation Operator Training — Hands-On, Calibration & Scenario Exercises AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Article 4 Level 2 operator training must be custom to the specific system. Generic AI literacy training does not prepare an operator to review specific cases in the specific domain with the specific interface. Training includes hands-on exercises using the actual oversight interface, worked examples from the system's domain, calibration exercises where the operator reviews cases with known outcomes (testing whether the operator can identify the system's errors), and scenario exercises practising the override and break-glass procedures . Calibration exercises are particularly valuable for automation bias detection. An operator who consistently agrees with the system's recommendation on cases where the system is known to be wrong is exhibiting automation bias and requires additional training or workload adjustment. Key outputs Hands-on training with actual oversight interface Domain-specific worked examples Calibration exercises with known-outcome cases Override and break-glass scenario practice Training Cadence (Initial, Annual, Event-Triggered) AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Article 4 The training programme includes initial training before a person assumes their role in the oversight pyramid, periodic refresher training at least annually, and event-triggered training after a significant incident, after a substantial system modification, or after a regulatory update. Completion is tracked by the AI Governance Lead in a learning management system (Docebo, TalentLMS, Moodle) and retained as Module 7 evidence. The LMS generates compliance reports showing, for each person in the oversight pyramid, their current training status, last completed training date, and any overdue refreshers. Overdue refreshers trigger automated reminders escalating to the AI Governance Lead. Key outputs Three-cadence training (initial, annual refresher, event-triggered) LMS tracking with compliance reporting Automated overdue reminders Module 7 AISDP evidence Records & Certification (LMS Tracking, Operator Certification) AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Article 4, Article 17 Training completion records are retained as evidence for the AISDP's quality management documentation. For operators of high-risk AI systems, certification records confirm that the operator has completed required training and demonstrated competence through the calibration and scenario exercises. Certification is a prerequisite for operating the system. An operator whose certification has lapsed (refresher overdue) should not operate the system until recertified. The LMS enforces this through automated access control where feasible, or through the AI Governance Lead's manual oversight of the certification register. Key outputs Training completion records retained as Module 7 evidence Operator certification as prerequisite for system operation Lapsed certification prevents system operation LMS or manual certification register --- ## AISDP Version Updates URL: https://docs.standardintelligence.com/aisdp-version-updates Breadcrumb: Operations › PMM › Artefacts › AISDP Version Updates Last updated: 28 Feb 2026 AISDP Version Updates AISDP module(s): All modules (living document) Regulatory basis: Article 11 , Article 18 Each material change to the system, its documentation, or its operational context triggered by PMM findings creates a new AISDP version. The version history demonstrates continuous compliance discipline, linking each version change to the PMM finding, risk register entry, or governance decision that motivated it. The version update record captures the version number, date, the modules changed, the triggering event (PMM alert, governance decision, incident finding, regulatory development), and the approver. Version history is maintained as Module 12 evidence. Key outputs Per-change AISDP versioning with triggering event Continuous compliance discipline demonstrated through version history Approver documented per version Ten-year retention as Module 12 evidence --- ## Alerting & Escalation Framework URL: https://docs.standardintelligence.com/alerting-and-escalation-framework Breadcrumb: Operations › PMM › Alerting & Escalation Last updated: 28 Feb 2026 Informational Tier Warning Tier Critical Tier Escalation Path Design Silent Escalation Detection Threshold Calibration — Derivation & Quarterly Review --- ## Alerting Layer URL: https://docs.standardintelligence.com/alerting-layer Breadcrumb: Operations › PMM › PMM Infrastructure Architecture › Alerting Layer Last updated: 28 Feb 2026 Alerting Layer AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 72 Alerts are routed through a dedicated alerting service (PagerDuty, Opsgenie, or equivalent) that ensures delivery, tracks acknowledgement, and escalates unacknowledged alerts according to the escalation framework. The alerting service must guarantee delivery; an alert that is generated but not delivered is worse than no alert, because it creates a false sense that the monitoring system is working. Alert fatigue is a serious operational risk. Too many low-value alerts cause operators to ignore the alerting system entirely, including the high-value alerts. Threshold tuning and alert suppression for known, documented conditions are essential. The PMM plan documents the suppression rules and the rationale for each, ensuring that suppression does not mask genuine compliance issues. Each alert carries metadata linking it to the relevant AISDP module, the compliance obligation it relates to, and the severity tier. This metadata enables compliance-focused triage and reporting. Key outputs Dedicated alerting service with guaranteed delivery Alert fatigue mitigation through threshold tuning and suppression Per-alert metadata (AISDP module, compliance obligation, severity) Suppression rules documented with rationale --- ## Annual Break-Glass Testing URL: https://docs.standardintelligence.com/annual-break-glass-testing Breadcrumb: Operations › Oversight › Break-Glass Procedures › Annual Break-Glass Testing Last updated: 28 Feb 2026 Annual Break-Glass Testing AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Article 14 The Technical Owner tests the break-glass procedure at least annually through a simulated exercise. The exercise verifies that the technical stop mechanism works correctly (in-application stop button, infrastructure kill switch, and feature flag all function as documented), the notification chain delivers alerts to all recipients, affected deployers receive timely communication through the pre-established channels, and the system can be restarted through the documented resumption process. The exercise is conducted during a maintenance window under controlled conditions. Test results and any deficiencies identified are documented. Deficiencies are remediated and re-tested before the exercise is marked as complete. Exercise records are retained as Module 7 evidence. A break-glass mechanism that has never been tested may not work when needed. Annual testing provides confidence that the mechanism will function under the time pressure of a real incident. Key outputs Annual simulated exercise during maintenance window Four verification areas (stop mechanism, notification, deployer communication, resumption) Deficiency remediation and re-testing Exercise records retained as Module 7 evidence --- ## Annual Oversight Audit Report URL: https://docs.standardintelligence.com/annual-oversight-audit-report Breadcrumb: Operations › PMM › Artefacts › Annual Oversight Audit Report Last updated: 28 Feb 2026 Annual Oversight Audit Report AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 17 , Article 72 The Internal Audit Assurance Lead's annual audit report tests whether the PMM infrastructure is capturing required data, whether escalation pathways are functioning, whether break-glass procedures work as documented, whether training records are current, and whether the feedback loop is producing traceable actions. Findings are reported to the audit committee alongside any financial statement implications. Key outputs Independent annual audit of PMM and oversight effectiveness Audit committee and board reporting Findings entered into Non-Conformity Register Ten-year retention as Module 12 evidence --- ## Art. 3(49) Definition — Five Categories of Serious Incident URL: https://docs.standardintelligence.com/art-349-definition-five-categories-of-serious-incident Breadcrumb: Operations › PMM › Serious Incident Reporting › Art. 3(49) Definition — Five Categories of Serious Incident Last updated: 28 Feb 2026 Art. 3(49) Definition — Five Categories of Serious Incident AISDP module(s): Module 12 ( Post-Market Monitoring ) Regulatory basis: Article 3(49), Article 73 Article 3(49) defines a serious incident as an incident or malfunction that directly or indirectly results in one of five outcomes: death of a person, serious harm to a person's health (including life-threatening illness, temporary or permanent bodily impairment, hospitalisation, or medical intervention required to prevent such outcomes), serious and irreversible disruption to the management or operation of critical infrastructure, infringement of obligations under EU law intended to protect fundamental rights, or serious harm to property or the environment. The Commission's September 2025 draft guidance clarifies interpretive boundaries. "Serious and irreversible disruption of critical infrastructure" requires both seriousness (imminent threat to life or physical safety) and irreversibility (physical infrastructure requiring reconstruction, essential data irrecoverable, or specialised equipment irreparably damaged). Fundamental rights infringements must "significantly interfere" with Charter-protected rights "on a large scale," establishing a high threshold intended to prevent trivial reporting. Examples include recruitment systems systematically discriminating based on ethnicity, or credit scoring systems categorically rejecting individuals from specific neighbourhoods. Indirect causation is sufficient. An AI system providing incorrect medical analysis that leads to patient harm through subsequent physician decisions constitutes an indirect serious incident. Key outputs Five-category serious incident definition understood across the organisation Commission draft guidance interpretive thresholds applied Indirect causation included in the assessment scope Module 12 AISDP documentation --- ## Availability & Uptime vs SLO URL: https://docs.standardintelligence.com/availability-and-uptime-vs-slo Breadcrumb: Operations › PMM › Operational Monitoring › Availability & Uptime vs SLO Last updated: 28 Feb 2026 Availability & Uptime vs SLO AISDP module(s): Module 12 ( Post-Market Monitoring ) Regulatory basis: Article 15 The engineering team measures system availability against a defined SLO documented in the AISDP. For high-risk systems where unavailability could force deployers to make decisions without AI support, availability degradation is a compliance concern. The monitoring tracks both planned and unplanned downtime, computes rolling availability percentages over defined windows (hourly, daily, monthly), and alerts when availability trends downward or when a single outage exceeds the maximum tolerable duration. Where a fallback mechanism does not exist (the deployer's process assumes a valid AI output is always available), unavailability can introduce silent failures in the deployer's workflow. The SLO must account for this dependency, and the fallback procedure documentation (Instructions for Use) must specify what the deployer should do when the system is unavailable. Key outputs Availability monitoring against AISDP-declared SLO Planned and unplanned downtime tracking Rolling availability computation over multiple windows Fallback procedure documentation for deployers --- ## Board & Committee Reporting Materials URL: https://docs.standardintelligence.com/board-and-committee-reporting-materials Breadcrumb: Operations › Oversight › Artefacts › Board & Committee Reporting Materials Last updated: 28 Feb 2026 Board & Committee Reporting Materials AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Article 17 Board and committee reporting materials (risk committee, audit committee, compliance committee presentations and minutes) are retained as evidence of corporate governance engagement with AI compliance. These materials demonstrate that executive leadership was informed of and actively engaged in the organisation's AI governance programme. Key outputs Board and committee presentations and minutes retained Executive engagement evidence Decision documentation Ten-year retention --- ## Break-Glass Procedures URL: https://docs.standardintelligence.com/break-glass-procedures Breadcrumb: Operations › Oversight › Break-Glass Procedures Last updated: 28 Feb 2026 Who Can Trigger Break-Glass (Level 2 or Above) In-Application Stop Button Infrastructure Kill Switch Feature Flag Pattern Immediate Actions (Halt, Hold, Notify Deployers) Notification Chain Resumption Criteria Non-Retaliation for Break-Glass Annual Break-Glass Testing --- ## Break-Glass Test Records URL: https://docs.standardintelligence.com/break-glass-test-records Breadcrumb: Operations › Oversight › Artefacts › Break-Glass Test Records Last updated: 28 Feb 2026 Break-Glass Test Records AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Article 14 Break-glass test records document each annual exercise: the date, the participants, the mechanisms tested (stop button, kill switch, feature flag), the notification chain verification results, the deployer communication verification, the resumption process verification, any deficiencies identified, and the remediation actions taken. Key outputs Per-exercise documentation with test results Deficiency identification and remediation tracking Mechanism and notification chain verification Module 7 AISDP evidence --- ## Change Impact Assessment URL: https://docs.standardintelligence.com/change-impact-assessment Breadcrumb: Operations › PMM › Governance & Maintenance › Change Impact Assessment Last updated: 28 Feb 2026 Substantial Modification Threshold Check per Change AISDP module(s): Module 12 ( Post-Market Monitoring ), Module 2 (Development Process) Regulatory basis: Article 3(23) Every system change identified through the PMM feedback loop is assessed against the substantial modification thresholds defined in Article 3(23). A change that crosses the threshold triggers a new conformity assessment cycle (returning to Phase 5 of the delivery framework, ). A change that does not cross the threshold is documented in the AISDP change history (Module 12) through the standard change management framework. The assessment uses the criteria established during Phase 3: does the change affect the system's intended purpose, its architecture, its training data, its performance characteristics, or its risk profile in a way that could affect compliance with Articles 8–15? The AI System Assessor conducts the assessment, and the AI Governance Lead reviews borderline cases. A model retrain on updated data where the retrained model meets all existing validation gates and does not change the system's intended purpose typically does not constitute a substantial modification. A model retrain that changes the feature set, alters the decision boundary significantly, or introduces a new data source may cross the threshold. Key outputs Per-change substantial modification assessment Article 3(23) criteria applied by AI System Assessor Threshold crossing triggers return to Phase 5 Non-threshold changes documented in AISDP change history --- ## Composite System Monitoring URL: https://docs.standardintelligence.com/composite-system-monitoring Breadcrumb: Operations › PMM › Composite System Monitoring Last updated: 28 Feb 2026 Per-Component & Aggregate Monitoring AISDP module(s): Module 12 ( Post-Market Monitoring ) Regulatory basis: Article 15 Composite systems (combining multiple models, modalities, or pipeline stages) require monitoring at both the component and aggregate levels. Degradation in one component can be masked by stability in another when only aggregate output is monitored. A medical imaging system where the vision component becomes less accurate but the text generation component continues producing fluent summaries would not be detected by end-to-end quality monitoring alone. The Technical SME computes performance metrics at the component level and at the system level. Discrepancies between component and aggregate metrics generate alerts. For example, if a component's accuracy has degraded by 5% but the aggregate accuracy has degraded by only 1% (because another component partially compensates), the component-level degradation should still be investigated. The PMM plan specifies thresholds at both levels. Component-level thresholds may be tighter than aggregate thresholds, reflecting the principle that catching problems at the component level is cheaper and faster than waiting for them to manifest in aggregate output. Key outputs Component-level and aggregate-level metric computation Discrepancy alerting between component and aggregate Component thresholds potentially tighter than aggregate Module 12 AISDP documentation Intermediate Representation Monitoring AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 15 Between pipeline stages, data takes on intermediate representations (feature vectors, embedding spaces, intermediate predictions). Monitoring the distribution of these intermediate representations detects problems that neither component-level nor aggregate-level monitoring catches. If a feature engineering component silently changes its output distribution (due to a data source change), the downstream model may continue producing outputs within the expected range (compensating for small input changes) but with degraded accuracy for specific subgroups. The intermediate representation distribution shift is detectable even when aggregate metrics remain within threshold. Where intermediate outputs are not directly interpretable, proxy measures (distribution statistics, anomaly scores, consistency checks between parallel paths) provide detection capability. The Technical SME establishes baseline distributions for intermediate representations at deployment and monitors for shifts on the same schedule as input and output drift monitoring. Key outputs Intermediate representation distribution monitoring Silent upstream change detection Proxy measures for non-interpretable intermediates Baseline establishment at deployment with ongoing tracking Cross-Modal Consistency Checks AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 15 For systems processing multiple input modalities (text and image, structured data and free text), the outputs based on each modality should be consistent. If the text modality suggests one conclusion and the image modality suggests another, the system logs the conflict and the Technical SME monitors its resolution. A persistently high inconsistency rate may indicate that one modality's model has drifted or that the fusion mechanism is not functioning as designed. The Technical SME tracks the cross-modal inconsistency rate as a PMM metric, with thresholds based on the expected disagreement rate observed during validation. The fusion logic itself (whether a weighted ensemble, a learned fusion layer, or a rule-based aggregation) is also monitored. Changes in the relative contribution of each modality to the final output, even when individual modality performance is stable, can indicate drift in the fusion logic or a shift in input patterns changing the effective weighting. Key outputs Cross-modal inconsistency rate tracking Fusion logic contribution monitoring Thresholds based on validation-stage disagreement rates Module 12 AISDP documentation --- ## Computation Layer URL: https://docs.standardintelligence.com/computation-layer Breadcrumb: Operations › PMM › PMM Infrastructure Architecture › Computation Layer Last updated: 28 Feb 2026 Computation Layer AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 72 Metric computation runs on a scheduled basis (hourly, daily, weekly) as defined in the PMM plan . The computation layer must be idempotent and deterministic: running the same computation on the same input data must produce the same result, ensuring metrics are reproducible and auditable. A metric value that cannot be reproduced is useless as compliance evidence. Where metrics depend on ground truth labels that arrive with a delay, the computation pipeline handles late-arriving data and recomputes affected metrics. The recomputation process updates the historical record, and both the estimated and recomputed values are retained (demonstrating the evolution from estimate to confirmed value). The computation layer is typically implemented as scheduled Airflow DAGs, Prefect flows, or similar orchestration tools, with each computation job producing a structured output (JSON or Parquet) that is stored in the monitoring data warehouse and visualised through the dashboard layer. Key outputs Scheduled, idempotent, deterministic metric computation Late-arriving ground truth handling with recomputation Structured output (JSON, Parquet) for audit trail Orchestration via Airflow, Prefect, or equivalent --- ## Continuous Oversight Governance URL: https://docs.standardintelligence.com/continuous-oversight-governance Breadcrumb: Operations › Oversight › Continuous Governance Last updated: 28 Feb 2026 Quarterly Oversight Reviews — Six Agenda Items AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Article 14 The AI Governance Lead convenes a quarterly review examining six areas: monitoring metric trends and threshold adequacy, operator escalation patterns (are operators escalating, and if not, is the pathway working?), break-glass procedure readiness, non-conformity register status, training and certification currency, and external developments affecting the system's risk profile. The review produces documented minutes with action items, owners, and deadlines. Each item is tracked to completion. The quarterly review is the primary governance mechanism ensuring that operational oversight remains effective over time. Key outputs Six-area structured review agenda Documented minutes with tracked action items Governance mechanism for sustained oversight effectiveness Module 7 AISDP evidence Annual Oversight Audit — Six Verification Areas AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Article 17 The Internal Audit Assurance Lead conducts an annual audit testing six areas: whether the monitoring infrastructure is capturing required data, whether escalation pathways are functioning, whether break-glass procedures work as documented, whether training records are current, whether non-retaliation commitments are being honoured, and whether the oversight framework is proportionate to the system's risk profile. Findings are reported to the audit committee. The annual audit provides the independent assurance that the quarterly self-assessment cannot; the AI Governance Lead's review of their own governance framework benefits from external verification. Key outputs Six verification areas tested annually Internal Audit Assurance Lead independence Audit committee reporting Module 7 AISDP evidence Lessons Learned Integration AISDP module(s): Module 7 (Human Oversight), all modules (AISDP updates) Regulatory basis: Article 14 Findings from quarterly reviews, annual audits, break-glass exercises, and actual incidents are documented and integrated into the AISDP. Each finding that results in a change to the system, its documentation, or its operational procedures creates a new AISDP version, maintaining the living document principle. Lessons learned integration closes the governance feedback loop: operational experience improves the oversight framework, the improved framework produces better oversight, and better oversight produces new lessons. A system whose oversight framework is identical to its initial deployment state after two years of operation has failed to learn from its operational experience. Key outputs Findings from reviews, audits, exercises, and incidents integrated AISDP version updates from lessons learned Governance feedback loop closure Living document principle maintained --- ## Corporate Governance Integration URL: https://docs.standardintelligence.com/corporate-governance-integration Breadcrumb: Operations › Oversight › Corporate Governance Last updated: 28 Feb 2026 Board Risk Committee — AI Compliance Reporting AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Article 17 For organisations with material AI exposure, the board receives periodic reporting covering the number and classification of AI systems, compliance status of each high-risk system, serious incident s and resolution status, material regulatory developments, and overall risk posture. Quarterly reporting is appropriate for large portfolios; semi-annually for smaller ones. Board reporting is concise, decision-oriented, and escalates issues requiring board-level authority (risk appetite adjustments, material compliance investments, system withdrawal decisions). The AI Governance Lead prepares the report; the CRO or CTO presents it to the board. Key outputs Board-level AI compliance reporting (quarterly or semi-annual) Portfolio status, incidents, regulatory developments, risk posture Decision-oriented with clear escalation points Module 7 AISDP evidence Audit Committee — AI Compliance Scope & Financials AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Article 17 The audit committee includes AI compliance within its scope. The Internal Audit Assurance Lead's annual oversight audit is reported to the committee, alongside any findings affecting the financial statements: provisions for potential regulatory fines or the carrying value of AI system assets that may be subject to mandatory withdrawal. The audit committee's oversight ensures that the AI compliance programme receives independent board-level scrutiny beyond the AI Governance Lead's self-assessment. Key outputs AI compliance within audit committee scope Annual oversight audit reported to committee Financial statement implications assessed Independent board-level scrutiny Risk Committee — Risk Appetite & Insurance AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Article 17 The risk committee receives the portfolio-level risk register and reviews the organisation's AI risk appetite. Key questions include whether residual risk acceptance criteria are appropriately calibrated, whether AI compliance investment is proportionate to risk exposure, and whether insurance coverage addresses AI-specific liabilities. The risk committee's engagement ensures that AI risk appetite is set at the appropriate organisational level, not delegated to the AI Governance Lead alone. Key outputs Portfolio risk register reviewed by risk committee AI risk appetite set at board level Insurance coverage adequacy assessment Module 7 AISDP evidence Compliance Committee — AI Act Integration AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Article 17 Where the organisation has a compliance committee (common in financial services and healthcare), the AI Governance Lead integrates AI Act compliance into the committee's agenda alongside GDPR , sector-specific regulation, and other obligations. The AI Act-GDPR interaction is particularly relevant; the DPO Liaison's role in the oversight pyramid should be reflected in the compliance committee's reporting structure. Integration avoids the risk of AI compliance operating as an isolated programme disconnected from the organisation's broader compliance framework. Key outputs AI Act integrated into compliance committee agenda Cross-regulatory coordination (GDPR, sector-specific) DPO Liaison reporting structure reflected Compliance programme integration --- ## Critical Tier URL: https://docs.standardintelligence.com/critical-tier Breadcrumb: Operations › PMM › Alerting & Escalation › Critical Tier Last updated: 28 Feb 2026 Critical Tier AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 72 , Article 73 A critical alert indicates that a metric has breached its compliance threshold, a fundamental rights concern has been identified, or multiple warning-level alerts have occurred simultaneously or in rapid succession. Immediate investigation is initiated. The AI Governance Lead is notified within 24 hours. If the breach indicates potential harm, the break-glass procedure is considered. The serious incident reporting process is assessed for applicability: does the event meet the Article 3(49) definition of a serious incident? If so, the Article 73 reporting timeline begins. Critical alerts are routed through the real-time alerting service with guaranteed delivery and acknowledgement tracking. An unacknowledged critical alert escalates automatically to the named alternate and then to the AI Governance Lead. Key outputs Immediate investigation with AI Governance Lead notification within 24 hours Break-glass procedure consideration Serious incident reporting assessment Real-time alerting with acknowledgement tracking and auto-escalation --- ## Cross-Deployer Pattern Analysis URL: https://docs.standardintelligence.com/cross-deployer-pattern-analysis Breadcrumb: Operations › PMM › Deployer Monitoring Support › Cross-Deployer Pattern Analysis Last updated: 28 Feb 2026 Cross-Deployer Pattern Analysis AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 72 Individual deployer reports may appear minor in isolation, but patterns across multiple deployers reveal systemic issues. The PMM team aggregates deployer feedback and analyses it for trends: recurring complaints about specific output types, clusters of anomaly reports from a particular deployment context, or gradual changes in deployer satisfaction metrics. Cross-deployer analysis provides insights that no individual deployer can see. A deployer experiencing a 3% increase in override rates may dismiss it as normal variation; the same 3% increase observed simultaneously across five deployers suggests a systemic model issue. The provider's PMM function is uniquely positioned to perform this aggregation. Cross-deployer pattern findings feed into the risk register and may trigger threshold recalibration, proactive investigation, or system-wide corrective action. They are reported in the quarterly PMM review . Key outputs Cross-deployer feedback aggregation and trend analysis Systemic pattern detection invisible to individual deployers Risk register integration and threshold recalibration trigger Quarterly PMM review reporting --- ## Cross-Regime Interaction (Art. 73(9)) URL: https://docs.standardintelligence.com/cross-regime-interaction-art-739 Breadcrumb: Operations › PMM › Serious Incident Reporting › Cross-Regime Interaction (Art. 73(9)) Last updated: 28 Feb 2026 Cross-Regime Interaction (Art. 73(9)) AISDP module(s): Module 12 ( Post-Market Monitoring ) Regulatory basis: Article 73(9) High-risk AI systems in sectors with existing equivalent reporting obligations have simplified AI Act reporting requirements. Under Article 73(9), where the system is subject to NIS2 (critical infrastructure), DORA (financial services), or medical device vigilance regulations, the AI Act reporting obligation is limited to fundamental rights infringements as defined in Article 3(49) (c); other serious incidents are reported through the sector-specific regime. Organisations operating under multiple reporting regimes map the overlap, identify which incidents trigger which reporting obligations, and ensure internal processes route incidents to the correct authority through the correct channel within the correct timeline. A single incident may trigger reporting under the AI Act, NIS2, GDPR (data breach notification under Article 33), and sector-specific legislation simultaneously. The incident response plan includes a cross-regime reporting matrix documenting, for each incident category, which regimes are triggered, which authorities receive reports, and the applicable timelines. This matrix prevents the organisation from satisfying one reporting obligation while inadvertently missing another. Key outputs Article 73(9) sector-specific simplification applied where eligible Cross-regime reporting matrix in the incident response plan Parallel reporting obligations identified and mapped Module 12 AISDP documentation --- ## Dashboard Layer URL: https://docs.standardintelligence.com/dashboard-layer Breadcrumb: Operations › PMM › PMM Infrastructure Architecture › Dashboard Layer Last updated: 28 Feb 2026 Dashboard Layer AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 72 Dashboards serve two audiences. The operational dashboard provides the Technical SME and operators with real-time or near-real-time visibility into system behaviour: current metric values, alert status, recent trends, and active investigations. The governance dashboard provides the AI Governance Lead and compliance team with summary views: compliance metric status (green/amber/red), alert history and resolution statistics, trend analysis over weeks and months, and the current non-conformity register status. Both dashboards should be accessible without specialist tooling. Grafana (open-source) is the most common choice for operational dashboards; it integrates with Prometheus, Elasticsearch, and most time-series databases. Governance dashboards may be built in Grafana with a simplified view, or in a BI tool (Metabase, Superset) that the compliance team already uses. Dashboards are Module 12 AISDP evidence. Screenshots or exports from the dashboard at defined intervals (quarterly, at minimum) are retained in the evidence register , demonstrating that the monitoring infrastructure was operational and the metrics were within thresholds. Key outputs Operational dashboard for Technical SME and operators (real-time) Governance dashboard for AI Governance Lead (summary views) Grafana or equivalent, accessible without specialist tooling Dashboard exports retained as Module 12 evidence --- ## Data Collection Layer URL: https://docs.standardintelligence.com/data-collection-layer Breadcrumb: Operations › PMM › PMM Infrastructure Architecture › Data Collection Layer Last updated: 28 Feb 2026 Data Collection Layer AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 72 The data collection layer captures inference inputs, outputs, and metadata from the production system. It operates asynchronously to avoid adding latency to the inference path. Common patterns include streaming inference events to a message queue (Kafka, AWS Kinesis, Google Pub/Sub) from which the monitoring pipeline consumes. The collection layer must handle the production system's peak throughput without data loss. Dropped monitoring events create blind spots in the compliance record. The layer should also be independent of the AI system it monitors; a monitoring system that fails when the AI system fails provides no information at the moment it is most needed. For systems processing personal data, the data collection layer must comply with the same data governance requirements ( Module 4 ) as the inference system itself. Monitoring data that includes personal data requires the same retention policies, access controls, and processing justification. Key outputs Asynchronous streaming to message queue (Kafka, Kinesis, Pub/Sub) Peak throughput handling without data loss Independence from the monitored system Data governance compliance for monitoring data --- ## Data Drift Monitoring URL: https://docs.standardintelligence.com/data-drift-monitoring Breadcrumb: Operations › PMM › Data Drift Monitoring Last updated: 28 Feb 2026 Input Drift AISDP module(s): Module 12 ( Post-Market Monitoring ) Regulatory basis: Article 15 (1) The Technical SME compares the distribution of incoming data against the training data distribution using statistical measures. Population Stability Index (PSI), Kolmogorov-Smirnov test statistics, Jensen-Shannon divergence, and Wasserstein distance each capture different aspects of distributional change. Each input feature is monitored individually, and the Technical SME computes composite drift scores. Defined thresholds guide the response: PSI below 0.1 is stable, 0.1 to 0.2 warrants investigation, and above 0.2 requires immediate attention. The thresholds are calibrated during initial deployment based on the feature's natural variability, and tuned through operational experience. Input drift monitoring detects situations where the system is receiving data it was not designed for. A recruitment screening system trained on applications from software engineers that begins receiving applications from financial analysts has experienced input drift that may invalidate the model's predictions, even if the model's behaviour appears superficially normal. Key outputs Per-feature drift monitoring with PSI, KS, JS divergence, Wasserstein distance Composite drift scores across features Three-tier thresholds (stable, investigate, immediate attention) Detection of out-of-distribution deployment contexts Concept Drift AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 15(1) Where ground truth labels become available (even with delay), the Technical SME monitors the relationship between input features and outcomes for changes. Concept drift occurs when the underlying relationship between inputs and outputs changes, meaning the model's learned patterns no longer reflect reality. Concept drift is often more consequential than input drift because it means the model is fundamentally wrong, not merely operating on unfamiliar data. A credit scoring model trained during an economic expansion may experience concept drift during a recession, as the relationship between income levels and default probability changes. The model's predictions remain confident but are miscalibrated for the new economic reality. Detection approaches include monitoring the model's residual error distribution over time (increasing residuals suggest concept drift), comparing the model's feature importance ranking against a baseline (a change in which features are most predictive suggests a concept shift), and applying drift detection methods on the input-output joint distribution. Where concept drift is detected, model retraining or recalibration is typically required. Key outputs Input-output relationship monitoring for concept drift Residual error distribution tracking Feature importance stability analysis Model retraining or recalibration trigger Feature Drift AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 15(1) The Technical SME monitors individual feature distributions for shifts that may not be captured by aggregate drift measures. A single feature shifting significantly while others remain stable can cause localised performance degradation that aggregate metrics miss. Feature drift often has identifiable root causes: an upstream data source changes its encoding or scale, a data pipeline introduces a transformation error, a categorical feature acquires a new value that the model was not trained on, or a seasonal pattern affects a specific feature. Identifying the specific drifted feature (rather than observing aggregate drift) accelerates root cause analysis and remediation. Feature-level drift monitoring generates per-feature drift scores on the computation schedule defined in the PMM plan . Features are ranked by drift magnitude, and the top-N drifted features are flagged for investigation. The monitoring should also track feature availability: a feature that becomes missing for a significant proportion of inputs (due to an upstream data source failure) may cause the model to use a default or imputed value, silently degrading performance. Key outputs Per-feature drift scores on scheduled computation Root cause acceleration through feature-level identification Feature availability monitoring for missing data detection Top-N drifted feature flagging for investigation --- ## Data Lifecycle Closure Record URL: https://docs.standardintelligence.com/data-lifecycle-closure-record Breadcrumb: Operations › End-of-Life › Artefacts › Data Lifecycle Closure Record Last updated: 28 Feb 2026 Data Lifecycle Closure Record AISDP module(s): Module 4 ( Data Governance ) Regulatory basis: Article 18 , GDPR Article 5 (1)(e) The data lifecycle closure record documents per-data-category decisions: what was deleted, what was retained, the legal justification for each decision, and the deletion verification. The DPO Liaison's signed attestation confirms that all personal data scheduled for deletion was removed from all storage locations. The record includes the post- decommission data subject rights process and the responsible person. Key outputs Per-category retention/deletion schedule with justification DPO Liaison signed deletion verification attestation Post-decommission data subject rights process documented Module 4 AISDP evidence --- ## Decommission Record URL: https://docs.standardintelligence.com/decommission-record Breadcrumb: Operations › End-of-Life › Artefacts › Decommission Record Last updated: 28 Feb 2026 Decommission Record AISDP module(s): Module 12 ( Post-Market Monitoring ) Regulatory basis: Article 18 The decommission record consolidates the seven workstream outputs into a single document: the end-of-life plan , deployer notification records, technical shutdown log, data lifecycle closure record, downstream decision monitoring plan, documentation finalisation confirmation, and regulatory notification records. It provides a comprehensive audit trail of the decommission process from trigger to completion. For organisations using the manual procedural alternative, the decommission record is the completed checklist with sign-offs at each step. Key outputs Seven-workstream consolidated audit trail Trigger-to-completion documentation Manual alternative: completed checklist with sign-offs Module 12 AISDP evidence --- ## Dependency Health URL: https://docs.standardintelligence.com/dependency-health Breadcrumb: Operations › PMM › Operational Monitoring › Dependency Health Last updated: 28 Feb 2026 Dependency Health AISDP module(s): Module 12 ( Post-Market Monitoring ) Regulatory basis: Article 15 The AI system depends on upstream services (data sources, feature stores, external APIs) and downstream consumers (deployer applications, reporting systems). Dependency health monitoring tracks the availability, latency, and error rates of these external touchpoints. A degradation in an upstream data source can corrupt the system's inputs without triggering any model-level alert. Dependency monitoring covers every external integration documented in the system architecture ( Module 3 ), with alerting thresholds calibrated to the dependency's criticality. The failure modes for each dependency, and the system's expected behaviour when a dependency is unavailable, are documented by the Technical SME in the PMM plan and tested periodically through chaos testing. For GPAI model dependencies (where the system relies on an external model API), dependency health monitoring extends to model behaviour: response quality, latency, and output distribution should be tracked alongside availability. Key outputs Per-dependency availability, latency, and error rate monitoring Criticality-calibrated alerting thresholds Failure mode documentation and periodic testing GPAI model dependency behaviour monitoring --- ## Deployer Feedback Channels URL: https://docs.standardintelligence.com/deployer-feedback-channels Breadcrumb: Operations › PMM › Deployer Monitoring Support › Deployer Feedback Channels Last updated: 28 Feb 2026 Deployer Feedback Channels AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 72 Structured feedback channels make it easy for deployers to report issues. A dedicated web portal or API endpoint collects incident reports, anomaly observations, and general feedback in a structured format. Required fields capture the deployer's identity, affected system version, observation date and time, observed behaviour description, expected behaviour, and supporting evidence. Feedback triage classifies incoming reports by severity within defined timeframes: 24 hours for incident reports, five working days for general feedback. Reports suggesting potential serious incident s are escalated immediately to the incident response process. The engineering team routes performance reports to the PMM team. Feature requests and general feedback are logged for product consideration. The feedback loop must close: deployers who report issues receive acknowledgement, investigation outcome summaries (within confidentiality bounds), and notification of corrective actions affecting them. A feedback loop that goes dark erodes trust and discourages future reporting, weakening the PMM system's detection capability. Key outputs Structured reporting portal or API with mandatory fields Severity-based triage (24 hours for incidents, 5 days for general) Closed feedback loop with deployer acknowledgement Module 12 AISDP documentation --- ## Deployer Monitoring Support URL: https://docs.standardintelligence.com/deployer-monitoring-support Breadcrumb: Operations › PMM › Deployer Monitoring Support Last updated: 28 Feb 2026 Instructions for Use Guidance (Art. 26(4)) Deployer Feedback Channels Limited-Visibility Deployments — Telemetry Agents Limited-Visibility Deployments — Callback APIs Limited-Visibility Deployments — Synthetic Monitoring Periodic Deployer Audits & Satisfaction Surveys Cross-Deployer Pattern Analysis --- ## Deployer Notification Records URL: https://docs.standardintelligence.com/deployer-notification-records Breadcrumb: Operations › End-of-Life › Artefacts › Deployer Notification Records Last updated: 28 Feb 2026 Deployer Notification Records AISDP module(s): Module 8 (Transparency), Module 11 (Deployer Obligations) Regulatory basis: Article 13 , Article 20 Deployer notification records document every notification sent: the date, recipient, content, delivery method, delivery confirmation, and any deployer response or acknowledgement. For API-served systems, the records include the date sunset headers were added and the date 410 responses began. For embedded systems, the records include software update distribution logs. Key outputs Per-deployer notification with delivery confirmation API deprecation sequence records Deployer acknowledgement tracking Module 8 and Module 11 AISDP evidence --- ## Detection Infrastructure URL: https://docs.standardintelligence.com/detection-infrastructure Breadcrumb: Operations › PMM › Serious Incident Reporting › Detection Infrastructure Last updated: 28 Feb 2026 Detection Infrastructure AISDP module(s): Module 12 ( Post-Market Monitoring ) Regulatory basis: Article 73 The PMM system is configured to detect events that could constitute serious incidents. Automated alerts trigger when the safety monitoring layer flags outputs suggesting health or safety impact, when deployers report unexpected or harmful outcomes through established channels, when complaint volumes spike or patterns emerge suggesting systematic harm, and when monitoring metrics breach critical thresholds in ways suggesting real-world impact. Critical-severity alert thresholds corresponding to potential serious incidents should be more sensitive than standard PMM thresholds: a fairness metric breach exceeding a magnitude that could affect fundamental rights, a performance degradation in a safety-critical function that could affect health, a security breach that could compromise critical infrastructure. It is better to triage a false positive than to miss a genuine serious incident. Detection also depends on deployer feedback channels and complaint analysis. A pattern of similar complaints from affected persons may indicate a systematic problem that constitutes a serious incident even if no individual complaint describes severe harm. Key outputs PMM alert rules mapped to Article 3(49) criteria Critical thresholds more sensitive than standard PMM thresholds Deployer feedback and complaint pattern analysis as detection sources False positive tolerance preferred over missed detection --- ## End-of-Life Artefacts URL: https://docs.standardintelligence.com/end-of-life-artefacts Breadcrumb: Operations › End-of-Life › Artefacts Last updated: 28 Feb 2026 End-of-Life Plan Deployer Notification Records Technical Shutdown Log Data Lifecycle Closure Record Final AISDP Version Decommission Record Post-Decommission Monitoring Schedule --- ## End-of-Life Plan URL: https://docs.standardintelligence.com/end-of-life-plan Breadcrumb: Operations › End-of-Life › Artefacts › End-of-Life Plan Last updated: 28 Feb 2026 End-of-Life Plan AISDP module(s): Module 12 ( Post-Market Monitoring ) Regulatory basis: Article 16 The end-of-life plan documents the trigger, the governance approval, the seven workstreams with assigned owners and timelines, the stakeholder impact assessment, and the milestone schedule. The plan is prepared, reviewed, and approved per the governance process. It is retained as AISDP evidence and serves as the master coordination document for all decommission activities. Key outputs Seven-workstream decommission plan Stakeholder impact assessment Milestone schedule with owners AISDP evidence --- ## End-of-Life Planning URL: https://docs.standardintelligence.com/end-of-life-planning Breadcrumb: Operations › End-of-Life › Planning Last updated: 28 Feb 2026 Lead Times by Trigger Type AISDP module(s): Module 12 ( Post-Market Monitoring ) Regulatory basis: Article 16, Article 79 Lead times vary by trigger type. Planned retirement: six months or more, permitting sequential workstream execution, phased deployer transition, and thorough documentation finalisation. Voluntary compliance withdrawal: weeks to months, depending on the compliance gap severity; critical safety issues may compress to days ( break-glass followed by structured withdrawal). Mandated withdrawal: 15 working days or less (authority may set a shorter timeframe), requiring parallel workstream execution and pre-prepared templates. The AI Governance Lead initiates planning as soon as a trigger is identified. For planned retirements, the planning timeline begins at T-6 months. For voluntary withdrawals, planning begins at the governance decision to withdraw. For mandated withdrawals, planning begins at the receipt of the authority's order, and the organisation draws on pre-prepared plans. Pre-prepared end-of-life plan templates significantly reduce response time for unplanned withdrawals. The template includes the seven workstreams, pre-assigned responsibilities, and pre-drafted deployer notification text. Key outputs Three lead time categories (6+ months, weeks-months, 15 working days) Immediate planning initiation on trigger identification Pre-prepared templates for unplanned withdrawals Module 12 AISDP documentation Stakeholder Impact Assessment AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 13 , Article 20 Before executing any decommission activity, the AI Governance Lead assesses the impact on all stakeholders. Deployers need time to transition to alternative systems or manual processes. Affected persons whose decisions remain in effect need clarity on how those decisions will be reviewed or supported after the system is decommissioned. Internal teams that depend on the system's outputs need replacement workflows. The stakeholder impact assessment identifies each stakeholder group, the nature and severity of the impact, the mitigation measures available, and the timeline required for those mitigations. For mandated withdrawals where the timeline is compressed, the assessment prioritises the most severely affected stakeholders (typically affected persons and deployers providing critical services). The assessment informs the deployer transition workstream and the downstream decision monitoring plan. Key outputs Per-stakeholder-group impact identification Mitigation measures and timeline requirements Prioritisation for compressed timelines Input to deployer transition and downstream monitoring Timeline & Milestones (T-6 to T+0) AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 16 The plan sets a target decommission date and works backward to define milestones for each workstream. A planned retirement follows a typical timeline: deployer announcement at T-6 months, deployer transition support beginning at T-5, new deployments blocked at T-3, inference endpoints deprecated at T-1, full shutdown at T-0, and post-decommission monitoring beginning immediately after shutdown. A mandated withdrawal compresses this into 15 working days or less. Several workstreams execute in parallel: deployer notification and technical shutdown preparations proceed simultaneously, with data lifecycle closure and documentation finalisation overlapping the shutdown itself. Each milestone has a responsible owner (from the ten organisational roles), a target date, and a completion criterion. The AI Governance Lead tracks milestone progress and escalates delays that threaten the decommission deadline. Key outputs Backward-planned milestones from target decommission date Standard timeline (T-6 to T+0) for planned retirement Compressed parallel execution for mandated withdrawal Per-milestone owner, target date, and completion criterion Plan Governance (Prepared, Reviewed, Approved) AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 16 The end-of-life plan is prepared by the AI Governance Lead, reviewed by the Legal and Regulatory Advisor (who assesses the legal implications, particularly for mandated withdrawals and Article 20 notification obligations), and approved by the appropriate governance authority. Planned retirements are approved by the AI Governance Lead. Mandated withdrawals, which may carry enforcement consequences, require executive leadership approval. The plan covers seven workstreams, each with a responsible owner, timeline, dependencies, and completion criteria. The plan itself is retained as end-of-life evidence in the AISDP. For mandated withdrawals, the plan may be prepared and approved within hours of receiving the authority's order, drawing on the pre-prepared templates. The Legal and Regulatory Advisor's review is expedited but not skipped; even under time pressure, the legal implications of the decommission actions must be assessed. Key outputs AI Governance Lead preparation, Legal review, governance approval Seven workstreams with owners, timelines, and dependencies Executive approval for mandated withdrawals Plan retained as AISDP evidence --- ## End-of-Life Triggers URL: https://docs.standardintelligence.com/end-of-life-triggers Breadcrumb: Operations › End-of-Life › Triggers Last updated: 28 Feb 2026 Planned Retirement (Commercial, Technical, Strategic) AISDP module(s): Module 12 ( Post-Market Monitoring ) Regulatory basis: Article 16 A system reaches planned retirement when it has completed its intended operational life. Commercial factors include product line discontinuation or replacement by a successor system. Technical factors include the underlying technology stack becoming unmaintainable or the model architecture being superseded. Strategic factors include the organisation exiting the market segment the system served. Planned retirement allows the longest lead time for a structured decommission . The AI Governance Lead should begin planning at least six months before the target date. The extended timeline permits deployer transition support, phased API deprecation, and thorough data lifecycle closure without the time pressure of compliance-driven or mandated withdrawals. The AI Governance Lead monitors for planned retirement triggers through the portfolio review process, where systems approaching the end of their useful life are identified and decommission planning is initiated. Key outputs Three planned retirement drivers (commercial, technical, strategic) Longest lead time for structured decommission Six-month minimum planning horizon Portfolio review as trigger identification mechanism Voluntary Withdrawal (Non-Conformities, Risk, Drift, Fairness/Safety) AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 20 The organisation determines that the system cannot achieve or maintain conformity with the AI Act. This may arise from an internal assessment finding irremediable non-conformities, a risk reassessment elevating residual risk above the acceptability threshold with no viable mitigation, a substantial modification assessment revealing the system has drifted beyond its documented intended purpose without a feasible path to realignment, or a PMM finding revealing systemic fairness or safety issues beyond remediation within acceptable cost or time constraints. Voluntary withdrawal is a compliance-preserving decision: the organisation recognises that continued operation presents greater regulatory risk than withdrawal. Under Article 20, the provider must immediately take corrective action to bring the system into conformity, or withdraw, disable, or recall it. Deployers, distributors, authorised representative s, and importers must be informed. The voluntary withdrawal timeline depends on the severity of the compliance gap. A critical safety issue may require immediate suspension ( break-glass activation) followed by a structured withdrawal over weeks. A documentation gap may allow a more extended timeline. Key outputs Four voluntary withdrawal triggers (non-conformities, risk, drift, fairness/safety) Compliance-preserving decision framework Article 20 corrective action and notification obligations Timeline calibrated to compliance gap severity Mandated Withdrawal/Recall (Art. 79, 15 Working Days) AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 79 A market surveillance authority orders the system's withdrawal under Article 79. This pathway allows the least time and imposes the most stringent obligations. The authority may prescribe a specific timeframe; Article 79(2) sets a backstop of 15 working days. Where the non-compliance is not restricted to one member state, the authority informs the Commission and other member states, potentially triggering parallel enforcement actions. Failure to comply with a withdrawal order is an aggravating factor in penalty determination under Article 99. The 15 working-day timeline means that the decommission workstreams must execute in parallel rather than sequentially. Pre-prepared end-of-life plan s and templates are essential; an organisation that begins planning only after receiving the order will struggle to meet the deadline. Mandated withdrawal requires executive-level governance : the AI Governance Lead escalates immediately to the CEO/CTO with the Legal and Regulatory Advisor assessing enforcement consequences. Key outputs 15 working-day backstop under Article 79(2) Parallel workstream execution required Failure to comply as aggravating factor under Article 99 Executive-level governance with immediate escalation --- ## Error Rate Tracking by Type URL: https://docs.standardintelligence.com/error-rate-tracking-by-type Breadcrumb: Operations › PMM › Operational Monitoring › Error Rate Tracking by Type Last updated: 28 Feb 2026 Error Rate Tracking by Type AISDP module(s): Module 12 ( Post-Market Monitoring ) Regulatory basis: Article 15 Errors are classified by type, each with different compliance implications. Input validation failures: the system correctly rejected malformed input. Inference failures: the model failed to produce an output. Post-processing failures: the output was produced but could not be delivered. Timeout failures: the inference took too long and was abandoned. A rising input validation failure rate may indicate a change in the data source or upstream pipeline. A rising inference failure rate may indicate a model or infrastructure problem. Each category has its own alert threshold defined in the PMM plan . The error taxonomy is documented, and the Technical SME ensures that every error is classified into exactly one category, with no errors falling through to an uncategorised bucket. Error rate tracking should also distinguish between errors that are visible to deployers (failed API responses, error messages) and errors that are silently handled (fallback values, default responses). Silent errors are more dangerous because the deployer has no signal that the system's output may be unreliable. Key outputs Four-category error taxonomy (validation, inference, post-processing, timeout) Per-category alert thresholds in PMM plan Silent error detection (fallback values, defaults) Error taxonomy documentation --- ## Escalation & Override Logs URL: https://docs.standardintelligence.com/escalation-and-override-logs Breadcrumb: Operations › Oversight › Artefacts › Escalation & Override Logs Last updated: 28 Feb 2026 Escalation & Override Logs AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Article 14 Escalation and override logs capture every operator escalation (date, operator, case, reason, resolution) and every override (date, operator, case, system recommendation, operator decision, reason). These logs provide the raw data for human oversight monitoring and are retained as Module 7 evidence. Key outputs Per-escalation and per-override detailed records Raw data for human oversight monitoring metrics Operator-level and aggregate analysis capability Module 7 AISDP evidence --- ## Escalation Path Design URL: https://docs.standardintelligence.com/escalation-path-design Breadcrumb: Operations › PMM › Alerting & Escalation › Escalation Path Design Last updated: 28 Feb 2026 Escalation Path Design AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 72 The AI Governance Lead documents and rehearses the escalation path, ensuring accessibility to every person who may need to initiate it. For each severity tier, the path defines who is notified, through which channel, within what timeframe, and what actions are expected. Escalation paths account for out-of-hours scenarios (on-call rotation with defined response SLAs), key person unavailability (named alternates for every role), and multi-jurisdiction incidents (where different authorities in different time zones need notification by the Legal and Regulatory Advisor). The escalation path is documented in the PMM plan and tested annually through tabletop exercises. Each escalation event is logged: the alert that triggered it, the persons notified, the response time, the actions taken, and the resolution. This log provides evidence of a functioning escalation framework and identifies areas where the framework needs improvement. Key outputs Per-tier escalation path with channels, timeframes, and expected actions Out-of-hours, alternate-person, and multi-jurisdiction provisions Annual tabletop exercise testing Escalation event logging for framework improvement --- ## Escalation Without Reprisal URL: https://docs.standardintelligence.com/escalation-without-reprisal Breadcrumb: Operations › Oversight › Escalation Without Reprisal Last updated: 28 Feb 2026 Whistleblower Protection (Directive 2019/1937) AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Directive 2019/1937, Article 14 The organisation extends its existing whistleblower protection mechanisms under Directive (EU) 2019/1937 to cover AI compliance concerns. This ensures that individuals at every level of the oversight pyramid can report concerns about AI system behaviour, compliance posture, or governance effectiveness without fear of retaliation. The protection covers concerns about the system's behaviour (potential harm, discrimination, opacity), the organisation's compliance processes (inadequate documentation, superficial assessment, ignored non-conformities), and the governance framework itself (suppressed escalations, modified thresholds without approval, inadequate resources). Protection extends to both internal and external reporting. The Legal and Regulatory Advisor ensures the whistleblower framework's AI extension complies with national implementing legislation in each deployment jurisdiction, as the Directive has been transposed differently across member states. Key outputs Directive 2019/1937 protection extended to AI compliance concerns Coverage of system behaviour, process, and governance concerns National implementation variations addressed per jurisdiction Module 7 AISDP documentation Reporting Channels AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Directive 2019/1937, Article 14 Four reporting channels are available. Confidential reporting to the AI Governance Lead provides the primary internal pathway. Anonymous reporting through a dedicated channel (hotline, online portal) ensures that individuals who fear identification can still report. Direct reporting to the Internal Audit Assurance Lead bypasses the AI Governance Lead, covering concerns about the Lead's own conduct. External reporting to the national competent authorit y provides a pathway outside the organisation entirely. Each channel is documented, communicated during training, and tested periodically. The organisation logs the use of each channel (without identifying anonymous reporters) to track reporting volume and channel effectiveness. Key outputs Four reporting channels (confidential, anonymous, internal audit, external NCA) Documented and communicated during training Periodic testing of channel functionality Volume tracking without anonymous reporter identification Cultural Reinforcement AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Article 14 Formal policies are necessary but insufficient. The organisation actively cultivates a culture in which AI concern reporting is valued. This means leadership publicly acknowledging and responding to reported concerns, recognising individuals who identify genuine problems, including AI concern reporting in performance evaluation criteria (positively, not punitively), and conducting regular training that normalises concern reporting as a professional responsibility. Cultural reinforcement is the difference between a compliance framework that exists on paper and one that functions in practice. An operator who observes harmful system behaviour must believe, based on experience and organisational signals, that reporting the concern will be welcomed rather than resented. Key outputs Leadership acknowledgement and recognition of concern reporting Performance evaluation integration (positive framing) Regular training normalising reporting as professional responsibility Cultural reinforcement as operational effectiveness mechanism Documented Response to Every Escalation AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Article 14 Every escalation receives a documented response within a defined timeframe. The response addresses the substance of the concern, identifies actions taken or planned, explains the rationale if no action is taken, and confirms that no retaliation has occurred or will occur. The escalation and response are retained in the documentation repository. Escalations that receive no response, or responses that dismiss the concern without substantive engagement, erode trust in the escalation framework. The quarterly oversight review examines the escalation response rate and quality as governance health indicators. Key outputs Documented response to every escalation within defined timeframe Substantive engagement with the concern raised Retained in documentation repository Response rate and quality reviewed quarterly Annual Audit Verification AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Article 17 The Internal Audit Assurance Lead verifies the non-retaliation framework's effectiveness annually. Verification includes reviewing escalation logs for patterns suggesting suppression, conducting confidential interviews with a sample of oversight pyramid personnel, assessing whether reported concerns received documented responses, and checking for any adverse employment actions following escalation events. Findings are reported to the audit committee. Deficiencies in the non-retaliation framework represent a systemic risk to the entire oversight programme: if people do not escalate, the oversight pyramid fails from the bottom up. Key outputs Annual audit of non-retaliation framework effectiveness Confidential interviews with oversight pyramid personnel Escalation pattern analysis for suppression indicators Audit committee reporting --- ## Evidence Preservation (Art. 73(6)) URL: https://docs.standardintelligence.com/evidence-preservation-art-736 Breadcrumb: Operations › PMM › Serious Incident Reporting › Evidence Preservation (Art. 73(6)) Last updated: 28 Feb 2026 Evidence Preservation (Art. 73(6)) AISDP module(s): Module 12 ( Post-Market Monitoring ) Regulatory basis: Article 73(6) Article 73(6) explicitly prohibits the provider from altering the AI system in a way that could affect subsequent evaluation of the incident's causes, prior to informing the competent authorities. The engineering team preserves the system's state at the time of the incident: model version, configuration, input data, output data, feature values, and all relevant logs. If the system is operating in a way that continues to cause harm, the break-glass procedure should be activated to stop processing. The system's state must be captured before or simultaneously with the stop action. Automated snapshot scripts trigger on any critical-severity alert, capturing the current model state (version, configuration, parameters), inference logs for the incident period, monitoring metrics, the data pipeline state, and system configuration. The snapshot is written to immutable storage within minutes of the trigger. This evidence serves two purposes: the regulatory notification (which must describe the incident, the system involved, and the corrective actions taken) and any subsequent investigation by the competent authority. Key outputs System state preserved before any alteration Automated snapshot scripts on critical alert trigger Immutable storage for preserved evidence Break-glass activation with simultaneous state capture --- ## Fairness Monitoring URL: https://docs.standardintelligence.com/fairness-monitoring Breadcrumb: Operations › PMM › Fairness Monitoring Last updated: 28 Feb 2026 Fairness Metrics AISDP module(s): Module 12 ( Post-Market Monitoring ) Regulatory basis: Article 10(2)(f) Selection rate ratios, equalised odds, predictive parity, and calibration within groups are computed by the Technical SME on production data at defined intervals. Selection rate ratios measure whether the system's positive outcome rate differs across protected groups. Equalised odds measures whether the true positive and false positive rates are consistent across groups. Predictive parity measures whether the positive predictive value is consistent. Calibration within groups measures whether predicted probabilities correspond to observed frequencies for each group. These metrics are computed for single protected characteristics and, where cell sizes are sufficient, for intersectional subgroups. The AISDP declares the primary fairness metrics and the minimum acceptable ratios; PMM monitors compliance with these declarations. Fairness metric computation in production uses the same methodology as the pre-deployment fairness evaluation, ensuring that production fairness is directly comparable to the validated baseline. Key outputs Four fairness metric families computed on production data Single-characteristic and intersectional computation AISDP-declared minimum ratios monitored Methodology consistent with pre-deployment evaluation Computation Intervals & Intersectional Subgroups AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 10(2)(f) Fairness metric computation intervals depend on the system's volume and risk profile. High-volume systems (processing thousands of decisions daily) may compute fairness metrics weekly; lower-volume systems may require monthly intervals to accumulate sufficient data for statistically meaningful computation. The PMM plan documents the computation interval for each metric with the statistical justification. Intersectional subgroup analysis (for example, age and gender combined, or ethnicity and disability status combined) requires larger sample sizes than single-characteristic analysis. The PMM plan specifies the minimum cell size below which intersectional metrics are flagged as inconclusive. A common threshold is 30 observations per cell, though higher thresholds may be appropriate for metrics with higher variance. Where intersectional analysis is not feasible in production due to sample size constraints, the Technical SME compensates through periodic batch analysis on accumulated data, synthetic data augmentation for sensitivity analysis, or targeted deployer surveys. Key outputs Computation interval per metric with statistical justification Minimum cell size for intersectional analysis (typically 30+) Inconclusive flagging for insufficient samples Compensating strategies for small intersectional cells Compensating for Missing Demographic Data AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 10(2)(f), Article 10(5) In many deployment contexts, demographic data about affected persons is not available to the provider. Compensating strategies include proxy-based estimation (with documented methodology and accuracy bounds), periodic deployer surveys or sampling studies, external benchmark comparison (comparing the system's output distributions against known population distributions), and structured feedback analysis (examining complaint and appeal patterns for demographic signals). Each compensating strategy has limitations that the Technical SME documents. Proxy-based estimation introduces measurement error; the accuracy bounds must be reported alongside the estimated fairness metrics. Deployer surveys depend on deployer cooperation and may be subject to selection bias. External benchmarks may not reflect the system's specific deployment population. Feedback analysis captures only the concerns of persons who complain, which may systematically exclude the most affected groups. The AISDP documents which compensating strategy is applied, its known limitations, and the confidence level of the resulting fairness estimates. Where no reliable fairness monitoring is achievable for a specific protected characteristic, this gap is documented as a residual risk in the risk register . Key outputs Four compensating strategies with documented limitations Accuracy bounds reported alongside estimated metrics Residual risk documentation where no reliable monitoring is achievable Module 12 AISDP documentation --- ## Feature Flag Pattern URL: https://docs.standardintelligence.com/feature-flag-pattern Breadcrumb: Operations › Oversight › Break-Glass Procedures › Feature Flag Pattern Last updated: 28 Feb 2026 Feature Flag Pattern AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Article 14 (4)(e) Feature flags (LaunchDarkly, Unleash, Flagsmith) provide a clean implementation pattern for the break-glass mechanism. A feature flag named "system-active" defaults to true. The inference service checks this flag before processing each request. When break-glass is activated, the flag is set to false, and all subsequent inference requests return a suspension notice. Feature flags propagate globally within seconds; LaunchDarkly's typical propagation time is under 200 milliseconds. The flag change is logged with the identity of the person who changed it and the timestamp, providing audit evidence. Feature flags also enable partial break-glass: specific model components or decision pathways can be disabled independently while the rest of the system continues to operate. The feature flag configuration is managed alongside the system's version-controlled configuration. Key outputs Feature flag "system-active" as break-glass mechanism Sub-200ms global propagation Logged with identity and timestamp Partial break-glass capability for specific components --- ## Feedback Loop to Governance URL: https://docs.standardintelligence.com/feedback-loop-to-governance Breadcrumb: Operations › PMM › Governance & Maintenance › Feedback Loop to Governance Last updated: 28 Feb 2026 Feedback Loop Metrics (Meta-Monitoring) AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 72 The Technical SME monitors the feedback loop itself. Key metrics include time from PMM finding to decision (how quickly the team responds), time from decision to completed fix (how quickly engineering executes), the share of PMM findings resulting in a system change versus those accepted as within tolerance, and the share of fixes that successfully resolve the originating finding versus those requiring further work. These meta-metrics provide the AI Governance Lead with visibility into the PMM system's operational effectiveness. A feedback loop with a median response time of six months is materially different from one with a two-week median, and the difference affects the organisation's ability to maintain compliance under Article 72. Meta-metrics are reported at the quarterly PMM review . Sustained deterioration in feedback loop responsiveness triggers a process review: is the PMM action backlog overloaded? Are engineering resources sufficient? Is the decision authority framework functioning? Key outputs Four meta-metrics (response time, fix time, change rate, fix success rate) Quarterly reporting at PMM review Sustained deterioration triggers process review Module 12 AISDP documentation --- ## Final AISDP Version URL: https://docs.standardintelligence.com/final-aisdp-version Breadcrumb: Operations › End-of-Life › Artefacts › Final AISDP Version Last updated: 28 Feb 2026 Final AISDP Version AISDP module(s): All modules Regulatory basis: Article 11 , Article 18 The final AISDP version incorporates the decommissioning record across all relevant modules. It is the definitive compliance document for the system's entire lifecycle, from inception through operation to decommission. The version number, date, and "final — system decommissioned" status are recorded in Module 1 . The final version is archived with the evidence pack for the ten-year retention period. Key outputs Complete lifecycle documentation in single final version Decommissioning record integrated across modules "Final — system decommissioned" status Ten-year archival with evidence pack --- ## Fresh Eyes Review Reports URL: https://docs.standardintelligence.com/fresh-eyes-review-reports Breadcrumb: Operations › Oversight › Artefacts › Fresh Eyes Review Reports Last updated: 28 Feb 2026 Fresh Eyes Review Reports AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Article 14 Fresh eyes review reports document each review: the reviewer (who was not involved in daily operations), the date, the scope of material reviewed, the findings (including normalised issues surfaced), and the recommendations. Findings are entered into the Non-Conformity Register and tracked to completion. Key outputs Per-review documentation with scope and findings Normalised issues surfaced by independent reviewer Non-Conformity Register entries for findings Module 7 AISDP evidence --- ## Human Oversight Monitoring URL: https://docs.standardintelligence.com/human-oversight-monitoring Breadcrumb: Operations › PMM › Human Oversight Monitoring Last updated: 28 Feb 2026 Override Rate Analysis AISDP module(s): Module 7 (Human Oversight), Module 12 ( Post-Market Monitoring ) Regulatory basis: Article 14 Override rates carry compliance significance in both directions. A consistently low rate (below 2–5%) may indicate automation bias: operators accepting recommendations without meaningful scrutiny. A consistently high rate (above 20–30%) may indicate the system is underperforming or operators disagree with the model's logic. Neither extreme is healthy. The PMM plan defines an expected override rate range based on the system's documented accuracy and the decision context. Both upper and lower threshold breaches generate alerts. Override analysis is disaggregated by operator (identifying individuals needing additional training), by decision type (identifying categories where the system underperforms), by time period (detecting trends such as declining override rates indicating growing automation bias), and by deployer (identifying deployment-specific issues). A declining override rate trend over months is a particularly significant signal: as operators become accustomed to the system, they may progressively reduce their engagement, eroding the human oversight the AISDP documents. Key outputs Expected override rate range with upper and lower thresholds Disaggregation by operator, decision type, time period, and deployer Declining trend detection for automation bias Module 7 and Module 12 AISDP evidence Review Time Analysis AISDP module(s): Module 7 (Human Oversight), Module 12 (Post-Market Monitoring) Regulatory basis: Article 14 The time operators spend reviewing each case before accepting or overriding is a proxy for the depth of human engagement. Operators who consistently review cases in under five seconds for decisions documented as requiring substantive analysis are unlikely to be performing meaningful oversight. The Technical SME monitors review time against a minimum threshold defined in the PMM plan, based on the complexity of the decision and the information the operator must evaluate. Review time distribution is as informative as the average. A bimodal distribution, where most cases are reviewed in seconds but a small proportion take several minutes, may indicate that operators skim the majority and only engage deeply with cases that trigger an intuitive concern. This pattern leaves the organisation exposed to errors in the "skimmed" cases. The minimum review time threshold is calibrated through operational research: domain experts assess the minimum time needed to meaningfully review the information presented in the oversight interface for a typical case. This calibration should be documented and reviewed annually. Key outputs Minimum review time threshold based on decision complexity Distribution shape analysis (bimodal detection) Calibration through domain expert assessment Module 7 and Module 12 AISDP evidence Escalation Monitoring AISDP module(s): Module 7 (Human Oversight), Module 12 (Post-Market Monitoring) Regulatory basis: Article 14 The Technical SME tracks escalation frequency over time and disaggregated by escalation reason. A decline in escalation frequency may indicate operators are more confident, the system is improving, or escalation is perceived as burdensome and is being avoided. The PMM plan defines a baseline escalation rate and requires investigation of sustained deviations. Escalation reasons are categorised: uncertainty about the output, suspected error, novel input pattern, ethical concern, system malfunction. Trend analysis by category provides more actionable insight than aggregate frequency. A declining escalation rate for "suspected error" is positive (suggesting improved model accuracy); a declining rate for "ethical concern" may warrant investigation (are ethical concerns declining, or is the escalation pathway for ethical concerns not working?). The escalation monitoring results feed into the quarterly PMM review , where the AI Governance Lead assesses whether the escalation framework is functioning as intended. Key outputs Escalation frequency tracking over time Per-reason categorisation with trend analysis Baseline escalation rate with deviation investigation Quarterly governance review integration Automation Bias Detection AISDP module(s): Module 7 (Human Oversight), Module 12 (Post-Market Monitoring) Regulatory basis: Article 14 Beyond override rates and review times, more granular indicators detect automation bias. If the system presents a confidence score alongside its recommendation, operators who override high-confidence recommendations at the same rate as low-confidence ones are likely not using the confidence information. If the system provides explanatory features (the top contributing factors to the recommendation), operators who override without engagement patterns suggesting they have read the explanation may be accepting recommendations on face value. The Technical SME computes these behavioural indicators where the human oversight interface captures sufficient interaction data. The correlation between system confidence and override rate should be positive: operators should override less when the system is more confident and more when confidence is lower. A flat correlation indicates that operators are not engaging with the confidence signal. Automation bias indicators are reported in the quarterly PMM review and tracked over time. A rising automation bias trend triggers operator retraining, interface redesign (to make confidence and explanation information more prominent), or workload adjustment. Key outputs Confidence-override correlation analysis Explanation engagement pattern detection Quarterly reporting with trend tracking Remediation through retraining, interface redesign, or workload adjustment Operator Wellbeing & Workload Parameters AISDP module(s): Module 7 (Human Oversight), Module 12 (Post-Market Monitoring) Regulatory basis: Article 14 Article 14 compliance depends on operators who are alert, motivated, and capable of exercising independent judgement. Monitoring tracks workload indicators: cases per operator per shift, shift duration, break frequency, and overtime hours. Cognitive fatigue degrades oversight quality. The PMM plan defines maximum workload parameters based on decision complexity and the organisation's assessment of sustainable oversight capacity. An operator who has reviewed three hundred cases in a single shift is less likely to catch a subtle error in case three hundred and one than an operator who has reviewed thirty. Workload thresholds trigger alerts when exceeded, and the AI Governance Lead has authority to reduce case volumes or increase staffing. Operator wellbeing monitoring also tracks secondary indicators: error rates in human review (errors identified during quality assurance checks on operator decisions), voluntary rotation requests, and absenteeism patterns. A sustained increase in these indicators may signal that the oversight workload is unsustainable. Key outputs Maximum workload parameters (cases per shift, duration, breaks) Workload threshold alerts with AI Governance Lead authority to adjust Secondary wellbeing indicators (error rates, rotation requests, absenteeism) Module 7 and Module 12 AISDP documentation --- ## Immediate Actions (Halt, Hold, Notify Deployers) URL: https://docs.standardintelligence.com/immediate-actions-halt-hold-notify-deployers Breadcrumb: Operations › Oversight › Break-Glass Procedures › Immediate Actions (Halt, Hold, Notify Deployers) Last updated: 28 Feb 2026 Immediate Actions (Halt, Hold, Notify Deployers) AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Article 14 (4)(e) When break-glass is activated, three immediate actions follow. Halt: the system stops processing new inference requests. Hold: pending decisions that have not yet been communicated to affected persons are held and not released until the situation is resolved. Notify deployers: all active deployers are informed that the system has been suspended, with guidance on fallback procedures. The halt action is automated through the stop button, kill switch, or feature flag. The hold action requires that the system's deployment architecture supports holding pending outputs, which should be designed into the system during Phase 3. Deployer notification uses the pre-established communication channels, with a pre-drafted suspension notification template. The three actions occur in parallel, not sequentially. Waiting to notify deployers until the halt is confirmed introduces unnecessary delay in deployer environments where affected persons may still be receiving the system's outputs through cached responses. Key outputs Three parallel immediate actions (halt, hold, notify) Hold capability designed into system architecture Pre-drafted suspension notification template Parallel execution to minimise harm window --- ## In-Application Stop Button URL: https://docs.standardintelligence.com/in-application-stop-button Breadcrumb: Operations › Oversight › Break-Glass Procedures › In-Application Stop Button Last updated: 28 Feb 2026 In-Application Stop Button AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Article 14 (4)(e) The primary break-glass mechanism is a prominent, clearly labelled control in the operator's review interface. When activated, it triggers an API call to the inference service that suspends request processing, drains in-flight requests (completing them, not dropping them, to avoid data loss), and returns a "service suspended" response to any subsequent requests. The stop button must be visually prominent (not buried in a menu), immediately responsive (activation to suspension within seconds), and available at all times (not disabled by any system state). The button's activation is logged with the identity of the person who activated it, the timestamp, and the case context. The in-application stop button provides the fastest intervention pathway for operators who are actively using the system and observe harmful behaviour. Key outputs Prominent in-application stop control Immediate inference suspension with request draining Logged activation with identity and timestamp Module 7 AISDP documentation --- ## Inference Latency URL: https://docs.standardintelligence.com/inference-latency Breadcrumb: Operations › PMM › Operational Monitoring › Inference Latency Last updated: 28 Feb 2026 Inference Latency AISDP module(s): Module 12 ( Post-Market Monitoring ) Regulatory basis: Article 15 Latency monitoring tracks mean response times and tail latencies (95th and 99th percentiles). High-percentile latency spikes can cause timeouts in downstream systems that depend on the AI system's output. Where latency exceeds the timeout threshold, the downstream system may receive no output or a default fallback value. If the deployer's process assumes a valid AI output is always available, timeout-driven fallbacks introduce silent failures. The Technical SME defines latency thresholds in the PMM plan , calibrated against the deployer's integration architecture, and monitors them separately for each inference endpoint. Latency degradation may indicate resource contention, model complexity growth (after a retrained model is deployed), or infrastructure issues. Latency monitoring should correlate with throughput monitoring: latency degradation under increasing load is expected, but latency degradation at constant load suggests an infrastructure or model problem. Key outputs Mean, P95, and P99 latency monitoring per endpoint Thresholds calibrated to deployer integration architecture Timeout-driven silent failure detection Latency-throughput correlation analysis --- ## Informational Tier URL: https://docs.standardintelligence.com/informational-tier Breadcrumb: Operations › PMM › Alerting & Escalation › Informational Tier Last updated: 28 Feb 2026 Informational Tier AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 72 An informational alert indicates that a metric has shifted but remains within the established tolerance band. The alert is logged and reviewed at the next scheduled PMM review meeting. No immediate action is required. Informational alerts provide trend visibility: a metric that generates frequent informational alerts is approaching its warning threshold and warrants proactive attention. The PMM review meeting examines the informational alert log for patterns that suggest emerging issues before they breach warning thresholds. Informational alerts should not be routed through the real-time alerting service (PagerDuty/Opsgenie) to avoid contributing to alert fatigue. They are collected in the monitoring dashboard and reviewed in batch at the governance meeting. Key outputs Logged and reviewed at next scheduled PMM meeting No immediate action required Trend visibility for proactive attention Dashboard collection, not real-time alerting service --- ## Infrastructure Kill Switch URL: https://docs.standardintelligence.com/infrastructure-kill-switch Breadcrumb: Operations › Oversight › Break-Glass Procedures › Infrastructure Kill Switch Last updated: 28 Feb 2026 Infrastructure Kill Switch AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Article 14 (4)(e) The secondary break-glass mechanism is an infrastructure-level kill switch: a dedicated API endpoint, hosted separately from the main application, that scales the inference service to zero replicas (on Kubernetes) or disables the inference endpoint (on managed ML services). This mechanism exists in case the application itself is compromised or unresponsive. The infrastructure kill switch is accessible to Level 1 (engineering) and Level 4 ( AI Governance Lead ) personnel. It operates independently of the application layer, ensuring that a software failure in the AI system does not prevent the system from being stopped. Both the application-level and infrastructure-level mechanisms are documented in the AISDP, with clear instructions for when each should be used. Key outputs Infrastructure-level kill switch independent of application Scales inference to zero or disables endpoint Accessible to engineering and AI Governance Lead Documented alongside application-level mechanism --- ## Initial Report Content (Art. 73(5)) URL: https://docs.standardintelligence.com/initial-report-content-art-735 Breadcrumb: Operations › PMM › Serious Incident Reporting › Initial Report Content (Art. 73(5)) Last updated: 28 Feb 2026 Initial Report Content (Art. 73(5)) AISDP module(s): Module 12 ( Post-Market Monitoring ) Regulatory basis: Article 73(5) The initial report is prepared using the Commission's September 2025 draft template. It contains the provider's identity, the system's identity and EU database registration details, a description of what happened (when, where, who was affected), the suspected causal link between the AI system and the harm, and the immediate actions taken to prevent further harm. A pre-templated notification package reduces the time between triage determination and regulatory submission. The template has fields pre-populated from the AISDP and the incident management system. Structured incident timeline tools (Incident.io, FireHydrant) can export chronologically ordered narratives into the template, ensuring accuracy under time pressure. The initial report explicitly acknowledges that the investigation is ongoing and that supplementary information will follow. Incomplete but timely reporting is far preferable to delayed but comprehensive reporting; the timelines are strict, and failure to report within the required period is itself a compliance violation. Key outputs Commission template adopted as reporting baseline Pre-templated fields populated from AISDP and incident management Explicit acknowledgement that investigation is ongoing Timely submission prioritised over comprehensiveness --- ## Instructions for Use Guidance (Art. 26(4)) URL: https://docs.standardintelligence.com/instructions-for-use-guidance-art-264 Breadcrumb: Operations › PMM › Deployer Monitoring Support › Instructions for Use Guidance (Art. 26(4)) Last updated: 28 Feb 2026 Instructions for Use Guidance (Art. 26(4)) AISDP module(s): Module 8 (Transparency), Module 11 (Deployer Obligations) Regulatory basis: Article 13 , Article 26(4) The deployer's Article 26 monitoring obligation is only as effective as the guidance the provider supplies. The Instructions for Use must include sufficient operational monitoring guidance for the deployer to fulfil their obligation. This guidance specifies the minimum monitoring activities: reviewing system outputs for consistency and plausibility, tracking human oversight metrics, monitoring complaint and appeal rates, and observing the system's behaviour for changes indicating degradation or drift. The Instructions also define minimum data that the deployer must collect and share with the provider: aggregated performance statistics, anonymised output samples, human oversight metrics (override rates, review times), and a complaint and incident summary. The Legal and Regulatory Advisor formalises these data-sharing requirements in the deployment contract. Clear suspension criteria are also provided. Article 26(5) requires deployers to suspend use when they consider the system presents a risk, but deployers may lack the technical knowledge to assess risk. The Instructions specify observable indicators that should trigger suspension: systematically biased output patterns, sudden output distribution changes, error rates exceeding defined thresholds, and use outside the documented intended purpose. Key outputs Deployer monitoring guidance in Instructions for Use Minimum data-sharing requirements formalised in contract Suspension criteria with observable indicators Module 8 and Module 11 AISDP evidence --- ## Investigation & Corrective Action URL: https://docs.standardintelligence.com/investigation-and-corrective-action Breadcrumb: Operations › PMM › Serious Incident Reporting › Investigation & Corrective Action Last updated: 28 Feb 2026 Investigation & Corrective Action AISDP module(s): Module 12 ( Post-Market Monitoring ) Regulatory basis: Article 73(6) Following the initial report, Article 73(6) requires the provider to investigate, assess the incident's risk, and take corrective steps. The investigation determines the root cause (data issue, model deficiency, integration error, deployment configuration, human oversight failure, or external factor), the scope of impact (how many persons affected, which subgroups, in which member states), and the appropriate remedy (model fix, data correction, configuration change, deployment limitations, system withdrawal, or enhanced human oversight). The corrective action is documented and communicated to the competent authority as a supplement to the initial report. If the corrective action involves a substantial modification to the system, a new conformity assessment may be required. The investigation timeline depends on the incident's complexity; the authority may set deadlines for supplementary reports. The investigation findings also feed into the risk register and the PMM feedback loop. A serious incident reveals a risk that the pre-deployment risk assessment did not anticipate; the risk register is updated, and the AISDP is amended to reflect the new understanding. Key outputs Root cause determination across six categories Impact scope assessment (persons, subgroups, jurisdictions) Corrective action communicated to competent authority Risk register and AISDP updated with investigation findings --- ## Level 1: Technical Monitoring URL: https://docs.standardintelligence.com/level-1-technical-monitoring Breadcrumb: Operations › Oversight › Six-Level Pyramid › Level 1: Technical Monitoring Last updated: 28 Feb 2026 Level 1: Technical Monitoring — Personnel & Function AISDP module(s): Module 7 (Human Oversight), Module 12 ( Post-Market Monitoring ) Regulatory basis: Article 14 Level 1 of the oversight pyramid comprises ML engineers, platform engineers, and site reliability engineers responsible for continuous automated monitoring of the system's technical health. This level provides real-time visibility into inference latency, error rates, throughput, and infrastructure utilisation through dashboards and automated alerting. The engineering team must have the ability to diagnose and remediate technical failures, access to the system's logging infrastructure for root cause analysis, and the capacity to execute emergency actions when metric threshold breaches indicate service degradation. Level 1 is the first line of detection for technical failures that may have compliance implications. Level 1 monitoring operates continuously, including outside business hours, through on-call rotation with defined response SLAs. The monitoring infrastructure is independent of the AI system it monitors, ensuring that a failure in the AI system does not disable the monitoring capability. Key outputs Continuous automated monitoring by engineering team Real-time dashboards and automated alerting On-call rotation with defined SLAs Module 7 and Module 12 AISDP documentation Level 1: Authority — Emergency Rollback Without Prior Approval AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Article 14 Level 1 personnel have authority to execute emergency rollbacks without prior approval from the AI Governance Lead , with immediate post-hoc notification. This authority exists because requiring senior approval before addressing an active technical failure introduces delay that may increase harm or extend downtime. The scope of this authority is limited to technical remediation actions: rolling back to a previous model version, reverting a configuration change, scaling infrastructure, or activating a fallback service. Actions that alter the system's intended purpose, change its decision logic, or modify compliance-relevant parameters require AI Governance Lead approval. Every emergency action is logged with the identity of the person who acted, the timestamp, the action taken, the rationale, and the post-hoc notification to the AI Governance Lead. This log is retained as Module 7 evidence and reviewed at the quarterly oversight meeting. Key outputs Emergency rollback authority without prior approval Scope limited to technical remediation Post-hoc notification to AI Governance Lead Action logging retained as Module 7 evidence Level 1: Escalation Triggers AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Article 14 Level 1 escalation triggers include infrastructure failures affecting system availability, performance degradation beyond defined SLA thresholds, security alerts from runtime monitoring, and anomalous patterns in system logs. These triggers are defined in the PMM plan and implemented as automated alert rules. When a trigger fires, the Level 1 team follows the triage process to determine whether the issue is operational or model-related. Operational issues within Level 1's remediation authority are addressed directly. Issues that may have compliance implications, indicate model degradation, or suggest a potential serious incident are escalated to Level 2 (operators) and Level 4 (compliance) simultaneously. The escalation triggers are reviewed quarterly at the oversight meeting. Triggers that generate excessive false positives are recalibrated; triggers that fail to detect genuine issues are tightened. Key outputs Four categories of escalation trigger (infrastructure, performance, security, anomalies) Automated alert implementation Dual escalation to Level 2 and Level 4 for compliance-relevant issues Quarterly trigger review and recalibration --- ## Level 2: AI System Operators URL: https://docs.standardintelligence.com/level-2-ai-system-operators Breadcrumb: Operations › Oversight › Six-Level Pyramid › Level 2: AI System Operators Last updated: 28 Feb 2026 Level 2: AI System Operators — Personnel & Function AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Article 14 Level 2 comprises the human operators who interact with the AI system's outputs in daily operation. For a recruitment system, these are the recruiters using the screening tool. For a credit scoring system, these are the credit analysts reviewing the model's recommendations. They exercise the override, intervention, and escalation capabilities documented in AISDP Module 7. Operators are trained and certified on the system's capabilities, limitations, and known failure modes. They understand the meaning of confidence indicators and explanation outputs, know when and how to override recommendations, and have a clear, low-friction escalation pathway for reporting concerns. They must recognise patterns suggesting the system is behaving differently from its documented intended purpose. Level 2 provides the most direct observation of the system's real-world behaviour. Automated monitoring (Level 1) detects technical anomalies; human operators detect decision quality issues, contextual inappropriateness, and fairness concerns that metrics alone cannot capture. Key outputs Human operators providing real-time output oversight Trained and certified on system capabilities and limitations Override, intervention, and escalation capability Direct observation of real-world behaviour quality Level 2: AI Literacy Requirements (Art. 4) AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Article 4 Article 4 requires AI literacy for all persons involved in AI system operation. For Level 2 operators, this means understanding how the system works at a conceptual level, knowing what it does and does not do without needing the underlying mathematics. Operators must recognise the difference between the system's recommendations and their own professional judgement. Operators understand the risks of automation bias and the importance of independent evaluation. They know the signs of output drift: the system suddenly recommending a different proportion of candidates, or consistently disagreeing with the operator's assessment for a particular case type. They understand that the system's confidence score reflects statistical calibration, not certainty. AI literacy for operators is delivered through hands-on training with the actual oversight interface, not generic AI awareness courses. The training programme is refreshed annually and after any substantial system modification. Key outputs Conceptual understanding of system behaviour Automation bias awareness and independent evaluation skills Output drift recognition capability Hands-on training with the actual system interface Level 2: Escalation Triggers AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Article 14 Level 2 escalation triggers include a pattern of outputs inconsistent with the operator's professional judgement, outputs appearing to disadvantage a particular group of affected persons, situations the system's training did not anticipate (novel input types, unusual circumstances), and any case where the operator believes the system may be causing harm. These triggers depend on operator judgement rather than automated thresholds. The escalation pathway must be low-friction: a single button or form in the oversight interface that captures the operator's concern, the affected case identifier, and the reason for escalation. High-friction escalation pathways (requiring a formal written report, supervisor pre-approval, or multi-step process) suppress legitimate escalations. Escalation data is captured, aggregated, and analysed as part of the human oversight monitoring . A declining escalation rate over time warrants investigation. Key outputs Four categories of operator escalation trigger Low-friction escalation pathway in the oversight interface Escalation data captured for human oversight monitoring Judgement-based triggers complementing automated detection Level 2: Non-Retaliation Protection AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Article 14 Operators must escalate concerns without fear of negative consequences. If operators believe that raising concerns will lead to reprimand, performance penalties, or career disadvantage, they will not escalate, and the organisation loses its most valuable source of real-world feedback. An explicit non-retaliation commitment for good-faith AI concern reporting is communicated by the AI Governance Lead during operator training, reinforced by management, and enforceable through the organisation's whistleblower protection mechanisms. The commitment extends to override decisions: an operator who overrides the system's recommendation in good faith and turns out to be wrong should not face negative consequences for exercising the judgement the Article 14 framework requires of them. Non-retaliation is verified annually by the Internal Audit Assurance Lead through confidential operator surveys and escalation pattern analysis. A sudden drop in escalation rates following a personnel change or management communication warrants investigation. Key outputs Explicit non-retaliation commitment communicated during training Coverage extends to override decisions and escalations Annual verification through confidential surveys Module 7 AISDP documentation --- ## Level 3: Product Management & Business URL: https://docs.standardintelligence.com/level-3-product-management-and-business Breadcrumb: Operations › Oversight › Six-Level Pyramid › Level 3: Product Management & Business Last updated: 28 Feb 2026 Level 3: Product Management & Business — Personnel & Function AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Article 14 Level 3 comprises product managers, business unit heads, and deployer relationship managers. They provide oversight of the system's alignment with business intent, deployer satisfaction, and affected person experience, bridging technical monitoring and organisational accountability. Product managers have access to business-level metrics: deployer satisfaction scores, override rates per deployer, complaint volumes, and affected person feedback. They interpret these metrics in the context of the AISDP's documented intended purpose and the risk assessment 's identified residual risk s. Level 3 is the oversight layer most attuned to whether the system is achieving its stated purpose in the real world. Technical metrics may show the system operating within specification; Level 3 detects whether that specification translates into appropriate real-world outcomes for deployers and affected persons. Key outputs Business-level metric access and interpretation Deployer satisfaction and affected person experience monitoring Intent alignment oversight Bridge between technical monitoring and organisational accountability Level 3: Intent & Outcome Drift Detection AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Article 14 Level 3 is where intent and outcome drift are most likely to be detected. Three forms of drift are relevant. Purpose creep: deployers using the system beyond its documented intended purpose. Configuration undermining: deployer organisations configuring the system in ways that weaken human oversight. Outcome divergence: the system's real-world outcomes diverging from expectations set during implementation. These observations represent compliance risks that may not be visible in technical monitoring data. A system whose accuracy metrics are within specification may still be causing harm if deployers are applying its outputs in contexts the AISDP did not anticipate. Product management has the domain knowledge to recognise these patterns and the authority to escalate them. Intent drift detection feeds into the AISDP maintenance process. If deployers are consistently using the system outside its intended purpose, the organisation must either update the intended purpose (with a corresponding conformity assessment impact analysis) or take corrective action to restrict usage to the documented scope. Key outputs Three drift forms monitored (purpose creep, configuration undermining, outcome divergence) Domain knowledge applied to detect non-technical compliance risks AISDP maintenance trigger for intended purpose drift Module 7 AISDP evidence Level 3: Escalation Triggers AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Article 14 Level 3 escalation triggers include deployer configuration or usage patterns outside the intended conditions of use, trends in deployer feedback suggesting dissatisfaction with fairness, accuracy, or transparency, affected person complaints (particularly those alleging discrimination or opacity), and any indication that real-world outcomes are diverging from AISDP commitments. Escalation from Level 3 reaches Level 4 (compliance, legal, and data protection), where the regulatory implications of the observed patterns are assessed. Level 3 escalations are often the earliest signal that a compliance issue is developing, before it manifests in technical metrics. Escalation data from Level 3 is aggregated and reported at the quarterly oversight review. Cross-deployer complaint patterns are a particularly valuable Level 3 escalation source. Key outputs Four categories of business-level escalation trigger Escalation to Level 4 for regulatory assessment Early signal of developing compliance issues Cross-deployer pattern analysis integration --- ## Level 4: Compliance, Legal & Data Protection URL: https://docs.standardintelligence.com/level-4-compliance-legal-and-data-protection Breadcrumb: Operations › Oversight › Six-Level Pyramid › Level 4: Compliance, Legal & Data Protection Last updated: 28 Feb 2026 Level 4: Compliance, Legal & Data Protection — Personnel & Function AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Article 14 , Article 17 Level 4 comprises the AI Governance Lead , Legal and Regulatory Advisor, and DPO Liaison. They provide oversight of the system's compliance posture, regulatory risk, and legal obligations. This level receives regular reporting from Levels 1–3: technical monitoring summaries, operator escalation reports, product management observations, and non-conformity register updates. Level 4 interprets these reports in the context of the EU AI Act, GDPR , and sector-specific legislation. It assesses whether observed issues constitute regulatory non-compliance, determines whether escalation to Level 5 (executive) is warranted, and initiates formal corrective action where required. Level 4 is also responsible for maintaining the AISDP as a living document, ensuring that operational findings are reflected in the documentation, and managing the organisation's relationship with competent authorities. Key outputs Compliance posture oversight across AI Act, GDPR, sector regulation Reporting from Levels 1–3 interpreted for regulatory implications Formal corrective action initiation AISDP maintenance and authority relationship management Level 4: Regulatory Horizon Scanning AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Article 14 The Legal and Regulatory Advisor monitors guidance published by the European AI Office , enforcement actions taken by national competent authorit ies, developments in harmonised standards , and amendments to the Act's Annexes. Each development is assessed for its impact on the organisation's AI systems and, where relevant, triggers AISDP updates, reclassification reviews, or operational changes. Horizon scanning also covers sector-specific regulatory developments (financial services regulation, healthcare regulation, employment law), GDPR interpretive guidance that affects AI data processing, and case law emerging from the AI Liability Directive and national courts. This broader regulatory context shapes the compliance requirements the AISDP must satisfy. Horizon scanning findings are documented in the regulatory monitoring register and reported at the quarterly oversight review. Material developments are escalated immediately to the AI Governance Lead. Key outputs AI Office, NCA, harmonised standards, and Annex amendment monitoring Cross-regulatory scanning (GDPR, sector-specific, liability) Documented findings in regulatory monitoring register Immediate escalation for material developments Level 4: Escalation Triggers AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Article 14 Level 4 escalation triggers include any Level 1–3 escalation that may constitute a regulatory breach, post-market monitoring data suggesting the system no longer meets Articles 9–15, and external events affecting the compliance posture (enforcement actions against comparable systems, published vulnerability disclosures, changes in regulatory expectations). Level 4 escalation reaches Level 5 (executive leadership) when the issue requires strategic decision-making, resource allocation beyond current budgets, or risk appetite adjustment. Level 4 also triggers the serious incident reporting process when applicable. External events may require urgent response. An enforcement action against a competitor's comparable system signals heightened regulatory scrutiny; the Legal and Regulatory Advisor assesses whether the organisation's system is exposed to the same vulnerability and recommends proactive remediation. Key outputs Regulatory breach escalation from Levels 1–3 Articles 9–15 non-compliance signals from PMM data External event response (enforcement actions, vulnerabilities) Escalation to Level 5 for strategic decisions --- ## Level 5: Executive Leadership URL: https://docs.standardintelligence.com/level-5-executive-leadership Breadcrumb: Operations › Oversight › Six-Level Pyramid › Level 5: Executive Leadership Last updated: 28 Feb 2026 Level 5: Executive Leadership — Personnel & Function AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Article 14 Level 5 comprises the CEO, CTO, CRO, and board members with AI governance oversight. They provide strategic oversight of the organisation's AI compliance programme, resource allocation, and risk appetite decisions. Executive leadership must receive periodic reporting (quarterly during normal operations, immediately for serious incidents) covering the compliance status of all high-risk systems, the open non-conformity register , serious incidents or near-misses, the PMM summary, and the overall risk posture. Level 5 holds the authority to increase compliance investment, adjust risk appetite, halt deployments, and set organisational culture around AI governance. Without executive engagement, the compliance programme lacks the organisational weight to compete with commercial priorities. Executive oversight is documented through board and committee reporting materials, providing evidence that the organisation's leadership is actively engaged in AI governance. Key outputs Strategic oversight of AI compliance programme Resource allocation and risk appetite authority Quarterly and immediate reporting cadence Board and committee documentation as evidence Level 5: Periodic Reporting AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Article 14 Executive reporting follows a dual cadence. Quarterly reports cover the portfolio compliance status, aggregated PMM trends, non-conformity register summary, resource utilisation against plan, regulatory developments, and upcoming milestones. Immediate reports are triggered by serious incidents under Article 73 , non-conformities that the AI Governance Lead has been unable to resolve within defined timelines, and resource constraints threatening compliance posture. The quarterly report is concise and decision-oriented: it presents the information executives need to allocate resources, set priorities, and assess whether the organisation's AI risk appetite is appropriate. Detailed technical analysis remains at Levels 1–4; Level 5 receives strategic summaries with clear escalation points. Key outputs Quarterly portfolio compliance reporting Immediate reporting for serious incidents and unresolved non-conformities Decision-oriented format with clear escalation points Module 7 AISDP evidence Level 5: AI Literacy for Executives (Art. 4) AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Article 4 Article 4's AI literacy requirement extends to executive leadership. Executives need strategic awareness: what the organisation's AI systems do and which populations they affect, what the regulatory obligations are and what non-compliance consequences entail, how to interpret compliance reporting, and when to exercise authority to halt or modify a deployment. Executive literacy is delivered through focused briefings (annual, with event-triggered updates for material regulatory changes), not through the same detailed programme as operators. The briefings cover the AI portfolio overview, risk posture, compliance status, and upcoming regulatory milestones. They should equip executives to ask informed questions and make governance decisions, not to interpret model metrics. Key outputs Strategic AI literacy for executive decision-making Annual briefings with event-triggered updates Portfolio overview, risk posture, and regulatory milestones Module 7 AISDP evidence Level 5: Escalation Triggers AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Article 14 Level 5 escalation triggers include any serious incident under Article 73, non-conformities that the AI Governance Lead has been unable to resolve within defined timelines, resource constraints preventing the organisation from maintaining its compliance posture, and board-level risk appetite decisions regarding residual risk s. Escalation to Level 5 is the final internal step before external regulatory engagement. When the AI Governance Lead escalates to Level 5, the expectation is that executive authority is needed to resolve the issue, whether through additional resources, strategic reprioritisation, or a decision to withdraw a system. Level 5 decisions are documented and retained. Key outputs Four categories of executive escalation trigger Final internal escalation before external engagement Executive authority required for resolution Documented decisions retained as evidence --- ## Level 6: External Oversight URL: https://docs.standardintelligence.com/level-6-external-oversight Breadcrumb: Operations › Oversight › Six-Level Pyramid › Level 6: External Oversight Last updated: 28 Feb 2026 Level 6: External Oversight — Bodies & Organisation's Role AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Articles 70, 74 Level 6 comprises national competent authorit ies, notified bodies , external auditors, and market surveillance bodies providing independent oversight from outside the organisation. The organisation cannot control external oversight; it can prepare for it. The Conformity Assessment Coordinator maintains readiness for regulatory inspections by ensuring the AISDP and evidence pack are current, the documentation repository is accessible, designated personnel are available to respond to inquiries, and the organisation can produce requested documentation within expected timelines. The inspection readiness posture is the organisation's operational preparation for Level 6 engagement. Annual inspection drills, the pre-configured regulatory access IAM role, and the inspection log ensure the organisation can respond cooperatively and promptly to external oversight. Key outputs Preparation for competent authority, notified body, and auditor engagement Inspection readiness maintained continuously Documentation accessibility and personnel availability Module 7 AISDP documentation Level 6: Annual External Audit AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Article 17 Beyond regulatory inspections (which are at the authority's initiative), the organisation may commission annual external audits of its AI compliance programme. An external auditor provides independent verification that the compliance framework is functioning, evidence is genuine, and the organisation's self-assessment is not biased by familiarity. External audits are particularly valuable for organisations without a large internal audit function. The audit scope covers AISDP completeness and currency, evidence pack integrity, PMM operational effectiveness, and governance framework functioning. Findings are reported to the audit committee and entered into the Non-Conformity Register . Key outputs Annual external audit of AI compliance programme Independent verification of AISDP, evidence, PMM, and governance Findings reported to audit committee Non-Conformity Register entries for identified gaps --- ## Limited-Visibility Deployments — Callback APIs URL: https://docs.standardintelligence.com/limited-visibility-deployments-callback-apis Breadcrumb: Operations › PMM › Deployer Monitoring Support › Limited-Visibility Deployments — Callback APIs Last updated: 28 Feb 2026 Limited-Visibility Deployments — Callback APIs AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 72 Callback APIs provide a structured channel for deployers to report events to the provider. Published webhook endpoints cover specific event types: performance degradation reports, incident notifications, user complaints, and ground truth feedback. Deployers call these endpoints when events occur. The API schema is predefined, documented in the Instructions for Use, and includes validation to ensure data quality. This mechanism depends on deployer cooperation; the deployer agreement includes an obligation to use the callback APIs and a defined SLA for reporting. The Technical SME monitors callback API usage: a deployer that stops calling the API may have disconnected from the monitoring framework, creating a blind spot. Callback APIs complement telemetry agents. Telemetry captures continuous quantitative data; callbacks capture event-driven qualitative information that automated metrics cannot detect (a deployer observing unusual operator behaviour, a complaint from an affected person, a deployment context change). Key outputs Published webhook endpoints per event type Predefined schema documented in Instructions for Use Contractual obligation with SLA for deployer reporting Complementary to telemetry agents for event-driven information --- ## Limited-Visibility Deployments — Synthetic Monitoring URL: https://docs.standardintelligence.com/limited-visibility-deployments-synthetic-monitoring Breadcrumb: Operations › PMM › Deployer Monitoring Support › Limited-Visibility Deployments — Synthetic Monitoring Last updated: 28 Feb 2026 Limited-Visibility Deployments — Synthetic Monitoring AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 72 Synthetic monitoring is the mechanism entirely within the provider's control. Sentinel test suites submit known inputs to the deployed system at defined intervals and verify the outputs. This detects functional degradation, silent model changes (if the deployer has modified the system), and availability problems without requiring any deployer cooperation. Synthetic monitoring cannot detect distributional drift in the real-world input population (sentinel inputs are fixed), but it provides a baseline behavioural check. The test cases span the system's intended use cases and include edge cases relevant to the risk register . The sentinel suite results are compared against the baseline established at deployment. For limited-visibility deployments, the PMM plan documents which monitoring mechanisms are used (telemetry, callbacks, synthetic), the coverage each provides, the residual monitoring gaps, and the mitigations for those gaps. Where full PMM visibility cannot be achieved, the AISDP documents the limitation and the compensating controls. Key outputs Sentinel test suites submitted at defined intervals Functional degradation and silent change detection No deployer cooperation required Residual gaps documented with compensating controls --- ## Limited-Visibility Deployments — Telemetry Agents URL: https://docs.standardintelligence.com/limited-visibility-deployments-telemetry-agents Breadcrumb: Operations › PMM › Deployer Monitoring Support › Limited-Visibility Deployments — Telemetry Agents Last updated: 28 Feb 2026 Limited-Visibility Deployments — Telemetry Agents AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 72 Many high-risk systems are deployed by third-party deployers who control the production environment. The provider may have limited or no direct access to inference logs, operator behaviour data, or real-world outcomes. Telemetry agents bridge this visibility gap. A lightweight monitoring component (OpenTelemetry Collector sidecar or Fluent Bit forwarder) runs in the deployer's environment, collects inference metadata (input distributions, output distributions, latency, error rates), and transmits it to the provider's PMM infrastructure. The Technical SME designs the telemetry to minimise transmitted data: distributional summaries and aggregate metrics rather than raw inference data, respecting the deployer's data sovereignty. The telemetry schema, transmission frequency, and data handling terms are documented by the Legal and Regulatory Advisor in the deployer agreement. Telemetry agent deployment is a contractual obligation, not a request. The deployment contract includes escalation provisions for telemetry pipeline failures and, in extreme cases, the provider's right to suspend service until monitoring is restored. Key outputs OpenTelemetry Collector or Fluent Bit telemetry agents Distributional summaries, not raw inference data Contractual obligation with escalation for pipeline failures Module 12 AISDP documentation --- ## LLM & Generative AI Monitoring URL: https://docs.standardintelligence.com/llm-and-generative-ai-monitoring Breadcrumb: Operations › PMM › LLM Monitoring Last updated: 28 Feb 2026 Hallucination Detection AISDP module(s): Module 12 ( Post-Market Monitoring ) Regulatory basis: Article 15 For generative AI systems that produce factual claims, hallucination monitoring compares generated claims against source documents. Three approaches are common. Entailment scoring: an NLI model checks whether the source actually supports the generated claim. Citation verification: the system checks whether generated citations exist and contain the claimed information. Consistency checking: the system flags cases where the same query produces contradictory answers on different occasions. For RAG systems, RAGAS provides automated evaluation of faithfulness (whether the answer follows from retrieved documents), answer relevance, and context relevance. Trulens offers a similar framework with customisable feedback functions. For non-RAG systems, NLI-based detection relies on comparing outputs against a reference corpus. These detectors are imperfect; they miss subtle hallucinations and occasionally flag correct statements. The monitoring combines automated detection with periodic human evaluation: a random sample of outputs is reviewed by domain experts rating factual accuracy, relevance, and safety. The Technical SME tracks hallucination rates as a PMM metric with defined thresholds. Key outputs Three automated detection approaches (entailment, citation, consistency) RAGAS and Trulens for RAG system evaluation Combined automated and human evaluation Hallucination rate tracked as PMM metric with thresholds Safety Monitoring AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 15 For systems with safety constraints (content policies, behavioural boundaries, use-case restrictions), monitoring tracks policy violation rates over time. A rising violation rate may indicate the model's safety alignment has degraded, users have discovered bypass techniques, or the system is encountering input patterns it was not designed to handle. Lakera Guard scans model inputs and outputs for prompt injection attempts, PII leakage, toxic content, and other safety violations. NVIDIA NeMo Guardrails enforces conversational guardrails (topic boundaries, response format constraints, safety filters) at the application layer. Llama Guard provides a safety classifier applicable to model outputs. Safety monitoring runs on every output in production, with violations logged, counted, and reported in the PMM report. Prompt injection detection is particularly relevant for high-risk systems: an adversary who can manipulate the system's behaviour through crafted inputs can cause the system to produce outputs that violate its intended purpose or harm affected persons. Prompt injection rates should be tracked and reported alongside content safety metrics. Key outputs Policy violation rate tracking over time Lakera Guard, NeMo Guardrails, Llama Guard integration Every-output safety monitoring in production Prompt injection detection and rate tracking Prompt/Response Distribution Monitoring AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 15 The Technical SME monitors the distribution of incoming prompts for shifts indicating the system is being used outside its intended purpose. Topic classification of incoming prompts, using BERTopic or custom embedding-based clustering, detects usage drift. A sudden shift in the topic distribution (a large increase in prompts about a topic the system was not designed for) may indicate misuse, a user population change, or an adversarial probing campaign. Output characteristics (length, sentiment, topic, confidence indicators) are similarly monitored for shifts that might indicate model degradation or adversarial manipulation. The baseline topic and output distributions are established during deployment and tracked over time. The monitoring alerts when the topic distribution diverges significantly from the baseline. The investigation determines whether the shift represents legitimate evolution of the user population (requiring an intended purpose review) or problematic usage (requiring corrective action). Key outputs Prompt topic classification via BERTopic or embedding clustering Output characteristic distribution monitoring Baseline establishment at deployment with ongoing tracking Intended purpose review trigger for significant shifts Annotation Platforms AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 15 Automated monitoring cannot capture all dimensions of LLM output quality. A regular human evaluation programme provides qualitative assessment that complements automated metrics. Argilla, Label Studio, and Prodigy provide annotation platforms for structuring this evaluation. The AI Governance Lead defines the human evaluation cadence in the PMM plan . A common approach evaluates a random sample of 100–500 outputs weekly, rated on a structured rubric covering accuracy, relevance, safety, and explanation quality. The evaluation results feed into the PMM report and provide the ground truth against which automated quality metrics are calibrated. The annotation platform should support structured rubrics (ensuring consistency across evaluators), inter-annotator agreement measurement (ensuring evaluation quality), and integration with the monitoring pipeline (feeding results back into the metric computation layer). Key outputs Weekly human evaluation of 100–500 output samples Structured rubric (accuracy, relevance, safety, explanation quality) Inter-annotator agreement measurement Results integrated into PMM metrics and reporting --- ## Monthly PMM Reports URL: https://docs.standardintelligence.com/monthly-pmm-reports Breadcrumb: Operations › PMM › Artefacts › Monthly PMM Reports Last updated: 28 Feb 2026 Monthly PMM Reports AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 72 Monthly PMM reports document the monitoring results for each reporting period: metric values across all five dimensions, alert summary (informational, warning, critical, with resolution status), deployer feedback summary, any investigations initiated or completed, and the current status of open non-conformities. The reports are prepared by the PMM analyst and reviewed by the Technical SME. Monthly reports provide the data foundation for the quarterly governance review. They are retained as Module 12 evidence for the ten-year period. Key outputs Five-dimension metric reporting Alert and investigation summary Deployer feedback integration Ten-year retention as Module 12 evidence --- ## Non-Retaliation for Break-Glass URL: https://docs.standardintelligence.com/non-retaliation-for-break-glass Breadcrumb: Operations › Oversight › Break-Glass Procedures › Non-Retaliation for Break-Glass Last updated: 28 Feb 2026 Non-Retaliation for Break-Glass AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Article 14 The organisation's AI governance policy explicitly protects any individual who triggers a break-glass action in good faith from negative consequences. A culture in which operators or managers hesitate to stop a system because they fear career repercussions is one in which harmful AI systems continue to operate. The non-retaliation commitment covers good-faith activations that turn out to have been unnecessary. False positives are the expected cost of an effective safety mechanism; penalising them discourages future legitimate activations. The commitment is communicated during operator training, reinforced by management, and enforceable through the organisation's HR policies. The Internal Audit Assurance Lead verifies non-retaliation compliance as part of the annual oversight audit, including confidential interviews with personnel who have activated break-glass procedures . Key outputs Explicit non-retaliation for good-faith break-glass activations Coverage includes false positives Annual verification by Internal Audit Assurance Lead Module 7 AISDP documentation --- ## Notification Chain URL: https://docs.standardintelligence.com/notification-chain Breadcrumb: Operations › Oversight › Break-Glass Procedures › Notification Chain Last updated: 28 Feb 2026 Notification Chain AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Article 14 The break-glass activation event triggers an automated notification chain. The on-call engineering team is notified to investigate and resolve. The AI Governance Lead is notified to assess compliance implications. The DPO Liaison is notified if personal data processing is affected. Senior management is notified if the incident has business continuity implications. PagerDuty or Opsgenie automates this routing based on severity and time-of-day rules. The notification chain functions outside business hours through the on-call rotation. Each notification includes the system identifier, the person who activated break-glass, the timestamp, and any available context about the reason for activation. The notification chain is tested during the annual break-glass exercise to verify that all recipients receive alerts and respond within their defined timeframes. Key outputs Automated notification to engineering, governance, DPO, and management PagerDuty/Opsgenie routing with out-of-hours coverage Per-notification context (system, person, timestamp, reason) Annual testing through break-glass exercise --- ## Operational Monitoring URL: https://docs.standardintelligence.com/operational-monitoring Breadcrumb: Operations › PMM › Operational Monitoring Last updated: 28 Feb 2026 Availability & Uptime vs SLO Inference Latency Error Rate Tracking by Type Resource Utilisation & Capacity Dependency Health Operational vs Model Incident Triage --- ## Operational Oversight URL: https://docs.standardintelligence.com/operational-oversight Breadcrumb: Operations › Operational Oversight (S.13) Last updated: 28 Feb 2026 Operational oversight ensures that human control over AI systems remains effective throughout production operation. The six-level oversight pyramid defines escalating oversight responsibilities from technical monitoring through AI system operators, product management, compliance functions, executive leadership, and external oversight. Break-glass procedures provide emergency controls for system shutdown, fallback activation, and regulatory notification. Escalation without reprisal establishes reporting channels and whistleblower protections. AI literacy addresses the Article 4 obligation for operator competence. Continuous oversight governance maintains oversight effectiveness over time. Oversight across boundaries addresses multi-deployer and cross-organisational oversight. Oversight fatigue countermeasures prevent degradation of human review quality. Portfolio scaling manages oversight across multiple AI systems. Corporate governance integration embeds AI oversight into existing board and committee structures. The section concludes with oversight artefacts. ℹ This section corresponds to the Operational Oversight section and feeds primarily into AISDP Module 7 (Human Oversight). --- ## Operational vs Model Incident Triage URL: https://docs.standardintelligence.com/operational-vs-model-incident-triage Breadcrumb: Operations › PMM › Operational Monitoring › Operational vs Model Incident Triage Last updated: 28 Feb 2026 Operational vs Model Incident Triage AISDP module(s): Module 12 ( Post-Market Monitoring ) Regulatory basis: Article 15 When a monitoring alert fires, the triage process determines whether the root cause is operational (infrastructure, configuration, dependency) or model-related (drift, degradation, adversarial input). This distinction matters because the response path, the responsible team, and the regulatory implications differ. The PMM plan defines diagnostic procedures for common alert patterns. A simultaneous spike in latency and error rate with stable model metrics suggests an infrastructure issue. Stable infrastructure metrics with degrading accuracy suggest a model issue. Where the cause is ambiguous, both the engineering and ML teams are engaged simultaneously to avoid sequential diagnosis delays. Model-related incidents may have compliance implications (triggering AISDP updates, risk register entries, or serious incident reporting ). Operational incidents typically do not have direct compliance implications unless they cause the system to produce incorrect outputs that affect persons. Key outputs Operational vs model root cause triage framework Diagnostic procedures for common alert patterns Parallel team engagement for ambiguous causes Compliance implication differentiation --- ## Operator Training & Certification Records URL: https://docs.standardintelligence.com/operator-training-and-certification-records Breadcrumb: Operations › Oversight › Artefacts › Operator Training & Certification Records Last updated: 28 Feb 2026 Operator Training & Certification Records AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Article 4 , Article 17 Training and certification records document each person in the oversight pyramid : their tier, completed training modules, completion dates, calibration exercise results, and next refresher due date. Operator certification records confirm competence demonstrated through calibration and scenario exercises. Records are generated by the LMS and retained as Module 7 evidence. Key outputs Per-person training completion and certification records Calibration exercise results Refresher scheduling and overdue tracking Module 7 AISDP evidence --- ## Oversight Across Boundaries URL: https://docs.standardintelligence.com/oversight-across-boundaries Breadcrumb: Operations › Oversight › Cross-Boundary Oversight Last updated: 28 Feb 2026 Provider-Deployer Boundary AISDP module(s): Module 7 (Human Oversight), Module 11 (Deployer Obligations) Regulatory basis: Article 14 , Article 26 An oversight gap arises because the provider cannot observe how the deployer uses the system, and the deployer cannot observe the system's internal behaviour. Provider-side PMM relies on deployer data; the deployer's oversight relies on Instructions for Use from the provider. If either side fails, the oversight chain breaks. The provider defines minimum oversight reporting requirements for deployers (override rates, complaint volumes, anomalous observations) and establishes contractual obligations and practical mechanisms for reporting. The provider aggregates deployer reports across its deployment base and monitors for cross-deployer patterns that individual deployers cannot see. The deployer agreement specifies escalation procedures that cross the organisational boundary, including deployer access to the provider's incident reporting team and the provider's access to the deployer's oversight data. Key outputs Minimum deployer reporting requirements defined contractually Provider aggregation of cross-deployer patterns Bidirectional escalation across the organisational boundary Module 7 and Module 11 AISDP documentation Joint Ventures & Partnerships AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Article 25 Where multiple organisations jointly develop or operate a high-risk system, compliance responsibility allocation must be contractually explicit. Article 25 assigns obligations to the provider; one organisation is designated as provider and others must understand their obligations as importers, distributors, or deployers. The operational oversight framework specifies which organisation monitors which aspects, how escalations cross organisational boundaries, and how joint governance decisions are made. A joint governance committee, meeting quarterly, reviews the shared system's compliance posture and resolves inter-organisational issues. Key outputs Contractually explicit compliance responsibility allocation Article 25 provider designation Inter-organisational escalation and governance protocols Joint governance committee for shared systems Platform & Marketplace Deployments AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Article 25 Systems deployed through cloud platforms or AI marketplaces introduce a three-party relationship: the model provider, the platform operator, and the deployer. The platform operator may host inference infrastructure, manage the API, and mediate the provider-deployer relationship. Oversight responsibilities are allocated across all three parties. The AISDP documents the platform operator's role, the data flows between parties, and the contractual provisions for monitoring and incident response . Gaps in the three-party allocation are identified and mitigated; a platform operator that refuses to share operational telemetry with the provider creates a monitoring blind spot that the AISDP must document. Key outputs Three-party oversight responsibility allocation Platform operator role documented in AISDP Data flow and contractual provision documentation Monitoring blind spots identified and mitigated --- ## Oversight Artefacts URL: https://docs.standardintelligence.com/oversight-artefacts Breadcrumb: Operations › Oversight › Artefacts Last updated: 28 Feb 2026 Operator Training & Certification Records Break-Glass Test Records Oversight Audit Reports Portfolio Compliance Dashboards Board & Committee Reporting Materials Escalation & Override Logs Fresh Eyes Review Reports --- ## Oversight Audit Reports URL: https://docs.standardintelligence.com/oversight-audit-reports Breadcrumb: Operations › Oversight › Artefacts › Oversight Audit Reports Last updated: 28 Feb 2026 Oversight Audit Reports AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Article 17 The annual oversight audit report documents the six verification areas tested, the findings, the non-conformities identified, the remediation recommendations, and the audit committee presentation. The report is retained for the ten-year period. Key outputs Six-area audit findings documented Non-conformities and remediation recommendations Audit committee presentation Ten-year retention --- ## Oversight Fatigue Countermeasures URL: https://docs.standardintelligence.com/oversight-fatigue-countermeasures Breadcrumb: Operations › Oversight › Fatigue Countermeasures Last updated: 28 Feb 2026 Personnel Rotation (6–12 Month Cycles) AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Article 14 Personnel responsible for daily oversight tasks (reviewing dashboards, triaging alerts, conducting operator oversight) rotate on a 6–12 month cycle. A new person brings a fresh perspective: they notice anomalies the previous person had normalised, ask questions about processes the previous person had stopped questioning, and identify documentation gaps the previous person had worked around. The AI Governance Lead plans the rotation schedule in advance, with a handover period including knowledge transfer and a documented handover checklist. The handover prevents knowledge loss while ensuring the incoming person approaches the system with fresh eyes. Key outputs 6–12 month rotation cycle for oversight personnel Planned handover with documented checklist Fresh perspective counteracting normalisation of deviance Module 7 AISDP documentation Quarterly Threshold Drift Checks AISDP module(s): Module 7 (Human Oversight), Module 12 ( Post-Market Monitoring ) Regulatory basis: Article 14 Quarterly threshold drift checks compare current operational thresholds (in monitoring configuration, CI pipeline gates, and alert rules) against the values documented in the AISDP. Over time, teams may informally adjust thresholds upward to reduce alert volume, with each adjustment individually reasonable but cumulatively reducing sensitivity to genuine problems. Any discrepancy must be either reverted to the documented threshold or formally approved by updating the AISDP with the new threshold and the rationale. Threshold drift checks are conducted by a person who was not involved in the threshold adjustments, providing independent verification. Key outputs Quarterly comparison of operational thresholds against AISDP values Discrepancies reverted or formally approved with AISDP update Independent verification by non-involved personnel Module 7 and Module 12 AISDP evidence "Fresh Eyes" Reviews AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Article 14 Scheduled "fresh eyes" reviews bring personnel not involved in daily operations into a periodic deep-dive. An internal auditor, a team member from a different system, or an external consultant reviews the monitoring data, evidence repository, non-conformity register , and governance meeting minutes with no prior context. Fresh eyes reviewers often reveal systemic issues that the operational team has normalised: thresholds that have drifted, non-conformities that have been open for months without remediation, documentation that no longer matches the deployed system's configuration, or governance processes that have become perfunctory. Findings are documented and entered into the Non-Conformity Register. Key outputs Periodic deep-dive by non-operational personnel No-prior-context review of monitoring, evidence, and governance Normalised systemic issues surfaced Findings entered into Non-Conformity Register --- ## Performance Monitoring URL: https://docs.standardintelligence.com/performance-monitoring Breadcrumb: Operations › PMM › Performance Monitoring Last updated: 28 Feb 2026 Accuracy Metrics AISDP module(s): Module 12 ( Post-Market Monitoring ) Regulatory basis: Article 15 (1) The Technical SME computes the system's core accuracy metrics continuously on production data. The metric set includes AUC-ROC (discrimination ability across all thresholds), F1 score (harmonic mean of precision and recall), precision (proportion of positive predictions that are correct), recall (proportion of actual positives correctly identified), Brier score (calibration accuracy for probabilistic predictions), and calibration error (agreement between predicted probabilities and observed frequencies). These metrics are computed against ground truth labels where available. The specific metrics reported depend on the system's task: classification systems report the full set; ranking systems may substitute NDCG or MAP; regression systems report RMSE, MAE, and R-squared. The AISDP declares the primary metrics and the minimum acceptable thresholds; PMM monitors compliance with these declarations. All accuracy metrics are computed on production data using the same methodology as the validation gate, ensuring comparability between pre-deployment and post-deployment performance. Key outputs Core accuracy metric set computed continuously on production data Metric selection aligned with system task type Comparison against AISDP-declared thresholds Methodology consistent with validation gate Ground Truth Handling AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 15(1) In many deployment contexts, ground truth labels are not immediately available. A credit scoring system's true outcome is not known until the borrower repays or defaults, potentially years later. A recruitment screening system's true outcome (the quality of the hired candidate's performance) may not be known for months. Where ground truth is available, accuracy metrics are computed directly. Where ground truth is delayed, the Technical SME defines proxy metrics and leading indicators that provide early warning without waiting for labels to arrive. NannyML's CBPE method estimates accuracy from the confidence score distribution; Evidently AI computes drift metrics that correlate with performance degradation. The PMM plan documents the expected ground truth delay for each metric, the proxy metrics used during the delay period, and the process for recomputing metrics once ground truth arrives. Where ground truth labels arrive with a delay, the computation pipeline handles late-arriving data and recomputes affected metrics, ensuring the historical record is updated. Key outputs Ground truth delay documented per metric Proxy metrics and leading indicators for delayed-truth systems CBPE estimation for accuracy without ground truth Recomputation on late-arriving labels Disaggregated Performance — Subgroup-Specific Degradation AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 15(1), Article 10(2)(f) Aggregate performance metrics can mask subgroup-specific degradation. A system whose aggregate accuracy remains stable but whose accuracy for a specific subgroup has degraded is experiencing a compliance-relevant change that aggregate monitoring would miss. The Technical SME computes all performance metrics across protected characteristic subgroups, where data is available and lawful to process under Article 10(5) . The same disaggregation structure used during pre-deployment fairness testing is applied in production. Where cell sizes for certain subgroups are too small for statistically meaningful computation, the metric is flagged as inconclusive rather than omitted. Subgroup-specific degradation may indicate that the data distribution for that subgroup has shifted, that the model's decision boundary is poorly calibrated for that population, or that an upstream data quality issue disproportionately affects certain groups. Each of these root causes requires a different remediation approach. Key outputs Per-subgroup performance metric computation Consistent disaggregation structure with pre-deployment testing Inconclusive flagging for insufficient cell sizes Root cause differentiation for subgroup-specific degradation Temporal Stability & Trend Analysis AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 15(1) The Technical SME tracks performance metrics over time with trend analysis. A slow, consistent decline that does not breach the threshold on any single measurement may still represent a significant cumulative degradation over months. A system that loses 0.5% accuracy per month would take ten months to breach a 5% degradation threshold, but after ten months the degradation is substantial. Trend analysis uses rolling averages, linear regression on metric time series, and change-point detection algorithms. A statistically significant downward trend, even where no individual measurement breaches a threshold, should generate a warning alert for investigation. Seasonal patterns (predictable fluctuations linked to business cycles, academic calendars, or other periodic factors) are documented in the PMM plan and excluded from trend analysis. Temporal stability monitoring also detects sudden performance shifts that might indicate a data pipeline failure, a deployment error, or an adversarial event. A sudden drop in accuracy followed by a return to normal warrants investigation even if the recovery was spontaneous. Key outputs Rolling averages, trend regression, and change-point detection Slow degradation detection below individual threshold breach Seasonal pattern documentation and exclusion Sudden shift detection and investigation --- ## Periodic Deployer Audits & Satisfaction Surveys URL: https://docs.standardintelligence.com/periodic-deployer-audits-and-satisfaction-surveys Breadcrumb: Operations › PMM › Deployer Monitoring Support › Periodic Deployer Audits & Satisfaction Surveys Last updated: 28 Feb 2026 Periodic Deployer Audits & Satisfaction Surveys AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 72 Periodic audits, where the provider's PMM team visits the deployer site (or conducts a remote audit) to verify monitoring data quality and completeness, provide a direct check on the data pipeline's reliability. Annual monitoring audits are documented and retained as evidence, strengthening the provider's compliance posture even when day-to-day data flows are imperfect. Deployer satisfaction surveys, conducted quarterly, provide qualitative feedback on the system's real-world performance. Survey results are a leading indicator: a decline in deployer satisfaction often precedes formal incident reports. The surveys cover system reliability, output quality, documentation usefulness, support responsiveness, and any concerns about the system's behaviour. Deployer health monitoring tracks whether deployers are using the telemetry pipeline, submitting feedback through structured channels, acknowledging critical communications, and running current system versions. Non-responsive deployers represent a compliance risk; the provider has a defined escalation process culminating, if necessary, in contractual remedies or service suspension. Key outputs Annual deployer monitoring audits (on-site or remote) Quarterly satisfaction surveys as leading indicators Deployer health monitoring (telemetry, feedback, version currency) Non-responsive deployer escalation process --- ## PMM Artefacts URL: https://docs.standardintelligence.com/pmm-artefacts Breadcrumb: Operations › PMM › Artefacts Last updated: 28 Feb 2026 Monthly PMM Reports Quarterly Review Minutes Annual Oversight Audit Report Serious Incident Reports & Register AISDP Version Updates Updated Risk Register Entries --- ## PMM as Continuous Compliance URL: https://docs.standardintelligence.com/pmm-as-continuous-compliance Breadcrumb: Operations › PMM › Governance & Maintenance › PMM as Continuous Compliance Last updated: 28 Feb 2026 PMM That Collects Without Acting Is Non-Compliant AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 72 Article 72 requires a PMM system that "actively and systematically" collects, documents, and analyses data. A system that collects data but does not act on it is non-compliant with the spirit of Article 72. The PMM system is the mechanism through which the organisation detects problems not anticipated during development, identifies drift developing over time, gathers evidence for serious incident reports, and generates data feeding back into the risk management system. The feedback loop, the escalation framework, and the quarterly governance review are the mechanisms that ensure PMM findings translate into actions. Without these mechanisms, monitoring dashboards become decoration and compliance reports become fiction. A competent authority assessing the organisation's PMM compliance will examine not only whether monitoring is operational, but whether findings have produced changes. A PMM system with no documented actions over twelve months, despite operating in a dynamic production environment, suggests either that the monitoring is not detecting issues (a sensitivity problem) or that detected issues are not being addressed (a governance problem). Key outputs Active and systematic collection, analysis, and action required Feedback loop, escalation, and governance review as action mechanisms Authority scrutiny extends to whether findings produced changes Module 12 AISDP documentation --- ## PMM Data Retention & Privacy URL: https://docs.standardintelligence.com/pmm-data-retention-and-privacy Breadcrumb: Operations › PMM › Governance & Maintenance › PMM Data Retention & Privacy Last updated: 28 Feb 2026 Lawful Basis (GDPR Art. 6(1)(f) + AI Act Art. 72) AISDP module(s): Module 4 ( Data Governance ), Module 12 (Post-Market Monitoring) Regulatory basis: GDPR Article 6(1)(f), AI Act Article 72 PMM monitoring data frequently contains personal data: inference inputs may include personal characteristics, outputs may include decisions about identified individuals, and operational logs may record which operators handled which cases. Processing personal data for PMM purposes requires a lawful basis under GDPR Article 6. Legitimate interest under Article 6(1)(f) is the most common basis, supported by the legal obligation under the AI Act to conduct post-market monitoring. The legitimate interest assessment documents the purpose (regulatory compliance and system safety monitoring), the necessity (the PMM obligation cannot be met without processing inference data), and the balancing test (individual interests are protected by data minimisation, access controls, and retention limits). The DPO Liaison ensures the legitimate interest assessment is documented and reviewed annually. Where the system processes special category data , the additional conditions under GDPR Article 9 are addressed in the DPIA . Key outputs Legitimate interest assessment documented for PMM data processing Purpose, necessity, and balancing test specified Annual review by DPO Liaison Module 4 and Module 12 AISDP documentation Data Minimisation & Tiered Retention AISDP module(s): Module 4 (Data Governance), Module 12 (Post-Market Monitoring) Regulatory basis: GDPR Article 5 (1)(e), AI Act Article 18 PMM monitoring collects only the data necessary for its compliance purpose. Where full inference inputs are not needed (where aggregated statistics suffice), the data collection layer anonymises or aggregates at the point of collection. Where individual-level data is needed for disaggregated performance analysis or incident investigation, it is retained at minimum granularity and duration. A tiered retention approach balances the AI Act's ten-year documentation obligation with the GDPR's storage limitation principle. Individual-level inference data is retained at full granularity for 90 days (sufficient for incident investigation and short-term analysis), then aggregated to statistical summaries for long-term retention. The summaries, together with the PMM reports they generate, are retained for the full ten-year period. The DPO Liaison documents the retention policy in both the PMM plan and the DPIA. The retention tiers, aggregation methodology, and deletion schedules are implemented through automated lifecycle policies. Key outputs Data minimisation at point of collection where possible 90-day individual-level retention, then aggregation Ten-year retention for statistical summaries and PMM reports Automated lifecycle policies enforcing retention tiers Access Controls & Regulatory Access Profile AISDP module(s): Module 4 (Data Governance), Module 12 (Post-Market Monitoring) Regulatory basis: GDPR, AI Act Article 74 Access to PMM data containing personal information is restricted to authorised PMM analysts and investigators, with access logged and reviewed. Role-based access controls enforce the principle of least privilege: the governance dashboard provides compliance-relevant metrics without exposing individual-level data. The "regulatory access" profile provides competent authority inspectors with access to PMM dashboards and reports without granting access to raw individual-level data unless specifically required for an investigation. Where an inspector requests individual-level data, the Legal and Regulatory Advisor negotiates the scope, applies data protection safeguards, and documents the access provided. Access logs for PMM data are retained and reviewed quarterly by the DPO Liaison, who verifies that access patterns are consistent with authorised purposes. Key outputs Role-based access controls with least privilege Regulatory access profile for inspectors (dashboards, not raw data) Access logging and quarterly DPO Liaison review Module 4 and Module 12 AISDP documentation --- ## PMM Governance & Maintenance URL: https://docs.standardintelligence.com/pmm-governance-and-maintenance Breadcrumb: Operations › PMM › Governance & Maintenance Last updated: 28 Feb 2026 This section covers the following topics: Quarterly PMM Reviews Feedback Loop to Governance Change Impact Assessment PMM Data Retention & Privacy PMM Resource Planning PMM as Continuous Compliance --- ## PMM Infrastructure Architecture URL: https://docs.standardintelligence.com/pmm-infrastructure-architecture Breadcrumb: Operations › PMM › PMM Infrastructure Architecture Last updated: 28 Feb 2026 Data Collection Layer Storage Layer Computation Layer Alerting Layer Dashboard Layer PMM Tooling --- ## PMM Plan URL: https://docs.standardintelligence.com/pmm-plan Breadcrumb: Operations › PMM › PMM Plan Last updated: 28 Feb 2026 Data Collection Strategy AISDP module(s): Module 12 ( Post-Market Monitoring ) Regulatory basis: Article 72(3) The PMM plan's data collection strategy specifies what data is collected, from which sources, and at what frequency. The data collection layer captures inference inputs (the data the system receives), inference outputs (the system's decisions or recommendations), ground truth labels (where available, whether immediately or with delay), operational metadata (latency, error codes, resource utilisation), human oversight interactions (overrides, review times, escalations), and deployer feedback (complaints, anomaly reports, usage patterns). Each data source has a defined collection mechanism. Inference inputs and outputs are captured asynchronously from the production pipeline, typically streamed to a message queue (Kafka, AWS Kinesis, Google Pub/Sub) to avoid adding latency to the inference path. Ground truth labels may arrive with significant delay; the collection strategy documents the expected delay for each label source and the mechanism for matching labels to the corresponding predictions. The data collection layer must handle production peak throughput without data loss. Dropped monitoring events create blind spots in the compliance record that may coincide precisely with the stress conditions most likely to produce compliance-relevant anomalies. Key outputs Per-source data collection specification (what, where, how often) Asynchronous collection to avoid inference latency impact Ground truth matching with expected delay documentation Peak throughput handling without data loss Analysis Methodology AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 72(3) The analysis methodology defines how collected data is processed into compliance-relevant metrics. The PMM metric set mirrors the validation gate metrics established during development, adapted for production conditions where ground truth may be unavailable or delayed. For systems where ground truth is available, the Technical SME computes the declared performance metrics directly on production data. For systems where ground truth is delayed, the methodology defines proxy metrics and leading indicators. NannyML's Confidence-Based Performance Estimation (CBPE) method estimates model accuracy from the confidence score distribution without requiring ground truth labels. Estimated performance is monitored continuously, with alerts when estimates fall below declared thresholds. The methodology also specifies the statistical tests applied to detect drift (PSI, KS test, Jensen-Shannon divergence, Wasserstein distance), the computation frequency for each metric (hourly, daily, weekly), and the minimum sample sizes required for statistically meaningful computation. Metrics computed on insufficient sample sizes are flagged as inconclusive rather than reported as definitive. Key outputs Production metric set mirroring validation gate metrics Ground truth delay handling (proxy metrics, CBPE estimation) Statistical tests and computation frequency per metric Minimum sample size requirements Threshold & Trigger Framework AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 72(3) The threshold framework distinguishes normal variation from alert conditions. Each PMM metric has a defined tolerance band (the range of values considered normal), a warning threshold (the boundary at which investigation is warranted), and a critical threshold (the boundary at which immediate action is required). Thresholds are derived from the system's validation performance and calibrated against the deployment context. A drift threshold of PSI > 0.2 is a common starting point for investigation; PSI between 0.1 and 0.2 warrants monitoring. Performance thresholds align with the AISDP-declared accuracy and fairness commitments: the critical threshold corresponds to the declared minimum, and the warning threshold is set above the critical threshold to provide early warning. Thresholds are reviewed quarterly at the PMM governance meeting. Initial thresholds set conservatively (generating more alerts) are tuned based on operational experience. Threshold tuning is documented, with the rationale for each adjustment and the AI Governance Lead 's approval. Key outputs Per-metric tolerance band, warning threshold, and critical threshold Derivation from validation performance and deployment context Quarterly review with documented adjustment rationale AI Governance Lead approval for threshold changes Escalation Procedures AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 72(3) The escalation procedures define who is notified, how quickly, and what actions follow when a threshold is breached. The procedures are tiered by severity (informational, warning, critical) and specify the notification channel (dashboard, email, PagerDuty/Opsgenie), the initial responder, the escalation timeline, and the expected actions at each step. Escalation procedures account for out-of-hours scenarios, key person unavailability (named alternates for every role), and multi-jurisdiction incidents where different authorities may need notification in different time zones. The procedures are rehearsed annually through tabletop exercises and documented in the PMM plan. The escalation procedures cross-reference the serious incident reporting process. Where a critical alert indicates potential harm that meets the Article 3(49) serious incident definition, the escalation pathway transitions directly into the incident reporting workflow. Key outputs Severity-tiered escalation with notification channels and timelines Out-of-hours and key-person-unavailability contingencies Annual rehearsal through tabletop exercises Cross-reference to serious incident reporting Feedback Loop Definition AISDP module(s): Module 12 (Post-Market Monitoring), Module 6 (Risk Management System) Regulatory basis: Article 72(3) The feedback loop connects PMM findings to the risk management system, the AISDP, and the development cycle. When monitoring identifies a performance degradation, a fairness drift, or a new risk that was not anticipated during development, the feedback loop ensures this information is acted upon rather than merely recorded. PMM findings feed into the risk register : a newly identified risk from production monitoring is added to the register with its source identified as "PMM finding." Findings that affect documented AISDP claims (for example, a sustained performance degradation below the declared threshold) trigger an AISDP update. Findings that indicate a need for model retraining, additional data collection, or architecture changes enter the development backlog through the change management framework. The quarterly PMM review meeting is the primary governance forum for the feedback loop. The AI Governance Lead reviews monitoring trends, approves corrective actions, and confirms that the feedback loop is functioning, meaning that findings are producing changes, not accumulating in reports. Key outputs PMM findings integrated into risk register, AISDP, and development backlog Change management framework as the channel for corrective actions Quarterly governance review confirming feedback loop operation Module 12 and Module 6 AISDP documentation --- ## PMM Resource Planning URL: https://docs.standardintelligence.com/pmm-resource-planning Breadcrumb: Operations › PMM › Governance & Maintenance › PMM Resource Planning Last updated: 28 Feb 2026 Personnel (0.25–0.5 FTE per System) AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 72 PMM requires dedicated analytical capacity. The PMM analyst (or team, for larger deployments) reviews monitoring dashboards, investigates alerts, prepares PMM reports, and coordinates with the engineering team on remediation. For a medium-complexity high-risk system, a reasonable estimate is 0.25 to 0.5 FTE of dedicated PMM analytical effort, supplemented by engineering support during alert investigation and remediation. The PMM analyst role requires a combination of data science skills (to interpret metrics and investigate anomalies), regulatory awareness (to assess compliance implications of findings), and operational discipline (to maintain the monitoring infrastructure and reporting cadence). For organisations with multiple systems, a centralised PMM team achieves economies of scale through shared tooling and cross-system pattern analysis. Staffing continuity is critical. A PMM function that operates effectively for six months but then loses its analyst to another project degrades rapidly. The AI Governance Lead treats PMM staffing as a committed operational expense. Key outputs 0.25–0.5 FTE dedicated PMM analyst per system Engineering support during investigation and remediation Centralised team for multi-system economies of scale Committed operational staffing, not discretionary Infrastructure & Re-Validation Testing AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 72 The monitoring infrastructure (data collection, storage, computation, alerting, dashboards) has ongoing compute and storage costs that grow over time as data accumulates. Organisations project these costs over the system's expected lifetime, factoring in the ten-year retention obligation. Periodic re-validation testing at defined intervals (quarterly or biannual) provides a scheduled check beyond alert-driven investigation. Re-validation exercises rerun the full performance, fairness, and robustness test suite against current production data, establishing a fresh baseline and detecting slow degradation that continuous monitoring may not capture. Incident response imposes unplanned costs: engineering effort for investigation and remediation, legal effort for reporting and authority interaction, and operational effort for deployer communication. A contingency budget ensures the organisation can respond without diverting resources from other critical activities. Key outputs Ongoing infrastructure cost projection over system lifetime Quarterly or biannual re-validation testing Contingency budget for incident response Module 12 AISDP documentation Budget Heuristic (15–25% of Annual Dev Cost) AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 72 As a planning estimate, organisations should budget between 15% and 25% of the system's annual development cost for ongoing PMM and compliance maintenance. This figure varies significantly by system complexity, risk level, and deployment scale, but it provides a starting point for financial planning. The budget covers personnel (PMM analyst, engineering support), infrastructure (compute, storage, tooling licences), testing (periodic re-validation exercises), deployer support (feedback management, audit visits), and incident response contingency. For systems deployed across multiple jurisdictions, the budget includes the incremental multi-jurisdiction costs. The AI Governance Lead validates the budget annually during the strategic review, adjusting for changes in system complexity, deployment scale, or regulatory requirements. Key outputs 15–25% of annual development cost as PMM budget heuristic Five cost categories (personnel, infrastructure, testing, deployer support, incident response) Annual validation and adjustment Module 12 AISDP documentation --- ## PMM Tooling URL: https://docs.standardintelligence.com/pmm-tooling Breadcrumb: Operations › PMM › PMM Infrastructure Architecture › PMM Tooling Last updated: 28 Feb 2026 PMM Tooling AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 72 The PMM tooling landscape spans several categories. Metric collection and visualisation: Prometheus for metric ingestion and alerting, Grafana for dashboards, Datadog or New Relic for integrated observability. Drift and performance estimation: Evidently AI (open-source, computes drift using PSI, JS divergence, KS test; generates structured reports), NannyML (open-source, CBPE performance estimation without ground truth, drift detection). Fairness monitoring : Fairlearn for fairness metric computation, Aequitas for automated bias reporting. LLM monitoring : RAGAS and Trulens for hallucination and quality evaluation, Lakera Guard and NeMo Guardrails for safety monitoring. For smaller organisations, the open-source stack (Prometheus, Grafana, Evidently AI, NannyML, Fairlearn) provides comprehensive capability with no licence cost. For larger organisations, integrated platforms (Datadog, Arize, WhyLabs, Arthur) provide managed infrastructure at scale. The PMM plan documents the chosen tooling, the rationale for selection, and the integration architecture connecting the tools into a coherent monitoring pipeline. Key outputs Tooling selection documented with rationale Open-source stack for smaller organisations Integrated platforms for larger portfolios Integration architecture connecting tools into coherent pipeline --- ## Portfolio Compliance Dashboards URL: https://docs.standardintelligence.com/portfolio-compliance-dashboards Breadcrumb: Operations › Oversight › Artefacts › Portfolio Compliance Dashboards Last updated: 28 Feb 2026 Portfolio Compliance Dashboards AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Article 14 Portfolio dashboard snapshots are captured quarterly and retained as evidence. They demonstrate that portfolio-level oversight was operational and that compliance status across all systems was monitored. Dashboard exports include per-system status indicators, aggregate compliance metrics, and trend visualisations. Key outputs Quarterly dashboard snapshots retained Per-system and aggregate compliance status Trend visualisations Module 7 AISDP evidence --- ## Portfolio Scaling URL: https://docs.standardintelligence.com/portfolio-scaling Breadcrumb: Operations › Oversight › Portfolio Scaling Last updated: 28 Feb 2026 Shared Monitoring Infrastructure & Cross-System Analysis AISDP module(s): Module 7 (Human Oversight), Module 12 ( Post-Market Monitoring ) Regulatory basis: Article 14 Monitoring infrastructure, evidence repositories, document management systems, and CI/CD pipeline s are designed as shared services supporting multiple AI systems. The marginal cost of adding a new system to the monitoring infrastructure should be low. Shared infrastructure (Prometheus/Grafana with multi-tenant configuration, or Datadog with per-system tags) enables cross-system analysis: detecting patterns (a common vulnerability across systems using the same GPAI model) that individual system monitoring would miss. Each system's metrics are labelled with the system identifier, enabling aggregate views (how many systems have open non-conformities?) and per-system drill-down (what is system X's current fairness drift status?). Key outputs Shared monitoring infrastructure as multi-system service Low marginal cost per additional system Cross-system pattern detection capability Per-system isolation within shared infrastructure Tiered Oversight (Risk-Based Assignment) AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Article 14 Not all high-risk systems require the same oversight intensity. A credit scoring system affecting millions of consumers warrants more intensive oversight than an internal document classification system. The AI Governance Lead defines oversight tiers based on the system's risk profile, deployment scale, and affected population sensitivity. Higher-tier systems receive more frequent reviews, dedicated oversight personnel, and more granular monitoring. Lower-tier systems receive scheduled reviews, shared oversight personnel, and standard monitoring configurations. Tier assignments are documented and reviewed annually. Key outputs Oversight tiers based on risk, scale, and sensitivity Resource allocation calibrated to tier Annual tier assignment review AI Governance Lead documentation Centralised Governance, Distributed Execution AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Article 14 The AI Governance Lead provides central coordination: maintaining the portfolio-level risk register , ensuring consistent standards, and reporting to executive leadership. Day-to-day oversight execution (monitoring, escalation handling, operator training) is distributed to the teams closest to each system. This model ensures governance standards are consistent across the portfolio while operational knowledge remains local. A centralised team that tries to handle daily oversight for twenty systems will lack the domain expertise needed for each; distributed teams without central coordination will drift into inconsistent practices. Key outputs Central coordination by AI Governance Lead Distributed execution by system-proximate teams Consistent standards with local operational knowledge Portfolio-level risk register maintained centrally Portfolio-Level Compliance Dashboards AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Article 14 Portfolio compliance dashboards aggregate the compliance posture across all systems into a single executive view. For each system, the dashboard shows conformity status (green/amber/red), number and severity of open non-conformities, PMM metric status, evidence currency status, and date of last formal assessment. Credo AI and Holistic AI provide built-in multi-system views. For organisations using the open-source stack, Grafana or Metabase dashboards aggregate per-system metrics into portfolio views. The dashboard enables the AI Governance Lead and executive leadership to allocate resources, set priorities, and identify systems approaching compliance risk. Key outputs Per-system compliance status in single executive view Five status indicators per system Credo AI, Holistic AI, or Grafana/Metabase implementation Decision support for resource allocation and prioritisation Standardised Processes & Cross-System Learning AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Article 14 A common AISDP template, evidence taxonomy, non-conformity workflow, and assessment checklist reduce per-system governance overhead. The governance team applies the same process to every system, learning from experience across the portfolio. A finding in one system (a monitoring gap, a documentation deficiency) is applied as a preventive check across all others. Standardised processes are documented as a portfolio-level governance procedure maintained by the AI Governance Lead. This documentation demonstrates to a competent authority that the organisation applies consistent compliance standards, not ad hoc approaches that vary by system. Key outputs Common templates and workflows across portfolio Cross-system preventive learning from individual findings Portfolio-level governance procedure documentation Consistent compliance standards demonstrated --- ## Post-Decommission Monitoring Schedule URL: https://docs.standardintelligence.com/post-decommission-monitoring-schedule Breadcrumb: Operations › End-of-Life › Artefacts › Post-Decommission Monitoring Schedule Last updated: 28 Feb 2026 Post-Decommission Monitoring Schedule AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 72 , Article 73 The post- decommission monitoring schedule documents the continuing obligations after shutdown: the ten-year archive verification schedule (annual accessibility checks), the downstream decision monitoring cadence and duration, the post-withdrawal incident reporting responsibility and named person, the GDPR data subject rights process, and the historical PMM data analysis capability. The schedule assigns a named owner for each obligation and specifies the expiry date (calculated from the date the system was placed on the market plus ten years). The AI Governance Lead reviews the schedule annually to confirm obligations are being met and to close obligations whose expiry date has passed. Key outputs Five continuing obligations documented with named owners Annual review by AI Governance Lead Expiry dates calculated per Article 18 Module 12 AISDP evidence --- ## Post-Decommission Obligations URL: https://docs.standardintelligence.com/post-decommission-obligations Breadcrumb: Operations › End-of-Life › Post-Decommission Last updated: 28 Feb 2026 10-Year Document Retention AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 18 The ten-year documentation retention obligation runs from the date the system was placed on the market, not from the date of decommission . The post-decommission obligations register calculates the expiry date. The archived AISDP, evidence pack, conformity assessment records, Declaration of Conformity , PMM reports, and serious incident records must all remain retrievable for the full period. The AI Governance Lead assigns a named owner for the post-decommission archive, ensuring that organisational changes (mergers, restructuring, personnel turnover) do not leave the archive orphaned. Annual verification checks confirm that the archive remains accessible and that the storage service has not expired or been inadvertently deleted. Key outputs Ten-year retention from date placed on market Expiry date calculated in post-decommission obligations register Named archive owner assigned Annual accessibility verification GDPR Data Subject Rights AISDP module(s): Module 4 ( Data Governance ) Regulatory basis: GDPR Articles 15–22 Data subjects retain their GDPR rights even after the system is decommissioned. A data subject who submits an access request under Article 15 must receive a response, even if the system no longer exists. The organisation maintains the capability to respond to such requests for as long as it retains personal data related to the system. The DPO Liaison defines the post-decommission process for handling data subject requests and communicates it to the relevant team. Where personal data has been deleted, the response confirms deletion. Where aggregated or anonymised data remains, the response explains that individual-level data is no longer held. Where personal data is retained under the ten-year archival obligation (inference logs within the 90-day retention window at decommission time), the response provides the data and explains the retention justification. Key outputs Data subject rights persist after decommission Post-decommission request handling process defined by DPO Liaison Response capability maintained for duration of personal data retention Module 4 AISDP documentation Post-Withdrawal Incident Reporting (Art. 73) AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 73 If serious incidents come to light after withdrawal (affected persons report harm that occurred while the system was in service), the provider must still report them under Article 73. The Article 73 reporting obligation is not limited to incidents discovered during the system's operational period; it applies whenever the provider becomes aware of a serious incident, regardless of the system's current status. The post-decommission obligations register notes this continuing obligation. The AI Governance Lead ensures that a named person remains responsible for receiving and triaging post-withdrawal incident reports. The incident triage process and reporting execution process remain available, drawing on the archived evidence pack for investigation. Key outputs Article 73 reporting obligation persists after withdrawal Named person responsible for post-withdrawal incident triage Triage and reporting processes remain available Archived evidence pack supports investigation Historical PMM Data Analysis AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 72 The provider may be required to analyse historical monitoring data in response to post-withdrawal investigations, complaints, or competent authority requests. The archived PMM data (aggregated metrics, PMM reports, alert logs) must be retrievable and analysable for the retention period. Historical analysis may reveal patterns that were not detected during the system's operational period: slow fairness drift that only becomes apparent over a longer time horizon, or downstream decision impacts that emerge years after the system was decommissioned. The post-decommission monitoring plan defines the scope and duration of proactive historical analysis; reactive analysis in response to external requests may be required at any point during the retention period. Key outputs Archived PMM data retrievable and analysable Proactive historical analysis per post-decommission plan Reactive analysis capability for external requests Module 12 AISDP evidence --- ## Post-Market Monitoring URL: https://docs.standardintelligence.com/post-market-monitoring Breadcrumb: Operations › Post-Market Monitoring (S.12) Last updated: 28 Feb 2026 Post-market monitoring is the continuous compliance obligation that runs from the moment a high-risk AI system enters production until decommissioning . The PMM plan establishes the monitoring scope, frequency, and responsibilities required by Article 72(3) . Five monitoring dimensions follow: performance, fairness, data drift , operational, and human oversight. PMM infrastructure architecture defines the technical stack. Specialised monitoring for LLM and generative AI systems and composite systems extend the core framework. The alerting and escalation framework implements severity-based response. Serious incident reporting addresses the Article 73 obligation. -support'); return false;" class="xref">Deployer monitoring support enables downstream compliance. Quarterly PMM review s, feedback loops to governance, change impact assessment , data retention and privacy, resource planning, and continuous compliance round out the operational framework. The section concludes with PMM artefacts. ℹ This section corresponds to the Post-Market Monitoring section and feeds primarily into AISDP Module 12 (Post-Market Monitoring). --- ## Quarterly PMM Reviews URL: https://docs.standardintelligence.com/quarterly-pmm-reviews Breadcrumb: Operations › PMM › Governance & Maintenance › Quarterly PMM Reviews Last updated: 28 Feb 2026 Review Agenda AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 72 The AI Governance Lead convenes quarterly PMM review meetings examining a structured agenda: monitoring metric trends across all five dimensions (performance, fairness, drift, operational, human oversight), alert history and resolution statistics, deployer feedback summary and cross-deployer pattern analysis, complaint volumes and patterns, non-conformity register status (PMM-related entries), threshold review and recalibration decisions, the feedback loop operation (are findings producing actions?), and the PMM budget and resource status. The review produces documented minutes with action items, owners, and deadlines. Each action item is tracked to completion. The quarterly review minutes are retained as Module 12 evidence, demonstrating that the PMM system is governed, not merely running. The review should also assess the PMM system's own health: are metrics being computed on schedule? Are dashboards accessible? Is the alerting layer delivering alerts reliably? A PMM system that fails silently provides false assurance. Key outputs Eight-item structured review agenda Documented minutes with tracked action items PMM system health self-assessment Module 12 AISDP evidence Decision Authority by Impact Tier AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 72 A PMM finding that warrants action must pass through a decision process with defined authority tiers. Threshold adjustments (tuning warning thresholds based on operational experience) can be authorised by the Technical SME, recorded in the PMM review minutes. Model retraining on updated data, where the pipeline follows the documented methodology and the retrained model passes all validation gates, is authorised by the Technical Owner with notice to the AI Governance Lead. Model architecture changes, hyperparameter shifts, or changes to the feature set require AI Governance Lead approval and a substantial modification assessment. System suspension or withdrawal requires AI Governance Lead sign-off with immediate notice to the Legal and Regulatory Advisor and affected deployers. Without defined decision authority, findings accumulate in dashboards and reports without translating into system improvements. The tiered authority structure ensures that routine adjustments proceed quickly, while higher-impact changes receive appropriate governance scrutiny. Key outputs Four-tier decision authority (Technical SME, Technical Owner, AI Governance Lead, AI Governance Lead + Legal) Routine adjustments proceed quickly; high-impact changes receive scrutiny Authority documented in QMS Module 12 AISDP documentation Prioritisation Against Development Workload AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 72 PMM-triggered remediation competes with feature development, bug fixes, and other engineering priorities. Without a defined prioritisation mechanism, PMM actions are perpetually deprioritised and the feedback loop stalls. A separate PMM action backlog with its own prioritisation criteria addresses this risk. Critical PMM actions (compliance threshold breaches, serious incident corrective actions) override all other engineering work. Warning-level PMM actions are scheduled by the Technical Owner within the next development sprint. Informational PMM findings are reviewed by the AI Governance Lead at the quarterly meeting and scheduled as capacity permits. The AI Governance Lead has authority to elevate PMM action priority when engineering prioritisation is inconsistent with compliance risk. This authority is documented in the QMS and understood by engineering leadership. A PMM action that sits in the backlog for months because it was deprioritised by the engineering team represents a compliance risk that the AI Governance Lead must own. Key outputs Separate PMM action backlog with compliance-first prioritisation Critical PMM actions override all other work AI Governance Lead authority to elevate priority Authority documented in QMS Traceable Documentation per Cycle AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 72 Every completed feedback loop cycle, from PMM finding through decision, action, validation, and AISDP update, is documented as a single traceable record. The record captures the originating PMM finding (alert, report, or deployer feedback), the decision taken (including authorising person and date), the action implemented (code change, retrain, configuration update, threshold adjustment), the validation result (retrained model performance against validation gates), and the AISDP update (modules modified, new version number). This documentation demonstrates to a competent authority that the PMM system functions as a closed loop: findings produce actions, actions produce improvements, improvements are documented. It also provides version control traceability , linking each AISDP version change to the PMM finding that motivated it. The traceable record is retained as Module 12 evidence within the evidence register , cross-referenced to the relevant Non-Conformity Register entry where applicable. Key outputs Per-cycle traceable record (finding, decision, action, validation, AISDP update) Closed-loop demonstration for competent authority Version control traceability from AISDP change to PMM finding Module 12 AISDP evidence --- ## Quarterly Review Minutes URL: https://docs.standardintelligence.com/quarterly-review-minutes Breadcrumb: Operations › PMM › Artefacts › Quarterly Review Minutes Last updated: 28 Feb 2026 Quarterly Review Minutes AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 72 Quarterly review minutes document the governance meeting's agenda, attendees, discussion, decisions, action items, owners, and deadlines. They provide evidence that the AI Governance Lead is actively governing the PMM system and that findings are being acted upon. Minutes are reviewed and approved by the AI Governance Lead before filing. Action items from previous quarters are reviewed for completion. Incomplete actions are escalated or re-assigned. Key outputs Structured meeting minutes with decisions and action items Action item tracking across quarters AI Governance Lead approval Ten-year retention as Module 12 evidence --- ## Regulatory Basis URL: https://docs.standardintelligence.com/regulatory-basis Breadcrumb: Operations › End-of-Life › Regulatory Basis Last updated: 28 Feb 2026 Applicable Articles (Art. 3(16–17), 16, 18, 20, 49/71, 72, 73, 79) AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Articles 3(16), 3(17), 16, 18, 20, 49, 71, 72, 73, 79 The EU AI Act addresses system end-of-life through several provisions. Article 3(16) defines recall as any measure aiming to achieve the return, taking out of service, or disabling of an AI system already made available to deployers. Article 3(17) defines withdrawal as any measure preventing a system in the supply chain from being made available on the market. These are distinct: withdrawal prevents new deployments; recall retrieves or disables systems already in deployers' hands. Article 16 requires providers to maintain compliance throughout the system's time on the market; when compliance can no longer be maintained, Article 20's corrective actions apply. Article 18 imposes ten-year documentation retention from the date the system was placed on the market, persisting after withdrawal or recall. Articles 49 and 71 require the EU database registration to be updated to reflect the system's changed status. Article 72 's PMM obligations do not cease at decommissioning; historical monitoring data may be required for post-withdrawal investigations. Article 73 requires serious incident reporting even for incidents discovered after withdrawal. Article 79 empowers market surveillance authorities to order withdrawal or recall, with a 15 working-day backstop. Every system will eventually reach end-of-life. The AISDP must document the end-of-life process, planned during the design phase and refined as the system matures. Key outputs Eight regulatory provisions mapped to end-of-life obligations Distinction between recall and withdrawal understood Ten-year retention persisting after decommission Post-withdrawal incident reporting obligation ISO Standards (42001 A.6.2.6, 23894 Annex C) AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: ISO/IEC 42001, ISO/IEC 23894 ISO/IEC 42001 Annex A Control A.6.2.6 requires that the AI management system address decommissioning as a defined lifecycle stage. The organisation must define and document processes for safely phasing out the AI system while addressing residual impacts. ISO/IEC 23894 Annex C reinforces this by mapping risk management activities to every lifecycle stage, including retirement, requiring assessment and treatment of risks arising from withdrawal from service. These standards ensure that decommissioning is treated as a planned lifecycle event, not an ad hoc response. A system decommissioned without a structured process risks leaving orphaned data, unnotified deployers, active credentials with no owner, and regulatory obligations that no one tracks. The AISDP's end-of-life documentation satisfies both the EU AI Act's Article 18 retention requirements and the ISO standards' lifecycle management requirements simultaneously. Key outputs ISO 42001 A.6.2.6 decommissioning as defined lifecycle stage ISO 23894 Annex C retirement-stage risk management Structured process preventing orphaned data and obligations Module 12 AISDP documentation --- ## Reporting Execution URL: https://docs.standardintelligence.com/reporting-execution Breadcrumb: Operations › PMM › Serious Incident Reporting › Reporting Execution Last updated: 28 Feb 2026 Reporting Execution AISDP module(s): Module 12 ( Post-Market Monitoring ) Regulatory basis: Article 73 The incident lead prepares the initial report. The Legal and Regulatory Advisor reviews for completeness, accuracy, and legal implications. The AI Governance Lead authorises submission. The report is submitted to the market surveillance authority of the member state where the incident occurred. If the incident occurred in multiple member states, the Legal and Regulatory Advisor coordinates parallel submissions to each relevant authority. The reporting execution process must function under pressure, across time zones, and outside business hours. The escalation path ensures that the AI Governance Lead and Legal and Regulatory Advisor can be reached for authorisation at any hour. Named alternates cover leave and unavailability. The submission is logged in the serious incident register with the date, time, recipient authority, report version, and the individual who authorised submission. This log provides the organisation's record that the reporting obligation was met within the required timeline. Key outputs Legal review and AI Governance Lead authorisation before submission Multi-jurisdiction parallel submission coordination Out-of-hours capability with named alternates Submission logged in serious incident register --- ## Reporting Timelines (2/10/15 Days) URL: https://docs.standardintelligence.com/reporting-timelines-21015-days Breadcrumb: Operations › PMM › Serious Incident Reporting › Reporting Timelines (2/10/15 Days) Last updated: 28 Feb 2026 Reporting Timelines (2/10/15 Days) AISDP module(s): Module 12 ( Post-Market Monitoring ) Regulatory basis: Article 73(1) The reporting regime is tiered by severity. Two days from awareness: widespread infringement of fundamental rights, or serious and irreversible disruption to critical infrastructure. Ten days from awareness: death of a person or suspected causal link to a death. Fifteen days from awareness: all other serious incidents meeting the Article 3(49) definition. The clock starts when the provider "becomes aware" of the incident and establishes a causal link (or reasonable likelihood of one) between the AI system and the harm. In practice, awareness develops gradually: a deployer reports an anomaly, the engineering team investigates, the investigation reveals a potential causal link, and the legal team confirms the Article 3(49) definition is met. Internal processes must compress this chain because the two-day and ten-day timelines leave almost no margin. Article 73(5) permits submission of an initial, incomplete report followed by supplementary information. The Commission recognises that full root cause analysis cannot be completed within two days. The initial report should contain provider identity, system identity and registration details, incident description, suspected causal link, and immediate actions taken. Key outputs Three-tier timeline (2/10/15 days from awareness) "Awareness" definition including causal link establishment Initial incomplete report permitted with supplementary follow-up Internal process design to compress the awareness chain --- ## Resource Utilisation & Capacity URL: https://docs.standardintelligence.com/resource-utilisation-and-capacity Breadcrumb: Operations › PMM › Operational Monitoring › Resource Utilisation & Capacity Last updated: 28 Feb 2026 Resource Utilisation & Capacity AISDP module(s): Module 12 ( Post-Market Monitoring ) Regulatory basis: Article 15 The engineering team tracks CPU, GPU, memory, and storage utilisation against capacity limits. AI inference workloads can exhibit sudden load spikes driven by batch processing cycles, deployer activity patterns, or seasonal demand. When utilisation approaches capacity limits, the system may queue requests, increase latency, or drop requests entirely. Capacity monitoring provides sufficient lead time (weeks, not hours) for infrastructure scaling decisions. The PMM plan defines warning thresholds at 70 to 80% of capacity and critical thresholds at 90% or above, with documented response procedures for each. Response procedures range from automated scaling (for cloud-native deployments) to manual capacity planning requests (for on-premises infrastructure). Storage utilisation monitoring is particularly relevant for systems that accumulate monitoring data, inference logs, and evidence artefacts. The ten-year retention requirement ( Article 18 ) means storage requirements grow continuously and must be planned. Key outputs CPU, GPU, memory, and storage utilisation monitoring Warning (70–80%) and critical (90%+) thresholds Capacity scaling lead time of weeks Storage growth planning for ten-year retention --- ## Resumption Criteria URL: https://docs.standardintelligence.com/resumption-criteria Breadcrumb: Operations › Oversight › Break-Glass Procedures › Resumption Criteria Last updated: 28 Feb 2026 Resumption Criteria AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Article 14 The break-glass procedure specifies what conditions must be met before the system is restarted. Resumption criteria are predefined in the AISDP, authorised by a named individual (the AI Governance Lead for compliance-triggered suspensions, the Technical SME for purely technical suspensions), and verified through a documented checklist. The resumption checklist typically includes: root cause identified and understood, corrective action implemented or confirmed not to require system change, all validation gates passed on the current system state, deployers notified that service will resume, evidence preservation completed, and the AI Governance Lead's written authorisation. A system that is restarted without meeting the resumption criteria risks reintroducing the same harmful behaviour. The resumption decision is documented in the oversight log and retained as Module 7 evidence. Key outputs Predefined resumption criteria in AISDP Named authoriser for resumption decision Documented checklist verified before restart Module 7 AISDP evidence --- ## Serious Incident Register URL: https://docs.standardintelligence.com/serious-incident-register Breadcrumb: Operations › PMM › Serious Incident Reporting › Serious Incident Register Last updated: 28 Feb 2026 Serious Incident Register AISDP module(s): Module 12 ( Post-Market Monitoring ) Regulatory basis: Article 73 The serious incident register tracks every event assessed against the Article 3(49) criteria, regardless of whether it was ultimately determined to constitute a serious incident. Each entry records the event description, detection date, awareness date (as defined by Article 73), triage outcome (confirmed, ruled out, or under investigation), severity tier and reporting deadline, submission date and recipient authority, investigation status and findings, corrective actions taken, and closure date. The register demonstrates a functioning incident detection and assessment process. It provides an evidence base for trend analysis: recurring near-miss events may indicate a systemic weakness requiring proactive intervention. It protects the organisation in disputes about whether reporting obligations were met. The Legal and Regulatory Advisor advises on the application of legal professional privilege to the register. Factual operational data is separated from legal analysis. The register is established under the Legal and Regulatory Advisor's authority, with a documented purpose of assessing legal reporting obligations, strengthening the privilege claim. Key outputs Every assessed event documented regardless of determination Near-miss trend analysis for proactive intervention Legal privilege considerations documented Module 12 AISDP evidence --- ## Serious Incident Reporting URL: https://docs.standardintelligence.com/serious-incident-reporting Breadcrumb: Operations › PMM › Serious Incident Reporting Last updated: 28 Feb 2026 Art. 3(49) Definition — Five Categories of Serious Incident Reporting Timelines (2/10/15 Days) Detection Infrastructure Triage Process Evidence Preservation (Art. 73(6)) Initial Report Content (Art. 73(5)) Reporting Execution Investigation & Corrective Action Cross-Regime Interaction (Art. 73(9)) Serious Incident Register --- ## Serious Incident Reports & Register URL: https://docs.standardintelligence.com/serious-incident-reports-and-register Breadcrumb: Operations › PMM › Artefacts › Serious Incident Reports & Register Last updated: 28 Feb 2026 Serious Incident Reports & Register AISDP module(s): Module 12 ( Post-Market Monitoring ) Regulatory basis: Article 73 The serious incident reports archive retains every report submitted to competent authorities (initial and supplementary), together with the investigation findings, corrective actions, and authority correspondence. The serious incident register is maintained alongside, tracking all assessed events regardless of determination. Key outputs Complete report archive (initial, supplementary, correspondence) Register of all assessed events including near-misses Investigation findings and corrective actions documented Ten-year retention as Module 12 evidence --- ## Seven Decommission Workstreams URL: https://docs.standardintelligence.com/seven-decommission-workstreams Breadcrumb: Operations › End-of-Life › Seven Workstreams Last updated: 28 Feb 2026 WS1: Deployer Transition — Notification & Arrangements WS1: API-Served Systems (Deprecation, Sunset, Cut-Off) WS1: Embedded/On-Premises & Workflow-Integrated Systems WS2: Technical Shutdown (Controlled, Logged, Reversible) WS3: Data Lifecycle Closure WS4: Downstream Decision Monitoring — Historical Outputs WS5: Documentation Finalisation — Final AISDP Version WS6: Archival — 10-Year Retention WS7: Regulatory Notifications (EU DB, Deployers, CA) --- ## Silent Escalation Detection URL: https://docs.standardintelligence.com/silent-escalation-detection Breadcrumb: Operations › PMM › Alerting & Escalation › Silent Escalation Detection Last updated: 28 Feb 2026 Silent Escalation Detection AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 72 A common failure mode is the "silent escalation," where an alert is acknowledged but no action follows. The alerting system tracks not only acknowledgement but also subsequent actions and outcomes. An alert that is acknowledged but produces no root cause analysis, no documented decision, and no resolution indicates a gap in the escalation framework. Silent escalation detection runs as a periodic check (weekly or fortnightly) reviewing all acknowledged alerts against a resolution checklist: was root cause analysis conducted? Was a decision documented? Was the resolution verified? Alerts that fail this check are flagged as "stale escalations" and re-escalated to the AI Governance Lead . The quarterly PMM review examines the stale escalation rate as a health metric for the escalation framework itself. A rising stale escalation rate indicates that the framework is being overwhelmed (too many alerts), undermined (insufficient response capacity), or ignored (cultural resistance to the escalation process). Key outputs Acknowledged-but-unresolved alert detection Resolution checklist (root cause, decision, verification) Stale escalation re-escalation to AI Governance Lead Stale escalation rate as framework health metric --- ## Six-Level Oversight Pyramid URL: https://docs.standardintelligence.com/six-level-oversight-pyramid Breadcrumb: Operations › Oversight › Six-Level Pyramid Last updated: 28 Feb 2026 This section covers the following topics: Level 1: Technical Monitoring Level 2: AI System Operators Level 3: Product Management & Business Level 4: Compliance, Legal & Data Protection Level 5: Executive Leadership Level 6: External Oversight --- ## Storage Layer URL: https://docs.standardintelligence.com/storage-layer Breadcrumb: Operations › PMM › PMM Infrastructure Architecture › Storage Layer Last updated: 28 Feb 2026 Storage Layer AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 72 , Article 18 Monitoring data is stored in a time-series database optimised for the analytical queries PMM requires: aggregation over time windows, comparison between periods, and disaggregation by subgroup. For high-volume systems, a tiered storage strategy reduces cost: raw inference data is retained at full granularity for a defined period (typically 30–90 days), then aggregated to hourly or daily summaries for long-term retention. The Technical SME documents the aggregation methodology in the PMM plan . The raw data retention period must be sufficient to support serious incident investigations; a serious incident discovered weeks after it occurred requires access to the raw data from the incident period. Long-term storage uses the same tiered approach as the broader evidence repository : active storage for current data, archival storage (S3 Glacier, Azure Archive, Google Archive) for historical data, with lifecycle policies enforcing the ten-year retention period. Key outputs Time-series database optimised for PMM queries Tiered storage (raw 30–90 days, aggregated long-term) Raw data retention sufficient for incident investigation Ten-year archival with lifecycle policies --- ## System End-of-Life & Decommissioning URL: https://docs.standardintelligence.com/system-end-of-life-and-decommissioning Breadcrumb: Operations › End-of-Life & Decommissioning (S.12.11) Last updated: 28 Feb 2026 System end-of-life planning begins during architecture design and executes when the system reaches the end of its operational life. The regulatory basis establishes the AI Act obligations that survive decommissioning. End-of-life triggers define the events that initiate the decommission process: planned retirement, regulatory withdrawal, performance degradation, and organisational change. End-of-life planning covers the decommission plan, stakeholder notification, transition planning, and timeline. Seven decommission workstreams address system shutdown, data disposition, model archival, documentation preservation, deployer transition, regulatory notification, and knowledge transfer. Post-decommission obligations cover the ten-year documentation retention, ongoing regulatory response, and liability management. The section concludes with artefacts. ℹ This section corresponds to the End-of-Life section and feeds primarily into AISDP Module 12 ( Post-Market Monitoring ) and Module 10 (Record-Keeping). --- ## Technical Shutdown Log URL: https://docs.standardintelligence.com/technical-shutdown-log Breadcrumb: Operations › End-of-Life › Artefacts › Technical Shutdown Log Last updated: 28 Feb 2026 Technical Shutdown Log AISDP module(s): Module 3 (Model Documentation), Module 12 ( Post-Market Monitoring ) Regulatory basis: Article 18 The technical shutdown log records the sequence of actions: endpoint deactivation dates and HTTP 410 activation, model artefact archival with final version hash and storage location, credential revocation dates with verification test results, infrastructure release dates and resource identifiers, monitoring data final snapshot export, and IaC state file archival. For automated decommissions, the log includes the Terraform destroy output, Vault lease revocation audit trail, and API gateway configuration changes. Key outputs Sequential action record with dates and verification Model integrity hash at archival Credential revocation verification IaC state file archived --- ## Threshold Calibration — Derivation & Quarterly Review URL: https://docs.standardintelligence.com/threshold-calibration-derivation-and-quarterly-review Breadcrumb: Operations › PMM › Alerting & Escalation › Threshold Calibration — Derivation & Quarterly Review Last updated: 28 Feb 2026 Threshold Calibration — Derivation & Quarterly Review AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 72 Threshold calibration determines how sensitive the alerting system is. Thresholds set too low generate excessive alerts (contributing to alert fatigue and desensitisation); thresholds set too high miss genuine compliance issues until they become severe. Initial thresholds are derived from the system's validation performance. The critical threshold corresponds to the AISDP-declared minimum acceptable value. The warning threshold is set at a level that provides sufficient lead time for investigation and remediation before the critical threshold is breached. For drift metrics, thresholds are calibrated against the natural variability observed during the validation period. Thresholds are reviewed quarterly at the PMM governance meeting. The review examines alert volume per threshold (are thresholds generating the right number of alerts?), false positive rate (are alerts leading to genuine issues or benign findings?), and detection latency (are warning alerts providing enough lead time before critical alerts?). Threshold adjustments are documented with their rationale and approved by the AI Governance Lead . Key outputs Initial derivation from validation performance Warning threshold calibrated for lead time before critical Quarterly review of alert volume, false positive rate, and detection latency Documented adjustments approved by AI Governance Lead --- ## Triage Process URL: https://docs.standardintelligence.com/triage-process Breadcrumb: Operations › PMM › Serious Incident Reporting › Triage Process Last updated: 28 Feb 2026 Triage Process AISDP module(s): Module 12 ( Post-Market Monitoring ) Regulatory basis: Article 73 Any detection event triggers a predefined triage process completed within 24 hours. The triage assesses whether the event meets the Article 3(49) definition using a documented decision tree: is there evidence of actual or potential harm to individuals? Does the harm meet the severity criteria? Is the harm attributable to the AI system's output or behaviour? If the Article 3(49) threshold is met, the triage classifies the severity tier (2-day, 10-day, or 15-day), assigns an incident lead (typically a senior Technical SME), notifies the AI Governance Lead and Legal and Regulatory Advisor immediately, and initiates evidence preservation. If the threshold is not met, the event is logged in the serious incident register as an assessed event with the triage rationale documented. Each triage determination, whether positive or negative, is documented with the evidence considered and the rationale for the conclusion. This documentation protects the organisation if the determination is later questioned. Key outputs 24-hour triage completion from detection Article 3(49) decision tree applied Severity tier classification and incident lead assignment Documented determination regardless of outcome --- ## Updated Risk Register Entries URL: https://docs.standardintelligence.com/updated-risk-register-entries Breadcrumb: Operations › PMM › Artefacts › Updated Risk Register Entries Last updated: 28 Feb 2026 Updated Risk Register Entries AISDP module(s): Module 6 (Risk Management System), Module 12 ( Post-Market Monitoring ) Regulatory basis: Article 9 PMM findings that reveal new risks or change the assessment of existing risks trigger risk register updates. Each update documents the originating PMM finding, the risk reassessment, any new or modified mitigations, and the residual risk acceptance. The risk register is a living document that evolves with the system's operational experience. Updated risk register entries demonstrate that the organisation's risk management is responsive to production evidence, not static from the pre-deployment assessment. This responsiveness is a mitigating factor under Article 99(7). Key outputs PMM-triggered risk register updates Originating finding linked to risk reassessment Responsive risk management as mitigating factor Module 6 and Module 12 AISDP evidence --- ## Warning Tier URL: https://docs.standardintelligence.com/warning-tier Breadcrumb: Operations › PMM › Alerting & Escalation › Warning Tier Last updated: 28 Feb 2026 Warning Tier AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 72 A warning alert indicates that a metric has breached its warning threshold, typically set at a level indicating potential drift before the compliance threshold is reached. The Technical SME reviews the alert within five working days. Root cause analysis is initiated. If the cause is identified and benign (a known seasonal pattern, a one-time data quality event that has been corrected), the alert is documented and closed with the rationale. If the cause is unclear or concerning, the alert is escalated to the AI Governance Lead . Warning alerts that persist without resolution are escalated automatically after the five-day review period. Warning alerts provide the early detection window that prevents critical alerts. A well-calibrated warning threshold gives the organisation time to investigate and remediate before the metric breaches the compliance threshold. Key outputs Five-working-day Technical SME review Root cause analysis initiated Benign cause documented and closed; unclear cause escalated Auto-escalation after review period --- ## Who Can Trigger Break-Glass (Level 2 or Above) URL: https://docs.standardintelligence.com/who-can-trigger-break-glass-level-2-or-above Breadcrumb: Operations › Oversight › Break-Glass Procedures › Who Can Trigger Break-Glass (Level 2 or Above) Last updated: 28 Feb 2026 Who Can Trigger Break-Glass (Level 2 or Above) AISDP module(s): Module 7 (Human Oversight) Regulatory basis: Article 14 (4)(e) The break-glass procedure defines who is authorised to stop the system. For high-risk AI systems, any person at Level 2 or above in the oversight pyramid should be able to trigger a break-glass action. Requiring senior management approval before stopping a potentially harmful system introduces delay that may increase harm. The broad authorisation reflects the practical reality that the person closest to the system's operation is most likely to observe harmful behaviour first. An operator who notices the system producing systematically biased outputs should be able to halt processing immediately, without waiting for management authorisation. The break-glass authorisation list is documented in the AISDP and communicated during training. Every authorised person knows how to activate the procedure and is protected from reprisal for doing so. Key outputs Level 2 and above authorised to trigger break-glass No senior management pre-approval required Authorisation list documented in AISDP Communication during training --- ## WS1: API-Served Systems (Deprecation, Sunset, Cut-Off) URL: https://docs.standardintelligence.com/ws1-api-served-systems-deprecation-sunset-cut-off Breadcrumb: Operations › End-of-Life › Seven Workstreams › WS1: API-Served Systems (Deprecation, Sunset, Cut-Off) Last updated: 28 Feb 2026 WS1: API-Served Systems (Deprecation, Sunset, Cut-Off) AISDP module(s): Module 8 (Transparency) Regulatory basis: Article 13 For API-served systems, the provider implements a three-phase deprecation sequence. Phase 1 (deprecation notice): the API documentation is updated with the shutdown date, and a sunset header is added to API responses indicating the withdrawal timeline. Phase 2 (sunset period): the API continues operating at full functionality through the deprecation period, giving deployers time to migrate. Phase 3 (cut-off): on the announced date, the API begins returning HTTP 410 Gone responses with a body explaining the system's withdrawal. Post-cutoff access attempts are logged for the decommissioning record. API gateway configurations (AWS API Gateway, Kong, Apigee) can automate the deprecation sequence: adding sunset headers, switching to 410 responses after the cutoff, and logging post-cutoff attempts. The automation reduces human error and provides a verifiable record. Key outputs Three-phase API deprecation (notice, sunset, cut-off) Sunset header on API responses during deprecation period HTTP 410 Gone with explanation after cut-off Automated via API gateway configuration --- ## WS1: Deployer Transition — Notification & Arrangements URL: https://docs.standardintelligence.com/ws1-deployer-transition-notification-and-arrangements Breadcrumb: Operations › End-of-Life › Seven Workstreams › WS1: Deployer Transition — Notification & Arrangements Last updated: 28 Feb 2026 WS1: Deployer Transition — Notification & Arrangements AISDP module(s): Module 8 (Transparency), Module 11 (Deployer Obligations) Regulatory basis: Article 13 , Article 20 The provider notifies all known deployers of the withdrawal decision, the reason for withdrawal (at an appropriate level of detail; for mandated withdrawals, the non-conformity must be described), the timeline for decommission , and the recommended transition arrangements. The notification is delivered through the established deployer communication channels with delivery confirmation. Deployer acknowledgement is tracked; non-responsive deployers receive follow-up communications through escalating channels. The AI System Assessor documents all notifications, including dates, recipients, content, and acknowledgements received. Transition arrangements vary by deployment model. The provider offers migration guidance for alternative systems, parallel running periods where feasible, and support for deployers conducting their own impact assessments. The deployer agreement may specify end-of-life obligations (minimum notice periods, transition support duration) that the provider must honour. Key outputs All known deployers notified with reason, timeline, and arrangements Delivery confirmation and acknowledgement tracking Migration guidance and transition support Notification records retained as AISDP evidence --- ## WS1: Embedded/On-Premises & Workflow-Integrated Systems URL: https://docs.standardintelligence.com/ws1-embeddedon-premises-and-workflow-integrated-systems Breadcrumb: Operations › End-of-Life › Seven Workstreams › WS1: Embedded/On-Premises & Workflow-Integrated Systems Last updated: 28 Feb 2026 WS1: Embedded/On-Premises & Workflow-Integrated Systems AISDP module(s): Module 8 (Transparency) Regulatory basis: Article 13 For embedded or on-premises systems, the provider issues a software update communicating end-of-service status, provides data export capabilities so deployers can extract data needed for continuity, and coordinates disabling or uninstallation. On-premises decommission may require deployer cooperation; the deployer agreement should specify the deployer's obligations upon withdrawal notification. For systems integrated into deployer workflows, the provider offers migration guidance for alternative systems, provides parallel running periods where feasible, and supports deployers in conducting their own impact assessments. Workflow-integrated systems pose the highest transition risk: deployers may have built operational processes around the AI system's outputs, and removing the system disrupts those processes. The transition timeline for embedded and workflow-integrated systems is typically longer than for API-served systems, factoring into the overall decommission plan. Key outputs Software update communicating end-of-service Data export capabilities for deployer continuity Migration guidance and parallel running support Extended transition timeline for embedded systems --- ## WS2: Technical Shutdown (Controlled, Logged, Reversible) URL: https://docs.standardintelligence.com/ws2-technical-shutdown-controlled-logged-reversible Breadcrumb: Operations › End-of-Life › Seven Workstreams › WS2: Technical Shutdown (Controlled, Logged, Reversible) Last updated: 28 Feb 2026 WS2: Technical Shutdown (Controlled, Logged, Reversible) AISDP module(s): Module 3 (Model Documentation), Module 12 ( Post-Market Monitoring ) Regulatory basis: Article 16, Article 18 The Technical SME coordinates the technical shutdown in a controlled, logged, and reversible sequence until the final cutoff. Inference endpoint deactivation proceeds in stages: lower-priority or lower-risk deployments go offline first to identify unexpected dependencies. Endpoints return informative HTTP 410 responses, not silent failures. Model artefacts are moved from the production registry stage to an archived stage. They are not deleted; they are retained for the ten-year period. Cryptographic signatures are verified one final time to confirm artefact integrity at archival. The Technical SME records the final model version, its hash, and the archival location. All production credentials, API keys, service accounts, and access tokens are revoked. The Technical SME verifies revocation by attempting access with the revoked credentials and confirming failure. Dedicated infrastructure is released. Before monitoring infrastructure shutdown, the Technical SME exports a final snapshot of all monitoring data and archives it alongside the PMM records. Key outputs Staged endpoint deactivation with HTTP 410 responses Model artefacts archived (not deleted) with integrity verification Credential revocation with verification testing Final monitoring snapshot before infrastructure release --- ## WS3: Data Lifecycle Closure URL: https://docs.standardintelligence.com/ws3-data-lifecycle-closure Breadcrumb: Operations › End-of-Life › Seven Workstreams › WS3: Data Lifecycle Closure Last updated: 28 Feb 2026 WS3: Data Lifecycle Closure AISDP module(s): Module 4 ( Data Governance ) Regulatory basis: Article 18 , GDPR Article 5 (1)(e) Data lifecycle closure reconciles the AI Act's ten-year documentation retention with the GDPR's storage limitation principle for each data category. Training data containing personal data: delete or anonymise at decommission unless a specific retention justification exists (pending litigation, regulatory investigation); retain metadata, provenance records, and statistical summaries for ten years. Inference logs containing personal data: apply the PMM data retention policy; logs whose retention period extends beyond decommission transfer to archive storage for scheduled deletion. Monitoring and PMM data: retain aggregated non-personal data for ten years; anonymise or delete personal data per the retention policy. Model artefacts and embeddings: archive for ten years (no personal data concern unless the model memorises training data, in which case the risk assessment determines treatment). The DPO Liaison verifies that all personal data scheduled for deletion has been removed from all storage locations: primary databases, backup systems, caches, derived datasets, and any third-party systems. The verification is documented and signed by the DPO Liaison. The deletion verification methodology from applies. Key outputs Per-data-category retention/deletion decision AI Act ten-year retention reconciled with GDPR storage limitation DPO Liaison signed deletion verification Module 4 AISDP documentation --- ## WS4: Downstream Decision Monitoring — Historical Outputs URL: https://docs.standardintelligence.com/ws4-downstream-decision-monitoring-historical-outputs Breadcrumb: Operations › End-of-Life › Seven Workstreams › WS4: Downstream Decision Monitoring — Historical Outputs Last updated: 28 Feb 2026 WS4: Downstream Decision Monitoring — Historical Outputs AISDP module(s): Module 12 (Post-Market Monitoring) Regulatory basis: Article 72 Decisions made by the system during its operational lifetime may still affect individuals after decommission . A credit scoring system withdrawn three years ago may have produced assessments still influencing access to financial services. A recruitment screening system may have contributed to hiring decisions whose effects persist in career trajectories. A medical diagnostic aid may have influenced treatment plans still being followed. The AI Governance Lead assesses whether historical outputs continue to affect individuals. Where they do, a post-decommission monitoring plan specifies the data sources for tracking downstream effects, the metrics to monitor, the monitoring duration (which may be shorter than the ten-year retention period, depending on how long outputs remain consequential), the responsible person, and the escalation pathway if adverse outcomes are detected. This monitoring need not replicate the full PMM programme. It focuses on the specific risk dimensions (fairness, accuracy of consequential decisions) that remain relevant. Monitoring outputs are added to the archived evidence pack . Key outputs Assessment of continuing impact from historical outputs Post-decommission monitoring plan where impacts persist Focused on fairness and accuracy of consequential decisions Monitoring outputs archived as evidence --- ## WS5: Documentation Finalisation — Final AISDP Version URL: https://docs.standardintelligence.com/ws5-documentation-finalisation-final-aisdp-version Breadcrumb: Operations › End-of-Life › Seven Workstreams › WS5: Documentation Finalisation — Final AISDP Version Last updated: 28 Feb 2026 WS5: Documentation Finalisation — Final AISDP Version AISDP module(s): All modules Regulatory basis: Article 11 , Article 18 The AI System Assessor prepares the final AISDP version, incorporating all operational content plus a decommissioning record. The record captures the end-of-life trigger and rationale (reason, trigger identification date, governance approval), the plan and execution log (approved plan, milestone completion dates, deviations and resolutions), deployer notification records (copies of all notifications with delivery confirmation), the technical shutdown record (endpoint deactivation dates, credential revocation confirmations, infrastructure release records, final model integrity verification), the data deletion and retention record (per-category schedule with DPO Liaison signed attestation), the post-decommission obligations register (ten-year retention expiry date, monitoring commitments, data subject rights obligations), and the registration status update ( EU database updated to "no longer on the market/in service"). Module 1 is updated with system status "decommissioned" and the decommission date. Module 6 records end-of-life as a risk treatment (risk eliminated by system withdrawal). Module 7 records cessation of operational oversight and transition to post-decommission monitoring. Module 10 records the decommissioning within the QMS change log. Key outputs Final AISDP version with decommissioning record Seven decommissioning record components Per-module end-of-life updates across Modules 1, 3, 4, 6, 7, 8, 10, 12 Module 12 AISDP evidence --- ## WS6: Archival — 10-Year Retention URL: https://docs.standardintelligence.com/ws6-archival-10-year-retention Breadcrumb: Operations › End-of-Life › Seven Workstreams › WS6: Archival — 10-Year Retention Last updated: 28 Feb 2026 WS6: Archival — 10-Year Retention AISDP module(s): Module 12 ( Post-Market Monitoring ) Regulatory basis: Article 18 The Conformity Assessment Coordinator archives the final AISDP version and the complete evidence pack in long-term storage. The archive must be immutable (preventing retrospective modification), accessible (a competent authority request can be serviced within a reasonable timeframe), and cost-efficient (active monitoring infrastructure can be decommissioned while data survives). Cloud cold storage tiers (AWS Glacier, Azure Archive, GCS Archive) provide cost-effective long-term retention with retrieval latencies of hours to days, acceptable for regulatory requests. The archive includes a manifest document describing the system, the AISDP version history, the evidence pack contents, and the retrieval procedures. This manifest ensures a future employee or competent authority can navigate the archive without relying on institutional knowledge. Automated archival pipelines reduce human error: the pipeline exports the final model artefact, final monitoring snapshot, and AISDP to archive storage, verifies completeness against the evidence register , and produces an archival receipt with checksums. Key outputs Immutable, accessible, cost-efficient long-term archive Cloud cold storage (Glacier, Azure Archive, GCS Archive) Manifest document for future navigation Automated archival pipeline with checksum verification --- ## WS7: Regulatory Notifications (EU DB, Deployers, CA) URL: https://docs.standardintelligence.com/ws7-regulatory-notifications-eu-db-deployers-ca Breadcrumb: Operations › End-of-Life › Seven Workstreams › WS7: Regulatory Notifications (EU DB, Deployers, CA) Last updated: 28 Feb 2026 WS7: Regulatory Notifications (EU DB, Deployers, CA) AISDP module(s): Module 12 ( Post-Market Monitoring ) Regulatory basis: Articles 49, 71, Article 20 Three categories of regulatory notification accompany decommission . EU database registration : the Conformity Assessment Coordinator updates the registration to reflect "no longer on the market/in service" under Articles 49 and 71, with the date of withdrawal recorded. Deployer notification: all known deployers are formally notified under Article 20, with the non-conformity (for compliance-driven withdrawals) described at an appropriate level of detail. Competent authority notification: where the withdrawal results from non-conformity presenting a risk under Article 79(1), the provider informs the competent authority and any notified body that issued a certificate. For mandated withdrawals, the competent authority has already been engaged; the notification confirms the withdrawal's completion and the corrective actions taken. The Legal and Regulatory Advisor coordinates all regulatory notifications, ensuring consistency across jurisdictions for multi-state deployments. Key outputs EU database status updated to "no longer on the market/in service" Deployer notification under Article 20 Competent authority and notified body notification where applicable Multi-jurisdiction coordination by Legal and Regulatory Advisor --- # Resources --- ## Abbreviations URL: https://docs.standardintelligence.com/abbreviations Breadcrumb: Resources › Glossary (Appendix C) › Abbreviations Last updated: 28 Feb 2026 Abbreviations AISDP module(s): Cross-cutting Key abbreviations: AISDP (AI System Documentation Package), AUC-ROC (Area Under the Receiver Operating Characteristic Curve), CBPE (Confidence-Based Performance Estimation), CDR (Classification Decision Record), CE (Conformité Européenne), CI/CD (Continuous Integration/Continuous Deployment), CRA (Cyber Resilience Act), DoC (Declaration of Conformity), DORA (Digital Operational Resilience Act), DPIA (Data Protection Impact Assessment), DPO (Data Protection Officer), DVC (Data Version Control), EU (European Union), FMEA (Failure Mode and Effects Analysis), FRIA (Fundamental Rights Impact Assessment), GDPR (General Data Protection Regulation), GPAI (General-Purpose AI), IaC (Infrastructure as Code), IFU (Instructions for Use), JS (Jensen-Shannon divergence), KS (Kolmogorov-Smirnov test), LLM (Large Language Model), LMS (Learning Management System), MSA (Market Surveillance Authority), NB (Notified Body), NC (Non-Conformity), NCA (National Competent Authority), NIS2 (Network and Information Security Directive 2), PMM (Post-Market Monitoring), PSI (Population Stability Index), QMS (Quality Management System), RAG (Retrieval-Augmented Generation), RBAC (Role-Based Access Control), SAST (Static Application Security Testing), SBOM (Software Bill of Materials), SCA (Software Composition Analysis), SHAP (SHapley Additive exPlanations), SLA (Service Level Agreement), SLO (Service Level Objective), SME (Subject Matter Expert), SRR (Selection Rate Ratio). Key outputs Complete abbreviation expansion table --- ## Additional Diagrams (Remaining 11) URL: https://docs.standardintelligence.com/additional-diagrams-remaining-11 Breadcrumb: Resources › Architectural Diagrams › Additional Diagrams (Remaining 11) Last updated: 28 Feb 2026 Additional Diagrams (Remaining 11) AISDP module(s): Various Additional Mermaid diagrams are included: the end-of-life workflow (seven workstreams from trigger to post- decommission ), the break-glass procedure flowchart, the CI/CD pipeline stage diagram, the data governance lifecycle, the conformity assessment execution phases, the deployment ledger structure, the monitoring infrastructure five-layer architecture, the provider-deployer communication model, the PMM data retention tiers, the cross-regime reporting matrix, and the version control composite versioning scheme. Each diagram supports its respective AISDP module and is retained as visual evidence in the evidence pack. Key outputs Eleven additional architectural and process diagrams Per-diagram AISDP module mapping Visual evidence in evidence pack --- ## AISDP — 12 Module Overview URL: https://docs.standardintelligence.com/aisdp-12-module-overview Breadcrumb: Resources › Core Artefacts › AISDP — 12 Module Overview Last updated: 28 Feb 2026 AISDP — 12 Module Overview AISDP module(s): All modules Regulatory basis: Article 11 , Annex IV The AISDP is structured as twelve modules, each mapping to specific Annex IV requirements. Module 1 : System Description and Intended Purpose. Module 2 : Development Process. Module 3 : Model Documentation. Module 4 : Data Governance and Dataset Documentation . Module 5 : Testing and Validation. Module 6 : Risk Management System. Module 7 : Human Oversight. Module 8 : Transparency and User Information. Module 9 : Robustness and Cybersecurity. Module 10 : Version Control and Change Management . Module 11 : Fundamental Rights Impact Assessment . Module 12 : Post-Market Monitoring and Change History. Each module has a regulatory authority, domain guidance cross-reference, responsible role, content fields, and evidence sources. The AISDP is assembled incrementally from Phase 1 through Phase 5, not authored retrospectively. Key outputs Twelve-module structure aligned to Annex IV Per-module regulatory authority and responsible role Incremental assembly across delivery phases --- ## AISDP Assembly Timeline (Incremental, Phase 1 to Phase 5) URL: https://docs.standardintelligence.com/aisdp-assembly-timeline-incremental-phase-1-to-phase-5 Breadcrumb: Resources › Core Artefacts › AISDP Assembly Timeline (Incremental, Phase 1 to Phase 5) Last updated: 28 Feb 2026 AISDP Assembly Timeline (Incremental, Phase 1 to Phase 5) AISDP module(s): All modules Regulatory basis: Article 11 The AISDP is assembled incrementally across the seven delivery phases. Module 1 is completed during Phase 1. Module 6 is drafted during Phase 2 and updated continuously. Modules 2, 3, 4 are populated during Phases 3–4 as architecture and development progress. Modules 5, 7, 8, 9 are completed during Phase 4 testing and Phase 5 validation. Module 10 is maintained continuously from Phase 3 onward. Module 11 is completed during Phase 2 and refined as the system matures. Module 12 is activated at Phase 7. By Phase 5, the AISDP should be substantially complete, requiring only final review and consistency checking. Key outputs Per-module assembly phase mapping Incremental assembly preventing retrospective documentation Phase 5 completion target --- ## Architectural Diagrams URL: https://docs.standardintelligence.com/architectural-diagrams Breadcrumb: Resources › Architectural Diagrams Last updated: 28 Feb 2026 Risk Classification Diagram Oversight Pyramid Diagram Incident Response Diagram Feedback Loop Diagram Delivery Timeline Diagram Additional Diagrams (Remaining 11) --- ## Break-Glass Mechanisms Summary URL: https://docs.standardintelligence.com/break-glass-mechanisms-summary Breadcrumb: Resources › Technical Infrastructure › Break-Glass Mechanisms Summary Last updated: 28 Feb 2026 Break-Glass Mechanisms Summary AISDP module(s): Module 7 Regulatory basis: Article 14 (4)(e) Three independent halt mechanisms: in-application stop button (prominent UI control, ), infrastructure kill switch (dedicated API endpoint independent of application, ), and feature flag pattern (LaunchDarkly/Unleash, sub-200ms propagation, ). Any Level 2+ person can trigger. Non-retaliation for good-faith activations. Annual testing exercise required. See for detailed treatment. Key outputs Three independent halt mechanisms Level 2+ authorisation Annual testing and non-retaliation --- ## Brownfield Systems URL: https://docs.standardintelligence.com/brownfield-systems Breadcrumb: Resources › Brownfield Systems Last updated: 28 Feb 2026 Gap Assessment Approach AISDP module(s): Cross-cutting Regulatory basis: Article 16 For systems already in production, the AI System Assessor examines each AISDP module and identifies what documentation exists, what is missing, what testing has been performed, what is needed, what governance controls are in place, and what is absent. The gap assessment produces a remediation plan with priorities, owners, and timelines. The gap assessment is the first step in brownfield compliance and determines the scope of the retrofit effort. Key outputs Per-module gap identification Remediation plan with priorities and timelines First step in brownfield compliance Documentation Reconstruction Principles AISDP module(s): Cross-cutting Regulatory basis: Article 11 Where documentation was not created during development, it must be reconstructed from available artefacts. Training data characteristics may be derived from statistical analysis of the deployed model's behaviour. Architecture details may be extracted from the codebase. Design decisions may be recovered through interviews with the development team. The AISDP should clearly indicate where documentation has been reconstructed, not generated contemporaneously. Transparency about reconstruction is more credible to a competent authority than retroactive documentation claiming to be original. Key outputs Reconstruction from available artefacts Statistical analysis, code extraction, team interviews Clear labelling of reconstructed documentation Retrofit Phases (A: Critical, B: Documentation, C: Infrastructure) AISDP module(s): Cross-cutting Regulatory basis: Article 16 Three retrofit phases for brownfield systems. Phase A (critical gaps): human oversight controls, serious incident reporting capability, basic PMM; addresses the highest compliance risk first. Phase B (documentation gaps): assemble the AISDP from existing and reconstructed artefacts; version control established from the current state forward. Phase C (infrastructure gaps): version control extension, CI/CD pipeline compliance gates, monitoring infrastructure build-out. The phased plan is documented and approved by the AI Governance Lead with milestones demonstrating progress toward full compliance. Key outputs Three retrofit phases prioritised by compliance risk Phase A addresses immediate safety and reporting gaps Milestones demonstrating progress August 2026 Milestone Requirement AISDP module(s): Cross-cutting Regulatory basis: Article 113 The full high-risk AI system framework applies from 2 August 2026. Organisations must reach at least Level 4 (Operational) in the compliance maturity model by this date: conformity assessment s completed, Declarations of Conformity signed, high-risk systems registered in the EU database , PMM producing data, operators trained, and incident response in place. Systems operating after this date without conformity assessment are in breach of the AI Act. Key outputs 2 August 2026 application date Level 4 maturity target Post-deadline operation without conformity assessment is non-compliant --- ## CDR — Content & Process Summary URL: https://docs.standardintelligence.com/cdr-content-and-process-summary Breadcrumb: Resources › Core Artefacts › CDR — Content & Process Summary Last updated: 28 Feb 2026 CDR — Content & Process Summary AISDP module(s): Module 1 (System Description) Regulatory basis: Article 6 The Classification Decision Record documents the reasoning chain determining whether a system is high-risk. It comprises document control , system summary, and the three-pathway classification analysis: Pathway A (Annex I product, Article 6(1)), Pathway B ( Annex III area, Article 6(2)), and the Article 6(3) exception (functional criterion and risk criterion assessed separately). The CDR is produced by the AI System Assessor , independently reviewed by the Classification Reviewer, and approved by the AI Governance Lead before Phase 2 begins. See for the detailed treatment. Key outputs Three-pathway classification analysis Independent review and governance approval First compliance artefact produced (Phase 1) --- ## CI/CD Pipeline with Four Compliance Gates URL: https://docs.standardintelligence.com/cicd-pipeline-with-four-compliance-gates Breadcrumb: Resources › Technical Infrastructure › CI/CD Pipeline with Four Compliance Gates Last updated: 28 Feb 2026 CI/CD Pipeline with Four Compliance Gates AISDP module(s): Module 2 , Module 5 Regulatory basis: Annex IV (2)(e), Article 9 (7) The CI/CD pipeline enforces four compliance gates: performance gate (accuracy, precision, recall, F1 against AISDP-declared thresholds), fairness gate (selection rate ratios, equalised odds against declared minimums), robustness gate (adversarial perturbation survival rate), and substantial modification gate (automated comparison against cumulative baseline). Gate failures block deployment. See for the detailed treatment. Key outputs Four compliance gates blocking deployment on failure Automated threshold enforcement Substantial modification detection --- ## Code Examples URL: https://docs.standardintelligence.com/code-examples Breadcrumb: Resources › Code Examples Last updated: 28 Feb 2026 Data Validation Examples AISDP module(s): Module 4 Code examples are provided for data validation including Great Expectations data quality checks, schema validation, statistical distribution verification, and data lineage capture. These examples demonstrate how automated data validation generates Module 4 evidence as a byproduct of the engineering workflow. Key outputs Great Expectations data quality checks Schema and distribution validation Automated evidence generation Version Control Examples AISDP module(s): Module 10 Code examples are provided for DVC data versioning commands, MLflow model registration, composite version tagging, and deployment ledger entries. These examples demonstrate how version control infrastructure generates Module 10 evidence automatically. Key outputs DVC data versioning MLflow model registration Deployment ledger automation CI/CD Pipeline Examples AISDP module(s): Module 2 , Module 5 Code examples are provided for GitHub Actions pipeline configuration with compliance gates, fairness gate threshold enforcement, robustness gate perturbation testing, and substantial modification gate baseline comparison. These examples demonstrate automated compliance checking within the deployment pipeline. Key outputs GitHub Actions compliance gate configuration Fairness and robustness gate implementation Substantial modification automated detection Security Examples AISDP module(s): Module 9 Code examples are provided for Semgrep SAST rule configuration, Trivy container scanning, Sigstore Cosign artefact signing, and SBOM generation with Syft. These examples demonstrate DevSecOps integration generating Module 9 evidence. Key outputs SAST, container scanning, signing, SBOM tooling DevSecOps pipeline integration Automated security evidence generation Monitoring Examples AISDP module(s): Module 12 Code examples are provided for Prometheus alert rule configuration (three severity tiers), Alertmanager routing (critical to PagerDuty, warning to Slack), recording rules for rolling performance baseline computation, and PSI drift threshold configuration. These examples demonstrate how the monitoring infrastructure generates Module 12 evidence. Key outputs Prometheus alert and recording rules Alertmanager severity-based routing PSI drift threshold configuration --- ## Compliance-at-Deployment URL: https://docs.standardintelligence.com/compliance-at-deployment Breadcrumb: Resources › Templates › Common Pitfalls › Compliance-at-Deployment Last updated: 28 Feb 2026 Compliance-at-Deployment AISDP module(s): Module 12 Treating compliance as a gate to pass at deployment, overlooking ongoing obligations. The AISDP is a living document, PMM operates continuously, and the risk register evolves with operational experience. Post-deployment compliance requires sustained investment (15–25% of annual development cost, ). Key outputs Pitfall: deployment-only compliance mindset Solution: continuous compliance through PMM and governance --- ## Conformity Assessment Reference URL: https://docs.standardintelligence.com/conformity-assessment-reference Breadcrumb: Resources › Conformity Assessment Reference Last updated: 28 Feb 2026 Route Determination (Annex VI, NB, Voluntary) AISDP module(s): Cross-cutting Regulatory basis: Article 43 Three conformity assessment routes: Annex VI internal control (default for most Annex III systems), Annex VII notified body assessment (mandatory for biometric identification for law enforcement under Annex III point 1), and voluntary third-party review (a complement to Annex VI providing independent credibility). See for detailed treatment. Key outputs Three assessment routes Mandatory NB for biometric identification Voluntary third-party review option Three Workstreams Summary AISDP module(s): Cross-cutting Regulatory basis: Annex VI Three concurrent assessment workstreams: QMS assessment ( Article 17 quality management system evaluated against twelve sub-requirements), technical documentation assessment (AISDP reviewed for completeness, accuracy, and traceability ), and evidence verification (evidence pack artefacts verified against AISDP claims). See for detailed treatment. Key outputs Three concurrent workstreams QMS, technical documentation, and evidence verification Cross-workstream finding consolidation Five Execution Phases Summary AISDP module(s): Cross-cutting Regulatory basis: Annex VI Five execution phases: Phase 1 — assessment planning (scope, schedule, team, criteria). Phase 2 — document review (AISDP completeness and consistency). Phase 3 — evidence verification (artefact-level testing against claims). Phase 4 — finding consolidation and NC classification. Phase 5 — determination (conformity confirmed, conditional conformity, or non-conformity). See for detailed treatment. Key outputs Five sequential execution phases Three possible determination outcomes Non-conformity classification and remediation NC Severity Summary AISDP module(s): Cross-cutting Regulatory basis: Annex VI Three non-conformity severity levels: Critical — prevents the system from meeting a mandatory requirement; blocks Declaration of Conformity until resolved. Major — significant gap that weakens compliance posture; must be resolved within defined timeline with interim mitigations. Minor — documentation deficiency or process improvement opportunity; resolved through normal governance cycle. See for detailed treatment. Key outputs Three severity levels (critical, major, minor) Critical blocks Declaration of Conformity Remediation timelines per severity Notified Body Engagement Summary AISDP module(s): Cross-cutting Regulatory basis: Articles 28–36, Annex VII Notified body engagement is mandatory for Annex III point 1 (biometric identification for law enforcement) under Annex VII. Voluntary engagement with a recognised assessment body strengthens compliance credibility for other high-risk systems. The engagement process covers selection (NANDO register), scope agreement, assessment execution, certificate issuance, and ongoing surveillance. See for detailed treatment. Key outputs Mandatory for Annex III point 1 Voluntary for other high-risk systems NANDO register for selection --- ## Core Artefacts to Produce URL: https://docs.standardintelligence.com/core-artefacts-to-produce Breadcrumb: Resources › Core Artefacts Last updated: 28 Feb 2026 CDR — Content & Process Summary AISDP — 12 Module Overview Module 1: System Description & Intended Purpose Module 2: Development Process Module 3: Model Documentation Module 4: Data Governance & Dataset Documentation Module 5: Testing & Validation Module 6: Risk Management System Module 7: Human Oversight Module 8: Transparency & User Information Module 9: Robustness & Cybersecurity Module 10: Version Control & Change Management Module 11: Fundamental Rights Impact Assessment Module 12: Post-Market Monitoring & Change History AISDP Assembly Timeline (Incremental, Phase 1 to Phase 5) Evidence Pack — Traceability & Currency Requirements Risk Register — Living Document Requirements Declaration of Conformity — Eight Points & Legal Significance FRIA Report — Scope & Separation Requirement PMM Plan — Five Dimensions & Proportionality --- ## Cross-Reference Index URL: https://docs.standardintelligence.com/cross-reference-index Breadcrumb: Resources › Cross-Reference Index (Appendix B) Last updated: 28 Feb 2026 Article-to-Section Mapping AISDP module(s): Cross-cutting The cross-reference index maps every EU AI Act Article cited in this documentation to the sections that address it. Assessors use this index to verify that a given regulatory requirement is addressed and to locate all relevant guidance across the thirteen domains. The index covers explicit citations only; Articles not appearing may fall outside the document's scope. See for the complete mapping table. Key outputs Per-Article section cross-reference Coverage verification tool for assessors Explicit citations only Annex-to-Section Mapping AISDP module(s): Cross-cutting The cross-reference index maps Annexes I through IX to the sections that address them. Annexes X through XIII (GPAI-specific) are not referenced as they address GPAI provider obligations rather than downstream AISDP preparation. See for the complete mapping table. Key outputs Per-Annex section cross-reference Annexes X–XIII excluded (GPAI-specific) AISDP Module-to-Section Mapping AISDP module(s): All modules The cross-reference index maps each of the twelve AISDP modules to the documentation sections that contribute content. This enables the Technical SME to identify, for any given module, all the domain sections that provide the underlying guidance and evidence requirements. See for the complete mapping table. Key outputs Per-module contributing section cross-reference Technical SME navigation tool --- ## Cross-Regulatory Instruments URL: https://docs.standardintelligence.com/cross-regulatory-instruments Breadcrumb: Resources › Glossary (Appendix C) › Cross-Regulatory Instruments Last updated: 28 Feb 2026 Cross-Regulatory Instruments AISDP module(s): Cross-cutting Key cross-regulatory instruments: CRA (Cyber Resilience Act, Regulation (EU) 2024/2847), DORA (Digital Operational Resilience Act, Regulation (EU) 2022/2554), DPIA (Data Protection Impact Assessment), GDPR (General Data Protection Regulation), NIS2 (Network and Information Security Directive, Directive (EU) 2022/2555), Directive 2019/1937 (Whistleblower Protection), and the AI Liability Directive proposal. Key outputs Cross-regulatory instrument definitions with regulation references --- ## Cybersecurity as Afterthought URL: https://docs.standardintelligence.com/cybersecurity-as-afterthought Breadcrumb: Resources › Templates › Common Pitfalls › Cybersecurity as Afterthought Last updated: 28 Feb 2026 Cybersecurity as Afterthought AISDP module(s): Module 9 Bolting security on as a final pre-deployment gate instead of embedding it from the outset through DevSecOps practices. Retroactive assessments find more problems, cost more to remediate, and delay deployment. Key outputs Pitfall: late-stage security addition Solution: DevSecOps from project inception --- ## Declaration of Conformity — Eight Points & Legal Significance URL: https://docs.standardintelligence.com/declaration-of-conformity-eight-points-and-legal Breadcrumb: Resources › Core Artefacts › Declaration of Conformity — Eight Points & Legal Significance Last updated: 28 Feb 2026 Declaration of Conformity — Eight Points & Legal Significance AISDP module(s): Cross-cutting Regulatory basis: Article 47, Annex V The Declaration of Conformity contains eight mandatory sections per Annex V: system identification, provider identification, sole responsibility statement, conformity statement listing all applicable legislation, data protection compliance, standards and specifications applied, notified body information, and signatory with date. The Declaration is a legally binding assertion; signing in the face of unresolved critical non-conformities exposes the signatory to personal liability and the organisation to penalties under Article 99. See for detailed treatment. Key outputs Eight mandatory Annex V sections Legal binding significance Pre-signature checklist requirement --- ## Decommissioning as Afterthought URL: https://docs.standardintelligence.com/decommissioning-as-afterthought Breadcrumb: Resources › Templates › Common Pitfalls › Decommissioning as Afterthought Last updated: 28 Feb 2026 Decommissioning as Afterthought AISDP module(s): Module 12 Treating end-of-life as an operational task with no governed process risks orphaned personal data, unrevoked credentials, unsupported deployers, and unmanaged ten-year obligations. The end-of-life process should be planned during architecture phase. Key outputs Pitfall: ungoverned decommission Solution: architecture-phase planning --- ## Delivery Timeline Diagram URL: https://docs.standardintelligence.com/delivery-timeline-diagram Breadcrumb: Resources › Architectural Diagrams › Delivery Timeline Diagram Last updated: 28 Feb 2026 Delivery Timeline Diagram AISDP module(s): Cross-cutting The seven-phase delivery Gantt chart shows the overlapping phases from Phase 1 (Discovery, Weeks 1–3) through Phase 7 ( Operational Monitoring , ongoing), with typical duration of 20–28 weeks from initiation to production deployment. Phase overlap is illustrated, showing how risk assessment informs architecture which informs development. Key outputs Seven-phase Gantt chart Phase overlap illustration 20–28 week typical timeline --- ## Document Terms URL: https://docs.standardintelligence.com/document-terms Breadcrumb: Resources › Glossary (Appendix C) › Document Terms Last updated: 28 Feb 2026 Document Terms AISDP module(s): Cross-cutting Key document terms: AISDP (AI System Documentation Package), CDR (Classification Decision Record), Declaration of Conformity, evidence pack, evidence register, Instructions for Use, model card, Non-Conformity Register, risk register, version quad, deployment ledger, cumulative baseline, compensating controls, and procedural alternative. Key outputs Document term definitions with section references --- ## Eleven Common Pitfalls URL: https://docs.standardintelligence.com/eleven-common-pitfalls Breadcrumb: Resources › Templates › Common Pitfalls Last updated: 28 Feb 2026 Retrospective Documentation Legal Document Syndrome Empty Evidence Pack Human Oversight as Checkbox Compliance-at-Deployment Cybersecurity as Afterthought Oversight Designed After Deployment Suppressed Escalation Scope Creep Without Reclassification Ignoring Cumulative Change Decommissioning as Afterthought --- ## Empty Evidence Pack URL: https://docs.standardintelligence.com/empty-evidence-pack Breadcrumb: Resources › Templates › Common Pitfalls › Empty Evidence Pack Last updated: 28 Feb 2026 Empty Evidence Pack AISDP module(s): Cross-cutting Producing an AISDP narrative without assembling the supporting evidence. Every material claim must trace to a specific, retrievable artefact. Evidence register gaps are flagged as non-conformities during assessment. Key outputs Pitfall: narrative without proof Solution: per-claim traceability to evidence artefacts --- ## End-of-Life Reference URL: https://docs.standardintelligence.com/end-of-life-reference Breadcrumb: Resources › End-of-Life Reference Last updated: 28 Feb 2026 Plan During Architecture Phase AISDP module(s): Module 12 Regulatory basis: Article 16 End-of-life planning begins during the system's design phase (Phase 3), not when the moment of decommission arrives. The AISDP documents the end-of-life process from the outset, including trigger criteria, deployer notification templates, data lifecycle closure procedures, and post-decommission obligations. The plan is refined as the system matures. See for detailed treatment. Key outputs Design-phase planning requirement AISDP documentation from outset Refinement through system maturity Seven Workstreams Summary AISDP module(s): Module 12 Regulatory basis: Articles 16, 18, 20 WS1: Deployer transition (notification, API deprecation, embedded system support). WS2: Technical shutdown (staged endpoint deactivation, model archival, credential revocation). WS3: Data lifecycle closure (per-category retention/deletion, GDPR reconciliation). WS4: Downstream decision monitoring (historical output impact). WS5: Documentation finalisation (final AISDP version). WS6: Archival (ten-year immutable storage). WS7: Regulatory notifications ( EU database , deployers, competent authority). See for detailed treatment. Key outputs Seven workstreams with cross-references Parallel execution for mandated withdrawals Sequential execution for planned retirements Post-Decommission Obligations Summary AISDP module(s): Module 12 Regulatory basis: Articles 18, 72, 73 Five continuing obligations: ten-year document retention with annual accessibility verification, GDPR data subject rights response capability, post-withdrawal serious incident reporting , historical PMM data analysis capability, and downstream decision monitoring where historical outputs continue affecting individuals. Named owners and expiry dates documented in the post-decommission monitoring schedule. Key outputs Five continuing obligations Named owners and expiry dates Annual review by AI Governance Lead --- ## Evidence & Document Management URL: https://docs.standardintelligence.com/evidence-and-document-management Breadcrumb: Resources › Technical Infrastructure › Evidence & Document Management Last updated: 28 Feb 2026 Evidence & Document Management AISDP module(s): Cross-cutting Regulatory basis: Article 17 , Article 18 Four management components: evidence repository (SharePoint/Confluence/GitLab with version control and access controls), Non-Conformity Register tracking (Jira/ServiceNow with severity, owner, remediation plan, resolution), GRC platform (Credo AI/Holistic AI/OneTrust for compliance orchestration), and LMS (Docebo/TalentLMS/Moodle for AI literacy tracking). See and for detailed treatment. Key outputs Four management components Tooling options per component Version control and access controls throughout --- ## Evidence Pack — Traceability & Currency Requirements URL: https://docs.standardintelligence.com/evidence-pack-traceability-and-currency-requirements Breadcrumb: Resources › Core Artefacts › Evidence Pack — Traceability & Currency Requirements Last updated: 28 Feb 2026 Evidence Pack — Traceability & Currency Requirements AISDP module(s): Cross-cutting Regulatory basis: Annex IV , Annex VI Every material claim in the AISDP must trace to a specific, retrievable artefact in the evidence pack . The evidence register catalogues each artefact with a unique identifier, description, responsible owner, creation date, storage location, and the AISDP claims it supports. Currency is maintained through the CI/CD pipeline 's automated evidence generation and the feedback loop's traceable documentation. An AISDP without its evidence pack is a narrative without proof. Key outputs Per-claim traceability to evidence artefacts Evidence register with currency tracking Automated evidence generation via CI/CD --- ## Explainability Summary URL: https://docs.standardintelligence.com/explainability-summary Breadcrumb: Resources › Technical Infrastructure › Explainability Summary Last updated: 28 Feb 2026 Explainability Summary AISDP module(s): Module 3 , Module 8 Regulatory basis: Article 13 , Article 86 Model-agnostic methods: SHAP (feature attribution), LIME (local surrogate models). Model-specific methods: GradCAM (vision models), attention weights (transformer models). Article 86 right to explanation requires that affected persons receive meaningful information about AI involvement in decisions affecting them. The explanation methodology, scope, and limitations are documented in Module 3; the delivery mechanism is documented in Module 8. See for the detailed treatment. Key outputs SHAP, LIME, GradCAM, attention weights Article 86 right to explanation Methodology, scope, and limitations documented --- ## Fairness & Bias Tooling Summary URL: https://docs.standardintelligence.com/fairness-and-bias-tooling-summary Breadcrumb: Resources › Technical Infrastructure › Fairness & Bias Tooling Summary Last updated: 28 Feb 2026 Fairness & Bias Tooling Summary AISDP module(s): Module 4 , Module 5 , Module 12 Regulatory basis: Article 10 Four-stage fairness tooling: pre-training (data distribution analysis, representation assessment, proxy variable detection), post-training (Fairlearn MetricFrame/Aequitas for disaggregated metrics), production (continuous fairness metric computation, weekly or monthly), and missing demographic data (proxy estimation, deployer surveys, external benchmarks). See (pre-deployment) and (production) for detailed treatment. Key outputs Four-stage fairness assessment Fairlearn/Aequitas tooling Missing demographic data strategies --- ## Feedback Loop Diagram URL: https://docs.standardintelligence.com/feedback-loop-diagram Breadcrumb: Resources › Architectural Diagrams › Feedback Loop Diagram Last updated: 28 Feb 2026 Feedback Loop Diagram AISDP module(s): Module 12 The PMM feedback loop diagram shows the cycle from PMM finding through decision authority (with four-tier branching), engineering implementation, validation gate confirmation, AISDP update, and evidence pack update, returning to continuous monitoring. The diagram illustrates the closed-loop principle that findings must produce actions. Key outputs Closed-loop cycle visual representation Four-tier decision authority branching Module 12 documentation --- ## FRIA Report — Scope & Separation Requirement URL: https://docs.standardintelligence.com/fria-report-scope-and-separation-requirement Breadcrumb: Resources › Core Artefacts › FRIA Report — Scope & Separation Requirement Last updated: 28 Feb 2026 FRIA Report — Scope & Separation Requirement AISDP module(s): Module 11 Regulatory basis: Article 27 The FRIA examines the impact on all potentially affected EU Charter rights, with particular attention to intersectional effects. The FRIA is distinct from the DPIA (which addresses data protection risks under GDPR Article 35); the two assessments may share evidence but must reach independent conclusions. The FRIA is conducted by the DPO Liaison, consulted with stakeholders (deployers, affected person representatives, domain experts), and retained as Module 11 evidence. See for the detailed methodology. Key outputs Per-Charter-right impact analysis Separation from DPIA Stakeholder consultation requirement --- ## Roles URL: https://docs.standardintelligence.com/glossary--roles Breadcrumb: Resources › Glossary (Appendix C) › Roles Last updated: 28 Feb 2026 Roles AISDP module(s): Cross-cutting Ten governance and technical role definitions: AI Governance Lead, AI System Assessor, Business Owner, Classification Reviewer, Conformity Assessment Coordinator, DPO Liaison, Internal Audit Assurance Lead, Legal and Regulatory Advisor, Technical Owner, and Technical SME. Each definition includes the role's accountability, authority, and the section reference for the detailed description. Key outputs Per-role accountability and authority summary --- ## Glossary URL: https://docs.standardintelligence.com/glossary Breadcrumb: Resources › Glossary (Appendix C) Last updated: 28 Feb 2026 Regulatory Terms Document Terms Roles Cross-Regulatory Instruments Standards & Bodies Technical Terms Tools Abbreviations --- ## Governance Structure URL: https://docs.standardintelligence.com/governance-structure Breadcrumb: Resources › Governance Structure Last updated: 28 Feb 2026 Seven Roles Summary & Multi-Role Assignment AISDP module(s): Cross-cutting Regulatory basis: Article 17 Seven functional governance roles thread through every domain. AI Governance Lead : ultimate compliance accountability, AISDP approval, Declaration of Conformity signatory, competent authority relationship. AI System Assessor : risk identification, classification analysis, conformity assessment , independence from the development team. Conformity Assessment Coordinator : end-to-end certification workflow, Non-Conformity Register , EU database registration . Legal and Regulatory Advisor: legal sufficiency, regulatory interpretation, Declaration review, insurance coverage. DPO Liaison: FRIA oversight, data governance review, GDPR -AI Act alignment (Articles 56–62). Internal Audit Assurance Lead: independent assurance, periodic audits, audit committee reporting. Technical SME: engineering expertise, AISDP technical content, monitoring configuration. In smaller organisations, one person may hold multiple roles. The AI System Assessor must not also serve as AI Governance Lead (independence requirement). The DPO Liaison and Legal and Regulatory Advisor roles can combine where the individual has both data protection and regulatory competence. Multi-role assignments are documented in the QMS with a rationale confirming that independence and capacity are maintained. Key outputs Seven functional roles with defined accountability Multi-role assignment rules and independence constraints QMS documentation of role assignments Governance Cadence (Sprint, Monthly, Quarterly, Annual) AISDP module(s): Cross-cutting Regulatory basis: Article 17 Four governance rhythms operate concurrently. Sprint-level: compliance tasks embedded in each development sprint; AISDP modules updated; evidence pack maintained; sprint retrospective includes compliance dimension. Monthly: PMM reports prepared by PMM analyst, reviewed by Technical SME; deployer feedback aggregated; non-conformity register status checked. Quarterly: AI Governance Lead convenes oversight review (six agenda items, ) and PMM review (eight agenda items, ); threshold calibration reviewed; board reporting prepared. Annual: Internal Audit Assurance Lead conducts oversight audit (six verification areas, ); break-glass exercise conducted; AI literacy refresher training delivered; external audit commissioned where applicable. Key outputs Four-cadence governance rhythm documented Per-cadence activities, owners, and artefacts defined Cross-references to detailed articles Decision Authority Framework (Four Tiers) AISDP module(s): Cross-cutting Regulatory basis: Article 14 , Article 72 Four decision authority tiers govern PMM-triggered and oversight-triggered actions. Tier 1 (Technical SME): threshold adjustments, monitoring configuration changes, routine engineering remediation. Tier 2 (Technical Owner): model retraining on updated data where all validation gates pass; notice to AI Governance Lead. Tier 3 (AI Governance Lead): architecture changes, feature set changes, hyperparameter shifts; substantial modification assessment triggered. Tier 4 (AI Governance Lead + Legal and Regulatory Advisor): system suspension, withdrawal, recall; immediate deployer notification; potential serious incident reporting . Key outputs Four-tier authority matrix Per-tier scope, authoriser, and notification requirements Cross-reference to --- ## Human Oversight as Checkbox URL: https://docs.standardintelligence.com/human-oversight-as-checkbox Breadcrumb: Resources › Templates › Common Pitfalls › Human Oversight as Checkbox Last updated: 28 Feb 2026 Human Oversight as Checkbox AISDP module(s): Module 7 Documenting that human oversight "exists" without designing the operational reality: interface, training, override capability, workload management, automation bias countermeasures, and escalation pathways. Article 14 requires operational design. Key outputs Pitfall: policy statement instead of operational design Solution: full operational oversight implementation --- ## Human Oversight Framework Reference URL: https://docs.standardintelligence.com/human-oversight-framework-reference Breadcrumb: Resources › Human Oversight Framework Reference Last updated: 28 Feb 2026 Six-Level Pyramid Summary AISDP module(s): Module 7 Regulatory basis: Article 14 Level 1: Technical Monitoring (engineering team, continuous automated monitoring, emergency rollback authority). Level 2: AI System Operators (human oversight of outputs, override capability, escalation). Level 3: Product Management (intent alignment, deployer satisfaction, drift detection). Level 4: Compliance, Legal, Data Protection (regulatory monitoring, legal assessment, GDPR oversight). Level 5: Executive Leadership (strategic oversight, resource allocation, risk appetite). Level 6: External Oversight (competent authorities, notified bodies , auditors). See for detailed treatment. Key outputs Six levels with distinct responsibilities Escalation flows between levels Per-level escalation triggers AI Literacy Programme Summary AISDP module(s): Module 7 Regulatory basis: Article 4 Five-tier programme aligned to the oversight pyramid : Level 1 (deep technical), Level 2 (hands-on system-specific with calibration exercises), Level 3 (compliance-business metric integration), Level 4 (EU AI Act and GDPR legal framework), Level 5 (strategic executive briefings). Training cadence: initial, annual refresher, event-triggered. LMS tracking with certification as prerequisite for system operation. See for detailed treatment. Key outputs Five tiers aligned to oversight pyramid Three-cadence delivery model LMS tracking and operator certification Escalation Without Reprisal Summary AISDP module(s): Module 7 Regulatory basis: Directive 2019/1937, Article 14 Four reporting channels (confidential, anonymous, internal audit, external NCA). Directive 2019/1937 whistleblower protection extended to AI compliance concerns. Cultural reinforcement through leadership acknowledgement, positive performance evaluation, and regular training. Documented response to every escalation. Annual audit verification of non-retaliation framework. See for detailed treatment. Key outputs Four reporting channels Whistleblower protection extension Annual verification Fatigue Countermeasures Summary AISDP module(s): Module 7 Regulatory basis: Article 14 Three countermeasures: personnel rotation on 6–12 month cycles, quarterly threshold drift checks comparing operational thresholds against AISDP values, and "fresh eyes" reviews by non-operational personnel. These address normalisation of deviance, informal threshold relaxation, and systemic issue blindness. Key outputs Three countermeasures Normalisation of deviance addressed Cross-references to detailed articles --- ## Ignoring Cumulative Change URL: https://docs.standardintelligence.com/ignoring-cumulative-change Breadcrumb: Resources › Templates › Common Pitfalls › Ignoring Cumulative Change Last updated: 28 Feb 2026 Ignoring Cumulative Change AISDP module(s): Module 10 Making individually sub-threshold changes that collectively constitute a substantial modification . Each change passes automated gates, but the aggregate effect alters behaviour significantly. Cumulative baseline tracking is the control. Key outputs Pitfall: aggregate sub-threshold changes crossing modification threshold Solution: cumulative baseline tracking --- ## Incident Response Diagram URL: https://docs.standardintelligence.com/incident-response-diagram Breadcrumb: Resources › Architectural Diagrams › Incident Response Diagram Last updated: 28 Feb 2026 Incident Response Diagram AISDP module(s): Module 12 The serious incident response flowchart traces the pathway from event detection through triage, severity classification (2/10/15-day tiers), evidence preservation, break-glass activation where harm is continuing, initial report preparation, legal review, governance authorisation, submission, investigation, and supplementary reporting. The diagram is included in Module 12 and the incident response plan . Key outputs End-to-end incident response flowchart Severity tier colour coding Module 12 and incident response plan --- ## L1 Awareness URL: https://docs.standardintelligence.com/l1-awareness Breadcrumb: Resources › Templates › Maturity Model › L1 Awareness Last updated: 28 Feb 2026 L1 Awareness AISDP module(s): Cross-cutting The organisation is aware of the EU AI Act. AI systems are identified but not classified. No AISDP, no formal risk assessment , no governance roles assigned. Immediate priorities: discovery and classification (Phase 1), governance role appointments, risk assessment for highest-risk systems. Key outputs Maturity Level 1 characteristics Immediate priorities identified --- ## L2 Foundational URL: https://docs.standardintelligence.com/l2-foundational Breadcrumb: Resources › Templates › Maturity Model › L2 Foundational Last updated: 28 Feb 2026 L2 Foundational AISDP module(s): Cross-cutting Systems classified; governance roles assigned; AISDP preparation begun for highest-risk systems; basic version control and CI/CD exist for software but not data or models. Immediate priorities: data version control, model registry , model validation gate s in CI/CD, data governance framework. Key outputs Maturity Level 2 characteristics Data and model version control as priority --- ## L3 Structured URL: https://docs.standardintelligence.com/l3-structured Breadcrumb: Resources › Templates › Maturity Model › L3 Structured Last updated: 28 Feb 2026 L3 Structured AISDP module(s): Cross-cutting AISDPs under preparation for all high-risk systems; version control covers code, data, and models; CI/CD includes fairness and robustness gates; cybersecurity threat model exists; internal assessment framework defined. Immediate priorities: PMM system, serious incident reporting , human oversight interface, conformity assessment preparation. Key outputs Maturity Level 3 characteristics PMM and oversight implementation as priority --- ## L4 Operational URL: https://docs.standardintelligence.com/l4-operational Breadcrumb: Resources › Templates › Maturity Model › L4 Operational Last updated: 28 Feb 2026 L4 Operational AISDP module(s): Cross-cutting Conformity assessment s completed; Declarations of Conformity signed; high-risk systems registered; PMM producing data; operators trained; incident response in place. Immediate priorities: feedback loop operationalisation, oversight culture, inspection readiness , extend programme to medium-risk and new systems. This is the minimum target for August 2026. Key outputs Maturity Level 4 characteristics August 2026 minimum target Feedback loop and culture as next priorities --- ## L5 Optimising URL: https://docs.standardintelligence.com/l5-optimising Breadcrumb: Resources › Templates › Maturity Model › L5 Optimising Last updated: 28 Feb 2026 L5 Optimising AISDP module(s): Cross-cutting Compliance is a natural byproduct of engineering and governance workflow; evidence generated automatically; feedback loop operates continuously; regulatory interactions constructive. Priorities: continuous improvement, threshold refinement from operational experience, harmonised standard incorporation, adaptation to regulatory developments. This level represents embedded compliance maturity. Key outputs Maturity Level 5 characteristics Compliance as natural engineering byproduct Continuous improvement focus --- ## Legal Document Syndrome URL: https://docs.standardintelligence.com/legal-document-syndrome Breadcrumb: Resources › Templates › Common Pitfalls › Legal Document Syndrome Last updated: 28 Feb 2026 Legal Document Syndrome AISDP module(s): Cross-cutting Treating the AISDP as a legal document (vague, hedged, written to minimise exposure) when it should be a technically precise record verifiable against source artefacts. Hedging and vagueness invite deeper scrutiny from competent authorities and notified bodies . Key outputs Pitfall: legal hedging in technical documentation Solution: technically precise, verifiable claims --- ## LLM / Generative AI Tooling Summary URL: https://docs.standardintelligence.com/llm-generative-ai-tooling-summary Breadcrumb: Resources › Technical Infrastructure › LLM / Generative AI Tooling Summary Last updated: 28 Feb 2026 LLM / Generative AI Tooling Summary AISDP module(s): Module 12 Regulatory basis: Article 72 Five monitoring domains for LLM/generative AI systems: hallucination detection (NLI entailment scoring, citation verification, consistency checking; RAGAS/Trulens for RAG systems), safety monitoring (Lakera Guard for prompt injection /PII/toxicity, NeMo Guardrails for topic boundaries, Llama Guard for safety classification), prompt/response distribution monitoring (BERTopic embedding clustering, output characteristic tracking), human evaluation programme (Argilla/Label Studio/Prodigy; 100–500 outputs weekly with structured rubric), and annotation quality (inter-annotator agreement measurement). See for detailed treatment. Key outputs Five LLM monitoring domains RAGAS, Trulens, Lakera Guard, NeMo Guardrails tooling Human evaluation programme with structured rubric --- ## Maturity Model URL: https://docs.standardintelligence.com/maturity-model Breadcrumb: Resources › Templates › Maturity Model Last updated: 28 Feb 2026 L1 Awareness L2 Foundational L3 Structured L4 Operational L5 Optimising Target: Level 4 Before August 2026 --- ## Module 1: System Description & Intended Purpose URL: https://docs.standardintelligence.com/module-1-system-description-and-intended-purpose Breadcrumb: Resources › Core Artefacts › Module 1: System Description & Intended Purpose Last updated: 28 Feb 2026 Module 1: System Description & Intended Purpose AISDP module(s): Module 1 Regulatory basis: Annex IV (1) Module 1 identifies the system (name, version, provider, intended purpose, deployment context, affected persons, risk classification ) and establishes the scope within which all other modules operate. It includes the version quad, the Statement of Business Intent, and the intended conditions of use. Module 1 is the first module populated during Phase 1 and the module updated last during decommission . Key outputs System identification and intended purpose Version quad and deployment context Scope boundary for all subsequent modules --- ## Module 10: Version Control & Change Management URL: https://docs.standardintelligence.com/module-10-version-control-and-change-management Breadcrumb: Resources › Core Artefacts › Module 10: Version Control & Change Management Last updated: 28 Feb 2026 Module 10: Version Control & Change Management AISDP module(s): Module 10 Regulatory basis: Annex IV (2)(b), Article 12 Module 10 records the versioning scheme (composite version quad, ), current version identifiers, complete change log, substantial modification assessment for each change, cumulative baseline tracking, and third-party component versions. Key outputs Version quad and change log Substantial modification assessments Cumulative baseline tracking --- ## Module 11: Fundamental Rights Impact Assessment URL: https://docs.standardintelligence.com/module-11-fundamental-rights-impact-assessment Breadcrumb: Resources › Core Artefacts › Module 11: Fundamental Rights Impact Assessment Last updated: 28 Feb 2026 Module 11: Fundamental Rights Impact Assessment AISDP module(s): Module 11 Regulatory basis: Article 27 Module 11 contains the FRIA methodology, Charter rights assessed with impact analysis, affected population identification, impact severity assessment, mitigation measures, Article 27(4) notification to the market surveillance authority where required, and deployer FRIA guidance. The DPO Liaison drafts; the AI System Assessor reviews. See for the detailed FRIA treatment. Key outputs FRIA per Charter right with impact severity Mitigation measures and stakeholder engagement Article 27(4) notification --- ## Module 12: Post-Market Monitoring & Change History URL: https://docs.standardintelligence.com/module-12-post-market-monitoring-and-change-history Breadcrumb: Resources › Core Artefacts › Module 12: Post-Market Monitoring & Change History Last updated: 28 Feb 2026 Module 12: Post-Market Monitoring & Change History AISDP module(s): Module 12 Regulatory basis: Article 72 , Annex IV (2)(g) Module 12 contains the PMM plan , monitoring dashboard configuration, feedback loop documentation, serious incident reporting procedures, the complete change history linked to version control , and the end-of-life plan . Key outputs PMM plan with five monitoring dimensions Serious incident reporting procedures Change history and end-of-life plan --- ## Module 2: Development Process URL: https://docs.standardintelligence.com/module-2-development-process Breadcrumb: Resources › Core Artefacts › Module 2: Development Process Last updated: 28 Feb 2026 Module 2: Development Process AISDP module(s): Module 2 Regulatory basis: Annex IV (2)(a) Module 2 documents the development methodology, coding standards, testing requirements, and review processes. It captures the CI/CD pipeline design, the version control strategy, and the compliance gate configuration. The Technical SME drafts Module 2; the AI Governance Lead approves. Key outputs Development methodology and coding standards CI/CD pipeline and compliance gates Version control strategy documentation --- ## Module 3: Model Documentation URL: https://docs.standardintelligence.com/module-3-model-documentation Breadcrumb: Resources › Core Artefacts › Module 3: Model Documentation Last updated: 28 Feb 2026 Module 3: Model Documentation AISDP module(s): Module 3 Regulatory basis: Annex IV (2)(b)–(e) Module 3 provides the complete model description: architecture, training methodology, feature engineering, performance metrics, explainability approach, architectural diagrams (C4 model), and known failure modes. It includes the Model Selection Record and the model card . The Technical SME drafts; evidence sources include training configuration records, benchmark reports, and the explainability documentation. Key outputs Complete model technical description Model Selection Record and model card Performance metrics with confidence intervals --- ## Module 4: Data Governance & Dataset Documentation URL: https://docs.standardintelligence.com/module-4-data-governance-and-dataset-documentation Breadcrumb: Resources › Core Artefacts › Module 4: Data Governance & Dataset Documentation Last updated: 28 Feb 2026 Module 4: Data Governance & Dataset Documentation AISDP module(s): Module 4 Regulatory basis: Article 10 , Annex IV (2)(d), (f) Module 4 documents training data (source, size, period, coverage), collection methodology, preparation steps, bias assessment, data quality metrics per ISO/IEC 25012, special category data processing ( Article 10(5) ), data lineage , validation and test datasets, and retention policy. The Technical SME drafts; the DPO Liaison reviews data protection aspects. See for the detailed data governance treatment. Key outputs Training data description and provenance Bias assessment methodology and results Data lineage and retention policy --- ## Module 5: Testing & Validation URL: https://docs.standardintelligence.com/module-5-testing-and-validation Breadcrumb: Resources › Core Artefacts › Module 5: Testing & Validation Last updated: 28 Feb 2026 Module 5: Testing & Validation AISDP module(s): Module 5 Regulatory basis: Annex IV (2)(e), Article 9 (7) Module 5 documents the test strategy, unit and integration tests, performance benchmarks, fairness testing, robustness testing, pre-deployment validation, and any independent verification. Evidence sources include CI/CD pipeline reports, benchmark reports, fairness test results, and validation reports. See for CI/CD pipeline and validation gate details. Key outputs Test strategy and coverage targets Fairness and robustness testing results Pre-deployment validation against production-representative data --- ## Module 6: Risk Management System URL: https://docs.standardintelligence.com/module-6-risk-management-system Breadcrumb: Resources › Core Artefacts › Module 6: Risk Management System Last updated: 28 Feb 2026 Module 6: Risk Management System AISDP module(s): Module 6 Regulatory basis: Article 9 , Annex IV (2)(g) Module 6 documents the risk identification methodology (five methods, ), the complete risk register, scoring methodology, residual risk assessment, risk acceptance records signed by the AI Governance Lead , reputational risk assessment (five dimensions, ), and continuous risk monitoring parameters feeding into PMM. The AI System Assessor drafts; the AI Governance Lead approves risk acceptance. Key outputs Five-method risk identification Risk register with residual risk acceptance Reputational risk assessment --- ## Module 7: Human Oversight URL: https://docs.standardintelligence.com/module-7-human-oversight Breadcrumb: Resources › Core Artefacts › Module 7: Human Oversight Last updated: 28 Feb 2026 Module 7: Human Oversight AISDP module(s): Module 7 Regulatory basis: Article 14 , Annex IV (3) Module 7 documents the oversight architecture (six-level pyramid, ), operator interface design (override mechanisms, explanation displays, confidence indicators), break-glass procedures , deployer oversight model, AI literacy programme, anti-automation-bias measures, and override monitoring configuration. Article 14(3) distinguishes between measures built into the system by the provider before placing on the market (Article 14(3)(a)), such as technical controls, mandatory review workflows, and confidence thresholds, and measures identified by the provider as appropriate to be implemented by the deployer (Article 14(3)(b)), such as operator training requirements, workload limits, and escalation procedures. Module 7 documents both categories. The Technical SME drafts; the AI Governance Lead approves. Key outputs Six-level oversight pyramid Break-glass procedures and operator interface AI literacy programme and override monitoring --- ## Module 8: Transparency & User Information URL: https://docs.standardintelligence.com/module-8-transparency-and-user-information Breadcrumb: Resources › Core Artefacts › Module 8: Transparency & User Information Last updated: 28 Feb 2026 Module 8: Transparency & User Information AISDP module(s): Module 8 Regulatory basis: Article 13 , Annex IV(3), Article 47 Module 8 contains the Instructions for Use, deployer guidance, explanation methodology, accessibility compliance (WCAG 2.1 Level AA), affected person notification templates and process, and a reference to the signed Declaration of Conformity . Key outputs Instructions for Use per Article 13(3) Explanation methodology for operators and affected persons Accessibility and notification documentation --- ## Module 9: Robustness & Cybersecurity URL: https://docs.standardintelligence.com/module-9-robustness-and-cybersecurity Breadcrumb: Resources › Core Artefacts › Module 9: Robustness & Cybersecurity Last updated: 28 Feb 2026 Module 9: Robustness & Cybersecurity AISDP module(s): Module 9 Regulatory basis: Article 15 , Annex IV (2)(e) Module 9 documents the AI-specific threat assessment, cybersecurity controls, ISO/IEC 27001 status, penetration testing results, adversarial testing programme, cross-regulatory mapping ( CRA , NIS2 , DORA ; Articles 348–355), and the incident response plan . Article 15(4) requires cybersecurity solutions "appropriate to the relevant circumstances and the risks"; the measures documented in Module 9 are therefore calibrated to the system's specific threat profile rather than applied as a uniform checklist. Key outputs AI threat assessment and cybersecurity controls Penetration and adversarial testing results Cross-regulatory mapping and incident response --- ## Monitoring Infrastructure Summary URL: https://docs.standardintelligence.com/monitoring-infrastructure-summary Breadcrumb: Resources › Technical Infrastructure › Monitoring Infrastructure Summary Last updated: 28 Feb 2026 Monitoring Infrastructure Summary AISDP module(s): Module 12 Regulatory basis: Article 72 Five-layer monitoring infrastructure: data collection layer (asynchronous streaming via Kafka/Kinesis/Pub-Sub), storage layer (time-series database with tiered retention), computation layer (scheduled metric computation, idempotent and deterministic), alerting layer (PagerDuty/Opsgenie with severity routing), and dashboard layer (Grafana/Metabase for operational and governance views). See for detailed treatment. Key outputs Five infrastructure layers Tooling options per layer Tiered storage strategy --- ## Oversight Designed After Deployment URL: https://docs.standardintelligence.com/oversight-designed-after-deployment Breadcrumb: Resources › Templates › Common Pitfalls › Oversight Designed After Deployment Last updated: 28 Feb 2026 Oversight Designed After Deployment AISDP module(s): Module 7 Building the operational oversight framework after the system is live, when operators need to be trained, interfaces tested, escalation pathways rehearsed, and break-glass procedures validated before the system affects real people. Key outputs Pitfall: post-deployment oversight design Solution: oversight designed during architecture phase --- ## Oversight Pyramid Diagram URL: https://docs.standardintelligence.com/oversight-pyramid-diagram Breadcrumb: Resources › Architectural Diagrams › Oversight Pyramid Diagram Last updated: 28 Feb 2026 Oversight Pyramid Diagram AISDP module(s): Module 7 The six-level oversight pyramid diagram shows the hierarchy from Level 1 (Technical Monitoring) through Level 6 (External Oversight), with escalation flows between levels. The diagram is included in Module 7 and used during AI literacy training. Key outputs Six-level pyramid visual representation Escalation flow arrows Module 7 and training material --- ## PMM Plan — Five Dimensions & Proportionality URL: https://docs.standardintelligence.com/pmm-plan-five-dimensions-and-proportionality Breadcrumb: Resources › Core Artefacts › PMM Plan — Five Dimensions & Proportionality Last updated: 28 Feb 2026 PMM Plan — Five Dimensions & Proportionality AISDP module(s): Module 12 Regulatory basis: Article 72 The PMM plan monitors five dimensions: performance, fairness, data drift , operational health, and human oversight effectiveness. Monitoring intensity is proportionate to the system's risk profile, deployment scale, and affected population sensitivity. The plan documents data collection strategy, analysis methodology, threshold framework, escalation procedures, and the feedback loop. Key outputs Five monitoring dimensions Proportionate intensity calibration Threshold framework and escalation procedures --- ## Production Monitoring Reference URL: https://docs.standardintelligence.com/production-monitoring-reference Breadcrumb: Resources › Production Monitoring Reference Last updated: 28 Feb 2026 Five Monitoring Dimensions Summary AISDP module(s): Module 12 Regulatory basis: Article 72 Performance (accuracy metrics, ground truth handling, disaggregated and temporal analysis; ). Fairness (selection rate ratios, equalised odds, intersectional computation, missing demographic data strategies; ). Data drift (input drift PSI/KS/JS, concept drift, per-feature drift; ). Operational health (availability, latency, error rates, resource utilisation, dependency health; ). Human oversight (override rates, review times, escalation monitoring, automation bias detection, operator wellbeing; ). Key outputs Five dimensions with cross-references to detailed articles Per-dimension key metrics identified Three Severity Tiers Summary AISDP module(s): Module 12 Regulatory basis: Article 72 Informational: metric shifted within tolerance band; logged, reviewed at next scheduled meeting; no immediate action. Warning: breached warning threshold; Technical SME review within five working days; root cause analysis initiated; auto-escalation after five days if unresolved. Critical: breached compliance threshold or fundamental rights concern; immediate investigation; AI Governance Lead notified within 24 hours; break-glass considered; serious incident reporting assessed. Key outputs Three tiers with response timeframes Auto-escalation for unresolved warnings Critical tier links to serious incident reporting Serious Incident Reporting Timelines Summary AISDP module(s): Module 12 Regulatory basis: Article 73 Two days from awareness: widespread fundamental rights infringement or serious irreversible critical infrastructure disruption. Ten days from awareness: death or suspected causal link to death. Fifteen days from awareness: all other serious incidents meeting Article 3(49). Initial incomplete reports permitted with supplementary follow-up ( Article 73(5) ). See for detailed treatment. Key outputs 2/10/15-day tiered reporting timelines "Awareness" includes causal link establishment Initial incomplete report permitted Feedback Loop Metrics Summary AISDP module(s): Module 12 Regulatory basis: Article 72 Four meta-metrics: time from PMM finding to decision, time from decision to completed fix, share of findings resulting in system change versus accepted within tolerance, and share of fixes successfully resolving the originating finding. Reported quarterly at PMM review. Sustained deterioration triggers process review. See for detailed treatment. Key outputs Four feedback loop meta-metrics Quarterly reporting cadence Deterioration triggers process review --- ## Readiness Assessment Checklist URL: https://docs.standardintelligence.com/readiness-assessment-checklist Breadcrumb: Resources › Templates › Readiness Assessment Last updated: 28 Feb 2026 Governance Readiness AISDP module(s): Cross-cutting Regulatory basis: Article 17 All ten governance and technical roles appointed: AI Governance Lead with sufficient authority, AI System Assessor (s) with independence from development, Technical SME, Technical Owner, Business Owner, Conformity Assessment Coordinator, Legal and Regulatory Advisor, DPO Liaison, Internal Audit Assurance Lead, and Classification Reviewer with independence from the Assessor. Roles documented and communicated. Multi-role assignments justified and approved. Key outputs Ten roles appointed and documented Independence requirements satisfied Communication to organisation completed Classification Readiness AISDP module(s): Module 1 Regulatory basis: Article 6 Complete inventory of AI systems in the organisation's portfolio. Classification Decision Record produced for each system. Risk tier determination reviewed and approved by the AI Governance Lead. Systems requiring no AISDP (minimal risk) documented with rationale. Systems requiring AISDP prioritised for preparation. Key outputs Complete AI system inventory Per-system CDR produced and approved Portfolio prioritisation for AISDP preparation Infrastructure Readiness AISDP module(s): Module 2 , Module 10 Regulatory basis: Article 12 Version control system operational for code, data, and model artefacts. Model registry deployed and integrated with CI/CD pipeline . CI/CD pipeline includes model validation gate s (performance, fairness, robustness). Monitoring infrastructure capable of collecting and analysing production data. Document management system with version control and access controls. Each missing item represents a workstream the AI Governance Lead initiates in parallel with AISDP preparation. Key outputs Five infrastructure prerequisites assessed Missing items trigger parallel workstreams Foundation for automated evidence generation Process Readiness AISDP module(s): Cross-cutting Regulatory basis: Article 17 Data governance framework documented (quality standards, lineage tracking, bias assessment methodology). Development methodology documented (coding standards, testing requirements, review processes). Incident response plan drafted with roles assigned. PMM plan drafted with metrics, thresholds, and escalation procedures. End-of-life process defined with responsibilities assigned for post- decommission obligations. Key outputs Five process prerequisites assessed End-of-life process included in readiness Missing items trigger parallel workstreams Knowledge Readiness AISDP module(s): Module 7 Regulatory basis: Article 4 Key personnel have received AI Act training appropriate to their roles. The engineering team understands the compliance implications of their work. The legal team understands the technical architecture and its compliance dimensions. Cross-functional literacy ensures that governance gates function as designed. Key outputs Role-appropriate AI Act training completed Cross-functional understanding established Foundation for effective governance gates --- ## Regulatory Interaction Reference URL: https://docs.standardintelligence.com/regulatory-interaction-reference Breadcrumb: Resources › Regulatory Interaction Reference Last updated: 28 Feb 2026 EU Database Registration Summary AISDP module(s): Cross-cutting Regulatory basis: Articles 49, 71, Annex VIII EU database registration is completed before the system is placed on the market. Annex VIII defines the required information fields: provider identification, system identification, classification basis, conformity assessment route, status, and version details. The registration is maintained and updated when the system status changes (modification, suspension, withdrawal). See for detailed treatment. Key outputs Pre-market registration requirement Annex VIII information fields Ongoing maintenance obligation CE Marking Summary AISDP module(s): Cross-cutting Regulatory basis: Article 48 CE marking is affixed after conformity assessment confirms compliance and the Declaration of Conformity is signed. For AI systems with a physical product, the marking appears on the product. For software-only systems, the marking appears in the documentation, packaging, or accompanying materials. The CE marking signifies that the system conforms to all applicable EU AI Act requirements. Key outputs Post-conformity-assessment CE marking Physical product vs software-only placement Conformity signification Inspection Readiness Summary AISDP module(s): Cross-cutting Regulatory basis: Article 74 Inspection readiness comprises the pre-configured regulatory access IAM role, the 30-minute documentation production drill, designated contact persons for each competent authority, the inspection log tracking all regulatory interactions, and annual readiness exercises. See for detailed treatment. Key outputs Pre-configured regulatory access role 30-minute documentation drill Annual readiness exercises Multi-Jurisdiction Checklist Summary AISDP module(s): Cross-cutting Regulatory basis: Articles 70, 74 Multi-jurisdiction deployment requires: identifying the lead market surveillance authority per member state, mapping per-jurisdiction FRIA notification requirements, preparing translations of the Declaration of Conformity and affected person notifications, establishing per-jurisdiction Legal and Regulatory Advisor contacts, and maintaining a jurisdiction register documenting deployment status per member state. See for detailed treatment. Key outputs Per-jurisdiction authority identification Translation requirements Jurisdiction register --- ## Regulatory Terms URL: https://docs.standardintelligence.com/regulatory-terms Breadcrumb: Resources › Glossary (Appendix C) › Regulatory Terms Last updated: 28 Feb 2026 Regulatory Terms AISDP module(s): Cross-cutting Key regulatory terms defined in the glossary: AI system (Article 3(1)), high-risk AI system (Article 6), provider (Article 3(3)), deployer (Article 3(4)), affected person, intended purpose (Article 3(12)), reasonably foreseeable misuse, placing on the market (Article 3(9)), putting into service (Article 3(11)), serious incident (Article 3(49)), substantial modification (Article 3(23)), recall (Article 3(16)), and withdrawal (Article 3(17)). Key outputs Regulatory term definitions with Article references --- ## Retrospective Documentation URL: https://docs.standardintelligence.com/retrospective-documentation Breadcrumb: Resources › Templates › Common Pitfalls › Retrospective Documentation Last updated: 28 Feb 2026 Retrospective Documentation AISDP module(s): Cross-cutting Attempting to reconstruct the development process from memory after the system is built produces documentation that is inaccurate, incomplete, and obviously post-hoc to any experienced reviewer. The solution is to generate documentation as a byproduct of the engineering workflow through CI/CD pipeline automated evidence generation. Key outputs Pitfall: post-hoc documentation reconstruction Solution: engineering-workflow documentation generation --- ## Risk Classification Diagram URL: https://docs.standardintelligence.com/risk-classification-diagram Breadcrumb: Resources › Architectural Diagrams › Risk Classification Diagram Last updated: 28 Feb 2026 Risk Classification Diagram AISDP module(s): Module 1 The risk classification decision flowchart traces the assessment pathway from "Is it an AI system?" through the three classification pathways (Annex I product, Annex III area, Article 6(3) exception) to the final determination. The diagram is referenced during Phase 1 classification and included in the CDR . Key outputs Classification decision flowchart Three-pathway visual representation CDR supporting diagram --- ## Risk Register — Living Document Requirements URL: https://docs.standardintelligence.com/risk-register-living-document-requirements Breadcrumb: Resources › Core Artefacts › Risk Register — Living Document Requirements Last updated: 28 Feb 2026 Risk Register — Living Document Requirements AISDP module(s): Module 6 Regulatory basis: Article 9 The risk register is maintained as a living document throughout the system lifecycle. It evolves with operational experience: PMM findings create new entries, serious incidents reveal unanticipated risks, regulatory developments change the risk landscape, and system modifications alter the risk profile. Each entry records risk ID, description, likelihood, severity across four dimensions, current mitigations, residual risk level, and assigned owner. The AI Governance Lead reviews and accepts residual risks. Key outputs Living document updated throughout lifecycle Four-dimension severity scoring AI Governance Lead residual risk acceptance --- ## Scope Creep Without Reclassification URL: https://docs.standardintelligence.com/scope-creep-without-reclassification Breadcrumb: Resources › Templates › Common Pitfalls › Scope Creep Without Reclassification Last updated: 28 Feb 2026 Scope Creep Without Reclassification AISDP module(s): Module 1 Gradually expanding the system's use beyond its documented intended purpose without reclassification review. A system classified as non-high-risk under Article 6(3) that is subsequently used outside the exception is operating in breach regardless of capability. Key outputs Pitfall: gradual purpose expansion without reclassification Solution: reclassification review on intended purpose change --- ## Security Stack Summary URL: https://docs.standardintelligence.com/security-stack-summary Breadcrumb: Resources › Technical Infrastructure › Security Stack Summary Last updated: 28 Feb 2026 Security Stack Summary AISDP module(s): Module 9 Regulatory basis: Article 15 Eight security domains: SAST (Semgrep/SonarQube), SCA (Dependabot/Snyk), container scanning (Trivy/Grype), SBOM generation (Syft/CycloneDX), secrets management (HashiCorp Vault/AWS Secrets Manager), artefact signing (Sigstore Cosign), API security (OAuth 2.0 + mTLS + rate limiting ), and penetration testing (annual, covering full stack plus AI-specific vectors). See for the detailed treatment. Key outputs Eight security domain coverage Tooling options per domain Annual penetration testing requirement --- ## Standards & Bodies URL: https://docs.standardintelligence.com/standards-and-bodies Breadcrumb: Resources › Glossary (Appendix C) › Standards & Bodies Last updated: 28 Feb 2026 Standards & Bodies AISDP module(s): Cross-cutting Key standards: ISO/IEC 42001 (AI Management System), ISO/IEC 23894 (AI Risk Management), ISO/IEC 27001 (Information Security), ISO/IEC 25012 (Data Quality), ISO/IEC 25010 (Software Quality). Key bodies: European AI Office, national competent authorities (NCAs), notified bodies (NANDO register), CEN/CENELEC (harmonised standards development). Key outputs Standards with scope summaries Regulatory and standards body definitions --- ## Suppressed Escalation URL: https://docs.standardintelligence.com/suppressed-escalation Breadcrumb: Resources › Templates › Common Pitfalls › Suppressed Escalation Last updated: 28 Feb 2026 Suppressed Escalation AISDP module(s): Module 7 Creating formal escalation pathways but cultivating a culture where using them carries career risk. A reprisal culture suppresses the information the organisation needs most. The non-retaliation commitment must be a lived value. Key outputs Pitfall: formal pathways undermined by cultural reprisal Solution: non-retaliation as lived organisational value --- ## Target: Level 4 Before August 2026 URL: https://docs.standardintelligence.com/target-level-4-before-august-2026 Breadcrumb: Resources › Templates › Maturity Model › Target: Level 4 Before August 2026 Last updated: 28 Feb 2026 Target: Level 4 Before August 2026 AISDP module(s): Cross-cutting Regulatory basis: Article 113 Most organisations in early 2026 are between Level 1 and Level 2. The gap to Level 4 requires governance establishment, classification completion, infrastructure build-out, AISDP preparation, conformity assessment , and PMM activation. Organisations that defer investment and attempt last-minute compliance sprints will find higher costs, lower quality, and greater enforcement risk. Key outputs Current state assessment (most organisations L1–L2) Gap to Level 4 identified Deferral risk acknowledged --- ## Technical Infrastructure URL: https://docs.standardintelligence.com/technical-infrastructure Breadcrumb: Resources › Technical Infrastructure Last updated: 28 Feb 2026 Version Control Summary CI/CD Pipeline with Four Compliance Gates Monitoring Infrastructure Summary Security Stack Summary Fairness & Bias Tooling Summary Explainability Summary LLM / Generative AI Tooling Summary Evidence & Document Management Break-Glass Mechanisms Summary --- ## Technical Terms URL: https://docs.standardintelligence.com/technical-terms Breadcrumb: Resources › Glossary (Appendix C) › Technical Terms Last updated: 28 Feb 2026 Technical Terms AISDP module(s): Cross-cutting Key technical terms defined: automation bias, concept drift, data drift, data lineage, data poisoning, demographic parity, differential privacy, embedding, F1 score, feature engineering, feature store, federated learning, fine-tuning, GPAI, ground truth, hallucination, hyperparameter, inference, LLM, model drift, model extraction, overfitting, perturbation testing, Population Stability Index (PSI), precision, proxy variable, RAG, recall, robustness, and rollback. Key outputs Technical term definitions with section references --- ## Templates & Checklists URL: https://docs.standardintelligence.com/templates-and-checklists Breadcrumb: Resources › Templates & Checklists Last updated: 28 Feb 2026 Templates and checklists provide practical tools for implementing the AISDP . Three core templates cover the AISDP module structure, the classification decision record , and the declaration of conformity. The readiness assessment checklist evaluates governance, classification, infrastructure, process, and knowledge readiness. Eleven common pitfalls documents the most frequent compliance failures observed across organisations and provides specific countermeasures. The maturity model defines five levels from awareness through optimising, with Level 4 as the target before the August 2026 compliance deadline. ℹ These resources correspond to the Templates & Checklists section. --- ## Three Core Templates URL: https://docs.standardintelligence.com/three-core-templates Breadcrumb: Resources › Templates › Core Templates Last updated: 28 Feb 2026 A.1: AISDP Module Structure Template AISDP module(s): All modules Regulatory basis: Article 11 , Annex IV The AISDP module structure template provides a standardised format for each of the twelve modules. Each module section includes regulatory authority (the specific Annex IV sub-paragraph), domain guidance (cross-reference to the relevant source sections), responsible role, content fields (as a structured table with field name, description, and evidence source), and assessor guidance notes. The template ensures consistency across systems in a portfolio. Key outputs Standardised per-module template Regulatory authority and evidence source mapping Assessor guidance integrated A.2: Classification Decision Record Template AISDP module(s): Module 1 Regulatory basis: Article 6 The CDR template provides structured sections for document control , system summary, and the three-pathway classification analysis (Annex I product, Annex III area, Article 6(3) exception). Each pathway has an assessment table with criterion, finding, and evidence columns. The template includes the classification determination section, the reviewer sign-off, and the AI Governance Lead approval. See for the content summary. Key outputs Three-pathway assessment tables Reviewer and governance sign-off sections Evidence column per assessment criterion A.3: Declaration of Conformity Template AISDP module(s): Cross-cutting Regulatory basis: Article 47, Annex V The DoC template provides structured fields for all eight Annex V sections, the pre-signature checklist (21 verification items), the signatory acknowledgement, and the liability warning. The pre-signature checklist must be completed by the Conformity Assessment Coordinator and reviewed by the Legal and Regulatory Advisor before the Declaration is presented for signature. See for the content summary. Key outputs Eight Annex V sections with structured fields 21-item pre-signature checklist Liability warning and signatory acknowledgement --- ## Tools URL: https://docs.standardintelligence.com/tools Breadcrumb: Resources › Glossary (Appendix C) › Tools Last updated: 28 Feb 2026 Tools AISDP module(s): Cross-cutting Key tools referenced: Fairlearn, Aequitas (fairness); SHAP, LIME (explainability); RAGAS, Trulens, Lakera Guard, NeMo Guardrails (LLM monitoring); Evidently AI, NannyML (drift detection); Prometheus, Grafana, Datadog (monitoring); MLflow, Weights & Biases (model registry); DVC, LakeFS (data versioning); Semgrep, Trivy, Sigstore (security); LaunchDarkly, Unleash (feature flags); PagerDuty, Opsgenie (alerting); Credo AI, Holistic AI (GRC); and Argilla, Label Studio (annotation). Key outputs Per-domain tooling catalogue Open-source and commercial options identified --- ## Version Control Summary URL: https://docs.standardintelligence.com/version-control-summary Breadcrumb: Resources › Technical Infrastructure › Version Control Summary Last updated: 28 Feb 2026 Version Control Summary AISDP module(s): Module 10 Regulatory basis: Article 12 , Annex IV (2)(b) Five version-controlled artefact categories: code (Git), data (DVC/LakeFS), models (MLflow/Weights & Biases), configuration (version-controlled YAML/JSON), and the deployment ledger (immutable record of every deployment event). The composite version quad (code version, data version, model version, configuration version) is the version recorded in the AISDP, EU database , and Declaration of Conformity . See for the detailed treatment. Key outputs Five artefact categories under version control Composite version quad Deployment ledger --- # Artefact Taxonomy --- ## A1. Pipeline Execution Logs URL: https://docs.standardintelligence.com/a1-pipeline-execution-logs Breadcrumb: Artefact Taxonomy › Category A — Engineering Work-Product › A1. Pipeline Execution Logs Last updated: 28 Feb 2026 A1. Pipeline Execution Logs AISDP module(s): Module 2 (Development Process), Module 10 (Version Control) Immutable records of every CI/CD pipeline run, capturing each discrete stage from data preparation through feature engineering, model training, evaluation, and deployment. Each record includes duration, resource consumption, convergence metrics, random seeds, evaluation results, gate pass/fail decisions, and any exception approvals. Generated automatically at every pipeline run; the pipeline is configured with discrete, auditable stages and each stage emits structured log events on entry, completion, and failure. Responsible party: CI/CD pipeline auto-generates. Technical SME configures the pipeline stages and logging specification. Regulations addressed: Article 12 (record-keeping); Article 17 (QMS); Annex IV (2)(b) (design and development process description). Key outputs Per-run structured log with stage-level granularity Gate decision records with exception approvals --- ## A10. Third-Party Data Quarantine Log URL: https://docs.standardintelligence.com/a10-third-party-data-quarantine-log Breadcrumb: Artefact Taxonomy › Category A — Engineering Work-Product › A10. Third-Party Data Quarantine Log Last updated: 28 Feb 2026 A10. Third-Party Data Quarantine Log AISDP module(s): Module 4 ( Data Governance ) Records of supplier data deliveries that fail intake validation. Each entry records the delivery, the validation failures, the supplier notification, quarantine status, and resolution. The automated intake validation pipeline (Great Expectations or Soda Core) checks schema compliance, completeness, range and distribution, and anomaly detection. Deliveries that fail are routed to a quarantine holding area. Responsible party: Technical SME operates the pipeline. Data engineering team resolves quarantined deliveries. Regulations addressed: Article 10(3) (data preparation processes); Article 10(2)(f) (error detection). Key outputs Per-delivery validation failure record Resolution audit trail --- ## A11. Vulnerability Management Register URL: https://docs.standardintelligence.com/a11-vulnerability-management-register Breadcrumb: Artefact Taxonomy › Category A — Engineering Work-Product › A11. Vulnerability Management Register Last updated: 28 Feb 2026 A11. Vulnerability Management Register AISDP module(s): Module 9 (Robustness and Cybersecurity) Centralised register of all identified security vulnerabilities with severity classification, remediation SLA, current status, and resolution evidence. Fed by SAST (Semgrep), SCA, container scanning (Trivy, Grype, Snyk), and penetration testing results. Critical vulnerabilities unpatched beyond SLA are escalated to the Non-Conformity Register . Responsible party: Security team maintains. Technical SME reviews AI-specific vulnerabilities. Regulations addressed: Article 15 (cybersecurity); CRA Article 11 (vulnerability handling); NIS2 Article 21 (cybersecurity risk management); DORA Article 28 (ICT third-party risk). Key outputs Per-vulnerability SLA-tracked remediation record Escalation to NCR on SLA breach --- ## A12. Operational Dashboard URL: https://docs.standardintelligence.com/a12-operational-dashboard Breadcrumb: Artefact Taxonomy › Category A — Engineering Work-Product › A12. Operational Dashboard Last updated: 28 Feb 2026 A12. Operational Dashboard AISDP module(s): Module 12 (Post-Market Monitoring) Real-time or near-real-time display of system behaviour: current metric values, alert status, recent trends, and active investigations across the five monitoring dimensions (performance, fairness, data drift , operational health, human oversight). Built in Grafana, integrated with Prometheus, Elasticsearch, and time-series databases. Responsible party: Technical SME configures and monitors. Regulations addressed: Article 72 (PMM); Article 9 (4) ( residual risk monitoring). Key outputs Five-dimension real-time monitoring view Quarterly evidence screenshots for the evidence register --- ## A13. Evaluation Reports URL: https://docs.standardintelligence.com/a13-evaluation-reports Breadcrumb: Artefact Taxonomy › Category A — Engineering Work-Product › A13. Evaluation Reports Last updated: 28 Feb 2026 A13. Evaluation Reports AISDP module(s): Module 5 (Testing and Validation) Per-build records of all performance, fairness, robustness, and calibration metrics declared in the AISDP. Include confidence intervals and per-subgroup disaggregation. Any metric breaching its declared threshold blocks deployment. Computed automatically during the model evaluation stage of the CI/CD pipeline . Responsible party: CI/CD pipeline auto-generates. Technical SME reviews. Regulations addressed: Article 9 (7) (testing); Article 15 (accuracy and robustness); Annex IV (2)(e) (validation and testing results). Key outputs Per-build multi-metric evaluation with gate decision Per-subgroup disaggregated results --- ## A14. Cumulative Baseline Tracking Record URL: https://docs.standardintelligence.com/a14-cumulative-baseline-tracking-record Breadcrumb: Artefact Taxonomy › Category A — Engineering Work-Product › A14. Cumulative Baseline Tracking Record Last updated: 28 Feb 2026 A14. Cumulative Baseline Tracking Record AISDP module(s): Module 10 ( Version Control ) Drift metrics comparing each candidate model version against both the immediately preceding version and the originally assessed baseline. Prevents incremental changes from cumulatively producing a substantial modification. Automated drift metrics (PSI, statistical tests) are computed at each candidate version evaluation. Sustained drift crossing defined thresholds triggers mandatory review. Responsible party: CI/CD pipeline computes drift metrics. AI System Assessor maintains the baseline and reviews trigger events. Regulations addressed: Article 3(23) (substantial modification); Articles 43–47 ( conformity assessment ). Key outputs Per-candidate dual-axis drift comparison (vs baseline, vs predecessor) Cumulative trigger assessment --- ## A15. Dead-Letter Queue Investigation Records URL: https://docs.standardintelligence.com/a15-dead-letter-queue-investigation-records Breadcrumb: Artefact Taxonomy › Category A — Engineering Work-Product › A15. Dead-Letter Queue Investigation Records Last updated: 28 Feb 2026 A15. Dead-Letter Queue Investigation Records AISDP module(s): Module 4 ( Data Governance ) Records of non-conforming data records caught by ingestion boundary validation. Each record documents what failed, why, and how the issue was resolved. Patterns in dead-letter queue volume reveal upstream data quality trends. The ingestion layer routes records failing schema validation, range checks, or freshness requirements to a dead-letter queue for investigation. Responsible party: Data engineering team investigates and resolves. Technical SME monitors volume trends. Regulations addressed: Article 10(3) (data preparation processes); Article 72 (PMM). Key outputs Per-record failure investigation and resolution audit trail Volume trend analysis for quarterly PMM review --- ## A16. Governance Dashboard URL: https://docs.standardintelligence.com/a16-governance-dashboard Breadcrumb: Artefact Taxonomy › Category A — Engineering Work-Product › A16. Governance Dashboard Last updated: 28 Feb 2026 A16. Governance Dashboard AISDP module(s): Module 12 (Post-Market Monitoring) Summary views for the AI Governance Lead and compliance team: compliance metric RAG status, alert history and resolution statistics, trend analysis, Non-Conformity Register status, and per-module evidence currency (date of last auto-generated update vs. last human review). Built in Grafana or a BI tool (Metabase, Superset). Fed by the same monitoring infrastructure as the operational dashboard. Responsible party: AI Governance Lead uses. Technical SME or compliance team configures. Regulations addressed: Article 72 (PMM, governance-level monitoring); Article 17 (QMS, management review). Key outputs Compliance RAG status per AISDP module Evidence currency tracking --- ## A2. Model Registry Entries URL: https://docs.standardintelligence.com/a2-model-registry-entries Breadcrumb: Artefact Taxonomy › Category A — Engineering Work-Product › A2. Model Registry Entries Last updated: 28 Feb 2026 A2. Model Registry Entries AISDP module(s): Module 2 (Development Process), Module 10 (Version Control) Central catalogue of all model versions. Each entry contains provenance metadata linking the model artefact to the exact code, data, configuration, and pipeline execution that produced it. Entries carry cryptographic signatures and follow a stage management lifecycle: experimental, staging, production, archived. Auto-registered at each training run with origin, training data version, code commit, hyperparameters, pipeline execution ID, evaluation metrics, content hash, and digital signature. Responsible party: CI/CD pipeline auto-registers. Technical SME maintains the registry and configures promotion gates. Regulations addressed: Article 12 (record-keeping and traceability ); Article 18 (ten-year retention); Annex IV (2)(b)–(e) (model description, training methodology, evaluation). Key outputs Per-version provenance record with cryptographic integrity Stage transition audit trail --- ## A3. Model Card URL: https://docs.standardintelligence.com/a3-model-card Breadcrumb: Artefact Taxonomy › Category A — Engineering Work-Product › A3. Model Card Last updated: 28 Feb 2026 A3. Model Card AISDP module(s): Module 3 (Architecture and Design), Module 5 (Testing and Validation) Standardised summary of a model version's architecture, training data, performance metrics (disaggregated by subgroup), intended use, known limitations, and failure modes. Auto-populated from the model registry , experiment tracker, and evaluation pipeline at each training run. The template is version-controlled and maintained by the Conformity Assessment Coordinator. Responsible party: CI/CD pipeline auto-generates. Technical SME configures the generation template. Regulations addressed: Annex IV(2)(b)–(e) (general description, design specifications, training methodology, evaluation results); Article 13 (transparency). Key outputs Per-version model card with disaggregated metrics --- ## A4. Composite Version Identifier (Version Quad) URL: https://docs.standardintelligence.com/a4-composite-version-identifier-version-quad Breadcrumb: Artefact Taxonomy › Category A — Engineering Work-Product › A4. Composite Version Identifier (Version Quad) Last updated: 28 Feb 2026 A4. Composite Version Identifier (Version Quad) AISDP module(s): Module 10 (Version Control) A single immutable reference linking code commit, data version, model version, and configuration version for every deployed instance. Tagged to every inference request at the point of execution. From this identifier, the full provenance chain is one lookup away. Assembled automatically at deployment from the four component version identifiers. Responsible party: CI/CD pipeline generates. Technical SME configures the assembly and tagging mechanism. Regulations addressed: Article 12 (record-keeping); Annex IV (2)(b) (version identification); Article 47/Annex V (recorded in the Declaration); Article 49 /Annex VIII (recorded in EU database registration). Key outputs Per-deployment composite version identifier Per-inference-request traceability tag --- ## A5. Deployment Ledger URL: https://docs.standardintelligence.com/a5-deployment-ledger Breadcrumb: Artefact Taxonomy › Category A — Engineering Work-Product › A5. Deployment Ledger Last updated: 28 Feb 2026 A5. Deployment Ledger AISDP module(s): Module 10 (Version Control) Immutable, append-only record of every deployment event. Each entry records the deployment date, composite version, deploying individual, authorisation evidence, target environment, and feature flag activations. Entries cannot be modified after creation. Auto-populated by the deployment pipeline at each production promotion. Responsible party: CI/CD pipeline auto-populates. AI Governance Lead authorises the initial production deployment. Regulations addressed: Article 12 (record-keeping); Annex VI (b) ( traceability link between assessed artefacts and production deployment). Key outputs Per-deployment immutable event record Feature flag activation audit trail --- ## A6. SBOM (Software/ML Bill of Materials) URL: https://docs.standardintelligence.com/a6-sbom-softwareml-bill-of-materials Breadcrumb: Artefact Taxonomy › Category A — Engineering Work-Product › A6. SBOM (Software/ML Bill of Materials) Last updated: 28 Feb 2026 A6. SBOM (Software/ML Bill of Materials) AISDP module(s): Module 3 (Architecture and Design), Module 9 (Robustness and Cybersecurity) Complete dependency inventory for each build: all software libraries, ML framework versions, pre-trained model components (base models, embedding models, tokenisers), and external API dependencies, each with licence terms. Generated in SPDX or CycloneDX standard formats. Auto-generated as part of the CI pipeline using Syft, CycloneDX CLI, or SPDX tools. Attached to the container image as a cosign attestation. Responsible party: CI/CD pipeline generates per build. Conformity Assessment Coordinator stores the SBOM in the evidence register . Regulations addressed: Annex IV (2)(b) (system composition); Article 15 (cybersecurity); CRA Article 12 (software transparency); DORA Article 28 (ICT third-party risk). Key outputs Per-build SPDX or CycloneDX inventory ML-specific component catalogue with licences --- ## A7. Data Lineage Records URL: https://docs.standardintelligence.com/a7-data-lineage-records Breadcrumb: Artefact Taxonomy › Category A — Engineering Work-Product › A7. Data Lineage Records Last updated: 28 Feb 2026 A7. Data Lineage Records AISDP module(s): Module 4 (Data Governance), Module 10 (Version Control) End-to-end provenance chain tracing raw data sources through every transformation to the features used in training and inference. Column-level lineage captures indirect relationships. Captured automatically as pipeline components emit lineage events through OpenLineage with Marquez, or equivalent. Retained for the full ten-year period. Responsible party: Automated pipeline generates. Technical SME configures the lineage framework. Regulations addressed: Article 10 (data governance); Article 12 (record-keeping); GDPR Articles 15–17 (data subject rights); Article 18 (ten-year retention). Key outputs Per-pipeline-run column-level lineage graph --- ## A8. Feature Registry URL: https://docs.standardintelligence.com/a8-feature-registry Breadcrumb: Artefact Taxonomy › Category A — Engineering Work-Product › A8. Feature Registry Last updated: 28 Feb 2026 A8. Feature Registry AISDP module(s): Module 4 ( Data Governance ) Central catalogue of all features used by the system. Each entry records the feature name, source dataset and field, transformation applied, proxy variable assessment (correlation with each protected characteristic, justification for retention or removal), and SHAP-based feature importance. Maintained by the Technical SME as part of the feature engineering layer. Updated when features are added, modified, or retired. Responsible party: Technical SME maintains. Regulations addressed: Article 10(2)(f) (examination for possible biases); Article 10(5) (special category data); Annex IV (2)(d) (data description). Key outputs Per-feature proxy variable assessment Feature importance ranking --- ## A9. Override and Escalation Logs URL: https://docs.standardintelligence.com/a9-override-and-escalation-logs Breadcrumb: Artefact Taxonomy › Category A — Engineering Work-Product › A9. Override and Escalation Logs Last updated: 28 Feb 2026 A9. Override and Escalation Logs AISDP module(s): Module 7 (Human Oversight), Module 12 (Post-Market Monitoring) Immutable records of every operator override, escalation, and human review decision. Each record captures the operator identity, the system's recommendation, the operator's decision, the stated rationale (via structured drop-down with free-text supplement), and the outcome. Captured automatically at the post-processing and human oversight layers. Responsible party: Monitoring layer captures automatically. Technical SME analyses patterns. Regulations addressed: Article 14 (human oversight); Article 12 (record-keeping); Article 72 (PMM). Key outputs Per-override structured decision record Automation bias trend data for quarterly review --- ## Artefact Interdependencies URL: https://docs.standardintelligence.com/artefact-interdependencies Breadcrumb: Artefact Taxonomy › Cross-Cutting Analysis › Artefact Interdependencies Last updated: 28 Feb 2026 Artefact Interdependencies AISDP module(s): Cross-cutting The 61 artefacts form a dependency graph rather than an independent list. Six artefacts serve as foundational inputs to many others. The composite version identifier (A4) is referenced by the deployment ledger (A5), the Declaration of Conformity (E1), the EU database registration (E2), the assessment report (D4), and the substantial modification assessment (B9). The risk register (C2) feeds the residual risk sign-offs (C3), the IFU residual risk section (E4), the threat model (B7), and the FRIA (E7). The evidence register (B2) indexes every other artefact. Three artefacts function as terminal outputs that consume but do not feed other artefacts: the Declaration of Conformity (E1), the CE marking authorisation (E3), and the serious incident report (E6). The Declaration is the final convergence point; it cannot be signed until the assessment report (D4) is complete, the NCR (D3) is clear, and the risk register (C2) shows accepted residual risk. The model card (A3) illustrates a cross-category dependency chain: it is auto-generated (Category A), feeds the dataset documentation (B5), the fairness evaluation report (B4), and the IFU (E4), and is reviewed during the conformity assessment (D2). Key outputs Foundational artefact identification (six key inputs) Terminal artefact identification (three final outputs) Declaration of Conformity dependency chain --- ## Artefact Taxonomy URL: https://docs.standardintelligence.com/artefact-taxonomy Breadcrumb: Artefact Taxonomy Last updated: 28 Feb 2026 The AISDP v22 generates 61 distinct compliance artefacts across the system lifecycle. This section classifies those artefacts along a spectrum from pure engineering output to formal legal instrument, using a five-category taxonomy. The classification determines how each artefact is generated, who owns it, how it is stored, and what regulatory significance it carries. Category A (engineering work-product, 16 artefacts) covers pipeline-generated outputs that exist because engineers build systems. Category B (compliance evidence, 14 artefacts) covers those same outputs when specifically structured, retained, and traceable for regulatory demonstration. Category C (governance decision records, 12 artefacts) documents the organisation's reasoning and risk acceptance. Category D (assessment records, 8 artefacts) captures the formal conformity assessment process. Category E (regulatory instruments, 11 artefacts) covers documents with legal force that leave the organisation. The taxonomy pages provide per-artefact descriptions, collection methods, responsible parties, and regulatory mappings. Cross-cutting analysis covers regulatory mapping across ten legal instruments, retention requirements, update frequencies, collection method patterns, responsible party distribution, artefact interdependencies, and official template sources. ℹ This section corresponds to the Artefact Taxonomy. It cross-references all twelve AISDP modules and supports the evidence register (B2) and cross-reference index (B14). --- ## B1. Evidence Pack URL: https://docs.standardintelligence.com/b1-evidence-pack Breadcrumb: Artefact Taxonomy › Category B — Compliance Evidence › B1. Evidence Pack Last updated: 28 Feb 2026 B1. Evidence Pack AISDP module(s): Cross-cutting The complete collection of supporting artefacts substantiating every material claim in the AISDP. An AISDP without its evidence pack is a narrative without proof. Artefacts are generated as natural byproducts of the engineering workflow through CI/CD automation, version control , and monitoring infrastructure. The Conformity Assessment Coordinator ensures that each artefact is collected, versioned, stored in the designated location, and linked to the AISDP claims it supports. Responsible party: Conformity Assessment Coordinator maintains the collection. All ten roles contribute artefacts. Regulations addressed: Articles 11 and 18 (technical documentation and retention); Annex IV (documentation requirements); Annex VI (conformity assessment evidence). Key outputs Master artefact collection with per-module coverage tracking Currency attestation --- ## B10. PMM Feedback Loop Records URL: https://docs.standardintelligence.com/b10-pmm-feedback-loop-records Breadcrumb: Artefact Taxonomy › Category B — Compliance Evidence › B10. PMM Feedback Loop Records Last updated: 28 Feb 2026 B10. PMM Feedback Loop Records AISDP module(s): Module 12 (Post-Market Monitoring) Traceable records demonstrating the complete cycle for each monitoring finding: identification, decision authority determination, engineering implementation, validation gate confirmation, AISDP update, and evidence pack recording. Evidence that the risk management system is responsive to production data. Decision authority is tiered by impact. Responsible party: Technical SME implements fixes. AI Governance Lead authorises significant changes. Regulations addressed: Article 9 (risk management, continuous updating); Article 72 (PMM); Article 12 (record-keeping). Key outputs Per-finding traceable cycle record (finding through to AISDP update) Decision authority tiering evidence --- ## B11. Operator Training and AI Literacy Records URL: https://docs.standardintelligence.com/b11-operator-training-and-ai-literacy-records Breadcrumb: Artefact Taxonomy › Category B — Compliance Evidence › B11. Operator Training and AI Literacy Records Last updated: 28 Feb 2026 B11. Operator Training and AI Literacy Records AISDP module(s): Module 7 (Human Oversight) LMS-based records of operator training on system capabilities, limitations, oversight procedures, and automation bias awareness. Calibration cases (known-answer items) are injected at random intervals during production use; operators who agree with the system on cases where it is wrong exhibit automation bias. Results feed into quarterly PMM review s. Responsible party: AI Governance Lead has programme oversight. HR/training function delivers. Regulations addressed: Article 4 (AI literacy); Article 14 (human oversight); Article 26(7) (deployer staff AI literacy). Key outputs Per-operator training completion and competence records Automation bias trend tracking --- ## B12. Archived Model and Documentation Package URL: https://docs.standardintelligence.com/b12-archived-model-and-documentation-package Breadcrumb: Artefact Taxonomy › Category B — Compliance Evidence › B12. Archived Model and Documentation Package Last updated: 28 Feb 2026 B12. Archived Model and Documentation Package AISDP module(s): Module 10 (Version Control) Complete preservation of model artefacts, AISDP modules, evidence pack , and runtime environment (including the container image) for the ten-year retention period. Must be retrievable, not merely stored. The AI System Assessor verifies the cryptographic signature one final time before archival. Tested periodically by retrieving a sample and confirming it can be loaded. Responsible party: AI Governance Lead ensures infrastructure. Technical SME verifies signatures before archival. Regulations addressed: Article 18 (ten-year retention); Article 12 (record-keeping, requiring retrievability). Key outputs Complete archive with integrity verification Retrieval test records --- ## B13. Inspection Readiness Pack URL: https://docs.standardintelligence.com/b13-inspection-readiness-pack Breadcrumb: Artefact Taxonomy › Category B — Compliance Evidence › B13. Inspection Readiness Pack Last updated: 28 Feb 2026 B13. Inspection Readiness Pack AISDP module(s): Cross-cutting Pre-assembled documentation package enabling rapid response to competent authority requests. Organised to serve both NIS2 audits and AI Act inspections without reorganisation under time pressure. Pre-translated where authorities require the national language. Maintained in a state of continuous readiness. Responsible party: Conformity Assessment Coordinator assembles. AI Governance Lead approves. Regulations addressed: Article 21 (cooperation with competent authorities); Article 79(2) (fifteen working-day backstop); NIS2 Article 32 (supervisory measures); DORA Article 11 (audit provisions). Key outputs Per-jurisdiction pre-translated documentation package Response team identification and protocol --- ## B14. Cross-Reference Index URL: https://docs.standardintelligence.com/b14-cross-reference-index Breadcrumb: Artefact Taxonomy › Category B — Compliance Evidence › B14. Cross-Reference Index Last updated: 28 Feb 2026 B14. Cross-Reference Index AISDP module(s): Cross-cutting Consolidated mapping of every cited EU AI Act article, annex, and AISDP module to the documentation sections addressing it. Supports completeness verification and navigation. Generated from the document structure. Responsible party: AI System Assessor compiles. Regulations addressed: Annex IV (completeness of documentation); Articles 11 and 18 (documentation integrity). Key outputs Regulation-to-AISDP section mapping Completeness verification tool --- ## B2. Evidence Register URL: https://docs.standardintelligence.com/b2-evidence-register Breadcrumb: Artefact Taxonomy › Category B — Compliance Evidence › B2. Evidence Register Last updated: 28 Feb 2026 B2. Evidence Register AISDP module(s): Cross-cutting Structured catalogue of every artefact in the evidence pack. Each entry records a unique artefact identifier, the AISDP module it supports, the EU AI Act article it demonstrates compliance with, the artefact's current version and storage location, date last updated, the freshness requirement, and the responsible role. Must be queryable. Maintained in Airtable, Notion, SharePoint, or YAML. Distinguishes between system-specific evidence and shared evidence for multi-system organisations. Responsible party: Conformity Assessment Coordinator maintains. Regulations addressed: Articles 11 and 18 (documentation and retention); Annex IV ( traceability ); Annex VI (assessment evidence management). Key outputs Queryable artefact catalogue with traceability metadata Freshness status tracking per artefact --- ## B3. Distributional Analysis Reports URL: https://docs.standardintelligence.com/b3-distributional-analysis-reports Breadcrumb: Artefact Taxonomy › Category B — Compliance Evidence › B3. Distributional Analysis Reports Last updated: 28 Feb 2026 B3. Distributional Analysis Reports AISDP module(s): Module 4 ( Data Governance ) Consolidated report per dataset combining the distributional analysis output matrix (features vs. protected characteristics with test statistics and p-values), the flagged features register, the proxy variable correlation matrix with justification review outcomes, and the intersectional pre-training analysis with cell sizes and reliability assessments. The statistical analysis is automated; the report structure, per-feature justification review, and intersectional analysis are compliance-driven additions. Responsible party: Technical SME generates. AI System Assessor reviews for completeness. AI Governance Lead reviews acceptability of identified biases. Regulations addressed: Article 10(2)(f) (examination for possible biases); Annex IV (2)(d) (data description). Key outputs Per-dataset bias analysis with per-feature justification Intersectional representation assessment --- ## B4. Fairness Evaluation Report URL: https://docs.standardintelligence.com/b4-fairness-evaluation-report Breadcrumb: Artefact Taxonomy › Category B — Compliance Evidence › B4. Fairness Evaluation Report Last updated: 28 Feb 2026 B4. Fairness Evaluation Report AISDP module(s): Module 4 ( Data Governance ), Module 5 (Testing and Validation) Central fairness evidence document. Contains per-subgroup metrics (TPR, FPR, selection rate ratio, disparity measures), threshold compliance status, and bias mitigation outcomes. Computed by the fairness evaluation suite (Fairlearn MetricFrame, AI Fairness 360) integrated into the CI pipeline. Any threshold breach blocks deployment. Responsible party: Technical SME generates. AI System Assessor reviews. Regulations addressed: Article 10(2)(f) (bias examination); Article 9 (7) (testing against pre-defined metrics); Annex IV (2)(e) (validation and testing results). Key outputs Per-subgroup disparity metrics with threshold compliance Bias mitigation impact assessment --- ## B5. Dataset Documentation URL: https://docs.standardintelligence.com/b5-dataset-documentation Breadcrumb: Artefact Taxonomy › Category B — Compliance Evidence › B5. Dataset Documentation Last updated: 28 Feb 2026 B5. Dataset Documentation AISDP module(s): Module 4 (Data Governance) Per- dataset documentation recording source, size, temporal period, geographic coverage, collection methodology, legal basis for processing, preparation steps, quality metrics per ISO/IEC 25012, special category data processing, and version identifier. Maintained as a living artefact co-located with the dataset in the versioning system (DVC, Delta Lake, or LakeFS). A dataset version bump triggers a corresponding documentation update. Responsible party: Technical SME drafts. DPO Liaison reviews data protection aspects. Regulations addressed: Article 10 (data governance, all sub-requirements); Article 10(5) (special category data); Annex IV (2)(d) and (f); GDPR Article 6 (lawful basis); GDPR Article 9 (special category data). Key outputs Per-dataset provenance, composition, and quality documentation Legal basis and special category data assessment --- ## B6. Data Retention Plan URL: https://docs.standardintelligence.com/b6-data-retention-plan Breadcrumb: Artefact Taxonomy › Category B — Compliance Evidence › B6. Data Retention Plan Last updated: 28 Feb 2026 B6. Data Retention Plan AISDP module(s): Module 4 ( Data Governance ) Per-data-category specification of retention periods, justifications, storage tiers and cost implications, and deletion or anonymisation procedures. Reconciles the AI Act's ten-year documentation retention with GDPR 's storage limitation principle. Categories covered: training data, validation data, test data, inference inputs, inference outputs, and operator interaction logs. Responsible party: DPO Liaison reviews against GDPR. AI Governance Lead ensures infrastructure planning and budgeting. Regulations addressed: Article 18 (ten-year retention); GDPR Article 5 (1)(e) (storage limitation); GDPR Article 17 (right to erasure). Key outputs Per-category retention schedule with regulatory justification GDPR-AI Act reconciliation analysis --- ## B7. Threat Model URL: https://docs.standardintelligence.com/b7-threat-model Breadcrumb: Artefact Taxonomy › Category B — Compliance Evidence › B7. Threat Model Last updated: 28 Feb 2026 B7. Threat Model AISDP module(s): Module 9 (Robustness and Cybersecurity) Living document mapping the system's threat landscape using combined STRIDE, MITRE ATLAS, OWASP Top 10 for LLM Applications 2025 v2.0, and PASTA frameworks. Each threat is mapped against system components and assessed for both technical severity and fundamental rights impact. Developed during Phase 3 architecture design using IriusRisk or OWASP Threat Dragon. Updated when the system architecture changes, new threat intelligence emerges, or '); return false;" class="xref">post-market monitoring reveals new attack vectors. Responsible party: Technical SME drafts and maintains. Regulations addressed: Article 15 (cybersecurity); Article 9 (risk management); CRA Article 10 (cybersecurity requirements); NIS2 Article 21 (risk management measures). Key outputs Per-threat control mapping with residual risk Top-ten bow-tie diagrams Fundamental rights impact assessment per threat --- ## B8. Penetration Testing Reports URL: https://docs.standardintelligence.com/b8-penetration-testing-reports Breadcrumb: Artefact Taxonomy › Category B — Compliance Evidence › B8. Penetration Testing Reports Last updated: 28 Feb 2026 B8. Penetration Testing Reports AISDP module(s): Module 9 (Robustness and Cybersecurity) Results of security testing including AI-specific attack scenarios (adversarial inputs, model extraction , data poisoning , prompt injection ). For financial entities subject to DORA , includes TLPT using TIBER-EU methodology. Refreshed annually. External testers with realistic threat actor capabilities conduct structured exercises using MITRE ATLAS alongside MITRE ATT&CK. Responsible party: Technical SME commissions. Independent testers execute. Regulations addressed: Article 15 (cybersecurity); DORA Article 26 (TLPT); NIS2 Article 21 (security testing). Key outputs AI-specific attack scenario results Conventional vulnerability findings Remediation recommendations tracked through vulnerability register --- ## B9. Substantial Modification Assessment URL: https://docs.standardintelligence.com/b9-substantial-modification-assessment Breadcrumb: Artefact Taxonomy › Category B — Compliance Evidence › B9. Substantial Modification Assessment Last updated: 28 Feb 2026 B9. Substantial Modification Assessment AISDP module(s): Module 10 (Version Control) Formal assessment of whether a system change crosses quantitative thresholds, potentially triggering a new conformity assessment cycle. Three possible outcomes: substantial modification (re-assessment required), within acceptable bounds (documented with evidence), or cumulative baseline trigger (full assessment despite sub-threshold individual change). Retained for ten years regardless of outcome. Responsible party: AI System Assessor assesses. AI Governance Lead approves. Legal and Regulatory Advisor provides input for borderline cases. Regulations addressed: Article 3(23) (substantial modification); Articles 43–47 (conformity assessment); Article 12 (record-keeping of changes). Key outputs Per-change threshold evaluation with determination Cumulative baseline trigger assessment --- ## C1. Classification Decision Record (CDR) URL: https://docs.standardintelligence.com/c1-classification-decision-record-cdr Breadcrumb: Artefact Taxonomy › Category C — Governance Decision Record › C1. Classification Decision Record (CDR) Last updated: 28 Feb 2026 C1. Classification Decision Record (CDR) AISDP module(s): Module 1 (System Identity) Records whether the system falls within the AI Act's scope, its risk tier, and the full reasoning. Captures the Article 3(1) definition analysis, Annex III domain assessment, and where applicable the Article 6(3) exception two-limb test. The gateway artefact; all subsequent compliance activity depends on correct classification. The Classification Reviewer independently reviews; disagreements are escalated to the AI Governance Lead . Reclassification trigger s are defined and monitored. Responsible party: AI System Assessor drafts. Classification Reviewer independently reviews. AI Governance Lead approves at the Phase 1 governance gate. Regulations addressed: Article 3(1) (AI system definition); Article 5 (prohibited practices); Articles 6–7 and Annex III (high-risk classification); Article 6(3) (exception assessment); Article 50 (limited-risk transparency obligations). Key outputs Scope determination with three-question Article 3(1) analysis Risk tier classification with Annex III domain mapping Reclassification trigger monitoring specification --- ## C10. D&O and Insurance Coverage Review URL: https://docs.standardintelligence.com/c10-dando-and-insurance-coverage-review Breadcrumb: Artefact Taxonomy › Category C — Governance Decision Record › C10. D&O and Insurance Coverage Review Last updated: 28 Feb 2026 C10. D&O and Insurance Coverage Review AISDP module(s): Module 6 (Risk Management System) Assessment of whether D&O insurance covers Article 99 regulatory fines, defence costs, and personal liability for the Declaration signatory. Reviews existing policy exclusions against the Article 99 three-tier penalty framework (EUR 35M/7%, EUR 15M/3%, EUR 7.5M/1%). Findings are documented and shared with the AI Governance Lead before the Declaration is signed. Responsible party: Legal and Regulatory Advisor assesses. Regulations addressed: Article 99 (penalties); Article 47 ( Declaration of Conformity , personal liability). Key outputs Per-tier coverage analysis Gap identification with recommendations --- ## C11. Regulatory Guidance Monitoring Log URL: https://docs.standardintelligence.com/c11-regulatory-guidance-monitoring-log Breadcrumb: Artefact Taxonomy › Category C — Governance Decision Record › C11. Regulatory Guidance Monitoring Log Last updated: 28 Feb 2026 C11. Regulatory Guidance Monitoring Log AISDP module(s): Cross-cutting Quarterly tracking of published guidance, interpretive statements, and enforcement actions from competent authorities across all deployment jurisdictions. Where guidance conflicts or diverges across member states, the organisation documents its interpretation, reasoning, and supporting evidence. New jurisdictions are added at deployment. Responsible party: Legal and Regulatory Advisor maintains on a quarterly cycle. Regulations addressed: Article 9 (risk management); Article 72 (PMM); Article 113 (transitional provisions). Key outputs Per-jurisdiction guidance tracking with impact assessment Cross-jurisdiction conflict register with organisational interpretation --- ## C12. Decommission Plan URL: https://docs.standardintelligence.com/c12-decommission-plan Breadcrumb: Artefact Taxonomy › Category C — Governance Decision Record › C12. Decommission Plan Last updated: 28 Feb 2026 C12. Decommission Plan AISDP module(s): Module 12 (Post-Market Monitoring) Structured plan for system end-of-life covering seven workstreams: data disposition, model archival, deployer notification, EU database update, documentation archival, operator transition, and regulatory notification. Each milestone has a responsible owner, target date, and completion criterion. Prepared when an end-of-life trigger is activated. Responsible party: AI Governance Lead tracks milestones and escalates delays. Regulations addressed: Article 79(2) (fifteen working-day backstop); Article 18 (ten-year retention post-decommission); Article 49 /71 (EU database update); Article 72 (PMM cessation). Key outputs Seven-workstream plan with per-milestone ownership Progress tracking and escalation protocol --- ## C2. Risk Register URL: https://docs.standardintelligence.com/c2-risk-register Breadcrumb: Artefact Taxonomy › Category C — Governance Decision Record › C2. Risk Register Last updated: 28 Feb 2026 C2. Risk Register AISDP module(s): Module 6 (Risk Management System) Central living document recording all identified risks. Each entry records risk ID, description, likelihood, severity across four dimensions (health and safety, fundamental rights, operational integrity, reputational exposure), current mitigations, residual risk level, and assigned owner. Initially populated through five-method risk identification : FMEA, stakeholder consultation, regulatory gap analysis, adversarial red-teaming, and horizon scanning. Updated continuously from PMM findings, serious incident s, regulatory developments, and system modifications. Reviewed formally each quarter and at every governance gate. Responsible party: AI System Assessor populates. Technical SME provides technical risk input. AI Governance Lead reviews and accepts residual risk. Regulations addressed: Article 9 (risk management system, all sub-requirements); Article 9(2)(a) (identification and analysis); Article 9(4) (residual risk communication); Annex IV (2)(g) (risk management documentation). Key outputs Five-method risk identification evidence Per-risk FMEA RPN scoring (1,000-point scale) Quarterly review records --- ## C3. Residual Risk Acceptance Sign-offs URL: https://docs.standardintelligence.com/c3-residual-risk-acceptance-sign-offs Breadcrumb: Artefact Taxonomy › Category C — Governance Decision Record › C3. Residual Risk Acceptance Sign-offs Last updated: 28 Feb 2026 C3. Residual Risk Acceptance Sign-offs AISDP module(s): Module 6 (Risk Management System) Formal records of the AI Governance Lead 's acceptance of residual risk at each governance gate. Each sign-off records the specific residual risks accepted, the compensating controls in place, and the conditions under which the acceptance remains valid. Generated at Phase 1, Phase 2, Phase 3, Phase 5, and operational review gates. Retained for the ten-year period. Responsible party: AI Governance Lead signs. AI System Assessor prepares the residual risk profile. Regulations addressed: Article 9 (4) (residual risk communication to deployers); Article 9(2)(a) (risk acceptance); Article 14 (human oversight, as residual risks inform oversight design). Key outputs Per-gate signed risk acceptance with conditions Deployer communication cross-reference to IFU --- ## C4. Model Selection Record URL: https://docs.standardintelligence.com/c4-model-selection-record Breadcrumb: Artefact Taxonomy › Category C — Governance Decision Record › C4. Model Selection Record Last updated: 28 Feb 2026 C4. Model Selection Record AISDP module(s): Module 3 (Architecture and Design) Documents the full model evaluation, the compliance criteria scoring for each candidate architecture, and the reasoned rationale for the selected model. Per-component entries are required for multi-model architectures. Candidate architectures are evaluated against six compliance criteria (documentability, testability, auditability, bias detectability, maintainability, determinism). Responsible party: Technical SME drafts. AI System Assessor verifies completeness. AI Governance Lead approves. Regulations addressed: Annex IV(2)(b)–(c) (description, design specifications); Article 9 (risk management); Article 13 (transparency). Key outputs Per-candidate six-criteria scoring with weighted totals Multi-model per-component selection rationale --- ## C5. Compliance Criteria Scoring Matrix URL: https://docs.standardintelligence.com/c5-compliance-criteria-scoring-matrix Breadcrumb: Artefact Taxonomy › Category C — Governance Decision Record › C5. Compliance Criteria Scoring Matrix Last updated: 28 Feb 2026 C5. Compliance Criteria Scoring Matrix AISDP module(s): Module 3 (Architecture and Design) Quantitative comparison of candidate architectures across six compliance criteria, with weighted scores and evidence-based justifications. Weights are approved before the evaluation begins to prevent post-hoc rationalisation. Each candidate is scored as strong, adequate, or weak against each criterion. Responsible party: Technical SME scores. AI Governance Lead approves weights before evaluation begins. Regulations addressed: Annex IV(2)(b)–(e) (design, training, evaluation); Article 13 (transparency); Article 12 (record-keeping). Key outputs Pre-approved weight rationale Per-candidate evidence-based scoring --- ## C6. Model Origin Risk Assessment URL: https://docs.standardintelligence.com/c6-model-origin-risk-assessment Breadcrumb: Artefact Taxonomy › Category C — Governance Decision Record › C6. Model Origin Risk Assessment Last updated: 28 Feb 2026 C6. Model Origin Risk Assessment AISDP module(s): Module 3 (Architecture and Design) Evaluation of provenance, governance quality, and inherited risk for each model component (open-source, commercial, GPAI). Reviews model card s, dataset descriptions, evaluation reports, adversarial evaluation history, licence terms, and governance practices. Common gaps include absent disaggregated fairness metrics and incomplete adversarial robustness evaluation. Responsible party: AI System Assessor conducts. Technical SME provides technical evaluation. Regulations addressed: Article 25 (3) (information from GPAI providers); Article 53 (GPAI transparency obligations); Article 51(2) (systemic risk assessment ). Key outputs Per-component provenance risk rating Upstream documentation gap analysis Compensating control specification for inherited risks --- ## C7. IP and Licensing Analysis URL: https://docs.standardintelligence.com/c7-ip-and-licensing-analysis Breadcrumb: Artefact Taxonomy › Category C — Governance Decision Record › C7. IP and Licensing Analysis Last updated: 28 Feb 2026 C7. IP and Licensing Analysis AISDP module(s): Module 3 (Architecture and Design) Assessment of copyright exposure, licence compatibility, and IP risk for all model components, training data, and third-party dependencies. Records the organisation's interpretation of ambiguous licence terms and risk acceptance. Automated licence compliance scanning in the CI pipeline covers all dependencies. Where terms are ambiguous, the Legal and Regulatory Advisor documents the organisation's interpretation. Responsible party: Legal and Regulatory Advisor reviews. AI System Assessor compiles. AI Governance Lead signs off on residual IP risk. Regulations addressed: Annex IV (2)(b) (component description); Directive 2019/790 (Copyright in the Digital Single Market); licence-specific obligations. Key outputs Licence compatibility analysis Copyright exposure assessment (TDM compliance) Ambiguous term interpretation register --- ## C8. Fine-Tuning Provider Boundary Determination URL: https://docs.standardintelligence.com/c8-fine-tuning-provider-boundary-determination Breadcrumb: Artefact Taxonomy › Category C — Governance Decision Record › C8. Fine-Tuning Provider Boundary Determination Last updated: 28 Feb 2026 C8. Fine-Tuning Provider Boundary Determination AISDP module(s): Module 3 (Architecture and Design) Documents whether fine-tuning a GPAI model constitutes a substantial modification under Article 3(23) , triggering the provider boundary shift under Article 25 (1)(b) and full Article 16 obligations. Evaluates the fine-tuning against each substantial modification criterion. Where provider status is triggered, the full set of Article 16 obligations is mapped. Responsible party: AI System Assessor conducts. AI Governance Lead and Legal and Regulatory Advisor approve. Regulations addressed: Article 3(23) (substantial modification); Article 25(1)(b) (provider boundary shift); Article 16 (provider obligations); Articles 43, 47. Key outputs Per-criterion substantial modification assessment Provider obligation mapping (if triggered) --- ## C9. Quarterly PMM Review Minutes URL: https://docs.standardintelligence.com/c9-quarterly-pmm-review-minutes Breadcrumb: Artefact Taxonomy › Category C — Governance Decision Record › C9. Quarterly PMM Review Minutes Last updated: 28 Feb 2026 C9. Quarterly PMM Review Minutes AISDP module(s): Module 12 (Post-Market Monitoring) Governance record of the primary post-market monitoring forum. Documents monitoring trends, corrective actions approved, operator escalation patterns, deployer feedback, complaint volumes, Non-Conformity Register status, and confirmation that the feedback loop is functioning. Chaired by the AI Governance Lead . Responsible party: AI Governance Lead chairs and approves. Regulations addressed: Article 72 (PMM); Article 9 (risk management, continuous updating); Article 17 (QMS, management review). Key outputs Five-dimension monitoring trend summary Corrective action register with ownership and deadlines Feedback loop functioning confirmation --- ## Category A — Engineering Work-Product URL: https://docs.standardintelligence.com/category-a--engineering-work-product Breadcrumb: Artefact Taxonomy › Category A — Engineering Work-Product Last updated: 28 Feb 2026 A1. Pipeline Execution Logs A2. Model Registry Entries A3. Model Card A4. Composite Version Identifier (Version Quad) A5. Deployment Ledger A6. SBOM (Software/ML Bill of Materials) A7. Data Lineage Records A8. Feature Registry A9. Override and Escalation Logs A10. Third-Party Data Quarantine Log A11. Vulnerability Management Register A12. Operational Dashboard A13. Evaluation Reports A14. Cumulative Baseline Tracking Record A15. Dead-Letter Queue Investigation Records A16. Governance Dashboard --- ## Category B — Compliance Evidence URL: https://docs.standardintelligence.com/category-b--compliance-evidence Breadcrumb: Artefact Taxonomy › Category B — Compliance Evidence Last updated: 28 Feb 2026 B1. Evidence Pack B2. Evidence Register B3. Distributional Analysis Reports B4. Fairness Evaluation Report B5. Dataset Documentation B6. Data Retention Plan B7. Threat Model B8. Penetration Testing Reports B9. Substantial Modification Assessment B10. PMM Feedback Loop Records B11. Operator Training and AI Literacy Records B12. Archived Model and Documentation Package B13. Inspection Readiness Pack B14. Cross-Reference Index --- ## Category C — Governance Decision Record URL: https://docs.standardintelligence.com/category-c--governance-decision-record Breadcrumb: Artefact Taxonomy › Category C — Governance Decision Record Last updated: 28 Feb 2026 C1. Classification Decision Record (CDR) C2. Risk Register C3. Residual Risk Acceptance Sign-offs C4. Model Selection Record C5. Compliance Criteria Scoring Matrix C6. Model Origin Risk Assessment C7. IP and Licensing Analysis C8. Fine-Tuning Provider Boundary Determination C9. Quarterly PMM Review Minutes C10. D&O and Insurance Coverage Review C11. Regulatory Guidance Monitoring Log C12. Decommission Plan --- ## Category D — Assessment Record URL: https://docs.standardintelligence.com/category-d--assessment-record Breadcrumb: Artefact Taxonomy › Category D — Assessment Record Last updated: 28 Feb 2026 D1. Assessment Plan D2. Assessment Checklist D3. Non-Conformity Register D4. Internal Conformity Assessment Report D5. Assessor Records Archive D6. Readiness Assessment Checklist D7. Annual Oversight Audit Report D8. Regulatory Interaction Log --- ## Category E — Regulatory Instrument URL: https://docs.standardintelligence.com/category-e--regulatory-instrument Breadcrumb: Artefact Taxonomy › Category E — Regulatory Instrument Last updated: 28 Feb 2026 E1. Declaration of Conformity E2. EU Database Registration Entry E3. CE Marking E4. Instructions for Use (IFU) E5. AISDP (Twelve-Module Documentation Package) E6. Serious Incident Reports E7. FRIA Report E8. Data Protection Impact Assessment (DPIA) E9. Affected Person Notification Templates E10. Regulator Contact Register E11. Break-Glass Procedure Documentation --- ## Collection Method Patterns URL: https://docs.standardintelligence.com/collection-method-patterns Breadcrumb: Artefact Taxonomy › Cross-Cutting Analysis › Collection Method Patterns Last updated: 28 Feb 2026 Collection Method Patterns AISDP module(s): Cross-cutting Three collection patterns cover the 61 artefacts. Automated collection (28 artefacts, predominantly Category A and portions of Category B) requires no human action during normal operation; the CI/CD pipeline , monitoring infrastructure, and logging framework generate the artefact as a byproduct. Human-reviewed automation (15 artefacts, predominantly Category B) involves automated generation followed by structured human review, approval, or enrichment. Manual with structure (18 artefacts, predominantly Categories C and D) requires human judgement guided by templates, checklists, or defined procedures. The AISDP's design philosophy pushes artefacts toward automated collection wherever possible. The 28 fully automated artefacts represent the system's compliance foundation; they cannot fall out of date because they are generated as engineering byproducts. The 15 human-reviewed artefacts sit at the compliance boundary where automated outputs require human interpretation. The 18 manual artefacts capture organisational judgement that cannot be automated. Key outputs Three-pattern collection methodology classification Per-artefact collection pattern assignment --- ## Cross-Cutting Analysis URL: https://docs.standardintelligence.com/cross-cutting-analysis Breadcrumb: Artefact Taxonomy › Cross-Cutting Analysis Last updated: 28 Feb 2026 Regulatory Mapping Retention Requirements Update Frequencies Collection Method Patterns Responsible Party Distribution Artefact Interdependencies Official Template Sources --- ## D1. Assessment Plan URL: https://docs.standardintelligence.com/d1-assessment-plan Breadcrumb: Artefact Taxonomy › Category D — Assessment Record › D1. Assessment Plan Last updated: 28 Feb 2026 D1. Assessment Plan AISDP module(s): Cross-cutting Defines the conformity assessment scope, methodology, assessor team, assessment phases, and timeline. Specifies the three workstreams: QMS assessment ( Annex VI (a)), technical documentation assessment (Annex VI(b)), and consistency assessment. Prepared by the Conformity Assessment Coordinator before the formal assessment begins. Specifies evidence to be reviewed, assessment criteria per article and sub-requirement, and expected duration per phase. Responsible party: Conformity Assessment Coordinator drafts. AI Governance Lead approves. Regulations addressed: Article 43 (conformity assessment procedures); Annex VI (internal control procedure); Annex VII (notified body procedure, where applicable). Key outputs Three-workstream assessment structure with timeline Per-article evidence mapping --- ## D2. Assessment Checklist URL: https://docs.standardintelligence.com/d2-assessment-checklist Breadcrumb: Artefact Taxonomy › Category D — Assessment Record › D2. Assessment Checklist Last updated: 28 Feb 2026 D2. Assessment Checklist AISDP module(s): Cross-cutting Granular per-sub-requirement checklist mapping every requirement of Articles 8–15, Article 17 , and Annex IV to specific questions, evidence expectations, and pass/fail criteria. Each sub-requirement of each article is a separate checklist item with its own evidence requirement. The AI System Assessor executes the checklist during the technical documentation assessment, recording evidence, determination (conformant, non-conformant, or partially conformant), and conditions for each item. Responsible party: Conformity Assessment Coordinator prepares. AI System Assessor executes. Regulations addressed: Articles 8–15 (all high-risk system requirements); Article 17 (QMS); Annex IV (technical documentation); Annex VI (b) (technical documentation assessment procedure). Key outputs Per-sub-requirement determination with evidence reference Summary conformity status per article --- ## D3. Non-Conformity Register URL: https://docs.standardintelligence.com/d3-non-conformity-register Breadcrumb: Artefact Taxonomy › Category D — Assessment Record › D3. Non-Conformity Register Last updated: 28 Feb 2026 D3. Non-Conformity Register AISDP module(s): Cross-cutting Records all non-conformities identified during assessment, classified by severity (critical, major, minor). Each entry records the finding, severity, root cause analysis, corrective action plan, assigned owner, deadline, and verification step confirming corrective action effectiveness. Critical non-conformities must be fully resolved before the Declaration can be signed; major non-conformities must have approved remediation plans. The register is also used during operational life for PMM findings, vulnerability SLA breaches, and audit findings. Responsible party: Conformity Assessment Coordinator maintains. AI Governance Lead reviews before Declaration signing. Regulations addressed: Article 43 (conformity assessment); Annex VI (internal control); Article 17 (QMS, non-conformity management ). Key outputs Per-finding severity classification with root cause Corrective action tracking with verification Pre-Declaration clearance status --- ## D4. Internal Conformity Assessment Report URL: https://docs.standardintelligence.com/d4-internal-conformity-assessment-report Breadcrumb: Artefact Taxonomy › Category D — Assessment Record › D4. Internal Conformity Assessment Report Last updated: 28 Feb 2026 D4. Internal Conformity Assessment Report AISDP module(s): Cross-cutting The assessment's formal concluding document: scope, methodology, assessor team, findings by phase, Non-Conformity Register summary, and overall assessment conclusion. Signed by the lead assessor and reviewed by the AI Governance Lead . Retained for ten years as the evidential foundation for the Declaration of Conformity . The lead assessor classifies each non-conformity by severity, reconciles findings across phases, and reaches an overall conclusion. Responsible party: Lead assessor signs. AI Governance Lead reviews. Regulations addressed: Article 43 ( conformity assessment ); Annex VI (internal control procedure); Article 18 (ten-year retention); Article 47 (Declaration of Conformity). Key outputs Three-workstream findings synthesis Overall conformity conclusion NCR summary with pre-Declaration status --- ## D5. Assessor Records Archive URL: https://docs.standardintelligence.com/d5-assessor-records-archive Breadcrumb: Artefact Taxonomy › Category D — Assessment Record › D5. Assessor Records Archive Last updated: 28 Feb 2026 D5. Assessor Records Archive AISDP module(s): Cross-cutting Documentation demonstrating that the assessment was conducted by competent, independent assessors. Contains conflict of interest declarations, competence evidence (qualifications, training records, CPD logs), and independence arrangements for each assessment. Conflict of interest declarations are completed before each assessment; competence evidence is maintained continuously. Responsible party: Conformity Assessment Coordinator maintains. Regulations addressed: Annex VI (internal control, assessor competence); Article 43 (conformity assessment procedures); ISO/IEC 42001:2023 (competence requirements). Key outputs Per-assessor conflict of interest declaration Competence evidence with CPD log Independence arrangement documentation --- ## D6. Readiness Assessment Checklist URL: https://docs.standardintelligence.com/d6-readiness-assessment-checklist Breadcrumb: Artefact Taxonomy › Category D — Assessment Record › D6. Readiness Assessment Checklist Last updated: 28 Feb 2026 D6. Readiness Assessment Checklist AISDP module(s): Cross-cutting Pre-assessment gate confirming that governance, technical, and documentation prerequisites are met before formal conformity assessment begins. Checks that all ten roles are appointed, independence requirements satisfied, AISDP modules substantially complete, evidence artefacts current, and testing gates passed. Prevents premature assessment. Responsible party: Conformity Assessment Coordinator executes. AI Governance Lead approves readiness. Regulations addressed: Article 43 (conformity assessment, preparatory control); Annex VI (internal control). Key outputs Per-prerequisite pass/fail with blocker identification Readiness determination with sign-off --- ## D7. Annual Oversight Audit Report URL: https://docs.standardintelligence.com/d7-annual-oversight-audit-report Breadcrumb: Artefact Taxonomy › Category D — Assessment Record › D7. Annual Oversight Audit Report Last updated: 28 Feb 2026 D7. Annual Oversight Audit Report AISDP module(s): Module 7 (Human Oversight), Module 12 (Post-Market Monitoring) Independent annual audit of monitoring infrastructure, escalation pathways, break-glass procedures , training currency, and non-retaliation commitments. The Internal Audit Assurance Lead tests each component of the oversight framework. Findings are tracked through the Non-Conformity Register . Covers whistle-blower protections under Directive 2019/1937. Responsible party: Internal Audit Assurance Lead conducts. AI Governance Lead receives. Regulations addressed: Article 14 (human oversight); Article 72 (PMM); Article 17 (QMS, internal audit); Directive 2019/1937 (whistleblower protection). Key outputs Per-component oversight effectiveness assessment Non-retaliation commitment verification --- ## D8. Regulatory Interaction Log URL: https://docs.standardintelligence.com/d8-regulatory-interaction-log Breadcrumb: Artefact Taxonomy › Category D — Assessment Record › D8. Regulatory Interaction Log Last updated: 28 Feb 2026 D8. Regulatory Interaction Log AISDP module(s): Cross-cutting Record of every substantive communication with competent authorities, notified bodies, and market surveillance authorities. Each entry is timestamped and attributed, recording meeting minutes, document submissions, questions raised, responses provided, and interim findings. Internal SLAs: five business days for routine queries, two business days for urgent queries. Serves as evidence of cooperative engagement, a mitigating factor under Article 99(7). Responsible party: Conformity Assessment Coordinator maintains. Regulations addressed: Article 21 (cooperation with competent authorities); Article 99(7) (mitigating factors); Annex VII (notified body interaction, where applicable). Key outputs Per-interaction timestamped record with document exchange log SLA compliance tracking --- ## E1. Declaration of Conformity URL: https://docs.standardintelligence.com/e1-declaration-of-conformity Breadcrumb: Artefact Taxonomy › Category E — Regulatory Instrument › E1. Declaration of Conformity Last updated: 28 Feb 2026 E1. Declaration of Conformity AISDP module(s): Cross-cutting Legally binding statement under Article 47 that the system conforms to all applicable requirements. Contains eight mandatory elements per Annex V: system identification, provider identification, sole responsibility statement, conformity statement listing all applicable legislation, data protection compliance, standards and specifications applied, notified body information, and signatory with date. Signing carries personal liability; an inaccurate Declaration exposes the signatory and the organisation to Tier 3 penalties under Article 99(5). The Declaration can only be signed when the assessment report supports it. Responsible party: AI Governance Lead signs. Legal and Regulatory Advisor witnesses and confirms legal sufficiency. Regulations addressed: Article 47 ( Declaration of Conformity ); Annex V (eight mandatory elements); Article 99(5) (penalties for misleading information); Article 48 (CE marking). Key outputs Signed Declaration with eight Annex V elements Machine-readable copy retained for ten years Translations per deployment jurisdiction --- ## E10. Regulator Contact Register URL: https://docs.standardintelligence.com/e10-regulator-contact-register Breadcrumb: Artefact Taxonomy › Category E — Regulatory Instrument › E10. Regulator Contact Register Last updated: 28 Feb 2026 E10. Regulator Contact Register AISDP module(s): Cross-cutting Per-jurisdiction register of authority contacts for incident notification and inspection. Lists for each deployment member state: the AI Act market surveillance authority, the NIS2 competent authority or CSIRT, the DORA competent financial authority (where applicable), and the ENISA reporting portal for CRA notifications. Includes contact details, preferred communication channels, reporting portals, and language requirements. Tested during incident response exercises to confirm details are current. Responsible party: AI Governance Lead maintains. Regulations addressed: Article 73 (serious incident reporting); NIS2 Article 23 (incident notification); DORA Article 19 (incident reporting); CRA Article 14 (vulnerability reporting). Key outputs Per-jurisdiction authority contact sheet Annual verification and exercise records --- ## E11. Break-Glass Procedure Documentation URL: https://docs.standardintelligence.com/e11-break-glass-procedure-documentation Breadcrumb: Artefact Taxonomy › Category E — Regulatory Instrument › E11. Break-Glass Procedure Documentation Last updated: 28 Feb 2026 E11. Break-Glass Procedure Documentation AISDP module(s): Module 7 (Human Oversight) Emergency override and shutdown procedures implementing Article 14 's requirement for the ability to stop, disable, or intervene. Documents activation criteria, authorised personnel, escalation paths, mandatory post-activation review, and evidence preservation requirements. The technical implementation ensures the override mechanism functions independently of the model inference pipeline. Tested before deployment and annually thereafter. Post-activation review is mandatory; each activation produces a dated record. Responsible party: AI Governance Lead approves the procedures. Technical SME designs the technical implementation. Regulations addressed: Article 14(4)(e) (ability to stop, override, or reverse); Article 14(4)(f) (ability to intervene or interrupt); Article 73(6) (evidence preservation); Article 15 (cybersecurity, last-resort defence). Key outputs Activation criteria with authorised personnel Independence verification (mechanism operates outside inference pipeline) Post-activation review protocol --- ## E2. EU Database Registration Entry URL: https://docs.standardintelligence.com/e2-eu-database-registration-entry Breadcrumb: Artefact Taxonomy › Category E — Regulatory Instrument › E2. EU Database Registration Entry Last updated: 28 Feb 2026 E2. EU Database Registration Entry AISDP module(s): Cross-cutting Formal submission to the EU database under Articles 49/71. Annex VIII Section A (provider of high-risk system), Section B (provider of non-high-risk system), or Section C (deployer) entries as applicable. Data fields populated from the AISDP, Declaration, and deployment information. Must cover all deployment jurisdictions and be updated on material changes. Updated again at decommission . Responsible party: Conformity Assessment Coordinator prepares and submits. Regulations addressed: Article 49 (registration by providers); Article 71 (EU database); Annex VIII (data elements, Sections A, B, and C). Key outputs Per-jurisdiction registration submission Update audit trail --- ## E3. CE Marking URL: https://docs.standardintelligence.com/e3-ce-marking Breadcrumb: Artefact Taxonomy › Category E — Regulatory Instrument › E3. CE Marking Last updated: 28 Feb 2026 E3. CE Marking AISDP module(s): Cross-cutting Visible declaration that the system conforms to all applicable requirements. Affixed on the system's user interface and documentation. Must be visible, legible, and indelible. Affixing without a completed conformity assessment is a Tier 2 offence under Article 99. A formal CE marking approval step in the deployment workflow requires explicit confirmation before affixation proceeds. Responsible party: AI Governance Lead or Conformity Assessment Coordinator authorises. Regulations addressed: Article 48 (CE marking requirements); Regulation (EC) No 765/2008 Article 30 (CE marking principles); Article 99 (Tier 2 penalties for improper affixation). Key outputs Authorisation record with pre-affixation checklist Evidence of marking placement (screenshot, documentation reference) --- ## E4. Instructions for Use (IFU) URL: https://docs.standardintelligence.com/e4-instructions-for-use-ifu Breadcrumb: Artefact Taxonomy › Category E — Regulatory Instrument › E4. Instructions for Use (IFU) Last updated: 28 Feb 2026 E4. Instructions for Use (IFU) AISDP module(s): Module 3 (Architecture and Design), Module 5 (Testing and Validation), Module 6 (Risk Management System), Module 7 (Human Oversight) Deployer-facing documentation required under Article 13 . Contains the system's intended purpose, capabilities and limitations, performance characteristics, human oversight requirements, maintenance obligations, and residual risk s with compensating controls deployers should apply. Compiled from AISDP Modules 3, 5, 6, and 7. Translated into each deployment jurisdiction's official language. Specifies which subgroups are affected by residual risks, the magnitude, conditions of materialisation, and recommended compensating controls. Responsible party: Technical SME drafts technical content. AI Governance Lead approves. Conformity Assessment Coordinator manages translations. Regulations addressed: Article 13 (transparency and provision of information to deployers); Article 13(3) (specific content requirements); Article 14 (3)(b) (deployer human oversight measures); Article 26 (deployer obligations). Key outputs Deployer-ready documentation with per-subgroup residual risk guidance Per-jurisdiction translations --- ## E5. AISDP (Twelve-Module Documentation Package) URL: https://docs.standardintelligence.com/e5-aisdp-twelve-module-documentation-package Breadcrumb: Artefact Taxonomy › Category E — Regulatory Instrument › E5. AISDP (Twelve-Module Documentation Package) Last updated: 28 Feb 2026 E5. AISDP (Twelve-Module Documentation Package) AISDP module(s): All twelve modules The master compliance document comprising twelve modules: System Identity, Development Process, Architecture and Design, Data Governance , Testing and Validation, Risk Management System, Human Oversight, Transparency and User Information, Robustness and Cybersecurity, Version Control and Change Management , FRIA , and Post-Market Monitoring . Every claim requires a supporting artefact in the evidence pack. A competent authority's first request in enforcement proceedings will be for this document. Assembled incrementally across the seven delivery phases; maintained as a living document with version history demonstrating continuous compliance. Responsible party: AI System Assessor compiles. Conformity Assessment Coordinator reviews completeness. AI Governance Lead approves. Regulations addressed: Article 11 (technical documentation); Article 18 (ten-year retention); Annex IV (documentation content requirements); Article 99(7) (thoroughness as mitigating factor). Key outputs Twelve-module living document with version history Per-claim evidence traceability --- ## E6. Serious Incident Reports URL: https://docs.standardintelligence.com/e6-serious-incident-reports Breadcrumb: Artefact Taxonomy › Category E — Regulatory Instrument › E6. Serious Incident Reports Last updated: 28 Feb 2026 E6. Serious Incident Reports AISDP module(s): Module 12 ( Post-Market Monitoring ) Formal notifications to the market surveillance authority under Article 73 for incidents meeting the Article 3(49) definition. Tiered deadlines: two days (widespread infringement or critical infrastructure disruption), ten days (death), fifteen days (default). An initial incomplete report is permitted under Article 73(5) . Uses pre-drafted templates with a shared incident fact sheet and regime-specific annexes for each applicable regulation (AI Act, GDPR , DORA , NIS2 , CRA ). Evidence is preserved and the system left unaltered per Article 73(6) before authority notification. Responsible party: AI Governance Lead owns. Legal and Regulatory Advisor coordinates with authorities. Regulations addressed: Article 73 (serious incident reporting); Article 3(49) (serious incident definition); GDPR Article 33 (breach notification, 72 hours); DORA Article 19 (major ICT incident, 4 hours); NIS2 Article 23 (significant incident, 24 hours); CRA Article 14 (vulnerability reporting, 24 hours). Key outputs Tiered-deadline incident notification Cross-regime parallel reporting with shared fact sheet Evidence preservation attestation --- ## E7. FRIA Report URL: https://docs.standardintelligence.com/e7-fria-report Breadcrumb: Artefact Taxonomy › Category E — Regulatory Instrument › E7. FRIA Report Last updated: 28 Feb 2026 E7. FRIA Report AISDP module(s): Module 11 (FRIA) Fundamental Rights Impact Assessment examining the system's impact on all potentially affected EU Charter rights, with attention to intersectional effects. Required for specified deployer categories under Article 27 (1): bodies governed by public law, private entities providing public services, and deployers conducting creditworthiness or insurance risk assessment s. Charter rights are mapped to the system's intended purpose and deployment context. Stakeholder consultation with deployers, affected person representatives, and domain experts informs the analysis. Article 27(4) notification to the market surveillance authority is prepared where required. Responsible party: DPO Liaison drafts. AI System Assessor reviews. AI Governance Lead approves. Regulations addressed: Article 27 (FRIA obligation); Article 27(1) (deployer categories); Article 27(4) (notification); EU Charter of Fundamental Rights (Articles 8, 15, 17, 21, 41, 47). Key outputs Per-right impact analysis with intersectional assessment Stakeholder consultation evidence Market surveillance authority notification --- ## E8. Data Protection Impact Assessment (DPIA) URL: https://docs.standardintelligence.com/e8-data-protection-impact-assessment-dpia Breadcrumb: Artefact Taxonomy › Category E — Regulatory Instrument › E8. Data Protection Impact Assessment (DPIA) Last updated: 28 Feb 2026 E8. Data Protection Impact Assessment (DPIA) AISDP module(s): Module 11 ( FRIA ) Assessment of risks to individuals' rights and freedoms from personal data processing under GDPR Article 35. Covers lawful basis for processing, data subject rights implications, and the tension between GDPR storage limitation and the AI Act's ten-year retention. Follows EDPB guidelines (WP 248 rev.01). Cross-references FRIA findings to avoid duplication. Distinct from the FRIA; the two may share evidence but must reach independent conclusions. Responsible party: DPO Liaison drafts. AI Governance Lead approves. Regulations addressed: GDPR Article 35 ( DPIA obligation); GDPR Article 36 (prior consultation); Article 10(5) (special category data). Key outputs Processing description with legal basis Risk assessment with mitigating measures GDPR-AI Act retention reconciliation --- ## E9. Affected Person Notification Templates URL: https://docs.standardintelligence.com/e9-affected-person-notification-templates Breadcrumb: Artefact Taxonomy › Category E — Regulatory Instrument › E9. Affected Person Notification Templates Last updated: 28 Feb 2026 E9. Affected Person Notification Templates AISDP module(s): Module 8 (Transparency and User Information) Templates and processes for informing individuals of the system's involvement in decisions affecting them, and for providing explanations of individual outcomes. The mechanism through which the organisation meets its transparency obligations to individuals. Designed to satisfy both the AI Act's right to explanation and GDPR 's automated decision-making provisions. Explanation methodology, scope, and limitations are documented in AISDP Module 3 ; the delivery mechanism and templates in Module 8. Responsible party: DPO Liaison drafts. Legal and Regulatory Advisor reviews. Regulations addressed: Article 86 (right to explanation); Article 50 (transparency obligations); GDPR Article 22 (automated individual decision-making); GDPR Articles 13–14 (information to data subjects). Key outputs Plain-language notification template Per-decision explanation methodology --- ## Five-Category Classification URL: https://docs.standardintelligence.com/five-category-classification Breadcrumb: Artefact Taxonomy › Five-Category Classification Last updated: 28 Feb 2026 Taxonomy Overview AISDP module(s): Cross-cutting The 61 artefacts sit along a spectrum from pure engineering output to formal legal instrument. Five categories capture the meaningful distinctions. Artefacts on the left of the spectrum are generated as byproducts of building and running the system; artefacts on the right carry legal force or are submitted directly to regulators. Category A (engineering work-product, 16 artefacts) contains items generated automatically by the development and operations pipeline. These exist because engineers build systems, not because regulators require documentation. Pipeline execution logs, model registry entries, and SBOMs fall here. Category B (compliance evidence, 14 artefacts) contains work-products specifically retained, structured, or assembled to substantiate a regulatory claim. Often auto-generated, their format, retention period, and traceability metadata are shaped by compliance requirements. Evidence packs, distributional analysis reports, and dataset documentation belong here. Category C (governance decision records, 12 artefacts) contains internal records of decisions, approvals, risk acceptance, and ongoing management. These document the organisation's reasoning rather than the system's technical state. The CDR , risk register , and residual risk sign-offs sit in this category. Category D (assessment records, 8 artefacts) contains artefacts of the formal conformity assessment process. These are produced during or for the structured evaluation that precedes the Declaration of Conformity . Assessment plans, checklists, and the assessment report belong here. Category E (regulatory instruments, 11 artefacts) contains documents with legal force: submitted to authorities, shared with deployers as formal outputs, or directly referenced by regulation. The Declaration of Conformity, serious incident reports, and the AISDP itself fall in this category. Key outputs Five-category taxonomy with placement criteria Per-category character description Spectrum Visualisation AISDP module(s): Cross-cutting The spectrum runs from engineering work-product (left) to regulatory instrument (right). Each column represents one taxonomy category; artefacts within a column share the same fundamental character. Category A artefacts would exist in some form even without the EU AI Act; they are CI/CD pipeline byproducts. Category B artefacts would exist in a less structured form; the ten-year retention, traceability metadata, and formal report structure are compliance additions. Category C artefacts would not exist without a compliance obligation; they document deliberate organisational choices. Category D artefacts exist solely because the Act mandates conformity assessment before market placement. Category E artefacts carry legal force; errors in these documents trigger Article 99 penalty exposure. The largest clusters appear at the engineering end (16 in Category A, 14 in Category B), reflecting the AISDP's design philosophy that compliance artefacts are generated as CI/CD automation byproducts rather than standalone documentation exercises. Key outputs Five-column spectrum mapping all 61 artefacts Per-category artefact counts Distribution and Design Philosophy AISDP module(s): Cross-cutting The distribution is roughly balanced. Engineering work-product (16) and compliance evidence (14) together account for 30 artefacts, reflecting the principle that compliance documentation should emerge from the engineering workflow rather than require a separate documentation effort. Governance decision records (12) are the most legally consequential cluster in practice. A competent authority examining a system will look first at the AISDP and Declaration (Category E), but its investigation into organisational culpability will focus on the CDR, risk register, residual risk sign-offs, and provider boundary determination (Category C). These artefacts document what the organisation knew, when it knew it, and what judgements it made. Assessment records (8) form the smallest category but carry disproportionate weight. They are the direct evidentiary basis for the Declaration of Conformity. A deficient assessment report undermines the Declaration itself. Regulatory instruments (11) are the only artefacts that leave the organisation. They are seen by deployers, competent authorities, notified bodies , market surveillance authorities, and affected persons. Every other artefact exists behind the organisation's walls until an authority requests access. Key outputs Distribution analysis across five categories Per-category regulatory significance assessment Boundary Cases AISDP module(s): Cross-cutting Three artefacts sit at the boundary between categories and could reasonably be placed in either. Each placement decision is documented with the rationale. The threat model (placed in B, could be A) presents the first boundary case. Threat modelling is standard security practice, placing it close to engineering work-product. The combined STRIDE/ATLAS/OWASP/PASTA framework and fundamental-rights impact scoring, however, are compliance-driven additions that push it into Category B. Break-glass procedure documentation (placed in E, could be C) sits at the second boundary. The operational motivation for shutdown procedures is strong, suggesting Category C governance. The design is directly shaped by Article 14 's requirement for the ability to stop, disable, or intervene, and the procedures are referenced in the Instructions for Use shared with deployers. This regulatory character places them in Category E. The FRIA report (placed in E, could be C) marks the third boundary. An internal assessment in character, the FRIA resembles other Category C governance records. Article 27 (4) requires notification to the market surveillance authority, however, transforming it from an internal document into one with regulatory consequences. This notification requirement places it in Category E. Key outputs Three boundary case analyses with placement rationale Using the Taxonomy AISDP module(s): Cross-cutting The taxonomy serves three practical purposes. First, it determines collection methodology: Category A artefacts are auto-generated by CI/CD pipelines, Category B artefacts are automated with human review or structured templates, Category C artefacts require human judgement, Category D artefacts follow the assessment workflow, and Category E artefacts require legal review and formal approval. Second, it guides retention and storage decisions. Categories A and B share the ten-year retention obligation under Article 18 ; their storage tiers reflect access frequency. Category C artefacts require immutable storage with audit trails because they document organisational knowledge. Category D artefacts are retained for ten years as the evidentiary basis for the Declaration. Category E artefacts have jurisdiction-specific requirements including translation and accessibility. Third, it informs quality review cycles. Category A artefacts are validated by pipeline gates; human review is exception-based. Category B artefacts undergo periodic completeness and freshness checks. Category C artefacts are reviewed at governance gates and quarterly. Category D artefacts are reviewed during the assessment cycle. Category E artefacts undergo legal review before each issuance. Key outputs Per-category collection methodology guidance Per-category retention and storage guidance Per-category quality review cycle guidance --- ## Official Template Sources URL: https://docs.standardintelligence.com/official-template-sources Breadcrumb: Artefact Taxonomy › Cross-Cutting Analysis › Official Template Sources Last updated: 28 Feb 2026 Official Template Sources AISDP module(s): Cross-cutting Seven artefacts have official or authoritative template sources as of February 2026. The European Commission published a draft serious incident reporting template on 26 September 2025, with the final version expected in August 2026. ECNL and the Danish Institute for Human Rights published a FRIA guide with a downloadable questionnaire template. Three further artefacts have their structure prescribed directly by the regulation's annexes: Annex V ( Declaration of Conformity , eight mandatory elements), Annex VIII (EU database registration, Sections A/B/C data fields), and Annex IV (AISDP documentation structure). The Commission's own FRIA template under Article 27 (5) had not been published as of February 2026. No other official fillable templates have been published by the Commission, AI Office , or any national competent authorit y for the remaining 54 artefacts. Custom templates for all 61 artefacts are provided in the companion Artefact Template Pack. Key outputs Per-artefact official template source identification Download links for published templates Gap identification for pending official templates --- ## Regulatory Mapping URL: https://docs.standardintelligence.com/regulatory-mapping Breadcrumb: Artefact Taxonomy › Cross-Cutting Analysis › Regulatory Mapping Last updated: 28 Feb 2026 Regulatory Mapping AISDP module(s): Cross-cutting The 61 artefacts address obligations across ten legal instruments: the EU AI Act (Regulation (EU) 2024/1689), GDPR (Regulation (EU) 2016/679), CRA (Regulation (EU) 2024/2847), DORA (Regulation (EU) 2022/2554), NIS2 (Directive (EU) 2022/2555), the EU Charter of Fundamental Rights, Directive 2019/1937 (Whistleblower Protection), Regulation (EC) No 765/2008 ( CE marking ), Directive 2019/790 (Copyright in the Digital Single Market), and ISO/IEC standards (42001, 23894, 25010, 25012, 27001, 38507). Every artefact maps to at least one AI Act provision. Fourteen artefacts address GDPR requirements in addition. Nine address CRA, DORA, or NIS2 cybersecurity obligations. The heaviest regulatory concentration falls on four AI Act articles: Article 9 (risk management, referenced by 12 artefacts), Article 12 (record-keeping, 11 artefacts), Article 72 (PMM, 10 artefacts), and Article 18 (documentation retention, 9 artefacts). Key outputs Per-artefact regulatory provision mapping Per-regulation artefact count and coverage analysis --- ## Responsible Party Distribution URL: https://docs.standardintelligence.com/responsible-party-distribution Breadcrumb: Artefact Taxonomy › Cross-Cutting Analysis › Responsible Party Distribution Last updated: 28 Feb 2026 Responsible Party Distribution AISDP module(s): Module 2 (Development Process) All ten roles from the AISDP governance framework own artefacts. The CI/CD pipeline (automated) generates the largest share (16 artefacts in Category A). The AI System Assessor is the most frequently named human owner (11 artefacts across Categories B, C, and D). The Conformity Assessment Coordinator owns 8 artefacts concentrated in Categories B and D. The AI Governance Lead is the approving authority for 15 artefacts but is the primary owner of only 4. The Technical SME appears as responsible party or contributor on 20 artefacts, reflecting the role's position at the engineering-compliance boundary. The DPO Liaison owns or co-owns 5 artefacts concentrated in data governance and regulatory instruments. The Legal and Regulatory Advisor owns 3 artefacts (C7, C10, C11) but reviews or witnesses 6 others. The Internal Audit Assurance Lead and Classification Reviewer each own a single artefact (D7 and C1 respectively) but their independence function makes those artefacts disproportionately significant. Key outputs Per-role artefact ownership count Per-role approval authority count --- ## Retention Requirements URL: https://docs.standardintelligence.com/retention-requirements Breadcrumb: Artefact Taxonomy › Cross-Cutting Analysis › Retention Requirements Last updated: 28 Feb 2026 Retention Requirements AISDP module(s): Module 10 (Version Control) Article 18 establishes the ten-year retention baseline for technical documentation. This applies directly to Categories B, D, and E. Category A artefacts inherit the retention requirement because they constitute the evidence underlying the AISDP claims. Category C governance records are retained for the same period because they document the organisation's state of knowledge. Storage tiers reflect access frequency. Category E artefacts remain in hot storage throughout the system's operational life; they may be requested by authorities at any time. Category A pipeline logs move to cold storage after the operational monitoring period but must remain retrievable. Deletion verification procedures apply at the end of the retention period, with specific attention to GDPR storage limitation reconciliation. Key outputs Per-category retention period specification Storage tier guidance by access frequency Deletion verification protocol --- ## Update Frequencies URL: https://docs.standardintelligence.com/update-frequencies Breadcrumb: Artefact Taxonomy › Cross-Cutting Analysis › Update Frequencies Last updated: 28 Feb 2026 Update Frequencies AISDP module(s): Cross-cutting Artefact update frequencies fall into four bands. Continuous artefacts (most of Category A) are updated at every pipeline run, deployment, or inference request; they are never out of date because they are auto-generated. Event-driven artefacts (portions of Categories B and C) are updated when a triggering event occurs: a system change, a monitoring finding, a regulatory development, or an incident. Periodic artefacts ( quarterly PMM review minutes, annual audit reports, regulatory guidance monitoring logs) follow a defined schedule. Milestone artefacts (the Declaration of Conformity , the assessment report, the CE marking authorisation) are produced at specific lifecycle points and revised only on substantial modification . Evidence freshness tracking in the evidence register (B2) must reflect these bands. A continuous artefact flagged as overdue signals a pipeline failure. A periodic artefact flagged as overdue signals a governance process failure. Key outputs Four-band update frequency classification Per-artefact frequency assignment Freshness monitoring guidance --- # Report Errata --- ## Report Errata URL: https://docs.standardintelligence.com/errata Breadcrumb: Report Errata Last updated: 28 Feb 2026 ← Back to Documentation Report an Issue Documentation Errata Found an error, inaccuracy, or outdated section in our documentation? Submit a report here. We review every submission, and your contribution helps us maintain the accuracy of our technical materials. 👤 Your Details So we can follow up if we need clarification First name * Last name * Email * Organisation Role or context Helps us understand your perspective, e.g. "Developer integrating the API" or "Compliance officer" 📍 Error Location Where in the documentation did you find this issue? Documentation section * Select a section Getting Started Development: Model Selection Development: Data Governance Development: Architectures Development: Version Control Development: CI/CD Pipelines Security: Threat Modelling Security: Cybersecurity Foundations Security: API Security Security: DevSecOps & Testing Security: Incident Response Security: Supply Chain Security: Cross-Regulatory Mapping Governance: Risk Assessment Governance: Conformity Assessment Governance: Certification, Standards & Legal Governance: Regulator Interaction Governance: End-to-End Delivery Operations: Post-Market Monitoring Operations: Operational Oversight Operations: End-of-Life Resources Artefact Taxonomy Other / Unsure Documentation version Page URL or title * Paste the full URL of the affected page, or type the page title if the URL is unavailable Section or heading The specific heading, code block, table, or paragraph where the error appears Bearer Tokens" or "Step 3 of the quickstart"'> 🔍 Error Details Describe what is wrong and what should change Error type * Select type Factual inaccuracy Technical error (wrong code, params, output) Outdated content Missing information Broken or incorrect link Typo or grammatical error Inconsistency between pages Misleading or ambiguous wording Formatting or display issue Other Severity * Select severity Critical: could cause data loss, security risk, or compliance failure High: blocks a user from completing a task Medium: causes confusion, requires a workaround Low: cosmetic, typo, or minor wording What the documentation currently says * Copy the relevant text, code snippet, or describe the current state as precisely as you can What it should say * Provide the corrected text, code, or a description of what the accurate content would be Impact or reproduction steps How did you discover this error? What happened when you followed the documentation? 📎 Supporting Evidence Optional references to strengthen the report Source or reference URL Link to an authoritative source that confirms the correction, e.g. API changelog, EU regulation article, GitHub issue Environment details SDK version, operating system, browser, or any context that may help reproduce the issue Additional notes I consent to Standard Intelligence storing this information for the purpose of investigating and resolving this errata report. My details will only be used to follow up on this submission. Your data is handled in accordance with our privacy policy. Errata submissions are reviewed within five working days. Submit Errata Report ✓ Report received Thank you for helping us improve our documentation. We will review your submission and, if needed, reach out via the email you provided. ← Back to Documentation ---