This article maps SOC 2, ISO/IEC 27001, and HIPAA onto AI workloads, identifies where the existing controls from Articles 1 through 14 of this module satisfy each requirement, and shows how the COMPEL Domain D13 maturity discipline produces the evidence packages compliance audits consume.
SOC 2 for AI workloads
SOC 2 is the American Institute of Certified Public Accountants (AICPA) attestation framework for service organizations, organized around five Trust Services Criteria: Security, Availability, Processing Integrity, Confidentiality, and Privacy. AI workloads operated as part of a service offering inherit the service’s SOC 2 obligations and add control surfaces the framework was not specifically designed to address.
Security criterion. The Security criterion requires controls that protect the system against unauthorized access. For AI workloads, the relevant controls from this module include authentication and authorization at the inference endpoint (Article 6), credential management (Article 7), network isolation (Article 8), encryption (Article 9), and the broader access governance mapped by the threat model (Article 1). Audit evidence consists of the control configurations, the operational logs (Article 13) showing the controls operating as designed, and the incident-response records (Article 14) demonstrating the controls’ effectiveness when tested.
Availability criterion. The Availability criterion requires controls supporting the system’s availability for operation as committed. For AI workloads the criterion picks up the rate-limiting and abuse-prevention controls (Article 6), the network resilience patterns (Article 8), and the operational monitoring that supports the availability commitment. The criterion also picks up the supply-chain resilience controls (Article 12) where availability depends on upstream dependencies.
Processing Integrity criterion. The Processing Integrity criterion requires that system processing be complete, valid, accurate, timely, and authorized. AI workloads stress this criterion in distinctive ways: the model is the processing engine, and the validity of the model’s outputs is a property the operator must demonstrate. The relevant controls include the input and output validation patterns (Articles 2, 3, 6), the model integrity verification (Articles 4, 12), the training-data integrity (Article 5), and the monitoring that detects processing degradation (Article 13).
Confidentiality criterion. The Confidentiality criterion requires controls protecting information designated as confidential. For AI workloads the criterion picks up the encryption controls (Article 9), the access controls on training data and model artefacts (Articles 4, 7, 8), and the model-output handling that prevents inadvertent confidentiality breaches (Article 3). The criterion also picks up the privacy-preserving controls where confidentiality of training data is at stake.
Privacy criterion. The Privacy criterion requires controls supporting collection, use, retention, disclosure, and disposal of personal information consistent with the entity’s privacy commitments. For AI workloads the criterion adds the obligations specific to AI: the documentation of training-data sources, the controls that prevent inadvertent disclosure of training data through model outputs, and the model-card discipline that documents how personal information is used in the system.
The SOC 2 evidence package for AI workloads includes the threat model (Article 1), the model card with controls mappings, the operational logs demonstrating the controls, and the incident-response history demonstrating responsiveness. The audit firm reads the evidence against the criteria and issues the report.
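The evidence-package assembly can be sketched as a simple completeness check: every Trust Services Criterion must map to at least one collected artefact before the package goes to the auditor. The artefact names below are illustrative placeholders, not a real inventory.

```python
# Sketch: verify each SOC 2 Trust Services Criterion has at least one
# supporting artefact. Artefact names are hypothetical examples.

CRITERIA = ["Security", "Availability", "Processing Integrity",
            "Confidentiality", "Privacy"]

# Hypothetical mapping from criterion to collected evidence artefacts.
evidence = {
    "Security": ["threat-model.pdf", "inference-auth-config.json"],
    "Availability": ["rate-limit-policy.yaml", "uptime-report.csv"],
    "Processing Integrity": ["model-signature-verification.log"],
    "Confidentiality": ["kms-key-policy.json"],
    "Privacy": [],  # gap: no artefact collected yet
}

def missing_criteria(evidence: dict[str, list[str]]) -> list[str]:
    """Return criteria with no supporting evidence artefact."""
    return [c for c in CRITERIA if not evidence.get(c)]

print(missing_criteria(evidence))  # → ['Privacy']
```

The same check generalizes to any framework whose requirements can be enumerated; the point is that a gap surfaces before the audit rather than during it.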
ISO/IEC 27001 for AI workloads
ISO/IEC 27001 is the international standard for Information Security Management Systems (ISMS). Annex A of the 2022 revision enumerates 93 controls organized into four themes (Organizational, People, Physical, Technological); the certified organization must implement each control or document its deliberate exclusion in the Statement of Applicability. AI workloads are in scope of any 27001 certification that covers the systems they run on, and the controls largely map cleanly to the practices in this module.
The Annex A controls (2022 numbering) most relevant to AI workloads include A.5.1 (policies for information security, extended to include AI security policy), A.5.9 (inventory of information and other associated assets, extended to include model artefacts and training datasets as managed assets, mapping to Article 4), A.5.15 through A.5.18 (access control and identity management — Articles 6, 7), A.8.24 (use of cryptography — Article 9), A.8.15 and A.8.16 (logging and monitoring — Article 13), A.8.20 through A.8.22 (network security and segregation — Article 8), A.5.21 and A.8.25 through A.8.28 (ICT supply chain and secure development, including the AI supply chain — Article 12), A.5.24 through A.5.28 (information security incident management — Article 14), and A.5.29 and A.5.30 (information security during disruption and ICT readiness for business continuity).
The 27001 certification requires the operator to demonstrate that the management system itself operates as designed: that risks are identified, that controls are selected and implemented, that effectiveness is measured, that incidents are managed, and that the system is reviewed and improved on a defined cadence. The COMPEL Domain D13 maturity rubric provides the evidence for the AI-specific portion of the management system; the broader ISMS provides the framework into which D13 plugs.
ISO/IEC 42001:2023 https://www.iso.org/standard/81230.html is the AI-specific complement to ISO 27001 and is increasingly co-certified by organizations that operate substantial AI workloads. Annex A of ISO 42001 enumerates AI-specific controls (for example, A.6 on the AI system life cycle and A.7 on data for AI systems) that extend the 27001 control set; the controls are designed to overlay rather than replace the 27001 controls, and the audit firms that certify both standards have converged on integrated audit approaches.
HIPAA for AI workloads
The Health Insurance Portability and Accountability Act (HIPAA), with its accompanying Security Rule, applies to AI workloads that process Protected Health Information (PHI) for covered entities and business associates in the United States healthcare ecosystem. The Security Rule organizes its requirements into Administrative, Physical, and Technical Safeguards that the covered organization must implement.
The Technical Safeguards most relevant to AI workloads include Access Control (mapped to Articles 6 and 7), Audit Controls (mapped to Article 13), Integrity (mapped to Articles 4 and 12), Person or Entity Authentication (Article 6), and Transmission Security (Articles 8 and 9). The Administrative Safeguards include Security Management Process (the threat model from Article 1, the risk assessment, and the broader TRiSM discipline from Article 10), Workforce Security, Information Access Management, Security Awareness and Training, Security Incident Procedures (Article 14), Contingency Plan, and Evaluation. The Physical Safeguards apply to the underlying infrastructure and are typically inherited from the cloud provider’s certifications.
HIPAA also requires Business Associate Agreements (BAAs) between covered entities and any party that processes PHI on their behalf. AI workloads that consume hosted-model APIs require BAA coverage from the model provider or the use of the provider’s HIPAA-eligible services with the additional configuration the BAA requires. The supply-chain controls from Article 12 produce the AI-BOM evidence that documents which providers are in scope.
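The BAA-scope check against the AI-BOM can be sketched as follows. The component fields (`processes_phi`, `baa_in_place`) are illustrative assumptions, not part of any published BOM schema; a real AI-BOM (for example, one based on CycloneDX) would carry this information as custom properties.

```python
# Sketch: flag AI-BOM components that process PHI but lack BAA coverage.
# The field names below are hypothetical, chosen for illustration only.

ai_bom = {
    "components": [
        {"name": "hosted-llm-api",    "processes_phi": True,  "baa_in_place": True},
        {"name": "embedding-service", "processes_phi": True,  "baa_in_place": False},
        {"name": "eval-harness",      "processes_phi": False, "baa_in_place": False},
    ]
}

def baa_gaps(bom: dict) -> list[str]:
    """Components in PHI scope without a Business Associate Agreement."""
    return [c["name"] for c in bom["components"]
            if c["processes_phi"] and not c["baa_in_place"]]

print(baa_gaps(ai_bom))  # → ['embedding-service']
```

Run on every AI-BOM revision, the check turns BAA coverage from a contract-review question into a supply-chain gate.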
The HIPAA evidence package for AI workloads includes the risk assessment, the policies and procedures, the training records, the access logs (Article 13), the incident records (Article 14), the BAAs, and the technical configuration evidence. An Office for Civil Rights (OCR) audit reads the evidence against the Security Rule and assesses compliance.
How COMPEL Domain D13 produces the evidence
The COMPEL Domain D13 maturity assessment is the discipline that produces the evidence packages the compliance audits consume. The maturity rubric (Module 1.3 of this body of knowledge) defines what each maturity level looks like for security infrastructure; the practices in Articles 1 through 14 of this module are the operational implementation of the rubric; and the artefacts the practices produce — threat models, model cards, AI-BOM, inference logs, incident records, red-team reports — are the evidence the audits read.
The discipline that links the operational practice to the audit evidence is the COMPEL TRiSM operating model from Article 10. The unified registry, the integrated review, and the integrated incident response produce the evidence as a side effect of running the program well, rather than as a separate compliance-evidence-collection effort. The result is that an organization operating Domain D13 at Advanced or Strategic maturity passes SOC 2, ISO 27001, ISO 42001, and HIPAA audits with evidence packages that come substantially from the same source-of-truth registry and that present consistent answers across frameworks.
The European Union’s AI Act, Article 15 https://artificialintelligenceact.eu/article/15/, explicitly contemplates the integrated approach: the technical-documentation provisions for high-risk systems require an evidence package that demonstrates compliance with the cybersecurity, accuracy, and robustness requirements together. The NIST AI Risk Management Framework https://www.nist.gov/itl/ai-risk-management-framework and its profiles are converging with the international standards on a common set of expected evidence. NIST SP 800-218A https://csrc.nist.gov/pubs/sp/800/218/a/final provides the Secure Software Development Framework practices for generative AI that satisfy the engineering-grade evidence requirement.
The OWASP Top 10 for Large Language Model Applications https://owasp.org/www-project-top-10-for-large-language-model-applications/, the MITRE ATLAS knowledge base https://atlas.mitre.org/, and Gartner’s AI TRiSM trend analysis https://www.gartner.com/en/articles/gartner-top-strategic-technology-trends-for-2024 each contribute the threat-catalog and tooling-maturity references the compliance evidence cites.
Maturity Indicators
Foundational. AI workloads are not in scope of the organization’s compliance audits, or are in scope but treated separately from the broader compliance program. Each audit framework is satisfied through independent evidence collection. The team cannot produce a unified compliance posture for any AI workload.
Applied. AI workloads are in scope of at least the highest-priority compliance frameworks (typically SOC 2 or ISO 27001). Evidence is collected for each framework but the collection is partially manual and partially redundant across frameworks. The team has mapped the controls from this module onto the relevant framework requirements.
Advanced. Compliance evidence for AI workloads is produced from the unified registry the TRiSM discipline (Article 10) maintains. Evidence packages satisfy multiple frameworks from common sources. The threat model (Article 1), the AI-BOM (Article 12), the operational logs (Article 13), and the incident records (Article 14) are the source-of-truth artefacts the audits consume. Domain D13 maturity assessments are performed and the gaps drive the next compliance cycle.
Strategic. Compliance is a first-class governance surface. AI-specific frameworks (the EU AI Act, ISO 42001, the NIST AI RMF) are addressed alongside the general-purpose frameworks with integrated evidence packages. Audit findings drive permanent control improvements verified by red-team engagements (Article 11). The compliance posture is itself audited on a regular schedule. The organization contributes to industry compliance-harmonization initiatives that reduce the duplication across frameworks.
Practical Application
A team operating AI workloads under a compliance regime should make three changes this quarter. First, build the unified evidence map: for each compliance framework in scope, enumerate the controls the framework requires and identify which of the Module 1.8 articles produces the corresponding evidence. The mapping immediately surfaces gaps where evidence is missing and overlaps where the same evidence satisfies multiple frameworks.
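The unified evidence map described above can be sketched as a mapping from framework requirement to artefacts, from which both gaps and cross-framework overlaps fall out mechanically. All requirement IDs and artefact names below are illustrative placeholders, not a complete controls catalogue.

```python
# Sketch: surface (a) gaps where a requirement has no evidence artefact and
# (b) overlaps where one artefact satisfies requirements in several
# frameworks. Entries are hypothetical examples for illustration.
from collections import defaultdict

evidence_map = {
    ("SOC 2", "CC6.1 logical access"):       ["inference-auth-config.json"],
    ("ISO 27001", "A.5.15 access control"):  ["inference-auth-config.json"],
    ("HIPAA", "164.312(a) access control"):  ["inference-auth-config.json"],
    ("SOC 2", "CC7.2 monitoring"):           ["inference-logs/"],
    ("HIPAA", "164.312(b) audit controls"):  ["inference-logs/"],
    ("ISO 27001", "A.8.15 logging"):         [],  # gap: evidence missing
}

# Requirements with no supporting artefact.
gaps = [req for req, artefacts in evidence_map.items() if not artefacts]

# Artefacts reused across more than one framework.
overlaps = defaultdict(set)
for (framework, _req), artefacts in evidence_map.items():
    for artefact in artefacts:
        overlaps[artefact].add(framework)
shared = {a: sorted(f) for a, f in overlaps.items() if len(f) > 1}

print(gaps)    # requirements that need evidence before the next audit
print(shared)  # artefacts that serve multiple frameworks at once
```

The `gaps` list becomes the evidence-collection backlog; the `shared` map quantifies how much of the audit burden the unified registry eliminates.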
Second, perform the COMPEL Domain D13 maturity assessment using the rubric in Module 1.3 of this body of knowledge. The assessment produces the gap analysis that distinguishes operational maturity from compliance posture and drives the integrated investment plan for the next cycle.
Third, schedule the integrated audit conversation with the audit firm or the internal audit function, presenting the unified evidence package and the maturity assessment together. The conversation establishes the operating pattern for future audits and surfaces the evidence-collection improvements that make subsequent audits faster and cheaper.
This article closes Module 1.8. The fifteen articles together constitute the COMPEL foundations-level body of knowledge for Domain D13 — Security and Infrastructure. The maturity progression from Foundational through Strategic is the multi-year program the organization runs against the rubric, and the practices in each article are the operational implementation that the program advances. Module 2.8 (in the practitioner-level body of knowledge) extends this material with the deeper engineering practice; Module 3.8 (in the governance-professional body of knowledge) extends it with the governance discipline; Module 4.8 (in the leadership body of knowledge) extends it with the executive frame. Domain D13 is one of twenty domains the COMPEL framework addresses; the organization’s overall AI security posture is one component of the broader transformation maturity the framework as a whole supports.
© FlowRidge.io — COMPEL AI Transformation Methodology. All rights reserved.