ISO 42001:2023 controls: All you need to know

Looking to streamline the work for ISO 42001 compliance?
ISO/IEC 42001 offers a structured set of controls that guides organizations in developing, managing, and deploying AI systems responsibly. These controls form the core of an organization’s artificial intelligence management system (AIMS), serving as safeguards for transparency, bias prevention, human oversight, and explainability across the AI lifecycle.
Understanding which controls apply is one of the first steps to preparing for certification and avoiding material compliance gaps. Implementing relevant controls also helps you prepare for other AI compliance frameworks. To help you navigate the process, we’ll discuss:
- What ISO 42001 AI controls are
- The standard’s nine main control areas
- Tips and best practices for control selection
What are ISO 42001 controls?
ISO 42001 controls are normative requirements outlined in Annex A of the standard, which organizations select and implement as part of their AIMS. "Normative" in this context means that any control deemed applicable, along with the policies and procedures it covers, must be implemented and evidenced to demonstrate alignment with the standard.
ISO 42001 controls are not uniformly applicable; which ones apply depends on the organization's AI system risks and use cases. The standard also provides Annexes B, C, and D as further references to help teams design and justify their controls.
Annex B shares normative implementation guidance for controls, useful for planning and designing your control setup. The remaining two annexes are informative and don’t introduce additional expectations. Annex C helps organizations map objectives and common risk sources across AI systems, while Annex D offers examples, use cases, and additional context to help interpret and apply ISO 42001 controls and requirements.
Together, these annexes also support organizations across:
- Policy development and resource allocation
- Risk management and monitoring
- Third-party relationships and vendor governance
{{cta_withimage7="/cta-modules"}} | ISO 42001 checklist
Purpose of ISO 42001 controls
The main intent behind ISO 42001 controls is to establish an efficient framework for developing and managing AI systems within an organization. Key objectives include:
- Responsible use of AI: Ensuring your AI systems are developed, deployed, and used in ways that align with organizational and ethical expectations, such as transparency, fairness, and human oversight
- Accountability: Defining roles, responsibilities, and decision-making authority for AI systems to showcase accountability across the AI lifecycle
- Risk management: Identifying, evaluating, and mitigating AI risks, such as bias and model drift, through regular risk assessments and ongoing oversight
- Fostering innovation: Enabling teams to develop and deploy AI systems responsibly by baking risk management and governance into operational workflows
When validated during a certification audit, ISO 42001 controls provide objective evidence of governance and risk management, which signals customer trust and adds differentiation in the market. Adopting the controls also establishes a durable compliance foundation, making it easier to align with other AI regulations and standards, such as the EU AI Act, OECD AI Principles, CPS 234, and NIST AI RMF.
Bonus read: Learn about the best compliance audit software for modern teams.
How many controls does ISO 42001 have?
ISO 42001’s Annex A lists 38 controls organized into nine control areas, each addressing a different aspect of AI governance and risk management. The number of controls you adopt depends on your organizational context and risk profile, and you can also implement additional controls outside what’s listed in Annex A to maintain adequate AI safeguards. Because of this flexibility, ISO 42001 can work for organizations of all sizes across many sectors.
An overview of ISO 42001 Annex A controls
Annex A controls are split into nine areas, each covering a different aspect of AI management:
- A.2 Policies related to AI
- A.3 Internal organization
- A.4 Resources for AI systems
- A.5 Assessing impacts of AI systems
- A.6 AI system life cycle
- A.7 Data for AI systems
- A.8 Information for interested parties of AI systems
- A.9 Use of AI systems
- A.10 Third‑party and customer relationships
The level of rigor with which you implement these controls depends on your industry, the nature of your AI systems, and your use cases. For example, high-stakes sectors such as healthcare, public safety, autonomous vehicles, and heavy machinery often require stricter controls, as failures in these systems can have catastrophic consequences. Controls in these settings must be designed for faster detection, clearer escalation, and stronger human oversight than would be required for lower-risk AI use cases.
The following sections will help you understand the expectations under each of the nine control areas.
A.2 Policies related to AI
This control area requires organizations to create and document AI-related policies. The policies need to provide management direction and support for AI systems according to business needs.
Organizations must also assess the alignment of the AI policy with other organizational policies, identifying where they are affected by AI governance objectives. Your AI policies should be reviewed and updated regularly for effectiveness, especially following changes to AI use cases, regulatory obligations, and organizational goals.
A.3 Internal organization
This control area emphasizes the importance of transparent internal AI governance by assigning roles and responsibilities within the AIMS. Organizations must define and allocate owners for key AI-related activities at every stage of the AI lifecycle, as well as document a process for reporting AI-related concerns.
{{cta_withimage28="/cta-blocks"}} | Vanta’s AI Security Assessment
A.4 Resources for AI systems
This control domain guides organizations toward allocating sufficient and appropriate resources to effectively manage their AI systems. It requires documenting resources for these dimensions:
- Human resources, including people involved in development, deployment, operation, change management, maintenance, transfer, and decommissioning of AI systems
- System and computing resources
- Data resources used for the AI system
- Tooling
By identifying these resources, organizations maintain visibility into dependencies that affect AI risks, keeping compliance processes sustainable and scalable.
A.5 Assessing impacts of AI systems
This control area addresses one of the most common gaps in AI compliance: identifying how AI impacts people, society, and communities.
It requires organizations to assess and document the potential consequences their AI systems may have on individuals and groups within the broader societal context. These evaluations are expected to inform governance decisions around privacy, fairness, safety, and system oversight while developing or managing AI systems.
A.6 AI system life cycle
This domain governs the entire AI lifecycle, requiring teams to implement appropriate safeguards from design and development through validation, deployment, and operation, and ultimately through decommissioning and retirement. Maintaining controls in this area requires ongoing monitoring, maintenance, and support activities.
Organizations must also prepare technical documentation for relevant stakeholders, including users, partners, and supervisory authorities. Together, these controls help with lifetime traceability of AI security and operational rigor.
A.7 Data for AI systems
This control objective requires organizations to define and document how the data used by AI systems is collected, processed, and maintained at every step of the AI cycle.
The goal is to ensure that the data supporting AI systems meets quality expectations while also addressing accuracy, privacy, fairness, and regulatory requirements.
A.8 Information for interested parties of AI systems
Transparency in AI use is crucial to preventing misuse, and this control objective addresses it directly.
Organizations must disclose essential information about AI systems to stakeholders, including users, regulators, customers, and other affected individuals. This may include details about the system’s purpose, usage instructions, limitations, and other information necessary for an appropriate understanding of AIMS mechanisms in practice.
A.9 Use of AI systems
This control category governs the responsible use of AI in production. It guides organizations on how to define appropriate AI usage, set clear boundaries, and implement safeguards to prevent misuse, unintended outcomes, and unauthorized use outside approved use cases.
A.10 Third‑party and customer relationships
The last control area under Annex A focuses on controls for AI supply chain risk management. It instructs organizations on how to ensure that suppliers, partners, and vendors involved in AI systems align with the organization’s responsible AI approach and customer expectations.
This involves setting boundaries around responsible AI use, handling, and compliance obligations, as well as maintaining oversight over any third-party AI contributions.
{{cta_withimage7="/cta-modules"}} | ISO 42001 checklist
Do ISO 42001 controls overlap with ISO 27001 and SOC 2 controls?
While many security frameworks share similar controls, ISO 42001's controls differ fundamentally from those of ISO 27001 and SOC 2. The latter two primarily focus on information security and privacy, whereas ISO 42001 is designed to address the unique risks, impacts, and governance requirements of AI systems.
The most notable difference between the frameworks is their scope:
- ISO 42001 governs the responsible development, deployment, and use of AI systems
- ISO 27001 focuses on managing information security risks
- SOC 2 provides assurance over controls related to data security, availability, confidentiality, and privacy
ISO 42001 adds AI-specific governance, lifecycle, and transparency controls that aren't covered by ISO 27001 or SOC 2.
The frameworks do overlap in some areas, most notably data governance, access control documentation, and risk assessments. Organizations already aligned with SOC 2 or ISO 27001 can reuse these overlapping controls for ISO 42001, which can save time and resources.
Tips and best practices for selecting ISO 42001 controls
Selecting the right set of controls is an essential step in your ISO 42001 compliance journey. Start by conducting a risk assessment to determine which areas need addressing, such as ethical, operational, or regulatory risks, and choose controls that mitigate those risks. Document your choices in a Statement of Applicability (SoA), including why controls were chosen or excluded.
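To make the SoA step concrete, here's a minimal sketch of how an SoA entry could be represented programmatically. The field names (`control_id`, `justification`, and so on) are illustrative assumptions, not a format prescribed by the standard; the point is that every control gets an explicit applicability decision and a recorded reason.

```python
from dataclasses import dataclass

@dataclass
class SoAEntry:
    """One row of a Statement of Applicability (illustrative structure only)."""
    control_id: str             # e.g., "A.2.2" from Annex A
    control_name: str
    applicable: bool
    justification: str          # why the control was included or excluded
    implementation_status: str  # e.g., "implemented", "planned", "n/a"

entries = [
    SoAEntry("A.2.2", "AI policy", True,
             "Required to set management direction for our AI use", "implemented"),
    SoAEntry("A.10.3", "Suppliers", False,
             "No third-party AI components in scope this period", "n/a"),
]

# A summary auditors commonly ask for: how many controls are in scope?
in_scope = [e for e in entries if e.applicable]
print(f"{len(in_scope)} of {len(entries)} controls applicable")
```

Even a lightweight structure like this makes exclusions auditable: every `applicable=False` entry must carry a justification rather than silently disappearing from scope.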
Follow these best practices to improve your control selection process:
- Anchor controls to your AI footprint: Your controls should support strategic organizational goals and address areas where AI is used—think about what AI-derived outcomes your organization is responsible for.
- Consider your pre-deployment approach: For organizations building AI, Clause 8 (Operation) often surfaces issues where AI risk is not embedded into the SDLC, such as missing pre-deployment risk checkpoints or controls that exist only on paper. Ideally, controls for the entire lifecycle should be built upfront and iterated as needed.
- Plan post-deployment controls: Many certification gaps arise from missing controls in the later stages of the AI lifecycle. Design controls that support monitoring for data drift, performance degradation, bias shifts, misuse, security/privacy incidents, and unmanaged or shadow AI.
- Take into account available resources: Evaluate human, technical, and financial capacities before starting control implementations to keep the process sustainable.
- Involve relevant stakeholders in the decision: Include technical, business, and compliance teams to have better coverage and buy-in during control discussions.
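The post-deployment monitoring practice above can be sketched in code. One common way to flag data drift is the Population Stability Index (PSI), which compares the distribution of a feature in production against a baseline sample; the implementation below and the 0.25 alert threshold are a simplified sketch of that widely used rule of thumb, not a requirement of ISO 42001.

```python
import math

def psi(baseline, current, bins=10):
    """Population Stability Index: a common score for detecting drift
    between a baseline (training) sample and recent production data."""
    lo = min(min(baseline), min(current))
    hi = max(max(baseline), max(current))
    width = (hi - lo) / bins or 1.0

    def dist(values):
        counts = [0] * bins
        for v in values:
            idx = min(int((v - lo) / width), bins - 1)
            counts[idx] += 1
        # Smooth empty buckets so the log term below stays defined
        return [max(c / len(values), 1e-6) for c in counts]

    b, c = dist(baseline), dist(current)
    return sum((ci - bi) * math.log(ci / bi) for bi, ci in zip(b, c))

baseline = [0.1 * i for i in range(100)]       # stand-in for training data
shifted = [0.1 * i + 3.0 for i in range(100)]  # production data that has drifted
score = psi(baseline, shifted)

# Common rule of thumb: PSI above ~0.25 suggests drift worth escalating
print("drift alert" if score > 0.25 else "stable")
```

In practice a check like this would run on a schedule against live feature data, with alerts feeding the escalation process your lifecycle controls define; that wiring is organization-specific and omitted here.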
Once controls are selected, effective implementation and maintenance become the larger challenge. You can streamline this process with a compliance automation tool such as Vanta, which can save you time across tasks like scoping, documentation, risk management, and control implementation.
Scope and implement ISO 42001 controls with Vanta
Vanta is a leading agentic trust platform that makes ISO 42001 compliance straightforward, efficient, and scalable. The platform achieves this through prebuilt workflow automation, ready-to-use templates, and operationalized guidance on ISO 42001 and Annex A controls.
Vanta’s ISO 42001 product offers features that support multiple compliance aspects, such as:
- Automated evidence collection through 400+ integrations across your tech stack
- A dedicated auditor portal
- Pre-built risk scenarios
- 1,200+ automated, hourly tests
- Continuous tracking in a unified dashboard
- ISO 42001 documentation support with version control
- Adaptive framework scoping to align with your AI use cases
- Partner network support to find ISO 42001 auditors and consultants
Schedule a tailored demo for your team to understand how Vanta can accelerate your ISO 42001 program.
You can also watch Vanta’s free ISO 42001 webinar to see what automation options make your compliance workflows efficient.
{{cta_simple21="/cta-modules"}} | ISO 42001 product page
Get started with ISO 42001:
Start your ISO 42001 journey with these related resources.

4 lessons learned during our ISO 42001 audit
Key takeaways from our ISO 42001 audit—and tips to help other companies navigate the process with ease.

The ISO 42001 Compliance Checklist
The ISO 42001 compliance checklist helps lay the foundation for what your organization should expect when working toward certification.

The EU AI Act Checklist
Get our free checklist to understand what’s required under the EU’s AI Act, how ISO 42001 fits in, and how compliance builds trust—and a competitive advantage.
