EU AI Act Checklist
The security landscape is challenging right now: according to Vanta’s State of Trust Report, 55% of organisations say that security risks have never been higher.
The rapid adoption of AI technologies in the workplace has only added to this challenge, requiring organisations to deliver more oversight and governance. Yet as AI adoption accelerates, governance and risk management typically stall: fewer than 2 in 5 organisations (37%) are currently conducting, or in the process of conducting, regular AI risk assessments.
As a result, organisations face growing pressure from new AI regulations, chief among them the European Union's Artificial Intelligence Act (AI Act).
What is the EU AI Act?
The AI Act is a first-of-its-kind legal framework intended to regulate the development and use of AI across Europe. While it came into force in August 2024, it has a phased implementation timeline running through to August 2027.
The AI Act is designed to create a uniform legal framework for the development, marketing, and use of AI systems across the European Union. It seeks to ensure that AI technologies are developed and utilised in a manner consistent with the EU’s values, including respect for fundamental rights, democracy, the rule of law and environmental protection.
But the question for businesses is: does it impact them?
Even if your company isn’t based in Europe, the short answer is probably yes. The AI Act applies to all companies that develop, deploy, or sell AI solutions in the EU, as well as to companies outside the EU that work with them. That has significant implications for businesses that may not manage their own AI but have a relationship with those that do.
The strictest requirements apply to organisations with high-risk use cases that have the potential to impact people’s health, safety and rights. According to the AI Act, these use cases can be found in systems ranging from traffic control to migration, biometric identification to education, and elections to recruitment, among others.
Crucially, even if your company does not consider its AI systems to be high-risk, there are obligations at other risk levels. Companies should therefore continuously monitor the AI used across their business to prevent minimal-risk systems from becoming high-risk.
The AI Act contains no exemption based on company size, so even small companies need to comply if they develop, deploy, or sell AI solutions. What’s more, this isn’t a nice-to-have: companies are obliged to become AI Act compliant, which means establishing policies, risk management capabilities, and audit management features. Those that don’t could face steep fines of up to €35,000,000 or 7% of their total worldwide annual turnover for the preceding financial year, whichever is higher.
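To make that penalty ceiling concrete, here is a minimal sketch in Python of how the maximum possible fine scales with turnover, using only the two figures quoted above; the turnover value in the example is purely illustrative.

```python
# Maximum fine ceiling under the AI Act's headline penalty tier:
# the greater of €35,000,000 or 7% of total worldwide annual turnover
# for the preceding financial year.

FIXED_CAP_EUR = 35_000_000
TURNOVER_SHARE = 0.07


def max_fine_eur(annual_turnover_eur: float) -> float:
    """Return the upper bound of the fine for a given worldwide annual turnover."""
    return max(FIXED_CAP_EUR, TURNOVER_SHARE * annual_turnover_eur)


# Illustrative only: a company with €600m turnover faces a ceiling of €42m,
# because 7% of turnover exceeds the €35m fixed cap.
print(f"{max_fine_eur(600_000_000):,.0f}")  # 42,000,000
```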
While AI Act compliance is a necessity for most businesses, it also creates opportunities and can be a growth accelerator, empowering them to go beyond the standard and demonstrate trust to customers, partners and stakeholders alike.
How is the EU AI Act different from ISO 42001 certification?
While the AI Act is a legal requirement for businesses operating in Europe, ISO 42001 is an international standard for Artificial Intelligence Management Systems (AIMS).
Despite being voluntary, earning ISO 42001 certification is instrumental in showing customers that you are taking the necessary steps to ensure responsible usage and development of AI. It is a solid foundation from which to build your AI Act compliance.
Like the AI Act, ISO 42001 certification involves competence and awareness, frameworks, policies and management. But, crucially, it is centred on establishing and maintaining an AIMS for AI development and operations.
How to use this checklist
This easy-to-reference checklist walks you through the steps you need to take to become EU AI Act compliant.
Understand what AI Act compliance involves, and the benefits of investing in a trust management platform like Vanta to automate key compliance activities, so that your security teams stay focussed on mission-critical, strategic work instead of manual tasks.
Let’s dive in.
{{eu-ai-act="/checklists"}}