
The EU AI Act: Key deadlines, risk levels, and steps to prepare
The EU AI Act is one of the world’s first comprehensive regulations aimed at AI-based systems. While voluntary standards like ISO 42001 already exist, the Act introduces mandatory requirements that in-scope organizations must meet to avoid considerable fines and operational disruptions.
If you develop, use, or distribute AI systems, you may have to meet the obligations prescribed by this regulation. Our EU AI Act summary will help you do so by covering:
- The meaning of the EU AI Act
- Its scope and applicability
- Compliance deadlines and non-compliance penalties
- High-level compliance process
What is the EU AI Act?
The EU AI Act is a regulation proposed by the European Commission that governs the development and use of AI systems based on their risk levels. It provides a harmonized framework for mitigating the risks associated with AI, an emerging technology that has so far been largely unregulated.
The EU AI Act is the first regulation that prohibits specific AI use cases. It also enhances the safety and transparency of the allowed systems to minimize ethical and societal concerns.
The Act is mandatory for all organizations within the EU and those outside the EU that provide services to organizations or individuals within EU Member States.
Bonus reads: Check out these guides to learn more about other AI-related frameworks:
- What is ISO 42001 certification?
- Everything you need to know about NIST AI RMF
- NIST AI RMF and ISO 42001 differences explained
Who needs to comply with the EU AI Act?
The EU AI Act differentiates between six types of AI operators: providers, deployers, importers, distributors, authorized representatives, and product manufacturers. Each has distinct regulatory obligations, with providers being the most heavily regulated. You can use the EU’s official AI Act compliance checker tool to determine which category you fall under.
Besides the operator type, organizations might have to meet different obligations depending on the risk level of their AI systems.
{{cta_withimage37="/cta-blocks"}}
Understanding the 4 risk levels within the EU AI Act
The EU AI Act classifies AI systems into four risk levels based on their potential negative impact:
- Unacceptable
- High
- Limited
- Minimal
AI systems with unacceptable risks are banned altogether. Examples include:
- Systems that deploy manipulative, deceptive, or subliminal techniques to influence user behavior and decision-making
- Social scoring systems that evaluate or classify individuals based on personal traits or social behavior
- Systems that exploit vulnerabilities related to a user’s age, disability, or socioeconomic status to influence their behavior
While high-risk systems can still be employed, they’re heavily regulated and must meet various stringent requirements, such as:
- Establishing a comprehensive AI risk management program that follows a system’s entire lifecycle
- Enabling automated record-keeping to log events that help identify risks at the national level and track substantial modifications (see the sketch below)
- Gathering comprehensive documentation to demonstrate compliance and enable effective external audits
Examples of high-risk systems include those related to critical infrastructure (e.g., transport, utilities), employment and worker management, law enforcement, credit scoring, and more.
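To make the record-keeping requirement more concrete, here is a minimal, hypothetical sketch of automated event logging for an AI system. The event names, fields, and file path are illustrative assumptions, not a schema prescribed by the Act.

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical structured logger for an AI system's lifecycle events.
# The Act requires automated record-keeping for high-risk systems; the
# exact record format below is an illustrative assumption only.
logger = logging.getLogger("ai_system_audit")
logging.basicConfig(filename="ai_audit.log", level=logging.INFO)

def log_event(event_type: str, details: dict) -> None:
    """Append a timestamped, structured record of a notable system event."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,  # e.g. "inference", "model_update"
        "details": details,
    }
    logger.info(json.dumps(record))

# Example: record a model update that could count as a substantial
# modification requiring reassessment.
log_event("model_update", {
    "previous_version": "1.3.0",
    "new_version": "2.0.0",
    "retrained_on_new_data": True,
})
```

In practice, records like these would also feed the technical documentation and post-market monitoring discussed later in this article.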
Limited-risk systems aren’t subject to extensive regulation, so compliance mostly focuses on increasing transparency. Operators must ensure users understand when they are interacting with AI rather than a human. Examples of such systems include popular chatbots, which aren’t inherently dangerous but might be misused or expose users to misinformation.
Transparency in this context also means informing the user about all the relevant aspects of their interactions with the system. For example, AI-generated images might need to be annotated as such to clarify the way they were created.
Finally, AI systems with minimal risks aren’t subject to regulation at all—operators must only prove that their system falls under this category. Still, it might be wise for operators of such systems to follow at least some of the Act’s requirements, particularly those related to fairness and human oversight.
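As a rough illustration of how these tiers translate into duties, the sketch below maps each risk level to a heavily simplified set of obligations. The four categories follow the Act, but the one-line summaries are condensed assumptions for illustration, not legal text.

```python
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Heavily simplified obligation summaries per tier; illustrative only.
OBLIGATIONS = {
    RiskLevel.UNACCEPTABLE: ["prohibited - must not be placed on the EU market"],
    RiskLevel.HIGH: [
        "risk management across the full lifecycle",
        "automated record-keeping / logging",
        "technical documentation and conformity assessment",
    ],
    RiskLevel.LIMITED: ["transparency - disclose AI interaction and AI-generated content"],
    RiskLevel.MINIMAL: ["no mandatory obligations (voluntary best practices encouraged)"],
}

def obligations_for(level: RiskLevel) -> list[str]:
    return OBLIGATIONS[level]

print(obligations_for(RiskLevel.HIGH))
```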
EU AI Act implementation timeline and compliance deadlines
The EU AI Act entered into force on August 1, 2024, and the European Commission outlined a clear implementation schedule. The key dates include:
- Feb 2, 2025: Prohibitions on AI systems posing unacceptable risks (and AI literacy obligations) take effect
- Aug 2, 2025: Governance rules and obligations for general-purpose AI models become applicable
- Aug 2, 2026: Most remaining provisions, including the requirements for high-risk systems, become applicable
In addition to these deadlines, there are several other dates to keep in mind. For a full view, visit the official EU AI Act implementation timeline. Pay special attention to the items highlighted in blue because they’re related to the Act’s application, while the rest are mainly aimed at the European Commission or Member States.
Most organizations should have ample time to meet the Act’s requirements because most AI systems shouldn’t fall under the prohibited category. Still, the Act’s comprehensive nature calls for efficient compliance that helps your organization avoid potentially high penalties. Delaying compliance efforts can lead to rushed implementation, increasing the risk of errors or operational disruptions.
{{cta_webinar7="/cta-blocks"}}
EU AI Act non-compliance penalties
The EU AI Act imposes various penalties to ensure effective enforcement of all the necessary rules. Non-compliant organizations might face considerable fines, depending on the specific violation:
- Non-compliance with the prohibition of AI activities: Up to EUR 35,000,000 or 7% of worldwide annual turnover, whichever is higher
- Non-compliance with most other requirements (particularly those related to high-risk AI): Up to EUR 15,000,000 or 3% of worldwide annual turnover, whichever is higher
- Submission of incomplete, misleading, or incorrect information to authorities: Up to EUR 7,500,000 or 1% of worldwide annual turnover, whichever is higher
These penalties apply to organizations of all sizes. The only exceptions are startups and SMEs, which pay the lower of the two amounts corresponding to the violation. The sketch below shows how these caps work in practice.
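Here is a small worked example of the dual-cap structure. The fixed amounts and percentages come from the Act; the turnover figures are made-up inputs for illustration.

```python
def max_fine(fixed_cap_eur: float, turnover_pct: float,
             worldwide_turnover_eur: float, is_sme: bool = False) -> float:
    """Upper bound of a fine under the EU AI Act's dual-cap structure."""
    pct_cap = turnover_pct * worldwide_turnover_eur
    # Most organizations: whichever amount is higher applies.
    # Startups and SMEs: whichever amount is lower applies.
    return min(fixed_cap_eur, pct_cap) if is_sme else max(fixed_cap_eur, pct_cap)

# Example: a prohibited-practice violation (EUR 35M or 7% of turnover)
# for a company with EUR 1 billion in worldwide annual turnover.
print(max_fine(35_000_000, 0.07, 1_000_000_000))            # 70,000,000.0
print(max_fine(35_000_000, 0.07, 50_000_000, is_sme=True))  # 3,500,000.0 (SME)
```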
When determining the exact fine, national competent authorities will factor in several circumstances, including:
- The nature, duration, gravity, and consequences of the violation
- Whether the same operator has already been fined by other authorities for similar violations
- The level of cooperation with authorities in addressing and remediating the violation
- Whether the violation was intentional or a result of negligence
While various extenuating circumstances can lower the exact amount, fines are still significant enough to warrant a careful approach to EU AI Act compliance. Beyond financial penalties, non-compliance can lead to product bans, legal disputes, or loss of customer trust.
How to comply with the EU AI Act: 5 steps to follow
To achieve EU AI Act compliance, follow these steps:
- Determine the Act’s applicability: Using the official compliance checker, complete the questionnaire to determine your organization’s category and understand the applicable regulatory obligations.
- Conduct an AI review and prepare documentation: Review your AI system thoroughly, including risk management, data governance, oversight, and other relevant aspects. Document your findings to simplify the gap analysis (a simple gap check is sketched after this list) and collect evidence of sufficient controls.
- Perform a conformity assessment (for high-risk systems): After identifying compliance gaps, bridge them with the necessary technical, administrative, and procedural controls, then verify through the applicable conformity assessment procedure that your system meets the EU AI Act requirements.
- Draw up your EU Declaration of Conformity: For high-risk systems, providers must draw up an official EU Declaration of Conformity (DoC) attesting compliance with the EU AI Act. You must keep the DoC on file for 10 years and make it readily available to national competent authorities on request.
- Conduct post-market monitoring and reassessment: Because AI systems change over time, the EU AI Act requires organizations to establish a system for ongoing monitoring of their solutions and their continued adherence to the applicable regulations.
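As a rough sketch of the gap analysis mentioned in step 2, the example below compares the controls you already have in place against a required set and reports what is missing. The control names are hypothetical placeholders; the real required set depends on your risk level and operator category.

```python
# Hypothetical control identifiers; not an official control list.
required_controls = {
    "ai-risk-management-program",
    "automated-event-logging",
    "technical-documentation",
    "human-oversight-procedure",
}

implemented_controls = {
    "ai-risk-management-program",
    "technical-documentation",
}

# Set difference yields the compliance gaps still to be bridged.
gaps = required_controls - implemented_controls
for control in sorted(gaps):
    print(f"Missing control: {control}")
```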
As a newly introduced regulation, the EU AI Act presents a learning curve for many organizations. This can make implementing the requirements challenging, especially if you rely on disparate documentation systems and manual compliance workflows.
To avoid these issues, consider adopting a dedicated compliance automation and management platform. The right solution should provide the guidance and tools necessary to ensure streamlined and timely compliance.
Achieve EU AI Act compliance with Vanta
Vanta is a robust compliance and trust management solution offering the fastest and most efficient way to comply with the EU AI Act. It automates complex compliance workflows, letting you avoid laborious work and focus on high-impact tasks.
Vanta’s dedicated EU AI Act product bundles automation features and helpful resources, including:
- 150+ pre-built controls alongside custom ones
- Ready-to-use document templates
- In-app policy editor
- Risk management features
If you’ve already implemented controls to manage your AI system effectively, such as those required by ISO 42001, Vanta automatically cross-references them with the Act’s requirements to help you avoid duplicate work. The same goes for controls tied to other regulations and standards, so you can manage compliance with multiple frameworks from a single unified hub.
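As a conceptual illustration of cross-framework mapping (not Vanta’s actual implementation), the sketch below shows how controls already satisfied for one framework can be reused for another, so overlapping requirements aren’t implemented twice. The framework-to-control mappings are hypothetical.

```python
# Hypothetical mapping of frameworks to the controls they require.
framework_controls = {
    "ISO 42001": {"ai-risk-assessment", "ai-policy", "data-governance"},
    "EU AI Act": {"ai-risk-assessment", "data-governance", "record-keeping"},
}

already_satisfied = framework_controls["ISO 42001"]
eu_ai_act_required = framework_controls["EU AI Act"]

reused = eu_ai_act_required & already_satisfied    # overlap: no duplicate work
remaining = eu_ai_act_required - already_satisfied # controls still to implement

print("Reused from ISO 42001:", sorted(reused))
print("Still to implement:", sorted(remaining))
```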
Schedule a custom demo of Vanta’s EU AI Act product to learn more about its functionalities and get a hands-on overview.
{{cta_simple31="/cta-blocks"}}
A note from Vanta: Vanta is not a law firm, and this article does not constitute or contain legal advice or create an attorney-client relationship. When determining your obligations and compliance with respect to relevant laws and regulations, you should consult a licensed attorney.




