Blog | Compliance
March 28, 2025

The EU AI Act: Key deadlines, risk levels, and steps to prepare

Written by
Vanta


The EU AI Act is one of the world’s first comprehensive regulations aimed at AI-based systems. While voluntary standards like ISO 42001 already existed, the Act introduces mandatory requirements that in-scope organizations must meet to avoid considerable fines and operational disruptions.

If you develop, use, or distribute AI systems, you may have to meet the obligations prescribed by this regulation. Our EU AI Act summary will help you do so by covering:

  • The meaning of the EU AI Act
  • Its scope and applicability
  • Compliance deadlines and non-compliance penalties
  • High-level compliance process

What is the EU AI Act?

The EU AI Act is a regulation developed by the European Commission that governs the development and use of AI systems based on their risk levels. It provides a harmonized framework for mitigating the risks associated with AI, an emerging technology that has remained largely unregulated until now.

The EU AI Act is the first regulation that prohibits specific AI use cases. It also enhances the safety and transparency of the allowed systems to minimize ethical and societal concerns.

The Act is mandatory for all organizations within the EU, as well as organizations outside the EU that place AI systems on the EU market or provide services to organizations or individuals within EU Member States.


Who needs to comply with the EU AI Act?

The EU AI Act differentiates between six types of AI operators, each with distinct regulatory obligations:

  • Provider: An individual or entity that develops AI systems or general-purpose AI models and places them on the market or puts them into service, whether free of charge or for payment
  • Deployer: An individual or entity using an AI system for any purpose other than personal, non-professional use
  • Distributor: An individual or entity anywhere in the supply chain (except for providers and importers) that makes AI systems available to the EU market
  • Importer: An EU-based individual or entity that makes available an AI system bearing the name or trademark of an individual or entity established outside the EU
  • Authorized representative: An EU-based individual or entity with a written mandate to carry out a provider’s obligations established in the Act
  • Product manufacturer: An individual or entity that makes an AI system available alongside their product under their own name or trademark

Under the Act, different operators have unique regulatory obligations, with providers being the most heavily regulated. You can use the EU’s official AI Act compliance checker tool to determine which category you fall under.

Besides the operator type, organizations might have to meet different obligations depending on the risk level of their AI systems. 

{{cta_withimage37="/cta-blocks"}} | EU AI Act checklist

Understanding the 4 risk levels within the EU AI Act

The EU AI Act classifies AI systems into four risk levels based on their potential negative impact:

  1. Unacceptable
  2. High
  3. Limited
  4. Minimal

AI systems with unacceptable risks are banned altogether. Examples include:

  • Systems that deploy manipulative, deceptive, or subliminal techniques to influence user behavior and decision-making
  • Social scoring systems that evaluate or classify individuals based on personal traits or social behavior
  • Systems that exploit vulnerabilities related to disability, age, or socioeconomic status of users to impact a user’s behavior

While high-risk systems can still be employed, they’re heavily regulated and must meet various stringent requirements, such as:

  • Establishing a comprehensive AI risk management program that follows a system’s entire lifecycle
  • Enabling automated record-keeping to log events that help identify situations where the system may present a risk and to monitor substantial modifications
  • Gathering comprehensive documentation to demonstrate compliance and enable effective external audits

Examples of high-risk systems include those related to critical infrastructure (e.g., transport, utilities), employment and worker management, law enforcement, credit scoring, and more.

Limited-risk systems aren’t subject to extensive regulation, so compliance mostly focuses on increasing transparency. Operators must ensure users understand when they are interacting with AI rather than a human. Examples of such systems include popular chatbots, which aren’t inherently dangerous but might be misused or expose users to misinformation.

Transparency in this context also means informing users about all relevant aspects of their interactions with the system. For example, AI-generated images may need to be labeled as such to clarify how they were created.

Finally, AI systems with minimal risks aren’t subject to regulation at all—operators must only prove that their system falls under this category. Still, it might be wise for operators of such systems to follow at least some of the Act’s requirements, particularly those related to fairness and human oversight. 
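The four-tier classification above can be summarized as a simple lookup. The sketch below is illustrative only; the tier names follow the Act, but the one-line summaries are our paraphrases, not legal definitions.

```python
# Illustrative mapping of the EU AI Act's four risk tiers to their
# high-level regulatory treatment, as paraphrased in this article.
RISK_TIERS = {
    "unacceptable": "Banned outright (e.g., social scoring, manipulative systems)",
    "high": "Permitted, but requires risk management, record-keeping, and documentation",
    "limited": "Permitted, subject to transparency obligations (e.g., disclosing AI to users)",
    "minimal": "Largely unregulated; operators only need to show the system belongs here",
}

def treatment_for(tier: str) -> str:
    """Return the high-level regulatory treatment for a given risk tier."""
    if tier not in RISK_TIERS:
        raise ValueError(f"Unknown risk tier: {tier!r}")
    return RISK_TIERS[tier]

print(treatment_for("limited"))
```

Classifying a real system into one of these tiers is a legal determination, not a lookup; the table only captures what each tier implies once that determination is made.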

EU AI Act implementation timeline and compliance deadlines

The EU AI Act was enacted on August 1, 2024, and the European Commission outlined a clear implementation schedule. The key dates include:

  • Feb 2, 2025: Prohibitions on AI systems carrying unacceptable risks take effect
  • Aug 2, 2025: Obligations for general-purpose AI models and governance rules begin to apply
  • Aug 2, 2026: Most remaining provisions, including requirements for high-risk systems, become applicable

In addition to these deadlines, there are several other dates to keep in mind. For a full view, visit the official EU AI Act implementation timeline. Pay special attention to the items highlighted in blue, as they relate to the Act’s application; the rest mainly concern the European Commission or Member States.

Most organizations should have ample time to meet the Act’s requirements because most AI systems don’t fall under the prohibited category. Still, the Act’s comprehensive nature calls for an efficient compliance effort that helps your organization avoid potentially steep penalties. Delaying compliance can lead to rushed implementation, increasing the risk of errors or operational disruptions.

{{cta_webinar7="/cta-blocks"}}  

EU AI Act non-compliance penalties

The EU AI Act imposes various penalties to ensure effective enforcement of all the necessary rules. Non-compliant organizations might face considerable fines, depending on the specific violation:

  • Non-compliance with the prohibition of AI activities: Up to EUR 35,000,000 or 7% of worldwide annual turnover, whichever is higher
  • Non-compliance with most other requirements (particularly those related to high-risk AI): Up to EUR 15,000,000 or 3% of worldwide annual turnover, whichever is higher
  • Submission of incomplete, misleading, or incorrect information to authorities: Up to EUR 7,500,000 or 1% of worldwide annual turnover, whichever is higher

These penalties apply to organizations of all sizes. The only exception is startups and SMEs, for which the fine is capped at the lower of the two amounts for the relevant violation.

When determining the exact fine, national competent authorities will factor in several circumstances, including:

  • The nature, duration, gravity, and consequences of the violation
  • Whether the same operator has already been fined by other authorities for similar violations
  • The level of cooperation with authorities in addressing and remediating the violation
  • Whether the violation was intentional or a result of negligence

While various extenuating circumstances can lower the exact amount, fines are still significant enough to warrant a careful approach to EU AI Act compliance. Beyond financial penalties, non-compliance can lead to product bans, legal disputes, or loss of customer trust.

How to comply with the EU AI Act: 5 steps to follow

To achieve EU AI Act compliance, follow these steps:

  1. Determine the Act’s applicability: Using the official compliance checker, complete the questionnaire to determine your organization’s category and understand the applicable regulatory obligations.
  2. Conduct an AI review and prepare documentation: Review your AI system thoroughly, including risk management, data governance, oversight, and other relevant aspects. Document your findings to simplify the gap analysis and collect evidence of sufficient controls.
  3. Perform a conformity assessment (for high-risk systems): Assess your system against the Act’s requirements for high-risk AI, then bridge any identified gaps by introducing the necessary technical, administrative, and procedural controls.
  4. Submit your EU Declaration of Conformity: The European Commission requires organizations to submit an official EU Declaration of Conformity (DoC) to prove compliance with the EU AI Act. You must keep the DoC on file for 10 years and make it readily available to the relevant authorities.
  5. Conduct post-market monitoring and reassessment: Due to the volatility of AI systems, the EU AI Act requires organizations to develop a system for ongoing monitoring of their solutions and adherence to the applicable regulations.

As a newly introduced regulation, the EU AI Act presents a learning curve for many organizations. This can make implementing the requirements challenging, especially if you rely on disparate documentation systems and manual compliance workflows.

To avoid these issues, consider adopting a dedicated compliance automation and management platform. The right solution should provide the guidance and tools necessary to ensure streamlined and timely compliance.

Achieve EU AI Act compliance with Vanta

Vanta is a robust compliance and trust management solution offering the fastest and most efficient way to comply with the EU AI Act. It automates complex compliance workflows, letting you avoid laborious work and focus on high-impact tasks.

Automation features and helpful resources are included in Vanta’s dedicated EU AI Act product. Some of the platform’s key features include:

  • 150+ pre-built controls alongside custom ones
  • Ready-to-use document templates
  • In-app policy editor
  • Risk management features

If you’ve already implemented controls to manage your AI systems effectively, for example as part of ISO 42001 compliance, Vanta automatically cross-references them with the Act’s requirements to help you avoid duplicate work. The same applies to controls from other regulations and standards, so you can manage multiple frameworks from a single, unified hub.

Schedule a custom demo of Vanta’s EU AI Act product to learn more about its functionalities and get a hands-on overview.

{{cta_simple31="/cta-blocks"}} 

A note from Vanta: Vanta is not a law firm, and this article does not constitute or contain legal advice or create an attorney-client relationship. When determining your obligations and compliance with respect to relevant laws and regulations, you should consult a licensed attorney.
