Most companies deploying AI agents lack an AI policy | Vanta
May 4, 2026

8 in 10 companies are betting on AI agents—but fewer than half have a policy to govern them

Written by
Vanta


AI agents are already integral to how many modern companies operate. They write content, qualify leads, review resumes, and analyze financial data. Teams adopt them because they save time and unlock scale.

However, security governance often struggles to keep pace with adoption. Vanta’s latest research shows a clear disconnect:

  • 8 in 10 deployed or plan to deploy AI agents
  • 65% say adoption is outpacing their understanding
  • Only 44% have a formal AI policy

That mismatch creates real risk. This piece breaks down what’s driving that gap, where it shows up in practice, and what strong AI governance looks like when it actually works.

AI adoption is outpacing security

AI tools are spreading across organizations faster than security teams can keep up. Employees are under pressure to use AI to move faster—and they’re acting on it. In fact, 70% of companies already have unmanaged AI tools in their environment, often introduced without security review.

Adoption starts at the team level: a group finds a tool, tests it, and expands usage once it proves valuable. Leadership encourages momentum because of immediate results. But governance lags behind. AI tools become embedded in day-to-day workflows before organizations have the systems to evaluate, approve, and monitor them—creating risk that’s hard to see and even harder to control.

The governance gap is wider than it looks

Vanta data shows that fewer than half of organizations operate under a formal AI policy—leaving gaps in ownership, usage, and accountability. When no one owns a tool or defines how it should be used, accountability breaks down quickly.

In practice, this leads to shadow AI: tools adopted without security review. A sales team might rely on one tool for outreach while marketing uses another for content, each moving fast, but independently. Security teams are left without visibility into what’s in use, what data these tools access, or how outputs are reviewed.

Regulatory pressure is about to surge

The EU AI Act introduces new requirements for high-risk AI systems starting in August 2026, with clear expectations for transparency, risk management, and oversight. For companies that don’t comply, penalties can claim a significant share of global annual revenue, putting enforcement on par with the GDPR.

This shift goes beyond a single regulation. AI governance is gaining traction globally, with frameworks like the NIST AI Risk Management Framework in the U.S. and evolving guidance under the UK’s AI regulatory approach reinforcing similar expectations around accountability and control.

For companies, this entails greater scrutiny of AI-driven decisions, stronger requirements for documentation and controls, and more operational accountability. Organizations that delay building these capabilities now will have less time to catch up as enforcement approaches.

The business risks of operating without AI policies

Without clear AI policies, risk builds quickly across teams, tools, and workflows. What starts as fast experimentation can escalate into real business impact, both now and as regulations catch up.

  • Security incidents: Employees may input sensitive data into unapproved tools, increasing the risk of data exposure.
  • Loss of customer trust: Unvalidated AI outputs can introduce errors into customer communications and decisions.
  • Compliance gaps: AI usage can drift from regulatory requirements, making audits harder and increasing exposure.
  • Shadow AI and IT sprawl: Teams adopt tools without approval, limiting visibility and control for security teams.
  • Lack of accountability: Without clear ownership, issues are harder to trace, escalate, and resolve.


These risks are already surfacing, with more companies disclosing AI-related risks in public filings.

What effective AI governance actually looks like

Strong AI governance creates structure without slowing teams down, so organizations can scale AI with confidence. At its core, effective governance includes:

  • Formal AI policy framework: Clear, documented rules that define how AI can be used across the organization
  • Defined ownership and accountability: Specific teams are responsible for each AI system and its outcomes
  • Risk assessment and classification: A process to evaluate AI systems based on their impact, sensitivity, and potential risk
  • Continuous monitoring and controls: Ongoing visibility into how systems behave, with mechanisms to catch and address issues early


The most effective organizations integrate AI governance into their broader GRC systems. This allows them to apply consistent controls, track risk in one place, and scale governance as adoption grows.

From policy to practice: operationalizing AI governance

Creating a policy is only the first step. The real work starts when governance becomes part of daily operations.

Begin with visibility. Organizations need a centralized inventory of AI systems and agents so they can see what exists across teams, including where each system is deployed, what it connects to, and what it can do. This is also how shadow AI gets addressed: teams can see not just approved tools, but everything already in use across the organization. Without that inventory, risk is hard to manage.

From there, policies need to map to real requirements. Strong programs align AI usage with frameworks like SOC 2, ISO 27001, and emerging standards like ISO 42001, so compliance is built in from the start.
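
The inventory step can be sketched as a simple registry. The tool names and fields below are hypothetical, and a real inventory would live in a governance platform rather than a script, but the shape of the data is the same: every AI tool, its owner, its approval status, and what it can access.

```python
# A minimal sketch of a centralized AI inventory (names hypothetical).
inventory = [
    {"tool": "outreach-agent", "owner": "sales", "approved": True,
     "data_access": ["crm"]},
    {"tool": "content-bot", "owner": None, "approved": False,
     "data_access": ["cms", "analytics"]},
]

def shadow_ai(inventory):
    """Tools with no owner or no security approval count as shadow AI."""
    return [t["tool"] for t in inventory
            if t["owner"] is None or not t["approved"]]

print(shadow_ai(inventory))  # ['content-bot']
```

Once every tool is a record like this, ownership gaps and unapproved usage become queries instead of surprises.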


Ongoing monitoring is just as critical. Teams need continuous visibility into system behavior, along with the ability to produce evidence when needed. Automation helps here: it lets teams track usage, apply consistent controls, and collect evidence without the scramble. This is where governance becomes operational.

Stay in control as AI scales

Vanta helps teams bring structure to AI governance with a centralized system to inventory AI tools, define ownership, enforce controls, and continuously monitor how systems behave. From mapping AI usage to frameworks like SOC 2, ISO 27001, and ISO 42001 to automatically collecting the evidence auditors expect, Vanta makes it easier to move from policy to practice.

As AI becomes part of your core infrastructure, the ability to prove control matters just as much as the control itself. Vanta collects audit-ready evidence automatically and gives teams real-time visibility into how systems behave so you stay ready.

AI adoption is moving fast. Let Vanta help: Request a demo.
