
8 in 10 companies are betting on AI agents—but fewer than half have a policy to govern them
AI agents are already integral to how many modern companies operate. They write content, qualify leads, review resumes, and analyze financial data. Teams adopt them because they save time and unlock scale.
However, security often struggles to keep up with that enthusiasm. Vanta’s latest research shows the scale of the disconnect:
- 8 in 10 organizations have deployed or plan to deploy AI agents
- 65% say adoption is outpacing their understanding
- Only 44% have a formal AI policy
That mismatch creates real risk. This piece breaks down what’s driving that gap, where it shows up in practice, and what strong AI governance looks like when it actually works.
AI adoption is outpacing security
AI tools are spreading across organizations faster than security teams can keep up. Employees are under pressure to use AI to move faster—and they’re acting on it. In fact, 70% of companies already have unmanaged AI tools in their environment, often introduced without security review.
Adoption starts at the team level: a group finds a tool, tests it, and expands usage once it proves valuable. Leadership encourages momentum because of immediate results. But governance lags behind. AI tools become embedded in day-to-day workflows before organizations have the systems to evaluate, approve, and monitor them—creating risk that’s hard to see and even harder to control.
The governance gap is wider than it looks
Vanta data shows that fewer than half of organizations operate under a formal AI policy—leaving gaps in ownership, usage, and accountability. When no one owns a tool or defines how it should be used, accountability breaks down quickly.
In practice, this gap produces shadow AI: tools adopted team by team without security review. A sales team might rely on one tool for outreach while marketing uses another for content, each moving fast but independently. Security teams are left without visibility into what’s in use, what data these tools access, or how outputs are reviewed.
Regulatory pressure is about to surge
The EU AI Act introduces new requirements for high-risk AI systems starting in August 2026, with clear expectations for transparency, risk management, and oversight. Penalties for non-compliance can reach up to 7% of global annual turnover, putting enforcement on par with the GDPR.
This shift goes beyond a single regulation. AI governance is gaining traction globally, with frameworks like the NIST AI Risk Management Framework in the U.S. and evolving guidance under the UK’s AI regulatory approach reinforcing similar expectations around accountability and control.
For companies, this entails greater scrutiny of AI-driven decisions, stronger requirements for documentation and controls, and more operational accountability. Organizations that delay building these capabilities now will have less time to catch up as enforcement approaches.
The business risks of operating without AI policies
Risk builds quickly across teams, tools, and workflows without clear AI policies. What starts as fast experimentation can escalate into real business impact, both now and as regulations catch up.
- Security incidents: Employees may input sensitive data into unapproved tools, increasing the risk of data exposure.
- Loss of customer trust: Unvalidated AI outputs can introduce errors into customer communications and decisions.
- Compliance gaps: AI usage can drift from regulatory requirements, making audits harder and increasing exposure.
- Shadow AI and shadow IT growth: Teams adopt tools without approval, limiting visibility and control for security teams.
- Lack of accountability: Without clear ownership, issues are harder to trace, escalate, and resolve.
These risks are already surfacing, with more companies disclosing AI-related risks in public filings.
What effective AI governance actually looks like
Strong AI governance creates structure without slowing teams down, so they can scale AI with confidence. At its core, effective governance includes:
- Formal AI policy framework: Clear, documented rules that define how AI can be used across the organization
- Defined ownership and accountability: Specific teams are responsible for each AI system and its outcomes
- Risk assessment and classification: A process to evaluate AI systems based on their impact, sensitivity, and potential risk
- Continuous monitoring and controls: Ongoing visibility into how systems behave, with mechanisms to catch and address issues early
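The risk assessment and classification component above can be sketched as a simple scoring step. This is a minimal, hypothetical illustration: the factors, tiers, and thresholds here are assumptions for the example, not Vanta’s methodology or any regulator’s scheme.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    owner: str                     # defined ownership: every system has one
    handles_personal_data: bool
    makes_automated_decisions: bool
    customer_facing: bool

def classify_risk(system: AISystem) -> str:
    """Assign a coarse risk tier from a few impact and sensitivity factors.
    Hypothetical tiers; a real program would map these to frameworks like
    the EU AI Act's high-risk categories."""
    score = sum([
        system.handles_personal_data,
        system.makes_automated_decisions,
        system.customer_facing,
    ])
    if score >= 2:
        return "high"
    if score == 1:
        return "medium"
    return "low"

# Example: a resume-screening agent touches personal data and makes
# automated decisions, so it lands in the highest tier.
resume_screener = AISystem(
    name="resume-screener",
    owner="talent-ops",
    handles_personal_data=True,
    makes_automated_decisions=True,
    customer_facing=False,
)
print(classify_risk(resume_screener))  # -> high
```

The point of even a crude tiering like this is that it forces every system through the same questions before it reaches production, which is what separates a documented program from ad hoc adoption.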
The most effective organizations integrate AI governance into their broader GRC systems. This allows them to apply consistent controls, track risk in one place, and scale governance as adoption grows.
From policy to practice: operationalizing AI governance
Creating a policy is only the first step. The real work starts when governance becomes part of daily operations.
Begin with visibility. Organizations need a centralized inventory of AI systems and agents so they can see what exists across teams—including where each system is deployed, what it connects to, and what it can do. This is also the first step in addressing shadow AI—making sure teams can see not just approved tools, but everything already in use across the organization. Without that, risk is hard to manage.
From there, policies need to map to real requirements. Strong programs align AI usage with frameworks like SOC 2, ISO 27001, and emerging standards like ISO 42001, so compliance is built in from the jump.
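A centralized inventory like the one described can start as little more than a structured record per system, plus a check that flags anything lacking an owner or an approved review status. A minimal sketch, where the field names, statuses, and example systems are all illustrative assumptions:

```python
# Minimal AI inventory: one record per system, with ownership, review
# status, and the compliance frameworks the system is mapped to.
inventory = [
    {"name": "outreach-writer", "team": "sales", "owner": "jdoe",
     "status": "approved", "frameworks": ["SOC 2", "ISO 42001"]},
    {"name": "content-drafter", "team": "marketing", "owner": None,
     "status": "unreviewed", "frameworks": []},
]

def shadow_ai(records):
    """Flag systems with no owner or no completed security review --
    the 'everything already in use' that visibility is meant to surface."""
    return [r["name"] for r in records
            if r["owner"] is None or r["status"] != "approved"]

print(shadow_ai(inventory))  # -> ['content-drafter']
```

Even this skeletal version makes the governance gap concrete: any system the check returns is one that a policy cannot yet govern, because no one owns it and no one has reviewed it.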
Ongoing monitoring is just as critical. Teams need continuous visibility into system behavior, along with the ability to produce evidence when needed. Automation helps here: tracking usage, applying consistent controls, and collecting evidence without the scramble. This is where governance becomes operational.
Stay in control as AI scales
Vanta helps teams bring structure to AI governance with a centralized system to inventory AI tools, define ownership, enforce controls, and continuously monitor how systems behave. From mapping AI usage to frameworks like SOC 2, ISO 27001, and ISO 42001 to automatically collecting the evidence auditors expect, Vanta makes it easier to move from policy to practice.
As AI becomes part of your core infrastructure, the ability to prove control matters just as much as the control itself. Vanta automatically collects audit-ready evidence, maps AI usage to compliance frameworks, and gives teams real-time visibility into how systems behave so you stay ready.
AI adoption is moving fast. Let Vanta help: Request a demo.




