
Your auditor is about to ask about AI agents. 9 things they'll want to see
Studies show that AI adoption outpaces understanding: 72% of organizations are already using or planning to use agentic AI, while 65% say their use of AI is moving faster than their ability to fully understand it (Vanta State of Trust, 2025).
Audits are starting to reflect that gap. In 2025, 72% of S&P 500 companies disclosed at least one material AI risk, up from 12% in 2023. Yet only 26% of organizations have comprehensive AI governance policies in place.
That shift is also formalizing. ISO 42001, published in 2023, gives organizations a structured AI Management System (AIMS) that auditors can certify against—and it aligns closely with the EU AI Act, which becomes fully enforceable in August 2026. For companies building or deploying AI, it's quickly becoming the governance benchmark.
What auditors actually evaluate in AI systems
Auditors aren’t waiting for AI-specific frameworks to catch up—they’re applying the ones that already exist. Even though SOC 2 and the NIST AI RMF weren’t designed with autonomous agents in mind, auditors map agent behavior directly to those controls. And with ISO 42001—the first certifiable international standard built specifically for AI management systems—auditors now have a dedicated framework to evaluate how organizations govern AI. If an AI agent can access data, trigger workflows, or make decisions, it’s treated like any other system that can introduce risk.
That shift is only speeding up. NIST’s AI Agent Standards Initiative is expected to shape compliance frameworks and vendor assessments as soon as 2027.
Across all of these frameworks, auditors are looking for control, which usually comes down to answering a few questions:
- Can you explain what your AI systems do?
- Can you show how access and decisions are controlled?
- Can you provide evidence that oversight is consistent?
Underneath all of it is a simple standard: your AI systems should behave predictably, securely, and in line with defined controls. Here are nine factors your auditor will likely want to see at your organization.
1. A complete inventory of AI agents across your environment
Auditors will expect a clear list of every AI agent in use, so they can understand where automation is happening and what risks it may introduce. That includes agents across departments and functions, such as:
- A support agent drafting and sending replies in Zendesk
- A finance agent approving low-risk invoices in NetSuite
- A sales agent updating Salesforce records
- A security agent triaging alerts in real time
They’ll also expect context like:
- Where each agent is deployed
- What systems it connects to
- What actions it can take
Most organizations don’t have this fully mapped. That’s where shadow AI starts to creep in.
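If you're starting from zero, even a lightweight structured record per agent beats scattered tribal knowledge. Here's a minimal sketch in Python; the field names and example agents are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    """One inventory entry per AI agent (illustrative schema)."""
    name: str
    deployed_in: str                 # environment or business unit
    connected_systems: list[str] = field(default_factory=list)
    allowed_actions: list[str] = field(default_factory=list)

inventory = [
    AgentRecord(
        name="support-reply-drafter",
        deployed_in="production / customer support",
        connected_systems=["Zendesk"],
        allowed_actions=["draft_reply", "send_reply"],
    ),
    AgentRecord(
        name="invoice-approver",
        deployed_in="production / finance",
        connected_systems=["NetSuite"],
        allowed_actions=["approve_low_risk_invoice"],
    ),
]

# Quick visibility: which agent touches which systems
for agent in inventory:
    print(agent.name, "->", ", ".join(agent.connected_systems))
```

However you store it, the point is that the inventory is queryable, so an auditor's question like "which agents touch NetSuite?" has a one-line answer.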
2. Defined ownership for every AI system
To help mitigate that shadow AI risk, every AI system needs a clear owner. That owner should be responsible for:
- Approving agent use cases
- Managing changes and updates
- Monitoring performance and risk
Without ownership, issues tend to stall. A finance agent might be configured by engineering, used by finance, and reviewed by security. When something breaks, no one is fully accountable.
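One way to make accountability checkable rather than aspirational is to treat ownership as data. A small sketch, assuming a simple agent-to-owner mapping (the agent names and addresses are hypothetical):

```python
# Illustrative ownership register: every agent maps to one accountable owner.
AGENT_OWNERS = {
    "support-reply-drafter": "support-ops@example.com",
    "invoice-approver": "finance-systems@example.com",
    "crm-record-updater": None,  # unowned -> should fail review
}

def unowned_agents(owners: dict[str, str | None]) -> list[str]:
    """Return agents with no accountable owner on record."""
    return [agent for agent, owner in owners.items() if not owner]

print(unowned_agents(AGENT_OWNERS))  # ['crm-record-updater']
```

A check like this can run in CI or as a scheduled job, so an ownerless agent gets flagged before an auditor finds it.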
3. Clear boundaries on what agents can and cannot do
Auditors will look closely at how access and permissions are defined and enforced: what each agent is allowed to do, what it's blocked from doing, and what systems or data it can access. That scrutiny is warranted, since only 48% of organizations have frameworks in place to limit AI autonomy.
Each agent should be treated like its own identity, with scoped permissions that can be audited and reviewed. In practice, this might look like:
- A support agent that’s allowed to issue refunds under $100, but prevented from issuing larger refunds without human approval
- A procurement agent that can draft purchase orders, but can’t approve or send them without a reviewer
- A CRM automation agent that can update customer records, but has no access to financial systems
These boundaries map directly to access control requirements in SOC 2 and ISO 27001. ISO 42001 goes further—it explicitly requires organizations to define the scope of AI autonomy, document whether they serve as an AI developer, deployer, or user, and conduct AI impact assessments that evaluate downstream risks of agent actions.
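In code, boundaries like these often reduce to an explicit per-agent policy checked before every action. A rough sketch of that pattern, with hypothetical agent names, actions, and limits:

```python
# Illustrative per-agent policy: scoped actions plus a monetary ceiling.
POLICIES = {
    "support-agent": {"allowed": {"issue_refund"}, "refund_limit_usd": 100},
    "procurement-agent": {"allowed": {"draft_po"}},  # no approve/send rights
}

def is_permitted(agent: str, action: str, amount_usd: float = 0.0) -> bool:
    """Check an action against the agent's scoped policy."""
    policy = POLICIES.get(agent)
    if policy is None or action not in policy["allowed"]:
        return False
    limit = policy.get("refund_limit_usd")
    return limit is None or amount_usd <= limit

print(is_permitted("support-agent", "issue_refund", 75))   # True
print(is_permitted("support-agent", "issue_refund", 250))  # False -> escalate
print(is_permitted("procurement-agent", "approve_po"))     # False
```

Because the policy lives in one place, it can itself be version-controlled and reviewed, which is exactly the kind of evidence auditors ask for.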
4. Evidence of human oversight and intervention points
Autonomy needs guardrails. Auditors expect human approval for sensitive actions, clear escalation paths, and the ability to override or stop an agent.
In practice, issues often emerge gradually: an agent starts by recommending refunds, then auto-approves under a threshold, and eventually expands its scope without formal review. Oversight needs to stay consistent as autonomy increases.
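A common implementation pattern is an approval gate: actions under a defined threshold proceed automatically, and everything else lands in a human review queue. A simplified sketch, where the threshold and queue are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    action: str
    amount_usd: float

HUMAN_REVIEW_QUEUE: list[Decision] = []

def execute_with_oversight(decision: Decision, auto_approve_limit: float = 100.0) -> str:
    """Auto-approve only below the limit; escalate everything else."""
    if decision.amount_usd <= auto_approve_limit:
        return "auto-approved"
    HUMAN_REVIEW_QUEUE.append(decision)  # a human must approve or override
    return "escalated"

print(execute_with_oversight(Decision("refund", 40.0)))   # auto-approved
print(execute_with_oversight(Decision("refund", 400.0)))  # escalated
```

The key property is that raising the threshold requires changing the code or config, which leaves a reviewable trail instead of silent scope creep.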
5. Logging and traceability of AI decisions
If an AI agent takes action, you need a record of it. Auditors expect logs that capture what happened, when it happened, what inputs were used, and why the decision was made.
For example, if an agent updates 200 CRM records in an hour, you should be able to trace exactly what triggered that behavior.
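A structured, append-only log line per action makes that traceability practical. A minimal sketch using Python's standard logging module; the field names are assumptions, not a required format:

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-audit")

def log_agent_decision(agent: str, action: str, inputs: dict, rationale: str) -> None:
    """Emit one structured record per agent action: what, when, inputs, why."""
    log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent": agent,
        "action": action,
        "inputs": inputs,        # what the agent saw
        "rationale": rationale,  # why it acted
    }))

log_agent_decision(
    agent="crm-record-updater",
    action="update_record",
    inputs={"trigger": "bulk_sync_job", "record_count": 200},
    rationale="Nightly sync flagged stale ownership fields",
)
```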
This visibility supports both auditability and incident response.
6. Data handling and model input controls
AI systems are only as controlled as the data they use. Auditors want to see clear rules around what data an agent can access, how it’s used, and whether sensitive information is properly protected.
In practice, that means limiting agents to only the data they need, anonymizing or minimizing personal data, and ensuring consent where required. For example, a support agent shouldn’t have access to full customer records if it only needs ticket history to do its job.
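One common pattern is an allow-list filter applied before any data reaches the model. A small sketch, with hypothetical field names:

```python
# Illustrative minimization: pass the agent only the fields it needs.
ALLOWED_FIELDS = {"ticket_id", "ticket_history", "product"}

def minimize(record: dict) -> dict:
    """Strip everything outside the agent's allow-list before model input."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

customer_record = {
    "ticket_id": "T-1042",
    "ticket_history": ["Login fails after update"],
    "product": "mobile-app",
    "email": "jane@example.com",    # dropped before model input
    "payment_method": "visa-4242",  # dropped before model input
}
print(minimize(customer_record))
```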
In practice, these controls remain uneven: only 35% of organizations rely solely on anonymized data, and just 31% require opt-in for AI data usage, leaving plenty of room for inconsistent handling.
7. Risk assessments specific to AI systems
AI introduces new types of risk, and auditors expect formal assessments that account for things like misuse scenarios, model failures, and downstream impact across systems. ISO 42001 formalizes this through a requirement for AI impact assessments—structured evaluations of how an AI system could affect individuals, groups, and society, including considerations around bias, transparency, and ethical use.
That means you’ll want to add AI-specific risks to your risk planning. That might include creating plans for scenarios like what happens if an agent approves fraudulent invoices or exposes sensitive data through outputs or logs.
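A lightweight risk register can make those scenarios concrete and sortable. A sketch, assuming a simple likelihood-times-impact score; the scenarios, scores, and mitigations are illustrative:

```python
from dataclasses import dataclass

@dataclass
class AIRisk:
    scenario: str
    likelihood: int  # 1 (rare) .. 5 (frequent)
    impact: int      # 1 (minor) .. 5 (severe)
    mitigation: str

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

register = [
    AIRisk("Agent approves fraudulent invoice", 2, 5,
           "Dual approval above $1,000; weekly sample review"),
    AIRisk("Sensitive data exposed via agent logs", 3, 4,
           "Redact PII at the logging layer"),
]

# Review highest-scoring risks first
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  {risk.scenario} -> {risk.mitigation}")
```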
Only 45% of organizations conduct regular AI risk assessments today.
8. Continuous monitoring, not point-in-time reviews
AI systems don’t adhere to audit schedules. Auditors expect ongoing monitoring of behavior and access, alerts for anomalies, and clear visibility into how systems change over time—because models, integrations, and permissions can shift quickly, introducing new risks without obvious signals.
At the same time, teams already spend an average of 12 weeks per year on compliance work, making manual reviews hard to sustain in dynamic environments. Continuous monitoring is what actually scales.
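A simple behavioral baseline, like a sliding-window rate check, is one place automated monitoring can start. A sketch, with an assumed budget of 100 actions per hour:

```python
from collections import deque
from time import time

class RateMonitor:
    """Alert when an agent exceeds a per-window action budget (illustrative)."""

    def __init__(self, max_actions: int, window_seconds: float):
        self.max_actions = max_actions
        self.window = window_seconds
        self.events: deque[float] = deque()

    def record(self, now: float | None = None) -> bool:
        """Record one action; return True if the rate is anomalous."""
        now = time() if now is None else now
        self.events.append(now)
        # Drop events that have aged out of the window
        while self.events and now - self.events[0] > self.window:
            self.events.popleft()
        return len(self.events) > self.max_actions

monitor = RateMonitor(max_actions=100, window_seconds=3600)
for i in range(150):
    if monitor.record(now=float(i)):  # simulated timestamps
        print(f"ALERT: unusual action rate at event {i}")
        break
```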
9. Evidence, not policies
Auditors want proof that controls are working in practice. Sixty-one percent of organizations say they spend more time proving security than improving it—highlighting how critical automation has become. Evidence should be continuously collected, easy to verify, and directly tied to controls.
This includes process documentation that clearly defines roles and responsibilities, along with systems that automatically collect and map evidence to controls. This is where tools like Vanta and your ticketing or workflow system come in.
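In practice, "evidence tied to controls" often means a map from each control to its latest artifact, plus a freshness check. A sketch, where the control names, artifacts, and 90-day window are assumptions:

```python
from datetime import date

# Illustrative control-to-evidence map
EVIDENCE = {
    "Agent access review": {"artifact": "access_review_2025Q4.csv",
                            "collected": date(2025, 11, 3)},
    "Decision log retention": {"artifact": "log_retention_export.json",
                               "collected": date(2025, 6, 1)},
}

def stale(evidence: dict, as_of: date, max_age_days: int = 90) -> list[str]:
    """Return controls whose evidence is older than the freshness window."""
    return [control for control, e in evidence.items()
            if (as_of - e["collected"]).days > max_age_days]

print(stale(EVIDENCE, as_of=date(2025, 12, 1)))  # ['Decision log retention']
```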
What to do now before your next audit
You don't need to solve everything at once. Start with structure: build a centralized inventory of AI agents, assign clear ownership, implement identity-based access controls, monitor activity continuously, and automate evidence collection and reporting. Keep documented processes accessible, and update them whenever something changes.
These steps align closely with how auditors are already evaluating AI systems.
Turn AI governance into audit-ready evidence with Vanta
Vanta helps bring structure to AI governance with a centralized system to manage agents, enforce access controls, continuously monitor activity, and automatically collect the evidence auditors expect. From mapping AI systems to frameworks like SOC 2, ISO 27001, and ISO 42001 to maintaining real-time visibility into how controls perform, Vanta makes it easier to prove that your AI is governed.
With 95% of the required ISO 42001 document templates, 70+ controls, and a preferred certification partnership with A-LIGN, Vanta helps you move from governance intent to audit-ready evidence.
Request a demo to see how Vanta helps you stay audit-ready as AI adoption scales.




