BlogSecurity
October 22, 2025

30+ due diligence questions to ask AI vendors in a security review

Written by
Vanta
Reviewed by
Connor Synder
GRC, Subject Matter Expert

Accelerating security solutions for small businesses 

Tagore offers strategic services to small businesses. 

A partnership that can scale 

Tagore prioritized finding a managed compliance partner with an established product, dedicated support team, and rapid release rate.

Standing out from competitors

Tagore's partnership with Vanta enhances its strategic focus and deepens client value, creating differentiation in a competitive market.

Introducing third-party AI into your systems can be a milestone for productivity and growth, but it also expands your attack surface in unpredictable ways. If your AI vendors have weak controls, threats like data poisoning and algorithm failure can ripple through your systems.

According to Vanta’s 2024 State of Trust Report, AI-based malware has increased by 32% over the last 18 months alone, showing how quickly malicious actors exploit weak points in AI security, whether in your own environment or your vendors’. That’s why it’s necessary to thoroughly evaluate the security posture of all your AI vendors and partners, but what should the due diligence questions look like?

In this guide, we’ll discuss:

  • Common risks of working with AI vendors
  • Key questions you should ask them during a security review

Common risks of working with AI vendors

The first step to an effective security review is to understand the unique risks your AI vendors add to your environment and identify the assets you need to secure. Some of the most common risks are:

  • Malicious use or compromise of AI tools: Hackers can exploit AI tools to generate phishing attacks, malware, or harmful outputs. Techniques like prompt injection can also compromise the AI’s behavior and produce results that violate internal policies or bypass security controls.
  • Intellectual property exposure: AI systems can leak or expose sensitive or proprietary information due to improper training data, model inversion attacks, or weak isolation in multi-tenant AI environments.
  • Lack of transparency in AI decision-making: It can be hard to justify how AI agents and solutions make decisions, especially those with complex algorithms. This opacity means you cannot always identify and address vulnerabilities in third-party systems. 
  • Compliance violations due to evolving regulations: The AI compliance space is in a state of flux, with standards and regulations changing quickly to keep pace with AI. These updates can create gaps in a vendor’s compliance posture, which increases the risk of violations for your organization.
  • Data privacy concerns: AI tools controlled by vendors still access large amounts of data to function. If the data shared is sensitive in nature, your organization takes on the risk of exposure leakage or misuse, and depends on the vendor to implement the appropriate controls.

{{cta_withimage28="/cta-blocks"}}

Questions to ask your AI vendor across 7 key categories

When planning vendor security reviews, start by evaluating which vendors are high-risk based on the impact they have on your AI initiatives. You can select those that support critical activities or handle sensitive data for deeper reviews. Additionally, conduct a thorough security review every time you onboard new vendors.

Your AI security questionnaire should cover these key categories of questions:

  1. Security controls
  2. Third-party risk management
  3. Data governance and privacy
  4. Transparency and explainability
  5. Performance monitoring and management
  6. Organizational management
  7. Human oversight

We’ll share 30+ sample questions across these categories below. Note that the concerns under some of the categories may overlap, so we focused on questions that are a natural fit for each heading.

1. Security controls

Third-party AI systems have many direct vulnerabilities that organizations inherit, such as direct prompt injections, data breaches, data and model poisoning, and malware generation. Your vendor’s security controls are the only safeguard against these risks.

For example, let’s consider a financial services firm that uses a third-party GenAI model to automate its own chatbot. If someone at the provider’s end decides to retrain the model using all submitted prompts, which include sensitive details like account numbers, the data could be exposed if the model is ever breached.

Ideally, your review should probe both security controls and operational practices of the vendor. Some of the questions you could ask include:

  • Do you restrict and control access to AI systems according to user roles?
  • How often are AI system vulnerability scans performed?
  • Is AI security built into the broader cybersecurity program?
  • Do you use customer data for training purposes? If yes, how is the data anonymized?
  • Are API integrations protected with the right security measures, including authentication, authorization, and encryption in transit?

2. Third-party risk management

AI vendors often rely on their own vendors or subcontractors to deliver their service. They are fourth parties that you don’t interact or share contractual obligations with, so you cannot verify their security measures.

There are several areas to review to mitigate these risks:

  • Supply chain mapping and transparency: See if your vendors disclose the third parties they rely on, and whether your data is shared or used for model training
  • Contractual protections for AI data: Review whether data transfers between your vendors and their contractors are opt-in or opt-out, and the contractual terms that govern how your data is shared, used, and stored
  • Exit strategies and sub-processor agreements: Confirm if your vendor has clear offboarding procedures for subprocessors, termination clauses, and notification obligations in their contract

You can translate these investigations into concrete questions in your vendor review questionnaire. Here are some sample questions:

  • Do you evaluate third-party relationships for security, privacy, and ethical risks?
  • Is there a formalized security-informed AI vendor onboarding process?
  • What procedures are in place to securely offboard a contractor?
  • Do your contracts with third parties require explicit controls or SLAs?
  • How do you report a third-party breach?

3. Data governance and privacy

According to Vanta’s 2025 AI governance report, data privacy is among the most pressing issues when using AI systems, with 63% of organizations citing it as their top concern.

If your vendor systems have access to sensitive data, look into how it’s handled—what data is being collected, how it’s stored, and whether it’s used for AI training.

Some of the questions you can ask to understand your vendor’s approach include:

  • Does your organization conduct data protection impact assessments?
  • Can customers opt out of having their data used for training AI models?
  • How long do you retain collected data, and how do you dispose of it?
  • Do you comply with any key data protection regulations, such as GDPR or HIPAA?
  • What encryption techniques are applied to data at rest, in transit, and during processing?

4. Transparency and explainability

Vendor AI models are only as reliable as their training data. Incorrect or biased data makes the model prone to skewed choices. Depending on your use case, this can lead to operational and ethical risks that impact your security posture. But since you cannot access a third party’s training data, verifying AI decision-making processes will be challenging for your team.

You can test AI systems for bias without accessing training data in several ways, such as validating your own inputs or using open source metrics to identify undue reliance on sensitive attributes. You can also just ask your vendor if they adhere to fairness standards from frameworks like ISO 42001 and the EU AI Act.”

Marsel Fazilov

Many AI regulations, such as the EU AI Act, list transparency and explainability as core compliance requirements, with violations potentially resulting in financial penalties or legal escalation.

You can see how your vendors track and audit all AI model decisions with questions like:

  • Did your AI models undergo bias or fairness checks?
  • Which explainability methods do you implement to understand AI system outputs?
  • Do your high-risk AI systems come with interpretability features?
  • Can users flag biased, harmful, or incorrect outputs?
  • How do you handle updates and retraining of models?

{{cta_webinar6="/cta-blocks"}}

5. Performance monitoring and management

When vendor AI solutions are integrated into critical workflows, service disruptions or latency can result in downtime, making ongoing monitoring a necessity. Oversight also helps catch issues like AI hallucinations, particularly in generative models, which lead to inaccurate or misleading outputs.

Model degradation is another issue that requires ongoing performance management. Over time, the training data your vendors used can become outdated and show behaviors like: 

  • Reduced output accuracy
  • Increased error rates
  • Slower response times
  • Overfitting or overreliance on outdated training data

The questions you could include in this category of your review include:

  • Do you continuously monitor AI systems for model drift and performance degradation?
  • Do you have automated monitoring systems for detecting AI inaccuracies?
  • Do you obtain independent external validation for high-risk AI models and their interpretability?
  • How frequently do you update datasets and retrain?
  • What alerts are in place if AI performance falls below acceptable thresholds?

6. Organizational management

Even with robust internal safeguards, weak governance practices on your vendor’s end increase operational and compliance risks. For example, standards such as ISO 42001, GDPR, and SOC 2 require that your vendors meet compliance requirements. This means that if your AI vendors experience a breach, your organization can be held accountable for not managing third-party risk.

That’s why it’s essential to review a vendor’s internal procedures for AI governance. That includes their ongoing mechanisms for detecting shadow AI, which refers to unmanaged AI tools used without approval or oversight.

Here are some sample questions to include in your vendor security review:

  • Do you categorize AI systems based on their risk levels?
  • Do you maintain an inventory of AI-enabled systems and shadow AI?
  • Is there a dedicated team for overseeing AI development and deployment?
  • Do you train staff on AI governance and responsible use?
  • How do you ensure compliance with internal AI governance standards?

{{cta_withimage28="/cta-blocks"}}

7. Human oversight

Although AI models can streamline many repetitive workflows and reduce manual efforts, they still require human oversight to detect and address anomalous outputs. You should verify if your vendors enable human checks for their AI systems to detect functionality issues throughout the AI lifecycle.

You should also explicitly ask about remediation procedures and the escalation paths to report unexpected behavior. Other safeguards to review include the frequency of model audits and the retraining protocols to keep systems aligned with evolving data.

You can verify your vendors have adequate human oversight procedures with questions like:

  • Do you have a team to monitor issues with AI systems?
  • What are your established protocols on when and how to retrain AI models?
  • How frequently do humans validate high-risk AI system outputs?
  • Do you have a chain of accountability for AI decisions and remediation?
  • How do you integrate human oversight in automated systems?

How Vanta helps you work with AI vendors confidently

Vanta is a trust management platform that helps you secure AI systems, demonstrate responsible AI practices, improve vendor monitoring practices, and enhance customer trust.

You can download our ready-made AI Security Assessment Template for conducting vendor reviews—it’s an expert-vetted set of questions for thoroughly evaluating your vendor’s AI security posture. You can also use it to demonstrate your organization’s AI practices to potential prospects.

Besides built-in guidance, Vanta provides support for leading AI compliance standards and frameworks, such as ISO 42001 and NIST AI RMF, strengthening your approach to AI security.

The platform offers a dedicated AI compliance product with features that reduce procedural ambiguity, such as:

  • Faster responses to security reviews with Question Automation
  • Access to a public-facing Trust Center to demonstrate security practices
  • Pre-built policy templates and control sets
  • Automated evidence collection powered by 375 + integrations

Explore how these features can support your organization by scheduling a custom demo today.

{{cta_simple34="/cta-blocks"}}

Access Review Stage Content / Functionality
Across all stages
  • Easily create and save a new access review at a point in time
  • View detailed audit evidence of historical access reviews
Setup access review procedures
  • Define a global access review procedure that stakeholders can follow, ensuring consistency and mitigation of human error in reviews
  • Set your access review frequency (monthly, quarterly, etc.) and working period/deadlines
Consolidate account access data from systems
  • Integrate systems using dozens of pre-built integrations, or “connectors”. System account and HRIS data is pulled into Vanta.
  • Upcoming integrations include Zoom and Intercom (account access), and Personio (HRIS)
  • Upload access files from non-integrated systems
  • View and select systems in-scope for the review
Review, approve, and deny user access
  • Select the appropriate systems reviewer and due date
  • Get automatic notifications and reminders to systems reviewer of deadlines
  • Automatic flagging of “risky” employee accounts that have been terminated or switched departments
  • Intuitive interface to see all accounts with access, account accept/deny buttons, and notes section
  • Track progress of individual systems access reviews and see accounts that need to be removed or have access modified
  • Bulk sort, filter, and alter accounts based on account roles and employee title
Assign remediation tasks to system owners
  • Built-in remediation workflow for reviewers to request access changes and for admin to view and manage requests
  • Optional task tracker integration to create tickets for any access changes and provide visibility to the status of tickets and remediation
Verify changes to access
  • Focused view of accounts flagged for access changes for easy tracking and management
  • Automated evidence of remediation completion displayed for integrated systems
  • Manual evidence of remediation can be uploaded for non-integrated systems
Report and re-evaluate results
  • Auditor can log into Vanta to see history of all completed access reviews
  • Internals can see status of reviews in progress and also historical review detail
FEATURED VANTA RESOURCE

The ultimate guide to scaling your compliance program

Learn how to scale, manage, and optimize alongside your business goals.