
8 questions about AI and compliance
AI is one of the hottest topics in tech right now. More than half of consumers have already tried generative AI tools like ChatGPT or DALL-E. According to a Gartner poll, 70% of executives say their business is investigating and exploring how they can use generative AI, while 19% are in pilot or production mode. The leading business use cases for AI include enhancing the customer experience (38%), growing revenue (26%), and optimizing costs (17%).
While there’s tremendous excitement around AI, there are also concerns about potential risks. It’s important to do your due diligence before bringing any tool into your ecosystem to ensure your data remains secure, and AI is no exception.
At the same time, there are also risks that come with not innovating. Companies that implement AI are seeing major efficiency gains and revenue-generating benefits, and the longer you wait to start using AI, the further ahead your competition gets. The time to start thinking about AI is now.
We recently hosted a webinar where we answered some of our community's most pressing questions about AI and compliance. We’ve compiled some of them here to help you get started.
1. What are the biggest concerns to consider if I’m integrating ChatGPT into my product?
If you’re planning to integrate ChatGPT into the core functionality of your product, treat it like you would any other vendor you’re bringing into your ecosystem. The key is to make sure your provider is going to be a good steward of your data, and the main thing to be concerned about is whether the provider trains its models on your data.
If you’re using the free version of ChatGPT (or any free tool), you can adjust your settings to ensure your data is not used to train the model. Even with these settings in place, we recommend that you don't put any confidential, personal, customer, or private company data into any free service, as there is no guarantee of what will happen to it.
Being under a paid contract reduces your risk because the vendor has agreed to terms and conditions governing the use of your data. This is even more critical when integrating a new tool into your product.
We’ve recently gone through this process at Vanta and we know that OpenAI’s API doesn’t train on your data. The company guarantees you won’t need to worry about your data being shared with other customers.
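For illustration, here is a minimal sketch of what calling the paid API from a product backend might look like, using OpenAI's official Python client. The model name, the summarization use case, and the function name are placeholder assumptions, not something covered in the webinar.

```python
# Minimal sketch: calling OpenAI's paid API from a product backend.
# Assumes the official `openai` Python client (v1+) and an API key in the
# OPENAI_API_KEY environment variable; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_ticket(ticket_text: str) -> str:
    """Send non-sensitive ticket text to the API and return a short summary."""
    # Keep confidential or personal data out of prompts unless your contract
    # with the vendor explicitly covers how that data may be used.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "Summarize the support ticket in two sentences."},
            {"role": "user", "content": ticket_text},
        ],
    )
    return response.choices[0].message.content
```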
2. Are there any laws, regulations, standards, or guidelines to consider when developing or deploying an AI system?
There are a variety of privacy laws to consider based on the region you do business in, where your customers and employees are, and which industry you’re in. Some noteworthy frameworks to consider include CCPA and other U.S. privacy regulations, as well as GDPR.
If you plan to use AI to augment or automate business decisions, it's important to be transparent about that. Many privacy regulations require that consumers be informed about how automated processes impact them and that they be able to opt out of those processes. One example is using AI to filter job applications; applicants must be able to opt out of this. You’ll also need to take extra precautions to reduce the bias of the algorithm when using AI to make business decisions.
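As a hedged illustration of that opt-out requirement, an application pipeline might gate the automated step on the candidate's recorded preference. The field and function names below are hypothetical and not drawn from any specific regulation or product.

```python
# Illustrative sketch only: honoring an opt-out before automated screening.
from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    resume_text: str
    opted_out_of_automated_screening: bool  # preference captured at application time

def route_application(applicant: Applicant) -> str:
    """Send the applicant down a human or automated path based on their choice."""
    if applicant.opted_out_of_automated_screening:
        # Privacy regulations can require a non-automated alternative on request.
        return queue_for_human_review(applicant)
    return run_ai_screening(applicant)

def queue_for_human_review(applicant: Applicant) -> str:
    return f"{applicant.name}: queued for manual review"

def run_ai_screening(applicant: Applicant) -> str:
    # The automated decision should be logged so it can be explained later.
    return f"{applicant.name}: scored by the AI filter"
```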
Over the coming months and years, more regulations and laws will be created specifically around the use of AI. The European Union has already proposed a law called the EU AI Act that, if passed, would assign risk levels to different applications of AI: uses deemed unacceptable would be banned, high-risk uses would face strict requirements, and lower-risk uses would remain largely unregulated.
3. Does OpenAI’s paid API align with SOC 2? Is it secure?
The short answer is yes — but it’s important to do your due diligence. OpenAI is a mature company with a robust security and compliance team. At Vanta, we recently went through this and saw no issues with SOC 2.
The SOC 2 criteria make no mention of AI, nor do they deem it any riskier than other tools. SOC 2 auditors don’t see AI as a red flag as long as you've done your due diligence and met all the risk assessment criteria for SOC 2.
4. What happens if you fine-tune a model using anonymized data? Does that have CCPA and GDPR implications?
AI models are trained and enhanced with datasets. It’s important to ensure that a shared AI model is not trained on customer data, as you risk exposing one customer's data to a different customer.
There are a few ways to fine-tune your AI models that prevent this (see the sketch after the list):
- Use OpenAI’s fine-tuning API.
- Run your own models internally and fine-tune them on other data or with a different service provider.
- Fine-tune AI models for specific customers to eliminate the risk of sharing data between customers.
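As a rough sketch of the first and third approaches combined, OpenAI's Python client exposes a fine-tuning endpoint that can be run once per customer. The file path, base model, and suffix below are illustrative assumptions.

```python
# Sketch: per-customer fine-tuning with OpenAI's fine-tuning API.
# Assumes the official `openai` Python client (v1+); the file path, base model,
# and suffix are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

def fine_tune_for_customer(customer_id: str, training_path: str) -> str:
    """Upload one customer's training file and start a dedicated fine-tuning job."""
    # Each customer gets its own dataset and resulting model, so one customer's
    # data never influences the model another customer uses.
    training_file = client.files.create(
        file=open(training_path, "rb"),
        purpose="fine-tune",
    )
    job = client.fine_tuning.jobs.create(
        training_file=training_file.id,
        model="gpt-3.5-turbo",             # placeholder base model
        suffix=f"customer-{customer_id}",  # tags the resulting model per customer
    )
    return job.id
```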
If you have truly anonymized data, which is data that can no longer be tied back to a person, that data is no longer within the scope of CCPA or GDPR. This means that using fully anonymized data to train an AI model won’t breach your compliance with CCPA or GDPR (although you should also keep your contractual and extra-contractual commitments to your customers in mind). The only caveat is that you’ll need a commitment from any third parties you share the data with that they won’t try to re-identify it, and you’ll need to monitor them to ensure that they don't.
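As a simplified illustration, a pre-processing step might strip direct identifiers before records are exported as training examples. Note that dropping obvious fields alone does not meet the bar for true anonymization; quasi-identifiers and re-identification risk still have to be assessed. The field names here are hypothetical.

```python
# Simplified illustration only: dropping direct identifiers before building a
# training file. This alone is NOT full anonymization under GDPR or CCPA;
# quasi-identifiers and re-identification risk still need to be assessed.
DIRECT_IDENTIFIERS = {"name", "email", "phone", "customer_id", "ip_address"}

def strip_direct_identifiers(record: dict) -> dict:
    """Return a copy of the record without obvious identifying fields."""
    return {key: value for key, value in record.items() if key not in DIRECT_IDENTIFIERS}

records = [
    {"name": "Ada", "email": "ada@example.com", "ticket": "Reset my password"},
]
training_examples = [strip_direct_identifiers(r) for r in records]
print(training_examples)  # [{'ticket': 'Reset my password'}]
```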
5. When it comes to getting standards like ISO 27001 or SOC 2, how can AI help companies?
Achieving ISO 27001 or SOC 2 compliance can be a complex and time-consuming process when done manually. AI and automation tools can streamline it.
There are intelligent automation tools that can help you easily integrate the systems, tools, and applications connected to your ecosystem that need to be included in the scope of your ISO 27001 or SOC 2. There are also tools that offer continuous monitoring capabilities and can detect risks in real time. These are two capabilities of Vanta that make getting compliant easy.
Looking toward the future of cybersecurity, AI will be baked into the cybersecurity toolkit at every level. While cybercriminals will start using AI, there will be AI on the other side detecting and preventing cybercrimes. AI will perform predictive analysis, enhance access monitoring, and get better at identifying bad actors who use AI for things like deepfakes or phishing scams. AI will be helping to fight AI!
6. What tools are there to help us manage or mitigate all these different sources of AI risks?
It’s important to use general cybersecurity best practices and tools that protect your organization from any outside vendor, including an AI vendor. This includes tools and processes like:
- Having robust risk management tools to monitor your business’ risk holistically and continuously.
- Using vendor risk management tools to manage your third-party risk and review processes.
- Reducing risk from employees who may be using tools that aren’t onboarded and vetted for security.
7. What are some ways that AI can help my team in the audit process?
Many organizations are in the early stages of implementing AI tools into their offerings, and there are some early examples of how this is improving the audit process. One example is GitHub Copilot, an AI pair programmer that can help your developers write well-structured code that aligns with your organization’s standards. Other AI-powered tools can help identify and remediate security vulnerabilities ahead of a breach.
Over the coming years, we expect there to be vast opportunities for AI-assisted tools in security and compliance. We anticipate that thousands of new tools will come to market and that existing tools will enhance and extend their capabilities with AI and machine learning.
8. How is Vanta going to integrate AI into its products?
We’re in the process of doing just that! While we won’t give too much away, AI plays an important role in Vanta’s products. We’ll have more news to announce very soon. Stay tuned! 😉
To learn more about AI and compliance, watch the full webinar.




