January 19, 2024

In AI we trust: AI governance best practices from legal and compliance leaders

Written by
Vanta

According to Vanta’s State of Trust Report, 54% of businesses say that regulating AI would make them more comfortable investing in it. But with regulation still in flux, how can companies adopt AI safely and responsibly to minimize risk while accelerating innovation? 

At our second annual VantaCon, we brought together a panel of legal, security, and compliance experts, moderated by Vanta’s Chief People & Legal Officer Ari Shahdadi, to share insights and best practices on how they’re approaching AI governance.

Here are the top takeaways from Jo Golub (Senior Director of Legal, Ironclad), Siah Burke (HIPAA Security Officer, Anthropic), and Dan Cunha (General Counsel, Quora). 

Watch the panel discussion: https://player.vimeo.com/video/895933749

Establish company policies for AI usage

At Ironclad, Senior Director of Legal Jo Golub and team approach AI usage in three different buckets: 

  • Building products with generative AI: In this category, the legal team works with the engineering team on building new products and features. Jo says they “coordinate closely together and make sure everybody’s on the same page about what we can do within the regulatory framework and our contractual language [as well as] the principles of how we work with AI.” The team has also published a white paper that lays out Ironclad’s AI governance model so that customers can understand how they use AI. 

  • Company or team use of AI tools: This usage applies to AI tools that aren’t part of Ironclad’s product itself but are used to fulfill business needs. As Jo explains, these are typically governed by the language in their master services agreement or data processing agreement (DPA), and the legal team plays an important role in making sure the company is operating within its obligations to customers and adhering to the regulatory framework.

  • Individual use of AI tools: The last bucket applies to individual employees’ usage of AI tools, which Jo says “can always feel hardest to get your arms around because you just can’t scale an audit of that or anything.” To give employees guidance on how to use those tools safely and responsibly, the team rolled out a generative AI policy that requires employees to opt out of AI model training and double-check their work for accuracy.

Ensure close collaboration between legal and compliance teams

"If you look at founded industries or technologies that are in use today,” says Siah Burke, HIPAA Security Officer at Anthropic, “they’re generally backed by long-standing founded laws, regulations, and accepted certifications.” This creates the backbone of a good compliance program where both legal and compliance teams play a part. 

With AI, especially in the building of foundational models, legal and compliance teams need to work together even more closely. “Right now, regulation either doesn’t exist or it’s rapidly changing. We all know that it’s coming, but it’s not really quite there yet.” 

In this quickly evolving regulatory environment, Siah stresses the importance of a unified partnership between legal and compliance teams. Compliance teams need to be able to apply new regulations, but understanding and communicating this information can be a challenge. That’s where legal teams play a crucial role: distilling new regulations for their compliance counterparts and working with them to write and roll out policies.

Stay informed while filtering out AI noise

With so much happening day to day, it can be overwhelming trying to keep up with the latest developments in AI regulation. To avoid information overload, Quora General Counsel Dan Cunha recommends focusing on what’s most relevant to your business. 

“Every article is [about] AI. Anytime a state passes some regulation — [for example] Maryland passed its own AI regulation — you look at it and it's really just, ‘Hey, the state should start thinking about using AI,’” says Dan. “A lot of it is filtering down and thinking about your own company and your own use case. There are going to be challenges to model providers and all things, but how does that impact you?”

Some of Dan’s go-to resources include Hogan Lovells’ Monday memo on the biggest developments in privacy, Davis Wright Tremaine’s blog on AI, and superhuman.ai. Once a quarter, Dan and his team create an executive summary of the latest AI regulations.

Dan also encourages his team to brush up on AI more generally. “We can’t really be good lawyers if we don’t understand the technology,” he says. “You’re not going to be an expert, but… understand the difference between an LLM and a diffusion model and know the terminology. So when you go talk to engineers and product people, you sound smart enough, but also that’s going to help you start issue spotting.”

Want to learn more about where the future of trust and AI are headed? Watch all VantaCon recordings here.
