January 12, 2024

How AI is transforming the future of trust

Written by
Vanta

How is generative AI transforming trust? And what does it mean for companies — from startups to enterprises — to be trustworthy in an increasingly AI-driven world? 

These questions were top of mind at our second annual VantaCon, where Vanta co-founder and CEO Christina Cacioppo moderated a discussion with Aaron Levie (co-founder and CEO of Box), Nat Friedman (entrepreneur and investor), and Harrison Chase (CEO of Langchain) on the obstacles and opportunities ahead in the future of trust. 

Below are excerpts from the conversation, lightly edited and condensed for clarity. 

AI and trust: A new era of computing

Christina: To kick things off, how do you think about the broader theme of trust and AI? And what do you think is missing from the conversation today? 

Aaron: I think trust and AI are going to be the most connected concepts probably of any technology that I've seen in terms of the overlap between the need for trust and a technology movement. 

[With] cloud computing, you sort of had this psychological element of trust, which is, “Okay, I'm moving my infrastructure from my data center to the cloud. I'm nervous about that.”

But a lot of that was sort of an emotional issue and in our brains. We've had things like cybersecurity and an element of trust, but in that sense, that's still a pretty deterministic set of issues — here's how you control your cyber footprint, we have a zero trust environment, so that's how you protect assets. 

AI and trust is a whole new ballgame because the models themselves [are] probabilistic [and] we don't always understand the sources of information that go into the models or the training sets themselves. So I think we're in an era where we collectively are going to have to figure out what this all looks like and how we build an environment of high-trust AI. This is going to be an all new era of computing requiring trust in our infrastructure. 

Nat: I completely agree with that. I think a way to think about what's happening is that these large language models or deep learning models — they're like some kind of alien technology that crash landed on planet earth. We've been tinkering with them for the last couple of years and figuring out what they can and can't do. But we didn't exactly design them. We don't exactly know how they work. And so there's a lot of mystery to it…. They're a little bit like black boxes, and there's an entire field called interpretability, which is about trying to look at models and figure out, how are they doing what they do, and what are the circuits that are being developed inside of here? So they're not engineered systems that we design. They're more like a stew (or something like that) and they grow. 

The design challenge that people have in building products around AI is how to take a model — which is very powerful and does amazing things that make you say, wow, but also often makes mistakes and makes errors — and build a product around it that makes it useful and something that you can rely on.

In what way do you design a human in the loop to validate and observe the outputs and make sure you're actually getting what you want? And how does that change as the models mature and we get more and more powerful capabilities, chasing the mistakes into weirder and weirder corners, where they maybe become less common but more unexpected when they occur? 

This is a fundamental question about building products and AI. For product designers and people who are incorporating them, you just have to know that it's almost like hiring people. They're going to make mistakes. So how do you design an organization that's tolerant to mistakes that some people might make? Sometimes it's almost the same with AI. 
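Nat's "human in the loop" point can be made concrete with a small sketch. Assuming some confidence score is available for each output (the threshold, the score, and all the names here are illustrative assumptions, not anything a panelist described), a product layer might auto-approve high-confidence results and queue the rest for a person:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Draft:
    """A model output awaiting validation."""
    text: str
    confidence: float  # model- or heuristic-derived score in [0, 1] (assumed)

@dataclass
class ReviewQueue:
    """Routes low-confidence outputs to a human instead of auto-acting.

    Illustrative sketch only: the 0.9 threshold and the idea of a single
    scalar confidence are simplifying assumptions.
    """
    threshold: float = 0.9
    pending: List[Draft] = field(default_factory=list)

    def submit(self, draft: Draft, auto_action: Callable[[str], None]) -> str:
        if draft.confidence >= self.threshold:
            auto_action(draft.text)   # trusted path: act on the output
            return "auto-approved"
        self.pending.append(draft)    # untrusted path: hold for a human
        return "queued for human review"
```

The design choice mirrors the hiring analogy: the organization (here, the queue) is built to tolerate individual mistakes rather than assuming none occur.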

The design challenge that people have in building products around AI is how to take a model — which is very powerful and does amazing things that make you say, wow, but also often makes mistakes and makes errors — and build a product around it that makes it useful and something that you can rely on. - Nat Friedman

Balancing innovation and regulation

Christina: How do you think about keeping pace with innovation and new technologies and balancing that with what sounds like an oncoming wave of a patchwork of regulation? 

Aaron: I'm more skeptical that we'll see much regulation because of the open source wedge. I don't quite understand what you'd regulate at this stage in a way that would be meaningful, and I think the countries that overregulate will see enough of a negative reaction from the economic consequences. So my take is that it's going to be very hard to centrally regulate AI. The best outcome, I actually think, is that regulation should emerge within the agencies where the outcomes and impact of AI show up. So the FAA should care about how AI shows up in airplanes and the FDA should care about where AI shows up in medical devices. 

I think it'll be hard to essentially do that, which is then going to need more ways of managing all of that. How do you manage the different privacy requirements from a regulatory standpoint in every single industry, in every single compliance regime? And so I think my guess is that as regulators work through this environment, they're going to find it harder and harder to centrally regulate this stuff because what they're going to find is, to Nat's point, it's a bunch of numbers, and they're going to realize like, oh, we can't really control what's in this CSV file. And that will create a lot of downstream, new open questions. 

Nat: I hope that's true. I think you might be right. It's funny, it's almost like a new technology emerges and then in our companies we establish a center of excellence for that technology, and then eventually it's just part of the way we do business. And maybe that's what'll happen in government. They're eager, though. I agree that it would be correct to regulate it at the application level. Look, it's already illegal to make bioweapons. Do you really need a new law for that? So I think that may be what happens. 

I think we will see some attempts in the meantime. As for the question of what the UIs and the applications are, I think that's really interesting, and it changes with each new generation of AI because we get more and more powerful and capable models. 

Right now, we're in the state where the models are pretty good, they have some intelligence in them, they can do some things well, but they're not sufficiently reliable, and not smart enough that it's obvious how to use them. In this intermediate stage that we're in before AI gets much more powerful, this is when the product design challenges are the largest because you have to figure out how to take something that has some power and some reliability issues and make it useful. 

Harrison: My sense on the UX side is that different UXs probably give different types of companies opportunities to take advantage. Adding chat to a product, for example: we've seen incumbents do that incredibly quickly and across a wide variety of industries. They have the data, they have the UI already, they have the platform that people are logging into. 

As the UXs maybe start to morph more and more away from just a chat box where it's easy to add that in, if it becomes something more complicated that's doing a lot of background work, there are real UX challenges in terms of: how do you communicate that to people, how do you build trust in what they're doing, and what does the human in the loop, or on the loop, look like? I think that maybe opens up the opportunity for vastly different types of businesses, and more startups, to come along and innovate there. 

The power and potential of innovating with AI

Christina: We've talked a lot about some of the risks and the challenges [of AI] and a little bit of the opportunities, but I think we've maybe steered a little bit more on the risk side. So I wanted to close this out with a strong flip to the other side. Harrison, you're working in this space, and probably planning to for years and years to come. What was the moment where you [determined], this is what I want to do? 

Harrison: So my background is in ML and MLOps, and the specific thing was playing around with GPT-3 and seeing, yeah, this is pretty good. They released some updates that kind of went under [the radar] and people didn't pay a lot of attention to them, but it got quite good. 

One thing in particular that makes me really excited about building a lot of these AI applications now is that the ML and AI component is almost like an application by itself. It's not just one cog. If you're building an agent, or a system to do question answering, yes, there's a lot of engineering to figure out, but that can be a product by itself. 

Christina: Nat, I know you've been in and around this space forever, but do you remember a moment, or a conversation, where you thought, oh, this is good, the time is now, or the time is approaching, the alien technology has landed? 

Nat: It was the same for me. When GPT-3 hit, I was completely blown away. I was CEO of GitHub and I said, “I don't know what, but we're going to build some product for developers with this.” And we entered this period of tinkering and prototyping and figuring out what worked and what didn't. We learned very quickly that the demo was always great with AI. Whatever idea we thought of, we built the demo, the proof of concept, it was always amazing. But then when you actually start using it, is it reliable? Does it really work? Can you trust it? Are you putting in more than you're getting out? That's about product design at this stage. 

The models are just going to get better. No one is stuck. If you go talk to the researchers who are pushing the frontier of AI capabilities right now, they have 200 ideas for how to improve them. They all sound plausible. They won't all work, but no one's out of ideas. We have a lot of ways to make things better, and there's even some level of convergence now around what are the likely paths forward to superhuman general intelligence. I'm basically like an AI maximalist. I think we will have universal remote workers. The population of your company will be primarily AI. It'll be increasingly neural over time. Whether this takes five years or 15 years, I don't know. But it'll happen because we're really good at innovating and making things better, and there's lots of ways to do that with AI. So it's just going to keep improving. 

"We imagine a world where anything that a human does on top of their data, now AI can do on top of your data." - Aaron Levie

Christina: [Aaron], do you have a GPT-3 story? 

Aaron: So I have a GPT-3.5 story — we were sleeping on the [GPT-3] and [GPT-2] waves. We had done this thing seven years ago called Box Skills, which was this framework for plugging into AI models, and we plugged into all the computer vision stuff. What we found was that it was extremely powerful, but very narrow. So customers would come to us and they would say, “Hey, we've got this great use case. Can you help us understand our data?” And we said, “Yes, here's this very niche model provider that is for real estate documents in the UK.” It was just so laborious to get it going, so customers didn't end up at the end of the funnel because of the complexity. So we were like, okay, well we have this framework, we have this infrastructure. We're ready to go the moment that something else happens in the future.

We watched GPT-2 and GPT-3. The ChatGPT interface was the thing that just blew our minds, and obviously the improvements in that model. Very quickly we strung it together with data in Box, and the mind-blowing thing was you could take a 50-page document and in three seconds get a response to, “Summarize this,” or “What are the legal risks in this contract?” or “Turn this marketing asset into a blog post.” 
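The 50-page-document workflow Aaron describes generally requires fitting text into a model's context window. A minimal sketch of the chunking step (the sizes, overlap, and function name are arbitrary illustrations, not Box's implementation):

```python
def chunk_document(text: str, max_chars: int = 2000, overlap: int = 200):
    """Split a long document into overlapping chunks that each fit a model's
    context window, so every chunk can be summarized separately and the
    partial summaries combined (a common map-reduce summarization pattern).
    All parameters are illustrative assumptions.
    """
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + max_chars, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        # overlap preserves context (e.g. a clause split mid-sentence)
        start = end - overlap
    return chunks
```

Each chunk would then be sent to the model with a prompt like “Summarize this,” and the per-chunk answers summarized once more into a final response.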

As has been shared, we imagine a world where anything that a human does on top of their data, now AI can do on top of your data. Obviously we have a lot of interfaces that we have to figure out along the way, but now imagine you have a thousand times more resources to throw at any problem in your business. 

What can you now do? How much more money can you make? How can you reduce risk? How can you make your business more efficient? This is sort of unlimited potential. 

I'm an AI maximalist and a human ingenuity maximalist in the sense that I actually think it's all good for everybody. If all of us had that kind of intelligence as a superpower, that actually means more jobs. It means more growth, it means more opportunity — not less — and not sort of in any kind of zero-sum way against what the humans are doing. I think we will find a way to make this a resource for growth and prosperity. So we're pretty excited. 

Want to see more from VantaCon? Watch all of the recordings here.
