
Building the Team at Anthropic: Daniela Amodei on Hiring 10x AI Engineers | Frameworks for Growth
🔔 Subscribe for more startup strategy and founder stories: https://www.youtube.com/@TrustVanta
👉 Follow us on all social media platforms: LinkedIn, X, Instagram, and TikTok
🎧 Follow Frameworks for Growth on Spotify and Apple Podcasts
In this episode
Vanta CEO Christina Cacioppo sits down with Daniela Amodei, President and Co-founder of Anthropic, to discuss what it takes to build both a mission-driven company and a cutting-edge AI research lab.
They explore how Daniela approaches hiring with intention, how to scale responsibly in the age of AI, and what it means to build trust while working on transformative technologies like Claude.
🔑 Topics covered (timestamps):
00:17 - Introduction
00:45 - Anthropic as an AI safety and research company
02:22 - Balancing technical research and commercialization
04:21 - Public benefit corporations (PBCs)
05:50 - Hiring lessons from Anthropic and OpenAI
08:03 - Articulating company values
09:33 - Aligning new employees on company values
12:39 - Non-obvious traits when hiring
15:25 - Balancing thoughtfulness and simplicity with engineering hires
18:50 - Common misconceptions about the future of AI
20:05 - Enterprises are bullish on AI
21:29 - What will be true about AI in 2035
22:57 - Co-founding a company with your sibling
24:08 - A moment where you realized the future is here
25:15 - What Daniela’s reading
25:54 - Interest in East African governments and advising them on AI
27:29 - Incorporating as a PBC was the right move
28:25 - Deciding on how to structure their hardware
Read the full transcript
Christina:
Welcome to Frameworks for Growth. I'm Christina Cacioppo, co-founder and CEO of Vanta, and I'm here with Daniela Amodei, President and co-founder of Anthropic. Prior to starting Anthropic in 2021, you were VP of Safety and Policy at OpenAI, where you led policy and engineering teams; before that, you were an early employee at Stripe, where you led the risk and recruiting teams; and you started your career in international development. Thanks so much for being here today.
Daniela:
Thank you so much for having me.
Christina:
So Anthropic describes itself as an AI safety and research company, which is purposely different from a standard technology company. What does that mean to you all in practice?
Daniela:
So first of all, it's so great to be here today. Thank you for taking the time to chat with me. We really focus on how to develop our tools and systems in a way that is robustly safe across our stack. What I mean by that is we apply technical training techniques to how we actually train our models to ensure that they are aligned with human values. We use a technique called constitutional AI that helps us imbue Claude with a sense of ethics. A number of different documents actually went into training Claude with constitutional AI, including things like the UN Declaration of Human Rights. In terms of how we ensure the technology is safe when we actually release a new model or a new set of products, we have a framework called the Responsible Scaling Policy. Our Responsible Scaling Policy is essentially a set of guidelines that indicate how and when we will release models to ensure that they're safe. We use a system of AI safety levels. There are five of them, and they're actually modeled along the lines of the biosafety levels used by people who do research with biological agents. Really, what we look for is defining a concrete set of safety and security measures for each of those levels.
Christina:
Very neat. And you're touching on some of this, but Anthropic is both a research lab doing deep technical research and a company trying to commercialize that technology. Most startups aren't trying to do serious technical research and commercialize at the same time. How do you balance that tension?
Daniela:
I think this is one of the most interesting and unusual things about running a company like Anthropic: we're almost part research lab, part tech startup, and we have a huge emphasis on our public benefit mission as well. Really thinking about how to balance all three of those things in service of the mission, and in providing value for our customers, is one of the great joys and excitements of getting to work at a place like Anthropic. In terms of the research-to-product balance, what we've found is that collaboration between product and research helps make both of those teams much stronger. In particular, so much of what we see is that as new research capabilities are developed, they actually open up new avenues in the product that wouldn't have existed before the technology itself advanced. A great recent example is that Claude has been particularly useful for developers writing code. Many of the products we've developed over the past two to three months have been geared around making that process easier for software developers, whether they're at a big enterprise, a mid-market company, or a budding startup. If we hadn't known the models were going to be really good at coding, we wouldn't necessarily have designed parts of our product that way. So in some ways there's this iterative process: as we get more customers using us for coding, we're also helping to train the models to make them stronger at coding, and then providing more product support in that area.
Christina:
Makes a lot of sense, and we use a lot of that at Vanta too.
Daniela:
That’s great, glad to hear. Happy customer.
Christina:
Very much so. Curious to talk about public benefit corporations. Most startup founders do not decide to become public benefit corporations, especially at incorporation. Should they?
Daniela:
When we were starting Anthropic, we actually spent a reasonable amount of time thinking about what kind of structure we wanted to incorporate as. And we had a little bit of a benefit in having seen different models up close. Some of what we were thinking when we incorporated as a PBC was: what is a practical way to balance the commercial interests we foresaw we would have with the public benefit intentions behind founding a company like Anthropic, one so invested in topics like safety? What we ultimately decided was that public benefit corporations are a really nice blend of those two areas. A PBC is a C corporation, so it's structured in most ways very similarly to a standard Delaware C corporation, with one exception: we have the ability to balance commercial needs against public benefit interests. That felt to us like a really pragmatic approach to what we were trying to do. So I would say, depending on the goal of the startup being founded, public benefit corporations are worth looking at if there's a component of your business that really aims to balance the public interest with some of the commercial activities a normal business engages in.
Christina:
Well, between OpenAI and Anthropic, you've helped build two of the iconic AI companies of our generation. I'm curious, when you think about literally building these companies, the people and the hiring, are there any hiring lessons, or maybe things you learned at OpenAI, that you've applied at Anthropic?
Daniela:
This is probably not surprising coming from an early employee and founder, but so many founders and early employees talk about culture and how important it is when you're growing a company, particularly one that's scaling really rapidly. And I just want to echo that: I think the most valuable currency you have when you're a small, quickly scaling company is the culture of that company. To me, culture is really an expression of the company's values. It can feel really hard and very fuzzy to pin down what your culture and your values are. You go through the process, you write the mission statement, you write the values, you run the culture interviews, and it never completely captures the feeling you have when you walk in the doors as an employee and think, I know what it feels like to work here. But I think it's worth really investing in that process early and saying: okay, it can never fully be quantified, but if I had to quantify it, how would I describe our culture? What are the core values that underpin who we are as an organization, and how do we make sure the people who come to the company are on board with that? Probably the biggest lesson I've learned at Anthropic is that you should be as open and communicative about what your culture is as possible, because the people who are excited about that set of values and that vision are going to be attracted to you. They're going to say, wow, we love that you're a public benefit corporation, that you're balancing these two things, that you care about this set of topics. For people for whom that's not their cup of tea, it's much better to learn that early; you want them to read your website and say, well, that sounds nice, but it's not really what I'm interested in. So I think it's this delicate balance of wanting to attract the right kind of people while placing an almost extreme emphasis on ensuring cultural alignment in your early employees.
Christina:
And how do you talk about the values in a way that is clear? Does that make sense? Because I think the critique of startup values is that they often all sound the same. What's the Anthropic spin on them, where you feel like you're just clear with people about what's important to you all?
Daniela:
Yeah. Number one, I would say if your values are not creating some tension, they're probably too bland, right? It's like, be kind, be direct. Everyone wants that. So if there's not an intentional tension to them, they're probably not specific enough. The second is that you need to be able to back up any value with 10 examples off the top of your head. A value that we have is "do the simple thing that works." That comes mostly from our research team saying, we're not actually here to have a bunch of fancy algorithms for no reason. We're really trying to make the models as useful as possible for people, and there are no extra points for being complicated, writing fancy documents, or overthinking things. But I think that value applies outside of just research for us. Off the top of my head, I could explain how we use it not just in research, but in product, in engineering, in our office selection, in how we set up our off-sites. Anytime a value is one where you can point to examples of it really easily, it's more likely to be sticky, because people will run into it and feel that value.
Christina:
Yeah. Okay. Actually, building off of that, and for the founders watching: when you're making an office choice or a product development decision, how much do you tie that to your values? You could rattle off those examples. How do you get it so the 200th person hired at Anthropic can also pull up those examples?
Daniela:
Yeah. Number one, I actually think writing them down and talking about them in public forums within the company is really valuable. So at our all hands, we try to mention a value every time. We don't always do it perfectly, and you don't want to shoehorn it in, but it's about really emphasizing the values and making the language feel comfortable. You have to make sure it actually resonates with people; otherwise it feels like painful propaganda. But if it does resonate with people, it's something they volunteer themselves. I will hear other employees in the company just say, well, we were considering hosting this event off-site, but we decided to host it at the office because we wanted to do the simple thing that works. I always light up when I hear a version of that, with that value or a different one, because it means this is actually something that's practical for people and applies to them. The other thing I would say is, don't be afraid to retire values if they're not working for you. Sometimes something feels really right to you as the founder, but it's just not sticky. Maybe you're just saying it the wrong way; it's not necessarily that the underlying core of it is wrong. Just feel comfortable iterating until someone says, actually, that lands with me better. People should be meme-ifying the values themselves. Another thing I think is interesting is that some of our values didn't come from us, the founders; they came from the company. Someone just started using a term, and we were like, that's actually a value of the company, that's great. I think that helps employees feel like they have ownership over the values as well. It's not something being forced on them.
Christina:
I love the meme-ifying of values. Any good stories there you can share?
Daniela:
There are some really funny things that came up in our earlier days around swag. At an offsite we had maybe two or three years ago, Dario was talking about scaling laws and how quickly everything was going to grow in AI. Even for those of us who really believed that, looking back two years later, it's just so much crazier to see the rate of growth that's been happening in artificial intelligence and the adoption curve. And at the end of his talk, or maybe in the middle, he said something like, and things are never going to be chill again. They'll never be chill. And now we have these cute sweatshirts; I have one on my toddler. We have a little kid version that says "things will never be chill again," and we give them to new parents for their babies. So sometimes these things just emerge. Dario wasn't intending for that to be a meme, but people thought it was so funny, and it felt so accurate, and then there were all these nice ways we could apply it within the business.
Christina:
I like that. Any values you've retired over the course of the company?
Daniela:
There are a few. I mean, "move fast and save things" I love, but it felt a little too cheeky for us. We just ended up adjusting that one because it was sticky sometimes, but not everyone loved it. And if not everyone loves it, people are not going to use it.
Christina:
Makes sense. Yeah. Outside of the values, and outside of the true platitudes like "be kind" and "be direct," what's a non-obvious trait you look for when hiring?
Daniela:
I think one of the most important qualities, especially at Anthropic, is a willingness to talk about a wide range of topics respectfully. We work on a cutting-edge frontier technology, and a lot of the things that bump into it can feel complicated or controversial, right? AI has incredible potential to help people. It also has incredible potential to be misused if we're not careful. We have to address those challenges at Anthropic; we view that as our core job. So people need to be willing to have respectful disagreements and to be very open-minded, to say: we're coming from completely different opinions about this topic, but I have to be able to engage with you respectfully on it. It's a very particular skill to actually know how to do that, especially when it's about things that feel so fundamental. That is something we look for, and it's something we assess in our culture interview. We ask people: what are some unusual beliefs that you hold? How have you defended those beliefs in situations that were uncomfortable for you, because you felt it was the right thing to do? It's not that we're looking for a particular belief system. We're looking for the ability to say, hey, I felt this was the correct thing and maybe it wasn't popular, but I really stood by it, or, I changed my mind because I realized I wasn't right about this.
Christina:
Very neat. When you run that interview, how do you prep people for it?
Daniela:
Yeah, it's a great question. It might have changed since I last gave it, which is a while ago now. But we give people a heads up. We actually give a little bit of a preamble about why we're asking these questions, to say: this is supposed to be bidirectional. We're really trying to show you what Anthropic is like and some of our values, and part of why we've chosen these questions is to give you a little insight into what it's like to work here. For the vast majority of these questions, there's no right or wrong answer. It's not that you have to name a certain type of topic as a place where you held your ground; we really want to understand your thought process, and we want you to be honest. So you don't have to prep for it. We're just trying to understand how you think.
Christina:
Yeah. I asked because we have a Vanta version of that interview, a different setup, but the same idea. In the early days we used to not prep people for it, and they would generally get quite confused by what was happening. That's why I asked. You don't want to lead them on, but it helps to say, here's why we're asking.
Daniela:
Yes. I think the context is so helpful, so they're not wondering, why are you asking me this question?
Christina:
Exactly. Building on hiring: I imagine when you're recruiting folks across Anthropic, engineers in particular, you want to balance thoughtfulness and debate with keeping simple things simple. How do you tease that out, especially with engineers?
Daniela:
I think for both engineering and research, we are always in this really interesting dance of deciding when we're investing in the fundamental research and engineering work we need to do, and when we're just launching and iterating on something. That's similar across a lot of companies, but at a place like Anthropic it's particularly challenging, because sometimes investing in fundamental research that doesn't look like it will have immediate returns for customers is the best way to create breakthroughs. I wish I had a perfect formula; it's much more of an art than a science. But at least on the research side, we really try to divide our time between optimizing particular qualities in the model, tools, or systems and some amount of open, exploratory research time for our team. On the engineering side it's similar. We want to give people some scope to really rethink and reimagine what products we're building and why. And that's highly related to the fact that, because the research is developing so quickly, as in that coding example, you don't always know what's coming down the pike, and things can change on a dime. You can suddenly say, well, there's this incredible new use case, and we should actually build a product around it. So in some ways we always want to give people the space to reimagine how we're even providing value to our customers. But we also have to set a product roadmap, build toward it, and make sure we're doing all of the very important but sometimes unsexy engineering work that needs to happen. I would say it falls somewhere around 60-40, or 40-60, or 50-50, with periods where we're really sprinting toward a particular release we've been working on for a long time. But I do think a little bit of that open-ended, exploratory mindset is an important quality for us.
Christina:
And then how do you bake that into your processes, or do you not? Google famously had 20 percent time, which got copied widely and then faded. How do you leave time for exploration while still delivering the stuff you know customers want and that you may have committed to?
Daniela:
We lean pretty heavily on team rotations. I've often found it's actually easier to have a group of people that's just in exploratory mode, because it's easier to get into that flow than if you're trying to split your own time 80 percent versus 20 percent. Sometimes that works, and I think it's true for some employees. But in the majority of cases it's better to say: this is purposely an exploratory team doing exploratory efforts, and that is a team building a product or a research workflow that's pretty well defined and executing on it. The latter isn't necessarily any less exciting, because there's a lot of work that needs to be done. And oftentimes people enjoy moving between those two different states of work and flow. I think it's also a more interesting way for employees to learn, because they get to see both the value of exploration and the impact that exploration can later have on our customers.
Christina:
We've talked a lot about AI. Looking ahead, what's something about the future of AI that you think is widely misunderstood today?
Daniela:
I think it's the degree to which the rollout and implementation of different use cases, even across different sectors, is going to be very uneven. In some pockets of the world, it's impossible to imagine certain companies not leveraging AI today. But there's a whole swath of the world that has never used AI, has never thought about how to use AI, and might not for many more years. So something that's really interesting to me is thinking about how this will permeate the economy: at the consumer level, at the application layer, at the foundational infrastructure layer, in enterprises, in mid-market businesses. I don't think anyone can really predict the ways the technology is going to be adopted, or where, or how, or when. I think that's going to be a really interesting, and maybe a little bit chaotic, period as different parts of the world start to leverage AI more or less.
Christina:
Any surprising places that leverage it more today than the you of five years ago would have guessed?
Daniela:
I've been surprised by the degree to which many enterprise customers have gotten on board with AI and implemented it very quickly. There's sometimes a slightly unfair characterization of enterprises as very slow. Our experience has been one of huge effort, enthusiasm, nuance, and real familiarity with the technology at some of the biggest businesses in the world. I've been very impressed by their ability to understand the value AI can provide within their organizations and their ability to adopt it and move quickly.
Christina:
What do you attribute that to?
Daniela:
I think the fundamental power of the technology is quite hard to miss. Many of the most successful businesses in the world got there by seeing what the next wave was going to be, and I think it's quite obvious that AI is a big deal. These organizations have been able to say, wow, this is actually really important and we need to find a way to integrate it, even if we don't exactly know how, and then find a company to take them through the process. Driving that kind of top-down change from a business perspective isn't easy. But when there's enough institutional will, I've been so impressed to watch so many of these customers really quickly transform so many of their different business processes and workflows.
Christina:
Very neat. So we first met probably 10 years ago, in 2014 or 2015, and we're sitting here in 2025. Think about 2035, which probably feels forever away, the way 2014 does now. What do you think will be true of AI and the world then, in 2035?
Daniela:
I think in 2035 the world is going to look really different, in a lot of ways, of course. But my expectation is there will be a lot of places where it will be so hard to imagine how that pocket ever worked without artificial intelligence. Just like today, it's really hard to imagine life without so many of the baseline technologies we take for granted from 10 or 20 years ago. Internet search wasn't very common even 20 years ago, maybe 25. It's impossible to imagine: how did people get places without Google Maps? How did you know where to go? I think there are going to be a lot of moments like that with artificial intelligence, where you'll say, well, how did I possibly shop for something on the web by myself? Obviously an AI assistant is just going to go find the things I want. I don't know exactly what it will look like. But I also think a lot of things will look surprisingly the same. If you look back 20 or 25 years, yes, it's much easier for us to find information and get places, but there are fundamental things about how humans engage and interact together that I don't think have changed very much at all. So my expectation is we'll see another round of that, just maybe on a slightly faster cycle than, say, the first round of the Internet wave.
Christina:
Makes sense. Somewhat famously, you co-founded Anthropic with your sibling. Any surprising parts of that?
Daniela:
Well, I had great inspiration from Stripe, because I saw John and Patrick co-found a company together up close. I'm really fortunate that Dario and I have extremely complementary skill sets. It's actually just been really fun. We've always wanted to work on something together; we worked together at OpenAI, but since we were little kids we've wanted to build something together. For me, it's a huge privilege to get to work with him every day. Our ability to tackle problems together from a left-brain, right-brain perspective is very unique. I've seen many great partnerships like that in organizations and businesses before, but there's something very special about getting to do it with someone you've known your entire life.
Christina:
Any good pre-Anthropic projects? Whether it's a pillow fort or ganging up on your parents?
Daniela:
I wish I had something so tangible. Well, he taught me to drive, actually. That's a good one, though I feel like that was more one-directional. As for collaborative projects we did together: we really like to play video games together, so we've done a lot of building that way.
Christina:
That's very neat. You've been watching AI for the last decade. Do you remember a moment where you saw something, maybe on a screen, maybe in real life, and thought: this is the future?
Daniela:
It's hard to think of a particular moment, but probably unsurprisingly, GPT-2, when I was pretty new at OpenAI and we were working on it. Even though looking back on it now it's hilariously bad, just seeing an artificial tool respond so convincingly on such a wide range of topics was striking: math and writing, and there was even some French in there. It must have been in its training data, but it could speak some French. It was pretty crazy at the time to see it, especially so up close, and to say, wow, how is it possible that this model can do all of these things? That, for me, felt like a very wow moment. There have of course been many more impressive things since then, but there was something about that early phase that felt very interesting.
Christina:
What are you reading these days?
Daniela:
When I have time! I'm a big science fiction reader, but I've actually been reading a lot about mid-20th-century Europe, the World War I and World War II period, which I'm very curious about. I'm reading a book now called The World of Yesterday, and it's very interesting to think about how quickly things can change, from a sociopolitical perspective, but also from an economic and technological perspective. It's wild to me to think that was almost 100 years ago now, but having access to a lot of really interesting source material from that period has been really interesting.
Christina:
Very neat. I think we both share an interest in East African politics, which is not something I can say to most people I interview.
Daniela:
It’s very unusual, yes.
Christina:
But if, for some reason, Anthropic disappeared tomorrow, in a good way, and you had a new job advising an East African government on AI, what would you be saying?
Daniela:
We're actually very interested in working in this exact area. Some of our work in beneficial deployments is about how we ensure that parts of the developing world are not left behind in this AI transition. Number one, I would say there are so many potential applications of this technology that could be helpful, just as we tell enterprise businesses, so we should pick one place to start. AI can often feel extremely overwhelming; it does to us too. Where do you begin? But I think the potential for artificial intelligence to revolutionize mass education in particular is really vast. There could be an incredible opportunity in partnering with governments around the world to figure out ways to provide access to basic information, more complex information, and AI tutors to a really wide range of populations, across all different types of languages too. These models speak every language at an incredibly high level of fluency, including dead languages and rare languages. The ability to provide that level of education to people is really unique, and I'm not totally sure we've even started to crack the process of realizing it.
Christina:
Very neat. Thinking back since founding Anthropic, what's something that at the time felt like a really big deal and, in retrospect, was just kind of ho-hum?
Daniela:
Well, there are some funny things, like which office we should start in. We were trying to think about where to put our home base, and we picked a really nice office, but we grew out of it much faster than we thought we would, in about a year and a half. And I don't want to say it's ho-hum, but we thought a lot about the incorporation question: should we be an LLC? A C corporation? A nonprofit? A PBC? In retrospect, it was so obvious that PBC was the right decision for us, but it took us a month or two of discussion before we formally decided what to incorporate as. Looking back, it doesn't feel like that should have been a difficult problem at all, but we just didn't have the foresight at the time to know what the shape of the organization would be.
Christina:
What about the flip side? So something that seemed kind of ho-hum at the time and then in retrospect was actually kind of an important door to walk through.
Daniela:
Interestingly, I think it was how we structured our hardware. It's more of a technical thing, but it came down to deciding to start building on one platform versus another and to scale up our work there. I actually think we made great decisions, but at the time we were just like, okay, we'll start building on this thing. That part of the stack has developed so quickly, and I don't know that we thought about it that hard in our initial decision-making.
Christina:
Well, it has been wonderful talking to you today. Thank you so much for the time.
Daniela:
Thank you. It was great to be here.




