You're one of the most successful investors that a lot of people have probably never heard of. AI is the only market where the more I learn, the less I know. In every other market, the more I learn, the more I know, the more I'm able to predict things. And I can't predict anything anymore. What scares you about the future? That's a big question.
I think in a couple years we'll start thinking about it as we're selling units of cognition. AI is dramatically under-hyped, because most enterprises have not done anything in it, and that's where all the money is, all the changes, all the impact is, all the jobs, everything. The people that I know who've been very successful and driven solely by money end up miserable, because that money... it's just, what do you do? What fulfills you? What are the most common self-inflicted wounds that kill companies?
I think that... is the next wave. I think it's going to be an ongoing wave of... And that's coming. And that hasn't even happened. In a world where knowledge is power, this podcast is your toolkit for mastering the best of what other people have already figured out. My guest today is Elad Gil, who has had a front row seat to some of the most important technology companies started in the past two decades.
He invested early in Stripe, Airbnb, Notion, Coinbase, Anduril, and so many others. He's also authored an incredible book on scaling startups called High Growth Handbook. In my opinion, he's one of the most underrated figures in Silicon Valley. In this episode, we explore how he thinks about startups, talent, decision-making, AI, and most importantly, the future of all of these things.
We talk about the importance of clusters, why most companies die from self-inflicted wounds, and what it really means to scale a company, and importantly, what it means to scale yourself. You've had a front row seat at some of the biggest, I would say surprises in a way, like Stripe, Coinbase, Airbnb, when they were just ideas. What was the moment where you recognized these were going to be outliers?
So all three of those are very different examples, to your point. I invested in Airbnb when it was probably around eight people. Stripe was probably around the same size. And then Coinbase I only got involved with much later, when it was a billion-dollar-plus company. And even then, I thought there was enormous upside on it, which luckily has turned out to be the case. I think really the way I think about investing in general is that there's two dimensions that really matter.
The first dimension is what people call product market fit, or is there a strong demand for whatever it is you're building? And then secondarily, I look at the team. And I think most early stage people flip it. They look at the team first and how good the founder is. And obviously, I've started two companies myself. I think the founder side is incredibly important and the talent side is incredibly important.
But I've seen amazing people get crushed by terrible markets, and I've seen reasonably mediocre teams do extremely well in what are very good markets. And so in general, I first ask, do I think there's a real need here? How is it differentiated? What's different about it?
And then I dig into, are these people exceptional? How will they grow over time? You know, what are some of the characteristics of how they do things? Let's get into people second, but how do you determine product market fit in a world
where a lot of people are buying product market fit almost through brute force or giving away product. Yeah, there's a lot of signals you can look at, and I think it kind of varies by type of business. Is it a consumer business versus enterprise versus whatever?
For things like consumer businesses, you're just looking at organic growth rate and retention. Are people using it a lot? Are they living in it every day? That sort of thing. That would be early Facebook, right? The usage metrics were insane. And then for certain B2B products, it could be rate of growth and adoption. It could be metrics like what people call NDR, net dollar retention, or other things like that. Honestly, if you're investing before the thing even exists in the market,
then you have to really dig into how much do I believe there's a need here, right? Or how much is there a customer need? So I invested in Rippling and other related companies before there was anything built, right? Under the premise that this is something that a lot of people want. And Notion is the same thing. Actually, Notion was a rare example where I did it as a personal investment. I met Ivan, who's the CEO over there. And everything about him was so aesthetically cohesive
in a very odd way: the way he dressed, his hairstyle, the color scheme of his clothes, the color scheme of the app and the pitch deck. The only other person I've seen that with is Jack Dorsey, who started Square and Twitter. And there was this odd, almost pure embodiment of aesthetic. And I just thought it was so intriguing and so cool. And having only ever seen two people like that, I had to invest.
And it was just this immense consistency. It was very weird. And you see that, like you go to his house and it feels like him. The company feels like him. Everything feels like him. It's fascinating. He's done an amazing job with it. It almost stands out to the point where you think it's manufactured. I think it's genuine. I think it's almost the opposite. You feel the purity of it. You're like, oh my gosh, there's a unique aesthetic element here.
And that probably reflects some unique way of viewing the world or thinking about products or thinking about people and their usage. Let's come back to outliers. So that's product market fit outliers; how do you identify an outlier team? Yeah, you know, I think it really depends on the discipline or the area. For tech, I think it's very different than if you're looking in other areas.
For an early tech team, I almost use this Apple framework of Jobs, Wozniak, and Cook. Steve Jobs and Steve Wozniak started Apple together. Steve Jobs was known as somebody who really was great at setting the vision and direction, but also was just an amazing salesperson. And selling means selling employees on joining you. It means raising money. It means selling your first customers. It's negotiating your supply chain. Those are all aspects of sales in some sense, or negotiation.
And so you need at least one person who can do that unless you're just doing a consumer product that you throw out there, right? And it just grows and then people join you because it's growing. Then you need somebody who can build stuff and build it in a uniquely good way, and that was Wozniak. The way that he was able to hack things together, drop chips from the original design of Apple devices, etc., was just considered legendary.
And then as the thing starts working, you eventually need somebody like Tim Cook who can help scale the company. And so you could argue that was Sheryl Sandberg in the early days of Facebook, who eventually came on as a hire and helped scale it. And Zuck was really the sort of mixture of the product visionary, the salesperson, et cetera. Why did all these people concentrate in San Francisco, almost all in California? How did that happen, where you had Apple, you have Stripe, you have Coinbase, you have Facebook? Walk me through that. We were talking a little bit about this before we started recording, about clusters of people. Yeah, it's really fascinating, because if you look at almost every major movement throughout history...
And that could be a literary movement, it could be an artistic movement, it could be a finance movement, economic schools of thought. It's almost always a group of young people aggregating in a specific city, who all somehow find each other and all start collaborating and working together towards some common set of goals that reflect that. So there was, you know, a famous literary school in the early 20th century in London.
That was, I think, Virginia Woolf and John Maynard Keynes and E.M. Forster and all these people who kind of aggregated and became friends and started supporting each other. Or you look at the Italian Renaissance. Similarly in Florence, you have this aggregation of all these great talents, all coincident in time with each other. Fauvism or Italian futurism, or Impressionism in Paris in the late 1800s. And so that repeatedly happens for everything.
And similarly, that's happened for tech. And even within tech, we've had these successive waves, right? Really, the founding story of Silicon Valley goes back to the defense industry and then the semiconductor industry. Defense was HP and other companies starting off in the 40s.
You then ended up with Shockley Semiconductor and Fairchild Semiconductor and the early semiconductor companies in the 50s and 60s, and that kind of established Silicon Valley as a hub. And as things moved from microprocessors to computers to software, people just kept propagating across those waves from within the industry. So one big thing is just you have a geographic cluster.
And you have that for every single industry. You look at wineries, and they're clustered in a handful of places because of geography. You look at the energy industry; it's in a handful of cities. Finance is in New York and Hong Kong and London. So every single industry has clusters. Hollywood and Bollywood and Lagos in Nigeria are the main hubs for movie making in different regions.
So in Silicon Valley, obviously, we created this tech cluster, but then even within the tech cluster, there are these small pockets of people that I mentioned earlier that somehow find each other and self-aggregate. It's funny, I was talking to Patrick Collison, the founder of Stripe, about this. And he mentioned that when he was 18, he showed up in Silicon Valley as a nobody, right? Completely unknown, 18 years old, nobody's heard of him.
And during that six-month period that he was first here, he said he met all these people who are now giants of Silicon Valley. And it was this weird self-aggregation of people kind of finding and meeting each other and talking about what each other's working on. Somehow this keeps happening, and this happens through time. And then right now in Silicon Valley, it's happening in very specific areas. It's happening with all the AI researchers, who
all knew each other from before. They were in the common set of labs, they had common lineages. All the best AI founders, which is different from the researchers, have their own cluster. And all the SaaS people have their own cluster. And so it's this really interesting, almost self-aggregation effect of talent finding each other and then helping each other over time. And it's just fascinating how that works. How do you think about that in an era of remote work?
Remote work is generally not great for innovation unless you're truly in an online collaborative environment. And the funny thing is that when people talk about tech, they would always talk about how tech is the first thing that could go remote, because you can write code from anywhere and you can do it from anywhere. But that's true of every industry, right? You look at Hollywood.
You could make a movie from anywhere; you film it on location in different places anyhow. You could write a script from anywhere. You could edit the musical score from anywhere. You could edit the film from anywhere. So why is everything clustered in Hollywood? Nobody would ever tell you, oh, don't go to Hollywood, go to Boise, and you could work in the movie industry. Or finance.
You could raise money from anywhere, come up with your trading strategy from anywhere, and yet everything in finance is in a handful of locations. And so tech is the same way. And it's because there's that aggregation of people. There's the people helping each other, sharing ideas, trading things informally, learning new distribution methods that kind of spread, learning new AI techniques that spread.
There's money around it that funds it specifically, so it's easier to raise money. There's people who have already done it before who can help you scale once something is working. That's the common complaint I hear in Europe from our companies there: we can't find the executives who know how to scale what we're doing. Oh, interesting.
And so I do think there are these other sort of ancillary things that people talk about. The service providers, the lawyers who know how to set up startups, right? Or the accountants who know how to do tax and accounting for startups. Those things sound trivial, but they cluster. Most people think the key to a successful business is the product, but often the real secret is what's behind the product, the systems that make selling seamless.
That's why millions of businesses from household names to independent creators trust Shopify. I'm not exaggerating about how much I love these guys. I'm actually recording this ad in their office building right now. Shopify powers the number one checkout on the planet. It's simple, it's fast, and with ShopPay, it can boost conversion rates up to 50%. I can checkout in seconds. No typing in details. No friction. It's fast, secure, and helps businesses convert more sales.
That means fewer abandoned carts and more customers following through. If you're serious about growth, your commerce platform has to work everywhere your customers are. Online, in-store, on social, and wherever attention lives. The best businesses sell more and they sell with Shopify. Upgrade your business and get the same checkout I use. Sign up for your $1 per month trial at shopify.com slash knowledge project.
all lowercase. Go to shopify.com slash knowledge project to upgrade your selling today. shopify.com slash knowledge project. I think a lot about systems, how to build them, optimize them, and make them more efficient. But efficiency isn't just about productivity, it's also about security.
You wouldn't leave your front door unlocked, but most people leave their online activity wide open for anyone to see, whether it's advertisers tracking you, your internet provider throttling your speed, or hackers looking for weak points. That's why I use NordVPN. NordVPN protects everything I do online. It encrypts my internet traffic so no one, not even my ISP, can see what I'm browsing, shopping for, or working on.
And because it's the fastest VPN in the world, I don't have to trade security for speed. Whether I'm researching, sending files, or streaming, there's zero lag or buffering. But one of my favorite features, the ability to switch my virtual location. It means I can get better deals on flights, hotels, and subscriptions just by connecting to a different country. And when I'm traveling, I can access all my usual streaming services as if I were at home.
Plus, Threat Protection Pro blocks ads and malicious links before they become a problem, and Nord's dark web monitor alerts me if my credentials ever get leaked online. It's premium cybersecurity for the price of a cup of coffee per month. Plus, it's easy to use. With one click, you're connected and protected. To get the best discount off your NordVPN plan, go to nordvpn.com slash knowledge project.
Our link will also give you four extra months on the two-year plan. There's no risk with Nord's 30-day money-back guarantee. The link is in the podcast episode description box. A big part of Y Combinator is sort of like helping everybody with that stuff.
Yeah, well, Y Combinator is a great example of taking out-of-network people. At least that was the initial part of the premise, not the full premise, right? People like Sam Altman or others who were very early in YC came out of Stanford, which was part of the main hub.
But a lot of other people came out of universities that just weren't on the radar for people who tended to back things in Silicon Valley. And so, you know, the early Reddit founders went to East Coast universities. The Airbnb founders, two of them were out of RISD, the Rhode Island School of Design.
And so YC early on was very good at taking very talented people who weren't part of the core networks in Silicon Valley and basically inserting them into those networks and helping them succeed. Why do you think they're still relevant today? Why is YC still relevant today? I think they've just done a great job of building sort of brand and longevity. Gary, who's taken over, is fantastic, and so I think he brings a lot of that "let's go back to first principles and really implement YC the way that we think it can really succeed for the future" energy. And I think they do a really good job of two things. One is plugging people in, as mentioned. Particularly, if you're a SaaS company, you want to have a bunch of customers instantly; your batchmates will help you with that. But also, it teaches people to ship fast and kind of forces finding customers.
And so because you're in this batch structure and you're meeting with your batch every week and you hear what everybody else is doing, you feel peer pressure to do it, but also it kind of shapes how you think about the world.
what's important, what to work on. And so I think it's almost like a brainwashing program, right? Beyond everything else they do, which is great. It sets a timeline they have to hit, and it brainwashes you to think a certain way. One of the things that I see, which I think is maybe relevant, maybe not, you tell me, is I like how it brings people together who are probably misfits or outliers in their own environment
and then puts them in an environment where ambition is the norm. It's not the outlier to have ambition. Shipping is the norm. It's not the outlier to ship. And it sort of normalizes these things that maybe cause success or lead to an increased likelihood of success. It's actually a very interesting question of what proportion of founders these days are actually...
people who normally wouldn't fit in, right? So the sort of founder archetype of before was rebellious people or people who could never work for anybody else or whatever. And then as tech has grown dramatically in market cap and influence and everything else, it's inevitable that the type of people who want to come out here and do things has shifted. And then the perception of risk in startups has dropped a lot. And so I actually think the founder mix
has shifted quite a bit. There isn't as much quirkiness in tech. And during COVID, it was awful. It was very un-quirky. Because at that point, there was a zero interest rate environment. Money was abundant everywhere. And the nature of people who joined or who showed up shifted. And so I think we had two or three years where the average founder just wasn't that great on a relative basis to history.
And then as the AI wave was happening, you know, I started getting involved with a lot of the generative AI companies maybe three-ish years ago, maybe three and a half years ago. So before ChatGPT came out and before MidJourney and all these things kind of took off. And the people starting those companies were uniquely good. And you felt the shift. You went from these kind of plain vanilla, me too, almost LARPers.
to these incredibly driven, mission-oriented, hyper-smart, very technical people who wanted to do something really big. You felt it. It was a dramatic shift. And if you look at it, there's basically been three or four waves of talent coming through the AI ecosystem. And I should say gen AI, because before this whole wave, we had 10 years, 15 years of
other types of deep learning, right? We had recurrent neural networks and convolutional neural networks and GANs and all these things. And that technology basis fundamentally has different capabilities than this new wave. And then there's this paper in 2017 that came out of Google, "Attention Is All You Need," which introduced the Transformer architecture.
And that is the thing that spawned this whole wave of AI right now that we're experiencing. And so it's a new technology basis. We took a step function and we're doing new stuff that you couldn't do before on the old technologies. That whole wave led to this really interesting set of companies and the first people in that wave were the researchers because they were closest to it and they could see firsthand
what was actually happening in the technology and in the market, how people were using it. You know, the engineers at OpenAI used to query the model weights directly, which then eventually in some form became ChatGPT, right? They were doing it before it existed. There was also Meena at Google, which was basically an internal form of almost like ChatGPT. So they kind of saw the future, and they wanted to try and substantiate it. And you could argue that the same thing happened in the internet wave in the 90s.
All the people working at the National Center for Supercomputing Applications, like Marc Andreessen and others, saw the future before anyone else. They were using email before anyone else. They were browsing the internet before anyone else. They were using FTP and file downloads and sharing music files before anyone else.
And so they knew what was coming. They had a glimpse into the future. That's the old saying: the future is already here, it's just not evenly distributed. For AI, we had the same thing. We had these researchers who could tangibly feel what was coming. And so the first wave of AI companies was researchers. The second wave was infrastructure people, who were next closest. And in this current wave, we're now at the application people, the people who are building applications on top of the core technology.
What do you think is the next wave? I think it's going to be an ongoing wave of kind of everything, right? There's still a lot to build, but I think we'll see more and more application-level companies. We'll see fewer of what are known as foundation model companies, the people building the OpenAIs or Anthropics or some of the Google core technologies or xAI. There will be specialized versions of that, right? That's all the language stuff, right? It understands
what you say, and it can interpret it, and it can generate text for you and do all these things, right? That's all these LLMs, large language models. There's going to be the same thing done for physics and material science. We've already seen it happening in biology, right? So at that layer, there's a bunch of stuff. There's the infrastructure. What is the equivalent of cloud services?
And then there's the apps on top, and then in the apps you have B2B, and then you have consumer. And so I think we're going to see a lot of innovation across the stack, but I think this next wave is a mix of B2B and consumer, and then I think the wave after that is very large enterprise adoption. And so I think AI is dramatically under-hyped,
because most enterprises have not done anything in it. And that's where all the money is, all the changes, all the impact is, all the jobs, everything, right? It's the big 80-20 rule of the economy. And that's coming. And that hasn't even hit yet. Are there any historical parallels to anything that you can think of that map to artificial intelligence or AGI?
I think the thing that people misunderstand about artificial intelligence is that people are kind of viewing what you're selling as a cool tool to help you with productivity or whatever it is. I think in a couple years we'll start thinking about it as we're selling units of cognition. We're selling bits of person time, or person equivalents, to do stuff for us.
I'm going to effectively hire 20 bot programmers to write code for me to build an app, or I'm going to hire an AI accountant, and I'm going to basically rent time off of this unit of cognition. On the digital side, it really is this shift from you're selling tools to you're selling, effectively, white-collar work. On the robotic side, you'll probably have some form of, like, robot minutes or something.
You'll probably end up with some either human form robots or other things that will be doing different forms of work on your behalf. And, you know, potentially you buy these things or maybe you rent them. You know, it'll be interesting to see what business models emerge around it. What scares you about the future? That's a big question. Along what dimension? Wherever you want to take it. What scares you about AI? Do you have any fears about AI? I think that I have opposing fears.
In the short run, I worry that there's the real chance to kind of strangle the golden goose, right? I do think AI, and this wave of AI, is the single biggest potential driver of global advancements in health and education and all the things that really matter fundamentally.
And there's some really great papers from the 80s that basically show that one-on-one tutoring, for example, will increase performance by one or two standard deviations. You'll get dramatically better if you have a one-on-one tutor for something. And if you actually look through history and you look at how Alexander the Great was tutored by Aristotle and all these things,
There's a lot of kind of prior examples of people actively doing that on purpose for their kids if they can afford it. This AI revolution is a great example of something that could basically provide that for every child around the world, as long as they have access to any device, which is most people at this point, right, globally.
So from an education system perspective, a healthcare system perspective, it's a massive change. So in the short run, I'm really worried that people are going to constrain it and strangle it and prevent it from happening because I think it's really important for humanity. In the long run, there's always these questions of at what point
do you actually consider something sentient versus not? Is it a new life form? Like, is there a species competition? You know, there's those sorts of questions, right? In the very long run. Without robots, you could say, well, you just unplug the data center. Who cares? It doesn't matter. If you do have robots and other things, then it gets a little bit harder maybe. At what point do you think AI is going to start solving problems that we can't solve?
in the sense that a lot of what it's doing today is organizing logic at a human-level equivalent. It's already surpassed us on many things, right? Just even look at how people play Go now, and the patterns they learned off of AI, which can beat any person at Go. I mean, gaming is a really good example of that, where in every wave of gaming advancements you pitted AI against people.
People said, well, fine, they beat people at checkers, but they'll never beat them at chess. And then they beat them at chess and say, well, fine, chess, but they'll never beat them at go. They beat them at go, and they're like, well, what about complex games where there's bluffing? They'll never beat them at poker. And then Noam Brown had his poker paper.
And they said, well, okay, poker, but they'll never beat them at things like Diplomacy, where you're manipulating people against each other. And then, you know, a Facebook team solved Diplomacy, right? And so gaming is a really great example where you have superhuman performance against every game now.
And you see that in other aspects of things as well. I guess where my mind was going is in terms of mathematical problems. I mean, we've maybe solved a couple that we hadn't been able to solve, but we haven't made real leaps. Or biology or health or longevity. Like, here's, maybe not the solution to Alzheimer's, because that's like a big leap, but maybe it's like,
You're not looking in the right area. You need to research in this area more. Like, when is that sort of advancement coming? Yeah, I think it's a really good question. I mean, AI is already having some interesting advancements in biology, right? The Nobel Prize this past year, in chemistry, went to
Demis and a few other people who built predictive models using AI for how proteins will fold, right? And so I think it's already being recognized as something that's impacting the field to the point where it gets a Nobel.
The hard part is with certain aspects of biology, and protein folding is a good counterexample: there you actually have very good data. You had tens of thousands or maybe hundreds of thousands of crystal structures. You had solved structures for all these proteins, and you could use that to train the model. But if you look at it, about half or more than half of all biology research in top journals is not reproducible. So you have a big data problem. Half the data is false. It's incorrect.
And this is actually something that Amgen published a couple years ago, where they showed this because they weren't able to reproduce cancer findings in their lab. They were trying to develop a drug, and they were like, wait a minute, this thing we thought could turn into a drug isn't real. And so there's this really big replication issue. Is that part of the advantage for AI, then? Like, I'm thinking out loud here. Sure. Like, if I uploaded
all of the Alzheimer's papers to AI. And it would be like, these ones aren't replicable. There's mathematical errors here. This looks like fraud. But all of these things have generated future research. So what you're doing is you're being like, oh, you've spent billions of dollars on this; statistically, it's probably not going to yield results.
You should focus your attention here. And that would have a huge impact on... Yeah, I think there's almost like three different things that are mixed in here. One is just fraud. You know, you fudged an image or reused one or whatever. I think AI is wonderful for that. And actually,
if anybody who's listening to this wants to get sponsored, or maybe we should do a competition or something, I'm happy to basically back building fraud detectors using AI, or plagiarism detectors. You could do it for liberal arts as well as sciences, right? Yeah. And I bet you'd uncover a ton of stuff. Separate from that, there's people publishing things that are just bad. And the question is, is it bad because they ignored other data? Did they throw out data points? How would you know,
as an AI system that somebody threw out half their data to publish a paper. And so there's other issues around how science is done right now. Or you just rush it, and you have the wrong controls, and then it still gets published because it's a hot field. That happens a lot. If you look during COVID, there were so many papers that in hindsight were awful papers, but they got rushed out.
because of COVID. And unless somebody goes back and actually redoes the experiment and then publishes that they redid it and it didn't work, which nobody does because nobody's going to publish it for you, how do you know that it's not reproducible? And so that's part of the challenge in biology. And so the biology problem isn't, can an AI model do better? I'm sure it could. The biology problem is, how do you create the data set that actually is clean enough
and has high enough fidelity that you can train a model that then goes and cleans everything else up. And it's doable. All these things are very doable. You just have to go and do it, and it's a lot of work.
If you look at things like math and physics and other things like that, people are just starting to train models against that now. So I do think we'll, in the coming years, see some really interesting breakthroughs there. Do you think that'll be rapid or do you, like how will those breakthroughs happen?
Yeah, it's kind of the same thing. You kind of need to figure out what's the data set you're using, and what kind of model and model architecture you're using, because different architectures seem to work better or worse for certain types of problems as well. Like, the protein folding ones have three or four different types of models that often get mixed in, at least traditionally. A lot of them have moved to these transformer backbones, but then they're augmented by other things.
So it's a little bit of, like: do you have enough and the right data? Do you have the right model approach? And then can you just keep scaling it? Walk me through why I'm wrong here. What came to mind when you were saying this is, we're training AI based on data, so it's like, here's how we've solved problems in the past, this is how you're likely to solve it in the future. But if I remember correctly, DeepMind trained Go by just being like, here are the rules.
We're not actually going to show you games that people have played before. And that led to the creativity that we now see. Yeah, that's called self-play. And as long as you have enough rules, you can do it. You need a utility function you're working against, right? In the context of a game, it's winning the game. And there's very specific rules of the game. You know when to flip over the Go piece.
You know what winning means, right? And so it's easy to train against that because you have a function to select against. This game you did well. This game you did badly. Here's positive feedback or negative feedback to the model.
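To make that loop concrete, here's a toy self-play sketch in Python. It's purely illustrative, not how any lab actually trains game-playing models: two copies of the same policy play the game of Nim against each other, and the only training signal is who won.

```python
import random
from collections import defaultdict

# Toy game: Nim with 15 stones. Players alternate taking 1-3 stones;
# whoever takes the last stone wins. The win/loss outcome is the only
# training signal -- the "utility function" described above.
Q = defaultdict(float)      # learned value of (stones_left, move), shared by both players
EPS, LR = 0.2, 0.1          # exploration rate and learning rate

def pick(stones, greedy=False):
    moves = [m for m in (1, 2, 3) if m <= stones]
    if not greedy and random.random() < EPS:
        return random.choice(moves)                   # explore
    return max(moves, key=lambda m: Q[(stones, m)])   # exploit

for _ in range(50_000):
    stones, history, player = 15, [], 0
    while stones > 0:
        move = pick(stones)
        history.append((player, stones, move))
        stones -= move
        winner = player       # whoever moved last took the final stone
        player = 1 - player
    for who, s, m in history:
        reward = 1.0 if who == winner else -1.0       # positive or negative feedback
        Q[(s, m)] += LR * (reward - Q[(s, m)])

# After training this typically prints 3: optimal Nim play leaves a multiple of 4.
print(pick(15, greedy=True))
```

The point is the one being made in the conversation: because the game supplies an unambiguous win/loss signal, the system can generate its own training data indefinitely.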
They're starting to do that more and more. So if you look at the way people are thinking about models now and scaling them, there's three or four components to it. One is ongoing data scale. Second is the training cluster; people always talk about all the money they're spending on GPUs. The third is reasoning modules, and that's the new stuff from OpenAI in terms of o1 and o3 and all these things. There are other forms of inference-time optimizations and how you do them, and eventually some aspects of the self-play. And one of the places where that may really come into focus soon is coding,
because you can push code and you can see if it runs and you can see what errors are thrown. There's more stuff you can do in domains where you have a clear output you're shooting for and that you can test against. And there's rapid feedback, and that's the key: how quickly can you get feedback to keep training the system and iterating? What happens when I give an AI a prompt?
What happens on the inside of that? What's the difference between a good prompt and a bad prompt? Does it basically take my prompt and break it into reasoning steps that a human would use? Like, first I do this, second I do this, third I do this, and then I give the output. And then the follow-on to this is, what can we do to better prompt AI to get better outcomes? Yeah, great question. So a lot of the people who are working on agents have basically built what you're describing, which is something that will take a complex task, break it down into a series of steps, store those steps, and then go back to them as you get output. So you're actually chaining a model. You're pinging it over and over with the output of the prior step and asking it now to do the next step. So one approach to that is you literally break it up into 10 pieces.
If it's a simple problem and you're just like, write me a limerick with XYZ characteristics, then the model can just do that in a single sort of call of the model. But if you're trying to do something really complex, you know, book me a flight.
Or, find me and book me a flight to Mexico. It's like, okay, first I need to find the flight, and so that means I need to go to this website, and I need to interact with the website and pull the data, and I need to analyze that information, and then I have to figure out what fits with your trip, and then I go through the booking steps, and then I get the confirmation.
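As an illustration of that chaining pattern, here's a minimal sketch in Python. The `call_llm` function is a hypothetical stand-in for whatever model API you use; the point is that the harness, not the model, stores the plan and feeds prior outputs back in.

```python
# Hypothetical agent harness. `call_llm` is a stand-in, not a real API:
# the harness, not the model, holds the plan and the intermediate results.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your model provider of choice")

def run_agent(task: str) -> str:
    # One call to break the complex task into steps, which the harness stores.
    steps = call_llm(
        f"Break this task into a short numbered list of steps:\n{task}"
    ).splitlines()
    results: list[str] = []
    for step in steps:
        # Chain the model: each call gets the prior steps' outputs as context.
        context = "\n".join(results)
        results.append(call_llm(
            f"Task: {task}\nCompleted so far:\n{context}\nNow do: {step}"
        ))
    return results[-1] if results else ""

# run_agent("Find and book me a flight to Mexico")
# (A real version would also give the model tools: web browsing, booking APIs, etc.)
```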
So it really depends on what you're asking the model to do. When I think of a model, though, I don't think of an agent. I just think, why can't AI do that? Like, why do I need a specific type of AI to book a flight to Mexico? Why can't ChatGPT just do it? ChatGPT in its current form, or at least in the simplest form, is effectively interrogating a mix of a logic engine and a knowledge corpus. It's like a thing that will look at what it knows and, based on that, provide you with some output.
That's a little bit different from asking somebody to take an action. And that's similar to if I was talking to you and I said, hey, where's a nice place to go? And you said, oh, you should go to Cabo, or you should go to wherever, right? That's different from me saying, hey, could you get me there?
And you have to go to the computer and load up the website and book it for me. It's the same thing for AI. And so right now we have AIs that are very capable at understanding language, synthesizing it, manipulating it, but they don't have this remembrance of all the steps that they've taken and will take. And so you need to overlay that as another system on top of it.
And you see this a lot in the way your brain works, right? You have different parts of your brain that are involved with vision and understanding it. You have different parts of your brain for language. You have different parts of your brain for empathy, right? You have mirror neurons that help you empathize with somebody or relate to them. So your brain is a bunch of modules strung together to be able to do all sorts of complex tasks, be they cognitive or physical.
And we can assume that over time you end up with roughly something like that as well for certain forms of AI systems. How are you using AI today? I use it a lot. I use it for everything from, you know, like, I'll go to a conference and I'll dump the names of the attendees in and ask, who should I chat with based on these criteria? And can you pull background on them?
You know, obviously a lot of people use it for coding right now, or coding-related tasks. I use it for a lot of what are known as regexes, regular expressions. Like, if I want to pull something out of certain types of data, I'll do that sometimes. So there's all sorts of different uses for it.
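For instance, a model might hand back a snippet like this for that kind of extraction task (an illustrative example, not one from the conversation):

```python
import re

# Illustrative extraction: pull dollar amounts out of messy text -- the kind of
# "pull something out of certain types of data" task a model can write for you.
text = "Seed was $2.5M in 2019; Series A was $40M; Series B closed at $1.2B."
amounts = re.findall(r"\$\d+(?:\.\d+)?[MB]", text)
print(amounts)  # ['$2.5M', '$40M', '$1.2B']
```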
What have you learned about prompting that more people should know? I think a lot of people, and I'm by no means an expert, you know, there are these people whose job is called prompt engineering and that's all they do.
I think fundamentally a lot of it just comes down to: what are you specifically asking, and can you create enough specificity? And sometimes you can actually add checks into the system, where you say, go back and double-check this just to make sure that you didn't omit something. Because there are enough errors sometimes, depending on which model you're using and for what use case and everything else, that putting in simple safeguards of, hey, generate a table of XYZ as output,
but then go back and double-check that these two things are true, I think has helped me clean up a lot of things that would normally have been errors. It's almost like adding a test case. Yeah, yeah. Basically, if you think about it as like a smart intern, you know, often with your intern, you say, okay, go do this thing, but why don't you double check these three things about it?
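A minimal sketch of that safeguard pattern, again with a hypothetical `call_llm` stand-in: generate first, then make a second call asking the model to verify its own output against explicit conditions.

```python
# Hypothetical two-pass pattern: generate, then have the model double-check
# its own output against explicit conditions -- the "test case" idea.
def generate_with_checks(request: str, checks: list[str], call_llm) -> str:
    draft = call_llm(request)
    return call_llm(
        "Here is an output:\n" + draft +
        "\n\nGo back and double-check that each of these is true:\n" +
        "\n".join(f"- {c}" for c in checks) +
        "\nIf any check fails, return a corrected version; otherwise return it unchanged."
    )

# Example use, with verification conditions attached to a table request:
# generate_with_checks(
#     "Generate a table of XYZ as output.",
#     ["no rows were omitted", "the totals column sums correctly"],
#     call_llm,
# )
```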
And as the models get more and more capable, they'll be less like an intern and more like a junior employee, and then they'll be like a senior employee, and then they'll be like a manager. And as the models get better and better and the capabilities get stronger, you'll see all these other things emerge. Where do you see the bottlenecks today? And what comes to mind for me are different aspects of AI. Going all the way up the stack, you have electricity, you have compute.
You have LLMs. You have data. Where do you see the bottlenecks? Where's the biggest bang for the buck? What's preventing this from going faster? It's a really interesting question.
I think there's people who are better versed than I am in it because there's this ongoing question of when does scaling run out for which of those things, right? When do we not have enough data to generate the next versions of models or do we just use synthetic data and will that be sufficient? Or how big of a training cluster can you actually get to economically? How do you fine-tune or post-train a model and at what point does that not yield as many results? That said,
Each one of these things has its own scaling curves. Each one of these seems to still be working quite well. And then if you look at a lot of the new reasoning stuff that OpenAI and others have been working on, Google's been working on some stuff here as well, when you talk to people who work on that, they feel that there's still
enormous scaling left there, right? Because those are just brand-new things that just rolled out. And so these sort of reasoning engines have their own big curve to climb as well. So I think we're going to see two or three curves that simultaneously continue to inflect. Is this the first real revolution where incumbents have an advantage? And I say that because data costs money, compute costs money, power costs money.
And it sort of favors the Googles, the Microsofts, the people with a ton of capital. Yeah. I think in general, every technology wave has a differential split of outcome for incumbents versus startups. So the internet was 80% startup value. It was Google, it was Amazon, it was all these companies we now know a lot about. Meta, you know? And then mobile: the mobile revolution was probably 80% or 90% incumbent value, right? And so that was, mobile search was Google, and mobile CRM was Salesforce, and mobile whatever was that app you were already using. And the things that emerged during that revolution as startups were things that took advantage of the unique characteristics that were new to the phone. GPS, so you had Uber. Everybody has a camera, you have Instagram, et cetera, right? And so the things that became big companies in mobile that were startups were able to do it because they took advantage of something new that the incumbents didn't necessarily have any provenance over. Crypto was 100%, or roughly 100%, startup value. It's Coinbase and it's the tokens and everything else. So you kind of go through wave by wave and you ask, what are the characteristics that make something better or worse? And if you actually look at self-driving, which was sort of an earlier AI revolution in some sense,
The two winners, at least in the West, seemed to be Tesla, which was an incumbent carmaker in some sense by the point that they were willing to step out, and Google through Waymo. So two incumbents won in self-driving, which I think is a little bit under-discussed, because we had, like, two dozen
self-driving companies, right? Wouldn't that make sense, though, because they have the most data? In the sense of, like, Tesla acquires so much data every day, and now, the way that they've set up full self-driving, my understanding is
It's gotten really good in the last six months. One of the reasons is they stopped coding, basically, and they started feeding the data into AI and having the AI generate the next version effectively. Yeah, a lot of the early self-driving systems were basically people writing a lot of kind of edge case heuristics. So you'd almost write a rule. If X happens, you do Y or some version of that. And they moved a lot of these systems over to just end-to-end deep learning.
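To illustrate the contrast being described (hypothetical pseudocode, not any carmaker's actual stack): the old approach enumerates hand-written rules, while the end-to-end approach learns a single function from sensors to controls.

```python
# Hypothetical contrast, not any company's real code.

# Old style: hand-written edge-case heuristics -- "if X happens, do Y."
def heuristic_policy(obs: dict) -> str:
    if obs.get("pedestrian_ahead"):
        return "brake"
    if obs.get("lane_drift", 0.0) > 0.5:
        return "steer_left"
    return "cruise"

# New style: one end-to-end learned function from raw sensor data to controls,
# trained on fleet driving logs rather than enumerated rules.
# model = train_end_to_end(driving_logs)   # hypothetical trainer
# action = model(camera_frames)
```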
And so this modern wave of AI has really taken over the self-driving world in a really strong way. It's really helped these things accelerate, to your point. And so Waymo similarly has gotten dramatically better recently. So I think all that's true. I guess it's more of a question of when does that sort of scale matter, and why wasn't there anybody who was able to partner effectively with an existing automotive company, and why other things happened in the market. For this current wave of AI, it really depends on the layer you're talking about. And I think there's going to be enormous value for both incumbents and startups. On the incumbent side, it really looks like the foundation model companies are either paired up with or driven by incumbents, with maybe one or two exceptions. So, you know, OpenAI is roughly partnered with Microsoft, but Microsoft also has its own efforts. Google is its own partner in some sense, right? Amazon has partnered with Anthropic. Obviously, Facebook has Llama, the open source model. So I think that's true for three of the four. And then there's xAI, where it's Elon Musk's ability to execute in such an insane way that's really driving it,
and access to capital and all the rest. But if you look at it, and I wrote a blog post about this maybe two, three years ago, which is basically, what's the long-term market structure for that layer? And it felt like it had to be a monopoly or, you know, at most an oligopoly. And the reason was this point that you made about capital. Back then, it cost, you know, tens of millions to build a model. But if you extrapolated the scaling curve, you're like, every generation is going to be a few-X to 10X more. And so eventually you're talking about billions, tens of billions of dollars. Not that many people can afford it. And then you ask, what's the financial incentive for funding it?
And the financial incentive for the cloud businesses is their clouds, right? If you look at Azure's last quarter, I think it was like a $28 billion quarter or something like that. I think they said that 10 to 15% of the lift on that was from AI being sold on the cloud.
So that's what, one and a half to three billion a quarter, right? So the financial incentive for Microsoft to fund OpenAI is that it feeds back into its cloud. It feeds back in other ways too, but it feeds back into its cloud. And so I don't think it's surprising that the biggest funders of AI today, besides sovereign wealth, have been the clouds,
because they have a financial incentive to do it and people really miss that. So I think that that is part of what really helped lock in this oligopoly structure early is you had enormous capital scale going to a handful of the best players through these cloud providers and so the venture capitalists would put hundreds of millions of dollars into these companies. The clouds put tens of billions in, and that's the difference.
And I guess the optimism there is that I can go use the full scale of AWS or Azure or Google and just rent time, so I don't need to make the capital investments, I don't need to run the data center, I don't need to... Well, you could have done that either way, right? You didn't have to take money from them, because they're happy to have you as a customer.
That's what I'm saying, right? So the optimism is you can compete with them now because you're just competing on ideas. You have access to infrastructure. Yeah, and you would have done that no matter what, just given that everything moved to clouds, like these third-party clouds that you can run on. So that's enabling.
But at least for these sorts of language models, there's increasingly just a moat due to capital scale. Do you think that we just end up with, like, three or four, and they're all pretty much equivalent? Yeah, I'm not sure. I think you can imagine two worlds. World one is where you have an asymptote. Eventually things kind of all flatline against some curve, because you can only scale a cluster so much, you only have so much data, or whatever,
in which case eventually things should converge really closely over time. And in general, things have been converging faster than not across the major model platforms already. Or a second world is, if you think about the capability set built into each AI model: if you have something that's far enough ahead, and it's very good at code, and it's very good at data labeling, and it's very good at doing a lot of the jobs that allow you to build the next model really fast,
then eventually you may end up with a very strong positive feedback loop for whoever's far enough ahead that their model always creates the next version of the model faster than anybody else. And then you maybe have liftoff, right? Maybe that's something that ends up dramatically far ahead because every six months becomes more important than the last five years. And so there's another world you can imagine where you're in a liftoff scenario.
where there's a feedback loop of the model effectively creating its next version. So GPT-5 or 7 or whatever, GPT-7 would create GPT-8, which would help create GPT-9, which would even faster create GPT-10. And at that point, you have an advantage, but the advantage is expanding at the velocity at which you're creating the next model. Correct, because GPT-10 perhaps is so much more capable than 9.
When everybody else is at 9, it's already building 11. And it can build it faster or smarter, et cetera, than everybody else. And so it really comes down to what proportion of the model building task, or model training and building task, is eventually done by AI itself. At GMC, ignorance is the furthest thing from bliss. It's research. Testing. Until it results in not just one truck, but a whole lineup. The 2025 GMC Sierra lineup. Because true bliss is removing every shadow of a doubt. We are professional grade. Visit GMC.com to learn more. This episode is brought to you by State Farm. Knowing you could be saving money for the things you really want is a great feeling. Talk to a State Farm agent today to learn how you can choose to bundle and save with a Personal Price Plan. Like a good neighbor, State Farm is there. Prices are based on rating plans that vary by state. Coverage options are selected by the customer. Availability, amount of discounts and savings, and eligibility vary by state. What do you think of Facebook? They've spent, I don't know, 50, 60 billion, and they've basically given it away to society. Yeah, I've been super impressed by what they've done with Llama. I think open source is incredibly important. Why is open source important?
It does a couple of things. One is it levels the playing field for different types of uses of this technology, and it makes it globally available in certain ways. That's important. Second, it allows you to take out things that you may not want in there, because it's open weights and it's open source. So if you're worried about a specific political bias or a specific cultural outlook, you can address that. Because it's really interesting, if you look at the way people talk about norms and what should be built into models and safety and all the rest. It's like, who are you to determine all of global norms with your own values? That's a form of cultural imperialism, if you think about it. You're basically imposing what you think on everybody else.
And so open source models give you a bit more leeway in terms of being able to retrain a model or, you know, have it reflect whatever norms of your country or your region, or whatever lens on that you want to take. So I think it's also important from that perspective. As an investor, what's the ROI on a $60 billion, $100 billion open source model? How do you think through what Facebook is trying to do or accomplish? Is it just, like,
I don't want the competitors to get too far ahead, I need to... I don't know how Meta specifically is thinking about it, so I think I'd be sort of talking out of turn if I just made some stuff up. I think that in general, there's been all sorts of times where open source has been very important strategically for companies. And if you actually look at it, almost every single major open source company
has had a giant institutional backer. IBM was the biggest funder of Linux in the 90s as a counterbalance to Microsoft. And the biggest funders of all the open source browsers are Apple and Google with WebKit. And you just go through technology wave after technology wave, and there's always a giant backer. And maybe the biggest counter to that is Bitcoin and all the crypto stuff. And you could argue that they're their own backer through the token.
Bitcoin financially, effectively, has fueled the development of Bitcoin; it's kind of paid for itself in some sense as an open source tool, or open source form of money. You know, I don't know why AI would be different. I, a couple years ago, was trying to extrapolate who was the most likely party to be the funder of open source AI. And back then I thought it would be Amazon, because at the time they didn't have a horse in the race like Microsoft and Google, or maybe it would be NVIDIA.
And Meta was kind of on the list because of all the money they have, and their prowess in engineering, and FAIR, and they have a lot of great things, but they weren't the one I would have guessed as the most likely. They were on the list, but they weren't the most likely. And then there's other players with tons of money and extensive capabilities, and the question is, are they going to do anything? What does Apple do?
What does Samsung do? You know, there's like half a dozen companies that could still do really interesting things if they wanted to. And the question is, what are they going to do? How would you think about the big players and who is best positioned for the next two to three years? How would you rank them? In terms of AI or in terms of other things?
In terms of AI, yeah. Like, who's most likely to accrue some of the advantages of AI? Yeah, it's kind of hard, because AI is the only market where the more I learn, the less I know. And in every other market, the more I learn, the more I know, and the more I'm able to predict things. And I can't predict anything anymore. I feel like every six months, things change over so rapidly.
You know, fundamentally, there's a handful of companies in the market that are doing very well. Obviously, there's Google, there's Meta. There's OpenAI, there's Microsoft, there's Anthropic and AWS, there's xAI. You know, Mistral has done some interesting things over time. So I think there's, like, a handful of companies that are the ones to watch. And the question is, how does this market evolve? Does it consolidate or not? Like, what happens? How do you think about regulation around AI?
Yeah, so there's basically, like, three or four forms of AI safety that people talk about, and they kind of mix or conflate them. The first form of AI safety is almost what I call digital safety. It's like, will the thing offend you, or will there be hate content or other things? And there's actually a lot of rules that already exist around hate speech on the internet, or hate speech in general, or what's free speech or not, and how you should think about all these things.
So I'm less concerned about that. I think people will figure that out. There's a second area, which is almost like physical safety, which is, we'll use AI to create a virus, we'll use AI to derail a train, et cetera. And similarly, when I look at the arguments made about how it will create a biological virus, et cetera, et cetera: you can already do that, right? The protocols for
cloning and PCR and all this, it's all on the internet. It's all posted by major labs. It's in all the textbooks. That's not new knowledge that people can't just go and do right now if they really wanted to. So I don't know why that matters in terms of AI. And then the third area is sort of this existential safetyism, like AI will become self-aware and destroy us, right?
And when people talk about safety, they mix those three things. They conflate them, and therefore they say, well, eventually maybe something terrible happens here, so we better shut everything else down, while other people are just saying, hey, I'm worried about hate speech. And so I think when people talk about safety, they have to really define clearly what they mean and then they have to create a clear view of why it's a real concern.
It's sort of like if I kept saying, I think an asteroid could at some point hit the Earth, and therefore we better do X, Y, Z. We should move the Earth. At some point, these things get a little bit ridiculous in terms of safetyism. There's actually a broader question societally of like, why has society become so risk averse in certain ways and so safety centric?
And it impacts things in all sorts of ways. I'll give you a dumb example. After what age does the data suggest that a child doesn't need a special seat? They can just use a seatbelt. I think it's like 10 or 12, isn't it? Well, so in California, for example, the law is up until age 8. Okay. You have to be in a booster seat or a car seat or whatever.
If you actually look at crash data, real data, and people have now reproduced this across multiple countries, multiple time periods, it's the age of two. Oh, wow. So for six extra years, we keep people in booster seats and car seats and all of it, at least against the data, right? The Freakonomics podcast actually had a pretty good bit on this. And there's, like, multiple papers now that
reproducibly show this retrospectively. You just look at all the crashes. That's crazy. So why do we do it? Safety. But it's not safe. Exactly, but it's positioned as safe. As a parent, of course you want to protect your children. No, seriously, right? But then it has other implications. It's like you can't easily transport the kids in certain scenarios because you don't have the car seat.
You know, you can only fit so many car seats in a car and it's a pain in the butt. And do you upgrade the car if you want more kids? And can you afford it? And, you know, so there's all these ramifications. And it's because I think, A, it's lucrative for the car seat companies to sell more car seats for longer, right? You get an extra six years on the kid or whatever. Parents will, of course, say, I want safety no matter what. And certain legislators are happy to just, you know, legislate it.
I think there's lots and lots and lots of examples of that in society if you start picking at it. And you realize it pervades everything. It pervades aspects of medicine. It pervades things like AI now. It's everywhere. There's one in Ottawa that I see in the mornings. Within, I don't know, five or six blocks of a school, they basically have crossing guards everywhere now, even for high schools. Kids can't walk to school on their own. And you think, well, how do you argue with that, right? But I was thinking about this the other day, because I was driving and I got stopped by one of these people, and I realized we're just teaching kids that they don't even have to pay attention. They can look at their phone, the crossing guard's going to save them. And if the crossing guard's not there? We're not developing ownership or agency in people. How do you think about that? I think it's really bad for society at scale. I mean, it's kind of like... There was a different wave of this, you know, 10 or 15 years ago, with fragility and microaggressions, where everything can offend you and everyone is treated as super fragile, which I think is very bad for kids. And I think that has a lot of mental health implications. The wave we're in now is basically taking away independence, agency, and risk-taking.
I think that has some really bad downstream implications in terms of how people act, what they consider to be risky or not. and what that means about how they're going to act in life and also their ability to actually function independently. So I agree. I think all those things are things that we've accumulated over the last few decades that are probably quite negative.
You're one of the most successful investors that a lot of people have probably never heard of. One of the things that you've said is that most companies die from self-inflicted wounds and not competition. What are the most common self-inflicted wounds that kill companies? Yeah, I think there's two or three of them. It depends on the stage of the company. For a very early company, the two ways that they die is the founders start fighting and the team blows up.
or they run out of money, which means they never got to product market fit. They never figured out something they could build economically that people would care about. So for the earliest stages, that's roughly everything. Every once in a while you have some competitor dynamic, but the reality is
Most incumbent companies don't care about startups. And startups have five, six years before an incumbent wakes up and realizes it's a big deal and then tries to crush them. And sometimes that works. Sometimes you just end up with capped outcomes. So for example, you could argue Zoom and Slack got capped by Microsoft launching stuff into Teams in terms of taking parts of the market or creating a more competitive market dynamic for them.
The other types of self-inflicted wounds, honestly, sometimes people get very competitor-centric versus customer-centric. Go deeper on that. I mean, there's a lot of examples of that. Sort of like if you focus on your competitor too much, you stop doing your own thing. You stop building that thing the customer actually wants.
and you lose differentiation relative to your competitor, or you start doing things that can hurt your competitor but don't necessarily help you. And sometimes your competitor will retaliate. An example of that would be the pharmaceutical distribution world. Twenty years ago, there were roughly three players of any scale that really mattered, and they used to go after each other's market share really aggressively, which eroded all the pricing, which meant they were bad businesses. And at some point, I think one of them decided to stop competing for share and just protect itself. And then the others copied it, and suddenly margins went way up in the industry. They stopped being as focused on banging on each other and more just, let me build more services for my customers and focus on our own side. We're all going to win a lot more that way, right?
In some cases, yeah, if you have an oligopoly market, that's usually where it ends up. Eventually, this is why people are so worried about collusion, right? Eventually, the companies decide, hey, we should be in a stable equilibrium instead of beating up on each other and shrinking margins.
Scaling a company often means scaling the CEO. What have you learned about the ways that successful CEOs scale themselves, and the things that get in the way? Yeah, I think it's two or three things. One is figuring out who else you need to fill out your team with, and how much you can trust them, and all the rest. Part of it is that founder CEOs are very innovative. They always want to innovate, and so they reinvent things that they shouldn't reinvent. Sales, for example, is effectively process engineering that's been worked through for decades. You don't need to go reinvent sales; hire a sales team and it'll work just fine. So one aspect is getting out of your own way on reinvention. There are certain things you want to rethink, but many of them you don't.
Part of it is hiring people who are going to be effective in those roles, and more effective than you might be. Often you end up finding people who are complementary to you. Now, that really breaks down during CEO succession, because what happens is often the CEO will promote the person who's their complement as the next CEO, instead of finding somebody like them who can innovate and push on people and drive new products and new changes. And so often you see companies have a golden age under a founder and then decay. And the decay is because the founder promoted their lieutenant, who was great at operations or whatever, but wasn't a great product thinker or technology visionary like themselves. And so that's actually a failure mode for the longer term. You could argue Satya at Microsoft is a good example of somebody who has more of a founder mindset: I'm going to reinvent things, I'm going to rethink things, I'm going to do these crazy deals. They backed OpenAI at GPT-2, which was a huge risk. They've done all sorts of really smart acquisitions. So that's an example of a smart succession, finding somebody with a bit more of a product-founder mentality. Another way CEOs fail is they listen too much to conventional wisdom on how to structure their team. And really, the way you want your team to function at a large organization is based on the CEO. What does the CEO need? What complements do they need? What structure do they need? And if you were to pull out that person and plop in a different CEO, that structure probably shouldn't work, like half the time.
There's some types of people where there's lots of commonalities, particularly if it's people who came up the corporate ladder and they're all used to doing things the same way. But if you're more of a founder CEO,
and you're going to have your quirks, and you're going to have your obsessions, and you're going to have all these things that founders often have, you need an org structure that reflects you. Jensen from NVIDIA talks about this. The claim is he has 40 direct reports, and that he doesn't do many one-on-ones or things like that. The focus is more on finding very effective people who've been with him for a while and who can just drive things, right? And then he deep-dives into different areas. That's a very different structure from how Satya runs Microsoft, or how Larry Ellison has run Oracle over time, or these other giants of industry and management.
And so I think you really need an org structure that reflects you. Now there's going to be commonalities, and there's only so many reports most people can handle and all the rest of it, but I do think you kind of want to have the team that reflects your needs versus the generic team that could reflect anybody's needs.
Is that the problem with sort of a lot of these business leadership books that are written about a particular person and style that they have, and then people read them and they try to implement them, but it's not genuine to who they are?
I think that's very true, and it really depends on whether you're talking about the generic case: hey, it's a big company, a large company that's 100 years old and has been run a certain way. I wouldn't be surprised if you could roughly interchange the CEOs of a subset of the pharma companies and the org structures would probably still roughly work. They may not have the chemistry with the people, or the trust, but the org structures are probably reasonably similar. That's probably pretty different than if you looked at how Oracle's been run over time, versus Microsoft, versus Google, versus whoever. When you say that, the wording you used, that CEOs should pay less attention to conventional wisdom: do you mean that in the sense of, I guess, the nomenclature that Brian Chesky came out with, founder mode? Yeah, I think we lived through a decade or so, maybe longer, where a lot of forces came into play in the workplace that were not productive to the company actually achieving its mission and objectives. And a lot of that was all the different forms of politics, and bring your whole self to work, and all these things people were talking about. I don't want somebody's whole self at work. You know, I remember at Google, and maybe we should edit this part out, there was somebody who would show up in assless chaps every Halloween.
And you're like, I don't want to see that. Like, I'm in a work environment. Why is this engineer walking around like this? And then the second you start bringing kids to work, you're like, I sure as hell don't want this guy walking around, right? And that's bring your whole self to work. Like, why would you do that? You actually should bring your professional self to work. You should bring the person who's going to be effective in a work environment.
and can work with all sorts of diverse people and be effective, and doesn't bring all their mores and values and everything else into the workplace that don't have a place there. There's a subset of those that do, but many don't. We lived through a decade where not only were those things encouraged, but conventional executives brought that stuff with them. And I think it was probably bad for a lot of cultures. It defocused them from their mission. It defocused them from their customers. It defocused them from doing the things that were actually important. And the first person I remember speaking out against that in a very public and visible way was Brian Armstrong. And then Tobi Lütke followed him not long after. And they said, no, the workplace is not about that. It's about X, Y, and Z, and if you don't like it, basically leave. And was that the moment where we started to go back to founder mode, effectively?
I think it took some time. I think Brian was incredibly brave for doing that. He got a lot of flak for it. They tried to cancel him, aggressively, which was sort of the playbook, right? And this was happening inside of companies too. You'd say something and you'd get canceled for it. And so you couldn't have a real conversation around some of these things, and that just reinforced it. I think Brian stepping forward made a huge difference. To your point, Tobi did it really well too. I still sometimes send other people the essay he wrote about it. He had a few central premises: we have a specific mission and we're going to focus on that; we're not focusing on other things.
We're not a family, we're a team. Yeah. Right. A family is like, hey, your uncle shows up drunk all the time, and you kind of tolerate it because it's your uncle. If somebody showed up drunk at work all the time, you shouldn't tolerate that. You're not a family. You're a sports team. You're trying to optimize for performance. You're trying to optimize for the positive interchange within that team. And you want people pulling in the direction of the team, not people doing their own thing, which is what a family is, right? And so there were a lot of these conversations that were more like, it's a family, and bring yourself to work, and all the holisticness of yourself. And it's actually, well, no, you probably shouldn't show up at work drunk and look at bad things on the internet. You should focus on your job, and on good collaboration with your coworkers, and things like that. You're around a lot of outlier CEOs. Not only in the context of knowing them; you hang out with them. You spend a lot of time with them.
What are sort of the common patterns that you've seen amongst them? Are there common patterns or is everybody completely unique? But I imagine that at the core, there's commonality. Yeah, you know, this is something I've been kind of riffing on lately, and I don't know if it's quite correct, but I think there's like two or three common patterns. I think pattern one is there are a set of people who are, and by the way, all these people are like incredibly smart.
you know, incredibly insightful, et cetera. So they all have a few common things. But I do think there's two or three archetypes. I think one of them is just the people who are hyper-focused. They don't get involved with other businesses. They don't do a lot of angel investments. They don't, you know, do press junkets that don't make sense. They just stay on one track.
A version of that was Travis from Uber. I knew him a little bit before Uber, and I've run into him once or twice since then. He was always incredibly focused. He used to be an amazing angel investor, and I think he made great investments, but he stopped doing it with Uber and just focused on Uber. And as far as I know, he never sold secondary until he left the company, right? He was just hyper-focused on making it as successful as possible. So that's one class of archetype. There's a second class, which I'd view as people who are equally smart and driven, but a bit more polymathic, maybe that's the wrong word. They just have very broad interests, and they express those interests in different ways while they're also running their company. And often they have a period where they're just focused on their company, and then they add these other things over time. Examples of that: obviously Elon Musk is now that, right? Patrick Collison is another. He's running a biology institute called Arc, or rather Silvana and the other Patrick run it alongside him. Brian Armstrong is now running a longevity company in parallel to Coinbase, or he has somebody running it. So there are a lot of examples of people doing X2, X3, and doing it in other fields. Honestly, that's a little bit of a new development
relative to what you were allowed to do before, right? Because there are activist investors who try to prevent that, in public markets in particular, but also it was just a different mindset of how do I show impact over time. And are these people going from the first archetype, hyper-focus, to this, or were they always like that? I don't want to use the word dabbler, because it undersells how focused they are on their businesses, but were they always like that, and it just expresses itself differently as they get larger and scale? Or is it, no, we've gone from the first, this hyper-focus, to the second? I think it's more that when you talk to them, the way they think about the world and the set of interests they have is a little bit different from the first group of folks. And I'm not talking about Travis specifically, because I didn't know him well enough to have a perspective on that; I just mean more generally. I've noticed they have this commonality: when you talk to them very early, when they're like 20 years old and you meet them, the set of interests they have is very, very broad. And they tend to go very deep on each thing they get interested in, whether it benefits them or not, because it's interesting. They're driven by a certain form of interestingness, in addition to being driven by impact.
And then I think there's a third set of people who end up with outsized successes. Sometimes that's just product market fit, and then they grow into the role. There are some businesses that just have either such strong network effects or such a strong liftoff early on. And they're obviously very smart people and all the rest of it, but you don't feel that same drive underlying it, or that same need to do big things. It's almost accidental. And you sometimes see that. Would you say that's more luck? I don't know. I mean, say somebody is really good at finding product market fit, but they're not that aggressive, and once they're at a certain level, they're not that ambitious. Part of it too is, what's your utility curve? What do you care about in life? Do you care about status? Do you care about money? Do you care about power? Do you care about impact? Do you do things because they're interesting? Why do you do stuff? And imagine people where that is a big part of everything they do. Because I think the average person may have mixes of that, but they're also just happy going home to their kids and hanging out. It's a different life. The average Google engineer is not going to be this insanely driven, hyper-drive person. What do you think keeps people going? I mean, a lot of people become successful, and maybe they hit whatever number they have in their head where they can retire comfortably or live the life they want to live, and then they coast. Maybe not intentionally. I mean, they're not thinking of it that way, but they take their foot off the gas, and all of a sudden they're focused on 10 different things instead of one thing.
And then there's another subset of people that are like, they just blow right by that and they keep going. And whether it's a hundred million or a billion or a 10 billion or, you know, in Elon's case, a hundred billion or more, but they keep going. Yeah. It's back to what's your, like, what do you care about? What's your utility function? What's driving you?
And based on what's driving you: the people that I know who have been very successful and driven solely by money end up miserable. Because they have money, and then what? It's never enough. What do you do then? Well, it's not just that it's never enough. It's, what do you do? What fulfills you? You can already buy everything you could ever buy. Like, what fulfills you? And you also see versions of this where people make it and then don't know what to do with themselves. I think I mentioned this earlier: there's one guy I know who's incredibly successful, and he spends all his time buying domain names. And you're like, well, is that fulfilling? It's almost like, what's your meaning or purpose? I feel like the people who end up doing these other things have some broader meaning or purpose driving them even very early on. And obviously people want to win and all the rest. There's a really good framework from Naval Ravikant. In the 90s, John Doerr, one of the giants, one of the legends of investing, used to ask founders: are you a missionary or a mercenary? And of course, the answer you were expected to give is, I'm a missionary, right? I'm doing it because it's the right work to do and all this.
And Naval's framework is like, when you're young, of course you're at least half, if not more, mercenary. You want to make it. You're hungry. You don't have any money. You need to survive. You're driven because of that, in part.
And then in the middle phase of your career or life, you're more of a missionary if you're not a zero-sum person. You suddenly can have a broader purpose. You can do other things. You can engage. And then he's like, late in your life, you're an artist. You do it for the love of the craft. I much prefer that framework.
Most of the people that I see who do the most interesting, big things over time fall into that latter category. There's always some mercenary piece, of course; you want to have money to survive and all that. And then it morphs: you become more mission-centric, and then over time you just do it for the love of whatever the thing you're doing is. And those are the people that I see become happy over time. What's the difference between success and relevance?
Yeah, it's a great question because there's lots of different ways to define success. Success could mean I have a million Instagram followers. It depends on your own version of success, right? Societally, one of the big versions of success is a big financial outcome. One could argue a bigger version of that is like a happy family. There's lots of versions of success. Relevance means that
You're somehow impacting things that are important to the world and people seek you out because of that. Or alternatively, you're just impacting things, right?
But usually people end up seeking you out because of that, for a specific thing. And the amazing thing is that there are lots and lots of people who've been successful who are no longer relevant. Just look at the list of billionaires, or whatever metric you want to use, and ask: how many of those people are actually sought out because they're doing something interesting or important? And so there's this interesting question that I've been toying with, which is: are there characteristics of people who stay relevant over very long arcs of time? People who are constantly doing interesting things. One could argue Sam Altman has maintained that over a very long arc, between YC, the early things he was involved with on the investing side, and then of course now OpenAI and other areas. Patrick is obviously doing that between Stripe and Arc and other areas. And there are people with longer arcs than that, right? Marc Andreessen was one of the key people behind inventing the browser. Then he started multiple companies, including Netscape, which was a giant of the internet, and then started one of the most important venture firms in the world. So that's a great example of a very, very strong arc over time. Or Elon Musk has a very strong arc over time, right? From Zip2 to PayPal to all the stuff he's done now.
The question is, what do those people have in common? Peter Thiel, right? Think of all the stuff he's done across politics and the Thiel fellows and the funds and Palantir and Facebook and all this stuff. And the commonality that stands out to me across all those people is they tend to be pretty polymathic. So they have a wide range of interests. They tend to be driven by a mix of stuff, not just money.
So, of course, money is important and all the rest, but I think for a subset of people it's interestingness, for a subset it's impact, for a subset it's power, whatever it is. There's usually a blend, and for each person there's a different spike across that. And the other commonality, I think, is almost all of them had some form of success early. Because the thing that people continue to underappreciate is, it's kind of like the old Charlie Mungerism: the thing he said he continued to underappreciate was the power of incentives. The thing I continue to underappreciate is the power of compounding. And you see that in investing and financial markets, but you also see it in people's careers and impact. The people who are successful early have a platform upon which they can build over time in a massive way. They have the financial wherewithal to take risks or fund new things. And importantly, they're in the flow of information.
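To make the compounding point concrete, here is a minimal sketch with purely hypothetical growth rates; the numbers are illustrative and not anything cited in the conversation:

```python
# Toy illustration of compounding: the same starting point with a
# slightly higher annual growth rate ends up in a very different place.
# All numbers are hypothetical.

def compound(base: float, rate: float, years: int) -> float:
    """Value of `base` after `years` of compounding at `rate` per year."""
    return base * (1 + rate) ** years

steady = compound(1.0, 0.10, 30)  # ~17.4x after 30 years at 10%/yr
faster = compound(1.0, 0.15, 30)  # ~66.2x after 30 years at 15%/yr

print(f"{steady:.1f}x vs {faster:.1f}x")
# A five-point annual edge compounds into roughly a 4x gap over 30 years.
```

The same dynamic is the point being made about careers: a small early advantage, reapplied every year, ends up dominating.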
You start to meet all the most interesting people thinking the most interesting things, and you can synthesize all of that in this sort of pool of ideas and thoughts and people. This is full circle, back to almost where we started, right? How important is that flow of information to finding the next opportunity, to capitalizing on other people's mistakes, to staying relevant? Yeah, there are two types of information. There's information that's hidden,
And there's information that... So I'll give you an example, right? When I started investing in generative AI, all these early foundation model things, etc. Basically, nobody was doing it. And it was all out in the open. GPT-3 had just dropped. It was clearly a big step function from 2. If you just extrapolated that, you knew really, really interesting things were going to happen. People were using it internally in different ways at these companies.
And so it was in plain sight that GPT-3 existed out there, but very few people recognized that it was that important. And so the question is why, right? The information was out there. Then there are other types of information where early access helps shape how you think about the world. Sometimes that's just a one-on-one conversation, and sometimes, again, people are doing things out in the open. So, for example, a lot of the different things that Peter Thiel talked about and had insights on 10 years ago ended up being true. Not all, but a lot of them, right? So wait, let me go through some of these. One: I found information that is publicly available that you haven't found. Two: I weigh the information differently than you do, so I weigh its importance differently. And then there's access: I have access to information that you don't have. Are there other types of information advantages? No, because I think the one you mentioned, where you interpret it differently, has all sorts of aspects to it. Go deeper on that. Well, do you have the tooling to use it? Do you need a data scientist, right? It's like all the algorithmic trading stuff. All the information's out there, but can you actually make use of it? Do you have the right filter on it? Do you pick up or glean certain insights, or make intuitive leaps, that other people don't? It's sort of like when people talk about Richard Feynman, the physicist.
And they said, with other physicists who won Nobel Prizes, they're like, oh yeah, I could understand how that person got there. It's this chain of logical steps, and maybe I could have done that. They're like, with Feynman, he just did these leaps, and nobody knew how he did it. And so I do think there's people who uniquely synthesize information in the world and come to specific conclusions. And those conclusions are often right, but people don't know how they got there.
You're bringing it back to clusters, and all the stuff about information and how to think about it and how to interpret it. It's all about being in a cluster. How do you go about constructing a better cluster? If you take the presumption that material goes into my head, whether I'm reading, that's one way, or conversing, or searching: how do I improve the quality of the information, through a cluster or otherwise, that my raw material is built on? Yeah, I think it's a few things. Different people approach and process this in different ways. And this is back to: the best people somehow tend to aggregate, or maybe best is the wrong word. There's a bunch of people with common characteristics, a subset of whom become very successful, who somehow repeatedly keep meeting each other quite young in the same geography. And again, it's happened throughout history.
And so A, there's clearly some attraction between these people to talking to each other and hanging out with each other and learning from each other. And sometimes you meet somebody and you're like, wow, I just learned a ton off of this person in like 30 minutes. And this was a great conversation. Versus, okay, yeah, that was nice to meet that person. They're nice or whatever, you know. And...
I feel like a lot of folks who end up doing really big, interesting things just somehow meet or aggregate towards these other people, and they all tell each other about each other, and they hang out together and all the rest. And so I do think there's...
sort of self-attraction of these groups of people. Now the internet has helped create online versions of that. There's been a lot of talk now about these IOI or gold medalist communities where people do like math or coding competitions or other things.
Scott, the CEO of Cognition, is a great example of that. He knows a lot of founders in Silicon Valley, and one of the reasons they all know each other is through these competitions. It's a way to aggregate people growing up all over the country, or all over the world, who never would have connected, and then they connect through these competitions.
And so that's become a funnel for a subset of people. So the move towards the internet, I think, has actually created a very different environment where you can find more like-minded people than you ever could before, right? Because before how would you find people? And how would you even know to go to Silicon Valley? Do you think it's true that if I change your information flow I can change your trajectory?
And if so, what are the first steps that people listening can take to get better information? If you want to work in a specific area and be top of your game in that area, you should move to the cluster for whatever that is. So if you want to go into movies, you should go to Hollywood. If you want to go into tech, you should go to Silicon Valley, et cetera.
The whole, hey, you can succeed at anything from anywhere is kind of true, but it's very rare. And why make it harder for yourself? Yeah, why play on hard mode? Yeah. How do you think about that in terms of companies and remote work? We were talking about this a little bit before we hit record in the sense of... One of the things that people lose is the culture of the company and feeling part of something larger than themselves. How does that impact the quality of work we do?
the information flow we have. There's no more water cooler conversation where like, hey, you know, in that presentation you should have done this and not that. Yeah, that's a good point. I think it's interesting. If a company is really young and still very innovative,
I think a lot of remote work tends to be quite bad in terms of the success of the company. Now, that doesn't mean it won't succeed. It just makes it much harder. A company I backed, I don't know how long ago now, 14 years or something like that, was GitLab, which has done quite well. It's a public company now, et cetera. And they were one of the
very first remote first companies. And so when I backed them, it was like four people or something. I can't remember, four or five people. They were fully remote. They stayed remote forever. And they built a ton of processes in to actually make that work. And they were brilliant about it. And they actually have all this published on their website where you can go and you can read hundreds of pages.
about everything they've done to enable remote work, everything from how they thought about salary bands based on location through to processes and all the rest. And it was, and may still be, a very quirky culture, where I'd be talking to the CEO and he'd say, oh, this conversation is really interesting. And he'd drop the link to our Zoom into a giant group chat, and random people would just start popping in while we were talking. Oh, wow. And you're like, who are these people? We're just talking about whether you should do a RIF, and 30 people just join? Is this a good idea? It was, and probably still is, a very innovative, very smart culture, very process-driven, just excellent at saying, okay, if we're going to be remote, let's put in place every single control to make that work. So they were very smart about that. I have not seen many other companies do anything close to that. And so I think, for very early companies, the best companies I know are almost 100% in person. There are some counterexamples to that, and crypto has some nuances that are a little bit different. But for a standard AI, tech, or SaaS company, et cetera, that's generally the rule. As a company gets larger, you're definitely going to have remote
parts of your workforce, right? Parts of your sales team are remote, although really they should be at the customer site, right? Remote should mean customer site or home office or something, right? It shouldn't mean truly remote. And you always
even 10 years ago or whatever, would make exceptions, right? You'd say, well, this person is really exceptional, and I know them well, and they're moving to Colorado, and we'll keep them because we know they're as productive or more productive than anybody else on the team, even if they're not going to be in the office every day. For later-stage companies, there's this really big question of how much of your team you want to be remote, how many days a week, and whether enforcing a no-remote policy is also enforcing that you're prioritizing people who care about the company more than they care about other things. Right. And each CEO needs to come in and make a judgment call about how important that is. How much does that impact their ability to tap global talent? Because that's often the question or concern. So there's a set of trade-offs. I mean, the argument for it, I guess, is that it's more flexible for employees, if that's part of what you're optimizing for, but also that you can hire world-class talent you might not be able to hire otherwise. Yeah. And I don't know if I 100% buy that, but it's possible. I've been in the sauna at the gym with a number of people on Microsoft Teams calls. Yeah, you can see people are clearly not working. Now, the flip side of that is there are certain organizations where you knew people weren't working very hard before things went remote, right? Yeah. At some of the big tech companies before COVID, you'd go in and it'd be pretty empty until like 11, and then people would roll in for lunch, and then they'd leave at like 2. And so one argument I make sometimes is that big tech is effectively a big experiment in UBI, universal basic income, for people who went to good schools, right? You're literally just giving money to people for not doing very much,
in some cases. Do you think that's starting to change, and the complacency that maybe caused it is starting to go away? It seems like we had this era where everybody was super successful, they all had their own area, but now we have a new race. We have to get fit again, almost.
You know, it's kind of like the person who goes to the gym and never breaks a sweat. If we're talking about fitness, they lift a weight and they're like, I'm going to check my phone now. That's what I feel like has basically happened. So I think the reality is, if you look at what Musk did at Twitter, where they cut 80% of staff or whatever it was, I wouldn't be surprised if you could do things that are pretty close to that at a lot of the big tech companies. That's fascinating. One of the things we talked about was how, in any field, the best, there's sort of like 20 people who are just exceptional. Go deeper on that for me. Yeah, so we were talking about clusters, right? There are geographic clusters: all of tech is happening in one area, and honestly, all of AI is happening in a few blocks, right, if you were to aggregate it all up. So there are these very strong cluster effects at the regional level. And then, as we mentioned, there are groups of people who keep running into each other who are kind of the motive force for everything. And if you look at almost every field, there's at most a few dozen, maybe for very big fields a few hundred, people who are roughly driving almost everything, right? Look at cancer research.
And there are probably 20 or 30 of the most important labs where all the breakthroughs come out. Not just that: the lineage of those labs, the people they came from, is in common. And the people who end up being very successful afterwards mainly all come from those same labs. We actually see this for startups, right? My team went back and we looked at where all the startup founders come from, school-wise. And three schools dominate by far in terms of big outsized outcomes. Stanford is number one by far, and then MIT and Harvard. And then there's a big step down, and there's a bunch of schools that have some successes, Berkeley and Duke and a few others. And then there's kind of everything else, right? And so there are these very strong rules around the lineage of people as well. And oddly enough, you see this in religious movements, right? The lineage really matters. Schools of yoga, the lineage really matters. All these things, the lineage really matters. And so what you find is that in any field, there's a handful of people who drive that field. And a handful, again, can be in the tens or maybe hundreds. That's true in tech: early on there were probably 20, 30, maybe 100 at most, AI researchers driving much of the progress. There's a bunch of ancillary people, but there's a core group. That's true in areas of biology. That's true in finance. And eventually most of these people end up meeting each other in different forums, and some become friends and some become rivals and some become both. But it's surprising how small these groups are. A friend of mine and I were joking that we must be in a simulation, because we keep running into the same people over the 10- or 20-year arc who keep doing the big things. Yeah. Does that mean those people are almost perpetually undervalued?
especially if it's not a CEO and they're running their own show, if it's a researcher. If you take the hypothesis that maybe there's only 20 great investors or 20 great researchers or 20 great whatever, but they're employees of somebody else, then they're perpetually undervalued. Because it's like, no matter how much I'm paying you, it's almost not enough because you're going to drive this forward. Yeah, it depends on how you define greatness. If somebody is the world's best kite flyer. Yeah.
No, seriously, though, right? There's going to be a handful of people who are the best at every single thing. But there's not a ton of economic value. Correct. Yeah, and so that's the question, right? Part of the question is: what is the importance of each person relative to an organization or field? And then, are they properly recognized or rewarded relative to those contributions? And if not, why not? And if so, then great.
And so I think there's a separate question of rewards, effectively. And rewards could be status, it could be money, it could be influence, it could be whatever it is. What else have you guys learned about investing in startups? So you had these clusters like... Most people come from Stanford, MIT, or Harvard. What are the other things that you've picked up that you were like, oh, that...
that's surprising or counterintuitive or challenges an existing belief that I had. I mean, I'll give you one that challenges a belief, and then I'll give you one that I think is consistent. Maybe I'll start with the consistent one, which is back to clusters. We take the total market cap of all private companies worth a billion dollars or more, and every quarter or two we basically look at where they're based geographically, right? And traditionally the U.S. has been about half of that globally, and the Bay Area has been about half of the U.S. share. So 25% of all private technology value creation happens in one place, right? In one city. If you add in New York and LA, then you're at like 40% of the world. Wow. Right? And LA is mainly SpaceX and Anduril. Yeah. So it's very concentrated. That's why, when I see venture capitalists build these global firms with branches everywhere, you're like, why? It doesn't make sense from a resource allocation perspective, unless you're just trying to have a specific kind of firm for other reasons. And if you look at AI, something like 80 to 90% of the market cap is all in the Bay Area, right? And so it's a super cluster. And you see that going the other way. For fintech, a lot of the value was split between New York and the Bay Area. So one aspect of it is these things are actually more extreme than you'd think for certain areas. And space and defense was roughly all Southern California, until SpaceX moved some of its operations. The counterintuitive things are more tactical.
There are a few things people say a lot in Silicon Valley that just aren't correct. For example, there's this idea that you should always have a co-founder, or an equal co-founder. And if you look at the biggest successes in the startup world over time, they were either solo founders or very unequal founders. There are counterexamples to that, of course, but look at Amazon: Jeff Bezos was the only founder. Microsoft was unequal, and eventually the other founder left. You kind of go through the list, and there aren't that many where there was true equality. But it's now kind of a myth that you should be equal with your co-founder, and I think there are negative aspects of doing that. A second thing that's a little bit counterintuitive is reference checks on founders. If you get a positive reference check on someone, that's positive. If you get a negative reference check on a founder, it's usually neutral, unless people are saying they're ethically bad or there's some real issue with them. And there are two reasons for that. One is that I think product market fit trumps founder quality. You could be kind of crappy, but if you hit the right thing, you can do really well. The other piece of it is that it's contextual.
Somebody who's kind of lazy and not great in one environment may actually be much better when it's their own thing, when they're responsible and they need to drive everything. As an example of that, there was somebody I worked with at Twitter who was a very nice person but never really seemed that effective to me. He was always kind of hanging out, drinking coffee, chatting. And then a few years later, I met up with him and he was running a very successful startup. And I said, what happened? I mean, I said it nicer than that, right? It's so interesting that you built this great company. He said, you know what? I finally feel like my ass is on the line, and that's why I'm working so hard. Now, in general, I think the true giant outsized-success archetype is somebody who can't turn it off. They're always on and they can't help it. But there are examples where the context of the organization and the context of your situation matter. When you invested in Anduril, you mentioned you had criteria and they checked them off. What was your mental model? If I'm going to invest in a tech-forward defense company, it needs to have X, Y, Z. What was that criteria?
Yeah, so Anduril happened at a unique moment in time. Google had just shut down Maven, defense had suddenly become very unpopular in Silicon Valley, and people were making arguments that ethically you shouldn't support the defense industry, which I thought was pretty ridiculous. Because if you cared about Western values and you wanted to defend them, of course you needed defense tech. So I started looking around to see who was building interesting things in defense, because if the big companies won't do it, what a great opportunity for a startup, right? It was a good moment in time. And it felt like there were four or five things you needed in order to build a next-gen defense tech company, because there were a bunch of defense tech companies that either never worked or stayed small.
Number one, you needed a why-now moment for the technology. What is shifting in technology that the incumbents can't just tack on? Because the way the defense industry works is there's a handful of players called primes who sell directly to the DoD, and they subcontract out everything else, right? And if you're not a prime and you don't have a direct relationship, you end up in a bad spot in terms of being able to win big programs and survive or succeed as a company. So number one is: what is the technology why-now that creates an opening? For Anduril, it was initially machine vision and drones, which were new things. Two: are you going to build a broad enough product portfolio that you can become a prime? Right. Which they did from day one. Third: do you have the connectivity and the ability to really drive a faster sales cycle? Fourth: can you raise enough money that you'll last long enough to put up with the really long timelines to actually get onto these big programs of record? I think Anduril got their first program of record in something like three and a half years, which was remarkably fast. I think it was the fastest since the Korean War or something. It was super impressive. And then lastly, the way the business model for the defense industry works is cost plus.
Oh, yeah. So you basically make, say, 5% to 12% on top of whatever your cost to build the product is, and that includes your labor, that includes every component. And that's why there's a very big incentive in the defense industry to overrun on time. Yeah. Because you charge that percentage on the time, right? So if something's late, you make more money. And you have no cost incentive at all. You have no cost incentive. That's why you have a $100 screw. Because you make five bucks on the screw that costs a hundred bucks, instead of using a 10-cent screw, right? Yeah. And so the cost-plus model is extremely bad if you want an efficient, fast-moving defense industry. And they were really focused on trying to create a more traditional hardware-margin business. An example would be: if Lockheed Martin sold a drone to the government for a million dollars at 5% cost plus, they'd make $50,000. If Anduril sold a $100,000 drone with the same capabilities to the government and had a 50% hardware margin, they'd make $50,000 too. But the government could buy 10 of them for the same price. Yeah. So the government gets 10 times the hardware, or the capability set, and Anduril gets 10 times as much margin. If that structure works, everybody basically wins.
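To spell out the arithmetic in that example, here is a minimal sketch of the two pricing models; the drone prices and percentages are the hypothetical figures from the conversation, not actual program data:

```python
# Cost-plus vs. fixed-margin economics, using the hypothetical numbers
# from the conversation above (not actual defense program data).

def cost_plus_profit(cost: float, plus_rate: float) -> float:
    """Contractor profit under cost-plus: a fixed percentage on top of cost."""
    return cost * plus_rate

def fixed_margin_profit(price: float, margin: float) -> float:
    """Contractor profit under a fixed hardware margin on the sale price."""
    return price * margin

budget = 1_000_000  # the government's budget for this purchase

# Incumbent: one $1M drone at 5% cost plus -> $50,000 profit, 1 drone.
incumbent_profit = cost_plus_profit(1_000_000, 0.05)

# Startup: $100k drones at 50% margin; the same budget buys 10 of them
# -> $50,000 profit per drone, $500,000 total, and 10x the hardware.
units = budget // 100_000
startup_profit = fixed_margin_profit(100_000, 0.50) * units

print(incumbent_profit, units, startup_profit)  # 50000.0 10 500000.0
```

Note how the cost-plus incentive problem falls out of the first function: profit scales with cost, so every overrun and every $100 screw increases the contractor's take.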
And so I just thought that business model shift was really important. Why now, though, in the sense of why wouldn't the defense industry encourage more competition? They know they're paying cost plus. They know the screw shouldn't be 100 bucks.
Why didn't they encourage this way before Anduril? Yeah, I think at the time cost plus was viewed as the fairest version of it: you're like, oh, just give me your bill of materials, then I know exactly what it costs, and you'll just get a fixed margin on top. So that's more fair, and from a budgeting perspective I know exactly how much budget to ask for. And I think in hindsight, maybe it worked at that moment in time, but it no longer seems applicable.
And then the other thing that's happened in the defense industry is there's been massive consolidation over the last 30 years. And so a lot of the growth of these companies came through M&A and so you had fewer and fewer players competing for the same business. And so that also means that it's back to the oligopoly market structure that we talked about earlier.
How do you see defense changing in the future? Is it less about ships and more about cyber and drones? And how do we see the future of defense spending in a world where what used to dominate was these billion-dollar ships, and now we're in a world of asymmetry, where for a couple million bucks I might be able to hire the best cyber attack team in the world, or buy a thousand drones? How do you think about that? How do you think about defense in the next five, ten years? Yeah, in general, defense is inevitably going to move to these highly distributed drone-based systems as a major component of any branch of the military. And it's not just because it's faster and cheaper, et cetera, but also because there are certain things you can't do with a human operator inside the cockpit. So, for example, take a plane: the g-forces a human pilot can tolerate are much lower than what a drone can pull when you don't have to worry about people inside. Plus, we must be at a point where AI can outperform a human fighter pilot, I would imagine. I haven't kept up on defense. Yeah, there are a few different contracts, both in Europe and the U.S., that are moving ahead
around autonomous flight and autonomous drones and all the rest of it, autonomous capabilities in the air in general. I think the line people have stuck to so far is that if there's any sort of decision involved with killing somebody or hurting something, then you need a human operator to actually trigger it. That way you're not turning over control to a fully autonomous system, which I think is smart, right? You don't want the thing to do the targeting, go after the target, and make all these mistakes. You want a human to make that decision. But we exist in a world where not everybody is going to follow those rules. That's true. Then the question is: what's the relative firepower of that group of people, how do you deal with them, and what do you do to retaliate? I mean, in general, one could argue warfare has gotten dramatically less bloody.
Oh, wait, go deeper on that. Well, think about the type of warfare that happened 150 years ago. Or imagine if some equivalent of the Houthis had been constantly shooting at your ships 100 years ago. What do you think the response would have been? Do you think you would have said, ah, don't worry about it? Obviously, we've become much more
civilized in our approach, and very thoughtful about the implications of certain ways that people used to fight battles. But the way we deal with problems today is very different from how we used to deal with them. Is there an equivalent to Anduril but in the software space, from a defense perspective? And I mean that as cyber weapons or cyber defense. Who's the best? Yeah, I've been looking around for that for a while. I don't think I've seen anything directly yet, but it may exist and I may just have missed it. But I do think things like that are coming. And you do see some AI security companies emerging, which are basically using AI to deal with phishing threats or other things. You could argue Material Security is doing that. There are people working across pen testing and other areas right now as well. This has been a fascinating conversation. We always end with the same question, which is: what is success for you? Yeah. You know, I've been noodling on that a lot recently. And I think if I look at the frameworks that exist
in certain Eastern philosophies or religions, it's almost like there are these expanding circles that change with time as you go through your life, right? Early on, you're focused more on yourself and your schooling, and then you add work, and then you add your family and community, and then you add society. And eventually you become a sadhu and you go off and meditate in a cave in the forest or whatever. And different people weigh those circles differently. And a big transition I'm probably making right now is that I've been focused a lot on work and family, and the thing I'm increasingly thinking about is what positive things I can do at a more societal level. Thanks so much for having me on.
Thanks for listening and learning with us. Be sure to sign up for my free weekly newsletter at fs.blog/newsletter. The Farnam Street website is also where you can get more information on our membership program, which includes access to episode transcripts, my repository, ad-free episodes, and more. Follow myself and Farnam Street on X, Instagram, and LinkedIn to stay in the loop. Plus, you can watch full episodes on our YouTube channel. If you like what we're doing here, leaving a rating and review would mean the world. And if you really like us, sharing the show with a friend helps grow the community.