I will say, David, I would love to have NVIDIA's full production team every episode. It was nice not having to worry about turning the cameras on and off and making sure that nothing bad happened myself while we were recording this. I mean, the drives that came out of the camera! All right, red cameras for the home studio starting next episode. All right, let's do it.
Who got the truth? Is it you, is it you, is it you? Who got the truth now? Is it you, is it you, is it you? Sit me down, say it straight. Another story on the way. Who got the truth? Welcome to this episode of Acquired, the podcast about great technology companies and the stories and playbooks behind them. I'm Ben Gilbert. David Rosenthal. And we are your hosts. Listeners, just so we don't bury the lead, this episode was insanely cool for David and I. Yeah.
After researching NVIDIA for something like 500 hours over the last two years, we flew down to NVIDIA headquarters to sit down with Jensen himself. And Jensen, of course, is the founder and CEO of NVIDIA, the company powering this whole AI explosion. At the time of recording, NVIDIA is worth $1.1 trillion and is the sixth most valuable company in the entire world. And right now is a crucible moment for the company. Expectations are set high.
I mean sky high. They have about the most impressive strategic position and lead against their competitors of any company that we've ever studied. But here's the question that everyone is wondering, will NVIDIA's insane prosperity continue for years to come? Is AI going to be the next trillion dollar technology wave? How sure are we of that? And if so, can NVIDIA actually maintain their ridiculous dominance as this market comes to take shape?
So Jensen takes us down memory lane with stories of how they went from graphics to the data center to AI, how they survived multiple near-death experiences. He also has plenty of advice for founders, and he shared an emotional side to the founder journey toward the end of the episode. Yeah, I got a new perspective on the company and on him as a founder and a leader just from doing this, despite the fact that we thought we knew everything before we came in, and it turned out we didn't.
Turns out the protagonist actually knows more. Yes. All right, well listeners, join the Slack. There is incredible discussion of everything about this company, AI, the whole ecosystem, and a bunch of other episodes that we've done recently going on in there right now. So that is acquired.fm slash Slack. We would love to see you. And without further ado, this show is not investment advice. David and I may have investments in the companies we discuss.
And this show is for informational and entertainment purposes only. On to Jensen. So Jensen, this is Acquired. We want to start with story time, so we want to wind the clock all the way back to, I believe it was, 1997. You're getting ready to ship the Riva 128, which is one of the largest graphics chips ever created in the history of computing. It is the first fully 3D accelerated graphics pipeline for a computer.
And you guys have about six months of cash left. And so you decide to do the entire testing in simulation rather than ever receiving a physical prototype. You commission the production run sight unseen with the rest of the company's money. So you're betting it all right here on the Riva 128. It comes back, and of the 32 DirectX blend modes, it supports eight of them. And you have to convince the market to buy it.
And you've got to convince developers not to use anything but those eight blend modes. Walk us through what that was. The other 24 weren't that important. Okay, so wait a minute. Was that the plan all along? Like, when did you realize that? We should have realized — I didn't learn about it until it was too late. We should have implemented all 32. But we built what we built, and so we had to make the best of it. That was really an extraordinary time. Remember, Riva 128 was NV3.
NV1 and NV2 were based on forward texture mapping — no triangles, but curves — and we tessellated the curves. And because we were rendering higher-level objects, we essentially avoided using Z-buffers. And we thought that that was going to be a good rendering approach, and it turns out to have been completely the wrong answer.
And so what Riva 128 was was a reset of our company. Now remember, at the time that we started the company in 1993, we were the only consumer 3D graphics company ever created. And we were focused on transforming the PC into an accelerated PC, because at the time Windows was really a software-rendered system.
And so anyways, Riva 128 was a reset of our company, because by the time that we realized we had gone down the wrong road, Microsoft had already rolled out DirectX. It was fundamentally incompatible with the NV1 and NV2 architecture. Thirty competitors had already shown up, even though we were the first company at the time that we were founded. So the world was a completely different place.
The question was what to do as a company, strategy-wise. At that point, I would have said that we had made a whole bunch of wrong decisions, but on the day that it mattered, we made a sequence of extraordinarily good decisions. That time, 1997, was probably NVIDIA's best moment. And the reason for that was our backs were up against the wall. We were running out of time, we were running out of money, and for a lot of employees, running out of hope.
And the question is, what do we do? Well, the first thing that we did was we decided that, look, DirectX is now here. We're not going to fight it. Let's go figure out a way to build the best thing in the world for it. And Riva 128 is the world's first fully hardware-accelerated pipeline for rendering 3D. And so the transform, the projection, every single element all the way down to the frame buffer was completely hardware accelerated.
We implemented a texture cache. We took the bus and the frame buffer to as big as physics could afford at the time. We made the biggest chip that anybody had ever imagined building. We used the fastest memories. Basically, if we built that chip, there could be nothing faster. And we also chose a cost point that was substantially higher than the highest price that we thought any of our competitors would be willing to go.
If we built it right, we accelerated everything, we implemented everything in DirectX that we knew of, and we built it as large as we possibly could, then obviously nobody could build something faster than that. Today, in a way, you kind of do that here at NVIDIA too. You were a consumer products company back then, right? There were end consumers who were going to have to pay the money to buy them.
That's right. But we observed that there was a segment of the market where people were — because at the time the PC industry was still coming up, and it wasn't good enough. Everybody was clamoring for the next fastest thing. And so if your performance was 10 times higher this year than what was available, there was a whole large market of enthusiasts who, we believed, would have gone after it. And we were absolutely right — the PC industry had a substantially large, enthusiastic market that would buy the best of everything. To this day, it kind of remains true for certain segments of the market where the technology is never good enough, like 3D graphics. We chose the right technology — 3D graphics is never good enough.
And we called it, back then, a sustainable technology opportunity — 3D, because it's never good enough. And so your technology can keep getting better. We also made the decision to use this technology called emulation. There was a company called IKOS. And on the day that I called them, they were just shutting the company down because they had no customers.
And I said, hey, look, I'll buy what you have in inventory. And no promises are necessary. And the reason why we needed that emulator is because, if you figure out how much money we had — if we taped out a chip, and we got it back from the fab, and we started working on our software, and by the time that we found all the bugs through the software, then taped out a chip again — well, we would have been out of business already. And so I knew. Your competitors would have caught up.
Well, not to mention we would have been out of business. Who cares? So if you're going to be out of business anyways, that plan obviously wasn't the plan. The plan that companies normally go through, which is build the chip, write the software, fix the bugs, tape out the new chip, so on and so forth, that method wasn't going to work. And so the question is, if we only had six months, and you get to tape out just one time, then obviously you're going to tape out a perfect chip.
So I remember having a conversation with our leaders and they said, but Jensen, how do you know it's going to be perfect? I said, I know it's going to be perfect because if it's not, we'll be out of business. And so let's make it perfect. We get one shot. We essentially virtually prototyped the chip by buying this emulator. And Dwight and the software team wrote our software, the entire stack, and ran it on this emulator, and just sat in the lab waiting for Windows to paint.
It was like 60 seconds per frame or something. I actually think that it was an hour per frame, something like that. And so we just sat there and watched it paint. And so on the day that we decided to tape out, I assumed that the chip was perfect. Everything that we could have tested, we tested in advance, and I told everybody, this is it, we're going to tape out, the chip is going to be perfect.
Well, if you're going to tape out a chip and you know it's perfect, then what else would you do? That's actually a good question. If you knew that you hit enter, you taped out a chip, and you knew it was going to be perfect, then what else would you do? Well, the answer, obviously: go to production. And marketing blitz. Yeah, yeah. And developer relations. Just start everything, because you've got a perfect chip. And so we got it in our heads that we had a perfect chip.
How much of this was you, and how much of this was, like, your co-founders, the rest of the company, the board? Was everybody telling you you were crazy? No, everybody was clear we had no shot. Not doing it would be crazy, because otherwise you might as well have gone home. Yeah, you're going to be out of business anyways.
So anything aside from that is crazy. So it seems like a fairly logical thing, and quite frankly, right now as I'm describing it, you're probably thinking, yeah, it's pretty sensible. Well, it worked. Yeah. And so we taped that out and went directly to production. So is the lesson for founders out there, when you have conviction on something like the Riva 128 or CUDA, go bet the company on it?
And this keeps working for you. So it seems like the lesson you learned from this is: yes, keep pushing all the chips in, because so far it's worked every time. How do you think about that? No, no — when you push your chips in, you know it's going to work. Notice we assumed that we taped out a perfect chip. The reason why we taped out a perfect chip is because we emulated the whole chip before we taped it out.
We developed the entire software stack. We ran QA on all the drivers and all the software. We ran all the games we had. We ran every VGA application we had. And so when you push your chips in, what you're really doing when you bet the farm is saying: I'm going to take everything in the future, all the risky things, and pull it in in advance.
And that is probably the lesson. And to this day, everything that we can pre-fetch, everything in the future that we can simulate today, we pre-fetch it. We talk about this a lot. We just talked about this on our Costco episode. You want to push your chips in when you know it's going to work. So every time we see you make a bet-the-company move, you've already simulated it. Yeah, yeah, yeah. Do you feel like that was the case with CUDA? Yeah. In fact, before there was CUDA, there was Cg. Right.
And so we were already playing with the concept of how do we create an abstraction layer above our chip that is expressible in a higher-level language, a higher-level expression. And how can we use our GPU for things like CT reconstruction, image processing? We were already down that path. And so there was some positive feedback, some intuitive positive feedback, that we thought that general-purpose computing could be possible.
You just looked at the pipeline of a programmable shader: it is a processor, and it is highly parallel, and it is massively threaded, and it is the only processor in the world that does that. And so there were a lot of characteristics about programmable shading that would suggest that CUDA had a great opportunity to succeed.
And that is true if there was a large market of machine learning practitioners who would eventually show up and want to do all this great scientific computing and accelerated computing. But at the time, when you were starting to invest what is now something like 10,000 person-years in building that platform, did you ever feel like, oh man, we might have invested ahead of the demand for machine learning, since we're like a decade before the whole world is realizing it?
I guess yes and no. You know, when we saw deep learning, when we saw AlexNet and realized its incredible effectiveness in computer vision, we had the good sense, if you will, to go back to first principles and ask, you know, what is it about this thing that made it so successful? When a new software technology or new algorithm comes along and somehow leapfrogs 30 years of computer vision work, you have to take a step back and ask yourself, but why?
And fundamentally, is it scalable? And if it's scalable, what problems can it solve? And there were several observations that we made. The first observation, of course, is that if you have a whole lot of example data, you could teach this function to make predictions. Well, what we've basically done is discovered a universal function approximator, because the dimensionality could be as high as you want it to be, and because each layer is trained one layer at a time.
There's no reason why you can't make very, very deep neural networks. Okay, so now you just reason your way through. Okay, so now I go back to 12 years ago. You could just imagine the reasoning I'm going through in my head: that we've discovered a universal function approximator. In fact, we might have discovered, with a couple more technologies, a universal computer. And this takes us back to the ImageNet competition. Yeah, yeah. You're leading up to this.
Yeah, yeah. And the reason for that is because we were already working on computer vision at the time. And we were trying to get CUDA to be a good computer vision system, but most of the algorithms that were created for computer vision aren't good for CUDA. And so we were sitting there trying to figure it out. All of a sudden AlexNet shows up. And so that was incredibly intriguing. It's so effective that it makes you take a step back and ask yourself, why is this happening?
So by the time that you reason your way through this, you go, well, what are the kinds of problems in the world that a universal function approximator can solve? Right? Well, we know that most of our algorithms start from principled science. Okay, you want to understand the causality, and from the causality you create a simulation algorithm that allows us to scale. Well, for a lot of problems, we kind of don't care about the causality.
We just care about the predictability of it. Like, do I really care for what reason you prefer this toothpaste over that? I don't really care about the causality. I just want to know that this is the one you would have predicted. Do I really care about the fundamental cause of why somebody who buys a hot dog buys ketchup and mustard? It doesn't really matter. It only matters that I can predict it. It applies to predicting movies, predicting music. It applies to predicting, quite frankly, the weather.
We understand thermodynamics. We understand radiation from the sun. We understand cloud effects. We understand oceanic effects. We understand all these different things. We just want to know whether we should wear a sweater or not. And so causality for a lot of problems in the world doesn't matter. We just want to emulate the system and predict the outcome.
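To make the "universal function approximator" idea concrete, here is a minimal sketch — purely illustrative, not anything from NVIDIA — of a tiny neural network that learns to predict outcomes from example data alone, with no causal model of why the relationship holds. The toy data, network size, and training loop are all assumptions for illustration.

```python
# Minimal sketch of a "universal function approximator": given only example
# (input, output) pairs, a small neural network learns to predict the output
# without any causal model of the underlying relationship.
import numpy as np

rng = np.random.default_rng(0)

# Example data: inputs x and observed outcomes y (a nonlinear relationship
# plus noise). We never tell the model the underlying formula.
x = rng.uniform(-3, 3, size=(256, 1))
y = np.sin(x) + 0.1 * rng.normal(size=x.shape)

# One hidden layer is already a universal approximator in the classic sense;
# depth just makes the approximation more efficient.
W1 = rng.normal(scale=0.5, size=(1, 32)); b1 = np.zeros(32)
W2 = rng.normal(scale=0.5, size=(32, 1)); b2 = np.zeros(1)

lr = 0.05
for step in range(2000):
    h = np.tanh(x @ W1 + b1)          # hidden activations
    pred = h @ W2 + b2                # predicted outcome
    err = pred - y                    # prediction error
    # Backpropagate the squared-error loss through both layers.
    dW2 = h.T @ err / len(x); db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)
    dW1 = x.T @ dh / len(x); db1 = dh.mean(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print("final mean squared error:", float((err ** 2).mean()))
```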
It can be an incredibly lucrative market. If you can predict the next best-performing item to serve into a social media feed — turns out that's a hugely valuable market. This is where I was going to go with that. I love the examples you pulled: toothpaste, ketchup, music, movies. When you realize this, you realize, hang on a second.
A universal function approximator, a machine learning system — you know, something that learns from examples — could have tremendous opportunities, because the number of applications is quite enormous. Everything from, obviously, what we were just talking about — commerce — all the way to science. And so you realize that maybe this could affect a very large part of the world's industries. Almost every piece of software in the world would eventually be programmed this way.
And if that's the case, then how you build a computer and how you build a chip, in fact, can be completely changed. And realizing that, the rest of it just comes with, you know, do you have the courage to put your chips behind it. So that's where we are today. And that's where Nvidia is today. But I'm curious — in those couple of years after AlexNet, and this is when Ben and I were getting into the technology industry and the venture industry ourselves,
I started at Microsoft in 2012, so right after AlexNet, but before anyone was talking about machine learning, even in the mainstream engineering community. There were those couple of years there where, to a lot of the rest of the world, these looked like science projects.
Yeah. The technology companies here in Silicon Valley, particularly the social media companies, they were just realizing huge economic value out of this — the Googles, the Facebooks, the Netflixes, etc. And obviously that led to lots of things, including OpenAI a couple of years later. But during those couple of years, when you saw just that huge economic value unlock here in Silicon Valley, how were you feeling during those times?
The first thought was, of course, reasoning about how we should change our computing stack. The second thought is, where can we find the earliest possibilities of use? If we were to go build this computer, what would people use it to do? And we were fortunate that working with the world's universities and researchers was innate in our company, because we were already working on CUDA, and CUDA's early adopters were researchers, because we democratized supercomputing.
CUDA is not just used, as you know, for AI. CUDA is used for almost all fields of science — everything from molecular dynamics to imaging, CT reconstruction, to seismic processing, to, you know, weather simulations, quantum chemistry; the list goes on, right? And so the number of applications of CUDA in research was very high. And so when the time came and we realized that deep learning could be really interesting, it was natural for us to go back to the researchers
and find every single AI researcher on the planet and say, how can we help you advance your work? And that included Yann LeCun and Andrew Ng and Geoff Hinton. And that's how I met all these people. And I used to go to all the AI conferences, and that's where, you know, I met Ilya Sutskever for the first time.
And so it was really about, at that point, what are the systems we can build, the software stacks we can build, to help you be more successful, to advance the research — because at the time it looked like a toy. But we had confidence. Even GANs — the first time I met Goodfellow, the GAN was like 32 by 32.
And it was just a blurry image of a cat, you know — but how far can it go? And so we believed in it. We believed that, one, you could scale deep learning, because obviously it's trained layer by layer, and you could make the data sets larger and you could make the models larger. And we believed that if you made it larger and larger, it would get better and better.
Yeah, kind of sensible. And I think the discussions and the engagements with the researchers were the exact positive feedback system that we needed. When we went back to the researchers — that's where it all happened. When OpenAI was founded in 2015 — I mean, that was such an important moment; that's obvious today. But at the time, I think most people, even people in tech, were like, what is this?
Yeah, were you involved in it at all? Like, you know, because you were so connected to the researchers, to Ilya — taking that talent out of Google and Facebook, to be blunt, but re-seeding the research community and opening it up — was such an important moment. Were you involved in it at all? I wasn't involved in the founding of it, but I knew a lot of the people there, and Elon, of course, I knew. And Pieter Abbeel was there, and Ilya was there.
And we have some great employees today that were there in the beginning. And I knew that they needed this amazing computer that we were building. We were building the first version of the DGX — which, you know, today when you see a Hopper, it's 70 pounds, 35,000 parts, 10,000 amps. But the DGX, the first version that we built, was used internally. And I delivered the first one to OpenAI. And that was a fun day.
Most of our success, in the beginning, was aligned around just helping the researchers get to the next level. I knew it wasn't very useful in its current state, but I also believed that in a few clicks it could be really remarkable. And that belief system came from the interactions with all these amazing researchers, and it came from just seeing the incremental progress. At first the papers were coming out every three months, and then — papers today are coming out every day, right?
So you could just monitor the arXiv papers — I took an interest in learning about the progress of deep learning and, to the best of my ability, read these papers. And you could just see the progress happening, you know, in real time, exponentially, in real time. And from some researchers we spoke with, it seemed like no one in the industry predicted how useful language models would become when you just increased the size of the models.
They thought, oh, there has to be some algorithmic change that needs to happen. But once you cross that 10 billion parameter mark, and certainly once you cross 100 billion, they just magically got much more accurate, much more useful, much more lifelike. Were you shocked by that when you saw a truly large language model? And do you remember that feeling? Well, my first feeling about the language model was how clever it was to just mask out words and make it predict the next word.
It's self-supervised learning at its best. We have all this text — you know, I know what the answer is, I'm just making it guess. And so my first impression of BERT was really how clever it was. And now the question is, how can you scale that? You know, the first observation — almost everything is interesting — and then you try to understand intuitively what works. And then the next step, of course, is from first principles: how would you extrapolate that?
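As a concrete illustration of the masking trick described here, the tiny sketch below shows how BERT-style self-supervised training examples can be constructed. The sentence, vocabulary, and 15% masking rate are illustrative assumptions, not details from the episode.

```python
# Tiny sketch of the self-supervised masking trick: hide a word we already
# know, then train the model to guess it. The masking rate is an assumption.
import random

random.seed(0)
MASK, MASK_RATE = "[MASK]", 0.15

def make_training_example(tokens):
    """Return (masked_tokens, labels): labels hold the hidden words."""
    masked, labels = [], []
    for tok in tokens:
        if random.random() < MASK_RATE:
            masked.append(MASK)   # hide the token from the model...
            labels.append(tok)    # ...but keep it as the training target
        else:
            masked.append(tok)
            labels.append(None)   # nothing to predict at this position
    return masked, labels

sentence = "we have all this text so we already know the answer".split()
masked, labels = make_training_example(sentence)
print(masked)
print(labels)
# A BERT-style model is trained so that its prediction at each [MASK]
# position matches the corresponding label -- no human annotation needed.
```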
And so obviously we knew that BERT was going to be a lot larger. Now, one of the things about these language models is that they're encoding information, right? They're compressing information. And so within the world's languages and text, there's a fair amount of reasoning that's encoded in it. We describe a lot of reasoning in the things we write. And so if you were to say that a few steps of reasoning are somehow learnable from just reading things,
I wouldn't be surprised. You know, for a lot of us, we get our common sense and we get our reasoning ability by reading. And so why wouldn't a machine learning model also learn some of the reasoning capabilities from that? And from reasoning capabilities, you could have emergent capabilities. Right. Emergent abilities are consistent, intuitively, with reasoning. And so some of it could be predictable, but still, it's still amazing. The fact that it's sensible doesn't make it any less amazing.
Right. I could visualize literally the entire computer and all the modules in a self-driving car. And the fact that it's still keeping lanes makes me insanely happy. And so I even remember that for my first operating systems class in college when I finally figured out all the way from programming language to the electrical engineering classes bridged in the middle by that OS class. I'm like, oh, I think I understand how the Von Neumann computer works soup to nuts. And it's still a miracle.
Yeah. Yeah. Yeah. Exactly. Yeah. Yeah. When you put it all together, it's still a miracle. Now is a great time to talk about one of our favorite companies, Statsig, and we have some tech history for you. Yes. So in our Nvidia Part III episode, we talked about how the AI research teams at Google and Facebook drove incredible business outcomes with cutting-edge ML models.
And these models powered features like the Facebook News Feed, Google ads, and the YouTube next-video recommendation, in the process transforming Google and Facebook into the juggernauts that we know today. And while we talked all about the research, we didn't touch on how these models were actually deployed. Yeah, the most common way to deploy new models was through experimentation: A/B testing.
When the research team created a new model, product engineers would deploy the model to a subset of users and measure the impact of the model on core product metrics. Great experimentation tools transformed the machine learning development process. They de-risked releases, since each model could be released to a small set of users. They sped up release cycles.
Researchers could suddenly get quick feedback from real user data. And most importantly, they created a pragmatic data driven culture since researchers were rewarded for driving actual product improvements. And over time, these experimentation tools gave Facebook and Google a huge edge because they really became a requirement for leading ML teams.
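For a concrete picture of the experimentation pattern described above, here is a minimal sketch of deterministic user bucketing for an A/B test. The function name, the 10% rollout, and the variant names are hypothetical illustrations, not Statsig's actual API.

```python
# Minimal sketch: deploy a new model to a deterministic subset of users and
# compare a core metric between groups. Rollout percentage is an assumption.
import hashlib

def assign_variant(user_id: str, experiment: str, treatment_pct: float = 0.10) -> str:
    """Deterministically bucket a user so they always see the same variant."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF   # uniform value in [0, 1]
    return "new_model" if bucket < treatment_pct else "baseline"

# Serve predictions with whichever model the user's bucket selects,
# then log the outcome metric per variant and compare the averages.
for user in (f"user_{i}" for i in range(10)):
    print(user, "->", assign_variant(user, experiment="ranking_model_v2"))
```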
Yep. So now you're probably thinking, well, that's great for Facebook and Google, but my team can't build out our own internal experimentation platform. Well, you don't have to thanks to Statsig.
So Statsig was literally founded by ex-Facebook engineers who did all this. They've built a best-in-class experimentation, feature flagging, and product analytics platform that's available to anyone, and surprise, surprise, a ton of AI companies are now using Statsig to improve and deploy their models, including OpenAI and Anthropic.
Yep. So whether you're building with AI or not, Statsig can help your team ship faster and make better data driven product decisions. They have a very generous free tier and a special program for venture backed companies, simple pricing for enterprises and no seat based fees. If you're in the acquired community, there's a special offer. You get five million free events a month and white glove onboarding support. So visit Statsig.com slash acquired and get started on your data driven journey.
We have some questions we want to ask you some are cultural about Nvidia, but others are generalizable to company building broadly. And the first one that we wanted to ask is we've heard that you have 40 plus direct reports and that this org chart works a lot differently than a traditional company org chart.
Do you think there's something special about Nvidia that makes you able to have so many direct reports, not worry about coddling or focusing on the career growth of your executives — and you're like, no, you're just here to do your freaking best work on the most important thing in the world, now go?
A, is that correct, and B, is there something special about Nvidia that enables that? I don't think it's something special. I think that we had the courage to build a system like this. Nvidia is not built like a military; it's not built like the armed forces, where you have, you know, generals and colonels. We're just not set up like that. We're not set up as a command-and-control and information distribution system from the top down.
We're really built much more like a computing stack. In a computing stack, the lowest layer is our architecture, and then there's our chip, and then there's our software, and on top of it there are all these different modules. And each one of these layers and modules is people. And so the architecture of the company, to me, is a computer with a computing stack, with people managing different parts of the system.
And who reports to whom — your title is not related to where you are in the stack. It just happens to be whoever is the best at running that module, that function, that layer — they're in charge, and that person is the pilot in command. And so that's one characteristic. And you always thought about the company this way, even from the earliest days? And the reason for that is because your organization should be the architecture of the machinery of building the product.
Right. That's what a company is. And yet everybody's company looks exactly the same, but they all build different things. How does that make any sense? Do you see what I'm saying? How you make fried chicken versus how you flip burgers versus how you make Chinese fried rice is different. And so why would the machinery, why would the process, be exactly the same?
And so it's not sensible to me that if you look at the org charts of most companies, it all kind of looks like this. And then you have one group that's for your business, and you have another for another business, and another for another business. And they're all kind of supposedly autonomous. And so none of that stuff makes any sense to me. It just depends on what it is that we're trying to build and what is the architecture of the company that best suits us to go build it.
So that's number one. In terms of the information system and how you enable collaboration — we're kind of wired up like a neural network. The way that we say it is, there's a phrase in the company: mission is the boss. And so we figure out what the mission is, and we go wire up the best skills and the best teams and the best resources to achieve that mission, and it cuts across the entire organization in a way that doesn't make any sense.
But it looks a little bit like a neural network. And when you say mission, do you mean mission like—? Yeah, OK, so it's not like "further accelerated computing." It's like, we're shipping DGX Cloud. Build Hopper. Or somebody else's mission is build a system for Hopper. Somebody's is build CUDA for Hopper. Somebody's job is build cuDNN for CUDA for Hopper. Somebody's job is the mission, right? So you know your mission is to do something.
What are the tradeoffs associated with that versus the traditional structure? The downside is the pressure on the leaders is fairly high. And the reason for that is because in a command-and-control system, the person you report to has more power than you. And the reason why they have more power than you is because they're closer to the source of information than you are.
In our company, the information is disseminated fairly quickly to a lot of different people, usually at a team level. So for example, just now I was in our robotics meeting. And we're talking about certain things and we're making some decisions. And there are new college grads in the room. There are three vice presidents in the room. There are two e-staff in the room.
And at the moment that we decided together — we reasoned through some stuff, we made a decision — everybody heard it at exactly the same time. So nobody has more power than anybody else. Does it make sense? The new college grad learned it at exactly the same time as the e-staff. And so the executive staff and the leaders that work for me, and myself — you earn the right to have your job based on your ability to reason through problems and help other people succeed.
And it's not because you have some privileged information — that I knew the answer was 3.7 and only I knew. You know, everybody knew. When we did our most recent episode, Nvidia Part III, which we just released, we sort of did this thought exercise. Especially over the last couple of years, your product shipping cycle has been very impressive, especially given the level of technology that you are working with and the difficulty of this all.
We sort of said, like, could you imagine Apple shipping two iPhones a year? And we said that for illustrative purposes. That's for illustrative purposes. Not to pick on Apple, but a large tech company shipping their flagship product twice per year. Yeah. Or, you know, two WWDCs a year. Yeah. There seems to be something—
Well, you can't really imagine that. Whereas that happens here. Are there other companies, either current or historical, that you look up to, admire, maybe took some of this inspiration from? In the last 30 years, I've read my fair share of business books. And as with everything you read, you're supposed to, first of all, enjoy it, right? Enjoy it, be inspired by it. But not to adopt it.
That's not the point of these books. The whole point of these books is to share their experiences. And you're supposed to ask, you know, what does it mean to me in my world? And what does it mean to me in the context of what I'm going through? What does this mean to me in the environment that I'm in? And what does this mean to me and what I'm trying to achieve? And what does this mean to Nvidia in the age of our company and the capability of our company?
And so you're supposed to ask yourself, what does it mean to you? And then from that point, being informed by all these different things that we're learning, we're supposed to come up with our own strategies. You know, what I just described is kind of how I go about everything. You're supposed to be inspired and learn from everybody else. And the education is free — you know, when somebody talks about a new product, you're supposed to go listen to it. You're not supposed to ignore it.
You're supposed to go learn from it. And it could be a competitor, it could be an adjacent industry, it could be nothing to do with us. The more we learn from what's happening in the world, the better. But then you're supposed to come back and ask yourself, you know, what does this mean to us? Yeah, you don't just want to imitate them. That's right. Yeah. I love this tee-up of learning, but not imitating — and learning from a wide array of sources.
There's this sort of unbelievable third element, I think, to what Nvidia has become today, and that's the data center. It's certainly not obvious — I can't reason from AlexNet and your engagement with the research community, and social media feedback, and just get to you deciding, and the company deciding, we're going to go on a five-year, all-in journey on the data center. Yeah. Yeah. How did that happen? Yeah. Our journey to the data center happened almost 17 years ago.
I'm always being asked, I mean, what are the challenges that the company could see someday? And I've always felt that the fact that Nvidia's technology is plugged into a computer — and that computer has to sit next to you, because it has to be connected to a monitor — would limit our opportunities someday. Because there are only so many desktop PCs you can plug a GPU into. And there are only so many CRTs and, at the time, LCDs that we could possibly drive.
So the question is, wouldn't it be amazing if our computing doesn't have to be connected to the viewing device — if the separation of it made it possible for us to compute somewhere else? And one of our engineers came and showed it to me one day. And it was really capturing the frame buffer, encoding it into video, and streaming it to a receiver device — separating computing from the viewing. In many ways, that's cloud gaming. In fact, that was when we started GFN.
We knew that GFN was going to be a journey that would take a long time, because you're fighting all kinds of problems, including the speed of light. And latency everywhere you look. That's right. For listeners, GFN is GeForce NOW. Yeah. Yeah. GFN. And we've been working on GFN. And all of a sudden, that's your first cloud product. That's right. And look, GFN was Nvidia's first data center product.
And our second data center product was remote graphics, putting our GPUs in the world's enterprise data centers. Which then led us to our third product, which combined CUDA plus our GPU, which became a supercomputer, which then led towards more and more and more. And the reason why it's so important is because the disconnection between where Nvidia's computing is done versus where it's enjoyed — if you can separate that, your market opportunity explodes.
Yeah. Yeah. And it was completely true. And so we're no longer limited by the physical constraints of the desktop PC sitting by your desk. And we're not limited by one GPU per person. And so it doesn't matter where it is anymore. And so that was really the great observation. It's a good reminder. The data center segment of Nvidia's business to me has become synonymous with how is AI going.
And that's a false equivalence. And it's interesting that you were ready to sort of explode in AI in the data center only because you had three-plus previous products where you learned how to build data center computers. Exactly. Even though those markets weren't these, like, gigantic world-changing technology shifts the way that AI is — that's how you learn. Yeah. That's right. You want to pave the way to future opportunities.
You can't wait until the opportunity is sitting in front of you to reach out for it. And so you have to anticipate. You know, our job as CEOs is to look around corners and anticipate where opportunities will be someday. And even if I'm not exactly sure what and when, how do I position the company to be near it? To be just standing kind of near, under the tree, so we can make a diving catch when the apple falls.
You guys know what I'm saying? Yeah. But you've got to be close enough to make the diving catch. Yeah. And rewind to 2015 and OpenAI — if you hadn't been laying this groundwork in the data center, you wouldn't be powering OpenAI right now. But the idea that computing will be mostly done away from the viewing device, that the vast majority of computing will be done away from the computer itself — that insight was good.
In fact, cloud computing, everything about today's computing, is about separation of that. And by putting it in a data center, we can overcome this latency problem — I mean, you're not going to overcome the speed of light. The speed of light, end to end, is only 120 milliseconds or something like that. It's not that long. From a data center to— An entire area. Anywhere. Yeah. And so we could literally go across the planet. Right. So if you could solve that problem to approximately something like that—
I forget the number, but it's 70 milliseconds, 100 milliseconds. It's not that long. And so my point is, if you could remove the obstacles everywhere else, then the speed of light should be perfectly fine. And you could build data centers as large as you like, and you could do amazing things. And this little tiny device that we use as a computer, or your TV as a computer, or whatever computer — they can all instantly become amazing. And so that insight 15 years ago was a good one.
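As a rough sanity check on the latency figures quoted here, the back-of-the-envelope sketch below estimates worst-case propagation delay through optical fiber across half the planet. The fiber speed and route length are assumed round numbers, not figures from the episode.

```python
# Rough sanity check on the ~70-120 ms latency numbers above.
# Assumed round numbers: light in optical fiber travels at roughly 2/3 of c,
# and a worst-case route is about half the Earth's circumference.

SPEED_OF_LIGHT_KM_S = 300_000          # c in vacuum, km/s
FIBER_FRACTION_OF_C = 2 / 3            # typical slowdown inside optical fiber
HALF_EARTH_CIRCUMFERENCE_KM = 20_000   # ~antipodal great-circle distance

fiber_speed_km_s = SPEED_OF_LIGHT_KM_S * FIBER_FRACTION_OF_C        # ~200,000 km/s
one_way_ms = HALF_EARTH_CIRCUMFERENCE_KM / fiber_speed_km_s * 1000  # ~100 ms
round_trip_ms = 2 * one_way_ms                                      # ~200 ms

print(f"one-way: ~{one_way_ms:.0f} ms, round trip: ~{round_trip_ms:.0f} ms")
```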
So speaking of the speed of light: InfiniBand. Yeah. David's like begging me to go here. I can feel it. You totally saw that InfiniBand would be way more useful way sooner than anyone else realized, acquiring Mellanox. I think you uniquely saw that this was required to train large language models. And you were super aggressive in acquiring that company. Why did you see that when no one else saw it? Well, there were several reasons for that.
If you want to be a data center company, building the processing chip isn't the way to do it. A data center is distinguished from a desktop computer or a cell phone not by the processor in it. A desktop computer and a data center use the same CPUs, use the same GPUs, approximately — right, very close. And so it's not the chip, it's not the processing chip, that describes it. It's the networking of it, it's the infrastructure of it.
It's how the computing is distributed, how security is provided, how networking is done, so on and so forth. And so those characteristics are associated with Mellanox, not Nvidia. And so the day that I concluded that Nvidia really wants to build computers of the future, and computers of the future are going to be data centers — that if we want to be a data-center-oriented company, then we really need to get into networking. And so that was one.
The second thing is the observation that whereas cloud computing started in hyperscale, which is about taking commodity components, a lot of users, and virtualizing many users on top of one computer, AI is really about distributed computing, where one job, one training job, is orchestrated across millions of processors.
And so it's the inverse of hyperscale, almost. And the way that you design a hyperscale computer, with off-the-shelf commodity Ethernet — which is just fine for Hadoop, it's just fine for search queries, it's just fine for all of those things — But not when you're sharding a model across— Not when you're sharding a model across, right. And so that observation says that the type of networking you want to do is not exactly Ethernet.
And the way that we do networking for supercomputing is really quite ideal. And so the combination of those two ideas convinced me that Mellanox was absolutely the right company, because they were the world's leading high-performance networking company. And we had worked with them in so many different areas in high-performance computing already. Plus, I really like the people. The Israel team is world class. We have some 3,200 people there now.
And it was one of the best strategic decisions I ever made. When we were researching, particularly Part III of our Nvidia series, we talked to a lot of people. And many people told us the Mellanox acquisition is one of, if not the, best of all time by any technology company. Yeah, I think so too. Yeah. And it's so disconnected from the work that we normally do. It was surprising to everybody. But framed this way, you were standing near where the action was.
Yeah. So you could figure out, as soon as that apple sort of becomes available to purchase — like, oh, LLMs are about to blow up. I'm going to need that. Everyone's going to need that. I think I know that before anyone else does. Yeah. You want to position yourself near opportunities. You don't have to be that perfect, you know? You want to position yourself near the tree. And even if you don't catch the apple before it hits the ground, so long as you're the first one to pick it up.
You want to position yourself close to the opportunities. And so that's kind of a lot of my work: positioning the company near opportunities, and having the company have the skills to monetize each one of the steps along the way so that we can be sustainable. What you just said reminds me of a great aphorism from Buffett and Munger, which is: it's better to be approximately right than exactly wrong. Yeah. There you go. Yeah. That's a good one. That's a good one. While we have time—
Yeah. All right, listeners. We are here to tell you about a company that literally couldn't be more perfect for this episode. Crusoe. Yes. Crusoe, as you know by now, is a cloud provider built specifically for AI workloads and powered by clean energy. And Nvidia is a major partner of Crusoe.
Their data centers are filled with A100s and H100s. And as you probably know, with the rising demand for AI, there's been a huge surge in the need for high-performing GPUs, leading to a noticeable scarcity of Nvidia GPUs in the market. Crusoe has been ahead of the curve and is among the first cloud providers to offer Nvidia's H100s at scale. They have a very straightforward strategy: create the best AI cloud solution for customers, using the very best GPU hardware on the market
that customers ask for, like Nvidia's, and investing heavily in an optimized cloud software stack. Yep. To illustrate, they already have several customers running large-scale generative AI workloads on clusters of Nvidia H100 GPUs, which are interconnected with 3,200 gigabit InfiniBand and leverage Crusoe's network-attached block storage solution.
And because their cloud is run on wasted, stranded or clean energy, they can provide significantly better performance per dollar than traditional cloud providers. Yep. Ultimately, this results in a huge win-win. They take what is otherwise a huge amount of energy waste that causes environmental harm and use it to power massive AI workloads. And it's worth noting that through their operations, Crusoe is actually reducing more emissions than they would generate.
In fact, in 2022, Crusoe captured over 4 billion cubic feet of gas, which led to the avoidance of approximately 500,000 metric tons of CO2 emissions. That's equivalent to taking about 160,000 cars off the road. Amazing. If you, your company or your portfolio companies could use lower cost and more performance infrastructure for your AI workloads, go to crusocloud.com slash acquired. That's CRUSOEcloud.com slash acquired or click the link in the show notes.
I want to move away from Nvidia, if you're okay with it, and ask you some questions — since we have a lot of founders that listen to this show — sort of advice for company building. The first one is: when you're starting a startup, in the earliest days, your biggest competition is that you don't make anything people want. Like, your company is likely to die just because people don't actually care as much as you do about what you're building.
In the later days, you actually have to be very thoughtful about competitive strategy. I'm curious, what would be your advice to companies that have product-market fit, that are starting to grow, that are in interesting growing markets? Where should they look for competition and how should they handle it? Well, there are all kinds of ways to think about competition. We prefer to position ourselves in a way that serves a need that usually hasn't emerged.
I've heard you or others at Nvidia, I think, use the phrase "zero billion dollar market." That's exactly right. It's our way of saying there's no market yet, but we believe there will be one. And usually when you're positioned there, everybody's trying to figure out, why are you here? Right — when we first got into automotive, it was because we believed that in the future, the car is going to be largely software.
And if it's going to be largely software, a really incredible computer is necessary. So when we positioned ourselves there, most people, I still remember one of the CTOs told me, you know what, cars cannot tolerate the blue screen of death. I don't think anybody can tolerate that, but it doesn't change the fact that someday every car will be a software-defined car. I think, you know, 15 years later, we're largely right.
So oftentimes there's non-consumption, and we like to navigate our company there. And by doing that, by the time the market emerges, it's very likely there aren't that many competitors shaped that way. And so we were early in PC gaming, and today Nvidia is very large in PC gaming. We reimagined what a design workstation would be like.
And today, just about every workstation on the planet uses Nvidia's technology. We reimagined how supercomputing ought to be done, and who should benefit from supercomputing — that we would democratize it — and look, today Nvidia's accelerated computing is quite large. We reimagined how software would be done, and today it's called machine learning; and how computing would be done — we call it AI. And so we reimagined these kinds of things, trying to do that about a decade in advance.
And so we spend about a decade in zero billion dollar markets. And today I spend a lot of time on Omniverse — Omniverse is a classic example of a zero billion dollar business. There's like 40 customers now. Yeah, there's BMW, you know. Yeah, it's cool. So let's say you do get this great 10-year lead, but then other people figure it out, and you've got people nipping at your heels.
What are some structural things that someone who's building a business can do to sort of stay ahead? You can just keep your pedal to the metal and say, we're going to outwork them, and we're going to be smarter — and that works to some extent, but those are tactics. What strategically can you do to sort of make sure that you can maintain that lead? Oftentimes, if you created the market, you ended up having, you know, what people describe as moats.
Because if you build your product right, and it's enabled an entire ecosystem around you to help serve that end market, you've essentially created a platform. Sometimes it's a product-based platform. Sometimes there's a service-based platform. Sometimes there's a technology-based platform.
But if you were early there, and you were mindful about helping the ecosystem succeed with you, you ended up having this network of networks, and all these developers, and all these customers who are built around you. And that network is essentially your moat. And so, you know, I don't love thinking about it in the context of a moat.
And the reason for that is because you're now focused on building stuff around your castle. I tend to like thinking about things in the context of building a network. And that network is about enabling other people to enjoy the success of the final market. You know, that you're not the only company that enjoys it, but you're enjoying it with a whole bunch of other people, including me.
I'm so glad you brought this up, because I wanted to ask you — in my mind, at least, and it sounds like in yours too, Nvidia is absolutely a platform company, of which there are very few meaningful platform companies in the world. I think it's also fair to say that when you started, for the first few years, you were a technology company and not a platform company.
Every example I can think of of a company that tried to start as a platform company fails. You've got to start as a technology company first. When did you think about making that transition to being a platform? Like, your first graphics cards were technology. There was no CUDA. There was no platform.
Yeah. What you observed is not wrong. However, inside our company, we were always a platform company. And the reason for that is because, from the very first day of our company, we had this architecture called UDA. It's the UDA of CUDA. CUDA is Compute Unified Device Architecture. That's right. And the reason for that is because what we essentially did in the beginning — even though Riva 128 only had computer graphics — the architecture described accelerators of all kinds.
We would take that architecture and developers would program to it. In fact, Nvidia's first strategy, business strategy, was we were going to be a game console inside the PC. And a game console needs developers, which is the reason why Nvidia, a long time ago, one of our first employees was a developer relations person. And so it's the reason why we knew all the game developers and all the 3D developers and we knew whatever. So was the original business plan to like...
Sort of like to build Direct... Yeah, compete with Nintendo and Sega with, like, PCs. The original Nvidia architecture was called Direct NV — Direct Nvidia. And DirectX was an API that made it possible for the operating system to directly— Draw to hardware. Yeah, hardware. But DirectX didn't exist when you started Nvidia, right? And that's what made your strategy wrong for the first company. We had Direct Nvidia.
Which, in 1995 — well, you know, DirectX came out. So this is an important lesson. We were always a developer-oriented company. The initial attempt was: we will get the developers to build on Direct NV, and then they'll build for our chips, and then we'll have a platform. Exactly. And what played out is Microsoft already had all these developer relationships. So you learned the lesson the hard way of, like — yeah, yeah — we just got a lot of
what Microsoft did back in the day. They're like, oh, that could be a developer platform. We'll take that. Thank you. No, but they had a lot. They did it very differently, and they did a lot of things right. We did a lot of things wrong. But you were competing against Microsoft in the 90s. I mean, that's— Yeah, it's like trying to go up against Nvidia today.
Yeah, it's a lot different. But I appreciate that. But we were nowhere near competing with them. If you look now, when CUDA came along, there was OpenGL, there was DirectX. But there's still another extension, if you will, and the extension is CUDA. And that CUDA extension allows a chip that got paid for by running DirectX and OpenGL to create an install base for CUDA. Yeah. And so that's the strategy. You were so
militant — and I think, from our research, it really was you being militant — that every Nvidia chip will run CUDA. Yeah, if you're a computing platform, everything's got to be compatible. We are the only accelerator on the planet where every single accelerator is architecturally compatible with the others. None has ever existed like that. There are literally a couple of hundred million — right, 250 million, 300 million — installed base of active CUDA
GPUs being used in the world today. And they're all architecturally compatible. How would you have a computing platform if, you know, NV30 and NV35 and NV39 and NV40 were all different? Right? Across 30 years, it's all completely compatible. And so that's the only non-negotiable rule in our company. Everything else is negotiable. I mean, I guess CUDA was a rebirth of UDA — but understanding this now, UDA going all the way back.
Yeah, it really is — all the way back to all the chips you've ever had. Yeah, yeah. In fact, UDA goes all the way up to all of our chips today. Wow. For the record, for any of the founding CEOs that are listening, I've got to tell you, while you were asking that question — what lessons would I impart? I don't know. I mean, the characteristics of successful companies and successful CEOs, I think, are fairly well described by a whole bunch of them.
I just think starting successful companies is insanely hard. It's just insanely hard. And when I see these amazing companies getting built, I have nothing but admiration and respect, because I just know that it's insanely hard. And I think that everybody did many similar things. There are some good, smart things that people do. There are some dumb things that you can do. But you could do all the right, smart things and still fail.
You could do a whole bunch of dumb things — and I did many of them — and still succeed. So obviously, that's not exactly right. I think skills are the things that you can learn along the way. But at important moments, certain circumstances have to come together. And I do think that the market has to be one of the agents to help you succeed. It's not enough, obviously, because a lot of people still fail.
Do you remember any moments in Nvidia's history where you're like, we made a bunch of wrong decisions, but somehow we got saved? Because it takes the sum of all the luck and all the skill in order to succeed. Do you remember any moments where you're like— I just thought that what you started with Riva 128 was spot on. Riva 128, as I mentioned — the number of smart decisions we made, which are smart to this day.
How we design chips is exactly the same to this day. Because, gosh, nobody had ever done it that way back then. And we pulled every trick in the book out of desperation, because we had no other choice. Well, guess what? That's the way things ought to be done. And now everybody does it that way. Everybody does it, because why should you do things twice if you can do it once? Why tape out a chip seven times if you could tape it out one time?
And so it's the most efficient, the most cost-effective, the most competitive — speed is technology, speed is performance, time to market is performance. All of those things apply. So why do things twice if you can do them once? Yeah. And so with Riva 128 we made a lot of great decisions in how we spec products, how we think about market needs and the lack thereof, how we judge markets, and all of this. We made some amazingly good decisions. Yeah, our backs were against the wall. We only had one more shot to do it.
But once you pull out of the stops and you see what you're capable of, why would you put stops in next time? Exactly. Like it goes to keep stops out all the time. That's right. Every time. That's right. Is it fair to say though maybe on the luck side of the equation, thinking back to 1997, that that was the moment where consumers tip to really, really valuing 3D graphical performance in games. Oh, yeah. So for example, luck. Let's let's have what luck.
What if Carmack hadn't decided to use acceleration? Because, remember, Doom was completely software rendered. And the Nvidia philosophy was that although general-purpose computing is a fabulous thing that's going to enable software and IT and everything, we felt that there were applications that wouldn't be possible, or would be too costly, if they weren't accelerated, and that they should be accelerated. And 3D graphics was one of them, but it wasn't the only one.
It just happens to be the first one, and a really great one. And I still remember the first times we met John; he was quite emphatic about using CPUs, and the software renderer was really good. I mean, quite frankly, if you look at Doom, the performance of Doom was really hard to achieve even with accelerators at the time. You know, if you didn't filter, if you didn't have to do bilinear filtering, it did a pretty good job.
The problem with Doom, though, was you needed Carmack to program it. Yeah, you needed Carmack to program it, exactly; it was a genius piece of code. But nonetheless, software renderers did a really good job. But if he hadn't decided to go to OpenGL and accelerate Quake, frankly, you know, what would have been the killer app that put us here, right?
And so Carmack and Sweeney, between Quake and Unreal, created the first two killer applications for consumer 3D. Yeah, and so I owe them a great deal.
I want to come back real quick, too. You know, you told these stories and you're like, well, I don't know what founders can take from that. I actually do think, you know, if you look at all the big tech companies today, perhaps with the exception of Google, they did all start, and I understand this now about you, by addressing developers, by planning to build a platform and tools for developers. You know, all of them: Apple, that is... well, and I guess with AWS, that's how Amazon started.
So I think that actually is a lesson. To your point, that won't guarantee success by any means, right? But that'll get you hanging around the tree if the apple falls. Yeah, as many good ideas as we have, we don't have all the world's good ideas, and the benefit of having developers is you get to see a lot of good ideas.
Yeah, yeah. Well, as we start to drift toward the end here, we spent a lot of time on the past, and I want to think about the future a little bit. I'm sure you spend a lot of time on this, being on the cutting edge of AI. You know, we're moving into an era where the productivity that software can accomplish when a person is using it can massively amplify the impact and the value that they're creating, which has to be amazing for humanity in the long run.
In the short term, it's going to be inevitably bumpy as we figure out what that means. What do you think some of the solutions are, as AI gets more and more powerful and better at accelerating productivity, for all the displaced jobs that are going to come from it? Well, first of all, we have to keep AI safe, and there are a couple of different areas of AI safety that are really important, obviously.
In robotics and self-driving cars, there's a whole field of AI safety, and we've dedicated ourselves to functional safety and active safety and all kinds of different areas of safety: when to apply human-in-the-loop, when is it okay for a human not to be in the loop, how do you get to a point where, increasingly, the human doesn't have to be in the loop but is still largely in the loop.
In the case of information safety, obviously bias, false information, and appreciating the rights of artists and creators, that whole area deserves a lot of attention. And you've seen some of the work that we've done: instead of scraping the internet, we partnered with Getty and Shutterstock to create a commercially fair way of applying generative AI.
In the area of large language models and the future of increasingly greater-agency AI, clearly the answer, for as long as it's sensible, and I think it's going to be sensible for a long time, is human in the loop. The ability for an AI to self-learn and improve and change out in the wild in digital form should be avoided. We should collect data, we should curate the data, we should train the model, we should test the model, validate the model before we release it into the wild again.
So, human in the loop. There are a lot of different industries that have already demonstrated how to build systems that are safe and good for humanity: obviously the way autopilot works for an airplane, the two-pilot system, air traffic control, redundancy and diversity. All of the basic philosophies of designing safe systems apply as well in self-driving cars and so on and so forth. I think there are a lot of models of creating safe AI, and I think we need to apply them.
With respect to automation, my feeling is that, and we'll see, it is more likely that AI is going to create more jobs in the near term. The question is what's the definition of near term. And the reason for that is that the first thing that happens with productivity is prosperity. And with prosperity, when the companies get more successful, they hire more people, because they want to expand into more areas.
And so the question is, if you think about a company and say, okay, if we improve the productivity, then they need fewer people. Well, that's only true if the company has no more ideas, and that's not true for most companies. If you become more productive and the company becomes more profitable, usually they hire more people to expand into new areas.
And so long as we believe that there are more areas to expand into, that there are more ideas in drug discovery, there are more ideas in transportation, there are more ideas in retail, there are more ideas in entertainment, there are more ideas in technology, so long as we believe that there are more ideas, the prosperity of the industry, which comes from improved productivity, results in hiring more people to go after more ideas.
Now, if you go back in history, we can fairly say that today's industry is larger than the world's industry a thousand years ago. And the reason for that is because, obviously, humans have a lot of ideas. And I think that there are plenty of ideas yet for prosperity, and plenty of prosperity that can be gained from productivity improvements. But my sense is that it's likely to generate jobs. Now, obviously, net generation of jobs doesn't guarantee that any one human doesn't get fired.
Okay, I mean, that's obviously true. And it's more likely that someone will lose a job to someone else, some other human that uses an AI, you know, and not to an AI, but to some other human that uses an AI. And so I think the first thing that everybody should do is learn how to use AI so that they can augment their own productivity. And every company should use AI to augment its own productivity so that it gets more prosperity and hires more people.
And so I think jobs will change. My guess is that we'll actually have higher employment. We'll create more jobs. I think industries will be more productive. And many of the industries that are currently suffering from a lack of labor, of workforce, are likely to use AI to get themselves out of that and get back to growth and prosperity. So I see it a little bit differently, but I do think that jobs will be affected. And I'd encourage everybody just to learn AI.
This is appropriate: there's a version of something we talk about a lot on Acquired. We call it the Moritz Corollary to Moore's Law, after Mike Moritz from Sequoia. Sequoia was the first investor in our company. Yeah, of course. The great story behind it is that when Mike was taking over for Don Valentine, along with Doug Leone, he was sitting and looking at Sequoia's returns, and he was looking at fund three or fund four.
I think it was fund four, maybe, that had Cisco in it. He was like, how are we ever going to top that? I can't. Don's going to have us beat. We're never going to beat that. And he thought about it and he realized that, well, as compute gets cheaper, it can access more areas of the economy; because it gets cheaper, it can get adopted more widely. Well, then the markets that we can address should get bigger. Yeah. And your argument is basically that AI will do the same thing.
Exactly. Exactly. I just gave you exactly the same example: productivity doesn't result in us doing less. Productivity usually results in us doing more. Everything we do will be easier, but we'll end up doing more. Because we have infinite ambition. The world has infinite ambition. And so if a company is more profitable, they tend to hire more people to do more. That's true. Technology is a lever, and the place where the idea kind of falls down is the assumption that we would be satisfied.
Humans have never-ending ambition. So humans will always expand, consume more energy, and attempt to pursue more ideas. That has always been true of every version of our species. Now is a great time to share something new from our friends at Blinkist and Go1 that is very appropriate to this episode. Yes. So, personal story time. I, a few weeks ago, was scouring the web to find Jensen's favorite business books, which was proving to be difficult.
I really wanted Blinkist to make Blinks of each of those books so you could all access them. And I think I found one or two in random articles, but that just wasn't enough. So finally, before I gave up as a last resort, I asked an AI chatbot specifically Bard to provide me a list and cite the sources of Jensen's favorite business books. And miraculously, it worked. Bard found books that Jensen had called out in public forums over the past several decades.
So if you click the link in the show notes or go to Blinkist.com slash Jensen, you can get the Blinks of all five of those books, plus a few more that Jensen specifically told us about later in the episode. Yes. And we also have an offer from Blinkist and Go1 that goes beyond personal learning. Blinkist has handpicked a collection of books related to the themes of this episode.
So tech innovation, leadership, the dynamics of acquisitions, these books offer the mental models to adapt to a rapidly changing technology environment. And just like all other episodes, Blinkist is giving acquired listeners an exclusive 50% discount on all premium content. This gives you key insights from thousands of books at your fingertips all condensed into easy to digest summaries.
And if you're a founder, a team lead or an L&D manager, Blinkist also includes curated reading lists and progress tracking features all overseen by a dedicated customer success manager to help your team flourish as you grow.
Yes. So to claim the whole free collection, unlock the 50% discount, and explore Blinkist's enterprise solution, simply visit Blinkist.com slash Jensen and use the promo code Jensen. Blinkist and their parent company Go1 are truly awesome resources for your company and your teams as they develop from small startup to enterprise.
Our thanks to them, and seriously, this offer is pretty awesome. Go take them up on it. We have a few lightning round questions we want to ask you, and then we have a very fun ending, if you think that's okay. We'll open with an easy one, based on all these conference room names we see around here: favorite sci-fi book? I've never read a sci-fi book before. No. Come on. Yeah. So what's with the obsession with Star Trek? Do you just watch the TV show? Favorite sci-fi TV show? Star Trek is my favorite.
Yeah, Star Trek is my favorite. It's not Voyager? No, they're on their way in. That's a good, that's a good comment. Voyager is an excellent one. Yeah. What car is your daily driver these days, and a related question: do you still have the Supra? Oh, it's one of my favorite cars and also one of my favorite memories. You guys might not know this, but Lori and I got engaged one Christmas, and we drove back in my brand-new Supra and we totaled it. We were this close to the end. Thank God you didn't.
But nonetheless, it was my fault. It wasn't the Supra's fault. Mark it: the one time when it wasn't the Supra's fault. I loved that car. I'm driven these days, for security reasons among others, but I'm driven in the Mercedes EQS. It's a great car. Yeah, great car. Thanks. Using Nvidia technology? Yeah, we're in the central computer. Sweet. I know we already talked a little bit about business books, but one or two favorites that you've taken something from?
Clayton Christensen, I think; his series is the best. I mean, there's just no two ways about it. And the reason for that is because it's so intuitive and so sensible. It's approachable. I read a whole bunch of them; I read just about all of them. I also really enjoyed Andy Grove's books. They're all really good. They're awesome. Favorite characteristic of Don Valentine?
Grumpy, but endearing. And what he said to me when he decided to invest in our company: if you lose my money, I'll kill you. Of course he did. And then over the course of the decades, the years that followed, when something nice was written about us in the Mercury News, it seemed like he'd write on it in crayon: good job, Don. He'd just write it right over the newspaper, good job, Don, and mail it to me.
I wish I'd kept them. But anyways, you could tell he was a real sweetheart. He cared about the companies. I bet. He's a special character. Yeah, he is. What is something that you believe today that 40-year-old Jensen would have pushed back on and said, no, I disagree? There's plenty of time. Yeah, there's plenty of time. If you prioritize yourself properly, and you make sure that you don't let Outlook be the controller of your time, there's plenty of time.
Plenty of time in the day. Plenty of time to achieve things. Just don't do everything. Prioritize your life. Make sacrifices. Don't let Outlook control what you do every day. Notice I was late to our meeting. And the reason for that is, by the time I looked up, I thought, oh my gosh, Ben and David are waiting, they're already here. We have time. Yeah, exactly. And it didn't stop this from being great.
No, but you have to prioritize your time really carefully, and don't let Outlook determine that. Love that. What are you afraid of, if anything? I'm afraid of the same things today that I was at the very beginning of this company, which is letting the employees down. You know, you have a lot of people who joined your company because they believe in your hopes and dreams, and they've adopted them as their hopes and dreams. And you want to be right for them. You want to be successful for them.
You want them to be able to build a great life, as well as help you build a great company, and be able to build a great career. You want them to be able to enjoy all of that. And these days, I want them to be able to enjoy the things I've had the benefit of enjoying, all the great success I've enjoyed. I want them to be able to enjoy all of that. And so I think the greatest fear is that you let them down.
At what point did you realize that you weren't going to have another job, that this was it? I just don't change jobs. You know, if it weren't for Chris and Curtis convincing me to do Nvidia, I would still be at LSI Logic, probably running a project today. I'm sure of it. Wow. Really? Yeah. Yeah. I'm sure of it. I would keep doing what I was doing. And at the time that I was there, I was completely dedicated and focused on helping LSI Logic be the best company it could be, and I was LSI Logic's best ambassador. I've got great friends to this day that I've known from LSI Logic. It's a company I loved, and that I love dearly today. I know exactly why I went: the revolutionary impact it had on chip design and system design and computer design. In my estimation, it was one of the most important companies that ever came to Silicon Valley, and it changed everything about how computers were made.
It put me at the epicenter of some of the most important events in the computer industry. It led me to meeting Chris and Curtis and Andy Bechtolsheim and Jon Rubinstein and some of the most important people in the world, and Frank, who I was with the other day, and, I mean, the list goes on. And so LSI Logic was really important to me, and I would still be there. Who knows what LSI Logic would have become if I were still there. Right. And so that's kind of how my mind works. Powering the AI of the world. Yeah, exactly. I mean, I might be doing the same thing I'm doing today. But until I'm fired, this is my last job. I love it. Remembering back to part one of our series on Nvidia, I got the sense that LSI Logic might have also changed your perspective and philosophy about computing too.
The sense we got from the research was that right out of school, when you first went to AMD, right? Yeah. You believed a version of the Jerry Sanders "real men have fabs" idea, like you need to do the whole stack, you've got to do everything, and that LSI Logic changed you. What LSI Logic did was realize that you can express transistors and logic gates and chip functionality in high-level languages.
By raising the level of abstraction, in what is now called high-level design, a term coined by Harvey Jones, who's on Nvidia's board and who I met way back in the early days of Synopsys. During that time there was this belief that you can express chip design in high-level languages, and by doing so you could take advantage of optimizing compilers and optimization logic and tools and be a lot more productive.
That logic was so sensible to me, and I was 21 years old at the time, and I wanted to pursue that vision. Now, frankly, that idea happened in machine learning. It happened in software programming. I want to see it happen in digital biology, so that we can think about biology in a much higher-level language. Probably a large language model would be the way to make it representable.
That transition was so revolutionary. I thought that was the best thing that ever happened to the industry, and I was really happy to be part of it. I was at ground zero, and so I saw one industry revolutionize another industry. And if not for LSI Logic doing the work that it did, and Synopsys shortly after, would the computer industry be where it is today? It's really, really terrific. I was at the right place at the right time to see all of that. That was super cool.
And it sounded like the CEO of LSI Logic put a good word in for you with Don Valentine. I didn't know how to write a business plan. Which, it turns out, is not actually important. No. It turns out that making a financial forecast that nobody knows is going to be right or wrong turns out not to be that important. But the important things, a business plan probably could have teased out. The heart of a business plan ought to be much, much shorter, and it forces you to condense:
What is the true problem you're trying to solve? What is the unmet need that you believe will emerge? And what is it that you're going to do that is sufficiently hard that when everybody else finds out it's a good idea, they're not going to swarm it and make you obsolete? And so it has to be sufficiently hard to do. There are a whole bunch of other skills that are involved, in product and positioning and pricing and go-to-market and all that kind of stuff.
But those are skills, and you can learn those things easily. The stuff that is really, really hard is the essence of what I just described. I did that part okay. But I had no idea how to write a business plan. And I was fortunate that Wilf Corrigan was so pleased with me and the work that I did when I was at LSI Logic. He called Don Valentine and told Don, you know, invest in this kid, he's going to come your way.
And so I was, you know, set up for success from that moment, and we got it off the ground. As long as you didn't lose his money. I think Sequoia did okay. I think we're probably one of the best investments they've ever made. Have they held through today? The VC partner is still on the board, Mark Stevens. Yeah, yeah. All these years, the two founding VCs are still on the board, Sutter Hill and Sequoia. Yeah, Tench Coxe and Mark Stevens. I don't think that ever happens.
Yeah. We are singular in that circumstance, I believe. They've added value this whole time. They've been inspiring this whole time. They gave great wisdom and great support. But they also were so good, they haven't bailed yet. They've been entertained, you know, by the company, absorbed by the company, and enriched by the company. And so they stayed with it. And I'm really grateful. Well, in that vein, our final question for you. It's 2023, the 30-year anniversary of the founding of NVIDIA.
If you were magically 30 years old again today, in 2023, and you were going to Denny's with your two best friends, the two smartest people you know, and you're talking about starting a company, what are you talking about starting? I wouldn't do it. I know. And the reason for that is really quite simple. Ignoring the company that we would start, first of all, I'm not exactly sure.
The reason why I wouldn't do it, and it goes back to why it's so hard, is that building a company and building Nvidia turned out to have been a million times harder than I expected it to be, than any of us expected it to be.
And at that time, if we had realized the pain and suffering, and just how vulnerable you're going to feel, and the challenges that you're going to endure, the embarrassment and the shame, and, you know, the list of all the things that go wrong, I don't think anybody would start a company. Nobody in their right mind would do it. And I think that that's kind of the superpower of an entrepreneur. They don't know how hard it is. And they only ask themselves, how hard can it be?
And to this day, I trick my brain into thinking, how hard can it be? Because you have to. Still? Yeah, you wake up in the morning. Yep. How hard can it be? Everything that we're doing, how hard can it be? Omniverse, how hard can it be? It doesn't sound like you're planning to retire anytime soon, though. No, no, no. So you could choose to say, like, whoa, this is too hard. The trick is still working. You're still working. Yeah, I'm still enjoying myself immensely.
And I'm adding a little bit of value. But that's really the trick of an entrepreneur: you have to get yourself to believe that it's not that hard, because it's way harder than you think. And so if I take all of my knowledge now and go back and say I'm going to endure that whole journey again, I think it's too much. It is just too much.
Do you have any suggestions on any kind of support system, or a way to get through the emotional trauma that comes with building something like this? I have family and friends and all the colleagues we have here. I'm surrounded by people who've been here for 30 years. Chris has been here for 30 years. Jeff Fisher has been here 30 years. Dwight's been here 30 years. Jonah and Brian have been here 25-some years, probably longer than that. And Joe Greco has been here 30 years.
I'm surrounded by these people that never one time gave up, and they never one time gave up on me. And that's the entire ball of wax. And to be able to go home and have your family be fully committed to everything that you're trying to do, and through thick and thin, they're proud of you and proud of the company. You kind of need that. You need the unwavering support of people around you.
The Jim Gaithers and the Tench Coxes and Mark Stevens and Harvey Jones and all the early people of our company, the Bill Millers. They not one time gave up on the company and on us. And you kind of need that. You know, not kind of need it; you need it. And I'm pretty sure that almost every successful company and entrepreneur that has gone through difficult challenges had that support system around them. I can only imagine how meaningful that is. I mean, I know how meaningful that is in any company.
But for you, I feel like the Nvidia journey is particularly amplified on these dimensions. You know, you went through two, if not three, 80%-plus drawdowns in the public markets, and you have investors who've stuck with you from day one through all of that. That must be, just, so much support. Yeah, yeah, it is incredible. And you hate that any of that stuff happened, and most of it is out of your control. But an 80% fall is an extraordinary thing, no matter how you look at it.
And I forget exactly, but I mean, we traded down to about a couple, two or three, billion dollars in market value for a while, because of the decision we made in going into CUDA and all that work. And your belief system has to be really, really strong. You have to really, really believe it and really, really want it. Otherwise, it's just too much to endure. I mean, everybody's questioning you. The employees aren't questioning you, but employees have questions.
People outside are questioning you. And it's a little embarrassing. When your stock price gets hit, it's embarrassing no matter how you think about it, and it's hard to explain. And so there's no good answer to any of that stuff. CEOs are human, and companies are built of humans. And these challenges are hard to endure. And that's an apt echo of a comment on our most recent episode about you all, where we were talking about the current situation at Nvidia.
I think for any other company, this would be a precarious spot to be in, but for Nvidia, and this is kind of the point, you guys are familiar with these large swings in amplitude. Yeah. The thing to keep in mind is, at all times, what is the market opportunity that you're engaging? That informs your size. I was told a long time ago that Nvidia can never be larger than a billion dollars. Obviously, that was an underestimation, an under-imagination, of the size of the opportunity.
And it may be the case that no chip company can ever be that big. But if you're not a chip company, then why does that apply to you? And this is the extraordinary thing about technology right now: technology is a tool, and a tool is only so large. What's unique about our current circumstance today is that we're in the manufacturing of intelligence. We're in the manufacturing of work. That's AI. And the world of tasks, doing work, productive generative AI work, generative intelligent work,
that market size is enormous; it's measured in trillions. One way to think about it: if you build a chip for a car, how many cars are there, and how many chips would they consume? That's one way to think about it. However, if you build a system that, whenever needed, assists in the driving of the car, what's the value of an autonomous chauffeur, every now and then? And so now, obviously, the problem becomes much larger, the opportunity becomes larger.
What would it be like if we were to magically conjure up a chauffeur for everybody who has a car? How big is that market? Obviously, that's a much, much larger market. And so what we've discovered, what Nvidia has discovered, is that by separating ourselves from being a chip company, by building on top of the chip and becoming an AI company, the market opportunity has grown by probably a thousand times.
You know, don't be surprised if technology companies become much larger in the future, because what you produce is something very different. And that's kind of the way to think about how large your opportunity can be, how large you can be. It has everything to do with the size of the opportunity. Yep. Well, Jensen, thank you so much. Thank you. Ooh, David, that was awesome. So fun. Well, listeners, we want to tell you that you should totally sign up for our email list.
Of course, you get notifications when we drop a new episode, but we've added something new: we're including little tidbits that we learn after releasing the episode, including listener corrections. And we've also been teasing what the next episode will be. So if you want to play the little guessing game along with the rest of the Acquired community, sign up at acquired.fm slash email. Our huge thank you to Blinkist, Statsig, and Crusoe.
All the links are in the show notes to learn more and get the exclusive offers for the Acquired community from each of them. You should check out ACQ2, which is available in any podcast player. As these main Acquired episodes get longer and come out once a month instead of once every couple of weeks, it's a little bit more of a rarity these days. We've been up-leveling our production process, and that takes time.
Yes. ACQ2 has become the place to get more from David and I, and we've just got some awesome episodes coming up that we are excited about. If you want to come deeper into the acquired kitchen, become an LP acquired.fm slash LP. Once every couple of months or so, we'll be doing a call with all of you on Zoom just for LPs to get the inside scoop of what's going on in acquired land and get to know David and I a little bit better. And once a season, you'll get to help us pick a future episode.
So that's acquired.fm slash LP. And anyone can join the Slack at acquired.fm slash Slack. God, we've got a lot of things now, David. I know, the hamburger bar on our website is expanding. Expanding! I know, that's how you know we're becoming enterprise. We have a mega menu, a menu of menus, if you will. What is the Acquired solution that we can sell? That's true. We've got to find that. All right.
With that listeners, acquired.fm slash Slack to join the Slack and discuss this episode, acquired.fm slash store to get some of that sweet merch that everyone is talking about. And with that listeners, we will see you next time. We'll see you next time. Who got the truth? Is it you? Is it you? Is it you? Who got the truth now?