The Golden Age of Simulation with Chris Anderson & James Currier

Oct 14, 2020 · 36 min · Ep. 47


Transcript

You need to do millions, thousands of iterations of this with lots of variations. Now that's impossible to do in the real world. It's just too expensive. But in simulation, you push a button and it's done the next morning.

And you're starting to see this in biology: simulating a cell, simulating an organism, you know, simulating cities, simulating populations, epidemiology, simulating the weather. Simulating complexity is a big problem, but we now have the opportunity to not just throw more processing at it, but start to use AI to sort of fill in the gaps. This is Kristen O'Brien, Managing Editor at NFX.

In today's episode, we listen in as NFX partner James Currier talks with one of the most influential people on the front lines of tech, Chris Anderson, former editor-in-chief at Wired, and now founder and CEO of drone software company 3DR. Chris is someone who sees the future in a way most others do not. In this episode, he gives us a view into the Golden Age we're entering, where AI and simulation combine to change how breakthrough companies are born. This is the NFX podcast.

So today, I'm talking with one of the most generative people in Silicon Valley, Chris Anderson. He was famously editor-in-chief of Wired for over a decade, an insanely influential time in tech. Plus, he then went on to write the books The Long Tail, Free, and Makers. But Chris is not only an author. He is also a person who builds from first principles, and that is, of course, one of the highest compliments I can pay.

And the other highest compliment I can pay is to call him a hobbyist, which he is, and I believe it is the hobbyists who have been the source of many of the greatest innovations in Silicon Valley. And if Silicon Valley is the hobbyist and the crazy uncle in the garage for the country and the world, it is Chris Anderson who is the hobbyist and crazy uncle in the garage for us. So I use these terms with the highest admiration.

So in 2009, actually, Chris was thinking about building drones with his kids, and he then learned all about drone technology. He did a couple of articles in Wired about it, got really interested in it, and started DIY Drones, a community of over 40,000 people who were hobbyists building UAVs before everybody else was. And from that, he learned that there was an opportunity to build a drones company.

And so he started a company called 3D Robotics, which is now known as 3DR. I was an angel investor, and Chris and I have been in a book club now for about 6 years. So we've been friends a long time. But, you know, the 3D Robotics company, the 3DR company, is now focused on enterprise drone software for construction, engineering, government agencies, and more. And we're going to talk about what he's learned from that moment of building that company.

He's raised $178,000,000 over the last 10 years. He's become an important part of the ecosystem of what's going on with autonomy and drones and whatnot. We're gonna get to that. And so today, we've got you here, Chris. And last week, when we caught up, we realized that there are many things we could cover today, but we settled on simulations. And so that's what we're gonna dig into today. And so let's jump into that.

And I guess we should start with where you are and where you've been in the last few years with 3DR, and what you've learned in the drone area and where that's gone. I'd love to hear more about that. Tiny correction, actually. All that drone stuff started in 2007, not 2009. And it's weird because, you know, back then, first of all, just putting the letters DIY in front of a, you know, military-industrial thing is kind of provocative to the point of being ludicrous.

That was kind of the point. You know, imagine if I said DIY nuclear power plant. Remember, drones at that point were being classified as cruise missiles. Export control, all that. So it was purposely provocative in the sense that it made people think. It also was purposely enabling, which is to say DIY means that you can do it yourself. And, you know, people would say, well, how? And show me. And, you know, the rest is history.

What's interesting is that, you know, by the time we got to 2009, we'd already had, you know, a company. By the time we got to 2012, we were making hundreds of thousands of autopilots and, you know, other technology necessary to make drones. And by the time we got to 2016, we were making millions of drones. You know, the technology was largely sorted out.

At that point. And I, you know, somewhat, let's say, unwisely started to pontificate about the sky being dark with drones, you know, because we'd hit this moment where we were doing bottoms-up, not top-down, aerospace. We were doing it based on smartphone technology. We weren't taking a, you know, 747 and taking out the pilot. We were taking a smartphone and adding propellers. So I'm like, look, there's an empty space between the ground, you know, and orbit. We have satellites in orbit.

We have sensors on the ground, and it's just really expensive to get sensors, you know, 100 to 400 feet off the ground, or 100 feet off the ground, because anything that carries a pilot, anything that carries a person, is dangerous. So, you know, what would it take to measure the planet under the clouds at high resolution, anywhere, anytime, for free? It was clear that smartphone technology was gonna be the way to do it. Internet-connected smartphone technology was the way to do it.

And that just completely changed the sense of who should do it. It's not gonna be an aerospace company. It's gonna be people who, you know, are building the internet, people who are using smartphones, people using Arduino, and then the maker movement. And so we demonstrated that. The good news is that we accomplished it. You know, you can buy a fully autonomous drone from Walmart for, you know, a couple hundred dollars, and it's got an amazing camera and it's connected to the internet. It's great.

The bad news is that the sky is still not dark with these things. And the reason is that what we thought was a technology problem or an economic problem turned out to be a regulatory problem. And so what we discovered is that Silicon Valley is all about innovating in the gray space. The, you know, classic rule of Silicon Valley is ask forgiveness, not permission: Uber, Airbnb, PayPal, Facebook, etcetera. We all did it. And that works great. And, you know, and then it fails.

Okay. That's bad. But it's actually almost worse to succeed. Because once you've innovated in the gray space and you prove that people do want it and it can be done, then you can't just keep innovating in the gray space. You then have to kind of sit down with the regulators in the closed rooms and write the regs so it's no longer gray space. So it now becomes white space, which means you now have to, you know, what happens when the revolutionaries win? Will they become part of the establishment?

And so, you know, we spent this first 5 or 10 years pushing the technology to prove that small, cheap, safe drones were a thing.

We then had to spend the next 10 years, and we're halfway through it, working with the FAA to let these things reach their potential by being allowed to fly beyond visual line of sight, one-to-many, over people, at night, all those things that we aren't currently allowed to do. Because, you know, frankly, right now, if it's one pilot, one drone, we haven't achieved anything. That's like a radio-controlled airplane from the 1950s. These are flying robots.

They're supposed to be not just, you know, one-to-many, but maybe none-to-many. They're supposed to be autonomous vehicles doing what humans don't have to do. And we're not allowed to fly beyond visual line of sight. We're not allowed to fly without a single operator watching it the whole time. We're not allowed to be autonomous. And that's where we're focused right now: basically moving the regulatory regime into the 21st century by sort of helping them understand what's technically possible today.

We're now going from 6-month Silicon Valley cycles to 10-year DC cycles. It's tough, and it is emotionally the opposite of being an entrepreneur. And yet this is the deal. If you want to change the world, you know, you start by demonstrating, and then you have to work with them. So 3DR is still functioning. You're still selling into military, construction.

You're doing some very sophisticated technologies, but at the same time, you're spending a lot of time on Zoom calls with people in DC trying to work through the details of what the regs should be for companies like yours and for the future of sort of drones in America, and then the world by extension. Yeah. Unfortunately, at the end of last year, we sold off our software business to Esri, which is a big geospatial giant.

And so that was a good exit for all involved and allowed me to just focus on this regulatory stuff with a small team. What we got the FAA to accept was that these things are low risk, that they're small, they're light, they're not carrying anybody, and that low-risk vehicles don't have to bear the same regulatory burden as a manned jetliner or other manned vehicle.

And so they basically invented a new sort of, you know, what's called performance-based certification, where we don't have to certify every nut and bolt and lock it down for decades the way traditional aviation is. But instead, we can just sort of say, look, you know, this is the way it works. This is the way it performs. We demonstrate it performs that way, and they certify it based on that demonstration. And as a result, we can then modify it.

We can update the software and all that without having to go through certification again. And it's also largely to address the 737 MAX fiasco. Yes. So talk us through that. I mean, that's been a real shift. The Boeing MAX fiasco. The fiasco is an implicit consequence of the way aircraft are certified. So back in the fifties, when the current certification regime was created, aircraft were mostly mechanical, and they had a lifespan of 30 years, 40 years, or even more.

And so, basically, you could lock them down. You could say, these nuts and these bolts and this engine, these bits are not gonna change. And, you know, they come through a factory. We can control their production. They have serial numbers. They're not gonna change. And it takes 10 years to certify, but that's okay because the useful life of these things is generations. What happened since the fifties is that aircraft went from essentially mechanical to essentially software.

And so the vast majority of complexity in a modern airplane is the software. And the problem is that software, you can't lock it down for 30, 40 years. Unfortunately, they had to. So the 737 MAX is based on a core of software that was originally certified in the 1960s and '70s. So imagine if you, you know, think of, like, the IRS running COBOL.

Imagine if you have, you know, software where there's a core that you cannot change, because it's too expensive to recertify, and then everything on top of that is patches. So you have a tiny core from the sixties and seventies and then 40 years of patches on top. And you can just imagine how unmanageable that is. Basically, they got lost in the complexity. They no longer had the ability to understand what the software did because it was just a pile of patches. So this was bound to happen.

So the FAA wants to experiment with a new way, but they wanna experiment in the low-risk area, you know, not starting with jetliners, but starting with small drones, build up a lot of data, and then extend it to manned aviation. Right. And so here's the United States.

With a strategic asset known as the Boeing Corporation, right, which the government is then saying, you have to take these things out of the sky, really hurting Boeing, hurting Boeing's reputation, but they have to do it to protect human life. They're now in a situation where they need to help Boeing and everybody else figure out a new way to deal with software-based aircraft. Drones are the most extreme example of that because, as you said, they're basically just iPhones with propellers.

And so you've now found yourself in the middle of this debate between the old way and the brand-new way of drones. And at the same time, they're trying to change the wheels on the car for the traditional aviation system as they're moving it forward. Exactly. Yeah. I mean, by the way, the same is true of the SEC having to deal with modern, you know, fintech, and the FDA having to deal with modern, you know, medical devices or drug discovery, etcetera.

So all these regulatory agencies are basically having to try to catch up with technology's progress. And they're always gonna be slow, but, you know, hopefully they're open-minded to doing things differently. And sometimes it takes a crisis, like the 737 MAX, to drive them. And sometimes there's enlightened, you know, leadership that drives it. But in our case, it's not just recognizing that an aircraft is mostly software.

It's also recognizing an aircraft is connected to the internet, where the FAA really had no concept of a connected vehicle. It's also recognizing that, you know, you might operate the aircraft with an iPad. You know, in other words, that you'd use a consumer device, like a phone or a tablet, as the interface. Well, they've never had this notion. How do you regulate iOS now? Do you regulate Android? And what about AI? They've never dealt with what they call non-deterministic systems before.

AI is by definition non-deterministic. How do you allow AI, with all the good that it brings, into a system which was by definition locked down? The whole notion of a learning system is one that isn't locked down. It evolves over time. The FAA has never had to deal with that before. So all of this, like, the modern world has basically rushed into these regulatory chambers saying, guys, 21st century, let's get with it. And they have, on one hand, you know, a mandate for safety.

That's the only mandate the FAA or the FDA has. It's not economic development, not innovation. It's only safety. And, you know, they also have the curse of a fantastic safety record. We've never had safer air travel than we have today. So now you're saying to these regulators, you're gonna have to do things differently that might be a little riskier, and you're gonna do so largely for innovation and economic benefits, which, by the way, are not part of your mandate.

So you can see how challenging that is, and it basically has to come from the White House or Congress to give them a new mandate to do this. Well, it's true. I mean, you see this in dealing with large corporations as well. People say, well, I'm dealing with GM or dealing with Google. No, you're not. You're dealing with an individual in a department of a division of Google. And when we're talking regulatory, we're talking about people whose only job is to defend.

Yeah. They get no upside from any of the benefit of these things. They only get downside from the downside of these things. Exactly. And so what you're saying is you've gotta go to the CEO to get them to change the mandate to incorporate the concept of the benefits that you could get from drones, or from, you know, a more varied sort of aircraft landscape than we have today. But that's hard to do. Yeah. Yeah. Because they've got their charts.

They've got their bonuses coming to them for doing a good job of keeping people alive. And I hear friends talking about, well, you know, autonomous cars are coming. It's right around the corner. And I'm like, well, no. That's only true if people were rational and said, look, we lose 30 or 40,000 people on the roads every year to cars. And so if we only lost 20,000 people to autonomous cars this year, we're really improving. But that's not how people think. Absolutely.

And when you think about, you know, autonomous cars, you probably think, oh, well, it's a technical problem. Well, it's actually a legal problem. I mean, you need tort reform to make sure that, you know, when you turn on autonomy, does that mean that the liability transfers from the operator to the manufacturer? And likewise, you know, for the regulators, it's the National Highway Traffic Safety Administration, I guess. They don't have a mandate to have faster cars or better traffic. They're all about safety.

So who's gonna drive it? You know, if you were a founder 30 years ago, you would make a new product, an innovative product, a more disruptive product. You would then take that product, and it would largely fit within the regulatory systems that you needed to go through in order to turn it from a gray space into a white space. But now, not only do you have the challenge of doing all that, but you're running into a regulatory regime that is still twenty, thirty years out of date.

And isn't really up to date on the technology you're using. Yeah. And I mean, the only silver lining, one silver lining in this, is that because we did innovate in the gray space early on, we're not just talking hypothetically. We can put a million drones in the air, and we can make the argument: well, you know, look, here's the data. Here's what actually works. They haven't killed anybody. That at least is a little bit of an existence proof. The other thing we have is that it's a global market.

And so if we can't do it here, we'll do it somewhere else. You'll do it in New Zealand or Australia or Singapore or China. And, you know, although the FAA may not care about, you know, competitiveness with other countries, Congress does and the White House does. And so we can get the attention of those who can help. Got it. And where do you see AI and simulation playing in here? Both of those are terra incognita from a regulatory perspective.

It is the golden age of AI, and it's the golden age of simulation, for reasons that I'll get to in a second. Both of them are hard to integrate into the existing regime. We have to both innovate on the technology side to show what the benefits are, and then we have to find some way to package them up palatably to the regulators.

I think everyone understands why it's the golden age of AI, you know, due to Nvidia CUDA cores, cloud processing, TensorFlow, and all that, but it may not be as obvious why it's the golden age of simulation. So just to kind of remind everybody what simulation is: what you do is, let's say you're doing a car or a drone or something like that. You still make the vehicle.

You simulate the environment the vehicle's in, and then you run lots of variations of different environments and different scenarios. And you test your code. You can train your AI, you know, on that. You can, you know, instrument everything so you can see what's going on. Obviously, nobody gets hurt because it's all virtual. The simulation environments are often built off of something like Unity or another video game environment.

In the back end, you're basically running a simulated version of your vehicle and, probably, your code. The reason it's the golden age of simulation is related to the same things that are going on with AI. You know, first of all, the environments, you know, the game engines, are just photorealistic, absolutely amazing. I mean, if you've played a modern game, you know, it's really hard to distinguish between that and reality. Like the latest Microsoft Flight Simulator.

Exactly. The latest Microsoft Flight Simulator being a perfect example, but, you know, just about any video game these days is amazing. So we now have that level of fidelity of the real world. Secondly, we have the ability to run those engines in the cloud. So you're not limited to the power of a single desktop. And then, thirdly, you know, you have reality capture. So you mentioned Microsoft Flight Simulator, which just came out, and we're actually working with Microsoft on this.

Microsoft Flight Simulator is fantastic, groundbreaking, not just because of the engine, but because the environment is real. It comes from Bing Maps satellite views, which they've reconstructed, you know, algorithmically into three-dimensional buildings. And if you fly over your house, it doesn't just look like a satellite picture of your house. It is your house, you know, from any angle, full geometry, a SimCity kind of thing.

And so, you know, that was one of the hardest things about simulation: creating realistic environments. And now they're automatic. You get the real world sort of created for you for free, because through a process called photogrammetry, we can take the satellite imagery and just turn it into what we call a digital twin of anywhere, including, you know, cities, airports, roads, the works.

I mean, finally, you know, once you have these photorealistic environments that are populated by reality capture of the real world, you can then run massively parallel simulations and do reinforcement learning. You can do machine learning in those simulations. And reinforcement learning is something where we basically say, you know, here's a goal: your vehicle is gonna try to stay on the road. It gets punished if it goes off the road.

It gets rewarded if it stays on the road. You throw in more challenging environments: rain, fog, snow, whatever. And, you know, it learns this. You need to do millions, thousands of iterations of this with lots of variations before it really learns. Now, that's impossible to do in the real world. It's just too expensive. In simulation, you push a button and it's done the next morning. Yeah. You've gotta attach it to some sort of big cloud, like AWS. Exactly. Exactly. Azure. And running in parallel.
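
As a concrete illustration of the loop being described, here is a minimal, hedged sketch in Python of reward-shaped episodes swept in parallel over randomized scenarios. The toy "physics," reward values, and scenario fields are all invented for the example; this is not any particular company's simulator or training stack.

```python
# Minimal sketch: reward the agent for staying on the road, penalize it for
# leaving, randomize the environment each episode, and run episodes in parallel.
# All names and values here are illustrative stand-ins.
import random
from multiprocessing import Pool

def run_episode(seed: int) -> float:
    rng = random.Random(seed)
    # Randomized scenario: weather, wind, road friction.
    rain = rng.random()                    # 0 = dry, 1 = downpour
    wind = rng.uniform(0.0, 20.0)          # gust strength, m/s
    friction = rng.uniform(0.3, 1.0)

    lateral_offset = 0.0                   # meters from lane center
    total_reward = 0.0
    for _ in range(500):                   # simulated time steps
        steering = -0.1 * lateral_offset   # stand-in "policy": steer back to center
        drift = wind * 0.001 * (1.5 - friction)       # stand-in "physics"
        noise = rng.gauss(0.0, 0.01 * (1.0 + rain))
        lateral_offset += drift + steering + noise
        # Reward shaping: on the road is rewarded, off the road is punished.
        total_reward += 1.0 if abs(lateral_offset) < 1.5 else -10.0
    return total_reward

if __name__ == "__main__":
    with Pool() as pool:                   # thousands of variations, in parallel
        rewards = pool.map(run_episode, range(2_000))
    print(f"mean reward over {len(rewards)} randomized episodes: "
          f"{sum(rewards) / len(rewards):.1f}")
```

The point is the shape of the workflow: randomize the world, score the behavior, and run thousands of these overnight on cloud machines instead of flying thousands of real test flights.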

A bunch of them in parallel. Got it. So what you're saying is that, look, I mean, we've had Microsoft Flight Simulator and other things since, what, 1982 or something. And I think this latest release, which, you know, feels so real, is the 11th major entry. It's like we're there now. Microsoft Flight Simulator is the first game that I know of, and I could be wrong about this, maybe our fact checkers will check, the first game I know of where the environment is all the real world.

It isn't just captured from satellites. It's got real-time data. So, like, during those hurricanes, you could actually fly out into the hurricanes as they were happening, experiencing them photorealistically as if you were there. So I don't think we've ever had this: literally a twin of our world in both space and time, both land and air and everything. I think, as far as I can tell, it's the entire world. I mean, you know, Nepal, Africa, you name it. It's there.

We were talking about doing that in Second Life back in 2005, but we knew it was gonna be a long-term thing. But I guess it's here now, 15 years later. So now that that is here, now that we have the processing power, we've got the simulation, we've got the ML, what happens next? What happens next is that all of our autonomous devices, all of our robots, get a lot smarter, a lot faster.

So right now, they've been gated by real-world constraints, you know, putting them together in the first place, you know, sensors, the quality of the sensors, you know, just people's time. We all do simulation to a degree, but the simulations are always, you know, limited in fidelity. Which is to say, we might simulate, you know, what the camera sees, but we're not simulating the weather. We might simulate the weather, but, like, just a consistent wind, not gusts and turbulence.

And so now that we're able to basically fractally simulate the real world, simulate environments, the fidelity, the sort of accuracy, of these simulators starts to asymptotically approach, you know, 1. It becomes almost indistinguishable, such that as far as the robot is concerned, it doesn't know if it's in the real world or a simulator.

As a matter of fact, Amazon just today patented a simulator for their drones, where they basically take a physical drone, stick it on a stand, and then stick screens around it, and then simulate the real world, what it would see, and the drone wouldn't know the difference. Basically, maybe you take the props off or whatever, so it isn't actually flying around. It's like virtual reality for robots.

You know, you put the goggles on the robot's head, if you will. And it just doesn't know any better. And what this means is that now that we have the ability to do that kind of high-fidelity training, we can assume the machine learning, the AI, that comes out of that is going to work in the real world. And the closer the simulators get to the real world, the more confidence we have that the training is gonna apply to the real world.

And now we can start to stick these things in environments they've never seen before and just assume they will work. And that is, you know, almost by definition, that is intelligence. Intelligence is not, if I move a robot arm and tell it to pick up this pencil and put it down over there. That's one thing. If I put a robot arm in an unknown environment and say, hey, you know, find that ruler and bring it to me. It's never seen a ruler before.

It doesn't know the environment. It doesn't even know what those words mean. It's not gonna do it. And so intelligence is really the ability to enter a novel environment and then use some kind of first principles of what things mean, or how to get around, to solve a problem. And humans are really good at that, and robots are really bad at that. And I think the secret is going to be high-fidelity simulation of the real world with massively parallel training. Right.

And so we were able to do that with AlphaGo against, you know, the game of Go, because that is a simulated, very simplistic world with simplistic rules, but it had to figure out what the rules were and then get smarter.

And you're saying now that we've been able to model the real world in the same way that we're able to model a Go game digitally, we are now gonna be able to start putting that type of intelligence against this real data set, and we're gonna start seeing rapid accelerations of autonomy. Yeah. Exactly. By the way, the documentary AlphaGo is just extraordinary. It is very tempting to watch AI beat chess, watch AI beat Go, and think, well, that's it.

You know, Skynet's self-aware. And we forget that there is one big difference between those games and the real world: in those games, the AI has perfect information about the game. It knows exactly where the pieces are. It's a very limited environment. It's basically two-dimensional. In the real world, you have imperfect information about the real world.

I mean, right now, if you just look around your room, you have, you know, a certain level of granularity of knowledge about the world around you, but, you know, you don't get down to the subatomic level. You can't see spectrums that your eyes can't perceive. You can't see where something is going to be, or where it was 10 minutes ago. You have temporal resolution issues, etcetera. And so the real world's noisy and our sensing ability is imperfect.

And as a result, you know, we have to compensate for that with predictions based on pattern matching and things like that. Robots, you know, have all the same problems. They have imperfect information, and that's the reason why we have all these debates where, you know, Tesla says, well, cameras are enough. And then, you know, Waymo says, we need cameras and lidar and radar and other information.

And basically, you know, the simple answer is the more information you can have about the environment around you, the better job you're gonna do predicting it. And that is really where simulation can stand up, because the simulation can start to give the AI perfect information about its surroundings, even more perfect than it can get with real sensors, and at least start to prepare us for an era where, as the sensors get better, the simulation will be there waiting for them. Right.

And so, you know, in the past, the argument against simulation has been that there's not enough granularity and that the sort of sim-to-real gap was too big. And that has been shrinking very rapidly in the last 24 months, and will keep shrinking over the next 36 months, probably. Yeah. The sim-to-real gap is really the big problem we all deal with right now, which is, you know, if you train something in sim and you take it to the real world, it should work. It often doesn't.

And that's because we didn't emulate the real world, you know, well enough. The way we can resolve that is to just emulate the real world better and better, but at a certain point, actually, you then realize that your problem isn't creating a synthetic world. Your problem is actually measuring the physical world and testing against it. Like, right now with the FAA, we want to introduce simulation. When the FAA regulates, when they certify a vehicle, they say, okay.

Well, you need to test it under this range of temperatures, from 0 degrees to 100 degrees. You need to test at different air pressures, different humidities, different winds, not just, like, you know, prevailing wind, but gusts and turbulence. And then you need to move the center of gravity around. And it's just like, you need to do this matrix of all the tests, and there's, like, 10,000 of them, which is unaffordable in real planes. They talk about chasing wind.
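
Just to make the combinatorics concrete, here is a rough, purely illustrative sketch of how such a test matrix multiplies. The parameter values below are invented for the example, not the FAA's actual test points.

```python
# Back-of-the-envelope sketch of a certification test matrix. Values are
# illustrative only; the point is how fast the combinations multiply.
from itertools import product

temperatures_c = range(0, 101, 20)                             # 6 temperature points
pressures_hpa  = (850, 950, 1013, 1050)                        # 4 air pressures
humidities_pct = (10, 50, 90)                                  # 3 humidity levels
winds          = ("calm", "steady", "gusting", "turbulent")    # 4 wind profiles
cg_positions   = ("forward", "nominal", "aft")                 # 3 centers of gravity

test_matrix = list(product(temperatures_c, pressures_hpa,
                           humidities_pct, winds, cg_positions))
print(f"{len(test_matrix)} test conditions")                   # 6 * 4 * 3 * 4 * 3 = 864
# Add a few more axes (payload, battery state, altitude, ...) and you are quickly
# past 10,000 flights: unaffordable with real aircraft, a batch job in simulation.
```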

You have to, like, fly to the Himalayas to find the wind necessary to, you know, get the right combination of altitude and wind. So simulation gives you an opportunity to just create that synthetically. And then you say, okay, well, how do I know it's right? So I created this incredibly granular wind vector field with all sorts of gusts and turbulence and sort of interplay, you know, around the buildings, and this little eddy that comes around these two buildings that are right next to each other.

It's like, great. How accurate is that? Well, now I've got another problem. Like, how do I measure the real world to that granularity? I mean, do I set up little anemometers, you know, a field of anemometers in the city? Or do I just, you know, fly a drone and see how it's affected, and then use that as a kind of its own measurement? And right now, we're actually stuck on measuring the real world to check, to see whether the simulator is accurate.

And we're doing a lot of our testing right now at a wind farm in North Dakota, because the one place that's pretty good at measuring winds at different places, that's kind of a measurer of winds, is a wind farm. Interesting. I mean, one of the things you said to me, I think, before was: to what extent can simulation be believed, enough that you'd bet your life on it? Well, right now, you've mentioned two things.

Are our simulation algorithms to create the world correct? Are our algorithms for our vehicle correct? And then I guess the third thing is, can we measure the real world accurately? Like, all three of those things are in question. Probably in order of difficulty: creating a, you know, complex, high-resolution synthetic world is probably the easiest. Now, if you can do it algorithmically, the way Microsoft Flight Simulator does, easier yet. But, you know, we're pretty good at that.

The video game industry has helped us a lot there. The next easiest thing is to create an infinite number of variations of that world. And there's something called generative adversarial networks, and that's a way to, you know, hallucinate worlds that it's never seen before, but that are nevertheless realistic. And so we start by creating one realistic world, then we permute that world through the GANs to create, you know, a lot of variations.

And imagine if you wanna test, like, you know, self-driving cars. You wanna not just create different roads and weather conditions and lighting, but also, you know, pedestrian combinations and pedestrian-car combinations and dogs and birds and, you know, trash and all this kind of stuff. So you wanna really see, like, more variation than the real world presents. We're not necessarily there yet, but again, the ability to create variation is theoretically unlimited. Okay. That's all good.
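
A full generative adversarial network is beyond a short snippet, but the spirit of what's described here, taking one realistic base world and permuting it into many plausible variants, can be sketched with plain domain randomization. Every field and value below is an illustrative assumption, not a real scenario format.

```python
# Sketch: permute one base scenario into many randomized variants.
import random
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Scenario:
    weather: str = "clear"
    lighting: str = "noon"
    pedestrians: int = 0
    dogs: int = 0
    debris_items: int = 0

def permute(base: Scenario, rng: random.Random) -> Scenario:
    """Return one randomized variant of the base world."""
    return replace(
        base,
        weather=rng.choice(["clear", "rain", "fog", "snow"]),
        lighting=rng.choice(["dawn", "noon", "dusk", "night"]),
        pedestrians=rng.randint(0, 40),
        dogs=rng.randint(0, 3),
        debris_items=rng.randint(0, 10),
    )

rng = random.Random(0)
variants = [permute(Scenario(), rng) for _ in range(100_000)]
print(variants[0])
```

A GAN would go further: instead of sampling from hand-picked lists, it would hallucinate whole scenes it has never been shown while keeping them statistically realistic.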

Then you have to check its fidelity, see how it corresponds to the real world. And that's where it gets a little tricky. First of all, if you create a synthetic variation of the world in the simulator, you won't find that exact one in the real world. It was synthetic. So you don't have one-to-one correspondence. And secondly, you know, nothing's perfect.

And, you know, there's going to be some variation, some combination, some bad thing that happens that, you know, you never anticipated, the simulation didn't anticipate, and someone's gonna die because of that. And that's just inevitable. And, you know, back to what I said before: right now, 30,000 people die on the roads due to car accidents. Let's say we bring it down to 300, down by, what, two orders of magnitude. Is that okay?

Well, you know, you and I know that the headlines will be "robots kill 300 people." But even if you bring it down to 30, you know, at that point, well, that's kinda where we are right now. I mean, Tesla's killed, what, 2 or 3, maybe, at this point, and they're still running. So there's probably a number, you know, and the general rule of thumb is 1/100th as many people. If the robots kill 1/100th as many people as the humans, that's probably gonna be acceptable.

It's gonna be rocky, and it wouldn't hurt to be better, to be at 1/1000th, but, sure, 1% is probably socially acceptable over time. So that's the ratio that's required for us to move forward with the innovation. Exactly.

Unfortunately, what's gonna happen is that it's not gonna be like 100% of people stop driving and are entirely replaced by robots which kill 1% as many. What's gonna happen is you're gonna have 90% of people still driving, 10% having robots, and you're still gonna have, you know, tens of people dying, in addition to the now 25,000 people who are being killed on the roads by humans, and it doesn't necessarily look like a massive step change in safety overnight.

Mhmm. Mhmm. Unless someone wants to make it an issue because they're trying to build their journalistic reputation. Exactly. Or, you know, you can take some place like Singapore, and they could just decide to ban human drivers. And see how it goes. And then you say, well, sure, it works in Singapore because everything works in Singapore. You know, bring it to Jakarta, and then let's talk. Got it.

So given this explosion in our ability to simulate, and in the accuracy, what do you think could transpire in terms of startups, or in terms of opportunities or businesses? Or is this all gonna be really helping with the regulatory environment for those businesses that are pretty far along, like a Tesla or a Boeing? Yeah. Well, the good news is that everybody uses simulation right now. All the self-driving car companies have really excellent simulations.

All the aircraft companies, you know, the aerospace companies, have simulations that are pretty good. And there certainly is an opportunity. There are startups right now who offer simulation, and simulation as a service, for others. I still think it's really early days for a couple of reasons. One, the cost of creating environments is still too high. And so I think that, you know, what Microsoft did with Microsoft Flight Simulator, to procedurally generate, you know, rich, realistic environments.

It's probably the way forward. So that's a breakthrough there. Now, it's not granular enough, because it's using satellite data. You don't really have the photos to build something more realistic. And so, you know, drones can do better, actually. I mean, rather than satellites, you know, hundreds of miles up, you can actually use drones to do the photogrammetry, and that's actually the business we just sold to Esri.

That uses drones at 100 feet to do reality-capture photogrammetry, and that's pretty much perfect. I mean, that's down to 1-centimeter resolution, which is sort of good enough. There, the imagery that you're using for the simulation corresponds to what the vehicle would actually see from its own sensors. So algorithmically creating environments is a big opportunity right there. And I think you're gonna see a lot of progress in that domain.

The next one is gonna be about simulating the vehicles themselves at a really high level of detail. And there you have tools like MATLAB and Simulink from MathWorks, variations of the same thing, which have been used in industry for decades to simulate industrial processes. And, you know, they're very good at simulating motors or simulating valves and simulating mechanical elements.

It's quite difficult to simulate an entire system, in the same way it's difficult to simulate an entire body, you know, because it's so complex. And so I think, you know, we have to come up with smarter ways to simulate systems that don't involve hand-crafting every element of it. And you're starting to see this in biology: simulating a cell, simulating an organism, you know, simulating cities, simulating populations, epidemiology, simulating the weather. Simulating complexity is a big problem.

We've been at it, weather forecasting in particular, for decades using supercomputers. But we now have the opportunity to not just throw more processing at it, but start to use AI to sort of fill in the gaps, the things we don't know, to sort of hallucinate the bits that are realistic, and to generate synthetic systems that maybe don't have exactly the same nuts and bolts, where you don't have to model every single, you know, alloy, but that kind of work the same.
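
Here is a toy sketch of that "use AI to fill in the gaps" idea: run the expensive simulator at a few affordable sample points, fit a cheap surrogate model to those runs, then query the surrogate for all the conditions you never simulated. The "simulator" below is a stand-in function and the surrogate is an off-the-shelf regressor; it illustrates the pattern under those assumptions, not anyone's production pipeline.

```python
# Surrogate-model sketch: learn a cheap approximation of an expensive simulation.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def expensive_simulation(x: np.ndarray) -> np.ndarray:
    """Stand-in for hours of supercomputer time (weather, a cell, whatever)."""
    return np.sin(3 * x[:, 0]) * np.exp(-x[:, 1]) + 0.1 * x[:, 2]

rng = np.random.default_rng(0)
X_train = rng.uniform(0, 1, size=(200, 3))        # only 200 runs we can afford
y_train = expensive_simulation(X_train)

surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
surrogate.fit(X_train, y_train)

X_query = rng.uniform(0, 1, size=(100_000, 3))    # conditions we never simulated
y_pred = surrogate.predict(X_query)               # milliseconds instead of hours
rmse = np.sqrt(np.mean((y_pred - expensive_simulation(X_query)) ** 2))
print(f"surrogate RMSE on unseen conditions: {rmse:.4f}")
```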

So that's between the environment on one side and the systems on the other side, both super complex, both needing scalable algorithmic approaches to get there. And I think that what we've been talking about today is the direction for both of them. Got it. And this word simulation, do we need a new word now that we're entering this new phase? We've got the simulation hypothesis about the world we live in. We've had this word a long time.

Is there something that you would propose that would be a different word, or are we gonna struggle along with this simulation word? There is a sort of dark side of simulation, right? What's that? Well, just that a simulation could lie.

Like, if you decided to tweak the algorithms, you could simulate something and then present data that was sort of a lie, to get your way, to get through this next phase, to get your funding done, you know, because remember, our audience here at NFX is really early-stage founders. And I'm just thinking through, you know, you could simulate something as an early-stage founder and come up with some results you could show to investors.

You could show it to the regulatory agencies and say, it's gonna be fine. But by simulating it, isn't there some magic in the background where you're tweaking the algorithms to tell the story you wanna tell? That's a good point, and it certainly applies to AI as well. You know, AI will give you an answer. Big data, you know, your data science, will give you an answer. It looks scientific. Is it right? You know, garbage in, garbage out.

Other words you could use are things like, you know, synthetic. It could be called a model; a simulation is a model of the world. It could be called data science, in the sense that it's basically data-based, etcetera.

I think what we're recognizing is that we've gone from the, you know, the naive sense that AI will answer all of our questions, or that data science will answer all our questions. We went from a kind of a paucity of data, where we were like, well, if only we had enough data, we'd be able to get to the answers. Now we have an abundance of data, but we've realized that's not enough. We don't really understand the data, can't necessarily, you know, interpret the data.

We're not even sure we're using the right algorithms. So, you know, we've gone through the kind of, you know, shiny data phase where data scientists are considered gods. And I think the data scientists are trying to tell us: guys, depending on interpretation, we could get this really wrong. The data could be wrong. Our interpretation could be wrong. It's not a silver bullet. You can't push a button and get answers.

We really need humans in the loop to try to interpret this and check it throughout. You know, I think the pendulum has overswung towards overconfidence in the models and the data and the simulation.

And now we're gonna have to get, you know, hard-nosed: what's real here, what's really predictive here, and just negotiate it. You know, I'm reminded of that phrase: where you stand on any issue is based on where you sit. And, you know, where the simulations come out might be based, in some places, on where you're sitting and what you wanted to say. And we're gonna have to deal with the fact that humans are humans, and we all have our various interests at play when these things happen.

So when you think about these opportunities, this new vista, this new landscape of simulation that is now emerging, what's gonna be possible: is there something that you might say to early-stage founders that would, you know, help them figure out what to build?

I've been fascinated to watch the, you know, the step change in progress with the COVID vaccine work: not just the ability to sequence the virus quickly, the ability to generate candidates quickly, the ability to, you know, spin up production in parallel, and the ability to test, you know, rapidly. All those things, and that's to say nothing of the regulators agreeing to regulate rapidly.

So all that's been very exciting, but, you know, all those things work better with models, statistical, mathematical, computer models, to drive them. It's very exciting to think about our ability to model biology, to model, you know, organisms, to model disease progression, you know, right down to the protein level, neurology, etcetera, using this. And I did write a piece in Wired back 13 years ago, which provocatively and probably wrongly was titled "The End of Theory."

You know the phrase, correlation is not causation. The scientific method is about: I have a theory, it's falsifiable, so I'm gonna test that theory, run an experiment. If the experiment falsifies it, great, that means the theory is wrong. If it doesn't falsify it, well, then other people should run different experiments and try to falsify it, and eventually you say, hey, you know, it looks like the theory is right.

You know, in general, if you throw a lot of data into an algorithm and you look for correlations, you'll find correlations. And most of those correlations are spurious.

However, you know, with enough data and enough, you know, wisdom about what correlations might be real, you might identify correlations that are really causal. And you might just sort of say, look, rather than come up with the causal theory first and then test it, you might just come up with correlations: throw data at it, look for correlations wherever you find them. And then once you find the correlations, that'll help you zero in on which ones might ultimately lead to a theory.
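
As a small illustration of both the promise and the trap of that approach, here is a sketch that screens many features for correlation with an outcome. The data is synthetic and purely illustrative: with a thousand pure-noise features, a handful will clear a naive significance threshold by chance, which is exactly why such correlations are a starting point for theory rather than a substitute for it.

```python
# Correlation screening on synthetic data: one real signal, many noise features.
import numpy as np

rng = np.random.default_rng(42)
n_samples, n_noise = 200, 1_000

signal = rng.normal(size=n_samples)                     # one genuinely causal feature
noise = rng.normal(size=(n_samples, n_noise))           # features with no real effect
outcome = 0.5 * signal + rng.normal(size=n_samples)

features = np.column_stack([signal, noise])
corrs = np.array([np.corrcoef(features[:, j], outcome)[0, 1]
                  for j in range(features.shape[1])])

threshold = 1.96 / np.sqrt(n_samples)                   # roughly p < 0.05 for this n
flagged = np.flatnonzero(np.abs(corrs) > threshold)
print(f"features flagged: {len(flagged)} (only feature 0 is real)")
print(f"true feature's correlation: {corrs[0]:.2f}")
```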

So it's kind of data-driven research as opposed to theory-driven research. Right. But you can sort of see that, you know, the assumptions I made there, about more data, more processing power, smarter algorithms, those have only become more true since then.

And I think that then sort of, you know, directs drug discovery and some of the work going on right now on getting data on genetics and proteins and microbiomes, etcetera. It's probably a good time to start applying this. People are doing it, of course, but there's a lot of opportunity there to apply the latest in AI, parallel processing, and data science to these super-abundant data sets and find the intersection between computer science and biology. Couldn't agree with you more.

Spending a lot of time in that space. I love it. So, Chris Anderson, it is always a pleasure, my friend, to chat with you. Thank you so much for coming on, and I wish you all the best. And I look forward to our next conversation. Thank you, James. This is fun. You've been listening to the NFX podcast. You can rate and review this show on Apple Podcasts, and you can subscribe to the podcast on Apple Podcasts, Spotify, Google Podcasts, or wherever you get your favorite podcasts.

For more information on building iconic technology companies visit nfx.com.
