Pushkin. It seems like we've been hearing for, I don't know, ten years now that self-driving cars are two years away. And yes, I know Teslas are amazing, because Tesla owners keep telling me so. But we are clearly not yet at that magic, transformative moment when cars can truly drive themselves. We're not yet in a world where a human driver seems as out of place as a human elevator operator. So the question for today's show is this: what is the most important problem we have to solve for that to happen? What's it going to take to get to a world where I'm heading north on I-5 and I just climb in the back of my car and take a nap? I'm Jacob Goldstein, and this is What's Your Problem, the show where we talk to entrepreneurs and engineers about the future they're going to build once they solve a few problems.
My guest today is Aicha Evans, CEO of the autonomous vehicle company Zoox. Zoox was acquired by Amazon back in twenty twenty, and I wanted to talk to Aicha because Zoox is really all in on this self-driving car dream. They're not going for half measures. They're not going for baby steps. They're trying to build a truly, fully, AI-take-the-wheel self-driving car. So Aicha seemed like the perfect person to talk to about the problems engineers are going to have to solve to build a car that can truly drive itself. We started our conversation talking about the self-driving vehicle that Zoox is building now, which really isn't exactly a car. It looks like a toaster on wheels. It's very boxy. There are sliding doors on either side, and inside there are two bench seats facing each other, and that's kind of it. There's no driver's seat, there's no dashboard, there's no steering wheel.
If you step into our vehicle and you think about driving, we consider that we failed at transporting you. No pedals, no steering wheel. You are not involved in the driving. You're a rider, and we conceived it for you. There's also no front or back, right? No, fully symmetrical. When you think again about the effectiveness and the efficiency of transportation, especially in dense urban environments, imagine pulling into a narrow alley and not having to make a U-turn, just flipping the lights, and the vehicle goes in the other direction. So the thing I like about it, not having a steering wheel, not having a front or back, is just the way the thing looks. The physical thing itself is this manifestation of how all in you are. Right. It's not, let's take a car and make a robot drive it. It's, what does the world look like, what does the thing look like, if there is no driver? Yes. If you become dispassionate about whatever car you have today, a personal car, and you look at it purely from an engineering standpoint, it's architected and designed around the concept of a human driver. Yeah.
What we're doing, by the way, is really, okay: if AI is going to be responsible for the driving amongst other human drivers, how do you architect and design the vehicle so that the AI is the best and safest driver possible? That's our point of view, and then you work backward from there. So how's it going to work? Basically, you have the app, you say, I want to go from here to there. We come, we pick you up, you sit down, you buckle up, push start, that's a safety thing, and then we take you to point B. You unbuckle, step off, and by that time we probably already know about the next passenger. When can I call one? Depends on where you are. So if you're in Las Vegas, fairly soon, much sooner than you think, you'll be able to call one on the Strip. Does that mean this year? Next year? It won't be this year. I can tell you it will have happened by twenty twenty five. Yes. And once we do that, then we'll go city by city. We've already said publicly that we'll go to Las Vegas and then San Francisco. Then we want to move east, but as you're moving east, you have to handle snow. And then we want to be global, and as you go global, you get a whole new set of parameters. The roundabouts in London come to mind.
And is the idea that Zoox vehicles will be like cabs are now? Like, I have a car, and then if I'm in the city and I need to get from one place to another, maybe I'll take a cab. Is it like that? It's like that, with a little bit deeper philosophy, which is, look, let's start with the United States, right? It's around two and a half cars, roughly, per family. We're not saying it'll be zero overnight, obviously you have to be rational, but we can do better than two and a half cars per family. So let's say we get to one, or one and a half. We feel that we will have done something. After the break: the technical problems that Zoox, and for that matter all the other self-driving car companies, are still trying to fix. One surprisingly hard one: how to figure out who goes first when two cars pull up to an intersection at the same time. Aicha calls it the you-go-I-go problem.
Now, let's get back to What's Your Problem. So, okay, the world has been talking about self-driving cars for well over ten years now, and we know that to get to this self-driving-car world, there are going to be regulatory hurdles. There are going to be people worried about safety. I feel like all of that is pretty familiar by now. The thing I really wanted to talk with Aicha about is the technical side. You know, like, what problems exactly do engineers have to solve to build cars that can really drive themselves? All of the companies can drive in the normal, very kind of constrained circumstances. The thing is all of these scenarios that can pop up. How do you deal with them? And by the way, how do you deal with them knowing you have human drivers around you? They have their own learned behaviors and learned expectations as to what's going to come from a human driver. And the etiquette, the etiquette of it too. We're not fully there yet. Tell me about the etiquette. Well, depending on which part of the city, like if you're in more of a neighborhood area versus more of a business area, the behaviors are slightly different in how you approach things. So for example, we call it you-go-I-go. The you-go-I-go is a little bit more assertive on the business side of things versus in a place that's a little bit more residential. So all that is, again, the long tail of scenarios that we have to deal with and be ready for. The you-go-I-go is a really interesting one, because that's not about the formal, like, rules of the road, right? That is very much a cultural thing that's going to vary even from town to town. Right? I've lived different places, and you-go-I-go is totally different in, you know, Brooklyn than it is in Bozeman, Montana. So, like, how's an AI going to figure that out?
This is where training is important, because it'll be a long, long, long, long time before you can just deploy what we call generic AI, which is: you come in for the first time, you drop in, and boom, you know how to do it. So a lot of what we do is learn and train, and that's why, you know, it's called deep learning, for example. And so we do it enough times, in enough forms, that the stack knows what to look for to make the call. And we have those examples today. For example, how we drive in San Francisco versus how we drive on the campus we're on today, it's totally different. We're a lot more assertive in San Francisco, because that's what's expected. The buffers are smaller than they are on the campus, because that's also what's expected. So all that is built into the stack, both with control logic as well as with algorithmic logic based on our training models.
So I'm trying to sort of boil down what you're saying. I mean, it's interesting. Like, on a certain level, you're saying you've solved the kind of big, macro technical problems for computers to drive cars, basically. But you're also saying there are a million edge cases, double-parked cars and bikes doing weird things and who knows what, people are weird, the world is weird, and those you haven't solved, exactly. You're just saying you have to practice more to get it. You just have to do it like that. Why am I unsatisfied by that answer? Because, to be fair, you have to practice. But practice is a feedback loop of doing, yeah, finding errors and figuring out how to deal with the errors, and doing it again. So it's a continuous feedback loop of that, and that is what is left to solve. Can you give me an example of a version of that loop you've just completed, where you found an error and then fixed it? What's an example of that?
Yeah, this one is very personal: my first Zoox ride ever, so that's three and a half years ago. The vehicle saw a double-parked vehicle in front of it, but it's a single lane, so you have, basically, a double yellow line. The vehicle says, oh, the rule says, through the AI stack, that I cannot cross a double yellow line. Right. We would actually stop, disengage, and do it manually and go around it. But over time we've now learned, through all of the DPVs, the double-parked vehicles, we've seen. And so there are many times now, when there's a double-parked vehicle in front of the vehicle, the vehicle is able to look through the sensor pods, look at oncoming traffic, look at everything else, speeds and feeds and everything else around it, and many times now it is able to just smoothly, like a human would do, make the decision itself. That's a good one, because there's a clear rule: you can't cross a double yellow line. We all learn that. But yeah, we also all know, if you're driving along and somebody's double-parked in front of you and you've got to swerve over, and nobody's coming the other way, like, you're allowed to do that. Exactly. So how does an AI figure out when it's okay to do that? So we, through, again, the algorithms, through a lot of training, meaning all the scenarios it's seen over and over again, it's able to say, okay, I looked. And this is where, again, the placement of the sensor architecture is very important, because in our vehicle, since there's a pod at every top corner and you have a two-hundred-and-seventy-degree view, it's able to basically say: hey, I'm looking in front of me. I have plenty of space. There's no car coming in the oncoming lane, and I see that I have space in front of the double-parked vehicle too. There are no pedestrians, there's no bike coming up. Oh yeah, I can do this. And it goes ahead. So you're telling me, basically, like, the AI, the machine, the vehicle already knows how to drive. It just needs to practice, just needs to practice for millions of hours.
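To make that checklist concrete, here's a minimal sketch of the kind of gating logic Evans is describing. It's purely illustrative: Zoox's actual planner is not public, and every field, threshold, and name below is a hypothetical stand-in.

```python
# A toy version of the checks Evans describes before passing a
# double-parked vehicle (DPV) across a double yellow line.
from dataclasses import dataclass

@dataclass
class Scene:
    oncoming_clear_m: float   # clear distance in the oncoming lane, meters
    gap_past_dpv_m: float     # free space beyond the double-parked vehicle
    pedestrians_near: bool    # anyone around the DPV?
    bike_approaching: bool    # bike coming up alongside?
    own_speed_mps: float      # our current speed, meters per second

def may_pass_double_parked(scene: Scene) -> bool:
    """Cross the double yellow only if every human-style check passes."""
    needed_clearance = 20.0 + 2.0 * scene.own_speed_mps  # grows with speed
    return (
        scene.oncoming_clear_m >= needed_clearance
        and scene.gap_past_dpv_m >= 10.0
        and not scene.pedestrians_near
        and not scene.bike_approaching
    )

scene = Scene(oncoming_clear_m=80.0, gap_past_dpv_m=15.0,
              pedestrians_near=False, bike_approaching=False,
              own_speed_mps=5.0)
print(may_pass_double_parked(scene))  # -> True: smoothly go around
```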
It's funny you should say this. I have a sixteen-year-old daughter, so she has started the journey of driving. And you know, at first we were just within a mile of the house, in the neighborhood, then maybe some areas where there are stop signs, and now we're up to, sort of, the supermarket, and very soon we'll be up to going to school. It is the case that human beings can learn to drive pretty well in, like, what, forty hours or so, and computers clearly cannot. Right. And I used to think people were bad drivers. Like, it seems obvious to me, frankly, that people are bad drivers, right? You know, we look at our phones while we're driving. We overrate our driving abilities. We have literal blind spots. On the other hand, Zoox vehicles can see way better than people can, see all the way around, three hundred and sixty degrees. They have, like, the smartest people in the world trying to teach them how to drive. They have millions of hours of practice, and still they're worse than human drivers. So, like, I don't know how to parse that, right? Like, are people actually way better drivers than I thought? Well, so I think that we conflate a lot of things when we talk about driving. So let's go back to my daughter. I dispute a little bit that my child started learning to drive at sixteen. She started learning to drive the first day she was in a car. Yeah, or maybe even the first day she was alive. That's right, seeing physics and seeing the world and understanding people. I mean, that's maybe the most interesting thing for me in this conversation, right? Like, the hard part in teaching computers to drive is teaching them to figure out people. Exactly, and ecosystems that are built around and for people. That, in a way, is the ultimate problem you're trying to solve, right? Like, try and teach a computer to think like a person.
That's exactly right. And I think when I put my firstborn in the car seat coming home from the hospital, she was already starting to learn. Huh. Say more. You know you're in a car, you know it's moving, you know there's a driver, your parents; you're looking around you. You're already starting to, your internal algorithms are already starting to, learn and take inputs. You know that you need to stop at a light before you're an actual driver, right? And then all of the weirdness of, like, if somebody is nodding, or if somebody's pointing, or if somebody's waving, and the different things a wave can mean. Like, there can be the nice-guy point, like, hey, nice job, or the angry point, like, what are you doing? When you're driving, certainly in the city, you need to know what those different things mean.
That is exactly right, and that is the essence of the problem that is left to solve. And how do you solve it? Practice, training, code. Figuring out a way to give the computer as many inputs as possible, teaching it how to make decisions, and, very important, making sure that you teach the computer to know what it is it doesn't know, so that when it doesn't know something, it tells us or it shows us. So then we can sit down and say, okay, that's a problem. How do we get around that? How do we solve for that? That's really interesting. Fundamentally, this is a discussion about risk, right? I mean, what you're saying is, Zoox vehicles can basically drive themselves now, but not safely enough. That's exactly right. And so, I mean, one question is, how safe are they going to have to be? Right? I imagine as good as human drivers is not good enough.
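That "know what it doesn't know" idea has a familiar shape in software: gate the autonomous decision on the system's own confidence, and flag anything below the bar for engineers to study. Here's a toy sketch of that pattern, with hypothetical names and thresholds, not Zoox's actual code.

```python
# A toy sketch of "teach the computer to know what it doesn't know":
# act autonomously only above a confidence bar, otherwise fall back
# to something conservative and surface the scenario for review.
CONFIDENCE_THRESHOLD = 0.95  # assumed bar; real systems tune this carefully

def log_for_review(action: str, confidence: float) -> None:
    """Surface a low-confidence scenario so humans can study and fix it."""
    print(f"flagged for review: {action!r} at confidence {confidence:.2f}")

def decide(planned_action: str, confidence: float) -> str:
    """Proceed only when the stack is sure of what it's seeing."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return planned_action
    # Too uncertain: do the safe thing and tell the engineers, feeding
    # this scenario back into the training loop Evans describes.
    log_for_review(planned_action, confidence)
    return "slow_and_hold"

print(decide("pass_double_parked_vehicle", 0.80))  # -> slow_and_hold
```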
This is clearly a super-high-stakes, literally life-and-death question. No system is infinitely safe. How good is good enough? We have to do way, way, way better than humans today, both on the crashes and on the number of miles driven per incident, basically. Twice as good? That's, like, how much? How good does it have to be? Like, in a purely rational world, a little bit better would be good enough, right? Human beings are not purely rational. They are very rational, but they are not purely rational. And so, again, none of us in the industry have given our metrics, so I'm not going to be the first one to do that. There's a reason we haven't. But we have to be much safer than humans. This is not a be-as-safe-as-the-humans type situation. A little bit better? I would not consider that to be a responsible thing to do. In a minute, in the lightning round, Aicha tells us where, of all the places she's lived, human drivers are the worst. Also, one domain where AI will never be better than humans. Okay, let's get back to the show. We're going to close with the lightning round. So, I love this. I love the conversation. I love talking about big, complicated technical things. But I also love asking lots of questions really fast at the end of an interview. Are you ready? I am.
an interview. Are you ready? I am. What is the one piece of advice you'd give to somebody who is trying to solve a hard problem? Break it down. What's the biggest misconception people have about self driving cars? That cameras only, that cameras are enough? That's the tesla you Yes, what driving advice have you given your sixteen year old daughter who is learning to drive. Relax and pay attention. What is one domain where AI will never be better
than humans? And don't say love understanding the soul of a human empathy, empathy. Of all the places you've lived, where were the drivers the worst? You're gonna get me in trouble? Israel was pretty tough driving environment standpoint. I love the country. I would live there again. Driving there is pretty tough. Interesting in terms of like norms and I, Oh, you go, right, it's basically I go, I go pretty much. So where have AI engineers underrated humans self driving? And
where have AI engineers overrated humans? What are humans worse at? Oh? We cannot process what I call linear known processing where the rules are clear, the algorithm is clear, we will never beat the machine. Yeah, so we're terrible at chess, but actually pretty good at driving. That's exactly right. Well, this was delightful. Thank you for your time. I really enjoyed it too. Thank you. I have to thank you for something that nobody has done, and I do a
You pushed me to the roots, to the root of, and to finding a way to decipher, the essence of self-driving and why it's hard, and I really appreciate it. I learned something today. Aicha Evans is the CEO of Zoox. Today's show was produced by Edith Russelo. It was edited by Kate Parkinson-Morgan and Robert Smith, and it was engineered by Amanda Kay Wong. Theme music by Luis Guerra. Our development team is Lital Molad and Justine Lang. A huge team of people makes What's Your Problem possible. That team includes, but is not limited to, Jacob Weisberg, Na Lobel, Heather Fain, John Schnars, Kerry Brodie, Carly Migliori, Christina Sullivan, Jason Gambrell, Grant Hayes, Eric Sandler, Maggie Taylor, Morgan Ratner, Nicole Morano, Mary Beth Smith, Royston Beserve, Maya Koenig, Daniella Lakhan, Kazia Tan, and David Blefer. What's Your Problem is a co-production of Pushkin Industries and iHeartMedia. To find more Pushkin podcasts, listen on the iHeartRadio app, Apple Podcasts, or wherever you listen to podcasts. I'm Jacob Goldstein, and I'll be back next week with another episode of What's Your Problem.