Ideas I think about in terms of momentum. What is momentum? Momentum, by definition, is mass times velocity. Right? Mass you can think of as people or dollars. Velocity is a vector, so which way is it pointed? How fast it's going is the scalar. And so you get kind of three things that you get to play with there. And so I'm like, can I add more people to this?
Can I add more dollars to this? Can I change the direction? Can I change how fast it's going, and see where it's gone so far? And then that gives me the ability to say, this is the range of where the momentum can go. Welcome to the Software Misadventures podcast. We are your hosts, Ronak and Guang. As engineers, we are interested in not just the technologies but the people and the stories behind them.
So on this show, we try to scratch our own itch by sitting down with engineers, founders and investors to chat about their paths, lessons they have learned, and of course the misadventures along the way. Hi everyone, it's Guang here. In this episode, we are honored to chat with DJ Patil, known for coining the term data scientist and being appointed by President Obama to be the first US Chief Data Scientist.
In this conversation, we explore how high school misadventures led to frameworks for decision making that DJ has leveraged throughout his career, learnings from his time at the White House, and of course his thoughts on what's going on in AI. Without further ado, let's get into the conversation. DJ, super excited to have you with us today. Welcome to the show. Thank you for having me. Glad to be here.
So you are incredibly accomplished today. What we wanted to do is rewind the clock a little bit and go back to the early days, specifically high school. We saw some of your commencement speeches, and you mentioned that you were not a straight-A student, and at some point you were suspended from a class. I think it was an English class, for perhaps throwing a stink bomb, and then something happened. I was kind of confused. So I got kicked out of my algebra class. And that happened too...
Had my rights read. Yeah. So tell us about that time. I just want to point out for the record: I am really good at hot-wiring a car. And picking locks. Wait, how did... I think the statute of limitations is over. But yeah. Where did you learn that? I learned to pick locks. This is in a pre-internet era. There were these books around about lockpicking. And lockpicking is a puzzle.
It's a problem-solving thing. And I actually got introduced to it early on by my grandfather. There used to be these bicycle locks with the tumbler system. They're kind of cheap, and they have three or four tumblers. And I learned that if you could listen, you know, it's like the classic thing: you could rotate it and you could hear it click, and you could figure it out.
And so I got pretty good at that and realized, like, you could really have fun with people by moving the bikes around or switching locks. You could have fun with it. And so then that graduated to, how do you pick locks on, you know, a door? And it just became a problem-solving skill kind of thing. I wasn't really into doing damage or anything really dangerous or problematic.
But it was a fun thing, so much so that one time in high school, when the teachers had found themselves locked out of a room, I had actually made my own lockpicks, because there was no internet where you could just go buy lockpicks. And so I had actually made my own, and I opened the door to let them into the room and help them out. So it wasn't like a thing that was crazy hidden. It was a known thing: DJ does these weird things.
And then, you know, even the car stuff, it was just like a problem thing for me. I had this creativity, I was trying to do stuff, and I just didn't fit into the traditional school system and the regular educational model. And so I needed some way to be able to express those things. And so I was building a lot of science experiments on my own. I was doing all this stuff. But I didn't really have someone to coach me through that. And so, you know, I just had to learn stuff on my own.
And so that's kind of where a lot of these shenanigans came from. Is that how you met Mr. Knapp? Is that when you had, you know, a brush with the law in the form of school administrators? Yeah. So one of the commencement speeches that I was so fortunate to give was at Berkeley, to the Information School. And I got to tell the story about the person who actually suspended me from high school. And his name was Bill Knapp.
And the special thing about Mr. Knapp was, he suspended me. But then he also put me in this elite program later on that was, you know, sort of for the special leaders of the school, who were asked to help run a ropes course, which is like an obstacle course in the trees, to help kids and other people who were having a lot of trouble in their own lives, academic and otherwise.
And so how does a person who one moment is, by definition, kicking your ass, you know, then turn around and select you and change your life? And so what does it look like if we all took a shot on somebody? You know, somebody where, when we think about it, we'd say, that's a person who has screwed up. And then you turn around and you say, you know what, they could be something special, and you give them a second chance.
And so, if you think about all the people who took a risk on us, and we ask, what if we just tried to take a risk on somebody else, how profound would that be? Like, what would that actually look like if we all took a risk on somebody else to do this? Like, you both went through the Insight program. Jake, who was starting the Insight program, you know, he found a way to track me down and sort of kept after me until I got involved.
And you know, I was taking a shot on Jake. Jake was taking a shot on you. You all are taking shots on other people to help support them. And I think it's this very special thing of what makes our system work: we invest in people. We are willing to invest in the success of others. And when we lend our credibility to that, something special might happen.
And so to bring this back to that story of Mr. Knapp, and you can find all these commencement speeches on LinkedIn and the internet, I got to say thank you to Mr. Knapp. He was in the audience. And you know, how rare is it that we get to say thank you to somebody that literally turned the trajectory of my life around?
So one thing we were really curious about is, on the other side of receiving the chance, how do you go about making it easier for others to take that risk on you? Because in your case, I can totally see it being like, oh, you have this kid who is very creative and obviously very talented at doing all these things. But then, how do you know he actually has pretty high morals, and that maybe he's just carjacking for shits and giggles? Because you could do a lot of things.
Oh, it wasn't carjacking, to be clear. I was just hot-wiring. There's a big difference between those things, just for the record. And everyone got their car back. I see, the statute of limitations maybe doesn't extend as far as my bad joke. Thank you. Right, but people who have the same skills that you had could have done a lot more in terms of getting money, or actually going out of their way to hurt people, things like that.
So yeah, what's your thought on how to make it easier for others to take the risk on you? There are a couple of things wrapped up in that. First, I think those of us that are in positions of privilege have the responsibility to look for talent, look for opportunities, and sort of say, oh, that person there could go do something special. I'll tell you another quick one. You know, I was sitting next to somebody at this conference, and it was this younger-looking person.
And I was like, oh, how are you doing? And I was just chatting with them. And you know, LinkedIn was already a thing at this point, and I just kind of met this kid. And it turns out he's 16 and he's like, I'm working on these math problems. I'm like, oh, tell me about your math problems. And you know, he starts telling me about what he's working on. And then, you know, it's kind of an unconference.
So I'm like, well, why don't you run a session on hard math problems that are interesting? And I said, I'll come to that. That sounds cool. And so he's running this session as a 16-year-old on math problems and all this stuff. And we start to talk. And I'm like, hey, you know, you sound really smart.
You could do a lot of cool stuff. And he's like, how do I do that? And so then we realized, well, why don't you come work with me for a bit? Why don't you do other stuff? Like, you could intern at LinkedIn. Of course, you know, this turns out to be Dylan Field, who went on to start Figma. And, you know, it started with just asking somebody, why are you here? What's interesting to you?
And then when they're passionate about something, you kind of just say, tell me more. Find out more. And then sort of say, wouldn't it be cool to maybe work on something together? Let them inspire you. I think if anything, the people who I am perceived as taking a chance on are actually giving much more to me in many ways.
I think teaching is one of these amazing things where, and it's like being a parent, you get taught more than you feel like you're teaching. I feel like I've gotten the better end of the bargain a lot of times from this. And so now on the other side of it, because people ask me all the time, how do you get a mentor?
Like, they say, will you be my mentor? I don't think mentorship quite works that way. I think mentorship works because you're like, hey, I've got a problem. I'd love to get your opinion. I'd love to get your take. I'd love to run an idea by you. And then as you do that more and more, something gets set up where it's more recurring.
And you know, you sort of check in with that person regularly, you kind of let them know where you're trying to go directionally in life. And you do have to be persistent, and you have to sometimes provide feedback and say, hey, here's what I'm working on, here's the things I'm thinking about. And it sort of develops over a long arc. And the people who've mentored me, I've been able to consider everything from father figures to friends.
It's always been more organic than something where it's like, hey, here's formally how we do things. In this case, when it comes to taking chances on people, I mean, Dylan Field is a great example here. Are there characteristics that, reflecting on it, you realize there is some spark that you see in a person that makes you go... and sometimes you can't even put words to it.
You could simply say, it's just intuition, I just feel like there is something here, and you take a risk. Is it more of that? Or have you realized that there are some characteristics that people just have, persistence could be one of them, which we'll talk about pretty soon, that you've come to recognize, where if you see them, you feel more like, yep, this person needs a chance.
Yeah. So, you know, despite getting kicked out of my math class, I do have my doctorate in mathematics, which is sort of one of the ironies of the world. And one of the beauties of being part of a math department is you see talent at a really young age. Like, there are people who just have an extra gear, and you're just like, wow, that's an incredibly special thinker.
Such a special kind of person in how they just look at the world. And so I'm looking for that. Somebody who's got just that, wow, that's a really clever way to look at the world, and you just sort of see that there's something more to it. And I feel like, if anything, I've been incredibly lucky in my life to just be around so many of those people.
And, you know, a lot of times some of that is seeing somebody who's passionate and just saying, how do I take the roadblocks out of their way? You know, Jay Kreps was the first engineer that I inherited when I got to LinkedIn. And he was kind of on the outs with a lot of the other teams.
And he kind of had these wacky ideas. Everyone's like, I don't know, Jay is kind of difficult to work with. And I was just like, oh no, it seems to me that's a guy who's going to run through a wall. Like, how do we do stuff with that? And then it was, okay, how do I align him with the right problems? How do I get the right team around him?
And then, you know, I bring in Chris Riccomini, who used to be an intern for me back at PayPal, to join LinkedIn, and, you know, partner them together. And that became the beginning of that real idea: first Voldemort, which is a key-value store, and then it evolved when Neha Narkhede joined. And then also hiring Jun Rao. That kind of combo became, oh, maybe we have a whole new way of thinking about things.
And that really became this idea of Kafka, and it has turned into what Kafka is today. But my job was to really say: great, you've got this idea. I am going to remove every hurdle from your path, and I'm going to run lead blocker, to use that analogy, to allow you to get the stuff done and to help make you successful. At the same time, though, there are times where you're removing all these roadblocks, but they're not getting to the point where you need them to be.
Do you think about it almost like venture capital investing, where you just have to make a lot of bets? Even if only one of them pays off, you have like a 10x, 100x engineer, and they can do so much that you don't have to feel bad about, so to speak, the time spent on all the other opportunities. How do you think about that?
I guess I actually come at it from the other side, which is more of a research and development approach: how do you pick research problems? How do you run a lab? I'm trained much more formally; like, I used to be at Los Alamos, which is famous for creating so many things, from GPS to obviously unlocking the secrets of the atom. And I look at it from that lab-model approach. In the early days of LinkedIn, the way I ran the LinkedIn data team was much more like a lab.
I was like, okay, you guys, these are the projects, this is what we're trying to go for, because we didn't know exactly how to turn it into a product at that point. There was sort of a 70/20/10 ratio: 70% on the stuff where we knew we had some momentum, getting it further; 20% a little bit more on the adventurous side; and then 10% on the dream kind of aspects.
But I always look at it holistically: under each person, there is a portfolio of things that they're also working on. And so as things start to pop, or things start to make sense across the net portfolio, I'm looking at how to put those pieces together into a comprehensive portfolio. And by all means, there have been plenty of failures I've had, most of which are my direct responsibility.
And I've had many things where I've made colossal mistakes, but what I think we've done really well collectively is look at it as, okay, how do we make a little bit of progress? How do we test this out? How do we try this? How do we make sure that we get there? You know, a lot of times people look at all these ideas, and they're like, oh, it's an overnight success.
People think Kafka just went from this early idea to this big win right away; they forget all the pain that was there. They forget how much negotiation I had to do with the operations team to even allow this in. You know, like Bruno and team, and Henke, to get them to even be like, okay, here's this area, this is a safe zone, here's how we go. Like, there was a lot of negotiation to allow that to happen.
And then the team carried it the rest of the way, they delivered, but it took a long arc. Figma took a long arc. It took a long time to develop. LinkedIn took a long time to become a thing. So it's very easy in retrospect to be like, yeah, we nailed it, that was easy. And I'm like, whoa, no. All the good problems, they're good because they're hard. They're really hard. And so I love to work on things that are like that. I'm not really good at things that are easy.
That's the multi-year overnight success, right? Yeah. So taking a step back: about a decade ago, you posted this on LinkedIn, where you said the best advice you received was no rules, only guidelines. Where did you get this advice, and how do you think about applying it these days? Yeah. It actually developed in high school, because I was so frustrated.
And people were constantly telling me I couldn't do things, or I was stupid, or, like, there was constantly this pushback. And one of the things that I realized was, okay, these are guidelines. These aren't really rules, or they're stupid rules. I'll give you an example. And it's kind of like a version of what's going on with ChatGPT now.
So in chemistry class, we had to type out all the lab assignments. You take what was literally in this lab book and retype out all the things. And I thought this was a colossal waste of time. Obviously they wanted us to do it so we weren't, you know, going to screw up the lab and hurt somebody. But who really types this out? So, you know, this was a time when optical character recognition, OCR, had just come about.
And I was like, well, I'm just going to OCR this thing. And I'm just going to change a few words, and I'm going to submit it. Because then I was done. Now what was I doing with my time? Well, I was doing a ton of coding at the system level. You know, this was an early Apple lab that our school had. And so I was directly hacking at the OS layer to do this coding stuff.
So it wasn't like I was just out messing around. I was very focused, but I was like, these other things are a waste of time. And then suddenly, you know, they're like, we don't know what he's doing. We don't like it. Without ever asking: why is he doing it this way? What else is he doing? I think a person who was really excellent would have been like, well, this is stupid.
Like, he's clearly not cheating because he's lazy; he's doing this because he needs to go spend his time doing something harder. Shouldn't we just let him do something harder? And so I actually ended up doing really badly in the class, not because of me, but because they were just trying to basically pin it on me. And I was like, well, I clearly know the material.
I know all the stuff. And so I'm being penalized, and they weren't going to let me graduate because of this. And so I was like, well, these are guidelines. These aren't really rules. And I was just persistent enough that the school administrator, bless her heart, was like, this is stupid too. She's like, this is incredibly stupid. I'm just going to change your grade now.
Oh, wow. She's like, you know, I think she gave me a C or a B minus or something. And that allowed me to graduate. So I think about those things. And I think entrepreneurship is many times the same way: these aren't rules, these are guidelines. Now, I do want to be very clear.
There are also, along with this, ethics and morals. And just because you say there are no rules doesn't mean it gives you carte blanche to go harm people, do crazy stuff, commit fraud. You know, I think about vaping, and some of the early investments in that, or some of the things that we've seen on other fronts.
Those are some of the issue sets that are really problematic on that front. So you have to have the ethics that combine with this sense; you have to have your true north to do this correctly. And one set of people that I really look up to as a model is people I admire for how they've actually conducted themselves in life and done things that are creative.
I think about Richard Feynman, you know, the great physicist. Sure, he also was very good at picking locks, but also at having fun and creating all this chaos. And, like, the professor who was my PhD advisor, Jim Yorke, he came up with all sorts of crazy, wacky ideas, and people were always pissed at him.
And, you know, especially around what a student should learn. I remember going around, and they were so grumpy, because they're like, well, you know, students have to have this very formal learning. And he's like, there's formal learning and there's life learning. Like, what does it take to actually execute in life? He's like, how are you going to know how to code something? How are you going to be able to communicate?
How are you going to do all these things that are, you know, 101 for getting promoted in a job? He was already thinking about that. And so I actually ran an experiment. I went around the whole department at the University of Maryland. I asked as many professors as I could: what does a mathematician need to know? And I wrote all those answers down, and I was like, oh, okay. So there's some general agreement: I must know those things.
Then there's a smattering of things I should be familiar with. And then there's this sort of weird set where somebody's like, you really need to know this, but no one's teaching you this. I was like, I've got to go get those things: how to communicate, how to write a research grant, all these other pieces. And I think that's true in business. If you went around to all the executives, or all kinds of people, and asked, what should I really know?
You'd get the same kind of assessment. And those are the places where you should apply your directional intent, to get smarter and more effective. This is why, by the way, David Henke and I got in so much trouble and had so much fun together, because we both had this mindset. We were like, oh, that's what everyone's saying? We're just going to go do this. This is how we're going to roll.
I remember David and I would be like, you want to roll on this together? We're like, let's go do it. People are going to be pissed at us. Oh, yeah. And then we're like, let's go do it. I'm starting to see it now. I think it's really cool, your emphasis on this combination of the no-rules approach together with strong ethics, because obviously without the ethics, bad things happen. But then how did you develop this sense of conduct?
The morals and the ethics. Sorry. Yeah. So I think, one, I was very fortunate to grow up in an environment where I had my grandparents around. And my grandparents, you know, saw World War II. They saw the Depression. We were all immigrants, myself included, an immigrant to this amazing country. And so there was a different value system that was instilled: what do we value?
What is important? A sense of responsibility to communities, to self, to family, and to the betterment of ourselves. And so that stayed very true to me as grounding. I was also exceptionally lucky in undergrad to get a solid dose of formal training in ethics, both in the classroom as well as through outside lecture series. And I got exposed to Karl Popper, you know, the philosophy of science, really all these philosophers.
And it was a world where I was like, oh, this is left brain meets right brain in a different way. And so it really was foundational for me to understand: what does it mean, and why are we working on the problems we are? I was also very fortunate, you know, we had these things called programs of concentration at the University of California, and one of mine was theater and the other was psych.
And so in theater I took this class where I was like, I'm going to take things I know nothing about. So there was a class in there called Gay Chicano Theater. And I was like, okay, I don't have any friends who are gay, I don't know what this word Chicano means, and I really don't know anything about theater. So I'm going to take this class. And it opened my eyes so spectacularly, because I just didn't know or appreciate the problems in what we now call the LGBTQ community.
And I did not understand the values, the challenges, and all the things that have been faced by people of Hispanic origin. And, you know, here in California, what we often refer to as the Chicano, Chicana, Latinx kind of movements, what that effort goes through, the farm workers, how important they are to our society, everything else. And that collectively opened my eyes in a way that I just couldn't have appreciated otherwise.
And I think that many times as technologists, we really suck at this. We really suck at understanding who our members, our customers, are. It could be an enterprise, could be a consumer. But even in building Devoted Health, one of the most important things for me was to not be in front of the computer.
It was to be in front of the human that we're taking care of: being in their living room, not just, you know, in some clinical environment; understanding what they've walked through, what their life is like, what challenges they face. And this is where you realize a lot of our thinking as technologists is off the mark.
Because in a lot of the homes I'd visit, everyone's like, smart wearables! And you're like, there's no internet here. Like, people were talking about putting people on these ketogenic diets in one of the neighborhoods where we were taking care of people. And so I was on a ketogenic diet for 110 days, in that neighborhood, trying to be out there for a few days a week. This is in Florida.
And I was like, there's zero chance, zero chance, you could put somebody on a low-carb diet in that area, because of cultural aspects. And so if you're not living it, how are you supposed to really understand it? And this was even true for me in the government. You know, people were talking about criminal justice reform, but until I really sat down with people who had been incarcerated, I didn't understand or appreciate the problems.
In this case, is it about everyone within technology, and even beyond, developing a sense of what you're building and who you're building it for, trying to put yourselves in their shoes, trying to understand where your quote-unquote customers are coming from?
Would you say doing more of this outreach, spending more time with them, even for frontline engineers at a company, is something that they should seek out and do, instead of just, I'm going to sit in front of my screen and write code? Yeah. So I think one of the classic things, and this is one of the values of the US Digital Service, is it's not building for, it's building with. It's a we; we're building this together.
We're coming along on the journey together. This is one of the most important lessons that President Obama taught me when we were building the Precision Medicine Initiative. This was the creation of the largest genomic repository, the largest collection of medical data, all brought together to cure diseases. And he said, you've got to have the people at the table who are going to be really impacted by this.
And so, you know, I had different groups there, like cancer associations and different kinds of chronic care groups, medical groups, data groups. And he's like, where are the real people? And you know, that's not a fun message to get from the president.
It's going to suck when you're getting chewed out by the president, by definition, because everyone knows, and everyone else is like, damn. But, you know, when I got out of that room and really got to meet with the people, I was like, I get it.
We're doing this wrong. We're doing this wrong. I've got to reframe this. I need to radically rethink how we're actually getting people involved. Because it's not our data. It's their data in this case. And we're putting it together. We have a responsibility of stewardship.
And this is where, I think, you know, there is this classic tension. People are probably thinking, well, the customer doesn't always know what they need. But if you spend time with them and you're really building with them, you'll see it. You'll get it. You'll see it really closely.
So there is another aspect of this. Like you said, the customer doesn't really know what they want. For example, I think, I forget if this was Steve Jobs, who said something like, if you asked customers what they want, they wouldn't have asked for the iPhone at the time. And the other thing that you hear a lot in Silicon Valley is, well, build it and they will come.
So how do you balance this, where you want to make sure that you're building the right thing for the right people, but at the same time, they aren't always sure what they need? Yeah. So the first is, I think people take that iPhone comment as: people don't know what they want, and here's the magical thing that people are going to want.
And you're like, there's no evidence for that. And so you need to actually build a pathway to get there. It wasn't like the iPhone was just there on day one. You know, they had obviously built and failed on the Newton. They tried a whole bunch of other things, and the migration was there. Now, they took a lot of very bold choices.
And they're like, here's our bet. Here's what we think. But that's a design process that they had earned the right to. And that's one of the things: a lot of times people are like, if you build it, they will come. No, they won't. That is just not true. And for most of these products, if you look, and it's a classic, you go to the lobby of one of these places and they'll show you, here's V1. And you're like, oh my gosh, that was terrible.
Like, who would use that? And you realize you iterated into it. You know, the social network sites of today versus what they were on day one: radically different. Even Twitter, it was originally just, like, short-code kind of pushes. So I think that's very similar even in basic engineering technology. You know, nobody started with the spectacularly sophisticated tools or concepts that we have today.
You got there because, like, yeah, that sucks, I don't have a way to move data around, or I don't have a way to look this up, or I don't have a way to debug this, or data observability, all these other pieces. And then you're like, yes, we need that. And then people are like, oh, that's the way to approach it.
And people can debate that; there are various vicious debates about whether this or that is the right way to approach it. But that's when you get to actually test it out and see which is the right approach. Sounds like a combination of fail fast and don't prematurely optimize.
Yeah, it's fail fast. And when I'm creating or building something, it's always about the first derivative being positive, which is velocity, and the second derivative being positive, which is acceleration. I remember this even in the early days of LinkedIn. Everyone was like, we're in sixth place or something.
People were always like, we're not winning. And there was another competitor in Europe that people pointed at, like, look how fast they're going, they're doing all this stuff. And it's like, our job isn't to win on this lap. Our job is to win on the 15th lap, maybe the 30th lap. We are supposed to get faster and better every time.
We knew that if we did that, we were going to win. Muhammad Ali has a great line along these lines: I never won the fight in the ring; I always won the fight in the preparation. So talking about velocity, you also mentioned the White House. For folks who might not know, though I think everyone would, you were the first US Chief Data Scientist. And most importantly, not the last. Yes. Because if you're the first and the last, that means you screwed up.
Well, you set the right trend there, and we're definitely seeing it continue. In fact, the Chief Data Scientist role is not just in the White House but in different federal offices across the country, which is a real win. By law, there are now Chief Data Scientists or Chief Data Officers in every federal agency. And now, by executive order, a counterpart AI role as well.
Oh, good. That's pretty cool. So on your Twitter account, you have this note pinned right now. For folks who haven't looked at it, I would highly encourage you to go read it. It's on a White House notepad. I'll just read the first lines you have there: dream in years, plan in months, evaluate in weeks, ship daily. And then you have two other sections, and I'm just going to read them too, because it's amazing.
Prototype for 1x, build for 10x, engineer for 100x. And then you have: what is it going to take to reduce the time in half, and what do we need to double the impact? This is incredible. I've gone back and looked at this myself many times. I'm not doing anything close to what you were doing in the White House, but it still helps me think about problems. How did this come about?
Yeah. So one, everybody can have the card, because everything produced by the White House belongs to everyone; it's the people's house. So the card is everybody's. As for how it came about, I was very fortunate to give a commencement speech about this at the University of California San Diego. But the thing is, we were in a very contentious time in the White House.
We'd seen the murders of a number of innocent Black youths and people of color. And there was a question of, what's the policing that we would want in our neighborhoods? What's the policing we expect? We also need to understand that policing is a very dangerous job. There are many police officers who are harmed and killed in the line of duty, doing their job trying to protect people.
And so how do we balance this? How do we make this work? On our team we had Lynn Overmann, and also Denice Ross, who later became the second US Chief Data Scientist, Clarence Wardell, and a number of others, who had this idea: let's get a convening of great people together. Lynn Overmann kind of helped steer this with a number of others.
And they said, let's get the police chiefs, let's get the technologists, let's get civic activists, all in a room. And we're going to lock them in that room with White House cupcakes until they come up with an idea. When you're in a room like that, it's a very polarized, very intense meeting. So how do you get it moving forward? I wasn't sure what to do or what to say in front of these people who are looking at you, trying to decide, do we trust you?
Can we work with you? How are we going to work together? So I tried to ask, how do we make some progress? What are the lessons we've learned about actually making progress? These are lessons we've learned building new initiatives, startups, new ideas. And those are exactly the things that came to mind, those three snippets, those blocks: if I was starting from scratch, what would I want to do?
Every day we've got to make some progress, because some days are going to suck. But if we just get one thing done, if we ship something every day, we're going to go a long way. We're going to do something amazing. But we've got to be dreaming in years; we've got to have a true north. Then we've got to ask ourselves, okay, let's not get hung up on big timelines.
We've got to prototype a lot. Then let's build for 10x: start experimenting, get a little bit bigger, and then finally engineer it for 100x. And then there's a hack. If you ask what it's going to take to double the impact while cutting the timeline in half, you're on log base two, and everything you prioritize now is exponential. The things that are linear drop out; they're below the line. They're not interesting.
Everything that's going to have the big impact, those are the things you go for. And if you think about it, we have an amazing country, a huge country, and yet we're tiny relative to many of the other countries out there. So if you're going to reach several hundred million people and you start with 10,000 people, you've got to be on an exponential trajectory, real fast.
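The "double the impact, halve the timeline" heuristic can be sketched as a toy calculation (the function names and starting numbers below are illustrative assumptions, not anything from the conversation): each cycle that doubles impact and halves the time to ship multiplies impact-per-week by four, so a steady linear improvement quickly falls below that line.

```python
# Sketch of the log-base-two prioritization hack: if every cycle
# doubles the impact AND cuts the timeline in half, impact per week
# grows by a factor of 4 per cycle. A linear gain stays roughly flat.

def exponential_track(initial_impact, initial_weeks, cycles):
    impact, weeks = initial_impact, initial_weeks
    rates = []
    for _ in range(cycles):
        impact *= 2        # double the impact
        weeks /= 2         # cut the timeline in half
        rates.append(impact / weeks)
    return rates

def linear_track(initial_impact, weeks, step, cycles):
    impact = initial_impact
    rates = []
    for _ in range(cycles):
        impact += step     # steady, linear gain on a fixed timeline
        rates.append(impact / weeks)
    return rates

expo = exponential_track(initial_impact=1, initial_weeks=8, cycles=5)
lin = linear_track(initial_impact=1, weeks=8, step=1, cycles=5)
print(expo)  # each entry is 4x the previous one
print(lin)
```

Running it shows why linear improvements "drop below the line": the exponential track's impact-per-week quadruples every cycle, while the linear track barely moves.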
And so how do we start taking those pieces, and how do we make this a "we", collectively? Because in the end, when we left the administration, these programs covered 94 million Americans, starting from zero. And a lot of times people attribute this to the White House, but it's all the people across the country who came together to do this.
People who decided, this is how I want my community to work. What we ended up being is a catalyst, a glue, saying, here's what we collectively can dream for.
This is what we should expect from our social contracts with each other, so let's push to be better. And then there are a bunch of things, just like the lab model: I need to help get Congress on board, I need to get state legislatures on board, I need to get society on board around certain ideas. Then we can go after these. So on that, how did you get these people on board? Like you said, you had to gain their trust, right?
It's very hard to sell people on, hey, we're going to be making incremental updates, without painting that bigger picture. How did you go about doing that? You know, honestly, it's just getting people to come and show up together with an openness. If you can get people to show up with a degree of openness, to sit together and say,
look, there's a huge divide, are there things that we could do together? Are there potential things we could get to together? There's a surprising amount we all agree on. We all agree that our kids should be able to go to school without being bullied. We agree that kids should come home safe, that they should have food. Now, how do we accomplish those things? That's another matter.
And there's a lot of disagreement there. So what we said is, okay, first, we know that technology can provide leverage. What if we just agreed on a certain number of data sets that we could put out there on the web, for everybody to look at, to compare? Maybe we'll discover some great insights. And that's how the Police Data Initiative started.
And on top of that we added the Data-Driven Justice initiative as well. But one of the other things that we do is something we call scout and scale. Great ideas don't have to start from your own brainstorming at a whiteboard. You look across the entire landscape of whatever you're working on: health care, education, technology, whatever it is.
Somebody's tried something. Somebody's done something that's pretty spectacular. And you look at it and go, huh, that's pretty cool. And you see somebody else who's done something slightly different, and you go, that's pretty cool too. What would it take to scale that much more efficiently? What are the common threads? What are the things we could do that would collectively lift things up and make that happen?
And that's the scout-and-scale approach. The term really comes from Megan Smith, who was the third US CTO. It's a great framing for how to look at stuff and say, okay, this is how we do it. I'll give you a concrete example from early LinkedIn. This guy shows up, a friend of Jonathan Goldman's, a hedge fund manager, right around the downturn.
He'd come hang out at lunch with us, but he was always on these crazy fasting diets. And he's like, yeah, well, I'm trying to record these videos for teaching math, and doing all this kind of stuff. And I was like, that's pretty cool. And we were just talking through it, and he kept doing all this stuff; the ideas were his, and he kept carrying them forward.
I was just giving him a few thoughts, along with Jonathan, and he was really doing the heavy lifting. That person is Sal Khan, who then went on to create Khan Academy. And so it's like, you go, oh, well, I saw some other people doing this at MIT, and I'd done a bunch of work trying to help rebuild the educational system in Iraq, so I'd had some exposure to these issues. You're like, oh, yeah, you should do that. And then they take off with the idea.
How do you think about allocating resources to do that? Is it someone you dedicate who's very good at spotting different trends, or is it something where everybody has some kind of time allocation to do it?
I think it's all of our jobs. It's honestly all of our jobs to do it. Denice Ross, who became the second US Chief Data Scientist, and Clarence Wardell were both originally Presidential Innovation Fellows at the Department of Energy. They were supposed to be working on all these energy-related things, and somehow they found their way over to me. And they were like, we've got to work on these policing issues.
And I was like, great. What do you think? And they had all these ideas and were sharing what they were learning. And I was like, awesome: how do I co-opt them and steal them from the Department of Energy to work on this problem? Because they had momentum. Think about it: I think about ideas in terms of momentum. What is momentum?
Momentum, by definition, is mass times velocity, right? Mass you can think of as people or dollars. Velocity is a vector: which way is it pointed, and how fast is it going, which is the scalar. And so you get three things to play with there. And I'm like, can I add more people to this? Can I add more dollars to this? Can I change the direction?
Can I change that scalar, how fast it's going? And that gives me the ability to say, this is the range of where the momentum can go. So if I'm starting from absolutely zero, that's really hard. People think, oh, Tesla started from absolutely zero. That's not true at all. You know, Musk wasn't the original founder of Tesla.
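The momentum framing can be turned into a toy sketch (all the numbers and variable names below are illustrative assumptions, not from the conversation): mass stands in for people or dollars, velocity is a direction plus a speed (the scalar), and the three levers are add mass, change speed, change direction.

```python
import math

# Toy model of the p = m * v framing: "mass" is people or dollars,
# "velocity" is a direction (unit vector) times a speed (the scalar).
def momentum(mass, speed, direction):
    # Normalize the direction so only mass and speed set the magnitude.
    norm = math.hypot(direction[0], direction[1])
    ux, uy = direction[0] / norm, direction[1] / norm
    return (mass * speed * ux, mass * speed * uy)

baseline = momentum(mass=10, speed=2.0, direction=(1, 0))       # 10 people heading one way
more_mass = momentum(mass=15, speed=2.0, direction=(1, 0))      # lever 1: add people/dollars
more_speed = momentum(mass=10, speed=3.0, direction=(1, 0))     # lever 2: go faster
new_direction = momentum(mass=10, speed=2.0, direction=(0, 1))  # lever 3: change course
print(baseline, more_mass, more_speed, new_direction)
```

Comparing the four outputs makes the point of the framing: each lever changes the momentum vector differently, which gives you a range of where the momentum can go.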
There were a bunch of other people playing around with different ideas. People think self-driving cars started from zero. No, they forget that the zero actually was the US federal government: DARPA funded that. People think about the COVID vaccines; the mRNA vaccines were funded by the federal government. That is the spark. And then the rest, the money, the dollars, the people, that ecosystem, comes together around the spark, to use an analogy, to nurture the flame.
And so I think it's on all of us to find and foster that, to carry those ideas, and then champion them into something novel or new. And sometimes, look, maybe you can't do it inside your organization, and it's time to go spin it out and do something.
Like, you guys decided to do this crazy podcast idea. Probably you were like, hey, I don't know why we're doing this thing. And then you're like, well, maybe we should do it. And then finally you probably wrote something down and were like, fine, we've just got to ship it and figure out how to do it. And then it happens. And then you do another one. I love it; that's how we ended up here. I don't know if that's true, but that's where the fun happens.
Staying in your lane doesn't help. Totally agree. There are no rules; there are no guidelines. Talking about momentum: we have a ton more questions to ask about your time in the White House, but we're going to fast forward a little bit. Since we're talking about momentum, we wouldn't do the topic justice if we didn't talk about AI, for example; there's a lot of momentum in that space. And about a year back, I think you joined GreatPoint Ventures.
And you're doing more active investing. I mean, you've been investing even before, but now you're doing it a little more actively. So looking at the momentum you see right now, there are a lot of toy applications on the AI side. At least that's the perception I have as a user. I use Copilot for code, a super helpful tool, probably one of the best tools out there in terms of applications.
But apart from that, you see a bunch of toy things. So while there's a lot of momentum, there's also a lot of hype. Curious to know, from your perspective, what are the things you see that are under-hyped and that you're excited about? Well, I think it's useful to first rewind a little bit further back. You know, I got super lucky. My dad was a professor. My uncle is actually a professor too, one of the early professors of AI.
And so I was babysat by some of the original AI luminaries, and my exposure to AI has been going on since before I can even remember. I've been very privileged to have a front row seat to many of the early innovations, all the way through. In fact, in the early days at LinkedIn, Monica Rogati and I thought about saying, hey, maybe we should name the team artificial intelligence.
I was like, no, absolutely not. We were still in the AI winter; we weren't going there again. And for people who are interested, if you go to LinkedIn Learning, there's a whole series of podcast-like courses where I interview people who've really been on that journey through the arc of data, including Monica Rogati, Jonathan Goldman, and some of the others we've talked about.
But the part for AI that I think we've seen is machine learning and all these other pieces coming in. When we were building out the first national strategy for AI, really led by the president, one of the key things we realized in that process is that we had to figure out how to really craft the agenda for the country around what AI was going to look like.
And what that looked like was, one, talking to lots of people across the country, seeing what fits, figuring out how we put all this together, and then formulating it in a way that was actionable. And this was a time when the country was really focused on generalized artificial intelligence, and we needed to be more focused.
And if we look at that now, what did we get right, and what did we get wrong? The first thing we got wrong is that you have to think about this in terms of automation, the overall automation aspect, and not just AI. The second thing we got wrong is that originally we thought, oh, it's going to be the displacement of truck drivers and short-order cooks, people flipping hamburgers.
Turns out, dexterity is hard. Dexterity is really hard. Self-driving is not easy. It's really hard to make all this stuff come together within the current existing frameworks and systems. But what I think we've learned now, as we've seen with transformers, is that this is just the next phase, and there are several more increments that need to happen.
And computer vision is showing stunning results, and there's so much more. But where I think some of the greatest change is going to happen is in what I call stupid boring problems. These are problems where you're just like, why do I have to do this? Why can't this thing just be filled out? Why am I handing this piece of paper to this person to fill out something and go here and here?
And if you look around your life, there are lots of stupid boring problems. You're like, why am I cutting and pasting this? Why do I fill out this doctor's form 15 times with all this stuff? There's a bunch of stuff that should just be auto-populated, those types of things. And those stupid boring problems are the ones where I think we'll see the biggest lift from AI.
And that matters because, when you get rid of the stupid boring problems, your safeguards for doing it ethically, morally, and transparently are also much more clear-cut. Everyone's like, yeah, get rid of that problem, I don't like it. So you have much more grace in solving that problem without causing harm.
And then you can evolve to some of the bigger problems. I think right now what we have in AI is a lot of features. They feel like features rather than full-blown products, and so we're not sure exactly what to do with them. Now, obviously they're getting integrated more and more, and it'll all be interwoven. But there's a lot more opportunity that I'm very excited about.
And I would say, frankly, some of the areas I'm particularly excited about are the opportunities in healthcare and the life sciences. Think about addressing issues around mental health and loneliness. I think about reducing medical errors, catching medical errors and other issues, helping people understand and take control of their own healthcare and be better advocates for themselves.
I think about it in terms of educational opportunities. I think about it in terms of finding potential new scientific breakthroughs. We just increased the number of known inorganic crystalline compounds by an order of magnitude over what we've ever had before. What might that do for new materials and other exciting things? That's a phenomenal jump, something that could be really exciting for us.
So you mentioned healthcare and life sciences. One of the things that's super important in these domains is precision and accuracy. For example, say we use AI for something like mental health, guiding people to better care.
But if you look at a lot of these LLMs right now, they hallucinate; they are not necessarily accurate. So when it comes to practical applications in domains where the margin of error is really small and the cost of getting something wrong is really high, what version of this technology do you see evolving to actually help in these domains? Because it wouldn't be the hallucinating agents that we see today.
Right. So I don't think we're ready to insert LLMs directly as a replacement for a human. I think of it as augmentation. Take, for example, an organization called Crisis Text Line. If you're in crisis, thinking about suicide or any serious issue, you text 741741.
You're going to get connected to a volunteer very fast. Same if you text the national suicide hotline number, 988: you'll get connected to Crisis Text Line or the Trevor Project. What these organizations then have is a huge deluge of information coming at them. The point is not to use AI to say, hey, you're okay. What you're doing is using that technology to triage and prioritize: sorting it, organizing it.
So there are certain things. If you walk across the Golden Gate Bridge, you see the "need help" signs; those go directly to Crisis Text Line. And they have a special shortcut there, because they know that right away you go to the top of the queue. Because you're on a bridge, and that's important.
But they also know things like, if you use certain words in combination, they need to escalate right away. If you use the word Tylenol or Advil, there's a high probability you've actually already overdosed. If somebody is a veteran and they say they have a firearm in their home, there's a high probability they have the firearm out with them.
So you need to change your paradigm for how you organize. You don't think of everything as a uniform distribution or some traditional bell curve; you actually segment more intelligently. And that's an example of where AI can really start to help us. Let us be smarter. Let's catch the edge cases. Let's be more effective on speed and time-to-help, so the right care actually gets delivered to a person. Same thing with clinical notes for a patient.
As a patient, you're sitting in the exam room in a gown, you're cold, wondering how long they're going to be. The physician is outside, and you can hear them flipping through the chart, and you're like, are they actually reading the whole thing? Can they summarize it in a way that's useful?
The risk with summarization is going to be much lower, especially when you can back it up with references or other information that says, hey, here's where it is in the chart. That's when I think we start to see the benefits of these systems. But do I think it's ready for the clinical care setting? Not yet. Are we on the route there? We're getting there.
But, for example, identifying people at risk for chronic kidney disease: we should totally be using those systems right now. For people who might have a higher rate of sepsis, should we use it to identify populations that might be more susceptible to some kind of harm? Absolutely. Let's help them out.
At the same time, as we've seen, unfortunately, with certain authoritarian governments, that technology can equally be used to identify groups that you want to harm. And we need to figure out what the guardrails are. That's not just here in the United States; it is an international issue of how we make sure this technology is used in a way that works for us rather than against us. And "us" is all of us.
With great technology comes great power, and with great power comes great responsibility. So, as you mentioned, this technology can be used to do a lot of good, but it could be used in negative ways too, right? What are some of the aspects that concern you about the advances being made without a whole lot of guardrails around them?
So there are a couple. The first, I think, comes down to us recognizing who's going to do this work. The greatest population that's going to work on AI is in front of us; the people who are going to work on data are in front of us. I think there are going to be over 900 students this year graduating with the data science major from Berkeley.
And I guess I can say it here because it's now public, but I'm excited: I actually have the privilege and the honor of giving their commencement speech, coming up soon. So, double thumbs up. Part of this is, if we think about what they need in order to shepherd this world in the way we would want: number one, I think we want every student trained with a good foundation of ethics, and that ethics needs to be integrated into the curriculum.
It can't be some side class that you're barely paying attention to; it needs to be integrated. Second is security. Think about your healthcare data constantly being stolen. It's unbelievable to me that we would teach somebody how to design a database today but wouldn't teach them security. Like, when you submit your database homework and say, here's my database,
shouldn't it actually be tested against injection attacks or something else? Shouldn't we be doing those things? If we're not thinking about basic security, it's the equivalent of designing a class on building bridges and then not stress-testing the bridges to see if they fall under certain conditions.
We know that's part of the curriculum; that's how we train. We have to train for failure modes. So I think we need to have that as one of the predominant ways of thinking about what students really need in order to keep us on the right side of this.
The next thing is that you have to have diversity on teams. You cannot build in this day and age, or for this world, if you don't have people on the team who look like the rest of the world and have experiences from the rest of the world. And I got to see this firsthand. The US is a very big place: many cultures, many different aspects, many lifestyles, lots of things.
And when you look around that morning staff meeting room, trying to solve these hard problems, you are grateful that there are people who come from so many different walks of life, because you can't solve a problem for the country if you don't have representation from all the different types of populations out there.
So you've spent so many years in technology, and you've also spent time, I would say, in policy, to an extent, working in the White House. One of the debates you see about AI is: should this be a more regulated domain? And then you see the other side, which is, well, if you regulate technology, you slow down progress, and this technology can do so much good, so why slow it down? What's your take on this?
I think the appropriate question is: when is regulation required? And there are a couple of ways to think about this. First, why does the FDA exist? The FDA exists because it turned out that, for a long time, it wasn't illegal to poison people with medicine, and there were these snake oil salesmen producing all this stuff who poisoned a lot of people.
And people were like, oh wait, you mean medicines and supplements aren't safe? That's not cool; we're not okay with that. Think about why we have the USDA: because people produced meat that had E. coli, and there were Salmonella outbreaks and all these other things, and people said, that's not cool, we don't like that. And that's when you put regulation in. The Department of Transportation regulates things because, hey,
we do want to have rules around our roads, to make sure that when I get on the road the probability of an accident is not ridiculously high. I want those things. So we often put these regulations into practice because somebody has done something bad.
The reason the SEC and Treasury look the way they do is giant market crashes that caused recessions. So what we're seeing now is the beginnings of how AI is transforming society, and we have to ask: are we okay with that, and where do those regulations stand?
The reason regulations start to come in is that we don't do a good job of being responsible with technology. And so far, and I point this at myself, having spent a lot of time working on this: if you look at the technologies we've built, they haven't really benefited everybody. There have been a lot of harms.
Health issues, mental health issues; girls on Instagram is one that's often highlighted, but there are so many more, including disinformation and misinformation. And we've looked at that, and it's people using these platforms for propaganda, creating massive distrust
around the world, destabilizing people's lives, and causing direct harm to certain populations. So what's our answer to that? If our answer is, sorry, we just built the technology, that's not really responsible; that's not being a good citizen. And that's when regulation has to step in.
There's another side of regulation that's also important to address: you can create what's called regulatory capture. You can set up regulatory regimes in such a way that only certain people can play, because they're big enough or meet certain rules, and then there's no chance for innovation, and that sucks too. So finding the right balance, with technology that's changing this rapidly, is really important. I'll give you a very specific example.
Think about the human genome and some of the first test cases out there, where people were trying to patent the genome. Think about how different our world would be, with the potential of genomic therapy, if we hadn't gotten that right, if people had just been able to say, okay, it's a land grab, we patent this part of the genome, and you don't get to work on it unless you pay us some amount of money.
A lot of the therapies that are starting to emerge right now would be off the table. So we need to find the right balance on these things, because at the same time, a lot of those drugs that are coming out right now are going to be prohibitively expensive.
So we're reaching the end of the conversation. We would love to continue chatting for way longer, but we'll try to wrap this up. Before we let you go, DJ, we have one last question. You've mentioned in one of your commencement speeches that failure is the only option. Is there a favorite failure of yours? It could be from school, from any of the jobs you've had, the White House, or even now.
I have failed so many times, epically. I mean, I got suspended, which means I got caught, which means I screwed up, right? But it's also what you learn from each of the failures; it's how fast you get back up. What I do allow myself when I fail is 15 minutes to cry. Fifteen minutes to cry, and then it's, what am I going to do next? And people forget, you know, after I left LinkedIn I had a colossal failure on a startup,
a 41-million-dollar failure on a photo-sharing app startup that I got involved in. I thought it could be really spectacular; it was a dismal failure. But what I was really fortunate about was that I had worked with so many people and helped solve so many problems for other people. When I failed, most people would say you're dead in Silicon Valley, but people were like, oh, you had value, you helped us on something, we will help you, we still see you. And they helped me get back up.
They helped me take another shot at the thing. So what I would really emphasize here is that it's not about the actual failing; it's the process after the failing, so that you not only don't repeat the mistakes, but you're smarter, so you can deliver something of even greater value and try something even harder that's going to potentially change the world in a far more dramatic manner. Well said; that's a very good point to end on. Thank you so much, DJ.
Awesome. The timing was perfect, because my computer is about to shut off. You can't time it better. Perfect timing. Well, DJ, thank you so much for being here with us today. Loved the conversation. Yeah, thank you so much again, DJ. All right, guys. Thanks. Thanks. Bye. Bye.