The year was 1956, and the place was Dartmouth College. In a research proposal, a math professor used a term that was then entirely new and entirely fanciful: artificial intelligence. There's nothing fanciful about AI anymore. The directors of the Stanford Institute for Human-Centered Artificial Intelligence, John Etchemendy and Fei-Fei Li, on Uncommon Knowledge now. [MUSIC]
>> Peter Robinson: Welcome to Uncommon Knowledge. I'm Peter Robinson. Philosopher John Etchemendy served from 2000 to 2017 as provost here at Stanford University. Dr. Etchemendy received his undergraduate degree from the University of Nevada before earning his doctorate in philosophy at Stanford. He earned that doctorate in 1983 and became a member of the Stanford philosophy department the very next year.
He's the author of a number of books, including the 1990 volume The Concept of Logical Consequence. Since stepping down as provost, Dr. Etchemendy has held a number of positions at Stanford, including, and for our purposes today this is the relevant position, co-director of the Stanford Institute for Human-Centered Artificial Intelligence. Born in Beijing, Dr. Fei-Fei Li moved to this country at the age of 15.
She received her undergraduate degree from Princeton and a doctorate in electrical engineering from the California Institute of Technology. Now a professor of computer science here at Stanford, Dr. Li is, once again, the founder of the Stanford Institute for Human-Centered Artificial Intelligence. Dr. Li's memoir, published just last year, is The Worlds I See: Curiosity, Exploration, and Discovery at the Dawn of AI. John Etchemendy and Fei-Fei Li, thank you for making the time to join me.
>> Fei-Fei Li: Thank you for inviting us. >> Peter Robinson: I would say that I'm going to ask a dumb question, but I'm actually going to ask a question that is right at the top of my form: what is artificial intelligence? I have seen the term a hundred times a day for, what, several years now, and I have yet to find a succinct and satisfying explanation. Let's see, let's go to the philosopher: here's a man who's professionally rigorous, but here's a woman who actually knows the answer.
Yeah, and she knows the answer [LAUGH] >> John Etchemendy: So, let Fei-Fei answer, and then I will give you a different answer. >> Peter Robinson: Really, all right. >> Fei-Fei Li: Okay, Peter used the word succinct, and I'm sweating here.
So, artificial intelligence today is already a collection of methods and tools covering the overall area of computer science that has to do with data, pattern recognition, and decision making in natural language, in images, in videos, in robotics, in speech. So it's really a collection. At the heart of artificial intelligence is statistical modeling, such as machine learning, using computer programs.
But today, artificial intelligence truly is an umbrella term that covers many things we're starting to feel familiar with, for example, language intelligence, language modeling, or speech, or vision. >> Peter Robinson: John, you and I both knew John McCarthy. >> John Etchemendy: Right. >> Peter Robinson: Who came to Stanford after he wrote that proposal, used the term, coined the term artificial intelligence.
Now, the late John McCarthy. And I confess to you, who knew him as I did, that I'm a little suspicious of the term, because I knew John, and John liked to be provocative. And I am thinking to myself, wait a moment, we're still dealing with ones and zeros. Computers are calculating machines; artificial intelligence is a marketing term. >> John Etchemendy: So, no, it's not really a marketing term. So, I will give you an answer that is more like what John would have given.
>> Peter Robinson: All right. >> John Etchemendy: And it's the field, the subfield of computer science, that attempts to create machines that can accomplish tasks that seem to require intelligence. The early artificial intelligence systems played chess, or even checkers, very, very simple things. Now John, as you know, was ambitious, and he thought that in a summer conference at Dartmouth they could solve most of the problems. [LAUGH]
>> Peter Robinson: All right, let me name a couple of very famous events. What I'm looking for here, I'll name the events: we have, in 1997, a computer defeating Garry Kasparov at chess, a big moment. For the first time, Deep Blue, an IBM project, defeats a human being at chess. And not just a human being, but Garry Kasparov, who, by some measures, is one of the half dozen greatest chess players who ever lived. >> Fei-Fei Li: Mm-hm.
>> Peter Robinson: And as best I can tell, computer scientists said, yawn, things are getting faster, but still. And then we have, in 2015, a computer defeating Go expert Fan Hui. And the following year, it defeats Go grandmaster Lee Sedol, I'm not at all sure I'm pronouncing that correctly. >> Fei-Fei Li: It's Sedol, yeah. >> Peter Robinson: In a five-game match, and people say, wow, something just happened this time.
So, what I'm looking for here is something that a layman like me can latch onto and say, here's the discontinuity. Here's where we entered a new moment, here's artificial intelligence. Am I looking for something that doesn't exist? >> John Etchemendy: No, no, I think you're not. So, the difference between Deep Blue and-. >> Peter Robinson: Which played chess. >> John Etchemendy: Which played chess, Deep Blue was written using traditional programming techniques.
And what Deep Blue did is, for each move, for each position of the board, it would look down to all the possible- >> Peter Robinson: Every conceivable decision tree. >> John Etchemendy: Every decision tree to a certain depth, I mean, obviously, you can't go all the way. And it would have ways of weighing which ones are best. And so then it would say, this is the best move for me at this time. That's why, in some sense, it was not theoretically very interesting.
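(To make the contrast concrete, here is a minimal, runnable Python sketch of the kind of depth-limited, explicitly programmed game-tree search Etchemendy is describing. The toy game tree and the move labels are illustrative assumptions, not Deep Blue's actual code; the real system added alpha-beta pruning, a handcrafted evaluation function, and custom hardware.)

```python
# Explicit programming, Deep Blue style: look down every branch of the
# game tree to a fixed depth, score the resulting positions with a
# hand-written evaluation, and pick the move with the best guaranteed
# outcome (minimax). Leaves below are numbers standing in for the
# hand-coded evaluation of a position.

def minimax(node, maximizing):
    if isinstance(node, (int, float)):   # a leaf: the position's score
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Two candidate moves, each met by two possible replies.
game_tree = [
    [3, 5],  # move A: the opponent will steer toward the 3
    [2, 9],  # move B: the opponent will steer toward the 2
]

best = max(range(len(game_tree)),
           key=lambda i: minimax(game_tree[i], maximizing=False))
print(f"best move: {'AB'[best]}")  # A guarantees 3; B only guarantees 2
```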
The AlphaGo- >> Peter Robinson: AlphaGo, which was a Google project. >> John Etchemendy: Which is a Google project. >> Peter Robinson: All right. >> John Etchemendy: This uses deep learning; it's a neural net, not explicit programming. We don't know, we don't go into it with an idea of, here's the algorithm we're gonna use, do this and then do this and do this. So it was actually quite a surprise, particularly AlphaGo. >> Fei-Fei Li: Not to me, but, [LAUGH], sure.
>> John Etchemendy: No, no, but-. >> Fei-Fei Li: To the public. Yeah. >> John Etchemendy: To the public. >> Fei-Fei Li: Yeah. >> Peter Robinson: But if our colleague, I'm going at this one more time because I really wanna understand this, [LAUGH], I really do. Our colleague here at Stanford, the physicist Zhi-Xun Shen, who must be known to both of you, said to me, Peter, what you need to understand about the moment when a computer won at Go.
Go, which is much more complicated, at least in the decision space, much, much bigger, so to speak, than chess. There are more pieces, more squares, all right. >> Fei-Fei Li: Yeah. >> Peter Robinson: And Zhi-Xun said to me that whereas in chess the computer just did more quickly what a committee of grandmasters would have decided on, the computer in Go was creative. It was pursuing strategies that human beings had never pursued before. Is there something to that? >> Fei-Fei Li: Yeah, so there's a famous-
>> Peter Robinson: Fei-Fei is getting impatient with me, I'm asking such, go ahead. >> Fei-Fei Li: No, no, you're asking such good questions. So in the third game, I think it was the third game of the five games, there was a move, I think it was move 32 or 35, where the computer program made a move that really surprised every single Go master. Not only Lee Sedol himself, but everybody who was watching. >> Speaker 1: That's a very surprising move.
>> Speaker 2: [LAUGH] I thought it was a mistake. >> Fei-Fei Li: In fact, even post-analyzing how that move came about, the human masters would say, this is completely unexpected. What happens is that the computer, like John says, has the learning ability and the inference ability to think about patterns, or to decide on certain moves, even outside the trained, familiar human masters' domain of knowledge in this particular case. >> John Etchemendy: So, may I, Peter, let me.
>> Peter Robinson: Go ahead, yes. >> John Etchemendy: Let me expand on that. The thing is, these deep neural nets are supremely good pattern recognition systems. But the patterns they recognize, the patterns they learn to recognize, are not necessarily exactly the patterns that humans recognize. So it was seeing something about that position, and because of the patterns it recognized on the board, it made a move that made no sense from a human standpoint.
In fact, all of the lessons in how to play Go tell you, never make a move that close to the edge that quickly. And so everybody thought it had made a mistake, and then it proceeded to win. And I think the way to understand that is it's just seeing patterns that we don't see. >> Fei-Fei Li: It's computing patterns that are not traditionally human, and it has the capacity to compute them.
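(And here, for the same purpose, is a minimal, runnable sketch of the learning-based alternative: a tiny neural network that learns a pattern from examples by gradient descent. The XOR task, layer sizes, and learning rate are illustrative assumptions; AlphaGo's networks were vastly larger and trained on Go positions, but the principle is the same: the pattern ends up encoded in learned weights rather than written down as a rule, which is why such systems can settle on patterns their programmers never anticipated.)

```python
# Nothing below states the XOR rule explicitly; the network infers it
# from four labeled examples, the way AlphaGo's nets inferred board
# patterns from games rather than from hand-written Go principles.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)      # XOR labels

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)        # hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)        # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    h = np.tanh(X @ W1 + b1)                         # forward pass
    p = sigmoid(h @ W2 + b2)
    grad_out = p - y                                 # backprop of cross-entropy
    grad_h = (grad_out @ W2.T) * (1 - h ** 2)
    W2 -= 0.1 * (h.T @ grad_out); b2 -= 0.1 * grad_out.sum(0)
    W1 -= 0.1 * (X.T @ grad_h);   b1 -= 0.1 * grad_h.sum(0)

print(p.round(3).ravel())  # approaches [0, 1, 1, 0]: learned, not coded
```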
>> Peter Robinson: Okay. I'm trying to, we're already entering this territory, but I am trying really hard to tease out the, wait a moment. These are still just machines running zeros and ones, bigger and bigger memory, faster and faster ability to calculate. But we're still dealing with machines that run zeros and ones; that's one strand. And the other strand is, as you well know, 2001: A Space Odyssey, where the computer takes over the ship. >> Dave: Open the pod bay doors, HAL.
>> HAL: I'm sorry, Dave, I'm afraid I can't do that. >> Peter Robinson: Okay, we'll come to this soon enough. Fei-Fei Li, in your memoir, The Worlds I See, quote, I believe our civilization stands on the cusp of a technological revolution with the power to reshape life as we know it. Revolution, reshape life as we know it. Now, you're a man whose whole academic training is in rigor. Are you going to let her get away with this kind of wild overstatement?
>> John Etchemendy: No, I don't think it's an overstatement. I think she's right. >> Fei-Fei Li: He told me to write the book. [LAUGH] >> John Etchemendy: Mind you, Peter, it's a technology that is extremely powerful, that will allow us, and is allowing us, to get computers to do things we never could have programmed them to do. And it will change everything, but, as a lot of people have said, it's like electricity, or it's like the steam revolution.
It's not something necessarily to be afraid of; it's not that it's going to suddenly take over the world. That's not what Fei-Fei was saying. >> Fei-Fei Li: Right, it's a powerful tool that will revolutionize industries and the way we live. But the word revolution doesn't mean it's a conscious being; it's just a powerful tool that changes things. >> Peter Robinson: I would find that reassuring if, a few pages later, Fei-Fei had not gone on to write.
>> Fei-Fei Li: No. >> Peter Robinson: There's no separating the beauty of science from something like, say, the Manhattan Project. Nuclear science can produce abundant energy, but it can also produce weapons of indescribable horror. AI has boogeymen of its own, whether it's killer robots, widespread surveillance, or even just automating all eight billion of us out of our jobs. Now, we could devote an entire program to each of those boogeymen, and maybe at some point we should.
But now that you have scared me, even in the act of reassuring me, and in fact, it throws me that you're so eager to reassure me, I think maybe I really should be even more scared than I am. Let me just go right down the list; here are the killer robots. Let me quote the late Henry Kissinger. I'm just going to put these up, and you may calm me down if you can. Henry Kissinger: if you imagine a war between China and the United States, you have artificial intelligence weapons.
Nobody has tested these things on a broad scale, and nobody can tell exactly what will happen when AI fighter planes on both sides interact. So you are then, I am quoting Henry Kissinger, who is not a fool after all, you are then in a world of potentially total destructiveness. Fei-Fei? >> Fei-Fei Li: So, like I said, I'm not denying how powerful these tools are. I mean, humanity before AI had already created tools and technology that are very destructive.
Or could be very destructive; we talked about the Manhattan Project, right? But that doesn't mean that we should collectively decide to use this tool in this destructive way. >> John Etchemendy: Okay, Peter, think back before you even had heard about artificial intelligence. >> Peter Robinson: Which actually, is it, five years ago? >> John Etchemendy: No, no. >> Peter Robinson: This is all happening so fast. >> John Etchemendy: Just five years ago, or ten years ago. >> Peter Robinson: Right.
>> John Etchemendy: Remember the tragic incident where an Iranian passenger plane was shot down flying over the Persian Gulf by an Aegis system? >> Peter Robinson: Yes, one of our ships. >> John Etchemendy: One of our ships, an automated system, because it had to be automated in order to be fast. >> Peter Robinson: Humans can't react that fast.
>> John Etchemendy: Yeah, exactly. And in this case, for reasons that I think are quite understandable once you understand the incident, it did something that was horrible. That's not different in kind from what you can do with AI. So we, as creators of these devices or as users of AI, have to be vigilant about what kind of use we put them to. And when we decide to put them to one particular use, and there may be uses, the military has many good uses for them.
We have to be vigilant that they do what we intend them to do rather than things we don't intend. >> Peter Robinson: So you're announcing a great theme, and that theme is that what Dr. Fei-Fei Li has invented makes the discipline to which you have dedicated your life, philosophy, even more important, not less so. >> Fei-Fei Li: Yeah, that's why we're [CROSSTALK] >> Peter Robinson: Makes the human being more important, not less so. Am I making that up? Am I being glib?
Or is that onto- >> John Etchemendy: Let me tell you a story. So Fei-Fei used to live next door to me, or close to next door to me. And I was talking. >> Peter Robinson: I'm not sure whether that would make me feel more safe or more exposed. >> John Etchemendy: And I was talking to her, I was still provost at this time. And she said to me, you and John Hennessy started a lot of institutes that brought technology into other parts of the university.
We need to start an institute that brings philosophy, and ethics, and the social sciences into AI, because AI is too dangerous to leave to the computer scientists alone. Nothing wrong with computer science. >> Peter Robinson: There are many stories about how hard it was to persuade him when he was provost. You succeeded. Just one more boogeyman, briefly.
>> Fei-Fei Li: Yeah. >> Peter Robinson: And we'll return to that theme that you just gave us there, and then we'll get back to the Stanford Institute. I'm quoting you again, this is from your memoir: the prospect of just automating all eight billion of us out of our jobs. That's the phrase you used? Well, it turns out that it took me mere seconds using my AI-enabled search algorithm,
search device, to find a Goldman Sachs study from last year predicting that in the United States and Europe, some two-thirds of all jobs could be automated, at least to some degree. So why shouldn't we all be terrified? Henry Kissinger, world apocalypse. All right, maybe that's a bit too much, but my job. >> Fei-Fei Li: So I think job change is real. Job change is real with every single technological advance that human civilization has faced. That is real, and it's not to be taken lightly.
We also have to be careful with the word job. Job tends to describe a holistic profession, something a person attaches his or her income and- >> Peter Robinson: An identity. >> Fei-Fei Li: Identity to. But within every job, pretty much within every job, there are so many tasks. It's hard to imagine there's one job that has only one singular task, right? Like being a professor, being a scholar, being a doctor, being a cook. All these jobs have multiple tasks.
What we are seeing is technology changing how some of these tasks can be done. And it's true, as it changes these tasks, some of them, some parts of them, could be automated; it's starting to change how the jobs are done. And eventually it's gonna impact jobs. So this is gonna be a gradual process, and it's very important we stay on top of this. This is why the Human-Centered AI Institute was founded: these questions are profound. They're by definition multidisciplinary.
Computer scientists alone cannot do all the economic analysis, and economists who don't understand what these computer programs do will not by themselves understand the shift in jobs. >> Peter Robinson: Okay, John, may I tell you. Go ahead. >> John Etchemendy: But let me just point something out. The Goldman Sachs study said that such and such a percentage of jobs will be automated, or can be automated, at least in part.
>> Peter Robinson: Yes. >> John Etchemendy: Now, what they're saying is that a certain number of the tasks that go into a particular job- >> Peter Robinson: Filing, research. >> John Etchemendy: Exactly. So, Peter, you said it only took you a few seconds to go to the computer and find that article. Guess what? That's one of the tasks that would have taken you a lot of time. So part of your job has been automated. >> Peter Robinson: Okay, now let me tell you a story.
>> Fei-Fei Li: But also empowered. >> John Etchemendy: Empowered. >> Peter Robinson: Empowered, okay, fine. Thank you, thank you, you're making me feel good. Now, let me tell you a story. All three of us live in California, which means all three of us probably have some friends down in Hollywood. And I have a friend who was involved in the writers' strike. Okay, and here's the problem: to run a sitcom, you used to run a writers' room.
And the writers' room would employ seven, a dozen; on The Simpsons, the cartoon show, they had a couple of writers' rooms running. They were employing 20. And these were the last kind of people you'd imagine a computer could replace, because they were well educated and witty and quick with words. And you think of computers as just running calculations, maybe spreadsheets, maybe someday they can eliminate accountants.
But writers, Hollywood writers. And it turns out, and my friend illustrated this for me by doing the artificial intelligence thing, where he gave it a prompt: draft a skit for Saturday Night Live in which Joe Biden and Donald Trump are playing beer pong. Fifteen seconds. Now, professionals could have tightened it up or improved it, but it was pretty funny, and it was instantaneous. And do you know what that means? That means you don't need four or five of the seven writers.
You need a senior writer to assign tasks to the artificial intelligence, and you need maybe one other writer or two other writers to tighten it up or redraft it. It is upon us. And your artificial intelligence is going to get bad press when it starts eliminating the jobs of the chattering classes. And that has already begun. Tell me I'm wrong. >> John Etchemendy: Do you know, before the agricultural revolution, something like 80, 90% of all the people in the United States were employed on farms?
>> Peter Robinson: Right. >> John Etchemendy: Now it's down to 2% or 3%. And those same farms, that same land, are far, far more productive. Now, would you say that your life, or anybody's life now, is worse off than it was, say, in the 1890s, when everybody was working on the farm? No. So, yes, you're right. It will change jobs. It will make some jobs easier. It will allow us to do things that we could not do before.
And, yes, it will allow fewer people to do more of what they were doing before. And consequently, there will be fewer people in that line of work. That's true. >> Peter Robinson: That is true. >> Fei-Fei Li: I also want to just point out two things. One is that jobs are always changing, and that change is always painful. And as computer scientists, as philosophers, also as citizens of the world, we should be empathetic about that. And nobody is saying we should just ignore the pain of that change.
So this is why we're studying this, we're trying to talk to policymakers, we're educating the population. In the meantime, I think we should give more credit to human creativity in the face of AI. I started to use this example that's not even AI. Think about advanced graphics technology, speaking of Hollywood, CGI and all that, right?
>> Peter Robinson: The video gaming industry, or- >> Fei-Fei Li: No, just animation and all that, right? One of many of our, including our children's, favorite animation studios is Studio Ghibli. Princess Mononoke, My Neighbor Totoro, Spirited Away, all of these were made during a period when computer graphics technology was far more advanced than these hand-drawn animations.
Yet the beauty, the creativity, the emotion, the uniqueness in these films continue to inspire and entertain humanity. So I think we need to still have that pride and also give the credit to humans. Let's not forget that our creativity and emotion and intelligence are unique; they're not going to be taken away by technology. >> Peter Robinson: Thank you, I feel slightly reassured. I'm still nervous about my job, but I feel slightly reassured.
But you mentioned government a moment ago, which leads us to how we should regulate AI. Let me give you two quotations; I'll begin, and I'm coming to the quotation from the two of you. But I'm going to start with a recent article in the Wall Street Journal by Senator Ted Cruz of Texas and former Senator Phil Gramm, also of Texas. The Clinton administration took a hands-off approach to regulating the early Internet. In so doing, it unleashed extraordinary economic growth and prosperity.
The Biden administration, by contrast, is impeding innovation in artificial intelligence with aggressive regulation. That's them; this is you, also in a recent article in the Wall Street Journal, John Etchemendy and Fei-Fei Li. President Biden has signed an executive order on artificial intelligence that demonstrates his administration's commitment to harness and govern the technology. President Biden has set the stage, and now it is time for Congress to act.
Cruz and Gramm, less regulation; Etchemendy and Li, the Biden administration has done well, now Congress needs to give us even more. >> John Etchemendy: No. >> Peter Robinson: All right, John, so. >> John Etchemendy: No, I don't agree with that. So I believe regulating any kind of technology is very difficult. And you have to be careful not to regulate too soon or too late. Let me give you another example. You talked about the Internet, and it's true.
The government really was quite hands-off, and that's good, it worked out. >> Peter Robinson: It worked out. >> John Etchemendy: But now let's also think about social media; it has not worked out exactly the way we wanted. We originally believed that we were gonna enter a golden age in which- >> Peter Robinson: Friendship, comedy. >> John Etchemendy: Well, and everybody would have a voice and we could all live together, Kumbaya and so forth. And that's not what happened.
>> Peter Robinson: Jonathan Haidt has a new book out on the particular pathologies among young people from all of this social media, and it's not just an argument. It's an argument, but it's based on lots of data. >> John Etchemendy: Yeah, [COUGH] so it seems to me, I'm in favor of very light-handed and informed regulation to try to put up sorta bumpers. I don't know what the analogy is. >> Fei-Fei Li: Guardrails. >> John Etchemendy: Guardrails for the technology.
I am not for heavy-handed, top-down regulation that stifles innovation. >> Peter Robinson: Okay, here's another, let me get onto this. I'm sure you'll be able to adapt your answers to this question, too. >> Fei-Fei Li: Okay. >> Peter Robinson: I'm continuing your Wall Street Journal piece. Big tech companies can't be left to govern themselves. Around here, Silicon Valley, those are fighting words.
Academic institutions should play a leading role in providing trustworthy assessments and benchmarking of these advanced technologies. We encourage an investment in human capital to bring more talent to the field of AI in academia and the government. Okay, now it is mandatory for me to say this, so please forgive me, my fellow Stanford employees. Apart from anything else, why should academic institutions be trusted?
Half the country has lost faith in academic institutions: DEI, the whole woke agenda, antisemitism on campus. We've got a recent Gallup poll showing the proportion of Americans who expressed a great deal or quite a lot of confidence in higher education this year came in at just 36%. And that is down from 57% in the last eight years. You are asking us to trust you at the very moment when we believe we have good reason to knock it off. Trust you?
Okay, Fei-Fei. >> Fei-Fei Li: So I'll start with the first half of the answer; I'm sure John has a lot to say. I do want to make sure, especially wearing the hats of co-directors of HAI. When we talk about the relationship between government and technology, we tend to use the word regulation. I really want to double-click; I want to use the word policy. And policy and regulation are related, but not the same.
When John and I wrote that Wall Street Journal opinion piece, we really were focusing on a piece of policy that is to resource public-sector AI, to resource academia. Because we believe that AI is such a powerful technology and science, and academia and the public sector still have a role to play in creating public goods. And public goods are curiosity-driven knowledge exploration.
Our cures for cancers, the maps of the biodiversity of our globe, our discovery of nanomaterials that we haven't seen before, different ways of expressing in theater, in writing, in music. These are public goods, and when we are collaborating with the government on policy, we're focusing on that. So I really want to make sure: on regulation, we all have personal opinions, but there's more to policy than regulation.
>> Peter Robinson: Let me make one last run at you here, although I'm asking questions that, I'm quite sure, make you want to take me out and swat me around at this point, John, but this is serious. You've got the Stanford Institute for Human-Centered Artificial Intelligence, and that's because you really think this is important. But we live in a democracy, and you're going to have to convince a whole lot of people. So let me take one more run at you and then hand it back to you.
John, your article in the Wall Street Journal. Again, let me repeat this: we encourage an investment in human capital to bring more talent to the field of AI in academia and the government. That means money; investment means money, and it means taxpayers' money. Here's what Cruz and Gramm say in the Wall Street Journal. The Biden regulatory policy on AI has everything to do with special interest rent-seeking.
Stanford faculty make well above the national average income; we are sitting at a university with an endowment of tens of billions of dollars. John, why is your article in the Wall Street Journal not the very kind of rent-seeking that Senator Cruz and Senator Gramm are describing? Are you kidding? >> John Etchemendy: Peter, let's take another example.
So one of the greatest policy decisions that this country has ever made was when Vannevar Bush, advisor to, at that time, President Truman, convinced- >> Peter Robinson: He stayed on through Eisenhower, as I recall. So it's important to know he's bipartisan. >> John Etchemendy: Exactly, no, it was not a partisan issue at all. But he convinced Truman to set up the NSF for funding- >> Peter Robinson: The National Science Foundation.
>> John Etchemendy: Right, for funding curiosity-based research, advanced research at the universities. That's not to say that companies don't have any role, not to say that government has no role. They both have roles, but they're different roles. And companies tend to be better at development, better at producing products, tapping into things that, within a year or two or three, can become a product that will be useful. Scientists at universities don't have that constraint.
They don't have to worry about when is this going to be commercial. >> Peter Robinson: Commercial, right. >> John Etchemendy: And that has, I think, had an incalculable effect on the prosperity of this country, on the fact that we are the leader in every technology field. It's not an accident that we're the leader in every technology field; we didn't use to be.
>> Peter Robinson: And does it affect your argument if I add that it also enabled us, or contributed, to victory in the Cold War, the weapon systems that came out of universities? All right. >> John Etchemendy: Well, no, absolutely, and President Reagan- >> Peter Robinson: It ended up being a defense of democracy; you could argue from all kinds of points of view that it was a good ROI for taxpayers' money.
>> John Etchemendy: So we're not arguing for higher salaries for faculty or anything of that sort. But we think, particularly in AI, it's gotten to the point where scientists at universities can no longer play in the game because of the cost of the computing, the inaccessibility of the data. That's why you see all of these developments coming out of companies. That's great, those are great developments.
But we need to have also people who are exploring these technologies without looking at the product, without being driven by the profit motive. And then eventually, hopefully, they will make discoveries that will then be commercializable. >> Peter Robinson: Okay, I noticed in your book, Fei-Fei, I was very struck that you said, I think it was about a decade ago, 2015, that you noticed you were beginning to lose colleagues to the private sector.
>> Fei-Fei Li: Yeah. >> Peter Robinson: Presumably because they just pay so phenomenally well around here in Silicon Valley. But then there's also the point that to make progress in AI, you need an enormous amount of computational power, and assembling all those ones and zeros is extremely expensive. >> Fei-Fei Li: Exactly. >> Peter Robinson: ChatGPT, what is the parent company? >> Fei-Fei Li: OpenAI. >> Peter Robinson: OpenAI got started with an initial investment of a billion dollars.
An initial, friends-and-family capital of a billion dollars is a lot of money, even around here. Okay, that's the point you're making. >> Fei-Fei Li: Yes. >> Peter Robinson: All right, it feels to me as though every one of these topics is worth a day-long seminar; actually, I think they are. >> John Etchemendy: And by the way, this has happened before, where the science has become so expensive that university researchers could no longer afford to do the science.
It happened in high-energy physics. High-energy physics used to mean you had a Van de Graaff generator in your office [LAUGH] and that was your accelerator. >> Peter Robinson: And you could do what you needed to do. >> John Etchemendy: And then the energy levels got higher and higher. And what happened? Well, the federal government stepped in and said, we're gonna build an accelerator, Stanford- >> Peter Robinson: The Stanford Linear Accelerator. >> John Etchemendy: Exactly.
>> Peter Robinson: Sandia Labs, Lawrence Livermore, all of these are, at least in part, federal establishments. >> Fei-Fei Li: CERN. >> Peter Robinson: CERN, which is European, right. >> John Etchemendy: Well, Fermilab; the first accelerator was SLAC, the Stanford Linear Accelerator Center, then Fermilab, and so on and so forth. Now, CERN is actually late in the game, and it's a European consortium.
But the thing is, we could not continue the science without the help of the government, in government [INAUDIBLE]. >> Fei-Fei Li: Well, there's another; in addition to high-energy physics, there's bio, right? Especially with genetic sequencing and high-throughput genomics, biotech is also changing. And now you see a new wave of biology labs that are heavily funded by a combination of government and philanthropy and all that.
And that stepped in to supplement the traditional university model. And so we're now here with AI and computer science. >> Peter Robinson: Okay, we have to do another show on that one alone, I think. The singularity. Good, this is good. Reassuring, you both are rolling your eyes. Wonderful, I feel better about this already, good. Ray Kurzweil, you know exactly where this is going.
Ray Kurzweil writes a book in 2005; it gets everybody's attention and still scares lots of people to death, including me. The book is called The Singularity Is Near. And Kurzweil predicts a singularity that will involve, and I'm quoting him, the merger of human technology with human intelligence. He's not saying the tech will mimic human intelligence more and more closely; he is saying they will merge.
I set the date for the singularity, representing a profound and disruptive transformation in human capability, as 2045. Okay, that's the first quotation. Here's the second, and this comes from the Stanford course catalog's description of Philosophy of Artificial Intelligence, a freshman seminar that was taught last quarter, as I recall, by one John Etchemendy. Here's from the description.
Is it really possible for an artificial system to achieve genuine intelligence, thoughts, consciousness, emotions? What would that mean? John, is it possible? What would it mean? >> John Etchemendy: I think the answer is actually no. >> Peter Robinson: Thank goodness you kept me waiting for a moment. >> John Etchemendy: The fantasies that Ray Kurzweil and others have been spinning up, I guess that's the way to put it.
They stem from a lack of understanding of how the human being really works, of how crucial biology is to the way we work, the way we are motivated, how we get desires, how we get goals, how we become humans, become people. And what AI has done so far: AI is capturing what you might think of as the information-processing piece of what we do. So part of what we do is information processing. >> Peter Robinson: So it's got the right frontal cortex but hasn't got the left frontal cortex.
>> John Etchemendy: Yeah, it's an oversimplification, but yes. >> Peter Robinson: Imagine that on television, all right. >> John Etchemendy: So I actually think, first of all, the date 2045 is insane. That will not happen. And secondly, it's not even clear to me that we will ever get there. >> Fei-Fei Li: Wait, I can't believe I'm saying this. In his defense, I don't think he's saying that 2045 is the day that the machines become conscious beings like humans.
It's more an inflection point of the power of the technology that is disrupting society. >> Peter Robinson: He's right, we're already there. >> Fei-Fei Li: Exactly, that's what I'm saying. >> John Etchemendy: I think you're being overly generous. [LAUGH] I think that what he means by the singularity is the date at which we create an artificial intelligence system that can improve itself and then get into a cycle, a recursive cycle, where it becomes a superintelligence.
>> Peter Robinson: Yes. >> John Etchemendy: And I deny that. >> Peter Robinson: He's playing the 2001: A Space Odyssey game here. Different question, but related question. In some ways, this is a more serious question, I think, although that's serious, too. Here's the late Henry Kissinger again, quote. We live in a world which has no philosophy. There is no dominant philosophical view.
So the technologists can run wild, they can develop world changing things, and there's nobody to say, we've got to integrate this into something. All right, I'm going to put it crudely again, but in China a century ago, we still had Confucian thought dominant among, at least among the educated classes. On my very thin understanding of Chinese history.
In this country, until the day before yesterday, we still spoke, without irony, of the Judeo-Christian tradition, which involved certain concepts about morality, about what it meant to be human. It assumed a belief in God, but it turned out you could actually get pretty far along even if you didn't believe. Okay, and Kissinger is now saying it's all fallen apart. There is no dominant philosophy. This is a serious problem, is it not? There's nothing to integrate AI into. You take his point.
It's up to the two of you to-. >> Fei-Fei Li: You are the philosopher. >> John Etchemendy: You're the Buddhist. >> Fei-Fei Li: You're the philosopher. I think this is great. First of all, thank you for that quote; I hadn't read that quote from Henry Kissinger. I mean, this is why we founded the Human-Centered AI Institute. These are the fundamental questions that our generation needs to figure out. >> Peter Robinson: So that's not just a question. That's the question.
>> Fei-Fei Li: It is one of the fundamental questions. It's also one of the fundamental questions that illustrates why universities are still relevant today. >> John Etchemendy: And Peter, one of the things that Henry Kissinger says in that quote is that there is no dominant philosophy. >> Peter Robinson: Yes. >> John Etchemendy: There's no one dominant philosophy like the Judeo-Christian tradition, which used to be dominant.
>> Peter Robinson: It's a different conversation in Paris in the 12th century, for example, the University of Paris. >> John Etchemendy: In order to take values into account when you're creating an AI system, you don't need a dominant tradition. What you need, for example, for most ethical traditions, is the golden rule. >> Peter Robinson: Okay, so we can still get along with each other. Even when it comes to deep, deep questions of value such as this, we still have enough common ground.
>> John Etchemendy: I believe so. >> Peter Robinson: I heave yet another sigh of relief. Okay, let's talk a little bit. We're talking a little bit about a lot of things here, but so it is. Let us speak of many things, as it is written in Alice in Wonderland: the Stanford Institute. The Stanford Institute for Human-Centered Artificial Intelligence, of which you are co-directors. And I just have two questions; respond as you'd like.
Can you give me some taste, some feel, for what you're doing now, and, in some ways more important but more elusive, where you'd like to be in just five years, say? Everything in this field is moving. My impulse is to say ten years, because it's a rounder number, but it's too far off in this field. Fei-Fei. >> Fei-Fei Li: I think about what really has happened in the past five years with Stanford HAI, among many things.
>> Peter Robinson: I just wanna make sure everybody is following you: H-A-I, Stanford HAI, is the way it's known on this campus. >> Fei-Fei Li: Yes. >> Peter Robinson: All right, go ahead. >> Fei-Fei Li: Yeah, it is that we have put a stake in the ground, for Stanford as well as for everybody, that this is an interdisciplinary study. That AI, artificial intelligence, is a science of its own. It's a powerful tool.
And what happens is that you can welcome so many disciplines to cross-pollinate around the topic of AI, or use the tools of AI to make other sciences happen or to explore other new ideas. And that concept of making this an interdisciplinary and multidisciplinary field is what I think Stanford HAI brought to Stanford, and also, hopefully, to the world. Because, like you said, computer science is kind of a new field. The late John McCarthy coined the term in the 1950s.
Now it's moving so fast, everybody feels it's just a niche computer science field that's just making its way into the future. But we are saying, no, look broader; there are so many disciplines that can be brought in here. >> Peter Robinson: Who competes with the Stanford Institute for Human-Centered Artificial Intelligence? Is there such an institute at Harvard or Oxford or Beijing?
I just don't know what those- >> John Etchemendy: So in the five years since we launched, there have been a number of similar institutes that have been created at other universities. We don't see that as competition in any way. >> Peter Robinson: If these arguments you've been making are valid, then we need them. >> Fei-Fei Li: Yeah, we see that as a movement. >> John Etchemendy: We need them.
And part of what we want to do, and part of what I think we've succeeded, to a certain extent, in doing, is communicating this vision of the importance of keeping the human and human values at the center when we are developing this technology, when we are applying this technology. And we want to communicate that to the world. We want other centers to adopt a similar standpoint. And importantly, one of the things that Fei-Fei didn't mention is that one of the things we try to do is educate.
And educate, for example, legislators, so that they understand what this technology is, what it can do, what it can't do. >> Peter Robinson: So you're traveling to Washington, or the very generous trustees of this institution are bringing congressional staff here, or both? >> John Etchemendy: Both. >> Peter Robinson: Both are happening.
>> Fei-Fei Li: Yeah. >> Peter Robinson: All right, so, first of all, did you teach that course in Stanford HAI, or was the course located in the philosophy department, or cross-listed? I'm just trying to get a feel for what's actually taking place there now. >> John Etchemendy: Yeah, I actually taught it in the confines of the HAI building. >> Peter Robinson: [LAUGH] Okay, so it's in HAI. >> John Etchemendy: No, it's a philosophy course.
>> Fei-Fei Li: It's listed as a philosophy course, but taught in HAI. >> Peter Robinson: He's the former provost; he's a walking interdisciplinary wonder. >> Fei-Fei Li: Yeah. >> Peter Robinson: And your work in AI-assisted healthcare. >> Fei-Fei Li: Yep. >> Peter Robinson: Is that taking place in HAI, or is it at the university medical school? >> Fei-Fei Li: Well, that's the beauty: it's taking place in HAI, the computer science department, the medical school.
It even has collaborators from the Law School, from the Political Science Department. So that's the beauty, it's deeply interdisciplinary. >> Peter Robinson: If I were the provost, I'd say this is starting to sound like something that's about to run amok. Doesn't that sound a little too interdisciplinary, John? Don't we need to define things a little bit here?
>> John Etchemendy: Let me say something. So Steve Denning, who was the chair of our Board of Trustees for many years and has been a longtime supporter of the university in many, many ways. In fact, we are the Denning co-directors of Stanford HAI. Steve saw this five, six years ago. He said, AI is going to impact every department at this university, and we need to have an institute that makes sure that happens the right way, that that impact does not run amok.
>> Peter Robinson: All right, where would you like to be in five years? What's a course you'd like to be teaching in five years, what's a special project? >> Fei-Fei Li: I would like to teach a freshman seminar called The Greatest Discoveries by AI. >> Peter Robinson: Okay, a last question. I have one last question, but that does not mean that each of you has to hold yourself to one last answer, because it's a kind of open-ended question.
I have a theory, but all I do is wander around this campus. The two of you are deeply embedded here, and you ran the place for 17 years. So you'll know more than I will, including, you may know that my theory is wrong, but I'm going to trot it out, modest though it may be, even so. Milton Friedman, the late Milton Friedman, who when I first arrived here was a colleague at the Hoover Institution.
In fact, by some miracle, his office was on the same hallway as mine and I used to stop in on him from time to time. He told me that he went into economics because he grew up during the Depression, and the overriding question in the country at that time was how do we satisfy our material needs? There were millions of people without jobs, there really were people who had trouble feeding their families. All right, I think of my own generation, which is more or less John's generation.
You come much later, Fei-Fei. >> Fei-Fei Li: Thank you [LAUGH] >> Peter Robinson: And for us, I don't know what kind of discussions you had in the dorm room, but when I was in college there were bull sessions about the Cold War. The Cold War was real to our generation. That was the overriding question, how can we defend our way of life, how can we defend our fundamental principles? All right, here's my theory.
For current students, they've grown up in a period of unimaginable prosperity; material needs are just not the problem. They have also grown up during a period of relative peace. The Cold War ended; you could put different dates on it. The Soviet Union declared itself defunct in 1991, so the Cold War was over at that moment at the latest. The overriding question for these kids today is meaning: what is it all for, why are we here? What does it mean to be human? What's the difference between us and the machines?
And if my little theory is correct, then by some miracle this technological marvel that you have produced will lead to a new flowering of the humanities. Do you go for that, John? >> John Etchemendy: Do I go for it? I would go for it- [LAUGH] If it were going to happen. >> Peter Robinson: Did I put that in a slightly sloppy way? >> John Etchemendy: No, I think it would be wonderful, it's something to hope for. Now I'm going to be the cynic.
So far, what I see in students is more and more focus, for Stanford students, more and more focus on technology. >> Peter Robinson: Computer science is still the biggest major at this university. >> Fei-Fei Li: Yeah. >> John Etchemendy: Yeah, and we have tried, at HAI we have actually started a program called Embedded EthiCS, where the CS at the end of ethics is capitalized, so it's Computer Science.
>> Peter Robinson: That'll catch the kids' attention. [LAUGH] >> John Etchemendy: No, we don't have to catch their attention. What we do is, virtually all of the introductory courses in computer science have ethics components built in. So you have a problem set this week, and that'll have a whole bunch of very difficult math problems, computer science problems, and then it will have a very difficult ethical challenge.
And it'll say, here's the situation: you are programming an AI system, and here's the dilemma. Now discuss, right, what are you gonna do? So we're trying to bring, I mean, this is what Fei-Fei wanted. We're trying to bring- >> Peter Robinson: This is new within? >> John Etchemendy: Ethics within, yeah, the last couple of years. >> Peter Robinson: Okay. >> John Etchemendy: Two, three years, we're trying to bring attention to ethics into the computer science curriculum.
And partly that's because students tend to follow the path of least resistance. >> Peter Robinson: Well they also, let's put it again, I'm saying things crudely again and again, but someone must say it, they follow the money. So as long as this valley that surrounds us rewards brilliant young kids from Stanford with CS degrees as richly as it does, and it is amazingly richly, they'll go get CS degrees, right? >> Fei-Fei Li: Well, I do think it's a little crude.
[LAUGH] I think money is one surrogate measure of what is advancing in our time. Technology right now truly is one of the biggest drivers of the changes in our civilization. When you're talking about what this generation of students talks about, I was just thinking that 400 years ago, when the scientific revolution was happening, what was in the dorms? Of course, it was all young men [LAUGH] in Cambridge or Oxford, but that must also have been a very exciting and interesting time.
Of course, there wasn't Internet and social media to propel the travel of the knowledge. But imagine there was, with that blossoming of discovery and of our understanding of the physical world. Right now, we're in that kind of great era of technological blossoming. It's a digital revolution. So the conversations in the dorm, I think, are a blend of the meaning of who we are as humans as well as our relationship to this technology we're building.
And so it's a- >> Peter Robinson: So properly taught, technology can subsume or embed philosophy, literature. >> Fei-Fei Li: Of course, can inspire, can inspire. And also think about it: what follows a scientific revolution is a great period of change, of political, social, economic change, right? And we're seeing that. >> Peter Robinson: All for the better, is that right?
>> Fei-Fei Li: And I'm not saying it's necessarily for the better, but we haven't even reached the peak of the digital revolution, and we're already seeing the political, socioeconomic changes. So this is, again, back to Stanford HAI: when we founded it five years ago, we believed all this was happening, and that this is an institute where these kinds of conversations, ideas, debates should be taking place, where education programs should be happening. And that's part of the reason we did this.
>> John Etchemendy: Let me tell you. Yeah, so, as you pointed out, I just finished teaching a course called Philosophy of Artificial Intelligence. >> Peter Robinson: About which I found out too late; I would have asked permission to audit your course, John. >> John Etchemendy: No, you're too old. [LAUGH] And about half of the students were computer science students or planned to be computer science majors.
Another quarter planned to be Symbolic Systems majors, which is a major related to computer science. And then there was a smattering of others. And these were people, every one of them, at the end of the course, and I'm not saying this to brag, every one of them said, this is the best course we've ever taken. And why did they say that? It inspired them, it made them think.
It gave them a framework for thinking, a framework for trying to address some of these problems, some of the worries that you've brought out today, and how do we think about them? And how do we not just become panicked because of some science fiction movie that we've seen or because we read Ray Kurzweil. [LAUGH] So. >> Peter Robinson: Maybe it's just as well I didn't take the course. I'm sure John would have given me a C minus at best. >> John Etchemendy: Grade inflation.
[LAUGH] So it's clear that these kids, these students, are looking for the opening to think about these things and to understand how to address ethical questions, how to address hard philosophical questions. And that's what they got out of the course. >> Fei-Fei Li: And that's a way of looking for meaning in this time. >> Peter Robinson: Yes, it is. Dr. Fei-Fei Li and Dr. John Etchemendy, both of the Stanford Institute for Human-Centered Artificial Intelligence, thank you. >> Fei-Fei Li: Thank you, Peter.
>> John Etchemendy: Thank you, Peter. >> Peter Robinson: For Uncommon Knowledge, the Hoover Institution, and Fox Nation, I'm Peter Robinson. [MUSIC]