Thanks for tuning into Tech Stuff. If you don't recognize my voice, my name is Oz Woloshyn, and I'm here because the inimitable Jonathan Strickland has passed the baton to Kara Price and myself to host Tech Stuff. The show will remain your home for all things tech, and all the old episodes will remain available in this feed.
Thanks for listening.
I was in a hotel in California and I saw that the phone lit up, and I thought, who's calling me at one o'clock in the morning? And then this Swedish voice came on. Then they said I won the Nobel Prize in Physics, and I thought, this is very odd. I don't do physics, so that's when I thought it might be a prank.
Geoffrey Hinton won the twenty twenty four Nobel Prize in Physics, an honor held by Albert Einstein and Marie Curie. A certain J. Robert Oppenheimer was shortlisted but never won.
My big hope was to win the Nobel Prize in Physiology or Medicine for figuring out how the brain worked. And what I didn't realize is you could fail to figure out how the brain worked and still get a Nobel Prize anyway.
Welcome to Tech Stuff. This is The Story, with our guest, the Nobel Laureate Geoffrey Hinton. Each week on Wednesdays, we bring you an in-depth interview with someone who's at the forefront of technology or who can unlock a world where tech is at its most fascinating. His recent Nobel Prize win was for, quote, foundational discoveries and inventions that
enable machine learning with artificial neural networks. Now, artificial neural networks are learning models inspired by the network of neurons present in the human brain, and Hinton's desire to figure out the brain was a key inspiration for his pioneering work on AI. I was particularly fascinated by Hinton because his work went completely against the mainstream of computer science for decades, and yet he stuck to his guns. It's an incredible story of dedication in the face of personal loss.
Also fascinating is Hinton's relationship to his own creation. Here's what he said at the Nobel Prize banquet.
There is also a longer term existential threat that will arise when we create digital beings that are more intelligent than ourselves. We have no idea whether we can stay in control. But we now have evidence that if they are created by companies motivated by short term profits, our safety will not be the top priority. We urgently need research on how to prevent these new beings from wanting to take control. They are no longer science fiction, thank you.
So when I got the opportunity to sit down with Jeffrey Hinton, I wanted to know how he went from someone who wanted to understand the relationship between the mind and the brain to someone who paved the path for AI as we know it. And he obliged me by telling me about his trajectory from student to researcher, to professor, to Google employee to finally AI safety advocate.
Am I right in thinking, with all due respect to Steve Jobs and Bill Gates and Mark Zuckerberg, that you were, in a sense, the original college dropout?
Not in the sense they dropped out. Because what I did was, I went to Cambridge and after a month I dropped out, but then I went back the next year. And then while I was doing my PhD, I dropped out, but then I finished it. So I'm not like them.
I went back. I'm a failed to drop out.
Failed to drop out, that's good. What were the reasons for that? Was it uncertainty or ambivalence or curiosity?
When I first went to Cambridge, it was the first time I'd ever lived away from home, and my image of myself had always been that I was one of the clever ones. And when I went to Cambridge, I wasn't one of the clever ones. Everybody was clever, and I found it very stressful. I worked very, very hard, so I was working like twelve hours a day doing science, and the combination of working very hard to keep up and living away from home for the first time was just too much for me.
So there was a fantastic profile of you in the New Yorker which said, quote, Hinton tried different fields but was dismayed to find that he was never the brightest student in any given class, which made me smile. But I guess dismay and stress are quite close cousins. And I suppose the stakes for you were high, given the family you came from. Can you speak a little bit about that?
Yes, I had a lot of pressure from my father to be an academic success, and my mother sort of went along with it. So from an early age I realized that's what I had to achieve, and that's a lot of pressure.
How did they exert that pressure? How are you aware of it?
My father was a slightly strange character. He grew up in Mexico during all the revolutions without a mother, somewhat odd. Every morning when I went to school, not every morning, but quite often, as I left, he would say, get in there pitching. If you work very hard, when you're twice as old as me, you might be half as good.
Well that's sort of pressure.
Did you find that motivating?
I found it irritating, but I think it probably was motivating. He, very inconsiderately, died while I was writing my thesis, and he never saw me being a success.
You were at Cambridge, you left briefly and you came back. I think you settled on experimental psychology as your degree.
In the end, I was doing natural sciences.
I started off doing physics, chemistry and crystalline state, because of the success in decoding the structure of DNA, crystalline state was a big thing, and I left after a month. Then I reapplied to do architecture. I've always liked architecture, and after a day I decided I was never going to be any good at architecture, because I wasn't artistic enough. I loved the engineering, but the artistic bit I couldn't do very well. So I switched back to science.
But then I did physics, chemistry and physiology, and I really liked physiology. I'd never been allowed to do biology at school.
My father wouldn't allow it.
Why not?
Ah well, he said, if you do biology, they'll teach you genetics. And he was a Stalinist and he didn't believe in genetics. Now, he was also a Fellow of the Royal Society, in biology, who didn't believe in genetics.
Gosh, a complicated man. Yes, but I mean, I'm not exaggerating, he really said to you, you can't study biology?
Now, he had other reasons which weren't so bad, which is, you can always pick up biology when you're older. What you can't pick up when you're older is math. And I think that's probably true, and so that was a more valid reason.
Yeah, yeah. So did you end up graduating from Cambridge with a degree in psychology?
So I did physics, chemistry and physiology for a year, and I did very well in physics. I got a first in physics. That's obviously a good predictor of a Nobel Prize.
So your tutors would have been pleased.
Then I dropped it all and did philosophy. I did a year of philosophy and I developed strong antibodies, and then I switched to psychology, and so my final degree was in psychology.
Mm-hmm. And was there a single question or set of questions that you were in search of?
Yes, I wanted to know how the brain worked, and how the mind worked, and what the relationship was. I decided fairly early on that you're never going to understand the mind unless you understand the brain.
Was that a popular view at the time.
No, not really.
There was this kind of functionalist view, basically the view that came from computer software, which is that the software is totally different from the hardware, and what the mind is all about is software, the heuristics you use and the way you represent things. The hardware has got nothing to do with it. That's a completely crazy view, but it seemed very plausible at the time, because the computers we designed, so that we could program them, had programs as a completely separate domain from the hardware.
But that's not how the real brain is.
It's funny, this constant dance between us expecting our computers to be like us, and then expecting us to be like our computers, right? A kind of continual dance between those two things.
Yes, well, you always try and understand things in terms of the latest technology. So when telephones were new, the brain was clearly a very large telephone switchboard.
But is it different this time, now that AI has taken off and become ubiquitous? Do you think this is indeed more than the telephone?
Yes, I think these artificial neural networks we're training are in many respects quite like real neural networks. Obviously, the neurons are much simpler. There's all sorts of properties that are different in the brain, but basically they're working in the same way. They learn things by changing connection strengths between neurons, just like the brain does.
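To make "learning by changing connection strengths" concrete, here is a minimal, purely illustrative sketch in Python: a single artificial neuron whose weights are nudged by gradient descent until it reproduces a simple target. It is a toy of my own, not anything Hinton built, and the numbers are arbitrary.

```python
# Toy illustration of "learning by changing connection strengths":
# a single artificial neuron adjusts its weights to reduce error on
# a simple target (the OR function). Entirely illustrative; real
# networks, and real brains, are vastly more complicated.
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(2)          # the "connection strengths"
b = 0.0                             # bias

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 1], dtype=float)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for _ in range(2000):
    pred = sigmoid(X @ w + b)                     # neuron's output
    err = pred - y                                # how wrong it is
    grad = err * pred * (1 - pred)                # gradient of squared error
    # Gradient descent: nudge each connection strength to reduce the error.
    w -= 0.5 * X.T @ grad / len(X)
    b -= 0.5 * np.mean(grad)

print(np.round(sigmoid(X @ w + b), 2))            # moves toward [0, 1, 1, 1]
```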
And when did that question for you, of wanting to understand how the brain works, really begin?
When I was at high school, so even before I went to Cambridge, I had a very bright friend who was always smarter than me, called Inman Harvey, who came into school one day and said, maybe memories in the brain are spread out over the whole brain, they're not localized, like a hologram. So he was trying to understand memories in the brain in terms of this new technology of holograms.
And you were stimulated by this idea.
I was very stimulated by that.
It's a very interesting idea, and ever since then I've thought a lot about how memories are represented in the brain. And then that also led into, well, how does the brain learn stuff?
Coming up: how Geoffrey Hinton ended up at Google.
Stay with us.
So bear with me, but I need to try and summarize what the Boltzmann machine is, because the Nobel Prize Committee credited the Boltzmann machine with your win. According to their press release, the Boltzmann machine can, quote, learn to recognize characteristic elements in a given type of data. The machine is trained by feeding it examples that are very likely to arise. The Boltzmann machine can be used to classify images or create new examples of the type of pattern on which it was trained. Hinton has built upon this work, helping initiate the current explosive development of machine learning. Now, have I got the timeline right? You graduated from Cambridge, you worked on your PhD, and in the eighties you wound up at Carnegie Mellon, and that's where your work on the Boltzmann machine really took off.
Uh yes, just before I went to Carnegie Mellon.
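For readers curious about the mechanics behind that press-release description, here is a rough sketch in code of a restricted Boltzmann machine, a simplified relative of the full Boltzmann machine Hinton was cited for, trained with one step of contrastive divergence. Everything here, from the toy patterns to the learning rate, is illustrative and not taken from Hinton's papers.

```python
# Minimal sketch of a restricted Boltzmann machine (RBM) trained with
# one step of contrastive divergence (CD-1). Purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden = 6, 3                      # toy sizes
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
b_v = np.zeros(n_visible)                       # visible biases
b_h = np.zeros(n_hidden)                        # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, lr=0.1):
    """One contrastive-divergence step on a single binary example v0."""
    global W, b_v, b_h
    # Up pass: infer and sample hidden units given the data.
    p_h0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(n_hidden) < p_h0).astype(float)
    # Down pass: reconstruct the visible units, then re-infer hiddens.
    p_v1 = sigmoid(h0 @ W.T + b_v)
    v1 = (rng.random(n_visible) < p_v1).astype(float)
    p_h1 = sigmoid(v1 @ W + b_h)
    # Learning: strengthen connections active in the data phase,
    # weaken those active in the reconstruction phase.
    W += lr * (np.outer(v0, p_h0) - np.outer(v1, p_h1))
    b_v += lr * (v0 - v1)
    b_h += lr * (p_h0 - p_h1)

# Train on a few binary patterns the machine should come to "recognize".
patterns = np.array([[1, 1, 1, 0, 0, 0],
                     [0, 0, 0, 1, 1, 1]], dtype=float)
for _ in range(1000):
    cd1_update(patterns[rng.integers(len(patterns))])
```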
Now, am I right in thinking that, at the time, there was a kind of debate about artificial intelligence that you were on the unpopular side of?
Okay, so there's two things to say about that. From the earliest days of AI in the nineteen fifties, there were these two approaches, two kinds of paradigms for how you build an intelligent system. One was inspired by the brain, that was neural networks. Then the other paradigm was, no, no, no, no, logic is the paradigm for intelligence. Intelligence is all about reasoning.
Learning comes later, once we've figured out how reasoning works, and reasoning depends on having the right representations for things. So we have to figure out what kind of logically unambiguous language the brain is using so that it can use rules to do reasoning. It's a completely different approach, and it's very unbiological, because reasoning is something that comes very late. And actually, for most of the second half of the last century, neural networks weren't seen as AI. AI was believing you have symbolic rules in your head, and then you manipulate them using rules, using discrete symbolic expressions.
And what were neural networks seen as? Were they seen as statistics, or physics?
They were seen as this crazy idea that you could take this random neural network and just learn everything, which was obviously hopeless.
And what gave you this conviction?
Well, the brain has to learn somehow, and of course there's a lot of innate structure wired in, which explains why a goat can sort of drop out of the womb and five minutes later it's walking. But stuff like learning to read, that's all learned. That's not innate, and we have to figure out how that works, and it's not symbolic rules.
This conviction that you have now, did you always have it?
Yes?
I'm sorry to say, it seemed to me it was obviously right. I think part of that is I was sent to a private school when I was seven from an atheist family, and they started telling me about religion and all this rubbish, all the stuff that was obviously nonsense, and I had the experience of being the only kid at school who thought this was all nonsense and turning out to be right.
And that's very good training for a scientist.
I brought it up earlier, but there was a wonderful profile of you in the New Yorker titled Why the Godfather of AI Fears What He's Built, and there was a quote that I found quite stunning where you said, I was dead in the water at forty six.
Yes, my wife and I adopted two children, two babies, and then my wife got ovarian cancer. But she also, even though she was a scientist, she started believing in homeopathy, and she tried treating her ovarian cancer with homeopathy, and so she died. And I was left with two young children, one of three and one of five, who were both very upset, as was I, and I began to appreciate what life is like for female academics, which is impossible.
You can't. Looking after small children makes it very, very hard to spend long periods of time thinking about your current idea. It's just very difficult.
So you were bereaved, you were looking after two small children, and then, yes, came that moment in twenty twelve when you published a paper called ImageNet Classification with Deep Convolutional Neural Networks, which to the layman doesn't sound like something that would change the world and how we live, but it did.
Well, it changed the views of people in computer vision and the views of people in other areas of computer science. It basically showed neural networks actually do work. Now, people had shown that before, but it hadn't convinced people in the same way.
So, you published that paper in twenty twelve with Ilya Sutskever, who went on to be a very important figure at OpenAI, and I want to talk more about him. But within a few days of the publication of that paper, you had an offer of millions and millions of dollars to move to China.
Ah. Yeah, either to move to China or to let them invest in our group.
I think it was a bit longer than a few days, but it was. It was that fall for sure.
Did you kind of know, Okay, we're going to publish and this is going to change everything or are you surprised by this response.
We thought it would have a big effect. We didn't realize quite how big.
And when you got the call from Baidu, the Chinese tech company, what did you think? I mean, it must have been tempting.
Uh, yes. I think they said they could fund our group, or we could go and work for them, or various possible alternatives. And in the end I just asked them, well, how much money are we talking about? Are we talking about, like, ten million dollars? And they said yes.
You lowballed yourself.
I didn't think it at the time.
I thought of the biggest number I could think of, which was five million dollars, and doubled it. So once they said that, I realized we had no idea how much we were worth. And so at that point we decided to set up an auction.
What does it mean to set up an auction for an academic paper?
The three of us founded a company. The company that belonged to the three of us owned these six patent applications, so then we had something to sell, but mainly we were selling ourselves. But I insisted that I still be an academic so I could continue to advise my current students. That was a big problem, because Google had never done that before. They were one of the companies that bid for us. So Baidu, Google, Microsoft, and DeepMind, which wasn't owned by Google then.
And not only do you invent this well together with your colleagues, this advancement of neural nets, but you also invented your own process to sell this company essentially, right.
Yeah, we decided we'd just have an auction by Gmail, and you'd send me a Gmail with your bid, and the time of the bid would be the timestamp on the Gmail. I would then immediately send the bid to all the other bidders, and you had to raise by a million dollars, and if there was no bid within an hour of the last bid, that was the end of the auction. I was amazed to see that the bids would come in fifty-nine minutes after the previous bid. We'd be sitting there thinking, okay, it's over, and then fifty-nine minutes later a bid would come in.
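As a protocol, the auction Hinton describes is simple enough to model in a few lines. The sketch below encodes just the rules he states: bids ordered by email timestamp, a minimum raise of a million dollars, and closure an hour after the last bid. The bidder names, times, and amounts are invented for illustration, not the real bids.

```python
# Illustrative model of the Gmail auction rules described above.
from datetime import datetime, timedelta

MIN_RAISE = 1_000_000
CLOSE_AFTER = timedelta(hours=1)

def run_auction(bids):
    """bids: list of (timestamp, bidder, amount), in arrival order."""
    high_bid, high_bidder = 0, None
    last_bid_time = None
    for ts, bidder, amount in bids:
        # The auction is over if an hour has elapsed since the last valid bid.
        if last_bid_time is not None and ts - last_bid_time > CLOSE_AFTER:
            break
        # A valid bid must raise by at least the minimum increment.
        if amount >= high_bid + MIN_RAISE:
            high_bid, high_bidder, last_bid_time = amount, bidder, ts
    return high_bidder, high_bid

# Hypothetical bids, each arriving just inside the one-hour window.
bids = [
    (datetime(2012, 12, 1, 9, 0),   "Company A", 15_000_000),
    (datetime(2012, 12, 1, 9, 59),  "Company B", 16_000_000),
    (datetime(2012, 12, 1, 10, 58), "Company A", 17_000_000),
]
print(run_auction(bids))   # -> ('Company A', 17000000)
```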
Only if everybody trusted Gmail.
I had worked at Google over the summer, and that meant two things. One, I did trust that they wouldn't read my Gmail. I knew they were very serious about that. And also, I really wanted Google to win the competition, because I really liked Jeff Dean, who ran the Brain team at Google. So I wanted Google to win, and in the end we slightly fudged it. So after DeepMind and Microsoft had dropped out, it was between Google and Baidu, and we were scared Baidu would win, and I didn't want to go to China. I couldn't travel at that point. Well, so I felt I wouldn't really understand what was going on in a Chinese company. So it got to forty-four million, and we just said, we've got an offer we can't refuse, and it's the end of the auction. And the offer we couldn't refuse was to work with Jeff Dean.
So you got what you wanted, in the sense of...
We got more money than we could imagine, and we got to work at the company that I wanted most to work with.
When we come back: Nobel Laureate Geoffrey Hinton on why he advocates for AI safety. In twenty eighteen, you received the Turing Award, which is kind of like the Nobel Prize for computer science. But also in twenty eighteen, you were widowed for a second time. You lost your wife Rosalind in nineteen ninety four, and then twenty four years later your wife Jackie passed away, also from cancer.
Yes, that was difficult. My children were much older then, so I didn't have the problem of having to cope with young children at the same time as everything else. And Google was very understanding.
Part of my deal had been that I would spend three months a year in Silicon Valley, and they let me out of that and said I could spend my whole time in Toronto, and they helped me set up a small lab doing basic research in Toronto, so that was much less stressful, so I could be with my wife.
She had cancer as well, she got pancreatic cancer.
One of the most striking things, again in the New Yorker piece, was the way you talked about observing the way in which Rosalind and Jackie approached their cancer as a mental model for how to think about the implications of artificial intelligence.
Yeah, that's a rather sort of dark scenario.
So occasionally, when I think, well, this stuff probably will take over, which I sometimes think, then there's the issue of, should you go into denial and say, no, no, no, no, this can't possibly happen, which is what my first wife did. Actually, she wasn't my first wife, I was married a long time before that, just briefly. And so Ros went into denial, and Jackie was very realistic about it. And maybe we should be very realistic about the possibility machines will take over. We should do our best to make sure that doesn't happen, but we should also think about, if that does happen, whether there's ways of making it not so bad, whether people could still be around even if the machines were in control, for example.
Yeah, I mean, there's something... you use the word dark, but you've seen your life's work in many ways come to fruition, haunted by these thoughts of how to deal with something as awful as a terminal cancer diagnosis.
Yeah, you have to be careful what you wish for.
Yeah, I mean, I mentioned Oppenheimer at the beginning in terms of a Nobel. Well, he was almost a physics laureate. He wasn't one in the end.
Yes, it's rather absurd, isn't it That I've got a Nobel Prize in physics and Oppenheimer didn't.
That's utterly ridiculous.
I should say, some people, particularly journalists, like to say, well, how would you compare yourself with Oppenheimer?
And there's two big differences.
One is that Oppenheimer really was crucial to the development of the atomic bomb. He managed the science of it. He was a single, extremely important figure. With the development of AI, there's a bunch of us, and if I hadn't been around, all this stuff would have happened anyway. That's one difference. The other difference is that atomic bombs aren't good for anything good.
They're just for destruction.
They actually did try using them for fracking in Colorado, but that didn't work out too well, and you can't go there anymore.
The big difference is most of the uses of AI are very good. It can lead to huge increases in productivity, huge improvements in healthcare, might help a lot with climate change. So AI is going to be developed because of all those huge beneficial uses. And that's very different from atomic bombs, where there was a possibility of not developing the H bomb.
And why have you taken it upon yourself as your responsibility? You quit Google in twenty twenty three, and since then you've become one of the most vocal and qualified people in the world warning of these risks.
Well, I'm old.
I'm too old to do original research anymore, but people listen to me, and I really believe these risks are very real and very important, so I don't really have much choice. We are going to develop AI because it's got so many good uses.
So I'm not warning against developing it, and I'm not saying slow down. What I'm saying is try and develop it as safely as you can. Try and figure out, in particular, how you can stop it eventually taking over, but also think about all the other shorter term risks like fake videos, corrupting elections, and loss of jobs. I'm saying we need to worry about all those things, and it might be rational to just stop developing it, but
I'm not. I don't think there's any hope of that, so I'm not advocating that.
You said in December there was a ten to twenty percent risk that AI would cause human extinction in the next thirty years. How did you come up with those odds?
I just make them up.
That's... no, I'm joking. If you think about subjective probabilities, they're based on intuition. I have a very strong intuition that the chance of superintelligent machines taking over from people is more than one percent, and I have a very strong intuition that it's less than ninety-nine percent. We're dealing with something extremely unknown, so your best bet for things totally unknown is maybe fifty percent, but that doesn't work very well because it depends on how you partition things.
So clearly the chance is much bigger than one percent and much less than ninety-nine percent, and maybe I should just stick at that. But I'm hoping that we can figure out a way that people can stay in control, because we're people, and what we care about is people.
Now, do you view it as inevitable that if we, quote unquote, lose control, our destruction is what comes next?
No, they might not. Elon Musk, for example, I talked to him, and he pushed the line that they'll keep us around as pets because we're quite interesting.
M hmm.
It seems a rather sort of thin thread to hang human existence by.
So, I mean, you've been vocal about the kind of overall societal threat, but you've also been specifically critical of Sam Altman in particular.
Yes, because I think OpenAI was set up to develop AGI safely, and it's just been progressively moving away from that towards developing it for profit, and so its best safety researchers have left, and it's now trying to convert itself from a not-for-profit company into a for-profit company. And that seems entirely wrong to me.
And your colleague and former student Ilya Sutskever, who worked on the twenty twelve paper with you, took a big stand on this, which ultimately didn't break his way.
It didn't break his way in terms of Sam being fired as the head of the company. It did break his way in terms of people understanding that OpenAI was going back on its pledge to develop AGI safely, and Ilya has now set up a company that's trying to do that.
I mean, if Sam Altman called you tomorrow, or came through Toronto and you were sitting with him, and he said, you know, where have I gone wrong and what should I do, what would you say?
I'd say, you're not Sam Altman.
Fair enough?
Okay.
So suppose something very surprising happens and Sam Altman suddenly has an epiphany and says, oh my god, we shouldn't be doing this for a profit, we should be doing it to protect humanity. I'd be very happy to talk to him.
But what if you could affect his opinion in one way, or that of others?
I would say, keep developing AI, but use a large fraction of the resources, as you're developing it, to try and figure out ways in which it might get out of control and what we could do about that. So if Sam Altman said, we're going to use thirty percent of our compute, however much compute we have, we'll use thirty percent to get highly paid safety researchers to do research on safety, not on making it better, but on making it safe, I'd take it all back, and I'd say he's a great guy.
I mean, for a non-profit, that doesn't sound unreasonable.
Exactly. That was what I thought was happening to begin with, and that was what Ilya thought was happening.
Right before Christmas, there was an article in the Wall Street Journal basically saying that ChatGPT-5 was behind schedule and that the pace of improvement in deep learning was slowing, maybe because of a lack of real-world data, could be other reasons. But then o3 came out, and Sam Altman sort of said that AGI is here. Where do you think we are on this AGI? Do you think it's even a relevant metric?
So ever since twenty twelve, there've been people saying AI is about to hit a wall. Gary Marcus made a strong prediction in twenty twenty two that AI was hitting a wall and wouldn't get much further. So you have to see that against a background of repeated predictions that AI is about to hit a wall.
This is a bit more real, in the sense that we really are reaching peak data, or peak easily available data. There's actually hugely more data in silos and companies and in videos. So yes, we're running out of easily available data, and that may slow things down a bit. But if you can get them to generate their own data, then you can overcome that problem, and you can get them to do that by reasoning. So if you look at, even with things like chess, there's only a limited number of expert moves, but you can overcome that by getting the system to play itself, and then you get an infinite amount of data to train on. And so the neural nets that are saying, would this be a good move, or saying, how good is this position for me, they now get an infinite amount of data, or rather an unbounded amount of data. You can always generate your data at the appropriate difficulty level, too. And so nobody says neural networks for chess and Go are going to run out of data.
They're already far better than any person, and we can make them much much better than that if we wanted to.
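The self-play idea Hinton describes, generating unbounded training data by having the system play itself, can be sketched very simply. The toy below uses random self-play on tic-tac-toe to produce labelled (position, outcome) pairs of the kind a value network could be trained on; it is illustrative only, and real game-playing systems use far more sophisticated search and learning.

```python
# Sketch of self-play data generation: random games of tic-tac-toe
# produce (position, player-to-move, eventual outcome) training pairs.
import random

WINS = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def self_play_game():
    """Play one random game; return the position history and the result."""
    board, player, history = [" "] * 9, "X", []
    while True:
        history.append(("".join(board), player))      # snapshot before each move
        moves = [i for i, s in enumerate(board) if s == " "]
        if not moves or winner(board):
            break
        board[random.choice(moves)] = player
        player = "O" if player == "X" else "X"
    return history, winner(board)                      # None means a draw

# Generate as many labelled positions as we like; the data is unbounded.
dataset = []
for _ in range(1000):
    history, result = self_play_game()
    for position, to_move in history:
        # Label each position with the final outcome from that player's view.
        label = 0.5 if result is None else (1.0 if result == to_move else 0.0)
        dataset.append((position, to_move, label))

print(len(dataset), "training examples from 1000 self-play games")
```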
Shortly before you quit Google, you tweeted: caterpillars extract nutrients, which are then converted into butterflies. People have extracted billions of nuggets of understanding, and GPT-4 is humanity's butterfly. Can you explain that?
Okay. So if you look at insects, most insects have larvae and they have adults. But let's take butterflies, the obvious example. And if you look at a caterpillar, it's not optimized for traveling and mating. A caterpillar is optimized for extracting stuff. You then turn that stuff into soup, and then you get something very different. So what humanity has been doing for a long time is understanding little bits of the world.
Translating the world into data, through photographs and words.
And yes, and now you could take all that work we've done at extracting structure from the world, like a caterpillar extracting nutrients, and you could take that extracted stuff and turn it into something different. You could turn it into a single model that knows everything.
When I first read it, I thought it was quite beautiful and optimistic, and then I read it again and I didn't think that anymore.
Yes, I mean, you could read it as, we're history.
Yes, bidding farewell to our caterpillar. How do you read your own metaphor?
I'm probably somewhat influenced by a piece of William Blake poetry which goes, the caterpillar on the leaf repeats to thee thy mother's grief, which is basically saying the butterfly is much prettier than the caterpillar. I think that's what it's saying. You know, I don't know whether we're going to get replaced. I hope we're not. I hope people stay in control, but I hope we stay in control with assistants that are much more intelligent than us.
Well, as you've said, mothers are the only creatures that we know of who are controlled by less sophisticated beings. I think you've said something along those lines.
And babies aren't much less intelligent than the mothers, like at most a factor of two. We're talking about huge factors.
You mentioned Blake just now, but the other person I thought of when I was reading that butterfly metaphor was your father, who of course was an entomologist.
Ah yes, that's where I got my interest in metamorphosis.
Yes, so ChatGPT is humanity's butterfly, but also, in a sense, your butterfly, and this is a metaphor that, in a way, nods to your father.
I guess, grudgingly.
In the worst-case scenario, where AI does cause our extinction, what are the ways in which that could happen? And in the best-case scenario, where it doesn't, what are the ways in which that could happen?
Okay, so the obvious way it could happen is we make AI agents that can create subgoals. And they realize, because they're super intelligent, that a good subgoal is to get more control, because if you get more control, you can achieve your goals. So even if they're trying to achieve goals we gave them, they're trying to get more control. It'll be a bit like, I don't know if you have children, but if you have a three-year-old who's finally decided they want to try tying their own shoelaces, but you're in a hurry to get somewhere, you let them try tying their shoelaces for a few minutes and then you say, no, no, I'm going to do it, you can learn that when you're older. The AIs will be like the parent and we'll be like the children, and they'll just push us out of the way to get things done. So that's the bad scenario. Even if they're trying to achieve things that we've told them we want, they'll basically take control. And that scenario gets worse if ever one of those superintelligent AIs thinks, I'd rather there were a few more copies of me and a few less copies of the other superintelligent AIs.
As soon as that happens, if that ever happens, you'll get evolution between superintelligent AIs, and we'll be left in the dust. And they'll develop all the nasty things you get from evolution, being nasty and competitive and very loyal to their own tribe and very aggressive to other tribes, all that nonsense that we have. So that's the bad scenario. The good scenario is we figure out a way where we can guarantee they're never going to try and get control away from us. They're always going to be subservient to us, and we figure out how we can guarantee that that'll happen, and then we will have these wonderful intelligent assistants, and life is just really easy.
Geoffrey, thank you. Thank you.
That's it for this week for Tech Stuff. I'm Oz Woloshyn. This episode was produced by Eliza Dennis, Victoria Dominguez, Shino Ozaki, and Lizzie Jacobs. It was executive produced by me, Kara Price, and Kate Osborne for Kaleidoscope, and Katrina Norvell for iHeart Podcasts.
The engineer is Beheath Fraser. Offspin mixed this episode, and Kyle Murdoch wrote our theme song. Join us on Friday for The Week in Tech, where we'll break down the headlines and hear from some of our expert friends about the latest in tech. Please rate, review, and reach out to us at techstuffpodcast at gmail dot com.
Thank you.