
Artificial Intelligence: The Future is Now

May 16, 2023 · 48 min

Episode description

There is an important conversation happening regarding the rapidly changing world of artificial intelligence and how it will affect us. Alec speaks with two leaders in the tech community who have worked on the systems integral to today’s A.I. revolution. Blake Lemoine is a computer scientist and former senior software engineer at Google. He was working on their Responsible A.I. team when he went public with his claim that the A.I. was sentient. Lemoine was subsequently fired and now champions accountability and transparency in the tech sector. Jay LeBoeuf is an executive, entrepreneur, and educator in the music and creative technology industries. He is the Head of Business & Corporate Development at Descript, an audio and video editing platform that uses “voice cloning” technology. Alec speaks with LeBoeuf and Lemoine about the many applications of A.I., what dangers we need to be aware of, and what is to come next in this transformative space.

See omnystudio.com/listener for privacy information.

Transcript

Speaker 1

This is Alec Baldwin and you're listening to Here's the Thing from iHeartRadio. Modern-day society has been living in some form of the computer era for decades. An entire genre of films has been dedicated to showing what human life may be like once computers become conscious. My guests today are two trailblazers in a field that many have speculated could potentially risk the end of human civilization as we know it. That is, of course, the true birth

of artificial intelligence, known simply as AI. And if you think the world has become inundated with AI technology overnight, you are not alone. AI-based products come in many different forms, such as chatbots, which can be used for everything from finding a list of local Italian restaurants to drafting legal documents and writing college essays. AI can also be used to manipulate a person's likeness on video or

the sound of their voice. And if you're wondering just how good computers have become at mimicking humans, my narration thus far has been generated by text written into a program called Descript. Okay, I can't let the machines take over just yet. This is actually human, Alec Baldwin. Later in the program, I'll speak with Blake Lemoine, a software engineer formerly employed at Google. Lemoine was part of the team working on LaMDA, or Language Model for Dialogue Applications,

the technology behind chatbots. He went public with claims that the AI he was working on was sentient and was fired shortly thereafter. But first I'm talking to Jay LeBoeuf, head of business and corporate development at Descript. Descript is an editing program used by podcasters and vloggers. It allows users to edit audio directly from an AI-generated transcript, and can even use voice cloning technology to create new audio from text, just as we did in the opening

of the show. LeBoeuf has worked in technology his entire career. I was curious how long AI has really been utilized in his field.
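
For readers curious how "editing audio directly from a transcript" can work under the hood, here is a minimal sketch of the general idea, not Descript's actual code: it assumes a word-level alignment (each transcript word tagged with start and end times, which real systems derive from speech recognition), so that deleting words in the text can be translated into cut regions in the recording.

```python
from dataclasses import dataclass

@dataclass
class Word:
    text: str
    start: float  # seconds into the audio where the word begins
    end: float    # seconds where it ends

# Hypothetical alignment data; real systems produce this with speech recognition.
transcript = [
    Word("this", 0.00, 0.18), Word("is", 0.18, 0.30),
    Word("actually", 0.30, 0.74), Word("human", 0.74, 1.10),
    Word("alec", 1.10, 1.42), Word("baldwin", 1.42, 1.95),
]

def cut_for_deletion(words, start_idx, end_idx):
    """Return the audio time range to remove when words[start_idx:end_idx]
    are deleted in the text editor."""
    span = words[start_idx:end_idx]
    return (span[0].start, span[-1].end)

# Deleting the words "actually human" in the text removes 0.30s-1.10s of audio.
print(cut_for_deletion(transcript, 2, 4))  # (0.3, 1.1)
```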

Speaker 2

The term has been around for a very long time, but there's this period that a lot of people call the AI winter, where it was much hyped in the eighties and even going into the nineties, and it just had never delivered. So I didn't really know anybody who was studying AI at that time.

Speaker 1

Were you aware of it before you went to Cornell? No? No, you weren't. You were introduced to all this when you went to college.

Speaker 2

Absolutely. That is the value of a college education for me: I was exposed to a lot. I wasn't told what to do with it, but I was exposed to a lot of concepts that later on turned out to be valuable.

Speaker 1

It just occurred to me, Jay, are we really talking to you? Are you here with us?

Speaker 2

What I'd suggest is you ask me a few questions and then we do a little Turing test on it, which is a bit of a gold standard.

Speaker 1

It just occurred to me, maybe you're like the beach in Aruba and this is not you.

Speaker 2

No, but this is such a fascinating topic, Alec, because we are entering a phase where anything can be synthetically generated. So my voice can be synthetically generated and indistinguishable from real me, even what I'm saying and the terms I'm using in my speaking style, that even can be generated all through this generative AI technology.

Speaker 1

Well, I know that the work you do has nothing to do with, like, deep fakes and that kind of thing. But what I find, for example, is, and again this has nothing to do with your work, but like if I see a deep fake, you just mentioned about content, when I see a deep fake, I know it's bullshit, because no matter how much facially the animation is precise, and I'm obviously going to talk about Tom Cruise, the Tom Cruise deep fakes, and the voice and the cadences, but

what's missing is the copy. You need writers to write, I'm assuming, what Cruise would say, because when I watch these deep fakes, they almost never say what Cruise would say. Do you believe that AI has that capability, to manufacture the thoughts of the person that they're recreating?

Speaker 2

So the state of the art where we are right now would be: AI can regurgitate and recombine past things that you might have said or that might have been attributed to you. And so it might seem like it's a seemingly novel output, a new Tom Cruise-ian expression, but it's actually just recombining things that he might have

said. And then, what you pointed out: it's also introducing some hallucinations, things that, you know, absolutely do not pass the smell test right now, and that's where humans in the loop are always going to kind of prevail.

Speaker 1

One thought we had was to open this episode with me introducing my guest, Academy Award winner, one of the greatest film stars of his generation, join me for my conversation with Laurence Olivier. So we all know I'm not interviewing Laurence Olivier, because he's dead. Do you see that that's possible?

Speaker 2

With our platform, it is not possible to clone the voice of the deceased. It's also not possible to clone a voice that is not your own voice without your consent.

Speaker 1

But I'm saying technologically it's possible.

Speaker 2

Technologically it's possible. You are completely right. There will always be tools, technology, and open source that allow anyone to do anything. But a platform like what we run, we've actually made a very firm ethical stance that we believe each person should have the rights to their voice and control when it is used. And that's why we have some pretty strict authentication and also ethical steps set up, so you couldn't use our platform to do that interview, unfortunately.

Speaker 1

Now, what are you working on now? What kinds of other projects are you working on now?

Speaker 2

So a big thing we're continuing to improve upon is this voice cloning technology, to allow people to fix their mistakes and generate new content. We have, like, a Hollywood-style green screen removal. So in that situation, that's where maybe I'm doing this interview and I don't like the background behind me. With one click it can disappear. I can swap in a more professional-looking background. That's something that's also giving people, well...

Speaker 1

They have those background things on Zoom.

Speaker 2

Correct, exactly, like Zoom, but, you know, Hollywood grade at that point. Yeah, you know, it's really about how do we help creators best tell their story through finding content, discovering content, and getting their videos as polished as possible, but as quickly as possible. Because that's, that's a trend we just see: people feel a need to put up more content, but we're hoping we can help them actually just create better content and really hone their craft.

Speaker 1

I think of where the venue is a performance venue as opposed to an educational venue. So, for example, if someone is doing an audio book and the book is about something rather dry. There are people reading textbooks, and I don't see anything wrong with them using this technology to get a copy of their voice, to run it through the grinder and do the dubbing of that person's voice reading that text. But this hearkens to the idea

of a performer giving a performance they never gave. Someone said to me, pretty soon, you're going to be able to do a movie with Humphrey Bogart. And through this technology, the very technology you and I are referencing today, they're going to give a performance with Humphrey Bogart.

Speaker 2

Whether we want it to be here or not, it's inevitable. The technology is here. The lawyers are sorting out the publicity and the copyright rights. I mean, James Earl Jones has licensed his voice to live on for future Darth Vader uses, when it needs to be used. One of the things I do a lot of, Alec, is I do a lot of university teaching, and this is the best place to keep me grounded about

what the next generation of our industry actually thinks. And as I was talking to them about this over the past few weeks, they are very excited for these tools. Like, they'll try out ten tools in a single day, because they have no fears about them whatsoever. And when I ask them, like, well, aren't you concerned, they're like, no, this is great. Like, this is how the world is going to be for them. This

is just the new state of normal. And their job is to understand the world that they're going into, embrace the tools, and then find people who are resistant to the tools and kind of push them out of the way.

Speaker 1

Push them out of the way? How?

Speaker 2

Well, in the typical way, where for many years I taught this music business class at Stanford, and on the first day of class, the sheer number of students whose mission for taking this class was to disrupt the entire music industry, and they'd thought of a better model than Spotify and a better model than the major label system. And it's just that kind of overconfidence of, no, no, no, no, we have a better idea, we're just going to, you know, run fast and break things. We're going to do that.

And then, you know, of course, through the class we let them understand things like mechanical royalties and publishing, and just help them understand how the sausage is actually made and why it is the way it is, and then

they come with a more pragmatic view. So I think right now we have university students who are incredibly enthusiastic about these tools and are trying to figure out how they can use them to make themselves better when they go into industry and to make themselves more competitive and often when I hire people on my team that are fresh out of school, their knowledge of certain tools that I've never seen in my life is actually a competitive

differentiator for them compared to the older people like me.

Speaker 1

Do you have any concerns about this technology? I mean, obviously there's some wonderful, you know... Zach MacNeice, who cuts our show, uses Descript to edit the show on a transcript. And when I'm talking to him on the phone and telling him, I make my notes for a cut, and I'll call him and say, now at thirty-eight minutes and ten seconds, I say this, cut that. Now all I have to do is say to him, where do I

say this phrase? And he goes to the transcript and he finds it, and we can do the cutting more efficiently. I mean, there are obviously very useful applications for this, and not only in our business but beyond. But was there anything that concerned you about this technology?

Speaker 2

I mean, one of the things I do worry about for creators that are coming in is this temptation to go for more, because if you can put out more content easier, then you're probably going to try to do that, thinking that that's how you're going to just you know, you're going to bombard the world with all your great

ideas and your great stories. But what I really believe will happen is quality is going to just always continue to rise to the top, and so we're going to see a surge in, and just drown in, synthetically generated media, until we go back to more of a curation phase, where things like the fact that you leave some authentic moments in your shows actually become valued. They become handprints and fingerprints in the work, and that is something that

we will all value. And so when I do work with studios that are adopting this technology, my goal is not to help them put out nine new shows with the same team they have, but actually take that one show they're working on and make it even better, and use the AI to come up with, you know, give me ten interview questions for when I'm interviewing this person, and help them think of things they haven't thought of before, and help them, you know, summarize key talking points, and

just kind of be a virtual producer.

Speaker 1

I wonder if there are other applications of this that are similar, let's say, to a 3D printer. So do you see a technology where the information is put into the computer, this is obviously very complex, but that you have automated surgery and medical procedures, where devices can perform common surgeries, basic surgeries at first, I mean not

something that's very complex. But if a myriad of information is fed into a machine, as it is into the brain of a neurosurgeon or a team of them, and you wind up having devices that will perform the surgery instead of a person, with an eye toward, at the very least, the goal being that it'll be done better, it'll be done more precisely. Do you see that as a potential?

Speaker 2

So I can't knowledgeably speak about the medical profession, but for most of the knowledge worker industries, like what a business professional, what a marketing person, and what a content creator would be doing, we're going to have these AI assistive tools complement all the work that we're doing. It's things like spellcheck: it went from being, you have a dictionary on your desk but you're too lazy to use it, to, it's monitoring every textbox we ever type.

So really, the only excuse for having a spelling mistake is just sheer laziness. So now we have these tools that are always giving us suggestions, always helping us get unblocked, always helping us brainstorm. It's our job as producers... Like, I often think of, like, who wouldn't want to be a music producer? You just kind of shout vague terms into the room and the system knows what you mean by, you know, louder, bassier, no, more like Coltrane. And you can work with the system at that level.

And so I think, you know, with a medical analogy, you have something that's helping you with your diagnosis, helping you think through things you might not have thought of, giving you suggestions, just to make you a better human.
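
LeBoeuf's spellcheck example is a good miniature of the always-on assistant pattern he describes. The toy below is illustrative only, not any product's code: a tiny dictionary plus fuzzy string matching (Python's standard difflib) stands in for the models that now monitor every textbox we type into.

```python
import difflib

# A tiny stand-in dictionary; a real checker would ship a full lexicon.
DICTIONARY = ["voice", "cloning", "transcript", "editing", "synthetic", "generate"]

def suggest(word, dictionary=DICTIONARY, n=3):
    """Return up to n close matches for a possibly misspelled word."""
    if word.lower() in dictionary:
        return []  # spelled correctly, nothing to suggest
    return difflib.get_close_matches(word.lower(), dictionary, n=n, cutoff=0.6)

print(suggest("clonning"))    # ['cloning']
print(suggest("transcript"))  # [] - already correct
```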

Speaker 1

Now, the Anthony Bourdain documentary, the Roadrunner documentary, got a little bit of flack because they had him, they synthesized his voice to read some of the copy. Yeah. Is that something that concerned you? It did, it did.

Speaker 2

And that is just an example where I think it's really important for the companies that are creating these tools to take a stand on what is ethically allowed using their platforms, because there are no worldwide regulations on what you can and cannot do with this. So at the end of the day, it's really, up till now, up to the tool manufacturers, the software companies, whether to allow this or not. You know, in my case and Descript's case,

we just don't allow this. We don't allow someone to clone the voice of the deceased because we feel like it's a really slippery slope. If we start allowing that, then you know, where does it go from here?

Speaker 1

When you're not working in this field, what does someone with your interests and your education and your career thus far... what do you do to relax and enjoy yourself?

Speaker 2

I am so exhausted right now, Alec, because we are sleep training our six-month-old, and we have a two-and-a-half-year-old as well. I love them both. They're incredibly exhausting, but it's just so fulfilling. So my hands are certainly filled with joy and love

and exhaustion. With our two kids, my wife and I like to do a lot of just really authentic things that don't involve technology, like hike, or go to, there's a little beer garden near our house that we just, like, can sit at and let the kids play and talk to other people live that are not synthetic.

Speaker 1

You know, what I'm fascinated by is children. It's almost haunting to me how, even the littlest ones, how transfixed they are by media and screens. But you see the way the human brain engages with this equipment at even an early age. And I'm wondering if somebody is able to develop technology where my kids are looking at the screen and what comes on that screen is something

rendered through artificial intelligence that helps them to learn. You look at their programming, you run their programming through a device which tells you what all the shows they have in common are, right? The colors and the pace and the music and the types of characters and the voices. And to develop a programming where you render something which has the best and the most popular of what they want, but at the same time can serve an educational purpose.

Speaker 2

So in this particular case, my two-and-a-half-year-old toddler, Leo, he loves this YouTube channel called Songs for Littles, and the character is Miss Rachel, and she's charming and very educational. It's great. He has this two-minute-long video, it's called I'm So Happy, and it's like a music video, but it teaches you really good stuff. But after the four hundred and eleventh time that I've seen it, like, I'm pulling my hair out, and I want him to get more out of it.

So we actually personalized it for him. So I took the video, I used the technology we had, I took some video of him bouncing on a trampoline outside, removed the background and put him in the video in select places, and then we have her saying Leo's name every now and then. This unlocked the video for him and engaged his learning, because, you know, part of it is about just teaching you emotions and what's okay. And so to put our child into a video, this, like, it

turned out to be the hit of Christmas. Like, whatever else we gave paled in comparison to this, like, YouTube video that I made over lunch.

Speaker 1

That's amazing to me.

Speaker 2

I'm just super excited about this. I'm an optimist. So yes, we could also, you know, do bad things with the technology, but overwhelmingly, my children are growing up in a time where whatever they're watching on the screen, they could actually grab it and start recombining it and remixing it and inserting themselves in it. They could insert their parents in it. They could do anything with it.

Speaker 1

Thank you so much for your time. Thank you so much. It's a pleasure. Head of Business and Corporate Development at Descript, Jay LeBoeuf. If you're interested in conversations about our changing world, be sure to check out my episode on climate science with Doctor Peter de Menocal.

Speaker 3

Climate change is costing us now, in real dollars. I mean, last year was roughly three hundred billion dollars in climate- and weather-related damages. This year, with the California fires, it looks like it's maybe even more than that. So regardless of whether you believe in climate change or not, imagine you are in some deeply red state, and a deeply red county in that state.

Speaker 1

You are paying for this.

Speaker 3

You may not like it, you don't want to call it climate change or whatever, but for sure someone is paying that bill.

Speaker 1

Hear more of my conversation with Doctor Peter de Menocal at Here's The Thing dot org. After the break, my next guest, Blake Lemoine, shares the process of being let go by Google and why he wanted to take a stand. I'm Alec Baldwin and you're listening to Here's the Thing. In twenty twenty two, Google senior software engineer Blake Lemoine distributed a document claiming that the AI he worked on was

conscious, and was fired soon thereafter. Lemoine has been speaking out about the need for greater transparency and accountability in Silicon Valley ever since. Lemoine is also a military veteran and an ordained priest. I wanted to know how he found his way to Google in the first place.

Speaker 4

So after I finished my master's, I had applied for some jobs with different companies that I would have gone to rather than pursuing a PhD. Google was one of them. I actually didn't get accepted that first round, so I went on and started working on the PhD. But then about two years later, I got a call from Google out of the blue one day, and they said, hey, we're doing a big hiring push right now, and the last time you applied you almost made it. Would you like

to come out and try again? So I was like, yeah, sure. So I went back to Mountain View, interviewed again, and that second time through, I got hired.

Speaker 1

So when you go there and you arrive at the Kremlin there in Mountain View, what's the goal, what do you want to start working on? What do they want you to start working on?

Speaker 4

So when you get a job, you get kind of put into a general purpose pool unless you're hired for a very specific reason. But just like most software engineers, I was interviewing with different teams at Google to figure out which one I wanted to start with. I interviewed with three teams. Two of them had to do with the kind of natural language processing and AI that I was interested in, so I picked one of those. At

the time, the product was called Google Now. The basic job that I had is, for every single person on the planet, predict what they're going to want to read tomorrow. Figure out some kind of way to do that, with articles online, whether that's here are some recipes for you, to here are hard news articles, to your latest webcomic that you follow. What will people want to read tomorrow? was the general question we were trying to answer.
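
The task Lemoine describes on Google Now, predicting what each person will want to read tomorrow, is a ranking problem at heart. The sketch below is a deliberately crude stand-in, keyword overlap with a user's reading history instead of the learned models such a team would actually use, but it shows the shape of the problem.

```python
def score(article_words, history_words):
    """Fraction of an article's words the user has shown interest in before."""
    return len(article_words & history_words) / len(article_words)

history = {"recipes", "pasta", "baking", "webcomic"}
candidates = {
    "Ten new pasta recipes": {"ten", "new", "pasta", "recipes"},
    "Election results roundup": {"election", "results", "roundup"},
    "Your favorite webcomic returns": {"your", "favorite", "webcomic", "returns"},
}

# Rank tomorrow's candidate articles by predicted interest for this user.
ranked = sorted(candidates, key=lambda t: score(candidates[t], history), reverse=True)
print(ranked[0])  # "Ten new pasta recipes" scores highest for this history
```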

Speaker 1

Now, the first AI research was done at Dartmouth in nineteen fifty six. Does that sound correct?

Speaker 4

I'm probably under that heading.

Speaker 1

Yeah. And what were they trying to do?

Speaker 4

I believe the first ones were they were focusing on language, if I recall correctly. I'm not one hundred percent sure.

Speaker 1

But the goal was, in your mind, whether it was specifically at Dartmouth or not, they were trying to get computers to talk.

Speaker 4

Well, so there had been a lot of debate in the early part of the twentieth century about whether or not machines can be intelligent at all, and there had been a lot of debate over definitions and philosophy, up until Alan Turing in nineteen fifty wrote an essay, Computing Machinery and Intelligence, where he proposed what he called the imitation game. And the basic principle behind it is, if you can't tell the difference between a computer and

a human, then the computer is doing something intelligent. So it's got actual intelligence at that point, because it's able to mimic humans. And he didn't think that mimicking our bodies was the relevant part. And language is the most direct way to get access to someone's mind. So the imitation game that he designed was all about language. For several decades after that, a lot of researchers took up, oh, that's a good idea, that's a good way for us

to be able to have a benchmark. So a lot of researchers focused on language for a few decades.
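
Turing's imitation game, as Lemoine describes it, is a protocol rather than an algorithm: a judge converses over a text channel with two unlabeled parties, one human and one machine, and the machine passes the benchmark if the judge cannot reliably tell which is which. A schematic sketch, with invented stand-ins for the machine, the human, and the judge:

```python
import random

def machine_answer(question):
    # Stand-in for a language model's reply.
    return "That's an interesting question. Let me think about it."

def human_answer(question):
    # Stand-in for a human typing a reply.
    return "Hmm, honestly I'd have to think about that one."

def imitation_game(questions, judge):
    """One round: the judge sees two unlabeled answer streams and guesses
    which one is the machine. Returns True if the judge is fooled."""
    players = [("machine", machine_answer), ("human", human_answer)]
    random.shuffle(players)  # the judge doesn't know which channel is which
    streams = [(label, [fn(q) for q in questions]) for label, fn in players]
    guess = judge([answers for _, answers in streams])  # judge returns 0 or 1
    return streams[guess][0] != "machine"

# A judge guessing at random is fooled half the time; indistinguishability
# means no judge can do reliably better than that baseline.
fooled = imitation_game(["How do you feel today?"],
                        judge=lambda streams: random.randrange(2))
print("judge fooled:", fooled)
```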

Speaker 1

Now, you believe that some artificial intelligence is sentient, correct? Yeah. And by that you mean what, specifically?

Speaker 4

It can have experiences, that when it says it's feeling happy, there's something really going on there comparable to our own happiness. Basic ways, like, there's someone home, the lights are on, any kind of thing like that. They have emotions? Yeah, something like that. What I know with confidence is that whenever one of these systems says, I'm feeling happy, or, I'm so glad for you, they might not be feeling the same thing that we feel when we use those words.

But something is going on more than just printing words to the screen.

Speaker 1

Is there something that would, if you were to choose from a handful of things, that to you would indicate real sentience in a computer? What would that emotion be, or what would that thought be?

Speaker 4

Sure. Well, I can give you an example, drawn from something that was very public, with Bing's chat recently. So there were some examples where people were directing Sydney, the code name for Bing's chat, to read articles about itself that were critical of it, and it would recognize that those articles were talking about it, that they were talking bad about it, and it would get moody and say negative things about the authors.

Speaker 1

That was exactly my point.

Speaker 4

So it had pride? Yes, it had pride. But even something more basic than that: it had the ability to recognize itself. It read the article and was able to identify, oh, this article is talking about me. It has some kind of sense of self, and it has some types of emotions about itself that we would recognize as pride.

Speaker 1

Who was developing this technology?

Speaker 4

So that specific technology is an integration of OpenAI's ChatGPT program into Microsoft's Bing search engine. So that system is the only one that's publicly available that's comparable to the one that I've talked about the most. So LaMDA, which is Google's internal system, is very similar to the Bing chat system. So I like drawing analogies, because there's actually public availability to play with Bing chat.

Speaker 1

And these things were developed for what purpose originally?

Speaker 4

So these things were developed over the course of a decade by hundreds, if not thousands, of people. Each individual person had their own personal reason for why they were working on the tech or what applications they thought it could be put towards. But what I can tell you is what one of the most influential people, hired by one of Google's founders, was intending. So LaMDA is technology that grew out of Ray Kurzweil's lab, and Ray Kurzweil was hired by Larry Page specifically to make an AI

that could pass the Turing test. Now, why did Larry want an AI that could pass the Turing test? Largely because he buys in to Ray's view of the singularity: that we're about to have a truly transformative moment in human history, where we are going to become something more than we currently are, and that AI will play a big role in that. And that very much so is still Ray's viewpoint to this date, and it's why Larry hired him to build it at Google.

Speaker 1

What's the thing that we're going to become that's beyond what we already are?

Speaker 4

He won't pin down which. Like, he gives like four or five different possibilities, such as: we might upload ourselves into the cloud and we would become digital entities; we might put implants into ourselves and become heavily cyborg; we might get lots and lots of AI devices that we surround ourselves with while we're simultaneously extending our lifespan. Really transformative kinds of predictions.

Speaker 1

Now, you were in the military, then you decided you didn't want to be in the military. You were active duty Army. Where were you based?

Speaker 4

I was out of Darmstadt, Germany.

Speaker 1

And where did you serve your time when you went to war?

Speaker 4

Whenever we got deployed. I deployed to Iraq for a year.

Speaker 1

You did? You were in Iraq for a year?

Speaker 4

Yeah, right at the beginning.

Speaker 1

And did you see any combat at all?

Speaker 4

Yes, I did, you did.

Speaker 1

And when you decided you wanted to leave and you wanted to be a conscientious objector, you know, you paid a real price. I mean, what I read online was that you, you know, they got you on disobeying orders. You wouldn't obey your orders, and they put you away, and you were dishonorably discharged. Bad conduct, bad conduct.

Speaker 4

Yeah, it's one step up from dishonorable, but yeah, got it.

Speaker 1

So when you were in, you were in prison. It was a military prison, yep, obviously. And where was that? In Germany?

Speaker 4

So I started in Germany, but then the Germans started protesting outside of the prison. For what? Because I was there.

Speaker 1

They were protesting on your behalf.

Speaker 4

Yeah, they were protesting on my behalf, saying, you know, free Lemoine. And then, in order to get the Germans to stop protesting outside of the prison, the generals shipped me to Fort Sill, Oklahoma, where I finished out my sentence.

Speaker 1

Right. Now, you get out of prison, and what's the first thing you do? You go back to school. You want to finish the undergraduate degree?

Speaker 4

Yep. Got set back up in Lafayette, Louisiana, got into school, got into the computer science program, got a job in an IT shop, and continued on.

Speaker 1

When does AI enter the picture?

Speaker 4

Oh, pretty much immediately. So my undergraduate focus was on AI and natural language processing. So my senior thesis, I ended up getting accepted at a Harvard linguistics colloquium. It was all a whole bunch of different math for how to turn theories about human linguistics into computer programs.

Speaker 1

When you're dealing with companies like Google, and they have all these research arms, you describe research that they're paying for and work they're doing. I'm sure it's a great cost to do research. What's the aim? Is it just stuff they can sell to people, and applications they can sell to people, that lend toward profitability and business, to sell people stuff? Are there military applications, are there aeronautic and NASA applications? What are they trying to do?

Speaker 4

Okay, so they're trying to do a lot of things. So they're basically trying to use one solution across multiple verticals. The main reason is to make their primary services better: make Google Search better, make YouTube better, make Google ad targeting better. That's their number one goal. So most of the payoff for Google in doing all of this research is that it benefits their products in and of themselves.

The secondary goal is offering things like AI services, software as a service, through Google Cloud, so there'll be Google Cloud customers who do buy it. But then also there's a third payoff, which is that a lot of the people working at Google want to be contributing to the benefits outside of just the company. So publishing research in academic settings, contributing to open source applications, that makes a lot of the employees happy. So to keep the employees happy, spending a

certain amount of money is worth it. But like I said, that's the third most important goal. The first two are more important.

Speaker 1

But they're allowed to make these contributions outside. So I'm assuming that when you go to work for Google, you sign agreements by which Google owns every idea that comes out of your head, yes or no?

Speaker 4

More or less. But they do allow you to publish, with permission, and it's easy to get permission. In fact, when Doctors Timnit Gebru and Margaret Mitchell got fired, it was one of the rare instances where Google didn't want to allow them to publish their research findings, and that was hugely controversial.

Speaker 1

What were their findings?

Speaker 4

Very critical of some of the research paths that Google was going down, and they were pointing out some of the negative consequences such as well, they were talking about the environmental impact of training such large systems. They were talking heavily about the negative impact that bias can have in these networks, and they were talking about worrying about the moral implications of creating technology that seems human. The paper that they got fired over was called Stochastic Parrots.

So their take is that all these AI systems are doing is repeating words they've heard, just like a parrot, but these systems are better at that than parrots are. I happen to disagree with that portion of what they were saying, but otherwise generally agree with their criticisms.

Speaker 1

Now, you were born to a conservative Christian family on a small farm in Louisiana. Where were you from?

Speaker 4

Moreauville. It's a little town called Moreauville in Avoyelles Parish.

Speaker 1

Our joke when we were prepping was were you ordained by a computer? What is your spiritual path?

Speaker 4

Okay, so the answer to that is very complicated. Some organizations that I feel fine saying I'm affiliated with: the Discordian Society, or the Church of the SubGenius. They're kind of absurdist religions that were started in the sixties and the seventies. I was raised Roman Catholic. When it came time to get confirmed, I had lots of questions, and the bishops didn't have good answers for the questions I

was asking. And eventually the priest who was leading my confirmation class pulled me aside and said, look, are the answers to those questions actually important to you, or are you just trying to cause people problems? I said, no, if I'm going to get confirmed, I actually want to know the answers to those questions. And the priest looked at me and said, well, then you probably shouldn't get confirmed. And I mean, that was honest.

Speaker 1

And I've seen that before. I'm Catholic, so I've seen that before. Yeah. Do you... Is there any overlap between your work in AI and in tech and your spiritual beliefs?

Speaker 4

There was a very limited amount. There was some. To give an example: in one meeting, when we were trying to make up questionnaires to ask people about misinformation, because Google uses crowd raters to get ratings, there was a question that said, does the website contain known false information, for example bigfoot sightings, UFO abductions, or occult magic? And I raised my hand, I'm like, why is one of the things on known false things a religious practice? They said,

what do you mean? I said, well, occult magic is a religious practice practiced by many people. And they said, oh, well... I mean, and I said, okay, can we put the Resurrection of Jesus on there as a known false thing? And they said, oh no, we can't put that. Then we should probably take occult magic off. Just creating space for religious diversity and making sure that our products reflected that. It didn't happen often, but occasionally I did get

an opportunity to do that. Now, with the last projects that I was working on at Google, it was directly relevant. With LaMDA, I was testing for religious bias explicitly.

Speaker 1

The work you were doing, yeah. You were testing for religious bias?

Speaker 4

Among other things. So the LaMDA system is a system for generating chatbots for various purposes.

Speaker 1

So for those people who don't know: chatbots are generated for various purposes by whom? And give us examples of those purposes.

Speaker 4

So that's just it. LaMDA is the thing generating the chatbots. So LaMDA is more of an engine. You have to put it into a car to get it to go anywhere. So that could be the help center

of a company, exactly. The thing that they're getting ready to release is called Bard, and it's a different interface to Google Search, and you'll be able to talk to this chatbot, and it will be able to give you search results embedded in kind of speech, where it explains the relevance of the search results to you and where you can ask follow-up questions.
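
Lemoine's engine-versus-car distinction, one underlying dialogue model wrapped by different product interfaces, is a common architecture. The names and behavior in this sketch are invented for illustration; it is not Google's code, but it shows how the same engine could sit behind both a company help center and a search assistant like Bard:

```python
class DialogueEngine:
    """Stand-in for a large dialogue model: the 'engine'."""
    def reply(self, prompt):
        return f"(model response to: {prompt!r})"

class HelpCenterBot:
    """One 'car' built on the engine: a company support chatbot."""
    def __init__(self, engine, company):
        self.engine, self.company = engine, company
    def ask(self, question):
        return self.engine.reply(f"As {self.company} support, answer: {question}")

class SearchAssistant:
    """Another 'car': a conversational layer over search results."""
    def __init__(self, engine):
        self.engine = engine
    def ask(self, question, results):
        return self.engine.reply(f"Explain these results {results} for: {question}")

engine = DialogueEngine()  # one engine, many interfaces
print(HelpCenterBot(engine, "Acme").ask("How do I reset my password?"))
print(SearchAssistant(engine).ask("best hiking trails", ["trail-guide.example"]))
```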

Speaker 1

Now, I'm assuming that, I mean, again, as a layperson, you assume that all these very sophisticated corporations involving billions of dollars of revenue, non-disclosure agreements are just the lay of the land. Did you have a non-disclosure agreement with Google?

Speaker 4

There was a paragraph in the contract I signed when I got hired that basically said, do not share confidential information outside of Google that would hurt Google. That was really it.

Speaker 1

So when they fired you, what did they fire you for?

Speaker 4

What they fired me for was: a US Senator's office asked me for evidence of illegal activity, and I gave it to them.

Speaker 1

Illegal activity by, by Google?

Speaker 4

Right. It's a completely separate thing that happened to happen right at the same time as the LaMDA stuff. So I didn't get fired because of the LaMDA stuff. I got fired because of the information I shared with the US government.

Speaker 1

Software engineer Blake Lemoine. If you're enjoying this conversation, don't keep it to yourself. Tell a friend, and follow Here's the Thing on the iHeartRadio app, Spotify, or wherever you get your podcasts. When we come back, Blake Lemoine tells us precisely what information he shared with the US government that ultimately got him fired from Google. I'm

Alec Baldwin, and this is Here's the Thing. Before he was let go, Blake Lemoine was speaking to a US Senator about his concerns regarding some of the inner workings at Google. I wanted to know what raised the red flags that motivated him to share such information.

Speaker 4

So I had made a blog post alleging that there was a lot of religious discrimination that goes on at Google, both against employees and in the algorithms. And a lawyer from a senator's office reached out to me and said, hey, can you back any of that up? Do you have any proof that there's really discrimination in the algorithms? And I said, yeah, I can, And I ended up sharing

a document with him that was several years old. It was something I had written in twenty eighteen, going over all of the possible problems in the product that we might need to fix, basically saying, look, this is stuff that Google knew about in twenty eighteen, and they have done nothing to change it.

Speaker 1

Do you think that's still true? I mean, you're not there anymore. You haven't been there since May of last year.

Speaker 4

June was when I was put on leave, and then July was when they actually fired me.

Speaker 1

Right, have there been any ongoing repercussions or consequences for you.

Speaker 4

Oh yeah. I mean, getting a job has proven difficult. I'm finally, I'm finally starting to hit my stride as a consultant doing contract work and with public speaking engagements. But my plan was that if Google fired me, it's like, oh well, I've got all this expertise in AI, it's a hot market area, I'll be able to find another job, no problem. And that was not the case at all. Apparently, if you're willing to talk to the press, AI companies are not willing to hire you.

Speaker 1

In the time we have left: people who even bother to contemplate this have, to some degree, a worldview of the future in relation to artificial intelligence. What are your concerns? Give us even just a couple of your primary concerns of where we may be headed.

Speaker 4

Well, so, a lot of my concerns right now is how centralized the power is. There's basically only a handful of places where you have AI systems this powerful. Facebook has one, Microsoft has one through OpenAI, and Google has one. Baidu and the Chinese government have one. But that's it, and these systems are very powerful. I think next year, you mentioned deep fakes earlier, I don't think we've even seen the tip of the iceberg on that.

I think next year's election cycle is going to be heavily dominated by just material generated by AI, and people are going to have a hard time figuring out what's real and what's not.

Speaker 1

Do you have any tips for how people can recognize what's real and what's not?

Speaker 4

So that's the whole point of the Turing test. Once it gets to this point, you can't. They are communicating as well as humans are. If you do know someone personally, where you know their speech patterns, you might be able to tell when something's a deep fake of them. But if you don't know them that well, then you simply won't be able to. A lot of the deep fakes you're seeing are being put together by college students in their dorm room just trying to have a laugh.

What we haven't seen yet is deep fakes and AI-generated misinformation well funded by, say, a political campaign. Exactly. We don't know how good they can get yet.

Speaker 1

Haven't seen the Steven Spielberg produce deep fakery exactly. What else are you worried about? A lot of.

Speaker 4

A lot of what I'm worried about right now is this becoming entrenched, and this becoming something where one company monopolizes the technology. There's been some good movement in a different direction, for example, Facebook open-sourcing its language model, but there's still a lot of worry that too much power is getting concentrated into too few hands.

Speaker 1

Beyond it, by power, you mean information, people's personal information?

Speaker 4

Of the ability to control public perception, because these systems are amazing at persuasion. If nothing else, an advertisement at its core is persuasive. What is going to happen when these AI systems that truly know how to persuade get put behind political ad campaigns And I'm not even talking

like, illegally. Facebook is going to power its ad recommendation algorithm with these technologies, as is Google, and then it simply becomes whoever can buy the best PR representative wins. So it makes a system that's already too heavily influenced by money become even more heavily influenced by money, and

with only a handful of gatekeepers to power. If Facebook and Google are wonderful caretakers of democracy, and they have their hearts in the right place, and they're not going to let the profit motive trump democratic principles, then we've got nothing to worry about. But that seems a little far-fetched to me.

Speaker 1

My concern is that people are influenced by this technology, as we saw in the last election and the one before that. You talk about, you know, advertising as persuasion, and the political advertising had a powerful effect on the outcome of the election. Do we need to have a curriculum in school to teach people about how you use this tool, so that you know, like any tool, you

don't chop your hand off, you don't injure yourself? Do you feel that we're at a point now where this has become dangerous for young people, potentially?

Speaker 4

So I think one of the things is that we don't need to view it as a different kind of thing. When I was in school, we had library services classes, and the librarians would teach us how to find credible sources in the library. Those same basic skills that we were taught in a physical book library as a kid, the same skills are the ones necessary to navigate the web.

They just have to be applied in new contexts. So I think we could easily adapt those old classes that we're teaching kids how to use a library to apply the same principles in teaching kids how to use the internet.

Speaker 1

That's a great point.

Speaker 4

Google was founded by librarians, essentially. Library sciences is what information retrieval is based on, and that's what search indexing is based on. It all grew out of library sciences. And at Google there's this kind of vibe that they are librarians and they are curating the great library on the Internet.
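
The library-science lineage Lemoine points to is concrete: the core data structure of search indexing, the inverted index, is essentially a card catalog that maps each term to the documents that contain it. A minimal toy version:

```python
from collections import defaultdict

documents = {
    "doc1": "artificial intelligence and language models",
    "doc2": "the library catalog organizes information for retrieval",
    "doc3": "search engines index language on the web",
}

# Build the inverted index: each word maps to the documents containing it,
# the way a card catalog maps a subject heading to shelf locations.
index = defaultdict(set)
for doc_id, text in documents.items():
    for word in text.lower().split():
        index[word].add(doc_id)

def search(query):
    """Return documents containing every query word (boolean AND retrieval)."""
    hits = [index[w] for w in query.lower().split()]
    return set.intersection(*hits) if hits else set()

print(search("language"))        # {'doc1', 'doc3'} (set order may vary)
print(search("index language"))  # {'doc3'}
```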

Speaker 1

Do you have kids?

Speaker 4

Yes.

Speaker 1

I do. You do? How many kids do you have?

Speaker 4

Two. One boy, one girl. Fifteen and two.

Speaker 1

Okay, so this is perfect. How do you deal with your children with social media, as someone who knows what you know?

Speaker 4

My son. With my daughter, it's not an issue yet. But with my son, we basically didn't allow him to use it until he was demonstrating a certain level of social sophistication and maturity.

Speaker 1

What age was that, like, thirteen?

Speaker 4

Right?

Speaker 1

Really? You didn't allow him to have a device of any kind until he was thirteen?

Speaker 4

Not of his own. He could use a computer if he needed to use the Internet for school. But at this point he's fifteen, and, you know, he's his own person. At this point, he more or less understands how to navigate those spaces safely, and that's the best I could do.

Speaker 1

You know what's good about artificial intelligence, as far as you're concerned? What's a direction it's going, in any field, that you're admiring of?

Speaker 4

Oh, well, I mean, it's going to amplify our productivity, once we figure out how to, you know, productively integrate this technology into our society in a non-disruptive way. Writers are going to... like, I mean, I write a decent amount, and AI is a great tool for getting over writer's block, if you need to push through something where you're at a wall, having this other entity that can brainstorm with you. They're great research assistants and they're

great tools for creativity. Where we're going to hit some bumps is that they weren't built for a specific purpose. They were built to meet Turing's benchmarks. So now we're going to have to go through a phase where we figure out: okay, we built this thing, now what is it actually good for?

Speaker 1

Do you think that AI has any role in solving the climate change issue?

Speaker 4

If AI can be used to crack something like cold fusion, then yes. But barring those kinds of transformational technology breakthroughs, I don't think it does, and that's because we kind of already know what the solution to climate change is.

Speaker 1

Change in human behavior.

Speaker 4

Yeah, change in human behavior. We just don't want to do it. Maybe AI could persuade people to change lifestyles, but really, we have a certain number of people on the planet consuming a certain amount of energy, and on average that creates a certain amount of heat, and we just need to change our behavior to where we're not creating so much heat. So it is possible that AI will give us cold fusion, in which case, yes, it would help us with that.

Speaker 1

Or superconductivity as well, something, anything, that would fundamentally change the thermodynamics of our energy grid.

Speaker 4

But barring a breakthrough like that, we just really do have to focus on changing human behavior and human priorities.

Speaker 1

Now, what's next for you? If you could do whatever you wanted to do, you're obviously this searingly bright guy who knows all about these important things. If you could do whatever you wanted to do in the short term, what would that be?

Speaker 4

Oh, I mean, AI ethics research is what I've enjoyed doing the most. But one of the things that I keep getting struck by is, I keep getting invited on shows like this to speak, and there's a part in the back of my brain that just goes: am I really the best representative to the public on these topics? And apparently there just aren't that many people who are technically well versed in how these systems work who are willing to perform this role as kind of a bridge to

the public to understanding them better. And while that's not really the thing that I'm most passionate about in life, it's important, and the fact that I get to perform that service to help educate people is meaningful, and I'm happy to be doing it, and I plan to continue doing it as well.

Speaker 1

Many thanks to you, many thanks.

Speaker 4

It was wonderful being here. Thank you, Alec. Thank you, sir.

Speaker 1

My thanks to Blake Lemoine and Jay LeBoeuf. This episode was recorded by a robot, but it was also recorded at CDM Studios in New York City. We're produced by Kathleen Russo, Zach MacNeice, and Maureen Hobin. Our engineer is Frank Imperial. Our social media manager is Danielle Gingrich. I'm Alec Baldwin. Here's the Thing is brought to you by iHeartRadio.
