Ep6 "What will AI mean for artists?" - podcast episode cover

Ep6 "What will AI mean for artists?"

May 02, 2023 · 50 min

Episode description

Will writers, artists, and musicians find themselves unemployed by AI? What are the new capabilities we’re seeing and what does it all mean for human creativity? And what does this have to do with diamonds, Westworld, effort, Frankenstein, photography, Beethoven, and the Stark family in Game of Thrones?

Transcript

Speaker 1

Will writers and artists and musicians become unemployed by AI? What are the new capabilities that we're seeing all around us, and what is this.

Speaker 2

Going to mean for human creativity?

Speaker 1

And what does this have to do with diamonds and Westworld and effort and Frankenstein and Beethoven and the Stark family in Game of Thrones? Welcome to Inner Cosmos with me, David Eagleman. I'm a neuroscientist and an author at Stanford University, and in this episode, I get to dive into something that's right at the intersection of science and creativity. Most of my podcasts are about evergreen topics about our brains and our psychology, but there's something so extraordinary happening right now.

Speaker 2

We're in the middle of a revolution with AI, and what's called generative AI in particular. So I'm going to do a two-part episode on this. For today, I'm going to dig into what generative AI is and what it means for human creativity, and then in the next episode, I'm going to tackle the question of sentience. Are these AIs conscious? And if not now, could they be soon? And how would we know when we get there?

Speaker 1

So let's start in twenty seventeen when almost no one in the world paid attention when a team at Google Brain introduced a new way of building an artificial neural network. So this was different than the architectures that came before it, which were called things like convolutional neural networks and recurrent neural networks. Instead, they presented a new model that was called a transformer. Now, transformer is not one of those robots that shapeshift into trucks and helicopters.

Speaker 2

Instead, a transformer model is.

Speaker 1

A way to tackle sequential data like the words that are in a sentence or the frames in a video. And a transformer model takes in everything at once, and it essentially pays attention to different parts of the data. And this allows training on enormous data sets, bigger than what was trained on before. Like now it's essentially everything that has been written by humans that is on the Internet, which is petabytes of data. So these models digest

all of that, and what do they do? They essentially look at a sequence of inputs like the words in a sentence, and they ask what word is most likely to come next in that sequence. Now we'll come back to that in a second, but I just want to

note that this transformer model is finding uses way beyond text. So, for example, a recent Nature paper used this kind of model to look at amino acids, which run in a sequence to make proteins, and they looked at these chains of amino acids like text strings, and they set a major new high-water mark in determining how proteins fold, which is a very difficult problem. And people are using transformers for everything from making music to reading giant reams of

medical records and so on. These transformer models are built into search already, and soon they're going to be in your phone and in your car, and in your bank and in your doctor's office. So what everyone in Silicon Valley is talking about is how this new kind of

AI is going to disrupt the workforce. And a lot of people are thinking about white collar jobs that have traditionally required memorization of long textbooks, and these jobs, whether they're legal or medical, suddenly seem to be kind of outmoded. And so we're all thinking about what this means for the economy because so many jobs are going to be displaced by this new technology. Now, there's nothing totally new about this kind of worry, because every generation sees new

technologies take over old jobs. That's natural, and we don't lament the fact that we don't have elevator operators anymore, or switchboard operators at telephone companies, or factories that make VCRs or eight track tape players, because new technologies continuously replace the old, and industries change and people adapt. But the concern that we're seeing with the AI revolution is the speed of it. It's probably the case that we've never before had a move forward in technology that's so

unbelievably rapid. So this is why everyone's talking about this with a different point of view than we did with previous innovations. But I want to zoom in on something

a little different for this episode. I want to know what this all means for human creativity, because the thing to note is these models have been trained up not just on the handful of novels and conversations and schoolwork that you have experienced on your thin trajectory through space and time, but they have been trained with everything that's ever been written by humans. Every textbook, every article, every poem,

every blog post, every novel. We're talking seventy-one billion web pages and hundreds of trillions of words. It's something that's so far beyond any human's capacity to consume even a fraction of it, or to really imagine a corpus of text that large. Oh, and by the way, it has a perfect memory for every word that it's read. So now you're talking about a system that's not the same as a brain, but is incredibly powerful at generating

text or visual art or music and soon video. And so while we'll talk about sentience next week, this week, I want to address a social point that has quickly risen to the surface, which is what will all this mean for human art and human creativity? Personally, I'm working on my next several books right now, and these are all projects that have spanned years, and so I'm fascinated and terrified about whether AI is going to replace me

as a writer. What does this kind of new AI mean for writers, for visual artists, for musicians who studied their whole lives to be able to compose a beautiful piece of music? Is human creativity destined for the dustbin of history? So let's start with the downside of these models. So in my book Livewired, I talked about how AI algorithms don't care about relevance; they memorize whatever we

ask them to. So, now this is a very useful feature of AI, but it's also the reason AI is not particularly human like, because AI models don't have any sort of internal model of the world. They have no idea what it is to be a human and have drives and concerns. They don't care which problems are interesting

or germane. Instead, they memorize whatever we feed them. So whether that's distinguishing a horse from a zebra in a billion photographs, or tracking flight data from every airport on the planet, or composing music in the style of Brian Eno, they have no sense of importance except in a statistical sense, which is to say, which signals occur more often.
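To make the earlier point concrete (that the model simply asks which word is most likely to come next, based on which signals occurred more often in training), here is a minimal toy sketch in Python. It is a deliberate oversimplification: real transformers learn these statistics with attention over enormous contexts rather than a simple count table, and the tiny corpus below is invented purely for illustration.

```python
# Toy "next word" predictor: count which words follow which, then pick
# the most frequent continuation. This is the statistical question a
# language model answers, stripped down to a bigram count table.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the rug".split()

following = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word][next_word] += 1

def predict_next(word):
    """Return the most frequent continuation seen after `word` in the corpus."""
    counts = following[word]
    if not counts:
        return None
    return counts.most_common(1)[0][0]

print(predict_next("the"))   # -> 'cat' (it followed 'the' twice; 'mat' and 'rug' once each)
print(predict_next("sat"))   # -> 'on'
```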

Speaker 2

So contemporary AI could never.

Speaker 1

By itself decide that it finds irresistible a particular kind of ice cream, or that it abhors a particular kind of music, or that it's heartbroken by King Lear's speech over his dead daughter. So AI can dispatch, you know, ten thousand hours of intense practice in ten thousand nanoseconds, but it doesn't care about any zeros and ones over any others. As a result, AI can accomplish incredibly impressive feats, but not the feat of being quite like a human.

And so some critics of AI say, look, it's like you want a sandwich, and what this transformer model does is it looks at all the billions of sandwiches out there in the world, and it gives you a slurry and it pours it out in.

Speaker 2

The shape of a sandwich.

Speaker 1

A fellow writer gave me that analogy the other day, and that doesn't sound particularly appealing, right, And yet these ais have massively surprised us.

Speaker 2

The text generation is so good, it's.

Speaker 1

So complete, it's so human like that we find ourselves not so much in the phase of invention like with all the machines we've made before. Instead, the whole scientific community is finding itself in a process of discovery. Everyone is exploring to find out what these enormous models are capable of, because nobody quite knows. They keep blowing our minds with things they're able to do which weren't pre

programmed and not even foreseen. I have a friend who works with a big city symphony, and she's trying to plan a program for the symphony several months out, which is a typical timescale for symphony planning, but she's scheduling to put on a program with music composed by AI, and she's at a loss for how to plan this because she's well aware that things are moving so fast that the musical world and the skill level of AI composition is going to be entirely different in a few months;

it's going to be more advanced. So she was telling me that she doesn't quite know how to nail down plans for this, because unlike every symphony planner who has come before, she's now in a world where if she nails down a choice of music and trains up the musicians, it is guaranteed to be badly outdated some months from now.

And this is the world we're operating in now. So generative AI is moving so rapidly that we have entered this massive revolution without most of us realizing that we were going there.

Speaker 2

Art and writing and music aren't.

Speaker 1

Going away, but they're going to completely change from how we know them today.

Speaker 2

Now.

Speaker 1

I told you earlier that AI doesn't have any idea of what it is to be a.

Speaker 2

Human, but I think it doesn't matter.

Speaker 1

AI doesn't need to feel anything to write great literature or great art or great music, because while you can think of it as a sandwich slurry, you can also think of ChatGPT as a remix of every human writer that has come before. Its training set is humankind, and so even if it's just statistical, it's generating the expressions and the passions and the fears and the hopes

of millions of people. So it doesn't matter if it feels or knows or has theory of mind, or if it cries at King Lear's speech, because it can convincingly tell you a story that breaks your heart. And it does this by drawing on the best of human writing over the centuries. So as a result, it's incredibly good and it puts together things in a new way. And I think part of understanding this requires acknowledging a really important point, which is that the AI is really good,

but also that humans are so easily hackable. The phrase humans are hackable is a phrase that I first started hearing from my friend Lisa Joy Nolan, who with her husband Jonathan Nolan created the television show Westworld, and that was a big theme in that show. The humans could so easily get seduced by the robots, or convinced to do bad actions or act violently, and the robots were just running AI. But if they say the right thing, then they can get humans to do things, whether that's

fighting or fornicating or whatever. It's like turning the key in the lock. Now, there's a point that I want to dig into here. If you saw Westworld, you may remember the scene from the first episode where a man named William has just arrived at Westworld and he's greeted in a room by a beautiful woman who guides him to pick out his cowboy outfit and his gun and his hat, and she makes it clear that she's available for him sexually, and he uncomfortably asks her, are you real?

And she says, if you can't tell, does it matter?

Speaker 2

Now?

Speaker 1

This is a major theme throughout Westworld. Humans are hackable, and if you can't tell the difference between something that has evolutionary importance to you and a fake version of it, then it makes no difference. And this is what we see when we look at the text that is spit

out from ChatGPT. It is statistically sound, meaning it falls in the orders and rhythms of millions of people who have written things like it before, and so we can be just as compelled by the text, and therefore the fact that AI can write a story that moves us and impresses us is no surprise. It's easy to move and impress us. In a sense, it's no more surprising than drawing a pornographic cartoon that turns someone on. You're just plugging into deeply carved programs. A human can't

mate with the cartoon. But nonetheless, it's easy enough to activate the biological programs, so a story can make you shed tears or laugh even if the transformer is just pushing around zeros and ones. And therefore we shouldn't be surprised that AI can write these really great pieces of prose. It doesn't have to be real and it doesn't matter. So now that we can write beautiful prose with AI, what does this mean for the future of books. Well, I think we can imagine a pretty cool future for

AI generated literature. We can imagine generating infinite, wonderful material.

Speaker 2

And you know what, Back in the day.

Speaker 1

Kings and emperors had poems written that were bespoke. The poems were written just for them. And now it's going to be trivial for us to all live as royalty, having bespoke literature written just for us as much as we want, as often as we want, in seconds, and maybe we'll come to enjoy dynamic novels, by which I mean a piece of literature that's not pre written, but instead is written on the fly depending on the decisions that you make, like a choose your own adventure. So

you say this is a good book so far. Now I want to see what happens if I go in the neighbor's door and get a view on his life, or the life of the mailman who just passed by, or the traffic cop, and the book just keeps writing itself on the fly, thousands of pages that end up being

Speaker 2

Unique for me, for you, for everyone as they go on their own adventure.

Speaker 1

Instead of having some poor author who has to write every possible branching path, now there's no need to do that.

Speaker 2

You just generate it on the fly.
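As a sketch of what such an on-the-fly, choose-your-own-adventure system might look like under the hood, here is a minimal hypothetical loop in Python. Everything in it, including the `generate_continuation` stub, is an illustrative assumption standing in for whatever text-generation model would actually be called; the only point is that each reader choice gets folded back into the running story, which becomes the context for the next passage.

```python
def generate_continuation(story_so_far: str, choice: str) -> str:
    # Placeholder: a real system would prompt a language model with the story
    # so far plus the reader's latest choice. This stub just echoes the choice.
    return f"[next passage, continuing after the reader chose: {choice}]"

def interactive_story(opening: str) -> str:
    story = opening
    print(story)
    while True:
        choice = input("What do you do next? (leave blank to stop) ").strip()
        if not choice:
            break
        passage = generate_continuation(story, choice)
        story += "\n" + passage  # the whole history becomes context for the next step
        print(passage)
    return story

if __name__ == "__main__":
    interactive_story("You are standing at your neighbor's door, keys in hand.")
```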

Speaker 1

So now we'll all get to experience literary worlds that are infinite in all directions. So in that light, it certainly seems that AI is going to replace human creatives. It can do things better and millions of times faster, and it can be there to write the next pages according to your wishes. So it looks like writers are going the way of the mastodon?

Speaker 2

Or are they?

Speaker 1

I think the real story is not so simple. I'm fairly sure that while AI will augment human told stories, there's essentially zero danger that it's going to do a wholesale replacement of human creatives. And I'm going to argue this for four reasons. The first is that we care about the overarching arc of a story, and at least at the moment, AI can't even come close to constructing this. And this is because of a fundamental limitation in its architecture.

And this isn't just a question of pouring more money in and getting more massive computers on the job. It has to do with the quadratically increasing computational cost of representing longer pieces of work. So currently with ChatGPT-4, it looks at the past four thousand ninety-six tokens, which is about three thousand words, and it decides what the most likely next word is. But without getting into the details of the math, I want to point out that

this requires a matrix. Think about it like a big spreadsheet that has four thousand ninety-six rows and four thousand ninety-

Speaker 2

six columns and an entry in every cell.

Speaker 1

That represents something about the probability of those words going with each other.
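Since that table of pairwise entries is the crux of the limitation, here is a rough sketch in Python and NumPy of why it grows so fast. It is a simplification, not the real model: the embeddings are random stand-ins, and the numbers only illustrate that scoring every token against every other token produces an n-by-n table, so the cost grows quadratically with the length of the context window.

```python
# Rough sketch of the "big spreadsheet": self-attention scores every token
# against every other token, so the table has n x n entries.
import numpy as np

def attention_scores(n_tokens, d_model=64, seed=0):
    """Random stand-in embeddings; returns the n x n table of pairwise scores."""
    rng = np.random.default_rng(seed)
    queries = rng.standard_normal((n_tokens, d_model))
    keys = rng.standard_normal((n_tokens, d_model))
    return queries @ keys.T / np.sqrt(d_model)

print(attention_scores(8).shape)  # (8, 8): one score for every pair of tokens

for n in (1_024, 4_096, 32_768):
    print(f"{n:>6}-token context -> {n * n:>13,} pairwise entries")
# A 4,096-token context already means ~16.8 million entries, and doubling
# the context quadruples the table, which is the constraint described above.
```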

Speaker 2

Now, this matrix will grow larger.

Speaker 1

With time, but the size of the output is inherently constrained by this structure, and as a result, ChatGPT is perfect for poems or blog posts or small articles, but not something the size of a novel. Why? Because a novel has arcs and plot twists and cleverly planted clues and cliffhangers, and all of these operate at a longer timescale. So a human author mentally zooms in and

out such that their stories have this sweeping arc to them. So, for example, in a mystery novel, we get to the end and we realize that all the clues and the red herrings we saw are subservient to the solution to the mystery, which of course the author knew from the beginning, and the author was just spooling out clues to you one at a time. In writing, you often have to know the end to structure the beginning and the middle.

And this is, by the way, why ChatGPT can't make up a new joke, even though it can repeat jokes that are already made.

Speaker 2

But it's because to construct a joke, just like a mystery novel, you have to know the punchline first, and then you construct the joke backwards. But these large language models are simply constructing everything in the forward direction. It does statistical calculations on what the most probable word to

come next is given all the words before it. So, coming back to the long arc, if you watched all eight seasons of Game of Thrones, for example, or you read those books, you come to care about these characters because you've been with them through so many trials and you feel like you know them and understand them, and you can predict things about their behavior, and you're invested

in their long-term trajectories. So all the children of the Stark family end up scattered in different directions in the world, and then in the final season, they end up reconvening. After what seems like a lifetime of adventure, they're all back together for the final big showdown with the Night King. And when we watch the series and we get to season eight, we think, wow, I didn't see that coming, that they're all back together now, and now this story has a beautiful shape to it.

Speaker 1

I'm really in the hands of a professional here. At least with our current AI architectures today, it's impossible to achieve that, except possibly in a few-thousand-word version, because ChatGPT is playing its statistical game, and of course it's playing it extremely well and successfully.

Speaker 2

But the trick to recognize here is.

Speaker 1

That it is amazing at the level of paragraphs and possibly a few pages, but not at the level of thinking about the details of a five hundred page novel, or a two hour movie screenplay or an eight season epic. It's great at this small stuff because it can do that with statistics, but it's fundamentally limited for the longer stuff because it has no way to zoom out and think about the crops that it wants to plant for

the long game. Okay, you might say, fine, maybe we'll get there at some point, but even for now, couldn't you build a big story out of smaller chunks. So one idea is to make this form of storytelling in which the world is infinitely big.

Speaker 2

Let's come back to this picture.

Speaker 1

I painted a moment ago of a choose your own adventure in which the AI generates plot points on the fly for you. So I say, okay, open that door to my left, and the story continues as though it were all prescripted, as though I have an author, let's say, in the style of Hemingway or Nabokov or Morrison, who has pre-written every possibility. In certain ways, this would be amazingly cool. But I think the problem here is that a story like that would just equal randomness, and

that's not actually what we want in a story. Instead, we want to feel like we're putting our trust into an.

Speaker 2

Author who sees the big picture.

Speaker 1

We want the Stark children to reconvene such that we feel the overarching pattern of the story and we have a sense of completeness. If you just wanted randomness, you'd go out into the world and find it there. You wouldn't sit on your couch and read about meaningless characters who are just in Brownian motion. And I think this is the same issue with AI music, at least as it stands now.

Speaker 2

Recent examples show.

Speaker 1

That it can compose incredible sounding music moment to moment.

Speaker 2

But the reason it doesn't beat out.

Speaker 1

A real human composer, at least today, is because it doesn't have any long-term vision, and so the whole piece of music just hangs together statistically, moment to moment. And that's perfectly good for composing things like elevator music, which is for a short ride, or commercial music which only needs to be twenty seconds. But it won't for now replace a human composer who writes with the long

arc in mind. For example, I was just talking with my friend Tony Brandt, who's a composer, and he was explaining to me that when Ludwig van Beethoven died, he left behind sketches for a tenth symphony. So a few years ago some computer scientists used AI to complete the symphony, to finish what was unfinished. Now, did they do a good job? In one sense, it was an

incredible feat. They extracted the statistics of Beethoven's choices and preferences from everything he'd written, and they used that to statistically guess what moves he would have made next had he lived, What notes, what chords, what instruments. But even with this feat, it was clear that the AI didn't know how to think long term. For example, Beethoven's Ninth Symphony ends with a chorus, which was such a surprise

to end a symphony this way. It had not ever been done before, so the team training the AI decided Beethoven would have found a similar novelty to end his tenth Symphony, so they instructed the AI to include an organ, a church instrument that had also never been used in a symphony before. So at the start of the last movement, the AI generates an organ.

Speaker 2

But when we zoom in, we see the difference.

Speaker 1

The real Beethoven laid all sorts of clues in the Ninth Symphony to set the groundwork for the chorus. Like the orchestra plays a type of music called a recitative before the choir enters. Why because recitatives are found in opera, and opera has voices, So he was laying clues down. But in the AI tenth Symphony, there was no build up to the organ. There was no suspense, no hidden clues about.

Speaker 2

What was coming.

Speaker 1

The AI didn't know how to prepare the organ's arrival, how to give it the significance that's there for experts who listen for arcs that build through time. So, at least for now, AI is useful at writing brief articles and composing short ditties, but it doesn't have the architecture to write long pieces that humans love to create, and consume.

Speaker 2

So as I'm.

Speaker 1

Writing my next books, these large language models don't feel to me like a real threat, at least not yet. But let's imagine that we cut to ten years from now and some hardworking programmers have figured out how to build an AI with the right sort of architecture that zooms in and out on the scope of a story, and it can successfully generate a novel with cliffhangers and overarching themes and so on. It's certainly not impossible that we're going to get there, and it'll probably happen sooner

than we expect. So let's imagine we get there in a year or five or ten. An AI can generate a million good novels in an hour. Then what Well, there are several directions in which things can go, And the possibility that I mentioned earlier is that novels might become bespoke, totally personalized to you. So you prompt your AI to make an adventure story of exactly the type that you might like. So you say, tell me a murder mystery about a basketball player who's killed by someone

who appears to be his girlfriend. But then it turns out it's actually a CIA plot. That opens the door to a cover up involving a pharmaceutical company. Let's assume that the AI then spits out a book to your exact specification, and it does an amazing job, and it gives you a colorful story just how you wanted it, and you can enjoy that on the beach seconds later. Well that's cool, But I assert that this is never

going to replace literature. And this is my second point why artists don't need to worry, because when you define your own plot, the surprise is diluted.

Speaker 2

The joy of literature is diluted.

Speaker 1

After all, even if you are a creative prompter, you are limited to versions of what you have experienced or read before. And much of what we love in literature is this surprise that comes from a particular point of view that you have never considered, like characters or plot points that would never be generated by your own limited point of view. In the end, I think we don't want to be limited by the parochial fence lines of our own imagination.

Speaker 2

I suspect that.

Speaker 1

No matter how far in the future we look, we are still going to want stories that surprise us, plot twists that we don't see coming. Okay, fine, you might say, so you agree that it's more exciting if we go on rides that we didn't predefine. But you might point out there's another thing that AI can do. So let's address the next issue, the idea that AI could someday generate millions of highly creative versions of a single story, so there'd be no need to stick with just one

version of stories anymore. Instead of George R. R. Martin writing Game of Thrones over decades, future AI could generate thousands of fascinating versions in a second, and we wouldn't depend on him for the next slow novel. But I suggest that's not going to catch on either.

Speaker 2

Why.

Speaker 1

It's because we care about shared adventure. Would Game of Thrones have been so popular if we each saw our own version of it? In my version, Jon Snow dies early, and in your version, Daenerys marries Tyrion Lannister, and in your neighbor's version, Arya marries into a royal family on some subplot island that never even appears in my version.

If this sounds less appealing to you, to have mutually exclusive worlds, it illustrates the point that I want to make, which is that a big part of story is this social aspect, the shared experience. We certainly could use AI to generate a million different versions of Westeros, and in the future we can generate instant video around these plots with terrific special effects. But as a society, I think we wouldn't want to each consume our own version. You want your Jon Snow to do the same thing as my

Jon Snow. And this is because a huge part of story is this shared experience. We enjoy sharing fantasy worlds because we talk about them. This is why we do book clubs, so we can sit around and discuss something we all shared together. All the time, I hear people say, hey, did you see the latest episode of The Peripheral or Jack Ryan or Severance or Star Trek or whatever. And our love of communal stories stems partially from our need

for shared references. For example, I'm always making references to how Neo in The Matrix saw in slow motion, and that's decades after that movie came out, but it serves as a quick, culturally shared way that we can

talk about concepts. We all have quick cultural references for time travel, where people say beam me up, Scotty when they're talking about teleportation, or we reference Obi-Wan Kenobi when we say may the Force be with you, or we reference Ex Machina or Westworld as a shorthand for AI going bad.

Speaker 2

And take this as an example.

Speaker 1

Imagine that you could generate a fantasy football game with your favorite players from any decade on one team versus players on another team, and you can now watch a full football game from stem to stern. But would you if no one else ever saw that game? In other words, would you follow teams all the way through the World Series if it was purely AI generated plays and games. I know that people might have different opinions on this, but to me, that sounds not the least bit appealing.

Why? It's because a giant part about sports is the culture of talking about the game. Hey, did you see that play last night? Can you believe that shot he took? Can you believe the call that ref made? And stories are analogous to sports in this way. We come to our book clubs to take the world that we read in solitude and find a community with other people who were

there with us from their own living rooms. So I suggest that as a culture, we are always going to desire and need a shared vocabulary, and the only way to grow that is to watch the same movies and read the same stories.

Speaker 2

And that's why I predict that.

Speaker 1

While individualized stories might find niche audiences, it won't replace our need for shared stories. This is an interesting dimension of literature that's not typically considered. Story gives us social glue. Okay, fine, so let's assume that at some point AI could write a story that's so evocative and beautiful that it becomes a shared story, an adventure which everyone taps into and enjoys.

And now we arrive at my fourth point about why AI won't totally displace creatives, and that is the question of whether we get something more out of a piece of literature or art if we feel there's.

Speaker 2

A heartbeat behind it.

Speaker 1

I read a beautiful quotation in The Atlantic about a decade ago, quote, one of the only requirements for literature is that the reader can feel a heart pulsing back at them from the other side of the page. The heartbeat matters because when we read, we consider the intention of the author. We think, oh, this is Mary Shelley, whose mother died a couple of weeks after she was born, and she had a troubled childhood, and her father homeschooled her.

And she married the Romantic poet Percy Bysshe Shelley, and he was already married and his wife committed suicide, and they moved to France, and she came back pregnant, and they were destitute, and their daughter died. And then they went to spend a summer in Geneva with friends, and they each set out to write a ghost story, and she ended up writing Frankenstein.

Speaker 2

So we read her.

Speaker 1

Novel and we think, this is her voice, and this is her viewpoint on the world, and these were the things that she knew and the things she didn't know, and the things she couldn't know.

Speaker 2

It isn't just the piece of art itself.

Speaker 1

It is the artist behind the art that colors our experience. So imagine we get ChatGPT to adopt Mary Shelley's style and write a story involving cell phones and electric cars. It might be interesting and amazing, but I suggest we wouldn't enjoy it as much, because we would recognize there's no unique human, no unique beating heart who had the experiences and slaved over the words. Now, you could argue

that for almost all of the authors we enjoy, we live apart from them in space or time, and we'll never meet them, and we just have the vaguest sense of their existence.

Speaker 2

And that might be true, but it's still worth.

Speaker 1

Noting that we know fundamentally that they are human and they are like us in some way. They may be more successful, or more impoverished, or maybe from a different country, but we know that fundamentally they are fellow travelers with us on the human journey. Now, obviously we love a lot of things that aren't real, like Spider-Man or Batman, but we also love the actors behind them.

If you had a chance to have dinner with or even to shake the hand of the actor behind some fantasy character that you love, you'd be thrilled about this.

Speaker 2

Now, I think that leads.

Speaker 1

To an interesting open question about some of these new avatars that are hitting the scene with hundreds of thousands of followers on Twitter, even though they're fake. They're just avatars; they're not real people. The part that strikes me as really interesting is that the ones who get all the

attention are the creators behind the avatar. In other words, if I told you there was an avatar on Twitter with a hundred thousand followers, and you could get the chance to meet the young woman behind all this, you'd be thrilled. What this tells me is that we are compelled by the heartbeat that is just behind the actor or the avatar. In many ways, that's more interesting to us than the actor or the avatar themselves. Now, I don't think this goes on indefinitely, so let me

just address the counterpoint. You might say, well, does that mean that if AI generated a thousand novels in a second, that I'd be really interested in meeting the team of young programmers behind that. I don't think so, because meeting the programmers doesn't expand your understanding of the story. But meeting an author who poured her heart into the story for years that does shape and color and expand your understanding.

Speaker 2

And by the way, beyond writing, I think.

Speaker 1

This applies to musical composers and visual artists in the same way, and in fact, to all human endeavors. I was just talking with a neighbor of mine. He and I spend a lot of time on airplanes flying to some city in the world to give a talk. He just got a three D scan and a high resolution avatar of himself made and he can combine that with

ChatGPT to make his avatar give little speeches. And so he and I were really chewing on this, because the question is, the next time he gets invited to speak on some stage in some random city around the world, can he just have the avatar give the speech online instead? Will conferences still want him to fly across

Speaker 2

The globe to give a talk.

Speaker 1

Or will the avatar be good enough and save a lot of expense and plane fuel? Possibly, But the flip side is do people value going to the talk because of the beating heart.

Speaker 2

On the stage?

Speaker 1

And my long bet is that conferences will continue to invite flesh and blood humans because audiences are humans who

care about other humans. So when it comes to legal documents, if AI can do it better, awesome. When it comes to medical diagnoses, if AI can do it better, awesome. When it comes to hearing a speaker on the stage, with his or her imperfections and limited knowledge and fundamentally human nature, I'm going to take the bet that that is going to last. And beyond just appreciating the reality

of another human, this may be for another reason as well, an interesting psychological effect that I think is going to be at play here. This is what I'm going to call the effort phenomenon. I'll give you an example of this. A well-known colleague of mine here in Silicon Valley recently announced that he had published a book half written by him and half written by AI. And when I first heard about this, I thought, I wish I wanted

to read this, but I don't. Now, I did take a look at the book, and there are clever insights, and it's well written. But I'm simply not that inspired to read something that's even half written by AI, because it makes me feel, perhaps unfairly, that

Speaker 2

He didn't put in the normal amount of effort.

Speaker 1

My analogy would be if Picasso said, hey, will you buy this painting? My students painted most of it, but then I finished it off and put my signature on it. It feels like it would be slightly less valuable. So let's return to that scene in Westworld where William asks the host, are you real? And she says, if you can't tell, does it matter? Because this is the question that comes up.

Speaker 2

About a novel.

Speaker 1

If I spend seven years writing a novel, and if ChatGPT or Google Bard spits out a novel that's word for word equivalent.

Speaker 2

Does it matter?

Speaker 1

And I think, perhaps surprisingly, the answer is yes, it matters. We care about the effort that went into it. If I were to show you two pieces of artwork that someone had done, and one of them just involves painting a single dot in the middle of a big white canvas, and the other one is the person carefully gluing marbles one on top of the other until they balance eight

feet high. You may have a preference for looking at one or the other, but just think about how much money you would, in theory, be willing to pay for each of these. If you're like most people, you think the thing that took a lot of effort is worth more. There have been psychology studies on this since the nineteen fifties. It's difficult for people to separate out the effort that went into something from its value. In other words, the

effort is used as a shortcut for understanding quality. For example, in one paper done by Kruger et al., they had people rate a poem, or rate a painting, or rate a suit of armor, and the people generally thought it was better quality and worth more money, and they liked it better if they thought it took more time and effort to produce. A friend of mine uses the example of diamonds. People will pay much more money for a real diamond with flaws than they will for a synthetically

grown diamond from a laboratory that has no flaws at all. Now, why would you pay extra money for flaws? Part of this has to do with the notion of effort. The real diamond was produced by Mother Nature over millions of years of compression, so it's a very special thing that took, quote unquote, effort on the part of Mother Nature.

Speaker 2

But the lab-grown diamond, that can be done in a day and a half.

Speaker 1

And so even though it's more perfect, it is less valuable because it just took less time to make it.

Speaker 2

We actually pay for flaws.

Speaker 1

Now, I'm not arguing that we can't be fooled at some point into loving AI generated literature. It seems quite possible to me that in the future there will be novels written by AI, and we might not always know it, because the AI will also generate a false story about the author, complete with a biography and a generated photograph.

My assertion is simply that faking it is going to be an important part of what the AI will need to do, because it's more difficult to become invested in something that we think is simply doing massive statistical calculations rather than having a private, limited internal life. We care about other humans. So what's the big picture? My friend Kevin Kelly suggested to me the other day that generative AI may play a role that's analogous to the invention

of the camera. What happened at that moment in history was that painters lamented that this was the end of painting, because you could now capture anything instantly with the click of a button, and you could capture it with zero mistakes. So why would you sit there with a paintbrush and painstakingly try to capture every detail by hand? At that moment in history, it seemed clear that painters were done for. But as it turns out, photographs ended up filling a different niche.

Speaker 2

Absolute realism wasn't the only end goal of art.

Speaker 1

People didn't only want a maximally realistic print of a scene. They also wanted swirls and amazing color, and more importantly, things that didn't exist in the outside world. So canvas painting remained an active field, even while photography grew and ended up flowering on a neighboring field. So one possibility is that AI generated literature will not foment a takeover, but instead it's going to fill a new niche, one that we don't quite see yet, but it isn't the

same plot of land. And I think there's one more possibility for where this could go for writers, not now, but in the coming years. And for that, I want to tell you what happened with the world champion Go player Ke Jie. He was the world's number one player at Go, which is the game in which you use those small black or white rocks to define your territory

and try to surround your opponent. So in May of twenty seventeen, he faced off against an AI program called AlphaGo, which was designed by DeepMind, and AlphaGo had been trained on millions and millions of games of Go, so it had deeply absorbed the statistics of possible plays. So they played the first game and Ke Jie lost. AlphaGo had pulled moves that none of his human opponents had ever thought of, and then Ke Jie lost the

second game. The AI had won over a human in a game that's way more complex than chess, and subsequent versions of the AI are no doubt going to continue to win ever more. But that's not the interesting part of the story. The interesting part is what happened next. So Ke Jie got over his embarrassment and he became mesmerized by what had just transpired, and he studied the games

Speaker 2

That he lost.

Speaker 1

Before he played AlphaGo, Ke Jie had won a majority of the games against his human opponents, but afterwards he found he was able to beat his human opponents even more easily. After his species-shaming defeats in twenty seventeen, he went on to play twelve straight matches against humans and.

Speaker 2

He won them all in a row. So what had happened.

Speaker 1

He had been exposed to new kinds of moves and strategies that had been pulled by AlphaGo, and these all lay outside of traditional ways of doing it.

Speaker 2

All these moves that AlphaGo had done

Speaker 1

Were legal and possible, but they were just different from what had been played over the last twenty five hundred years.

Speaker 2

If you're a Go aficionado.

Speaker 1

This included things like playing a stone directly diagonal to your opponent's lone stone, or playing six-space extensions, while humans tend to prefer

Speaker 2

Five-space, anyway.

Speaker 1

Ke Jie reported that playing against the AI was like opening

Speaker 2

A door to another world.

Speaker 1

Once he was exposed to these alien game plays, he incorporated them, and this story I suspect typifies the future as humans and machines interface. Some people are worried that AI is going to take over, but we will continue

to adapt as well. We will become better writers as we see examples that are allowed by the language but that no one had ever tried, or visual art techniques that involve moves that are allowable but that culturally we just never thought to do, or musical moves that are possible to

Speaker 2

Do with notes, but no one does.

Speaker 1

Them because traditionally we just wouldn't think of going there. Because fundamentally, as a writer, I think I'm doing all kinds of original things, but there's a very real sense in which I'm simply remixing what I've absorbed before. I

interpolate between examples that I've seen. So even if AI is just interpolating, it's read billions of times more text than I have, and so it can do very clever interpolations, and I can learn from that. A lot of people are worried that AI is going to leave humans far behind, and in many respects that's true. But as computers improve, so will we. In the battle of man and machine, both are going to get better, and as we continue to adapt in parallel, the future definition of AI may

well shift from artificial intelligence to augmented intelligence. In the best-case scenario, this isn't going to be a war, but a collaboration. It's going to be an ongoing, guided tour into areas that were previously just beyond our view.

Speaker 2

That's all for this week.

Speaker 1

To find out more and to share your thoughts, head over to eagleman dot com slash podcasts, and you can also watch full episodes of Inner Cosmos on YouTube.

Speaker 2

Subscribe to my channel so you can.

Speaker 1

Follow along each week for new updates. Until next time.

Speaker 2

I'm David Eagleman, and this is Inner Cosmos.
