TDZ Top 5 Of 2023: #1 Overrated A.I. (9.26.23)

Dec 28, 2023 · 1 hr 12 min

Episode description

In episode 1553, Jack and Miles are joined by A.I. Researcher and co-host of The Good Robot, Dr. Kerry McInerney, to discuss… Ways The Future Of AI Might Look Different Big And Small, The Pause Letter, Taking The Profit Motive Out? Alternatives to the Corporate Capitalism Stuff? There’s A Bias In The Media To Make It Seem Like AI’s Plotting Against Us, Are We Overrating AI? How Movies Shape How We Picture AI, What Should We Be Reading/Watching Instead? And more!

PRE-ORDER The Good Robot: Why Technology Needs Feminism NOW!

LISTEN: White Science feat. ZelooperZ by John FM

 

See omnystudio.com/listener for privacy information.

Transcript

Speaker 1

Hey guys.

Speaker 2

So I've always wanted to do some sort of, like, top episodes of the year rundown thingy.

Speaker 1

This year, I had.

Speaker 2

A little extra time before taking off for the holiday, and so kind of threw something together just based on what the episodes were that you listened to the most. In future years, I'd like to open it up for voting, get your input. But for this year, we're just gonna be rerunning each of the top five episodes while we're on holiday break. And here we are. We're up to number one. This is the most listened to episode of

the year twenty twenty three. For a year where bullshit about AI dominated the headlines, bullshit AI headlines dominated my brain like the Van Vaught thing. I think it's very appropriate that the most listened to episode was about bullshit AI. It's another one of our expert guest interview episodes. I hope you enjoy listening back to it as much as I did, and I hope you're having a great holiday.

We'll see you in the new year. Thanks everyone, bye. Hello the Internet, and welcome to season three oh six, episode two of The Daily Zeitgeist, a production of iHeartRadio. This is a podcast where we take a deep dive into America's shared consciousness. And it is Tuesday, September twenty sixth, twenty twenty three.

Speaker 1

Ooh you ready, you know what this is right? Ooh, you know what this is right?

Speaker 3

For all my pancake lovers, it's your day, it's National Pancake Day.

Speaker 1

It's also National.

Speaker 3

Shamu the Whale Day, shout out Shamu. It's also, for all my dumpling lovers out there, for me specifically my gyoza lovers out there, National Dumpling Day. Okay, so you know, celebrate however you see fit depending on your cultural disposition. It's also National Johnny Appleseed Day. And I only bring that up because I remember in Christian school we were made to sing that as like a song before we went to lunch, man.

Speaker 2

The amount of pro-Johnny Appleseed propaganda that I had to live with, a lot of pro.

Speaker 1

Yeah, yeah, yeah. It's like, it was like John Chapman or something originally.

Speaker 2

He just really liked, really fucked with apples. Like, the children must learn.

Speaker 1

Yeah.

Speaker 3

I bet there's some dark ass shit probably behind, you know, like, a story like that.

Speaker 1

It's got to be a milkshake duck.

Speaker 2

Though, yeah, for sure, very problematic. But the amount of information that I consumed. Like, I had one of those, my family had one of those book sets where it was like important historical figures, and like right next to Lincoln was like Johnny Appleseed, and you know, I was prepared for Johnny Appleseed to be like one of the major figures in my.

Speaker 1

Life, right, and education. Did you know the song? Oh, the Lord is good to me, and so I thank.

Speaker 2

The Lord for giving me the things I need, the sun and the rain and the apple seed.

Speaker 1

The Lord is good to me. And then we could go eat lunch.

Speaker 3

But we had to sing that shit before, wow, from the Disney movie apparently.

Speaker 2

So I wasn't even, I didn't even get all of the Johnny Appleseed propaganda.

Speaker 3

I got the Christian, I got the Christian capitalist Johnny Appleseed propaganda just mainlined into.

Speaker 1

My little brain. He wore a pot on his head, right? I mean, I think, who's sure, like.

Speaker 2

Backwards, kind of like Davy Crockett. Yeah, anyways, this is all, I don't know, the shit that is probably specific to American audiences.

Speaker 1

Yeah, but my.

Speaker 2

Name's Jack O'Brien aka: start spreading the news, I'm eating today, I want to eat a part of it, pecan, pecan pie. These chattering teeth are longing to chew through the nutty part of it, pecan, pecan pie. That is courtesy of Maxer twelve sixteen on the discord. A little, just real solid, right down the middle meatball of a Weird Al parody and reference to our conversation about pecan pie. For some reason, he was lobbing that meatball up to you.

Speaker 3

Yeah, I saw it in the discord. I was tagged in that, but you took it. Oh, so you just do other people's AKAs. Okay, interesting. All right.

Speaker 2

Well, I'm thrilled to be joined as always by my co-host, Mister Miles Gray.

Speaker 3

Okay, so you did my AKA and I heard you do this one on Friday, so allow me to do it one more time.

Speaker 1

Same, Miles and Jackie. Have you seen it yet?

Speaker 4

Ooh it's just flying around La La La losing all my jets.

Speaker 5

Yeah.

Speaker 4

Oh there's a wee and his saw spaced out. Oh, Jetty, she's a really neat. She's gotta stell the use a bomb and shoot so fanzy, you can't even be seen.

Speaker 3

La La losing all my Jets, shout out to the Department of Defense for losing all of our multi million dollar killer fucking spacecrafts, and shout out to Johnny Davis and Blinky Heck.

Speaker 1

One more time. We're done more time.

Speaker 2

Yeah, maybe we can like layer those because you went high, I feel like I went a little bit lower.

Speaker 1

Maybe maybe we could get a little little harmony harmony. Yeah.

Speaker 2

Anyways, Miles. Yeah, we're thrilled to be joined in our third seat, yes, for today's expert episode by a truly brilliant guest who I felt especially stupid singing an AKA in front of. She's a research associate at the Leverhulme Centre for the Future of Intelligence, where she researches AI from the perspective of gender studies, critical race theory, and

Asian diaspora studies. She is also a research fellow at the AI Now Institute, the co-editor of the upcoming volume The Good Robot: Feminist Voices on the Future of Technology, and the co-host of the wonderful Good Robot podcast. Please welcome the brilliant, the talented Doctor Kerry McInerney.

Speaker 1

Doctor.

Speaker 5

Hey, thanks so much for having me.

Speaker 1

Thank you so much for joining us.

Speaker 2

How different has this been so far from what you were expecting.

Speaker 5

Oh my gosh, I don't have a song.

Speaker 2

I was like, how do I say?

Speaker 5

I definitely don't have a song. And I also don't know who Johnny Appleseed is. I grew up in New Zealand, and so I didn't have a book of, like, historical figures. I had a book called Watch Out, These Creatures Bite and Sting, and it was like all the ways you could die being killed by, like, a jellyfish or, like, an octopus, mainly if you went over to Australia. And so it kind of, like, traumatized me for good from childhood. But it does mean I didn't have the great Appleseed song that you.

Speaker 1

Just sang. Did you guys have apples?

Speaker 2

Though, because I was under the impression that the only reason I had apples was Johnny Appleseed. Yeah, and the Lord, and the Lord provided.

Speaker 1

It's just all kiwis right down there.

Speaker 5

Pretty much. Kiwis kiwi fruit, no apples, no songs, just no music. It's just silent.

Speaker 3

Excuse part of my ignorance, but the kiwi, is that like a native New Zealand fruit?

Speaker 5

I'm hoping so. The kiwi is a native New Zealand bird. And then the kiwi fruit is what we call that, like, fuzzy brown fruit. Yeah, I feel like they kind of mix up the two, or, you know, they're both brown and fuzzy and sort of bulbous.

Speaker 1

Ignorance incarnate here, folks.

Speaker 2

But if you do cut the bird in half, it looks exactly like the kiwi fruit. Right? A lot of people don't know that. Green, with little.

Speaker 1

Horton seed seed core. Yeah.

Speaker 2

All right, Doctor McInerney, we have you here today to talk to us about AI. We talked on last week's episode about AI. We're thrilled to ask you our follow-up questions, but before we get to any of it, we do like to get to know our guests a little bit better and ask you, what is something from your search history?

Speaker 5

This is pretty embarrassing. I just got a Nintendo and I'm, like, finally entering my gamer girl era. I'm like, this is gonna be so fun. But my whole search history is just me googling, like, very basic controls in Zelda: Breath of the Wild. It's like, how do I jump? Like, it's really awful. I wait and look, it's like, how do I befriend a dog? Like, how do I climb a mountain? And so it's not going well. I need to outsource my Zelda playing to, like, children who will be much better than me.

Speaker 1

Wait, but is it? Is it?

Speaker 3

It's not forcing you to give up, right? You're just, you're just acclimating to the new environment, right?

Speaker 5

I mean, the kingdom is not going to be saved, whatever I'm meant to be doing. I'm just walking in circles now for like an infinite amount of time. But I'm having fun, which I guess is the main point.

Speaker 1

Yeah, yeah, yeah. And wait, why did you, or were you, were you like a gamer?

Speaker 3

Did you play games in childhood and you're kind of coming back or this is completely new, like this is all new territory for you.

Speaker 5

I did, but I was also bad in childhood. Like, this isn't a story of, like, oh, I used to be, like, super into the game and I, like, fell out of touch, like how people were, like, amazing athletes as children and then, like, pick it up again. Like, I'm consistently awful. Again, so the search history is not a surprise.

Speaker 1

Got it?

Speaker 5

Got it?

Speaker 3

I mean, you were doing, you know, worthwhile things like thinking about how to ethically deal with all this technology while I was laughing my ass off playing Donkey Kong Country, me being like, aha, slap the ground and bananas come out.

Speaker 1

That's a little hack people didn't tell you about.

Speaker 2

Our AI guest on last week's expert episode was saying that video games are one of the ways that he thinks that the future of AI is going to impact our day to day lives. Is that, is that something that you think about as you're out there kind of just walking in circles in Zelda: Breath of the Wild?

Speaker 5

I mean, I feel like I performed so poorly they probably think I am like an NPC. It's just like that is not a person with free will like that following a fox.

Speaker 1

Around. It just, it just rotated seven hundred and twenty degrees over and over for no reason, I guess.

Speaker 2

So have they made games more fun for people who suck at video games? That that's the innovation that I'm looking for, because I was never very good at them. Yeah, I liked playing them, I like hearing about them. I think I'm ready to enter my gamer girl era and come back.

Speaker 1

But yeah you should. I mean, yeah, look, Jack.

Speaker 3

They already know you as the Switch god because you've been on Nintendo so much. The controllers basically fused to your body. But there are, like, a lot of new, like, the new Star Wars game, you could just set it to, like, man, I'm not trying to do all this, like, fancy shit, I just want to beat people up, or just mash the keypad over

and over and win like that, and you can, you can do that, cause I think, you know, it is about being able to play at different skill levels rather than, like, oh, you don't know how to multi-use the force while, you know, you're doing your melee attacks.

Speaker 1

Come on, some of us just want to be the kids.

Speaker 3

Unravel is good for that too, like a fun video game that is not, like, really requiring you to have, like, these sort of highly developed video game skills, and it's an easy play.

Speaker 2

Nice. Doctor McInerney, what is something you think is overrated?

Speaker 5

I feel like I'm not gonna win myself any friends. It'll be like how to lose friends and stop influencing people. But all the Disney live action movies, I've hit a stage where I'm like, no more live actions. The Little Mermaid is beautiful, I love the original Cinderella, but you know, I don't, I don't want to see more and more of, like, the same movies again. Like, it makes me a bit sad, and, like, what other stories could be told if we weren't, like, always making these

live actions. But you know, maybe I'll be proven, like, totally wrong and the next ones will be amazing, but I'm ready for something else.

Speaker 3

Just, yeah, based on everyone's first, like, sort of reaction, like, oh, you saw it, how was it? It wasn't like, yeah! No one was like, it was so good, I'm gonna see it nine times. Everyone just kind of, I think they're, they have to reconcile, like, their love for the original source material with that and.

Speaker 1

Not fully be like I didn't like it. They're like yeah, I mean it's it's yeah, it's interesting.

Speaker 3

You know.

Speaker 2

It's like their voice just goes up.

Speaker 1

Yeah, you know, it was like.

Speaker 2

I spent my afternoon going to that.

Speaker 1

I think that's right.

Speaker 2

I've never been more confident in a prediction about the future of a like subgenre of movies than in saying that they're not going to suddenly figure something out about the Disney Live action reskins of the animations, like we know what the original cartoons look like and what happens in them.

Speaker 1

We know what is possible here, right. I can't.

Speaker 2

I can't imagine a version like what they're gonna pull out where we're.

Speaker 1

Like, oh no, that.

Speaker 2

Is not did not see that one coming? Whoa that bear is talking and singing? Hold on, oh they already did that. That was the best one I think was Jungle Book, and that was the first one they did, and ever since I feel like, yeah, been diminishing returns Man, although I didn't see The Little Mermaid, so I cannot speak to that one.

Speaker 1

What's your favorite genre of film though, Doctor Kerry?

Speaker 5

Ooh, I mean, okay, so I really have a soft spot for, like, old school superhero movies. But my most recent most amazing film was the latest Into the Spider-Verse film, what was it, Spider... the Spider-Verse? And I just, I love animation, and that film, it's so witty and the visuals are extraordinary, just great films. So I go on about how much I love these films at work and everyone has to deal with me being like, go watch it. So anyone listening, go watch it.

Speaker 3

Amazing favorite favorite Spider character, Spider person in the universe.

Speaker 5

Oh my goodness, I can't remember the name of it, but the, like, the really, like, dark noir one from the first one. He's, like, from, like, all those, like, nineteen thirties kind of, like, mystery films, you know, all in black and white. I kind of love that.

Speaker 1

Yeah, yeah, was it just Spider-Man Noir? Yeah, I think Noir Spider-Man. Maybe, I don't know. Or, I'm sorry, Spider-Man Noir. Noir.

Speaker 2

What is when you say old school superhero movies? Do you mean like Christopher Reeve's Superman or are you talking about.

Speaker 1

Like the first Iron Man? The first Iron Man.

Speaker 5

I think it's almost, like, less, like, specific films I love, like, almost films that have that, like, really cheesy, like, origin coming-of-age story. Like, you know, it's really just about, like, them discovering themselves and it's such a simple narrative, and, like, I feel like I should crave more complexity than that, and I should want more nuanced stories, but sometimes it's just really satisfying, and you want this clean cut narrative of this, like,

good guy beating all these bad guys. When I want to relax, I really enjoy that. In my day job, I think about lots of, like, complexity and nuances and, like, what we do with the future of society, and so, you know, sometimes I find it very relaxing just to be, like, mindless.

Speaker 3

Yeah, you're like, yeah, I love a Joseph Campbell type flick.

Speaker 1

Just give me that hero's journey in everything it's.

Speaker 5

Gonna be okay, Yeah, it's gonna go wrong.

Speaker 1

Challenge your old self to become your new self.

Speaker 2

Yeah, but Doc Ock is an example of what we're facing with AI, like, in the immediate future. I think we can all agree on that. Spider-Man 2.

Speaker 3

In a way, You're like, it's giving me the equivalent of like eight arms.

Speaker 2

In a way you're like, oh boy, what's something you think is underrated?

Speaker 5

Oh, underrated? I mean, I'm a big, like, cozy night in person. Like, I want to be one of those fun party people who are, like, you know, out raving, but realistically at ten pm, I'm like, the day is over, I'm in bed. Like, I think a cozy night in, especially, autumn is coming, like, that's just my happy place, me at home, my husband. Life is good.

Speaker 3

Yeah, I like that. That's my That's my whole thing. Especially we don't get seasons here, so the second there's like just the feeling of like a chill.

Speaker 1

I'm like, I need to start nine fires and just being near them.

Speaker 2

And that is the problem that Miles has, and he's working on it in therapy.

Speaker 3

I guess legally it's called arson or something, but what I say is I just want everyone to be cozy around the Greater Los Angeles metropolitan area. I'm, yeah, I cannot go within four miles of the Angeles National Forest.

Speaker 1

But whatever, they must see my light. Yeah.

Speaker 3

I love coziness. I love a are you are you cozy? Are you summertime Jack? If you had to pick between, would you rather be summertime Jack or cozy Jack?

Speaker 1

I do.

Speaker 2

I do love a summer especially I'm in my swimming phase where I just I really like getting in a body of water, get running into the ocean, even when it's a little cold ocean. And yeah, so I think I'm like in a summer phase, but I yeah, I am feeling the need to get cozy.

Speaker 3

Yeah, I'll always take cozy over summer.

Speaker 1

That's just me.

Speaker 2

I know, you you it's it's like you fetishize winter.

Speaker 1

I do.

Speaker 3

It's problematic. Like, that's why I would only date, like, women from the Northeast. I feel like, you're dating me, like, for this, like, exotic sort of life. You're like, what's it like with the snow?

Speaker 2

You're appropriating Minnesota, like Minnesota culture, right, Yeah, well, hey, look we're all trying.

Speaker 1

Hey, I heard that.

Speaker 2

He said, hey. All right, let's take a quick break and we'll come right back and talk about AI. We'll be right back. And we're back. And, Doctor McInerney, as mentioned, I am an idiot on this AI stuff. I think, generally, my version of AI

up until last week, I guess, like, researching for our last expert episode, was what I had read in, you know, mainstream articles that went viral, and films, like Hollywood films, and then, like, messing around with OpenAI and/or ChatGPT. So I had this kind of disconnect in my mind where it was like, from an outsider's perspective, we have this C-plus-level, like, copywriter thing in ChatGPT, GPT-4, and then, like, the godfather of AI, who I'm just trusting people is the

godfather of AI, but that's what everyone uses, that same phrase, like, the godfather of AI just quit Google and says we're all fucked in the next couple of years. And I think it's confusing to me because I don't know exactly, like, I can't even, like, picture the way how he

thinks we're fucked. And there was this letter that was like, we need to pause development on AI for the near future. And I guess I'm just curious to hear your perspective on that pause letter and what the kind of dangers of AI are in the near future.

Speaker 3

Yeah, because to Jack's point too, also, we were talking with João Sedoc last week at NYU about it, and, like, at the end of it, we're like, okay, so it's not Skynet, right, from Terminator, and they're like, oh, great. But then we realize there's a raft of other things that come along with just not being the Terminator. So yeah, I'm also coming from a similar perspective where I always assume Skynet.

Speaker 5

Yeah, I mean, I think you're totally not alone in the dominance of ideas like Skynet, the Terminator, because so much of our cultural framework for understanding what AI is comes from a very narrow set of movies. It's like The Terminator, like The Matrix, which always position AI as something that's going to dominate us. It's going to take over the world, and it's going to control us. And it's important too, I think, to highlight that those are definitely

not the only ideas that we have about AI. We've got thousands of years of thinking about our relationship with intelligent machines, and there's a lot of different cultural traditions that have really different ways of thinking about our relationships with AI and intelligent machines that could be much more positive, much more harmonious. And so I do think our immediacy of jumping to this idea of Skynet is reflective of very much where we are right now. Right, I'm

in the UK, you're in the US. These are countries that have a really long history of thinking about AI in very binary terms. So yeah, I think it's important that we think about these long term risks of AI. And you mentioned the pause letter calling for a halt to generating more large language models like ChatGPT until we had a bit more of a moment to think

about the long term consequences of these models. But I think it's really important not just to think of the long term risks, but to think about which long term risks we prioritize, because I think the Skynet Terminator fantasy eats up a lot of oxygen in how we talk about AI's risks. But there's a lot of different risks that AI poses. So another long term risk that we don't talk about very much at all is the climate cost of AI. Right, because AI is hugely

energy intensive. Data centers require a huge amount of water to function, and we have this massive problem of e-waste produced by a lot of electronic systems. But that long term problem of climate crisis is much less exciting. It's really scary, it's really grounded in a lot of our experiences, and so it just doesn't seem to get

as much airtime. So that's something that I think is really important, is changing the conversation a bit to say, okay, it's sometimes interesting, sometimes scary to think about the Terminator option, but what are some of the other long term options that could really shape our lives?

Speaker 2

Yeah, like, the degree to which the deck is being stacked towards the Terminator option was surprising to me. Like, we dug in last week a little bit to the two stories I had always heard that are, like, kind of put into the Terminator, AI-taking-over category.

Speaker 1

There's the AI that killed.

Speaker 2

A person in a military exercise that decided to, like, eliminate its controller, and then there was the AI that hired a TaskRabbit to pass the CAPTCHA test. And, like, in both cases, those are, like, the AI that killed a person in the military exercise, like, that was somebody claiming that, and then when they went back they were like, oh, I was just saying it could hypothetically do that.

Speaker 1

It was a thought experiment.

Speaker 2

Yeah, it was a thought experiment of what an AI could do in the right circumstances. And the TaskRabbit one was more similar to the self-driving car Elon Musk thing, where there was just a human prompting it to do that thing that seems creepy to us when we, like, start thinking about, oh, it's, like, scheming to get loose and overcome the things that

we're using to keep it hemmed in. So it does feel like there is an incentive structure set up for the people in charge of some of these major AI companies to get us to believe that shit, like, to only focus on the AI versus humanity, like, the AI-gets-loose-of-our-controls-and-takes-over-and-starts-killing-people version

of it. I'm just curious, like, what are your thoughts on, like, why, why are they incentivized to do that, when it would seem like, well, you don't want to make it, you don't want it to seem like this self-driving car will take over and.

Speaker 1

Start kill your family.

Speaker 2

Yeah, start killing your family. It's so powerful. But it seems like with AI, they're more willing to buy into that fantasy and like have that fantasy projected to people who are not as closely tied to the ins and outs of the industry.

Speaker 5

Yeah, I mean, I think that's such an important point, because it's a weird thing about the AI industry, right? Like, you would never have this kind of hype wave around something like broccoli, where you say, oh, the broccoli, if you eat it, could kill you or it could, like, transform the world. You wouldn't expect that to somehow get people to buy a lot more broccoli, they'd just feel like, I don't want to eat broccoli.

Speaker 2

Now that's so fucking good and powerful. If you explode like maybe.

Speaker 5

Makes it even worse brocci that you can then also eat you know what broccoli would be doing. But I do think that we see this real cultivation of hype around AI and that a lot of firms explicitly use that to sell their products, and it gets people interested in it because on the one hand, people are really scared about the long term and short term impacts of AI. On the other hand, it also scared then though, of

getting left behind. So you see more and more firms saying, well, now I've got to buy the latest generative AI tool so that I look really tech savvy and I look peck forward and I look futuristic, and so it's part of this bigger hype cycle, I think, to draw a lot of tension towards their products, but also to make

them seem like this really edgy, desirable thing. But I think what's also interesting about both the stories that you raise is when you looked under the hood, there was human labor involved, right There were people who were really central to something that was attributed to an AI, And I think that's a really common story we see with a lot of the type around AI is often the

way we tell those stories. Eraises a very real human labor that drives those products, whether it's the artists who originally made the images that trained the generative AI through today, the labelers all sorts of people who are really central to those processes.

Speaker 3

Right. And I know, like, in your episode of The Good Robot when you're discussing the pause letter, you know, I think the version that we see as, like, sort of these short term threats, at least in the most immediate way, is, like, for me, working in and around entertainment and people who work in advertising, and seeing, like, an uptick in that sector, I go, okay,

that's easy. Like, I can see how a company immediately goes, yeah, it's a tool, and then suddenly it's like, and now you're on your ass, because we'll just use the tool now and we don't even need a person to prompt it, or we just need fewer people to operate it. So to me, I'm like, okay, that's an obvious sort of thing I can see, like, on the horizon.

And you did talk about, well, there was a lot of talk of these sort of long term existential or quote unquote existential threats, but that there were a lot of things in the short term that we're actually ignoring. What are those sort of things that we need to bring a little bit more awareness to? Like, I know you mentioned the climate, and I look at it from my perspective, I see, like, just the massive job loss that could happen.

But what are sort of, like, the more short term things that kind of maybe are less sexy or interesting to the people who just want to write about killer Terminators and things?

Speaker 5

Yeah, I mean, I think less sexy is exactly the right phrase for this, which is, a lot of the short term issues are very much about entrenching existing forms of inequality and making them worse. And that's often something people don't really want to hear about, because they don't want to acknowledge those inequalities, or because it takes away from the shiny newness of AI. It makes it very much like a lot of other technological revolutions that we've

already seen, and that's super boring. Like, you don't want to hear about how the wheel somehow brought about some kind of inequality, the big hot take for today. But yeah, I mean, something that I look at, for example, are, like, very mundane but important technologies, like technologies used in

recruitment and hiring. So I look at AI-powered video interview tools and look at how that affects people's particular, you know, likelihood of being employed and how they go through the workforce, and yep, it's less exciting-seeming than the Terminator. But again, when you look under the hood and dig into them, you're like, oh wow, this could actually really, really compound inequalities that we see in the workforce under the guise of the idea that these

tools are going to make hiring more fair. And that's a massive problem.

Speaker 3

Right, so, because, like, the idea with those hiring tools is, like, it will actually take away these sort of, like, demographic cues that someone might use to, you know, apply their own biases to, so, in fact, it is the most equitable way to hire. But is it because of just the kinds of people that are creating these sort of systems, because they tend to be a bit one-note, that that's inherently where, like, sort of, it begins to wobble a bit?

Speaker 5

It's a mixture. So of course, yes, the lack of diversity in the AI industry is, like, very stark. It's also sadly true in the UK AI industry, where, for example, women's representation is actually getting worse, not better. So that's a sad slap in the face for a lot of the progress narrative that we want to see. But sometimes it's not even necessarily that the people creating these tools have bad intentions, maybe not even that they're using faulty

data sets or biased data sets. These are two of the really big problems that get flagged. But sometimes the underlying idea behind their product is just bogus. Like, it's just a really bad concept, and yet somehow it gets brought to market again because of all this hype around AI.

So with the video interview tools that we look at, for example, they basically claim that they can discern a candidate's personality from their face and from their words, so how they talk, how they move. They can decide how extroverted you are, or how, you know, open you are, how neurotic you are, how conscientious you are, all these different markers of personality. To which I would say, firstly, no,

there's absolutely no way an AI can do that. This is just a very old kind of racial pseudoscience making its way back into the mainstream, saying, okay, we can totally guess your personality from your face. Like, it's like your friends looking at someone's profile picture on, like, Tinder or whatever and being like, they look like they'd be really fun at a party. Like, it's about that level of accuracy, you know. And then second, is that even a good way of judging if someone's gonna be good for a job?

Like, how extroverted do you want a person in a job to be? Maybe in your job that's really, really helpful. In my job, I don't know how helpful it is. So there's just kind of a lot of flaws at the very, you know, bottom of these products that we should be worried about.

Speaker 2

Just like a C minus level job hiring process. Like that's what I feel like so many of the things, like when you get down to them and see them

in action, they're, like, not that good. Like, it does feel like the whole thing is being hyped to a large degree. And, like, that's something I heard from somebody I know who, like, works in, you know, like all of my friends who work in finance, or, like, any of those things, like, my brain shuts off when he starts talking about what he does, but he was saying, like, he pays attention to the market, and he was saying, there's, like, a big thing propping up the stock market

right now, and it's AI. And it really is, like, that, that's where so much of people's wealth is tied up, as in, like, the stock market, and it's just tied to, like, what you can get people excited about in a lot of cases. So it really, like, from that perspective, the incentive structure makes sense. Like, you want people talking about how your AI technology can do all these amazing things, because that literally makes you have more money than you would have if they knew the truth.

Speaker 3

About your your product without being like yeah, wait, how many seven fingered Trump pictures can we create?

Speaker 1

And right be like yeah, man, fucking millions into this.

Speaker 2

Yeah yeah.

Speaker 5

I mean, that's kind of something that's really come out of the last few years, is how many firms just use the label of AI to get funding. Like, I think there was a study a couple of years ago that found a big chunk of European AI startups didn't use AI, at which point you're like, well, what are you doing?

Speaker 2

Well, we could though. This podcast is actually an AI podcast because eventually it could use AI. And before we were recording, Miles was actually putting in, he asked an AI to pitch him an episode of Friends in which the cast, and you know, the people, the friends on the show, deal with the fallout from the events of nine eleven.

Speaker 1

And it wouldn't do it. So we can't.

Speaker 2

Quite claim that we are an AI podcast yet, but but we did it.

Speaker 3

It did do it when I said, do it, pitch me an episode where Joey and Monica drive for Uber. Yeah, and it did.

Speaker 1

So clearly, because you can see where these guardrails are. They're like, don't do nine eleven stuff, though, that's it right now.

Speaker 2

But I mean, so yeah, I think there's two things we're talking about here. Like, from one perspective, like, yes, you could put it in the category of, like, well, yes, the wheel makes racism or colonialism more frictionless, and frictionless is a word that gets used a lot, but, like, literally in the case of the wheel, frictionless. But AI and, like, a lot of technology is designed to make groups of people and, like, our interactions and the things that make

people money more frictionless. And that's something that you guys have talked about on recent episodes of Good Robot, Like, there's this one example that really jumped out to me that I think was from your most recent episode, or at least the one that's up most recently right now as we're recording this, where you guys were talking about a company that asked a regulating body to make an exception to a law around like a high risk use of AI, and the law said that people had to

supervise the use of AI, like, just because it seemed dangerous, and the company appealed to the regulating body by saying, well, we just, like, that would cost too much and we would never be able to, like, scale this and make a profit. And it feels to me like our answer as a civilization to that complaint needs to be, that's not our problem,

Like that's then you shouldn't be doing it. But instead, it seems like the answer too often, not just in AI, but just across the board, especially in the US, is like, Okay, well we have to make an exception so that they can make a profit around this technology, or else the technology won't get developed because the only thing that drives

technological progress is like the profit motive. But that's you know, as I think you guys talked about in that episode, that's never been the best way to develop technology, Like it's it's been a good way sometimes to democratize existing technology, but like that's I don't know.

Speaker 1

I feel like that.

Speaker 2

Idea of you have to make it profitable, you have to make it easy on these companies to keep trying different things for AI to become profitable is baked in at a like cellular level at this point in how a lot of you know, Western colonial civilizations operate.

Speaker 5

Yeah, I mean, I think too often a lot of the technologies that shape our daily lives are made by a very narrow set of people who ultimately aren't beholden to us. They're beholden to their shareholders or to their boss, and, right, so they don't really have our best interests at heart, right? Like, for example, take this whole

rebranding of, like, Twitter to X by Musk. I remember waking up and finding my little Twitter bird replaced with this huge X and just being like, oh no. Firstly because it was part of, you know, Twitter's slow decline, but secondly it made me feel pretty disappointed, or really aware of the fact that one guy could have such a huge impact on how literally millions of people use a social networking platform that's actually super important, you know, to their daily lives and has played a huge role

in activist movements and fostering different communities. And I think that's a story we see time and time again with some of these big tech companies, which is, not only do they have their own profit motive at heart, they also, they're not beholden in any way to the public, and they're not being compelled by regulation to make good

decisions that necessarily benefit the public. So I think a really important question going forward is, how do we support kinds of technology development that are very much based in the communities that the technology is for? I think one really big part of that is recognizing that so many AI models, as you mentioned, right, they're designed to be

scalable and that's how they make money. This idea that you can apply them globally and universally, and I think that's a big problem, partly because it often is really homogenizing. It can involve, like, exporting a model from the US usually out to the rest of the world that's probably not actually appropriate to use in those contexts. But also a lot of really exciting and good uses of technology I think come from these really localized, specific, community based levels.

So sometimes I think it can be about thinking smaller rather than bigger.

Speaker 3

Yeah, that was, like, another thing that struck me about, like, just all the warnings, and even in that pause letter, sort of, like, the presumption that it's like, well, all you motherfuckers are going to use this, so we got to talk about it. Where it's like, I don't know, I don't even fucking know what it is. Like, a second ago, I thought it was Skynet, and now, like, you know, you have your company being like, yeah, we now have enterprise AI tools, like, welcome.

Speaker 1

You're like, but what am I huh like?

Speaker 3

And I think that's what's a really interesting thing about this as, like, a sort of technological advancement, is before people really understand what it is, there is, like, from the powers that be, sort of going into it being like, well, this is it. Like, everyone's using it,

but I'm still not sure how. And I guess that probably feeds into this whole model of generating as much, you know, excitement, market excitement about AI, by taking the angle of, like, everything's different because everyone is going to be using AI. Most of y'all don't know what that is, but get ready. And I think that's what also makes it very confusing for me, as just, like, a lay person outside of the tech sphere, to just be like, wait, so are.

Speaker 1

We all using it?

Speaker 3

And even now, I really, I still can't see what that is and how that benefits me. And I think that's a big part of, I'm sure, your work too, or even, like, any ethicist, is to understand, like, well, who does it benefit? Like, first, we're making this because it benefits who, and how?

Speaker 5

Yeah?

Speaker 1

And I think yeah, I mean right now, it benefits the companies that are making it. And it sort of feels like.

Speaker 3

That's the way it's being presented, or slyly being like, yeah, you guys are gonna love this, but really we're going to benefit from the adoption of this technology.

Speaker 5

Yeah, I mean, I think that's this crucial question, is this stepping back and saying, actually, is this inevitable? And do we even want this in the first place? And I think that's what really frustrated me about the pause letter and about a number of kind of big tech figures signing on to it, is that they're very much pushing this narrative of, like, oh, this is, like, unstoppable and it's inevitable and it's happening, we've got to find a

way to deal with it. And it's like, you're making it. Like, you are the people really making these technologies, in a lot of cases, so if you really think it's an existential risk to humanity, stop it. Honestly, it could even be that simple. But that's, you know, what makes me really then question their motives in sort of coming forward with a lot of this kind of very doom and gloom language. I think it's also interesting if you

look at, for example, countries' national AI strategies. So if you look at, say, like, China and the UK and the US and these countries that are now thinking about what their national long term AI strategy is going to be, they also very much frame it around the idea that AI is completely inevitable, that this is going to be the transformative technology for imagining the future, for

geopolitical dominance, for economic supremacy. And again, I think as an ethicist, what I really want people to do is step back and say, I think we're actually at a crossroads where we can decide whether or not we think these technologies are good for us, and whether they are sustainable, whether they are a useful long term thing for our societies, or actually whether the benefits of these technologies are going to be experienced by very few people and the costs are going to be borne by many.

Speaker 2

Right, we talked last week about the scientific application that, you know, used deep learning to figure out the shapes of proteins, the structures of proteins, and that could have some beneficial uses, will probably have some beneficial uses for, you know, how we understand disease and medicine and how we treat that. But there are ways to probably differentiate and think about these things. Like, it's not, you don't just have to be either a Luddite or, like, AI pedal

to the floor, you know, let's just get out of the way of the big companies. You know, it feels like, but it is such a complicated technology that I think there's going to be inherent cloudiness around how people understand it, and also manufactured cloudiness, because it is in the overall system's benefit, the overall system being, like, capitalism, it's in their benefit to generate, like, market excitement where there shouldn't be any.

Speaker 5

Basically, yeah, I mean, I think it's easy to generate this kind of nebulousness around AI because to some extent we still don't really know what it is. It's still more of a concept than anything else, because the term AI is, like, stretched and used to describe so many applications. Like, I spent two years interviewing engineers and data scientists at a big tech firm, and they would sort of grumble, well, fifteen or twenty years ago, you know, we didn't even

call this AI, and we were already doing it. It's just a decision tree, you know. Again, it's kind of part of that branding. But also we have these, again, thousands of years of stories and thinking about what an intelligent machine is, and that means we can get super invested and super cloudy very, very quickly. And yeah, I don't want people to feel bad for being scared of,

or being cloudy about, these technologies. It is dense and confusing. But at the same time, I do think that it is really important to come back to that question of, what does this do for us? So, you know, the question of, like, Luddism or being a Luddite I think is really interesting, because I, you know, personally, I do use AI applications. There's certain things about these technologies

that really excite me. But I'm really sympathetic to some of the kind of old school Luddites, who weren't necessarily anti-technology, but were really against the kind of impacts

that technology were having on their society. So the way that new technologies, I think it would be things like spinning and weaving, were causing mass unemployment and the kind of broader ramifications that was having for people in the UK socially, and that kind of has quite a scary parallel to today in terms of thinking about maybe what AI will bring about for the rest of us, who maybe aren't researchers in a lab, but who maybe might be replaced by some of these algorithms in

terms of our work and our output.

Speaker 2

Yeah, can you talk at all about open source like models of because you know what, when we talk about this idea that corporations have all this power and are incentivized to do whatever is going to make the most money, which in a lot of cases is going to be the thing that removes the friction from consumption decisions, and you know, just how people interact and do these things, which well, as you guys talked about in your episode, like removing the friction, Like friction can be really good

sometimes sometimes your system needs friction to stop and correct itself and recognize when bad shit, when things are going wrong. But you know, there's also a history in even in the US, where corporations are racing to get to a development and ultimately are beat by open source models of technological organizing around like getting a specific solution. Do you have any hope for like open source in the future of AI.

Speaker 5

Yeah, I mean, I think I'm really interested in community forms of development, and I think open source is a really interesting example. I think we've seen other interesting examples around things like collective data labeling, and I think that these kinds of collective movements, on the one hand, seem like a really exciting community based alternative to the concentration of power in a very very narrow segment of tech companies. On the other hand, though, I think community work is

really hard work. We had Doctor David Adelani on our podcast, who's a very important figure in Masakhane, which is a grassroots organization that aims to bring some of the over four thousand African languages into natural language processing, or NLP, systems, and he talks a lot about how the work he does with Masakhane is so valuable and so important, but it's also really, really hard, because when you're working in that kind of collective, decentralized environment, it can be

much slower, and as you said, there can be a lot more friction in that process. But counter to this move fast, break things kind of culture, sometimes that friction can be really productive, and it can help us slow down and think about, you know, the decisions that we're making very intentionally, rather than just kind of racing as fast as we can to make the next newest, shiniest product.

Speaker 3

It's almost like in your work too, you know, you talk about how, you know, like, looking at these technologies especially through a lens of, like, feminism and intersectionality and, you know, BIPOC communities and things like that. And, like, broadly in science there's, like, you know, there's an issue of, like, language hegemony in scientific research, where if things aren't written in English, a lot, sometimes studies just get fucking ignored, because, like, I don't speak Spanish or

I can't read Chinese, therefore I don't know if this research is being done, and therefore it just doesn't exist, because the larger community is like, we all just think in English. Like, so how do you, like, you know, specifically, because, you know, in hearing the description of your work, help me understand, like, and the listeners too, like, how we should be looking at these things from that perspective too. Because I think right now we're all caught up in,

Speaker 1

Like it's talking side at and it's not you.

Speaker 3

Know, hold on, like there are other subtleties that actually we should really think deeply about because to your point, I feel like those are the dimensions of an emerging technology or trend or something that gets ignored because to your point, it's like the thing, well, of course it's it's it's unequal, of course it's racist or whatever. But what are those like, what are those ways that people need to really be thinking about this technology?

Speaker 5

Yeah, I mean, I think English language hegemony is a really good example of this broader problem of the more subtle kinds of exclusions that get built into these technologies, because I think we've all probably seen the cases of AI systems that have been really horrifically and explicitly racist or really horrifically sexist, from, you know, Tay, the chatbot that started spouting horrific racist propaganda and had to get taken down, through to Amazon's hiring tool that systematically discriminated

against female candidates. These are really, I think, overt depictions of the kinds of harms AI can do. But I think things like English language hegemony are also incredibly important for showing how existing kinds of exclusions and patterns of power get replicated in these tools. Because to an English language speaker, very crucially, they might use ChatGPT and think, this is great, this is what my whole world

looks like, if they only speak English. Obviously, to anyone who is not a native English speaker, who doesn't only speak English, it's going to be an incredibly different experience. And that's where I think we see the benefits of these tools being really unequally distributed. I think it's also important because there's such exclusions in which kinds of languages

and forms of communication can get translated into these systems. So, for example, I work with a linguist at the University of Newcastle, and she talks about the fact that there's so many languages, like signed languages and languages that don't have a written form, that are never going to be translated into these tools and never going to benefit from them. You might think, okay, well, do these communities want those languages translated into an AI tool? Maybe, maybe not. I'd argue,

of course, that's up to them. But those communities are still going to experience the negative effects of AI, like the climate cost of these tools. And so I think it's just really important, like you said, to think about what kinds of hegemony are getting further entrenched by AI-powered technologies.

Speaker 2

All right, great, let's take a quick break and we'll come back.

Speaker 1

And finish up with a few questions. We'll be right back.

Speaker 2

And we're back. And all right, so, I mean, one thing that I want myself to just, like, get out of this conversation is just a sense of, like, ways that we, like, what we think AI might look like, how it might impact what the world around us looks like in the not too distant future. And we've, we've already talked about, like, some ways that AI is being overrated,

as, like, an autonomous killbot, like, a Terminator-style killbot. You know, there's, there's some other examples. Like, on one of the episodes of your podcast, an AI expert talks about getting an AI-enhanced cancer scan and assuming the scan was, like, taking 3D video and doing, or doing, like, 3D modeling, and it was just putting a box on a 2D image. And I believe the guest, like, admits that they were influenced by, like, Hollywood movies, and it seems like that's what people who are trying

to make money off of this want. So, like, that we're going to have this steady push to make us overrate, misunderstand what the actual promise of AI is going to look like, what tools are going to be given to us in the near future, and, like, what are those tools actually able to do? So I'd just be curious to hear from you, like, what do you think are ways that AI might intervene in our lives in

the near future, might already be intervening in our lives? Well, what are those things that we're underrating, like, and what are the things that you think are probably taking up too much of our bandwidth in terms of how we're picturing AI?

Speaker 5

Oh, that's a great question. I think, to start with the second half, to start with what I think is maybe a little bit overrated, or maybe will take a bit longer to pan out, I would say some of these really high tech applications, things like AI in medicine, for example. I think that we might not see them

expand beyond a very narrow set of controlled circumstances. I think this idea that, you know, soon every hospital will have this tool, or, say, AI in education, that every classroom is going to have access to this tool, I think, unfortunately, is just really grossly overestimating the kinds of resources, not only in this country, that schools and hospitals have. I was talking to a friend who's a doctor here and she was saying, oh, well, what do you think

about AI in medicine? And then she kind of stopped for a moment and just laughed and said, well, I don't know why I'm asking you that, because my hospital uses paper notes. And I said, what? And she said, yeah, our whole hospital is not computerized. And that was a good reality check for me because it made me realize, like, wow, actually, some of the stuff that I'm thinking about is so far away from the reality of what people like my friend the doctor are having to deal with on the ground.

And same with schools. You know, here, hundreds of schools I think have had to be closed because there's some kind of issue with the concrete, that it might collapse, which A is horrific but B is such a different level of infrastructural problem that thinking about AI in the classroom is really not on the radar in that scenario.

Speaker 1

Exactly.

Speaker 5

Yeah, autonomous, it's like a low bar. Yeah, people are like, what if students cheat? And you're like, look, yeah, you know. So those are things that I think are maybe a little bit on the overrated side. I think things that are underrated, like, I'm really interested in how AI can change how we see the world and how we see ourselves. Like, I think, you know, take for example something like TikTok. You know, I have to confess I'm a big TikTok user. I love scrolling a lot of mindless garbage. It

brings me a lot of joy and peace. But even things like AI-powered beauty filters, right, like, I think that has such a profound effect on how you just understand and see yourself and your own face. They're often really imbued with gendered and racist kinds of assumptions as well. I often look a lot whiter when I use a

beauty filter, even though I'm multiracial. Like, all of those assumptions, I think they get baked in and seep into our lives in, like, really subtle ways, but I think collectively it can have quite a big impact.

Speaker 2

Yeah, I was sorry, I was talking about the I don't know if this is technically AI, but then again, I don't know what AI is or if there's like a specific argument, but like the way that like iPhones take forty pictures like consecutively and then like choose the best one from the forty, and like the live photo setting like feels like a thing and like it's really good at it, and it's something that I had, like just it happened on my phone with like an update

and I was like, yeah, this is just how you take pictures now, and like pictures are way better than they used to be. I think I saw somebody speculate that, like, some of the software like Photoshop and other things that have traditionally had sort of a difficult learning curve will suddenly get much easier for people to use. So yeah, I like those, like, having actual tools that are, like, kind of easy to use, like very simple, straightforward.

We know what the goal is here, and we're able to use these enhanced kind of programs to achieve that goal. Like that that feels more possible to me and like something that we we might see in the not too distant future.

Speaker 3

Totally, that's like a pitch you can understand, like this will make it easier for you to photoshop a friend out and just put someone else in. You're like, okay, rather than right now it's like this shit will end the world and you're like, well, what is it? I don't know, I don't know, I don't know, but.

Speaker 1

It will fuck you up. And you're like, huh. And I guess that's such a different proposition up front. And earlier, Doctor Kerry, you were saying, like, there are things that genuinely.

Speaker 3

Excite you about, like, this sort of, like, these emerging technologies from AI, and I would love to hear from your vantage point, because you are looking at this in a, like, what is ethical and what is going to bring meaningful value to people way, like, what are those things that I can feel like, oh yeah, yeah, I can get down with that future?

Speaker 5

Yeah. I mean, I recognize that I sound very doom and gloom a lot of the time that I speak, and all my friends have to deal with my sort of constant existential anxiety about technologies. But at the same time, you know, I'm an active user of a lot of them. Like, I use a lot of voice dictation software, voice editing software, you know, for when we record our podcasts, and these things genuinely make my life so much

better and easier. I love being able to transcribe, you know, what I'm saying and have it appear on the screen, and not have to type out my emails and have them appear. I recognize these are, like, all very boring. I should have said something like, I DJ. No, not

at all, I write emails using voice. It's very, you know... But I think, kind of, you know, extrapolating out from that, there are really amazing applications for accessibility when it comes to the kinds of access people can now have online because of advancements in AI-powered tools. And so I think, though, you know, what's really central there is kind of that broader vision of, you know, what is the kind of benefit this is meaningfully bringing to society? What problem is

being solved? And are the people who are most affected by that problem the ones leading the conversation and saying what they need? Because too often I think we see tech developers creating stuff that actually no one really asked for. I'm sure you have all seen that thing on Amazon where you're like, who asked for that

banana holder or avocado peeler or something. And too often I think tech can end up being a little bit like that. Whereas I think when we have, like, really interesting, conscious development of, say, for example, feminist tools that are designed to encourage good conversations, things like that, that really makes me excited about the kinds of futures we could have with technology.

Speaker 1

Right.

Speaker 3

It almost feels like the danger here is when someone has like a technology and.

Speaker 1

Goes, and this is going to be in every home. And you're like, this is a sales pitch, actually. Yeah, because.

Speaker 3

If you're not saying this is how we will work less and have more time to frolic, to enjoy our human existence, to connect with our families, then, like, miss me with that, because it reminds me of crypto, it reminds me of the metaverse, and it reminds me of Zuckerberg being like, every worker will have this fucking headset on. No, no, but nice fucking try, asshole. And I feel like this is kind of like, it has a similar tone of, get ready, folks, for this thing.

And granted, they have a shiny toy in the form of these large language models that are fun to play with, when really they're just skimming the Internet and giving

it to you in a nice, tidy sentence. But yeah, like, I feel like that is just kind of... I'm seeing that dimension when you see the hype around it, which feels a little more like, y'all are talking about this to make money. Whereas with the other things that you're talking about, like accessibility and trying to democratize certain applications, things like that, there's less scale

Speaker 1

Involved with something like that. So, money. Yeah.

Speaker 5

Exactly. And I just think it's really important to recognize that, you know, I'm not saying that things like smart home technology, like Amazon Alexa and things, are inherently bad products. I'm sure heaps of people love having those technologies in their home, they find them useful. But I just think whenever you bring in a new technology, you also bring in new vulnerabilities and you bring in new costs, and sometimes the hype

wave means we focus on the benefits. You know, oh, if you bring this into every home, everyone's life will be easier, everyone will have a more seamless home experience,

when actually that's not the case. There are real costs to bringing those tools into a lot of people's homes, everything from, for example, the way that certain kinds of elder care are getting replaced by technological tools in the UK care system, through to the fact that those tools can really easily be used for the purposes of domestic abuse and intimate partner violence. They can be used to control or trap

someone in their home. And these are really ugly truths about those technologies, and they're often not the ones that are at the heart of technology development. And so that's a lot of my job, I guess: to say, how do we put those vulnerabilities and those costs at the forefront, and then judge holistically whether or not we really want this product to exist.

Speaker 3

Yeah, but here's the thing, I got a lot of my stock portfolio tied up in these companies.

Speaker 1

I need these, I need these fucking things to moon, if you know what I mean.

Speaker 2

So I want to end the episode just doing something that you guys do on The Good Robot, which is talk about some, like, books that are recommended, or

you know, what works you recommend. You talked on an episode earlier, I think it was in the summer, about how a lot of the ideas and fears and hopes that we have currently are very similar to what we've seen in movies, like, when you look at Elon Musk's ten favorite books, they're all fed by this same, like, Isaac Asimov, I, Robot thing, and a lot of Kubrick's ideas around AI go back to this very specific version of technology where it inevitably turns

into a mean psychopath who is set on dominating all of humanity.

Speaker 1

So as an.

Speaker 2

Alternative, because we do like to talk about, you know, the importance of expanding our imagination. Like, with regards to climate, a lot of it is either apocalypse or business-as-usual capitalism; people have a hard time imagining the alternative.

Speaker 1

On the subject of AI, like, what is there?

Speaker 2

First of all, is there any, like, mainstream kind of popular movie that you feel like actually got that example of AI right? And then are there other kind of more obscure works that you would send people to in order to sort of feed their imagination of what a world with AI technology could look like?

Speaker 5

Yeah. I mean, that question of the imagination and how our current imagination of AI is really narrow is super key. And, you know, I think it's fascinating that, yes, you know, Musk and these folks have, like, a very narrow set of stories that they refer to. In those stories, it's always like, oh, the typical masculine tech bro makes an AI and then it looks like him and it takes over the world, and you're like, are you okay? That's, like, your bedtime story?

Speaker 2

Weird? The mean sociopath bent on world domination thinks that all AI is going to be a mean sociopath bent on world domination.

Speaker 1

That's so weird, right, Yeah?

Speaker 5

Who would have guessed that?

Speaker 1

Yeah?

Speaker 5

But yeah, I mean, something that I really like are stories, particularly science fiction stories. I love sci-fi and fantasy. I'm, like, a huge believer in, like, the way that it can help us imagine different worlds. In terms of, like, mainstream films, I guess the one that comes to the top of my head is

Big Hero 6. So, the Disney film where you have an AI-powered kind of robot healthcare buddy called Baymax. And I think that's a really interesting example of an AI where, you know, at one point in the film, he gets dressed up in armor and, you know, the kid hero starts to try and use him

as a kind of weapon. But, like, this is an AI that hasn't been designed to be a weapon, that kind of resists being weaponized in a lot of ways. And in that sense, it's really countercultural to a lot of the AI we see in Western film and cinema, which is often very weaponized. It's often either this, like, kind of sexy cyborg figure, or it's this hyper-masculine Terminator figure. Yeah, so I find Baymax is kind of genderless and, like.

Speaker 2

Just a puffy balloon person. But it's also so accurate to what a young boy would do immediately with that, which is weaponize it. It's like the tweet: I can tell you, me and my friends would have killed E.T. with hammers.

Speaker 1

It's like.

Speaker 2

Little boys are monsters, but that's a great example.

Speaker 5

Yeah, in terms of, like, some alternative stories and ideas, I have to do a really shameless self plug, but we have a book coming out called The Good Robot: Why Technology Needs Feminism, and I'm plugging it because it's really beautiful. We worked with a science fiction illustrator, Sin John Lee, and it was myself and Dr. Eleanor Drage, my co-host. And the whole thing is these two-thousand-word essays.

They're really short and really punchy, by lots of different guests we've had on the podcast, and they all respond to this idea of good technology, dot dot dot. So it might be, good technology is free, or, good technology challenges power. And the idea is you can just dip in, read one, they're also illustrated, and then just dip out when you need a break or a moment.

But yeah, I really would encourage you to pick that up, because it just contains the most incredible feminist philosophers, technologists, inventors, and activists who are really pushing forward different ideas of the kinds of technological futures we could have.

I mean, apart from that, I read a lot of, like, Chinese diaspora and Asian American sci-fi as well, particularly sci-fi that's, like, thinking a lot about the climate crisis and AI and the intersections of that, and so I think a lot of those stories have really interesting

and different perspectives on what AI is. It can be anything from, like, Aliette de Bodard's work, which explores, you know, the relationships between, like, humans and sentient mindships in this space opera universe, through to, like, the lyrical Salt Fish Girl, which is, like, a very dystopian imagining of a

future society based on labor from clones. All of these novels, I think, just have, like, a lot more love and a lot more care in them; they're just absolutely interesting and imaginative, with gorgeous ideas about the future.

Speaker 1

Amazing.

Speaker 2

Well, this has been such a fascinating conversation. Thank you so much for taking the time. At what I have to assume is midnight where you.

Speaker 1

Are right now? It's almost nine?

Speaker 2

Almost nine? Okay, yeah, that's too late. Well, thank you, Dr. McInerney, for doing the show. Where can people find you?

Speaker 1

Follow you? Hear you all that good stuff?

Speaker 5

Oh yeah, thanks so much for having me on. It's been a blast. Yeah, check out The Good Robot. We're on YouTube, Apple, Spotify, and then keep an eye out for the book coming in February twenty twenty four.

Speaker 1

Amazing. Yeah, we'll have to have you back for that. Yeah.

Speaker 2

Yeah, absolutely. And is there a work of media that you've been.

Speaker 5

Enjoying? Ooh, media in terms of, like, a show?

Speaker 2

Like a show, a podcast, a book, a tweet.

Speaker 5

Oh, I feel like I should choose something, like, really intellectual, but I actually just sent my husband this tweet that really got me, because it's very me. It's actually a bit dark, but it was this news headline about a hiker who had been missing, and the rescuers couldn't get in contact with them, and they didn't get rescued for twenty-four hours because they wouldn't answer their phone.

It was an unknown number. And this is just exactly me. Like, my husband knows, I'm like, I will not, I will not answer an unknown number. So that brought me a little bit of joy today.

Speaker 2

Absolutely. And I feel like that's a great example of how technology has marched forward and completely ruined something that used to work, phone communication, but we can't answer our phones anymore because of the fucking spam calls. Miles, where can people find you? What's a work of media you've been enjoying?

Speaker 3

Yeah, at Miles of Gray wherever they got ats. I'm there, and soon to be on Blue Sky, shout out Ill Will Christy Main. Yeah, you and I were hopping in, okay? Codes, we got, we got the invite.

Speaker 1

Just check out the Blue Sky, so, yeah, there shortly. Also check Jack and I out on our basketball podcast.

Speaker 3

Miles and Jack Got Mad Boostie. Please check me out on my 90 Day Fiance podcast, 420 Day Fiance, and also the true crime show The Good Thief, which has all eight episodes out, so.

Speaker 1

Please stream those, binge those. We're talking about the Greek Robin Hood, the man who legitimately kidnapped millionaires and did some old-fashioned wealth redistribution in a positive way, never hurting people either. Never hurting people, like, again, an ethical criminal, if there is such a thing. Some tweets I like.

Speaker 3

First one, you know, shout out to the WGA and all the negotiators, because it looks like the WGA strike, they're close to getting something ratified, so we like that. This tweet is from Jeff Yang, at Original Spin, who tweeted: just underscoring how WGA negotiators told the studios the union wouldn't go back to work until SAG also has a deal, because if the last five months proved anything, it's in together, win together. And there's this snippet from, it might be Deadline, but it says, quote.

Speaker 1

The studios also.

Speaker 3

Inquired if, once a tentative agreement is ratified by the scribes, the writers would pick up their pens and hit their keyboards again very soon afterwards. The guild, from what we understand, made it clear that they would not be going back to work until SAG-AFTRA also had a

new agreement with the AMPTP, reflective of the WGA's feeling of solidarity between the two unions that has characterized their first mutual strikes since nineteen sixty. Love the solidarity. And I know IATSE also has a contract that is going to need to be renegotiated next year. And it looks like, guess what, folks, they did.

Speaker 1

They only listen to one thing.

Speaker 3

Yeah, putting your fucking, putting the tools down. Put the tools down to pick them other tools up, which is getting in your solidarity bag, so we'd love to see that. And another tweet I like is from T-Pain, at T-Pain, who just put: bartender, just doing her job.

Speaker 1

Me, just this photo of Kevin James, the smirk of Kevin James. That's not fucking rugged, but it is mugging, like trying to be like, hey, hey, can I get your eye contact?

Speaker 2

Yeah?

Speaker 1

Just a, just a ginger beer if you can. Those are my tweets.

Speaker 2

Tweet I've been enjoying at BRNBNE. I don't know how to pronounce that, but their display name is bbb bbbb space BBBBBBB, which I have.

Speaker 1

To assume stands for.

Speaker 2

Bone bone bone bone bone boom bone boom boom. But for the tweet, I usually don't use things that I've retweeted, but this one got me so much I had to reshare it. The tweet is: what if you went to E.T.'s planet and all of the other E.T.s were wearing clothes.

Speaker 1

That really, oh, that really fucked up how I thought about E.T. Yeah. Yo, what?

Speaker 2

You can find me on Twitter at Jack underscore O'Brien, and on Threads at Jack underscore O underscore Brien, and on Blue Sky, I'll have a username there eventually, soon. And you can find us on Twitter at Daily Zeitgeist, and at The Daily Zeitgeist on Instagram.

Speaker 1

We have a Facebook fan page.

Speaker 2

On our website, dailyzeitgeist dot com, where we post our episodes and our footnotes, where we link off to the information that we talked about in today's episode, as well as a song that we think you might enjoy. Miles, what's the song people might enjoy?

Speaker 1

Uh?

Speaker 3

This is an artist, Detroit's very own John FM, and I just figured, what an appropriate title: it's called White Science. And the track is, like, very... it feels like if, like, I don't know, like if Prince was making shit in, like, the late nineties. It has like a Princey vibe, like it's Princey, and it's kind of got this, like, vocal modulator on it that feels a little bit Princey. And also it's just a really good track. So this is John FM with White Science.

Speaker 2

That's what John FM's mom calls him when he's in trouble.

Speaker 1

John FM, get in here now.

Speaker 2

The Daily Zeitgeist is a production of iHeartRadio. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows. That is gonna do it for us this morning. We're back this afternoon to tell you what is trending, and we'll talk to you all then.

Speaker 1

Bye bye,
