Are We Overrating A.I.? Which Movie A.I. Gets It Right? 09.26.23

Sep 26, 2023 · 1 hr 11 min · Season 306 · Ep. 2

Episode description

In episode 1553, Jack and Miles are joined by A.I. Researcher and co-host of The Good Robot, Dr. Kerry McInerney, to discuss… Ways The Future Of AI Might Look Different Big And Small, The Pause Letter, Taking The Profit Motive Out? Alternatives to the Corporate Capitalism Stuff? There’s A Bias In The Media To Make It Seem Like AI’s Plotting Against Us, Are We Overrating AI? How Movies Shape How We Picture AI, What Should We Be Reading/Watching Instead? And more!

PRE-ORDER The Good Robot: Why Technology Needs Feminism NOW!

LISTEN: White Science feat. ZelooperZ by John FM

See omnystudio.com/listener for privacy information.

Transcript

Speaker 1

Hello the Internet, and welcome to season three oh six, Episode two of the Daily Zeitgeist, a production of iHeartRadio. This is a podcast where we take a deep dive into America's shared consciousness. And it is Tuesday, September twenty sixth, twenty twenty three.

Speaker 2

Ooh you ready.

Speaker 1

You know what this is?

Speaker 3

Right?

Speaker 2

Oh, you know what this is, right? For all my pancake lovers, it's your day, it's National Pancake Day. It's also National Shamu the Whale Day. Shout out Shamu. It's also, for all my dumpling lovers out there, for me specifically my gyoza lovers out there, National Dumpling Day. Okay, so you know, celebrate however you see fit depending on

your cultural disposition. It's also National Johnny Appleseed Day. And I only bring that up because I remember in Christian school we were made to sing that as like a song before we went to lunch.

Speaker 1

Man. The amount of pro Johnny Appleseed propaganda that I had to live with. A lot of.

Speaker 2

Approach, Yeah yeah, yeah, it's like like John Chapman or something original.

Speaker 1

I just really liked it, really fucked with apples like the children must learn.

Speaker 2

Yeah, I bet there's some dark ass shit probably behind, you know, like a story like that. It's got to be a milkshack.

Speaker 1

Though, Yeah, for sure, very problematic. But the way, like the amount of information that I consumed, Like I had one of those my family had one of those book sets where it was like important historical figures and like right next to Lincoln was like Johnny Appleseed, and you know they I was prepared for Johnny Appleseed to be like one of the major figures in my.

Speaker 2

Life, right, and education. Did you know the song? No. The Lord is good to me, and so I thank the Lord for giving me the anxiety, the sun and the rain and the apple seed. The Lord is good to me. And then we could go eat lunch. But we had to sing that shit before. Wow. From the Disney movie, apparently.

Speaker 1

So I wasn't even, I didn't even get all of the Johnny Appleseed propaganda.

Speaker 2

I got the Christian, I got the Christian capitalist Johnny Appleseed propaganda just mainlined into my little brain. He wore a pot on his head, right, I mean I think, who's sure, like backwards, kind of like Davy Crockett.

Speaker 1

Yeah, anyways, this is all, I don't know, shit that is probably specific to American audiences. Yeah, but my name is Jack O'Brien aka: Start spreading the news, I'm eating today. I want to eat a part of it: pecan, pecan pie. These chattering teeth are longing to chew right through the nutty part of it: pecan, pecan pie. That was courtesy of Maxer twelve sixteen on the discord, a little just real solid, right down the middle meatball of a Weird Al parody and reference to our conversation

about pecan pie. For some reason, he was lobbing that meatball up to you. Yeah, I saw in the discord. I was tagging that, but you took that. Oh, so you just do other people's akas. Okay, interesting. All right, well, I'm thrilled to be joined as always by my co host Mr. Miles Gray.

Speaker 2

Okay, so you did my aka and I heard you do this one on Friday, so allow me to do it one more time, same Miles and Jackie.

Speaker 3

Have you seen it yet? Ooh, it's just flying around la la la, losing all my jets. Yeah, oh there isn't were and his sauce based out. Oh Jetty, she's a really neat. She's gotta stell the use a bomb and shoot so fazy you can't even be seen.

Speaker 1

Losing all my jets.

Speaker 2

Shout out to the Department of Defense for losing all of our multi million dollar killer fucking spacecrafts, and shout out to Johnny Davis and Blinky Heck, one more time.

Speaker 1

We're done. One more time. Yeah, maybe we can like layer those, because you went high, I feel like I went a little bit lower. Maybe, maybe we could get a little, little harmony, harmony. Yeah, anyways, Miles, yeah, we're thrilled to be joined in our third seat, yes, for today's expert episode by a truly brilliant guest who I felt especially stupid singing an aka in front of. She's a research associate at the Leverhulme Centre for the Future of Intelligence, where she researches AI from the perspective of

gender studies, critical race theory, and Asian diaspora studies. She is also a research fellow at the AI Now Institute, the co-editor of the upcoming volume The Good Robot: Feminist Voices on the Future of Technology, and the co-host of the wonderful Good Robot podcast. Please welcome the brilliant, the talented Dr. Kerry McInerney!

Speaker 4

Thanks so much for having me.

Speaker 2

Thank you so much for joining us.

Speaker 1

How different has this been so far from what you were expecting?

Speaker 4

Oh my gosh, I don't have a song. I was like, how do I say, I definitely don't have a song. And I also don't know who Johnny Appleseed is. I grew up in New Zealand and so I didn't have a book of like historical figures. I had a book called Watch Out, These Creatures Bite and Sting, and it was like all the ways you could die being killed by like a jellyfish or a stingray or like an octopus if you made your way into Australia. And so it kind of like traumatized me for good

from childhood. But it does mean I didn't have the great Appleseed song that you.

Speaker 1

Just sang. Yeah, yeah, did you guys have apples though? Because I was under the impression that the only reason I had apples was Johnny Appleseed. Yeah, and the Lord, and the Lord provided. Is it just all kiwis?

Speaker 2

Right down there, pretty much.

Speaker 4

Kiwis kiwi fruit, no apples, no songs, just no music.

Speaker 2

It's just silent. Excuse part of my ignorance, but the kiwi, is that like a native New Zealand fruit?

Speaker 4

I'm hoping so the kiwi is a native New Zealand bird, and then the kiwi fruit is what we call that like fuzzy brown fruit. Yea, yeah, I feel like they kind of mix up the two or they you know, they're both brown and fuzzy and sort of.

Speaker 2

Bulbous ignorance incarnate here, folks.

Speaker 1

But if you do cut the bird in half, it looks exactly like the kiwi fruit. Right? A lot of people don't know that. Green, with a little seed core.

Speaker 2

Yeah.

Speaker 1

All right, Dr. McInerney, we have you here today to talk to us about AI. We talked about AI on last week's episode. We're thrilled to ask you our follow up questions, but before we get to any of it, we do like to get to know our guests a little bit better and ask you, what is something from your search history?

Speaker 4

This is pretty embarrassing. I just got a Nintendo and I'm like finally entering my gamer girl era. I'm like, this is going to be so fun. But my whole search history, it's just me googling like very basic controls in Zelda: Breath of the Wild. It's like, how do I jump? Like it's really awful. I went and looked, it's like, how do I befriend a dog? Like how do I climb a mountain? And so it's not going well. I need to outsource my Zelda playing to like children who will be much better than me.

Speaker 2

Wait, but is it? Is it? It's not forcing you to give up, right? You're just, you're, you're just acclimating to the new environment, right.

Speaker 4

I mean the kingdom is not going to be saved, whatever I'm meant to be doing. I'm just walking in circles now for like an infinite amount of time. But I'm having fun, which I guess is the main point.

Speaker 2

Yeah, yeah, yeah, And wait, why did you or were you? Were you like a game? Did you play games in childhood and you're kind of coming back or this is completely new, like this is all new territory for you.

Speaker 4

I did, but I was also bad in childhood. Like this isn't a story of like, oh, I used to be like super into games and I like fell out of touch, like how people were like amazing athletes as children and then like pick it up again. Like I'm consistently awful, so the game search history is not a surprise.

Speaker 2

Got it? Got it? I mean you were doing, you know, worthwhile things like thinking about how to ethically deal with all this technology while I was laughing my ass off playing Donkey Kong Country, you being like, Aha, slap the ground and bananas come out. That's a little hack people didn't tell you about.

Speaker 1

Our AI guest on last week's expert episode was saying that video games are one of the ways that he thinks that the future, the future of AI is going to impact our day to day lives. Is that something that you think about as you're out there kind of just walking in circles in Zelda: Breath of the Wild?

Speaker 4

I mean, I feel like I performed so poorly they probably think I am like an NPC. I think it's just like that is not a person with free will like that following a fox around.

Speaker 2

It just it just rotated seven hundred and twenty degrees over and over for no reason. I guess.

Speaker 1

So have they made games more fun for people who suck at video games? That's the innovation that I'm looking for, because I was never very good at them.

Speaker 2

Yeah.

Speaker 1

I liked playing them, I like hearing about them. I think I'm ready to enter my gamer girl era and come back.

Speaker 2

But yeah, you should. I mean, yeah, look, Jack, they already know you as the Switch God because you've been on Nintendo so much. The controllers basically fuse to your body. But there are like a lot of new, or like new, like the new Star Wars game, you could just set it to like, man, I'm not trying to do

all this like fancy shit. I just want to beat people up, or just mash the keypad over and over and win like that, and you can do that, because I think, you know, it is about being able to play at different skill levels rather than being like, oh, you don't know how to multi use the force while you know you're doing your melee attacks. Come on, some of us just want to be the kids.

Unraveled is good for that too, it's like a fun video game that is not like really requiring you to have like these sort of highly developed video game skills and it's an easy play.

Speaker 1

Nice. Dr. McInerney, what is something you think is overrated?

Speaker 4

I feel like I'm not gonna win myself any friends. It'll be like How to Lose Friends and Stop Influencing People. But all the Disney live action movies, I've hit a stage where I'm like, no more live actions. Like The Little Mermaid is beautiful, I love the original Cinderella, but you know I don't, I don't want to see more and more of like the same movies again. Like it makes you a bit sad, like what other stories could be told if it weren't like always making these

live actions. But you know, maybe I'll be proven like totally wrong and the next ones will be amazing, But I'm ready for something else.

Speaker 2

Just, yeah, based on everyone's first like sort of reaction, like, oh you saw it, how was it? And like, yeah, it... No one was like, it was so good, I'm gonna see it nine times. Everyone just kind of, I think they have to reconcile like their love for the original source material with that and not fully be like, I didn't like it. They're like, yeah, I mean it's, it's, it's interesting.

Speaker 1

You know, it's like their voice just goes up. Yeah, you know, it was like, I spent my afternoon going to that. I think that's right. I've never been more confident in a prediction about the future of a like subgenre of movies than in saying that they're not going to suddenly figure something out about the Disney live action reskins of the animations. Like we know what the original cartoon ones look like and what happens in them.

We know what is possible here, right. I can't, I can't imagine a version, like, what are they gonna pull out where we're like, oh no, did not see that one coming?

Speaker 2

Whoa, that bear is talking and singing? Hold on, oh, they already did that. The best one I think was Jungle Book, and that was the first one they did, and ever since, I feel like, yeah, it's been diminishing returns, man, although I didn't see The Little Mermaid, so I cannot speak to that one.

Speaker 4

Is you?

Speaker 2

What's your favorite genre of film though, Dr. Kerry? Ooh?

Speaker 4

I mean okay, so I really have a soft spot for like old school superhero movies. But my like most recent, most amazing film was the latest Into the Spider-Verse film, what was it? The Spider-Verse? And I just, I love animation, and that film is so witty and the visuals are extraordinary, just great films. So I go on about how much I love these films at work and everyone has to deal with me being like, go watch it. So anyone listening, go watch it.

Speaker 2

Amazing. Favorite, favorite Spider character, Spider person in the universe?

Speaker 4

Oh my goodness. I can't remember the name of it. But the, like, really dark noir one from the first one. He's like from all those like nineteen thirties kind of mystery films, you know, in black and white. I kind of love that.

Speaker 2

Yeah, yeah, was it just Spider-Man Noir?

Speaker 1

Yeah, I think Noir Spider-Man. Maybe, I don't know, or.

Speaker 2

I'm sorry, Spider-Man Noir. Noir.

Speaker 1

What, when you say old school superhero movies, do you mean like Christopher Reeve Superman? Or are you talking about like the first Iron Man? The first Iron Man?

Speaker 4

I think it's almost like less like specific films. I love almost films that have that like really cheesy like origin coming of age story, like, you know, for me, it's just really just about like they're discovering themselves and it's such a simple narrative. And like I feel like I should crave more complexity than that, and I should want

it to be a more nuanced story. But sometimes it's just really satisfying and you want this clean cut narrative of this like good guy beating all these bad guys. When I want to relax, I really enjoy that. In my day job, I think about lots of like complexity and nuance and like what we do with the future of society, and so, you know, right, sometimes I find it very relaxing just to be like mindless.

Speaker 2

Yeah, you're like, yeah, I love a Joseph Campbell type of flick. Just give me that hero's journey in everything.

Speaker 4

It's gonna be okay, Yeah, it's gonna.

Speaker 2

Go wrong, challenge your old self to become your new self.

Speaker 1

Yeah, but Doc Ock is an example of what we're facing with AI like in the immediate future. I think we can all agree on that, right? Spider-Man 2, in.

Speaker 2

A way, you're like, it's giving me the equivalent of like eight arms. In a way you're like, oh boy.

Speaker 1

What, what's something you think is underrated?

Speaker 4

Oh? Underrated? I mean I'm a big like cozy night in person. Like I want to be one of those fun party people who are like, you know, out raving. But realistically at ten PM, I'm like, the day is over, I'm in bed. Like I think a cozy night in, especially as autumn is coming, like that's just my happy place, me at home, my husband. Life is good.

Speaker 2

Yeah, I like that. That's my, that's my whole thing. Especially, we don't get seasons here, so the second there's like just the feeling of like a chill, I'm like, I need to start nine fires and just be near them.

Speaker 1

And that is a problem that Miles has, and he's working on it in therapy.

Speaker 2

I guess legally it's called arson or something, but what I say is I just want everyone to be cozy.

Speaker 1

Around the Greater Los Angeles metropolitan area.

Speaker 2

I'm, yeah, I cannot go within four miles of the Angeles National Forest.

Speaker 1

But whatever, they must see my light. Yeah.

Speaker 2

I love coziness. I love a... are you, are you cozy? Are you summertime Jack? If you had to pick between, would you rather be summertime Jack or cozy Jack?

Speaker 1

I do. I do love a summer especially I'm in my swimming phase where I just I really like getting in a body of water, get running into the ocean, even when it's a little cold ocean. And yeah, so I think I'm like in a summer phase. But I yeah, I am feeling the need to get cozy.

Speaker 2

Yeah, I'll always take cozy over summer. That's just me. I know you you It's it's.

Speaker 1

Like you fetishize winter. I do.

Speaker 2

It's problematic. Like that's why I would only date like, like women from the Northeast. They're like, I feel like you're dating me for this like exotic sort of life you think I live, like, what's it like with the snow?

Speaker 1

You're appropriating, like, Minnesota culture?

Speaker 2

Right, Yeah, well, hey, look we're all trying.

Speaker 1

Hey, I heard that. He said, hey. All right, let's take a quick break and we'll come right back and talk about AI. We'll be right back. And we're back. And Dr. McInerney, as mentioned, I am an idiot on this AI stuff. I think I generally have like my version of AI up until last week, I guess, like researching for our last expert episode, was what I had read in, you know, mainstream articles that went viral, and films, like Hollywood films, and then like messing around

with OpenAI and/or ChatGPT. So I had this kind of disconnect in my mind where it was like, from an outsider's perspective, we have this C plus level like copywriter thing with like ChatGPT, GPT-4, and then like the Godfather of AI, who I'm just trusting people that he is the Godfather of AI, but that's the phrase everyone uses, the Godfather of AI just quit Google and says we're all fucked

in the next couple of years. And I think it's confusing to me because I don't know exactly, like I can't even like picture the way he thinks we're fucked. And there was this letter that was like, we need to pause development on AI for the near future, and I guess I'm just curious to hear your perspective on that pause letter and what the kind of dangers of AI are in the near future.

Speaker 2

Yeah, because to Jack's point too, also, we were talking with Juao Sadoc last week at NYU about it, and like at the end of it, we're like, okay, so it's not Skynet, right, from Terminator, and they're like, oh great. But then we realize there's a raft of other things that come along with just not being the Terminator. So yeah, I'm also from a similar perspective where I always assumed Skynet.

Speaker 4

Yeah, I mean, I think you're totally not alone in the dominance of ideas like Skynet, the Terminator, because so much of our cultural framework for understanding what AI is comes from a very narrow set of movies. It's like the Terminator, like the Matrix, which always positions AI as something that's going to dominate us, it's going to take over the world, and it's going to control us. And it's important to, I think, highlight that that's definitely

not the only idea that we have about AI. We've got thousands of years of thinking about our relationship with intelligent machines, and there's a lot of different cultural traditions that have really different ways of thinking about our relationships with AI and intelligent machines that could be much more positive, much more harmonious. And so I do think our immediacy of jumping to this idea of Skynet is reflective of very much where we are right now. Right, I'm

in the UK, you're in the US. These are countries that have a really long history of thinking about AI in very binary terms. So yeah, I think it's important that we think about these long term risks of AI. And you mentioned the pause letter calling for a halt to generating more large language models like ChatGPT until we had a bit more of a moment to think

about the long term consequences of these models. But I think it's really important not just to think of the long term risks, but to think about which long term risks we prioritize, because I think the Skynet Terminator fantasy eats up a lot of oxygen in how we talk about AI's risks. But there's a lot of different risks that AI poses. So another long term risk that we don't talk about very much at all is the climate cost of AI. Right, because AI is hugely

energy intensive. Data centers require a huge amount of water to function, and we have this massive problem of e-waste produced by a lot of electronic systems. But that long term problem of climate crisis is much less exciting. It's really scary, it's really grounded in a lot of our experiences, and so it just doesn't seem to get as much airtime. So that's something that I think is

really important. It's changing the conversation a bit to say, okay, it's sometimes interesting, sometimes scary to think about the Terminator option, but what are some of the other long term options that could really shape our lives?

Speaker 1

Yeah, like the degree to which the deck is being stacked towards the Terminator option was surprising to me. Like we dug in last week a little bit to the two stories I had always heard that are like kind of put into the Terminator, AI-taking-over category. There's the AI that killed a person in a military exercise that decided to like eliminate its controller. And then there's the AI that hired a TaskRabbit

to pass the CAPTCHA test. And like in both cases, those are like, the AI that killed a person in the military exercise, like that was somebody claiming that, and then when they went back they were like, oh, I was just saying it could hypothetically do that.

Speaker 2

It was a thought experiment.

Speaker 1

Yeah, it was a thought experiment of what an AI could do in the right circumstances. And the TaskRabbit one was more similar to the self driving car Elon Musk thing, where it was just, there was a human prompting it to do that thing that seems creepy to us when we start thinking about, oh, it's like scheming to get loose and overcome the things that we're

using to keep it hemmed in. So it does feel like there is an incentive structure set up for the people in charge of some of these major AI companies to get us to believe that shit, like to only focus on the AI versus humanity, like the AI-gets-loose-of-our-controls-and-takes-over-and-starts-killing-people version of it.

I'm just curious, like, what are your thoughts on why they're incentivized to do that, when it would seem like, well, you don't want to make it, you don't want it to seem like this self driving car will take over and

Speaker 2

Start kill your family.

Speaker 1

Yeah, start killing your family. It's so powerful. But it seems like with AI, they're more willing to buy into that fantasy and like have that fantasy projected to people who are not as closely tied to the ins and outs of the industry.

Speaker 4

Yeah, I mean, I think that's such an important point because it's a weird thing about the AI industry, right? Like, you would never have this kind of hype wave around something like broccoli, where you say, oh, the broccoli, if you eat it, could kill you or it could like transform the world, and you wouldn't expect that to somehow get people to buy a lot more broccoli, you'd just be like, I don't want to eat broccoli now.

Speaker 1

It's so fucking good and powerful you explode like maybe.

Speaker 4

Makes it even worse broccoli that you can then also eat, who knows what the broccoli would be doing. But I do think that we see this real cultivation of hype around AI, and that a lot of firms explicitly use that to sell their products, and it gets people interested in it, because on the one hand, people are really scared about the long term and short term impacts of AI. On the other hand, they're also scared, then, of getting

left behind. So you see more and more firms saying, well, now I've got to buy the latest generative AI tool so that I look really tech savvy and I look forward-thinking and I look futuristic. And so it's part of the bigger hype cycle, I think, to draw a lot of attention towards their products, but also to make them

seem like this really edgy, desirable thing. But I think what's also interesting about both the stories that you raise is when you looked under the hood there was human labor involved, right, There were people who were really central to something that was attributed to an AI. And I think that's a really common story we see with a

lot of the hype around AI. Often the way we tell those stories erases the very real human labor that drives those products, whether it's the artists who originally made the images that train the generative AI, through to the data labelers, all sorts of people who are really central to those processes.

Speaker 2

Right. And I know, like in your episode of The Good Robot when you're discussing the pause letter, you know, I think the, I think the version that we see as like sort of these short term threats, at least in the most immediate way, is like, for me, working in and around entertainment and people who work in advertising and seeing like an uptake of it in that sector, I go, okay,

that's easy. Like I can see how a company immediately goes, yeah, it's a tool, and then suddenly it's like, and now you're on your ass because we'll just use the tool now and we don't even need a person to prompt it, or we just need fewer people to operate it. So to me, I'm like, okay, that's an obvious sort of thing I can see like on the horizon.

And you did talk about, well, there was a lot of talk of these sort of long term existential or quote unquote existential threats, that there were a lot of things in the short term that we're actually ignoring. What are those sort of things that we need to bring a little bit more awareness to, Like I know you mentioned the climate, and I look at it from my perspective, I see like the just massive job loss that could happen.

But what are sort of like the more short term things that kind of maybe are less sexy or interesting to the people who just want to write about killer terminators and things.

Speaker 4

Yeah, I mean, I think less sexy is exactly the right phrase for this, which is, a lot of the short term issues are very much about entrenching existing forms of inequality and making them worse. And that's often something people don't really want to hear about, because they don't want to acknowledge these inequalities, or because it takes away from the shininess of AI. It makes it very much like a lot of other technological revolutions that we've

already seen, and that's super boring. Like you don't want to hear about how the wheel somehow brought about some kind of inequality. We all want the big hot take for today. But yeah, I mean, something that I look at, for example, are like very mundane but important technologies, like

technologies used in recruitment and hiring. So I look at AI-powered video interview tools and look at how that affects people's, you know, likelihood of being employed and how they go through the workforce. And yep, it's less exciting seeming than the Terminator. But again, when you look under the hood and dig into them, you're like, oh wow, this could actually really, really compound inequalities that we see in the workforce under the guise of the idea that

these tools are going to make hiring more fair. And that's a massive problem.

Speaker 2

Right, because like the idea with those hiring tools is like, it will actually take away these sort of like demographic cues that someone might, you know, apply their own biases to, so in fact it is the most equitable way to hire. But is it because of just the kinds of people that are creating these sorts of systems, because they tend to be a bit one note, that that's inherently where, like, sort of, it begins to wobble a bit?

Speaker 4

It's a mixture. So of course, yes, the lack of diversity in the AI industry is like very stark. It's also, sadly, in the UK, an industry where, for example, women's representation is actually getting worse, not better. So that's a sad slap in the face for a lot of the progress narrative that we want to see. But sometimes it's not even necessarily that the people creating these tools have bad intentions, maybe not even that they're using faulty data sets or biased data sets. These are two of

the really big problems that are flagged. But sometimes the underlying idea behind their product is just bogus, like it's just a really bad concept, and yet somehow it gets brought to market again because of all this hype around AI.

So with the video interview tools that we look at, for example, they basically claim that they can discern a candidate's personality from their face and from their words, so how they talk, how they move. They can decide how extroverted you are, or how, you know, open you are, how neurotic you are, how conscientious you are, all these different markers of personality. To which I would say, firstly, no,

there's absolutely no way an AI can do that. This is just a very old kind of racial pseudoscience making its way back into the mainstream, saying, okay, we can totally guess your personality from your face. Like, it's like your friends looking at someone's profile picture on like Tinder or whatever and being like, they look like they'd be really fun at a party. Like, it's about that level of.

Speaker 1

Accuracy, you know.

Speaker 4

And then second, is that even a good way of judging if someone's gonna be good for a job? Like, how extroverted do you want a person in the job to be? Maybe in your job that's really, really helpful; in my job, I don't know how helpful. So there's just kind of a lot of bollocks at the very, you know, bottom of these products that we should be worried about.

Speaker 1

Just like a C minus level job hiring process, Like that's what I feel like so many of the things, like when you get down to them and see them in action, they're like not that good. Like it does feel like the whole thing is being hyped to a large degree, and like that's something I heard from somebody I know who like works in you know, like all of my friends who work in finance or like any of those things. Like my brain shuts off when he

starts talking about what he does. But he was saying, like, he pays attention to the market, and he was saying, like, a big thing propping up the stock market right now is AI. And it really is like that, that's where so much of people's wealth is tied up, is in like the stock market, and it's just tied to like what you can get people excited about in

a lot of cases. Really, like, from that perspective, the incentive structure makes sense. Like, you want people talking about how your AI technology can do all these amazing things, because that literally makes you have more money than you would have if they knew the truth about your product.

Speaker 2

Without being like, yeah, wait, how many seven fingered Trump pictures can we create? And be like, yeah, man, fucking millions into this.

Speaker 4

Yeah, yeah, I mean, that's kind of something that's really come out of the last few years, of how many firms just use the label of AI to get funding. Like, I think there was a study a couple of years ago that said, like, a lot of European AI startups didn't use AI, at which point you're like, well, what are you doing?

Speaker 1

Well, we could, though, this podcast is actually an AI podcast because eventually it could use AI. And before we were recording, Miles was actually putting in... he asked an AI to pitch him an episode of Friends in which the cast, and you know, the people on the show, the friends on the show, deal with the fallout from the events of nine eleven. And it wouldn't do it. So we can't quite claim that we are an AI podcast yet, but, but we did.

Speaker 2

It did do it when I said, do it, pitch me an episode where Joey and Monica drive Uber. Yeah, and it did. So clearly you can see where these guardrails are. They're like, don't do nine eleven stuff, though. That's right now.

Speaker 1

But I mean, so yeah, I think there's two things we're talking about here. Like, from one perspective, like, yes, you could put it in the category of, well, yes, the wheel makes racism or colonialism more... frictionless is a word that gets used a lot, but like literally in the case of the wheel, frictionless. But AI and like a lot of technology is designed to make groups of people and like our interactions and the things that make

people money more frictionless. And that's something that you guys

have talked about on recent episodes of Good Robot. Like there's this one example that really jumped out to me that I think was from your most recent episode, or at least the one that's up most recently right now as we're recording this, where you guys were talking about a company that asked a regulating body to make an exception to a law around like a high risk use of AI, and the law said that people had to supervise the use of AI like just because it seemed dangerous,

and the company appealed to the regulating body by saying, well, we just, like, that would cost too much and we would never be able to like scale this and make a profit. And it feels to me like our answer as a civilization to that complaint needs to be, that's not our problem, like, then you shouldn't be doing it.

But instead it seems like the answer too often, not just in AI, but just across the board, especially in the US, is like, Okay, well we have to make an exception so that they can make a profit around this technology, or else the technology won't get developed because the only thing that drives technological progress is like the

profit motive. But that's, you know, as I think you guys talked about in that episode, that's never been the best way to develop technology. Like, it's been a good way sometimes to democratize existing technology, but like, that's, I don't know, I feel like that idea of, you have to make it profitable, you have to make it easy on these companies to keep trying different things for AI to become profitable, is baked in at a like cellular level at this point in how a lot of

you know, Western colonial civilizations operate.

Speaker 4

I mean, I think too often a lot of the technologies that shape our daily lives are made by a very narrow set of people who ultimately aren't beholden to us. They're beholden to their shareholders or to their boss, right, so they don't really have our best interests at heart, right? Like, for example, take this whole rebranding of like Twitter to

X by Musk. I remember waking up and finding my little Twitter bird replaced with this huge X and just being like, oh, firstly because it was part of, you know, Twitter's slow decline. But secondly, it made me feel pretty disappointed, or really aware of the fact, that one guy could have such a huge impact on how literally millions of people use a social networking platform that's actually super important, you know, to their daily lives and has played a

huge role in activist movements and fostering different communities. And I think that's a story we see time and time again with some of these big tech companies, which is not only do they have their own profit motive at heart, they're also not beholden in any way to the public, and they're not being compelled by regulation to make good

decisions that necessarily benefit the public. So I think a really important question going forward is how do we support kinds of technology development that are very much based in

the communities that the technology is for. I think one really big part of that is recognizing that so many AI models, as you mentioned, right, they're designed to be scalable and that's how they make money, this idea that you can apply them globally and universally. And I think that's a big problem, partly because it often is really homogenizing. It can involve like exporting a model from the US usually out to the rest of the world, where it's probably

not actually appropriate to use in those contexts. But also a lot of really exciting and good uses of technology I think come from these really localized, specific, community based levels. So sometimes I think it can be about thinking smaller rather than bigger.

Speaker 2

Yeah. Yeah, that was like another thing that struck me about like just all the warnings and even in that pause letter is sort of like the presumption that it's like, well, all you motherfuckers are going to use this, so we got to talk about it where it's like I don't know, I don't even fucking know what it is, Like a second ago, I thought it was Skynet, and now, like you know, you have your company being like, yeah, we now have enterprise AI tools like welcome. You're like, but

what am I, huh? Like? And I think that's what's a really interesting thing about this as like a sort of technological advancement, is before people really understand what it is, there is, like, from the powers that be, sort of going into it being like, well, this is it, like everyone's using it, but I'm still not sure how.

And I guess that probably feeds into this whole model of generating as much, you know, market excitement about AI by taking the angle of like, everything's different because everyone is going to be using AI, most of y'all don't know what that is, but get ready. And I think that's what also makes it very confusing for me, as just like a lay person outside of the tech sphere, to just be like, wait, so are we all using it? And even now, I really, I still can't see what

that is and how that benefits me. And I think that's a big part of, I'm sure, your work too, or even like any ethicist, is to understand, like, well, who does it benefit? Like, first, we're making this because it benefits who and how? Yeah. And I think, yeah, I mean, right now, it benefits the companies that are

making it. And it sort of feels like that's the way it's being presented, or slyly being like, yeah, you guys are gonna love this, but really we're going to benefit from the adoption of this technology.

Speaker 4

Yeah, I mean, I think that's this crucial question, is this stepping back and saying, actually, is this inevitable? And do we even want this in the first place? And I think that's what really frustrated me about the pause letter and about a number of kind of big tech figures signing on to it, is that they're very much pushing this narrative of like, oh, this is like unstoppable and it's inevitable and it's happening, we've got to find a way to deal with it.

Speaker 3

And it's like.

Speaker 4

You're making it! Like, you're the people literally making these technologies in a lot of cases, so if you really think it's an existential risk to humanity, stop. It honestly could be that simple. But that's, you know, what makes me really then question their motives in sort of coming forward with a lot of this kind of very doom and gloom language. I think it's also interesting if you look at, for example, countries' national AI strategies.

So if you look at, say, like China and the UK and the US, and these countries that are now thinking about what their national long term AI strategy is going to be, they also very much frame it around the idea that AI is completely inevitable, that this is going to be the transformative technology for imagining the future, for geopolitical dominance, for economic supremacy. And again, I think

as an ethicist, what I really want people to do is step back and say, I think we're actually at a crossroads, where we can decide whether or not we think these technologies are good for us, and whether they are sustainable, whether they are a useful long term thing for our societies, or actually whether the benefits of these technologies are going to be experienced by very few people and the costs are going to be borne by many.

Speaker 1

Right. We talked last week about the scientific application that, you know, used deep learning to figure out the shapes of proteins, the structures of proteins, and that could have some beneficial uses, will probably have some beneficial uses, for, you know, how we understand disease and medicine and how we treat that. But there are ways to probably differentiate and think about these things. Like, it's not, you don't just have to be either a Luddite or like AI pedal

to the floor, you know, let's just get out of the way of the big companies. It feels like, but it is such a complicated technology that I think there's going to be inherent cloudiness around how people understand it, and also manufactured cloudiness, because it is in the overall system's benefit, the overall system being like capitalism, it's in their benefit to generate like market excitement where there shouldn't be any, basically.

Speaker 4

Yeah, I mean, I think it's easy to generate this kind of nebulousness around AI because to some extent we still don't really know what it is. It's still more of a concept than anything else, because the term AI is like stretched and used to describe so many applications. Like I spent two years interviewing engineers and data scientists at a big tech firm, and they would sort of grumble, well, fifteen or twenty years ago, you know, we didn't even call this AI, and we were already doing it. It's

just a decision tree, you know. Again, it's kind of part of that branding. But also we have these, again, thousands of years of stories and thinking about what an intelligent machine is, and that means we can get super invested and super cloudy very, very quickly. And yeah, I don't want people to feel bad for being scared or being cloudy about these technologies, like, it is dense and confusing. But at the same time, I do think that it is really important to come back to that question of

what does this do for us. So, you know, the question of like Luddism or being a Luddite I think is really interesting, because I, you know, personally, I do use AI applications. There's certain things about these technologies that really excite me. But I'm really sympathetic to some of the kind of old school Luddites who weren't necessarily anti technology, but were really against the kind of impacts that technology

was having on their society. So the way that new technologies, I think it would be things like spinning and weaving, were causing mass unemployment, and the kind of broader ramifications that was having for people in the UK socially, and that kind of has quite a scary parallel to today in terms of thinking about maybe what AI will bring about for the rest of us who maybe aren't researchers in a lab, but who maybe might be replaced by some of these algorithms in terms of

our work and our output.

Speaker 1

Yeah, can you talk at all about open source like models of... because, you know, when we talk about this idea that corporations have all the power and are incentivized to do whatever is going to make the most money, which in a lot of cases is going to be the thing that removes the friction from consumption decisions, and, you know, just how people interact and do these things, which, well, as you guys talked about in your episode, like removing

the friction... like, friction can be really good. Sometimes, sometimes your system needs friction to stop and correct itself and recognize when bad shit, when things are going wrong. But, you know, there's also a history, even in the US, where corporations are racing to get to a development and ultimately are beat by open source models of technological organizing around like getting a specific solution. Do you have any hope for like open source in the future of AI?

Speaker 4

Yeah, I mean, I think I'm really interested in community forms of development, and I think open source is a really interesting example. I think we've seen other interesting examples around things like collective data labeling, and I think that these kinds of collective movements, on the one hand, seem like a really exciting community based alternative to the concentration of power in a very very narrow segment of tech companies. On the other hand, though, I think community work is

really hard work. We had Dr. David Adelani on our podcast, who's a very important figure in Masakhane, which is a grassroots organization that aims to bring some of the over four thousand African languages into natural language processing or NLP systems. And he talks a lot about how the work he does with Masakhane is so valuable and so important, but it's also really, really hard, because when you're working in that kind of collective, decentralized environment, it can be much slower,

and as you said, there can be a lot more friction in that process. But counter to this move fast, break things kind of culture, sometimes that friction can be really productive and it can help us slow down and think about, you know, the decisions that we're making very intentionally, rather than just kind of racing as fast as we can to make the next newest, shiniest product.

Speaker 2

It's almost like, in your work too, you know, you talk about how, you know, like looking at these technologies, especially through a lens of like feminism and intersectionality and, you know, BIPOC communities and things like that. And like, broadly in science there's, like, you know, there's an issue of like language hegemony in scientific research, where if things aren't written in English, a lot of times studies just get fucking ignored, because, like, I don't speak Spanish or I

can't read Chinese. Therefore I don't know if this research is being done, and therefore it just doesn't exist because the larger community is like we all just think in English, like, so how do you like, you know, specifically because you know, in hearing the description of your work, help me understand, like and the listeners too, like, of how we should be looking at these things from that also from that perspective too, because I think right now we're all caught

up in, like, it's talking Skynet, and it's not, you know, hold on, like, there are other subtleties that actually we should really think deeply about, because to your point, I feel like those are the dimensions of an emerging technology or trend or something that get ignored, because to your point, it's like the thing, well, of course it's unequal, of course it's racist or whatever. But what are those, like, what are those ways that people need to really be thinking about this technology?

Speaker 4

Yeah, I mean, I think English language hegemony is a really good example of this broader problem of the more subtle kinds of exclusions that get built into these technologies. Because I think we've all probably seen the cases of AI systems that have been really horrifically and explicitly racist or really horrifically sexist, from, you know, Tay the chatbot that started spouting horrific right wing racist propaganda and had to get taken down, through to the Amazon hiring tool that systemically

discriminated against female candidates. These are really, I think, overt depictions of the kinds of harms AI can do. But I think things like English language hegemony are also incredibly important for showing how existing kinds of exclusions and patterns of power get replicated in these tools. Because to an English language speaker, very crucially, they might use ChatGPT and think, this is great, this is what my

whole world looks like, if they only speak English. Obviously, to anyone who is not a native English speaker, who doesn't only speak English, it's going to be an incredibly different experience. And that's where I think we see the benefits of these tools being really unequally distributed. I think it's also important because there are such exclusions in which kinds of languages and forms of communication can get translated into

these systems. So, for example, I work with a linguist at the University of Newcastle, and she talks about the fact that there's so many languages, like signed languages and languages that don't have a written form, that are never going to be translated into these tools and never are going to benefit from them. You might think, okay, well, do these communities want those languages translated into an AI tool? Maybe, maybe not, I'd argue, of course, that's up to them.

But those communities are still going to experience the negative effects of AI, like the climate cost of these tools. And so I think it's just really important, like you say, to think about what kinds of hegemony are getting further entrenched by AI powered technologies.

Speaker 1

All right, great, let's take a quick break and we'll come back and finish up with a few questions. We'll be right back. And we're back. And all right. So, I mean, one thing that I want myself to just like get out of this conversation is just a sense of, like, ways that we, like, what we think AI might look like, how it might impact what the world around us looks like in the not too distant future.

And we've already talked about like some ways that AI is being overrated as like an autonomous killbot, like a Terminator-style killbot. You know, there's some other examples, like on one of the episodes of your podcast, an AI expert talks about getting an AI enhanced cancer scan and assuming the scan was like taking 3D video and doing like 3D modeling, and it was just putting a box on a 2D image.

And I believe the guest like admits that they were influenced by like Hollywood movies, and it seems like that's what the people who are trying to make money off of this want. So like, we're going to have this steady push to make us overrate and misunderstand what the actual promise of AI is going to look like, what tools are going to be given to us in the near future, and like what are the tools actually able

to do? So I'd just be curious to hear from you, like, what do you think are ways that AI might intervene in our lives in the near future, or might already be intervening in our lives? What are those things that we're underrating, and what are the things that you think are probably taking up too much of our bandwidth in terms of how we're picturing AI?

Speaker 4

Oh, that's a great question. I think to start with the second half, to start with what I think is maybe a little bit overrated, or maybe will take a bit longer to pan out, I would say some of these really high tech applications, things like AI in medicine, for example. I think that we might not see them

expand beyond a very narrow set of controlled circumstances. I think this idea that, you know, soon every hospital will have this tool, or, say, AI in education, that every classroom is going to have access to this tool, I think unfortunately just really grossly overestimates the kinds of resources, at least in this country, that schools and the

hospitals have. I was talking to a friend who's a doctor here and she was saying, oh, well, what do you think about AI in medicine, And then she kind of stopped for a moment and just laughed and said, well, I don't know why I'm asking you that, because my hospital uses paper notes. And I said what, and she said, yeah,

our whole hospital is not computerized. And that was a good reality check for me because it made me realize, like, wow, actually some of the stuff that I'm thinking about is so far away from the reality of what people like my friend the doctor is having to deal with on

the ground. And same with schools. You know, here hundreds of schools I think have had to be closed because there's some kind of issue with the concrete, that it might collapse, which is, A, horrific, but, B, is such a different level of infrastructural problem that thinking about AI in the classroom is really not on the radar in those scenarios exactly. Yeah, autonomous... Yeah, it's like a low... Yeah, people are like, what if students cheat? And

you're like, look, yeah, you know. So those are things that I think are maybe a little bit on the overrated side. In the way of things that are underrated, like, I'm really interested in how AI can change how we see the world and how we see ourselves. Like, I think, you know, take for example something like TikTok. You know, I have to confess I'm a big TikTok user. I love scrolling a lot of mindless garbage. It brings

me a lot of joy and peace. But even things like AI-powered beauty filters, right? Like, I think that has such a profound effect on how you just understand and see yourself and your own face. They're often really imbued with gendered and racist kinds of assumptions as well. I often look a lot whiter when I use a

beauty filter, even though I'm multiracial. Like, all of those assumptions, I think they get baked in and seep into our lives in like really subtle ways, but I think collectively can have quite a big impact.

Speaker 1

Yeah, I was talking about the, I don't know if this is technically AI, but then again, I don't know what AI is or if there's like a specific argument, but like the way that like iPhones take forty pictures like consecutively and then like choose the best one from the forty, and like the live photo setting, like, feels like a thing, and like it's really good at it. And it's something that I had, like, just, it happened on my phone with like an update, and I was like, yeah,

this is just how you take pictures now, and like pictures are way better than they used to be. I think I saw somebody speculate that like some of the software like Photoshop and other things that have traditionally had sort of a difficult learning curve will suddenly get much easier for people to use. So yeah, I like those are like having actual tools that are like kind of easy to use and like very simple, straightforward. We know what the goal is here, and we're able to use

these enhanced kind of programs to achieve that goal. Like that that feels more possible to me and like something that we might see in the not too distant future.

Speaker 2

Totally, that's like a pitch you can understand, like this will make it easier for you to photoshop a friend out and just put someone else in. You're like, okay, rather than right now, it's like this shit will end the world and you're like, well, what is it? I don't know, I don't know, I don't know, but it

will fuck you up. And you're like, huh. And I guess that's such a different proposition up front. And earlier, Dr. Kerry was saying, like, there are things that genuinely excite you about, like, these emerging technologies from AI, and I would love to hear from your vantage point, because you are looking at this in a, like, what is ethical and what is going to bring meaningful value to people way. Like what are those things?

That I can feel like, oh, yeah, yeah, I can get down with that future.

Speaker 4

Yeah. I mean, I recognize that I sound very doom and gloom a lot of the time that I speak, and all my friends have to deal with my sort of constant existential anxiety about technologies. But at the same time, you know, I'm an active user of a lot of them. Like, I use a lot of voice dictation software, voice editing software, you know, for when we record our podcasts, and these are things that genuinely make

my life so much better and easier. I love being able to transcribe, you know, what I'm saying and have it appear on the screen, and not have to type out my emails and have them appear. I recognize these are, like, all very boring. I should have said something like, I DJ. No, not at all, I write emails

using voice. It's very, you know... but I think, kind of, you know, extrapolating out from that, you know, there's really amazing applications in accessibility, when it comes to the kinds of access people can now have online because of advancements in AI-powered tools. And so I think, though, you know, what's really central there is kind of that broader vision of, you know, what is the kind of benefit this is meaningfully bringing to society? What problem is

being solved? And are the people who are, like, most affected by that problem the ones leading the conversation and saying what they need? Because too often I think we see tech developers creating stuff that actually no one really asked for. I'm sure you have, like, all seen that thing on Amazon where you're like, who asked for that banana holder or avocado peeler or something? And too often I think tech tends to be a little bit

like that. Whereas I think when we have really interesting, conscious development of, say, for example, feminist tools that are designed to encourage good conversations, things like that, that really makes me excited about the kinds of futures we could have with technology.

Speaker 2

Right, it almost feels like the danger here is when someone has, like, a technology and goes, this is gonna be in every home, and you're like, this is a sales pitch, actually. Yeah, because if you're not saying this is how we will work less and have more time to frolic, to enjoy our human existence, to connect with our families, then, like, miss me with that,

because it reminds me of crypto. It reminds me of the metaverse, and it reminds me of Zuckerberg being like, every worker will have this fucking headset on. No, no, but nice fucking try, asshole. And I feel like this has a similar tone of, like, get ready, folks, for this thing. And granted, they have a shiny toy in the form of these large language models that are fun to, like, play with, and really they're just skimming the internet and just giving it

to you in a nice, tidy sentence. But yeah, I feel like that is kind of the dimension I'm seeing when you see the hype around it, which feels a little more like, y'all are talking about this to make money. Whereas the other things that you're talking about, like accessibility and trying to democratize certain applications, things like that, there's less scale involved with something like that, so to me, yeah, it doesn't get talked

Speaker 4

About. Exactly, and I just think it's really important to recognize that, you know, I'm not saying that things like smart home technologies, like Amazon Alexa, are inherently bad products. I'm sure heaps of people love having those technologies in their home; they find them useful. But I just think whenever you bring in a new technology, you also bring in new vulnerabilities, and you bring in new costs. And sometimes the hype wave

means we focus on the benefits. You know, oh, if you bring this to every home, everyone's life will be easier, everyone will have a more seamless home experience, when actually that's not the case. There are real costs to bringing

those tools into a lot of people's homes. Everything from, for example, the way that certain kinds of elder care are getting replaced by technological tools in the UK care system, through to the fact that those tools can really easily be used for the purposes of domestic abuse and intimate partner violence; they can be used to control or trap someone in their home. And these are really ugly truths about those technologies, and they're often not the ones that

are put at the heart of technology development. And so that's a lot of my job, I guess: just to say, how do we put those vulnerabilities and those costs at the forefront, and then judge holistically whether or not we really want this product to exist?

Speaker 2

Yeah, but here's the thing. I got a lot of my stock portfolio tied up in these companies. I need these fucking things to moon, if you know what I mean.

Speaker 1

So I want to end the episode just doing something that you guys do on The Good Robot, which is talk about some books that are recommended, or

you know, what works you recommend. You talked on an episode earlier, I think it was in the summer, about how a lot of the ideas and fears and hopes that we have currently are very similar to what we've seen in movies. Like, when you look at Elon Musk's ten favorite books, they're all fed by this same, like, Isaac Asimov, I,

Robot thing. A lot of Kubrick's ideas around AI go back to this very specific version of technology where it inevitably turns into a mean psychopath who is set on dominating all of humanity. So as an alternative, because we do like to talk about, you know, the importance of expanding our imagination. Like, you know, with regards to climate, a lot of it is

either apocalypse or business-as-usual capitalism; people have a hard time imagining the alternative. On the subject of AI, first of all, is there any, like, mainstream, kind of popular movie that you feel like actually

got that example of AI right? And then are there other, kind of more obscure works that you would send people to in order to sort of feed their imagination of what a world with AI technology could look like?

Speaker 4

Yeah. I mean, that question of the imagination, and how our current imagination of AI is really narrow, is super key. And, you know, I think it's fascinating that, yes, Musk and these folks have a very narrow set of stories that they refer to. In those stories, it's always like, oh, the typical masculine tech bro makes an AI, and then it looks like him and it takes over the world, and you're like, are you okay? That's, like, your bedtime story?

Speaker 1

Sociopath bent on world domination thinks that all AI is going to be a sociopath bent on world domination. That's so weird, right? Yeah.

Speaker 4

Who would have guessed that?

Speaker 2

Yeah?

Speaker 4

But yeah, I mean, something that I really like is stories, particularly science fiction stories. I love sci-fi and fantasy; I'm a huge believer in the way that it can help us imagine different worlds. In terms of mainstream films, I guess the one that comes to the top of my head is

Big Hero 6. So the Disney Pixar film where you have an AI-powered kind of, you know, healthcare buddy called Baymax. And I think that's a really interesting example of an AI where, you know, at one point in the film, he gets dressed up in armor and the kid hero starts to try and use him

as a kind of weapon. But, like, this is an AI that hasn't been designed to be a weapon, that kind of resists being weaponized in a lot of ways. And in that sense, it's really countercultural to a lot of the AI we see in Western film and cinema, which is often very weaponized. It's often either this, like, kind of sexy cyborg figure, or it's this hyper-masculine Terminator figure. Yeah, so I find Baymax is kind of genderless and, like, just.

Speaker 1

A, well, puffy balloon person. But that's also so accurate of what a young boy would do immediately with that, use it like a weapon. It's like the tweet, "I can tell you me and my friends would have killed E.T. with hammers." Like, little boys are monsters. But that's a great example.

Speaker 4

Yeah. In terms of some alternative stories and ideas, sorry to do a really shameless self-plug, but we have a book coming out called The Good Robot: Why Technology Needs Feminism, and I'm plugging it because it's really beautiful. We worked with a science fiction illustrator called Sinjon Lee, and it was myself and Dr. Eleanor Drage, my co-host.

And the whole thing is these two-thousand-word essays, so they're really short and really punchy, by lots of different guests we've had on the podcast. And they all respond to this idea of "good technology dot dot dot." So it might be "good technology is free," or "good technology, you know, challenges power." And the idea is you can just dip in, read one, they're also illustrated, and then just dip out when you need a break or a moment.

But yeah, I really would encourage you to pick that up, because it just contains the most incredible feminist philosophers, technologists, inventors, activists, who are really pushing forward different ideas of the kinds of technological futures we could have.

I mean, apart from that, I read a lot of Chinese and diasporic Asian American sci-fi as well, particularly sci-fi that's thinking a lot about the climate crisis and AI and the intersections of that. And so I think a lot of those stories have really interesting and different perspectives on what AI is and can be, from Aliette de Bodard's work, which explores, you know, the relationships between humans and sentient mindships in this

space opera universe, through to Larissa Lai's Salt Fish Girl, which is a very dystopian imagining of a future society based on labor from clones. All of these novels, I think, just have a lot more love and a lot more care; they're just absolutely interesting and imaginative, gorgeous ideas about the future.

Speaker 1

Amazing. Well, this has been such a fascinating conversation. Thank you so much for taking the time at what I have to assume is midnight where you are right now. It's almost nine? Almost nine, okay, yeah, that's not too late. Well, thank you, Dr. McInerney, for doing the show. Where can people find you, follow you, hear you, all that good stuff?

Speaker 4

Oh yeah, thanks so much for having me on. It's been a blast. Yeah, check out The Good Robot. We're on YouTube, Apple, Spotify, and then keep an eye out for the book coming in February twenty twenty four.

Speaker 1

Amazing.

Speaker 2

Yeah, we'll have to have you back for that. Yeah.

Speaker 1

Yeah, absolutely. And is there a work of media that you've been enjoying?

Speaker 4

Ooh, media in terms of, like, a show, like a podcast?

Speaker 1

A show, a podcast, a tweet.

Speaker 4

Oh, I feel like I should choose something really intellectual. But I actually just sent my husband this tweet that really got me, because it's very me. It's actually a bit dark, but it was this news headline about a hiker who had been missing, and the rescuers couldn't get in contact with them, and they didn't get rescued for twenty-four hours because they wouldn't answer their phone;

it was an unknown number. And this is just exactly me. Like, my husband, please, fuck, you know what I'm like. I will not answer an unknown number. And so that brought me a little bit of joy today.

Speaker 1

Absolutely. And I feel like that's a great example of how technology has marched forward and completely ruined something that used to work, phone communication, because we can't answer our phones anymore, because of the fucking spam calls. Miles, where can people find you? What's a work of media you've been enjoying?

Speaker 2

Yeah, at Miles of Gray, wherever they got at's, I'm there. Soon to be on Blue Sky, shout out Ill Will Christy Main. Yeah, you and I are hopping in, we got the invite codes, just checking out Blue Sky.

So yeah, there shortly. Also check Jack and I out on our basketball podcast, Miles and Jack Got Mad Boosties. Please check me out on my 90 Day Fiancé podcast, 420 Day Fiancé, and also the true crime show The Good Thief, which has all eight episodes out. So please stream those, binge those. It's about the Greek Robin Hood, the man who legitimately was kidnapping millionaires and doing some old-fashioned wealth redistribution in a

positive way, never hurting people either. Never hurting people. Like, again, an ethical criminal, if there is such a thing. Some tweets I like: first one is, you know, shout out to the WGA and all the negotiators, because it looks like the WGA strike is ending; they're close to getting

something ratified. So we like that. This tweet is from Jeff Yang, at Original Spin, who tweeted, just underscoring how WGA negotiators told the studios the union wouldn't go back to work until SAG also has a deal, because if the last five months proved anything, it's: in together, win together.

And there's this snippet from, it might be Deadline, but it says, quote, the studios also inquired if, once a tentative agreement is ratified by the scribes, the writers would pick up their pens and hit their keyboards again very soon afterwards. The guild, from what we understand, made it clear that they would not be going back to work until SAG-AFTRA also had a new agreement with the AMPTP, reflective of the WGA's feeling of

solidarity between the two unions that has characterized their first mutual strikes since nineteen sixty. Love the solidarity. And I know IATSE is also going to have a contract that is going to need to be renegotiated next year, and it looks like, guess what, folks, they only listen to one thing, yeah: putting the tools down. Put the tools down and pick them other tools up, which is getting in your solidarity bag. So we'd love

to see that. And another tweet I like is from T-Pain, at T-Pain, who just put, "Bartender just doing her job. Me:" and just this photo of Kevin James.

Speaker 1

Ah, the photo of Kevin James. It's going and going, but it

Speaker 2

Is trying to be like, hey, hey, can I get your eye contact? Just a ginger beer, if you can. Those are my tweets.

Speaker 1

A tweet I've been enjoying: at b r n bn E, I don't know how to pronounce that, but their display name is bbbbbbb space bbbbbbb, which I have to assume stands for bone bone bone bone bone bone bone, boom boom. I usually don't use things that I've retweeted for my tweet, but this one got me so much I had to reshare it. The tweet is: "what if you went to ET's planet and all of the other ETs were wearing clothes." That really, oh, really fucked up how I thought about E.T.

Speaker 2

Yeah? Yo, what.

Speaker 1

You can find me on Twitter at Jack Underscore O'Brien, and on Threads at Jack Underscore O Underscore Brien, and on Blue Sky; I'll have a username there eventually, soon. And you can find us on Twitter at Daily Zeitgeist, and at The Daily Zeitgeist on Instagram. We have a Facebook fan page and a website, dailyzeitgeist.com, where we post our episodes and our footnotes, where we link off to the information that we talked about in today's episode, as

well as a song that we think you might enjoy. Miles, what's the song people might enjoy?

Speaker 2

Uh, this is an artist, Detroit's very own John FM, and I just figured, what an appropriate title. It's called White Science, and the track is very, it feels like, I don't know, like if Prince was making shit in, like, the late nights. It has, like, a Prince-y vibe, and it's kind of got this vocal modulator on it that feels a little bit Prince-y. And also it's just a really good track. So this is John FM with White Science.

Speaker 1

That's what Jack FM's mom calls him when he's in trouble.

Speaker 2

John FM, get in here now!

Speaker 1

The Daily Zeitgeist is a production of iHeartRadio. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows. That is gonna do it for us this morning. We're back this afternoon to tell you what is trending, and we'll talk to you all then.

Speaker 2

Bye bye
