
The Tech Behind Signalgate + Dwarkesh Patel's "Scaling Era" + Is A.I. Making Our Listeners Dumb?

Mar 28, 2025 · 1 hr 8 min · Ep. 129

Summary

This episode of Hard Fork explores the SignalGate scandal involving the Trump administration's use of Signal for sensitive communications, features an interview with Dwarkesh Patel about his book "The Scaling Era" and the future of AI, and includes listener takes on whether AI is affecting critical thinking skills. Discussions cover the security risks of using commercial apps for government business and the potential benefits and drawbacks of AI advancements.

Episode description

This week, we dig into the group chat that’s rocking the Trump administration and talk about why turning to Signal to plan military operations probably isn’t a great idea. Then, we’re joined by the podcaster Dwarkesh Patel to discuss his new book “The Scaling Era,” and whether he’s still optimistic about the broad benefits of A.I. And finally, a couple weeks ago we asked whether A.I. was making you dumber. Now we hear your takes.


Guest:

  • Dwarkesh Patel, tech podcaster and author of “The Scaling Era: An Oral History of A.I., 2019-2025”



We want to hear from you. Email us at [email protected]. Find “Hard Fork” on YouTube and TikTok.

Unlock full access to New York Times podcasts and explore everything from politics to pop culture. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify.

Transcript

What does my subscription to The New York Times have me doing this week? Preparing a strawberry pretzel pie. Solving Spelling Bee with no hints. Planning a trip to one of the 52 best places to go. Getting to the bottom of the big pants trend. And I'm finally replacing my vacuum with a recommendation I can trust. What will your subscription to The Times have you do? Why not find out with our best offer?

Go to nytimes.com slash subscribe. Listen to this. This week I checked my credit card bill, normally a pretty boring process in my life, and I see a number that astonishes me in its size and gravity. And so the first thing I think is, how much DoorDash is it possible to eat in one month? Have I hit some new level of depravity? But then I go through the statement.

And I find a charge that is from the heating and plumbing company that I used to use when I lived in the home of Kara Swisher. Kara Swisher, of course, the iconic technology journalist, friend, mentor, and originator of the very podcast feed that we're on today, Kevin. And former landlord of Casey Newton. Former landlord of me. And when I investigated, it turned out that Kara Swisher had charged my credit card for $18,000.

For what? What costs $18,000? I don't know what is going on, but it costs $18,000 to fix. And until I made a few phone calls yesterday, that was going to be my problem. So here's what I want to say to the people of America. You need to watch these landlords.

You might think that you're out from underneath their thumb, but they will still come for you, and they will put $18,000 on your credit card if you do not watch them. Now, this is slightly terrifying to me, the idea that Kara has access to your credit card in some way, shape, or form. Well, and I should say, basically it was on file with the heating and plumbing company. So I'm not sure that I could actually blame Kara for this, but...

I did have to talk to her about it. Oh, she's crafty. I think she knew what she was doing. She's been waiting to get back at us like this for a long time. And mission accomplished, Kara. I'm Kevin Roose, a tech columnist at the New York Times. I'm Casey Newton from Platformer. And this is Hard Fork. This week: the group chat that's rocking the government. We'll tell you why the government turned to Signal to plan military operations, and why it's probably not a great idea.

Then, podcaster Dwarkesh Patel stops by to discuss his new book and tell us why he still believes AI will benefit all of us in the end. And finally, we asked you if AI was making you dumber. It's time to share what you all told us. I feel smarter already. Well, Casey, the big story of the week is... SignalGate. Yes. What I would say, Kevin, is the group chats are popping off at the highest levels of government.

Yes. And if you have been hiding under a rock or on a silent meditation retreat or something for the last few days, let's just quickly catch you up on what has been going on. Yeah. So on Monday, The Atlantic, and specifically Jeffrey Goldberg, the editor-in-chief of The Atlantic, published an article titled "The Trump Administration Accidentally Texted Me Its War Plans."

In this article, Goldberg details his experience of being added, seemingly inadvertently, to a Signal group chat with 18 of the U.S.'s most senior national security leaders, including Secretary of State Marco Rubio; Tulsi Gabbard, the Director of National Intelligence; Pete Hegseth, the Secretary of Defense; and even Vice President J.D. Vance. The chat was called Houthi PC Small Group.

PC presumably standing for Principals Committee and not Personal Computer. And I would say this story lit the internet on fire. Absolutely. You know, we have secure communications channels that we use in this country, Kevin, to sort of organize and plan for military operations. They were not used in this case. That is kind of a big deal in its own right. But to accidentally add

one of the more prominent journalists in all of America to this group chat as you're planning it is truly unprecedented in the history of this country. Yeah, and unprecedented in my life, too. Like, I never get invited to any secret classified group chats. But I also feel like there is an etiquette and a procedure around the mid-sized group chat. Yes. So I'm sure you've had the experience of being added to a group chat.

In my case, it's usually like planning a birthday party or something. And there's always, like, a number or two on this group chat that you don't have stored in your phone, right? That's right. The unfamiliar area code pops up along with the named accounts of everyone who you do know who's in this group chat. Absolutely. And the first thing that I do when that happens to me is I try to figure out who the unnamed people in the group chat are. Yes. And until you figure that out,

you can't be giving your A material to the group chat. This is so true. You know, I saw someone say on social media this week that gay group chats have so much better operational security than the national security advisor does. And this is the exact reason. If you're going to be

in a group with seven or eight people and there's one number that you don't recognize, you're going to be very tight-lipped until you find out who this interloper is. Exactly. And maybe someone will even say, hey, who's, you know, 347? There's a protocol for this, is what I'm saying. Yes, and it's worth pointing out it's a protocol that most people take extremely seriously.

Yes, even when they are talking about things like planning birthday parties. Yes. And not military strikes. Exactly. Yeah. So before we get into the tech piece, let's just sort of say what has been happening since then. So this story comes out on Monday in The Atlantic. Everyone freaks out about this. The government officials involved in this group chat sort of

have to respond to it. There's actually a hearing in Congress where several of the members of this group chat are questioned about how a reporter got access to these sensitive conversations. And basically the posture of the Trump officials implicated in this has been to deny that this was a secret at all.

There have been various officials saying nothing classified was discussed in here. This wasn't an unapproved use of a private messaging app. Basically, nothing to see here, folks. Yes, and on Wednesday, The Atlantic actually published the full text message exchanges, so that people can just go read these things for themselves and see just how detailed the plans shared were. Yes.

Let's just say it does look like some of this information was, in fact, classified. It included details about the specific timing of various airstrikes that were being ordered in Yemen against the Houthis, which are a sort of rebel terrorist militia.

Like, this was not a party planning group chat. Here's a good test for you. When you read these chats, imagine you're a Houthi in Yemen. Would this information be useful to you to avoid being struck by a missile? I think it would be. To me, that's the test here, Kevin. Totally. Yeah. So let's dive into the tech of it all, because I think there is actually an important and interesting tech story kind of beyond the headlines here. So, Casey, what is Signal and how does it work?

Yeah, so Signal, as you well know, as a frequent user, is an open source, end-to-end encrypted messaging service that has been with us since July 2014. It has been growing in popularity over the past several years. A lot of people like the fact that, unlike something like an iMessage or a WhatsApp, this is something that is built by a nonprofit. Actually, it's funded by a nonprofit organization, and it is fully open source.

It's built on an open source protocol, so anyone can look and see how it is built. They can poke holes in it, try to make it more secure. And, you know, as the world has evolved, more and more people have found reasons to have both end-to-end encrypted chats and disappearing chats. And so Signal has been sort of part of this move away from permanent chats, stored forever, to more ephemeral, more private communications.

Yeah, and I think we should add that among the people who think about cybersecurity, Signal is seen as kind of the gold standard of encrypted communication apps. It is not perfect. No communications platform is ever perfectly secure, because it is used by humans, on devices that are not perfectly secure. But

it is widely regarded as the most secure place to have private conversations. Yeah, I mean, and if you want to know why that is, we could go into some level of detail here. Signal makes it a priority to collect as little metadata as possible.

And so, for example, if the government went to them and said, hey, we have, like, Kevin's Signal number, tell us all of the contacts that Kevin has: they don't actually know that. They don't store that. They also do not store the chats themselves, right? Those are on your device.

So if the government says, hey, give us all of Kevin's chats, they don't have those. And there are some pretty good encryption and privacy practices in some of the other apps that I think a lot of our listeners use on a daily basis. iMessage has pretty good protection, but there are a bunch of asterisks around that. And so if security is super, super important to you, then I think many of us would actually recommend Signal as the best place to do your communicating.
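To make the end-to-end property being described here concrete, below is a toy sketch in Python using the PyNaCl library. It is an illustration only, not Signal's actual protocol (Signal layers X3DH key agreement and the Double Ratchet on top of primitives like these); the point is simply that the private keys live only on the two devices, so whoever relays the ciphertext learns nothing about the contents.

```python
# Toy sketch of the end-to-end idea using PyNaCl (pip install pynacl).
# NOT Signal's actual protocol; just the core property: only the two
# endpoints hold keys, so a relay server sees only opaque ciphertext.
from nacl.public import PrivateKey, Box

# Each party generates a keypair on their own device; private keys never leave it.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts using her private key and Bob's public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"dinner moved to 8pm")

# A server in the middle only ever handles `ciphertext`: random-looking bytes.
# Bob decrypts on his device with his private key and Alice's public key.
receiving_box = Box(bob_key, alice_key.public_key)
assert receiving_box.decrypt(ciphertext) == b"dinner moved to 8pm"
```

Signal's real design goes further, rotating keys as the conversation proceeds, so that even a key compromised later exposes little of the earlier history.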

You and I both use Signal. Most reporters I know use Signal to have sensitive conversations with sources. I know that Signal has been used by government officials in both Democratic and Republican administrations for years now. So, Casey, I guess my first question is, like, why is this a big deal, that these high-ranking government officials were using Signal, if it is sort of the gold standard of security? Sure. So I would put it in maybe two sentences, Kevin, that sum this whole thing up.

Signal is a secure app, but using Signal alone does not make your messages secure. So what do I mean by that? Well, despite the fact that Signal is secure, your device is vulnerable, particularly if it's your personal device, if it is your iPhone that you bought from the Apple Store. There is a huge industry of hackers out there developing what are called zero-day exploits. And a zero-day exploit

is essentially an undiscovered hack. They are available for sale on the black market. They often cost millions of dollars, and criminals, and more often state governments, will purchase these attacks, because they say, hey, it is so important to me to get into Kevin's phone. I have to know what he's planning for Hard Fork this week. So I'm gonna spend $3 million. I'm gonna find a way to get onto his personal device. And if I have done that, even if you are using Signal, it doesn't

matter, because I'm on your device now. I can read all of your messages, right? So this is the concern. So wait, what are American military officials supposed to do instead? Well, we have special designated channels for them to use. We have networks that are not the public internet, right? We have messaging tools that are not commercially available, and we have set up protocols to make them use those tools, to avoid the scenario that I just described.

Yeah, so let's go into that a little bit, because, as you mentioned, there are sort of designated communications platforms and channels that high-ranking government officials, including those with access to classified information, are supposed to use, right? There are these things called SCIFs, sensitive compartmented information facilities. Those are, like, the physical rooms that you can go into to receive, like, classified briefings.

Usually you have to, like, keep your phone out of those rooms for security. Yeah, I keep all of my feelings in a sensitive compartmentalized information facility.

But you're working on that. I'm working on that. I'm working on it. But if you're not, like, physically in the same place as the people that you're trying to meet with, there are these secure communication channels. Casey, what are those channels? Well, there are just specialized services for this. So this is, like, what a lot of the tech companies

will work on. Microsoft has something called Azure Government, which is built specifically to handle classified data. And this is, like, sort of rarefied air, right? Not that many big platforms actually go to the trouble of making this software. It's a pretty small addressable market, so you've got to have a really solid product and a really good sales force to make this worth your while. But the stuff exists, and the government has bought these services over the years and decided

that this is what the military is supposed to use. Yeah, so I did some research on this, because I was basically trying to figure out, like, are these high-ranking national security and government officials using Signal because

it is the kind of easiest and most intuitive thing for them to use? Are they doing it because they don't want to use the stuff that the government has set up for its own employees to communicate? Like, why were they sort of doing this? Because one thing that stuck out to me in the transcripts of these group chats is that nobody in the chats

seemed surprised at all that this was happening on Signal, right? No one, when this group was formed and these 18 people were added to it, you know, said anything about, hey, why are we using Signal for this? Why aren't we using Microsoft Teams, or whatever the sort of official approved thing is? What I found out when I started doing this research is that there is something of a patchwork of different applications that have been cleared for use by various agencies of government.

And one reason that these high-ranking government officials may have been using Signal instead of these other apps is because some of these apps are not designed to work across the agencies of government, right? The DoD has its own communication protocols. Maybe the State Department has its own communication protocols. Maybe it's not trivially easy to kind of start up a conversation with a bunch of people from various agencies on a single platform.

Yeah. And that should not surprise us, because something that is always true of secure communications is that it is inconvenient and annoying. This is what makes it secure: you have gone to great lengths to conceal what you are doing. I read some reporting in the Washington Post this week that said, for the most part, when they are doing their most sensitive communications, those communications are supposed to be done in person.

Right. Like, that is the default. And if you cannot do it in person, then you're supposed to use these secure communication channels, again, not the public internet. So that is the protocol that was not followed here. Right. I think one other possible explanation for why these high-ranking officials were using Signal is that Signal

allows you to create disappearing messages, right? That is a core feature of the Signal product: you can set, in any group chat, that all these messages are going to delete themselves after an hour or a day or a week. In this case, they seem to have been set to delete after four weeks. Now, there are good reasons why you might want to do that. If you're a national security official, you don't want this stuff to hang around forever. But we should also say that that is also
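Mechanically, a disappearing-message timer is simple to picture: each message carries a timestamp, and the client deletes anything older than the chat's retention window. Here is a minimal, hypothetical Python sketch of that on-device logic; it is an illustration of the idea, not Signal's actual implementation, and the four-week window just mirrors what this chat was reportedly set to.

```python
# Minimal, hypothetical sketch of a disappearing-message timer.
# An illustration of the idea only, not Signal's actual code.
import time
from dataclasses import dataclass, field

FOUR_WEEKS = 28 * 24 * 60 * 60  # the reported retention window, in seconds

@dataclass
class Message:
    text: str
    received_at: float = field(default_factory=time.time)  # stamped on arrival

def purge_expired(messages: list[Message], ttl: float = FOUR_WEEKS) -> list[Message]:
    """Return only the messages still inside the retention window."""
    now = time.time()
    return [m for m in messages if now - m.received_at < ttl]
```

The same purge running on every participant's device is what makes the setting effective, and it is also exactly what puts it in tension with record-keeping rules: once the window lapses, no copy survives anywhere.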

an apparent violation of the rules for government communication, because there are records acts that require the preservation of government communications. And so one reason that the government and various agencies have their own communications channels is that those channels can be preserved, to comply with these laws about federal record keeping.

Yes, there is a Federal Records Act and a Presidential Records Act. And the idea behind those laws, Kevin, is that, well, you know, if the government is planning a massive war campaign that will kill a bunch of people, we should

have a record of that. We do. You know, in a democracy, you want there to be a preservation of some of the logic behind these attacks that the government is making. So yes, it seems like they clearly have just decided they're not going to follow those. Yeah. And so I think the place where I land on this is that this is, I would say, an obviously dumb,

probably unforgivably dumb mistake on the part of a high-ranking national security official. My favorite sort of, like, cover-up attempt on this was when the national security advisor, Michael Waltz, was asked sort of how this happened, because he was the person, according to these screenshots of this chat, who added Jeffrey Goldberg from The Atlantic to this chat. And he basically gave this statement that was like, we're all trying to figure out what happened here. We saw the screenshot, Michael.

You added him. Yeah. And I think there are obvious questions that raises about, you know, whether he had mistaken him for someone else named Jeffrey Goldberg, maybe a national security official of some kind. Oh, I bet the words Jeffrey Goldberg never even appeared on Michael Waltz's screen. Okay, this is, like, the realm of pure speculation, but let me just tell you, as somebody who is routinely contacted by people anonymously

on Signal: usually their full name is not in the message request. It's like, you have a new message request from JG. So, just those initials. And so I will look through my Signal chats, and I'll be trying to... I want to ask that one person about the one thing. What was their Signal name? And I'm looking through a soup of initials. So, like, I actually understand why that happened, which is yet one more reason why you might not want to use Signal to do your war

planning. Yes, exactly. I think the most obvious sort of Occam's razor explanation for why all these high-ranking officials are on Signal is that it's just a better and easier and more intuitive product than anything the government

is supposed to be using for this stuff. It's more convenient. Yes, and I find this totally plausible, having spoken with people who have been involved with government technology in the past. It is just not the place where cutting-edge software is developed and deployed.

You know, there famously was this sort of struggle between President Obama and some of his security advisors when he wanted to use a BlackBerry in the Oval Office. And there was sort of, like, no precedent for how to do that securely. And so he fought them until they sort of made him a special BlackBerry that he could use. Like, this is a time-honored struggle between politicians, who want to use the stuff that they used when they were civilians

while in political office, and are told again and again, like, you can't do that. You have to use this clunkier, older, worse thing instead. Well, I'm detecting a lot of sympathy in your voice for the Trump administration here, which is somewhat surprising to me. Because while I

can stipulate that, sure, they must go through an annoying process in order to plan a war, I'm somebody who thinks, well, it probably should be really annoying and inconvenient. You probably should actually have to go physically attend a meeting to do all of this

stuff. And, you know, if we are going to decide that war planning is something that, like, the Secretary of Defense can do, like, during commercials for March Madness, just, like, pecking away on his iPhone, we're going to get in a lot of trouble. Like, imagine you're an adversary

of America right now. And you've just found out that the entire administration is just chatting away on their personal devices. Do you not think that they have gone straight to the black market and said, what's a zero-day exploit that we can use to get on? Of course they have, for sure. And so what I'm not saying here is that this is excusable behavior.

What I am saying is that I think people, including government officials, will gravitate toward something that offers them the right mix of convenience and security. I would like for this to be an incident that kind of spurs the development of much better and more secure ways for the government to communicate with itself. It should not be the case that, if a bunch of high-ranking officials want to start a group chat with each other,

they have to go to this private sector app, rather than something that the government itself owns and controls and that can be verifiably secure. So, yes, I think this was extremely dumb. It is also, by the way, something that I'm sure was happening in Democratic administrations, too. Like, this is not a partisan issue here. Well, what exactly do you think was happening? Like, yes, the Democrats were using Signal, and yes, they were using disappearing messages.

It's not clear to me that they were planning military strikes. I don't know. I have no information either way on that. What I do know is that I have gotten messages on Signal from officials in both parties. I have gotten emails from the personal Gmail accounts of administration officials in both parties. Like, this is, I think, an open secret in Washington: the government's own tech stack is not good, and a lot of people, for reasons of convenience or privacy or what have you,

have chosen to use these less secure private sector things instead. And I think I should make a serious point here, which is that it is in the national interest of the United States

to have a smaller gap between the leading commercial technology products and the products that the government is allowed to use, right? Right now in this country, if you are a smart and talented person who wants to go into government, one of the costs of that move is that you effectively have to go from using the best stuff

that anyone with an iPhone or an Android phone can use to using this more outdated, clunkier, you know, less intuitive set of tools. I do not think that should be the case. I think that the stuff that the public sector is using for communication, including of very sensitive things, should be as intuitive and easy to use and convenient as the stuff that the general public uses.

Yes, it should have additional layers of privacy. Yes, you should have to do some kind of, you know, procurement process. But a recurring theme on this podcast, whenever we talk about government and tech, is that it is just way too slow and hard to get standard tools approved for use in government. So if there's one silver lining of the SignalGate fiasco, I hope it is that our government takes access to

good technology products more seriously, and starts building things and maintaining things that are actually competitive with the state of the art. I'm going to take the other side of this one, Kevin. I think if you look at the way that the government was able to protect its secrets in previous administrations, prior to the spread of Signal, they were actually able to prevent

high-ranking officials from accidentally adding journalists to conversations that they shouldn't have been in. There is no evidence to me that, because of the sort of aging infrastructure of the communication systems of government, we were unable to achieve some sort of military objective. So, you know, even as somebody who generally likes technology, I think some of these tech oligarchs have this extremely know-it-all

attitude, that our tech is better than your tech, yours sucks. And they sort of bluster in and they say, you know, all of your aging legacy systems, we can just get rid of those and move on to the next thing. And then you wake up after SignalGate and you're like, oh, that's why there was a system.

That's why there was a protocol. It turns out it was actually protecting something, right? Like, this is the Silicon Valley story over and over again: we are going to come in and try to build everything from first principles. We're going to be completely ahistorical. We're not going to learn one lesson that anyone else has ever

learned before, because we think we're smarter than you. And SignalGate shows us that actually, no, sometimes people have actually learned things, and there is wisdom to be gleaned from the ages, Kevin, and maybe that should have been done here. Well, Casey, the Defense Department may be in its failing era, but AI is in its scaling era. We'll talk to the author of "The Scaling Era," Dwarkesh Patel, when we come back.

Hey, I'm Robert Vinlo and I'm from New York Times Games and I'm here talking to people about Wordle and the Wordle Archive. Do you all play Wordle? I play it every day. Alright, I have something exciting to show you. It's the Wordle Archive. That's awesome. So now you can play every Wordle that has ever existed. There's like a thousand puzzles. What? Wordle Archive. Oh, cool. Now you can do yesterday's Wordle if you missed it. Yeah.

New York Times Games subscribers can now access the entire Wordle archive. Find out more at nytimes.com slash games. Well, Casey, there are a number of people within the clubby and insular world of AI who are so well known that they go by a single name. That's true. Madonna, Cher, and who else? Well, there's Dario, Sam, Ilya, various other people. And then there's Dwarkesh. Yes. Who is not working at an AI company. He is an independent journalist, podcaster, public intellectual, blogger.

He hosts the Dwarkesh Podcast, which has had a number of former Hard Fork guests on it. And he is, I would say, one of the best-known media figures in the world of AI. Yeah, absolutely. You know, Dwarkesh seemingly came out of nowhere a few years back, and quickly became well-respected for his highly technical, deeply researched interviews with some of the leading figures, not just in AI, but also in history

and other disciplines. He is a relentlessly curious person, but I think one of the reasons why he is so interesting to us is that, on the subject of AI, he really has just developed an incredible roster of guests and a great understanding of the material. Yes. And now, as of this week, he has a new book out, which is called "The Scaling Era: An Oral History of AI, 2019 to 2025." And it is mostly

excerpts and transcripts from his podcast and the interviews that he's done with luminaries in AI. But through it, he kind of assembles the history of what's been happening for the past six or so years in AI development, talking to some of

the scientists and engineers who are building it, the CEOs who are making decisions about it, and the people who are reckoning with what it all means. Indeed. So we have a lot to ask Dwarkesh about, and we're excited to get him into the studio today and hang out. All right, let's bring in Dwarkesh Patel. Dwarkesh Patel, welcome to Hard Fork. Thanks for having me. I want to start with the Dwarkesh origin story.

You are 24 years old, correct? You graduated from UT Austin. You majored in computer science. I'm sure a lot of your classmates and people with your interest in tech and AI chose the more traditional path of going to a tech company, starting to work on this stuff directly. Presumably, that was a path that was available to you. Why did you decide to start a podcast instead?

So it was never my intention for this to become my career. I was doing this podcast basically in my free time. I was interested in these economists and historians, and it was just cool that I could cold email them and get them to come on my podcast and then pepper them with questions for a few hours.

And then when I graduated, I didn't really know what I wanted to do next. So the podcast was almost a gap year experience of, let me do this, it'll help me figure out what kind of startup I want to launch or where I can get hired. And then the podcast just went well enough that, dot, dot, dot, I'm like, yeah, this could actually be a career. This is a more fun startup than whatever code monkey, you know, third setting in Android kind of job. So I basically just kept it up, and it's

grown ever since, and it's been a fun time. Yeah, I mean, I'm curious how you describe what you do. Do you consider yourself a journalist? I guess so. I don't know if there's a good word. I mean, there's, like, journalist, there's content creator, there's blogger, podcaster. Sure. Journalist. Yes. Humanitarian. I ask because I started listening to your podcast a while ago, back when it was called The Lunar Society. And the thing that I noticed right away was that you were not doing a ton of

explanation and translation. I often think of our job as journalists as one primarily of translation: of taking things that insiders and experts are talking about and making them legible to a broader and less specialized audience. But your podcast was so interesting to me because you weren't really doing that. You were kind of not afraid to stay in the sort of wonky insider zone. You were having

conversations with these very technical experts in their native language, even if it got pretty insidery and wonky at times. Was there a theory behind that choice? No, honestly, it never occurred to me, because nobody was listening in the beginning, right? So

I think it was a bad use of my guests' time to have said yes in the first place. But now that they've said yes, like, let's just have fun with this, right? Like, who is listening to this? It's me. It's for me. And then what I realized is that people appreciated that style. Because with a lot of

these people, they've done so many interviews, you've heard their, you know, what-is-your-book-about kind of thing before. The intuition I always go for is: pretend like you're at dinner with this person. And if you're at dinner with them, you just ask them about your main cruxes. Like, here's why, here's, you know, what's going on here? Here's why I disagree with you. You tease them about, like, their big ideas or something. But initially it was just an accident.

I think in mainstream media, we are terrified that you might read something we write or listen to something we do and not understand a word of it because there's always an assumption that that is the moment that you will stop reading. I think what you have discovered with your podcast is that that's actually a moment that causes people to lean in.

and say, hmm, I didn't get all of that, but I'm getting enough of it that I'm curious: what is this thing that is going to happen next? Right, and everyone's got Google, right? Or, you know, ChatGPT. So if you don't understand something, you can always look it up, in a way that may not have been possible, you know, with talk radio back in the day or something. That's right.

So you've got this new book out, "The Scaling Era," basically a sort of oral history of the past six or so years of AI development. Tell us about the book. So I have been doing these interviews with the key people thinking about AI over the last two years, you know: CEOs like Mark Zuckerberg and Demis Hassabis and Dario Amodei; researchers at a deeply technical level; economists who are thinking about what the deployment of these technologies will be like;

philosophers who are talking about these essential questions of, like, AI ethics, and how will we align systems that are, you know, millions of times more powerful, or at least more plentiful.

These are some of the most gnarly, difficult questions that humanity has ever faced. Like, what is the true nature of intelligence, right? Or what will happen when we have millions of intelligent machines that are running around in the world? Is the idea of superhuman intelligence even a coherent concept? Like, what exactly does that mean, and what exactly will it take to get there? So it was, like, such a cool experience to just see

all of that organized in this way, where we have annotations and definitions and just beautiful graphs. My co-author, Gavin Leech, and our editor, Rebecca Hiscott, and the whole team just did a wonderful job making this really beautiful artifact.

That's the book. I also really liked the way that the book sort of slows down and explains some of these basic concepts, footnotes the relevant research. Like, you really do. It is more accessible than, I would say, the average episode of the Dwarkesh Podcast. Thank you.

In the sense that you can really start from, like... I would feel comfortable giving this to someone as a gift who doesn't know a ton about AI, and sort of saying, like, this is a good primer on what's been happening for the past few years in this world. And it won't treat you like an idiot. A lot of these other AI books are just about this, oh, big picture, how will society be changed? And it's like, no. To understand AI, you need to know:

what is actually happening with the models? What is actually happening with the hardware? What is actually happening in terms of, like, actual investments and CapEx and whatever? And we'll get into that. But also, because of this enhancement with the notes and definitions and annotations, we still, you know, keep it accessible.

It's written for, like, a smart college roommate in a different field. Yeah. One question that you asked at least a couple of people in your book, some version of, was basically: what's their best guess at why scaling works? Why does pouring more compute and more data into these models tend to yield something like intelligence? I'm curious what your answer for that is. What's your current best guess of why scaling works?

I honestly don't think there's a good answer anybody has. The best one I've heard is this idea that intelligence is just this hodgepodge of different kinds of circuits and programs. And this is so hand-wavy, and I acknowledge this is hand-wavy. You got to come up with some answer. And that fundamentally what intelligence is, is this pattern matching thing, this ability to see how different ideas connect and so forth. And as you make this bucket bigger,

you can start off with noticing, does this look like a cat or not? And then you get to higher and higher levels of abstraction. Like, what is the structure of time and the so-called ether and the speed of light and so forth. Again, so hand-wavy.

But I think it ultimately will just be this hodgepodge. It does strike me that this just sort of feels similar to the way that human beings work. Like you're born into the world and you just essentially get blasted with data for many, many years until you have some kind of symbolic understanding of everything. And then you go from there.

So that's how I think about it. Yeah, I mean, there seems to be this sort of philosophical divide among the AGI believers and the AGI skeptics over the question of whether there is something other than just materialism in intelligence, whether it is just, like,

intelligence is just a function of having the right number of neurons and synapses firing at the right times, and sort of pattern matching and doing next-token prediction. I'm thinking of this, like, famous Sam Altman tweet, where he posted: I am a stochastic parrot, and so are you. Basically sort of rebutting the sort of common attack on

large language models, which was that they were just stochastic parrots, that they're just learning to sort of regurgitate their training data and predict the next token. And among a lot of the sort of AGI true believers that I know, there is this feeling that we are just essentially doing

what these language models are doing, in predicting the next tokens, or synthesizing things that we've heard from other places and regurgitating them. That's a hard pill for a lot of people to swallow, including me. Like, I'm not quite

a full materialist. Are you? Like, do you believe that there is something about intelligence that is not just raw processing power and data and pattern matching? I don't. I mean, it's hard for me to think about what that would be. There's obviously religious ideas, that there's maybe a soul or something like that. But separate from that, something we could, like, sort of have a debate about or analyze? Yeah, actually, I'm curious, like, what kind of thing could it be?

Ethics? I don't know. That sounds very fuzzy and non-scientific, but I do think there is something essential about intelligence, and being situationally intelligent, that requires something outside of your immediate experience, like knowing what is right and what is wrong.

Well, I think one reason why this question might be a bit challenging is that there are still many areas where the AI we have today is just less than human in its quality level, right? Like, these machines don't really have common sense. Their memories are not great. They don't seem to be great at acquiring new skills, right? If it's not in the training data, sometimes it's hard for them to get there. And so it does raise the question, well, is the kind of intelligence we have

kind of categorically different than whatever this other kind of intelligence is that we're inventing? Yeah, that's right. On the ethics thing, I think it's notable that if you talk to GPT-4, it has a sense of ethics. If you talk to Claude, it has a sense of ethics. It will tell you. You talk about, like, what do you think about animal ethics? What do you think about this kind of moral dilemma?

Like, it has... I mean, I'm not sure what you mean by a sense of ethics. In fact, the worry is that it might have too strong a sense of ethics, right? And by that, I'm referring to: maybe its ethics becomes, like, I want more paperclips. Or, I mean, sorry, on a more serious note: those ethics are given to it, in part, by the process of training and fine-tuning

the model, or making it obey some constitution. Like, where do you think you get your ethics? Who trained you, Bruce? Yeah. I mean, it is notable that most people in a given society share the same basic worldview. Like, you and I agree on 99% of things, and we would probably agree on, like, 50% of things with somebody in the year 1500. And the reason we agree on so much has to do with our training distribution, which is this real, you know, the society we live in.

So, I mean, maybe this argument that there is something more to intelligence than just brute-force computation is somewhat romantic. That's what they call cope. Yes, I was trying to figure out a more sophisticated way of saying cope, but do you think that is cope? Do you think that the people who are sort of skeptical of the possibility of AGI, because they believe that

computers lack something essential that humans have, is just a response to not being able to cope with the possibility that computers could replace them? I think there's two different questions. One is: is it cope to say that we won't get AGI

in the next two years or three years, or whatever short timelines some people in San Francisco, some of our friends, seem to have? I don't think that's cope. I think there's actually a lot of reasonable arguments one can make about why it will take a longer period of time. Maybe it'll be five years, ten years. Maybe this

ability to, as you were saying, keep this coherence and engage with the task over the course of a month just requires a different kind of skill than these models currently have. I don't think that's cope. I think the idea that we'll never get there

is cope, because there's always this argument of the god of the gaps, of the intelligence of the gaps: the thing it can't do is the thing that is fundamentally human. One notable thing: Aristotle had this idea that what makes us human is fundamentally our ability to reason.

And reasoning is the first thing these models have learned to do. They're not that useful at most things, except for raw reasoning. Whereas the things we think of just as pure reptile brain, of having this understanding of the physical world as you move about it or something: that is the thing that these models struggle with. So we'll have to think about what is the archetypal human

skillset as these models advance. That's fascinating. That never actually occurred to me. I think it speaks a lot to why people find them so powerful in this sort of, like, therapist, mentor, coach role, right? Those figures that we bring into our lives are often just there to help us reason through something. And these models are increasingly very good at it. Yeah. In your conversations with all these AI researchers and industry leaders,

are there any blind spots that you feel they have consistently, or places where they are not paying enough attention to the consequences of developing AI? I think they do not, with a few notable exceptions, have a concrete sense of what things going well looks like, and what stands in the way. If you just ask them what the year 2040 looks like, they'll say things like, oh, we'll cure cancer, we'll cure these diseases. But

what is our relationship to billions of advanced intelligences? How do we do redistribution, such that... I mean, it's not your or my fault that we'll be out of a job, right? There's no in-principle reason why everybody couldn't be better off. But there shouldn't be this zero-sum thing, where we should make sure the AIs don't take over, and we should also make sure we don't treat them terribly.

Something else that's been on my mind recently, that you're sort of getting at, or that maybe you were getting at with your question, Kevin, is: how seriously do the big tech companies take the prospect of AGI arriving? Because on one hand, they'll tell you:

we're the leading frontier labs, we're publishing some of the best research, we're making some of the best products. And yet, it seems like none of them are really reckoning with any of the questions that you just raised. It sort of makes sense: even saying some of the stuff that you just said

right now, which seems quite reasonable to me, would sound weird if Satya Nadella were talking about it on an earnings call, right? And yet, at the same time, I just wonder... Quarter four was so strong, with 4,000 happy AIs, growing 10% year over year. Right. But, like, on some level, it's weird to me. You know, somebody recently was talking to me about Google and was sort of saying,

if you look at what Google is shipping right now, it doesn't seem like they think that very powerful intelligence is going to arrive anytime soon. What they're taking seriously is the prospect that ChatGPT will replace Google in search. And that maybe, if you actually did take AGI seriously, you would have a very different approach to what you were doing. So, as somebody who has talked to the CEOs of these companies, I'm curious: how do you rate how seriously they're actually taking AGI?

I think almost none of them are AGI-pilled. They might say the word AGI, but if you just ask them what it means to have a world with actually automated intelligence, there's a couple of immediate implications. So right now, these companies are competing with each other for market share in chat. If you had a fully autonomous worker, even a remote worker, that's worth tens of trillions of dollars. That's worth way more than a chatbot, right? So you'd be much more interested in deploying

that kind of capability. I don't know if an API is the right way; maybe it's, like, a virtual machine or something. I'd just be much more interested in developing the UI, the guardrails, whatever, to make that work, than in trying to get more people to use my chat app. And then

I also think compute will just be this huge bottleneck, if you really believe that what compute buys you is a human-level intelligence. Human intelligence is worth a lot, right? Like, just look at it: U.S. GDP per capita is, like, $70,000 or something. So I would just be interested in

getting as much compute as possible, to have it ready to deploy once the AIs are powerful enough. One of the things I really enjoyed about your book is getting a sense not just of what the people you've interviewed think about AI and AGI and scaling, but what you believe. And I have to say, I was surprised that at the end of the book, you said that you believe

AI is more likely than not to be net beneficial for humanity. And I was surprised because a lot of the people you talk to have quite high p(doom)s. They're quite worried about the way AI is going. That seems not to have spread to you. Like, you seem to be much more optimistic than some of your guests. So is that just a quirk of your personality? Or why are you more optimistic than the people you interview? So if you have a p(doom) of 10% or 20%,

that is, first of all, unacceptable. The idea that everything you care about, everybody you care about, could in some way be extinguished, disempowered, so forth: that is just an incredibly high number. Let's say nuclear weapons is, like, a doom scenario. If you're like, should I go to war with this country, and there's a 20% chance that there's no humans around? You should not take that bet. But it's harder to maybe express

the kinds of improvements, which are... this will sound very utopian, but we do have peak experiences in our life, we know that. Or we have people we really care about. We know how beautiful life can be, how much connection there can be, how much joy we can get, whether it's from learning or curiosity or other kinds of things. And there can just be many more people, us, digital, whatever, who can experience it.

And there's another way to think about this because it's fundamentally impossible to know what the future holds. But one intuition here is imagine I gave you the choice. I'll send you back to the year 1500. Tell me the amount of money I would have to give you.

But you can only use that money in the year 1500, such that it would be worth it for you to go back to the year 1500. I think it's quite plausible the answer is: there's no amount of money I'd rather have in the year 1500 than just be alive right now with my normal standard of living. And I think, I hope, we'll have a similar relationship with the future. What is your post-AGI plan? Like, do you think that you will

be podcasting? Will you still hang out with us? It's funny, because we have our post-AGI careers already, right? Even after the AGI comes, they might automate everybody else in this office, but you and I will just get in front of

the camera. And there will still be value in sort of, like, having a personality, being able to talk, explain, being somebody that people relate to on a human level. That's right. I think so. I am curious, though, because a thing that I know about you, from our brief interactions and just reading things that have been written about you, is that you believe in learning broadly. You

have been described as a person who's on a quest to learn everything. Sounds exhausting. Casey's on a quest to learn nothing. I'm on a quest to learn what I need to learn. Just-in-time manufacturing. I think a lot of people right now, especially students and younger people, are questioning the value of accumulating knowledge. We all have these, like, pocket oracles now that we can consult on basically anything.

Sometimes I think... I was at a school last week, talking with some college students, and one of them basically said they felt like they were a little bit like the taxi drivers in London, who still had to, like, memorize all the streets even after Google Maps was invented. And that was sort of, like, obsolete. Like, they felt like they were just sort of doing it for the sake of doing it. I'm curious what, for you, the value of

broad knowledge accumulation is in an age of powerful AI. The thing I would say to somebody who is incredibly dismayed, who's like, why am I going to college, why is any of this worth it: if you believe AGI, ASI, is going to be here in two years, that's fine. I don't think that's particularly likely. And if it is, what are you going to do about it anyways, right? So you might as well focus on the other worlds.

And in the other worlds, what's going to happen before the fully automated robot that's automating the entire economy is that these models will be able to help you at certain kinds of tasks. But they will fundamentally just give you more leverage on the world. My friend Sholto Douglas put it this way: just imagine you're going to have 100x the amount of leverage on the future. And the kinds of things that you will be in a good position to do

are the ones where you have deep understanding of a particular industry and the relevant problems in it. It's hard to give advice in the abstract like this, because I don't know about these industries, so you'll have to figure it out. But this is

probably the time to be the most ambitious, to have the most amount of agency. These models currently aren't really good at actually doing things in the real world, or even the digital world. If you can do that, and use these as leverage, this is probably the most exciting time to be around.

Here's my answer for that. You don't want to be in a world where you just have to ask ChatGPT everything. Do you know what I mean? Like, there's a lot of effort involved in just sitting down, writing the prompt, reading the report that comes out of it, internalizing it, synthesizing it, relating it. Like, you'd be better off actually just getting an education and then checking in with the chatbot for the things that the chatbot is good at. At least for, you know, I don't know, the next few years.

Yeah, I don't know. I believe that, and I want to believe, that the thing I've spent my life doing, trying to be smarter and learn things, is not going to be obsolete. My sort of guiding principle on this is, like, learning is fun. And if you can just do it for your own enjoyment...

I don't think learning the streets of London is that fun, but I think learning broadly about the world is fun. And so you should do it if it's exciting and fun to you. Absolutely. I think that's totally correct. I also... if I'm, like, actually talking to a younger version of myself... Who would be six years old, to be clear? Who's the young man we're talking to today? Hey, little buddy.

I just think advice on careers in general is so bad. And especially with how much the world is going to be changing, it's going to get even worse. And so, I mean, who would have told me, what kind of reasonable person would have told me, four years ago: man, this computer science stuff, just stop that. Focus more time on the podcast, right? So...

Yeah, it's going to change a lot, I think. But see, that's not helpful. Like, what are you going to do with this idea that, like, all advice is wrong? It was in an even worse position. Just this idea that, like, yeah, be a little bit skeptical of advice in general. Really trust your own intuition, your own interests. Don't be delusional about things, obviously, but explore. Try to get a better handle on the world, and do more things, and run more experiments,

rather than just: this is the thing that's going to be high-leverage in AI, and that's what I'm going to do, based on this first-principles argument. Yeah. I think "run more experiments" is just really great, underused advice. Is that why you built a meth lab in your house? Yeah, it's going great for me. Bought me that hot tub. This is great. Thank you so much, Dwarkesh. Thanks, Dwarkesh. This is fun. Thanks for having me on, guys.

Welcome. When we come back, we ask listeners whether they thought AI might be affecting their critical thinking skills, and it's time to reveal what they all told us. Well, Casey, a couple of weeks ago, we talked about a study that had come out from researchers at Carnegie Mellon and Microsoft about AI and its effects on critical thinking. That's right. And we wanted to know

how our listeners felt about how AI was affecting their critical thinking. And so we asked people to send in their emails and voicemails. Yeah, and we got so many responses to this. I mean, almost 100 responses from our listeners, reflecting kind of the more qualitative side of

this: how people actually feel AI is impacting their ability to think, and think deeply. Yeah. And look, there may be a bit of a selection effect in here. I think if you think AI is bad and destroying your brain and don't touch the stuff, you probably are not sending us a voicemail. But at the same time, I do think that these responses show kind of the range of experiences that people are having. And so, yeah, we should dive in and find out what our listeners are feeling. Okay.

So first up, we're going to hear from some listeners who felt strongly that AI was not making them dumber or worse at critical thinking, who believe that it is enhancing their ability to engage critically with new material and new subjects. Let's play one from a perspective that we haven't really engaged with a lot on this show so far, which is people of the cloth.

My name is Nathan Bourne, and I'm an Episcopal priest. A big part of my work is putting things in conversation with one another. I'm constantly finding stories, news articles, chapters of books, little bits of story that people have shared with me, and interpreting them alongside Scripture.

I've long struggled to find a good system to keep track of all those little bits I've found. Over the last year, I've turned to AI to help. I've used the Readwise app to better store, index, and query pieces that I've saved. I've also used Claude to help me find material that I would never encounter otherwise.

These tools have expanded my ability to find and access relevant material that's helped me think more deeply about what I'll preach and in less time than I used to spend sifting through Google results and the recesses of my own hazy memory. Wow, I love this one. This one was particularly fascinating to me because I've spent some time working on religion-related projects. I wrote a book about going to Christian college many years ago, and I spent a lot of time in church services over the years.

And so much of what the church services that I've been in have done has been to try to find a modern spin, or a modern take, or some modern insights on this very old book, the Bible. And I can imagine AI being very useful for that. Oh yeah, absolutely. I mean, this feels like a case where Nathan is almost setting aside the question of

AI and critical thinking, and just focusing on ways that AI makes the researching and writing that he has to do every week much easier, right? Like, these are just very good, solid uses of the technology as it exists, and they're still leaving plenty of room to bring his own human perspective to the work, which I really appreciate. And, you know, of course, always love to hear about a man of the cloth sort of clasping his hands together and saying, Claude, help me.

All right, let's hear the next one. This is from a software engineer named Jessica Mock, who told us about how she's taking a restrained approach to asking AI for help with coding. When I was being trained, my mentor told me that I should avoid using autocomplete. And he said that was because I needed to train my brain to actually learn the coding. And I took that to heart. I do that now with AI. I do use Copilot, but I use it for

floating theories, asking about things that I don't know. But if it's something that I know how to do, I put it in myself and then I ask Copilot for a code review. And I found that to be pretty effective. My favorite use of Copilot, though, is what does this error mean when I'm debugging? I love asking that because you get more context into what's happening. And then I start to understand what's actually going on. Is it making me dumber?

I don't think so. I think it's making me learn a lot. I'm jumping into languages that I was never trained in, and I'm trying things that I normally would have shied away from. So I think it really depends on how you use it.
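For anyone who wants to try Jessica's "what does this error mean?" move outside of an editor, here is a minimal sketch of one way it could be wired up by hand. To be clear, Jessica is describing Copilot's built-in chat, not a script like this; the sketch assumes the OpenAI Python client, and the helper name, model choice, and prompt are illustrative assumptions of ours, not anything she mentioned.

```python
# A minimal sketch of an "explain this error" helper, assuming the
# OpenAI Python client (pip install openai) and Python 3.10+ for
# traceback.format_exception(exc). Jessica is describing Copilot's
# built-in chat; this standalone version is purely illustrative.
import traceback

from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable


def explain_error(exc: BaseException) -> str:
    """Send a captured exception, with its full traceback, to a chat model."""
    tb = "".join(traceback.format_exception(exc))
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice; any chat-capable model works
        messages=[
            {"role": "system",
             "content": "Explain Python errors plainly and suggest likely causes."},
            {"role": "user",
             "content": f"What does this error mean?\n\n{tb}"},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    try:
        {}["missing_key"]  # deliberately raise a KeyError to demo the helper
    except KeyError as exc:
        print(explain_error(exc))
```

The detail that matters is the same one raised below about error messages: sending the full traceback, rather than just the final line, is what gives the model enough context to explain what's actually going on.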

So I love this one. If you talk to software engineers about how they solve problems, a lot of what they'll do is just ask a senior software engineer. And that creates a lot of roadblocks for people, because that senior software engineer might be busy doing something else. Or maybe you just feel a little bit shy about asking them 15 questions a day.

What Jessica is describing is a way where she just kind of doesn't have to do that anymore. She can just ask the tool, which is infinitely patient, has a really broad range of knowledge. And along the way, she feels like she is leveling up from a more junior developer to a senior one. That's pretty cool.

Yeah, I like this one. It also speaks to something I have found during my vibe coding experiments with AI, which is that it actually makes me want to learn how to code, even though it is probably unnecessary for me to learn how to code to build stuff, and will become increasingly unnecessary. There is sort of this intellectual kick in the pants where it's like, you know, if you just applied yourself for a few weeks, you could probably learn a little bit of Python and start to understand some of what the AI is actually doing here. Absolutely. You know what that makes me realize? I'll just throw this out there: that moment where you're like, oh, I get this a little bit, unlocks this whole world of curiosity. And it sounds like AI is maybe giving Jessica that experience. Jessica's message also highlights something really important, which is that we actually know who the worst writers in the world are, and they are the people who wrote the error messages, right? How many times have you seen a pop-up that says, well, you hit error 642, try again? You're like, wait, what is error 642? And it turns out all that information was on the internet, and AI has now made it accessible to us and helps us understand it. So if nothing else, AI has been good for that. Yeah.

This next one comes to us from a listener named Gary. He's from St. Paul, Minnesota, which is one of the Twin Cities, Kevin, along with Minneapolis.

And it points to the importance of considering different learning challenges and disabilities when thinking about this question of AI's impact on critical thinking. Let's hear Gary.

I'm a 62-year-old marketing guy who does a lot of writing. I'm always trying to get new ideas and keep track of random thoughts. And I also have ADHD, so I get a ton of ideas, but they can also become distractions, to be honest. And so what I found with AI is I get to have a thought partner, if you will, who can help me just download all of these different ideas that I've got. And, you know, if I need to follow a thread, I can follow a thread by asking more questions. But at the end of one of these brainstorming sessions, I can say, just recap everything that we came up with, give it to me in a list. And all of a sudden my productivity gets massively improved, because I don't have to go back and sort through all of these different notes, all these different things I've jotted down all over, and, you know, I can sort through what's real and what isn't real. So it's been super helpful to me in that way.

Kevin, what do you make of this one? Yeah, I like this one because I think one of the things that AI is really good for is people with not just

challenges or disabilities with learning, but just different learning styles, right? One of the most impressive early uses of ChatGPT that I remember hearing about was the use in the classroom to tailor a lesson to a visual learner, or an auditory learner, or just someone who processes information through metaphors and comparisons. Like, it is so good at doing that kind of work of making something accessible and personalized to the exact way that someone wants to learn something. Yeah. And I imagine that Gary may be doing this already, but the sort of use cases he's describing seem like they would be great for somebody who wants to use one of these voice mode technologies. I'm somebody who's most comfortable on a keyboard, but there are so many people who just love to record notes to self. And there are now a number of AI tools that can help you organize those and turn them into really useful documents. And so if you're the sort of person who just wants to let your mind wander, talk into your phone for a few minutes, and then give the AI the job of making it all make sense, we have that now. And that is kind of crazy and cool. Yeah. Yeah. All right. Let's do one more in this camp of people who don't

think that AI is making them dumber or worse at critical thinking. My name is Anna, and I live in a suburb of Chicago. I wanted to share a recent experience I had with AI and how it made me think harder about solving a problem. I'm self-employed and don't have the benefit of a team to help me if I get stuck on something. I was using an app called Airtable, which is a database product. I consider myself an advanced user, but not an expert.

I was trying to set up something relatively complex, couldn't figure it out, and couldn't find an answer in Airtable forums. Finally, I asked ChatGPT. I explained what I was trying to do in a lot of detail and asked ChatGPT to tell me how I should configure Airtable to get what I was looking for. ChatGPT gave me step-by-step instructions, but they were incorrect.

I prompted ChatGPT again and said, Airtable doesn't work that way. And ChatGPT replied, you're right, here are some additional steps you should take. The resulting instructions were also incorrect, but they were enough to give me an idea, and my idea worked. In this example, the back and forth with ChatGPT was enough to help me stretch the skills I already had into a new use case.

I love this one because I think what made AI helpful to Anna in this case is not that she used it and it immediately gave her good information; it's that she knew enough to recognize it was unreliable, and so she did her own deeper dive once she realized she wasn't getting good information from the AI. My worry is that people who aren't Anna, who aren't thinking as deeply about these things, will just kind of blindly go with whatever the AI tells them. And then if it doesn't work, they'll just kind of give up. I think it really is a credit to her that she kept going and kept figuring out what the real solution to this problem was. It is a risk, but let me just say, and this is just kind of a free tip for your life: if you are someone who struggles with using software, I increasingly believe that one of the best uses of chatbots is just asking them to explain to you how to use software. I recently got a PC laptop, and everything is different than what I've been used to for the past 20 years of using a computer. But my PC has a little Copilot button on it, and I press it and I can say, how do I connect an Xbox controller to this thing? And it told me in 10 seconds, saved me a lot of Googling. So anyway, Anna, you're onto something here. It said, get a life. It actually did say that. I was offended. Shame on you, Copilot.

All right. Now let's hear from some listeners, Kevin, who are more skeptical about the way AI might be affecting their own cognitive abilities, or maybe their students' ability to get their work done. For this next one, I want to talk about an email we got

from a professor, Andrew Fano, who conducted an experiment in a class he teaches for MBA students at Northwestern. Northwestern, of course, my alma mater, go Wildcats. And that is why we selected this one. And Andrew sent us a sort of longer story about a class that he was teaching. And the important thing to know about this class is that he had divided the students into two groups. One could use computers, which meant also using large language models, and another...

group of students who could not. And then he had them present their findings. And when the computer group presented, he told us, their ideas were sort of much more creative, more outside the box, and their presentation involved listing many of the items that the LLMs had proposed for them. And one of the reasons Andrew thought that was interesting was that many of the ideas they presented were ones that had actually been

considered and rejected by the people who were not using the computers, because they found those ideas to be too outlandish. And so the observation Andrew made about all of this was that the computer-using group saw these AI-generated ideas as something they could present without the ideas reflecting negatively on themselves, because they weren't their ideas. These were the computer's ideas. And so it was like the LLMs were giving them permission to suggest things that might otherwise seem embarrassing or ridiculous. So what do you make of that? That's interesting. I mean, I usually think of AI as being kind of a flattener of creative ideas, because it is just sort of trying to give you the most predictable outputs.

But I like this angle where it's actually, you know, giving you permission to be a little weird. Because if someone hates the idea, you can just say, oh, that was the AI. Yeah, don't blame me. Blame this corpus of data that was harvested from the internet. Which is why, if anyone objects to any segments that we do on the show today or in the future, I do plan on blaming ChatGPT. That was ChatGPT's idea. Yeah, interesting. If it's a good segment, I did it; if not, it was Claude.

All right, let's move to another listener message. This one's from a listener named Katia, who's from Switzerland. She told us about how looming deadline pressure caused her to maybe over-defer to AI outputs. She wrote, quote, last semester, I basically did an experiment on this myself. I was working on a thesis during my master's studies and decided to use some help. My choice fell on Cursor, which is one of these AI coding products. She writes, initially, I intended to use it for small tasks only, just to be a bit faster, but then the deadline was getting closer, panic was setting in, and I started using it more and more. The speed was intoxicating. I went from checking every line of code to running rounds of automatic bug fixing without understanding what the problems were or what was being done.

So I actually think this is the most important email that we've gotten so far, because it highlights a dynamic that I think a lot of people are going to start feeling over the next couple of years, which is: my bosses have woken up to the fact that AI exists, and they're gradually raising their expectations for how much I can get done. If I am not using the AI tools that all my coworkers are now using, I will be behind my coworkers and I will be putting my career at risk, right? And so I think we're going to see more and more people do exactly what Katia did here and just use these tools like Cursor. And while, to a certain level, I think that's okay, we've always used productivity tools to make ourselves more productive at work, there is a moment where you actually just stop understanding what is happening, and that is a recipe for human disempowerment, right? At that point, you're just barely supervising a machine, and the machine is now doing most of your job. So this is kind of a small story that I think contains a dark warning about what the future might look like. Yeah, I think that kind of mental outsourcing does worry me, the sort of autopilot of human cognition. An analogy I've been

thinking about recently, in trying to distinguish between tasks that we should outsource to AI and tasks that we probably shouldn't, is forklifting versus weightlifting. Okay, tell me about this. So there are two reasons that you might want to lift heavy things. One of them is to get them from point A to point B for some purpose. Maybe you work in a warehouse. Obviously, you should use a forklift for that, right? There's no salutary benefit to carrying heavy things across a warehouse by yourself. That's very slow, it's very inefficient, and the point of what you're doing is to get the thing from point A to point B. Use a forklift for that.

Weightlifting is about self-improvement. Weightlifting is, yes, you could use a machine to lift this heavy object, but that's not going to make you stronger in any way. The point of weightlifting is to improve yourself and your own capabilities. So when you're in a situation where you have the opportunity or the choice of using AI to help you do some task, I think you should ask yourself whether that task is more like forklifting or more like weightlifting, and choose accordingly.

I think it is a really good analogy, and people should draw from it. I want to offer one last thought of my own, Kevin, which is that while I think it is important to continue this conversation of how AI is affecting my critical thinking, in this last anecdote we see this other fear being raised, which is: what if the issue isn't

do I still have my critical thinking skills? And what if the actual question is, do I have time to do critical thinking? Because I think that one effect of these AI systems is that everybody is going to feel like they have less time. The expectations on them have gone up at work.

They're expected to get more done because people know that they have access to these productivity tools. And so you might say, you know what, I actually really want to take some time on this, and I don't want to turn to the LLM, and I want to bring my own human perspective to this. But you're going to see all your coworkers not doing that, and it is just going to drag you into doing less and less of that critical thinking over time. So while I think "is AI making me dumber?" is a really interesting and funny question that we should keep asking, I think "am I going to have the time that I need to do critical thinking?" might actually be the more important question. Yeah, that's a really good point. All right. Well, that's enough critical thinking for this week.

I'm going to go be extremely ignorant for the next few days, if that's okay with you, Kevin. That's fine by me.

Hard Fork is produced by Rachel Cohn and Whitney Jones. We're edited this week by Matt Collette. We're fact-checked by Ena Alvarado. Today's show is engineered by Alyssa Moxley. Original music by Marion Lozano, Diane Wong, and Dan Powell. Our executive producer is Jen Poyant. Our audience editor is Nell Gallogly. Video production by Chris Schott, Sawyer Roquet, and Pat Gunther. You can watch this whole episode on YouTube at youtube.com slash hardfork. Special thanks to Paula Szuchman, Pui-Wing Tam, Dalia Haddad, and Jeffrey Miranda. You can email us at hardfork@nytimes.com, or if you're planning a military operation, just add us directly to your Signal chats.

This transcript was generated by Metacast using AI and may contain inaccuracies.