Adobe Express makes it quick and easy to create everything I need for my business. From social posts, TikToks and flyers, all in just a few clicks. Get Adobe Express for free. Search for Adobe Express to find out more. Do you have a name that you want me to use? Playboy Farty. I'm not going to call you Playboy Farty. Give me a different name. Ken Fartzen. Is that going to be...
a rapper's name, but you changed part of their name to fart. Are you going to be copyrighted? Do you want to be Ken Fartson? Do you want to be Ken Fartson? I'm going to be Playboy Farty. Okay, you can be Playboy Farty. Playboy Farty is a teenager in the early part of his high school career. Mr. Farty, a very bright kid who, like many teenage boys, has recently discovered the joys of defiance and mischief.
I was hoping Playboy could tell me about this story I'd been tracking, the rise of AI, but how it had been viewed from his teenage vantage point. Do you remember the first time you heard about ChatGPT? I heard about it over social media and I didn't really know what it was. So then I kind of looked it up and it was kind of the first AI thing I'd ever used. So I found it really interesting.
This then 13-year-old boy started talking to this then one-month-old AI agent. And like most of us, at first Playboy Farty was just trying stuff, asking questions like, how do I get to the moon? How would I say this English phrase in another language? Overall, I found it really cool and I was like, how can I use this? And what, you just wiggled your eyebrows. And when you thought, how can I use this, what were you thinking? At the time, I was doing a coding class and...
I actually used it to create some code and run some code. But after that, I started realizing, oh, I could probably use this for something more interesting. And so then I started asking for ideas on different homework assignments. So walk me through what this would have looked like. Like you had a homework assignment and you were supposed to do what? Well, it was a slideshow presentation on some country just for the beginning of class. And it was just like facts, population, stuff like that.
Playboy Farty asked ChatGPT to provide facts about this country he'd been assigned, and then he rewrote those facts and stuck them in his presentation. Honestly, not so different from how teens typically use reference materials. And then the second time I used it, I think it was for not an essay, but a small, like, two-paragraph writing project on a book. That time it just gave me, like, a thesis, and...
it summarized the chapter, which I had already read, but it was a pretty in-depth summarization. I may or may not have gotten caught, but apart from that, it actually wrote it pretty well, and I just changed it a bit. Despite Playboy's breeziness, here in the adult world, the behavior he's describing, asking the chatbot to provide his paper's thesis and summarize the plot, and then slightly modifying that language into his own words,
that is considered cheating, something that would have gotten him in serious trouble at school. But Playboy Farty never got a chance to hand in the assignment. Instead, the adults in the house noticed that the homework that night had been completed a little too quickly, and he'd been hoisted on the damning evidence of his own browser history. But the conversation afterwards between the adults and the teenager was unusual, because both sides just disagreed on the meaning of what had happened.
Was Playboy Farty actually guilty of an academic crime, as the adults saw it? Or was he just an early adopter of a new labor-saving technology, which is more how he saw it? Playboy was threatened with consequences, and he says he has not used AI for homework since. But there's a difference between following a rule because you understand it versus following a rule because you know that otherwise you'll get in trouble.
Did you feel guilty? Maybe afterwards, but not really. You didn't feel guilty? Afterwards, yes. But not while you were doing it? No, because I was kind of like... My brain, the process going on in my brain at the moment was like, I'm looking at this, but I'm writing it. So technically speaking, it's still my writing. Now I realize that's not really how it works. I mean...
I'll tell you my fear. So when I was a kid in math class, we had to learn long division and I felt like it was hard and stupid and like I could just use a calculator, which was like the best technology we had for doing homework at the time.
And the adults in my life were like, no, it's really important that you know how to use long division. And I was like, I really think you're wrong. And they made me learn long division. And I was right. I don't... I've never, as an adult, been asked to use long division, and I can't. Not even for taxes.
No, you don't use long division. Use a calculator. Use a calculator. Exactly. That's my feeling about AI. Why would I ever not use it as an adult? Well, this is what confuses me, is I think writing is really important. Like, I think when... when I write down my thoughts, it actually helps me think, and it helps me think more clearly. It's not just a skill for communication. For me, it's actually a tool for thinking. And the slight worry I have, or the real worry I have, is...
I don't think people should have to do long division if calculators exist. But I worry that if your generation is not really learning how to write, they're just learning how to edit a machine, you might... be losing at least one tool people use to think clearly. Do you worry about that? I feel like as an adult, I would use ChatGPT. Right. Like, why wouldn't I? I'm an accountant. I could just...
get it to do all the values for me. What job would I get in trouble for using ChatGPT for? Being a podcaster? I feel like if I wrote a book using ChatGPT, it would be a world-renowned book. It'd be like... oh my god, the first book written by an AI, and then I still wouldn't get in trouble. Well, I mean, some adults, it's kind of neither here nor there, but adults are upset
when adults use ChatGPT, not because they care about cheating on homework, but because they care about people losing their jobs. They're like, oh, if I wrote a book with AI, people would get mad at me because they'd say, well, the AI uses other people's work.
You would have hired a research assistant, but instead, like, a computer is doing it. So that's the adult conversation around it. So they'd get mad at me for writing the book by myself? Yes. Some adults would get mad at you for using AI instead of research assistants.
So we're getting rid of the work smarter, not harder mentality just so we can hire more people? That is how some adults feel, yes. Okay. Anyway, we're not even going to get into that. There's like more trouble there than you want. It's funny. In the adult world, one factor I have seen slowing down widespread AI adoption is actually a fear of getting in trouble. A lot of people feel nervous about being seen using a tool that could automate their colleagues' jobs away, or their own.
But for the next generation, who don't yet have jobs to lose, these concerns seem much less present. Playboy Farty said that by his estimation, all the kids in his class were playing around with AI, and about a third of them were covertly using it for homework. The more I talked to him, the more I came to believe that the adults in his life were just not going to be able to keep him off of AI forever.
The digital walls adults have been throwing up to stop teens like him from using these tools seem exactly as ineffective as the digital limits my parents tried to put on me 25 years ago. Are there kids whose parents are trying to stop them from using it who are finding ways around it? All I know is my parents, and yes, in every way possible. There's always a way around it. Yes. There's a way around every barrier.
There's a way around every barrier. The motto of Teen Nation. This week, can we stop teenagers from feeding their homework assignments into AI? And if not, then what? That's after this break. Welcome back to the show. I went on a real journey this month with this question about teens and AI and homework, and when it was done, I ended up thinking very differently about all of this.
Like, my mind had fundamentally changed in some ways. Just to give it away, I've somehow landed in a place where I'm actually both much more worried about AI and its effect on society, particularly teenagers, but also, weirdly... These conversations forced me to confront some of my assumptions about how education works, to actually examine how writing is taught in American schools right now, and to ask, is there something here that needed to be fundamentally changed anyway?
But it took me a while to see all this the way I see it now. So I want to retrace my steps for you, and you can make up your own mind. So to start, let's just refresh ourselves on how we got to this moment. Tonight, we're taking a deep dive into the world of AI with a special focus on ChatGPT, the revolutionary new language model developed by OpenAI with the ability to generate human-like text and hold natural conversations.
November 30th, 2022, ChatGPT version 3.5 arrived. This was the first version that actually got widespread attention. In the adult world, people were impressed but also very concerned about the implications of this new technology. What would this mean for jobs, for disinformation, for the environment? And who was going to regulate all this, assuming someone even could? Those are the sorts of questions
I was preoccupied with in the last year. So much so that I missed a more immediate one. A new artificial intelligence tool is going viral for cranking out entire essays in a matter of seconds. How will this affect English homework? The idea is to give you a simple prompt like write a summary of the American Revolution, and the bot will generate several paragraphs in under a minute. I was not thinking about this use case, but students obviously were, as were their teachers.
both sides seem to understand the implications of the new technology from the jump. Just 10 days after GPT 3.5 hits the internet, The Atlantic magazine publishes a piece from a teacher explaining why he thinks this marks the end of traditional high school English. But other educators see this, at least in the beginning, as a still-winnable fight.
The New York City Department of Education is cracking down on a particular tool. Students and teachers can no longer access an artificial intelligence chatbot that generates writing. The New York City public school system, the largest in the nation,
makes an announcement that first January. New York City Department of Education has restricted the tool on all city public school networks and devices. Citing negative impacts on student learning and concerns regarding the safety and accuracy of content. In the months that follow, many other school districts follow suit. That March, OpenAI releases an even more powerful version, GPT-4. It can do harder homework, more efficiently.
This is all still just the first school year. I spoke to two academics who were tracking this as it happened. I'm Beck Tench, and I'm a researcher and designer at the Center for Digital Thriving. And I'm Emily Weinstein. I'm a psychologist and longtime researcher studying teens and technology, and I co-founded the Center for Digital Thriving.
Harvard's Center for Digital Thriving tries to open up conversations between teens and adults about the technological changes we are constantly inflicting on them. For Beck and Emily, ChatGPT may have been new, but the anxieties, the conversations around it, reminded them of earlier skirmishes in the teens-versus-adults, alarming-new-technology wars. Here's Emily.
There was something that felt very familiar about the way that schools and parents were asking questions about ChatGPT and kind of rushing to figure out what policies would help ban it. And I think this is a really imperfect comparison for a lot of reasons, but it reminded me a lot of the early days of Wikipedia, where
I don't know if you were in school when we, I don't know if Wikipedia was part of your trajectory. Yeah, yeah, it was. Yes. Anyway, it felt like the early conversation around Gen AI is sort of like that early Wikipedia conversation of, like, don't go there. That is bad. We need to figure out how to make sure that no one even looks at that. And we're hearing teens say things like, you know, my parents have blocked ChatGPT from our Wi-Fi in our house because they're so scared of me looking at it.
I'm smiling. Did you try and do that, PJ? I've blocked ChatGPT. I've blocked Copilot. I've blocked the new Chinese one, DeepSeek, that came out last week. I've blocked weird knockoffs. There's one called, like, GPT Chat, I think. Yeah, and it's very much been, it's funny, I used Wikipedia.
I didn't use it for plagiarism. I, like, learned a literacy around it and, like, checked, like, knew to check sources and knew to work backwards and knew the difference between a really poorly written, like, like... I used that tool and turned out fine. And now in the other position, I find myself like a reactive, strange adult who's like, shut it off.
unplug it while we figure this out. I think for good reason, because obviously ChatGPT is completely different than Wikipedia in terms of the, like, implications for our thinking. But, but I do think this impulse of, like, shut this thing off while we figure it out is
so widespread right now. Parents in 2022, '23, and still now feel like they are still trying to wrap their heads around cell phones and social media. And those issues feel so front of mind that I think that a lot of people, a lot of adults feel like,
what? Like, I don't want them to cheat with gen AI, but also, like, I still don't understand TikTok. I haven't figured out Snapchat. If you imagine, like, the pie chart of parents' concerns, and then the slice of it that is tech, and then the amount of that slice that is currently still being taken up by social media and mobile phones, I think there's just, frankly, not a lot of time left, and people don't even really know what questions to ask. So...
This is the mental state parents and educators were starting from. AI had arrived at a moment where nobody had really figured out yet what to do about cell phones in schools or social media. Those are unsettled debates. So AI felt to some of these adults like having a pandemic during your pandemic. Many of the school districts would reverse their initial bans, but everyone is still trying to figure out with AI what is okay and what is not okay.
And in that vacuum, students have just been using it more and more. According to a study from just a couple months ago, 70% of students now report having used AI for schoolwork. And now we have this new category of young influencers. I'm going to call them... homework influencers, students themselves usually, with massive TikTok followings, educating their peers about how to cheat with AI at a level of sophistication that I cannot help but marvel at.
If you want an easy way to cheat on your homework then stop scrolling. This AI tool allows you to upload any PDF file and it will generate you answers.
I'm going to show you how easy it is. If you're in school right now and planning on using ChatGPT or AI to do your work for you this year, you're going to want to watch this video. So here, it's just written me an essay in literally seconds. And it gets even better: it can even edit and expand the writing you already have. Now, this will take more time than just copying and pasting from ChatGPT, but remember, good cheating takes time.
The best of these videos we saw comes from a young influencer named Carter P.C. Smith. Carter is 19. He has 5.6 million followers, nearly four times as many TikTok followers as the New York Times. Carter comes across like a smart, jockish kid who you might meet on the first day of school, happily offering to show you the ropes.
There's a couple tools you'll need for a good AI-written essay. You need the instructions for the essay, Copilot, GPT, and GPTZero. GPT is going to do the bulk of the writing. Copilot is good for factual information. And GPTZero... helps you not get caught. Carter is set up in a classic LED-adorned gamer cave. From within, he points to his ultra-wide monitor, one window with the essay instructions,
the other three with different AI agents, each overseeing one part of his signature automated homework process. I start by putting in the whole rubric and copying and pasting in another essay that I actually wrote and say... It is, I'm sad to admit, a well-made piece of service journalism. Carter signs off with the verbal equivalent of a mischievous wink. The message... We all know we're not supposed to do this, but catch us if you can. This is what the educators are up against.
This is the status quo in February 2025, during the fifth semester of the new war on homework. So to recap, the grownups so far have tried regulating to the point of banning, and then... unregulating to the point where this is basically a free-for-all. Obviously, the question to answer is, how should teenagers be using these tools? And those Harvard researchers, Emily Weinstein and Beck Tench,
They are part of the search for an answer here. For the past year, they've been studying how teens currently use AI. Their approach is to almost look at teens with the curiosity and respect with which any good anthropologist approaches a different culture. Here's Beck. The way that we orient to young people using tech is we listen, we look at them as experts of their experience. We're kind of first generation to tech in a way, and they're actually living it.
And we knew that if we weren't listening, policies would be created that actually would take away agency from them. The goal of the project was, how do we use participatory design? How do we include young people in creating the policies that will...
govern their use of AI in the classroom. So that was the larger question. And so the idea is like, if adults don't understand this stuff correctly, they'll create rules that don't make any sense. The kids are going to do what the kids are going to do.
And you're not going to get to the outcome you want, which is like figuring out how the kids are using this anyway to work with them and create rules around them so that they're using these as tools and not in a way that destroys their ability to think or learn. Yeah, and we wanted to change the shape of the way that educators were responding to this. Instead of limiting...
access to a technology because people were cheating, we wanted to help educators create assignments that you would feel you were missing out on if you used AI to help you with them. So what you're saying is, like, if the problem is that, and I had not isolated the problem quite this cleanly in my head, but, like, if the problem is that AI can do drudge work, and if homework historically and famously has been, like, drudge work
that trains kids on the idea that life is drudgy, you're like, rather than trying to put handcuffs around their ability to use AI, one way you could think about solving this problem is you have to make homework interesting. Yeah. And school, interesting. And not only interesting, but interesting and meaningful and joyful. Yeah. That seems very ambitious. Yeah.
Perhaps you can hear my skepticism here. This is the first time I encountered this idea, that the problems of AI were highlighting an older, existing problem: homework, and how boring it is. And this notion that, in the future, homework technology must advance: homework simply must be not just fun but interesting, meaningful, and joyful. Obviously, right now, it's not.
Anyone raising kids has confronted this. Making them do homework is almost worse than when you had to do it yourself. But you tell yourself, you tell them, that tolerating this painful boredom is important training for adult life, which contains... multitudes of painful boredoms. I did not believe this as a kid. I don't think I believed it until I found myself having to say it myself. I had to say something to these kids who didn't want to do their homework. But was it true?
Was I actually sure homework was good? Or was this just a required belief for members in good standing of the grown-up community? I spoke to a person who really challenged my thinking on this. His name is John Warner. He's a writer. He edited the website McSweeney's. He's taught writing to tons of students, and he's just written a book called More Than Words: How to Think About Writing in the Age of AI. Before we even begin, I just, like, the first LLM...
chatbot AI agent I remember seeing was ChatGPT. I'm just curious, do you remember when you first saw that it existed, tried it, and do you remember what your reaction was? Yeah, for sure. I think I saw it the day it released. I had actually played around with GPT 3.0, right? And ChatGPT was 3.5. And I had written a little piece about it and said, this stuff is not really a threat to student writing or education because...
While it could do something that seemed sort of like writing, it was still essentially gibberish, right? It was not doing the work. So I saw... And like a lot of people, I was like, whoa, hold the phone. This thing can do something that I didn't think machines could do, which is string together sensible syntax in response to a plain-language prompt.
And so I was like, this is new, we have to deal with this. But at the same time, because of my background as a writing teacher, who had been frustrated for... well over a decade about the kinds of things students are asked to do when they write in school, I was excited. Really? Because I was excited to have a conversation about writing as a human activity of thinking, feeling, and communicating. So, let's have that conversation.
The rest of this episode, we're going to talk about the act of writing, what writing is for, and what American high schools seem to think it's for, because those are two very different things. John Warner is our guide for this section. The thing you need to know about him is that he loves writing, and he's very skeptical of AI. He says what chatbots really generate is, quote, bullshit. But even though he's concerned about the future,
He thinks the status quo for English class, for English homework, deserves to be demolished. John says if you want to understand how broken English class is, look at the writing assignment we ask kids to complete over and over again. An assignment that Playboy Farty wanted a machine to do for him. The five-paragraph essay. One of your sort of... a villain, an intellectual villain for you. And the thing that keeps coming up with the students I'm talking to is...
The five-paragraph essay. For Americans, this is very, very, very understood. For some non-American listeners, they're not as familiar. What is the five-paragraph essay? Well, the five-paragraph essay is essentially a template that can be used to make a simulation of an argument, where you establish your thesis and topic at the beginning.
You use three body paragraphs to support the thesis that you've established at the beginning. And then you summarize those three paragraphs in your conclusion, which starts with "in conclusion." And in our school system, that also has come along with all kinds of other prescriptions that students are familiar with. Things like no contractions, never use the first-person pronoun "I," always have transition phrases.
In fact, I still recall a student I had at my last job at College of Charleston, where she had brought in her high school notebook that had three handwritten pages of the allowable transition words and phrases that she had checked with her teachers over the years, and she wanted to see if I was good with them also. Right, that's pretty... Because that's, like, computer-like, you know what I mean? Like, they're following rules. Yes, yes, yes.
The five-paragraph essay itself is more an avatar of the problem than the problem itself, right? Like, the problem itself is that we... decided we're going to try to measure students' progress as writers, and we need this kind of common scheme to do it. John's thesis is that the five-paragraph essay
sucks. First, it sucks because it's an artificial form, the simple thesis always followed by exactly three supporting points. It further sucks because of how teachers grade it, enforcing these made-up rules, like never make an I statement. And finally, it sucks because it trains students not to write to explain or to think, but instead to just obey the rules of this rigid form. In conclusion...
John Warner believes we need to understand the reasons for this assignment's popularity, because the story of how it came to be reveals larger dysfunctions in American education. One place you might start that story is with one particular American, a Hollywood actor turned president. Today our children need good schools more than ever. We stand on the verge of a new age, a computer age, when medical breakthroughs will add years to our lives.
This is Ronald Reagan in 1983, his first term, talking about a new priority for his administration: education. But from 1963 to 1980... Scholastic Aptitude Test scores showed a virtually unbroken decline. Science achievement scores of 17-year-olds have shown a steady fall. And most shocking, today more than one-tenth of our 17-year-olds are functionally illiterate.
I look at a report that came out during the Reagan administration called A Nation at Risk, which said our kids are dum-dums. And because they're so dumb, we're going to fall behind. Our worry at that time was Japan, who was eating our lunch in terms of... electronics and this kind of stuff. So Japan, with a population only about half the size of ours, graduates from college more engineers than we do.
We need kids to be smarter. We need to start measuring these things. And this sort of begat what has essentially been a bipartisan approach to school reform. Reagan had actually tried and failed to eliminate the federal Department of Education. He'd settled instead for giving its budget a haircut. But his argument was, we could still have better outcomes in our high schools if we just held those schools more accountable.
The plan didn't really work. Test scores did not rise in Reagan's era. But at the state level, some politicians listened to his pitch, that schools themselves needed to be more rigorously tested. And later... another Republican president really ran with this idea. Today begins a new era, a new time in public education in our country. As of this hour...
America's schools will be on a new path of reform and a new path of results. This is George W. Bush in 2002 announcing the passage of No Child Left Behind, 20 years after Reagan. The federal government would be able to withhold funding from states whose schools had bad enough test scores. And so what this bill says, it says every child can learn. And we want to know early, before it's too late, whether or not a child has a problem in learning.
I understand taking tests aren't fun. Too bad. We need to know in America. We need to know whether or not children have got the basic education. School funding now relied in part on the strength of students' measured reading skills. The way you measure reading is through comprehension essays. And the drill teachers relied on to prep teens for those tests
was a five-paragraph essay. Easy to teach, close to the formulaic writing the test demanded. John Warner has been in education long enough that he's watched this whole transformation happen. I'm 54, almost 55. I graduated high school in 1988. I happen to still have my fifth-grade writing portfolio that my mom, for whatever reason, kept over the years, and there is not a single five-paragraph essay in the entire thing, right?
Fifth graders today are already either writing five-paragraph essays, or there's a couple of forms that are essentially precursors to it. And what, like, just to spell it out, like, as an educator, as a person who loves writing, what's the downside of it? Why does the five-paragraph essay break your heart as a person who's trying to teach kids to write and to love writing? Well, it prevents them from having the kinds of experiences that help you learn to write.
Writing is fundamentally making choices inside of a specific rhetorical situation: message, audience, purpose, right? In the book, I talk about my third grade teacher, Mrs. Goldman, who taught me the rhetorical situation by requiring our class to write instructions for making a peanut butter and jelly sandwich. And then...
requiring us to make the peanut butter and jelly sandwiches strictly according to our own instructions. So if you had done something like I did, which is forget to say, spread the peanut butter on the bread with a knife... I was dipping my hand in the jar of peanut butter to spread it on the bread. And in fact, I have a picture of this moment. It's the avatar I use on my newsletter, of me in third grade, smiling, doing a writing assignment
where I had learned that audience fundamentally matters, that on the other side of this piece of writing is somebody who's going to read it and use it. And so you have to take care with what you write in order to make sure it connects with the audience. Now, I was in third grade, right? Yeah. So we're still mostly just... messing around, but that's a great way to learn how to write when you're young is to mess around, right? Again, I was working with 18-year-old college freshmen.
who would report that they had never had the experience in school of writing to an audience, which is what writers do, right? But if you've never practiced this, you don't have that frame of reference when you go about writing. The story that John is telling here, it actually exactly fits my memory. I arrived at high school in 2000, where America's greatest English teacher, Ann Gerbner, taught me to love the thing I'm doing for you right now.
But when I arrived at college in 2004, I remember struggling with writing for the first time, smacking my head against the brick wall of the five-paragraph essay. In high school, I'd learned to love writing. In college, I learned to slightly loathe it. I wouldn't have used these words then, but I was being asked to write less like a person and more like a machine. Me and everybody else. And maybe it would have gone on like this for students forever.
But now, an actual machine has arrived that can itself write like a machine, which functionally kills the five-paragraph essay as a test of anything. All it measures now is that the student is capable enough to ask ChatGPT to write an essay. But John says you have to be really careful here with what you do next.
There's this temptation from a lot of teenagers and some adults to see chatbots as analogous to calculators, to think it's okay to no longer learn a skill that a machine can approximate. But John says that that is faulty reasoning. You know, when ChatGPT arrived, there were a lot of people analogizing it to calculators. And this is true in some senses, but it's not true in some very important ways. The most important way is that the labor of you struggling to do long division...
at your kitchen table. Yeah. And what a calculator does when it calculates is identical, right? The exact same thing is happening between a calculator or a human doing calculation. ChatGPT is not... using a process that is the same as what humans do when they write. ChatGPT is generating syntax on the basis of weighted probabilities. And that is not what humans do when they write.
When humans write, we are using syntax in order to try to capture an idea or notion or image or whatever we're trying to get on the page. Writing is thinking, writing is feeling, writing is communicating. All of these things happen as humans write. None of them happen when ChatGPT writes. This is meaningful. It must be meaningful. If it isn't, then we can just pack it in. One of the parts of the social media era that I only understood in retrospect
was how big a deal it turned out to be. Just that we replaced reading sentences in books with mainly reading sentences on screens, on social media. It took me too long to notice how, for me, that meant I'd lost important capacities for deep attention, for reflection, because I didn't notice how different the thing being replaced, one kind of reading, was from the thing replacing it. A totally different kind of reading.
The idea that the next generation will need to understand what they're losing when they ask machines to write, to generate their arguments, to generate the emails where they explain themselves or apologize or just think through a problem, that does not seem to me like a small deal. That's the alarming part. After the break, we'll return to the research. We'll get some actual information about how this next generation views AI and things like authenticity. Their ideas
very much challenged and surprised me. That research, after some ads. This episode is brought to you in part by Lumen. Did you know your metabolism can influence how you feel throughout the day? Your energy, your sleep, and even how easy it is to stay on track with your health goals? Lumen is a game changer for understanding and supporting the metabolism.
Lumen is a handheld metabolic coach that measures your metabolism through your breath. It syncs with an app to show whether you're burning fat or carbs and gives you daily recommendations to optimize your nutrition, workouts, sleep, and more. Every morning, you breathe into your Lumen, and it gives you personalized insights into what your metabolism needs. If you're burning more carbs, Lumen adjusts your nutrition plan to help balance things out.
You can even use it after meals or workouts to track what's happening in your body in real time. What's even better? Lumen tailors its advice to your hormonal changes to help you maintain steady energy, reduce cravings, and feel your best. If you're ready to take control of your health, visit lumen.me slash search to get 20% off your lumen. That's lumen.me slash search for 20% off your purchase. Thanks, Lumen, for sponsoring this episode.
Welcome back. Everyone I've talked to for this story who's been thinking deeply about AI in education, their common understanding is that this new thing is just very different from what preceded it. Metaphors from the past mislead us here. Chatbots are not Wikipedia. They're not calculators. They're something new. And while the chatbots are powerful, they're not substitutions for what is valuable about human thought.
Which means the job then, for the adults, is some kind of conversation with the teens who will someday soon use these chatbots to take our jobs away from us. Ha ha ha, but also maybe really. A conversation about what these tools offer and what they might take away. Over at the Center for Digital Thriving, with Emily and Beck, those conversations have actually already begun. They shared with me some results from their most recent study, one that's still ongoing.
Okay, so here's what we did this week. Okay, so what are we looking at? Okay, so this is a Miro board. It's basically a, Beck, how do I describe this? It's a digital whiteboard. Okay, digital whiteboard. And what we did was we, so Beck mapped out these five different kinds of school assignments. Oh, wow. On the whiteboard, you see the cover of the book The Great Gatsby (some things never change) and a theoretical homework assignment.
Write an analytical essay about The Great Gatsby. And then a bunch of yellow post-it notes with ideas about how someone might use AI to get through that assignment. Ideas ranging from checking grammar all the way through to using AI to write the essay and then paraphrase it. Hello, Playboy Farty. Emily and Beck would show this whiteboard to a group of real high schoolers over Zoom and ask them, how do you feel about each of these ideas? From crosses a line, through feels kind of sketchy,
all the way to totally fine. Every one of these graphs is one teen's plotting. Interesting. So you're like, you've plotted the moral judgments of... Yes. And the thing that's interesting is like, I can just see visually, they're all... over the place. Yeah, they're all over the place. But one of the really interesting implications of this is you can actually see that kids, just truly doing their best to make a judgment call,
end up with really different judgments about what is okay and what is not okay in terms of the use of AI. Obviously, it's not unusual for teens to see things differently from each other, or from adults. But what's interesting here is that when the researchers talk through the logic undergirding these views, the teens are just operating under totally different logical assumptions from my generation.
We were just doing this listening session that was actually what sparked the direction that we've been iterating in. And we were talking to teens about how AI is coming up in their lives. And one of the teens was like, you know, I am not using it that much right now, but I'll tell you a really cool story. My dad just told me about how his friend was writing a birthday card for his wife and he used AI.
to come up with what to write in the birthday card. And then he gave the birthday card to his wife, and it was so nice that she cried. She said it was the best card he'd ever written her. And he's like, so isn't that amazing? And what do you say? I was like, wait.
tell me more. And I was, and so I'm like, tell me more about your reaction. And he's like, well, like he made his wife so happy. It was like the best card he had ever written. And I'm in a... it's me, and I'm in a conversation with a group of teens. And a few of the others start agreeing, like, that's really cool. And so I said, wait, I want to talk about this because I actually have a really different reaction to that
experience, and I want to understand more about how you think about it. My reaction is not, oh, that's so great. I feel sort of like, if my... husband gave me a card that was written by AI and I didn't know it was written by AI and I cried, I would feel kind of betrayed or lied to. And we start unpacking that. And interestingly, one of the teens says to me, well,
Why is that? Like if your husband wrote a card with AI, he's still trying to do something nice for you. And isn't the intention what matters? Like if he brings you flowers from a flower store, he didn't grow the flowers or pick them. And you still think it's really nice. So isn't that kind of like what's happening here? And I was like, huh, interesting. If you were born before the year 2000.
Your body is convulsing right now. You want to grab the nearest teenager, maybe they're in the car with you, and explain to them that their generation is completely wrong about all of this, which is not how it works. Trust me, I've tried. Their generation just seems to hold some strange, to us, new views. Like, they seem to be really focused in that conversation on the idea that if the outcome was that the person's feelings were good...
then the use was good. And if the outcome was that the person's feelings were hurt, then the use was bad. Maybe that was anecdotal. Maybe it was just the teens who happened to be in that group. Maybe it was developmental. Maybe it was generational. But like part of what you're saying that's so interesting at this moment is there's one version of what could be going on here, which is just, like...
I think the average adult feels differently about that story where they think, hey, a letter says to me, you sat down and thought about how you felt and you communicated something true about how you felt and you're taking the time you thought about me and making it visible. And that's what's meaningful. And that if an AI wrote it, it's not meaningful. And you could think that these kids are going to feel that way when they're 20 or 30. Or...
Another possibility is that you could be watching a norm change. Like, you could, the same way, like, my parents would not have taken a thousand photos of their faces. But in my generation, that's not considered, like, a huge mark of vanity. A norm might be changing, and we just don't know. Right. In real time, in these rooms, we are watching values begin to shift. Sure.
We can all say that we agree some things are just human, but then we start trying to decide which things and realize, oh, this is complicated. The researchers report that some teens told them that talking to a chatbot, not just for homework, but for advice or companionship, feels safer than talking to a human. It doesn't judge. It never leaves. It's always available.
To me, that can sound a little tragic. Like, what is the world we've created where the chatbot's window is a safer place for teens than the rooms with humans in them? At the same time, people find what they need. They make do with what's available. We escaped into novels and TV shows and songs. We got by on imaginary relationships while we waited for the real ones to arrive. I can almost get myself there. But then...
A birthday card to your wife written by AI? Surely we're losing something here. I wanted to say on the letter writing, I, um... My cousins lost their house in the fire in the Palisades. And so I've been thinking a lot about items. And partly as a result of that, I spent part of my weekend digitizing letters that I had gotten across my life from my grandfather.
And I had, like, over 35 handwritten pages and pages, like, these letters he had written. And I was thinking about... what, this is, like, the number one thing. When I thought about losing things, this was the thing that I was like, oh, this, I have to... I have to figure out a way to preserve this. And so I was thinking about the power of letters a lot. And I was also reflecting on the reality that even though a lot of us, I think, can totally write letters now,
I don't have that many letters from people in my life. And I actually have a lot of friends who are writers. And I don't think it's just that they don't know how to write a letter. I think it's partly that people don't really have...
time. They haven't prioritized it. They haven't decided that's something they want to do. And so I say that to say, like, yes, the skill is really important. And I really care about that. And I want us to think about that. But part of what our research, I think, keeps telling us is that we also have to be paying attention to sort of the roots of what's going on. Like, why is it
that kids are cheating on all their homework assignments? Or why is it that they're feeling all this pressure to have 5 million group chats? Like, what is going on here? And how do we get to the roots of loneliness or boredom or pressure or whatever it is, so that we're actually having conversations about what we want to preserve and not just protect but also cultivate, with a fuller spectrum. Yeah. I think someone could tell me this many times before it would ever sink in
that when a new technology comes along and seems to have an uncomfortably firm grip on people, it's worth not just looking at the technology, but at the underlying need it's addressing. You can try to turn off the machine or regulate it towards being less addictive. But you might also try to understand why the void it's promising to attend to is there in the first place. John Warner, the writer and teacher, he reminded me that
not every young person has this same need that the new technologies seem to burrow into. Last fall, I went to Harvey Mudd College in California, which is a small liberal arts engineering college, right? Part of the Claremont Colleges. And the students there, who are brilliant, right, they're like 1600-SAT-type students, they're all in computer science, engineering, all this kind of stuff, physics, astrophysics, they...
have no interest in using this technology. Oh, interesting. Because they are fascinated with the byproducts of their own minds. And to outsource their crazy thoughts to... something else. It doesn't make any sense to them. Now I'll go to other places and students will sheepishly admit
Yeah, when it doesn't matter, when it's like a discussion board post that my teacher makes me do, or it's not going to be graded, or I'm not worried about it. Yeah, I'll just do ChatGPT. And then there's the students who... In a way, they're a little bit in between in that they get it. Like, school is not a game. We should learn stuff. I'm going to have to know something when I go off and be an adult in the working world. But also, man, I don't have time.
Right. I've got a job. I'm taking 18 hours of credit. My student loans come due when I'm finished with this. And so if I can, like, shave off a little, that helps me cope. This is a good technique. I recognize there's a compromise there, but this is just the thing I have to do. And what we're looking at, really, is the underlying conditions of where the work is happening, which,
really, I think, are far more dispositive towards how people feel about this technology than sort of the technology in isolation. What do you mean? If you look at the data around student disengagement, student stress, anxiety... A lot of it is based in school. And so to ask students to sort of forego a hack that will help them relieve stress, relieve anxiety, get this credential that they think is necessary for the life they want to lead, it's very tough to ask them to give that up.
Right? Sort of voluntarily. Unless you can show a superior route towards something that they want and they need. Right. So, to return to our original question. What is the fix for kids using AI to do their homework? Before these conversations, here's what I was doing. Using the powers I have as an adult in the house to set up firewalls and filters to just try to shut it all down. But I will tell you...
It feels like playing whack-a-mole and losing. I'm professionally good at computers. I'm not good enough at computers to stop a medium-strength teenager. And not only did it not work in the narrow sense, it also created... a lot of anxiety for the adults in the house. This constant feeling of losing, being overwhelmed. So maybe the solution we've landed on for now, at least in our house, is to sit with the kids.
to talk about where AI use might be okay, but also to try to persuade them when it comes to actually writing a first draft, please go offline and struggle with the blank page like so many teenagers before them have. I don't know that the process of writing is ever going to be entirely joyful, but it has joys, and it's very meaningful. And however the world shapes up, it's hard to imagine the ability to think and reflect won't be skills a future adult would want.
John Warner says when he looks at the adults currently using AI, what he notices is that they're people who benefit from having already developed critical thinking skills and some actual knowledge of whatever their job is. If you have a... functioning practice as a lawyer and you know how to write a brief, you may be able to use this technology to help you write a brief, but if you don't have that underlying knowledge and ability...
You have no idea if this technology is going to write you a good legal brief. I've seen people use image generators in amazing ways, but... The reason they're able to use them in amazing ways is because they can talk to the image generator like an art director. Right. Not like a prompt engineer, but like an art director. And these are the things that we need to be teaching people to use technology that can generate images or syntax or video or music or what have you.
Not, how do you interact with this thing on a moment-to-moment basis to get an output? Because that just puts the whole sort of, everything's behind the veil. It's like the great Oz is back there doing stuff. We should know exactly what Oz is up to.
when we are asking it to do things. Yeah, it's funny. I remember years ago, I think Tyler Cowen had this observation where he was saying there's this debate about, you know, when a computer is going to beat a human in chess. And the real thing isn't computers versus humans. It's... human-assisted computers. That is going to be the best chess player. And similarly, to your point, setting aside the IP considerations, if you can create a world where...
All the people whose work has been hoovered into these AIs actually get compensated for it, which isn't a crazy thing to imagine. They don't have to be theft machines. You could work out a deal. But in that world, where people could use image generation without... feeling morally squicky about it, you want people who know what they're doing to be checking the work of the machine. And the reason it makes educators' jobs so important right now is that our generation, as you're saying, has native...
earned knowledge of all these things. And the next generation, we just need to make sure that continues to be true. Yeah, we can't be passive consumers of this sort of output. You know, I go give talks on this stuff. I sometimes will joke around. I say, like, which AI future are you interested in? Do you want the Terminator, right, where the superintelligence kills us all because it thinks we're a threat? Do you want WALL-E, where robots do everything for us and
We sit around on our floating Barcaloungers and eat and watch media, or do we want an Idiocracy, where we actually fall apart because we don't know how to do anything? And I just firmly believe we have to keep... knowing how to do stuff. It's not only important for practical reasons, it's part of, like... having the experience of being human, right? When I look at the increases in anxiety and depression among student-age kids,
I see a direct correlation between the kinds of things they're asked to do in school and those emotions and the pressure of doing those things well. And if instead... they could just kind of exist and do this work in a way that's meaningful to them, that still helps them build these capacities that are going to serve them well, I think it could be a catalyst towards
increased human thriving. But this is not outsourcing this work to this technology. This is using this technology to allow ourselves to be more human. John Warner, his book, which I found extremely clarifying and you should definitely check out, is called More Than Words: How to Think About Writing in the Age of AI. And...
If you're curious to learn more about the fascinating ongoing work being conducted by Emily Weinstein and Beck Tench, go check out the website for the Center for Digital Thriving at Harvard University. Search Engine is a presentation of Odyssey and Jigsaw Productions. It was created by me, PJ Vogt, and Sruthi Pinnamaneni, and is produced by Garrett Graham and Noah John. Fact-checking by Claire Hyman. Theme, original composition, and mixing by Armen Bazarian.
Additional production support by Sean Merchant. If you'd like to support our show and get ad-free episodes, zero reruns, and the occasional bonus audio, please consider signing up for Incognito Mode. You can learn more at searchengine.show. Our executive producers are Jenna Weiss-Berman and Leah Reese-Dennis. Thank you to the team at Jigsaw.
Alex Gibney, Rich Parello, and John Schmidt, and the team at Odyssey. J.D. Crowley, Rob Mirandy, Craig Cox, Eric Donnelly, Colin Gaynor, Matt Casey, Maura Curran, Josephina Francis, Kurt Courtney, and Hilary Schott. Our agent is Oren Rosenbaum at UTA. Follow and listen to Search Engine for free on the Odyssey app or wherever you get your podcasts. Thanks for listening. We'll see you next week.