
The Story: Selfhood in the Digital Age w/ Vauhini Vara

Jul 02, 2025 · 35 min

Episode description

Writer and journalist Vauhini Vara has been entangled with the tech world for most of her life — first as a kid in the Seattle suburbs and a college student in the Bay Area, and then as a reporter covering Silicon Valley for The Wall Street Journal. But it wasn’t until she turned to creative writing that Vara began to see just how deeply technology had shaped her own life. Vara sits down with Karah to talk about her new book, Searches: Selfhood in the Digital Age, and how she’s learning to use technology as a writing tool while remaining critical of the industry.

See omnystudio.com/listener for privacy information.

Transcript

Speaker 1

Welcome to Tech Stuff. This is The Story. I'm Kara Price. Today I want to spend some time discussing how the Internet has shaped us as individuals. I was exposed to the Internet very early on: with a techno-optimist for a father, and attending a school in the early two thousands that insisted every student have a laptop starting in fifth grade, I was bred to be chronically online.

I started thinking about how this influenced me after reading a new book called Searches: Selfhood in the Digital Age. The book is by Vauhini Vara, whose debut novel, The Immortal King Rao, was a Pulitzer finalist in twenty twenty three. But before diving into creative writing, Vara was a tech reporter. She worked at The Wall Street Journal in the early two thousands, when Silicon Valley was rapidly becoming the force it is today, among the first to cover companies like

Facebook and Oracle. Vara's curiosity about tech stayed with her through the years. In twenty twenty one, she wrote Ghosts, an experimental essay in which she collaborated with a nascent version of ChatGPT to write intimately about her sister's untimely death when Vara was a teenager. When she wrote Ghosts, being vulnerable with your chatbot of choice wasn't yet a thing. In her latest book, Searches: Selfhood in the Digital Age, Vara fully dives into how technology has impacted her life

in the most intimate ways. She shares the Amazon reviews that she writes to hold herself accountable when buying from the site. She asks ChatGPT to review her chapters as she writes them. All of these exercises help her answer a question that haunts many of us: How have we been shaped by the Internet? And that's where

I started my conversation with Vauhini Vara. I wanted to start with something very broad: your relationship to the Internet growing up, and how that relationship has changed since then.

Speaker 2

Yeah, I mean I was born in nineteen eighty two, so I was in middle school when I first encountered the Internet. It was at the home of like these family friends where the daughter was my age. She sort of like pulled me into this room with one of those beige desktop computers and was.

Speaker 3

Like, I have something to show you, and.

Speaker 2

You know, opened up a little window, pressed some keys, tapped a button and that like screechy AOL log in sound started going like that, and it turned out she was showing me what AOL was and what AOL chat rooms were. Where at the time, for those who can remember this, like you would just go into these rooms in these little kind of windows on your desktop screen and usually kind of like pretend to be whoever you wanted to be, like a version of yourself, but not

really your actual offline self. And I found that really enchanting. I found it kind of miraculous. I was like this nerdy middle schooler. We had just moved to a suburb of Oklahoma City from Canada. I had very few friends, and it felt really exciting to like get to go online and be whatever I wanted to be without like the encumbrances of who I was offline.

Speaker 1

You start as someone who is, like, playing with the early Internet, to now being someone who is a tech reporter, a teacher, a novelist. Why, after all this time being online, do you choose to write a book about selfhood in the digital age?

Speaker 2

So the thing that I understand now that I didn't understand then is that this idea that I had back then that the Internet was like this safe space for self exploration, self expression, where like I didn't have to worry about the kinds of power dynamics I had to

worry about in my middle school halls, right? Like, there was a way in which that was true and beautiful, and there's also a way in which it was an illusion then, just like it's an illusion now. Because actually, anytime I was engaging with a technology company's product, I was providing that company with information about me that it could then use to go get more wealth and more power.

Speaker 1

And in the book you describe how your own personal and professional trajectory unfolded alongside the tech boom of the nineties and early two thousands.

Speaker 2

How so? Well, when I was in middle school, in the eighth grade, my family moved to Seattle, and we lived in a suburb of Seattle that was one town away from the town where Jeff Bezos and his wife MacKenzie were building Amazon out of their garage, and they would go have their meetings at this Barnes and Noble on Bellevue Way, which was the same Barnes and Noble where I would, like, go and sit on the carpet and read magazines

and books. I was very aware of Amazon at the time. I was also aware of Microsoft, because one of its co-founders, Paul Allen, was engaging in this big campaign to buy the Seattle Seahawks and was putting a lot of money into this local political campaign that was bound up in that. And so I got exposure to the early use of tech money to influence politics, just by virtue of growing up in Seattle. I then went to college at Stanford, and I arrived there right after the

founders of Google had built Google out of Stanford. My senior year of college, Facebook shows up on campus. I'm one of the first, like, I don't know, five thousand or so users of Facebook, because I happened to be there when Facebook was just getting started.

Speaker 3

It was still really new. Nobody really knew what this thing was.

Speaker 2

But it took off super, super quickly, and by virtue of being the news editor at the time, I edited our paper's first stories about what Facebook was and how it was growing so fast. We interviewed the journalist who interviewed Mark Zuckerberg, and already then he was talking about, like, creating this space where people could connect, which is kind of still the rhetoric he uses now. And so I kind of, like, fell into this world of Silicon Valley while being at school.

Speaker 3

And then I graduated from.

Speaker 2

College and my first job was as a tech reporter at the Wall Street Journal, where I had previously interned, and there I ended up in the San Francisco bureau. I happened to be the youngest person in the office because I was the newest fresh college grad working there. And so I ended up saying to my colleagues and to my bureau chief, Hey, there's this new company called Facebook, and we're not really covering it. I think it's worth covering.

And because they were a private company and actually not that influential yet in the grand scheme of things, my editor kind of finally relented and was like, Okay, fine, if you want to cover this Facebook company, go ahead and cover them. And then fast forward a couple of years.

I ended up becoming a little disillusioned, and maybe that's too strong a word, but maybe not, with the state of the tech industry at the time. Like, it felt like these companies like Facebook were already growing really fast, and you could imagine this future in which that speed and wealth and power became a problem. And I think we were having a hard time contending with how to

write about that journalistically. And so what I did was, I just took a leave of absence and I went and studied creative writing at the Iowa Writers' Workshop. And I thought I was sort of, like, leaving this tech stuff behind entirely, but apparently I hadn't mentally, because I ended up writing this novel about a tech CEO who takes over the world. It ended up being kind of like my way of trying to contend with the stuff that I couldn't contend with in journalism.

Speaker 1

So tech has been inescapable for you, yeah, for much of your, well, for all of your life, it seems, actually. So eventually you would be among the early users of ChatGPT, with a version that looked very different from the one we know today. Can you tell me how that came about, and also how it inspired you to write Ghosts, which ultimately became a viral essay back in twenty twenty one?

Speaker 2

So back in like twenty nineteen, twenty twenty, I was starting to read about this technology that OpenAI was developing, which was first called GPT, then GPT-2, then GPT-3. And so when GPT-3 came around, I wrote to Sam Altman. I had previously profiled him for this magazine called California Sunday, so I had his number. So I texted Sam Altman and I said, hey, I heard that you're working on this thing.

Speaker 3

Would you give me access to it so I can play around with it?

Speaker 2

And he eventually gave me access to it. It worked differently from ChatGPT back then. So the way it worked was, the interface was this kind of web app where you would type some text and then hit a button, and then it would just sort of keep writing for you.

Speaker 3

It would kind of complete.

Speaker 2

That text, and it did that in a way that was often kind of like more interesting, I would say, than the text that ChatGPT produces. It would be more surprising, what you might describe as more creative. And the thing that came to mind was that it was promising that it could provide language for us when we find ourselves at a loss for language. And as a writer, there aren't that many things where I find myself at a

loss for language. But a thing that I have always struggled to write about, to talk about, is the death of my sister, when we were in college, from cancer, and my grief over it. I think it's like an incomprehensible thing to me that happened. And so I wrote this sentence, which was: when I was in my freshman year of high school and my sister was in her

junior year, she was diagnosed with Ewing sarcoma. And then I hit the button, and it produced this text for me that, like, had nothing to do with me or my sister.

Speaker 3

Really, it was a story kind.

Speaker 2

Of, from the perspective of somebody whose sister had been diagnosed with Ewing sarcoma in high school. But the last line of the little story it produced was: she's doing great now. So it took this essay that I wanted to write about my sister's death and kind of wrote something that was the exact opposite of what I was interested in. So I kept trying it over and over, and each time I would delete what GPT-3 had written and add more writing of my own.

And what I found was that as I wrote more, this technology did a relatively better job of, like, kind of matching the subject and tone of what I was writing, such that there were these lines that it came up with, that it generated, that sounded to me like, if a human had written them, they would be really well written lines, you know, like lines that moved me, that I thought were really intellectually engaging. And yet ultimately it wasn't expressing my experience on my behalf.

Speaker 1

And so by the end of it, why do you decide to write the whole thing yourself?

Speaker 2

It kept not describing what it was like to be me, to be my sister, having this experience. Right? Understandably, like, it sounds silly in retrospect, but of course, like, this machine that's built to put words one after another based on statistical predictions about which words should come after which other words was not going to somehow reach into my

soul and express the thing that I couldn't express. And then furthermore, like, even if it could, I think, for me as a writer, what is important is the actual act of writing, the attempt to take something confusing and try to articulate it, right? And so, you know, it turns out there is no machine that can do that for me, right? Because the thing that I'm trying to do is the act of writing it.

Speaker 3

And so ultimately I was the one who had to do that.

Speaker 1

After the break: how to resist big tech. Stay with us. So, from your perspective as someone who's been a tech journalist for a long time, who teaches creative writing, but who also thinks about these things on a really philosophical level: how do you engage with, but not give yourself completely over to, these seemingly innocuous technologies?

Speaker 2

I mean, I don't have a great answer to that question. I think that there's a way in which, no matter what we do, even when we're using these technologies, say we're doing Google searches for how to resist big tech, or we're chatting with ChatGPT about all the problems with ChatGPT, or we're organizing our friends on Instagram or Facebook or whatever Meta-owned social network, right, to take political action. All of those things are really beautiful acts of, like, of self-expression, and in some ways of resistance.

Speaker 3

Right.

Speaker 2

And it's a fact that those are legitimate forms of self-expression, of community building, even of resistance. And yet each of those actions that I just described benefits the companies that we are attempting to resist.

Speaker 3

Right.

Speaker 2

So when I search in Google for how to resist big tech, Google then knows that I'm interested in resisting big tech, and it can target messages to me based on knowing that fact about me. When I organize friends on social media for political action, Meta then knows my political orientation and can target messages to me based on what it knows about my background. And so any self-expression, any community action that takes place on these platforms is sort of necessarily bound up with them. Interestingly, I'm finding this with my book as well. Like, I published this book thinking of it as a critique. I still think of this book as being a book that contains a critique of big tech and uses big technology companies' products to make that critique.

Speaker 3

But a lot of the headlines that.

Speaker 2

Have appeared about my book, and some of the interview questions I've gotten, have said things like, you know, this author collaborated with ChatGPT to explore her selfhood. Or I've been asked questions like, so what did ChatGPT teach you about yourself that you didn't already know?

Speaker 3

Right?

Speaker 2

And so it turns out that my book itself, which I constructed as a critique, the life of the book, like the public life of the book, turns out to also be bound up in that sort of complicit relationship that we have with these companies, which at first I found really upsetting and now I find kind of fascinating.

Speaker 1

The description of collaborating with AI has become a very common way of describing how we interact with it. At face value, it makes sense, especially in cases where someone is approaching it to work through something that in the past would have happened with someone else, like a creative idea or an emotional problem. So what do you think of this description of collaborating with a large language model.

Speaker 2

You know, there's an extent to which we talk on Google and we talk on Meta's social media platforms about, like, the problems we're facing, our deepest, darkest secrets. But my sense is that people are going even further with ChatGPT and products like that in doing that, because they have the impression that they're talking to something that is like another person, right, but doesn't have the baggage of actual human beings.

Speaker 1

Right.

Speaker 2

So we might be talking to ChatGPT and feeling like it's using the language that our therapist might, and giving us good advice.

Speaker 1

Right.

Speaker 2

But whereas with our therapist, we pay our therapist, and our therapist's job is to help us understand something about ourselves, and they have training in order to do that well, ChatGPT's job is to get information from us, to get us to keep using the product, to like the product, to use it more and more. And so I think that's why that term, collaboration, is really problematic. It's a term that people like Sam Altman, the CEO of OpenAI, are using a lot.

Speaker 3

I think partly.

Speaker 2

Because they used to say things like, AI is going to take over all our jobs, there aren't going to be any jobs left. Nowadays they talk more about collaboration. They say, don't worry, AI is not going to get rid of all of us. What's going to happen is that it's going to be a really helpful tool we can use in our daily lives. It's going to be a collaborator. It's going to be helpful, it's going to be useful. They use language like that a lot, which I think is meant to kind of defang the threat of AI taking our jobs, right, or AI consolidating more wealth and power in the hands of people who already have it.

Speaker 1

Given that you did use artificial intelligence at points in your book, I can imagine that tech companies would love to claim that you collaborated with AI and that it helped you explore your selfhood. Do you think they can do that?

Speaker 2

Well, it's complicated, because using artificial intelligence products in the book helped me show what the promise is that the companies behind these products are making to us, and what we as users do in response to those products, like how we engage with those products in response to those promises. It also helped me show how those promises fall short, and how, ultimately, the way in which we're engaging with these products is bound

Speaker 3

Up in the exploitation of us.

Speaker 2

That was what I was interested in showing by using the products in the book. A critical reader will read the book and see that. For example, when I ask, you know, image-generation products from companies like Microsoft and OpenAI to generate images to go along with stories I'm telling about my mom's childhood in rural India, like, those images are not factual representations, unbiased representations, representations that bear any relationship to the reality of my mom's experience.

Speaker 3

Right, Like I think, I think.

Speaker 2

I expect readers to see that and think, Okay, I see that. I understand what's happening here, right, I understand that there's a critique here.

Speaker 3

I don't believe that these.

Speaker 2

Products, to any significant extent, help with self-expression, help with, you know, communication with others. And so that was not something I was particularly interested in exploring. However, I was interested in the promise these companies were making that their products could do that.

Speaker 1

Yet we seem to be moving more towards a future where people believe that their self-expression is bolstered by these products. How do we resist the belief that these products are actually, like, aiding our self-expression?

Speaker 2

I mean, I think there's a bigger problem that these companies are exploiting. I think there's a bigger problem in society, in which the dominant cultures, and the dominant sorts of ways of expressing oneself and communicating, are held up as, like, the best ways. So, for example, OpenAI has this kind of style guide, it's public, where they explain how ChatGPT works, and they say that ChatGPT is built to sound kind of like a colleague, you know, at an office, presumably like a white-collar office. It's meant to be polite, it's meant to be empathetic, it's meant to be "rationally optimistic," I think is the phrase. So there are all these ways that ChatGPT is built to use language, and that itself is political, right? Like, the fact that they're saying, this is how ChatGPT should talk, and the way it should talk should be like this. The language tends to be American English, right? Like, office-building, white-collar American English. Because ChatGPT talks that way, and because ChatGPT is held up as something that can help us with our writing, what ChatGPT is really doing, when we go to ChatGPT and say, hey, I wrote this, can you help me make it better, is embedded with this assumption, which we also have in our broader culture, that a certain kind of, like, corporate, white-collar, you know, white-guy-in-the-cubicle-next-door kind of language is the best kind of language to use.

I write in the book about how my dad has this habit of, like, writing me emails and then having ChatGPT edit them and then sending me the edited version. And he wrote this perfectly well written, well reasoned email to me at one point in which he uses this phrase. You know, he says something like: you have to talk to people to understand where they're coming from.

Speaker 3

Then only can you.

Speaker 2

Can you write well about their experiences, something like that. The ChatGPT version inverted the "then" and the "only." So ChatGPT changed it to: you have to talk, you have to spend time with people and understand their experiences; only then can you understand what they went through. And the interesting thing about that to me is that "then only" is a very common and very correct construction in Indian English, and my dad's from India. "Only then" is American English. And so ChatGPT is, like, purportedly improving my dad's English, but what it's actually doing is just turning my dad's English into American English. And there's this, like, value judgment embedded in that, that American English is a better English, right? And so I think when we turn to these kinds of products and think that they're improving our self-expression, like, that's not actually what they are designed to do. They're designed to, like, use a certain kind of language, and so, yes, they can turn our language into that. But of course, like, I both question the usefulness of that in self-expression, and I worry that if there's a future in which, like, we all start to talk like that, it's going to be culturally problematic, politically problematic. There are all kinds of issues that arise if that happens.

Speaker 1

I want to go to another, I think very important, piece of your book, which, you know, has to do with the handwringing over AI taking the jobs of creative workers. To what extent do large language models threaten our current definition of what a novel is? And do you think it's important that people like yourself, people who write both fiction and nonfiction, consider the way in which large language models might affect not only writers, I think, but more importantly readers, and what readers expect from the written word?

Speaker 2

Yeah, I mean, I love that question, because I think a lot of scholars of the novel would argue that what makes a novel a novel is its novelty. And typically, like, I think, as a culture, historically, what we've celebrated in the novel is finding new ways to express ourselves as human beings. So I think AI has come along at a moment in which that has been complicated by commercial changes to, and commercial pressures on, the book, the novel, and other forms of literature, where the publishing industry has become more and more consolidated and dominated by just a few companies that do, you know, want to turn a profit, and because of that are increasingly publishing things that

Speaker 3

Are going to do well on Amazon.

Speaker 2

And they're finding that the things that do well on Amazon are books that are not particularly unconventional, that do, like, follow certain well-worn paths. Like, if the goal of a book isn't to find some new way of expressing something, but rather to, like, follow a formula, that is something that I think a large language model could be trained to do.

Speaker 3

That said, I think all kinds.

Speaker 2

Of readers, including readers of the sort of genre books like romance and fantasy that are thought of as following certain genres, have a lot of room for experimentation. Like they want to see something new, they want to see something human feeling in what they're reading. They want to know that there's an actual human being behind the text.

And so I think I think like the future of books is really going to be determined in some ways by readers more than writers, because they're the ones who buy the books and create the market.

Speaker 1

And so, to sort of answer the question about how one exercises autonomy in an age of less and less agency, just because we're all sort of trapped in this bubble of technology that's been created around us, that we now have to use: it's about kind of insisting on the type of stories, information, images, movies, TV shows, poetry, whatever it may be, insisting that it's human-centered and human-created. Like, that seems to be where the autonomy lies.

Speaker 2

Yeah, I mean the thing that I find really exciting is like, we as human beings, as communities, can create whatever future we want, right Like, there are certain futures that big technology companies are invested in moving us toward, but we can be invested in something totally different. In fact, we can be invested in something that's in resistance to what they want for our.

Speaker 3

Future, and we can build that.

Speaker 2

And Yeah, I think it has partly to do with the things we choose to engage with, the things we choose to spend our time.

Speaker 3

On, how we.

Speaker 2

Choose to connect with friends, with family members, with community members, right Like, all of those things are important.

Speaker 3

We can also think about our choice of products.

Speaker 2

Right. An example I like to give, because it's an easy one to wrap our heads around, is Wikipedia.

Speaker 3

Wikipedia is the seventh most.

Speaker 2

Visited website in the world, and it's run by a foundation It doesn't cost that much to run it, relatively speaking, and it's built on volunteer labor. But that volunteer laborer I don't think of as exploited labor, because everybody is sort of coming together to provide this communal service that everyone can benefit from, and nobody is trying to make a buck off of it. What I mean is there is no corporation behind Wikipedia that is trying to use Wikipedia as a tool tom.

Speaker 3

As wealth to a mass power.

Speaker 2

And so there are lots of different kinds of entities like that building technologies that we can think of as potential models. We can put our funds together, as communities, as individuals, as workers, as users.

Speaker 3

We can build things.

Speaker 2

In new ways and not have them be bound up in you know, technological capitalism in the way that our dominant technologies are right now.

Speaker 1

Absolutely. And I think one of the things that I find, I don't know, I guess, so deeply disconcerting is this idea that publishers are now allowing for the training of large language models on their pre-existing content. I think, again, you know: where does autonomy factor in? It's now, like, a new point of contention for both union workers and, well, mostly union workers, actually, to say: to what extent do I get to participate in the way in which my company does business?

Speaker 2

Yeah. I think that's something that's really hard about this, because, you know, I hear sometimes from people who are like: I am a teacher and I don't want to use AI, but my district says that we should. Or, more commonly: I work at a company, I have questions about AI, or concerns, but my boss says that I need to use it.

Speaker 3

Right, What then do you do? And listen?

Speaker 2

I think there are perfectly legitimate uses of these products, of products that sort of fall under the AI umbrella, by the way. But for those who are interested in resisting use of these products, who happen to get paid by institutions that say, no, you have to use these products, like, that puts you in a pretty tight spot. I think autonomy is very bound up in knowledge, right? I mean, I write in my book a lot about, like, these gaps in knowledge between big technology companies and us.

Speaker 3

But one meaningful thing that we.

Speaker 2

Can do is collect information about how these companies actually function as ourselves, questions about like what these companies are promising with their products, what they're actually delivering what we're delivering to them, and how they might make money from that,

how that might help them a mass more power. Right, Just to sort of like ask ourselves questions and understand the dynamics that are underlying our use of these products, I think is a really valuable thing to do because I think you know, for many of us, once we understand these dynamics, we feel that we're in a better position to enact individual or collective resistance.

Speaker 1

One of the things that I thought was so interesting about your book is, like, in a certain way, it's sort of a self-help guide for people who are online, right? In that, like, you're asking the reader to think about the history of themselves online, because you're considering the history of yourself online. And so, what are the most profound ways in which the Internet, and the kind of ensuing AI-powered Internet, has changed your life and your idea of selfhood?

Speaker 3

Yeah, I mean.

Speaker 2

These products have given me sites of self-expression, of communication and communion with other people, have allowed me to gain more information about the world, about myself, have made

Speaker 3

My life more convenient. All of these things are true.

Speaker 2

And with each of those things that I just mentioned, there is a significant cost, and that cost is really hard to see. And so I think experiencing life with these products, but, I think, even more importantly, just, like, writing it all down in this book, has helped me to clarify for myself just how bound up the things that I get out of these products are with the exploitation of myself and of all of us and our planet. And the reason I wanted to write about this wasn't because I thought anybody should care particularly about my own experience. It was that I know that other people are like me. I know that there are other people who know all the problems with Amazon and still use Amazon, who know about the problems with Google and still use Google. And I wanted to talk really frankly about my own complicity as a way to kind of, like, open this window for all of us, for readers, to have a conversation about our collective complicity and figure out what to do about it. Because I don't think I come out of the book with an answer to the question: now, what should we do about it? In part because I don't think that I want to be this sort of, like, single authority figure, like a kind of, you know, twin to a Sam Altman, right, who has so much conviction about the future. I don't want to be standing here saying: here is the one single future we should move toward, and here are the five steps we should take to get there. I want us all to be interested in our own complicity.

Speaker 1

Thank you so much for talking to me. And I just, I think your book is such an interesting exploration of digital life, and one that, I think, if someone isn't thinking about it, they're not thinking enough.

Speaker 3

Thank you so much for having me. This was a really interesting conversation.

Speaker 1

For Tech Stuff, I'm Kara Price. This episode was produced by Eliza Dennis and Adriana Tapia. It was executive produced by me, Oz Woloshyn, and Kate Osborne for Kaleidoscope, and Katrina Norvell for iHeart Podcasts. Jack Insley mixed this episode, and Kyle Murdoch wrote our theme song. Join us on Friday for The Week in Tech, when Oz and I will run through the tech headlines you may have missed. Please rate, review, and reach out to us at tech stuff podcast at gmail dot com.

Transcript source: Provided by creator in RSS feed: download file