Tyler Cowen: The Most Practical Conversation on AI Writing | How I Write

Mar 05, 2025 · 1 hr 9 min

Summary

Tyler Cowen discusses the practical applications of AI in writing and research, focusing on how to use AI tools effectively while maintaining a unique voice. He explores the impact of AI on book writing, the rise of video content, and the importance of decentralized AI networks. Cowen also shares insights on mentoring, learning about AI, and tactically using different AI tools like DeepSeek, Perplexity, and Gemini, and emphasizes the need for humans to adapt to an AI-driven world.

Episode description

Tyler Cowen, an economist and writer, talks with me about how AI is changing writing and research. He explains a practical approach to using AI tools while maintaining your own voice, and the ways he incorporates LLMs into his daily work. We talk about how people will be writing fewer books in the future and how he believes truly human writing will stand out among AI-generated content. Enjoy!

Redeem your free week of Lex at https://lex.page/perell

0:00 Intro
1:00 How Tyler Cowen uses AI every day
6:06 Hallucinations are rapidly declining now
9:28 Writing authentically with AI
11:34 AI for critiquing your work
17:29 Future of decentralized AI networks
20:34 How and why DeepSeek
22:21 How AI changes writing
24:04 Why there will be fewer books in an AI era
26:34 Video content will rise
28:17 Start writing with AI [LEX AD]
29:18 AI writing a personal biography
30:46 How you can tactically learn about AI
37:12 How AI can help you visualize information
38:31 Creating the perfect AI prompt
42:10 AI's impact in the classroom
46:56 Studying the Bible with AI
49:11 Secrets
50:10 Why social networks are more important now
51:48 Mastering AI prompting
53:58 Mentoring young people
54:49 Would you invest 4 years in a PhD
59:59 Perplexity replaces Google
01:01:09 The different AI tools, explained
01:05:43 The potential of large context windows
01:08:11 AI usage inside companies

I also made a website that helps you learn from the best writing of all time: https://writingexamples.com/

Hey! I’m David Perell and I’m a writer, teacher, and podcaster. I believe writing online is one of the biggest opportunities in the world today. For the first time in human history, everybody can freely share their ideas with a global audience. I seek to help as many people publish their writing online as possible.

Follow me:
Apple: https://podcasts.apple.com/us/podcast/how-i-write/id1700171470
Spotify: https://open.spotify.com/show/2DjMSboniFAeGA8v9NpoPv
X: https://x.com/david_perell

Learn more about your ad choices. Visit megaphone.fm/adchoices

Transcript

Look, there are people who know a lot about AI, but they don't know anything about writing. And there's people who know a lot about writing, but they don't know anything about AI. And Tyler Cowen is one of the very few people who's an expert in both. We talk about how is your career going to change if you're a writer? How is AI going to change writing in general? And how can you use AI to learn faster and think better? I want to set the ground rules for this.

There's a lot of conversations about utopian, dystopian visions of AI and the ethics of AI. I don't want to have this conversation. The thing that I really want to talk to you about is the practical implications of AI. How do you use it? How can you learn about using LLMs better? And then also what that means for writing in particular. And we need to be practical. If you want to make progress on thinking about the very big questions, simply using it, experimenting with it,

seeing what works and fails will get you much further than sitting on your duff and rereading Heidegger. So how are you using it every day in order to advance your skills? How are you actually learning about it? Most of all, I use AI when I read things. So I use it as the secondary literature. So I'm preparing a podcast with a British historian. She does the history of Richard II and Henry IV.

Now, in the old days, I would have ordered and paid for 20 to 30 books on those kings. Now, maybe I've ordered and paid for two or three books on those kings, but I'll keep on interrogating the best LLMs about the topics of her books. Helen Castor is the historian, and just keep on going. And I acquire the context much more quickly. It's pretty accurate.

Keep in mind, I'm the questioner, so if there's a modest degree of hallucination, it doesn't matter for me. I'm not giving the answers. And I can now do many more podcasts than I used to because I'm using AI for my prep. Okay, so do you think that this is about saving time or improving the quality of your prep? It's improving the quality of my reading, and I use it for my pleasure reading too. So I reread Shakespeare's Richard II, which is a wonderful play.

Again, in the old days, I would have piled up a lot of secondary literature. I'm also rereading Wuthering Heights. I just keep on asking the AI, what do you think of chapter two? Like what happened there? What are some puzzles? What should I ponder more? How does that connect to something else later in the book? And it just gets me thinking. It's the new secondary literature for me. So it's more fun. I learn more just by being more actively involved, asking a question.

I think that improves my epistemics of the reading. And then I think I'm smarter about the thing in the final analysis. But mostly I'm doing it because for me, it's fun and a pleasure to learn. What are the puzzles? That's an interesting question. Tell me about that.

Well, Shakespeare's full of puzzles, right? Camille Paglia once said, if you look at Hamlet too closely, most of it just doesn't make sense. I'm not sure that's true, but she's a very smart woman and she studied Shakespeare quite a bit. And if she says that... You know, Shakespeare is very hard to read. Any major, well-known, good Shakespeare play you can read five to ten times and still just be beginning to get a handle on it. So you can reread it an infinite number of times.

So it's very well suited for having a guide or companion who can talk you through it. And the major large language models, they've all read Shakespeare, right? It's in the public domain. And they seem to know the secondary literature quite well. You could just ask it a question. What are three or four major readings of what Hamlet meant in this speech? And it gives you an excellent answer.

So with that, it's getting a bunch of different perspectives. Do you put in people's names to say, I want a perspective from this scholar or this scholar, or is that not necessary anymore? You can do that. So like, what would Harold Bloom say? What would Goddard say?

It's not mainly what I do. I'm happy to randomize it a fair amount. But it works for that. And at the moment, this changes a lot over time, but O1 Pro is the single best model for doing this from OpenAI. That's the one you have to pay for. Claude is very good. DeepSeek is very useful and fun, not reliable in terms of hallucinations, but it should definitely be in your repertoire, so to speak.

Now, with O1 Pro, on average, it takes two to four minutes to get an answer. So do you have multiple O1 Pros open at the same time? I only have one open. Most of the questions I ask it, it takes me a minute or a little more than a minute. Maybe I'm more of a dolt. My questions are too simple. I recognize that. While I'm waiting, there's plenty else I can do. Check my email. Maybe I've heard from you. Check Twitter. Go back to reading Shakespeare. So the time costs...

To me, it's actually fun. I enjoy the suspense. It's not a problem. I multitask anyway, whether that's good or bad. I do it. There's no cost to me to wait. To confirm: I emailed you at 10:18 last night. You emailed me back at 10:19. And what did I say? Confirmed. I think I was probably waiting for an O1 Pro answer at that time. Okay. So...

What I've noticed is a lot of people think that hallucinations are a major issue. And you're like, they're actually not that big of a deal, especially in the context of interview prep. And I think that what other people... think is really important is like finding the true answer. And I think what you are trying to do much more often is kind of trying to find a model of how reality works or something like that. You want to broaden your horizons, see more perspectives.

First and most important point is anything you learn from any source, if you're going to use it, you've got to double check it. It could be a human, it could be Einstein, it could be the Encyclopedia Britannica. So the fact that you should double check what you learn from an AI is not an extra burden. And like every book I write, I then have my research assistant fact check every single thing I say. I say, act like you're out to destroy me.

And they do find things that need to be changed. You said it's on page 172; it's on page 173. That happens. So that's one issue. When you do podcasts and you're the interviewer, again, you're not giving the answers. You just need context. So hallucinations won't trip you up. But the biggest thing is simply that the very best models with reasoning hallucinate much, much less than what most people are used to. Hallucinations have gone down.

I would say by more than 10x in the last year. And we're talking now in February. They're due to go down a lot more over the year to come. So it's just not that big a problem for me. I was scrolling through your... reviews on Conversations with Tyler to prep for this. Yeah. And the best review was, Tyler's the master of out of left field questions.

How do you use AI to find out of left field things? Because a lot of them end up kind of homogenizing thought, but also there's the potential to really get out into wild and wacky places. I never ask the AI what question should I ask Person X. It's quite dull and bland if you do that. It's too normy. That's the worst question you can ask an AI from my point of view. You just want to ask it about the details of historical examples. So something like, well, Wycliffe.

What was special about his translation of the Bible, and how did his patrons feel about what he had done? Which is implicit in a lot of books. I haven't yet seen a book that spelt that out explicitly. It's an open question how much we even know about that. The AIs will give you some context. Just keep on asking specific questions, practical questions to get back to that point. And you will yourself come up with out of left field questions.

Give me a specific question. So tell me about that. Something you're curious about. The Peasants' Revolt of 1381. I'm starting to learn about that. I only know a small amount about it. I don't yet have a good question about the Peasants' Revolt. But I feel within the next two weeks, I will. And that will be an out of left field question. Okay. So writing with AI, are you using it to frame ideas, or where are you using it in the writing process?

I don't directly use AI for writing, typically. Now, sometimes I do in the following sense. If I'm writing on a legal issue, and I'm not a lawyer, I will ask O1 Pro for the relevant legal background to something I'm writing on. So I just wrote a column about declassifying classified documents. I don't know that law very well. I asked the AI for a lot of background on the topic. I didn't use what it gave me, but now I feel like I'm not an idiot.

on the topic and what I wanted to say. Whether or not it's correct, you can debate, but it's not what you would call flat out wrong. But I don't let it write for me. I want the writing to be my own. It's like my little baby, so to speak. I don't care: even when it's better than I am, I'm still not going to let it write for me. Also, a lot of the sources I write for wouldn't let me. I agree with that decision on their part, but even if they would let me, I wouldn't do it.

There's ways you can use AI that will smooth out your writing, on average make it easier to understand. I don't want to do that. I want to be like Tyler Cowen, this weirdo. Well, the whole fun of your writing is that it's a little bit cryptic and there's a lot of different layers going on.

And I read it, and then I try to say, what is Tyler explicitly saying? What is he trying to hint to me? And then also, you have these weird ways of writing sentences that are almost like parables that I kind of have to... puzzle through. And I don't want the AI messing with that. And it's not going to, because I won't let it. And if the world stops paying attention to me and only reads the AIs, I'm at peace with that. We're not at that point now. But if we get to that point...

I won't feel bad. I'll be fine. You mentioned the legal stuff. Do you use AI to check your work later on or no? Not that much, actually. Again, I think it can make your work better. But I want it to stay weird. I will use it to fact check things in areas I don't know. I wouldn't say I don't use it at all. There was one use where...

Agnes Callard a while ago suggested this. She said, run your writing through the AI and ask it, what is in here that some people are likely to find obnoxious? And explain to me in great detail what that is. And I did that, and it was right on target. You may or may not need that, but there was one part of something I'm writing. It was very obnoxious. I even pondered keeping it that way, but I decided to change it, and the AI pointed it out to me and explained why it was obnoxious.

and why I was being supercilious and condescending. I just thought, well, if the AI says that, there is some greater wisdom at work there. Yeah, I find it to be very good at telling me when something feels... callous or cold. It's like, you didn't really think about that, or it's harsh, or something like that. I know a lot of managers who, let's just say, have high tempers, and one of the ways that they're using AI is they'll

write sharp critiques of people. They'll put them in the AI. They'll say, hey, make it warm, clean it up. They'll copy and paste it. And they said that it's reduced conflict for them. Yeah. But again, I don't want to do that too much. And most of my writing, it's not managerial. If I wrote memos, I think I would do that a lot. I think it's extremely useful for many people. But for me, I'm mostly writing for just external audiences. And it still has to sound like me and sound like my thinking.

How about general critique? Are you using it for that, to critique your work? Sometimes, yeah. But I think in terms of my ability to index the arguments out there... I have some AI-like abilities, more than most humans do. And I feel I can do that pretty well myself. The way I think of my head is as having a kind of system of index cards. And there's a lot of index cards in there,

and I can flip through them, not at the speed of light, but faster than a normal person could think. So I'm able to flip through all the permutations in less than a second, and just see which combinations of arguments might apply to an argument I or someone else is making. When you're looking at how other people use AI and you're like, ah, you're using it wrong, what is the thing that they're doing wrong? They're asking it questions that are too general.

They're not willing to put in enough of their own time generating context. Now, maybe they don't have the time. If that's what's efficient for them, I mean, fine. But I think they end up not sufficiently impressed by the AI. because they're using it as a substitute for putting in their own time, which again, for them might be fine, but it's not what I want to do. I want to put in more and more of my time to learn and have it complement that learning.

And if you do that and keep on whacking it with queries and facts and questions and interpretations, you'll come away much more impressed than if you just ask it, oh, what does the rate of price inflation mean? Or, I'm interviewing Tyler tomorrow, what questions should I ask him? Then it's pretty, you know, mid. Is that the term people use now? Like mid is fine. Mid is called mid for a reason.

But at the end of the day, you will be asleep on the revolution occurring before our eyes, which is that it's getting smarter than we are. You'll just think it's a cheap way to achieve a lot of mid-tasks, which it is also. The problem is that it's a text window that makes it feel like a text message. So people use text-message lengths, when in particular the first context-setting question should be super long.

So one of the things that I'll do is I'll use voice dictation and I'll actually dictate it for a minute and a half, three minutes, get something very substantial. And my follow-up questions tend to be shorter, but my first one... tends to be extremely long. And that's why I use voice dictation. So I can just get it all out. And I find that ChatGPT is quite good at sorting what's really important.

Something a lot of people are doing, I haven't myself tried it yet, but I suspect it works very well: they say they're using O3 Mini to write the prompt for them, and then they ask the full model, and they get the prompt quickly. So just think of it as a stacked device, not a single box, but a set of interacting agents that in a sense are trying to evolve toward a market with multiple agents that talk to each other, correct each other, grade each other.
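[A minimal sketch of that stacked workflow, for readers who want to try it: it assumes the OpenAI Python SDK, and the model names, prompt wording, and variable names are illustrative placeholders, not anything Tyler or his sources specifically use.]

# A small, fast model drafts a detailed prompt; a stronger model answers it.
from openai import OpenAI

client = OpenAI()

rough_request = "Help me prep podcast questions on Wycliffe's Bible translation."

# Stage 1: ask the small model to expand a rough request into a rich prompt.
draft = client.chat.completions.create(
    model="o3-mini",  # placeholder for any cheap, quick model
    messages=[{
        "role": "user",
        "content": "Rewrite this rough request as a detailed, context-rich "
                   "prompt for a stronger model: " + rough_request,
    }],
)
expanded_prompt = draft.choices[0].message.content

# Stage 2: send the expanded prompt to the strongest model you have access to.
answer = client.chat.completions.create(
    model="o1",  # placeholder for your frontier model
    messages=[{"role": "user", "content": expanded_prompt}],
)
print(answer.choices[0].message.content)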

So view the AI as evolving toward a decentralized system of AIs. It's not there yet, but in the meantime, try to use it as if it were one. The way, like, for humans there's a republic of science, way smarter than Newton or Einstein or any one scientist. We're evolving toward that. We don't have it yet. But you're using AIs to bounce things off each other, so to speak, and have a dialogue where you're part of it.

I like to say there's three layers of knowing stuff about AI. Most people don't get to any of them, actually. Layer one is, are you working with the very best systems? Some of them cost money. So that's a yes or a no, but that's important. Second question is, do you have an innate understanding of how it is, through reinforcement learning and some other techniques, that they can improve themselves, basically ongoing, all the time?

A lot of people don't get that. They're impressed by what they see in the moment, but they don't understand the rate of improvement and why it's going to be so steady. And then the third question is, and this is fully speculative, but I believe in it very strongly: do you have an understanding of how much better AIs will be as they evolve their own markets, their own institutions of scientific inquiry, their own ways of grading each other, self-correction, dealing with each other,

and become, as I said before, this republic of science? The way humans did it, how much did it advance human science or literary criticism to build those institutions? Immensely. That's where most of the value-add is. So AIs, I believe, will do that. I think there are private projects now starting to do that. It's not a thing out there you can access. And when you understand all those three levels...

It's like, oh my goodness, this is just a huge thing. Yeah. And most people aren't even at level one. And if we take things like reinforcement learning, synthetic data, stuff like that. How important is the technical understanding of those things in order to answer that question well? I don't think you need the technical understanding. If you work with them...

and are able to read what's equivalent to a popular science account of how AI works. Now, the people with the technical understanding, of course, they understand it much better. But there's plenty of other processes, like, how did cars get safer from 1970 until today? I have no deep technical understanding of that, but I could tell you a bunch of things. I don't get flat tires anymore. I have a side airbag.

I couldn't explain to you how the side airbag works, but I'm not an idiot. And it's a bit like that. Yeah, a car engineer understands it better, but you can have a handle on what's going on. So then let's go to the third question, which is... Do you have a vision of the future of AIs becoming a decentralized network, interacting with each other and humans, probably with markets? I don't know if we would call it peer review.

But it's decentralized, again, a republic of science, where it moves forward by mobilizing decentralized knowledge and working together the way our civilization does. And that's fully speculative, but it would just seem so strange to me if there were nothing there as another source of progress. Personally, mine's very jagged. Like, I have a clear sense of how it would show up in management, a clear sense of how it might show up in writing, which I think we should explore together. Right.

But it is very jagged and it turns my brain into a science fiction novel. Absolutely. And it's scary because we all wonder, well, how do I fit into this new world? I don't think the answer has to be negative. But there's no answer you can give with certainty. And we're not used to that. So the world I've lived in has not changed that much since I was born. But that's about to change.

You could say it's changed already, but it's not fully instantiated in most of the things we do. So for writers, you're an up and coming writer, you're getting started. I mean, how do you respond to this? I feel a sense of dejection, and I'm at the cutting edge of these things. I get the benefit of feeling excited about this. And I also feel completely dejected in terms of having a skill that I've developed in terms of writing that now

I feel like AI can do a lot better than me. And then also in terms of teaching, developing frameworks and ideas that I feel like has become obsolete. I can only imagine if you're not using these tools, how deflating it must be. Some humans will become masters of the tools. How much writing they still will be doing, at what pace, I'm not sure. But it's a major psychological adjustment, and a lot of people who thought they would be writers...

I will predict they won't be, just like the number of jobs for certain types of computer programmers is plummeting. Yeah. And that will come to other areas, by no means all areas, but a lot of kinds of writing are among them. Something like generic corporate writing is the first to go, from what I can see. Writing a biography of a person, the AI cannot really do. It may help you a lot,

but it can't go out and interview the high school teacher and so on. Writing memoirs, of course, the AI cannot do. Writing that is more subjective, more personal, I think the AI already can do very well, especially DeepSeek. We can get to that. But I'm not sure readers want the better product from the AI. They may want it from a human being. I feel that I do.

So if I read a brilliant memoir written by an AI, but it corresponded to no actual life, I would read a few of those, but at some point I'd get bored, and I don't think I would keep on reading them, even if they were better than human memoirs on average. You've mentioned DeepSeek twice. How is the shape or personality of DeepSeek different from the others? DeepSeek is from China, as many of you know.

It is less manipulated to sound a certain way. It is less bland. I would say it's better at poetry, better at emotion, more romantic, more uneven. It does hallucinate more. So be very careful if you're using DeepSeek. But if you want, say, a glorious description of what it is like to eat a mofongo, which is a Puerto Rican dish, which I enjoy, I go to DeepSeek for that.

Like, I want the, you know, bring on the hallucination. It's just more creative. And it's more censored on a bunch of things that you could predict, knowing it comes from China. But overall, it's freer. It's a free spirit of an LLM. But don't use it for your research, not mainly. Do you think that Deep Research will get to a point where you can trust it, at least to the level of Wikipedia, in terms of fact quality?

I'm teaching a PhD-level class now, and I'm teaching them Ricardo's theory of rent. And I looked at Wikipedia on Ricardo's theory of rent, and I Googled to a number of other websites, and then I asked Deep Research: write a 10-page article for me on Ricardo's theory of rent. And I fleshed out the prompt. I told it it's for my PhD students, some other things I wanted. In my view, what it did is by far the best thing out there. It's tailor-made to what I wanted.

I'm not saying it's best for everyone, but it's already beating not just Wikipedia, but any other source I could find using Google. And it's due to improve, right? As they say, this is the worst it'll ever be. That's the second level of the lesson, yeah. Exactly. So how does all this influence what you're choosing to write? Should you write books, articles? How is all this shaping that?

It's affected my writing very significantly already. So there's two quite different effects at work. One is simply that AI is progressing quite rapidly, and that changes the world quite rapidly. So if you're writing a book... It takes, say, two years to write and a year and a half to come out. Maybe there's some other delays. We're talking four years. There's a lot of topics you just can't write on. Like you can't write a book on AI. It's crazy.

You could write a very good book, The Early History of AI, which is frozen in historical time. Now, maybe the AI will write that book better than you could, but at least you could consider doing it. So what I call predictive books, books about the near future, they don't make sense anymore. You've got to cover those by writing on this ultra-high time frequency every day, every week, something like Substack blogging, Twitter.

That's a big change. So some of the recent stuff I've written is about the more distant past that is frozen in history. But the other question is, what can the AI soon enough write better than you can? And it may not be that the AI writes a book. Don't fixate on books. Maybe the AIs know better than to write books. They're just like a box. And you can ask the box any question you might read in the book. That's what I suspect is the case.

Not that there'll be all these books written by AIs. It's inefficient. Like, why this single package for everyone? Just give people the box. So it's a question box. And what you're writing had... better be more interesting than the question box. So the book I've started writing recently, we were discussing it before filming started, it will be called Mentors. It's about mentoring and also being a mentee.

And first, I think the extant literature is weak enough that the AI maybe can't do a great job. But even if the AI could do as good a job or better than I can, I don't think people want to read that book from an AI. I think they want to read it from a human who has been a mentor and a mentee, just like I don't want to read all these phony memoirs from AIs, even if some of them are good. And that's a human book

that only a human can truly write sincerely and credibly. So I'm going to write fewer books in the future because of this. I may not write any more books after this book on mentors. A lot of books I would have written, they're now obsolete. I feel I'm wise enough to recognize that. And I'm not going to write less. I'm going to do more of this super high-frequency writing, much of it about AI.

With the mentoring, the other thing is that you're going to have a very opinionated perspective on mentoring that is far... different from what the average person would think. So if we meet, you know, Jane on the street, it's probably very different from how you're going to think about this, if the book that you wrote on talent is an indicator. And I have personal anecdotes. One of them concerns you.

And those anecdotes are about real people, which I think readers want. We'll see. Maybe the readers are fine with the AI book on mentoring, but my bet is no, that the truly human books will stand out all the more. And a lot of the rest will be AI slop, human slop. Just a lot of it will look like human slop all of a sudden. So when you say the truly human books, what do you mean? Memoir,

biography, where you need to do things like fieldwork and interviewing, books based on personal experience, such as a book on mentoring. It could be relatively few categories. I'm sure there's categories I haven't thought of yet. Your ideas are welcome. I'd love to keep on writing as much as I can. But I'm not going to get sentimental about it. I'm very willing to be cold-blooded and just say, nope, Tyler.

When it comes to that, you're obsolete. When it comes to answering questions about economics and economic models, right now, it's better than I am. Not on every question. Not in every area, but mostly it's better. And I recognize that. And I will reallocate my energies accordingly. So then does that mean that the YouTube channel is in a better spot in terms of...

persisting, because we get a sense of your personality, we see the visuals? Or do you feel like even that it's not going to be as useful? I'm doing more podcasting, which is also YouTube, just as we're recording this. The podcasts I do, we're taking greater care to make sure there's always a video of them. So yes, I think video will be more important for a while. Now, what's the rate of AI progress in video? I have a less clear sense of that. I know less about it.

But I think a lot of video will be like the memoir, that people will want humans and not fake humans, even if the fake Tyler looks just like me. And that seems to me two years away. The fake Tyler voice already is indistinguishable, for me. I just did a video where I said something wrong in the video, and we were like, ah, we've got to go back and re-record it. And so what we did is we took my voice.

And we went in, changed the text and the voice. It came out. And you can't tell the difference. It's an artificial voice for that little section. Can't tell. I played the Tyler Cowen voice for my sister. Not just one word, but a whole paragraph. She couldn't tell. She was stunned when I told her that was AI. Yeah. Well, the thing that's going to happen next is our voices.

The cadence will now work in Spanish or in Hindi or in Italian or something like that. And now YouTube is rolling out dubbing in every single language. And so someone will be able to press play on this video in Italian. We'll be speaking Italian, and they'll be able to hear it in our styles. And my accent in Italian, which is a thing that exists. Scusi. We're talking about writing with AI here, and maybe you're thinking, okay, okay, I've been against this AI thing, but now...

Fine. I give in. Where do you start? Well, I recommend a tool called Lex. What I love about Lex is you go in there and it's really fun. Like it's super well designed. The colors, the formatting, it's all very intuitive.

And I find that I just have more fun when I'm writing with Lex, because I get instant feedback. If I get stuck, I can ask it to interview me. I can say, hey, this is all the writing I've done, this is sort of the context of how I like to write, and a little bit about what I'm really going for.

And because of that, I just feel like I have a creative collaborator. And then the other thing is structuring my ideas. Lex is like an 80th percentile editor. It's pretty good. Not like the best editor in the entire world. But here's the thing: it's super fast, it'll work for you 24/7, and it's pretty darn affordable. So if you want to start writing with AI, go to lex.page/perell. Here's another thing I'm doing with writing. So...

Some people have told me like I should write an autobiography. I've never wanted to do that. It seems too narcissistic. I wouldn't feel the right kind of motivation. I don't think it would sell that many copies. A bunch of reasons not to do it. But it occurred to me, I can write an autobiography quite simply. There's just a lot of me out there. Podcasts, blogs, essays, books. The AIs know most of it. I will continue to open source as much as I can. So the AI can write...

my biography, but there's parts missing. There's no podcast where I talk about the three, four years when I lived in Fall River, Massachusetts. I was four to seven. I don't think it's that interesting. But I'll write maybe two blog posts about it, just kind of as filler. So when someone goes to their AI three years from now, oh, I'd like to read a Tyler Cowen biography, that's in there. So I'm thinking through, I think it's maybe only 20 blog posts.

It's not much that is needed. It's sort of fun for me to be nostalgic. I'm going to put those online, and then it will be possible for the advanced AIs of the near future to write a very good Tyler Cowen biography. I don't know how many people want it, but it's so low cost. Why shouldn't I make the Tyler Cowen biography possible? So that's the thing you can do that obviously you couldn't have done before.

I should have asked this earlier, so I'm going to ask it now. But how are you tactically learning about this? And there's a specific constraint that you have, which is you're very high on curiosity and... informational fluency, but very low on sort of technical chops. And you're also in your 60s. So there's...

63, to be clear. 63. That's what I thought it was. The lower end, at least. Okay, so you're 63. And so what are you doing in terms of staying at the cutting edge? Because here's what I find. I kind of will get boxed in, in terms of not realizing the potential of what's out there. And I need to go have conversations and say, hey, show me exactly what you're doing. And actually the biggest constraint for me in terms of improving with AI

is, oh, I didn't realize that you could do that. I think so far, at least, it's been a big advantage for me that I'm not a technical person in the AI field. So an example, I wrote a book that came out now 11 years ago. It's called Average is Over. And it says the future will be this age of incredible advances in AI, and it will change our lives in these different ways. And that book has turned out, I think, to be quite true. And I was then not...

an expert in technical AI whatsoever, as I'm not now. But I knew a lot about AI from chess. And I had this intuitive sense from my own life as a chess player when I was young, that chess is really mainly not calculation, it's intuition. And it's very difficult intuitions. And AI and chess some while ago became very, very strong. And I just had this core intuitive belief that if AI can get that good at chess, it can get very good at many other things.

and all the reasons people would give for why it can't happen. It's not that I didn't know them. I had read them. But they didn't register vividly in my mind. And I stuck with my core intuition. And if you're not focused on the technical side, you will see other things more clearly. Now, maybe over time, some of my future intuitions will be quite wrong. I readily admit that.

There are ways in which it can be an advantage. You just focus on what is this actually good for, and not am I impressed by all the neat bells and whistles on this advance. With AI, you've got to be super practical in how you address it. Don't spend too much time on the abstract; work with it, use it, be self-critical about what you're doing with it, and be willing to learn from other people. If we stripped out AI like a Jenga block now...

In what ways would you be sad or devastated? And in what ways would you be like, oh, that's fine, I'm just going to go back to whatever? When you say stripped it out, you mean shut it down. It's just all of a sudden it doesn't exist anymore. What would you feel like, oh, I miss that, I love that about the AIs? Well, I would just learn much less. I think for people somewhat younger than I am...

Rather than living to 84, they will live to 97 or whatever is the time when on average you die of old age. That is significant for them. I think it's less likely I see those gains. Maybe not impossible, but I would bet against it. So that would be a significant gain for humanity. Other areas of the sciences, they'll advance much more rapidly. Something like green energy, quality of batteries, our ability to terraform the Earth.

All of that would be quite stunted compared to the world where AI progresses. But I think, like the printing press, AI, even in its most positive forms, has the potential to bring a lot of disruption and psychological disorientation, and just upsetting the balance of interest groups and social status. And those disruptions can go very badly. And that worries me. It's not a thing you can just...

manage the way you manage a small company. So humanity is faced with that. We're faced with some version of that anyway, but it seems to me that's quite accelerated. And I think people, I don't want to quite say they should be nervous, but objectively speaking, being nervous is the correct point of view.

When you say that you write for the AIs, I mean, I get what you mean. You're saying, I want to write it because the AIs will be a reader of what I'm saying. And I can, by writing a lot, basically convince them that... I'm a legitimate source and I'm worth referencing and all that. But tactically, does the writing style or the substance of what you produce, does it change at all? It changes a bit.

So I like to think the AIs will have a better model of me than most other humans. So I've done many hundreds of podcasts, blogged every day for now like 22 years. The blogging, I feel, is some genuine version of me. It's not edited by someone else. There's a lot. I have like 16, 17 books, a lot of other output out there. There's people with more, but I'm trying to think, well, what does the AI still need to know about me?

So it's a kind of intellectual immortality I'm close to already having achieved. I'm not sure how much I value that. I'm not hung up on it, but it's like, yeah, it's like, let's just do this for fun. And it's so cheap to do the final mile on that, like write those two blog posts about Fall River and what the name of our dog was and what I thought of our neighbors and why there were so many Syrians who lived in the neighborhood, that kind of thing. Like, what's the harm in that?

What's the name of your dog now? Spinoza. Spinoza. I was thinking it was Ricardo. I knew that it was an intellectual. The first dog was named Zero. My father named it Zero, and we had this dog in Fall River. Spinoza. But when you write for the AIs, for one thing, they're your most sympathetic reader. It's one reason to write for them. They're your best informed reader. You don't need to give them much background context. It's not like writing a prompt.

So if I write something and don't explain all the filled-in pieces, the AI knows. So I would say at the margin, I'm less inclined to fill in those blanks for people, because the AI doesn't need them. It's already read everything else. So I'm not saying everyone should make that move. Like, you will maybe lose some human audience, or they'll understand you less well. But at least it's a trade-off worth considering.

One of the things that you haven't spoken about that has been fundamental for me in terms of using the AIs is visualizing information. So I was in Buenos Aires and I wanted to get a sense of the immigration patterns. I had it make a table for me of the different cities in Italy that people had come from and how it changed over time. And something about my ability to read that, it just wasn't.

It wasn't working. It wasn't computing. And then I visualized it. And I ask it to make tables and to compare and contrast all the time. Like, the amount of information that I'm inputting like that is at least up 10x. That's great. I'm much more text-based than you are, but I know that works and many people do it. And wonderful. And the other thing that I think is worth getting good at is, if you can get good enough data that you can trust,

using ChatGPT is better for tables, but Claude is really good for graphs. And there's certain graphs that really help you make an argument well. And just being able to take an argument from text into a visual is a way that you can be a lot more effective as a writer in terms of making a point quickly. I think in the next two years, we'll see incredible further improvements in graphing. And graphing will be perfect.

Sometimes I have trouble with its graphing right now, but I know it's just a matter of time, and not much time. So, we just landed in DC, and I struggle to kind of understand the... the cultural vibe of this place. What do people do all day? What are the kinds of people who are here? I find DC to be this strange city that I don't quite have a good way to describe.

Sort of my project for the rest of my time here is to figure out, to find a good answer to that question. And so how much do you think about, now, as you're traveling, talking to people, going out, first-person experiences, versus books like normal, or using AI to solve a question like this? To solve that question, I think AI can help you quite a bit, but you'll need a very sophisticated, well-thought-out prompt. I use it in a much kind of stupider way.

So I took a trip with my sister to northern Colombia. She's a bird watcher. Took photos of a bird, a plant, and you just ask in the app, well, what's that? And it tells you. And you can ask it about details. Or take a photo of a menu. I do read Spanish, but a while back I was in a Paraguayan restaurant. I've never been to Paraguay. Some of the menu was in Guarani, not Spanish. I photographed the menu. I asked...

GPT, what should I order and why? It gave me answers. I ordered those dishes. I'll never know the alternative. It seemed to work, and I knew what I was doing all of a sudden. So I just use it for very literal, concrete objectives. Not even so much theorizing about the place. Like, hey, what's this? Help. You walk by a building. When was it built? Snap a photo. Ask it. It knows. But planning an itinerary.

I will likely be in northern Ghana in August. And I asked it, there's two places in northern Ghana I want to go. Well, if I want an itinerary, how do I get from one to the other and how long will it take? Now that I'll have to double check, and I'll triple check it by trying to do it. It gave me what seems to be an awesome answer in, I don't know, 10 seconds.
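[A minimal sketch of that photograph-and-ask workflow: it assumes the OpenAI Python SDK and a vision-capable model; the model name, file name, and question are illustrative placeholders.]

# Photograph a menu, then ask a vision-capable model what to order.
import base64
from openai import OpenAI

client = OpenAI()

# Encode the photo so it can be sent inline with the question.
with open("menu.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder for any vision-capable model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What should I order and why?"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)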

Where's the AI getting that information? Because it seems like the travel information online is so uniquely bad that I've been very surprised to hear you say that actually the AI is giving you really good travel information. One of its best uses is for travel. When my wife and I went to Kerala, India in December, she used it every day about different things to do or see. It's very good. Places to eat, dishes to order.

And I'm not putting in super smart prompts. I might add a sentence or two saying, like, oh, I'm a serious consumer of food, I want something that, you know, a top-rated food critic might recommend. But very simple adds to the prompt. And it just blew me away how good it was. So I like that piece. And I have a sentence from that piece that I think really shows

the kind of writing that'll persist. And here it is, a little story, and it really connects us with you. This is by me? By you. Okay. By you, Tyler. My wife and I just ate a wonderful meal on a river houseboat in Kerala, and it was perhaps the best lobster I've ever had. And for her, the best lentils. My chef was simply a member of the boat crew who cooked what I had bought from a local fisherman.

And the reason that I saved that sentence is it's a quick story. It really helps us connect with you. And it was something in that piece, about how India has the best food in the world, that was not something that the AIs could have given. And I read that and I was like, ha, this is the sort of writing that'll persist. Writers will need to personalize more. I would say they already do. 100%. Tell me about AI in the classroom. How are you using it? And what are your students

not understanding. For my PhD class that I mentioned before, there is no assigned textbook. That saves them some money, but they have to subscribe to one of the better AI services. That costs them some money, but it's much less than what the text would cost. Then the main grade is based on a paper. They have to write a paper. They're required to use the AI in some fashion.

They're required to report what they did. But I just tell them, your goal is to make the paper as good as possible. How much of it is yours? It's all yours, from my point of view. Just like when you write with pen and paper or... word processing, that's also all yours. But I want you to tell me what you did, in part because I want to learn from what they did. Right. So I've done this in the past. I had a law class the year before,

where they had three papers. This was less radical. One of the three had to be with AI. The other two had to be them. And it's worked very well so far. And the students feel they learn a lot. Other classes tell them like that's cheating. But we all know there's some forthcoming equilibrium where you need to be able to do that, especially as a lawyer, I would say, but in most walks of life. So why not teach it now?

What's the constraint? Them not wanting to use AI or their lack of knowledge about how to use it well? Most of them seem to want to use it. Now, since I'm telling them to use it, maybe some of them are just going along with what I'm saying. I think they genuinely are curious. A minority of them already know how to use it well. Most of them don't. Most of them don't know the importance of using the better models. And they want to learn.

It's been a pretty positive experience, but no one has taught them. And every year I have my law class, my econ class. Has anyone else been teaching you how to do this? I'll ask them. Silence. And that to me is a scandal. This is academia. We should be at the forefront. In fact, the students, the ones who are cheating, they know way more than the professors. Now, I don't condone the cheating when it's not allowed, but I think that whole norm needs to shift and, in fact, collapse.

Homework needs to change: more oral exams, proctored in-person exams, and so on. We need to change that now. It's very striking: from all the questions that I've asked you, the one-line takeaway from this that is far superior to everything else I've learned is just, use the best models, people. That's right. Like, you're completely hopeless if you're not using O1 Pro and these cutting-edge models.

And I've come to realize over the course of this conversation so far just how big the variance is. And if you're not at the cutting edge, you're... completely missing how fast things are improving; not just the speed, but the actual vectors and ways that the improvement is actually happening. Strong yes to all of that. Noting that what is currently the best model, as we are speaking, is $200 a month. Right.

Unless you're very poor and have no prospects, I think that's a good investment for many, many more people than realize it. And over time... The free models will be as good as that model, and then there'll be a new, better model that costs more. When will the free model be good enough? If ever, I don't know. But I think there's high returns to staying on the frontier for at least for a while.

It may asymptote out where, oh, maybe in four and a half years, the free model is good enough. And the fact that the paid model can do Einstein, I don't need that. We may get to that point, but we're not there now. And how is what it means to be a research-based academic changing? Well, the sad news, it's not changing at all. It needs to change. Now, right now, the AIs are not better.

than good academics at producing papers. So it feels like there's not a threat, but once you understand the rate of improvement, in my opinion they will be better. Not at writing every aspect of the paper or choosing the right question, but at doing much of the work, I think they'll be better than humans in less than two years. And my academic sector is not ready for that.

There'll be differential rates of adoption. Some people will be remarkably prolific and high quality and will sort of know what's going on. I'm not sure how transparent they'll all be or have to be, but it will change things a great deal. And you'll be able to produce, if you know what you're doing, very good work very quickly. So the number one way I use AI is to study the Bible, which is sort of my big intellectual project. And I think it's great for the Bible. It is so good.

It is so good. And first of all... That's the will of God, right? It's the will of God. But there's also structural things going on: there's a lot of old writing that's in the public domain, where things can be very easily verified, which I think contributes to the AIs being uniquely good here. The reasoning models in particular, that's right. And it's text-based, if it's the Abrahamic religions. So a lot going for it.

Just like it's especially good at economics. It's really good. But here's the thing. Where it's good is if I have a very specific question, it's very helpful. I love the way it helps me with cross-references, where I can see how the book of Hebrews relates to the book of Job. I would never find that on my own. Also for translating Hebrew words or Greek words, that saves me so much time. See, it's secondary literature that it's replacing.

So that's exactly right. So what I'm not doing anymore is I don't read study Bibles. But where it still is lacking is if I speak to somebody who really knows it well themselves. Their ability to ask the one question that really matters, the one takeaway is completely next level. And the AIs just aren't even close to that. I agree.

Adam Brown said something similar on the Dwarkesh podcast. He does physics. He said, you'll still do better calling up, like, the three or four world's top experts on a physics question than you will with AI. But that's the level you have to get to for you to do better. But here's the thing, at least in my experience: what the experts do is they

get to the absolute core, one or two sentences. And it's not something of volume or big explanation. It's not quite in the literature either. Say more. They've maybe learned it through seminars, or by knowing a lot of people, or by having this... life-rich context in the area that maybe the AI cannot get very quickly or readily. That could be good for your mentorship book as well, because that is what a mentor can provide that's unique.

What do you say? It's context. That is what's scarce. And secrets. Humans know secrets. Maybe AIs can be fed secrets, but they don't in general know secrets. Now, a human only knows so many secrets. That's partly where decentralization comes in. How AIs will handle secrets, I think, is a big and interesting question. It's somewhat underdiscussed. It seems like, in the Peter Thiel definition of a secret, which is something you know about the world that other people don't know,

there's a chance that those go up because now there's less of an incentive almost to put things in the public domain because they can spread so much faster. So there might be more of an incentive to hoard information. That's right. It will be worth more to you because the public information you used to hold now is worth very little.

So the future, the AI rich future is also a world replete with secrets. Secrets are super important. Gossip is very emotionally and practically potent. It's another part of this new structure we're not ready for. Okay, we got to talk more about this. How good are you with secrets, right? Are you good at trading secrets? If you are, you're a lot more productive than you used to be.

You ever have these conventions with your closer friends? Like, I'll tell you this secret. It's not quite a deal, but it's understood that they'll tell you that secret in return, maybe over time. That's a more valuable skill now. Increasing returns to social networks. That's right. So social networks become way more important as well. Traveling and meeting people becomes way more important. I'm doing much more of it.

getting back to how my life is changing. It's a striking paradox, right? Because on one hand, you have access to information that is so much better, that is now personalized for you; you can get the exact essay that you want. So if you just heard that, you'd say, oh, great, I'm just going to spend way more time reading all those things. But actually, there's another element to this, which is everyone has that. Therefore, I'm going to do exactly the opposite. That's right.

And if you want to get things done, you'll need to mobilize the resources. The AI per se can't lend you money, not yet at least. And you need humans, whether it's a venture capitalist or a philanthropist or whatever, someone who hires you. Your network of humans is not just like 20% more valuable. It could be 50x more valuable because the most productive people could be 50x, 5,000x more impactful because they have this free army.

of highly intelligent servants at their disposal. But to mobilize their projects, they'll need help from others. So networking, again, the value has gone up a lot more. than people realize, even when people say, oh, I see the value of the network has gone up. Do you have any simple rules for prompting? Like if you were teaching somebody, hey, here's how you should think about prompting, what are the things that you would tell them?

Put humans out of your mind. Imagine yourself either speaking to an alien or maybe a non-human animal. Just feel a need to be more literal. If you're willing to do it, I don't think it's that hard. But to actually want to put yourself in that state of mind, it does require some sort of emotional leap that, for reasons of inertia, not everyone seems willing to make. But it's not a cognitively difficult...

project to prompt well; it maybe is emotionally slightly challenging. And do you feel it's becoming more important or less important? Oh, that oscillates very rapidly. I would say with Deep Research, it's become much, much more important. Yeah. Because you need to get exactly what you want and not too much blah, blah, blah. And it still might give you high-quality something or other.

But if that 10-page report is not what you wanted, why'd you do it? So for a lot of basic queries, it's much less important. You just get a smart answer no matter what. But for some of the very best stuff... It's exponentially increasing in value to give it the right instructions. The thing that frustrates me is it seems to be a lot better to prompt one thing ten times than ten things one time.

And there's no way to actually put that into the LLM if I want you to do this question, this question, this question, this question. Because if you ask a really long query, it almost gets tired by the end of it. Like it needs to take a nap and answers 8, 9, 10 just won't be as good.

Often follow-ups, planned as follow-ups, you'll do better with those than with too long a prompt. And I'll tend to do that. If I'm struggling a bit with, well, what exactly goes in this prompt, I'll just start with the stupid version and then rely on my follow-ups. And I think that's worked pretty well for me. Again, it may vary on which model, which system, all these things are changing all the time, but at least keep that in mind as an option.
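[A minimal sketch of that planned-follow-ups pattern: it assumes the OpenAI Python SDK; the model name and questions are illustrative placeholders. The point is that each follow-up is sent with the full running history, so the model keeps building on its earlier answers instead of being hit with one overlong prompt.]

# Start with the "stupid version," then rely on planned follow-ups,
# resending the whole conversation each turn so context accumulates.
from openai import OpenAI

client = OpenAI()

questions = [
    "What was the Peasants' Revolt of 1381?",      # the simple opener
    "What do historians still disagree about?",    # planned follow-up
    "Which primary sources should I read first?",  # planned follow-up
]

history = []
for question in questions:
    history.append({"role": "user", "content": question})
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use the best model you have
        messages=history,
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print(answer)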

So when you're mentoring young people, what are you telling them to do? Well, when you're mentoring young people, I'm not sure it's about advice. You should be a certain way and hope some part of that is vivid to them and rubs off. Maybe the advice you tell them is useful to communicate your style, but it might be worthless as advice. But two pieces of general advice, with or without AI in the world, that I think are pretty good for almost everyone is get more and better mentors.

and work every day at improving the quality of your peer network. And those two things, I'd say they're more valuable in the AI-rich world, but they were always good advice. They're good advice for virtually everyone. They don't require you to know much about the person. Those are my two universal pieces of advice that I give pretty much all the time. And how much do you feel that career trajectories are changing, for example, to get really practical here?

Would you invest, what, four years in a PhD now, given what you have? I think we really don't know where a lot of these things are headed. I think investing in a PhD is much riskier, but there's also... some chance, depending who you are, that you become that person who's 5,000x more impactful because you command an army. Maybe you're not capable of that, and if you're not, maybe you shouldn't get the PhD.

But to tell people in a blanket way, don't get a PhD, that doesn't sound right to me. But I think we'll need fewer PhDs, more people who understand how to manage AIs, and a very different mix of skills than we have now. So for a lot of professions, it will be difficult to predict. But just being familiar with the best models, I don't see how that can be bad advice. And it's why I think...

whatever it costs per month to get the best model whenever you're listening, I suspect it's a good investment. Well, once again, it's sort of like what we were talking about earlier. On one hand, the AIs are getting so much better, so learn how to use the AIs. On the other hand, the AIs are getting so much better, so invest in these other things that aren't AI, peer networks, things like that. You've got to do both. So there is more of a burden on you and it's less formulaic.

So what you used to do, oh, I'm an undergrad at Yale, I want to go to McKinsey. There were all these set paths that were pretty predictable, as long as you just didn't totally screw up. It seems to me those will be disappearing.

You know, we were talking about writing for the AIs earlier. And another thing that stands out is if you assume that there's an AI note taker on the other side and you're preparing a talk, you could almost think, what is the AI note taker going to say? And then give it exactly that. Because...

If you're giving a talk at some university or whatever, there's probably 50 AI note takers in the audience. And the people who write about the talk, probably most of them will start from those AI notes now. So you're not only writing for the AI, you're speaking for the AI. Absolutely. Another interesting thing about AI is even when you don't use it, as you mentioned, you have this model in your mind of what the AI would say or write back to you. So there's like a phantom AI sitting on your shoulder.

It's enriching. It can also be intimidating. Maybe in some ways it's too homogenizing, but it matters. I would just say give it some thought. How is the phantom AI also shaping your life? Too homogenizing. Why do you say that? If you just ask AI simple questions like, improve my writing, or what do you think about this (again, DeepSeek is somewhat different), you get a somewhat homogenized style and answer. It's a bit bland, even when it's very good or useful.

So if I ask it, well, tell me about the mating practices of this kind of parrot, it'll sound like a somewhat denser and smarter Wikipedia. I'm okay with that, but there's something homogenized to it. And you have to work to get it not to be that way. It's not a complaint, but it's something we should notice and make some corrections for.

Well, after the Apple earnings came out recently, Ben Thompson did two prompts with o1. And one of them was something like this. The first one was fairly generic: based on the Apple earnings, give me a report. And then the second one was: based on the Apple earnings, give me a report, and here is my take on it, and this is what I want you to focus on. He said the first answer wasn't very good, and he was very happy with the second answer. Once he had given it direction and his

take and kind of set the direction, the AI could fill in the rest in a way that was quite good. Yeah. I also find it valuable to use DeepSeek periodically, just so I don't forget what AI is capable of. I call it China boss. It's kind of a joke. Like you say, let's go ask China boss. And it means like, okay, we're willing to consider kind of a wacky answer here.

You know, when there's nothing at stake, when you're not writing a report or column where everything has to be perfectly correct, you just want to hear an opinion. Let's go ask China boss. And that's DeepSeek. You should use it like once a day, just so you don't think of AIs as being bland in the way they can be. I want a lot more crazy in my life. So my wish from AI is that they can give me more crazy ideas. That's a lot of what...

hanging out with people gives me is just, I didn't think about that before. How much do you use DeepSeek? That's been my biggest lesson so far: I didn't realize how much wackier DeepSeek was than the other models. Especially if you ask it. But even if you don't ask it, it'll be much weirder. So yeah, that's one of my core recommendations. And there'll be other models like it. And DeepSeek is itself open source.
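If you'd rather ask China boss from a script than a browser, here's a hedged sketch. DeepSeek documents an OpenAI-compatible API; the base URL, model name, and temperature below follow its public docs at the time of writing and may have changed.

```python
# Hedged sketch of calling DeepSeek programmatically via its documented
# OpenAI-compatible endpoint; base URL and model name may change.
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],
    base_url="https://api.deepseek.com",
)

reply = client.chat.completions.create(
    model="deepseek-chat",
    messages=[{
        "role": "user",
        "content": "Give me five genuinely wacky ideas for a column on AI and writing.",
    }],
    temperature=1.3,  # higher temperature when you want the crazy, not the correct
)
print(reply.choices[0].message.content)
```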

There's a version of DeepSeek now in Perplexity as of about a week ago when we're recording. I haven't played around with that much. Did they make it less weird? I don't know. I worry maybe they did. But the original DeepSeek, man, that's priceless. I love it. Do you have ways of using Perplexity that are as strategic as how you prompt the LLMs? I use Perplexity every day. For me, it's a super practical thing. It replaces most of my earlier uses of Google.

As most of you know, it's completely up to date. If I'm writing, say, a Bloomberg column and I need the right citation, I go to Perplexity. And it just works very, very well. And, you know, you check it by clicking on the link. Hallucinations aren't a problem; you get the right citation, better than Google would give it to you.

I don't have ways that I strategically use Perplexity, though. I kind of just use it like Google, where I throw things in there, whereas I'm very strategic about how I prompt ChatGPT. I agree with that. Super practical for me, Perplexity. It feels like it's asymptoted for me, in a very good way. Like, how could it get better? Not like, oh, I feel they're stuck. Maybe they'll get better. They're adding some voice and other features. But it's incredibly good.
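And if you want to script that Google-replacement lookup rather than use the website, a rough sketch follows. It assumes Perplexity's OpenAI-compatible HTTP endpoint, the "sonar" model name, and a "citations" field in the response, all taken from Perplexity's public docs at the time of writing; treat each as an assumption that may be out of date.

```python
# Rough sketch of a scripted Perplexity lookup. Endpoint, model name, and
# the "citations" response field are assumptions from the public docs.
import os

import requests

resp = requests.post(
    "https://api.perplexity.ai/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['PERPLEXITY_API_KEY']}"},
    json={
        "model": "sonar",
        "messages": [{
            "role": "user",
            "content": "Find a citable source for US inflation in January 2025.",
        }],
    },
    timeout=60,
)
data = resp.json()
print(data["choices"][0]["message"]["content"])
# The response also carries source links you can click to verify, which is
# the "check it by clicking on the link" step described above.
print(data.get("citations", []))
```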

So walk me through the sort of AI stack and the different reasons that you use different tools. I can tell you what I use. I'm not saying it's all you should use. You should use more, and experiment with more, to learn things. But I use o1 Pro the most. Deep Research, which is kind of an offshoot, is a feature of o1 Pro that is actually using o3, I know.

The labeling is complicated. They say they're going to clean that up. I think it's the single most impressive thing humans have built that is out there, but I don't use it that much. It's not that practical for me. It will do more to replace human labor than o1 Pro. o1 Pro is best for queries. Deep Research is best for 10-page reports. I don't want it doing 10-page reports for me, for the most part. A bit when I teach.

So that's not that much in my routine, but to learn what it can do, it's something you should spend a lot of time with. Claude is a wonderful mix of thoughtful, philosophical, dreamy, flexible, versatile. It's the best writer. You should use Claude a lot. The current Claude is already amazing. The next Claude is just going to be out of this world. So yeah, you should be doing Claude. DeepSeek.

Absolutely. Now, you're sending things to China. My view is China knows a lot about me already. I'm not at all nervous about that. But if you work for the military, the CIA... talk to some people, give it some thought. It's China, right? If I ask DeepSeek for a glorious description of eating a mofongo and the Chinese know I want that, I'm like, yes.

I'd love to spread this to China. They don't know what mofongos are. Gemini can do some things other services cannot. I don't use it much because I'm not working with very long or thick documents. But if you are, it is often the best, for a lot of legal work especially. And there will be versions of all these things soon where you're not sending your data to another company. That's limited the use of these for legal work in particular.

you'll be able to do it on your own hard drive in some fashion. I'm not sure what the loss of value will be at first, but people are working on this a lot. It'll come soon. It's one thing that, if you follow AI, you know is coming. Some people would say, I can't send my data to... you know, Gemini 2, Google, whatever. Okay, fine, but pretty soon you won't have to. But with Gemini, there are some ways in which its multimodal capabilities and its ability to handle big, thick files make it number one.

The fact that Google owns YouTube makes it really nice because the YouTube integration is really good. So if I want to prep for an interview, I can put in, say, 15 videos and I can start asking questions about all the videos because it can take the transcript.
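You can approximate that interview-prep workflow yourself: pull the transcripts and hand them to whatever long-context model you use. A rough sketch, assuming the third-party youtube-transcript-api package and placeholder video IDs:

```python
# Rough sketch of DIY interview prep: fetch YouTube transcripts and combine
# them for a long-context model. Assumes youtube-transcript-api
# (pip install youtube-transcript-api); video IDs are placeholders.
# Note: newer versions of the package use YouTubeTranscriptApi().fetch()
# instead of the classic get_transcript() used below.
from youtube_transcript_api import YouTubeTranscriptApi

video_ids = ["dQw4w9WgXcQ", "abc123def45"]  # placeholder YouTube video IDs

corpus = []
for vid in video_ids:
    # get_transcript returns a list of {"text", "start", "duration"} chunks
    chunks = YouTubeTranscriptApi.get_transcript(vid)
    text = " ".join(chunk["text"] for chunk in chunks)
    corpus.append(f"--- Transcript of {vid} ---\n{text}")

combined = "\n\n".join(corpus)
print(f"{len(combined.split())} words of transcript to feed a long-context model")
# From here you would send `combined` plus your questions to whichever
# long-context model you use; Gemini's large window is the natural fit.
```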

And Gemini is just so much better at reading video files, and especially YouTube, than the other LLMs. Those are just things I don't do much, but many people should use Gemini a lot. The fact that I don't use it a lot is nothing against Gemini. It's an amazing system. Grok, you can use very quickly to fact-check tweets. Meta, it's right on your WhatsApp. Their ability to market and open source is very strong. I don't...

I'm not an Instagram person. People should know what Meta is up to, the Llama models. I think they'll be very important globally, something to play around with. They're not part of my regular routine, but you should be aware of them. And there's plenty of things I don't know about, or I've heard about and couldn't really tell you how to use. That would be like the opening menu. And you can just keep on asking Perplexity.

Like, what are some new things that have come out in the last month that I should just play around with for 15 minutes? And it will send you to some articles. And another thing I do, it's $400 a year, but worth it for me: I subscribe to The Information, which keeps abreast of new AI developments. I don't think it's worth it for most people, but if you can afford $400 a year, I think it's quite good and useful.

And it covers some things in crypto, other parts of tech as well. That's a good way to stay in touch. Twitter, obviously, X. And being in good chat groups. That final one is a big one. It's a big one. And it's hard to get into the best chat groups, but just keep on working your way up if you can. What do you think becomes possible with really large context windows? So Gemini now has 2 million tokens.

I'd be willing to bet money by the end of the year, we'll have 10, 20 million tokens. What becomes possible, what becomes true about the world, once the context windows can be that big? Well, that people in a decentralized manner...

There may be people now who can work with these very large context windows. It's just not a public service, so keep that in mind. But dealing in a decentralized manner with things like regulatory codes, which are very important to businesses and lawyers, that will be completely routine. And it's coming very soon. Again, it's not a thing I need. Historical archives, too, if you're a historian and there's massive documentation, like tax records from Renaissance Florence.

I don't know how big that file would be. You'd have to put it in somehow, scan it. But working with things like that over time... A new project for humanity that will create a lot of jobs, by the way, is converting data into usable form. You'll also need a lot more lawyers to haggle over who owns.

the residual rights that were never specified in original contracts because no one imagined this would be a thing. That would be another new set of jobs. But a lot of philanthropy in the future should just be paying for data to be fed into AIs. This is like what Nat Friedman is doing. That's right. And he's translating scrolls from burnt to readable. It's so cool. But to put all that into AIs...

And just everything we know about history, what's in the National Archives, I'm pretty sure, I've been told, has not been fed into the main AI models. It's a lot of stuff. Maybe not useful to most people. But over time, this will be the new human project: to have all our knowledge fed into the AIs. Musical knowledge, like tab notation for guitar: a lot of that's online, but a lot of it isn't.

It's quite an undertaking to assemble all those scrawled things on paper and turn them into AI-usable form. But I think we should spend a lot of our next century doing that, with just everything possible where you're not violating privacy or running into national security issues. And it will just be a much richer world. But it will take a lot of human, very human effort to get there. Yeah. And last question, how innovative

is the LLM usage inside of companies? Like for the people building their private models, the very biggest companies, how big is the delta between that and what we're seeing from the basically free models? The most innovative people won't tell us. I strongly suspect the most innovative people are the AI companies themselves. They use AI to improve AI. They hold their secrets close to their chest for obvious and justifiable commercial reasons.

And I think the difference between what they're doing and what others are doing is just immense. You can't even compare it. Wow. And we don't know what they're doing. But since it keeps getting better, it seems to be working, right? Yeah. That's what we do know. Well, thank you, Tyler. This was fun. Thank you, David. Thanks for nerding out with me.

This transcript was generated by Metacast using AI and may contain inaccuracies.