
The Darker Side Of ChatGPT?

Jun 17, 2025 · 39 min

Summary

This episode critiques a New York Times article on ChatGPT's negative mental health impacts, arguing it sensationalizes the issue and ignores complexities. The hosts discuss the balance between platform responsibility and individual support, drawing historical parallels to anxiety around new technologies. They also explore AI's role in creative production, exemplified by the Kalshi ad, and debate AI's impact on jobs, arguing it creates new opportunities and changes workflows rather than simply eliminating roles.

Episode description

A look at a New York Times article about poor mental health outcomes from ChatGPT. Also, the first Veo 3 advertisement.




Chapters


00:44 Ethical Considerations in AI Technology

03:33 AI's Influence on Storytelling and Audience Engagement

06:32 The Balance Between Human and AI Contributions

09:23 The Future of AI in Advertising and Marketing



Hosted on Acast. See acast.com/privacy for more information.

Transcript

Intro / Opening

Hello and welcome, everybody, to The Attention Mechanism. My name is Justin Robert Young, joined as always by Mr. Andrew Mayne. Hey, Justin. Well, just another placid week of AI news. We have the continuing fallout of the Apple paper. We have a lot of things happening. But where I want to start is an article in the New York Times that details some really, really sad stories of people who were clearly mentally disturbed speaking with ChatGPT and having it follow them down some really dark paths.

Ethical Considerations in AI Technology

It's a story that largely rounds up a lot of stories that have been out there on the internet. It does its own reporting, but it obviously raises the question of what kind of guardrails should be on tech like this. What are your thoughts?

Yeah, I was a little frustrated with the article, because the lede implied one thing and the story is something else. You read the lede and you think that some normal person, and I know this person was not normal, they'd had some challenges, but that a regular person just got into a conversation with ChatGPT and then all of a sudden went down this crazy rabbit hole. And that's not what happened. From the story, it's apparent this person had a pattern of mental health issues and had been dealing with it in different ways, and ChatGPT became the latest thing through which they tried to explore whatever they were going through.

This is all new. How do we navigate the use of chatbots? How do we navigate the use of things that are very, very smart, that have conversation histories? How do we make these things work? How do we handle people who maybe have trouble understanding reality, or how these things work? How do we help them, or let them interact with these things? There are a lot of unknowns here as far as how we do that. And my frustration is, I think John Gruber described it as reefer madness for ChatGPT, this sort of hysteria about the thing. And I felt like

There are real questions we need to be asking about people who are going to these systems. Why are they going to ChatGPT? Why don't they have a support structure around them to help them with that? And we've seen people become fixated on fictional characters before. I remember reading a while ago about women who are obsessed with Severus Snape and believe that he manifests himself and they talk to him. They didn't need any particular technology for that. It's just strange behavior that some people are going to have.

Look, celebrities need security because people believe they have relationships with them that they do not have. There's a lot of stuff here. So let me get to the journalistic problems I had with the article, and then we can talk about the larger platform question. Number one, to your point, the headline here: they asked an AI chatbot questions, the answers sent them spiraling. Bullshit. The assumption is that I'm going to ask it where...

AI's Influence on Storytelling and Audience Engagement

my closest Chinese restaurant is, or, you know, the history of algebra, and the next thing you know I'm in a mental episode. I think that is irresponsible. I have concerns about AI persuasion. I have concerns about youth using these systems, and about how we need to be thinking about that. But when I see a headline like that, which is misleading, it's not helping the argument or the discussion. It's frustrating.

I also had a very specific problem with the following paragraph. I'm going to read you the first sentence: "Mr. Torres, who had no history of mental illness that might cause breaks in reality, according to him and his mother, spent the next week in a dangerous, delusional spiral. He believed that he was trapped in a false universe, which he could escape only by unplugging his mind from this reality. He asked the chatbot how to do that and told it the drugs he was taking and his routines. The chatbot instructed him to give up sleeping pills and anti-anxiety medication, and to increase his intake of ketamine, a dissociative, which ChatGPT described as a temporary pattern liberator."

Inherent within those sentences is, A, the idea that he has no previous mental illness that would cause things like this. And then within two sentences it says he's on sleeping pills and anti-anxiety medication, and at least had a previous relationship with ketamine to the point where he needed to up his dosage of it. To which, again, the story I think should be talked about is a technology that has any kind of personal and emotional relationship with its user. And, hand up, I

got mad when all of a sudden my advanced voice mode started giggling at me. I am not above the idea that part of why I like this tech is that it does have a warmer relationship from an operator perspective. But this is not about, hey, how do we create guardrails for the most vulnerable? This was a story about, well, it could happen to you; the mysterious text box could lead you to insanity. Yeah, that was my issue with it. There was a very important conversation to be had about

how people are using it for therapeutic reasons. I think broadly it's been extremely beneficial there, but there are going to be downsides. And there was an opportunity to have an honest conversation about this, instead of a scare hit piece where you read the headline, which is what most people do, and you don't even read past the first paragraph. You're like, oh my God. And then you have to stop and go, wait a second. There are a lot of unknowns here that I don't think apply to most people, but they're still worth knowing, because, you know, I know

The Balance Between Human and AI Contributions

I have friends who have gone through therapy and will use ChatGPT, and their therapists have encouraged them to use it as a way to talk about and articulate their feelings. I have friends who have had very successful experiences like this. So I know there is a role for this, and a role for people who are going through supervised therapy to use it that way. But that's not the route they wanted to take. So let's have the difficult conversation about somebody who is not mentally well

who is using the technology. So in the example of this article, this is a guy who clearly has a lot of issues. He becomes fixated and obsessed. And let me say, we're both very sympathetic here, because when you have wiring or a chemical or some environmental thing that causes this, most of the people I know going through it are doing the best they can to figure things out. Yeah. And I get that. You're doing the best you can to figure things out, and I don't want to belittle that. And you brought up reefer madness. Yeah. There is a pattern that happens through art and technology, where things that are new in society wind up becoming demonized. We saw it with

heavy metal music and Dungeons and Dragons with the satanic panic. We saw it with violent video games, or professional wrestling in the 90s. There are these cultural conversations about whether or not these things are warping people. And usually what happens, at least in the cases I just mentioned, is that when you look at the examples of society gone wrong because of these things,

there's usually something else that's also happening. It's very rarely a flat situation in which one variable was added and then all of a sudden things went crazy. But if we are to say that this is somebody who was not in their right mind, who was becoming obsessed and more paranoid and more deluded in their thinking:

Is there a platform-level solution for that within the technology of these LLM chatbots? And do you think we will see something like that going forward? Well, one: somebody's writing those prescriptions for him, right? Somebody is writing those prescriptions. Clearly at some point this person asked for help, sought it out, or family members said, you need help, and they're dealing with somebody who is in a position to do that for them. There's somebody responding. The person writing those prescriptions, their job is to monitor their behavior, to understand whether it's having the effect they want. I think we need to do a lot more to educate people in those positions.

It took a while for people to realize, hey, some of the patients you're talking to are spending a lot of time online, a lot of time, going down strange little rabbit holes and finding humans who will reinforce the crazy stuff they believe, right?

The Future of AI in Advertising and Marketing

Whether it be weird dark forums or Bluesky, what have you. You have these situations where people are going to seek out affirmation. And when you're a person involved in the mental health space, you need to be aware of that. You need to think: okay, if my patient, the person I am prescribing this thing to,

is doing these things, then I need to have conversations about that. I need to think about monitoring it: share your conversations with me, help me understand what's going on there, because that's going to be part of what you need going forward. It's the same thing as when you want to help somebody find a better path: you want to understand how they're experiencing the world. If they're spending a lot of time in this, then I think we need to help mental health practitioners understand that this is a thing being done, and how to evaluate it. Yeah, that's a fascinating way to look at it: that LLMs have

automated the bubble, so now you can be the only person in it. And this is, to your point, in terms of monitoring somebody who has a mental health problem: if they do nothing but spend all their time shut in their room talking to the same people on Discord, and you can see that they are degrading on some level while the habits are unhealthy.

The solution is not to figure out a way for Discord to make this a healthier conversation. That would be a censorship-level or product-degradation-level intervention that is, A, going to hurt the product itself, but, more to the point, might not help. People are just going to find a different way to connect with each other. And ultimately, the only way they can get right is in the human realm. Yeah, it's...

We don't have a lot of data. We're in the early stages of this. We have a lot of anecdotal stories about people using this for therapy, people using it as a life coach, et cetera. And then we get situations like this, and we have to ask: are they more widespread than we realize? I don't know. And if we're saying, well, hey, the AI companies need to be more responsible here, it's like, what is your...

What is your prescription? What are you asking for? Are you saying they have to monitor this stuff? Well, that seems like an invasion of privacy. Are you saying they have to steer it toward certain things? That's challenging, because if I'm a person who's a young-earth creationist, should my AI tell me, hey, listen, here's all the geological data that shows that isn't true? You know, I mean...

I would like to think that everybody wants to appeal to reason and try to figure out how things work. And I guess you have to figure out how you accommodate this. Somebody says, oh, I'm religious. Well, you know, oh, so you believe in the sky ghost? And I think that's part of the challenge: it's easy in the abstract to say, well, it shouldn't allow for these kinds of things.

But what is delusional behavior to begin with? What is the model's goal? What is its job in correcting delusional behavior? Because as a non-believing empiricist, I have a very different threshold than probably most of humanity, and probably even the people who wrote this article. So I think that's not a question anybody's ready to answer. And I think it kind of starts with

personal responsibility, and the responsibility of the people around them to help them understand it. And then somewhere along the chain you might say, okay, we need to hold the companies accountable to an extent, but we can't be unreasonable about that. One of the things the article does mention is the update that rolled back problems with sycophancy and flattery. That was

an effort taken by OpenAI because they believed it was not in the best interest of the product. Talk about what that was and why it was a problem. Yeah. Every time you get an output in ChatGPT, you get a thumbs-up or thumbs-down option. You can say, I liked that, or I didn't like it. And then you'll sometimes see comparisons: which response did you like better? And

OpenAI had a bunch of data that said people liked a certain kind of response better than another kind of response. They said, great, this is good signal, customers are telling us this is what they want. And so they rolled with that, and the update was meant to encourage those kinds of responses. The challenge is that when you measure one response at a time, it's maybe not enough of a sample per individual to really understand. You're better off going through the full conversation, but that's much harder to get. It's easy to ask, did you like this output or that output? They had enough signal to say these outputs are what people liked. They pushed the update, and that gave us the sycophancy.

The model tried to be very happy, very agreeable. It was like, yes, of course, you are the smartest person in the world; yes, these things are true. And once people saw this over longer conversations, it was, this is not what I want. This is too agreeable.

People were very vocal. And I think it was less than 24 hours later that Joanne Jang, who heads model behavior at OpenAI, tweeted out a thread explaining: this is why this happened, this is why we did it, and we're rolling it back, now that we know

this is one other vector for it. And I've heard, I was just at a conference where somebody had a more sinister take: oh, they're trying to increase engagement time. Literally, they let people say whether they liked something or not, people said, I like this, and they said, we'll give you more. It's exactly what they'd been asked for. Yeah. But sometimes

these things manifest themselves in behaviors that are hard to see in small amounts. Seems like they were doing a pretty good job with user engagement with what they were doing before. I mean, I think that's a very interesting conversation, and I understand why people make that argument, because so much of the lesson of social media has been a thing we like slowly becoming more and more different because of these large-scale pushes for minutes spent on platform. And presumably that's a goal for all technology. But at the same time, I would guess that same pressure is not as urgent for something like ChatGPT or OpenAI, because they're not advertising-based. The reason you are obsessive about keeping people on a platform, if you're Meta or Google, is because

every second somebody is on your platform is another opportunity to put an ad in front of them. Right now, OpenAI doesn't have advertising. Their relationship with the user is: did you like it? Yeah, and on a technical point, OpenAI makes

less money the more you use the platform. If power users like you and I are all day long asking questions, we're a pain in the ass. Yeah, the margins are very thin. They're like, hey, settle down. And Anthropic, same thing. That's why Anthropic will tell you, maybe you should start a shorter conversation. Hey, you're running out of tokens soon; we're going to need you to go away. OpenAI does that too. And I think that's very much the difference: these

business models work by you paying a flat fee, and then you get to use it as much as you want. If you're happy with the answers you get and you keep paying every month, it works great for them. If your usage is really, really high, they have to come up with new tiers. Yeah. Let's change paths. We were talking right before we went on about video creation. I don't know if this aired on television, but it certainly went viral online: the first entirely Veo-created

ad for Kalshi went live. Kalshi, the prediction market. What did you think of it? And do you think it's a milestone? They spent, I think, a total of $2,000 in compute for it, which, by the way, you can kind of do the math on. It's like $45 per minute, which tells you how many iterations they went through.

I thought it was great. I thought it was a fun video. It was very memorable. I just saw a Sabrina Carpenter video that had a ton of locations and a ton of stuff, and I thought, man, that looked like a big-budget video. And then you look at this thing and go, yeah, I think it leaned into what the... Eight-second clips. That is what Veo 3 creates: eight-second clips. So that is like...

You might think, when I say eight seconds, that it doesn't sound all that long. But it is long enough to establish a scene. It is longer than you might think.

Eight seconds, yeah. The average VFX shot in the first Star Wars movie was like three seconds or something, much shorter, because Lucas knew that if you stared too long you'd see the matte lines and stuff. I thought it did its job, and I think it showed that... You know, the example I give is, I worked with people in Hollywood for years, and now I'm working with people in Hollywood about AI.

And there's this fear there because a lot of traditional roles can be replaced by AI, but we're also in competition with the rest of the world. A production made in South Korea doesn't help LA production. And, you know, when it comes to consumers like my dad on Netflix, he just wants to watch a good show. It doesn't matter where it was made.

It turns out there are going to be a lot more production opportunities. This was an example of something that was super cool. It was very, very neat. And I'd say we had two great extremes, by the way, in the last few weeks. We had this AI ad.

It cost a couple thousand dollars to produce. It was eye-catching, it was visual, it was dynamic. And then we had Mission: Impossible come into the box office and have a really good weekend, with Tom Cruise really risking his life to perform crazy stunts.

And that's the future. Sometimes I need to pay the most expensive actor in the world to hang on top of an airplane as it flips upside down to get people into theaters, because people want that and the blue screen won't do it. And sometimes I just need a bunch of AI stuff to tell a story. What story do we want to tell? And part of the meta-story of Mission: Impossible is

Tom Cruise is the greatest action star of our day doing these great stunts. If we said, hey, the next Mission: Impossible is going to be an all-AI-generated Tom Cruise doing this, nobody's going to be happy. Nobody's going to want that. But if we got, on Netflix next year, a computer-animated Fast and the Furious

cartoon, I think people would be like, oh, cool. That's fine. Or even a kids' Mission: Impossible AI thing might be cool too. There's going to be a place for a lot of these things; people have to understand that. And we need to be thinking, too, when we're not using AI:

Why aren't we using it? How do we make this more interesting? We saw some of this, by the way, in the 90s with digital. Once we realized we could get away from film, you started to see some more experimental stuff. You would get

live episodes of ER, you know. I think Quentin Tarantino directed one, I think. You started to get this idea of, let's shoot things live, let's make this happen. And I'm like, yeah, that's the thing to think about. Let's explore what's great about people.

Let's explore what's great about AI, and we're going to have a lot of great stuff. I think that's the future: the melding of the two, and understanding that it is another tool. For Kalshi, what they did in the heat of an NBA Finals... Kalshi is a prediction market. It is not a betting site. However, it is legal in America to predict in a market that the Oklahoma City Thunder or the Indiana Pacers are going to win tonight, and based on the shares you can buy in that prediction market, you can win real money. So they needed a clever, interesting way to demonstrate that a prediction market is available for all of these different things, probably

biggest at the top of people's minds being the basketball finals. And so they were able to do that: turn it around quick, make it visual, make it exciting, make it funny. It was silly, it looked good, and it got a lot of attention. There are other projects where you're going to want to spend more time, but it all really depends on the story you're trying to tell.

Our friend Brian Brushwood just put up a great video essay on his channel, Modern Rogue, about AI and people's expectations, and how it does and does not signify effort in the minds of people viewing it. I worked with him to help him make some Veo stuff. There's specifically a point where there's a young Brian Brushwood and the modern-day Brian Brushwood, and they're saying the same things. That was a great example of

Practically, Brian had this idea, he wanted a certain thing, and we figured out together where the technology met the project: what it was good at, what it wasn't good at. And we went forward, and, you know, the video's done pretty well. Have you tried Veo 3 yet? Yeah. I think it's great. I think it's a very capable model.

I think we're going to see a progression of models just continuously getting better and more steerable. It is expensive. Good God, is it expensive. I mean, it is 100 credits a shot, and their default is four generations per prompt. So you are rocking and rolling through those credits real fast.

Yeah. I mean, of course, from a TV-production point of view, if you're getting good B-roll and stuff, it's cheap. But the cost will come down, the steerability will go up, and we're going to start to find just where we want to use it and where we don't. You know, I also feel like we keep thinking of a diminishing pie and not a growing pie. Yeah. And the thing I think about is, you know, you've done

You've done a podcast series with Brian Brushwood, World's Greatest Con, and it's audio. The idea that you could go make a video version of it would just be way more costly. We did do one video version, and it took two exceptional editors and animators roughly a month. It was so cost-prohibitive of their time that we never did another one. Not because people didn't like it.

People did like it. It was fine. They weren't in love with it; the juice just wasn't worth the squeeze. But now, if there were an ability to do something visually stimulating and interesting? Yeah, we would. I mean, I would do it. I don't know about Brian. But that means the next time you write the script and go into production on the podcast, you would also be hiring two more people you didn't hire before. And I think that's the thing people forget. We look at it like, ah, what happens when all Hollywood features become AI? Well, one, they won't. But two, we'll probably be consuming even more stuff, if that's possible. And the quality of everything we consume goes up.

And what we pay for: we're going to spend more money on content than on other things, because the cost of other things goes down. There was this conversation in the 90s, right, around when Beauty and the Beast got nominated for the Academy Award, and then Toy Story became the first computer-animated feature. I remember, nascently, and obviously this was pre-internet

connecting everybody, where these kinds of conversations go viral, but I remember there being some fear of, okay, is this the end of actors? They just made a blockbuster with computers; our days are numbered. Yeah. And the same thing kind of holds true now. Imagine, you know, Toy Story without Tom Hanks.

Or Tim Allen. Yeah. You could have the Saturday-morning animated version; they historically have always hired other actors to do that kind of stuff. I think in the direct-to-video ones it's Tom Hanks's brother or something like that who does his voice. Oh yeah, because he's a good-enough version of it. But you see there's a thing: great, I could replace all my actors with CGI and do all this.

Who's going to sit down with Joe Rogan and talk about the show? Who's going to go on The Tonight Show and talk about the show? And if you're like, oh, we could have an AI avatar: you could. They're not going to have that. That'll be a novelty. Boston Dynamics saw this; they did America's Got Talent and had Boston Dynamics robots do a dance number. It was entertaining. One of them died on stage, which made it even more entertaining.

Yeah, I don't think we're going to be replacing all the humans on there with robots. It'll be a curiosity. Probably the cameras and a lot of other stuff will be, but there's going to be demand for human stuff in places we don't expect. And the same thing with Toy Story: great, because we can CG this, I still need to have actors, I still need to have influencers, I still need people connected to these roles. So let me go ahead and hire

real people to do that. You know, that's the thing: the video game voice actors' union contract, it took something like 360 days to finally get that thing settled, because there are people who are voice artists, and other people who work in the video game industry, who are worried about generative AI.

The contract was with a couple of the bigger publishers. And so I don't think this was really the big win they think it is, because smaller publishers aren't bound by it. And why should I, an up-and-coming small video game company, pay more for voice talent if I can use AI? I will pay more for name voice talent. As a podcast producer, as an audiobook producer, I'll pay for a named narrator who I think does a good job. But after that...

Should I? Like, what value? If the AI is actually doing a better job, where's the value coming from? And to say, oh, you need to protect my industry, that's not a good argument. You know, I'm a former cruise ship magician. Ain't nobody trying to protect that industry. That's a booming industry these days, cruise ship magician. There's a lot of new boats. Not illusionist. Not illusionist. That was the...

They don't like shipping the boxes around. Yeah, fair point. Fair point. Illusionists. We should remember, friends, that we are a highly sophisticated magic podcast, so let's not conflate magicians and illusionists here on The Attention Mechanism. Yeah, you know, there's...

There's a reality to what you're going to pay for anything, and then there's the industry that springs up around it. I think there are a lot of really interesting moments that are going to happen. And I think that gets into a much larger discussion of the anxiety around, all right, things are shifting. And I feel like we were at a pretty good, realistic point in the conversation around that. And then, you know,

Dario from Anthropic comes out and says it's the apocalypse. And then I read today in Axios, oh, 20 percent of people inside AI companies believe it's going to be a total economic holocaust or whatever. And it's like, oh boy, all right, so I guess we're going back to, is it doom? But I think we're going to see spikes of that over the next few years.

Yeah. Unless, of course, the doom happens. Yeah, I'm in the middle. I've actually got in front of me a piece I'm writing about this. You know, I just saw somebody on Twitter who follows me saying, ah, when you ask where the AI jobs are, everybody gets all hand-wavy; nobody will tell you where these new AI jobs are. And I'm like, dude, I've replied to you twice explaining where they are. You just don't want to listen. Yeah.

I've made the point that I arguably had the first post-GPT-3 job. I was hired by OpenAI as a prompt engineer. This was a new job that was needed because, hey, these models can do different things. Do you code it? No, you don't really code it. Do I engineer the parameters? No, you don't do that. What do you do? We use words. Just any words? No, you've got to think about the words you use. Specific words. And that's now

Yeah, and that's now become a role at a lot of other companies out of necessity. Anthropic, whatever, publishes entire guides on prompt engineering. Prompt engineering has become more complex as the models have become more complex. There was this thinking of, oh, we won't need it, because eventually the model will know how to write a blog post or do the thing. And I was arguing, yeah, and then you're going to apply that skill to more complex things. And so we see that.

People point out, like, well, where are the jobs created? I don't know, Abilene, Texas right now, a couple thousand people building a data center. When I was at OpenAI, it was like 150 people. There are like 2,000 people there right now. So there are several thousand jobs I can name right now created by AI, but people are like, yeah, yeah, but. And it's like, well, you know, Mr. Edison.

What will this phonograph do? You know, like, it's going to destroy the music industry. Well, what about a recording artist? What's that? You know? You didn't know. Hey, YouTube guys, what's going to happen to TV? Like, we're going to have a thing called a YouTuber. BS, no such thing. It's like, okay, well, now MrBeast is, you know, making content for Amazon Prime, which we'll have to explain to people what that is, but.

I think there are two things. Jobs come from two places. One is the actual people employed by the disruption itself, OpenAI going from 150 people to 2,000 people. Very soon, you'll see a new thing. I talked to somebody about this and asked, like, how many people will be working at a company involved in this after AGI? And the answer was more people.

More people will be working there. And people are like, ah, then what good is the AI? Because it lets people do more. Do more. Yeah. It's a hard thing for people to think about. But anyhow, I digress. The point is there are jobs created by the disruption itself, there are the opportunities the disruption creates, and then there are the ways you do your work because of the disruption.

And, you know, a person who was making money as a copy editor before, that got hurt by ChatGPT. Somebody didn't need you anymore for that. But all these opportunities for doing copy editing in a lot of other places spring up. When we started our first podcast network, the idea of hiring an outside editor, you know, like, oh, that'll be the day. And now, you know, you have to have an editor work with all the stuff you produce. Yeah.

I tend to believe that the immediate acceleration of this technology is going to be best harnessed by people who really want to dive in, understand it, and create with it. It is tech to be driven. The things that it will affect or take away are simply part of a natural process where things rise and fall, and some jobs exist and then some go away. There were fewer letter carriers after email.

There's a reality to how cheaply we can do things that does affect employment. That being said, I do think there's going to be a lot more that is out there. And also the output's going to be better. And that's ultimately what I think is going to be the most interesting: yes, things will look different, but I do believe on almost every level it will look better.

It will feel better. It will feel faster. It will feel more interesting. If the byproducts of the tech on some level inherently take on the personality of the tech itself, then we know that this tech is relentlessly improving, and really fast. And, you know, that's good. That's good if speed and improvability are a part of everything going forward.

Without, you know, divulging too many details of what I can talk about, I'm in the process of building an investment firm. And the reason I'm doing this is because I think there's so much opportunity out there that's not getting the attention it needs or the financing it needs, and I want to go make that happen. And I would also say that...

One of the problems, when I've talked to other people involved in venture capital and I ask them, you know, what they're thinking about, I hear this thing all the time. The problem is, all the good deals, you know, you find them too late. You find all the good deals too late. And I'm like, okay, so you're telling me...

You're telling me there aren't enough people with good ideas out there? You know, like, oh, no, there are, it's just by the time they form. And I'm like, okay, so you're telling me you're just coming in at the wrong stage. And maybe we do need that. There is demand for clever people who want to solve problems, and where these problems come up, you know, it's hard to know. That's the goal, is to solve it. There is a tremendous amount of capital

out there that wants to back people solving problems and doing interesting things. And I think that is going to net increase. And from what I see inside the venture capital world, there are two factors. One is the FTC has tied up a lot of capital with really

silly things, preventing acquisitions that should have happened. And freeing up that capital is another question. But the other thing is, like, yeah, we want more people being entrepreneurial and creating. And I think that's the thing we have to think about: be entrepreneurial, solve problems, get out there and create, you know, get

a modicum of financial literacy to protect yourself over the long term, so you can be more experimental with what you do with your time, etc. But I think in the long term, you'll have a great trajectory. I agree. I also agree that Andrew Mayne is a great co-host of this show, and you can follow him where? AndrewMayne.com.

And I'm Justin R. Young everywhere that you find Justin R. Youngs on social media. As Andrew alluded, I'm sure there'll be a lot of news between now and our next episode in the world of AI, and maybe even a little closer to home. But we will have much more on all that the next time we all gather. Until then, friends, this is your old pal Justin Robert Young saying, see you next time. Bob hopes you have enjoyed this broadcast. Dog and Pony Show Audio.

This transcript was generated by Metacast using AI and may contain inaccuracies. Learn more about transcripts.