Wondery Plus subscribers can listen to How I Built This early and ad-free right now. Join Wondery Plus in the Wondery app or on Apple Podcasts. This message comes from How I Built This sponsor, Crowe. There is no shortage of volatility in business today, from regulatory shifts to digital disruption, but volatility isn't your enemy.
Doing nothing is. You can uncover opportunity in uncertainty. Crowe offers top-flight services in audit, tax, advisory, and consulting to help you take on your biggest challenges. Visit embracevolatility.com to discover how Crowe can help you embrace volatility. Once again, that's embracevolatility.com.
Today's business travelers are finding that fitting in a little leisure time keeps them recharged and excited on work trips. I know this because whenever I travel for work, I always try and meet up with a friend to catch up, have a great dinner, or hit a museum wherever I am. So if you're traveling for work, go with the card that puts the travel in business travel, the Delta SkyMiles Platinum Business American Express Card.
If you travel, you know. Amica knows that your home, auto, and life insurance are more than just a policy. It isn't just home insurance, it's about protecting the life you've built. And it isn't just auto insurance, it's about the car that's taken you on all of your adventures. Everything they do at Amica, they do to make insurance feel more human.
Amica's representatives are there when you need them, and Amica is a mutual, which means they don't have investors or shareholders to please. They're customer-owned, and they only work in service of you. Amica asks about your life and your needs so you can build a policy together. As Amica says, empathy is our best policy.
Hello and welcome to How I Built This Lab. I'm Guy Raz. So what if you could no longer trust the things you see and hear? I'm not talking about conspiracy theories. I'm talking about the breakdown of what we now consider to be a hard fact: evidence. What if you couldn't trust the signature on a check? The documents or videos presented in court, the footage you see on the news, the calls you receive from your family? Because they could all be perfectly forged by artificial intelligence.
The breakdown of trust in our society, that's just one of the risks that could be headed our way as AI gets smarter and smarter. That's why my guest today, Tristan Harris, is sounding the alarm about the rapid development of AI. This episode is part two of my conversation with Tristan. He's the co-founder of the Center for Humane Technology.
If you haven't listened to part one yet, you'll want to go back and listen to that first. In that episode, Tristan talked about how so many of the technological tools we use every day, things like social media and search engines, were designed to grab as much of our attention as possible.
And that's had some really damaging effects on our society. We also talked about the exponential development of AI and how it's advancing so quickly that even the people that work on it aren't aware of the full scope of its capabilities. Today, Tristan is back to talk more about how AI is changing our lives, what we need to worry about and how we can protect ourselves from some of the scary stuff. But not everyone is worried about the dangers of AI, which presents its own challenge.
Tristan, I'm here in the Bay Area, you are too. And from time to time, I'll go to events, meet-ups, just to observe what people are talking about around generative AI. And there's a lot of excitement about what's happening in San Francisco, and talk about how this is the next big thing. People are saying it's like what it felt like to be here in 2003, 2004 with Web 2.0.
So there's a lot of excitement around it and not that much skepticism. And so I wonder, in a world where profit is the incentive, obviously we live in a capitalist system, what would stop somebody from pursuing this at lightning speed? I mean, if they're incentivized by financial rewards to pursue it?
Well, if they're fully incentivized to go as fast as possible and there's no counter-incentive that says you're liable, let's say, for the harms that might show up, then of course you're going to go as fast as possible. And so that's why, in our work, people think that we're criticizing Sam Altman, or OpenAI, or one company, or we're criticizing AI overall.
No, neither of those things are true. What we're criticizing are perverse incentives that lead to bad outcomes. Because we are futurists who want the good future, and we see that to get that future, we have to change the incentives that we're currently operating with.
So the example of this is liability. So what's the lesson to be learned from social media? For those who don't know or remember this, in 1996 there was this thing called the Communications Decency Act, in which there's a section, famously called Section 230, that basically gave all Internet companies an immunity shield: you would not be liable for anything on your online bulletin board, where someone posts hate speech or something like that, or tells people to commit suicide, or smears someone.
You would not be liable for any of those harms. And that made sense when the Internet was just a bunch of bulletin boards, and it was not powered by AI. But we took that immunity shield and applied it to social media companies when they came along later. And so when they went and intentionally addicted children, used social comparison, used variable schedule rewards, used social proof and social validation and direct messaging, and all that to try to jack up their products,
we allowed social media companies to not be liable for any of the downstream harms that we're now living with. And the correction we could make with AI companies is that, instead of being incentivized to race as fast as possible, what do we want to incentivize? We want to have them move at the pace at which we can get this right. How will we get this right? What if everybody was liable for the downstream harms that could occur, and they all moved at a slower pace, not a generically slower pace,
but at a pace at which we're doing the relevant safety work. And how would we rebalance that equation so that everyone is in a race to safety versus a race to power and capabilities? How will people start to see this impact their lives? I was on Instagram this week, and I got delivered a surfing video, and it was a video of Kelly Slater surfing in beautiful, pristine waters.
And he was just weaving in and out of other surfers, maybe 40 surfers. It was an amazing video. And I looked at the comments, and they were uniformly, you know, you just went down and they're like, wow, this guy's the GOAT, this is amazing. Wow. And then finally, there was one comment, and it was like, guys, this is AI-generated. And I looked at this video, and I'm pretty sure it was AI-generated, you know.
That's already happening. Some of it is very good, and it's not even a fraction as good as it's going to be, if, as you say, this is an exponential curve, as good as it'll be in a year, five years, ten years from now. What are we talking about? Lay it out for me, Tristan. I mean, are we talking about a world where nothing is real? I mean, it's like Aldous Huxley again on steroids, where we just will not even be able to know if a call from our spouse is real.
Yeah. So AI is not going to get worse at emulating someone's voice, someone's handwriting, someone's likeness. That's what generative AI does: it gives you those capabilities. Anything that can be emulated will be emulated. And that's why it's generative AI. It's generating text, generating images, generating 3D models from scratch, generating architectural designs, generating movie scripts, generating amicus briefs, generating, you know, fake articles about people.
Anything that can be simulated will be. In New Hampshire, someone made a deepfake of Joe Biden: automated robocalls with Joe Biden's voice telling people not to vote. Yeah. And I actually listened to it. To be honest, that one, I would have thought that sounded like Joe Biden. And the point is that that's the worst it will ever be.
So if you're not impressed today with where it is, just look at the growth rate of how much better and how quickly it's getting better. And what can you do in the face of that? Well, you know, by default, yes, if we live in the world that we live in today, we won't know what's true. But I was just talking to the digital minister of Taiwan, Audrey Tang, and she's talking about the need for authenticated privileged messages.
So like, anytime the government sends a message, it now comes through one number. If you get a text from the government from that number, you know it's the government. If you don't get a text from that number, it's not the government. You know, Apple and Google could start working on an interoperable standard, saying that we're going to verify and make sure that when there's a phone call, there's a real handshake, a new secure and encrypted handshake.
It's just like how we went from the default on the internet being HTTP to HTTPS. So we went from kind of an unsecured, open, unencrypted internet to more secured, encrypted connections. I think that in the age of generative AI, we're going to move to these more privileged and secure environments. I just wonder, it's like putting a finger in the dike. Yeah.
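[For the curious: a minimal sketch of the authenticated-message idea Tristan describes, written in Python with the pyca/cryptography library. The sender, keys, and message here are hypothetical; a real interoperable standard would also have to solve key distribution and revocation.]

```python
# A minimal sketch of authenticated messaging: the sender signs each message
# with a private key, and the receiving phone verifies it against the sender's
# published public key before displaying a "verified" badge.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Hypothetical: the government (or carrier) holds the private key and
# publishes the corresponding public key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

message = b"Official notice: polls are open 7am to 8pm."
signature = private_key.sign(message)

# The receiver verifies the signature before trusting the message.
try:
    public_key.verify(signature, message)
    print("Verified: this message really came from the key holder.")
except InvalidSignature:
    print("Warning: unverified message; treat as a possible forgery.")
```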
And there's more and more water building, and that dam is just about to burst. And I just think, well, already people have used ChatGPT-4 to figure out how to break into passwords. I mean, even something as simple as, every now and again, I'm sure this happens to you, I'm sure this happens to a lot of people listening: you get a text and it looks like it's from your bank. And it says,
fraud detected on your account. And many people who aren't as used to getting these things might click on it. That's simple. But you know what I mean? It's just a matter of time. It's going to get smarter before it's able to just break through all of these systems that are designed to protect us. Yeah. Well, and this, by the way, I think is how we frame the way that we're worried about the risk, which is that we're just simply releasing more capabilities into society
faster than society has the immune systems to absorb and adapt to all the new changes that accompany all of that AI getting released. You know, the first time someone released that open-source code, the AI that said, with three seconds of your voice, I can speak to your bank, was every bank in the world prepared for that, planning for that years in advance? No. They don't know what new AI capabilities are going to be released.
And that's just one tiny one. There's literally hundreds of them per week. It's hard to track. In fact, in our AI Dilemma talk, we quote the co-founder of Anthropic, Jack Clark, who said that unless you're scanning Twitter every single day for all these updates, you are missing updates that are critical for national security,
and sort of what it means to have a safe world. And so that's where you would say, okay, so why don't we just stop all this? Why don't we just not race? Why don't we just stop releasing all AI? Well, then people respond to that: but if China doesn't stop, then the US is just going to fall behind. But I want to push back against this, because it's not just that I think we should stop in the US.
We have to get smarter about what it means to beat China. Because if they race so fast that they release stuff that then undermines their own society, that's not in their interest either. We have to be smarter than that. The US has to lead and say, we need to set the terms of the race. And it's actually a race to the responsible and conscious deployment of technology that, in its effect, strengthens your society relative to other ones. That's the true competition.
We're going to take a quick break. But when we come back, more from Tristan on the measures we could take to responsibly deploy AI, and the role Tristan played in a recent White House executive order on AI. Stay with us. I'm Guy Raz, and you're listening to How I Built This Lab. I've talked to hundreds of founders on How I Built This, and I've heard time and time again how important it is to have a strong web presence in order to really grow a business.
Squarespace is an all-in-one platform for building a brand and engaging customers online. Squarespace lets you easily create a dynamic website and sell anything: your products and services, and even content you create. Squarespace makes it really easy to get started, with best-in-class website templates for all types of businesses that can be customized to fit your specific needs.
Squarespace also provides the tools you need to run your business smoothly, including inventory management, a simple checkout process, and secure payments. And with Squarespace email campaigns, you can build a community of email subscribers and customers. Start with an email template and customize it by applying your own brand ingredients, like colors and logo.
And once you send, built-in analytics measure your email's impact. Go to squarespace.com slash built for a free trial. And when you're ready to launch, use offer code BUILT to save 10% off your first purchase of a website or domain. Picture that thing you've always wanted to learn. Now picture learning it from the person who's literally the best at it in the world. That's what you get with MasterClass.
Don't just talk about improving. MasterClass helps you actually do it. MasterClass offers over 180 world-class instructors. So whether you want to master negotiation with Chris Voss, live like a boss with Martha Stewart, or learn about the power of resilience with filmmaker Ava DuVernay, MasterClass has you covered.
There are over 200 classes to pick from, with new classes added every month. And one of my favorites is with Sara Blakely, who we've had on our show. She teaches self-made entrepreneurship and how to bootstrap a great idea. Every new membership comes with a 30-day money-back guarantee, so there's no risk. And right now our listeners will get an additional 15% off an annual membership at masterclass.com slash built. Get 15% off right now at masterclass.com slash built. That's masterclass.com slash built.
Welcome back to How I Built This Lab. I'm Guy Raz, and my guest is Tristan Harris, co-founder of the Center for Humane Technology. And Tristan, as you've heard, has been sounding the alarm about the rapid development of AI. And he says that advancements in the technology could unravel the very fabric of our society.
Human societies more or less depend on a sense of trust, that there's common information, even if there are differences of viewpoints and so on. I mean, you can trigger riots, conflicts, violent demonstrations with misinformation that is so credible, that seems so real. I mean, videos, you know. I keep thinking about this movie, The Running Man, that came out in like the 80s with Arnold Schwarzenegger. Have you seen that movie? It's vaguely familiar, remind me of the plot.
So basically, I think, if I'm recalling correctly from my childhood, Arnold Schwarzenegger is a Bakersfield cop, and he's a good guy. But he's disliked by his superiors or his colleagues or whatever, and there's a scene where they're all in a helicopter, and there's an anti-government riot, and the helicopter fires on these demonstrators and massacres them.
And they essentially frame Arnold Schwarzenegger, who tries to prevent the other pilots from doing this. They frame him as the guy who did it, and they create a video where it looks like he is the butcher of Bakersfield. And so he's sent to prison. But you can imagine that future. I mean, it's so crazy, but what happened in that film could easily happen. You can imagine, in a court of law, documentary evidence being presented, video evidence being presented,
and signatures on documents, photographs, recordings, all of these generated by AI that are so good, it's impossible to discern them from real evidence. And again, like, I'm on a rant here, but it seems like this is going to completely upend how we think about communication, what we believe, what we present as fact and evidence, how we function as societies.
Yeah, 100%. I mean, here's a metaphor. Imagine that the whole world is run on top of Windows 95. You know, it's running the world's computers, and everything in the world runs on Windows 95. Governments run on Windows 95. Banks run on Windows 95. Hospitals run on Windows 95. Legal documents, court cases, lawyers, that all runs on Windows 95. And then imagine one day someone publishes this code to the whole internet, and it basically teaches you how to hack any Windows 95 computer in the world.
So now Windows 95, which runs the world, is not secure anymore. It's insecure. So in this metaphor, the way that our whole society has been constructed is like sitting on top of this box called civilization 2000s, right? Like, we've sort of been living on an early-2000s world stack of assumptions: that signatures on paperwork are our actual signatures, and photographic evidence is actual photographic evidence, and people's voices are real and can only represent their own voice.
But suddenly, with AI, we just collectively undermined that set of assumptions. And so what do you do when this happens? Well, you don't try to pretend, let's all keep running the world on Windows 95.
And this moment with AI is forcing a kind of rite of passage. Humanity has to kind of go through a bar mitzvah, or bot mitzvah, to upgrade the systems that we have been relying on to accommodate the new assumptions. And we've done those upgrades before in democracies. You know, when the printing press came out, the printing press killed the previous forms of government, the feudal governments,
and made way for democracies, first through a really unstable period, and then ultimately you could have public education, you could have the fourth estate, news articles. It forced this reorganization of what kind of governance we need to live in. We are in this uncomfortable, but we have to do it, adaptation period where we need to upgrade the basic legal and philosophical mechanisms. We have to come up with new meaning for what is evidence in a world where AI can generate that evidence.
And there are ways of doing that. We could live in a world where any media you see on the internet will only be on the internet if it's watermarked, because then we know whether it was real. So there are things like this that are the building blocks, the puzzle pieces, of this upgrade, but there are about a million pieces that have to happen. And I know that can sound daunting to people, but I almost want us to be collectively saying, okay, we're going to hold hands together.
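[One of those puzzle pieces, watermarking AI-generated text, can be illustrated with a toy version of the statistical "green list" scheme proposed by Kirchenbauer et al. This sketch is purely illustrative: the tiny vocabulary and random "generator" are hypothetical stand-ins, since real implementations bias a language model's logits.]

```python
# Toy statistical text watermark: a watermarked generator only picks tokens
# from a pseudo-random "green list" seeded by the previous token. A detector
# counts green tokens; ~50% suggests ordinary text, ~100% suggests watermarked.
import hashlib
import random

VOCAB = [f"w{i}" for i in range(1000)]  # hypothetical tiny vocabulary

def green_list(prev_token: str, fraction: float = 0.5) -> set:
    """Deterministically partition the vocabulary, seeded by the previous token."""
    seed = int.from_bytes(hashlib.sha256(prev_token.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * fraction)))

def generate_watermarked(n: int) -> list:
    """Stand-in 'generator' that always chooses from the green list."""
    tokens = ["w0"]
    for _ in range(n):
        tokens.append(random.choice(sorted(green_list(tokens[-1]))))
    return tokens

def green_fraction(tokens: list) -> float:
    """Detector: what fraction of tokens fall in their predecessor's green list?"""
    hits = sum(t in green_list(p) for p, t in zip(tokens, tokens[1:]))
    return hits / (len(tokens) - 1)

print(green_fraction(generate_watermarked(200)))                   # ~1.0: watermarked
print(green_fraction([random.choice(VOCAB) for _ in range(200)]))  # ~0.5: not watermarked
```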
We've got to go through this transition, and yes, it's going to be a little bit rocky, and we have to make these changes together. I know that a big part of what you do is just creating public awareness, but you also went to the White House to help put together an executive order around this stuff late in 2023.
Tell me what that order actually, in practical terms, will do. Like, will it slow down this process? Will it actually create actionable protections for us? Or is it just, I don't know, I mean, again, it's an executive order. It's not law, you know, congressional law. What does it do?
Well, so there's multiple parts to answering this question. I mean, this is, I think, like 111 pages. It was done in record time, in six months. It touches algorithmic bias, so AI that's used in current systems that are biased, and how to deal with those issues. It deals with AI and biological weapons, and needing to lock down the supply chains for where people can get dangerous materials, and saying, you know, we need to handle that better.
It deals with AI and the next GPT-5 and GPT-6 systems. It says that if you train a system that uses more than 10 to the 26, I believe, FLOPs, or floating-point operations, to use the technical jargon, then you have to notify the government. That's basically like saying, if you're building a nuclear weapon that's really powerful, the government has to know.
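[For a sense of scale: a common back-of-the-envelope heuristic, an assumption here rather than anything stated in the order itself, estimates training compute as roughly 6 × parameters × training tokens. The model sizes below are hypothetical round numbers, just to show where a 10^26 FLOP threshold would start to bite.]

```python
# Rough training-compute estimate via the common heuristic C ~ 6 * N * D,
# where N = parameter count and D = number of training tokens.
def training_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens

THRESHOLD = 1e26  # the executive order's reported notification threshold

for params, tokens in [(7e9, 2e12), (70e9, 2e12), (1e12, 20e12)]:
    c = training_flops(params, tokens)
    status = "must notify" if c >= THRESHOLD else "below threshold"
    print(f"{params:.0e} params, {tokens:.0e} tokens -> {c:.1e} FLOPs ({status})")
```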
But to your real point, your real question you're asking is, what can that executive order do? Because it's not law, it's an executive order. It's not legally binding for making sure that all the companies have to do all these things. A lot of it is changing what's called federal terms and conditions. So to get federal funding, if you're a biology lab, you will not be able to get that funding if you don't do these new sort of protective measures for the dangerous biological materials.
So what that's doing is using the leverage of the government and its funding power to start to incentivize different aspects of the supply chains of the world, educational environments, banks, etc. to do more of the things that are AI resilient. So think of it as a movement and a signal, like a big bat signal blasted into the sky that says the US government is taking AI seriously.
Now, it's not the security blanket that suddenly makes the world safe. It's not like OpenAI suddenly stops everything they're doing and, before they do more research, they study the executive order. They're still racing to build AI as fast as possible. And we mentioned the nuclear metaphor. How did we get to nuclear proliferation safety? How did we get to nuclear non-proliferation and controls?
Back then there were also a lot of what are called Track II dialogues, so informal conversations between American nuclear scientists and, back then, Soviet nuclear scientists, about basically making sure that we had safer controls on nuclear weapons, that they couldn't accidentally go off. I'm happy to report that, informally, some of those dialogues are happening between Chinese AI scientists and US AI scientists about the risks.
As I say all this, is this adequate to where we need to go? No, it is not. It is a small drop in the pond compared to what needs to happen. What we really need now is for people to demand from their lawmakers that we take these issues seriously. And I think things like liability as a regulatory framework are powerful, because people understand it.
You, as an AI company, shouldn't be worried about being liable for the harms if there are not going to be any harms or risks. So if you don't think there are risks, then go ahead and release it. But if you do think there are going to be risks, and you're liable for them, what that does is it has everybody move at a slower pace, at the pace at which we can get this right.
We're going to take a quick break, but when we come back, what Tristan thinks it'll take for the world to unite against the dangers of AI, and how he stays motivated in the face of such an enormous challenge. Stay with us. I'm Guy Raz, and you're listening to How I Built This Lab. On How I Built This, we love to highlight businesses that are doing things a better way. That's why, when I found Mint Mobile, I just had to share.
Mint Mobile ditched retail stores and those overhead costs and instead sells their phone plans online and passes those savings onto you. Right now, Mint Mobile has wireless plans starting at just $15 a month that's unlimited talk and text plus data for $15 a month. Before Mint Mobile, I was paying hundreds of dollars a month for my family's cell phone plan and I still dealt with dropped calls and moody customer service agents, not anymore with Mint Mobile.
To get your new wireless plan for just $15 a month, and get the plan shipped to your door for free, go to MintMobile.com/built. That's MintMobile.com/built. Cut your wireless bill to $15 a month at MintMobile.com/built. Additional taxes, fees, and restrictions apply. See Mint Mobile for details. With everyone fighting for attention, how can your business stand out? Easy. Get Constant Contact. Constant Contact has helped millions of small businesses stand out and see big results fast.
Constant Contact makes it easy to promote your business with tools like email and SMS marketing, social media posting, and even event management. With Constant Contact, you'll reach new audiences, grow your customer list, and communicate more effectively to sell more and fast-track growth. Don't know much about marketing? Constant Contact's writing assistance tools and automation features help you say the right thing every time.
You can send with confidence, knowing your emails are actually reaching your customers, thanks to Constant Contact's 97% deliverability rate. Tackle any challenge with expert live customer support, plus everything's backed by their 30-day money-back guarantee. So get going and start growing your business today with a free trial at constantcontact.com. Just go to constantcontact.com right now. Constant Contact: helping the small stand tall. Constantcontact.com.
Welcome back to How I Built This Lab. I'm Guy Raz, and I'm talking with Tristan Harris, co-founder of the Center for Humane Technology. Tristan has compared the development of AI to the development of nuclear weapons, but in some ways the AI problem is even more complicated. I keep thinking about the nuclear analogy, right? Because there are nine nuclear powers, and probably will be 10 with Iran eventually, and we're talking about states, nation-states.
And all of them pursued this, more or less, to increase their power. This is different, because it's not just countries. It's not like it's just China or Iran or the United States or North Korea. It's individual companies. It's individual people. I mean, not to say that some guy working in a basement in Ukraine or Belarus is going to build something as effective as what OpenAI will do.
Every day there's a new company that is researching generative AI capabilities and what they might be able to build. So how do you create mechanisms to control all of that? Yeah, I want to say that, you know, you could have been there in 1945, and you see the first nuclear bomb go off, you get that there's going to be a nuclear arms race, and you could say,
I'm going to throw up my hands, the world is over. Every country is going to get a nuclear weapon, there's going to be conflict, and then there's going to be a nuclear escalation, and the world is going to be over. Yeah. Notice that we made it through that. It's a miracle we made it through. Even for the next 40 years, it didn't feel like that was going to happen.
That's right for a long time. Yeah. And it didn't happen just because like humans are good or humanity got lucky. There's a lot of people who worked very hard.
There was the Pugwash movement. There was the Russell-Einstein Manifesto. There was the Union of Concerned Scientists, the Bulletin of the Atomic Scientists, all the nuclear non-proliferation work, you know, building satellites that could detect when people are moving nuclear weapons around, doing better controls and understanding of all the sources of uranium in the world.
There was a lot of global infrastructure to try to have a better understanding of safety and control, of what would make a world with nuclear technology safe. And that came from a lot of people working really hard. So now, I want to say, the situation looks pretty similar.
And the race towards artificial general intelligence, going faster every day, looks pretty bleak. It does. It's not as tractable or easy as nukes, because back then you needed to have state-level resources and access to uranium, which is a very specific and hard-to-find thing, not easy to get.
In this case, what uranium was for nuclear weapons, advanced Nvidia GPU chips are for AI. So when you see that the Biden administration has created the CHIPS Act, and is actually restricting sales of Nvidia chips to China, that's basically like saying,
we need to start controlling and looking at the global supply and flows of Nvidia GPU chips. Now, how do you get out of this? With a new union of concerned AI scientists, and a movement of tech engineers, and a movement of the public and legislators that are calling for action. And so there's going to be that kind of effort here.
So what does that effort look like? Like, what do we need to do, you know, to prevent the AI version of a nuclear catastrophe, especially when so many of these AI tools are publicly available for anyone to use?
We need there to be different norms around that, where we probably don't want to open-source the really advanced AI systems that are coming. Think of it this way, for those who don't know: by the way, we're talking about open-source AI models, which are different from open-source code. Open-source code is more safe and more secure, because with Linux and open-source code, way more people look at the code, they can identify the bugs, they can improve the code. It makes the overall thing safer, more secure, more trustworthy, because it's so transparent.
But AI models that are open source mean that anybody can retrain them to do even more dangerous things. So, for example, Facebook released Llama 2, their open-source model, and they tried to tune it to be safe. So if you ask it, how do I make a biological weapon, it will not answer. It would say, sorry, I can't answer that. Yeah. But once it's out there in the open,
I won't go into technical details, but basically, for about a hundred dollars, you can retrain all the safety controls off of it. So you can say, be the worst, evil version of yourself, your evil twin personality, and it'll suddenly happily answer any questions about biological weapons. Now, it's not smart enough to have really deeply accurate instructions about how to do that, but we probably don't want to be releasing Llama 3, Llama 4. And Meta, you know, Mark Zuckerberg, has publicly stated within the last week or so
that he wants to build open-source artificial general intelligence, which is the most dangerous thing you could possibly do. You know, I still think most of us can't fully imagine how quickly our lives are going to change. And it's already created chaos, a certain level of chaos, but manageable chaos. And I
don't think that most of us can imagine what could happen. And sometimes I wonder, is it effective to scare people, or to create these kinds of, you know, doomsday scenarios in people's minds? But at the same time, I think about, and you reference this in your talk, this film The Day After that came out in like the mid-80s. I was like eight or nine years old when it came out, and for the life of me, I don't know why my parents let me watch it with them. And I was terrified.
I remember the scenes of the bombs exploding in Kansas City, Missouri, and it was just terrifying. I had nightmares for years. And that film really did, I mean, not to say that it resulted in major treaties, but it did create a sense, it built a consciousness, at least in the United States. Because at the end of the film, it says: this is just a representation of what could happen in nuclear war, and in fact, it will be much, much worse than what you've seen.
And I don't know, is there a world where it's worthwhile creating something like a Day After around AI, so people just understand what we're possibly facing? Yeah, I'm so glad you're bringing this up. And you know what's interesting about it is, you know, Reagan had military advisors saying, we can win a nuclear war, we just have to keep building bombs and have more of them than they do. Yeah, exactly.
And if one side believes that the other side actually believes that they're going to try to win a nuclear war, that's what creates the risk: everyone's on hair-trigger alert for anything that looks like it could be a nuke, and then something that's an accident, like a flock of birds coming across the radar, and you almost hit the button.
So what we needed to do was create a new trustworthy basis for coordination, so that the US and Russia would trust that they're actually both so existentially terrified by Armageddon that both of them would fear everyone losing more than they fear "me losing to you." Yeah. And I think what that film The Day After did is it painted a picture of how everyone loses if this happens.
So this actually can have a really big impact. And the point of this, from a metaphorical stance, is that we, as public communicators, you, Guy, with this podcast, and people who are listening to this,
we have to make the dark future legible so that we can steer towards the light. If we don't have the dark future be legible, and people just want to focus on AI making cancer treatment better and giving us solutions to climate change, but don't really see how the incentives pull us into racing to roll out capabilities as quickly as possible and destabilizing society, if we're not honest with ourselves about that,
we're going to get the thing that we're not honest with ourselves about. And it's by being honest with ourselves about that risk side that we can actively, collectively choose to steer towards the light side. And that's if all the open-source developers agree on those risks. That's if China agrees on those risks. That's if the UAE, which is also building an open-source model called Falcon, agrees with those risks. That's the world that we need to create. How much time do we have?
Well, like many things with climate change, too, we should have started more than a decade ago. The next best time is today. I just think the gravity of this is enormous, and how quickly it's happening is enormous, and we have very few choices in this game. You know, I feel disempowered, because I hear you, and, yeah, in my mind it's like, it's the next 12 months, it's like everything has to happen. And don't get anxious about that.
Just say, okay, what can we all do over the next 12 months that amounts to the maximum set of things shifting the incentives? You know, this isn't a problem with a solution. This is a predicament with responses and ways of navigating, and this is about, how do we find the wisest, clearest, steadiest-handed path through this that we can? And I think that we all have to stay resolved and calm and say, what will it take, you know, for the world to go well, and work every day at assuring that outcome.
By the way, if you're interested, you can text AI to 55444. We're interested in gathering sort of public support and power around demanding the kind of AI guardrails and safety that we want. There are many groups that you can get involved with online, you know, demanding from Congress and legislators that we need better mechanisms of having liability for AI systems.
There's a lot that can happen, but really, just sharing this around and having more people talk about it is one of the best ways to make an impact. I imagine that you get attacked here in the Bay Area, and you come from that world. I imagine you get attacked, not just praised. I mean, a lot of people love what you do and your message, but there are probably people who really hate what you do, and
claim that you're overhyping this. And, you know, you're not making money off this, right? This is a nonprofit organization. I mean, what is your incentive? What drives you to keep doing this, even with all the pushback that you get? It's really simple, Guy. It's love. Like, I want to be able to live in a future, and have other human beings and life forms be able to enjoy and love the future that we're creating. Just like we have to care about the planet
and, you know, the health of the environment underneath our feet that supplies our air, we also have to care about protecting the social fabric, trust in the shared reality upon which everything else depends. Yeah. Do you think that humans are going to be around in 500 years? I don't know, is the honest answer. I don't know.
In our work at the Center for Humane Technology, we often think about this moment as an initiatory threshold, like a rite of passage for humanity, that we cannot keep doing technology the way that we have been doing it. You know, we did DuPont chemistry, whose motto was "better living through chemistry," and we all loved that. We reverse-engineered this whole field of organic and inorganic compounds, and we can synthesize anything with chemistry.
And we got a lot of amazing things out of that, that everyone's grateful for, but we also got forever chemicals. And forever chemicals literally never go away. That's why they're called forever chemicals: your body can't degrade them. We all have them in our bodies, every single one of us, you, me, and everybody listening to this. If you go to Antarctica right now and open your mouth to drink the rainfall,
you will have levels of forever chemicals entering your body that are higher than what the current EPA says is safe for human health. Now, we have created this mess. The answer isn't that we should be self-hating primates who don't want to build any technology. It's: how do we do technology without externalities? How do we do social media without destroying the mental health of teenagers? How do we do smartphones without destroying attention spans?
How do we do food packaging without creating forever chemicals and plastics and plastic pollution? I think we can be pro-technology and anti-externalities, and that's the nuanced position that I want everybody to be in. Rather than say you're either for tech and for acceleration or you're a decel, it's like, no, I'm for getting this right. I hope we get this right as a species. Me too. I really do. Me too. Tristan, thanks so much. Thanks so much, Guy. Really appreciate it.
That's Tristan Harris, co-founder of the Center for Humane Technology. And thanks for listening to the show this week. Please make sure to click the follow button on your podcast app so you never miss a new episode of the show, and as always, it's free. This episode was researched and produced by Alex Cheng with editing by John Isabella. Our music was composed by Ramtin Arablouei. Our audio engineer was Neal Rauch.
Our production team at How I Built This also includes Carla Esteves, Chris Maccini, J.C. Howard, Katherine Sypher, Carrie Thompson, Malia Agudelo, Neva Grant, and Sam Paulson. I'm Guy Raz, and you've been listening to How I Built This Lab. If you like How I Built This, you can listen early and ad-free right now by joining Wondery Plus in the Wondery app or on Apple Podcasts.
Prime members can listen ad-free on Amazon Music. Before you go, tell us about yourself by filling out a short survey at Wondery.com slash survey. In the 1980s, Frank Farian was riding high as a successful German music producer, but he was bored. German pop was formulaic, dull, and oh so white. Frank had bigger dreams, American dreams. He wanted to create the kind of music that would rival larger-than-life artists like Michael Jackson or Run-DMC.
So he assembled a hip-hop duo, two once-in-a-lifetime talents who were charismatic, full of sex appeal, and phenomenal dancers. The only problem? One very important element was missing, but Frank knew just how to fix that. Wondery's new podcast Blame It on the Fame dives into one of pop music's biggest controversies. Milli Vanilli set the world on fire, but when their adoring fans learned about the infamous lip-syncing, their downfall was swift and brutal.
With exclusive interviews from frontman Fab Morvan and his producers Frank Farian and Ingrid Zaghi, this podcast takes a fresh look at the exploitation of two young black artists. Follow Blame It on the Fame wherever you get your podcasts. You can listen to Blame It on the Fame early and ad-free by joining Wondery Plus.