
The AI arms race to build digital god

Oct 24, 2024 · 47 min

Episode description

Today, we’re going to try and figure out "digital god." I figured we’ve been doing Decoder long enough, let’s just get after it. Can we build an artificial intelligence so powerful it changes the world and answers all our questions? The AI industry has decided the answer is yes. In September, OpenAI’s Sam Altman published a blog post claiming we’ll have superintelligent AI in “a few thousand days.” And earlier this month, Dario Amodei, the CEO of Anthropic, published a 14,000-word post laying out what he thinks such a system will be capable of when it does arrive, which he says could be as soon as 2026. Verge senior AI reporter Kylie Robison joins me on the show to break it all down.

Links:

Machines of Loving Grace | Dario Amodei
The Intelligence Age | Sam Altman
Anthropic’s CEO thinks AI will lead to a utopia | The Verge
AI manifestos flood the tech zone | Axios
OpenAI just raised $6.6 billion to build ever-larger AI models | The Verge
OpenAI was a research lab — now it’s just another tech company | The Verge
California governor vetoes major AI safety bill | The Verge
Inside the white-hot center of AI doomerism | NYT
Microsoft and OpenAI’s close partnership shows signs of fraying | NYT
The $14 billion question dividing OpenAI and Microsoft | WSJ
Anthropic has floated $40 billion valuation in funding talks | The Information

Credits:

Decoder is a production of The Verge and part of the Vox Media Podcast Network. Our producers are Kate Cox and Nick Statt. Our editor is Callie Wright. Our supervising producer is Liam James. The Decoder music is by Breakmaster Cylinder.

Learn more about your ad choices. Visit podcastchoices.com/adchoices

Transcript

Support for Decoder comes from AT&T. What's it like to get the new iPhone 16 Pro with AT&T Next Up Anytime? It's like when you first light up the grill and think of all of the mouthwatering possibilities. Learn how to get the new iPhone 16 Pro with Apple Intelligence on AT&T, and the latest iPhone every year, with AT&T Next Up Anytime. AT&T: Connecting changes everything. Apple Intelligence coming fall 2024 with Siri and device language set to US English.

Some features and languages will be coming over the next year. Zero dollar offer may not be available on future iPhones. Next up anytime feature may be discontinued at any time. Subject to change. Additional fees, terms, or restrictions apply. See AT&T.com slash iPhone for details.

Amgen, a leading biotechnology company, needed a global financial company to facilitate funding and acquisition to broaden Amgen's therapeutic reach, expand its pipeline, and accelerate bringing new and innovative medicines to patients in need globally. We found that partner in Citi, whose seamlessly connected banking, markets, and services businesses can advise, finance, and close deals around the world. Learn more at citi.com slash client stories.

Support for Decoder comes from Vanta. Do you know the status of your compliance controls right now? Like literally right this moment. You know that real time visibility is critical for security. That's where Vanta can help.

Vanta automates compliance for SOC 2, ISO 27001, and more, saving you time and money while also helping you build customer trust. Over 8,000 global companies like Atlassian, Flo Health, and Quora use Vanta to manage risk and prove security in real time. Learn more at vanta.com slash decoder. That's vanta.com slash decoder.

Hello and welcome to Decoder. I'm Nilay Patel, editor-in-chief of The Verge. Decoder is my show about big ideas and other problems. Today, we're going to try and figure out digital god. I figured we've been doing Decoder long enough, let's just get after it. Can we build an artificial intelligence so powerful that it changes the world and answers all our questions?

You will not be surprised to know that the AI industry has decided the answer is yes. In September, OpenAI's Sam Altman published a blog post claiming we'll have superintelligent AI in just a few thousand days. And earlier this month, Dario Amodei, the CEO of Anthropic, published a blog post laying out what he thinks such a system will be capable of when it does arrive, which he says could be as soon as 2026.

The blog post is 14,000 words long. Dario has a lot of ideas. What's fascinating is that the visions Sam and Dario lay out in their posts are so similar. They both promise dramatic superintelligent AI that will bring about massive improvements to work, to science and healthcare, even to democracy and prosperity, to happiness. Digital god, baby. But while the visions are similar, the companies in many ways are openly opposed. Anthropic is the original OpenAI defection story.

Dario and a cohort of his fellow researchers left OpenAI in 2021 after growing concerned with the company's increasingly commercial direction and approach to safety, and they created Anthropic to be a safer, slower AI company. The emphasis really has been on safer, which has sometimes had a pretty dramatic effect on the company's reputation. Just last year, a major New York Times profile of Anthropic called it, quote, "the white-hot center of AI doomerism."

But the launch of ChatGPT and the generative AI boom that's followed have kicked off a colossal tech arms race, and Anthropic is as much in that game as anyone else. It's taken in billions in funding, mostly from Amazon, and it's built Claude, a chatbot and language model to rival OpenAI's GPT-4. And now Dario is writing long blog posts about spreading democracy with AI. So what's going on here?

Why is the head of Anthropic suddenly talking so optimistically about AI when his company was previously known for being the safer, slower alternative to the progress-at-all-costs OpenAI team? Is this just more hype to court prospective investors or researchers? And if AGI really is just around the corner, how are we even measuring what it means for it to be safe?

To break it all down, I brought on Verge senior AI reporter Kylie Robison to discuss what it means, what's going on in the industry, and whether we can even trust all these AI CEOs to be telling us what they really think. All right, digital god and capitalism, but mostly digital god. Here we go. Kylie Robison, welcome to Decoder. Thank you for having me.

I'm excited to talk to you about digital god and the race to either build it or spend money on building it, and whether digital god will be cool. Right. It feels like there's a lot of debate on whether digital god will be cool or not. Where do you come down? That's a great way to start. Do I think digital god will be cool? Like, chill? What's the vibe check on digital god? I think, you know, are humans good and chill? It's just a philosophical debate at this point. I hope so. I don't think so.

Yeah. "Is digital god chill?" as a motivating question for Silicon Valley right now, I think, really sums up a lot of things. You have described this as tribalism. You've described it as religious. You've described it as ideological in the conversations we've had. At a high level, just explain what's going on here.

It is very tribal, and it's something I've experienced covering it. The side that's building it increasingly is saying, listen, we are building something that is going to transform the world. As one CEO put it, it could spread democracy. It could cure diseases, not just, like, diseases, but PTSD and anxiety, really nebulous things.

They truly believe this and they are pushing hard on this narrative, whereas a whole other side is saying that this is all a scam and that they shouldn't be trusted at all. So both sides seem to be completely at odds with each other. Yeah. And that conversation is not chill. Regardless of whether digital god is chill, the debate right now seems ferocious.

Definitely. And I am really sympathetic to both sides. I was just listening to another podcast about tribalism, which is why it's at the front of my mind. Both sides want the same thing, which is the betterment of humanity. One side thinks that AI is going to make humanity worse, and one side thinks it's going to make it better.

So into this steps Anthropic, and Anthropic CEO Dario Amodei. Anthropic is famously the first of the "we're leaving OpenAI to start a safer AI company" companies. There are now lots of them, but they were the first. He's trying to split the difference. He's got this long blog post called Machines of Loving Grace, and he is saying, like, we're trying to build the safest one, but look at all this cool stuff we could do if we can pull it off. What is going on there?

So it was about 14,000 words where Dario says, you know, I know that this is very fantastical and crazy to say, but I'm going to say it anyways, because I think it's worth saying: we could shrink about 100 years of scientific breakthroughs and progress into five to 10 years with AGI. He doesn't like to call it AGI, artificial general intelligence; he thinks it's sort of a crazy term. He likes to call it powerful AI. It can cure PTSD. It can spread democracy.

It can do all of these crazy things, if only humans weren't so limited in terms of compute. And yeah, it's really selling: this is the future we can have if we work hard enough, if we achieve AGI, if we achieve it in a chill way. This is right next to OpenAI, which is making many of the same claims. Sam Altman wrote his own blog post a few weeks ago saying within a few thousand days we might have AGI, and here's all this stuff we could do. They've obviously just raised a lot of money.

There's ferocious competition for talent in this industry. We keep calling it digital god because it's funny to say, but is the end state the same? Are they all racing to the same place? Yes. I believe DeepMind's first mission was: build AGI. OpenAI's mission: build AGI. Anthropic: build AGI. They have stated very clearly that's what they want to build.

I don't know if they would agree with our joke about digital God, but it is more fun to say. Yeah, they all want to build general intelligence because they see that as a way to change the world in many different ways rather than only changing one sector. They could generally change the entire world with general intelligence.

Can you actually explain the mechanism of that to me? I've used these tools today. Some of them are very powerful. They can certainly make a video of Will Smith eating spaghetti at ever increasing levels of fidelity. But I don't know how they spread democracy.

It's, again, 14,000 words. He really sells this in a way that's about tiny breakthroughs. For science, he quoted this person who said, you know, it's all these tiny breakthroughs that get you to larger breakthroughs. So it can make us more efficient in terms of our processes.

And that can be said for large-scale data analytics, for finance, for medicine, for a lot of different sectors. So what they see is a model that can understand and analyze and parse through large, large amounts of data in ways humans can't, and they see all the ways that can change the efficiencies of certain sectors and get us humans to more breakthroughs.

But not only that, they are hoping that it can do this autonomously, all the time. I think he says a million of the smartest people in a data center is how he views this. It's like cities of people, but they're just AIs working all the time on these issues. That's how they view it.

So this is a show about decision-making, and every time I hear a pitch like that, it occurs to me that the goal is to give up some enormous amount of decision-making. I don't know how to distribute food throughout our city or lay out the electrical grid or whatever it is, and we're just going to let the robots in the data center do it, and wherever the data center is, that's fine.

Don't worry about it, and then we'll be free because the AI will just do it. That's the pitch, right? That we'll just hand over a bunch of control to an AGI. I mean, that's why I keep calling it digital god. What they like to say is that when it comes to really complex issues, it will work all day, hours and hours, thinking through the issue, and then it will come back for clarification.

So that's the caveat: no, it's not going to control everything, but it will handle those rote tasks and then come back to you and be like, okay, I thought about this. Can you answer these questions? What do you think about this? They think of it as a partner in that way. But ultimately, yeah, they don't want you to have to check up on it all the time. That's true.

But a partner to who? A partner to what they claim are some of the world's smartest people: people working on cancer, people working on autonomous vehicles, which I can get into. The promise I sort of understand, right? We'll have ultra-powerful computing systems that can reason and help us solve problems, and they'll never get tired or have feelings about what we're using them for.

I read these blog posts, I read Sam's, and it seems like the part where a bunch of people still have to make decisions is fully swept under the rug. Yeah, I read the entire Amodei blog and I felt starry-eyed at how cool this utopia might be. But where are we at on getting there? Like, what are the answers to actually getting there?

And I have the job of explaining this to readers who are extremely skeptical, because they're like, it can't even count the number of Rs in strawberry. What are you talking about? I don't feel like they're doing a great job at convincing us that the tools that we have today are much different, or that we won't still need humans. Like, I just don't see a coherent path other than: don't worry, we're building utopia. Don't worry about it. Just give us money.

That's the thing, right? Just give us money. Is that why Dario wrote this? Is that why Sam wrote his? Is that why Marc Andreessen wrote his? We can never know for certain unless they say out loud, "this is why I wrote this." And I think, you know, it's sort of my job to look at this and not just take it at face value, because when I read it, I thought, well, Anthropic is reportedly looking for funding right now.

And the competition has never been more fierce. Everyone is leaving OpenAI to build an even safer AI company every day. And Mira Murati, their CTO, is reportedly starting her own company. And then another VP of research there might also be starting their own company. That's just, like, in the last month.

So you have to compete for money. You have to compete for talent. You have to compete for compute, to a lesser extent. But money and talent are really where it's at right now. And safety is not, like, the sexiest pitch. I believe that these AI executives believe what they're saying, that they're going to build utopia. But as for why Dario released his blog at the time that he did, which was out of step for Anthropic (he says at the top, we don't usually do this),

I think it does have a lot to do with competition and market pressures and funding. We need to take a quick break. We'll be right back. Support for this podcast and the following message is brought to you by E-Trade from Morgan Stanley. Take control of your financial future with E-Trade, no matter what kind of investor you are. Our tools and resources can help you be ready for what's next.

Now when you open an account, you can get up to $1,000 with a qualifying deposit. Terms apply. Learn more at etrade.com slash vox. Investing involves risks. Morgan Stanley Smith Barney LLC, member SIPC. E-Trade is a business of Morgan Stanley. Vox Creative. This is advertiser content from Zelle.

When you picture an online scammer, what do you see? For the longest time, we'd have these images of somebody sitting crouched over their computer with a hoodie on just kind of typing away in the middle of the night. And honestly, that's not what it is anymore. That's Ian Mitchell, a banker turned fraud fighter. These days, online scams look more like crime syndicates than individual con artists and they're making bank. Last year, scammers made off with more than $10 billion.

It's mind blowing to see the kind of infrastructure that's been built to facilitate scamming at scale. There are hundreds, if not thousands of scam centers all around the world. These are very savvy business people. These are organized criminal rings. And so once we understand the magnitude of this problem, we can protect people better.

One challenge that fraud fighters like Ian face is that scam victims sometimes feel too ashamed to discuss what happened to them. But Ian says, one of our best defenses is simple. We need to talk to each other. We need to have those awkward conversations around what do you do if you have text messages you don't recognize? What do you do if you start getting asked to send information that's more sensitive?

Even my own father fell victim to, thank goodness, a smaller-dollar scam, but he fell victim, and we have these conversations all the time. So we are all at risk and we all need to work together to protect each other. Learn more about how to protect yourself at vox.com slash zelle. And when using digital payment platforms, remember to only send money to people you know and trust.

Support for Decoder comes from Shopify. Always be selling: it's the de facto motto at the core of most businesses out there. But commitment to moving a product and actually doing it successfully are two different things. Sustainable growth isn't always easy, and partnering with the right platform can help you achieve more than going it alone. So how do you find that partner? You might want to check out Shopify.

Shopify is an all-in-one digital commerce platform that wants to help your business sell better than ever before. It doesn't matter if your customers spend their time scrolling through your feed or scrolling past your physical storefront. Shopify says they can help you convert browsers into buyers and sell more over time. And their Shop Pay feature may even convert more customers and end those abandoned shopping carts for good.

There's a reason companies like Allbirds turn to Shopify to sell more products to more customers. Businesses that sell more sell with Shopify. Want to upgrade your business and get the same checkout Allbirds uses? You can sign up for your $1 per month trial period at shopify.com slash decoder. You can go to shopify.com slash decoder to upgrade your selling today. Shopify.com slash decoder.

We're back with Verge senior AI reporter Kylie Robison. Before the break, we were talking about Anthropic CEO Dario Amodei's very long, very intense blog post discussing the benefits of superintelligent AI. But a big part of developing superintelligent AI is safety. For AI to benefit humanity, it needs to be safe. You'll hear AI researchers talk about this using the word alignment: AI needs to be aligned with humanity and our best interests.

But what does it even mean to develop safe AI? That's a big question. And that's why it's such a big deal that Anthropic, which was the highest profile of the safer AI companies, is starting to talk more about how a superintelligent AI could change the world, and not just focusing on how it might go wrong. Let's talk about safety broadly, and then I want to talk about Anthropic specifically.

So the idea of AI safety is: whoops, we built a reasoning robot that can take action in the world all by itself. That thing had better be aligned with us, right? It had better follow the rules we lay out for it. OpenAI famously overthrew Sam Altman for 25 minutes because their board thought that he's not trustworthy. But now he's back, and everyone's quitting because they want to start safer AI startups.

What is going on there? Is OpenAI just not building safe AI? Is it not safe enough? What are the dynamics? That's funny, because I wrote about this. I said OpenAI is no longer a research lab; it is a tech company like everyone else. And I had researchers reach out to me and disagree. My take is that it's like academics versus, like, a product manager at Meta.

They're extremely different people. So this is a company that was started to do deep research on AI, filled with a lot of academics and incredibly smart people who just wanted to do that research. They're not exactly looking to move fast, break things, build products. That's not exactly why a lot of them went there. And the market pressure to build these products on these powerful models, they might deem that unsafe.

It is just a philosophical debate. It is that tribalism, every day. And some people are like, I don't care, I think it's really cool that we can build products for everyone to use on these LLMs that we have spent millions of dollars and so much time building. So the people who are leaving are leaving because they deem OpenAI not safe. It's a debate. And unfortunately, OpenAI is not very transparent about its processes.

So it's hard to judge from an outsider's point of view. We have to rely on these people leaving and saying it's not safe. So the culture of these new companies would say, we're safer. How are they measuring safety? Is it just everyone saying it, so we believe it? Is there a test? Is there, like, an SAT for AI safety? There are a whole lot of benchmarks. And I got to write an article about reward hacking, which was my favorite thing, which is basically the AI lying to you. Really fun stuff.

So yeah, they do a whole bunch of benchmarks. They do a whole bunch of safety tests. My opinion here, what I'm gleaning from this, is that safety is moving slowly and thoughtfully versus moving fast to launch things. I think a lot of people see the AI safety debate as: don't make racist pictures in Grok.

Or whatever, don't let Gemini make racist photos, and they're going to pull it down and say we're going to make sure we don't do it again. And there's just a combination of content moderation and prompt engineering that feels very familiar. That debate feels very familiar. And then there's the bigger problem, which is: what if these things take actions that we don't want them to take because we've given them control?

We have given control of the electrical grid to AI, and we know it's safe, which is the promise of the AGI system. And it feels like we can't solve the first one. Yeah. So how on earth are we going to solve the biggest one? Yeah. And I think that you can see why these researchers are so sensitive to a change in equilibrium, and why they're like, OK, OpenAI is not safe.

We've got to go to Anthropic, which takes these dangers much more seriously. And I think the broader public doesn't exactly see these dangers, because if it can't count the number of Rs in strawberry, how is it going to destroy the world? But a certain subset of these people take it very seriously. But no, I don't know how we get there.

So let's talk about Anthropic specifically. Dario's post is particularly interesting because Anthropic has the safety reputation, because they were the first of their kind to leave OpenAI and say we're building a safer one. But the post is: hey, I'm still building AGI. Even though we have this reputation, even though I want to go slow, and even though we care about safety, I'm chasing the same goal as OpenAI. Why do you think he's trying to walk that line right now?

I think that it's important to sell a utopia, and a dystopia is harder to sell. That was my take reading it, because I have yet to see this from Anthropic since they were founded.

We're going to build this utopia. It's been mostly we need to slow down and that doomer sort of personality that they've adopted. I think it really just has to come down to market pressures. They have to compete. They have to be as cool as Sam Altman. It's the drama, the intrigue, the building utopia. It's where you'd want to put your money. It's where you might want to work.

You wrote about Dario's post, and you wrote in that piece that Anthropic is looking to raise at a $40 billion valuation. OpenAI just raised $6.6 billion. Is all this money just for Nvidia GPUs? What are they spending it on?

Well, researchers cost millions and millions of dollars at this point because they're in such small supply and there was a story not that long ago that Mark Zuckerberg was emailing researchers directly to recruit them. There's a lot at stake for researchers. They're getting a lot of money. That's a huge chunk.

Yes, GPUs, cloud compute. It costs so much money to train these models. Someone likened it, just in conversation about this, to imagining you're leaving your AC on all the time at home, and then 1,000x that. You already know what your bill looks like when you leave your AC on too long.

It is so expensive to cool these GPUs, to run them all day long, and then people are also using your products, which are run on large language models that run on this compute. That's expensive as well. It's just a very, very expensive operation, and it eats up money because they're not making much money.

Yeah, that's the other part of this. How are any of these companies going to make money? How is Anthropic going to make money? I wrote about this in terms of agents. That's what everyone's building: Google, Anthropic, OpenAI. They're all building agents. That's kind of what we've been talking about, this autonomous AI that can do your work for you,

that can book travel for you, etc. I think that this is the next thing that they believe will be able to show off that these large language models are useful, and that they can also charge for. Do I actually think that they're going to be profitable anytime soon? No, I don't think that's coming anytime soon.

All these fundraising moments are happening right on top of each other, right? OpenAI just raised, xAI is raising, obviously Anthropic is looking. Is there a reason? Is it just coincidence, the life cycle of these companies? No, I think they're just running out of money. If they want to build the next frontier model, like Dario says himself, we are reaching models that are going to take a hundred billion dollars to train.

Like, they just need that money to train the next frontier models, and also these VCs really want to see the next GPT-5, right? So they need to rush quickly to get this compute, to spend this money, and they're just burning through it. Is there any chance that these companies are going to run out of money before they raise again? Like, they're burning it that fast and need to raise this much.

It does feel like these lines might converge faster than anyone hopes. Yes, I think so. I wouldn't say that I'm so well versed in funding and finance, but I think I can do normal math. And if they're losing billions of dollars hand over fist and they're only raising $6.6 billion,

and you're not making a profit, you're kind of screwed. You're going to run out of money. I don't think any of these companies are going to go under, but the smallest companies that don't have a Microsoft or an Amazon to fund them, I think those are the companies that are going to suffer. We've already seen that with Inflection, for example.

I want to come back to that, and how much these companies are reliant on the big companies, because there's a lot of complication there. But just big picture: here we are, famous tech CEOs are writing manifestos about building digital god so they can somehow spread democracy, and I'm getting the "this is all just deployed to raise money" vibe from you. Is it that simple? Is it that cynical?

No, I do believe that Altman and Dario actually believe that this is how AI is going to change the world. I do believe that the researchers who spend day in day out building this technology believe that's the future. I think that the timing of Dario's blog, it's weird. It's just weird. It's like, okay. Well, obviously this seems tied to the fact that you need to raise a lot of money.

And xAI just raised the most that anyone's ever raised, and then OpenAI raised the most that anyone's ever raised. Everyone's trying to build the next biggest model, and it costs a lot of money, and saying we're going to be really slow and chill doesn't really make people excited to invest.

And the devil's advocate position is: does he really need to do that? Does he need to write a blog to get people to invest? It's already so busy, and I just come back to the competition. Competition has never gotten stiffer. Let me ask you one very dumb question, and then I do want to talk about the big companies and how they're related to all this. Both Anthropic, OpenAI, the rest of them.

They're all kind of built on LLMs, right? Like, they're built on one very foundational technology, and the idea is that if we just throw more data and compute and time and electricity and money at it, we can just get there. We're just going to horsepower our way into an AGI. Then there's Meta's Yann LeCun, who's like, no, you can't. There are some other people who are very skeptical of this approach. Can they do it? Is this the right path? Is this even worth it?

Worth it? I mean, it's, again, so nascent, and there are so many ways this can be argued. I think that's what I find the hardest part about covering AI: it is just so easy to argue about the smallest things all day. They believe that you have to completely change the structure with which we build LLMs to reach AGI, and that this is not the path to take to reach it. Very smart people like Yann believe that no, you can't just horsepower your way through building AGI.

I think my answer is I would like to see. I would like to see proof. I am just asking every day for proof. And we don't have it. Show me digital God. Show me digital God. Show me a path to digital God. No, it's just like there's no path. It's just like, let's just keep going this way. And it should be fun. Yeah, it just strikes me that if you're, you know, an Andreessen Horowitz limited partner, you are probably on the order of like a college pension fund.

And you're like, so, digital god: if we just give you all the money, you'll make digital god, and that's going to return us how? And it doesn't seem like that loop is closing very fast. No, no one's making money, and we need more money to build the next thing, which might make us money by putting all the travel agents out of business. And just somewhere in there is a bunch of question marks, and it seems unclear to me how any of that gets resolved.

For us and for the listeners: I think that this is also really unclear to the people who are building it and investing in it. Because if you look at OpenAI's mission statement on their website, it has, like, a big pink box that says anything you invest should be considered a donation. So it is clear that, like, investors were like, ah, it'll be fine. And now they have to change from a nonprofit to a for-profit, because they're like, actually, I don't want to just donate.

I want some money back. So that's where we're at. And they're figuring it out. We have to take another quick break. We'll be back. Support for decoder comes from AT&T. What does it feel like to get the new iPhone 16 Pro with AT&T next up anytime? It's like when you first pick up those tongs and you're now the one running the grill. It's indescribable like something you've never felt before.

All the mouthwatering anticipation of new possibilities, whether that's making a perfect cheeseburger or treating your family to a grilled baked potato, which you know will forever change the way they look at potatoes. With AT&T next up anytime, you can feel this way again and again. Learn how to get the new iPhone 16 Pro with Apple Intelligence on AT&T and the latest iPhone every year with AT&T next up anytime.

AT&T: Connecting changes everything. Apple Intelligence coming fall 2024 with Siri and device language set to US English. Some features and languages will be coming over the next year. Zero dollar offer may not be available on future iPhones. Next up anytime feature may be discontinued at any time. Subject to change. Additional fees, terms and restrictions apply. See AT&T.com slash iPhone for details.

Support for this show comes from the Refinery at Domino. Location and atmosphere are key when deciding on a home for your business, and the Refinery can be that home. If you're a business leader, specifically one in New York, the Refinery at Domino is an opportunity to claim a defining part of the New York City skyline.

The Refinery at Domino is located in Williamsburg, Brooklyn, and it offers all the perks and amenities of a brand-new building while being a landmark address that dates back to the mid-19th century. It's 15 floors of Class A modern office environment housed within the original urban artifact, making it a unique experience for inhabitants as well as the wider community. The building is outfitted with immersive interior gardens, a glass-domed penthouse lounge, and a world-class event space.

The building is also home to a state-of-the-art Equinox with a pool and spa, world-renowned restaurants, and exceptional retail. As New Yorkers return to the office, the Refinery at Domino can be more than a place to work. It can be the magnetic hub fit to inspire your team's best ideas. Visit therefinery.nyc for a tour. Support for Decoder comes from Grammarly.

Ever been in a meeting and wondered, couldn't this have been an email? Well, next time it is an email, Grammarly can help you out with writing clearer and more efficient communications. Grammarly is a trusted AI writing partner that can save your company from miscommunication and all the wasted time and money that goes with it. Grammarly helps you improve the substance, not just the style, of your writing, by identifying exactly what is missing.

It can reduce unnecessary back and forth, resulting in less confusion, fewer meetings, and more clarity. According to Grammarly data, teams that use it report 66% less time spent editing marketing content, and 70% improved brand compliance across the company. Grammarly works where you work, from docs to messages to emails. It integrates seamlessly across 500,000 apps and websites.

For 15 years, Grammarly has helped professionals do more with their writing. You can join the 70,000 teams and 30 million people who trust Grammarly to get results on the first try. You can go to grammarly.com slash enterprise to learn more. Grammarly. Enterprise-ready AI. We're back with Verge senior AI reporter Kylie Robison. Before the break you heard Kylie mention a big piece of news from earlier this month,

that OpenAI is shifting towards a for-profit structure. That was part of OpenAI's recent $6.6 billion funding round. The switch to a for-profit company has to happen within two years, or those investors can ask for their money back. This is important for a very Decoder reason. If you're a Decoder listener, you know that structure is important. How companies like Anthropic and OpenAI are organized, who their investors are, how they plan to make money, and where all the compute comes from,

will have a huge impact on the kinds of products they build. It will affect how fast they release those products to stay competitive, and whether safety will take even more of a backseat in the future. If you believe that AI is going to usher in a utopia, as Sam Altman and Dario Amodei theorize, well, it increasingly looks like utopia depends on major cloud computing providers continuing to write the checks.

And whether other investors think there's a massive payout waiting for them on the other side of the race to build AGI. So I think that brings us to now, basically. OpenAI is converting to a for-profit. It seems that's very contentious. Just before we started speaking, there was both a big New York Times story and a big Wall Street Journal story about different aspects of that process, and mostly OpenAI's relationship with Microsoft.

So how much equity will Microsoft get in exchange for already being the biggest investor slash donor to OpenAI right now? And then how much more compute and how much more dependency will Microsoft have on OpenAI versus going its own way? There's a lot in there. My favorite piece is that if OpenAI does build AGI, it gets out of its Microsoft contract, which is cited as a goal, as an incentive, for OpenAI.

We should build digital god so we can get out of this Microsoft deal, which is hilarious, just on its face, hilarious. And then there's also people at OpenAI who are apparently complaining that Microsoft won't give it enough compute so it can train the next model and actually build AGI. What is going on here? Is this just that those two companies had a weird falling out after Sam got ousted and came back? Is it that OpenAI is totally dependent on Microsoft and there's friction there?

If Microsoft goes away, can OpenAI continue to succeed? So OpenAI needed money, because Elon Musk was a co-founder of OpenAI and said, actually, I am not into this anymore, bye. And he took all his money with him. They really needed money, and Microsoft saved them, and now OpenAI is in an awkward position where they really need Microsoft to survive, because that's who provides the bulk of their compute. They have an exclusive cloud partnership with them.

So now we have gotten to a point where, oh my gosh, Microsoft does not have enough compute. So they are not happy about that. Microsoft made one concession in this exclusive agreement to let them make a partnership with Oracle to get some more compute, which was rare.

But yes, Altman was ousted last year, and I think that really pissed off Satya. I think he was having a nice Thanksgiving break, and he had to go on CNBC and defend OpenAI, and he's like, we are just too dependent on this company for what I believe is the future of technology.

So we've got to create a backup plan, and I think that's where Inflection comes in, and that's what some of the New York Times article gets into. Mustafa Suleyman, who was the CEO of Inflection, is now the CEO of AI at Microsoft. I just think it's so messy. They both want to build the future and they both depend on each other.

It seems like, broadly, OpenAI is dependent on people writing it ever-bigger checks and getting more and more Azure compute time. That's a huge dependency for a company, right? They are completely dependent on these cloud companies, and they are realizing that, and they're trying to figure out how to be slightly less dependent.

I mean, Sam Altman is apparently going around the world trying to pitch his own multitrillion-dollar chip startup so he can own this portion of his business. I think they're scared that they're so dependent on Microsoft. So OpenAI is really dependent on Microsoft, and Anthropic has that same kind of relationship with Amazon, right? Yes.

They're paying their bills, and that's fine. I don't think it is as testy, not that I've heard, not that's been reported. It seems like Anthropic is moving a lot slower. It's a lot less dramatic. There's not the boardroom coups or, you know, the splashy releases ahead of Google I/O. "There's not the boardroom coups" is like a real, just a real measure of a company.

Right. Exactly. So no, I don't think it's as testy, but I think Anthropic seems to be really pulling their punches. They're moving a lot more carefully and trying to avoid stepping in messes. Do you think that the state of the industry and the tone of these big pitches are related to these business pressures?

Hey, we have to start shipping products that people pay for at scale to prove out that there's demand for all of this investment. Hey, there's ferocious competition for talent. Hey, our big cloud provider benefactors might start to wonder if they should just build their own products. It seems like that's a lot of anxiety being expressed beyond just "oops, we might destroy the world if we succeed."

I was thinking about this for the people who want to argue with me about how I don't truly believe in AGI and such. If these are your messiahs and you don't also just notice that they're businessmen, to think that this is not full of tactical decisions, that these blog posts are not tactically written with

many factors in mind, is just ludicrous. I think it would be a lot easier to focus on those technology pressures if you didn't have the business pressures, because you need the money, you need the talent to move your technology forward, and these people are, like, throwing elbows for this kind of thing. Again, like Zuckerberg writing emails, xAI holding recruiting parties in San Francisco and inviting OpenAI employees. It's testy out here, and you have to do anything to fight your way in.

Do you think we should trust these folks? That's a tough one. My instinctive answer is no, right? We're reporters; we shouldn't trust them. But they are trying to build things. The products are shipping. You can use them to whatever extent you want to use them. They have a vision. You can believe it or not. Are they generally trustworthy in your interactions with them, or the people they work for, or the people who work for them?

This is so funny. Back to that episode on tribalism I was talking about, the podcast I listened to: their advice against tribalism was, you should probably just take people at their word and believe that they believe what they're saying. And I thought that that's a great way to look at it. I do believe that they think that they're building AGI and going to change the world and such.

In terms of trust, I think it is my job to be skeptical. I don't think I can read a blog and tell our readers they are definitely going to do this and just ignore all of the other factors at play here. I think that they have to earn that trust. This is sort of a pitch we have seen for years. Look at all of the times tech executives have promised to cure death, to get us to Mars, to, like, fix all of these ailments, and here we still are with all of these ailments.

So I think that they just have to earn it. And I think that's okay. They can't just demand today that we all trust them, because there's kind of a damaged reputation in Silicon Valley. It's okay. You're going to have to earn it. There's two ways to keep folks in line. One is ferocious market competition. Then people will just vote with their dollars. The other way is politics, where people vote with their votes.

I'm not sure that the market competition is producing much alignment, for lack of a better word. Like, no one's picking an AI system right now because it's quote unquote safer. They're just picking the one that's in front of them, and maybe one day they'll just pick the one that's preloaded on their iPhone. On the politics side: California had a bill, and Gavin Newsom just vetoed it, that would have made these products safer.

Anthropic didn't oppose it. They didn't endorse it. There was some ferocious opposition. Is the politics of this doomed, and are we relying on the market? In terms of SB 1047, which is that California regulation: that was a really difficult one, because California is filled with these technologists who do not exactly want strict regulation right out of the gate.

And Governor Gavin Newsom was lobbied pretty hard against passing this and people were threatening to leave. And so much of the California economy does rely on these big spenders coming here and building their technology. So in terms of California regulation, I think that's going to be an uphill battle. I think it's going to rely on federal regulation. And I think that remains to be seen if they'll get that right.

I don't know if they have a history of getting that right. And I think, you know, I wasn't a journalist when Section 230 was passed. But I think that has caused a lot of unease, that we should pass something now to control this before things get worse and we have no control over the technology that runs our society. I get that unease. I just want to be clear, I was 16 years old when Section 230 was passed. I'm not that old. Sorry.

But it is true that we live in the shadow of that law, and people have many, many opinions of it. And here it just seems a lot simpler, right? Like, I feel like we know how to write product liability laws. Is it just too hard, or is the tech industry too good at claiming that no government can ever possibly understand their work? Well, I think that we're trying to, like, get our hands on a slippery fish. It's so new.

I don't know if the people who are building this and the people who are regulating this know exactly what to do to fix something that is so new. It feels like trying to regulate Facebook when it was still a Harvard social media platform. It's just hard to figure out exactly how this will change the world. And I'm not sure promising utopia is going to help. I don't know if promising dystopia is going to help. We just don't know for sure how this is going to shake out.

It kind of sounds like the fact that the big tech companies have a ton of control is the regulating part of the market right now. It's not Gavin Newsom. It's not whatever Biden administration executive orders were passed. It's not any other law. It's not competition between them, even though they all say they're safer. It's maybe just Satya Nadella saying, well, you seem out of control, I'm going to build my own, or it's Andy Jassy saying, I want to use AWS for something else.

It seems like that is actually the place where the most control over these companies will be expressed, right? Is that the way? I have to trust that Satya is a good guy? It's so funny, because there's this line in Silicon Valley, the TV show, that I quote in my article, which is like: I don't know about you guys, but I don't want to live in a world where someone else makes the world a better place better than we do.

That's where we're at right now. Why should we be forced to trust these big tech executives? Why does it have to be in the hands of just a handful of big tech executives? Have they really proven that they can be trusted with digital God? I'm just asking. Yeah, I don't think so.

So where we're at right now, just to sum this up, is it feels like everyone is racing towards building the same kinds of products against the same vision, at faster and slower rates. Some people think they shouldn't, because they might destroy the world, but if we get it right, everything will be groovy. And no one's really in charge. Who is to say whether Ilya Sutskever's company, which is literally, I believe, called Safe Superintelligence, is actually safer than Anthropic?

Like, there's just a lot of people claiming this thing that they think the market wants, or people might want, or is worth the money. But I don't think the market broadly understands that it even wants that, or how to measure it, or how to say it.

And then the other choice you have is some other body of people, whether that's the providers of cloud computing or the government, that could make some decisions, and they seem not motivated or not capable of making those decisions. Two things. I don't think anyone was going to choose a safer Facebook. They just want the one that's, as you said, in front of them. The one that works better.

The one they enjoy using. So I think that's how that's going to shake out. I don't think the normal person, my siblings, are going to care which one's safer. Who decides if they're safe? I really do like the idea of them having to talk to the government and be completely and fully transparent about what their models are capable of and what those tests are doing, because it gives me pause that they're able to be like,

that's fine, like, we tested it, it's totally chill. They do have some third-party researchers, but it's not as transparent as it could be. So yeah, I think regulation would be a good place to start, with the government having their own researchers and being like, OK, we are going to test this model for safety. We're not just going to rely on these people to test themselves. And that's how you prove if this is safe, if these are people we can trust.

What's next for these companies? What should people be looking for? I think that both are going to be really keen on getting reasoning models out there to the public, at different speeds. They want something that can code faster, that can reason for you reliably. Getting back to agents: I got a demo from OpenAI where they called a fake dessert shop to place an order, but it did get some things wrong.

The future that these companies are seeing is autonomous agents that can reason. So I think that's what we're going to continue seeing. I was promised by OpenAI that we'll start seeing those agents in the wild as soon as early 2025. So we'll see. All right. Well, I'm going to be hidden away from them safely in a bunker somewhere. Thank you so much for coming on the show. Thank you.

Thanks again to Kylie for joining me on the show, and thank you for listening. I hope you enjoyed it. And please let us know what you think of chill digital god. I'm curious to know what you think. If you have those thoughts, you can email us at decoder@theverge.com. We really do read all the emails. Or you can hit me up directly on Threads. I'm @reckless1280.

We also have a TikTok, check it out. It's @decoderpod. It's a lot of fun. If you like Decoder, please share it with your friends and subscribe wherever you get podcasts. And if you really love the show, hit us with a five-star review. Decoder is a production of The Verge and part of the Vox Media Podcast Network. Our producers are Kate Cox and Nick Statt. Our editor is Callie Wright. Our supervising producer is Liam James. The Decoder music is by Breakmaster Cylinder. We'll see you next time.

Support for this show comes from AlixPartners. You don't need us to tell you that AI is reshaping how business is done. But during critical moments of disruption, it can be hard to figure out how to leverage cutting-edge technology. With a focus on clarity, direction, and effective implementation, AlixPartners provides essential support when decisive leadership is crucial. You can discover insights and learn how to convert digital disruption into revenue growth.

By reading the 2024 Digital Disruption Report at www.alixpartners.com slash vox, that's www.alixpartners.com slash V-O-X. In the face of disruption, businesses trust AlixPartners to get straight to the point and deliver results when it really matters. Support for Decoder comes from ServiceNow. AI is set to transform the way we do business, but it's early days, and many companies are still finding their footing when it comes to implementing AI.

ServiceNow partnered with Oxford Economics to survey more than 4,000 global execs and tech leaders to assess where they are in the process. They found their average maturity score is only 44 out of 100. But a few pacesetters came out on top, and the data shows they have some things in common. The most important one is strategic leadership, operating with a clear AI vision that scales across the entire organization, which is how ServiceNow transforms business with AI.

Their platform has AI woven into every workflow with domain-specific models that are built with your company's unique use cases in mind, your data, your needs. And most importantly, it's ready now, and early customers are already seeing results. But you don't need to take our word for it. You can check out the research for yourself, and learn why an end-to-end approach to AI is the best way to supercharge your company's productivity. Visit servicenow.com slash AI maturity index to learn more.

This transcript was generated by Metacast using AI and may contain inaccuracies. Learn more about transcripts.