
Microsoft & OpenAI CEOs: Dawn of the AI Wars

Aug 18, 2023 · 29 min

Episode description

On this episode of The Circuit, Emily Chang sits down with Microsoft CEO Satya Nadella to hear how AI is shaking up the competition for search. Nadella argues that this new wave of technology is as big as the web browser or the iPhone. Chang also speaks with OpenAI CEO Sam Altman to discuss his company (which has some help from Microsoft), its ambitions and the latest on ChatGPT. 

See omnystudio.com/listener for privacy information.

Transcript

Speaker 1

I'm Emily Chang, and this is The Circuit. I've been covering this industry for a long time, and there is always some new new thing that big tech is chasing. First it was self-driving cars, and then it was the metaverse, and now everyone is all in on AI. Now there's one big tech giant that's made it clear it's not missing out. Microsoft is OpenAI's main commercial partner, trading powerful servers and billions of dollars for access to ChatGPT, sparking new life into old products, especially their languishing search engine.

Speaker 2

So it's as big as the internet? I think it's as big.

Speaker 1

But winning in AI is a totally different story. I'm about to talk to Microsoft CEO Satya Nadella to find out how he thinks he can do it. We'll talk to OpenAI CEO Sam Altman in a moment, but first, a new AI chatbot is helping Nadella in some surprising ways. Have you been playing around with it a lot? Like, fun stuff, discovery?

Speaker 3

I am super verbose and polite now in email responses.

Speaker 4

It's watching. Yeah, it was fun.

Speaker 3

Like, the guy who leads our Office team, I was responding to him and he was like, what is this, man?

Speaker 4

You're, like, sort of so pleasant. That's amazing. How about the chat, the Bing chat? I mean, what have you been using it to search for?

Speaker 5

Oh?

Speaker 3

You know, interestingly enough, everything, right from the very schedules to... the biggest thing I have felt is, with search, you always used it to learn new things and what have you. But you never stay in the flow, because you get distracted, you click, it goes away, whereas here you can stay on task and on one topic and go deep. It's sort of very habit-forming, in the sense that once you get used to having chat, even if I'm using it, because there's a lot of times I'm just navigating, using search as a navigational tool, but once you get used to it, you kind of feel like, I've got to have these rails, right?

Speaker 2

Once you try it, you're hooked.

Speaker 1

Microsoft has been working on AI for decades and chatbots actually aren't anything new, but all of a sudden everyone is salivating. Why do you think the moment for AI is now?

Speaker 3

AI has been here; in fact, it's mainstream, right?

Speaker 4

I mean, search is an AI product.

Speaker 3

Even the current generation of search, every news aggregation and recommendation engine, and, you know, YouTube or e-commerce or TikTok, are all AI products. Except that, I would say, today's generation of AI is all autopilot. In fact, it's a black box that we just sort of use, and that is dictating, in fact, how our attention is focused. Whereas going forward, the thing that's most exciting about this generation of AI is perhaps we move from autopilot to copilot, where we actually prompt it. I think this shift from autopilot to copilot is actually, yes, the next phase of AI, which in fact is perhaps going to put us as humans, you know, more in the center of using AI to our benefit.

Speaker 1

How transformative a change do you think this will be in how we work?

Speaker 3

But I think that probably the biggest difference maker will be business chat, because if you think about it, the most important database in any company is the database underneath all of your productivity software.

Speaker 4

Except that data is all siloed today.

Speaker 3

But now I can query it with natural language. I can say, oh, I'm going to meet this customer. Can you tell me the last time I met them? Can you bring up all the documents that are written about this customer and summarize them so that I am current on what I need to be prepped for?

Speaker 1

How do you make sure it's not Clippy 2.0? That it is helpful, delightful, doesn't make me want to click out ASAP?

Speaker 4

Two sets of things. One is, you know, you're laughing because, look.

Speaker 3

Like, our industry is full of lots of, you know, examples, from Clippy to even, let's say, the current generation of these assistants and so on. They all are brittle. I think we are also going to have to learn that ultimately these are tools. Just like anytime somebody sends me a draft, I review the draft, I don't just accept the draft. And so that ability to work with this copilot, give it feedback, know how to verify it. It's literally like inspecting somebody's homework, right? Which is, hey, tell me exactly how you did this, so that I can verify it.

Speaker 4

Those are the kinds of things that we'll learn.

Speaker 1

You're trying to reinvent search with this AI-powered thing, and I believe it's been using GPT-4 for a while now. What's worked? What hasn't?

Speaker 3

One thing that we're learning is the search context, right? So conversational search is a thing. This grounding of your conversation with search data, I think, is one mode, and then there is a completely different mode that we've also learned, which is people just want to chat. So we are now getting good, even in the product design, at making that an explicit choice. So, for example, when we launched Bing, we didn't have these three modes. We now have: how precise do you want it to be, how creative do you want it to be, or do you want it to be balanced? That I think is one of the biggest learnings, that, oh wow, people do in fact want to engage, even in what is chat inside of search, in different ways, and we've got to give the user control back.

Speaker 1

How much market share do you think you can really take from Google? Like, what's your prediction?

Speaker 3

We are thrilled to be in search. We're a very small player in search, and I look forward to it. Every inch we gain is a big gain.

Speaker 1

You're coming for search, they're coming for Office. They're now putting AI in there, you know, Google Docs and Sheets and Gmail. Are we just going to see you and Sundar kind of one-up each other every week in this race to greatness?

Speaker 3

You know, I just want Bard and Bing both to thrive. I just want Google Workspace and Microsoft 365 both to thrive. I mean, look, at the end of the day, the fun part of being in this industry and competing is the innovation, and competition is, the last time I checked, a fantastic thing for users and the industry. And so yeah, Google is a very innovative company, and we have a lot of respect for them, and I expect us to compete in multiple categories.

Speaker 1

In my decade plus covering Microsoft, I can't remember you releasing this much in quick succession.

Speaker 2

Why is it all happening so fast?

Speaker 3

Yeah, you know, sometimes it feels like it's all happening fast, but we started working on this a good four years ago, right? I mean, in some sense, if you think about when OpenAI and Microsoft came together and said, hey, this next generation of large language models needs new infrastructure, let's build the infrastructure, tune the infrastructure, let's understand even what AI safety and alignment look like for these, what are the use cases. This has been four-plus years in the making. So yes, it feels like we've launched a lot of things in a hurry this year, but it's been four years in the making, and obviously it's a great partnership with OpenAI.

Speaker 1

Microsoft reportedly laid off a team focused on ethical and responsible AI. Meantime, you've got the Center for Humane Technology calling the race to AI a race to recklessness.

Speaker 2

How do you respond to that?

Speaker 3

In terms of impact on anybody at Microsoft, this is just probably the thing that weighs on me heavily, because, after all, any restructuring is hard on the people who are most impacted.

Speaker 4

That said, two things.

Speaker 3

One is, this is no longer a side thing for Microsoft, right? Because in some sense, whether it's design, whether it's alignment, safety, ethics, it's kind of like saying quality or performance is core design. So I can't now have an AI team on the side. It's all mainstream. So in some sense it's the hard process that companies like ours are going to constantly go through.

Speaker 4

A lot of change.

Speaker 3

And then I think, if anything, debate, dialogue, and scrutiny on what this space of innovation is, and whether it's really creating benefits for society, are absolutely welcome. Right? I look at it and say, no one can run faster than the benefits to the broader society and the norms that we enforce as a democratic society on any technology. And so I feel like we are at the very early stages of it. So I would ask us to be open to it, but at the same time scrutinize it, and let's have a dialogue on what the benefits are. And in that context, let's also recognize, especially with this AI: why were we not asking ourselves about the AI that's already in our lives, and what it is doing? We've gone straight to saying, oh wow, these LLMs have some hallucination.

Speaker 4

Guess what.

Speaker 6

There's a lot of AI where I don't even know what it's doing, except I'm happily clicking away and accepting the recommendations. So why don't we, in fact, educate ourselves to ask what all of this AI is doing in our lives, and then say how to do it safely, in an aligned way.

Speaker 1

Elon Musk, who co-founded OpenAI and then left, has said it's not what he intended. It is closed-source and effectively controlled by Microsoft.

Speaker 4

How would you respond?

Speaker 3

First of all, OpenAI cares deeply about their mission and doing it in the most safe way and in the most open way, and there's an interesting trade-off between openness and safety. So that is sort of one of the reasons why they have what they have in terms of their governance architecture. And so, at some level, they have been very, very clear on what principles drive them. Similarly, we have been very, very clear on the principles that drive us around AI safety and responsibility, and we'll stick to them.

Speaker 1

I have to ask you a question about the economy and whether you're concerned about a prolonged tech bust. I mean, we've seen the collapse of three banks, tighter money, more uncertainty.

Speaker 2

How are you thinking about this?

Speaker 3

I think, at the highest level, there was an aberration of maybe a ten-year period of low interest rates and everything that came with it, not just in tech but in the broader economy, and I just think that we're just getting back to normal. The thing that perhaps we have to remind ourselves of is that mostly the world looked like this: interest rates were higher than zero, inflation was perhaps structurally going to be higher, just given everything that's happening with supply chains and the geopolitics, and we all as businesses have to be accountable for how to manage in that environment, and tech is one sector. And so I kind of look at this and say, hey, it's a return to normal, as opposed to anything that we need to be worried about as being prolonged. In fact, this is the long run. The economies have to sort of be more real.

Speaker 2

All right, So this is normal to you.

Speaker 3

I mean, I think that sometimes we sort of say, you know, the last ten years can never be the way forward, and that's good. I think it's better to have businesses that are run efficiently, that are actually measured along the way, whether it's on the societal impact or on real economic impact.

Speaker 1

In nineteen ninety five, Bill Gates sent a memo calling the Internet a tidal wave that would change all the rules and was going to be crucial to every part of the business.

Speaker 2

Is AI that big?

Speaker 3

Yeah. I mean, in fact, I sort of say ChatGPT, when it first came out, was like when Mosaic first came out, I think in nineteen ninety three, as the first browser. And so yes, you know, to the Bill memo in nineteen ninety five, it does feel like that to me.

Speaker 1

So it's as big as the internet?

Speaker 4

I think it's as big.

Speaker 3

It's just like in all of these things, right? We in the tech industry are, you know, classic experts at overhyping everything. I hope at least that what motivates me is that I want to use this technology to truly do what I think all of us are in tech for, which is democratizing access to it. So when someone says to me, hey, here is how a farmer in rural India, you know, can use this technology to express a complex thought on how to get a subsidy from a government program, and can do that successfully.

Speaker 4

That gives me a lot of sort of you know, hope.

Speaker 1

I think a lot about my kids and how AI will have something that I don't, which is an infinite amount of time to spend with them, and how these chatbots are so friendly, and how quickly that could turn into an unhealthy relationship or you know, maybe it's nudging them to make a bad decision.

Speaker 2

As a parent, does any part of that scare you?

Speaker 3

So that's kind of one of the reasons why I think this move from autopilot to copilot hopefully gives us more control, whether it's as parents or, more importantly, even as children.

Speaker 3

Like, one of the things that was very cool to see in the launch of GPT-4 was the demo, or the launch, of the Khan Academy stuff. Sal sent me this last night, and I was looking at his algebra class. It was so engaging, right? I mean, think about it. One of the dreams we've always had is, can I have a personalized tutor that is engaging, that is actually trying to teach me? We should, of course, be very, very watchful of what happens, but at the same time, I think this generation of bots, this generation of AI, probably goes from engagement to giving us more agency to learn.

Speaker 1

What do you think about GPT-4 and, like, how big a leap it is?

Speaker 4

It is. It's pretty nonlinear, right?

Speaker 3

The advantage of these models is that they're showing, generation to generation, that they're getting more efficient at the current tasks, and they're showing emergent capability. Like, for example, from GPT-3 to 3.5, it learned to code. Similarly now, look at its performance on all those standardized tests. I mean, that is pretty stunning reasoning. So I feel this is the closest thing we have to a reasoning engine that all of us can use to better make sense of the world.

Speaker 3

And so, going back to the kids part, like, my daughter said this to me the other day, which I think was the most profound, which is, it's kind of like having a study buddy and a tutor all at the same time.

Speaker 4

She was using Bing chat.

Speaker 3

I think she had a PDF open and she was querying the PDF, and it's kind of like, wow, I'm able to ask questions, which, you know, it's not always that you can go to your tutorials or whatever it is that you need to go to. It's so much easier to have this tool available to make better sense of the world.

Speaker 4

So yeah, I think it's a tool that has its place.

Speaker 2

And you're just excited, not scared.

Speaker 3

That's kind of the big debate right now. I am more excited. The reason is, even if you narrowly look at it from a technology perspective, this is more empowering and more understandable than these recommendations on some social media site or what have you, which are being driven by some other black-box engagement algorithm. I'm open for pushback and scrutiny and debate on it. I don't think this is anywhere close to AGI. We are not close to any runaway AI problem. Jailbreaks, yes, but we can always learn and put in safety rails. So I think we overstate the risk and understate the benefit, even in relation to the current set of technologies and their uses and their harms. That's kind of what I think would be a good one to actually pull out, like, hey, would I rather have this or a recommendation engine where I don't even know what it's doing?

Speaker 1

Yeah. Well, we'll continue this conversation after this quick break.

Speaker 1

I want to ask about jobs, because obviously Microsoft makes software that helps people do their jobs, and I wonder if AI layered into software will put some people out of jobs. Sam Altman has this idea that AI is going to create this kind of utopia and generate wealth that's going to be enough to cut everyone a decent-sized check, but eliminate some jobs. Do you agree with that?

Speaker 3

You know, look, everybody from Keynes to, I guess, Altman, they've all talked about the two-day work week, and I'm looking forward to it. But the point is, look, the lump of labor fallacy has never proven out, right? Which is, in some sense, there is displacement. And in fact, if anything, what we have to do is really do a fantastic job as a society to deal with any displacement, because if one job turns into another job, you have to then skill people for the other job. And in fact, in an interesting way, here's one thing. Even in this Microsoft 365 tool, there is this Power Automate tool. Up to now we've called it the low-code, no-code tool for doing workflow automation. Interestingly enough, you now can automate workflows just using natural language.

Speaker 4

Guess what that means.

Speaker 3

Anybody who is on the front lines in healthcare and retail can automate or be part of the IT journey. That, to me, means new jobs and better wage support. So I feel, yes, there are going to be some changes in jobs. There are going to be some places where there will be wage pressure, and there will be opportunities for increased wages because of increased productivity. We should look at it all, and at the same time be very clear-eyed about any displacement risk, because one thing that we've also learned in the last twenty years is that we will be better off as a society if we really pay attention to who the winners and the losers are and make sure that we are not, you know, imbalanced in terms of economic opportunity.

Speaker 1

At the center of a potentially tectonic shift in job creation is Sam Altman. He's promised that AI will create a kind of utopia when it joins the workforce, while also raising alarm about the dangers, signing his name to statements warning about the risk of extinction from AI. Over the summer, Altman traveled the world to talk about the promise and peril of AI. I caught up with him when he returned to San Francisco backstage at Bloomberg's annual Tech summit.

Speaker 2

So you've been traveling a ton.

Speaker 7

Yeah. What's the, like, eat, sleep, meditate, yoga, tech routine?

Speaker 8

There was, like, no meditation or yoga on the entire trip, and almost no exercise. That was tough.

Speaker 5

I slept fine, actually.

Speaker 2

Was the goal more listening or explaining?

Speaker 8

The goal was more listening. It ended up with more explaining than we expected. We ended up meeting, like, many, many world leaders and talked about the need for global regulation, and that was, like, more explaining. But the listening was super valuable.

Speaker 5

Came back with like one hundred handwritten pages of notes.

Speaker 7

I heard that you do handwritten notes. What happens to the handwritten notes?

Speaker 8

But in this case, like I distilled it into like here were the top fifty pieces of like feedback from our users and what we need to go off and do. But there's like a lot of things when you like get people in person, like face to face or over a drink or whatever, where people really will just like say, you know, here is like my very harsh feedback on what you're doing wrong.

Speaker 5

And I don't want to be different.

Speaker 2

You didn't go to China or Russia.

Speaker 5

I spoke remotely in China, but not Russia.

Speaker 2

Should we be worried about them and where they are on AI, what they're doing?

Speaker 8

I'd love to know more precisely where they are. That would be helpful. We have, I think, very imperfect information there.

Speaker 2

So how has ChatGPT changed your own behavior?

Speaker 8

There's like a lot of like little ways and then kind of like one big thought. The little ways are, you know, like on this trip, for example, the translation was like a lifesaver.

Speaker 8

I also use it if I'm trying to, like, write something, which I write a lot to never publish, just, like, for my own thinking, and I find that I, like, write faster and can think more somehow. So it's like a great unsticking tool. But then the big way is, I see the path towards this just being, like, my super assistant for all of my cognitive work. A super assistant.

Speaker 7

You know, we've talked about relationships with chatbots. Did you see this as something that people could get emotionally attached to? And how do you feel about that?

Speaker 8

I think language models in general are something that people are getting emotionally attached to, and you know, I have like a complex set of thoughts about that. I personally find it strange. I don't want it for myself. I have a lot of concerns. I don't want to be like the kind of like people telling other people what they can do with tech. But it seems to me like something you need to be careful with.

Speaker 7

You've talked about how you are constantly in rooms full of people going, "holy..." Yeah, what was the last "holy" moment?

Speaker 8

It was, like, very interesting to get out of the SF echo chamber, or whatever you want to call it, and see, like, the ways in which the "holy" concerns were the same everywhere, and also the ways they're different. So, like, everywhere, people are like, the rate of change seems really fast. You know, what is this going to do to the economy?

Speaker 5

Good and bad? There's change, and change brings anxiety for people.

Speaker 2

There's a lot of anxiety out there. There's a lot of fear.

Speaker 7

The comparisons to nuclear, the comparisons to bioweapons: are those fears overblown?

Speaker 8

There is a lot of anxiety and fear, but I mean, there's, like, way more excitement out there. I think, like with any very powerful technology, and synthetic bio and nuclear are two of those, AI is a third, there are major downsides we have to manage to be able to get the upsides. And with this technology, I expect the upsides to be far greater than anything we have seen, and the potential downside is also, like, super bad. So we do have to manage through those. But the quality of conversation about how to productively do that has gotten so much better, so fast. Like, I went into the trip somewhat optimistic and I finished it super optimistic.

Speaker 7

Yeah, so is your bunker prepped and ready to go for the AI apocalypse?

Speaker 8

A bunker will not help anyone if there's an AI apocalypse. But I know that, Like, you know, journalists seem to really love that story.

Speaker 2

I do love that.

Speaker 8

I wouldn't overcorrect on like Boyhood's survival prep.

Speaker 5

Uh, co Scott, I like this stuff. Yeah, it's not going to help with AI.

Speaker 7

There's been talk about the kill switch, the big red button.

Speaker 5

I hope it's clear that's a joke.

Speaker 2

It's clear it's a joke. Could you actually turn it off if you wanted to?

Speaker 8

Yeah, sure, I mean we could like shut down our data centers or whatever.

Speaker 5

But I don't think that's what people mean by it.

Speaker 8

I think what we could do instead is all of the best practices we're starting to develop around how to build this safely: the safety tests, external audits, internal and external red teams, lots more stuff. Like, the way that it would be turned off in practice is not the dramatic, you know, gigantic switch from the movies that cuts the power, blah blah blah. It's that we have developed, and continue to develop, these rigorous safety practices, and that's what the kill switch actually looks like.

Speaker 7

But it's not as theatrical. There is now a new competitive environment, for sure, and OpenAI is clearly the front-runner.

Speaker 2

But who are you looking over your shoulder at?

Speaker 8

This is like not only a competitive environment, but I think this is probably the most competitive environment in tech right now.

Speaker 5

So we're sort of like looking at everybody.

Speaker 8

But, you know, given my background in startups, I directionally worry more about the people that we don't even know to look at yet, that could come up with some really new idea we missed.

Speaker 2

How would you describe your relationship with Satya Nadella? How much control do they have? You know, I've heard people say, you know, Microsoft's just going to buy OpenAI. You're just making big tech bigger.

Speaker 8

The company is not for sale. I don't know how to be more clear than that. We have a great relationship with them. I think these, like, big major partnerships between tech companies usually don't work. This is an example of one working really well, and we're, like, super grateful for it.

Speaker 2

Have you talked to Elon at all behind the scenes? Sometimes? What do you guys talk about? I mean, it's getting heated in public.

Speaker 8

Yeah, I mean we talk about like a super wide variety of important and totally trivial stuff.

Speaker 7

Why do you think he's so frustrated, or, kind of, I mean, it's almost, there's some attacking going on?

Speaker 5

You should ask him.

Speaker 2

I would like to know. I'd like to better understand it.

Speaker 8

I don't think this is in the top like one hundred most important things happening related to AI right now.

Speaker 7

For what it's worth, is there any aspect of our lives that you think AI should never touch?

Speaker 8

My mom always used to say, never say never, never say always. And I think that's, like, generally good advice. If I made a prediction now, I'm sure it could end up being wrong in some sort of way. I think AI is going to touch most aspects of our lives, and then there will be some parts that stay surprisingly the same. But those kinds of predictions are, like, humbling and very easy to get wrong.

Speaker 1

What's the percentage chance we get to the good future versus the bad future?

Speaker 5

Very high. But I don't know how to put a precise number on it.

Speaker 8

You know, when you hear these people say, like, my probability of doom is three percent, and mine's twelve, or mine sometimes it's, like, nine, and mine's thirteen, and they have this huge argument. I'm just not smart enough to give numbers that precise.

Speaker 2

What do you think kids should be studying these days?

Speaker 8

Resilience, adaptability, a high rate of learning, creativity, certainly, familiarity with the tools.

Speaker 1

So should kids still be learning how to code? Because I've heard people say you don't need to learn how to code anymore. Just math, just biology.

Speaker 8

Well, I'm biased because I like coding, but I think you should learn to code. I don't write code very much anymore. I randomly did yesterday. But learning to code was great as a way to learn how to think, and I think coding will still be important in the future. It's just going to change a little bit, or a lot. We have a new tool.

Speaker 2

What are we all going to do when we have nothing to do?

Speaker 8

I don't think we're ever going to have nothing to do. I think what we have to do may change. You know, like, what you and I do for our jobs would not strike people from a few thousand years ago as real work. But we found new things to want and to do, and ways to feel useful to other people and get fulfillment and create, and that will never stop. But I hope, you know, if you and I could look at the world a few hundred years in the future, we'd be like, wow, those people have it so good.

Speaker 5

I can't believe they call the stuff work. It's so true.

Speaker 2

So we're not all going to be just lying on the beach eating bonbons.

Speaker 5

Some of us will and more power to people who want to do that.

Speaker 7

Do you think, in your heart of hearts, that the world is going to be more fair and more equitable?

Speaker 4

I do.

Speaker 5

I do.

Speaker 8

I think that technology is fundamentally an equalizing force. It needs partnership from society and our institutions to get there. But, like, my big-picture, highest-level, I'll-zoom-all-the-way-out view of the next decade is that the cost of intelligence and the cost of energy come way, way down. And if those two things happen, it helps everyone, which is great. And I think it lifts up the floor a lot.

Speaker 2

So where do you want to take open AI next?

Speaker 8

We want to keep making better and better, more capable models and make them available more widely and less expensive.

Speaker 2

What about the field of AI in general.

Speaker 8

There's many people working on this, so we don't get to take the field anywhere.

Speaker 8

But we're pretty happy with our contribution. Like, we think we have nudged the field in a way that we're proud of. So we're working on new things too.

Speaker 2

What are the new things?

Speaker 5

They're still in progress.

Speaker 2

Is there room for startups in this?

Speaker 5

Totally. I mean, we were a startup not very long ago, but...

Speaker 2

You're almost already an incumbent.

Speaker 8

Of course. But when we started, like, you could have asked the same question. In fact, people did. In fact, I myself wondered, like, is it possible to take on Google and DeepMind? Or have they already won?

Speaker 2

And they clearly haven't?

Speaker 8

Yeah. Like, I think there's a lot of... It's always easy to kind of count yourself out as the startup, but startups, keep doing your thing.

Speaker 2

Well, nobody's counting you out.

Speaker 1

Thanks so much for listening to this episode of The Circuit. I'm Emily Chang. You can follow me on Twitter and Instagram at EmilyChangTV. You can watch new episodes of The Circuit on Bloomberg Television or on demand by downloading the Bloomberg app to your smart TV, and check out our other Bloomberg podcasts on Apple Podcasts, the iHeartMedia app, or wherever you listen to shows, and let us know what you think by leaving us a review. I'm your host and executive producer. Our senior producer is Lauren Ellis. Our associate producer is Lizzie Phillip. Our editor is Sebastian Escobar. Thanks so much for listening.

Transcript source: Provided by creator in RSS feed.