I'm Emily Chang, and this is The Circuit. Google has been the front door of the Internet for more than two decades. Now there are so many other doors: TikTok, Instagram, Amazon, Reddit, and of course OpenAI with ChatGPT, and so many rising AI players. Google may not be the first place you go for answers anymore. So the question is, what are they going to do about it? I met the person in charge of answering that question, Alphabet
and Google CEO Sundar Pichai, at Google's campus in Mountain View. We had a wide-ranging discussion about where the search giant stands in this AI moment, ahead of its big annual Google I/O conference. Joining me now, Alphabet and Google CEO Sundar Pichai. Thank you for doing this. We really appreciate it.
Well, likewise.
And you've got a big event to prepare for. What does it take to steer something as large as Google? And has your answer changed in the last ten years?
I think the part which hasn't changed is that we are a deep technology company, and so we focus at that level, and AI is the best example of that. So focusing at that level and making sure you're driving innovation there and translating it into products and solutions, I think that part doesn't change. But at the scale at which we do many different businesses, you have to find a way to focus and channel energy into the real areas that matter, so that takes constant work.
You're bringing in three hundred billion dollars in revenue a year, making more money faster than ever before, from multiple businesses. What does printing money look like in the age of AI?
These things take time. For example, we just announced that the combination of YouTube and Cloud will exit Q4 at an annual run rate of over one hundred billion dollars. Now, we built these businesses over the past eight years or so, right, and so these things take time, and you have to have a long-term view and invest towards it. We are doing the same, be it Search, YouTube, Cloud, Android. We have longer-term bets like subscriptions, Waymo, and
so on. So again, you're investing and you build it over time, and that's what translates into revenue and business success.
Search is still the heart of Google. Some leading computer scientists have said search is getting worse, more SEO spam, et cetera. Do you see their point?
That's part of what makes search a hard problem, and where we put all our focus. Anytime there's a transition, you get an explosion of new content, and AI is going to do that. So for us, we view this as the challenge that will define the work we do. The reason we at Google take pride in our search quality teams is their work to separate the high-quality content from the low-quality content.
So we have
always, through many, many years of search, had moments where we see a rise in new content, which is both great, because it allows richer information to come in, but there's a lot of spam that comes in too. So solving that is viewed as a strength of Search. Over the past few months, we have announced a set of changes, and we are getting started making sure Search will again do the same through this AI moment, and I actually think there will be people who struggle to do that right.
So doing that well is what will define a high-quality product, and I think it's going to be at the heart of what makes Search successful.
The choices you make influence how billions of people get their information, and the new Google is only going to be more and more AI. AI is super helpful sometimes, but sometimes it's still deeply wrong. Where do you draw the line?
I think part of what makes Google Search differentiated is that while there are times we give answers, it will always link to a wide variety of sources, and that ability we know from having served users for a long time. We've had answers in Search now for many, many years; we are just now using generative AI to do that. But we know people want to explore more. They have interests, they have curiosity.
And so the links will live on?
Yes, and it will always be an important part of Search. That's what users want. And so there will be times when they want quick answers.
And I gave the earlier example:
my son is celiac, so we did a quick question to see whether something is gluten-free. We just want to know, but often it leads to more things, and then you want to explore more. So people have different intents. I think understanding and meeting all those needs is part of what makes Search unique.
The images that Gemini initially generated of Asian Nazis and Black founding fathers, you said that was unacceptable. Why haven't you re-released this yet?
We obviously have instituted a set of changes, organizationally, process-wise, investing more in red-teaming. We realize it's an opportunity: we are retraining our next generation of image models from the ground up, just to make sure we're also making the product better. As part of that, we viewed it as an opportunity to do everything correctly from the ground up, and so we are working on that, and as soon as it's ready, we'll
get it out to people.
So it's going to be a while?
I don't think so.
I think it will be a few weeks from now, but you know, we are definitely making great progress there, so it'll be out.
So now people are calling this "woke AI," and it's not just happening here, it's happening across the industry. The way I understand it, AI is built on patterns that it sees, and if you look at any pictures of the founding fathers, you're seeing old white men. How did the model generate something that it never saw?
We explained this before, but obviously we are a company which serves products to users around the world, and there are generic questions. For example, people come and say, show me images of school teachers or doctors or nurses, and we want it to be representative. You know, we have people asking this query from Indonesia or the US. How do you get it right for our global user base?
So that's what we were trying to get right. Obviously, the mistake was that we over-applied it, including cases where it should have never applied. So that was the bug, and you know, we got it wrong. So we're investing more in testing and red-teaming to make sure that doesn't happen.
Would you say it's like good intentions gone awry?
In this particular case, yes. But still, we are rightfully held to a high bar, and I think we clearly take responsibility for it, and we're going to get it right.
How concerned are you about AI-generated content ruining search? For example, the AI-generated selfie of the Tank Man in Tiananmen Square shows up in Google search results, but it never happened.
The challenge for everyone, and the opportunity, is how you have a notion of what's objective and real in a world where there's going to be a lot of synthetic content. I think it's part of what will define search in the decade ahead. For example, in Search, being able to detect AI-generated content, and also, over time, showing provenance: when did this image first appear online? Giving sources. You know, it's the kind of work we are undertaking,
right. So I actually see it as an opportunity. We see this today: even in this world of AI, people often come to Google right away to see whether something they saw somewhere else actually happened. It's a common pattern we see, and so we understand what people are trying to do, and so we're working very, very hard. We are making progress, but it's going to be an ongoing journey.
That's such an interesting point. How much of the content on Google is AI-generated, and is that percentage growing? How do you track it, how do you categorize it, and do you worry about it degrading the results over time?
If we stand still, for sure there is more AI-generated content. We just announced a new set of guidelines for ranking and quality, and as part of that, we are using AI to make our algorithms better at detecting it, and we want to elevate human voices and human perspectives. So that is the core work we are undertaking.
Will LLMs ever be truly reliable, or is there a ceiling to their accuracy?
It's a great question, you know.
LLMs are essentially predicting the next words in a sequence, and so today they can hallucinate. I think they will get better. We will also have newer breakthroughs. So, for example, in Search, when we are using LLMs for AI Overviews, we are grounding it, we call it grounding, in the underlying search results. So we check it to make sure what it's saying there is accurate. So we focus on factuality. You're trying to harness the creativity of LLMs, but
grounding it to be factual. So there are going to be more and more techniques we will all work on. We are definitely very, very focused on it. I think this will be an area of debate, and I think we will constantly make progress on this. That's the way I think about it.
But will they ever be perfectly right?
I think as long as it's anchored and presented in a product with supporting information, I think it can be. If you just give a standalone LLM answer, well, we see this today: if people have read something somewhere and they don't know whether it's true, they come to Google to check whether it's true. So we understand this, and so we'll always ground it with sources and point to what others have written about it, et cetera.
You make a ton of money on ads next to the links generated by searches. If a chatbot is giving you answers and not links, and maybe more answers than links sometimes, are we in the midst of an assault on Google's business model?
People asked questions like that when people switched from desktop to mobile. What we've always seen is that we don't show ads on a vast majority of our queries. We show them when users have a commercial intent, right, when people are looking for commercial content, and there ads happen to be a valuable source of information. So you have
merchants trying to reach users in those moments. We've always found people want choices, including in commercial areas, and that's a fundamental need, and I think we've always been able to balance it. As we are rolling out AI Overviews in Search, we've been experimenting with ads, and the data we see shows that those fundamental principles will hold true through this phase as well.
Now, every keystroke, every email, everything we've searched is data that we've given to Google, and that can all be fed into your AI models, which is a huge competitive advantage. What debates are you having internally about how you use that data?
We give a lot of controls to users. You can automatically delete your data in Google as you use it, and for AI, for example, if you use Gemini, you know we don't use your data to train the models. In general, there may be use cases where we will get permission to do so, but I've always felt, for two decades people have used products like Gmail and Google Photos, and we've earned that trust because we don't misuse that data. I think that's the foundation on which we
are achieving our success. So privacy is always foundational to everything we do, and that'll be true with AI as well.
When did you learn OpenAI was using YouTube transcripts to train its models, and what's your position on that?
I mean, it's a question for OpenAI to answer. We have clearly stated policies in terms of what is acceptable use for YouTube, and so we'll definitely expect others to abide by the guidelines. So that's how I think about it.
Meantime, you've got AI systems that are running out of training data. What are the implications of that?
I think one of the challenges is going to be, as we scale to the next generation of models and the models get much larger than they are today, you know, what is the source of training data? I think there is still data which is not included in these models that can be included, that can be useful. But I think over time, if you look at AlphaGo, which is a product we designed to solve Go and chess and so on, the AI models learned by playing
with each other.
In the field we call this self-play, and there are notions of synthetic data. So over time there's this notion of: can you have models create outputs for other models to learn from? These are all research areas now, so I think those are all important areas where we will achieve breakthroughs to continue making progress.
Right now, you've got companies turning to AI-generated data to train their models. Aren't there risks to that?
Yes.
I think the question through it all is: are you creating new knowledge? Are these models developing reasoning capabilities?
Right?
Are you making progress on the intelligence of these models? I think those are the frontiers where we need to prove that you can do that by using these techniques, and to be very clear, these are the cutting-edge research areas where we are investing a lot of resources.
On that note, data is the new oil, and LLMs are proving that out all over again. But do we need new laws? Can I really publish an article online and say, but the AI can't train on it?
We allow people who are creating content to opt out, specifically, of our Gemini training models, and so we've given people the choice to opt out. You know, I think it's an important moment where we have to balance what have always been important notions of fair use, and how you can use it for derivative work, and
how you protect the rights of content holders. These are important questions, and it's important we strike the right balance even through a moment like this. I think both notions are equally important.
Yeah, I know you've said there will be more and more breakthroughs, But is LLM technology nearing a plateau?
I would be surprised if LLMs are the only thing we would need to make progress. The way I would think about it is, while we call the current generation LLMs, there are a lot of underlying breakthroughs. Many of them were developed at Google: transformers, or contributions to the mixture-of-experts architecture underneath
these models, or reinforcement learning from human feedback. There are a lot of breakthroughs which have gone into what makes generative AI what it is today, and so I expect to have more breakthroughs. Whether we think of them as the next generation of LLMs or just AI making progress, that is a definitional thing, but what is more important is that we are driving that progress. One of the things that excites me about Google DeepMind is we are
not only building the cutting-edge models, we are investing a lot of compute and resources, as well as talent, in driving the next generation of breakthroughs. So we are doing that equally, with a lot of focus, which is what gives me a lot of optimism that we will have more breakthroughs.
There are big concerns that AI is creating this underclass of workers that are poring through pages and pages of text and video and images while an upper class gets richer. What do you do about that?
The answer to a lot of this is that companies need to have a bar to make sure workers are well taken care of. Over the past few years, we've had to invest, for example, when people were monitoring content on YouTube, in how you support people better. So I think there are ways by which you can take care of workers' well-being through these things. Those are important notions, important principles, and I think the same principles apply during this AI moment as well.
Training all these models requires a ton of energy. How does the industry keep up with the demand for this computing power without ruining the planet?
It's definitely very important to get right. At Google, we've been carbon neutral since two thousand and seven, and over time we've made a lot of renewable energy investments and commitments. Some of our largest AI data centers today run almost entirely on carbon-free energy, so we have to really push
the boundaries here. I think the question is, the pace of inflection we are seeing with computing will make this an extraordinary challenge, particularly in a three-year-plus timeframe. So it's going to be important to keep up that focus. We have clearly stated goals, and we're going to try really hard to make sure we can do this very sustainably.
You know, as we were talking about, you pivoted the company to be AI-first years ago. But it seems, when you look at the big picture, like Google missed the big moment and ChatGPT took it. If you could go back, what would you do differently?
To be clear, I take a long-term perspective. When the Internet first came about, Google didn't even exist, right? We weren't the first company to do search, we weren't the first company to do email, we weren't the first company to build a browser. So I view this as: we are in the earliest possible stages of AI, and we've built so many foundational components within the company, and we are channeling all that to innovate ahead. So I think we are exceptionally well
set up. You always look back and say, well, what if you had done this differently, or something. You do that to learn and make the company better, but at any given time, you want to be forward-focused in terms of what you can do from this moment on, and that's what we've been focused on. I see the relentless pace at which the teams are innovating now within Google, and when I look at twenty twenty-four, the year ahead, I'm excited at our roadmap, and so I feel very optimistic.
Your leadership style has been described as slow and steady and cautious, sometimes maybe too cautious. And you're often compared to these other tech leaders who are moving fast and breaking things. How would you describe yourself?
The reality, I think, is quite different. One of the first things I did when I became CEO was to pivot the company sharply to focus on AI, as well as to really invest more in YouTube and Cloud to build them into big businesses. These are big, important, consequential decisions. I constantly look to make those decisions. I think the larger the company is, the fewer consequential decisions you are making, but they need to be clear, and you have to point the whole company to them. Part
of that at times involves bringing the company along. You build consensus because that's what allows you to have maximum impact behind those decisions. But I think in the technology industry you have to make fast decisions.
You have to move at a fast pace.
If we hadn't done that, we wouldn't be as successful today as we are, so we'll continue to do that as we move ahead.
Any leader in a position like yours has to be willing to hear the criticism, and I'm not going to make you read the mean tweets like they do on late night, but I do have a few: Google is running everything through legal. Google doesn't have one single visionary leader, not a one. Do you think you're the right person to lead Google?
Look, it's a privilege to lead the company.
I look at all the progress we have made, and I look at the opportunity ahead. It's definitely the privilege of a lifetime. I think people will see the progress the company is making. You know, as I said earlier, I think people tend to focus on this micro-moment, but it is so small
in the context of what's ahead.
And when I look at the opportunities ahead across everything we do, and for the first time all of that has a common leverage technology in AI, I'd put a lot of chips, at least from my perspective, on Google.
All right, good to know. One more backward-looking question, and then we're going to look only forward. Google researchers invented the transformer, literally the T in GPT. Do you wish you'd capitalized on that louder and sooner?
People underestimate part of what has made Search better. We use transformers in Search: BERT and MUM, these are all transformer-based models in Search. That's what led to large gaps in search quality compared to other products. So we've infused transformers across our products. We have a chance to do that better with generative AI and with the Gemini series of models, and we are doing that across our product portfolio, as well as providing it to businesses everywhere
using Google Cloud. Again, I feel there are going to be more breakthroughs in this field, and so it feels like we are well set up, we are moving fast, and there's a lot of innovation ahead.
You recently fired Google employees who were protesting your contract with the Israeli government for cloud services. It seemed like a distinct change in tone for a company that's historically welcomed all kinds of views. Why did you take this
stand?
Well, first of all, it's important to step back. I think as a company, we've always had a culture of vibrant and open debate. That has directly led to a culture of creativity and collaboration, pushing each other to build better products. I think it's always worked best when it's in the service of our mission and what we are doing for our users, so I think
that's an important principle to keep in mind. I think, more than many other companies, we give various ways by which employees can raise their concerns, and we take them seriously, and that hasn't changed. But it has to happen in a framework of respectful and civilized debate, and in a way that does not disrupt the workplace. We are a business. I think the vast, vast,
vast majority of employees, you know, abide by that. I think when we have cases, including in this case, where a few employees cross beyond what's in the code of conduct and disrupt the productivity of the workplace, or do so in a way that makes other people feel uncomfortable, I think we have to take action, and that's what we are doing. It's nothing to do with the matter or the topic they are discussing; it's about the conduct of how they went about it.
I've talked to a lot of employees about this, actually, and some folks thought it was a little draconian, but some of your employees were glad to see you taking a stand. Is this a new Google, or a new you?
Almost all the employees here who I've talked to agree with the decision. I think they definitely don't think this is the way you express disagreement. So I think it is important. Over the past years, through the pandemic, the company has grown a lot, so sometimes, for a large company, it's worth going back and restating what you mean. And I think that's partly what I did, you know, re-anchoring the company. I view, particularly in this moment with AI, the opportunity we have ahead of us as immense, but it needs a real focus on our mission. So I felt it was more important than ever to reiterate that to the company.
There have been multiple rounds of layoffs. Why take this approach? Why not cut once and cut deep?
It's a moment of growth and investment as well, but rather than just doing it by hiring, we are reallocating people to our highest priorities. So that's what this hard work is. There are cases where you're simplifying teams, you're moving people to focus on newer areas. There are times we're simplifying the organization, removing layers so that you can improve velocity. So I think these are deliberate changes being
undertaken by teams with a view to making the company better and making sure we are putting as many people as possible against our highest priorities. And so that's why we have taken the time to do it correctly.
Meanwhile, Microsoft is obviously making huge investments in AI as well: OpenAI, Inflection, and Mistral. We've reported that their OpenAI investment was actually in part because they were worried about Google and wanted to catch up. How do you feel about the competition there, and should regulators be looking at it?
If anything, I look at AI and I see a vibrant, dynamic, competitive field, which is great. It will really push innovation ahead. I've always held the view that if you're working in the technology space, there is a lot of competition. We see it all the time. The way you stay ahead is by innovating relentlessly, right? I think that has to be true all the time, and so I think we've done that with Search, and we'll continue to do that with Search and across our other products, be it YouTube and so on.
So I view this as no different. It's just that it's happening at a faster pace. But you know, technology changes tend to get faster over time, so it's not surprising to me at all.
Microsoft CEO Satya Nadella has had some fighting words and moves. Who's really choosing the dance music?
I think one of the ways you can do the wrong thing is by listening to noise out there and playing to someone else's dance music. I've always been very clear: I think we have a clear sense of what we need to do. We've been gearing up for this for a long time, and that's what we'll stay focused on.
All right, so you're listening to your own music.
That's exactly right.
Mark Zuckerberg is making waves as an open-source AI player. Are you going to let him own that narrative?
Look.
I think there are going to be important open-source contributions. I think it's important for the field. I mean, Google has published and shared a lot of knowledge to help this field progress forward. We are doing that with some of our models as well; we've announced Gemma, a series of open models. I think it's great that there's more open-source momentum, be it from Mistral, be it from Meta. I would expect that in the field, and I think it's good to keep the frontier of innovation moving.
Any chance you want to buy TikTok now?
I think we are focused on the products we are doing, so it's not something we are looking at.
What does a TikTok ban mean for Google?
I think it's not clear there'll be a ban on TikTok. I think the bill that's passed allows for a sale of the product, so it's too early to tell. I think there are many ways this could play out, but in all scenarios, I think there will be a version of the product that may be around for users. So I'm not spending too much time thinking about it.
Apple and Google are huge partners in a search deal struck years ago. Will you be partners on Gemini too? And to be clear, we've reported that Apple's talking to both Google and OpenAI.
We don't comment on partner discussions, but we've always cared about making sure people can access our products easily. I think it's consistent with our mission of making our products universally accessible and useful, and so we've long had a framework in which we think about these things. And so, you know, maybe that's all I have to say there.
What do you think is the future or potential of AI-powered hardware, and what will Google's role in it be? Is the smartphone going to be the form factor, or will there be something completely new?
I think two things. I think still today the smartphone is sort of the center of your computing experience, and I think with AI you get a chance to rethink that experience over the next few years. And so I view it as an exciting opportunity for us to rethink Android, both with our partners and with Pixel as well. But one of the things that excites me about the way we are thinking about Gemini is that it's natively multimodal. I think it can really come to life in a form factor like eyeglasses.
So I think AI will end up playing a strong role in the vision of AR, et cetera. So I'm excited about that future as well. I think it will apply to both; people will build purpose-built devices, but I think that's still early. I still see the center of AI innovation happening in smartphones, followed by glasses, right? That's how I see it.
Nvidia has become the power broker for AI chips. Meantime, you are now investing in making your own chips. What made you realize you needed to do this? It's a huge undertaking.
Nvidia is an extraordinary company. I think Jensen has been driving this investment for a long, long time, and they're seeing the fruits of that long-term view. Nvidia is an important partner for us. But we've always thought about, you know, we are very proud of our infrastructure. We believe we have the best infrastructure in the world, and that applies to AI as well.
And part of that, when we said the company was going to be AI-first, was realizing AI would need special purpose-built chips, so we built our first TPUs, which I announced at I/O in twenty sixteen. We are now in our fifth generation, so you will see us continue to invest there. We'll embrace both GPUs and TPUs, and we'll give our customers choice. But these are
areas we view as foundational investments. We think about subsea cables, we think about our networking chips, we think about it end to end: what we call our AI Hypercomputer, our AI data centers. So these are what I view as core strengths that position us well for the decade ahead.
Google is facing a ton of regulatory pressure in the US and abroad over your dominance in search, video, ads, the app store. Some other big companies have split themselves up to focus on their core. Has Google thought about that?
If we look at it from a user perspective, people are trying to solve problems in their day-to-day lives, and a lot of our products integrate in a way that provides value for our users. So I think that is important. Part of what allows us to compete in the cloud market is the investment in AI we undertook because of Search; that is what allows us to take that and compete hard against other, larger companies like Amazon and Microsoft in cloud.
So I would argue that the way we are approaching it drives innovation and adds choice in the market.
That's how I think about it.
Last time we talked, you told me China will be at the forefront of AI. How should policymakers factor that into their decisions?
I continue to hold that view. I think China is investing a lot in AI. I think they will be at the forefront of this technology as well. I think it's important we as a country invest in AI as well and are at the forefront. But I think over time, from an AI safety standpoint, we need to develop frameworks by which we achieve global cooperation.
To achieve AI safety.
I know it sounds far-fetched now, but we've done it in other areas, like nuclear technology and so on, to some extent. I think we're going to need frameworks like that, and so I would expect over time there needs to be engagement with China on important issues like AI safety.
The world is voting this year, and misinformation is only going to get more complicated in the age of generative AI, and it's worse in other languages. What do you worry about?
Look, I think the integrity of elections, particularly in a year like this.
You're right.
I think almost one in three people in the world are going to go through some kind of democratic electoral process this year, which is extraordinary to see. I think we should celebrate that. As for the role for us, you know, we've all invested so much in election integrity over the years. We have a lot of learnings to bring to bear, and so we are investing earlier and deeper than ever before to get it right. I think AI is a new tool, but so far I don't
think we've seen something extraordinarily different because of it. But time will tell, and we are doing our utmost to prepare for what's ahead.
Have you checked out how Google's doing back at home in the Indian elections?
We take pride in being a source of information for people, and I think people come looking for information, and I view this as no different from any other moment in time.
I'm proud of one of the largest democratic processes anywhere in the world, and it's always a hair-raising moment to see people vote, and so it's great to see.
Yeah, you're on the cusp of becoming a billionaire. What are your philanthropic goals? Will we see you bring resources back to India?
Definitely.
We've done some limited amount, but I've always viewed it as, there will be a phase of my life when I'm not doing what I'm doing now, and I do want to put a lot more time, energy, and passion into philanthropic giving back, and it's a privilege to be able to do that.
Are there any particular causes that you are really passionate about?
Too early to tell.
I've done a variety of things, but I'm still forming where it can be most impactful.
There's no question that AI will reshape the labor market. Is blue collar going to be the new white collar?
I think at least the current phase of AI, as I see it, looks like it will help people. It's true in my use today, and that's how I expect radiologists to use it, to have AI assisting you. So I think there is a real scenario in which it lowers the barrier. Take coding, for example: more people will be able to code, it will take the grunt work out of coding, it'll make people who code more productive, it will expand the opportunity set, etc. So that's the
near term. Longer term, it's tough to predict. Typically in technology, when we have predicted, it's kind of played out a bit differently, so I still think it's too early to tell. But yeah, I think AI in the physical world will happen slower than in the virtual world. So maybe there's an element where it impacts differently than other technology transformations in the past.
Artificial general intelligence: what does it mean to you? Do we get there, and when?
It's not a well-defined phrase; it means different things to different people. I think it meant a lot more many years ago, in the context when AI was more narrow and couldn't do a general set of tasks. That's why people would call out AGI as distinct. We are definitely working on AI in a way that's more generalized technology now. But I think if you define AGI as AI becoming capable across a wide variety of economic activity and being able to do it well, I think that's
one way to look at it. That's how I think about it in my head. I still think we have some ways to go, but the technology is progressing pretty rapidly.
So Google's going to get us to AGI.
We are committed to making foundational progress towards AGI in a bold and responsible way, and so, you know, we'll focus on the effort to do that and do that well.
The concerns about AI leading to human extinction? Are those legitimate or totally overblown?
I think we are far away from needing to think about things like that at this moment. But I definitely have a more optimistic view of how this will play out. I think the essence of humanity is being able to harness technology in a way that benefits society, and more than any other technology, I see us having the conversations early enough with AI, so that gives me faith in humanity that we will get it right.
You've said there are even some things about AI that you don't understand. Will AI always be somewhat in a black box? Will there always be some things that we will just never know?
I have a little bit of a counterintuitive view there. Humans are very mysterious too, right? And humans are more of a black box than we give credit for. Often when people explain why they did something, it's not entirely clear that's why they did that specific thing. AI will also help us. Today, we can't make sense of many complex systems, you know, how does the global economy work, et cetera. AI will give us more insight and more visibility into many complex things. So it will explain the world better. And maybe over time you can query the AI and you can get better explainability. I think that should be one of our design goals, one of our design attributes, to develop explainable AI over time, and so I think it's too early to tell.
When I asked OpenAI CEO Sam Altman why we should trust him, he said, you shouldn't. Why should we trust Google?
I share the notion that you shouldn't blindly trust. You know, that's why it's important to have systems in place. Regulation has a part to play. It has to balance innovation. But I think as these AI systems get more capable, you know, regulation will have an important role to play, and it shouldn't just be based on a system of trusting people or trusting companies. I think that's not how you
deploy very powerful technology. But at this early stage of technology, you have to balance that with a view to allow innovation to flourish.
We have to remember that the positive upside here is tremendous as well in the areas like healthcare and many other areas. So I think we have to take that view too. But over time, I think you have to build frameworks to make sure this technology is deployed responsibly.
We've talked a lot about the opportunities. What is the biggest threat to Google's future?
I view, for all companies, particularly at scale, you know, the biggest threat is not executing well. I think as long as we stay focused on our mission and approach it by building foundational technology and using it to build products, innovate with it, and do that with a sense of urgency and focus on users, I think we'll do well. But that's what will define our success more than anything else.
You spend so much time thinking about what Google's future should look like. What's the killer bet that could secure Google for the next twenty five years? Is it AI, or is it quantum computing?
I would say AI. I've always viewed AI as that transformational opportunity for the company. I felt that for almost a decade, and, you know, I continue to feel that about the next decade ahead.
Are we going to look back on this LLM era and laugh? Is this going to all look so basic and rudimentary?
I hope we do.
My kids aren't impressed by touchscreens or the fact that they have this extraordinary amount of computing in their hands. So today, for example, people talk about, like, look at how much computing we are using. To me, it doesn't feel like a large amount. It's just large relative to what it was before. So similarly, there's no reason we won't scale up our computing one hundred thousand times in a few years.
So yes, I hope some of this looks like a toy in the future, because that will mean that we've applied it to achieve breakthroughs in cancer, etc. Right? So I hope it is that way. Otherwise we didn't do our job.
Well, you just did a big reorg. Is that with succession in mind?
When you run a company at this scale, reorganizations are focused clearly towards meeting the moment with AI, making sure we are simplifying the company and able to execute well. So that's what at least this set of reorganizations is focused on.
How long do you see yourself continuing to do this?
I don't think about that on a day to day basis, but you know, as a board, etc. We've always had responsible conversations around this topic, and I think it's important to do that. But I'm committed and I'm excited about the journey ahead.
So what motivates you to keep going? It's a hard job, and this is like a huge job; it takes tons of energy.
I still get delighted and surprised by how technology makes progress and playing a part in that is where I get a lot of my energy from. And so I view this as to me, if anything, this moment is more it's something I've thought about for a long time and almost it's part of a journey I've been working on for a long time, and so this is the moment, and so it's more exciting than most of the moments.
Is there a healthy dose of paranoia, like not becoming Stan the T. rex out there and going extinct?
I think that, you know, there's a part of me which is always internalizing the old Andy Grove phrase, only the paranoid survive, but to a healthy level. I don't obsess about it, but I never take our success for granted, you know. I constantly feel you have to re-earn it, and you have to do it with a sense of hunger and urgency and being mission focused and being user focused. So all that is important, and I think this moment is no different.
Thanks so much for listening to this episode of The Circuit. You can watch our full episode with Google CEO Sundar Pichai on Bloomberg Originals. I'm Emily Chang. Follow me on Twitter and Instagram at Emily Chang TV, and watch new episodes of The Circuit on Bloomberg Television or streaming on the Bloomberg app or YouTube. And check out our other Bloomberg podcasts on Apple Podcasts, Spotify, the iHeartMedia app, or wherever you listen to your shows, and let us know
what you think by leaving a review. They really make a difference. I'm your host and executive producer. Our senior producers are Lauren Allis and Alan Jeffries. Our editor is Alison Casey. Catch you next time.