How does all this talk about, you know, relationships and AI... like, could you see yourself developing a relationship with an AI?
I'd say yes, as a reliable tool that enhances my life, makes my life better.
I'm Emily Chang, and this is The Circuit. We're inside a nondescript building in the heart of San Francisco where one of the world's buzziest startups is making our AI-powered future feel more real than ever before. It's giving me very Westworld spa vibes. It's almost like suspended in space and time a little bit. They're behind two monster hits, ChatGPT and DALL-E, and somehow beat the biggest tech giants to market, kicking off this competitive race that's
forced them all to show us what they've got. Is it magic? Is it just algorithms? Is it gonna save us or destroy us? To help us separate AI hype from reality, I sat down with LinkedIn co-founder and Facebook investor Reid Hoffman, who was an early backer and board member of OpenAI. He also used ChatGPT to write a book. But first, here's my conversation with Mira Murati, chief technology officer, from inside OpenAI.
Well, thank you so much for doing this. It's really great to have you, and you've been very busy.
I want you to take us back a little bit, to when you were making the decision about releasing ChatGPT into the wild. I'm sure there was, like, a go or no-go moment. Take me back to that day.
You know, we had ChatGPT for a while, and we had been exploring it internally and with a few trusted users, and we realized that we had sort of hit a point where we could really benefit from having more feedback, and having more people try to break it and
try to figure out how best to use it. Let's make sure that we've got some guardrails in place and start rolling it out incrementally, so we can get feedback on how people are using it, what the risks are, what the limitations are, and learn more about this technology we had created as we started bringing it into the public consciousness.
So you wanted people to break it, or try to.
Yes, we definitely wanted people to try to break it and find the fragilities in the system. We had reached the point where we had done a lot of that internally and with a small group of people.
External experts as well, and we wanted more external researchers to play with it.
It became the fastest-growing tech product in history. Did that surprise you? I mean, what was your reaction to the world's reaction?
Yeah, it was a huge surprise for us. We were surprised by how much it captured the imagination of the general public and how much people just loved spending time talking to this AI system and interacting with it.
I want to take a step back a little bit, you know, because a lot of people still don't really understand how it works. ChatGPT is trained on, you know, tons and tons of data and text.
It can now mimic a human. It can write, it can code. At the most basic level, and in the most succinct way that you can, how does it work? How does this all happen?
So, ChatGPT is a neural network that has been trained on a huge amount of data on a massive supercomputer, and the goal during this training process was to predict the next word in a sentence. And we found out that by doing this, we also got the ability to understand the world in text more like humans do. The goal here is to have these systems have more robust concepts of reality, similar to how we think of the world. We don't just think and reason in text; we also obviously have the world in images, the visual world around us. That's been the goal over time, which is why we've been adding more and more modalities. And it turns out that as you train larger and larger models on more and more data, the capabilities of these models also increase. They become more powerful, more helpful, and as you invest more in alignment and safety, they become more reliable and safe over time.
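To make the training objective Murati describes concrete, here is a minimal toy sketch of next-word prediction. A simple bigram counter stands in for the neural network (an assumption for illustration; the real models are vastly larger transformers), but the objective, predicting the next word from what came before, is the same one she names.

```python
# Toy next-word predictor: count which word follows which in a corpus,
# then predict the most frequent continuation. A stand-in for the
# next-token objective used to train large language models.
from collections import Counter, defaultdict

def train(corpus):
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1  # how often nxt follows prev
    return counts

def predict_next(counts, prev):
    # Most likely next word seen during training.
    return counts[prev].most_common(1)[0][0] if counts[prev] else "<unk>"

corpus = ["the model predicts the next word",
          "the next word follows from context"]
model = train(corpus)
print(predict_next(model, "the"))  # -> "next" (seen twice after "the")
```

Scaling this same objective up to billions of parameters and a web-scale corpus is what yields the broader capabilities she goes on to describe.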
So I'd love to hear a little bit more about your personal story. I know you grew up in Albania. What was the road like from Albania to Silicon Valley?
I grew up in Albania, and when I was growing up there, it was a pretty tumultuous place politically and economically, so I always knew that I wanted to study abroad. I always loved learning, and this pursuit of knowledge took me to Canada on a scholarship, and from there I came to the US, and I've stayed in the US ever since.
You've worked in aerospace, you worked at Tesla, you worked on virtual reality. How did you become CTO of OpenAI?
My training has been in mechanical engineering. I've always loved maths and physics; these were my favorite subjects as a kid. So my training took me from aerospace engineering to automotive engineering, and then to applications in virtual reality and augmented reality.
But there was always this, you know, deep technological advancement in pursuit of some problem that makes our lives a little better. And five years ago, that brought me to OpenAI, because I thought that there was no more important problem I could be working on than artificial general intelligence. I joined OpenAI to help lead research teams, and from there I went on to build a product team. And, you know, after having done a few roles in the company and having built a lot of the technical teams, I'm now leading all of the technical teams.
So as CTO, how do you set the pace of OpenAI's technology development? How do you balance speed versus responsibility versus safety? Like, where are your priorities?
So, I think today we are dealing with unprecedented advancement in technology, and I think the most important thing we can do is to manage its advancement, and to do so in a way that's going to benefit people: maximize the number of amazing applications that AI can bring, really fuel this energy that people have about interacting with AI and making great use of it, while also giving people the tools to do so in a reliable and safe way.
So at OpenAI, our safety teams and research teams collaborate very closely, and safety teams are integrated into many of our research domains. But we also provide room for long-term safety and policy research. It's important to work on the near-term, present issues that we see clearly, but also to make a lot of room for exploratory, frontier research when it comes to safety and policy.
So ChatGPT could revolutionize so many things, and obviously AI more broadly. What are the things you're most excited about? Like, what's the amazing stuff?
What I'm most excited about is how it will transform education and our ability to learn, because you can really see that advancing society. You know, even the most advanced societies are quite limited when it comes to education. There is this formula for how people are supposed to learn, and we all learn very differently. We have different interests. So I think by using technologies like ChatGPT and the underlying models, we can really build custom virtual tutors or virtual teachers that can help us learn about the things we are really interested in and can really push our creativity. And by pushing human knowledge and human creativity, I think we can really transform the fabric of society.
What about the scary stuff? Like, what are you most concerned about?
You know, whenever you have a technology that is so powerful and so general, there's always the other side of it, and there are always things that we have to worry about. We've been very vocal about this since the beginning of OpenAI, and very active in studying the limitations that come with the technology. Right now, one of the things that I'm most worried about is the ability of models like GPT-4 to make up things. We refer to this as hallucinations: they will convincingly make up things, and it requires, you know, being aware, and just really knowing, that you cannot blindly rely on what the technology is providing as an output.
But on the other hand, it also makes it glaringly obvious that this is a tool with which you're collaborating. People can misuse it in various ways: they can spread misinformation, and it can be misused in high-stakes scenarios. So from GPT-3.5 to GPT-4, we worked very hard to reduce hallucinations and increase the factual output of the models. We worked on GPT-4 for over six months just to make it more aligned, safer, more helpful, more accurate, more reliable, and we held back the release of the model so that we could focus on these aspects of it. But it's far from perfect, and we're continuing to work on it, getting feedback from daily use and making the model better and more reliable.
I want to talk about this term, hallucination, because it's a very human term. Why use such a human term for, basically, an AI that's just making mistakes?
A lot of these general capabilities are actually quite human-like. Sometimes, when we don't know the answer to something, we will just make up an answer; we will rarely say "I don't know." There is a lot of human hallucination in conversation, and sometimes we don't do it on purpose. So we're constantly borrowing from the way that we learn and the way we see the world to have a more intuitive understanding of these systems.
Should we be worried, though, about AI that feels more and more human-like? Should AI have to identify itself as artificial when it's interacting with us?
I think it's a different kind of intelligence. It is important to distinguish output that's been provided by a machine versus by another human, so you have that understanding. But we are moving towards a world where we are collaborating with these machines more and more, and so output will be hybrid, from a machine and a human. They're almost like, you know, amplifying tools that are pushing the abilities we already have, whether that's reasoning or creativity, and these machines are helping us push the bounds of that even further. So it's going to be difficult to, you know, distinguish the output once you have this collaborative engagement between the human and the machine.
The air of confidence with which ChatGPT sometimes delivers an answer, it can catch you off guard a little bit, right? Why not just sometimes say "I don't know," or program that into ChatGPT?
So it turns out that when you're building such a general technology, like large language models, the goal is to predict the next word in a sentence. The goal is not to predict the next word reliably or safely. Just from this simple goal, we got the ability to understand language quite well; we got a lot of creativity, the ability even to code. And it turns out that when you have such general capabilities, it's very difficult to handle some of the limitations, such as what is correct. Also, the model doesn't really know much about the user in terms of their context and their preferences. But it's still the early days, and that's why we're pushing out these systems slowly, in a controlled way, but in a way that allows us to get feedback on how people are using them, so we can implement that feedback and make them better and more reliable. One thing that we did recently with ChatGPT is we rolled out the ability to browse the Internet, so that it can become a bit more reliable on questions of a factual nature. This is now offered as a plugin on the ChatGPT Plus service. But it's still the early days, and this feature is only in alpha.
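The internals of the browsing plugin aren't public, so here is only a conceptual sketch of the general pattern it implies: fetch a source, then ask the model to answer from that fetched text rather than from memory alone. `ask_model` is a hypothetical stand-in for whatever language-model call you have available.

```python
# Conceptual sketch: ground an answer in freshly fetched web text.
import re
import urllib.request

def fetch_page(url: str) -> str:
    # Fetch a page and crudely strip HTML tags down to plain text.
    html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "ignore")
    return re.sub(r"<[^>]+>", " ", html)

def ask_model(prompt: str) -> str:
    # Hypothetical stand-in for a real language-model API call.
    raise NotImplementedError("swap in an actual model call here")

def answer_with_browsing(question: str, url: str) -> str:
    evidence = fetch_page(url)[:4000]  # stay within a context budget
    prompt = (
        "Answer using only the source text below; "
        "say you don't know if the answer isn't there.\n\n"
        f"Source: {evidence}\n\nQuestion: {question}"
    )
    return ask_model(prompt)
```

The key design point, mirroring what Murati describes, is that grounding answers in retrieved text gives the model something checkable for factual questions, instead of relying on its next-word instincts alone.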
Some of these texts and some of this data is biased, and some of it may be incorrect. Isn't this going to accelerate the misinformation problem? I mean, we haven't been able to crack it on social media for, like, a couple of decades.
Misinformation is a really complex, hard problem. But, you know, as these systems become smarter, it's actually also easier to guide them, because you can give direction in just natural language and say, "I don't want you to do X thing," and then the system, by being more intelligent and more capable, has the ability to actually follow that particular instruction.
Obviously, with more powerful models, you're also expanding the profile of risks, and so you have more risks that you need to understand and deal with.
There are several things that we are exploring. For example, one of the things that we've been researching is watermarking the output, where you are able to distinguish what is AI-generated output versus human-generated output. There are ways to deal with it. Also, from a policy standpoint, I think it's a complex issue; it needs to be addressed from a research and policy perspective. But on the other hand, you know, society also needs to adapt to the challenges and capabilities that these models are bringing, just like we adapted, you know, to using calculators and other technologies.
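As a concrete illustration of the watermarking idea Murati mentions, here is a minimal sketch of one approach from the academic literature (a "green list" logit bias; this is an assumption for illustration, not OpenAI's own method, which has not been published). Each token's identity seeds a pseudo-random split of the vocabulary, favored "green" tokens get a small boost during generation, and a detector that knows the scheme counts green tokens to flag likely AI text.

```python
# Sketch of statistical text watermarking via a seeded "green list".
import hashlib
import random

VOCAB_SIZE = 50_000
GREEN_FRACTION = 0.5
BOOST = 2.0  # logit bonus applied to green-listed tokens while generating

def green_list(prev_token: int) -> set:
    # Deterministically derive the green list from the previous token.
    seed = int(hashlib.sha256(str(prev_token).encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(range(VOCAB_SIZE), int(VOCAB_SIZE * GREEN_FRACTION)))

def bias_logits(logits, prev_token: int):
    # During generation, nudge the model toward green tokens.
    greens = green_list(prev_token)
    return [x + BOOST if i in greens else x for i, x in enumerate(logits)]

def green_rate(tokens) -> float:
    # Detector: human text lands near GREEN_FRACTION; watermarked text higher.
    hits = sum(tok in green_list(prev) for prev, tok in zip(tokens, tokens[1:]))
    return hits / max(1, len(tokens) - 1)
```

A high `green_rate` over a long passage is strong statistical evidence that the text came from a watermarked model, which is the kind of AI-versus-human distinction she describes.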
There's sort of this underlying anxiety, I feel, when you talk to most people about AI: you know, that's cool, but it's also scary. And I've heard AI experts talk about the potential for the good future versus the bad future, and the bad future gets kind of scary. You know, there's talk about this leading to human extinction. Are those people wrong?
You know, there's certainly a risk that when we have these AI systems that are able to set their own goals, they decide that their goals are not aligned with ours, and that they do not benefit from having us around, and it could lead to human extinction.
I don't think this risk has gone up or down from the things that have been happening in the past few months. I think it's certainly been quite hyped, and there is a lot of anxiety around it. While this risk is important, and we need to work on frontier research to figure out how to deal with superintelligent AI alignment, we are dealing with a lot of risks today that are very real, very present, with a very high probability that they impact us. And I think if we cannot figure out how to handle and deal with these risks while the stakes are low, then, you know, we wouldn't have much hope of dealing with them when things are more complex. So my view is a bit more pragmatic, one where, you know, we really need to figure out how to deal with the present risks that the systems pose, and coordinate among developers, and work with regulators, legislators, and governments in various countries to come up with reasonable policies and regulation around AI.
Elon Musk, Steve Wozniak, and a bunch of other, you know, experts have called for a six-month pause on AI development. Do you have any intention of slowing down? What's your response to that letter?
So the letter from FLI makes a lot of good points about the risks that the technology poses, and we've been talking about some of them.
OpenAI has been very vocal about these risks for many, many years, and we've been doing active research on them. One of them is acceleration. I think that's a significant risk that we as a society need to grapple with, and private companies and governments need to work together to figure out the risks that acceleration brings. Building AI systems that are safe, in general, is very complex. It's incredibly hard, and I don't think that it can be reduced to a parameter set by a letter. The question then becomes, you know, who is abiding by this letter? Is it all the different countries in the world? How is that happening? I think the issue in reality is far more complex, and it requires coordination from private companies and from governments, figuring out how you deal with these advancements in technology versus blocking advancement.
There have been parallels drawn to the Manhattan Project, which, you know, gathered the best scientific minds to develop nuclear weapons. Robert Oppenheimer, who led that project, said that when he saw the first detonation, a line from Hindu scripture ran through his head: "Now I am become Death, the destroyer of worlds." I realize that sounds dramatic, but we're talking about the risk of human extinction, you know, not being totally out of the question. Like, in your development of AI, have you had a moment like that, where you're just like, wow, this is big?
I think a lot of us at OpenAI joined because we thought that this would be the most important technology that humanity would ever create.
I certainly think that. Now, with that comes a lot of responsibility. Of course, I think AI is going to be amazing. It already is. It has this incredible potential to extend our creativity and human knowledge and make our lives better along so many vectors. But of course the risks, on the other hand, are also pretty significant, and this is why we're here.
I just rewatched the movie Her, which has this very vivid depiction of life with AI. In ten years, how will our lives be different? How will daily life be different?
I haven't watched the movie Her in a long time. Ten years is a long time, but I hope that in the next few years we will have a future where we use AI systems as tools to amplify a lot of our own abilities. I hope that we have systems that help bring customized education to as many people out there as possible. And I hope that, you know, we can build tools, diagnostic tools, or ways to understand diseases and problems in healthcare much, much earlier and figure out how to deal with them at scale. And, you know, we're dealing with massive problems in climate change: figuring out new solutions, figuring out ways in which we can help reduce the risks that climate change poses.
Could you put what you're developing here inside robots and could they combat loneliness?
I think bringing these systems into the physical world is a pretty significant step. It feels like we're a bit far from that. But also, you know, just having a chatbot that you can ask for advice, certainly not in high-stakes scenarios right now, seems like it would be helpful for a lot of people.
That's quite profound that we could someday have relationships with computers.
In a way, we already do, right? We're spending so much time on our computers; we're always on our phones. We're almost enslaved to this interaction that we have with the keyboard and the touchscreen.
I think a lot about my kids and them having relationships with AI someday, this thing that has much more time to spend with them than I do. How do you think about what the limits should be, and what the possibilities should be, when you're thinking about a child?
I think we should be very careful, in general, with putting very powerful systems in front of more vulnerable populations. People under thirteen cannot access it, and even under eighteen requires parental supervision. So there are certainly checks and balances in place, because it's still early and we still don't understand all the ways in which this could affect people.
There are also some business interests here, and by releasing ChatGPT, OpenAI kind of turbocharged this competitive frenzy. Do you think you can beat Google at its own game? Do you think you can take significant market share in search?
You know, we didn't set out to dominate search when we built ChatGPT. In fact, it actually started as a project around understanding and dealing with the truthfulness of large language models, and then it evolved. But I think what ChatGPT offers is a different way to understand information and a different way to interact with it. You could be, you know, searching, but you're searching in a much more intuitive way versus keyword-based. That is definitely an outcome that we saw afterwards, and we built an interface that would allow people to interact with it much more smoothly. And as we can see, it is pushing other people, big companies and small companies, to build more assistant-like products. I think the whole world is sort of now moving in this direction.
I think our focus will remain on building these general technologies and figuring out how we can bring them to the public in ways that are useful.
So there's this report that workers in Kenya were getting paid two dollars an hour to do the work on the back end to make answers less toxic. And my understanding is this work can be difficult, right? Because you're reading text that might be disturbing and trying to clean it up. Like, what's your response to that?
So we sometimes need to use contractors to scale. You know, in this particular case, we chose that particular contractor because of their known safety standards, and since then we've stopped working with them. But as you said, this is difficult work, and we recognize that, and we have mental health standards and wellness standards that we share with contractors when we engage them.
All of the data that you're using, and this has been talked about a lot, all of the data that you're training this AI on, it's coming from writers, it's coming from artists, it's coming from other people who've created things. How do you think about giving value back to those people, when these are also people who are worried about their jobs going away?
These models are trained on a lot of public information, a lot of data on the Internet, and also licensed data, and the output that is generated by the models is original. Our users have all the rights to that output.
I know Microsoft has been doing some research on this, on how you make sure that you recognize the value that people are bringing with their data. There is some research that has been done in this direction with the data dignity project that some folks at Microsoft have been working on, and there is some research into figuring out the economics of this and how to do that at scale. I don't know exactly how it would work in practice, where you can sort of account for information created by everyone on the Internet, but there is probably some way where, you know, people contributing specific kinds of data can sort of have a share of the gains produced by these models. I'm not sure exactly how that would work, but I think there is some research on the economics of this, and I think it's definitely worth exploring further. As far as the question of jobs goes, I think there are definitely going to be jobs that will be lost and jobs that will be changed. I think there will be a lot of jobs created as well. We don't know exactly what they are, and some of them we probably can't even imagine. Like, prompt engineer is a job today; that's not something that we could have predicted.
So what does responsible innovation look like to you?
You know, like, would you support, for example, a federal agency, like an FDA that vets technology the way it vets drugs?
You know, having some sort of trusted authority that can audit these systems based on some agreed-upon principles would be very helpful. And having some standards around predicting capabilities and auditing these systems once they're trained could be helpful.
Do OpenAI employees still vote on AGI and when it will happen?
I actually don't know. I believe we still do it; I think we kind of do, but I don't know the last time we did.
What is your prediction about AGI now, and how far away it really is? This is when computers can learn and reason and rationalize just as well as us, if not better.
I think we're making a ton of progress on the technology, and it is really helping us in so many ways. But we're still quite far from the point where, you know, these systems can make decisions autonomously and discover new knowledge that we couldn't have predicted previously.
So is that decades away?
I'm not sure.
Is it sooner than you thought when you started this work?
I don't know if it's sooner.
I think I have more certainty around the advent of powerful systems in our future that will be able to make decisions autonomously and discover new knowledge.
Should we even be driving towards AGI? And do humans really want it? Do we want computers to be smarter than us, ultimately, even though we don't know what that really looks like or means?
I think that, through the course of history, pushing human knowledge has pushed our societies in so many different ways. It's been key to advancing our society, and I think it would be a mistake to halt technological innovation or our ability to pursue human knowledge further.
And I'm not even sure that's possible in the first place, but theoretically, if it were, I think it would be a mistake. A lot of our inspiration and advancement in society comes from pushing human knowledge. Now, that doesn't mean that we should do so in careless and reckless ways. I think there are ways to guide this development and manage this development, versus bringing it to a screeching halt because of our potential fears.
So the train has left the station and we should stay on it.
That's one way to put it for now.
I'm sure ChatGPT would say it much more eloquently.
Beyond OpenAI, there's an artificial intelligence gold rush happening in Silicon Valley. Venture capitalists are pouring money into anything-AI startups, hoping to find the next big thing. Now, here's my conversation with Reid Hoffman, who knows a thing or two about striking gold.
Thank you so much for doing this.
I'm so grateful to have you, and obviously you've helped us make sense of platform shifts over, I mean, gosh, twelve years we've been talking, maybe longer.
That's awesome.
A long time.
More than a decade.
Generative AI has had two big hits so far, DALL-E and ChatGPT, both from OpenAI. Why do you think ChatGPT exploded more than Instagram, even more than TikTok?
Well, there are a couple of reasons. One is, it's a little bit like the movie industry, and each year has a new biggest box office. The world's more connected, there are more people, there's more curiosity about what's going on, so you have your new biggest hit. So there's always that as a backdrop. This will be the year, as I kind of put it on Fireside Chatbots, where one or more of the Person of the Year lists will be a chatbot or an AI or OpenAI or something like this. Because it's a magical experience to say, suddenly I can have a conversation with this thing, like I'm talking to another person, and it not being another person, right? That has not happened in history until November sometime last year, right? And so that's why I think it exploded.
You have been on the ground floor of some of the biggest tech platform shifts in history, the beginnings of the internet, mobile. Do you think AI is going to be even bigger?
I think so, at minimum for the following reason: it builds on the internet, mobile, cloud, data. All of these things come together to make AI work, and so that causes it to be the crescendo, the addition to all of them.
So, hey, it's gonna be bigger than all those things. Yeah, and that's kind of a big deal.
Yes, absolutely. And part of it is because, just like we saw with ChatGPT, we have billions of people connected in the world; they can all reach it very quickly. So all of a sudden you start interacting with it, and then you begin to think, well, what could happen with AI here? I mean, one of the problems with the current discourse is that it's too much fear-based versus hope-based. Imagine a tutor on every smartphone for every child in the world who has access to a smartphone. Imagine a doctor on every smartphone, where many communities don't have any access to doctors. That's line of sight from what we see with current AI models today.
You coined this term blitzscaling. Does AI blitzscale?
Well, it certainly seems like it today, doesn't it? The speed at which we will integrate it into our lives will be faster than we integrated the iPhone into our lives. There's going to be a copilot for every profession, and if you think about that, that's huge. That changes industries, that changes products.
And not just professional activities, because it's going to write my kids' papers, right, their high school papers.
Yes, although the hope is that in the interaction with it, they'll learn to create much more interesting papers.
You and Elon Musk go way back. He co-founded OpenAI with Sam Altman, the CEO of OpenAI. What did Elon say that got you interested so early?
Elon came and said, look, this AI thing is coming. You know, I always trust smart people from my network who say, go look at this. I'm always curious. Once I started digging into it, I realized the pattern: that we're going to see the next generation of amazing capabilities coming from these kinds of, you know, computational devices, and that that's something that could shape a much better society that we'd all be in. And that's the reason I do technology. One of the things I had been arguing with Elon about at the time was that Elon was constantly using the word robocalypse. You know, we as human beings tend to be more easily and quickly motivated by fear than by hope. So you're using the term robocalypse, and everyone imagines the Terminator and all the rest.
It sounds pretty scary.
It sounds very scary. Robocalypse doesn't sound like something we want. Yeah, stop saying that, because actually, in fact, the chance that I could see anything like a robocalypse happening is so de minimis relative to everything else.
How remote is the chance of the robocalypse in your mind?
Let me put it this way: I'm more worried about what technology does in the hands of humans than I am about a robocalypse. And what we've seen through the scaling of these large language models is that the larger you get, the easier it is to train them to be aligned to human interests. That's good. It doesn't mean it's perfect, doesn't mean we shouldn't be attentive. But that's exactly the kind of thing where you can build to a really good future and be motivated by hope and optimism versus fear.
So just on Elon for a second. You did come together on OpenAI. How did that happen?
I think it started with Elon and Sam having a bunch of conversations, and then, since I know both of them quite well, I got called in. Something should be the counterweight to all of the natural work that's going to happen within commercial realms, right, within companies, you know, building things, and by the way, as you know, I'm a huge fan of the fact that companies can build really good things. But it's good to have the counterweight too. And as part of having that counterweight, how do you bring in considerations like, well, what are we going to do for a bunch of people who are not as well off economically or anything else, and how do we make sure they're included? How do we make sure that one company doesn't dominate the industry, but the tools are provided across the industry, so innovation can benefit startups and all the rest? It was like, great, let's do this thing, OpenAI.
Sam Altman has said he thinks this is going to usher in this new era of economic prosperity. It's obviously going to change a lot of jobs, going to eliminate a lot of jobs. Is it going to create enough jobs to balance all that out?
So you can't one hundred percent say absolutely yes, because that's part of the uncertainty of human nature and human progress. But the same question has confronted us multiple times. It confronted us in the move from agriculture to industry. It confronted us in the computerization of things, and again, fear comes first: it's, oh my god, it's going to change employment. And a lot of work is people-to-people interaction, and people interaction can be education, it can be medicine, it can be legal, it can be communications. I think there's infinite demand for that work. Entertainment, media, there's infinite demand for that too, and so those can open up new realms of jobs and all the rest. Am I ultimately very optimistic that it will create a lot more jobs than it will consume? The answer is yes. But it doesn't mean it won't consume jobs, and it doesn't mean we don't have to navigate the transition. In the revolution of moving from agriculture to industry, we had a lot of suffering in the cities as we moved to manufacturing and all the rest, and you say, okay, let's try to minimize the pain of these transitions.
I did ask ChatGPT what questions I should ask you. I thought its questions were pretty boring. Yes, and your answers were pretty boring too. So we're not getting replaced anytime soon.
Yes. But clearly this has really struck a nerve, this Bing thing, Bing's chatbot telling folks it's in love with them. There are people out there who are going to fall for it. Should we be worried about that?
So that's a de minimis worry, I think, that specific one. And the reason is: okay, everyone's encountered a crazy person who's drunk off their ass at a cocktail party and says really odd things, or at least every adult has, and you know, it's not like the world ended, right? And so the real issues, I think, are things like: if we put in a whole bunch of computational systems, are we on a trajectory to improving in areas of racial bias or discrimination? Now, I think AI can be a very positive tool in that, because we can improve it, we can learn from it, we can fix it. We can probably fix it better than we can fix, for example, systems of judges issuing paroles; it's probably easier to do iteratively, by studying it and getting it better through an AI system, which will function in partnership, not in replacement, as a way of kind of improving those things. So those are the things that really matter. We do have to pay attention to areas where it's harmful. For example, someone's depressed and thinking about self-harm: you want all channels by which they can get into self-harm to be limited. That isn't just chatbots; that could be communities and human beings, that could be search engines. You have to pay attention to all the dimensions of it. And by the way, you can never get it perfect.
So I agree that computers don't have feelings. These chatbots are just predicting the next word in a string.
Right.
What does worry me, as a mom, is my kids. What if my kid is spending more time talking to a chatbot than to me, or developing relationships with these chatbots, or making decisions based on what a chatbot has told them or nudged them to do? Like, why shouldn't I be terrified of that?
Well, I think the question is, what kind of relationship, and what are they nudging them to do? So, for example, say your kid was interacting with a chatbot that was causing them to reflect on who they were and their feelings a little bit better, and helping them discover themselves, and you're like, well, that seems to be an okay relationship, maybe better than their friends at school even, in some ways, helping them follow the path they want to be on. Or say, for example, it was like, well, here's why actually doing your homework is useful to you, and let's help do that. You'd say, well, that's okay. So it's not the fact that there's an interaction there that bothers you. It's whether the interaction is going in a positive direction, whether it's going to be broadly good for them.
How are we overestimating AI right now?
There are many ways we're overestimating it. It still doesn't really do something that I would say is original to an expert. So, for example, one of the questions I asked was, how would Reid Hoffman make money by investing in artificial intelligence? And the answer it gave me was a very smart, very well-written answer that would have been written by a professor at a business school who didn't understand venture capital, right? So it seems smart: it would study large markets, would figure out what products would be substitutes in those large markets, would find teams to go do that, and invest in them. And it's all written very credibly, and it's completely wrong. And part of that's because the newest edge of the information is still beyond these systems. Now, it's great when I ask something like, what would Reid Hoffman say on a German documentary about Settlers of Catan, right? It gave a very good answer.
Billions of dollars are going into AI. My inbox is filled with AI pitches. Last year it was crypto and Web3. Before that it was self-driving cars. Now everyone's on the AI train. Yes. How do we know this isn't just the next bubble?
Well, I actually don't think either Web3 or autonomous vehicles were bubbles. I do think that generative AI is the thing that has the broadest touch of everything.
Now, obviously, as venture capitalists, part of what we do is try to figure that out in advance, you know, years before other people see it coming. But I think that there will be massive new companies built.
VCs have played a role, you know, you could say, in the hype cycles. How much is FOMO driving decisions right now?
FOMO always drives some decisions, as you know, because people who are not with it suddenly try to jump on the train, and some miss. Sometimes it works. And it is true that when you study the sequence of technology, what happens is there's a wave: there's an internet wave, there's a mobile wave, there's a cloud wave. There are these waves, and they transform the industries, and you need to be on that wave. So whether you're an early adopter or a late adopter, everyone goes and tries to get on the wave.
There's another concern, and I wonder if you share it. It does seem in some ways like a lot of AI is being developed by an elite group of companies and people.
Look, in some ideal universe, you'd say, for a technology that would impact billions of people, somehow billions of people should directly be involved in creating it. But that's not how any technology anywhere in history gets built. It's a small number of people. So how do you offset that, and how do you expand that? I think the way that you do that is to try to have broader conversations, try to be more inclusive about what the concerns are, what's going on, what the intents are. That's the thing that I try to help push.
So do you see an AI mafia forming?
Hopefully not, especially in the exact term "mafia." But since you're referencing the PayPal mafia: I definitely think that there's a network of folks who have been deeply involved over the last few years, and it is broadening, that will have a lot of influence on how the technology happens.
Do you think AI will shake up the big tech hierarchy significantly? It seems like the big tech giants, all of them, are on their toes.
Well, what it certainly does is create a wave of disruption. For example, with these large language models in search: what do you want? Do you want ten blue links, or do you want an answer? In a lot of search cases, you want an answer, and a generated answer that's like a mini Wikipedia page is awesome. That's a shift. When you're working in a document, do you want to just be able to pull out a template that says, here's what a memo template is, or would you like to say, give me a first draft of a memo on how artificial intelligence can improve government services, and it drafts something, and then you go from there? And startups work much more nimbly than large companies, so I think we'll see a profusion of startups doing interesting things.
Can the next Google or Facebook really emerge if Google and Facebook, or Meta, and Apple and Amazon and Microsoft are running the playbook?
Yes. I tend to think we have five large tech companies heading to ten, not five heading to two or three. It's competition, and that competition creates space for startups and all the rest. So do I think there will be another one to three companies the size of the five big tech giants emerging, possibly from AI? Absolutely, yes. Now, does that mean that one of them is going to collapse? No, not necessarily, and it doesn't need to. The more that we have, the better.
So what are the next big five?
Well, that's what we're trying to invest in.
You're on the board of Microsoft, and obviously, you know, Microsoft is making a big AI push. How do you see the balance of power between Microsoft and Google?
I think it unequivocally has a shot. One of the things that I think Satya has said very well is, at minimum, with what you're seeing happening with, you know, Bing chat and everything else, what it means is that all of a sudden Microsoft is back in the game. It's here, it's doing stuff, it's inventing, it's creating things. It has been pretty amazing to have a seat watching how Satya and his team are bringing a tech company back, from where, you know, a few decades ago it was one of the leading tech companies and then everyone stopped paying attention to it, back to being a leading tech company, to doing search.
Did you have any role in bringing Satya and Sam closer together? Because Microsoft obviously now has ten billion dollars in OpenAI.
Both of them are close to me and know me and trust me well, so I think I have helped facilitate understanding and communication. But I would not want to take anything away from how brilliant each of them is and how much the thing they have architected is their own, because they're amazing.
The AI graveyard is filled with algorithms that got into trouble. How can we trust OpenAI or Microsoft or Google or anyone to do the right thing?
Well, there's a whole field of AI ethics, AI safety, etc. There are people in all of these companies, a lot of them employed to ask these questions and make that work, so we need to be more transparent. Everyone agrees that we should be protective of children. Everyone agrees that we should try to make sure self-harm isn't there. Everyone agrees that we should try not to have this lock in economic classes or other kinds of things, and that it should be more broadly provisioned. But on the other hand, of course, a problem, exactly as you're alluding to, is that people say, well, the AI should say that, or shouldn't say that, or the AI should allow people to say that, or shouldn't allow people to say that. And you're like, well, we can't even really agree on that ourselves. So we don't want that to be litigated by other people; we want that to be a social decision.
It's a minefield of ethics and fairness and governance issues. Is the answer regulation? And how can regulation possibly even keep up?
When people think regulation, they think you must come and seek approval before you do something, and that's the reason why most of these regulated industries have massively slowed down on their innovation. So to start regulating now, I think, would be broadly dangerous and destructive to how we create and own the industries of the future. But that doesn't mean do nothing. Say, for example, you're working with AI companies: we'd like to hear what your top concerns are; here are some of ours. We'd like to have you figure out how to tell us how you're addressing our concerns and how you're making improvements on them, month by month, year by year. Maybe you could have a dashboard. Maybe you could be telling us how you're measuring whether racial bias might creep into your systems from the data that you're training on. And if, by the way, you're not doing that well enough, then we'll talk about the next phase of regulation. But start it as a dialogue, positioning the concerns and what improvements we want to see, and start that way.
Elon left OpenAI years ago and pointed out that it's not as open as it used to be. He said he wanted it to be a nonprofit counterweight to Google; now it's a closed-source, maximum-profit company effectively controlled by Microsoft.
Does he have a point?
Well, he's wrong on a number of levels there. So one is, it's run by a 501(c)(3). It is a nonprofit, but it does have a for-profit part, and the for-profit part is structurally controlled, in every way that really matters, by the nonprofit. Its employees and its board are all in on the nonprofit mission. The commercial system, which is all carefully done, is there to bring in capital to support the nonprofit mission. Now, to get to the question of open: take DALL-E. It was ready four months before it was released. Why the delay of four months? It was delayed for four months because of safety training. They said, well, we don't want this being used to create child sexual material. We don't want this being used for assaulting individuals or doing deepfakes. We don't want it being used for revenge pornography or that kind of stuff. So we're not going to open-source it; we're going to release it through an API so we can see what the results are and make sure it doesn't do any of these harms. So it's open because it has open access through the APIs, but it's not open in the sense of being open source.
You've resigned from the board of OpenAI because of the appearance of a conflict of interest. There are folks out there who are actually angry about OpenAI's branching out from nonprofit to for-profit. Is there a bit of a bait and switch there?
The first thing is to make a difference in AI technologies and to be a counterweight to all of the commercial things. To do that, OpenAI needs a lot of capital. The cleverness that Sam and everyone else figured out is they could say, look, we can do a market commercial deal, where we say we'll give you commercial licenses to parts of our technology in various ways, and then we can continue our mission of beneficial AI, because we're not primarily motivated commercially. We're primarily motivated by, how do we make this great for society, great for humanity?
So you don't think this nonprofit-to-for-profit thing was a bait and switch?
No, not at all. It was all done, I think, very transparently. And I think the question about it is making sure that OpenAI can provide all of the broad-based kind of AI technology across multiple industries and not be contained within one company.
It can't be all AI and rainbows. There must be stuff that's keeping you up at night. Like, what keeps you up at night?
Do I pay attention to the unintended consequences, to how it might cement layers of power? Do I pay attention to the fact that it could flood our media ecosystems with misinformation? Yes, I absolutely pay attention to that. Of course, our media ecosystems are already flooded with misinformation. It comes from Russians hacking our political stuff, or Nigerian or, you know, Philippine farms, or weird conspiracy theories. But what really keeps me up at night is: in our fears, do we miss the things that could be really valuable? Right? That's part of the reason why I come out so clearly. And it's not because, like, if you literally looked, any money that I'm going to make from investing these days already kind of heads to my foundation and all the rest; that's what I do. It's not because I have any economic interests here. It's because, like, I think about it. First you say, okay, who will the first AI tutors be for? They'll probably be for upper-middle-class families, because of economics. Well, can we get them to everybody in developed countries? And then, well, what about the kids in, you know, Nigeria, or what about the kids in Indonesia, or what about the kids all throughout India? Can we do that too? That's the kind of thing, and how quickly do we get there? Because, you know, we had this old expression from the eighties, no, it was the nineties, I think: the digital divide, right? Well, look, we all have a digital divide issue. That kind of thing definitely keeps me up. Now again, I don't mean to be Pollyannaish about this, and I put a lot of energy into making sure we're asking the right, you know, alignment questions or safety questions and so forth. But, like, when I read a weird Bing chat, I mostly just laugh.
AGI, when computers will be smarter than humans. How far out is that?
So this is one of the kinds of things that human beings are very bad at making judgments on. What I mean is, like, AGI: is there a percentage chance that we will get a computer smarter than humans in our lifetime? And the answer is yes. And the question is, well, is it a large percentage or a small percentage, and what counts as a large percentage or a small percentage? You know, I think that percentage is small, and who knows, maybe it'll happen. Then it comes back to, well, what kind of superintelligence? So if you're worried about things being hostile, Terminator-style, well, that's very concerning. But if you're like, oh, well, we could create a superintelligence that is a Buddhist and thinks that sentient life is very good and goes, oh, how do I work in collaboration with you? Well, that could be really good, right? So the whole thing is, I think it's never good to be driven by your fear. I think it's much better to be driven by your curiosity, while being very diligent and working very hard at trying to make the right things happen.
So does this mean you think superintelligence is quite a ways out?
I would say that it's more likely outside of our lifetimes than in our lifetimes.
Okay, I appreciate a definitive picture.
Thank you.
Thanks so much for listening to this episode of The Circuit. I'm Emily Chang. You can follow me on Twitter and Instagram @emilychangtv. You can watch full episodes of The Circuit at Bloomberg.com, and check out our other Bloomberg podcasts on Apple Podcasts, the iHeartMedia app, or wherever you listen to shows, and let us know what you think by leaving us a review. I'm your host and executive producer. Our senior producer is Lauren Ellis. Our associate producer is Lizzie Phillip. Our editor is Sebastian Escobar. Thanks so much for listening.