Look, this is the beginning of something amazing because there's no limit. This is right now an inflection point where we're sort of, you know, redefining how we interact with digital information. These are the fastest-growing open-source projects. These are the fastest-growing products. Some of the fastest-growing companies we've seen in the history of the industry. We, for a long time, really focused on building our own infrastructure. We have hundreds of thousands of servers.
I think we can get by with like 500. I said, okay, I think we can find 500K somewhere. And I remember you deadpanned: I mean about $500 million. The internet was the dawn of universally accessible information, and we're now entering the dawn of universally accessible intelligence. The AI revolution is here. But as we collectively try to navigate this game-changing technology, there are still many questions that even the top builders in the world are grappling to answer.
And that is why A16Z recently brought together some of the most influential founders from OpenAI, Anthropic, Character AI, Roblox, and more, to an exclusive event called AI Revolution in San Francisco. And in today's episode, we share the most important themes from this event, starting with the economics of AI. But we also touch on broad versus specialized models and which may ultimately win, the importance of UX, and also whether we can expect scaling laws to continue. By the way, several founders comment on what they're seeing there, including Noam Shazeer, lead author of the pre-eminent Transformer paper from back in 2017. Now, I won't delay us any longer, other than saying we've got a lot more coverage of the event coming, including how AI is disrupting everything from games to design, how two important waves in machine learning and genomics are colliding, and what we can expect from the enterprise.
But in the meantime, if you would like to listen to all the talks in full today, you can head on over to A16Z.com slash AI Revolution. As a reminder, the content here is for informational purposes only. It should not be taken as legal, business, tax, or investment advice, or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any A16Z fund. Please note that A16Z and its affiliates may also maintain investments in the companies discussed in this podcast. For more details, including a link to our investments, please see A16Z.com slash Disclosures. First up is Martin Casado, general partner at A16Z, giving the why now, and also how the economics of the space may finally be coming together. I will give you the punchline up front. The punchline is: if you've ever wanted to start a startup or join a startup, now is a great time to do it. But how early are we in the trajectory of this
technology? For example, the microchip was invented in the late 50s, but it wasn't until the turn of the century that Steve Jobs famously put a thousand songs in your pocket. So just how much opportunity is still on the table? Okay, so what has the narrative been for AI over the last 50 years? The narrative is this episodic thing with summers and winters and all of these false promises. I remember when I started my PhD in 2003: of the cohort that joined, I would say 50% of the people were doing AI. This was like when Bayesian stuff was super popular. And then within three years, everybody's like, AI is dead, right? And so it's kind of been this love-hate relationship for a very long time. But if you look at all of the graphs, we've made a tremendous amount of progress in the last 70 years. And along the way, we've solved a lot of very real problems, right? Like way back in the 60s, we were building expert systems that are still used for
diagnosis, right? Like, we're very good at beating Russians at chess. You know, we're doing self-driving cars, we're doing vision. There's just a lot of stuff that we've solved. So much so that it's become a cliché that every time we solve a problem, we're like, oh, well, that wasn't really AI, right? So we just keep moving the goalposts. So we've had steady progress. We've solved real problems. And not only that, it's been a while now that we've been better than humans at some very important things, for example, perception or handwriting recognition. It's been about 10 years since we've been better than humans at image identification. And not only that, we've actually gotten very good at monetizing this, particularly for large companies, right? And so, as we all know, there's been a ton of market cap added to companies like Meta and Google and Netflix by using AI. So I think the question we should all ask ourselves is: why hasn't this
resulted in an actual platform shift? And by platform shift, I mean: why has the value accrued to the incumbents? Why hasn't there been a new set of AI-native companies that have come up and displaced them, which we've seen in many other areas, right? We saw that in mobile, obviously; we saw it with the microchip, et cetera. What I'm going to argue is that the capabilities have all been there, but the economics just haven't for startups. So if you step back and look at the standalone case for AI economics, not what a big company can extract from it but the startup case, it's actually not been great. I mean, to begin with, a lot of the sexier use cases are pretty niche markets, right? Like, it's great to beat Russians at chess; maybe it's a useful tool you can apply to solving bigger problems, but it's not itself a market. I actually think
the second point is the most important, and it's pretty subtle. Many of the traditional use cases of AI require correctness in the tail of the solution space. And that's a very hard thing for a startup to do, for a couple of reasons. One of the reasons is: if you have to be correct and you've got a very long and fat tail, either you do all of the work technically or you hire people. So often you hire people, right? And for startups, hiring people to provide solutions is a variable cost. And the second one is: because the tails of these solutions tend to be so long, think something like self-driving, where there are so many exceptions that could possibly happen, the amount of investment to stay ahead increases and the value decreases, right? You have this perverse economy of scale. We've actually done studies on this, and it turns out many companies that try to do this as startups end up with non-software-like margins. They're lower-margin and just much harder to scale. Of course, with robotics comes the curse of hardware, classically a very difficult thing for startups to do. And if you really think about what the competition is for most use cases of AI, it tends to be the human. And traditionally it's stuff the human brain is really good at, like perception, right? The brains we have evolved over a hundred million years to do things like pick berries and evade lions, or whatever it is,
and they're incredibly efficient at doing that. So this leads to something that most investors know, which we call the dreaded AI mediocrity spiral. And what is it? It's very simple. Let's say a founder comes in and wants to do an AI company, and they're going to use AI to automate a bunch of stuff. Of course, correctness is really important, and to get it right at first they hire people to do the work instead of the AI. Then they come to us, we invest in them, and I join the board. Then I say: listen, this is great, you need to grow. And they're like, oh man, we need to grow, this AI is hard, the tail's very long, I'm going to hire more people. And now you're on this treadmill of continually hiring people. And this is one of the reasons why so many startups that have tried to do this just haven't had breakaway economics, and the value accrues to the large companies that can actually absorb these perverse economies of scale. But you know, market transformations aren't created when the economics get 10 times better. They get created when they're 10,000 times better. So what is the learning from the last, say, 70 years? It's not that the technology doesn't work. It's not that we can't solve the problems. It's not even that we can't monetize it; big companies are great at monetizing it. It's that it's very, very hard for startups to break away. And if startups can't break away, you don't get a transformation.
But what about the current wave, where the everyday consumer can prompt LLMs with natural language and have them output a variety of things, from conversations to images to even 3D models? So this wave is very, very different, and we're already seeing productive, viable businesses, right? I like to call them the three C's. There's creativity, like any component of a video game you can automatically generate. There's companionship, which is kind of more about emotional connection. And then there's the class that we call copilots, which help you with tasks. These are already emerging as independent classes. So remember the properties of AI that previously made it difficult to build a startup company? None of these really apply to this current wave. The first one: obviously, these are large markets this is being applied to. It's arguably all of white-collar work; even just video games and movies is like a $300 billion market.
These are massive, massive markets. The second one, again, I think is the most important point and maybe the most subtle. In this domain, correctness isn't as much of an issue, for two reasons. One of them is: when you're talking about creativity, the first C, there is no formal notion of correctness, really. What does it mean to be incorrect for a fiction story or a video game? I mean, for sure you want to make sure they have all their fingers, but even then, do you really, in sci-fi? And so we have absolutely adapted to use cases where correctness is not a huge issue. The second one is a little more subtle, and I just think it's so important, which is that the behavior that's developed around these things is iterative. And so the human in the loop that used to be inside the company is now the user. So it's not a variable cost to the business anymore. The human in the loop has moved out. And as a result, you can do things
where correctness is important, like, for example, developing code, because it's iterative, so you're constantly getting feedback and correction from the user. And I want to talk about this brain portion because I think it's so interesting. I'm not a neuroscientist, but for these types of tasks, the silicon stack is way better than the carbon stack, right? If you think about it, traditional AI, a lot of it is doing stuff the 100-million-year-old brain is doing, right? The one that's been fleeing predators or picking strawberries or whatever it is. And that's very, very hard to compete with. Remember, if you look at the CPU/GPU setup in a self-driving car, some of these kits are like 1.3 kilowatts, where the human brain is 15 watts. So economically, that's very tough to compete with. The new gen-AI wave is kind of competing with the creative language center of the brain, which is like 50,000 years old. It's much less evolved, and it turns out silicon is incredibly
competitive. So much so that you actually have the economic inflection we look for for a market transformation. So let's just break down the numbers very quickly. Let's say that I, Martin, wanted to create an image of myself as a Pixar character, right? If I'm using one of these image models, the inference cost is, let's call it, a tenth of a penny; it's probably less than that, actually. Let's say it takes one second. Compare that to hiring a graphic artist: let's say that's a hundred bucks and an hour. I've actually hired graphic artists to do things like this, and it tends to be a lot more money than that, but conservatively, you've got four to five orders of magnitude difference in cost and time. These are the types of inflections you look for, certainly as an economist, when there's going to be a massive market dislocation. I'll give you another example, from Instabase. Let's assume you have a legal brief in a PDF; you throw it into this kind of unstructured-document LLM, and then you ask questions of that legal brief. Again, the inference cost is, say, a tenth of a penny, maybe a little more, maybe a little less. Time to complete: maybe one second, maybe a little more, maybe a little less. But as someone who has actually spent a lot of money on lawyers' hours, I want to point out a couple of things. The first one is that it takes more than one hour to iterate on this, for sure. And the second one is that they're not always correct. In fact, built into any interaction I have with a lawyer is cross-checking and double-checking their work. So again, we have four to five orders of magnitude difference in cost and time. And if you want an example of how extremely nutty this can get: I see no reason why you can't generate an entire game.
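As an editor's aside, the back-of-envelope comparison above can be checked in a few lines of Python. The prices and durations are the speaker's rough assumptions (a tenth of a penny and one second for inference, a hundred dollars and an hour for the graphic artist), not measured figures:

```python
# Check the cost/time asymmetries described above.
# All inputs are the speaker's rough assumptions, not measurements.
import math

def orders_of_magnitude(human, machine):
    """How many powers of ten separate the human and machine figures."""
    return math.log10(human / machine)

# Pixar-style image: graphic artist vs. image-model inference
cost_om = orders_of_magnitude(human=100.0, machine=0.001)   # $100 vs. a tenth of a penny
time_om = orders_of_magnitude(human=3600.0, machine=1.0)    # one hour vs. one second

print(f"cost: ~{cost_om:.0f} orders of magnitude")   # ~5
print(f"time: ~{time_om:.1f} orders of magnitude")   # ~3.6
```

The ratios land at roughly five orders of magnitude in cost and between three and four in time, consistent with the "four to five orders of magnitude" framing.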
There are companies today working on it: the 3D models, the characters, the voices, the music, the stories, et cetera. There are companies doing all of these things. And if you compare the cost of hundreds of millions of dollars and years versus, you know, a few dollars of inference, now we have internet- and microchip-level asymmetries in economics. Now, listen, I'm not saying this happens soon; we're not there yet. What I'm saying is this is the path that we're on, and these types of paths are what you look for with big transformations. So it's little wonder we're seeing so much takeoff the way that we have. These are the fastest-growing open-source projects, these are the fastest-growing products, and some of the fastest-growing companies we've seen in the history of the industry. And it's because, again, it's less the capabilities and much more that the economics work. So listen, this may sound hyperbolic, but I
really think that we could be entering a third epoch of compute. The first epoch, of course, is the microchip. Before the advent of the computer, you actually had people calculating logarithm tables by hand. That's where the word comes from: they were computers, they would compute. Then we created ENIAC, along with other machines, but let's look at ENIAC. ENIAC was 5,000 times faster than a human being doing it, right? There's your three to four orders of magnitude, and that kind of ushered in the compute revolution, right? And this gave us a number of companies that were either totally transformed, like IBM, or totally net new. So the microchip brought the marginal cost of compute to zero. The internet brought the marginal cost of distribution to zero. So listen,
in the 90s, when I wanted to get a new video game, I would go to a store and buy a box. And I don't have the math up here, but if you actually calculate the price per bit over DSL in the late 90s, it's about four or five orders of magnitude cheaper, again, relative to actually shipping the box. So I think it's a pretty good analog to say these large models actually bring the marginal cost of creation to zero. There's some very fuzzy, vague notion of what creation means, but for sure we can talk about content, conversation, whatever it is. And like the previous epochs, when those epochs happened, you had no idea what new companies were going to be created. Nobody predicted Amazon; nobody predicted
Yahoo. I remember, this happened. So listen, I think we should all get ready for a new wave of iconic companies. I don't think we know what they're going to look like, but forget the capabilities: the economics are just too compelling. We'll hear more from our team at the end of this episode, but speaking of economics and the scale of top models today, here is our new general partner Anjney Midha reminiscing about an early call he had with Dario Amodei, co-founder of Anthropic, who you'll also hear from shortly. I'm going to take you all back in time to about three years ago. You and Tom, one of your co-founders, gave me a call and said, hey, I think we're going to go start Anthropic. And I asked you, great, okay, what do you think we need to get going? You said, well, I think we can get by with like 500. I said, okay, I think we can find 500K somewhere. And I remember you deadpanned: I mean about $500 million. And that's when I realized things were going to be a little
bit different. Tech Week is back, and we're coming to New York City. We had over 750 events in San Francisco and LA this year, and starting on October 16th, there are already over 300 events on the calendar for New York Tech Week. So to celebrate, we are giving away three tickets to A16Z's welcome party that kicks the whole week off, and there are several ways to enter: you can retweet the giveaway announcement post, you can tweet your own attendance using hashtag NY Tech Week, or you can let us know on YouTube by using the phrase "see you at New York Tech Week." All the details and more can be found at a16z.com slash Tech Week NYC.
Dario was one of the first employees at OpenAI and spent five years there before co-founding Anthropic. The last year of AI has absolutely captured the masses, but people like Dario were early in recognizing just how far these technologies could scale. What was it, at that moment when you and the team at OpenAI had started publishing your first experiments on scaling laws, that gave you so much confidence this was going to hold, when everybody else just thought it was crazy talk? Yeah, so for me the moment was actually GPT-2 in 2019, where there were two different perspectives on it, right? When we put out GPT-2, some of the stuff that was considered most impressive at the time was: oh my god, you give it five examples, you feed them straight into the language model, five examples of English-to-French translation, and then you put in a sixth sentence in English, and it actually translates it into French. Like, oh my god, it actually understands the pattern. That was crazy to us, even though the translation was terrible. It was almost
worse than if you were to just take a dictionary and substitute word for word. But our view was that, look, this is the beginning of something amazing, because there's no limit, and you can continue to scale it up, and there's no reason why the patterns we've seen before won't continue to hold. The objective of predicting the next word is so rich, and there's so much you can push against, that it just absolutely has to work. And then some people looked at it and they're like, you made a bot that translates really badly. It was just, I think, two very different perspectives on the same thing, and we just really, really believed in the first perspective. What happened then was, you saw a reason to continue down that line of inquiry, which resulted in GPT-3. And what would you say was the most dramatic difference between GPT-3 and the previous efforts?
Yeah, I mean, it was much larger and scaled up to a substantial extent. I think the thing that really surprised me was the Python programming, where the conventional wisdom was that these models couldn't reason at all. And when I saw the Python programming, even though it was very simple stuff, even though a lot of it was stuff you could memorize, you could put it in kind of new situations, come up with something that isn't going to be anywhere on GitHub, and it was just showing the beginnings of being able to do it. And so I felt that that ultimately meant we could keep scaling the models and they would get very good at reasoning. What was the moment at which you realized, okay, we think this is actually going to generalize much more broadly than we expected? What were some of the signals that gave you that conviction? I think one of the
signals was that we hadn't actually done any work; we had just scraped the web, and there was enough Python data on the web to get these good results. When we looked through it, it was maybe 0.1% to 1% of the data that we scraped that was Python data. So the conclusion was: well, if it does so well with so little of our data, and so little effort to curate it on our part, it must be that we can enormously amplify this. And that just made me think, well, okay, we're getting more compute, we can scale up the models more, and we can greatly increase the amount of data. So we have so many ways we can amplify this, and so of course it's going to work; it's just a matter of time.
Another person optimistic about scaling laws at the time was Noam Shazeer. Noam was one of the researchers and the lead author behind the transformative 2017 Transformer paper, and has since co-founded Character AI. I knew that, you know, you can make this technology better in a lot of ways. We can improve it with model architecture and distributed algorithms and quantization and all of these things, so I was working on that. But then it struck me: hey, the biggest thing is just scale. Can you throw like a billion dollars, or a trillion dollars, at this thing? What would happen if we did massively scale compute? Well, many companies chose to find out, and we, the consumers, are the beneficiaries of that. But can this realistically continue? Can the industry just keep throwing more compute at the problem and get better solutions, or will a more fundamental unlock be required? This theme was top of mind for many at the event, and here is OpenAI's CTO, Mira Murati,
tackling that question head-on. Do you think the scaling laws are going to hold and we're going to continue these advancements, or do you think we're hitting diminishing returns? We haven't seen any evidence that we will not get much better and much more capable models as we continue to scale them across the axes of data and compute. Whether that takes you all the way to AGI or not, that's a different question. There are probably some other breakthroughs and advancements needed along the way, but I think there's still a long way to go in the scaling laws, and a lot of benefits to gather from these larger models. We'll hear more from Mira and touch on AGI in part two,
but first, here's Noam again in conversation with A16Z general partner Sarah Wang on just how much compute we expect to soon be available, but also how much innovation is on deck even if there aren't additional fundamental breakthroughs. And for those listening on audio: yes, Noam really did do this computation in his head. I see this stuff massively scaling up; it's just not that expensive.
I think I saw an article yesterday that Nvidia is going to build another one and a half million H100s next year. So that's two million H100s, so that's 2 times 10 to the 6th, times they can do about 10 to the 15th operations per second, so 2 times 10 to the 21, divided by about 8 times 10 to the 9 people on Earth. So that's roughly a quarter of a trillion operations per second per person, which means it could be processing on the order of one word per second on a hundred-billion-parameter model, for everyone on Earth. But really it's not going to be everyone on Earth, because some people are blocked in China and some people are sleeping. So it's not that expensive, you know; this thing is massively scalable if you do it right, and, you know, we're working on that. You said this once: the internet was the dawn of universally accessible information, and we're now entering the dawn of universally accessible intelligence. Maybe building off your last answer, what did you mean by that? Do you think we're there yet?
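As an editor's aside before Noam's answer, his in-head arithmetic from a moment ago can be reproduced in a few lines of Python. Every input is his rough approximation (two million H100s, about 10^15 operations per second each), plus the standard estimate of roughly two operations per parameter per generated token:

```python
# Reproduce Noam's rough estimate of global inference capacity per person.
# All inputs are approximations from the conversation above.
h100s = 2e6                # ~2 million H100s assumed
ops_per_gpu = 1e15         # ~10^15 operations per second per H100
people = 8e9               # ~8 billion people on Earth

total_ops = h100s * ops_per_gpu        # ~2 * 10^21 ops/s worldwide
ops_per_person = total_ops / people    # ops/s available per person

# A forward pass of a dense model costs on the order of 2 * params
# operations per token (one multiply and one add per weight).
params = 100e9                         # a 100-billion-parameter model
words_per_second = ops_per_person / (2 * params)

print(f"{ops_per_person:.2e} ops/s per person")   # 2.50e+11, a quarter of a trillion
print(f"~{words_per_second:.2f} words/s each")    # ~1.25
```

The result comes out to about a quarter of a trillion operations per second per person, or on the order of one word per second from a hundred-billion-parameter model for everyone on Earth, matching the figures above.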
Yeah, I mean, I think it's really a Wright brothers first-airplane kind of moment, right? We've got something that works, and it's useful for now some large number of use cases, and it looks like it's scaling very, very well. Without any breakthroughs, it's going to get massively better as everyone just kind of scales it up, and there will be more breakthroughs, because now, you know, all the scientists in the world are working on making this stuff better. It's great that all this stuff is accessible as open source; we're going to see a huge amount of innovation, and what's possible only in the largest companies now can be possible in somebody's academic lab or garage in a few years. And then, yeah, the technology gets better, and there are just going to be all kinds of great use cases that emerge: pushing technology forward, pushing science, pushing the ability to help people in various ways. I'd love to get to the point where you can just ask it how to cure cancer or something, you know? I mean, it seems a few years away for now. Do you think we need another fundamental breakthrough, like the Transformer technology, to get there, or do you think we actually have everything that we need?
I mean, it's impossible to predict the future, but I don't think anyone's seen the scaling laws, you know, stop. As far as anybody has experimented, it just keeps getting smarter, so we'll be able to unlock lots and lots of new stuff. I don't know if there's an end to it, but at least everybody in the world should be able to talk to something really brilliant and have incredible tools all the time, and I can't imagine that that will not be able to build on itself. At the core, the computation isn't that expensive. Operations cost like 10 to the negative 18 dollars these days, and if you can do this stuff efficiently, even talking to the biggest models ever trained, the cost of that should be way, way lower than the value of your time, or most anybody's time. And really, there's the capacity there to scale these things up by orders of magnitude. As the industry does pursue scale, here's Dario's take on what bottlenecks may be along the way. Over the next 24 to 36 months, what do you think the
biggest bottlenecks are in demonstrating that the scaling laws continue holding? Yeah, so I think there are three elements: there's data, there's compute, and there's algorithmic improvements. And I think we are on track even if there were no algorithmic improvements from here, even if we just scaled up what we have so far. I think the scaling laws are going to continue, and I think that's going to lead to amazing improvements. I think the biggest factor is simply that more money is being poured into it. The most expensive models made today cost about a hundred million dollars, say, plus or minus a factor of two. I think next year we're probably going to see, from multiple players, models on the order of one billion dollars, and in 2025 we're going to see models on the order of several billion, perhaps even ten billion dollars. And so I think that factor of one hundred, plus the compute inherently getting faster; with the H100s, that's been a particularly big jump because of the move to lower precision. So you put all those things together, and if the scaling laws continue, there's going to be a huge increase in capabilities. But if compute does increase, how might this impact the size of models, and ultimately the cost
of inference for consumers? Inference will not get that much more expensive. The basic logic of the scaling laws is that if you increase compute by a factor of N, you increase data by a factor of the square root of N and the size of the model by a factor of the square root of N. So that square root basically means that the model itself does not get that much bigger, and the hardware is getting faster while you're doing it. So I think these things are going to continue to be servable for the next three or four years. If there's no architectural innovation, they'll get a little bit more expensive. If there's architectural innovation, which I expect there to be, they'll get so much cheaper. Increased model size and performance should unlock fundamentally new applications, which we'll explore further in part two. But first, entering the conversation is David Baszucki, co-founder and CEO of Roblox, commenting on the value of owning your infrastructure and the impact of that on inference
cost, especially in a 3D world constantly reinventing itself. An even further extension, which takes probably a lot of compute horsepower, is generation that's completely personalized, in real time, backed by massive inference infrastructure. So you could imagine, okay, I'm making the super Dungeons and Dragons thing, but as it watches you play, and maybe we know your history, you'll be playing a 3D experience that no one has ever seen before. One of the good things we've done is we, for a long time, really focused on building our own infrastructure. We have hundreds of thousands of servers, many, many edge data centers, terabits of connectivity that we've traditionally used for 3D simulation, and the more we can run inference jobs on these, the more we can run super-high-volume inference at high quality and low cost and make it, you know, just freely available, so the creators don't
worry about it. Whether we can continue scaling is one thing, but another topic on the minds of many builders is whether they can compete with the largest models. Will bigger models always win, or will specialization trump generality? Martin and Mira discuss. It reminds me very much of the silicon industry. I remember in the 90s, when you'd buy a computer, there were all these weird co-processors: here's string matching, here's floating point, here's crypto. And all of them got consumed into basically the CPU. It just turns out generality was very powerful, and that created a certain type of economy, one where you had, you know, Intel and AMD, and it all went in there, and of course there's a lot of money to build these chips. And so you can imagine two futures. There's one future where generality is so powerful that over time the large models basically consume all functionality, and then there's another future where there's going to be a whole bunch of models, and things fragment into different points of the design space. Do you have a sense of, like, is it OpenAI and nobody else, or is it everybody? It kind of depends what you're trying to do. Obviously the trajectory is one where these AI systems will be doing more and more of the work that we're doing, and they'll be able to operate autonomously, but we will need to provide direction and guidance and oversight. I don't want to do a lot of the repetitive work that I have to do every day; I want to focus on other things. But in terms of how this
works out with a platform, we make a lot of models available through our API, from the very small models to our frontier models, and people don't always need to use the most powerful, most capable model. Sometimes they just need the model that actually fits their specific use case, and that's far more economical. So I think there's going to be a range. There's a lot of focus right now on building more models, but, you know, building good products on top of these models is incredibly difficult. Plus, each industry may have unique requirements. Here's David commenting on how a suite of models will likely be required in order to power the class of games of the 21st century. In a company like Roblox, there are probably 20 or 30 end-user vertical applications that are probably very bespoke: natural language filtering, very different than generative 3D. And at the end-
user point we want all of those running we want to use all of the data in an opt-in fashion to help make these better tune these better but as we go down down down there's probably a natural two or three clustering of general bigger fatter type models and a company like ours there's definitely
one around safety civility natural language processing natural language translation generally one more multi modal thing around 3d creation say some combination of text image whatever generate a great avatar and then there's probably a third area which gets into the virtual human area which
is how would we take the five billion hours of human opted-in data what we're saying how we're moving where we go together how we work in a 3d environment and could we use that to maybe inform a better 3d simulation of a human so I would say yes looking at large models in those three areas
And in the market as we see it, there are going to be these super-big god-model, massive-LLM-type companies. I think we are probably a layer below that: very fine-tuned for the disciplines we want. It's worth noting that the backend model is only one part of the product. Here is Mira with a reminder to builders about the importance of UX: You can actually see the contrast between making this model available through an API and making the technology available through ChatGPT. It's fundamentally the same technology, maybe with a small difference through reinforcement learning from human feedback for ChatGPT, but it's fundamentally the same technology, and the reaction, the ability to grab people's imagination and get them to just use the technology every day, is totally different. Here is David Baszucki again, in conversation
with a16z general partner Jon Lai on what UX may be required, especially given the sheer number of games and experiences that we expect to be enabled by AI: Do you think you'll need a new user interface or discovery mechanism? I think with the user interface there's a lot of opportunity, in addition to thinking of this just as content, in thinking of this as your real-time social graph. It's fascinating, because I think some of the examples of AI being used by big companies are Netflix, and I think TikTok as well; they're sort of personalized YouTube recommendations. You could maybe imagine a future where a user that onboards into Roblox doesn't actually see a library or a catalog of games, but is just presented with a feed, and it's almost like you're just going from one to another. That's right. We are constantly testing the new user experience: should that be 2D, should that be 3D? What's the weighting between creating your digital identity versus discovery? What's the weighting toward connecting with your friends? We're optimizing all that, and we may find that it has to be personalized. Having a text or voice prompt will just actually be part of any experience, wherever you go. Just like in a traditional avatar editor, rather than sliders and radio buttons, I think that will move to a more interactive, text-prompt kind of thing. As we think about UX and the increasing
capabilities of these models, how might they let us further integrate with the world around us and connect more data streams? Here are Mira, David, and Noam exploring the world of multimodality: Today we obviously have this great representation of the world in text, and we're adding other modalities, like images and video and various other things, so these models can get a more comprehensive sense of the world around us, similar to how we understand and observe the world. The world is not just in text; it's also in images. Yeah, I think there's a lot of interesting stuff going on in various ecosystems around this copilot notion. There's one copilot where we're all wearing our little earbuds all day long, and that copilot is talking to us; that's maybe more of a consumer, real-time copilot. There are obviously many companies trying to build the copilot that you hook up to your email, your texts, your Slack, your web browser, and whatever, and it starts acting for you. I'm really interested in the notion that copilots will talk to other copilots; I think natural English will be the universal interface of copilots. And you can imagine NPCs being created by prompts: you know, hey, I'm building the historical constitutional thing, I want George Washington there, but I want George Washington to act at the highest level of civility, guide new users through the experience, tell them a little about constitutional history, and go away when they're done. I actually do think you will see those kinds of assistants. Also multimodal: maybe you want to hear a voice and see a face, and then also just be able to interact with multiple people. Yeah, you would want a virtual person in there with, say, all of your friends. Or do you want the experience where it's like you got elected president: you get the earpiece, and you get the whole cabinet of friends or advisors there. It's like you walk into Cheers, and everyone knows your name, and they're glad you came. So there's a lot we can do to make things more usable, but also to make them more intelligent and more connected to what people want. As these dynamic multimodal products emerge, will natural language be enough to effectively interface with computers? So, it
could be the case that, over time, these things evolve into you just speaking natural language. Or do you think there will always be a component of a finite state machine, a traditional computer? Yeah, I think this is right now an inflection point where we're redefining how we interact with digital information, and it's through the form of these AI systems that we collaborate with. Maybe we have several of them, and maybe they all have different competences, and maybe we have a general one that kind of follows us around everywhere, knows everything about what my goals are in life and work, and kind of guides me through and coaches me, and so on. But we don't know exactly what the future looks like, and so we are trying to make these tools and the technology available to a lot of other people, so they can experiment and we can see what happens. It is hard to imagine a world where AI doesn't continue to evolve and disrupt the world as we know it. But as this happens, a common reaction is to wonder: what happens to all the jobs? Here's Martin, the man who opened this episode,
closing us out with an important reminder: There's always a question when you have market dislocations like this, staring you in the face, when you know it's coming: what happens to the jobs? What happens to people? There's something called Jevons paradox, and it's very simple.
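The Jevons-paradox argument can be made concrete with a small calculation: under a constant-elasticity demand curve with elasticity above 1, every price cut raises total spending, so the total amount of work done grows. The demand constant and prices below are illustrative numbers, not data from any real compute market.

```python
# Constant-elasticity demand: quantity = k * price**(-elasticity).
# With elasticity > 1 (elastic demand), cutting the price raises total spend.
def demand(price: float, k: float = 100.0, elasticity: float = 1.5) -> float:
    return k * price ** (-elasticity)

for price in (4.0, 2.0, 1.0):
    quantity = demand(price)
    print(f"price ${price:.2f} -> quantity {quantity:7.1f}, total spend ${price * quantity:6.1f}")
```

With elasticity 1.5, each halving of the price grows quantity by a factor of 2**1.5 (about 2.83), so total spend grows by about 1.41x: demand more than makes up for the price drop.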
Jevons paradox says, very simply, that if the demand is elastic, then even if you drop the price, the demand will more than make up for it, normally far more than make up for it; it turns out there's unlimited demand for compute. This was absolutely the case with the internet, right? You get more value, more productivity, and so on. And I personally believe that when it comes to creating any creative asset, or any sort of work automation, clearly the demand is elastic: the more of it we make, the more people consume. So I think that we're very much looking forward to massive
expansion in productivity, a lot of new jobs, a lot of new things. I think it's going to follow just like the microchip, and just like the internet. Thank you so much for listening to part one of our coverage from AI Revolution. We really hope this gave you a glimpse into what may be to come, from scaling laws to multimodality, and we will be back in a few days with more key lessons from the event, including how AI is disrupting design, games, and entertainment, plus modern-day Turing tests, AI alignment, and future opportunities. And as a reminder, if you would like to listen to all the talks in full today, you can head over to a16z.com slash AI Revolution. We'll see you soon. If you liked this episode, if you made it this far, help us grow the show: share it with a friend, or if you're feeling really ambitious, you can leave us a review at ratethispodcast.com slash a16z. You know, candidly, producing a podcast can sometimes feel like you're just talking into a void, and so if you did like this episode, if you liked any of our episodes, please let us know. We'll see you next time.