Every single interface that I interact with, every single problem space that I'm trying to solve is going to be made easier by virtue of this new technology. If you were starting from scratch today, you probably wouldn't build this app-centric world. The past 20 years of consumer technology have been a story of apps,
of touchscreens, and of smartphones. These form factors seemingly appeared out of nowhere and may be replaced just as quickly as they were ushered in, perhaps by a new AI-enabled stack, a new computing experience that is more agentic, more adaptive, and more immersive. Now, in today's episode, a16z growth general partner David George discusses this future with arguably one of the most influential builders of this era: Meta CTO Boz Bosworth, who spent nearly two decades at the company, shaping consumer interaction from the Facebook News Feed all the way through to their work on smart glasses and AR headsets. Here, Boz explores the art of translating emerging technologies into real products that people use and love, plus how breakthroughs in AI and hardware could turn the existing app model on its head.
In this world, what new interfaces and marketplaces need to be developed? What competitive dynamics hold strong, and which fall by the wayside? For example, will brands still be a moat? And if we get it right, Boz says, the next wave of consumer tech won't run on taps and swipes. It'll run on intent. So is the post-mobile-phone era upon us? Listen in to find out. Oh, and if you do like this episode, it comes straight from our AI Revolution series.
And if you missed previous episodes of the series, with guests like AMD CEO Lisa Su, Anthropic co-founder Dario Amodei, and the founders behind companies like Databricks, Waymo, Figma, and more, head on over to a16z.com slash AI Revolution.
As a reminder, the content here is for informational purposes only, should not be taken as legal, business, tax, or investment advice, or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any a16z fund. Please note that a16z and its affiliates may also maintain investments in the companies discussed in this podcast. For more details, including a link to our investments, please see a16z.com.
Boz, thanks for being here. Thanks for having me. Appreciate it. Okay, I want to jump right in. How are we all going to be consuming content five years from now and 10 years from now? 10 years out, I feel pretty confident that we will have a lot more ways to bring content into our viewshed than just taking out our phone.
I think augmented reality glasses obviously are a real possibility. I'm also hoping that we can do better for really engaging, immersive things. Right now, you have to travel to, like, the Sphere, which is great, but there's one of them. It's in Vegas, and it's a trip.
Are there better ways that we can have access to that, if we really want to be engaged in something, not just immersively, but also socially? So it's like, oh, I want to watch the game. I want to watch it with my dad. I want to feel like we're courtside. Sure, we can go and pay a lot for tickets. Is there a better way? I think there is. So 10 years out, I feel really good about all these alternative content delivery vehicles.
Five years is trickier. For example, I think the glasses, the smart glasses, the AI glasses, the display glasses that we'll have in five years will be good. Some of them will be super high-end and pretty exceptional. Some of them will actually be little, and not even tremendously high-resolution displays, but they will be always available and on your face. I wouldn't be doing work there. But if I'm just trying to grab simple content in moments between, it's pretty good for that.
So I think what we are seeing is, as you'd expect, we're at the very beginning now of a spectrum: super high-end but probably very expensive experiences that will not be evenly distributed across the population; a much more broadly available set of experiences that are not really rich enough to replace the devices that we have today; and then, hopefully, a continually growing number of people who are having experiences that really could not be had any other way today, thinking about what you can do with mixed reality and virtual reality. Yeah, we're going to build up to a lot of that stuff. So throughout your career, I would say one of the observations I would have is that you've been uniquely good at piecing together various big technology shifts
into new product experiences. So in the case of Facebook, in the early days for you, obviously you famously were part of the team that created the News Feed. And that's a combination of social media, a mobile experience, and applying your old-school AI to it. That's right. The old-school AI. Yeah, exactly. But that's pretty cool. And a lot of times these trends come in bunches, and that's what creates the breakthrough product.
So maybe take that and apply it to where we are today, with the major trends that are in front of you. Let me say two things about this. The first one is, I think, if there was a thing that, not me specifically, but me and my cohorts at Meta were really good at, it was that we really immersed ourselves in what the problem was. What were people trying to do? What did they want to do? And when you do that...
you are going to reach for whatever tool is available to accomplish that goal. That allows you to be really honest about what tools are available and see trends. I think the more oriented you are towards the technology side,
the more you get caught in a wave of technology, and you don't want to admit when that wave is over, and you don't want to embrace the next wave. You're building technology for technology's sake instead of solving a product problem. But if you're embracing the issues that people are really going through in their lives, and they don't have to be profound... I bring that up just because I think we're at an interesting moment. I think all of us have been through a phase where a lot of people wanted a new wave to be coming, because it would have been advantageous to them. But those things weren't solving problems that regular people had. I think the reason we're so enthusiastic about the AI revolution that's happening right now is
it really feels tangible. These are real problems that are being solved. It's not solving every problem, and it creates new problems, and that's fine. So it feels like a substantial, real new capability that we have. And what's unusual about it is how broadly it can be applied. And while it has these interesting downsides today on factuality, and certainly on compute and cost and inference, those kinds of trade-offs feel really solvable, and the domains it applies to are really broad.
And that's very unusual. Certainly in my career, when these technological breakthroughs happen, they're almost always very domain-specific. It's like, cool, this is going to get faster, or that's going to get cheaper, or that's now possible. This kind of feels like, oh, everything's going to get better. Every single interface that I interact with, every single problem space that I'm trying to solve, is going to be made easier by virtue of this new technology. That's pretty rare. Mark and I always believed that this AI revolution was coming. We just thought it was going to take longer. We thought we were probably still 10 years away at this point. But what we thought would happen sooner was this revolution in computing interfaces. And we really started to feel,
10 years ago, and this is 2015, that the mobile phone form factor, as amazing as it was, was already saturated. That was what it was going to be. And once you get past the mobile phone, which is, again, the greatest computing device that any of us have ever used to this point, of course, it's like, okay, well, it has to be more natural in terms of how you're getting information into your body, which is ideally through our eyes and ears, and how we're getting our intentions expressed back to the machine. You no longer have a touchscreen; you no longer have a keyboard. So once you realize those are the problems, it's like, cool, we need to be on the face, because you need to have access to eyes and ears to bring information from the machine to the person.
And you need to have these neural interfaces to allow the person to manipulate the machine and express their intentions to it when they don't have a keyboard or mouse or a touchscreen. And so that has been an incredibly clear-eyed vision we've been on for the last 10 years. But we really did grow up in an entire generation of engineers for whom the system was fixed. The application model was fixed; the interaction design was fixed.
Sure, we went from a mouse to a touchscreen, but it's still a direct manipulation interface, which is literally the same thing that was pioneered in the 1960s. So we really haven't changed these modalities. And there's a cost to changing those modalities, because... we as a society have learned how to manipulate these digital artifacts through these tools. So the challenge for us was, okay, you have to build this hardware.
which has to do all these amazing things and also be attractive and also be light and also be affordable. And none of these existed before. And what I tell my team all the time is, that's only half the problem. The other half of the problem is, great, how do I make it feel natural to me? I'm so good with my phone now. It's an extension of my body, of my intention, at this point. How do we...
make it even easier. And so we were having these challenges. And then, what a wonderful blessing: AI came in two years ago, much sooner than we expected, and it is a tremendous opportunity to make this even easier for us, because the AIs that we have today have a much greater ability to understand what my intentions are. I can give a vague reference and it's able to work through the corpus of information it has available to make specific outcomes happen from it.
There's still work to be done to actually adapt it. And it's still not yet a control interface; I can't reliably work my machine with it. There are a lot of things that we have to do, but we know what those things are. And so now you're in a much more exciting place, actually. Whereas before we thought, okay, we've got this big hill to climb on the hardware and this big hill to climb on the interaction design, but we think we can do it.
And now we've got a wonderful tailwind, where on the interaction design side, at least, there's the potential of having this much more intelligent agent that now has not only the ability for you to converse with it naturally and get results out of it, but also the ability to know by context what you're seeing, what you're hearing, what's going on around you, and to make intelligent inferences based on that information. Let's talk about Reality Labs and this suite of products, what it is today. So you have the Quest headsets, you have the smart glasses, and then on the far end of the spectrum is Orion and some of the stuff that I demoed. So just talk about the evolution of those efforts and
what you think the markets are for them and how they converge versus not over time. So when we started the Ray-Ban Meta Project, they were going to be smart glasses. And in fact, they were entirely built and we were six months away from production when Llama 3 hit.
And the team was like, no, we've got to do this. And so now they're AI glasses, right? They didn't start as AI glasses, but the form factor was already right. We could already do the compute. We already had the ability. So yeah, now you have these glasses that you can ask questions of. And in December, we launched what we call Live AI to the Early Access program. So you can start a Live AI session with your Ray-Ban Meta glasses.
And for 30 minutes, until the battery runs out, it's seeing what you're seeing. And it's funny, because on paper the Ray-Ban Meta looks like an incremental improvement to Ray-Ban Stories. And this is kind of the story I'm trying to tell, which is that the hardware isn't that different between the two.
The interactions that we enable with the person using it are so much richer now. When you use Orion, when you use the full AR glasses, you can imagine a post-phone world. You're like, oh, wow, if this was attractive enough and light enough and had enough battery life to wear all day, it would have all the stuff I need. It would all be right here. And then you start to combine that with the glimpses we have of what AI is capable of. So you did the demo where we showed you the breakfast. Yeah, I did. And for what it's worth, I'll explain it, because it's very cool. I got to walk over, and there's a bunch of breakfast ingredients laid out. And I look at it and I say, hey Meta, what are some recipes with these ingredients? That's right. That is, for me at least... When we think about Orion: initially, it didn't have that AI component when we first thought about it. It had this component that was very direct manipulation, so it was very much modeled on the app model that we're all familiar with on the phone.
And I think there's a version of that. Yeah, of course, you're going to want to do calls, and you're going to want to be able to do your email and your texting, and you want to be able to play games. We have a Stargazer game. And you want to do your Instagram Reels. What we're now excited about is, okay, take all those pieces and layer on the ability to have an interactive assistant that really understands not just what's happening on your device and what email is coming in, but also what's happening in the physical world around you, and is able to connect what you need in the moment with what's happening. And so these are concepts where you're like, wow, what if the entire app model is upside down? What if it isn't, hey, I want to go fetch Instagram right now. It's, hey...
The device realizes that you have a moment between meetings and you're a little bit bored. Hey, do you want to catch up on the latest highlights from your favorite basketball team? Those things become possible. Having said that, the hardware problems are hard, and they're real, and the cost problems are hard, and they're real. And if you come for the king, you best not miss. The phone is an incredible
centerpiece of our lives today. It's how I operate my home, how I use my car; I use it for work. It's everywhere, right? It's everywhere, yeah. The world has adapted itself to the phone. So it's weird that my ice maker has a phone app, but it does. I don't know; it seemed excessive. But for somebody today who's like, I've got to make an ice maker, the number one job is: gotta have an app. It's like the smart refrigerator. You're like, I don't need this; take it out. I do think it's going to be a long... That's why I said the 10-year view for me is, I think, much clearer. I think these things are going to be...
The five-year view is harder because, man, even if it seems amazing, knocking out the dominance of the phone in five years just seems so hard. It seems unthinkable. It's, like, unthinkable for us, right? That's why I said Orion was the first time I thought, maybe. Putting that on my head, I was like, okay, it could happen. There does exist a life for us as a species past the phone. Yeah.
Yeah, there's still the whole dynamic of, well, how do I envision my life without the operating system that I'm so accustomed to? Not just the physical stuff that you do, but the familiarity and all the stuff that's working in there. So what do you think of the interim period? Maybe you get to the point where the hardware is capable and it's market-accessible.
But do you tether to the phone? Do you take a strong view that you will never do that and let the product stand on its own? Like, how do you think about that piece? The phone has this huge advantage and disadvantage. The huge advantage is that the phone is already central to our lives. It's already got this huge developer ecosystem. It's this anchor device, and it's a wonderful anchor device for that. The disadvantage is, I actually think what we found is that the apps want to be different
when they're not controlled via a touchscreen. And that's not super novel. A lot of people failed early in mobile, including us, by just taking our web stuff and putting it on the mobile phone and being like, oh, the mobile phone, we'll just put the web there. But it wasn't native to what the phone was, and I mean everything from the interaction design, to the actual design, to the layout, to how it felt. Because we weren't doing phone-native things,
we were failing with one of the most popular products in the history of the web. This is like the major design fight: the skeuomorphic idea versus the native idea. Yeah. And I think having the developers is a true value, and I think having all this application functionality is a true value. But then once you actually
re-project it into space, and you're manipulating it with your fingers like this as opposed to a touchscreen, you have much less precision. It doesn't respond to voice commands, because there were no tools for that. There was no design integration for that. So having a phone platform today feels like, wow, I've got this huge base to work from on the hardware side, but I've also got this huge anchor to drag on the software side.
And so we're not opposed to these partnerships. I think it'll be interesting to see, once the hardware is a little bit more developed, how partners feel about it. And I hope they continue to support people who buy these phones for $1,200, $1,300 being able to bring whatever hardware they want to bring and take the full functionality of the phone with them.
The biggest question I have is about the entire app model, because we were imagining a very phone-like app model for these devices, admittedly with a very different interaction design, since the input and control schemes are very different, and that demands a little extra developer attention. I am wondering if the progression of AI over the next several years doesn't turn the app model on its head. Right now, it's kind of an unusual thing where I'm like, I want...
So in my head, I translate that to: I have to go open Spotify or open Tidal. And the first thing I think of is, who is my provider going to be? Yeah, of course. As opposed to... that's not what I want. It's extremely limiting. What I want is to play music. Yes. And I just want to be able to go to the AI and be like, cool, play this music for me. Yeah. And it should know, oh, you're already using this service,
so we'll use that one. Or, these two services are both available to you, but this one has a better-quality song. Or it's like, hey, the song you want isn't available on any of these services; do you want to sign up for this other service that does have the song that you want? I don't want to have to be responsible for orchestrating what app I'm opening to do a thing.
We've had to do that because that's how things were done in the entire history of digital computing. You had an application-based model that was the system. So I do wonder how much AI inverts things. That's a pretty hot take. Yeah, that's a hot take. Inverts things. And that's not about wearables. That's not about anything. That's just like even at the phone level.
If you were building a phone today, would you build an app store the way you historically built an app store? Or would you say, hey, you as a consumer, express your intention, express what you're trying to accomplish, and let's see what we have. Let the system see what it can produce. But I do think if you were starting from scratch today, you probably wouldn't build this app-centric world where I, as a consumer, am trying to solve a problem and first have to decide
which of the providers I'm going to use to solve that problem. Yeah, of course. That's fascinating. And again, I think it's a function of where the capabilities are today, and I think where we have line of sight into orchestration capabilities. Because I'd say, knowledge-wise, that is probably capable today. Orchestration-wise, it's probably still a little bit away. And then, of course, you've got to build the developer ecosystem to develop on the platform. Which is incredibly hard. That's the thing I want to see more focus on. That's the hardest piece, right? That's the hardest piece. Yeah. The stronger we get at agentic reasoning and capabilities, the more I can rely on my AI to do things in my absence. And at first it will be knowledge work, of course; that's fine. But once you have a flow of consumers
coming through here, what you're going to find is that they're going to hit a bunch of dead ends, where they're going to ask the AI, hey, can you do this thing for me? And it's going to say, no, I can't. And that's the gold mine that you take to developers. You're like, hey, I've got 100,000 people a day trying to use your app. They don't know they are, but they're trying to use your app. Look, here's the query stream. Here's what's coming through, and we're going to tell them no today. You've got 100,000 people clamoring for something today. Coming in for your service. Yeah. And it's totally fine for
the AI to go back and say, hey, you've got to pay for this. There's a guy who does this for you, but you've got to pay for it. And by the way, I'm not just talking about apps. It could be a plumber. There's some kind of a marketplace here that I think emerges over time. So that's how I see it playing out. I don't see it playing out as,
someone goes into a dark room and comes up with this app platform. No. What's going to happen is there's going to be a query stream of people using AI to do things, and that AI will fail repeatedly in certain areas, because that's a type of functionality that is currently behind some kind of an app wall, or it hasn't been built native to whatever the consumption mechanism is, and there's no bridge the AI can build. And everyone wants to build the bridge, and it's like, no, no, the AI is going to manipulate the pixels, it's going to manipulate the app directly. It's like, fine, it can do those things. I'm not saying the AI can't cross those boundaries. But I think over time, that becomes the primary interface
for humans interacting with software, as opposed to picking from the garden of applications. Yeah, that makes a ton of sense, just as a consumer, right? Yeah, it's messy. And I think it creates these very exciting marketplaces for functionality inside the AI. It abstracts away a lot of companies' brand names, which I think is going to be very hard for an entire generation of brands. Like the fact that I don't care which of these two music services a song is being played on. They really want me to care. And they want me to have a stronger opinion about it. And they want me to have an attachment. I don't want to have an attachment. There are some things where you may value the attachment. In the world where I'm like, here's an app grid, and these two are competing for my eyeballs, the brand they've built is a hugely valuable asset. In the world where I just care whether the song gets played and sounds good,
a different set of priorities is important. I think that's net positive, because what matters now is performance on the job being asked. For actual product experience. And value, and price for performance, matter a lot. I think a lot of companies will love that. Well, what you're articulating is effectively abstracting away margin pools,
which puts a lot more pressure on us trusting the AI, or the distributor of the AI. Insofar as I'm floating between different companies that are each providing AIs, there's the degree to which I trust them not to be bought and paid for on the back end, where they're not giving me the best experience or the best price for the money; they're giving me the one that makes them the most money. Yeah, of course.
Yeah, it's the experience of Google Search today, right? It is a very different world. It is. But you can actually see inklings of it today, right? So certain companies are willing to work with the new AI providers on agentic task completion. And then they're like, well, actually, wait a minute.
I don't just want the bots executing this stuff. I want the humans coming to me. I think I need that. It's existential that I have this brand relationship directly with the demand side. So that's potentially messy, but... a bright future, especially if we don't have to pay that brand tax. Yeah, it'll be very messy. I don't know that it's avoidable, because I think once consumers start to get into these tight loops where more and more of their interactions are being mediated by an AI,
you won't have a choice. That's like where your customers will be. Yeah. But it's going to be a pretty different world. Yeah, it'll be a different world. And there will probably be some groups that try to move fast to it as a way to compete with things that are branded. Yeah. And just say, I'm going to compete on performance and price. Yeah, that's right. Where do you think that could potentially happen first?
It probably will mirror query volume. I think of this a lot. We do have a model of this, which was the web era, when Google became the dominant search engine. So before that, the web era was very index-based. It was Yahoo, it was links, and getting major sources of traffic to link to you was the game.
And then once Google came to dominance, which happened very quickly, over maybe a couple of years, I feel like all that mattered was SEO. All that mattered was where you were in the query stream. And the query stream dictated which businesses came up and succeeded. Yeah. Because the queries that were the most frequent,
those were the ones that came first. And so travel's the one. Travel came right away, right? It was a huge disruption. And travel agents went from a thing that existed to a thing that didn't exist in a relatively short time. Immediately. And they all competed on the basis of executing the best deal in a seamless fashion with the highest conversion. I think SEO has gotten to a point now
where it's kind of a bummer. It's made things worse. It's just gamed; everyone's gotten so good at it. Especially with AI, actually. That's right. So I actually think we had this incredible flattening curve, and now it's starting to rise back up in terms of... Especially with paid placement, too. Yeah. That's so dominant. Yeah, that's right. And this is probably the cautionary tale for how this plays out in AIs as well. I think there will be a pretty good golden era here, where the query stream will dictate what businesses come first,
because those are the queries that represent the volume of people unsatisfied with the existing solutions that they have. Otherwise, they wouldn't be asking about it. And product providers and developers will follow that. And build specifically to solve those problems. That's right. Once it tips. In each vertical, we get a lot of progress very quickly towards better solutions for consumers.
And then once it hits a steady state, it starts to be gamesmanship. And that's the thing we fight. That'll be the true test of AI. The true test. Can it get through that? Can it avoid falling into that trap? Yeah, that's right. Well, a lot of that is business-model driven, and we'll see how that evolves over time, too. That's right. You guys have also been leading from the front on this idea of open source. Yeah. So talk about
some of your efforts on that side of the business. And then, what is the ideal market structure on the AI model side for you guys? There are two parts that came together. The first one is that Llama came out of FAIR, our fundamental AI research group, and that's been an open-source research group since the beginning, since Yann LeCun came in and they established that.
It's allowed us to attract incredible researchers who really believe that we're going to make more progress as a society working together across the boundaries of individual labs than not. And to be fair, it's not just us. Obviously, the Transformer paper was published at Google. And, you know, big self-supervised learning was our contribution. Everyone's contributing to the knowledge base.
When we open-sourced Llama, that's how all models were being released at that point. Yeah, yeah, of course. Everyone was open. The only thing that was unusual was... everything else just went closed-source over time, effectively. Yeah, that's right. But before that, every time someone built a model, they open-sourced it so that other people could use the model and see how great it was. That was mostly how it was done. If it was worth anything. Certainly some specialized models, for translation and whatnot, were kept closed. But if it was a general model, that was what was done.
Llama 2 was probably the big decision point for us. Llama 2, and this is where I think the second thing came in: a belief that I've had, that I was advancing really strenuously internally, and that Mark really believes in too. He's written his post about this,
which is, first of all, we're going to make way more progress if these models are open, because a lot of these contributions aren't going to come from the big labs. They're going to come from the little labs. And we've seen this already with DeepSeek in China, which was put in a tough spot and then innovated incredibly in the memory architectures and a couple of other places to get amazing results. And so we really believe we're going to get the most progress collectively.
The second thing, inside this piece, is, you know, the classic: I believe these are going to be commodities, and you want to commoditize your complements. And we're in a unique position strategically, where our products are made better through AI, which is why we've been investing for so long. Whether it's recommendation systems in what you're seeing in Feed or Reels,
whether it's simple things like, which friend do I put at the top when you go to type a new message, who do I think you're going to message right now? Little things like that, all the way to really big, expansive things like, hey, here's an entire answer, here's an entire search interface that we couldn't do before in WhatsApp, which is now a super popular surface. So there are all these things that are possible for us that are made better by this AI,
but nobody else who has this AI can then build our product. The asymmetry works in our favor. Yeah, of course. And so for us, commoditizing your complements is just good business sense, and making sure that there are a lot of competitively priced, if not almost free, models out there helps.
It helps a bunch of small startups and academic labs. It also helps us. Yeah, you as the application provider are huge. So we're all super aligned on that: the business model alignment and the industry model alignment. It's a strong alignment there. So it comes from both this fundamental belief in how this kind of research should be done, and then it aligns with the business model, so there's no conflict. Societal progress plus the business model align. It's all together. It's all going the same direction. It's awesome. It's great.
I want to shift gears to talking about the impediments to progress, and what you think is kind of linear versus not. So, the risks to the overall vision that you articulated: obviously hardware. Yep. AI capabilities, vision capabilities, and screens and all that, resolutions. We talked about the ecosystem and developers and native products. So maybe just talk about what you see as the kind of linear-path things, and the things that may be harder or riskier. We have real
invention risk. There exists a risk that the things that we want to build, we don't have the capacity to build as a society, as a species, yet. And that's not a guarantee. I think we have windows to it. You've seen Orion, so it can be done. Yeah, it feels like it's a cost-reduction exercise, a materials-improvement exercise, but it can be done. There is still some invention risk. But far bigger than the invention risk, I think, is the adoption risk.
Is it considered socially acceptable? Are people willing to learn a new modality? We all learned to type when we were kids; we were born with phones in our hands at this point. Are people willing to learn a new modality? Is it worth it to them? Then there's ecosystem risk, even bigger than that. Great, you build this thing.
But if it just does your email and Reels, that's probably not enough. Do people bring the suite of software that we require to interact with modern human society to bear on the device? Those are all huge risks. I will say, we feel pretty good about where we're getting on the hardware. On acceptability, we think we can do those things. That was not a guarantee before. I think with the Ray-Ban Meta glasses, we're feeling like, okay, we can get through. You feel like the acceptability... That humans will accept that I'm using the technology. And within that, there are super interesting regulatory challenges.
Here I have an always-on machine that gives me superhuman sensing. My vision is better. My hearing is better. My memory is better. That means when I see you a couple of years from now, and I haven't seen you in the interim, I'm like, oh God, I don't remember that guy. We did a podcast together. What's the guy's name?
Can I ask that question? Am I allowed to ask that question? What is your right? It's your face. You showed me your face. And if I were somebody with a better memory, I could remember the face. But I don't have a great memory, so am I allowed to use a tool to assist me or not? So there are really subtle regulatory, privacy, and social acceptability questions embedded here that are super deep individually
and can derail the whole thing. You can easily derail the whole thing and slow progress. That's the thing I think we in our industry sometimes miss; it's like Field of Dreams: if you build it, they will come. And it's like, no, a lot of things have to happen right. Well, you can also overstep, too. That's the risk. You can get your hand slapped. Great technology can get derailed for long periods of time. Nuclear power got derailed for 70 years. For absolutely stupid reasons. For bad reasons; we know that now. And they just played it wrong. Yeah, of course. They were like, ah, ignore this. And it's like, no, these people actually feel this way. So I think, yeah, I feel pretty good about the invention risk. The acceptability risk is looking better than it has been, but I think there's still a lot of... I actually think the ecosystem risk is the one I would have previously said was the biggest.
But AI is now my potential silver bullet there. If AI becomes the major interface, then it comes for free. And I will also say that we've had such a positive response, even setting aside Orion, even with the Ray-Ban Metas, in building that platform. It's not a platform yet. There's so little compute; you can't just connect an app. We literally don't have any space yet. But we did do a partnership with Be My Eyes, which helps blind and low-vision people navigate, and it's really spectacular.
And so there's a little window there where we can start building. So yeah, I would say the response has been more positive than I had expected. So everything right now... tailwinds abound right now. And to be honest, after eight or nine years of headwinds,
having a year of tailwinds is nice. I'll take it. I'm not going to look a gift horse in the mouth. No victory laps yet, but that's good. But it's all hard. At every point, it could all fail. I like that you just started with the invention risk. There are many ways this just won't work. Yeah, that's right. Even if it does work, it might not take. Well, I'll say two things about this, and this is where Mark just deserves so much credit: we're true believers.
Like, we have actual conviction. Yeah. Mark believes this is the next thing. It needs to happen, and it doesn't happen for free. We can be the ones to do it. Our chief scientist, Michael Abrash, who's one of my favorite people I've ever gotten the chance to work with, talks a lot about the myth of technological eventualism. It doesn't eventually happen. There are a lot of people in tech who are like, yeah, AR will eventually happen. That's not how it
fucking works, man. AR is a specific one that would just absolutely not happen on its own. You have to stop and put in the money and the time and do it. Somebody has to stop and do it. And that is the difference. The number one thing I'd say is the difference between us and anybody else is that we believe in this stuff in our cores. This is the most important work I'll ever get a chance to do. This is Xerox PARC-level
new stuff, where we're rethinking how humans are going to interact with computers. It's like J.C.R. Licklider and human-in-the-loop computing. We're seeing that with AI. It's a rare moment. It doesn't even happen once a generation, I think. It may happen every other generation, every third generation. You don't get a chance to do this all the time,
so we're not missing it. We're just like, we're going to do it. And we may fail; it's possible. But we will not fail for lack of effort or belief. Great. Thanks a ton, Boz. Cheers. Yeah, cheers.