Welcome to the Huberman Lab podcast where we discuss science and science-based tools for everyday life. I'm Andrew Huberman and I'm a professor of neurobiology and ophthalmology at Stanford School of Medicine. My guests today are Mark Zuckerberg and Dr. Priscilla Chan. Mark Zuckerberg, as everybody knows, founded the company Facebook. He is now the CEO of Meta, which includes Facebook, Instagram, WhatsApp, and other
technology platforms. Dr. Priscilla Chan graduated from Harvard and went on to do her medical degree at the University of California, San Francisco. Mark Zuckerberg and Dr. Priscilla Chan are married and the co-founders of the CZI, or Chan Zuckerberg Initiative, a philanthropic organization whose stated goal is to cure all human diseases.
The Chan Zuckerberg Initiative is accomplishing that by providing critical funding not available elsewhere, as well as a novel framework for discovery of the basic functioning of cells, cataloging all the different human cell types, as well as providing AI, or artificial intelligence, platforms to mine all of that data to discover new pathways and cures for all human diseases.
The first hour of today's discussion is held with both Dr. Priscilla Chan and Mark Zuckerberg during which we discuss the CZI and what it really means to try and cure all human diseases. We talk about the motivational backbone for the CZI that extends well into each of their personal histories. Indeed, you'll learn quite a lot about Dr. Priscilla Chan, who has, I must say, an absolutely incredible family story leading up to her role as a physician and her motivations for the CZI and beyond.
And you'll learn from Mark how he's bringing an engineering and AI perspective to the discovery of new cures for human disease. The second half of today's discussion is just between Mark Zuckerberg and me, during which we discuss various Meta platforms, including, of course, social media platforms and their effects on mental health in children and adults.
We also discuss VR, virtual reality, as well as augmented and mixed reality. And we discuss AI, artificial intelligence, and how it stands to transform not just our online experiences with social media and other technologies, but potentially every aspect of everyday life. Before we begin, I'd like to emphasize that this podcast is separate from my teaching and research roles at Stanford.
It is, however, part of my desire and effort to bring zero-cost-to-consumer information about science and science-related tools to the general public. In keeping with that theme, I'd like to thank the sponsors of today's podcast. Our first sponsor is Eight Sleep. Eight Sleep makes smart mattress covers with cooling, heating, and sleep tracking capacity.
I've spoken many times before on this podcast about the fact that getting a great night's sleep really is the foundation of mental health, physical health, and performance. One of the key things to getting a great night's sleep is to make sure that the temperature of your sleeping environment is correct. And that's because in order to fall and stay deeply asleep, your body temperature actually has to drop by about 1-3 degrees.
And in order to wake up feeling refreshed and energized, your body temperature actually has to increase by about 1-3 degrees. With 8-Sleep, you can program the temperature of your sleeping environment in the beginning, middle, and end of your night. It has a number of other features like tracking the amount of rapid eye movement and slow wave sleep that you get, things that are essential to really dialing in the perfect night's sleep for you.
I've been sleeping on an Eight Sleep mattress cover for well over two years now, and it has greatly improved my sleep. I fall asleep far more quickly, I wake up far less often in the middle of the night, and I wake up feeling far more refreshed than I ever did prior to using an Eight Sleep mattress cover. If you'd like to try Eight Sleep, you can go to eightsleep.com/huberman to save $150 off their Pod 3 Cover. Eight Sleep currently ships to the USA, Canada, UK, select countries in the EU, and Australia.
Again, that's eightsleep.com/huberman. Today's episode is also brought to us by LMNT. LMNT is an electrolyte drink that has everything you need and nothing you don't. That means plenty of electrolytes, sodium, magnesium, and potassium, and no sugar. Electrolytes are absolutely essential for the functioning of every cell in your body, and your neurons, your nerve cells, rely on sodium, magnesium, and potassium in order to communicate with one another electrically and chemically.
LMNT contains the optimal ratio of electrolytes for the functioning of neurons and the other cells of your body. Every morning, I drink a packet of LMNT dissolved in about 32 ounces of water. I do that just for general hydration and to make sure that I have adequate electrolytes for any activities that day.
I'll often also have an LMNT packet, or even two packets, in 32 to 60 ounces of water if I'm exercising very hard, and certainly if I'm sweating a lot, in order to make sure that I replace those electrolytes. If you'd like to try LMNT, you can go to drinklmnt.com/huberman to get a free sample pack with your purchase. Again, that's drinklmnt.com/huberman.
I'm pleased to announce that we will be hosting four live events in Australia, each of which is entitled The Brain-Body Contract, during which I will share science and science-related tools for mental health, physical health, and performance. There will also be a live question and answer session. We have limited tickets still available for the event in Melbourne on February 10th, as well as the event in Brisbane on February 24th.
Our event in Sydney at the Sydney Opera House sold out very quickly. So as a consequence, we've now scheduled a second event in Sydney, at the Aware Super Theatre, on February 18th. To access tickets to any of these events, you can go to hubermanlab.com/events and use the code Huberman at checkout. I hope to see you there. And as always, thank you for your interest in science. And now for my discussion with Mark Zuckerberg and Dr. Priscilla Chan.
Priscilla, Mark, so great to meet you, and thank you for having me here in your home. Oh, thanks for having us on the podcast. I'd like to talk about the CZI, the Chan Zuckerberg Initiative. I learned about this a few years ago, when my lab was, and still is, at Stanford, as a very exciting philanthropic effort that has a truly big mission. I can't imagine a bigger mission.
So maybe you could tell us what that big mission is, and then we can get into some of the mechanics of how that big mission can become a reality. So, like you're mentioning, in 2015 we launched the Chan Zuckerberg Initiative. And what we were hoping to do at CZI was think about how we build a better future for everyone, and look for ways we can contribute the resources that we have to give philanthropically,
and the experiences that Mark and I have had: for me as a physician and educator, for Mark as an engineer. And then our ability to bring teams together to be builders. You know, Mark has been a builder throughout his career, and what could we do if we actually put together a team to build tools and do great science? And so within our science portfolio, we've really been focused on what some people think is either an incredibly audacious goal or an inevitable goal.
But I think about it as something that will happen if we just continue focusing on it, which is to be able to cure, prevent, or manage all disease by the end of the century. All disease? All disease. And that's important, right? A lot of times people ask which disease, and the whole point is that there is not one disease.
And it's really about taking a step back to where I always found the most hope as a physician, which is that new discoveries and new opportunities and new ways of understanding how to keep people well come from basic science. So our strategy at CZI is really to build tools, fund science, and change the way basic scientists can see the world and how quickly they can move in their discoveries. And so that's what we launched in 2015.
We do work in three ways. We fund great scientists. We build tools, right now software tools, to help move science along and make it easier for scientists to do their work. And we do science. You mentioned Stanford being an important pillar for our science work. We've built what we call Biohubs, institutes where teams can take on grand challenges, to do work that wouldn't be possible in a single lab or within a single discipline.
And our first Biohub was launched in San Francisco, a collaboration between Stanford, UC Berkeley, and UCSF. Amazing. Curing all diseases implies that there will either be a ton of knowledge gleaned from this effort, which I'm certain there will be, and there already has been; we can talk about some of those early successes in a moment.
But it also sort of implies that if we can understand some basic operations of diseases and cells that transcend autism, Huntington's, Parkinson's, cancer, and any other disease, then perhaps there are some core principles that would make the big mission a reality, so to speak. What I'm basically asking is, how are you attacking this? My belief is that the cell sits at the center of all discussion about disease, given that our body is made up of cells and different types of cells.
So maybe you could just illuminate for us a little bit what the cell is in your mind as it relates to disease, and how one goes about understanding disease in the context of cells, because ultimately that's what we're made up of. Yeah, well, let's get to the cell thing in a moment, but just to even take a step back from that.
You know, we don't think that we at CZI are going to cure, prevent, or manage all diseases. The goal is to basically give the scientific community and scientists around the world the tools to accelerate the pace of science. And we spent a lot of time when we were getting started with this looking at the history of science and trying to understand the trends and how they've played out over time.
And if you look over this very long-term arc, most large-scale discoveries are preceded by the invention of a new tool or a new way to see something. And it's not just in biology, right? Having a telescope, you know, came before a lot of discoveries in astronomy and astrophysics. But similarly, the microscope, and just different ways to observe things, or different platforms, like the ability to make vaccines, preceded the ability to cure a lot of different things.
So this is sort of the engineering part that you were talking about, about building tools. We view our goal as trying to bring together some scientific and engineering knowledge to build tools that empower the whole field. And that's sort of the big arc of a lot of the things that we're focused on, including the work in single cell and cell understanding, which
you can jump in and get into if you want. But yeah, I think we generally agree with the premise that if you want to understand this stuff from first principles: people study organs a lot, right? They study kind of how things present across the body.
But there's not a very widespread understanding of how each cell operates. And this is sort of a big part of some of the initial work that we tried to do on the Human Cell Atlas: understanding what the different cells are. And there's a bunch more work that we want to do to carry that forward. But overall, I think when we think about the next 10 years here of this long arc to try to empower the community to be able to cure, prevent, or manage all diseases,
we think the next 10 years should really be primarily about being able to measure and observe more things in human biology. There are a lot of limits today. It's like, you want to look at something through a microscope? You usually can't see living tissues, because it's hard to see through skin, or things like that.
So there are a lot of different techniques that will help us observe different things. And this is sort of where the engineering background comes in a bit, because the way I think about this is from the perspective of, like, how you'd write code or something.
You know, the idea of trying to debug or fix a code base but not being able to step through the code line by line? It's, like, not going to happen, right? And at the beginning of any big project that we do at Meta, you know, we like to spend a bunch of the time up front just trying to instrument things and understand, you know, what are we going to look at and how are we going to measure things,
so we know we're making progress and know what to optimize. And this is such a long-term journey that we think it actually makes sense to take the next 10 years to build those kinds of tools for biology, for understanding just how the human body works in action. And a big part of that is cells. I don't know, do you want to jump in and talk about some of the efforts?
Could I just interrupt briefly and ask about the different interventions, so to speak, that CZI is in a unique position to bring to the quest to cure all diseases? I mean, I know as a scientist that money is necessary but not sufficient, right? When you have money, you can hire more people; you can try different things. So that's critical. But a lot of philanthropy includes money.
The other component is, you know, you want to be able to see things, as you pointed out. So you want to know the normal and disease process. Like, what is a healthy cell? What's a diseased cell? Are cells constantly being bombarded with challenges and then repairing those, and then what we call cancer is just a kind of runaway train of those challenges not being met by the cell itself, or something like that?
So, better imaging tools. And then it sounds like there's not just a hardware component but a software component; this is where AI comes in. So maybe at some point we can break this up into three different avenues. One is understanding disease processes and healthy processes; we'll lump those together. Then there's hardware: microscopes, lenses, digital deconvolution, ways of seeing things in bolder relief and with more precision. And then there's how to manage all the data.
And then I love the idea that maybe AI could do what human brains can't do alone: a managed understanding of the data. Because it's one thing to organize data; it's another to say, as you pointed out in the analogy with code, that this particular gene and that particular gene are potentially interesting, whereas a human being would never make that potential connection.
So, you know, the tools that CZI can bring to the table. We fund science, like you're talking about. And, well, there's lots of ways to fund science, and just to be clear, you know, what we fund is a tiny fraction of what the NIH funds, for instance. Still, you guys have been generous enough that it definitely holds weight next to any NIH contribution.
Yeah. But I think every funder has its own role in the ecosystem. And for us, it's really: how do we incentivize new points of view? How do we incentivize collaboration? How do we incentivize open science?
And so a lot of our grants include inviting people to come look at different fields. Our first neuroscience RFA was aimed towards incentivizing people from different backgrounds, immunologists, microbiologists, to come and look at how our nervous system works and how to keep it healthy. Or we ask that our grantees participate in the preprint movement, to accelerate the rate of sharing knowledge and of others actually being able to build upon the science.
So, that's the funding that we do. In terms of building, we build software and hardware, like you mentioned. We put together teams that can build tools that are more durable and scalable than someone in a single lab might be incentivized to do.
There's a ton of great ideas. And nowadays most scientists can tinker and build something useful for their lab. But it's really hard for them to share that tool beyond their own laptop, let alone with the next lab over or across the globe. So we partner with scientists to see what is useful, what kinds of tools. In imaging, for example, napari is a useful image annotation tool that was born from an open-source community.
And how can we contribute to that? Or CELLxGENE, which works on single-cell data sets: how can we build a useful tool so that scientists can share data sets, analyze their own, and contribute to a larger corpus? So we have software teams that are building, collaborating with scientists, to make sure that we're building easy-to-use, durable, translatable tools across the scientific community in the areas that we work in.
We also have institutes. This is where the imaging work comes in, where, you know, we are proud owners of an electron microscope right now. It's going to be installed at our Imaging Institute, and that will really contribute to a way where we can see the work differently.
But more hardware does need to be developed. You know, we're partnering with fantastic scientists in the Biohub network to build a mini phase plate to align the electrons traveling through the electron microscope, to be able to increase the resolution so we can see in sharper detail. So there's a lot of innovative work happening within the network.
And these institutes have grand challenges that they're working on. Back to your question about cells: cells are just the smallest unit that is alive. All of our bodies have many, many, many cells. There's, you know, some estimate of like 37 trillion different cells in your body. And what are they all doing, and what do they look like when they're healthy and you're healthy? What do they look like when you're sick?
And where we're at right now with our understanding of cells and what happens when you get sick is basically this: from the Human Genome Project, we've gotten pretty good at looking at how different mutations in your genetic code lead you to be more susceptible to getting sick, or directly cause you to get sick. So we go from a mutation in your DNA to, wow, you now have Huntington's disease, for instance. And there's a lot that happens in the middle.
And that's one of the questions that we're going after at CZI: what actually happens? An analogy that I like to use with my friends is, say we have a recipe for a cake. We know there's a typo in the recipe, and the cake is awful. That's all we know. We don't know how the chef interprets the typo. We don't know what happens in the oven. And we don't actually know how it's exactly connected to how the cake didn't turn out the way you had expected.
A lot of that is unknown, but we can actually systematically try to break this down. And one segment of that journey that we're looking at is how that mutation gets translated and acted upon in your cells. All of your cells have what's called mRNA. mRNAs are the actual instructions that are taken from the DNA. And what our work in single cell is looking at is how every cell in your body is actually interpreting your DNA slightly differently.
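To make that DNA-to-mRNA step concrete, here is a toy sketch in Python. The sequences are invented, and real transcription involves far more cellular machinery than a character substitution; this only shows the "typo in the recipe" idea, where one changed base changes the instructions every downstream step acts on.

```python
# Minimal sketch of transcription: mRNA carries the coding strand's
# sequence with thymine (T) replaced by uracil (U). Sequences invented.

def transcribe(coding_strand: str) -> str:
    """Return the mRNA produced from a DNA coding strand."""
    return coding_strand.replace("T", "U")

healthy_dna = "ATGGCCTTT"   # hypothetical healthy sequence
mutated_dna = "ATGGACTTT"   # same sequence with one "typo"

print(transcribe(healthy_dna))  # AUGGCCUUU
print(transcribe(mutated_dna))  # AUGGACUUU -- one changed instruction
```

The hard part, as the cake analogy says, is everything after this step: how each cell type interprets those instructions differently.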
And what happens when healthy cells are interpreting the DNA instructions, and when sick cells are interpreting those instructions? And that is a ton of data. I just told you there's 37 trillion cells; there are large, different sets of mRNA in each cell. But the work that we've been funding is looking at, first of all, gathering that information. We've been incredibly lucky to be part of a very fast-moving field, where we've gone from, in 2017, funding some methods work,
to now having, not complete, but nearly complete atlases of how the human body works, how flies work, how mice work, at the single-cell level. And being able to then try to piece together how that all comes together when you're healthy and when you're sick. And the neat thing about the sort of inflection point where we're at in AI is that I can't look at this data and make sense of it. There's just too much of it.
And biology is complex; human bodies are complex. We need this much information. But the use of large language models can help us actually look at that data and gain insights: look at what trends are consistent with health and what trends are unexpected. And eventually our hope, through the use of these data sets that we've helped curate and the application of large language models, is to be able to formulate a virtual cell.
A cell that's completely built off of the data sets of what we know about the human body, but that allows us to manipulate and learn faster and try new things, to help move science, and then medicine, along. Do you think we've catalogued the total number of different cell types? Every week I look at great journals like Cell, Nature, and Science. And for instance, I saw recently that using single-cell sequencing, they've categorized 18-plus different types of fat cells.
We always think of, like, a fat cell versus a muscle cell. So now you've got 18 types. Each one is going to express many, many different genes and mRNAs. And perhaps one of them is responsible for what we see in advanced type 2 diabetes, or in other forms of obesity, or where people can't lay down fat cells, which turns out to be just as detrimental in those extreme cases.
So now you've got all these lists of genes, but I always thought of single-cell sequencing as necessary but not sufficient. You need the information, but it doesn't resolve the problem. And I think of it more as a hypothesis-generating experiment. Okay, so you have all these genes, and you could say, wow, this gene is particularly elevated in the diabetic's version of, let's say, one of these fat cell types, or muscle cells for that matter, whereas it's not in non-diabetics.
So then, of all the different genes, maybe only five of them differ dramatically. So then you generate a hypothesis: oh, it's the ones that differ dramatically that are important. But maybe one of those genes, when it's only 50% changed, has a huge effect because of some network biology effect.
And so I guess what I'm trying to get to here is, how does one meet that challenge, and can AI help resolve that challenge by essentially placing those lists of genes into, you know, 10,000 hypotheses? Because I'll tell you, the graduate students and postdocs in my lab get a chance to test one hypothesis at a time. And that's really the challenge, let alone for one lab.
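A crude version of that hypothesis-generating step can be sketched in a few lines of Python. Everything here is invented for illustration: the gene names, the expression values, and the scoring rule (absolute log2 fold change), which is exactly the naive ranking that misses the network-biology case just described, where a small change in one gene has a large effect.

```python
import math

# Invented mean expression levels for four hypothetical genes,
# in "healthy" and "diseased" cells of the same type.
healthy  = {"GENE_A": 10.0, "GENE_B": 4.0, "GENE_C": 7.5, "GENE_D": 1.0}
diseased = {"GENE_A": 10.5, "GENE_B": 9.0, "GENE_C": 2.0, "GENE_D": 1.1}

def rank_hypotheses(healthy, diseased):
    """Rank genes by absolute log2 fold change between conditions."""
    score = {g: abs(math.log2(diseased[g] / healthy[g])) for g in healthy}
    return sorted(score, key=score.get, reverse=True)

print(rank_hypotheses(healthy, diseased))
# -> ['GENE_C', 'GENE_B', 'GENE_D', 'GENE_A']
```

Each gene on that ranked list is one testable hypothesis; the hope voiced in the discussion is that AI models could order such lists using far more context than a fold change.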
And so for those that are listening to this, hopefully it's not getting outside the scope of the standard understanding we've generated here. But what I'm basically saying is, you have to pick at some point. More data always sounds great, but then how do you decide what to test? So, no, we don't know all the cell types. I think one thing that was really exciting when we first launched this work was cystic fibrosis.
Cystic fibrosis is caused by a mutation in CFTR. That's pretty well known. It affects a certain channel that makes it hard for mucus to be cleared. That's the basic story of cystic fibrosis; when I went to medical school, it was taught as fact. So their lungs fill up with fluid. These people are carrying around sacks of fluid filling up. I've known people, I've worked with people, and they have to literally dump the fluid out.
Exactly. They can't run or do intense exercise. Life is shorter. Life is shorter. And when we applied single-cell methodologies to the lungs, they discovered an entirely new cell type that actually is affected by the CF mutation, the cystic fibrosis mutation, and that actually changes the paradigm of how we think about cystic fibrosis. Just stunning. So I don't think we know all the cell types.
I think we'll continue to discover them, and we'll continue to discover new relationships between cells and disease. Which leads me to the second example I want to bring up: this large data set that the entire scientific community has built around single cell. It's starting to allow us to say, this mutation, where is it expressed? What cell types is it expressed in? And we actually have built a tool at CZI called CELLxGENE, where you can put in the mutation that you're interested in,
and it gives you a heat map across cell types of which cell types are expressing the gene that you're interested in. And so then you can start looking at, okay, if I look at gene X, I know it's related to heart disease. But if you look at the heat map, it's also spiking in the pancreas. That allows you to generate a hypothesis: why? And what happens to the function of your pancreas when this gene is mutated? It's a really exciting way to look and ask questions differently.
And you can also imagine a world where you're trying to develop a therapy, a drug, and the goal is to treat the function of the heart. But you know that the gene is also really active in the pancreas. So is there going to be an unexpected side effect that you should think about as you're bringing this drug to clinical trials? So it's an incredibly exciting tool, and one that's only going to get better as we get more and more sophisticated ways to analyze the data.
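The shape of that heat-map query can be sketched with toy data. To be clear, this is not CELLxGENE's actual API (CELLxGENE is a browser-based tool over curated atlas data sets); the cell types, gene name, and values below are invented, just to show the underlying computation: one gene, mean expression per cell type.

```python
from collections import defaultdict

# Invented single-cell records: each cell has a type and an expression
# level for a hypothetical heart-disease-related gene.
cells = [
    {"type": "cardiomyocyte",   "GENE_X": 8.0},
    {"type": "cardiomyocyte",   "GENE_X": 7.0},
    {"type": "pancreatic beta", "GENE_X": 9.5},
    {"type": "hepatocyte",      "GENE_X": 0.5},
]

def heatmap_row(cells, gene):
    """Mean expression of `gene` per cell type: one row of a heat map."""
    total, count = defaultdict(float), defaultdict(int)
    for c in cells:
        total[c["type"]] += c[gene]
        count[c["type"]] += 1
    return {t: total[t] / count[t] for t in total}

print(heatmap_row(cells, "GENE_X"))
# the unexpected "spike" in pancreatic beta cells is the hypothesis
```

Seeing GENE_X high in pancreatic beta cells as well as in heart cells is what would prompt both the new hypothesis and the side-effect concern described above.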
I must say, I love that, because if I look at the advances in neuroscience over the last 15 years, most of them didn't necessarily come from looking at the nervous system. They came from the understanding that the immune system impacts the brain. Prior to that, everyone talked about the brain as an immune-privileged organ. What you just said also bridges the divide between single cells, organs, and systems.
Right? Because ultimately cells make up organs, organs make up systems, and they're all talking to one another. And everyone nowadays is familiar with, like, the gut-brain axis, or the microbiome being so important. But rarely is the conversation between organs discussed, so to speak. So I think it's wonderful. So that tool was generated by CZI, or CZI funded that tool? We built that. We built it. So is it built by Meta, or does CZI have its own engineers? Got it.
Yeah, they're completely different organizations. Incredible. And so a graduate student or postdoc who's interested in a particular mutation could put this mutation into this database. That graduate student or postdoc might be in a laboratory known for working on heart, but suddenly find that they're collaborating with other scientists that work on the pancreas. Yeah. Which also is wonderful because it bridges the divide between these fields.
Fields are so siloed in science, not just in different buildings, but the people rarely talk, unless things like this bring them together. I mean, the graduate student is someone that we want to empower, because, one, they're the future of science, as you know. And within CELLxGENE, if you put in the gene you're interested in and it shows you the heat map, we also will pull up the most relevant papers to that gene. And so, like, read these things. Fantastic.
As we all know, quality nutrition influences, of course, our physical health, but also our mental health and our cognitive functioning, our memory, our ability to learn new things and to focus. And we know that one of the most important features of high quality nutrition is making sure that we get enough vitamins and minerals from high quality, unprocessed or minimally processed sources.
As well as enough probiotics and prebiotics and fiber to support basically all the cellular functions in our body, including the gut microbiome. Now, I, like most everybody, try to get optimal nutrition from whole foods, ideally mostly from minimally processed or non-processed foods.
However, one of the challenges that I and so many other people face is getting enough servings of high quality fruits and vegetables per day, as well as the fiber and probiotics that often accompany those fruits and vegetables. That's why way back in 2012, long before I ever had a podcast, I started drinking AG1. And so I'm delighted that AG1 is sponsoring the Huberman Lab podcast.
The reason I started taking AG1, and the reason I still drink AG1 once or twice a day, is that it provides all of my foundational nutritional needs. That is, it provides insurance that I get the proper amounts of those vitamins, minerals, probiotics and fiber to ensure optimal mental health, physical health and performance. If you'd like to try AG1, you can go to drinkag1.com/huberman to claim a special offer.
They're giving away five free travel packs plus a year's supply of vitamin D3K2. Again, that's drinkag1.com/huberman to claim that special offer. Just going back to your question from before: are there going to be more cell types that we could discover? I mean, I assume so, right? No catalog of this stuff is ever done, it doesn't seem like. We keep on finding more.
But I think that gets to one of the strengths of modern LLMs: the ability to kind of imagine different states that things can be in. So from all the work that we've done and funded on the Human Cell Atlas, there is a large corpus of data that you can now train a kind of large-scale model on. And one of the things that we're doing at CZI, which I think is pretty exciting, is building
what we think is one of the largest nonprofit life sciences AI clusters. It's, you know, on the order of a thousand GPUs, larger than what most people have access to in academia, that you can do serious engineering work on. And, you know, by basically training a model with all of the Human Cell Atlas data and a bunch of other inputs as well,
we think you'll be able to basically imagine all of the different types of cells and all the different states that they can be in, when they're healthy and diseased, and how they'll interact with each other and with different potential drugs. But I think this is where it's helpful to have a good understanding of, and be grounded in, the modern state of AI.
I mean, these things are not foolproof, right? One of the flaws of modern LLMs is that they hallucinate. So the question is, how do you make it so that that can be an advantage rather than a disadvantage? And I think the way that it ends up being an advantage is when they help you imagine a bunch of states that something could be in.
But then you, as the scientist or engineer, go and validate that those are true, whether they're solutions to how a protein can be folded or possible states that a cell could be in when it's interacting with other things. But, you know, we're not yet at the state with AI where you can just take the outputs of these things as gospel and run from there.
But they are very good, I think, as you said, hypothesis generators, or possible solution generators, that you can then go validate. So I think that's a very powerful thing: we can basically build on the first five years of science work around the Human Cell Atlas and all the data that's been built out, and carry that forward into something that I think is going to be a very novel tool going forward.
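That generate-then-validate pattern can be sketched generically. Both functions below are hypothetical stand-ins: `propose_states` plays the role of a generative model that may hallucinate, and `validate` plays the role of the experiment or simulation that checks each candidate; only candidates that pass move forward.

```python
def propose_states(n):
    """Stand-in for a generative model: emits n candidate cell states,
    some of which may be wrong (hallucinated)."""
    return [f"state_{i}" for i in range(n)]

def validate(state):
    """Stand-in for experimental or simulated validation; here we
    arbitrarily accept even-numbered states so the filtering shows."""
    return int(state.split("_")[1]) % 2 == 0

candidates = propose_states(6)
confirmed = [s for s in candidates if validate(s)]
print(confirmed)  # ['state_0', 'state_2', 'state_4']
```

The value of the model is the breadth of candidates it can imagine; the guarantee comes only from the validation step, which is why hallucination can be a feature here rather than a flaw.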
And that's the type of thing that I think we're set up to do well. You two had this exchange a little while back about funding levels, and how CZI is, you know, just sort of a drop in the bucket compared to NIH. But the thing that I think we can do that's different is funding some of these longer-term, bigger projects, where it is hard to galvanize and pull together the energy to do that otherwise.
A lot of what most science funding goes to is relatively small projects exploring things over relatively short time horizons. And one of the things that we try to do is build these tools over five-, ten-, fifteen-year periods. They're often projects that require hundreds of millions of dollars of funding, and world-class engineering teams and infrastructure, to do.
And that, I think, is a pretty cool contribution to the field, because there aren't many other folks doing that kind of thing. That's one of the reasons why I'm personally excited about the virtual cell stuff: it's this perfect intersection of the work we've done in single cell, the previous collaborations we've done with the field, and bringing together the industry and AI expertise around this.
Yeah, I completely agree. The model of science that you've put together with CZI isn't just unique, it's extremely important. The independent investigator model is what's driven the progression of science in this country, and to some extent in northern Europe, for the last hundred years. And it's wonderful on the one hand, because it allows for that image we have of a scientist tinkering away, or the people in their lab, and then the eureka moments.
And that hopefully translates to better human health. But, in my opinion, we've moved past that model as the most effective model, or at least as the only model that should be explored.

Yeah, I just think it's a balance. You want that, but you also want to empower those people, and I think that's what these tools do. And sure, there are mechanisms to do that, like NIH, but it's hard to do collaborative science.
It's sort of interesting that we're sitting here, and I grew up right near here as well, not far from the garage model of tech, right? The Hewlett and Packard model, not far from here at all. And the idea was, you know, the tinkerer in the garage, the inventor. And then people often forget that to implement all the technologies they discovered took enormous factories and warehouses.
So, you know, there's a similarity there to Facebook, Meta, et cetera. But in science, we imagine the scientist alone in their laboratory and those eureka moments, when nowadays the big questions really require extensive collaboration, and certainly tool development. And one of the tools that you keep coming back to is these LLMs, these large language models. Maybe you could just elaborate for those who aren't familiar.
What is a large language model, for the uninformed? What is it, and what does it allow us to do that other types of AI don't? Or, more importantly perhaps, what does it allow us to do that a bunch of really smart people, highly informed in a given area of science, staring at the data, can't do?
Sure. So I think a lot of the progression of machine learning has been about building systems, neural networks or otherwise, that can basically make sense of and find patterns in larger and larger amounts of data. And there was a breakthrough a number of years back, which some folks at Google actually made, called the transformer model architecture.
And it was this huge breakthrough, because before then there was somewhat of a cap: if you fed more data into a neural network past some point, it didn't really glean more insights from it. Whereas with transformers, we just haven't seen the end of how big they can scale yet. I mean, I think there's a chance that we run into some ceiling, but if there's an asymptote, we haven't observed it yet; we just haven't built big enough systems yet.
So I would guess, I don't know, I think this is actually one of the big questions in the AI field today: are transformers and the current model architectures sufficient? If you just build larger and larger clusters, do you eventually get something that's like human intelligence, or superintelligence, or
is there some kind of fundamental limit to this architecture that we just haven't reached yet? Maybe once we get a bit further in building them out, we'll reach it, and then we'll need a few more leaps before we get to the level of AI that I think will unlock a ton of really futuristic and amazing things. But there's no doubt that even just being able to process the amount of data that we can now with this model architecture has unlocked a lot of new use cases.
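Since the transformer architecture comes up repeatedly here, a minimal sketch of its core operation, scaled dot-product attention, may help ground the discussion. This is a toy illustration in plain NumPy, nothing like a production model; the token count, dimensions, and random inputs are invented for the example:

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    # Core transformer operation: each query scores every key,
    # the scores are softmaxed, and the output is a weighted
    # average of the values.
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)                  # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ v                               # weighted sum of values

# Toy example: 3 "tokens" with embedding dimension 4, attending to themselves.
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(x, x, x)          # self-attention: q = k = v
print(out.shape)  # (3, 4)
```

Stacking many such attention layers (plus feed-forward layers) is what keeps scaling with more data, which is the property being pointed to here.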
I mean, the reason why they're called large language models is because one of the first uses of them was for people to basically feed in all of the language from the worldwide web. And you can think about them as basically prediction machines.
You put in a prompt, and it can basically predict a version of what should come next. So you type in a headline for a news story, and it can predict what it thinks the story should be. Or you could train it to be a chatbot, where, prompted with this question, you give this response.
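To make the "prediction machine" idea concrete, here is a deliberately tiny sketch: a bigram model that predicts the next word purely from observed frequencies. Real LLMs do the same job, predicting what comes next, with transformers trained on vastly more data; the corpus here is invented for illustration:

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny corpus, then predict
# the most frequent continuation -- a bare-bones "prediction machine".
corpus = "the cell divides and the cell grows and the tissue forms".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # Return the word most often observed after `word`, or None if unseen.
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # 'cell' (seen twice, vs. 'tissue' once)
```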
But one of the interesting things is, it turns out there's actually nothing specific to human language in it. If, instead of feeding it human language, you use that model architecture for a network and feed it all of the Human Cell Atlas data, then when you prompt it with the state of a cell, it can spit out different versions of how that cell can interact, or different states the cell could be in next when it interacts with different things.

Does it have to be taught about the data? For instance, if you give it a bunch of genetic data, do you have to say, hey, by the way, here's a genetics class, so that it understands you've got DNA, RNA, and protein?

The basic nature of all these machine learning techniques is that they're pattern recognition systems. They're these very deep statistical machines that are very efficient at finding patterns. So no, you don't actually need to teach a language model a lot of specific things about the language it's trying to speak, either. You just feed it a bunch of examples. Let's say you teach it about something in English, but then you also give it a bunch of examples of people speaking Italian.
It'll actually be able to explain the thing that it learned in English in Italian, even though it learned the material in English. The crossover, and just the pattern recognition, is the thing that's pretty profound and powerful about this, and it really does apply to a lot of different things. Another example in the scientific community has been AlphaFold, the work that DeepMind has done on protein folding.
It's basically a lot of the same model architecture, but instead of language, they fed in all this protein data, and you can give it a state and it can spit out solutions for how those proteins get folded. So it's very powerful.
It's very powerful. I don't think we know yet, as an industry, what the natural limits of it are, and that's one of the things that's pretty exciting about the current state. But it certainly allows you to solve problems that just weren't solvable with the generation of machine learning that came before it.
Sounds like CZI is moving a lot of work that was done in vitro, in dishes, and in vivo, in living organisms, model organisms or humans, to in silico, as we say. So do you foresee a future where a lot of biomedical research, certainly the work of CZI
included, is done by machines? I mean, obviously it's much lower cost, and you can run millions of experiments, which of course is not to say that humans won't be involved. But I love the idea that we can run experiments in silico en masse.

I think the in silico experiments are going to be incredibly helpful to test things quickly and cheaply, and to just unleash a lot of creativity.
I do think you need to be very careful about making sure it still translates to and matches with humans. One thing that's funny in basic science is that we've basically cured every single disease in mice.
We know what's going on when mice have a number of diseases, because they're used as a model organism. But they are not humans, and a lot of times that research is relevant but not directly one-to-one translatable to humans. So you just have to be really careful about making sure that it actually works for humans.
Sounds like what CZI is doing is actually creating a new field. As I'm hearing all of this, I'm thinking, okay, this transcends the immunology department, cardiothoracic surgery, neuroscience. I mean, it's the idea of a new field. You certainly embrace the realities of universities and laboratories, because that's where most of the work you're funding is done, is that right? But okay, maybe we need to think about what it means to do science differently, and I think that's one of the things that's most exciting here. Along those lines, it seems that bringing together a lot of different types of people and different major institutions is going to be especially important. So I know that the initial CZI Biohub gratefully included Stanford, we'll put that first on the list, but also UCSF, forgive me, many friends at UCSF, and also Berkeley. But there are now some additional institutions involved, and I think that's a good thing.
So maybe you could talk about that: what motivated the decision to branch outside the Bay Area, and why did you select those particular additional institutions?

Well, I'll just say that a big part of why we wanted to create additional biohubs is that we were just so impressed by the work that the folks running the first Biohub did. And, you should walk through the work of the Chicago biohub and the New York biohub that we just announced, but I think they're actually an interesting pair of examples of exactly that balance between the limits of what you want to do with physical material engineering and where things are purely biological. The Chicago team is really building sensors to understand what's going on in your body, and that's more of a physical engineering challenge.
So we talk about this as, like, a cellular endoscope: being able to have an immune cell, or something like that, go in and understand what's going on in your body. But it's not a physical piece of hardware; it's a cell that you can basically have go report out on different things that are happening inside the body. So the cell is the microscope.
And then eventually actually being able to act on it. But you should go into more detail on all this.

A core principle of how we think about biohubs, when we invited proposals, is that each has to involve at least three institutions, really breaking down the barrier of the single university, and oftentimes asking for the people designing the research aims to come from all different backgrounds,
and to explain why the problem they want to solve requires interdisciplinary, inter-university, inter-institution collaboration to actually make it happen. We just put that request for proposals out there, with our San Francisco Biohub as an example, where they've done incredible work in single-cell biology and infectious disease.
And we got, I want to say, 57 proposals from over 150 institutions. A lot of ideas came together, and we're so excited that we've been able to launch Chicago and New York. Chicago is a collaboration between UIUC, the University of Illinois Urbana-Champaign, the University of Chicago, and Northwestern.
Obviously these universities are multifaceted, but if I were to describe them by their stereotypical strengths: Northwestern has an incredible medical and hospital system, the University of Chicago brings incredible basic science strengths, and the University of Illinois is a computing powerhouse. So they came together and proposed that they were going to start thinking about cells in tissue.
So that's one of the layers you just alluded to: how do the cells that we know behave and act differently when they come together as a tissue? One of the first tissues they're starting with is skin. As a collaboration, under the leadership of Shana Kelley, they've already been able to design engineered skin tissue whose architecture looks the same as what's in you and me.
And what they've done is build these super, super thin sensors, and they embed these sensors throughout the layers of this engineered tissue and read out the data. They want to see what these cells are secreting, how these cells talk to each other, and what happens when these cells get inflamed. Inflammation is an incredibly important process that drives 50% of all deaths.
And so this is another disease-agnostic approach: we want to understand inflammation. And they're going to get a ton of information out from these sensors that tells you what happens when something goes awry. Because right now, we can say that when you have an allergic reaction, your skin gets red and puffy.
But what is the earliest signal of that? These sensors can look at the behavior of these cells over time, and then you can apply a large language model to find the earliest statistically significant changes, which can allow you to intervene as early as possible. So that's what Chicago is doing. They're starting with skin, and they're also looking at the neuromuscular junction, which is the connection where a neuron attaches to a muscle and tells the muscle how to behave. That's super important in things like ALS, but also in aging: slowed transmission of information across the neuromuscular junction is part of what causes older people to fall, because their brain cannot trigger their muscles to react fast enough. And so we want to be able to embed these sensors to understand how these different connected systems within our bodies work together. In New York, they're doing a related but equally exciting project, where they're engineering individual cells to be able to go in and identify changes in a human body.
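The Chicago idea of flagging the earliest statistically significant change in a sensor readout can be sketched in a deliberately simplified form. This is just a z-score threshold against a baseline window, a stand-in for the far more sophisticated models the biohub teams would actually use; the signal values are simulated for illustration:

```python
import statistics

def earliest_change(readings, baseline_n=10, z_threshold=3.0):
    # Flag the first reading that deviates from the baseline window
    # by more than z_threshold standard deviations.
    base = readings[:baseline_n]
    mu, sigma = statistics.mean(base), statistics.stdev(base)
    for i, x in enumerate(readings[baseline_n:], start=baseline_n):
        if sigma and abs(x - mu) / sigma > z_threshold:
            return i
    return None  # no significant change detected

# Simulated secretion levels: a stable baseline, then an inflammation-like spike.
signal = [1.0, 1.1, 0.9, 1.0, 1.2, 0.8, 1.0, 1.1, 0.9, 1.0,
          1.1, 0.9, 5.0, 6.0]
print(earliest_change(signal))  # 12 -- the first spiked reading
```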
So what they'll do is... it's wild. I mean, I love it. I don't want to go on a tangent, but for those who want to look it up: adaptive optics. There's a lot of distortion and interference when you try to look at something really small or really far away, and really smart physicists figured out, well, use the interference as part of the microscope, make it effectively part of the lenses of the microscope. We should talk about imaging separately. It's extremely clever, and along those lines, it's not immediately intuitive, but then when you hear it, it makes so much sense: make the cells that can already navigate to tissues, or embed themselves in tissues, be the microscope within that tissue. I love it.

The way that I explain this to my friends and my family is: this is Fantastic Voyage, but in real life.
Like, we are going into the human body, and we're using immune cells, which are privileged and already working to keep your body healthy, and being able to target them to examine certain things. So you can engineer an immune cell to go into your body and look inside your coronary arteries and say, are these
arteries healthy, or are there plaques? Because plaques lead to blockages, which lead to heart attacks. And the cell can then record that information and report it back out. That's the first half of what the New York biohub is going to do. The second half is: can you then engineer cells to go do something about it? Can I then tell a different
cell that is able to travel in your body to go in and clean that up in a targeted way? And so it's incredibly exciting. They're going to study things that are immune privileged, that your immune system normally doesn't have access to, things like ovarian or pancreatic cancer. They'll also look at a number of neurodegenerative diseases, since the immune system doesn't presently have a ton of access to the nervous system. It's both mind-blowing and it feels like sci-fi, but science is actually in a place where, if you really push a group of incredibly qualified scientists, could you do this if given the chance, the answer is: probably, with time, the right team, and resources. Doable.

Yeah, I mean, it's a 10-to-15-year project, but it's awesome: engineered cells.

Yeah, I love the optimism. The moment you said make the cell the microscope, I was like, yes, yes, and yes. It just makes so much sense.
What drove the decision to do the work of CZI in the context of existing universities? As opposed to, you know, there's still some real estate up in Redwood City with a bunch of space for biotech companies; you could just hire people from all backgrounds, say, hey, have at it, and do the whole thing from scratch. It's a very interesting decision to do this within the existing framework of graduate students who need to do a thesis and get a first-author paper, because there's a whole set of structures within academia that I think both facilitate but also limit the progression of science. That independent investigator model we talked about a little earlier is so core to the way science has been done. This is very different, and frankly it sounds far more efficient, if I'm to be completely honest, and we'll see if I renew my NIH funding after saying that.
But I think we all want the same thing. As scientists, and as humans, we want to understand the way we work, we want healthy people to stay healthy, and we want sick people to get healthy. That's really, ultimately, the goal. It's not super complicated; it's just hard to do.

So the teams at the biohubs are actually independent of the universities. Each biohub will probably have, in total, maybe 50 people working on these deep efforts. But it's an acknowledgement that not all of the best scientists who can contribute to this area are going to want to leave a university, or want to take on the full scope of this project. So the ability to partner with universities, and to have the faculty at all the universities be able to contribute to the overall project, is how the biohub is structured.

A lot of the way that we're approaching CZI is as this long-term iterative project: try a bunch of different things, figure out which ones produce the most interesting results, and then double down on those in the next five-year push. We just went through this period where we kind of wrapped up the first five years of the science program. We tried a lot of different models, all kinds of different things. And it's not that we think the biohub model is the best or only model, but we found that it was a really interesting way to unlock a bunch of collaboration, and to bring some technical resources that allow for this longer-term development
and it's not something that is widely being pursued across the rest of the field. So we figured, okay, this is an interesting thing that we can push on. But yeah, we do believe in the collaboration. And I also think we come at this knowing that the way we're pursuing this isn't the only way to do it, or the way everyone should do it. We're pretty aware of what the rest of the ecosystem is, and of how we can play a unique role in it.

It feels very synergistic with the way science is already done, and it also fills an incredibly important niche that frankly wasn't filled before.
Along the lines of implementation: let's say your large language models, combined with imaging tools, reveal that a particular set of genes acting in a cluster, I don't know, sets up an organ crash. Let's say the pancreas crashes at a particular stage of pancreatic cancer, still one of the deadliest cancers; there are others you certainly don't want to get, but that's among the ones you'd want least. So you discover that, and the idea is, okay, that reveals some potential drug targets that then bear out in vitro and, in addition, in a mouse model. How does the actual implementation of drug discovery happen? Maybe this target is druggable, maybe it's not; maybe it requires some other approach, a laser ablation approach or something, we don't know. But ultimately, is CZI going to be involved in the implementation of new therapeutics?

Less so. This is where it's important to work in an ecosystem and to know your own limitations. There are groups and startups and companies that take that and bring it to translation very effectively. I would say the place where we have a small window into that world is actually our rare disease work. We have a rare disease portfolio where we've funded patient advocates to create rare disease organizations, where patients come together and actually pool their collective experience. They build bioregistries, registries of their natural history, and they partner both with researchers, to do the research about their disease, and with drug developers, to incentivize the developers to focus on what they may need for their disease. And one thing that's important to point out is that rare diseases aren't rare: there are over 7,000 rare diseases, and collectively they impact many, many individuals. And I think the thing that's incredibly fascinating about rare diseases, from a basic science perspective, is that they're actually windows into how the body normally should work. There are often mutations in genes that cause very specific diseases, but that tell you how the normal biology works as well.
So you've discussed the major goals and issues of CZI for the next, say, five to ten years. And then beyond that, the targets will be explored by biotech companies, who will go grab those targets, test them, and implement them?

There have actually been a couple of teams from the initial Biohub that were interested in spinning out ideas into startups. So even though that's not a thing we're going to pursue ourselves, because we're a philanthropy.
We want to enable the work that gets done to be able to get turned into companies and things that other people take and run with, towards ultimately building therapeutics. So that's another zone, but it's just not a thing that we're going to do ourselves.
I gather you're both optimists. Is that part of what brought you together? Forgive me for switching to a personal question, but I love the optimism that seems to sit at the root of the CZI.

I will say that we are incredibly hopeful people, but it manifests in different ways between the two of us.

Yeah. How would you describe your optimism versus mine? It's not a loaded question.

I don't know.
I mean, I think I'm probably more technologically optimistic about what can be built. And I think you, because of your focus as an actual doctor, have more of a sense of how that's going to affect actual people in their lives.
Whereas for me, in a lot of my work, we touch a lot of people around the world and the scale is sort of immense. And I think for you, it's being able to improve the lives of individuals, whether it's students at any of the schools that you started, or any of the stuff that we've supported through the education work, which isn't the focus here, or just being able to improve people's lives in that way. I think that's the thing I've seen you be super passionate about.
I don't know. Do you agree with that characterization?

Yeah, I agree with that, I think that's very fair. And I'm sort of giggling to myself, because in day-to-day life, as life partners, our relative optimism comes through as: Mark is just overly optimistic about his time management, and he'll get engrossed in interesting ideas, and he's late.
Physicians are very punctual, and because he's late, I have to channel "Mark is an optimist" whenever I'm waiting for him.

That's a nice way to put it. Okay, I'll start using that.

That's what I think when I'm in the driveway with the kids, waiting for you. I'm like, Mark is an optimist, and so his optimism translates to some tardiness.
Whereas I'm sort of, like, how is this going to happen? I'm going to open a spreadsheet, I'm going to start putting together a plan, pulling together all the pieces, calling people, to sort of bring something to life.

It's one of my favorite quotes: optimists tend to be successful, and pessimists tend to be right. And I think that's true in a lot of different aspects of life.
Did you coin that, Mark?

I did not. I like it, but I did not invent it.

We'll give it to you, we'll put it out there.

I do think there's really something to it. If you're discussing any idea, there are all these reasons why it might not work, and the people stating them probably have some validity to their concerns. But the question is, is that the most productive way to view the world?
And I think, across the board, the people who tend to be the most productive and get the most done kind of need to be optimistic, because if you don't believe that something can get done, then why would you go work on it?
The reason I asked the question is that these days we hear a lot about how the future is looking so dark in these various ways. And you have children, you are a family, excuse me, and you also each have families that are now merged. But I love the optimism behind the CZI, because behind all of this there's a sort of set of big statements on the wall.
One: the future can be better than the present in terms of treating disease, maybe even, as you said, eliminating all diseases. I love that optimism. And two: there's a tractable path to do it, like, we're going to literally put money and time and energy and people and technology and AI behind it.
So I have to ask: was having children a significant modifier of your view of the future? Like, wow, you hear all this doom and gloom, what's the future going to be like for them? Did you sit back and think, what would it look like if there were a future with no diseases?
Is that the future we want our children in? I'm voting a big yes, so we're not going to debate that at all. But was having children an inspiration for the CZI in some way?

Yeah. For my answer to that, I would dial backwards, and I'll just tell a very brief story about my family.
I'm the daughter of Chinese-Vietnamese refugees. My parents and grandparents were boat people; if you remember, people left Vietnam during the war in these small boats into the South China Sea. There were stories about how these boats would sink with whole families on them. And so my grandparents, both sets of grandparents, who knew each other, decided that there was a better future out there, and they were willing to take risks for it.
But they were afraid of losing all their kids. My dad is one of six; my mom is one of ten. And so they decided that there was something out there in this bleak time, and they paired up their kids, one from each family, and sent them out on these little boats, before the Internet, before cell phones, and said, we'll see you on the other side. And the kids were between the ages of, like, 10 and 25, so young kids. My mom was a teenager, in her early teens, when this happened.
And everyone made it, and I get to sit here and talk to you. So how could I not believe that better is possible? I hope that that's in my epigenetics somewhere, and that I carry it on.
That is a spectacular story. And while it is spectacular, how can I be a pessimist after hearing it? I love it, and I so appreciate that you became a physician, because you're now bringing that optimism, that epigenetic understanding, and that cognitive and emotional understanding to the field of medicine.
And you're grateful to the people that made that decision.

Yeah. And, you know, I've always known that story, but you don't understand how wild it feels until you have your own child, and you're like, well, I can't even, I refuse to let her use anything but glass bottles, or something like that. And you're like, oh my God, the risk, and the willingness of my grandparents to believe in something bigger and better.
It's just astounding. And also, our own children sort of give it a sense of urgency.

A spectacular story. And you're sending knowledge out into the fields of science and bringing knowledge into the fields of science, and I love this "we'll see you on the other side."

Yeah, I'm confident that it will all come back.

Well, thank you so much for that. Mark, now you have the opportunity to talk about whether having kids changed your worldview. It's really tough to beat that story.
It is tough to beat that story. And they are also your children, so in this case you get two for the price of one, so to speak.
Having children definitely changes your time horizon. That's one thing: there are all these things that we had talked about, for as long as we've known each other, that we eventually wanted to go do. But then it's like, oh, we're having kids, we need to get on this. That was actually one of the items on the baby checklist before the first: the baby is coming, we have to start CZI.

Truly.
I was sitting in the hospital delivery room, finishing editing the letter that we were going to publish to announce the work. No, we really were editing the final draft. First CZI, before you.
Well, it's an incredible initiative. I've been following it since its inception, and it's already been tremendously successful. Everyone in the field of science, and I have a lot of communication with those folks, feels the same way, and the future is even brighter for it, that's clear. Thank you for expanding to the Midwest and New York. We're all very excited to see where all of this goes. I share in your optimism, and thank you for your time today.
I'd like to take a quick break and thank our sponsor, InsideTracker. InsideTracker is a personalized nutrition platform that analyzes data from your blood and DNA to help you better understand your body and help you reach your health goals. I've long been a believer in getting regular blood work done, for the simple reason that many of the factors that impact your immediate and long-term health can only be analyzed from a quality blood test.
Now, a major problem with a lot of blood tests out there is that you get information back about metabolic factors, lipids, hormones, and so forth, but you don't know what to do with that information.
With InsideTracker, they make it very easy, because they have a personalized platform that allows you to see the levels of all those things, metabolic factors, lipids, hormones, etc., and it gives you specific directives you can follow, related to nutrition, behavioral modification, supplements, etc., that can help you bring those numbers into the ranges that are optimal for you.
If you'd like to try inside tracker you can go to inside tracker dot com slash Hubertman to get 20% off any of inside tracker's plans again that's inside tracker dot com slash Hubertman and now for my discussion with Mark Zuckerberg slight shift of topic here.
You're extremely well known for your role in technology development, but by virtue of your personal interests, and also where Meta technology interfaces with mental health and physical health, you're starting to become synonymous with health, whether people realize it or not. Part of that is because there's posted footage of you rolling jiu-jitsu; you won a jiu-jitsu competition recently.
You're doing other forms of martial arts, water sports including surfing, and on and on, so you're doing it yourself. But maybe we could just start off with technology and get this issue out of the way first, which is that I think many people assume that technology, especially any technology that involves a screen, is going to be detrimental to our health.
But that doesn't necessarily have to be the case. So could you explain how you see technology meshing with, inhibiting, or maybe even promoting physical and mental health?
Sure. I think this is a really important topic. The research that we've done suggests that it's not all good or all bad; how you're using the technology has a big impact on whether it's basically a positive experience for you. And even within technology, even within social media, there's not one type of thing that people do. I think at its best, you're forming meaningful connections with other people, and there's a lot of research suggesting that it's the relationships and friendships we have that bring the most happiness in our lives.
At some level they even end up correlating with living a longer and healthier life, because that kind of grounding you have in community ends up being important. So that aspect of social media, the ability to connect with people, to understand what's going on in their lives, have empathy for them, communicate what's going on in your own life, express that, is generally positive. There are ways it can be negative, in terms of bad interactions, things like bullying, which we can talk about, because there's a lot that we've done to make sure people can be safe from that, to give people tools, and to give kids the right parental controls so their parents can oversee things. But that's the interacting-with-people side.
There's another side to all of this, which is passive consumption. At its best, that's entertainment, and entertainment is an important human thing too, but I don't think it has quite the same association with long-term well-being and health benefits that helping people connect with each other does.
And at its worst, with some of the stuff we see online, a lot of the news these days is just so relentlessly negative that it's hard to come away from looking at the news for half an hour and feel better about the world.
So there's a mix here. The more that social media is about connecting with people, and the more that, when you're consuming the media part of social media, you're learning about things that enrich you and provide inspiration or education, as opposed to things that just leave you with a more toxic feeling, that's the balance we try to get right across our products. And
I think we're pretty aligned with the community on this, because at the end of the day, people don't want to use a product and come away feeling bad. People talk about the value of a lot of these products in terms of information and utility, but I think it's just as important, when you're designing a product, to think about what kind of feeling you're creating in the people who use it, whether that's an aesthetic sense when you're designing hardware, or just, what do you make people feel?
And generally, people don't want to feel bad. That doesn't mean we want to shelter people from bad things happening in the world, but I don't think people want us to just show them super-negative stuff all day long.
So we work hard on all of these different problems: making sure we're helping connect people as best as possible, and making sure we give people good tools to block anyone who might be bullying or harassing them. Especially for younger folks, anyone under the age of 16 defaults into an experience where their account is private, and we have all these parental tools so that parents can understand what their children are up to, in a good balance.
And then on the other side, we try to give people tools to understand how they're spending their time. If you're a teen and you're stuck in some loop of looking at one type of content, we'll nudge you and say, hey, you've been looking at this type of content for a while, how about something else? And here's a bunch of other examples.
So there are things you can do to push this in a positive direction, but it starts with having a more nuanced view: this isn't all good or all bad, and the more you can make it a positive thing, the better it will be for all the people who use our products.
That makes really good sense. In terms of the negative experience, I agree; I don't think anyone wants a negative experience in the moment. I think where some people get concerned, perhaps, and I think about my own interactions with, say, Instagram, which I use all the time for getting information out but also for consuming information, and I happen to love it, it's where I essentially launched the non-podcast segment of my podcast, and continue to.
I can think of experiences that are a little bit like highly processed food, where it tastes good at the time, it's highly engrossing, but it's not necessarily nutritious, and you don't feel very good afterwards. For me, that would be the little collage of default options to click on. It's not that I can't resist the options to click on in Instagram, but occasionally I notice, and this reflects my failure, not Instagram's, that there are a lot of street-fight things, people beating people up on the street.
And I have to say, these have a very strong gravitational pull. I'm not somebody who enjoys seeing violence per se, but I find myself clicking on one of these, like, what happened? And I'll see someone get hit, and there's a little melee on the street or something. Those seem to be offered to me a lot lately, and again, this is my fault; it reflects my prior searching behavior. But I notice that it has a bit of a gravitational pull.
I didn't learn anything; it's not teaching me any kind of useful street self-defense skills. At the same time, I also really enjoy some of the cute animal stuff, so I get a lot of those too. So there's this polarized collage offered to me that reflects my prior search behavior.
You could argue that the cute animal stuff is just entertainment, but actually it fills me with a feeling that in some cases truly delights me. I delight in animals, and we're not just talking about cats and dogs; I mean animals I've never seen before, interactions between animals I've never seen before, that truly delight me. They energize me in a positive way, such that when I leave Instagram, I do think I'm better off. So I'm grateful for the algorithm in that sense. But
I guess the direct question is: is the algorithm just reflective of what one has been looking at prior to the moment they log on, or is it also trying to do exactly what you described, which is give people a good-feeling experience that leads to more good feelings?
Yeah, I mean, we try to do this in a long-term way. One simple example: we had an issue a number of years back with clickbait news, articles that would have a headline that grabbed your attention and made you feel like, I need to click on this. Then you click on it, and the article is actually about something somewhat tangential. But people clicked on it. So the naive version of this, the version from ten years ago, was: people seem to be clicking on this, maybe that's good. But it's actually a pretty straightforward exercise to instrument the system and realize that, hey, people click on this, but then they don't really spend a lot of time reading the news after clicking on it, and after they do this a few times, it doesn't correlate with them saying they're having a good experience. Some of how we measure this is just by looking at how people use the services, but it's also important to balance that by having real people come in and tell us: okay, here are the stories we could have shown you; which of these are most meaningful to you, or would give you the best experience? And then we map the algorithm and what we do to that ground truth of what people say they want. So through a set of things like that, we've really made large steps to minimize things like clickbait over time. It's not gone from the internet, but I think we've done a good job of minimizing it on our services.
Within that, though, I do think we need to be pretty careful about not being paternalistic about what makes different people feel good. I don't know that everyone feels good about cute animals. I can't imagine people would feel really bad about them, but maybe they don't have as profound a positive reaction as you just expressed. And maybe people who are more into fighting would look at the street-fighting videos, assuming they're within our community standards; there's a level of violence that we just don't want to be showing at all, but that's a separate question.
But if they are, well, I'm pretty into MMA. I don't get a lot of street-fighting videos, but if I did, maybe I'd feel like I was learning something from them. I think at various times in the company's history we've been a little too paternalistic about saying, this is good content, this is bad, you should like this, this is unhealthy for you. We want to look at the long-term effects; you don't want to get stuck in a short-term loop where, just because you did something today, that's taken to be what you aspire to for yourself over time. But as long as you look at the long term of what people both say they want and what they do, giving people a fair amount of latitude to like the things they like just feels like the right set of values to bring to this.
Now, of course, that doesn't go for everything. There are things that are truly off limits, things like bullying, for example, or things that are really inciting violence, and we have our whole community standards around this. But except for those things, which I would hope most people can agree on, okay, bullying is bad, I hope 100% of people agree with that, well, maybe 99%, except for the things that feel that extreme and bad, I think you want to give people space to like what they want to like.
Yesterday I had the very good experience of learning from the Meta team about the safety protections that are in place for kids who use Meta platforms, and frankly, I was really
positively surprised at the huge number of filter-based tools, and just the ability to customize the experience so that it can stand the best chance of enriching, not just remaining neutral to, their mental health status. One thing that came up in that conversation, however, was that I realized there are all these tools, but do people really know that these tools exist? I think about my own experience with Instagram: I love watching Adam Mosseri's Friday Q&As, because he explains a lot of the tools that I didn't know existed. For people who haven't seen them, I highly recommend watching; I think he takes questions on Thursdays and answers them most every Friday. So if I'm not aware, without watching that, of the tools that exist for adults, how does Meta look at the challenge of making sure people know about all these tools? I mean, dozens and dozens of very useful tools, when I think most of us just know the hashtag, the tag, the click, stories versus feed, and, as I also post to Threads, the major channels and tools. But this is like owning a vehicle with incredible features that one doesn't realize can take you off road or allow your vehicle to fly. So what do you think could be done to get that information out? Maybe this conversation could cue people to it.
I mean, that's part of the reason why I wanted to talk to you about this. I think most of the narrative around
this is not, okay, here are all the different tools that people have to control their experience; it's the narrative of, is this just negative for teens, or something like that. And again, a lot of this comes down to, how is the experience being tuned, and are people actually using it to connect in positive ways? If so, I think it's really positive. So yeah, part of this is that we probably just need to get out and talk to people more about it. And then there's an in-product aspect, which is that if you're a teen and you sign up, we take you through a pretty extensive experience that tries to outline some of this. But that has limits too, because when you sign up for a new thing, if you're bombarded with a list of features, you're like, okay, I just signed up for this, I don't really understand much about what the service is like; let me go find some people to follow who are my friends on here before I learn about controls to prevent people from harassing me, or whatever.
That's why I think it's really important to also show a bunch of this in context. So if you're looking at comments, and you go to delete a comment or edit something, try to give people prompts in line: did you know that you can manage things in these ways? Or when you're in the inbox filtering something, remind people in line. Just because of the number of people who use the products and the level of nuance around each of the controls, I think the vast majority of that education needs to happen in the product. But I do think that through conversations like this and others, we can create a broader awareness that these things exist, so that people are primed, and when those things pop up in the product, people go, oh yeah, I knew there was this control, and here's how I would use it.
I find the restrict function to be very useful, more so than the block function in most cases. Sometimes I feel the restrict function is really useful because you
can filter specific comments. You might recognize that someone has a tendency to be a little aggressive. And I should point out that I actually don't mind what people say to me, but I try to maintain what I call classroom rules in my comment section: I don't like people attacking other people, because I would never tolerate that in a university classroom, so I'm not going to tolerate it in the comment section.
Yeah, and I think the example you just used, restrict versus block, gets to something important about product design, which is that block is this very powerful tool: if someone is giving you a hard time and you just want them to disappear from the experience, you can do that. But the design trade-off is that, in order to make the person just gone from the experience, so that you don't show up to them and they don't show up to you, inherent to that is that they will have a sense that you blocked them. That's why I think some things like restrict, or just filtering, as in, I just don't want to see as much stuff about this topic, matter. People use different tools for very subtle reasons. Maybe you want the content to not show up, but you don't want the person posting the content to know that. Maybe you don't want to get the messages in your main inbox, but you don't want to tell the person that you're not friends, or something like that. You actually need to give people different tools, with different levels of power and nuance around how the social dynamics of using them play out, in order to really allow people to tailor the experience in the ways that they want.
In terms of trying to limit the
total amount of time on social media, I couldn't find really good data on how much time is too much. I think it's going to depend on what one is looking at, the age of the user, and so on. But I know that you have tools that cue the user to how long they've been on a given platform. Are there tools to self-regulate? I'm thinking of the Greek myth of the sirens, with the sailors tying themselves to the mast and plugging their ears so they're not drawn in. Aside from deleting the app temporarily and then reinstalling it every time you want to use it again, is there a true self-lockout function, where one can lock themselves out of access to the app?
Well, we give people tools that let them manage this. There are the tools that users themselves get, and there are the tools that parents get, to see how the usage works. For now, we've mostly focused on helping people understand this, and then giving people
reminders and things like that. It's tough, though, to answer the question you were asking, whether there's an amount of time that's too much, because it really does come down to what you're doing. If you fast forward beyond just the apps we have today to, say, a social experience in the future built on the augmented reality glasses we're building, a lot of this is going to be you interacting with people the way you would physically, as if you were hanging out with friends or working with people, except now they can show up as holograms, and you can feel like you're present right there with them no matter where they actually are. And then the question is, is there too much time to spend interacting with people like that? At the limit, if we can get that experience to be as rich, and to give you as good a sense of presence, as you'd have if you were physically there with someone, then I don't see why you would want to restrict the amount people use that technology to any less than the amount of time you'd be comfortable interacting with people physically. And that's obviously not going to be 24 hours a day; you have to do other stuff, you have work, you need to sleep.
So it really comes down to how you're using these things. If what you're primarily using the services for is getting stuck in loops, reading news or something that's putting you in a negative mental state, then there's probably a relatively short period of time that's a good amount for that. But even then it's not zero, right? Just because news might make you unhappy doesn't mean the answer is to be unaware of the negative things happening in the world. Different people have different tolerances for what they can take on that front, and having some awareness is probably good, as long as it's not more than you're constitutionally able to take. So, trying not to be too paternalistic about this is our approach, but we want to empower people, both users and, if you're a teen, your parents, with tools to understand what you're experiencing and how you're using these things, and then go from there.
Yeah, I think it requires of all of us some degree of self-regulation. I like this idea of not being too paternalistic; it seems like the right way to go. I find myself occasionally having to make sure that I'm not just passively scrolling, that I'm learning. I like foraging for, organizing, and dispersing information;
that's been my life's career. I have learned so much from social media; I find great papers, great ideas. I think comments are a great source of feedback, and I'm not just saying that because you're sitting here, I mean it: Instagram in particular, but other Meta platforms too, have been tremendously helpful for me in getting science and health information out.
One of the things I'm really excited about, which I only had the chance to try for the first time today, is your new VR platform, the newest Oculus, and then we can talk about the glasses, the Ray-Bans.
Sure.
Those two experiences are still kind of blowing my mind, especially the Ray-Ban glasses, and I have so many questions about them, so I'll resist until we get into that. I have some experience with VR; my lab has used VR, and Jeremy Bailenson's lab at Stanford is one of the pioneering labs of VR and mixed reality, or I guess at least some called it augmented reality, but now it's mixed reality. I think what's so striking about the VR you had me try today is how well it interfaces with the real room, what might be called the physical room, because I could still see people, I could see where the furniture was, so I didn't bump into anything. I could see people's smiles, I could see my water on the table, all while doing what felt like a real martial arts experience, except I wasn't getting hit physically, I was getting hit virtually. It's extremely engaging, and yet, on the good side of things, it bypasses a lot of the early concerns. Bailenson's lab, Jeremy's lab again, was early to say that there's a limit to how much VR one can or should use each day, even for the adult brain, because it can really disrupt your vestibular system, your sense of balance. All of that seems to have been dealt with in this new iteration: we didn't come out of it feeling dizzy at all, and I didn't feel like I was re-entering the room in a way that was jarring. Going into it, obviously, you're in a different world, but you can look to your left and say, oh, someone just came in the door, hey, how's it going, hold on, I'm playing this game, just as it was when I was a kid playing Nintendo and someone walked in. It's fully engrossing, but you'd say hold on, and you see that they're there. So first of all, bravo, incredible. And then the next question is, what is this? What do we even call this experience? Because it is truly mixed.
It's a truly mixed reality experience.
Yeah. Mixed reality is sort of the umbrella term that refers to the combined experience of virtual and augmented reality. Augmented reality is what you'll eventually get with some future version of the smart glasses, where you're primarily seeing the physical world but you can put holograms in it. We'll have a future where you walk into a room and there are as many holograms as physical objects. Just think about all the paper, the art, the physical games, the media, your workstation. If we wanted to pull up, say, an MMA fight, we could just draw it up on the table right here and watch it, rather than turning and looking at a screen. Pretty much any screen that exists today could be a hologram in the future with smart glasses; there's nothing that physically needs to be there for that once you have glasses that can put a hologram there. And it's an interesting thought experiment to go around and ask, which of the things that are physical in the world actually need to be physical? Your chair does, because you're sitting on it; a hologram isn't going to support you. But that art on the wall? That doesn't need to physically be there. So that's the augmented reality experience we're moving towards.
Then we've had these headsets that historically we think of as VR, a fully immersive experience, and now we're getting something that's a hybrid in between the two, capable of both: a headset that can do both virtual reality and some of these augmented reality experiences. I think that's really powerful, because you're going to get new applications that allow people to collaborate together. Maybe the two of us are here physically, but someone joins us and their avatar is there. Or maybe, in some version of the future, you're having a team meeting, and some people are there physically, some are dialing in as holograms, and then there are some AI personas on your team, helping you with different things, that can be embodied
as avatars, around the table, meeting with you.
Are people going to be doing first dates this way while physically separated? I could imagine some people would, an is-it-even-worth-leaving-the-house type of date, and then they meet in person for the first time afterward. I mean, dating has physical aspects to it too, but some people might want to know whether it's worth the effort to head out.
It's possible. I know some of my friends who are dating basically say that, to make sure they have a safe experience on a first date, they'll schedule something shorter, maybe in the middle of the day, maybe coffee, so that if they don't like the person they can just get out, before going and scheduling a dinner or a real full date. So maybe in the future people will have that kind of experience, where you can feel like you're sitting there with someone, and it's even easier and lighter weight and safer, and if you're not having a good experience you can just teleport out of there. But yeah, I think this will be an interesting question going forward. There are clearly a lot of things that are only possible physically, or are so much better physically,
and then there are all these things we're building that can be digital experiences. But it's this weird artifact of how the technology developed that the digital world and the physical world exist on completely different planes. When we want to interact with the digital world, and we do it all the time, we pull out a small screen, or sit in front of a big screen; we're basically interacting with the digital world through these screens.
But if we fast forward a decade or more, I think one of the really interesting questions is, what is the world we're going to live in? I think it's increasingly going to be this mesh of the physical and digital worlds that will allow us to feel, first, that the world we're in is a lot richer, because there can be all these things people create that are just so much easier to make digitally than physically; and second, that we have a real physical sense of presence with these things, so that interacting with the digital world doesn't feel like it's taking you away from the physical world, which today is just so much viscerally richer and more powerful.
I think the digital world will be embedded in the physical one and will feel just as vivid in a lot of ways. That's what I thought of when you said before that you felt like you could look around and see the real room. I actually think there's an interesting philosophical distinction between the real room and the physical room, which historically people would have said are the same thing. In the future, I think the real room is going to be the combination of the physical world with all the digital artifacts and objects in it, which you can interact with and feel present with, whereas the physical world is just the part that's physically there. And I think it's possible to build a real world, the sum of these two, that will actually be a more profound experience than what we have today.
I was struck by the smoothness of the interface between the VR and the physical room.
Your team had me try, I guess it was an exercise class in the form of boxing; it was essentially like hitting mitts, so hitting targets, boxing.
Supernatural.
Yeah, and it comes at a fairly fast pace that then picks up. It's got some tutorials, it's very easy to use, and it certainly got my heart rate up, and I'm in at least decent shape. I have to be honest, I never once desired to do any of these on-screen fitness things. I can't think of anything less engaging, and I don't want to insult any particular products, but riding a stationary bike while looking at a screen, pretending I'm on a road outside? I can't think of anything worse for me.
I do like the leaderboard, but maybe I'm just a very competitive person. If you're going to be running on a treadmill, at least give me a leaderboard so I can beat the people who are ahead of me.
I like moving outside, and certainly an exercise class, or an aerobics class, as they used to call them.
But the experience I tried today was extremely engaging. I've done enough boxing to at least know how to do a little bit of it, and I really enjoyed it; it gets your heart rate up. And I completely forgot that I was doing an on-screen experience, in part, I believe, because I was still in that physical room. I think there's something about the mesh of the physical room and the virtual experience that makes it belong to neither one world nor the other.
I really felt at the interface of the two, and I certainly got presence, this feeling of forgetting that I was in a virtual experience, and got my heart rate up pretty quickly. We had to stop because we were going to start recording, but I would do that for a good 45 minutes in the morning, and there's truly no amount of money you could pay me to look at a screen while pedaling on a bike or running on a treadmill.
So again, bravo, I think it's going to be very useful. It's going to get people moving their bodies more, which social media up until now, and a lot of technologies, have been accused of limiting, the amount of physical activity that both children and adults engage in. And we know we need physical activity, and you're a proponent of physical activity and practice it yourself. So is this a major goal of Meta, getting people moving their bodies more, getting their heart rates up, and so on?
I think we want to enable it, and I think it's good. But I think it comes more from a philosophical view of the world. I don't go into building products to try to shape people's behavior; I believe in empowering people to do what they want and be the best version of themselves that they can be. There's no agenda there. That said, I do believe that the previous generation of computers were devices for your mind, and I think that we are not brains in tanks.
There's a philosophical view of people that says you are primarily what you think about, or your values, or something. And it's like, no, you are that, and you are also a physical manifestation. People are very physical.
And I think building a computer for your whole body, and not just for your mind, is very fitting with this worldview: the actual essence of you, if you want to be present with another person, if you want to be fully engaged in an experience, is not just a video conference call that looks at your face and where you can share ideas. It's something that can engage your whole body. So yeah, I think being physical is very important to me.
I mean, it's just a lot of the most fun stuff that I get to do. It's a really important part of how I personally balance my energy levels and just get a diversity of experiences because I could spend all my time running the company. But I think it's good for people to do some different things and compete in different areas or learn different things. And all of that is good.
And if people want to do really intense workouts with the work that we're doing with Quest, or with eventual AR glasses, great. But even if you don't want to do a really intense workout, I think just having a computing environment and platform that is inherently physical captures more of the essence of what we are as people than any of the previous computing platforms we've had to date. Even the simple task of getting a better range of motion, aka flexibility.
I could imagine, inside the VR experience, leaning into a stretch, a standard lunge-type stretch, but actually seeing a meter of whether you're approaching new levels of flexibility in that moment, where it's actually measuring some kinesthetic elements of the body and the joints. Whereas normally, you might have to do that in front of a camera, which would give you the data on a screen you'd look at afterwards, or hire an expensive coach.
Or looking at form in resistance training, so you're actually lifting physical weights, but it's telling you whether or not you're breaking form. There's just so much that could be done inside of there. And then my mind starts to spiral into, wow, this is very likely to transform what we think of as, quote unquote, exercise. Yeah, I think so. I think there's still a bunch of questions that need to get answered.
I don't think most people are going to want to install a lot of sensors or cameras to track their whole body. So over time we're getting better, from the sensors that are on the headset, at doing very good hand tracking. We have this research demo where, just with the hand tracking from the headset, you can type: it projects a little keyboard onto your table, and people can type something like 100 words a minute with that. With a virtual keyboard.
We're also starting to be able, using some modern AI techniques, to simulate and understand where your torso position is. Even though the headset can't always see it, it can see it a bunch of the time, and if you fuse together what it does see with the accelerometer and an understanding of how the thing is moving, you can roughly infer what the body position is. But some things are still going to be hard, right? You mentioned boxing.
That one works pretty well, because we understand your head position and we understand your hands, and now we're increasingly understanding your body position. But let's say you want to expand that to Muay Thai or kickboxing. Okay, so legs: that's a different part of tracking, and it's harder, because legs are out of the field of view more of the time.
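The fusion idea described here, blending what the cameras do see with a motion-sensor prediction when the body part is out of view, can be caricatured as a simple complementary filter. This is only an illustrative sketch of the general concept; the function name, the blend weight, and the data shapes are all invented here, and Meta's actual tracking pipeline is far more sophisticated.

```python
# Illustrative sketch: fuse a camera-based torso estimate (available only
# when the torso is in view) with an IMU-derived prediction, using a
# simple complementary filter. Not Meta's actual tracking pipeline.

def fuse_torso_estimate(imu_prediction, camera_observation, alpha=0.8):
    """Blend an IMU dead-reckoning prediction with a camera observation.

    imu_prediction:     (x, y, z) predicted torso position from motion sensors
    camera_observation: (x, y, z) from the headset cameras, or None if the
                        torso is currently out of the field of view
    alpha:              trust placed in the camera when it sees the torso
    """
    if camera_observation is None:
        # Torso out of view (e.g. legs during a kick): coast on the IMU.
        return imu_prediction
    return tuple(
        alpha * cam + (1.0 - alpha) * imu
        for imu, cam in zip(imu_prediction, camera_observation)
    )

# When the camera sees the torso, the estimate is pulled toward it:
fused = fuse_torso_estimate((0.0, 1.0, 0.0), (0.0, 1.2, 0.0))
```

When the observation drops out, the filter simply coasts on the prediction, which is roughly why briefly occluded limbs can still be tracked but long occlusions (legs, grappling) drift.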
And then there's the element of resistance, right? You can throw a punch and retract it when you're shadowboxing, and do that without upsetting your physical balance that much. But if you want to throw a roundhouse kick and there's no one there, then the standard thing that happens when you're shadowboxing is you basically do a little 360.
But I don't know, is that going to feel great? I think there's a question about what that experience should be. And if you want to go even further, if you want to get grappling to work, I'm not even sure how you would do that without having resistance, without understanding what the forces applied to you would be. And then you get into, okay, maybe you're going to have some kind of body suit that can apply haptics.
But I'm not sure that even a pretty advanced haptic system is going to be good enough to simulate the actual forces that would be applied to you in a grappling scenario. This is part of what's fun about technology, though: you keep getting new capabilities, and then you need to figure out what things you can do with them. So I think it's really neat that we can do boxing and we can do this Supernatural thing.
And there's a bunch of awesome cardio and dancing and things like that. And then there's also still so much more to do that I'm excited to get to over time. But it's a long journey. And what about things like painting and art and music? Imagine, of course, different mediums. I like to draw with pen and pencil, but I could imagine trying to learn how to paint virtually.
And of course, you could print out a physical version of that at the end. This doesn't have to depart from the physical world; it could end in the physical world. Did you see the piano demo, where you're either there with a physical keyboard or it could be a virtual keyboard, but the app basically highlights which keys you need to press in order to play the song? So it's basically like you're looking at your piano.
And it's teaching you how to play a song that you choose, on an actual piano. Yeah. But it's illuminating certain keys in the virtual space. And it could either be a virtual piano or keyboard, if you don't have one, or it could use your actual keyboard. So yeah, I think stuff like that is going to be really fascinating for education and expression.
And for broadening access to otherwise expensive equipment. A piano is no small expense, and it takes up a lot of space and needs to be tuned. Yeah. You can think of all these things: the kid whose family has very little income could learn to play a virtual piano at much lower cost.
Yeah, it gets back to the question I was asking before, the thought experiment of how many of the things that we physically have today actually need to be physical. The piano doesn't. Maybe there's some premium where it's a somewhat better, more tactile experience to have a physical one.
But for people who don't have the space for it or who can't afford to buy a piano or just aren't sure that they would want to make that investment at the beginning of learning how to play piano.
I think in the future you'll have the option of just buying an app or a hologram piano, which will be a lot more affordable. And I think that's going to unlock a ton of creativity too, because instead of the market for piano makers being constrained to a relatively small set of experts who have perfected that craft,
you're going to have kids and developers all around the world designing crazy designs for potential keyboards and pianos that look nothing like what we've seen before, but that maybe bring even more joy, or even more fun, into a world where you have fewer of these physical constraints. So I think there's just going to be a lot of wild stuff to explore.
There's definitely going to be a lot of wild stuff to explore. I just had this idea, slash image in my mind, of what you were talking about merged with our earlier conversation when Priscilla was here: I could imagine a time not too long from now where you're using mixed reality to run experiments in the lab, literally mixing virtual solutions and getting potential outcomes,
and then picking the best one to go actually do in the real world, which is costly both financially and time-wise. Yeah, people are already using VR for surgery and education.
And there's a study that basically tried a controlled experiment: people learned how to do a specific surgery either through the normal textbook-and-lecture method, or through a version where you show the knee as a large, blown-up model that people can manipulate, and they practice where they would make the cuts. The people in that class did better.
So I think it's going to be profound for a lot of different areas. And a last example leaps to mind: social media and online culture have been accused of creating a lot of real-world, let's call it physical-world, social anxiety for people. But I could imagine a kid who has a lot of social anxiety, or who needs to learn to advocate for themselves better, practicing a social interaction progressively through a virtual interaction and then taking that to the real world, because,
in my very recent experience today, it's so blended now with real experience. The kid who feels terrified of advocating for themselves, or just of talking to another human being or an adult, or of being in a new circumstance in a room full of kids, could really experience that in silico first, get comfortable, let the nervous system attenuate a bit, and then take it into the quote-unquote physical world.
Yeah, I think we'll see experiences like that. I also think that some of the social dynamics around how people interact in this kind of blended digital world will be more nuanced in other ways, so I'm sure there will be new anxieties that people develop too, just like teens today need to navigate dynamics around constant texting
that we just didn't have when we were kids. So I think it will help with some things, and there will be new issues that hopefully we can help people work through too. But overall, I think it's going to be really powerful and positive. Let's talk about the glasses. Sure. This was wild. I put on a pair of Ray-Bans, and I like the way they look.
They're clear; they look like any other Ray-Ban glasses, except that I could call out to them. I could just say, hey Meta, I want to listen to the Goldberg Variations by Bach, and Meta responded, and no one around me could hear, but I could hear with exquisite clarity.
By the way, I'm not getting paid to say any of this; I'm just still blown away by it, folks. I want a pair of these very badly. I could hear, okay, I'm selecting that music now, and then I could hear it in the background, but I could still have a conversation. So this was neither headphones in nor headphones out. And I could say, wait, pause the music, and it would pause.
And the best part was, I didn't have to leave the room mentally, and I didn't even have to take out a phone. It was all interfaced through this very local environment in and around the head.
And as a neuroscientist, I'm fascinated by this, because of course all of our perceptions, auditory, visual, et cetera, are occurring inside the casing of this thing we call a skull. Maybe you could comment on the origin of that design for you, the ideas behind it, and where you think it could go, because I'm sure I'm just scratching the surface.
The real product that we want to eventually get to is this kind of full augmented reality product in a stylish and comfortable, normal glasses form factor. And the VR headset goes away, so to speak? No, I think there's going to be a place for that too, just like you have your laptop and you have your workstation. Or maybe the better analogy is, you have your phone and you have your workstation.
These AR glasses are going to be like your phone, in that you have something on your face and, if you want, you'll be able to wear it for a lot of the day and interact with it very frequently. I don't think people are going to be walking around the world wearing VR headsets; that's certainly not the future I'm hoping we get to.
But I do think there is a place for the headset, because it's a bigger form factor, so it has more compute power. Just like your workstation or your bigger computer can do more than your phone can, there's a place for that when you want to settle into an intense task. If you have a doctor who's doing a surgery, I would want them doing it through the headset, not through their phone equivalent or the lower-powered glasses.
But just as phones are powerful enough to do a lot of things, the glasses will eventually get there too. Now, there are a bunch of really hard technology problems to address to get to the point where you can put full holograms in the world. You're basically miniaturizing a supercomputer and putting it into a pair of glasses.
And the pair of glasses still has to look stylish and normal, which is a really hard technology problem; making things small is really hard. And a holographic display is different from what our industry has been optimized for over the 30 or 40 years we've been building screens.
There's a whole industrial process around screens that goes into phones and TVs and computers and, increasingly, so many things that have screens; there's a whole pipeline that's gotten very good at making that kind of screen.
Holographic displays are a completely different thing, because it's not a screen. It's a thing you can shoot light into, through a laser or some other kind of projector, and it can place that light as an object in the world. So there's going to need to be this whole other industrial process built up to do that in an efficient way. All that said, we're basically taking two different approaches toward building this at once.
One is, we try to keep in mind the long-term thing. It's not super far off; within a few years, I think we'll have something that's a first version of this full vision I'm talking about, and we have something working internally that we'll use as a dev kit. But that one is a big challenge: it's going to be more expensive, and it's harder to get all the pieces working.
The other approach has been, all right, let's start with what we know we can put into a pair of stylish sunglasses today and just make them as smart as we can.
For the first version, we did this collaboration with Ray-Ban, because those are well-accepted, well-designed glasses, classics that people have used for decades. We got a sensor on the front so you could capture moments without having to take your phone out of your pocket, so you got photos and videos. You have the speaker and the microphones, so you can listen to music.
You could communicate with it, but that was the first version; we had a lot of the basics there. Then we saw how people used it, and we tuned it. We made the camera about twice as good for this new version, and the audio is a lot crisper for the use cases we actually saw, which is partly listening to music, but a lot of it is that people want to take calls on their glasses and listen to podcasts.
The biggest thing that I think is interesting is the ability to get AI running on it. It doesn't just run on the glasses; it also proxies through your phone. But with all the advances in LLMs, which we talked about a bit in the first part of the conversation.
Having a Meta AI assistant that you can just talk to and ask basically any question throughout the day is, I think, going to be really fascinating. And, like you were saying, it matters how we process the world as people.
Eventually, I think you're going to want your AI assistant to be able to see what you see and hear what you hear. Maybe not all the time, but you're going to want to be able to tell it to go into a mode where it can see what you see and hear what you hear.
And what's the device design that best positions an AI assistant to see what you see and hear what you hear, so it can best help you? It's glasses, where there's basically a sensor that can see what you see and a microphone close to your ears that can hear what you hear.
The other design goal, like you said, is to keep you present in the world. I think one of the issues with phones is that they pull you away from what's physically happening around you, and I don't think the next generation of computing will do that.
I'm chuckling to myself because I have a friend, a very well-known photographer, and he was laughing about how people go to a concert and everyone's filming it on their phone so they can be the person who posted the thing, but there are literally millions of other people posting the exact same thing. Somehow it feels important to post our unique experience. Glasses would essentially
smooth that gap completely. Yeah, totally. You can just worry about it later and download it. There are issues, I realize, with glasses, because they are so seamless with everyday experience. Even though you and I aren't wearing them now, it's very common for people to wear glasses: issues of recording and consent. Yeah. Like, I go into a locker room at my gym,
and I'm assuming the people with glasses aren't filming, whereas right now there's a sharp transition when there's a phone in the room and someone's pointing it; people generally say no phones in the locker room, no recording. That's just one instance, and there are others. A privacy light? I don't know, did you get to explore that? Yeah. So anytime the camera sensor is active:
It's basically pulsing a bright white light. Got it. Which is, by the way, more than phone cameras do; phones aren't showing a bright light when you're taking a photo. No, people oftentimes will pretend they're texting when they're actually recording. I actually saw an instance of this in a barbershop once, where someone was recording while pretending to text, and a pretty intense interaction
ensued. It was like, wow, it's pretty easy for people to feign texting while actually recording. Yeah. So I think when you're evaluating risk with a new technology, the bar shouldn't be, is it possible to do anything bad? It's, does this
new technology make it easier to do something bad than what people already had? And because you have this privacy light that is broadcasting to everyone around you, hey, this thing is recording now, I think it's actually harder to record discreetly through the glasses than it already is with a phone.
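The design rule being described, that the recording indicator is tied directly to the camera state, can be expressed as a tiny software invariant. The class and method names below are hypothetical, written only to illustrate the idea; this is not Meta's actual firmware.

```python
# Tiny sketch of the invariant described above: the indicator light is
# driven from the same state change as the camera sensor, so there is no
# code path that records with the light off. Names are hypothetical.

class CaptureController:
    def __init__(self):
        self.camera_active = False
        self.privacy_light_on = False

    def set_camera(self, active: bool):
        # The light is slaved to the sensor state in exactly one place.
        self.camera_active = active
        self.privacy_light_on = active

    def start_recording(self):
        self.set_camera(True)

    def stop_recording(self):
        self.set_camera(False)

glasses = CaptureController()
glasses.start_recording()
assert glasses.privacy_light_on  # recording implies the light is on
```

The point of structuring it this way is that "camera on, light off" becomes unrepresentable, which is the property a bystander cares about.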
Which is basically the bar we wanted to clear from a design perspective. Thank you for pointing out that it has the privacy light; I didn't get long enough in the experience to explore all the features. But again, I can think of a lot of uses: being able to look at a restaurant from the outside and see the menu, get status on how crowded it is.
As much as I love, and I don't want to call out any particular products, let's just say app-based map functions that let you navigate, and the audio is okay, and it's nice to have a conversation with somebody on the phone or in the vehicle, it would just be great if the road were traced with where I should turn. Yeah, absolutely. These kinds of things seem like they're going to be straightforward for a Meta engineer to create.
A future version will also have the holographic display, where it can show you the directions. But I think there will just be different price points that pack different amounts of technology. The holographic display part is going to be more expensive than a version that just has the AI but communicates with you primarily through audio.
The current Ray-Ban Meta glasses are $299. When we have one that has a display in it, it'll probably be some amount more than that, but it'll also be more powerful. So I think people will choose what they want to use based on the capabilities they want and what they can afford. But a lot of our goal in building things is:
We try to make things that can be accessible to everyone. Our game as a company isn't to build things and then charge a premium price for them; we try to build things that everyone can use, and that then become more useful because a very large number of people are using them.
So it's just a very different approach. We're not like Apple or some of these companies that try to make something and then sell it for as much as they can. They're a great company, so I think that model is fine too. But our approach is going to be: we want stuff that's affordable, so that everyone in the world can use it.
Along the lines of health, I think the glasses will also potentially solve a major problem in a real way, which is the following. For both children and adults, it's very clear that viewing objects, in particular screens, up close for too many hours per day leads to myopia: literally a change in the length of the eyeball, and nearsightedness. On the positive side, we know, based on some really large clinical trials, that kids and adults who spend two hours a day or more out of doors
don't experience that, and may even reverse their myopia. It has something to do with exposure to sunlight, but it has a lot to do with viewing things at a distance greater than three or four feet away. And with the glasses, I realize, one could actually do digital work out of doors.
They could also just tell you how much time you've spent looking at things up close versus far away. This is just another example that leaps to mind, but in accessing the visual system, you're effectively accessing the whole brain, because the eyes are the only two bits of brain that sit outside the cranial vault. So it just seems like putting technology right at the level of the eyes, seeing what the eyes see, has got to be the best way to go.
Yeah, well, multimodal, right? You want the visual sensation, but you also want text or language. Sure, and I think that can all be brought to the level of the eyes. What do you mean? Well, what we're describing here is essentially taking the phone, the computer, and bringing it all to the level of the eyes. And of course, one would like more kinesthetic information too, as you mentioned before:
where the legs are, maybe even lung function, hey, have you taken enough steps today? All of that, if it can be figured out by the phone, can be figured out by glasses. But there's additional information there, such as what you're focusing on in your world, how much of your time is spent looking at things far away versus up close, how much social time you had today. It's really tricky to get that with a phone. If we were at a standard lunch nowadays, certainly in Silicon Valley, we'd be peering at our phones. How much real, direct
attention was in the conversation at hand versus somewhere else? You can get at where people are placing their attention by virtue of where they're placing their eyes, and that information is not accessible with a phone in your pocket or in front of you; maybe a little bit, but not nearly as rich and complete as what you get when you're really pulling the data from the level of vision, from what kids and adults are actually looking at and attending to. Yeah, it's extremely valuable.
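One way the near-versus-far viewing time mentioned here could, in principle, be measured is from binocular eye tracking: the two eyes' gaze directions converge more sharply on near objects. The sketch below is purely illustrative geometry under idealized assumptions (symmetric fixation straight ahead, a fixed interpupillary distance); real gaze estimation is much noisier, and the function names are invented.

```python
import math

# Sketch: estimate fixation distance from the vergence angle between the
# two eyes' gaze directions. Purely illustrative; not a product feature.

def fixation_distance_m(vergence_deg: float, ipd_m: float = 0.063) -> float:
    """Distance to the fixated point, from the vergence angle and the
    interpupillary distance, assuming symmetric straight-ahead fixation."""
    half = math.radians(vergence_deg) / 2.0
    return (ipd_m / 2.0) / math.tan(half)

def is_near_work(vergence_deg: float, threshold_m: float = 1.0) -> bool:
    # Classify a gaze sample as "near work", the kind of close viewing
    # relevant to the myopia discussion above.
    return fixation_distance_m(vergence_deg) < threshold_m

# A screen at roughly 40 cm produces a vergence angle of about 9 degrees
# for a 63 mm interpupillary distance:
near_sample = fixation_distance_m(9.0)  # about 0.4 m
```

Summing such samples over a day would give the up-close versus far-away viewing-time breakdown described in the conversation.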
You also get autonomic information, the size of the pupils, so you get information about internal states. There are internal sensors and external ones. The sensor on the Ray-Ban Meta glasses is external, so it basically allows the AI assistant to see what you're seeing. There's a separate set of technologies, eye tracking, which is also very powerful for enabling a lot of interfaces.
If you want to just look at something and select it by looking at it with your eyes, rather than having to drag a controller over or pick up a hologram, you can do that with eye tracking. That's a pretty profound and cool experience too, as well as just understanding what you're looking at, so that you're not wasting compute power drawing pixels at high resolution in a part of the world
that's just in your peripheral vision. So, all of these things involve interesting design and technology trade-offs. If you want the external sensor, that's one thing; if you also want eye tracking, that's a different set of sensors. Each one of these consumes compute, which consumes battery, and they take up more space. Where are the eye-tracking sensors going to go? You want to make sure that the rim of the glasses stays quite thin, because
there's a limit to how thick glasses can be before they look more like goggles than glasses. So there's this whole space, and I think people are going to end up choosing which product makes sense for them. Maybe they want something more powerful, with more of the sensors.
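The compute-saving idea mentioned above, not drawing full-resolution pixels in the periphery, is commonly called foveated rendering: shade tiles of the image at lower resolution the further they are from the tracked gaze point. The following is a hedged toy sketch; the angular thresholds and scale factors are invented for illustration and do not reflect any particular headset's settings.

```python
# Illustrative foveated-rendering sketch: spend full resolution only near
# the gaze point, less in the periphery. Thresholds are invented.

def shading_scale(tile_angle_deg: float) -> float:
    """Fraction of full linear resolution to spend on a tile, given the
    angle between the gaze direction and the tile center."""
    if tile_angle_deg <= 5.0:      # foveal region: full resolution
        return 1.0
    if tile_angle_deg <= 20.0:     # near periphery: half linear resolution
        return 0.5
    return 0.25                    # far periphery: quarter linear resolution

# Pixel cost relative to shading every tile at full resolution
# (pixel count scales with the square of linear resolution):
tiles = [2.0, 10.0, 30.0, 45.0]   # angular offsets of four example tiles
cost = sum(shading_scale(a) ** 2 for a in tiles)  # vs. 4.0 uniform
```

Even this crude scheme shades the four example tiles at about a third of the uniform pixel cost, which is where the battery and compute savings come from.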
But that's going to be a little more expensive, and maybe slightly thicker. Or maybe they want a more basic thing that looks very similar to the Ray-Ban glasses people have been wearing for decades, but has AI in it, and lets you capture moments without having to take your phone out and send them to people. In the latest version, we added the ability to live stream.
If you're doing sports, or just watching your kids play something, you can be watching and live streaming it to your family group so people can see it. I think that stuff is pretty cool: you basically have a normal-looking pair of glasses at this point that can live stream and has an AI assistant. So this is making much faster progress, in a lot of ways, than I would have thought.
And I think people are going to like this version, but there's a lot more still to do. I think it's super exciting. I see a lot of technologies, and this one is particularly exciting to me because of how smooth the interface is, and for all the reasons you just mentioned. What's happening with, and what can we expect around, AI interfaces, and maybe even avatars of people within social media? Are we not far off from a day when there are multiple versions of me
and you on the internet? Where, for instance, I get asked a lot of questions and don't have the opportunity to respond to all of them, but with things like ChatGPT, people are trying to generate answers to those questions on other platforms. Will I soon have the opportunity to have an AI version of myself, where people can ask questions about what I recommend for sleep, circadian rhythm, fitness, mental health, et cetera, based on content I've already generated, so it will be accurate? They could just ask my avatar.
Yeah, this is something that I think a lot of creators are going to want, that we're trying to build, and I think we'll probably have a version of it next year. But there are a bunch of constraints that we need to get right.
For one, I think it's really important that there isn't just a bunch of versions of you; if anyone is creating an AI assistant version of you, it should be something that you control. There are some platforms out there today that just let people make, I don't know, an AI bot of me or of other figures, and it's like, I don't know. We have had platform policies
since the beginning of the company, which is almost 20 years at this point, that basically don't allow impersonation. Real identity is one of the core aspects our company was started on: you want to authentically be yourself. So if you're almost any creator, there's just going to be more demand
to interact with you than you have hours in the day. So there are people out there who would benefit from being able to talk to an AI version of you, and I think you and other creators would benefit from being able to keep your community engaged and service that demand. But you're going to want to know that that AI version of you, or assistant, is going to represent you the way that you would want.
There are a lot of things that are awesome about these modern LLMs, but having perfect predictability about how one is going to represent something is not among their current strengths. So there's some work that needs to get done there. I don't think it needs to be 100 percent perfect all of the time, but you need to have very good confidence that it's going to represent you the way you'd want, for you to want to turn it on, and again, you should have control over whether you turn it on. So we wanted to start in a different place.
which I think is a somewhat easier problem: creating new characters as AI personas, so they're not modeled on real people. We built one AI that's like a chef, and it can help you come up with things you could cook and help you cook them. There are a couple that are focused on different types of fitness and can help you plan out your workouts, or help with recovery, or different things like that.
There's an AI that's focused on DIY crafts. There's one that's a travel expert that can help you make travel plans or give you ideas. But the key thing about all of these is that they're not modeled off of existing people, so they don't have to have one hundred percent
fidelity to making sure that they never say something that the real person they're modeled after would never say, because they're just made-up characters. So it's a much easier problem. We actually got a bunch of well-known people to play those characters, because we thought that would make it more fun. Snoop Dogg, for example, is the dungeon master, so you can drop him into a thread and play
text-based games. I do this with my daughter when I'm tucking her in at night, and she just loves it. It's storytelling, right? Snoop Dogg as the dungeon master will come up with, "here's what's happening next," and she's like, "okay, I turn into a mermaid, and then I swim across the bay, and I go and find the treasure chest and unlock it." And then Snoop Dogg will always have the next iteration of the story. So that stuff is fun.
But it's not actually Snoop Dogg; he's just the actor playing the dungeon master, which makes it more fun. So I think that's the right place to start: you can build versions of these characters that people can interact with for different things.
But I think where you want to get over time is to the place where any creator or any small business can very easily create an AI assistant that can represent them and interact with their community, or with their customers if they're a business, and basically just help them grow their enterprise. I think that's going to be cool, but it's a long-term project. I think we'll have more progress to report on next year, but that's coming.
I'm super excited about that, because we hear a lot about the downsides of AI. I think people are now coming around to the reality that AI is neither good nor bad; it can be used for good or bad, and there are a lot of life-enhancing spaces where it's going to show up and really, really improve the way that we engage socially and what we learn,
and that mental health and physical health don't have to suffer, and in fact can be enhanced by the sorts of technologies we've been talking about. I know you're extremely busy, so I so appreciate the large amount of time you've given me today to sort through all these things, and to talk with you and Priscilla and to hear what's happening and where things are headed.
The future certainly is bright. I share in your optimism, and it's only been strengthened by today's conversation. So thank you so much, and keep doing what you're doing. And on behalf of myself and everyone listening, thank you, because regardless of what people say, we all use these platforms excitedly, and it's clear that there's a ton of intention and care and thought about what could be, in the positive sense.
And that's really worth highlighting.

Awesome, thank you. I appreciate it.
Thank you for joining me for today's discussion with Mark Zuckerberg and Dr. Priscilla Chan. If you're learning from and/or enjoying this podcast, please subscribe to our YouTube channel; that's a terrific zero-cost way to support us. In addition, please subscribe to the podcast on both Spotify and Apple, and on both Spotify and Apple you can leave us up to a five-star review. Please also check out the sponsors mentioned at the beginning and throughout today's episode; that's the best way to support this podcast.
If you have questions for me, or comments about the podcast, or guests that you'd like me to consider hosting on the Huberman Lab podcast, please put those in the comment section on YouTube. I do read all the comments. Not during today's episode, but on many previous episodes of the Huberman Lab podcast, we discuss supplements. While supplements aren't necessary for everybody, many people derive tremendous benefit from them for things like enhancing sleep, hormone support, and improving focus. If you'd like to learn more about the supplements discussed on the Huberman Lab podcast, you can go to Live Momentous,
spelled O-U-S, so livemomentous.com/huberman. If you're not already following me on social media, it's Huberman Lab on all social media platforms. That's Instagram, Twitter (now called X), Threads, Facebook, and LinkedIn, and on all those platforms I discuss science and science-related tools, some of which overlap with the content of the Huberman Lab podcast, but much of which is distinct from the content on the Huberman Lab podcast. So again, it's Huberman Lab on all social media platforms. If you haven't already subscribed to our monthly Neural Network Newsletter,
the Neural Network Newsletter is a completely zero-cost newsletter that gives you podcast summaries as well as toolkits in the form of brief PDFs. We've had toolkits related to optimizing sleep, regulating dopamine, deliberate cold exposure, fitness, mental health, learning, and neuroplasticity, and much more. Again, it's completely zero cost to sign up. You simply go to hubermanlab.com, go over to the menu tab, scroll down to Newsletter, and supply your email. I should emphasize that we do not share your email with anybody.
Thank you once again for joining me for today's discussion with Mark Zuckerberg and Dr. Priscilla Chan. And last, but certainly not least, thank you for your interest in science.