
#12: Deep Prasad - How to Build a Physics-Fluent AI System

May 29, 2024 · 1 hr 7 min · Season 1 · Ep. 12

Episode description

Brought to you by NordVPN. EXCLUSIVE NordVPN Deal ➼ https://nordvpn.com/firstprinciples. Try it risk-free now with a 30-day money-back guarantee!

======

Episode 12: Discovering new materials is how we unlock new branches of the tech tree. But there is no way for the world’s most powerful AI tools to help us do that discovery – until now. 


======


(00:00) - Intro

(01:08) - GenMat's Goal: Building an AI System Smarter Than Humans at Physics

(02:17) - Why It's Hard for AI to Understand Physics Like Humans Do

(05:39) - How GenMat Models Atomic Level Activity to Optimize Materials

(09:35) - The Trial and Error Process of Traditional Materials Discovery

(11:28) - Materials Science 101: What Makes a Material and Its Key Properties

(15:00) - Perovskites: A Promising and Complex Material Family

(17:37) - How GenMat Helps Labs Discover and Optimize New Materials

(19:43) - The Challenges of Data, Representations and Feedback in Physics AI

(23:18) - Why GenMat Chose a Broad vs Narrow Focus Across Materials Domains

(25:32) - Expanding Into Geophysics to Find Critical Materials Using Satellites

(28:48) - The Experience of Launching GenMat's First Satellite with SpaceX

(30:58) - Overlap Between GenMat's Materials and Geophysics AI Models

(32:42) - GenMat's Progress: Team Size, Funding, Patents, Customers, Results

(35:46) - The Future 'Yug-Cyberia' Civilization Enabled by Advanced Physics AI

(38:54) - How Materials Breakthroughs Can Accelerate Mars and Space Exploration

(42:10) - The Difficulty of Communicating Complex Technical Work to Others

(44:48) - Breaking Down GenMat's AI Techniques: CDVAEs Explained

(48:55) - Why GenMat Uses Classical Deep Learning vs Quantum Computing Today

(51:37) - The Origins of GenMat: From Quantum Deep Learning to Classical AI

(54:02) - How GenMat Gets People Excited About the Potential of Materials

(56:11) - Rapid Fire Questions: Focus vs Breadth, Challenges, Surprises

(59:40) - Mapping Out a Future with Intelligent Matter and Interstellar Travel

(01:03:50) - The Meaning and Origins of the Term 'Yug-Cyberia'

(01:05:15) - Conclusion: Advanced Physics AI Can Get Us to Mars and Beyond


======


Links:


======


Production and marketing by The Deep View (https://thedeepview.co). For inquiries about sponsoring this podcast, email [email protected]

======


Check out the video version here → http://tinyurl.com/4fh497n9

🔔 Follow to stay updated with new uploads

Transcript

Intro

Hey, everybody. Welcome back to First Principles. We are here today with Deep Prasad, who is the CEO and founder of GenMat, which is a uniquely fascinating company, and I think maybe the perfect company for First Principles, because there are so many places that we could take this conversation. I'll let Deep describe what the company is actually building, but just to give you a glimpse, it's going to be everything from material

science to AI to quantum computing. We could take this conversation literally anywhere, even literally space, which you'll find out how that's possible in just a second. Deep, thank you so much for joining. I'm really happy to have you. Why don't you tell us

GenMat's Goal: Building an AI System Smarter Than Humans at Physics

Yeah, thank you so much, Christian. Pleasure to be here. And it's always a fun time chatting with you. And I'm looking forward to talking about what we're doing at GenMat. So a quick little breakdown on GenMat. Our goal is very simply stated, but very hard to do. The goal is to build an AI system that is smarter than humans at physics.

That's it. So how do we basically create something that can, let's say, discover the next generation of physics theories that advance what we did with quantum mechanics and general relativity in the early 1900s? How do we build really advanced physics simulation systems that can essentially simulate these very hard regimes, like the hypersonic regimes and atomic oxygen environments in space, for example, which require combining many different physics modeling

techniques. And so at GenMat, we work on the sort of intersection of physics and machine learning. That's a perfect intro. So basically you're building a physics-understanding AI, like an AI that knows the laws of the universe

Why It's Hard for AI to Understand Physics Like Humans Do

and basically can tell you what to expect if you give it inputs and then something comes out the other end, basically, right? That's exactly it. I think it's fascinating to think that AI wouldn't just naturally know that, right? Like wouldn't, okay, so AI is super smart. AI is now over 100 IQ. So how, like, why is it difficult to just program in F equals ma and E equals mc squared? Like shouldn't, like,

I don't know. I'm obviously just lobbing you a little softball, but like, why? It used to be a very philosophical question before we made so many advancements in these large world models, before we could actually figure out how far we can push, you know, silicon-transistor-based, let's say, large language models and their understanding of the physical world. And so the reason why it's so hard is, A, because we have embodiment. But let

me take a step back before I go deeper into that. Embodiment kind of covers three parts of human intelligence, if you will, the general intelligence that actually affects our ability to understand the physical world. And those are the three sort of components that I think separate us from, let's say, my dog, right? Who's, like, obviously brilliant in her own way, but certainly not making rockets or, you know, working with satellites anytime

soon like you are. So our, you know, our team is. So the difference is that humans have these three components. The first is the sensory apparatus: up to 50% of our brain can be dedicated towards visual processing of information. So it's not that our eyes are as good as a hawk's, right? It's the signal processing that we do, the computational processing, that

makes it powerful. And then there's the second component, which is our ability to imagine, or creativity; the ability, more precisely, to simulate physical systems that have yet to be tampered with. So take, for example, Einstein's thought experiments when he was discovering relativity, right? This guy's literally thinking about trains and lightning and then using that to derive, like, special, right, and general relativity principles. And so we're, like, you know, dreaming how one does as

they traverse through the universe as a photon. All of these things require the ability to simulate physical systems, and the ability to manage and store abstract concepts in memory. And these representations of the world that are physically accurate and useful are what's so hard to build even now. Today's large language models don't come close, actually, when you come to think about it. And then the third component, beyond sensing and simulation, is the engineering and

manipulation of matter. So when you have a system that's capable of sensing, of simulating the physical world and manipulating the physical world, you should have something

as smart as a human or smarter. But because today's large language models don't have any of those things in a really cohesive framework, yeah, we haven't really been able to crack, you know, just F equals ma. A four-year-old child, for example, will know more about physics, to Yann LeCun's point, right, than, like, ChatGPT-4 does, or five, or six. Why? Because when that child, let's say, jumps from a four- or five-foot-tall ledge, like something really freaking high

for the kid, right? Like, imagine when we did that when we

How GenMat Models Atomic Level Activity to Optimize Materials

were young, right, and we were troublemakers. What would we do? Well, you would jump from a high place and you bend your knees, right? And why do you bend your knees? Because the equation for power is energy dissipated divided by time. So if you can increase the denominator, the amount of time, right, you're reducing the amount of power or shock that your body is experiencing. So we don't have these explicit equations that

our brain is solving when we jump. our muscle memories and cells and neurons that compute have learned the right sort of representations of physics. And so all that is So it's the sensing, the simulation, and the manipulation of matter. Those are the three things. I mean, intuitively to me, computers in general are pretty good at those things, right? Like sensing, we can definitely make cameras that are better than my eyes. And for whatever, for simulation, we have Monte Carlo sims

that can run millions of possible futures, and my puny brain could never handle that. So I don't know, it seems to me that... why aren't computers better at those things? They're not better at those things today because they're not integrated like we are. So we're an integrated sensing, simulation, and manipulation-of-matter system. However, if you take a, let's say, sensor in a Tesla FSD, right, what's stopping that FSD from visualizing the warping of space-time under

different metric tensors? It doesn't have the physics simulation-based models, right, to do those things. But those are the things that we work on. So hopefully one day, in theory, right? Maybe your Tesla can think about relativity because of us. And then there's the manipulation-of-matter piece. So, you know, the Tesla in this case is moving around, you know, you're trusting it, but it's not going to go and build, like, an arc reactor

in a cave somewhere, right? To go and test its physics theories that I can't think of right now. So yeah, integration. And so our brain has had, right, like, billions of years to really integrate all these systems and evolve the integration of the systems. So that's the edge we have currently, but we'll see how long that lasts. Presumably the fundamental constraint here is sort of twofold. There's, like, one is finding ways to represent the laws of physics to a computer

in a way that a computer can understand it. I mean, I'm sure at some point you do literally plug in, like, F equals ma or E equals mc squared or something. That's probably part one. And then part two, I would assume, is just you have to get a ton of data. Like, you have to feed in, like, I don't know, what happens when you drop a ball? Or what happens when you, like, jump off something? Like, you probably have to feed in examples of physics. And

so I wonder, like, what does that even look like? What does physics data look like? And how do you capture it and feed it in? So I don't know, we could take those two halves in turn. I'd love to, like, yeah, look at what the physics data looks like and the difficulties there. And really, it's a combination of both. Our equivalent of F equals ma, of dropping a ball and seeing how it bounces given the friction coefficients and so on, is

we started with the atomic world first. There's a reason for that, and I can get into it if you like at any time, but we wanted to build our world models, our physical world models in our AI systems, truly from the ground up, from base reality up. Because that's the best way to predict everything else, right, is if you can model the underlying crazy atomic world, since that dictates everything. And so for us, that data

of, like, what does a different basketball under different scenarios look like? A real-life example of that is our titanium dioxide photocatalyst system. So we recently put out a preprint, for example, where we created the following setup. We have these titanium dioxide nanoparticles. They're 10 to 25 nanometers across. And on top of those nanoparticles, we deposit copper and

The Trial and Error Process of Traditional Materials Discovery

platinum nanoparticles or nanoclusters. And these sort of, like, nanoparticles are just globs, right, if you want to think about them; more like amorphous globs of copper and platinum. And the way that their configurations end up completely dictates where the photocatalytic activity will be. And the photocatalytic activity is what we care about, because this system absorbs carbon dioxide, water, and sunlight and turns it directly into natural gas in a very clean reaction.
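For reference, the balanced overall reaction being described here, photocatalytic CO2 reduction with water as the electron donor, can be written out. This stoichiometry is my reconstruction rather than something stated in the episode, and the oxygen on the right is the benign byproduct:

```latex
\mathrm{CO_2} + 2\,\mathrm{H_2O}
  \;\xrightarrow{\;h\nu,\ \mathrm{TiO_2/Cu\text{-}Pt}\;}\;
  \mathrm{CH_4} + 2\,\mathrm{O_2}
```

Both sides balance: one carbon, four hydrogens, and four oxygens on each side.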

There's no waste products; all you get is literally methane out at the other end of it. So what we've done over the past few years is taken this system, and we've looked at optimizing the nanoparticles and determining what configuration of the copper and platinum atoms would be most conducive to sucking in more CO2, basically. How

do you accelerate that? And so traversing that combinatorial search space of configurations gets hard very quickly; the number of configurations goes up exponentially, essentially, and factorially, when you take into account that you have three dimensions. So, one, sort of, just take the nanoparticle system that's sitting on top of the TiO2, right? There's a combinatorial system where the copper and platinum atoms have different permutations that they can be in. And the more of those atoms you introduce, the

more different configurations they can all fall under. And on top of that, you can introduce different oxygen vacancies on the surface of the TiO2. When you pluck out the oxygen atoms on the surface of the TiO2, you also change the catalytic activity and what's happening at the interface. And the interface between the TiO2 and the nanoparticle now is this really rich area of super complex chemical activity that you can optimize to do something useful.
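A toy sketch of why this search space blows up. The site and atom counts below are made up for illustration, not taken from GenMat's actual systems:

```python
from math import comb

# Count distinct ways to decorate N surface sites with copper and
# platinum atoms (the remaining sites stay empty).  This ignores
# symmetry and 3D geometry, so it understates the real problem.
def n_configs(sites: int, n_cu: int, n_pt: int) -> int:
    # Choose positions for the Cu atoms, then for the Pt atoms
    # among the sites left over.
    return comb(sites, n_cu) * comb(sites - n_cu, n_pt)

for sites in (9, 18, 27):
    # Fill a third of the sites with Cu and a third with Pt.
    print(sites, n_configs(sites, sites // 3, sites // 3))
# 9  -> 1,680
# 18 -> 17,153,136
# 27 -> ~2.3e11
```

Even at a few dozen surface sites the count is already in the hundreds of billions, before oxygen vacancies add another combinatorial factor on top.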

Materials Science 101: What Makes a Material and Its Key Properties

so it's being able to model those things, creating data sets that represent all these different systems from first principles, and then validating our predictions in the labs, that is where we started. And then over the past year and a half, we expanded into geophysics, which is where our satellite comes in. Yeah, wow. Okay, now I'm definitely convinced that computers are way better

than me at this. I got a little bit of that, a little bit, but I mean, what I really take away from that is basically you are trying to model, in super, super detailed granularity, literal atomic-level activity. You're saying, like, what happens when you put titanium and oxygen together with copper globs? What ways can the globs be oriented? And

anything you change, you change the material properties, by the way. So good luck doing that for millions and billions of configurations, just for a tiny, small material. So you're simulating to try to figure out which things are worthy of testing. And then once you have, like, a candidate list of stuff, then you go and, like, test it and actually, like, measure the material properties. Is that right?

So how does it work today? Like, if I'm a material scientist today and I'm trying to, like, come up with the next, you know, room-temperature superconductor, like, what do I actually do if I don't have a physics AI? Yeah, absolutely. So right now, it's very much trial and error. Literally,

you're mixing shit. That's it. It sounds super complicated, this kind of synthesis and characterization, but that's just what dudes in the lab, right, and girls and so on, are doing: mixing different chemicals and solvents to synthesize different compounds. And they're doing it based off of decades of experience and intuition, and very, very, very slow, archaic computational chemistry tools that are written in Fortran. So that's the art

of material science today. And that's what powers the semiconductor industry, the aerospace industry, the battery industry. All these industries are powered by it. I know about the discovery of some materials. I know that, I don't know, back when we discovered bronze, it was literally just a mistake. Someone accidentally dropped some tin or something in copper, and they're like, oh shit, this is actually harder. This is actually better. And this happened with

penicillin. I don't know. There's rubber. I think that's literally how vulcanized rubber was made. It was an accident. So it makes sense that people are just making mistakes in labs, and then every once in a while it turns out a super material. So

Yeah, absolutely. In fact, even when you have, let's say, experts with some intuition of what they're doing, like some idea of what they're doing, right, it's not just a straight-up lab accident that they're banking on, like perhaps the potential LK-99 submitters were. Even then, it takes a decade, or many decades, to derive

the material. So if you take the first high-temperature superconductor that was discovered, what's awesome is that it was discovered by a private company, IBM, right, it was discovered by IBM scientists. And when they discovered it, there's a lot of history to that. So it was the world's first cuprates, right? These copper-based, essentially ceramic-based superconducting materials,

which in itself was a profound sort of discovery, right? That, oh, wow, you no longer need just a metal or perfect conductor or a normal conductor to become a perfect conductor. It can literally just be this weird material, right? And so that

Perovskites: A Promising and Complex Material Family

wasn't by accident, however. So Bednorz and Müller had spent the past sort of decade, particularly Müller, really just focusing on the crystal structure of what would be expected, right, of these cuprates and of high-temperature superconductivity. So they were looking at all these other crystal structures that were precursors to the final cuprate crystal structure, the arrangement of atoms that's, like, essentially the fingerprint for that particular material. So that family of fingerprints they

had been looking at for so long. And so finally they

like, stumbled upon this, but that's what they were looking for, right? And one of the key ingredients, funnily enough, was access to some of the first, like, really large-scale density functional theory calculations, large-scale for their time, in the late '70s and '80s. So that's pretty awesome too, right? It's a combination of computational chemistry and real experiment and, you know, maverick science coming together in a private industry setup. So we should be trying to do that again.

Today's episode is brought to you by NordVPN. Now there are lots of reasons why you'd want to preserve your privacy and secure your online browsing. There's the normal reasons like not wanting to get hacked, which is pretty essential for deep tech companies like the ones featured on this show. But there are also some other ways that VPNs help you that are less known.

Like you can get deals that are only available in certain countries and you can get access to shows and other media that are again, country specific. And if you're going to get a VPN, use NordVPN. They're the fastest on the market today. To get the best discount on your NordVPN plan, go to nordvpn.com slash first principles. So you'll add on four extra months to a two year plan and

there's no risk. There's a 30 day money back guarantee. So check out that link, which is in the description of this episode or go to nordvpn.com slash first principles. Let's get back to the show. So maybe let's take a step back and just talk about materials. Like, what is material science? And what are the properties of materials? Like, I imagine what you're searching for are specific properties. Like, you want to find materials that are really conductive, or like super conductive. You

want to find materials that are extremely hard. You want to find materials that are whatever, like a variety of different things. So do you mind just telling us at a super high level, like, what are materials? And why do they matter? And what makes them good or bad? Happily. Happily. I love that question. So materials are arrangements of atoms that repeat at

a scale that's relevant to you. That's how I would define it. So you wouldn't care about like three, four or five groups of atoms, right? Or like five atoms. But you care if you have trillions

How GenMat Helps Labs Discover and Optimize New Materials

of atoms that intricately dance with each other and represent the next generation of semiconductors, right? That are more power efficient. You care when it's carrying you to the moon and to Mars. That's when it matters. So that's what

I would define a material. And so you can have many, many different kinds of material properties that you're interested in, right, all the way from conductive properties, like you were mentioning, like electrical properties, thermodynamic properties, structural properties, those are really common as well, right? So within those, you have a very wide range of trade-offs. Every

material has its own trade-offs for its properties. For example, you might be interested in looking for a transparent solar cell, like a truly invisible solar cell. We're about 70% of the way there last time I checked. We can create about 70% translucent solar cells. That field of research is largely being enabled by perovskites, which in itself is a fascinating material, right? So in material science, let's go back to the like 30s

and 40s when perovskites were first discovered. There were these interesting rocks with these interesting piezoelectric, you know, electric qualities to them, right? People were rubbing them together and seeing that they create charge if you hit them against each other. And they're, like, hitting other rocks like that. And then we find out that about 70% of the Earth's crust, or a very, very large percentage, has these perovskite materials. It's largely composed of

perovskites all the way down, instead of turtles. And so in the 1960s, we have about 60 research papers, let's just say, per year on the subject of perovskites. And I'll tell you what they are in a second. And now today, we easily have 80 to 100 research papers published per day in that field. It's the most famous, I would say, after superconductors, or maybe even on par. It really has become the most famous material that people actually

The Challenges of Data, Representations and Feedback in Physics AI

put resources towards in practice. And there's a reason for that, because perovskites are these... They have an ABX3, so a cation, cation, and anion-like configuration. Basically, the A and B are cations and the X3 is the anion. The anions create this octahedral pattern when you repeat the atoms, right, and you look at the fingerprints and

what they look like at scale, they create these octahedra. And when you tilt the octahedra, meaning when you change the bond lengths, when you change the chemical composition, you tilt the octahedra, or you displace the cation in the center of the cubic lattice, it changes the properties completely. So just displacing the cation, if you want to visualize it: you have the cation, and then you have an octahedron covering the cation.
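As an aside, the "3" in ABX3 just falls out of counting shared sites in the ideal cubic unit cell; this is a standard crystallography exercise, not something from the episode:

```python
# Fractional site counting in the ideal cubic ABX3 perovskite cell.
# Atoms sitting on shared cell boundaries count fractionally toward
# any one cell: corner sites 1/8, face sites 1/2, body centre 1.
a_per_cell = 8 * (1 / 8)  # A atoms at the 8 cube corners
b_per_cell = 1            # B atom at the body centre
x_per_cell = 6 * (1 / 2)  # X atoms at the 6 face centres
print(a_per_cell, b_per_cell, x_per_cell)  # 1:1:3 ratio
```

One A, one B, and three X per cell, hence the ABX3 formula, with the six X sites forming the octahedron around the central B atom.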

So if I tilt, axial tilt, this octahedron across X, Y, or Z, just one tilt, even a little bit, changes the properties. A little bit more, changes the properties. How are you possibly going to predict all those things? But anyways, right? So that's why you have so many papers, because it's a straight-up surprise. So these materials that are in solar cells, basically, we've known about them forever. So what do you mean that

they're in all of the Earth's crust, that they're in everything? Yeah, so those magnesium-based perovskites are largely what the, I believe, Earth's mantle or Earth's crust is. Oh, OK. So that's one example, right, where it's literally a large perovskite-based mineral. It's the name of the geologist, slash... And the class of things, like, the shape of the thing defines it, if it has that... Each of the anions creating the octahedra defines it. Wow, cool. It's just an infinite family

of materials that it represents. And so the cool thing is that these near-invisible solar cells are based off of the perovskites. The first piezoelectrics were perovskite-based and are mostly, largely perovskite-based. Currently, for example, in our labs, we work with perovskites for lithium-ion-based solid-state electrolytes. How do you find a much... To tie this back to what you're doing as a company, let's stick on this

example. Let's keep talking about this new material. Would a lab come to you and say, hey, I have this idea for what I think we could do with the orientation of the thing, and can you please test it for us? Or, like, are you trying to discover your own ones yourself? Or maybe both? Like, can you just make it tangible, what you're doing? Happily, happily. So we're kind of doing both. We're dogfooding our own products and technologies for what I call ubiquitous materials.

So we don't care about creating, let's say, a material for a better, lighter, stronger tennis racket, or, like, a better golf club, right? That, like, we don't care about; somebody else can commercialize that. We're happy to help them discover that, right? No problem. For us, we really want to focus on these materials that will change and affect all 8 billion lives. Like, that's our metric. So, for example,

Why GenMat Chose a Broad vs Narrow Focus Across Materials Domains

what does that look like? A truly 100% or 99% invisible glass panel, right, that's also a solar cell, that's totally up our alley. We're definitely interested in that. A lithium-ion battery that's, let's say, an order of magnitude safer and more energy-dense than what's available on the market, that Tesla can deploy. We are happy to help find that and work on that. But then, yeah, so we want to kind of do both of these things. And I think that there's a

world, right? The world of materials is large enough that there's never going to be enough players making their own materials, making their own supply chains and processes to scale those materials. And so that's why we think that, you know, we just need to keep contributing like crazy to the acceleration of these materials, one way or another, whether it's us or helping other people do it. Our business model is that we offer one product and two services around

the materials discovery side. And then I can get into our geophysics-based models, because I mentioned we expanded recently into another domain of physics, right? But so for materials work, we have one product, and that product is a series of physics-informed machine learning models that model crystal structures and chemical compositions. And they're useful for a variety of tasks. We have some models that are really, really good at predicting the properties. So let's say you change these

parameters, the axial tilt, the anions, and so on. What can we expect? We have models for that. We also have models that look at what happens when you have a very complex interface, like the titanium dioxide example that I was talking about, where the interatomic potentials are very complex and you need to discover a representation for it. And then we have generative models for the inverse design task. So that's for when a customer, and this is what we're finding most people are interested

in, is particularly like a much larger market. We also have models for the inverse design task or capabilities where you give us target material properties, and we will discover, right, and we will figure out and generatively dream up

Expanding Into Geophysics to Find Critical Materials Using Satellites

Are there certain properties that are fundamental enough for you to model, and other properties that are sort of, like, emergent and you can't model? Yeah, and when I say can't model, I'll be specific: can't model with today's physics AI technology. Like, it's constantly an active field of research for us to be able to, you know, improve across these multi-scale and multi-physics domains.

Yeah, absolutely. So fundamental ones like the band gap, charge density, local density of states, electrical properties like conductivity, both DC and AC. We can also model ionic conductivity if you're looking at battery materials. Formation energies as well are a really, really critical fundamental property. So yeah, we can model these fundamental properties, which already give you a lot of information about whether this material is going to be useful for your particular engineering task at

hand. So, you know, let's say you're trying to find, like, a really efficient solar cell that works really well in atomic oxygen environments in space; that would be a set of material properties for us, right? Charging properties that we would then just, like, go run our models against. It's one of the directions that we're kind of going. We are trying to elevate the aerospace, you know, in-space industry. So those are fundamental properties that apply to a

wide variety of materials. and material families, from transition metal oxides to, let's say ceramics, right, you're looking at formation energy and so on and so forth, battery materials, solar cells, you get the point. But there are also measured

properties that you asked about, and I think that's a brilliant question. Because there are emergent material properties that require deep quantum computing and quantum-computing-related technologies and thermodynamic-computing-related technologies to exist. And so that's what we can't do today. For example, modeling high-temperature superconductivity, right, is an unsolved problem, because it requires being able to capture really strongly correlated

electron systems and many body systems. And it's so computationally intractable, even for the most powerful silicon semiconductors

based AI models today. So that's where we're constantly, like, pushing up against the physics barrier, if you will. And so today you're just doing this all with classical computing, or do you use... We use quantum computers only as a test bed, to test, like, how far... The reason I asked that, though, is because I know that at one point in time, you and Guillaume, like, the guy who's the CEO of Extropic, people might know him as Beff Jezos, we did an episode on him

a little while back. I know that you guys knew each other back in your university days and you were interested at the time in quantum deep learning. Was there a time when you were founding this company and you thought it was going to be a quantum company, and then it ended up just being classical? Or is it just, like, you know, someday you... Jay and I, we both, like, determined, right, that, okay, quantum

The Experience of Launching GenMat's First Satellite with SpaceX

hardware right now is not going to make it for quantum chemistry tasks. So, like, one of the things he worked on was, even now, apparently, the world's most powerful quantum computing simulator, right, based on benchmarks. So, you know, he built that at Google X, in the secret division that we built. And so a lot of that, some of that, was born, right, out of this acknowledgement that him and I had on the sort of limitations of being able to do quantum chemistry tasks

today. That simulator, even today, and the concept of using simulators and decomposing problems across simulators, remains an active field of research that we're pursuing, where we've recently been able to perform certain simulations, calculate certain material properties, thousands of times faster using classical hardware, but using an entirely different paradigm of quantum probabilistic algorithms. And so those quantum probabilistic algorithms will natively work on quantum

computers once they come online. That's one of the benefits of writing code like this: you're already getting benefits from this quantum computing simulator that's giving you useful chemistry calculations, and on top of that, you're ready so that when the hardware comes along, it's literally almost a back-end switch, right? Like, you're just swapping the device driver to a quantum computing device. Now, that's one aspect of

our strategy. But the other part, this is the part that, yeah, I'll probably only say a few times, I don't know, just because it brings a lot of... Not to overstate it, but we're also deeply interested in accelerating quantum computing. I'm no longer interested in being an observer. A large part of quantum

computing is material science-based, like the constraints. And so... We talked about a bunch of things, we talked about the problem, how you're approaching it, why it's important to solve, but we haven't yet talked about why AI is the right tool. So I'm curious, when you were trying to do this cool materials science discovery... So the reason why I chose AI is because it's ultimately one of the best pattern recognition, if not the best pattern recognition


device we've ever created. That's really all it comes down to. Nature has really, really deep, complex, hard-to-penetrate patterns that show up between all of these crystal structures, between all the electrons and their bond lengths and the properties. There are patterns that are meaningful in a mathematical and chemical and physical nature. And so by teaching a superintelligence those patterns, I think we have a higher probability of succeeding than

Can you talk more about the specific techniques that you're using? Is it deep learning? Is it some other thing? What... So we use two things. 99%, or 95%, if you will, is classical deep learning. For example, we use something called CDVAEs: crystal diffusion variational autoencoders. This is one class of models that's really interesting to us, right? There are some open-source CDVAEs out there, for example, that you can

play with and see exactly what we mean. So that's taking the best of diffusion-based methods and variational autoencoders... I know what neither of those things are, so that's, like, 0% helpful to me. Let's talk about it, though. Let's get into it. So let's set a goal: Christian has to

understand what a CDVAE is by the end of, like, five minutes. So how would you describe what a diffusion model is? Alright, let's start with the variational autoencoder first, the VAE. A VAE takes some input, learns a compressed latent representation of


that learned input, and then it has a decoder. So the autoencoder is encoding the input into some latent compressed representation, and the decoder has to get really good at taking that latent compression and decompressing it back to the original input, right?
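The compress-then-decompress loop described here can be sketched in a few lines of Python. This is a toy with random, untrained weights, just to show the shapes; the 12-to-3 layer sizes are made up for illustration, not GenMat's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy autoencoder shapes: 12-dimensional input -> 3-dimensional latent code.
W_enc = rng.normal(size=(12, 3)) * 0.1   # encoder weights (untrained)
W_dec = rng.normal(size=(3, 12)) * 0.1   # decoder weights (untrained)

def encode(x):
    """Compress the input into a small latent vector."""
    return np.tanh(x @ W_enc)

def decode(z):
    """Decompress the latent vector back toward the original input."""
    return z @ W_dec

x = rng.normal(size=(1, 12))   # stand-in for one "crystal fingerprint"
z = encode(x)                  # latent representation: 3 inscrutable numbers
x_hat = decode(z)              # reconstruction the decoder is trained to improve

print(z.shape, x_hat.shape)    # (1, 3) (1, 12)
```

Training would minimize reconstruction error plus a KL term that keeps the latent space smooth; that extra term is the "variational" part of a VAE.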

And so the autoencoder has to get really, really efficient at compressing different inputs into smaller, more useful, more information-rich latent representations of the... And latent, latent meaning, in this case, that it's not like a literal one-to-one, I see red, therefore I put red in a database somewhere. It's latent meaning that it's just some number, and kind of inscrutable. Like those... Right. Bingo. That's exactly it. So for example, it's

like what you'd see in your occipital cortex as visual data gets processed, right? If you could see that, it would look almost psychedelic, right? Instead of, you know, two eyes and a face, it would be, like... Yeah, totally. If you could peer into the brain and look at what the neurons are doing, or something like that. If you looked at the neuron signals, it would make no sense, but it

processes it. So it's like the middle step of the processing. Okay. So a VAE takes inputs or whatever, turns them into crazy gibberish, but then there's another part on the other side that can take that crazy gibberish and turn it into meaningful stuff again. Bingo, exactly. So that way, when you give the VAE a totally new input it's never seen before, right, it should be able to compress it in a meaningful and useful way, and then the decoder can meaningfully decompress

it, right? And you can make predictions at both ends of this process. So, for example, with the VAE, right, you're feeding it crystal structures. By crystal structures, I literally just mean the atomic coordinates of materials, their fingerprints, right? Where are all the electrons and the atoms, the neutrons and so on, and particularly the bond lengths. That's it. You feed those in along with the properties, because each one has different lists of

properties, right? So now you're learning a compressed version of those crystal structures and their associated properties, so that when I give my VAE a new crystal structure that it's never seen before, it should be able to come up with a useful compression that allows me to reconstruct the entire input, including the properties, as an example. So that's the magic of the VAE. Now,

what does a diffusion model do? A diffusion model is famously known for incrementally adding noise to an image and then training, with a sort of loss function, a model that can go backwards: start with a very noisy image and come up with a very coherent image. I won't go into why Gaussian distributions and so on matter. What I will say is that if you can teach an AI to take a very, very noisy image and find and


learn a representation of working backwards to the original images, you've basically learned a representation for coming up with any coherent image, because you've trained it to come up with coherent images given noise. So if you give it random noise, right, some form of structured noise even, or sampled noise, the expectation is that it's forced to come up with a realistic reconstruction, a coherent one, and that's what we exploit. But that's also why you

never get the same images. Every time you run Midjourney, it'll never be the same, because you're literally just injecting noise into your system. So now you combine the two, and that's how you... A crystal diffusion, or a crystal structure... Oh, we put diffusion, and then, yeah, for variational... So a crystal diffusion variational autoencoder is a thing that you feed it, presumably, the way

that it gets better is you just feed it tons of data in the beginning. You say, here's a billion different crystals, a billion different properties, and then it just learns what to do with those sorts of things. And then on the other end, the diffusion part of it is basically, like, the equivalent of Midjourney, where it starts all blurry, and then you get random lumps, but then it coagulates into something that actually is meaningful. Presumably that's

what's happening on the diffusion side of it. Okay. So you are using CDVAEs to do these, like, material predictions, basically, and also the, what you call the reverse thing, where you're actually, like, predicting... And so, yeah, the inverse design problem is: given a set of target properties, you know, find me a thousand, right, or 10,000 material candidates with

these properties. Which, right, you would have had to hire entire teams of material scientists before to even, like, get close... What seems hardest to me is the data part: getting billions of crystals and somehow loading them into... Yeah, yeah, and I can see why you'd think that's the hard part. Largely, I would say, all three things that have come easy to large language models are very, very difficult for large physics models. Right.
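Stepping back to the diffusion explanation above: the forward half of the process, incrementally noising the data until only Gaussian noise remains, can be sketched in a few lines. This is a toy 1-D signal with a made-up noise schedule, just to make the idea concrete, not anything GenMat-specific:

```python
import numpy as np

rng = np.random.default_rng(0)

# A clean "signal" standing in for an image or a crystal structure.
x0 = np.sin(np.linspace(0, 2 * np.pi, 64))

# Hypothetical linear noise schedule: a bit more noise at each step.
T = 10
betas = np.linspace(1e-3, 0.3, T)

def forward_noise(x0, t):
    """Sample the noised version of x0 after t steps (closed form)."""
    alpha_bar = np.prod(1.0 - betas[:t])   # cumulative signal retention
    eps = rng.normal(size=x0.shape)        # fresh Gaussian noise
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1 - alpha_bar) * eps

x_mid = forward_noise(x0, 5)   # partially noised
x_end = forward_noise(x0, T)   # mostly noise, little signal left
# A trained denoiser learns to predict eps from the noisy sample, so it can
# walk from pure noise back to a coherent sample. Injecting fresh noise each
# run is why a diffusion model like Midjourney never gives the same output twice.
```

The denoising network itself is omitted here; the point is just that the "training data" for the reverse process is manufactured by corrupting clean examples.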

So, um... The difference, for example: data, right? The training data. If you take large language models, the training data is available all over the internet, right? You just need a coherent sentence, literally. It just needs to be a complete, grammatically accurate sentence, and that's now training data. And then, processing the representations, the transformers do most of

that work for you. And so there's obviously an active field of research on transformer architectures that impose these attention mechanisms that are useful for learning all these extra subtasks implicitly. And then you have the RLHF. You have thousands of people, hundreds of thousands, millions of people, who can RLHF the output and roast


your AI. That's pretty easy. But if you have a physics-based AI system, coming up with valid training data is very, very expensive. You need deep domain expertise, labs, lots of compute. So for example, the data for us is our experimental labs. It's also the synthetic data sets that we create using proprietary know-how and methods. It's also first-principles computed data that we create, and that's using these massive DFT software packages, very archaic software. What's DFT stand for? Density

Functional Theory. Density Functional Theory, okay. So Density Functional Theory is a computational modeling tool. It's an approximation to the many-body Schrödinger equation. Every single material has a unique fingerprint. Every fingerprint has an associated many-body Schrödinger equation. If you could solve it in its exact form, you would be able to predict pretty much all the material properties that you're interested in, all the properties. And the problem is that we

can only solve that for about three atoms. Beyond that, it's computationally impossible. That's where density functional theory was introduced; it's this computational method. So we have these old-school software codes, right, creating data sets as well, in part. So it's this question of how you create really high-quality physics data. It is itself an art and a science. So that's hard, I totally agree. But I would say even the representations are hard, too. So how do you impose physics

symmetry constraints? For example, say I have a molecule, right, a titanium dioxide. And I rotate that titanium dioxide molecule a few degrees across, let's say, the x-y plane. So I just rotate it a little bit. To my neural network, which is my CDVAE, this deep learning algorithm, it's only seeing numbers, positions of electrons. So if I rotate it this way, all

the numbers change completely, in no known and obvious way. So if I don't include physics constraints and symmetry constraints, it will think that it's seeing new data, a totally new... Um, it's like, if I show my phone like this, then it can see, oh, look, it's a shape like this. But if I show it like that, even though we obviously know it's the same phone, it looks different to it, because they're different pixels.
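The phone analogy maps directly onto atoms: rotate the same structure and every raw coordinate changes, even though nothing physical has. A quick sketch with made-up coordinates (not a real TiO2 geometry):

```python
import numpy as np

# Made-up 3-atom "molecule": one row of (x, y, z) per atom.
coords = np.array([[0.0, 0.0, 0.0],
                   [1.5, 0.0, 0.0],
                   [0.0, 1.5, 0.0]])

theta = np.deg2rad(10)  # rotate 10 degrees about the z axis
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

rotated = coords @ R.T

# To a plain neural network these look like completely different inputs...
print(np.allclose(coords, rotated))              # False

# ...but rotation-invariant quantities, like pairwise distances, are unchanged.
def pairwise_distances(c):
    diff = c[:, None, :] - c[None, :, :]
    return np.linalg.norm(diff, axis=-1)

print(np.allclose(pairwise_distances(coords),
                  pairwise_distances(rotated)))  # True
```

E(3)-equivariant architectures bake this symmetry into the network itself, so the model doesn't have to relearn it from data.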

That's exactly it. Basically, how do you do 3D object recognition, right? Recognition, but for atoms. So there are difficulties there. So for the physics symmetries, in this case, you would impose E(3) equivariance, right? It comes from, like, group theory. So group theory is a branch of pure mathematics that studies, mathematically describes and quantifies, symmetries. And you have symmetries in physics, deeply, right? You have energy symmetries and constraints, conservation of momentum, and so on and so forth. So all these things have categories and groups attached to them. So anyways, you


have to impose really, really deep, complex mathematical rules into the neural networks that don't just come out of the box for large language models. And then the third part, the RLHF, you... Yeah, and RLHF stands for reinforcement learning from human feedback, right? Or with human features or something. Is it feedback or features? I never know. Yeah, you were right the first time. Okay, okay. What that means is basically you show people output number

one and output number two, and they say, oh yeah, I prefer output number one or something like that. But if you show me this crazy material, like, I have no idea what to do with that. Like, I don't know how to rate that on a scale of one to 10. So that makes sense. What does that actually look like? What does the reinforcement learning part look like? Is it just doing more measurements after you create some stuff in a lab and testing to see if it was good or bad or right or wrong?

Yeah, that's really what it comes down to: largely, nature herself becomes the arbiter of truth for us. Nature holds the feedback for us. It's like NLHF, right? Or sorry, RLNF, right? Reinforcement Learning from Nature Feedback. So those are the hard parts. I imagine that your work is

probably just very complicated. And for a lot of people who are not as deep, as expert as you in a lot of these areas, I imagine it's very hard to communicate, hard to give people the necessary background information they need to understand it. And yet, you probably have to hire a bunch of chemists who don't have all the AI background. You probably have to hire a bunch of AI people

who don't have the chemistry background and vice versa. My question is, is there any way that you actually do that like as a company or even just as you like going out and pitching investors or something? Like how do you try to get people the first principles understanding they need in I try to communicate it in a way that they can relate to deeply and intimately. Let's say I'm talking to someone who works construction all

day long. That person may not have the same deep technical appreciation of chemistry as someone with a PhD in chemistry. And that person, the PhD in chemistry, may be a complete novice when it comes to applied materials in the real world, whereas construction workers actually understand melting points and so on and so forth, and real structural integrity, what that actually looks like. So there are, you know, knowledge gaps in both

places. And so what keeps it, you know, kind of threaded together is that shared understanding, or, more importantly, appreciation, of materials. So I always take that kind of first-principles approach.


And so that's always been pretty useful in retrospect. It's just like finding something in their day-to-day lives that matters and then kind of just showing them the appreciation of how hard it is to actually find materials like that. A lot of people, I think, just kind of take material science for granted, right? That plastics, leather, silicon, all these things just exist. They don't. They take very, very hardcore labs and a lot of scientists, a lot of time, to discover. So

yeah, just communicating that. And then communicating the physics as well, right? Like the baby example, or the child example of jumping from somewhere high; everybody understands that. So... If you're just looking at your company for the first time from the outside, it does seem like you have all these different areas

you're trying to work on, sort of, all at once. Seemingly, you're not only doing materials discovery, but you're also doing materials analysis. You're not just focusing on one material, you're focusing on, like, all of them. You're doing photovoltaics, you're doing, like, whatever conductive stuff, you're doing so many different kinds of things. And so is there a reason you chose to do

that? Is there a reason you chose to go really broad instead of just being, like, the one ceramics lab or something? Absolutely. I laugh because that's, like, the first question... That is such a common question, right? Where is your focus? And my answer is always the same: my focus is the system. We're trying to build an AI that is better at physics than all humans combined. That is the goal. And so that AI can't be trained on just ceramics, or just batteries, or

just geophysics, or just computational fluid dynamics. It really has to sample from all sorts of domains of physics and truly be embedded across all different kinds of sensors around the world. So I'm kind of taking that systemic approach. It might look dramatic to, you know, us puny VC brains, or... Okay, so let's talk about the geophysics stuff. And let's talk about space. Let's get there. So why,

add another thing? Come on, man. You're not focusing. Yeah, like, tell... Yeah, if disrupting materials science isn't hard enough, then why burden yourself with trying to disrupt geology, right? But basically, the benefit is twofold. In the immediate short term, and when I say that, I mean, like, 10 years or less, right, the massive benefit here is that we're going to be able to help with the critical materials and critical minerals supply chain, as well as just the general raw

material supply chain, which also benefits us and our customers. As our customers, let's say the SpaceXs, Teslas, whatever of the world, right, are prompting our AI to look for new materials, new alloys for their rockets, for example, or engines and so on, there are going to be net-new atomic species or elements, net-new as in versions of these raw materials that we typically haven't cared about, right? So pick your poison.

I'm gonna pick an extreme one, like francium. Obviously, it wouldn't be something like that, but you get the point, right? So there's going to be a sudden increase, we foresee, over the next few years, in absolutely new supply chains for raw materials that don't exist today. And so if we can put up satellites in space and have geophysics-based AI models that reduce the cost of exploration, and the time, for those new minerals, we reduce the cost of

the raw materials themselves. So that's a very simple first-order effect that we want to exploit or leverage, if you will. And then the kind of longer term goal that this serves, the geophysics and satellite work, is going back to the original three components that we talked about. So the geophysics satellite


work, the hyperspectral work, also has a secondary benefit that's more long-term. And that goes back to what we talked about earlier, about the three critical components of general intelligence from our perspective. And so that's the sensing, simulation, and manipulation of matter. So the more sensors we have in space, on the ground, underneath the sea, throughout the... Yeah, it makes sense. So basically, what you're trying to do is have cameras, like hyperspectral cameras,

right? Not just, like, optical cameras. So a camera that can see every sort of wavelength, from the narrow little band we can see, into the infrared, into the ultraviolet, probably even beyond that. So it takes all of that, call it visual data, and can look for materials. So you say, we know that copper looks like this when it's on the ground, has this particular pattern. Basically,

That's the simplest way to describe it. That's right. So we're looking for signatures of different minerals. Copper-rich minerals versus iron-rich minerals will have a totally different reflectance pattern underneath the hyperspectral signature. To you and I, it's just going to look like dumb rocks. But to a specialized camera, they're very, very pretty and colorful. And each color is super meaningful in terms of the chemical and

crystal-structure composition of those minerals. And so, for some deposits, what we want to be able to do is, exactly as you said, point to deposits, right? And point to drill targets: where should we actually drill and

look for these precious deposits? If you take something like a porphyry, the causal relationship between the surface scans and the porphyry is a lot stronger. A porphyry is a geophysical sort of phenomenon that occurs across millions of years when you have a volcano nearby. Essentially, what happens is you have

really, really hot magma kind of circulating, and heating up all these metals and precious metals, like gold and silver, and it circulates these really precious metals that we care about, across millions of years, deep underground, anywhere from a hundred feet to several hundreds and thousands of feet beneath the surface. And so you have kind of an Eye-of-Sauron-looking thing, and around it are all these... like, it's literally gold and silver and nickel and other things

you care about. And then on the surface, it's really, really low grade material, but it's still like a lot


of gold, a lot of silver. And a lot of porphyries have yet to be discovered, by the way. That's the cool part. So there are a lot of surface mines that haven't even been detected yet. And we will see them with the right AI and the... So you actually launched a satellite, though. You launched a hyperspectral satellite on a recent SpaceX launch, on Transporter-9, was it? Right. Oh, man, that was so nerve-wracking and exciting. It's easily top, like, experiences of my life. Hands down. That

was so cool. Like, you know, I'm sure you experienced this too when you were going through your first payload. It was just like, alright, we're putting a lot of money into this thing. There's a lot of hardcore engineering and risk associated. It will literally go into space at tens of thousands of miles an hour and stay up there at, you know, about that speed. So, you know, this can go wrong in so many ways, right? So it's very nerve-wracking all

the way up until the point that they say, you know, GenMat-1 deployed. And then after that, you know, it's like, alright, we can relax a little bit. And then it's straight to LEOP, right? Like, how do we manage LEOP? So it was such an amazing and thrilling and high-impact experience. Dude, it was so inspiring. I was like, this is a private individual and his company and 10,000-plus people who believed in the mission. In

just 20 years, this is what they've done. Like, I'm literally watching a frickin' skyscraper go to space and come back right now. That's awesome. And can you say anything about how it's going? And LEOP is launch and early operations phase. So how did LEOP go, and how's the... Absolutely. So we're currently at the very final end of LEOP,

where we're working with the ADCS and GPS. So once those guys are stabilized... The ADCS is the Attitude Determination and Control System, which you know, but just in case, for your audience, it helps you determine the... And so we're currently just operating and fiddling around with that, if you will. And at that point, we're going to be able to start taking our first pictures. That's so exciting. Oh, my gosh, I remember. So we actually, at Astranis,

launched a demo satellite before we did our big one. We took a picture of San Francisco from our little tiny satellite. And when that came back down, it's like that


picture is iconic in Astranis history, just because it came from a... That was so cool, right? It's like, yeah, that was... Yeah, you're about to be there, unless you've secretly already captured some demo images and downlinked them, but I won't press you to see if that's true. So basically, you are probably also developing the algorithm side too. Like, actually, how do you take in all that hyperspectral data and do stuff with it, identify the deposits

or whatever? So how similar is that to the other stuff you're doing? Is it a similar sort of physics problem that you have expertise solving, because you've been doing this materials science... Yeah, there's certainly some overlap. That's a great question, actually. There's overlap in some of the expertise. So for example, in geological soil sampling, and we filed a patent on this recently, basically, in sampling, you'll do something called XRD, X-ray diffraction, and

you'll look at the crystal structures, right? You're basically taking an X-ray of the surface of the Earth, of what minerals are there, and you map out their bones, aka their crystal structures. So that's done in materials science too, in high-throughput synthesis chemistry, for, like, new-ion, new-battery discovery. So that process is literally the same, so the models are the same as well. And then there are also sort

of meta-overlaps. So, going back to what makes physics AI hard compared to large language models: the training data requires domain expertise. So to your point, you know, we have a lot of domain experts in geospatial, in geophysics. One of our investors is, you know, Comstock, right? They're Nevada's oldest publicly traded mining company. So we work with real-world geologists, too, to validate our models. And that helps. And then, yeah, we also innovate on the algorithm side

too. So how do you make useful predictions and... Because we had worked on these physics-imposed constraints and symmetry constraints, and on learning the nuances, with materials


science, a lot of those learnings we were quickly able to... Very cool. Can you give people a sense, across the materials science side and this side, of how far along you are? Like, how many people are at the company, and, like, customers, and sort of traction in figuring these things out and being able to make them useful. I imagine that people

are probably like, okay, cool, it sounds like a cool idea, but that'll never happen... I love it. Absolutely. So, you know, we're $15 million in, and 40 people, and three years' worth of results and work. So, you know, we have preprints on LK-99; we simulated it. We have preprints on our own materials, titanium dioxide nanoparticles, where we've shown that you can actually use AI to dream up these new configurations and chemical surface reactions. We put

up and built a wet lab. It's probably one of the most advanced photoreactor combined with machine learning workflows in the world right now. We have a photo reactor in Toronto. That's where we're literally creating, you know, natural gas right from thin air, essentially. So we have that going for us.

We also, of course, have the satellite, as we mentioned. And we also have traction on both the technology and customer side. That includes, for example... We've now been able to generate tens of thousands of crystal structures across a wide base of material families that are all potentially trillion-dollar markets. What we want to do is continue to validate that with actual industry

experts in those fields. So we won't be able to say names, but we've attracted the attention of pretty much every brand name you can think of, as crazy a statement as that sounds. So we have that kind of traction, people really interested in adopting our stuff. And then on the geophysics side, I'm really excited as well, because we were recently able to pre-train, it was a transformer-based architecture that we completely hacked to include constraints that we cared

about, to predict hydrothermal alterations. An alteration is a part of the ground where the chemical composition randomly changes, like it just changes unevenly. And again, this is something you may not visually see, right? It's very subtle, but very powerful. And so how do you teach a pre-trained model to predict that? On low-res Landsat data, which is, like, 30-meter resolution per pixel, as

opposed to five meters, which is what GenMat-1 has. So we were able to predict hydrothermal alterations on the Comstock... And that was validated by their own geologists. It blew their minds. And this was done using a model that we pre-trained on much worse, lower-resolution data. It's not even using our existing satellite... And what does this look like at the limit? Let's say it works perfectly; everything you have planned comes to pass. Where are we? Where are you? And where

I want to bring us towards a world that I call Yug-Cyberia. So you probably see that on my Twitter a lot; like, it's in my handle, right? I still have to write a proper blog post describing what it is, to be fair. So this is a good, like, preview. So Yug-Cyberia is


the third level of civilization that I think we're headed to. And I'll give a brief guess as to timelines, right? Don't hold me to them; they could be way off, one way or another. I don't even know anymore, with AGI. Like, it's obscured, you don't even realize it, right? It warps anything, any predictions about the future. But that being said, for us it's like, alright, where does physics AI, physics-based

AI, take us in the interim first? And the first level of civilization is one where I think we will exist in a world where humans are constantly creating matter assisted by an AI. So every piece of matter we ever create will be partially driven by an AI under our guidance. So we'll have these beautiful, crazy, futuristic maglev train systems,

right? Consumer electronics that you don't have to charge for centuries. All these things will be the desires of a human, but served to us by an AI, just optimizing the atoms, making it more sustainable, right? So just solving all the things that we currently suck at, because we don't have, like, chemical, quantum-mechanical intuition, frankly. You have to build that over years and years and years. But if you can teach an AI that from the get-go, right? Totally different story.

So the first level is one where you have hypersonic maglev trains, like you mentioned, megalithic structures. But then the second level, I think, where we're headed, is where the matter itself becomes so sophisticated that it in itself takes on low levels of intelligence. So what does that look like? Well, it looks like me running straight at the wall, right? And the wall folds in on itself, because it's required to keep humans and life forms safe. So

I'll just safely walk through it, right? So my wall right now is stupid. It's an inanimate object, just inanimate matter. So I think matter itself will just become more animated and intelligent. Arguably, some people might, you know, assign it, like, a low-level sentience, like, a belief system, right? Like, this should be treated as life, potentially. I don't know. And then the third level is what I call Yug-Cyberia. And so that's the point at which we merge with this intelligent matter while keeping our humanity and singularity. So basically, this is a world where you can go outside and telepathically hail, right, like, a flying car. Where, if your limb falls off, right, the ground will grow a limb for you. This is a world where you control matter as easily as... So, fair enough. So, Yug is short for Yuga,

which is Sanskrit for "age of." And these ages are typically measured in the tens of thousands of years, right? So I think that if we properly unlock physics-based AI, we'll have tens of thousands of years of innovation, minimum, ahead of us. And then Cyberia is just short for the age of cyber, so, like, cybernetic matter. So in total, it's the age of cybernetic matter. So matter... Okay, awesome. So it's like we start off and we are just

in kind of a normal world, but things are better. The second world is, my wall can now deform around me and, like, knows when I'm coming. And then the third one is, like, my wall's my friend. So in that world, which of those categories has us, like, on Mars, and, you know, living, doing interstellar travel, and those sorts

of things. I know this is actually something you care about, like getting beyond, Deep... I actually think we just need to get to the first level, even halfway to the first level, and we're already on Mars. What I'm talking about unlocks interstellar capabilities, right? Like the ability to potentially hack the speed of light, if you will, travel faster than light, hack physics. By hack physics, I just mean find new

physics that we didn't know about before. So Mars seems well within reach, even before we go that far along the tech tree. Because think about it: if we can accelerate the high-throughput discovery of materials that reduce the cost and the weight and improve the strength and all these other thermodynamic properties of the rockets, if we can accelerate that, that doesn't require AGI. That does not require a complete understanding of every domain of physics out there. I'm willing to bet we're actually a lot closer to Mars, if we look at it at that scale. And so, like, what we're kind of doing here is building the foundational civilizational technologies that

are needed to kickstart any civilization. Because take the satellites, for example. If we go to Mars today, one of the first things that we struggle with is: where are all the precious metals and minerals? More importantly, where are all the economically viable minerals that we can get to? That is a huge open-ended question that has many theoretical answers, but nothing really solid. So you want sensors all around the planet that have physics-based AI making

sense of that geophysical data. So you're reducing the time and resources that you need to deploy to explore and extract those resources. And then, now you have a bunch of minerals on Mars. Woohoo, congratulations. Provided that you don't have to worry about getting sued by a government. The Russian government, hopefully not. You know, like, hey, we staked claims there, you're not allowed. So, assuming that's not a problem, right, and you get those deposits, the next thing you're going to be asking is how you use those raw materials and convert them into advanced materials to build colonies, to build fully functioning nuclear reactors that power a Mars base. All those things are now materials science and applied physics problems. And so, yeah, I think Mars will get there, you know, probably halfway to the first level of

civilization. And somewhere between the first and, let's say, second, we will have enough dominance and understanding of matter that we may be able to explore, you know, creating and altering gravitational fields and all these other really... That's awesome, man. I can't think of a better way to just conclude it than there. We're going to be in the stars with talking, friendly walls. And

no, thank you so much, man, for coming. This was awesome. I think people appreciated the wide-ranging conversation, but it always came back to... Thank you so much, Fisher. It's been a pleasure and an honor. So much fun. Appreciate it.

Transcript source: Provided by creator in RSS feed: download file