
AI Running on the Edge with tinyML

Aug 22, 2024 · 54 min · Season 1 · Ep. 204

Episode description

In this episode, Pete Bernard, CEO of TinyML.org, explores the groundbreaking world of TinyML—AI running on edge devices like wearables and sensors. We discuss how TinyML is revolutionizing industries by enabling AI in resource-constrained environments, covering real-world applications from agriculture to IoT. Tune in to discover how TinyML is set to redefine the future of AI and edge computing.


Pete’s Favorite Songs:


Hosted on Acast. See acast.com/privacy for more information.

Transcript

Welcome to the Mr. Beacon podcast. This week we're going to be delving into the realm of artificial intelligence. Very important

here, for many reasons. Three spring to mind. One is the massive sucking sound as the entire capital markets, venture capitalists, stock market is getting driven by the technology. The second reason is know your enemy, and this thing could kill us or at the very least take our jobs. And the third thing is it's just incredibly cool. I got into the whole computing business after seeing 2001: A Space Odyssey and just marveling at this HAL 9000

machine. And now we can have a conversation at least as good as the one that they had in the movie with something that's running on our phone, or actually in the cloud. And the cloud and massive data farms are synonymous with AI. But there's actually another way, another approach called tinyML, or tiny machine learning, where AI is starting to be run at the edge, or on the edge, on watches and earbuds and small sensors like the ones that

we focus on in this podcast. So my general rule for this podcast is if I'm really interested in something I feel like I need to learn it in order to do my job then I'm thinking there's probably other people that feel the same way. And I'm hoping that you're one of them because we're going to learn a bit about tiny ML from the guy that runs tinyML.org. His name's Pete Bernard. He's an amazing guy, a really interesting career which we'll

cover in the second part of the podcast. But for this main bit, please enjoy my conversation with Pete where he explains what tinyML is, and I hope it's useful. The Mr. Beacon Ambient IoT podcast is sponsored by Wiliot, bringing intelligence to every single thing. So Pete, welcome to the Mr. Beacon podcast. It's wonderful to have you on the show. Great to be here. So we're going to talk about tinyML and AI and edge computing and IoT.

And what a great time to be in the artificial intelligence business. So congrats on landing at tinyML.org. I want you to explain a bit about the organization, give kind of the high-level view of tinyML. And let's talk about, let's educate us, me and everyone else, on exactly what it is and what the use cases are, where it fits in in this kind of weird taxonomy of different MLs and LLMs and so forth. But what is tinyML? Let's start there.

Yeah. Well, you know, as you know, the term artificial intelligence has been around since, I think, 1956 or something like that. So this idea of kind of creating software that can learn, you know, be trained, has been around for a while, and only probably in the last five or 10 years have the, you know, chips and networks and other things become

fast enough where a lot of this stuff really starts to practically work, right? And I would say the past couple of years we've seen, you know, in the cloud, these kind of transformer-based architectures and LLMs do all kinds of fascinatingly weird things that feel like you're talking to a human, but not exactly. But machine learning and ML is really more about using AI for pattern recognition primarily. So the term ML means machine

learning. It's kind of a subset of AI. Some people say, well, machine learning is about patterns and AI is about, like, kind of simulating human thought. But tinyML really is all about doing AI and machine learning in highly resource-constrained environments. So when some people think of AI today, if you go and listen on CNBC or whatever, when they say AI, they mean sort of chatbots and, you know, fake girlfriends and Scarlett

Johansson voices and all that stuff. And that's not what we're all about. So those are running in data centers that use like gigawatts of power and, you know, oceans of water and tons of concrete and all that stuff. But it turns out you actually can run AI and ML, you know,

on local equipment, on sensors, on cameras, on water sensors, all kinds of things. And you can run AI and ML there and use a lot less power and cost and have a lot more impact because you're running AI software on the data as it's being created, as opposed to, you know, sending a bunch of stuff to the cloud and then sort of working on it there. So tinyML is really, I call it a technical descriptor, just like edge AI or on-device

AI. It describes, you know, running AI and ML workloads in these kind of highly resource constrained environments and all of the tools and techniques and chips and things that you need to do it. And so there's a whole ecosystem of folks out there that are building solutions based on tiny ML or sometimes they use the term edge AI. But it's really about AI software

running in those either memory- or power- or cost-constrained environments. So if I wanted some AI running locally on my watch, then I can develop it in tinyML? Yeah, I mean, that would be an example of a tinyML application. You know, I think some of the common places we're seeing it applied a lot in the commercial space are, like, in agriculture, for, you know, farms, for crops and water detection and things.
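To make the watch example concrete, here's a minimal sketch of the kind of on-device inference loop being described: a tiny classifier with fixed, pre-trained weights scoring windows of accelerometer readings locally, with no cloud round trip. The weights, features, and numbers here are all illustrative assumptions, not from any real product.

```python
import math

# Hypothetical pre-trained weights for a tiny activity classifier.
# In practice these would be trained offline and exported to the device.
WEIGHTS = [0.2, 3.0]
BIAS = -1.0

def features(window):
    """Reduce a window of accelerometer magnitudes to two features:
    mean level and standard deviation (a simple variability measure)."""
    mean = sum(window) / len(window)
    var = sum((x - mean) ** 2 for x in window) / len(window)
    return [mean, math.sqrt(var)]

def predict(window):
    """Tiny logistic 'model': probability that the wearer is active."""
    f = features(window)
    z = BIAS + sum(w * x for w, x in zip(WEIGHTS, f))
    return 1.0 / (1.0 + math.exp(-z))

# Still readings hover near 1 g; active readings swing widely.
still = [1.0, 1.02, 0.98, 1.01, 0.99, 1.0]
active = [0.2, 1.9, 0.4, 2.2, 0.1, 1.8]
print(predict(still) < predict(active))  # → True
```

The point is that the whole model is a few numbers and a few arithmetic operations, which is why it can live on a watch-class microcontroller and act on the data the moment it is produced.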

We're seeing it in industrial environments. We're using AI for anomaly detection to detect high current draws, or even audio detection of ball bearings wearing out, and things like that. I interviewed a company called Ubotica recently that's using AI in satellites. So they're taking the kind of spectral imaging of the earth and they're doing pattern recognition on it for early detection of, like, algae blooms

and methane, you know, releases and things like that. So anywhere you want to use AI to, you know, act on the data in real time, for detection, image detection, pattern detection, that's where you would apply, like, tinyML techniques, I would say. And what are the limits of tinyML? What are the sorts of AI things you really wouldn't do with tinyML? You know, that kind of changes on a daily basis because we're constantly surprised about

what people are able to do. In fact, we had a seminar online around generative AI on the edge, on tinyML, back in March. And I think someone demonstrated running an LLM on a Raspberry Pi box. So, you know, you could query it. I think the idea was, like, well, if you stuck this on a shelf at Home Depot, you could go up and ask questions about, you

know, where's the super glue, or whatever the use case was. So, you know, it turns out that some of these models like LLMs are more memory bound than they are compute bound. And so, if you had enough memory on a system, you could have a pretty cheap chip, a fairly low-horsepower chip, and still run generative AI, you know, on those things. But I would say, you know, that's where you're pushing the envelope, when you're doing transformer-based

architectures. That's currently kind of the upper bound for a lot of the edge AI and tinyML. It's happening, you know, if you have the right combination and the right narrow use case, it can work pretty well. But you're not going to have an expansive conversation about René Descartes one minute and the performance of the product that's on the shelf the next. It's going

to be narrowly defined. Narrowly defined. Yeah. Okay. Like, for example, if you were a car mechanic, imagine if you had, this is a really interesting use case, you know, car engines, as you know, cars themselves, lots of sensors and there's lots of data. It's very hard to understand what's happening in this car unless you kind of read the codes

off your OBD port or something. You know, if you had an LLM, you know, running under the hood, could you open the hood up and ask the engine, like, what is wrong with you? Why are you stalling on me? And it could read all those sensors and then use a large language model running locally to say, oh, you know, turns out that your fuel mixture is too rich and you should do this, whatever. So you could translate a massive

amount of sensor data into a language that someone could actually understand. So that could be an interesting use of combining tinyML for the sensor data with an LLM that's translating that into something you could actually act on. And what are the processors this is running on? Presumably it's not an NVIDIA GPU that you've got running locally. No, you wouldn't put, like, an NVIDIA Blackwell in there. I think that's about 1,200 watts

these days. So that is way off of spec. But there's a lot of companies, you know, you look at the kind of big Cortex-M-based companies like STMicroelectronics, NXP, Renesas. They all have AI acceleration in some of their kind of MCUs. In the Cortex-A space, you've got, you know, NXP, Qualcomm in there. And these are different kinds of Arm processors? Yes, Arm cores. Yes, sorry. So we're going from, like, the microcontroller MCUs to maybe, like, the MPUs.

Okay. You know, things that you would see in a smartphone, for example, would be pretty high-end actually for an IoT or embedded device. And then Intel x86, you know, they have the Core Ultra and they have some acceleration in there too. You could use those. If you're familiar with the Intel NUC, the four-inch-square boxes that you can put out there and use for all kinds of IoT solutions, you could run some AI workloads on that too.
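One practical difference between these tiers is simply how much memory a model can occupy. A rough back-of-the-envelope check like the following is often the first question: weight count times bytes per weight has to fit in the device's flash budget, which is why int8 quantization matters so much on MCU-class parts. The layer sizes and the 256 KB budget below are illustrative assumptions, not figures from the episode.

```python
def model_size_bytes(layer_shapes, bytes_per_weight):
    """Rough size of a dense network's weights: each layer contributes
    inputs*outputs weights plus `outputs` bias terms."""
    total = 0
    for n_in, n_out in layer_shapes:
        total += (n_in * n_out + n_out) * bytes_per_weight
    return total

# Illustrative keyword-spotting-sized network: 64 inputs -> 32 hidden -> 8 classes.
layers = [(64, 32), (32, 8)]
fp32 = model_size_bytes(layers, 4)  # float32 weights
int8 = model_size_bytes(layers, 1)  # int8-quantized weights

print(fp32, int8)  # → 9376 2344
# A Cortex-M-class MCU might give you ~256 KB of flash for the model:
print(int8 <= 256 * 1024)  # → True
```

The same arithmetic explains the earlier point about LLMs being memory bound: billions of weights simply don't fit, while a few thousand quantized weights fit comfortably.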

And you know, what are the alternatives to tinyML if I'm running AI workloads on the edge? Well, I mean, it's more of a technical descriptor. So, you know, whether it's edge AI or tinyML or on-device AI, it's just a way of describing, you know, the way you're doing that. So it's not like a standard. Okay. So it isn't a language then. So you could.

Yeah, you could use TensorFlow Lite or PyTorch or any number of, I would say, there's a lot of now kind of highly optimized, compact frameworks for running AI models on these resource-constrained environments. It all falls under this tinyML, edge AI sort of umbrella. So I thought that tinyML was a language, but more fool me. So no, it's, it's

more of an architectural construct. It's like the decision we're going to run this stuff at the edge and we're going to get the benefits of local hardware, low latency, the ability to run when the cloud isn't present. Are there any other, so, so it's a design philosophy? Yeah, I would say it's a series of sort of techniques and technologies that are used to fit AI workloads into very tiny spaces. And so yeah, you could be cost constrained

or power constrained or size constrained. It turns out when most people commercialize products they're constrained in some form, right? I mean, at the end of the day, there's always a constraint. Unless you're like, you know, running up there on AWS and, you know, you just want to burn through all of your cloud spend, you pretty much have some constraints.

So it's really the study and the implementation of AI in these tight spaces. And, you know, some of the folks in the space are building sensors with some AI acceleration in them. So you look at, like, Bosch, folks like that in the sensor business. They're like, well, we'll have the sensor and then we'll put a little AI workload in there to do anomaly detection on the gas sensor, to recognize different patterns of, you know,

toxic gases and things. So we're seeing a lot of folks in the kind of very low-end, low-cost, low-power space adding AI capabilities into their equipment. So those could be MCUs or sensors. So it's a really fascinating area. All right. Yeah. I think I just, I knew it was machine learning, but my brain was thinking markup language. It's not. Oh, yeah. HTML. No. Yeah. Okay.
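The on-sensor anomaly detection described here can be as simple as flagging readings that deviate sharply from a rolling baseline. This is a hedged sketch of one common technique (a rolling z-score), not Bosch's actual implementation; the window size and threshold are arbitrary assumptions.

```python
from collections import deque

class AnomalyDetector:
    """Flag readings that deviate sharply from the recent baseline.
    A simple stand-in for the on-sensor anomaly detection described above."""
    def __init__(self, window=20, threshold=3.0):
        self.history = deque(maxlen=window)
        self.threshold = threshold

    def update(self, reading):
        if len(self.history) >= 5:  # need a baseline first
            mean = sum(self.history) / len(self.history)
            var = sum((x - mean) ** 2 for x in self.history) / len(self.history)
            std = max(var ** 0.5, 1e-9)
            anomalous = abs(reading - mean) / std > self.threshold
        else:
            anomalous = False
        self.history.append(reading)
        return anomalous

det = AnomalyDetector()
normal = [10.0, 10.1, 9.9, 10.2, 10.0, 9.8, 10.1, 10.0]
flags = [det.update(r) for r in normal]   # steady gas readings: no alarms
spike = det.update(25.0)                  # sudden spike: flagged
print(any(flags), spike)  # → False True
```

Something this small runs comfortably next to the sensor itself, which is the whole tinyML proposition: the raw stream never has to leave the device, only the rare "anomaly" event does.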

And so the boundaries are, is there, does anyone argue, well, that's not tiny ML. You know, I've got a desktop with, with, I don't know, a whole bunch of memory and you're like,

sorry, you can't come to the meeting. You're not doing tinyML. Yeah. Sometimes, like, some people say, well, anything over a milliwatt, you know, is not tiny, and anything up to a few watts, you know, is edge, you know. But these are, I would say, not super productive arguments, because at the end of the day, you're trying to solve

a problem. You're trying to make it fit and there you go. So, and quite often now you're looking at tool chains and model zoos and other things, and whether it's using a milliwatt or a watt doesn't really matter. So yeah, but some people would, you know, they like to, you know, we all have our taxonomies in our heads about how to categorize things. So, but some of this stuff, like you look at, there's a company called GreenWaves that has a GAP

9 chip on RISC-V that's for, like, hearables. So they're in the hearables market. So you think about hearing aids and things, where you're doing audio processing, you're doing AI on the audio signals in real time. Their biggest value prop is the low power. So, you know, battery life in hearables is kind of the big deal, right? So they're all about, like, super duper low power in a set of workloads that are fairly finite,

right? Whereas you might say, well, you have a Qualcomm chip that maybe can do lots of different workloads, but maybe uses a little more power. So at the end of the day, like, it all comes down to, as you know, what's the problem you're trying to solve, right? Is it an agricultural problem? Is it a hearables problem? And then the good news is now there's this ecosystem, and I would say ecology, of AI providers out there that can sort of help you

solve that problem in one form or another. So, seems like right place, right time. I imagine you're getting a lot of interest. How's it going in terms of the size of the community? Yeah, it's pretty, it's pretty amazing. One of the cool things about doing AI on the edge, or tinyML, is that it can be done with pretty low cost. And so it's a great platform for students. We actually have about 100,000 students around the world

that have taken tiny ML classes. And we have a big outreach in our community with academia where professors and teachers are, you know, using this curriculum to teach their students, computer science students about AI because it's something you could literally put your hands on and like actually do stuff. So, we have a lot of folks from academia involved. We just

sponsored an event down in Brazil with IBM. It was like a week-long seminar teaching teachers how to teach tinyML to their students, and working with Arduino kits and things like that. So it's got a great element of education and sort of, I would say, almost democratization of AI, where everyone can do it at very low cost. So that's been big. And then on the commercial side, I mean, as you know, everybody wants an AI strategy.

I was just speaking at an aerospace conference this week down in Vegas, where it was too hot, by the way. And everyone in aerospace was just like, man, what do we do about AI? What do we do with this stuff? Everything from the integrity of the data sets and the models and the supply chain, and, you know, we start getting into space and, you know, Defense Department stuff. Like, how do you do non-deterministic AI in

an environment that needs to be very deterministic, right? So, I think a lot of these industries are just trying to figure out, you know, how do they best leverage it? And the good news is there's a ton of cool innovation out there. I mentioned a bunch of the chip companies, software companies, you know, out there too that are just innovating like kind of week

by week. It's kind of hard to keep up with what's the state of the art. But that's what we try to do in the community: kind of bring the state of the art together to collaborate. And how do you organize yourself? You have a huge scope in terms of lots of different technologies and presumably a limited number of people. What have you focused tinyML.org on? Yeah. So we're a nonprofit and, you know, like any

nonprofit, we are, you know, funded with limited resources and staff. And so we organize ourselves in a couple of ways. We have a community manager. So we have a bunch of strategic partners that help fund the organization. And so we have a community manager that sort of does the care and feeding of that community. We have a group of professors that look after kind of the academic community that help make sure we build bridges between academia and industry.

And they're all professors and volunteers. We have an events management person that runs some pretty cool symposiums and in-person events. We have an event coming up in Washington, DC at the end of September with the National Science Foundation on sustainability and edge AI. So running those events, in person and sometimes online. But the in-person community building is really important. Yeah. And then we have development folks, you

know, that work with, you know, commercial partners, commercial companies. We have someone in Japan now working with the Japanese market. So, you know, it's all about we have these constituencies around tech providers, academia, commercial companies. And we try to make sure we have people doing outreach to all of them so that they can be part of the community

and get something out of it and we execute. But yeah, and we rely on a lot of like, you know, scale platforms like LinkedIn and, you know, YouTube and all this other cool stuff out there where we can reach. We have a discord server. People should go on our discord server and jump into the conversation there. So we try to use platforms at scale worldwide, frankly, so that we can share the knowledge and sort of build that community.

And do you get involved with standards and government, kind of, I don't know, lobbying is a nasty word for it. Policy, do you get involved in that? Yeah, well, I mean, in the AI world, standards are a little bit few and far between. We like to think of best practices. So for example, this morning I was talking to someone, one of our partners, about watermarking. So how do you watermark your data sets and

your models so that you can maintain the provenance of your data and data sets out there? So, you know, we have working groups that publish kind of best practices around some of those things, but not quite standards. They're not IEEE standards. We're like, we've all agreed to do this this way. And then in terms of, you know, our goal with, we call

them policy makers, governments, our number one goal is education. So a lot of policy makers, when they think of AI, they're watching CNBC and they think of that other AI, you know, the chatbot, Terminator stuff, whatever. And so we educate them about, well, there's all this other AI stuff that's, you know, good for farming and water systems and healthcare. And this is how it works. And this is what we do. So we do a lot of education.

That's part of what we're doing in DC. And then we hope to also, you know, influence policy in terms of making sure that everyone in the community can have a thriving business, but also can do it in a responsible

way. One of the things we do also is help channel the community into responsible AI efforts. We work with the United Nations and World Economic Forum. We have initiatives with them. We're part of the AI governance team there, bringing that edge AI, tinyML perspective into some of these projects to help people do good things with AI. Just good. I'm having empathetic chest pains here, kind of a feeling of anxiety, because I just see the huge scope of all the things that you could do. Yes.

You don't look very stressed. You're looking pretty relaxed. How do you figure out what you're going to do and what you're not going to do? Yeah. That's a good question. We had a kickoff meeting today with a marketing agency. I said, our number one goal is simply engagement. We just want engagement. We try to simplify things about like here's what we can do and here's what we can't do.

We do leave a lot of things, frankly, on the back burner. We have to prioritize making sure our partners are well informed and we have the right community and we execute on things that we can execute on. Sometimes we just sponsor things. Sometimes we just speak at things. We have to be a little bit careful about planning out what we take on as our own organization versus supporting through some indirect ways. Maybe it's one of the things I've learned going through startups and other things.

Sometimes saying no is as important as saying yes. Choosing your yeses carefully is important, because once you decide you want to do something, you've got to do a good job at it. If you're not going to be able to do it, then just say, you know what? Great idea. Let's put it over here. When we get the resources and the time, we'll do it. We have those conversations. I've been involved in nonprofits in the non-tech world for a while in Seattle, and it's always

the same thing. There's always a hundred great ideas. You've only got so many volunteers and so much money. Let's get the really important things done first and then hopefully, you know, we'll get to the other stuff. Prioritization. I want to explore the good, the bad, the ugly. We'll start off with the ugly and end with the good. So what's your p(doom) score? Do you think that tinyML is completely immune from the probability of really bad things happening? P(doom), some people are like,

this can only be good. You know, I've got a very low p(doom) score, the probability of things going really bad is, like, one percent. And other people are like, that's it, you know, it's just a question of time before the Terminators start coming. That's right. Yeah. It's like 100%. So you're obviously a little removed from the hottest part of the Terminator

scenario, I guess. What's your view of what could possibly go wrong? And to what degree is the edge just not part of any of those disaster scenarios? One of the things people look at all the time with these kinds of deployments is, there's obviously security risk. Anytime you put something out there, you don't want to create a conduit into a system that people shouldn't be playing with. And we've always heard those anecdotes

about some, you know, unprotected IoT thing that people get access to, and whatever. So I think that's always top of mind, or should be top of mind, AI or not in the space, right? So you don't want to hack a gas pump and then get into Chevron's database or whatever. I don't know. Not saying that happens. So that's, you know, always something to keep an eye on. And there are interesting technologies now and techniques around encrypting, you know, data at rest and

data in motion to make sure that you mitigate some of those risks. The other thing that could go wrong is what they call model drift. So you have an AI model that's operating, and over time it kind of drifts and becomes more and more inaccurate. And so, you know, you need to make sure you have the right kind of management framework in place to keep that model relevant and

accurate. So, you know, I mean, worst case scenario, you're detecting anomalies when there are no anomalies, or maybe you're not detecting anomalies when there are anomalies, and that could be bad. So, you know, that's just something you build into the architecture to make sure you mitigate those risks. Yeah, there aren't those kind of, like, you know, sentient Terminator risks in tinyML and edge AI, really. And one of the hot topics too is, like, well,

where does the human-in-the-middle part go? Like, are we just sort of alerting a human that they should take action, or should the system take action itself? And that's one of those interesting tipping points in design that I think a lot of people are struggling with. Certainly in situations where you're getting a lot of signals that something bad's happening, it might make sense to automatically take action to shut down certain systems to prevent failure.
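Guarding against the model drift mentioned above often starts with monitoring the input distribution and raising an alert when it shifts, which also keeps a person in the loop for the final decision. The class below is an illustrative sketch of that idea; the names, window, and tolerance are assumptions, not anything from the episode.

```python
from collections import deque

class DriftMonitor:
    """Compare a rolling mean of incoming sensor values against the mean
    seen at training time; flag for human review when it shifts too far."""
    def __init__(self, training_mean, tolerance, window=50):
        self.training_mean = training_mean
        self.tolerance = tolerance
        self.recent = deque(maxlen=window)

    def observe(self, value):
        # Returns True when the recent data no longer looks like the
        # data the model was trained on (i.e., possible drift).
        self.recent.append(value)
        rolling = sum(self.recent) / len(self.recent)
        return abs(rolling - self.training_mean) > self.tolerance

monitor = DriftMonitor(training_mean=5.0, tolerance=1.0)
in_range = [monitor.observe(v) for v in [4.8, 5.1, 5.2, 4.9]]  # looks like training data
drifted = [monitor.observe(v) for v in [7.5] * 20]             # sensor environment shifted
print(any(in_range), drifted[-1])  # → False True
```

A flag like this doesn't have to shut anything down by itself; it can simply route the decision to a human, which is exactly the tipping point in design discussed here.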

But like at the aerospace conference this week that was a big issue. It was like, well, you know, we really want the human to make the decision and so we don't want to really have the AI taking action above a certain level of functionality. And, you know, that's going to vary from industry and solution. But that is kind of a hot topic is now you could argue sometimes

it's better to have a non-human make certain decisions. I mean, you could argue that with autonomous driving, it could be a big improvement over some of the way people drive on the streets today. So yeah, I certainly believe that. I'd still say I was a pretty good driver, never had a bad accident. I've been driving since I was allowed to, and actually before I was allowed to. But I use, you know, the Tesla self-driving, yeah, most of the time, because I just

don't trust myself. I think me supervising it is much better than me just always paying attention. And so I believe it's safer. So that's always an interesting trade-off, right? I think as people get more confidence and trust in the systems, they'll probably give them more control over actions that AI can take. But, you know, actually just

having the data and knowing what's going on and having that process is probably a good start. So, so yeah, there's there's always issues there, but I think people are pretty cognizant of the risks and how to mitigate those. Yeah. So what are the opportunities to do good, you know, solving important problems and clearly, you know, monitoring a machine and preempting some failure, there's all sorts of reasons why that would be good. But let's let's talk about sustainability

specifically. What are the opportunities there, do you think? Yeah. Well, I mean, we're seeing a lot of action. Like, I mentioned university students, you know, using tinyML in their coursework. I sat in on a presentation from a university in Ghana where they were showing all of their kind of semester-long projects, a lot around agriculture and water management. So, you know, applying AI, especially low-power AI that could be solar powered or easily deployed

on sensors to help crop growth, improve yields, use less water. You know, these are all vital resources that have, like, a huge impact locally, you know, and being able to do that with a $100 kit or whatever is pretty impactful. So, you know, you don't need to spin up a VM on Azure and blah, blah, blah. And so we're seeing a lot of that, you know, even things

around, I mentioned water and water detection, water optimization. There's a company called Caliper in Australia that does a lot of water reclamation kind of technologies to make sure we're not wasting water in certain spots. And, you know, you can imagine in agriculture, you know, being able to make sure you're watering the right amount and not too much. So those are kind of really basic, easy things to help sustainability. There was also an interesting use case. I was talking to

someone from Morocco. So in Morocco, there's these trees, I forget the name of the trees, I'd have to look it up, but they are very important to the Morocco ecosystem because they prevent the Sahara desert from basically blowing winds into, you know, the cities. And so the health of the trees is really important. So they're building some systems there to do basically kind of anomaly detection of the health of the trees and giving a heads up. So think about, like, instead of

telemetry for preventive maintenance, it's telemetry for, you know, tree health. And so they can tell, especially in aggregate, what's happening with these forests and be able to take action on that, you know, long before there's visual evidence of a problem. So that was kind of an interesting use case too. So we're seeing, I even saw one, there's another good one. So I spent time in Massachusetts. I love Cape Cod, Massachusetts. And there's a lot of sharks out there.

Not like Australia, but, you know, big great white sharks and stuff. So it's always fun to swim in the ocean there. And there was someone who had invented these shark buoys. And basically they had these AI systems that would detect certain audio frequencies and audio patterns that sharks make. And they would float these buoys offshore. And then when the buoys detected the shark sounds, they would signal, you know, wirelessly back to the beach to say, hey, there's a shark

in this area. And so it's sort of like having a shark-watching person out there in the middle of the ocean all the time, looking for sharks and then giving a heads up back to the lifeguard that there's sharks in the water. So, you know, fascinating. That could be important. Totally. What was the founding story? How did tinyML get started? Yeah. So this is around 2018. So you have Evgeni Gousev from Qualcomm, Pete Warden from Google,

Adam Foux from NXP. A bunch of these folks got together and were kind of exploring, like, how do you actually do this? How could you actually put, like, a 10-kilobyte AI model inside a tiny sensor? And I think that's kind of how it started. They kind of got together to figure out what it would take to actually make this work. And so it was kind of, you know, as most things are, just a few passionate folks that got together and started talking and comparing notes. So there

was like collaboration between companies on how to solve some of these problems. And yeah, and then over time, you know, the problems got solved and there were new problems to solve. And the community grew and more startups were figuring how to do this. And now I would say like,

people know how to do tinyML. Although there are always new boundaries to push. And now it's more like, well, now we have an ecosystem of companies and partners that want to work together and accelerate the business and, you know, help people get trained and educated and create a talent pipeline for the next generation of AI engineers and things like that. So it's definitely matured. But it started with, you know, some folks getting together and trying to figure out how to

make something work. And, you know, that's the origin story. Cool. So what are the ways that people can engage and be involved? Do I join tinyML, or just watch? Yeah. So the public can engage with us through our Discord server, through our YouTube channel, through tinyML.org, you know, lots of ways. And you can get educated, if you're a student or a professional that wants to get upskilled.

There's lots of resources to do that. We do have a set of companies that sponsor the foundation. And so they can become a strategic partner and, you know, help support all the things that we're doing and there's benefits to doing that. But it's also just good to sort of reinforce our values and our work. And, yeah, and so, you know, certainly people can get in touch with me. Because we always need volunteers too. We get a lot of folks that are like, oh, I can help with

this hackathon or I can help with this mentoring thing or whatever. So always looking for people that just want to help. That's another way to get involved. So Pete, you've had an interesting career. You've been, like, programming at the BIOS level, and you've been working on the Windows smartphone, I think, if your LinkedIn profile is to be believed. And actually it's kind of funny, because I used to write device drivers that ran on top of other people's BIOS back in the 80s.

And I worked at Qualcomm when the Windows smartphone and Android and iPhone were first coming out. So we have a few parallels, and then this whole IoT thing. Tell me how you got your current job. But let's start, to the extent that you want to, let's start at the beginning. Because I'm kind of interested in how you navigated your path. Yeah, what's the origin story? Yeah, you know, so I graduated from Boston University in the late 80s, '88, with a computer engineering degree. It's kind of

a double-E/software combo. I came into college as a software geek, an 80s kid, and wanted to learn more about hardware. And it's funny, when I graduated, I was really enamored with the interaction between software and hardware, you know, sort of the blurring lines, right? Because it's sort of like, you know, well, hardware is software, you just can't recompile it. And so I was fascinated with that.

And actually, because I didn't have a job, I started working for a professor who was teaching an assembly language course. I had taken his course and I really liked it. I thought it was really cool because it was really kind of down at the metal. And I didn't have a job. And so, yeah, he built his own PCs in West Newton, Massachusetts, his own branded PCs called the Bitbucket. Bitbucket computers.

It's funny. I mean, I worked for a company that made computers back in the CP/M days. I mean, they literally had, in the basement, these vats of acid where they made the printed circuit boards. And then later we just bought printed circuit boards. Yeah. But what you're describing, in a sense, is at one end of the continuum of the Internet of Things, you know, software meets hardware; the hardware is where software meets the whole world. Anyway, back to you, back to you.

So, one of the things I did there was run engineering, which meant, you know, basically sourcing and assembling these computers with a little team in the basement. But I did a lot of work patching the BIOS. So they had a BIOS on there, and I had to write a lot of patch software to fix the BIOS for people. What's that? Oh, yeah, Basic Input/Output System. So it's a chunk of code that starts running before the OS is loaded, to kind of initialize everything and

kind of get everything set up. And you know, back in the day, it actually was kind of a standard interface to a lot of hardware, before Windows took over a lot of that interaction. So it was a pretty important piece. And I eventually got a job with the company that made the BIOS, called Phoenix Technologies, out of Boston. Oh, yeah, that was the name that popped up when you booted the computer. Yeah, the little computer. Yeah, Phoenix came up on them. It says AMI

or Phoenix or whatever. Yeah, yeah. Phoenix was like the original kind of third-party BIOS after IBM. And I ended up working for them for, like, nine years, which is like a long time. And through that process got a free one-way ticket to Silicon Valley and worked there. Went and did some startup stuff in the Bay Area, like everybody should. And ended up working at a company doing embedded Java. So embedded VMs; I was chief product officer at a company doing

that. This is now later, in the late 90s, early 2000s. And then we eventually got that Java to work, that JVM to work, on Microsoft's new mobile platform, which was called Windows Mobile back then. It's kind of a Windows CE derivative. So I was kind of new to the phone space. And then they eventually said, hey, why don't you just come up to Redmond and work for us, because we need phone people. And so I started at Microsoft as, like, a Windows Mobile person, working with developers and doing all kinds

of random stuff. And then that kind of went on for a while. I started on the Zune team. I was one of the first people on the Zune incubation team, to build the music player. So this was their answer to the iPod, was it? Yeah, yeah. And it's kind of made famous by the Guardians of the Galaxy movie. If you watch that, he has a Zune player. I don't know. Didn't sell that well. But it

was, you know, where hardware meets software, and it was a cool device. And so that's kind of what I did through Microsoft, was work on device-y things: phones, Zunes, Kins, Azure Edge, IoT, all that stuff. What's your diagnosis of why Microsoft could never crack the smartphone? I mean, they should have done. They had all that experience

with operating systems on different devices. Yes. Well, somewhere in the multiverse, there is a universe where Microsoft has the dominant mobile platform. We're just not in that universe right now. We definitely don't know. But actually, it's funny, because on YouTube I'm publishing, like, a nine-part series on the history of Microsoft's mobile platform. So if you search for that, we do interviews with all

of these Microsoft execs through the years. We're diagnosing that very question: what happened in that 20-year curve between the late 90s and 2017? Yeah. I was just wondering, well, your executive summary is... I'm going to watch this. I don't want to cause people not to watch it, but what's the punchline? It's, like, a nine-part series. It's many, many hours. The punchline is they should have gotten into the hardware business a lot earlier with their own phones. They kind of

came to that conclusion too late. They tried to ecosystem it like it was the Windows PC ecosystem. The dynamics weren't there. It crushed the business model, it crushed the motivation, it crushed the innovation. By the time they realized they should have been in the hardware business, when they bought Nokia, it was too late. They had lost the developer community and never could really get out of third place behind Android and Apple. There were windows along the way

where they could have made that investment just like they built the Xbox. They could have built phones. I get that in terms of competing with Apple, but Android, yeah, there was the Nexus phones, whatever they were. They did have a hardware platform. Why would it have helped having the hardware platform? Yeah. If they had their own hardware platform, they spent a lot of energy trying to get OEMs to build phones in very certain ways, very specific ways, which they could have

done themselves. They also would have generated a lot more revenue to drive more marketing, to drive more awareness with developers. Because at Microsoft, if you're not making a billion dollars, you sort of don't count. If you're doing OS licensing for phones, it doesn't add up to much. They were always underfunded relative to other Microsoft businesses. If they'd made their own hardware, they could have innovated, captured more revenue, done stuff. But yeah, that's all

hindsight at this point. But there were moments, and you'll see in the series, along the way there was a decision point: should we or should we not? And the decision was always not to, until they bought Nokia. And by then it was too late. Timing's everything. Timing's everything. Yeah. Yeah. If I look at, like, when you're doing a paradigm shift, it definitely helps to give an example of how it all works. The flip side is,

it's like, you want an ecosystem. But if it's so new that the ecosystem doesn't know how to coalesce, then I think it makes sense to do some of it vertically integrated and then kind of seed the pieces to the ecosystem, which is what Qualcomm did originally with, like, CDMA, which then became 3G. Everyone was going GSM, and so they said, okay, well, there's this wireless IP and no one else is going to do it. So we're going to make the handsets, the base stations. We're going to do

pretty much everything. And then they got it all working. It was better. And then they basically divested everything, sold off the handset business, sold off the infrastructure business, and stuck to the bits that they wanted, which was the licensing and the IP. Sort of betting on yourself to win, you know, that's kind of the strategy there. It's hard to get the ecosystem to make a bet if you're not willing to make the bet yourself. Interesting history there. But it was also, you know,

Microsoft was coming from this very Windows-centric OEM licensing model. And so it was almost, like, impossible for them to wrap their heads around, you know, making their own devices. I mean, they did eventually with Surface, right? I mean, they eventually sort of did. Yeah. But anyway, interesting history. But I was there. I left in 2023, about a year or so ago, after doing,

I was in the Azure engineering group doing IoT things, Azure RTOS and Azure Percept. And we tried all kinds of things to help proliferate that IoT ecosystem and get it to connect to clouds, you know, and arguably we weren't super successful. And, you know, AWS and Azure and even Google have sort of focused more on the cloud recently, the high-margin cloud business, especially with AI workloads. And yet, you know, IoT and devices on the edge continue to flourish and innovate. I took

over the tinyML Foundation back in April. So fairly recently. And for me, it checks a lot of boxes, because it's a nonprofit, it's about education and community, but it's also about edge devices and AI and cool, cool tech. So it's been a fun, fun journey to take all that history, my career history, and apply it to this kind of community-building exercise. So there's clearly a good fit with your skill set. How did they find you? I

assume that you didn't leave Microsoft to say, I'm going to run tinyML. Right. Yeah, I knew the folks there before, when I was at Microsoft. Evgeni Gousev is the founder of it and chairman of the board. And he's at Qualcomm, and we knew each other for years. And, you know, I was kind of minding my own business in my post-Microsoft life, doing some fun things. And I saw that they needed an executive director. And I was like, interesting. In fact,

I was running a podcast of my own, and I had Evgeni on as a guest. And then I was like, well, are you looking for an executive director? And then we started talking about, like, oh, what if we evolve the whole org to, like, you know, do new things in AI and what's happening with generative AI? So it sounded like a really exciting project with the community and, you know, evolving it just as AI is evolving as well. So that's how we kind of knew each other already. So

that's how it happened. Very cool. On to the music questions. Sometimes I ask people and, quite frankly, they're not into music. Like the CEO of Estimote, amazing company, amazing guy. He doesn't like music. Couldn't really come up with three things. I have a feeling that you might like music. So what's the first song that you chose that is meaningful to you? Well, let's see. My first song is Samba Pa Ti by Santana. If you're familiar with that, I think it's on the

Abraxas album. It's an instrumental. It's just a beautiful song. And I remember listening to it when I was probably a teenager, you know. It reminds me of sort of my early days when I was getting into music. Actually, Santana was the first concert I went to. Well, that's pretty impressive. That's pretty impressive. Yeah, it was cool. I think I was, like, 16 or 17, at the Saratoga Performing Arts Center in New York or something. And saw Santana there. And

yes, so Samba Pa Ti is always on my short list. I bet it was a good gig. Yeah, the show was great, if I remember it; I mean, it was a long time ago. Yeah, I would say number two for me is Can't You Hear Me Knocking by the Rolling Stones. I love that song. Love it. Such an iconic opening riff, you know. And I'm a big audiophile as well, and that is, like, my

reference song. So when I try to listen to equipment and really see how good it is, I always try to play that song, because I feel like I know what it sounds like when it's good, you know. I mean, I know exactly what the instruments should sound like. And I've listened to it on some very high-end systems. And so for me, that's a reference song

as well as a great song. That is interesting. So I, like, pretend to be an audiophile. I'm not really, but I've hung around with people who are. And I've got an amp, which is a valve amp, and, you know, a Rega Planar 3, which over in England is kind of a good budget high-end turntable. And I love that. I'm going to have to get the vinyl out and see. We used to play Guitar Hero, and that was my favorite song, because I can't play the guitar,

but I'd pretend to play the guitar when that was on. His playing is pretty awesome. Classic. And there are so many different interesting things about it; I could go on forever about that song. So that's my second one. My third song is a song called Sitting on the Bottom of the World. And that's just a song I wrote, and it's on Spotify, and I did it with my band

and we recorded it in a studio here in Seattle about two years ago. And it was always kind of a crowd-favorite song, and I love the song, and it's got a lot of interesting context on things in my life and my son's life and all kinds of things. So I really love that song. And you can find it on Spotify. So I had to throw that one in there. You know, it's one of my favorites. If it's not prying too much, what's the connection with your son?

Oh, well, my son kind of struggles with some things, you know; like a lot of parents, we have kids that struggle with things. And this is kind of about some of those struggles, you know, and trying to support him, you know, trying to support your kids through their struggles. It's challenging because, you know, especially as they get older, there's only so much you can do to help them help themselves through those struggles, right? So sometimes you feel like you're sitting

on the bottom of the world, right? Trying to help them. Yeah. I can completely relate to that, absolutely. And yeah, my kids are 21 and 24, and, you know, I really think in a sense we kind of had it easy. I think we had it just about right. You know, people had more or less stopped beating their children savagely to get them to do what they did, but you still were kind of expected to get on with it. So you didn't have an expectation that life was going to be

painless and easy. And so you ended up... I think it's really hard for kids now with social media. Yeah. Yeah. Well, that's very tough. It's a challenging world to live in as adults. I mean, we are reasonably functioning adults. Imagine having your whole life ahead of you and trying to figure out, what do I do with this world, you know? It's challenging. So I've got to ask you, what's your audio system? What audio systems do you have? And what was

like the best one that you ever listened to? Oh, well, there's a... I'm trying to remember the name of the store in Seattle, but it'll come to me. I'm having a brain fart. But it's all used audio. It's up in Ravenna. Okay. And they have some incredible systems there, you know, McIntosh systems and, you know, B&W speakers and all the good stuff that I don't have. But I've owned a

lot of hi-fi over the years. And I've sold off pieces over the years as I've moved and, you know, had various... you know, you have certain aesthetic requirements put on by your spouse, on, you know, how big the speakers can be and stuff. So I've sort of adjusted. But I still have my Denon rosewood turntable that I've had for a long time, which I love. And I'm actually really into what they call Chi-Fi, if you've heard of Chi-Fi.

It's, like, Chinese hi-fi. So there's all that really kind of... yeah, it's all Chinese-made hi-fi stuff, tube-based amps, preamps. Exactly. So I have kind of a lot of Chi-Fi stuff now connected to this turntable. Oh. And that's kind of my setup right now. I'm just fascinated with that, because there's such weird innovation going on there, and you can buy stuff without spending a ton of money, and try it and see what happens, you know. But yeah, so that's kind

of my typical setup. And then I've always saved my best audio setup for my, for my car. Because it turns out, if you're in your car, you can turn it up as loud as you want. And so I always over-invest in the audio systems in my vehicles and kind of spare no expense on that. So that's kind of my primary listening room, my vehicle. Yeah. Very smart. Very smart. Yeah, I definitely do crank it up when I'm driving and there's no one else there. My dream set of speakers is the B&W Nautilus,

you know, the ones that look like a massive... Yeah, the Nautilus. Yeah. Shell. They're, like, this big. That's what I'm after. All right. Okay, I feel like we should go for a beer, but that's for another time. Thank you, Pete, for being on the show and indulging all my personal questions about your life and career and music taste. No problem. Very good. Well, that was my conversation with Pete. I hope you enjoyed it as much as I did,

and I really did enjoy it. I want to thank you for sticking through to the end, and Aaron Hammack, who has to stick through to the end because he edits this podcast. And I want to thank you again if you're one of the people that helps promote this show to friends, colleagues, people on the internet, or one of those few people that write the ratings that apparently make a big difference to the standing of shows like this. So until next time, be safe.

This transcript was generated by Metacast using AI and may contain inaccuracies. Learn more about transcripts.