I think we've officially come full circle. We are recording in the master bedroom of an Airbnb. You know, we went around, did the scientific testing, and determined acoustically this was the best location to record the show. We don't want to get any lectures from Drew. No, no. And thankfully, I don't think we had to tear apart any beds for this one. But it's funny because the studio where we record is actually my former master bedroom converted into a studio.
I do think we'll have to let Brent tear apart a bed after this just to get that energy out because he was ready to go. I was planning ahead and we're only using one mattress. [Music] Hello, friends, and welcome back to your weekly Linux talk show. My name is Chris. My name is Wes.
And my name is Brent. Hello, gentlemen. Well, coming up on the show today, we're reporting from Red Hat Summit, and we're going to bring you the signal from the noise and why this summit has me flirting with something new. Plus, your boost, a great pick, and more. So before we go any further, let's say good morning to our friends at Tailscale. Tailscale.com slash unplugged. They are the easiest way to connect your devices
and services to each other wherever they are. And when you go to tailscale.com slash unplugged, not only do you support the show, but you get 100 devices for free, three user accounts, no credit card required. Tailscale is modern, secure mesh networking protected by WireGuard. Wow. Easy to deploy. Zero config. No fuss VPN. The personal plan stays free. I started using it. Totally changed the way I do networking. Everything is essentially local for me now.
Tailscale has bridged multiple different complex networks. I mean, I'm talking stuff behind carrier-grade NAT, VPSs, VMs that run at the studio, my laptop, my mobile devices, all on one flat mesh network. It works so darn good that now we use it for the back end of Jupiter Broadcasting server communication as well. So do thousands of other companies like Instacart, Hugging Face, Duolingo. They've all switched to Tailscale. So you can use enterprise-grade mesh networking
for yourself or your business. Try it for free. Go to tailscale.com slash unplugged. That is tailscale.com slash unplugged. Well, we are here in our Airbnb. Yeah, why are we doing housekeeping at someone else's Airbnb? I know. How do they, you know, these Airbnbs, they just get more and more out of you every single time. But maybe it's because we brought our own mess. Well, actually, it's not so much of a mess. It's actually going really well.
People are getting excited about our Terminal User Interface Challenge. We are still looking for feedback on the rules. We have it up on our GitHub. We've seen some already good engagement, though, people talking about it in the Matrix room. So we're getting really close to launching it when we get back. It's the final week before we launch, essentially.
We're going to get back in the studio next Sunday. We're going to sort of set up the final parameters of the challenge, give you one week, and then the following episode is going to actually launch. Get ready to uninstall Wayland. Yeah, take the TUI Challenge with us. There's a lot there. It's looking like it's going to be a lot of fun, and we're going to learn about a bunch of new apps I never knew about. We have a lot of work to do because the listeners, they're way ahead.
Also, a call out to the completionists. We're doing this for a couple of episodes. We know a lot of you listen to the back catalog, listening in the past, if you will, and then you're catching up. We recently heard from somebody that was about 15 episodes behind, and it got us thinking, how many of you out there are listening in the past? So when you hear this, boost in and tell us where you're at in the back catalog and what the date is.
Until those darn scientists finish up. This is the closest thing we have to time travel, okay? And then one last call out for feedback. This episode, I'm getting into why I am switching off of NixOS. And this isn't a negative thing about NixOS, but I thought I'd collect some information. If you tried it and bounced off of NixOS, boost in and tell me why.
I'll be sharing my story later. But also, if you're sticking with NixOS, I'd be curious to know what it is about it that's absolutely mandatory that you wouldn't give up. Boost that in as well, or linuxunplugged.com slash contact. We'll have more information about that down the road, because really, it's just ancillary to what this episode is all about. And that's Red Hat Summit. So we were flown out to cover Red Hat Summit, as we have done for the past few
years. And the ones where there's a Red Hat Enterprise Linux release are always really the most exciting. And Red Hat Summit 2025, here in Boston at the Boston Convention and Exhibition Center, was May 19th through the 22nd. And they did something a little different this year. They decided to make what they really referred to as Day Zero Community Day. So a track that in the past sort of ran adjacent to Red Hat Summit is now a dedicated entire day. And I thought I'd go check it out.
Welcome to Community Day at Red Hat Summit, day one. And it's all about, you guessed it, artificial intelligence. Well, okay, and Linux. But they made a pretty good call. They said, hey, Red Hat is working to set the open standards for Red Hat and for data and for models. And here at Summit, you can interact with us directly and inform how we participate in those. So sort of like get involved in AI through Red Hat, a call to action, as well as just general information about today's event.
You knew right from the beginning, OK, it's going to be another year where we focus on AI quite a bit. But this was a kind of a different call. It was there's a lot of impact still to be made for open source AI. And we're really as a company, Red Hat's really making a push. So why don't you get on board with our open source initiatives and inform the conversation there?
We'll push the wider industry based on your feedback. I mean, I do think that's a trend we see play out over and over, both between, you know, Red Hat interfacing with the industry, but also really leveraging and in many cases sometimes being driven by what's available and what's happening in the open source side because they really have skill sets and how to, you know, turn that into an enterprise product. So the better the open source side gets, the better their product gets.
I wasn't really sure what the focus would be this year. I mean, I knew RHEL 10 was coming, but I, you know, last year was really focused on local AIs. Could you do two years of Summit on AI? And this was Brent's first Red Hat Summit. Which is hard to believe, really. And we wanted to capture his first impressions sort of right there after he'd had a chance to walk around on what they're calling Day Zero.
Well, this is my first time here at Red Hat Summit. And I got to say, you guys warned me about the scale of this thing. Wow. Just the infrastructure and the number of booths and the number of people and like how organized it is to get everybody all here and doing the things they're supposed to be doing. I am a little overwhelmed by just the size. I bet you they spend more on hotel rooms than I probably make in five years. I don't know, maybe more.
Oh, gosh. I do have a nice hotel room, but even just the layout, like how everything's so close. You don't have to go very far.
You don't have to travel. Everybody sort of knows where things are; there's people just standing around helping with wayfinding. Like, it's super impressive. The vibe's a bit different at LinuxFest. They weren't even doing registration, they weren't counting attendees, there wasn't anything like that. And here you have to get your badge scanned to enter every area and every room, and the security is definitely a higher presence. What impression does that leave on you?
Well, I guess there are relationships being built here that are very different than the relationships being built at other conferences, right? Like we saw some negotiation booths. Didn't see those at Linux Fest. So it's a bit of a different feel, but there's some real stuff happening here. Some real connections being made. Day two should be even more interesting. Really, it feels like maybe things are just kind of slow rolling today.
Did you get that impression that it's just sort of not quite started yet? It seems like people are still arriving and warming up to the whole situation, getting the lay of the land. So I'm excited to see tomorrow. That's when all the exciting stuff happens. Yeah, just wait. You get to wake up real early for a bright and early keynote, Brent. I forgot about how we have a time zone disadvantage. So day zero, if you will, was sort of the ideal day to go see the expo hall.
These expo halls are just quite the spectacle. I mean, the crews that come in and set these up in an amazing amount of time, they also have all of this racking they do for the lighting. I learned it took them two days to put all that together. And apparently that was like quite a miracle. Yeah. I mean, these booths are structures with like areas inside them and, you know, massive displays and LED lighting embedded everywhere. These are your highest of the high-end display booth type stuff.
I mean, this is really nice stuff. And we wanted to see it before it got too crowded. Well, you can't do day one without doing the expo hall. And it's an expo hall. Let me tell you, it's a whole other scale than, well, LinuxFest Northwest or SCALE. Lots of production, lots of money, lots of lighting. And right now we're standing out front of the DevZone, which seems to be one of the more popular areas, and in particular the DevZone Theater. And what do they seem to be going over, Wes? Wes?
Yeah, they're talking about the marriage of GitOps and Red Hat Enterprise Linux image mode. And despite us just being in a packed talk, I think there might be more people trying to watch this here on the Expo Hall floor. I think there's a lot of excitement around image mode and the things you're going to be able to do or can already do. We're tying it to existing declarative workflows with patterns that developers like that now can meet the infrastructure.
There does seem to be a real hunger for it here. Like, it's standing-room only right now. And they're doing a live presentation, too, so there's a screen. And everybody's trying to see it, but there's so many people in the way. Like, we're here in the back. We can barely see the screen. So, Brent, what do you think of this expo hall compared to other experiences you've had? It is very large. I've got to say it's very well-spaced. Like, you can see a ton.
It's not like these little cubes. Many expo halls just feel closed in. And this is open and breathy and tons of people, but it doesn't feel squished together. And it's bright and, I don't know, innovative? You know what does feel squishy? This floor. Why is this like this? So we're in the, like, dev room cloudy space? I know. We're at app services, Brent. Oh. See, I'm confused. But the flooring, they've added extra cush. It's, like, very cloudy. Feels good on the tired feet.
Now, something that you caught in there was image mode, and there was buzz on the expo hall floor about image mode, but RHEL 10 hadn't actually been announced yet. So we hadn't officially heard the news about image mode, but staff were walking around and literally asking, have you heard any leaks about RHEL 10? You heard anything? Because there's some things going around.
And then we were like, and I can't remember what we said, and it was like, why don't you tell us what the leak is, and we'll tell you if we heard it, was our answer. So there was some anticipation around day two and the keynote, because that's where we expected to get the official news of RHEL 10, and Matt Hicks, the CEO of Red Hat, kicked things off. Welcome to Red Hat Summit 2025. This is our favorite week of the year, and it's great to have so many customers and partners here with us in Boston.
There's so much to learn this week, and we hope that each of you can come away with a new insight to improve your business, yourself, and hopefully strengthen one of the things that brings many of us here, open source. He had an analogy pretty quickly after that, where we all three looked at each other in the dimly lit keynote room and we're like, what? So I wanted to play it again for us so we could actually have a conversation
about it. This isn't about replacing your expertise. This is about amplifying it. I recently had to explain this tension to my 10-year-old son who loves basketball. This is how I explained it to him. Imagine a new sports drink comes out, and when you drink it, every shot you take goes in. How would this change the world of basketball? Now, an extreme position is it would kill the world of basketball. How can you have competition when a middle schooler could shoot better than Steph Curry?
But I don't think that is necessarily true. Strength still matters. Just getting the ball to the rim from half court is no easy feat. Defense still matters. Your shot can be blocked. Speed still matters. You have to get open just to take a shot. So yes, a sports drink like this would drastically affect one aspect of the game, accuracy.
But how can we possibly understand the impact on a game just by removing one factor when there are so many others in regards to height, speed, endurance, athleticism, strength? A change like that would fundamentally change the world of basketball that my son knows and loves. It would change who could be great at the game. It would change the focus of the game. It might change the rules of the game, but it would not eliminate the game.
I believe we would take these factors, we would shape them into a new game, and given just the inherent creativity in people, that new game would be better. Right now, that's exactly where we are with AI. We're in the moment of uncertainty between games, between worlds. We have to simultaneously understand that while the fundamentals that we know are changing, maybe beyond the point of recognition, there are so many other factors that come into play in terms of creating true business value.
So there's a couple of things that jumped out at me during the keynote when he said that. And I think the first one was, this is, again, the CEO of Red Hat. And I think he just gave us an analogy for what they view AI as: this almost magic sports drink that means that if they can get everything else to line up, all the other supporting players in the game to line up, then they have this solution that's going to let them get nothing but net.
That is, that's essentially like a makeup company saying they have discovered the fountain of youth, and they're going to bottle it, right? I mean, that is the biggest of the biggest statements. So just starting there, before we even get into the other aspect of the analogy, what are your impressions of that? Well, I think it's a big deal. Like, AI is all uncertainty currently, but this statement feels like we know exactly the direction we want to go in.
We are already working towards it, and it's already doing things for us. And there's still a lot of vision here from a company that otherwise didn't work on AI, right, until recently.
And it does seem like they have a lot of the supporting products in place to realize this idea that he put out there, and we can get into some of that later. But they have several product pieces that sit on top of RHEL that are trying to enable this vendorless, accelerator-neutral, back-end-neutral AI system that's local or in the cloud. Also, when it's in the cloud, it's completely vendor neutral, from Oracle to Azure, or you can run it on your own infrastructure
and you know, pick your back end models. So they're trying to put all the supporting players in place, but it's, to me, it still feels like a real wild analogy. See, I think I see it more as trying to acknowledge the fears of folks around AI and the uncertainty, but making a pitch on the sort of human enablement side, right? Like kind of talking to the people who have to work with their products and administer them and saying like, we think this will make you more effective in that goal.
And then to your point, on the other side, they're then working to make sure that their technology is ready to meet that and interface with whatever AI power-up you are able to get. The way I interpreted this... Re-listening. I think the look we gave each other live was like, what? How does this, what's this trying to say? And re-listening to it here live, I got a little confused because at first he set it up as like, open source is great.
Here's the basketball, you know, sports drink thing that gives you superpowers. And I thought, okay, open source is the sports drink. And it allows all sorts of new things to happen and all sorts of new technologies to flourish because you've solved that problem in a way that is collaborative, etc., etc. And then he quickly shifted to the AI piece, which almost reflects, for me, the trajectory of Red Hat.
Yeah, they very much came to the point of saying we see the path that AI is on right now as a similar path that open source was on and Linux was on 10 to 20 years ago. While this might feel new for many of us, this isn't the first time we've experienced this in software. In fact, when open source emerged, there were a lot of people that felt the same way about it. Open source challenged how software created value, even what competition meant.
It removed barriers that defined proprietary software. It even added a new factor around collaboration being critical for success. And in that challenge, it was feared, resisted, ridiculed, attacked. And yet, last year, there were over 5 billion contributions made to open source software. Despite the fear, despite the attacks, despite the disruption, open source still changed the world of software. I felt that potential in my first experience with open source.
It captured my imagination along with millions of others. It defined my career along with millions of others. Where others saw fear or disruption, I saw potential, along with millions of others. That is exactly what we're experiencing with AI right now. The world that many of us know is open source and software and IT. We have shaped this world over decades, and now the rules are changing.
And while that can be scary and that can be disruptive, if we take a step back, the potential is also undeniable. I would be really interested in the audience's thoughts on the parallels and analogies that Matt was drawing here. Boost in with your thoughts; if you agree, if you strongly disagree, I'd really like to hear that as well. But I think the news we were sitting there waiting for was actually RHEL 10. And so Matt steps off the stage for the first time and we get into the news.
Please welcome Red Hat Senior Vice President and Chief Product Officer, Ashesh Badani. [Music] Everywhere you turn, the world is running on Linux. Tens of millions of people trust Linux to power the critical infrastructure. And trillions of dollars a day is dependent on Linux. For more than 20 years, Red Hat Enterprise Linux, or RHEL, has been the trusted platform for organizations around the world. It is the heart of Red Hat's portfolio and the foundation of our core technologies.
But Linux is often managed the same way it was 10 or 15 years ago. Today, we're changing that. We're giving Linux admins new superpowers that allow them to wait less and do more. That's why I am so excited to announce RHEL 10. This is the most impactful, most innovative release we've had in a long time. And image mode is one of those reasons. We'll get to that in a moment. But there was another announcement up on stage that I wanted to include too. And that was something they're calling llm-d.
Reasoning models produce far more tokens as they think. So just as Red Hat pioneered the open enterprise by transforming Linux into the bedrock of modern IT, we're now poised to architect the future of AI inference. Red Hat's answer to this challenge is llm-d, a new open-source project we've just launched today. llm-d's vision is to amplify the power of vLLM to transcend from single-server limitations to enable distributed inference at scale for production.
Using the orchestration prowess of Kubernetes, llm-d integrates advanced inference capabilities into existing enterprise IT fabrics. We're bringing distributed inference to vLLM, where the output tokens generated from a single inference request can now be generated by multiple accelerators across the entire cluster. So congratulations to all of you. You came here to learn about the future of Linux, and now you know what disaggregated prefill and decode for autoregressive transformers is.
It's actually a really significant contribution. So you could think of it as you submit a job to an LLM and then this system sort of sorts out the best backend execution based on resources, the type of job, the accelerator you might need. So it's taking something that is a real single pipeline and breaking it up with all of this backend flexibility. Here's how they describe it on the GitHub readme.
llm-d is a Kubernetes-native distributed inference serving stack, a well-lit path for anyone to serve large language models at scale, with the fastest time to value and competitive performance per dollar for most models across most hardware accelerators. So bringing that home and, you know, what it actually means in a practical sense for, like, a small business like myself, it would be maybe we have a few jobs that run on Ollama locally on our LAN hardware.
But every now and then we have a big job and we want to execute that out on cloud infrastructure. And this can help us do all of that and, you know, the orchestration of it. So it's actually, it's a pretty significant contribution and it works with vLLM, which we'll talk about more later or now.
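To make that a bit more concrete, here's a rough sketch of what submitting a job could look like, assuming an llm-d deployment fronting vLLM's OpenAI-compatible HTTP API. The gateway address and model name are hypothetical, not anything Red Hat showed us; the point is just that you talk to one endpoint, and the stack handles routing the request to whatever accelerators make sense.

```shell
# Build a request payload for a completion job. Model name is an
# illustrative placeholder; you'd use whatever model your cluster serves.
cat > request.json <<'EOF'
{
  "model": "granite-3.1-8b-instruct",
  "prompt": "Summarize what RHEL image mode does in one sentence.",
  "max_tokens": 128
}
EOF

# Against a live deployment, you'd POST it to the inference gateway.
# Commented out here because it needs a running cluster; the gateway
# hostname is a made-up example:
# curl -s http://llm-gateway.example.internal/v1/completions \
#   -H 'Content-Type: application/json' \
#   -d @request.json
```

Because vLLM speaks the OpenAI-compatible API, the same request shape works whether the backend is a single local GPU box or a distributed llm-d cluster, which is exactly the local-to-cloud flexibility described above.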
No, no, I was just going to say it is a big contribution and, you know, Red Hat's playing a huge part, but they also list right here folks like CoreWeave, Google, IBM Research, of course, as well as NVIDIA. Yeah, yeah. And there's been some news about AMD's interest and involvement as well. And the NVIDIA involvement is particularly interesting to me because this doesn't serve NVIDIA in selling more hardware.
This project actually enables people to distribute workloads to other things that are not NVIDIA hardware, that are cheaper things when not needed. And so it's pretty interesting to see NVIDIA actually engage in this process. I get why AMD is. But it's interesting to see NVIDIA engaged, even though it kind of, in a way, eats away at their hardware moat. And I think it's exactly things like that that are maybe drawing some of the parallels to the Linux evolution that we've been talking about.
Yeah, and so the behind-the-scenes conversations I had with Red Hat staff is essentially, this is where the users are. NVIDIA is doing this because their customers are asking them to, just like their customers asked them to support Linux years ago. So, yeah, that's the parallel there. So it was a long keynote. I'm not going to lie. It was two hours. And there we what we just shared with you were some of the highlights.
But there are also moments where, you know, they're trying to address multiple audiences. You have your technical people there. You have your sales people there. You have your chief technology officers there. And so in one keynote, they're trying to speak to all of these different diverse audiences that just don't really get the same messaging. And so you'd often have guests come up on stage that kind of essentially say
roughly the same thing. And it gets really business jargon heavy because you're speaking to that audience. So we sat there for a while listening to a lot of that and then also interspersed with like these really interesting technical moments. We just stepped out of the keynote. This was the big keynote. There will be a keynote every day, but this was the big one. It was a two hour chonker. And we got Red Hat Enterprise 10, which is pretty great.
And image mode was a big part of that. There were essentially four key things that they listed that they're really excited about in RHEL 10. And I think image mode is what they led with, and it was probably the one that stuck with me the most. They talked about how vendors like Visa want to be able to update their infrastructure as if it was a smartphone. And just flip a switch, and they've got the new updates, and it'll streamline updating security.
And I'm actually here for it. I hope it makes RHEL a little more maintainable for shops that are deploying it. But of course, for the second year in a row, the big topic was artificial intelligence. And AI was really baked into everything. And I'm just curious, Brent, as a first-timer, what your impression of all of the AI talk was. Because, I mean, you just can't prepare a guy for this much AI talk. The scale of the AI.
I did notice that they basically took each of their products and added AI on the end of it, which I didn't expect.
And nobody really addressed that, but they're just sort of spreading the AI throughout. I think maybe that's more of a strategic plan to, I don't know, be part of the future. But I'm curious how that dilutes the current products, or where they're headed with it. It is a lot of brands now to keep track of, and like I said, we're in year two of this, and I'm not 100% convinced that all of the people watching in that room actually have the needs they're addressing up on stage. I think some people do.
Airlines and Visa, I think they do. But I'm not sure everyone in that room was really feeling the urgent pressure to deploy AI to get a return on investment or total cost of ownership lowered for whatever they might have. And that's not to say that Red Hat doesn't seem to have found a more refined focus for their AI implementation. I think year two of this AI focus is actually a lot more practical. It's about shrinking the size of some of these models.
It does seem like they've found a few areas that they can bring some of the special Red Hat sauce to. Yeah, you know, okay, I think you're right that there are definitely questions around, is there this low-hanging fruit of, like, you got to meet this AI need, AI can do it today, you just have to figure out how to deploy it? Yes, for some, everywhere, maybe an open question. But I do think you have to give Red Hat credit.
Like, if you were trying to, if you are solving that problem, they have a lot of nice things in the works from, like you're saying, right, like quantized and optimized models that you can just get from Hugging Face or via catalog in your Red Hat integrated products. They've also been talking a lot about vLLM and turning that, via the new llm-d, into a distributed solution, right? So now you can do inference that isn't just running from a single process.
It's doing inference across your whole cluster of GPUs. And, you know, we saw folks today from Intel and AMD and, of course, NVIDIA. But it's nice to see, at least, whether or not you're really using it in your business, if you were to, that you would, in the future at least, have real options, not only between different models, but also different accelerators, as they put it. That distributed model stuff you were talking about, that was an opportunity for them to bring Google up on stage.
And the comment was, Google was our partner in crime in creating this. So they're really leaning in there. Microsoft Azure got a mention up on stage as well. So they're trying to present themselves as a vendor-neutral AI solution. And when I say trying to present, I think they are doing it. They're doing it successfully. So if someone out there is in this market, I mean, Red Hat is killing it. But for me, as somebody who's looking at the more practicals, RHEL 10 is it, right? You get...
Improved security, you get image mode. And the other thing that they talked about, almost as if it was new, is virtualization. RHEL 10 is clearly making a pitch to shops that want to migrate off VMware. Did you catch this too? Oh, yeah. I mean, the whole product offering and really the rise of OpenShift virtualization, you know, it's not necessarily net new. And things like KubeVirt have been around for a while to let you run VMs as containers.
But they didn't quite come out and say the word Broadcom. But you got the feeling, you could tell there were stories around like, oh, a year ago, we really needed to modernize or look into our virtualization spend. And last year, there was a lot of talk about the potentials, I think, at Summit, right? And folks were talking about OpenShift being well positioned.
And this year was a bit of a, let's show you all the successful customers now deploying and have migrated or are in the process of successfully migrating to an OpenShift and a Kubernetes-based virtualization platform. And we even saw like a variant of OpenShift announced that is basically just OpenShift tailored for only running VMs. So that's that's full circle.
The Emirates Bank was up on stage. I think they mentioned they had something like 9,500 to 9,800 virtual machines running under OpenShift Virtualization. And they also announced that OpenShift Virtualization is available on all major cloud platforms, including Azure and Oracle.
Wow. So when you think you've got a solution that works on-premises and something you can easily offload to the cloud, it actually kind of left me feeling like we need to play around with OpenShift Virtualization and just kind of wrap our heads around it. Just give me your Oracle API key. We'll get started. Okay. And I wasn't kidding either. I really felt like they made a good pitch for RHEL 10
and the OpenShift virtualization platform. I think it's something we are going to experiment more with and get more hands-on experience. It was actually a good, solid product. We got a hands-on demo for the press that they went through the dashboard. And it looked just as easy to use as Proxmox. Or if anybody's familiar with the later iterations of VMware, ESX, and things like that, it really sort of met those expectations as far as management and dashboard went. It looked good.
Yeah, you can tell it works really well if you have an existing sort of containers workflow in OpenShift and you want to add virtualization. But now they're even targeting it for folks that maybe haven't yet tried out OpenShift, but they're looking for a virtualization solution and you can get yourself an OpenShift cluster pretty much just tailored to run virtualization. And then maybe later you expand down to containers too.
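For a sense of what that looks like under the hood, here's a minimal sketch of the kind of VirtualMachine manifest OpenShift Virtualization (built on KubeVirt) manages: the VM is just another Kubernetes object. The VM name and container disk image are illustrative assumptions, not from the demo we saw.

```shell
# A minimal KubeVirt VirtualMachine definition: a small Fedora VM
# booting from a container disk image.
cat > fedora-vm.yaml <<'EOF'
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: fedora-demo
spec:
  runStrategy: Manual
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 2Gi
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/containerdisks/fedora:latest
EOF

# On a cluster, you'd apply and start it (commented out; requires a
# running OpenShift/KubeVirt cluster):
# oc apply -f fedora-vm.yaml
# virtctl start fedora-demo
# oc get vmis
```

The appeal for VMware migrants is that the VM then gets the same scheduling, storage, and networking machinery as any container workload, managed from the same dashboard.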
So there was something that really got my attention, and I am thrilled to see Red Hat pushing further down this path. And you see it also becoming really popular with Fedora Silverblue. You see it with Bluefin and Bazzite and the uBlue universe of operating systems. It's using images to manage and deploy your infrastructure to get immutability. And image mode is something that Red Hat is focused on. They're taking bootc and they're bringing it even further.
And we had an opportunity to sit down with the product manager of Image Mode for RHEL and we got all the inside deets. Well, I'm standing here with Ben and he's the product manager for Image Mode for RHEL. And I asked him to try to give us the elevator pitch of what Image Mode is. Yeah, well, that's a great question. So, okay, we know containers, right? We've been building containers for applications for a decade now.
All the same ways that you build containers and manage them, we now can do that for full operating systems. We're going to change one important detail, right? We all know a Docker container, it's going to share the kernel, right? Well, these base images that we use for this, they're now bootable containers. So the kernel is going to live and be versioned in that container, right?
And so now we're going to take that, we're going to write it to metal, we're going to write it to the VM or cloud instance, whatever. And now that server is going to update from the container registry. So now all of your container build pipelines, whatever automation you're using for testing and verification, now you can do that for operating systems. So it's really the same tooling, tool set, language, same everything: what you use for your applications you can now use for your operating system.
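To make that pipeline concrete, here's a minimal sketch of what a bootable container definition can look like. The base image below is Fedora's published bootc image, but the package choices and service are just illustrative examples, not something from the interview:

```dockerfile
# A bootable container: the base image ships a kernel, not just userspace.
# The package and service below are illustrative examples.
FROM quay.io/fedora/fedora-bootc:41

# Layer on whatever your server needs, same as any container build.
RUN dnf -y install nginx && dnf clean all

# Enable services so they start on the booted system.
RUN systemctl enable nginx.service
```

You build and push it with the usual `podman build` and `podman push`, and an installed machine tracks that registry tag, staging updates in the background and applying them on reboot, as Ben describes.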
So the world we're living in is complicated enough. It's only getting more complicated. So anything we can do to simplify and reuse and just get people to value faster is the way to do it. And that's what you get with Image Mode for RHEL. That does sound very nice. So how are you booting an image? Is bootc involved here? Yes. Bootc is the core of the technology, which stands for boot container.
It's the magic that, like, kind of closes the gap between the tarball that your container image is and, like, the system. It gives you, like, an A/B boot feel to the system, right? So when you update, you stage the next one in the background, and you can reboot, and now you're in the new one, right? So, bootc is the core of this and the core command line when you need to update the image or switch to a different one or reprovision the system. So, yeah.
And bootc went into the CNCF. It's a sandbox project now. We're working on getting to incubator status. So, yeah. My recollection is that we got bootc at the last summit. Bootc was announced. So has this been kind of in the works since that announcement? Yeah, exactly. So we did a big announcement last year. Since then, we've been working with a lot of customers on getting them to production,
right? We just had one mentioned in the keynote. We had another one speaking yesterday. I don't know if I can say names on this, so I'm going to leave it out. But I don't know. It was great. We have another one speaking later today. And then one of the hyperscalers is demoing it right now. So, yeah, I would say just the traction we're seeing has been awesome. So it definitely feels like that fit where it's the right tech at the right time for people to be using it.
Yeah, I'm curious. I felt like when we kind of heard stuff last year, it was co-announced, or at least sort of pitched a bit as being motivated by problems specifically around AI workloads. You know, like, here's this new mode of operations we think would be a really good fit. But I'm curious. Last year, we heard a lot of sort of like, okay, we're starting there, but we think the applicability is a lot broader, and I'm wondering if that's kind of showing up in customer adoption.
It's way broader. I think I almost look at this as just kind of an image flow that's very general purpose, right, which is where you can get to quite quickly. So, yes, it's still very relevant for AI. RHEL AI actually ships as a bootc image, right, and we run it that way. I would say one of the big values there is anytime you're connecting a complicated stack, where I'm versioning a kernel, kernel modules, different frameworks, libraries, where it's a Jenga stack, which a lot of AI looks like these days.
Building with containers solves a huge amount of versioning problems. We want to get people out of the state where I DNF update a package and, oh, now my storage doesn't work because there's a lag over there. No, if the build fails, it'll never hit your server. This is, when you use containers, that just becomes so easy, right? Again, it's about going back to simplifying all the complexity we have and getting to value is the whole thing, right?
I'm just curious, what does it look like, you know, for folks maybe who have never tried image mode but have experienced regular RHEL deployments, how do you get started with, like, a new system that's full-on image ready? Great question. So there's different paths. It depends on your environment. So the answer may change a little bit depending on what your needs are. But in general, I think Podman Desktop is probably the easiest tool.
It's no cost. It runs on any platform. So if you're working on a Mac or Windows, we'd love to upgrade you to RHEL. But, you know, we get it, right? So you can put this on. There's a bootc extension. You can build containers. You can convert them to images. You can boot them as a VM, all from Podman Desktop. It's amazing. I use that today. Now, I immediately then switch to versioning everything in Git. And I have GitHub Actions do everything.
So my good buddy Matt here and some other colleagues put together templates for all the big CI/CD systems. So if you want to just get started with, say, GitHub Actions, GitLab CI, Jenkins, Tekton, Ansible, you get the idea. It's infrastructure agnostic, right, is the whole thing. We've got all the templates. Clone the one you need. It's so easy. So we kind of have a good path whether you want to work locally or you want to work in, like, a Git model.
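As a rough idea of what one of those CI templates might look like, here's a hypothetical GitHub Actions workflow. The image name, registry, and secret names are made-up placeholders for illustration, not Red Hat's actual template:

```yaml
# Hypothetical workflow: build a bootc OS image on push and publish it.
# Registry path, image name, and secret names are placeholder values.
name: build-os-image
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build bootable container
        run: podman build -t quay.io/example/my-os:latest .
      - name: Push to registry
        run: |
          podman login -u "${{ secrets.REGISTRY_USER }}" \
            -p "${{ secrets.REGISTRY_PASSWORD }}" quay.io
          podman push quay.io/example/my-os:latest
```

The point of the Git model is that the same push-build-publish loop you'd use for an application image now produces your operating system image.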
Those are the two paths I would steer you towards. Given bootc and Image Mode are relatively new, what are the challenges coming up that your team's going to be working on? Well, we've got a big roadmap. We're adding more security capabilities. You know, I mean, there's multiple ways to answer your question. But let me talk about security, right? Because this is forward-looking stuff here. We have all the pieces, and we're working on stitching them together.
Because what we want to do is take the way you sign applications, with, like, cosign for your container image. We can take the same basic key and actually inject that into firmware, if it's UEFI, or inject it into the cloud image, right? And then from there, we can have a post-process step on your container that makes a UKI, a unified kernel image, right? That is signed, we get full measured boot. And then the root FS of that container, that digest is in the UKI as well.
So, if your root file system gets modified at all, that gets caught. It's the holy grail security story, that tamper-proof OS that we've been chasing. So, bootc gives us all the things we need to stitch that together in Linux and make it easy. Because today this stuff is possible, but there's, like, five people on Earth that can do it today, right? And I want, like, me to be able to do it, right? And so, we're pretty close. My goal, again, these are forward-looking statements and all that.
But I hope next year at Summit that's what we're talking about and everyone is like, wow. That'd be great. I'd love to catch up at next Summit and see how it went. Thanks, Ben. I'm particularly interested in Red Hat adopting this further because it brings a lot of what I like about NixOS and what I like about Bluefin and Bazzite, but it brings it to the enterprise operating system, and it could solve so many problems.
And you guys know I've talked about this, but the other reason why I kind of like this approach that they're doing is while it is a top-down system, it is leaning into workflows that people already understand. They're already deploying containers. They're already using GitHub Actions or whatever they're using locally. There's tens of thousands of DevOps engineers out there that could start deploying their own custom bespoke Linux systems.
And this is why I got into Gentoo back in the days, because I needed very bespoke custom systems, and there was no tooling around this. There was nothing. I didn't really have a lot of options, so I went with Gentoo 100 years ago to build these really bespoke custom systems that then I would manage and orchestrate from this crazy scripting thing that I had set up.
But this brings this to everybody using systems that are maintainable, with Red Hat's backing and their whole CYA when it comes to certifications, licensing, compliance. I mean, it just makes me think of the other ecosystems here. Think about setting up a bootstrap system, just going from the base up, trying to get that going. And then for an RPM-style system, it's going to be different. And for an Arch system, it's pacstrap or whatever.
And there's all these different things. And then in this new world, you just change what base image you pull from. And it's just so much simpler. As somebody who used to really, really get frustrated managing systems where your only options were RPMs and maybe, you know, an RPM repo that got you what you need, this is just such a huge shift. And it was nice to be able to pick Ben's brain. One question I ended up having in all of this is: how old are these new packages?
Like, RHEL 10 just came out, but, you know, in enterprise, things are slightly more glacial than, let's say, NixOS, which we visited last week. So what are we looking at here, boys? Like, what does RHEL 10 actually have under the hood? Well, I believe it was branched off from Fedora 41. I think during the beta, maybe there was a 6.11 kernel, but it's shipping with Linux kernel 6.12. And then I believe GNOME 47... We also got DNF5 in Fedora 41, which is probably a big change.
When you look back at the Fedora releases, you can see, oh, Red Hat was trying to get this PipeWire milestone in. Red Hat was trying to get this DNF milestone in, because ultimately that became RHEL. And sometimes you see these things get packed into a Fedora release for that reason. And DNF5 is great. So, you know, for the parts where you're maybe not doing it with image mode, that will be killer. And also bootc initially shipped in Fedora 41.
So there you go. See, to me, it's like, if you like Fedora 41, well, now you get that in RHEL. It's basically Fedora 41 LTS, which is kind of appealing. You get GNOME 47 or KDE 6.2. You know, I had just a quick thought here on image mode and if it sees wider deployment. One small benefit of the approach, maybe it's a big benefit, is the A/B-style updates and rollbacks that this really easily enables.
And I was just thinking, you know, when we've seen recent issues, big problems with Windows deployments in the enterprise, where maybe something like a quick, easy boot, undo, boot into the last version rollback would have saved just billions of dollars of agony. And we know, right, like RHEL is deployed at or above the scale of Windows in these types of backend enterprise applications. So this could be huge. And I think so.
I think it's so monumental that it's making me seriously consider the Red Hat ecosystem for what I do, for what we do. Whoa. Yeah, we'll get into it. 1password.com slash unplugged. Now, imagine your company's security kind of like the quad of a college campus. Okay, you've got these nice, ideal, designed paths between the buildings. That's your company-owned devices and applications. IT has managed all of it and curated it, even your employee identities.
And then you have these other paths. These are the ones people actually use, the ones that are worn through the grass. And actually, if we're honest with ourselves, they are the straightest line from point A to point B. Those are your unmanaged devices, your shadow IT apps, your non-employee identities like me, a contractor. I used to come in and be one of those. I was always shocked, because these systems aren't designed to work with the grass paths.
They're designed to work with the company-approved paths. That's how these systems were built back in the day. And the reality is a lot of security problems take place on the shortcuts that users have created. That's where 1Password Extended Access Management comes in. It's the first security solution that brings all these unmanaged devices, apps, and identities under your control.
It ensures that every user credential is strong and protected, every device is known and healthy, and every app is visible. The truth is 1Password Extended Access Management just solves the problems traditional IAMs and MDMs weren't built to touch. It is security for the way we actually work today. And it's generally available for companies that have Okta and Microsoft Entra, and it's in beta for Google Workspace customers as well. You know what a difference good password hygiene makes in a company.
Now imagine zooming out and applying that to the entire organization with 1Password's award-winning recipe. 1Password is the way to go. Secure every app, every device, and every identity, even those unmanaged ones. Go to 1Password.com slash unplugged. That's all lowercase. It's the number 1Password.com slash unplugged. Now, as if two days of interesting stuff weren't enough, there was a third day with a brand-new keynote.
Well, here we go. It's day three. We're walking to the keynote right now. I don't know what to expect, because all the big announcements like RHEL 10 and things like that were announced yesterday. So I'm kind of going in blank, not sure what to expect. We'll find out together. One thing they came back around to during the keynote on day three was the security enhancements in Red Hat Enterprise Linux. And there is one particular area they really focused on.
Please welcome Red Hat Senior Vice President and Chief Product Officer, Ashesh Badani. Music. RHEL 10. RHEL 10 is the biggest leap forward in Linux in over a decade. And we didn't just get here accidentally: two decades of server innovation. Virtualization, containers, public clouds, and at each and every stage, RHEL has been the enterprise Linux standard. And now the AI era is here. And around the world, there are uncertainties. But in a world of uncertainties, one thing is certain.
Yeah, that's good. That sells it. Now there was, of course, the just general positioning of RHEL, right? It's an AI-first distribution, but also it is a post-quantum encryption distribution. That's a mouthful. We've talked a little bit about post-quantum cryptography. Let's go into that in some more detail. Can you tell us about the impact of quantum computing, which I'm sure the audience is really interested in, and why we need to prepare for a post-quantum future?
Sure. So in the not-so-distant future, quantum computers will be more readily available. And they'll be leveraged by bad actors to break today's encryption technologies. When that happens, sensitive data will no longer be considered safe. But organizations like NIST and the IETF are already working on draft requirements and standards of what will be needed in a post-quantum world. And Red Hat is ahead of the game here.
We are leaders in post-quantum security, and we've been working on those requirements to meet post-quantum cryptographic challenges for some time now. Because we know that we need to help our customers protect their data against future attacks and fulfill future regulatory requirements. RHEL 10 has the libraries, tools, and toolchains ready. So you can rely on us when you're ready to transition and start into a post-quantum world.
This is obviously early days, right? You hear the wording there, when you're ready to start transitioning. To a post-quantum world, right? These standards are very early, obviously. Yeah, I mean, we don't really even have the kind of quantum computers to really sort of test these fully out. So some very smart people have done some very clever math and devised, so far, our best takes on how we might defend against this. Yeah.
And Red Hat's there if you want to, you know, try to get ahead of the game. There's two things here. So number one is they're kind of pegging to the standard. So as the standard evolves, they will likely evolve their support for it, right? So that's kind of the beachhead here.
The second thing is you have to realize, I mean, I know you guys do, but you've got to just think: it takes 10 years sometimes for these distributions, these enterprise distributions, to really work their way out into the ecosystem. And so 10, 20 years from now, this could be a problem. And so if you start in RHEL 10, well, by the time people are running RHEL 13, hopefully it's in, it's baked in, and it's working.
The other thing that occurred to me yesterday is you have to think about the information that you're storing today and that might get cracked, let's say, in the future by quantum stuff just because it's sitting on disk. So getting in early, I guess, is the name of the game in this case. And I'm not trying to trivialize it, but there is also, I think, real value sometimes in just having one more checkbox that may get added to
security questionnaires that become standard in the coming years. That is true. Out of the box, you're good to go. You can say Linux covers this. It's not just something Microsoft is doing or whatever, or Oracle or whatever it might be. Yeah, there's a supported Linux platform that you can do that will be first class in that ecosystem. Now, day three, we wanted to just knock a couple of things off because we're at Red Hat Summit.
And so we had access to folks that you just normally wouldn't have access to in person. And we wanted to chat with the outgoing Fedora Project Leader and the incoming Fedora Project Leader, because both Matthew, as he likes to be called, and Jef were at Summit. And so we went to the community section, found the Fedora booth, and got these guys to sit down. Well, I have two quite important folks here. Gentlemen, can you introduce yourselves?
I'm Matthew Miller. I am the current Fedora Project Leader for about two more weeks. Two weeks. And you? I'm Jef Spaleta. I will be the Fedora Project Leader in about two weeks. So I see you guys are hanging around together. Is there like a transitional period that you're spending together for this transition? Yeah, basically. Jef started at Red Hat two weeks ago, and now we're trying to not scare him away. Or maybe not. I don't know. How's that going?
Yeah, I basically am looking at this as I am Matthew's shadow man, as it were, as a callback to some previous branding. But yeah, I'm here for the last couple of weeks with a fire hose of just Red Hat onboarding. And this week, I'm trying to meet as many stakeholders as I can that would like to leverage Fedora to get some innovation done. And instead of opining myself, I'm really in a mode where I'm taking in information from as many people as possible.
And part of that is getting as much headspace mapping from Matthew as I can. Yeah, like literally just taking his brain and trying to shove it into mine before Flock, when the actual handover happens. And is being here at the summit the first time you've spent time together in person? Well, not for many years. I've hung out with Jef before. Jef was active in the Fedora project at the beginning of time, as I was, and then he went off to do real jobs and stuff.
Jef, I was going to say, why Fedora? But it sounds like you've been involved for a long time. Yeah, I was, you know, there the first, I mean, eight years of the project. I mean, I was there before it was Fedora Linux, when it was fedora.us, as a contributor. So I was an external contributor through the first critical period when the project was being spun up. And then I took one of those paths-less-traveled situations in life.
I went to Alaska to study the Aurora and then eventually got to the point where I was off the grid for several weeks at a time doing research and I just couldn't contribute anymore. And so I had to step away from the project, which is actually pretty interesting because I have the deep project knowledge, the foundations. I understand what the project is supposed to be. But I've also stepped away. And after being an academic, I've done three different startups, three different sizes.
I did a small startup with a telemetry project, actually a wearable project, for a couple of years. I then worked for a company doing monitoring, Sensu, as a DevRel. They no longer exist; they were acquired. And then I worked for iSurveillance, and they got acquired. And so it's really interesting. I was getting ready to move back east from Alaska to follow my wife, who's got a job in Virginia. And it just so happened it lined up when Matthew announced that he was stepping down.
So it was like the stars aligned, right? So I come back east, basically pick up my life that I left when I went to Alaska. And it's like I'm right back where I started, like back into Fedora now, this time as the project lead. It seems almost meant to be. Did you get nominated by this gentleman or how did that process work? We had a lot of really good candidates and it was a super, super hard decision. And I think in the end we agreed the stars aligned here for this to be the best.
Very nice. Matt, why the decision to change things? Well, it will have been 11 years as Fedora Project Leader when we do the handover at the beginning of June there. So that's a long time. And I honestly, I love it, and I really could keep doing it. But I think it's good for the project to have someone else kind of looking over things. And it's good for me to find something else to do, although I'm not going to
go very far. I'm actually going to be still in the same group at Red Hat that does Fedora, Linux, community things. Does this just mean you get to play on things that are maybe less planned or you get to just kind of spend your time somewhere that you would like to? Well, I think planned is pretty ambitious for anything I've been doing. But the first thing I'm going to do is sleep for a week.
And then so I'm actually going to be a manager in there because I actually don't have any experience as being a full time people manager. And I thought I'd see how that goes and see how that broadens my view into working in an open source world. And we'll see where we go from there. And then, gentlemen, is there a mentorship process that's going on here? I know you said you're spending two weeks together, but is there anything more formal or less formal?
Yeah, so that's also, I think... I mean, it's been 10 years, so we don't really have a process for FPL transition that's there. But a lot of times it's been kind of being thrown into the deep end. Robyn Bergeron, my predecessor, helped me a lot, but was also very ready to be done with the job at the time. So I did a lot of making things up as I was going along. And I think Jef will get to do a lot of that as well. But I want to make sure I'm going to be there so I can share anything,
my thoughts on things without trying to, you know. I don't want to be one of those, I'm pulling the puppet strings behind the scenes kind of thing. I'd be very respectful of the new role, but I also want to make sure that I'm accessible. Because I do have a lot of knowledge about things that Jeff keeps telling me, did you make slides for this? Did you write this down? No, I have not, but I can tell you all about it.
So we'll try and get that transferred in a formal way rather than just, oh yeah, I should tell you this. Nice. And Jeff, what are you looking forward to when you get your feet dirty here? Well, I guess, like I said, I don't want to opine too much just yet, but initially what I'm really looking forward to is getting a sense of the health of the project because I think Fedora is now at that time where it's now a generational project.
And as I tell people who meet me, if you remember my name and you're still involved in the project, you're maybe a risk. You may be an institutional bus factor, or what's the better way of saying that? Champagne factor or desert island factor. We talk about llama farming. So I am concerned that people who are doing it for the full length of the project, they probably have institutional knowledge that we don't have a process to change over.
And we may be relying on them too much to do what I consider hero work. And I want to find that. I want to get a sense of where that is so we can have an appropriate process to mentor new contributors in. So that's my first thing, not technology, just get a sense of the health of the project. Because even though it is very stable in terms of output now, which was not what it was when I was working on it, you know, everyone says, like, yes, it's a rock-solid deliverable.
I want to get a sense of where the contributors are at and where the creaky bits are, right? So we're not burning out some people to make sure that that deliverable is happening. I mean, as I tell people this week, like, my mental model for this job is I'm the president of a weird university, right?
This job to me is, I'm not doing the work; the people in the community are the faculty and the students doing the work in the university. But Red Hat is sort of like the equivalent of the state legislature. They are investing the funding, and so I have to bridge that.
And so it's important for me to get face time with as many Red Hat stakeholders as I can so that I can build bridges and make sure that the community ethos and the process by which technology works its way through from Fedora up is something that they're getting the best value out of. Without disrupting the community, right? Because it's, like I said, like the university model in my head, every time I say it, I'm like, this is the right model for this job.
Because it's like, state legislatures and faculty, you know, are not on the same page all the time. And that's where the president of a university basically sits. And that's what it feels like. Well, Matthew, Jeff, like, thank you so much for joining us. And come on Linux Unplugged anytime. It's always nice to talk to you. And yeah, I'd be happy to talk more. Even when I'm out of the role, I'll probably have more spare time for just,
you know, sitting around pontificating about things. So that'll be fun. Sounds good. And Jeff, thanks for joining us. And we'll surely hear from you in the future. Yeah, absolutely. Thanks for having me on. Yeah, Matthew, that invitation to that mumble room is open all the time. Come pontificate with us anytime.
I'm also really glad we made that connection. I think it's going to be interesting to have Jeff on the show after he's got, you know, some time under his belt at the helm of the Fedora Project. I know you boys are looking forward to that, too. He just has such perspective, if you think about all the time put in. Yeah, yeah, really. I mean, it's pretty neat to have somebody originally connected with the project, took some time away to really get some perspective and come back.
And I like his model of a university. That's an interesting thought model, at least going in. It'll be fascinating to follow up with him and find out if that played out for him. I think the next few years should be fun for the Fedora folks. So on our last day, you know, you have to knock out the fun stuff, like seeing our buddies at the Fedora booth. and they had this machine that they were teasing. I had to try it. It's called the AI Wish Machine. Okay, so we have a little
experience here. Wes, do you want to explain what we're about to do? Yeah, it's the one thing I think so far at Summit that there's been a lot of hype around. We saw it advertised at the keynote on stage and Chris has yet to try it. It's the spectacular AI Wish Machine. Magic promises AI Wish is granted. Your wish, Chris, is its consideration. Chris, what are your expectations here? I mean, it was featured in today's keynote.
Well, it was before the keynote started. You know, like when you go to the movie theater and they have, like, advertisements up on the screen? This was up on the screen. It's something you've got to try. So I've got a lot of questions. You know, I've seen a lot of things here at Summit. So I assume this is going to kind of connect a few dots for me.
And if nothing else, give me some advice on how perhaps OpenShift could help revolutionize the JB infrastructure and really drive innovation and lower total cost of ownership. So that's what I expect it's going to tell me. You know, the other thing: we've been to summits before, and in particular last year, there was some pretty cool AI-powered stuff, you know, like walls and visualizations and changing-your-photo kind of thing. Could be something like that, maybe. So should we go over?
So I attempted it, of course, but everybody wanted their token, because after you complete the vending machine experience, the AI Wish Machine dispenses a token, and everybody loves their little swag. Okay, Chris, you've stepped up to the machine here, the AI Wish Machine. What's your first impression? Oh, it's popular. Two different people cut in front of us to use this thing. People apparently have questions. So the first thing I've got to do is I've got to scan my badge to make an AI
Wish. I'm going to go ahead and do that. Is it scanning? I don't think it's scanning. Try scanning harder. I didn't see other people struggle with this. Why is it not working? I got my badge in the hole. What is it? There we go. Right? Is it doing it now? Yes. Okay. Hello, human. Hello, human. She's rolling something. Scan your badge. Nope, it didn't get it. Oh, gosh. Now we got it. Okay, you may now make your AI wish. Okay, I wish to be rich.
Oh, no, you have to actually choose from these options. I wish to train models without compromising my private data. I wish to build and deploy my AI wherever I need it. I wish to easily scale my AI across my company. I wish to use my preferred AI models and hardware. Well, clearly, I wish to... none of these. I'm going to, I guess, scale it across my company, because it's the last of the things I want to do. So I'm just going to pick that one.
Easily scale your AI across your company. Okay, that's what I wished. And AI says, with some slow frames, I tried, but you'll need to insert a gazillion dollars. What? Why is AI hustling me for money? Processing your wish. Why is the frame rate like 15 frames per second? If your AI solution won't work with you, it won't work for you. When you need your AI to scale on your terms, yeah, you need Red Hat. Thanks for playing. That's it? Grab your pin and then visit the booth to talk to a Red Hatter.
Well, where's my pin? Oh. Okay, let's get this... Oh, it's a red hat with AI sparkles. Okay. Well, Chris, come over here. I'm so excited to learn. How was your experience? I'm not sure what was answered. I think that just told me to go to a booth and I got a pin. I like pins, I guess. But how was your AI experience? Bad, man.
That wasn't really the best experience, but one thing that was kind of low-key talked about at the keynote that I think you picked up on, Wes, as maybe going to have larger implications down the road, is Red Hat seems to be embracing MCP at all the different levels. Yeah, definitely. This is something we had on our little buzzword bingo chart going into this. I'm not sure if we'd see it or not, because it's kind of relatively new even in just the broader AI universe.
It's the model context protocol, and it's a standard that came out of Anthropic for sort of letting the AI systems interface with the rest of the world. As you've heard, we believe that openness leads to flexibility and flexibility leads to choice with AI. And to ensure that, it's critical that we have industry-wide standards that all companies can build around. Now, as we discussed yesterday, MCP or Model Context Protocol is one of those core standards that's just poised to take off.
Now the letter P, protocol, is really important in this case. Vint Cerf, the godfather of the internet, describes a protocol as a clearly delineated line that allows for independent innovation on either side of that line, what he calls permissionless innovation, allowing anyone to experiment and innovate, no approvals required. This is what we're striving for at Red Hat. I like that messaging. I'm going to be curious to see what their actual rollout is.
It does sound like they're working on the back end to sort of have MCP implementations for a lot of Red Hat products and services, right? So if you want to be able to interface these things from a chatbot or hook it into other agentic AI systems, Red Hat will be ready. You could see maybe a practical use case of this is somewhere where you could review your system resources, utilization, disk usage, things like that from a single interface. So you log into a dashboard.
Hey, what is the status of the web servers? And the system just comes back with a whole sheet of information. And even maybe down to, like, you know, applications that are installed and their usage and things like that. And you could also then, they talked about hooking it up into the event-driven
side of the Ansible Automation Platform, right? So from your AI-driven interface, whatever that may be, you can go trigger an event that's going to go restart that server that the AI showed you was malfunctioning. And the question I have is: is this something that's of interest to the RHEL base? I mean, I'm not trying to typecast, but it seems like they're traditionally a pretty conservative user base. Is this something people are pushing for?
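For the curious, MCP is JSON-RPC 2.0 underneath: a client asks a server to invoke a named tool with structured arguments. A minimal sketch of what that restart-the-malfunctioning-server style of call could look like on the wire; the tool name and arguments here are hypothetical, purely for illustration:

```python
import json

# MCP tool invocations use the JSON-RPC 2.0 "tools/call" method.
# The tool name and argument fields below are made up for this example;
# a real MCP server advertises its actual tools via "tools/list".
def make_tool_call(request_id, tool_name, arguments):
    """Build an MCP tools/call request message."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

request = make_tool_call(1, "restart_service", {"host": "web01", "unit": "httpd"})
print(json.dumps(request, indent=2))
```

The point of the protocol is exactly that "P": any client that can emit this shape can drive any server that speaks it, no approvals required.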
And I was trying to get a sense of that at the keynote, or after the keynote as people were leaving. And I also would like to get a sense of that from the audience, because this is an area they've clearly been pushing on for two years straight. And I think everyone at this point has maybe seen AI shoved into interfaces in a poor way and also in an actually helpful way.
And so there's always the question of, like, does this actually make you more efficient in your tasks, or is it a new way to do the same thing? I think regardless of how you break it down, though, it's nice to see a large, well-positioned, well-known brand in this space really working hard to bring something that is not vendor-locked. You know, like, I like a lot of the different solutions that are out there, but it's like you're all in on the OpenAI ecosystem, or you go all in on Anthropic.
I was also impressed. I don't know how you guys feel about this, but just, you know, every company is talking about AI. It feels like at least if you're even vaguely associated with tech these days, but talking with some of the folks in a few different places around the summit, it seems like Red Hat is very credible on AI. I mean, they have a lot of people who are legitimate actors in various open source AI communities working there, working with them. Like, they know what they're doing.
They also, to me, felt very well-informed and very well-connected with other businesses who are leading the way. Yeah, I mean, we saw NVIDIA up there, AMD, Intel, you know, generally people that are competing all collaborating together on this stuff. And of course, it's always fun for us to run in with old friends of the show. And Carl was there at the community booth. All right, Carl, what do you got for me right here?
I got a little pocket meat, a little bit of beef jerky and some beef and pork dried sausage. Get a little pocket meat on the expo floor. Thanks, Carl. I hit that pocket meat twice. I got to go to that pocket meat source twice while we were there. This is now like conference tradition for us. If we go to a conference and don't find Carl's special meat, then I think we're just going to feel like we left out.
We do have to be careful, though, because at some point, you know, the event organizers might get keyed off that Carl is competing with the catering. Well, if you'd like to support the show, we sure would appreciate the support. And you can become a member at linuxunplugged.com slash membership. You get access to the ad-free version of the show or the bootleg, which I'm very proud of. I think the bootleg is a whole other show in itself.
And so you get more content, stuff that didn't fit in the focus show. And you also get to hang out with your boys as we're getting set up. And then you get all the post-show stuff where we sort out all of the things. But you can also support us with a boost. And that's a great way to directly support in a particular episode or production. Fountain.fm makes this the easiest because they've connected with ways to get sats directly.
But there's a whole self-hosted infrastructure as well. You can get started at podcastapps.com. I mention Fountain because it gets you in and it gets you boosting and supporting the show that way pretty quickly. So the two avenues, linuxunplugged.com slash membership or the boost. Or if you have a product you think you want to put in front of the world's best and largest Linux audience, hit me up, chris at jupiterbroadcasting.com.
There's always a possibility that we might just be the audience you're looking to reach. That's chris at jupiterbroadcasting.com. Well, I felt a little bit of a reality shift going to this. Whoa. I did see you sweating a bit in your seat. That must explain it. Well, we've been talking a lot about this behind the scenes. And I have made the decision to switch my systems to Bluefin. And the reason being is I'm going to, behind the scenes, start playing with image mode.
I'm going to start in Podman Desktop, and I'm going to start building my systems in image mode. And then we're also going to start deploying some RHEL 10-based systems and some open virtualization systems here just for us to learn and experience. And I like a lot of what image mode is going to bring to RHEL and what's already kind of there with Bluefin.
And that is immutability delivered in this image way that is accessible to all kinds of administrators and DevOps people, where I think Nix is extremely powerful, especially I like the building up from the ground up approach, but we've clearly seen a lot of people bounce off of it. So I want to try to jump into this mainstream that's going in a direction that I like anyways. The rest of the world is kind of leaning in these immutable systems.
And I think there's a lot of value in learning a cloud native workflow outside of Nix OS. Chris, this feels like such a massive shift for you. Why now? Because it's like getting in on the ground at the image-based workflow at this scale. Will you stop if I just promised never to alias nano to Vim again? I mean, I might bounce off it, but I really want to give it a go. I've already got Bluefin downloaded and installed on one of my systems.
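For context, image mode builds a bootable OS the same way you'd build any container image: from a Containerfile. A rough sketch of the idea, where the base image tag, packages, and config path are illustrative assumptions, not a tested configuration:

```dockerfile
# Hypothetical image-mode build; base tag and packages are examples only.
FROM quay.io/fedora/fedora-bootc:41

# Layer in the tools this machine needs, like any container build
RUN dnf -y install htop tmux && dnf -y clean all

# Bake configuration into the image from the build context
COPY custom.conf /etc/myapp/custom.conf
```

You'd build it with `podman build`, push it to a registry, and point a machine at that image with `bootc switch`; from then on, OS updates are image pulls.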
This is because you never figured out how to write a Nix function, isn't it? Right, right. It's just the flakes, man. The flakes drove me away. No, it's the idea of getting a lot of what I get with NixOS, but with, and you're going to hate it when I say this, but a standard file system layout. I know, I'm sorry, I'm sorry, I'm sorry. This is why you wouldn't use Logseq, because you just want Markdown.
But we have heard a lot of audience members say they really like these, I don't know, quote-unquote modern ways of deploying Linux, and Bluefin has been the choice that I've seen float to the surface.
Yeah, and I think it's my starting point. You know, it's my, I'm going to give this a go, I'm going to test drive it, I'm going to rent this out before I make the switch, and then at the same time, I'm going to be playing around with Podman Desktop, seeing if I can build systems and what it's like to do that. And then compare and contrast and move over. And at the same time, also experiment with some of the OpenShift virtualization stuff. Because I think that's really big.
That standalone OpenShift virtualization platform is going to be a contender. Or it is a contender. I have a question about how long you're going to commit to this path. Well, unless I, you know, drop off a cliff, I guess indefinitely. I don't know. I don't really have any timeline on it. Because I think it really depends on how the whole experiment goes. I've already started.
You know, when we tried Bluefin last time, and I've played around with Bazite on my own, I've always really liked their general initial approach, but I always thought, oh, this would be a little bit better if I could just take and shift it a little bit and, you know, make it more specific to a podcasting workflow.
Because I'm not a developer, I'm a podcaster. It makes me wonder about, like, some sort of challenge, maybe not official, but, like, you know, what are some things that you are used to doing or like doing on your current Nix-based systems? And can we see what it's like for you to try to port some of those? Well, I thought I'd start with the TUI challenge. I was going to try to have my main workstation and everything ready to go for the TUI challenge.
Because I got to install a bunch of TUI apps. I do like this, because then if you publish maybe the container files you're using, then I can bootstrap them. I see how it is. Chris, are you looking for advice from the audience? People who have maybe gone down this path? I guess so. I am curious about people that are running these image-based immutables as their daily driver. Your Silverblues and your Bluefins and your, you know, Universal Blue. We need your atomic habits. Yeah.
Or people that bounced off of Nix and why. Or people that tried and couldn't. I mean, I'm curious about the people that tried to switch away from Nix and it failed. Because it seems like that could end up being me if I don't know what I'm doing. So I'm a little nervous about that, especially because we're traveling and all of that. But I'm willing to give it a go. I'm feeling adventurous. Okay, so, like, after the show, we pour one out and then we rm -rf? I think that's it.
And now it is time for le boost. Well, we did get some boosts. It was a slightly shorter week because we recorded early. But that doesn't mean people didn't support us. And Nostaromo came in with our baller boost, which is a lovely 50,000 sats. And he says, here is to some better sat stats. Thank you, Nostar. You are helping bring that stat up all on your own right there. My favorite type of self-fulfilling prophecy. That's right. That's right. Appreciate the boost.
Kongaroo Paradox comes in with 34,567 sats.
Not bad. I think so. Just upgraded my Nix machines to 25.05. Yeah, we should mention 25.05 is out. Congrats to the folks involved. Officially out. I run unstable on my main laptop, which is an M2 Air running NixOS Apple Silicon, the stable release on most of my home lab, and maintain the options for the two inputs in my flake. This was my second NixOS release since getting into Nix last year, and this strategy made it really painless. No surprises but deprecated options, since I saw these cases slowly as these changes hit unstable. What is your approach to NixOS releases? Good question. Thank you, Kongaroo. What do you do, Wes? I mean, you're kind of a flake-based system, so you're probably not really paying too much attention to, like, channel changes and updates. I do think this can be a nice way to do it. You know, you can do sort of test upgrades, either on other systems where you do want to be on unstable and see sort of the overlap between your two configurations, or just do test builds on stable with whatever existing configuration you have. And yeah, if you think there might be cases where you do need specific versions, where you're more sensitive to version changes, then pre-plumb your flake with nixpkgs versions ready to go with those. Then you can, you know, get the boilerplate done. Then you can more freely mix and match.
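That "pre-plumb your flake" idea can be sketched like this; the hostname and module path are made-up placeholders:

```nix
{
  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-25.05";
    nixpkgs-unstable.url = "github:NixOS/nixpkgs/nixos-unstable";
  };

  outputs = { self, nixpkgs, nixpkgs-unstable, ... }: {
    nixosConfigurations.myhost = nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      specialArgs = {
        # Hand modules a second package set so they can cherry-pick
        # newer versions without moving the whole system to unstable.
        pkgsUnstable = import nixpkgs-unstable { system = "x86_64-linux"; };
      };
      modules = [ ./configuration.nix ];
    };
  };
}
```

Then a module can take `pkgsUnstable` as an argument and pull individual packages from it, while everything else tracks stable.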
Brentley, are you more or less likely to upgrade to the next release once the previous release is no longer supported? In other words, are you going to wait? I usually wait, like, about a month, I would say. But then I'm all in. Yeah. So I like to give it a little bit of a transition period and then just dive right in. All right. We will add a link here to Kongaroo's Nix files, too, for those who are curious or maybe want to emulate the approach.
Oh, thank you for sharing that. I like that. Thank you for the boost, too. Well, we've got a boost here. 23,000 sats from Grunerly. Just in case nobody has already told you, it's called Dawarich. Dawarich, which is German for "I was there": da war ich. So not Big D-witch? Oh. Oh. We did redeploy a final iteration for ourselves, and it's been pretty fun tracking everywhere we've gone, all around Boston and whatnot. Been doing some tourism.
Yeah, it's actually to the point where Chris is kind of trying to choose his itinerary based on, you know, getting fun new routes in Dawarich. He's been doing, like, route art. That's really impressive. I like to draw on the map. Thank you for the boost. Appreciate it. Todd from Northern Virginia is here with 5,000 sats. Todd's just supporting the show. No message, but we appreciate the value. Thank you very much, Todd. Bravo boosts in with 5,555 sats.
Jordan Bravo here. I recommend the TUI file manager Yazi. That's Y-A-Z-I. Also, for folks who need to use Jira without the browser, check out Jira CLI. Yeah, something tells me that's going to be way faster, too. We got a boost here from TebbyDog, 18,551 sats. Thank you for helping us help you help us all. All the Nix service talk has me thinking of a new tool I recently found called Browser-Use.
It's a tool that uses LLMs to control a web browser. They're really interesting to watch at work, and it integrates with all the common LLM APIs. Ooh, well, thank you. That's good to know. Also, a post-nitial boost? Post-litzal? I'm sorry? That's my best. Post-litzal? Do we know what that means, Wes? No, but I'm curious. It has something to do with math. Oh, that's why I thought maybe Wes would be calculating over there, you know?
Yeah, he missed this one. I know. It's surprising. And, you know, we all know that, yes, the zip code is a better deal. Yeah, we do know that. Did you, did you bring it? You want to know if I packed the five-pound map in my carry-on? Yeah. It's like I brought the mixer and the microphones. Yes, I did. Oh, there it is. Okay. Yeah, we can put it on the table here. Just move your laptop, Brent. Don't spill the booch. I'm already on the second table. Why do I get pushed off?
Okay. All right. Teddy Dog. Tebby Dog, not Teddy. Tebby Dog. He says it is 18,551 sats, Wes. Yeah, there we go. Thank you. Can you get that dial in there? I got a small paper cut, so I'm tending to that. Yeah, there we go. Yeah, get some. Just grab one of Brent's Band-Aids. You brought a whole bunch. I also have a clothesline if you need it. That actually would be helpful, because then we could string up the map, and I could lay down, and then I could sort of read it that way.
Okay. And take a little nap. Do you need a headlamp? Yeah, actually. Yeah, and some epoxy would be useful, too, I think. Oh, I didn't bring it, darn. Oh my gosh. I did find some travel epoxy on our trip, though. There's a little cute little bottle of it you could just keep in your pocket. We should definitely bring that, then. Okay, well, just put a little dab on the map for me, would you? Okay. Right here? Yeah, and a little to the left. Oh. Yeah, so where you just spilled the epoxy.
That is the German state of Mecklenburg-Vorpommern. All right! Nailed it. That sounds like that's the name of it for sure. On the island of Rugen. Oh, whoa. Just pump the brakes right there. That's pretty neat. That is pretty neat. Thank you for the boost, and thank you for the fun zip code math. Now I'm glad we actually packed that map. That was actually worth it. Adversary 17 is here with 18,441 sats. You're doing very well. Says, I'm a bit behind, but the headsets are sounding great.
Regarding the Bang Bus adventures and getting pulled over, if someone had offered their truck and trailer services, would you have taken it? From what I know about you guys, I feel like you would have been more interested in the sketchy route regardless.
Well, you got to test the van. We needed to know it didn't work, and the best way to find out was to drive it. It's so true. As an, uh, uninvolved third party, I'm just gonna say: confirmed. Yeah. You know what, I realize our audience knows us so well. Yeah, yeah. You got us, Adversary. Thank you. Tomato boosts in with 12,345 sats. I think that might be a Spaceballs boost. We're gonna have to go right to ludicrous speed. It's been a minute. Thank you for the Spaceballs boost. I'm looking forward to hearing your reports from Red Hat Summit. I've started the challenge early because I'll be on holiday most of next week. I'm already having a blast, and it reminds me how much I enjoyed using Linux and BSD back in the day. Right on. Oh, and Mr. Mato also links this up here, because my write-up, which I'm updating as I go along, is at a link we'll have in the show notes.
That is great. I love that he's getting a head start. That's really nice. In fact, if anybody else has any great TUI tools, now is your chance to send them either boost or email because we need to round them up. We'll be doing that in the next episode before we launch the actual TUI challenge. That's fantastic. Thank you for the boost. MegaStrike's here with 4,444 sats. That's a big old duck. He says, Hello.
It's funny you bring up the back catalog listeners. I just finished listening to every Jupiter Broadcasting episode, minus The Launch, released since the beginning of the year, in the last week and a half, at 1x speed. I feel like, MegaStrike, you should give us some insights. That's so crazy. What have you learned in this journey? MegaStrike is a mega listener, I'll tell you what. Does this include This Week
in Bitcoin? Are you going to go back and catch The Launch, at least since episode 10, because it's pretty good? I wonder. I have so many questions. What's the schedule like? What activities do you listen for? Were you road-tripping? How did you get that much time in? That's awesome to hear, and I have so many questions. Thank you for the boost. Well, Turd Ferguson is here with 18,322 sats. Turd Ferguson! First of all, go podcasting. And second of all, did you boys soak up any culture
in Boston? Or was it all Ansible and OpenShift? It was a lot of Ansible and OpenShift, that is true. I mean, Chris got in a fight at the package store. There was that, there was that. We got to go to a ball game. We did that. We went to Salem and we saw a very old grave site. Which was pretty cool, actually. Sounds a little weird, but it was actually pretty fun to do. Some beautiful graveyards out here. Famous witches, too.
Yeah, what else did we do? What else have we done that wasn't Summit-related? We've done a few things. We're in our Airbnb now. Well, we popped in to pay our appropriate respects at Cheers. Oh, that's right. We went to Cheers. That was kind of all right. It was all right. Norm had just passed, so it was kind of nice to be there right as that happened. So people were there paying their respects, and they had pictures up and flowers and all of that.
They were very gluten-friendly at Cheers, I've got to say. Yeah, pretty good service. You know, it's not just a tourist hotspot, but the food is fine. And, of course, we did mention we got to go to a baseball game, so that was pretty classic. That was really nice. Yeah. I thought we got pretty lucky here. Red Sox and the Mets is pretty, like, classic ball game. Yeah. And also Fenway Park. I'd always...
Heard of it and how unique it was, but to see it in person... Yeah, I'm not a sports ball guy, but that's just such a great opportunity, and it was a blast. Well, as Wes knows, baseball has very strange rules around park shapes and sizes. Basically none. And so each one is a unique experience. But, you know, after that we kind of got our fill of the city and made our escape, which of course meant, um, encountering the native drivers. That's true. I really thank you both for letting me drive. I really enjoyed it.
I found it, at first I was a little like, wow, lanes have no meaning here. I mean, quite literally, lanes have no meaning here. But it's because the roads are old and narrow. And so you just kind of weave, you do a weave, and you just trust that the other driver is going to weave to your zig or whatever. And so you zig and zag around everybody. And I really enjoyed it.
It actually is a lot like driving the RV, where it's down to last second dodging another thing that's just barely sticking into your lane, or you don't have a complete lane, and you have a very wide vehicle. And so it was essentially taking all my RV driving experience and applying it to a passenger vehicle. But it worked great, and I enjoyed the heck out of that. So that was a treat for me, because usually when we travel, I don't get to drive at all.
We also then got to see lighthouses and go to the ocean and get fresh seafood out of the dirty Atlantic Ocean. It's not as good as the Pacific, but what do I know? I feel like you're biased. And we crossed off some new states, right? New Hampshire and Maine, our cousin from another coast. That's right. That's right. So thank you. Thank you, Turd, for that. It's nice to reminisce about it. In fact, thank you everybody who boosted into the show.
Even though it wasn't a full week, we had a decent showing and we really appreciate it. We had 30 of you just stream those sats as you enjoy the show and you stacked collectively 46,223 sats. So when you bring that together with all of our boosts, everything that we read above the 2,000 sat cutoff and below, we stacked a grand total of 215,748 sats for this very humble but yet very appreciative episode of the Linux Unplugged program.
Thank you everybody who supports us with a boost or the membership. You are literally keeping us on the air and the best. Music. If you'd like to get in on the boosting fun, you can use Fountain.fm. It makes it really easy. Or just a podcast app listed at podcastapps.com. Before we get out of here, you know what we got? A pick. This is one that we were tipped off to at the summit, and it's pretty neat. It's MIT licensed, and Wes has it running on his laptop right now.
What is it, Wes Payne? It's Ramalama. I love that name. Say that again?
Ramalama. Once more. Ramalama. Uh, yeah, okay. So we've talked a bunch about Ollama on the show, but it turns out, um, it's not really fully open source, and so some folks are a little put off by this. And there's some feelings like, it's got some VC money. There's some, like, okay, right now they're totally fine, but what might happen? And I guess the core part of it, like some of the model serving stuff, is not open source. And I think there's some feelings like they're trying to be a bit like Docker in the early days, where they want to be the standard, right? They've got their own model catalog and protocol for fetching the models from them, when there's also places like Hugging Face and, you know, lots of other ways to get these models. Absolutely. Yeah, Ramalama was created sort of as a more fully open alternative to Ollama. It's also more powered by containerization. So whereas Ollama has, like, its own kind of stuff that it does to do acceleration in its core, and it handles the model running, Ramalama starts with a first step, which is a scripting layer that assesses your host system for whatever capabilities might be available for running models efficiently. And then the rest of it is all done with containers. So it'll spin up a Podman container. You can use Docker, too. And that gets a standardized environment, which then gets piped in whenever host-specific stuff is needed. And then in there, you go download the model from Ollama or Hugging Face or wherever else is supported. Wherever you want. And then, using either llama.cpp or vLLM, you can run that model directly as a chatbot or serve it via an OpenAI-compatible API.
So in other words, you can get a script, and even if you've just got a weak CPU-based system, this thing will set up, identify you've got a CPU system, launch the Podman containers, and ultimately give you an interface that looks a lot like ChatGPT running on your local box. But if you want to next-level that sucker, you can use vLLM to, like, pipe the back end to some serious GPU action or, like, a cloud provider, whatever you might want.
Yeah, exactly. So you can kind of go from zero all the way to AI hero. But, no, you can, actually. Like, I was just playing with it, right? So it's OpenAI-compatible, so if you've got OpenWebUI, or not-so-OpenWebUI, running locally, you can hook that right up just like you would for Ollama. You can talk to Ramalama. That's right. Okay, so we'll have links and more information in the show notes for that. I see here at the bottom of ramalama.ai, it says supported by Red Hat.
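Because the served endpoint speaks the standard OpenAI chat-completions API, any plain HTTP client can talk to it. A sketch using only the standard library; the port and model name are illustrative assumptions about your local setup:

```python
import json
import urllib.request

# Builds a request against the standard OpenAI chat-completions route.
# The base URL and model name passed in below are assumptions; point them
# at wherever your local server is actually listening.
def chat_request(base_url, model, prompt):
    """Build a chat-completions request for an OpenAI-compatible server."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
    )

req = chat_request("http://localhost:8080", "llama3", "Say hello")
# urllib.request.urlopen(req) would send it to a running local server
```

The same client code works whether the back end is a weak CPU box or a vLLM-backed GPU rig, which is the whole appeal of the compatible API.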
So I take it. Red Hat's all in. Yeah, I think it's actually maybe even under the containers repo there. So it's kind of a first-party system in Red Hat and in the wider Podman ecosystem, too. Boom. Power tip right there from Wes Payne and Mr. Brantley. So we're getting back to our regular live schedule. We always try to keep the calendar as up to date as we can at jupiterbroadcasting.com slash calendar.
And of course, if you got a podcasting 2.0 application, then we mark a live stream pending, usually about 24 hours ahead of time in your app. And then when we go live, you just tap it right there in your podcast app and you can tune in. Also, just a friendly reminder, in case you don't know, we have more metadata than that, too, because we also got chapters, right? Stuff you really want to hear about and jump right to, chapter.
Stuff maybe you don't want to hear about and you would rather skip, go to the next chapter. And we also have transcripts on the show. So if you want even more details on that or you just want to follow along, those are available in the feed. See you next week. Same bat time, same bat station. Show notes are at linuxunplugged.com slash 616. Big shout-out to Editor Drew this week, who always makes our on-location audio sound great. We really appreciate him.
And, of course, a big shout-out to our members and our boosters who help make episodes like this possible so we can do on-the-ground reporting to try to extract that signal from the noise. Thank you so much for tuning in this week's episode of your Linux Unplugged program. We will, in fact, be right back here next week, and you can find the RSS feed at linuxunplugged.com slash RSS. Music.