
This show is supported by you and Elastic. Elastic is the company behind Elasticsearch that helps teams find, analyze, and act on their data in real time through their search, observability, and security solutions. We're gonna talk about them more in the ad break, but thanks a lot, Elastic, for sponsoring this episode and the live meetup that goes with it. Jonathan, is that your soundboard?

No. That was not my soundboard. That was actual live people, I think. Yeah. You don't have video, so you don't know. I'm assuming that was honest.

Stick around to the ad break to hear more about our beautiful sponsor. This is Cup o' Go for May 27, 2025. Keep up to date with the important happenings in the Go community in about fifteen minutes per week. I'm Shay Nehmad.

I'm Jonathan Hall.

And we have a bunch of beautiful people here live, waiting for us to start talking about Go news. So how about you kick us off?

Alright. So we have a proposal that's been accepted that I wanna talk about. Do you have any idea, Shay, how Go knows how many CPUs it should use when it's running?

It always messed me up, because when I use a container, I have to, like, change it. I don't remember exactly, like, my DevOps guy did it, but it's always, like, annoying-ish.

Yeah. So yeah, it should be a little bit less annoying-ish after this proposal. The proposal is CPU-limit-aware GOMAXPROCS as the default. And my goodness, I learned a lot about this stuff just reading through the description of this proposal. It's a really kind of messy problem. You might think it's pretty straightforward, right? I have however many CPUs in my system, I have four cores in my laptop, and so I should run four.

You have four cores in your laptop? Or whatever. Right? Do we need more Patreon supporters, John? Probably. Or is your code just this efficient?

But when you start getting into, like you said, containers and VMs and stuff like that, it gets really weird, because you might have, like, a 64-core or bigger physical system, but maybe you're allocated four of those cores. And when you try to decide how Go should decide how many CPUs to use, an argument could be made that it should try to use 64, because then it could spike to 64 CPUs for moments, even if on average it can only go up to four. Right?

Yeah. But that's a problem, because you have, like, 64 containers running on the same server, because that's why you bought that huge server you're running a cluster on, but then every container thinks it, like, owns the whole machine.

Well, I mean, still, the hypervisor or whatever is controlling that will still limit you to whatever your share is, on average, over some time period. Maybe it's one hundred milliseconds or something like that. But maybe you want to be able to spike above the four cores or whatever you're allocated when you have the chance, when nothing else is using those cores. The point is, it's complicated, and this proposal goes into all sorts of nuances and details that I'd never really thought about. That was sort of the premise, and I thought, this is good.
This makes sense. I have no flippant idea how you should solve this problem. What makes for a reasonable default? And I have to be honest, I don't actually know what default they settled on. The details of that sort of got fuzzy, and I was like, okay, this makes sense now.
I'm gonna go eat a hot dog or whatever I did that day. But TL;DR, this should make your life easier when you're dealing with VMs or Kubernetes or whatever; the previous default just didn't make sense, and it should make more sense now. You'll still probably want to fine-tune it in certain circumstances, but it should be better.
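A minimal sketch of the knobs being discussed here, using only the standard library; the env-var override (and libraries like uber-go/automaxprocs) are the pre-proposal workarounds Shay mentions next:

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// NumCPU reports the number of logical CPUs on the machine.
	// Inside a container, this has historically reflected the host's
	// CPU count, not your cgroup CPU quota.
	fmt.Println("logical CPUs:", runtime.NumCPU())

	// GOMAXPROCS(0) reads the current setting without changing it.
	// Before this proposal, the default was NumCPU, which is why people
	// set the GOMAXPROCS env var (or pulled in uber-go/automaxprocs)
	// in containers with CPU limits.
	fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))
}
```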

So I used to do it every time when I deployed, with GOMAXPROCS or whatever, the environment variable. And now the trick that I memorized is useless. Thank you.

Deprecating your knowledge one Go release at a time.

You know what? That's actually good. Having less need for knowledge is actually better. I can focus more on my business stuff.

Very good.

So this episode is live, recorded with an audience, and you're all probably very excited to hear about AI, because we're in San Francisco, right? Who doesn't love AI? Me. Yeah, everybody's, like, nodding. We don't wanna hear about AI.

I'm not in San Francisco, so I'm allowed not to like AI, right?

Yeah, as long as you don't wanna move here. I'm keeping it under wraps; just in this meetup I'll say it's... But there was actually a good LLM talk from incident.io. I know you're surprised, you're, like, shaking your head. The main point of the talk was: it's just Go. So I won't rehash everything they said in the talk, because it's just fifteen minutes and you should probably just watch it, but it is a very strong recommendation, so I'll give you the main hook.
So incident.io, they're working on, like, automating incident response with agents, which makes sense, right? They would want to do that, and Rory Malcolm from their team shared how they're building the entire AI infrastructure inside incident.io. They use Go, and it turns out to just be basic Go tools underneath, because, like they said on the Go blog, all the LLM stuff is mostly specialized hardware and APIs anyway. All the actual model stuff is happening over the network, not on your service. And what you're doing is exactly what Go is good at, which is network calls and APIs and text templating, which was just besmirched in Josh's talk, which just happened.
But they basically developed their internal library, and he shows it, because it's pretty simple, with objects like prompt and snippets and templates, which use Go templating under the hood, and tools, which give the LLM access to, like, real-life stuff, like search. Which all sounds very AI-engineering-complicated, okay, I'm gonna make a $450k living in San Francisco doing this stuff, but it's actually very simple Go code. It's just part of how they work at incident.io. So I suggest watching the talk, it's pretty cool. Did you get a chance to integrate with LLM systems yet? Or did you get a chance to avoid it, rather, I should ask?

I've had a chance to avoid it. I actually do want to integrate with some for the startup I'm working on, but it hasn't been a high enough priority to actually get there yet. It'll be there before long.

So if you're in a similar position to Jonathan, just go watch that talk, it's fifteen minutes, and I think you'll get a lot of, like, tried-and-true knowledge from incident.io on how to start the internal, you know, go-AI library inside your company.
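To give a picture of what "it's just Go" means here, a minimal sketch of a template-driven prompt, using only the standard library's text/template. The Prompt type and all the names below are made up for illustration; they are not incident.io's actual library:

```go
package main

import (
	"os"
	"text/template"
)

// Prompt is a hypothetical stand-in for the kind of object the talk
// describes: a named template plus the data you render into it.
type Prompt struct {
	Name string
	Tmpl *template.Template
}

func main() {
	p := Prompt{
		Name: "summarize-incident",
		Tmpl: template.Must(template.New("summarize-incident").Parse(
			"Summarize incident {{.ID}} for {{.Audience}}.\n" +
				"Recent events:\n{{range .Events}}- {{.}}\n{{end}}",
		)),
	}

	// Rendering the prompt is plain Go templating; the rendered string
	// is what you'd send over the network to the model API.
	data := struct {
		ID       string
		Audience string
		Events   []string
	}{"INC-123", "on-call engineers", []string{"db CPU spike", "failover"}}

	if err := p.Tmpl.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}
```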

Awesome. It's in my watch later list now.

Wait, so you're saying you're putting it in your watch later list?

Yeah.

Do you store that on Stoolap, by any chance?

No. I put it on a... I store it with a tried and true, trusted database. How's that for a Reddit-style comment? For sure. Yeah. Stoolap. What is Stoolap? Stoolap is a high-performance SQL database written in pure Go with zero dependencies. Wait. Is it? Wait a minute.

Yeah. It's been making the rounds on Reddit, which means we saw it through the lens of negativity. But what do you make of this project after you dug in a bit?

So the main thing that jumped out at me is that it's columnar rather than row-based, which is something I knew about mostly from when Matt Topol was on the show talking about Apache Arrow.
Matt Topol: I currently work for Voltron Data, and I primarily just work on the Apache Arrow libraries in general. It's my day-to-day job now.

But it's optimized, apparently, for in-memory performance with optional persistence. So I guess it's kind of Redis-ish in the sense that, you know, it's designed to be fast in memory, and if you want to persist it, as an afterthought maybe, then you have that option too. I haven't tried using it; I've just been reading about what I saw shared on the Internet.

And a brand new DB, that sounds pretty ambitious, I would say. Sure. Especially with so many features.

But so the thing is, I mean, a lot of the people on Reddit were kind of pointing out what you just said. They're like, this sounds really ambitious, there's no way this is possibly ready for prime time, how dare you say that you have something that's fast, what is fast?
You know, define high performance. But this was initially a hobby project that turned into a research project, and now it's released. I love that. Why aren't people just saying, that's amazing? A hobby project turned into a research project, and now it's open source. What's not to love? Who cares if this becomes the new Redis or if it just sits there as it is? I think it's great.

So first of all, I'll say that you got your Reddit voice really good in that, where you're like, oh, it's not fast enough, to the point where everybody in the room had a giggle. Oh, he's an Internet troll. Jonathan is actually nice, I promise, even when we stop recording. But I'll play devil's advocate and say: when you put on your website that you're, like, production-ready and fast, and then, you know, some engineer at a company goes, oh, this is production-ready and fast, and plugs it into their architecture, suddenly they realize it's a project that's actually maintained by just one person. Even with all the best intentions in the world, you know, they could go on vacation for three months, or, you know what, even have a medical emergency or something. And then suddenly your project is stalled, because you thought it was production-ready, but in reality, like, you need to have some community around it or something, just to have some safety, right?

So my response to that is: that engineer needs to be fired. He deserves whatever he gets.

That Reddit energy is really seeping through.

The thing is, my backyard sandbox is your production-ready, and vice versa, depending on what we're talking about. Production-ready doesn't say anything except somebody seems to be using this in production. So do your due diligence before you adopt, especially a new technology.

That makes sense. So it has been, like, pretty popular on Reddit. I think last week we added it to the backlog and then didn't get to it, so it got pushed to this week. But I did see a minor release and a pull request in the works from two days ago, so it does seem like people do work on it. If this sounds like something that might fit your use case, and you like being a super early adopter, or you're looking to score some open source contributions in the data world, this might be a cool, interesting project to join.
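If you do want to kick the tires, the project exposes a database/sql driver, so usage should look roughly like this. A minimal sketch under stated assumptions: the import path and the "memory://" DSN here are guesses at the convention, so check Stoolap's README for the real ones; everything else is plain database/sql:

```go
package main

import (
	"database/sql"
	"fmt"

	// Hypothetical import path; verify against Stoolap's README.
	_ "github.com/stoolap/stoolap/driver"
)

func main() {
	// "memory://" is an assumed in-memory DSN, matching the project's
	// in-memory-first design; persistence would use a file-backed DSN.
	db, err := sql.Open("stoolap", "memory://")
	if err != nil {
		panic(err)
	}
	defer db.Close()

	// From here down it's standard database/sql, nothing Stoolap-specific.
	if _, err := db.Exec(`CREATE TABLE watch_later (id INTEGER, title TEXT)`); err != nil {
		panic(err)
	}
	if _, err := db.Exec(`INSERT INTO watch_later VALUES (1, 'incident.io LLM talk')`); err != nil {
		panic(err)
	}

	var title string
	if err := db.QueryRow(`SELECT title FROM watch_later WHERE id = 1`).Scan(&title); err != nil {
		panic(err)
	}
	fmt.Println(title)
}
```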

So I guess we kind of agree that maybe you need to do some due diligence before you use Stoolap, but what if you need to do some sort of like, I don't know, full text search or something like that? What kind of product could you use that would integrate well with Golang?

Well, I could say Elastic, but people will say we're biased. So I'm just gonna say: just use tail and grep, you'll be fine.

Perfect, I love it. Tried and true.

But if you have to, if you have to: Elastic just released their new Go client, version 9.0.0, and as someone who's used Elastic, one of the things I enjoyed less was writing these huge JSONs to define, like, an index or a reindex or whatever, and then just putting the JSON file next to my code. The main new thing in this version, other than making it compatible with Go 1.23, is that you now have a DSL to talk to Elastic, which is pretty cool, because you can do, like, dot-new-index, dot-replace, dot-whatever. It's sort of a blessing and a curse. I know some people hate DSLs and ORMs, and they just want to talk to the API in the rawest form possible, and that makes sense and I respect that, and that option didn't go away. But the DSL looks super nice. It's, like, fluent, you know what I mean, where you call a function and then call another function to add another thing to that query. So it's like dot-new-index, open paren, close paren, and then dot-add-whatever. So it reads like English, which is nice. It reads sort of like those JSONs, just a lot less verbose.
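To give a flavor of that fluent, chainable style, here's a small illustration of the pattern. To be clear, this is not the actual esdsl API; the type and method names below are made up to show what builder-style chaining looks like in Go:

```go
package main

import "fmt"

// IndexBuilder is a made-up type illustrating the fluent/builder style;
// it is not Elastic's actual esdsl API.
type IndexBuilder struct {
	name     string
	mappings map[string]string
}

func NewIndex(name string) *IndexBuilder {
	return &IndexBuilder{name: name, mappings: map[string]string{}}
}

// AddField returns the builder so calls chain, which is what makes the
// code read like English instead of a big nested JSON document.
func (b *IndexBuilder) AddField(field, typ string) *IndexBuilder {
	b.mappings[field] = typ
	return b
}

func (b *IndexBuilder) Build() string {
	return fmt.Sprintf("index %q with mappings %v", b.name, b.mappings)
}

func main() {
	idx := NewIndex("incidents").
		AddField("title", "text").
		AddField("created_at", "date").
		Build()
	fmt.Println(idx)
}
```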

So I first read these release notes. It says this release introduces an optional package for the typed API named esdsl, but I first read it as Edsol, and I thought, why would they name this Edsol? But then I realized I was reading the release notes for dyslexic search. Dyslexic search is pretty good, actually.

Like fixes your typos and whatever.

Or introduces them maybe, I don't know.

Yeah, get some extra time on the test. Well, one last thing on the backlog that we really wanted to get to is this sort of big and intimidating proposal about Green Tea garbage collection, which I opened, started reading, got intimidated by, and closed, and, like, tagged you on the backlog so you'd teach me about it.

All right.

So teach me about it.

So I started reading this and I got confused. I'll just give you an example: Go's garbage collector implements a classic tricolor parallel marking algorithm. Now, I can't see the room from here, so I want a show of hands. I want you to tell me how many people raise their hands. How many of you understood that sentence?

We have one, maybe two pretty confident hands up, and Josh Bleecker Snyder standing there smiling, like, yeah, I wrote the compiler, I know every line of it. And I just want to clarify, there are more than four people in the room, as you can hear.

So from there on, it explains it in simpler terms: this is, at its core, just a graph flood, where heap objects are nodes in the graph and pointers are edges. Now I understand that a little better.
Like, I know what nodes and edges in a graph are. I don't know what a graph flood is, so it's still not enough for me to have a concept in my head of what this is, but, oh, there are terms I understand now. Yeah, I know some of these words. Here's what I did when I was reading this: I said to ChatGPT, explain this to me like I'm five years old, and it said, imagine your toy box is messy.
Oh no. But it did help break it down for me. So basically, as I understand it, and I'm sure some of the people in the room who raised their hands are going to correct my oversimplification: rather than doing a sweep over all of the memory to see what should be cleaned up, which is time-consuming and isn't very optimal for CPU caches and so on, it does this in smaller chunks, which can be parallelized and done faster. I think it's kind of like cleaning up your toy box one drawer at a time instead of everything at once, ish.
It has a cool name, so I'm going to stick with that part. That's cool. Green Tea. We like the food names on this show.

Yeah. Like OpenTofu and Tamago and everything. But is it actually faster? Like, you could convince me, because it sounds impressive, but is it actually faster?

Not when you take the time to read this and understand what it means, but if you skip that step and just go straight to using it, it claims to be anywhere between 10 and even 50% faster under certain workloads in certain situations. It seems to be much faster when you're using high parallelism, so multiple CPUs and multiple goroutines and stuff like that. But yeah, it's faster, or else why would they do it? You know, I mean, yeah, let's do this new thing, it's slower, but it sounds cool.

And the state of the proposal is, like, accepted? Implemented? Is it already in the language? When will I enjoy those 50%? Now that I know it exists, even though I'm not worthy of it, I want it now in all my production workloads.

So it is not accepted yet. It is still being investigated, but they have a working prototype that people are experimenting with, and you can try it if you're ambitious enough. And there are actually some cute little ASCII-art charts and stuff in here that are kind of neat to look at. Yeah, you could try it out. I don't know how soon it would be around.
The 1.25 freeze is happening very soon. I'm sure it would not be there even if the freeze weren't happening; they'd probably want an extra cycle at least for something this critical.

But if I want to contribute to this discussion, I should go to the discussion, look at the pretty graphs, and contribute some more? Is that what you're saying, basically? Yeah. Oh, I see the graphs. I'm looking at them right now.
They are very cool. Oh, they use Braille characters as a way to draw the dots, that's smart. I'm sorry, I'm geeking out about the graphs and not the garbage collector itself. So, if I wanna try it, do I, like, turn on some experimental flag? Because it sounds like they could use the feedback.

So it says how to try it: install gotip, the gotip tool, so you get the most recent development version, and then set GOEXPERIMENT equals greenteagc.
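Roughly, per the proposal's how-to-try-it instructions (double-check the discussion thread for the exact invocation; the experiment name as read here is greenteagc):

```sh
go install golang.org/dl/gotip@latest       # install the gotip wrapper
gotip download                              # fetch and build the development toolchain
GOEXPERIMENT=greenteagc gotip build ./...   # build your program with the Green Tea GC
```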

So: please focus your attention on whole programs; microbenchmarks tend to be poor representatives of garbage collection behavior. Oh, I see what they mean. So they want you to run it on, like, a real production workload and not just a benchmark loop. Especially since, now that I think about it, you remember that loop proposal? It takes out all the compiler limitations, etcetera, which this might impact as well. Anyways, GOEXPERIMENT greenteagc, or GOEXPERIMENT nogreenteagc for the coffee people. Yes, Josh. And just to round out that discussion, we have a Go compiler expert here in the room, and he's gonna try to shed just a tiny bit more light on it to give us some intuition. Doesn't that sound awesome? Like I planned it or whatever. I didn't plan it at all.
First of all, the compiler and the runtime are totally separate; the runtime is really hard. So when garbage collector people talk, they talk about the garbage collector and the mutator, this awful thing out there, which we think of as the thing doing the useful work. And the mutator is the thing that makes this mess that the garbage collector has to go and run around and clean up. And the challenge about cleaning this up is that the pointers that are live, the parts of your memory that you actually still want to have, are scattered all around through memory. And we know that memory cache misses are slow.
And so if you're busy chasing these pointers around all over memory, a lot of what you're doing is gonna be cache misses. You're gonna spend all your time stalled. So the idea is, let's quote-unquote waste some CPU time to try to get some locality. So instead of chasing each individual pointer, we're gonna work on chunks of pointers, on chunks of memory, and this might end up being wasted work: we do all of this work for this whole chunk of memory, and we only find two or three relevant pointers.
It's okay. CPUs are really fast, and they're getting faster. And you have more and more of them, but the memory bandwidth isn't keeping up. So let's waste CPU so that we can now do less memory chasing. The added bonus is that we can then throw in SIMD and other really advanced techniques to get extra speed out of this. So the intuition is burn more CPU, but keep it local.

Cool. I have questions, like: does that universally apply? Are there systems where memory is relatively faster than CPUs, where you would not want to do that?

Josh is nodding his head very strongly.

So I think we're getting into details that are probably for another forum. Maybe we can talk about it on the channel. That's a good segue to our break, where we talk about that, right?

Yes, our Slack channel, #cup-o-go, kebab-case, on the Gophers Slack. Thanks for mentioning it. Alright, so let's move to our ad break, with actually quite a lot of interesting updates this time, so if you're one of those podcast listeners who skips at this specific moment, don't do it on this episode. This episode is sponsored by Elastic. Thank you, Elastic, for sponsoring this episode, hosting this meetup, and giving us free pizza.
Jonathan, next time you're in the Bay Area, we'll do another event. You need to like AI, though, as mentioned at the top of the show.

I'll do it for that week.

You'll like AI just temporarily. Enter critical section: you land at SFO airport. Anyway, Elastic is the company behind Elasticsearch, which helps teams find, analyze, and act on their data in real time through their search, observability, and security solutions. They're building a fair amount of their stuff with Go, and it's become one of their core languages. Their new Elastic Cloud Serverless platform is predominantly built with Go on top of Kubernetes, which is also Go, obviously.
So they're building, like, resilient, high-performance HTTP APIs, powered by oapi-codegen by a friend of the show, Jamie Tanna, we'll get to his blog post in the lightning round, that can basically scale instantly without you having to think about infrastructure. Which, if you're not a direct competitor of Elastic and you just want to outsource search to a company that's been doing it for many years and does it pretty well, is good news for you, right? Have you used Elastic in the past, Jonathan?

Yes.

So have I. And like other companies that have sponsored Cup o' Go, obviously we like money and free pizza, but we wouldn't let them sponsor us if we hadn't tried it ourselves and couldn't actually recommend it to our listeners. I've used it in a few critical production workloads, and I can just say it's one of these trusty workhorses: you get to know it, and the expertise is really worth it. And their Go integration goes even deeper than their HTTP APIs. Their ingestion products, Elastic Agent and APM Server, are also written in Go, and they're actually one of the top contributors to the OpenTelemetry Collector, which is also built with Go. So most of their ingest collectors across the platform are Go-based too. So it sounds like a good bet: if you use Go, you're gonna have a good time using Elastic. On the security side, they've got a ton of Go powering their threat detection and response capabilities, and for developers, they've got dev tooling written in Go to make development smoother, like we just talked about on the show with the 9.0.0 version of their client.
Also, they're hiring across the board, so if you use Go and they use Go, that sounds like a pretty good match. Check out jobs.elastic.co; they'd love to have more gophers on board. So thanks a lot to Elastic for sponsoring this episode and giving food to all the people in the room. Other than Jonathan, he's looking very sad right now; he's not getting any food.

Awesome. Yeah. If you're not in San Francisco and, like me, not enjoying pizza, you can still help the show. I don't know how that's related, but we'll go with it. You can join our Patreon, which helps pay for editing and hosting fees.
You can join us on Slack; as Shay mentioned, we have a channel on the Gophers Slack. You can also send us an email: go to cupogo.dev and find our email address there. And you can buy some swag; we have some nifty mugs and t-shirts and a few other deals. And of course, you can share this podcast with a friend, a colleague, a student, whatever, anybody who might be interested.
It seems like a lot of folks are doing that. I want to talk about some metrics. We have hit a record this month, and the month's not even over yet; we still have a few days left, and we've already beaten our previous monthly record. Our previous record was February of this year, when we had 6,557 downloads. We are already at 7,114, almost 600 more. Well, we'll definitely be 600 more by the time the month's over.

And the episodes weren't even that good.

You say that, but our most popular episode to date was the one two weeks ago where we interviewed Kevin Hoffman of IT Lightning about spark plugs. And we've had other very popular ones. The one with Ian isn't there yet, but that's still the most recent episode; it's only at 1,100 so far. We interviewed Carlos Becker; that's the second most popular to date.

Oh, from a couple months ago.

From GoReleaser.

Yes. Yes.

Yeah. That tracks. It's a super popular project.

Yep. But we're routinely getting 1,500-plus downloads per episode, which just blows my mind. When we started, I was kinda crossing my fingers hoping we'd get a hundred one day. So thanks, everybody, for supporting the show and making it a success. We do have a new Patreon supporter this week, Mikhail Christensen, so thanks for joining. And I think that kind of wraps it up for the ad break.

Yeah. One final thing you forgot, even though we have a checklist in Trello, I'll add it to the checklist next time: other than sharing the show with a friend or a colleague or a co-student, we don't pay you to advertise, you can also leave a review on Spotify or Apple Podcasts, or just, physically, here, slap five stars on my face. It helps promote the show in various algorithmic charts, such as Spotify's recommendation engine, and at this point, with a show this big, it actually does matter; it's a new growth vector for us. I think some people told me that's how they found the show, just scrolling through these apps, which is cool. That does it for our ad break. Stick around for the lightning round to round out this episode.
That wasn't better. Can you save me here?

Yeah. So I think that wraps it up. Let's do a lightning round and then put this show to bed. I don't know if that's better either.

My kid's waiting for me to put him to bed right now, so that's what's on the brain. Alright.

Lightning round. First item for the lightning round is the Google I/O Go presentation. So I was super pleasantly surprised, as I think many people were, to see Google I/O this year talking about Go. They had, like, a twenty-minute talk just about Go: what it does and what they're planning. It's basically our show.
They basically stole the format of our show, where they talk about things that happened and things that are coming. And on Reddit, they explained why they did it. They wanted to show that Google still supports Go, because recently, I don't know if it's a direct correlation, but recently some people have been saying, oh, Ian left, and the proposals have gotten slower, maybe Google is ditching Go because it's not AI, or whatever. So they really wanted to make a public show of support: look, we are the people behind this, we're totally on it. And also they wanted to get more people using Go, so putting it into Google I/O, which basically everybody watches, was a good way to get more people to know about Go.
And at the end, because I guess everything has to have some AI connotation right now, they even show how to build AI stuff with Go, which is very similar to the talk we recommended in the news, another good pattern. I was just happy to see it. If you know enough about Go, or you listen to the show regularly, you don't really need to watch it, because they just talk about the stuff we already cover, in less detail.

You heard it here first.

But I think if you're trying to get someone to be enthusiastic about Go right now, like a manager who doesn't know about it when you need to pick a tech stack, it's a pretty strong recommendation on why to use Go versus another language. Cool. What's your lightning round thing?

My lightning round pick is Excelize. Excelize is a library that lets you read Excel files in Go, which I'm using, which is why I thought it was kinda cool to mention it. I saw there was a new release that came out recently. I have some love and hate for this new release, though. The new release is version 2.9.1, which includes breaking changes.
I don't like that a patch release, of all things, contains breaking changes, especially in the Go ecosystem, where you're kind of mandated to use semver. But putting that aside, there are some cool new features in here. A whole bunch of new features, new field width capabilities, and, I don't know, there's just a whole list of things that make it more capable for reading Excel files. I don't think I need these features for what I'm doing.
I'm just reading a very simple Excel file; it might as well be a CSV, but it's not. But it's still a cool library. So if you ever need to read Excel files in Go, check out Excelize. The link is in the show notes.
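For the simple read-a-sheet case Jonathan describes, the basics look like this; a minimal sketch using Excelize's v2 API, with placeholder file and sheet names:

```go
package main

import (
	"fmt"

	"github.com/xuri/excelize/v2"
)

func main() {
	// Open an existing workbook; "Book1.xlsx" and "Sheet1" are just
	// placeholder names for this sketch.
	f, err := excelize.OpenFile("Book1.xlsx")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// GetRows reads a whole sheet as [][]string, which is plenty for
	// the simple might-as-well-be-a-CSV case.
	rows, err := f.GetRows("Sheet1")
	if err != nil {
		panic(err)
	}
	for _, row := range rows {
		fmt.Println(row)
	}
}
```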

Are you using this package, like, literally in production right now?

I've mentioned it in passing once today, and maybe in the past: a startup I'm working on. I'm using it for that. I'll go into the details about that startup on Sunday. But yeah. I mean, production-ish; we don't have anybody paying us for it right now, but that's the intention.

So it sucks that they did a breaking change.

I don't think it broke us, but yeah...

It might. Yeah, but I guess if you're using Excel files and you need to parse them, this is a good package to do that. I don't know how many alternatives you have, either; maybe use CSV, like you said. My final thing for the lightning round is that Anton put out another blog post. We've mentioned him a few times on the show, and he was even on the show.
My name is Anton. I do some open source stuff and I write interactive, maybe I can call them guides or books, and interactive articles on my blog. That's mostly what I do in my free time.

He's the interactive release notes guy; he has a black-themed site with all the interactive release notes, which is real nice. He just put out a blog post, which you didn't really love, about the default transport design in the Go standard library. There's, like, a global default transport thing, and Anton sort of digs into a proposal and into that design and why he doesn't like it. I think if you're into language design and you want to see a very specific nitpick, not a big huge thing like a garbage collector, but something small that people actually use, this could be a pretty cool blog post: a look at a decision, which I agree with Anton is not that great, about a global variable in the Go standard library. It is kind of nitpicky; like, I never had a problem with it myself, but it is interesting to read, at least.
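For context, the global in question is real standard library: http.DefaultTransport. A small sketch of why a mutable package-level default makes people twitchy (the timeout value here is made up for illustration):

```go
package main

import (
	"net/http"
	"time"
)

func main() {
	// http.DefaultTransport is a package-level variable; http.Get and
	// any http.Client with a nil Transport use it implicitly.
	resp, err := http.Get("https://example.com") // uses DefaultTransport
	if err == nil {
		resp.Body.Close()
	}

	// Because it's a mutable global, any package in your process can
	// swap it out and change behavior for everyone, which is the kind
	// of design Anton's post pushes back on.
	http.DefaultTransport = &http.Transport{
		ResponseHeaderTimeout: 5 * time.Second,
	}
}
```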

Awesome.

And that's all we have for this episode. Thanks for listening. And that's it: program exited. Say goodbye, and then I'll...

I'm supposed to say goodbye. Goodbye.
Whoo.

Program exited. Goodbye.