
Getting CRUFTy

Jan 12, 2025

Episode description

Ben unveils his latest acronym-based software discussion framework while Matt patiently waits for the punchline. Our hosts explore alternatives to technical debt, debate the value of naming things, and Matt questions his ability to remember five letters for more than fourteen minutes.
Ben has written a blog post going into more detail since the recording.

Transcript

Why are we doing this in the first place? Hey, Ben. Hey, Matt. Well, how on earth are you, my friend? I'm good. And we were just chatting and... And actually, I'm going to stop you right there. I think you're clipping. I'm just looking at your stream. Do you mind turning the gain down on your microphone a tiny bit? I'll do my best. Hold on. Yeah, thing on the back. Now, a lazy editor. I don't know if I just turned it up or down. I think you turned it up.

Yeah, right away. No, no, that's better. That seems good. That seems good. Okay. Right, all right. Now, likely as not, that will actually make it into the edit, because I'm too lazy to take it out. Anyway, hi. We were just talking, as we were planning in the sort of 35 seconds before I just said, oh, I'll just hit record. And you said you had an idea for a talk, and then you gave me the option of talking about it ahead of time,

which would be sensible, or going into it cold. And I'm like, let's just do it. And then I hit record. So tell me about your idea for a topic. So this is a follow-up to a prior episode, the prior episode on technical debt. Okay. Yeah. And so in that episode, if I remember some of it partially, I had some somewhat-formed ideas about alternatives to ways of thinking about technical debt.

And I think I was very clear in that podcast: this is still new material. These are new ideas, and I'm trying to smush them around and form them into something reasonable. Right. And, uh, not to recall the pain of our iterative versus incremental podcast, but, um, I have a new iteration on those ideas. And I actually...

Is it an incremental improvement, would you say? It's an iterative improvement. Okay. All right. Yes. See, you're doing it. Don't do it. You made me do it. Those are the worst names. And actually, this ties into the other thing. Those are the worst names. And names are important.

Names are very important. They give things power sometimes. Names, if you're a regular listener of this podcast and you heard us talk about patterns thinking and how patterns thinking is an important part of learning new skills.

learning to think in patterns and giving those patterns names. If the patterns don't have good names, no one will remember them. Right. And it sort of defeats the purpose. Well, not the entire purpose, but a lot of the value of the patterns. I mean, in terms of the, like, Gang of Four

patterns book, you know, part of the thing that gave that its power is the fact that now we have a new language we can use. And if I say a singleton, then you know what I mean. Or I say a flyweight, and you know what I mean. Exactly. And it carries a lot of baggage with it. And without the name being both attached to the concept and

being memorable and sort of concise, it doesn't have as much sticking power, or wouldn't be as important. Right. Exactly right. And that's the problem with iterative versus incremental, because they both start with I and no one can remember what they mean. But there's another name that we use a lot in software, and it doesn't really have

a single meaning, and that is technical debt. And we talked about that. Right. Although, you know, I think you made a reasonable argument that there is a single meaning for it, and then everyone has used it wrong ever since. Yes. But then de facto and de jure are two different things.

In fact, when we say technical debt, we just mean bad programming things that we'll go back and fix later, sort of. Sort of, exactly. So yeah, so the original definition that Ward came up with is no longer the definition that people use. They sort of make up their own.

And therefore it's lost a lot of value as a thing that we can use as a pattern to talk about, unlike the singleton or the flyweight or the decorator or any of those gang of four patterns where it's like people, I think generally agree on what those mean. I literally yesterday asked a whole room full of software engineers, what does technical debt mean to you? And I got 10 different answers, right? They were all sort of related. Only six people in the room. Yeah, exactly.

Yeah, actually, that's 100% correct. And so, you know, it doesn't have as much value. And so part of that original podcast was, okay, maybe we can find a way to make this a little bit more useful. And I've given this a lot of thought. And I now know that I've come up with an improved version of this because it has a catchy acronym. Oh, okay. Forgive my lack of enthusiasm. Go ahead. What is your new acronym? That was the best response in this entire podcast. Ben has a catchy acronym. Oh.

I mean, you've come up with some good ones, I'm not going to dispute. There are some good ones. But, you know, I know about, like, your FIRE acronym, but unfortunately I can probably only remember what one or two of the letters stand for now. Now I've just put myself on the spot. Uh-huh. Something repeatable. Yeah. Remind me. F... Fast, informative, reliable, and exhaustive.

Okay. See, I got one. One out of four. Well, you got the most important one, which is fine. Okay, fast. Anyway, all right. So tell me what your acronym is. All right. The handy acronym is CRUFT. Oh man, that's a good one. Isn't it though? Isn't it though? Okay. And so cruft stands for complexity, which we've definitely talked about a lot. Yes. Risk. Yes. We've talked about a lot.

Uses, or use cases, depending on exactly how you sense the Force: the useful aspects of your software, which are arguably the most important part. Right. It's the thing that, you know, hopefully, if you're doing it right, is making you some money. But not necessarily.

Yeah. Feedback, which is an essential part of any software development process. Yeah. And team, specifically team size. And that relates to bus factor, and I can talk about all those kinds of things. Okay. And so my hypothesis here is that when you're talking about technical debt, what you're probably doing... and I think this is actually true of most senior software engineers. Unlike me, they don't spend the time to

build systems out of the way that they think, because they're not trying to teach them to other people. It's like, Ben, why did you do that? It's like, I don't know, it just seemed like a good idea. It's like, no, I need a better answer than that. Right. It's like, I don't know, I can't really explain it to you. So because I'm in this situation where I'm forced to explain my thinking to other people, I think that there are lots of situations in which senior engineers are making

decisions along these dimensions and not even realizing that that's what they're doing. They just sort of do it out of their experience. Right. Yeah. Yeah. And so I kind of think of this, if you wanted to share this thinking, as a five-dimensional portfolio optimization. You've got these five dimensions. On brand for day job. Exactly. Exactly right.

And it was not lost on me. Somebody, when I was explaining this mentioned to me, it's like, okay, so your problem with technical debt is it's a financial metaphor for financial people. And then you replaced it with another financial metaphor. I was like, okay. Yeah, that's a good observation. Portfolio optimization is not strictly financial and also hopefully not very metaphoric, right? There's no interest rate of technical debt.

And despite how many times people want to draw that, that's not a real thing. That's a pretend thing. It's not a real thing. But I think that these dimensions are real in the sense that they are quantifiable. And I think I can explain how you could quantify every one of them. Okay. And I'd be interested in that, because I was saying that's the most weakly defined thing. But yeah, I get that complexity can be somewhat defined. There's any number of measures, both algorithmic, you know,

complexity, but also just lines of code is not a bad proxy for complexity, or number of systems or libraries or dependencies or something like that. So yeah, I get that. Risk, yeah. How can you numerically or quantitatively define risk? So if you really wanted to stand on the shoulders of giants, there are many people who have tried to quantify

software risk in the past, and I think a lot of those techniques are very effective if you want to apply them rigorously. If you want a very rough estimation of this, just look at the incident issues in your backlog, or the error incidents in your backlog, or whatever it might be. And maybe you have a different category for things that are...

speculative risks, right? Right. That's what I'm used to doing. Yeah. It's the idea of, like, you know, you write down the risk of what would happen if, and you come up with a few

of the things that are really important. And you go, well, how do I offset those? And the more of those you can think of, like, well, what if this happens? What if that, have we accounted for this? What if the business changes direction, whatever, those kinds of things could be seen as risks. But yeah, I get it. Yeah. GitHub issues.

is not a bad proxy for it, at least in some mechanism: you know, any outstanding known issue that you have is definitionally a risk. It's like, hey... Right. If two people try to do this at the same time, then we get an exception thrown, and then somebody has to deal with it at customer services. That is a risk, and we're riding that risk. And yeah.

Got it. Okay. So CR. This is an easy acronym to remember, at least, and very, very apt. That's a good one. So the risks are like unwanted behavior in your software. The uses are wanted behavior. If you want a really simple way to think about it, right?

Oh, that is interesting. Yeah. Okay. So use, in this instance, is, like, functionality that people are happy with, or happy enough with, that's providing value to someone somewhere. As you say, maybe it's making the company money. Maybe it's, like, the raising... whatever it is, for a free or open source project.

And that is one of the things that, in the optimization problem, should have a negative weight with respect to complexity and risk. We hope it should have the opposite sense to those, because obviously you want to optimize for the highest uses and the lowest risk, presumably, amongst all these things. So I would actually argue that really what you want there is a high correlation between the uses that you have

Yeah. And the uses that you think will make you money. So more uses are not strictly better. Okay. Yeah. And this is where, applying this practically, I think you're going to wind up with constraints where it's really not possible, or it's very unlikely, that you're going to be able to increase the uses without

increasing complexity or risk. Required complexity versus unnecessary complexity, we've talked about before; to some extent, you know, it's necessary. If we want to get something done, then it takes a bit of... Yeah. Okay. Yeah. But I would argue that there's probably a sort of equilibrium with uses. It's not that you're trying to maximize it or minimize it. You're trying to hit the right amount for the things that you understand will be valuable.

The value, and... yeah, yeah. Again, yeah, that makes sense. Right. Now, practically, on most software projects, there's a never-ending list of things that people think might be valuable. And so you're never really going to be like, all right, we've added all the functionality or software that we ever need to add, we're done. That never

happens. But I think the important thing to understand about that is it is entirely possible that there are uses in your system that you want to remove. And I think that this model gets very interesting when you start thinking about the trade-offs between them, right? So you're like, I'm going to reduce risk by removing uses, right? Like, here's a use case that we don't want to support anymore, because

it has a lot of risk associated with it, right? There's cases that we don't handle. There's edge cases that we don't handle. And we could handle those edge cases and that would cost more complexity. But another way to handle this is just remove the use case. Yeah.

Yeah, yeah, absolutely. Yeah. And I mean, we don't do that often enough, I think, is probably the point that you're sort of getting at here. You know, without it being there in front of you in numbers. You know, one way to change our objective function, which we do want to maximize, which is some kind of relation, as you say, a ratio of these things, is to say, well,

let's just get rid of the thing that doesn't really work, that people don't really use. Or when they do, they always hit this edge case. Let's just say we don't do that anymore. It's fine. Yeah. Okay. All right. You've sold me on the U now. No, I am sold. I like it. And it's not the inverse of either C or R. And in fairness as well, while there are sort of correlations between these things, they're not all totally orthogonal axes, are they?

Complexity and risk sort of go hand in hand. I mean, like, some of the risk is the fact that something is complex. Yep. Like any portfolio optimization, you're going to wind up with situations where there are relationships between the dimensions, right? You increase one and you decrease or increase another one. And in some cases there's no way around that, right?

Right, you know, you buy Apple shares and you've got more tech sector risk, and that's okay. Sometimes they are maybe different or whatever. There's some, yeah, something-something portfolio optimization problem here. So that's actually interesting, because, maybe we should pause here on the U, but one of the things that we do in our day jobs is that we have ways of taking

the many things that we have and applying a risk model to it. Which obviously is different from the risk we're talking about here, which is essentially the objective function: it's the thing you want to minimize, or at least take into account when you're trying to maximize something else. In the case of what we do in our day job, it is maximizing the amount of money you make while reducing the risk that that

requires to go forward. And that's kind of some weighted sum of all of those things. And so, yeah, here, that's what you're talking about with the ratio: if we increase the uses, but it also increases the risk and the complexity, maybe those net out, and it's not actually better. Right. Exactly right. Okay. All right. Sorry. And then...
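The "weighted sum where things net out" idea can be sketched in a few lines. This is purely an illustration of the conversation, not anything from the episode: the weights, the field names, and the function name are all invented, and real weights would be a team's judgment call.

```python
# Illustrative sketch (hypothetical, not the hosts' code): score a change as
# a weighted sum of some CRUFT-style dimensions. The point is that the uses
# a change adds can "net out" against the complexity and risk it brings.

from dataclasses import dataclass

@dataclass
class Change:
    complexity: float  # e.g. lines of code added
    risk: float        # e.g. new known-but-unhandled failure modes
    uses: float        # e.g. new supported use cases

def net_value(change: Change, w_uses: float = 1.0,
              w_complexity: float = 0.25, w_risk: float = 0.5) -> float:
    """Positive means the added uses outweigh the cruft they bring."""
    return (w_uses * change.uses
            - w_complexity * change.complexity
            - w_risk * change.risk)

# A feature that adds one use but comparable complexity and risk
# barely nets out positive under these (made-up) weights.
print(net_value(Change(complexity=1.0, risk=1.0, uses=1.0)))  # 0.25
```

The sign convention mirrors what the hosts describe: uses carry the opposite sense to complexity and risk in the objective.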

Oh, you've got something else to say on that? No, no, that's it. We're ready to go to the F, which I am about to try and remember. I'm definitely a normal human who can remember things for more than 14 minutes. No, tell me what the F was. Feedback. Feedback. Okay. All right. So how do you know if your uses are valuable? I think that's the most important form of feedback. It's like, did the software that we build actually make any money? And in trading, that's

easier to do than in a lot of different contexts. You put the strategy into production, you see if it makes any money. But there's lots and lots and lots of other situations in which you need to create feedback loops. Are my users happy with the way that the thing's going? Is my software using the amount of memory that I expected it to use, right? If I made a change to my software, did I break anything? That's an obvious one that's on brand for us.

That's a feedback loop. Yeah, that's what it is. Have we increased or decreased the performance? Let's just get both sides. Exactly, exactly. Can I measure it and see? Yeah. Okay, so that's... Yes, yes. And so the way in which you... Oh, I guess one step back for just a second. Yes, sir. I made the claim that all of these are quantifiable, and I would say easily quantifiable. I think the easiest way, the sort of, you know... if complexity is just

roughly the number of lines of code, which I think is actually a pretty good estimation of complexity, a pretty good estimation for uses is the number of passing tests. Those are the things that you know your software can do.
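Those two proxies, lines of code for complexity and passing tests for uses, are easy to compute crudely. Here is a toy sketch with invented code and test strings; a real project would lean on its build tooling and test runner instead:

```python
# Toy version of the two proxies mentioned above (illustrative only).
# Complexity: a crude count of non-blank, non-comment lines.
# Uses: a crude count of pytest-style test functions.

def count_lines(source: str) -> int:
    """Non-blank, non-comment-only lines as a rough complexity proxy."""
    return sum(1 for line in source.splitlines()
               if line.strip() and not line.strip().startswith("#"))

def count_tests(test_source: str) -> int:
    """Count pytest-style test functions as a rough 'uses' proxy."""
    return sum(1 for line in test_source.splitlines()
               if line.strip().startswith("def test_"))

code = """
def add(a, b):
    # sum two numbers
    return a + b
"""
tests = """
def test_add():
    assert add(1, 2) == 3

def test_add_negative():
    assert add(-1, 1) == 0
"""
print(count_lines(code), count_tests(tests))  # 2 2
```

As the hosts go on to say, the test count tells you what the software *can* do, not whether anyone values it; the feedback dimension answers that.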

That is true. Okay. But I'll take you up on that, because one of the things that you were saying is, you know, things that people want, that are used, that are generating business value. Because I can sit down and write a thousand tests for a piece of code that nobody

actually needs or wants. No, no. I'm specifically saying that uses are not necessarily valuable. Oh, I see. Yeah, yeah. So actually that backs that up specifically. Okay, got it. Yeah, yeah, yeah. Like, these behaviors of your software, are they valuable? I don't know. If they're not valuable, then we should remove those uses, because it probably removes complexity and risk. Yeah. Right.

But yeah, the uses are just, these are the things your software can do. Whether they're valuable or not, maybe the feedback will tell you, right? I see. Yeah, yeah, yeah. Okay. And then it's a slightly weaker coupling than something you can directly measure, but it's part of that whole. So yeah, so feedback can be... user satisfaction, it can be performance, it can be

test coverage, it can be CI build times. It's just anything you can measure objectively about your code. What about things like, you know, log incidents? I mean, you kind of alluded to this with the R thing here, but, you know, like, the number of warnings that you're logging out, or, you know, info. It all falls under feedback, but it may well be that that feedback

in some ways is, again, very sort of collinear with risk. If you're saying, well, one of my risk metrics is how often do we get an exception thrown that we track in our exception tracker, or whatever, then that's part of the feedback, and it also contributes to the risk. Do you see those as being non-orthogonal in that way? Yeah, absolutely. So one of the things you can do is you can

mitigate risk by adding feedback. So if you have better observability in your system, you have better application metrics, you have better alerting, you have better things like that, then you can probably take more, you can have more risky things that don't...

result in as much risk, because you'll be able to respond to them quickly. We're going to put this strategy into production. If it starts tanking, we'll know about it right away and we can turn it off. Or if an exception starts happening, we can handle it right away. Whereas if your...

feedback is lower because your observability is lower, then that risk is effectively higher, because you can't see what's going on, right? Yeah, yeah, yeah. That's interesting. Okay. So, all right. And then we've reached... Cruf... The T, yes. What is the T? The T in CRUFT is... Teams. Teams. I remembered it. I'm a good learner. Team or teams. Yeah. So...

So the bigger the team, the better. I see. Yeah. So you're trying to maximize the number. Yes. So here's the thing. The answer is sort of yes, but not really yes. So obviously we know that big teams are bad. But why are they bad, right? If you have a team of 50 software engineers working in a code base, you're going to have to do a lot of things to make that work, right? Yep, yep, yep.

So the thing that we're actually trying to maximize here... and I chose T because it fits the acronym. I was going to say, I mean, once you've got to CRUF and you're like, what now? So I'm trying to think about, like, okay, so we're talking about

complexity; we can measure that with lines of code. We're talking about team. Team is a little bit of an abstract concept, but I think it has a very clear quantifiable thing, which is bus factor. You are trying to maximize bus factor. Let's stop talking about buses running people over. You and I know that the way that we used to do this in an old job that we were in together was the crypto factor, where...

Some arbitrary coin that somebody had bought on a whim because it had a funny name goes massive and they never have to work again. And so they retire. And that's slightly less... Um, uh, injurious, injurious. Wow. But anyway, the point is that if somebody leaves for whatever reason and you have a massive hole in your ability to maintain and continue to develop your software or support it, then you have a problem. Yes. And yeah, we can call that bus risk. We can call it crypto risk.

I like crypto risk. Crypto risk is good. Although then now that brings in a whole other thing. So I get it. It's sort of politically charged something. Lottery risk? Lottery risk. Yeah. I mean, what's the difference between crypto and lottery? I don't know. Again. Wrong podcast. Wrong podcast for that. Yeah, yeah, yeah. But yeah, so I think that that number you do want to maximize. And it is generally a, I would guess, I'm not a...

you know, modeler of mathematical things, but I would guess that it is generally a multiplicative factor on the other things. Right. So, like: how many people do we have that can understand the complexity that we have? Right. Manage the risks that we have. Add new functionality, new use cases, to our system. You know, interpret the feedback and know how to respond to it.

That is the number of people who are in that sort of lotto-risk set, right? Interesting. Yeah. Yeah, you could even go so far as to say, like, let's go through all of the issues that we have outstanding right now. How many of them could I assign to more than one person, and they would get it done? And that would give you a way of saying, well... and you could do it for everyone and kind of build a pattern of the entire team:

which team members can solve which bugs or issues or risks that you have and go, where are my blind spots here? Oh, if Trevor ever quits, we're doomed because nobody knows what Trevor's doing or whatever. And that then...

Okay. And so I think there are ways you could build to quantify it. It might be, excuse me, it might be a bit trickier than the other ones. It might involve a little bit more gymnastics, but I do think it could be done. I mean, you could even have labels in GitHub or whatever and say, hey, it could be done by blah, blah, blah, rather than assigning

it to people specifically, and then go, hang on a second, which issues do we have that only have one person on them? Right. Yeah, yeah. Oh, whatever. Yeah, yeah. Sorry, I'm trying to solution this right now. But no, I like it. So, CRUFT. Cruft. And I think if you try to relate this back to what a lot of sort of colloquial uses of technical debt are, so, going back to the original thing here, you got a room full of six people and you got 12 opinions about what technical debt is, right?
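The exercise Matt describes, walking the outstanding issues and flagging the ones only a single person could pick up, might look something like this sketch. All the issue titles and names are invented for illustration:

```python
# Illustrative sketch (not the hosts' code): map each outstanding issue to
# the set of people who could plausibly take it, then find the blind spots.

issues = {
    "fix flaky login test": {"ana", "ben"},
    "tune the order router": {"trevor"},           # uh oh
    "upgrade the build image": {"ana", "ben", "cy"},
}

def single_person_issues(issues: dict) -> list:
    """Issues only one person could handle: the team's blind spots."""
    return [title for title, people in issues.items() if len(people) == 1]

def bus_factor(issues: dict) -> int:
    """Crude lower bound: the smallest coverage across all issues."""
    return min(len(people) for people in issues.values())

print(single_person_issues(issues))  # ['tune the order router']
print(bus_factor(issues))            # 1: if Trevor leaves, we're stuck
```

Even this crude version surfaces the "if Trevor ever quits, we're doomed" case the hosts joke about.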

Those opinions are probably not wrong. They're almost certainly not wrong. They're probably born from direct experience and sort of hard-fought lessons. It's just that they mean different things, but we're using the same word to refer to them, right? Yeah. A real basic example of technical debt, as many people use it, is essentially abandonware, right? Like you've got the haunted graveyard project where it's like, you know, in that situation, the T factor has gone to zero. Yeah. Right?

And it's all risk all the time. Yeah. You can't manage the risks anymore. You can't manage the complexity anymore. You can't add new use cases to it anymore, because T is now zero, right? Another situation is when things get too complex, right? Like, the complexity factor goes way up. It becomes very hard for people to understand, and now you can't add new use cases to it anymore. Right.

By doing so, you're probably taking on a risk. I can go in and I can change the software, but I don't really understand how it works. If I break it, that's going to be really bad, right? So all of those dimensions come into play in that scenario. And there are other things, like, I need to get something done quickly. How do I get something done quickly? I implement the happy path for a use case and I don't worry about the risks, right?

I say, like, we're going to catalog these risks. Hopefully we're going to address them at some point. But this thing, this functionality, needs to exist by Friday. And I'm going to implement only the minimum amount of code that I need to make that work. And I'm going to take on a bunch of risk in order to accomplish that. What's that? I said, TODO: check password, return true. Yes, exactly. Yeah.

Exactly. It's fine for the investor meeting. It's less fine when they say, how quickly can we get to production? Right. Right. So this is my next iteration of this thinking, because I do think it's a shame that the phrase technical debt is so sort of blurred in its meaning, right?

But it does fit a lot of cases, and it's very easy to talk to non-programmers. I think we litigated this the last time around. I mean, everyone has sort of some gut understanding of what I say when I say, like, hey, I'm going to do this quickly, but it's going to cost us in the longer run.

Like, oh, I get it, you're borrowing from some mythical thing, and that puts you in debt, and later on you have to pay it back with some kind of interest. Which, again, we've said doesn't really exist in the same sense. But it does bring some of the thought processes through. Not thought... it does have the right kind of smell to it, right, which is why it's so attractive. Cruft, obviously, is a great acronym.

And it is an acronym. That's its main strength, really: the acronym. But also, it is a word that we use to mean exactly that. And so we call it cruft. Yes. But, I mean, the thing is, to pick holes, it is still an incredibly broad set of things that you've just defined in that. Which is fine, because presumably you're covering, like, all things. In fact, cruft in this instance

is an all-encompassing aspect of the entire project, right? It covers all of the parts of it, the good and the bad, because you've got the U in there, that's a usage, and you've got the feedback, which is usually good, and the team, which is hopefully a good thing. And so it isn't, in its own right, a negative thing. Like, if I say, oh my gosh, there's a bit...

there's a lot of cruft here. You know, obviously, colloquially, we know what we mean when we say cruft. We mean, it's like, you know, the belly-button lint of the codebase. It's the goo around the edges that you have to pick out. Um, but... So, yeah, I guess it's a good acronym to think about how to

run a project, and to think about when you're making a decision about whether I should add a new feature, whether I should test this in a particular way, or whether or not we should just send Trevor off on his own odyssey for three months and then see whatever he comes up with. You can measure it using

the cruftometer and sort of say, are we okay with this? Are we still within the bands of cruftiness that's okay? I mean, maybe that's how you could accept this. We accept that all projects are crufty because they're programs, right? And they're written by humans.

They're an engineering solution to a very high-dimensional optimization problem of like, well, we need to get this out by Thursday. Oh, I don't really understand how this thing works. All right, I'm going to use a library I already know. All these kind of things are being balanced in our head all the time.

Yeah, I suppose I'm trying to rationalize what cruft really means. I know, you know, it's a great acronym. Right, right. Well, so here's how I'm intending... here's how I use it myself, and here's how I intend for other people to use it, if they choose to use it. Which is: if you're talking to a non-technical person, use technical debt as a metaphor. That's great.

Like, that's what Ward originally intended. That's a great way to describe to somebody who's not a programmer why you're doing things other than adding new functionality that's going to potentially make you money. It's like, why are you guys refactoring? What's refactoring? Why are you spending your time? Use technical debt. It's great. That's what it's for. If you have

programmers talking to other programmers about the trade-offs that they are making in code, you can do much better than saying, ah, that thing's got a lot of debt. You can be more specific, because you're a programmer, and you should. So, for example, if you are reviewing a PR and you see some code in a PR and you're like, I don't think this complexity is worth the use.

Like, you added in this super complicated arg parser thing instead of just slicing off the first two string parameters. And you added, like, 10 lines of code. And it's like, I get that that's better, but I don't think it's worth the complexity. Right. And then you can have a discussion about whether that's true, as opposed to, like, this code is full of debt, I hate it. Right.

Right. That makes a lot more sense to me. Yeah. Essentially, you're handing someone a cheat sheet of things to talk about when either justifying a decision or considering trade-offs in, you know, a code review situation. These are the words that you should be using. You know, this seems risky. Or, are we okay with the risk that this thing won't work? Is it tracked in an issue somewhere?

If it does go wrong, do we have the feedback that will tell us how to come and find it quickly? Who else knows about this? Does anyone else understand this piece of code? You know, could it be less complicated than this? Maybe we should just, is there something off the shelf we could already use, right? And I've presumably missed one of the letters in this example. Yeah, who even asked for this?

Is this something we wanted to do? Yeah. I think I got more now. In the first place? Yes. Yes. And going back to the start of this conversation, I'm just suggesting that this is a better pattern. If you want to think in patterns. This is a better pattern for programmers to talk to other programmers about aspects of their code that they either want or don't want, right? As opposed to just debt, which doesn't have a concrete enough definition.

or isn't as concrete as programmers are capable of talking about. We are capable of talking about lines of code complexity. We are capable of talking about potential risks. Like, oh, if we get this message type, we don't handle it. What will happen? Oh, the system will restart and it'll go into the dead letter queue. Okay, that's an acceptable risk. Great.
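The "unhandled message type goes to the dead letter queue" behaviour Ben mentions as an acceptable risk could be sketched like this; the message types, handlers, and queue here are all hypothetical:

```python
# Minimal sketch of the failure mode discussed above (illustrative only):
# unknown message types aren't crashed on silently; they're parked in a
# dead letter queue, where the risk stays visible and trackable.

from typing import Optional

HANDLERS = {
    "order": lambda msg: f"placed {msg['qty']}",
    "cancel": lambda msg: f"cancelled {msg['id']}",
}

dead_letter_queue = []

def handle(msg: dict) -> Optional[str]:
    handler = HANDLERS.get(msg["type"])
    if handler is None:
        # Unhandled type: a known, catalogued risk rather than an outage.
        dead_letter_queue.append(msg)
        return None
    return handler(msg)

print(handle({"type": "order", "qty": 100}))  # placed 100
print(handle({"type": "gift", "qty": 1}))     # None; parked in the DLQ
print(len(dead_letter_queue))                 # 1
```

Whether parking a message is "acceptable" is exactly the kind of explicit risk trade-off the hosts are advocating teams talk about.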

We are capable of talking about these trade-offs in more specific terms that aren't a metaphor, a financial metaphor, and we should. And if you don't like cruft and you can't remember what the words are, that's fine. You should come up with these terms for your own team, right? Like, for your group of people who are going to be in this code together, who are going to be wearing those risks, who are going to be getting those pages in the middle of the night.

Come up with your own terminology for this, right? But you don't have to use the same metaphor that you use to explain to the CFO why you need another person on your team when you're talking to another programmer about code. It's just lazy to use debt. Yeah. I know we use it colloquially. Yeah. No, I'm with it. I'm with it. I don't know how to weave this in, but it must be... Oh, no. It's just Ben. I can see the look on his face. What's Matt going to say?

But just for our international audiences, I believe we have, you should know that Cruft or Crufts... is a dog show in the UK. It's the equivalent of the, what is the big dog show in the US? I'm trying to think what it's called now. Ken, no. Oh yeah, like the Kennel Club something. No, I feel there's something else now. Now I should have, I was going to try and Google it, but in this particular position, my keyboard is so close to the microphone. All you'd hear is the clacketing. But yeah, so.

Whenever we say cruft, it's exactly what I think of. In fact, you know, two, three jobs ago, we had the, um, the C++ library of things that C++ really should come out of the gate with, you know, command line parsing and printing strings with commas between them and stuff like that. It was called C Cruft, as in crufty bits of C

and C++ that were just like, we just needed to do this because it's pasting the language together. So cruft is well, well taken, well understood, but I don't think of the dog show whenever anyone talks about it. So if you can come up with an S on the end of your acronym, if you need another thing, then that would make my day. Crufts. But then it would lose the programming meaning. So maybe not. No, that's awesome. Consider me a convert to CRUFT. Well.

I appreciate you trying to poke holes in this, because the temptation with all of these kinds of things is, you know, like you're building castles in the sky, and they're not really applicable. And like, I try to think about things where it's like, is someone actually going to respond in a PR using these terms, the way that they've been defined here, and result in a better conversation and a better solution? And if that's not actually happening, then you're just, you're just

wrapping yourself around an axle being like, oh, and then I'm going to define this like this and I'm going to do this like this. And it's just systems that no one cares about. No, I think it's valuable. I mean, these are not... No disrespect to your insight here. These are not novel concepts, right? You've just found a really good way of putting five key concepts together that make sense and have a catchy acronym, which is a great way of, you know... of making it memorable.

And giving you a conference talk to prepare for, which I'm expecting anytime soon. And so, you know, where is this going to be presented? You've already presented it internally. Yeah, yeah. No, I should present this somewhere. But yes, the very fact that it is not novel is...

is a good sign, because again, this was all about me taking the things that I intuit. And I'm not special. There are lots of other senior software engineers who have, you know, built things for a long time, internalized this stuff.

And they sort of intuit things in this way. And it's just taking those and just putting a name on them. So we're not doing anything new or interesting here. We're doing things that we've done for years. I mean, especially some of the risk management stuff. It's like years and beyond software. Years and years and years.

We're just giving it a name so that when we talk about it, we can say, yeah, I think this is too complex, or I think this is too much risk, instead of a very long-winded explanation of all the things that no one is ever actually going to read in a PR, because they're just like, TL;DR. Yeah. Yeah, yeah, yeah. Well, cool. I mean, that seems like an obvious place

to close this thing. And I'm going to go away and think about this more. I'm doing a lot of Compiler Explorer work at the moment, which has somewhere in the region of 800 open issues, about 40 open PRs. And yeah, so this gives me a new...

tool in my arsenal to start thinking about things there. I mean, that is a project, my friend, that has a lot of that, for sure. But an awful lot of people have added a ton of things in that are useful for them, and then they've disappeared off the face of the earth afterwards, which is completely reasonable. It's an open source project, you know, you add your thing, but, uh,

We don't necessarily have the tests to cover those things, which means that they may break and we don't know. And that is a big R, and it makes it hard for us to change things. And so, you know, this is what I'm going to be looking at over the next few days: trying to work out how to bring this beast back under control so that, uh, we can make it more, um... or less crufty? More crufty? I don't know. Better.

Fantastic. Well, I guess I will see you the next time we do this. Yeah. See you next time. A programming podcast by Ben Rady and Matt Godbolt. Find the show transcript and notes at www.twoscomplement.org. Contact us on Mastodon. We are @twoscomplement@hackyderm.io. Our theme music is by Inverse Phase. Forgive my lack of enthusiasm.

This transcript was generated by Metacast using AI and may contain inaccuracies.