Hello and welcome to Python Bytes, where we deliver Python news and headlines directly to your earbuds. This is episode 193, recorded July 29th, 2020. I'm Michael Kennedy. And I am Brian Okken. And we've got a bunch of great stuff to tell you about. This episode is brought to you by us; we'll share that information with you later. But for now, I want to talk about something I actually ran into today, and I think you're going to bring this up as well, Brian, so I'm going to let you talk about it. While updating my servers, I pip installed some stuff and got a big warning in red saying, your dependencies are inconsistent, and pip is not going to keep working this way for long, so be ready. And of course that just results in frustration for me, because Dependabot tells me I need these versions, but some things don't require them. Anyway, long story, you tell us about it. Yeah, okay. So I was curious. I haven't actually seen this yet, so I'm glad that
you've seen it and have some experience with it. This was brought up to us by Matthew Feickert. He was running pip and got this warning, and it's all in red, so I'm going to have to squint to read it. It says: after October 2020, you may experience errors when installing or updating packages. This is because pip will change the way it resolves dependency conflicts. We recommend you use the --use-feature=2020-resolver flag to test your packages. It shows up looking like an error, and I think that's just so people actually read it. I don't know if it's a real error or not; everything still works fine, but it is going to be an error eventually. Okay, so this is not a change to ignore. You actually do need to adjust your setup, and what you need to be aware of is the changes. I think we've covered it before,
but we've got a link in the show notes to the pip dependency resolver changes, and these are good things. One of the things Matthew pointed out, which is great, and we're also going to link to an article where he discusses how his problem showed up, is that it's around lock files. Some people use Poetry and other tools that manage lock files for you, but a lot of people just do that kind of thing manually. What you often do is you have your original set of requirements, just the handful of things you immediately depend on, with no versions or with minimal version rules, and you say, install this stuff. Well, that actually ends up installing a whole bunch of stuff: all of your immediate dependencies, all of their dependencies, and so on. So if you want to lock that down, so you're installing exactly the same things again and again, you run pip freeze and pipe that to a lock file. That's a common pattern. It's not the same as Pipenv's lock file, but it's similar. And then if you pip install from that, everything should be fine. You're going to install those dependencies.
The problem is, if you don't use the --use-feature=2020-resolver flag to generate your lock file, but you do use it when installing from your lock file, there may be incompatibilities between the two. The resolver changes themselves are good things; having pip resolve dependencies properly is a real improvement. But the quick note we want to make is: don't panic when you see that red warning. Just try --use-feature=2020-resolver. And if you're using a lock file, use the new resolver for the whole process: use it to generate your lock file from your original requirements, and then use it again when installing from the lock file. There's also information on the PyPA website, and they want to know if there are issues. It's available, and there may still be kinks, but I think it's pretty solid. Not enforced yet; these are the warning days. Yeah. And I actually really like this way of rolling out a behavior change: have it available behind a flag so you can test it in an actual release, not a pre-release, and then change the default behavior later. The reason we're bringing this up is that October is not that far away, and October is when this goes from flag behavior to default behavior. So yes, go out and make sure these things are working. And if you completely ignore us and things break in October, the reason is probably that you need to regenerate your lock file. Yep. So in principle, I'm all for this. This is a great idea.
It's going to make sure things are consistent by looking at the dependencies of your libraries. However, the thing that's driving me bonkers right now is systems like Dependabot or PyUp, which are critically important for making sure your web apps get updated with, say, security patches. So you pip freeze your dependencies, and now everything is pinned to a version. What if you're using Django and there's a security release? Unless you know to update that pin, it's always going to install the version you started with. So you want a system like Dependabot or PyUp that looks at your requirements, says these are out of date, and proposes the new versions. However, those systems don't look at the entirety of what they could set the versions to. It says, okay, you're using docopt, and there's a docopt 0.16, except something else you depend on requires docopt 0.14 or lower, so they're incompatible. And incompatible now means pip will refuse to install that requirements.txt at all. But those systems still say, great, let's upgrade it. So you end up in this battle where one tool keeps upgrading things while older libraries hold them back, or you get two libraries where one requires docopt 0.16 or above and one requires docopt 0.14 or lower, and you just can no longer use those libraries together. Now, it probably doesn't actually matter; the feature you're using is probably compatible with both, but you won't be able to install them anymore. My hope is that this means the people with these weird old dependency pins will either loosen the requirements in their dependency structure, like we're talking about, or update them, because otherwise there are going to be packages that are treated as incompatible that aren't actually incompatible.
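As an aside, the arithmetic of the problem is easy to see in a toy sketch. The version numbers and the constraint rules here are made up for illustration; this is not how pip's real resolver works, just the logic of why two pins can be jointly unsatisfiable:

```python
# Toy illustration of an unsatisfiable pair of version constraints.
# Versions are tuples; each constraint is (op, bound) with op ">=" or "<=".
def satisfies(version, constraint):
    op, bound = constraint
    return version >= bound if op == ">=" else version <= bound

lib_a_needs = (">=", (0, 16))   # e.g. a pin like "docopt>=0.16"
lib_b_needs = ("<=", (0, 14))   # e.g. a pin like "docopt<=0.14"

candidates = [(0, 14), (0, 15), (0, 16)]
compatible = [v for v in candidates
              if satisfies(v, lib_a_needs) and satisfies(v, lib_b_needs)]
print(compatible)  # [] -- no version satisfies both, so a strict resolver refuses
```

With the old pip, one of those pins would silently win; with the 2020 resolver, the empty intersection becomes a hard install error.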
Yeah. Interesting. Yes, painful. I don't know what to do about it, but literally this morning I ran into this, and I had to go back and undo what Dependabot was trying to do for me because certain things were no longer working. Interesting. So what does Dependabot actually do? Dependabot, yeah, that's the thing GitHub acquired that looks at your various package listings and says, there's a new version of this, let's pin it to a higher version. And it comes as a PR. Okay, that was my question: it comes as a PR. So if you had testing in a CI-like environment, it could catch it before it went through. Yes. You'll still get the PR in your GitHub repo, but CI presumably would fail because the pip install step would fail, and then it would know it couldn't auto-merge it. But still, you're constantly pushing the tide back, like, stop doing this, it's driving me crazy. And there are certain ways to limit it or force it to certain boundaries, but anyway, it's going to make some of these things a little more complicated. Hopefully Dependabot can update to consider this. Wouldn't that be great? Yep, that would be great. Well, speaking of packages, the way you use packages
is you import them once you've installed them, right? Yes. So Brandon Branner was talking with me on Twitter, saying, I have some imports that are slow; how can I figure out what's going on here? And this led me to something we may have covered a long time ago, I don't think so but possibly, called import-profiler. You know this? No, this is cool. Yeah. So one of the things that can legitimately be slow about Python startup is the imports. For example, if you import requests, it might be importing a ton of different things: standard library modules as well as external packages, which are themselves importing standard library modules, et cetera. So you might want to know what's slow and what's not. And it's not just like a C include; imports actually run code. Yes, exactly. It's not something happening at compile time, it's happening at runtime. Every time you start your app, it goes through and executes the code that defines the functions and the methods, and potentially other code as well; who knows what else is going on. So there's a non-trivial amount of time spent doing that kind of stuff. For example, I believe it takes something like half a second to import requests, just requests.
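You can get a rough sense of a single import's cost with nothing but the standard library. This is a quick-and-dirty sketch, not the import-profiler tool itself, and note it only measures a cold import, since subsequent imports hit the sys.modules cache:

```python
import importlib
import sys
import time

def time_import(module_name):
    """Measure the wall-clock cost of importing a module for the first time."""
    sys.modules.pop(module_name, None)  # ensure we aren't timing a cache hit
    start = time.perf_counter()
    importlib.import_module(module_name)
    return (time.perf_counter() - start) * 1000  # milliseconds

for name in ("json", "email", "xml.dom.minidom"):
    print(f"import {name}: {time_import(name):.2f} ms")
```

The numbers vary wildly by machine and by what is already cached, which is exactly why a tool that breaks the time down per transitive import is handy.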
Interesting. I mean, obviously that depends on the system, right? Do it on MicroPython versus a supercomputer and the time is going to vary, but nonetheless, there's a non-trivial amount of time there because of what's happening. So there's this cool thing called import-profiler, where all you've got to do is say from import_profiler import profile_import. Woo, say that a bunch of times fast. Written, it's fine; spoken, it's funky. Then you create a context manager around your import statements: you say with profile_import() as context, put all your imports inside, and then you print it out with context.print_info(), and you get a profile status report. That's cool. Now, I included a little tiny example of this for requests and what came out of it; if you look at the documentation, the real output is actually much longer. Just eyeballing it, there are probably 30 different modules being imported when you say import requests. That's non-trivial, right? That's a lot of stuff. So this gives you that output as a hierarchy, a tree type of thing: this module imported this module, which imported those other two. So you can see the chain, or the tree, of everything an import drags in with it. Okay. Yeah. And it gives you the overall time; I think the time dedicated to just that operation and then the inclusive time. Actually, looking at it, it's more like 83 milliseconds. Sorry, I had my units wrong, it's not half a second, but nonetheless, if you have a bunch of imports in your running code and wonder where it's slow, you can run this, and it basically takes three lines of code to figure out how much time each part of that entire import stack takes. I want to say call stack, but it's the series of imports that happen: you time the whole thing and look at it. So yeah, it's pretty cool. That's neat. And also, there are times when you really want startup time to be as fast as possible, and part of that is the stuff you're importing at startup; it's sometimes non-trivial when you have something you really want to launch fast. Right. Say you're spending half a second on startup because of the imports. You might be able to take the slowest of those and import it inside a function that gets called later. Yeah. Import it later.
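The deferred-import pattern is just moving the import statement into the function that needs it. A minimal sketch, with a stdlib module standing in for some hypothetical heavy dependency:

```python
def render(data):
    # Deferred import: json stands in for a heavy dependency here.
    # The cost is paid on the first call to render(), not at program startup.
    import json
    return json.dumps(data, sort_keys=True)

# Startup is instant; the import only happens if this code path runs.
print(render({"b": 2, "a": 1}))  # {"a": 1, "b": 2}
```

Python caches modules in sys.modules, so repeated calls don't re-pay the import cost; only the very first call through that branch does.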
Yes, you only pay for it if you're going to go down that branch, because maybe you never call that part of the operation, or that part of the CLI, or whatever. Yeah. It's definitely one of those fine-tuning things you want to make sure you don't do too early, but for people packaging and supporting large projects, I think it's a good idea to pay attention to your import time. It'd even be kind of fun to throw a test case into CI to make sure your import time doesn't suddenly get slower because something you depend on got slower. Yeah, absolutely. And you don't necessarily know, because it could be a change way down the dependency chain, not even the thing you directly depend on. Yeah. Or maybe you think, we're going to use this other library; we barely use it, but we already have dependencies, why not throw this one in? Oh wait, that's adding a quarter of a second. We could just vendor the one file we actually need and make it much faster. So there are a lot of interesting use cases here. A lot of the time you don't care. For my web apps, I don't care; for my CLI apps, I might care. Yeah, definitely. Yeah. So I've been on this bit of an exploration lately, Brian, because I'm working on a new course. Yeah. We're actually working on a bunch of courses over at Talk Python, some data science ones which are really awesome, but the one I'm working on is Python memory management and profiling: tips and tricks and data structures to make all those things go better. Nice. So I'm kind of on this profiling bent.
And anyway, if people are interested in that or any of the other courses we're working on, they can check them out over at training.talkpython.fm. It helps bring you this podcast and others. And books! Thanks for that transition. I'm excited about that course, because profiling is one of those things that's often considered kind of a black art, something you just learn on the job. And how do you learn it? I don't know, you have to know somebody who knows how to do it. So having some courses around it is a really great idea. Thanks. Yeah, things like: when does the GC run? What's the cost of reference counting? Can you turn off the GC? Which data structures are more or less efficient in those terms? All that kind of stuff. It'll be a lot of fun. Cool. Yeah. So I've got a book, and I actually want to highlight something. I've got a link: pytestbook.com. If you just go to pytestbook.com, it goes to a landing page on a blog that's honestly not very active, but there is a landing page. The reason I'm pointing this out is that people are transitioning: some people are finally starting to use 3.8 more, people are starting to test 3.9 a lot, which is great, and pytest 6 just got released, though that's not one of our items. And I've gotten a lot of questions
of whether the book is still relevant. And yes, the pytest book is still relevant, but there are a couple of gotchas, and I'm going to list all of them on that landing page. They're not there yet, but they will be by the time this airs. Time travel. Yeah. There's an errata page on Pragmatic that I'll link to. But the main things are these. There's a database I use in the examples, TinyDB, and its API changed since I wrote the book, so there's a little note to update the setup to pin the database version. And markers: you used to be able to get away with just throwing markers in anywhere, and now you get a warning if you don't declare them. There are a few minor things like that that have changed, which might be frustrating for new pytest users walking through the book, so I'm going to lay those out directly on that page so people can get started really quickly. That's what pytestbook.com is. Awesome. Yeah, it's a great book. And you might be on a testing bent as well, like I'm on my profiling one. Yeah, actually. So Django Testing Toolbox is an article by Matt Layman. I was thinking about having him on Test & Code to talk about some of this stuff, and I still might, but he threw this together
and I just wanted to cover it here, because it's a really great collection of information: a quick walkthrough of how Matt tests Django projects. He goes through the packages he uses all the time and some interesting techniques. Of the packages, there are a couple I was familiar with: pytest-django, which of course you should use, and Factory Boy. There are a lot of different projects to generate fake data; Factory Boy is the one Matt uses, so there's a highlight there. And then one I hadn't heard of before: django-test-plus, which is a beefed-up TestCase. It may have other stuff too, but it has a whole bunch of helper utilities that make it easier to check commonly tested things in Django. So that's pretty cool. And then some of the techniques. One thing that trips up some people trying to use pytest with Django is that a lot of people think of pytest as test-functions-only, not test classes. But there are uses; Matt says he really likes test classes. pytest absolutely allows you to use test classes, and you can use these derived test cases, like the django-test-plus TestCase. A couple of other things: using Arrange-Act-Assert as a structure, and using in-memory SQLite databases when you can get away with it, because in-memory databases are way faster than on-filesystem databases. Yeah, and you don't have to worry about dependencies or servers you've got to run. It's just :memory:, boom, you connect to it and off it goes. Nice. Yeah. One I didn't fully get, well, I kind of get it, is disabling migrations while testing. I don't know a lot about Django migrations, but apparently disabling them while testing is a good idea. It makes sense. And then: a faster password hasher. I had no idea what this was talking about, but apparently you can speed
up your testing with a faster password hasher. Yeah. A lot of the time, password hashes are generated to be explicitly slow, right? Over at Talk Python I use Passlib, not Django's auth, but Passlib is awesome. If you just use, say, MD5, it's super fast: you take the password, hash it, and get the hashed output. But because it's fast, people could look at that output and try, say, a hundred thousand words against it, and if any of them match, that's the password. You can use more complicated hashes. Not MD5; you want something like bcrypt, which is slower and harder to brute-force. And what you should really do is insert little bits of salt, extra text around the password, so that even identical passwords don't hash the same way and you can't do those precomputed guesses. And then you should fold it: take the output of the first round, feed it back through, take the output of that, feed it back through, a hundred thousand, two hundred thousand, three hundred thousand times, so that guessing becomes computationally expensive. I'm sure that's what the tip is about: you don't want all of that when you want your tests to run fast, because you don't care about hash security during tests. Oh yeah, that makes total sense. That's my guess; I don't know for sure, but I think that's what it means. The last tip, which is always a good tip: figure out your editor so you can run your tests from your editor. Your cycle time flipping between code and tests is going to be a lot faster if you can run them right there. Yep, these are good tips. And if you're super intense, there's auto-run,
which I don't do. I don't have auto-run on; I do it once in a while. Yeah. Cool. Well, back to my rant, let's talk about profiling. Okay. Actually, this is not exactly the same type of profiling; it's more of a look inside your data than at performance. This was recommended to us by one of our listeners named Oz. First name only is what we got, so thank you, Oz. He's a data scientist who spends a lot of time working in Python doing exploratory data analysis. The idea is: grab some data, open it up, and explore it, just start looking around. But it might be incomplete, it might be malformed; you don't necessarily know exactly what its structure is. He used to do this by hand, but he found this project called pandas-profiling, which automates all of it. So that sounds handy. I mentioned missingno before, missing-N-O as in missing numbers, the missing data explorer, which is super cool and I still think is awesome, but this is in the same vein. The idea is, given a pandas DataFrame, you know pandas has a describe function that gives you a little bit of detail about it; this thing takes that and supercharges it. You can say df.profile_report() and it gives you all sorts of stuff. It does type inference to say the things in this column are integers, numbers, strings, datetimes, whatever. It reports unique values, missing values, quartile statistics, descriptive statistics like mean, mode, sum, and standard deviation, histograms, correlations, missing values (there's the missingno thing I spoke about), text analysis of categories and whatnot, and file and image analysis: file sizes, creation dates, dimensions of images, all sorts of stuff. The best way to see this is to look at an example. So in our notes, Brian, do you see where it has nice examples, like the NASA meteorites one? There's an example for the US census data, a NASA meteorites one, some Dutch healthcare data, and so on.
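Pandas-profiling needs a real DataFrame, but the bookkeeping it automates, per-column type inference and missing-value counts, can be sketched in a few lines of plain Python. The meteorite-style rows here are made up for illustration:

```python
from collections import Counter

# Made-up records standing in for a real dataset
rows = [
    {"name": "Aachen", "mass": 21.0,     "year": 1880},
    {"name": "Abee",   "mass": None,     "year": 1952},
    {"name": None,     "mass": 107000.0, "year": 1902},
]

def mini_profile(rows):
    """For each column: count missing values and tally the observed types."""
    report = {}
    for col in rows[0]:
        values = [row[col] for row in rows]
        present = [v for v in values if v is not None]
        report[col] = {
            "missing": len(values) - len(present),
            "types": dict(Counter(type(v).__name__ for v in present)),
        }
    return report

for col, stats in mini_profile(rows).items():
    print(col, stats)
```

Multiply that by histograms, quantiles, correlations, and warnings and you get a sense of the hand work the library is saving.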
If you open one of those example reports up, you see what you get out of it: pages of reports about what was in that data frame. Oh, this is great. Isn't that cool? It's tabbed, it's got warnings, it's got pictures, it's got all kinds of analysis: histogram graphs, details you can hide and show, correlation heat maps. I mean, this is a massive dive into what the heck is going on with this data. This is the business right here. And it's like one line of code to get this output. This is great; this replaces a couple of interns at least. Sorry, interns. But yeah, this is really cool. So I totally recommend, if this sounds interesting and you do this kind of work, just pull up the NASA meteorite data and realize that all of it came from importing the thing and saying df.profile_report(), basically. You can also click and run it in Binder or Google Colab and interact with it live if you want. Yeah. I love the warnings on these. It flags things about the variables, like some of them are skewed, with too many values at one value, and some of them have lots of zeros or missing values showing. It does quite a bit of analysis about the data for you right away. That's pretty great. Yeah. And the type inference is great, because you can have hundreds or thousands of data points, and it's not trivial to just say, oh yeah, all of these are true or false, I know they're Booleans; you'd have to look at everything first. So yeah, it's one of those things that's easy to adopt but looks really useful, and it's also beautiful. So check it out. It looks great. I want to talk about object oriented programming a little bit. Oh, okay. Actually, I mean, all of Python really is object oriented, because everything is an object. Deep, deep down, everything's a PyObject pointer. Yeah. There's an article by Redowan Delowar called Interfaces, Mixins, and Building Powerful Custom Data Structures in Python.
And I really liked it because it's Python-focused. I've actually been disappointed with a lot of the object oriented discussions around Python; a lot of them basically lament that the system isn't the same as in other languages. But it's just not; get over it. This is a Python-centric discussion of interfaces and abstract base classes, both informal and formal abstract base classes, and using mixins. It starts from the base amount of knowledge people need to discuss this sort of thing: why these are useful, and what some of the downsides and benefits are. It's not too deep a discussion, but it's an interesting one, and I think it's a good background to discuss it.
And then he gets into things like: what's really different between an abstract base class and an interface? He writes that interfaces can be thought of as a special case of abstract base classes: it's imperative that all methods of an interface are abstract methods, and that the classes don't store any data, no state or instance variables. With abstract base classes, on the other hand, the methods are generally abstract, but there can also be methods that provide implementation, concrete methods, and these classes can have instance variables. So that's a nice distinction. Yeah. Then mixins are where a parent class provides some functionality to subclasses but isn't intended to be instantiated itself; that's why they're sort of similar to abstract base classes. So having all of this discussed well by one person in one place is a really great thing. I don't pull out class hierarchies and base classes that much, but there are times when you need them, and they're very handy. So it's cool. Yeah, this is super cool. I really like this analysis, and I love that it's really Python-focused, because a lot of times the mechanics of the language just don't support some of the object oriented programming ideas in the same way. Like, the interface keyword doesn't exist in Python, so you have to make that distinction by convention: we agree that we don't put concrete methods or state in our interfaces. I'm a big fan of object oriented programming, and I'm also very aware that in Python, a lot of what people use classes for is simply unneeded. I know where that comes from, and I want to make sure people don't overuse it. If you come from Java or C#, or one of those OOP-only languages, everything's a class, so you're just going to start creating classes. But if what you really want is to group functions and a couple of pieces of shared data, that's a module; you don't need a class. You can still say module_name.whatever and get the list of them.
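The interface-versus-ABC-versus-mixin distinction can be sketched with the stdlib abc module. The class names here are invented for illustration, not taken from the article:

```python
from abc import ABC, abstractmethod

class Serializer(ABC):
    """Interface-style ABC: all methods abstract, no state."""
    @abstractmethod
    def serialize(self):
        ...

class ReprMixin:
    """Mixin: adds behavior to subclasses, never instantiated on its own."""
    def describe(self):
        return f"{type(self).__name__}({self.serialize()})"

class Point(ReprMixin, Serializer):
    def __init__(self, x, y):     # unlike an interface, a concrete class has state
        self.x, self.y = x, y

    def serialize(self):
        return f"{self.x},{self.y}"

print(Point(1, 2).describe())     # Point(1,2)

# An ABC with unimplemented abstract methods cannot be instantiated:
try:
    Serializer()
except TypeError as error:
    print("abstract:", error)
```

The mixin never appears on its own; it only contributes describe() to classes that implement serialize().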
A module like that is almost like a static class. But sometimes you do want to model things with object oriented programming, and understanding the right way to do it in Python is really cool. This looks like a good one. Yeah. And also, there's a built-in module called abc, for abstract base classes, in Python. For a lot of people it seems like a mystery thing that only advanced people use, but it's really not that complicated, and this article uses it and talks about it. So it's good. You know, one of my favorite things about abstract base classes and abstract methods is that in PyCharm, if I have a class deriving from an abstract class, all I have to write is class, the name of the thing I'm creating, open paren, the abstract class name, close paren, colon, then hit Alt+Enter, and it pulls up all the abstract methods. You highlight them, say implement, and boom, it writes the whole skeleton of the class for you. Wow. If it's not abstract, obviously it won't do that; the abstractness is what tells the editor to write the stubs of all those functions for you. Oh, that's a cool reason to use them. That's almost reason enough to have them in the first place. Yeah, almost. Nice. We've pickled before, haven't we?
Yeah, we have talked about pickle a few times. Have we talked about this article? I don't remember. I don't think so. We have, apologies! But it's short and interesting. Ned Batchelder wrote this article called Pickle's Nine Flaws, and I want to talk about that. This comes to us via PyCoder's Weekly, which is very cool. We've talked about the drawbacks, we've talked about the benefits, but what I like about this article is that it's concise while showing you all the tradeoffs you're making. So quickly, I'll go through the nine. One: it's insecure. And the reason is not that pickles contain code, but that they create objects by calling the constructors named in the pickle, so any callable can be used in place of your class name to construct objects. It basically runs potentially arbitrary code, depending on where the pickle came from. Two: old pickles look like old code. If your code changes between the time you pickled and the time you load, you get the old structure recreated back to life; fields or capabilities you added won't be there, and fields you took away still will be. Three: it's implicit. It serializes whatever your object structure happens to be, and it often over-serializes; if you have cached data or pre-computed data that you'd never normally save, well, that's getting saved. Yeah. And one of the weird ones, this has caught me out before, is that __init__, the constructor, is not called. Your objects are recreated and the attributes just get their values directly, so that might set the object up in some kind of weird state.
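Both the skipped __init__ and the insecurity are easy to demonstrate. The classes here are toys made up for illustration, and the __reduce__ payload just prints instead of doing anything harmful:

```python
import pickle

class Account:
    def __init__(self, owner):
        print("__init__ validating", owner)   # never runs during unpickling
        self.owner = owner

restored = pickle.loads(pickle.dumps(Account("brian")))
print(restored.owner)  # brian -- attributes copied back, constructor skipped

class Sketchy:
    def __reduce__(self):
        # Unpickling CALLS this callable with these args. A hostile pickle
        # could name os.system here instead of the harmless print.
        return (print, ("arbitrary code ran during unpickling",))

pickle.loads(pickle.dumps(Sketchy()))
```

So any validation or setup living in __init__ is silently bypassed on load, and loading a pickle from an untrusted source is equivalent to running its code.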
Maybe it would fail some validation the constructor does, or be in a state the constructor would never produce. Next: it's Python-only; you can't share pickles with programs in other languages, because it's a Python-only structure. They're not readable; it's a binary format. And it will only seem like it pickles code: if you're hanging on to a function, say a lambda you were passed, or a list of them, and you think pickle is going to save that, all it really saves is basically the name of the function, so those are gone. And I think one of the real big flaws is that it's actually slower than alternatives like JSON. If you were giving up all those trade-offs because it was super fast, that'd be one thing, but it's not. And are you telling me we covered this before? We did cover it in episode 189, but I had forgotten. So that was a couple of months ago, right?
Yeah, a while ago. Anyway, it's good to go over it again. Definitely; be careful with your pickling. All right, how about anything extra? That was our top six items. What else have we got? I don't have anything extra. Do you have anything extra? Pathlib. Speaking of stuff we've covered before, we've talked about pathlib a couple of times. You talked about Chris May's article around pathlib, which is cool. And I said, basically, I've still got to get my mind around not using os.path and just getting into this. And people sent me feedback like, Michael, you should get your mind into this; of course you should do this. And I'm like, yeah, yeah, I know. However, Brett Abel sent over a one-line tweet that may just seal the deal for me. This is sweet. He said: how about this? text equals Path of file dot read_text. Done. No context managers, no open, none of that. And I'm like, okay, that's pretty awesome. I just wanted to give a little shout-out to that one-liner, because it's really nice. And then also, I was just a guest on a podcast out of the UK called A Question of Code, where the hosts Ed and Tom and I discussed why Python is fun, why it's good for beginners and for experts, and why it gives you results faster, like tangible code or
tangible programs, faster than, say, JavaScript. Career stuff, all kinds of stuff. Anyway, I linked to it if people want to check it out. That's cool. Yeah, it was a lot of fun; those guys are running a good show over there. Yeah, I think I'm talking with them tomorrow. Right on, how cool. One of the things I like about it is the accents, because accents are fun. So I was going to ask you: would you consider learning how to do a British accent? Because that would be great for the show. I would love to, but I fear I would just end up insulting all the British people and not coming across well. I do love British accents, though. If we had enough Patreon supporters, I would be more than happy to volunteer to move to England, maybe just live in London for a few years. If they're going to fund that for you, that would be awesome. Yeah, London's a great town. Okay, how about another joke? I'd love another joke. So this one is by Caitlin Hudon, but it was pointed out to us by Aaron Brown. She tweeted it, and he said, hey, you guys should think about this. You ready? Yeah. Caitlin says: I have a Python joke, but I don't think this is the right environment. Yeah. So there's a ton of these types of jokes, like
"I have a joke, but..." This is a new thing, right? It's probably going to be over by the time this airs, but I'm really amused by these types of jokes. Yeah, I love it. This one touches on the whole virtual environment, package management, isolation chaos. I mean, there was that XKCD about that as well. Yeah. Okay, so while we're here, I'm going to read some from Luciano Ramalho. He's a Python author and an awesome guy. Here are a couple of other related ones: I have a Haskell joke, but it's not popular. I have a Scala joke, but nobody understands it. I have a Ruby joke, but it's funnier in Elixir. And I have a Rust joke, but I can't compile it. Yeah, those are all good. Nice. All right, well, Brian, thanks for being here, as always. Thank you. Talk to you later. Bye. Thank you for listening to Python Bytes. Follow the show on Twitter via @pythonbytes. That's Python Bytes, as in B-Y-T-E-S. And get the full show notes at pythonbytes.fm. If you have a news item you want featured, just visit pythonbytes.fm and send it our way. We're always on the lookout for sharing something cool. On behalf of myself and Brian Okken, this is Michael Kennedy. Thank you for listening and sharing this podcast with your friends and colleagues.