
#342 Don't Believe Those Old Blogging Myths

Jun 26, 2023 - 42 min - Ep. 342

Episode description

Topics covered in this episode:
See the full show notes for this episode on the website at pythonbytes.fm/342

Transcript

Hello and welcome to Python Bytes, where we deliver Python news and headlines directly to your earbuds. This is episode 342, recorded June 25th, 2023. I'm Michael Kennedy. And I am Brian Okken. And this episode is brought to you by Brian and me, us, our work. So support us, support the show, keep us doing what we're doing by checking out our courses over at Talk Python Training. We have a bunch, including a really nice pytest course written by Brian.

Check out the Test & Code podcast, and thank you to the Patreon supporters. Brian's got a book as well, on pytest. You may have heard of it. So please, if you check those things out, share them with your friends and recommend them to your co-workers. It really makes a difference. You can also connect with us on Mastodon. You'll see that over on the show notes for every episode. And finally, you can join us over at pythonbytes.fm/live if you want to be part of the live recording, usually

Tuesdays at 11 a.m. Pacific time. But not today. No, Brian, we're starting nice and early because, well, it's vacation time. And, well, Plumbum, I think we should just get right into it. Sure. Plumbum, let's do it. It's a new saying. It's an expression. Plumbum, let's just do it. Yeah, I have no idea where this comes from. Well, I do know where it comes from. It was last week. Last week, we talked about shells, and Henry Schreiner said, hey, you should

check out Plumbum. It's kind of like what you're talking about, but also neat. So I did. We were talking about sh. Oh, right. We were talking about sh. Don't tell anyone. So, Plumbum. It's a little easier to search for, actually, than sh. So what is it? It's a Python library, and it's shell combinators. It's for interacting with your environment. And

there we go, Henry Schreiner, one of the maintainers. So it's a tool that you can install so that you can interact with your operating system and file system and stuff like that, and all sorts of other things. And it's got a little bit different style than sh. So I was taking a look at it. It's got a local command, for one. The basics are like: from plumbum import local, and then you

can run commands as if you were just running a shell, but you do this within your Python code. And there are also some convenience imports, like sh has for ls and grep and things like that. But it generally looks like there's more stuff around how you'd operate with a shell normally, things like piping. So

you can, you know, pipe ls to grep to wc or something like that to count files. I mean, there are other ways to do it within Python, but if you're used to doing it in the shell,

just wrapping the same work in a Python script, why not? Things like redirection work, manipulating your working directory, just all sorts of fun stuff to do with your shell, but through Python. You know, overriding the pipe operator in Python so it actually means the same thing as in the shell is a little bit like pathlib overloading the divide operator,

right? Like, we're going to grab some operator and make it do something it probably was never imagined to be used for, so that it looks like the abstraction you're representing, which is pretty interesting. Yeah. And, like this example: they have an example in the README of piping ls to grep to wc. And they define that as a chain, and it doesn't even run it. I

don't think it runs anything; it just defines this new sequence. So you can chain together shell commands, and if you print it, it has a __str__ or repr implementation that shows you exactly what all the piping and chaining was. So that's kind of a neat thing for debugging. And then when you actually run it, you call that thing like a function, and it runs. That's pretty neat. Yeah, it is. You can even do them inline, just put parentheses

around them and kind of execute at the end. Yeah. Pretty interesting. Anyway, just a fun, quick shout-out to Plumbum. Yeah. If you thought sh was cool last time, you might also check this out, right? They kind of play in similar spaces. Yeah. Just one of the things I like about Python and the Python community is this variety of different libraries that might solve the same space but have a different flavor. You know, some people like chocolate.

Some people like vanilla. Well, I'm a big fan of caramel. So how about we talk about faster CPython? Okay. So, faster CPython: they're really starting to show some results, right? Python 3.11 was 40% faster, I believe, roughly speaking, working with averages and all those things. And we've got 3.12 coming with more optimizations. And ultimately the faster CPython plan was,

you know, put together and laid out by Mark Shannon. And the idea was: if we could make improvements like 40% faster over and over again, because of compounding, we'd end up with a really fast CPython, a faster one, you might say: five times faster in five releases. And so that started really with 3.10, then 3.11 and 3.12, and now for 3.13, not the one that's coming, but the one that's coming in a year and a few months,

they're laying out their work for that, and it's looking pretty ambitious. So in 3.12, they're coming up with ways to optimize blocks of code. Stepping back a little bit: in 3.11, we got the specializing adaptive interpreter, and that allows CPython to replace the

bytecodes with more specific ones. So if it sees that you're doing a float plus a float operation: instead of doing an abstract plus, you know, is that a list plus a string? Is that an integer and a float? Is that actually a float and a float? If it's a float and a float, then it can specialize that to do more specific, more efficient types of math and that kind of stuff.
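You can actually watch the specializing adaptive interpreter do this on CPython 3.11+ with the `dis` module; note that the specialized opcode names are an implementation detail and vary between versions:

```python
# Watching the specializing adaptive interpreter (PEP 659) at work.
# Requires CPython 3.11+; specialized opcode names are an implementation
# detail and change between versions.
import dis
import sys

def add(a, b):
    return a + b

# Warm the function up with floats so the interpreter has a chance to
# specialize the generic BINARY_OP for float + float.
for _ in range(1000):
    add(1.0, 2.0)

if sys.version_info >= (3, 11):
    # adaptive=True shows the quickened instructions; after warm-up you
    # should see something like BINARY_OP_ADD_FLOAT instead of BINARY_OP.
    dis.dis(add, adaptive=True)
```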

Right. 3.12 is supposed to have what they're calling the tier one optimizer, which optimizes little blocks of code, but they're pretty small. And so one of the big things coming here in 3.13 is a tier two optimizer: bigger blocks of code, something they're calling superblocks, which I'll talk about in just a second. The other one that sounds really amazing is enabling sub-interpreters from Python code. So we know about PEP 554. This has been quite the journey, and a massive

amount of work done by Eric Snow. And the idea is: if we have a GIL, then we have serious limits on concurrency, right? From a computational perspective, not from an I/O one, potentially. And, you know, I'm sitting here on my M2 Pro with 10 cores, and no matter how much multi-threaded Python I write, if it's all computational, all running Python bytecode, I get one tenth of the capability of this

machine, right? Because of the GIL. So the idea is, well, what if we could have each thread have its own GIL? There's still, sure, a limit to how much work can be done in that particular thread concurrently, but it's one thread dedicated to one core, and the other core gets its own sub-interpreter, right? One that doesn't share objects in the same way, though they can pass them around through certain mechanisms. Anyway, this thing has been a journey. Like I said, it was created in 2017.
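The GIL limit they're describing is easy to demonstrate: pure-Python, CPU-bound work gets no speedup from threads on a standard GIL build (exact timings will vary by machine, so treat the printed numbers as illustrative):

```python
# A tiny illustration of the GIL limit: pure-Python, CPU-bound work
# gets no speedup from threads on a standard GIL build of CPython.
import threading
import time

def count_down(n):
    # CPU-bound loop: only one thread at a time can execute this bytecode.
    while n > 0:
        n -= 1

N = 5_000_000

start = time.perf_counter()
count_down(2 * N)                      # all the work on one thread
single = time.perf_counter() - start

threads = [threading.Thread(target=count_down, args=(N,)) for _ in range(2)]
start = time.perf_counter()
for t in threads:
    t.start()
for t in threads:
    t.join()
threaded = time.perf_counter() - start

# On a stock GIL build the threaded version is typically no faster, and
# often slower from lock contention. Per-interpreter GILs aim to let each
# core run its own isolated interpreter instead.
print(f"one thread: {single:.2f}s  two threads: {threaded:.2f}s")
```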

And it has all this history up until now. And the status still says draft. I think the PEP is approved, and work has been done, but it's still in pretty early stages. So that's a pretty big deal: it's supposed to show up in 3.13, in Python code. I think that in 3.12, the work has been done so that it's internally possible, if I remember correctly, but there's no way to

use it from Python, right? If you're a creator of interpreters, basically, you can use it. So now the idea is: let's make this possible for you, to do things like start a thread and give it its own sub-interpreter, copy its objects over, let it create its own, and really do computational parallelism. I'm guessing interaction with async and await and those kinds of things. And also improved memory management. Let's see what else.

Well, so I guess along with that, we're going to have to have some tutorials or something on how the two sub-interpreters share information. Yeah, exactly. Yeah, we will. What I would love to see is just, you know, on the thread object, something like new-sub-interpreter equals true, and off it goes. That would be excellent. And then maybe it pickles the objects.

I don't know. We can see how they come up with that, but this is good news. I think it's the kind of thing that's not that important for a lot of people, but for those who need it, it's like: you know, really, we want this to go a lot faster. What can we do here? Right? Yeah. Yeah. That sounds complicated. Does it make it go faster? Yay. Then do it. Well, and you know, compared to a lot of the other alternatives that we've had for,

"I have 10 cores, why can I only use one of them on my Python code without multiprocessing?", this is one of those options that doesn't affect single-threaded performance. It's one of those things where there's not a cost to people who don't use it. Right. Whereas a lot of the other options are like: well, sure, your code gets 5% slower, but you could make it a lot faster

if you did a bunch more work. Yeah. And that's been a hard sell, and also a hard line in the sand, saying: look, we can't make regular, non-concurrent Python slower for the sake of this rarer, but sometimes essential, concurrent stuff. So they've done a bunch of foundational work. And then the three main things are the tier two optimizer, sub-interpreters for Python, and memory

management. So, the tier two optimizer: there's a lot of stuff you kind of have to look around for, so check out the detailed plan. They have this thing called copy-and-patch. You can generate, roughly, these things called superblocks, and they're planning to implement basic superblock management. And Brian, you may be thinking: what are the words you're saying, Michael? Duplo. They're not those little Legos. No, they're big, big Duplos. But it's kind of true.

So they were optimizing smaller pieces, little tiny bits, but you can only have so much of an effect if you're working on small blocks of code. So a superblock is a linear piece of code with one entry and multiple exits. It differs from a basic block in that it may duplicate some code. They talk about considering different types of things you might optimize, so I'll link over to it. There's a big, long discussion, lots of graphics.

People can go check it out. So yeah, they're going to add support for deoptimization of superblocks, enhance the code generation, implement the specializer, and use this algorithm called copy-and-patch: implement the copy-and-patch machine code generator. You don't normally hear about a machine code generator, do you? But, you know, that sounds like a JIT compiler or something along those lines. Yeah. Anyway, so that's the goal, and it should reduce the time spent in the

interpreter by 50%. If they make that happen, that sounds all right to me just for this one feature. That's pretty neat. Yeah. Wow. Pretty good. And I talked a whole bunch about the sub-interpreters already. Final thing: the profiling data shows that a large amount of time is actually spent in memory management and the cycle GC.

All right. And, you know, if you do 40% a bunch of times, Python was maybe half this fast before, because remember, we're a few years into this plan now. Back in 3.9, 3.8, maybe it didn't matter as much, because as a percentage of where CPython was spending its time,

memory management was not that much of it. But as all this other stuff gets faster and faster, if they don't do something to make memory management faster, it's going to be like: well, half the time is memory management. What are we doing? So they say, as we get the VM faster,

this is only going to be a larger percent of our time. So what can we do? Do fewer allocations; improve data structures, for example partial evaluation to reduce the number of temporary objects, which is part of the other section of their work; and spend less time doing cycle GC. That could be as simple as doing fewer collections, or as complex as implementing a new incremental cycle finder. Either way, it sounds pretty cool. So that's the plan for a year and a couple of months from now.
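For context, the cycle GC they mention is the collector that finds reference cycles, which reference counting alone can never reclaim. A small illustration:

```python
# What the "cycle GC" is for: reference counting alone cannot free
# objects that refer to each other in a cycle.
import gc

class Node:
    def __init__(self):
        self.other = None

gc.collect()                 # start from a clean slate

a, b = Node(), Node()
a.other, b.other = b, a      # a <-> b reference cycle
del a, b                     # our names are gone, but each object still
                             # holds a reference to the other

# The cycle collector walks the object graph and finds the unreachable
# cycle; collect() returns how many unreachable objects it found.
found = gc.collect()
print(f"cycle GC found {found} unreachable objects")
```

Making passes like this cheaper (or incremental) is exactly the kind of work the memory management section of the plan covers.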

Pretty exciting. I'm really happy that these people are working on it. I am too. It's a team of, I think last time I counted, five or six people. There's a big group of them around Guido at Microsoft, but then also outside. Yeah. So for example, this was written by Mark Shannon, who's there, but also Michael Droettboom, who was at Mozilla, but I don't remember where he is right now. Cool last name. Yes, indeed. All right. Over to you, Brian.

Brian. Well, that was pretty heavy. I'm going to do kind of a light topic: we need more people to write blogs about Python. It would help us out a lot, really. And one of the ways you could do that is to just head over and check out one of the recent articles from Julia Evans about some blogging myths. And I guess this is a pretty lighthearted topic, but also serious. But we have some more fun stuff in the extras, so don't worry.

Anyway, so there are a few blogging myths, and I just wanted to highlight these because I think it's good to remember that these are just wrong. So I'll run through them quickly. You don't need to be original; you can write content that other people have covered before, that's fine. You don't need to be an expert. Posts don't need to be a hundred percent correct. Writing boring posts is bad. So these are,

oh wait, these are the myths. Myth: you need to be original. That's not true. Myth: you need to be an expert. Posts need to be a hundred percent correct? Also a myth. All of these are myths. Writing boring posts is bad? Boring posts are fine if they're informational. You need to explain every concept? Actually, that will just kill your audience if you explain every little detail. Page views matter. More material is always better.

Everyone should blog. These are all myths, according to Julia. And then she goes in detail into each one of them. And I kind of want to hover on the first two a little bit: you need to be original, and you need to be an expert. When we're learning about software, a new library or a new technique or something, often I'm reading Stack

Overflow, I'm reading blog posts, I'm reading maybe books, who knows, reading a lot of stuff on it. And you'll take all that stuff in with your own perspective of how it really is. And then you can sort of, like the cheating book report you did in junior high where you just rewrote some of the encyclopedia but changed it. Don't do that. But it doesn't mean you have to come up with a

completely new technique or something. You can just say: oh, all the stuff I learned, I'm going to put it together and write, like, my workflow now, or the process, or just a little tiny bit. It doesn't have to be long. It can be a short thing, like: oh, I finally got this, it's way easier than I thought it was. Those little aha moments are great times to just write that down as a little blog post. The other thing, that you don't need to be an expert: a lot of us got started

blogging while we were learning stuff, as a way to write that down. So you're definitely not an expert as you're learning stuff. Go ahead and write about it then. And that ties into: it doesn't need to be a hundred percent correct. As you get more traction on your blog, people will let you know if you made a mistake, and in the Python community, usually it's nice.

They'll mention: hey, this isn't quite right anymore. And I kind of love that about our community. So, I want to go back to the original part: you don't even have to be original relative to your own stuff. If you wrote about something last year, go ahead and write about it again, if you think it's important and it needs it and you have a different way to explain it. You can write another blog post about a similar topic. So yeah, I totally agree. I

also want to add a couple of things. Okay. I would like to add the myth that your posts have to be long, like an article, or that you need to spend a lot of time on them. Right. You know, the biggest example of success in the face of just really short stuff is John Gruber's Daring Fireball, right? This is an incredibly popular site, and an entire article often starts

out with him quoting someone else, and that's like two paragraphs, which is half the article, and then: here are my thoughts on this. Or: here's something interesting, let's highlight it. Right. And my last blog post was four paragraphs and a picture, maybe five if you count the bonus. Right. Not too many people paid attention to mine, because the title was "you can ignore this post." I don't know why I'm having a hard time getting traction with it, but

I actually like that you highlighted the John Gruber style. There are a lot of different styles of blog posts, and one of them is reacting to something. Instead of commenting on somebody's blog, or talking about it on Reddit or something, you can react to it on your own blog and link to it. So, link to it on Reddit or something. Yeah. Yeah. Not anymore, because Reddit went private in protest, but, you know, somewhere

else, if you find another place. Or maybe post on Twitter. No, don't do that. Mastodon, post it on Mastodon. Yeah. Funny. I had another one as well. Oh yeah. So this is not a myth, but just another source of inspiration: if you come across something that really surprised you. Like, if you're learning, right, to add on to the "I'm not an expert" one: if you come across

something like: wow, really? It broke my expectations. I thought this was going to work this way, and gosh, it's weird here. If it seems like a lot of people think it works this way, but it works in some completely other way, you know, that could be a cool little write-up. Also, people might be searching: why does Python do this? They might find your quote-unquote boring article and go: that was really helpful. Right. So yeah.

I still remember, way back when I started writing about pytest and unittest and stuff, there was a behavior of teardown functionality that behaved differently. It was sort of the same in nose and unittest, and then different in pytest. And I wrote a post that said: maybe unittest is broken, because I kind of like this pytest behavior. And I got a reaction from some of the pytest contributors that said: oh no,

we just forgot to test that part. So that's wrong, and we'll fix it. Yeah. What a meta problem, that pytest didn't test a thing. Yeah. Well, I mean, it was a real corner case, but I'm kind of a fastidious person when I'm looking at how things work. But the other thing I want to say is, a lot of

things written by other people are old enough that they don't work anymore. If you're following along with a little tutorial and it doesn't work anymore, because the language changed or the library they're using isn't supported anymore or something, that's a great opportunity to go: well, I'll just write it in my own style, but also make it current and make it work this time. So that's good. Indeed.

Well, anyway. Okay. Well, let's go back to something more meaty. Yeah. Something like AI. So I want to tell you about Jupyter AI, Brian. Jupyter AI is a pretty interesting project. It's a generative AI extension for JupyterLab. I believe it also works in Jupyter Notebook and at the IPython prompt as well. And so here's the idea. There are a couple of things that you can do. So Jupyter has this thing called a magic,

right? Where you put two percent signs in front of a command, and it applies it to an extension to Jupyter, not trying to run Python code; it says, let me find this thing. In this case, you say percent-percent-ai, and then the stuff you type afterwards turns on a certain behavior for that particular cell. And so this AI magic, literally it's %%ai, and they call it a magic, or it is a magic.

So the AI magic turns Jupyter notebooks into reproducible generative AI playgrounds; that's the interesting aspect. So think: what if you could have ChatGPT or OpenAI-type stuff clicked right into your notebook? Instead of going out to one of these AI chat systems and saying, I'm trying to do this, tell me how to do this, or, could you explain that data, you just say: hey, that cell above, what happened here? Or: I have this data frame, do you see it above? Okay, good.

How do I visualize that in a pie chart, or one of those donut graphs, using Plotly? And it can just write it for you as the next cell. Interesting. Okay. Right. Yeah. It runs anywhere the IPython kernel works: JupyterLab, Jupyter Notebook, Google Colab, VS Code, probably PyCharm, although they don't call it out. And it has a native chat UI. So in JupyterLab, not plain Jupyter, there's a left pane that has stuff; it has your files, and it has

other things that you can do. And it will plug in another window on the left there that is like a ChatGPT. So that's pretty cool. Another really interesting difference is that this thing is model- and platform-agnostic. So if you like AI21 or Anthropic or OpenAI or SageMaker or Hugging Face, et cetera, you just say, please use this model, and they have these integrations across these

different things. So, for example, you could be going along saying: I'm using OpenAI, I'm using OpenAI, and... that's a terrible answer. Let's ask Anthropic the same thing. And then right there below it, you can use these different models and different AI platforms and go: actually, it did really well on this one. I'm just going to keep using that one now for this part of my data. Okay.

Okay. So how do you install it? You pip install jupyter_ai, and that's it. It's good to go. And then you plug in your various API keys, or whatever you need to, as environment variables. They give you an example here. So you would say %%ai chatgpt, and then you type something like: please generate the Python code to solve the 2D Laplace equation in Cartesian coordinates,

solve the equation on the square, such and such, with vanishing boundary conditions, et cetera; plot the solution with matplotlib; also, please provide an explanation. And then look at this: off it goes, and it shows you how to implement it, and that's only part of what's shown. You can also have it do graphics. Anything those models generate as HTML will just show up. So you could say: create a square using SVG with a black border and

white fill. And then what shows up is not SVG commands or some little definition; you just get a square, because it put it in HTML as a response. You can even do LaTeX, like -f math: generate the 2D heat equation, and you get this partial differential equation rendered in LaTeX. You can even ask it to write a poem, whatever you want. But that's one of the... go back to the poem one. Yeah. It says: write a poem in the style of... with a variable name interpolated in. So you can have

commands with variables inserted into them. So that's interesting. Jupyter also has inputs and outputs: along the left side there's like a 9 and a 10, and those are the order they were executed in. So you can say: using input 9, which might be the previous cell, or output 9, take that and go do other things, right? That's kind of how I opened this conversation.
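Pulling those pieces together, a session with the magic might look roughly like this. This is a sketch based on the Jupyter AI docs, run as notebook cells; the model alias and output depend on which provider and API keys you have configured:

```python
# Cell 1: load the magics extension (once per kernel).
%load_ext jupyter_ai_magics

# Cell 2: %%ai is a cell magic; everything after the first line is the prompt.
%%ai chatgpt -f code
Please generate the Python code to solve the 2D Laplace equation on the
unit square with vanishing boundary conditions, and plot the solution
with matplotlib.
```

The `-f` flag is the format option they mention later: `code`, `html`, `math`, and so on control how the response is rendered in the output cell.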

One of the really interesting examples that David Qiu pointed out, in a nice talk he gave at PyData about a week ago (linked in the show notes), was with some code he had written. Two examples. One: he'd written some code, a bunch of calculations in pandas, and then he created a plot, but the plot wasn't showing because he forgot to call plot.show. And he asks one of the AIs; it depends, you know, you can ask a bunch, depending on which model you tell it to target.

He said: hey, in that previous cell, why isn't my plot showing? It said: because you forgot to call show. Here's an example of your code above that works and shows the plot. That's pretty cool for help, right? Yeah. Geez. Instead of going to Stack Overflow, or even trying to copy that into one of these AIs, you just go: hey, that thing I just did, it didn't do what I expected. Why? Here's your answer. Not in a general

sense, but literally grabbing your data and your code. Two final things that are interesting here, or maybe three. The next one: he had some code that was crashing. I can't remember what it was doing, but it was throwing some kind of exception, and it wasn't working out. And so he said: why is this code crashing? And it explained what the problem was with the code and how to fix it. Right. So super interesting. I'll check that out. Yeah. We have that link.

Yeah. The talk is really, really interesting. I'm trying to think; there's one other thing that was in that talk. It's like a 40-minute talk, so I don't remember it all. Anyway, there's more to it that goes on beyond this, and it looks pretty interesting. If you live in Jupyter and you think these AI models have something to offer you, then this is definitely worth checking out. Alvaro says: you know, as long as it doesn't hallucinate a non-existing

package. Yeah. I mean, that is the thing. What's kind of cool about this is that it puts it right into code, right? You could just run it and see if it does indeed work and do what it says. So anyway, that's our last... Yeah. Go ahead. Oh, before we move away too much: I was listening to an NPR show about

AI, and somebody did research, I think it was for the New York Times, a research project, and found out that sometimes they would ask: when is the first instance of this phrase showing up in the newspaper? And it would make stuff up. And they'd say: well, can you show those examples? And it would show snippets of fake articles that actually never were there.

That's crazy. It did that for legal proceedings as well, and a lawyer cited those cases and got sanctioned, or whatever lawyers get when they do it wrong. Those cases were wrong. Yeah. Don't do that. But also, the final thing that was interesting, that I now remember, that you made me pause for, Brian: you can point it at a directory of files, like HTML files, markdown files, CSV files, just a bunch of files that happen to be part of

your project that you wish it had knowledge of. So you can say /learn and point it at a subdirectory of your project, and it will go learn the stuff in those documents. And then you can say: okay, now I have questions, right? Like if it learned some statistics about a CSV. The example that David gave: he had copied all the documentation for Jupyter AI over into there and told it to go learn about itself. And then it did, and you could talk to it about it

based on the documentation. Oh, so if you've got a whole bunch of research papers, for example: learn those; now I need to ask you questions about this astronomy study. Who studied this, and who found what, you know, whatever, right? These kinds of questions are pretty amazing. Yeah. And actually, some of this stuff would be super powerful, especially if you could keep all the information local,

you know, for internal company stuff. They don't want to upload all of their source code into the cloud just so they can ask it questions about it. Yeah. Yeah, exactly. The other one was to generate starter projects and code based on ideas. So you can say: generate me a Jupyter

notebook that explains how to use matplotlib. Okay. Okay. And it'll come up with a notebook: here's a bunch of different examples, and here's how you might apply a theme, and it'll create things. And one of the things they actually do is use LangChain and AI agents to break that, in parallel, into smaller pieces that the models are actually going to be able to handle, and send them all off to

be done separately and then compose them. So instead of asking, what's the notebook, it'll say: give me an outline of how somebody might learn this. And then for each step in the outline, that's a section in the document, and it'll have the AIs generate those sections. It's a smaller problem, and that seemed to get better results. Anyway,

this is a way bigger project than just, maybe I can pipe some information to ChatGPT. There's a lot of crazy stuff going on here that people who live in Jupyter might want to check out. It is pretty neat. I was not around for the Jupyter stuff, but I was thinking that a lot of software work is the maintenance, not the writing it in the first place. So what we've done is take the fun part, making something new, and give it to a computer, and we'll all be just

like software maintainers afterwards. Exactly. Let's be plumbers. "Sewer's overflowing again, call the plumber." No, I don't want to go in there. And also, I'm just imagining a whole bunch of new web apps showing up that are generated from

ideas, and they kind of work, but nobody knows how to fix them. But yeah, sure. I mean, I think you're right, and that's going to happen a lot. But you technically could come to an existing notebook, add a cell below it, and go: I don't really understand. Could you try to explain what is happening in the cell above? Yeah. And, you know, it also has the possibility of making legacy code better. Whether that's the reality, we'll see.

Yeah. Hopefully it's a good thing. So cool. All right. Well, those are all of our items. That's the last one I brought. Any extras? I got a couple extras. Will McGugan and the gang at Textualize have started a YouTube channel, and I think it's a neat idea. So far they're just walking through some of the tutorials they already have, in video form. There's three up so far: a stopwatch intro, and how to get set up and use

Textual. And yeah, I like what they're doing over there, and it's kind of fun. I like it too, because, you know, Textualize's Rich is a visual thing, but Textual is a higher level UI framework where you've got docking sections and all kinds of really interesting UI pieces. And so sometimes learning that in an animated, interactive video form is maybe better than reading the docs. Yep. And then, something else that they've

done. So maybe watch those if you want to build your own text user interface, a TUI as it were. Or you could take your command line interface and use Trogon, or trogon, I don't know how you say it, T-R-O-G-O-N. It's by Textualize also; it's a new project. The idea is you use it to wrap your own command line interface tool, and it makes a graphical or

text-based user interface out of it. There's a little video showing an example of Trogon applied to sqlite-utils, which has a bunch of great stuff, and now you can interact with it through a text-based UI instead. That's kind of fun. It works with Click, but apparently they will support other libraries and languages in the future. So interesting.
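For a sense of what that wrapping looks like, here is a minimal sketch based on Trogon's README: you put the `@tui()` decorator on a Click group and it grows a `tui` subcommand that opens the text UI. The CLI itself (`cli`, `hello`, `--name`) is a made-up example, and the imports are guarded only so the sketch still loads if `click` or `trogon` isn't installed.

```python
# Sketch: wrapping an existing Click CLI with Trogon's @tui() decorator.
try:
    import click
    from trogon import tui
except ImportError:  # keep the sketch importable without the libraries
    click = tui = None

if click and tui:
    @tui()            # adds a `tui` subcommand that launches the text UI
    @click.group()
    def cli():
        """A made-up example CLI."""

    @cli.command()
    @click.option("--name", default="world", help="Who to greet.")
    def hello(name):
        """Greet someone (hypothetical command)."""
        click.echo(f"Hello, {name}!")

    if __name__ == "__main__":
        # `python app.py hello --name Brian` runs the command normally;
        # `python app.py tui` opens the Trogon interface instead.
        cli()
```

The appeal is that you don't write any TUI code yourself; Trogon builds the form from the options and help text Click already knows about.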

Okay. So yeah, it's like you can pop up the documentation for a parameter while you're working on it, in a little modal window or something. Looks interesting. Yeah. Well, I was thinking along the lines of internal stuff: it's fairly common that you're going to write a make script or a build script or some other utilitarian thing for your work group. If you use it all the time, the command line is fine.

But if you only use it once a month or every couple of weeks, it might be that you forget about some of the features. Yeah, there's help, but having it as a GUI, if you could easily get a GUI for it, that's kind of fun. So why not? The other thing I wanted to bring up, a completely different topic, is that the June 2023 release of Visual Studio Code came out recently. I hadn't taken a look at it yet. I've installed it,

but I haven't played with it yet. The reason I want to play with it is they've revamped the test discovery and execution. Apparently there were some glitches with finding tests sometimes, so I'm looking forward to trying this out. You have to turn it on, though: this new test discovery stuff is behind an opt-in flag. I just put the little snippet in our show notes, so you can just copy

that into your settings file to try it out. So yeah, I guess that's all I got. Do you have any extras? I do. I have a report from the field, Brian. I had a 16-inch MacBook Pro M1 Max as my laptop, and I decided it's not really the thing for me. So I traded it in and got a new 15-inch MacBook Air, one of those big, really light ones. And

just want to compare the two for people who are considering this. You know, I have my Mini that we're talking on now, with my big screen and all that, which is an M2 Pro and super fast. And I found that thing was way faster than my much heavier, more expensive laptop. Like, well, why am I dragging this thing around if it's not really faster, it's heavy, and it has all these cores that are just burning through the battery?

Even though it says it lasts a long time, four or five hours was a good day for that thing. So I'm like, you know what, I'm going to trade it in for the new, little bit bigger Air. And yeah, so far that thing is incredible. It's excellent for doing software development. The only thing is the screen's not quite as nice, but I don't live on my laptop, right? I've got a big dedicated screen I'm normally at, and then sometimes I'm out somewhere. So

small is better. And the battery lasts like twice as long. And I got the black one, which is weird for an Apple device, but very cool. People say it's a fingerprint magnet, and absolutely, but it's also a super cool machine. So if people are thinking about it, I'll give it like a 90% thumbs up. The screen's not quite as nice; it's super clear, but it's a little washed out, a little harder to see in bright light. But other than that,

it's excellent. So there's my report: I traded in my expensive MacBook for one that's incredibly light, thin, and often faster, right? When I'm doing stuff in Adobe Audition for audio or video work, things like noise reduction and other processing, it's all single threaded. And so it's like 20% faster than my $3,500 MacBook Pro Max thing. Wow. And lighter and smaller, you know, all the good things.

But you're still using your Mini for some of your workload. I use my Mini for almost all my work. Yeah, if I'm not out, or I'm sitting on the couch, then it's all Mini, all the time. Okay. Yeah. It's black on the outside also, then. Yeah, it's cool looking. And you can throw a sticker on it to hide that it's Apple, and people might think you just

have a Dell. They wouldn't know. That's right. Run Parallels; you can run Linux on it. They're like, okay, Linux, got it. What is that thing? Yeah, you could disguise it pretty easily if you want, or just make your sticker stand out better. You never know. All right. So if people are thinking about that, it's a pretty cool device. But Brian, if somebody were to send you a tricky message, like, hey, you won a MacBook, you want to get your MacBook

for free. You don't want that, right? No. So, you know, companies will do tests. They'll test their people just to make sure: hey, we told you not to click on weird looking links, but let's send out a test and see if they'll click on one. And there's this picture of a guy getting congratulated by the CEO.

The IT group congratulated me for not failing the phishing test. And the guy's got this deer in the headlights look, like, oh no. Me, who doesn't open emails, is what the picture says. So you just ignore all your work email, you know, and you won't get caught in the phishing test. How about that? Yeah. You've been out of the corporate world for a while; that happens. I've had some phishing tests come through. Yeah. Well,

the email looks like it came from a legitimate sender. That's one of the problems: it looks legit, and it names the right third party company that we're using for some service or something. And you're like, wait, what is this? And then the link doesn't match up with wherever it says it's going, and things like that. But it actually is harder now, I think, to verify what's real and what's not when more

companies use third party services for lots of stuff. So yeah. It's a joke, but it is serious. I worked for a company where somebody got a message, I think either through a hacked email account or spoofed in a way that it looked like it came from a higher-up, saying, there's a really big emergency. This vendor is super upset we didn't pay them. They're going to sue us if we don't,

you know, could you quickly transfer this money over to this bank account? And because it came from somebody who looked like they should be asking that, it almost happened. So not good. That's not good. Yeah. I get texts too. The latest one was just this weekend: I had a text that said, hey, we need information about your shipping for an Amazon shipment or something. And it's like, copy and paste this link into your browser. And it's just

a bizarre link. And I'm like, no, it would be amazon.com something. There's no way it's going to be Bob's Burgers or whatever. Yeah. Amazon. Let's go to amazon.com. Anyway. Well, may everybody get through their day without clicking on phishing emails. That's right. May you pass the test, or don't read the email. Just stop reading email. Yeah. Think about how productive you'll be. Well, this was very productive, Brian.

Yes, it was. Yeah. Well, thanks for hanging out with me this morning. It was fun. Yeah, absolutely. Thanks for being here as always. And everyone, thank you for listening. It's been a lot of fun. See you next time. Bye.

Transcript source: Provided by creator in RSS feed