Scott & Mark Learn To... Systems Thinking - podcast episode cover

Scott & Mark Learn To... Systems Thinking

Mar 05, 2025 · 31 min · Season 1, Ep. 11

Summary

Scott and Mark delve into systems thinking, exploring its meaning, importance, and application beyond coding. They discuss how understanding broader systems enhances decision-making, the challenges of AI integration, and AI-generated code's limitations. They also debate balancing in-depth analysis with swift action.

Episode description

In this episode of Scott & Mark Learn To, Scott Hanselman and Mark Russinovich dive into the concept of systems thinking: what it means, why it matters, and how it applies beyond just coding. Scott recalls an insightful conversation with a colleague, who argues that while younger generations are taught to code, they aren't taught to understand the larger systems in which code operates. They discuss how a systems thinking approach contrasts with traditional coding practices, explore the challenges of integrating AI into coding, the limitations of AI-generated code, and the necessity of understanding the broader system in which code operates, and debate the balance between deep analysis and decisive action.

 

Takeaways:    

  • Experience and mindset shape systems thinking 
  • Understanding how one's work fits into a larger system can lead to better decision-making 
  • With AI handling more coding tasks, the real value lies in the ability to think architecturally and systemically 

   

   

Who are they?     

View Scott Hanselman on LinkedIn  

View Mark Russinovich on LinkedIn   

 

Watch Scott and Mark Learn on YouTube 

       

Listen to other episodes at scottandmarklearn.to  

         

Discover and follow other Microsoft podcasts at microsoft.com/podcasts   

Hosted on Acast. See acast.com/privacy for more information.

Transcript

I did a talk after your talk, long after everyone left, at our little internal event yesterday. Did your talk go well? You watched it, didn't you? No, I was asleep. Wait, you were talking before my talk. Actually, my talk was just 10 minutes inside Scott's talk. Yeah, yeah, yeah. So I had to clean up after you. Yeah. So they put me at the end to try to get people to stay. So I talked to an empty room.

Was it really empty? No, it was cool. Everyone was there. It was such good energy. It was a really fun event. People were vibing. That's my favorite event, what used to be Tech Ready. Yeah, it was really cool. So this, for context, friends: big companies have internal conferences. Like, if you work at Walmart, there might be 5,000 engineers at Walmart, and there's a whole Walmart conference happening that you don't know about. It's as big as any tech conference you've ever been to.

So Microsoft has internal conferences, and we got to speak at one. It used to be called Tech Ready. It's kind of our internal Microsoft Build. And I talked about how we should be thinking about AI in the larger system. Let's do systems thinking. Here's my thing on systems thinking, Scott: I don't even know what that is. Let's have that conversation. Because I think you do. And we're already recording, so now we've already started the show.

Yeah, all right. So, Mark, I did an episode years ago. To this day it's like my most popular Hanselminutes episode. I had on a systems architect who owns a company. She's quite well known in the DMV. Her name is Kishau Rogers. She's had a number of startups. She's known in the DMV, and that's not the department of vehicles, it's the DC, Maryland, Virginia area.

Sorry. Like, you know, government, like that area, East Coast. Anyway, Kishau has a company. She's done a number of startups. But anyway, she's been in business for 30 years, like us. This was during the peak of the learn-to-code movement. Everyone was like, kids, you need to learn to code. We were all running around teaching kids how to do for loops. She says that we're spending too much time teaching kids to code and not teaching them about the system

in which some code might exist. And we expanded that idea beyond just writing a for loop and running it, you know, full stack: learn about the silicon, all the way up to how we operate as humans within a system. Why is the system set up that way? So, for example, if a young person today goes to Amazon and buys some piece of plastic Chinese crap, it appears at their door the next day.

Her argument is that no one thinks about the system that created that: the oil that, you know, came from dinosaur bones, the freighter that came across from China, the people who work there, the macroeconomics of it. Systems are big and complex, and we are abstracting away so much. She thinks that's a problem, and that people need to learn systems rather than grinding LeetCode. That's the controversial topic of the day.

I don't know if it's controversial. I mean, if you take a look at how AI is doing so much coding, the value is systems thinking on top of it. So you do agree. This is great. Yeah, I do agree. I just wasn't sure. You're like, what's this dumb topic, Hanselman? Exactly. What do you have this time? Yeah, you've got to use it for the episode, though. No, I mean, systems thinking by itself, just interpreted as words,

I could understand means architecture. I think of it as architectural thinking as well. I was going to tease you, because I was like, Mark, you have to think about the system in which the words systems thinking exist. Sorry. Okay. But yeah, architectural systems, but also larger human systems. Interpersonal relationships are a system, but people keep showing up to work at Microsoft and big tech,

and they know how to grind out algorithms on LeetCode. Yeah. Do you do a lot of LeetCode yourself, Mark? I LeetCode. That's the only thing I do is LeetCode. My code, it's LeetCode, whatever that is. I think because I do it. I think because I am the definition of LeetCode. Exactly. Right. It's like all those Chuck Norris jokes back in the day. When Mark codes, it is LeetCode by definition, because he's coding. Exactly.

Do you find a difference in the way people approach this? Is it a generational thing? Is it where people went to school? Is it how they approach problems? Some people just pick up that idea of, oh yeah, I see how this fits in, I see how the system fits together. While other people have blinders on: these are the 10 lines of code that they can see. I don't think it's a... you know, we talked about this earlier,

in one of our early episodes, when we were talking about architecture and focusing on going deeper and broader than what you're just working on.

I don't think it's generational. At least it hasn't been visible to me. There might be something generational. But I think it's just the way that humans sort themselves. And it might be based on their personal preferences. It also might be, you know, I just like to come to work and have somebody tell me, go do XYZ, and I'll just do XYZ, and I do a good job of XYZ, and that makes me happy, and

I feel productive. And there's people that go, XYZ? Why am I doing XYZ? Yeah. How does it fit in? Should I be doing XYZ? And, you know, what's the point of XYZ? So there's people like that. I think I started doing better at Microsoft when I started thinking about where I fit into the system, when I started thinking about what my boss's goals are.

Why are we writing this software type thing? Before, I might just write a component and then, okay, here's a component. Then I thought about the larger picture. I'm the why-am-I-doing-XYZ person, I think, almost to a fault. I almost do it too much. I don't want to do XYZ. I'm not doing it. Well, one of the things that I really admire about you, and about Yanan, who helps us with writing some code, is there's a little bit of analysis, and then there's coding, and then you pause, and there's analysis.

There's not a big upfront analysis, and I've never seen you get analysis paralysis because there's 10 or 25 ways to do something. Yeah. But you guys just pick one and you run in that direction, while I go, there's nine ways to do this,

and then I get stuck. Yeah, I want to act more boldly, but my systems thinking goes too deep and I get stuck. Yeah, interesting. I hadn't really thought about how I approach problems or tasks that way, but I think you're right. And I'm connecting it with the fact that I just want to get it done.

And I want to get it done in whatever way is fastest, but make sure that it's done right at the same time. So going into something without thinking about it at least a little, you're likely to just waste your time. Right. Because you're going to do something and at the end go, wait a minute, I forgot to consider these things, and what I just did I have to throw away and start over. But at the same time...

If you're trying to think about everything, then, like you said, you're just stuck and never do it. I'll give you an example of a stupid thing that I did last week that probably took longer than it needed to, but it might be an interesting way to kick off part of this conversation. So my 17-year-old is trying to build up some mystery for some pep rally thing at school.

He says, I want a website where people go in, and it's got like Matrix letters, and it's mysterious, and it's computery like Mr. Robot, and there's a mysterious countdown. And he's like, everyone in the school will see the mysterious countdown and be excited about it. It'll run around, and then when the countdown goes to zero, it'll switch, and it'll have like seven

text boxes like Wordle, and they have to enter a code, and if the code is wrong, it'll shake, and if it's not wrong, it'll go to a page, and then they show that page to the principal and they get a prize or something like that. I'm like, okay, that's a cool dad-son project we can work on. Then I joined the JavaScript ecosystem. It's like, you can use npm run dev and Vite and Vercel's v0.

There's 50 freaking ways to do this. I'm like, is the problem that he described a problem that is going to require any node modules at all? So I just ended up doing it in straight JavaScript: index.html, all inline JavaScript, there's not even a script.js, and some CSS. But I'm racked with a systems-thinking brain. Is that how people are doing it today? Yeah.
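For what it's worth, the core logic of a page like that is small enough to live inline. A rough sketch in plain JavaScript; the deadline and the secret code here are invented for illustration, not from the episode:

```javascript
// Countdown that flips to a Wordle-style code check at zero.
// DEADLINE and SECRET are made-up placeholders.
const DEADLINE = Date.parse("2025-03-07T09:00:00");
const SECRET = "SPARTANS";

// Format milliseconds remaining as HH:MM:SS for the countdown display.
function formatCountdown(msRemaining) {
  const total = Math.max(0, Math.floor(msRemaining / 1000));
  const h = String(Math.floor(total / 3600)).padStart(2, "0");
  const m = String(Math.floor((total % 3600) / 60)).padStart(2, "0");
  const s = String(total % 60).padStart(2, "0");
  return `${h}:${m}:${s}`;
}

// Compare the entered code to the secret; the page would shake the
// input boxes on a mismatch and reveal the prize page on a match.
function checkCode(guess, secret) {
  return guess.trim().toUpperCase() === secret;
}
```

In the page itself, a `setInterval` would call `formatCountdown(DEADLINE - Date.now())` once a second and swap in the seven input boxes when it hits 00:00:00. No node modules required.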

Just thinking about my own experiences when given a task. I'll take Sysinternals tools, because a lot of people are familiar with them, and we've talked about ZoomIt, and how I approach making a new tool or adding a feature to an existing tool. I just think in broad strokes: adding recording to ZoomIt, or adding something that we did together to ZoomIt, like being able to type left instead of right. I just think about

the rough structure of what might work. And then I just start. And sometimes that structure changes as I go: oh, wait a minute, I needed this other function that does this. Oh, this doesn't integrate nicely with the existing way that the code's architected; I need to change it. But it's always iterative. I think of coding as somewhat like sculpting, because you start with this kind of big,

the overall shape of what you're going for. But then as you get into it, you're like, oh, wait a minute, I need to chisel a little more here, chisel there. And what emerges is the sculpture that you're looking for. It's hard to predict ahead of time exactly where all the chisel hits are going to go. Yeah. And I also think that a lot of people come into what they call brownfield.

projects, right? Not greenfield, where it's like, I just went and made my new index.html. Brownfield is like, I've just showed up at a company that's got 10 or 20 years of technical baggage. Like, even if you were hired today to work on Windows. Well, actually, Sysinternals is technical baggage. Actually, that's a good point. It's been 25 years, right? It'll be 30 years next year.

Yeah. So if you show up and you're the person that's got to add a feature to ZoomIt, arguably you should sculpt it in the style of the previous sculptor so that you don't mess it up. Yeah. And seeing that is also a thing:

entering the code base and going, okay, I empathize with this person: the philosophy, what they were thinking, the structure, the organization. And not just bolting things on in the way that you would like, because then that makes the whole thing unstable and hard to understand. But are we entering a time where the systems are too big to understand? Because I'm always advocating... For sure. Yeah. Okay, so you just give up? Like, when do you stop?

I mean, you try to understand as much as you think you need to understand. I think that's a trick, too, knowing where enough is enough. And this applies to everything, actually. Not just coding, but anything that you're dealing with. When I get docs, when I get presentations. I spend a lot of my time in meetings as the senior exec that is reviewing things and making calls on things. And,

you know, we've talked about this before, I'm not the expert on everything. In most things I am not. But there's always people there presenting, and they know it and I don't know it. There's no way. Like, if I wanted to say, I want to know this as deeply as you do, that's a full-time job. So I can't. So the question is, how much do I need to know to make a decision or have an opinion?

Oh, that's systems thinking right there. How does this thing from you, the subject matter expert who I trust, affect the larger system, for which the subject matter expert may not have the larger context? Exactly. So I'm sure that you do this: information will be presented to you, and you kind of fill in a picture, and you look for, based on your experience, where are the risk areas or the areas of concern that might factor

into a bigger systems view of what we should do or not do. And then, if there's gaps in the information, you start to ask questions. Right. And the larger these systems get, the thing that I'm thinking about as the systems thinker in the room is: is there going to be a ripple as we throw the pebble into the pool that's going to cause us a problem,

an untenable, intractable problem, at some point down the road? I'm assuming you're not changing stuff in Azure that could destabilize the system and cause problems. The goal is always to make it more stable, more reliable. Yep. And that's for sure a consideration when you're talking about re-architecting parts of Azure, which we're doing: how does this new part merge with the old part? You know, it's the standard changing-the-plane-

while-you're-flying-it problem that you've got with a system like this. How do you communicate the size of a system? Is it documentation? Are you still doing UML? I mean, nobody's opened Visio in a while. No. And actually, I was talking to Scott Guthrie yesterday, and he's like... Yeah, I saw him, too. He goes, somebody asked me, who has Azure in their head? And... Who's still at the company? No, nobody.

Yeah, nobody. Too big. Even the people that aren't here anymore might have had Azure in their head at the very beginning. And I knew a lot more about Azure 10 or 15 years ago. Well, yeah, we did our history show, like when it was Red Dog and it was stateless VMs. Yeah, maybe then. The level of depth and coverage I had of Azure was a

much higher percentage than it is now, just because it's gotten so massive and so sprawling. There's so many services, internal and external, that I can't keep up. And there's nobody at the company that has Azure in their head. This is one of those things about systems thinking: I've got, I think, the knowledge of the abstractions necessary for me to be effective. Where do we fit into it? As humans? Ethically?

Philosophically? Like, we had some really interesting conversations about what it means to be applying AI in tech right now. And I keep coming back to the tension with the young people learning LeetCode. Like, where did little technical puzzles become more interesting than the business, the larger business problem? Haven't they always been?

I mean, Microsoft's famous for the interview question, reverse the linked list. Microsoft was that one question about, like, why are manhole covers round? Yeah, that's the other one. We have done that forever. Yeah. Why do we do that? Is that just a human thing? I think that a lot of people don't know a good way of measuring somebody's skill or capabilities. So it's just, let's fall back on the easy thing:
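For anyone who hasn't had the pleasure, the infamous whiteboard exercise is only a handful of lines; a sketch in JavaScript, with the list helpers made up for demonstration:

```javascript
// The classic interview question: reverse a singly linked list in place.
function reverse(head) {
  let prev = null;
  while (head !== null) {
    const next = head.next; // remember the rest of the list
    head.next = prev;       // point the current node backwards
    prev = head;            // advance the reversed prefix
    head = next;            // advance into the unreversed suffix
  }
  return prev; // new head (the old tail)
}

// Small helpers to build and read lists for demonstration.
const fromArray = (a) => a.reduceRight((next, value) => ({ value, next }), null);
const toArray = (list) => {
  const out = [];
  for (let n = list; n !== null; n = n.next) out.push(n.value);
  return out;
};
```

Which is rather the point being made: it's a tidy puzzle, and it says very little about whether someone can reason about the system the list lives in.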

If they can't reverse the linked list, that must mean something about their skills or capabilities. And in fact, you know, that kind of programming in front of people... Yeah, I never liked that. I had to do that, by the way. NuMega Technologies, you were required to. Yeah, it was part of their interview. So I went to NuMega Technologies in 1996 or '95 to work on, potentially,

SoftICE, the kernel debugger. SoftICE. SoftICE mentioned. Because that's low-level assembly code. The problem was: write this function, write this short, small program to do something, I can't remember what it was, in assembly. And we're going to sit and watch you do it. Oh God, that's the worst. Yeah, that is the worst. Yeah. So that was... it was fun. I mean, I did it and crushed it. I LeetCoded it. But

I hate that kind of pressure. It shouldn't matter how I do it. My process shouldn't matter. I appreciate that you acknowledge that, because people in the comments will appreciate that as well. It's like, hey man, I might have to Google it. I might have to hit Stack Overflow. I might have to go for a walk. I might have to eat a box of candy. Everybody's got their own process, and that's okay. How many Diet Cokes is it going to take for Hanselman to get this thing written? A lot.

So good. Yeah. So I think that's important, more important than ever. The process. Trusting the process. So this, you know, go to the whiteboard and reverse the linked list? Like, that's just crap. It's garbage. It's forcing somebody into something that's not relevant to the job. Tell me, how many times do you feel like, hey, I need to write this code,

I'm going to go straight to the whiteboard and write out the algorithm first? Constantly. This is not a thing. Yeah. But this is where the AI coding stuff becomes really interesting to me. I'm not sure if we've done a show on this or not, but I think you sent me a document. We were talking about the rise of the expert beginner. Have you thought about this? There's some articles that have been going around about how AI coding,

yeah, if used incorrectly, can create expert beginners. Incorrectly? Yeah, if you apply AI coding incorrectly. If you basically teach... if I went and I taught...

There's a question around this. My kids don't code. They're just not interested in tech. One plays football, one's an artist. That's great. But when we went and made that website for my 17-year-old, we used, you know, Copilot, and it saved us a ton of time. It was spicy autocomplete, and we got that thing done in like half an hour. He didn't understand it. So then the question is, does it matter? Well, in this case it doesn't matter, because the site worked.

And his goal was, as a business person, make the site work. I spent time later making sure I understood the JavaScript, because I just couldn't not. Now he's an expert beginner. He's like, oh, coding's easy. I'm going to do this. I'm going to bang out websites. You know, it's going to be easy until it's not easy. I've got strong opinions on this one. Yeah, so he's an expert beginner now. Yeah. So those

coding systems like Devin and Replit, where you're like, hey, go make this website that does these five things. And it starts and thinks about it and comes back with a plan: you want me to add these things? And it goes off, and an hour later it's like, here you go. And you look at it, and it doesn't quite do what you want. So here's the problem, and I think why, at least, I see no line of sight into AI completely replacing programmers. No line of sight.

That's a hot take. That's a spicy take right there. And I'll tell you, there's a few reasons for it. One of them is that these systems hallucinate. Inherently, they hallucinate. And so how do you know that the code they're creating is correct? Well, they can write unit tests for it, but they can't write unit tests for the

full spec of what you imagine you want it to be. They can only guess what you want. So, speaking of systems thinking, when you sit down like, hey, I'm going to make a website that shows a map of the United States, and you can drill in and it'll show you counties, and it'll pull data from the government's energy usage statistics, so you can go look, county by county, at how much energy they use over the years and

what sources of energy are consumed. I love that I'm architecting this in my head as you're describing it. So, you know what you would do? You would have the spec for that in your head. Would you sit down and spend two hours writing down exactly, you know, when they click on this, this is what happens, when they click on that, that's what... No, for me I'd probably write 10 user stories, probably in like a notebook or a notepad.

And I just start banging the user stories out, you know, broad strokes to detailed stuff. Not a lot there, maybe 10 minutes of typing. And I would just write bullets for what I want and then start coding. Like, first I need a thing to show a map. The map needs to have this concept of states, you know, that needs to be in the code, and I need to be able to click. Bob Ross. Yeah, happy little maps. So...

Let's say that you wanted to use AI, one of these AI coding platforms, to do that. You go and tell it what you want. You probably haven't fully specified it. Even when you go write it down, you haven't fully specified it. The system goes and creates something, and it misunderstood something, or it missed something, or it has a bug somewhere, and so it doesn't do what you want.

And so at this point, what are you left with? You're left with, hey, you didn't do this right. Go fix this. And I'll tell you, the model might not understand that. The code has gotten complicated. And so when it fixes one thing, it breaks another thing. At that point, what are you left with? It's like you go through a few iterations and you can't quite get the AI platform to fully fix what you're going for, fully implement it.

So now you own a piece of code that you didn't write any of. And at this point, if you don't know how to code, you're completely dead in the water. If you do know how to code, you now have a brownfield application you need to take over. That literally happened when we were writing this thing. He said he wanted Matrix rain coming down. And somehow he asked for green Matrix rain, but the AI did it in red, and did it with ones and zeros rather than the Matrix,

you know, quasi-Asian characters. And he didn't know how to get it to go back, so he just gave up. He's like, that's fine. So we started inheriting technical debt from the thing, and it did not turn out how we envisioned. It turned out like

a parallel-universe Spider-Verse version of what we imagined. And it was fine, but it was definitely not what he envisioned. Yeah, I mean, that's a great example. It was two steps forward, one step back. And I think that this is just inherent.

And the risk here is that you have the system, you invest a lot of time, it's built this big thing, and then you need to change it, or you need to deliver this project, and it's not able to. And now you're faced with, you know, inheriting the brownfield. Like, let's say that you were delivering this to somebody that had actually contracted it, paying you to create it, and you're stuck at the red rain, and it's like, no, I think it's okay, but that's not what I'm being paid for.

I need to fix it. And at that point, you could take over the code and be like, okay, what's the AI trying to do? Where is the flaw? Let me fix it. Otherwise, he's dead. He's like, oh, fail. Let me give you another one. We asked it to do the Matrix rain, and it generated 256 spans,

hard-coded with ones and zeros in them, and named them span one, span two, span three. Then it used JavaScript to manipulate the DOM to move these spans around, and it was heating up my laptop, and it would not run on mobile at all. So then I had to step in, because, you know, with what the 17-year-old was doing, I was like, you know, do this programmatically without creating more than a couple of spans, or something like that. And it was like, what

were you trained on that you thought that was a good idea? It just went off running. It was crazy. That's one reason. And the risk here: the expert beginner is dead. They can't deliver the project that they want. And even for the experienced programmer, like you, now you might have to spend two hours to understand the code so that you can fix it.
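The programmatic fix Scott describes amounts to modeling the rain as data and rendering one frame at a time, instead of shuffling 256 hard-coded spans around the DOM. A rough sketch, where the character set, grid size, and function names are all invented for illustration:

```javascript
// Matrix-style rain as data: one state object per column, advanced each
// frame, rendered into a single element rather than 256 individual spans.
const GLYPHS = "アイウエオカキクケコ01";

// One falling-drop state per screen column, starting at random rows.
function makeColumns(count, rows) {
  return Array.from({ length: count }, () => ({
    row: Math.floor(Math.random() * rows),
    rows,
  }));
}

// Advance every drop one row, wrapping back to the top at the bottom.
function step(columns) {
  for (const c of columns) c.row = (c.row + 1) % c.rows;
  return columns;
}

// Render the whole grid as one string; in the browser you'd write this
// into a single <pre> (or draw on a <canvas>) once per frame.
function render(columns) {
  const rows = columns[0].rows;
  let frame = "";
  for (let r = 0; r < rows; r++) {
    for (const c of columns) {
      frame += c.row === r ? GLYPHS[Math.floor(Math.random() * GLYPHS.length)] : " ";
    }
    frame += "\n";
  }
  return frame;
}
```

Updating one element per frame keeps the DOM work constant no matter how many columns there are, which is roughly the difference between heating up a laptop and running fine on a phone.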

There's some trade-off somewhere. If you just sat down and had AI guide you or help you code as you coded it, you might be done faster than having the AI go try to spit out the whole thing, and then you take it over and try to learn it and then fix it. I ended up being an expert BS detector. Basically, the AI was doing a great job,

and then it started to generate some BS. And I was like, how much BS can I tolerate? I was like, I'm fine with that hack. But then it started to get weird. And then the spans showed up. And I was like, no, this is unacceptable. This is officially not where we need to be. And then getting that undone was messy. You're right, if I had simply had it over my shoulder going, hey, like a partner, which is where we get into that conversation about the Socratic method:

brainstorming with an AI, because you're really just talking to yourself in the mirror, is sometimes a better way than just asking it to make something. So you do use Copilot, though, to generate things. I generate 95% of the code. You're not a hater. No.

To be clear, and we've talked about this before, when I do these AI research projects, which is a bulk of my programming now, 95% of it is AI generated or more. It doesn't always get it right. And sometimes I need to step in and fix things. Sometimes I write a little piece of code here and there where it's just more convenient than asking the AI and giving it the spec, basically. I can do it myself really quickly. But it writes most of it. And I understand the code well enough that...

When it screws up, I can take a look and understand, okay, this is what it thinks I asked for when I didn't, and I need to go back and either correct it, or I can fix the code right here because it's simple enough. If I didn't know how to code, I wouldn't be able to be this productive. So, just to bottom-line it, that is one reason AI won't kill programming: this problem of specification, detail, and what becomes brownfield code

the instant it's created. Yeah. The other reason that I also think AI programming won't completely take over: so, I was having it write some code, and, you know, OpenAI's SDKs have changed over time, for new models and for new capabilities. And you ask the model to go create some code based off OpenAI's SDK, and it actually generated code using the old version of the API.

Training data is stuck in the past. It's stuck in the past. And it might even have trained after the new SDK came out, but it's just got more examples of the old code, so it wants to use the old code.

So you can tell it, use the new code, but it might not know it as deeply, because there's not enough examples out there. So the second a new API comes out, AI doesn't know how to use it, because it has no examples. And then you need to go and RAG over new documentation. But even RAG is not going to teach it. Like, go take Microsoft's documentation, or whatever API you want, and say,

is that good enough? Is that enough for AI? How many times have you gone to API documentation, and it's not just Microsoft but lots, where it's like: here's the API, it's a REST call,

here are the properties, property one, this is the time. People need to build experience. They need to mess around with the thing for a while. And the documentation is purely just the structure and the names, right? Because it's generated documentation. Because it's generated, there's nothing about, you use this after calling this other API, it takes an object like this. No context. So

a human can't understand it, and an AI is not going to understand it, how to use it. So I think that's another reason why AI can't completely take over programming: it just doesn't know what the current API set is. It doesn't know APIs in general. And there's a few other miscellaneous reasons, but those are two big ones. Yeah. As we get towards the end, one last thing to think about is environment setup, what I've always called yak shaving. So I've been doing a lot of Python lately.

The number one issue is not the Python. Yeah, it's Python virtual environments. Multiple. I've got a Python in usr/bin, I've got another Python in, whatever, share/bin. Like, bro, there's pip, and there's this, and then that, and then there's wheels, and then this package is incompatible with that version of that package, but there's this package which is incompatible with this one but not that one, and oh crap, I can't use all three at the same time.

Yeah. And I'm trying to do all of this on ARM. Yeah. And working with Anthony Shaw, we're finding that, oh, you need to go and compile this native wheel on ARM because no one ever compiled it before. And then it's the left-pad problem. I think, just in conclusion, AI is a huge accelerant for somebody that's an expert. For somebody that's a beginner, they may be able to do simple projects, but nothing complicated.

So we need to teach the young people. They need to learn how to program. They've got to learn systems. Yeah. And how to program. Got to learn how to program. All right. Well, if you have made it this far into Scott & Mark Learn To, spread the word. Put some reviews wherever you review your podcasts. It helps us a lot. Please share these things on LinkedIn. Mark wants to see the numbers go up.

If you want to write an AI, give us comments, write an AI that just watches the show over and over again to get the numbers up. That would be very helpful. Yes, good tip. Yeah, you can do that. Yes, but yeah, comments, let us know that you're out there. Otherwise, he'll stop hanging out with me and then it'll just be like Scott talks to himself and we don't want that. See you next week. That would be it for one episode. Scott slowly loses his mind.

as he talks to himself in his office. All right, that's our end right there, friends.

This transcript was generated by Metacast using AI and may contain inaccuracies. Learn more about transcripts.