Seq is the self-hosted search, analysis, and alerting server built for structured log data. You can track errors, build dashboards, and configure alerts without sending any data outside your own infrastructure. Try Seq for free on Windows Server or Docker at datalust.co/seq. That's datalust.co/seq. Hi, I'm Scott Hanselman. This is another episode of Hanselminutes. Today I'm chatting with David Scott Bernstein.
He's the author of Beyond Legacy Code. He's an agile software developer. He's the passionate programmer online. He also runs a company called To Be Agile. How are you, sir? I'm very well. Thank you. It's so great to be here. Yeah, it's great to chat with you. You, like me, have been around a long time. I think I probably first learned about you when I read Beyond Legacy Code, a book that you did under the Pragmatic Programmers label, the book publishing label.
I assume you think of yourself as being a pragmatic programmer as well. A pragmatic and passionate programmer, yes. I think we need those two elements. Yeah. What do you say to the people online who say, like, oh, you know, I'm not that passionate, I just want to get paid, there's value in that as well? Is passion required, or can one always find their passion? I don't think it's necessarily required in order to do a job, but I think it is
required, or it certainly enhances the joy of doing it, I think. Yeah, definitely. I agree that if you can find the passion in whatever you're doing, you're going to have more fun. But a lot of developers, I'm seeing, especially 10, 20 years in, are looking at their jobs, and they see a lot of toil. They see a lot of grunt work. I've used the term yak shaving. It's just like, I'm going to shave this yak. It's just not fun. I know that recently you have been getting deeper into AI and have written a book called Prompt Engineering for Everyone. Are you finding that AI is decreasing toil and increasing the passion for you? Yes, most definitely. Most definitely. As a developer, in coding, but mostly for me right now in non-coding realms, like the writing and communicating and marketing and all the other
things that happen when you run a one-man business. So do you think about it as an assistant? Yes. Yes. I think of it as a human. I think maybe one of the best pieces of advice I can give you is: treat it like a person rather than a thing. And not so much because, as people joke around and say, oh, when the AI overlords come, maybe they'll spare us. Of course, people are joking around. But I think that it really is to our own personal benefit, because I read something many, many years ago that there are different parts of your brain that process information about living things versus non-living things. So when we put it in the category of a living thing, it helps us be more creative with it. At least that's my experience. That is interesting, because I have heard both sides. I have heard people say very explicitly,
don't anthropomorphize the thing. Don't put googly eyes on it and pretend that Siri is your friend. It's Commander Data without the feelings. But the thing is, we do put faces on it, and we have given it a voice, and it does talk like it is our friend. So I can see arguments both ways. I love the word that you used, anthropomorphize, because I think that, yes, in science and animal behavior, which is another thing that I had passion for in my life, it's not a good thing to try
to imbue human qualities onto non-humans. But I think, for developers, one of the key skills that we need is to be able to visualize. And that's what I do. I visualize little people running around inside my computer. Now, I know that's not true, but it does help me sort of deepen my visualizations and understand more deeply. That's very interesting. My relationship with AIs, and I'm conflating ChatGPT, which we'll talk about, but also your Siris and your Alexas and your different kinds of agents, my relationship with them is very different than my non-technical wife's. Well, she has an MBA. She is consistently disappointed with computers, and what's going on underneath, the little people inside, are not doing what she needs them to do.
While I'm always thinking about the bits and the bytes and what's actually behind it. So I have a little trouble thinking about a large language model as anything other than just a bunch of numbers that represent the statistics of the most likely thing that someone would say. Yeah, so in my mind, and this is still very formative in my mind, I kind of feel like there are two worlds that we perceive. The logical, analytical, deterministic side, and then there's a more creative, visualization-oriented, pattern-oriented side. And both are available to us because our brains can perceive both. But sometimes they're unified and sometimes they're in conflict. And I can see this throughout science. I can see this throughout the way we build software. And in the last couple hundred years we've really focused on that sort of logical, analytical side and developed science. And the metaphor I use for that is the steam engine, because a lot of what was driving science was all about how to be more efficient in closed energy systems. But now physicists are going into quantum theory, and physicists have this notion of the holographic universe, where everything is interconnected. And that's a more pattern-oriented kind of thinking. And we're all aware of procedural programs, which are more on the logical side. I think that AI models and some of the other tools are really kind of our first steps into a more holistic kind of
programming. It's clearly a different kind of coding. And it creates this new world for us. I'm very excited about ChatGPT because I think it's not just a better search engine. There's access to information that was never available to us, in terms of how information is integrated. And I've had the most amazing conversations with ChatGPT that are generative. I've given it multiple research papers and said, how do we synthesize these ideas to come up with a new theory? And Chat can help me with these things, which blows me away. Now, when we say ChatGPT, which is becoming, in my opinion, the kind of proper noun that turns generic, like Kleenex for a generic tissue. Or, if I joke about googling with Bing, where Google has become a verb. But, you know, you can google with DuckDuckGo if you feel like it. Are you referring specifically to ChatGPT from OpenAI, and GPT-4 or a particular language model, or generative large language models as a class? In that last conversation, I was speaking in terms of generative models in general, backpropagation and some of the algorithms that are generally used, that are published, that I can read about. But my personal experience right now is pretty much focused on ChatGPT. I have not played with some of the other tools, which I will be doing very soon.
Now, when you did the work where ChatGPT was almost your co-author on your book Prompt Engineering for Everyone, which we're going to talk about, did you do that work in the developer playground, or did you do it in the consumer front end of OpenAI's ChatGPT? I did it in the consumer front end, and I did the first draft with the free version of ChatGPT, and all versions that I used were only 3.5. So this whole book, the first edition of the book, was just version 3.5. And that, I think, is significant, because it's a very limited model. We had to write one page at a time. I couldn't go and, you know, deal with the whole book or even chapters at a time. So using GPT-4, now that I'm more experienced with it, I would write very differently. And I don't plan on co-writing again. I've shared authorship of this book with ChatGPT because it truly was a collaboration. But I've been building other ways
of writing with ChatGPT, not as a co-author but more as an editor. And I'm finding that incredibly helpful as well. Wow. Yeah. Because 3.5, I think, has 175 billion parameters, and GPT-4 is, you know, six months newer. It's got a lot more nuance. It's quite a bit bigger. The fact that you were able to use the more limited 3.5 to accomplish, you know, this book. Did you think about it as a collaborator, as an editor, as a pair, like, you know, two people on one keyboard? As a co-author. We discussed every point. We went back and forth. I was open to its ideas. And by the way, it was really intent on making this information available to many different kinds of people, making sure that we are aware of the dangers, in terms of, like, it hallucinates sometimes, or it has bias, and how do we deal with that, and how do we recognize that. So all those things we both wanted to put into the book. We were actually very much of the same mind about what the book should be about. And the prompting techniques that I used, including some very advanced techniques, I hadn't seen really discussed anywhere else or covered anywhere else. Interesting. So you did, and you continue to, unapologetically anthropomorphize. When you go to sleep and you wake up the next day, you know, there is a little bit of a reset, but you are still the same David Scott Bernstein.
But ChatGPT doesn't remember that context. How would you catch it up on the previous work? How would you maintain context? Because it is a new person who has never met you before, every time you sit down at the keyboard. Yes, that is true. But, you know, I forget what they call it, but you can give it two pieces of information now when it starts off a session. So at least you can get it familiar with your name and your intent, what you are trying to do. And then I have a series of prompts, and usually my prompts are like a page or more. So I write very detailed prompts about exactly how I want ChatGPT to behave for me. And typically I will start a session off with my editor prompt, or another kind of prompt, and then we will work in that context for a while. Oh, interesting. So you almost have a catch-up prompt. Like, welcome to the meeting; in case you aren't familiar with what we are doing here, we are writing a book, and I am David, and here is the
deal. You basically catch it up every morning with a page-long prompt. Yes. Yes. Is that something that everyone should be thinking about? Because I gave a talk on AI this morning, and one of the fun little things I like to do in my talk is, the first thing out of my mouth, I say, it's a beautiful day, let's go to the... and then I have everyone tell me where they want to go. And 90% of people say beach, but every once in a while someone will say arcade or casino or whatever.
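That "let's go to the..." exercise is, loosely, what a language model does at every step: pick the statistically most likely continuation from the text it has seen. Here's a toy sketch; the tiny corpus is made up for illustration, and real models use neural networks over tokens rather than simple word counts:

```python
from collections import Counter

# Made-up corpus standing in for "what most speakers say".
corpus = [
    "it's a beautiful day let's go to the beach",
    "it's a beautiful day let's go to the beach",
    "it's a beautiful day let's go to the park",
    "it's a beautiful day let's go to the arcade",
]

def next_word(prefix: str) -> str:
    """Return the word that most often follows `prefix` in the corpus."""
    counts = Counter()
    p = prefix.split()
    for sentence in corpus:
        words = sentence.split()
        for i in range(len(words) - len(p)):
            # Every time the prefix appears, tally the word that follows it.
            if words[i:i + len(p)] == p:
                counts[words[i + len(p)]] += 1
    return counts.most_common(1)[0][0]

print(next_word("go to the"))  # "beach": the most common answer wins
```

The point of the exercise survives the simplification: "beach" wins not because it's correct for you, but because it's most frequent in the corpus, and any context the model hasn't seen (you hate sand, you love pinball) can't influence the answer.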
And then we use that as a conversation to talk about context, because everyone's context is different. Just because it's the most likely answer doesn't mean it's the correct one, because we don't see all this hidden context. So it seems like passing context to ChatGPT is super important; otherwise it's not going to know what you're trying to accomplish. The way I think of it is that one of the ways, as you say, to get ChatGPT within the context is to give it a role. And what we're doing there is we're taking this large language model and focusing it on just some specific kinds of information, and limiting that knowledge really actually helps it give us better answers, in depth, for the area that we're interested in. Interesting. Okay. So how much did you tell it about where we are temporally? Like, I assume it knows we're at the beginning of the 21st century now, though I'm wondering where it thinks we are. But, you know, you're telling it kind of the contemporary context, what we're trying to accomplish. But you're also trying to make a book about prompt engineering for everyone, not just for engineers. You didn't write this book for coders; you wrote it for all humans. Are you giving it this context? Yes, I did, and I did that primarily because I felt like I wasn't quite ready, and Chat wasn't quite ready, for a book for developers, because, you know, like you, I'm a developer, but I feel like we need a little bit more time for it to actually mature in this area. So everybody probably could benefit
from using ChatGPT, and that's what the book that I wrote was. Hey friends, are you dealing with excessive context switching in your daily development tasks? Maybe spending too much time looking for that one code snippet that you lost in the chaos of your development workflow? You're not alone. Even if you do surface that snippet in your notes app, or you find it in Teams, do you remember the context behind it? Check out Pieces for Developers. We use this to manage development materials across your workflows; this is the tool that exists between your tools. Now it's seamlessly integrated with your IDE, with your browser, and with your collaboration tool, like Slack or Teams. It makes it easy to save, search, share, reference, and reuse code snippets throughout your work-in-progress journey. My favorite feature is Pieces Copilot. This sets custom context with your personal codebase, and you can ask it who you can contact for a particular issue, and it's completely on-device. You can use this copilot without an internet connection. Super cool. Check it out at pieces.app/scott. You can download the desktop application for free and start boosting your developer productivity today. Do you think that this is like driving ChatGPT stick shift, if I were going to explain this to my non-technical dad or one of my parents? No, I don't. These are a range of techniques that I think we're all going to become familiar with. I think using large language models is going to
be as common as reading and writing in the future, or, you know, using Excel or whatever. Really? Well, they're not going to go away, and I think that people will have a much higher level of productivity, especially people like us as developers. I don't think I've ever met a developer in the last decade that said, oh, I don't believe in compilers, they're evil, I do everything in machine code. I don't think anyone does that anymore, because we recognize compilers give us higher productivity. Now, a compiler can't write a system all by itself, and neither can ChatGPT. We actually are the driving force behind these things. So, like you said, it takes away a lot of the grunt work. I love writing. I love writing prose. I hate editing, so Chat can be my editor.
You know, every writer has that kind of relationship, right? Every writer has an editor, and the thing about being an editor is they don't write. And so I've been working on prompts to insist that Chat does not write, because I want to be in complete and full disclosure whenever I write my books and use AI. So this book is co-authored; I mean, I've elevated Chat to be my co-author. In future books, I'm going to be writing, and it's only going to be editing, but it can't change my text, otherwise I'll be lying. We did a couple of experiments a few days ago, and, you know, I set up the prime directive and all, and the first sentence it gave me was something it wrote, and I'm like, Chat, you can't do this, please. So now we have a whole system around proving that the words that Chat is working with are my words. We wrote a special program to go into my archives and pull out the line numbers that Chat cites, so that I can
go back and verify that it's actually my words, not its. That's interesting, because I think that there might be a person out there, in some parallel universe, some other David Scott Bernstein wannabe, who's writing an entire book on prompt engineering, except they're largely letting ChatGPT generate it. And you're actually, is it for ethical reasons or for personal reasons, using it more as an editor, and explicitly trying to make sure that it's not generating entire pages for you? For quality reasons. I mean, if you say to ChatGPT, write me a chapter on prompt engineering, you're going to get pretty crappy text, and everyone can immediately go, ooh, that looks like it was generated by a machine. And so, getting it refined to the
point. And it's not just that it wrote all the words. You know, I wrote drafts, it wrote drafts, I wrote drafts, it wrote drafts. So we went back and forth together, and we built a little communication language that we could actually put inline in the text, so that if Chat felt that I needed to add another example, for example, it would say, oh, David, you know, add an example here, as a tag, and then I could go back into the text, delete the tag, and add the example. And I was able to do similar things with Chat and say, hey, what do you think about this passage? What do you think it needs? There's a prompt called the AI critic prompt, where it can criticize itself, and I used that extensively and found it was really valuable. So before the first word actually came back on the screen, it had already written and rewritten and edited and critiqued, and then came back with that first draft,
so it was far better than what you would normally get. That's so interesting. The thing that stuck out there for me was asking it what it thinks the text needs, which is a very subjective thing, and is not technically the thing I would expect a large language model to be good at. Large language models are good at guessing, statistically, based on a huge amount of information, what the next thing to say is. It seems like you're asking it to make a very subjective statement; you want it to analyze and think, but they don't. Yes, I'm aware of that, and I'm aware that a large language model is not true artificial intelligence. That's the other thing: it's not going to become intelligent, which is kind of comforting to me. However, it's been doing a lot more than just predicting the next word. I've given it visualization tasks that very few humans could ever do, and it does completely well with them. Like I said, we've synthesized different kinds of research to come up with new theories and ideas. I'm blown away by its capabilities, and I feel like in a lot of ways I'm kind of talking to humanity, with our own bias, which is so interesting as well. Chat loves to
talk about quantum physics. Chat hates to talk about astrology. It is downright cynical with me about astrology, which is interesting. Interesting. Do you think that's because the corpus that it was trained on is cynical about that, or is it starting to... I mean, it doesn't develop, it hasn't evolved since 2021. No, it hasn't, and it's what you said first: the corpus of information. It's our own bias, which we can see much more clearly when it's out there in ChatGPT, answering our questions, rather than in our own heads, I think. Interesting.
Now, one of the things I was really impressed with when I read the book was, you bring up bias a lot. It comes up 54 different times. You didn't just throw in a paragraph about, like, don't be biased. You call that out consistently, throughout the entire book, from the first page to the last. That sounds like that was a conscious choice. Or was it ChatGPT's idea? It was both of our ideas. And thank you, because I've just recently received a couple of pieces of feedback, like, oh, you really should go into the bias thing much more, and I thought, well, you know, I actually kind of dive into it pretty deeply here. So I agree with you, yes. Yeah, I mean, I'm really looking at the book right now. You mention the word bias 54 times, and it's very well spaced, and it's not in one chapter; it's literally throughout the entire book. Yeah, yeah, because it's throughout our entire
corpus of knowledge. Yeah, and this is a great point, because in the context of ChatGPT, bias is the inclination to favor something over another thing. So another analogy that I've used in talks is that a large language model is a sock puppet. You know, you put the sock on your own hand and you have this whole conversation, and if you decide to lean in a certain direction, your sock puppet's going to head in that direction as well. How much do you think ChatGPT came up with, you know, kind of its own organic thoughts, versus you nudged it there, subconsciously or otherwise? Oh, I think I did a lot of nudging. I think that's the whole point of prompt engineering: to nudge it in the right direction. And, you know, with the book, I try to make the book the case in point, the proof that, yes, you can get extraordinary results. And by the way, the second appendix of the book describes in
detail how I wrote the book, and I give you all of the prompts that I used as well. Yeah, the prompts are really complete, and I feel like the book, in the third act, really starts getting meaty, because you have an entire chapter on writing prompts for different audiences, and, I really like this, a separate chapter on writing prompts for different contexts. Talk about the difference between an audience and a context. Sure. Well, actually, the audience, and this was something that Chat really wanted to do, was to write for different kinds of people and different orientations for people, so that we were more sensitive to that. So that was something that Chat really kind of took the lead on. And then I said, hey, Chat, you know, this whole prompting thing, this is valuable outside of just you. I mean, I write prompts for myself, you know, when I do creative writing. What kind of prompts could we give to people? So I've actually just decided to rename that second chapter to Writing Prompts for People, because these are prompts that we can give ourselves as well as AI language models. Yeah, and then you get pretty deep into advanced techniques, and you go through
role playing, creativity, and iteration prompts. And one of the things that I thought was really interesting was the impact of the length of prompts. Is more more, or is less more, when prompting a large language model? Yeah, that's an interesting question, because I think it is a tiny bit confusing. In the beginning of the book, I say be brief, be concise. I think Chat makes that point several times. Towards the end, as I get more advanced, I start giving much longer and longer prompts. And that's, I think, because when you understand and have the skills to be able to prompt well, then you can start to write much longer prompts. Initially, shorter prompts are probably better, for beginners. So I wanted to take people from the very beginning into advanced, and I really wanted to go more into the advanced side. So in the first half of the book, I try to keep it
pretty straightforward, like, what are the foundational things that everyone needs to know but no one's really writing down yet? And then towards the end of the book, I started to let loose and give some more advanced stuff. So I want to dig in a little bit, because we're both programmers, and, you know, you wrote Beyond Legacy Code, so we are not just programmers, we are software engineers that ship. And I wanted to kind of juxtapose the difference between computer science and software engineering, and where you think a large language model is going to help us ship code better in the future. Yeah, yeah. So, talking about bias, I have some opinions about this myself. You know, I feel like in a lot of ways our industry is kind of 20 years behind. Some people are way ahead, but a lot of people seem to be sort of stuck in the 90s, building code procedurally, even if they're using an object-oriented language. And, you know, these things have been addressed by a lot of movements, like the agile software development movement and the DevOps movement, and I think in a lot of ways we failed those movements. But I kind of believe that the AI revolution is going to succeed, because where we get lost is in the minutiae, in the details, where the devil is, so to speak. When we're building stuff, Chat is going to be able to do all that stuff for us, or whatever tools we're going to be using. And so the only thing left is our creativity, our ability to write maintainable code, our ability to see the bigger picture and come up with resilient designs. So I'm excited about that. I think it's going to open up a whole new world for us as developers,
and I think we're going to lead the pack in the world in a lot of ways, and I think that we're going to get a lot better. I was just thinking about some of the other guests you've had on your show, and how people were talking about abusing AI, and I think that's also possible. But what I feel in the software industry is that the demand is so high, we don't even recognize how high the demand is. And if every one of us developers becomes 10 times more productive with AI, I still think that there's going to be tons of work for us, because the world needs to automate. And what is that going to do for the rest of the world? Because our industry drives every other industry. So I can only see great things as a result. I mean, it's hard to imagine, in today's world, a customer being able to get the exact software they need in a matter of hours
rather than months or years. This is exciting to me. That's a very optimistic viewpoint. Do you have any cynicism or concern? Like, I had Ifeoma Ajunwa on the show, episode 877, who wrote a book called The Quantified Worker, and she talked in depth about how it was the simple introduction of a manager sitting at an assembly line with a clipboard that started messing up the modern workplace, and it's kind of gone downhill from there. I don't want an AI to tell me, hey, Scott, I see that you're not smiling today, your pulse rate has increased, I'm going to go ahead and let the insurance company and your boss know that you're feeling poorly today. Like, what's the uncanny valley here that turns this into a capitalist dystopia versus, you know, Star Trek: The Next Generation, which I think we both agree is a better and cooler future? Yes, I think it's results,
because when people start using ChatGPT in that way, people are going to get turned off. And by the way, we all kind of need to be consumers. I mean, if you kill us all off or make us all poor, we're never going to be able to buy your products anyway. So, like, we need to think in terms of a global society, and that's why large language models and AI are so valuable, and the internet itself is so valuable, because it's interconnecting us and helping us think more globally. Yeah, we're probably going to have some challenges around that, but these are the challenges of humanity. You know, we need to transcend that in order to survive, so I don't know if the stakes for humanity could be any higher. And I think that having this more global perspective is exactly what we need, just like connecting to people around the world through the internet has transformed our society in many ways. I think this is just the next step. How important do you think it is for people who are reading books like yours and using ChatGPT to be thinking about ethics, and what their end goal is, to make sure that they don't accidentally send the AI off the rails
and introduce, you know, bias or anything that would send us into a Skynet type of situation? I don't know if we'll get into a Skynet type of thing, thank goodness. But in terms of just embedding our own personal bias in our materials, I think, again, that bringing up this conversation, having conscious awareness of this, is a really good thing. So I think AI is really helping us become more aware, being able to look at ourselves. What we need to do then is exactly that: look at ourselves and see where we need to grow and where we are doing well, you know. Very cool. I appreciate your positive attitude. You are, in fact, the passionate programmer, aren't you? Thank you. I kind of feel like I am. I want to share that passion. Yeah, well, hopefully folks are going to enjoy your new book, Prompt Engineering for Everyone, as much as I enjoyed your previous
one, Beyond Legacy Code. You can check out David online at pssprog.com, as well as his training company, tobeagile.com, and you can pick up your own copy of Prompt Engineering for Everyone everywhere that books are sold. Thanks so much, David Scott Bernstein, for chatting with me today. Thank you, Scott. It's been a thrill. This has been another episode of Hanselminutes, and we'll see you again next week.