#NN 0121 - Vibe Coding with Brian Feister

Apr 29, 2025 · 34 min · Ep. 122

Summary

Brian Feister discusses 'vibe coding' using AI, emphasizing the importance of careful code review and questioning AI outputs. He shares insights on AI's impact on developer skills, the risks of uncritical AI adoption, and realistic productivity gains. The conversation also covers the challenges of scaling AI-generated code and the evolving role of developers.

Episode description

In this episode, we talk to Brian Feister about vibe coding. He shares his experience developing a new product using AI and highlights the importance of thoroughly reviewing everything the AI suggests, because it lacks common sense. We discuss the risk of junior developers producing poor-quality code and the severe damage a company could face if it begins firing developers and replacing them with AI. Brian expresses scepticism about claims of massive productivity gains from AI, but suggests that it will have a significant positive impact in many ways that we don't understand yet.

Follow Brian on LinkedIn https://www.linkedin.com/in/brianfeister

Listen to the podcast on your favourite podcast app:

Spotify | Apple Podcasts | Google Podcasts | iHeartRadio | PlayerFM | Amazon Music | Listen Notes | TuneIn | Audible | Podchaser | Deezer | Podcast Addict

Contact Murray on LinkedIn or via email 

Transcript

This is No Nonsense Leadership, where we explore better ways to develop software products and services. Join world-class experts for honest insights and practical advice to help you lead digital teams clearly and confidently. Subscribe now to learn the best ideas in the field.

In this episode, we talk to Brian Feister about vibe coding. He shares his experience developing a new product using AI and highlights the importance of thoroughly reviewing everything the AI suggests, because it lacks common sense. We discuss the risk of junior developers producing poor-quality code and the severe damage a company could face if it begins firing developers and replacing them with AI tools.

Brian expresses skepticism about claims of massive productivity gains from AI, but suggests that it will have a significant positive impact in many ways that we don't understand yet. Welcome to the No Nonsense Leadership Podcast. I'm Murray Robinson. I'm Donna Spencer. And I'm Brian Feister. Hi, Brian. Thanks for coming on. Yeah, happy to be here. So we want to talk to you about vibe coding today. Before we do, could you introduce yourself to the audience?

Sure. My name is Brian Feister. I am a technical lead at Salesforce. My team builds an open-source equivalent to Vercel, but it's commerce-specific and owned by Salesforce. It's a React framework that's isomorphic and runs on the server and the client. That's my day job, and then I also have a startup where I'm the CTO. I have two other engineers who work for me, and that's where I have a lot more of the vibe coding, haha, experience.

So according to Andrej Karpathy, who coined the term: there's a new kind of coding I call, quote, vibe coding, where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It's possible because the LLMs, Cursor Composer with Sonnet, are getting too good. I just talk to Composer with SuperWhisper, so I barely even touch the keyboard. I ask for the dumbest things, like decrease the padding on the left sidebar because I'm too lazy to find it. I accept all for the code suggestions, always; I don't read the diffs anymore. When I get error messages, I just copy and paste them in with no comment. Usually that fixes it. The code grows beyond my usual comprehension; I'd have to really read through it for a while. Sometimes the LLMs can't fix a bug, so I just work around it or ask for random changes until it goes away. It's not too bad for a throwaway weekend project, but still quite amusing. I'm building a project or web app; I just see stuff, say stuff, run stuff, and copy-paste stuff, and it mostly works. And that is the canonical definition of vibe coding.

Yeah, I use a lot of AI in my startup, and a lot of what he says is the opposite of what I've learned about how I've got the most amplification of my ability. Mostly what works for me is being very meticulous about reading the output. So that would be the opposite of accept-all for the code changes.

I have long-winded conversations with the LLM where I explain: notice the function signature here and this other function here; these two need to interact successfully. It's like typing a long letter to somebody who has perfect memory but not a lot of common sense. That's my experience of AI-assisted coding. Yeah. So what tools are you using? I use Cursor and I also use Claude Sonnet. I've been doing this long enough that before Claude Sonnet, there was a time when I was using

OpenAI's model. Before it was integrated in the editor, I was copy-pasting these kinds of long-winded conversations into ChatGPT. The model capability was less, but I taught myself Golang from zero, and almost all of what I've learned is from the LLM. I'm sure there are gaps and blind spots in my understanding, but that was okay because I was brand new to Golang. I was learning it from zero, and it still worked for me. And then...

It was a big improvement in productivity with the integrated editor, with Cursor and Claude. Claude is way better in my experience, higher-quality output than the OpenAI model. Is this in VS Code or some other IDE? Cursor is a fork of VS Code, and Cursor is a huge company; they're one of the fastest-growing-revenue software companies in history. Because VS Code is open source, they just forked it and introduced all their own hooks, where the LLM will propose code changes and you can accept them inline. It's great, but the hardest thing in the world is seeing a proposal that, if you read it too fast, looks correct. And you're like, man, I want to get this done, let me click accept. Resisting that urge is one of the hardest things in the world.

Ironically, what he's saying is the exact opposite of the most important skill that I think engineers will need to develop. Okay, so what have you found are the best practices for getting good, usable, production-ready code using LLMs to help you? I think one of the biggest mistakes people make is that they don't ask questions; they just ask it to produce output. So I'll get output, and because I have 20 years of experience as a software engineer, I will notice something that's fishy and I'll say, what does this line do? Sometimes that'll help it to correct itself, but where it's 90% correct and these three lines are wrong, I'll delete those myself. I don't need to prompt it 500 times to get it to fix that small thing. It's a weird space, because the more experience you have, the more it amplifies it. It gives the most to the people who already have the most.

Another thing that I do a lot is think very hard about the function call depth. If I know I'm going to call five functions deep, I will only ask it for two or three levels of function depth. I'll get that part first and make sure it works, then I'll add the next link in the chain. It's guaranteed to hallucinate if you ask for five levels of call depth: it's going to hallucinate the function signatures, and it's going to make up variables that don't exist and assume that they're there.
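As a rough sketch of that shallow-first workflow in Go, here is what two verified levels of call depth plus a later-added third level might look like. All the function names here (parseRecord, normalize, validate) are hypothetical stand-ins for illustration, not anything from Brian's codebase.

```go
// Sketch: build shallow call depth first, then extend.
package main

import (
	"fmt"
	"strings"
)

// Level 1: the entry point you ask the LLM for first.
func parseRecord(raw string) string {
	return normalize(raw) // exactly one level down; easy to verify
}

// Level 2: confirm this pair works before asking for more depth.
func normalize(s string) string {
	return strings.ToLower(strings.TrimSpace(s))
}

// Level 3: added in a later prompt, only after levels 1 and 2 are verified,
// so the model sees the real signatures instead of hallucinating them.
func validate(s string) bool {
	return len(s) > 0
}

func main() {
	rec := parseRecord("  Hello World  ")
	fmt.Println(rec, validate(rec)) // hello world true
}
```

Because each prompt only ever adds one link to a chain that already compiles and runs, a hallucinated signature shows up immediately instead of three levels down.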

For things that are syntax-based, it's really powerful. For example, I didn't know anything about goroutines for running multiple routines or threads simultaneously. That's just a syntax question, so I can ask it to write that, and then I can ask it questions about the lines that are new to me, because I haven't seen them before. So I asked you what works; what doesn't work?
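For readers who, like Brian, haven't met goroutines before, a minimal example of the "just syntax" he's referring to might look like this (a generic sketch, not code from the episode):

```go
// A minimal goroutine example: fan out work, then wait for all of it.
package main

import (
	"fmt"
	"sync"
)

// squares computes i*i for i in [0, n) with one goroutine per element.
func squares(n int) []int {
	var wg sync.WaitGroup
	out := make([]int, n)
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(k int) { // "go" runs the function in its own goroutine
			defer wg.Done()
			out[k] = k * k // each goroutine writes only its own index: no data race
		}(i)
	}
	wg.Wait() // block until every goroutine has called Done
	return out
}

func main() {
	fmt.Println(squares(4)) // [0 1 4 9]
}
```

This is exactly the kind of block where his advice applies: once the LLM produces it, ask what `sync.WaitGroup` does and why the index is passed as an argument, line by line.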

Yeah, I think what doesn't work well is weakly typed languages. If you're writing JavaScript, which doesn't have a compile phase, you're going to have a lot of subtle hallucinations and bugs that you're not going to catch. I'm not great with Python, but I would guess maybe that's tough too, because you don't have to have types in Python, and then it gets a lot easier for it to hallucinate without you knowing that it's happening. You indicated before, when we talked about the definition of vibe coding, that

it doesn't work to just accept whatever it tells you. And if it's giving you code that you don't understand, it sounds like it's not a good idea to just accept it. Yeah, for sure. I make sure to define small iterations. I don't want large chunks, because I don't want it to become impossible for me to

think about. If it's 10,000 lines, there's no chance that I really understood what that was, right? Keep it to small, iterative chunks and make sure the small bits are working. And people miss the opportunity to ask it what it's doing. So ask about those three lines that you don't understand. And if you want to go crazy and have it generate a thousand lines of code, fine, but ask it what all of that is. Read all of it.

That's where I think the industry will change as a result of this. Today, if you haven't memorized the tools, you're going to struggle in data structures and algorithms interviews. But memorization is going to be at an all-time low for value, and reading and noticing things is going to be important. I think that is a big pivot, because big tech is so entrenched in the idea that memorization is the main thing. So I think this change with AI will bring about a fundamental shift.

I was thinking about what this means for junior developers. Does it change the way they learn, and does it change the way they work? Yeah, I think it does, because a lot of the mentoring that a senior engineer does with a junior on the team is about syntax and little gotchas and stuff like that. Certainly not all of it, but the LLM will be able to be a buddy for a junior developer in a way that will free up senior developers from a lot of those rote memorization questions that are like, oh, here's how these two systems fit together; there's a little gotcha here that you should be wary of.

I found a lot of gotchas in new stuff that I've learned that the LLM does know about. That doesn't mean all of them, but it would significantly reduce the burden on senior engineers answering little questions. And that would free up more time for the senior engineers to impart high-level systems thinking, system design and abstract thinking, or understanding the boxes on the whiteboard diagram and how they fit together. So I do think that would be a big shift.

The temptation to be lazy has always been there. Twenty years ago, when I started, I copied things off of Stack Overflow that I did not understand, and I got myself into a lot of trouble. That part is going to be way worse, because you're going to have something that feels oddly human that keeps saying the wrong thing over and over again, and you don't know that it's wrong because you don't know what's right.

And now you're stuck in an infinite loop. That's a real danger. That's why I emphasize asking questions of the LLM. Sometimes it won't fit things together correctly, but it can correctly explain one line at a time. You might have to go down to that level and ask it for explanations one line at a time, because there's something in there that's hallucinating, that doesn't understand the connection to the aggregate picture, right?

But you might be able to figure it out if you ask: what do these two lines do? What do these two lines do? And so I do think that's going to be how people learn in the future. And it's going to be more important than ever not to blindly copy garbage without understanding why it doesn't work.

It sounds like these LeetCode interview questions are going to be a waste of time in the future, because that's the sort of thing these AI tools can easily do. Yeah, I agree. I think that's where that system design thing comes in, because LeetCode is also system design. LeetCode is understanding what's the best solution. But there was this unfortunate aspect, which was memorization, and that part is going to be worthless. You still need to understand the whiteboard and how the boxes relate to one another.

I can ask the LLM: what's a good algorithm that solves problem XYZ? Here is the use case: the end goal is to do X on a website that has Y million users and Z number of joins in a database table. So what would be a good algorithm for this? It's going to do really well at that. And if you can think through the connection between those boxes and describe it to the LLM, you'll succeed. So if your LLM was a person on your team, how would you think of them? Yeah, I would think of them as being like a savant: someone who has perfect recall but doesn't have great common sense; understands the English language well but might not be great at mathematics,

and can memorize all the different code snippets that you would ever find on Stack Overflow, but doesn't always know how to put them together. So it will exceed everyone else on my team in memorization, but memorization alone doesn't actually get you to the finish line. I hang out on r/ExperiencedDevs on Reddit, and I see tech leads complaining about the less experienced people on their team using AI tools to produce absolute garbage, and then submitting these really big pull requests for review and not being able to explain what the code is doing, or even take feedback. You're producing code, you're meeting your company's productivity standards, apparently, but you're not producing anything useful.

It used to be the case that if you worked at Google, there's no way you were going to copy and paste Stack Overflow answers and get a pull request done that compiles at all. Now, with LLMs, they're a little more clever, and you might have a lot of garbage code that does literally nothing, but it still compiles. And that's a really, really big change in the industry. Sometimes when you outsource, you get quite low-quality development teams full of people who are very inexperienced, because they're cheap. And I could imagine those people appearing to be quite productive, because they're doing lots of stuff and it's compiling.

And so you're paying them whatever the rate is, but you can't rely on any of it. Yeah, I had an example of a subtle bug in production. I had users that couldn't log in, and there was an HTTP handler function for the Go server that would throw this internal server error, and it wouldn't even throw any logs. And so I think that was LLM slop, as they call it. Don't people say that as a software developer in most companies, you're spending probably at least 60% of your time maintaining existing production code? And a lot of that is just trying to work out what the hell it does.

Reading code will be more important than ever. It already was important: you read code a lot more than you write code. I have no idea what drugs people are doing who think that it's a 10x productivity gain. I might have experienced maybe at most 50%, but that is very specifically because I wrote a platform from zero.

If I were maintaining code, the benefit would be far less. That's where it's been perfect for me starting a startup: it's really amplified my ability. But I'm already hitting a point where I'm introducing fewer features, more slowly, and the benefit is less. Some of the rhetoric around vibe coding is that a fairly non-technical founder could learn to code, produce a thing, and get it out really fast. Is there a reality around that? No, I don't think so.

With my startup, I actually have a B2B relationship with a product manager who I worked with. He's a very good friend, but he had built a no-code app and he hit a wall with it. He had actual paying users and needed to be essentially rescued by me. It's fine to build a prototype, but the problem is that in order to know whether a prototype works, you have to have users. And if you have users, you might have paying users. And if you have paying users that are on a prototype, you have a ticking time bomb. So can a non-technical founder build a product, an MVP, get users, get traction? Yes. But the more yeses to more of these questions, the more danger you're in, because in the same way that no-code apps don't scale, I'm sure that vibe-coded apps are not going to scale well either.

And so essentially there's this really hard transition where you need to pull the rug out on the product that was the MVP that real users are on, and transfer them to a real product that can scale. That's a big challenge, because you have to get significant traction before you make that switch and spend that money to hire a technical founder: give away equity in your company, pay them a lot of money, whatever it's going to be that's going to get the job done. And knowing when you're safe to make that transition,

and when you're about to hit the ceiling and about to have lots of angry users, that's a very difficult balance to strike. I'm not that technical anymore, and I've been looking at developing an app as well. I struggled using ChatGPT; it didn't really work for me, and I got into the tar pit pretty quickly. But a no-code environment like Bubble looks like it might be quite good. So my business partner built a real-time voting app for a karaoke contest.

This was on a different platform called Adalo, not very different from Bubble. He had six different database tables, and he was trying to join these together. Now, the way Adalo works is that these turn into a really big, heavy JavaScript app that's client-side. And so when you do that and you make six round trips to the server, it takes forever to load the app.

And Bubble, I would imagine, is similar. It probably dumps a lot of JavaScript on the client; it probably does a lot of client-side work and not a lot of server-side work. If you have a simple use case where you're making one or two API calls from the client, it's probably fine. But if you have six tables and you need all six of them to put the UI together, it takes 30 seconds to load. They do that because it's very resource-intensive to scale a cloud service for hundreds of thousands of users doing SQL. But if they can run it all in your own browser, it's using your local machine's compute instead of theirs.

Yeah, I had a friend of my dad's who had a business with a site that listed horses for sale, and she couldn't understand why the site was so slow. She was using RDS, which is a SQL database product from Amazon. Then she moved to the serverless version, and her cost went from $200 to $800 a month.

And this is not with millions of users; this is thousands of users. But she was doing table joins on 517,000 rows across multiple databases, and that's where the serverless thing gets you. And so Bubble is dumping it on the client so that those joins don't happen and drive their AWS bill up to $800 a month. They're protecting themselves.

I wanted to come back to this productivity thing. We know that McKinsey, Bain and BCG are doing a lot of work for big companies, advising executives on AI. What I'm seeing on r/ExperiencedDevs is people saying that one of these big consultancies went through and told the execs that if they do all this, they can get rid of a whole lot of developers. So they've been given all these tools and the company has just started firing people: I now have half as many developers in my team as I used to, and I'm expected to use these large language models to do the same amount of work as before. So I'm wondering, what sort of impact on productivity is realistic? If you're a developer, is there a 5% increase in productivity? Is there 20%? Or is it this 10-times multiplier?

It depends a lot on what you're doing. If you're a greenfield startup with zero lines of code in your project, you might get as much as a 100% productivity gain. I started using AI before it was integrated like it is with Cursor, which is a big speed-up, and it did help me for sure. It helped me learn a new language. It gets you past blank-page syndrome; that's the most powerful thing for me, and not all engineers experience that to the same degree. I definitely have some amount of ADHD, and so the blank-page effect for me is maybe more powerful than it is for other people. For me, that's why I'm more pro AI assistance, even if I'm not pro vibe coding. But all that said, what are realistic productivity gains? It depends. I've heard that Amazon's in-house-trained LLM knows about things that other LLMs don't know about. But when you have a really big,

complex code base, you're going to need specialized training. The problem is that LLMs have ingested all of the code on GitHub, so you're still going to have a bias toward that irrelevant code. There's probably some future where Amazon is able to train an LLM from zero with no garbage code from GitHub, only in-house Amazon code. I think that could happen eventually. But most companies are not that technical, and most of them are deeply hurting themselves by taking their already weak position and outsourcing all of this to LLMs that don't understand it, overwhelming devs by cutting half the team and expecting what should be a 5% or 10% productivity increase to be 100%. There will be whole companies that die because of this. Tribal knowledge is important in software; if you cut half of the team, you'll never recover from that. But realistically, as a developer,

what proportion of your day is actually writing code to start with? Exactly. Big companies have a lot of work that's not technical. In a lot of big tech organizations, you express your influence through the people who report to you and who you collaborate with. You think more and more strategically, more and more abstractly, even if you're not in management. And a lot of my day is meetings; I spend a lot of time in meetings talking about strategy. There are a lot of plates spinning, a lot of convincing other teams to help you, and I spend a lot of time talking to people. That's why it's way more interesting to talk about AI at my side gig, because there were zero lines of code when I started. I wrote 150,000 lines in a year on the side, which I definitely would not have been able to do without the assistance of AI, just as a weekends-and-early-mornings project. But anyway, all that being said,

at Salesforce, my day job, we have so many products, so many systems. I spend lots and lots of time talking to people who lead and know other systems that we're constantly integrating and bringing together, and there's zero documentation on some of these things. So I spend a lot of time talking to the people who own those products about how to use them and how to integrate them with my product and my team.

All that stuff is not going to change, I think. Yeah. Donna, we'd better go to summaries. What do you think? Sounds good. We started today with you giving us the canonical definition of vibe coding, which was really good to hang the discussion off. Then we talked about good practices and the mistakes that people make.

One of the main mistakes you highlighted was not asking questions of the output that is being produced. And throughout today, you talked about how asking questions, reading and thinking are much more important skills than producing output and hitting accept-all. AI-assisted coding might help juniors, and it was interesting that being a buddy for juniors, for a lot of those memorization, mechanical and gotcha things, is a really good use of AI-assisted coding.

It frees up seniors to help juniors see the big picture, be more strategic, and do much more valuable mentoring than, oh yeah, when you do this thing you've also got to do that thing. Murray asked a really good question, which was: if the LLM was a person on your team, who would they be?

And you talked about how it would be a person with really good recall but low common sense; somebody who knows a lot of things but doesn't know how to apply them; who can memorize all the snippets but doesn't necessarily know how to put them together. I asked you about the rhetoric around non-technical founders being able to throw together a new app.

And you pointed out that while that might be able to happen, like no-code, the problem will always be scaling. But the insight that I drew from that was that identifying the point where you need to go from your scratchy MVP or your no-code solution to a thing that's robust is hard, and quite risky, because by then you have paying customers. We also talked about the rhetoric that AI-assisted coding can improve your productivity by 50 to 100%, so you can get rid of a large number of people from your team. And you noted that when you start from nothing, from zero code, the productivity increases are there; but when you're maintaining a code base and working in big teams, a lot of the time is actually reading, figuring out how things work and how things connect.

You're not going to get large productivity gains out of that. And working in big organisations is a lot about talking to other people, understanding how things work and keeping the plates spinning, and AI-assisted coding is not going to improve your productivity in that either. So maybe getting rid of large parts of your team is not a practical recommendation. Yeah, I've been wondering recently what sort of productivity benefit we're going to see from these AI tools.

And I'm starting to come down on what I would call a medium benefit. It's not the massive benefit that everybody's talking about. It reminds me a bit of when Google first came out. That was actually really helpful, but it wasn't the same size of step change as the internet itself. The internet opened up the whole world, and it's been enormously beneficial for lots of things.

So it's a medium advantage. And what that means is that the amount of money going into it doesn't make any sense whatsoever. We're going to get companies getting some medium productivity benefits from using large language models. But there's a lot of danger that teams could start producing garbage, which is going to wreck your production systems, if they try vibe coding.

The idea of closing your eyes and just hitting accept-all while you talk to the computer? That's insane. But I think it might rival the internet. I think it's still early days, and I think the rate of acceleration is pretty significant. You know, a year ago it was copy-pasting things into ChatGPT, and that was helpful. Now it's integrated with Cursor: a really big step change there.

So, Claude Code: a CTO friend of mine got a developer preview. He defined really discrete unit tests that it had to pass, and he allowed it to just loop and loop until it passed all the tests. It reduced the size of his Docker image by 20% in 10 minutes or something like that, and he paid $2 for it. Making that amount of change would normally take a day of a developer's time. So there are certain subcategories of tasks that are going to be really dramatically impacted, I think. I don't know what they're all going to be. I think it's still early days.

Yeah. The only thing about accelerating change is that research published by OpenAI and other AI researchers has shown that LLMs have hit a wall. They're at the point now where, if they use 100 times more training data and compute in preparing their models, they get something like a 15% increase in performance. So what they have to do is focus on other areas. Reasoning and understanding are the two things that they're still pretty weak at, so if they can make big improvements there, that would be great. For me, the interesting thing has always been: what can I do that I couldn't do before, assuming no progress whatsoever?
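The "define discrete tests and let the agent loop until green" workflow that the CTO friend used can be sketched roughly as follows. The function (`slugify`) and its spec are hypothetical illustrations, not details from the episode; the point is that each case is small and unambiguous enough for an agent to iterate against mechanically.

```go
// Sketch: discrete, unambiguous test cases an agent can loop against.
package main

import (
	"fmt"
	"regexp"
	"strings"
)

var nonAlnum = regexp.MustCompile(`[^a-z0-9]+`)

// slugify is the kind of small, spec-driven function an agent would iterate on.
func slugify(s string) string {
	s = strings.ToLower(s)
	s = nonAlnum.ReplaceAllString(s, "-")
	return strings.Trim(s, "-")
}

// spec holds the pass/fail cases the agent must satisfy before it stops looping.
var spec = []struct{ in, want string }{
	{"Hello, World!", "hello-world"},
	{"  Vibe Coding 101  ", "vibe-coding-101"},
	{"---", ""},
}

func main() {
	for _, c := range spec {
		got := slugify(c.in)
		fmt.Printf("%q -> %q (pass=%v)\n", c.in, got, got == c.want)
	}
}
```

Because pass/fail is mechanical, a human only needs to review the spec, not every intermediate attempt the agent makes, which is what made the Docker-image task cheap.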

And I think there's a lot that hasn't been explored there, because all the money in the world has been sucked, like a black hole, into these companies that are just trying to improve the reasoning score by two percent. But there's so much to be done with no assumption of progress. That's where I think it's significant. Yeah. All right. So, Brian, how can people read about all this stuff? How can they have a look at the product that you built?

So on LinkedIn, I'm Brian Feister. Brian's with an I, B-R-I-A-N. Feister is spelled F-E-I, S as in Sam, T as in Turtle, E-R. So you can look me up on LinkedIn. And my project is still pretty early days. I'm more B2B-focused, so the vision is to have events that are all over the world. But anyway, the product is meetnear.me. So it's M-E-E-T, near.me. And you might have to add a query parameter, radius equals 50,000, to see a lot of events, because they're not global yet.

For now, it's a B2B thing with karaoke league competitions and that sort of thing. And I hope to have free events, just to help with community curation, so that people can collaboratively work together to point to another website and then pull those events in. So hopefully they're curated. That's the vision. Why would people use your product rather than Meetup?

So Meetup experienced 90% attrition in their event listings when they made it so that everyone must pay to list events on the platform, three or four years ago, something like that. It's a minimum $30 entry, and I just don't think that can help communities to really self-organize. All right, so people can look you up on LinkedIn and go and have a look at your product. This has been great. Thanks for coming on.

Yeah, thank you. This was a great conversation. It's nice to talk to somebody who I feel gets it. So I really appreciate you guys trying to put the good word out there without all the hype. That's us. Yeah, we're trying to cut through the hype. That's why we're the No Nonsense Podcast.

That was No Nonsense Leadership from Murray Robinson and Donna Spencer. If you'd like to explore better ways to develop software products and services, contact Donna on LinkedIn and Murray at Evolve.co. That's Evolve with a zero.

This transcript was generated by Metacast using AI and may contain inaccuracies. Learn more about transcripts.