
The Identity Crisis of Software Engineers in the Age of AI with Borislav Nikolov & Rares Mirica

Jun 11, 2025 · 1 hr 13 min · Ep. 203

Episode description

AI is reshaping software engineering—but are developers getting better or worse?

In this episode, Borislav Nikolov and Rares Mirica join me to discuss the identity crisis facing software engineers due to AI-driven tools. We explore the balance between increased productivity and potential intellectual decline, and what developers can do to stay ahead in a rapidly changing field.

Topics include:
✅ Pros and cons of relying on AI tools in coding.
✅ The importance of maintaining critical thinking skills as a software engineer.
✅ Real-world examples of AI’s impact on productivity and software quality.

Tune in to understand the challenges and opportunities AI brings to software engineering and discover how to adapt effectively. 🎧


Connect with Rares Mirica:

https://www.linkedin.com/in/raresmirica


Connect with Borislav Nikolov:

https://www.linkedin.com/in/borislav-nikolov-328ba221a


Full episode on YouTube ▶️

https://youtu.be/RO--t77O4HE

Beyond Coding Podcast with 🎙 Patrick Akil

Powered by Xebia!

Transcript

Hi everyone, my name is Patrick Akil, and for today's episode we cover the software engineering identity crisis, especially with AI nowadays. I have a question for you: are we becoming worse engineers overall with AI, or are we becoming better engineers? That's the topic of today's discussion. And joining me today I have two people, both returning guests and friends of the show, Borislav Nikolov and Rares Mirica. It was a blast having them on again and I'm sure you'll love this one.

So enjoy. I'm wondering, because I was talking to a few friends that are more in the start-up phase, and they love it for vibe coding. But to get to the point where they love it, it was a bit of an identity crisis, because people try it out and they're kind of resisting this change where you generate code that is actually quite good, you can still change it, and it might fulfil your use case, to the point where they generate a

lot of code. Especially in the start-up phase. I still haven't seen bigger organisations with a lot of existing code be effective with it. But when you're starting from scratch, I can see how generating a lot of code to fulfil whatever use case you have can be very beneficial. I wonder if people are going to go through this identity crisis more, because engineering is changing, however you want to look at it.

There is something new in there, and companies are also looking at that and being like, OK, do we still need this engineering capacity or not? So the world is changing, and I wonder what you think with regards to this identity crisis. The identity crisis is for sure coming, yeah, right. So yeah, for sure, because people are experimenting with it, as you said, for new projects and vibe coding and starting small, this is quite successful.

Yeah, it depends a lot on the field and domain, right. If we're talking about solutions that are more generic CRUD applications, of course, and things that are more on the beaten path, yeah, the coding Copilots and so on, they can definitely handle those things. And so you're going to see teams that

put that to a lot of heavy use. And then those code bases will evolve into a structure that is kind of compatible with what has been done before with that Copilot or coding assistant, and they're going to have a lot of success with it. I think if we're looking at larger pre-existing code bases, they have their own idiosyncrasies, and they are not usually compatible with what the coding assistant will kind of

expect, in a sense. Because the coding assistant is the average developer, in a sense, the average Stack Overflow developer. In fact it has some built-in assumptions. So of course your own corporate code base might not be compatible with those

assumptions. So your engineers, who now have a subscription and are trying to use this coding assistant, will discover that it's spewing out rubbish code that, even though it might compile and might even solve the problem, is completely incompatible in form, in style, in expectations regarding all sorts of things, down to security, compliance, etcetera, with the pre-existing code base. So that will introduce friction.

I think there needs to be another couple of advancements, some serious advancements, in terms of establishing a context and letting the agent understand its environment better for it to be useful in these situations. If you're a start-up, or a one-person start-up, then I think you definitely should give it a go. It will give you a big productivity boost and so on.

And it's good to get into the habit of using it all the time, because you will have this extra degree of freedom, this extra resource that you can use for whatever, not only for: hey, can you write this function for me, or can you write this library code for me, or can you integrate this library code for me? You can ask it all sorts of things about the domain that you're solving problems in.

So it can actually replace a little bit of your business colleagues, your product manager, your data scientists, etcetera. It can give you some insights that you can use. And as an engineer, getting used to using that loop to improve your work, I think that's super productive. Now, when we're talking about established code bases, I don't think that's going

to work that easily, right. And then there's the question, to go back a bit to the other conversation, of: we are now writing in human-oriented programming languages. So maybe there is a future in which vibe coding becomes the compiler step, not the editor step. Right now we're in the editor/IDE step of the human-to-computer journey, let's say, right?

So there's that feedback loop: the human puts code into the IDE, then it passes on to the compiler, processor, etcetera. Then comes back the output. You evaluate that output, and then you evaluate your code, compare it for correctness, and then you restart that loop again and you iterate and improve your code. But when you bring the agent into this, then the agent must understand your human intent and the human-oriented programming language.

That could be Java, C, Python, Perl. Just to add to that: all the human abstractions and indirections, everything, we do it because of the way we think, because of our limitations. The way we manage complexity is very much expressed in this: OK, let's abstract. Where do we abstract? What's the point of abstractions? Why don't we code in microcode? The M1 has like 600 registers. Why do we need

branch predictors? Why can't we tell it which branch it's going to take? Because we are incredibly limited in thinking about this incredible amount of interactions between particular pieces of the machine, of the computer, right? So we're forced to, at the moment. That's why we have all these languages and libraries and abstractions. To draw a piece on the screen is ridiculously complicated. It's like tens of millions of

lines of code. It depends on the machine though, not on an ESP32. Well, the ESP32. It depends on the machine, so we have different levels of masochism. Yeah, no, I was just going to say the example of the ESP32. The ESP32 actually has a small standard library in the ROM.

OK, so it's baked into the CPU itself. So the complexity, we even try in hardware itself to help manage it: the way we control the GPIO controllers, the amount of registers, all this is actually expressed in the wires. And even there, it's not 'draw pixel on the screen'. So you want to go even further. You want the GPT to output Verilog, for example. That's an example, no? Into the foundry and you get an

ASIC. For example, that's exactly what you want. That's like the... absolutely. I don't want that. I'm saying, from the wires up to the abstract class, blah blah, between those two things there is the 'draw pixel on the screen' driver. Like, let's say you want to draw a pixel from the JVM. From the JVM to the pixel there is, I don't know how many million, easily 20 million lines of code, easily, maybe. You need to go through the OS.
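(A concrete aside on the ESP32 point above, a minimal sketch assuming an ESP-IDF project: ets_printf is one of the routines commonly cited as living in the chip's mask ROM rather than in the flashed binary. The header path varies across IDF versions, so treat this as illustrative, not canonical.)

/* Illustrative sketch only: calling a routine that ships in the ESP32's
 * mask ROM. With ESP-IDF, ets_printf is declared in rom/ets_sys.h
 * (esp32/rom/ets_sys.h on newer IDF versions); the implementation was
 * burned into the silicon at the factory, not linked from your binary. */
#include "rom/ets_sys.h"

void app_main(void)
{
    ets_printf("hello from mask ROM\n");  /* code lives on-die, not in flash */
}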

Yeah, yeah, it's great. So now we have this whole tower of abstractions just because we're human, because we're limited, and we try to make it so more people can take part. There might be people that are incredibly gifted, who can see through all this and can actually code the microcode, and they absolutely exist. But I can't do it.

So we have built this infrastructure of dependencies in an attempt to build more complicated things on top of other things, because it's just not possible for us otherwise. It's not possible for us to program a DNA computer. We just can't do it. We need higher-level abstractions. In the same way, when you play a game of chess, you can't see the end of it, right? Some people are better than

others, but. Do you think we'll get to that point? No, no. No, I don't think, my point is. How far will we get? At the moment. How far should we push to get there? So there are two questions. One is: should we even try to go that way, because it's completely uninterpretable. And the second one is: what does it mean for the developers right now that are actually writing this code with the language

model using human language? Because it's the same as the first thing: these tokens, again, are human. Like when somebody writes a method that says, this is a simpler version of, let's say, John Carmack's fast inverse square root function, the famous one. There's this magic number there, and then there is a comment. It just says 'what the fuck' in the comment. That's what he wrote.

You can see it on GitHub. So now when you ask the language model, it's going to output the same function. And when it says 'what the F-word', I don't know what they can say on YouTube, what does it even mean? What does it mean? It's up to the reader of the code to decide what it means. It's nonsense. It's not nonsense. Why? No, it's not. No, it's a necessity for that code to work. No, no, it's a comment. That is.
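(For reference, the snippet being discussed, as it appears in the Quake III Arena source (q_math.c) on GitHub, comments verbatim, whitespace lightly normalized:)

float Q_rsqrt( float number )
{
    long i;
    float x2, y;
    const float threehalfs = 1.5F;

    x2 = number * 0.5F;
    y  = number;
    i  = * ( long * ) &y;                       // evil floating point bit level hacking
    i  = 0x5f3759df - ( i >> 1 );               // what the fuck?
    y  = * ( float * ) &i;
    y  = y * ( threehalfs - ( x2 * y * y ) );   // 1st iteration
//  y  = y * ( threehalfs - ( x2 * y * y ) );   // 2nd iteration, this can be removed

    return y;
}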

Yeah, yeah. But my point is, then what is the point of naming? All of these things are made for humans, by humans. When the thing writes something, it's almost like reading a generated book. What is the point? Why don't you just get the prompt for the book? What's the point of you reading the book? You should just give me the prompt, I'll write my own book. So it's a little bit like that with language-model-generated code.

Why would you read it? Unless you want to evaluate it in your head, which makes you the computer. So why do you read this code? You run it in your head, because the symbols are meaningless; you're just checking: is that actually going to work or not? And when it says, oh, this is a simpler version of this: simpler for who? You see.

So the identity crisis thing, which is, what does it mean for me to code, is one thing, because the language model absolutely interrupts you, especially Copilot. You write something and then all of a sudden a bunch of code spits out, and then you're like, wait, what? I was thinking something, and now I have to read what the computer wrote instead. I'm thinking, finally, finally, tab tab. That's fine, that's fine. But before you press tab you have to... The thing is.

Oh, it's funny you think about something before pressing tab. That's really fun. The thing is, I do. And when there's latency, OK, yeah, and you press tab tab and nothing comes out. I had this moment like a week ago. Yeah. It's so frustrating. And then I was like, wait, I don't know what I'm supposed to write now. I've been coding for 25 years, OK. And also I'm like, wait, what am I doing here? You know?

And that's when I took a week off, and for one week I've been coding just in Emacs and the terminal, and I don't have it anymore. You went back to the old school days. I went back to the old school days, but I still use AI, but I use it very differently. I give it like a 300-page documentation and say: make a driver that uses this and this and this, just an SPI driver for a display that I'm making. And then it just makes it, and I

read the driver once and I embed it. But that's very different from it interrupting my thoughts, because then I start depending on it for my thoughts. But is that a... That's not a GPT issue, that's a tool issue, right? Yes, that's an interface issue. It's the way we use it, yes, yes. I don't think it should be used while you're thinking, because it just stops you from thinking. Yeah.

But my question is more down the line of: would you rather continue down this path where you have Copilots give you code and you do tab tab, or would you rather replace that Carmack function with a prompt? Let's say we make a completely new syntax where we say: here is a function that does inverse square root, really fast. And that's it. And then we have all of these prompts structured in a sane way, in a way that we could reason about them from a

human perspective. You still need to have a human way of reasoning about what you're asking the computer to do, because otherwise you could not make an application, you could not make a business with just one prompt. Make me rich. Then the prompt would be: make me rich. But that's a recipe for the paperclip machine. And then you put this into GPT and out comes bytecode that you just run, you just send it to the x86 architecture. So can we?
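(A sketch of what that "prompt as the new syntax" idea could look like, purely hypothetical: the annotations, the rsqrt_fast name, and the AI "compiler" behind it are illustrative, not an existing toolchain.)

/* Hypothetical "prompt as source" unit. The human-readable intent and a
 * checkable contract are the program; an AI "compiler" is expected to
 * emit and validate the machine code behind it. Nothing here is real
 * tooling -- it only illustrates the idea discussed above. */

// @prompt   "a function that computes the inverse square root of x, really fast"
// @contract "for x in (0, 1e6), result within 1% of 1/sqrt(x)"
extern float rsqrt_fast(float x);   /* body is generated and test-validated,
                                       never meant to be read by humans */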

Can we get there? Because the reason why it's really good at certain languages is that there's so much content in those languages out there. We don't have the same for bytecode, I would say. This is really an assumption. This is really an assumption. So I think we're trying, but the thing is, we have taken a different branch.

So we are in the branch where we are using these models to generate human-oriented content, tokens that we read. They make us happy, they give us information, etcetera, or they make us sad, etcetera. We took this branch because OpenAI thought that this is the most productive, economically productive branch. I mean, humans are the audience. So yes, from that perspective it

makes sense. Yes, but we've done lots of things purely for machines. We have ASML here in the country doing things specifically to make machines better. So it's not a matter of us generally valuing things oriented to humans more. No, it's not because of that. It's just because this particular agent, this particular group of people, brought to market a product that works in a particular way, and thus biased all of the research in that direction, because

that's now the new gold rush. OK. Now, on this gold rush path, society is generally scared of the gold rush, legitimately so. So now we're looking at how to guard against some of the pitfalls of going down that road. And this comes with, for example, interpretability or predictability or alignment. OK, these things you don't need for compiled code, right, they are not necessary. Give me a sec. Not to the same degree, because code is verifiable.

You can have C codegen where you can say: I give you a prompt. I know that the prompt is connected to some C code, so I know that the inputs and outputs of this function in C are correct, and my prompt, I think, describes the behaviour of

this function. I feed this into the training step and let the machine generate its own bytecode, direct bytecode that it can feed to the CPU, and it can validate that the bytecode it generated passes the tests that I gave it. And we can actually codegen this to extreme amounts. So we can create a training machine to train a new model that is not a model geared towards outputting human tokens, but one geared towards a human input prompt and machine output.

And then you align that with structure. So you say: well, now I'm going to give you bigger and bigger programs or applications or work units, right? And they are going to be structured, the prompts are going to be structured in a particular way. And it should start generating, you know, just runnable code. And you don't have to have the same things that you have for humans, because a machine is a much more

strict system. So if this AI compiler fails, the functions are not going to output what you expected, it's going to crash. So it's much easier for you to identify that something's gone wrong. Now of course, if you tell it to create a banking application, the AI should be aligned not to facilitate fraud. So that is a problem. OK, but that's a problem that any AI has in any case. In the compiled-code-generating AI, these things again are machine-

testable. So even if you completely generate bytecode, that bytecode can still be analysed and tested for all sorts of things.

Like we do security testing for compiled code, complete black-box testing: we put it in a disassembler and then we look at the assembly code. And the disassembly will still be available, as long as we don't push it down the stack even further to what I was joking about earlier, which is, well, we could have the machine generate Verilog machine definition code, and then that machine definition code becomes transistors, and those transistors we truly do not

understand. Then we go into simulations and it becomes much, much more complicated. But we already cannot do that, because the costs of developing such systems are astronomical. So really the question then becomes: should I use a Copilot that generates Python code, or should we invest in creating code-compiling AIs where I speak English to it, or Romanian, and it emits direct bytecode?

And maybe, as an option, I write a small test in Python or a C test class and I validate its output, and then I put that in the loop and it can just try again until it passes all the tests. Keep in mind that, because we are so bad at managing complexity, we have not even really understood programming yet, OK, as a human species.
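(A minimal sketch of that generate-and-validate loop, in C for concreteness. The "model" here, generate_candidate and its stub outputs, is hypothetical; the only trusted piece is the human-written test harness, which is exactly the point being made: candidates are accepted on observed behaviour, never on readability.)

#include <math.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

typedef float (*rsqrt_fn)(float);

/* Stand-ins for model output: pretend the third attempt is correct. */
static float bad_rsqrt(float x)  { (void)x; return 1.0f; }
static float good_rsqrt(float x) { return 1.0f / sqrtf(x); }

static rsqrt_fn generate_candidate(const char *prompt, int attempt) {
    (void)prompt;                          /* a real system would use this */
    return attempt < 2 ? bad_rsqrt : good_rsqrt;
}

/* The human-written contract: behaviour is checked, internals are not. */
static bool passes_tests(rsqrt_fn f) {
    const float inputs[] = { 0.25f, 1.0f, 2.0f, 100.0f, 12345.0f };
    for (size_t i = 0; i < sizeof inputs / sizeof *inputs; i++) {
        float want = 1.0f / sqrtf(inputs[i]);
        if (fabsf(f(inputs[i]) - want) > 0.01f * want)
            return false;                  /* off by more than 1%: reject */
    }
    return true;
}

int main(void) {
    const char *prompt = "inverse square root, really fast";
    for (int attempt = 0; attempt < 100; attempt++) {
        rsqrt_fn candidate = generate_candidate(prompt, attempt);
        if (passes_tests(candidate)) {     /* accepted on behaviour alone */
            printf("accepted attempt %d: rsqrt(4) = %f\n", attempt, candidate(4.0f));
            return 0;
        }
    }
    return 1;                              /* the loop may never converge */
}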

So even with the cloud, the most complicated applications that we have today are nothing, nothing compared to biological computers. For example, there are organisms where you cut a piece here and turn it around, and it's going to grow two elbows. The complexity of biological systems is so far beyond this: emergence on top of emergence on top of emergence. It's so far beyond what we can do.

So I think, like all the tech we have today, I think it's just nonsense. Besides the things that are able to pick up potatoes from the farm, everything else is, I don't know, it's fine. Maybe banking. I'm not trying to say... I'm just thinking in terms of complexity, OK, and usefulness to the human species. So do you mean it's extremely simple? No, no, no.

I mean, yes, it's incredibly simple. Even with the abstraction towers that we've built, it's simple. But extremely useful. Yes, yes. What I'm saying is, if we want to go to the next level, we need a way to look into complexity. We can't deal with this complexity. We just can't, regardless of the amount of abstractions that we've built. First of all, you can ask: why do we build computers with addressable memory?

There is no reason for that. We could build sequential memory and all programs would still work. It's just, we are the way we are. People have been using addressable systems since Gilgamesh, like 4,000 years ago in Sumer, where they catalogued kings: this king was from this year to this year. They knew how many years each king ruled and in which years; they could look up a king by a year and a year by a king. So this is just how we are.

We put things in boxes with labels and then we can find them. And that's how we compute in our heads. That's why we program the way we program, and that's why it's actually so hard for people to do functional programming: because it's just different. It's not that it's worse, it's just different for the average person. Of course there are some people that get it and they go that way. So everything that we have is in order to manage this complexity, and we're still not there, right.

So language models, I should stop saying language models, I think these massive neural networks can allow us to look into complexity with a human interface, a real human interface. Because in order for us to build systems like, I mean, fusion systems, or something that is very, very complicated, very emergent, very chaotic, we just can't deal with it. We can't model it properly. We can't react to it fast enough. We can't predict the things that are going to happen.

Even something super simple, like a triple pendulum: you look at it and you're like, what? You don't know how to manage that. So to make a program that manages it is even harder. And to make a program that manages like 10 billion of those is harder still. So we need help, OK. In order for us to get to the next level of technology, we need help. We need a looking glass into complexity that we can interface

with. So these massive, massive computers, that's what they are, these differentiable computers, they can help us do that. And we are completely misusing them now by making them do 'write me a LinkedIn post' stuff. It's insane. It's insane how powerful this technology is and how we are using it just wrong. We're using it to tell us what to do, you know, like: make me more productive. What is the point of being productive?

What does it mean? The reason why people want efficiency and productivity is because they want to live two lives. OK? They want to live more than one life in one lifetime. What is the point of being productive? You can just chill on the beach. Why do you want to be productive? I mean, productivity gives you a lot of other things, right? I agree with you. Two lives in one life. As much as possible, basically. So the idea is, if you're infinitely productive, you live infinitely many lives.

Yeah, but imagine you live forever. That'll be a problem. It just makes the whole thing very different. Yeah. So that's why I want to be more productive, because my life is finite. I know. Yeah. But imagine we make you immortal. That would be a problem for the planet. We already have too many people. That's fine, but then we go to other planets. It's OK, we'll just spread the problem. Or we decide to stop here, make ourselves immortal and stop having children.

Right. So it wouldn't be... I mean, it's still a problem for the planet. Yeah, but there are solutions to that. The bigger problem is that your own life would cease to have any meaning. Exactly. Yeah, you do everything you want, and then you would still have your life. Yeah, but why? Yeah. No, I don't think there's anything good coming out of going that way. Nothing, nothing good. Let's go back.

My point is, in order for us to manage, to truly understand complexity, we're limited. We're too limited. We need a looking glass into it. But is this iteration what you're describing? Because this iteration is very much geared towards humans, right? Yes, but that's what we can do. Yeah, but we can use it to build the next iteration. Yeah.

Can we? Yes. Because this predictive model that you said, this neural network, is very much based on the content that is out there, right? No. The way to predict the next state, the next token: you need to understand everything. Nothing is outside of context. There is no such thing. When you interact with somebody, in order to guess what they're going to do, you have to understand the whole world.

You have to have a world model, you have to understand physics, you have to understand what they're going to do, in order to have a theory of mind. If the model has a theory of mind of humans, it can emulate humans, basically. It learns what it is to be a human. You know how you test theory of mind, the Sally-Anne test? It's very simple. If you put your keys here on the table and there's a box here, you go out, and then I take

your keys and put them in the box, OK. I know where you're going to look, because I can pretend to be you. Therefore I know what you're going to do. I mean, you might not do that, because you might really hate your keys, so you don't want to look for them; there's a certain probability of that. But then I'll guess why that would be. So that's theory of mind: I can pretend, I can think what you would do.

OK, so the idea is that the only way it can do language is if it understands you, and therefore understands the whole world. So if the language model can do language well enough, then it has basically learned a world model. It learns what the world is and how it works. So then it can go the same way. It cannot modify itself directly, etcetera, but it can write new code to modify itself, etcetera. You're also asking just about the first-order effects. So can ChatGPT or Gemini or

Anthropic, can they directly help build the next stage? But there are massive second-order effects. Imagine it evolves to the point where it replaces a large number of jobs. Then what you have is a large number of talented, capable people freed up from those jobs to work on the next iteration of technology. Unlikely. Unlikely, because there are reasons why humans tend not to starve, at least not in the long run. Maybe one generation will starve, but the next generation

will not. They will find a way, and that could be violence, but they will find a way not to starve. Survival. Yeah, exactly. They have a strong survival drive, right. So the second-order, third-order effects here are that you will have a large number of talented, ambitious people trying to find the next thing that makes them productive, and through which they can compete in the world for resources, which is what we've been doing for a couple hundred thousand years.

So that will lead to advancements in all fields, or in the fields that are attractive, let's say, for the next iteration. And this field is attractive for now. So you could theorize that a large number of talented, ambitious people will start pouring into it because they were displaced: well, the machine is already writing my code, I'd better start doing something else. I let the machine write

my code, but I'm going to start thinking about how to make a better machine, a faster machine, a bigger machine, etcetera, or a machine that can solve some other problems, right down to fundamental physics, etcetera. We've had examples of this, right? You have the machine that helps with fusion, with tokamak predictions; you have the machine that does protein folding, right. So you have advancements.

Now very recently there was an improvement to an algorithm for matrix multiplication, for 4x4 matrices, which yields about a 5% improvement on 4x4 matrix multiplication. But if you do larger matrices and this algorithm scales up to them, then you get an even better speed-up. Because this alone, just this alone, is huge. It has paid for itself in Google money. Google's AlphaEvolve did it, and it saves pretty much 1% of Google's

compute, which, if you just put that in money against how much they've spent on training so far, I think you can make an argument that it paid for itself. And this gain you will have forever, for as long as we keep the knowledge about that algorithm alive, right? Which is, well, it has happened that we forgot things in the past.
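(To make the "fewer multiplications" point concrete: the classic precursor to results like AlphaEvolve's is Strassen's 1969 trick, sketched below. A naive 2x2 matrix product costs 8 multiplications; Strassen does it in 7, and applied recursively on block matrices that one saved multiplication compounds into an asymptotic win, roughly n^2.81 instead of n^3.)

#include <stdio.h>

/* Strassen's 2x2 multiplication: 7 multiplications instead of 8.
 * Recursing on block matrices turns the saved multiplication into
 * an asymptotically faster algorithm (~n^2.81 vs n^3). */
static void strassen_2x2(const double a[2][2], const double b[2][2], double c[2][2])
{
    double m1 = (a[0][0] + a[1][1]) * (b[0][0] + b[1][1]);
    double m2 = (a[1][0] + a[1][1]) * b[0][0];
    double m3 = a[0][0] * (b[0][1] - b[1][1]);
    double m4 = a[1][1] * (b[1][0] - b[0][0]);
    double m5 = (a[0][0] + a[0][1]) * b[1][1];
    double m6 = (a[1][0] - a[0][0]) * (b[0][0] + b[0][1]);
    double m7 = (a[0][1] - a[1][1]) * (b[1][0] + b[1][1]);
    c[0][0] = m1 + m4 - m5 + m7;
    c[0][1] = m3 + m5;
    c[1][0] = m2 + m4;
    c[1][1] = m1 - m2 + m3 + m6;
}

int main(void)
{
    const double a[2][2] = { {1, 2}, {3, 4} };
    const double b[2][2] = { {5, 6}, {7, 8} };
    double c[2][2];
    strassen_2x2(a, b, c);                /* expect 19 22 / 43 50 */
    printf("%g %g\n%g %g\n", c[0][0], c[0][1], c[1][0], c[1][1]);
    return 0;
}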

So if you're thinking about these effects, then you can clearly see a path where these machines will accelerate, they will grow, they will become better, to the extent that it is physically possible. Unless there is some kind of law of nature that prevents you from advancing past a certain point, like we have with faster-than-light travel, for example. Well, we believe that it is impossible to travel faster than light. OK, that's a barrier.

We cannot cross that barrier. And that does have a lot of consequences here on Earth, right? So, for example, communication from one continent to another has to obey these rules. But otherwise, there is no rule that says that we cannot make stupidly complex machines to build other machines for us, to drive robots to do farming for us. There's no physical law that prevents us from completely automating something like farming. Yeah, it's really down to

resources. It's really down to, if you want to go there, it's down to money. But it's not just that, right? Money alone cannot buy you a very, very large body of engineers, software developers, product people who are motivated to create the structures, the companies, the regulations, etcetera, that are needed for us to fully automate agriculture. So. Let me give you a counter-argument to that. Because the decay of thinking,

the atrophy of thinking, is happening. Like, for example, policy makers use GPT to generate policies, OK? And then the people that have to apply the policies, or the people that have to adhere to the policy, let's say a new tax policy, whatever, they also use GPT to parse the policy, OK? So then what we have is, first, an explosion in bureaucratic complexity, right? Because new, more complicated laws are appearing, and you need more models just to deal with

that. It has nothing to do with the technology getting better, etcetera. And at the same time, humans in this process are actually getting worse. In the same way as when you do this vibe coding. First of all, you know, Andrej Karpathy coined this vibe coding like two months ago or something. And for me, he's one of the kings of the field; if not the emperor, then one of the emperors.

And when he does it, he's at the very top of his field, OK? He has seen, and he can see, further than most people can. Yeah. So when he does that, it's very different from when I do it. OK? I have so much to grow, to learn, and doing it denies my learning. Does it? Yes. OK, because what it actually does is spit out a bunch of code, and at some point you get tired of reading it. You're just like: OK, accept.

I tried it, it works or it doesn't work. OK, well, you know, so it stops you from thinking. It basically completely atrophies your thinking. So you can completely have the other effect, which is humans just get worse, they just stop progressing. That's not the same as with farming, for example. You can say, OK, machinery changed farming, that's true, but the farmers are hundreds of times better at understanding farming.

Yeah, but there are hundreds of times fewer of them. I know, I know, but my point is they are better. But we can reduce them to zero. No, no, no. My point is, let's say blacksmiths, or, we can reduce anything to zero, right? But the point is, the modern blacksmiths understand material science, and the tooling that they had along the way made them better at understanding the field. Astronomically better. Astronomically better. Wow, that's incredible. But what this does to us, it's

stopping us from progressing. No, the current blacksmiths hold PhDs. Yes, I know, I know. But what I'm saying is that GPT, for engineering, for software development, stops us from progressing. But that's it, because if I am coming out of university, for example, I now have a tool that I can use for learning, significantly. But you don't. It would be in a different workflow than what you're describing, because I fully agree with you.

If I'm trying to achieve something and I have a tool that helps me, and I'm trying to interpret, and I do that continuously and it's just continuously wrong, I will just trial-and-error until I get what I'm looking for, indeed, and therefore I have not learned what I'm trying to

execute. It's an incentive issue, because once you have the tool, you have no incentive to go and learn the fundamentals of the things that that tool does for you. And your argument is that this leads to the atrophy of intelligence. I never thought about describing politicians that way; that should be a meme. That should be a meme. But like you say, you can use it for learning. Yeah, but you won't. Why not? Because it's so easy to not use it for learning. It's so ridiculously easy.

And then you also... it's like getting... The devil. It's like adding documentaries to my watch list on streaming platforms. Yeah, I mean, we know. We've seen the statistics. We know. Fair enough. You don't watch them. You just don't watch them. And yeah, but this feature is there. We have it. That's it. Well, that's why I think it's very important, if you want.

I mean, that's why people listen to podcasts. And I say, if you want to just get better, you have to use it differently. I'm not saying I use it amazingly, etcetera, but if you're not at that level, just don't let it code, that's what I'm saying. And I'm saying use it in a way that does not interrupt your thinking. Use it separately. Don't use it in your IDE; use it as a separate thing. You say: OK, just help me

to think about this. And then of course you're doing the thinking from the tokens. Like, again, I'm not going to read a 100-page documentation for the ST7796S so that I can make a driver, because I'd have to look at the SPI timings, and I don't want to. I just want to say: please make me a driver, thank you. I don't want to upgrade my React Native. I just don't want to. I don't even want to read the new things in it directly. I don't want to.

Please do it, OK? That's fine. But that is very different from thinking about how I'm going to model the state machine. What belongs where? What is actually activating the change? What messages are going to be passed in my system, and how am I going to design this? It's very different. But for me, it completely depends on what your goal is in the end, right? Because if I want to be a better software engineer, then I can agree with you.

Use it, and consciously use it, in a way that makes you better at what you're doing, right? More effective at your craft. If I'm starting a business and I want to achieve an outcome, I'd like as much money as I can get, then this is a means to an end, and I want to get there as fast as possible. I can put my brain on zero, I can trial-and-error my way through, and then see if I can reach customers. Yes, but my point is the let's-build-a-better-future part.

Yeah, and that's not gonna... that's super short-term. The better future is: it codes for me and I go surfing. Yeah, that's a better future. Yeah. A better future for all of us, and to actually build the looking glass into complexity. We are so far from that, and we spend so much time engineering on it, and with its help, absolutely, but we also have to get better at engineering. We're just not at the point where we can use it for that yet. That's what I'm saying.

I'm not saying it's not going to get there. I'm not saying we're not going to get there. But it's like a soma thing: I just want this to do everything, I don't want to do anything, you know. I mean, that's what people say the worst possible future is: people just wearing glasses and AI telling them what to do. Because the next step is, well, why

are we not going to let it tell you what business you should make, right? Oh, you do this business. OK, now type this prompt. Why shouldn't it tell you the prompts that you should write? It's really funny giving these examples, because the software development example is kind of like the blacksmith example today. You could make something the old-fashioned way, with a hammer and anvil and so on.

But you can also order it online from Xometry: just give them a CAD model and it will come out perfectly. And I think you're absolutely right that these tools can be a looking glass through complexity. But I think we're looking at the wrong kind of complexity, like the complexity of software. It's, as you said, trivial. If we follow down this path, I think the smartest thing to ask these machines to do is diplomacy. Yeah. Right.

So if you want a better future for all of us. Well, but the next thing is: let them rule us, because the next thing is diplomacy. So let them be our kings. Well, I mean, they will be our government, in fact. No, because we're still writing the prompt, so we are still the legislator. Oh no, because the people are ruling here. No, no. You say we're writing the prompt, but the next prompt is: tell me the prompt that I should write. Yes, sure.

But that's fine. We do that all the time, yes. But that's what I mean: it is writing the prompt, it's not you. Yes, but that's also fine as long as it's aligned, right? Because we're also doing this with politicians today. They're writing the prompt. No, politicians, they get replaced. They're gonna be hanged. But it's the same with the machine, right? It's gonna be realigned. It's gonna be retrained next year. No.

Why not? Because it can rule that it's not going to be. We have constitutions for this. The politicians also cannot rewrite those. Who is interpreting the Constitution? I mean, yeah, it's turtles all the way down. We know that, but at some point, if things go south, we do stop. And that's what you call a revolution. And that's where you unplug it and plug it back in again. Yeah, it just seems we...

You turn it off and on. Like, because I've been using it so long, I literally had to take this week off work just to code without it. I usually code two days a week without it; I always code without it then. But I felt like it's just not enough. I just felt like I need to go back, to go back to thinking, to think about what I find complicated and just try to make something in a different way. But you really enjoy it as well, right?

The act of thinking, and then producing whatever you're trying to achieve. Yes, because the purpose of me producing a thing is for me to improve. Yeah. And so that's what I look at. Regardless of whether it's some enterprise code, I can look at the screen for hours and then just ask: imagine that's the last piece of code that you're going to write. Is that the last?

Is that how you want it to be? And of course, at some point you're like, well, I still have to do other things, so that's OK. Yeah. But you still have to at least ask somewhere: what is happening with me while I'm doing this? You have to have some self-awareness about it. And I just felt this acceptance. I actually wrote a blog post recently, but it was so many: accept, accept... oh, it's not working. Oh, what are you doing?

No, that's wrong. Just do it the other way. It does something. Accept, accept, reject, accept. And then at the end of the day, I look back at the day and I'm like: nothing happened to me. I just lost my day. What we're seeing now, I think you're very unique in that you truly enjoy the creation of it. And you've done this for many years. Especially in the last, I would say, five years, and this is just an arbitrary number.

People are coming into this field not necessarily for the love of it, right? They're coming, some of them, from a lower income base. They see that all this knowledge is out there, shared in the open. The barrier to entry was very achievable, right? Teach yourself to be productive, teach yourself to execute, a means

to an end. And now we're seeing the great culling, because the people that came into it just for that will stop learning, because the means to an end just got a lot easier. And companies are seeing this, and they don't need that. They need the deeper knowledge, the layer underneath, the people who truly understand. And to a certain degree, if you truly understand, you have to really enjoy what you're doing; you can't just use it willy-

nilly to achieve a means. Because everyone can do that nowadays. So this is the culling, basically. And I'm wondering, with what you're saying, Rares, where all of these people now come out onto the market. It's not just tech, but it's also everything around tech: tech ops, for example, sales ops, anything like that. It's very hard to get a job, because things are being automated and people are, of course, more productive. So then what are those people

going to do? Yeah, I mean, they're going to keep on being humans. They're going to adapt. Many of them are going to adapt. Some of those who adapt are going to be successful, and some of those who are successful are going to be extremely successful. And in fact, if you look at the history of economic downturns, of market crashes and so on: after the dot-com bubble you had the resurgence of Google, and then later on social media and so on.

These are start-ups that came out of people who were laid off in the dot-com bubble. Yeah. So humans are going to do the human thing. They're going to adapt and fight for it and so on. Maybe a bigger problem of this is the cheapening of the work product. If you're thinking about crafts, you could think about people who used to paint walls versus putting up wallpaper. You could look at wallpaper as being an extremely efficient way to paint your walls.

Or you could look at it as a cheapening of the craft. It depends on your position, the way that you position yourself towards the craft, and the way that you value the craft. For example, Borislav and I value software engineering quite highly, because we've been doing this for so long. And yes, I'd like to be productive, and I'm sure Borislav likes the driver to write

itself. But there's also a certain beauty to going into the code and really getting it right, really understanding: OK, this is assembly, and this is what it does with the chip. And there are people out there who bend it, who go beyond the spec of the chip, like: could I maybe up the frequency just a little bit? What happens if, right? And sure, when you're one of those, then you look at this and you're like: oh, this is a nightmare.

They're completely ruining everything. But I've been thinking, there's no such thing as intrinsically amazing. Blacksmithing is only extremely valuable in the eyes of people who appreciate blacksmithing. And if anything, today it's much easier to be a blacksmith than at any time in human history, because today you can be a real estate agent, and then on the weekend you can blacksmith all you want, and you know for sure you have a roof over your head.

Right. So this is kind of the thing. But yes, people are going to transition from certain fields to other fields, but there is always going to be human activity, as long as nothing tragic happens. Yeah. First of all, again, of course I enjoy coding as such, but I enjoy more just growing. You equate those. Those are very close. Exactly, that's my point. No, because I just go through.

I go through the whole experience, right? It's not just the code, but the way I do things. I also do leatherworking and I also do metalworking, and everything I do with it is the same. There's no difference between coding and leatherworking. There is no difference, or when you cook; it doesn't matter. What matters is what kind of attention, how much attention, you pay to the thing that you do, and how you reflect on it. A lot of times I hate the

thing that I have to do. I just have to do something, I don't like it, I have to do it, etcetera. That happens, actually, most of the time. And I'm not saying that. I'm saying we just need to get better. We have been running in circles with technology, trying to come up with the next abstractions, and we still cannot manage; we barely manage PID-controller kinds of complexity. And we just need to get better in order for humanity to

advance, to solve its problems. We just need better engineering, and we are on the path to worse engineering. That's what I'm saying. And that is dangerous. It's not about the love of the craft, or the craft changing; it's not about the career at all. It's about what people are going to build tomorrow. What kind of people, and how much are they going to know? With the help of AI, obviously, right.

And I don't even know what people's jobs are going to be, but we need to make sure that we are on the right path, and what education is going to be. Of course, most people... I think easily 50% of developers actually cannot program. That's absolutely the case. I've seen maybe 700 people, and easily more than half of them just cannot program whatsoever. And that's after like ten filters of CVs, etcetera. And that's totally fine. They try to get better. They studied.

I mean, in many schools you start studying programming at 12, and then you memorize a bunch of stuff and you kind of get by, because we still don't actually know how to teach it. So you go through some tests, and in the end you graduate. And then you do your best, and you get to a company, and a little bit of Stack Overflow, a little bit of this, a little bit of that. Now it's even easier to kind of fake it. But in the end, you still don't know how to program.

That's it. You just don't know. You don't know what it is and you don't know what you're doing. And that's OK. You can make money, and the reason you make money is because you provide value for somebody else that needs the thing that you're producing. That's OK. But my problem is with the other people, the ones that are actually going to build the thing that's going to help us solve carbon. And we need to get better, that's what I'm saying.

And we are not going to get better unless we try to get better. I think I agree with you that we are on a path where we're going to do worse engineering. I think I agree with that. For me, what is interesting then is, for the people that are on the path of: OK, we really need to solve human problems, and the means to an end here is engineering; can they figure out a way where the tooling now enables them to find that layer of depth, right?

Because the knowledge gap should be smaller, I feel, with the tooling that is out there. So now it's a matter of: how do I use this toolset that is there? I mean, we're definitely going to go quantity over quality, because I can do a lot with very little with the toolset that's available nowadays, much more so than three years ago. But indeed, we do need quality,

always. Well, you can see this yourself: when you go to a grocery store and you have the healthy thing on one side and the fat and sugar on the other, most people, regardless... you have to have a lot of discipline to actually pick the right thing. So this is that. It might be a cultural thing that's changing. I do see a lot of people putting more attention into it, yeah, yes. So they have to put the same attention into what they're doing. That's what I'm saying.

And just be critical of what you're actually doing and actually learning, because in order for you to build the next thing, you need to learn. That's it. And it can help you; use it, just use it properly. We might not be there yet, but indeed, if I look at healthy options, at what I put in my body, I'm a lot more health-conscious than I used to be years ago. And I see my friends and family trending towards that same thing.

It's not the same with the mind. Yes, people are not there with the mind. Exactly. With the mind, you go on YouTube, you go on TikTok, you zone out for hours on end, and you do that again the next day. Yeah. And it's like junk food for the brain. I'm not there yet. That's what I mean, yeah. And it's so easy. You can ask other people: how often, when Copilot slows down, right, how often are they like: I just don't know what to do.

Yeah. And then just observe that, and think about what is actually happening. This is also the absence of the zone. If you use Copilot for 20 minutes straight, yeah, you're not in the zone. No, you are just tabbing, mindlessly tabbing. And then, yeah, of course: oh shit, now I have to write code. Well, what was I doing? Yeah. Oh, maybe let me go get an ice cream. Yeah, let me do something

else. Maybe Copilot is going to be back online in 20 minutes and I won't have to do this. People will try to multitask, do as many things as possible while they're idle, basically. Yeah. And you can never get into flow, yeah. Yeah, we will need to change our patterns for sure. But the thing is, these tools are going to advance, of course. So it's kind of a race, because the tool is going to end up being better and better. So you won't have to do the

tabbing anymore. And I think the tool is going to become a new abstraction layer, where you program the tool to program the software that you want to run. And then, what you think about when you think 'I'm doing software development', you're going to think about different things. People have not been thinking about CPU registers since the 90s, roughly. Most people. Most, the vast majority. Yes, but that's the thing.

The vast majority of programmers before the 90s had to, and then the vast majority after the 90s didn't have to. There's always, of course, a small delta, right. And since the 90s, we've been thinking about all sorts of things, right? Software design, software patterns, etcetera. And now we're getting to a point where we need to think about other things, higher-level things. They're no longer abstractions that you can directly link to

one another, as we did before. In a sense, this is emergent behaviour. We managed to program computers to a point where... because in the end, the language models, the neural nets, they're all running on Turing machines. They are programmed in the classical way, right? And we've discussed this before, right? You're going from classical programming to programming neural nets.

And neural nets, in a sense, are an emergent property of Turing machines, but they're not per se a Turing machine. They're a probabilistic kind of machine. I mean, if you think about it, it's a search space, a search over all the possible paths, and the way it finds things is very different from a Turing machine for sure, but it can still be deterministic. Yeah, it's deterministic, of course. It is definitely a very different... It really is, yeah.

The thing is, you know, Turing made the Turing machine, and everybody understood the Turing machine, while Church numerals were not understood. It's still like that: if somebody explains Church numerals to you now, you'll be like... And if selection is explained to you, the way 'if' works in lambda calculus, you'll be like... But the way 'if' works in a Turing machine, every single person can have that explained to them.
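(For the curious, the lambda-calculus 'if' being alluded to, in standard Church-encoding notation; the point is that a boolean is itself the selector, which is exactly the part people find alien:

true  = λx.λy.x
false = λx.λy.y
if    = λb.λx.λy. b x y

so 'if true M N' reduces to M and 'if false M N' reduces to N by plain function application, with no built-in branching.)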

So I think you can say the same about this new machine. Yeah, right. So it's just a different paradigm. Yes, it's just a different paradigm. So then we need to start learning its tools. Just like, you know, pre-90s you would directly shift images into the video buffer, right? While post-90s you would start calling DirectX or the OpenGL API, and then cry a bit and

try, yeah. But that's also because of the pipeline, because we keep adding to it. Yeah, but it's a different paradigm. What engineers learnt post that moment changed drastically from what they had to learn pre that moment. So I think the vast, vast majority of engineers are going to make the switchover post vibe coding, and they will still be engineers.

Because fundamentally, an engineer is someone who makes something work in a productive way, right? There is no fundamental definition of it. Of course, on Reddit people argue this a lot: a civil engineer is not the same as a software engineer. Alan Kay says software engineering is an oxymoron. Yeah, but in society's eyes, they are

engineers. Yeah. They take some kind of human desire and they turn it into an artificial product of the human mind, through their own skill, experience and knowledge. And whether or not that involves the understanding of registers doesn't matter. No, yeah, it doesn't matter. Absolutely the case. Just like civil engineers are not metallurgists. So an example of that would be using the language model without

understanding tokens. OK, that's just an example, so I'm not saying that you shouldn't think about this. I'm saying it's helpful for you to understand registers and, for example, at least maybe word lines. Is it a necessity? You can keep moving further away, but that's just because of our inability. But if you understand it, does it make you better or worse? Let me ask you like this, OK: if you understand it, does it make you better or worse?

Like, for example, there's that Veritasium video from recently about that building in Chicago, maybe, that was about to collapse because they made a mistake of using bolts instead of welds. And the person that found the mistake, they just thought something was wrong. But the architect, because he understands metal, right? The fact that he understands just makes him better at his job. It doesn't mean that he has to

think about it all the time. He thinks at a higher level, like: OK, these kinds of patterns work. Yeah, but that is his job; that person's job was to understand metal. You understand that? No, yes, yes, I know, but. But there's a higher-level person, there's an architect. And then above the architect there is the customer that

bought the building. And so, so different at these different levels of the, let's say the value chain of building skyscrapers, different people's jobs are very different and they don't necessarily need to understand much about the level. Yeah, that's true. No, no, I understand, I understand. But my point is if the person above understands, they don't need to but. If they do, if they always be. Better, yeah, but that's what I

mean. At their job? Not necessarily, no. At their job, no; not necessarily better at their job, better in terms of human growth, maybe. And I mean, of course, knowledge and experience and good judgement are always better, right? Do you want to know everything about everything? Well, I bet you do. But it's not really a necessity for your job, because the job has limits. You're working inside of a box. No, no, everything is connected.

No, if I'm building a skyscraper, I have a budget. Yes, but do you then have to understand weather and climate? Maybe, yes. Well, I think so. For me it's always: to a certain degree you need to understand things, and then your use case defines the depth to which you need to understand them. If I'm a project manager just orchestrating the whole thing, I need to align people.

That's my role and responsibility: aligning people, basically the experts on how to build the skyscraper. And I would love to know enough to not be bullshittable, basically. But that's about the level. I think, sorry, I think Borislav's point is that you could always know more about those jobs, and if you did, then you could do a better job overall. But then I would be able to do their job, and I can't possibly.

Possibly, yes. But even so, his hypothesis is: if you knew more about their job, you could do a better job overall. And I don't think there's an equals sign between those things. Because say we're a team, we have 6 experts, right? Let's say I'm the program manager of the team, and I don't know anything about their jobs. OK, that's scenario one. Scenario two is I also know their

jobs well. The total amount of skill in the room is the same. You could say there is more capacity, but ultimately the total amount of skill in the room is the same, so the outcome should be the same. It depends, because if you're going to collaborate... This is also where, because people grow from a certain role into the role of an overseer, they all of a sudden start to micromanage, because they were an expert in that role and they knew that role. So it's going

to be worse. Exactly. It could also be worse in the end, because you're now an extra expert in a room full of seniors. Nothing might get done; you're just an extra person there. First of all, people are incredibly multidimensional. Yeah. And there's some distribution to the skills. So when you say an expert: in any group of people, somebody's below average in some dimension. Yeah, that's by definition what an average means.

So how are you going to put the right person on the right task? You cannot. Well, but if you understand their job and you see their code, and you see like, wait... Because you're still constrained by time. I understand their job, but I don't have to be as effective as them. No, no, no, I'm not saying you should be as effective. I'm not saying that. But you should understand deeply enough, which means you have to be able to...

I mean, what if you split the thing into like 10 tasks and then you have to assign them? You understand enough programming, and you've seen the code of each of those people, so you know very well which person should do what, because not everybody is good at exactly the same things, right?

So unless you start micromanaging, which is obviously a mistake, just the ability to put the right person in the right place requires you to understand what they're doing. So that's the counterargument: the output will actually be better, because people will be doing the optimal thing. I don't know how many times... I've definitely put the wrong

people in the wrong place. And then when you just put them in the other place, it's easily 10 times better. Yeah, the same person, one month later. And that's because I know how to read their code. But if I didn't, then I'd just think, oh, this person is underperforming.

What would you say to people that don't have that capability? Because this skill, you're, I think, putting in the effort to not let this grasp of programming atrophy. But a lot of managers have let it atrophy, right? They would have said, OK, I've been managing for 10 years and I do some hobby projects, but that's it. And now the abstraction layers have grown to such a level that they're just outdated, and their strength is still a lot of people management, right, in

hiring. Do I want to work with these people? Do they mesh well? Do they add a little bit of diversity, or do we need the same kind of people to make decisions faster and execute? I mean, it's a very hot topic. A lot of people say you shouldn't need to know the craft: if you're a manager, you're a manager. I think that's absolute nonsense. Like, absolute nonsense. People are going to disagree with me heavily on that.

But I think managers should absolutely code, and should sit at least a day with their coders and just code on the thing. How else would you know what is broken? Because you get broken-telephone stuff: somebody tells you we're not delivering this, we're not delivering that. Just go to the person and see what is happening. For example, at Booking we had this thing where, when you make a change... so Booking is a Perl shop, and it's really hard to

debug. So you have to print-debug: you add a print and then reload the web page, so you print the bug out. But the page takes 2 minutes to start. OK, so every time you start it, it just takes 2 minutes. Horrible. Horrible. So you print, and then, and then... I mean, it's a normal legacy codebase thing. I don't think it's amazing or horrible or anything. I'm just saying you need to wait 2 minutes every time. Like with me, you have to go and get a coffee, because it's like,

what else do you do? So if you are a manager of those people and you sit with them for 10 seconds, just code one thing, one simple thing, you'll be like: this is unbearable, we have to fix this immediately. Yeah. Whereas when you're at the top, you just look at the numbers and you're like, well, the productivity: we released that many features. That doesn't tell you anything about the experience, the developer experience, for example. I agree with that.

Yeah. Or maybe somebody made a library where the API is just broken. This is broken. And how do you know something is broken unless you feel it, you know, or you're omnipotent or omniscient? So you need to experience the thing. You can't be so far removed that you're like, oh, we just draw all these boxes and then they work. That's not the case. I'm right there with you on this

front. But the people that say managers only manage, they say, well, you just need to know people. But knowing people is not enough. No, I think you're conflating managing people with managing the environment, managing the company, managing the software stack and so on. I think you've described three jobs there. So there is a person who needs to manage the people. We're talking approval of holidays, making sure there are no interpersonal conflicts,

stuff like that. And those people don't really need to know how to code. Imagine somebody pushes shit code? OK, that's fine; there is another person who manages the quality of the code. Wait, wait. But the quality of the code is a function of... Let's say at Booking, I always got exceptional reviews. Only once I got, what was it, exceptional potential but medium output. And that was the month that my

grandfather died. So the person that approves these things, you see, these things are related. Yeah, but thank God that people can talk to one another, right? I mean, there are functional and dysfunctional organisations, and within them there are functional and dysfunctional cases and so on. But the point is that you're trying to put a lot of responsibility on one person.

And in a large organization, especially one like Booking with a lot of internal and external complexity, one person is not enough for a team. So you have to have multiple people. So I think that, in the interest of being able to split those responsibilities, the manager should have reduced but deeper personal responsibilities over the people, right? And a new function should appear for the responsibilities over the technology.

We already have one for business: the product manager. I mean, we in the industry, right, the collective we, we have a product manager that handles the responsibilities related to the business. And then the manager has, you know, budgeting, people, and so on, and you're trying to lump the technical stuff onto them as well, right?

And I just disagree with that, also because there's a bit of a conflict of interest: because they're wrapped up with the business and with a budget, trade-offs are now biased. Yeah, technical-debt-related trade-offs, for example, right? Typical example: every single engineer who's ever worked in an organization of more than three people knows that this is going to be a

source of conflict. Like, when are we finally going to get to that thing that we all know we hate? Well, when you're the person who's responsible for both the quality of the code and the budget of the team, then you're in a conflict of interest. You cannot make a well-reasoned argument inside your head for either one. Well, look, I'm not saying that we should have like a mini demigod in each team. I'm not saying that.

I'm saying that the further you get away from the atoms of the company, which is what is being built, right... You have to experience it, because the organization has its own motives. The organism itself, an organization, is a very complicated entity. It has some... I mean. Yeah, but where do you stop it? So what I'm saying is you can just skip everything and go to the production line.

Basically. You don't have to go every day, but you go and you experience what is actually happening, and then you can at least confirm that what you think is happening is actually happening. OK, but where do you stop? At engineering manager level? Senior engineering manager, director, CTO, CEO: where do you stop? Because I can make this argument all the way to the board, all

the way to the shareholders. The shareholders, yeah, I think that's... No, the shareholders should understand what the engineers are doing, because only then are they well informed. I see what you mean. Yeah, that's true. That's true. But I think at least, at least if you are managing a bunch of people, you should participate in what they're making. Maybe, I don't know if the CEO should go that far. As a CTO, I code; I mean, I just code. But it's a super small company,

right? But if it was a bigger company, I think I would still code. At least, you don't have to code every day if you can't, right? You obviously have to do a lot of other things. But you can make time to shadow somebody, or whatever you want. What I'm asking about is what's necessary. But I do think it's necessary. You think it's necessary? I think so, especially now, when the world is so complicated and everybody has different

incentives. The organization has its incentives, like people wanting to get promoted. So there's sabotage of projects in companies like Facebook, etcetera, where the pressure to have impact is so intense, it's so important, especially with the layoffs now. There are all kinds of incredibly broken incentives in the organization. So when somebody tells you something is wrong... There is this video on YouTube. I mean, it's a little bit dark.

There is this guy that was in Vietnam, and he came back and he's like: we go out and we lose 5 people. So we come back and we report, well, we killed 50, because we can't lose 5 without doing anything, without killing fifty. But we didn't even see where they shot at us from. So it's like... You see what I mean? Yeah, yeah, you don't want to get... Basically, yeah.

So when everything's so perverted in the organization, everybody has different incentives. The product owner has a completely different one: they have to ship the thing. The engineer is like, oh, I have to work on this impactful thing, is it impactful enough? Or, I have to move teams because I need to get to this other thing and get promoted. And to get promoted. So how would the manager of the manager know what's going on?

Somebody says, okay, this team, this thing is not working. Why is it not working? Well, just go in for a little bit and see what's going on. Be with them, code with them, see what they're making and what the actual problem is, and then think about it. I'm not saying you should do it every day. I'm just saying, if you can do it, you'll understand better what the organization is doing. Well, I mean, even if I didn't have any technical knowledge, I could still do

that, right? Go to the people and just see what their day-to-day is. See what the dysfunctional behavior is. Yes, the problem is just when it's technical, like, let's say some other team made a broken API. You have to have enough technical interest to be able to take that information in. I don't have to be super

technical. Imagine management says: look, you're building something and we want to release a product, but now we have to integrate with, I don't know, Salesforce. And then people start integrating with Salesforce, whatever. But then you go in and you see that the Salesforce API is just horrible. And then you're like, why are we doing this? This is actually going to hurt us.

Very, very many people are just going to say, of course we integrate, because it makes business sense. But how do you know it's actually going to hurt you in the long term unless you see what's going on? That's fair, yeah. Will I, in the future, just ingest all of Slack into a model and then ask it what the incentive structures of this company are? I think, yeah, I think you're also going to see what people are actually doing. And that's also sensitive.

Sergey Brin said... It's company property. Yeah, fair. Slack personal conversations? Also Slack chats, yeah, they're on company time. Anything that happened 9 to 5. No; in Europe, probably. Company chats, company emails, all of Confluence, etcetera. So, Sergey Brin recently had an interview. He did this at Google at the beginning of language models. He said, well, I just took all the chats and then

said, like, OK, split the tasks. He just fed it all to the language model. And then he asked it, like, from this team, who actually should be promoted? And the language model picked out some girl that was a little bit quiet, but she did tremendous amounts of work which wouldn't have been picked up otherwise. Yeah. So yes. It's nice to hear that misogyny does not translate well into LLMs. Yeah, right. That is really encouraging. Perfect.
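As a hypothetical sketch of that "feed the chats to a model and ask" idea: the code below assumes the official openai Python package and a chat export already flattened into text. The model choice, the prompt, and the ask_about_chats helper are all made up for illustration, and a real Slack-sized export would need chunking or retrieval rather than one giant prompt:

```python
# Hypothetical sketch: ask a language model questions about a chat export.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_about_chats(chat_log: str, question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model choice
        messages=[
            {"role": "system", "content": "You analyse workplace chat logs."},
            {"role": "user", "content": f"{question}\n\nChat log:\n{chat_log}"},
        ],
    )
    return response.choices[0].message.content

# e.g. ask_about_chats(slack_export, "What are the incentive structures here?")
# or   ask_about_chats(slack_export, "Whose work is underrecognised?")
```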

I think that's a perfect way to round it off. Thank you so much, guys. I really enjoyed this discussion, I must say, thinking about the impact on society and this identity crisis of software engineers. For me, it's always fun to have a conversation with you two. Thank you so much for joining. Thank you. Thank you. We'll round it off here. If you're still here, let us know in the comments section what you liked about this episode.

It's the best way to support the show, and otherwise we'll see you on the next one.