Whenever a new technique or innovation comes out, it's not the product. AI is not the product; it's a lens, not the product. A lot of people get caught up in the innovation, the technology if you will, but people don't buy AI. People don't buy a lens. They buy tools and products that solve their own problems. So really what you gain is a new tool in your tool belt to solve your customers' problems, not a new thing that you can sell. Hello and welcome to The Engineering Leadership Podcast, brought to you by ELC, the Engineering Leadership Community.
I'm Jeremy, founder of ELC, and I'm Patrick Gallagher, and we're your hosts. Our show shares the most critical perspectives, habits, and examples of great software engineering leaders to help evolve leadership in the tech industry.
The world is talking a lot about generative AI, but this next conversation is about how you can use these tools to enhance engineering leadership, your productivity, and your problem solving, in unexpected ways that you probably aren't examining yet. Joining us is Clemens Mewald, head of product at Instabase.
And we talk about things like the questions you should ask when deciding how to leverage generative AI capabilities, how you can use generative AI to address cross functional collaboration challenges within your organization, plus how you can translate deep technical research into meaningful business and product outcomes.
Let me introduce you to Clemens and why we're so excited to have him talking about this topic specifically. He's the head of product at Instabase. He's been a product and technology leader in the AI and machine learning space for over 15 years.
And Clemens has held leadership positions at Databricks, where he spent more than three years leading the product team for machine learning and data science. And before Databricks, Clemens served on the Google Brain team building AI infrastructure for Alphabet, where his product portfolio included TensorFlow and TensorFlow Extended. So he has been thinking about how to leverage these tools strategically. Enjoy our conversation with Clemens Mewald.
Thanks for joining us, Clemens. Sort of the headline we came up with to guide our conversation is around leveraging new generative AI capabilities to help enhance problem solving. So we're talking a lot about problem solving, and some of the observations you're seeing for how people are using these new tools and technologies to solve problems in different contexts.
To begin, I was wondering if we could start with a little more about your journey, and some background context on why and how you got here. Because I think what's special is that you've been thinking about the generative AI and AI space for a lot longer than most people; you were involved pre hype cycle, since 2015. So why did you get involved in this space, and what ultimately led you to Instabase?
Yeah, it's actually quite interesting. So I joined the Google Brain team back in 2015, and I guess that was my first professional experience with machine learning and AI. Back then, I think what mostly drew me to this was that I was just interested in the hard technical problems, and when I heard about what Google was doing in the space, it was just fascinating.
And I guess my claim to fame was that I was the first product manager on TensorFlow back in 2015, and then I helped Google adopt TensorFlow and build ML infrastructure around it that is now used across all of the different Alphabet companies. And the interesting thing about what you mentioned, that it was pre hype cycle: back then was when large language models really came out, when BERT was developed at Google.
And I remember the first applications of these technologies at Google. Now, fast forward a couple of years later, if you will, the rest of the world has caught up, and of course there's been a lot of development around these language models.
But the foundations, if you will, were laid back in 2016, 2017 at Google. And then, very briefly, to fill the gap: after Google I joined a company called Databricks and led the data science and machine learning product teams there. It was very successful; over the three and a half years I was there, Databricks again became a very large company. I think by the time I left it was like 5,000 or 6,000 people. So again, I took a step back to a smaller company called Instabase.
Instabase is a very interesting company because it applies these new generative technologies, specifically LLMs, to unstructured data. It turns out that you can use these models to understand unstructured documents really, really well. And I'm not going to go into too many details, but when you talk to customers: if a human can look at a document and get information out of it, even if it's a complicated legal document hundreds of pages long, these models can do it.
So it's actually mind blowing how far the technology has come.
When you're sharing your experience and journey here, there are really two main drivers for why I've been so excited for this conversation. Number one is your experience thinking about how to use this type of technology to solve problems for people; you just have such a vast amount of experience across a lot of different use cases.
So the reason I was excited is because every single person or community right now is trying to figure out: how do we incorporate this into our roadmap? What does it look like to build a product in this type of space? And then how do we organize our engineering organization to build that? So number one, I know you and I are going to talk a little bit about productizing innovation as one part.
But I think the other part is because you've had so much exposure to a lot of the ways that these different types of products and tools can impact people. You also have a really interesting point of view on how these can help solve problems for people. Before we get into like productizing innovation, I want to kind of talk more about like using generative AI to support leaders as they're addressing different problems.
And so I was wondering if you could talk a little bit about your observations on how people are applying some of these tools to solve problems from a leadership context. What are you noticing? What are you seeing? What's been interesting to you?
Yeah, it's actually quite interesting. Of course, I've been observing these technologies and a lot of the tools that have been coming out.
And to some degree, I actually have FOMO, right? Because you read online about all of these people using these tools and becoming so much more productive. You hear about all of these overemployed people that hold three or four different jobs. And I was always thinking, what am I doing wrong? Why am I not more productive? And at a very high level, and maybe I'm just being a little myopic here:
What I've seen is that, at least so far, the order-of-magnitude change in productivity and applicability with these tools is at what I would call more junior roles. If you look at these people that are overemployed and hold three or four different jobs, they're usually not engineering leaders, right?
They're not a VP of engineering running a multi-thousand-person organization. It's usually L4 types of engineers that are just that much more productive in their jobs. I haven't seen very meaningful applications of these tools at a more senior level. And of course it is a productivity tool, right? I've personally played around with a lot of these, even for writing performance reviews.
I think one of the challenges I see at more senior roles is that the problem space is a little more ambiguous, and the criticality of your work product, if you will, is higher. So even if I use some of these tools to help summarize documents into performance reviews, I find myself spending more time critiquing and editing the output than I would have spent creating it in the first place.
So I personally haven't seen a step function in productivity from using these tools directly. Of course, that doesn't mean leaders should ignore them. All of our teams are using them. It's just that, in terms of my own workflows, I don't use them as much as you would assume from reading the news.
Absolutely. Well, it makes me wonder, from a more speculative perspective: what do you wish a generative AI tool could help you with from an executive leadership perspective? How do you think about where these tools can really fit the needs of an executive leader?
Yeah, I think it's a hard problem, and a lot of it I would bring back to context.
And what I mean by context: a lot of the things that generative tools can optimize or automate have some limited context, right? If you're writing a piece of code, the context, if you will, is defined by all of the libraries you use and all of the other source files. So the model can look at the context, understand your intent, and help you.
I think in a leadership position, or a more senior position, context is everything you know about the company, its strategy, its competitors, the product, the people involved. To take a very concrete example, and I actually came across this at Google: a common task in a leadership position is to think about organizational structure and how to achieve change by changing your organization.
And I always told people this. I remember having a conversation with a very senior person at Google who came up with the perfect org structure. They said, this is exactly how we solve the problem, this is how we're all going to be aligned. And I said, well, as soon as you apply names to it, that org structure completely falls apart, because if you put this person in that position, they're going to quit their job.
And the reason I knew this is because I knew the person, I knew their preferences, and I knew who they would and wouldn't report to, and just how it works. So if I look into the future and ask whether some of these tools can actually help more senior leaders with their workflows: I think gaining that context, that understanding of a very complicated world with a lot of different people, is necessary.
And you know, I've been in tech long enough to never say never in terms of what's possible. But there are just too many variables to consider. Anything else that goes into more tactical project management, like figuring out the best way to execute on a project, is already more tactical, and some project management tools and companies are already looking into this.
Absolutely. So you mentioned that from an engineering leadership perspective, the main area of impact is less on the productivity side, because of some of the gaps in context, understanding intent, and being able to provide better recommendations. But what you did mention is that from a teams and product perspective, this is where a lot of the impact is happening.
And so I was wondering if you could speak a little more to the product side. As an engineering leader, what do some of these emerging changes mean for your core product? If somebody's thinking, oh my gosh, we absolutely need to build this into our roadmap, what are some questions you're asking? How are you thinking about the impact of generative AI when it comes to product building?
That's actually one of the questions I spend most of my time on. Within that category, I guess the first statement I would make, just so it's on the record: LLMs, or GPT, are not a product. This is a common thing that happens when new technology comes out. Everyone thinks the new technology is a product and they want to surface it to their customers. That's the reason why suddenly every product has a chat interface.
And I think that's a very short-term and narrow view of what these technologies can actually achieve. To take a step back and think about it more broadly: LLMs are not a product; they're really a new tool in the tool belt. Now a lot of the things you build from a product perspective become faster and better. You can actually do things that you previously thought you weren't able to do by using this technology.
But the key thing to think through is how generative AI and LLMs actually apply to your problem. How can you take these new capabilities and phrase your problem in a way that leads to a step function of better product quality or faster delivery? An interesting analogy, or anecdote if you will, behind this: when I was back at Google, I think around 2016, reinforcement learning became a big thing.
Back then, game playing with AlphaGo and so on was a big deal, and everyone thought reinforcement learning would revolutionize everything. I was involved with a lot of different teams that were trying to apply reinforcement learning to a lot of different products at Google. The challenge I found was that people had a very hard time applying the concept of reinforcement learning to their problem.
You have to think about an environment where different actors can take different actions, and there's a long-term reward function. It was just not natural to apply this to many different problems. But it turns out almost any machine learning problem can be rephrased as a reinforcement learning problem, right? Any recommendation problem could be a reinforcement learning problem.
I think one of the first places reinforcement learning was actually applied was notifications, I think in YouTube. One of the big challenges was that people couldn't apply that framework to their problems. With generative AI, I think what happened was that, number one, the form factor is just so natural to us. It's language. You ask a question, you get a response, and the concept is generating tokens, or text, that fit your expectation.
People have just found it much more natural to apply to their problems. They found out that you can generate code, you can generate a song, you can generate anything.
There have been so many different applications. But coming back to your original question, in terms of how to think about applying this to your product: really, what generative AI helps you do is generate content, or sequences of tokens, that previously you couldn't have generated in a similarly fast and efficient manner. And how that gets surfaced in your product is a completely orthogonal question.
In many cases, people forcing their product into "now we just have a chatbot Q&A" is exactly the wrong way of thinking about it, and there are many more creative ways it can be applied.
Well, when you say there are a lot more creative ways to apply it: in talking to a lot of leaders (you know, we had a generative-AI-focused executive dinner a few weeks ago), everybody said this is changing our roadmap.
And then we naturally move to: the first thing we need to do is build a chatbot inside of our product. It's almost like that was the default first thing to do. So when you talk about creative applications, what do you mean by that?
The reason why these models can be so easily applied anywhere is because, it turns out, especially in the technology space:
if you can create something from a written configuration (maybe it's code, maybe it's actually a config file), that can significantly accelerate the development process.
I've actually seen a lot of tools like this in the creative space these days. I've seen a video capture of a 3D animation tool where you can create objects, and you can literally just say, hey, I want to create a bridge with three piers, four lanes wide, and it actually creates that for you.
You can argue that that's a chatbot, but it's not really, right? You're giving instructions, and it's generating a config under the hood that is then rendered as 3D objects. The main point is that natural language in and natural language out works well if you have a support bot, but here it's instructions in and structured information out, which you can then use to generate your 3D objects. Or even Photoshop these days, which actually translates instructions into Photoshop actions. You could imagine the more short-term application of having a chatbot where you ask, how do I create 3D objects in this tool, and it will tell you. But short-cutting that entire process by generating configuration that can then be used in your product is already a step ahead. And then there's an entire category of products that just use
LLMs under the hood, without even exposing that natural language interface. I'm just going to use Instabase, the product I work on right now, as an example. It turns out that in this space you can take any document and extract information from it without training a model specifically for that document type.
We used to be in the supervised machine learning world, where for every document type you had to annotate data, train a supervised model, and then extract the information. Now you can just use LLMs to extract information into a structured format, without any fine-tuning, without any data annotation. It just works.
And it also turns out that you can just tell the model, hey, give me the most relevant information from this document as a JSON, in the background. I'm not saying this is the user interface; the model actually does a really good job of extracting information and putting it into a perfectly structured JSON without the user even having to express an intent or type anything in.
And therein lies another insight, which is that prompt engineering is a thing for a reason. These models are very finicky, and how you actually express the prompt is important.
And your product can do that on behalf of the user. You're basically taking away that variability, because in this concrete example there is a right way to write the prompt, along the lines of, hey, give me the most relevant information from this document as a JSON. (It's not exactly that prompt, by the way; it's a little more complicated.)
If I just gave a chat box to my users, maybe one out of a thousand would come up with the correct prompt, and everyone else would just try other things.
So I think doing the prompt engineering for your users, and basically providing just the functionality without even exposing the prompt in the product, is actually the right thing to do in many cases. It turns out natural language is pretty flexible, and from experience I can also tell you that a lot of these models are very sensitive to even capitalization.
If you ask a question, and then ask the same question without capitalization, you'll get an entirely different answer. That's just not the right product experience, and that's why exposing a chat box in many cases leads to that variability and inconsistent behavior.
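As a rough sketch of what "doing the prompt engineering for your users" can look like in practice: the product exposes a simple function, and the tuned prompt stays internal. The prompt wording and the `call_llm` helper below are hypothetical stand-ins (here stubbed so the sketch runs), not Instabase's actual API.

```python
import json

# The carefully tuned prompt lives inside the product, never typed by users.
EXTRACTION_PROMPT = (
    "Extract the most relevant fields from the document below.\n"
    "Respond with ONLY a valid JSON object, no extra text.\n\n"
    "Document:\n{document}"
)

def call_llm(prompt: str) -> str:
    # Stand-in for a real model call; a real product would hit its LLM here.
    return '{"invoice_number": "INV-001", "total": "1250.00"}'

def extract_fields(document: str) -> dict:
    """Users pass a document; the prompt engineering is done on their behalf."""
    raw = call_llm(EXTRACTION_PROMPT.format(document=document))
    return json.loads(raw)  # fail loudly if the model broke the JSON contract
```

The point of the wrapper is that every user gets the one prompt that is known to work, instead of a thousand users each inventing their own.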
The insight you just shared is mind blowing to me from a product perspective: removing, as much as possible, the complications of the prompt engineering side of things. Because that's where I almost get anxiety, thinking, is my prompt the most correct prompt that's going to get the output I want?
And I don't quite know how to phrase it, and sometimes there are all these analogous words, all these similar words I could use to generate that output. So for anybody listening: if you apply that mentality, that principle, to your product, it makes a huge impact.
There is a way around this, by the way, for anyone who's interested. Google Search has the same problem: you can express the same search in many different ways, and there's an entire category of techniques called query rewriting. The idea is, hey, take a query and rewrite it in a canonical form, such that if 10,000 different people express the same intent in 10,000 different ways, they get the same result.
There are methods for this, and query rewriting is a thing, but most people don't do it. They just expose a GPT API through the product and let the user basically figure it out.
And half the time, I'm not even sure what I want exactly, so the specificity of what I'm looking for is also the hard part. It's almost like I don't know what I don't know. So I think that's great.
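The query-rewriting idea can be sketched very minimally. Real systems often use an LLM for the rewrite itself; this toy version only normalizes case, punctuation, and a few filler words, and the stopword list here is an illustrative assumption:

```python
import re

# Illustrative filler words; a real rewriter would be far more sophisticated.
STOPWORDS = {"the", "a", "an", "please", "of"}

def canonicalize(query: str) -> str:
    """Map surface variations of a query to one canonical form."""
    tokens = re.findall(r"[a-z0-9]+", query.lower())
    return " ".join(t for t in tokens if t not in STOPWORDS)
```

Different phrasings of the same intent then collapse to one key, which can be cached, or routed to a single well-tested prompt, so users are no longer punished for capitalization or wording quirks.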
I wanted to dive deeper into how to frame better questions to leverage some of these generative AI capabilities. I was wondering if you had any thoughts about how to express problems in terms, or in a framework, that works better with a generative AI tool; just how to better frame your problems so they can take advantage of these tools.
What are your thoughts there?
Yes. We've experimented a lot with a lot of different tools, and I think there are a couple of insights that are extremely useful and generally applicable. One of them is definitely providing examples of what you want the output to look like. It comes up quite frequently, especially when you use this behind the scenes in your products.
You can actually tell these models, hey, the output should look like this JSON structure, just fill in the blanks, and they will do so. The model will return the same JSON with the blanks filled in with the information it thinks belongs there. Providing that context is a good example.
Similarly, what people refer to as few-shot learning, which is providing examples of what good answers look like, also helps these models mimic your intent. So you can say, this is the type of answer I'm looking for, here are some good examples of questions and answers, please answer in the same way. That will also result in a better outcome.
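A minimal sketch of assembling such a few-shot prompt. The Q/A formatting convention here is illustrative, not tied to any particular model's API:

```python
def build_few_shot_prompt(examples: list, question: str) -> str:
    """Prefix the real question with (question, answer) example pairs."""
    parts = ["Answer in the same style as these examples.\n"]
    for q, a in examples:
        parts.append(f"Q: {q}\nA: {a}\n")
    # The model is left to complete the final answer.
    parts.append(f"Q: {question}\nA:")
    return "\n".join(parts)
```

The resulting string is what gets sent to the model; the example pairs show it both the expected format and the expected style of answer.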
So those are some of the tips and tricks for prompting itself. A much more important insight I've had, at least at some point, which is really broadly applicable, is that it's very hard for these models to produce anything that's factually correct. And what I mean by this is not facts in the sense of history.
Take generating code. A lot of people figured this out pretty quickly: these models happily generate code for you, but they don't know if it's syntactically correct, or semantically correct. It just looks like code, because the model has produced a set of tokens that look like code. Of course there's been a lot of work on this, but I think the broad example is instructive. It turns out that these models are much better at writing tests. So one thing you can actually do is
let the model write code, and provide test cases: say, hey, please write a test for this function, and then let it iterate on the function until it passes the test. That's an interesting insight that I've seen applied more broadly. A very open-ended problem, such as "generate this code," is a very hard problem. If you can narrow down the verification (is this a good piece of code?) and then let the model iterate on the hard problem, it's more likely to result in a good outcome.
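The narrow-the-verification loop can be sketched like this. `propose_code` is a stub standing in for an LLM call (here it simply "improves" on each attempt so the sketch runs); in a real system you would feed the failing test output back into the model:

```python
def propose_code(attempt: int) -> str:
    # Stand-in for an LLM call; a real loop would pass the failure back to it.
    candidates = [
        "def add(a, b): return a - b",   # buggy first draft
        "def add(a, b): return a + b",   # corrected draft
    ]
    return candidates[min(attempt, len(candidates) - 1)]

def passes_test(source: str) -> bool:
    """The narrow, checkable criterion: run the candidate against a test."""
    namespace = {}
    exec(source, namespace)              # define the candidate function
    return namespace["add"](2, 3) == 5

def generate_until_green(max_attempts: int = 5):
    """Keep asking for code until the test passes, or give up."""
    for attempt in range(max_attempts):
        code = propose_code(attempt)
        if passes_test(code):
            return code
    return None
```

The open-ended task ("write this function") is hard to verify by eye, but the test gives the loop an objective stopping condition.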
And now it's getting into this generic topic of agents, multi-step conversations, and letting the model iterate. I've seen a lot of really interesting applications that use that principle: don't attack the hard problem directly; narrow it down to a verification problem, and then let the model iterate on the hard problem until it passes the test.
Which, by the way, surprised me. I just read the Stack Overflow developer survey, and I think testing was number four or five on the list of things people use LLMs for, and writing code was the first one. The reason I was surprised is because, in my mind, writing tests, increasing test coverage, and those kinds of things are much more automatable, if you will, and also usually things people don't like to do.
Well, what's interesting to me is what you said about the impact on writing tests: it sounds like it's much more reliable, that you can trust more of what's going on there, and it's more verifiable. The narrative I see online is that a lot of people complain, well, this code we're generating doesn't actually quite fit, and I'm spending more time reviewing the code than actually applying it and shipping it. So I think that's an interesting thing to call out. It's almost unintuitive: if testing is number four, that means it's the non-obvious thing people should be focusing on, doubling down on, because it's more reliable and easily verifiable.
Yeah, yeah, it's quite interesting. The reliability piece is actually an interesting one, because you see it of course with producing things like code, where you can factually check whether it works or not.
Of course, if you just have it produce text, there have been instances where lawyers created outputs that referenced cases that never actually happened. I would never rely on that. There are, by the way, also techniques to avoid this. Just to use Instabase as an example again: we basically point the model just at your documents, so it will answer based on what's in a document, and not based on what it thinks it knows, which is just a compressed representation of all of the text it was trained on. Because that compression is by definition lossy, the model will never be able to just reproduce facts. But if you point it to your own documents and say, hey, give me an answer based on this information, you can verify whether that information is actually correct. And I guess that's actually how it's been used in a lot of internal knowledge base type use cases.
But you bring up another point which I think is quite interesting, especially for engineering leaders to consider: reliability and quality. These models don't have a good track record of behaving consistently over time.
And what I mean by this: there have actually been public reports on it. OpenAI's ChatGPT performance deteriorated over time; you saw benchmarks, and it just got worse and worse on those benchmarks. I think it has to do with them making the model more performant as they scale it into production, and maybe in some cases making it more secure. But that's an increasingly important thing to consider for engineering leaders, and product leaders especially.
One of the reasons why Google was very reluctant to apply deep learning in search was exactly this problem: search behaves in a certain way, and from a product experience perspective you can't afford for your search rankings to look entirely different the next day just because you retrained the deep learning model behind them.
And I remember experiments at Google where they trained a massive deep learning model to help search, and it worked well, and then they retrained the same model and it behaved entirely differently.
That change in product experience is bad in many cases. And that's what a lot of people are realizing now when they use OpenAI's GPT: if you just rely on the latest version of GPT, it changes over time, and your experience may deteriorate over time.
There are, at least to my knowledge, no great solutions to this. One of them is to just pin the version of the model you use, so it doesn't change, but then you're basically creating technical debt that you have to fix later. Or, by the way, and this is a mathematical statement: if you train several of these models and then average the output, it will be more consistent over time. But that also comes with its own problems.
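Both mitigations can be sketched together. The model names and the `query` helper below are hypothetical stubs (no real API is called); the point is the shape: always request one fixed, dated version, or vote across several models to smooth out any single model's drift.

```python
from collections import Counter

# Pinning: always request one fixed, dated version, never a moving "latest".
PINNED_MODEL = "my-model-2023-06-01"  # hypothetical version identifier

def query(model: str, prompt: str) -> str:
    # Stand-in for a real API call; outputs vary slightly per model here.
    fake_outputs = {"model-a": "42", "model-b": "42", "model-c": "41"}
    return fake_outputs.get(model, "42")

def ensemble_answer(models: list, prompt: str) -> str:
    """Majority vote across several models for a more stable answer."""
    votes = Counter(query(m, prompt) for m in models)
    return votes.most_common(1)[0][0]
```

Pinning trades drift for technical debt (you must migrate eventually); ensembling trades it for cost, since every request now fans out to several models.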
The whole challenge you just laid out was so far off my radar, in terms of the performance deteriorating over time and understanding the implications of that. Completely off my radar. You're talking about reliability and quality, and that brings up another area I wanted to get into, which is some practical examples of how leaders can leverage some of these tools.
I want to talk about scaling technical teams. For an engineering leader whose company is at a place where the primary focus is scaling their technical teams up to the next level, what are you seeing in terms of how these generative AI capabilities are supporting that priority?
Yeah, I guess there are a couple of different dimensions. Maybe the easiest one to discuss is just productivity.
I think you will find a lot of public articles saying how engineering productivity has increased by use of these tools. And I was just referring to the Stack Overflow survey; according to that survey, something like 70% of all software engineers already use or intend to use generative AI. I think it was 40% already using it and 30% intending to use it. The number one thing they were using it for was writing code, then debugging, then writing documentation. It's fair to say that productivity increases, so in theory that helps you scale teams, if individual productivity increases. But of course there's also this other topic.
I guess we may not want to get into it, which is that people have actually been overemployed, taking up multiple jobs, or just working less as a result. If you say, hey, now that I'm more productive I can get more work done in a shorter period of time, and because remote work is pretty flexible, I'm just going to work less. Which is something important, of course, for engineering leaders to consider.
But that's an age old problem right which is especially like in software engineering to say hey I have some expectation of the output and it's very hard for me to tell like it did you spend like two weeks on it or did you spend two days on it. I think that that's something that has existed for a very long time and I think is not going to go away with generative AI.
Yeah, I wanted to ask you about rethinking the productivity element, because this is something I've been talking to some folks about. In a generative AI paradigm where productivity is just different in terms of your output, do you think about it differently as you're scaling out those teams? Maybe productivity becomes something different. How are you thinking about that?
I think there's an academic answer to this, an economic answer if you will, and then there's the reality. The academic answer is: if you give every one of your software engineers a tool that in theory improves their productivity by some percentage, then the effective capacity of your engineering organization has increased and you can get more work done in less time, right?
But of course that doesn't take into consideration the reality that some people use it more effectively, some people use it less effectively, and maybe some people basically compensate by reducing the time they spend on something, or by being overemployed. That's where it gets hard, but in theory you should assume that your overall development capacity increases.
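The gap between the academic answer and the reality can be captured in a back-of-the-envelope model: discount the theoretical gain by adoption and by how much of the saved time actually flows back into work. This is purely an illustrative sketch; the function name, parameters, and numbers are invented for this example, not taken from any study.

```python
def effective_capacity(engineers: int,
                       adoption_rate: float,
                       avg_speedup: float,
                       reclaimed_fraction: float = 1.0) -> float:
    """Naive estimate of team capacity after rolling out an AI coding tool.

    adoption_rate:      share of engineers who actually use the tool (0..1)
    avg_speedup:        fractional productivity gain for those who do (0.2 = 20%)
    reclaimed_fraction: how much of the saved time flows back into work;
                        values below 1.0 model people simply working less
    """
    gain = adoption_rate * avg_speedup * reclaimed_fraction
    return engineers * (1 + gain)

# The academic view: everyone adopts, all saved time is reclaimed.
ideal = effective_capacity(50, adoption_rate=1.0, avg_speedup=0.2)
# A messier reality: 70% adoption, half the saved time goes elsewhere.
realistic = effective_capacity(50, adoption_rate=0.7, avg_speedup=0.2,
                               reclaimed_fraction=0.5)
```

The point of the sketch is only that the headline speedup number overstates the capacity gain unless adoption and reclaimed time are both high.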
I think there's a longer-term, really important question, which is: what do very junior engineering jobs look like now? If you look at some of the benchmarks that OpenAI ran, and I think they also ran it against interview questions, GPT-4 can get an L3 software engineering job, right?
So I think what's probably going to happen is that the entire job ladder is going to shift up one, where people can be more productive than they used to be, probably by roughly a level. But that's just an uninformed guess.
But yeah, it's an interesting question. In software engineering it's obvious, because there have been so many tools and it's been so well reported, with GitHub Copilot and everything, but the same applies to every other job function, right?
And finally, this was the second part I was getting to: especially when you think about scaling engineering organizations, hiring and recruiting is a very large part, especially if you're in growth mode. When I was at Databricks, the company grew from about 400 to 5,000 people. That means a lot of hiring.
And it turns out that even the recruiters here at Instabase use some of these tools to help with job descriptions and other tasks. If you have the right tool, it's a more general productivity tool for any white-collar job. I haven't seen the killer application for that role that helps in hiring, but I would assume that the job of scaling up a company is easier with some of these tools than without them.
Absolutely. This comes up a lot in small group conversations that we host within our community: this idea of cross-functional collaboration. The main challenge being that engineering leaders develop a deep level of expertise within their domain, and then bridging that gap into other areas that are less familiar with the context of engineering becomes a huge area of friction: the translation, communicating priorities, everything like that.
But this is an area where you have a tremendous amount of expertise, so I was wondering if you'd share a little bit about what you're seeing here, from an example perspective, of leveraging some of these tools or working through different cross-functional collaboration challenges.
Yes, on the collaboration side, and by the way, funny side note, and I apologize that I keep coming back to it, but in that Stack Overflow survey,
I think the last item on the list of what people said they use generative AI for was collaboration. I don't know why that is, but I think people just see it as more applicable to things like writing code. On the collaboration aspect, there are two things I would look at. One of them is actually using generative AI tools as collaborative tools; I think that's one category.
The other one falls into the product development category, which is making sure that all of the cross-functional roles are on the same level when it comes to these capabilities and speak the same language. What I mean by this is, it actually depends on your culture as an organization how big of a problem this is. I've spent the last couple of years in enterprise software. Some enterprise
companies are sales driven: the sales team sells something and then the engineering team has to implement it. The reason that's obviously a problem is that you very rarely innovate through a sales team. The sales team is never going to tell you, hey, use this new AI technique to build the product, because they're just answering requests from customers. And then there are
other types of companies that are more engineering driven, where a lot of innovation comes out of the engineering teams and the go-to-market teams basically try to make that work through a channel. Of course the right answer is somewhere in between. But when you think about cross-functional collaboration, what I've noticed in this generative AI craze over the last couple of months, almost a year
I guess, is that it's extremely important for all of the different cross-functional organizations to be at roughly the same level of knowledge of what's possible and how this technology actually applies to your business. Because what can happen is, let's say you
have an engineering team that's very innovative: they've played around with generative AI, they've fine-tuned their own model, and they're planning to use it. If your product team is still writing PRDs assuming the old world, if your marketing team is still not
writing about this topic, and if your sales team is still very far away from this, it's not going to succeed. It's going to take a very long time for these cross-functional teams to align and actually have an effective output. So what I've noticed is, more than with anything we've seen in the past, your marketing lead needs to know exactly what the technology is, how it applies to your business, and what changes in the future.
Your sales team needs all of this because they're getting questions. The product team definitely needs to know about it, because they're the ones that are supposed to integrate it together with the engineering team, and if engineers are building products that product managers don't understand, that's not going to work well.
So I think the educational effort of bringing everyone onto the same page becomes much more important, and that's actually a key function of the leadership team. We've spent a lot of time with tech talks and with demos, making sure that everyone is aware of the paradigm shift and can think through what it means for them.
Even then, things still drift into the past, if you will. You tell people about this brand new technology, how the world is changing, here's what's important, and then a month later you find something that's still making assumptions based on the old world. The answer, of course, is that you keep repeating it, and actually sometimes use those as examples of what shouldn't happen, because otherwise you can't move as fast into the future.
So my question was going to be, how do you then align everybody around these new capabilities? But I think you covered that with tech talks, demos, and then revisiting the paradigm shift and what it means for each person.
Are there other elements of this that we didn't cover, about how to align people around this? Because I think what you're describing is so important: if you have engineering far ahead of how these tools are going to transform the business, and everybody else is thinking in a different paradigm,
then all of the friction and the slowness, all of those things, come in as an impact. What are other ways to help align on those new capabilities, or pass along that learning?
It depends on your organization size. If you're running an organization of tens of thousands of people, you probably have to think through very formal enablement, education, and training channels to make sure everyone gets to the same level, maybe even certifications. It turns out that if you run an organization of thousands of people, having certifications
is necessary, because if you just launch an online course and some people take it, that's not how things work at that scale. It really becomes an educational effort. At a smaller scale, it's really just getting everyone on the same page, even discussing these topics in more frequent all-hands, with a lot of over-communication.
I think that's extremely important, especially when things move quickly. I learned this in a different context: both at Google and at Databricks my team grew by a factor of 10 over two or three years, and what I learned back then was that basically every six months there are more new people than old people. You have to repeat yourself a lot, even if you think, wait a second, I just gave this same speech a week ago.
It's necessary because there are a lot of new people. Similarly with very foundational technical changes, you have to over-communicate; if you think you've said it one time too many, say it one more time, for people to actually hear the message. And what I've also found, especially with these technologies, is that when explaining some of the thought behind it, you have to find the right
altitude, if you will. What you shouldn't do, and I've seen this happen as well, is suddenly explain to people how attention works, how transformer-based models work, and the math behind self-supervised models. That just doesn't make sense, and especially in very technical organizations, sometimes people go a little overboard in terms of education.
Absolutely, and to your first point, similar leadership lessons still apply, in that repeating yourself and explaining the why behind things still matters, if not more so, because of how fast things are moving. I hope it relieves people that all of those insights are still available to them; like you said, the pace of change is so fast that the only way to overcome it is by repeating yourself and helping educate around the paradigm.
Yeah, and even your understanding may change, right? You may learn something new, and from one week to the next there's a slight change in strategy. Again, over-communication is important, so that at the end of the day everyone points in the same direction and can execute faster.
So, Clemens, we've been talking a lot about leveraging these tools in an engineering leader context, to help them solve problems or apply that within the teams they lead. But the other area of your experience that is really interesting is the work you've
done taking deep research and highly technical things and connecting them to business outcomes, or turning them into product, productizing those innovations. So I was wondering if we could talk a little bit about how you think about translating some of those deep technical or deep research areas into meaningful
business outcomes or products. What's your thought process there? On a high level, I guess there's a distinction between whether you're doing it once or doing it repeatedly. I actually have a framework for how I think about doing it repeatedly, and then there's a very high-level statement that I think is absolutely critical. The high-level
statement, and I mentioned this earlier, is that whenever a new technique or innovation comes out, it's not the product. AI is not the product. A lot of people get caught up in the innovation, the technology if you will, but people don't buy AI. People don't buy an LLM. They buy tools and products that solve their own problems in the real world. So really what you gain is a new tool
in your tool belt to solve your customer's problems, not a new thing that you can sell, unless you're OpenAI, in which case that's actually the one thing you can sell right now. Coming back to the more repeatable way of thinking about this: my team at Google created a framework that I've
applied since then, also at Databricks, which is identify, verify, amplify. The reason it was relevant at Google is that we were part of Google Research, and if you sit within a research organization, every day a researcher comes up with something new, they publish a paper at a conference, and then the next question is, okay, how is that going to impact products at Google? So the framework was basically: identify, which was talk to researchers and find out what is
the latest and greatest in new techniques. Researchers of course come up with mind-blowing things, but they're not necessarily tied to, okay, how can you actually apply this in many different places? So after identify, after you've found something new, comes verify, which is a deliberately one-off,
prototype way of checking whether what you've identified actually applies to meaningful business problems and can have a measurable impact. To use the reinforcement learning example again: someone at Google came up with a library to implement reinforcement learning
in TensorFlow. The very first step to verifying whether that was a real thing was, let's try to use it to actually send YouTube notifications to people's phones. And if that works out, if you actually have a measurable business impact, then you get into the amplify stage, where you say, okay, now that we've verified that this actually works, let's amplify it and make sure that every other team that needs to apply this has an easier
job of applying it, so it becomes more repeatable. It's basically a portfolio problem: there are always a whole bunch of things in identify, a whole bunch of things in verify, and a whole bunch of things in amplify. And if you apply that repeatable framework to doing it once,
let's say I'm at a company and generative AI just came out, what do I need to do? You've identified it. The key point is the verify step before you amplify: find out the least expensive, fastest way to verify whether this actually has a meaningful impact. What I mean by that is, don't create a year-long roadmap to build a new
product based on LLMs just to find out it doesn't work. Instead, find something that you can verify in a month or two to get a signal of whether it actually works, and then go to the amplify stage, if you will. In some cases that actually means re-implementing, starting something completely new on the
side. This goes back to the innovator's dilemma: most companies already have a massive customer base, they already have a massive product, and it's very hard to change. In some cases, applying a new technology to that takes two years and you're
too late. But starting a much lighter-weight new product or new capability in parallel, one you can test things quickly with, and being a little more bottom-up, actually helps. I think for those who are familiar with the
innovator's dilemma, they can probably see how to apply that. Absolutely. I have one more follow-up question here, because you're mentioning the critical importance of verify, and the timing of this being of the utmost importance, otherwise you'll miss it. And it's so true right now: things are changing so fast that if you're trying to plan a roadmap two years down the road, you're
going to miss it. What's your favorite way to verify in a lightweight way? Of all the different versions of what this has looked like for you, do you have one that stands out, where you're like, I loved doing this one, this was a ton of fun, I got great signal, or maybe just emotionally, this was just so
cool? By the way, this also goes back to size and what type of product you have. Verify at a very large corporation with a very large user base usually means A/B testing: you can run a small
experiment and just expose a thousand users to it. In a startup, by the way, verify is often just building an entirely new prototype and pitching it to an entirely new set of customers. It's almost
like pivoting to a brand new thing. So it really depends on how big the boat is that you need to move, I guess, and how nimble you can be. But the point, just to reiterate it, and you summarized it really well: it needs to be quick. You need to get a signal fast, and as soon as you're writing a long-term roadmap, you're probably already too late.
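The large-user-base version of verify, exposing a small slice of users and checking for measurable impact, boils down to a standard two-proportion z-test. A minimal, stdlib-only sketch; the function names, the 1.96 critical value (roughly a 95% one-sided confidence bar), and the example counts are all illustrative choices, not from the conversation:

```python
import math

def two_proportion_z(conv_control: int, n_control: int,
                     conv_treatment: int, n_treatment: int) -> float:
    """z-statistic for the difference between two conversion rates."""
    p_c = conv_control / n_control
    p_t = conv_treatment / n_treatment
    # Pooled rate under the null hypothesis that both arms convert equally.
    p_pool = (conv_control + conv_treatment) / (n_control + n_treatment)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_control + 1 / n_treatment))
    return (p_t - p_c) / se

def verify_passed(conv_control: int, n_control: int,
                  conv_treatment: int, n_treatment: int,
                  z_crit: float = 1.96) -> bool:
    # Promote to the amplify stage only if the treatment beats
    # control with a comfortable margin over noise.
    return two_proportion_z(conv_control, n_control,
                            conv_treatment, n_treatment) > z_crit

# Expose roughly a thousand users per arm, as in the example above:
clear_win = verify_passed(100, 1000, 150, 1000)   # 10% -> 15% conversion
noise = verify_passed(100, 1000, 105, 1000)       # 10% -> 10.5% conversion
```

With these invented numbers, the first experiment clears the bar and the second does not, which is exactly the fast go/no-go signal the verify stage is after.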
I love the focus there. It's almost like it really doesn't matter what you do, as long as it's fast and gets you good signal. Clemens, we've got a couple rapid-fire questions to wrap us up, if you're ready to jump into those. Okay, yeah.
What are you reading or listening to right now? I started listening to podcasts, mostly because I have to commute in the Bay Area and I spend a lot of time in my car. I've actually been listening to a lot of VC-based AI podcasts, and I find them interesting. I just recently listened to one that went into the more political and economic background behind some of the companies pushing for open source and AI
regulation, so it's been really fascinating. Do you have an episode you want to recommend? Yeah, this was actually an episode with Martin Casado and Marc Andreessen, where Marc Andreessen had actually written an article about why AI will not end the world. Martin Casado is awesome, we've had him at a few of our events, so I imagine that's a tight conversation between the two of them, and there's probably some good insight there.
Next question: what's a tool or methodology that's had a big impact on you? A big one is something called critical user journeys. This was actually part of Google's product excellence training, and the short summary is that a critical user journey is defined by who is going to go through the journey, what their goal is, what the outcome is, and then all of the steps they have to take to actually reach that outcome.
The reason I found this impactful is that one of the insights of critical user journeys is that they usually span products; they're not contained to just your own product or your own feature. Anything meaningful that anyone wants to achieve usually goes across multiple different products, and as a product owner you have to actually consider all of these adjacent products that are important.
I'll give you one very easy example: in many developer products, user journeys usually include Stack Overflow, documentation, and the AWS Management Console, right?
We've actually had examples at Databricks where we found things we had to fix in the AWS documentation to improve the user journey that Databricks customers go through, because there was a step where they had to go through the AWS Management Console. That need to influence other products, or to think about what else people need to do to actually achieve something meaningful, is what's very powerful in that framework.
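The journey structure described here (who, goal, ordered steps, with steps spanning products) is simple enough to capture as data, and doing so makes the cross-product insight mechanical: just list the steps that happen outside your own product. A hypothetical sketch of the Databricks/AWS example; the class and field names are mine, not Google's actual template:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Step:
    description: str
    product: str  # which product the user is in during this step

@dataclass(frozen=True)
class CriticalUserJourney:
    who: str      # the persona going through the journey
    goal: str     # the outcome they want to reach
    steps: tuple  # ordered sequence of Step

    def external_touchpoints(self, own_product: str) -> list:
        # Steps outside your own product are the ones you can
        # only influence indirectly (docs fixes, partner requests).
        return sorted({s.product for s in self.steps if s.product != own_product})

journey = CriticalUserJourney(
    who="data engineer",
    goal="run a first job in a new workspace",
    steps=(
        Step("create cloud credentials", "AWS Management Console"),
        Step("look up setup instructions", "AWS documentation"),
        Step("create the workspace", "Databricks"),
        Step("run the job", "Databricks"),
    ),
)
external = journey.external_touchpoints("Databricks")
```

Enumerating the external touchpoints is what surfaces work like the AWS documentation fix: steps your customers must complete that no team inside your product directly owns.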
I love that, and my mind's already jumping around about how I can apply that differently in my own role. We're talking a little bit about trends here, so maybe we get away from generative AI, or maybe we stick with it, I don't know, but this question is about trends. What's a trend you're seeing or following that's interesting or hasn't hit the mainstream yet?
There are probably a lot of things that you and I consider mainstream that are not really mainstream, because we kind of live in a bubble.
So whenever I think about AI in general, even cloud computing, digitalization, all that kind of stuff, we're really early in the S-curve on a lot of these things. In Silicon Valley it's sometimes easy to forget, but especially if you're in enterprise software and you talk to a more representative set of global enterprises, you will see that the entire world is still early on in a lot of these trends.
That is such a great call-out in terms of where we sit. With that, last question, Clemens: is there a quote or a mantra that you live by, or a quote that's been resonating with you right now? I'm not prepared for this question; there are probably a lot of quotes that I could come up with.
Maybe I'm going to leave with a cliffhanger that is more on the philosophical side, which I keep reminding myself of. There's a graduation speech by David Foster Wallace called This Is Water, and it's less of a quote and more of a mantra, I guess, or a reminder. So I'm just going to leave the listeners with googling and finding out what David Foster Wallace means when he says, this is water.
That can be my first action after we get done. Clemens, I just wanted to conclude by saying thank you for introducing us to a lot of different ways to think about how we can solve our problems here. What stood out to me is just how clearly you think about how to leverage any type of tool to really specifically focus on helping people solve problems.
I just think that the way you think provides some really incredible, non-intuitive perspectives, so I just wanted to say thank you. Awesome. Thanks for having me. It was great. If you enjoyed the episode, make sure you click subscribe if you're listening on Apple Podcasts, or follow if you're listening on Spotify.
And if you love the show, we also have a ton of other ways to stay involved with the engineering leadership community. To stay up to date and learn more about all of our upcoming events, our peer groups, and the other programs that are going on, head to sfelc.com. That's sfelc.com. See you next time on The Engineering Leadership Podcast.