¶ Intro
Hi everyone, my name is Patrick Akio, and if you're interested in AI and software development, this episode is for you. We go over the core concepts and evolution of prompt engineering, RAG (retrieval-augmented generation), as well as GPT agents, and how you as a software engineer can leverage all of that and more. Joining me today, friend of the show, Roy Derks: entrepreneur at heart and always building products for developers. So enjoy.
One of my recent thoughts was more so on the full stack
¶ Starting as software engineer
development. And also, a lot of the questions I get are: if you were to start your career all over again, and either be self-taught or transition into this career of software engineering, how would you start? Would you go full stack from the start? Would you focus on front end or back end? Or would you try and jump on this AI bandwagon and develop more AI-specific applications? How would you start from the
start again? That's also a very good question that I'd like to answer with 'it depends', but I'm not going to do it this time. So for me, I'm self-taught, and for me, programming always started with a use case. I wanted to build something, and that's why I learned programming. So I guess that's an important
starting point for myself. But maybe for others, if you're going to university or high school and you know, I want to build stuff, but I don't necessarily know what I want to build, then it would be good to find a starting point that's maybe either full stack or front end or back end. And there, I would say, starting with a
full stack project would probably be a good thing, because then you build a front end, you build a back end, and you're going to find out what excites you most, so you can continue from there, or maybe go back end or go front end. You don't think it's a lot to start off with full stack immediately? Well, it also depends what your starting point is. There are a lot of these JavaScript courses that will teach you basic JavaScript. So that's probably a better
starting point. But if you look at the world where our self-taught developers at least are coming from, they go to YouTube, they go to random websites, find tutorials, and start learning. So if you're on that sort of bandwagon and you watch a video like a four-hour live coding video, and I have some friends who are these YouTube influencers, and they do these four-hour live-stream builds ready to take you from zero to a full-stack app.
Insane. Yeah. And those are pretty much focused on new developers. There's some JavaScript in there, but they'll walk you through it. So that might be a good starting point, going from there. But it also means you need to be somewhat of a self-starter. If you need someone to hold your hand, it might also be fine, but there's some self-learning there as well. Yeah. And then for AI, I think it's a
completely different subject. So a lot of the things I do now are about helping developers to build AI applications. But I wouldn't say that would be a starting point. If you have a use case and you say, I might want to become an AI engineer and I want to build some app that does what ChatGPT does, then starting with building a full-stack app could be a better starting point than starting to build AI apps.
Because there are so many paradigms you need to understand before you can actually get started. Yeah, interesting. I was wondering because for me,
¶ Putting together something from IKEA
I'm always kind of similar to you. I build something when I have a use case. And for me it's easy, because I'm not an entrepreneur. I'm in an organization, and especially within a consultancy, I come into a project and there's already either a predefined goal or we're early and still defining goals. But then I have use cases, right? I don't have to come up with my own use cases. I can, but there are probably already ones adjacent to whatever we're
trying to do in any case. But to come up with use cases from scratch would be way harder. And then when I'm looking at the technologies, I would always do what is required and go extra based on interests. But I've noticed myself not going very, very deep. For example, I can use LLMs in applications, but I don't have this urge to figure out how LLMs work under the hood. And I think I can still be productive and effective without
that knowledge specifically. But I've also talked to other engineers, and they want to know everything there is to know before they build on use cases, or they actually want to understand something completely before they use it. And I'm not like that. How's that for you? Yeah. So for me, I usually start with a use case. I try to build something, I try to get it to work, and then once it works, I'm going to figure out how it actually works. And then there are other people.
So if you take the use case of putting together something from IKEA, I will take the manual, I will look at the bits and pieces, and I will just start doing it. And then somewhere along the way I figure out, oh, I did something wrong, or I need to go back, or this wasn't supposed to go there. And then there are other people, who I've watched putting
together something from IKEA. They look at the manual, they go through it a couple of times, they inspect all the different parts there are, then they're going to try and see where each would fit, and only then will they start putting it together. So those are probably the people that want to know how LLMs actually work before they even start to do something. For me, it's building something and then figuring out how to break it, or how to iterate, or how to make it better.
So yeah, generally speaking, I don't really care about how something works. I just want to make sure it works, and then once it works and it fits my use case, I'm going to try and specialize in this area and figure out how to improve. Yeah, I've been involved
¶ Starting a project from scratch
most of my career in projects that have already gone, let's say, from zero to one, are already live, and I'm there to make them more effective or build on top of them with features. There's one project where I actually worked from zero to one, and that experience is actually quite different, which I didn't expect initially. You're starting out from scratch, you have a lot of options, and it's really hard, if you haven't done it multiple times before,
I feel, to not fall into this analysis paralysis, especially when it's for a bigger organization, where people love coming up with solutions and future use cases which are not really valid yet and are mostly assumptions in any case. How do you, from your own experience, scope it down and make it tangible within this use case and not meander into a different path? Yeah, that's also a very good question.
So I'm used to either starting my own startup or working at startups and helping them get somewhere. And usually there are a lot of choices you need to make, and more often than not they're made by a small group of developers, or maybe, if it's a really small startup, by a CEO or a CTO. And they're going to tell you: we need to go this direction, and we're going to use these technologies, because these are the technologies we are familiar with.
And it's probably fine. I mean, if there is someone in your organization that will help you figure out the five key ingredients you need in order to build a successful application within your own boundaries, that can already help. So I'd say always try and find someone that has opinions about what you should or shouldn't use, because someone needs to be accountable at some
point. If you go with a certain technology and then five months into the project you're going to figure out it has all these limitations, then well, it could be good, could be bad. I mean, you probably have learned a lot, but if you're on a consultancy project and you need to deliver something, then it might not be the best thing. If you're a startup, you can actually make these mistakes and you'll learn and get better and you improve.
But if you work on projects for clients, it will be harder to do this. So myself, I don't really like to work on a project where all the boundaries have already been set, because then it's just work. You're executing on whatever is in there. There might be small improvements you can make, but it's not what excites me. And I can easily see it:
I mean, I know a lot of developers around me who get excited by taking something really small and making it even smaller or faster, or optimizing it. For me, that's never really been the case, because I always want to build for the end user. I want to make sure the end user has a better experience. If I would optimize one tiny thing and the end user doesn't see it, to me it doesn't feel like I've done something. Yeah.
No fulfillment. Yeah, I mean, you might have 200 lines of code edited or deleted or whatever, but the end user still sees the same website or same application. So I don't see the benefit in that. If you are in a growing organization, if there are requirements from clients, then you need to do these kinds of things. If you work in start-ups, which is therefore my preference, you don't have to care about this, because the only thing you care about is the end user.
Are they using your product? Are they buying your product? Are they talking to friends or colleagues about your product? So yeah, that's maybe not really the answer to your question, but... It is.
¶ AI usecases
I mean, it made me reflect on where I've usually had the most fun. And I think actually going from zero to one, to a point where a developer can be productive, for me that is still a bit of a pain, because I'm tying things together, I'm setting everything up, and I'm trying to make sure that the way of working within the new stack, or whatever stack we're building, is going to be optimized.
And I've really enjoyed coming into a project where that's been thought of, and I can still obviously tweak things based on past experiences, but I'm productive from the get-go. And then there's still a lot of stuff to do. I've never been in that spot you're talking about, where you probably do a lot of A/B testing and you make minor changes with big impact just by virtue of the scale that you have as an
organization. But yeah, as you lay it out, I'm not sure if I would enjoy that as much. I definitely like this A/B testing approach: building a proof of concept really quickly, testing it, and if it doesn't work, just throwing it away. I have no problem throwing stuff away. But if that's the only thing you do, you do kind of go into this micro-optimization, and you have to find fulfillment in that, otherwise it's going to be tough.
With regards to AI use cases that you've seen, where do you think this is going? Because so far, from what I've seen, there's a lot of manual work that can be replaced. People that analyse PDF documents and put information into a system, for example, as one of the use cases; or having this chatbot or guide experience where people explain either language or a concept within an overarching tool. What are some of the use cases you've seen?
So I think the use cases you describe are a lot of what you see: people trying to automate workflows, which could be, like you said, someone generating PDFs or analyzing them and putting them into a different system. It could also be extending existing capabilities. So if you look at Microsoft, Google, and even us at IBM, we all had our big developer conferences or product conferences this week or last week.
Yeah, every product that was announced somewhere has some new AI capability, where probably not much is happening in terms of what the LLM is doing. It's probably pretty straightforward in most use cases, but everyone is trying to put AI in somewhere. Yeah, I always like to look at memes, and there are all these memes about product managers telling their team to do something with AI but not really knowing what. So those are interesting use cases. And I think it's also, if you
¶ Prompt engineering
look at start-ups that applied to Y Combinator, like two years ago, all of them were ChatGPT wrappers, meaning that the only thing they would do is give you some assistant where you can ask a question, and it will probably just use a model from OpenAI to answer your question and not do anything more. So when you look at these LLMs, a lot of the start-ups you see, a lot of the use cases you see, they are doing prompt engineering.
And probably everyone knows the term prompt engineering. This is where you rephrase the question you ask the LLM in order to get better results. And you can do a lot of things in there. You can set the temperature, like how creative it should get, or you can give specific instructions. You can give examples of things you want to see, say if you want to get results in a specific order or in a specific format. You can ask all these different questions.
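To make that concrete, here's a minimal sketch of that kind of basic prompt engineering with the openai Node SDK; the model name, the instructions, and the temperature value are illustrative choices, not something from the episode:

```typescript
// Basic prompt engineering: specific instructions, a required output format,
// and a temperature setting, all in one chat completion call.
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function summarizeTicket(ticketText: string): Promise<string> {
  const response = await client.chat.completions.create({
    model: "gpt-4o-mini", // illustrative model name
    temperature: 0.2, // low temperature: less creative, more deterministic
    messages: [
      {
        role: "system",
        // The instructions and format requirements live in the system prompt.
        content:
          "You are a support assistant. Summarize the ticket in exactly three " +
          "bullet points, then end with a line 'Priority: low|medium|high'.",
      },
      { role: "user", content: ticketText },
    ],
  });
  return response.choices[0].message.content ?? "";
}

summarizeTicket("Customer reports login fails after password reset.").then(console.log);
```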
And it's very basic prompt engineering, which all the LLMs are able to use. So that's one thing, and it's what most of the companies are doing today, and it's probably good enough for most use cases. You don't think that's a bad thing? That it's just a wrapper around something existing? Well, it's getting worse, because people get higher expectations. I mean, when ChatGPT first started, we got mediocre results and we were like, oh wow. If we get
mediocre results today, you're saying these are really bad results, because another LLM is able to give me better results, or I'm using this tool, a standalone software service I need to purchase, that does better. And it probably does better because they do slightly more than just interacting with the LLM based on prompt engineering. They make it context aware, and
¶ RAG explained
there are many ways to make LLMs context aware. So this is where a lot of things are shifting today, and there's quite a lot happening there already. You've maybe heard the term RAG. This is retrieval-augmented generation, and this is where you would include a database, and usually these are vector databases that use semantic search. Semantic search isn't new; it's something we've been using all along. Most search engines will
use semantic search. It is where you put in a couple of words and then it's going to find matches to those words based on a score. So this is semantic search. It doesn't just do a filtering of the database, like give me everything that includes this word in the beginning or the end. It can do way more, but it isn't smart. But it is able to give you relevant entries from a database.
So with RAG, what you would do is take your documents or your data, however it's structured, and split these into smaller pieces of text that you can insert into a vector database. And then you would have a vector database that might have a million rows, because it's a database, with pieces of your
data. And then whenever you create your prompt that goes to the LLM, first you're going to retrieve some data from the vector database that's relevant context. And thereby you can extend the knowledge of the LLM, beyond all the basic data it's trained on, with your own company-specific data. And this is very simple RAG, because you just give this context, and then there are a million patterns within RAG that involve different names and different orders.
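As a rough sketch of that simple RAG flow, assuming the openai Node SDK, with a plain in-memory array standing in for a real vector database (the model names and the top-3 cutoff are illustrative):

```typescript
// Bare-bones RAG: embed document chunks, retrieve the most relevant ones by
// cosine similarity, and prepend them to the prompt as context.
import OpenAI from "openai";

const client = new OpenAI();

async function embed(text: string): Promise<number[]> {
  const res = await client.embeddings.create({
    model: "text-embedding-3-small",
    input: text,
  });
  return res.data[0].embedding;
}

// Semantic search scores matches rather than filtering on exact words.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] ** 2;
    nb += b[i] ** 2;
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

async function answerWithRag(question: string, chunks: string[]): Promise<string> {
  // In a real setup the chunks are embedded once and stored in a vector database.
  const chunkVectors = await Promise.all(chunks.map((c) => embed(c)));
  const queryVector = await embed(question);
  const context = chunks
    .map((chunk, i) => ({ chunk, score: cosine(queryVector, chunkVectors[i]) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, 3) // keep the three most relevant pieces of context
    .map((c) => c.chunk)
    .join("\n---\n");

  const res = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      { role: "system", content: `Answer using only this context:\n${context}` },
      { role: "user", content: question },
    ],
  });
  return res.choices[0].message.content ?? "";
}
```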
And sometimes you want to re-rank the data that's coming from the vector database, or you want to do keyword matching there as well, instead of just semantic search. There are a million different variations you can do there. And this is just within the RAG concept. There are more concepts within AI that get even more complex. So this is really where it's going, but it doesn't necessarily mean that you need to do all these things in order
to be effective. I always like those projects where you can see that someone was really thinking about how to structure the project. You've probably seen those as well: there are multiple layers within the directory structure, and if you look at a React front-end app, the components are split into really small components that are all reusable and have tests. This is how you can set up a front-end project. It doesn't mean you have to do it.
You can also build the project with 20 files of 1,000 lines of code each. It might not be the best way, but it might get the job done. So it's the same in this AI
¶ GPT agents explained
space, where you can use LLMs with prompt engineering, you can look at RAG, where you make LLMs context aware, and then there's a new pattern, which we can talk about as well, which I think is more of the future than RAG itself. What's this new pattern you're talking about? So this is really what we call agents. You might have seen this. I think ChatGPT, or at least OpenAI in their platform, has introduced some way to start
building agents. So agents are slightly different from LLMs with prompt engineering or LLMs that are context aware using things like RAG. With agents, you would have an LLM that has a set of tools or skills. If you're using ChatGPT, you'll notice one of those tools might be: search the Internet. Or one of those tools might be: make a call to my Salesforce or my CRM or my SAP or whatever you're using to store customer data.
Make a call to those services and extract data. If you look at agents, you'll have an LLM that has this set of tools that are ways to reach the outside world. And even RAG could be a tool in this. RAG could be a tool like: go to my vector database and collect results based on all those millions of PDFs that I uploaded there. So this agent is able to look at all the tools and skills. There are sort of two words for it. People either like to call them
tools or skills. And that's also the nice thing about AI: we're coming up with words for things, and one company might think one word is better than the other. And then it always takes some time to figure out what the conclusion will be and what the exact phrasing and terminology will be. So you will have an agent which has a set of tools or skills at
its disposal. And then you ask a question like, hey, can you get me all the orders and tracking numbers for today? So maybe you're an e-commerce company; you want to extract all this information because you want to know how many products you sold and shipped. So the LLM would get this question, and then, based on your question, it will look at its skills and
tools. One of them might be: go to my postal provider and find out what packages we shipped today. Another might be: go to my database and figure out what orders have been placed. And then it also needs to take this information and put it in a coherent way, so it will come up with a nicely formatted answer to your question. So agents can do slightly more than RAG, which just depends on
a vector database. They're able to come up with a plan to collect all the data needed to answer your question and then structure this in a good way. Because sometimes it's sequential.
So it first needs to go to system A and then to system B, and another time it might need to go to system B first before it goes to system A. So instead of having to do all of this yourself, where you need to say go to system A, collect this, go to system B, collect this, which we do as programmers, we try to write these sequences of things
because we know the sequence. With agents, you don't necessarily know the sequence, or you don't need to know the sequence, because the agent is able to figure out, just by looking at natural language, which tools or skills to use to answer your question. Interesting.
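Here's a toy version of that agent loop, assuming OpenAI-style tool calling; the tool names (get_orders, get_tracking_number) and the stubbed data are made up for illustration:

```typescript
// A minimal agent loop: the model decides which "skill" to invoke and in what
// order; we execute the tool and feed the result back until it can answer.
import OpenAI from "openai";

const client = new OpenAI();

const tools = [
  {
    type: "function" as const,
    function: {
      name: "get_orders",
      description: "List order IDs placed on a given date (YYYY-MM-DD).",
      parameters: {
        type: "object",
        properties: { date: { type: "string" } },
        required: ["date"],
      },
    },
  },
  {
    type: "function" as const,
    function: {
      name: "get_tracking_number",
      description: "Look up the tracking number for one order ID.",
      parameters: {
        type: "object",
        properties: { orderId: { type: "string" } },
        required: ["orderId"],
      },
    },
  },
];

// Stubs standing in for real calls to your shop system and postal provider.
function runTool(name: string, args: { date?: string; orderId?: string }): string {
  if (name === "get_orders") return JSON.stringify(["A-1001", "A-1002"]);
  if (name === "get_tracking_number") return `TRACK-${args.orderId}`;
  return "unknown tool";
}

async function agent(question: string): Promise<string> {
  const messages: any[] = [{ role: "user", content: question }];
  for (let step = 0; step < 5; step++) { // cap the loop so it always terminates
    const res = await client.chat.completions.create({ model: "gpt-4o-mini", messages, tools });
    const msg = res.choices[0].message;
    messages.push(msg);
    if (!msg.tool_calls?.length) return msg.content ?? ""; // plain-text answer: done
    for (const call of msg.tool_calls) {
      messages.push({
        role: "tool",
        tool_call_id: call.id,
        content: runTool(call.function.name, JSON.parse(call.function.arguments)),
      });
    }
  }
  return "Gave up after 5 steps.";
}

agent("Get me all the orders and tracking numbers for today.").then(console.log);
```

Note that nothing in the loop hard-codes the order of the calls; the model picks the sequence, which is exactly the difference from hand-written integration code.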
¶ Developer way of working with AI
I mean, if I look at how it's grown, it started indeed with these prompts, and engineers would use it to make themselves more effective, right? To test assumptions more quickly, to gather more information on what needed to be done, sometimes to generate the right answer as well and then implement it. And now that it's moving on, it's more context aware. Plus it can actually do part of the integration work on its own, even without the steps in between being given
sequentially. You just say: this is what I need, the orders of today. And then from which system, and the slicing to today, it would handle by itself. Which means either engineers are going to be more productive, or, and this is how I see engineering nowadays, a lot of work is actually
integration work. There's not too much business logic here and there, and even if there is business logic, people are very mindful of it. It's usually very well tested and isolated in one place within an application. Now the integration work is also done kind of automagically in some cases. How do you see the role of engineers changing with this component of AI more and
more? Yes, if I look at myself I I use things like GitHub Copilot or like coding assistance in general. They've for example helped me to generate tests from my code. These aren't the best tests but they will do things that would took me 1020 or 30 minutes to do it myself. Like a boilerplate. Yeah, like a bottle plate or can actually look at your function and come up with some testing scenarios.
So let's say you can get a 80% testing coverage by having Copilot generate this for you in a minute instead of you having to write this in an hour. Yeah, that's pretty good. That's pretty good. So if you look at coding in general, it's part of it is just work. It's you know what you need to do, you can do it. It's just going to take you time could be hours, could be days, could be weeks. You know you can do it. There's no new new aspect to it's just execution.
And then there is maybe, let's say, 20% of your job that is actually thinking through things, coming up with a new solution to a problem, or coming up with some integration between different components or different libraries that doesn't exist yet. And I'm saying 20% of your job might be like this. I hope it's a bit more, but let's say you're delivering a consultancy project for a client, then probably it's like 20 or 10%. If you work at a startup, it might be 80%,
and only 20% is work work. So let's assume this 20% of your job is the only thing you need to do, and the other 80% can be automated. This can mean two things. It either means we need fewer developers, because a lot of work can be automated. It could also mean that we used to only have 20% of our time for that 20%; what if you have 100% of your time to do it? It could mean we get better solutions, more creative integrations between different libraries or different systems.
So I don't think we need fewer developers, but the developers that we have will spend more of their time on creating innovation, rather than just doing the work that needed to be done because there was no other way than to do it manually. Yeah, I like that thought, and I
¶ AI taking away jobs
think I'm hopeful that it's going to go that way. But also, in reality, sometimes when times are tough, people look at costs, and if we have people that all of a sudden get more productive, more effective, but there's still the same number of people, either we keep that same number of people or we cut costs. And then sometimes layoffs happen. I don't know if you see that nowadays; I don't think AI plays a big role in the layoff sequence that we've seen so far.
But organizations are definitely cutting out the extra fat, as they call it, and trying to be more lean. And engineering is definitely sometimes impacted too. It's very hard to see, OK, where does that come from? And if your organization is huge, it's also like what we discussed before: it's really hard to find things that actually impact an organization in its mission or its values and its goals. So then, yeah, you might find yourself working on this micro component.
And then the organization just decides, OK, this is not impactful anymore, and then the layoff happens. I wonder if it's going to happen more and more as people get more productive with this role of AI. And also an interesting thought, which I'm curious to hear your opinion on, is kind of this education gap, right? Because you and I, we've played around with software for a long, long time, and then AI came, so we can leverage previous
experience. For example, my partner was trying to figure out something to do with JavaScript, and she had ChatGPT generate something. But she's like, yeah, it's not working, and I don't know why it's not working. I don't know which component, I don't know what it does. It's trying to explain it to me; I don't really want to know in the first place. Can you help? And then, yeah, I did figure it out, but that's basically
based on previous experience, plus I know exactly what to change and where it is. But new people will not have that same amount of experience. They will have to find a different way to get this gut feeling of where things are wrong or how the right building blocks fit together. It's going to be more difficult, I think. Yeah. So it's two things. The first thing you said is about AI maybe taking away jobs.
I always feel AI is kind of cheap right now because all the different providers are trying to get companies to use their models. At some point the price will increase, and companies will only start to automate things away when it gets less expensive to automate. And I think it will be a while before most development jobs can be automated away. Maybe some of the more administrative jobs can be automated. But if you look at, and I like this analogy,
shrimps. So shrimps need to be peeled, and there are some machines that can do it, but the machines are really expensive. That's why a lot of companies will ship their frozen shrimps to low-cost countries to get them peeled and then ship them back. Even though there are machines, this is still going on because it's cheaper to have people do it. It could be that those people are underpaid, and they probably are, but still, people are less expensive than machines in that
sense. For AI, I think it will be like that for a while as well. The bigger companies might be able to automate and have fewer people because they have AI, but for the smaller companies it's probably cheaper to still have people do it. So it will take some time before we get to a point where machines take over because they've become less expensive, which is something
you see in different sectors. So that's maybe a positive point: make sure you stay relevant, and it will be hard to automate your job. But even if it's easy to automate your job, I also wonder how excited people get by a job that's so easily automated. Yeah, it would be kind of mundane and repetitive in any case then. Yeah, but also, I mean, you need to work in order to get money, in order to survive.
So in that sense, I'm OK with the fact that some people work because they need to work rather than because they want to work. And then probably they have a ton of hobbies that excite them. So either way, it will work out fine, but it will take some time before those more creative jobs or even your hobbies could be automated. And then on the education part,
¶ Handling hallucination
there are some things we're implementing, and that I see different clients implementing, for AI, because in AI there's always the issue of hallucination. Like you said, you ask a question about JavaScript and you want to generate some function that you maybe want to try out on your machine; the AI probably doesn't know what machine you have. It doesn't know what npm libraries you've installed, or if you even have Node or JavaScript
installed on your machine. So it's going to come up with a solution based on its training data. And there are some things in AI that we like to call guardrails. With guardrails, you can, for example, see the level of hallucination. So the AI itself, even though it doesn't tell you, calculates some score based on how sure it is that it's giving you the right solution. And then there's also something called groundedness.
This means it will look at its relevant training data, or the relevant context it has, and figure out how much of its answer is based on that existing data and how much of its answer is based on the larger set of data in total. And with these kinds of things, if your partner asked the same question and got a low groundedness score, you can actually ask the AI: hey, I'm getting a very low score. Can you maybe try again? Or can you simplify things?
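There isn't one standard API for these scores, so purely as an illustration, here's one way you could approximate a groundedness guardrail yourself: score the answer against the retrieved context with a second model call, and retry when the score is low. The 0-100 scale and the threshold of 70 are arbitrary choices for the sketch:

```typescript
// Illustrative guardrail: an LLM-as-judge rates how much of the answer is
// supported by the context, and we retry a few times on low scores.
import OpenAI from "openai";

const client = new OpenAI();

async function groundednessScore(context: string, answer: string): Promise<number> {
  const res = await client.chat.completions.create({
    model: "gpt-4o-mini",
    temperature: 0,
    messages: [
      {
        role: "system",
        content:
          "Rate from 0 to 100 how much of the ANSWER is directly supported " +
          "by the CONTEXT. Reply with the number only.",
      },
      { role: "user", content: `CONTEXT:\n${context}\n\nANSWER:\n${answer}` },
    ],
  });
  return Number(res.choices[0].message.content) || 0;
}

async function answerWithGuardrail(
  ask: () => Promise<string>, // any function that produces a candidate answer
  context: string,
): Promise<string> {
  for (let attempt = 0; attempt < 3; attempt++) {
    const answer = await ask();
    if ((await groundednessScore(context, answer)) >= 70) return answer;
  }
  return "I could not produce a well-grounded answer; please rephrase or add context.";
}
```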
And if you look at agents, if you work with an agent, it'll probably see there's a low groundedness score and think: I need to figure out if I can get more relevant data from the Internet, if I can go to a different source to collect more data, or if I should actually tell the user, hey, this won't be the best. Or it could actually validate the code for the end user and say: I've generated this code for
you, it doesn't work, so I'm going to generate a new function for you that might work, because now I can execute it in my own sort of JavaScript runtime. Yeah. So the interpretation is then also handled by the agent rather than the person that is prompting. Yes, interesting. Yeah. But then still, don't you think there's going to be a knowledge gap at some point? Because even if a prompt is
¶ Knowledge gaps with AI
then interpreted and gives you an answer that's similar, or it just can't figure out the answer and there is an answer, at some point you still have to be able to figure it out yourself. And if we get to a point where, let's say, more repetitive or mundane tasks, let's call them simpler tasks, would be automated, then those are usually the tasks that more junior people would start off on, and actually learn from and get experience, to grow more senior in whatever they're
doing. If we don't have those tasks anymore, I don't know what junior engineers are going to do. I feel like the role is going to change in what they experience and how they're going to be productive, but I don't see yet how it's going to
happen. Yeah, I guess if I think back, I mentored quite a lot of junior developers, either when they were an intern at my startup, because, well, you're a startup, so interns are a good way to have more people help you, or, even as a startup, you probably end up hiring junior people because they have lower salary expectations. But then there's also the trade-off: they need more help, they need training wheels, they need to be mentored, they need to be taught how to do things.
And my thinking there is, in a world where AI is going to automate parts of our job, AI can be some sort of a mentor for these developers, and these developers are able to use AI. So let's suppose you're in VS Code, you're trying to create a function, and you have a coding assistant that's going to give you some boilerplate for the function. It means the junior developer is able to move faster because they get a boilerplate, but it also means they have to think less
about how to create a function. So like you said, there might be a new sort of group of developers coming up that doesn't know how to create a function because it's already created for them. They might get lazy. Well, maybe they don't get this naming and syntax burnt into their minds or have the muscle memory from doing it over and over again. But they get these training wheels that will be generated for them, and probably a large part of the
code can be generated for them. So they're more directing the code, maybe changing the order, or maybe getting better at asking questions to the AI in order to do this. And this will probably go on for like 5 or 10 years. But after that, it might indeed be that we only spend our brains, or time in our brains, on the more complicated issues, and then there will be some barrier to entry for junior people. Yeah, that's what I think as
¶ Companies need to pivot
well. Something you mentioned with regards to, let's say, code generation kind of triggered me and made me think: there are some platforms whose whole business model is code automation or code generation. And then even beyond that, we also have low-code applications or
tools similar to that. I feel like with the role that AI is now having, as well as growing towards, I'd be very fearful if I was in that market, if my business model was code generation, and all of a sudden agents come out and they generate the code, interpret mistakes, and then execute on top of that as well. I'm wondering how many, let's say, code generation business models, or actually start-ups, we're going to see. And then to even go beyond that,
same for low code. Basically, low code's whole thing is that you don't really have to understand exactly what's going on. We have these integrations and building blocks, and you can kind of clip them together, and their main thing is speed. Maybe if your main thing is speed as a software business model, it's going to be hard to compete. What are your thoughts on that? So it's also one of the fun aspects of running a startup.
If you build a company and you expect to keep the same company, the same concepts, and the same sort of products that you're offering for the next 5 or 10 years, then you probably won't exist anymore at some point, because you need to keep pivoting. You need to keep adding new products, improving, making sure you're building stuff people like. So if you're building a startup and you're relying on no-code or code generation, you can also use AI
yourself. So maybe you can generate better code, you can come up with your own AI assistants. You can do all these things in the sphere of AI that already fit what you've been doing before, and just make it better or faster or more efficient. If you're competing with ChatGPT or other chat assistants, then you're probably miles ahead at this point. So the only thing is you need to make sure you keep ahead.
So you keep improving, you keep using all these different things and make it better and run faster. I don't know about yourself, but if you've ever tried to create social media posts in ChatGPT, they're pretty generic. And if you scroll through LinkedIn or Twitter, you can find all those generated posts.
There are some start-ups that help you to become better at writing using AI. They will still generate those posts for you, but they will also build some sort of knowledge base from your previous posts, from posts that are doing well. Put all this data together, and then it's a bit like the RAG example I gave.
Put this in a vector database, maybe create a specific database for you as a person, and then it can generate new posts based on your previous posts that fit your own tone of voice. And this is something ChatGPT could do but doesn't focus on at this point. If you started doing this, you could probably excel for maybe one or two years. And at some point, some generally available large AI assistant will be able to do this in a similar way or potentially better.
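A minimal sketch of that personalization idea, with past posts simply passed in as style examples; in a fuller build they would come out of a per-user vector database, like the RAG example earlier (the model name and prompt wording are illustrative):

```typescript
// Draft a new post in the author's own tone by conditioning on previous posts.
import OpenAI from "openai";

const client = new OpenAI();

async function draftPost(topic: string, pastPosts: string[]): Promise<string> {
  const res = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      {
        role: "system",
        // Past posts act as few-shot examples of tone, length, and style.
        content:
          "Write a social media post about the given topic. Match the tone, " +
          "length, and style of these examples by the same author:\n\n" +
          pastPosts.join("\n\n---\n\n"),
      },
      { role: "user", content: topic },
    ],
  });
  return res.choices[0].message.content ?? "";
}
```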
So maybe you need to start building agents at some point, and then go on to the next pattern and the next pattern. And if you're doing this, you'll probably be ahead of those general chat assistants for a long while. If you don't, then you're just in it to make quick money. Then you're probably fine for the next one or two years, and then these bigger assistants will just take over whatever you've been doing. Interesting.
¶ Personalization and AI
They don't have to run in parallel, but you can integrate with whatever is coming up. I really like the example that you gave of the social media posts, because yeah, I have experimented with it. I sometimes have trouble writing nowadays. I don't really look too much into what I'm writing, and I'm writing more to engage or also to gauge. But yeah, if I had a tool that was more personalized to my writing style, I would definitely not
have to correct as much. Nowadays I don't use it, because what I get back is too generic; I would rather write it myself. In the beginning I did, and I was like, OK, so I'm correcting here and there, but then I lose, I feel, my authenticity. And then, as you said, you're kind of ahead of the curve if you have that, because ChatGPT and the main models are not really focusing on that personalization aspect specifically for the domain of writing social posts.
But yeah, how long is that going to last then? I think it will be there for a while. It's also in the name, right? It's a large language model. It's built for generic use cases; it isn't optimized for doing specific tasks. And this is different from the models we saw before. I did a machine learning course in 2015, like a three-day course, and I learned to do some Python and to build my own model. It was specifically trained to do something.
So large language models are trained to do everything. And something we're seeing now, and this is a different pattern: besides RAG and agents, which are focused on making LLMs perform better for your use case, you could also fine-tune or train a new model based on previous training data or based on different models. So another pattern we're seeing is companies taking a large language model and then fine-tuning it with specific data. So that's another approach you
can take. But I can easily see large language models being sort of the Holy Bible for AI, and then people creating their own testaments based on whatever is in there. Interesting.
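As a hedged sketch of that fine-tuning pattern with the openai Node SDK: upload a JSONL file of example conversations, then start a fine-tuning job on a base model. The file name and model name are illustrative, and other providers expose similar APIs:

```typescript
// Fine-tuning: specialize a base model with your own example conversations.
import fs from "node:fs";
import OpenAI from "openai";

const client = new OpenAI();

async function fineTune() {
  // Each line of the JSONL file holds one example: {"messages": [...]}
  const file = await client.files.create({
    file: fs.createReadStream("training_examples.jsonl"),
    purpose: "fine-tune",
  });
  const job = await client.fineTuning.jobs.create({
    training_file: file.id,
    model: "gpt-4o-mini-2024-07-18", // base model to specialize
  });
  console.log("fine-tuning job started:", job.id);
}

fineTune();
```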
¶ AI and data privacy
Yeah, I mean, data has always been, even before AI, how a lot of companies make money, right? Because they can learn a lot about a person with regards to what they like, what they might like, or what they would like with regards to cross-sell and upsell. And everything is very
commercial in that aspect. And then especially in Europe, we now have these guidelines, GDPR; you have to have data processing agreements, and you can't actually put your data somewhere on a cloud that lives outside of Europe. And there are these kinds of guardrails with AI too. Do you think we're going down the same route? And would that then limit innovation? Because I feel like the more data the models have about
a person, the better, for example, a personalized experience can be. Yeah, yeah. I think for personalized data, I'd always use something like RAG or agents, where you make LLMs context aware. They don't necessarily train the LLMs on your personal data, because I don't think the LLM in general will get smarter if it
knows your address, for example. But it might get smarter if it gets access to your e-mail history, because you might have been discussing things or concepts, and it would get smarter from those things, which could also be anonymized. It doesn't necessarily need to know your name or your address for that, but it might be able to benefit from knowing your way of thinking. If you look at what a lot of the bigger companies are doing today, they're selling off their
data to these providers of LLMs. So Reddit or Slack, they're all selling off their user data to train these different models, which might make the models smarter and act more like humans, but they won't really benefit from a lot of this data. So there's also the question of what data should I give it in order to make it better, and what data is only relevant at a certain point in time. So I'm a bit worried about this. If you look at Slack, for example, they said you need to
opt out by sending an e-mail. You're opted in by default. Yeah. So I wonder, did their terms of service already allow for this? Have we all been sleeping when we went through these? All the bigger companies are using Slack, so they must have reviewed the terms and conditions and agreed, and now there's this new setting where you need to opt out.
So I'm really wondering what's going on there with all the legal departments in corporate, because they need to opt out, and somewhere they probably already implicitly agreed to sharing their data. Yeah, I mean, the terms and agreements are so huge. And when Slack came out with that, it was everywhere and was actually quite a shock,
right. Because from my side, I would assume that, OK, they're going to start using this and I can opt in to have them use it. But indeed, if it's an opt-out, then they're already using it, because the terms of service, if they haven't changed, already accommodate for it kind of automagically. Yeah, so that kind of amazed me. I mean, for Reddit, that's fine.
Most people on Reddit are already on some sort of anonymized account name, and they don't really share personal data. They share experiences or have questions, and those need answering. But yeah, it's very interesting to see where this will go and what sort of guardrails we will get, not only for figuring out hallucination or how good the answer is, but also what data is shared and what data can
be generated by LLMs. Yeah, yeah, absolutely. I am very curious, though, and I'm very excited to see where it's going. Also in seeing new use cases pop up, or seeing new implementations here and there, and wondering, OK, is this how they've used it, how have they done it under the hood? I'm expecting more conference talks as well, sharing knowledge. So this is going to be a lot of fun, and I've really enjoyed this
¶ LLMs are just an API call away
conversation, man. It was very, very knowledgeable. Lots of learnings in there. Is there still anything you want to share before we round off? Yeah. So the final thing I want to share, and it sort of touches upon your conference talks point: I like to speak at conferences, and I like to talk to developers, front-end developers, web developers. And the biggest thing I wanted to leave behind today is that AI is just a concept. As a developer, you still need
to build the app. You need to make calls to databases, make calls to APIs, deploy your code, write tests or generate tests. And a lot of things you can do as a developer with AI just rely on an API call. So all the big LLM providers have APIs; you can just call their APIs and access LLMs. You don't need to be running Python code. If you're a JavaScript developer, you can just call an API.
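Literally, something like this is all it takes. This sketch calls OpenAI's chat completions HTTP endpoint with plain fetch, no SDK; any provider with a similar API works the same way from JavaScript:

```typescript
// "LLMs are just an API call away": one HTTP request, no SDK needed.
const response = await fetch("https://api.openai.com/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
  },
  body: JSON.stringify({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: "Explain RAG in one sentence." }],
  }),
});
const data = await response.json();
console.log(data.choices[0].message.content);
```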
And probably more of these concepts we discussed, like RAG or agents, will become available either as API calls or through libraries such as LangChain. We didn't really have time to discuss LangChain, but it's a library that sort of abstracts away all these different LLM providers, so you only need to build it once and then you can connect to different LLM providers. These are really good things to think about. If you Google LangChain, you can probably find some nice
tutorials to get started. So in short, every developer can build AI applications, because a lot of things are still the same. The only thing you need to know is how to interact with these different LLM providers. Yeah. And the accessibility barrier is decreasing. It's getting more and more accessible, which is really good for the future. Yeah, definitely. So LLMs are just an API call away, as I like to say. Yeah, for sure. Cool. Then I'm going to round it off here.
Thank you so much for listening. I'm going to put all Roy's socials in the description below. Check them out, let them know you came from our show. And with that being said, thanks again for listening. We'll see you on the next one.