¶ Intro
Hi everyone, my name is Patrick Akil, and in this conversation we get a behind-the-scenes look at what is happening over at TomTom when it comes to generative AI and ChatGPT solutions. What solutions are they building, and how are they building them? No clue, but they bring seamless customer experiences and increased productivity all around. Explaining all of that and more is my friend Yu. She's VP of Engineering for advanced, innovative technologies.
Enjoy. I was wondering, because you
¶ How Yu ended up at TomTom
said you lived in San Francisco and at some point you ended up in Amsterdam, and I think that goes hand in hand with joining forces with TomTom. What was the period leading up to that, and how did you end up at TomTom? Right. So I actually moved to TomTom from Massachusetts. I worked in San Francisco, but then the pandemic happened and we all started working from
home. We had a tiny studio in San Francisco that was getting way too crowded to work from home in, so we moved back to Massachusetts for almost 2 years. During that time, the company that I used to work for, Uber ATG, got acquired by a company called Aurora. So a lot of changes were happening, and I figured after a year at Aurora it was time for me to look for something else. At the time TomTom reached out, and I figured it was related to things
that I've worked on in the past. I worked at Uber twice, on a variety of teams, and one of those teams was map making. OK. That related to what TomTom was working on and wanted me to focus on. So I figured it was a good fit. Exactly. And you wanted to stay within this domain, let's say map making.
¶ Leveraging generative AI
I would say I'm interested in applying machine learning, AI, and basically data to solve business problems, hopefully business problems related to physical life, the physical world. Yeah, I love that. So that's kind of the theme in my entire career: it has to be really grounded in how we live our lives. Yeah.
And with AI as a backbone, the technology that supports that. Yes. How are you doing that within TomTom? How are you leveraging tools and technologies like that in whatever you're doing or delivering? Yeah, that's my favorite topic. That's a good one.
Yes. So in the past year, since the launch of ChatGPT, TomTom has invested in leveraging generative AI across the entire business. The opportunity we see is that generative AI is so different from classic machine learning, in the sense that you don't need that upfront investment; you don't need a large team of specialists working on algorithms.
You can leverage what the cloud providers give you. We partner closely with Azure OpenAI, so you can use the foundation models they provide directly and focus on the business applications: where do we get value out of this technology? I love that. So a couple of things we look at would be, one: how do we bring a better user experience to our end users? This could be the driver driving the car. It could be a developer working with TomTom's APIs and SDKs.
It can also be an internal employee using our internal documentation. I mean, we hear stories from every company complaining about internal documentation, how hard it is to find things. I wouldn't say that's completely solved by generative AI, but it can potentially be greatly improved with generative AI. We're also looking at map making: in our existing workflows, what are some of the steps where generative AI can greatly help? The field is also changing
fast, so we are keeping tabs on what the latest technology is and what the latest regulation is, and we'll adapt as we go. So far we've tried out a bunch of things, and I'm super happy about the direction that we're
¶ ChatGPT location plugin
taking. Is there anything you can share with regards to implementations you've created? And then later on I want to talk about when ChatGPT came out and how you picked that up within the organization, because I think that's interesting as well. Yeah, great questions. So the user-facing application that we are slowly introducing to the world: in the summer, we launched something called a location plugin for ChatGPT.
So back in the day, ChatGPT had the plugin store, and different providers were giving ChatGPT access to their own products, and we did the same. What we gave ChatGPT would be what's called geocoding, meaning you can look up an address, plus routing and navigation and live traffic. This gives any user of ChatGPT access to navigation information. You can ask: what are 10 good parks near me? Then: I like park #3, how do I
get there? So that's enabled through the plugin system, and that was over the summer. Later on we built similar access to TomTom's location APIs and SDKs into what's called a digital cockpit, essentially the infotainment system in the car. We built an AI assistant for it. It would understand your queries, not just "make a call" or "how do I get from point A to point B", but something more fuzzy. Let's see, what would be an interesting example?
Let's say I'm planning a trip to drive to Berlin. Yeah. But I drive an EV, so I need to stop along the way. Okay. TomTom's EV search can find chargers that are compatible with your car. Wow. And also remind you when and where you need to charge. Yeah, and that's just step number one. We're also adding capabilities to search. Usually the charging will take some time, but while the car is charging there, what should I do? Exactly, what can I do? Yeah, exactly. Then it takes user preferences
into account. Maybe there's my favorite coffee place near the charger. Maybe there's a restaurant that I can stop by. Maybe there's a bookstore that I can check out. Those user preferences can be provided to the AI so it plans the route that's suitable for you. Wow. It should enhance the experience of doing a long trip like that, especially when you're driving an EV.
Those are some challenges, yeah. And similarly, searches for parking, or making a planned trip that integrates with your calendar. So instead of me having to go through my day and think about when I need to leave for which appointment, the assistant can take care of that for you if you integrate your calendar. Yeah. And are these examples of ideas that are being worked on, or are they at a stage where they can be put out in
production already as well? So they are being worked on. I wouldn't say they are production-ready; we're working very hard on making it better, and at CES we'll be announcing the product in the digital cockpit. Awesome. So for the audience interested in seeing it: go to CES, stop by the TomTom booth, and try it out for yourself. Yeah. Awesome. Is there a link online that I can share in the show notes as well, where people can see this?
Yeah, we'll share the link. Awesome, then I'll put that in the description. Awesome.
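As a rough sketch of how a plugin like this wires an LLM to location capabilities: the model is shown a list of tool descriptions and emits tool calls, which a thin dispatcher routes to the real APIs. The tool names, schemas, and return values below are invented for illustration; they are not TomTom's actual plugin definition, and the handlers are stubs standing in for real API calls.

```python
# Tool descriptions an LLM would be shown so it can decide which capability
# to call. Schemas here are illustrative only.
TOOLS = [
    {
        "name": "geocode",
        "description": "Look up coordinates for a free-text address.",
        "parameters": {"query": "string"},
    },
    {
        "name": "route",
        "description": "Plan a route between two points, with live traffic.",
        "parameters": {"origin": "lat,lon", "destination": "lat,lon"},
    },
]

# Stub handlers standing in for real geocoding/routing API calls.
def geocode(query: str) -> dict:
    return {"query": query, "lat": 52.37, "lon": 4.89}

def route(origin: str, destination: str) -> dict:
    return {"origin": origin, "destination": destination, "eta_minutes": 42}

HANDLERS = {"geocode": geocode, "route": route}

def dispatch(tool_call: dict) -> dict:
    """Route a model-issued tool call to the matching handler."""
    return HANDLERS[tool_call["name"]](**tool_call["arguments"])

# A model asked "how do I get to park #3?" might first emit:
result = dispatch({"name": "geocode", "arguments": {"query": "Vondelpark, Amsterdam"}})
```

The model never calls the APIs itself; it only produces structured tool calls, which keeps the location logic on the provider's side.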
¶ Embracing ChatGPT within the organisation
When ChatGPT came out, did you and your team, for example, immediately embrace this technology? Because from an organizational point of view, and as a software consultant: using technologies that are established is easier, right, because you can rely on the resiliency that's been proven. Also, towards the future, software is a long-term game, so it's better to adopt technology that has been established.
But it's kind of a dilemma, because then you cannot use cutting-edge technology that is still trying to prove itself. ChatGPT, I think, is unique: people always say certain technology is disruptive, but I think this is truly disruptive with the capabilities that it brings. So maybe it's different. But how did you embrace this new AI technology?
Great question. So at the end of November or early December last year, when ChatGPT was launched, a couple of us just tried it right away and noticed that it was game-changing. ChatGPT was actually a later iteration; GPT-2, I think, was released a couple of years before that. That version I tested as well. I tried it and figured it was a good attempt; I didn't see how I could use it in production right away, but I kept it in my mind to
revisit later. But last December, things were clearly different. ChatGPT's capabilities were at the stage where you could see some hope of using this in actual production. I personally tried it for personal projects for a bit, and it seemed promising. So the same month, December, a couple of people pitched the idea to our CTO and got buy-in to create a small effort to the side of the main business.
We didn't want to disrupt whatever the teams had planned, but we also didn't want to miss the opportunity of a potentially disruptive technology. Beautiful. So it was a tiny effort created to the side. It was also not widely announced to the entire company. But we collected ideas on where we see potential inefficiencies that can be addressed by generative AI. We also experimented: at the time it was mostly prompting, and trying out how much hallucination is tolerable and
what the techniques for reducing hallucination are. I think retrieval augmentation techniques came later, but even in the early days, without all those nice tools, it was already showing enough promise that we wanted to create a small task force to capitalize on this opportunity. What we had in mind was: worst case, we waste a couple of people's time for a couple of quarters, and that's no big deal.
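The retrieval-augmentation idea mentioned here, fetch relevant source text first and constrain the model to it, can be sketched as below. The documents are invented, and the toy word-overlap ranking stands in for the embedding search a real system would use.

```python
# Invented documentation snippets standing in for a real knowledge base.
DOCS = [
    "The EV routing service plans charging stops compatible with the vehicle.",
    "The geocoding service turns free-text addresses into coordinates.",
]

def words(text: str) -> set[str]:
    # Crude tokenizer: lowercase and strip trailing punctuation.
    return {w.strip(".,?!").lower() for w in text.split()}

def retrieve(question: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by word overlap with the question (toy retrieval)."""
    scored = sorted(docs, key=lambda d: len(words(question) & words(d)), reverse=True)
    return scored[:k]

def build_prompt(question: str) -> str:
    """Ground the model: answer only from retrieved context, or admit ignorance."""
    context = "\n".join(retrieve(question, DOCS))
    return (
        "Answer using only the context below. If the answer is not in the "
        f"context, say you don't know.\n\nContext:\n{context}\n\nQuestion: {question}"
    )

prompt = build_prompt("How does EV routing plan charging stops?")
```

The instruction to refuse when the context is silent is the part that cuts hallucination; the retrieval only decides what the model is allowed to lean on.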
¶ Generative AI and rapid prototyping
Yeah. I would say from my experience, that is like a blueprint for how you can set up a dependency that is not going to destroy your operation while still leveraging the benefits it potentially has. And I think those benefits far outweigh the worst-case scenario that it doesn't work, and even then you have the
learnings of this technology. But the benefits are tremendous, and I think established organizations should do that more: try and form small task forces to kind of disrupt the market share that they already own. Then they can incorporate those learnings into the software, and the product will be better because of that. Exactly. Yeah. And with this wave of technology, you can also see benefits right away.
So it doesn't take that long. I think in January we had already narrowed down to a few use cases where it was showing great promise. I recall at the time one goal we created was: on my way from the office back home, find me a Thai restaurant that serves my favorite dish, tell me when it is open (it should be open when I drive home), and also tell me the price of the dish. So we created that task and immediately realized that this can be done. Yeah. Instantly. Yeah.
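Once the model has called the right search tools, a query like that reduces to ordinary constraint checks: cuisine, dish on the menu, and open at the estimated arrival time. A toy version, with invented data and field names:

```python
from datetime import time

# Invented candidate places a search tool might return.
PLACES = [
    {"name": "Thai Garden", "cuisine": "thai", "menu": {"pad thai": 12.50},
     "opens": time(17, 0), "closes": time(22, 0)},
    {"name": "Bangkok Corner", "cuisine": "thai", "menu": {"green curry": 14.00},
     "opens": time(12, 0), "closes": time(16, 0)},
]

def find_dinner_stop(places, cuisine, dish, arrival):
    """Return (name, price) for the first place matching all constraints."""
    for p in places:
        if p["cuisine"] != cuisine or dish not in p["menu"]:
            continue
        if not (p["opens"] <= arrival <= p["closes"]):
            continue  # closed at the estimated arrival time
        return p["name"], p["menu"][dish]
    return None

stop = find_dinner_stop(PLACES, "thai", "pad thai", time(18, 30))
```

The hard part the assistant adds is translating the fuzzy request into these structured calls; the filtering itself is plain code.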
So in January we narrowed down a few applications that we would focus on, and I think we created a slightly larger task force, with one dedicated person in February to build out the solution. So one and a half people spent February building out that AI assistant prototype, and in March there was already a web browser version where we could ask it all kinds of questions. It responds in the right
language, in the right way. Sometimes it gives you a restaurant that doesn't exist, sometimes it plans a route that's wrong, but it's getting there; we were seeing the promise right away. So then we really thought it was worth investing more in. In quarter two it was time to double down and work on this one prototype that has a potential
application that's user-facing. So throughout Q2 we made the prototype work better: less hallucination, more infrastructure that allows us to experiment faster. With these prototypes, I think the iteration speed is really important. If we have an idea, can we try it out and make sure there are no regressions in the capabilities we enabled in the past? Building that capability really allows us to rapidly experiment with lots of ideas and quickly improve the quality
of the product. So by the end of Q2 it was already showing a ton of promise. And then, I don't recall exactly when, but plugins appeared in Q2 of this year. So immediately the idea was clear: we want to enable ChatGPT to access the location technology that we provide.
So then it was a few weeks' work of pulling in the APIs that TomTom provides and writing a tiny bit of code to turn them into plugins for ChatGPT, and we launched that in July. And so far, as far as I know, that's the only set of location plugins for ChatGPT. Well, the sheer speed at
¶ Domain knowledge is key with generative AI
which you go from an idea to actually bringing something live, or having a test version to test your assumptions on whether it actually provides value, sounds tremendous. What about the environment makes it so that that speed can happen? Or is it because of that decision to have a side task force focused on it?
Right. So that's the challenge large companies usually face when they want to bring in innovation, and that's why we created the task force standalone, to the side, not connected to any existing product at the time. We just wanted to test out: is this even possible? If it is, then we introduce the task force to the relevant teams. So towards the end of the summer,
that's also when we started socializing the idea with more teams, trying to think: if we have generative AI's capabilities, what are the most relevant points to deploy it? That's where we identified the opportunities, like giving a better user experience through the driver experience and the developer experience, but also internally: where do people potentially see the use of it in
their daily work? Yeah, the plugins were identified by a colleague who has 20-plus years of domain expertise at TomTom and knows location technology inside and out. And that's where we also recognised the third opportunity, which is: generative AI is fairly easy to use, but you need the right domain knowledge to apply it correctly. And TomTom is a company that sits on decades of experience
with location technology. So we want to enable more teams: help them understand the capabilities of generative AI and how they would apply it, which is up to them. They would identify the opportunities. What we need to do is help them see the capabilities, but also provide templates so they can quickly get started with applications instead of learning the tools. So that was the focus of our quarter three: to enable
more teams to experiment. We had a hackathon at the end of June, and that got, I think, 18 ideas from various teams. They quickly experimented, and I believe one or two teams actually deployed their solution to production after the hackathon. Perfect, which is wonderful. Yeah, it sounds just that easy to use.
¶ Rapid company innovation with AI
Yeah, it's easy to use, but also your engineering environment, let's say, allows for experimentation like that, right? If you have a hackathon day, you believe in the people that you have; you allow for that freedom of exploration. And probably that helps when you say, OK, we're going to create a task force: we have buy-in from, let's say, higher management, who wants to join in?
Was that kind of the plan of attack, forming that task force? Or how were people able to help in that way as well? Yeah. So this really came about quite quickly. Our CTO, Eric Bowman, was very supportive of just experimenting with new things, and the task force even started with people working on it part time.
It was only when we started seeing initial promise, when the AI assistant was really behaving and wowing people every time we demoed it, that we decided it was time to pull in a tiny dedicated team to work on generative AI, not just for that one product but for enabling other teams. So eventually, over the summer, as we launched the plugin, we realised we had synthesized the mission of this team, which would be: build the core AI products, but also enable the
entire company to innovate. Yeah, I love that: you use the technology and you help domain experts, because you need that expertise, and then together you make magic happen, right? And you have an abundance of ideas and you try them out one by one.
Where is the value? And the speed of the organization and the speed of this new technology allow you to go through that. Like we were talking about: Q1, Q2, delivery in July. It's enormously quick, and it sounds like a lot of fun to operate in. Yes, it's tremendous fun. The entire year after the launch of ChatGPT has just been fantastic.
The speed the technology moves, and the speed the company moves in correspondence to leverage the opportunity, will, I think, decide how well the business performs in the next 5 to 10 years. Yeah, I truly believe that nowadays the companies that can move fast and adapt to new technology innovations will stand to win. I think so too, yeah. When it comes to the knowledge,
¶ Cocreating with other organisations
because I think that's an interesting one: do you rely on the knowledge you have in-house, or do you seek partnerships? For example, you said you run on Azure with Microsoft. Do you collaborate and co-create as well? Is that an aspect? Yes, definitely. My personal opinion is that there are a few large players that do the fundamental research and also do the more infrastructural work.
But for companies whose core capabilities and core competency are not in building AI but in bringing technology innovations in, say, location, it is not in TomTom's business to create a new foundation model, but to leverage what's out there. So we made that clear from the get-go: we're not doing foundation models.
We're not conducting research. We're utilizing what's out there, provided by cloud providers like Azure OpenAI but also by the open source community, and using that in the application. That's one of the tenets that we put into the tiny innovation team as well: don't try to do research. In the future, when that research is needed, or whatever is off the shelf is insufficient, we can certainly look into it, but let's start with creating value.
Interesting. And that really resonated with the team and enabled the different teams to participate. Yeah. To me, as someone that creates, as a software engineer, that sounds awesome, right? To use existing technologies and to focus on value, that is what I want. I feel like I don't want to reinvent the wheel when I don't have to.
I just want to use what I can, and the value is where the fun is; it's also where the fulfilment is, to have that wow factor when you're demoing what you've built in a really fast-paced environment, in a quick time. I think that's the fun. Exactly.
¶ BentoML and optimizing model usage
We're also partnering with the external community in creating these different solutions. The technology is so new that very few players can claim they truly know everything about what's going on. So it's a collaboration: exploring these solutions, building new usage patterns. We've seen that a lot this year, which is fantastic.
For example, we have been partnering with a small startup called BentoML. At the time, the solution space was quite new; it's no longer new, but at the time, the problem we were trying to solve was this: we have the AI assistant, and it relies on OpenAI's API as the backend for making the AI brain calls, but it's slow, and it's also quite costly for each round of communication. I think we calculated at the time, based on the pricing before their dev day, that it cost 1.5-ish
EUR. Wow. So that clearly is way too expensive to really use in production. So in trying to solve the latency and cost problem, we partnered with this startup, which could help us experiment with multiple ideas.
Eventually we built an architecture that relies on a simpler model to make the easier calls, and only falls back to the slower but smarter APIs for the most complicated calls. So through that, we both got a partnership with an external company that can help us experiment fast, supplementing the internal innovation, and built a usage pattern that they could potentially use for their other customers. So it's a win-win for both. I think that's genius, right?
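The tiered setup described here can be sketched as a simple router: try the small, cheap model by default and escalate only the hard requests to the large one. The complexity heuristic and the per-call cost figures below are invented for illustration; a real system might use a trained classifier or the small model's own confidence instead.

```python
# Illustrative per-call costs, EUR (not real pricing).
SMALL_COST, LARGE_COST = 0.01, 1.50

def is_complex(query: str) -> bool:
    # Toy heuristic: long, multi-clause queries go to the big model.
    return len(query.split()) > 12 or " and " in query

def route_query(query: str) -> tuple[str, float]:
    """Return (model tier, cost) for a query."""
    if is_complex(query):
        return "large", LARGE_COST
    return "small", SMALL_COST

queries = [
    "Nearest charger?",
    "Plan a route to Berlin with charging stops and a coffee place near each stop",
]
total_cost = sum(route_query(q)[1] for q in queries)
```

If most traffic is simple, almost all calls land on the cheap tier, which is where the latency and cost savings come from.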
Because the slow, let's say costly, model, sure, it has the information. But if you can take artifacts of that, or create your own models through open source technologies that are available and feed them with the information that you have, then you kind of decouple that, and you decouple the costs of that as well. And you reap the benefits in latency and lower cost there as well. Yeah, I like that a lot. And through the
¶ Partnering with Microsoft
productionization, it directly leverages our partnership with Microsoft. And it's wonderful for companies to have access to these big players with tremendous innovation power. Later on, as we built the product, we got to a stage where we are exploring offline offerings. Imagine you're in a car: if it goes into a tunnel and suddenly forgets how to respond to you, that's clearly not a great user experience.
So that's where we need to look into not just the cloud model, but also something that resides in the car, using the car hardware to do the compute, but that can still respond to simpler commands. Interesting. So through that research, Microsoft was able to connect us with researchers at MSR and with various engineers who could give us pointers on how this can be done.
So for a company that's not specialised in AI but wants to leverage AI's capabilities, I think these partnerships are crucial. Yeah, I think so too.
¶ Protecting user data privacy
I was thinking, when you were talking about the digital assistant being aware of user preferences, where you want to go, even what your favorite things are: how do you take into account user data privacy? Because from a standpoint where you have information that is private and valuable, using it in existing tooling is like giving your data away, and data can be gold to other people, or very private to some.
Great question. TomTom has always been very careful about protecting user data privacy. The approach that we're taking is, first, it's just opt-in. You don't have to give access to your calendar or your private data if you don't want to. But if you choose to, the AI algorithm will use it for the session and then forget about it; it's deleted after a period of time.
So we do want to make sure we utilize the user information to make the experience seamless, and we've tried it out ourselves, it is much better, but also not retain data that we don't need after that interaction. Yeah. What would happen if it does retain the data? Would it increase in value, you think? Or does providing it temporarily already have the value you're looking for? To provide the value for the users, just having it temporarily is already good,
yeah. But for the algorithm to improve, we would need access to some user data to improve it over time. There are ways to anonymize the user data before using it for improving the algorithm, though. We're still experimenting with that. I love that.
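The retention policy described, use opted-in context for the session and then forget it, can be sketched with a time-to-live store. The storage details are invented for illustration; a production system would sit behind consent checks and proper infrastructure.

```python
import time

class SessionStore:
    """Holds opted-in personal context, forgotten after a time-to-live."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._data = {}  # session_id -> (stored_at, payload)

    def put(self, session_id: str, payload: dict) -> None:
        self._data[session_id] = (time.monotonic(), payload)

    def get(self, session_id: str):
        """Return the payload, or None (and delete it) once the TTL has elapsed."""
        entry = self._data.get(session_id)
        if entry is None:
            return None
        stored_at, payload = entry
        if time.monotonic() - stored_at > self.ttl:
            del self._data[session_id]  # expired: forget the personal data
            return None
        return payload

store = SessionStore(ttl_seconds=0.05)
store.put("trip", {"favorite_coffee": "flat white"})
```

The assistant reads preferences through `get` while planning; after the TTL, the same call deletes the data, so nothing personal outlives the interaction.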
¶ Productive engineers with GitHub Copilot and ChatGPT
Yeah. I think we touched mostly on, let's say, customer-facing experiences, and I think most of the innovation, at least initially, that I've seen in my little network is with regards to digital assistants. And then, for a software developer, a lot of productivity tools have also popped up that can help you create software faster. Have you been experimenting internally with those as well? Absolutely, yeah. A lot has happened in the
past couple of quarters. I think within TomTom, GitHub Copilot was enabled in Q2. I just read the e-mail summary from our awesome team: I believe close to 500 engineers have tried GitHub Copilot, and most report that they love it and that their productivity has been boosted by having access to this awesome tool.
There's also an AI code review tool that we created with a bunch of other teams. We created this small community that is making prompts and instructions for the AI code reviewer, to give more relevant information and also cut down the noise so you don't find it too annoying. A lot of teams are experimenting with this. And that's an internal tool? Yes. OK. We've also launched an internal ChatGPT,
even, I think, early in Q2. Back then, I think most people's concerns would be: I don't want my data to be shared with OpenAI. Or: can I use my personal account? But if I use my personal account, can I use this for work? So we created an internally hosted version of ChatGPT that you can use for work, where you don't have to worry about that data being used to train the next version of GPT. So that is really important. And, quite fun:
an engineer, Ryan, in the company also created a very nice UI that I personally find to be better than the ChatGPT UI. I love that. It remembers your prompts and gives you pointers on what you can use the tool for, so that also codifies some of the training into the tool itself. Then a new user of Chat
GPT, if they launch what we call Chatty, at Chatty dot TomTom I think, has access to this tool, but also, on the sidebar, it gives all kinds of pointers on how to use the tool more efficiently. So the training is already there.
¶ Developer experience teams at TomTom
Awesome. It sounds like there's a lot of innovation going on in creating internal tooling, organically gathering ideas and executing on them. Do people get time for that? Is that all own initiative, own time? How does that internal tooling, how does that innovation get facilitated?
Yeah. So the teams that are responsible for the digital workplace and the developer experience allocate time for this. They look into the benefits and into the guidelines we should give our engineers when we roll out these tools. It's also about partnering closely with the providers, Microsoft, in bringing the right level of control.
So I'm very grateful that our CTO allocated the resources to allow for these to come in. I think TomTom is one of the earlier players to give its engineering teams access to these tools. Yeah.
¶ Open sourcing AI solutions
And I think you're reaping the benefits because of that as well. Are you thinking of, let's say, outsourcing (not outsourcing, open sourcing) those internal tools eventually as well? Because I think it could be interesting for other organizations, and it shows TomTom also values open source in that way, and giving back. Great idea. I'll tell everybody about this. Because I would want to use it, especially if you say the interface is better.
I like it. I personally really like it, yeah. That's cool. We also have a data unit, specifically from Xebia, and their innovation is also happening in combination with helping partners and customers, and internally: they immediately created this plugin that is also internally hosted, where you can access it through Slack. So I don't have to go to my browser; I just talk to my Slack GPT and it gives me anything I need, basically. And I'm also pushing them to open source it.
Very nice. Yeah, because I talk to people, I get them enthusiastic, and they're like, can I use it? Yeah, exactly. I will look forward to using it if you decide to open source. Awesome. Then I'll let them know; that's even more leverage that I can use in that way.
¶ Past vs. present machine learning investments
For me, it's interesting that generative AI is different. Maybe I should take a step back. Before, when companies wanted to work with data, they needed to have a lot of infrastructure internally already in place, right? Because data is only valuable if it's the right data in the right amount: garbage in is garbage out with regards to data, and the output suffers because of that.
So the infrastructure needed to be there, and it's very hard to do that retroactively. Especially for organizations like TomTom that have been established for a longer period of time, retroactively putting the infrastructure in place could be quite costly. But now I feel like, with generative AI, because you're leveraging existing technologies, getting up and running is not as much of an issue as relying on your own data analysis in that way. Yeah, that's spot on.
In the past, classic machine learning required so much upfront investment. Your data needs to be in the right shape. Your pipelines need to be in a place that people understand. Your MLOps needs to be in the right place. And once you deploy into production, you need to monitor it on a regular basis and adjust as needed. Yeah, the whole operations side, and all that. I think it's very non-trivial to set up.
There are many start-ups in this space just helping companies do that. I chatted with a few founders about the duration it takes to level up a company's entire data maturity, and I heard it takes two to three years. Even in the most ideal situation, it still takes a minimum of 18 months. So that's a long time before a company can really benefit from AI. I'm truly glad to see generative AI; it makes all that, I wouldn't say completely unnecessary,
you still want good data infrastructure regardless, but it makes getting started way easier, and you can also start small. With traditional machine learning, you have to aim for a large enough opportunity: if I build a model, do I expect to see maybe 10 million in return? Otherwise it's probably not worth the effort.
But with generative AI, regardless of the size of the opportunity, it is so easy to use that you really can try and aim for those smaller tail applications. I think Andrew Ng described the situation quite well: the head use cases are high complexity but large opportunities, and the tail applications are smaller in potential opportunity but also fairly easy for domain experts to tackle.
So now you can capitalize on opportunities that were previously unavailable to classic AI. Completely inaccessible, yeah. Interesting.
¶ Where is generative AI heading?
We've talked about Productivity Tools and kind of digital assistants. Is there a third variant or where do you see kind of this generative AI leading us also towards the future? Because for me when it initially came out, I was like, OK, this is this is going to make a lot of user experiences seamless and I can just ask a question and it knows kind of what I need and it figures it out. We're together, we figure it out. But initially, hopefully with the right question you get what
you need, right? Whether it's I need to go from here to Germany and I I drive a Tesla, how do I do that? And it just says, OK, we have to park there because this is what this is the traffic and it's going to take X amount of time that's already amazing. But where do you think it's even even going to progress towards when it when we look at the future? Yeah, so that many thought leaders have shared their point of view. I can only bring my personal view.
I think in the future all the tools will have their own copilot, and not to be confused with the copilot in self-driving, that's a different story. But all the tools we currently interact with will have an interface from a copilot, and that will greatly boost productivity. Maybe work that needs to be done by a whole team today can be done by one person in the
future, and not a distant future. Then there's also the agent described by Bill Gates. So instead of just helping you make decisions, it can also make some of the decisions and take the action for you. The simpler actions: booking a table for you, maybe booking a train ticket, maybe sending an email on your behalf. If it's simple enough, maybe you review it or maybe you don't. So that I think can really open up different opportunities.
In the past, to build a startup you would build a small team, each person does something specialized, and then you launch the product. In the future, maybe for one idea you just talk to the AI agent, and it executes all those ideas for you and experiments. And in the past you influenced through talking to multiple stakeholders, but in the future maybe just a tiny company with a couple of people can make the decisions.
If you have multiple ideas, you experiment with all of them.
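The agent idea discussed above can be sketched in a few lines. This is a hedged, minimal illustration, not any real product: the tool names, the `plan()` logic, and the review flag are all hypothetical, and a trivial keyword matcher stands in for what would really be an LLM deciding which action to take.

```python
# Minimal sketch of an "agent" that takes simple actions on the user's behalf.
# The planner below is a stub standing in for an LLM; every tool name and
# rule here is a hypothetical illustration.

from dataclasses import dataclass
from typing import Callable

# Registry of simple actions the agent is allowed to take for the user.
TOOLS: dict[str, Callable[[str], str]] = {
    "book_table": lambda arg: f"Booked a table at {arg}.",
    "book_train": lambda arg: f"Bought a train ticket to {arg}.",
    "send_email": lambda arg: f"Sent an email to {arg}.",
}

@dataclass
class Step:
    tool: str
    argument: str

def plan(request: str) -> Step:
    """Stub planner: a real agent would ask an LLM which tool fits the request."""
    if "table" in request:
        return Step("book_table", "the restaurant")
    if "train" in request:
        return Step("book_train", "Berlin")
    return Step("send_email", "the recipient")

def run_agent(request: str, needs_review: bool = False) -> str:
    """Pick a tool, execute it, and optionally hold the result for user review."""
    step = plan(request)
    result = TOOLS[step.tool](step.argument)
    # For simple enough actions, the user may skip reviewing the result entirely.
    return f"[pending review] {result}" if needs_review else result

print(run_agent("please book a train ticket"))
# Bought a train ticket to Berlin.
```

The design point is the split between deciding (the planner) and doing (the tool registry): keeping actions behind an explicit registry, with an optional review step, is one way to let an agent act for you while staying auditable.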
¶ Uncertainty and fear of AI
Yeah, I feel like access to information was, a few years back, with the Internet, with smartphones, at the point of our fingertips. And now we get access to more and more execution power, right? With generative AI being like, OK, we want to do X as an idea, you hook into it, you get it done faster and you get feedback faster. Yeah. Well, also
within the organization, did that lead to people being fearful of losing their job, or their job evolving in a way? And how are you accommodating for that, if you are? Right. There's always the concern that AI is replacing humans. Yeah. So we'd like to think there are two modes of using AI. There's completely automating away the task and
making the human irrelevant. But there's also a large part that's about augmenting humans, performing their tasks with much higher quality, faster, and so on. Yeah. There are also these newer opportunities that we don't even know today. It's like the previous generations of technology innovation: something new always pops up that requires humans to perform different kinds of tasks that we can't even think of today. And those would open up new opportunities for humans.
And I think those will always be there, right? It will evolve. I do think that right now, just by virtue of things being complex, let's say in a world without generative AI, things are in some parts being done manually, continuously, over and over again. I think those are going to get automated, and those might get replaced, or you might get different responsibilities, more of a QA position. I was talking to a colleague
¶ Knowledge gaps because of AI
actually on the podcast as well, and he says when the production power, let's say through generative AI, when your execution power increases, then you still have to have that quality check. And for me, the interesting thought is: to be able to do that quality check, you also need to know and have the knowledge of what is happening behind the scenes. And senior people can do that, because they have built up that foundational knowledge.
But to then get into this field, I think, would be difficult, because the way you would get into this field and the level of knowledge might be a bigger hurdle than it is nowadays. That's kind of my concern, I think. Yes, I agree. I have the same concern and don't have an answer for solving that. Yeah, so the senior people have already seen how technology was without generative AI and how it
is going to augment. But then for new people breaking into the field, what should they learn? What should they specialize in? How should education as a whole evolve to make humans ready for the tasks of the future, not the tasks of the past? That's an open challenge. I don't know if we have an answer for that yet. Yeah, me neither. But I think it's a very interesting one, and it might have to do with how you do it as fast as possible, or how you get feedback, or
where the value is. Because ideas are also going to be more and more valuable, since you can try them all out, basically. Or how do you gather feedback, and where is the quality in that way? I feel like we're going to have different artefacts, and more so different KPIs, in using this technology, and I'm very curious to see where it ends up. I feel like things
¶ VisionGPT
are going fast though, like faster than... You're working with it day-to-day. I work on it when we have innovation days, or when we have time off from the client to work on innovative things. For example, you said, oh, it's easier to say I want to book this ticket for this concert,
X, Y and Z. I saw this open source tool which is called Vision GPT, and with that, you download it, you install it, and you can do exactly that: it clicks through the browser, you can see it. It's really fun to play with. Yeah, and that's exactly what it is.
¶ Sensory input in GPT
Absolutely, yeah. One topic that I just thought about would also be multimodality. So today GPT can already see, hear and speak. There are other modes of data that we haven't explored in depth, maybe with sensing. So sensors can sense your speed, your acceleration, and how would AI make use of that and know your context better to provide more contextual help? That would be something that I'm personally very curious about. Yeah, I think so too.
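The sensing idea above can be sketched very simply: turn raw readings into a short context string and prepend it to the user's question. This is a hedged illustration only; the thresholds, the `[context: ...]` prompt format, and the function names are assumptions, not any real TomTom or GPT interface.

```python
# Hypothetical sketch: using speed/acceleration sensor data as extra context
# for an assistant. Thresholds and prompt format are illustrative assumptions.

def describe_motion(speed_kmh: float, accel_ms2: float) -> str:
    """Turn raw sensor readings into a context string a model could use."""
    if speed_kmh < 1:
        state = "stationary"
    elif accel_ms2 < -2.0:  # strong deceleration
        state = "braking hard"
    else:
        state = "driving"
    return f"driver is {state} at {speed_kmh:.0f} km/h"

def contextual_prompt(question: str, speed_kmh: float, accel_ms2: float) -> str:
    # Prepend the sensed context so a (hypothetical) model can tailor its answer.
    return f"[context: {describe_motion(speed_kmh, accel_ms2)}] {question}"

print(contextual_prompt("Where can I charge?", 95.0, 0.3))
# [context: driver is driving at 95 km/h] Where can I charge?
```

The point is only that sensor streams become one more input modality: the same question can get a different, more contextual answer depending on what the sensors say about the moment.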
It wasn't even a thought that popped up, but the input can be varied and can all be leveraged to have kind of a better experience, or to gain more information about your environment, how you're driving, or wherever you might be. Yes. So the interface with the computer is no longer through just the computer or the phone. It could be something completely different. Yeah, many interfaces. Yeah. Yu, I've really enjoyed this conversation, I must say.
Likewise. This was a lot of fun. Yeah. And I can tell you're really passionate about this. Yeah. Was this kind of what you expected going into this as well? No, I had no idea, but this is a lot of fun. Awesome. And thank you so much for coming on and sharing. Is there anything you still want to share before we round off?
¶ AI hope and fear in big tech
I'm curious, from your conversations with different people working in the field, what would be one thing that people worry about, and what would be something that people are hopeful about? From what I've heard, people worry about being obsolete with the tasks that they have. Like, I have friends that work at big tech, more so in, let's say, supportive functions, whether it's sales or whether it's operations.
And people there are more fearful of the technology, because what they are doing is supportive towards other, let's say, business value tasks, whether it's sales or software delivery. And those supportive functions might be replaced by tooling, and therefore they think that either their whole team or their whole department might become obsolete, not now, but in the span of, let's say, one to
three years. That is a general fear that I've perceived within my network and talking to people. It's understandable. I think it's understandable, but I don't quite have the solution. I do think you have to be in a position within an organization to deliver value. But if that value all of a sudden comes from a different tool, then I don't quite know how people can pivot from that. That's a difficult one.
Where I do think people see value is exactly in what you say: the time to get up and running is incredible. Your speed in experimenting with ideas and your output is tremendously fast. So people are getting feedback faster and faster, seeing where the value is, and they can deliver quite fast as well. That's also why I challenged you in asking, OK, where is the readiness of the innovation you're doing? Is it production ready? I feel like not many people are
production ready yet. They're still in the experimentation phase, but things are moving quite fast, and we'll see a lot of production-ready generative AI solutions quite quickly. That is kind of my prediction there. Yes, yeah. When you use the technology to aid people but not completely replace people, then it's easier to get to production ready. Yeah, the fully human-out-of-the-loop solution, I don't know yet exactly. We're not there.
We're not there yet, no. But I do think within the span of a few years. Things are moving quite quickly, and I think they'll move that way as well, yeah. Yeah. So we're off to a very interesting future. Absolutely. Yeah. Thank you so much again for coming on and sharing. Thank you for having me. It's been fun. Awesome. Then I'm going to round it up. I'm going to put all your socials, as well as some of the stuff we talked about, in additional links in the show notes.
Check them out. And with that being said, thank you for listening. We'll see you in the next one of Beyond Coding.