Right now we're at the stage where AI can solve specific tasks. So what I tell companies is you should look at all the tasks that your employees are doing, and then basically wildly experiment across as many of those tasks as possible to try to identify where AI is really able to create a lot of value.
Hi, this is The Marketing Meeting and I'm your host, Itir Eraslan. Every two weeks I meet with experts and we talk about topics related to brands, marketing, and businesses, and sometimes add random lifestyle topics too. I hope you enjoy the show. Welcome to The Marketing Meeting. I'm happy to host Iavor Bojinov. He's an assistant professor of business administration at Harvard Business School and the co-PI of the AI and Data Science Operations Lab at the school as well.
Welcome to The Marketing Meeting.
Thank you so much for having me. I'm excited for our conversation today.
I'm glad that Felix introduced us a while ago. I was pushing him with some questions on AI, and he said, of course I can answer those, but there's another expert who would be really helpful for you to talk to. That's how I think we got connected, through Felix. And I'm super excited. Saying hi to Felix from here. Many of my connections are from LinkedIn, and that's the main platform where I announce these podcasts.
And I know that you worked at LinkedIn as a data scientist, so I decided to start with a question from there. I remember that in one of your research projects, you highlighted how some LinkedIn connections matter more than others. Can you share some details about that project with us?
Yeah, absolutely. So maybe it's helpful to give you a little bit more of that background. I have a very non-traditional background for a business school professor.
I actually have a PhD in statistics from Harvard, where I studied how to handle missing data and how to do causal inference, trying to understand cause and effect, and, through that, a lot of experimentation. During my PhD I worked with a number of different tech companies, Google, LinkedIn. When I finished my PhD, I actually joined LinkedIn full time. I'm a huge fan of the platform.
I still have many friends that work there, many connections. One of the most central ideas of LinkedIn, their vision, is basically to create economic opportunity for every member of the global workforce. The way they operationalize that is to build a social network that allows people to connect to people they know. And they do this through their proprietary algorithm, known as the People You May Know algorithm.
So you've probably seen this on their website; it recommends, okay, you might know these people. And the idea there is that if you're able to expand your network, you have access to more opportunities and you'll be able to get better jobs. This all goes back to a theory known as the theory of weak ties, from Mark Granovetter back in the seventies.
What he did was basically interview a lot of people here in Massachusetts, right outside of Boston. He asked them, where did you find your job? How did you find this job? Who connected you to this opportunity? And what he found was that it wasn't your best friends that helped you get jobs. It was people who were maybe one step away from you.
So it was someone in your area that maybe you met at a conference and then stayed connected with, or maybe your career mentor, or maybe your friend's uncle or best friend, something like that. Those people seem to be the ones that are much more useful in getting new job opportunities. And since the seventies, this is probably one of the biggest theories in social science, probably one of the most highly cited ones.
And this is the motivation for all these social networks; they all come back to this theory. What we wanted to do was test whether this theory holds up in the digital world. We basically leveraged a number of historical experiments that LinkedIn had run, which randomly varied the algorithm being used under the hood for People You May Know.
Sometimes that algorithm would recommend what's known as triangle closing, connecting you to people you already have many mutual connections with, and sometimes other versions of the algorithm would connect you to people further afield. This is basically an experiment, because LinkedIn would always try many different versions of the algorithm to figure out which version works best for its customers, right?
And what we found, and we were able to update this theory of weak ties a little bit, was that it's actually a bit of a U-shaped relationship. People who are really close connections, think of people you went to university with or people you're currently working with, are not that useful, because they have all the same information as you. They're not giving you access to something new.
And on the other extreme, people who are super weak ties, where they're in a completely different industry and you have no mutual contacts, are not that useful either, because the information they have is just too far afield. What we found is that there's a sweet spot of people in between the really weak ties and the really strong ties, people we call moderate ties. They are an order of magnitude more likely to help you find a new opportunity and get a sequential job transition, which means getting a job at their company after you've connected with them. That helped us verify that LinkedIn is creating a ton of value, and it helped them realize that these algorithms are super powerful.
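To make the analysis concrete, here is a rough Python sketch of comparing job-transition rates across tie-strength buckets, in the spirit of the study described above. The records, the use of mutual-contact counts as a tie-strength proxy, and the bucket thresholds are all hypothetical illustrations, not LinkedIn's actual data or methodology.

```python
from collections import defaultdict

# Hypothetical records: each row is a new connection formed under a
# randomized "People You May Know" variant, with the number of mutual
# contacts (a crude proxy for tie strength) and whether a job
# transition to that contact's company followed.
records = [
    {"mutuals": 25, "job_transition": False},  # strong tie: redundant info
    {"mutuals": 8,  "job_transition": True},   # moderate tie: sweet spot
    {"mutuals": 9,  "job_transition": True},
    {"mutuals": 0,  "job_transition": False},  # very weak tie: too far afield
]

def transition_rate_by_bucket(records):
    """Group ties into strength buckets and compare job-transition rates."""
    def bucket(mutuals):
        if mutuals >= 15:
            return "strong"
        if mutuals >= 3:
            return "moderate"
        return "weak"

    counts = defaultdict(lambda: [0, 0])  # bucket -> [transitions, total]
    for r in records:
        b = bucket(r["mutuals"])
        counts[b][0] += r["job_transition"]
        counts[b][1] += 1
    return {b: transitions / total for b, (transitions, total) in counts.items()}

rates = transition_rate_by_bucket(records)
```

With enough real data, a U-shaped curve would show up as the moderate bucket having a markedly higher rate than either extreme.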
And they can tune them to really deliver on their vision. So there were lots and lots of learnings from it. And it was amazing, actually, that LinkedIn let us publish this research, because we did it internally, and then they let us share it with the broader community. So that's what that work is about.
So in that case, it means that we have to make sure we pay attention to People You May Know in our app.
Absolutely. And not just for work, of course. People have shown that weak ties are more useful when it comes to innovation, so you get new ideas from these people. Basically any economic outcome you can think of has pretty much been connected to weak ties.
So these are super powerful, and I always encourage everyone to broaden and open up their social network. LinkedIn actually ran a major media campaign around having open social networks, where you're connected with many people, rather than a closed social network where you just have a handful of really strong ties. And that's what's really good for the economy.
Perfect. So on this podcast, I try to talk with experts who are dealing with AI day to day, but in the end I tie it up to marketing, because this is a podcast about marketing. And for many of the companies that I work with or talk to, their biggest concern is actually which platform and which software to adopt, and which parts of the business to integrate AI into. Many of them also have a question mark about where to start.
So that's why I would like to start with that and then link it to the marketing piece. When a business considers building an AI team or putting AI systems in place, what should they consider when they evaluate new tools or software?
Yeah, that's a great question, and it's one I hear all the time. The way I like to think about it is that right now we're at the stage where AI can solve specific tasks. So what I tell companies is, you should look at all of the tasks that your employees are doing, and then basically wildly experiment across as many of those tasks as possible to try to identify where AI is really able to create a lot of value. Where I've seen companies fail is when they try to completely automate a whole function. I've seen a number of companies say, okay, we're now going to do AI marketing, and we're going to use ChatGPT to create our adverts, create our creatives, everything. They try to automate the whole process, and inevitably it fails, because, again, AI is really good at very niche, specific tasks right now, and it's not very good at linking those tasks. Now, that's going to change in time, but right now we're just not there.
So what I say to companies is: experiment wildly, try as many different task-based solutions as possible, and then just be really scientific about it. This is one thing that's been great about the marketing world over the past decade or so: they've been really pushing for a scientific approach to marketing, where you try to quantify everything. You try to measure the ROI on this, which ad is better, right?
So I think you need to bring that experimentation mindset to figure out what is really adding value. But again, it has to be at the task level, and then for each task you have to think about: are we going to automate it, or are we going to augment it? Is this a task that's going to be fully delegated to the AI?
Or is it a task where maybe just the first draft will be done by the AI, and then the human comes in and fixes it up and improves it? So again, I'm giving you very vague answers, but it is a really broad question, and experimentation, trying things out, and working at the task level are really the big key takeaways.
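The automate-versus-augment triage described above could be sketched in code like this. The tasks, quality scores, and thresholds are illustrative assumptions for the sketch, not measured benchmarks or anyone's actual decision rule.

```python
# Task-level triage: pilot an AI tool on each task, measure quality and
# time saved, then decide whether to automate, augment, or leave it alone.

def triage_task(ai_quality, human_quality, time_saved_pct):
    """Classify a task based on (hypothetical) pilot measurements."""
    if ai_quality >= human_quality and time_saved_pct > 50:
        return "automate"     # AI matches humans and is much faster
    if ai_quality >= 0.7 * human_quality:
        return "augment"      # AI drafts, human reviews and fixes up
    return "keep-human"       # AI not yet good enough at this task

# Hypothetical pilot results for three marketing tasks.
pilot_results = {
    "first-draft mood boards": triage_task(0.9, 0.8, 70),
    "final ad creative":       triage_task(0.7, 0.9, 40),
    "campaign strategy":       triage_task(0.3, 0.9, 10),
}
```

The point of the sketch is the shape of the decision, one verdict per task, rather than the specific cutoffs, which any real team would calibrate from its own pilots.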
I think your point about seeing it as tasks, rather than trying to combine it as a whole, is an important one, because even in marketing, for example, there are places that you can automate with AI, but there are bits and pieces that you cannot really automate. It's a big orchestration job. That's the dilemma I'm facing right now, for example, with the companies I'm trying to integrate AI into.
So, you know, you automate email or newsletters or those types of things. But when you think about consumer behavior, for example, how do we integrate AI into understanding consumer behavior? That has nothing to do with automation; automating emails is a very basic task where you can integrate AI. In that sense, you talked about return on investment, which has always been a hot topic for a marketer.
Okay, what's the return on investment for marketing? And then on the business side there's another question: is AI economically worthwhile, meaning does it bring any economic value to the business? Since you see everything as a whole from the business side, how can you assess the economic impact of AI within the business?
Yeah. So there were a few questions within that broader question. Maybe just to quickly comment on what you're saying about automating the email and the consumer behavior: I think that's exactly right. That is the strategy of going task by task. And sometimes, as you said, AI is really, really good at a task. We know, for example, that it's good at generating a first draft of mood boards.
It's really good at that because previously a marketer would spend a lot of time on Pinterest and so on trying to find the right images, and they know what they're looking for. Now they can just type it out and it gets them the draft. But that's not the final output, right? The final output still requires a human to take that idea further. That's where this little bit of augmentation can save huge amounts of time.
So connecting that to the ROI comes back to this experimentation mindset. I'm working with a number of companies where we're trying to bring rigorous experimentation to the deployment process of AI. You can do this through mini hackathons, where you have people do tasks using the tool versus not using it. There's a now-famous experiment from about a year ago that BCG and HBS ran jointly, where they had consultants do a number of different tasks, and they measured the quality of the output, the time it took, and many other metrics, to get a sense of how these tools impact quality and productivity.
Now, of course, that's not the ROI, but those are inputs into the ROI equation. So, building that up to your broader question of what the impact is within an organization: again, it comes back to experimentation, and to being as careful as you can be, measuring as many different outcomes as possible around productivity and quality.
And then, at the end of the day, you probably will have to do a little bit of a back-of-the-envelope calculation to map that to ROI, because internally it is really hard to say, okay, what's the ROI on this one particular employee? That's a really hard equation. If we're deploying things to our consumers, as tech companies do, it's really easy to measure revenue and things like that. But when it's internal uses of AI, that becomes really tricky.
So that's why companies usually focus on quality. Work satisfaction is another really big measure, and productivity, et cetera. Those are the types of metrics that companies are really measuring, and then they try to map them back to ROI.
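A back-of-the-envelope calculation of the kind described above might look like the following. Every number here (hours saved, hourly cost, headcount, tool cost) is a made-up assumption purely to show the arithmetic, not a real benchmark.

```python
# Map measured pilot metrics (time saved) to a rough annual ROI multiple.

def estimate_roi(hours_saved_per_week, hourly_cost, n_users,
                 weeks_per_year, annual_tool_cost):
    """Return (value - cost) / cost, a crude ROI multiple."""
    annual_value = (hours_saved_per_week * hourly_cost
                    * n_users * weeks_per_year)
    return (annual_value - annual_tool_cost) / annual_tool_cost

# Hypothetical inputs: 2 hours/week saved per user, $60/hour fully loaded
# cost, 100 users, 48 working weeks, $200k/year for the tool and rollout.
roi = estimate_roi(hours_saved_per_week=2, hourly_cost=60,
                   n_users=100, weeks_per_year=48,
                   annual_tool_cost=200_000)
# annual_value = 2 * 60 * 100 * 48 = 576,000, so the multiple is 1.88
```

The fragility the speaker points to lives in the inputs: "hours saved" only comes from an experiment, and quality or satisfaction effects do not fit neatly into this one formula.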
Because at the end of the day, for a business to adopt something new, a new technology, it will all tie back to the economic value that it creates, right? If you are able to solve a simple problem in your workflow by just hiring a part-time employee, rather than adopting a whole technology across the company, then you will probably, for the time being, stick to hiring that part-time employee and not adopt a whole new technology around it.
I think that's something a lot of people are missing, because at the end of the day, AI feels quite democratized, and it feels like it has no price. You might think it's free, but it's not actually free.
It's super expensive and super hard to do. And I see this all the time in the IT world, where a lot of companies have outsourced a lot of their IT work, and it's so cheap that the cost of trying to automate any part of it looks astronomically high by comparison. And you're kind of setting yourself up for failure
if that's the mindset you have, because we also know that with new technology there's always a bit of a J-curve, where productivity, quality, everything drops initially and then grows. So initially, the adoption of AI is going to be painful. You're going to have to deal with change management, which is much harder with AI than with traditional IT; it involves a lot of retraining, et cetera.
And you're going to take a huge cost setting this up, and then an initial drop in productivity. But for me, it's still a no-brainer that you need to get this done, because the reality is that this is where every company is going.
And the question I ask executives all the time is: if you were to redesign your company today, forget about all your tech debt, forget about everything, you just keep your value proposition, but you can redesign your operating model, would it look different? The answer I inevitably get is that it would look fundamentally different from how it looks today. Then they say, okay, but we can't get there, because we have tech debt,
we have all these other things. And what I say to them is, if you're not going to get there, someone else will, and that someone else is going to come in and be a competitor. We've seen this time and time again with the rise of the internet. Companies like Amazon were able to completely destroy the book retail business, which barely exists now. You've seen this time and time again, and it's going to happen again.
So if companies are not ready to really integrate AI and accept the fact that it's going to be costly and painful right now, they're going to run into trouble in the future. And I do think a lot of these companies are just going to go out of business in the next 10 to 15 years.
AI came along like a magic wand, right? It's as if everything happened a year ago, and now we expect everything from it. But although some people may not like it, I would like to talk about the challenges that come with it. What are some of the key challenges that you see in these models?
Absolutely. So first of all, AI did not come a year ago, right? AI has been around for a very, very long time. It's had many different names over the years.
If we look at companies like Google, that is what gave them a competitive advantage over Yahoo and everyone else: they were able to build predictive machines that were so powerful that they could do personalization. Netflix, for example: their recommendation algorithm is far superior to anything that Disney or anyone else has, and that's why they're worth so much money. So just to make that clear,
yeah,
What changed is that ChatGPT came out, and now when I teach about AI, I don't need to spend the first five minutes saying AI is important and AI is cool and you should be paying attention. That's taken for granted. But it has been around for a while now. When it comes to the challenges, I think there are many different challenges. The way I like to think about it is through the lens of how you develop a specific AI project.
And the way I think about that is essentially four steps: selection, which is picking what projects you're going to work on; development, which is the actual building of the AI; evaluation, which is the experimentation and measuring the ROI, and whether it works and is adding value; and then the adoption piece, which is the change management piece of it.
And of course there's a fifth step, which is managing the overall architecture across all of these. But if I spend just a few minutes zooming into each of them, I can tell you a little bit more about what goes on. So, in the selection: we're all used to prioritizing our projects, right? We usually think about the impact of those projects and their feasibility. But when it comes to AI, there are a few nuances that I think are important to understand.
When it comes to impact, very often the selection of AI projects is left to data scientists, who usually lack a deep understanding of the business. So what they end up picking are basically projects that allow them to use the latest technology, in a way that's not necessarily strategically aligned with where the business is trying to go.
That's a major, major pitfall: the impact is not there, because they're just not picking projects that are going to be significantly meaningful for the business. Which is why, if, let's say, you're trying to build an AI for marketers, you need those marketers in the room to really be part of the conversation and part of the innovation process. That's one big failure mode. The second big one, when it comes to selection, is really around feasibility.
When it comes to feasibility, we're used to thinking about the timeline and the costs. But with AI there are a number of other questions: what's the data, and do we have the right data to do this? Are there any ethical considerations? Are there going to be issues around privacy, around fairness? Is the model going to be transparent and interpretable to the people who are going to use it?
And then the question I always like to remind people of is: just because you can do it doesn't mean you should do it, right? So when it comes to feasibility with AI, you really have to ask yourself, should we really be doing this? Is this really the right thing to do for our employees and for our customers?
And then the other thing, and I always like to give a shout-out to some of my colleagues who do amazing research in this area, like Jacqueline Lane and Karim Lakhani: they have this beautiful paper showing that humans are really bad at disentangling impact and feasibility. They tend to correlate the two. So if something is high impact, they will inevitably say, oh, it must be highly feasible, and vice versa.
And these are field experiments with experts. Everyone says, oh, I would never do that. You absolutely do; I do it all the time. And the way to get around it is literally writing things down and being really careful to first focus on the impact of the project, and only then on the feasibility.
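One way to operationalize "score impact and feasibility separately, then combine" is sketched below. The projects, scores, and the feasibility cutoff are hypothetical examples; the only real idea borrowed from the discussion is rating the two dimensions independently before comparing projects.

```python
# Score impact and feasibility in separate passes (to avoid letting one
# judgment bleed into the other), then rank projects.

projects = {
    "personalized email targeting": {"impact": 8, "feasibility": 9},
    "fully automated ad creative":  {"impact": 9, "feasibility": 3},
    "mood-board first drafts":      {"impact": 6, "feasibility": 8},
}

def prioritize(projects, min_feasibility=5):
    """Rank by impact, but only among projects that clear a feasibility bar."""
    viable = {name: p for name, p in projects.items()
              if p["feasibility"] >= min_feasibility}
    return sorted(viable, key=lambda name: viable[name]["impact"],
                  reverse=True)

ranking = prioritize(projects)
```

Note how the highest-impact project drops out entirely once feasibility is judged on its own terms, which is exactly the correction the research argues for.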
Because I have a question about the data piece. As a company, for example, I have lots of data that I've collected over the years from my customers and so on. How do I know whether that data is enough, or whether it's biased?
It's a great question. One of the things I've been quite frustrated about is that there have been a number of companies going around saying, oh, we're generative AI, you don't need to get your data in order, just shove it all in and magic will happen. And that's just so far from reality that it's almost comical. So what I've been saying to a lot of companies is, of course you need your data strategy. You need to have your data lake,
you need to have your data warehouses in order, because having clean, curated data is actually super helpful and super powerful. But having said all that, the reality is that sometimes you don't know if that data is enough until you start, which actually moves us beautifully into the development process.
Usually when I give a talk on this, I have this slide that builds up and walks through all the different stages of development. It starts off with the data scientist needing to go and find the data. They find it eventually, they clean it, they curate it, they realize it's not the right data, they go back to get more data, they clean it again, then they build some model, and they realize
it's not quite right, there's not enough data, so they go back and try to get more data, and you end up with these super long loops that can take months and months. Eventually they build a model, they evaluate it, and it looks good. They're getting ready to deploy it, but even that's not enough, because usually data scientists work in notebooks, which are not really built for scalable AI.
So then they have to pass it on to an AI engineer or a software engineer, who restarts the whole process, builds the data pipelines, retrains the model, figures out the data is not being refreshed, goes back, builds more data pipelines, et cetera. So the whole development process ends up taking six, nine months, and it's a huge, huge bottleneck.
And the way I like to think about it for most companies, forgetting the leading tech companies, and I'm going to be really nice to them, is that it's like building a Rolls-Royce. It's really white-glove service. Every little detail is handmade; it's beautifully crafted. And this is me being nice, because it could also just be like pre-industrialization, pre-Model T craftsmen building terrible things by hand.
But I'm going to be nice and say they're building Rolls-Royces. That's the current state of the world for most companies. And really, what they need to do in the development process is move away from this Rolls-Royce model and move to more of a Mercedes, or even a Maybach Mercedes, where a lot of it is actually automated, and then you have people coming in and doing just the bits where people really add value.
Like the building of the models itself, right? Because if you have your data in order, your data pipelines should just feed beautifully into a model, and that's the bit the data scientist works on. So that's how I think about the development process. On the data piece: even if you have this AI factory for automating the development of AI in place, you still have to worry about having all the right data, et cetera.
What I find quite useful, especially on the marketing side, is this: you have the data, you assess the data, and then, in the simplest case, you automate your emails based on the data and the customer profiles you have in it. And then, for example, you target the people who are always buying from you.
But the thing is, if you are talking to a specific demographic, and you are selling only in a specific city to a specific demographic, but you actually want to grow and target a new demographic, then that data and the outcomes from it will not be helpful for you. Because if I'm going to target another city, or a female consumer base rather than a male consumer base, then I have to make sure I also create that data, which is
not available in my database. So that's why it's always important to match things up: what is our data, and what does it say? Maybe everyone in it is interested in black boots, and they live in New York, and they're used to buying black boots around September and October when it starts to get cold, whereas maybe you're going to start selling more in Los Angeles, where that will not be the case.
So I always say: okay, what is the data saying to us, what is our data at the end of the day, and what are we trying to achieve here? That's a good check to keep coming back to. And then, you know, after the development part... or are we at the development part now?
Yes, exactly. That's the development we've been talking about. And it's as if you're predicting where this is all going, which is where the experimentation comes in. You have to really try out whatever AI you're building. So if your data is on New York black boots and you want to start selling in LA, you build your model that's going to try to pick some people.
You run an experiment, and maybe compare it to doing no personalization at all. Let's say you have a mailing list: you send the generic version to half of the people, and for the other half you do personalization. You can use that to learn how generalizable this AI is in other contexts.
And through that process, you start to gather more data, which you can then incorporate back into the development. So that's why I always say, once you've built it, whatever AI you've built, you need that experimentation, because what it does on your laptop isn't necessarily what it's going to do when you actually put it into the wild.
There are many reasons why, when you take AI and put it into the wild, its behavior is fundamentally different from how it behaved in the little closed environment where you developed it. So that's where the experimentation comes in. Again, it doesn't fully get rid of the challenge of, well, if we have no data, what do we do? Well, you've got to start somewhere. You've got to try to collect that data.
Buy it from third parties.
Buy it from third parties, exactly. Or you just accept that you're going to take a hit: you're not going to have personalization, you're just going to deploy things, see what happens, and try to gather some data. But maybe let me just quickly touch on that final piece, which I think is really important, which is the actual adoption of AI.
Yeah.
Now, this is not as important if you're just using AI to do some personalized emails. But very often right now, especially with gen AI in marketing, a lot of the tools that I'm seeing are really supposed to be used by marketers and other professionals to automate or augment specific tasks. And very often, this is where companies really stumble.
I've lost count of the number of companies I personally know, where I've spoken to the leadership team in charge of, say, their gen AI deployment, and they say things like: we bought product X, we spent this many millions on a contract, we did our pilot study, the productivity gain was
200 percent, we could take tasks that took two weeks and do them in half an hour, we launched it... and none of our employees downloaded it, or none of the employees actually used it. And then I'm like, yeah, of course. Did you speak to them? They're like, well, no, we spoke to the five that did the pilot, but we didn't speak to anyone else. Why would we?
And it's like, oh my God. Companies are all used to building products for their customers, but most are not used to building products for their employees. The second they flip things around to internal users, they forget how to interact with their employees. There's a whole team for marketing external products, but very few companies have actual marketing teams for their internal products.
But you kind of need to do that, because if you want people to use it, it's the exact same thing: you need to sell that product to your employees. And with AI, the framework I like to think about is that you basically need to build trust between the human and the AI. For me, trust falls into three buckets. You have trust in the AI itself: is it accurate? Is it fair?
Do I understand its limitations, when it's going to work well and when it's not going to work well? Have I been trained on it? That's the AI piece of it. The second layer is: do I trust the people who built it? And this is super important, because very often we're reluctant. If I gave you some really fancy tool and just said, oh, it's really, really accurate, go ahead and use it, and you have no idea who built it or why they built it, you're not going to want to use it, right?
You're just going to be like, I don't know where my data is going. Are they going to use this to just automate me and then fire me in two weeks? I don't trust these people. And that's why trust in the developers is so important. Very often I actually find it completely trumps trust in the AI itself.
For example, if you have a team building these gen AI tools, and they work really closely with the marketing team, they have many meetings with them, and they explain what they're doing, then even if the AI itself is not that interpretable, you're willing to trust it, because you trust the people who built it. So that's why that layer is super, super important.
And then the final layer, which everyone just forgets about, is trust in the processes. What I mean by that is: how do you handle it when things go wrong? How do you handle disagreements? What happens if the AI is saying, do this, but the person actually wants to do the opposite? Who's at fault if things go wrong, right? If I follow the AI and some bad decision gets made, am I on the line?
Because if I'm on the line, I'm not going to want to use this. Why would I use something where I don't fully understand it, I don't know who built it, and if it goes wrong, it's my fault? So those are the three layers of trust, trust in the AI, trust in the developers, and trust in the processes, that you need to really focus on to get that adoption.
And I think, for marketing, there are two big layers that we need to take into consideration. One of them is trust, because brand is all about trust, and brand building is all about building trust in your brand and your business in the eye of the consumer, but in your case, also in the eye of the internal teams, so that it's adopted by the teams.
But the second one is the consumer piece, because you have to understand the consumer and their behavior and how to deal with that behavior, and trust is a big pillar in that. How do you think AI might transform the way we understand consumer behavior? Or is it too early to say? Or are there areas where it's already useful for understanding consumer behavior?
Yeah, so it's a really interesting question, and I'm going to try and take it in a few different ways; you can tell me if I'm going completely off track. On one piece of it, AI is now getting really good at understanding our consumers and what they really want. You know, we're able to analyze text data. I know of examples
where companies have their frontline workers with, you know, little audio devices in stores, and they can basically say things like, oh, we seem to be out of this, or, you know, the customer said that this product wasn't as good. And that gets fed into a model that's able to extract lessons.
So it's giving you the ability to learn about your customer in a much more nuanced and detailed way, from a much broader set of inputs, as opposed to just the typical surveys or consumer panels we were used to in the past. So it's sort of opening up that door. The other thing that's happening is it's also changing how we communicate with our consumers, right?
In the past, of course, chatbots have been around forever, but they're getting really good now, and that's changing things. And this is my biggest question, and sort of my worry, with really good AI. Let's just focus on, say, retailers, or even high-end luxury. We're talking about, like, shoes, right?
If you go to, say, Prada or Gucci, in the store you have this amazing experience where someone comes, they talk to you, they help you pick out your product, and they're like, okay, this is the right thing for you; you try it on, it's great. Now that AI is getting so good, in the very, very near future we're going to be able to have online experiences that are just as personalized.
So we're all going to have our own sort of digital shopper that's like, hey, here is the product that's just right for you. And then my big question is, how do you continue to differentiate yourself and build that customer trust and customer understanding when everyone is moving to these super hyper-personalized digital experiences?
So I don't know what the answer to that question is, but I think a lot of us should be thinking about it, because that's going to happen very, very soon.
That was one example: AI is really good at finding the right consumer or the right products to offer to the consumer, but it's not great at closing the deal, especially business deals. I don't know if it was in one of your papers or so on, but I think this is a very nice example of, okay, there are parts where we can really get high value, but when it comes to, especially, I guess, emotional intelligence and how to tap into that... Yeah.
So this is a great question. First of all, that definitely wasn't my research, but I think I know it; I'm trying to remember whose it was, but it's very interesting research. A couple of comments, right? This comes back to what we were talking about, where it's really good at specific tasks. So you need to figure out what those tasks are and give it those tasks. But it's changing, and it's getting better.
So, in terms of emotional intelligence, I don't think there are too many studies right now on emotional intelligence, but there's a really interesting one on empathy. It was done, I'm forgetting the team, on doctors: they basically had doctors write notes, and then they had ChatGPT write notes, and they compared them, and people rated ChatGPT as scoring higher on empathy, which is a very human emotion, right?
So it's able to mimic empathy way better than the average physician can. So I think these things are changing very, very quickly. And I wouldn't be surprised if it gets better at closing deals, because, frankly, it has way more data on how to close and how to sell. It's read every single sales book ever written, right? That's better than your sales team can ever do. So I think that's going to change.
I know a number of companies who are working on sales co-pilots for the sales team, to help them handle those conversations and navigate how to pitch the product and how to personalize it.
And how about trends? For example, when you're working in fashion, you know there's a trend coming in, and you see it on the high streets of New York, London, and so on. How good do you think AI is right now at spotting upcoming trends in business?
Yeah. And it sort of comes down to data, right? Like, there's the Chinese company, I think it's Shein, the fast fashion retailer, right? They're bigger than H&M, Zara, and several others combined. What they do is they basically even put up ghost products that don't exist, just to test. They'll put up products, or they'll do a limited run of ten items, just to see what consumer demand is for those products.
And then if it looks good and promising, they very, very quickly scale that up. And this is not individuals trying to figure out what the trends are. This is AI helping gather data and then efficiently figure out what those trends are.
So I think, again, it comes back to what we were saying earlier. If you're a high-end fashion retailer that has 12 products, and you have a whole creative team that's generating this vision of what the future of fashion is going to look like, you don't really have the data to do this. But if you're a fast fashion retailer that can gather data really, really quickly, you can actually try to delegate that to your consumers.
I have one question, since we talked about trust, and trust is usually linked to the strength of your brand as well. You mentioned that if you don't trust the process or the company, you don't share your data, or you don't rely on the recommendations built on that data. In that sense, I feel that bigger companies have an advantage, because I'm okay to share my data with Apple, because Apple is a big company.
And then I trust that they have the best systems in place to protect my data and so on. Whereas I'm reluctant to share my information with, or rely on the data practices of, a smaller company. And I feel like, is it that the bigger get bigger? It's something I'm quite interested in nowadays. So I wanted to ask you about this going forward for companies, especially the startups.
Yeah, it's super interesting. So I think it comes down to value. None of us question sharing our data if it adds value, right? Like OpenAI: when I made my OpenAI account, they weren't that big, but they were giving me value, so I gave them my data. Or Google back in the day, when they were starting out their first email product. I remember getting the email; it started out almost like an April Fool's joke.
The joke was that you had unlimited storage. But I remember just being like, yeah, this is going to add value to me, so I'm going to give my data. So I think a lot of consumers are happy to share their data as long as they're getting something in return. So for startups, if you're adding real value to your consumer, everyone is happy to share their data. And there are a lot of other sort of trust factors they can work on.
You know, for example, I remember five, ten years ago, you would go to some small website and they would have their own custom way of taking your payment. And you're typing in your credit card and you're like, there's no way I will ever get this product; there's no way that my data is not going to be sold on the dark web. Now everyone has Apple Pay and Google Pay integrated, and Amazon Pay.
So there are just so many ways of leveraging the trust that those large companies have built, around payment processing, around cloud infrastructure. Everyone's on the cloud now, and that's way more secure than having a server at home. I remember having my own server that I was renting out, and that was not anywhere near as secure as AWS.
So I think, you know, if you're able to create value and you're using the right infrastructure, there's no reason why people should be worried about sharing their data with small companies.
Mm hmm. One final question about marketing and AI: what important trend do you see for the future of marketing?
I think marketing, like every other role, is going to fundamentally change, and marketing is one of the first places where it's already starting to really, really change. And the biggest shift I see is this transition from being content creators to being content managers. In the past, every marketer was a content creator, right? They were making the adverts, they were making the text; every piece of it was being created from scratch by marketers.
I think the transition we're seeing now is that they're all shifting to content managers, where the first draft is being created by AI, and then they're managing it, tweaking it, owning the interaction with the AI. The final product is still really their creativity, but the tools they're using are changing. And I'm sure there was a huge change when people first moved from hand-drawing adverts to computer-drawn adverts, and then to Adobe automating parts of that. That shift has already been happening for many, many years, but I think it's going to speed up and become even more exponential. So that's the biggest area. And then the other piece of it, beyond content creation and content management, is that personalization is basically free now, right? The cost of variation has dropped to zero.
So you no longer need to create just one advert; you can generate thousands of adverts, personalize them, and then use experimentation to figure out what works. Marketers are already used to A/B testing many, many adverts, but now that's on steroids: you can test thousands of adverts, and do it in a dynamic way where you're learning about your customers and offering personalization. So all of that, to me, means that marketing is fundamentally changing.
And I think it's a really, really exciting time for marketers.
It's a great take. And, uh, two last questions to wrap up with. If you could recommend one book to the listeners, what would it be?
This is not a marketing book. It's a book that two of my colleagues, Marco Iansiti and Karim Lakhani, wrote: Competing in the Age of AI. The book was actually written before Gen AI blew up. I mean, they talk a little bit about it, but I think the book will give you the foundational understanding of how AI is impacting both companies' business models and their operating models. I think it's a fantastic book, and it came out literally a couple of years ago.
But I'm hoping for an updated version that has all the extra things that are happening. So that would be my one.
I will check that out for sure, because I always have this dilemma: okay, if a book was written a couple of years back, would it still be relevant? But it's good to hear it from you. And the final, my favorite question: what's your favorite coffee place in Boston?
Um, probably Intelligentsia, which is a chain, but I love their coffee.
Do they have Intelligentsia in Boston? I didn't know that.
They have it in Boston. I think they have maybe a couple of spots now.
Okay. Okay. I missed that probably. Uh, I'm trying to remember. Yep.
But my favorite coffee chain would have to be Philz in California.
Oh, okay.
Philz Coffee.
Okay, I was in LA last week; I should have tried that. If you had told me a week earlier, actually.
They do pour-overs, and if anyone from that company is watching, please open up a Boston location. I know they have some DC and West Coast locations, but nothing in Boston. I'm waiting.
Yeah, okay. I'm going to try it next time I go there, and have my pour-over there in the morning. Thanks so much for joining me. I'm really happy that Felix introduced us so that we could have this chat. I already have so many other questions, but I'm going to stop it here. Thank you so much.