Welcome back to Experiencing Data. This is Brian T. O’Neill. Today, I want to talk to you about the top five reasons that AI/ML and analytics SaaS product leaders, particularly in the enterprise space, come to me for UI and UX design help. So, in short, let me just kind of summarize this solo episode today for you. Again, I want to give you these five reasons that enterprise software leaders typically reach out for help. So, they usually have some problems and symptoms that they’re feeling.
Actually, I’m going to kind of separate out the problems they perceive from the symptoms that I often see when I start to look at their solution and kind of unpack what the issues are and what’s causing the problems that they’re feeling. So, there’s those top five reasons.
And then I also want to talk about the slow creep of design problems—UI and UX design problems—in an enterprise data product, especially if you built your own thing, or the team is largely engineering- and data-oriented in its skill sets and they haven’t worked with professional designers. So, I’ll talk about some of the slow-creep problems that can occur there.
I also want to talk, again, about these—and I say ‘again’ because this is based on an article that I sent out to my list a while back, but some of you may have missed that, and some of you probably aren’t on my mailing list, but you should be. If you want my weekly Tuesday dispatches, head over to designingforanalytics.com/list. But anyhow, there are 20 symptoms related to design that, if left untreated, may lead to these five core problems, and the reasons that product leaders will reach out for help.
I also want to just kind of give some ideas about how to think about these issues and when to deal with them. And if you come and check out the picture that goes with this article—it’s a little ridiculous; it kind of looks like this Borg Terminator guy—and why it’s relevant to this article. But I will leave that. You’ll have to come and actually read the article and see it for that part. So anyhow, let’s jump in.
One of the problems with bad design is that some of it we can see and some of it we can’t, unless you know what you’re looking for. So, over many years of designing UIs and UXes for enterprise applications that are heavily based on machine learning or traditional analytics, there are frequently some red flags that tell me where the design may be falling short of serving the business objectives, and the user objectives as well. And when design suffers, users pay for that.
And when users have to incur wasted time, friction, usability issues, confusion, trust issues, all that stuff, all of this is what leads to negative business impact. And if you’re one of the people responsible for revenue, and churn, and keeping customers, and getting new customers, that’s when you start to feel it.
So, if you’re in a product management or business leadership role for a commercial enterprise software company that relies on machine learning, and AI, and analytics to deliver value, hopefully this episode will be for you.
Even if you’re primarily focused on delivering internal data products—so your customer is a colleague who has the same email domain as you, and someone that works inside the walls, so to speak—many of these problems are things that you should be aware of, too because even if your customer is the head of marketing of the sales team, and they’re not swiping a credit card to get access to your dashboards, your models, or whatever the output is that you’re producing with your products,
they’re still looking for an ROI on their investment. If they’re investing their time, and maybe, you know, some internal funny money on a data team, they still care about getting value as well. So, they might say, “So, what’s our AI strategy?” Or, “We need a dashboard
that uses AI to predict X for us.” And I would hope by now, if you’ve been listening to the show for a while, you know that buried in that ask is an actual business problem they’re trying to solve, and they’re trying to tell you what the solution is they need because they’re trying to be helpful.
But we have to frame that as a friendly lie, and you do good for yourself and your team and those stakeholders by getting in front of the actual underlying issues so that you can start producing benefits for them and not just delivering the literal thing that was asked, right? So anyhow, let’s jump in here.
So, the most common business problems that lead data product leaders and founders to reach out—particularly those working in the SaaS enterprise software space where there’s heavy quant data, analytics, machine learning, this type of stuff—the first one that I’ve been seeing, at least over the last few years, is probably that sales demos seem too hard, or deals take too long to close, despite the fact that the team feels like the product’s value should be quite significant.
Like, the customer should be seeing significant value to them. And I think this is often observable through customer questions, muted interest from prospects, or no customer or user interest in changing the status quo.
There are challenges in this enterprise space where it can take time for a demo to prove out its value, and I think this is where, again, service design and user experience design for the product itself come in—being aware of what that entire sales cycle looks like and figuring out how we properly onboard someone even in the demo phase, or the honeymoon pha—well, the honeymoon phase happens after the purchase, in my definition of what that is—but this is all stuff that can be
designed in such a way that we minimize the impact if there truly does need to be months of integration before the value could be seen or something like this—I mean, hopefully it’s not months, but even if it’s weeks—we can design for this. But anyhow, I’m not getting into solutions right now. Right now we’re talking about problem space, so sales demos seem too hard or too long to close.
The second one is that there’s a spike in customer churn or low adoption due to a competitor product with better user experience. Or, another related one: you have a significant customer renewal on the horizon that is seriously at risk of not renewing. So, this kind of problem can be a slow-burn problem, where you have, like, this whale customer who’s threatening not to renew their contract, and there’s significant renewal revenue that’s at risk.
I think in this case, this is where the product team got comfortable, they’re probably just busy shooting out new features—what Marty Cagan would call a feature team—and the lack of investment in good user experience is now a major risk for the product as customers begin to look at alternatives or demand easier solutions.
It’s harder to get away with bad design these days because people tend to expect consumer-like experiences even in their enterprise software, and frankly, there are just so many options out there now, it’s much harder, unless you’ve built a moat around your business for 20 years and you’re the only player in the space. Even then [laugh], there’s always that chance of disruption.
But anyhow, so number two is the spike in customer churn or low adoption, due to a competitor coming out, or a significant customer renewal is on the horizon and now everyone is really concerned all of a sudden about the product’s quality because that guarantee of revenue is no longer there. So, that’s the second one.
Third one, the product’s IP is technically solid, but customers and users can’t see how your IP—your magic sauce, your cool AI, whatever it is that you made—translates to value for them without a lot of explanation, training, or time investment. So, if the product is B2B and a key champion inside, you know, the customer org leaves, there’s a large risk to the vendor—or to your business, if you’re the vendor.
So, that’s the third one that I see. The fourth one I see is dashboard and user interface bloat, which leads to customer frustration, and then fiscal impact.
So, what might have started out as a relatively small, hyper-targeted, simple product that had demonstrable value—and customers were pulling out those credit cards and swiping them—over time, as more and more features get asked for, and there’s interest in the market—often, I think, this comes from going too wide in terms of industry or customer fit—it gets more and more complicated, and there are more and more views, more interactions, more features, and more use
cases that it’s all trying to solve for. And so, eventually the head of the product gets this feeling that—they get that ‘we need to redesign’ itch, and they’re not quite sure where to start, but they do kind of sit back and realize, wow, we’ve made this thing really complicated. Like, revenue has gone up, and we’ve gotten more sales and all that, but now we have kind of this mess on our hands because we didn’t invest in design early.
The engineers, and the data team effectively built the product, and that worked for a while, but there was really no plan or strategy. So, it’s just kind of like, even if you’re—you know, if you understand engineering, for example, it’s just like not having any plan for how you’re going to architect the software for future growth, et cetera, so you just keep on throwing on new features and slapping more code in there, and the next thing you know, you have all this debt.
So, debt can also be user interface design debt, dashboard debt, those are all real things, too. And then the fifth one is that the product manager or the founder or the stakeholder who’s reaching out to me, their vision or strategy for the data product is simply not manifesting itself in the design that customers are asked to use.
So, this is usually more of a situation where either the vision is not being communicated clearly to the internal team that’s designing and building it, or there’s just a mismatch about whether or not that strategy can actually be executed in the product.
So, this leader gives this vision or strategy to the team, and the team is busy building features, and building mock-ups and prototypes, and maybe even shipping some product out there, but the business stakeholders are not feeling like the vision for the product is actually emerging from all this work that’s being done. They feel like it could or should be better, but they don’t know what that north star looks like, and they’re concerned that if that vision doesn’t emerge, the product is not going to get the traction that they’re seeking, or there will be other types of business impact that they’re not looking for. So, these are all fairly high-level problems, but I want to kind of step down a level of altitude now and talk about how some of these problems manifest themselves in the UI and the UX, kind of at the product strategy layer.
So ultimately, while these things do surface themselves in the design, in the user interface and experience, the causes can be many, from people and process, to communication, a lack of leadership, inexperience, and the like. So, not everything I’m going to talk about here is going to be a UI problem, but I feel like that’s kind of where the decisions all surface in the end.
Like, eventually this, like, debate about a feature or how you’re going to solve some particular customer use case, it gets manifested in the form of a design that carries an experience with it, and that represents the decisions of the product team. That’s what went into the product, so that’s where we see it surfaced.
So finally, just before I give you these 20 symptoms that I see emerging down at this kind of lower level of altitude here in my consulting design engagements, et cetera, don’t take this personally. I’m trying to be helpful here, but I want to be clear and direct so that you don’t sit and fester with an injury that you should have had treated a long time ago.
You want to get ahead of this kind of stuff because most of these kinds of problems get worse the longer you wait, they get more expensive to fix, and emotionally, they’re just harder to change because of all the sunk cost that goes into them—especially if you’re in a startup or something like that, it’s your baby. So, I just kind of want to throw that in there. Don’t take it personally. This is intended to be helpful. And you’ve got a difficult job to begin with.
And a lot of product leaders, they don’t have a lot of power, they don’t own all the resources, but they have all this responsibility to actually deliver value at the end of the day. And that’s a tough role to be in. And I also know design is usually not top-of-mind for an enterprise software product, especially one in the machine learning and analytics space.
This is usually not an early internal hire, or a place where resources seem like they need to be spent, so you have to figure out, is it worth our time to address this right now?
And whether you address these issues yourself with training, or outside help, or internal resources, if you’re lucky enough to have them, you first need to be able to recognize the symptoms that may tell you it is time to invest in your user experience and your user interface design, because that is what’s going to actually change the downstream fiscal objectives that you have. So, with that, let’s jump in.
Here are the 20 symptoms that I see in client engagements that should be triggers for you to seek help, or get training for yourself, or to start addressing these problems. So, the first one is that the product has become extremely bloated and complicated as it tries to satisfy multiple users with differing needs. I hear this all the time with dashboards. There’s the operational user.
And then there’s, like, the management executive who’s some downstream stakeholder, and you want it to be really easy for the non-technical user, but you also want to give everybody all this flexibility to get whatever data they want to get access to, and it has to have an export to Excel, and on, and on, and on.
And this comes from a lack of firm product strategy, and a lack of someone really having a vision for who we’re going to be and who we’re not going to be to the customer—and when I say ‘who,’ I’m talking about the product. Who is the product going to be, and who is it not going to be? Second one, there are often no clear, quantifiable progress or success metrics for the current product work that’s going on.
So, effectively this means that no one really knows how to measure progress except, did we ship the current workload by this specific date? So, this is when Agile is effectively running product, which is completely wrong. The goal is not to ship stuff on time; the goal is to produce benefits and outcomes for the user and for the business. So, I hear this all the time.
It’s so rare that I meet someone working on a team who can tell me what these progress metrics and success metrics look like. They simply don’t know. And not only are the metrics not defined, they’re definitely not measuring them, because nobody even knows what they are. And so, it feels like you’re making progress because you’re shipping stuff, but nobody really knows how to measure quality.
The third one is, you offer dashboard or product customization as a crutch for not making hard decisions about what the product strategy and design should be. This is always going to sound better to anyone you offer it to. And it’s really hard to remove customization controls and features later. But secretly, I think a lot of you probably know that you’re offering some of this because you don’t want to put a line in the sand about who this product is for and who it’s not for.
And this is really important early on when you’re trying to get product-market fit: you can’t be everything for everyone. You need to be an A-plus experience for the person you’re trying to satisfy, and it’s so much easier if you can just pick a target user and just hit home runs for them. The product is much easier to grow when you’ve proven that you’re hitting those home runs over and over again for a specific person. And whenever we offer customization—I’m not saying it’s always bad—I mostly see it used as a crutch for not making firm decisions.
And that’s not the same thing. There are times when customization is warranted, and it needs to be done carefully, but you need to ask whether or not we’re effectively asking the customer to become the designer. And customers are not designers. Customization is often more of a tax than a benefit. In the marketing copy, in the marketing lingo, it sounds like a benefit, and you put a checkmark in the pricing table—you know, what features do you get—check—customization.
Sounds like a benefit, until you actually go in and try to do it. And then you realize now I, the customer, have to make all these decisions about what I’m supposed to look at, what I’m supposed to do, how I’m supposed to do it. This is putting a lot of work on your customer, who’s paying you for the value. Now, if they feel really invested in that work, and they can see the obvious benefit to them by going through that tax, they may be willing to do that. That is the edge case.
I rarely see that, so I think you need to really ask yourself, why are we offering all this customization? The fourth one, your product’s data viz choices may be correct in theory, but despite picking the right chart types, et cetera, the choices you’re putting into the product aren’t delighting customers and generating value. So, this can basically mean the UI might be right in theory, but the experience is not. And the lack of adoption and delight is telling you that something is off.
So, there’s more to designing the data viz portions of the tool. It goes beyond just picking the right chart type. We need to be thinking about experiences, which happen over time, as opposed to data viz, which is a single user interface output that gets spit out onto a screen. Those are not the same thing. Fifth one, there is team agreement that the current design is not good, but the team does not have clear agreement on what a quality redesign would look like.
So, this is a little bit related to the earlier one here, but what happens a lot is that the engineers, the developers, were not really given time to design the solution, but they had the responsibility to do it because no one else was doing it. So, they pick a GUI library or viz tool, and then they throw up the charts at the last moment. This is usually the last thing that’s thought about because all the modeling work and all the data work tends to be where the priority goes.
And you, the product owner or manager, you probably know something is wrong here, but you’re not really sure how to fix it properly without just guessing again. You look up the word ‘dashboard’ in Google, you look for some nice-looking paint jobs, you copy that framework, and then you plop it into yours. And then you hope that it works, and maybe give it to the sales team and see what happens. And that’s a way to design.
I think that’s a very high-risk way to design, and you’re probably not going to see the benefit there because cosmetics really don’t get you too far if there’s an underlying value problem with the service. Number six, there’s a rush to throw AI—and particularly GenAI, of course—into the product wherever possible, leading to the AI button phenomenon. So, it used to say ‘click here to do X’ and now it’s ‘click here to do X with
Holly, our GenAI assistant’ [laugh]. So, on this topic—this is obviously a huge topic, GenAI and especially the large language model space—while initially feeling like magic, I think that we’re effectively giving a CLI, a command-line interface, back to users again. We’ve kind of come full circle. And prompt engineering sounds like it’s an engineering problem, or maybe a data science problem, and it’s not. It’s a design problem.
We literally have a user experience challenge with these LLMs to get them to do what the person wants. So, they’re kind of magical, but they also take a ton of wrangling. So, I advise you to really think hard before just throwing that chatbot—throwing your LLM—into the product wherever you can. I never meet someone who says, “If only my sales tool had AI, my sales team would
be so much better.” And yet, all the marketing and the lingo you’ll see out there, there’s a lot of companies rushing to just make sure the word AI appears somewhere in their marketing speak. It’s just not how people talk because people care about what’s in it for them, what the benefits are to them, how you’re solving their problem, and they don’t know whether or not AI can, should, or would be a better fit for that or not. It’s an implementation methodology.
So, if you’re hoping that’s going to solve everything, you’re placing a big bet. So, we need to think strategically when implementing GenAI, or just AI in general, into the product because we think it’s going to help drive sales or increase business value. Number seven, you can’t provide me with a specific shortlist of benchmark use cases by which the quality of your product is—and should be—constantly measured to ensure that the product stays on course.
Furthermore, if it involves, like, predictive models, where maybe there’s a range of responses possible within the interfaces, then, you know, beyond recall versus precision, you’re not really sure how to evaluate the design or the user experience quality of those use cases because of all these different responses that are possible from the model.
So, if you don’t have this set of benchmark use cases, I recommend that you try to get some, and it’s kind of the ‘do no harm’ principle. Like, no matter what we do, we cannot make the situation any worse. These use cases have to be stellar because they underpin everything that we do as a company. I rarely see those. I highly recommend having some benchmark use cases. Do not make those worse.
Number eight, your sales org is effectively running the product org by telling them what to build so that they can close deals faster, and at all costs. After all, more sales is the current business mandate. Of course it is. Got to show X dollars of revenue by Y date. End-users who aren’t also fiscal buyers don’t stand a chance with this unless you stand up for them and champion the long game. But I’m going to get more into that in just a second.
So, your sales org is effectively running the product org. Number nine, your product team and your engineering team or your data team—whoever is building the underlying IP—is working backwards from that solution in order to find a market for it, a product that the market wants to buy.
So, you’re starting with intellectual property, a model, some kind of solution, some access to data, some data set, whatever it may be, and then you’re trying to create a solution out of it that someone might want to buy. However, what I often see is that the creator of this product or solution can’t actually describe what it’s like—like, a day in the life of the target user, and how that prospective customer would integrate this into their life. They can imagine this change.
Like, “Of course they’d want to use it. And then they would log into the tool, and then they would go do this.” But they don’t—they’re just making that up, and they don’t actually know what the current workflow of the customer looks like, their attitudes, their belief systems, their challenges, they don’t know what that’s like.
So, they kind of tell themselves a story about, “Oh, everyone’s going to start using our product to do this instead because it’s so much better because of how we, you know, we use advanced computer modeling to do X, Y, and Z.” They’re really good at coming up with that story, but they can’t actually express what the delta is between current behavior and desired future behavior.
And I know hearts and minds are in the right place, and we do want product people to be believers and visionaries and to see that future that could be better for their customers, but this is not a reliable way to repeatedly deliver value with low risk. I think there’s a lot of risk in this if you’re going to start with the solution and then try to find a problem that it can fit with neatly, let alone one that someone wants to pay for.
Number ten—and this is related to that sales org we talked about earlier—the fiscal buyer of a solution—your customer—is not representative of the actual end-users who will use the analytics or the data inside this product for decision-making purposes, or whatever the promise of the software is. And I’m sure many of you know that staff and users often have no say in what tools are bought for them at work.
And there’s been a promise of some kind of financial ROI to the fiscal buyer, and that sounds logical, and it looks good because the salesperson’s and the team’s strategy to get it to go to market is to focus on the org-level problems that the fiscal buyer is having.
The reality is that the person that’s buying it may have no idea what it’s actually like to be, say, a salesperson or an advertising campaign manager on the ground inside the company, and those are the people who are going to actually use the tool. So, what happens is, we might get the short-term win when we convince that buyer to pay and to lock down a contract, and then we have the user adoption problem. No one’s using the product.
Now, we’re really worried that renewal is not going to happen. Or worse, the champion of the product left and no one else is really using it. You’ve only got a couple people in there who were really behind this thing to begin with, and then they leave. And then everyone’s wondering, why are we paying for this? Where did this come from? So, if the product was really generating its value, you wouldn’t have to be worrying about that churn as much.
And so, low user adoption can be a signal that people are not seeing the value of the tool. So, as a product person—I know you have to balance closing the short-term sale to keep the vision for tomorrow going, but it’s pretty rare that I see people willing to backpedal and make sure the strategy going forward is a long-term one. Usually what happens is short-term wins just feed the beast. It’s just, like, close the deal at all costs.
It doesn’t matter if anyone uses it, just get our foot in the door because we need to close this revenue so that we can get some more revenue. But you’re building the sunk cost, and you’re building the debt there, and then at some point, it’s really hard to make changes to the solution because of all this debt that’s been sunk into the product, and you’ve got a large low adoption problem on your hands.
Number eleven, there’s a relentless focus on delivery speed at all costs, but nobody can define what quality means beyond the technical parameters. So, I’m talking about data governance practices, privacy, SLAs, speed, all these kinds of technical parameters that founders will sometimes describe to me as being beneficial to customers, but it doesn’t sound like the way customers actually talk.
And just as an aside, this is also one of my biggest beefs with a lot of the data product definitions out there when we talk about them in the internal enterprise context: a lot of them seem to focus on the technical benefits—scalable, reusable, data contracts, SLAs, security, all these kinds of things. And I believe those are all valid things, but they’re kind of secondary benefits. They’re not the primary reason someone buys something.
Like, I didn’t buy this new computer monitor—the screen that I have—because it’s safe and secure. I needed a really nice way to see the interface of my computer. I wanted it to be big, and I wanted it to be movable, and I wanted it to look nice. Those are my primary benefits. What LED technology it uses and how fast the bit rate is, and the color depth, and all that, those are all secondary benefits.
So anyhow, that’s my tangent on data products: we get too focused, I think, on the technical benefits there and not the primary benefits. A customer won’t buy or change their status quo until a primary need has been satisfied. So, it’s something to think about—there is this relentless focus on delivery speed at all costs, but whether you’re going to go fast or slow, you still need to understand: how will we know if we did a good job?
What is the definition of quality in the eyes of the customer? Number twelve, the team cannot produce, on the spot, a colorful, descriptive example of their core user and how the data in the product fits into their work life. So usually, this is a symptom of little to no ongoing user research and customer exposure time. There are no assets that could be shared with the team.
There’s, like, no video recordings of user interviews, there’s no customer conversation transcripts, no highlight reels, there’s no usability testing. There’s just basically a big gap between the makers and the users of the solution, and so it’s a very impersonal way to design, because the engineers, the makers, the data team that’s powering all the technology, they don’t really know any real human being with a first name and a last name.
They can’t cite one, they don’t really know what it’s like to be John or Jane or whoever the person is; they just haven’t ever met them. This isn’t just a short-term thing. This is also a long-term opportunity, if you want to think of it that way.
The more we can get our technical people exposed to actual end-users, the more empathy we develop, and the easier it is for them, the makers, our technical teams, to make better decisions on their own, to start generating ideas that can benefit the user, and to push back on work that will not help the user, because they’ve actually seen them, they’ve met these people, they’re real people. And they want their work to get used. We don’t want to isolate our technical teams. We want the opposite.
We want more customer exposure time. Number thirteen, you or your staff cannot get routine, direct access to these end-users. I know this is challenging sometimes, and so your design is mostly based on guessing, or proxies telling you what they need, or the salesperson is the only one that gets to talk to the end-users, and so everything is routed through their lens.
And the irony here—thank you, Klara Lindner, in our Data Product Leadership Community, who I had on the show recently—is that we’re data people, and we’re supposed to be delivering insights. A lot of times our analytics and machine learning tools, these are insight decision-support products—they’re supposed to be rooted in facts and data—but when it comes to designing these products, there’s not a whole lot of data and facts actually informing the product
design choices or the product strategy. We’re mostly designing on guesses because we have not gone out and talked to anybody—or rather, we haven’t gotten out to listen to and observe anybody. So, if you’re going to be a data person, and you think everyone else should be using data to make decisions, well, then you’ve got to eat your own dog food, and you need to go get some data to inform your own design choices.
Number fourteen, you or your stakeholders think that training and change management should be addressing product usability and utility issues. I think for most people who have been listening to the show for a while, I probably don’t need to go too deep into this one. In my opinion, training and change management are symptoms that the product strategy and design are too difficult and not properly aligned with customers. That’s not to say all training is bad, or all change management is bad.
There may need to be some of that stuff, and I do understand that, but I think there’s huge opportunity in many situations to make the design and the experience better, such that you don’t need to spend all these resources creating training debt. And training debt has to be kept up. As the product grows, you have to update all the training material, all the change management procedures, all that kind of stuff. You have all this extra weight in your backpack that you got to carry around with you.
The other way to do it is to design for the behavior change right into the experience of the product, to make it easy enough to use that you don’t need training, or you embed the training into the experience of the product itself.
There are other ways to improve on this. So, if you’re thinking that someone just paid for something and they want to go in and open the manual—I’m sure a lot of people here have probably opened up the box to a new mobile phone, an iPhone or an Android, and it’s got a very fancy camera on it. Compare that to opening a Sony camera box. There’s probably a half-inch-thick manual with hundreds and hundreds of pages of instructions on how to operate the camera.
I mean, this is even for, like, some of the point and shoot models, let alone, like, a DSLR and that type of thing. Do you really think someone that wants to take photos of their kid playing soccer wants to read a 500-page manual? They don’t. So, decide where you’re going to put your resources, right? Number fifteen, your product’s visual design looks like it was designed by engineers, or it looks like an engineering proof-of-concept, a POC.
But that was okay because, you know, enterprise customers especially, like, analytics users, they don’t care how it looks, right? But at the same time, as a product leader, you’ll come to me, and you’ll tell me that you want it to feel like Apple designed it. Because that means simple, high value, high prices, useful. It’s the north star, it’s the vision.
So, you kind of think it doesn’t matter how it looks to that finance user, that kind of boring person that just does stuff with numbers, but secretly you kind of know it does matter, that there’s something to the visual design of the solution that seems to convey a sense of professionalism, a sense of trust, a sense that this product is stable.
What I’m getting at here is that the visual design has a very functional purpose as well, and I think a lot of times we tend to see this as it’s just aesthetics, and you can always change the paint later and it’s not essential. That visual design, the aesthetics part, can give you a pricing edge in the product, it can convey a premium service, it conveys trust, security.
The visual design does affect how people feel about the product, and I think most data people here know because they’ve struggled with low adoption of data products. They know that feelings actually have a lot to do with how people make decisions. We all know this from behavioral economics. I don’t think I need to talk to this audience about that.
So, even though we like analytics, we like logical, we like the data telling us what to do, we all know that we’re all subject to those biases, and that our feelings about stuff matter. We do make quick judgments about the quality of things based on how they look. So, if you need to tell yourself a story, then tell yourself that one.
There are functional, scientifically proven reasons why the aesthetics actually do matter to the product’s usability, its perception of ease-of-use, its quality, all those kinds of things. It’s not just a paint job—like, ‘throw the branding on whatever, it’s the same product.’ I know it feels that way because it’s hard to see into all this, and a lot of this stuff is not binary. It’s not like it’s either this or it’s that.
The sum of all these visual design choices adds up into an overall perception of the brand and the quality of the service and all of that. You can ignore my advice here if you want, but I’m telling you that there are functional reasons why the visual design can have an impact on your product’s validity in the market, its ability to be sold, its utility, and how much people are willing to struggle with it.
This is also something that’s been studied: when things look better and more professional, people are more willing to spend time and effort trying to get them to work. So, number sixteen is kind of related: you did invest in visual design, but it didn’t work out so well the last time—or maybe it was the first and the last time that you tried it.
So, maybe you liked how the new version looked cleaner, but somehow your goals around adoption, retention, usability, utility, they didn’t really seem to be affected by that investment and that designer you hired or whatever it may have been.
And I’m guessing few questions were asked by the designer about your data, or the analytics, the insights, the modeling, the limitations of the data, where problems can exist for customers, other factors that you probably felt would need to be understood before the needle could really be moved.
So, now you’re a little bit worried, and your data science and engineering team has a lot of questions about the feasibility of the new design, the schedule, and you’re kind of wondering, should we really implement this new visual design that we just paid for? And is it going to pay off? This is what happens when we hire a pair of hands to come in and throw a paint job at something, but the underlying product is not understood.
The design firm doesn’t really understand data products, they don’t understand how probabilistic machine learning algorithms work, they’re not thinking about any of that. They’re only thinking about typography, look-and-feel, they’re thinking about maybe mobile versus desktop rendering.
And that stuff does matter, but if they don’t understand how this ties to the business objectives you have and what’s functionally possible with your technology, and they don’t understand the level of effort required by engineering to implement these changes, well, that redesign that you paid for may be just another sunk cost that’s not going to serve you in these goals, in these objectives that you have. So, I’m sorry if you’ve been through that situation.
There’s a lot of design talent in the market now, and I think this is usually what happens when you go with the nicest-looking portfolio that you saw at the lowest price you could get. And where you may end up is not being sure whether or not you should really implement this design, because somehow it doesn’t feel like it’s going to pay off, or it’s going to be too expensive to render. And you have to make that decision.
Should we change all this stuff in order to match the design prototype that was built? That’s a question you have to ask, of course, or you can seek out other help. Number seventeen, your AI/ML solutions demo really well, but the end-users aren’t finding them quite so usable, explainable, or trustworthy.
And maybe you started to expose more knobs and controls so that they could customize the outputs to their needs, but what was supposed to be a simpler machine-learning-driven approach to your product now seems like it’s actually getting more technical. So, this is why we have to have routine exposure to users and buyers and stakeholders. We need to constantly be observing them and talking to them so we can figure out how much control do we need to give to users? How much customization?
How many knobs and dials and things should we be giving them access to, to make sure that we’re balancing their hyper-specific need with the overall strategy and promise of what the product is supposed to do?
So, I don’t have a simpler way to help you with this one, except that you need more exposure to your customers. By understanding their pains and seeing the trends, that is going to inform what level of control you really need to expose to users, particularly in solutions that are probabilistic, that are leveraging AI and machine learning. Number eighteen, the UI surfaces an underlying data model or object model in the actual application navigation or the menu items, right?
So, let’s say you have a major entity in the data model called ‘customer.’ Well, of course there should be a customer tab in the UI or a customer link in the menu, right? And of course, they should be able to drill down on that because analytics solutions love to let you drill down into detail. Well, first of all, no, drilling down is actually usually a tax, not a benefit.
Everyone will tell you they want to be able to do that, but the actual tasks and activities of doing that are usually mundane work. It’s a lot of tool-time work, not goal time. It’s tooling effort. But stepping back from even the drill-down here for a second: should ‘customer’ be a major link in your menu, in your information architecture, in the navigation bar?
When I see the major data objects literally just exposed as links in the navigation, it tells me they really haven’t thought at all about mapping the customer’s mental model of the space into the design of the product. Instead, the data model, the object model, has been literally manifested into the application navigation. And the problem with this is that user tasks and activities tend to unfold over time. These are temporal things that take time to do.
For example, if there’s, like, a customer, and then there’s, like, an orders page as well, separating the customer and the orders may not make any logical sense because of how the user performs tasks related to looking at customer purchases. So, for example, there may not ever be a time where you would want to separate a customer from an order. That is completely illogical from a mental model standpoint, even if in the data model, it makes complete sense to separate those two things.
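Just to make that concrete, here’s a minimal sketch—with entirely made-up entity and task names, nothing from any real product—contrasting a navigation scheme lifted straight from the data model with one organized around the user’s actual tasks:

```typescript
// Hypothetical sketch: every name below is invented for illustration.

// Data-model-driven navigation: one top-level item per database entity.
// This mirrors the schema, not how anyone actually works.
const navFromDataModel = ["Customers", "Orders", "Products", "Invoices"];

// Task-driven navigation: items map to the jobs users sit down to do.
// Customer and order data appear *together* inside a task, because the
// user's mental model never separates a customer from their orders.
const navFromUserTasks = [
  "Review today's orders",        // each order shown with its customer inline
  "Follow up on late shipments",
  "Track revenue vs. forecast",
];
```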
So, this can lead to really big, giant menus with lots of sub-menus, and we’re trying to expose all these objects because somebody asked for that table at some point, or they wanted a view of customers which was separated from orders, so we feel the need to then expose another report about customers—yet 90% of everyone else is really focused on orders, and all orders need to have a customer associated with them. You can see where this is going, right? It just leads to bloat.
And every time we add more stuff to the nav, we’re making the overall system more complicated to use. In design lingo, we call this ‘cognitive load.’ We’re increasing the cognitive load as we make the navigation even more complex. So beware, if you’ve got a one-to-one mapping of your data model into the information architecture of the product, it’s not always wrong, but a lot of times it’s a very suspicious thing.
When I see it, when I do an audit, I do tend to see that as a warning flag that needs to be looked at further. Two more left here. Thanks for bearing with me. This has been a fun episode; I’m hoping you’re getting a lot of value out of it. Number nineteen, the user interface, and especially key dashboards in it, are based on a template, another product, a portfolio piece, or something that your team copied.
They found it in a Google search, they found it on the portfolio of some designer that they liked or whatever, and when I ask them, “How did you arrive at this design?” Effectively, they tell me they found this template and they wanted to get a template—because of course we don’t want to roll our own. Why would we want to do that?—and we filled it in with our data, and then we made some adjustments because of course it didn’t fit perfectly; we had to get our product’s idea to fit into it.
But we started with a template. And at the time, you know, it seemed to make sense, right? Of course, it’s going to have three donut charts across the top. Why? Well, because everyone else has three donut charts across the top. And because most of the people out there don’t want to spend the time thinking about what it should be; they want a quick fix. And this is why you see countless interfaces with three donut charts across the top of the dashboard UI. That’s why it’s like that.
Because everyone wants what everyone else has, and everyone else assumes that someone else has figured it out. So, I love donuts. I like them for breakfast. It’s not a dessert. I think it is a breakfast item. I just want to throw that out there. I know it’s cake, but I’m still putting it in that solid breakfast category. Whether or not the donut chart is the right thing, that’s a separate discussion.
I can tell you though, back-ending your product into some designer's portfolio piece probably isn’t going to serve you well. You’re going to spend a lot of time trying to bend it, squeeze it to fit into the holes that they’ve placed into the template. This is usually a sign something’s wrong, but it does feel faster because you don’t have to expend the cognitive energy to come up with that zero-to-one design.
The other option is a blank piece of paper: you have all your IP, all your technology, and someone has to decide how to go from a blank piece of paper to the first something, or the first redesign. That takes a lot of energy and thinking and strategy, and so we tend to just go find the quick solution, or what we think is a solution, and steal it and repurpose it. You can probably see where I’m going with this here.
If your product is that special, it deserves to have the proper design treatment so that the value of the product can be surfaced in such a way that someone is willing to pay for it, and find it indispensable and delightful. You probably aren’t going to reach that if you’re trying to shovel your IP, and your data, and your insights into someone else’s design that they themselves probably copied from somewhere else.
And number twenty, you or your stakeholders have a high expectation for good design and experience, but you’ve underinvested in the resources that can deliver that. Why? Well, maybe you got burned in the past by a design firm or a staff designer. The portfolio looked good on the surface, but then you realized, like, they don’t really understand data and the constraints that come with it.
And you’ve got nice paint jobs coming back in the interfaces, but they’re not asking you any questions. They’ve not really exercised how the data is going to work in this GUI in different states—since data often comes in different states, and you have edge cases and all this kind of stuff—they haven’t really asked any questions about any of that. They just sent over one paint job and asked whether you liked it or not.
And maybe now your tech team, your data science and engineering team, has a lot of questions about the feasibility and the lift required to build this new version, and you’re not really sure it’s so much better beyond the aesthetics of it. So, you probably hired this person thinking this was going to make things better, and you took that first step to get some help to start improving this. And maybe the engineers and the data team thought, “Oh, great.
Someone else is finally going to own this so we can focus on the modeling, and the pipelines, and the security, and the code being resilient and quick.” And that’s the stuff that they want to do. They don’t want to be making decisions about fonts and colors, and look-and-feel, and workflows, and all of that. And that’s exactly what the designer wants to talk about: fonts and colors and look-and-feel and all this kind of stuff. And you’re not really sure.
You’re like, “Well, we hired you to figure that out. But why aren’t you asking us about all the technical stuff?” And there’s a big gap here. So, this is a sign that you haven’t found the right kind of help for the kind of problem that you have. So, those are my 20 things. I hope these were helpful to you. Are there more? Yes, there probably are more. Not today. This was a long episode. And if you got to the end, I’m really [laugh] glad you stuck around.
If you want to, like, be able to pull this article out, scan it, all that kind of thing, I do have this on my website. If you go over to designingforanalytics.com/resources, this is where I keep a lot of my, like, most popular content, or my most recent thinking, stuff that I think data product leaders need to be hearing about, so check out that space there. If you do need some help, you’re welcome to learn how to work with me directly.
I try to improve the value, adoption, usability, and utility of data products. That’s kind of my mission here, whether it’s through training and coaching, or doing it with clients directly. Just head over to designingforanalytics.com/services, and you can learn how to set up a discovery call with me on that page.
Anyhow, for now, I hope you’ll start to become aware of these symptoms, these top five problems we talked about at the high level earlier on in this episode, and go out there and take action against these things. But before we can take action we need to be aware of these things, so I’m hoping today at least you got some awareness about this.
So you can see the world through, like, a designer’s lens the way I look at these kinds of products, and you can start to address some of these issues on your own. That’s ultimately my goal: just building this awareness of what design is beyond the surface-level changes and surface aesthetics. I’m really hoping that you’ll find ways to improve your adoption, and drive those sales, and keep the mission funded for tomorrow.
All right, well, thank you so much, and we’ll be back with another episode of Experiencing Data soon.