Even though everybody at a certain level knows that what people say they do and what they actually do are very different, well, you know, oh, it came back in the data.
This is Erin May.
I'm John Henry Forster, and this is Awkward Silences.
Hello, everybody, and welcome back to Awkward Silences. We are here today with Erika Hall, the founder of Mule Design and the author of Just Enough Research and Conversational Design. We're so excited to have Erika here. We've been liking and retweeting tweets for a while, and it's great to just be talking in person. So welcome, Erika.
Thank you. It's very delightful to be talking with you today.
Yeah, I'm excited. I feel like you're one of the few people I follow on Twitter that has an authentic voice about design stuff and isn't either just being loud and angry about something or I don't know. I feel like you have a very unique perspective. So I'm glad that we're gonna get to chat.
Oh, thanks. That's great to hear. I was recently at a conference, and one of the attendees came up to me after my talk and said, oh, you're much more diplomatic in person than you are online.
Well, speaking of that, you have a diplomatic take on surveys. What is it?
Yeah, I don't think that's very diplomatic at all. Well, my take on surveys is that they're the most dangerous and misused of all potential research tools or methods in the realm of doing things online, you know, digital design, any of that, market research, stuff like that. I think surveys are really, really overused and badly used, and people should mostly just stop.
Great. And why is that? Why are they so dangerous?
I am a big believer in as many people as possible participating in design research, but surveys are the one thing that I think really, really takes a lot of expertise in order to conduct them and interpret them correctly and usefully. But the problem is there are so many survey tools out there. They're so easy to create and run. And the ease of running a survey is completely out of proportion to the expertise required to actually do it right, that it's a really, really popular thing to do and all sorts of really important decisions are being made based on totally bad, flawed evidence.
Sort of the handguns of research methods. They're broadly available, incredibly dangerous.
Unless handled by an expert.
So, yeah. How did we get here? Have surveys always been such a broadly used dangerous weapon, or is this more of a recent development?
Well, it's one of the things that became much, much easier once you could run them online because before enough people were online and before enough tools were created to survey people, you had to either call them on the telephone or get in front of them somehow or contact them in the mail. And so reaching a large number of people was time consuming and expensive. And so I think this made people think about it a lot more. But all of a sudden, on the internet, you could send out a survey to hundreds or thousands of people almost instantly.
It also just seems like there's a part of the human brain that finds larger sample sizes irresistible. Right? You see the opposite with qualitative research. People are like, is five really enough? And you have that whole debate. But with a survey of any quality, if you're like, we have 200 responses, it just feels legitimate for some reason.
Right, absolutely. Because there's just this bias, this really poor understanding. A lot of people get into this stuff without a clear idea of what they need to know or why they need to know it. They're just like, oh, now we know something about more people, without really thinking about what a particular data point really is or whether it's even possible to ask the question. Larger numbers just seem better.
And who's most likely to fall victim to the irresistible lure of the easy online survey? What kinds of groups, what kinds of people, and what kinds of situations are we talking about? You know, more researchers and product folks? And, you know, obviously, marketers love their surveys. Who's really using these a lot, in your experience?
I'd say researchers not as much, because if you are a trained researcher, you know when and how to conduct a survey. So yeah, it's product managers, it's people in leadership, it's any designer working with somebody like that. It's anybody who feels like they're gonna be on the hook for having to make some sort of data-informed justification for a decision. And the more anxiety there is about that, the more likely people are to turn to surveys like this.
What are some of the things that people get wrong in designing their surveys? Because garbage in garbage out. So the garbage in, I guess there's two garbage inputs here potentially. Who are you asking questions and what questions are you asking them? So where do people get those things wrong?
Well, I think that the biggest thing people get wrong is starting with their objective, or their higher-order research question, which is: what do we really need to know? Because people get it backwards. People say, oh, we're going to run a survey, what should we ask? And this is the biggest mistake people make in research. They don't stop and say, okay, what do we actually need to learn in order to inform a decision?
They start with the thing they're going to do. They start with, oh, we're going to talk to people. What should we ask those people? Oh, we're going to run a survey. What questions should we ask?
Instead of saying, what do we need to know and what's the best way to find that out? Because if you don't start there, like there's some things that are just impossible to find out using a survey because when you put together your questions, you've got to know what does it mean to have a representative population? So you have to know like, who do we need to know about? And that has to be really, really well defined. Then you have to think how do we actually reach those people?
Which of those people do we need? What questions can we ask them that they can actually answer truthfully and accurately? Because I think people often run surveys without asking that question, which is like, okay, is this a question that's possible for people to answer? Like the market research survey that is so common and so completely impossible to answer is say, how likely are you to buy this product in the next six months? That is a really, really common question.
It's impossible to answer. Sure, people will give you an answer, but no one thinks about things like that. No one can predict their own behavior in terms of that type of probability. Like how likely am I to buy a new television? Like, I don't know.
Like, what's gonna happen? Am I gonna move into a different place for some reason where I need a smaller TV or a larger TV? Is my TV going to break? Is a totally new awesome kind of TV gonna come out? So there are all these factors, all these things that could happen in people's lives, that make it impossible to really predict behavior. Nobody can really predict their own behavior, unless the answer is like, absolutely not, because I don't watch TV. But even then.
It also feels like when you start with the method and you know somebody on your team is running a survey, like if I know Erin's gonna be surveying our users soon, you almost get a little bit of the pork barrel effect where other people on the team are like, hey, can I throw a question in? And then you get this hodgepodge, collage-y type of thing that just doesn't make any sense to the end user either. It's like, why are they asking me this question? And then it pivots hard to some other unrelated question. So in addition to not being the best approach, as you just laid out, it's also a pretty bad experience for the person who takes the survey at that point, because now you're going through a random set of questions, and it doesn't leave you with the best taste in your mouth.
Yeah. A good way to think about a survey is that it's actually application design. It's no easier than that. The tools really, really hide what you're doing. What you're doing is designing an interactive application, a piece of software, where you need to put it in front of the right users and they need to be able to complete it successfully.
And people don't think about it like that. They don't think, oh, I'm engaging in design, and I need to already know something about the people I'm serving in order to write questions that are intelligible and that they can answer. There's no weight given to it, but then people will use the results to inform really, really important decisions, like I said. And you won't ever get errors back about the quality of the survey. You'll just get answers back, because people are really great at the interaction of selecting answers in a survey, or even filling things out in a box, right?
People are really, really good at that but you have no way of verifying unless you do completely different other research on the same topics. You have no way of knowing just from the results you get back whether you can trust them or not, like whether they're true and valid.
Yeah. It fails very silently.
You mentioned the example of, you know, will you buy a TV in six months or whatever. What is the real question somebody is trying to get answered by asking that? And more broadly speaking, what are the kinds of questions people think they're answering when they send out a survey that are not well answered by surveys?
Yeah. So the question asking somebody to predict their own behavior, the reason that designers and product teams do research is because they do want to be able to predict the future to a certain extent. There's a great quote I use all the time from this Wall Street Journal article that all business is a bet on human behavior. Because you're making an investment hoping that it pays off because you hope that you're right about what people will do in the future. And so the naive thing to think is, Oh, we could just ask people what they will do in the future.
And there are ways of creating a predictive model of behavior: looking at past behavior, and then looking at the circumstances and at emerging trends and stuff like that. So if you want to know how likely people are to engage in a behavior, you look at who has engaged in this behavior in the past and why. But you have to do work. This is where the work of research and design comes in, in terms of looking at what's happened and then saying, okay, based on what we know, what do we think is likely, and what do we need to do to make things more or less likely to happen? And that's really the question.
And you can't ask somebody that directly. What you can ask people about is what they've done in the recent past. You can't ask anything they can't remember. You can look at historical aggregate data and bring it all together to kind of answer that question, to say, oh, how big is the market? What drives purchase decisions? There are all these other questions that you can ask, but you need to go about answering them in a much more nuanced way.
Well, so that begs the question. Right? So surveys have all these problems and they're really ill suited for many things. What are they good for, if anything?
Well, I think it's important to break it down: when you say survey, what do you mean by survey? They're good for getting self-reported data, the kind of data it is possible for people to self-report, from large swathes of the population that it would be impractical and expensive to talk to one on one. So they are good when you want to understand, say, the distribution of attitudes in a population. Polling, that sort of thing, is really good, but that's incredibly expensive and time consuming to do right. What I think people don't realize is that doing surveys right isn't the cheapest method. It's often a much more expensive way of learning.
It's great if you ask very targeted questions in a specific context of a particular population, right? So this sort of idea of like, oh, we'll just put it out there and we'll see who responds and then you get like a completely biased sample, right? But if you ask say one question at a particular moment when somebody like goes through a flow or something like that and you ask them something evaluative, like, oh, how was that experience? Did you find what you needed? Like that could be a fine way to evaluate whether or not that page is working for particular people and you can track them.
And it's good for that. They can be good. You can also do them casually, and this is where it gets tricky: yeah, you could run a low-stakes, casual survey just to kind of generate some ideas.
You could send something out to your mailing list and say like, which of these topics are like most pressing concerns for you or something like that. And you would get something back that might be indicative of like, oh, the people who did care to respond said this, so we should explore that further. But the danger is that people take those things as though they are much more broadly representative than they actually are.
Alright. A quick awkward interruption here. It's fun to talk about user research, but you know what's really fun is doing user research, and we wanna help you with that.
We wanna help you so much that we have created a special place. It's called userinterviews.com/awkward for you to get your first three participants free.
We all know we should be talking to users more, so we've gone ahead and removed as many barriers as possible. It's gonna be easy. It's gonna be quick. You're gonna love it. So get over there and check it out.
And then when you're done with that, go on over to your favorite podcasting app and leave us a review, please.
Related: if you are going to do a survey and you don't have somebody on the team who's an expert in how to do it, are there rules of thumb or other things to keep in mind so that when you are putting it together, the odds that you fuck it up are not as high?
Yeah, just don't do it.
All right, perfect.
If you don't know what you're doing, just talk to some people because the most important thing is to be clear about what you need to know in general. It's like, what information are we missing? And this is a great thing to do is just get together with your team and say, okay, where are the gaps in our knowledge? And have that conversation. Because even just having that conversation is often very, very helpful because you'll find out like, oh, we have this assumption that's completely not founded in reality.
Mhmm. Right? And once you have that discussion about what do we really know and what do we need to know, then you can say, okay, what's the best way of finding that out? And it's probably not a survey.
It's probably talking to some people, maybe looking at research that other people have done. If you want very shallow, broad information, look at stuff the Pew Research Center has done. They run legitimate surveys about broad trends in internet usage and stuff like that. You could look at your analytics. There are all these other options. Just by having a real sit-down with your team, an hour-long conversation about what you need to know and what you already know, you'll probably be able to figure out ways to learn those things really, really quickly.
But no one ever just stops to have that conversation because it's like, no, we need to get some quantitative data back to the managers. Right? That's where this all comes from, is this notion that quantitative data is inherently more objective or somehow superior or more informative than qualitative data. And that's often at the heart of it. It's like if our research doesn't fit in a spreadsheet, nobody's gonna believe it.
And so they end up saying, well, 70% of our users said this. And it's like, no, that wasn't really 70% of your users. That was like 70% of the people who replied to that survey for whatever reason. And does that number even matter? Like, maybe, 70 out of a hundred people said that, but it's like, are those hundred people representative? No. But a statistic like that gets repeated and taken as though it's representative. So
Yeah. No, that makes sense. As I'm thinking about it now, I know that I'm not a good survey writer, and I've managed to get by pretty well without running surveys. So I think what you're saying makes a lot of sense.
Yeah, I mean, demand for surveys often comes for completely political reasons. And also because it is very fast and very cheap to run bad surveys. And a lot of times that happens because there's a fear or a lack of understanding about how to talk to people. Like in some organizations, people have actually written to me and said, Oh, we're not allowed to talk to our customers directly, but we're allowed to survey them. And it's like, those aren't intersubstitutable techniques.
I never understand the dynamic you just described of, we're not allowed to talk to our users, but we can send them a survey. Because if you think about it a little further and just go one step beyond, it's like, well, what's gonna be more damaging to our user base: that I send 1,500 of them a bunch of dumb questions that are maybe hard to answer and confusing, or that I talk to four or five of them one on one, build some rapport, and learn a few things? At least then they know there are humans who work at this company. To me, quote unquote not talking to them is still communicating to them, and the survey seems like the much higher risk.
Yeah, there's a lot of terror about just talking to people. And a lot of my work with clients is helping them understand that it's not terrifying to talk to people, that you will learn things, and how to use what you learn. But a lot of it really does come down to this fear. And sometimes, especially in an organization where the sales relationships are very high touch, right, the people who own those relationships are so scared of just letting go. It's like, oh my gosh, I'm a salesperson, or I'm a customer relationship manager or whatever.
And I've cultivated this relationship that is directly connected to my compensation. And I'm just gonna let some yahoo from the design team, like talk to this person without me there. Like, there's this fear of like losing control over something like that.
So, yeah. It clearly seems riskier to a lot of organizations to send their workers, these mere workers, out to talk to their customers. That seems riskier than surveys. I know you're all about making research accessible to more people, just enough research, right? But there is a certain baseline of comfort that needs to be there, right, on the part of the person talking to users. What is that baseline?
What do you need to know to be at a place where you and your organization can be sufficiently comfortable that you've de-risked talking to customers enough that it's a better option than sending out a survey?
You need to be clear about how you as an organization make decisions. So you need to have good communication in the organization first. And this often takes a lot of uncomfortable conversations to get comfortable. The first uncomfortable conversations you should have are inside your own organization. And they don't take a lot of time, and they don't take a lot of money, right?
Those are both total research smokescreens, because you can learn really useful things very quickly and very cheaply. But there's so much terror of having those sorts of candid conversations. Like I was saying: what do we know? What do we need to know?
How are we making these decisions? Like the first question to ask inside your organization is like, what are our goals? Does everybody working towards those goals have the same idea of what those goals are? What decisions are we making? And you go back to that quote about placing bets, it's like, where are we placing bets?
What resources, whose time and money are we investing in these decisions? And on what basis are we gonna do that? And if you start digging into that, the reason there's so much terror is you find out that so many investments and decisions in business are made based on so called gut feelings of the people with the most power and influence in an organization. And once you start poking at that, it gets really uncomfortable really fast.
Yeah. We all gravitate towards the quantitative data of, you know, a thousand people told us this and 70% of them picked answer A or whatever. But there are actually studies showing that it's not the most effective when it comes to inspiring behavior or getting people to change their actions. The example that always comes up is the commercials where you throw out a bunch of statistics about childhood hunger or something, and you don't get a lot of donations. But then you tell one really compelling story about a child who's suffering in those situations, and you see people donate at a much higher frequency.
So like within your organization, if you are trying to create change and get people to buy in on stuff, the storytelling is actually probably gonna be more effective in the long run. Oh,
absolutely. Everybody, especially anybody in any sort of managerial role, says that they make decisions based on data. They say that they make logical decisions, but they actually make decisions based on emotion and based on stories. People run on stories. We are terrible at statistical analysis.
It takes so much training. Even as I'm working on this revision to Just Enough Research, where I'm adding a chapter on surveys that I totally didn't have in the first version, because my sense was, no, don't even. If you're not a trained researcher who knows what you're doing, don't even go near surveys. And then after talking to people over the last few years, I thought, okay, I have to at least put in a chapter about how to do it right. It just kind of scared me off.
Because after doing the research to write this chapter, I'm even more convinced that people just need to stay away from surveying.
Right. You need like a survey to decide if you should do the survey, and it's really long. Lots of questions.
Yeah. Yeah. You have to go through this test first. And let's see, what was I saying? The decision makers, surveys, oh, the emotions. Yeah. So people run on stories and they run on narrative, but there's this terror, right? There's so much fear in business and design of being called to account for your decisions. And what a lot of managers are looking for is just that sort of CYA thing.
Like, I have a friend who runs focus groups just to essentially provide quotes for annual reports for CEOs, right? So there's a lot of theater. There's a lot of research theater, making it look like we have data. And there are a lot of people doing the same thing over and over again to look busy and look productive, and not really stopping to say, what are we all really doing here? And what's our shared kind of mission?
And it's very humbling to admit. I don't know if you've read Thinking, Fast and Slow, the Daniel Kahneman book about how flawed our critical thinking and decision making is. It's very humbling to accept that, but you have to in order to actually make logical, data-informed decisions. You have to admit all the biases that we have as people. That's just how our human brains work. We're not computers.
We have all these biases. Once you understand that, then you can use that, and you can be more rational and make decisions informed by data. But that doesn't mean using math, because we just suck at math. Yeah. We just do. And so why bother? And it's all because we have this false impression of what it means to make good decisions.
So this is a bit of a tangent, but when I think surveys, two things come to mind. Can I just bounce them off you really quick and get a rapid-fire take? A lot of the ones landing in my inbox are, fill this thing out, take five minutes, be entered to win an iPad. The prize-type incentive, or the drawing. Thoughts on that?
I mean, I think incentives are important because incentives do reduce bias. Because if you don't have an incentive, then the only people who will take the time are the people who love you or hate you. So it's not awful. It's fine to give somebody an extra incentive because, like I said, it will kind of reduce bias. But is that really part of a coherent sampling strategy?
The survey might still be terrible.
Yeah, everything could still be terrible, but yeah, at least you know well, our sample is biased towards people who were interested in potentially winning this thing.
Cool. And then the other one was probably the most common survey of all, at least the one I see the most: how likely are you to recommend this company to a friend, the NPS one. Any quick thoughts on that? I know it's a big topic.
So NPS was invented by a management consultant not as a research tool,
but
as an evaluation technique to give to managers who are bad at math. Right? The whole reason he invented this tool was because managers aren't good at statistical analysis, which is what you need to do if you do surveys correctly. So he created the whole thing: drop out the people in the middle and subtract the detractors from the promoters. He substituted simple arithmetic for statistical analysis because he, I guess, had a low opinion of the math skills of people with MBAs.
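That "simple arithmetic" is easy to sketch. Here's a minimal illustration in Python (the function name and sample scores are made up for this example): respondents answer 0 to 10, 9s and 10s count as promoters, 0 through 6 as detractors, and the 7s and 8s in the middle are dropped from the numerator entirely.

```python
# A sketch of the NPS arithmetic described above: drop the middle,
# subtract detractors from promoters, express as a percentage.
# Promoters score 9-10; detractors score 0-6; 7-8 are "passives"
# and count for nothing -- which is why an 8 vanishes from the score.

def net_promoter_score(responses):
    """Return NPS as a percentage in the range -100..100."""
    if not responses:
        raise ValueError("no responses")
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return 100 * (promoters - detractors) / len(responses)

# Example: 4 promoters, 3 passives, 3 detractors out of 10 responses.
scores = [10, 9, 9, 9, 8, 8, 7, 5, 3, 0]
print(net_promoter_score(scores))  # 10.0 (40% promoters - 30% detractors)
```

Note what the arithmetic hides: the three passives contribute nothing, and no statistical question (sample representativeness, margin of error) is asked at all.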
And the issue with that is that in the case for which he devised that tool, really, it was like Enterprise car rentals in that time period, it was not a terrible thing. But because the math is so easy, everybody's using it for everything, and they're using it as a substitute for actually doing research, without really thinking about the context. The problem is, and he came out, oh God, what's that dude's name? Fred Reichheld, if I'm pronouncing it correctly.
He has come out in the past couple of years to complain about how many NPS surveys are run, because, again, everything's on the internet now, so it's a lot easier to insert these things into every interaction. Back in the early 2000s, these things weren't so prolific, and it wasn't the worst idea to ask this one question after somebody rented a car. That seemed like a perfectly fine tool to evaluate, oh, how well are we doing at meeting our customer expectations? They had this significant interaction with our company, the sort of interaction we would like more people to have with our company. And that was a fine thing to check in on and measure.
But now it's like, you can't do anything. You can't even use a restroom without getting one. It's everywhere.
Yeah. It's everywhere. What that
means is that it's absurd, right? Because there's typically no connection between these very, very minute interactions I have with an organization and whether or not I'm satisfied or would recommend them. All of these interactions aren't necessarily things that anybody ever would think of recommending or discussing with a friend, right? If I call my bank to resolve some issue and they give me this survey, I'm not gonna recommend or not recommend my bank based on that one interaction with that service rep. And in fact, I always fill out those surveys with the highest scores, because I know those phone customer service reps have the most terrible jobs, and they are evaluated on that.
And when I check in at hotels sometimes, they'll give me one of those survey forms. And of course I'm going to fill it out and give them the highest scores, because they were a nice person. I might hate the hotel.
Right.
I might have had a horrible experience, but I'm not gonna be personally responsible for that person losing their job. So there's so much bias in them now, and they're not being used intentionally or conscientiously. It's just this weird trendy thing that everybody does. Because one of the other really important things about surveying people is you've got to understand the person you're surveying, their general context, and how many surveys are coming at them. Even Pew has been writing some great stuff about how much harder it is to survey people, because it's so much harder to get to the people they need to get to with the methods they use. There's just this onslaught of being constantly asked for feedback.
And so in that environment, it makes it even less likely that you will learn anything.
Yeah. My low-key silent protest against NPS is I always just respond with an eight, because I think it's a good score, and it just gets counted as nothing.
Well, that's not what it's for. You're cheating.
I think it's stupid. And I'm gonna put eight down forever.
Does NPS improve? Someone must have asked this question, but I don't think I've seen this in all the NPS literature. So many hot takes. Does it improve if you say, have you recommended us to a friend? Right? The old, like, ask about past behavior versus future behavior. And if so, when would you even like, that's the hard part, right? When do you ask that?
Yeah, that's a much better question, but it's a different question, because then it's actual research. It's not really NPS you want. That's a good thing to know, because what it will tell you is: do people ever recommend you? Is that a sort of interaction people have? And it's not awful, but when would you ask that of a person?
You could only ask that of people you knew were current customers. Well, I guess you could ask it of people who weren't necessarily current customers, because plenty of people recommend things they don't necessarily purchase or use. You might say, oh, you know, I'm not a person who needs this, but I have a friend who mentioned they had a good experience, so yeah, I'd recommend it. But it's like, why are you asking that question? What do you actually want to know?
That makes me think of something. Sometimes I'll send a link to Erin for an article. Like, hey, check this out. She's like, oh, is it good? And I'm like, I haven't read it. I'm just recommending stuff online. Straight to archive.
People do that all the time. And the difference between a survey and asking somebody that question in person is that when you ask the question in person, you can tell, oh, are they making it up? Did they understand the question? You can follow up. You can ask for more information, but you can't do that with a survey.
Like, you do not understand the context around the person. You don't know if they are who they say they are. You don't know if it's a dog answering your survey. You don't know if it's actually a customer. You don't know if they understood the question.
Because if you're creating a survey, one of the things you need to do is usability test the questions, to make sure that the order of questions, the type of questions, the format, whether they're open-ended, whether they're scalar, all actually works. But who actually sits down and tests a survey before running it? Nobody. They just run it, and they're like, ugh.
Some of the questions from the surveys in your Medium article about surveys are so hilarious, and I think they're all real-world examples. You know, how many clicks did it take you to get where you wanted? And maybe my favorite: please rate the balance of graphics and text on this site. It's absurd.
Yeah. And one of my favorites, one that I'm actually putting a screenshot of in the new chapter of the book, is an NPS survey I got from Ticketmaster. Right? Who would recommend Ticketmaster?
I love the thieves.
Huge. Yeah. But it's like, I'm not gonna recommend Ticketmaster to a friend, like, ever. Are you kidding? Like, hey.
Why don't you pay $40 on top of the already expensive ticket to that show? But I'm still gonna use Ticketmaster one hundred percent of the time, because they're my only option. And then another good question to ask, when you're using any research method including surveys, is: how are we going to use this to make decisions? If everybody checks an eight, what are you going to do? What are you gonna do if it comes back and it turns out everybody checked an eight?
That doesn't guide you, that doesn't tell you how to change what you're doing at all. You still have to do research. So it's just this sort of, what do you call it? An operating management tool.
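The "everybody checked an eight" scenario is even emptier than it sounds: under standard NPS scoring, 9-10 counts as a promoter, 0-6 as a detractor, and 7-8 as a passive that counts toward neither. A minimal sketch of the standard formula (the function name and sample data are just illustrative):

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6).
    Respondents answering 7 or 8 are 'passives' and count toward neither side."""
    if not scores:
        raise ValueError("no responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# If every respondent checks an eight, they are all passives:
print(nps([8] * 200))  # → 0
```

So a survey where everyone answers eight produces a score of exactly zero, and nothing in that number tells you what to change.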
Yeah, it's like treading water. It feels like you're doing stuff and you're burning energy, but you're not really moving forward.
Yeah. And the scale isn't magical. I could also go off on Likert scales forever, because the guy who came up with the scale from strongly agree to strongly disagree was a really, really smart and fascinating individual. And it's the same thing where he came up with a tool for a very particular purpose, which was understanding the distribution of attitudes within a population.
And it's been completely misused and especially in marketing, there's a conflation of attitudes with behavior, right? Just because somebody reports that they strongly agree with a statement, does that mean that they're gonna take a particular action? No. No. But everybody treats it like, oh, everybody strongly disagrees and they make these huge leaps without really asking about whether that's really indicative.
And people just constantly do this, even though everybody on a certain level knows that what people say they do and what they actually do are very different. Well, you know, oh, it came back in the data. And this is all because people are just terrified of each other.
Coworkers or humans in general?
Yeah. Humans in general are super like that. Fear is the mind killer. That's so true. But there's a great quote from Susan Farrell of the Nielsen Norman Group.
Is it Nielsen Norman? Nielsen
Norman, yeah. Susan Farrell of NN/g said, if you're not gonna act on the data, don't ask that question.
Right. Right. That's been helpful for me. I mean, we were talking with Noam from Wealthfront the other day about the five whys, and with kids it's why, why, why, you know, I can relate. But it's been useful for me personally to ask whether something is just good to know, something I just wanna know, because that can waste a lot of time. Yeah. Yeah.
I think the thing that John Cutler shared with us a while ago of, you know, if there was one data point or one thing you could know about your users and how much would you pay to get that answer? Just focus on learning that. Figure out, we'd pay a hundred thousand dollars to know x and just go find a way to learn x. You don't need all of the other data points.
Exactly. And prioritizing is so important. But I've talked to people in really well-resourced research departments in large technology companies who are just given budgets and the vaguest topics, and it never connects to business priorities or what people actually really need to know. Because again, it goes back to having some hard internal discussions that you could actually have really, really quickly, but everybody's actually terrified of other people.
Well, that's been a common theme of our podcast. But I don't know. I'm hopeful. I feel like this whole industry is making some strides in the right direction. Trying to get people talking to each other a little bit more.
I think so. I think so. It really is. It's a matter of chipping away, because there is no magic, because it's people. Right?
There's no magic formula to get people to do things, because it's all about changing habits and changing behavior, and that's really hard. And I think acknowledging that it's hard, that it's difficult, is the first step. If you say these conversations are hard, prioritizing what you need to know is a really hard discussion, and you don't just gloss over it and say, oh, these things should be easy. They're not. And so once you have sort of permission to feel that pain. Right.
I think I think that that's really the first step. Agreed.
All life is suffering. And everything's service. And scene. Thanks for listening to awkward silences brought to you by user interviews.
Theme music by Fragile Gang.
Editing and sound production by Kerry Boyd.