
How to Use AI to Research Like a Researcher (with Cori Widen, User Research Lead at Photoroom)

Apr 29, 2025 · 24 min

Episode description

Research has often suffered from shortcuts, assumptions, and poorly conducted user interviews, long before AI entered the picture. While there are concerns about AI exacerbating these issues, today we’re exploring how AI can actually improve research practices by standardizing and democratizing good research at scale.

Cori Widen, User Research Lead at Photoroom, joins us to share how AI is being leveraged to transform research practices at her company. She discusses the cultural mindset that encourages innovation and provides practical insights on how teams can use AI to elevate the quality and impact of their research.


Transcript

Speaker 1

Since time immemorial, people have been performing research badly. We didn't need AI to take poorly calculated shortcuts, or make broad assumptions based on scant data, or develop awkward personas shaped by assumptions and painfully unhelpful user interviews.

So now that AI is here, it comes as no surprise that the research community is concerned that the technology might wreak even more havoc on research quality. But on today's show, we're exploring ways that AI can actually help to standardize and democratize good research at scale. My guest today is Cori Widen, User Research Lead at Photoroom.

If you're not familiar with Photoroom already, the company is getting a lot of attention for the innovative ways they're using AI, and Cori gave me a fascinating rundown of how all of that starts at the cultural level. We talk about how the mindset of the company fosters an excitement for exploring the possibilities, as well as some practical ways that teams can use AI to appreciate, perform, and use great research. Oh, by the way, we hold conversations like this every week, so if this sounds interesting to you, why not subscribe?

Okay, now let's jump in. Welcome back to the Product Manager Podcast. I'm here today with Cori Widen. She is the User Research Lead at Photoroom. Cori, thank you so much for making some time in your schedule to meet with me. Thank you so much.

I'm so happy to be here. Me too, and Cori has actually been working with me for a long time, and this is one of the first times that we've actually gotten to speak and not just communicate via email, so this is very exciting. I feel like I'm talking to an old friend or an old pen pal. Yes, very exciting, I agree.

We'll start off the way that we always do. Can you tell me a little bit about your background and how you got to where you are now?

Speaker 2

Yeah, sure. So I've been in the tech industry for about 13 years. For most of that time, I was actually working in product marketing.

But as with most people in product marketing, research methods were kind of a part of my job, right? Like interviewing users and things like that. And at some point I just made the call and said, actually, that's the part I like the most, the user research. So I transitioned to being a full-time researcher, first at Lightricks and now at Photoroom.

I'm leading user research at Photoroom.

Speaker 1

Awesome. So today we're going to be focusing on researcher-approved ways to use AI for research purposes, which is kind of a hot-button issue right now. So we'll kick it off on a hot-button topic: using AI for qualitative analysis.

What's your current methodology for combining AI and manual work, in a way that you feel good about, that you can put your own stamp of approval on? And how has that evolved over time?

Speaker 2

Definitely controversial. I'll say just a bit about how it evolved before I go into where I landed now. I was also reluctant, I think, like the whole research community. One, because I didn't trust the AI to do a good job, and at the beginning it really didn't; that was very legitimate.

And the other thing that I don't hear people talk about enough is just that I loved my job. I actually wasn't waiting for AI to come and say, let me make it better, let me solve this pain point. I liked things like qualitative analysis, the process of doing it myself.

So I wasn't super excited about it at the beginning, but it did become clear that it was kind of: figure out how to utilize it, or get left behind.

So when I started using AI for qualitative analysis, my first process was to run the exact manual process I had always used, alongside trying to utilize AI, comparing the experiences and seeing what the AI did well, what it didn't do well, and how it could hit this sweet spot of reliability.

It's been a lot of trial and error, and I think this process will always be evolving, because AI is always evolving. But as of right now, I'll describe to you where I am at this moment. Okay, so I think it helps to have a concrete example. So let's say I'm analyzing a strategic research project, and most of the data is user interview data.

So what I will do is, I will have all of the transcripts within a project. They're stored in Dovetail, and then we use a tool called Dust to query the AI based on all the transcripts that are in Dovetail. So basically, I'm using AI to ask questions about a set of transcripts that I have for a project.

So what I do is, I have my research questions in front of me, which I made manually, and I start prompting the AI, asking for quotes related to our research questions. So let's say, for example, I want to know about pain points in a specific flow or something like that.

So I'll ask the AI to find me quotes from the user transcripts that relate to that research question. What that's doing is basically replacing the process of manually tagging interviews. Many, many researchers would use various tools to manually tag interviews by topic, according to their research questions. And now I use AI for that.

What I then do is take the quotes on each topic and put them into a Miro board and do affinity diagramming, which has always been my process for analyzing interviews. I still do that manually, and based on the affinity diagram, I come up with my insights for the project.

The other place where I use AI here is that I actually run my insights through it. I say, these are the things that I came up with, and I ask the AI to disagree with me, or to find things in the transcripts that are contrary to the insights that I brought forward, just to see if I'm missing anything, to check for biases, and things like that. And that is kind of my Frankenstein process of manual and AI for analysis.
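In rough Python, the loop Cori describes (quotes per research question, then a devil's-advocate pass over the human-written insights) might look something like this. This is an illustrative sketch, not her actual setup: `ask_llm` is a hypothetical stand-in for whatever queries the transcript store (Dovetail via Dust, in her case), and the prompt wording is invented.

```python
# Sketch of the tagging-replacement step: one quote-extraction prompt per
# research question, then a contrarian pass per human-written insight.
# `ask_llm` is a hypothetical callable, NOT a real Dovetail/Dust API.

def quote_prompt(research_question: str) -> str:
    """Ask for verbatim quotes tied to one research question."""
    return (
        "From the interview transcripts in this project, give me an "
        f"exhaustive list of verbatim user quotes related to: {research_question}. "
        "Cite the transcript each quote comes from."
    )

def devils_advocate_prompt(insight: str) -> str:
    """Ask the model to argue against a human-written insight."""
    return (
        f"I concluded the following from these interviews: {insight}. "
        "Find quotes in the transcripts that contradict this conclusion."
    )

def analyze(research_questions, insights, ask_llm):
    """Collect quotes per question, then stress-test each insight."""
    quotes = {q: ask_llm(quote_prompt(q)) for q in research_questions}
    counterpoints = {i: ask_llm(devils_advocate_prompt(i)) for i in insights}
    return quotes, counterpoints
```

Note that the affinity-diagramming step in the middle stays human; only the retrieval on either side of it is delegated.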

Speaker 1

So you mentioned that your overall opinion of AI has shifted. I think a lot of us have had that sort of journey, a little apprehension at first that's changed over time. So how has that evolved for you?

Speaker 2

Yeah, for sure. So at the beginning, I felt like there was a lot of pressure within companies to figure out how to utilize AI, and it was a bit directionless.

Right? Like, this is going to make you more efficient, it's going to help you do a better job, et cetera, et cetera. And I wasn't sure exactly how to apply it, and most of my attempts initially weren't that great. It wasn't that great at replacing me in any part of the research process. However, that pressure didn't go away.

So gradually, as ChatGPT got better at handling larger chunks of data, I kind of relented, and I started making custom scripts to do very basic things, like maybe summarizing an interview or something like that.

That opened my mind a little bit, but I think the big turning point for me was relatively recent, when I came to Photoroom, because Photoroom is super unique in its approach to AI, both user-facing AI and also internally as a company. Is there pressure to incorporate it? Sure, but I would describe it as excited pressure.

Everyone is very excited about all the different use cases they're finding for AI in their specific field and profession, so there's an atmosphere of people always sharing what they've accomplished, people being really interested in how you're utilizing AI, et cetera.

And that environment turned it from this, oh no, I'm a researcher and I have to figure out how to use AI, into something that's actually, I would say, a fun and interesting part of my job.

Speaker 1

Cool, okay. I want to dig a little bit more into the culture at Photoroom in a little while, because this is really interesting, and I think it's a big factor that can influence the adoption of AI within companies overall, which is a huge issue on its own.

But let's talk a little bit about AI assistants, because you mentioned that the ways you've used AI have changed, and being able to use it more effectively has been a real game changer. And, like I think everyone knows, AI assistants are trending now, and I know that you've built a few that you've found to be really helpful in your process.

So what specific types of assistants have you built, and what problems have they been solving for you lately?

Speaker 2

Okay, so two main ones come to mind. One is called Mining User Interviews. It's not a very creative title; however, that is exactly what it does. Essentially, every user interview that is done at Photoroom is stored in Dovetail, and then we have the transcripts via the API and we can ask questions of them.

It's solved different problems for me and Becky, who's my co-researcher, and for other stakeholders. For us as researchers, as I mentioned, it's a big part of our analysis process; it's how we find quotes instead of tagging things, et cetera, when we're analyzing a project.

And for stakeholders, it's a great way, without going through us and us digging up reports and all of that, to just mine all the interviews done to date and ask basically anything about our users or users of competitors.

Right? Like, please give me a list of all the challenges people have had with, I don't know, AI backgrounds or some feature in Photoroom. So it saves us time, and it also brings stakeholders closer to the user, by having them look for that information and interact with it themselves.

So that's one. And another one is more recent, and it's an interview guide generator.

So I know we're going to talk about it later, but a lot of people at Photoroom do user interviews, and there's a wide range in how much time people have available to prepare a great interview guide before interviews, and also in their knowledge of best practices, like how do you ask users things to generate the insights you want.

So with the interview guide generator, essentially the input is: who are the users that you are interviewing, and what do you want to learn from them? And then it generates an interview guide, mostly according to best practices.

So the effect of that has been that, regardless of how much knowledge you have, you can generate an interview guide that doesn't ask leading questions, doesn't ask users to predict their future behavior, and things like that.
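As a rough illustration, a generator like this boils down to folding the two inputs (who you're interviewing, what you want to learn) into a prompt that carries the best-practice guardrails. Everything below is an assumption for illustration, not Photoroom's actual implementation; the rule text and function names are invented.

```python
# Hypothetical sketch of an interview-guide-generator prompt assembly.
# The guardrails encode the best practices mentioned in the episode:
# no leading questions, no asking users to predict future behavior.

BEST_PRACTICES = (
    "Ask open-ended, non-leading questions. "
    "Ask about past and current behavior; never ask users to predict "
    "their future behavior. Cover one topic per question."
)

def guide_prompt(audience: str, learning_goal: str) -> str:
    """Combine the two inputs with the best-practice guardrails."""
    return (
        f"Write a user-interview guide. Interviewees: {audience}. "
        f"What we want to learn: {learning_goal}. "
        f"Follow these rules: {BEST_PRACTICES}"
    )
```

The point of baking the rules into the assistant, rather than the user's prompt, is that the person requesting the guide doesn't need to know the rules exist.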

Speaker 1

That's really powerful, because I'm seeing a trend of AI not just expanding the productivity of individuals, but also expanding their skill sets, by being able to transfer knowledge and that kind of thing, which is such a cool way of using the technology.

So it's really interesting to hear a more nuanced approach directly. Okay, let's talk about prompt engineering. This is another huge thing; everyone's trying to get better at it. And I know everyone talks about, you know, just give it lots of context, but it's a little bit more nuanced than that.

So what are some specific techniques that you've discovered that have really improved the quality of the outputs that you're getting when you're doing analysis on user research?

Speaker 2

First of all, I'm just like everyone else: I'm always trying to figure this out, and sometimes I'm pulling my hair out, like, why doesn't it understand me? But there are a few things that have come up that I find really helpful, specifically when doing analysis.

So, first of all, any question that I ask the AI, I ask in at least two or three different ways, because it's not a human, and it's impossible to know how it's interpreting my question. I often find that I get different user quotes or different transcripts coming up when I ask the question slightly differently.

So, particularly when I'm analyzing a project and I want to be careful that I'm finding every relevant point on a particular topic, I use a few different prompts, asked differently each time. And also, I call it nagging the AI, but I always nag it and ask if it can find more examples, because even though I always ask for an exhaustive list of quotes or an exhaustive list of examples, it's never actually exhaustive. There's always more, so I always ask for more.

And the other thing I've learned is that when I'm speaking to a human, I can ask four or five questions at once and get really excited about a topic, and that does not work well with AI. So I'm trying really hard to write prompts that only contain one question.

And it's interesting, because you mentioned that a lot of people talk about giving it as much context as possible. But anecdotally, based on my own prompt experiences, I think there's such a thing as too much context: it starts to develop a hierarchy, prioritizing some of the things you're saying and ignoring others.

So I'm trying to find that sweet spot, which, to me, is tending toward even less context.
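The three habits above (rephrase each question, keep one question per prompt, then nag for more) could be mechanized roughly like this. This is an illustrative sketch under those assumptions, not any specific tool; the phrasings and the number of nag rounds are invented.

```python
# Sketch of the prompting habits from the episode: several phrasings of
# the same single-question ask, followed by repeated "nag" follow-ups.

def variants(topic: str) -> list[str]:
    """Three phrasings of the same ask; each contains one question only."""
    return [
        f"Find user quotes about {topic}.",
        f"Which transcripts mention anything related to {topic}?",
        f"List every example of users discussing {topic}.",
    ]

NAG = "That list was not exhaustive. Find more examples of the same thing."

def collect(topic: str, ask_llm, nag_rounds: int = 2) -> list[str]:
    """Run each variant, then follow up with the nag prompt a few times."""
    answers = [ask_llm(v) for v in variants(topic)]
    answers += [ask_llm(NAG) for _ in range(nag_rounds)]
    return answers
```

In practice the answers would still need deduplication, since different phrasings surface overlapping quotes.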

Speaker 1

Oh, interesting. Yeah, it also depends on where you're using context. Okay, well, this is good to know, because this is the first time I've heard someone say, oh, don't use all the context.

Speaking of missing nuances: one of the things you mentioned before is this pitfall of AI sometimes missing nuances in data, or presenting limited data as a trend. These are the sorts of things that I think most researchers might catch.

But, like you said before, as you're transferring skill sets and knowledge to other people, it doesn't always come through.

So how do you validate AI's analysis and ensure that you're not missing important outliers? And, more importantly, if you're sharing some of these skills with other stakeholders, how do you ensure that that transfer of knowledge also gets carried forward?

Speaker 2

Yeah, for sure. I mean, these are really important questions, I think. So, when I'm analyzing a project, I'm rarely asking the AI for insights. I'm asking it to find me relevant things, and then I'm using my human self to make the actual insights.

But the Mining User Interviews assistant that we talked about before is often used for things outside of strategic projects. So we do ask it questions about users, right? Like, what are the things users struggle with most about X, or whatever it is.

And I think that, first of all, this is really challenging, and I have two rules that I always follow when I'm doing this. The first is that when I ask the AI for an insight, I ask it to cite every single transcript that it is using in order to create that insight.

So sometimes it will cite, you know, two transcripts out of hundreds, and I can go into the transcripts and look and see that, okay, this is actually not a thing. I can use my human sensibilities to decide, because it's told me from where it is drawing the insight. And the second thing is that I always ask for outliers.

So if it gives me an insight, I say, okay, and now please give me as many examples as possible of users who are actually contrary to this particular insight. That helps me look at all the data and then draw my own conclusion in a more nuanced way. I will say that, in terms of stakeholders using the AI agents, we did create a guide for using them, just in Notion, where everyone can access it.

But, you know, those are resources that some people use and some people don't, so this is something I'm still figuring out. When we have something that is supposed to unlock the availability of user data to everyone at the company, how do we make sure that everyone is aligned? I don't have an exact answer yet.
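The two validation rules (make the AI cite its transcripts, then hunt for outliers) lend themselves to a simple sanity check on citation coverage. A minimal sketch, with the caveat that the 10% threshold below is an arbitrary illustration, not a rule stated in the episode:

```python
# Sketch of the validation rules: measure how much of the transcript
# corpus actually backs an AI-generated insight, and build the
# outlier-hunting follow-up prompt.

def support_ratio(cited_ids: set, all_ids: set) -> float:
    """Fraction of the corpus backing an insight (2 of 200 -> 0.01)."""
    return len(cited_ids & all_ids) / len(all_ids) if all_ids else 0.0

def is_thin(cited_ids: set, all_ids: set, threshold: float = 0.1) -> bool:
    """Flag insights resting on a small slice of the data.
    The 10% threshold is illustrative only."""
    return support_ratio(cited_ids, all_ids) < threshold

def outlier_prompt(insight: str) -> str:
    """The 'second rule': always ask for contrary examples."""
    return (
        "Give me as many examples as possible of users who are "
        f"contrary to this insight: {insight}"
    )
```

A thin-support flag isn't a verdict, just a cue to open the cited transcripts and apply, as Cori puts it, human sensibilities.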

Speaker 1

Yeah, I think this is an ongoing thing, because it points to the value of still having a human overseer, no matter what we're using AI for. You still need to be an expert in that domain in order to check the work, and to make sure the AI is treated more as an employee that you're leading, rather than as someone with an equal skill set to you.

Yeah, I think this is an issue that a lot of folks are bumping up against: how do we ensure that our internal knowledge is aligned with whoever is trying to use the AI for this purpose?

Now that we've talked so much about the way the company functions, I think we can dive right into the values of Photoroom that have enabled and empowered you to act like this with AI. So let's chat about Photoroom's values as they pertain to the use of AI, because you've pointed a lot to this idea of the culture at Photoroom being a big driver of your adoption of, and engagement with, the technology.

So what does it mean to democratize research, and how does it change the way that stakeholders interact with and value your UX research findings?

Speaker 2

Yeah, for sure. So, I mean, the thing that appealed to me most about Photoroom at the beginning was how user-centric they really were. The leadership was spending time interviewing users, and I realized in the interview process that it was an expectation that people at Photoroom talk to users and look at qualitative data in addition to quantitative data, which is not every company, let's put it that way, and that they utilize their interactions with users and their exposure to qualitative data in their decision making.

That was something established way before I got to Photoroom, and I have to admit I was a little bit nervous about it, because my approach to democratization had always been on the other end of the spectrum: research should be done by researchers, and it's great if PMs and designers, et cetera, speak with users, but I didn't see full-on democratization as the best path forward. I think I really got an education at Photoroom. I decided I was open to trying something new, and what I have seen and learned is that, bottom line, in a company where people interact with users and are expected to take qualitative data into account, they just value user research more, and they're more likely to utilize the insights, right?

And this is one of the pain points I'm sure you hear about all the time from user researchers: they're constantly trying to advocate for the user and for their findings based on users. Whereas at Photoroom, what this culture facilitates is that everyone's very hungry for those insights.

That's one thing. And I think the big fear of researchers in an environment where democratization is so huge is that it won't be done right. That was one of my fears as well, because there are actual research skills involved.

And what I have also found is that in an environment where people are consistently interacting with users, they're actually hungry for best practices. Everyone wants to do a good interview when they interview users on a regular basis, and everyone wants their usability testing to be accurate.

So one of the cool things about our job in user research is that we get to help people build those skills when there are gaps.

Speaker 1

And I think it's one of those things, especially when it comes to attitudes around best practices for research.

Maybe it's just me, but I feel like once you get a sense of the amount of skill and nuance that's involved in performing effective user research and effective user interviews, it's kind of like, oh, you didn't know how much you didn't know. Yes, I totally agree.

Yeah, we did an awesome episode with Steve Portigal a couple of years ago on conducting user interviews. I'd never done one prior to interviewing him, other than interviewing on the podcast, and in just 20 minutes, it was so illuminating how many things, even just your own behavior, can impact the outcome of an interview.

So I think it's really, really cool that there's this hunger to transfer knowledge and get people better at talking to users. It's very, very interesting to me.

Speaker 2

It's funny you say that, because Steve wrote a book called Interviewing Users; he literally wrote a book about it. It's a book that I have given to stakeholders, and it's happened more than once that people are like, there's a whole book about it? And it's like, yes, absolutely, there's that much to know about it.

Speaker 1

Yeah, well, and it's such a cool thing, because, like you said, every company is always looking to incorporate their users' real feedback, what they really want, into products. But you need to have enough feedback, and you also have to have high-quality feedback, in order to use it effectively.

So, yeah, I'm really excited to see some of the ways that AI is helping researchers gain buy-in and helping people wave that flag. That's really cool. I'm very pro-research, if you can't tell.

So, looking ahead, though: how else do you think UX research is going to continue to benefit from AI? Do you see any other capabilities that are just around the horizon, that we're not quite at yet?

Speaker 2

Yeah, so I don't have enough technological knowledge to tell you how good the technology will be and when. But when I think about how the researcher's role will evolve as AI evolves and as we adopt it more, one of the things that I've realized, even today, is that I have a lot more time to collaborate with stakeholders, because of how much more efficient I am when I use AI in the process.

I guess if I had to make one loose prediction about this, it would be that the research role morphs from just being about executing research and sharing insights to, yes, executing research where needed, but spending a lot more time with stakeholders, doing things like brainstorming and spending time in the solution space, because we'll have the time to do it, and because we'll have the credibility and the information to bring to the table faster.

Speaker 1

And as far as right now, for researchers who are just incorporating AI into their workflow: obviously you've shared a lot of really practical tips, and I love how accessible and pick-up-and-use-right-now a lot of your recommendations are.

But do you have any other recommendations, as a good starting point, for those of us who are working without a best-practices manual?

Speaker 2

I honestly think that the best thing to do is, first of all, to change your mindset and just accept that this is happening, and I think that's really hard to do. But once you do that: a lot of people say that it's effective to start using AI for small, admin-y tasks, and I think the opposite.

I think a good place to start is to jump into using AI for qualitative analysis. That's the meatiest part of the job, and the most time-consuming part for the most part, and you can figure out where it can help you.

My recommendation is definitely based on my own process, which is to try to use AI to replace manual tagging: to pull the relevant places in your user data, whether it's usability sessions, interviews, whatever, to categorize information, and to do your analysis from there.

The reason is that, in order for you to make the crossover to someone who is excited about using AI in research, you have to do something that's really going to impact your workflow, and the amount of time and energy you save doing that is pretty monumental.

Speaker 1

To end us off: what has been your most surprising discovery as you've used AI for your own purposes? What has been the breakthrough moment for you?

Speaker 2

Two things here. So the breakthrough moment for me, which might have been more obvious to other people, was just how I, as a human, could still play an impactful part in this process; that using AI didn't mean outsourcing everything to AI.

It means understanding what I'm best at and what AI is best at, and putting them together for the best kind of qualitative analysis.

That was one thing. The other thing that was a real moment for me was that I got feedback from someone at Photoroom, who told me that when they were hiring their first researcher, which is me, the biggest fear they had was that, you know, research is known to be slow, okay?

So they were like, oh, this person's going to come, and they're going to give us insights so slowly that we won't be able to utilize them. And this person told me, you know, it turns out you're pretty fast. I realized that the reason I'm fast is that I've embraced AI at these crucial parts of the research process.

I wouldn't be fast without it, and understanding that that makes me more valuable on the team was definitely an aha moment for me.

Speaker 1

That's really cool, and it must feel good to feel like you've got a bit of an ace in the hole now. Well, thank you so much for joining us, Cori. I always love conversations about research; I always learn so much, and today has been so much knowledge condensed into the same, like, 25 minutes.

So thank you so much for all of this. Where can listeners follow your work online?

Speaker 2

Yeah, so I'm not super online. However, I am always happy to connect with other researchers on LinkedIn, so feel free.

Speaker 1

Cool, well, thank you so much for being here. Thank you. Thanks for listening in. For more great insights, how-to guides, and tool reviews, subscribe to our newsletter at theproductmanager.com/subscribe.

You can hear more conversations like this by subscribing to The Product Manager, wherever you get your podcasts.

Transcript source: Provided by creator in RSS feed: download file