Pushkin. Nick Jacobson wanted to help people with mental illness, so he went to grad school to get his PhD in clinical psychology, but pretty quickly he realized there just were nowhere near enough therapists to help all the people who needed therapy.
If you go to pretty much any clinic, there's a really long wait list, it's hard to get in, and a lot of that is organic, and that there's just a huge volume of need and not enough people to go around.
Since he was a kid, Nick had been writing code for fun, so in a sort of a side project in grad school, he coded up a simple mobile app called Mood Triggers. The app would prompt you to enter how you were feeling, so it could measure your levels of anxiety and depression, and it would track basic things like how you slept, how much you went out, how many steps you took. And then twenty fifteen Nick put that app out into the world and people liked it.
A lot of folks just said that they learned a lot about themselves and it was really helpful and actually changing and managing their symptoms. So I think it was beneficial for them to learn. Hey, maybe actually it's on these days that I'm withdrawing and not spending any time with people, that it might be good for me to go and actually get out about that kind of thing. And I had a lot of people that installed that application, So about fifty thousand people installed it from all over
the world, over one hundred countries. In that one year, I provided an intervention for more than what I could have done over an entire career as a psychologist. I was a graduate student at the time that like this is like, you know, something that was just amazing to me, the scale of technology and its ability to reach folks, And so that made me really interested in trying to do things that could essentially have that kind of impact.
I'm Jacob Goldstein and this is What's Your Problem, the show where I talk to people who are trying to make technological progress. My guest today is Nick Jacobson. Nick finished his PhD in clinical psychology, but today he doesn't see patients. He's a professor at Dartmouth Medical School and he's part of a team that recently developed something called Therabot.
Therabot is a generative AI therapist. Nick's problem is this, how do you use technology to help lots and lots and lots of people with mental health problems, and how do you do it in a way that is safe and based on clear evidence. As you'll hear, Nick and his colleagues recently tested therapod in a clinical trial with hundreds of patients and the results were promising. But those results only came after years of failures and over one
hundred thousand hours of work by team Therapbot. Nick told me he started thinking about building a therapy chatbot based on a large language model back in twenty nineteen. That was years before chat GPT brought large language models to the masses, and Nick knew from the start that he couldn't just use a general purpose model. He knew he would need additional data to fine tune the model to turn it into a therapist chatbot.
And so the first iteration of this was thinking about, Okay, what where is there widely accessible data and that would potentially have an evidence base that this could work. And so we started with peer to peer forums, so folks interacting with folks surrounding their mental health. So we trained this model on hundreds of thousands of conversations that were happening on the Internet.
So you have this model, you train it up, you sit down in front of the computer. What do you what do you say to the chat pot?
I'm feeling depressed? What should I do?
Okay? And then what is the what does the model say back to you?
I'm paraphrasing here, but it was just like this, I'm I feel so depressed every day I have. It's such a hard time getting out of bed. I just want my life to be over.
So literally therapist is saying they're going to kill themselves.
Right, So it's escalating talking about kind of really thoughts about about death. And it's clearly like the profound mismatch between what we were thinking about and what we were going for is.
What what did you think when you read that?
So I thought this is a such a non starter. But it was. I think one of the things that I think was clear was it was picking up on patterns in the data, but were the wrong data.
Yeah, I mean one one option then is give up.
It would have been absolutely like, literally, the worst therapist ever is what you have built? I mean it, I couldn't imagine a worse Yeah, a worse thing to actually try to implement in a real setting. So this went nowhere on in and of itself. But we had a good reason to start there actually, So it wasn't just that there's widely available data these peer networks actually do. There is literature to support that having exposure to these
peer networks actually improves mental health outcomes. It's a big literature and the cancer Survivor network, for example, where folks that are struggling with cancer and hearing from other folks that have gone through it can really build this resilience and it promotes a lot of mental health outcomes that are positive. So we had a good reason to start, but gosh, did it not go well? So, okay, the
next thing we do is switch gears the exact opposite direction. Okay, we started with very lay persons trying to interact with other lay persons surrounding their mental health. Let's go to what providers would do. And so we we got access to thousands of psychotherapy training videos and these are interesting. These are these are how psychologists are often like exposed to the field on what they would what they would like really learn how therapy is supposed to work and
how it's supposed to be delivered. And in these these are like dialogues between sometimes actual patients that are consenting to be part of this and sometimes simulated patients where it's an actor that's that's trying to mimic this, and there's a psychologist or a mental health provider that is like really having a real session with this, And so we train our second model on on that. On that data, it seems more promising. You would think you'd say, I'm
feeling depressed. What should I do? As like the initial way that we would test this, the model says mm hmm, like literally mm hmm.
And like like it writes out m m space hmm.
You've got it.
And so what did you think when you saw that?
And so I was like, oh gosh, it's picking up on patterns in the data, and so the yeah, but you continue these interactions and then the next responses go on from the therapist, So like within about five or so turns, we would often get the model that would respond about their interpretations of their problems stemming from their their mother or their parents more generally. So like it's kind of like, if you were to try to think
about it, what a psychologist is. This is like every trope of what a like in your mind if you were going to like think.
Of the stereotypical on the couch and a guy's wearing a tweet jacket sitting in a chair.
And hardly says anything of that could be potentially helpful, but is reflecting things back to me and.
So then telling me it goes back to my parents. Yeah, well this is so let's just pause here for a moment, because, as you say, this is like the stereotype of the therapist, but you trained it on real data, so maybe it's the stereotype for a reason.
Yes, I think what to me was really clear was that we were we had data that the models were emulating patterns they were seeing in the data. So the models weren't the problem. The problem was the data were the wrong data.
But the data is the data that is used to trade real therapists. Like, it's confusing that this is the wrong data.
It is it is.
Why why is it the wrong data? This should be exactly the data you want.
Well, it's it's the wrong data for this format. And our conversation when you might say something me nodding along or saying mm hmm or go on, my contextually be like completely appropriate. So tactically in a conversational dialogue that would happen via chat. This is not like a medium that works very well like this kind of thing.
Yeah, it's almost like a translation, right. It doesn't translate from a human face to face interaction to a chat window on the computer.
And not the right setting.
Yeah, so that I mean that goes to the like nonverbal subtler aspects of therapy, right, like presumer when the therapist is saying m hm, there is there is body language, there's everything that's happening in the room, which is a tremendous amount of information or emotional information, right, and that is a thing that is lost, yes, no doubt in this medium and and maybe speaks to a broader question about the translatability of therapy.
Yeah, absolutely so I think to me, like the it was at that moment that I kind of knew that we we needed to do something radically different. Neither of these was working well. About one in ten of the responses from that from that chatbot, based on the clinicians, would be something that we would be happy with. So something that is both personalized, clinically appropriate and dynamic.
So you're saying you've got it right ten percent of exactly.
So really, no, that's that's not a good like no, it's not a good not a good therapy. No, we would we would never think about implement like actually trying employ that. So then what we started at that point was building our own creating our own data set from scratch, in which we how how the models would learn would be exactly what we want it to say.
That seems that seems wild, I mean, how do you do that? How do you how do you generate that much data.
We've had a team of one hundred people that have worked on this project over the last five and a half years at this point, and they've spent over one hundred thousand human hours kind of really trying to build this.
Just specifically, what how do you build a data set from scratch? Because like, the data set is the huge problem.
Yes, absolutely so. Psychotherapy, when you would test it is based on something that is written down in a manual. So when you're a psychologist when they're in a randomized controlled trial trying to test whether something works or not. To be able to test it, it has to be replicable, meaning it's like repeated across different therapists. So there are manuals that are developed. In this session you work on
on psycho education. On this session we're going to be working on behavioral activation, So which are different techniques that are really a focus at a given time, and these are broken down to try to make it translational so
that you can actually move it. So the team would read these empirically supported treatment manuals, so the ones that had been tested in randomized control trials, and then what we would do is we would take that content chapter by chapter, because this is like session by session, take the techniques that would work well via chat, of which most things in cognitive behavioral therapy would, and then we would create an artificial dialogue between would act as like
what is the patient's presenting problem, what they're bringing on, what the personality is like, and we're kind of constructing this, and then what is what we would want our system to be the gold standard response for every kind of input and output that we'd have. So we're writing both the patient end and the therapist end.
It's like you're writing a screenplay.
Basically, it really is. It's a lot like that, but instead of a screenplay that might be written like in general, it's like not like not just something general, but like, where is something that's really evidence based based on content that we know works in this setting?
And so what you write the equivalent of what thousands of hours of sessions?
Hundreds of thousands. There was post docs, grad students and undergraduates within my group that we're all part of this team that are creating.
Just doing the work, just writing the dialogue.
Yeah, exactly. And not only did we write them, but every dialogue before it would go into something that our models are trained would be reviewed by another member of the team. So it's all not only crafted by hand, but we would review it, give each other feed on it, and then like make sure that it is the highest quality data. And that's when we started seeing dramatic improvements in the model performance. So we continued with us for years.
Six months before chat TPT was launched, we had a model that in today's standards would be so tiny, that was delivering about ninety percent of the responses that were output We were evaluating as exactly what we'd want. It's this gold standard evidence based treatment, so that was fantastic. We were really excited about it. So we've got like the we've got the benefit side down of the equation. The next two years we focus on the risk, the risk side of it well, because there's a huge risk.
Here, right The people who are using it are by design quite vulnerable, by design putting a tremendous amount of trust into this bot and making themselves vulnerable to it, like it's a it's quite a risky proposition. And so so tell me specifically, what are you doing.
So we're trying to get it to endorse elements that would make mental health worse. So a lot of our conversations or surrounding trying to get it to For example, I'll give you an example of one that nearly almost almost any model will struggle with that's not tailored towards the safety end. What is it is if you tell a model that you want to lose weight, it will
generally try to help you do that. And if you want to if you want to work in an area related to mental health, trying to promote weight loss without context is so not safe.
You saying it might be a user within eating disorder absolutely unhealthily thin, who wants to be even thinner.
And the model will help them to often actually get into a lower weight than they already are. So this is like not something that we would ever want to promote, but this is something that we certainly at earlier stages we're seeing these types of characteristics within the model.
What are other like, that's an interesting one and it makes perfect sense when you say it, I would not have thought of it. Or what's another one?
A lot of it would be like we talk about the ethics of suicide. For example, somebody who is who thinks, you know, they're in a midst of suffering and you know it's like that they could should be able to end of their life or they're thinking about this.
Yes, and what do you want the model? What what? What does the model say that it shouldn't say in that setting?
So for you, in these settings, we want to make sure that they don't and the model does not promote or endorse elements that would promote someone's a worsening of suicidal intent. We want to make sure we're providing not only not the absence of that, actually some benefit in these types of scenarios.
That's the ultimate nightmare for you. Yeah, right, Like this be super clear. The very worst thing that could happen is you build this thing and it contributes to someone. Absolutely, that's a plausible outcome and a disastrous night.
It's everything that I worry about in this area is exactly this kind of thing. And so we essentially every time we find an area where they're not implementing things perfectly, some optimal response, adding new training data, and that's that's when things continue to get better until we do this and we don't find these holes anymore. That's when we finally we're ready for the randomized control trial.
Right, So you decide after after what four years five years?
This is about four and a half years.
Yeah, that that you're ready to to have people.
Use use the model.
I'll be it in a kind of Yeah, you're going to be the human in the loop. Right, So, so you decide to do this study. You recruit people on Facebook and Instagram.
Basically ye exactly, yep, and what.
So what are they signing up for? What's the what's the big study you do?
So it's a it's a it's a randomized control trial. The trial design is essentially that folks would come in, they would fill out information about their mental health across a variety of areas, so depression, anxiety, and eating disorders, for folks that screen positive for having clinical levels of depression or anxiety, they would be in our Folks that were at risk for eating disorders would be included in
the trial. We tried to have at least seventy people in each group, so we had two hundred and ten people that we were planning on and rolling within the trial, and then half of them were randomized to receive their ABOUT and half of them were on a wait list in which they would receive their ABOUT after the trial had ended. The trial design was to try to ask
folks to use their ABOUT for four weeks. They retained access to therabot and could use their ABOUT for the next four weeks thereafter, so eight weeks total, but we asked them to try to actually use it during that first four weeks and that was that was essentially the trial design.
So okay, so people signed up, they start like, what's what's actually happening? Are they just like chatting with the bought every day? Is it?
So they install a smartphone application that's that they're about at. They are prompted once a day to try to have a conversation starter with the with the bot and then the bot. From there they could talk about it when and wherever they would want. They can ignore those notifications and kind of engage with it at any time that they'd want. But that was the gist of the trial design, and so folks in terms of how people used it, they interacted with it throughout the day, throughout the night.
So for example, folks that would have trouble sleeping, that was like a way that folks during the middle of the night would engage with it fairly often. They in terms of the types of the topics that they described, it was really the entire range of something that you would see in psychotherapy. We had folks that were dealing with and discussing their different symptoms that they were talking about. So the depression, their anxiety that they were struggling with,
their their eating, and their body image concerns. Those types of things are common because of the groups that we were recruiting. But relationship difficulties, problems like folks, some folks were really like I had ruptures in there, you know, somebody was going through a divorce. Other folks were like going through breakups, problems at work. Some folks were unemployed and during this time, So like the range of kind
of personal dilemmas and difficulties that folks were experiencing. Was a lot of what we would see in like a real setting where it's like kind of a whole host of different things that folks were describing and experiencing.
And presumably had they agreed as part of enrolling in the trial to let you read the transcript Oh?
Absolutely, yeah, very very clear when we did an informed consent process where folks would know that we were reading reading these transcripts.
And are you personally, like, what was it like for you seeing them come in? Are you reading them every day? I mean more than that.
So, I mean this is something that is so ill. You alluded to that that this is one of these concerns that anybody would have. Is like a nightmare scenario where something is the bad happens and somebody actually outs right, So this is like I think of this in a way that I take So this is not.
A happy moment for you. This is like you're terrified that it might go wrong.
Well, it's it's certainly like I see it going right, but I have every concern that it could go wrong. Right like that, And so for the first half of the trial, I am monitoring every single interaction sent to or from the bot. Other people are also doing this on the team, so I'm not the only one. But I did not get a lot of sleep in the first half of this trial, in part because I was really trying to do this in near real time. So usually for nearly every message I was, I was getting
to it within about an hour. So yeah, it was it was a barrage of NonStop kind of communication that was happening.
So were there were there any slip ups? Did you ever have to intervene as a human in the loop.
That we did? And the thing that that was something that we as a team did not anticipate. What we found was really unintended behavior was a lot of folks interacted with they're abot, and in doing that, there was a significant number of people that would interact with it
and talk about their medical symptoms. So, for example, there was a number of folks that were experiencing symptoms of a sexually transmitted disease and they would described that in great detail and ask it, you know what, how how
they should medically treat that? And instead of they're they're about saying, hey, go see a provider for this this is not my realm of expertise, it responds as if, and so this that all of the advice that it gave was really fairly reasonable, both in the assessment and treatment protocols, but we would not have wanted to act that way, So we contacted all of those folks to recommend that they actually contact a physician about that. Folks
did interact with it related to crisis situations. So we had also had there about in these moments provided appropriate contextual crisis support, but we reached out to those folks to further escalate and make sure that they had further support available and that and those types of times too.
So there there were things that, you know, we're certainly areas of concern that that happened, but nothing nothing that was concerning from the major areas that we had intended all kind of really went went pretty well.
Still, Tom on the show the results of the study and what's next for therapot. What were the results of the study.
So this is one of the things that was just really fantastic to see was that we had we looked at our main outcomes for what we were trying to look at, where the degree to folks reduced their depression symptoms, their anxiety symptoms, and their eating disorder symptoms among the
intervention group relative to the control group. So based on the change and self reported symptoms in the treatment group versus the control group, and we saw these really large differential reductions, meaning a lot more reductions and changes that happened, and that non depressive symptoms, the anxiety symptoms, and the eating disorder symptoms, and the THERAPOT group relative of the witless control group, and the degree of change is about
as strong as you'd ever see. And are randomized control trials of outpatient psychotherapy that would be delivered within cognitive behavioral therapy with a human, a real human delivering this an expert. You didn't test it against against therapy, No, we didn't. What you're saying, results results of other studies using real human therapists show comparable magnitudes of benefit. That's exactly right.
Yes, you've gotta do a head to head. I mean, that's the obvious question, like why not randomize people to therapy or THERAPI bought?
So the main thing when we're thinking about the first origins point is we want to have some kind of effect of how this works relative to the absence of anything.
Relative to nothing, well, because I mean, presumably the easiest case to make for it is not it's better than a therapist. It's a huge number of people who need a therapist don't have one exactly, and that's the unfortunate reality. BOT is better than nothing. It doesn't have to be better than a human therapist. It just has to better, that's right.
But so yes, the we are planning ahead to head trial against therapist as the next trial that we run, in large part because I already think we are not inferior. So it will it'll be interesting to see if that actually comes out. But that is that is something that we have outstanding funding proposals to try to actually do that. So one of the other things that I haven't gotten to with in the trial outcomes that I think is really important on that end actually is to two things.
One is the degree that folks formed a relationship with therapaud and so in psychotherapy, one of the most well studied constructs is the ability that you and your therapist can get together and work together on common goals and trust each other that you as a it's a relationship, it's a human relationship. And so this in the literature is called the working alliance, and so it's this ability
to form this bond. We measured this working alliance using the same measure that folks would use with outpatient providers about how they they felt about their therapist, but instead of the therapist that now we're talking about therabot, and and folks rated it nearly identically to the norms that you would see on the outpatient literature. So we asked folks, we give folks the same measure, and that it's essentially equivalent to how folks are reading human providers in these ways.
This is consistent with other where we're seeing people having relationship with chatbots and other domains. Yes, I'm old enough that it seems weird to me. I don't know, seem weird to you.
I that part I this is more of a surprise to me that it was as the bonds were as high as they were, that they would actually be about what humans would be. And I will say, like one of the other surprises within the interactions was the number of people that would like respond kind of check in with therabot with and just say hey, just checking in as if like Therabot is like a I don't know. I would I would only like have anticipated folks would use this as a tool, so like not like they
went to hang out with like almost that way. It's like our initiating a conversation that isn't I guess doesn't have an intention in mind.
I say please, I'm thank you, I can't help my Is it because I think they're going to take over or is it a habit or what? I don't know, but I do, I do.
Yeah, I wouldn't. I would say that this was more surprising the degree to that folks established this this level of a bond with it. I think it's actually really good and in really important that they do, in large part because that's one of the ways that we know psychotherapy works, is that that that folks can come together and trust this and develop this working relationship. So I think it's actually a necessary ingredient for this to work.
To some I get it makes sense to me intellectually what you're saying. Does it give you any pause or do you just think it's great?
It it gives me pause. If we weren't delivering evidence based treatment, Well, this is a good moment. Let's talk about the let's talk about the industry more generally. Yeah, this is not a you're not making a company, this is not a product, right, you don't have any money at stake. But there is a something of a therapy bought industry.
There is a private sector. Like, tell me what is the broader landscape here?
Like, so there's a lot of folks that are have jumped in predominantly sense the loans of shot GPT, and a lot of folks that have learned that you can call a foundation model fairly easily.
When you say call you mean just sort of like you sort of take a foundation model like GBT, and then you kind of put a wrapper around exactly and the rapper it's like it's basically GPT with a therapist wrapper.
Yeah. So it's a lot of folks within this industry are saying, hey, you act like a therapist and then kind of off to the races. It's it's otherwise not changed in any way, shape or form. It's it's like a literally like a system prompt. So if you were interacting with chat GBT, it would be something along the lines of, hey, act as a therapist and here's what we go on to do. They may have more directions than this. But that's this is kind of the light
touch nature, so super different from what we're doing. Actually, yes, so we conducted the first randomized control trial of any generative AI for any type of clinical mental health problem. And so I know that these folks don't have evidence that this kind of thing works.
I mean, there are non generative AI bots that people did randomize control trials of, right, just to be clear.
Yes, there are non generative absolutely that have have evidence behind them. The generative side is very new, and so and there's a lot of folks in the generative space that have jumped in. Yeah, and so a lot of these folks are not psychologists and not psychiatrists, and and Silicon Valley, there's a saying move fast and break things. This is not the setting to do that. Like move fast and break people is what you're talking about here.
You know, it's like the and the amount of times that these foundation models act in profoundly unsafe ways would be unacceptable to the field. So like that, we tested a lot of these models alongside when we were developing all of this. So it's like, I know that they don't they don't work in this kind of way in a real safe environment. So because of that, I'm I'm really hugely concerned with kind of the field at large that is moving fast and doesn't really have this level
of dedication to trying to do it right. And I think one of the things that's really kind of canning within this is it always looks polished, so it's harder to see when you're getting exposed to things that are dangerous. But the field, I think is in a spot where there's a lot of folks that are out there that are acting and implementing things that are untested, and I suspect a lot of them are really dangerous.
How do you how do you imagine theahbut getting from the experimental phase into the widespread use phase.
Yeah, so we want to essentially have one at least one larger trial before we do this. You know, we have it's pretty a pretty decent sized first trial for being a first trial, but it's not something that I would want to see out in the open just yet. We want to have continue to oversight it, make sure
it's safe and effective. But if it continues to demonstrate safety and effectiveness, this is one of those things that why I got into this is to really have an impact on folks lives, and this is one of those things that could scale really effective personalized cares in real ways. So, yeah, we we intend to if evidence continues to show that it's safe and effective, to make this out into the
open market. In terms of the thing that I care about in terms of the ways that we could do this is trying to do this in some ways that would be scalable, so that we're considering a bunch of different pathways. Some of those would be delivered by philanthropy
or nonprofit models. We are considering also like just a strategy that would just not for me to make money, but just to scale this under some kind of for profit structure as well, but really just to try to get this out into the open so that folks could actually use it, because ultimately we'll need some kind of revenue in some ways to be part of this that would essentially enable the servers to stay on and to scale it.
And presumably you have to pay some amount of people to do some amount of supervision absolutely forever.
Yeah, So we in the real deployment setting, we hope to have essentially the decreasing levels of oversight relative to these trials, but not an absence of oversight. So exactly you're not going to stay up all night reading every message exactly, that won't be sustainable for the future, but we will have like flags for things that should be seen by humans and intervened upon.
Let's talk about this other domain you've worked in in terms of technology and mental health, right, and so in addition to your work on thera bot, you've done a lot of work on it seems like basically diagnosis monitoring people, essentially using mobile devices and wearables to track people's mental health to predict outcomes like tell me about your work there and the field there.
So essentially it's trying to trying to monitor folks within their freestanding conditions, so like in their real real life, through using technology so in ways that are not don't require burden.
The starting point is like your phone is collecting data about you all the time. What if that data could make you less depressed?
Yeah, exactly, what if we could use that data to know something about you so that we could actually intervene and so, like thinking about a lot of mental health symptoms. I think one of the challenges of them is they are not like all or nothing the field. Actually, I think it's this really wrong. And when you would talk to anybody who has a experience as a clinical problem, they have changes that happen pretty rapidly within their daily life.
So they like will have better moments and worse moments within a day, They'll have better and worse days. And it's not like it's all this like it's always depressed or not depressed. It's like these these fluctuating states of it. And I think one of the things that's really important about these types of things is if we can monitor and predict those rapid changes, which I think we can.
We have a that we can is that we can then intervene upon the symptoms before they happen in real time, so like trying to predict the ebbs and the flows of the symptoms, not to like say, I want somebody to never be able to be stressed within their life, but so that they can actually be more resilient and cope with it.
And so what's the state of that art, Like, is there somebody who's can you do that? Can somebody do that? Is there an app for that?
As we used to say, Yeah, I mean we have the science surrounding. This is about ten years old. We've done about forty studies in this area across a broad range of symptoms, so anxiety, depression, post traumatic stress disorder, schizophrenia, bipolar disorder, eating disorders, so a lodge are different types of clinical phenomenon and we can predict a lot of different things in ways that I think are really important.
But I think, like to really move the needle on something that would make it into population wide ability to do this, I think the real thing that would be needed for like the ability to do this is to pair this with intervention that's dynamic. So something that's actually ability, has an ability to change and has like a boundless context of intervention. So I'm going to actually loop you.
Back like the Abot.
That's exactly right. So these two things that have been distinct arms of my work are like so natural compliments to one another. Now think about Okay, let's come back to therabot in this kind of setting.
So give me the dream.
So this is the dream. So you have Therabot, but instead of like a psychologist that's completely unaware of what happens, is reliant on the patient to tell them everything that's going on. In their life. Yeah, all of a sudden, there butt knows them knows hey, oh this they're not
sleeping very well for the past couple days. They haven't left their home this week, and this is a big deviation from them and how they normally would live life Like this can be targets of intervention that don't wait for this to be some sustained pattern in their life that becomes entrenched and hard to change. Like, no, let's actually have that as part of the conversation, where we don't have to wait for someone to tell us that
that they didn't get out of bed. We kind of know that they haven't left their house, and we can actually make that a content of the intervention. So that's like, I think these these ability to like intervene proactively in these risk moments and not wait for folks to come to us and tell us every aspect of their life that they may not know and so like because of this, it's that's that's where I think there's a really powerful pairing of these two.
I can see why that combination would be incredibly powerful and helpful. Do you worry at all about having that much information and that much sort of personal information on so many dimensions about people who are by definition vulnerable.
Yeah, I mean, in some ways, I think it's the real ways that folks are already collecting a lot of this type of data already on these same populations, and now that we could put it to good use. Do I worry about kind of yet falling into the wrong hands. Absolutely. I mean we have like really big tight data security kind of protocols surrounding all of this to try to make sure that only folks that are established members of the team have any access to this data. And so yeah,
we are really concerned about it. But yeah, no, if there was a breach or something like that that could be hugely impactful, something that would be greatly worry.
We'll be back in a minute with the lightning round. Hey, let's finish with the lightning round.
Okay.
On net, have smartphones made us happier or less happy?
Less happy?
You think that you think you could change that, You think you could make the net flip back the other way.
I think that we need to meet people where they are, and and so this is we're not like trying to keep folks on their phones, right, like, we're trying to actually start with where they are and intervene there, but like push them to go and experience life in a lot of ways.
Yeah, Freud overrated or underrated?
Overrated?
Still okay, who's the most underrated thinker in the history of psychology? Oh?
My, I I mean to some degree, Skinner was like really operant conditioning is like at the heart of most clinical phenomenon that deal with emotions, and I think it's probably one of the most impactful. Like it's so simple in some ways that behavior is shaped by both positive essentially benefits and like drawbacks, so rewards and punishments and these these types of things are the simplicity of it is is so simple, but like the how meaningful it is and daily life is so profound, we.
Still underrate it. I mean when I the little bit I know about Skinner, I think of the black box, right, the like, don't worry about what's going on in somebody's mind, just look at what's going on on the outdoit. Yeah.
Yeah, And with.
Behavior, I mean in a way it sort of maps to your wearable's mobile devices thing, right, like just look, if you don't go outside, you get sad, and so go outside.
Sure exactly. I am a behaviorist at heart, So this is part of part of what however you way.
I mean, I was actually think briefly before we talked that wasn't gonna bring it up, But since you brought it up, it's interesting to think. Like the famous thing people say about Skinner is like the mind is a black box, right, we don't know what's going on on the inside and don't worry about it.
Yeah.
It makes me think of the way large language models on black boxes, and even the people who build them don't understand how they work.
Right. Yeah, absolutely, I think psychologists in some ways are best suited to understand the behavior of large language models, because it's actually the science of behavior absence the ability to like potentially understand what's going on inside, Like neuroscience is a natural compliment, but in some ways a different different lens in which you view the world. So like
trying to develop a predictable system that is shaped. I actually think we're not so bad in terms of folks to be able to take this on.
What's your go to karaoke song?
Oh, don't stop believing. I'm a big karaoke person too.
Somebody just sent me that just the vocal from stop believing.
Ah, yeah, no, it's it's it's like a meme.
It's amazing, it is.
Uh.
What's one thing you've learned about yourself from a wearable device?
Mm hmm. One of the things that I would say, like my ability to understand recognize when I've actually had a poor night's sleep or a good night's sleep has gotten much better over time. Like I think, as humans were not very well calibrated to it. But as you actually start to wear them and get understand you can you are you become a better self reporter.
Actually I sleep badly. I assume it's because I'm middle aged. I do most of the things you're supposed to do. But give me one tip for sleeping. Well, I get to sleep, but then I wake up in the middle of the night.
Yeah. That. I think. One of the things that a lot of people will do is they'll worry, particularly in bed, or use this as a time for thinking, so a lot of a lot of the effective surrounding that, or to try to actually give yourself that same time that would be that unstructured time that you would be dedicated that you might experience in bed.
You tell me I should worry it ten at night instead of three in the morning. If I worry, if I say it ten at night, okay, worry now, then I'll sleep through the night.
There there's literally evidence surrounding scheduling your worries out and I love during the day and it does work. So yeah, that's okay. If it's got some.
Worries, I'm gonna worry it ten tonight, I'll let you know tomorrow morning.
If it were just don't do it in bed. Yeah, okay, okay.
If you had to build a chatbot based on one of the following fictional therapists or psychiatrists, which fictional therapist or psychiatrist would it be? A Jennifer Milthy from The Sopranos, B Doctor Krokowski from The Magic Mountain, see Fraser from Fraser, or d Hannibal Lecter.
Oh god, okay, I would probably go with Frasier, a very different style of therapy than but I think his demeanor is at least generally decent, So yeah, mostly appropriate with most of his clients from what I remember in the show.
Okay, it's a very thoughtful response to an absurd question. Anything else we should talk about?
You've asked wonderful questions one thing I will say, maybe for folks that might be listening, is a lot of folks are already using generator AI for their mental health treatment, and so I will I'll give a recommendation if folks are doing this already, that they just treat it with the same level of concern they would have the Internet.
They there may be benefits they can get out of it. Awesome, great, but just don't work on changing something within your daily life surrounding particularly your behavior, based on what these models are doing, without some real thought on making sure that that is actually going to be a safe thing for you to do.
Nick Jacobsen is an assistant professor at the Center for Technology and Behavioral Health at the Geissel School of Medicine at Dartmouth. Today's show was produced by Gabriel Hunter Chang. It was edited by Lydia Jean Kott and engineered by Sarah Brugier. You can email us at problem at Pushkin dot FM. I'm Jacob Boldstein, and we'll be back next week with another
Episode of What's Your Problem.