
Thanks so much for joining us today. Mark Ungless is the director of data science, AI, and research at Mental Health Innovations, a charity improving the mental health of millions in the UK. Mark and I met a few years back when I was at Imperial, and we did some research together on AI for mental health. I just wanna start by giving a bit of background about Mark. Mark is a researcher with over twenty years of experience doing research in psychology and neuroscience.
He has a PhD in experimental psychology and held academic positions at Imperial College London and the University of Oxford. I'll just talk about a couple of your papers. Ungless was first author on one breakthrough 2001 study published in Nature showing that even a single exposure to an addictive drug can cause long-lasting changes in the brain's reward circuitry. Mark and his colleagues went on to demonstrate that dopamine neurons are not monolithic. They identified two functionally different dopamine systems.
One that encodes rewards and another that encodes aversive or warning signals. Am I getting that right?
Yeah. That's good. Beautiful.

In 2016, Mark published the results of a research collaboration with MIT on social isolation and loneliness neurons, a type of dopamine neuron that, quote, represents the experience of social isolation. So that's a bit about your background. Welcome, Mark.
Thanks. Thanks. Great to be here. Thanks for having me.

Alright. Now let's go to the extreme and just have some fun. Are you familiar with the thought experiment of the experience machine?
Not sure.

The experience machine goes down the path of, like, imagine a world where we could actually just have some, like, thing that you put on your brain that gives you full-on experiences, that gives you dopamine in particular. Like, you know, if dopamine signals unexpected rewarding outcomes, you can still cause unexpected rewarding outcomes by simulating them. Usually, the question is basically, like, would you choose the experience machine? But I'm curious: when you think about, you know, AGI and your predictions for the future, do you think we're going to actually achieve an experience machine?
I mean, I think we're way off from, you know, being properly plugged into the matrix.

At least a year or two?
Yeah. But I think that, you know, again, it's a sort of inescapable conclusion that if you take a, you know, mechanistic view of the brain and of the world around us, then we should be able to simulate some of these things. And I mean, we have, you know, several sensory systems, but we don't have that many. And I mean, we can already, you know, simulate vision pretty well. So

We can?
You know, now it's a different thing to say whether you could have, like, something plugged into your optic nerve that would just, you know

Oh, that's what you mean. But you're saying we can simulate vision as in, like, we have movies and stuff.
Yeah. Yeah. So, yeah.

Can we just convert that into a modality that you plug into your brain rather than putting it in front of your eyes?
Yeah. Exactly. So, I mean, I just think in principle that's something that's possible.

I guess also just going down the path of dopamine and such. I mean, if I understand correctly, like, opioids are just really awesome because they increase dopamine in the brain. Right?
That's one reason why people go to them. Yeah. Yeah.

Okay. Obviously, you have a far better understanding given this was your research. But there's an argument that people can make, which is basically, like, opiates just are the experience machine. This is at least, like, one form of the argument. We have achieved our first step into the experience machine. They're called opiates.
Yeah.

If you take them, then you experience reward without actually having to do anything that's rewarding.
Yeah. Absolutely. Yeah. I mean, that is essentially what those drugs are doing. They are hacking into the normal... you know, these systems are not there for the pleasure of drugs. They are there for normal operations. But

Could you make the same argument about SSRIs? You know, SSRIs are hijacking parts of your brain.
Yeah. I mean, the actions of SSRIs are kind of not unlike, you know, what some drugs are doing, like cocaine and amphetamine and so on. So, yeah, there's no principled difference, really, between what's happening there. We are hacking the brain.

As a machine learning engineer, you know, I tend to think of all neurons as basically the same. And I guess you've had a lot of your career dedicated to showing that that's not the case in the brain. Well, can I just ask: why are different brain cells different? Like, in the context of AI, we think of, you know, neurons interacting with each other just with a weight, like an action potential, and yet, you know, you focus so much on the differences in the cell types. Like, does that matter?
Well, it definitely matters for brain function, I think. And a lot of these kind of motifs that you see in the brain, you see all over nervous systems, all over the animal kingdom. And so some of them are clearly important for regulating neuronal activity, for transmitting signals over long distances across the nervous system, and also for different types of signalling on different temporal scales. So some neurotransmitters signal very, very quickly, and some, like dopamine, for example, are much more neuromodulatory: much slower in impact, but potentially more widespread.
And those different temporal characteristics and different wiring characteristics, you know, give important functionality to the way the brain works. And it's tempting, really, to feel like they must be important. There's potential importance for how neural networks operate as well. Right?

So you and I have talked a bunch about AGI, just for fun. And when I asked, I think you've described yourself as not quite an AGI skeptic, but glass half full. Is that right?
Yeah. So I think, you know, I would say glass half full in the sense that my glass is half full. I mean, AI is performing incredibly well. But it doesn't feel to me like AGI is around the corner. Yeah.
I wouldn't bet on it happening in my lifetime. But then it depends what you mean by AGI. Of course, it all comes down to definitions. But I think, you know, what you're seeing with AI at the moment is incredible performance in some respects, but in other respects, you know, quite unimpressive performance. And when you look at those issues, it's hard to feel that AGI is about to be upon us.

Yeah. So, I mean, I guess it's interesting. I'm bringing up the neuroscience thing off the bat just because it's been interesting to hear from your perspective how, you know, when we think about AI neural networks, we kind of miss a dimensionality in the way that human brains work. Is that right? Like, when you talk about the way that different neurotransmitters function differently.
Yeah. Exactly. I mean, I think when you look at the complexity of the brain, you know, people often say, oh, there's 80 billion neurons in the human brain. Right? And 80 billion glial cells.
And people often just think about that. Oh, well, it's like a massive bag of frozen peas. Right? All of these neurons are wedged in together, and they're all connecting, and each neuron connects with a thousand other neurons, and it's just this enormous neural network. But when you actually look at the neural circuits, what you realise is that there's this kind of intricate complexity, and a lot of the neural circuits are very different from one another. You know, take one of the brain regions that's been best studied, the hippocampus: when you actually look at the way those neurons are wired up, there's no agreement on all the different types of neurons and all the different ways in which the circuits are organised. We don't really understand it.
But there are all these different parts of the brain, and they're organised in quite distinct and quite intricate circuits that we don't really properly understand. And that's very different from just a straightforward neural network, I think.

Yeah. I mean, I guess I wanted to jump straight in and talk about the neuroscience side for that reason, which is, like, even after looking through your research, it's not obvious to me just how much complexity there is in the human brain. And you told me a little bit about the circuitry. I mean, I was shocked to learn that in terms of the dopamine system, there's, like, so much that we didn't know. I always thought dopamine was, like, a thing to do with reward, and then I know that, you know, your research was like, hey, it's not just reward.
Can you just quickly summarize what that research was?
Yeah. Sure. So, I mean, that's a really good example, because dopamine's probably the neurotransmitter we know the most about in the brain, certainly of all the key neuromodulatory transmitters like serotonin, dopamine, norepinephrine. We really know a lot about dopamine, and yet there are these really fundamental questions that are not answered. So there's this old view that dopamine is involved in signaling pleasure and that good things activate the dopamine system.
Right? And a really important breakthrough came from a lot of work by someone called Wolfram Schultz. And what he showed was that dopamine neurons are activated by better-than-expected events, basically. They're encoding something called a reward prediction error. So when good things happen, rewarding events, they'll activate dopamine neurons, but only if those events are unexpected.
Right? And so that's different from something encoding pleasure. So the question is, why is that important? And of course, this is a classic teaching signal for the brain. Right?
So when something unexpected happens, that's when you should learn about things. Right? So we're all making predictions as we go about the world all day long, trying to maximise our rewards and minimise our punishments. If we're making correct predictions, we should carry on with our behaviour. But if it turns out that those predictions are incorrect, that's when we should stop, pay attention to what caused that unexpected reward or punishment, and learn about it.
And so dopamine is really providing that teaching signal.

Mhmm.
So it's really different from just signalling and causing the experience of pleasure. It's really a teaching signal and a behavioural modification signal.
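
In standard reinforcement learning notation, that prediction error is usually written as

$$\delta_t = r_t + \gamma V(s_{t+1}) - V(s_t)$$

where $r_t$ is the reward just received, $V(s)$ is the predicted value of a state, and $\gamma$ is a discount factor. A positive $\delta_t$ means better than expected; a negative one, worse. The notation here is the textbook convention, not something spelled out in the conversation.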

So it's really fascinating to me in particular to talk about dopamine and AI. Are you familiar with, like, the temporal difference idea? That the temporal difference loss is the
activity. Yeah. Yeah. So exactly. And one of the really big insights, from Schultz and Peter Dayan and Read Montague, who were all talking about this together, was that they realized that dopamine activity looks a lot like a temporal difference algorithm.

Can you just explain, or are you familiar with, what the temporal difference algorithm is? Yeah.
Awesome. Although maybe let's just hear yours.

Yeah. I mean, the high level is, the temporal difference algorithm, in the context of reinforcement learning agents... so the example is, like, the time that DeepMind was doing a lot of work on, like, Atari games. And the way that they would have an agent play an Atari game is literally, like, take all the pixels on the screen, pass them into a model, and have it evaluate the value of every possible next state. So if you were to map that forward and look at, like, the tree of all possible future states, that's, like, insanely huge. And that was, like, not feasible.
The temporal difference principle was basically: the only thing you need to predict is what reward you're gonna get in this state transition. So, like, if I'm Mario and I jump and I get that coin, how is my reward signal gonna change? And then what is my expected future reward from that point forward? And if my expectation of what the reward will be differs from what the actual reward looks like, then there's a loss, and I need to update my model of the world, basically. But it's a model of the world that encodes specifically expectations of reward and punishment.
Did I get that right?
Perfect. You did a better job than me. But yeah, that's basically it. And so, you know, dopamine neuron activity follows the same rules.
And actually, it's really interesting you mention DeepMind, because, you know, they, obviously in the early days especially, were really inspired by understanding the brain. Demis Hassabis was a neuroscientist, and in fact, he did a lot of work in the same place where Peter Dayan was based, who was involved in that original paper about dopamine neurons encoding a temporal difference algorithm. So there's clearly, like, a lot of overlap, which is super interesting. And I mean, that at least encourages you that that way of learning is the way the human brain learns a lot of stuff. That's a good place to start.
With your AI.
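
To make the idea concrete, here is a minimal tabular TD(0) sketch in Python. The toy Mario-like states, reward values, and parameters are all invented for illustration; none of this is from the conversation or from DeepMind's actual setup.

```python
# Minimal tabular TD(0) sketch of the update described above.
# States, rewards, and parameters are invented for illustration.

states = ["start", "coin", "end"]                  # a toy Mario-like episode
rewards = {"start": 0.0, "coin": 1.0, "end": 0.0}  # reward received on entering a state
V = {s: 0.0 for s in states}                       # learned value predictions

alpha = 0.1  # learning rate
gamma = 0.9  # discount factor

for episode in range(500):
    for s, s_next in zip(states, states[1:]):
        # Reward prediction error: what actually happened (reward plus
        # discounted future value) minus what we predicted for this state.
        delta = rewards[s_next] + gamma * V[s_next] - V[s]
        # The dopamine-like teaching signal: nudge the prediction toward reality.
        V[s] += alpha * delta

print(V)  # V["start"] converges toward 1.0: the coin ahead becomes fully predicted
```

Once `V["start"]` has converged, the coin no longer generates a prediction error, which mirrors the Schultz observation above: fully expected rewards stop driving the signal.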

I mean, I feel like there are, you know, skeptical ways you can approach this, but there is at least this, like, very simple, basic argument that I think you could make that AI is actually much more similar to brains than we think. And I think the argument is basically: you know, fifty-plus years ago, people came up with these ideas for how you could build a very simple model of a human brain into a computer. And the thought would be, hey, if this works, this is going to change everything. This simple model that just, you know, treats a brain in a very simple, low-dimensional way.
It ignores a lot of complexity. All it is is, like, neural connections, basically. If this works, we should be able to do everything with it. And fifty years ago, that was, like, not true. That, like, didn't work. Fifty years later, we increased our ability to scale compute. We have GPUs. You know, in 2012, we built a much bigger version. You have AlexNet, which basically follows the exact principles from, like, fifty years earlier. Nothing all that different.
And then it just works. So the argument would basically go: we tried to model brains in machines, and then we achieved magic results that look a whole lot like intelligence. Wouldn't it be too much of a coincidence to say, oh, no, no, no, it actually is nothing like a brain? It just happens that we're calling them neural networks; it has nothing to do with the brain, and yet it's still achieving the magical results. Like, I guess I'm wondering, the fact that the results are so impressive from neural networks today that are based on the human brain, is that evidence that they actually model how brains work?
I think that's a reasonable suggestion. Yeah. I mean, I suppose really the question is what you need to do to take those neural networks to get to something like AGI. Clearly, they've gone a long way, but is there something else we're going to need?
So, you know, I would say that that's not my, you know, speciality. But certainly that's an area of debate. Right? And like a lot of AI, there's no principled reason to know whether it's a yes or a no. Right?
So people are just gonna have to try. There are a couple of things about current neural network performance which I think are important, that are really different from humans. So one is the way in which they learn. Right? You know, humans will learn language very quickly, with very limited exposure to data, basically.
Clearly, current AI, to reach the level of performance it's getting to, is requiring essentially all data available. Yeah. So that's a really dramatic difference. And the second is that the performance is slightly different. You know, I was thinking about this the other day: you can ask ChatGPT something, and it'll give you an incredibly fluent answer, very informative and detailed, but then you can also ask it, well, the classic case is, like, you know, how many Rs are in strawberry or something.
It probably doesn't do this anymore. But, you know, there are all the various things that it can get tripped up on that sort of indicate that its understanding of the information is not equivalent. If you were speaking to a human who was being articulate about that information, they would not fall for the same thing. So it kind of suggests that there's something slightly different.

Yeah. I mean, I would say, okay, so the complete counterargument to the case of, like, we tried to model brains and we succeeded is: we never tried to model brains, and maybe this is all just, like, an argument by analogy. Maybe, just to go down the other path, it's basically, like, oh, cool, DeepMind claimed that they based stuff on dopamine.
Did they? Or did they just later on go, oh, that's so cool, dopamine is kind of similar? And actually, it was just similar mathematically and in no way similar biologically. Like, there is, I guess, the counterargument to be made, which is: we've scaled up compute, and now we're trying to draw an analogy to brains so we have more justification.
Yeah. So I think, I mean, certainly the earliest proponents of neural networks always used to claim that they were inspired by the brain.

Yes.
And, you know, DeepMind came from... they all came from a background in neuroscience.

Mhmm.
They'd been exposed to those ideas before they founded DeepMind. So I think some of it could be retrospective storytelling.

It could be.
But I think some of it's that, at least, the models that are being built are in very fundamental ways kind of inspired by the nervous system, I suppose.

Yeah. It's hard to imagine that, like... there still is this analogy. It still looks a little too close to a brain. So I wonder if the other angle to look at this is just in terms of efficiency, which is: biology evolves, which is a very weird mechanism to, like, build systems. When we build computers, they don't evolve.
We, like, construct them. Evolved systems tend to be insanely efficient because they can be. Like, you're dealing with such a low scale. And, you know, there's talk right now about, like, if we could encode data in DNA, then we could encode data, like, absurdly more efficiently than on a hard drive. Like, it would be so much more efficient to just encode, like, a document, like a .docx file, in DNA.
If we could do that. Obviously, that's really hard. So by analogy, there's an argument to be made: hey, neurotransmitters are, like, a really efficient hack that helps brains operate far more efficiently, because, for example, they can operate at different time horizons. Whereas if you didn't have neurotransmitters, you wouldn't have that ability to operate over time horizons built in.
So then it becomes sort of this thought of, like, what if all the biology in the brain outside of the basic neural network is just, like, incremental efficiency gains? Where, fundamentally, each one of those decreases the amount of data you need to learn that, you know, a cat looks like a cat, or, you know, some sort of task that a human can learn from three examples and an AI takes, you know, the internet to learn.
I think it's possible. I suppose we're at the stage now where it's hard to know. The only way we're going to find out is by trying, I think. And people are going to try.

So anyway, let's get back to neuroscience for a minute. I think something that's been really interesting to me when I was looking at your research is, it's obviously very different from the constructive, generative way that we approach AI, where we're actually trying to, like, get stuff done. You're focused more on, like, can we actually just understand what's happening? Because it is happening every day inside of all of us. We don't need to create it.
It's there. And we still have, like, extremely little understanding of what's going on. So I came across one paper, one area of your research, hopefully you can comment on, which was around motivation and appetite regulation. Oh, yeah. Do you remember that work?
Yeah. I think I know which paper you're talking about. Yeah.

I guess I found it fascinating just because it, you know, is not as much, I guess, like a systems-level, let's-understand-how-dopamine-works-in-the-brain thing; it's much more specific. If I understood, you'd identified systems that were responsible for appetite regulation, and you were able to control appetite regulation in mice by controlling some part of the brain.
Oh, yeah. Okay. Right. So, yeah, I do remember this one. Yeah.
So actually, that was a bit of a surprising result, and I'm not sure that we ever quite understood what was happening. But we were really interested originally in salt appetite. Salt appetite is a really fascinating, specific type of appetite that you don't experience strongly all the time, but if you're salt depleted, you'll experience it. And we found this essentially accidentally. I mean, actually, a lot of the discoveries we made were, one way or another, basically accidents along the way in the lab when we were doing something else. We found that when we stimulated dopamine neurons, this suppressed the appetite.
So actually, I'm not quite sure why it did. I suppose, more broadly, this was part of a kind of larger body of work where we found that different dopamine neurons are involved in different functions. So the account that I gave you of dopamine neurons being activated by reward prediction errors, you know, has been incredibly influential. And a key part of that is that all dopamine neurons are doing the same thing. They're all broadcasting the same signal to the brain.
And yet there were various hints in the literature that some dopamine neurons respond differently to different types of stimuli. And so that's something that I got really interested in quite a long time ago. And we showed that some dopamine neurons are activated by aversive events, which is not what you would predict based on the reward prediction error theory. Something that's worse than expected should inhibit a dopamine neuron, or the neuron should at least not be responsive to it.
But we found that some were excited, and really the real puzzle was that it was only a subset of these dopamine neurons that were responding. So we found that dopamine neurons were not all responding the same way; some were responding differently. And then we went on, and you also mentioned that paper where we showed that some dopamine neurons are really sensitive to social isolation, so they're being activated by this aversive experience of being isolated as well. And so, taken together, and I mean other people have also contributed to this idea, we've argued that actually different dopamine systems are involved in different types of behaviour. And some other people have shown, for example, that maybe appetite is more regulated by some dopamine neurons than others.
And the idea really is that... well, let me take a step back. When you think about the reward prediction error, right, one important question is: what happens when something worse than expected happens? Right? So that's clearly a really important signal for the brain as well. You know, some people say more important than rewards.
Right? You want to avoid these punishments. And so there must be a neural system that's encoding that. And one possibility is that it's a reduction in dopamine. So you have an increase in dopamine for positive prediction errors.

And a decrease for negative ones.
Exactly. Now, a couple of problems with that hypothesis are that dopamine neurons don't really decrease their firing that much when something unexpectedly bad happens. Right? It's not as strong as the excitations you see. And also, it's kind of hard to imagine how that signalling would be decoded postsynaptically as well. And so

You're saying, like, how a lack of a neurotransmitter
Yeah. For postsynaptic receptors, that's harder to... sorry. So when all this dopamine is released, right, it then binds to other neurons. And the way it binds to other neurons is it binds to a receptor, which is a specialised protein. It's like the lock and key model. So the dopamine is the key, and it goes into the lock, and then that unlocks biological processes in the next neuron, right, which could be excitation or

Got it. So you're saying, whatever is kind of happening downstream within the brain because the dopamine, the neurotransmitter, was
Exactly. And the way to think about dopamine neurons is that they probably have the most complex projections of any neuron in the animal kingdom. Most neurons are connecting with maybe thousands of other neurons. Dopamine neurons, in the human brain, are making maybe millions of synaptic connections. So they

they're spreading widely across
the brain. Yeah. Exactly. They're acting

as like one system.
Exactly. So they're going all the way into your limbic system and up into your cortex, from this tiny nucleus in the midbrain. Maybe you have a couple of hundred thousand of these neurons, not very many of the 80 billion you've got. But they send out this incredibly dense arborization. And they're like little trees.
Well, most neurons are like trees in some way. And they just have this very, very complex array of branches, or axonal projections, where there's something happening.

So to what you were saying: when something worse happens than what you were expecting. Yeah. You get a wide-ranging impact where you're like, whatever I just did, don't do that again.
Exactly. But it

would be hard to explain that through just a lack of dopamine. It would make much more sense that something actually is
Exactly. It would be another neurotransmitter system, for example. Right? So one idea people have suggested is that serotonin might do that, but actually that's been really hard to demonstrate. We did a bit of research on that. It's been really hard for various reasons that might get a bit too technical for this discussion.

But you did mention to me in the past that serotonin is just hard to study as well.
Yeah, exactly. Some of them are just technical limitations. And dopamine neurons happen to be amenable... they're actually pretty tricky to study as well, but maybe not as hard as serotonin neurons.
But one other possibility is, well, maybe there are just different types of dopamine neuron, right? So one type of dopamine neuron does your positive prediction errors, and another type of dopamine neuron does your negative prediction errors. So this is what we argued, and actually some other people argued that at a similar sort of time. And the debate still continues, unresolved.
Which I think sort of goes to show how, you know, these things move slowly, and they move slowly because it's technically really challenging, really, really challenging work to resolve. So even fundamental questions like that are, you know, still largely up for debate in one of the neurotransmitter systems that's the best understood in the brain.
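
One illustrative way to state that two-population hypothesis in code is to split the single signed prediction error from the sketch above into two non-negative channels. This is just a toy formalization of the idea as described, not a model taken from the research.

```python
def split_prediction_error(delta: float) -> tuple[float, float]:
    """Toy split of a signed prediction error into two non-negative channels,
    loosely analogous to the hypothesized two dopamine populations."""
    positive_channel = max(delta, 0.0)   # responds to better-than-expected events
    negative_channel = max(-delta, 0.0)  # responds to worse-than-expected events
    return positive_channel, negative_channel

print(split_prediction_error(0.5))   # (0.5, 0.0): rewarding surprise
print(split_prediction_error(-0.3))  # (0.0, 0.3): aversive surprise
```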

So from the outside, you know, as a layperson, when I hear that, you know, it's really hard to study this area, but we've had a bunch of unexpected results where we can identify, you know, systems of the brain that are responsible for loneliness, or that affect loneliness. And we have systems of the brain that affect, you know, the wanting of salt, and you can actually increase or decrease an animal's desire for salt. I have to wonder, you know, without necessarily understanding every part of the brain, you have to imagine that this project of neuroscience continues. And over time, we become better and better aware of specific parts of the brain responsible for different things.
And part of how you're studying this, at least for some of these systems, is that you're actually able to induce a state by modifying the way that neurons fire. Am I getting that right?
Yeah. That's right. Yeah.

So where does that lead you philosophically?
Where does that lead you philosophically? Well, I mean, you're right that the field keeps moving forward, and people are making incredible discoveries all the time. And at some point, we'll have agreement on what all the different parts of the nervous system are, and then some point after that, we'll start to have agreement about all the different functions of the nervous system, and then we'll start to really understand how the mind works. And I suppose philosophically, then, really this leads to, you know, having a much better mechanistic understanding of behaviour and psychology.
And then I think, you know, the big questions that people always talk about are: okay, well, how does this then relate to consciousness? How does this relate to our sense of free will?

Yes. That's where I was going with this. Yeah. So I recently read this book Determined, by Robert Sapolsky, who's a Stanford professor, a neuroscientist, and an anthropologist. He does a lot, it seems.
And in the book, he basically makes the case that in neuroscience, there's a lot we don't understand, but we keep understanding more and more. And whenever we look for an explanation, we tend to find one. We don't tend to have this sense that there are, like, uncaused behaviors. So I guess part of what I wonder is, you know, you work in the mental health space these days, which we'll get back to. But, you know, does it change the way that you interact with people, the way that you would think about people making decisions, when you think, hey.
This person made this decision because they really wanted salty food, but actually, I could have, you know, modified this one little part of their brain, and suddenly they wouldn't have wanted that salty food anymore. Like, does it make people seem less like full selves that make decisions when you know that you could actually just break them down into a brain, and think of different regions of the brain, you know, acting semi-independently to make decisions?
Yeah. Yeah. So, you know, I look at everyone like complicated little machines. I mean, it's interesting. It's probably the same for neuroscientists and lots of biologists, you know: we take a bit more for granted than maybe lots of other people do.
So when I think, well, I don't understand why someone behaves in a certain way, or just generally a certain type of behaviour, to me, it seems like a tractable problem. Right? You know, if we study it enough, we will understand it. Right? There's no mystery. There's no magic going on. Well, it is amazing, but it's not magic.

That's so fascinating. So, I mean, when you do think about this, when you're like, I cannot understand why this person is doing this behavior, it's so, you know, annoying. Or, this person's so brilliant, I can't understand how they achieved that brilliance. And you're like, I can't yet understand. But I'm sure that we can understand it if we just work a little harder.
Yeah. Yeah. I mean, I think so. I'm super, you know, optimistic about science. I just think that science is incredible, really. And, like you say, when we apply ourselves to it, we make progress towards understanding things. It's the best way to find things out.

Mhmm.
And it's been incredibly effective, and I just believe that we will continue to do that. Now, you know, certainly some people might argue, well, maybe with understanding the brain, at some point we will just run up against something that's beyond our ability. And, you know, philosophically, maybe it's not possible for the brain, in some way, to understand the brain. The ideas are somehow too complex, you know, and we can't do it. We might have to wait for AI to solve that.

There's something weird here, though: even if we fail to understand it, once you've gone down a path of repeatedly trying to understand something and then succeeding, there has to be some feeling of, it doesn't really matter if I successfully understand this particular behavior. There is obviously a mechanism, and I know that more than most people, because I've seen mechanisms for other behavior.
Yeah. So, I mean, I suppose what you're saying is that you don't have to understand. It depends really what your aim is. I think, you know, even though science as a whole has this sense of moving towards understanding everything, it's always aimed at understanding something specific.
And there are certain things which, you know, just are never understood, for one reason or another, because they don't seem important; they're not prioritised. And so it could end up being the same with the anatomy of the brain. There are all sorts of weird and wonderful possibilities. Maybe it turns out that having this parts list of the brain, something that the neuroscience community has sunk, you know, untold millions into and always claims is vital, maybe we'll look back on that and realize that it was a fool's errand, for some of the reasons that you're talking about. Right? It's just a neural network, and that's the most important thing.

Yeah. There's something here. One idea I've heard comes from the paper 'The Magical Number Seven, Plus or Minus Two': that basically people can't keep in mind more than seven things at once, whatever a thing is. A thing could be, like, a digit, or a thing could be, like, an object or an idea. Plus or minus two. And so there's something here as it applies to science, which is, like, scientific theories can be incredibly complex, but they have to be composed of incremental bricks.
You can't have more than, like, around seven bricks in an idea. This is basically the framing. That if I were to explain a region of the brain, and I were to define that region of the brain, I can use something like seven separators to be like, oh, this is what I mean by the region of the brain. But if I go beyond that, everyone's just confused and lost. And that becomes some sort of intrinsic limitation, where, if you could imagine a neural net that was studying a neural net, if you're imagining, you know, a non-human that can construct ideas that are composed of a thousand parts, perhaps they're like, oh, it's very simple.
This part of the brain is defined by... and then you have some extremely complex, relationship-oriented idea with differential equations or something that, like, we just could not conceive of. But it has extreme predictive power.
Yeah.

I guess you end up... like, one of the questions, when you were talking about science, you were saying a limitation of science is there are a lot of things not worthwhile to study, and so we just don't study them, and therefore we don't learn. It doesn't mean that we couldn't. It just means it's not worth our time. And a lot of that could change if we had, you know, massively more resources from AI, where we could end up in a world where, you know, the goal is just to be able to predict behavior. And otherwise, we don't really care about the mechanisms, and we don't care about what's going on inside, but we care about predicting behavior.
And then perhaps, you know, on the path to predicting behavior, it becomes necessary to have an extremely strong mechanistic understanding of just everything in the brain.
Yeah. It's possible. It could become trivially easy to solve, you know, to have a wiring diagram for every human brain or something. And then it would just be done, because it would be trivially easy. I still think, in that kind of world that feels not resource constrained, there'll still be resource constraints, you know? And so it'll always be a question: is it worth doing or not? But it might become so cheap and easy that people do it.

Yeah. Here's a question, changing topics a bit. How did you get into the mental health space?
So, I mean, actually, I did a psychology degree, and I originally thought I might be a clinical psychologist. And I did a bit of work experience at a psychiatric hospital one summer, and it had a sort of ward of long-stay patients, who were in there with really chronic psychiatric disorders. And I can remember after that thinking, oh, some of those disorders are so interesting, but so intractable and hard to treat. And I can remember thinking, oh, maybe I'll be more interested in going into research.
It really felt like, and this was quite a long time ago, maybe the research really needed to be done, really to understand psychology and the brain, before it was going to be straightforward to help these people. So anyway, then I got interested in research. I had some really great lecturers, actually, who really inspired me, and the whole thing of doing science really excited me. So I went into research. But then after I'd been doing that for quite a while, I began to get a bit, I mean, I don't know if bored is quite the right word, but I'd been doing it for quite a long time, and I had my own research group, and I'd done all the things that sounded really amazing when I was a bit younger, like publishing interesting papers, and having PhD students, and winning grants, and things like that.
And I could tell that I was getting a little stale. I thought to myself, you know, I think this is an opportunity to make a change. And so I basically closed down my lab and then started looking around for opportunities in kind of mental health tech. So that's how I ended up at Mental Health Innovations.

So it's really more of a, you know, you start out in psychology, and you're like, am I really gonna be able to help these people? Maybe we actually need to get a little bit more research-y, go down more of an understanding route. And if we can gain more of a mechanistic understanding of these disorders, through neuroscience or psychology, who cares? They're, you know, related.
Yeah. I think that was probably the idea. I'm not sure. I mean, society has made a bit of progress in treating mental health. It's made a lot of progress in, you know, destigmatizing it and having discussions about it and thinking about it, which I think has been really important.

So I'm curious to ask you about that in particular. Can we talk about the serotonin theory of depression? Sure. I ask this because I think it ties in a lot of the topics that we're talking about.
Yeah.

And, you know, something that we've spoken about briefly. So there was a paper in 2023, a big, impactful one. I did look it up: 'The serotonin theory of depression: a systematic umbrella review of the evidence.' In the abstract, they write: the main areas of serotonin research provide no consistent evidence of there being an association between serotonin and depression, and no support for the hypothesis that depression is caused by lowered serotonin activity or concentrations.
And I guess the context worthwhile to give when speaking about this is: there is a common understanding in society that we have these drugs called SSRIs, selective serotonin reuptake inhibitors, that people take. And I think the very common layman explanation that many people would believe is: depression is a disease where people don't have enough serotonin in their brain, and it can be treated pretty easily by increasing the amount of serotonin in their brain through these pills called SSRIs. Of course, there's been this big backlash, hence this review, to say that's too simple. That's not true. I know that you have a lot of thoughts on this.
Yes. So, I mean, it's a really interesting debate. Where to start on it? I think that the key thing about that review paper, from my point of view, is that what it shows is that the evidence directly showing that low levels of serotonin lead to depression is weak. The evidence is weak. And I think some people disagree with that.
I know a lot of people who work in the field think that the evidence is actually a bit stronger. But it's certainly not, I would say, super strong.

Let me just ask at a higher level. Is depression a brain disease?
Well, so it's clearly something to do with the brain. Right? I think that's the important thing. It must be in some way to do with the way your brain is functioning, unless you're a dualist. So

So you're saying unless it's your soul or it's, you know, in your feet or something like that. It seems to probably be in your brain.
Unless your mind is some non-physical thing floating around, whatever it is, it's obviously in your head. And it's something to do with your brain. Right? And so it's fair to say, if you don't want to be depressed, that it's something to do with your brain not functioning correctly. Right?
So, some dysfunction of the brain. I think we all agree on that. Now, whether it's useful to use a word like disease, I think, is less clear to me. So, there are lots of things that are diseases where you either have the disease or you don't, and it's useful to talk about it like that.
But I think for mental health, it's not always helpful, because most psychological traits are continua. You have various traits that sit on a continuum. So, for example, mood: you can be either very, very happy or very, very sad, and we all sit somewhere along that, and we can move along it, and if you're very, very sad, at the very far end, you'll be clinically depressed. You know, and okay, this raises so many topics, so I'll try not to lose my thread, but of course, that raises the topic of, well, how do you even diagnose someone? Right? In the end, you have to come up with relatively arbitrary cut-offs for saying someone is or is not clinically depressed. Right?
So psychiatrists have a whole... you know, there's a massive diagnostic manual. You know, that's the first sign that there's a problem, that the diagnostic manual is so big. And

And I would also add, I think my understanding of the evolution of the DSM, the Diagnostic and Statistical Manual used to diagnose disorders, is that, in particular with depression and anxiety, it has struggled incredibly with, like, inter-annotator agreement. Which is to say: here's a case file from a person; do they have depression, yes or no? It is incredibly hard to just get two people to agree on that, even with the book. And the kind of idea with changing those thresholds has been, let's just get more people to agree. Because even if we're wrong sometimes, if we can agree, then at least we can do research on the same population, and we can use the same terminology, and we can figure out what actually helps people.
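
As a concrete illustration of what agreement means here, a minimal Python sketch using Cohen's kappa, which corrects raw agreement for chance. The two raters' diagnoses below are invented for illustration, not data from the DSM literature.

```python
# Two hypothetical clinicians diagnose the same 10 case files (1 = depressed, 0 = not).
rater_a = [1, 1, 0, 1, 0, 0, 1, 0, 1, 1]
rater_b = [1, 0, 0, 1, 0, 1, 1, 0, 0, 1]
n = len(rater_a)

# Observed agreement: fraction of cases where the raters give the same label.
p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Agreement expected by chance, from each rater's base rate of "yes".
p_a_yes, p_b_yes = sum(rater_a) / n, sum(rater_b) / n
p_expected = p_a_yes * p_b_yes + (1 - p_a_yes) * (1 - p_b_yes)

# Kappa rescales observed agreement so 0 = chance level, 1 = perfect agreement.
kappa = (p_observed - p_expected) / (1 - p_expected)
print(f"observed={p_observed:.2f}, chance={p_expected:.2f}, kappa={kappa:.2f}")
# observed=0.70, chance=0.50, kappa=0.40
```
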
Exactly. And what would be even better is if we had proper biological diagnostic tools.

Right? Like an MRI.
An MRI, or some kind of blood test, or anything that was replicable and completely objective. Yeah. And at the moment, we don't really have that. And so that creates a number of problems.

Why don't we have that?
Not enough research; more research is needed. One reason we don't have that may be because, with a lot of these disorders, it's not like you either have it or you don't. We're always sitting on continua. So there will always be an arbitrary cutoff. Yeah.

I mean, just high blood pressure, you know, has a cutoff. And blood pressure's a continuum, but obviously we do have a test for it.
Exactly. So with some of these disorders, there's some progress. I mean, I'll go back to dopamine again, because it's the best understood. And actually, some of this relates, I think, to the serotonin hypothesis. You know, one of the first antipsychotics, for treating psychotic episodes in people who have schizophrenia, was a dopamine receptor antagonist. So those are drugs that block the actions of dopamine.
And because of that, people were like, oh, there must be too much dopamine in the brain of people who are having a psychotic episode, and if you block the actions, you reduce the psychotic episode. And actually, after many, many years, it turns out that that looks like it's correct. So when you do imaging studies on dopamine in the brain, people who are having a psychotic episode, or even just pre-psychotic, have elevated dopaminergic activity in the brain. So that hypothesis turns out to have some truth to it.

So luckily, in that case, it is at least relatively simple. There is some way that you can actually draw a cutoff by just looking at
Yeah.

Dopamine activity.
You know, even though psychosis is a super complex event. And actually, it's kind of interesting to think, maybe we can come back to how just having too much dopamine might make you psychotic. But anyway, there are probably other things happening in the brain as well. But you can imagine then, on the back of that, that if selective serotonin reuptake inhibitors increase serotonin in the brain, and that

Perhaps.
That improves mood. Well, SSRIs do increase serotonin.

Mhmm.
So they definitely increase serotonin. And they certainly, for some people, improve their mood. And so then you might say, well, it's not unreasonable to think that there's depleted serotonin in the brain of some of these people. But demonstrating it has been really difficult. Just imaging serotonin in the human brain is incredibly difficult.
The temporal resolution is really low, and the spatial resolution is really low. So you can't detect changes on timescales shorter than, like, you know, maybe seconds, more like minutes, and you can only see very big areas of the brain. The resolution doesn't go down to, like, little microcircuits and so on.
So it's just a really, really hard thing to do. And on that basis, I would be very cautious about saying that it's been shown that it's not the case. You know, absence of evidence is not evidence of absence.

Yeah. And the paper is very particular in how they word this.
Yeah. I think also, we don't have a full understanding of serotonin, but given what we know about serotonin and the circuits it's involved in, I'd be quite surprised if it wasn't involved in mood in some way. But anyway, sorry, I'm digressing a bit. You asked about disease. And so then the question is: is it useful to refer to depression as a brain disease?
Well, it's clearly something to do with the brain. Disease or not, some people find that framing useful. Right? And I think it's because people who are depressed find it helpful, because they feel like it acknowledges the gravity of their issues. Yeah.

And I think there's some element of, like, it's not my fault. Johann Hari writes a lot in Lost Connections about his experience with depression. And he talks about how much of a relief it was to find out: it's just a brain disease, and if I take some pills, I'll be okay. And then he also talks about how disappointed he was when he had to increase those dosages over time, and learned that he was not cured so simply, that it wasn't black and white, here's the pill that cures you.
Yeah, exactly. So it's a double-edged sword, you know. And I think also, it's a relief for that responsibility to be taken away from people. I can see that. But also, that can then discourage people from exhibiting, like, agency over their own life.
Right? Yeah. So with a lot of mental illness, and also lots of other disorders, you know, feeling like you have some control to manage whatever's happening to your health, that you can take steps to improve your health, is really, really important. And taking the view that you have a disease and that you're passively experiencing this disease may not help you get better. So, you know, it's nuanced.

It's an interesting phrasing you said there, passively experiencing this disease.
Yeah.

I'm just noting the phrasing, because I guess part of what's interesting, when you're looking... I know right now your work is not in trying to diagnose and treat depression and give people drugs. So presumably, if you're telling me that, you know, there is a major problem, which is depression, and it's also on a spectrum, it is a scale, it's not black and white, and it seemingly has something to do with serotonin, and SSRIs are effective... I mean, why are you not working on just giving SSRIs to everyone?
Well, so SSRIs don't work for everyone. And I think it's a really crude drug as well. So we talked about how the science of investigating increases in serotonin is very crude. An SSRI is also a very, very crude drug to take. It's elevating serotonin in all sorts of parts of your brain.
And so it might be helpful for some people, but I think, you know, the way to think about shifting people along these psychological traits is that there are lots of different ways you can move along a psychological trait. So you can take some drugs, and that may move you, may move some people, but clearly there are other ways of doing it. You can have talking therapy, you know, cognitive behavioural therapies, maybe even things like going for a walk in nature. There is, like, a whole, you know, library of possible interventions

Mhmm.
That you can do. And they may work better for some people than for others, in different ways that, you know, we don't really understand.

So when you're thinking about this as a psychologist and a neuroscientist... you know, you have to realize, of course, that, like, neuroscience has learned more about the brain and has introduced ideas like the serotonin theory. And that leads to, you know, a shift in mentality that it's more of a brain disease, and towards thinking that drugs are a good idea. I tend to completely understand that point of view, because there does have to be some feeling of, like, if it's in the brain, if we can identify regions, shouldn't we be able to target those regions with drugs? Why should we have to target them with actions? I'm just curious how you think about that.
Oh, well, I mean, it depends what those changes are. Right? Yeah. So if you understand those changes, then you can think about the best way of modifying them, essentially. Right?
And, you know, based on what we know about the brain currently, a lot of these issues are probably to do with neural circuitry that's not operating the way we want it to be operating, synaptic connections being stronger or weaker. Now, for changing that connectivity, influencing the activity of those circuits, a drug may not be the best way to do it. And, you know, when I think about therapy, for me, therapy is a way of changing neural circuits.

Interesting. Say more.
So, you know, when you experience therapy, if you change, that must be because something changed in your brain.

Because for you to change is for your brain
to change. Exactly. I mean, you know, so what changed? Well, in all likelihood, it will be changes at synaptic connections, the way neurons are connected to each other and the way those circuits are operating. So even though one level of explanation may be a psychological one

Mhmm.
Right? I feel better because I talked about this thing that was troubling me, and now I don't feel so bad. That is a result of changes in neural circuits and neural connectivity.

Yeah. Which, ideally, if we had measurement tools for this, we'd be able to measure and say, ah, your drugs are working very well. Except in this case, we're talking about therapy.
That's it. And, you know, I think the next step, probably, for the area, the kind of thing people are thinking about, is that you might not have a drug on its own. It might be therapy and a drug. You can imagine a scenario where maybe changing those neural circuits is facilitated by a drug.

Yeah. Yeah. Like, psychedelic therapy.
Possibly. Yeah. Exactly. It's sort of opening your mind. Right? Maybe people talk about psychedelics because they, like, open your mind. But maybe what psychedelics do is, you know, facilitate neuroplasticity. And so then you have therapy.

Which might be the same thing.
And it's just different ways of talking about the same thing. Exactly.

Yeah. I'm also just thinking back to the, you know, desire-for-salty-foods idea: like, if you want to change a mouse's desire for salty foods, you might be able to, you know, shock their brain in the right place. You also can give them salt, and then they'll usually want less. Yeah. You know?
Because from that kind of point of view, you know, if you're hungry, you could potentially shock part of your brain to not be hungry. But there are a lot of other things you can also do to not be hungry, like eat food, because that is how the brain works. Loneliness was the other one I was thinking about, because it would seem very suspicious to say we can cure loneliness with a drug. If we're talking about, you know, depression here, I mean, I can imagine we probably will have a point where people will try to treat loneliness with a drug. And that's gonna be really weird.
Yeah. So I think that would be surprising. But it could be that

Oh, I would bet this is gonna happen at some point. They're gonna be like, are you lonely? Don't wanna be lonely, but also don't wanna talk to people? Although it would make more sense not to say cure loneliness, but rather, like, reduce your, you know, desire for social interaction. Yeah. Exactly. And then when you look at it that way, you're suddenly like, oh, I don't want to reduce my desire for social connection. I just want to be less lonely. Alright, maybe you should go to therapy.
I think it's, you know, possible. Yeah. It's not implausible that we get a better understanding of those neural circuits around social interaction. And there might not be a single drug that does it. But again, in principle, you know, these are all mechanisms that are in the brain. And if we can understand how they work, then we should be able to modify them.

Yeah. And then if we step aside from the super biological side, where you're taking pills: well, it's no secret you're director of AI at Mental Health Innovations, and I believe it's no secret that you do AI research and are looking into applications of AI for mental health. Are we allowed to talk about the work that we do together?
Sure. Yeah. We we definitely should. Yeah.

Alright. Cool. So, you know, we've been working together on training AI to help people with their mental health, something analogous to therapy. I don't know what we're allowed to call it out loud, but it's kind of AI therapy.
Yeah. AI support.

AI support. Mental health support. Yes. I don't know how careful I have to be with the terminology, but you're welcome to use the correct terms. But I guess what I wonder is, on this spectrum of ways of helping people, you have to imagine that AI support is a lot less intrusive than drugs. Right? Can you talk a little bit, just quickly, about the service that MHI provides through Shout?
Yeah. Sure. So Mental Health Innovations is a charity that is developing new digital products and services to support people's mental health in the UK. Our first service is Shout, which is a 24/7, text-based service to support anyone in the UK. And recently we actually merged with The Mix, which is another digital charity that runs a number of online services.
So we're really committed to providing digital solutions to support people's mental health. Shout has turned out to be really popular. We had our public launch in 2019, and we've since held more than three million conversations with almost a million people in the UK. And it's primarily a listening service: people text in, they get connected with a trained volunteer, who's supervised by one of our clinical staff, and they can have a conversation about whatever's on their mind.
We get a range of things that people talk to us about. The most common issues people bring to us are suicidal thoughts and self-harm. People also come to us with anxiety, low mood, worries about school, bullying, this kind of thing. So it's turned out to be incredibly popular. That's generated a lot of data, and we're really committed to using that data to optimise the way we run the service and to generate insights into mental health that we think are of interest to others in the UK.

And so, some time ago, I guess, we started with training volunteers. Yeah, feel free to take it from here.
Sure. So, I mean, one of the things we found out really early on was that there were lots of people who wanted to volunteer for us. We've trained, I think, almost 20,000 people. But they very often struggled with confidence. It's really difficult.
You do online training, but then you don't get much opportunity to practice, and then you're immediately on the platform, having a text conversation with someone in the middle of the night who's really in a difficult spot. So people wanted more practice. And early on, we started talking about whether it would be possible to build a chatbot. You know, this was pre-ChatGPT, but it still seemed like an exciting possibility. And of course, then you did that project with us, this partnership with Imperial College, and it's been really great.
And I think one of the things that became clear from that project, as a sort of side finding, was that

We can actually simulate these conversations and actually train a language model.
Probably. The tech had got to a point where it was reasonable to do that.

And then, well, I guess that was some time ago, and the tech was pretty early, but we did build out our experiments. And then, when Slingshot became a thing, it was a natural progression to say, let's fine-tune a version of this model that, to start with, can simulate the kinds of people who use Shout, so that we can understand what conversations with those kinds of users would look like and use that for training. And now, of course, we've also trained a model that can simulate the volunteer side of the conversation and give copilot suggestions on what to say, especially in the hardest parts of the conversation.
How do you start a conversation, how do you wrap it up, how do you go through the suicide risk assessment, processes like that? Nothing's launched yet, but one thing we have talked about is that some people who reach out to Shout are obviously doing so because they wanna talk to a person. But I think we have a pretty strong idea that that's not everyone, and that there are probably a lot of people who, given the choice between a human and an AI, would choose an AI.
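(For the technically curious: a minimal sketch of what fine-tuning a texter simulator could look like. This assumes anonymised, role-tagged transcripts in a JSONL file; the model name, field names, and hyperparameters are illustrative assumptions, not the actual pipeline.)

```python
# Hypothetical sketch: fine-tune an open chat model so it plays the texter's
# side of a conversation, for volunteer practice. Nothing here is the real
# system; model id, file, and fields are stand-ins.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "meta-llama/Llama-3.1-8B-Instruct"  # assumed; any open chat model
tokenizer = AutoTokenizer.from_pretrained(model_name)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # needed for batch padding
model = AutoModelForCausalLM.from_pretrained(model_name)

def to_text(example):
    # Each example is {"turns": [{"role": "volunteer"|"texter", "text": ...}]}.
    # The texter maps to the assistant role, so the model learns to generate
    # the texter's turns while a trainee plays the volunteer.
    messages = [{"role": "assistant" if t["role"] == "texter" else "user",
                 "content": t["text"]} for t in example["turns"]]
    return {"text": tokenizer.apply_chat_template(messages, tokenize=False)}

raw = Dataset.from_json("transcripts.jsonl")  # hypothetical, anonymised data
train = raw.map(to_text).map(
    lambda e: tokenizer(e["text"], truncation=True, max_length=2048),
    remove_columns=raw.column_names + ["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="texter-sim", num_train_epochs=1,
                           per_device_train_batch_size=1, learning_rate=2e-5),
    train_dataset=train,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

At inference time you'd prompt the fine-tuned model with the volunteer's messages and sample the simulated texter's replies.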
Definitely the case. Yeah. I mean, a lot of people are using Shout because they actually wanna text, and they don't want to speak. They wanna talk to someone they don't know, and they don't want to be judged. And I think a lot of these reasons, and others, mean that people are amenable to talking to an AI. And there's, well, some research to suggest that that's the case: that some people, under some circumstances, would choose to talk to an AI.
So, I mean, I'll give you an interesting example from us. It's clear that people are using ChatGPT and all sorts of other platforms to talk about their mental health. I mean, you can

Character AIs.
Yeah. Psychotherapists. You read about it in the papers all the time. When Snapchat released My AI, I think it was two years ago now, we were incredibly busy for a couple of days, and we weren't sure why. Then we realized it was because people were being signposted to us from Snapchat's My AI. So people were using this service that had just been launched. It's not a mental health service, and Snapchat definitely don't want people using it to discuss their mental health. But people were going on there, talking about their mental health, and then being signposted to us. Yeah.

And we were hearing this a ton from the My AI team. Part of their struggle was, yeah, they were like, hey, how do we actually handle the fact that a ton of people just want to talk to us about their mental health?
Exactly. So this just goes to show that people really want this, and it's because they don't have anyone to talk to. Most of the people who contact Shout say they contacted us because they don't have anyone to talk to, either because it's late at night, or, even if it's during the day, they just don't have anyone. And so they really just want to be listened to.
And I think a lot of people recognize that even if it's an AI, there's still some usefulness in expressing yourself. There are all sorts of benefits you can think of to talking even to an AI that maybe doesn't do much but listen to you and sort of reflect back. Yeah. Exactly.

One thing here is, you know, Shout is increasing accessibility for a much larger population. But even then, by far the biggest limitation in the Shout service is that you don't have an infinite number of people who can provide it. You mentioned, I think, three million conversations with a million people. That's not that many conversations per person. You're having to significantly limit the number of conversations a person can have.
Yeah. I mean, the biggest challenge for us has been that the demand for the service is hard for us to meet. Especially late at night, people sometimes have to wait to get through. And that's not unique to us: there are very lengthy queues for NHS mental health services, and any mental health service provider will tell you that they're really overwhelmed with demand.
And that's because there's been such an explosion, really, or is that too dramatic a word? There's been a very, very strong increase in the number of people seeking out support for their mental health, and there just aren't enough people to provide it. So if there's a technical solution to that, if there's an AI service that can help in some cases, that some people want, then we really need to pursue it. This is not one of those things where it's about replacing the humans. There just aren't enough humans.
Yeah. And so society really needs to think seriously about that.

And I think also we've talked a lot about the lower end of the acuity spectrum, in terms of: this can be a stepping stone towards accessing human care, but realistically, it's gonna be a really long time before you actually get that human care. And if there are people we can offload from that system, we can actually make room for the people who really need access to those human services.
Yeah. I absolutely agree. Sometimes people talk about these apps as being like digital doorways or digital stepping stones towards getting treatment. So you need to look at it as a whole, really, at what's available to everyone. It could be useful to discuss what's on your mind with a bot before you then go and see your therapist.
Or it could be useful to have that conversation in between therapy sessions. There's any number of possibilities. Or it could be that you have a brief conversation in a digital app, and actually that makes you feel a bit better, and you decide you don't need to go and see someone.

I was gonna say, on that last point: when we first got started with our research years back, language models were barely a thing. We were dealing with BERT models, non-generative LLMs, but still large-scale pre-trained. And of course, the world has changed, and I reached a point where I thought, finally, I think models can do this. And I came to you and said, hey, let's just do this, because I think it's possible now.
It might actually be possible now, and we can see where the technology is at. I mean, how far off do you actually think we are from being able to beat the median therapist in terms of having efficacious conversations?
So my cup is definitely more than half full on this.

More than half full.
I think already the generative AI apps are so engaging. I mean, that's really the key thing.

Mhmm.
You know, there are other apps that don't use generative AI, and the limitation is that they're just not as engaging. If you stick with them and follow their structured programme, people do benefit from them. But that's the key thing with generative AI: it's so engaging. And I feel like for a lot of people, we're not far off having performance that would be good enough to help them.
For example, a lot of people who contact Shout only contact us once or twice. They're in a tight spot, maybe a moment of crisis, and they want to talk about it and think about what they're gonna do next. We're just listening and helping them work through what they might do at the end of the conversation. Mhmm.
And I think that's something which AI could do pretty well now. But obviously, there is this issue around quality: can it do it well enough?

The main concern that people have is around safety. Mhmm. And that's a challenge, but not an insurmountable challenge, in my view.

Yeah. Well, I'm excited for the future of our partnership, then, because I guess the next big thing we're working on is just demonstrating the safety of our system.
Yeah. And I think that's gonna be really important. There's not much discussion about that now. I read an article yesterday, actually, about someone whose chatbot encouraged them to kill themselves, or something like that. We're still at the stage where people are trying to jailbreak systems to do that. But I'm sure we'll move beyond that.
And I mean, the other thing, which is sort of unpopular amongst certain people: humans aren't perfect. I think there's a real possibility that AI, or AI supporting humans, is gonna improve outcomes for people. I think that's a really important thing to keep in mind. A lot of people are just like, oh, we've got humans over here and the AI over there, and we're trying to get the AI to be as good as the humans. But really what we're trying to do is improve outcomes, as well as
have good outcomes for more people. And I really think that AI is gonna help with that.
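(A side note on what "demonstrating safety" can mean in practice: one common layer is a screening gate, where no generated suggestion reaches a person without passing a moderation check first. A minimal sketch, with an entirely hypothetical classifier; the model id, label, and threshold are assumptions, not a deployed system.)

```python
# Toy safety gate: screen every candidate reply before a human ever sees it.
# The classifier below is hypothetical; real systems would layer several
# checks plus human escalation on top of anything like this.
from transformers import pipeline

safety = pipeline("text-classification",
                  model="my-org/safety-screen")  # assumed model id

def screen(candidate_reply: str, threshold: float = 0.99):
    """Return the reply only if it passes the screen, else None so the
    caller can fall back to a vetted template or escalate to a human."""
    result = safety(candidate_reply)[0]  # e.g. {"label": "SAFE", "score": 0.997}
    if result["label"] == "SAFE" and result["score"] >= threshold:
        return candidate_reply
    return None
```

The point is less the specific model than the shape: a conservative threshold, a safe fallback, and a path to a human.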

Improving outcomes, partially because we can just reach more people, and partially because the bar might not be as high as we think.
Yeah. Or even if it's high, it could be higher.

Yeah. And I think what's also been really interesting about your work is that, in the mental health space, virtually nothing is researchable, because sessions are not recorded. You don't have transcripts. With your service, you actually have transcripts at scale that allow you to understand what works mechanistically: which conversations work, which things a volunteer can say that have an impact.
So it'll be really interesting, I think, entering the AI age, when we can finally study the mechanisms of what works and iterate incrementally at scale. Instead of having a lot of different people trying to develop institutional knowledge and teach each other, suddenly you're actually able to improve online by just seeing what works.
Yeah. So that's definitely a big opportunity. And the whole area of mental health is data poor in many ways. It's hard to do research. We've talked about diagnostic criteria, but also, when you look at outcome criteria, these are basically questionnaires about how you're feeling and how you've been feeling recently. And it would be good to have a better way of testing whether someone

Yeah.
really has improved.

Even if it were just a matter of taking the same kinds of research and making them a little more objective: have a conversation, and have the transcript lead to a diagnosis, rather than have the doctor decide on a diagnosis.
Exactly. Could be, yeah. You could imagine that would be the case. That would be really useful, and it might not be an either/or, as well. It's all about enhancing all the information that we're getting. I think there's a real opportunity there.
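(To make that concrete: one simple version of "the transcript leads to the outcome" is to predict a post-conversation measure straight from the conversation text. A minimal bag-of-words sketch; the file and column names are hypothetical.)

```python
# Hypothetical baseline: predict the texter's own post-conversation
# "this helped" rating from the transcript text alone.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# One row per anonymised conversation: columns "transcript" and "helped" (0/1).
df = pd.read_csv("conversations.csv")  # assumed file and schema

X_train, X_test, y_train, y_test = train_test_split(
    df["transcript"], df["helped"], test_size=0.2, random_state=0)

vec = TfidfVectorizer(ngram_range=(1, 2), min_df=5)
clf = LogisticRegression(max_iter=1000).fit(vec.fit_transform(X_train), y_train)

# Held-out performance indicates how much outcome signal the words carry.
print(classification_report(y_test, clf.predict(vec.transform(X_test))))
```

Anything meaningfully above chance would suggest the transcripts carry signal that questionnaires alone would miss.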

Last thing to bring up, and a completely unrelated topic: we've talked a lot about stoicism, and I think you've learned a lot from being a psychologist, I guess, and from working in the mental health space, about the kinds of practices that work for you. Obviously, we're speaking from a scientific perspective, where we'd much rather have a lot of different approaches and not just go on intuition. But can you share a little bit about your theory of the good life?
Yeah. Sure. So stoicism has become quite popular recently, and I'm kind of glad. I think it's become popular because there are a few good communicators who've been promoting it. Like most philosophy, it's not super accessible if you read the original sources.

Yeah.
But people like Ryan Holiday, I think, have done a really good job of communicating it to people. And David Goggins, he's one of my favourites, if you want an entertaining read. But a really great book that I read is A Guide to the Good Life, by William Irvine, who's a philosopher in the States. And he, I think, does the best job of providing food for thought. So for me, what I really like about stoicism is that it provides ways of thinking that I think are really useful in life.
And I think it's particularly relevant to mental health, really. Because the challenge is: everyone wants to be happy and content, but it's hard to be productive and ambitious and competitive and also be happy and content. It's a

real challenge. And so the challenge is how do you balance it: be ambitious, try to change the world, don't just stand still and let the world pass you by, but at the same time, be happy. Yeah. Exactly. Those things are competing. And I guess, do you want to just summarize? I know you gave me a very simple summary. Yeah.
So for me, the first key thing is really trying to recognize what you can and can't control in the world. Mhmm. This is really important in stoicism: there's no point raging against something that you can't control.

Mhmm.
And when something happens to you, what you really need to try and do is identify which parts of it you can control, and that's where you should act. Mhmm. It's not always completely clear, but it's a really, really useful framework for thinking about it. The second part that's really important is that you have it within your gift, to a certain degree, to control how you feel about something. It's not always easy to control how you feel.
But in the end, you can't control whether something happens to you, but at least you have some potential to control how you feel about it. And also to think about whether something really is good or bad. Things are not always intrinsically good or bad. And so how you feel about them is somewhat in your control, and that's really important.

So just to summarize: first, in a situation you have to think, what can I control and what can't I control? Don't get too upset about the things you can't control; focus on what you can. And second, you do have some control over how external stimuli make you feel, and you should use that.
Yeah. Exactly. And it's really important, because things aren't always straightforward. There's this story, the fable of the Chinese farmer and the horses. The super short version is that he's out one day, and all these horses arrive on his farm, and all the local villagers gather round and say, oh, it's really wonderful.
You've got all these horses. You're so lucky. And he says, well, you know, we'll see. And then the next day his son is out corralling the horses and trying to break one of them in, and he gets thrown off the horse and breaks his leg. And all the villagers gather round and say, oh, this is so terrible.
Your son, your only son, has broken his leg. And the farmer says, you know, we'll see. And then the next day, war breaks out, and the military come and gather up all the young men and take them off to the war. But they don't take the farmer's son, because his leg is broken. And so the villagers are like, oh, that's such good luck.
And then the farmer's like, well, you know, we'll see. So it's just a really important illustration of how you don't always know how things are gonna unfold, and it's important to keep that in mind, I think. And the other lesson from William Irvine, which I really like, is to view a lot of life as like playing a game of tennis. Really, the way to think about playing a game of tennis is to play your best game, not to aim to win.
Right? And so the analogy he gives is: if you're playing a game of tennis against, say, Roger Federer, you are obviously gonna lose, and it doesn't make any sense to get upset about that. But what you should try and do is play your best game of tennis against Roger Federer. Right?
And you're gonna feel most content and satisfied if you play your best game. And of course, the truth is that if you play your best game, you're actually most likely to win. So in lots of situations in life, we should be trying to play our best game. Again, you can control yourself and how you play, but you can't always control who you're playing against, and you can't always control the outcome. If you focus on playing your best game, then you're also most likely to win.

I love this. We started by talking about dopamine and serotonin and the mechanisms, and we end with advice from the neuroscientist, which is: stoicism is a great way to rewire your brain.
Exactly.

That was awesome. Thank you so much.
Pleasure. Thank you. Enjoyed that.