3 Takeaways Podcast Transcript
Lynn Thoman
(https://www.3takeaways.com/)
Ep 223: I’m a Doctor. ChatGPT’s Bedside Manner Is Better Than Mine.
This transcript was auto-generated. Please forgive any errors.
Lynn Thoman: I'm going to start the podcast today with my guest reading the beginning of his recent New York Times op-ed article. Jon, please go ahead.
Jon Reisman: As a young idealistic medical student in the 2000s, I thought my future job as a doctor would always be safe from artificial intelligence.
At the time, it was already clear that machines would eventually outperform humans at the technical side of medicine. Whenever I searched Google with a list of symptoms from a rare disease, for example, the same abstruse answers that I was struggling to memorize for exams reliably appeared within the first few results. But I was certain that the other side of practicing medicine, the human side, would keep my job safe.
This side requires compassion, empathy, and clear communication between doctor and patient. As long as patients were still composed of flesh and blood, I figured, their doctors would need to be too. The one thing I would always have over AI was my bedside manner.
Lynn Thoman: So the one thing that he thought he would have over AI was his bedside manner. But is that true? Does it matter who or what we interact with in medicine or elsewhere in our lives if it provides us with compassion, empathy, and clear communication?
Hi, everyone. I'm Lynn Thoman, and this is 3 Takeaways. On 3 Takeaways, I talk with some of the world's best thinkers, business leaders, writers, politicians, newsmakers, and scientists. Each episode ends with three key takeaways to help us understand the world and maybe even ourselves a little better.
Lynn Thoman: My guest today is Jon Reisman. He's an American doctor who has practiced medicine as an emergency room physician in hospitals throughout the U.S. and around the world. He's worked in places as diverse as Alaska, Antarctica, and Nepal. He is also the author of the book The Unseen Body, and he has written for The New York Times, The Washington Post, and other newspapers.
Welcome, Jon, and thanks so much for joining 3 Takeaways today.
Jon Reisman: Thank you for having me, Lynn.
LT: It is my pleasure. When ChatGPT and other large language models appeared, you saw your job security go out the window. Let's start with the technical side. What did you expect from ChatGPT on the technical side?
JR: Well, I have to say I was very surprised by ChatGPT's abilities, both on the technical side and on the verbal side, imitating human language to an incredible degree, including very technical language that you'd expect only from professionals who have studied an area for years and perhaps earned several degrees.
Computers already seemed good at deciphering the technical side of medicine, so I was not surprised by ChatGPT's abilities there. When I was a medical student, we used to Google things like blood in the urine or blood in the sputum, and it would come up with the rare rheumatologic diseases we were going after, and it always got it right. So that side did not surprise me.
But I think like many other people, I was very surprised by how good ChatGPT was at mimicking humans, basically, and making you think that there was a human behind the words. And that goes for everything from technical explanations of medical concepts to even human conversation, which we often have in medicine. So I would say that it was that sort of mimicking of humanity side that really caught me off guard, as it did many other people.
LT: So you expected the technical side to be excellent, diagnosing complex diseases and offering evidence-based treatment plans, but you were surprised by the communication side.
JR: Correct.
LT: In one study, ChatGPT's answers to patient questions were rated as both more empathetic and of higher quality than those written by actual doctors. How is that possible? AI [artificial intelligence] is not caring or empathetic.
JR: Right. And I'm sorry to say perhaps many doctors are not either. I think a lot goes into what people perceive as an empathetic answer from a doctor.
For instance, ChatGPT can generate language at a much quicker rate than a human. If a human doctor is slowly typing an answer into their computer, or just speaking the answer, it takes some time to come up with it, whereas ChatGPT can generate a large chunk of text seemingly in an instant. I thought about this a lot.
And I think part of what makes a doctor's answer feel empathetic might just be the length of the answer alone. Obviously, that's not the only thing. But a doctor might say something short and blunt, like, oh, you're fine, don't worry about it.
Maybe from a doctor's perspective, we think that sounds authoritative and reassuring to a patient. But in reality, it sounds like you're treating the patient as if they can't handle more details, as if they can't handle a more in-depth dive into the technicalities behind your decision.
But I think a patient wants more information and wants to be a part of the decision, too, and not just take our word for it, as they might have in decades past when medicine was more paternalistic. So the sheer length of the answer, and the instant it takes ChatGPT to generate a more in-depth explanation of what we think is going on and how our advice stems from that, is a big part of it.
That's probably not the whole story. But doctors, being very busy and rushed all the time, often don't have the time to give those more in-depth answers that patients want and deserve.
LT: Students all learn in medical school how to break bad news to patients. What are the do's and the don'ts?
JR: As a medical student, I learned that too. It's actually the only training I really got in bedside manner, besides watching more senior doctors and residents enact the do's and the don'ts, learning from their positive and negative examples.
For instance, when you come into the room, you don't want to clobber the patient over the head with the news that they have cancer. But at the same time, you don't want to beat around the bush. They're there to get the results of their biopsy, let's say.
So don't talk about the weather; get to the point. There's also a tendency to soften the blow by hiding behind overly technical language, words like adenocarcinoma, which is a technical description of some kinds of cancer, words the patient may not understand.
We hide behind them instead of coming out and saying words like cancer, which are genuinely difficult to say when you're faced with that patient. That's obviously a don't.
Another important do is to always have a tissue box nearby in case the patient starts crying, which of course sometimes happens.
And then, of course, a big do is to ask the patient what they know about cancer, or about the specific kind of cancer you're diagnosing them with, and to educate them. Because many people know the word cancer is bad, but really don't know much more than that, or what to expect in the coming months and years.
So explaining all that is a very important do.
LT: One of the do's that resonated with me, that really made sense to me, was to think about using the “I wish” line, as in, “I wish I had better news.” That somehow makes it seem more personal.
JR: So one of the lines that I learned, one of the scripts, was the “I wish” line. It's actually referred to in that way, the “I wish” line.
“I wish I had better news.” That kind of does make it more personal. And having those lines, I almost think about it like a tool belt with different tools you can pull out, different lines for different instances.
And it sounds robotic. It sounds technical when you should be utterly human in that situation. Yet, you're pulling out these pre-scripted lines.
But they really do help in those situations.
LT: Jon, in medical school you initially recoiled at the idea that compassion and empathy could be choreographed like a set of dance steps or pulled from a toolbox. But what happened when you were actually practicing medicine as an emergency room physician and had to deliver really bad news?
JR: I did find that having that script, having those tools, those lines really, really helps.
It's such a surreal situation. You know, I would have thought - I did think as a medical student - these situations are usually one human to another. You're just having a heart to heart conversation while at the same time conveying some technical information about the diagnosis and prognosis.
But it is a very unnatural setting. So as an ER [emergency room] doctor, I often find cancer, let's say, on a CAT scan when I'm working up a patient's symptoms. This is a person I've never met before.
They've never met me. I'm playing a role that I play every day to make a living. For me, this happens semi-often.
And for them, it could be the worst day of their life. So there's this huge chasm between me and this stranger, someone I've never met before and will likely never meet again. It's not surprising that a human might act unnaturally in such a situation.
You know, whatever our job is, we're all acting out an unnatural role, playing a part, really. And the same goes for doctors, even in those most human moments when you are telling a patient some life-changing information.
So in retrospect, it's not surprising that these lines, these pre-written scripts, help in that situation to bridge the emotional and professional chasm from which both the patient and I are coming at this very difficult conversation.
LT: You've thought a lot about pre-written scripts. Where do you see them in society and what do they accomplish?
JR: Scripts are everywhere. When you think about it for a second, you think, oh, we're just humans, and when we talk, it's human-to-human interaction.
But our society and our lives are pervaded with scripts. When we greet people and when we say goodbye, we're following a script. There are scripts between husbands and wives, scripts between friends, scripts between professional colleagues.
There are things you don't say in certain contexts and things you do say in others. Whatever your job is, whether you're in politics or in a medical setting, there are things you say and things you don't say in those contexts. So we're all following these scripts.
And it seemed repulsive to me at first to think, oh, there's this pre-written script and I'm just an actor on a stage following stage directions when I should be a human in the moment connecting with this other human.
But pre-written scripts, pre-written actions, and choreographed motions and gestures pervade every aspect of society. And when I thought about it, I realized that following scripts is actually a big part of being human.
You're not just improvising and freewheeling it all day long, every day. We're all playing roles to some extent, though we may improvise on the script. Obviously, I'm not reciting the same exact words to every patient.
It is a conversation. There is a back and forth. So it's sort of like you have the script, but then you sort of improvise on it to fit it to the specific context or the specific conversation that you're having.
And that's kind of like how human life works in society, I think.
LT: Jon, you believe in the power of scripts. Do you think we will increasingly be interacting with AI, AI that seems empathetic or informative thanks to scripts, as opposed to interacting with other humans?
JR: I think we will. I think there's no other way.
I think so many areas of life have already reduced human-to-human interaction. You know, I sometimes use chatbots online to get certain banking tasks accomplished. And I think most of healthcare can go that same way.
You know, doctors are expensive. Maintaining facilities is very expensive. Healthcare is a huge proportion of our country's national spending.
And so reducing those costs will be great. Hopefully we'll increase access and decrease the cost in some ways. But as a side effect, there'll be less human interaction and more interaction with machines, with AIs.
So it's kind of a brave new world we're entering. And hopefully, we can find the right balance without losing our humanity, even though we're interacting less and less with other humans. That is a scary new world.
LT: Does it matter that AI has no idea what we or it are even talking about if there are linguistic formulas for human empathy and compassion? Should we hesitate to use good linguistic formulas, no matter who or what is the author?
JR: Certainly AI can be very helpful even without feeling any compassion itself. I don't think any of us strive for a world where all human compassion and emotion is driven out and only technical verbal scripts of compassion remain.
Surely the compassion of humans caring for each other, of a doctor caring for their patient and feeling terrible about what they've just discovered on a CAT scan inside a patient's abdomen or skull, must stay in the world, and we must maintain it.
And AI, you know, if you're just writing a form letter to a patient about some ho-hum test result that's not that serious, I don't think tremendous compassion is needed. But certainly some is needed in these more human moments.
And I think it will take some adaptation. And I wonder how far humans can take it. You know, traditionally we talked to each other face to face.
We heard each other's voice. That turned into the written word, where you can send a letter across the country without looking at the other person, which turned into telecommunications, where we see each other but across some distant geographic chasm. The way we communicate with each other has changed so much. So I wonder how much AI communication we can tolerate.
Maybe patients won't actually miss their human doctors all that much. Most diagnoses I deliver are not life-changing. They're pretty ho-hum.
They are: oh, you sprained your ankle, you didn't break it; or you broke it and didn't sprain it, and you're going to follow up with an orthopedist; or you have strep throat; or you don't have strep throat, you have a viral cause of your sore throat. These are not life-changing conversations. They don't require tremendous compassion or brilliance in bedside manner at all.
Relative to other diagnoses, it's actually rare that I have to deliver the life-changing ones. So I think a lot of medicine can change, and people are not going to miss the more awkward conversations with their doctor about these everyday, not-so-dangerous diagnoses. So I think medicine's in for a lot of change.
LT: It does raise these more fundamental societal challenges. Taking a step back to a more general level, should we worry about relationships between humans? Humans aren't always as empathetic as we could be. For example, there's the classic story of the husband who comes home from work and says to his wife, "I had such a hard day at work," to which his wife, rather than being empathetic about his tough day, responds, "Well, you wouldn't believe the day I had."
Do you think that we as humans will become lonelier? Relationships with other humans aren't perfect. They take effort, and they may not be as easy or as empathetic as interactions with an AI assistant or an AI companion.
JR: I do think it will probably get harder to maintain human relationships, though I do think that is very important. I think even with the technology we already have, without AIs that imitate humans nearly perfectly, we're more isolated as time goes on, since we can do almost everything in our daily lives without ever leaving the house, often without even speaking to a human.
We accomplish so many things through websites, let's say personal finances and banking and all these other things. We don't interact with humans as much as we used to. Is it making us more lonely? Probably.
And as we interact less and less with humans, will we get lonelier still? Probably. Hopefully we'll find ways to compensate. We'll probably have to ramp up the more human sides of our lives as we interact more and more with AIs.
Hopefully, interactions with doctors won't be a big part of people's social lives. You know, hopefully it's a small part of their lives. I guess if you have a complicated, serious disease, you see quite a number of doctors and perhaps many specialists.
And sadly, for some people, that might be the bulk of their social interaction in daily life. But hopefully humans can compensate for the dehumanizing of more and more aspects of our lives by ramping up the humanity of other parts. I guess we haven't done that super well lately, but hopefully we will. Hopefully we will.
LT: Jon, what are the 3 takeaways you'd like to leave the audience with today?
JR: The first takeaway I'd say is that as much as medicine feels like a very human endeavor, much of it is really just technical and a matter of customer service. And I think AI is going to do splendidly at that side.
The second takeaway I would say is that there's really no going back.
There's only going through and going forward. And that applies to the way technology will affect healthcare and many other aspects of life.
My third takeaway is that healthcare really needs to get into the 21st century in the way that it delivers care and interacts with patients.
As many people have noticed, interacting with your doctor's office can be rather dreadful. You have to sit in traffic, wait in the waiting room, get herded through your visit like an animal. And the communication can be terrible. You can wait days or weeks for a call back or for the results of your exams. It all seems stuck in the 20th, or even the 19th, century in some ways. So while the technical side of medicine seems to be sprinting into the 21st century, the customer service side of healthcare still seems rather dreadful and in need of dramatic updating.
LT: Thank you, Jon. This has been really interesting. And thank you for your work to bring medicine into the 21st century.
JR: Thank you so much, Lynn. It's been a pleasure.
LT: If you’re enjoying the podcast, and I really hope you are, please review us on Apple Podcasts or Spotify or wherever you listen. It really helps get the word out. If you’re interested, you can also sign up for the 3 Takeaways newsletter at 3takeaways.com where you can also listen to previous episodes.
You can also follow us on LinkedIn, X, Instagram and Facebook.
I’m Lynn Thoman and this is 3 Takeaways. Thanks for listening!