Both of our children are medically complex. I have two children. My son is eight, my daughter is six. My husband and I are both educated professionals. I think that's super important. We also had gold-level health care insurance at the time when all of this was going on, and we had the means to pay out of pocket for care. So, we had what looked like the perfect setup to get the best health care and the best insurance, and we could not get an answer for three years.
Our son is adopted, and so because of that, he had a newborn exam at the time of birth, and then he had another newborn exam in our home state. And both of those newborn exams missed what's called a sacral dimple. And that should have been a significant point of observation that there's something wrong. When he went for his kindergarten motor skills test, he failed the gross motor skills test. He had left and right imbalances.
And they dismissed it as it's just, you know, every child develops in their own time. Welcome to another episode of NEJM AI Grand Rounds. We're delighted to bring you our conversation with Courtney Hofmann and Dr. Holly Gilmer. Courtney is the amazing mother of a young boy who suffered from chronic pain. He saw 17 doctors over three years without finding a diagnosis. In desperation, she put his records into ChatGPT, and it returned a diagnosis of tethered cord syndrome.
And then crucially, there was a human-in-the-loop. She took that diagnosis to a new physician, Dr. Holly Gilmer, a pediatric neurosurgeon. And we were so lucky today to be able to speak with both of them on today's episode. Raj, this has to be near the top of the list for one of my favorite episodes we've ever done.
Anytime I hear a story about a parent refusing to take no for an answer, doing whatever they can to get the diagnosis they need for their child, I just find that so inspirational. I think it's interesting that AI was actually the key enabling factor for Courtney, and that had she had this issue 10 years ago, I'm not sure that the story would have finished in the same way. So, it really is a truly inspirational story at the intersection of AI and medical diagnosis.
And again, like I think one of my favorites that we've done so far. Yeah, I totally agree. And I think Courtney also reflected on lessons about the health care system and her journey. Her son's journey, why the diagnosis might've been missed and how the AI model was able to take a holistic view of all the symptoms, signs, and findings of her son, and then provide the diagnosis, which was then acted upon. So really amazing to hear that side and just how thoughtful she was about everything.
And yeah, to be clear, all the information to make the diagnosis was there. What she needed was a new way to synthesize that information to see what the physicians that she had been seeing missed. And her comments on what led to that, and how she tried to get different doctors to take her more seriously, speak volumes about structural issues in the health care system that shouldn't exist, but unfortunately do.
The NEJM AI Grand Rounds podcast is brought to you by Microsoft, Viz.ai, Lyric, and Elevance Health. We thank them for their support. And now we bring you our conversation with Courtney Hofmann and Dr. Holly Gilmer. Courtney and Holly, welcome to AI Grand Rounds. Thank you both for being here. Thank you for having us. Yeah. Thank you very much. So, Courtney, I think I first learned about your story through a news article. This article was published just over a year ago in September 2023.
And I have the title of the article here. It is "A Boy Saw 17 Doctors Over Three Years for Chronic Pain. ChatGPT Found the Diagnosis."
So, of course, I was immediately struck by the title because, I think at the time, there was a lot of speculation about large language models in patients' hands, but not really many prominent stories about cases where it was being used, where we got to hear directly from the patients about how the models were being used in their own care or the care of their loved ones. And then I read the article, of course, and I got to learn about your son's story.
How you used AI for help, and about Dr. Gilmer. So, we're so happy to have you both here. Maybe, Courtney, we could start with you. Maybe you can tell us about your family and about your son, and about when you started to notice that something was wrong. Thanks so much again for having us. We really appreciate telling his story and just continuing the patient advocacy that needs to happen for all of us, for our kids.
So just a bit about our family. We're a midwestern family of four. Both of our children are medically complex. I have two children, a son and a daughter. My son is eight, my daughter is six. And my daughter has three rare diseases and my son has two. He has a tethered cord, and then he has a borderline Chiari, which also has to do with the spine, at the base of the skull. My husband and I are both educated professionals. I think that's super important.
We also had gold-level health care insurance at the time when all of this was going on. And we had the means to pay out of pocket for care. So, we had what looked like the perfect setup to get the best health care, and the best insurance, and we could not get an answer for three years. So, when you look at what went wrong in our process, our son is adopted. And so, because of that, he had a newborn exam at the time of birth, and then he had another newborn exam in our home state.
And both of those newborn exams missed what's called a sacral dimple. And that should have been a significant point of observation that there's something wrong. And that got missed. And we switched pediatricians during the process, and it got missed again. So that was one of our key misses on the diagnostic journey. A second one was everybody assumed it was behavioral. He was a five- and six-year-old little boy. It was in the pandemic.
And I got told by so many different doctors, this is what little boys do, or it's because of the pandemic, all children are changing. And we knew our son, this was not attention-seeking behavior by him. Another thing that got missed was he had imbalances. When he went for his kindergarten motor skills test, he failed the gross motor skills test. He had left and right imbalances, and they dismissed it as it's just, you know, every child develops in their own time.
And I brought up even further, well, you know, I noticed that he has unequal gluteal muscles. Oh, that's so minimal. It's not going to affect anything. What I feel was the biggest diagnostic miss was when we had the MRIs: they were read by one person. And you mentioned, Raj, that we saw 17 different doctors. Nobody actually opened up the MRI and looked at it until Dr. Gilmer. Everybody relied on the one person that read the radiology, and she missed it. You know, it's not her fault.
She just missed it on that one scan. And we saw orthopedic surgeons, we saw rheumatologists, we saw other neurosurgeons and neurologists, and nobody opened the MRI. So, if you're listening and you have a problem, make sure those doctors are opening those MRIs. And then the fact checking. Nobody was reading the reports that we were sending in. Nobody followed up with the PT, and nobody was double checking anything that we were saying.
We were relying on our memory when we went into these appointments. So, if you look at why I finally went to ChatGPT, you can hear the frustration in my voice as I'm even retelling this story. It was about to be my son's seventh birthday. My mom always wants to see pictures of her grandkids on their birthdays, and I was searching for the best photos to show her, and I could not find a photo where the smile met my son's eyes.
Like every single photo, there was something wrong and I couldn't take it anymore. I'm an educated person. I have two degrees from the University of Michigan, and I felt like I should be able to solve this problem. I've been an entrepreneur for 15 years. I can do things. I can execute strategy, but I could not execute figuring out what was wrong with my son. And ultimately, I sat down, and I took his entire health file.
I went through all eight of the electronic health record systems he had, put them into one file. I went through all of my notes and put together one symptom list, and I put all of that into ChatGPT and I said, what is wrong with him? And it came out with three different results. And I went through, and I kept asking back and forth, like, tell me more about this. Tell me what this means.
And I ultimately ended up going into social media and reading parent stories, and figuring out that it probably is tethered cord. And there are two doctors, one of them Dr. Gilmer, who can do this surgery with almost exact certainty every time. We found her and were lucky enough to get in with her two weeks later. And he had surgery six weeks from when I used ChatGPT, and he is a new kid now. Wow. I was just going to hop in and follow up on some stuff.
So, one, I think that your example is what as parents, we all hope that we would do if we were in your situation. So, I just want to say, you know, I'm astounded by the courage and the tenacity that you had to get to the bottom of this. And I think we all hope that we would react similarly in a situation like that.
I think the second thing that I'd like to dig in on is one of the recurring themes of this podcast: systemic issues with the health care system, and ways in which we think AI might address some of those issues. So, do you have some sense of what caused your son to go through this diagnostic odyssey, where clearly the pieces of information were there, but they were missed or ignored? Is it an overworked health care force?
Is it some paternalistic hangover from previous generations of doctors? What is your sense of the things that led to this cascade of failures that left your son undiagnosed for three years? Oh, I think it's multiple variables that you mentioned. There was definitely a moment where, when we talk about the paternalistic viewpoint, I put on an act for some of our appointments. Some appointments I would go in as the working mom, and they would tell me I didn't know my kid.
Other times I would dress like a stay-at-home mom. But then they didn't seem to take me seriously. So, then I brought my father with me because I thought maybe if two generations are here, they'll take us seriously. And so, we played whatever game we could to try to get them to have a better conversation with us. And then the last piece was an MRI. We had several MRIs, and they were all read by one person, and then those MRIs are sent to doctors. And this is so important.
I just assumed that doctors that were treating my son were opening the MRIs and looking at the MRIs. And in reality, they're reading the MRI report. And the person that read the MRIs was missing the tethered cord every single time. So, I think he ended up having six MRIs and it was missed each one of those times. And then the fact checking, no doctor was ever looking at what was happening anywhere else. They were all relying on our memories.
And when you have two kids with a bunch of health issues, you can't remember which kid is which. Doctors are overworked. They have no time to actually review everything. Nobody could look at our files before we got into the room with them. They have 22 minutes, and we're spending 18 getting to the same page. So, we're having a four-minute appointment where you can't make movement on a treatment plan or diagnosis.
And then the last thing is they're so pigeonholed into their space, because of the lack of time, that they can't see beyond what they specialize in. I want to point out one funny symmetry here. So, when we get LLMs to answer questions, we often have to do a protracted series of things called prompt engineering. And if you ask the LLM to pretend that you're the smartest person in the world, it will actually give you more accurate information.
And it just strikes me that you were prompt engineering the doctors. That the way that you presented yourself to them made them take you more seriously and therefore be more likely to give you high quality information. And as far as I know, this is one of the first instances of like prompt engineering for doctors, but it seems like it kind of like goes both ways. That's really funny. I've not heard that before. I like that. I'm going to keep that.
Courtney, I think something that you just said that really stuck out to me is that you feel like each of the individual clinicians was sort of focused on their own domain. And I think when you were typing your son's records into ChatGPT, putting in everything you knew, and handwritten notes, and other things like that, you were really trying to get out of that mold, right? Of looking at your son only through one lens, or with one potential set of problems in mind.
But instead trying to prompt this kind of general reasoning model into helping you come up with a potential diagnosis that considers everything together, that considers your son holistically, and sort of considers the full boy and everything that's going on. And it's also startling to me, I think you didn't actually use the images themselves, right? I don't know if ChatGPT was multimodal at this point. I don't think it was. And so you just used the text that came out of the notes, right?
And your own notes, as I understand it. And using all of this sort of general information, the model was able to come up with tethered cord. You're right. I could not use the imaging. I used the words on the MRI, and I went line by line, and anything that they said, anything they called out in the report but that wasn't under the "things to look at" area, I copied and put in individually and said, what else could this be? It called out something at the S2 joint.
And I said, what else could this be at the S2 joint? Or what diseases are wrong at the S2 joint? And so like, I got that whole list and started looking through everything. It was a night full of medical rabbit holes. Can I ask what your experience or even awareness of AI was before this moment? I know that you, like, obviously you're an executive and hard charging, but like, had you heard of AI? How did you even come to know of ChatGPT and that this might be an option to you?
So, I have a background in the HR industry. So, it was something that we were watching carefully as it comes to like interview bias, and hiring bias, and how AI was going to be used. So, I was pretty up to speed on AI in general. Was I a medical person before my kids? Absolutely not. I passed out in every single biology class every single year since fourth grade. So no, the irony is complete.
It's interesting that that's your entry point to AI, because obviously, with HR, you're worried about propagating biases against underrepresented groups. And if I were in your headspace, it would not be a natural reflex for me to go to something my main exposure to has been negative, something that could be harmful. So what flipped for you to make you think that this might be a tool for medical discovery?
I was desperate, honestly desperate, but I will say I put the age of my son in at several different ages and I never called him a white male. Very interesting. So, I think a very critical element of this story is that you put everything into ChatGPT, into the AI model. It suggested tethered cord syndrome. And then we had this sort of nerdy phrase in machine learning, which is there was then a human-in-the-loop.
And that human-in-the-loop is Dr. Gilmer, who we're also so pleased to have here with us on the podcast. And so you took this diagnosis, you took this suggested diagnosis first, as I understand it, to an online support group where you found Dr. Gilmer, and then you took your son, and the imaging, and all the data, and of course the story, to Dr. Gilmer. And so maybe Dr. Gilmer, we could turn to you.
And could you tell us about what your reaction was and also how you navigated this information and the case. Well, so when I met Courtney and Alex, he had these symptoms, scattered symptoms, some off and on back pain. It wasn't entirely clear, but then as I got to know them between the first visit and the second visit, his back pain became more frequent and more constant. It was interfering with him attending school. It wasn't helped with physical therapy, and there was the known sacral dimple.
So, then we got imaging, lumbar spine MRI and lumbar spine CT. And was it from the original imaging itself that you were able to confirm the diagnosis of tethered cord? And then that prompted sort of a new imaging study? He had never had a lumbar spine MRI. He had off and on had back pain, and he had the abnormal motor function that was found in kindergarten. And Courtney had asked for imaging of his entire neural axis over three years. It was never done.
And could you tell us about tethered cord syndrome, how it presents typically, how it can present atypically, and maybe give us a sense, especially for the sort of non-medical listeners, of how difficult this is to pick up and why it might be missed in practice. Well, let me first say, most of the time, when pediatricians see a sacral dimple in a newborn, they are aware of it. They'll get at least an ultrasound, which is a screening study.
If the ultrasound is not read as showing a tethered cord, then the child may be kind of lost to diagnostic follow-up. But the ultrasound is a screening study. So, if it's positive for tethered cord, the person likely has a tethered cord, but if it's negative, that doesn't mean they don't. So, you know, it presents in a variety of ways. Very commonly children will have a sacral dimple, an abnormal gluteal cleft, in lay terms the butt crack. Okay, that should be straight.
And sometimes it's what we call bifid, which means it's straight and then it splits into two limbs at the top, like a V. But if one limb is longer than the other, if the crack is curved, if the buttocks are asymmetric, that can indicate a curvature of the bone underneath. It can indicate a lipoma underneath that may be connected to the spinal cord. Anything in the midline of the spine is suspicious for a tethered cord: a tuft of hair, a dimple, a hemangioma, a Mongolian spot.
A lot of babies have a Mongolian spot, but if it's in the middle of the spine, then that may be a sign of a tethered cord. A defect that you can feel on exam. And so those are in babies. In older children, we may see any of those signs, what we call neurocutaneous signs or signs on the skin, the abnormal gluteal cleft, leg length discrepancy. I have frequently seen one leg longer than the other. Older patients may describe always having to get shoes in different sizes.
Very high arches or one high arch in the foot and a normal on the other side. Anything that's asymmetric suggests a problem with the spinal cord. Then as far as symptoms, when children are a little older, they frequently will have delayed milestones. They may walk late, greater than 16 months old. They very frequently have abnormal bowel and bladder function.
In fact, urinary urgency, frequency, frequent bladder infections, particularly in a male, constipation, those all may be signs of a tethered cord. And a lot of the time we see more than one of these findings, and symptoms, and signs. Some children are symptomatic with tethered cord as babies, but some kids have no signs until they are getting toward their teenage years, when they might start developing scoliosis.
Others have to grow, and the spinal cord, because it's tethered, is stuck down at the bottom of the spine. So, instead of growing up with the spinal column, it stretches. And that causes symptoms and neurological deficits. So, in some children, we start to see frequent bladder infections, urinary urgency, incontinence, as older children, and maybe scoliosis. Sometimes it's back pain. Numbness, tingling in the feet, pain in the back, radiating down the legs.
If I could hop in and ask the complement to the sociology question that I asked Courtney. So my wife is a pediatrician and neonatologist, and one of the things that she often encounters in her practice is activated parents who are trying to make sure their child gets the best care possible, but are wading through the literature themselves in a, let's say, random-walk kind of fashion, where they'll see something and it's hard to get a whole sense of the body of evidence.
How do you think about the role of ChatGPT in patient activated decision making? In this case, I think it's clearly miraculous, but what is the holistic picture from your perspective, especially as someone who treats rare diseases frequently? Are we, is this something we should be encouraging or is there an optimal way that we should be thinking about getting patients access to these tools? Or I'm curious how you think about that.
So, ChatGPT is not going to help you guide your diagnosis in terms of eliminating things. I think the value is mentioning everything. Every possibility, and just as Courtney was saying, she went down every level. What could this be? It's going to give you a lot of options of what it could be.
And then, hopefully you'll go to a specialist or your primary with these possibilities and there'll be further testing to either rule it in or rule it out and an explanation as to why it's not, if it's ruled out. What ChatGPT gave her that no one else gave her was the possibility. And that's where you bring in the bias. Like you said, activated parents, from a professional bias standpoint, a lot of the time people don't want to mention a possible diagnosis.
Well, I don't want her to worry about that. Oh, you know, she's dealing with this for three years. Let her worry about it. She's trying to find an answer for her kid. Mention anything it could be, and then work it up. And that's where ChatGPT will help. That is an interesting point that AI will not navigate around difficult diagnoses just for the sake of navigating that.
And there's perhaps more of a willingness to be frank about a tough diagnosis when you're speaking to an AI versus an M.D., and I hadn't really considered that, but that seems like an excellent point to keep in mind. I think it's really important, the broad range of what will be mentioned, and hopefully the models will not be altered to discriminate in that way and remove possibilities, because that's really the value. Maybe I can ask a question that is related to the one that Andy just asked.
So, it seems like the doctor-patient relationship, the doctor-patient dyad, is now becoming a triad with AI, right? We have this new entrant into the relationship, and I'm curious, Holly, there's obviously this profound case and use of ChatGPT here with Alex and Courtney, but do you feel like AI is changing the way patients are approaching you generally in your practice as yet? How common is this? You know, Courtney's amazing. She was activated. She used it.
But are a lot of parents, are a lot of patients, putting this into their own hands and using this for their care? Not yet. Courtney is a phenom. Courtney is not the person that you meet every day. It's coming. I think particularly with younger patients who are more comfortable with the technology. But patients who have had problems and not had a diagnosis for years will make themselves comfortable with the technology, right? And I think they're going to start to surprise us.
They're going to start challenging us. And, doctors, we need to not dismiss what they bring us, whatever the source is. If it's social media, if it's a Facebook group, if it's ChatGPT, however they're being pointed in a direction, we need to evaluate it and either, you know, definitively rule it in or rule it out. Maybe one more question for Courtney before we go to the lightning round.
So, in kind of a weird coincidence, Raj mentioned this Today.com article where he first learned about your story, and about halfway down there's a quote from me in it about the technology. So there's lots of cosmic synergy happening here, which is really fun. But as Holly mentioned, you're kind of on the vanguard of power users of this technology for patients. And I imagine that there will be lots of parents and patients like you listening to this.
So, do you have any wisdom that you might share about what made you successful in leveraging this technology, so that an average patient, maybe not at the Courtney level, may be able to get similar results to what you did? So, I think utilizing ChatGPT and then utilizing the power of the parent groups. I see that now in the parent groups that I'm in because of the number of diseases my children have.
So, I'm starting to see it at our level where people are using ChatGPT on a very regular basis. My daughter, like I mentioned, has a lot of different things happening, including allergies. And many of the parent groups will have allergies to, let's say, soy, corn, oat. And so, they'll utilize ChatGPT to go find recipes and eliminate all of those allergens and create a recipe guide for the week. And they'll put a budget in, and it will do it for them.
So, like, I am seeing that on a regular basis in our groups. So I think it's coming. And I would say, if you're a parent, you know, this is important too, sorry. I didn't know that my kid had chronic issues. I didn't know that word. When I'm living off in my silo of there's something wrong with my son, I had no idea to use the words medically complex child, or chronic child, or patient advocacy. I was just mad because my son wasn't well.
And so, if you know somebody like this, key them into those words so they can find where to go, and get them into parent groups on social media, because there's so much knowledge in there to help push them further. Awesome. Thanks. So, I think we're ready for the lightning round. Okay. So, the first one is for Courtney, and maybe we'll give you a little bit more liberty to answer this, with a little bit longer length than we typically do. I think I know the answer, but maybe not.
So, the question is, will large language models, which is the AI technology behind ChatGPT, will they be net positive for patients over the next five years or net negative? They will be net positive, but there does have to be some controls put in at certain points because you do have statistics driving everything. So bias is in ChatGPT, too. So, we do have to acknowledge that, and we've got to be careful of how it gets developed, but ultimately, it's positive for patients.
And then one more like mini static electricity question. It's not quite a full lightning bolt. What concerns you the most when it comes to patient use of large language models? I fear for the hypochondriacs who will think that they have everything wrong with them, and they don't have that wrong with them. But I think for those people that struggle with things like a soy allergy, like my daughter. When it does seem like everything is bothering you, they're going to figure it out.
Alright, the next question is for Holly. Holly, will AI and medicine be driven more by computer scientists or by clinicians? Oh, computer scientists. And we have got to be involved with the development. We won't be driving it. But we should be involved in the development of it because, we have the patient experience and we know what the needs are, but yeah, doctors are a little afraid of technology.
So, it's interesting, because every computer scientist we have on here is like, oh, the doctors, but I think maybe they don't want to get in trouble. So, it's always funny how each side assumes that the other side is going to have more impact. Okay. So, Courtney, this is another one for you. This is much more in the get-to-know-you category of questions.
So, this one is shamelessly copied from an NPR podcast called "Wildcard" where they pick these cards and then ask people to answer questions. So, what is the thing that you have changed your mind the most about since you were younger? Probably medicine. I think as I remarked earlier, I was not a very good student. I didn't study medicine very well. Biology, all of it. I was really good at figuring out how to make myself sick so I could leave the classroom.
Has your faith in the medical establishment changed significantly since you were younger? So that's a really interesting question, because what your listeners can't see is that I was also a, now I know the word, medically complex child. So, I have a cleft palate, and that was back in the 80s. When I was going through this with my son, my mom was like, it's okay. Just keep doing the research. It's okay. It's going to be okay.
And I think with time, I now understand how to navigate our very fractured health care system. So, it's a little restored right now, which is surprising. The next one is for Holly. Holly, if you weren't in medicine, what job would you be doing? Oh my goodness. If I weren't — There are no wrong answers to the lightning round. Oh my gosh. If I were not in medicine, what job? I can't imagine not being in medicine. You asked me if I weren't in surgery, then I'd be a different type of doctor.
But not in medicine at all? Oh my goodness. Oh, I wasn't ready for that. An entrepreneur. Alright. I can see it. Thanks. I will start by acknowledging this next question is a little, maybe a little mean to ask a parent of two kids, being one of these myself. But what's an example of something that you do just for fun? What is your hobby? What is my hobby? I'm really an involved parent right now. I have gracefully acknowledged that I only have a certain amount of years at home with my kids.
So, I do a lot with my kids. And I recently just started doing diamond art because my daughter wants me to make her something to go on her wall. And I did not understand the investment of time that diamond art is. Excellent. Holly, we have the same, this is our last lightning round question. We have the same question for you. What's an example of something you do just for fun? What's your hobby? I like to play golf. Golf. Excellent. Golf. Golf sucked me in.
I know it's so stereotypical, but it sucked me in. It comes at you fast. Like I was a golf hater for a long period of my life, and Raj knows this. I built like a golf simulator in my backyard to get better. Wow. It does sneak up on you. That's for sure. Awesome. So, you both have survived and passed the lightning round with flying colors. So, we just have a few kind of last questions here for you. And, I'd like to start with Courtney.
And I think we touched upon this earlier, with one of the questions that Andy asked. But I'm hoping you can give us parting words, especially for parents and for patients navigating their own care or the care of their loved ones in this age of AI. You know, the models are changing. We know, to some extent, what these models are capable of. We have your story. We know where they can hallucinate and confabulate; those things are widely appreciated now.
So considering what you've learned and it's been now a year since the story came out, so I'm sure this has been on your mind and something that you've even seen change over time. What message do you have for patients and for parents who are navigating care for themselves or for their loved ones in this age of AI? I think share the load where you can because it gets very heavy.
And then the second part is if you're going to use AI, the hallucination you mentioned, that was partly why when I was using AI, I was cross referencing and asking the question multiple ways to ChatGPT to make sure that if it was giving me fake answers, I could find it. And then if it referenced material, I would Google the material to make sure it exists. And then after that, right, you acted upon that information by joining an
online community, corroborating against the stories that they're sharing, and then bringing Alex to Holly, right? And so, again, we're coming back to that human-in-the-loop, that cross referencing, and just really pressure testing the suggestions made by the model.
We might describe it as maybe, I don't know if you would agree, open minded, that it was willing to consider things that hadn't been considered before, but still knowing the errors it can make, the problems that it has. You took that information, it sounds like always with a grain of salt, and then involved humans, and human experts, and other parent experts in this condition before you acted upon it. Oh, absolutely. Alright. Thank you. So last question to you, Holly.
Given your experience, how do you think medicine will change over the next five-to-ten years as more and more patients, and, you know, something else we haven't really spoken about today, more and more physicians begin to integrate AI into both their lives and practice? How's medicine going to change? Well, AI is going to be integral. It's just going to be a part of everything. It's going to get involved from the administrative side, too, in terms of
what maybe surgeries are approved or not approved. What treatments are approved based on the algorithms that are set up. And again, we have to have a part of it. Patients are going to have, you know, more education faster. People will be able to research faster and come to us with more well-thought out, detailed questions. Just as a side note, Courtney didn't actually mention tethered cord to me.
She found the answer on ChatGPT, and then we had gotten the lumbar spine MRI and the CT because he was having back pain, and I looked and said, he's got tethered cord. And she said, okay, ChatGPT says you're right. You passed the test. Correct answer? Yes. That was a secret test. Yeah. Yes. Doctors are going to be dragged into it kicking and screaming, but they're going to use it for two reasons.
Because patients will demand a faster answer, a faster differential diagnosis that ChatGPT can provide, but also when they see that their work is decreased. You know, I'm using a type of AI in clinic now to help with dictations when I see patients. And so, I can do the notes faster. And that means I moved to the next patient faster. And so, I think that's how it's going to evolve. But I mean, we just, the human component has to be there. Can I ask you a specific follow up to that?
So often in AI papers and in press announcements, the specific promise they're making is that it will democratize access to expert diagnostic information. Do you think that that's a realistic goal and that it will be fulfilled over, like, a five-to-ten-year time horizon? Yes, but I think you're given a broad range of diagnoses. Yes, you can get some that are clearly wrong, and it's going to have to be verified.
But that's okay, because perhaps you're reading something you never would have heard of or otherwise come across. You know, I think it's fine. There's some interest, on the other side, in medicine, in particular neurosurgery, in minimizing incidental findings. I mean, people have given whole lectures and talks about this, how to address incidental findings on MRI.
And, to me, the answer is, to address them, not to ignore them and not to call them incidental without meeting the patient, without talking to the patient, examining the patient. But this is what patients go through. So I think using AI is a better option than dismissing people. Awesome. Thanks. Well, I would just like to say thank you to you both for taking the time to chat with us today.
I think an amazing combination of deft use of AI that really, I think, delivered a truly inspirational and uplifting story. So, thank you both again for coming on. That was amazing. Thank you both. Truly amazing. Yeah. Thank you. Thanks. This copyrighted podcast from the Massachusetts Medical Society may not be reproduced, distributed, or used for commercial purposes without prior written permission of the Massachusetts Medical Society.
For information on reusing NEJM Group podcasts, please visit the permissions and licensing page at the NEJM website.