What's Your Problem? with Jacob Goldstein: Using AI to Help Doctors Save Lives

Mar 04, 2024 · 42 min

Episode description

Every year in the U.S., tens of thousands of hospital patients die of preventable causes. For many of these patients, warning signs are subtle and easy for doctors to miss. Suchi Saria is the founder and CEO of Bayesian Health, and a professor at Johns Hopkins where she runs a lab focused on machine learning and healthcare. Suchi’s problem is this: How can you use AI to detect when hospital patients are at risk of potentially deadly complications – and how can you get doctors to listen?

See omnystudio.com/listener for privacy information.

Transcript

Speaker 1

Welcome to Tech Stuff, a production from iHeartRadio. Hey there, and welcome to Tech Stuff. I'm your host, Jonathan Strickland. I'm an executive producer with iHeart Podcasts, and how the tech are you? Okay, so this isn't really Tech Stuff today. I thought I would do something a little different. So recently we had Jacob Goldstein on the show. And Jacob is a journalist. He's done tons of work for multiple prestigious news outlets, and he's also the host of a

podcast called What's Your Problem with Jacob Goldstein? And on that podcast, Jacob talks with various smarty pants in the engineering field about how technology can potentially help us solve some very difficult problems. And I thought it would be great to bring you an episode of his podcast, because I think if you dig tech, you're also going to dig What's Your Problem. But I know it can be a hassle to go seek out another podcast, and

a lot of y'all may never take that initiative. So I thought, well, I'll bring one episode in just for today, and we can listen to an episode of What's Your Problem and enjoy that, and then if you like it, you can go seek out that podcast and subscribe to it. And otherwise, if you're like, this isn't my bag, well, don't worry. We'll have another Tech Stuff episode for you on Wednesday. So this episode is called Using AI to Help Doctors Save Lives, and I think that's a cool

topic to talk about. Often on this show, I'm talking about artificial intelligence in a rather skeptical way, because I feel it's not a fault with AI necessarily. It's a fault in how lots of businesses are rushing to try and incorporate and adopt AI without fully baking in a business reason for it, and that kind of short-sightedness can often have negative consequences. But I would never deny the fact that artificial intelligence does have its place, and it can end up being a huge benefit to us

if we design it properly and implement it properly. That's a big if, and I think healthcare is one place where AI makes a lot of sense, again assuming that we do take the care to design and implement it appropriately. Obviously, there are very high stakes when we're talking about healthcare. So let's listen in on this episode of What's Your Problem? And I hope you enjoy.

Speaker 2

When you walk into a hospital, technology is everywhere. In one room, a surgeon is giving a patient a bionic knee. In another room, a CT scanner is creating this incredible 3D picture of the inside of a person's body. But in other places the hospital feels less high tech. Doctors are still reading patients' charts and making decisions partly

on evidence but largely on instinct. This part of the hospital is not so different from what it might have looked like, you know, fifty years ago, and bringing new technology to this part of medicine to care at the bedside is a really hard, really interesting problem, because you not only have to figure out how to use technology to deliver useful information to the doctor at the right time, you also have to figure out how to convince the

doctor that the information is actually worth listening to. I'm Jacob Goldstein, and this is What's Your Problem, the show where I talk to people who are trying to make technological progress. My guest today is Suchi Saria. She's the founder and CEO of a company called Bayesian Health, and she's also a professor at Johns Hopkins, where she runs

a lab focused on machine learning and healthcare. Suchi's problem is this: how can you use artificial intelligence to detect when hospital patients are at risk of potentially deadly complications? And then once you've done that, how can you get doctors to believe that the AI's warning is worth paying attention to? She told me she first got interested in healthcare sort of by accident, when she was a grad student at Stanford studying AI and robots.

Speaker 3

You know, I grew up actually being fascinated by AI. I loved AI, and really most of my interest was on the algorithm front, like looking at robotics and building robots that were really smart, you know. And I got acquainted with medicine through a friend, a colleague who was a doctor taking care of babies. And what I learned through her was that there's all this data we're starting to collect, but literally nobody was designing any

software to make sense of it. So I was just coming from a world where, you know, I studied all kinds of data day in, day out, with robots doing fun tasks like getting the robot to hold the ball or juggle the ball, to then realizing, holy crap, there's like so many more useful things we could be doing.

So that was really my first discovery of, like, how big a gap there was between the people who thought about AI and the problems that needed to be solved, and how little we understood about these problems.

Speaker 2

So you decide that this is going to be your thing, right? This is your life's work now.

Speaker 3

I mean, in the beginning, I wasn't convinced. In the beginning, it was just about spending a few years helping out and making sure we were able to make, you know... in the beginning, it was about my next three years. Like, I was afraid of investing. I was afraid of the complexity of medicine. Like, it wasn't an easy field. It's not one where they welcome you, right, just as an engineer. You don't come in and, like, at least twelve, thirteen years ago, that wasn't the culture.

Speaker 2

Right, like an MD at a hospital does not want to hear from some AI researcher. They're busy.

Speaker 3

Oh no, for sure, and they're like, we're busy, we have real work to do.

Speaker 2

Yeah, what is this?

Speaker 3

Like, this all sounds like esoteric mumbo jumbo.

Speaker 2

Yeah. And so you say, you know, we're collecting all this data in healthcare and we're not doing anything with it. That is not intuitive. Like, that's not, you know... I think most people's sort of prior... And this is at an academic hospital, right? Your friend is at Stanford Hospital, a very prestigious academic hospital. I think Stanford Hospital, I think data. I think these are people doing research. So what do you mean when you say we're collecting all this data and not doing anything with it?

Speaker 3

Yeah. So twelve, thirteen, fourteen years ago, this field was very new, and at the time, even collecting and storing this data, the natural question was, can it be afforded? It costs dollars to store this data. Why would we do that?

Speaker 2

And when you say that, what kind of data are you talking about here? When you say collect and store this data?

Speaker 3

So literally, this was, at the time, babies entering, you know, the neonatal ICU. These are premature babies. As they're born, in real time, devices are collecting heart rate and vitals and oxygen saturation data. And so that kind of detailed data, which is much more bulky, was historically not stored. Instead, what they would do is they'd take, like, fifteen-minute averages and capture that. And naturally the question came up, do we need to store it?

This is really expensive data. Let's just throw it away after forty-eight hours, we don't need it anymore. Let's just keep a quick summary of it.
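
To make the "fifteen-minute averages" idea concrete, here is a minimal sketch, not from the episode, of how high-frequency bedside vitals get collapsed into the kind of summary that was historically kept. The column names, the one-second sampling rate, and the simulated values are assumptions for illustration only.

```python
# Hedged illustration: raw per-second vitals vs. the 15-minute summary that used
# to be the only thing retained. All values here are simulated, not real data.
import numpy as np
import pandas as pd

# Simulate 48 hours of once-per-second heart-rate and SpO2 readings for one infant.
idx = pd.date_range("2024-01-01", periods=48 * 3600, freq="s")
rng = np.random.default_rng(0)
vitals = pd.DataFrame(
    {
        "heart_rate": 150 + rng.normal(0, 8, len(idx)),              # beats per minute
        "spo2": np.clip(96 + rng.normal(0, 2, len(idx)), 80, 100),   # % oxygen saturation
    },
    index=idx,
)

# The historical practice described above: keep only fifteen-minute averages.
summary = vitals.resample("15min").mean()

print(f"raw rows:     {len(vitals):,}")   # ~172,800 samples
print(f"summary rows: {len(summary):,}")  # 192 samples, roughly 900x less detail
```

The point of the sketch is only the compression ratio: almost all of the fine-grained signal that a model could learn from disappears once only the averages are kept.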

Speaker 2

Huh. So you might do a study, you might track certain data points, but the idea that you're going to, just as a matter of course, be storing all of this data that is now being generated and saved because electronic medical records are just being adopted. Nobody was doing that. Nobody had really thought to do it. It was an expensive prospect. It didn't seem like there would be a good reason to do it.

Speaker 3

Exactly. And coming from AI, where we looked at, you know, fingerprint data on the internet, in retail or finance, it was so natural to think about how this data teaches you things that it felt crazy to me that we weren't similarly learning all sorts of amazing things about these babies, or the human body, or how we've evolved, or, like, what are the signs and fingerprints of disease? How do they show up?

Speaker 2

When you say fingerprint data, that's a metaphor, right? What does fingerprint data mean in the context of, sort of, e-commerce and online finance?

Speaker 3

Well, like, they went to this site and then they came to this site, or, like, they saw an ad somewhere else about this, and now, you know, they're searching for something, and it shows you intent.

Speaker 2

It's this moment, ten years ago, when, like, people are using data to know, like, everything about what I do when I'm shopping for new shoes. But they're not collecting data on, like, sick newborn babies.

Speaker 3

Exactly right. Is that mind-blowing to you? Because it was crazy mind-blowing to me.

Speaker 2

Okay, yes, my mind is blown. So what do you do?

Speaker 3

Well, I mean, it seemed like such a pressing problem. It also helped that we were funded as a moonshot project by the Google founders, that it was a high-profile investment, and it sort of naturally paved the way, at a place like Stanford, for curiosity. And we had some amazing collaborators who were equally curious, who said, well, let's dive in and see what we'll understand. And that was the

start of it. I literally got hold of this massive, twelve-hundred-page, like, this huge thick book to learn about babies and what conditions they experience and what does it all mean, and then starting to understand how does it show up in the data. And, you know, I spent evenings and weekends, and actually I remember sitting in the basement of Stanford Hospital over Christmas trying to get data out of the health record

in the first place. And we were trying to experiment with all sorts of techniques for pulling the data out, which, you know, now is a whole lot easier than it was twelve years ago.

Speaker 2

Because it's not built for that, right? It's basically built somewhat to track the patient and, to a significant degree, to, like, bill insurance. Right? That's traditionally what electronic medical records were for.

Speaker 3

That's exactly right.

Speaker 2

Kind of amazing and kind of weird. I mean, I want to talk more about the bigger idea of data and healthcare, but just to kind of land this moment early in your career at Stanford: like, is there some project you do? Like, what is the end of your work at Stanford?

Speaker 3

So the project was, you know, we're monitoring these premature babies, right, anywhere between twenty-four-week-old babies, which are very, very tiny, like, very...

Speaker 2

Twenty-four weeks of gestation.

Speaker 3

Exactly, to like twenty-eight, thirty, thirty-two weeks. And the idea was, these babies, you know, are at risk for, like, an array of significant complications. And the idea is, the sooner you know, the earlier you can do something about it, the greater the chance that you're going to actually resuscitate them. So our job was, like, could we look at this data from the second they're born and collect this data to start analyzing and modeling

which babies are at risk for which of these complications? And if you could, then you could start to put more of these preventative, prophylactic-type pathways or approaches in place for care.

Speaker 2

Basically, identify problems more quickly, leading to better outcomes. That's the basic desire.

Speaker 3

Exactly. And in the process I discovered, like, you know, a long time ago, there was a physician named Virginia Apgar, and what she figured out is, like, just by measuring five different things from when the baby is born, she can compute a very simple score that tells you how the baby's doing. And so naturally, the question we asked is, okay, now that we are seeing all these ways in which machine learning and AI is

discovering novel signs and patterns that are predictive, could we simply combine these to come up with a simple score that says, you know, can I predict complications? And what we found was this new simple score, which uses data that requires nothing special, it's already being collected, we just analyze it and use it to compute the score, turns out to be much more predictive than the Apgar at predicting complications.
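
The idea described here, folding routinely collected signals into a single learned risk score, is essentially what a simple supervised model does. Below is a hedged sketch on fully synthetic data: the feature names, the label, and the choice of logistic regression are illustrative assumptions, not the actual Stanford model or its inputs.

```python
# Hedged sketch: learn a single "risk score" from signals that are already being
# collected. Synthetic data only; not the model or features from the interview.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5000

# Hypothetical routinely collected signals: mean heart rate, heart-rate
# variability, oxygen saturation, gestational age in weeks.
X = np.column_stack([
    150 + rng.normal(0, 10, n),
    rng.gamma(2.0, 2.0, n),
    np.clip(95 + rng.normal(0, 3, n), 75, 100),
    rng.uniform(24, 34, n),
])
# Synthetic "complication" label, loosely driven by low variability and low SpO2.
logit = -2.0 - 0.4 * (X[:, 1] - 4.0) - 0.3 * (X[:, 2] - 95.0)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# The learned "score" is just the predicted probability of a complication.
risk_score = model.predict_proba(X_te)[:, 1]
print("AUROC of the learned score:", round(roc_auc_score(y_te, risk_score), 3))
```

The contrast with an Apgar-style score is that the weights are fit from data rather than fixed by hand, but the output is still a single number a clinician can read at the bedside.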

Speaker 2

And so it worked. I mean, did people use it? Is it standard of care now? What happened with that research?

Speaker 3

So at that point I was like, oh, this is so cool. And literally we got all these journalists who wanted to write about it, and it was, on the fundraising side, you know, like Stanford's fundraising highlight for, like, the next five years, et cetera. But the saddest thing about it is that there was no

natural mechanism for implementing it in practice. And it had to do with so many different pieces. Like, we didn't have the infrastructure, we didn't have the, like, know-how of, like, how do you get physicians to trust something like this? How do you build this in a way that is trustworthy and reliable? How do you do this so that it's not just, like, a pet project in one hospital, but it's, like, a system that is scalable nationally? And, you know, what is

the incentive structure? Who pays for it and why would they pay for it? And all of that is literally what got me super interested in the field, where I started to feel, wow, we're at the start of what feels like a massive movement, it has many components to be figured out, but we need to figure this out. Interestingly, at the time, on Sand Hill Road, you know, being right there in Palo Alto...

Speaker 2

Yes, Sand Hill Road, where all the venture capitalists are.

Speaker 3

Exactly. People were like, this is fantastic, here's money, why don't you start a company on this topic? And I spent six months investigating, you know, talking to lots of peers, health systems, hospitals, and realizing we were just too early. There was a lot of work that needed to go in place for this to become something that would scale nationally. Now, fast forward ten years later.

Speaker 2

I want to fast forward, but give me just another moment. When you say it's too early, like, in what ways was it too early? Like, specifically, what was not ready in the world to start a company at that time?

Speaker 3

So the first thing we needed was for hospitals to be ready to implement a system like that. For that to happen, they needed to have implemented the electronic health record and be stable users of the EHR, so that they'd be willing to plug in third-party systems on top of it.

Speaker 2

And it's kind of amazing that ten years ago, you know, twenty-whatever, the twenty-teens, hospitals were still not, sort of, ubiquitous users of electronic medical records, right? Like, doctors were still writing on paper.

Speaker 3

Honestly, coming from computer science, where, you know, I was involved in other areas of AI and computer science, this was, like, the biggest shift in mindset. I felt, every time I came back into the healthcare side of the equation, like I was going at least twenty, thirty years back, right.

Speaker 2

Like a time machine going into the past when you walk into the hospital, which is particularly, I don't know, ironic, surprising, given how in some ways healthcare feels very cutting edge, right? Like, a central interesting thing to me about the work that you do is the way in which healthcare is, you know, you go get a, whatever, a CT scan, it's this incredible machine, and it uploads to a computer, and a, whatever, AI radiologist can, you

know, read the scan, blah blah blah. And yet on the kind of data side, on the complicated-patient-at-the-bedside side, it's still very kind of old-fashioned and almost artisanal.

Speaker 3

I mean, you raise, like, a fantastic point, which is, I think when it comes to introducing and designing new medicines, yeah, we've become really, really good. But once the medicine is produced, in terms of actually accelerating the adoption, optimizing the uptake, deciding who gets it and what dose and when, detecting early who would benefit from it. That's what I call the healthcare delivery side of the equation.

I feel like there's a very very vast gap of what needs to happen to get better.

Speaker 2

So, okay, so you do this project. You see that it's too early to start a company because the world isn't ready yet, because hospitals aren't even widely using electronic medical records yet, much less being ready to sort of export the data and listen to the data, et cetera. And you take a job as a professor at Johns Hopkins, right? Is that the next step?

Speaker 3

That's right. And part of the move to Hopkins was realizing there's so much depth and breadth in medicine, not just around the actual devices or the engineering or the chemistry or the drug development, but also on the delivery side. Like, what does it take to scale ideas nationally? How do you design policy around it? There was sort of a whole institute dedicated to scaling ideas nationally. So to me that was extremely exciting, to learn about what it would take to really build the

foundations of a field like this. And moving to Baltimore was a big move, but I was just excited by the idea of learning it all, and learning it especially as an engineer, as an AI researcher, as an outsider coming into healthcare.

Speaker 2

In a minute, Suchi and her colleagues figure out how to use AI to detect when certain patients are at risk for complications, and also how to get doctors to listen. So Suchi is at Johns Hopkins in Baltimore and she has this big idea, using AI to help doctors treat hospital patients, but she has to figure out exactly what to focus on.

Speaker 3

One of the big areas was this idea of, like, early detection of patients at risk for complications, and diagnostic errors being the third leading cause of death. Like, that's nuts. So today, you know, there are critical moments that are missed. We give patients the wrong diagnosis, or they're developing something subtly and slowly. That's, like, a whole branch of diagnostic errors where, you know, a complication or a condition develops,

but it doesn't get noticed in a timely fashion. And so these seemed perfect for AI to come in, with the kind of data that exists, to be able to flag patients that are high risk and make it easy to provide a second pair of eyes.

Speaker 2

Because it's basically pattern matching, right? I mean, differential diagnosis is taking lots of different variables from the patient and trying to put those variables together to match the patient to, you know, thousands of other patients and say, oh, all of these variables, all of these health indicators suggest that the patient has disease X. Like, that's fundamentally what a differential diagnosis is, and, like, machine learning should be very good.

Speaker 3

At that, exactly. And previously, people have attempted differential diagnosis with very coarse symptoms, like a high-level description of, like, you have a cough, you have a fever. What was different this time around is, because of the EHR, we had very detailed

Speaker 2

Data. The EHR, the electronic health record, right?

Speaker 3

Exactly. And so it provided this brand new opportunity to do this. And then, you know, naturally, when you go down the list and start looking at problem areas, sepsis is the model disease we chose to demonstrate the idea.

Speaker 2

So let's just talk about sepsis for a minute. What is sepsis?

Speaker 3

So let's say a patient gets infected. Your immune system is now going to respond in order to protect your body, but in sepsis, it overreacts and starts attacking your organ systems, leading to organ failure and death. And so the idea of sepsis treatment is very much, the earlier you can detect it, the better you are at, like, tackling it.

Speaker 2

Right. Okay, so I buy it. It seems like a big problem, and it seems like one that might be solved, or at least, you know, made less bad, with the application of machine learning. So how do you actually do it? What do you have to do to build the model and see if it works and get people to use it?

Speaker 3

Yeah, so what you're about to describe in two minutes was almost a five-year journey. So first, it's collecting a huge amount of data where you can identify both patients who are septic versus non-septic, and when they had it, and what other conditions they had, and what else was happening in their life, right, and, you know, all the data leading up to that episode and what was done after the fact. So you

get the data. Then the next part is, you know, you have to actually understand the biological process or the clinical process that's happening and layer that on top of the data, to make sure you're going from, like, just bits and bytes to data that makes sense. And then you implement lots of different learning algorithms to be able to experiment, you know, the thing that we first

did versus the thing we do now. There's like lots of generations of improvements in order to get to a place where you're going from like, you know, not very good signal to very good signal.

Speaker 2

So you're building a model through trial and error, basically, trying to get an AI model that has a high sensitivity and specificity, that's good at issuing an alert when a patient has sepsis, and doesn't issue too many alerts when the patient doesn't have sepsis.
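
To make the sensitivity/specificity trade-off being summarized here concrete, a small illustrative sketch on synthetic scores follows; the prevalence, the score distributions, and the thresholds are assumptions for illustration, not Bayesian Health's model or its numbers.

```python
# Hedged sketch: how sensitivity and specificity move as you change the alerting
# threshold. Labels and model scores are synthetic, purely for illustration.
import numpy as np

rng = np.random.default_rng(7)
y_true = rng.binomial(1, 0.08, 10_000)          # assume ~8% of patients are septic
# Pretend model scores: septic patients tend to score higher than non-septic ones.
scores = np.where(y_true == 1,
                  rng.beta(5, 3, y_true.size),
                  rng.beta(2, 6, y_true.size))

for thr in (0.3, 0.5, 0.7):
    alert = scores >= thr
    tp = np.sum(alert & (y_true == 1))
    fn = np.sum(~alert & (y_true == 1))
    fp = np.sum(alert & (y_true == 0))
    tn = np.sum(~alert & (y_true == 0))
    sensitivity = tp / (tp + fn)   # fraction of septic patients who get an alert
    specificity = tn / (tn + fp)   # fraction of non-septic patients left alone
    print(f"threshold {thr:.1f}: sensitivity={sensitivity:.2f}, "
          f"specificity={specificity:.2f}, alerts fired={alert.sum()}")
```

Lowering the threshold catches more septic patients but fires more alerts on everyone else, which is exactly the alert-fatigue problem discussed later in the conversation.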

Speaker 3

Basically, exactly. And also does it in a way that, you know, when it says somebody has sepsis, it's able to explain why. It's able to provide enough information so that the clinician can act on it. And it's not doing it so early that there's not enough to work on, and it's not doing it so late that it's useless.

Speaker 2

Like, often people talk about AI models, machine learning models, as black boxes, right? Like, very good at pattern matching, very good at predicting the next word, but we don't know why. And so you're saying, in this instance, you sort of need to know why.

Speaker 3

A very key evolution for me as a scientist working in this area was, in the beginning, I saw it all as data and math. And then, as I started working more and more on interfacing and actually deploying systems like this, what I started realizing is it's actually not math and data, it's about trust. Because ultimately, to get adoption and to get outcomes, I need to get trust from these highly trained clinicians who studied this year in and year out, and they have a process and a system for working, and

you have to fit within this system.

Speaker 2

And they're very busy, and it's very high stakes, and they kind of think they know everything, and so it's presumably very hard to get them to trust you in making their clinical judgments.

Speaker 3

Exactly. But moreover, I've also been on the other side of, like, tons of engineers making all sorts of claims that their system knows better, but when you actually go and make sense of the evaluations they've done, they literally have very little understanding of medicine and the practice of healthcare,

so like their claims are mostly not good. So a huge part of it is like developing respect and humility for the system, the complexity, so that when you're bringing in this new thing, it really truly fits, it's easy to use, it makes sense, it creates value. Without all that, you're not going to get to the benefit.

Speaker 2

So now you say it creates value, and suddenly you sound like a founder, an entrepreneur and not like an academic where where in this arc do you start a company?

Speaker 3

You know, it was somewhere in twenty eighteen. I remember twenty eighteen was a transformative year for me, for a number of reasons. I'll start with the very simple thing of, like, when we first built this system and deployed it, only, like, two or three clinicians used it, and it was the two to three clinicians who were involved in working on the project with us. What I realized was, we knew from looking at large amounts of data that the system was working, it was working correctly, and we

could identify these cases. We could identify them early, and even from interacting with the clinicians, we knew you could do something differently about it. So it's one thing for a system to detect. You know, clinicians will say, so what, so what am I supposed to do about it? And in this scenario, we'd even done studies to know that actually they could be acting, you know, they could use this output to meaningfully change the patient's care. So then to me, the question was, okay, if we know this

thing works, why the heck are we not succeeding? And that's kind of where it went from the puzzle of math and data to trust. You know, how do we develop and deploy it in a way that's transparent. How do we understand like what are the top of mind issues from a practicing clinician's point of view, and how do we address it? Where are we creating value? How do we start quantifying value?

Speaker 2

Were there any moments where you're like, you know, you have this thing that can be helpful, and yet someone, a doctor, a hospital administrator, whatever, is telling you why they're not going to use it?

Speaker 3

Basically, I mean, so many moments, I can't even, like, begin. I think I remember this time when they basically were like, okay, this thing, the system, has flagged. What do I do with it? And I was like, you should look at whether the patient has something. And they were like, are you kidding me? How many flags? Do you know how many alerting systems exist? If I were to take every single alerting system and start to use that to inform when I'm doing a diagnostic workup and what I'm doing, I basically would not get my day-to-day work done, right?

Speaker 2

It's like, it's like when you're, if you're ever in an emergency room, like, everything is beeping all the time, and your system is just one more beep in a sea of beeps that everybody ignores.

Speaker 3

And you feel passionately about it.

Speaker 2

Yeah, you have your reasons, you care about this beep, but nobody else cares about this

Speaker 3

Beep. Nobody gives a damn. And it was just, like, so it was difficult, right? Like, you know, I felt defeated. I sat there, I was like, this is so unbelievable. This is, like, so powerful. Why aren't they believing me? And so there was an information gap, right? Like, then it was, like, understanding, oh, this, you know, the system in which they live. Okay, I understand that all these different alerts exist. How are these alerts created? How are we different? How can we

demonstrate we're different? Why should we be trusted? And so that was, as an example, a starting point. Like, another one was, we deployed it, and we deployed it in a way where it was, you know, within the electronic health record, but it was done in a way that was really cumbersome. Like, every time they needed to respond, it was, like, you know, a minute and a half of work, and, you know, honestly,

they're so busy. A minute and a half extra to do something that they don't already have total conviction in is, like, a lot to ask. So then you spend a bunch of time optimizing, well, how do we take it from a minute and a half to, like, three seconds? How do we optimize it so that it's instantaneous? It's easy, it's just there.

Speaker 2

So this isn't about the data at all. This is just user experience basically.

Speaker 3

Hugely human factors. Like, human factors, and human factors here is very different and complicated, because you're trying to optimize human factors within a chassis that is very complicated. Right? Like, you're not, like, standalone software. This is, like, you're within an electronic health record, and, like, how do you do this in a way that the electronic health record providers will allow you?

Speaker 2

Information, not your software?

Speaker 3

Yeah, it's not your software. And how can you do it in a way that is smooth and seamless and they actually like it? And then, can you do this in a way where it's not just custom built for Johns Hopkins, but it's something that you can take to a rural hospital, right?

Speaker 2

So you're doing all this, at what point in this arc do you start the company?

Speaker 3

So another, like, personal thing happened, which is I lost my nephew to sepsis. And, you know, it was the craziest, like, saddest, like, you know, most insane feeling, because, as, like, a researcher, as a scientist, I'm, like, knee-deep in these research areas. And then it's one thing to go and talk about it, to say, well, here's how you do it, and here's how it works, and here's why it will work, and

here's why this is a great idea. And it's another to then come to that moment of realization where like, well I haven't actually done anything to make a difference.

Speaker 2

So you're already working on sepsis, yes, and your nephew you say, nephew meaning younger than you?

Speaker 3

Yes, much younger than me.

Speaker 2

Wow?

Speaker 3

And realizing, like, it all sounded like an excellent, like, it all sounded great on paper, you know. It was like, you know, I'd go to meetings and lots of people would listen and they'd say, yay, great idea, et cetera. But then at the end of the day, for me, it was like I'd gotten too used to, you know, it's easy. It's easy to, like, talk about something smart, and then people say it's a great idea, and then you leave the room and you feel good about it, and then you go back and

you work on it some more. And I think it was hard, like, hard for me to sort of realize, like, I had gotten too carried away, and I'd gotten too carried away, like, not thinking about what is it actually going to take to make it real? And the making it real is what's, like, just so much harder than I thought. But part of it is, I also felt like this isn't just a sad... this isn't just, like, you know, an idea for sepsis.

This is really, like, crazy to me, that this isn't how we operate. Like, I think the time has come, and what is exciting to me is, in the last year or two, I'm starting to see the world has shifted. There's been a very meaningful change in the last few years. I think losing my nephew made it very real. It went from this idea to feeling like this was an opportunity where it's very real now, we can make a difference. The pieces exist, and

I need to make it happen. I can't hide anymore. And in twenty eighteen, I started to realize, like, most systems had finished implementing the electronic health record, policies were starting to change, the AI was mature enough that it was really clear we could do a lot with it. And it was the very little part I could do to, you know, address my, you know, my part of grief related to my nephew. Like, it was the very little role I could play.

So in twenty eighteen, I started to, you know, go after it with the idea of actually starting a company. We're actually going to turn this into something that scales nationally. And that's where it all began.

Speaker 2

So you start the company, and you do build this AI model to detect sepsis in hospitalized patients, and you do this study, and you wind up publishing the outcome in the journal Nature Medicine, right? Which seems like a big, big moment in your work, in the life of your company. So tell me about that study.

Speaker 3

Yeah. So in July twenty twenty-two, we had three studies. They were featured on the cover of Nature Medicine. These were very big studies for the field. The studies that came out in twenty twenty-two were basically showing how we implemented the system at five different sites, like, both in the emergency department, the hospital floor, the ICUs, across academic and community hospitals. So five different hospitals in totally different geographic regions, right, in

Maryland and DC, rich communities, poor communities. And what we were able to show, with almost three quarters of a million patients in the study and forty-four hundred physicians and nurses who were part of the study, was that you could detect sepsis significantly earlier than they were currently detecting and acting on it. So that was

one. Second, we showed that, in fact, when we then implemented the system, we saw a meaningful reduction in treatment timing, like, patients were getting treatment in a more timely fashion when providers were seeing the alert and acting off of it. And then the third: we know early detection is possible now, and we know treatment timing has moved, and we've known in sepsis that early treatment is the key to better outcomes. So the question is, do we see that in

our population as well? And we saw that in patients who actually got, you know, early alerts, who got the alerts and whose providers acted on them, we actually saw much better outcomes in terms of reductions in mortality, morbidity, length of stay, fewer complications, secondary complications that arise out of sepsis. So it was just extremely exciting to see that we could go from, you know, a technical idea to actual outcomes. And then one of the most interesting

things we'd studied here was adoption. Will clinicians adopt? It was a very real-world study to show, like, can a system like this actually work? And we showed ninety percent physician adoption. So that was extremely exciting to see. And that's what, you know, was about closing the trust gap.

Speaker 2

So, okay, so you published this paper whatever a year and a half ago, where are you now? What's your company doing?

Speaker 3

One thing that I didn't cover earlier is that we expanded the system dramatically, from not just working on sepsis to a variety of other conditions like sepsis, where there is very significant clinical benefit but also financial benefit for the health system. The reason the financial piece matters is, you know, ultimately health systems are working

on one to two percent margins. For them to be able to implement systems that actually improve care, they still need to be able to financially justify that this can be done, and that was crucial.

Speaker 2

So what are some of the other things you're working on besides sepsis now?

Speaker 3

Like, another example area is pressure ulcers. Huge area.

Speaker 2

Okay, like bedsores.

Speaker 3

Exactly. It's an area where, again, there's huge patient impact, in terms of, like, you know, if you do end up getting a serious bedsore, how detrimental it is for the patient, sometimes leading to death, sometimes leading to the need for amputation. But even more interestingly, there's a huge burden on the caregivers themselves. Like, nurses today have to do a huge amount of work to take care of these patients. Like, today, there are lots of scenarios where these patients are missed, and there's an opportunity where you can actually use the data to identify who's at higher risk and start, again, implementing these new ways in which you can do targeted, you know, preventative measures.

Speaker 2

What has to happen for you, you know, for your software to get adopted at hospitals all around the country? Like, I buy that it's helpful. How do you get from it being a kind of researchy thing to being a thing that everybody uses?

Speaker 3

So the hurdles we needed to cross were: one, we needed to figure out a way to get approvals from the electronic health records to be able to integrate it. We did.

Speaker 2

That took a couple of years, from, like, just the big software makers, Epic, whatever, the companies that make the electronic health records. They have to say yes. Okay, so that's done. Check. Great, what has to happen next?

Speaker 3

Yeah. Next, you need a system that is able to, you know, when you go from one site to the next, to the next, to the next, you need the ability to measure and generalize as you go cross-site, and reliably perform.

Speaker 2

So it has to work in lots of different kinds of hospitals that collect different kinds of data in different settings.

Speaker 3

And in our partnerships, we've shown that with data.

Speaker 2

Okay, third check.

Speaker 3

Like I said, we had to show that, basically, people will adopt in these different environments. So we have data to show that.

Speaker 2

Okay. Four.

Speaker 3

In some of these areas you need FDA approval, and in the areas where we need the approval, we're working with the FDA to get those approvals.

Speaker 2

Okay. So that's kind of the next step?

Speaker 3

Correct. And then once that's done, you can now start to, you know, it's available, it can be marketed, you can scale it nationally. All very exciting things.

Speaker 2

So if things go well for you, what will the world look like in, say, five years?

Speaker 3

Oh my god, so exciting. I think we will actually be implemented at sixty, seventy, eighty percent of the market, I hope, in the US. What's interesting now is, like, you know, healthcare is a market which is a leader-follower market, and once you show things that work, it makes logical sense. You have the proof points, you've tackled most of the issues that people struggle with.

Then this is an area where you can scale. And when it comes to, like, the areas we're working in, which is clinical, unlike some of the others, like billing and messaging and back office, you know, the years of development required to build what we built is very long. Like, it's taken us eight to nine years to do all the pieces necessary to get to where we are, so there aren't a lot of, like, other competitors in the market.

Speaker 2

You have a moat, and FDA approval is going to be even more of a moat.

Speaker 3

Among other things, exactly. So we have a very, very significant, like, moat, and hurdles people have to cross to really get it to work, and we've invested in them.

Speaker 2

And so in your happy five-year future, most of the hospitals in the country will be using your software, your models, to detect sepsis, to detect bedsores, earlier, and in a...

Speaker 3

Variety of other conditions. Like, we've looked at our own financial models and shown that, like, a, you know, modest four-to-five-hospital health system stands to gain like fifty to two hundred million dollars from the implementation of our system in some of, you know, the condition areas we're tackling.

Speaker 2

And people will die less and be less sick as a benefit also.

Speaker 3

And that is honestly the biggest maturity I've had in building this company. I started from like the cause of caring, and it was realizing, like it's funny in healthcare, they're so used to caring for patients who are dying every day. They've gotten desensitized. You then come back to realizing you

need the other things to follow, like the money. You need to figure out a way to make it easy for them to do the right thing, And when you do that, then they do actually care about doing the right thing, because that's why they were there in the first place.

Speaker 2

We'll be back in a minute with the lightning round. Okay, I'm going to keep you another two minutes or something to do a lightning round. You went to college at Mount Holyoke, an all-women's college. Yeah. And so I'm curious, what is one thing you would tell someone considering attending an all-women's college?

Speaker 3

Oh, I loved Mount Holyoke. It was so much fun. It's where I got my confidence that I could do really, really hard things and not, you know, not feel defeated.

Speaker 2

If you weren't working in healthcare, where would you be trying to apply AI?

Speaker 3

Oh my god, I've just been so obsessed with healthcare for the last decade, I haven't really lifted my head to think about other things. I mean, honestly, there are a million areas you could apply it, but I don't like thinking about it, because it's just that the need is so dire in healthcare, and it's so hard. It's so hard for an AI researcher to focus in healthcare, because they don't make it easy. You can make a lot more money doing the same kind of things

in finance. You can get the data more easily, you can make money off of it more easily. Like it is annoying, It is really annoying.

Speaker 2

Is ChatGPT overrated or underrated?

Speaker 3

Actually I think it's underrated.

Speaker 2

Okay, go on.

Speaker 3

I think, you know, when we see the math, we're like, okay, that's the math. That's interesting to me. What was really informative was, like, the experience, the social experience. It was so exciting to see people who first interacted with it and, you know,

have their mind be blown by the experience. And that sort of then informs how important the user experience piece of it is. Like, you know, we had some of the chatbot technology before, we had some of the interactive pieces, but it's sort of how OpenAI designed it, and the use cases, like storytelling, poems, like, the use cases where they trained the system to be very good at being conversant, was what made the experience so exciting, because then people could start, you know, like, experiencing it

themselves and that sort of opened up their mind to what else could it do?

Speaker 2

Analogous to the lesson you were talking about in your own work, where getting the answer right, figuring out if the person has sepsis, is actually only part of what you have to

Speaker 3

Do. Huge. And that's, I think, where AI as a field has a lot of growing up to do, because historically the people who entered this field, you know, they gravitate towards the math, they gravitate towards the hard science. But what they don't realize is, ultimately, it is a people problem that you're solving. You

have to get people to love it. You have to get people to incorporate it in their daily lives for this to be successful, and you have to operate in a world which is not very precise, like people have their faults and their mistakes and they work in a particular way, and you've got to get this thing to fit.

Speaker 2

Suchi Saria is a professor at Johns Hopkins and the founder and CEO of Bayesian Health. Today's show was produced by Edith Russlo and Gabriel Hunter Chang. It was edited by Karen Chakerji and engineered by Sarah Bruguer. You can email us at problem at Pushkin dot fm. I'm Jacob Goldstein, and we'll be back next week with another episode of What's Your Problem.

Transcript source: Provided by creator in RSS feed.