
AI: Unraveling the Human Factors of Artificial Intelligence

May 07, 2024 | 48 min | Season 3, Ep. 1

Episode description

Join us for the S3 premiere of The Human Odyssey™: A Human-Centered Podcast!

On this episode of The Human Odyssey™ join Rashod Moten, Human Factors Specialist, and Dr. Jennifer Fogarty, our Director of Applied Health and Performance, as they discuss the various ways in which Artificial Intelligence intersects with Human Factors and Applied Health & Human Performance.

This episode of The Human Odyssey™ was recorded on March 23rd, 2024.

Visit our website: https://sophicsynergistics.com/

Follow us on social media!

Facebook: https://www.facebook.com/SophicSynergistics/

Instagram: https://www.instagram.com/sophicsynergistics/

LinkedIn: https://www.instagram.com/sophicsynergistics/

Twitter: https://twitter.com/SophicS_LLC

Transcript

Welcome to The Human Odyssey, the podcast about Human-Centered Design. The way humans learn, behave, and perform is a science, and having a better understanding of this can help improve your business, your work, and your life. This program is presented by Sophic Synergistics, the experts in Human-Centered Design. So let's get started on today's Human Odyssey. Hello, and welcome to The Human Odyssey Podcast. My name is Rashod Moten. I am one of Sophic’s Human Factors Specialists.

I'm joined here today by our guest, Jennifer Fogarty, Sophic’s Director of Applied Health and Human Performance. Hello. Hi. Thanks for having me. And thanks for joining us. Sorry I got hung up there for a second, but I wanted to ask just a bit more about your history and your background. Sure. Yeah. So, I started out with a PhD in Medical Physiology, so I was studying cardiovascular disease. Very passionate about Human Health and Performance.

I'm an avid, avid exerciser and someone who believes that, you know, we can actually do more for our health through things like exercise and eating well, and how do we prove it to ourselves? So I was really focusing on that, and actually had a model where we showed that exercise indeed grows collaterals in a heart when you have a blockage, like you naturally can build them, and I was fascinated by the process behind that.

So I started doing some molecular work in my postdoc to partner with the functional work. At that time, I had an opportunity to be part of a mission that carried NASA science. It came through one of my committee members. He really liked the way I operated, so he said, could you come run my lab at Kennedy Space Center? It was a rodent mission. It was the last Space Life Sciences mission. And unfortunately, it was the Columbia STS-107 mission that didn't return.

So in about a three or four month span of working at Kennedy Space Center, I got to learn a lot about aerospace. My family has an aviation background, so I was familiar with high performance jets, people who do extreme things, and acrobatic flight. And I always thought I would end up, like, studying that in some way or incorporating it, but didn't have a trajectory there at the time.

Went through the Columbia accident at Kennedy and was really just blown away by the environment and the culture and, you know, at that time, the ultimate sacrifice these people and their families had made to try to do things that had never been done before. And, came back to Houston.

I was in College Station at the time and was looking for a position outside of academia, and one came up at a contractor, which was Wyle Life Sciences, and sure enough, they had a role for a cardiovascular discipline scientist. So it kind of lined right up, and I started working at Johnson Space Center.

Quickly moved on to a civil servant position because I had a unique set of skills, having done clinically relevant research that was highly translatable, and the physicians, the flight surgeons who support the astronauts, really needed some folks who understood what was coming up through the research pipelines and how to potentially translate it for their purposes.

So I was working the background of the evidence base: what was going on in research, and how it might apply to the needs that were happening in spaceflight, because spaceflight really puts a premium on prevention. Right. The best way to manage medical care is not to have a medical incident. That’s fair. Like, can we really avoid these things? Can we know that we're not going to have bad outcomes during a mission of a variety of different durations?

At the time, it was shuttle focused, but it was the earliest stages of ISS, where people were now living, instead of two weeks in space on shuttle, four and five months on station, and that was a very new experience for all of the space programs, except for the shorter stays that were done previously. There was Skylab, and there was Mir and then NASA-Mir. So there was a little bit of an n of, like, five and ten people who had experienced this.

But it was really the start of the world of the International Space Station, and that was a remarkable time to be around in science and with the flight surgeons supporting them. So I kind of wove my way through different NASA jobs. It was similar, like, I'm a utility player there.

I love solving problems, so if there was an opportunity to have a role where I was involved in making a difference in operations while I was helping to guide what research needed to happen, that was really kind of the best combo for me. But ultimately, toward the end of my career with NASA, I was the Chief Scientist of the NASA Human Research Program, which, when you start getting into those roles, there's a lot less fun. I can imagine.

But one of the Russians, when I was interviewed in Moscow after we had done one of the isolation campaigns in their NEK facility, came up and said, you know, why don't you smile a lot, or something? And I said, well, I am a serious person. And usually I'm listening and thinking, so I don't think about what my face is doing when I'm on camera, or something along those lines. And he asked me, through an interpreter, is it because you think you're the big boss?

And I was like, well, actually, I am the big boss. I don't have to think it. I'm in charge of a pretty big program, and I have to be responsive to the taxpayers and Congress and, you know, NASA Headquarters. I said, so, yes, I'm a very serious person. I can laugh about it now, but at the time I was very like, what do you mean, the big boss? Why do you think I'm the big boss? I got the title. It was a lot of responsibility, but it was also very gratifying. Right? Just in a different way.

So after a couple of years of that, I decided to step away from government work directly, being a civil servant, and go into industry and that's when I joined Sophic as the Applied Director or the Director of Applied Health and Performance. You're not the only one who doesn’t remember my title.

So it's just an opportunity to work with a variety of spaceflight providers, people who do medical hardware, people who are going to operate in extreme environments other than spaceflight that we can get involved with. So it was applying my skills to problems again and being part of building solutions and seeing them applied.

So it's an exciting time to be in aerospace, obviously, you know, with Artemis, commercialization of low-Earth orbit, and potentially even, you know, lunar missions and then Mars missions; there's a lot going on.

There's a lot of companies that are starting out, people who want to engage and really need help with Human-Centered Design and, as we talk about more on the government/industry side, Human System Integration, and then the concept of keeping people healthy before they go into space and while they're on those missions, which right now, descriptively, are incredibly varied. Yeah, I imagine. So, yeah, the variables are almost limitless. Thank you for having me.

No, no. Love the conversations. Oh, same, same, and honestly, thank you. Your background, I didn't want to just give a brief intro, because I knew I wouldn't do it justice, so I really appreciate that. Now, for today's topic, we’re going to discuss artificial intelligence. You know, it's a hot topic today.

It's just, in society you have, of course, everyone under the sun speaking about positives and negatives, fears, and even, you know, for optimists, they're thinking about where it could go. So today, I just want to primarily focus on artificial intelligence with regard to your background itself. But before we do that, I do want to ask: you mentioned College Station. You wouldn't happen to be an alum of [Texas] A&M? Well, well... kind of.

The reason I hesitate is it's an interesting story. So when I joined the College of Medicine, it was the Texas A&M University College of Medicine. While I was there, the College of Medicine and other schools associated with the Texas A&M system kind of pulled out and became the Texas A&M System Health Science Center. That existed, I think, on the order of a decade.

So my degree actually talks about coming from the Texas A&M Health Science Center, and my class in particular, because of the date we started, typically we would have gotten Aggie rings; that was the model even for graduate students. I'm from New Jersey, so it was quite a culture shock, and I didn't understand the whole thing.

But, nevertheless, there were people in my class who were very disappointed, because when all that shift happened, there was a hot debate about whether the graduates would actually get Aggie rings. Yeah. And some people were, you know, obviously sad, very sentimental, and really went down that path. I didn't really understand it, so it kind of went over my head.

But yeah, I mean, I've had this strong association with A&M and the College of Medicine in particular, and I did a lot of work at the Large Animal Clinic at the Veterinary School, which I tell you is just stunning. The capabilities are amazing, the amount of funding they have, the work that they do: world class.

But yeah, the experiences I was able to gain because of the remarkable research they did really set you up, when you leave, to be well-versed in both breadth and depth. The opportunities are kind of limitless there if you're willing to work 24 hours a day. As a grad student, sometimes that is required. Yes. That’s my A&M story. I try to be very careful because, technically, if you saw my degree, it doesn't say those words, but yeah.

Just wondering, you mentioned College Station, and A&M has a huge presence here in Houston, specifically in the health care field, so I just wanted to ask. Yes, for sure. Yeah, very strong. All right. Well, thank you again, but to get back, of course, to AI: before we dive deep into a conversation about it, I do want to ask, how would you define artificial intelligence? Just based on your understanding of it. Sure.

Which, as someone with a degree in Medical Physiology, I’m not the highest qualified person to comment on, but as someone, you know, aware of it, I might actually answer your question with a question. So, I think I understand the use of the terminology, “artificial intelligence,” and it's usually coupled in my world with machine learning.

I'm a little more accustomed to understanding the very tangible aspect of machine learning: the development of algorithms that can go into massive data sets and kind of evaluate patterns, particularly in medical data. Right. And the machine can learn things. Now, to transcend that, you're like, at what point do we go from algorithms that can be used to interrogate data, find patterns, and then check against reality, asking, are these patterns real?

And are they meaningful? That was the other part. Like, in the medical domain, doctors should not do tests that do not have a positive predictive value, meaning when you get the answer, you know what to do with the answer. Whether it's a zero, you didn't have a problem, or a one, you have the problem, that test has meaningful interpretation. Yeah. If you don't understand where you're going with it, don't do it.
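For reference, positive predictive value has a standard definition: of all positive results, the fraction that are truly positive. A minimal sketch in Python, with invented counts purely for illustration:

```python
def positive_predictive_value(true_positives: int, false_positives: int) -> float:
    """Of all positive test results, the fraction that are truly positive."""
    return true_positives / (true_positives + false_positives)

# Invented counts for illustration: 90 true positives, 30 false positives.
ppv = positive_predictive_value(true_positives=90, false_positives=30)
print(f"PPV = {ppv:.2f}")  # PPV = 0.75: a positive result is correct 75% of the time
```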

And that's where AI, for us, sits right now: in this gray zone where it does not necessarily confidently deliver a positive predictive value. It delivers new insights, because these algorithms can go so much more broadly than the human mind can, right? Assimilate data that comes from a variety of sources, and the abundance of data that's available for a variety of different environments.

But for me, it's really the concept that artificial intelligence transcends the mathematical equations that are algorithms, to where it itself starts to build new algorithms, right, based on the patterns it is or isn't seeing, or is actually determining whether it's got the tools.

Yeah. Yeah. Right. And to be honest with you, the most current manifestation I think people would be experienced with is ChatGPT, and recently I very much appreciated the terminology: artificial intelligence in the wild. Yes. This capability has been unleashed and people are playing with it, and when you play with it, you train it, right? Not different than children, or dogs, or cats.

Which obviously has a variety of different outcomes depending on who's doing the training and what you've got. Yeah. But watching it deliver unique insights based on the direction it's been given, insights that kind of transcend any one person's capabilities. So the way I think of it, it is almost like the personification of the diversity of people who make up the contributors. Right?

But instead of trying to figure out, can I understand what you are saying and use your intellect and your experience base, it is pulling the salient points from you in some way, putting them into the pot of options, reconfiguring them, kind of doing a lot of what I know as probabilistic modeling. Like, let's try this permutation. Yeah. And it comes back a million times later and says, when we've looked across all the patterns, this is the new insight I can give you.

Yeah. And so for me, the idea of it being sentient, I don't think it's true. It's not thinking, but it's using math with respect to a variety of different data sources, and the idea of having pattern recognition, or recognizing lack of pattern, to then reconfigure itself to do another query. Yeah. So it's beyond the algorithm level where someone's writing the code to make it do a task.

It itself creates its own tasks, and I'm like, well, that is pretty powerful, and I understand why there are all sorts of emotions and concerns wrapped up in that. But I do think of it, I'm a little bit of a centrist in a lot of things, which is: it's a tool, and we get to decide how we use that tool.

And if we want to let it run free and tell us to do things, that is a choice that is made, versus I'm going to use it to help me understand things, and I can use it as a tool that can give me information that otherwise I would never be able to perceive. So I'm a knowledge-is-power kind of person, so I don't fear it. But I do understand people's concerns about other people's choices about the utilization of it.

Yeah. No, that makes perfect sense. Of course, you always hear the analogy to, say, Skynet. You know? Yeah. That's generally where most people’s fears come from. Yeah, the entertainment industry, you know; when art portrays a reality and then we start to live it, that has already defined where it could go. Yeah. Right, and so it would be good to have some more positive representations. Yes, yes.

Which I think are out there, but maybe not as interesting in the world of social media and different modalities; they're not as clickbait susceptible. Right? That's something I've seen, I think, even on YouTube. So, as far as my background, [Inaudible] a small snippet: I have a grad degree in Psychology, with a Human Factors focus.

That being said, I remember back in one course, with one of my favorite professors, we started talking about algorithmic thinking and modeling, right? And of course, that naturally led to neural networks, right, as they pertain to the software side, and then the development side.

And you know, after that course, I think it was maybe that summer, I went headfirst into just learning about AI and how these models are not only developed but also implemented within systems, and whether or not it's truly, as you said, data in, data out: you have a subset of inputs, it captures big data sets, it gives you an output. Right? I wanted to learn more about that.

And what I found, just in clicking through educational videos and educational blogs, reading not just journal articles but many articles, was that a lot of them would highlight neural networks but would not actually explain how neural networks are implemented on the software side. So it's having the idea, giving the idea and the background, and understanding that, you know, the neural network is literally a 1:1 ratio.

I'm giving you this one input, or the guidance, and it's looking for this particular subset of data. Right? That was really interesting. So yeah, no, thank you for actually explaining that and, I'd say, not buying into the fear; I appreciate that. Well, and as you know, it comes up regularly: panicking and fear don't make things better.
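To ground the neural-network point above, here is a minimal sketch in Python of a tiny network mapping one input vector to one output score. The layer sizes and random weights are placeholders, not any specific system discussed in the episode:

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed so the placeholder weights are reproducible

def layer(x: np.ndarray, weights: np.ndarray, bias: np.ndarray) -> np.ndarray:
    """One dense layer: a weighted sum of inputs followed by a ReLU nonlinearity."""
    return np.maximum(0.0, weights @ x + bias)

# A toy network: 4 input features -> 3 hidden units -> 1 output score.
w1, b1 = rng.normal(size=(3, 4)), np.zeros(3)
w2, b2 = rng.normal(size=(1, 3)), np.zeros(1)

x = np.array([0.2, -1.0, 0.5, 0.7])      # one input example (data in)
score = layer(layer(x, w1, b1), w2, b2)  # forward pass (data out)
print(score)
```

Training would then adjust the weights against data; the "data in, data out" mapping itself is just this chain of multiplications and nonlinearities.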

Yeah, yeah. So helping the audience and folks who don't understand it have some sort of frame of reference, in a way that they can understand it, to the best of our ability, just kind of settles and calms everybody down, because then you can think about it. Yeah. If you're already in fear mode, we know on a neurobiology level that you've already obstructed some elements of clear thinking, because now, you know, in my world as a physiologist, that's fight or flight.

So your body is already redistributing blood a certain way. It's already prioritizing actions in a certain manner. And that used to serve us very well, right? When not reacting was likely to lead to death. Right? So, yeah, but right now we are just bombarded with issues that set off our fight or flight response, because it's so intrinsic to how our brain operates. Like, it's not quite binary, but it can be pretty close.

And then our behavior and kind of our learned systems potentiate the fight and flight over the calm, and that's run by two different parts of the nervous system. And so then you have to be very deliberate about doing things for yourself, like calming yourself down, to potentiate the calm side of your nervous system. Yes.

That then again allows your brain to work more optimally: to be calm and think through an issue rather than physically and reflexively react. And of course, it's not always a physical thing, but verbal, like the “No,” and “I know,” “Absolutely not,” and “You're wrong.” That's all the keyboard warrior stuff that happens. Yes, yes. That tends to be relatively unproductive. So, yeah, absolutely.

Now, you know, given your background, and you did touch on this earlier, but I wanted to kind of get your thoughts on how you think artificial intelligence is being implemented in health sciences and human performance, whether that be at the research level or, you know, with practitioners, how it's truly being implemented, but also whether it has an impact on the industry at all?

Sure. Yeah. There's a couple of areas where it's really been on the leading edge of coming in, and it started definitely with the machine learning side. So one is radiology. You know, I don't know how many people might have experienced this, but often if you get a biopsy and it gets sent off, and, you know, clearly there's something potentially very seriously wrong if you're getting, like, an organ biopsy or tissue biopsy from a clinician, not a research protocol.

it can be weeks before they expect a result. Right? So now you get to the heightened sense of, “I need an answer. The answer could be anything. And then I know, then we can have a plan.” But just the waiting is a torturous process. Well, the question is, “Why does it take so long?” Well, it's a very expert- and human-dependent activity, and the people who specialize in that get bogged down in a lot of false positives and a lot of just negative samples.

So the question was, for that sort of tool, does it always require the human eye, or could we have the experts train, you know, a machine learning algorithm to know what to look for, so it could do the triaging and you could speed up the process.

So in the domain of radiology, more and more interrogations, whether it be MRI, CT scan, or biopsy, are being triaged by machine learning algorithms, which may have already crossed over into what may be artificial intelligence, in that the machine is now recognizing: hey, human, you forgot to tell me these things. Like this pattern, I see this too, it goes into another category. Like, you were looking for cells of a certain type that would have indicated a disease process.

I didn't see them, so it's a negative for that, but I saw this other thing that you should look at now. So it flags the specimen to be reviewed by a human for a particular reason, and changes the prioritization of how they look at it, so the most critical cases can go to the top of the line, to the human who really needs to do the high-level subject matter expertise work. My experience with spaceflight in particular is with imaging of the eye.

We've got an issue going on with astronauts that's hard to explain. Very concerning. And a lot of energy is being put into studying that. But one of the areas that's really been remarkable is neuro-ophthalmology and imaging with respect to AI, and that is a clinical tool now. A lot of clinicians are using that. The confidence, the verification has been done. There's a lot of certainty. There's constant quality control being done.

And that field just continues to grow, you know, and there was some fear, not only that it could be, like, “Is it wrong?” You know, constantly, “Is this good quality?” So a lot of that work continues in the background to continue to assure that this is as good as or better than if a human did the first pass. But the other element was people having fear of being replaced. Yes.

What do I do now if I'm not looking at, you know, 30 slides a day, or sitting in a dark room staring at a screen of MRI images all day? And it was, yeah, we don't have enough doctors; in that case, there was no reason to fear. We just shifted your role to the higher-level expertise and applied it differently. So I think radiology has gotten comfortable with the idea of using it as a tool and really potentiating their value.
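To make the triage idea concrete, here is a hedged sketch in Python; the case IDs, scores, flag reasons, and threshold are all invented, and real clinical systems are far more involved:

```python
from dataclasses import dataclass

@dataclass
class Specimen:
    case_id: str
    model_score: float  # hypothetical model confidence that the case needs review
    flag_reason: str    # why the model raised it, e.g. an unexpected pattern

def triage(specimens: list[Specimen], review_threshold: float = 0.5) -> list[Specimen]:
    """Send only cases above threshold to the human expert, most suspicious first."""
    flagged = [s for s in specimens if s.model_score >= review_threshold]
    return sorted(flagged, key=lambda s: s.model_score, reverse=True)

queue = triage([
    Specimen("A-101", 0.97, "target cell pattern present"),
    Specimen("A-102", 0.08, "clean"),
    Specimen("A-103", 0.61, "unexpected finding outside the requested test"),
])
for s in queue:
    print(s.case_id, s.model_score, s.flag_reason)  # A-101 first; A-102 never queued
```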

More broadly, it is not applied, for the reasons I mentioned: the validation is not there, the confidence is not there. Overall, the data to support a diversity of people is not there, and in a lot of medical care, you have a selection bias based on people who can afford the insurance or afford the test. So there was some work done recently, it was actually on an immunotherapy for cancer, where they thought they had a pattern and they had a treatment regimen based on the pattern.

And they started delivering that treatment more broadly, and it turned out that for people of color and Asian people, it was a worse choice. And it turned out that because they weren't part of the selected-in pool when the study was first done, those findings were not present.

So they took a narrower population and extrapolated that this would be good for everybody based on the cancer criteria, when it turned out that it's not just the cancer criteria, but some other genetic underpinnings that also have to be present. It's just that the pool wasn't genetically diverse enough to pick out that it didn't work for everybody.
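One concrete guard against the selection-bias failure described above is to score a model per subgroup rather than only in aggregate. A minimal sketch with entirely synthetic records:

```python
from collections import defaultdict

# Synthetic records: (subgroup, model_was_correct). Illustration only.
records = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

hits = defaultdict(int)
totals = defaultdict(int)
for group, correct in records:
    totals[group] += 1
    hits[group] += correct  # True counts as 1

for group in totals:
    print(f"{group}: accuracy {hits[group] / totals[group]:.0%} over n={totals[group]}")
# Aggregate accuracy can look fine while one subgroup does much worse;
# that is the shape of the immunotherapy example above.
```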

Yeah, and I have to ask: in instances like that, whenever you do see at least some signs of bias within the outputs themselves, are you finding, of course, you mentioned that it's not really being implemented broadly across the industry, but whenever that does come up, are you seeing that heighten the level of concern a bit more? Or is it kind of, triage the issue in that silo and then say, okay, after further assessment, we'll decide whether we want to implement this later? I think in the clinical and research, the clinical research domain, it's just heavy, heavy skepticism, and particularly for the ones that have been shown to not be a beneficial use of it, or are limited by something like selection bias or data bias, people have pulled back a little and said, okay, you know, from an industry standard. And this isn't just, like, government regulation.

This is, you know, the clinical industry; the insurance industry is involved, as you might imagine. Now, that's not today's topic, so that's all I’ll say on that. I think we both had the same reaction. It's like four podcasts. Yeah. Yeah. There’s a lot to unpack there. Yeah. Yeah. But no, they really have kind of pulled back and said, we need to do better. And the benefit was the push of recognizing that we don't have a diverse enough data set.

And that actually revealed other issues, which is inequitable care, access to care. Why is it? Why don't we have these people in our database? Like, how do we do this? I mean, we have to right this thing. So I think it has resulted in some good things, but it will delay the product. Which, in that sense, going back to, it's okay: for any job I've ever worked, I get a lot of pressure to do things fast, like, rush, like there's a lot of urgency.

Yeah, yeah. Real or not, we want to make progress, and I get that. But my phrase is always, “I will only go as fast as good will allow.” Yeah. And if I don't think something is good, and I know that’s a generic phrase, but that means, you know, credible, valid, evidence-based, quality of data, diversity of data; you start ticking down the list of what “good” means. Until it has those things, we're not going to production.

Yeah. And we can explain why; there's solid rationale, you know, but that definitely gets a lot of angst when you're working on the business side of the house. So, yeah, no, I can imagine. But there are fields advancing it. I think the other ones, the more generalizable ones, where the diversity is a very broad gradient of both the medical conditions and the medical treatments, those are incredibly complex.

And those are going to take longer to get to where something like machine learning algorithms, moving to AI, starts to make sense to us, and the field believes it's the right thing to do and it's showing benefit. Yeah. You know, surpassing what the standard of care is today. It is more prevalent in the performance world because, again, mostly it's that risk:benefit ratio.

Yeah. And when you're talking about potentially potentiating elite athletes, or people who are considered occupational athletes, people who go into hyper-extreme environments like the Everest climb or, you know, things of that nature, going to the Antarctic, high-altitude work, that's when you're saying, “Well, hell, I got nothing to lose here. Like, if it can help me do it better, we're all for it.”

So a lot of data gets gathered, a lot of biomedical monitoring is going on, and then you're dealing with super deep data on an individual, so you can do a lot of work on them, baselining them, and figure out, like, are there ways to potentiate them and who they are, in a pattern we couldn't have seen other than by throwing these algorithms at it.

It’s like a signal-to-noise issue: we're going to gather a bunch of stuff, we're going to know to look at some things, you know, that we typically have done for decades now. But there's a lot of potential signal in all of this noise; we just don't know how to find it. Yeah. So the ML/AI process kind of draws the signal out, and I always tell people my approach is that the signal just becomes a clue. It does not tell me what to do yet. Now the work happens: let's go verify that signal.
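As a toy version of drawing a "clue" out of noisy monitoring data, here is a sketch that baselines one individual and flags readings far outside their personal norm. The numbers are invented, and a simple z-score cut like this is not a validated method, just the shape of the idea:

```python
import statistics

def flag_clues(baseline: list[float], new_readings: list[float], z_cut: float = 3.0) -> list[float]:
    """Flag readings more than z_cut standard deviations from this person's own baseline."""
    mean = statistics.fmean(baseline)
    sd = statistics.stdev(baseline)
    return [x for x in new_readings if abs(x - mean) / sd > z_cut]

# Hypothetical resting heart-rate data for one individual.
baseline = [58, 60, 57, 61, 59, 60, 58, 62, 59, 60]
print(flag_clues(baseline, [60, 59, 74, 61]))  # [74]: a clue to verify, not a diagnosis
```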

Let's verify what we would do with that information. And if it belongs in the operational domain, does it belong on Everest? Does it belong at high altitude? Does it belong in spaceflight? You know? Yeah. It's interesting, because of course I've seen that, whether it comes to performance coaching and, of course, athletes, and definitely extreme athletes as well. And another tidbit about my history: I was in the military as well.

Yeah, that’s another area; they’re very interested in all of this, for sure, as you might know. Yeah, yeah. And that's something, you know, I had the pleasure and honor of working in Bethesda and got to see one of their labs there, or work with one of their labs for human performance. And it was actually really amazing to kind of see exactly how we're not only tracking performance, but also increasing performance, improving performance.

And that was my first time really seeing any semblance of machine learning, you know, being used, and it was enlightening to me. That being said, you know, you do want to make sure that the data is good. Yeah. And any of the modalities that you're implementing, you want to make sure they're good. How long typically does it take?

You know, from the research level, let's say the research has been validated and peer reviewed, to a specific industry, let's say those coaches: how long does it take for that information to not only get trickled down but also used? And then, on the backside, how long does it take for that data to get sent back up to say, “Hey, we're using this. This is actually great. You know, we think we should actually improve or increase our use of AI, any AI system.”

Yeah, I think it's still a question of, it has variable lengths depending on who the user is. Yeah, that makes sense. So another group who are avid users of this, I will say, are people interested in longevity. You know, a ton of data is coming out on biochemical pathways, molecular pathways that get turned on and get turned off over time; you know, chronology affects biology. But then lifestyle factors, you know: who are you?

What and how have you been living? Where have you been living? Very important. What are your leisure activities? So that, in a composite, ends up creating the version of what your exposures are, and exposures times kind of your genetic vulnerabilities versus robustness lead to your outcomes over time. And some can happen more quickly versus happen later. But if someone wants to be an architect of their biology, you're going to have to dig pretty deep into the molecular world.

And as your body translates from, you know, your DNA code into the RNA, then to a protein, and the protein to function, and then the function to how your body operates. Right? That's where the rubber meets the road. Like, do you run faster? Do you live longer? That's the end question. Yeah. And those people, they call it biohacking now. I would say the biohacking community is willing to use just about any tool possible, and they'll take any clue and try it. That is terrifying. It really is.

But in this day and age, if it doesn't require a clinician to prescribe something, you have the freedom to go acquire stuff, and people will leave the country to go get access to tools, meaning therapies, medications, whatever. There's a laundry list of things under that headline. But, yeah, that moves very rapidly, right? Because the clue happens and they want to go try it. They are their own experiment, over and over again.

And there are people who have suffered the ultimate consequences of using themselves as a science experiment when strong validity is not there, because a lot of it is like, who knows how wrong they were? We'll never actually know, because, did they document what they did? Can we repeat this experiment of one? It could be what I call failing for the wrong reason, which is you have the right tool but the wrong amount of the tool. Right. Dosing can be dependent.

Timing can be dependent. So that's why a real science protocol would help you know if something has the potential to be a tool that could be more broadly used or prescribed in a way that could make sense when people need it.

You know, given my background and what I've spent the past 20+ years doing, I do lean a little bit right of center when you talk about having rigor in that process and then having clarity about what we know and how much we know about it. Not to obstruct freedom of choice, but your freedom of choice is obfuscated by the idea that you don't know what you're choosing. Yeah. So in the world of human research, we have informed consent.

So when it comes to participating in something that is using AI to give you insights, part of the informed consent, and this is not literal, but how I would approach it, there are other analogies to this, is uninformed informed consent. What you're going to be told is, we don't know what risks you're really accepting here, but you're willing to do it anyway. And what parallels this is people who are willing to sign up for a one-way flight to Mars.

You know, there are different companies out there trying to get lists of people. This happens pretty regularly. Not the credible companies, in terms of having a vehicle and stuff ready; they're trying to get funding, you know, crowdsourced funding, to go build something. But with the caveat that, hey, we don't think we can get you back, and they’re like, “I'll go anyway!” You get hundreds of thousands of people signed up. And so clearly there's no impediment there.

Yeah. But they're informed. It's just like, “Do I have to protect you from yourself?” Sometimes, you know, it's very parental. That's what people don't like. That tends to be, you know, what regulatory bodies do. So, yeah. So AI is sitting in that space where it has a lot of potential. It's in the wild to some extent, and you can play with it, and you don't need a prescription for it, so the government’s not regulating it; they're struggling with what that means.

I don't think they really should. They're not good at it. [Laughter] But what do we do as a culture, as a civilization? Where do we give ourselves some boundaries so that we can ensure people are safe?

Because I guarantee you, as we see even in the pharmaceutical industry, you know, and in the recreational drug industry, you can be very upset if you suffer bad consequences from something that you were trying to use and thought could give you benefit, and now you want someone to blame. Yeah. So, you know, you do want to set up a structure where there are some boundaries that say, “You go outside these bounds, you're on your own.”

But for what we know has legitimate purpose, and has verification and validation behind it, we think we could apply it and do better than how we do today, because it gives us insights we couldn't have had before. Yeah. That actually touches on one of my final questions as well.

You know, earlier you spoke about OpenAI, ChatGPT, and, you know, about not only how AI actually captures information and how it is being used, but also specifically when it comes to health and biohacking. You know, we've seen, obviously, those apps where you can sign up; I'm guilty of it. I think I signed up for the Wim Hof Method at some point as well.

You know, if you're somewhat health conscious, there's something that you're actually interested in, but the one question I never thought to ask, whenever I'm going into the apps and I start putting in my information, is how my information, my actual personal information, is being used. Are you seeing any concern within the industry with regard to privacy protections when it comes to AI?

A lot of concern, and clearly the clinical domain, particularly in the United States and Europe, has some pretty strict laws, right? Europe actually has stricter laws. At some point, when I was dealing with my job at NASA, I had international partner work, and, yeah, one of their laws about electronic data pretty much, like, shut everything down for a little bit until the lawyers figured out, how do we implement something? What does it really mean?

So the GDPR was something to assure privacy, with the best of intentions, but the wheels ground to a halt for a couple of months, because we had test subjects in a very expensive study and suddenly they were like, well, we can't send you the data from Europe to the United States. Yeah. I was like, well, considering we're the paying customer, and they're consented, you're going to have to figure this out.

And they did, but they just didn't know how at the moment. And that didn't have to do with AI in particular, but that just gives you the ultra-conservative, like, it's an all-stop until we figure it out. So since so few clinical tools depend on AI, you won't see it in a disclaimer right now, but you get the HIPAA release, you know, the Health Insurance Portability and Accountability Act release, which tells you we can only send your data to other people who are going to do X, Y, and Z with it.

And otherwise, you know, it's secure, it's behind these firewalls. You know, they try to give you some information about your actual privacy. So that structure is in place, but I don't see anything coming out in releases talking about using AI tools. Usually those are going to come out in a separate consent, because essentially that would fall under research.

Yeah. So clinicians do do research, and there is clinical medicine using people's data to go train AI and then surveil the AI on the back end.

Is it delivering results that actually happen? Because you have the medical results in the medical record. And it is even allowable under HIPAA in the United States for your data to be used without your consent if it can be anonymized, meaning some of your demographics will not be moved along with you: your name, your Social Security number, your insurance information, none of that will move, but it will say, like, male between 25 and 50.

You know, depending on the requester's request, it may have something like body mass index, some of the metadata that would help them understand, and then it would say, were you normal, healthy, not hypertensive, not type two diabetic? Because then they want to compare you with people who are sort of like you, but type two diabetic and with BMIs that are high, and say, can AI predict, could it have predicted who was going to be who?

So they don't give the AI all the information. They kind of give it the left side of the block of information, saying, who were you if we knew you from, you know, ten years old to 25? And then we map you from 25 to 50, so we already know who you became. But if we only gave the AI the upfront data, the earlier part of your life, could it have known you were going to become that person?

Could it have tracked, essentially, your biomedical information in a way that said you had risk factors we couldn't see? And that is the good use of AI, right? Because then we can go to the 10-to-25-year-olds and say, how do we really do prevention? How do we stop you from becoming a type two diabetic? How do we stop you from becoming, you know, the heart attack victim or the stroke victim? That is the goal of using AI in medicine.
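To make the de-identification and "left side of the block" ideas concrete, here is a hedged sketch in Python; the field names and the record are invented for illustration, not drawn from any real system:

```python
IDENTIFIERS = {"name", "ssn", "insurance_id"}

def deidentify(record: dict) -> dict:
    """Drop direct identifiers; keep coarse traits like sex, age band, and BMI."""
    return {k: v for k, v in record.items() if k not in IDENTIFIERS}

def split_timeline(record: dict, cutoff_age: int = 25):
    """Early-life entries go to the model; later outcomes are held back as the answer key."""
    early = {age: v for age, v in record["history"].items() if age <= cutoff_age}
    later = {age: v for age, v in record["history"].items() if age > cutoff_age}
    return early, later

patient = {
    "name": "EXAMPLE ONLY", "ssn": "000-00-0000", "insurance_id": "X",
    "sex": "male", "age_band": "25-50", "bmi": 27.1,
    "history": {15: "normal", 22: "prehypertensive", 40: "type 2 diabetic"},
}
shared = deidentify(patient)
early, later = split_timeline(shared)
print(early)  # what the model is allowed to see
print(later)  # what its prediction is checked against
```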

So HIPAA has that built in, which is a great tool, because it's a phenomenal database, but it does protect your privacy. Outside of the clinical world, if you're engaging with these apps, you have no guarantee what's happening with your data. You can go into the fine print, and I would always recommend downloading the terms and reading them later. Like, we all get it. I mean, I've signed up for stuff, I get iTunes. I'm not a lawyer; I don't understand most of that.

Like, I have a pretty high degree and still I'm like, I don't understand my phone bill. Yeah. My cell phone bill. Yeah. I mean, they've done the joke, not a joke, but like, can a brain surgeon and a nuclear engineer figure out your cell phone bill? What am I being charged for here? And then the iTunes agreement, like, no. We just want the iTunes. Let’s move on.

But I also don't want to sign away my rights and give you stuff. Yeah. So the warning is that, a lot of times with these companies, you are the commodity. They are using your data to build their business case, and they are using it to refine their offering, their product. So in some cases, if you engage, you are being provided something of value. That's why you did it, right? You wanted something from them. Well, they need to build their business case of the future.

So they're going to use your data, and you as a participant, to get there. And so then there's a mutual benefit, if you understand what you signed up for. Yeah. There are some apps, and I was told this a long time ago, and my son is 16, on the internet, in the wild. Very nerve-racking. And I'm just training him to be the best critical thinker he can be, because shutting all that off is not an option. But he's faced with a lot of choices.

He may not really understand them; he doesn't have the life experience. So he thinks he knows it all, but he does not have the life experience. But I say, the one thing you gotta know is, if something is free, you are 100% the commodity.

Anytime someone's having you sign up, and you've got to give them your email, you know, the texting is terrible now, the phone number they want, but, you know, the email and your metadata, and they may be watching you and your habits, like some of these shopping apps and stuff, they're watching everything you buy and search. Just know that for whatever you thought was worth getting, they're getting a whole lot. Yeah. And you have signed away your rights to know what that is.

And that's where it's very dangerous, and that's why people go to DuckDuckGo, you know, which I get. I don't know what to do about it; I have no answer. I'm a little, like, willing to try stuff myself, but I think the warning is: be skeptical. That's healthy. Go educate yourself. That's in your power. Right? Try to understand the sources you're getting educated by. That's the other one. Yes, yes. There's a lot of fake news out there. Yeah, yeah, yup.

It’s a daily conversation sometimes. But, you know, I don't think this is Skynet. I was there with Y2K; I've seen interesting things happen, and predictions, and the world still hasn't ended. Yeah. 20 times now. Yeah. I don't think it's going to do that, but we've got to keep an eye on it. Don't be naive about it, and think about your data and yourself: protect it like it's your most precious resource, you know, and ask the hard questions.

And if something you want to engage in is not being honest with you, maybe it's really not worth engaging in, especially on the internet. It's a lesson for life. There we go. It’s a hell of a podcast. Oh yeah. It was fun. Thank you, Jennifer, that was my last question. Did you have anything that you wanted the listeners to know about yourself, anything upcoming?

I think, yeah, there are a lot of exciting things going on in the aerospace domain and commercial space, and my big message to people is, we get asked a lot, you know, why do we do this? Like, we have a lot of problems to solve, you know? You understand when you look around you, it can be overwhelming at times, right? And it can be a lot, very hard on your mind and your heart on any given day. But for people who engage in something like spaceflight, it itself is not the reward.

The reward is the accomplishment of getting solutions that are going to change how we live on Earth because they have to be stripped down of all the things we take for granted, and we have to accomplish things that we just won't solve for ourselves here on Earth.

And while I understand people are talking about why we may have to leave Earth potentially one day, and I hope that's never true, I am working in a domain where I want to bring these solutions back to Earth and dramatically improve the equity in access to health care. I want to make differences in women's health and early screening and mental health. I mean, even today it's just on the top of my head, some of the women's health issues; those are my goals using spaceflight.

So while I accomplish one thing, I'm going to accomplish these others, and you don't have to pay twice. That was the goal. Yeah, yeah. It's a huge forcing function to solve some really, really hard problems that we just have not solved for ourselves here. And the last thing, it's actually an African proverb, it always makes me just a little emotional: “The Earth was not given to you by your parents. It is on loan to you by your children.” That’s beautiful.

Yeah. And this is a stunning revelation and perspective on how we treat things. And when you are loaned something, it's a much different concept than when you're given something. So take care of it and it'll take care of you. That's it. That's awesome. Well, thank you again. Thank you so much. I loved the conversation. I’ll come back for more. Yes, please do. I will. All right. Well, to our listeners, thank you guys for tuning in.

This has been episode one of season three of The Human Odyssey Podcast. Once again, my name is Rashod Moten and again, we're here with Jennifer Fogarty and as always, please join us next time. If you do want to provide any feedback, reviews, anything like that, please visit us on any one of our platforms. We're on all social media platforms and feel free to drop a like and a review. Thank you so much. See you next time.

The Human Odyssey is presented by Sophic Synergistics, the experts in Human-Centered Design. Find out more at SophicSynergistics.com. Get Smart, Get Sophic Smart.
