
NASA will use AI in current and future spaceflight missions to the Moon, Mars and beyond.

May 28, 2024 · 1 hr 6 min

Episode description

NASA will use AI in current and future spaceflight missions to the Moon, Mars and beyond.

Transcript

Hey, everybody. Welcome back to the Elon Musk Podcast. This is a show where we discuss the critical crossroads that shape SpaceX, Tesla, X, The Boring Company, and Neuralink. I'm your host, Will Walden.

Good afternoon. Thank you for joining us for NASA's first artificial intelligence town hall. I'm Melissa Howell. Today is all about offering more insight into how NASA is using AI to build on its foundation of innovative technology and exploration, and the role AI will continue to play here at the agency as we look to the future.

After the discussion, we'll take questions from the audience right here in the room and those joining us online at conferences.io. To kick off today's town hall, I'd like to welcome Administrator Bill Nelson for opening remarks. Hey everybody. When a new generation of tools reaches NASA's doorstep, we don't just plug in and suddenly go into a whole new bunch of things. We study them, we iterate them, and we advance them for the

benefit of all. We work with partners across industry, across universities, across our government, across the world to find ways to improve them. And so we do what NASA does best. The technology that we have today is astounding. Take, for example, the phone in your pocket, or even the phone that you're using to watch this town hall. It has over 100,000 times the processing power of

the Apollo Guidance computer. The technology, of course, that flew on Apollo 11 and put Neil and Buzz on the surface of the Moon. And we're talking about just the phone that we use to send emails. So imagine what we can do today as NASA dives into the next generation of artificial intelligence. We have used AI safely, sustainably, successfully for decades. Yet today this technology is

transforming before our eyes. There is new promise in artificial intelligence, for example, machine learning tools like neural networks, deep learning, generative AI, and modeling. And as this technology grows, so too does our capacity to use it, to test it, to refine it, and to integrate it into our work to try to benefit all of humankind. When used right, AI accelerates the pace of discovery, and it can support our missions. It can drive our research.

It can analyze our data. It can support our spacecraft, aircraft, science, and a lot more. And it can open new possibilities in our ability to land on celestial bodies and to navigate to them, to peer into the vast corners of the cosmos, and even to aid in the search for life. And remember, that's a statutory requirement of NASA: searching for life. That's why we're digging on Mars right now. That's why we're looking for exoplanets. And it goes on and on.

That's why we're studying that asteroid sample from Bennu. Well, AI can make our work more efficient, but only if we approach these new tools in the right way, with the same pillars that have defined us since the beginning: safety, transparency, and reliability.

Those three things matter not only to NASA, but to the president and to the vice president, and they are focused on making sure that when our government uses AI, including these emerging technologies, we do so safely, securely, transparently, and responsibly. So, consistent with the president's executive order on artificial intelligence signed last October, we recently announced our new chief AI

officer. You're going to get to meet David. And I thank David and all of our panelists who are going to share with you today about where we're going with AI. When new tools arrive at our doorstep, we test them, we improve them, and we deploy them to hopefully better serve humankind. There's a lot of risk with AI because if it's employed in ways that are not for the betterment of humankind, then it could be

disastrous. But the way we're doing it is the way that we employed technology for the very first step on the moon 55 years ago. And that's how, using every tool at our disposal, we're going to make leap after giant leap in the decades to come. And that's going to be how we

lead. And so I want to bring up our deputy administrator, Colonel Pam Melroy, a person you all know, whose resume speaks for itself: a space shuttle commander and an Air Force test pilot, and on top of that, a great team member, and, as we refer to ourselves as a crew, a great crew member. Come on up, Pam. Thank you so much, sir. It's not just an honor, it's a pleasure to serve. So thank you.

I'm really excited about today. You can probably guess, you guys all know, I'm really a nerd at heart. So moments like this are very exciting for me. So I just want to say thank you to everybody for joining us for this town hall. Of course, we're gathering today to discuss one of the most transformative and exciting frontiers in technology, artificial intelligence.

At its core, AI is an advanced form of statistics and probability that creates the capability for a computer system to perform complex tasks that have traditionally required human intelligence, from reasoning to decision making and even cool things like creative endeavors and art. At NASA, unsurprisingly for a science and technology organization,

this is not just a buzzword. We have already harnessed the power of AI tools to benefit humanity by safely supporting our missions and our research projects, analyzing data to reveal underlying trends and patterns, and developing systems that are capable of supporting spacecraft and aircraft autonomously. Unsurprisingly, in a poll across the federal government, NASA had more use cases than any other

federal agency. I can tell you, I want to give kudos to our Associate Administrator for Science, Doctor Nikki Fox. Five years ago, I heard her give a talk about the use of AI in heliophysics data. That blew my mind. It was awesome. So we're already doing this, and we're doing a lot of it. But as we push those boundaries and continue to find exciting new ways to use AI, we have to recognize the importance of responsible usage, like we do with any other disruptive

technology. So, as was said most recently by Spider-Man, with great power comes great responsibility. So we're totally committed to that. We have to ensure that our AI initiatives are guided by robust governance and protective measures. So our goal, of course, is to harness the benefits of AI for

the betterment of humanity. But we have to do the other side, which is to safeguard against the potential risks, which are unfortunately all too real, from unintended bias in data collection to things like data compression, which ends up resulting in less-than-accurate answers. The consequences can range from funny (think autocorrect on your phone) to very serious, impacting human health and safety. The administrator mentioned the

president's executive order. So as hard as we're leaning in on the technology, we must be just as steadfast in leaning in to formalize the processes and protocols for AI usage, to make sure that we're safely taking advantage of AI innovations across the agency. So having that governance structure is critically important. It provides us procedures and guidance from the agency.

And actually, when you put that kind of structure in, it can actually empower you, our workforce, and unleash your innovation in a way where we all feel comfortable we're doing it the right way. Our Artificial Intelligence Working Group has been championed by three powerhouse leaders at the agency: our Chief Technologist, A.C. Charania, Chief Scientist Kate Calvin, and our CIO, Jeff Seaton.

This working group has been looking at this executive order and developing recommendations in response to the directive that are very focused on NASA, on our mission, how to best enable our mission. The other exciting step that we took is we named David Salvagnini as our chief artificial intelligence officer, and I think his phone started ringing off the hook right away, maybe even before we announced.

This is really just to underscore that having a focus area and taking this aspect of governance seriously is really important to the leadership team, and it's important to all of us. It's just so exciting when you think about the future and all the interesting use cases that we have for AI. A pivotal aspect of our approach involves how AI can enhance collaboration within our workforce. That's a really exciting opportunity. Through communities of practice, working

groups, and other avenues, we aim to foster a culture of knowledge sharing about AI and collective learning. And we have seen a significant increase in the number of AI-trained employees. Yes, we do have formal training for this, and this number is poised to grow. And I encourage those of you who are listening who have not taken this training to look into it. This is all part of this commitment to responsible AI. And it goes actually beyond our internal operations.

We're looking at partnerships with AI leaders in the private sector and in academia, recognizing that it is incredibly important to collaborate to drive cutting-edge developments that impact the largest problems of our

time. So our participation in initiatives like the National Science Foundation's National Artificial Intelligence Research Resource Pilot is just one great example of how we're dedicated to also leveraging our partners and collaboration to advance AI on a much broader scale, impacting not just NASA's mission, but the whole country.

AI is going to help us in so many areas, analyzing heliophysics and Earth science imagery, probing the depths of space with our telescopes for new insights, having them work together to schedule communications on our networks. We all know our communications networks are so loaded with fascinating information, but we can barely squeeze it all in. AI can help with that and just all kinds of other exciting things, helping crews in the future on the way to Mars.

And really, just think: we don't even know yet what new insights we're gonna get by using these new techniques to look at old data in new ways. Again, by establishing a very robust governance structure and empowering you, we can harness that full potential to fuel our own missions, drive innovation, and continue to change the world. So as I close, I just want to emphasize it is a powerful, ingenious and very exciting

tool. But if we don't manage it responsibly, we're going to open ourselves up to a world of risk that jeopardizes our credibility and our mission. Fortunately, this is something we are very good at at NASA. We challenge ourselves to manage risk effectively, to push boundaries and be innovative, to boldly explore as a team and do it with the best possible risk management practices. That's exactly what we're going to do with AI.

So I'm excited for you to hear today from the panel, who really know what's going on, Jeff, A.C., Kate, and Dave, about how we're using AI at NASA, how the field is changing, and how you can learn more, educate yourself, and hopefully take advantage of this technology. So thanks for joining us today. Thank you, Deputy Administrator Melroy and Administrator Nelson. I hope you all are ready for what is going to be a great

discussion. We're going to go ahead and welcome our panelists to the stage. Joining us today for the discussion, we have Doctor Kate Calvin, NASA's chief scientist; A.C. Charania, who serves as the agency's chief technologist; and then we also have David Salvagnini, NASA's first chief artificial intelligence officer; and our Chief Information Officer, Jeff Seaton. Go ahead and give them a round of applause for joining us today.

Thank you all for being here. We're looking forward to having what will be an exciting and important conversation around artificial intelligence. I want to start by giving you a chance to talk a little bit about yourselves and your role as it connects to AI. So I'm Kate Calvin. I'm NASA's Chief Scientist and senior climate advisor. As Chief Scientist, my office is responsible for representing and enabling science across the agency, including the use of AI

to advance science. All right, I am A.C. Charania, NASA Chief Technologist in the Office of Technology Policy and Strategy here at NASA Headquarters. My job is to help shine a light on how we can be more innovative, specifically with emerging technologies like AI.

Good afternoon. My name is David Salvagnini, and I started here at NASA last summer as the Chief Data Officer, supporting the requirements in the Evidence Act, and have since assumed the role of Chief Artificial Intelligence Officer. So in both of those roles, I will be looking after NASA's journey as it relates to meeting some of the requirements from both executive orders and also the OMB guidance, but more importantly, doing what's right

for NASA and addressing our journey and facilitating our way forward as it relates to the use of AI in a responsible and ethical manner. Great, and good afternoon. I'm Jeff Seaton, the Chief Information Officer for NASA. And I've got responsibility for providing tools and capabilities that enable the work of the agency, leveraging IT, and also ensuring the security of our data and systems across the agency, and AI falls into that sphere as well. Thank you all.

And I want to start first with you, Dave. First of all, congratulations on your new role as Chief Artificial Intelligence Officer here at NASA. Would love to hear from you about what your role actually entails and what you're hoping to see as we move forward. No, thank you very much for the question. So just a little bit of context about this, if you're not tracking, and I'm sure many of you already are: the regulatory guidance coming from the

administration is not new. In December of 2020, they released the responsible AI executive order, which created the need for a responsible AI official within federal organizations, and that really went after the need to be ethical, responsible, transparent, safe, and it accounted for accountability as it related to how federal organizations use AI. Fast forward from there, we have a new executive order from last October, which now ups the ante

to some degree and says: Nope, a responsible AI official is not enough. What we'd like is a chief artificial intelligence officer. And that person is going to be responsible for ensuring the safety and rights of U.S. citizens and also really driving agencies towards more innovative

practices and risk management. And then, since that was issued in October, OMB released a memorandum on the 28th of March, which offered a lot of additional detail for how federal agencies ought to address that. So in part, my role is addressing those requirements from the executive order and the OMB memo.

So what does that mean? Well, that means, you know, establishing my role, establishing a leader who's ultimately responsible for this journey at NASA, and then really establishing some of those mechanisms that Deputy Administrator Melroy and Administrator Nelson talked about as it relates to how we manage risk associated with AI, but also, I would say, managing risk while also managing opportunity. So what do I do there? So I create situational

awareness. I would say the role is largely an orchestration and coordination role. I think of myself, if NASA were a symphony, as the conductor. And I'm harmonizing the various different instruments and sections of the orchestra in a way where we're all rowing in the same direction. We're situationally aware of what the other is doing, and we're capitalizing on that knowledge of each other's work so that we can build upon it as it relates to our specific

mission area. So there's that role, there's certainly the compliance role, and there is the role of standing up governance and doing so in a mission-informed manner, so it's not burdensome but a value add to the organizations that look to leverage it. There's a workforce component to this, in other words, dealing with: how do we equip the workforce to address the change in AI? AI has been around for years, but AI has also changed dramatically in the last 12 to 18 months.

And there are some responsibilities that we all have as it relates to how we use these tools responsibly, ethically, transparently, and so on. So that imparts the responsibility to make sure we're reskilling and upskilling the workforce to be aware of the change and understand their responsibilities. So I think that's a good synopsis of the responsibilities associated with the role. So I'll stop there.

And that really ties into what Pam and Bill mentioned about how NASA has been using AI responsibly and effectively for years. And Kate, I'd like to pull you in here. Can you share how AI has contributed and been applied to scientific research? Absolutely. So scientists have been using AI

for a long time. And one of the things that AI is really good at is analyzing large data sets, like the data that we get from our Earth-observing satellites or space telescopes or our other science missions. And I want to hone in on one of the ways that we use AI, with what scientists like to call anomaly detection or

change detection. Essentially, you look through data for something that looks distinct, a specific feature, and once you've identified that feature, you can count it, you can track it, you can avoid it, you can seek it, depending on what your goal is. So some of the ways we've used AI: one is to track wildfire smoke. We can teach the computer, this is what smoke from a wildfire looks like, and then look through all of our satellite imagery and find other

wildfire smoke and track that. Another way we've used it is to count trees. So if you teach the computer, here's a tree, it can find all of the trees. And it's really important to use AI in these cases because, for wildfire smoke and other disasters, what we really want to be able to do is act quickly. People need that information urgently, and the computer can do

it faster than a human can. In terms of counting trees, you know, it's not tractable for a person to count every tree across a country or a continent. So instead, we can have the computer help us, and then we can focus on what we learn once we know how many trees there are. So that's in looking at, you know, observational data sets. But we can also use AI to help improve models that do weather and climate forecasts in the future.
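
What Kate describes, teaching a classifier what a feature looks like and then scanning new imagery for more of it, can be illustrated with a minimal sketch. The image tiles below are synthetic stand-ins, not NASA data; a real pipeline would train on curated, human-labeled satellite patches:

```python
# Minimal sketch of "teach the computer what smoke looks like, then
# find more of it" using scikit-learn. The 8x8-pixel tiles below are
# synthetic stand-ins for labeled satellite image patches.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Fake training data: 500 tiles labeled smoke (1) or clear (0).
clear = rng.normal(0.2, 0.05, size=(250, 8 * 8))
smoke = rng.normal(0.6, 0.15, size=(250, 8 * 8))   # hazier, brighter tiles
X = np.vstack([clear, smoke])
y = np.array([0] * 250 + [1] * 250)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")

# "Change detection": flag new tiles the model scores as likely smoke,
# so a human can review them instead of scanning every image by hand.
new_tiles = rng.normal(0.6, 0.15, size=(5, 8 * 8))
print("smoke probability per tile:", model.predict_proba(new_tiles)[:, 1].round(2))
```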

And I focused a little bit on Earth science applications just because I think they're sometimes easier for people to relate to. But we use this throughout science. Pam mentioned heliophysics, but also in the Science Mission Directorate we do things like counting exoplanets. It's the same type of thing that we do with counting trees, but now we're looking out in the universe and seeing what else is out there. And A.C., I want to pull you in here and talk about the Artemis

missions. Can you talk about how AI is supporting NASA's efforts to put humans back on the Moon and then eventually bring humans to Mars? Yeah, thanks. As Bill and Pam mentioned, I think artificial intelligence and large language models can be used in combination with humans, not to replace humans. We've used AI for many, many years with humans to examine data and to make us smarter

about the universe. Relevant examples to your question about how we use artificial intelligence, large language models, and the like in terms of Artemis include helping us observe the Moon and Mars in terms of surface imagery and surface features. We can look at how we leverage these kinds of technologies and human-machine interfaces to help humans more intuitively and

better work with machines. We can look at how we use this technology to help us communicate with spacecraft at a large distance, to alleviate and accelerate mission operations, not to replace decision making but to accelerate our missions. We can also look at machine vision technologies and capabilities that AI enables to help our rovers on the surface of the Moon or Mars traverse farther and faster.

And finally, I'm also very excited about the potential for AI agents, large language model agents, to help us optimize, understand, manage, and observe these complicated interconnected networks that we're establishing. For instance, in the absence of crews, can these technologies help us observe and maintain rovers, space stations, habitats, and the like throughout the solar

system? And Jeff, you know, we hear a lot about the concerns and the risks across the industry when it comes to AI. Can you talk about the use of the tool right here at NASA and what we're doing to kind of eliminate that risk? Yeah. But first, let me take a step back, because you heard the administrator: he said that it's a new generation of capability. And it definitely is. AI is not new. We've been using it, as you've heard, for years in our existing

missions. But there is something that's new, right? And it's becoming more accessible to all of us. And if you think about generative AI, one of the questions is: what is it? Well, think about when you were a kid. If you're like me, you liked logic puzzles. And I can remember these logic puzzles that say, here are five numbers, what's the sixth number in the series, right?

Generative AI is a little bit like that, except instead of this simplistic series of five numbers, it's this trove of information that we can now access because of the networks that are connecting systems worldwide. So instead of a simple series, you have this volume of information that we can learn from, that the computer

algorithms can learn from. And because of the progress we've made over the last 20 years in natural language processing and understanding, we're able to take human language and prompts and ask questions and have mathematical models, and it's math, it's not magic, mathematical models that can then generate responses to those questions. And so there's a huge promise in that.
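
Jeff's what-comes-next analogy maps onto next-token prediction. Here is a toy sketch of that idea, counting which word tends to follow which in a tiny corpus; production generative AI uses neural networks trained on vastly more data, but the learn-then-predict loop is the same:

```python
# Toy next-token predictor: count which word follows which, then
# generate text by repeatedly sampling a likely continuation.
import random
from collections import Counter, defaultdict

corpus = ("we choose to go to the moon in this decade "
          "and do the other things not because they are easy "
          "but because they are hard").split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(word, n=8):
    out = [word]
    for _ in range(n):
        options = follows.get(out[-1])
        if not options:          # no known continuation: stop
            break
        words, counts = zip(*options.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(generate("because"))  # e.g. "because they are easy but because they are hard"
```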

But you asked about some of the challenges, the risks. And one thing I want to say is that these are computers and programs and algorithms. We have existing processes and approaches to appropriately secure data, computers, programs, and algorithms. So we're not going to create new processes and policies to protect our systems and software. We're going to use the ones that we already have.

It goes back to understanding the sensitivity of the data, goes back to understanding where that data is going, how it is being transmitted, what systems it is connecting to. And we're going to be able to leverage those same processes to make sure that you're able to access these new emerging capabilities in a safe and secure and responsible way. So from one standpoint, there is a lot that's new. From a security-of-the-system standpoint, there's not a lot

that's new. We're just going to apply our existing processes to be able to do that and to make these tools available to you ultimately. And coming off of that, with so many tools that are going to be available, Kate, can you talk a little bit about the importance of having humans actually review the work and the research that AI is doing? Yeah, so human review of AI products is really, really

critical. AI can make mistakes, and we have to look through its products and check them and make sure they're robust. And we've been using AI in science and engineering for a long time, but scientists and engineers have processes to check the products.

So we do things like code review, where if you use something to generate computer code, you have processes for reviewing that code, for checking that it's accurate, for testing it. We use peer review in the sciences, so we actually peer review the document and get people looking at it and making sure that it's correct.

And we need to think about those same things when we're looking at AI. There might be some cases where you look at an AI product and it's clear that there's a mistake in it. So Pam was talking about autocorrect earlier. If you see something that's autocorrected and it's not the word you meant, you'll know it, right? You know what you wanted to say,

you'll see that it's incorrect. But when you're generating a whole paragraph that's maybe outside your expertise, you still need that review because you have to check that it's correct. AI only knows what it's been trained on, and it could be trained on things that are out of date. It's also combining multiple data sources, and sometimes it does that in ways that are really powerful and helpful. Sometimes it makes mistakes and we have to check it. So human review is really

important. And you know, Jeff, we received a lot of interest in generative AI. I know you spoke a little bit about what that is. Can you go a little deeper and talk about NASA's current policy and what we can expect to see in the future as these tools are evaluated for safety? Sure. And following on to what Kate said, I think it's good to maybe create two different

buckets. One is the AI that's built into our missions, where you have program and project processes that are doing the review and evaluation of how we're leveraging the capabilities. And then we have what is really emerging now, and these are the general tools that will be accessible to so many of us, to maybe do some things in ways we never even imagined possible. And so I think I'll focus on that latter category right now.

You probably, if you listen to the radio or listen to podcasts, can't go 30 minutes without hearing an advertisement from some company saying they're now the AI company. Everybody is an AI company. And we're going to see AI capabilities built into many of the tools that we're already using today here at NASA. We do a lot of our work, from a kind of business product activity standpoint, with Microsoft tools.

We use others, but Microsoft is definitely one of the primary providers of some of our capabilities. And you've probably heard of things like Copilot. Well, Copilot is an umbrella term for a number of different products and offerings and integrations that Microsoft is building into its product suite. Similarly, many of you might use products from Adobe to create images or videos or audio, and they're building AI capabilities into their products.

Many companies are doing that, and we use many, many vendors. So one of the things that we are looking to do is evaluate how those vendors are building these capabilities into their products. And it can be a little bit frustrating sometimes, though, because at home you might have access to some of these capabilities today because they're in the generally available products, and within the government, we have a few more requirements that we have to make sure are

met before we use them. One example would be that in general use, a provider might be using data centers that are worldwide, but for the government, we typically have to use data centers that are within the continental US. And so we have to wait for providers to create a version of their products that does meet the federal government requirements.

So we're working with many different providers to understand where they are, understand what capabilities they are building in and how they are protecting those capabilities to ensure that we're being responsible with the data that we have, the data that we are working with. And while all that is happening, there's safety, right, Dave?

That's a priority as well. Can you talk a little bit about how the workforce can actually play a role to make sure that safety is made a priority in an effective way? Yes, certainly. And I would say it goes beyond safety to responsible use generally, right? So, as Jeff and Kate and A.C. talked about, there are many opportunities in the use of AI. I almost wish artificial intelligence was not referred to as artificial

intelligence. I wish it was referred to as assistive intelligence. In other words, it is my digital assistant. It is that resource that I now have access to that can help me in my decision process. And Kate talked about this quite a bit. So I think it's incumbent upon all of us: the AI is not accountable for the outcome. The person is, the human is, right? And the vendor who may offer an embedded AI capability as part of a product suite is not accountable either.

The user, the person who leverages that tool, is responsible. So we have to first own that, and we have to understand what responsibilities come with it. You know, I was thinking about this a little bit earlier, and I was thinking about hurricanes and the weather forecasts that are released as they relate to hurricanes, and think about the validation that goes into sort

of hurricane forecasting. You know, you've all seen the forecasts where they'll show the projected track and there will be multiple lines, and each one of those lines is produced by a model. Why do they show multiple lines? Because they're cross-correlating the outcomes and trying to ascertain: what do we reasonably believe is an accurate projection, and can we assert with confidence what that projection is? Well, it's based on the aggregation of those outputs.
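
The aggregation Dave describes can be illustrated with a minimal ensemble sketch; the model names and values here are invented, and real forecasting uses far richer statistics:

```python
# Sketch of the ensemble idea: several models each project a storm
# position; the aggregate is a consensus track, and the spread between
# models is a rough confidence signal. Values are invented.
import statistics

# Each model's projected landfall longitude (degrees W), 72 h out.
model_tracks = {
    "model_a": 81.2,
    "model_b": 80.7,
    "model_c": 81.9,
    "model_d": 80.9,
}

values = list(model_tracks.values())
consensus = statistics.mean(values)
spread = statistics.stdev(values)

print(f"consensus landfall: {consensus:.1f} deg W")
print(f"model spread: +/- {spread:.1f} deg (wider spread = less confidence)")
```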

So again, human judgement, not AI doing our job for us. So I think that's the important part. So then, you know, how do we be safe about this? We understand our responsibility as the ultimately accountable person as it relates to the use, as it relates to our work products. And then if we happen to use AI as part of the generation of a work product, that's fine, but just understand its capabilities and limitations.

Another thing I'd like to talk about is, you know, this notion of hallucinations, where maybe the AI gives you a false answer. Maybe it generated that false answer, but that can be somewhat obvious and more easily detected. I would be more worried about errors of omission. So what about the AI that gives you an answer, but there was a whole bunch of data that it

actually didn't reference. And maybe if that data were referenced in the response from the AI, the answer would have been different. So again, it's incumbent upon us as we think about our use of these tools. This digital assistive technology is going to be in every part of our work day. It's already in many parts of our life. I drove to work this morning.

I used a navigation application that, you know, referenced traffic, OK. And by the way, I actually didn't take the recommended course. I applied judgement and I said, well, I know the flow of traffic. I know preferred courses and routing and so on. I'm gonna take the second course. I applied judgement. So I think that's the key here. We're ultimately accountable.

This is assistive technology, and, you know, we're not outsourcing our thinking to the AI. We're applying judgement and using it as data points to enable us to make good decisions. And I'm glad you touched on AI hallucinating and fabricating sources, because that was a concern we heard from a lot of people here at NASA. And A.C., turning to you: what does growth in this field look like when it comes to missions and innovation and

opportunities here at NASA? I think there are a few aspects to this. One is inspiration, inspiring us. Technologies of all types inspire us to think differently, innovatively. I also think we should leverage these tools to help all of our directorates, including mission support, in terms of helping us think about how we accelerate our day-to-day practices separate from science and engineering. And then finally, in some sense we are an engineering and science agency, but also a data

agency. And how can we uncover new knowledge from the various data sets we have historically and are collecting today? Our missions of the future are going to provide even more data downstream that we can leverage and analyze, from both human and robotic missions across the board. We should be working with experts both internally and externally to help us manage and ride this wave of innovation. And finally, I think about this town hall. I was mentioning to

someone earlier this morning: these are the kinds of town halls I came to NASA for. These are the kinds of town halls NASA should be having in the 21st century, in terms of leveraging technologies, leading the way in leveraging our data, and showcasing how we use it to other government agencies as best practices for leveraging this

technology. So it's a pretty exciting time at the agency, figuring out how we use these safely, responsibly, but innovatively, to achieve our mission faster and, I think, in more bold fashions. If I could add a little bit to what A.C. said too, because he talked about applying some of these technologies here in the mission support realm, and I mentioned we use Microsoft: today, if you're in a Teams meeting, you can already get an automated

transcript generated, right? It's listening to the audio and doing a pretty good job of transcribing that meeting. Well, one of the things that I would love to see happen, and I hope happens in the near future, is that we can take that transcript and say, give me a one-page summary of that one-hour meeting, and the system gives that to us automatically. Now, OK, we'll have to evaluate and see whether that's really

valid. But hopefully, as the technology matures, it will be a pretty valid synopsis of an hour-long meeting, generated almost in the blink of an eye. We've already been working to automate many processes over the last years. We've saved tens of thousands of hours using existing technologies to automate manual processes, and that work is ongoing in the Mission Support Directorate at the NSSC, working with various

organizations. And now, if we are able to apply these, you know, evolving AI capabilities, I think we'll be able to automate much more. And I don't know about you, but I don't hear a lot of people saying, I've got so much free time, Jeff, give me more to do. What I hear is, I'm overworked,

help, right? And so applying some of these technologies to address some of the mundane tasks, to free us up to do other tasks that we would love to get to but don't have time for because we're wrapped up in the bureaucracy or the process, right? I think there's a lot of opportunity there. So it's not about replacing the people, it's about enabling us to do more than we can today. And I'm really excited about that.

And I want to pull the audience into the conversation here, but before we do, I just want to see if there's anything else that you all wanted to touch on or mention. Well, here I am, the Chief Data Officer and now Chief Data and AI Officer here at NASA, and I've talked less about data than anyone else on the panel. So I want to say that, you know, in Jeff's case of, let's say, the transcript of a meeting, it's a very, very narrow data set.

You know, that's a pretty easy use case when you think about it. What's much more complicated is a large corpus of holdings, where you're now asking generative AI to comb through that corpus and come up with some kind of logical conclusion. To be able to do that effectively, there's a practice of data management that's actually critical to the success of the algorithm. You know, eloquent algorithms can really go bad with poor data.

So we also have a responsibility, when we're thinking about our use of AI, to understand the data that enables the AI to give us the answer that it's providing. Is that data complete or not complete? Is it reliably sourced or not? Do we understand the origins of the data? Do we understand or have high confidence in the accuracy of the data and its completeness?
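
A minimal sketch of that kind of pre-use data audit follows; the field names, records, and thresholds are hypothetical, purely to illustrate the completeness-and-provenance questions Dave lists:

```python
# Sketch of a pre-use data check: before feeding records to an AI
# pipeline, ask "is it complete, and do we know where it came from?"
records = [
    {"id": 1, "value": 42.0, "source": "instrument_a"},
    {"id": 2, "value": None, "source": "instrument_a"},   # missing value
    {"id": 3, "value": 17.5, "source": None},             # unknown origin
]

REQUIRED = ("value", "source")

def audit(rows):
    problems = []
    for row in rows:
        for field in REQUIRED:
            if row.get(field) is None:
                problems.append((row["id"], field))
    completeness = 1 - len(problems) / (len(rows) * len(REQUIRED))
    return completeness, problems

score, issues = audit(records)
print(f"completeness: {score:.0%}, issues: {issues}")
# If completeness is low or provenance is unknown, confidence in any
# AI output built on this data should be diminished accordingly.
```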

Because if the answer to those questions isn't yes, then our confidence in the AI outcome should be diminished. Yeah, I'd just add one thing. So the four of us have had a lot of opportunities to talk about AI over the last six months. And I think all four of us are really excited about the opportunities, the things, like Jeff said, that AI can do to help make our jobs easier so that we can focus on the things that are new, that

are innovative. And I think we really are. We're here to help. Yeah. And I guess to add on to that a little bit about maybe what you can do. You asked a policy question earlier, and I don't think I really answered it, right? Because last year I put out an agency policy on generative AI and said, hey, at this point we don't have approved tools that are in the environment, so no, we shouldn't be installing things and using them. And we're going to get to that. And we're still, you know,

working in that direction. But one of the things that was noted is, hey, on your personal computers, with, you know, publicly available data, you can access these tools. And so that's one thing I'd like to say right now: as we're continuing to move forward on bringing tools into the NASA environment, I would encourage you to play right now. You can on your home computer, and actually on your NASA computer, right? We're not blocking certain sites. You can go to

copilot.microsoft.com. You can go to ChatGPT today, and you can use your NASA computer. It's not a blocked site. And you can just type in queries and questions, you know: I'm taking a vacation this summer, what's a five-day itinerary for Mount Rainier National Park? And see what it says, right? Start to experiment with some personal questions and things. Just be aware that once you send something out, it's out. It's not in your possession anymore.

That's why we're saying, hey, no NASA sensitive data, because it's outside of our control. But on my phone, alright, my personal phone, I've got a couple of these apps installed, and I asked ChatGPT to explain generative AI. So let's have a little fun. It gave a good explanation. Then I said, OK, tell it to me as a limerick. So here we go: There once was an AI so keen / It learned from all things it had seen / It wrote and it drew / Made new from the old too / A marvel of

tech quite serene. And I said, OK, let's tighten it up a little bit, how about as a haiku? AI learns from all / Creates new from what it knows / Art and words unfold. So I would just encourage you to experiment on your own, you know, maybe take a little bit of work time to go to some of the available sites and start playing with some of the

prompts. Because one of the things that we see is that the questions we ask actually have a pretty significant influence on the results that we get out of these generative AI tools. And so prompt engineering, right, what questions we're asking, is going to be important as we move forward. So I would just encourage everybody to get a little smarter by experimenting a little bit, using non-sensitive data, right?
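
For readers who want to try the same prompt-variation experiment programmatically, here is one hedged illustration using the publicly available OpenAI Python SDK (pip install openai). This is just one public tool, not a NASA-endorsed one; the model name is an assumption, and, per the guidance above, only non-sensitive prompts should be sent:

```python
# Illustration of how prompt wording shapes output: the same topic,
# asked three ways. Requires an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

prompts = [
    "Explain generative AI.",
    "Explain generative AI as a limerick.",
    "Explain generative AI as a haiku.",
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # assumed model name; substitute any available one
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {prompt}\n{response.choices[0].message.content}\n")
```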

Because pretty soon we're gonna have tools at your disposal that you'll be able to use in our everyday work with sensitive data, internal to the agency or in approved cloud capabilities. And the more you know now, the more prepared you'll be for that future that's on the horizon. And if I can add one more thing: Jeff and I are part of the NASA 2040 technology workstream. You may have heard of NASA 2040, of how we look at how the agency

should operate in the future. And on the technology workstream, we're thinking about how we work digitally, leveraging data sets and AI. And Jeff and I are committed to leveraging that workstream that the agency's provided us under the guise of 2040 to help David and others using these technologies. So I think there's a very great synergy, and we recognize it, between this AI initiative and the NASA 2040 initiative, and how we can leverage 2040 to accelerate everything we've talked about today.

Thank you all. And we're not done yet. We've got a lot more. If you're in the room and you do have a question, we have a microphone up front and we would invite you to come up and ask it. So while we give folks a chance to do that, we also received a lot of questions at conferences.io, so I'm going to go ahead and give you a few of those so we can get started. One of the questions: when it comes to ensuring that AI is being used in

an equitable manner, how will NASA ensure the AI used by the agency doesn't develop unforeseen biases? I can start with that question. So I love the bias question, only because we often think about bias as a negative, and I would offer this as an analogy, right? Think about an autonomous vehicle, and think about the algorithm having a bias. Do you want the autonomous algorithm that drives your car to be biased towards speed and

performance or towards safety? I think everyone would say safety, right? We want some margin, we want some threshold. We want to know that we're going to be safe in the vehicle. So first of all, we have to recognize that there's positive bias, right? And it's a very well-worded question, because it says unforeseen bias. So what about the bias that we don't understand? And by the way, with AI, you can adjust that bias. So I can crank up or down the

margin of safety, OK? I can be very deliberate about it. And this applies to so many different use cases. So bias is a good thing when applied appropriately. Now, unforeseen bias could be something like an outcome related to some of the concerns the administration has referenced as it relates to privacy, or, let's say, underserved communities being underrepresented in the outcomes of AI. How do we address this?

You know, another thing could be age discrimination. Think about a data set where the response to questions about people who are, let's say, older is different than the response to questions about people who are younger. You have to understand the data, and you have to be on the lookout for those biases, and you

have to test. So again, I go back to the accountability, and one of the things we have to be careful about as we onboard various different AI technologies that are coming from our vendor partners is: do we really understand how they work? Have we thoroughly tested them for bias? And do we, again, understand the data? Is the data leading the algorithm toward a biased

outcome or not? So that really is what it comes down to. We have often thought about onboarding technology from a security perspective, in cyber: is it safe from a cyber perspective? Now we have to add a set of attributes around, OK, is the AI protected against bias? There are things like model drift, where AI can actually drift based on its use over time, and we have to be on the lookout for that as well.
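
One simple form of the bias testing Dave describes is comparing a model's favorable-outcome rate across groups. A minimal sketch on synthetic records; real audits use far more rigorous statistics and fairness metrics:

```python
# Sketch of testing for age bias: compare a model's favorable-outcome
# rate across age groups. Records are synthetic and illustrative only.
records = [
    {"age": 25, "approved": True},  {"age": 31, "approved": True},
    {"age": 29, "approved": False}, {"age": 34, "approved": True},
    {"age": 58, "approved": False}, {"age": 63, "approved": False},
    {"age": 61, "approved": True},  {"age": 67, "approved": False},
]

def approval_rate(rows):
    return sum(r["approved"] for r in rows) / len(rows)

young = [r for r in records if r["age"] < 40]
older = [r for r in records if r["age"] >= 40]

gap = approval_rate(young) - approval_rate(older)
print(f"young: {approval_rate(young):.0%}, older: {approval_rate(older):.0%}, gap: {gap:+.0%}")
if abs(gap) > 0.2:   # threshold is illustrative, not a standard
    print("large outcome gap across age groups; investigate the data and model")
```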

So part of what we'll be doing, and you'll see announcements soon, is the Summer of AI, which is a training initiative where everyone at NASA is going to have an opportunity to learn more about AI. It's literally a campaign. It's going to be kind of a surge, if you will, of training

opportunity. So stay tuned for those announcements, but I would encourage you to participate in those courses, learn about bias, learn about how you can prevent some of the bias that would be unforeseen, and learn about the data that enables an AI to do what it does. And if you want to take age bias out of an algorithmic outcome from an AI, maybe you take the age parameter out of the data set, but you

have to be thinking about that. Or, if there are other ways where you can weight the outcomes differently as it relates to age in that particular data set, that would be an alternative as well. But again, it's about exercising judgment and understanding the technology, not just treating it as a black box and assuming, well, it seems right most of the time, it must always be right.

Well, not necessarily. We've also heard from you all today that there are so many organizations and tools out there. This next question: how will NASA's AI tools differ from AI tools used by the public? And is NASA looking to partner with any of the leading private AI organizations? Maybe I'll start with that. In terms of the organizations, yes. I mean, I mentioned we work with many, many different vendors and providers of

products. And so there are conversations ongoing with some of those providers already in terms of what their plans are and how those can potentially roll out into the government NASA environment. As I mentioned, it can be a little bit frustrating, because it takes longer for tools, the kind of publicly available, generally known tools, to get into our hands in the government environment. We are working on that front with many, many different providers.

So there's that. And then there's another piece, going more towards the mission technical side of things. We do have established, at a FISMA low level in terms of the data sensitivity, a generative AI large language model capability that's being tested out by some of our folks within the agency. And we anticipate that by mid to late summer we'll have that environment rated at a FISMA

moderate level, so we can start leveraging some sensitive internal data and experimenting. And so that's another value, I think, in terms of engagement: getting engaged with the AI community that is being formed and that's growing, right, to be aware of some of the capabilities that are emerging, and also to have your voice in, hey, what are some of the things we should be doing?

Right, because the four of us up on this stage are not the know-it-alls as far as what's happening with generative AI today. We've got a lot of smart people across this agency, and I think together we can guide some of the investments. We've got limited time, limited resources, we all do. And so how do we apply those most effectively to advance this, you know, kind of journey that we're on? That's going to take involvement from all of us.

And we have a question here in the room, sir. Hi, Moon Kim from OCFO. Thanks for this, and super excited to use gen AI, or AI, going forward. Coming from a budget perspective, since I'm from OCFO: AI is not cheap, right? It costs. GPUs, cloud environments, they're expensive. In the middle of a budget-

constrained environment, do you foresee any of the AI tools being openly available for everybody at NASA, regardless of how much budget you have in your division or your office? And do you see any tools that might be, not free, but attached to every single computer, like Excel or Office? Any plans to navigate around this constrained budget? Good question. I guess I'll start and see if you all have anything you want to add to that.

Margaret, the CFO, is in the room somewhere, I think, right? I saw her walk in. There she is, up front. That was a question that you planted, right? But that's a great question, because we all know that budgets are tight, and companies need to make money, and so they'll be looking to leverage these new capabilities to make profit. Sure, that's what they should do. So that's a valid question. What we need to ask ourselves is the value of these capabilities.

So back to what A.C. mentioned about 2040 and the tech workstream: one of the things we're taking a look at in that workstream is, hey, where should we be investing more in technologies that can actually enable the NASA mission to be successful, maybe more rapidly, maybe in different ways? And so the investment question is a very real one. I think as an agency, we do need to invest more in our technology, our foundational capabilities, and some of these advancing capabilities.

And so honestly, that's a leadership conversation that we'll be having, right? As we take a look at some of the products that we'll be rolling out and we see the cost models, my guess is the cost models are going to continue to evolve. That's what's happened in, you know, the cloud world for the last five to ten years. The business models have evolved and changed, and we've had to try to understand and adapt to them. That's going to happen in the

generative AI space as well. So yes, I think we need to invest. Yes, I think there needs to be a set of capabilities that are just broadly available to the NASA workforce. Go back to my example about the summation of an hour-long Teams meeting: if that is a capability that rolls out, I don't think that only the three people who can pay for it should have access while everybody else goes without, right?

So I think there should be, and there will be, a general level of capability that we see value in providing to the broad workforce. And then I think there will be communities that will say there's value in investing in this tool that the whole agency doesn't need, but this community does. And so we'll have to take a look and understand what's available to us and how we optimize the

investments we make. Yeah, just following on Jeff's statements on 2040, to give you a little bit of insight: you know, we're having conversations with senior leadership on these technology investments, on how we work digitally. So we've divided those future investment opportunities into various streams, from cybersecurity to data management to AI, and we're having conversations with Jeff and David and others to say, in those streams, what are some investments over the next few years we need to start

making? And so I'm committed personally, along with Jeff and others, to make sure that gets a proper voice within the 2040 environment, and they're offering that to us. So I think that's a very positive sign of recognizing, Jeff's got his budget over the next few years, but how can we enhance it? How can we enhance directorate funding to enable these tools to come online in all these various streams that are pretty important to how we work digitally?

So that is going on in terms of those budget conversations, I think, and those investment opportunities. Thank you, super excited. I might share one other thought, and that is, I talked earlier about, you know, shared situational awareness. The thing is, let's not duplicate effort.

So if we know that one organization is exploring an AI capability to enable some part of their business, you know, having that situational awareness and not having to recreate that elsewhere across NASA will go a long way to helping us be efficient about how we pursue this technology. And I think there's a governance activity related to

that as well. So when we think about shared awareness, it's about getting us all rowing in the same direction as it relates to our pursuit of AI, as, you know, Team NASA, not as individual organizations. Yeah. And I think that underscores a good point, Dave, that I want to mention, because Dave, in his new role as Chief AI Officer, came in working in my organization as the Chief Data Officer, both data and AI. I think that's an apt

description of Dave's role. He's trying to facilitate and coordinate for the benefit of the agency, right? So that awareness, that understanding, pulling people together to have the shared conversations is, I think, a key piece. And then it goes back to what I mentioned about community: getting folks involved, being part of those conversations, and helping us to frame up where we're going as an agency is an opportunity I think that many have. And we have another question

here in the room. Hi, Jenny Mottar, art director for NASA Science. I work a lot with the creative community, and we've noticed an uptick in generative AI imagery falsely attributed to NASA appearing online and in stock libraries. And I'm just curious if there are any plans for authenticating NASA imagery to protect our credibility, or if that's a worthwhile investment of time and energy. Thank you.

So, there is a concern, and everyone has probably heard the term deepfakes, right, where you see the falsification of imagery, and even the fact that AI can create an image of something that actually didn't occur, a representation of something that

didn't occur. The good news, at least right now at this moment in time, is that AI is actually pretty good at detecting that, better than maybe a human might be, because there is sort of a footprint associated with that type of misuse, if you will, of AI. There are ways in which we can validate our content. And I think this gets to also being careful about our security

and where we post content. And you know, if you want to find an authoritative representation of NASA imagery, I would probably go to NASA, as opposed to maybe a third party where there's a potential that something could have happened along the way. Other than that, I'd say there are other, more sophisticated ways of hashing and encrypting and doing other things with imagery, digital rights protection types of technology.
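
The simplest of the hashing techniques Dave alludes to is publishing a cryptographic digest alongside official imagery so anyone can verify a copy is bit-for-bit unaltered. A minimal sketch, with a hypothetical filename:

```python
# Sketch of hash-based provenance: publish a SHA-256 digest with an
# official image; any edit (including AI manipulation) changes the hash.
import hashlib
from pathlib import Path

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

official = "apollo11_flag.jpg"   # hypothetical filename
if Path(official).exists():
    print(f"{official}: {sha256_of(official)}")
# A downloader recomputes the hash and compares. Stronger schemes add
# digital signatures or content-credential standards such as C2PA.
```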

We will have to look at that in our future, as this is sort of a fast-evolving area. So your point is very well taken, and we'll have to keep an eye on it. Yeah. And I'd say it's early days with the tools too, and they are evolving. So if you go back just a few months, some of the generative AI tools that were out there would produce results, but they wouldn't tell you how they generated those results.

Where did that come from? And now some of the tools will note that an image is AI-generated, or they will reference, at the end of the output, the three or four primary sources that were used to derive that result. So the tools are evolving too, I think. So for some of the concerns that we have that are

legitimate, right, I'm hopeful that the organizations developing these tools will help with some of the solutions as well, so that we won't have to create them ourselves necessarily, but we'll be voicing the concerns. And just one last thing to add to that: I think this is an ongoing discussion, you know, throughout the country and the world, about what we do about this and how we identify it. And it's something that we're tracking.

I've been following a lot of the university professor discussions around how you identify AI. And there are these AI tools, and there are also all sorts of things to look for that identify it. And we're also establishing these methodologies, like what Jeff said, where you actually acknowledge your use of AI. And so I think we'll keep tracking those conversations, you know, across the US government and with the university community.

And this will evolve over time, but it is something we're very aware that people are concerned about, and we're watching. Thank you. And I want to ask you all a question that's coming from conferences.io: is there any intention of obtaining a closed-off deployment of an LLM that could be used with internal data? Yeah. And I touched on that a little bit earlier. We do have one that's rated at the FISMA low level, and we're working to get that to be able to handle

some sensitive data. I will say that's not at a production capability level, though. It's in sort of a, you know, AI early adopter, expert experimentation phase right now. So we need to take a look, within the agency, at what kinds of capabilities we need, we want, and we invest in, going back to that resource question, and then figure out what that looks like in terms of deployment. But that's not to prevent experimentation.

We have a lot of great mathematicians and computer scientists within the agency, and these large language models, some of them are openly available, right? So we might be doing some internal experimentation in various organizations as well as we go forward. But I do think it will be important to have a kind of inside-the-wire capability that we're experimenting with. And then in some cases, for our missions, we'll be deploying

those. I think the other thing I would add about the experimentation piece of it, why that is important: our contractor community and academic community are using these tools, and if we want to be smart buyers, we need to understand how these tools operate, even in experimental mode if not in a production mode. So once again, I think it's important to have that inside-the-wire capability for multiple reasons.

And then I might just add one other thought, and that is, from a foundational model perspective, I would fully expect that there will be foundation model development specific to NASA use cases. I was at a conference earlier this week where a colleague from DoD and the intelligence community was talking about some of the concerns that they have with some of the out-of-the-box models and not really understanding the underlying process within the model itself.

And they were actually talking about perhaps developing models for themselves for that very reason: if we develop it, we understand how it works. If we buy it, we may not understand it to the same degree. So again, it's an emerging area. And this is an area where we can learn across mission directorates, because the Science Mission Directorate has done some work with foundation models and external partnerships to build them. Thank you all.

Anything else that you'd like to add or mention? I'll say that it's exciting. There's a lot going on. Tomorrow, actually, we have our inaugural Artificial Intelligence Strategic Working Group session kicking off. That's at 2:00 PM Eastern. Organizations have been asked to identify a working group member. This is the beginning of our AI governance here at NASA. What we wanted to do is, we didn't want to start at the top and work down. We wanted to start at a lower level and work up.

The other thing we're going to be doing at that AI strategic working group is a spotlight series, where we highlight various different activities that are occurring across NASA and make people, again, situationally aware that, oh, Ames is doing this, Langley is doing that, Goddard is doing something else, oh, science is working in this particular area.

That shared awareness will go a long way to helping us understand not only the opportunity space, but also what the technology can do, to understanding the technology better. So there's that. I already mentioned the Summer of AI as an opportunity. So I would say get involved there. On Teams, you'll find an AI community of practice. Feel free to participate, join it. That community of practice is as robust as the people who participate and post and share and so on.

So lots of opportunity. Get to know your AI strategic working group member for your organization and ask them how you can help. And then please pursue the training. I would love the statistics on the AI summer of learning to be off the charts, as far as how many people consume that training and how many courses people actually complete throughout the period. So thank you. Yeah. And I was at a conference recently, and this is going to

build on what Dave just said. The discussion was on AI, and there was a physician, a surgeon, on the stage, and she was asked: is AI going to replace you? Is AI going to replace doctors and surgeons?

And her response was great. She was like, I'm not worried about AI replacing me, but I do think what's going to happen is surgeons and doctors who don't learn to work with AI are going to be replaced by surgeons and doctors who have learned to work with AI. And so I think that's true for us: the future is going to involve us continuing to adopt and learn new approaches and leverage

capabilities and technologies. And so I think it is incumbent upon us, to what Dave just said, to take advantage, to learn, to see how we can grow personally, how we can maybe be more successful and capable at our jobs because we're learning about these new technologies. So I would just encourage all of you to play, experiment, have those conversations, and take some training classes over the next several months.

And I'm excited, you know, about what will be happening within NASA maybe six months from now because we're taking advantage of some of these capabilities. I'd like to thank our panelists. Thank you all for joining us today. And thank you to everyone who participated in today's town hall. It has been an enlightening conversation. And as we continue to explore how artificial intelligence is moving NASA forward, NASA employees can continue to engage.

We had a lot of questions that we did not get to, but we want to invite you to check out your point and continue to have the conversation. And thank you again. And we look forward to having more conversations around AI. I'm Melissa Howell. Thanks for joining us, everyone. Hey, thank you so much for listening today. I really do appreciate your support.

If you could take a second and hit the subscribe or the follow button on whatever podcast platform that you're listening on right now, I greatly appreciate it. It helps out the show tremendously and you'll never miss an episode. And each episode is about 10 minutes or less to get you caught up quickly. And please, if you want to support the show even more, go to patreon.com/stage Zero. And please take care of yourselves and each other, and I'll see you tomorrow.

Transcript source: Provided by creator in RSS feed.