
NASA News Conference on Intuitive Machines' First Lunar Landing

Feb 26, 2024 · 1 hr 18 min

Episode description

Intuitive Machines' IM-1 mission made history on Feb. 22, with the first successful Moon landing by a commercial company. This televised news conference will discuss details of Odysseus' landing as part of NASA's CLPS (Commercial Lunar Payload Services) initiative and Artemis campaign. Participants from NASA and Intuitive Machines will discuss next steps for NASA science instruments aboard, as well as details of the landing, which made last-minute use of NASA's precision landing technology demonstration, NDL, or Navigation Doppler Lidar. Participants in the news conference include:
• Joel Kearns, deputy associate administrator for Exploration, Science Mission Directorate, NASA Headquarters in Washington
• Prasun Desai, deputy associate administrator, Space Technology Mission Directorate at NASA Headquarters
• Steve Altemus, chief executive officer and co-founder, Intuitive Machines
• Tim Crain, chief technology officer and co-founder, Intuitive Machines

Transcript

Good afternoon and welcome to NASA's Johnson Space Center in Houston. I'm Nilufar Ramsey with NASA Communications. Thank you for joining us. On February 22nd, Intuitive Machines' IM-1 mission softly landed in the South Pole region of the Moon near Malapert A. The lander, named Odysseus, completed a seven-day journey to become the first US spacecraft to soft-land on the Moon in more than 50 years.

Joining us today to provide insight on this historic mission and to answer questions, we have Steve Altemus, co-founder and CEO at Intuitive Machines; Joel Kearns, deputy associate administrator for exploration, Science Mission Directorate at NASA Headquarters in Washington; Dr. Tim Crain, chief technology officer and co-founder at Intuitive Machines; and Dr. Prasun Desai, deputy associate administrator of the Space Technology Mission Directorate at NASA

Headquarters. First, we'll start with some initial remarks from our briefers before opening it up for questions. We'll be taking your questions on our phone bridge this afternoon, so if you've joined us today, please press star one to add your name to the queue and ask your question. We'll now begin with opening remarks from Steve. Thank you, Nilufar. Well, hello everybody.

I reflected before we came into the briefing studio this afternoon that this is the first briefing in this room about being on the surface of the moon in about 52 years. So that's quite incredible, and it's a pleasure to be here. Intuitive Machines' Odysseus lander landed yesterday at 5:24 p.m. Central Time. We did have a stable, controlled landing and a safe, soft touchdown.

I'll give you a little bit of a description today about the state of Odysseus, or Odie, and its attitude on the surface, and what you can expect from it over the coming days. It's pretty incredible. It was quite a spicy seven-day mission to get to the moon, and I'll give you some fun facts about how far we've traveled and how fast we've gone. So just to begin with: the vehicle is stable near or at our intended landing site.

We do have communications with the lander, from the larger radio astronomy dishes around the world that are part of our lunar telemetry network, to the spacecraft, through several of the antennas and two of the radios. So that's phenomenal to begin with.

So now that we're on the Goonhilly dish in the United Kingdom, we're downloading data from the buffers in the spacecraft and commanding the spacecraft, trying to get you surface photos, because I know that everyone's hungry for those surface photos. But we got some interesting data that gives us a position and attitude of where the lander is, and I'll explain that in a moment.

We have the sun impinging on the solar arrays and charging our batteries. We are providing power to the spacecraft and we're at 100% state of charge. That's fantastic. I talked to you about the communications, and we will be taking an image, hopefully this weekend, from the Lunar Reconnaissance Orbiter to find the lander and pinpoint its location in the South Pole region of the moon.

If you can go to the photo here: this is a photo that I thought you'd find interesting, that we'll release to the public. Here we're flying about 10 kilometers over the surface of Schomberger crater, near the South Pole region of the moon. We're still about 200 kilometers uprange from where our intended landing site is.

Here we have one of our public affairs cameras taking this beautiful image, and you see how shadowed and, you know, undulating the terrain is. That's important to understanding how difficult it is to land on the surface of the moon. So thanks for that image.

Going back, I would say it was quite phenomenal if you think about it: we were traveling 25,000 miles an hour, and we came down and touched down at about 6 miles an hour, with a downrange traverse of about two miles an hour. That's walking speed. So that's kind of just an interesting metric for you. We traveled 2 1/2 times the distance to the lunar surface, about 600,000 miles, due to the trajectory and the number of orbits that we've gone through.

And in performing that incredible deceleration, our first-of-its-kind liquid oxygen, liquid methane, additively manufactured 3D-printed engine burned six times for a cumulative burn time of over 20 minutes. It's just an incredibly performing machine, and we're really proud to take that technology to TRL 9. I've got to say something about the team. The ops teams were cool under pressure for the whole seven days of it. It was quite amazing to see them at work, real space cowboys. And you know, we worked through all the difficulties. If you think back to the Apollo days, there wasn't one mission that went absolutely perfectly. So you have to be adaptable, you have to be innovative and you have to persevere. And we persevered right up until the last moments to get this soft touchdown like we wanted to. Let me just talk briefly about attitude on the surface with this little lander model. I'm going to pretend that's the rock that the lander is leaning on.

We think we came down, like I said, at about 6 miles an hour this way and about two miles an hour this way, caught a foot in the surface, and the lander has tipped like this, and we believe this is the orientation of the lander on the surface of the moon. We're getting sun moving this way around the lander, so the solar arrays are being powered, and we believe a little later we'll get sun on the top deck solar array.

The majority of our payloads are in view, and we are collecting science. We've collected science along the way to the moon and have been downloading that data. In particular, three payloads positioned on the lander have been actively, operationally used in this mission. The first is the LN-1 payload out of Marshall Space Flight Center.

It actually assisted us in determining our precise location in space, orbit determination we call it, using a Doppler measurement. That was very useful, and as it was part of the Deep Space Network, it augmented our communications from our own commercial network. The other one you've heard about was the Navigation Doppler Lidar from Langley Research Center. We integrated their telemetry stream into our navigation application and we used that for our powered descent initiation. And then finally, the one that was very useful was a new technology out of Glenn Research Center, the radio frequency mass gauging. That instrument really gave us an understanding of what the propellant tank levels were, which helped us budget the amount of propellant to take us all the way safely to the surface of the moon.

So, a very interesting mission so far. As we get more telemetry and turn more things on, we'll be updating you over the coming days on the analysis and the reconstruction of the landing. Tim can comment on that a little bit today: how we did the powered descent all the way to the surface, and why we believe in the data that I'm talking to you about today.
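As a quick illustration of the speeds quoted in these remarks (a planned descent rate of 1 meter per second, and an estimated actual touchdown of about 6 mph vertically with about 2 mph of lateral drift), a simple unit conversion puts them on a common footing. These are the approximate figures from the briefing, not reconstructed flight data:

```python
# Rough sanity check of the touchdown speeds quoted in the briefing
# (approximate figures only; the reconstructed values were still being
# analyzed at the time of this news conference).
MPH_TO_MS = 0.44704  # miles per hour -> meters per second

planned_vertical_ms = 1.0                        # planned descent rate: 1 m/s
planned_vertical_mph = planned_vertical_ms / MPH_TO_MS

actual_vertical_mph = 6.0                        # estimated actual descent rate
actual_lateral_mph = 2.0                         # estimated lateral drift

print(f"planned vertical: {planned_vertical_mph:.1f} mph")            # ~2.2 mph
print(f"actual vertical:  {actual_vertical_mph * MPH_TO_MS:.1f} m/s") # ~2.7 m/s
print(f"actual lateral:   {actual_lateral_mph * MPH_TO_MS:.1f} m/s")  # ~0.9 m/s
```

So the lander came down roughly two to three times faster than planned, which is consistent with the leg-fracture-and-tip-over theory discussed later in the briefing.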

Just to clear up some confusion: yesterday we thought we were upright, and the reason was that the tanks were reading gravity on the moon in what I'll call the X direction. At the fill levels there were still residuals in the tanks, and we saw those measurements in the X direction. Well, that was stale telemetry.

So when we worked through the night to get other telemetry down, we noticed that the Z direction is where we're seeing the residual tank quantities. And so that's what tells us, in fairly certain terms, the orientation of the vehicle. Hopefully we'll get a picture here this weekend and share it with you, Nilufar. That's all I have. Thank you so much, Steve. Next up we have Joel Kearns. Joel.

Hey, thank you, Nilufar. First, let me congratulate Intuitive Machines for three major accomplishments. The first, as Steve said, is having the first successful soft landing on the moon by the United States since 1972.

The second is being the first non-government, commercial organization to actually touch down safely on the surface of the moon. And the third is having a touchdown at 80° S latitude, much closer to the South Pole of the moon than any earlier US robotic or human explorers. Let me give you some of the context for the importance of Intuitive Machines' accomplishment on their mission.

In 2017, the nation charged NASA to expand our scientific and technical work in the area of the moon: science, technology and human explorers under our Artemis initiative. As part of that, NASA went down the path of listening to what industry had been telling us for some years, which is that for robotic landing services, we should be able to purchase that from US industry instead of doing it ourselves at NASA.

Now, NASA is very good at building and operating robotic probes throughout the solar system, but we knew we'd be going back to the moon repeatedly to do science and technical studies and eventually human exploration. So we put into place this Commercial Lunar Payload Services initiative, or CLPS, to buy, in effect, the service of bringing NASA cargo down to the surface of the moon and having the data from those experiments brought back to Earth by industry.

Intuitive Machines is one of the participants in that initiative and has now been awarded three service contracts to bring NASA equipment, experiments and cargo down to the surface of the moon. And this was Intuitive Machines' first attempt, their first mission to the moon carrying our cargo. Now, I've talked about all the potential advantages of having industry do this for NASA. Industry had told us years ago that they thought they were technically ready to do it.

They thought that if they specialized in doing it, they could probably do it at less cost, much more frequently and much faster from initial order than NASA probably could, since we would normally build a custom spacecraft for every endeavor. And we've seen that so far in the progress that our CLPS vendors have made as they work down to fly off their first missions. Intuitive Machines, though, in doing a soft touchdown on the moon, has provided the first real evidence that this is possible to do. It's possible with today's technology, with dedicated engineering and appropriate financial management, to have a private company actually design a spacecraft, develop a mission, buy a rocket, fly all the way to the moon and soft-land on the surface of the moon. Not just in an area where we landed in earlier decades near the equator with the Apollo missions, but in the unusual territory at the South Pole, which is the focus of our future human Artemis missions. This is a gigantic accomplishment. On this particular mission we had the company bring six NASA science and technology experiments on board down to the lunar surface.

They ranged from science studies looking at the electron density and plasma on the surface of the moon, to technology studies such as measuring rocket plume impingement during landing, navigation studies on the way to the moon and down to the surface, laser ranging, fuel quantity measurement and other investigations. And interestingly enough, when we started this, we had put together a list of different instruments and payloads that the Commercial Lunar Payload Services companies could volunteer to take down to the surface of the moon.

And Intuitive Machines picked a complement of five payloads, which we later augmented with the radio frequency mass gauge fuel measurement experiment. Intuitive Machines picked the number of payloads and experiments from NASA to bring down, which, as Steve said, greatly benefited them during the execution of their mission. So at this point today, as Intuitive Machines works to make sure they understand the status of the Odysseus vehicle,

we are already looking back at scientific and technological data that we accumulated during the transit out to the moon and during the deorbit operations, and we're looking forward to getting even more data as Intuitive Machines finishes the checkout of Odysseus. Now, we knew at NASA when we went out to gather this by commercial services that we had these great potential benefits, but we also had risks. We knew, for example, that no one had previously done this. We knew we were asking industry to do an incredibly difficult thing: to go from those high speeds of orbital velocity all the way down to very slow speeds at a particular position on the moon where we wanted them to land. And Intuitive Machines' accomplishment here actually shows everyone that this approach will work, and we look forward to using it over and over in the future. Nilufar. Thank you so much. We'll now hand it over to Tim.

Thank you, Nilufar. Very excited to be here today. They told me to smile before the press conference, and I can't help but smile anyway, because we landed on the moon. A little bit about Odysseus: Odysseus is a mostly autonomous vehicle. Our operations crew would monitor the vehicle during flight, and we'd provide some trajectory updates and parameter updates, and that's what got us into lunar orbit. The lunar descent is different, though. During orbit, we would prepare for maneuvers.

We'd watch the maneuver and then know that we had time to recover afterwards and replan for the next stage. But lunar powered descent is the endgame. There is no after: you're either successful or you fail. And so on the last rev around the moon, we buttoned up any last-minute changes we wanted on the vehicle, and there were a few that we may talk about today, and basically the vehicle disappeared behind the far side of the moon. We lost the signal for 25 minutes. Everybody got up and went to the bathroom. There was nothing to do but wait for the signal to come back on. It was amazing how quickly we adapted from continuous communications during transit to regular losses of signal being a part of our life, because we were circling the moon.

Once we came up around the North Pole of the moon, we were in a polar orbit, and the vehicle was completely autonomous. We watched as the onboard systems pointed our cameras to the moon. We processed over 10,000 images onboard with our own machine learning algorithms to manage the speed of the vehicle, and the guidance system decided, based on the propulsion system's available thrust levels, orbital velocity and distance to the target near the South Pole, when the right time was to turn the engines on. That's powered descent initiation, and the engines came on approximately 13 minutes before landing. We were at full thrust for what we call Braking 1. Basically, we were trying to slow down from approximately 3,600 mph to something more like 30 mph near the landing site. That's Braking 1, and the vehicle performed flawlessly. Our main engine thrust was good, our thrust control was perfect, and engine performance exceeded expectations in many ways. And flight control, my personal background, kept the vehicle pointed exactly where it was supposed to go for the entire burn. We monitored down until a pitch-over event. Early in the trajectory, the vehicle is basically flying sideways with respect to the moon: we're flying in one direction and the engine is slowing us down to take that velocity out of the vehicle.
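For a rough sense of the scale of the burn described here, the quoted figures (slowing from about 3,600 mph to about 30 mph over a roughly 13-minute powered descent) imply a modest average deceleration. This back-of-the-envelope arithmetic uses only the approximate numbers from the briefing:

```python
# Back-of-the-envelope average deceleration for the powered descent,
# using only the approximate figures quoted in the briefing. Illustrative
# arithmetic, not flight data.
MPH_TO_MS = 0.44704  # miles per hour -> meters per second

v0 = 3600 * MPH_TO_MS        # ~1609 m/s at powered descent initiation
v1 = 30 * MPH_TO_MS          # ~13 m/s near the landing site
burn_s = 13 * 60             # ~13-minute powered descent, in seconds

delta_v = v0 - v1            # ~1596 m/s shed by the main engine
avg_decel = delta_v / burn_s # ~2 m/s^2 average, about a fifth of Earth gravity
print(f"delta-v: {delta_v:.0f} m/s, avg decel: {avg_decel:.2f} m/s^2 "
      f"(~{avg_decel / 9.81:.2f} g)")
```

The actual profile is not constant (Braking 1 is a full-thrust phase followed by gentler terminal descent), so this is an average, not the peak.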

Once we get within a kilometer of the landing site, however, the vehicle goes into what we call a pitch-over, and this brings another set of cameras into alignment with the landing site. At that point we lost comm, which we knew we would, because we switched from one set of antennas to another, and then we regained communications all the way until approximately 200 meters above the landing site.

Then there was a tense moment where we did not have regular communications, but our dedicated radio and ground operations crew found the signal, and within an hour or so we were getting the first data down from the surface of the moon.

I could not be prouder of our operations team and our engineers for putting together Odysseus, which is a marvelous machine. And to look at the moon every night now and know that we have new hardware there that we had a hand in building, in our lifetime, is something I couldn't say before. It really was a magical, magical day. Thank you. Thank you so much, Tim. And now, finally, we'll hand it over to Prasun. Thank you.

So first and foremost, congratulations to Intuitive Machines on an amazing landing success story. You know, one of the things that we in space technology want to do is have repeated access to various parts of the solar system to do these tech demonstrations, because in our view, technology drives exploration. And we had a number of technology demonstrations on this lander, and one was called on to be used operationally. I'll talk a little bit about that. But this aspect of a successful landing really enables, pointing to what Joel said, repeated access to the lunar surface. We have a slew of technologies we want to demonstrate, as well as many science instruments that we want to send, for understanding the lunar environment.

And by having a success story like yesterday's, it allows us to set up the next set of projects that we want to fly and demonstrate, right? One of the things that we try to do is as much testing as possible on the ground, but that only gets us to a certain technology readiness level, which is typically TRL 5, sometimes 6.

What that means is we're not quite in the environment that we want to be in. And so this is why we want to go and experiment in space or on the lunar surface, wherever it happens to be. One of the big technology demonstrations on this landing was the Navigation Doppler Lidar. What we were hoping for, through the test of flying on this mission, was to get it to TRL 6, which means testing in the relevant environment, the lunar environment.

However, with the successful ingestion of its data during landing, we were able to get an operational system, now TRL 9, which means it's ready to be used from now on, as opposed to needing further testing. This wasn't by accident. The teams at NASA Langley Research Center that helped develop this technology did a lot of development over the years, as well as working with Intuitive Machines on ingesting this data if necessary.

Fortunately, all that hard work came to bear yesterday when there was a technical issue and the teams decided that, hey, it was best to try to do the switch and rely on this tech demonstration. Everything we understand from the telemetry we received, which is limited at this point until we get all the data back, indicates that the technology performed flawlessly, with better-than-expected performance. It acquired range and velocity data well above the required 5 kilometers of altitude as the lander was descending.

And the reason why we need this data for a successful landing is that as landers come down, we would ideally like to have them come straight down. But because there are errors in all the operations of the system, you wind up moving a little bit laterally. This measurement is really to get an understanding of that lateral motion so that the system can counteract it, zero out that lateral motion, and come straight down.

So you need these types of measurements to make that happen. This is one set of technologies that allows us to do that. There are a slew of other ones, to make the landings even more reliable and safer, that we hope to demonstrate on future landings.
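The lateral-motion nulling described here can be sketched as a feedback loop: each measured lateral velocity (the kind of measurement the Navigation Doppler Lidar supplies) feeds a command that opposes the drift until it is zeroed out. The toy proportional controller below is only an illustration of the idea, not Intuitive Machines' actual guidance law:

```python
# Toy illustration of nulling lateral velocity during descent: a
# proportional controller commands a lateral acceleration opposing the
# measured drift. A sketch of the concept, not the flight algorithm.
def null_lateral_velocity(v_lateral, gain=0.5, dt=0.1, steps=200):
    """Drive a lateral velocity (m/s) toward zero with accel = -gain * v."""
    for _ in range(steps):
        accel = -gain * v_lateral   # command opposes the measured drift
        v_lateral += accel * dt     # integrate over one control step
    return v_lateral

# Starting from ~0.9 m/s of drift (about 2 mph), the residual after
# 20 seconds of simulated control is negligible:
print(null_lateral_velocity(0.9))
```

In a real lander the loop also has to contend with sensor noise, engine dynamics and attitude constraints, which is why an accurate velocity measurement like NDL's matters so much.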

And so having this successful landing allows us to gear up and get ready to do more of this going forward, to enable the Artemis endeavor of repeated access to the surface, eventual landing of humans on the surface, and sustained presence on the surface with infrastructure laid down. And so this is the first step in allowing for that, and a great day for allowing us to get ready for more to come as we go forward.

Thank you, Prasun, and thank you to our briefers for those initial remarks. We'll now open it up to questions. Again, if you've joined us on the line today on our phone bridge, please press star one to submit your question. Once your name is called, please state to whom you'd like to direct your questions. Once your question has been answered, you will be muted. And if your question has already been answered, you can press star two to withdraw it. Let's open our phone bridge.

First up, we have Gina Sunseri with ABC News. Gina. My question is for either Steve or Tim. What was your Hail Mary moment, where you went, "We think we can make this work," and you just made it work? What were those moments? Or were there more than one? Well, I think there were several of those moments. Like I said, it was a spicy mission.

I'll let Tim comment a little bit, but you know, the idea to pull the range telemetry from the NASA Navigation Doppler Lidar was interesting, and to change out the laser rangefinder callouts in the navigation application. All that was very straightforward to go calculate. Part of that was put in a table, but part of it meant that we had to rewrite the navigation application software.

And when you do that, to upload it to the vehicle, you actually have to stop guidance, navigation and control. And when we ran that in the simulation and we ran that on the flatsat, the software did not like being rebooted like that. We saw the guidance drift way off, we saw a lot of helium usage, and that was very sporty.

So in a very time-crunched period, getting ready for powered descent, we had to work feverishly to get that sequence of events right, almost like Fred Haise in Apollo 13, trying to figure out the sequence of events to reinitialize the software, in particular to reinitialize navigation. And so that was done in a very sporty way, and it was brilliantly executed by the team. And so that was the one that had us all biting our nails just a little bit, because once you start powered descent, there's no going back, like Tim said. Tim, do you have another one? Yeah, there was. I will say, on that one, it was a parallel effort for sure. We had one team rewriting the code, one team testing procedures, and then another team, once the code was written, pushing it up on the vehicle and moving it into place.

That synchronization came down to a flawlessly executed reboot of the navigation system that allowed us to successfully land. So that was exciting. Another exciting moment we had: after our TCM-1, our first trajectory correction maneuver, we discovered that our engine pointing geometry had an error in it, and we had to study that a bit. We found the reason why: we had a geometry linkage that was a little bit different than we expected.

It's very difficult to test how that linkage to the main gimbal would respond under full thrust in space. And so we were able to use flight data to correct that, but that was another area where we had to patch the software to put the correction in place. And you know, we became very proficient at it. I will say, and you hear this in the space industry a lot, that we stand on the shoulders of giants. The work we were doing was built upon work people had done before us.

NASA's core Flight Software is a big part of what we do on the flight vehicle, and it has a lot of the capabilities to reload and reinitialize software built into it. We were able to take advantage of that because of the foresight that people who had done space missions before us had invested in that piece of technology, and we used it to great effect. Great. Thank you. Next up, we have Marcia Dunn with The Associated Press. Hi. My questions are for you, Steve.

What's your best guess for how close you are to the targeted touchdown area? And you said a leg caught the surface. Do you think the lander came in tilted, and is that how it caught a leg like that, or did it catch on a rock and then belly-flop? And do you think Odysseus was ever upright, even for a moment or two, or do you think it just landed on its side from the get-go? Thanks. Well, thank you, Marcia, for that question.

We are reconstructing, with the data that we get, what we think happened. My theory is just a theory until we get an actual picture and see what happened. But if you pass me the model, Tim, I'll show you. Here is how we came down: we came down a little bit faster than planned. We were supposed to come down at 1 meter per second, which is about two miles an hour, and we were supposed to null the lateral velocity, which was supposed to be zero, coming straight down.

We had about two miles an hour going this way. And so if you're coming down at six miles an hour, which is what we think, and moving two miles an hour laterally, and you catch a foot, we might have fractured that landing gear and tipped over gently. Like I said, we have to go look at when the main engine cutoff was, to see whether the main engine had any coupling effect on that or not. I can't tell you for sure. It would be good to see the health of the landing gear and see how that all looks.

And so it'll be a few days before we get all of that put together and reconstructed. That's an action I've already given the team, and I look forward to the answer to help inform our future flights. I can add to that. After pitch-over, we have a hazard-relative navigation system that generates measurements at 1 hertz. This is our optical processing, and we generated 84 measurements and processed 79 of those. So 84 is important, because we have approximately a 120-second timeline from pitch-over to landing. The fact that we generated 84 accounts for a portion of that timeline; they're not necessarily continuous.

The fact that we processed 79 of them, and they were accepted by the Kalman filter that we have in our software, means that there was very good agreement between the inertial measurement unit, our camera velocity measurement and the NDL, the Navigation Doppler Lidar, on board. With those all in agreement, that means we had roughly 90 seconds out of 120 seconds of guaranteed stable flight coming in. So we were very close to the vertical phase. We don't have the data from that interval yet, and so we're waiting to see what that is. But that's a really good indication that we were in stable control and vertical at the time we touched down. Thank you. Bill Harwood with CBS News. Hey, thank you very much. I think this is for Steve. How do you guys know it's resting on a rock, as it were, and not on its side?

In other words, how many degrees off vertical did your readings lead you to think it is, if you even got a number like that? And are there any payloads on board that simply cannot work in the current orientation? Thanks. Well, I'll let Tim address part of it. But our reconstruction, based on how much power we're getting off of the solar array, says that it has to be somewhat elevated off the surface horizontally.

So that's why we think it's on a rock, or the foot is in a crevice or something, holding it in that attitude. Fortunately, most of the payloads are exposed to the outside, above the surface. The panel that's down towards the surface only had a single payload on it, and it's not an operational payload, it's a static payload. And we're still going to try to take a picture of that

payload if we can. That would meet the objective of taking a photograph of that art cube that's on the panel pointed towards the surface of the moon. So we're going to try to download all the pictures and see if we got that picture in view. Tim, any more insight? We also have some inertial measurement unit data. We've turned a lot of the flight instrumentation off on the vehicle for power management purposes.

But before we did, we were able to get some packets and measure lunar gravity, and most of that lunar gravity was in the Z direction on that model, which is up, fairly close to level. So there is a question of whether we ran into a slope, which would also explain a tip-over if there was more slope than we anticipated at touchdown. But the inertial measurement unit gives a very strong indication of which way is up, and those sensors are very, very exquisite.
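The point about the IMU can be illustrated with a little geometry: if lunar gravity (about 1.62 m/s²) shows up almost entirely on the body Z axis of the accelerometer, then the Z axis is close to the local vertical, and the tilt angle follows from an arccosine. The numbers below are made up for illustration, not the actual IMU packets:

```python
import math

# If the accelerometer senses lunar gravity (~1.62 m/s^2) almost entirely
# along the body Z axis, the Z axis is close to vertical. The tilt angle
# follows from the Z component's share of the total. Example values only.
def tilt_from_gravity(gx, gy, gz):
    """Angle (degrees) between the body Z axis and the local vertical."""
    g = math.sqrt(gx**2 + gy**2 + gz**2)
    # Clamp to guard against floating-point values slightly above 1.0,
    # which would push acos outside its domain.
    return math.degrees(math.acos(min(1.0, gz / g)))

print(tilt_from_gravity(0.05, 0.05, 1.61))  # small tilt, only a few degrees
```

This is why a Z-dominated gravity reading supports the "fairly close to level" interpretation: even a sizeable residual on the X and Y axes corresponds to only a few degrees of tilt.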

So it's a confirmation of what we're seeing from the tanks. Exactly what the material is that's underneath the lander is something we hope to get some imagery of over the coming days, and find out more. We're as eager to see those images as the public is. Yeah, and I would add, in terms of the technology payloads, we've already gotten data along the way to say they've been successful, right?

The radio frequency mass gauge has been working, you know, since as soon as we got into low Earth orbit and were on the way. So we've gotten data all along that way, as well as during the descent, which we're still waiting for telemetry on. The Navigation Doppler Lidar, we got that real time going down.

So we know that worked very well, a successful aspect of it, right? SCALPSS, the stereo cameras, we're waiting for the pictures to come back there, but everything else seems to be working very well. So we anticipate that it worked well during the descent too, and we're just waiting for the data to come back to analyze to see how that went. So a lot of the payloads have already been successfully demonstrated.

Yeah, this is Joel. What I'd say is that, in addition to what Prasun and Tim said about the fact that so much data was acquired during transit out to the moon, lunar orbit and descent: of course we'll evaluate if there are any particular measurements that we can't take because of the vehicle configuration, but in general, we expect to get a lot of data and a lot of measurements from the instruments, both science and technology. Yeah, I have an add to that too.

You know, the NDL is a perfect example of a problem solved. But the radio frequency mass gauge was also something that we used for a problem avoided. We had a temperature sensor on one of our tanks, and we fly cryogenic fluids, which are very, very cold, for propellant. That temperature sensor was reporting back colder than we had anticipated. Well, that could have been indicative of a leak, and so we were beginning to spin up some contingencies.

Well, what if we have a leak, what do we do? But because we had the radio frequency mass gauge, we were able to confirm that our tank masses were stable and that we just had a slightly anomalous sensor reading, and that avoided a problem; we didn't spend more energy going through that. So that technology is one that maybe isn't quite as dramatic as a late-orbit software reboot, but it nonetheless gave us confidence going through the mission. Great. Thank you so much for your

insight on that. Next up we have Ken Chang with the New York Times. Ken. Yes, hi. Thank you. I was wondering, I guess for Tim and Steve, for a tick-tock of what happened after lunar orbit insertion. It looks like the orbit was lower than what was in the press kit, and then you had another burn that evening, and then you avoided the DOI burn, and then the landing time moved. So I was wondering what the various orbits were and how that affected the landing time.

And also, when did you find out that you had a bad laser altimeter? I missed the last part. I'll start with the first part. Kenneth, do you want to ask the last part again? When did you find out that you had a bad laser altimeter? Yeah. OK. Laser altimeter. So the first part of the question was about the lunar orbit insertion and what happened after that, right?

If I understand your question right, well, we were having some difficulty with communications around the world, communicating from the different configurations and different dishes that we had around the world up to our radios. We have radios from two vendors, and the Thales Alenia radios have a ranging beacon, and we have a known carrier frequency that we're operating on. And some of the dishes were

smaller around the world. So in certain parts of the world we had a weaker signal and we would lose that carrier lock. And when that carrier lock goes down, you can't get a good orbit determination. And there was a shift in the ranging beacon. That shift, and the turnaround ratio in the ranging beacon, was such that we had some inaccuracy. So we got the best data we could possibly get going into our lunar orbit insertion burn. But what we found was that the resulting orbit was

slightly elliptical. Actually it was elliptical, not highly elliptical, but it was an elliptical orbit. And so we were not necessarily comfortable with our proximity to the South Pole area; we were a little too close for our own comfort. So we decided to come in and do a raise of our perilune position. We did that very quickly, autonomously, and it put us in a safer configuration for the mission.

And we did that burn in such a way that it eliminated the need for a deorbit insertion burn, a very small burn before we did powered descent. When we were looking at our position around the Moon, we decided to take a laser range finder, power it on, and ping the surface to see how close we were, because we were having trouble with the orbit determination and the Doppler measurement we were trying to get.

And we saw that that laser didn't fire, and what we found was that there's a safety enable switch, because it's not an eye-safe laser. That safety enable switch is in the box and had not been disarmed. So it's like having a safety on a firearm; it's there for ground processing, and that was an oversight on our part. And so those laser range finders could not be turned on, and we couldn't manipulate that enable switch with

the software. Those range finders had been tested and would have worked if we had caught that oversight and removed that safety enable before flight. So I think that gets your question. Tim, anything to add on that? No, that's right.

I think the key thing was we have an incredible flight dynamics team who were able to determine that, from the orbit we were in, we could raise perilune with a lunar correction maneuver that they had built in, with the foresight to trim the orbit if we had some unexpected conditions. It basically put us into our descent orbit about four or five revs before we

nominally would have done that. But the orbit still phased over the landing site in the right way and gave us a great opportunity to execute powered descent. Thank you so much for that. Loren Grush with Bloomberg. Hi. Thank you so much for taking my question. I think this might be for Steve or Tim. I'm curious if you've been able to determine whether the tipping damaged the Lander at all, based on the rock that it's leaning on. Is there any concern of further degradation because of the

position that it's in? Thanks. Well, again, Loren, we're hopeful to get pictures and really do an assessment of the structure and of all the external equipment. We are hopeful that the top-deck solar array is not damaged, and that as the sun comes around the Lander we'll be able to get some power generation from that array, which is now vertical. And so we'll see what that means.

But so far we have quite a bit of operational capability even though we're tipped over, and that's really exciting for us, and we're continuing the surface operations mission as a result. Thank you. Next up we have Andrea Leinfelder with the Houston Chronicle. Andrea. Hi, these questions are for Tim Crain. That final orbit you took: I just want to make sure that was specifically to implement the software patch to use NASA's LIDAR tech demo for landing.

Also, Tim, on Twitter, or excuse me, X, you mentioned a big roll maneuver. Was this part of the plan? If not, what caused the roll maneuver, and did that create any complications? And finally, I was hoping you could walk us through some of the communication issues experienced right after landing. Was it difficult to get a signal because it was at an angle, or were there other challenges related to being tipped over? Thank you. Thanks, Andrea. I did not catch the first part

of your question. Could you repeat that? Sorry, the first part was: that final orbit you took that kind of pushed back the landing, was that specifically to implement the software patch that helps you land with NASA's LIDAR tech demo? Yes. OK, thanks. We were in good position to land at approximately 3:30. But the procedures that Steve was talking about: in what order do we bring down the flight control, the guidance? Do

we inhibit RCS? How do we do that in such a way that there's no unexpected consequence on the vehicle? For example, if we turned off guidance, navigation and control but didn't turn off the RCS control valves, they could listen to noise on the computer instead of commands to zero, and we could open up the valves and lose control.

So we were very, very deliberate about working through it in what we call a flat sat, which is basically the spacecraft equipment laid out in a lab, driven by a simulation. We were very deliberate about working that procedure so that when we shut the software down, we could bring it back up safely with no harm to the vehicle. We had the patch ready in time for the first landing attempt, but we hadn't come to a satisfactory procedure yet, and we had to get it right.
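The ordering constraint described here, never leave the RCS valve drivers live while guidance, navigation and control is down, can be sketched as a simple sequence check. Everything below (step names, the checking function) is an invented illustration of the idea, not Intuitive Machines' actual flight software.

```python
# Hypothetical sketch of a safing-sequence check: the RCS valve drivers
# must be inhibited BEFORE guidance, navigation and control (GNC) stops,
# and re-enabled only AFTER GNC is back up, so the valves never act on
# uncommanded computer noise. All step names are illustrative.

SAFE_SHUTDOWN = ["inhibit_rcs_valves", "stop_gnc", "load_patch",
                 "start_gnc", "enable_rcs_valves"]

def sequence_is_safe(steps):
    """Reject any ordering where the valves are live while GNC is down."""
    valves_live, gnc_up = True, True
    for step in steps:
        if step == "inhibit_rcs_valves":
            valves_live = False
        elif step == "enable_rcs_valves":
            valves_live = True
        elif step == "stop_gnc":
            gnc_up = False
        elif step == "start_gnc":
            gnc_up = True
        if valves_live and not gnc_up:
            return False  # valves could open on noise: unsafe ordering
    return True
```

With this check, `SAFE_SHUTDOWN` passes, while swapping the first two steps (stopping GNC before inhibiting the valves) fails, which is exactly the hazard Tim describes.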

And so Steve and I conferred. It would take a little bit more fuel to do the abort once around. But again, our flight dynamics and automation team had written software that gave us a great amount of flexibility to control Odysseus. And we're really at a special time in our lunar program at Intuitive Machines where most of our operators are also the subject matter experts who built

these systems. So we had incredible insight into what was going on. We had great confidence we could make this work, but we needed a little bit more time. And so we made the call to abort once around and implemented the patch at that time, so that when we had that final orbit we had high confidence of landing. The roll maneuver at the end: we had made some decisions. You know, every vehicle has a mass limit, and you're trying to optimize performance versus mass.

We had flown a vehicle with fixed antennas, and in order to fly with fixed antennas we had to look at what our landing orientation was at the South Pole. In fact, you'll see in this model there's white paint on some surfaces and black paint on others. That's because we were going to land near the South Pole, and the sun was going to illuminate the solar arrays, as you can imagine, and also these white surfaces, to reject heat.

But on the other side we have the cold side, and it gets very, very cold if you're not in direct sunlight on the Moon. So we painted that black to catch reflected light off the Moon and warm those surfaces up. As we were coming down, we wanted our navigation cameras pointed to the ground, and we wanted them still pointing to the

ground after we pitched over. But at landing, we had a planned roll maneuver to bring our antennas to face the Earth. It was not unexpected that the roll maneuver would occur. It was also expected that there would be a loss of communications as we switched from our 1-2 antenna pair to our 3-4 antenna pair. Thank you so much for that. Next up we have Chris Davenport from the Washington Post. Chris. Hey, thanks everyone.

For Tim and Steve, just regarding that audible you had to make: I want to see if I can pin down some of the chronology to get a sense of how the day unfolded yesterday. About what time was it that you realized that the laser range finder wasn't working? And did you immediately know that you could go to the NDL system? Was that something you had planned on as a contingency, or did you kind of make this up on the fly and work on it, you know, in real time

yesterday? Thanks. I'll start and Tim will add a lot of color to this, because this one was, like Gina asked, the Hail Mary issue. When we went around the night before and made that laser range finder measurement, it looked like the laser fired; we got an enable in the data. But when we did a deeper analysis of it, it had not actually fired; it was an

error in the telemetry. So when we dug into it that morning, this was the morning of landing, we called MDA and asked them what they thought about it, and could we convert that physical enable switch to a software command. And they indicated no, there's a physical cutout for this, not a software-driven cutout. So now I get into the control room. I can laugh about it now.

And Tim was on console as the mission director, and I said, Tim, we're going to have to land without laser range finders. And his face got absolutely white, because it was like a punch in the stomach that we were going to lose the mission. And we went around and said, what are we going to do? We started to hack into the OS, the operating system of the laser range finder, to see if there was a way we could trick

it some way. We thought about running a simulated table of the powered descent phase to predict with some parameters how we might land, but there are way too many variables in running a simulation table against the real-world situation. So that wasn't going to work. And so Tim and I were walking through the halls trying to

find the experts. And he came up with the idea: why don't we just plumb the high-beam laser and the low-beam laser from the NDL into the registers for the HRN laser range finder and the TRN laser range finder? He came up with that while we were walking down the hall in a hurry. And it would only work if we ingested the range measurement in the nav application.

And we had done that because we had worked with the team at Langley for so long on the Navigation Doppler Lidar that we were able to have that instrument in shadow mode, to give them better quality data. Because it was in shadow mode, we had that measurement in the navigation application, and it was just a brilliant piece of insight by Dr. Crain to say, let's clear the register and put those two lasers in as the actual makeshift laser range finders.
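The rerouting being described, taking NDL range measurements that were already flowing into the navigation software in shadow mode and writing them into the slots the filter reads for the dead laser range finders, might look something like the sketch below. Every name here (the register labels, the beam keys, the function) is invented for illustration; the actual flight software is not public.

```python
# Illustrative sketch of the NDL workaround: clear the rangefinder
# measurement registers and substitute the NDL's beam ranges, which were
# already available because the instrument was running in shadow mode.
# All identifiers are hypothetical, not Intuitive Machines' flight code.

class MeasurementRegisters:
    def __init__(self):
        # Slots the navigation filter reads for the TRN/HRN rangefinders.
        self.slots = {"trn_range_m": None, "hrn_range_m": None}

    def load(self, slot, value):
        self.slots[slot] = value

def patch_ndl_into_rangefinders(registers, ndl_beam_ranges_m):
    """Write NDL beam ranges into the laser rangefinder register slots."""
    registers.load("trn_range_m", ndl_beam_ranges_m["high_beam"])
    registers.load("hrn_range_m", ndl_beam_ranges_m["low_beam"])

regs = MeasurementRegisters()
patch_ndl_into_rangefinders(regs, {"high_beam": 10_500.0, "low_beam": 860.0})
print(regs.slots)  # {'trn_range_m': 10500.0, 'hrn_range_m': 860.0}
```

The point of the sketch is the shape of the fix: no new sensor interface was needed, only a redirection of measurements the nav application already ingested.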

So that's kind of how it unfolded, and we needed more time. So we took the risk and said, let's delay an orbit and switch to a later landing time, because the landing time was originally around 3:23 or 3:24, and we delayed till, you know, 5:24, based on a two-hour orbit around the Moon. So, Tim, anything else? Yeah, it sounds easy in retrospect. We had the Navigation Doppler Lidar already plumbed into the navigation system and had the range rate data.

So the three beams on the NDL produce a velocity measurement, as Prasan talked about. They also produce a range measurement, and we were not using the range measurement; we had just the range rate as a backup to our optical systems. But because it was already plumbed in there, we had to rewrite those time tags into our measurement loader. But the challenge was the lasers. So we have these two

navigation pods on the vehicle. If you can zoom in there, maybe, maybe not. Anyway, there are these two navigation pods that have the cameras. There you go: two navigation pods on either side of the vehicle that have cameras, and the laser range finders point in the same direction as the cameras. Those angles were optimized for our flight trajectory to give us the best measurements to land

softly. The NDL was under one of these, and its angles were optimized to test the extent of its performance, not necessarily to feed our navigation system, but to test the sensor, because it was a technology development. So after we figured out we could write the measurements into the laser range finder registers, we had to quickly tell the computer that the laser beams were pointed in different directions.

And so there were a number of attitude transformations: it's not in the same location, it's not in the same orientation. And if you've ever seen engineers doing right-hand-rule transformations, there were a lot of broken wrists.
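The bookkeeping being described, re-expressing a range measurement taken in a sensor's own frame as a vector in the lander body frame, reduces to a rotation plus a lever-arm offset. The mounting angle, lever arm, and beam convention below are made up for illustration; they are not the IM-1 values.

```python
import math

# Toy version of the "right-hand-rule" frame bookkeeping: a laser beam's
# range, measured along the sensor boresight, becomes a position vector
# in the lander body frame via a rotation and a lever-arm offset.
# Cant angle, lever arm, and axis convention are invented for this sketch.

def rot_y(theta):
    """Rotation matrix about the y axis (right-hand rule)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def matvec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def beam_hit_in_body_frame(range_m, cant_rad, lever_arm_m):
    # Beam points along the sensor +z axis; the sensor is canted about
    # the body y axis and offset from the body origin by the lever arm.
    hit_sensor = [0.0, 0.0, range_m]
    hit_body = matvec(rot_y(cant_rad), hit_sensor)
    return [hit_body[i] + lever_arm_m[i] for i in range(3)]

# A 1000 m return from a sensor canted 30 degrees, offset half a meter.
p = beam_hit_in_body_frame(1000.0, math.radians(30.0), [0.5, 0.0, 1.2])
```

A sign flip or swapped axis in `rot_y` is exactly the kind of error that normally takes a simulation campaign to flush out, which is why doing it correctly in ninety minutes was remarkable.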

People were trying to figure out which way it was pointing, and I will tell you that in normal software development for a spacecraft, this is the kind of thing that would have taken a month: writing down the math, cross-checking it with your colleagues, doing some simple calculations to prove that you think you're right, putting it into a simulation, running that simulation 10,000 times,

evaluating the performance. Usually you find an error because you did something in that rotation wrong, and you roll it back and go again. Our team basically did that in an hour and a half, and it worked. So it was one of the finest pieces of engineering I've ever had a chance to be

affiliated with. I'd like to add that the performance of the Navigation Doppler Lidar technology that was developed by NASA's Langley Research Center was outstanding and reliable, and that's what got Intuitive Machines some of the key data they needed in order to soft land. Great. Thank you so much for that. Eric Berger with Ars Technica. Eric. Hi, thanks very much.

Congratulations. Two questions for Tim or Steve. First of all, about propellant management: I'm curious how the cryogenic boil-off matched up with your expectations, and kind of how much prop you had left at the end. And then, what is the data transfer rate you're getting now versus what you expected? You know, trying to get some sense of how much data you're going to get back over the next week or so versus your original expectations. Thanks. Propellant.

So actually the cryogens did very well, and just a correction, Eric: our system doesn't really have boil-off. Our tanks are rated to hold the pressure of the methane; it's very close to space-storable. Really, what we're worried about isn't propellant boil-off, it is temperature management. We want to keep that cryogenic fluid very cold, because the density of that fluid in our engine is what gives us the power of that thrust system.

So really, what we were looking at throughout the flight was: did our insulation plan and our isolation of the cryogenic tanks from the hot parts of the spacecraft give us the right thermal protection, so that we did not heat that cold system up? And that worked very well. We found ourselves in a very good situation with propellant

all the way through the mission. We did use a little bit more helium than we thought throughout the mission and had to adjust our control approach for that, and that was probably the area of concern: we ran a little bit low on helium. So a lot of lessons learned there on how we'll manage that going forward. And in terms of the bandwidth, that's difficult to answer.

One of the things that's happening right now: we built fault detection technology into our comm system such that if we're not getting a command heartbeat up on one of the antenna pairs, it will go through a sequence of powering the radios off and restarting them. And then if they still don't get the heartbeat command signal from the Earth, it switches to the other antenna pair.

And so one of the first things we're trying to do is get out of that flight configuration and stay locked in on two antennas. But with that flip-flopping back and forth, right now we're trying to get the command up to move out of that flight mode. There's a beat frequency where we go from a good configuration to one that's down, and then just as we're about to come up on the new one, we move to a new antenna. And so we're working through that.
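The fault-protection behavior described above, power-cycle the radios on missed heartbeats, then fail over to the other antenna pair, is a small state machine. The sketch below is a rough illustration with invented state names and an invented miss threshold; the actual flight logic is not public.

```python
# Hedged sketch of the comm watchdog described above: consecutive missed
# command heartbeats first trigger a radio power-cycle, and if heartbeats
# are still missing after the restart, the system switches antenna pairs.
# States, thresholds, and pair labels are illustrative assumptions.

class CommWatchdog:
    def __init__(self, miss_limit=3):
        self.antenna_pair = "1-2"
        self.state = "locked"
        self.missed = 0
        self.miss_limit = miss_limit

    def on_heartbeat(self):
        """A command heartbeat arrived: reset the fault counters."""
        self.missed, self.state = 0, "locked"

    def on_timeout(self):
        """No heartbeat within the timeout window."""
        self.missed += 1
        if self.state == "locked" and self.missed >= self.miss_limit:
            self.state = "radio_restart"  # first remedy: power-cycle radios
            self.missed = 0
        elif self.state == "radio_restart" and self.missed >= self.miss_limit:
            # Still silent after the restart: try the other antenna pair.
            self.antenna_pair = "3-4" if self.antenna_pair == "1-2" else "1-2"
            self.state = "locked"
            self.missed = 0

wd = CommWatchdog()
for _ in range(6):      # six consecutive missed heartbeats
    wd.on_timeout()
print(wd.antenna_pair)  # after restart fails, the watchdog flips to 3-4
```

A watchdog like this is why the ground saw the "flip-flopping": each failover restarts the acquisition process on a different pair, and if the uplink is marginal on both, the vehicle keeps cycling until a command gets through to disable the flight mode.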

When we left to come over for the briefing, I think they just about had that solved. But I can't give you a strong number, because there's variability there as we go from different antennas to different dishes around the world. Great. Thanks again. Jeff Foust with Space News. Jeff.

Good afternoon. Maybe just to quickly follow up on Eric's question for Tim: what is your best guess at how good a data rate you can eventually get once you optimize the system for the Lander and its configuration? And then also, I think this question was asked earlier and I may have missed the answer: what's your best guess in terms of margin of error of how close you are to the predicted landing site? How many kilometers away do you think you touched down? Thanks.

Yeah, great questions. Thanks, Jeff. Best guess, you know, in terms of bit rate, that's hard to say, because that does vary with the antenna size and the sensitivity of each antenna, but we expect to get most of the mission data down once we stabilize our configuration. In terms of landing accuracy, you know, without precision navigation sensors on board, the best you can expect to land with an IMU-only landing system would probably be in the four to five

kilometer range. However, our optical navigation sensors performed flawlessly. In fact, our optical measurements looked better on the scopes than they had in simulations. So I'm confident that we're well within probably a two to three kilometer accuracy of the landing site for this mission. It would have been better if we'd had our full complement of

sensors as expected. And just as a closing point, Jeff, on this question: we're planning, working with the Lunar Reconnaissance Orbiter and the Arizona State University faculty, to do a pass to see if LRO can locate our position precisely and give us a latitude and longitude, and we expect that pass to occur this weekend. Thank you so much for that. Joey Roulette with Reuters. Joey. Hey, thanks for doing this.

Question for Tim or Steve: since the Lander is on its side, I was wondering if you could go into how that will limit what the Lander can do. You know, which operational capabilities are impacted by that, and which science objectives, if any, won't be able to be conducted because it's on its side? Thanks.

Well, I'll comment initially. Like I mentioned, we don't have active payloads on panel E, which I believe is what's facing the surface of the Moon. And so the active payloads that need communications, where we need to command them and get the telemetry out, are all exposed to the outside, which is very fortunate for us.

We do have antennas, however, that are pointed at the surface, and those antennas are unusable for transmission back to Earth. And so that really is a limiter: our ability to communicate and get the right data down so that, you know, we get everything we need for the mission. I think that's what's most compromised from being on our side. Anything I missed, Tim? No, that was it. Well, maybe one I just thought

of is: I was telling you before about the solar panel on the top deck. We had to angle that at about a 30° tilt up for landing at the South Pole. That was one of the engineering changes we made when NASA asked us to move towards the South Pole region. Now we've tipped over and we don't know the health of that solar panel. It would be great to get a picture, and/or wait until the sun comes around and see if we get any battery charging off that solar panel.

So we'll see. We're in a great state of charge with the batteries. We're getting plenty of sun on the now-horizontal solar panel, and we'll just have to wait and see with that other panel. Thank you so much. Jonathan Serrie, Fox News. Thank you for taking my question, and congratulations everyone. My question is also for Tim or Steve.

Your team had to essentially rewrite the instruction manual several times while in flight, not just for troubleshooting but also adapting to the first in-space performance of that new engine. Could you give us an idea of how many people were involved with the process? And did the discussions take place in a single war room, or were you conferencing in experts from multiple locations? Just give us an idea of the

human logistics involved. So I'll give you a rough overview, and then Tim can comment on kind of how it went over the seven-day period. The operations team was structured into three shifts: red, white and blue. Those shifts were supposed to work eight-hour shifts and then do a handover between shifts. Those teams are about 10 individuals each, and the other team that we

activated was called team four. Team four was a handful of us senior leaders and engineers who could analyze problems and take the workload off of the operations teams. So if the operations teams were wrestling with a particularly thorny problem, they would call team four and say, get in here, work on this for us and

give us a solution. So we would pull in the subject matter experts for any of the disciplines we would need to solve any particular problem, and we would work in a war room sense outside the control room to tackle it. We would, for example, activate and bring up the simulation, or activate and bring up the flat sat. We would run analysis cases. We would call the vendors, like we called MDA about the laser range finders.

We called NASA and talked about the Deep Space Network for that orbit determination need. All of that chatter in the back was handled by, I would say, about 30 people that would work a given problem on and off based on the discipline. But what happened during the mission was that the red, white and blue teams and team four ended up working nearly around the clock. We really could have staffed more, but it takes a lot of expertise to staff those teams.

And we ended up kind of melding: we were all working on this last problem through powered descent, and we collapsed into a single red, white and blue team, all of it, to get that solved. Which we're going to go back and look at, because, you know, we really worked the team hard. They put a lot of hours in. I think one of the longest days was 48 hours long, and another day was 40 hours long for some of the folks. And that's, you know, just

working too hard. We need to give them rest so they can be bright and make the right engineering decisions. So we got some lessons learned in that area, but we did it and it was worth it. And it was the whole idea of persevering through the challenges and never giving up. Never, ever give up until the last-ditch solution you could find, and then keep thinking about it if it didn't work. So just a testament to a great operations team.

Yeah, I'll add to that. You know, our operations concept was a blend. Human spaceflight for space station and space shuttle: you know, that's in our culture here in the Houston area. Some of us had worked on the Morpheus project at NASA, which in some ways has the DNA that led into Nova-C. We had people with a military operations background, and then we had people from commercial network operations.

And so we put all of that together and came up with our own unique blend of how we were going to do spacecraft operations. A big focus of that was the people inside the room on the red, white and blue teams: keep the vehicle alive, keep the vehicle alive and doing what it's supposed to do. And then the mission directors, myself, Jack Fischer and Trent Martin, had the responsibility to interface with team four.

We would be able to say, I have this problem, I can't solve it with the resources I have in the room and still do what we're supposed to do. And so we would shed those out to team four, and they did an amazing job. Whether it was talking to the vendors or developing a procedure, they took that off our plate, and that load balancing worked. You know, I joked this morning when someone asked, how was your day? I said, well, this mission was the longest seven-day day of my life.

But it really allowed us to focus on keeping the vehicle alive and keeping it moving on its way to the Moon and doing the things we needed to do, while problems and anomalies could be solved in the back room. And you know, this is a story that everybody on that team is going to be able to tell for generations, about how we landed. Irene Klotz, Aviation Week, your line is open.

Thanks. If I understand that incredible sequence of events correctly, was it just serendipity that a situation developed with that elliptical orbit that caused you to try to get the laser range finder data, where you realized it wasn't working before it would have actually been needed? And when, during the touchdown, would that laser range finder nominally have been activated? I think I understood, Irene, the

question. It was actually fortuitous that we had an elliptical orbit after lunar orbit insertion, because we would not have arbitrarily activated the laser range finders prior to powered descent. We tested them on the ground, we flew them on aircraft, we flew them on helicopters, and we assumed after all that testing they worked. So the first usage of those laser range finders was supposed to be during the powered descent.

But because we had such a low perilune, we activated a laser and found the problem. So that was fortunate; it was a bit of luck for us that we identified that they weren't firing. Like I said, the next morning we uncovered that, and then we had to work feverishly to figure out an alternative solution. Anything there, Tim? Yeah. Just the second part, Irene, to your question of when they would normally have come on.

Normally we would have turned them on after deorbit insertion, about an hour before landing. And we expected the, what we call the terrain relative navigation laser range finder, would have operated really from about 50 kilometers altitude all the way down to landing. And then after pitch-over, we had a laser on the other side that would take us from a kilometer down.

So we would probably have been five minutes to landing before we realized that those lasers weren't working, if we had not had that fortuitous event. So serendipity is absolutely the right word. Jackie Wattles, CNN. Hi, everyone. Thanks so much for doing this. I had a quick question for Steve or Tim. I know everyone's really curious about the photos here. So do you guys have any indication of whether EagleCam is in a position to pop off the Lander and take some pictures?

And to that end, if you could just clarify for all of us: are there any specific payloads, whether commercial or NASA, I know some of them are passive and you're still working on figuring out these data downlinks and stuff, but are there any that haven't gotten any data yet and that you don't know if you will get data from? Thanks so much. Well, fortunately, again, EagleCam sits on a panel. Let me show. Tim, I believe panel E is towards the surface of the Moon.

EagleCam sits over here on this panel, and we plan to eject that camera off the side, so it will fall about 30 meters or so, maybe not that far, away from the Lander and get a good shot of the Lander's position this way. So we're looking to power up that EagleCam. We're waiting on getting commanding ability to power that up, clear that SD card and fire the camera, so we can get a view back at our Lander. That's a very exciting image for us.

The reason it wasn't fired as we were landing was because of this nav system initialization that we had to do, which put a flag up telling the EagleCam not to fire. That was part of the troubleshooting we had to do to get the Doppler LIDAR into the nav system: we had to do these navigation initializations, and that shut off the EagleCam. We knew that was in the software, but we just did not have time to go fix it. And so now we'll get it and get the image in the orientation

that we need. On the other question you had about commercial payloads: we think we can meet all of the needs of the commercial payloads that we have in this orientation. The one on panel E that's covered right now, or shaded by the Lander and the surface, is the Arc Cube project. And we believe we've got an image of that already that we can download and share with our customer.

I'll add that for the NASA science payloads, as we said earlier, many of them have already taken a lot of data, a lot of measurements, in transit and also on descent. We're still checking to see, in the current suspected orientation of the vehicle, whether there will be any particular measurements that can't be made by some of the

payloads. So, for example, the laser retroreflectors are normally, you know, pointed up, so that when the Lunar Reconnaissance Orbiter flies over it can pulse them with a laser beam and find their position. We'll have to check to make sure that they can still be illuminated.

They probably can be when the orbiter is flying at a further angle away on its trajectory, very similar to what we found with the recent SLIM landing from the Japanese space agency. But we are doing an assessment to see whether there are any measurements still to come from any of the NASA-supplied payloads that most likely can't take place, particularly because of this new orientation. Great. Thank you so much for that.

Will Robinson-Smith with Spaceflight Now. Will. Yes, hi, thanks for taking the time to answer our questions here. One for Joel and Prasan, if I could: given the success and now the operability of the NDL, will that become, I guess, a highly recommended or required payload on future CLPS missions? And what are the potential implications or knock-on effects for the Human Landing System landers? Will NASA recommend that Blue Origin and SpaceX implement that

into their landing systems? Thanks. I'll take a shot, Prasan, but please add. So for the Commercial Lunar Payload Services initiative, we don't prescribe to the company partners that are doing this as a service, you know, what techniques or technologies they use. But as you can imagine, all these different companies are always looking for low-risk, good-performance ways to gather the data or conduct the operations that they are going to conduct for NASA.

So I'm sure that the story, as you can tell, is now very public about things like the performance of the NDL. And we would think that people that are looking at lunar landers would be checking into that technology, now that it's actually been flight-proven operationally, on probably an unanticipated flight test mission, right, actually used operationally. I would say the same thing.

You know, the Human Landing System partners have their own techniques and their own approaches that they're taking. But again, now that this has been actually shown to work operationally, I would think it's going to be of great interest to folks that want to travel to the Moon for the first time. Yeah, I will add, you know, NASA has already licensed this technology to a small company to commercially provide it to whoever wants to buy it, right.

And so this only adds more validation of the system. There's also a technical reason to add it, right, beyond the aspects that Joel talked about, because it is an order of magnitude more accurate in precision and measurement of range and velocity components. It's half the power, half the mass of the traditional approaches that we've used in the past, and the volume in terms of size is about one third. So if you just look at it from a technical perspective, it just

provides all these benefits. And so I'm sure future vendors will look at this type of capability anew and try to incorporate these types of technologies. And that's why we're doing these missions, right: to develop better and better capable systems that allow us to do this more reliably, more capably, and hopefully more sustainably and

more cost-efficiently. So in fact, after the landing, I did joke with Steve there. It's like, hey, now are you ready for IM-2? Because we have three payloads already ready to go on IM-2, right. And so we're ready to demonstrate even more stuff that will help the greater space economy that's burgeoning here in the US, and we just want to augment that as much as we can with what we're doing. Yeah, I'll chime in as well.

You know, as you look forward to future missions, and as we begin delivering cargo missions with a metric ton and more, you know, those payloads get more and more valuable. And as those payloads get more and more valuable, we're going to have to prove to our customers that we have robustness in our landing systems. One of the ways you achieve robustness is with redundancy, or with dissimilar redundancy. So having two ways of measuring

that landing: we had a camera system on board, but if you have a camera system and a laser system, one might fail in a way that the other one might not.
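The value of dissimilar redundancy described here can be illustrated with a toy reliability calculation. The failure probabilities below are made-up numbers for illustration only, not anything from the mission; the point is that two sensors built on different physics (camera versus Doppler lidar) avoid the common-mode failures that two identical sensors share.

```python
# Illustrative sketch (hypothetical numbers, not flight data): why dissimilar
# redundancy beats similar redundancy for a landing sensor suite.

p_camera = 0.05      # assumed chance the optical system fails on a landing
p_lidar = 0.05       # assumed chance the Doppler lidar fails on a landing

# Similar redundancy (two identical cameras): a shared failure mode, such as
# dust or glare blinding both units the same way, adds a floor that
# duplication cannot remove.
p_common_mode = 0.02
p_two_cameras = p_camera * p_camera + p_common_mode

# Dissimilar redundancy (camera + lidar): different physics, so failures are
# closer to independent and the combined probability is just the product.
p_camera_plus_lidar = p_camera * p_lidar

print(f"two cameras fail together:    {p_two_cameras:.4f}")
print(f"camera + lidar fail together: {p_camera_plus_lidar:.4f}")
```

With these assumed numbers the dissimilar pair fails together an order of magnitude less often, which is the intuition behind pairing a camera with a laser system.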

And so I can see, as the lunar economy opens up, as NASA begins to send cargo and larger, more expensive payloads with companies like ours and others, that you're going to see demand for these kinds of sensors; complementing a suite of sensors that you use to guarantee safe landing is going to be something that will be an industry standard. Thank you for that. We are going to try and take two

more questions, so I'm going to ask you guys to be brief in your remarks so we can get through these questions. First up, we have Marcia Smith with spacepolicyonline.com. Marcia? Thanks so much. Getting back to the communications question, I gather that part of the challenge is that you have so many different sites around the world with different capabilities. But I know that you talked before you launched about the challenges of communicating at the South Pole.

But how much of the comm problems are related to the ground stations, and how much to the place where you are on the Moon? And what lessons are you going to learn from all of this for the Artemis missions? Well, yeah, I'll answer some of that question.

What you get at the South Pole is a phenomenon that NASA is interested in understanding, since that's where NASA's future Artemis missions are targeted: a frequency multipath condition. So are you going to get multipath interference on your

communication frequencies? Fortunately, we think the antennas that are pointed towards the Moon will give us a really good understanding of that phenomenon at the South Pole. Another serendipitous moment, right? But I would say that we thought about this landing at the South Pole quite a bit, and if you look at the mock-up, all the antennas are up high and pointed, like we'd like, towards Earth when you're sitting on the surface of the Moon. In transit, it's very difficult.

You have to constantly change your attitude to point the antennas back to Earth when you're headed to the Moon. So we're going to figure out an antenna location map for subsequent missions, and even mission two, that gives us an antenna pointed back at the Earth when we're flying out towards the Moon, for sure.

Also, this was the first-ever use of our lunar data network, this now commercially available data network made up of these large radio astronomy dishes that we've stitched together in a network. Some of those dishes have had configuration issues. Some of those dishes have had weaker transmit power. We can all operate on this S-band set of frequencies; however, the power to reach the Moon is what came into play as we went around and out towards the Moon.

The further we got, at times those power transmission levels were too low for us to keep the carrier locked up on the radios. So those were some of the challenges, and what we're looking at going forward is to really regularize that lunar data network so that, operationally, we know the configuration and can go upgrade to put additional orbit determination capabilities within our baseband units at each antenna site.
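The distance effect described here can be sketched with standard free-space path loss. The frequency and ranges below are assumed round numbers for illustration, not Intuitive Machines' actual link budget; the point is that received power falls with the square of distance, so a dish that closes the link early in the mission can drop below carrier-lock threshold near the Moon.

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 3.0e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

f_s_band = 2.2e9          # a typical S-band downlink frequency (assumed)
near_earth = 2_000e3      # ~2,000 km slant range early in the mission (assumed)
moon = 384_400e3          # mean Earth-Moon distance

# Extra loss the ground station must absorb as the spacecraft recedes.
extra_loss = fspl_db(moon, f_s_band) - fspl_db(near_earth, f_s_band)
print(f"extra path loss, near Earth -> Moon: {extra_loss:.1f} dB")
```

With these assumptions the signal arrives roughly 46 dB weaker at lunar distance than in the near-Earth leg, which is why marginal transmit power at some dishes only became a problem on the way out to the Moon.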

And the best thing will be when we get our data relay satellites in orbit; we'll have that problem licked, and we can communicate short-distance from the surface up to a satellite and relay that back to Earth in a more traditional way. So, looking forward to those advances in the communication system. Thank you. And we have one last question we can take this afternoon, with Adam Mann from Science.

Adam? Hi there, I'm with Science News, actually. And I guess this is for the Intuitive Machines folks. I'm wondering, maybe you've answered this already, but I'm just wondering if you have any idea how long Odie might be able to stay operational on the lunar surface. Well, it's a great question, and you're going to bring a tear to my eye. We know at this landing site the sun will move beyond our solar arrays, in any configuration, in approximately nine days.

And so the early missions are all solar-powered and require that. And then once the sun sets on Odie, the batteries will attempt to keep the vehicle warm and alive, but eventually it'll fall into a deep cold. And then the electronics that we produce just won't survive the deep cold of lunar night. And so, best-case scenario, we're looking at another nine to 10 days.
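The nine-or-so-day figure follows from simple lunar day arithmetic, sketched below. The number of days already elapsed since local sunrise is a hypothetical value chosen only to make the arithmetic concrete; terrain shadowing and array geometry at a south-polar site would shorten the real window further.

```python
# Rough arithmetic sketch (assumed values) of remaining sunlight for a
# solar-powered lander: a lunar day-night cycle is about 29.5 Earth days,
# so a site gets roughly 14.8 days of continuous sun per cycle.

synodic_day = 29.53               # Earth days per lunar day-night cycle
daylight = synodic_day / 2        # ~14.8 days of sun per cycle
days_since_local_sunrise = 5.8    # hypothetical elapsed local morning

remaining = daylight - days_since_local_sunrise
print(f"approx. days of sunlight remaining: {remaining:.1f}")
```

Under these assumed values about nine days of sun remain, after which the batteries run down and the electronics face the deep cold of lunar night.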

And then, of course, the next time the sun illuminates the solar arrays, we'll turn our dishes to the Moon, just to see if the radios and the batteries and the flight computer survived that deep cold. The solar arrays should survive the deep cold and provide power, but we'll just see if our electronics made it through. We'll take a look, we'll take a listen.

By that time we'll have gotten very, very good at listening to that signal, but we do expect probably a maximum of another nine to 10 days. Thank you so much, Tim, and thank you to everyone who submitted questions this

afternoon. And thank you to our briefers for taking the time to discuss this historic mission, enabled by the agency's Commercial Lunar Payload Services, or CLPS, initiative. We hope you'll continue to follow along on this mission by keeping track on Intuitive Machines' website and on nasa.gov/CLPS. That will wrap today's briefing. Thank you so much.

Transcript source: Provided by creator in RSS feed: download file