
Visualizing A Black Hole’s Flares In 3D

Apr 30, 2024 · 18 min · Ep. 760

Episode description

The words “black hole” might bring to mind an infinite darkness. But the area right around a black hole, called the accretion disk, is actually pretty bright, with matter compressing hotter and hotter into a glowing plasma as it is sucked in. And amid that maelstrom, there are even brighter areas—bursts of energy that astronomers call flares.

Scientists are trying to better understand what those flares are, and what they can tell us about the nature of black holes. This week in the journal Nature Astronomy, a group of researchers published a video that they say is a 3D reconstruction of the movement of flares around the supermassive black hole at the heart of the Milky Way.

Dr. Katie Bouman, an assistant professor of computing and mathematical sciences, electrical engineering and astronomy at Caltech in Pasadena, California, joins guest host Arielle Duhaime-Ross to talk about the research, and how computational imaging techniques can help paint a picture of things that would be difficult or impossible to see naturally.

Transcripts for this segment will be available the week after the show airs on sciencefriday.com.

Subscribe to this podcast. Plus, to stay updated on all things science, sign up for Science Friday's newsletters.

Transcript

Science Friday is supported by Progressive Insurance. Whether you love true crime or comedy, celebrity interviews or news, you call the shots on what's in your podcast queue, right? And guess what? Now you can call them on your auto insurance too, with the Name Your Price tool from Progressive. It works just the way it sounds: you tell Progressive how much you want to pay for car insurance, and they'll show you coverage options that fit your budget.

Get a quote today at Progressive.com to join the over 28 million drivers who trust Progressive. Progressive Casualty Insurance Company and affiliates; price and coverage match limited by state law. Listener-supported, WNYC Studios. How do we image the invisible, and how do we use combinations of amazing instruments but also computation to see things that seem like they should be impossible to recover?

It's Tuesday, April 30th. On this day in 1905, Albert Einstein completed his doctoral thesis. You're listening to Science Friday. I'm SciFri producer Charles Bergquist. Scientists have used a technique called computational imaging to generate a 3D video of the movement of bright regions called flares near the black hole at the center of our Milky Way. Guest host Arielle Duhaime-Ross talks with Dr. Katie Bouman about the research and the challenge of seeing things that should be invisible.

When I hear black hole, I imagine an infinite darkness. Maybe it's something to do with the phrase gravity so strong even light cannot escape. But the area around the black hole, an area called the accretion disk, is actually pretty bright, with matter compressing hotter and hotter as it's sucked in. And amid that maelstrom, there are even brighter areas, bursts of energy that astronomers call flares.

Researchers are trying to better understand what those flares are and what they can tell us about the nature of black holes. This week in the journal Nature Astronomy, they published a three-dimensional video that they say is a reconstruction of the movement of flares around the supermassive black hole at the heart of the Milky Way.

Joining me now to talk about that work is Dr. Katie Bouman. She's an assistant professor of computing and mathematical sciences, electrical engineering, and astronomy at Caltech in Pasadena, California. Welcome to Science Friday. Hi, thank you so much for inviting me. I'm really excited to be here and tell you a little bit about what we've been working on. It's great to have you. So first of all, do we know what these flares are? You know, what makes a flare around a black hole?

So for years, people have seen that around black holes there are these flares, this extreme brightening of the light, but people were unsure what it could be. There were different theories. One of those theories was a hotspot, which would be that there are these compact regions that form, become really bright, and then slowly dissipate as they're rotating around the black hole.

This was debated for a while, but recently there has been more evidence that hotspots could be causing this flare structure. So basically this reconstruction was a way to try and figure out what is causing those flares? Yeah, so what I find really exciting is that what we've done is try to take artificial intelligence and physics and combine them in a way to recover the potential 3D structure of what the gas looks like during a flare event around a black hole.

And so we made it our goal to try to combine real observational data with our current understanding of black hole physics and modern computational tools from artificial intelligence in order to actually see the 3D structure of what a flare looks like around a black hole and to see if it looked like what we expect a hotspot to look like.

Okay, so I mean that's fascinating, right, because it sounds like what you're telling me is that these images, this video that you guys created, it's not a quote-unquote real video, right, and it's not a simulation either. So it's like a third category, it's something different. Yeah, exactly. So getting the 3D structure of what the flare looked like is a really, really hard, super challenging problem.

Maybe let's just first go back to how hard it is even just to take a 2D picture of a black hole. So, as you might have seen about five years ago, the Event Horizon Telescope collaboration, of which I'm a part along with a number of the other authors on this paper, produced the very first picture of a black hole. I remember that was a big moment. Yeah, it was really exciting.

And doing that was really difficult. You know, black holes are really far away from us and really compact, and so they appear very small in the sky. And so it required that we put together this Earth-sized telescope to see structure on the scale of the black hole's event horizon, that point of no return around the black hole. And so the Event Horizon Telescope had all these telescopes around the world, and they worked together and acted like an imperfect telescope the size of the Earth.

And then we computationally combined the information to make a picture. But even that was just a two-dimensional picture of a black hole. And here our goal is to recover not the 2D, but the 3D around a black hole. And so that's so much harder. And even further, the Event Horizon Telescope uses telescopes located around the world, but here we only had a telescope at one location, the ALMA telescope in Chile.

And so from this one stream of data, from one telescope, we had to reconstruct not a 2D picture, but a 3D picture. And so how is this even possible? Well, again, going back to the Event Horizon Telescope image, we tried to say in that work, let's make no assumptions about the actual physics of black holes. Let's not assume it's a black hole at all. We didn't want to make any assumptions about what the structure of the image looks like.

We wanted to just purely see what the picture was in the sky, even if it were something that didn't look like a black hole at all, right? And so because of that, we needed all these telescopes working together. But in this new work, we said, what if we actually allowed ourselves to bring that physics back in again and say, not only do we trust that it's a black hole, but we also trust a lot of the physics that is happening around the black hole.

For instance, the way the black hole's immense gravity bends light, and how gas moves around the black hole. If we trust the physics that we've built up over decades, then can we see more even with less data? And so that's what we did. We tried to build that physics into our method in order to recover a 3D picture. Let's think about things here on Earth first that we do 3D reconstruction of. So for instance, let's say your doctor says you need to go get a CT scan done to see inside your body.

So CT stands for computed tomography. So okay, what happens when you get a CT scan? Well, what happens is you lie inside a machine, and what the machine does is send X-rays through your body and take a picture of what comes out on the other end. But it doesn't just do this from one direction. It spins around you and takes pictures of your body from all 360 degrees, all possible viewpoints.

And then there are methods that allow you to take those projected images and from them recover back the 3D structure. So the idea is we wanted to use a similar idea for doing the black hole 3D reconstruction. The only problem is we only have one viewpoint, right? We're never going to be like a CT scanner where you see multiple views, different angles of the human body. Here we only see the black hole from one direction, here on the Earth. Yeah, we're extremely limited.
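To make the CT analogy concrete, here is a minimal sketch in Python, assuming NumPy and scikit-image are available. It is a toy illustration, not code from the paper: it projects a simple 2D "body" from many known angles, then inverts those projections with filtered back-projection.

```python
# A toy version of the CT idea (assumes NumPy and scikit-image; not the
# paper's code). Project a 2D "body" from many known angles, then invert
# the projections with filtered back-projection to recover the structure.
import numpy as np
from skimage.transform import radon, iradon

body = np.zeros((128, 128))
body[40:60, 70:90] = 1.0  # an off-center blob standing in for anatomy

# "Spin the scanner": one projection per degree over a half turn.
angles = np.arange(0.0, 180.0, 1.0)
sinogram = radon(body, theta=angles, circle=False)

# Invert the projections back into an image.
recovered = iradon(sinogram, theta=angles, filter_name="ramp", circle=False)
print("mean absolute error:", np.abs(recovered - body).mean())
```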

So we need multiple views to disambiguate the structure. So let's say instead you get into the CT scanner and the doctor says, oh no, it's broken, it's not able to rotate around you anymore, but they really want to take the scan. So the doctor asks you to rotate your body inside the scanner, and every time you rotate a little bit, the doctor takes a picture from the same direction.

Well, if the doctor knows exactly how much you rotated each time, then that's the exact same information needed to do the 3D reconstruction perfectly. And so we used a similar idea for the black hole reconstruction. We said, we don't have other views of the black hole from different orientations, but we have some understanding of how the gas is moving around the black hole. That's where our black hole physics comes in.

And so it's kind of like asking the patient to rotate in the CT scanner. The black hole is rotating for us. We know how the material is rotating, and so we can use that information to constrain the 3D reconstruction.
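Continuing the toy example above (again a sketch of the analogy, not the actual black hole method), the trick can be mimicked in a few lines: the "camera" never moves, but the object rotates by a known amount between frames, so each frame's projection can simply be relabeled with its rotation angle and inverted as if the scanner itself had moved.

```python
# Same toy as before, but now the "scanner is broken": the viewpoint is
# fixed and the object rotates by a known angle between frames.
import numpy as np
from scipy.ndimage import rotate
from skimage.transform import iradon

body = np.zeros((128, 128))
body[40:60, 70:90] = 1.0  # stand-in for a compact bright region

angles = np.arange(0.0, 180.0, 2.0)
projections = []
for a in angles:
    turned = rotate(body, angle=a, reshape=False, order=1)  # object rotates
    projections.append(turned.sum(axis=0))                  # camera stays put

# Relabel each frame's projection with its known rotation angle and invert
# exactly as if the viewpoint had moved around a static object.
sinogram = np.stack(projections, axis=1)
recovered = iradon(sinogram, theta=angles)
```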

And if that didn't seem hard enough, there's another challenge. We don't actually get the full 2D picture of the black hole from one viewpoint over time. We only see the integrated light coming to us. So it's like a single flickering pixel, like if the patient in the CT scanner were on the moon or something. It's all just a blur. So to get around this, we also had to leverage additional properties of the physics, like the polarization of the light, to interpret that single pixel of flickering light as a 3D structure. Does this mean that this technique only really works because the black hole happens to be rotating?

That's exactly correct. Yeah. Because the gas around the black hole is rotating in a predictable way, we can use that information, knowing how it moves from time one to time two to time three, to kind of, you know, simulate as if we had multiple views of the black hole. This video is a reconstruction based on what we know about the physics of black holes right now, which means it could change as we learn more. Is it sort of odd to know that it might not represent the full picture?

Well, I think that I work in a field called computational imaging, and it's all about how we form pictures when we don't just rely on optics, but also allow ourselves to put in computation, models, and underlying assumptions. And I think it's really important that we don't restrict ourselves to, you know, results that are only achieved by building new optics and new telescopes without any computation, because then we're limiting ourselves.

But here we're saying, okay, what if we allow ourselves to move on that spectrum from very few assumptions to much stronger assumptions? As long as we are very honest with ourselves about what those assumptions are and how they can bias our solution, you know, we're able to do so much more if we just give ourselves that freedom to add back in those assumptions. Of course, with the understanding that the result is only true up to our belief in those assumptions.

Tell me where AI comes in with this work. Yeah, so in this work, we made use of this really cool new computational tool that has kind of taken the computer vision and graphics area by storm. It's called NeRFs, or neural radiance fields. And the basic idea of a NeRF is that instead of representing a 3D volume as a bunch of different 3D pixels called voxels, we can represent it as a neural network.

So imagine that the space around you was split up into lots of cubes, kind of like a room-sized Rubik's cube with lots and lots of little cubes. And this is the original way of representing 3D space: we take each little cube and assign a value to it, the color of the object inside of it. But there are two disadvantages to this. First is that that representation is discrete.

So you might have a cube on the boundary of an object; for instance, the cube overlaps with, like, a black mug and the white table it's sitting on. So what should the value of the cube be? Should it be white or black or gray? None of these answers are great. So first, we would like to represent colors in space in a continuous way, where we don't have these difficulties at the boundary. And the second disadvantage of the cube, or voxel, representation is that it's really inefficient.

In the universe, objects are usually continuous. Yes, there was a transition from the black mug to the white table, but most of the time the value of a cube will be similar to its adjacent cubes, because adjacent cubes land on the same object with the same color. So we're wasting a lot of resources representing every cube in space independently, even though we know that most of the time we can get away with large regions of space being represented by just one number.

So these neural networks called NeRFs help get around both of these issues. Rather than solving for all the cubes in the space around the black hole, we solve for the parameters of a neural network that leads to a continuous 3D space; we're parameterizing that space with a neural network. And why is that so important to our problem? Well, we have very, very little information that we're working off of, and so we want to try to encourage our solution to be smooth.

And the NeRF helps in this, making it possible for us to not just find a solution, but find a solution that is reasonable because it's smooth. So actually the video that we reconstruct, or the 3D reconstruction itself, is a neural network.
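For readers who want a concrete picture of that idea, here is a minimal coordinate-network sketch in PyTorch. It is a hypothetical illustration, not the paper's code: a small network maps any continuous 3D coordinate to a brightness value, so the whole volume lives in the network's weights rather than in a voxel grid.

```python
# A minimal coordinate-network sketch of the NeRF idea (PyTorch; a
# hypothetical illustration, not the paper's code). Instead of storing one
# value per voxel, a small MLP maps any continuous 3D point to brightness.
import torch
import torch.nn as nn

class TinyEmissionField(nn.Module):
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # emission (brightness) at this point
        )

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        # xyz: (N, 3) continuous coordinates; no voxel grid anywhere.
        return self.net(xyz)

field = TinyEmissionField()
points = torch.rand(1024, 3)  # query anywhere, not just at cube centers
brightness = field(points)    # (1024, 1) values sampled from a smooth volume
```

In the actual pipeline, such a field would be rendered through the assumed physics (light bending, orbital motion, polarization) and its weights fit to the observed light curve; practical NeRFs also add positional encodings of the input coordinates.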

Science Friday is supported by Zbiotics. The team of PhD scientists at Zbiotics are tackling rough mornings after drinking with their new pre-alcohol probiotic. This probiotic breaks down the byproduct of alcohol while you drink and sets you up for a great next day. Check out the cutting-edge technology for yourself at Zbiotics.com slash Friday and use the code Friday to get 10% off your first order. Zbiotics is backed with a 100% money-back guarantee, so if you're unsatisfied for any reason they'll refund your money. That's Zbiotics.com slash Friday; use the code Friday at checkout. For so many Black people, The Wiz feels like home.

The new stage revival has Broadway buzzing, and as it gears up for a national tour, we'll consider the impact this story continues to have 50 years down the Yellow Brick Road. I'm Kai Wright. Join me on the next Notes from America as we pay tribute to The Wiz. Listen wherever you get your podcasts. So what did you figure out based on this reconstruction? What did you learn?

Yes, so we've assumed a bunch of physics in getting this 3D reconstruction. But one thing that we have not assumed, that we really left completely open, is what that 3D structure of the gas looks like around the black hole. So we could have reconstructed anything. It could have been, like, light scattered everywhere, no structure at all, just a mess. So even though we constrain some of the physics, we allowed the gas around the black hole to have arbitrary structure.

And so now if you actually look at what we recover, we see that it actually recovered two bright spots, about 75 million kilometers from the black hole. That's about half the distance between us and the sun. And so around that distance, two bright spots appeared right after a bright flare. And as time progressed, those two bright spots spread out as they rotated around the black hole.
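As a quick sanity check on that comparison (a back-of-the-envelope sketch only): one astronomical unit, the average Earth-Sun distance, is about 149.6 million kilometers.

```python
# Rough check of the quoted distance: 75 million km vs. one astronomical unit.
AU_KM = 149.6e6        # kilometers in one astronomical unit
print(75e6 / AU_KM)    # ~0.50, i.e. about half the Earth-Sun distance
```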

And so this compact structure that we got actually aligns with some current theory showing what could cause flares: these hotspots. So they look amazingly similar to a lot of simulations of black holes, but it's one thing to have it in theory and another to actually see it from observation. So to me, that was very exciting.

You know, for somebody who's sort of not very connected to black holes, who maybe doesn't immediately feel a sense of wonder, what would you say to that person so that they could understand, you know, why we really do need to understand this stuff? Well, I would say, like, originally I come from more the computer science and electrical engineering areas. So originally I was not a physicist or astronomer.

And what really grabbed my attention with black holes, and why I'm just so inspired by them, is the mystery that surrounds them. And the idea of, like, black holes should be invisible, right? How can we see a black hole, or how can we understand what's going on around a black hole that's 26,000 light-years away from us?

How do we image the invisible, and how do we use combinations of amazing instruments, but also computation, and bring these things together? How does it allow us to see things that seem like they should be impossible to recover? And so to me, even if you're not excited by black holes themselves, I think this idea, that by bringing these things together we're able to do what seems impossible, is the real takeaway.

And to me, that's like, you know, the most exciting thing. I'd agree. And it's a chance to advance the technology as well, right? So I do have to ask, if you're coming at this from a computer science angle, how is it that you were able to, you know, publish work on black hole physics if you don't have an astronomy background?

So it's not one of those projects where you come up with a method and you throw it across the fence to your scientists and tell them, hey, use this method that I developed. This result really required people working together to build a method that incorporated both AI and physics seamlessly. They were really working together.

So Aviad Levis, who is currently a member of my team but soon will be a faculty member at the University of Toronto, led this paper. And he brought together an awesome team. Andrew Chael and Maciek Wielgus brought the black hole expertise that was necessary to incorporate the physics that we needed to leverage. And Pratul Srinivasan brought this amazing insight he had from developing the NeRF method originally.

And so we worked closely with both groups of people in incorporating both these state-of-the-art methods and our predictive physics to achieve this result. And so to me, that's kind of the biggest achievement of all. There's obviously some really cool science that came from this, but to me, that's what is most exciting. This was a true interdisciplinary collaboration, the kind you don't see every day, and that allowed us to get this really exciting result.

Yeah, I mean, it really does take an entire team. You know, as computer analysis and AI models get better, is there less of a need for images that humans can actually see and interpret visually? That's such a great question. I think that, you know, a picture is worth a thousand words, you know, this is an old saying. And I think that it's so true, right? You can have points on a plot, but it's another thing to see a picture.

And I think that the black hole image of M87 that came out five years ago is kind of evidence of that. You know, people had predicted that there was a black hole for many years. But it's one thing to have points on a plot and another to see a picture of this dark body with the gas surrounding it. And so I think that it just helps us so much in understanding.

And so similarly here, and in my work, I'm interested in how do we take the limited data that we have and from it, get visual representations and construct imagery of what it is that we see. And I think that you can, you know, you can argue that points on a plot might give you the same amount of information, but I think it is just a totally different experience to see a picture.

Dr. Katie Bouman is an assistant professor of computing and mathematical sciences, electrical engineering, and astronomy at Caltech in Pasadena, California. Thank you so much for talking with me today. Yeah, thank you so much. And that's it for today's episode. Lots of folks help make the show, including D. Peterschmidt, Sandy Roberts, Beth Rammie, John Dankosky, and many more. Next time: you can now buy a cap or headband that will listen in on your brain waves, but who owns that neural data?

We'll talk about rights, privacy, and neurotech. I'm SciFri producer Charles Bergquist. Thanks for listening. We'll see you soon. I'm David Remnick, host of The New Yorker Radio Hour. There's nothing like finding a story you can really sink into, one that lets you tune out the noise and focus on what matters. In print or here on the podcast, The New Yorker brings you thoughtfulness and depth, and even humor, that you can't find anywhere else. So please join me every week for The New Yorker Radio Hour.


This transcript was generated by Metacast using AI and may contain inaccuracies.