
Advances In Brain-Computer Interfaces For People With Paralysis

Apr 23, 2025 · 19 min · Ep. 1014

Summary

This episode of Science Friday explores the advancements in brain-computer interfaces (BCIs) for individuals with paralysis. Guests Dr. Matthew Willsey and Dr. Sergey Stavisky discuss their research on restoring movement and speech through neural activity decoding. They touch on the technology's potential, current limitations, ethical considerations, and future outlook, including commercialization and accessibility.

Episode description

An evolving technology is changing the lives of people with paralysis: brain-computer interfaces (BCIs). These are devices implanted in the brain that record neural activity, then translate those signals into commands for a computer. This lets people type, play computer games, and talk with others just by thinking, giving them more freedom to communicate.

For decades, this technology has looked like a person controlling a cursor on a screen. But this work has advanced, and in a recent breakthrough, a person with paralysis in all four limbs was able to move a virtual quadcopter with extreme precision by thinking about moving it with their fingers.

Another area of BCI research involves speech. Recent work has shown promise in allowing people with vocal paralysis to “speak” through a computer, using old recordings to recreate the person’s voice from before their paralysis.

Joining Host Flora Lichtman to discuss the state of this technology, and where it may be headed, are Dr. Matthew Willsey, assistant professor of neurosurgery and biomedical engineering at the University of Michigan, and Dr. Sergey Stavisky, assistant professor of neurosurgery and co-director of the Neuroprosthetics Lab at the University of California, Davis.

Transcript for this segment will be available after the show airs on sciencefriday.com.

Subscribe to this podcast. Plus, to stay updated on all things science, sign up for Science Friday's newsletters.

Transcript

Listener supported. WNYC Studios. This is Science Friday. I'm Flora Lichtman. Today on the podcast, an evolving technology has the potential to change the lives of people with paralysis. He would describe it in a way that was like, well, since my injury, this will be the first time that I can figuratively rise up out of my bed and interact with the world.

The tech is called brain-computer interfaces. They're devices that are implanted in the brain and record neural activity and translate those signals into commands for a computer. This allows people to type, play computer games, and talk with others just by thinking. Today we're checking in on this technology and where it's headed with two researchers at the front lines of this work.

Dr. Matthew Willsey is an assistant professor of neurosurgery and biomedical engineering at the University of Michigan in Ann Arbor. And Dr. Sergey Stavisky is an assistant professor of neurosurgery and co-director of the Neuroprosthetics Lab at the University of California, Davis. Welcome, both of you, to Science Friday. Thank you. Matt, I want to start with you.

You work on technology that lets people control objects on a screen by thinking. Does that sound right? Give me the 10,000-foot view of what you do. Yes, that's accurate. So the research that I work on is aimed at people with paralysis, people who typically can't move their arms or their legs and have no way to control these devices with the movements that you and I would use.

And so what we can do is actually place electrodes into a person's brain through brain surgery, then interpret what they're trying to do and use that signal to control devices on the computer screen. Tell me about this recent paper where you had a participant with paralysis in all four limbs who was able to control what looks kind of like a drone in a video game, just by thinking.

Yeah. So this is a person who had a spinal cord injury. He was implanted by Jaimie Henderson at Stanford University in 2016. What we did is we used an electrode system that could be implanted into the brain itself. We then recorded the signals coming out of the brain and created a method where we could take those signals, interpret what he was trying to do with his fingers, and then use that finger control to control a virtual quadcopter, in a way similar to how someone with normal movement of their hands would control a video game controller.

So you're reading the hand movement signals. That's right. So the person would think, okay, I want to move my thumb in this direction. And when he would do that, we would record the signals from the brain and say, oh, he was trying to move his thumb in this direction. We would learn that pattern with our computer software and then control a virtual hand, moving its thumb in the direction he was attempting to move his actual thumb, in parallel.

And is it instantaneous, the translation? The translation is not exactly instantaneous, but very close. We call it real time, so within tens of milliseconds.
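To make that decoding step concrete, here is a minimal sketch of what a real-time finger-velocity decoder could look like. It is not the study's actual software: the ridge-regression decoder, the 96-channel feature vector, the 20 ms bin size, and the two-finger output are illustrative assumptions.

```python
# Minimal sketch of a real-time neural decoding loop (illustrative only).
import numpy as np

BIN_MS = 20          # one decode every ~20 ms, i.e., "real time"
N_CHANNELS = 96      # hypothetical electrode/feature count
N_FINGERS = 2        # e.g., thumb plus grouped fingers driving the quadcopter

def fit_linear_decoder(X, Y, ridge=1e-2):
    """Fit W so that Y ≈ X @ W, where X is (samples, channels) binned spike
    counts from calibration and Y is (samples, fingers) cued velocities."""
    XtX = X.T @ X + ridge * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ Y)

def decode_bin(spike_counts, W, smooth_state, alpha=0.8):
    """Turn one bin of spike counts into finger velocities, with simple
    exponential smoothing so the virtual hand does not jitter."""
    raw = spike_counts @ W
    smooth_state[:] = alpha * smooth_state + (1 - alpha) * raw
    return smooth_state

# Calibration phase: the participant attempts cued finger movements while
# neural activity is recorded (synthetic data stands in for both here).
rng = np.random.default_rng(0)
X_calib = rng.poisson(2.0, size=(5000, N_CHANNELS)).astype(float)
Y_calib = rng.normal(size=(5000, N_FINGERS))
W = fit_linear_decoder(X_calib, Y_calib)

# Online phase: every BIN_MS milliseconds a new bin of spike counts arrives
# and is mapped to a velocity command for the virtual hand / quadcopter.
state = np.zeros(N_FINGERS)
new_bin = rng.poisson(2.0, size=N_CHANNELS).astype(float)
finger_velocity = decode_bin(new_bin, W, state)
print(finger_velocity)
```

The key idea matches what Matt describes: a calibration phase learns the mapping from neural activity to intended movement, and the online loop applies that mapping every few tens of milliseconds.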

And what's the application? So that's a good question. For many of these people, they can't feed themselves or even necessarily make a phone call. What we've focused on in the past is trying to restore activities that we, as kind of the medical community, think are important to them. But when you really ask people what their missing needs are, a lot of the things they're missing are leisurely activities, or ways that they can interact with the able-bodied community at a level without a deficit, for example, or without paralysis.

So this participant was extremely passionate about flying. The idea to control a virtual quadcopter was actually the participant's idea and one of the reasons why he wanted to enroll in the study. He would describe it in a way that was like, well, since my injury, this will be the first time that I can figuratively rise up out of my bed and interact with the world. And so it was very moving to him. It was kind of his idea. We created a game for him to fly a virtual quadcopter through an obstacle course, and he would try to fly it through and get personal record times. When he did, we'd all celebrate. He would send clips of the video to his friend. It was a very humanistic moment.

That's cool. Sergey, how's your work different from Matt's?

Yeah, so we're using very similar technologies and techniques, but for a different application. Instead of decoding attempted finger movements, which you can use for handwriting or for flying a quadcopter, as Matt just described, we are putting the electrodes in a slightly different part of the brain, the speech motor cortex, which is what normally sends commands to the muscles we use for speaking: the jaw, the lips, the tongue, the diaphragm, the larynx or voice box.

And we're decoding the neural correlates of when someone's trying to speak. So we recently had a participant, he's a man in his 40s with ALS. It's a neurodegenerative disease that has left him unable to speak intelligibly. So he has a form of vocal tract paralysis.

These same types of electrodes were implanted by Dr. David Brandman, the neurosurgeon I collaborate with here at UC Davis, in his speech motor cortex. As he tries to speak, we pick up the activity from the several hundred neurons we can detect. We run it through a bunch of algorithms that decode the phonemes, which are like the sound units he's trying to say. Those get strung together into words and sentences that appear on the screen in front of him and are then said out loud by the computer in what actually sounds like his voice, because we have some old recordings, some podcasts that he's done in the past, that we were able to use to train a text-to-speech algorithm to sound like him. So you can think of it as part of the same family of technology, but now instead of decoding hand movements, we're decoding speech movements and using that to communicate.
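As a rough picture of the pipeline Sergey describes, here is a hedged sketch of the stages from neural features to audible speech. The phoneme inventory, the stand-in decoder, and the `synthesize_with_cloned_voice` placeholder are assumptions for illustration, not the lab's actual code.

```python
# Illustrative speech-decoding pipeline:
# neural features -> phoneme probabilities -> phoneme string -> personalized TTS.
import numpy as np

PHONEMES = ["AH", "B", "D", "IY", "K", "S", "T", "SIL"]  # tiny stand-in inventory

def phoneme_probabilities(neural_features, decoder_weights):
    """In the real systems a trained neural network does this step; here a
    random linear map plus softmax stands in: each time step of neural
    features becomes a probability distribution over phonemes."""
    logits = neural_features @ decoder_weights            # (time, phonemes)
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def collapse_to_phonemes(probs):
    """Greedy decoding: best phoneme per time step, merge repeats, drop
    silence. Real systems add a language model to turn phonemes into
    likely words and sentences."""
    best = [PHONEMES[i] for i in probs.argmax(axis=1)]
    merged = [p for i, p in enumerate(best) if i == 0 or p != best[i - 1]]
    return [p for p in merged if p != "SIL"]

def synthesize_with_cloned_voice(text):
    """Placeholder for a text-to-speech model fine-tuned on the participant's
    old recordings so the output sounds like his own voice."""
    print(f"[spoken in cloned voice] {text}")

rng = np.random.default_rng(1)
features = rng.normal(size=(50, 128))              # 50 time steps of neural features
weights = rng.normal(size=(128, len(PHONEMES)))    # stand-in for the trained decoder
phones = collapse_to_phonemes(phoneme_probabilities(features, weights))
text = " ".join(phones)                            # a language model would map this to words
synthesize_with_cloned_voice(text)
```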

How does the device distinguish between inner thoughts, like inner monologue, and speech? Oh, that's a really good question. When we started this, we expected to basically not get any inner thoughts, because we're recording from the part of the brain that sends commands to the muscles. This is not the language network. This is the speech motor cortex. So think of it like the last stop on the way from thought to muscle movements.

As the person is trying to speak, this area is very active, and that's all clearly been borne out by the data. That said, there is a new study that will come out soon from our colleagues at Stanford, actually the same lab that both Matt and I trained in, where they found that there are kind of little murmurs of imagined speech. But suffice it to say that our system can distinguish between an inner voice, that inner monologue, and the attempt to speak. So it has not turned out to be a problem. You can think of it as only activating when the person is actually trying to talk.

I ask because it feels like it raises questions about privacy and, you know, about saying things you didn't mean to say.

Right, that was really important. So one of the things we tested extensively, before we enabled the system to be used kind of 24/7 at home by a participant without the research team there, is: would it, for example, be activated when he's just imagining or planning to speak? And the answer was no. Would it be activated when he's hearing speech? I mean, you could imagine how annoying it would be if the radio is on and your brain-computer interface is basically transcribing what you're listening to because it's activating the same part of the brain. That also turned out not to be the case. So really, this part of the brain is most active when the user is trying to speak, and so we get a lot of that privacy and reliability kind of for free, but it does take a little bit of careful design of the algorithm.
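One way to picture the "careful design of the algorithm" mentioned here: the word decoder can be gated behind a separate speech-attempt detector, so output is produced only when the motor cortex shows an attempted-speech signature. Everything in this sketch, the detector, the threshold, and the gating logic, is an illustrative assumption rather than the study's implementation.

```python
# Illustrative gate: only run the speech decoder when an "attempting to speak"
# detector fires; imagined or heard speech should stay below the threshold.
import numpy as np

SPEAK_THRESHOLD = 0.9   # hypothetical cutoff, tuned during supervised testing

class AttemptDetector:
    """Stand-in for a small classifier trained to separate attempted speech
    from rest, inner speech, and listening to someone else talk."""
    def __init__(self, weights):
        self.weights = weights

    def probability(self, features):
        score = float(features @ self.weights)
        return 1.0 / (1.0 + np.exp(-score))        # logistic squashing

def maybe_decode(features, detector, decode_text):
    """Run the full phoneme/word decoder only when the attempt detector fires."""
    if detector.probability(features) < SPEAK_THRESHOLD:
        return None                                # inner monologue, radio, etc. are ignored
    return decode_text(features)                   # only now produce words

rng = np.random.default_rng(2)
detector = AttemptDetector(weights=rng.normal(size=128))
resting = np.zeros(128)                            # no attempted-speech activity
print(maybe_decode(resting, detector, lambda f: "decoded text"))   # -> None: gate stays closed
```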

What was it like for you two to see your patients unlock abilities that they had lost?

It was amazing to see the years of work that we had put into making the speech neuroprosthesis actually work. That first day when our first participant was plugged in, we saw the brain signals, saw that there were good, clear measurements, and then, as he tried to speak, those words were appearing on the screen in front of him. We could see the joy in his eyes, and his wife and child were there in the room watching it happen. There were tears of joy and high fives and hugs all around. It was wonderful.

I'm sure it's why you do what you do. Absolutely.

After the break, how the field has changed and where it's headed. A decade ago, the state of the art was someone moving a computer cursor by trying to move their hand.

We went from that 10 years ago to speaking with 98% accuracy, or flying a drone in 3D space plus rotations.

Can we invest our way out of the climate crisis? Five years ago, it seemed like Wall Street was working on it, until a backlash upended everything. So there's a lot of alignment between the dark money right and the oil industry in this effort. I'm Amy Scott, host of How We Survive, a podcast from Marketplace. And this season, we investigate the rise, fall, and reincarnation of climate-conscious investing. Listen to How We Survive wherever you get your podcasts.

Matt, what do these BCIs look like? Like, how big are they? What should I picture?

Yeah, so the actual chip that goes into the brain for the ones that we use is about the size of your thumbnail. And it looks kind of like there's a flat portion like a thumbtack, but instead of just a single thumbtack, there's about 100 thumbtacks that are very, very small. And this device is then implanted into the surface of the brain. And there's a gold wire that exits through the bone, goes underneath the scalp. And then the gold wire connects to a pedestal, which goes through the skin.

And then a connector can attach to the pedestal, which connects the whole system to a computer.

What about for you, Sergey? Yes, we're using the same types of electrodes, so everything Matt described holds true. Once those signals come out of the brain, they go through, literally, an HDMI cable to kind of a little box that sends them to a bunch of computers. And really, with pretty simple engineering, that could be made much smaller, so the external component could just be a single laptop or a computer. And like Matt said, there are now multiple startups developing various forms of these electrodes that are going to be fully implanted and fully wireless.

So I think in the near future, instead of thinking of a wire coming out of someone's head to a bunch of computers on a cart, think of it as you don't even see anything, kind of like a pacemaker. It's transmitting data, maybe to something in their pocket, and that's sending it to a bigger computer somewhere else or to the cloud. That's not here yet, but I think very soon that's going to be the reality.
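For a rough sense of what a fully wireless implant would need to transmit, here is a back-of-envelope calculation. The channel count, sampling rate, and bit depth below are typical published figures for intracortical arrays of this kind, used as assumptions rather than numbers from the interview.

```python
# Back-of-envelope data rate for streaming neural data from an implant.
# Assumed values (not from the interview): ~100 channels, 30 kHz sampling,
# 16 bits per sample, roughly typical for intracortical recording systems.
channels = 100
samples_per_second = 30_000
bits_per_sample = 16

raw_bits_per_second = channels * samples_per_second * bits_per_sample
print(f"raw stream: {raw_bits_per_second / 1e6:.0f} Mbit/s "
      f"(~{raw_bits_per_second / 8 / 1e6:.1f} MB/s)")

# On-implant processing (spike detection, binning into ~20 ms feature vectors)
# can shrink this by orders of magnitude before transmission, which is part of
# what makes a pacemaker-style wireless device plausible.
feature_bits_per_second = 100 * (1000 // 20) * 16     # 100 features x 50 bins/s x 16 bits
print(f"binned features: {feature_bits_per_second / 1e3:.0f} kbit/s")
```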

Can you give me a sense of the scope of use? How many people have brain-computer interfaces? So I would say roughly 50 people worldwide that we're aware of have had systems like this. Can you consult your doctor and ask for one?

That's a very good question. Yeah, the devices that Sergey is describing, the ones that require brain surgery to insert them into the brain, are investigational devices that are part of research studies. As far as implantable brain-computer interfaces go, they're still not available for widespread clinical use, but the technology is currently being developed.

We're describing mostly systems that implant directly into the brain, but there's a whole variety of approaches: some go directly into the brain, some lie on top of the brain, and some place leads on the surface of the skin. Now, the capabilities of all these devices differ depending on how close you can get to the brain signals themselves.

You could see a world with many different types of brain-computer interfaces, where the device you'd want to use would depend on what you need it for.

Well, I want to talk about this a little bit. I mean, you two are both at academic institutions. Are private companies developing this technology? Yeah, there are several private companies. Some of the more well-known ones include Neuralink.

Neuralink is Elon Musk's company. That's right. Precision Neuroscience. Echo Neuroscience, which was founded by Eddie Chang, who's kind of a pioneer in this field. And there are several others. Synchron, which takes kind of an interesting endovascular approach, it goes in through the veins, so it's arguably less invasive. Well, what is the market?

That's a great question. It depends on what the problem is that you're trying to fix. For example, you know, when I'm looking at people with paralysis, studies show that somewhere on the order of 5 million or so people in this country have some sort of motor paralysis. It can come from a variety of things: spinal cord injury, but also stroke.

You know, for Sergey's use case, and I'll let him comment on this, which is people who have difficulty producing speech, it's a different market. Yeah, so for vocal tract paralysis, I believe in the U.S. it's roughly 20,000 people a year. Most of that would be people with ALS, and then also some forms of subcortical stroke. But there are efforts now towards building not speech neuroprostheses but language neuroprostheses, which could help people who have lost the ability to speak due to more common types of stroke. That is very early days. This has not been done yet, but our clinical trial, our collaborators, and other groups are starting to think about whether we can go even more upstream, toward language brain areas as opposed to speech motor brain areas. And that could help hundreds of thousands, if not millions, of people potentially.

How so? So the idea there is that there's a pathway from a thought, to the specific words that you're trying to say, to the actual sounds. Right now, what we've done works on that last step. Someone knows exactly what they're trying to say; those words are in the speech motor cortex, but they're not reaching the muscles, and that's what a speech neuroprosthesis helps with. If we go one step back, from idea or concept to the exact words, there are various types of language disorders where that connection is broken. That often happens after a stroke, and it affects millions of people.

We think that language information is still there, or the thought, the semantic information is there upstream. And so an active area of new research is... Can we identify the neural signals corresponding to the idea or the meaning of what someone's trying to communicate and start to decode that from their brain? But that's going to take a while. Do you have a timeline in mind for when these devices might be more broadly available for people who have paralysis or who have lost speech?

Yeah, that's a great question. I'd be very interested to hear what you think, Sergey, but my estimate would be somewhere in the range of a 10 to 15 year timeline for when we could hope that there would be FDA clearance for clinical use of one of these devices, so that you could go to your doctor and say, hey, I'm having this problem, and they might say, this is a potential therapy for you.

I think I might be a little bit more optimistic. I would hope that in five years we might see market approval, but even before that there will be larger clinical trials, so it might be much easier for someone who wants one of these devices to enroll in a trial.

You know, for people who are not in the field, you can imagine the sort of dystopic sci-fi concerns about these devices getting hacked or reading your mind. For you two, who are experts, are there things that concern you, and what are they?

I think we do need to be careful to build in cybersecurity and privacy features. That said, I think these are very solvable engineering problems.

We're still at this research phase where just getting it to work at all is really, really hard. And we're excited that we're doing that. So, you know, I think it's a little bit of a step away to kind of worry about the sort of dystopian applications of it. But you are right that we should be thinking about it.

I'd love to worry about dystopian applications. I think we should always be worrying about dystopian applications. I don't disagree. I just want to echo what Sergey said. We're super excited when these devices work and do what we intend them to do, and some of these more nefarious applications seem like they'd require a lot more capability than we're able to provide at the moment. But it's important, I think, to be open and transparent about what these devices are capable of. We just have to be diligent, take it one step at a time, and be open and transparent with the community so that we can work together as a team.

Give us a sense of where this technology is. Do you feel like you're really on the cutting edge? Have things changed a lot in the last decade?

Absolutely. A decade ago, the state of the art was someone moving a computer cursor by trying to move their hand, as decoded from their brain activity, and getting it so they could click on buttons or type letters on a virtual keyboard reliably was amazing. There were high-impact papers that everyone was super excited about because someone could suddenly hit every button correctly instead of missing half of them. We went from that 10 years ago to speaking with 98% accuracy, or flying a drone in 3D space plus rotations. Or, in other applications, people walking again after spinal cord injury, or people feeding themselves with a brain-controlled robot arm. So I think the pace over the last 10 years has been absolutely incredible.

We're in this field, so maybe we're biased, but it feels like one of the more exciting areas of medical science. That's about all the time we have for now. I want to thank you both for joining me today. You're very welcome. Oh, it's my pleasure. Yeah. Thank you.

Dr. Matthew Willsey is Assistant Professor of Neurosurgery and Biomedical Engineering at the University of Michigan in Ann Arbor. And Dr. Sergey Stavisky is an Assistant Professor of Neurosurgery and Co-Director of the Neuroprosthetics Lab at the University of California, Davis. And that is about all we have time for. Lots of folks helped make this show happen, including Sandy Roberts, Robin Kazmer, Charles Bergquist, George Harper. I'm Flora Lichtman. Thanks for listening.

This transcript was generated by Metacast using AI and may contain inaccuracies.