Welcome to Episode 17 of the Language Neuroscience Podcast. I'm Stephen Wilson and I'm a neuroscientist at Vanderbilt University Medical Center. If you're a regular listener, I apologize for the delay between episodes. It's been a busy semester, and I'm sure that everybody can empathize. But I hope you'll agree that this episode is worth the wait. My guest today is a living legend of our field, Elissa Newport, Professor of Neurology and Rehabilitation Medicine at Georgetown University Medical
Center. Elissa is one of the world's leading researchers on language development. I'm going to ask her about her seminal work on statistical learning, but our main focus today will be on her more recent line of work on language outcomes in kids who experienced perinatal stroke. It's very cool, very exciting work, mostly unpublished, and I hope you'll enjoy our
conversation. I'd also like to say a big thank you in advance to Marcia Petyt for transcribing this episode, and to the journal Neurobiology of Language for providing financial support for transcription. As many of you know, Neurobiology of Language is a new open access journal published by MIT Press, which is a nonprofit, and devoted to this field. I'm a big fan of the journal. I'm on the editorial board. And I think it's a model of the direction scientific publishing needs to be going.
I've published two papers there already, and I hope that you'll consider sending your best papers there, too. I think we should vote with our feet for an open access nonprofit future. Okay, let's get to it. Hi, Elissa. How are you today?
Okay, how are you?
I'm good. Thanks so much for joining me on the podcast.
Thank you for inviting me.
It's like a cold rainy day in Nashville today. How are things where you are?
Nice. It's a little chilly, but sunny, very nice in DC.
Alright. So, I was thinking we could get started by talking about, I always like to ask people how they became a scientist. So when you were a kid, did you always know that you wanted to be a scientist? Or did that kind of come later?
Yeah. Later. When I was a kid, I liked math. I was good in math. But I also really liked English, and I wrote poetry and I was headed off in a different direction. When I started college, I was an English major actually and then,
during college, I transferred. I started out at Wellesley College, where the humanities were very strong and I was very happy as an English major and writing, and then I transferred to Barnard College part of Columbia University and the English Major was quite different, but I got interested in psychology. So when I was in my junior and senior years in college, I started taking lots of psych courses, and I became the TA, the teaching assistant, for the undergraduate classes
behind me. I took care of the rats in the Learning Lab. I did experiments on learning, I started working at a place for disabled people that involved giving them M&Ms and training, toilet training them and so forth. And then I went to grad school in my first year in Clinical Psychology and quickly decided that, that wasn't really my favorite area and ended up studying language acquisition in grad school.
Well, okay, so how did the switch to language acquisition happen? Because that was obviously pretty pivotal for you.
Actually, what happened was that in my first year at Penn in grad school, I took a proseminar in Physiological Psychology, and one of the assigned readings was Eric Lenneberg's, the Eric Lenneberg Science article published in 1969, about the biological foundations of language, and I've really kind of followed the questions that he raised ever since then. That was a very, very important article. I thought it was terrific. I thought it was fascinating. I wanted to study
language. I started going to see aphasia patients with Paul Rozin and Oscar Marin and then I decided I really needed to learn linguistics. So I went to Swarthmore College, where Lila Gleitman was teaching linguistics and took linguistics courses from her.
Okay, well, I guess you just have a knack for finding the right people.
She was great and then I ended up deciding that I wanted to work on language acquisition with her. So that actually is what I did for the rest of grad school.
But she wasn't at Penn, so you kind of collaborated across institutions?
So at the time, there was a nepotism rule at Penn, and Henry Gleitman was on the faculty in the psych department at Penn, and so his wife Lila was not allowed to be hired.
Okay. I guess that's one solution to the two body problem. Maybe not a very good solution.
Yeah. She was the linguistics department at Swarthmore and later, she moved to Penn in the Ed school, again, because she wasn't allowed to be hired in the Psych department. And then, sometime after that, they got rid of their nepotism rules. It wasn't just Penn. This was across the United States and so they got rid of their nepotism rules, and at some point, Lila became part of the psych department, but not while I was a grad student. While I was a grad student, she was at
the Ed School, I think. It's hard to remember. But Henry and Lila were jointly my advisors.
Okay! What great people to get trained by! And that's so interesting for me to hear that you know, what really got you excited about this field was the Lenneberg paper. Because obviously, we're gonna come back to that in a moment. But before we kind of get on to our main topic for the day which is your work on perinatal stroke, I wanted to kind of ask you about the paper that made you famous.
I mean, well, you know, science famous, which is "Statistical learning by 8-month-old infants" by Saffran, Aslin and Newport in 1996. One of the most influential papers in psycholinguistics of all time, I would say. I was wondering if you could tell me how it came to be?
Sure. Well, so when we started this work, Dick Aslin and I were both at the University of Rochester, and we started working with Jenny Saffran, who was initially a
first year student with us. And Dick and I actually had discussed, before Jenny arrived, that she was interested in word segmentation and word learning from her previous undergraduate work, and we had both read a paper by Hayes and Clark that used a miniature language, an artificial language with non-linguistic sounds, to look at word segmentation, following some of the distributional work in linguistics, and we decided that would be a nice project for Jenny to work on. And so when she
came, we suggested that, as one possibility for her. She was very interested and so we immediately started making some artificial speech streams and looking for synthesizers trying to synthesize streams of speech and how to test whether people, this was initially adults. So we ran it first with adults, not
babies. There's a 1996 paper published in the Journal of Memory and Language (formerly JVLVB) that is the word segmentation study with a much more complicated language, much more complicated words and statistics, that was done with adults. So the first study that we did together was actually with adults. And I should say, backing up, that I've worked on miniature artificial language learning studies since 1980 or even before; my first paper on that topic was published in 1981. So working on artificial languages was not a new thing for me. That, I'd been doing for quite a long time. I had started doing some artificial language work with Jim Morgan when he was a grad student; he actually was Jenny Saffran's advisor in her undergrad work. So there are all kinds of connections there. Anyway, we started making streams of speech. We were very interested in whether people could really just listen to streams of speech and acquire the structure of the words they contain. But we didn't at that time have a formulation of what later became statistical learning. It was really one in a series of artificial language experiments that I had done, but the others were looking at small grammars, small syntax acquisition, and this one was looking at word
segmentation. So it was sort of in between what Dick and I were interested in. Dick usually looked at the sound stream, and how babies and adults acquire the sound stream. I had been looking at small syntax problems, and so there we were, sort of in between, with the word segmentation problem and Jenny's interests. So she ran that study with adults. It worked great. That paper was written up, and then Jen and Dick thought they wanted to try a simplified version with babies.
I guess we should just make sure, I mean, a lot of people know this work, but just to kind of clarify: the central challenge that you're investigating is, you know, that words don't come with spaces in between them, right? When we read, we see the words, you know, with spaces between them, and they kind of have their individual identities, and that might sort of mislead us into thinking that
that's easy. But it's not easy, because language is continuous, and a big challenge for the learner is to figure out where the boundaries are.
That's right. So that was the problem. That was the problem that we started developing in a miniature form: that in the actual acoustics of speech, natural speech, there are no pauses and no prosodic cues to what the individual words are. So there was a problem already in the literature of how babies would do word segmentation. How would they decide what the beginnings and ends of the words were, because they aren't signaled with anything that you could start with without already
learning the language. So, as fluent speakers of a language, as fluent listeners, we hear words as though there were pauses between them, but there really aren't. There are some cues called pre-pausal lengthening. So, under some circumstances, the final syllable of a word before a pause, a quote unquote 'pause', is actually lengthened and that is perceived as though it were a pause. But even pre-pausal lengthening isn't an acoustic cue that you could read, that
you could use. So it wasn't clear how people were doing word segmentation, and the earlier hypotheses were that you had to know the meanings of the words in order to pull them out of the speech stream. So we were trying to approach this as saying, no: even if you have a completely continuous stream, even if you synthesize the stream so that we know for sure there are no acoustic cues, and there's no meaning at
all, we think that babies and adults might be able to pull out the words, to identify what the words are, as opposed to just the word boundaries, because of co-occurrences: the syllables that most frequently come together in a corpus of speech would be the words, and the syllables that fall across a boundary would have much lower probabilities, because the combinations of words change all the time. So that was what we were mocking up in those synthetic speech streams. In the adult language, there were six words that had lots of reused syllables, and in the baby language, we made a really simple version of it. So the baby language was the second study; we had already demonstrated that adults could do it. But we knew actually that people would be more impressed with the abilities if we could show it in babies. So that's what we did, and that paper was published in Science.
Yeah, it sure was. So what does the stimulus sound like?
The stimuli just sound continuous. The ones that we presented to babies are two minutes long with no breaks.
Can you do a demo for us? Or is that beneath your dignity? (laughter)
No, I really can't. Jen Saffran is really good. She's actually a singer and so she can do live demos of this. I don't know if she still can, because it was 25 years ago, that we did this study. But no, it's synthetic speech and so it has this kind of monotone character. It's not really monotone. There is some prosody in the speech, but it doesn't have anything to do with the
word boundaries. We used a natural speech synthesizer, but all the syllables were edited to be about the same length, and they had no prosody, no acoustic cues to the word boundaries. So they have a more natural quality, but they're pretty monotone, and it just goes on for two minutes without a break. I can play it for you if you want, but I'd have to find the stimuli.
Well, you know, I guess I'm just gonna have to take it upon myself to imitate how I think it sounds, which is something like "Bali goo pa Liga tula bardi goo pa bardi". like that, right?
That's right. Uh huh. Good for you. (laughter)
I'm not happy that I had to do that, but anyway... (laughter) Okay, so you play that to the infants and, yeah, what did you find?
Well, so we played this continuous stream for two minutes. In the adult version, it was a much more complicated statistical organization, because the syllables were reused in different words, which is more like real languages. In the baby version, there were four words and each syllable appeared only in one word. So there was a very high consistency of how the syllables were organized inside the words and then at the boundaries, it was random choice
among the other three words. So what we thought of later, actually, not when we originally designed it, is that you could identify a very specific statistic that would allow you to group the syllables into words, if you were keeping track of the recurring sounds and the statistics of the corpus. When we originally did it, we just said, well, this is like the structure of words, we're going to sequence them continuously, and so the features of words should lead you to be able to pick out the words from listening. But we didn't have a particular heuristic that we had identified as what participants
might be using. But what happens in both babies and adults is that after a certain amount of exposure, we then give them basically a two alternative forced choice, we give them a choice between words, and what we call part-words, which is the end of one word and the
beginning of another. Those are very, very similar; they both occur in the stream of speech, but one of them, the words, has a more coherent, high-probability statistic binding its syllables together across the corpus, and the part-words have a lower transition probability at the boundary. What we ask adults is to pick out the one that sounds most familiar, and what we look at in babies is how long they look at a speaker that's playing the words or the part-words. The
result is comparable. Babies look longer when it's a part-word. So they're showing a novelty effect, and that tells us that they've identified the words, that those are more familiar to them by the end of two minutes.
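To make that statistic concrete, here is a minimal illustrative sketch in Python. It is not the original stimuli or analysis code; the words and syllables are invented placeholders. It simply shows how forward transitional probabilities separate words from part-words in a continuous stream.

```python
# Illustrative sketch only: forward transitional probabilities over a continuous
# syllable stream, in the spirit of the segmentation statistic described above.
# The "words" below are invented placeholders, not the actual stimuli.
import random
from collections import Counter

words = ["tubido", "gakupi", "lamoru", "minafe"]  # four invented trisyllabic words

def syllabify(word):
    return [word[i:i + 2] for i in range(0, len(word), 2)]  # split into CV syllables

# Build a continuous stream: random word order, no immediate repetition, no pauses.
stream, prev = [], None
for _ in range(300):
    w = random.choice([x for x in words if x != prev])
    stream.extend(syllabify(w))
    prev = w

pair_counts = Counter(zip(stream, stream[1:]))   # counts of adjacent syllable pairs
first_counts = Counter(stream[:-1])              # counts of each syllable as a "first" element

def transitional_probability(a, b):
    """P(next syllable is b | current syllable is a)."""
    return pair_counts[(a, b)] / first_counts[a] if first_counts[a] else 0.0

# Within-word transitions approach 1.0; across-word transitions hover around 1/3,
# because any of the other three words can follow at a boundary.
print("within word  tu->bi:", round(transitional_probability("tu", "bi"), 2))
print("across words do->ga:", round(transitional_probability("do", "ga"), 2))
```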
Yeah, it's an incredible finding and I can see why it made such a splash. When I look back at that literature from the late 90s, after you all published your paper, it seemed like it stirred up a lot of debate. Some people interpreted this incredible learning ability that you had demonstrated as evidence against the Chomskian notion that language is innate, and the Chomskians pushed back and said it actually has no bearing on our claims. What did you think about all that?
We actually wrote one of the responses. So the paper was initially published in Science. Then there was a series of letters that were written to the editor of Science, one of which was from us. So, Liz Bates and Jeff Elman wrote a paper saying, we always knew that Chomsky was completely wrong, babies can learn anything and this demonstrates that you don't need anything innate in order to learn languages. And we actually wrote back and said, that's not what we think at all.
We wrote in and said word segmentation certainly requires exposure to the language, and what we're demonstrating is that you can acquire the words from distributional cues. But we did not mean to say that those same kinds of analyses could learn an entire language. That's not true, and we disagree.
We think that learning is a combination of innate biases about what kinds of statistics you can compute and what kinds of structures language will present, and the ability, the extraordinary ability, of humans, especially babies, to learn from input. So we think it's a combination of nature and nurture, and the Chomskians all wrote, you know, more strongly than we did, but we agreed with a lot of what they said: this experiment doesn't demonstrate that. I also corresponded with
Noam Chomsky, actually. When the paper was in press, we learned to our surprise that a perspectives article about the paper was to be written by Liz Bates and I guess it was Liz and Jeff together. But we were not told that they were asked to do this. We wouldn't have asked them in particular, to do this. And we never saw the perspectives article before it went to print. It presented our findings in a way that we would not have agreed with actually.
So we were actually quite upset that our article was being presented in a context that we had no control over. But I thought it would be prudent to send the paper to Noam and say, hey, look, here's our paper. We hope you really like this. But the perspectives article isn't what we ever meant. So, I just wanted to give you a heads up. And Noam wrote back right away, and said the work is great. I think it's really cool. Actually, look at a footnote in my 1955 paper where I suggested
that this could be done. And he did! (laughter)
Oh, that's classic. (laughter) Yeah, that's an interesting choice that they, you know, went in such an opinionated direction for the commentary, but I guess....
Liz told me later that she was one of the reviewers of the paper when it was published in Science. There's an editor at Science who works on editing the papers that come in, in our field. And he was the one who asked Liz to review the paper and then asked her to write the perspectives article.
Yeah, well, I know that she always loved the paper, regardless of whether you guys had a disagreement over how it should be interpreted.
Yes. Actually, the reason is that I went back and forth about this, because I tried to argue to her at the time that if you're going to write a perspectives article about someone else's work, you should be respectful of what they said. And she disagreed. We became really good friends again when she got ill, and I wrote to her and said, you know, none of this matters when someone's ill and we're colleagues. I had a long correspondence with her before she passed away.
Yeah. She really liked the back and forth of scientific debate and she was always willing to, you know, put it aside for the personal. So....
I also became very good friends with Jeff Elman, who I also miss a lot.
Yeah, he was a lovely guy. Okay, cool. So, I know that you continue this very productive line of research on statistical learning, like, to the present day. But I'd like to shift gears now and talk about how you got into neuroscience. So sometime in the last decade, I don't know exactly when, you evidently decided to take your research in a new direction and start studying language in people with perinatal strokes. What made you decide to, you know, shift what you were focusing on?
Well, I've always been interested in the nature/nurture issue and in critical periods. In particular, like what is special about the way that children learn language that doesn't continue in the full form, throughout our lives? What is it that makes children so gifted at learning language?
The original Lenneberg argument, as I read it in my first year in grad school, is that there was a critical period, and he included in that, in his mustering of evidence for a critical period, that there was extraordinary recovery from injury to the brain in children that was quite different from what you see in adults. He had done that, he collected evidence on that topic, by going through patients that were accessible to him and by going through the clinical literature, looking at case reports. And so in his book, and in that Science article, he has a kind of tally of cases that were injuries to the left and the right hemisphere at different ages, and how recovery looked, how well the language came out. They were just classified as good or not good, as impaired or not impaired. And the argument that he presented from those data is that, when you're a baby, damage to the left and the right hemisphere are equally likely to intrude on language acquisition. And the argument was also made that if you get damage to the left hemisphere, language will recover in the right, and that gets less and less true as lateralization of function develops to the left hemisphere in the healthy brain; you no longer see recovery, perfect recovery, with injuries, and you no longer see a shifting of language to the right hemisphere. So that was already
in the literature. I worked for a long time on issues about critical periods and whether people really did show a critical period for second language learning and also for first language learning. But all of the previous work I had done
was behavioral. But when I was at Rochester, I was the chair of the department of Brain and Cognitive Sciences and so I interacted lots with people in the Medical Center, and became colleagues with people who were in neurosurgery and in neurology, got interested in actually looking at this original phenomenon of Lenneberg's. But I couldn't do this work in Rochester. There aren't enough.... I mean perinatal stroke is very rare. There is not the caseload in Rochester to do that kind of
work. But then I was offered a job at Georgetown to come and start a center on brain plasticity and recovery. And I was very anxious to start moving into doing the neuroscience part of brain plasticity, and really looking at changes over age in injury and recovery. So I accepted the job at Georgetown, and NIH offers a K award, the K18, that is for senior investigators to have mentored training in a new
field. So I actually applied for this K award, and I eventually got it, and I was mentored for two years by a bunch of neurologists, pediatric neurologists and stroke neurologists, here at Georgetown and at Children's National Medical Center. I went to clinic for two years, I went on rounds. I sort of did the whole thing, digging into learning about neurology.
And who were your key mentors on that?
My main mentor was Alex Dromerick, who was the Chair of Rehab Medicine and my Co-Director of the Center for Brain Plasticity. Also Peter Turkeltaub, who's in the Department of Neurology here; they're both adult stroke neurologists. And then I met Bill Gaillard at Children's National Medical Center; he's the head of child neurology. And I met Jessica Carpenter, who at that time was also at Children's
National Medical Center. And I went to stroke clinic at Children's with Jessica Carpenter for two and a half years, once a week, and saw all the kids who came in with a stroke and so forth. And then Alex and Peter, really Alex, really helped me to develop a line of research and understand how you do recruiting with MDs and how you get the right team of people involved when you do a big study like the one we've turned out to do.
Yeah, I just, I just love this story that you, you know, basically, like completely retrained in the new kind of science after having, you know, an ongoing and successful career doing a different kind of science.
So I've done this before. I mean, I know it sounds unusual, because I'm of course aware that most people don't do that, but I actually do it every 15 years or so. When I was a grad student, I worked on language acquisition and mothers' speech to kids, in English of course, and then in my first job when I got out of grad school, I was really interested in learning sign
language. And so I took a job at UCSD, I was lucky to get an offer at UCSD, which was right across the street from Ursula Bellugi's lab at the Salk Institute, and learned to sign. I learned sign language, I met my husband, Ted, who's deaf and a native signer, and I worked on sign language, which I had never worked on before, for quite some time and
still do a little. And then I started doing artificial language learning work and did statistical learning work and then I moved into working on perinatal stroke. But for me, it's all been part of the same picture. All about sort of what's special about kids and what's special about language acquisition.
Yeah, now I see the connection. It's not like you're working on a random collection of disjointed topics, but it's just different skill sets and I think that's really remarkable.
It's very, very fun. I mean, I think I would be bored if I stayed working on the same thing. I really love learning new things and being able to be sort of a grad student or a postdoc over and over again.
Yeah, I get that. I mean, that's kind of the fun time in many ways, isn't it?
Yes.
It's certainly where the, you know, the slope of acquisition of knowledge is steepest. So if you enjoy being on that steep part of the slope...
Yes!
That's a good strategy. So let's talk about what you've learned from this line of work. Can you tell us about these perinatal stroke patients? Who are they? What's the incidence? Do they have other health problems? What is it?
So at the beginning of any discussion about this, way back with Lenneberg, for example, I think people didn't really know that there was much stroke in children. So Lenneberg just talked about brain injuries. But in much more recent years, investigators who work on pediatric neurology have put data together from around the world and discovered sort of what the syndromes are. So this has been much slower going and required a kind of worldwide
collaboration. Gabrielle deVeber is one of the central people who's organized this. Because any kind of stroke in children is very rare, it hasn't really been known until recently what exactly you could call the main cluster of syndromes. Perinatal stroke is a type of arterial ischemic stroke. So this is a clot in an artery that reduces or stops blood flow to a particular area of the brain, just like most
strokes in adults. Strokes are either hemorrhagic, so hemorrhages, or clots, and for scientific purposes it's much better to look at a clot kind of stroke, because you can predict the area of damage. A hemorrhage goes all over the place; blood is toxic to the brain, and so hemorrhages are much less circumscribed. So perinatal stroke is a type of stroke that occurs right around birth; the kids that we study usually have their strokes within a few days of
birth. It's rare, but relatively common compared with other kinds of strokes in babies, which don't happen much; birth is really hard on the brain, and it has lots of effects on other systems. Nobody knows what causes perinatal strokes, but they're somewhere around one out of 1,000 or 3,000; people don't know, and sometimes people say one out of 4,000 live births. So the incidence isn't quite known, but that puts it as
rare but not super rare. It's much more common than childhood stroke; childhood stroke is almost unheard of, because children don't have any plaque in their arteries, and that's what causes strokes later. So the most common stroke at the perinatal time is a left hemisphere middle cerebral artery stroke, which means it's the left hemisphere territory that pretty much is language in the adult.
So why left? Why is there a hemispheric difference?
The hemispheric difference is due to the way the arteries run out of the heart. So in adults, stroke material, plaque, comes from the carotids, and it's evenly distributed on left and right. But when it's an embolism coming out of the heart, there's a straight path to the left and a much more complicated path to the right. And so we see the consequences
of this. Most of the kids that we see have left hemisphere middle cerebral artery strokes; the middle cerebral artery is this big artery that serves most of the frontal and temporal lobes. And it's mostly left for the reasons that I just said, and so those are the really common ones. We have a lower incidence of right middle cerebral artery strokes, and so we're looking at kids who have either right or left but
not both. So these are kids who have one injured hemisphere. The injury could be to the anterior or the posterior regions of that territory, or the whole territory, and so we've got a mixture. We are looking at pretty big strokes, so we eliminate kids who have little teeny, weeny strokes. Many of these kids, because this is a birth injury that results after a normal pregnancy, don't have other disorders. They were healthy until the time of the stroke. The pregnancies were
full term and healthy. So they don't typically have other disorders, except that there are consequences of having a stroke, of course, and once you have a stroke, there's a somewhat higher likelihood of seizures. And so we look through the medical records and try to make sure that we get kids who are not having a high seizure burden, or any seizures if possible.
Right. I mean, a lot of these kids, it's not even known at the time that they had a stroke, right?
That's right. So if they have a seizure in the hospital, in the nursery when they're born, then they get sent for imaging, but otherwise babies don't get imaged. So nobody knows that they had a stroke. And in newborns, movements are not cortically controlled, so even if they have a big stroke to the motor areas, you're not gonna see asymmetric movement, you're gonna see perfectly symmetric movements.
So later, as the cortex starts to control motor activity, kids who have a stroke to the motor region will show an impairment of movement on the opposite side of the body, but at birth, that won't show up. So if they have seizures in the nursery, which sometimes happens maybe a third of the time, they get sent to do imaging, and you can see the
stroke on imaging. If they don't have any abnormal behavior, then they go home, and they're perfectly healthy, and then the mom starts to notice, as they get to be like three, four months old, that they're not using the right side of their body. And then they'll get taken back to the pediatrician, often repeatedly, with the pediatrician saying, oh, you're just an anxious...
You are just paranoid.
And then they'll do imaging and see a big stroke.
Well, and then I'm guessing, if it doesn't affect the motor area, it may often not be discovered at all until much later, incidentally, if ever.
Yeah, I mean, a lot of them do affect motor behavior. And the kids that we see, we have chosen to look at when they're much, much older; I've wanted to look at the long-term outcome, and so we recruit kids who have had a perinatal stroke at age 12 or older.
Right. So yeah, is that a practical decision that you made to focus on that, you know, kids 12 and older?
No, it's impractical and we would have a much easier time if we followed them in early childhood, because that's actually when they're identified. What we have to do to find them later is go back into medical records and then try to contact people 12 years after they were at the hospital. So we actually have a much
harder time finding them. I see kids in the clinic when they're little, but they don't have, they don't necessarily continue going regularly to see the neurologist because they don't have neurological problems typically. So in the healthiest cases, they'll see a neurologist in the first few months, and they'll get physical therapy, because if they have a motor impairment, they need to have physical therapy, sometimes Botox in the muscles, and so
forth, and then they're okay. And they get seen again when they're at preschool, because they need to have neuropsych exams for school. And then they go into school, and typically, by the time they get to be adolescents, they often will have extra time on tests, but they're at grade level. I mean, they're cognitively normal. They have executive function impairments that are pretty common. So they will have a little bit of a reduction in short-term memory, and they'll have extra time on
tests. If you give them a speeded task, a fluency task, they're a little slower. But otherwise, they're actually perfectly normal kids.
Yeah, I mean, so let's kind of like get to the very central observation of this whole line of work. I mean, how's their language?
So, it is reported in the literature, actually by Liz Bates; Liz Bates worked in this field also. She looked at the young children, and what they found when they followed kids with perinatal strokes at younger ages is that left and right hemisphere strokes have an equal effect on language, but the kids are somewhat slow in their language development. When we look at them as adolescents and young adults, their behavioral language tests look totally
normal. Totally normal. And this is focusing on tasks that test complex sentences. For example, if you give them tasks that include executive function, there are standardized tasks where you, you know, put phrases on cards on the table: now make as many sentences as you can. I mean, that requires problem solving and memory and so forth, not just language. The right and the left are a little impaired on those; they're on the low side
of the normal range. But if you give them just ordinary sentence comprehension tasks, online sentence comprehension, they're perfectly normal, like their siblings, that's their controls. And if you give them production tasks, like the frog story we use, they see a picture book that has no sentences written down, it's just pictures, and they have to tell the story. Then we record them and give the recordings to speech pathologists for scoring. There's no difference in their speech from their
siblings, either. So by the time they get to be adolescents and young adults, eliminating the ones that have severe seizure disorders, we don't see them, but the ones who are otherwise healthy, and they do occasionally have seizures, but not huge numbers of seizures, breakthrough seizures sometimes that have to do with having had a stroke, their language is perfectly normal.
Yeah, so it's a pretty stunning finding. I mean, maybe it's not surprising if you're Lenneberg, but I think it's kind of surprising for most people to learn this.
It is. It really is, I mean, especially if you look at their imaging. I even hate to show their parents their imaging, because parents don't usually know how to read an image they might have once been shown. But if you look at the image of their brain, these are very large strokes that we're seeing, and we have one young lady that we've studied who's lost her entire hemisphere from a stroke, and the images just look kind of shocking. If you look at them visually, there's so much brain tissue
missing. And their strokes are really in the language hemisphere, in the normal left hemisphere language centers, but their language is quite unaffected, and if you talk to them, you would never know that they had a stroke; they're perfectly normal to interact with.
Yeah, it's kind of amazing.
It really is.
So now, you have the ability to do something that Lenneberg could never do. Or even Broca, who, as I'm sure you know, had one patient along these lines and hypothesized about what was going on. But you actually have fMRI, and you collaborate with Bill Gaillard, and he has this task called the auditory definition decision task, which you use to map language in these kids.
Yes.
Can you tell us about that paradigm, why you chose it?
Yes. So this is a paradigm that Bill Gaillard and his group have been using for some time with epilepsy. Bill is the chief of epilepsy at Children's. He's one of the collaborators that I mentioned earlier and so they focus on the effects of epilepsy on the brain. And they have developed this task. It's highly reliable, and a lot of times they use this task at Children's before they do surgery for removing
epileptic foci. So they can use it for that: the normal procedure in adults used to be that you had to do open brain surgery to figure out where the focus is, but you can use imaging instead. The task is a block task. There are blocks in which the listener lying in the scanner will hear a series of sentences like, a big gray elephant, sorry, a big gray animal is an elephant, and they have to push a button if it's true. Or, a big gray animal is a chair, and they are not supposed
to push the button. So there are blocks of sentences, some of which are true and some of which are silly, as we tell the kids, and they're supposed to push a button when they hear the true ones. That is to provide evidence that they're listening, that they're doing okay and processing the sentences during the blocks, and we do scanning during those blocks. And then that is compared to other blocks, randomly presented, that have the same audio sequence
backwards. And so the backwards speech has all the auditory properties of the sentences, but it's not comprehensible. So you look at the parts of the brain that are calling for blood flow and are active during the forward sentences, and then take away the areas that are calling for blood and active when you're just listening to backward sentences, which don't mean anything. The difference ought to be those regions of the brain that are actually involved in
comprehension of the speech. So that's what Bill uses to do his epilepsy studies. We picked it because it's very reliable. This is something that you have been worried about, Stephen, in your own work. We wanted a task that was very reliable, so consistent that we could do it with individuals. So we do all the analyses on individual kids; we don't need to just do groups. And what you find in a healthy population is that this lights up pretty much the entire left
hemisphere language network. So it's a big activation that you get in both frontal and posterior regions in the healthy brain. In the perinatal stroke kids, if you get any part of the brain that has a big impairment, you see all of the language network in the right hemisphere.
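For readers who want to see the subtraction logic spelled out, here is a minimal sketch of how such a forward-versus-backward block contrast might be set up with the nilearn library. The file name, TR, and block timings are invented placeholders, not the actual protocol used by Gaillard and colleagues.

```python
# Sketch of a block-design "forward speech minus backward speech" contrast.
# File names, TR, and block onsets/durations are invented placeholders.
import pandas as pd
from nilearn.glm.first_level import FirstLevelModel

# Alternating 30-second blocks of forward sentences and backward speech (hypothetical timing).
events = pd.DataFrame({
    "onset":      [0, 30, 60, 90, 120, 150],
    "duration":   [30] * 6,
    "trial_type": ["forward", "backward"] * 3,
})

# Fit a standard single-subject GLM, since the analyses are done on individual kids.
model = FirstLevelModel(t_r=2.0, hrf_model="spm", smoothing_fwhm=6)
model = model.fit("sub-01_task-listening_bold.nii.gz", events=events)

# Voxels responding more to comprehensible speech than to matched backward speech.
z_map = model.compute_contrast("forward - backward", output_type="z_score")
z_map.to_filename("sub-01_forward_minus_backward_zmap.nii.gz")
```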
So the whole thing just shifts over.
Yup! And it's not that only parts of it shift, and you don't have language impairments, as I already said. In an adult, if you get damage to one region, the skills that that area controls are impaired. That's the typical outcome, totally typical in adults. In kids, apparently, if any part of the language network shows a pretty big infarct, they just don't use that hemisphere for language. The healthy hemisphere then wins, even though it's not the dominant hemisphere in most
people. It seems that, at birth, just having some kind of stroke in the left hemisphere puts it at a bigger disadvantage than the right, and language is acquired in the right. This shows up in everyone except one of our kids, who has a very tiny infarct.
So for the kid with the tiny infarct, did language just stay in the affected hemisphere, I mean, the left, I guess?
Yes. Well, it's really bilateral in this particular kid. We have other kids.
That's kind of weird.
Yeah, I mean, we haven't looked particularly closely at that child. We need to dig into how that child... But you do see bilateral activation in healthy kids.
You know, in kids with epilepsy, I know that you occasionally see these interesting cases where you see a dissociation, with frontal areas in one hemisphere and temporal areas in the other. Do you ever see that in perinatal stroke, or no?
No. They see this in a very small percentage of the epilepsy kids. So with chronic epilepsy of the left hemisphere, about 20% of the kids show any sort of atypical language organization, so 80% maintain language in the left hemisphere, even if it's not perfect language, which is the usual outcome. So epilepsy, even chronic seizures throughout childhood, seems to have a kind of milder effect on which hemisphere is going to be
dominant than stroke does. In our data, if there's a stroke, language just never develops in the left hemisphere. So that's an interesting contrast that we're actually looking at further these days.
Yeah, it's really interesting. So I guess your data indicate that plasticity for language is really quite highly constrained, right? It's not that language just goes and uses parts of the left hemisphere that were undamaged. It really only has one other major option, which is to have the same layout, but in the opposite hemisphere.
That is exactly what we find: very plastic and very constrained.
Yeah. So why do you think that is? Why are there only these two symmetrical options?
I think that's a really good question. I don't know. There's something about those two hemispheres. Now, one factoid here is that, if you look at kids, healthy kids who are age four, five, they also show activation of these two networks. It's not symmetric at four and we would like to go back farther and look at earlier stages where we expect that we would see more symmetry. But there is much more activation of the right hemisphere homotopic regions in very young children
than there is in adults. By the time they get to be adults, that right hemisphere activity has pretty much shut off for the sentence processing tasks that we use, and instead it's used for a different aspect of language. So you find right hemisphere homotopic activation for a network that does emotional prosody, for example, which is another task we use.
You identify the emotions in the voice: is the voice expressing happiness or sadness, etc. So in that task, you see the right hemisphere light up in the healthy population. So, in an adult, both hemispheres are used for language, but they've been assigned different functions. They've developed different, contrasting functions. In the perinatal stroke kids, both functions are in the right hemisphere, if you have a left hemisphere stroke.
Right and that doesn't tend to come at any kind of cost?
No. Not... I mean, on the tasks that we've done so far, they do both equally well as their siblings. They do segregate these regions. So, to some degree, they are non-overlapping in the same hemisphere. It's not like they're using the same tissue for two different things. They each find their regions.
Right. I think you presented some preliminary data on that at SNL a couple of years ago, but that's not published yet, right?
No, we're still just writing this up now. There are a couple of
different papers. So, I'm writing up the overall findings of both tasks, and Kelly Martin, one of our neuroscience grad students, who is working with me, Peter Turkeltaub, Bill Gaillard and Anna Greenwald, has been looking at the homotopic relations between the activations and has evidence that these are really, truly homotopic, and also that if you look at the activations for these two tasks that are in one hemisphere in the perinatal stroke kids, they're more
distinct regions than you find if you just flip over the healthy activation of the healthy sibling.
Wow, that's super cool. I can't wait to see that in its published form. I guess I should mention that the finding you just alluded to about the sort of increasing lateralization with age is published by your group, Olulade et al. (2020, PNAS). I mean, in my reading of that paper, certainly, I think your youngest kids were four and a half to six and a half ish, and they were
pretty lateralized already. I mean, I agree with you, I mean, you know, you definitely have statistical evidence that they're not as lateralized as adults. But they're pretty, pretty lateralized.
Absolutely, I agree. Totally. And we've done the comparable analysis. So sorry, let me just address that point first. So, yes, we would love to look at younger kids; the reason those data only start at four and a half is that those scans come from Bill's group, and that's about as young as you can get kids to hold still in the magnet. Now, Nick Turk-Browne is trying to do infants who are not
sleeping in the magnet. But of course, movement is always a problem when you're doing fMRI. So there are some restrictions on being able to work with the very young, and the data on neonates and language, healthy neonates and language, basically don't look at the relative activation of the left and the right hemispheres. People have been interested in whether there was any lateralization, but not to what degree. So that is an unaddressed question that we
would like to look at. But we have been doing the same kinds of analyses on visuospatial tasks and on emotional prosody, which are right hemisphere tasks in the adult, looking at young children, and so we have two papers. One of them just appeared and one of them is in press in Developmental Science, looking at two visuospatial tasks, a line bisection task, for example. This has Katrina Ferrara as the first author, and Barbara Landau is a
co-author. And in this, in the healthy person, a vertical line bisection task activates right hemisphere parietal areas. In children, it's bilateral and more symmetric. So we're actually starting a line of work, me and Barbara Landau and Anna Greenwald, looking in healthy kids at the development of lateralization for these various skills and trying to figure out, you know, is language always much more lateralized? Are we not picking
it up early enough? Is there a developmental change in language before visuospatial? Or is there the same pattern of lateralization developing? But across all of these regions, we find some degree of bilateral activation on all of these tasks in young children, and much, much more lateralized function as they get older and get to be about 10.
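One common way to quantify this kind of developmental change (the published papers may use different or more elaborate measures) is a simple laterality index comparing activation in homotopic left and right regions. A toy sketch, with invented numbers:

```python
# Toy laterality index: LI = (L - R) / (L + R), ranging from +1 (fully left-lateralized)
# to -1 (fully right-lateralized). The activation values below are invented for illustration.
def laterality_index(left_activation: float, right_activation: float) -> float:
    total = left_activation + right_activation
    return (left_activation - right_activation) / total if total else 0.0

# e.g., counts of suprathreshold voxels in homotopic left/right language regions
print(laterality_index(850, 600))   # a young child: weakly left-lateralized (~0.17)
print(laterality_index(900, 150))   # an adult: strongly left-lateralized (~0.71)
```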
Yeah. It's such a mystery, you know, and it really makes me wish that we could scan really tiny babies, but then they're so uncooperative.
Yes, yes. Well, we are sending stimuli to Nick, who has figured out how to do this. So let's see how that kind of behavioral data comes out, if Nick is willing to try our stimuli.
Yeah. Cool. So you've kind of given us some outlines of, you know, the papers that you've got in preparation or in press. What's next for this line of work? Like, what are the, what are you going to be doing next?
Well, so there are a bunch of things that we've already collected scans on that are still being analyzed. One of them is that we've been looking at the organization of the inferior temporal lobes, which contain object-specific areas in the healthy brain. So for example, what happens to the organization and lateralization of the fusiform face area, the visual word form area, and the parahippocampal place area?
These are famous areas that are in the inferior temporal lobe, part of the high-level, or mid-level, visual system. But it's not damaged by a middle cerebral artery stroke. So the kids who have lost chunks of the left or right hemisphere MCA territory still have both inferior temporal lobes healthy. So then one can ask, well, where does the visual word form area go? Does it go to its usual place in the left
temporal lobe? Or does it go with spoken language abilities in the right?
And that's where I'm going to put my money?
That's the outcome, yes.
It wants to hang out with its friends.
That's right. So all of this suggests that there are principles of how networks form that have to do with locality. You asked earlier, like, how many times do you see the split between the posterior regions and the anterior regions of language? And it's very uncommon. I guess it's possible, because Bill sees it sometimes, but the parts of the language network seem to like to be together. And that's obviously the same with the visual word form area.
I do believe, you know, Bill's data, because I think his task is really solid. But I've never... I've scanned many epilepsy patients, and I've never seen it myself. So it's got to be super rare.
That's right. I think so. Even in their data. It's just a couple of people. If you look at how many people end up in that cell of their...
Yeah, but I think that I mean, they've studied hundreds of people. So that's why they're gonna see a few.
That's right. That's right. But that's not generally the way networks form best, apparently. And so this is also in line with what Stanislas Dehaene's group and Ghislaine Dehaene-Lambertz have argued, that the visual word form area is recruited as something that has to lie between high-level vision and language. And so it travels with language; high-level vision is in both hemispheres. So that's one thing that we have collected the data on that is still in progress.
One of our grad students, Maddie Marcelle, a neuroscience MD/PhD student, is just starting to do some white matter analyses. We don't know, for the stroke kids and for the epilepsy kids that Bill studies, which of these kids have white matter tract injuries that would ruin the connectivity between different parts of the language system. And so that's an important anatomical and physiological addition to
looking at this. But the sort of next bigger step that I would like to take is, number one, looking at this issue of healthy development. How does the healthy development of lateralization work? Why are these areas organized the way they are? What's special, in the normal brain, about what becomes language and what doesn't become language? Why do these
areas become language? Many people suggest, when I talk about this, oh, it's because of the regions for the mouth and ear, but those same language areas work for sign language, so that's not the right answer. So we want to understand healthy development as a background and constraint on how recovery from stroke may
occur. A very important issue that Peter Turkeltaub, one of my collaborators, studies is: is there any way to get the opposite hemisphere, the right hemisphere language areas, to start functioning again if you have a stroke as an adult? So we have what is called the weak shadow, the weak shadow of language in the right hemisphere.
I like that term. I like that.
And then, yeah, this is Kelly Martin again. And then the next step, the next big different step, is to look at childhood stroke and start looking at: when does this kind of shift over to the homotopic regions stop? What happens over the course of development that turns this into adult aphasia?
Right! Because it certainly does stop, you know. As you know, in my lab we study people with aphasia, and it's extremely rare, if ever, that language moves over to the right after an adult stroke.
Yes. No, I'm sure. I believe, and I don't know how the data hold up in your opinion, but there's some argument that immediately after the stroke, you start to see activation of the homotopic regions in the right, but that may be because of release of inhibition, and then the claim is that it moves back to the left.
Yeah, I know. I don't really buy that story. I mean, I do think that's a neat line of work. But I guess you're kind of alluding to Saur et al. (2006). I kind of am on board with the analysis of those data by Geranmayeh and colleagues, who argued that it was kind of secondary to task difficulty effects and, you know, the challenges that people have during the task in the immediate post-stroke period. Yeah, I don't think that there's strong evidence for shifts to
the right. But you know, this is kind of something that if there is, it'd be very exciting. Maybe we'll see that one day.
That's right. But there is this, I mean, in the healthy brain, in the adult, there is... so what Kelly has done, she just presented this at the Neurobiology of Language Conference last month: if you look at the adult data from Olulade et al. (2020, PNAS), and you do a top-voxel approach, you take the adult data and you say, okay, if you do a threshold cutoff analysis, there's clearly more activity in the left than the right. But what if we equalize the number of voxels that we're
looking at? Where are the most active voxels in the right? Even though they're not very active, they form a region, homotopic to the left, still in the right hemisphere, that's responding best to language. It's still the same pattern of activity, it's just not responding very much. So it doesn't show up when you do a threshold cutoff.
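The idea of equalizing voxel counts rather than relying on a single statistical threshold can be sketched roughly as follows. This is a toy illustration of the general approach, not Kelly Martin's actual analysis; the array, masks, and voxel count are invented.

```python
# Toy sketch of a "top-voxel" approach: instead of one fixed statistical threshold,
# take the k most active voxels within each hemisphere and compare their locations.
import numpy as np

rng = np.random.default_rng(0)
z_map = rng.normal(size=(64, 64, 40))          # pretend whole-brain z-statistic map
left_mask = np.zeros_like(z_map, dtype=bool)   # pretend hemisphere masks
left_mask[:32] = True
right_mask = ~left_mask

def top_k_coords(z, mask, k=500):
    """Coordinates of the k highest-z voxels within a hemisphere mask."""
    vals = np.where(mask, z, -np.inf)                      # exclude out-of-mask voxels
    flat_idx = np.argpartition(vals.ravel(), -k)[-k:]      # indices of the k largest values
    return np.array(np.unravel_index(flat_idx, z.shape)).T

left_top = top_k_coords(z_map, left_mask)
right_top = top_k_coords(z_map, right_mask)
# One could then mirror the right-hemisphere coordinates across the midline and
# ask how homotopic they are to the left-hemisphere peaks.
print(left_top.shape, right_top.shape)   # (500, 3) each
```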
Yeah. I mean, I think that's something that we always see when we lower the threshold. I mean, if you do presurgical language mapping, you routinely play around with thresholds to try and find, you know, where can I best present these data to give the surgeon a clearer picture of what's going on. And, you know, most people have, what did you call it, a weak...?
A weak shadow.
Most people have a weak shadow in the right. I don't think it's completely symmetrical, you know; I think that in the right, anterior temporal is much more prominent, relatively speaking. But yeah, I mean, you know, the same sort of frontal and temporal regions are there.
Right. That's right.
They don't have the same capacity to support recovery from damage.
No, that's right. But then the question, I mean, a clinical question of great importance to many people, would be: is there some way that you could stimulate it? Does it have enough ability, is it just maybe responding too weakly, but you could enhance that? Is there something you could do that would help people with aphasia who have damage to the left and are not gonna recover in the left very well?
Yeah. I mean, I guess many people are exploring transcranial stimulation methods to open that up, but I guess another thing is, you know, pharmaceutical interventions. I mean, is there something, you know... could we ever have a drug that would re-open the critical period?
That's right.
Have you got any ideas about that?
No. So we just had a paper appear in PNAS two months ago that's about adult motor strokes. This is actually Alex Dromerick's work, it's Dromerick et al., and it's a phase two clinical trial looking at people who had a stroke to the motor regions, in either hemisphere. It's not about lateralization. It's about whether there's a reopening of a critical period spontaneously
after a stroke. And so what they were doing (I'm a co-author, but it wasn't centrally my work) follows the animal literature, which has suggested that immediately after a stroke, there start to be cellular and molecular processes that are like what you see in early development, where new synapses are being spontaneously formed around the lesion and in
the homotopic regions. And the question asked there was: if you give an extra bolus of physical therapy, does it help more if you hit this early window immediately after a stroke, compared with doing the same thing later? And the answer is yes: during a specific time window after stroke, the rehab actually has a much bigger effect, a
significantly bigger effect. So it's starting to outline when there might be a critical period after stroke. The acronym is CPAS, the Critical Period After Stroke study.
Yeah, that's really interesting. I know people have been looking at a similar concept in aphasia, but I don't think that they have actually done a comparison between the same amount of therapy delivered at different times; they were just kind of looking at the effect of early language therapy. And honestly, it's very hard to find that effect, because, you know, the amount of natural recovery is so great that it's hard to see an added effect of this.
So this study has four groups; it's a phase two clinical trial with a control group and three groups administering the same thing at different times, and then the final assessment is 12 months after stroke. So it's a long-term outcome. And even so it's a small effect; the statistics are pretty fancy statistics that look at the trajectory of recovery. So part of the advantage is faster recovery in the earlier groups, and part of it is better outcome.
Yeah, well, that's very promising.
Yes. And so again, one would have to do some kind of enhancement, because it doesn't get people back to normal motor behavior. Some of them do; the paper actually has these spaghetti curves, all the individual recovery curves are shown, and some of the individuals actually do get back to full ability. But there is a time window. Now the question is, could you really jack that up? Is there some way that you could improve the outcomes with TMS or with pharmaceuticals or some combination?
Yeah. Well, I hope that we'll see something like that in the decades to come. That would be very cool. Well, thank you so much for talking to me today about this work that you've done. I really love this line of work and, you know, have enjoyed hearing about it over the years. And again, today I learned a bunch more that hadn't really sunk in before, so this was a lot of fun.
Thank you. Thank you for having me. Thanks a lot. Well, you haven't heard about some of it, because it's not published yet but we're working as fast as we can.
Yeah, there's some new stuff. I mean, there's stuff that, yeah, I think I've seen, you know, we've talked about over the years. But yeah, there's new stuff today that I hadn't heard before.
Yes.
Yeah. Very exciting work. Really appreciate your time.
Thank you so much for having me. It was fun to talk with you.
I'll see you later.
Okay, bye.
Bye. Okay, well that's it for episode 17. As always, I've linked the papers we discussed and Elissa's website in the show notes and on the podcast website at langneurosci.org/podcast. Thanks for listening and see you next time.