This is the TED Radio Hour. Each week, groundbreaking TED Talks. Our job now is to dream big. Delivered at TED conferences. To bring about the future we want to see. Around the world. To understand who we are. From those talks, we bring you speakers. And ideas that will surprise you. You just don't know what you're gonna find. Challenge you. We truly have to ask ourselves, like, why is it noteworthy? And even change you. I literally feel like I'm a different person. Yes. Do you feel that way?
Ideas worth spreading. From TED and NPR. I'm Manoush Zomorodi. On the show today, the soundtracks of our lives. If you are one of the millions of people who saw the movie Wicked this winter, you'll remember the opening lines to the iconic song, Defying Gravity. It's kind of mysterious, suspenseful. You can tell the song is going to build and build.
It's so beautiful, and it is absolutely the most wonderful match between lyric and melody. I believe that something has changed because of that big leap. Something has changed. I'm like, ooh, tell me more. This is musician Scarlett Keyes. She's a professor at Berklee College of Music where she teaches the art of songwriting and why a song can make us feel something powerful and emotional.
Often, she says, our favorite songs are defined by small and deliberate choices that composers make, like a key change or a leap in octave. So in Defying Gravity, there's this moment where... The composer wrote Unlimited. Unlimited. I'm unlimited, right? And so... Da-da-da-da. Up. High. Limit. That's the limit. Big. Open. Wide. Right? So up that big octave. So...
Kind of the craft of songwriting is that when you create a big leap, a big distance between notes, it creates a hook and it creates something really memorable. So going up eight notes. And we've heard that in Somewhere Over the Rainbow. So he included that little Easter egg from The Wizard of Oz in Wicked, which was so beautiful
and vast. And so that's a word that we borrow from poetry, and we call that prosody, where everything you're doing melodically, harmonically, rhythmically is all in support of the story being told. Scarlett has taught many musicians over the years. Some of her former students include Charlie Puth and Lizzy McAlpine, artists with billions of streams on Spotify.
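A quick aside for anyone curious about the arithmetic behind that octave leap: in equal temperament, going up an octave, eight scale steps, doubles the frequency outright, which is part of why it reads as such a dramatic jump. A minimal Python sketch, our illustration rather than anything from Scarlett's talk:

```python
# The interval math behind that leap: in twelve-tone equal temperament each
# semitone multiplies frequency by 2**(1/12), so an octave (12 semitones,
# i.e. eight scale steps) exactly doubles it.
A4 = 440.0  # standard tuning reference, in Hz

def semitones_up(freq_hz, n):
    """Return the frequency n semitones above freq_hz."""
    return freq_hz * (2 ** (n / 12))

octave = semitones_up(A4, 12)
print(f"A4 = {A4:.1f} Hz -> A5 = {octave:.1f} Hz (ratio {octave / A4:.0f}:1)")
print(f"A major second above A4 is only {semitones_up(A4, 2):.1f} Hz")
```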
And she often encourages her students to start writing by tapping into their deepest feelings. What is the thing you care the most about right now? What is the thing that keeps you up at night? What's the thing you can't stop thinking about? As songwriters, we are repurposing these human tropes into new language for the listener mainly to understand their life in a new viewpoint, with new words, with new music over and over again.
That's the secret to songwriting. How am I feeling? What music feels like that? I want you to empathize with me. I want you to understand me. And all of that is in a song. Are you singing it on the downbeat? Are you singing it off the beat? Are you singing something stable? Are you singing something unstable? You want the listener to feel what you're feeling. Music, for me, is always emotional.
It can really change the weather instantly in my body, in my mood. It's soothing and it's comforting. And for some people, that's heavy metal. And for some people, it's classical. And for some people, it's country. It's all about what you love. From our favorite music to our own voice, we are surrounded by sound all day long. But what do we actually absorb and what do we just filter out?
How does all this noise affect our emotions and behaviors? Today on the show, the soundtracks of our lives. A musician, a technologist, and a voice expert explain how what we hear shapes us. So back to Scarlett Keyes. In her classes, she coaches students on the techniques they can use to grab a listener.
Here she is explaining more from the TED stage. One of the tools we use is tone. That's something we all understand, tone. Imagine you're sitting in a cold hospital room waiting to meet your doctor. Wearing nothing but your underwear beneath your dignity gown. And your doctor comes in. Nobody wants to hear, hello, my name is Dr. Watson and I'm your brain surgeon.
we want to hear, hello, my name is Dr. Watson and I am your brain surgeon. Because when his tone of voice goes up, so does your heart rate. And when his tone of voice goes down, you feel calm and like, I'm in good hands. So tone of voice matters. So think of melody as the song's tone of voice. How we say what we say is oftentimes more important than what we say.
As Western listeners, we have a relationship to melody, and we have an expectation of that relationship. So I'm going to play something, and when I stop playing, I want you to tell me what you expect me to play next. There it is. Exactly. So some notes feel stable, and some notes feel more unstable, begging for resolution. And that's very powerful information for a songwriter to know.
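Her stable-versus-unstable point maps onto a textbook music-theory convention: in a major key, the notes of the tonic triad feel at rest, while the others pull toward a stable neighbor. A toy Python sketch of that convention, an editorial illustration rather than Scarlett's own formalism:

```python
# A toy rendering of that stability idea, using textbook tendencies in C major:
# the tonic triad degrees (1, 3, 5) feel at rest; the others pull toward a
# stable neighbor. These are conventions, not hard rules.
C_MAJOR = ["C", "D", "E", "F", "G", "A", "B"]
STABLE_DEGREES = {1, 3, 5}            # C, E, G
TENDENCY = {2: 1, 4: 3, 6: 5, 7: 1}   # e.g. the leading tone B pulls up to C

def describe(degree):
    note = C_MAJOR[degree - 1]
    if degree in STABLE_DEGREES:
        return f"Degree {degree} ({note}) is stable: it can end a phrase."
    target = C_MAJOR[TENDENCY[degree] - 1]
    return f"Degree {degree} ({note}) is unstable: it begs to resolve to {target}."

for d in range(1, 8):
    print(describe(d))
```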
The words we place on those notes make the listener feel certain things. I'd like to take a moment to ruin an Adele song. I'm sure you've all heard her song, Someone Like You. In the verse and in the pre-chorus, she runs into her ex unexpectedly, and she's clearly still in love. Okay, you know the song. What if she had sung it like this? Never mind, I'll find someone like you. What happened? I apologize, by the way. In my version...
We believe her. We believe she will find someone like you. No problem. There's plenty of you out there. Because I have paired stable notes in the key and stable chords, bringing a feeling of stability. But that's not the melody she sang. Those weren't the tones that she sang. This is her version. Never mind, I'll find someone like you. Do you feel the difference? In her version, we know she will never find anyone like you.
We know that because she has paired unstable pitches to match the way she's feeling, building empathy with the audience. Go Adele. You use a couple of examples in your TED Talk of musicians, singer-songwriters who do an amazing job of signaling emotion. What is it that these people do that not only...
connects while you're listening to the song, but makes you want to listen to it over and over again. I mean, I think there's a lot of things that go into a good song. And one of the things that I learned was to really pay attention to symmetry. You know, if everything is expected and everything, all the lines are the same length, all of the sections are the same number of lines, it gets a little boring.
So just surprising our listener, number one, is a really great thing to do. There's another thing I talk about, and it's like, if people are having a day when they're not feeling really inspired, they're not downloading the muse, they're not writing the next... I'm like, well, hold on. Just play with shapes. You know, there's a Lady Gaga song. If you were to sketch out the first line of her melody, it would look like a straight line.
That's pretty boring. But it's effective. And so here's her lyric. You're giving me a million reasons to let you go. You're giving me a million reasons to quit the show. So that's a static, flat melody with, I think, about 13 to 15 notes of repetition. I kind of love it, though. I don't know. I do, too. Why? Well, because it feels like she's defeated. Like, oh, you've given me a million reasons. It feels defeated and exhausted. I've had it with you.
She could have moved it a little bit. You're giving me a million reasons to let you go. Or thirds. Or wider. No, it was perfect. You're giving me. So sometimes with all of the creative people, I'm like, hey, try a melody that kind of repeats a lot. Just see what happens there. Songs help us process emotion and understand how we feel.
Pick a song in the morning to start your day with instead of the usual negative thought train that blazes through your brain taking you with it. Put on a song you love that has uplifting lyrics that primes your nervous system for a great day. Or the next time you have questionable in-laws coming over, instead of awkward silences and small talk, put on a song you know they love and let the dopamine flow.
I mean, with Spotify and all these different apps, it's easier to make playlists and find the music that we like, and discover new music too. But you have been a real believer in curating the soundtrack of our own lives. Tell me how you started thinking about that and why you think that's important.
Well, so three years ago, I was diagnosed with breast cancer. And I really had to turn back and look at music and go, I'm sorry, music, that I've analyzed you so much because now I really need you and I have forgotten you as medicine. And I had to go through chemotherapy, and I would say to friends, send me your best song. And on the way to chemotherapy, I would have a song in the car.
And it would be, a friend would send me, hey, I've got one: Bill Withers, Lovely Day. When I wake up in the morning... I mean, there's so much that happens to us when we listen to music that is beneficial. It can boost our immune system. This is the science that has come out over the past 10 years on the effects of music on our health. So this thing that had saved my life over and over and over again, saved my emotional life, saved my everything, was now something I was returning to in a way that I hadn't before. Even when we're tired of sound, we need at least to go to music a couple times a week, or even every day, and just put on one song. One song. It will change your biology. It's a wonderful thing. And when you wake up in the morning...
Instead of waking up with the same thoughts that we had yesterday, you could wake up and say, I've got my morning song. I'm going to put my morning song on. It's just a great way to start the day. That was songwriter Scarlett Keyes. Her memoir is called What If It All Goes Right? And you can see her full talk at TED.com. On the show today, the soundtracks of our lives. Oh, yeah. I'm Manoush Zomorodi, and you're listening to the TED Radio Hour from NPR. Stay with us.
It's the TED Radio Hour from NPR. I'm Manoush Zomorodi. And on the show today, the soundtracks of our lives. Over the past 100 years, technology has had a profound impact on how we listen to music, from records and radio stations to cassette tapes and CDs to MP3 players and streamers. But now, with artificial intelligence, technology is shaping the music itself. We're right on the cusp of a ChatGPT moment. This is Pierre Barreau.
where you can generate music, and you can generate really good music that's really personalized to your taste. Today, he is a pioneer in AI-generated music. But Pierre first started thinking about all this back in 2013 as a university student, studying computer science by day, playing music and watching movies at night. So at the time, I was in my third year of university, and I remember very vividly that I was often jamming on my piano, especially at...
midnight or odd hours of the morning. And then one day I sort of stumbled upon this science fiction film called Her. Welcome to the world's first artificially intelligent operating system. If you haven't seen Her, it's an indie science fiction sort of romance starring Joaquin Phoenix, who gets close
to a superintelligent AI. Where'd you get that name from? I gave it to myself, actually. Voiced by Scarlett Johansson. How come? Because I like the sound of it. And there's this one scene that really jumped out at Pierre. One day, they share this moment together, and Theodore, the main protagonist, asks Samantha, what are you doing? What are you doing? And Samantha replies, I'm just looking at the world.
And writing a new piano piece. Oh yeah? Can I hear it? And she starts playing this very beautiful piece of piano. She'd been sort of thinking about and writing this piece of music for this moment, since they cannot take a photograph together. Well, I was thinking we don't really have any photographs of us. And I thought that I would call it a musical photograph
that captures us in this moment in our lives together. For Pierre, as a musician and a computer science major... I like our photograph. ...this was his aha moment. I heard this, and I was immediately hooked. Unlike a lot of science fiction movies previously, it pictured a very positive view of what AI could be in the future, an AI that could create these musical photographs that would inspire people, that would capture a shared moment.
And so I just started thinking a little bit obsessively about what was the state of the art of AI and how I could achieve this dream of creating musical photographs. And so over the next three years, Pierre built an early AI music composition tool and made it open to the public. He called it AIVA. AIVA was trained on scores, some 30,000 pieces of classical music. It learned to emulate all sorts of composer styles, moods, themes, and tempos from different musical eras.
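Pierre doesn't describe AIVA's architecture on air. As a stand-in for what "training on scores" can mean at its very simplest, here's a sketch of a Markov chain that learns note-to-note transitions from a few toy melodies and samples a new one. AIVA's actual system is a far more sophisticated deep-learning model, so treat this purely as an illustration:

```python
import random
from collections import defaultdict

# A deliberately tiny stand-in for "training on scores": learn which note
# tends to follow which, then sample a new melody from those statistics.
# (AIVA's actual system is a deep-learning model; this is only a sketch.)
training_melodies = [
    ["C", "E", "G", "E", "C"],
    ["C", "D", "E", "G", "E", "D", "C"],
    ["E", "G", "A", "G", "E"],
]

transitions = defaultdict(list)
for melody in training_melodies:
    for current, nxt in zip(melody, melody[1:]):
        transitions[current].append(nxt)

def generate(start="C", length=8):
    """Sample a new melody by walking the learned transition table."""
    melody = [start]
    for _ in range(length - 1):
        choices = transitions.get(melody[-1])
        if not choices:
            break
        melody.append(random.choice(choices))
    return melody

print(generate())
```

Run it a few times and you get different melodies consistent with the training material, which is the whole trick: statistics learned from scores, replayed as new music.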
The goal was to generate music that perfectly fit whatever a person wanted to hear. For AIVA, this process has been compressed from years and years of learning, decades of learning as an artist, as a musician and composer, down to a couple of hours. Two years after launching AIVA, in 2018,
Pierre went on the TED stage to demonstrate what it could do. For example, we were commissioned to create a piece that would be reminiscent of a science fiction soundtrack. And the piece that was created is called Among the Stars. And it was recorded with the CMG orchestra in Hollywood and the great conductor John Beal. And this is what they recorded, made by AIVA.
For me, this piece of music is the first embodiment of what it means to create personalized soundtracks for people. And so it's a very meaningful piece of music for me. How did people first respond to the launch of AIVA? What did people think? So there was...
A lot of different responses. And as you can imagine, with something like this, there were both extremely positive and, of course, extremely negative reactions, for very obvious reasons. Spell out those obvious reasons for us, because maybe someone listening is like, what? This sounds amazing. Fair enough. Well, there are of course going to be concerns about how the technology is going to be used. Is it going to be used to replace humans, or is it going to be used to empower humans?
From the very beginning, we've had to wrestle with these questions and explain to people that the value of this technology was not to replace creators and just have a computer create music going forward, but instead to create tools for people to be able to create music, to bring down the barriers to entry for creating music. This is a new technology, just like when the synthesizer was invented or the digital audio workstation was invented. Historically, this kind of technology has helped humans, and this is just the next iteration of that, just maybe a little bit more powerful. There was one example that I read that really struck me, that explained the shift that's happening. I read about one hit maker who compared AI to the introduction of drum machines in the '80s.
That like suddenly anything could have a beat, right? And professional drummers needed to learn how to use the technology to stay relevant. Do you find that comparison useful? Yeah, I think that's a fair comparison. And for those people who maybe are a bit more skeptical or scared, I think it's important to also notice that not all music today is just made with drum machines. There's still plenty of music that's made with live drummers. So I think that's a good showcase that technology expands the realm of the possible, but it doesn't mean that everything that came before all of a sudden becomes obsolete. Today, the tools to generate music with AI are much more advanced. For one thing, they're not trained on written scores anymore.
The tools analyze songs themselves, looking for patterns, and then produce their own versions in a matter of minutes. For example, my producer James used another AI tool called Udio to make this ridiculous new TED Radio Hour theme song. It's a synth-pop tune straight from the '80s. Or, with a few more mouse clicks, he can transform the song into a James Brown-esque jam.
First of all, which one did you like better? I have to ask you, Pierre. Were you more for the synth Pet Shop Boys sound, or were you more sort of deep soul? Deep soul. Yeah, for sure. I don't know. It just has a radio vibe that is almost uncanny. I mean, as amazing, and kind of silly, as those sound, there's a more serious issue here, and that's the debate
over these AIs being trained on copyrighted music. You have made clear to people that AIVA is not trained on copyrighted music. But in 2024, the Recording Industry Association of America... sued a couple of AI music companies for copyright infringement. I mean, the makers of ChatGPT trained it by scanning the entire internet under what they claim is fair use. Do you think training AI with music should be considered fair use? I think that it's important to have space for innovation.
And it's also important at some point to recognize that, okay, maybe even if it's authorized or legal to do something, it doesn't mean that there's no grounds to sort of rethink how business models and incentive mechanisms are put in place to reward all the actors of an ecosystem. I think that AI companies like ours and others should have a right to train on data, but it doesn't mean that we're not willing
to work with others, you know, to make sure that going forward this technology works for everyone. I mean, we're talking about entering an entirely new era of, I mean, I hate to use the word, but content. I guess some people worry about originality. If we are making things based on something that's already been made, where's the human strangeness in all of that?
I think that's a fair question, but perhaps what this question misses is the fact that AI, at least in my opinion, is not going to be used for the use cases that, say, human-made music is going to be used for. So for example... I think that there are new use cases for the type of tools that we and others are... building. For example, one is music education.
So we see a lot of people using AIVA to understand how music is created. So music schools are using it to train students on how to create certain types of music that maybe they're not as comfortable creating. Another example is... creating interactive music for interactive media like video games. I think that's not something that's scalable with human-made music. So there are all these other use cases that are not currently explored, or not as well suited to human-created music, that AI music is perfect for and that is not going to necessarily compete against human music. And that brings us back to the original idea you had, this... idea of crafting a soundtrack to fit your day, not just by building a playlist, but by really scoring music to your life. So I'm a huge listener of music. I listen to probably, I don't know, three to four hours of music every day. I mean, especially as a programmer, I think.
Lots of programmers tend to listen to music while they work. Music helps magnify certain emotions or certain things that are happening throughout my life. There was an opportunity to really get to the next level of that and soundtrack everybody's life and everybody's personal moments with music.
and get people engaged with music in completely novel ways, essentially. If you personalize music to that extent, I think you can really get more people excited about creating music and essentially turn everyone into a creator. That was Pierre Barreau. He is the CEO and co-founder of AIVA. You can watch his full talk at TED.com. On the show today, the soundtracks of our lives. So we've talked a lot about music, but really, almost all the time, what we hear are voices,
like me, on my way to the studio. Welcome to NYC Now. I've got my earbuds in. Your source for local news in and around. I've got my dog with me. Okay, let's cross. Let's cross. Quick, quick, quick. On the way, I'll stop for coffee. Listen to the busy street. But by the time I sit down to my microphone... I'm often reminded that the voice I hear the most is my own. Most of us use our voice on a daily basis, but have no idea how it works.
And even scientists who've been researching it for a long time, there's still a lot of mystery. This is Rebecca Kleinberger. She is a professor of humanics and voice technology at Northeastern University. And much of her research focuses on the relationship we have to our own voice, what it sounds like to us, and how we think it comes across to others.
All of us have some sort of relationship with our voice. And although it's extremely familiar, we're also very much estranged from our own voice. But our voice is with us constantly. How do you see it? Well, you can kind of think about the voice as a gift you give to others. Your voice is not meant for you to listen to. There are a lot of different rhythms within our own body: the rhythm of how we walk, the rhythm of our own heartbeat and our own breathing. And similarly our voice. All of those are some of the tempos of our lives; they are the metronomes of our lives. But we are not conscious of them. So you could think of your voice as part of the background ambient music of your life. Rebecca calls the voice we hear in our heads the inward voice.
And she says there are mechanisms that explain why we hear that voice as a quieter, filtered version of the outward voice we project onto the world. Why is it that we're so unfamiliar with it? Why is it that it's not the voice that we hear? Here is Rebecca Kleinberger on the TED stage. So let's think about it. Let's try to understand the mechanism of perception of this inward voice.
Because your body has many ways of filtering it differently from the outward voice. So to perceive this voice, it first has to travel through your ears. And your outward voice travels through the air, while your inward voice travels through your bones. This is called bone conduction. And because of this, your inward voice is going to sound in a lower register, and also more musically harmonic, than your outward voice.
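You can crudely approximate what bone conduction does by low-pass filtering a recording of yourself, since bone passes low frequencies more readily than high ones. A sketch, with the cutoff frequency and the file names both our assumptions; real bone conduction involves much more than a single filter:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

# A crude approximation of the inward voice: bone conduction passes low
# frequencies more readily than high ones, so low-pass filtering a recording
# of yourself hints at why your voice sounds deeper inside your head.
# The cutoff frequency is a guess, and the file names are hypothetical.
sr, voice = wavfile.read("my_voice.wav")      # assumes a mono recording
voice = voice.astype(np.float64)

sos = butter(4, 700, btype="lowpass", fs=sr, output="sos")
inward_ish = sosfilt(sos, voice)

wavfile.write("inward_ish.wav", sr, inward_ish.astype(np.int16))
```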
Once it travels there, it has to access your inner ear. And there is a second mechanism taking place here. It's a mechanical filter, a little partition that comes and protects your inner ear each time you produce a sound. So when you do open your mouth, you also dampen a little bit the sound that's going to enter your ear. Interestingly, we know that from studies of frogs.
Some of those frogs are extremely loud, so loud that, right at the source, they should almost be deafened by their own sound. And researchers realized that when those frogs make those sounds, there's almost a little wall that comes and protects their inner ear. And they wondered if other species have that. And yes, a lot of species actually have that, even humans. So that also reduces what you hear.
And then there is a third filter. It's a biological filter. Your cochlea, the part of your inner ear that processes the sound, is made out of living cells. And those living cells are going to trigger differently according to how often they hear a sound. It's a habituation effect. Those are living cells, so they align in a way that corresponds to different frequencies, and all have slightly different thresholds at which they fire.
And this threshold changes with how much we hear a certain sound, which means that your cells are just not going to fire as much after a while if you hear one sound a lot. And because your voice is the sound you hear the most in your own life, we could almost simplify this by thinking that you have an imprint of your own voice inside your own inner ear that makes those cells fire less. So you hear your voice even less because of that.
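A toy model of that habituation effect, ours rather than anything from Kleinberger's research: imagine a cell whose firing threshold creeps upward every time the same sound arrives, so a frequent sound, like your own voice, triggers progressively weaker responses:

```python
# A toy model of the habituation effect: a cell's firing threshold creeps
# upward with repeated exposure to the same sound, so a frequent sound
# (like your own voice) triggers progressively weaker responses.
def respond(level, threshold):
    """Response strength: how far the sound exceeds the firing threshold."""
    return max(0.0, level - threshold)

threshold = 0.2
for exposure in range(1, 6):
    r = respond(1.0, threshold)          # same sound, same level, every time
    print(f"exposure {exposure}: response {r:.2f} (threshold {threshold:.2f})")
    threshold += 0.15 * r                # the threshold adapts to frequent sounds
```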
Is it literally reducing the volume of what we hear, or just tuning it out? Tuning it out. It's a question of sensitivity. Finally, we have a fourth filter. It's a neurological filter. Neurologists found out recently that when you open your mouth to create a sound, your own auditory cortex shuts down. So you hear your voice, but your brain actually never listens to the sound of your voice. When you hear sounds in general, whether it is the voices of other people or sounds in your environment, your brain takes the signal
and does a lot of different kinds of analysis very quickly, to understand where it comes from, whether there's a danger, how you should understand the interaction and the intent of the people you're talking to. But when it's your own voice, your brain does not process it the same way. Your auditory cortex is barely active, or way less active, when you hear your own voice. One of the main theories is...
Well, your brain does not need to spend that much energy when it's your own voice because it's self-produced. So that was how we hear our own voice. But how do other people hear us? When we come back, more with researcher Rebecca Kleinberger and the surprising things that people can glean from the sound of our voice. Today on the show, the soundtracks of our lives. You're listening to the TED Radio Hour from NPR. I'm Manoush Zomorodi, and we'll be right back.
It's the TED Radio Hour from NPR. I'm Manoush Zomorodi. On the show today, the soundtracks of our lives. And we were just talking to voice researcher Rebecca Kleinberger about the various filters that our brain uses to tune out our own voice so that we're not distracted by it. But what about when you hear your own voice played back to you? Why do so many of us hate the sound of our own voice? Dad, can you do a video of me? I don't hear it the same way other people do.
we don't have the mechanism to analyze it. That's why it's so odd when we hear it recorded. This is our first time driving through Monument Valley, and I've been waiting my entire life. When I listen to a recording of my voice, it's always slightly... I would say, to listen to my voice. At least it sounds quite different from what I think I sound like. And this is something that's quite universally reported.
It is interesting. Like, when I was growing up, you didn't hear your own voice often at all. I mean, it was very rare. I remember getting a tape recorder and recording my own voice and playing it back. And that was so disturbing, almost. But now people are recording themselves all the time, videos and podcasts and leaving voice memos and playing them back. Do you think that people are hearing themselves differently or making peace with the sound of their own voice in ways that maybe they didn't 10 years ago, 20 years ago? It's quite paradoxical, because as a species, or as the animals we are, we are not supposed to be able to hear our voice. Recording, the ability to dissociate time and space in terms of the use of the voice, is something that's very unnatural. And even though it's been around for...
a long time that we have had those technologies, and indeed it's being used more and more often, I believe that there's still a part of our brain that is not completely used to it. Our brain has developed over so, so many years to get to the point where it is, to have all those optimizations of the voice. And so much of it happens at the subconscious level. When I hear someone else,
completely subconsciously, I'm going to automatically analyze a lot of different elements beyond words. I'm going to get a sense of their age, their gender, their health, the physiology of their face, the shape of their nose. The shape of their nose? Absolutely. People are really good at giving a good estimate of nose shape just from hearing voices. And people are really good at detecting hormone levels from the voice.
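To make "elements beyond words" concrete, here is a minimal sketch that pulls a few crude prosodic statistics from a voice snippet: pitch register, pitch spread, loudness variation. These are nowhere near the cues Kleinberger describes, and the file name is hypothetical; it just shows the kind of beyond-the-words signal a recording carries:

```python
import librosa
import numpy as np

# A minimal sketch of "listening beyond words": pull a few crude prosodic
# statistics from a voice snippet. These are nowhere near the cues described
# above (nose shape, hormone levels); they just show the kind of
# beyond-the-words signal a recording carries. File name is hypothetical.
y, sr = librosa.load("snippet.wav", sr=None)

f0 = librosa.yin(y, fmin=librosa.note_to_hz("C2"),
                 fmax=librosa.note_to_hz("C7"), sr=sr)
rms = librosa.feature.rms(y=y)[0]

print(f"median pitch: {np.nanmedian(f0):.1f} Hz")     # rough vocal register
print(f"pitch spread: {np.nanstd(f0):.1f} Hz")        # monotone vs. animated
print(f"loudness variation: {rms.std() / rms.mean():.2f}")
```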
So all those elements beyond words mean that when I talk to you, I'm going to let a lot of things naturally go that give you indications about who I am, about my body shape, about my health, about my hormonal identity, et cetera. Your voice is also very linked to how you create relationships. You have a different voice for every person you talk to. If I take a little snippet of your voice and I analyze it, I can know whether you're talking to your mother, to your brother,
your friend, or your boss. We can also use vocal posture as a predictor, meaning how you decide to place your voice when you talk to someone. So there's a lot to learn from listening to voices. Yeah, it's funny. I'm okay with listening to my own voice on the radio, but my daughter was making fun of me the other day about the voice that she says I use that's different when I'm...
On the radio versus, you know, yelling at them to get out the door because they're late for school. Obviously a different voice. Yeah, and I think there are evolutionary reasons for that, and the fact that we also have a performing voice. As a teacher, I know that, and I'm sure as a radio person, you're very familiar with this. And this is a voice of performance. It's a voice of how I present myself in the world.
The word expression literally means pressing towards the outside. How we express ourselves, who we are in the world, is our voice, but it is... it is a specific voice that we reserve for the world and for who we want to be in the world. And sometimes when we're back in our more familiar environment, it can feel odd to suddenly meet that person again. Is this me, or is this me as a performer?
So many different identities, I guess, we take on and our voice reflects that. Yeah, it's a marker for fluid identity. I think it's a good way of thinking about it. So if we go back to this question, why don't we like the sound of our own voice? It's really the performance aspect, the vulnerability aspect. And another way to think about it is that we actually secretly do. When we don't know that it's our voice, we actually like our voice quite a lot.
So there are studies in which people were listening to a lot of different voice samples, among which some were their own voice samples, but they were not really aware of which ones they were; their voices were kind of hidden in there. And systematically, people seem to score their own voice higher than other people's voices when they don't know it's their own. So if you didn't know it was your voice, you might actually think, oh yeah, that sounds great. That's a good voice. That's nice. I'm feeling very, very in line with this person. But once you know it's your own voice, you have this kind of cringe feeling of vulnerability, of realizing how much you reveal in your voice that maybe you wish you did not.
But what about people who struggle with the ability to control their voice? Rebecca has also been researching ways to help people who stutter. What's really interesting with stuttering is that... their inner voice does not stutter. When they think, they don't stutter. When they read silently, their voice does not stutter. So it's really a discrepancy in the outer voice, that last bit of control between preparing your action and the action coming out.
So one of the ways to think about it is: they're going to create sound, and the brain thinks that there's too much difference between what they hear and what they intend to say. So instead of letting go of the flow of the voice, it just reboots itself. So if I say R and I hear R, I'm like, that's fine. If I say R and suddenly I hear E or O, my brain is going to think something is odd. And subconsciously, I'm going to change my muscle position to try to go back to what I'm trying to say. And this reboot is what we think creates those blockages, repetitions, or prolongations of sound. But I would say people who stutter most often do not stutter when they sing. This difference between speech and music is quite fundamental in stuttering. And if we can tweak the brain to make it think that it's singing when it's actually speaking, we could help people who stutter reduce their disfluency.
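Mumble Melody, discussed next, modulates the speaker's voice in real time through an earpiece, and its actual processing isn't detailed here. As a rough offline illustration of what "musically modulated" feedback can mean, here's a sketch that pitch-shifts a speech recording toward something more song-like; file names are hypothetical, and real-time latency constraints are ignored:

```python
import librosa
import soundfile as sf

# An offline illustration of "musically modulated" feedback: shift the pitch
# of a speech recording so it lands somewhere more song-like. The real system
# modulates the speaker's voice live through an earpiece, with latency
# constraints this sketch ignores. File names are hypothetical.
y, sr = librosa.load("speech.wav", sr=None)

# Shift up a major third (4 semitones), one plausible "melodic" transformation.
shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=4)

sf.write("speech_melodic.wav", shifted, sr)
```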
Yeah, I wondered if you could tell me about Mumble Melody then, this project that you've been working on. Yes, this led to the Mumble Melody project and to several studies of musically modulated auditory feedback. I've had speech therapy since... It's fascinating, because in the video you have of Mumble Melody, there's a woman who speaks, and she is... stuttering on certain words she's struggling with. And then we hear her speak while she's wearing an earpiece that is putting... I guess you would say a filter on her voice, is that fair to say? Exactly, a modulating filter. The Notes app I use all the time, because if I think of something, I'm going to forget it. So just write it down, or like little things that I see that I want to look into later. So she's hearing her voice in a different way in her own head. But what we hear when she speaks is a much more fluent, relaxed way of talking. And she doesn't stumble at all.
The Notes app I use all the time, because if I think of something, I'm going to forget it, so just write it down. Yeah, so we believe that this is actually changing the brain pathway used in producing and perceiving the voice. And it has led to really great results in terms of fluency. So we are really working on personalizing all the different parameters so that people can really make their own use of it.
I just thought of one other question about the voice. You said earlier that the brain has a way of filtering out our inner voice so that it doesn't distract us. But what about when we... hum or sing or talk to ourselves aloud. And I've always kind of wondered, why do we bother? We could just, you know, think it. We don't actually need to make the sound. Yeah, and I think this is very interesting to understand and even to think, where does it come from? Because...
We have language, we have words, but what does a voice mean without those words? For example, when I read a text, I'm always reading it with... a voice, whether it's my voice or someone else's voice. If you read an email that comes from a colleague, you might often read it with their voice in your head. And sometimes we can't even control it, because our brain is constantly making those mental models of voices when it comes to inner sound. You can sometimes imagine an entire song or symphony. But my big central theory here is that we think silently, and that... is only possible because you yourself have a voice. So sound comes before words. And I would say I... I spent the past 15 years studying the human voice. And my main realization, or theory, is that the voice
beyond words is so much richer. We often think of the voice as a vehicle to transmit words, but I like to turn that around and say words are just an excuse to have vocal interaction. What we share with each other through the voice, beyond the words, is really so much more important, as we share who we are. That was Rebecca Kleinberger. She is a professor of humanics and voice technology at Northeastern University. You can see her full talk at TED.com.
We have been talking about the human voice and AI-generated music, but we can't talk about the soundtracks of our lives without mentioning the sounds of Mother Nature. Musician Snow Raven grew up in Arctic Siberia. As part of the indigenous Sakha culture, she learned at a young age to mimic the animals around her. Her keen ear and gift for recreating animal sounds aren't just a neat trick, though. As she explains, her vocalizations connect her to the Siberian landscape.
Here she is on the TED stage in 2024. This is the way indigenous Sakha people greet one another. This phrase has no exact English translation, but it means, I greet the universe in your person. My name is Snow Raven, and I'm from the Republic of Sakha-Yakutia in Arctic Siberia, the coldest settled place on Earth, where winter temperatures can drop as low as negative 96 degrees Fahrenheit.
After six years of being away from Sakha-Yakutia, I returned this summer to see my family and also visit our ancestral home. The first... thing I did when I arrived was drop into silence and listen. Listening is one of the powerful gifts the universe has given humans to connect with nature. It is by listening that I have learned how to mimic nature. I listen with my imagination and become the animal that I hear. I move like it moves. I seek what it seeks. I cry with its cry. The owl,
for instance, has night vision, can see all around itself, and also flies without making a sound. The Elia brown kite soars on the heated air and with joy announces the arrival of the summer. When I hear the loon, I feel its longing for its partner, alongside its love for the baby it carries. The dance of this beautiful bird is so stunning and divine that Sakha people believe that happy are the eyes that even once witnessed the dance of a crane in the wilderness. The reindeer, taba,
are the lords of the tundra, and they run thousands of miles in huge herds to restore and recover their energy. They have a special breath. In a wolf's cry, I can hear the loneliness of the hunter and its yearning for the freedom beyond the body. It sees the moon and wants to join it in the sky. So the superpower of listening is that it leaves room for imagination to dance with a sound. Let's listen, ignite our imagination, and summon our animal superpowers here and now.
That was Snow Raven. You can watch her amazing talk at TED.com. Thank you so much for listening to the show and making it part of the soundtrack of your life. This episode was produced by Rachel Faulkner White, Fiona Geiran, James Delahoussaye, and Matthew Cloutier. It was edited by Sanaz Meshkinpour and me.
Our production staff at NPR also includes Katie Monteleone and Harsha Nahata. Our executive producer is Irene Noguchi. Our audio engineers were Patrick Murray, Gilly Moon, Jimmy Keeley, and Kwesi Lee. Our theme music was written by Ramtin Arablouei. Our partners at TED are Chris Anderson, Roxanne Hai Lash, Alejandra Salazar, and Daniella Balarezo. I'm Manoush Zomorodi, and you've been listening to the TED Radio Hour from NPR.