
Can AI Compose Good Music?

Nov 26, 2019 · 26 min

Episode description

Computers can now drive cars, identify faces and transcribe speech, but many experts said that it would take much longer for AI to tackle creative endeavors. This week on Decrypted, Bloomberg Technology's Natalia Drozdiak meets three composers using artificial intelligence to make music, and she and host Aki Ito dissect their robo-generated songs.



Transcript

Speaker 1

Earlier this month, the Prague Philharmonic Orchestra got on stage to play a symphony written by the famous Czech composer Antonín Dvořák. The orchestra was premiering a piece of music the world had never heard before. Dvořák died more than a hundred years ago, and he left behind just the beginnings of a composition, just two sheets of music. So a computer program powered by AI studied the rest of Dvořák's music and completed the composer's unfinished work.

This is what the software spit out, something in the style of Dvořák that was still an entirely new symphony. For years, we've worried about robots replacing the jobs of truck drivers and call center agents and accountants, but experts said creative jobs were going to be safe, that computers were still so far from making things that are new and subjective, that move us. So consider the fact that

AI wrote most of this. Today on the show, reporter Natalia Drozdiak visits three musicians using artificial intelligence to make their music, including the guy behind this robo-Dvořák symphony. If computers can now compose something this beautiful, what's left for us humans to do? I'm Aki Ito. You're listening to Decrypted. Stay with us. Hey Nat, welcome

to the show. Thanks for having me. So you're our European tech reporter out of Brussels, and you normally write about Europe's regulatory crackdown on the tech industry, but today we're talking about something completely different. Yep. So we've been writing about the use of AI in self-driving cars and chatbots and voice assistants, but in the world of music, this is all still kind of new. I mean, musicians just started using AI as a tool a few years ago.

So I talked to three different composers who are using AI in really different ways. The first guy I want to introduce you to is Benoît Carré. He's a French musician. He's known in France for these French pop songs, but he's also composed music for artists like the famous French rocker Johnny Hallyday. So this is a pretty famous guy out of France. Yeah, he's well known, but he's gotten even more attention recently because of his experiments with AI

generated music. And how did he get into that? So there's this guy called François Pachet, who for a long time was the head of the research lab run by Sony in Paris, and Pachet has kind of become known as the godfather of AI music because he's really done a lot of research in this area. François called me a

long time ago. He discovered my songs in the late nineties, and he was interested in my way of composing songs, always searching for unexpected chord changes. And he invited Benoît to his lab to try out some of the tools that he's been developing over the past few years. And one of the tools that Pachet's team developed is this machine learning tool that generates music. So how does

it work? So first, the program basically ingests a bunch of sheets of music that the musician wants the program to train on, and then the computer cuts it up into really tiny bits of music and rearranges it into a whole new composition. So let me give you an example. A few years ago, Benoît fed this program 470 lead sheets of jazz standards from the thirties up through the fifties and sixties, and out came these two bars of music that he really liked. Very jazzy.
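The episode doesn't detail how Pachet's tool works internally, but the cut-up-and-rearrange process described above can be sketched as a tiny Markov chain over melodic fragments. Everything here, the note representation, the fragment length, the toy "lead sheets," is an illustrative assumption, not Sony's actual system:

```python
import random

def build_chain(melodies, frag_len=2):
    """Cut each training melody into tiny fragments and record
    which fragment tends to follow which (a simple Markov chain)."""
    chain = {}
    for melody in melodies:
        frags = [tuple(melody[i:i + frag_len])
                 for i in range(0, len(melody) - frag_len + 1, frag_len)]
        for cur, nxt in zip(frags, frags[1:]):
            chain.setdefault(cur, []).append(nxt)
    return chain

def generate(chain, n_frags=8, seed=0):
    """Rearrange learned fragments into a new melody by walking the chain."""
    rng = random.Random(seed)
    cur = rng.choice(list(chain))
    out = list(cur)
    for _ in range(n_frags - 1):
        cur = rng.choice(chain.get(cur) or list(chain))  # restart on dead ends
        out.extend(cur)
    return out

# Toy "lead sheets": notes as MIDI pitch numbers.
standards = [
    [60, 62, 64, 65, 67, 65, 64, 62],
    [60, 64, 67, 64, 60, 62, 64, 62],
]
print(generate(build_chain(standards), n_frags=4))
```

Real systems work with far richer representations (rhythm, harmony, style constraints), but the core idea, learning which small pieces plausibly follow which, is the same.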

I can definitely sense the influence. Yeah, so Benoît really liked the sound of what the computer came up with. It's just two bars at the beginning, um, with an ascendant melodic movement. It is really unexpected, and I loved it from the first time. So he asked the system to generate new ones based on those new parameters. And then I reiterate with the machine, you know,

as if it were a workmate, something like that. It's kind of like, um, two artists collaborating on a song, it's just that one of them is a machine. Exactly. And so that ultimately led to this full song that he released under his artist name Skygge. [song plays] Yeah, I definitely recognized the melody from earlier, but it doesn't sound as jazzy. It sounds a little old-timey and familiar. At the end,

I had a really, uh, very interesting melody. I was able to say, okay, I'm proud of this melody and I think it's new. I think it's interesting because I couldn't have made it by myself. So the collaboration with AI is really interesting because it gives me something new. So for this song, the AI was really just the initial inspiration, but Benoît made a ton of creative decisions to bring this song to the finish line. So in that sense,

maybe you can call this AI-lite, right? And so Benoît released this song, and since then, François Pachet has moved from Sony over to Spotify, and Benoît has been helping François's new team at Spotify. They're building these tools from scratch with researchers for various tasks that are involved in creating a song. And one of those tools that they've developed, they've incorporated into Benoît's new album called American Folk Songs. So Benoît and I

talked about one song called Black Is the Color. The melody is based on this famous folk song called Black Is the Color of My True Love's Hair. Um, it's been covered by people like Pete Seeger and Nina Simone. I'm sure you've heard it before. I actually don't think I have. Um, is that bad? I grew up [inaudible]. Well, I think most of our listeners have. So Benoît's song is based on this a cappella version sung by Pete Seeger. Black, black, black is the color of my true love's hair. And

he had Spotify's AI program add all these harmonies in the background to make it sound so much richer. Black, black, black is the color of my true love's hair. Her face is something wondrous fair. The purest eyes and the daintiest hands. I love the ground whereon she stands. I thought that it could be very inspiring, to have something very sophisticated with a simple melody like Black Is the Color, and I got an amazing result. I was blown away by this result.

Black, black, black is the color of my true love's hair. Her face is something wondrous fair. I think no, no human could have composed this string arrangement, because it's too weird. There are two strange things in it: strange chord changes and internal movements. But it's still really, really beautiful. And I think that this kind of result is very encouraging for musicians like me and, after me, musicians to do something new, something that they find inspiring

for their compositions. You know, it's really different from the music I'm usually listening to, uh, and definitely a little weird, but I think I like it. It's beautiful. Yeah, it sounds, I don't know, both old and new at the same time. But what's interesting to me is how he's really revamping this traditional folk music, and I can imagine it might sound strange for some American audiences who grew up listening to this type of music and then have

it repackaged in a totally different way. So the original melody and the lyrics were obviously written by a human a long time ago, but the arrangement we hear in the background was all composed by a machine. Is that right? Mostly. So, Benoît did make a few choices and additions along the way. He used Pete Seeger's voice to create a choir, for instance. He also composed and recorded some chord sequences. One was in the style

of, like, an epic Game of Thrones type sound. Another one was in a bossa nova style, and he fed that into the machine, which in turn generated a whole new string arrangement for the background sound. And for the final recording, he also chose the string quartet and the director to play that new composition. Okay, so still a lot of human intervention there. Yeah, exactly. Uh, and is this a tool that Spotify is making available to everyone?

So, not quite yet. They eventually want to release the tool as open source on the Internet, possibly by the end of next year, but it could take longer than that. And so the next composer and entrepreneur I want to introduce you to is this guy called Pierre Barreau, and he's based in Luxembourg. He studied computer science. He's a musician, his father is a film and music producer, his mother is a singer, and he and his brother started this

company called AIVA. This is the startup that made the Dvořák-inspired symphony that we heard at the top of the show. Right, and he and his brother were inspired by how important music is in film, but how long that process can take in terms of creating a soundtrack. So they wanted to see if they could train AI and see if it could basically help a composer create that type of music, not just for films, but also for commercials, promotional videos, video games, and also just generally

assisting composers in their work. So Pierre fed the machine with thirty thousand scores of history's greatest composers, from Bach to Beethoven and Mozart. So basically it all starts by teaching an algorithm to learn the patterns in music. You know, there's this common knowledge that music is sort of emotional and it's the opposite of math. But actually, if you look at music very carefully, there's a lot

of patterns in it. Um, so AIVA understands all these patterns, which are very, very mathematical at the core, and AIVA uses that to generate different kinds of music depending on what you're looking for. So let me play you two songs in completely different styles. The first one is this classical song called I Am AI. This makes me think of a winter ballet. Does that make sense? Absolutely. Yeah, I totally think of a forest blanketed in snow and little bunny rabbits hopping by. I see that too. Let me

play a totally different song. This is a pop song called Guiding Light. So background music is a big industry, and this just underscores the strength of AIVA's business model. I mean, for the most part, if you're a company or, like, a podcast, you can pay a composer to make custom music, or you can use these catalogs with songs that you're licensed to use. Right. That's what we do for this show. We use these big catalogs of songs, um, that we're allowed to use, but it's really

hard because, you know, these are already existing songs. A lot of other shows use them, and, I don't know, I know I drive our producers crazy because I'm really picky about which tracks they use for different sections of our show. Yeah, yeah. I mean, so that's the big opportunity for AIVA. You could have a custom-made piece of music, but because you're having a computer do it, it's probably cheaper and faster than having a human composer

do it. How long does it take to create, like, a new song, you know, like three minutes, three and a half minutes long? So it depends. It depends on the algorithms that we use, because we have different algorithms that we've used over the past couple of years. The very first ones that we had usually took forty-eight to seventy-two hours, because we had to retrain AIVA on the influences that we wanted her to specifically emulate, um.

But more recently, it takes about, like, forty-five seconds to a minute to create a piece of music three minutes long now. So you know how earlier the first example from Benoît was just this starting point and there was still, like, a ton of human intervention

that was involved. How complete is the music that AIVA's software is generating? Well, Pierre says it kind of depends on who's playing around with the tool. Like, someone who's highly musically trained might make more tweaks and changes to it, whereas someone who isn't might leave it as is, though the quality might not be as good as a result. So in the songs that we just listened to earlier,

did human composers tweak those songs? So nope, those were composed entirely by AIVA, but humans did assign the different parts of the music to different instruments. We'll be right back. Okay, so before the break, we met two musicians who are using AI to make music in really different ways. Yeah, so Benoît uses AI kind of as a collaborator, Pierre Barreau uses AI specifically for background music. But the last guy I want to introduce you to takes it even further.

His name's Ash Koosha. Yeah, thanks so much for having us over, we're really [inaudible]. And he's an electronic musician. He's Iranian-born and based in London. The question for me always was how I can make a very intriguing, complete and complex piece of music only using a computer. And also, what if the part of the composition that I'm thinking of can be partly made by the computer?

So that question took me twelve years to go through all the different parts, from instruments, replicating the sound of violins, field recording, synthesizing voice, and finally to finding a way to generate lyrics. Um, that is what initiated Auxuman. It was trying to tackle what is a non-human creative engine. And a few years ago, Ash started building virtual entertainers for video games. So you know how they have these avatars in video games? Is that, like, Mario in Mario

Kart? Am I outing myself as a non-gamer? Well, so Ash made a virtual avatar that makes music, and he called her Yona. So this is like a whole virtual character that writes her own music? Yeah, so you can kind of think of it as infusing an AI with a personality too. So in this case, Ash trained Yona's AI on Margaret Atwood's novels and also on articles about teenage life. So Ash's company Auxuman created Yona, and she's an angsty teen in a dystopian world. That's

a really good way of putting it. So, yeah, let me play you a song that Ash just released in September, written and sung by Yona. I never felt alone. You never said a word. I fell from my prone. You didn't want me there, you didn't want me near, you didn't want me there. I never heard you say it, every life I live. It's definitely very weird, very dystopian. I feel like she's going to come and kill me in my sleep. Yeah, I mean, it's definitely a little eerie.
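Ash doesn't describe the actual model behind Yona's lyrics, but the basic idea of training on a text corpus and generating new lines can be illustrated with a toy word-level bigram sampler. The corpus below is a stand-in of a few short lines, not the actual training data, which the episode says was Margaret Atwood's novels and articles about teenage life:

```python
import random

def train_bigrams(corpus):
    """Count which word follows which in the training text."""
    counts = {}
    for line in corpus:
        words = ["<s>"] + line.lower().split() + ["</s>"]
        for cur, nxt in zip(words, words[1:]):
            counts.setdefault(cur, []).append(nxt)
    return counts

def sample_line(counts, max_words=12, seed=1):
    """Walk the bigram table from the start token to produce one lyric line."""
    rng = random.Random(seed)
    word, out = "<s>", []
    while len(out) < max_words:
        word = rng.choice(counts[word])
        if word == "</s>":
            break
        out.append(word)
    return " ".join(out)

# Stand-in corpus for illustration only.
corpus = [
    "you never said a word",
    "you didn't want me there",
    "i never felt alone",
]
print(sample_line(train_bigrams(corpus)))
```

A production lyric generator would use a neural language model rather than bigram counts, but the workflow, ingest a corpus, learn word patterns, sample new lines, is the same in miniature.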

And some of the comments on the video are kind of funny. Like, one user says this is creepy, another one says don't listen to this when you're high, and another guy says don't mess with organic music. Um, and I think part of what makes it extra creepy is that you can watch Yona sing this song on YouTube, and she looks half human, half robot. [song plays] So

this definitely really creeps me out. But now that I'm thinking about it, maybe that's the point, that it's disturbing. Yeah. So Ash's goal with Yona and his other characters, he has other AI-based characters, is to really have them evoke emotion in the people listening to it. So in her music you get this sense that she's sad, and Ash says it's easier to get a human reaction through sad and romantic music. Um, we want to make something that when you listen to Yona, you believe

that there's something being said. So the nature of the scales that we use, and the type of music that we use, it's very personal, it's very deep, it's very, um, yeah, in a sense it's romantic. I think the other thing he says is that he's not trying to replace traditional music in any way, but to create a whole new form of sound and a whole new genre. We could try and make the best rock track, but I think there are many,

many good musicians that do that. Creating genres takes time, and it starts as a subculture and a scene, pretty much, so it takes time, so we want to give it time. At the beginning it sounds sometimes wonky and it's a bit weird, but that's what we're looking for. And so with that also goes a different form of consumption. So this type of music might be consumed through games or virtual reality or different types of concerts, and there's a

market for that. Yeah. I mean, Ash is already monetizing his virtual entertainers. I mean, Yona performed at events and shows and at a poetry festival recently. Wow. So it's kind of like, um, Hatsune Miku in Japan. Do you know about her? I have no idea who that is. So she's, like, they call her a virtual pop idol, um, and they have, like, tons of people who go to her concerts, or its concerts, or whatever you would call it.

Yeah, and she gets projected onto a screen, and, you know, they use the software for her, um, to sing, if you can call it that. That is the idea of the modern, the contemporary pop culture, which is alter egos and creating almost cartoon characters. So we are kind of adapting all of these things into a new form of expression. This is why we're not apologetic about, you know, being digital or being a bit weird and not complete or perfect. That's how

she is. And I think, um, the gaming generation, the video game generation, and Gen Z's and Alphas, is going to be more accepting. So, Nat, we started this episode with this idea that creativity is the last frontier of AI, and of course that begs the question: if AI can now make music, is there anything left for us humans

to do? And you know, having listened to your conversations with Benoît and Pierre and Ash, and having listened to their AI-generated music, I think it's all really interesting, and there were definitely sections of their songs that I really liked. But the question we posed at the start still feels very premature to me. It feels like we're

still very, very far from AI replacing human composers. And I personally still prefer all of the artists I listen to who are writing their own stuff instead of having computers write their stuff. You know what, it reminds me of the conversation that I had with Pierre Barreau from AIVA, and he said that people were really upset when synthesizers were first introduced thirty years ago, and now it's totally normal. So I guess I can imagine a day when AI is just going to be another tool for musicians and

people won't even bat an eye. Natalia Drozdiak, thanks for coming on the show today. Thanks for having me, Aki. So before I let you guys go, I want to let you know that this is the last episode of this season of Decrypted, and we're going to be publishing a special bonus episode in two weeks, where this show's original co-host, Brad Stone, and I are going to be talking about our favorite episodes that we've ever done and update you on the people and the stories we've

covered over the years. So if there's an episode you want to talk about, tweet at me or email me at aito16@bloomberg.net. Decrypted is hosted by me. Shawn Wen is our executive producer. Ethan Brooks mixed the show today. Nate Lanxon and Neville Gillette helped with recordings. Francesca Levy is the head of Bloomberg Podcasts. We'll see you in two weeks.
