
86. Math, Music, and Artificial Intelligence - Levi McClain Interview (Final Part)

Feb 18, 2024 · 28 min · Ep. 86

Episode description


Help Support The Podcast by clicking on the links below:

Transcripts are available upon request. Email us at [email protected]

Follow us on X (Twitter)

Follow us on Social Media Pages (Linktree)


Visit our guest Levi McClain's Pages: 

youtube.com/@LeviMcClain

levimcclain.com/


Summary

Levi McClain discusses various topics related to music, sound, and artificial intelligence. He explores what makes a sound scary, the intersection of art and technology, sonifying data, microtonal tuning, and the impact of using 31 notes per octave. Levi also talks about creating instruments for microtonal music and using unconventional techniques to make music. The conversation concludes with a discussion on understanding consonance and dissonance and the challenges of programming artificial intelligence to perceive sound like humans do.



Takeaways:


  • The perception of scary sounds can be analyzed from different perspectives, including composition techniques, acoustic properties, neuroscience, and psychology.
  • Approaching art and music with a technical mind can lead to unique and innovative creations.
  • Sonifying data allows for the exploration of different ways to express information through sound.
  • Microtonal tuning expands the possibilities of harmony and offers new avenues for musical expression.
  • Creating instruments and using unconventional techniques can push the boundaries of traditional music-making.
  • Understanding consonance and dissonance is a complex topic that varies across cultures and musical traditions.
  • Programming artificial intelligence to understand consonance and dissonance requires a deeper understanding of human perception and cultural context.



Chapters

00:00 What Makes a Sound Scary

03:00 Approaching Art and Music with a Technical Mind

05:19 Sonifying Data and Turning it into Sound

08:39 Exploring Music with Microtonal Tuning

15:44 The Impact of Using 31 Notes per Octave

17:37 Why 31 Notes Instead of Any Other Arbitrary Number

19:53 Creating Instruments for Microtonal Music

21:25 Using Unconventional Techniques to Make Music

23:06 Closing Remarks and Questions

24:03 Understanding Consonance and Dissonance

25:25 Programming Artificial Intelligence to Understand Consonance and Dissonance

Transcript

I built an entire horror instrument in order to figure out what makes something sound scary. Now, if I asked a composer, they might say it has something to do with the timbre of the instrument in concert with certain compositional techniques: appropriately placed dissonances, stinger chords, and things to do with tension and release. An acoustician, by contrast, might examine the anatomy of a scary sound and observe a high degree of roughness in the waveform.

This is an acoustic property which refers to the rate at which the amplitude of a given sound changes, a property high in not just scary sounds, but also human screams. The neuroscientists might note that some types of fearful sounds are a purely mechanical process, actuated by a five-neuron acoustic startle circuit embedded in our brains. And the psychologists?

Well, they might discuss how our relationship with fear changes as we understand ourselves and the world around us better and better through the decades. Implying that some fears are of our own making. I say it's a really complex question. Perhaps a full 30 minute deep dive into the complex realm of psychoacoustics is in order. Oh, hey, and look at that. That's exactly what I did.
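The roughness property described above, amplitude that fluctuates rapidly, can be illustrated with a small sketch. The modulation rates and the envelope-based measure here are illustrative assumptions, not Levi's actual analysis:

```python
import numpy as np

def am_tone(carrier_hz, mod_hz, depth, dur=1.0, sr=44100):
    """Amplitude-modulated sine. Modulation rates of very roughly
    30-150 Hz are the range often described as 'rough'."""
    t = np.arange(int(dur * sr)) / sr
    envelope = 1.0 + depth * np.sin(2 * np.pi * mod_hz * t)
    return envelope * np.sin(2 * np.pi * carrier_hz * t)

def fluctuation_rate(signal, sr=44100):
    """Crude roughness proxy: the dominant frequency of the rectified
    amplitude envelope, i.e. how fast the loudness is changing."""
    env = np.abs(signal)
    spectrum = np.abs(np.fft.rfft(env - env.mean()))
    freqs = np.fft.rfftfreq(len(env), 1 / sr)
    return freqs[np.argmax(spectrum)]

smooth = am_tone(440, 4, 0.9)    # slow wobble, vibrato-like
rough = am_tone(440, 70, 0.9)    # fast wobble, scream-like roughness
print(fluctuation_rate(smooth))  # ~4 Hz
print(fluctuation_rate(rough))   # ~70 Hz
```

The same carrier note reads as smooth or rough depending only on how fast its amplitude changes, which is the point the acoustician's view makes.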

So if that's what you're interested in, go check out my video, What Makes a Sound Scary, over on my YouTube channel, at the link in my bio. I'm hoping to get past a thousand views on this one. That would be nice. So any support helps. So what I kind of want to do is get videos like yours in front of students who are thinking of going to college for music theory, things like that. Also, a lot of people who are really interested in the science but also want to create things like horror movies.

There's definitely a market for that. So with myself and my team, that's one of the things we're thinking about is, how do you get this amazing content in front of the right audience? Because it deserves to be seen, frankly. I think it's awesome. I think so. Yeah, absolutely. So my content, what I try and do is I try and focus on the arts and music and soundscapes and things of that nature, soundtracks. But I come from a bit of the engineering background, I have that kind of mind.

I think of things very technically usually. And a lot of the times people think those two ideas are opposed and they're two different things. I don't think so at all. I think you can absolutely approach art and music and all these pursuits with a technical mind. So approaching music with the language of math, I think can be very useful in some circumstances.

And that's one of the things I want to try and convey with these videos is that like, hey, it doesn't matter who you are, you can create beautiful art with whatever you've got, with whatever you're into. I'm actually really, really glad that you brought that up. We talk about the two different sides of the brain.

One of the things that we tried to do with our very first episode of the Breaking Math Podcast, a long time ago, is talk about when mathematics is inaccessible, and I absolutely can relate; before I became an engineer, I had to learn all the symbols. It was intimidating. Speaking of fear response. Speaking of fear response, there's a lot of folks for whom, when they see mathematical symbols, it elicits a fear response and things shut down.

Do you know that as a grad student in engineering, I wanted to take a material science class and a professor said, oh yeah, come on in, come on in, and like, I didn't have the background to understand some things and I just didn't have the terminologies and I wasn't familiar with a lot of the concepts, you know, despite my own background, which is heavy in physics and math, and I froze up and I stuttered and I felt like an idiot and it was horrible.

But I absolutely relate, and I've had to tell myself it's not that we are, I'm sorry to use the phrase, dumb or stupid. It's that you've got to take some time and get used to things, and then once you've processed them and really understood them and worked with them, you can really do some amazing things. Absolutely. I think everything is a skill, right? So I have basically no natural aptitude, no natural aptitude for music.

It's something that I've had to practice for 10-plus years of just, you know, going at it every day to get to where I'm at now, which hopefully is as a competent musician. And I think the same thing goes with math too. So many people have the assumption that you're either good at it or you're not.

No, it's a skill that you need to flex, that you need to work at to be able to get good at, and once you get good at it, then you have this beautiful language to articulate your ideas and you reach a fluency, whether it's math or music, where it becomes just this other fabulous mode of expression. I actually texted you, one of my millions of texts to you, and I said, what if we were to do a project where you had a holiday Hallmark movie horror film, and how would you play with the sound there?

Right. Yeah. One of the things I would want to investigate is this idea of the acoustic startle circuit and fast fear. It's pretty well established, and it's interesting because that's something you could, I would assume, pretty easily model in terms of AI, or model it inside of a computer. You know, that's again only kind of one dimension of sound and why something sounds scary.

There's also this idea of the slow fear which has much more to do with psychology and cultural associations. I'm interested because when comparing the visual to the audio there, how do I put that? It doesn't seem like there's as much of an immediate answer as to why something visually is scary as there is for this one part of audio science and audio research. So I'm wondering, how do you square that, I guess, with artificial intelligence?

And if you want to program an AI to be able to have the same fear response as a human, well, it seems like we can kind of do that pretty well with at least one dimension of audio. But it seems like it's a bit more of a challenge when it comes to the visual. Yeah, I think you bring up a really good point here. And my answer to that would be, I'm curious in terms of, let's just talk about quantity: how much information do you get visually, and how do you make decisions based on that information?

I know that our eyes have multiple layers in the neural net, which we have our own net of neurons in our neocortex and in our sensory cortex. And I'm aware that there are at least some initial explanations, like one layer will identify edges. The next layer will put together some of those edges into a shape. And I don't know where movement is included, but I know that another layer identifies movement. Oh, they made a Scary Stories movie.

And part of that Scary Stories movie is they messed with movement. They had a dark hallway with one of those freaky-deaky creatures and it's moving slowly. And then they stuttered the light like a strobe light and suddenly it skipped 10 steps forward. So it completely messed with your expectations of how fast or jerky things move. But it was terrifying. It was very effective.

Yeah. Now, I want to mention, your channel has a multitude, a multitude of videos that discuss mathematics and audio processing and music theory, and not just on fear. And for our listeners, I wanted to do a little, I hope you don't mind, a sampler of some of the other topics in your videos. Are you okay with that? Sure, that sounds great. Okay, awesome, awesome. Very good.

In 1956, composer Olivier Messiaen wrote Oiseaux Exotiques, a piece for piano and small orchestra which is heavily inspired by birdsong. Today, fellow TikTok user Sowily continues this tradition by producing beats from bird samples. Birds are the world's natural singers, so it seems only appropriate that we take inspiration from their song.

Today, I want to take inspiration from them too, but instead of birdsong samples, I'm interested in seeing if I can make music with the geometry of bird flight patterns. Check this out. It's called a murmuration: a flock of starlings weaving intricately in and out, creating mesmerizing, highly ordered geometric patterns. This is an example of what's called emergent behavior, a system whose behavior depends not on its individual parts, but on their relationships to one another.

In this case, when one bird moves in any direction, its closest neighbor will adjust course to compensate, so no birds in the flock run into each other. This simple system gives way to incredibly precise geometric flock patterns. We can actually replicate this behavior in part by a simulation governed by what is called the Boids algorithm. By assigning three simple rules to these simulated birds, we can shape a behavior pattern that replicates starling murmurations.
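The three simple rules mentioned here are, in Craig Reynolds' original Boids formulation, cohesion, separation, and alignment. A minimal 2-D sketch of that simulation follows; the radius and weighting values are illustrative assumptions, not taken from the video:

```python
import numpy as np

# Minimal Boids sketch: N birds in 2-D, three steering rules.
rng = np.random.default_rng(0)
N = 50
pos = rng.uniform(0, 100, (N, 2))   # positions
vel = rng.uniform(-1, 1, (N, 2))    # velocities

def step(pos, vel, radius=10.0, max_speed=2.0):
    new_vel = vel.copy()
    for i in range(N):
        # neighbours inside the perception radius (excluding self)
        dist = np.linalg.norm(pos - pos[i], axis=1)
        near = (dist > 0) & (dist < radius)
        if not near.any():
            continue
        # Rule 1: cohesion, steer toward the local centre of mass
        cohesion = pos[near].mean(axis=0) - pos[i]
        # Rule 2: separation, steer away from very close neighbours
        crowd = pos[near][dist[near] < radius / 2]
        separation = (pos[i] - crowd).sum(axis=0) if len(crowd) else 0.0
        # Rule 3: alignment, match the average heading of neighbours
        alignment = vel[near].mean(axis=0) - vel[i]
        new_vel[i] = vel[i] + 0.01 * cohesion + 0.05 * separation + 0.05 * alignment
        speed = np.linalg.norm(new_vel[i])
        if speed > max_speed:          # clamp speed so the flock stays stable
            new_vel[i] *= max_speed / speed
    return pos + new_vel, new_vel

for _ in range(100):
    pos, vel = step(pos, vel)
```

Each bird reacts only to its neighbours, yet the flock-level motion emerges, which is the sense in which the behavior depends on relationships rather than individuals.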

Now if we took these three rules and mapped them to musical parameters instead, we get some pretty interesting results. A murmuration-determined delay. A reverb impulse response controlled by bird cohesion and separation. We can even generate a melody that is controlled by the direction and turn radius of an individual bird. Let's layer a few of these concepts and see what trippy music comes as a result. That was incredible.

All of the math that I saw in that, I can go in so many directions, but before I do all that, I want to ask you: wow. The music was almost chaotic. It was almost chaotic sounds, and you turned it into a beautiful song. When it was just chaos, right? Yeah, it's chaos. This is a part of music that I've found a lot of inspiration in, which is sonifying data and turning it into sound.

So, sonification is the process of doing that: taking data, taking input, whatever it is, and turning it into sounds. You see this with, like, heart monitors in hospitals that beep. That sound is a sonification. It's a representation of what's going on with your heartbeat and your heart rhythm. So, the interesting thing about sonification is you have so much control over the end product and the end data.

I can take what you saw there, which is a murmuration-controlled delay or even the melody, which is controlled by the individual turn radius of a single bird, and then I have so many options when it comes to turning that into a sound, because it's just two data points,

two, three, four, five data points, that I can make the sound come out through a piano, I can make it come out through a violin, I can change, you know, maybe the key, the subset of notes that we're using in a particular piece and that we assign to different things. You have so much control over it that you can really turn this chaos into a lot of order through music.
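The kind of mapping described here, constraining an arbitrary data stream to a chosen subset of notes, can be sketched as follows. The scale, note range, and input values are illustrative assumptions, not taken from the episode:

```python
# Data to sonify: any numeric stream in [0, 1],
# e.g. one bird's normalized turn radius per frame.
data = [0.1, 0.8, 0.35, 0.6, 0.92, 0.2, 0.5, 0.75]

# Constrain the chaos to a key: C major pitch classes, in semitones.
C_MAJOR = [0, 2, 4, 5, 7, 9, 11]

def sonify(values, scale=C_MAJOR, low_midi=60, octaves=2):
    """Map each value in [0, 1] onto a note of the scale, then
    convert the MIDI note number to frequency (A4 = 440 Hz)."""
    notes = []
    steps = len(scale) * octaves
    for v in values:
        idx = min(int(v * steps), steps - 1)
        octave, degree = divmod(idx, len(scale))
        midi = low_midi + 12 * octave + scale[degree]
        freq = 440.0 * 2 ** ((midi - 69) / 12)
        notes.append((midi, round(freq, 1)))
    return notes

for midi, freq in sonify(data):
    print(midi, freq)
```

Swapping the scale, the range, or the instrument that renders the frequencies changes the character of the result without touching the data, which is the control over chaos being described.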

So I find things like sonifying the Boids algorithm to be an endless source of musical inspiration, because it's just a different way to approach music. I find it very useful. Incredible. And real quick, what dawned on me is, you just mentioned something. When you made that program that kind of bounced around and made the noise, you do have a choice in what possible noises the chaos has to choose from. So that's one way that you could control the chaos a little bit.

And then also the number of notes. So you could just choose, like, a single chord, or rather, just a bunch of notes that are at least in the same key, so there's some semblance of what we, with our Western-trained ears, recognize as beautiful. And then have chaos do its thing, and that's one way of having a combination of control and chaos, which is a huge theme in machine learning.

It's like, what elements are you controlling, and where do you allow for chaos, and when do you turn that knob, when do you allow for more chaos, and when do you allow for more control? But let's do another video here. What happens to harmony if we use 31 notes per octave instead of 12? Right. So, okay, that was incredible. I found it on YouTube. I've only been made aware specifically

about microtonal music. For those who have never heard of microtonal and are mostly used to Western music, can you explain microtonal and what you attempted to do with this video? Yeah, sure thing. So in the West, we are usually relegated to only 12 notes. So we have 12 individual notes that repeat in what we call octaves, and that gives us the harmonic, diatonic and chromatic language that we use to build out the framework for all of Western

music. So we basically only have 12 notes. And this series is all about asking the question: what if we lived in an alternate universe where we had 31 notes per octave instead of 12? How would that change music, and how would that bend the fabric of harmony itself? So it's kind of this experiment where I say, okay, we have 31 notes now. Now we can, you know, in typical Western music, you have minor chords, you have major chords. Well, now in 31, you have minor

and major chords, but now you can do sub-minor chords. So it's a little bit of a different feeling, different vibe, different flavor for the chords that we can, you know, usually have. In addition to sub-minor, now we have super-major chords, which are great. And now we have neutral chords. So basically, by refining the gradient of the pitch spectrum that we allow ourselves to have in Western music, we have more options, tonally, to explore different spaces in music and find, you know,

different nuance in the beauty that we allow ourselves to create. Oh, incredible. Okay. I have a few quick questions. I realize that we are almost a bit limited on time, so we'll kind of rapid-fire these pretty quick. Number one: why 31 notes instead of any other arbitrary number? Yeah. So I get that question a lot. It seems pretty random, 31. So 31 is ideal. I would say 12 notes is essentially a compromise. It's a system that we've created that tempers out some irregularities

in the math of tuning theory. So 12 notes is fairly easy to play with. You don't have too many options, but it is a compromise. So certain things are slightly out of tune. Your major chords, your minor chords, they're not as in tune as they could be. So in 31, again, we refine that gradient a little bit. And so now we can get chords that are more in tune. Now you might ask, like, why 31 specifically? Why not, like, if it's all about refining that gradient,

why not, like, 48 notes or, you know, some multiple of 12, which would seem to make more logical sense? Look, I know you were looking for a quick rapid-fire answer on this, but it's a little bit complex. So there's basically two or maybe three reasons why we might want to choose 31. One, it allows us to use all the same notes that we're familiar with and have them be more in tune with each other. That's great. It allows us to go beyond the 12 notes and

create these weird alien-sounding harmonies, which is great. And then also, it's not so many notes that it becomes impossible to play. So, you know, like, conservatory students, you know, have qualms with practicing in all 12 of their keys. Imagine your teacher saying, hey, you have to practice your scales in all 144 keys or something like that. That would be crazy. So 31 is kind of that sweet spot where we get all the

benefits of 12, plus we get to do alien stuff while not being too overwhelmed. Cool. I have two more questions for you before we move on. On this note, no pun intended: what instruments were you playing for those microtonal pieces, or what instruments did you play for that video? Yeah. So, I mean, this gets at one of the biggest problems in the microtonal music community, which is,

this stuff is great, and it's really cool to explore, and it gives you an avenue to take a more mathematical approach to music. But we don't have instruments to be able to play this stuff a lot of the time, because, you know, we have hundreds and hundreds of years of developing instruments to play in 12 and to play to a certain, you know, pitch standard and all these things. So what I end up having to do is create a lot of my own instruments. So what I use

is a fretless guitar, which doesn't have frets, so I can hit the notes in between the notes that we have in the West here. But I also develop and build my own instruments. Like, I think in that video you saw a little keyboard that I made, which is basically a modular keyboard that I can input certain notes into and, you know, connect certain notes to each of the keys, and the keys are completely modular. So I can move them around in an order that is great for whatever

tuning system I'm using. So in this case, it's going to be 31. So, last video. This is amazing: playing a bass with a vacuum. Pull it up. Oh my gosh. Okay. All right. I should clarify on that one. If you're wanting to try that out at home, get a vacuum that has a blow-out function, okay, because blowing in won't work. You have to blow out, and you have to hit the harmonic just right so that it resonates with

itself to get that sound. So you didn't even touch it there? You just angled it so it made a vibration? Yep. Yeah. You just angle it, and you have to be very careful, because the second you get off that angle, the string starts to lose resonance and then it destabilizes and you lose the sound. Oh, absolutely incredible. Okay. We have had Levi McClain on our show for today. There's so, so much more rich content. Levi is very knowledgeable about music

theory. Levi is knowledgeable about culture and about how our brains process auditory information. Please, please check out his channel. Go to Levi McClain. Sorry, at Levi McClain; is there an underscore there? No, it's at Levi McClain Music. At Levi McClain Music. You could also go to Google and just type in Levi McClain Music. His videos are entertaining, they're light, they're approachable, and they will expand your knowledge of our experience of

audio processing. Please check him out. And I'm going to do my best to push this specifically to, I'm thinking about some communities to identify, and of course I'm thinking music theory, but also just science, and anybody who would appreciate this. So I'll talk to my people and I'll see what we can do. Before we close, I want to give the floor completely to you, to say anything you'd like to say, to ask any questions about AI, or anything else, including a hi to Mom; that doesn't matter.

Hello. Well, I have to, since you brought it up: hi, Mom. I can't not do that. So, yeah, well, first off, I'd just like to thank you for having me on. It's always great to have these larger discussions. And I think I said at the top of the program, one of the things I love doing is going slightly outside of my discipline. I love being put a little bit out of my comfort zone. So to go on here and discuss, you know, applications of audio and music with

artificial intelligence, I think that's fantastic. I did have one question, and I'll try not to be too long-winded with it, because I think we may have discussed this a little bit before. But I think it's an example that illustrates a larger question I have in terms of artificial intelligence: how humans process the idea of consonance and dissonance, which is essentially, do we like a sound, do we not like a sound? I'm simplifying it a lot here. But this idea has been explained in a

couple different ways. One of them being this idea of natural law theory, which essentially looks at different sounds for the harmonic relationships and harmonic ratios between each other. So if you have two notes that create a dyad, which is a two-note chord, the relationship between those notes, if you can express that mathematically in a simple ratio, like a three-to-two or something like that, then our ear tends to classify that sound as consonant, as good-sounding. Now the more

complex you get with your harmonic ratios, the more dissonant we tend to classify the sound. So it's this mathematical model which explains how humans perceive this idea of consonance and dissonance. Now, it falls short in explaining anything outside of Western culture, right? So if you look at the music of Indonesian gamelan, you'll find that the harmonic ratios between the two or three or four notes that they use tend to be a lot more dissonant, but they typically do not

classify their own sounds as dissonant. So it's a model that works well in the West. It's a model that kind of falls short elsewhere. So my question in terms of artificial intelligence is: if your goal with an artificial intelligence is to, say, replicate how a human understands and perceives sounds and how they understand consonance and dissonance, how can we program an artificial intelligence to do this well if we have an incomplete understanding of how

we understand this really basic and fundamental concept in audio science? All right, I will line up to bat and offer my own answer, which obviously is a little bit incomplete here. First of all, machine learning has revealed more than anything, I think, that it is exclusively bound by its training data. So you just asked a question that had some specific terms here, like what a human would learn. Well, first of all, there has to be, well, what is a

human, like which humans? So that'll be based on the training data it has and how it defines human. That's its first limitation. We're not yet at a point where it can have, you know, any larger category than that. Now, my second point here is a quote from Richard Dawkins. He talked about how one of the remaining questions in evolutionary theory is what elements in evolution and in biology had to

exist and what elements simply happened to exist. And that's what you're asking right now in terms of what we recognize as consonant or dissonant: what had to be, what simply happened to be, and how our values are assigned. So those are two ways of looking at this question as we explore the question you brought in much further. I just wanted to make those two clarifications, with artificial intelligence and evolutionary theory, before the broader question of what is

consonant or dissonant across music styles, and why. And I don't have that answer. Or should I say that I do have that answer, but I'm not going to tell you? I'm just kidding. No, I'll leave that question for our listeners to pursue, and we'd love to hear your thoughts on it. Either send them to Levi McClain on his socials, again, it's at Levi McClain, or send them to our email, breakingmathpodcast at gmail.com, or on any of our socials as well.
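The natural-law idea from this exchange, that simpler frequency ratios sound more consonant, can be given a toy numerical form, and the same arithmetic touches the earlier 31-notes-per-octave discussion. This is a sketch of one common simplification (ranking a ratio by numerator times denominator in lowest terms), not a complete perceptual model, and as Levi notes it does not account for non-Western practice such as gamelan:

```python
import math
from fractions import Fraction

def ratio_complexity(p, q):
    """Rank an interval p:q by numerator * denominator in lowest terms.
    Lower = simpler ratio = more consonant, in the natural-law view."""
    r = Fraction(p, q)
    return r.numerator * r.denominator

intervals = {
    "octave (2:1)":         (2, 1),
    "perfect fifth (3:2)":  (3, 2),
    "major third (5:4)":    (5, 4),
    "minor seventh (16:9)": (16, 9),
    "tritone (45:32)":      (45, 32),
}
for name, (p, q) in sorted(intervals.items(),
                           key=lambda kv: ratio_complexity(*kv[1])):
    print(f"{name}: complexity {ratio_complexity(p, q)}")

# Tie-in to the 31-notes discussion: how close is each system's major
# third to the just 5:4 ratio, measured in cents (1200ths of an octave)?
def cents(ratio):
    return 1200 * math.log2(ratio)

just_third = cents(5 / 4)     # about 386.3 cents
third_12 = 4 * 1200 / 12      # 400.0 cents: 4 of 12 equal steps
third_31 = 10 * 1200 / 31     # about 387.1 cents: 10 of 31 equal steps
print(just_third, third_12, third_31)
```

The 31-step third lands within about one cent of the just ratio while the familiar 12-step third misses by nearly 14, which is one concrete sense in which 31-note chords are "more in tune."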

This has been an absolute blast. We've got so many more episodes and so much more content that we didn't even get to. Maybe we'll return and talk about how machine learning is now being used to attempt to classify whale language. So more on that some other time and the meantime we'll leave you to search that on your own. And thank you very much Levi. It has been an absolute pleasure. Thanks for having me on.

This transcript was generated by Metacast using AI and may contain inaccuracies. Learn more about transcripts.