Co-author of the study and professor of neuroscience at the University of California, Robert T. Knight says it's a way for scientists to understand how emotion affects communication.
Transcript
00:00 The electrodes are on there, they're in the hospital, and we basically play the song and
00:06 we record the electrical activity.
00:07 Now, the electrical activity is coming from individual electrodes placed on their brain.
00:13 And I think the simplest way to think about it is if you watch someone playing the piano
00:18 and you're a good piano player, you could look at the keys they hit with no sound and
00:24 know what they're playing.
00:26 That's what we're doing.
00:27 We're reading the piano keys of the brain.
00:29 We're ascribing to each electrode a specific role in frequency and tone and rhythm.
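As a loose illustration of the "piano keys" idea (not the study's actual analysis), the sketch below correlates each electrode's activity with each frequency band of a stimulus spectrogram and reports the band each electrode tracks best. The array shapes, the random stand-in data, and the use of plain correlation are all assumptions for illustration.

```python
import numpy as np

# Illustrative shapes only: 92 electrodes and a 32-band stimulus spectrogram,
# both sampled over the same 5000 time bins. Random stand-in data.
rng = np.random.default_rng(0)
n_electrodes, n_bands, n_bins = 92, 32, 5000
neural = rng.standard_normal((n_electrodes, n_bins))    # e.g. per-electrode high-frequency power
spectrogram = rng.standard_normal((n_bands, n_bins))    # stimulus spectrogram (stand-in)

def zscore(x):
    return (x - x.mean(axis=-1, keepdims=True)) / x.std(axis=-1, keepdims=True)

# "Which piano key does each electrode track?": correlate every electrode with
# every frequency band, then keep the best-matching band per electrode.
corr = zscore(neural) @ zscore(spectrogram).T / n_bins  # (92, 32) electrode-by-band correlations
preferred_band = corr.argmax(axis=1)                    # the band each electrode follows most closely
print(preferred_band[:10])
```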
00:38 And since the music is extremely complex, you can see the recording sites there in the temporal
00:43 lobe.
00:44 Basically, it's a lot of data, 92 electrodes per person, at millisecond precision.
00:52 And using AI, machine learning specifically, allows us to extract the actual electrical
00:59 activity that correlates with the sound that has been delivered to them.
01:05 So once we've figured out the electrical activity that correlates, we take that and we simply
01:10 convert it back into a spectrogram of the sound.
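For a concrete picture of this decoding step, here is a minimal sketch under assumed details: lagged electrode activity is mapped to spectrogram bins with a ridge regression (the study's actual models and features may differ), and the predicted spectrogram could then be inverted back to audio with a phase-recovery method. All shapes and the stand-in data are illustrative.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Random stand-in data with illustrative shapes: electrode activity and the
# stimulus spectrogram aligned over the same time bins.
rng = np.random.default_rng(1)
n_bins, n_electrodes, n_bands, n_lags = 5000, 92, 32, 10
neural = rng.standard_normal((n_bins, n_electrodes))
spectrogram = rng.standard_normal((n_bins, n_bands))

# Lagged copies of the neural activity let the decoder use a short window of
# preceding brain activity to predict each spectrogram frame.
X = np.hstack([np.roll(neural, lag, axis=0) for lag in range(n_lags)])
X_train, X_test, y_train, y_test = train_test_split(
    X, spectrogram, test_size=0.2, shuffle=False)

decoder = Ridge(alpha=1.0)        # one linear map: lagged electrodes -> all frequency bands
decoder.fit(X_train, y_train)
predicted_spec = decoder.predict(X_test)
print(predicted_spec.shape)       # (n_test_bins, n_bands)

# The predicted spectrogram could then be inverted to a waveform, e.g. with an
# iterative phase-recovery routine such as librosa.griffinlim applied to a
# properly scaled magnitude spectrogram.
```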
01:13 And it's only going to get better.
01:14 And the other thing I'd like to just mention for your viewers is that language is strongly
01:23 lateralized to the left hemisphere.
01:24 So 95% of right-handers have language in the left hemisphere.
01:28 But music is more bilateral.
01:31 It's actually represented in both hemispheres, both temporal areas, with a slight preference
01:37 for the right.
01:38 It's maybe 60/40 for music and 95/5 for language and words.
01:44 So that's important because it gives us an infrastructure to understand why patients who
01:52 have aphasia who can't speak can sing.
01:56 And this is a well-known clinical phenomenon.
01:58 So there's an exciting connection.
02:01 But I think the overall thing that we want to do with this, it's not so much the machine
02:05 learning.
02:06 That's pretty straightforward.
02:07 It's that it gives us a way for a prosthetic device for someone who has a disabling neurological
02:12 condition, let's say ALS or aphasia or severe stuttering.
02:18 It gives us a way to have the output of an assistive device that we would implant over
02:24 the cortex that would have-- it wouldn't just sound like the robotic sound it is now, like,
02:30 "I love you."
02:32 We would have tone.
02:33 We'd have rhythm.
02:34 It would be, "I love you."
02:36 And that's what, I think, understanding music gives us: a kind of insight into the
02:41 prosodic and emotional components of how humans communicate.
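Purely to illustrate what "having tone and rhythm" means at the signal level (this is not from the study), the toy sketch below builds a three-syllable utterance twice: once monotone with uniform timing, and once with a pitch contour and varied syllable lengths. The frequencies and durations are arbitrary assumptions.

```python
import numpy as np

SR = 16_000  # sample rate in Hz

def syllable(f0_hz, duration_s):
    """A toy 'syllable': a sine tone at the given pitch with a soft amplitude envelope."""
    t = np.arange(int(SR * duration_s)) / SR
    return np.hanning(t.size) * np.sin(2 * np.pi * f0_hz * t)

# Robotic version: every syllable at the same pitch and the same length.
flat = np.concatenate([syllable(120, 0.25) for _ in range(3)])

# Prosodic version: pitch rises on the stressed syllable and falls at the end,
# and the stressed syllable is held longer.
contoured = np.concatenate([
    syllable(110, 0.20),   # "I"    - lower pitch, short
    syllable(150, 0.35),   # "love" - higher pitch, stressed, longer
    syllable(125, 0.30),   # "you"  - falling back down
])

print(flat.shape, contoured.shape)  # two mono waveforms at 16 kHz
```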