Using Collective Musical Memories
Certain musical tropes, colours, and paths are so strongly connected in people’s collective memory with certain emotions or situations that they have become musical clichés. Think of the glockenspiel, celesta, and similar instruments, or the Lydian scale, for “magical” moments, or upward fifth leaps for heroism.
These well-known and often overused approaches to scoring are one reason why many learning composers do the exact opposite and look for new approaches to “standard” scoring situations. Breaking with convention is admirable and can lead to great new music-picture relationships, but it carries a high inherent risk of not producing the desired effect. The art form of (commercial) film music relies heavily on musical familiarity: musical ideas that “sound like” something connected in collective memory with a certain emotion will most likely trigger that same emotion through this familiarity.
A great example of this approach is John Williams’ score for the HOME ALONE franchise. The reason the score almost instantly evokes that “Christmassy feeling” is the incredible popularity of Tchaikovsky’s Nutcracker in the western world in connection with Christmas. The resemblance of some of the score’s cues to parts of the Nutcracker is striking and most likely a conscious decision (rather than Williams falling victim to the temp track). The familiarity of the vocabulary Williams used here instantly creates the desired emotional response in probably the majority of the audience.
Abstracting this thought even further: our brain constantly compares “new music” with what it already knows, and when it detects familiarity with something, it recalls the corresponding situation or emotion. Musical experience, however, is a highly individual factor, and this is where the danger of “trying out new things” in clichéd scoring situations lies.
Just because something evokes a certain emotion in you while you compose (your brain filtering for familiarity in the same way described above) doesn’t mean your audience will react the same way. Perhaps you subconsciously remember a very specific situation connected with that musical trope. A classic and highly individual example would be songs heard during a break-up, which burn deep into memory not only as a whole “song” but also through their musical features. Reactivating features of such a song might trigger a deep emotional response in one person while leaving another completely untouched.
So for film music that is supposed to connect with a broad audience, the safer path is to use more generalized musical vocabulary. Unfortunately I can only scratch the surface of this extremely extensive and very interesting topic here, but there might be a follow-up in the future.
An interesting thing to think about is the music you were exposed to during early childhood and puberty. Most likely this music and its features define much of what you gravitate towards musically. From personal experience: I figured out in my late 20s that many of my harmonic preferences are based on music from a single LP that I heard at least three times a week for a while as a little kid. When I accidentally stumbled across this LP again, I noticed how deeply its musical devices had burned into my subconscious, but also how easily those devices triggered emotional responses in me (independently of “childhood nostalgia”).
I would love to hear similar stories from other people, so feel free to share them in the comment section.
Thanks for writing this article. It provides some good insight into why we recognize certain types of music and associate them with film genres or scenes. You mention that certain music can be connected to experiences (such as a breakup), and that hearing that music evokes the emotions of that time. We know that music certainly affects the emotions. However, I think it also affects the imagination. When the first (fourth) Star Wars came out, I saw the film several times and bought the soundtrack. When I listened to the soundtrack, I saw those film scenes in my imagination. The music evoked the visual memory of the film along with the emotional memory.
I’m wondering if music has its own language distinct from what we learn to associate it with (i.e., western music for western films, horror music for horror films, etc.). For example, a major chord is “happy,” whereas a minor chord is “sad.” Are those meanings inherent in the music itself, or do we learn to apply them to it? I can understand how our film culture has taught us the meaning of music for film. However, I’m wondering how composers and audiences knew the meaning of music before film. I think of Beethoven’s Pastoral Symphony. How did Beethoven know those sounds would create a pastoral feeling in the listener? The Spring movement of Vivaldi’s Four Seasons definitely sounds like spring. People back then didn’t have radios or iPods. They either heard music at a performance or made it themselves. How did they understand it?
There was also a movement in the 19th century called “program music.” Composers attempted to use music to tell a story. Here’s a list of their works: https://www.allmusic.com/blog/post/program-music-and-the-romantic-tone-poem
When I listen to these pieces, I understand the stories they’re trying to tell. Did we all just learn to understand this narrative music, or is there something in the musical language itself that carries this meaning apart from what we have learned?
Just pondering.
I’m pretty sure it is learned. I love the analogy between music and language. As listeners, over the course of our lives, we learn what certain musical vocabularies mean and come to understand those vocabularies in new contexts. As composers, we learn how to use certain musical vocabularies to get our ideas across, and we can rely on this collective knowledge to make sure people understand what we mean. So I really believe that all these emotional responses are learned behaviour. The only concept in music where a certain emotional response might be “encoded” in the music itself is perhaps the contrast between consonance and dissonance, with its corresponding responses of relaxation and tension. But any higher musical concepts and devices are learned.
It is pretty tricky to imagine from a 21st-century standpoint how people in the 19th century felt about music. I think they simply felt “pastoral” because the title said so, and the musical devices Beethoven used then became collective memory of that “feeling”. This theory is backed, I think, by the countless pieces from history without programmatic titles that never created a singular emotional connection with the general public. Back then, the accessibility gap between “art music” and “house music” was enormous, and I would guess that people from lower social standings, who simply had no opportunity to “learn the language” of art music, would have been completely lost when hearing Beethoven. But this is just an assumption.
So the idea of media grammar does apply to music.
Pavlik and McIntosh explain that media grammar is “the underlying rules, structures, and patterns by which a medium presents itself and is used and understood by the audience.” Essentially, all media, especially the arts, communicate their messages in specific structures that we learn to recognize through exposure and education. According to Meyrowitz, we must “have some understanding of specific workings of individual media” to have media grammar literacy.
Kassabian claims that musical “competence is based on decipherable codes learned through experience. As with language and visual images, we learn through exposure what a given tempo, series of notes, key, time signature, rhythm, volume, and orchestration are meant to signify.” According to Carolyn Fortuna, “the audience instinctively understands the feelings the filmmaker wants to evoke with a certain style of music.”
I think Fortuna’s statement is intriguing. Composers are communicating a message to the listener through the language of music. If the listener knows that language, he/she understands the message. If that is true, then you bring up an important idea in your article:
“These well-known and often overused approaches to scoring are one reason why many learning composers do the exact opposite and look for new approaches to ‘standard’ scoring situations. Breaking with convention is admirable and can lead to great new music-picture relationships, but it carries a high inherent risk of not producing the desired effect.”
These composers might be using a different dialect by changing the standard scoring, so the listener, who doesn’t know that dialect, might not get the intended message. However, other knowledgeable composers and listeners might make the leap to the new dialect and get it.
So different genres and styles of music could be different dialects. That might be why people who like Bach or very traditional music don’t like Stravinsky. It might be an emotional response to the dissonance, but it could also be that they don’t understand the dialect. I didn’t like Stravinsky until I took a music history class and understood what he was doing in his music. Understanding brings appreciation.
Pavlik & McIntosh: http://ablongman.com/html/productinfo/pavlik/0205308031_Ch2.pdf
Meyrowitz: https://sites.psu.edu/comm411spring2015/2015/01/28/media-grammar-literacy-8/
Fortuna: http://www.ala.org/aasl/sites/ala.org.aasl/files/content/aaslpubsandjournals/knowledgequest/docs/KQMarApr10.pdf
Kassabian: https://books.google.com/books?id=GS8PESMx1hkC&pg=PA159&lpg=PA159&dq=competence+is+based+on+decipherable+codes+learned+through+experience.+As+with+language+and+visual+images,+we+learn+through+exposure+what+a+given+tempo,+series+of+notes,+key,+time+signature,+rhythm,+volume,+and+orchestration+are+meant+to+signify.&source=bl&ots=-DbV15lyN-&sig=yyW31jcHHSSpsCi0tGigdCfz7wE&hl=en&sa=X&ved=0ahUKEwjz1KrDpsjaAhUOtlkKHVpTDdsQ6AEIJzAA#v=onepage&q=competence%20is%20based%20on%20decipherable%20codes%20learned%20through%20experience.%20As%20with%20language%20and%20visual%20images%2C%20we%20learn%20through%20exposure%20what%20a%20given%20tempo%2C%20series%20of%20notes%2C%20key%2C%20time%20signature%2C%20rhythm%2C%20volume%2C%20and%20orchestration%20are%20meant%20to%20signify.&f=false
Have you seen this article?
“This article takes an experiential and anecdotal look at the daily lives and work of film composers as creators of music. It endeavors to work backwards from what practitioners of the art and craft of music do instinctively or unconsciously, and try to shine a light on it as a conscious process. It examines the role of the film composer in his task to convey an often complex set of emotions, and communicate with an immediacy and universality that often sit outside of common language. Through the experiences of the author, as well as interviews with composer colleagues, this explores both concrete and abstract ways in which music can bring meaning and magic to words and images, and as an underscore to our daily lives.”
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3832887/
I agree that it’s mostly learned. And it’s a very difficult thing to measure without a Truman Show / bubble-boy situation. However, I do believe you’d find a bias towards instinctual understanding of musical tropes if you created the circumstances. E.g., a 10-year-old who has never heard metal music would probably associate it with aggression, and a sequence of two upward modulations by a fifth would probably evoke more feelings of hope than despair. Great topic to chew on!
Yes, unfortunately I don’t think this topic will ever be testable under scientific conditions. I would argue that your 10-year-old who has never heard metal would probably still apply already-learned responses to it: metal is, put simply, a louder, faster, screamier version of things he already knows, so it would very likely feel like a more aggressive, higher-energy version of already-learned musical tropes. So I’m not sure the music itself would be responsible for that emotional response so much as an extrapolation of responses based on learned things. Your other example would be even trickier to test, as we learn from an early age, through all kinds of music, that upward modulations are uplifting. I would even go as far as to argue that the understanding of the tonal system itself needs to be learned first and would make no sense to anybody who hadn’t learned the basic fundamentals before. It would be a super interesting field for studies, but impossible to find enough subjects to get valuable data out of it, let alone two subjects with identical backgrounds to begin with.
So I guess it’s a Truman Show situation then.
Kathy’s idea of genres as dialects is another interesting point. It’s as if we can’t fully align our customs on a single musical lingua franca, while at the same time we can’t allow the differences in musical appreciation to grow too wide. There are limits to what a human being will consider musical. Putting aside the extremes (e.g. unabated construction noise, babies crying, a single sustained pitch), we all have a sense that something IS music, even if it isn’t in a dialect we understand. The point: when we say music is a language, we also agree that we all speak this thing called “language” to begin with.