The Disguised Musical Voice

Recently I blogged about Auto Tune and its magical pitch-correcting abilities.  While the post was ostensibly about a musical technology, its subtext was of course all about the human voice, specifically the delight we take in altering how our voices can sound.  Auto Tune is one way to do it, but that’s really just the tip of the iceberg in terms of how we can electronically process our voices.  Of course, musicians can also take fragments of recorded voices and render them into amorphous wisps of sound–effectively turning them into just another instrumental source (albeit one heavy with signification and all manner of associations that only the voice seems to conjure).  Mutating the voice in this way is the topic of a recent article by David Bevan that describes how

“…there seems to be a new musical vocabulary emerging, one centered around the way vocals are being manipulated to create moods and atmospheres defined by their amorphous, often spectral nature. Ghost voices. It’s something like what happened in the film Inception, the way music could be heard through layers of dreams. That effect– as though sound were floating through several walls of consciousness, its outlines blurred to be almost unidentifiable– has something to do with the fact that we’ve heard a lot of these vocals before in their original form; they’re often samples that have been resurrected and re-articulated to express a sort of new slang. You can hear it in dance music and hypnagogic pop, in witch house, drone, and art rock, the various presentations just as disparate as they are interconnected.”

The article presents a stimulating array of artists–including Burial, James Blake, Balam Acab, Four Tet, and others–to make its points.

You can read the article here.

How Music Means So Much

Understanding how music means so much is difficult business, largely because music is a slippery phenomenon.  One of its longstanding mysteries is how it can have such deeply felt meanings for us: when we listen to music (or even listen while we make it), it just seems to be a sensuous stream of sound full of emotional resonance.  Music seems to channel our feelings and desires, sometimes even leading us to sentiments we didn’t know were out there.  Music is a technology for transgression as well as a virtual space for modelling social relations in the real world.  Music is a social glue, an environment enhancer, and a kaleidoscope of codes we seem to get–intuitively.  Music can compel us to dance, or stop us dead in our tracks, imploring us to be still.  When one musician compliments the work of another and says “that’s deep, man” it’s not for lack of words (or the pursuit of hipness) that she uses the word “deep.”  As an experience, music really is deep: it feels bottomless in its ability to enhance and energize us, to give us something to flow along to.  So, music is slippery.

Over the years, many writers have tried to pin music down and unpack how it weaves its spell.  In Sound and Symbol (1959), Victor Zuckerkandl explores the metaphysical basis of music and how it offers a way for us to access a realm of feelings not normally accessible to our consciousness.  For example, discussing a passage in Beethoven’s Ninth Symphony, Zuckerkandl unpacks the space between the physical/acoustical and psychic/emotional components of a melody to reveal a third attribute that could be described as a kind of mystical dynamism:

“The two components, then, are present—the physical, the acoustical tone and the psychic, the emotional tone; but the melody, the music, as we know, is in neither of these.  What we hear when we hear melody is simply not F#, G, A, etc., plus “solemn repose,” tone plus emotion, physical plus psychic, but, with that and beyond it, a third thing, which belongs to neither the physical nor the psychic context: 3, 4, 5—a pure dynamism, tonal dynamic qualities.  It is not two components, then, which make up musical tone, but three.  The words we use to describe this third component—words such as force, equilibrium, tension, direction—significantly such as neither of the two sides claims for itself alone and, consequently, may well refer to a separate realm between the two, a realm of pure dynamics.  What makes a musical tone is so much the work not of the physical and not of the psychic component but of the third, a purely dynamic component…” (pp. 59-61).

In The Language Of Music (1959), Deryck Cooke argues that tonal music constitutes a kind of emotional language and that (European classical) composers over the past few hundred years have drawn on a shared lexicon of melodies and harmonies to convey specific feelings and affect listeners in intended ways. Cooke proposes the idea that music is a language and that specific musical gestures–a falling minor third, say–have corresponding meanings.  But it’s difficult to ever prove this kind of relationship in music.

The conductor Leonard Bernstein takes a view of musical dynamism similar to that of Zuckerkandl (minus the mysticism).  For Bernstein, the meaning of a piece of music (or put another way: the feelings generated by it) is simply the by-product of its own materials transforming themselves over time. Umberto Eco observes that music is a semiotic system, but one without content with fixed meaning.  Similarly, Roland Barthes describes music as a field of signification and yet not a coherent system of signs.  In his book Repeated Takes (1999), Michael Chanan says that music “leads a socially charged life” and “creates a special and unique space” in which social subjectivities can be constructed, mixed, suspended, and dissolved in music’s “fluid and fluctuating evocation of sentiment” (31).  In another book, From Handel To Hendrix (1994), Chanan observes that music is “a language of sonic gesture” (23) whose “fluid mixture of different levels in the way [it] communicates produces great semiological complexity, for each level leaves traces of different kinds to produce a confusion of signs extremely complex to unravel” (38).

Russian linguist Mikhail Bakhtin says that music “is denied referential specificity and cognitive differentiation, but is profound in content: its form leads us beyond the boundaries of the acoustical sound production, but does not lead us into an axiological void–content here is, at base, ethical.”  Bakhtin’s work on the nature of speech utterances can also be applied to view music making as a special kind of communication.  From this perspective, a musical “utterance” is a dialogical social act–dialogical because it is in dialogue with other utterances (through allusion, quotation, or even through transgression and differentiation).  For Bakhtin, observes Michael Chanan (1994), cultural production is always part of a social conversation happening at a specific time and place.  For Bakhtin, musical utterances are never “neutral” in the sense of having some kind of autonomy from our everyday lives, but are “completely shot through with intentions, purposes and ideologies, which constitute both context and subtext” (42).  Building on Bakhtin, the semiologist Julia Kristeva describes the relationships among (musical) texts as intertextuality.  As Chanan observes, for Kristeva the text is a space that links the writer (composer/performer), the reader (listener), and other texts.

In the late 1980s, some musicologists began considering music as a signifying practice.  These so-called “new musicologists” borrowed interpretive techniques from literary theory, gender studies, philosophy, and other disciplines to consider music as a kind of text as well as a cultural practice whose gestures (chord progressions, rhythmic structures, timbre, melodies, and large-scale forms) create subjectivities and conjure feelings and meanings.  Following Barthes, these musicologists consider music a field of signification awaiting our careful interpretation.

The work of new musicology was in part a response to the discipline’s formalist tradition of focusing on “the music itself” while ignoring matrices of meaning “beyond the acoustic” (as Robert Fink puts it in his book Repeating Ourselves).  And it was the new musicologists who reminded us that music is not (and never was) an autonomous discourse, but rather fully enmeshed in the histories and social lives of people–people with subjectivities and identities, gendered desires, and bodies.  Moreover, these musicologists point out that musical and sonic discourses always play a part in broader patterns and cultural formations that Raymond Williams might call “structures of feeling.”  Some of my most stimulating and thought-provoking  reading about music has taken place among the pages of books by the new musicologists, especially Susan McClary (Feminine Endings, Conventional Wisdom), Robert Walser (Running With The Devil) and Robert Fink (Repeating Ourselves) among others.  I would also have to add to this list Michael Chanan (Musica Practica, From Handel to Hendrix), Simon Frith (Performing Rites), and Christopher Small (Musicking) who have written eminently sensible books.

There are also other approaches to understanding the power of music to capture and hold our attention. In 1971 the anthropologist Robert Plant Armstrong wrote a book titled The Affecting Presence which takes a view of art objects (and we could include music here) as material presences imbued with affective energy. Drawing on his understanding of Yoruba expressive culture (especially Yoruban sculpture), Armstrong argues that artworks contain “the direct metaphoric realization of the characteristics of energy…a sense and deep fabric of metaphoric processes productive of energy.” For Armstrong, artworks are “enacting the very shape and energy” of a people’s collective consciousness (71).  When we come into contact with artworks–and I’d include here musical performances–that are so charged we co-resonate with this charge (we’re affected by the affect) and find meaning in the experience.

Another approach to unlocking music’s power is musicologist David Burrows’ work that views musical pieces and performances in terms of dynamical systems theory.  Dynamical systems theory is a field of mathematical study that attempts to describe the changes over time (that is, the behavior) that occurs in physical or artificial complex systems.  In his article, “A Dynamical Systems Perspective On Music” (1997), Burrows views music performance or an unfolding piece of music as a kind of dynamic, complex system and provides a play-by-play account of a cello piece by J.S. Bach. For Burrows, pieces of music change over time to maintain themselves as stable dynamical systems.
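Burrows’ article doesn’t reduce Bach to equations, but the core idea of a dynamical system–a state evolving step by step according to a rule, sometimes settling into stability–is easy to see in a toy example.  The snippet below is my own illustration using the logistic map, a textbook dynamical system that has nothing to do with Burrows’ cello analysis specifically; it just shows what it means for a system to change over time yet maintain itself in a stable state:

```python
def logistic_step(x, r=2.5):
    """One step of the logistic map x -> r * x * (1 - x), a classic dynamical system."""
    return r * x * (1 - x)

# Iterate the rule: for r = 2.5 the trajectory settles onto the
# stable fixed point 1 - 1/r = 0.6, no matter where (in (0, 1)) it starts.
x = 0.2
for _ in range(50):
    x = logistic_step(x)

print(round(x, 6))
```

The analogy Burrows draws is loose but suggestive: a piece of music, too, is a process whose moment-to-moment changes are what keep its overall shape coherent.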

Finally, I have always wondered about the potential of applying anthropologist Clifford Geertz’s notion of “thick” ethnographic description to analyze musical utterances.  (See “Thick Description: Toward an Interpretive Theory of Culture”.  In The Interpretation of Cultures: Selected Essays. New York: Basic Books, 1973. 3-30.)  Geertz, remember, illustrated thick description as a means of distinguishing all the subtle shades of meaning that a single gesture such as an eye wink might take on in a given social milieu.  What might a musical thick description look and sound like?  Just as one would need much cultural insight to reveal the many levels of social meaning embedded in an eye wink, so too do we need to bring a broad understanding to reveal layers of musical gesture and signification.

Before I end, one final note by way of Michael Chanan. Even though music can seem like a language that we all understand, it isn’t.  Not only that, no two people understand the same music the same way, nor does any single music have universal meaning.  We inhabit a heterophony of musics, each speaking in its own voice:

“There is no universal musical language because the musical universe is completely heteroglot.  It consists in the proliferation of competing and intersecting voices which coexist within any given historical space: divergent dialects, each with its own repertoire of genres, the idioms of different generations, classes, genders, races and localities asserting their presence, and each contributing their own utterances to the cultural heterophony of the times” (1994:106).

The Organ Music of Olivier Messiaen

If you’re into long tones, drones and shimmering chords, you might like the organ music of French composer Olivier Messiaen (1908-1992).  While I was a music student in college I discovered Messiaen’s organ music through a CD of some of his best-known works. Messiaen was the organist at La Trinite Church in Paris for over 60 years, and in the YouTube video below we learn about the importance of La Trinite church and its organ to the composer, who used to improvise at the instrument during midday masses to experiment with sound combinations.  These improvisations would later become written compositions.

Here’s a clip of Messiaen improvising at the organ:

Finally, for a representative example of the dynamic, timbral, and emotional range of Messiaen’s organ music, listen to his piece “La nativite du seigneur” (1935), a work Messiaen says was inspired by theology, mountains near the Swiss Alps, and the stained glass windows in medieval cathedrals.  The movement in this clip is called “Dieu Parmi Nous.”  If you are into chords and all the emotional hues chords can have, pay close attention at 0:46-1:42 where you can hear Messiaen’s utterly singular harmonic language unfolding over long tones.  Also, the very last chord is a humongous construction of notes that seems to last, well, forever!

Digital Diets, Attention Spans and The Rhythms Of Learning

Are the Internet and all manner of digital media really doing something substantial to our consciousness, to how we think?  Is my attention span not getting worse exactly but maybe becoming fractured?  This is the subject of at least a few articles I’ve read lately, including this one in the Times which is part of a series called “Your Brain On Computers.” My guess is that it’s going to be a while before we have overwhelming evidence that our minds are being ruined by our technology.  But it’s undeniable that computers have changed the rhythms of learning.

Here’s an interesting take on the matter from visual artist Keegan McHargue.  In the Nov/Dec. issue of The Believer, McHargue discusses his blog, Mauve Deep, which seems to be a kind of off-the-cuff repository of images the artist finds compelling.  When asked if he “curates” his blog in any way, McHargue made some interesting observations about the effect of the Internet on how we absorb information:

“I like that the Internet allows information to pour to me indiscriminately.  From high fashion to design to obscure music, sites about art history and theory to blogs about cakes and pastries.  It just comes to me now. I’m not looking at visual information with specific intent anymore. I’m taking it in as a steady stream.  That’s how information currently feels.  It’s certainly very different from seeking things out as we used to have to…It’s funny that people try to fight it, because it feels easier than ever before to learn and grow.

How did I not see the world this way before? I’m an information fiend…It’s too much work to have an opinion of one’s own, and with the steady flow of information coming at us now–maybe we’ll transcend the idea of individual perspectives and move into a more collective consciousness as a whole” (p.84).

What I find interesting here is how McHargue articulates the dynamics of idea discovery on the Internet: the idea that we can tap into a “steady stream” of pure information, including text, images, sounds on every topic under the sun (including cakes and pastries).  And while it’s easy to dismiss McHargue’s not bothering “to have an opinion of [his] own”, we understand where he’s coming from as an artist: he’s just swimming in a sea of data.

What does all this have to do with musical experience?  Well, I’m thinking about how it feels to explore YouTube: you begin with the goal of “finding” a particular clip on this or that music and soon enough you’re on an adventure in places you never expected to be.  Maybe this is what McHargue is referring to when he speaks of transcending “the idea of individual perspectives and move into a more collective consciousness…”  That is certainly what it can feel like when your YouTube search leads you astray and into something unexpected and interesting that may have little to do with what you wanted.

The Sound Of Auto Tune

You know the Auto Tune sound when you hear it: it sounds artificial, electronic, not quite human enough, too perfect.  Auto Tune is everywhere today, from TV commercials to hip hop to country music.  It’s the Photoshop of the musical world.

The technology was conceived by Andy Hildebrand, an engineer for Exxon who developed methods for interpreting seismic data through sound to help discover ocean oil reserves.  Hildebrand realized that this frequency-analyzing technology could be used in the context of digital sound recording to correct off-pitch singing.  So in 1997, he released the Auto Tune software as a plug-in for computer recording applications.  Auto Tune was used moderately at first, until Cher released her severely auto-tuned song “Believe” in 1998.  The rest is the fast-moving history of a musical technology spreading meme-like through almost every kind of music making, from Cher to the recent best-selling Auto Tune iPhone app, “I Am T-Pain,” that enables anyone to sound like a well-tuned robot.  Simply put: Auto Tune (and of course its predecessor, the vocoder) changed how we think about voice–the musical voice, but also just our regular speaking voices and their (hitherto hidden) musical potentials.
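To make “correcting off-pitch singing” concrete, here’s a minimal sketch in Python of just the retune step: snapping a detected frequency to the nearest note of the equal-tempered scale.  This is my own illustration, not Hildebrand’s actual algorithm, which also has to detect the singer’s pitch in real time and resynthesize the audio without artifacts:

```python
import math

A4 = 440.0  # reference tuning pitch, in Hz

def snap_to_semitone(freq_hz):
    """Snap a detected frequency to the nearest equal-tempered semitone.

    Equal temperament divides each octave into 12 steps, so pitch
    measured in semitones is 12 * log2(frequency / reference).
    """
    if freq_hz <= 0:
        raise ValueError("frequency must be positive")
    # How far (in possibly fractional semitones) are we from A4?
    semitones = 12 * math.log2(freq_hz / A4)
    # Quantize to the nearest whole semitone, then convert back to Hz
    return A4 * 2 ** (round(semitones) / 12)

# A slightly flat A (435 Hz) gets pulled up to 440 Hz
print(snap_to_semitone(435.0))
```

The characteristic robotic Auto Tune sound comes from doing this snap instantly and completely; gentler settings let the pitch glide toward the target instead.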

Just as I began browsing YouTube for videos on Auto-Tune I stumbled upon a very thoughtful, informative and very entertaining video produced by, of all people in the musical universe, “Weird Al” Yankovic.  The video is part of the series “Know Your Meme” and is titled: “‘Weird Al’ Yankovic Helps Explain Auto-Tune.”  But he does much more than this, providing a concise history of the technology, its musical uses, and its circulation as a musical meme over the past thirteen years.  The video traces what it calls the four stages of Auto Tune: 1. introduction, 2. overexposure, 3. parody and remix, and 4. equilibrium.  You can watch the video below:

Among the many insights of Yankovic’s video is the idea that whenever a new technology is introduced everyone rushes to explore the extremes of what it can do in order to unlock its transgressive/expressive potential.  For instance, when stereo sound was invented, musicians overused the ability to pan instruments to extreme left and right positions in the stereo field.  (Listen again to those old Beatles recordings …)

Where is Auto-Tune taking us?  What has it done to the grain of the voice?  Is this just an elaborate cover up for our imperfect singing or something with rich expressive potential?  Or both?  I leave you with Imogen Heap’s “Hide and Seek”:

Music Travels Cont’d

Nine-year-old Willow Smith has an infectious pop hit circulating the Internet (a full album seems to be forthcoming).  “Whip My Hair” is an intense affirmation song and is good repetitious fun:

Here’s a cover of the song rendered on piano in a ragtime-jazz-ish style.  I don’t know who the pianist is, but you can hear how he works with and improvises on Smith’s repetitious chorus melody:

Finally, here is late night talk show host Jimmy Fallon (with Bruce Springsteen later on in the clip) doing an impression of the iconic Canadian folk singer Neil Young singing–you guessed it, “I Whip My Hair.”

What makes all of these renditions so interesting to me is how they foreground the importance of musical style as a filter for what we choose to listen to.  Smith’s song, produced by Jukebox, is a high-tech, electronic pop music production, and attracts one kind of audience.  It sounds really good played loud too.  The solo piano version is adventurous and chromatic, with new harmonizations creeping in under the right hand melody.  Fallon’s Neil Young version slows everything way down and sets the lyrics against an old-fashioned two-chord strumming pattern. It’s quintessential 1970s Young (and Fallon has nailed Young’s grain of the voice too).  What makes Fallon’s version funny is that somehow Smith’s lyrics don’t seem “deep” enough for the reflective folk idiom; there’s a disconnect between the seriousness of Fallon’s Young and Smith’s young-playful lyrics.  But it actually works.

I happen to like Smith’s original version the best because it makes the best sense stylistically: the music and her voice seem of one (heavily technologized) piece.  But the cover versions remind us that just about any music can travel from one idiom to another.  And when a song like “Whip My Hair” lands in jazz piano land or the folk music orbit, it asks us to consider for a moment which musical styles resonate the most for us, and more mysteriously, why.

You Are The Controller

Last week Microsoft released the Kinect controller for their XBox video game console.  The Kinect is being hailed/hyped as the next step in gaming technology as it does away with the most annoying part of the gaming experience: those little handheld controllers that serve as an interface between the player and the game.  Nintendo’s Wii got us part of the way there with their handheld controllers that respond to body movement.  So what puts Kinect on another level?  It scans the player’s body movements in real-time, making the human body the controller.  No more wires, no more joysticks, no more buttons to press, nothing to hold.  In the words of the XBox commercial: “You don’t need to know anything you don’t already know.  Or do anything you don’t already do.  All you have to do is be you.  You are the controller.”

I imagine that the Kinect technology will have resonance for many electronic musicians because musical controllers have long been something that we need to address when composing and performing music.  Pick up a music store catalog and you’ll see lots of controllers for sale, each of them offering the musician the prospect of ever better “control” over their music.  Controllers are always aiming for the kind of almost perfect transparency demonstrated by an acoustic musician at his or her instrument–with maybe only a pair of drumsticks or a violin bow or a mouthpiece or set of piano keys as the “interface” between player and expression.  In my conversations with electronic musicians over the years, one recurring theme is the tantalizing prospect of having nothing come between them and their music.  It’s the dream of having one’s physical (and possibly mental) gestures directly translated into sound, a situation where, as Kinect puts it, you are the controller.