“Does music express the secret nature of everyday life, or compensate, on the contrary, for its triviality and superficiality?…Music is nothing else but number and proportion (intervals, rhythm, timbres) and it is at the same time nothing else but lyricism, profusion and dream. It is all vitality, exuberance and sensuality and all analysis, precision and permanence; but only the greatest composers know how to reconcile the two facets.”
– Henri Lefebvre, Everyday Life in the Modern World (1971), p. 20.
The article takes a problem-solving approach to creativity by exploring electronic music production techniques within Ableton Live, one of the most influential and widely used DAWs (digital audio workstations):
“The visual, conceptual, and sonic affordances of Live encourage workflows for manipulating sound as malleable material, while the software’s clips and scenes layout invites thinking about music as a modular structure…Creativity happens in action–in the moment-to-moment details of problem-solving as musicians choose this sound over that one by tinkering with timbres, beats, and form. Ableton Live’s design and capabilities usher its users across music production’s enchanted terrains by opening workflow possibilities whereby even the most unrelated sounds find ways to get along.”
articulate—express an idea or feeling fluently or coherently
(from Latin articulare ‘divide into joints’)
Well-articulated music has amazing communicative power, and when I listen to other musicians play, the first thing I notice is how their playing articulates (or doesn’t). For drummers and percussionists, it’s impossible to hide behind sloppy articulation, because the sounds we make are for the most part sounds of quick attack and decay. This exposes us—revealing in an instant our time sense, our sound, and our phrasing. Sometimes you’ll hear a musician substituting volume for articulation, and this is not a pleasant sound. A loud sound can be beautiful too, but only if it’s thoughtfully articulated and controlled. I once watched a percussionist lose his articulation because, it seemed to me, he was so enjoying cramming in an excess of (improvised) notes. I was enjoying watching him cram in those notes, but cringing at the sound. His quality of sound—his articulation—fell by the wayside as he chased after ever more filigreed patterns. As I listened, I wished I could turn him down and hit a magic button to fix his articulation.
I think about articulation often when I’m working on electronic music, because I could be doing more to stay conscious of it and to refine it in my work. I frequently notice a recurring sloppy-articulation situation: I’ve added supporting parts to a main one, and while these parts add a pleasing wash of sound, none of them is articulated enough. In other words, I can’t really tell what each one is doing from moment to moment. This isn’t a problem of timbral articulation (though I can certainly address that by adjusting, say, EQ), but a problem of musical articulation. Instead of several articulated lines (melodies or chords), the parts make a single blob of sound. It sounds fine, but it could be better if each part were more coherent.
One way I make each part more coherent is by breaking up the monotony of having every part always playing. A part can be muted here and there without necessarily interrupting the coherence of its line. Music perception is interesting that way: our ears compensate in subtle ways for what’s missing in the music, a fact which makes the music even more interesting. Along with muting, a part’s volume can be temporarily lowered or raised to background or foreground it. While adjusting volume like this is a bit laborious, the effect is powerful because it creates the sensation that the parts are listening to one another. It’s as if you and I were playing a duet and you stepped up to take a bit of a solo. If I were listening well, I’d immediately lower my dynamic so your soloing would shine even more. When you finished, we’d both meet somewhere in the dynamic middle and continue on.
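To make the duet analogy concrete, here is a minimal sketch of that kind of per-bar foregrounding and backgrounding, done offline in Python with NumPy. It is my own illustration rather than anything the post (or any DAW) prescribes, and the bar length, gain values, and noise stand-ins for recorded parts are all assumptions.

```python
# A minimal sketch of per-bar volume automation: the pad ducks while the
# lead "solos" (bars 3-4), then both meet in a middle dynamic afterwards.
import numpy as np

SR = 44100                  # sample rate (assumed)
BAR = SR * 2                # a 2-second bar, for illustration only

def bar_gains(gains, bar_len, ramp=2048):
    """Expand per-bar gain values into a smooth sample-level envelope."""
    env = np.repeat(np.asarray(gains, dtype=float), bar_len)
    kernel = np.ones(ramp) / ramp                  # short moving average
    return np.convolve(env, kernel, mode="same")   # soften steps to avoid clicks

lead_gain = bar_gains([0.7, 0.7, 1.0, 1.0, 0.8, 0.8, 0.8, 0.8], BAR)
pad_gain  = bar_gains([0.7, 0.7, 0.3, 0.3, 0.8, 0.8, 0.8, 0.8], BAR)

lead = np.random.randn(8 * BAR) * 0.1    # noise stand-ins for real parts
pad  = np.random.randn(8 * BAR) * 0.1

mix = lead * lead_gain + pad * pad_gain  # the parts "listen" to each other
```

In a DAW you would draw these curves as automation lanes; the point of the arithmetic is just that foregrounding one part and backgrounding another are a single, complementary gesture.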
When a part is muted or has its volume altered momentarily, this sets up conditions for creating a call and response between it and other elements in the music. I’ve written about call and response elsewhere, but what is most powerful about this ancient musical structure is how it creates articulation between multiple parts. As I said in another blog post, call and response gets “the music listening to itself at the various layers of its rhythmic action.”
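At the pattern level, the same muting idea can be expressed as a toy call-and-response schedule (again my own sketch, not anything from the post): two parts trade bars, with a small overlap so each line still reads as continuous.

```python
# A toy call-and-response schedule: 1 = the part plays that bar, 0 = muted.
call     = [1, 1, 0, 0, 1, 1, 0, 0]   # the "call" phrase
response = [0, 1, 1, 0, 0, 1, 1, 0]   # the answer, overlapping by one bar

labels = {(1, 0): "call", (0, 1): "response", (1, 1): "both", (0, 0): "rest"}
for bar, pair in enumerate(zip(call, response), start=1):
    print(f"bar {bar}: {labels[pair]}")
```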
Zooming out to a more global, structural view of a piece of music, one can think about articulation in terms of how well all of its parts combine to coherently express a feeling. No matter how the parts are arranged and edited, the goal is to sustain some kind of attention in the listener. One of the delights of electronic music production is that I’m constantly finding interesting things to listen to that I never anticipated encountering. For example, after I edited a part by bringing down its volume, a new composite sound in the music suddenly revealed itself. There I was, making one part more articulated, and in the process something else in the music emerged. As my attention bounces all over the music’s surface, I wonder: how would another listener hear it? Maybe the most useful kind of articulation expresses a feeling without boxing it in, suggests a mood without defining it, and offers a mix of sensations without resolution.
When you’re working on a collection of music, it helps to have the pieces unified in some way. The surest way to do this is for each piece to have the same instrumentation. I’ve done this with my own music: each of my recordings is scored for a single set of sounds. For example, Piano and Metals Music is scored for piano, kalimba, and gong sounds, and Four Piano Music is scored for four pianos. When each piece shares the same instrumentation, you compose using a single timbral palette. At the very least, this palette simplifies my decision-making and gives listeners some sense of what to expect.
Generally speaking, electronic music producers working in popular idioms don’t work this way. Sure, many tracks might use, say, a TR-808-style kick drum or snare sound, but most producers neither need this timbral consistency nor advertise it in their track titles. On the contrary, they—and critics—value new sounds. In an ideal electronic music production world, every piece would have its own distinctive set of sounds. One argument in support of this view of production goes: if any sound can be created, why keep using the same old sounds? With the never-ending stream of new software and hardware releases, why not keep pushing forward music’s timbral boundaries? Isn’t this one of the main points of making electronic music, and the key criterion by which to judge its inventiveness?
•
But while sounds matter, musical design and process matter even more. I have yet to encounter an interesting sound that made more of an impression on me than an interesting chord, and I have yet to fall for a great drum sound over a great drum pattern. Prizing new sounds for their novelty often comes at the expense of thinking through interesting things to do with those sounds. As an example of the limitations of timbre, listen to how often TV and film composers rely on single-note, synthesized drones to signal fear, danger, or intrigue. (Uh oh! Something bad’s about to happen!) In many contexts, drones are compositional cop-outs, because no matter how colorful and richly layered their timbres, they do relatively little, and there’s little subtlety or enchantment about them. Another example: the gargantuan drum timbres in Hollywood blockbusters. The drums sound a hundred feet tall, but their rhythms are elementary and often plodding. Drumming can be so much more than this. Electronic music producers—including composers for TV and film—sometimes overestimate the power of timbre at the expense of musical design and process.
As I write, I’m remembering Alexandre Desplat’s excellent score for Wes Anderson’s stop-motion animated film, Isle of Dogs. The score uses relatively few timbres, sticking mostly to woodwinds, voices, whistling, woodblock, and a few drums. Desplat’s choice of timbres fits the setting of the film and acts as an almost transparent vehicle for the composer’s designs. One of the film’s main themes is a simple three-note motif that drops a perfect fourth and rises up a minor third: g-sharp, d-sharp, f-sharp. It’s pentatonic but also somewhat melancholy—perfect for expressing the Japanese setting and the conditions faced by a pack of dogs from Kobayashi city exiled to Trash Island. We hear the theme repeated throughout the action played on a flute (along with a complementary g-sharp-b-a-sharp bass counter-melody), and we hear it sung and whistled too. It all works exquisitely well.
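As a quick check of those intervals, here is the arithmetic in MIDI note numbers (a worked example of my own; the octave placement is a guess, only the pitch classes and interval names come from the paragraph above).

```python
# The motif as MIDI notes: G#4 = 68, D#4 = 63, F#4 = 66 (octave assumed).
motif = [68, 63, 66]
steps = [b - a for a, b in zip(motif, motif[1:])]
print(steps)   # [-5, 3]: down 5 semitones (perfect fourth), up 3 (minor third)
```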
•
In electronic music production, one of the creative challenges is to rein in timbre’s allure in order to figure out interesting things to do with one’s sounds. While music software manufacturers would have you believe that their sounds are inherently enchanting, that’s not how all musical enchantment works. Music is enchanting as a by-product of what it does. In a way, the task of the electronic musician is to transcend their timbres by devising novel ways to structure the music. There are a thousand techniques for doing this, but one foolproof starting point is to keep the music in one’s music-making body for as long as possible. This entails:
• Playing your parts instead of sequencing them.
• Always relating the part to the entire texture.
• Taking the time to refine a part before you record.
• Taking advantage of your first run-through or improvisation by recording all the time.
• Varying your parts on as many resolutions of detail as possible.
By keeping your parts living in the realm of your playing, you also begin to play through the timbres you’re using. Instead of relying on a sound for an effect, try to create an analogous effect by changing a chord, a melody line, or a rhythm. Timbre contributes a lot to how music feels, but ultimately we hear beyond timbre—listening musically involves listening beyond timbre. Consider an analogy: you’re having a conversation with someone who happens to have a whiny voice. At first you’re distracted by her vocal timbre, and you may even erroneously attribute various personality characteristics to her based on this whiny sound. But if the conversation is good and the ideas are interesting, the quirks of her voice eventually disappear as an object of interest, let alone significance. Now you’re hearing beyond timbre.
Music is just like this. If the music is interesting enough, the sounds of its timbres disappear.
I leave you with a recording that came to mind while editing this blog post: TM404’s self-titled TM404, a recording made exclusively with the timbres of Roland sequencers and drum machines.
“Careful breathing is always associated with an experience of cooling, of decelerating. It works in almost any scenario where the mind is being catapulted by the body, and we want control.”
“I think human laziness is a really important part of finding good, new ways to do things. I often look at things and think: ‘This is just getting too complicated – let me try to step back and figure out a shortcut.’ A computer will say: ‘Well, I’ve got these tools and I can just bash on, deep into the problem.’ But because it doesn’t get tired and it’s not going to be lazy, maybe it will miss things that our laziness takes us to.”
• A video about saturation, narrated by the one-of-a-kind Dan Worrall: