Over the past month as I was editing some new music for a piano-like instrument it struck me that what I was trying to do was make the music “breathe” more. One component of musical breathing has to do with how its sounds are articulated. As my laptop’s dictionary reminds me, in music “articulation” refers to clarity in the production of successive notes. I guess I knew that already, though I don’t think about the concept much unless I’m faced with its absence (un-articulated?). Now that I’m writing about it, I realize I constantly try to articulate clearly on whatever instrument I’m playing. (Writing, too, is all about plotting subtleties of articulation.) Articulation is a big deal—I would rather devote myself to making a single note really sing than be able to play a slew of them really fast, because a single singing sound has a more magical aura than a blur of articulations. Regardless of what kind of music you make, one could make the case that effectively affective music articulates itself in a pleasing and natural way—it literally seems to respire, from one note and phrase and section to the next. Specifically, to talk of a musician’s articulation is to talk of their touch—how they shape a single tone in terms of dynamics and timbre, and how they connect those tones into longer phrases, usually by “following the line” of the music. (Read more about following music’s line here.)
Anyway, as I was editing my pieces I was in essence taking a microscope to my original performances and looking for moments where they could be improved. Why would I want to improve on a performance that has a decent overall shape and flow? There are a few reasons. The first is that my performing ability has limitations that become ever more apparent upon repeated listenings. My execution is uneven, for instance, which I partly blame on my plastic 61-note MIDI keyboard and partly on me just being me. But with MIDI data on the screen in front of me, I can see the shape of my performance and also the patterns of my limitations in its unfolding. A second reason to edit is to imbue my performance with more of the drama that it suggested but couldn’t fully articulate when I recorded it. (In my defense, I was preoccupied with just getting through the performance!) Going back after the fact and tweaking here and there is a way to add gravitas through newly foregrounded, only-now-noticed little details. A third reason to edit is because—duh!—that’s what computers and DAW software are for: photoshopping sound! A final reason for editing is that it uses your head in a different way. It’s like a post-game analysis where you coolly assess what really happened, what did and didn’t work, and how your team gave up those goals.
I edited the music along three of its parameters: its timing, its spatial density, and its articulations. Editing music’s timing aspects involves nudging a note here and there to make it more or less in sync with other notes in the texture. One lesson I’ve learned here is that you never want perfect synchrony in the digital realm, because when you have that you literally have notes cancelling each other out, and the doubled sound comes out thin. Out-of-syncness (recalling Charles Keil’s “participatory discrepancies”) is wonderful and can make for a thick groove, but only to a point, beyond which the music sounds like it has lost its human hand. Editing music’s spatial density involves one powerful technique: deleting notes. I love deleting notes. When you delete a note, every sound around it immediately shines in a new, and usually wonderful, way. Is a texture too busy? Delete a note. Is a melody or harmony murky? Delete a note. (Someday I’ll try a musical project that begins with a lot of notes and then deletes almost all of them to see what’s left.) As with editing music’s space, editing music’s articulations also involves one technique: changing note velocities (volume). This isn’t a simple task, because each different velocity level (the volume of MIDI events ranges from 1 to 127) has a very different feel. Tiny volume changes have huge emotional effects: a soft velocity can feel “delicate” or “feather-like” while a louder velocity suddenly verges into “aggressive” or “obnoxious” territory. Additionally, velocity-sensitive sampled instruments, like the one I was working with for my project, trigger timbrally distinct samples depending on how hard you touch the key (or adjust its velocity after the fact). With these instruments, their sound-feel changes as their velocity does.
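The two editing moves just described can be sketched in a few lines of Python. Everything here is a hypothetical illustration: a note is just a (start_beat, pitch, velocity) tuple, not anything from Ableton’s or any MIDI library’s actual API.

```python
# Minimal sketch of two edits: deleting a note to thin a texture,
# and re-articulating one note by changing its velocity.
# A "note" here is an assumed (start_beat, pitch, velocity) tuple.

def delete_note(notes, index):
    """Remove one note; the sounds around it get room to shine."""
    return notes[:index] + notes[index + 1:]

def set_velocity(notes, index, velocity):
    """Change one note's velocity, clamped to MIDI's sounding range 1-127."""
    start, pitch, _ = notes[index]
    clamped = max(1, min(127, velocity))
    return notes[:index] + [(start, pitch, clamped)] + notes[index + 1:]

phrase = [(0.0, 60, 74), (0.5, 64, 96), (1.0, 67, 74)]
thinner = delete_note(phrase, 1)      # texture too busy? delete a note
softer = set_velocity(phrase, 1, 70)  # dial back the note that sticks out
```

The functions return new lists rather than mutating in place, which makes it easy to audition an edit and throw it away, the same non-destructive habit a DAW encourages.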
As the weeks ticked by and I kept returning to the music to edit—each time wishfully thinking that I was done editing and that there was no way I’d be able to improve on the sounds—I found myself spending about 80 percent of my time on articulations. It began innocently enough, a by-product of looking at the music’s MIDI notes as I listened for the nth time. Looking at the screen I would notice that one note out of a cluster of three had an unusually high velocity—say 96 compared to the 74 of the others. I would play the passage again and notice that the sound of the 96 note was sticking way out, and that I hadn’t heard it sticking out until I saw its MIDI representation. At this point I’d close my eyes and listen again, just to confirm what my eyes had reminded my ears. Then I’d adjust the note to a lower dynamic and listen again. Ah, better. I did the same thing with overly soft notes too, bringing up their dynamics to more audible levels with more presence. Tweaking the music towards more musical articulation reminded me how lousy a listener I can be—thinking that I can effortlessly hear my own music analytically when I can’t. Looking at the MIDI notes helped me hear more clearly what was happening in the sounds.
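That spot-the-outlier step can also be mechanized. A hedged sketch: compare each velocity in a cluster against the cluster’s median and flag anything that strays too far. The threshold of 15 is an arbitrary assumption for illustration, not a rule from any MIDI editor.

```python
from statistics import median

def flag_outliers(velocities, threshold=15):
    """Return indices of velocities that stray from the cluster's median
    by more than `threshold` -- like spotting the 96 among the 74s."""
    mid = median(velocities)
    return [i for i, v in enumerate(velocities) if abs(v - mid) > threshold]

flag_outliers([74, 96, 74])  # the 96 note sticks way out
flag_outliers([74, 75, 73])  # an even cluster passes clean
```

The median is used instead of the mean so that the outlier itself doesn’t drag the reference point toward its own value.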
After fixing a bunch of errant velocities, loud and soft, I then noticed a pattern to my fixing: I was not only dialing back the volumes of loud notes and boosting the quiet ones; I was also shaping groups of notes so that each group was more like an audibly sensible phrase that goes somewhere. Now I was having fun too, because the more I listened while looking at the MIDI data the more I realized that almost everything was in need of shaping. (How good was my original performance after all?) To use a metaphor from the cosmetics world, I was contouring the music—reshaping its articulations into more sensible curves. My most used techniques were to make phrases either increase in velocity from soft to loud, decrease in velocity from loud to soft, or dip down in a U-shaped dynamic curve. Another technique was to dynamically accentuate downbeats, while leaving the upbeats much quieter. Though sometimes I played with those conventions too, making upbeats and lead-in notes louder than the notes they were setting up. I’d do this when I noticed that the accented note had a pleasing ringing quality to it that hung over the subsequent notes, vaporous and floating, like a cloud. Some sounds you only realize you like and want when you encounter them.
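Those three dynamic curves—rise, fall, and U-shaped dip—can be sketched as a simple interpolation over a phrase’s existing velocity range. This is an assumed illustration of how such contouring might look in code (for phrases of at least two notes), not how Live’s velocity editor actually works.

```python
def contour(velocities, shape):
    """Reshape a phrase's velocities into a 'rise' (soft to loud),
    'fall' (loud to soft), or 'dip' (U-shaped) dynamic curve,
    interpolating between the phrase's own min and max, clamped to 1-127."""
    lo, hi = min(velocities), max(velocities)
    n = len(velocities)  # assumes n >= 2
    if shape == "rise":
        targets = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    elif shape == "fall":
        targets = [hi - (hi - lo) * i / (n - 1) for i in range(n)]
    elif shape == "dip":  # fall toward the middle note, rise back out
        mid = (n - 1) / 2
        targets = [lo + (hi - lo) * abs(i - mid) / mid for i in range(n)]
    else:
        raise ValueError(f"unknown shape: {shape}")
    return [max(1, min(127, round(t))) for t in targets]

contour([80, 90, 70, 85, 75], "rise")  # soft to loud
contour([80, 90, 70, 85, 75], "dip")   # U-shaped curve
```

Reusing the phrase’s own min and max keeps the reshaped dynamics inside the range the original performance already established, so the contour changes the phrase’s direction without changing its overall loudness.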
To illustrate, below is a screenshot of my editing in Ableton Live. The MIDI notes’ placement and duration are represented as little square blocks, and their velocity levels are the thin vertical lines at the bottom of the screen. The black MIDI note on the left side of the screen is the one whose velocity I took way down to make it sound more like a passing note. Also notice that each of the three three-note MIDI phrases on the right side of the screen has varied velocities. Each shape is different and you can hear those differences in the music.
After repeating this articulation editing process numerous times I got to know music I thought I already knew more deeply. When you hear the same section of a piece repeatedly you develop a feel for what its optimal sound could be: as you get to know its subtle twists and turns it’s as if you build up in your mind’s ear the optimal volume for each note. There are other lessons here too. First, don’t do too much—don’t destroy what you started with. You don’t want to fundamentally mess with the contour of your original performance—mistakes and all. To retain this shape you need to leave intact the dynamic and temporal relationships among the music’s parts, and sometimes this means leaving intact little, otherwise fixable, errors of timing, spatial density, and articulation. This is okay because you want to preserve the sense from the original performance that here is a record of someone who tried to get it right the first time, even though he didn’t quite. A second lesson from hearing the music repeatedly has to do with controlled exaggeration. When you make a change, the change needs to be clearly audible—I would argue even a tad exaggerated—so that it can be registered from a distance. (I sometimes listen from outside the room the music is playing in to hear if my changes are still audible.) It’s like the difference between someone who mumbles and someone who clearly articulates their words. To articulate is to heighten and accentuate, to bring out the musical qualities of what is being said. Articulation, in and outside of music, is one’s sense of touch, articulated.