A few years after I moved to New York I was shopping for electronic music at Kim’s Music and Video in the East Village. Kim’s had these little listening stations set up where you could put on headphones and preview CDs of new music. I put on some phones and listened to an album by someone called Lackluster—a name I thought was great because it didn’t set the bar too high for the listener. I liked the music and its CD cover and asked the sales clerk what else he had that sounded similar. I remember the clerk—he wore dreadlocks and a black tee—and I remember him steering me towards more serious stuff, specifically Autechre, so thank you for that. But I was curious about this Lackluster. In some ways hearing this music was formative to my interest in music production because I heard sounds I liked but didn’t quite get. I was quite naive in how I approached this CD: I didn’t know anything about the artist (nor do I recall ever being curious to investigate the matter further, but for the record, his name is Esa Juhani Ruoho), I didn’t know what style of music this was (a sort of downtempo), or even how its sounds fit into what was going on in electronic music at the time. I simply thought the producer’s choice of timbres, chords, rhythms, and overall vibe were evocative of…something. So I bought the CD, and while I don’t recall when I first played it, I listened quite a bit, pleased that I had found this thing I didn’t understand.
So much has changed since then. There are no more record stores. No more CDs or listening stations. Spotify’s algorithms now recommend music for me. And in a way, I now engage in far less of the naive listening that allows the music I hear to just be.
Lackluster’s track “Cull Streak”, with its additive form, sub bass, pads, staccato lead line, and dry brittle syncussion, was one of my favorite pieces:
Electronic music production, said the producer Mr. Bill in one of his YouTube tutorials, is a game of amounts. It’s true. Bill was referring to the thousands of micro-adjustments one makes in the course of designing sounds, recording parts, arranging, and mixing a track of music. The process is simple but involved and very drawn out: electronic music production is a game of making micro-adjustments over time with the goal of getting the music sounding just so.
How do you know when the music is sounding just so?
How do you know if that last adjustment you just made is the final one, or the first step in another round of adjustments?
You trust your ear. You trust your sense of how what you’re doing compares to what you’ve heard others do. You trust your intuition that what you’re making is either boring or exciting, a rehashing of old gestures, or a sound that’s genuinely new (for now).
Since making micro-adjustments is the name of the game, you need to get comfortable making them constantly over time, in the same way that you season and taste your food as you’re cooking it, always adjusting and manipulating the ratios of its various components. Adjusting is a gentle form of manipulation, a way to alter elements of the music slightly so these elements fit better with what is around them. For example, adjusting is leveling off a reverb tail so it doesn’t drown a sound, or EQing a drum sound so it’s less boxy. But each adjustment you make sets into motion a new set of relations among the parts. When you bring down the level of the drums, everything else appears louder. When you brighten the noise layer of one sound, the other sounds seem dull by comparison.
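That ratio-shifting can be caricatured in a toy Python sketch. The track names and dB values here are invented for illustration; the point is only that lowering one fader changes every other part’s relative level without any of their faders moving:

```python
import math

def db_to_gain(db):
    # Convert decibels to a linear amplitude multiplier.
    return 10 ** (db / 20)

# Hypothetical fader levels in dB (made-up numbers, not from any real mix).
levels = {"drums": -6.0, "bass": -9.0, "pads": -12.0}

# One micro-adjustment: pull the drums down by 3 dB.
levels["drums"] -= 3.0

# Every other part is now 3 dB "louder" relative to the drums,
# even though those faders never moved.
relative_to_drums = {name: round(db - levels["drums"], 1)
                     for name, db in levels.items() if name != "drums"}
# → {"bass": 0.0, "pads": -3.0}
```

One adjustment, and the whole web of relations shifts: the bass, once 3 dB under the drums, now sits level with them.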
With adjustment-making, the computer’s screen is your friend. On the screen you see everything in front of you—the parts, the arrangement, the levels, the waveforms, and the effects routing. With the screen as your lens you can zoom in on whatever resolution of the music-in-progress you want to see, all the way down to the sounds’ waveforms. You can zoom in so tight that a waveform fills the screen as you home in on a split-second of a sound’s attack. It’s as if you’ve changed the size differential between you and the music, making the music into a skyscraper as you fiddle around on its ground floor. Or you can zoom out the other way, making a ten-minute piece into a five-inch ribbon sequence, the better to see its structure from afar.
Seeing the music’s components on the screen also helps you hear them better. I often spend time listening repeatedly to a section, looking at its parts layered horizontally on top of one another, trying to hear what I see. When I can’t hear something I zoom in on it, using my eyes to test the acuity of my ears. I look at the volume curve and start adjusting it upwards while I listen. If the change isn’t clear enough I exaggerate it, dragging the line way up and listening to how that sounds instead. The optimal volume is probably somewhere in the middle, but sometimes dramatic changes, at least at the onset of a sound, lead listeners in a way they need. The software designer Steve Duda explains this approach to mixing:
“With events and with parts, I’m emphasizing them at their start then bringing them back to the same [dynamic] place…The listener wants to be guided through the song and be shown the highlights. It’s amazing what can be solved with just good levels.”
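One way to picture Duda’s start-emphasis idea is as a gain envelope that boosts an event’s onset and then settles back to unity. This is a hypothetical sketch, not anything Duda specifies: the `boost` and `settle_after` parameters are invented, and real mixing would do this with automation curves, not lists of numbers.

```python
def highlight_onset(n_steps, boost=1.5, settle_after=0.25):
    """Gain envelope: emphasize an event at its start, then return to 1.0.

    boost        -- starting gain multiplier (invented default)
    settle_after -- fraction of the event spent ramping back to unity
    """
    settle_point = max(1, int(n_steps * settle_after))
    env = []
    for i in range(n_steps):
        if i < settle_point:
            # Ramp linearly from the boosted level back down to 1.0.
            env.append(boost - (boost - 1.0) * i / settle_point)
        else:
            env.append(1.0)
    return env

env = highlight_onset(8)
# → [1.5, 1.25, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
```

The listener hears the highlight at the start, then the part tucks back into the same dynamic place—“just good levels.”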
Sometimes the music’s representation on the screen articulates its structure and suggests ways to elaborate on it. With hundreds of layers of adjustments already automated in the track’s arrangement, you begin to notice visual patterns that might cue new ideas. For example, one of the parts fades out right here, illustrated by a volume line that descends from left to right. But you notice that the waveform has information hidden by the fade, so you adjust the line to ascend instead, and this makes audible some new pulsations. Now, with the volume curve turned upwards instead of down and the new pulsations audible, you have the idea to mirror that visual design elsewhere in the other parts. This inspires you to create a series of crescendos that dramatically lead into upcoming chords. This is the kind of production idea you might not have had at the outset, but by making micro-adjustments cued by what you noticed on the screen you found new ways of phrasing the music. As Jenny Odell describes the nature of idea formation in her book How To Do Nothing: “Any idea is actually an unstable, shifting intersection between myself and whatever I was encountering.”
I had worked on the track for a year, which was far longer than I had ever worked on a ten-minute piece of music. In my defense, things take time: it had taken time to decide on sounds, time to get going and wonder where I was going, time to record chord progressions, beats, and harmonies, time to edit, time to add more sounds and modify existing ones, time to rearrange the arrangement, time to resample sounds, and time to edit more and mix. Also, I like slowness because it gives me the time to think about what I’m trying to do. Cue quote from Lao Tzu: “Nature does not hurry, yet everything is accomplished.”
Around and around I went, beginning each day with what I had so far and trying to build on that, either by adding to it or by subtracting from it, playing along to it or playing with it. I tried out what seemed like never-ending effects-driven alterations to my material, sometimes out of curiosity, other times out of a hope that I might hear something breathtakingly new so I could throw out most of what I had and keep only what made the music more interesting. That was always the goal: to make the music more compelling to listen to.
For the most part, the time I devoted to the track over the year was time well spent, because it allowed me to move it from an almost random series of chords to a finished piece. But there are also downsides to spending a lot of time on a single track. First, since I was the one making all the changes to the music, I now can’t help but hear traces everywhere of my earlier production moves, hearing both the sounds and the layers out of which those sounds were created:
Why did I over-crunch the drums?
Why is the marimba recording so noisy?
Why that bass sound?
At each moment in the music I know what’s coming up next sound-wise, so it’s hard to be surprised. One side effect of this situation is that I tried amping up my editing moves to make them more extreme, as if to get more of a reaction out of myself. For example, volume fades that seemed smooth a few months ago now seem tame and I want them ever more exaggerated. It’s as if I’ve become immune to what I once thought were the music’s subtle charms. A second downside, then, is that it became increasingly difficult to hear the music with fresh ears. This is a by-product of working on fairly micro levels of attention and musical detail. For example, I might spend a session finessing the levels and panning for a single part, because today that part seems downright wrong in its mix placement, and by the way, what was I thinking last week that made me unable to notice this problem? Working on micro-details is essential editing, but it turns your attention away from the music’s big picture. As I re-listened to the piece, the details were sounding good, but a big picture question I had never asked of the track was now gnawing at me: Why does it need so many parts? I don’t know why it needs so many parts. Maybe it needed so many parts because at some point in the process I had hoped that many parts would translate into more interesting music.
I bounced down the mix and listened. I tried imagining how a friend or my mastering engineer would hear the music for the first time, but it was impossible. For reasons both real and imaginary, I can’t get out of my own perspective on the music. Some parts I like, but there are still problems and surely I can somehow fix those and finally make everything better?
I spend a lot of time working on the endings of my tracks. One reason for this is that I’m both glad the music is almost over and sad that I hadn’t done more with the time I had allotted for it, so now I’m trying to conjure more interesting sonic stuff just as the sounds are fading out or otherwise reducing themselves. When you’re in the middle of writing a track, your attention is all over the place because you don’t know where you’re headed, and you certainly don’t have any idea how the music will end. In fact, when a groove or constellation of sounds is working well, ending seems optional. In any DAW (digital audio workstation) software you can simply loop sounds so that they continue forever. And forever is a long time, but I’m here to tell you: when you inevitably arrive at the music’s end, it’s satisfying to finesse multiple levels of detail, treating every decision of sound design and arrangement as, well, precious.
My preferred type of ending is a gradual reduction of parts and sounds, so that the music appears to slowly unravel or disintegrate. Gradually reducing is a staple structuring device of electronic dance music, but technology-influenced composers such as Steve Reich have also been using this technique for decades. Unraveling and disintegrating are more compelling to me than the classic fade out, whereby the musicians keep playing their parts as their sounds slowly vanish in the mix’s distance. (Here is an article on the history of fade outs in popular music.) It’s also more useful than an abrupt ending where everyone stops on the “and” of beat 4, or, worse yet, comes together on a final downbeat punctuated with a giant cymbal crash. As I hear it, unraveling or disintegrating parts and sounds involves altering the music such that it gradually loosens its hold on the listener and the sound-to-silence ratio shifts. It’s about taking things out in ways that subtly cue the listener as to what’s happening.
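The contrast between a classic fade and an unraveling can be sketched as two curves. The four parts and their amplitudes below are invented placeholders: a fade-out scales the whole mix towards zero while every part keeps playing, whereas an unraveling removes one part at a time so the mix thins rather than shrinking uniformly.

```python
# Four hypothetical parts, each reduced to a single constant amplitude (toy values).
parts = {"drums": 0.5, "bass": 0.4, "pads": 0.3, "lead": 0.2}

def fade_out(parts, n_beats):
    """Classic fade: all parts keep playing while the master gain slides to zero."""
    full = sum(parts.values())
    return [full * (1 - b / (n_beats - 1)) for b in range(n_beats)]

def unravel(parts, order):
    """Disintegration: drop one part per beat, in the given order, until silence."""
    playing = dict(parts)
    mix = []
    for name in order:
        mix.append(sum(playing.values()))  # what's still sounding this beat
        playing.pop(name)
    mix.append(0.0)  # everything has been taken out
    return mix

fade_curve = fade_out(parts, 5)
unravel_curve = unravel(parts, ["lead", "pads", "drums", "bass"])
```

Both curves start and end in the same places, but the unraveling steps down unevenly—each step is the listener’s cue that something just left the room.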
As I craft endings, I re-evaluate the overall form of the piece and decide if now is really the time to end. Sometimes it’s clear that
the music must go on!
and so I’ll extend and transform the ending into something substantial. Recently this involved writing codas for some tracks, so that when you think the music is over it returns. I wrote several codas, but then reversed course on two tracks on which the codas sounded indulgent. Such are the editing choices necessary when producing music. At these moments, it’s essential to distinguish between what one likes and what is actually needed. This is a way of thinking that I could apply to other aspects of making music: to bring what I like and what is actually needed into alignment so that either route brings a similar result.
Working on endings sometimes gives me new ideas for beginnings too. More than once I was fiddling with an ending and began listening to a sound on its own. I wondered if the part couldn’t also work at the beginning of the track, and a few times it worked well. One lesson from this is that musical elements are almost always modular: that thing over there could also work right here if you alter it a bit. Another lesson from working on endings is that it gives you a chance to revisit details inside the music that you may have temporarily lost track of.
The most interesting part of musical endings, though, is that they up the level of my concentration and so they’re fun to listen to. This is especially so when the music’s volume begins diminishing and the track’s parts are falling apart. I lean in closer towards my monitors, turn up the volume slightly—manually overriding my fade out—and try to hear everything going on. Is this what I want? Can I hear too much or just enough of what is dissolving? I especially love those moments where I notice a sound for the first time, the moments when a sound is felt as much as heard, the moments when the music finally moves like it was moving towards something all along.