Music Production Steps

Creating new music involves many steps, not all of which necessarily involve playing notes, notating them, or recording them. For me, a workflow often involves six steps. Depending on the day, I’ll work on one or two of these steps, but almost never all of them. Each step is a different spoke in the compositional wheel, each requires attention and time, and each helps something musical not yet built build itself. Here are the steps. 

Improvising on a keyboard. This is my preferred method because the keyboard is my most familiar terrain, playing in this terrain is the quickest way to feel like musical things are happening, and the melodic and harmonic DNA of any piece comes from playing at least one of its parts. Improvising is thinking on the spot, and this thinking is the foundation of performance. Not performance in the sense of we’ve rehearsed this a thousand times and will now recreate it for you, but performance in the sense of we’re going somewhere new and it may or may not work out. For me, the most interesting pieces of music feel like performances, and performances hold your attention because of both where they go and the means by which they get there. I think of playing/improvising/performing as the essential initial gesture that suggests a future musical something.

Finding and designing sounds. I spend some time—although never as much as I think I need—searching for and altering/creating interesting sounds. Sounds can be obvious or unusual, plain or enchanting. Most of the time I’m searching for sounds that may be useful down the road. While it’s hard to say what sounds these are, most of the time they have some movement to them, or some textural quality that is interesting unto itself. One challenge with finding and making sounds is that the process feels as if it is forever on the cusp of music without ever quite making music, never quite as bodily compelling as playing an instrument.

Intentional listening to other musics. I’m curious as to how other musics achieve the effects they do, especially when they use a minimum of elements. I’ll spend a few minutes spot re-listening to a track I’ve been enjoying to understand what is actually happening. Oh: There’s only four sounds? The texture is that dry? It’s a single chord? How do the sounds hold my attention? Such questions came to mind recently when I listened to the music of SND, circa 2010. (One of the duo’s members, Mark Fell, is quoted several times in my book.) Spot-listening is often a reset—reminding me of what I could do with what I have.

Quickly fleshing out additional parts and an arrangement. If I have improvised something usable, I push ahead with it immediately. Sometimes I’ll correct any glaringly “wrong” notes or dynamics and add further parts. Sometimes these parts are derived from the initial improvisation, sometimes not. I’m aiming for pleasing counterpoint and complementary textures, but there are many ways to arrive at that kind of euphony. I’ll use sounds I’ve created in the past (step 2 above), especially those sounds I can locate quickly. In other words, as Harold Budd used to say, I use what is at hand. Using what is at hand is also the first arrangement step. Although it can be finessed later, how might I roughly arrange the parts into a flowing form? The most impactful move is simply not to have every part enter and exit the texture at the same time. Also, removing parts is a way to foreground what remains. An arrangement can also emerge as a by-product of effects shaping the sound of the music’s parts over time.

Signal and effects processing. It can be easy to think that effecting sounds—compressing, reverbing, resampling, or distorting/mangling them—is secondary to composing them, but in electronic music production that is not quite so. While an initial performance may be the catalyst for a piece, processing sounds can be the linchpin of what makes the music come alive. In fact, effects are deeply imbued with affect. A rule of thumb I follow is that if an effect makes the music more interesting (more affective), I use it. One by-product of effects processing is the realization that you can start with something plain/simple and later transform it into something unusual/complex. When you start with something simple, you keep track of where you came from.

Letting time do its work. While a piece is often sketched out in a few hours, I’ll put it aside for a few days or weeks to forget about it. When I return to it I try to listen like an editor reads. I don’t question the sounds already committed to, but instead make the smallest necessary adjustments to help the music find its clearest articulation. 

Resonant Thoughts: Jenny Odell’s “How To Do Nothing” (2019)

“If we think about what it means to ‘concentrate’ or ‘pay attention’ at an individual level, it implies alignment: different parts of the mind and even the body acting in concert and oriented toward the same thing. To pay attention to one thing is to resist paying attention to other things; it means constantly denying and thwarting provocations outside the sphere of one’s attention. We contrast this with distraction, in which the mind is disassembled, pointing in many different directions at once and preventing meaningful action.”

“The artworks I’ve described so far could be thought of as training apparatuses for attention. By inviting us to perceive at different scales and tempos than we’re used to, they teach us not only how to sustain attention but how to move it back and forth between different registers.”

Jenny Odell, How To Do Nothing (2019)

On Non-Algorithmic Music Recommendations

“In Spotify’s view…when users search for and listen to music they are providing a measurable set of inputs from which musical tastes and desires can be extrapolated. This user behavior is recorded, compared and evaluated against that of other users, then sorted into metadata and used to calculate every song and artist’s degree of relevance to each individual.”
– Thomas Hodgson, “Spotify and the Democratization of Music”, Popular Music, Vol. 40/1 (2020), p. 7.

“A difference which makes a difference is an idea.
It is a ‘bit,’ a unit of information.” – Gregory Bateson

On Spotify, music recommendations are flying my way daily—a bombardment of you might enjoy this! based on the company’s data: since you and others with similar listening habits already listened to this. Usually, though, the algorithm is off. I like to think this is so because I don’t listen guided by style per se, but by Quality, which comprises a more difficult to quantify (in other words, subjective) set of attributes. From standard playlist prods such as “The State Of Music Today” and “Discover Something New” to recommendations narrowly targeted to what I apparently listen to (and therefore am, musically speaking), such as “Neo-Classical” and “Atmospheric Piano” and “Experimental Electronica”, on Spotify my tastes are reflected back to me, and this reflection is refined in real time as I click to listen to this track, but not that one. As I listen, the algorithm refines itself to better reflect my tastes now, and maybe to anticipate them in the future.
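The mechanism Hodgson describes—extrapolating taste by comparing one user’s listening against that of similar users—can be sketched in miniature. The following is a toy illustration of that general idea, not Spotify’s actual system; the listeners, track names, and play counts are invented, and cosine similarity stands in for whatever measures the real service uses:

```python
# A toy taste-extrapolation sketch: compare listeners by their play
# counts, find the most similar listener, and surface a track the
# target hasn't played. All data here is hypothetical.
from math import sqrt

# Rows: listeners; columns: play counts for five made-up track IDs.
plays = {
    "me":    {"frahm_says": 12, "autechre_32a": 8, "einaudi_una": 0,  "snd_tplay": 5, "nuf_dub": 0},
    "userA": {"frahm_says": 10, "autechre_32a": 7, "einaudi_una": 1,  "snd_tplay": 6, "nuf_dub": 9},
    "userB": {"frahm_says": 0,  "autechre_32a": 0, "einaudi_una": 14, "snd_tplay": 0, "nuf_dub": 1},
}

def cosine(u, v):
    """Cosine similarity between two play-count dicts over the same tracks."""
    dot = sum(u[t] * v[t] for t in u)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def recommend(target, data):
    """Return the unheard track most played by the target's nearest neighbor."""
    others = [name for name in data if name != target]
    nearest = max(others, key=lambda name: cosine(data[target], data[name]))
    unheard = {t: c for t, c in data[nearest].items() if data[target][t] == 0}
    return max(unheard, key=unheard.get) if unheard else None

print(recommend("me", plays))
```

Here “me” and “userA” share listening habits, so the sketch suggests userA’s dub track that “me” has never played—a difference in the data that, as Bateson would have it, makes a difference.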


What’s the end goal? For Spotify to perfectly predict its users’ listening interests and habits, the better to keep them inside this streaming universe, paying by the month? Admittedly, sometimes it is nice to be figured out, even by software. I appreciate it when, every year or two, I get notice of, say, a new Autechre recording. (Yes, I will listen, and yes, I will probably find some beautiful moments therein. Speaking of which, have you heard “32a-reflected”?)

But Spotify also generates vast numbers of playlists on which musicians of varying levels of Quality get lumped together via a presumed shared style. This is why what I hear as the cloying piano music of Ludovico Einaudi sits alongside the non-cloying but more meditative/introspective piano of Nils Frahm. By virtue of their shared sonic surfaces and general style (neo- or post-classical?), tracks by each composer could be considered related, and maybe both musicians are, in the end, makers of roughly similar atmospheric piano music. But their Quality quotients are different, and Quality comes from details, from bits of information. As the cybernetic anthropologist Gregory Bateson once said, information is “any difference that makes a difference.” One could make the case that Einaudi’s and Frahm’s music are different in substantial, if hard to pin down, ways, yet a Spotify playlist that groups them together glosses over such differences. Tiny differences make something (or someone) what they are, and such details can render one thing (or person) slightly annoying, and another thing well-balanced. Whether we’re talking about musical style or musical Quality, differences can make all the difference.

Which brings me to non-algorithmic music recommendations. The other day I looked up a Frahm piece and noticed that on his Spotify page he had posted a few of his own music recommendations via a playlist collecting dub-influenced tracks. I wasn’t expecting this, so rather than searching for Frahm’s music I spot-checked his playlist instead. One track, a 1996 piece by the German artist Nonplace Urban Field (Bernd Friedmann), caught my ear with its minimalist rhythmic profile. This little musical discovery, I thought, was worth it. It was worth it because the sounds have something, some Quality. Here was music offered up not by an algorithm, but by another musician, as an influence: