(Parts almost going out of sync, but pulsing together.)
“In the affluent West, many of our energies of innovation seem to be channeled into creating experiences for the consumer that will make him feel good without making demands on him. This trend has been called ‘affective capitalism.’ Examples include computer gaming, pornography, psychoactive drugs, or a well-curated ecotourism adventure. Manufactured experiences are offered as a substitute for direct confrontation with the world, and this evidently has some appeal for us. We are relieved of the burden of grappling with real things–that is, things that resist our will, and thereby reveal our limited understanding and skill. Experiences that have been designed around us offer escape from the frustrations of dealing with other people and with material reality. They allow us to remain cocooned in a fantasy of competence and empowerment that is safe from the kind of refutation that routinely happens when you…ride a skateboard, for example.”
Matthew B. Crawford,
Why We Drive: Toward a Philosophy of the Open Road (2020)
One of my music production routines is to build a sound from scratch in a VST synthesizer and then make music with it immediately. I do this most days, and now that I think about it, the routine is like a training session, the goal of which is to learn how to produce music from the ground up—not waiting for inspiration but actively cultivating opportunities for it to flourish. One interesting side effect of sound design is that the process gets me into a more focused headspace.
When I begin, I have no idea what I’ll do. The problem is that I don’t yet have anything to bounce ideas off of. This is where the routine comes in handy to get things going: you’re not going anywhere until you try something—anything. With that, I start.
The default sawtooth waveform in my software sounds harsh to me, so I browse through other waveforms, settling on a more rounded one. I change the sound’s envelope parameters (ADSR), lengthening its attack and decay. Then I begin routing one parameter into another, for example assigning an LFO (low frequency oscillator) to the Waveform Position control, so that the sound scrolls through different locations of itself. I assign another LFO to a Filter, which makes the sound’s timbre seem to open and close slightly. I turn to the Noise control and assign another LFO to it. There are dozens of noise types to choose from, so I compare a few of them while simultaneously adjusting their pitch and velocity levels. So many choices. Can I just have some noise, please?
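If you’re curious what an ADSR envelope actually computes, here is a rough numerical sketch—generic linear segments in Python, an illustration of the concept rather than any particular synth’s code:

```python
# A generic linear ADSR envelope, sampled at a fixed rate.
# attack/decay/release are in seconds; sustain is a level from 0 to 1.
# This illustrates the concept, not any particular synth's implementation.

def adsr(attack, decay, sustain, release, note_len, sr=1000):
    """Amplitude values for a note held for note_len seconds."""
    a, d, r = int(attack * sr), int(decay * sr), int(release * sr)
    held = max(int(note_len * sr) - a - d, 0)
    env = []
    env += [i / a for i in range(a)]                      # attack: 0 -> 1
    env += [1 - (1 - sustain) * i / d for i in range(d)]  # decay: 1 -> sustain
    env += [sustain] * held                               # sustain: hold level
    env += [sustain * (1 - i / r) for i in range(r)]      # release: fade to 0
    return env

env = adsr(attack=0.1, decay=0.2, sustain=0.6, release=0.3, note_len=1.0)
```

Lengthening the attack and decay, as described above, just stretches the first two ramps, which is why the sound blooms and settles more gradually.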
In addition to the ADSR, LFO, Filter, and Noise parameters, the software also has an Effects section where I can add distortion, more filtering, EQ, compression, delay, and reverb in varying amounts to my sound. I can also modulate these effects with those LFOs I’ve already assigned elsewhere, so I return to them and try out some more routings. I assign the LFO that is controlling the Waveform Position to simultaneously play with the delay amount (I picture a puppet on strings, bobbing up and down).
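The puppet-on-strings image has a simple shape underneath it: one LFO read by several destinations, each scaling the same value around its own resting point. A minimal sketch—the parameter names here are illustrative, not a real plugin API:

```python
import math

# One LFO shared by two destinations, each with its own center and depth.
# Names like "wave_pos" and "delay_mix" are illustrative, not a plugin API.

def lfo(rate_hz, t):
    """A free-running sine LFO, ranging -1..1, at time t seconds."""
    return math.sin(2 * math.pi * rate_hz * t)

def routed(value, center, depth):
    """Map an LFO value onto a parameter around its resting point."""
    return center + depth * value

t = 0.25
v = lfo(rate_hz=0.5, t=t)                      # one source...
wave_pos = routed(v, center=0.5, depth=0.4)    # ...two destinations,
delay_mix = routed(v, center=0.3, depth=0.2)   # bobbing in lockstep
```

Because both destinations read the same source, the waveform position and the delay amount rise and fall together, which is exactly the puppet effect.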
After a while I start hearing bits of pulsation and movement in the sound. I have each of the LFOs set to its own tempo (unsynced from the “master clock” of the DAW, because synced sounds predictable), which means that there are a lot of micro-timing discrepancies happening inside the sound. Some of my routings are interacting in strange ways that keep the sound interesting: I can’t predict how it’s going to behave from one moment to the next.
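Those micro-timing discrepancies have a textbook name: beating. Two free-running LFOs at close but unrelated rates slowly drift in and out of phase at the difference between their rates. A sketch, assuming sine LFOs and made-up rates, not how any DAW computes it:

```python
import math

# Two free-running sine LFOs at close but unsynced rates (in Hz).
# Their sum beats at the difference frequency, here 0.53 - 0.50 = 0.03 Hz,
# so the combined wobble takes more than 30 seconds to repeat; nothing
# lines up with a master clock.

def drift(rate_a, rate_b, t):
    a = math.sin(2 * math.pi * rate_a * t)
    b = math.sin(2 * math.pi * rate_b * t)
    return a + b

def peak(rate_a, rate_b, t0, window=4.0, step=0.01):
    """Largest combined swing in a few-second window starting at t0."""
    n = int(window / step)
    return max(abs(drift(rate_a, rate_b, t0 + i * step)) for i in range(n))

strong = peak(0.50, 0.53, t0=0.0)   # LFOs nearly in phase: big swings
weak = peak(0.50, 0.53, t0=14.7)    # half a beat cycle later: near-cancel
```

The modulation swells and nearly vanishes on a cycle no musical grid contains, which is one reason the sound never behaves the same way twice.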
But there’s still something annoying about the sound. Is the filter set too wide—is that why it sounds too bright? I darken the sound a bit, but now I’m missing those layers of pulsation, so I open the filter back up halfway. I try swapping out one of the waveforms, then try moving to a new position within one of them, not quite sure if I’m hearing enough of a difference to make a difference. I keep tinkering, trying to get the sound closer to…a sound that I would like to play with. I have the sense that I’m failing at this, but at least I’ll have something to show for it. (Save your presets.)
So far, all of this might seem technical, but the fact is that I’m going on intuition, with only a general idea of the architecture of my software. (Prince: “I don’t want to have a preconception about what a piece of gear should or shouldn’t do. I just start using it. I start pushing buttons, and I discover the sounds that I can make with it.”) I’m trying things out while realizing that the paths I’m not pursuing vastly outnumber the paths I’m exploring. I’m using my ears, not my limited theoretical knowledge of sound design principles—are there any?—to navigate through the software.
Then, suddenly, an opening: the whole time I’ve been adjusting the sound-in-progress, my hands have been on the keyboard, playing, and it’s only now that I notice this musical frame for my sound design. There’s a high B-flat and D in the right hand, and a G in the left. While trying to get a timbre just right, I’ve been circling around a chord or two.
A bigger opening: the process of designing a sound and playing a few notes on the keyboard has put me into a headspace very different from the one I was in when I began. It’s not the “perfect” sound and a G minor chord isn’t much, but now I have a level of attention more useful than my imperfect know-how. Now I hear things in what I have in front of me, and that’s something to go on.
“For me, there are two types of creativity: fast and slow. The spontaneous, explosive side, where I’m generating ideas quickly; and the refining, organising side, where I slowly make it fit together into a satisfying whole. It’s important to know whether, at any moment, you need to be in fast mode or slow mode.
I imagine the best live concert being as moment-to-moment unpredictable as a Cecil Taylor free jazz improvisation; and the best studio recording being as detailed, planned and perfect as a Miyazaki stop-frame animation. Again, it’s fast mode versus slow mode.”
(Related reading: Daniel Kahneman’s Thinking, Fast and Slow)
At the center of electronic music production is a disconnect between the technologies of the craft and the feelings I hope to conjure through the sounds. The disconnect arises from the fact that thinking through the potentials of technologies and seeking expressiveness are different ways of being in the world. The technologies I use are software-based, and so by their nature are always more open-ended than I can fathom. For example, where exactly does a VST synthesizer end? With the four hundred presets that come with it? With the sounds I have made with it? The mind-bending fact is that, with software, there is no end to its sounds, because (1) I can change those sounds internally in the instrument itself and (2) I can change those sounds externally by routing them into one of the many other software creatures in the ecosystem of my DAW. I often think about the open-endedness of my tools: I’ll be riding my bike and wonder, how would it sound if I ran that sound into this device? Taking notes on processes I want to try helps, but only if I remember to read the notes, let alone actually try what I wrote down. As much as I ponder the potentials of the tools, their musical horizon always extends further than I can go, promising ever more enchanting sounds if only I would test one more combination.
In contrast to the open-endedness of my tools, I’m more closed and practical in my work routines. I’ll use anything that is at hand—let’s just use Preset no. 35 I made last week—and try to make something expressive from it, just to hear what happens. The sign that I’m shifting into producing mode is that I’ll start playing around with something. For some reason, playing around is always accompanied by the thought, this is truly stupid. But I ignore the thought and keep going. When I’m in this mode, I have no interest in technical questions (unless something isn’t working), and my analytical, what is this exactly? way of thinking becomes quiet. I go on intuition, just playing around, assessing and adjusting, making tiny annoying things a little less annoying, and most importantly, I move fast:
…no that doesn’t sound right, make it more like this; that’s too dark—lighten it up; the texture is muddy, make it clearer; slowly fade that ambience in; take something out because it feels busy; that sound is annoying; make it more lively; why is that sound sticking out?; the chords are boring; do I even like this?…
These kinds of impressions happen in quick succession, and I attend to their urgency without getting caught up in their demands so I can keep things moving. Once in a while, the disconnect between technical thinking and expressive non-thinking disappears and somehow my software tools feel alive—making sounds just as I want them to and being responsive to just what I want to do. For a moment, I’ll forget I’m making highly mediated music. For a moment, there’s no code, no screen, no speakers. It’s as if the computer has vanished and now the sounds can express a feeling rather than talk about themselves.