Effects Are All About Affect

“I’ve found that quite a good trick is that if you feel like you’ve put too much reverb on something just add more.” 

– Clark

When I’m composing, I’m playing or drumming on a keyboard controller, but mostly I’m looking at a screen and inhabiting the virtual world of DAW software, listening, trying to finesse whatever sounds I have into something more compelling than what they already are. If the sound isn’t compelling, ambiguous, out of the ordinary, or prompting some kind of real response, I keep trying things until one of those goals comes into focus. In other words, when nothing seems to be working, I haven’t yet found the thing that could work.  

While I do design my own sounds using software synthesizers (such as Serum, Vital, and Hive), I often find that it’s the sound of the effects and signal processing that I apply to my sounds that’s the most interesting. In a computer-based recording environment—which is the default tool used by almost everyone who produces music themselves—effects are a compelling point of focus because (1) there are so many of them and (2) they can be combined in so many ways. As a starting point, effects include modules for reverb, delay, distortion, EQ, and compression. But from there, they move outwards into more black-box sound-mangling, granular, and plain strange territories. Effects add or foreground traces, echoes, mirages, ambiances, textures, harmonies, sub-tones, presence, and implied rhythms to sounds. One can overdo effects and swamp a sound source, or use them lightly so that they are felt as much as heard—that is to say, heard only at that point where they begin to be felt, but no more. In short, effects are negotiations with the fungibility of timbre and ultimately, all about affect. 

The point where musical timbre and texture intersect reminds me of a book that I first encountered in graduate school, The Affecting Presence: An Essay in Humanistic Anthropology (1971) by Robert Plant Armstrong. Drawing on examples from Yoruba and Javanese expressive culture (sculpture, music, and dance), Armstrong proposed that we understand artworks not merely as symbols or representations of social life, but rather as direct “presentations” of feeling and affect. As a musician, this stance aligned with my understanding of what music is as an experience (as opposed to what music is theoretically—as a “thing” to be transcribed, as a part of social life, etc.). Twenty-five years later, I find Armstrong’s phenomenology of art as an inherently affecting presence to be a useful frame for thinking about the use of effects in music production, whether they be simple distortions of instrument timbres, rampant auto-tuning on voices, or complex transformations of a mix. To sum up: if effects are affects, then an effect is only as useful as its affect is affecting.

• 

Back to my (by no means original) practice. One of the ways I learn about effects is to use them in various combination chains. For instance, a simple combination might include a reverb or a delay, followed by a distortion unit, an EQ, and then compression. I save these four separate effects as a single chain called an “effects rack” that I can use on a conventional sound, such as a piano, to hear what happens. Since the rack is saved, I can also re-use it at a later time in another piece, on a different sound. Sometimes I place an effect or effects rack directly on a sound’s track, and other times put the effects on a Return track. This allows me to control how much of the effects processing is applied to several sounds simultaneously. For example, on a piece with keyboard, percussion, and voice tracks, each of the sounds can receive varying amounts of an effects chain from a single Return track. This leads to unforeseen, by-product timbres because each track’s sounds trigger the effects’ affect in different ways.  
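For readers who think in code, the two routings described above can be sketched as function composition. This is a minimal Python sketch, not any DAW’s actual API; the effect functions here are crude placeholders (simple gain, hard clipping, thresholding) standing in for real plugin DSP:

```python
def reverb(x):      # placeholder: a reverb tail approximated as simple damping
    return [s * 0.8 for s in x]

def distortion(x):  # placeholder: boost then hard-clip to [-1, 1]
    return [max(-1.0, min(1.0, s * 3)) for s in x]

def eq(x):          # placeholder: flat EQ (identity)
    return x

def compressor(x):  # placeholder: halve any sample louder than 0.5
    return [s * 0.5 if abs(s) > 0.5 else s for s in x]

def rack(*effects):
    """Save a serial chain of effects as one reusable unit (an 'effects rack')."""
    def apply(signal):
        for fx in effects:
            signal = fx(signal)
        return signal
    return apply

# The four-effect chain from the text, saved as a single rack.
my_rack = rack(reverb, distortion, eq, compressor)

# Placing the rack directly on a track: the whole signal passes through it.
piano = [0.2, 0.9, -0.4]
wet_piano = my_rack(piano)

# A Return track: several sources each send SOME amount into one shared rack,
# so one chain processes a weighted sum of all of them at once.
def return_track(rack_fn, sources_and_sends):
    bus = [0.0] * len(sources_and_sends[0][0])
    for src, send in sources_and_sends:
        bus = [b + s * send for b, s in zip(bus, src)]
    return rack_fn(bus)
```

The point the sketch makes is structural: an insert processes one track fully, while a return processes a mix of sends, which is why the same rack yields different by-product timbres depending on what feeds it and how hard.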

The takeaway from my use of effects is twofold. On the one hand, effects aren’t a complete substitute for musical structure and design. Yes, an effect that slowly changes or evolves over time (via automation) does create a kind of (often hypnotic) structure, but such changes have their limits. For me, listening to a slowly opening filter on a synthesizer is often too predictable to be structurally interesting. On the other hand, effects are a producer’s magic wand, capable of morphing this sound into that one in a second, conjuring an enchanting affect out of the slimmest of materials and reminding us that the sonic profile of music is always fluid, a feeling in progress. 

Working Quickly, Working Slowly 

There are two main ways I work, both of which have their upsides and downsides. The first way is relatively quick, uninterrupted, and takes place over a single session. If work on the music began around 10am, I might be done by 2pm. Not done done, but having decided upon most of the piece’s main sounds and structures. The upsides of this way of working are:

• it’s a continuous flow state,
• it’s a way to commit to decisions now, not put them off,
• because of the flow and quick decision-making, one idea often quickly leads to another,
• it’s exciting!

The downside of this way of working is that it feels frantic because there’s a self-imposed urgency to 

• finish something right now,
• find the “right” sound right now,
• understand where the music is going right now, which often leads to
• following conventions of musical form (e.g. intro theme, development, surprise ending) that aren’t necessary.

The second way I work is slow, interrupted, and takes place over multiple sessions. This way of working isn’t as exciting as the first way, but it’s more objective in the sense that I have space to consider details that I ignored while working quickly. For example, maybe I want to work in a series of volume automations to parts in a piece. This requires a global view of the music in its entirety and some patience to carefully draw in the volume changes just so. On a recent project, this task took me about a week. Maybe I could have done it in a day, but frankly an hour or so was all I could muster before I wanted to move on to something else. Such editing tasks have to be done and I’ll do them, but I limit my exposure because I’m biased towards the exciting.

My ways of working quickly-in-a-single-session and slowly-over-time roughly correspond to what the psychologist Daniel Kahneman describes as System 1 and System 2 types of thinking in his book Thinking, Fast and Slow. System 1 is the intuitive type that jumps to conclusions based on limited evidence, while System 2 is the deliberate type that proceeds by cautious reasoning. Kahneman offers dozens of case studies to illustrate the shortcomings of System 1 thinking, from exaggerating the coherence of what we hear, to focusing and other cognitive illusions, to the limitations of the “inside view” and the sunk-cost fallacy. We have, says Kahneman, summing up our proneness to error, an “almost unlimited ability to ignore our ignorance” (201).

With System 1 and System 2 thinking in mind, I alternate between working quickly and working slowly. Working quickly is my preferred production tempo, because it feels intuitive, it’s uninterrupted, and its results often surprise me. But I always revisit this work the next day, the next month, or the next year. If the music felt so exciting then, let’s reassess it now from the perspective of System 2 thinking. Take your time with the music, finesse its details, and make it better.    

Curating The Week: Attention, Time, Decolonizing Electronic Music Software

An essay about attention (and time).

“The phrase attention economy poses attention as a commodity to be portioned off and sold, usually on the internet. (We also say that attention is something to be ‘paid.’) But maybe attention is more a state of mind, akin to happiness or disgust, than a currency. It’s a feeling, a state of alertness that we can choose to enter, forestalling judgment for the sake of gathering information. The best artists coax us into this state, then manipulate that focus.”

A comic about time (and attention).

An article about decolonizing electronic music software.

“It’s not that the music they make will sound ‘more Western,’ but it is forced into an unnatural rigidity,” Allami says. “The music stops being in tune with itself. A lot of the culture will be gone. It’s like cooking without your local spices, or speaking without your local accent. For me, that’s a remnant of a colonial, supremacist paradigm. The music is colonized in some way.”