Music’s Main Enchanter: Hearing The Human Touch

One of the challenges with making music using software as a primary instrument is what feels like six degrees of separation between what I intend to do, what I’m actually doing, and the sounds I’m able to make. As I’ve written about, there can be a disconnect between doings and soundings: my sense of musical touch feels blunted, and my instinct for how to proceed–which in a non-computer context usually leads me to just play something and get going–is frozen by the possibilities of the software. What can I do? What should I do? And is there space for me, the practicing acoustic musician, in this way of making sounds?

I think about this disconnect between doings and soundings when I learn how other musicians approach their workflows. One of the principles of electronic music production that I observe musicians talking about constantly, yet often obliquely, is the importance of making music that captures an audibly human touch. When musicians speak of making their music more organic, more alive and breathing, and less robotic and cold, they’re talking about ways to let the human be heard in, or through, their sound. There are many ways to achieve, or create the semblance of achieving, this human touch, as we see in examples from this blog’s Database. Notice how the producer Biosphere begins a composition with “organic” acoustic sounds, over which he layers synthetic ones: 

“I often start a composition by creating a theme with an acoustic-digital filtered sound. Over this, I often try to integrate pure synthetic sounds from a synthesizer. This makes the music sound a little more organic than if I just used synthesizers.”

Or notice producer James Holden describing setting up his music’s parts as a “living system”:

“The way I make music is to try and set it up as a living system where everything’s moving by itself a little bit, and interacting with each other, but I can steer it.”

Or notice Jean-Michel Jarre talking about avoiding repetition (e.g. drum loops, sequences) to sidestep the robotic sound that characterized the musics of his 1970s and 80s contemporaries: 

“When I started to do electronic music I was obsessed […] about not having anything being repeated in exactly the same way. For me it was exactly the opposite attitude to that of Kraftwerk, Tangerine Dream and all those electronic bands who were doing something more robotic. I considered electronic music in a much more sensual, organic way, where nothing should be repeated.”

In musics where repetition is fundamental, such as hip hop, producers capture and finesse a human touch through subtleties of beat programming. Even though instant note correction or quantization (i.e. rounding off note locations to a prescribed note value, such as a 16th note) is available with a keystroke, beat makers labor over what kinds of quantization to use (“Straight” or “Swing”), in what amounts (100 percent or 20), and which elements to apply it to (just kick and snare, or only the hi hats). Some legendary beat makers, like the late J Dilla, finger drummed their beats, recording them live to capture the wonkily perfect time feel of their performances.
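To make those quantization choices concrete, here is a rough sketch of what a quantizer is doing under the hood. This is not any particular DAW’s algorithm–the function and parameter names (`quantize`, `strength`, `swing`) are my own shorthand–but it captures the three decisions beat makers labor over: the grid (note value), the amount (100 percent snaps exactly, 20 percent only nudges), and the swing (delaying every other grid step to loosen the straight feel).

```python
def quantize(times, grid=0.25, strength=1.0, swing=0.0):
    """Move note onsets toward a timing grid.

    times:    note onsets in beats (1.0 = a quarter note)
    grid:     note value to snap to (0.25 = a 16th note in 4/4)
    strength: 0.0 leaves notes alone, 1.0 snaps them exactly
    swing:    0.0-1.0, pushes every odd ("off") grid step late
    """
    out = []
    for t in times:
        step = round(t / grid)       # nearest grid step
        target = step * grid
        if swing and step % 2:       # odd steps land late for swing feel
            target += swing * grid * 0.5
        out.append(t + strength * (target - t))
    return out

# A slightly rushed 16th note, quantized three ways:
quantize([0.27])                     # 100% straight: snaps to 0.25
quantize([0.27], strength=0.5)       # 50%: only halfway there, 0.26
quantize([0.27], swing=0.5)          # swung: lands late, at 0.3125
```

The `strength` parameter is the whole game here: at 100 percent the result is airtight and machine-perfect; at 20 percent most of the performer’s wonky timing survives, which is exactly why producers rarely leave it at the default.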

Another way to conjure the human is by using analog electronic instruments and effects processing that produce or evoke the sound artifacts associated with vintage gear. Examples of this include analog synthesizers and drum machines, as well as plug-ins that faithfully emulate magnetic tape recorders (with their hiss, wow, and flutter artifacts), mixing consoles (with their quirky, no-two-are-alike channel strips), classic valve and tube compressors, plate reverbs, crappy RadioShack speakers, and so on. (Today, every part of yesteryear’s recording studio is emulated.) How, you ask, does old gear or simulations of old gear produce sounds considered more human than sounds forged via digital means? One explanation is that the signal paths of analog equipment introduce artifacts (e.g. micro-saturations and distortions) into the sounds that pass through them. It’s the imperfections and inconsistencies of these artifacts that we perceive as warm, as un-machine-like. Put another way, since the late 1950s we’ve come to hear the sound of vintage gear’s mediations as natural. Is it that we’ve come to love our imperfectly recorded artifacts because they ring true with our own imperfect nature? 
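One way to picture those micro-saturations is as soft clipping: loud peaks get gently compressed and rounded off (adding harmonics in the process) while quiet material passes through almost linearly. The sketch below uses a hyperbolic tangent curve, a common textbook stand-in for this behavior–it is not the circuit model inside any real plug-in, and `saturate` and `drive` are names I’ve invented for illustration.

```python
import math

def saturate(sample, drive=2.0):
    """Soft-clip one sample with a tanh curve.

    drive controls how hard the signal pushes into the curve;
    the output is normalized so a full-scale input (1.0) still
    maps to 1.0, as a unity-gain saturator would.
    """
    return math.tanh(drive * sample) / math.tanh(drive)
```

The perceptual point from the paragraph above lives in that curve’s shape: the nonlinearity treats every peak a little differently depending on its level, so no two passes through the “circuit” are quite identical–the inconsistency we hear as warmth.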

Whatever the reason, it’s interesting to hear how conjuring the human touch through the aesthetics of the imperfect by using vintage equipment and their simulations has shaped all kinds of “lo-fi” musics. In lo-fi hip hop and other (for lack of a better term) ASMR easy listening/chill styles, imperfections of performance, timbre, and recording quality are foregrounded to conjure a cozy sound and a sense of intimacy, as if the music is saying, this is the organic and natural sound made by a perfectly imperfect musician.

My approach to the question of how to incorporate the human touch in electronic music–besides being perpetually frustrated–is to be deliberate about how I put together tracks so as to preserve and hopefully amplify whatever touch I have. I like hearing thinking in the music as the music is happening–music that enacts the And Then… principle. This rules out using most kinds of loops and sequences because the problem with loops and sequences (unless it’s a Bach sequence) is that they’re airtight–they’re unable to accommodate error. I prefer hearing struggle or striving inside the musical process, and recording improvisations is the simplest way to achieve this because when I improvise I struggle. But: struggling always leads to interestingness! I’ve had good results recording an initial part freeform, without a click track, and then laboring to fit other parts around the performance’s odd timings. The parts might sound like a loop, but they’re not quite a loop. I find that working this way captures and scales up the open-ended unsettledness of the track’s initial steps. 

There are other human markers in making music that I’m drawn to. I like when a part almost arrives somewhere, but not quite. I like when I can hear the influences of other musicians in the sound, but the influence is slightly warped. I like repeating the good bit once or twice, but getting it slightly wrong the third time because I couldn’t pull it off. I like not knowing how the music will end up and trying to incorporate that sense of unknowing into the music’s final structure. What I’m saying is that I try to turn liabilities into assets. In sum, any producer or composer will do well to prioritize ways of nurturing and articulating what for them is the human core of their music. It’s this core that we want to hear in, and through, the music. This core is why we listen. This core is music’s main enchanter and maybe the reason why, if you think about it, music’s affective metaphors are always physical: we listen to be moved by music, stirred by music, touched by music.