On Automated Aesthetics For Opened-Up Impact: Jon Hopkins’ Music


“I like everything to evolve, I don’t like sounds to be static really…
I like the idea of trying to create this musical world where everything is fluid
 and any sound can at any point just change.”

-Jon Hopkins

One of the most compelling qualities of Jon Hopkins’ music is how its elements shift, develop, and mutate over time. Even in textures where you think you hear all that’s going on—say, an apparently steady kick drum—there’s always more (and that kick isn’t so steady either). The shifting and mutating of the music’s parts is audible, but sometimes only subliminally so. You feel change happening and sense organic growth, but you’re not sure how it works or even when it started. The secret is that Hopkins’ music has been changing the whole time. Like the best classic minimalist music (such as Steve Reich’s Drumming), the effect of this growth is that even a simple-sounding track is neither simple nor does it produce a simple effect.

In recent articles in support of his latest recording, Singularity, Hopkins describes some of his aesthetic goals for his music. In the New Yorker, he speaks of using Ableton Live software, whose capabilities encouraged him “to imagine how one sound could lead to the birth of another.” Hopkins explains how the track “Feel First Life” uses “a synth sound that gradually morphs into a choral sound. That idea of a 15-part choir appearing out of the fabric of electronic sounds was what I was looking to do all those years ago.” In an interview with Resident Advisor, Hopkins shares technical details of how he crafts his music to evolve in a subliminal way through sound tweaking and morphing. A piece can consist of over a hundred tracks of audio or MIDI, “not playing simultaneously, just bits all over the place.” Each of these tracks receives “lots of infinitesimal tweaking across loads of parameters…It always comes down to minutiae on a screen. It’s just an accepted fact. A lot of effort goes into making things sound like they just happened naturally, but there’s always a lot of work behind it. But I kind of love that.” Hopkins says he’s often “trying to clean up evidence of the sound morphing techniques that I use. Sometimes I’ll hear a noise and then it stops dead, so I go in and find a sound I worked over a hundred times and realize that I didn’t quite filter the end out properly.”

Hopkins composes his music entirely within Ableton (“in the box” as it used to be called, though this practice is no longer unusual because many “studios” today consist of a computer), using numerous plug-ins (e.g. reverbs, delays, EQ, filters) to create chains of effects processing for each of his sounds. In a Stoney Roads interview, he says “I love the fact that the plugins around now allow you to really just have total control over where something sits [in the mix]…These are also just really interesting and creative elements of mixing a track for me.” In the Resident Advisor interview, Hopkins describes how a piece can begin with a few piano chords or even a simple rhythm finger-tapped on his desk, but after this first step the musical assemblage-making begins. No matter what the sounds are or how the track unfolds, every parameter of the music’s sound receives automation of one kind or another. For example, if a sound has reverb applied to it, automating this reverb can make it gradually increase in intensity over time and then fade away. In Ableton or any other DAW, automation can be drawn onscreen as a line or curve, or controlled manually using knobs on a MIDI controller. This automating of musical parameters to create continual growth or morphing is a key element of Hopkins’ production craft. Interestingly, he also describes automating things that many musicians wouldn’t necessarily consider to be musical per se, such as the mix’s stereo field. For example, on the piece “C O S M” Hopkins narrows the stereo field “to about 60%. It gradually gets even narrower and the frequency bands reduce as well. It all happens so gradually that you don’t think the sound is getting crap or weak. But when it opens up, there’s a lot of impact.”
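In plain computational terms, a drawn automation line is just a value ramping between breakpoints, and stereo-width narrowing is typically done by scaling the “side” portion of a mid/side split. Here is a minimal sketch of both ideas in Python—the function names are illustrative inventions, not Ableton’s API, and real DAWs work per-audio-sample at much finer resolution:

```python
# A hedged sketch of two techniques described above: a linear automation
# ramp, and mid/side stereo-width narrowing. Names are hypothetical.

def linear_ramp(start, end, num_steps):
    """Values ramping linearly from start to end, like a drawn automation line."""
    if num_steps == 1:
        return [start]
    step = (end - start) / (num_steps - 1)
    return [start + i * step for i in range(num_steps)]

def narrow_stereo(left, right, width):
    """Mid/side width control: width=1.0 leaves the frame unchanged, 0.0 is mono."""
    mid = (left + right) / 2.0
    side = (left - right) / 2.0 * width
    return mid + side, mid - side

# Automate width from full stereo down to 60% across a run of frames,
# in the spirit of the "C O S M" example:
frames = [(1.0, -1.0)] * 5  # toy stereo frames (left, right)
widths = linear_ramp(1.0, 0.6, len(frames))
processed = [narrow_stereo(l, r, w) for (l, r), w in zip(frames, widths)]
```

Because the ramp changes a little on every frame, the narrowing is inaudible as an event—which is exactly the “you don’t think the sound is getting weak” effect, with impact held in reserve for when the width snaps back to full.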

Here is “Feel First Life”, the track that slowly transforms itself from synthesized sounds to singing voices:

If you’re curious about Hopkins, you may also enjoy this Song Exploder podcast interview.


Hardcore Ambient


I notice the ambient style
at concerts and on recordings

a layered thick dense and noisy sound
a sound so deep I can’t see its bottom

a processed sound

it disguises the melodies
disfigures the chords
paints over the rhythms
with smears of snap crackle and pop

as if the musicians
don’t want you to know them
or follow or track what’s happening
to know how they made it

as if the sound is unclear
because their music doesn’t allow
for the pleasure of making sense.

Resonant Thoughts: Tim Krabbé’s “The Rider” (1978)


“On a bike your consciousness is small. The harder you work, the smaller it gets. Every thought that arises is immediately and utterly true, every unexpected event is something you’d known all along but had only forgotten for a moment. A pounding riff from a song, a bit of long division that starts over and over, a magnified anger at someone, is enough to fill your thoughts.”

-Tim Krabbé, The Rider (1978), p. 33

Notes On Notated Versus Produced Music


A percussionist recently got in touch to ask if he could buy a copy of one of my percussion scores, Zoom (1994), for some chamber performances this summer. (You can listen to me playing the music here, and my remix of it here.) I told him I’d check if I had a hard copy in storage because my Finale (notation software) files vanished long ago, along with whatever Apple computer I was using at the time. (Note to self: get better at archiving.) After digging through several boxes of I-just-can’t-throw-this-one-away books in storage, I finally found my original score, the paper still crisp because it had been sandwiched between two pieces of cardboard for twenty-four years. Ah, paper—a technology that lasts.

As I was cleaning up I thought about how I’ve shifted my composing focus from notation to recording. I used to compose scores for acoustic percussion, specifically mallet percussion. A musical score is a set of directions for musicians, and its virtue is that it gives non-improvising players a reason and a way to play together. I haven’t worked on a score in a long time. These days I compose through electronic music production, drawn by the possibilities of the computer and music software. Just as you can take a photo with your iPhone and then process it with photo apps, you can do a lot with sound on a computer: my studio weighs five pounds and its DAW software encapsulates the entire history of recorded music.

My music is still based in performance (always improvisation because it reveals to you what you didn’t know you knew), but I develop pieces not by notating pitches and rhythms but by shaping timbres and creating almost-but-not-quite acoustically real textures. For example, the music on recordings such as Quietudes or Piano and Metals Music could be acoustically played on piano, kalimba, and gongs. But the compositions also involve sound design that makes possible some impossible situations. On Piano and Metals Music, the gong is a small Thai gong, but its sound is sampled, which means I can play it high-pitched or super low-pitched on my keyboard, in effect making it an impossibly sized mini or gargantuan gong. To perform this music one would need a lot of custom-made gongs, but in the digital realm, re-sizing/re-pitching an instrument is easy. Similarly, the kalimba I sampled (from a Hugh Tracey instrument) has a sound so quiet that you need to be ten inches away to hear it, and even then its sound is thin. The kalimba can’t match the piano’s thunder, but on a recording they fit together. So the advantage of producing music for recordings is that you can create sonic spaces that are impossible in the real world.
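The gong re-sizing works because playing a sample at a different keyboard pitch simply means resampling it at a different rate: shifting by n semitones changes the playback ratio by 2^(n/12), so pitching up shortens the sound (the “mini” gong) and pitching down stretches it (the “gargantuan” one). A minimal sketch of the idea in Python—naive nearest-neighbor resampling with hypothetical names, not how any particular sampler is implemented:

```python
# A hedged sketch of sample repitching via resampling. Real samplers
# interpolate between samples; this nearest-neighbor version just shows
# why pitch and duration are coupled when you repitch a recording.

def semitones_to_ratio(semitones):
    """Playback-speed ratio for a pitch shift of the given number of semitones."""
    return 2.0 ** (semitones / 12.0)

def repitch(samples, semitones):
    """Resample by stepping through the source at the new ratio: higher
    pitch yields a shorter, 'smaller' gong; lower pitch a longer one."""
    ratio = semitones_to_ratio(semitones)
    out_length = int(len(samples) / ratio)
    return [samples[int(i * ratio)] for i in range(out_length)]

tone = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5]  # toy waveform
octave_up = repitch(tone, 12)     # half as many samples: the "mini" gong
octave_down = repitch(tone, -12)  # twice as many: the "gargantuan" gong
```

The coupling of pitch and length is the whole trick: a small gong pitched far down rings like an enormous one that could never be cast in metal.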

Sometimes I think about returning to composing for acoustic instruments because I play a real marimba every day and ideas for pieces are always close at hand. One appealing aspect of composing scores for acoustic instruments is that the sound of the instrument and the conventions of notation—time and key signatures, staves, dynamic markings, mallet choices, and so on—offer constraints that help the composer make decisions and help the musicians understand the composer. The conventions of writing for acoustic instruments give the composer something to push against, something to resist. Conventions are why, on the one hand, most string quartet music sort of sounds the same, but also why, on the other hand, a remarkable string quartet piece can seem to transcend its instrumentation. Another thing about notated pieces is that they travel a well-trodden path. Musicians have been reading music for a very long time, and notating one’s music opens it up to performance opportunities and audiences. 

Zooming out to a broader historical and cultural perspective on music-making, however, puts notation into a different context. First, notation has its limits. Music always exists as sound before it exists as symbols on a page, or as Nassim Taleb puts it, while theories come and go, phenomenologies stay. From this perspective, music notation is like a constraining definition of what music is. Second, most of the world’s musicians—past and present, “folk” or “art”—don’t read music or know it as a notated thing, yet they nevertheless know music deeply. They may not have a score in front of them, but theirs is a recombinant art: they have thousands of compositions and musical building blocks committed to memory, coded into their bodies. (For example, consider a performer of classical Hindustani music who improvises upon the notes of a raga.) Third, some observers have fretted over the rise of the DJ in the late 20th century and the prominence of electronic musicians who don’t play a musical instrument, don’t know C major from G minor, and don’t play notes from a page yet put together fantastically intricate productions. But maybe this shift simply illustrates contemporary Western musicians aligning themselves with the sounds and uses of so many vernacular musics around the world. Think about this: with most un-notated music, most of the time, there’s a beat that repeats, syncopation, short melodies that vary on a theme (or super long ones that weave around a theme), call and response (but no fugues), drone or a few chords, and community involvement.

I like music notation, but I like electronic music production too because of the sensations it generates. I like acoustic instruments, but they’re not inherently magical. What is magical is a musician’s sense of touch. Touch is the x-factor that is audible when you hear Yo-Yo Ma bowing Bach, or Autechre unleashing their Max/MSP software patches, or when a drummer plays the bell timeline on a beer bottle and it sounds like a bell. (Conversely, lack of touch is a red flag telling you something else altogether about a musician.) Touch is everywhere in music, and the main difference between playing an acoustic instrument and playing electronic music production is that the latter has many more layers of mediation between your touch and the resulting sound. In electronic music one’s touch can be amplified and beautifully mutated, but also, if you’re not careful, distorted and dissipated, even buried completely. In music production, the instrument is your relationship to your techno-musical system, a cascading feedback loop of inputs and outputs feeding back into inputs. Zooming back to looking for my score of Zoom: musical notation is mediation too, a set of directions connecting the composer to the performer through rules for recreating a set of relationships. Like the produced audio recording, the written score is a template for touch, a guide to some hoped-for musical success.


Music Lessons 2


Music is like an organism
a delicate constellation
of interrelated parts
each of which
needs to work perfectly
to maintain the spell

like a living thing
robust when moving well
flexible and even-tempered

like a biology in sound
one beat ahead of its own extinction.

Resonant Thoughts: Philip Brophy’s “100 Modern Soundtracks” (2004)


“The ‘nature of sound’…is not a/any/all sound’s essential or absolute guise (as such divination is impossible) but its irreducible behavior, distinctive apparition and ingrained purpose. It eschews any essence as to what it might be—as if it is a metaphor pointing to some sonic soul that has motivated the act of description—and instead accepts its pliability, malleability and flexibility as its power.”

-Philip Brophy, 100 Modern Soundtracks (2004), p. 6

Different Types Of Musicians


Once in a while I imagine different general musician types, among which I include myself. Here are six types:  

The underplayer. The underplayer doesn’t play “out” or deliberately enough. He’s often fairly recently out of school (a college music program), has his playing together and knows the notes, but there’s something missing. Maybe his strokes are too delicate—as if he’s not convinced of what he’s playing? The notes are there but they sound ventriloquized, as if they are those of someone else (his teachers, his favorite musicians as he imagines them playing). His gestures are proper but not yet his own. His solos don’t explore the music or dynamically interact with it—frozen riffs and patterns that he inserts at the appropriate time (the same way each night too), hoping for the best. If you change up your dynamics on him (as a friendly experiment), he doesn’t respond. He means well but you feel like he always needs to be turned up—way up. You want to compress and add reverb to his sound to compensate for his lack of presence.  

The overplayer. Unlike the underplayer who may have some doubts about what he’s doing, the overplayer over-believes in the power of his own presence. The overplayer has confidence in his musical-motor skills to a degree that he has no problem, well, overplaying. Overplaying is doing too much of something—throwing in too many unnecessary “licks”, filling too many of the music’s spaces, playing too brashly or loudly, and so on. The overplayer’s confidence makes it difficult for him to listen well and interact sensibly (i.e. complementarily) with anyone else as he’s too caught up in the acrobatics of his own playing to notice. Watching the overplayer overplay, you wonder if he does that because he’s bored or because it’s in his personality. Maybe he’s insecure?

The steadyplayer. The steadyplayer is reliable yet somewhat boring (though apparently not bored), content to play the same way over and over, confident in the proven power of this or that phrasing, of playing the notes just like this, every time. The downside of the steadyplayer is that he can sound like an edited MIDI sequence. On the plus side, the steadyplayer is always attentive: he has space to notice what’s going on around him as he plays. He’s thoughtful, well-adjusted to being a professional, and always gets the job done. 

The flashyplayer. Related to the overplayer is the flashyplayer, who knows he’s very, very good, and his playing reminds you of that at every turn. The flashyplayer is one step down from a virtuoso—the difference being that, unlike the virtuoso, the flashyplayer’s playing can’t make you cry; and unlike the virtuoso, the flashyplayer can never make himself disappear into the music. Like the reflective ball hanging above the disco, the flashyplayer is designed to perpetually disperse his own reflections around the space of the music. Here I am! Rat-a-tat-tat!

The emotionalplayer. The emotionalplayer has sunk his life into these sounds in this musical moment, his every expression connecting to the currents of his inner life. But sometimes it feels awkward to listen to the emotionalplayer because there seems to be no line between his life and the life of the music. Even though that line-blurring is a courageous artistic accomplishment in itself, you sense that maybe the emotionalplayer depends on the music’s cathartic powers too much (and certainly more than you do). You fear that one day music could let him down, and then what will he do?   

The naiveplayer. Whether a child just learning music or an adult with zero musical experience, the naiveplayer is free of the music world’s heaviness and has not yet learned the sonic signs of underplaying, overplaying, steadyplaying, flashyplaying, or emotionalplaying. The naiveplayer simply taps around on a musical instrument, delighting in the sounds being themselves, smiling because he gets one of the keys things about music—which is that its sounds provide instant feedback on your actions. This is so cool he says. It sounds mysterious! Watching the naiveplayer tapping around reminds you that every expertise has its downsides.