Curating The Week: Terry Riley, John Luther Adams, The Universality Of Music

An interview with Terry Riley.

“Beginnings and endings aren’t that important, because you’re just tuning in to a sound current.

Music is the involvement of the human spirit with sound.

If you know what you’re doing in the arts, then you’re doing it wrong. That’s a pretty good maxim.

I think before Indian classical music I was searching for a tradition within music that I didn’t know about, and when I found Indian classical music it had so many answers for me about questions I was trying to solve.

‘How do you use modal music in a form?’ That’s a big question for Western musicians because we had a big modal tradition in the 14th century, and then harmony took over and musicians really didn’t know how to work with modal harmony and how to create a form with it, I feel. So they kind of abandoned it, and went with just harmonic structure.”

An interview with John Luther Adams.

“Music for me is a kind of spiritual discipline; it’s as close to religion as I get. It’s a way of being in touch with mysteries larger, deeper, older than I can fathom, and so, because of that, I’ve never really been interested in expressing myself in music.”

An article about the universality of music.

“The world is full of music we can’t hear…hidden in messages and melodies, patterns and harmonies that move through and around us all the time, beyond the range of our perception. It’s in the high harmonics of the swirling atmosphere and the subterranean chords of shifting plates. In the voices of creatures that communicate at frequencies far above and below our speech. Mice that squeak to one another ultrasonically as they move through our walls on padded feet. Birds that flicker by so fast we barely hear their songs—it’s only when we slow down their melodies that they sound like ours. Whales that sing song lines so leisurely they last for hours and transmit halfway across the ocean before they’re done.”

Database: Seth Drake On Harmonic Coloring

“Let’s say you have a bunch of hot AF signals that hit a limiter and now, suddenly, these waveforms have gone from being big and round and way over zero to being square and sitting right at zero. Anytime you have square waveforms, on the sides of those squares–as the wave transitions up to the peak moment, or post-peak recedes back to zero–you get what we call sideband harmonics. And those sideband harmonics are oftentimes things that we find as pleasurable.

When you saturate a bass and suddenly you take this pure tone and it turns into all these upper band harmonics–we like the sound of those things. That is distortion and it’s adding harmonic content that wasn’t present at the beginning of this path. So no matter what you do–whatever system you pick–if you’re squaring your waveforms in your master you’re ending up with artifacts. Doesn’t matter if you can say, ‘they’re bad’; by definition they are square and you now have sideband harmonics. It is by definition distorted and you are by definition adding artifacts.

That piece for me matters because we have all this dogma around clipping and distortion […] You can get this saturation, this harmonic coloring out of the clipping, which gives you these sideband harmonics, which gives you this richer, fuller sound that doesn’t have a tinny profile associated with it.

These processes…have been part of the mastering process, and it’s just been something that they’ve kept aside.”

Seth Drake

database.
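Drake’s claim is easy to verify numerically: hard-clipping a too-hot sine adds harmonic content the pure tone lacked. Here is a minimal Python sketch, not from Drake (what he calls “sideband harmonics” shows up in this simplified model as odd-harmonic distortion):

```python
import math

def harmonic_level(signal, cycles_per_window, harmonic):
    """Correlate the signal with sine/cosine at `harmonic` times the
    fundamental and return the magnitude of that frequency component."""
    n = len(signal)
    k = cycles_per_window * harmonic  # bin index of the harmonic
    re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(signal))
    return 2 * math.hypot(re, im) / n

n, cycles = 4096, 16
sine = [2.0 * math.sin(2 * math.pi * cycles * i / n) for i in range(n)]  # "hot" signal peaking at ±2
clipped = [max(-1.0, min(1.0, s)) for s in sine]  # hard limiter squares off the peaks

# The pure sine has no energy at the 3rd harmonic...
assert harmonic_level(sine, cycles, 3) < 1e-6
# ...while the clipped version has gained audible odd harmonics.
assert harmonic_level(clipped, cycles, 3) > 0.1
```

The squared-off tops are exactly where the new harmonics come from: the closer the clipped wave gets to a square wave, the stronger its odd harmonics become.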

In-The-Moment Music Production Constraints

“I feel the important part of making a track is to recognize the point where you have to listen to what the track wants. This point comes in around 40–50% of the whole process, where it’s not so much about what you want with the track anymore, but what the track wants you to do with it. To figure out this change of perspective is the only way to successfully finish a track. Of course it’s intuition, but at a certain point the music is the boss. And you always recognize it when you want to finish a very fancy idea for example and it doesn’t work. You build the arrangement and it doesn’t work, you try something else and it doesn’t work. If you just listen to what the track wants, it’s much easier.”

Stimming

A constraint is something that limits you in some way, something you can push against or work around, something that forces you to be resourceful despite a less than optimal situation. Common constraints for makers of art and craft are their materials (or lack thereof), conventions of practice, and of course, time itself. Since there’s never enough time to try out all options, a maker benefits from constraints—either self-imposed or realities of the job at hand—to limit those options and keep making progress.

My approach to constraints is not to set them from the outset of a project, but rather let them evolve as I learn which workflow options are affording me the most pleasing results. (Which reminds me of architect Christopher Alexander’s notion of a continuous very-small-feedback-loop adaptation.) I don’t begin a project by saying, “I’m going to limit myself to just this one synth.” Instead, I begin with whatever sound I happen to be using that’s giving me interesting results and then focus on how I might incorporate constraints to shape how I use it. For example, on a recent project I got into a routine of using audio from my own pieces loaded into a step sequencer/sampler. I quickly learned that to hear a sufficient amount of my samples I needed to (1) greatly extend the sustain and release amounts of the sampler’s envelope and (2) change the sequencer’s note value from 1/16 to 1/8 to increase the samples’ play time. With more sustain, release, and time, the samples had a chance to “bloom” over a slow sixteen-pulse cycle. These requirements for hearing the samples properly became constraints that I continued using on subsequent pieces.
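The note-value change is simple arithmetic: a step’s duration scales with tempo and note value. A small Python sketch (the 120 BPM tempo is hypothetical; the post doesn’t specify one):

```python
def step_seconds(bpm, note_value):
    """Duration of one sequencer step in seconds. note_value is e.g. 16
    for 1/16 notes. A quarter note lasts 60/bpm seconds; a 1/16 note is
    a quarter of that, a 1/8 note half of it."""
    return (60.0 / bpm) * (4.0 / note_value)

# At 120 BPM, switching the sequencer from 1/16 to 1/8 steps doubles
# each sample's window from 125 ms to 250 ms...
assert step_seconds(120, 16) == 0.125
assert step_seconds(120, 8) == 0.25
# ...and stretches a sixteen-pulse cycle from 2 to 4 seconds of "bloom" time.
assert 16 * step_seconds(120, 8) == 4.0
```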

I also used some of the sequencer’s simple sound shaping tools, particularly its filter, which works as both a low- and high-pass. Over the course of making pieces using the sampler, I got a feel for a range of filter settings that sounded good, committing to a timbre from the get-go rather than fussing over it later. Similarly, I began panning each part. Typically, sample 1 was panned in the center, while samples 2 and 3 were panned left and right. So what began as a way to more clearly hear the composite texture of several parts playing together by spacing them out along the stereo field became a constraint: I decided that these tracks require only three main parts.

But the most significant constraint that emerged while making the pieces was committing to various sample playback locations. As I auditioned locations I wrote down their numbers so I could return to them. Soon I had a long list of 20 to 30 of my favorite locations. Then it became a matter of how to proceed through these samples: Do I focus on one sample? Cycle through them all? Play each one twice? Four times? A solution that sounded good became a simple arrangement constraint: I would keep part one as a through-line, repeating sample, while parts two and three would move through numerous similar, yet different samples. I wasn’t building loops but rather setting up different rates of change inside a steady texture.
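That arrangement scheme—one repeating through-line plus parts moving at different rates of change—can be sketched in a few lines of Python (the location numbers and rates are hypothetical):

```python
from itertools import cycle, islice

# Hypothetical favorite sample-playback locations, noted by number.
locations = [3, 7, 12, 18, 21, 26]

def arrange(locations, steps):
    """Part 1 holds one repeating sample as a through-line; parts 2 and 3
    move through the remaining locations at different rates of change."""
    part1 = [locations[0]] * steps                     # steady through-line
    part2 = list(islice(cycle(locations[1:]), steps))  # changes every step
    slow = islice(cycle(locations[1:]), (steps + 1) // 2)
    part3 = [loc for loc in slow for _ in (0, 1)][:steps]  # changes every 2 steps
    return part1, part2, part3

p1, p2, p3 = arrange(locations, 8)
assert p1 == [3] * 8                            # a repeating through-line
assert p2 == [7, 12, 18, 21, 26, 7, 12, 18]     # fast rate of change
assert p3 == [7, 7, 12, 12, 18, 18, 21, 21]     # slower rate of change
```

The point isn’t the particular numbers but the structure: different rates of change inside a steady texture rather than loops.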

In sum, while there’s nothing wrong with imposing constraints from the outset of a musical project, the arbitrariness of that approach can feel pointless. Instead, cultivating constraints in the moment is a way to respond to what the music needs, or as the producer Stimming puts it, to listen to what the track wants. Cultivating constraints looks to the options at hand, here and now, as a dynamic way to interact with, and adapt to, the music in progress.

Check out my book here.

Database: Ryan Lee West On Exploring Tone, Destroying Sounds, And Going Off The Grid

“I think it’s really essential to explore tone—the tone of synths and drums and how bright or dark they are and to listen carefully to how they behave alongside other sounds, exploring tiny amounts of distortion, delay, filtering and compression. But also, don’t be scared to destroy sounds. Sometimes, chaos is needed in electronic music more than acoustic music, because, by its very nature, it’s quite rigid rhythmically and clean-sounding. I would also add that to generate more interesting melodies and chord progressions, you should regularly approach this without a beat or grid. Simply record long passages of improvisation with a synth sound that you enjoy, and then later you’ll be more inspired to make it work with rhythmic samples, because grids often restrict some amazing yet simple possibilities.”

Ryan Lee West

database.

On Reducing Workflow Pressure

A pernicious music production pressure is the unreasonable hope for each step of the process to generate immediate results, offer no resistance, and go perfectly. I try to reduce this pressure. One way to do that is to divide the process into separate, self-contained steps. (The steps feel like games.) An example from my work is beginning a session with a threadbare rhythm and then processing it later into a more interesting form. Dividing a production process takes pressure off individual steps to be more than they are. Let’s sketch the process.

The Rhythm
For a rhythm I’m looking to make a pattern that’s minimal yet evocative. Using just three or four sounds, I play around with a few notes for 8 bars or sometimes much longer. Whatever the length of the pattern, I double or quadruple it and then edit this longer version, removing (muting) hits here and there to give the rhythm more space. Removing notes is the best way to clarify a texture. A rhythm can never have too much space; this is especially the case with rhythms that will be processed later with effects. Space, in music and out, is a gift to your future self, who will find ways to fill it.
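The double-then-mute move can be sketched in Python (the pattern, mute fraction, and seed are hypothetical; in practice the muting choices are by ear, not random):

```python
import random

def open_up(pattern, copies=2, mute_fraction=0.25, seed=7):
    """Double (or quadruple) a step pattern, then mute a fraction of the
    hits in the longer version to give the rhythm more space.
    Steps are 1 (hit) or 0 (rest)."""
    rng = random.Random(seed)
    longer = pattern * copies
    hits = [i for i, step in enumerate(longer) if step]
    for i in rng.sample(hits, int(len(hits) * mute_fraction)):
        longer[i] = 0
    return longer

kick = [1, 0, 0, 1, 0, 0, 1, 0]     # threadbare 8-step pattern
edited = open_up(kick, copies=2)
assert len(edited) == 16            # doubled in length...
assert sum(edited) < 2 * sum(kick)  # ...with hits muted for space
```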

I notice that rhythms in the early stages of making don’t sound enchanting because their sound isn’t yet nuanced. Even so, I aim for the rhythm in its unprocessed state to have a novel quality about it. Ideally it has an as-if-drummed shape and vitality, plus some degree of unusualness that draws me in.

The Effects
I have two main strategies for processing rhythms. One is to apply individual effects set to their “default” or “Initialize” state and build up sounds from there. The other strategy is to re-use effects racks I have already created and used in other projects. (A Rack, in DAW terminology, is two or more effects, instruments, or effects + instruments serially grouped together). I often go the effects racks route, because it allows me to learn and re-shape devices I’ve customized.
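The parenthetical definition of a rack—effects serially grouped, the output of one feeding the next—can be sketched as function composition (the one-sample “effects” here are hypothetical stand-ins, not real DAW devices):

```python
def rack(*effects):
    """Group effects serially: each effect maps a sample to a sample,
    and the output of one feeds the input of the next."""
    def process(sample):
        for fx in effects:
            sample = fx(sample)
        return sample
    return process

# Hypothetical one-sample "effects" just to show the chaining:
halve = lambda s: s * 0.5               # a crude gain stage
clip = lambda s: max(-1.0, min(1.0, s)) # a hard clipper

drum_bus = rack(halve, clip)
assert drum_bus(3.0) == 1.0   # 3.0 is halved to 1.5, then clipped to 1.0
```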

I might begin with a delay-based rack that adds syncopation to the rhythm. I’ll scroll through a few dozen racks (named by device and numbered) and audition them until I hear something with potential. What’s a sound with potential? Sometimes it’s a sound subtly more attractive than its peers; other times it clearly has a bit of magic that I notice. Having chosen a rack, I make some adjustments to it to hear if I can quickly amplify (or tame) what it’s doing. A workflow rule of thumb I follow here is: only move on to a second effect or effect rack (or a second effect in a rack) once you’re enjoying the sound of the current one. This brings to mind some of the most useful advice about composing, from Harold Budd. “The way I work is that I focus entirely on a small thing and try to milk that for all it’s worth, to find everything in it that makes musical sense.”

Next I add a saturation- or distortion-based rack to my signal chain. These effects add texture, grit, and harmonics to sounds. Once I’m hearing a texture I like, I dial the effect’s amount down to the point where it’s felt as much as heard. Another rule of thumb here: try the opposite of what would sound obvious. (Obvious doesn’t leave room for the listener.) At this point my rhythm is sounding different from where I began. Things are getting interesting!
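Saturation dialed down until it’s felt more than heard is, in signal terms, a soft clipper blended quietly with the dry signal. A minimal Python sketch (tanh is one common saturation curve; the drive and mix values are hypothetical):

```python
import math

def saturate(sample, drive=4.0, mix=0.2):
    """Soft-clip (tanh) saturation with a wet/dry mix. A low `mix` keeps
    the added harmonics felt more than heard."""
    wet = math.tanh(drive * sample) / math.tanh(drive)  # normalized so ±1 maps to ±1
    return (1.0 - mix) * sample + mix * wet

# Full-scale samples pass through untouched...
assert abs(saturate(1.0) - 1.0) < 1e-9
# ...while quieter material is gently pushed up, adding harmonics.
assert 0.1 < saturate(0.1) < 0.2
```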

Building On Interestingness
To build on the rhythm further, I sometimes explore more delay- or granular-based racks. The Jamaican dub masters King Tubby and Lee “Scratch” Perry realized over fifty years ago that delays were tools of rhythmic alchemy. In their hands, delays could transform a 1-2-3-4 rhythm into syncopated African rhythmscapes by multiplying shards of brief drum sounds into triplet cascades of echo-pulses. Whatever I add to the rhythm at this point, I hope, is subtly transformative. One of dub’s long-resonating lessons is that the more abstract, murky, and unpredictable a beat becomes, the more we want to listen to it. The most exciting sessions for a producer are those that leverage unpredictability–where what began as macro obvious becomes micro interesting.
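The dub move—hits multiplying into decaying, off-grid echoes—can be sketched over a step pattern (a simplified model for illustration, not how the dub masters patched their hardware):

```python
def dub_delay(dry, delay_steps, feedback=0.5, mix=0.5):
    """Feedback delay over a step pattern: each hit re-echoes every
    `delay_steps` steps at a decaying level, landing between the
    original hits and syncopating the grid."""
    out = [0.0] * len(dry)
    for i, level in enumerate(dry):
        out[i] += level
        echo, j = level * feedback * mix, i + delay_steps
        while j < len(out) and echo > 0.01:
            out[j] += echo
            echo *= feedback
            j += delay_steps
    return out

# A four-on-the-floor kick on a 16-step grid...
four = [1.0 if i % 4 == 0 else 0.0 for i in range(16)]
wet = dub_delay(four, delay_steps=3)  # a dotted-eighth-style delay time
# ...now carries decaying echoes on off-grid steps like 3 and 7.
assert wet[3] == 0.25 and wet[7] == 0.25
```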

With the various effects chains all processing simultaneously, I make adjustments to their settings. I do this to hear what various settings are actually doing and also to fine-tune certain effects I’ve begun noticing within the effects soundscape. Until I automate a parameter, all of my adjustments are incremental and cumulative—turning a single knob to hear how that changes the sound. The knob turning is ad hoc yet cautious, as I try to gauge what’s happening without fully understanding it. When a texture is unclear, I turn off an effect entirely to reset my attention onto what it’s doing.

Now we’re getting somewhere: certain percussion sounds (or their echoes) begin popping out of the mix, either intriguingly or annoyingly. There might be a hi-hat that’s too bright or strident, or a kick that could be muffled more. Adjust, adjust, adjust. Fixing such problems is simultaneously a task to get done and a portal to an ever more tuned-in listening. I tend to make sounds darker rather than brighter, but every project presents different sound problems in need of different kinds of shaping solutions.

Effects processing introduces me to sounds I didn’t know I liked until I heard them. Such discoveries are a substantial chunk of learning production as a species of composing: you begin remembering the moves that led you to sounds you like. This remembering is visceral, but can also be very specific. At random times during the day when I’m far from a computer I find myself thinking through possibilities I’ve yet to explore: What would happen if I swapped this effect for that one? Could I have begun with a much simpler pattern? It’s at the intersection of our practice and memories of it that we build and internalize a repertoire of production gestures to re-use and re-configure down the road.

After adjusting the settings of each effect rack I re-save the racks. Some of my racks now have between 10 and 30 iterations, and I add either numbers or letters to the rack names (e.g. “brett delay 29a”) to distinguish among them. Why do I save racks? One reason is to continually broaden my palette of processors and understanding of ways to use them. It’s a gift to future me who might not remember what a rack does (and labels like “brett 29a” certainly don’t help) but will notice that past me thought it sounded cool. In sum, creative work flows when it’s playful and free of the pressure to generate immediate results. Making a rhythm first and then processing it later is just one way to achieve this flow. Like a blog post, music production in a DAW is a record of things tried.