Database: Laurel Halo On Keeping Your Own Sound At The Core

“For me it usually starts off with either coming up with a chord progression I haven’t quite heard before, a beat I haven’t heard before, a new type of sample. I’m always just trying out new things – Hour Logic is quite different from King Felix, which is quite different from the things I’ve done earlier. I think it’s always going to be like that, because I feel that if you can keep your [own] sound at the core, it doesn’t really matter, ultimately, what form it takes. With everybody being so connected and having access to everything, everybody is in this sort of shuffle mode – and if you can make music that sounds like you, people will be able to hear that, no matter what the context.”

Laurel Halo


Playing An Instrument, Playing A DAW, Omnimusicality

I’ve been thinking about how playing an acoustic musical instrument can be a model for producing music in DAW software. As I hear it, producers who approach their DAWs as instruments develop novel and idiosyncratic ways of creatively playing the technology rather than merely using it. This post compares the experiences of playing an instrument with playing a DAW to explain the unique, omnimusical demands DAWs make on musicians.

Playing An Acoustic Musical Instrument
Musicians get immediate feedback from their instruments. As a percussionist, I get responses from my instruments as I play them: striking creates sound vibrations. Gong playing offers a simple example. When I play a gong there are many variables besides the gong itself that shape its sound production. There’s the mallet I’m using (soft or hard, yarn-wrapped or rubber), the contact point where I’m striking (center, off-center, or at the outside edge), my stroke type (single hit or continuous roll), and the intensity of the strokes (soft or forceful). Changing any, or several, of these variables changes the sound the gong makes. If I want to shift from, say, a thin/brighter sound to a thicker/darker sound, I change how and where I strike the instrument. My gestures connect to sound-moods—or at least it feels that way, such is the close connection I feel with the instrument as I’m playing it. From a player’s perspective, a gong can conjure a drone, or a coming storm, depending on how it’s approached, and depending on your imagination.

The most important aspect of playing an acoustic instrument is the tight coupling between touch and time. At any moment musicians are their own conductors: they can change course on a whim–accelerating or slowing the tempo, playing louder or softer, choosing different pitches, or conjuring different timbres using unorthodox techniques–and their instruments respond instantly. Consider another example. When you watch a violinist playing a melody, notice when they dig in with the bow and apply more pressure and vibrato to the strings with their fingers and the melody sings, as if the instrument itself is doing the emoting. When we play an acoustic instrument we’re in full control and feel as one with our gongs or violins, but there’s no hiding: you sound as you can make sound, you sound as you are.

Playing A DAW
When I play DAW software, I have control over its sound generation but the process of making sounds and receiving feedback from my playing is greatly slowed down. As with playing a gong, there are many variables that can shape the DAW’s sound, but these variables are not immediately at hand the way they are with an acoustic instrument. Instead, they have to be set up one by one (and often laboriously, by mouse-clicking, selecting etc.) before I can play with them. For example, let’s say I want to work with a pad sound, a quintessentially electronic timbre halfway between strings and keys. I’ll begin with a pad that’s in the ballpark of what I want, knowing that at some point during the production process I might alter its sound by altering some of its parameters. Unlike a gong whose material presence mostly determines its sound and provides me with immediate feedback, my pad sound is not readymade, and certainly not stable. I’ve had to find or design the pad whilst keeping in mind that I might edit it later. In a DAW, one’s sounds are potentially always open-ended–like questions ready for answers.

This finding, designing, and manipulating takes time…time away from making music with the pad sound! (This is why producers sometimes separate sound design work from composing work–because each demands a different kind of focus.) The most important aspect of playing a DAW then, is not the tight coupling of touch and time, but rather trusting a process of interacting with the software–trusting that we may accidentally discover something interesting whilst playing with a sound, or what the producer Huerco S. calls “tinkering away, conducting experiments, & discovering artefacts deep deep below.” Producers’ discoveries most often emerge as a by-product of sound relationships they’ve put into motion to steer the music forward. The production goal is always, to quote Brian Eno, to ride on the dynamics of a musical system.

This brings us to the questions of what makes a DAW a unique musical instrument, and what’s involved in playing it. The DAW’s uniqueness comes from the fact that, unlike a gong or a violin, it offers more than a one-to-one, this-gesture-creates-that-sound relationship. Instead, the DAW offers the musician the possibility of a one-to-many relationship, where one gesture can create many sounds. The DAW then, is not a single, bounded instrument, but a thousand unbounded instruments always in flux. Playing a DAW involves crafting musical experiences both in the moment and over longer spans of time, from micro edits to macro organizing, from recording a part right now to shaping many parts into a piece over weeks or months of cumulative, touch-decoupled-from-time work.
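To make the one-to-many idea concrete, here is a minimal sketch of the kind of "macro" mapping many DAWs and soft synths offer, where a single gesture fans out to several parameters at once. The parameter names and ranges are hypothetical, chosen only for illustration.

```python
# Hypothetical sketch of a one-to-many "macro" mapping: one gesture
# (a single knob, scaled 0.0-1.0) drives several sound parameters at once.
def macro(gesture: float) -> dict:
    return {
        "filter_cutoff_hz": 200 + gesture * 7800,  # brighter as you push
        "reverb_mix": gesture * 0.6,               # wetter as you push
        "grain_density": 1 + int(gesture * 31),    # busier as you push
    }

# One quarter-turn of the knob changes three things at once.
print(macro(0.25))
```

A gong gives you one sound per stroke; here a single "stroke" reshapes timbre, space, and texture simultaneously, which is part of what makes the DAW feel like many instruments in flux.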

In sum, the experience of playing an acoustic instrument is a helpful model for how to approach a DAW, yet playing the software goes beyond the acoustic. Playing a DAW is improvising, composing, sound designing, arranging, orchestrating, recording, and engineering. This fact makes music production a unique omnimusical experience, by which I mean composing that engages sound from many angles and by many means. The omnimusical producer designs a soundscape, plays with multiple sounds simultaneously, shapes tones, timbres and rhythms, and builds tensegrity through feedbacking interactions of parameter controls, all in a quest to create a dynamic music that feels alive at many levels at once.

Curating The Week: Peter Doig On Artworks Taking Time To Resolve, Lo-fi Aesthetics, ChatGPT

• An interview with Peter Doig.

“Paintings have taken a lot of time to resolve, but I would keep them rather than abandon them—because I would think the elements were worth pursuing.”

“I don’t really like the term ‘magical realism.’ You know, the spaces that exist within the painting are really spaces that have to do with painting itself and what comes about by painting.”

• An article on lo-fi aesthetics in music.

“The appeal of equipment noise and why it is still included (or artificially added) to recordings is what [Nomi] Epstein calls ‘material fragility’–where ‘the object or instrument used in sound production is damaged in such a way that it can no longer successfully carry out its function as sound-maker’…Material fragility, whether recorded from fragile media or simulated, adds a melancholic subtext to recordings. It gives the impression that this may be the last time these sounds will be heard.”

• A book about ChatGPT co-written with ChatGPT.

“Essentially, GPT-4 arranges vast, unstructured arrays of human knowledge and expression into more connected and interoperable networks, thus amplifying humanity’s ability to compound its collective ideas and impact.” […]

“Principle 1: Treat GPT-4 like an undergrad research assistant, not an omniscient oracle.

Principle 2: Think of yourself as a director, not a carpenter.”

Database: Amon Tobin On Sampling

“[Sampling] was about capturing the energy of the recording like a photograph. If you look at sports photographs–someone in mid-air jumping. You can tell what happened before and what’s going to happen after, but in that frozen moment you have all the energy of both things encapsulated. And that’s more or less how I viewed sampling: it managed to trap the energy of something much bigger than its little components. Then when you recontextualize that and you put it amongst lots of other things that are pulling in lots of other directions you end up with a really dynamic and interesting sound. And what I found is that you can do that with smaller and smaller and smaller particles and they’ll still retain some energy of something before.”

Amon Tobin


Prepare Close At Hand Possibilities, Then Reap The Random

Recently I was building a track out of samples from one of my recordings. I was recycling bits of looped, pitched-down audio, delighting in defamiliarizing myself with music I knew well. Having found a few samples that got along, I reached the stage of being curious about how the audio converted to MIDI might sound (a topic I discuss in Ways Of Tonal Evolutions). I dragged the audio onto a MIDI track and waited for my DAW software to “translate” the samples.

On a whim, I opened Omnisphere, an instrument–like many of the instruments I use–that I intend to use often because of its depth, but forget to. Omnisphere’s sonic terrain is vast, so much so that instead of searching through new sounds I usually revisit the few that I’ve made or customized myself. On this occasion, I had no particular sound in mind, but was curious about what sonics my MIDI track might bring. This is the exciting aspect of working with MIDI derived from audio: lossy translation. The computer software’s wonky interpretation of fluid, sometimes amorphous audio in the form of discrete, sometimes erroneous data bits creates conditions for compelling results. Whether the data trigger pads, percussion, or bass doesn’t matter so much as our surprise from the results.
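The "lossy" part of the translation is easy to picture in miniature: audio pitch is continuous, while MIDI notes are discrete, so any bent, drifting, or in-between pitch gets forced onto the nearest equal-tempered note. This is only a sketch of the rounding step, not of how any particular DAW's audio-to-MIDI feature actually works.

```python
import math

def freq_to_midi(freq_hz: float) -> int:
    """Map a frequency to the nearest equal-tempered MIDI note (A4 = 440 Hz = note 69)."""
    return round(69 + 12 * math.log2(freq_hz / 440.0))

# A fluid pitch sliding between C4 (261.63 Hz) and C#4 (277.18 Hz)
# collapses onto just two discrete notes -- the erroneous "data bits"
# that make the translated MIDI surprising.
for f in (261.63, 268.0, 274.0, 277.18):
    print(f, "->", freq_to_midi(f))
```

Everything between the two notes is discarded in the conversion, which is exactly why the result can trigger a pad or bass patch in ways the original audio never implied.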

The first sound I clicked on was a walloping mono bass sound with echoing white noise mixed in. As the sound interpreted its MIDI, it leapt between low and high octaves (sounds that are monophonic, rather than polyphonic, interrupt one note’s sustain when the next note begins), creating an unpredictably rhythmic sound. It reminded me of something Tony Levin might have played on his Chapman Stick bass, which of course led me back to King Crimson’s enchanting “Nuages” that I used to listen to growing up. I found this bass part so unusual yet useful that I declared my search over and continued working on the track.
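The mono behavior behind that jumpy rhythm can be sketched in a few lines: on a monophonic patch there is only one voice, so each incoming note steals it, cutting the previous note's sustain short. The note events below are hypothetical, and this is a simplified model rather than how Omnisphere itself renders voices.

```python
# Simplified sketch of monophonic "note stealing": each (start, duration, pitch)
# event cuts off whatever note is still sounding when it begins.
def mono_render(notes):
    """notes: list of (start_time, duration, pitch) tuples sorted by start_time.
    Returns (pitch, actual_start, actual_end) for what a mono synth plays."""
    played = []
    for i, (start, dur, pitch) in enumerate(notes):
        end = start + dur
        if i + 1 < len(notes):
            end = min(end, notes[i + 1][0])  # the next note steals the voice
        played.append((pitch, start, end))
    return played

# A long low note overlapped by a high note: the high note truncates it,
# turning overlapping sustains into an unpredictable, leaping rhythm.
events = [(0.0, 2.0, 36), (0.5, 1.0, 60), (1.0, 2.0, 38)]
print(mono_render(events))
```

On a polyphonic patch the same MIDI would simply stack and blur; it's the voice-stealing that chops the translated data into something rhythmic.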

Later in the day it occurred to me that the fortune of re-encountering this bass sound was connected to my having made the sound in the first place, forgetting about it, then discovering it again. I had, in other words, prepared a possibility and reaped its random rewards. Experiences like this illustrate the value of tinkering on sounds that may have no immediate utility and saving them so they might be used down the road. Such tinkering or sound design gradually cultivates a garden of what Harold Budd called “close at hand” sounds that can be drawn upon–or in my case, stumbled upon. The power of close at hand sounds is that (1) they are pre-filtered by your former self and so will probably continue to sound halfway interesting today, and (2) they give you license not to search endlessly through sounds with little connection to your work, which keeps a workflow flowing. To conclude by way of an analogy: music production is a species of gardening, where sounds are sown and reaped in an ongoing creative cycle of exploration and exploitation.