The most striking difference between composing for acoustic instruments and virtual synthetic tones has to do with expectation. When I sit down to play piano, I have an excellent idea how it will sound ahead of time and how the instrument—whether it’s a real piano or a virtual one—will respond to my touch. The layout of the keyboard is also important: the piano’s territory is familiar and I have a repertoire of go-to moves, chord shapes, and hand positions upon which I can build music. With the piano, I know what to expect in terms of touch, sound, and layout. But when I sit down to play synthesized tones (triggered by a keyboard or a pad controller), my expectations are dashed because I have no idea what to expect before I begin exploring. Whether I’m listening to presets, triggering samples, or tweaking patches of my own, the central problem is that there’s no point at which the sound stops changing. I can take a sound and roll off some of its frequencies, truncate its attack or decay, filter or modulate or swap out its waveforms, and so on (and on and on), until the sound has become something else. With a few twists of a virtual knob or fader, a piano sound could become an oozing bass or a screeching bell. How did I get there? I sometimes think that in electronic music there’s no there there, because any sound can become any other.
The fact that synthesized tones have no endpoints challenges my sense that sounds should have a stable and clear identity. Should they? Where did I get this idea anyway? Maybe from playing percussion—from accumulating a body of knowledge of how these instruments respond to my touch. From the instruments’ potentials forever lying just beyond reach of my skills. From having watched my teachers play the drums and mallets better than I could/can—more musically, more consistently, with more control—and then chasing after that sound and technique. When I’m exploring synthesized tones, I’m going after a sound, but I’m not sure what or where the sound is that I’m looking for. Maybe I’ll recognize it when I hear it, but when I’m chasing a sound I don’t yet know, I’m not in control of the process. Instead, I’m open to the possibility that I could hear something that re-orients my notion of what is musical. If there’s a technique I’m chasing after, it might be a more refined sense for how different sounds can fit together.
Despite the ambiguities of the process, working with synthesized tones offers lessons.
First, sounds are fluid. The way a sound sounds now doesn’t prevent you from changing it into something else, so it’s useful to be perpetually open to changing your sound.
Second, suspend your expectations. Just because you expect to hear one thing doesn’t mean you won’t find something else that is more delightful. (If you do, go with that.)
Third, sounds don’t have to have a stable and clear identity. Embrace uncertainty. If you can’t describe why you like a sound, that’s okay—sound perception often works on subliminal levels. As the artist Robert Irwin says: “Intuition is about sensing facts before they materialize.”
Fourth, let the process guide you. Hearing an unexpected sound and then responding to it (e.g., with further tweaking) is a robust methodology for moving forward on a project. Nassim Taleb describes this methodology with a phrase I love: We shall be guided by what is lively.
Finally, the lessons of working with synthesized tones are an analogy for thinking effectively. Applying the concepts of fluidity, suspending expectations, embracing uncertainty, and following a process is a way to synthesize yourself anew as you try to figure something out.