Recently I was working on a mix for a piece of music with eight parts: percussion, piano, vibraphone, bass, pad, and voices. The piece has two percussion parts, the first comprising a kick-snare-clap-hi-hat drum pattern, and the second a top loop part, which is a beat (extracted from another, earlier piece) consisting of mostly high frequencies. I was listening to the point in the piece where the top loop enters the mix, joining the main drum pattern, and was finding the loop a smidge too loud, so I turned it down. But, surprise surprise, now it was too soft, so I readjusted the volume by ever smaller increments to see if I could get the loop to blend just right with the drum pattern.
As I adjusted the volume and listened, I noticed that I was listening to the two parts in a new way: I was listening relationally, weighing one against the other, listening to them together, listening to them as a composite, foregrounding them against all of the other parts in the piece as if I had put the percussion in an aural spotlight. In listening relationally I heard each sound within each percussion part interacting and interfering with the other sounds. I noticed that when two or more drum sounds sound simultaneously, calls and responses emerge between them. For example, the hi-hat sound dovetails with part of the top loop because they share high frequencies, my ear drawn to their dialogue. While the respective volumes of these treble elements were about right, I heard other mix options for creating contrast, such as making one sound darker by rolling off its high end. Here was a lesson to apply to all the sounds I could notice within the percussion patterns: make timbre adjustments so that each sound in each part is articulated without ever obscuring any of the other parts.
What I’ve so far described is part of mixing a piece of music so that each of its elements can be heard optimally, yet never exclusively. Listening relationally leads me to adjust volumes and timbres, but it also spurs me to make quick arrangement decisions. At several spots in the piece, I began muting percussion hits to reduce the number of simultaneous co-hits, where sounds from the two percussion patterns play at the same time. Thus, I muted kick drum hits on downbeats (the most conventional place to put them) and reduced a cymbal that had played on every off-beat to just an occasional hit. Muting hits opened up space in the composite drum part and changed once more how I hear the texture. Now, in addition to a call-and-response quality, the two percussion patterns have a more intentional feel: I can hear them trying to be mindful of one another’s sounds, interlocking in a synergetic way. In other words, the sequences sound more human, like real musicians listening to one another in the moment.
After tinkering with the two percussion patterns for a while and assessing the results, I reintroduced the other six parts of the piece to hear how the percussion would interact with them. Now, with piano, vibraphone, bass, pad, and voices added in, another relational listening was necessary. The percussion parts talked well among themselves, but would they listen and respond to what the other parts were saying? Some of these parts needed nudging, either volume-wise or timbre-wise. For example, the lower vibraphone notes disappeared while the highest ones stuck out like shrill bells, so I boosted and reduced here and there (by drawing in automation) so that notes never vanished or grew annoying. The pad sound could be too wall-of-sound-ish, so I thinned out its lows and mids. The bass volume proved tricky, because in this piece I want to feel it more than hear it. But even as I finesse the vibraphone, pad, and bass parts so that they are all co-present in the mix, the voices must be front and center. This means that their blemishes are always on display. Voices can go from being just the right level to a tad too soft or loud in an instant, and the effects on them can easily verge into cloying territory (are they in a small cathedral or a gigantic cave?), so I spend time micro-adjusting dynamic contours and effects levels so that the voices sound emphatic yet natural.
At some point I began listening to the entire piece to hear the sum of all my changes to its individual parts. Finally I could hear the relationships among the sounds. The spaces opened up by the muted percussion allow the piano to ring on the downbeats, the boosted vibraphone low tones bring out their harmonies with the piano, and the pad is transparent enough to let the voices shine. Subjectively speaking, the mix sounds more coherent and has more synergy, but it’s hard to know for sure. I keep listening relationally, alternating my attention among the two percussion parts, the vibraphone, the bass, the piano, the pad, and the voices.
Can I hear what I need to hear when I need to hear it?
Are all the sounds cooperating?
I’m on the inside of the music, hearing it as if standing among musicians playing around me in a virtual room. But when I share the finished piece, will others hear what I hear?