Mixing and Mastering: Why not try to turn something down?

For once, I’m not talking about the loudness wars when I tell you to turn something down for a change: this time, it’s about balancing things, whether in mixing or in mastering.

The General Conundrum

We’ve all been there. Let’s say you’re working on the mix for a rock/pop song with a simple enough setup: a lead vocalist, a guitar, a bass guitar, and a drum kit (maybe recorded as bass drum, snare drum and a stereo overhead pair). Listening to the mix, you find the bass drum barely audible. Of course, that can be fixed by simply pulling the fader up. However, that creates an imbalance between bass drum and snare drum level, quickly corrected by pulling up the snare drum fader. You may continue by adjusting the guitar, then making sure the vocals can still be heard (by pulling up the fader, of course), at which point you notice the bass guitar can no longer be heard (or felt) properly, and after correcting that, you’re exactly back at your starting point, only with all faders up by a few dB. What went wrong?

It often has to do with a psychoacoustic effect called masking: in essence, certain auditory events can render other events inaudible. They’re still there, and you can see them on a scope; you just can no longer hear them properly. This effect is in fact the basis for a lot of the voodoo in lossy compression algorithms such as MP3.

So how does that help? Quite simply, it lets us achieve a better (more balanced, more transparent) mix or master by turning things down instead of up. And incidentally, it will even help us achieve high loudness, because simply speaking, a transparent mix is much easier to make loud (if, for some odd reason, you decide to do just that).

The Basics

What masks what, and how to react to it, plays out in a number of effects. Without any claim to completeness or scientific accuracy, here are some rules of thumb:

  1. things in the same frequency band will affect each other – “the stronger wins, but both lose”.
  2. things in the next frequency band mask or “disturb” those in the one below it.
  3. sustained sounds are stronger than percussive ones.
  4. harsh sounds are stronger than soft ones.

Excursion: Frequency Bands

In the context of this article, I will use the somewhat ambiguous terms “low frequency” (LF), “low mids” (LMF), “mid frequency” (MF), “high mids” (HMF) and “high frequency” (HF). If we take the human hearing range, which is roughly ten octaves, we can divide it as follows (fundamental frequencies of musical instruments given as a guideline):

  • LF: 16-64Hz (the lowest register of bass instruments, e.g. the lowest key of a Bösendorfer Imperial at 16.35Hz, or a 5-string bass low B at 30.87Hz),
  • LMF: 64-256Hz (from the lowest non-pedal note of a tenor/bass trombone to about middle C);
    note that low-mid here is what most people refer to when talking about “bass”,
  • MF: 256Hz-1024Hz (the soprano register for instruments – and for some singers as well),
  • HMF: 1024-4096Hz (up to the highest key of a grand piano, piccolo flute and violin),
  • HF: 4096Hz-16384Hz and above (hardly any instrument’s fundamentals reach here).
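For reference, the band edges above can be captured in a few lines of code. A minimal sketch (the `band_of` helper and the half-open interval convention are my own, not any established standard):

```python
# Band names and octave boundaries as listed in the article.
# Assumption: half-open intervals, lower edge inclusive.
BANDS = [
    ("LF", 16.0, 64.0),
    ("LMF", 64.0, 256.0),
    ("MF", 256.0, 1024.0),
    ("HMF", 1024.0, 4096.0),
    ("HF", 4096.0, 16384.0),
]

def band_of(freq_hz: float) -> str:
    """Return the band name for a frequency in Hz."""
    for name, lo, hi in BANDS:
        if lo <= freq_hz < hi:
            return name
    return "out of range"

# A 5-string bass low B sits in LF, middle C already in MF:
print(band_of(30.87))   # LF
print(band_of(261.63))  # MF
```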

Some things are worth mentioning here:

  1. A lot of home stereo systems start to cut off at or around the lower end of the LMF range. This matters a lot, as LF energy tends to muddy up the audio played back on those systems, and the cutoff can also disturb the frequency balance.
  2. There’s not a lot of energy in the top octaves, from around 4kHz up to the top frequency of the medium (typically 22.05 or 24kHz).

…and back to the main storyline

(1) Things in the same frequency band

Great examples here include bass drum and bass guitar (mostly the lower LMF range, a little bit in LF), snare drum and guitar (upper LMF range), violins in a string quartet (MF), or just about everything except the cymbals in a death metal band (LMF).

One approach comes from classic Motown mixing techniques: give each instrument one (or two) characteristic frequencies, turn those up, and turn the rest down. In a contemporary DAW setup, this can easily be achieved using the standard channel strip’s 4-band EQ (maybe including its LP and/or HP filters). As an example, the bass drum wants a boost at, depending on the musical style (or on the actual bass drum), 90 (hip hop), 105-110 (rock), 130 (80s pop) or even 140-180 (DnB) Hertz. A second boost higher up is possible (although I often find it unnecessary, especially when working with a fast gate and a tailored bass drum mic such as the EV N/D 868): for that obnoxious contemporary clicky sound, place it at or above 3kHz.

Ok, but where’s our goal of turning things down? The most important step is to turn everything else down. Meaning you would high-cut above the top boost, low-cut somewhere between the bottom boost and 40Hz or so, and place a deep, wide parametric cut between the two boosts. For an added LF bonus, add just a hint of shelving gain below the bottom boost (but just a hint, or you’ll muddy up playback on small speakers).
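To visualize the overall shape this produces, here’s a rough sketch. It is not a real filter design: I model each parametric band as a Gaussian bell on a log-frequency axis, the frequencies loosely follow the rock bass drum example above, and the widths and dB values are illustrative assumptions of my own (the high/low cuts are omitted for brevity):

```python
import numpy as np

def bell(freqs, center_hz, gain_db, width_octaves):
    """Gaussian bell in log2-frequency space, peaking at gain_db."""
    x = np.log2(freqs / center_hz)
    return gain_db * np.exp(-0.5 * (x / width_octaves) ** 2)

# Logarithmically spaced frequency grid over the audible range.
freqs = np.logspace(np.log2(20), np.log2(20000), 500, base=2)

curve = (
    bell(freqs, 105.0, +5.0, 0.4)      # main boost around 105 Hz (rock)
    + bell(freqs, 3500.0, +4.0, 0.6)   # optional "click" boost above 3 kHz
    + bell(freqs, 600.0, -8.0, 1.2)    # deep, wide cut between the boosts
)

# Net gain near the boost is positive, in the in-between region negative:
i105 = int(np.argmin(np.abs(freqs - 105.0)))
i600 = int(np.argmin(np.abs(freqs - 600.0)))
print(round(float(curve[i105]), 1), round(float(curve[i600]), 1))
```

Plot `curve` against `freqs` and you get the characteristic “camel with a hole in the middle” shape of this EQ recipe.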

The second most important thing is that once you’ve done this, you might actually be able to turn something else down even further, e.g. the snare drum: thanks to a boost in the 2kHz range it’s now much snappier, so you can lower its fader and free up more space in the crowded 200Hz region.

We’ve gone through the bass drum as one example; the interwebs are rich with hints about which frequencies to pick here, or you can simply watch your source signal on an analyzer and find the best frequencies yourself.

However, you can take this one step further and work the same way in addition to, or independently of, this approach, namely when you find a conflict. Let’s say your track has a gently driven bass guitar and a clean, fingerpicked electric guitar, and you don’t get enough of the latter. One likely cause is that the first harmonics of the bass guitar (roughly the 80-200Hz range), here intensified by the drive, hide the fundamentals of the guitar. So instead of turning the guitar up, try cutting the bass guitar with a parametric EQ in the 140Hz range (not too surgical a cut). If you then find that the bass guitar doesn’t have enough sparkle, you can add the classic 80s bass guitar boost at around 1-2kHz, because there aren’t that many conflicting fundamentals there (unless you combined the bass guitar and guitar with a piccolo flute quartet).

(2) Things in the next frequency band

In (1), we found something that works best in mixing (because you need to adjust individual tracks’ levels, or even their EQ). Here, we have something that can often be applied during mastering.

We’ll start with the standard problem: everything is overpowering in the LF and lower LMF range, because the mixing engineer wanted enough punch in the bass domain, with the aforementioned problem of muddying things up, especially on smaller speakers. Simply turning down that frequency range will fix the mud problem, but leave the bass instruments lacking presence. The solution is typically to cut in the upper LMF/lower MF range, depending on the situation somewhere around 300-700Hz. As if by magic, nothing disappears, but the low frequencies appear louder.

The same thing works equally well with high mids masking mid frequencies, and in theory, of course, also with LMF masking the bottom end. In the latter case, however, turning down the lower mids has the disadvantage of robbing bass instruments of presence on smaller speakers.

One example of applying this was on the oscillator theory track IMPATT NDR. When working on the master, my first impression was that it was lacking a lot in the LMF department, and as usual, turning that range up just made it boomy. It turned out the culprit was several sounds with a lot of bite in the high mids (see also (4) below for more on this track). That, in turn, had led me to pull up the mid-frequency-centered instruments in the mix, making essentially everything between 200-odd Hz and about 3kHz very loud, thus masking the low mids almost completely. Simply applying a wide (Q of about 0.3) EQ dip of only -1.7dB suddenly brought back the punch of the bass drum and the bite of the bass synth.

Another example was during work on the Eclectic Blah album, when mastering engineer Thomas DiMuzio vastly improved the bass clarity and bass drum punch by simply cutting just a little at 270Hz (about 1.5dB!), and then boosting a bit at 3.3kHz.

(3) Sustained sounds

Typically relevant within the same frequency band, here we have the classic conflict between a bass guitar and a bass drum. No simple solution like the ones above presents itself, partly because we may still have the bass guitar/guitar conflict (see above) to deal with. So what to do?

The classic here is to use ducking – and for that, you need a compressor with a sidechain functionality.

Put a compressor on the bass guitar channel, and put it in sidechain mode. Then send the (mixed) bass drum signal into the sidechain. The result: when the bass drum hits, the bass guitar gets a little quieter, only to return to its original level once the bass drum has decayed. Typical combinations of ratio and threshold keep the gain reduction at 2-4dB for an almost inaudible effect, or a little over 6dB for a discernible but not irritating one. As for timing, very fast attacks work well, with release times between 20ms (for some pumping) and the maximum time between two bass drum hits.
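For the curious, the ducking behavior can be sketched offline in a few lines. This is a simplified model, not any specific plugin: the one-pole envelope follower and the `duck` helper are my own minimal stand-ins for the sidechain detector and gain computer:

```python
import numpy as np

def envelope(x, sr, attack_s, release_s):
    """One-pole attack/release envelope follower on |x|."""
    a_att = np.exp(-1.0 / (sr * attack_s))
    a_rel = np.exp(-1.0 / (sr * release_s))
    env = np.zeros_like(x)
    level = 0.0
    for i, v in enumerate(np.abs(x)):
        coef = a_att if v > level else a_rel
        level = coef * level + (1.0 - coef) * v
        env[i] = level
    return env

def duck(bass, kick, sr, threshold=0.1, ratio=4.0,
         attack_s=0.001, release_s=0.08):
    """Reduce bass gain while the kick's envelope exceeds threshold."""
    env = envelope(kick, sr, attack_s, release_s)
    over_db = 20 * np.log10(np.maximum(env, 1e-9) / threshold)
    reduction_db = np.where(over_db > 0, over_db * (1 - 1 / ratio), 0.0)
    gain = 10 ** (-reduction_db / 20)
    return bass * gain

sr = 48000
t = np.arange(sr) / sr
bass = 0.5 * np.sin(2 * np.pi * 55 * t)                      # sustained low A
kick = np.where(t < 0.05, np.sin(2 * np.pi * 60 * t), 0.0)   # short kick hit

# The bass is attenuated during the kick and recovers afterwards.
ducked = duck(bass, kick, sr)
```

The 80ms release roughly matches the “some pumping” end of the timing range discussed above.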

Using the comfort of modern DAWs, I tend to apply this trick to a frequency-split bass guitar signal, i.e. the ducking only affects the LF and mid-LMF range (typically up to around 140-180Hz or so; as a rule of thumb, “where the BD boost we discussed in (1) stops”), either by using a multiband compressor (in one-band mode), or by splitting the bass guitar signal across two channels with symmetrical EQs (make sure to use so-called phase-linear EQs here, or the result will most probably be a serious notch filter around the crossover frequency!).
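Why insist on phase-linear EQs for the split? A quick numerical sketch shows the point: here I use an idealized brickwall FFT split (my stand-in for a linear-phase crossover, not a practical filter) and the two bands recombine to the original signal exactly, which is precisely what a minimum-phase EQ pair would fail to do around the crossover:

```python
import numpy as np

sr = 48000
crossover_hz = 160.0

# One second of noise as a broadband test signal.
rng = np.random.default_rng(0)
signal = rng.standard_normal(sr)

# Zero-phase brickwall split at the crossover frequency.
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), 1 / sr)

low_band = np.fft.irfft(np.where(freqs < crossover_hz, spectrum, 0),
                        len(signal))
high_band = np.fft.irfft(np.where(freqs >= crossover_hz, spectrum, 0),
                         len(signal))

# The bands sum back to the original (up to float rounding):
print(np.allclose(low_band + high_band, signal))  # True
```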

(4) Harsh Sounds

Instead of attempting a scientific definition, I’ll just state “you know what I mean by harsh”: essentially digitally synthesized synth voices or (even worse) snare drum/percussion sounds. Typically the harshness happens in that sensitive upper-mids range, and a first choice is to apply some EQ in the mix. This often brings out more of the lower frequencies of that very instrument (e.g. in the LMF range), and with that, you can turn down the overall level of the track, resulting in a more present but less obnoxious effect.

A problem arises if you do want to keep more of that harshness. In that case, a multiband compressor (again using only one band, covering the harsh frequency range) with fast attack and release and a relatively high ratio can help. Incidentally, applying this will add distortion, which ends up in the HF band, where we usually don’t have a problem anyway.

This was the second approach I used when working on the aforementioned IMPATT NDR. Here, it really triggered a chain reaction: applying a gentle EQ to a very harsh synthesized snare drum sample, I was able to make a lot of room in the high mids, in turn allowing me to turn down the harmonics-rich bass synth, which brought out the otherwise well-hidden bass drum so well that I even had to bring it down as well. The end result: more bass drum and, at the same time, more headroom!

Summary

Generally, and especially if you’re still building up your experience in audio engineering, trying to find a way to bring something up by turning something else down can go a long way toward a transparent and punchy end result. This is even more true in the mastering stage, where the degrees of freedom (what you can do without affecting too much else) are quite limited.

In the end, this approach works best on material based on the sonic expectation that everything clearly stands out in the track, which is typical of almost all kinds of pop, rock, dance, jazz and even classical chamber music. If, on the other hand, the goal is for nothing to stand out (as in some kinds of ambient music), the approach needs to be exactly the opposite, as I found out while working on Schumann Resonance, where after a long mixing run all faders ended up between -16 and -22dB.

And how does that help me with loudness?

Ok, if you really want to do that:

First of all, applying shitloads of compression makes your master more crowded and unclear. In consequence, you’d best start with something as transparent as possible, so you can hit it hard without ruining it completely.

Another important fact, as we’ve seen above (mainly in (2)): we typically turn something down instead of turning the LF/LMF range up. Why is this so important? That range simply takes up most of the energy in the audio signal, so by avoiding additional gain there, we can turn the whole thing up even more!

So much for today,

yours,

Rainer

 
