This is a post in response to Steve Lawson’s blog post “Expanding Audio Orthodoxy – Recording, Mixing, Mastering”. As such, while I do quote the central statements from Steve’s post, it wouldn’t make much sense to read this post before you have digested Steve’s.
And why am I writing this? Simply because I want to challenge Steve’s line of argument and the conclusion at which he arrives. That is not to say that the mode d’emploi he has adopted (and describes) won’t work or is less valid – but rather that I believe it works for reasons other than the ones he gives.
As an introductory statement, I will assume throughout this article that affordable computer-based software implementations of audio processors (e.g. compressors, EQs) are (at least) on par with their analogue hardware counterparts. Many will challenge this statement, but I will make the assumption anyway, simply because without it Steve’s line of argument doesn’t hold at all, at least not in the way he describes his process.
“The received wisdom of how and why things are done is, it seems to me, based on a resource equation that exists in a paid studio environment”.
In greater detail: We mix at a mixing facility and then send the resulting 2-track off to a dedicated mastering facility because a resource-optimized mixing studio can’t have the gear required to do both a complex mixing job and, at the same time, a high-quality mastering job.
Otherwise, (and this is what I’d like to call Steve’s Conclusion):
We would simply do mixing and mastering together in one and the same process step at the same facility with the same team, especially if time is not an issue.
The Consequence in the DAW World: Steve’s Process
Based on that conclusion, it becomes clear what Steve does in his own mixing/mastering process: as his DAW is powerful enough to do both at the same time, both process steps are merged into one. That is, mixing and mastering are done in a standard DAW project (in his case, using Reaper), which is essentially a mixing setup with the mastering chain on the 2bus already in place.
The advantage here is that mixing and mastering can be done interactively: if, when applying that mastering compressor at the end, it turns out that the mix levels need to be changed slightly, this doesn’t require an additional mixing run (at another facility), getting the result into the mastering chain and starting all over – an iterative process is possible.
Moreover, it’s already possible to print fully processed masters at the mixing stage, thus allowing for easy distribution (e.g. to artists involved in the process, who are not present during the mixing process) and assessment (e.g. how mix changes turn out in different listening environments) – and all that with a truly non-destructive, total-recall editing environment.
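To make the merged process concrete, here is a toy sketch in Python (purely illustrative – Steve works in Reaper, and the function names here are my own invention, not anything from his post): the mastering chain sits directly on the 2-bus, so nudging a mix fader and printing a new master happen in the same render pass.

```python
def hard_limit(x, ceiling=0.9):
    """Very crude stand-in for a mastering limiter on the 2-bus."""
    return max(-ceiling, min(ceiling, x))

def render(tracks, faders, master_chain):
    """Sum the tracks through their faders, then run the summed signal
    through the mastering chain -- all in one pass, sample by sample."""
    n = max(len(t) for t in tracks)
    out = []
    for i in range(n):
        s = sum(f * (t[i] if i < len(t) else 0.0)
                for t, f in zip(tracks, faders))
        for proc in master_chain:   # mastering happens in the same pass
            s = proc(s)
        out.append(s)
    return out

tracks = [[0.5, 0.7, 0.2], [0.4, 0.6, 0.1]]
master = render(tracks, faders=[1.0, 0.8], master_chain=[hard_limit])
# Nudge one fader and re-render: no separate mastering hand-off needed.
master_v2 = render(tracks, faders=[1.0, 0.7], master_chain=[hard_limit])
```

The point is the shape of the workflow, not the DSP: changing `faders` and calling `render` again replaces what would otherwise be a round trip to a separate mastering facility.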
A Look at the Process Landscape
If we have a look at how the audio engineering tasks in a record production form a process chain on a very superficial level, we arrive at something like this:
This is similar to the right branch of a V-model cycle (the implementation/integration one), with clearly defined signoffs between the layers of this model.
In Steve’s world, the process chain changes to this:
At first sight, we see that we lose one process step and one signoff. However, that’s not the only change: while before, the process steps were all linear (in on the left, out on the right), the Mixing/Mastering step is now an iterative one: our V-model has turned into an iterative, spiral-like model.
The reason iterative models are usually adopted is to save time, especially when confronted with rapidly changing requirements. However, that doesn’t apply here, simply because the requirements don’t change. In fact, they’re always the same: make a great-sounding album.
My challenge rests on a number of reasons why a dedicated mastering step is superior to Steve’s approach – so much so, in fact, that it outweighs the obvious advantage of interactivity in Steve’s process. These are reasons for using a dedicated mastering facility/engineer other than the ones Steve gives.
Let an Expert do the Job
Mastering is a very dedicated skill and art. To quote Steve, “the task is one of polishing and refining, not undoing and deleting”. It stands to reason that an expert at the job, with countless hours under his belt plus the specific talent, will do a better job than a multi-talent (however talented).
Another Set of Ears
Already during mixing, it can be beneficial to have an additional set of ears available – one that hasn’t, unlike the artist, been working on the material for a long time. This applies even more to the mastering stage:
Mastering comprises, in addition to the most-often-cited tasks of applying compression and EQ (and maybe some other voodoo so it sounds good on every system), a number of other seemingly small but no less important jobs – two that come to mind are track order and track spacing/fades. Especially for those two, the mindset of someone completely new to the project content at hand (but, on the other hand, an expert in those topics) is very beneficial to the end result.
It’s still about the Tools
Even with the multi-talented DAWs of today, there is still a reason to use a special tool for mastering, as opposed to the standard DAW. This has mainly to do with workflow optimization, and with how DAWs tend to become rather cumbersome when used to work on a complete album with multiple, possibly vastly different tracks.
You might have hardware synthesizers connected for the mixdown – maybe a large number of them – which, in the interactive mix approach, might get in each other’s way input-wise. Moving whole songs around easily and fading one into the other is not something a standard DAW excels at (unless you play tricks, which in turn make your project unnecessarily complex). All in all, even if the computer can do it, it quickly becomes cumbersome for the engineer. And we don’t want the engineer distracted from his job – which is mixing, or mastering, or maybe both.
Why Steve’s approach can still work
You can think of situations where the expanded audio orthodoxy can still work, but these are somewhat special:
1. Relatively simple DAW projects,
2. An engineer who is both a mixing and mastering expert,
3. The requirement for a very quickly completed master,
4. The engineer ideally also suffering from borderline syndrome,
5. Cost considerations.
Item 3 seems especially interesting to me, because one of Steve’s assumptions was that the new model works because time is not an issue – yet this analysis leads to the result that it works especially well if time is an issue.
Another interesting one is item 5, mainly because up to now we have been assuming that the artist/main stakeholder is, as in Steve’s case, doing the mix and may choose to outsource mastering to a third party (or not). Not outsourcing is cheaper, but only if the artist decides not to (or forgets to) bill the time spent on mastering to the project business case. Your time, your spare time, is not free, folks, just because you don’t have to pay for it! Essentially, it either eats away your spare time or forces you to decline paying engagements.
Essentially, it seems to me that Steve’s approach works, but for reasons different from those he gave. While I will still rely on an additional mastering engineer whenever feasible, I have also been guilty of doing such jobs myself in the past for my own projects (with varying results, but luckily with a steady increase in quality) – and the reason was cost considerations, not process optimization.
The Expanded Audio Orthodoxy can work really well – but the dedicated mastering engineer/process step is still a better choice. This is my belief.
Steve was kind enough to post a lengthy reply to this article on his own blog, mainly pointing out some misconceptions on my part. So without further ado, here’s my reply:
Thanks for that lengthy reply – it did in fact clear up some misunderstandings on my part regarding the message of your post. As you correctly say, we’re essentially stating the same thing, it seems, but arrive at different conclusions for our individual use cases due to differing ancillary conditions.
Going through the paragraphs in your reply:
1. Working with the Mastering Engineer
Of course, the mastering engineer works differently from you with regard to the time invested in the job: the majority of engineers I have heard of offer fixed prices for at least most of their work, as opposed to per-hour billing. That means that for the mastering engineer to make a decent profit, his goal is to get the job done as quickly as possible, without compromising quality so much that it hurts his proven track record.
So essentially, you are offered (as you state, and I’ve seen more or less the same) one mastering and one revision run at a flat fee, which means if after the first revision you haven’t arrived where you wanted to be, it will mean additional cost, or compromises in quality. Is there a way around this?
In my experience, a requirements-oriented way of working can be helpful (at least if the mastering site agrees to it and is not a dick about it): tell them, in as much detail as possible without overspecifying, what you need. That way, if the first submission from the engineer is not compliant with what you specified, you don’t need to pay for additional revisions – the contracted effort hasn’t been completed, so the next revision is on him, not you. This is, however, a peculiar approach to working (at least for most artists), and may be “not for everyone, not for you”.
2. Is the dedicated Engineer really the better Engineer?
One assumption I made in rough generalization (and one I made, I have to admit, without listening in depth to any of your recent releases) is that the specialized mastering engineer is “better” (meaning more skilled and efficient in mastering) than the artist. And this, as you go on to explain, is not the case for you.
If that is the case, then essentially a large part of my line of argument falls apart. And if you have the skills, then the other relevant factor (dedicated equipment) really isn’t a factor anymore, because you already have the computer to run the DAW, and the required software taken together will cost less than a single piece of dedicated analogue mastering hardware. The only item worth looking at might be the listening situation (speakers, room treatment), but if you’re an audio enthusiast, chances are good you have that covered, too.
3. Project-specific considerations
Another part of what I stated is that some of my line of argument becomes moot the moment you limit yourself to very specific projects, like the one you described in your article. More specifically, this affects the entire “specific complex projects can’t be done that way” point and a large part of “mastering editing doesn’t work so well in a typical multitracking DAW”.
The hardware synth topic was brought in as an example; you could also think of huge software synths, or track setups with large numbers of tracks that vary vastly from song to song. But none of that applies to you. In other words: if the whole album is relatively similar in the nature of its DAW tracks, as in your case, then my line of argument is simply not applicable.
4. Project Scheduling
What you added as an additional point at the end of your reply is also an important observation: every signoff/interface in the workflow takes time. This can be optimized a lot, but only so far, because you have to adapt to other people’s schedules. In fact, the only situation where you’d actually be quicker with the “distributed” approach would be if, say, you finished mixing, went on tour, and on your return already had the release-ready audio. But life typically doesn’t work like that.
5. Ears with Hands are better than Ears?
This may really be the only point I can still maintain, simply because what we know about creative processes suggests that the additional “ears with hands” are more valuable than just the ears. Why? In a draft-and-revisions approach, you typically tend to address review comments by adapting the approach you initially chose, rather than starting over from scratch. With the additional hands, you have the chance to get the initial master from someone who is not you, and then iterate from there – which, I must add, needn’t always be a good thing, but it sure can be.
All in all, it becomes very obvious that my line of argument applies to your specific business case only in a very limited way, and as such, your conclusions differ from mine. Thanks again for the valuable input.
6. Annex: My Own Special Case
Note that in the past, I have actually worked mostly with self-mastering (with results of varying quality – some of my masters, like “Neinnein auf dem kleinen Weg”, sound so bad to my ears that I really need to rework them before making them available online). In fact, the only two albums where I worked with external support are the two #secretalbum releases: “Verschluckbare Kleinteile” had a mastering engineer as a review consultant, while “Rückwärtsfließpreßverfahren”, to be released this Thursday, really has a dedicated mastering engineer, who did a better job than I had done. “Verschluckbare Kleinteile” is also an example where the integrated workflow wouldn’t have worked (at least not with my hardware), due to the large number of hardware synths, hardware effects and big sample-based software synths.
When doing the mastering myself, my approach is a very minimalist touch with regard to processing – to my ears, “punch” is already added on a per-track or per-sub basis, and long-term level control works more convincingly for me with riding faders (also due to often “odd” material). Even for limiting, I have recently adopted Bob Katz’ approach of microscopic fader rides. Thus, my mastering chain often boils down to a parametric constant-group-delay EQ, followed by dither/noise shaping, only in some cases adding a brickwall limiter and/or a slow compressor. The things that take up workload are track levelling, spacing and fading.
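As a side note, the dither step at the end of that chain, in its plainest textbook form (TPDF dither before word-length reduction, without the noise shaping), can be sketched like this – a generic recipe with a made-up function name, not the actual plugin I use:

```python
import random

def tpdf_dither_quantize(samples, bits=16, seed=0):
    """Quantize float samples in [-1.0, 1.0) to signed `bits`-bit integers,
    adding triangular-PDF (TPDF) dither of +/-1 LSB peak before rounding."""
    rng = random.Random(seed)
    scale = 2 ** (bits - 1)            # 16 bit -> 32768 steps per polarity
    out = []
    for s in samples:
        # Sum of two uniform [-0.5, 0.5) values gives a triangular PDF
        # spanning one LSB either side of the sample value.
        d = (rng.random() - 0.5) + (rng.random() - 0.5)
        q = round(s * scale + d)
        out.append(max(-scale, min(scale - 1, q)))  # clamp to the int range
    return out

q16 = tpdf_dither_quantize([0.0, 0.5, -0.25])
```

The noise-shaped variants differ only in feeding the quantization error back through a filter before the next sample; the TPDF part stays the same.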