Synthesizers: About those synthesis concepts

I guess anyone who has ever bought, sought to buy or even looked at synthesizers at any time since the early 80s will understand the reason for this article:

You want a synthesizer to generate sounds electronically – either to imitate other instruments (which, in turn, may very well be synthesizers themselves), or to create new sounds. To make those sounds, a synthesizer uses a synthesis method. Think of it as the analogue of the string/fretboard/body combination on a guitar, or the reed/mouthpiece/body/holes combination on a clarinet or saxophone. The problem is: there is a whole zoo of synthesis methods, and unlike an acoustic instrument – where simply looking at it and playing it immediately gives you an idea of how it works – in a synthesizer the method is neatly hidden away behind the front panel.

So, which methods are there, which ones do we really need, and how do we categorize them? That’s what this post is for – taking a view somewhere in-between that of a musician and that of a signal theory expert.

Note that I do at times lean heavily on my explanations in my post “Is there a perfect synthesizer?” – so if you haven’t done so already, why not check that out as well?

The Basics: Why are there different methods?  And which ones do we really need?

If we look at it from the user’s point of view, it goes like this: we want a specific sound (meaning: an acoustic waveform), and we want it in a way that lets us program that sound properly. Of course, the playability of the sound also plays a role. Then there are different theoretical concepts for getting there, sometimes different concepts for different kinds of sounds (say, percussion as opposed to strings, or bleeps). And then there is the technological implementation.

To get a first overview, I went to the website of a medium-sized music store, looked at the synthesizers they carry, and noted what they mention regarding synthesis methods. The result:

  • analog, virtual analog, AWM2, wave sequencing, FM, EDS, “same as synth xxx”, SGX-1, EP-1, CX-3, HD-1, AL-1, MS20EX, MOD-7, STR-1, Poly6EX, SRX, ARX, “from our latest synthesizers”, wavetable.

Ok, so we have “analog” and “virtual analog” (which is not a method, but a technology), we have two references to other products by the same manufacturer, we have a big collection of letters and digits, and we have two entries with “wave” in their names.

All in all, that’s 20 methods, covering only a small part of the currently available synths. So, essentially: who is supposed to understand all this?

An Attempt at Taxonomy

What I’m going to try next is to find a proper approach for classification – one that will allow us to put all of those names from before (most of which are just some industry buzzwords that will be forgotten before long, anyway) in a system that allows us to compare them somehow. And for that, we’ll have different criteria…

How we generate the partials: Additive and Subtractive

Ok, for lack of a better word, I’m gonna call this criterion the “sign” of the method under discussion.

Here, we’re talking about the way we create the partials – the different frequencies in the sound we want. And there are two ways to go: either take a lot of partials and then take some away, or start with one and then add others. Naturally, the “taking away” approach is called “subtractive” and the other one “additive”.

As a rule of thumb, everything that has a filter in it is, by definition, subtractive, because that’s exactly what a filter does – subtract. Having said that, the vast majority of synthesizers which are additive in nature do still have filters.
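The two “signs” can be sketched in a few lines of Python. This is a minimal illustration, not any product’s actual implementation; the sample rate and the function names are my own assumptions:

```python
import math

SR = 44100  # sample rate in Hz; an assumption for this sketch


def additive(freq, partials, dur=0.01):
    """Additive: start from silence and sum sine partials (1/n amplitudes)."""
    n = int(SR * dur)
    return [sum(math.sin(2 * math.pi * freq * k * t / SR) / k
                for k in range(1, partials + 1))
            for t in range(n)]


def saw(freq, dur=0.01):
    """A naive sawtooth: harmonically rich raw material for subtraction."""
    n = int(SR * dur)
    return [2.0 * ((t * freq / SR) % 1.0) - 1.0 for t in range(n)]


def lowpass(signal, alpha=0.1):
    """Subtractive: a one-pole low-pass filter takes upper partials away."""
    out, y = [], 0.0
    for x in signal:
        y += alpha * (x - y)
        out.append(y)
    return out
```

Either way you end up with a spectrum you shaped – the difference is purely the direction you approached it from.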

General Architecture

Ok, this is somewhat tricky to describe. The vast majority of synthesis methods use a functional architecture which, in some way, has an oscillator section and an amplifier section, and (if it’s not purely additive) also a filter section. The only exceptions I’m aware of at the moment are physical modeling implementations.

However, it also helps to look at how those building blocks are assembled: while a lot of methods use the sequence oscillator-filter-amplifier, others arrange things differently – the prime example being FM/PM (as implemented by Yamaha). So we’ll use architecture as a classification criterion as well.
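To see why FM/PM doesn’t fit the oscillator-filter-amplifier mould, here is a two-operator phase modulation sketch in Python. Again, parameter names and values are my own assumptions for illustration, not Yamaha’s actual design:

```python
import math

SR = 44100  # sample rate in Hz; an assumption for this sketch


def pm_tone(freq, ratio=2.0, index=3.0, dur=0.01):
    """Two-operator phase modulation (what Yamaha marketed as 'FM'):
    the modulator's output is added to the carrier's *phase* --
    one oscillator feeds another, and no filter appears anywhere."""
    n = int(SR * dur)
    out = []
    for t in range(n):
        mod = math.sin(2 * math.pi * freq * ratio * t / SR)        # modulator
        out.append(math.sin(2 * math.pi * freq * t / SR + index * mod))  # carrier
    return out
```

The spectrum is shaped by `ratio` and `index` instead of a filter cutoff – which is precisely why this architecture needs its own slot in the classification.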

Technology: Analog, Digital, Whatever

Theorists normally don’t even get to this point in their discussions: using various mathematical concepts, they can formally prove that everything that can be done with an analog circuit can also be done with a computer program or digital logic, and vice versa.

However, it still makes sense to consider this, and mainly for two reasons:

Reason one is ideology. If you’ve ever talked to a synth head, you will no doubt have heard the “nothing can sound like a true analog synth” rant. Interestingly, the digital people are considerably less intolerant in their views – just think of the analog synth people as the vegans of the synthesizer world.

Reason two, however, has to do with the feasibility of actually implementing a specific method with a given technology: the reason it took so long to build a proper FM/PM synth was that the required oscillator stability was not achievable with analog circuitry – at least not at a price that made sense.

So, while technology certainly tells us something about a synthesizer, we don’t need it when looking at things from the synthesis method point of view.

Oscillator concept

Now this gets interesting: as we know, the oscillators are but one building block in a synthesis architecture. However, leaving all the details aside, a lot of synthesis methods which are sold under different names in fact differ only at the oscillator level. Need some examples? A typical sampler has the same architecture as your run-of-the-mill analog subtractive synth; only the saw/triangle/square oscillators are replaced by ones which play back samples. And if that’s the case for samplers, it’s also the case for all sample players, regardless of what the thing is called. The same goes for wavetable stuff, granular stuff, and a lot of other things as well, for that matter.
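The “only the oscillator differs” point can be made concrete with a sketch: one shared osc-filter-amp chain, fed by interchangeable oscillator functions. All names and constants here are my own illustrative assumptions:

```python
import math

SR = 44100  # sample rate in Hz; an assumption for this sketch


def voice(osc, n, alpha=0.2):
    """One shared osc -> filter -> amp chain; only `osc` is swapped."""
    y, out = 0.0, []
    for t in range(n):
        x = osc(t)                # oscillator section
        y += alpha * (x - y)      # filter section (one-pole low-pass)
        env = 1.0 - t / n         # amplifier section (linear decay envelope)
        out.append(env * y)
    return out


def saw_osc(t, freq=440.0):
    """'Basic' analog-style oscillator."""
    return 2.0 * ((t * freq / SR) % 1.0) - 1.0


# A stored single-cycle 'sample'; looping it is what a sample player does.
SAMPLE = [math.sin(2 * math.pi * k / 64) for k in range(64)]


def sample_osc(t):
    return SAMPLE[t % len(SAMPLE)]
```

`voice(saw_osc, n)` is the run-of-the-mill subtractive synth; `voice(sample_osc, n)` is the sampler – the rest of the chain is untouched.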

What about those modular things?

So which synthesis method does a modular synthesizer use? It would stand to reason that a modular synth could do any synthesis method, and – limitations as discussed in the technology section and practicality notwithstanding – that’s the case. This is especially true for software environments such as NI Reaktor, Max/MSP or Pure Data, where, if you feel so inclined, you really can go down to the most basic level when assembling your synthesizer.

Physical Modelling

Now here’s another tricky one: with a physical modelling approach, we not only lack a general architecture, we don’t even have an oscillator per se (or any of those other building blocks, like envelope generators, filters, LFOs etc.).

A physical modelling synthesizer essentially calculates a physical model of a thing that makes sound – or to stay in the terminology of the scientific simulation geeks, usually a behavioural model of a physical thing which makes sound.

And for that reason, the basic notion of building blocks being responsible for different parts of the sound (e.g. oscillator for pitch, amplifier for amplitude envelope etc.) is not applicable. After all, to take a physical model of a reed instrument as an example, the pitch is defined by a complex combination of breath, lips, mouthpiece and instrument body (and its configuration, i.e. which keys are open).

So in our classification, the physical modelling domain is somewhat hard to pin down, simply because everything depends on what is modelled; it has to be treated as a complex black box with a “player input in, sound out” configuration.
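A classic textbook example of this black-box character is Karplus-Strong string synthesis – a sketch follows. It is not tied to any product mentioned here, and the parameter values are my own assumptions:

```python
import random

SR = 44100  # sample rate in Hz; an assumption for this sketch


def pluck(freq, dur=0.05, damping=0.996):
    """Karplus-Strong: a noise-filled delay line models a plucked string.
    There is no separate oscillator, filter or envelope block -- pitch
    (the delay length) and decay (the averaging loss) emerge from the
    model itself."""
    n = int(SR * dur)
    delay = [random.uniform(-1.0, 1.0) for _ in range(int(SR / freq))]
    out = []
    for t in range(n):
        i = t % len(delay)
        j = (t + 1) % len(delay)
        out.append(delay[i])
        # averaging two neighbours = energy loss in the 'string'
        delay[i] = damping * 0.5 * (delay[i] + delay[j])
    return out
```

Note how “player input” (the initial noise burst, i.e. the pluck) goes in and sound comes out – none of our usual building blocks are individually addressable.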

So…what do we make of that?

Right now, we’ve found that to classify synthesis concepts, we have:

  • the “sign” (with the choices additive and subtractive),
  • the architecture (going osc->flt->amp, or osc->amp->flt, or osc->amp, but sometimes with one amp per oscillator for the latter two),
  • the oscillator type, with the important ones being sine, “basic” (saw, pulse etc.), PCM and wavetable (as a special case of PCM),

and there’s physical modeling which doesn’t fit anywhere.

All in all, that would give us 2x3x4+1=25 synthesis methods (regardless of whether they all make sense). In the introduction, we found that one store right now lists about 20 – a remarkably similar number.
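The count is just a cross product plus one outlier, which we can spell out (the category names are the ones from this article):

```python
from itertools import product

signs = ["additive", "subtractive"]
architectures = ["osc->flt->amp", "osc->amp->flt", "osc->amp"]
oscillators = ["sine", "basic", "PCM", "wavetable"]

# every sign/architecture/oscillator combination...
methods = [" / ".join(combo)
           for combo in product(signs, architectures, oscillators)]
# ...plus physical modeling, which doesn't fit the grid
methods.append("physical modeling")

print(len(methods))  # 2 * 3 * 4 + 1 = 25
```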

Many names for the same thing…

Essentially, that’s not a surprise. Wavetable and wave sequencing are, within the scope of this article, the same thing. A lot of the names stand for virtual analog (i.e. physical modeling of analog synths). Another big bunch is subtractive/osc-flt-amp/PCM.

However – and that seems to have been the case throughout the decades:

The vast majority is subtractive/osc-flt-amp, and either “basic” or PCM.

Now what should that tell us? Perhaps only that we should, once in a while, whip out one of those fancy additive synths that we might still have lying around…
