Synthesizers – or more specifically, music synthesizers (and those are the only synths we’ll be discussing here) – have been around for a long time. They’ve also come a long way, from Elisha Gray’s invention of 1876 to today’s superpowered synth workstations like Korg’s latest Kronos flagship.
Especially since Moog’s Minimoog of 1970 gave us the first really usable synth, the endless stream of new synth releases has grown into the upper hundreds, if not thousands. And while today large music instrument retailers list around 200 products in their synth category, some experts believe that for specific applications, you still need one of those countless synths from days gone by.
Which leads to my main question: Why are there so many of them? Why isn’t there, with today’s advanced technology, a perfect synthesizer? Is that even possible?
Note: this text requires some basic knowledge of synthesizer technology – ADSR, oscillator, LFO etc. If you’re not familiar with those concepts, you’ll find some literature recommendations at the end of this text.
If we look both at today’s landscape of synthesizer products and at the years past, there’s an almost unmanageable variety of synths. They come quite literally in all sizes, with different synthesis approaches, different feature sets and, of course, different price tags. Still, to this day, I have not encountered a synthesizer that does it all – one that could replace all the synthesizers I own, or even all the synthesizers in existence. So why is this the case?
First, we might have a look at what this perfect synthesizer would have to be like – which requirements it needs to fulfil.
The first idea would be:
It needs to be able to generate every possible sound – both imitating existing instruments or sound effects, and creating new ones (1).
Now this isn’t hard to do at all. Starting in the 80s, most of the samplers of that era allowed you to use a CRT and a lightpen and draw any possible waveform on it – and then play back these sounds. A solution like that existed even for the Commodore 64 home computer. However, this is hardly intuitive in any way – or have you ever tried to draw the sound of a church bell hit by a rubber mallet on a screen? Or any sound, for that matter? We need something different.
It must allow you to program new sounds intuitively (2).
Now that’s a tricky one. And now, instead of just playing samples as above, we’re already deep in real synthesizer territory. It’s interesting in this context to look at two important steps in synthesizer evolution. First, Moog’s Minimoog (1970), using what’s called subtractive synthesis using analogue circuitry, and which may stand, in its general concept, as an example for the approach used by almost any of those devices. Our second example is Yamaha’s DX7 (1983), which used digital technology to do what Yamaha called frequency modulation (FM) synthesis (although, strictly speaking, it’s phase modulation ).
Subtractive Synthesis (Minimoog example)
This is in fact a very straightforward affair. From left to right, you get the oscillator section with 3 tuned oscillators and a noise source. These are mixed together and fed into the filter. After the filter comes the amplifier section and then the output. Both the filter and the amplifier have an envelope generator. One of the oscillators may be used to modulate the other oscillators, the filter or the amplifier section. No polyphony – only one note at a time.
If you have ever played with a Minimoog or a similar synth, you will agree that it is, in fact, rather intuitive. Three basic and separate aspects of the sound are shaped in the three main sections: oscillator, filter and amplifier. You set the general sound character with the waveform of the oscillators (from left to right: pure to shrill), and the pitch of each oscillator with its octave setting. Want some combination? Do a linear superposition of two (or three) oscillators. Then the filter section: turning cutoff to the left makes the sound more dull, to the right more bright. And finally the amplifier section, and with it the all-important envelope of any synthesizer. With that envelope, you adjust how the sound develops over time: gently coming in with a long attack, percussive behaviour with low sustain and short decay, and so on. Even the more advanced stuff – like doing some “mwah” with the filter envelope – is really easy to grasp if you spend some time with one of these.
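For those who like to see the signal flow in code: the whole oscillator -> filter -> amplifier chain, including an ADSR envelope, can be sketched in a few lines of Python. This is a deliberately crude digital caricature (naive sawtooth, one-pole lowpass, linear ADSR) of what the Minimoog does in analogue circuitry – all names and parameter values are my own invention:

```python
def saw(phase):                      # oscillator section: a bright sawtooth
    return 2.0 * (phase % 1.0) - 1.0

def adsr(t, a, d, s, r, gate_len):   # simple linear ADSR envelope
    if t < a:          return t / a                            # attack
    if t < a + d:      return 1.0 - (1.0 - s) * (t - a) / d    # decay
    if t < gate_len:   return s                                # sustain
    return max(0.0, s * (1.0 - (t - gate_len) / r))            # release

def render(freq, sr, dur):
    out, phase, lp = [], 0.0, 0.0
    cutoff = 0.1                     # crude one-pole lowpass coefficient
    for n in range(int(sr * dur)):
        t = n / sr
        x = saw(phase)                       # oscillator section
        lp += cutoff * (x - lp)              # filter section (dulls the sound)
        out.append(lp * adsr(t, 0.01, 0.1, 0.7, 0.2, dur - 0.2))  # amplifier
        phase += freq / sr
    return out

samples = render(110.0, 44100, 0.5)   # half a second of a filtered saw note
```

Turning `cutoff` down makes the result duller, shortening the decay makes it more percussive – exactly the knob-to-sound relationships described above.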
I’m getting into this in some depth here, and even though most of you will be well familiar with that concept, because it’s important in two ways. First, the approach of subtractive synthesis is common to almost all analogue synthesizers from the past, and even from today. Then, the oscillator -> filter -> amplifier sections are indeed the main building blocks of almost all synthesizers, not only analogue subtractive synths, but most of them. Some omit the filter section (like we’ll see in the next example), some swap the order of amplifier and filter section – but these are really those main building blocks we’ll encounter all the time.
So, with the early Minimoog, we already got a synth that was very intuitive to program. Not perfect, perhaps, but good enough. There’s just one problem: while the Moog (and many other synths of that era) are still sought after today (and replicated via DSPs) for their characteristic synth basses, synth brass and synth strings, they do not really imitate those instruments properly – and they fail completely at imitating others, especially bells, crotales, nylon-string guitars and so on. So we obviously need something different for our requirement (1) above – and with that, we’ve arrived at our second example.
Phase Modulation (“Frequency Modulation”) Synthesis (DX7 example) 
The synthesis here consists of building blocks called “operators”, of which there can be up to six per voice (note that the DX7 is polyphonic, but we won’t discuss that difference here). Each operator consists of an oscillator (with only one waveform – a sine) and an amplifier with envelope – in other words, similar to the Minimoog, just without a filter. The operators are connected in one of 32 preprogrammed algorithms, where the output of an oscillator can either be heard directly or be used to modulate the frequency of other oscillators (or even be fed back to modulate its own frequency). By signal theory, doing so creates sideband frequencies in addition to the basic sine wave of your “audible operator” – and that’s just what happens here. Note that the amount of modulation needn’t be constant over time: each operator has its own envelope, so you can change the frequency spectrum over time.
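A minimal two-operator stack – one modulator feeding one carrier – can be sketched like this in Python. Note this uses a constant modulation index for brevity; on the DX7, the modulator’s own envelope would vary it over time. Function and parameter names are mine, not Yamaha’s:

```python
import math

def pm_tone(carrier_freq, ratio, mod_index, sr, num_samples):
    """Two-operator phase modulation: the modulator's sine output is
    added to the carrier's phase -- Yamaha's 'FM', which strictly
    speaking is phase modulation."""
    out = []
    for n in range(num_samples):
        t = n / sr
        mod = math.sin(2 * math.pi * carrier_freq * ratio * t)
        out.append(math.sin(2 * math.pi * carrier_freq * t + mod_index * mod))
    return out

# A non-integer ratio and a high index give an inharmonic, bell-like spectrum
bell = pm_tone(440.0, ratio=3.5, mod_index=4.0, sr=44100, num_samples=1000)
```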
Esentially, it’s much the same as in the preceding example – only this time, there’s no such simple recipe for making a sound more bright or more pure as with the Minimoog. Ok, you now can do crotales and bells and whatnot, but you need to find them by trial and error, by a lot of experience gained from trial and error, or by having an advanced degree in signal theory and theory of musical instruments. And why is it so hard to get those sounds the DX7 is capable of? The rather mathematical synthesis concept plays its role here, but so does the user interface of buttons and small LCD display in contrast to the Minimoog’s lots of knobs – in fact, we still need another requirement:
It must have a good user interface for programming and selecting sounds (3).
With the lessons of the last two examples still in our heads, it’s time to look at another approach altogether:
The idea is simple. Take a sound you like (say, a guitar). Record it. Then play it back at different playback speeds and with different volumes, and you’ve got yourself a synthesizer that sounds like a guitar (in theory). The first shortcoming of that idea is clear: you’ll have a hard time creating new sounds. That notwithstanding, this approach (mostly without the possibility of storing your own samples, only using preset sample material) was the core of the majority of synthesizers from the late 80s to the mid-90s – including such successful and influential devices as the Korg M1, the E-mu Proteus series and Roland’s JV-1080. All of these used a typical synthesizer architecture, meaning the sample playback took the place of the oscillator in the above examples and was then combined with a filter, an amplifier, envelope generators etc. Some of them were very powerful in the filter department, especially the devices by E-mu, with the analogue filters in their Emulator II and Emax and the complex “Z-Plane” filters in the Morpheus and the later members of the Proteus family. And interestingly, those devices performed (and perform) surprisingly well on synthetic tones, but still wouldn’t convince you on imitation.
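The core of the idea – play a recording back at a different speed to change its pitch – can be sketched as follows (nearest-neighbour readout for brevity; real samplers interpolate between sample points):

```python
def play_at_speed(sample, speed):
    """Resample a recording by stepping through it at a different rate.
    speed=2.0 plays the sound an octave up and half as long."""
    out, pos = [], 0.0
    while pos < len(sample):
        out.append(sample[int(pos)])   # nearest-neighbour readout
        pos += speed
    return out

recording = list(range(100))               # stand-in for a recorded guitar note
octave_up = play_at_speed(recording, 2.0)  # half the samples, pitched up
```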
One reason for that is that you can’t play such a sample sent through an amplifier and filter the same way you’d play, say, a trumpet, with different embouchure and tonguing techniques and all. At least not by just using a keyboard (although, by the DX7 generation, keyboards had become velocity-sensitive, so you could finally play softer and louder as on a piano). So we have another requirement:
It must be playable expressively (4).
There’s been three main approaches to allow for that, which I’ll discuss followingly. They are not mutually exclusive, in fact, they work together very well. They are, in order from the player to the actual synthesis engine: a proper user interface, a flexible modulation/control structure and a physical modelling architecture.
User interface optimized for expressive playing
From the early days on, a synth was played with a piano-style keyboard and two wheels called “pitchbend” and “modulation”. Later on, at least (attack) velocity was added, then aftertouch, and in some still rare cases release velocity. Still, you’d need something to access those expressive possibilities in an intuitive and playable way. Other approaches included pedals (basically, a slider mapped to a pedal, to keep your hands free), freely assignable knobs to control myriads of parameters (the Oberheim Xpander and the Korg Prophecy, both mentioned later on, were very nice in this regard), breath controllers and ribbon controllers. Yamaha was especially keen on establishing the breath controller (a device that measured how hard you blew into it), while the ribbon controller (a one- to three-dimensional controller on which you move your finger around) was implemented by many a manufacturer but never became a standard. Nice implementations were done by Korg (the Prophecy again, which had a ribbon controller mounted on a wheel) and Kurzweil (two ribbon controllers were standard on the larger K2500, K2600 and later models, and the long one was even sold as a standalone device to control other synths). A company named Haken Audio even makes a device called the Continuum, a multi-touch ribbon controller the size of a grand piano keyboard. And finally, the virtual analogue trend (see below under Physical modelling synthesis) gave us back the loads of dedicated knobs from the old era that were lost in the digital age, with synths like the Waldorf Q offering around 60 dedicated knobs – huge fun when you have an arpeggiator running and just turn knobs to see what happens (this is also a good solution for (3) above!).
Some people maintain that one of the biggest problems of synthesizers in this regard is that they all are, after all, very keyboard-oriented. Strangely, this even increased when MIDI arrived – a technology that made it possible to simply and interchangeably combine different controllers and sound sources – because the MIDI data format is centered around the concept of a key on a piano keyboard being struck and thereby triggering a sound. So any approaches to work with something different (like a data glove) have remained niche solutions.
Flexible modulation/control structure
Just as a reminder, we have two different kinds of signals in synthesizers: the audio signals (which are what you hear at the end), and the control signals, which affect specific parameters in the synthesis engine. Perhaps the most basic of those control signals is the key position controlling the oscillator frequency, such that the oscillator creates the pitch corresponding to the key you hit. We will be talking about the control signal flow here.
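That most basic control routing – key position to oscillator pitch – is, for an equal-tempered keyboard, just an exponential mapping. In MIDI terms (key number 69 = A4), it can be written as:

```python
def note_to_freq(note, a4=440.0):
    """Equal-tempered key-to-pitch mapping (MIDI convention: key 69 = A4).
    Each key up multiplies the frequency by the twelfth root of two."""
    return a4 * 2.0 ** ((note - 69) / 12.0)

freq = note_to_freq(60)   # middle C, roughly 261.63 Hz
```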
In those early synths of the 60s, the structure was modular: you had several building blocks (like the oscillators, the filters, the envelope generators etc. we’ve encountered before), which you could connect via patch cables in any way you liked. With the advent of the keyboard synthesizers following the Minimoog, this was much more limited: most control sources were hardwired to a specific destination that made sense, and only in some cases, some flexibility was offered – either via switches (as on the Minimoog) or via patch cables (like on the ARP 2600 and Korg MS20). Later products, especially the digital ones, lacked even more of the original flexibility.
In 1984 (one might say already at the end of the first wave of analogue synth technology – by that time, the DX7 had already started to dominate the world of pop-music synthesizer use), Oberheim introduced their Matrix series (including the Matrix-12, Xpander, Matrix-6/6R and Matrix-1000). The name came from the concept of the modulation matrix – meaning that any control source can again be assigned to any destination, and be modified along the way by different mathematical functions (scalings, ramp generator, delay etc.). This brought back a lot of the flexibility of the old modular synths while packaging it in a fully programmable system without any patch cables – for this reason, vintagesynth.com refers to the Xpander as “the most flexible non-modular analog synth ever built”.
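The modulation-matrix concept itself is simple enough to sketch in a few lines: a list of (source, destination, amount) routings applied on top of the base parameter values. This is a toy model of the idea, with made-up names – not Oberheim’s actual implementation:

```python
def apply_mod_matrix(sources, routings, base_params):
    """Tiny modulation matrix: each routing scales a source value and
    adds it to a destination parameter. Routings are
    (source, destination, amount) triples."""
    params = dict(base_params)
    for src, dest, amount in routings:
        params[dest] += amount * sources[src]
    return params

sources  = {"lfo1": 0.5, "velocity": 0.8, "env2": 1.0}
routings = [("lfo1", "osc_pitch", 0.1),          # vibrato
            ("velocity", "filter_cutoff", 0.5),  # harder playing opens filter
            ("env2", "filter_cutoff", 0.3)]      # filter "mwah"
params = apply_mod_matrix(sources, routings,
                          {"osc_pitch": 60.0, "filter_cutoff": 0.2})
```

Note that two routings can share one destination – their contributions simply sum, just as multiple control voltages would on an analogue synth.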
Unfortunately, the complexity of the modulation never seemed to work well as a selling point, so it really takes studying the manuals in detail to find out which synth offers the flexibility you want. Key questions are whether all sources can be routed to all destinations, how many such routings can be specified, and which additional processing options (like mathematical functions) are available. In addition to the analogue modular synths and the Oberheim Matrix series, good examples that come to mind are the Kurzweil K2xxx series, E-mu’s EIII architecture (also found in the ESi-32 and the Proteus series), most synths by Waldorf, and of course the virtual modular Nord Modular series by Clavia.
Physical modelling synthesis
We’re looking at the early 90s. With sample memory and digital circuitry having become relatively cheap, the mainstay of the synth development had moved into the sample-based synthesis territory. And, as we’ve discussed before, this wasn’t as good for imitating actual instruments as everybody had hoped. So yet another idea was needed.
Again, Yamaha was one of the pioneers with their V line, of which the VP never left the prototype stage, but the VL1 and VL7 were released. The buzzword: physical modelling. Instead of using a specific synthesis architecture and then tweaking its parameters until it sounded similar to an acoustic instrument, the approach was to simulate the physical properties of the instrument in question. In the case of a wind instrument, this would be the instrument’s body (material, shape), the tone holes (shape, position, size), the mouthpiece, the reed, the player’s lips and breath etc. With that, or so it was thought, it would be possible to make a synth that accurately sounded like a saxophone and could be played as expressively as one – because now, to get that effect of hard vs. soft tonguing, you didn’t adjust the filter, the modulator’s envelope etc. all at once in a very peculiar way; you just moved a control for “tonguing strength”.
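To give a taste of the approach: the simplest well-known physical model is the Karplus-Strong plucked string, where a burst of noise circulating in a damped delay line stands in for the travelling wave on a string. (The VL1’s waveguide models were far more elaborate; this sketch is only meant to show the principle.)

```python
import random

def pluck(freq, sr, num_samples, damping=0.996):
    """Karplus-Strong plucked string: fill a delay line (one period long)
    with noise -- the 'pluck' -- then circulate it, averaging adjacent
    samples so high frequencies die away first, as on a real string."""
    random.seed(0)                      # deterministic 'pluck' for this demo
    delay = [random.uniform(-1, 1) for _ in range(int(sr / freq))]
    out = []
    for n in range(num_samples):
        i = n % len(delay)
        out.append(delay[i])
        nxt = delay[(i + 1) % len(delay)]
        delay[i] = damping * 0.5 * (delay[i] + nxt)   # lowpass + decay
    return out

string = pluck(220.0, 44100, 2000)   # the attack of a plucked-string tone
```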
How did it work? Briefly: not very well. One reason many users were discontented with the results is that those synthesis concepts didn’t work that well for creating the sounds of new instruments, and that’s for a simple reason: physics. There is a reason nobody has ever built an oboe with an exponentially shaped body, 3 cm long tone holes and a tuba mouthpiece – that instrument just wouldn’t work. However, physical modelling was still one of the most-used technologies from the second half of the 90s to the early 21st century, for one specific application: virtual analogue. The idea: use physical modelling to get the sound of an analogue synth, but with the reliability, ease of use and price of a digital synth. One of the early examples was Korg’s Prophecy (mentioned earlier), and its MOSS technology was later included in many other Korg synths. Other companies were even founded on that approach or increased their visibility considerably thanks to the electronic music scene’s need for analogue-style sounds: Clavia, Access, Waldorf and Quasimidi are but a few examples of this trend. Still, this didn’t bring us forward in our search for the perfect synth: after all, these just sounded convincingly like those synths of old.
Requirements not directly related to playing/synthesis: the synthesizer workstation
It turned out that producing cool sounds and being able to play them weren’t the only things people wanted from a synthesizer. And this, among other things, paved the way for one of the most successful synths ever: the Korg M1. Yes, in part this was due to the characteristic sound of this synthesizer (sample-based subtractive synthesis, if you want to know), but also for another reason: in addition to the synth engine, the M1 included effects and a sequencer. This essentially allowed you to create a synthesizer track with just this one device – no additional effects, hardware or software sequencer required. If you synced it to a tape recorder, you could also use it in a traditional (meaning: not synth-oriented) studio as a simple solution for bringing in a few synth tracks – or a lot of them, as the M1 was 8-fold multitimbral.
Following the M1’s success, most synth manufacturers also tried to get their slice from that cake, with varying success. Yamaha’s first venture into that territory came with the SY85 (which, oddly, didn’t have the FM synthesis of all the other synths of that model line), Roland rushed out an ill-fated attempt of a sampler workstation called the W30, plus a derivate of their successful D-50 called the D-20. Kurzweil’s K2xxx line was a workstation concept from the beginning, as were flagship models by e.g. E-mu. In today’s world, workstations are typically flagship models for the bigger companies, like Korg’s Oasys and Kronos, Yamaha’s Motif or Roland’s Fantom (and again, Kurzweil does nothing else), while the more specialized companies have largely stayed away: a workstation does not really improve how good a synthesizer is at its synthesizer capabilities, and today, everyone has loads of effects and an extremely powerful sequencer inside of his computer, anyway.
Coming back to (3): selecting sounds intuitively
This is one of the requirements that only surfaced with the advent of digital technology (not necessarily for sound creation) in synthesizers. With the early analogue synths, there was one sound – until you reprogrammed it, and then that sound was lost. Things started to get interesting with the digital synths, which allowed you to store your patches and recall them at the touch of a button. Still, this wasn’t much of a problem with early synths: the Yamaha DX7, again used as an example, had 32 patches you could store and name. It was trickier with the earlier Sequential Circuits Prophet-5, which had up to 120 patches, identified only by number. The MIDI protocol (which, incidentally, was designed among others by Sequential Circuits’ Dave Smith) allowed you to select from an address range of 128 patches.
From my own experience, keeping track of around 128 patches is quite possible. Even if patch names are vague, you can still remember what “Soundtrack” and “Dr. Solo” sound like and where to find them. By the early 90s, however, things started to look grim: even modestly priced synths like the E-mu Proteus 1/XR+ had no fewer than 384 patches. How would you quickly find a patch called “Empyrean”? By searching for the patch name – if the synth in question offered that feature. How would you quickly find a kind-of dark and soft digital pad sound? No help here; you have to go through all 384 patches.
There have been different approaches to this dilemma: Kurzweil’s “Quick Access” allows you to group your favourite sounds into certain categories – like all analogue-style synth pads, or all odd sound FX. Other synthesizers (as well as many computer programs) use tags, so you can search for “dark”, “digital”, “pad” and “soft” – but only if whoever assigned those tags feels the same way about their use as you do. Unfortunately, an automated search feature that does it all still doesn’t exist – and it’s one of the things I believe would be possible.
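The tag search itself is trivial to implement – the hard part, as said, is getting tags everyone agrees on. A minimal sketch with made-up patch data:

```python
def find_patches(library, wanted_tags):
    """Return the names of patches whose tag set contains all wanted
    tags. Only as useful as whoever assigned the tags in the first place."""
    return [name for name, tags in library.items()
            if wanted_tags <= tags]          # subset test on the tag sets

library = {
    "Empyrean":   {"dark", "digital", "pad", "soft"},
    "Dr. Solo":   {"bright", "lead"},
    "Atmosphere": {"soft", "pad", "warm"},
}
matches = find_patches(library, {"dark", "pad"})   # -> ["Empyrean"]
```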
There’s however a second use case for the sound selection, at that’s the live situation: you’re onstage, and you quickly need to switch from “Empyrean” to “Dr. Solo”, avoiding lots of time for the search or accidentally selecting “Atmosphere”. Fortunately, even earlier synths (even my old and entry-level Kawai K-1 had this!) allowed for some chains, through which you could simply step with a button or a footswitch.
Excursion: About those modular things
It’s time to dedicate a section to those “modular things”. In this article, we’ve often found that a truly modular system is the best when it comes to flexibility in creating sounds.
A lot of the early synthesizers were truly modular, mainly in the 60s. The move away from them to more integrated (and less flexible) devices came with the legendary Minimoog. And one of the reasons it was such a success back then has to do with one property of modular synths: they were really cumbersome to handle. You had a huge thing with lots of cables, taking up lots of rack space, with the controller-result relationship changing with every patch. Apropos “patch”: this word, used for synth programs to this day, of course came from the analogue synths, where creating a sound meant making connections with patch cables first. They were also really hard to reprogram, and any kind of patch memory was simply not possible.
Just a little more about the concept, if it isn’t already clear: so far, we’ve talked about the different functional blocks (e.g. oscillator, filter, envelope generator), how they are wired and how modulation is done. In a typical (i.e. non-modular) synth, the order of the blocks is fixed, e.g. oscillator into filter into amplifier, and there is some flexibility with regard to modulation.
In a modular synthesizer, on the other hand, the functional blocks can be connected any way you like – giving you the maximum amount of sonic flexibility for a given set of functional blocks, but at the same time somewhat compromising our intuitive-programming requirement (2).
Still, the theory was that with digital systems (both in hardware and in software), the disadvantages of the modular structure could be circumvented while retaining the benefits. And this is indeed the case. One example (also mentioned before) is Kurzweil’s K2xxx series. While not fully modular, it offers great flexibility in patching together different functional blocks and very flexible routing. This was expanded even further with the K2600 and its triple architecture. Clavia’s Nord Modular series (based, like all of Clavia’s products, on a virtual analogue approach) brought this full circle: using a computer-based editor, you can – just as with a modular analogue synth – assemble the modules and connect them any way you like. Plus, you can store and recall patches quickly, giving you the best of both worlds.
In the software domain, important examples include Native Instruments’ Reaktor or, going even deeper, Pure Data or Max/MSP. While the concept looks the same at first sight, these allow you to go into the functionality of the modular blocks down to code level, creating the equivalent of circuit-bending your hardware modules or building new ones entirely.
Finally, analogue modular systems are still made by a few specialized companies. One example, Doepfer’s A-100 system, has around 150 different modules in its range, along with the necessary racks, power supplies etc. And there are several other manufacturers.
Having played a lot with the Nord Modular myself, I can only say that modular synths are indeed the choice if you really want to create odd things. However, even though the digital implementations have all the advantages of other synth architectures with regard to patch handling, there’s still the problem that programming and expressively playing them in an intuitive fashion simply doesn’t work, by design.
Current tendencies in synthesizer products
It doesn’t seem that improvement is on the way: if you look at today’s synth products, it’s somewhat split between virtual-something things (which either emulate analogue synths, or electric pianos and organs, or both), the all-powerful workstations (which often also do virtual-something), and some retro real-analogue synths. None of those new products makes an impression to be a real landmark in any of those territories we’ve identified as lacking so far…
Coming back to the question from the very beginning: Is the perfect synthesizer even possible?
In all probability, no. You may disagree with me here, but I’m not disappointed by that fact. Leaving all the unkept promises of synthesizer technology aside, I find it nice that even with devices very similar in concept, each one has its own character, its own advantages and its own quirks. Even when you program similar synthesizers to do the same thing, they still sound different – and not only such odd analogue synths as a Minimoog and an ARP 2600, but also, within the much-debated lifeless and cold digital realm, a Nord Modular vs. a Waldorf Q. Even though they use the same DSPs, they sound different. A lot.
And frankly, it’s not that unusual to have more than one instrument of the same type, as every player of acoustic instruments will know. And if you can have several acoustic guitars, because they sound and play differently and are differently suited for specific applications, this should also be true for synthesizers.
- Minimoog operation manual (via fantasyjackpalance.com)
- Yamaha DX7 manual (via yamaha.co.jp)
- James Clark: Advanced Programming Techniques for Modular Synthesizers
- Rob Hordijk: Nord Modular Tips & Tricks
- For an in-depth comparison of frequency and phase modulation in synthesizers, see my FM/PM blog post or this article from “Nord Modular Tips & Tricks” (see above).
- It’s interesting to note that on the Minimoog, oscillator 3 can also modulate the other oscillators’ frequency. So a very basic approach to FM is also possible with the Minimoog – at least in theory.
- Both the MIDI-equipped Oberheim synths (e.g. the Matrix series) and most Waldorf synthesizer keyboards include the feature of release velocity.
- vintagesynth.com page on the Oberheim XPander.
- Both examples taken from the Roland D-50-derived MT-32 sound library.
For all media (i.e. pictures) not directly or indirectly marked with a copyright note, the copyright lies with the author. For any use of this material, please contact the author.
The author is a multi-instrumentalist, composer and electronics geek – the latter also in his day job. While he does not consider himself a collector, he still has more than a dozen synthesizers in his possession, most of which he regularly uses. In addition, he sometimes even gets fond of software instruments. Personal favourites in his arsenal include the Kurzweil K2600XS (for great flexibility and a decent keyboard), the original Waldorf Q (for cool sounds, lots of knobs and bright yellow colour) and the Nord Modular (for its huge flexibility, obviously).