I think everyone who has used more than one synthesizer in their life has used a sample-based instrument at least once. This has a lot to do with three important products or product lines: Korg’s M1 (released in 1988) was of course groundbreaking as the first synthesizer keyboard built around a workstation concept. However, it was also one of the earlier successful instruments with sample-based sound generation, albeit only using samples from ROM. E-mu quickly followed suit in ’89, though with a different approach: by taking the general architecture of their EIII flagship sampler and packing it into a 1U rack case together with a few MB of high-quality ROM samples and patches, the Proteus series became a kind of catchall for different applications, in many variants up to the Proteus 2500. Finally, Roland joined the party somewhat late, but had a big success with their JV series, established in 1991: the JV-1080 variant (from 1994) became one of the best-selling (if not the best-selling) and most-heard electronic instruments ever.
Even today, every synth workstation uses sample-based sound generation to some degree. So the title question seems kinda stupid: if some of the most successful products sold as a “synthesizer” all use samples, how can this not be a real synthesizer?
A Question of Taxonomy
The question may be largely one of taxonomy. After all, for the aforementioned reasons, no one would seriously dispute that a sample-based instrument is a “real electronic instrument”. But is it a “synthesizer”?
From an etymological point of view, synthesis (and thus what a synthesizer does) means taking two or more separate entities and forming something new that is more than the sum of its parts. So what does that mean for a music synthesizer? To stick with the classic example of an analogue subtractive synth: we take one or more oscillators, a filter, an amplifier, some envelopes and maybe a few other things, and with that we are able to create something entirely new.
And with that, I believe we’re already moving in the right direction. If we take a sample-based synthesizer and simply play back a recording of an acoustic instrument, maybe at a different volume, then this is hardly synthesis by definition. This is very often done e.g. for drums, but also for heavily multisampled tuned instruments such as a piano. We don’t use envelopes, we don’t use filters, let alone LFOs or pattern sequencers – it’s just playing back an audio recording at its original speed. And naturally, this is even more the case if we use the instrument just to play back a sample loop.
The other extreme is a very flexible sound synthesis engine such as Kurzweil’s VAST, which uses samples for just about everything. With such a synth, you can of course create patches that sound convincingly like an analogue synth. This is, however, not done by sampling the output of some Minimoog or MS20 patch, but rather by taking a sample of a saw or pulse wave, layering those waves, sending them through filters and amplifiers, and controlling them with envelopes, LFOs and whatnot.
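To make that distinction concrete, here is a minimal sketch (plain Python, with entirely hypothetical parameter values – this is an illustration of the idea, not of how VAST actually works): a single-cycle saw wave stands in for the ROM “sample”, and it only becomes a synth voice once we loop it at a pitch, run it through a lowpass filter and shape it with an envelope.

```python
import math

SR = 44100  # sample rate in Hz

def saw_cycle(length):
    """One cycle of a sawtooth – our stand-in for a ROM 'sample'."""
    return [2.0 * i / length - 1.0 for i in range(length)]

def synth_voice(freq, dur, cutoff, decay):
    """Loop the saw sample at `freq` Hz for `dur` seconds, then apply
    a one-pole lowpass at `cutoff` Hz and an exponential decay envelope
    (`decay` per second) – i.e. the 'synthesis' part of the chain."""
    cycle = saw_cycle(64)
    n = int(SR * dur)
    out = []
    phase = 0.0
    y = 0.0                                      # lowpass filter state
    a = math.exp(-2.0 * math.pi * cutoff / SR)   # one-pole coefficient
    for i in range(n):
        s = cycle[int(phase) % len(cycle)]       # sample playback (looped)
        phase += freq * len(cycle) / SR          # advance at the note's pitch
        y = (1.0 - a) * s + a * y                # subtractive part: lowpass
        env = math.exp(-decay * i / SR)          # amplitude envelope
        out.append(y * env)
    return out

voice = synth_voice(freq=220.0, dur=0.5, cutoff=800.0, decay=6.0)
```

Dropping the filter and the envelope from `synth_voice` leaves nothing but looped sample playback – which is exactly the “trivial use” discussed above.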
It’s Up to What You Do with It
The answer, I’m afraid, is not a simple “yes” or “no”, but rather an “it depends”. And with that, it’s clear that except for the extreme cases described above, the decision is rather subjective.
Which means, for me, that I will consider those sampler things “synthesizers”, because (with the exception of some very simple phrase samplers) you can synthesize with them. On the other hand, I’ll decidedly steer clear of the more trivial uses when I want to make a synth album.