I have been thinking about sampler design lately while taking a closer look at Kontakt, SFZ and hardware designs.
The basic signal path in most samplers (or rather, for a single voice) is fairly standard:
Building blocks
* A sample player, the source, which plays back the sample in one way or another. It typically has several parameters, at least one of them for pitch, that control how something is played back.
* A filter bank that manipulates the audio stream and shapes the timbre. Its parameters depend on which filters are available and inserted.
* An amplifier that controls the output volume and stereo panning. Its parameters are quite obvious.
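The three blocks above can be sketched as a minimal per-voice render function. This is an illustration only, with invented names (`one_pole_lowpass`, `render_voice`), not the API of any actual sampler:

```python
# Per-voice signal path sketch: source -> filter -> amplifier.
# All function and parameter names here are invented for illustration.

def one_pole_lowpass(block, state, coeff):
    """Stand-in for the filter bank: a trivial one-pole low-pass."""
    out = []
    for x in block:
        state += coeff * (x - state)
        out.append(state)
    return out, state

def render_voice(sample, gain=0.5, pan=0.0, cutoff_coeff=0.2):
    """Render one voice: play the sample, filter it, then amplify and pan."""
    block = list(sample)                                   # source
    block, _ = one_pole_lowpass(block, 0.0, cutoff_coeff)  # filter bank
    left = [x * gain * (1.0 - pan) for x in block]         # amplifier
    right = [x * gain * (1.0 + pan) for x in block]
    return left, right

left, right = render_voice([1.0, 0.0, 0.0, 0.0])
```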
Source
The sample player being used can be thought of as a realization of a generalized source. Traditionally a sampler has had just one type of playback mechanism, while a next-generation design may have several: playback from RAM, streaming from disk, rubber-band (time-stretched) playback, pure synthesis, etc.
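One way to realize that generalization is a common source interface with swappable implementations behind it. A sketch under that assumption (the class names are hypothetical):

```python
# Generalized source: the playback mechanism is a pluggable implementation.
from abc import ABC, abstractmethod
import math

class Source(ABC):
    @abstractmethod
    def render(self, nframes):
        """Produce the next nframes samples."""

class RamSource(Source):
    """Playback straight from a buffer in RAM."""
    def __init__(self, data):
        self.data, self.pos = data, 0
    def render(self, nframes):
        out = self.data[self.pos:self.pos + nframes]
        self.pos += nframes
        return out + [0.0] * (nframes - len(out))  # pad past the end

class SineSource(Source):
    """'Pure synthesis' source standing in for an oscillator."""
    def __init__(self, freq, rate=48000):
        self.phase, self.step = 0.0, 2 * math.pi * freq / rate
    def render(self, nframes):
        out = [math.sin(self.phase + i * self.step) for i in range(nframes)]
        self.phase += nframes * self.step
        return out
```

A disk-streaming or rubber-band source would slot in the same way, as another `Source` subclass.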
Filters
Exactly which filters are available in the filter block is another design choice. The common, basic approach is a fixed set of band-pass and EQ filters; the E-MU Proteus 2k has a flexible Z-plane filter that replaces several types of filters at once, while Kontakt lets you insert a number of filters. In a flexible next-generation design, only the available hardware (CPU and RAM) limits the number and kinds of filters inserted in the filter bank.
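The "insert any number of filters from a pool" idea can be sketched as a filter bank that just chains whatever processors it is given. Again the names are invented, not taken from any real product:

```python
# Filter bank sketch: filters are picked from a pool and chained in order;
# only CPU/RAM would bound how many can be inserted. Names are invented.

class Gain:
    def __init__(self, g):
        self.g = g
    def process(self, block):
        return [x * self.g for x in block]

class OnePoleLP:
    def __init__(self, coeff):
        self.coeff, self.state = coeff, 0.0
    def process(self, block):
        out = []
        for x in block:
            self.state += self.coeff * (x - self.state)
            out.append(self.state)
        return out

class FilterBank:
    def __init__(self, filters=None):
        self.filters = list(filters or [])  # any number, any kind
    def insert(self, f):
        self.filters.append(f)
    def process(self, block):
        for f in self.filters:              # run the chain in order
            block = f.process(block)
        return block

bank = FilterBank([OnePoleLP(0.5), Gain(2.0)])
```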
Amplifier
The amplifier is the only block that is relatively straightforward compared to the others. Unlike the source and filter blocks, there is no apparent need to swap it out for other realizations. It does what it does.
Parameterization and modulation
In most samplers it usually just comes down to which parameters are available and how (or if) they can be manipulated. A really basic approach is to allow only constant values; another, common one is to have associated EGs and LFOs for some or all of the controls (SFZ, for instance, defines an EG and an LFO for every building block). I think my E-MU Proteus 2k is one of the most modular; take a look at the basic programming chapter in the manual, for example. E-MU makes the point that it is not a sample player but a synth that happens to use samples instead of oscillators, and takes a "patch" approach (but is instead limited by the number of available modules that can be patched together at once).
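The "patch" approach can be sketched as a small modulation matrix: modulators are patched to named parameters with a depth, instead of being hard-wired to fixed controls. A hypothetical illustration (the `ModMatrix` and `LFO` names are mine, not E-MU's):

```python
# 'Patch'-style modulation sketch: any modulator can be routed to any
# named parameter. All names here are invented for illustration.
import math

class LFO:
    def __init__(self, freq, rate=100.0):
        self.freq, self.rate, self.t = freq, rate, 0
    def next(self):
        v = math.sin(2 * math.pi * self.freq * self.t / self.rate)
        self.t += 1
        return v

class ModMatrix:
    def __init__(self):
        self.patches = []  # (modulator, target parameter name, depth)
    def patch(self, mod, target, depth):
        self.patches.append((mod, target, depth))
    def apply(self, params):
        """Return params with every patched modulator mixed in."""
        out = dict(params)
        for mod, target, depth in self.patches:
            out[target] = out.get(target, 0.0) + depth * mod.next()
        return out

matrix = ModMatrix()
matrix.patch(LFO(1.0), "pitch", depth=0.5)
params = matrix.apply({"pitch": 60.0, "cutoff": 1000.0})
```

An EG would plug into the same matrix; the limit E-MU imposes would correspond to capping the length of `patches`.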
Modularization
So we have a generalized signal path, but as you can see there are several realizations of it. The actual implementation of each building block, and of the individual filters in the filter bank, can be chosen from a pool of ready implementations (and new ones can be added). The number of modulators that can be patched to the parameters of the building blocks will be even greater.
Mapping
So far we have concentrated on the signal path for a single voice: the stuff that will make some kind of noise in the end. Another important block is the
mapper, which is responsible for mapping input events to parameter and modulator settings, as well as selecting and loading the sample to be played back for each voice. It needs to be really flexible so that a lot of needs can be expressed easily. It is the equivalent of the dimension system in the DLS and GIG formats, while the SFZ format has a smart set of "opcodes" with mapping possibilities similar to Kontakt's (afaik).
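At its core the mapper matches incoming events against zones, roughly what SFZ's `lokey`/`hikey`/`lovel`/`hivel` opcodes express. A minimal sketch of that idea (the `Zone`/`Mapper` classes and file names are hypothetical):

```python
# Mapper sketch: note-on events are matched against key and velocity
# ranges to pick the sample and parameters for a new voice.

class Zone:
    def __init__(self, lokey, hikey, lovel, hivel, sample, params=None):
        self.lokey, self.hikey = lokey, hikey
        self.lovel, self.hivel = lovel, hivel
        self.sample = sample
        self.params = params or {}
    def matches(self, key, vel):
        return (self.lokey <= key <= self.hikey
                and self.lovel <= vel <= self.hivel)

class Mapper:
    def __init__(self, zones):
        self.zones = zones
    def map_note_on(self, key, vel):
        """Return (sample, params) for every zone the event falls into."""
        return [(z.sample, z.params) for z in self.zones if z.matches(key, vel)]

mapper = Mapper([
    Zone(0, 60, 0, 127, "piano_soft.wav"),
    Zone(61, 127, 0, 127, "piano_hard.wav", {"cutoff": 8000}),
])
hits = mapper.map_note_on(64, 100)
```

A real mapper would of course match on many more dimensions (controllers, round-robin state, keyswitches), which is exactly why it needs to be so flexible.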
GUI and scripting
Scripting in Kontakt is usually heralded as its core power. For the most part that isn't true: the power comes from a great deal of flexibility and powerful building blocks, so that few instruments need to take full advantage of the scripting. The exception is the line of
basses from Scarbee, which really use the possibility of a custom performance-view GUI and the manipulation/transformation of input events to make the instruments playable and realistic. To sum up, in the case of Kontakt, scripting has three uses: creating custom GUIs, generating/filtering/transforming input events, and setting/changing parameters based on GUI and event callbacks. A script works at the instrument level and can be thought of as a glorified programmable MIDI processor, MIDI events in and MIDI events out, but with read/write access to internal parameters. When it comes to the GUI, there is always the question of separation using MVC/MVP patterns.
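That "glorified programmable MIDI processor" view can be sketched as a callback that receives events, may read/write engine parameters, and returns the events to emit. This is not KSP; all names are invented for illustration:

```python
# Script-as-MIDI-processor sketch: events in, events out, plus read/write
# access to internal parameters. All names are invented, not Kontakt's API.

class Engine:
    def __init__(self):
        self.params = {"volume": 1.0}

def example_script(engine, event):
    """Transpose note-ons up an octave; duck the volume on very hard hits.
    Returns the (possibly transformed) list of events to emit."""
    kind, key, vel = event
    if kind == "note_on":
        if vel > 120:
            engine.params["volume"] = 0.8  # parameter write from a callback
        return [("note_on", key + 12, vel)]
    return [event]  # pass everything else through unchanged

engine = Engine()
out = example_script(engine, ("note_on", 60, 127))
```

Returning a list lets a script also generate extra events (e.g. release samples) or swallow one by returning an empty list.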
Engine definition and instrument loader
As a first step the loader defines the engine: which modules it consists of and how they are put together. Then it reads the user-provided instrument file it was designed for, fills in the mapper, sets the appropriate parameters, and loads any scripts. Such flexibility makes it relatively easy to emulate the mapper and signal path of other samplers in order to play back formats like AKAI programs or SFZ instruments: an engine definition is created emulating what was available in the original sampler, and then an instrument file loader needs to be written for that format to populate the engine with instruments.
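The two-step load could look something like the sketch below: a declarative engine definition, then a format-specific loader that populates it. The structure, module names, and the toy `opcode=value` syntax are all hypothetical:

```python
# Two-step load sketch: (1) an engine definition declares which module
# implementations make up the voice, (2) a format-specific loader fills
# in the mapper zones. Everything here is invented for illustration.

ENGINE_DEF = {
    "source": "ram_player",       # module picked from the pool
    "filters": ["lowpass_2pole"],
    "amp": "stereo_amp",
}

def build_engine(definition):
    """Instantiate an empty engine from its definition."""
    return {"modules": dict(definition), "zones": [], "scripts": []}

def load_sfz_like(engine, instrument_text):
    """Toy loader for an SFZ-like 'opcode=value' text (one region only)."""
    region = {}
    for token in instrument_text.split():
        opcode, value = token.split("=")
        region[opcode] = value
    engine["zones"].append(region)
    return engine

engine = build_engine(ENGINE_DEF)
engine = load_sfz_like(engine, "sample=kick.wav lokey=36 hikey=36")
```

Emulating another sampler would then mean writing a new `ENGINE_DEF` matching its signal path plus a loader for its instrument format, leaving the engine code itself untouched.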