A draft whitepaper of my parameterization idea

Post by dahnielson » Sat Mar 29, 2008 4:10 pm

Just want to add that my original thought behind compiling the virtual instruments, instead of just providing a run-time environment for them, was the possibility of having them compiled into LV2 plugins (replacing LADSPA/DSSI) and implementing the "instrument mode" as an LV2 host. Compiling a circuit into a single dynamically loadable object could provide optimization advantages over run-time built circuits, in addition to producing stand-alone (un-editable) LV2 plugins and JACK applications.

Another possibility is to compile only the components into LV2 plugins and have Omnibus patch them together into circuits, something I realized just now while studying the LV2 website:

"Ports are no longer limited to the float data type. Port type extensions will allow plugins to handle any data type, e.g. MIDI or OSC. Other possibilities include frequency domain (FFT) data, sample files managed by the host... any type conceivable."
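
To make that concrete, here's a rough sketch of what one compiled component could look like against the plain LV2 core API; the "gain" component, its URI and its port layout are made-up placeholders, not anything Omnibus defines yet:

[code]
/* Hypothetical "gain" component as it might look when compiled into an
 * LV2 plugin.  Only the LV2 core API (lv2.h) is assumed; the URI and the
 * port indices are placeholders for illustration. */
#include <lv2.h>
#include <stdint.h>
#include <stdlib.h>

#define GAIN_URI "http://example.org/omnibus/gain"

enum { PORT_GAIN = 0, PORT_IN = 1, PORT_OUT = 2 };

typedef struct {
    const float* gain;
    const float* in;
    float*       out;
} Gain;

static LV2_Handle instantiate(const LV2_Descriptor* desc, double rate,
                              const char* bundle,
                              const LV2_Feature* const* features)
{
    return calloc(1, sizeof(Gain));
}

static void connect_port(LV2_Handle h, uint32_t port, void* data)
{
    Gain* g = (Gain*)h;
    switch (port) {
    case PORT_GAIN: g->gain = (const float*)data; break;
    case PORT_IN:   g->in   = (const float*)data; break;
    case PORT_OUT:  g->out  = (float*)data;       break;
    }
}

static void run(LV2_Handle h, uint32_t n_samples)
{
    Gain* g = (Gain*)h;
    for (uint32_t i = 0; i < n_samples; ++i)
        g->out[i] = g->in[i] * *g->gain;   /* the component's entire "circuit" */
}

static void cleanup(LV2_Handle h) { free(h); }

static const LV2_Descriptor descriptor = {
    GAIN_URI, instantiate, connect_port,
    NULL /* activate */, run, NULL /* deactivate */,
    cleanup, NULL /* extension_data */
};

const LV2_Descriptor* lv2_descriptor(uint32_t index)
{
    return index == 0 ? &descriptor : NULL;
}
[/code]

Each such binary would be accompanied by a small Turtle (.ttl) file describing its ports to the host, which is where the RDF side of LV2 comes in.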

Post by dahnielson » Sat Mar 29, 2008 4:17 pm

Consul wrote:As you can probably guess by now, I care as much (if not more) about the non-linear characteristics of the circuit as the linear. Simulating the linear properties is well-established and a lot of documentation exists on how to do it. Nailing the linear and non-linear at the same time, that's the interesting problem that will result in a final program above and beyond what's been done before. Runge-Kutta, again, looks like a method that solves only for linear behavior. The thing is, Spice does know how to do both at the same time, and I really would like to know how, so that we could then possibly make our own optimized version for real-time audio.
Well, as you described it yourself, I think you have to approximate the non-linear behavior so that it can be solved as if it were linear. E.g. in simulations of vehicle dynamics the tire friction has obviously non-linear properties, and the most common solution for tires is to use Pacejka's empirical model and handle the rest linearly (N.B.: it's been a long time since I played around with that kind of behavior, dynamics and integration).
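
For reference, the simplified form of Pacejka's "Magic Formula" is just an empirical curve fit for force as a function of slip; roughly the sketch below, where the B, C, D, E coefficient values are arbitrary placeholders:

[code]
#include <math.h>

/* Simplified Pacejka "Magic Formula": an empirical, non-linear fit for
 * tire force as a function of slip.  The coefficient values below are
 * arbitrary placeholders, not real tire data. */
static double pacejka_force(double slip)
{
    const double B = 10.0;   /* stiffness factor */
    const double C = 1.9;    /* shape factor     */
    const double D = 1.0;    /* peak value       */
    const double E = 0.97;   /* curvature factor */

    return D * sin(C * atan(B * slip - E * (B * slip - atan(B * slip))));
}
[/code]

The non-linearity stays confined to that one function, which is simply evaluated at every step while the rest of the system is handled as usual.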

Remember, attention to detail is important if you're designing a nuclear reactor; in synthesis, all we have to do is fool our quite gullible auditory system.

Post by Consul » Sat Mar 29, 2008 5:19 pm

The only thing I can think of would be to build the non-linear part of the model as a piecewise-defined set of solved linear equations. In other words, depending on the amplitude of the input sample, the appropriate linear model is picked, processes the sample, and outputs the result. The feedback loops (delay lines) are common to all of the piecewise models, though. I'll use a basic filter for illustration.

A basic IIR filter requires two feedback delay lines of two samples each, one for the input and one for the output. Each new input and output is placed at the beginning of its respective delay line, and the math for the next output is done using the values now held in both delay lines.

Now, imagine we have a piecewise-defined filter, with different math to be done depending on the value of the new input sample. Regardless of which piecewise "layer" is used, though, the same delay lines are always used. This means that the previous inputs and outputs used for the new calculation may very well have been produced by a different layer than the one being used now.

The interesting part is going to be compiling the different piecewise functions based upon the non-linear properties of the components of that particular model. An important part of this will be knowing how many piecewise functions to compile and what their respective ranges will be. One advantage we have is that computers now have lots of memory available. We might also have to pay attention to the cases right at the layer boundaries, so that the change in algorithm from one sample to the next doesn't introduce any "corners" into the wave, which would create aliasing noise.
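
Here's a rough sketch of that idea for a single biquad; the coefficient sets and the amplitude thresholds would be made up per model, and the point is only that every layer reads and writes the same delay lines:

[code]
#include <math.h>

/* Sketch of a piecewise-defined biquad (direct form I).  Which coefficient
 * "layer" is used depends on the amplitude of the new input sample, but
 * all layers share the same input/output delay lines. */
typedef struct {
    double b0, b1, b2;   /* feed-forward coefficients */
    double a1, a2;       /* feedback coefficients     */
} Layer;

typedef struct {
    double x1, x2;       /* previous two inputs  (shared by all layers) */
    double y1, y2;       /* previous two outputs (shared by all layers) */
} State;

static double process_sample(State* s, double x, const Layer* layers,
                             const double* thresholds, int n_layers)
{
    /* Pick the layer whose amplitude range contains the new input. */
    int i = 0;
    while (i < n_layers - 1 && fabs(x) > thresholds[i])
        ++i;
    const Layer* L = &layers[i];

    /* Direct form I difference equation:
     * y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2] */
    double y = L->b0 * x + L->b1 * s->x1 + L->b2 * s->x2
             - L->a1 * s->y1 - L->a2 * s->y2;

    /* Shift the shared delay lines regardless of which layer was used. */
    s->x2 = s->x1;  s->x1 = x;
    s->y2 = s->y1;  s->y1 = y;
    return y;
}
[/code]

Crossfading between the two coefficient sets on either side of a threshold is probably the simplest way to smooth over those boundary cases.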

Post by dahnielson » Sat Mar 29, 2008 6:19 pm

The emulation of non-linear behavior is encapsulated by the component, i.e. the component is treated as a black box.

N.B.: My suggestion to call networks "circuits" and their nodes "components" was rather tongue-in-cheek buzzwording on my part.

P.S. I'm starting to warm up to the idea of components as LV2 plugins and Omnibus VIs as hosts routing the signals between them; that is, circuits assembled at run-time and components built at compile-time.
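
Roughly, the run-time side would then amount to the host pointing component ports at shared buffers and calling the components in signal-flow order. A sketch of the idea, with placeholder names rather than any real API:

[code]
#include <stdint.h>

/* Sketch of run-time circuit assembly: the host owns the buffers, wires
 * component ports to shared buffers once at assembly time, and then just
 * runs the components in dependency (signal-flow) order. */
typedef struct {
    void* handle;                                            /* instance data */
    void (*connect)(void* handle, uint32_t port, float* buffer);
    void (*run)(void* handle, uint32_t n_frames);
} Component;

typedef struct {
    Component* components;   /* sorted in signal-flow order */
    int        n_components;
} Circuit;

/* Wiring happens once, when the circuit is assembled... */
static void connect_edge(Component* src, uint32_t out_port,
                         Component* dst, uint32_t in_port, float* shared)
{
    src->connect(src->handle, out_port, shared);
    dst->connect(dst->handle, in_port, shared);
}

/* ...after which the audio callback is just a loop. */
static void circuit_run(Circuit* c, uint32_t n_frames)
{
    for (int i = 0; i < c->n_components; ++i)
        c->components[i].run(c->components[i].handle, n_frames);
}
[/code]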

Post by Consul » Sat Mar 29, 2008 6:43 pm

Well, the LV2 spec is certainly flexible enough to allow for the idea to work. RDF is still kinda alien to me, but I suppose I can get it eventually.

As for "circuits" and such being tongue-in-cheek, I pretty much knew that. It did open the door for me to explain my ultimate goal of the simulation of linear and non-linear behaviors, though. I've been wanting to get a discussion going on that for some time. If we go the black-box approach and use LV2 for each component of a DSP network, I'll still be able to experiment with non-linear effects in my own time.

Over on the Reaper side of the music world, another fellow and I started up a project to use the internal Jesusonic engine to make a modular synthesizer. Jesusonic is a compiled scripting language for DSP. The way it was implemented in Reaper, it had 64 variables that represented 64 audio-rate busses, per track, that signals could travel down. So we were able to design a synthesizer out of multiple Jesusonic scripts (the track treats each as its own plugin), using those 64 channels for routing our signals. There were disadvantages, such as no polyphony, though the other fellow (whose name escapes me) had some ideas for getting around that. But MIDI information flowing to the track was always available to every plugin slot, and all 64 busses could be accessed by name and in a functional manner as well. Here's the thread on it:

http://www.cockos.com/forum/showthread.php?t=16926
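
In spirit the routing worked something like this (a conceptual C sketch, not actual Jesusonic code):

[code]
/* Conceptual sketch of the 64-bus idea: each plugin slot on the track is
 * a function that reads and writes a shared set of 64 audio-rate bus
 * values, once per sample, in slot order.  Names are placeholders. */
#define NUM_BUSSES 64

typedef void (*Slot)(double bus[NUM_BUSSES]);

/* Hypothetical slots: an oscillator writing bus 0 and a crude one-pole
 * filter reading bus 0 and writing bus 1. */
static void osc_slot(double bus[NUM_BUSSES])    { bus[0] = 0.0; /* next osc sample */ }
static void filter_slot(double bus[NUM_BUSSES]) { bus[1] = 0.5 * (bus[0] + bus[1]); }

static void process_sample(Slot* chain, int n_slots, double bus[NUM_BUSSES])
{
    for (int i = 0; i < n_slots; ++i)
        chain[i](bus);   /* each slot sees whatever earlier slots left on the busses */
}
[/code]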

PS - I'm almost always in #linuxsampler on Freenode if you ever want to drop by. I can't guarantee being available, but I'm logged in at any rate. I go by Consul on there as well.

Post by Consul » Sat Mar 29, 2008 7:00 pm

For the record, the current SVN of Ingen (http://wiki.drobilla.net/Ingen) already supports LV2, so what we want might already be built.

I still want to explore the whole "compilation of circuits with linear and non-linear properties" though, since that's an area unexplored in the open source world.

Post by dahnielson » Sat Mar 29, 2008 7:03 pm

One thing that I forgot to mention is that the idea of letting people create new components using Omnibus still holds water (i.e. Omnibus acting as a sort of LV2 factory).

You can always treat components as integrated circuits. :mrgreen:

My point is, I don't see why you can't do components with linear and non-linear properties.

Post by dahnielson » Sat Mar 29, 2008 7:49 pm

Consul wrote:I still want to explore the whole "compilation of circuits with linear and non-linear properties" though, since that's an area unexplored in the open source world.
Well, compile-time it is, then, with continuous integration. No LV2 plugins as components, just our own.

Post by dahnielson » Sat Mar 29, 2008 8:20 pm

Consul wrote:Over on the Reaper side of the music world, another fellow and I started up a project to use the internal Jesusonic engine to make a modular synthesizer. [...]
Cool. Nice hacks are always cool. 8-)

Post by Consul » Sat Mar 29, 2008 9:35 pm

Part of the way this would have to work is by allowing recursive embedding of blocks within blocks. To illustrate, at the Elemental level (we can't call it the core level, as that's the name Reaktor uses officially), we have:
  • Adder
  • Multiplier
  • Subtractor
  • Divider
  • N-Sample Delay Line (where every sample in the line can be read separately via an index)
  • Splitter
Theoretically, everything in DSP can be made with these elements, but there are others we can add for convenience (I'm sure there are many more, but this'll illustrate the idea):
  • Comb Filter
  • Variable Delay Line (where the length of the line can be changed in real-time, at the expense of being able to index any point along it)
However, the Elemental level can also contain other elemental operations for things like file access, arrays and indexing, and access to sockets. A sample playback object might want to be an element as well.

Blocks can then be created from elements, which can then be parts of other blocks, and on and on, until one finally decides to use blocks to build an instrument.
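
As a rough sketch of that kind of composition (all names and interfaces below are placeholders), a feed-forward comb filter block could itself be wired up from a delay-line element and an adder element, and then reused as an element inside some larger block:

[code]
#include <stdlib.h>

/* An N-sample delay line element: delay_tick() returns the sample that
 * went in N ticks ago. */
typedef struct {
    double* buf;
    int     length;
    int     pos;
} Delay;

static Delay* delay_new(int length)
{
    Delay* d = calloc(1, sizeof(Delay));
    d->buf = calloc(length, sizeof(double));
    d->length = length;
    return d;
}

static double delay_tick(Delay* d, double in)
{
    double out = d->buf[d->pos];          /* oldest sample */
    d->buf[d->pos] = in;
    d->pos = (d->pos + 1) % d->length;
    return out;
}

/* An adder element... */
static double add(double a, double b) { return a + b; }

/* ...and a feed-forward comb filter block built from the two:
 * y[n] = x[n] + gain * x[n-N].  That block could in turn be embedded
 * as a part of a reverb block, and so on. */
typedef struct {
    Delay* delay;
    double gain;
} Comb;

static double comb_tick(Comb* c, double in)
{
    return add(in, c->gain * delay_tick(c->delay, in));
}
[/code]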