A draft whitepaper of my parameterization idea

Everything and anything, but nothing about the LinuxSampler project.
Consul
Moderator
Posts: 189
Joined: Wed Jan 23, 2008 11:19 pm
Location: Port Huron, Michigan, USA

Re: A draft whitepaper of my parameterization idea

Post by Consul » Fri Mar 28, 2008 1:09 am

Well, that's what I thought I was doing. I tend to think very bottom-up in my design. Decide what I want to do, break it into the smallest reasonable units possible, build those, and then build on top of those. When I'm talking about classes, I'm really just talking about atomic units out of which the whole can be made. I'm just trying to use language that is fairly standard, really. Like I said, the single "wave player" class was easy.

So, what does Omnibus need to be able to do? Presumably, it'll handle the routing of both audio and control signals. Maybe we can start there.

EDIT: Is it possible for a master object to define a function that can pass data between sub-objects? If we define a master class for Omnibus, and then put the actual processing blocks in subclasses, could we make a function in the master class that can pass a sample (whether audio or control) from the output of one block to the input of another?
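A minimal C++ sketch of that idea, assuming a common base class for the processing blocks; the names (`Block`, `Gain`, `Master`) are purely illustrative, not actual Omnibus classes:

```cpp
#include <cassert>
#include <vector>

// Hypothetical sketch: a master object passes samples between
// sub-objects through a shared virtual interface.
struct Block {
    virtual ~Block() {}
    virtual float process(float in) = 0;  // one sample in, one sample out
};

struct Gain : Block {
    float amount;
    explicit Gain(float a) : amount(a) {}
    float process(float in) override { return in * amount; }
};

// The "master" class owns the blocks and moves each sample from the
// output of one block to the input of the next.
struct Master {
    std::vector<Block*> chain;
    float run(float sample) {
        for (Block* b : chain)
            sample = b->process(sample);
        return sample;
    }
};
```

Whether the sample is audio or control makes no difference here, since both would be the same 32-bit float type.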
Darren Landrum

dahnielson
Moderator
Posts: 632
Joined: Wed Jan 23, 2008 11:25 pm
Location: Linköping / Tranås, Sweden


Post by dahnielson » Fri Mar 28, 2008 1:21 am

Yes, but what I would like to know is how to handle voices, for example. Some "blocks" will be per instrument (like a MIDI event mapper) while others will be per voice (like sample players, filters, etc.) to support polyphony, and there need to be several groups of voices for layering (to emulate a Roland JX-10 you need two independent groups of JX-8 voices). What will the concept for grouping these be, in terms of blocks?

I haven't yet looked into how Reaktor (and others) handle it. Will probably have a better idea after a bit of research.

Agreed, talking about the solution to these issues in terms of OOP is A-OK. :)
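One hedged way to picture the grouping question above, as a C++ sketch (the names are illustrative, not an existing design): a per-voice subgraph acts as a prototype that gets cloned once per voice of polyphony, a voice group bundles the clones, and an instrument can hold several independent groups for layering:

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Hypothetical sketch: per-voice blocks live in a cloneable subgraph.
struct VoiceGraph {
    virtual ~VoiceGraph() {}
    virtual std::unique_ptr<VoiceGraph> clone() const = 0;
    virtual float render() = 0;  // one output sample for this voice
};

struct SawVoice : VoiceGraph {
    float phase = 0.0f, incr;
    explicit SawVoice(float i) : incr(i) {}
    std::unique_ptr<VoiceGraph> clone() const override {
        return std::make_unique<SawVoice>(incr);
    }
    float render() override {
        phase += incr;
        if (phase > 1.0f) phase -= 2.0f;
        return phase;
    }
};

// A group clones the prototype once per voice of polyphony.
struct VoiceGroup {
    std::vector<std::unique_ptr<VoiceGraph>> voices;
    VoiceGroup(const VoiceGraph& proto, int polyphony) {
        for (int i = 0; i < polyphony; ++i)
            voices.push_back(proto.clone());
    }
};

// Per-instrument blocks (MIDI event mapper, global effects) would sit
// at this level; a JX-10-style instrument would hold two groups.
struct Instrument {
    std::vector<VoiceGroup> groups;
};
```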
Anders Dahnielson

Ardour2, Qtractor, Linuxsampler, M-AUDIO Delta 1010, Axiom 61, Korg D12, AKAI S2000, E-MU Proteus 2k, Roland R-5, Roland HP 1300e, Zoom RFX-1000, 4GB RAM x86_64 Intel Pentium Dual 1.80GHz Gentoo Linux


Post by dahnielson » Fri Mar 28, 2008 1:31 am

Consul wrote:EDIT: Well, we've been talking about control signals as audio-rate signals, so there's no reason why they shouldn't be the basic 32-bit float like the audio signals. What other kinds of data would be handy for any Omnibus processing blocks?
Agree. Was thinking about the 32-bit float suggestion myself today.

I need some further sleep and thunkwork before giving a more definite suggestion. My gut sense is: as few as possible. For instance, let both control signals and audio signals be 32-bit float. Heck, they need not even be in the -1.0f to 1.0f range; of course, a signal needs to be scaled to that range before being used by something expecting it. But give the instrument designer enough rope to hang himself (if he wishes)! So other than 32-bit float, I would say a string type.
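The scaling step mentioned above could be as small as this sketch: a 32-bit float control signal in an arbitrary known range is mapped into -1.0f..1.0f before reaching a block that expects that range (the function name is made up for illustration):

```cpp
#include <algorithm>
#include <cassert>

// Sketch: map a control value from a known [lo, hi] range into the
// bipolar [-1.0f, 1.0f] range a consumer block expects; out-of-range
// input is clipped rather than allowed through.
float to_bipolar(float value, float lo, float hi) {
    float normalized = (value - lo) / (hi - lo);  // 0..1
    float bipolar = normalized * 2.0f - 1.0f;     // -1..1
    return std::clamp(bipolar, -1.0f, 1.0f);
}
```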

lowkey
User
Posts: 69
Joined: Thu Jan 24, 2008 2:11 am


Post by lowkey » Fri Mar 28, 2008 1:36 am

What about support for graphics tablets?

http://en.wikipedia.org/wiki/Wacom#Intuos

Different "brushes" could affect the sound being worked on.


Post by dahnielson » Fri Mar 28, 2008 1:40 am

From the UI perspective, I had in mind three types of screens for interaction:

* Instrument mode: The interaction mode used by users of the virtual instruments created with Omnibus. Can look like almost anything. The user needs no knowledge of how it works "under the hood".

* Engine editor: The network node editor where you build the engine for your virtual instrument by connecting blocks together and typing code into blocks.

* User interface editor: Used to create a nice interface for the engine that will face the user in the "Instrument mode".

The generated virtual instrument "ui" and "engine" parts will be decoupled in such a way that the "ui" is aware of the "engine" but not the other way around.
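A hedged sketch of that one-way coupling in C++ (names illustrative): the engine exposes a plain parameter interface and never references any UI code, so it can run headless, while a UI widget holds a reference to the engine and drives it:

```cpp
#include <cassert>
#include <map>
#include <string>

// Sketch: the engine side knows nothing about the UI.
struct Engine {
    std::map<std::string, float> params;
    void set(const std::string& name, float v) { params[name] = v; }
    float get(const std::string& name) const { return params.at(name); }
};

// The UI side depends on Engine, never the other way around: a knob
// widget forwards its movements to a named engine parameter.
struct KnobWidget {
    Engine& engine;
    std::string param;
    void turned(float v) { engine.set(param, v); }
};
```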
Last edited by dahnielson on Fri Mar 28, 2008 1:41 am, edited 1 time in total.


Post by dahnielson » Fri Mar 28, 2008 1:40 am

lowkey wrote:What about support for graphics tablets?

http://en.wikipedia.org/wiki/Wacom#Intuos

Different "brushes" could affect the sound being worked on.
That's an idea for the UI interaction. Could be a widget.


Post by dahnielson » Fri Mar 28, 2008 1:46 am

I'm going to start a laundry list of available libraries and code we can take advantage of.

* Aubio
* Rubber Band
* SndObj
* Zita Convolver (Fons' FFTW-based convolution library)
* FlowCanvas
* Cairo

The GUI for the instrument mode could be drawn using Cairo (which is now used by Gtk/Gtkmm by default, AFAIK); I believe that's what FlowCanvas utilizes.

Now, I need to head to bed!

Good night, and good luck.


Post by dahnielson » Fri Mar 28, 2008 1:52 am

Consul wrote:EDIT: Is it possible for a master object to define a function that can pass data between sub-objects? If we define a master class for Omnibus, and then put the actual processing blocks in subclasses, could we make a function in the master class that can pass a sample (whether audio or control) from the output of one block to the input of another?
Will post more on it later. In the meantime, for inspiration, look at how K-3D (the best C++ code on the net) handles its Visual Pipeline and how JACK pulls data through its network.
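A rough C++ sketch of the pull model the JACK comparison points at (not JACK's actual API, and not K-3D's pipeline code — the names are made up): each node pulls a buffer from its upstream neighbour on demand, so a request at the sink recursively drags data through the whole network:

```cpp
#include <cassert>
#include <vector>

// Sketch: pull-based processing. Asking a node for nframes makes it
// ask its upstream node first, then transform the result.
struct Node {
    Node* upstream = nullptr;
    virtual ~Node() {}
    virtual std::vector<float> pull(int nframes) {
        std::vector<float> buf = upstream
            ? upstream->pull(nframes)
            : std::vector<float>(nframes, 0.0f);  // silence at the edge
        return transform(buf);
    }
    virtual std::vector<float> transform(std::vector<float> buf) { return buf; }
};

// A source has no upstream; it generates its buffer directly.
struct Source : Node {
    std::vector<float> pull(int nframes) override {
        return std::vector<float>(nframes, 1.0f);  // constant test signal
    }
};

// A processing block only overrides transform; pulling is inherited.
struct Half : Node {
    std::vector<float> transform(std::vector<float> buf) override {
        for (float& s : buf) s *= 0.5f;
        return buf;
    }
};
```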


Post by Consul » Fri Mar 28, 2008 2:02 am

EDIT: In the time it took me to write this reply, there are already... Five posts since? Wow...
dahnielson wrote:(snip) ... Some "blocks" will be per instrument (like MIDI event mapper) while other will be per voice (like sample players, filters, etc.) ... (snip)
Hrm, that's a very good point. We do need to make sure we can separate blocks and signals that are per-instrument and per-voice. I think Reaktor does it simply by creating the audio and MIDI ins and outs in the master application, with its own routing, and then each voice is made up of what you create on the canvas. We would need to handle it more gracefully than that, I think.

Here's a thought. How about running the base application as two threads (not counting GUI here): one for instrument-level signals and blocks, and one for voice-level signals and blocks. Then we could use sockets to communicate between the two levels. Well, it's a thought, anyway.
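A hedged in-process variant of that two-level idea, using a shared queue instead of sockets (a real audio thread would more likely use a lock-free ring buffer, as JACK does, since a mutex can block; the names here are illustrative): the instrument-level thread posts events, and the voice-level thread drains them:

```cpp
#include <cassert>
#include <mutex>
#include <queue>

// Sketch: a simple shared queue between the instrument-level and
// voice-level threads. The mutex keeps it correct but is not
// real-time safe; a lock-free FIFO would replace it in practice.
struct EventQueue {
    std::queue<float> q;
    std::mutex m;
    void push(float v) {            // called by the instrument thread
        std::lock_guard<std::mutex> lock(m);
        q.push(v);
    }
    bool pop(float& v) {            // polled by the voice thread
        std::lock_guard<std::mutex> lock(m);
        if (q.empty()) return false;
        v = q.front();
        q.pop();
        return true;
    }
};
```

Sockets would work too, but an in-process queue avoids the kernel round trip on every event.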


Post by dahnielson » Fri Mar 28, 2008 2:12 am

Consul wrote:EDIT: In the time it took me to write this reply, there are already... Five posts since? Wow...
I'm manic.
