A draft whitepaper of my parameterization idea

Consul
Moderator
Posts: 189
Joined: Wed Jan 23, 2008 11:19 pm
Location: Port Huron, Michigan, USA

Re: A draft whitepaper of my parameterization idea

Post by Consul » Fri Mar 28, 2008 2:25 am

Well, I'm the one who wanted a discussion. :D Believe me, I'm happy with this. I'll just have to make sure I quote the more important stuff.
Darren Landrum

User avatar
Consul
Moderator
Posts: 189
Joined: Wed Jan 23, 2008 11:19 pm
Location: Port Huron, Michigan, USA
Contact:

Re: A draft whitepaper of my parameterization idea

Post by Consul » Fri Mar 28, 2008 4:23 am

Something else I've thought about is the idea that GUI widgets or other elements could be "attached" to the use of any particular block in a synth design. Say I code a block that makes a Bezier-curve based oscillator. I could also make a widget to go with it that allows users to make new curves, and also allows for loading and saving of B-curve waveforms. The classics (sawtooth, square, pulse, triangle, sine) would be available "from the factory", with a few others perhaps. When one uses the B-curve osc block, the widget for it will appear in the GUI editor.


Post by Consul » Fri Mar 28, 2008 1:36 pm

I do have a serious question, though.

Do you really think we can accomplish all of this in a reasonable amount of time? I'm sitting here, thinking that just Omnibus and a GUI will be hard enough, but the idea of bolting on Reaktor-like functionality (which I would love to do, don't get me wrong) with custom GUI building in-app seems awfully ambitious.

Of course, being open source, we do have one advantage NI never had: we can leverage existing code. As it turns out, there is plenty of existing code we can leverage to build a lot of functionality quite quickly.

That brings up the final question. I really want to pursue this design we're brainstorming here. Should we start a new project for it eventually, when we feel we have enough of a design? Or would Benny and Christian like to roll the effort into the new LinuxSampler? I have little doubt they're reading this thread. ;)

PS - We should consider changing the name of Omnibus, as I don't feel like a trademark fight with Omnisphere. Spectrasonics will no doubt think of the same pun prior to release.

dahnielson
Moderator
Posts: 632
Joined: Wed Jan 23, 2008 11:25 pm
Location: Linköping / Tranås, Sweden

Post by dahnielson » Fri Mar 28, 2008 1:49 pm

FOSS isn't a product, it's a process. There's no inherent deadline other than the point when people start to give up on a project. So yes, the "in a reasonable amount of time" part does factor in somewhat. I think the hardest part will be getting the concept worked out to a point where it works both in theory and in practice.

Don't worry too much about the name. It was a joke to begin with, but it actually turned out to be a descriptive name, derived from the fact that both audio and control signals (hence the "omni") can share the same connections (hence the "bus"). It's at least a good code name when talking about the concept.

EDIT: Just want to stress that the concept is separate from implementation. How we implement it is currently of lesser importance, because the same concept can in theory be implemented in several ways by several different people.
Anders Dahnielson

Ardour2, Qtractor, Linuxsampler, M-AUDIO Delta 1010, Axiom 61, Korg D12, AKAI S2000, E-MU Proteus 2k, Roland R-5, Roland HP 1300e, Zoom RFX-1000, 4GB RAM x86_64 Intel Pentium Dual 1.80GHz Gentoo Linux


Post by Consul » Fri Mar 28, 2008 1:57 pm

Okay, Omnibus it is until we either think of a better name or get hit with a cease and desist. :mrgreen:

As for the other stuff, I guess it's just "charlie mike" (continue mission :) ) at this point. I'm not trying to be an upstart or force things, but eventually, I'd like to try to make the vision actually work. I don't have a good metric for when we can start coding.

What does Omnibus have to be able to do, independent of the interface?
  • Route and pass signals around
  • Allow for the easy programming of new processing blocks (at the code level)
  • Generate warnings and errors when it detects potential disaster from a connection being made?
I have to hurry out now, but I'll likely amend this list later.
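The three bullets above could be sketched roughly like this (all names hypothetical): a block base class anyone can subclass, and a connect() that validates at connection time.

```cpp
#include <cassert>
#include <stdexcept>

// Hypothetical block interface: each block declares its sample rate and
// processes one sample per clock tick.
struct Block {
    int sample_rate;
    explicit Block(int rate) : sample_rate(rate) {}
    virtual float process(float in) = 0;
    virtual ~Block() = default;
};

// "Easy programming of new processing blocks": a new block is just a subclass.
struct Gain : Block {
    float amount;
    Gain(int rate, float amount) : Block(rate), amount(amount) {}
    float process(float in) override { return in * amount; }
};

// "Warnings and errors": validate a connection when it is made, e.g. refuse
// to wire together blocks running at different sample rates.
void connect(const Block& src, const Block& dst) {
    if (src.sample_rate != dst.sample_rate)
        throw std::runtime_error("connection refused: sample-rate mismatch");
}
```

Routing itself would then be a matter of iterating the connected blocks once per sample, as discussed below.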


Post by dahnielson » Fri Mar 28, 2008 2:04 pm

I should add that when the concept is worked out to a point where there's confidence it will work:

* Divide and conquer the concept, implementation-wise.
* Begin with the simplest thing that can possibly work.


Post by Consul » Fri Mar 28, 2008 6:46 pm

I had a conversation with a friend of mine on AIM about the challenges of passing signals around in an application, and he had some interesting ideas coming from a networking perspective (packet-based signals). I don't know if it'll be useful, but I present it here anyway, with his permission, pasted from the conversation with some editing here and there to cut out the unrelated stuff.

-----

(01:13:50 PM) Darren: As for me, I'm researching "tokens" in relation to programming. The CLAM audio DSP guys referred to "passing tokens around" in terms of how their system works, and the stuff I'm finding on tokens in programming doesn't fit that.
(01:14:13 PM) cavehamst: huh, that could mean anything
(01:14:43 PM) cavehamst: passing pointers to data structures, some sort of queuing system, anything
(01:14:57 PM) Darren: Yeah, I thought so.
(01:16:46 PM) Darren: Another fellow and myself are brainstorming a synthesis and sampling program that will allow for the routing of control and audio signals, treat control signals as audio signals (that bit needs a bit of explaining), and allow for the creation of custom GUIs by sound and synth designers.
(01:17:25 PM) cavehamst: hmm, interesting
(01:17:56 PM) Darren: My main contribution is the "treating control signals as audio signals" part.
(01:18:52 PM) Darren: My thought is, if control signals can be routed in the same way as audio, and can be processed by DSP blocks, then a few simple DSP functions could be combined to create more complex functions.
(01:19:53 PM) Darren: But it would do this for control signals, so things like LFOs, envelopes, and lots of different types of parameter modulation could be created from building blocks.
(01:23:59 PM) cavehamst: so, you are going to wrap the audio packets with control data or?
(01:24:34 PM) Darren: Oh, for God's sake! Why didn't I think of that?!
(01:25:13 PM) Darren: Well, the thing is, they aren't packets. We're running the system in real-time as a clock ticks at the sample rate.
(01:26:05 PM) Darren: But the control signals will look just like audio signals, though with different periodic behavior.
(01:26:23 PM) cavehamst: well, at the base level, it's just a stream of numbers. you could packetize that, 'tokenize' if you would, and pass it through kind of like TCP/IP data with headers
(01:26:53 PM) cavehamst: provided that the audio stream is moving slow enough that your processor has the overhead to handle chunks, which, it should, I would think
(01:27:11 PM) cavehamst: that's the method employed in cell phone handsets, chunking out packets of data back and forth
(01:27:22 PM) Darren: There's one important reason that we can't: latency. When you play a note on the keyboard, you have to hear it start right away. Packetizing the data would generate too much delay from initial note-on.
(01:27:28 PM) cavehamst: of course, the core reason for doing that on a cell network is to handle radios dropping data, but hey
(01:28:31 PM) cavehamst: well, the packets could be really small, chop them up every millisecond or so. i can't discern anything under about 25ms as being slower or faster than 1ms
(01:28:52 PM) Darren: At 44.1khz, the packets would have to be 128 samples large (a sample would be a 32-bit float).
(01:30:18 PM) cavehamst: eh, just an idea. you have to tag the data somewhere, maybe by the control signal referencing an index pointer in the audio stream as it goes through
(01:30:50 PM) Darren: Well, the idea is, the control stream would hook up to a parameter that will change as the stream changes.
(01:31:04 PM) Darren: The parameter could be, for example, the cutoff of a filter.
(01:31:12 PM) cavehamst: how often would the control signal need to change? could you just preface a control signal in front of a train of audio with the implication of, 'do this until next control frame'?
(01:31:28 PM) Darren: And the filter cutoff would go up or down as the control signal does.
(01:32:00 PM) Darren: That's a thought. Typically, you can run control signals at about 1/10th the rate of audio, and not notice it.
(01:32:29 PM) Darren: I want to avoid that, though, as there are certain advantages to running control signals at audio rates.
(01:32:43 PM) Darren: Audio-rate frequency and amplitude modulation being among them.
(01:33:14 PM) Darren: That's essentially a whole other form of sound synthesis available as a side effect. :-)
(01:33:16 PM) cavehamst: well, really small control signals tacked onto small audio trains, kind of like packetizing them, shouldn't hurt throughput much
(01:34:02 PM) cavehamst: heh, have something like a baudot tokenizer to strip out the control signal automagically
(01:34:09 PM) Darren: The destinations are also different. Audio signals will go to the input of a function, control signals will go to a parameter of the function.
(01:34:35 PM) Darren: But I guess that can be dealt with once the control signal is stripped out.
(01:34:37 PM) cavehamst: you'd need some way of syncing the two
(01:34:54 PM) cavehamst: if they dont arrive right at the same time and are processed in parallel
(01:35:00 PM) Darren: Well, yeah, the master clock would be the sync source, at the sample rate.
(01:35:06 PM) cavehamst: ok
(01:35:34 PM) Darren: Hrm...
(01:35:43 PM) Darren: You are making me think, though, and that's good. :-)
(01:36:28 PM) cavehamst: well, my one piece of advice for tokenizing is this: the best way to handle asynchronous data is to have the packets be prefaced by a special flag, and end with a different special flag. that way, the pre-processor doesn't have to calculate packets, it just finds whole ones quickly and shuffles them off to a handler
(01:36:46 PM) cavehamst: i dealt with a serial protocol once that had to be calculated as you go
(01:37:22 PM) cavehamst: if the 8th bit was set, then expect x more bytes, and the nth byte told you how many more bytes to eat, and then there was a checksum, and, gah
(01:38:03 PM) cavehamst: you ended up having to tokenize and deal with the data in realtime as it came in, instead of being able to take advantage of threads and parallelize the incoming data, and process the data packets in parallel
(01:38:08 PM) cavehamst: anyway
(01:38:24 PM) cavehamst: i've spent too much of my time dealing with asynchronous protocols :P
(01:38:49 PM) Darren: I don't think a signal packet method is the way to go, but it's always good to listen to other ideas.
(01:39:25 PM) cavehamst: hehe
(01:39:34 PM) Darren: But there are ideas in there that might find use. :-)
(01:39:37 PM) cavehamst: i'm full of ideas, whether any of them work, that's open for debate :P
(01:39:44 PM) Darren: Especially the idea of multiplexing signals.
(01:41:21 PM) Darren: The other thought is that splitting audio data into power-of-two-sized packets makes them automatically FFT-ready.
(01:42:19 PM) Darren: But the convolution library we were going to use allows for variable-frame FFT, which is a great boon to real-time synthesis.
(01:42:33 PM) Darren: variable frame size, I mean
(01:43:13 PM) Darren: Any DSP block that needs to do FFT will likely handle its own windowing.

-----

And that's it in a nutshell. :)
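The "treating control signals as audio signals" idea from the chat can be shown with a one-pole filter whose coefficient arrives per-sample, exactly like an audio input (a sketch, with hypothetical names):

```cpp
#include <cassert>

// One-pole lowpass whose coefficient is fed per-sample from a control
// stream, just like audio. coeff in (0, 1]: at 1, the output tracks the
// input exactly; smaller values smooth more.
struct OnePole {
    float y = 0.0f;
    float process(float in, float coeff) {
        y += coeff * (in - y);
        return y;
    }
};
```

Because the coefficient is read every sample, any audio-rate stream can drive it, which is what makes audio-rate filter modulation fall out of the design for free.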


Post by Consul » Fri Mar 28, 2008 7:16 pm

Actually, Hamster did help me realize something important.

What is Omnibus, exactly? I have an answer for that, now.

Omnibus is a signal routing library. Its purpose is to take a signal from one place and send it to another. It allows for intermediate processing on the signals, which can range from scaling the signal to full-blown DSP, if desired. Signals can be routed between internal objects, to and from sockets, to and from JACK, and perhaps also pipes and named pipes. Omnibus processes signals at the sample level, synced to a clock source, which can be generated internally or connected from the outside.
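A minimal sketch of that description (names hypothetical): a wire holds the current sample, and a master clock pumps every connection, with its intermediate processing, once per tick.

```cpp
#include <cassert>
#include <functional>
#include <vector>

// A wire is a one-sample slot a signal travels through.
struct Wire { float value = 0.0f; };

struct Engine {
    // Each hop reads its source wire, optionally processes, writes its sink.
    std::vector<std::function<void()>> hops;
    long samples = 0;  // driven by the master clock, one tick per sample
    void tick() {
        for (auto& hop : hops) hop();
        ++samples;
    }
};
```

An external clock source would simply call tick() instead of the engine generating its own timing.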

Is that a good enough place to start?


Post by dahnielson » Fri Mar 28, 2008 7:46 pm

I'm posting this without yet having read any of the new posts above:

* Did you check out K-3D? We're using the Observer Pattern for almost all communication, in particular the Visualization Pipeline (the network that makes up a scene/document), implemented using sigc++, which is excellent for networks built at run-time. Take a look at libsigc++.

* SndObj also handles signal routing specified at run-time. But I haven't looked at how they do it.

* The same goes for JACK, Pure Data... the list goes on.

* FAUST is compile-time, and because it's functional, it is probably passing pointers or references to values (which is quicker than passing the actual values) between functions compiled together.
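For anyone unfamiliar with the pattern: the sketch below is a minimal stand-in for the signal/slot (Observer) mechanism libsigc++ provides. This is not the sigc++ API, just the shape of the idea; sigc::signal adds type-safe connections, disconnection, and lifetime tracking on top of it.

```cpp
#include <cassert>
#include <functional>
#include <vector>

// Observers (slots) register with a signal; emitting notifies all of them.
template <typename T>
struct Signal {
    std::vector<std::function<void(T)>> slots;
    void connect(std::function<void(T)> slot) { slots.push_back(std::move(slot)); }
    void emit(T value) {
        for (auto& slot : slots) slot(value);
    }
};
```

A run-time-built network falls out naturally: each node exposes signals, and making a connection is just calling connect().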


Post by Consul » Fri Mar 28, 2008 9:44 pm

dahnielson wrote:* Did you check out K-3D? We're using the Observer Pattern for almost all communication, in particular the Visualization Pipeline (the network that makes up a scene/document), implemented using sigc++ which is excellent for networks built at run-time. Take a look at libsigc++.
I went to the K-3D site, but short of reading through all of the source code, I don't really know exactly what I'm looking for. On the other hand, I found the site for libsigc++ and am reading through it now. It looks quite useful.

EDIT: One question that comes to mind is, is sigc++ real-time safe? Of course, that question may also just expose me as an inexperienced moron, but better a dumb question than to live in ignorance.
dahnielson wrote:* The same goes for JACK, Pure Data... the list goes on.
JACK handles signals between applications. Omnibus would be for within a single application. Apparently libsigc++ does that as well, but at a lower level, so perhaps it could be built upon to make a signal routing library specifically for audio. It's a thought, anyway.
dahnielson wrote:* FAUST is compile-time, and because it's functional, is probably passing pointers or references to values (which is quicker than passing the actual values) between functions compiled together.
I've been thinking about FAUST integration lately, as I applied to GSoC to do the FAUST integration for CLAM. Because FAUST compiles to C++ source code, it would probably not be hard to create a system where a new function is coded in FAUST, and then compiled to C++, which is then included into a DSP block "shell" which can then be compiled into a functioning block. That's my high-level thinking, anyway.
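To make the "shell" idea concrete: the mock class below stands in for FAUST compiler output (the real generated class has roughly this init-plus-buffer-compute shape, though exact names and signatures vary with the architecture file used), and the shell adapts it to the host's block interface.

```cpp
#include <cassert>

// Mock stand-in for a FAUST-generated DSP class -- NOT real compiler output.
struct GeneratedDsp {
    float gain = 0.5f;
    void init(int /*sample_rate*/) {}
    void compute(int count, float** inputs, float** outputs) {
        for (int i = 0; i < count; ++i)
            outputs[0][i] = inputs[0][i] * gain;
    }
};

// The "shell": wraps the generated compute() in the host's buffer format so
// the result can be dropped in as an ordinary processing block.
struct DspBlockShell {
    GeneratedDsp dsp;
    void run(float* in, float* out, int n) {
        float* ins[1] = { in };
        float* outs[1] = { out };
        dsp.compute(n, ins, outs);
    }
};
```

The build step would then be: FAUST source in, generated C++ out, compile against a shell like this, and the block appears in the palette.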