Posted: Fri Mar 21, 2008 3:34 pm
If you want to start a convolver with a specific set of IRs and don't need the ability to start/stop individual IRs, you can create a single configuration file for a convolver with multiple channels and use a different IR for each channel. E.g. using two stereo IRs to create a convolver with two stereo channels, basically two reverbs in one:
Code:
# /convolver/new  <inputs> <outputs> <partition size> <max IR size>
/convolver/new 4 4 256 204800
# /impulse/read  <input> <output> <gain> <delay> <offset> <length> <channel> <file>
/impulse/read 1 1 1 0 0 0 1 ir1.wav
/impulse/read 2 2 1 0 0 0 2 ir1.wav
/impulse/read 3 3 1 0 0 0 1 ir2.wav
/impulse/read 4 4 1 0 0 0 2 ir2.wav
Yes, you need one buss per reverb. In the above example I would create two stereo busses and pre-fader insert each reverb into them. The buss fader then controls how much of the return from each reverb contributes to the final mix.
Say ir1.wav is used for the first and second sections of the orchestra and ir2.wav for the third and fourth; I then create a post-fader send from each section's fader to the respective reverb. The send fader (in Ardour it's located in each send's pop-up window) controls how much of the section's dry signal is sent to the reverb and lets you balance the amount of reverb between the sections sharing the same reverb.
This is just like how you would do it in a hardware studio, except that in Ardour there's no limit on the number of sends and returns, which would otherwise be hardwired (and are in other DAWs!).
If you start JConv and LinuxSampler before opening the project in Ardour, Ardour will then automatically hook them up to all the audio sources.
Posted: Fri Mar 21, 2008 4:14 pm
Anders, thanks mate.
I finally got the routing worked out, and it runs well. Once I've got the IR level right in the buss, it's easy to fine-tune the send as a wet/dry mix.
I appreciate you helping out.
And the idea of one instance of jconv loaded up with multiple verbs sounds good. I'm going to see how many I can add, but 4's good and 5 would be delightful!
Onwards and upwards.
Posted: Fri Mar 21, 2008 9:57 pm
Ok Anders, a quick update.
jconv works with 4 impulses, and the routing is fairly easy. The sound is dynamic, clear, and doesn't fall over in the busy passages. CPU is no problem, and it's chewing very little memory. The wet/dry adjustment is quick, and easily noticeable when going too far one way or the other. (Always good news.) And Fantasia is finally stable on my setup, as qsampler is and was. It's been running without a glitch all day.
I'm doing some more experimenting, but I'd like to say thanks for the help. I took a big step forward today, with your assistance.
At last, a decent convolution programme to go in the toolbox.
p.s. Do you have any idea what fconv does? It was in the same package.
Posted: Fri Mar 21, 2008 10:54 pm
Ok Anders, i have another question.
Given we both have jconv up and running, and working ok, what about early reflections?
How do we incorporate early reflections, and, importantly, a control mechanism (i.e. a gain slider) to 'ride' the time balance between ER and tail?
Another buss is OK, but what about timing? Change the latency to suit?
p.s. I should add here that I'm trying this stuff too, not just firing questions. Currently experimenting with latency/delay, but I'm wondering if this wouldn't be easier to handle directly 'in track' (given we're both using Ardour now).
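One way I've been thinking about getting separate ER and tail gain controls without touching latency: split the IR file itself at a time boundary and load each half into its own convolver channel, keeping the tail's leading silence so both halves stay time-aligned. A minimal numpy sketch (the split point and crossfade length are arbitrary choices of mine, not anything JConv prescribes):

```python
import numpy as np

def split_ir(ir, split_sample, fade_len=64):
    """Split an impulse response into early-reflection and tail segments.

    A short linear crossfade around split_sample avoids a hard
    discontinuity. The tail keeps its leading zeros, so both segments
    stay time-aligned: mixing them back at unity gain reconstructs
    the original IR exactly.
    """
    n = len(ir)
    er_env = np.ones(n)
    er_env[split_sample:split_sample + fade_len] = np.linspace(1.0, 0.0, fade_len)
    er_env[split_sample + fade_len:] = 0.0
    er = ir * er_env            # early reflections only
    tail = ir * (1.0 - er_env)  # tail only, zero-padded at the front
    return er, tail

# toy IR: exponentially decaying noise burst at 48 kHz
rng = np.random.default_rng(0)
ir = rng.standard_normal(48000) * np.exp(-np.arange(48000) / 8000.0)

er, tail = split_ir(ir, split_sample=3840)  # split at ~80 ms
# unity-gain recombination is lossless, so two gain sliders on the
# two convolver returns behave as an ER/tail balance control
assert np.allclose(er + tail, ir)
```

Since the tail segment keeps its original offset as leading silence, no extra delay or latency compensation is needed on either buss.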
Posted: Fri Mar 21, 2008 11:53 pm
Well, first reflection, early reflections and reverb tail are all part of the impulse response (the direct sound, i.e. the first or initial samples of the IR, is usually edited out since it will be added to the mix as the dry signal). But I think it would be possible to create a convolution reverb that massages the IR (like envelope shaping); I believe that's what Fons' Aella is all about in comparison to JConv. Another idea would be to analyze and resynthesize IRs, e.g. split ERs and RTs into different IRs or high-pass filter them (like Numerical Sound's "Hollywood Impulse Responses", a brilliant product by the way) so their individual contribution to the mix can be controlled. This is why I'm also interested in synthesizing IRs.
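To illustrate the resynthesis idea, here's a minimal numpy sketch that splits an IR into complementary low and high bands with a simple one-pole filter, so each band could be loaded into its own convolver channel and mixed independently. This is just a toy filter of my own choosing, not Numerical Sound's actual process:

```python
import numpy as np

def onepole_split(ir, cutoff_hz, fs=48000):
    """Split an IR into complementary low/high bands.

    lp is a one-pole low-pass of the IR and hp = ir - lp, so the two
    bands sum back to the original exactly; their gains can then be
    ridden independently in the mix.
    """
    a = np.exp(-2.0 * np.pi * cutoff_hz / fs)  # one-pole coefficient
    lp = np.empty_like(ir)
    state = 0.0
    for i, x in enumerate(ir):
        state = (1.0 - a) * x + a * state
        lp[i] = state
    hp = ir - lp  # complementary high band
    return lp, hp

# toy IR: decaying noise
rng = np.random.default_rng(1)
ir = rng.standard_normal(4800) * np.exp(-np.arange(4800) / 800.0)
lp, hp = onepole_split(ir, cutoff_hz=500.0)
assert np.allclose(lp + hp, ir)  # bands recombine losslessly
```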
Posted: Sat Mar 22, 2008 5:51 pm
Something I would like to accomplish by synthesizing IRs is an interface that lets you position sound sources and microphones in a room (a sound source becomes a JACK input and microphones become outputs). An IR is synthesized for each sound source/microphone pair. The room is defined as a simple 3D model.
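A classic way to synthesize such IRs from a simple room model is the image-source method. The sketch below is a deliberately crude first-order version (direct path plus one mirror image per wall, single-tap reflections, a made-up uniform absorption value); a real implementation would need higher-order images, frequency-dependent absorption and fractional-delay taps:

```python
import numpy as np

C = 343.0  # speed of sound in air, m/s

def shoebox_ir(room, src, mic, fs=48000, absorption=0.3, length=0.2):
    """First-order image-source sketch for an axis-aligned 'shoebox' room.

    Each propagation path (direct + 6 wall images) becomes a single
    tap with 1/r attenuation; reflected taps are scaled by
    (1 - absorption). room, src, mic are (x, y, z) in metres.
    """
    src, mic, room = (np.asarray(v, float) for v in (src, mic, room))
    paths = [(src, 1.0)]  # direct path at unity gain
    for axis in range(3):
        for wall in (0.0, room[axis]):
            img = src.copy()
            img[axis] = 2.0 * wall - img[axis]  # mirror source across wall
            paths.append((img, 1.0 - absorption))
    ir = np.zeros(int(length * fs))
    for pos, gain in paths:
        r = np.linalg.norm(pos - mic)
        tap = int(round(r / C * fs))            # delay in samples
        if tap < len(ir):
            ir[tap] += gain / max(r, 1e-3)      # 1/r distance attenuation
    return ir

ir = shoebox_ir(room=(5.0, 4.0, 3.0), src=(1.0, 1.0, 1.5), mic=(4.0, 3.0, 1.5))
```

The resulting array would then be written out as a wav file, one IR per source/microphone pair, and loaded into the convolver.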
Posted: Sat Mar 22, 2008 5:56 pm
A real-time ray-traced reverb is what you're after? I would think that a simple impulse would be the input for the system if you want to synthesize IRs for use in a convolution reverb. A real-time system would be pretty cool, though. Or do you want to integrate the renderer with the convolution reverb, so adjustments can be made, a button clicked, and a new IR made and loaded? That would also be pretty cool.
Posted: Sat Mar 22, 2008 6:19 pm
No, not a real-time one. The IRs would only be regenerated when you change the microphone/source position (or change the room), so no animation (though of course you could "animate" a microphone by positioning several and crossfading between them).
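Since convolution is linear, crossfading the outputs of two static convolvers is exactly equivalent to convolving with the crossfaded IR, which is why that "animated microphone" trick works. A quick numpy check of the equivalence:

```python
import numpy as np

def conv(x, ir):
    return np.convolve(x, ir)

rng = np.random.default_rng(2)
x = rng.standard_normal(256)    # dry signal
ir_a = rng.standard_normal(64)  # IR for microphone position A
ir_b = rng.standard_normal(64)  # IR for microphone position B
a = 0.3                         # crossfade position, 0..1

# crossfading the two convolver outputs...
mixed_outputs = (1 - a) * conv(x, ir_a) + a * conv(x, ir_b)
# ...equals convolving with the crossfaded IR (convolution is linear)
mixed_ir_out = conv(x, (1 - a) * ir_a + a * ir_b)
assert np.allclose(mixed_outputs, mixed_ir_out)
```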
My idea is to create simplistic room models in a 3D package like K-3D or Wings3D that can be loaded into the "IR reverb synthesizer". One idea would be to develop it completely in K-3D as a plugin.