frink wrote:Certainly, if we want to allow the musician who is performing with the final samples to use advanced breath control we must provide that through modelling.
The use of a breath controller doesn't require "modeling" as in "synthesis". It is quite interchangeable with the modulation wheel if you lack one (and its controller values can be edited/generated in the sequencer), but it makes more sense for a lot of instruments and frees up the modwheel for other purposes. The "modeling" controlled by any controller, be it keyboard, pitch wheel, mod wheel, sustain pedal, expression pedal or breath controller, should be "behavioral modeling" (usually achieved through scripting and/or layers/dimensions), which is applicable to all sample based instruments.
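Just to show how interchangeable the two are at the protocol level, here is a minimal Python sketch (using the mido library; the port names are made-up placeholders, so adjust them to whatever mido.get_input_names() reports on your system) that forwards breath controller data (CC 2) as mod wheel data (CC 1):
[code]
# Forward breath controller messages (CC 2) as mod wheel messages (CC 1).
# Runs forever; port names below are placeholder assumptions.
import mido

with mido.open_input('Breath Controller In') as inport, \
     mido.open_output('Sampler Out') as outport:
    for msg in inport:
        if msg.type == 'control_change' and msg.control == 2:
            # Re-send the breath value on CC 1 (mod wheel)
            outport.send(msg.copy(control=1))
        else:
            outport.send(msg)
[/code]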
frink wrote:However, most of the time the composer who use our sampled instrument does so because he does NOT know how to play that instrument with any sort of grace.
I hope you don't imply that your prospective audience is a bunch of raging monkeys randomly hammering their keyboards sans grace.
On a more serious note, using a breath controller to play a sampled horn is nothing like learning to play the actual horn (I know, I started playing the horn at 7). The only learning curve is the controller itself and how it affects the sampled instrument, in the same way you have to learn how to work the modwheel for a four-layer dynamic instrument in VSL (undoubtedly seriously sample based).
frink wrote:Up to this point we have been focused on capturing the sound of an instrument with the act of sampling. What I'm suggesting now is a paradigm shift to the capture of an actual performance of both instrument and musician.
Alex will probably now accuse me of nitpicking, but the act of sampling has always captured both the instrument and the performer, in addition to the room, microphone and pre-amplifier characteristics.
Don't get me wrong, I'm just yanking your chain. I see your point.
frink wrote:This is also partly because of the limitations of MIDI in its representation of performance. Sheet music still holds a much richer set of instructions than basic MIDI.
Agreed. I don't know if HD-MIDI (or whatever it will end up being called) will rectify it. Anyone here sitting on a link to a good description of it that doesn't require sawing off and giving MMA the upper part of your arm?
And apropos sheet music, everyone should take an inspirational look at the way LilyPond handles music.
frink wrote:I think that this is mostly due to the paradigm of the original people who created music electronically. They were not interested at the time in producing accurate representations of existing instruments but rather in creating new, previously non-existent sounds.
Yes. Remember that MIDI was created to control a bank of synths, replacing CVs and the infamous "wall of synths". Sampling was just in its infancy when MIDI was created in the early '80s. Sure, the Fairlight CMI had been released five years before publication of the MIDI 1.0 specification, but as a sampler it was fairly shitty despite its historical importance. Hardware samplers and sample based sound modules wouldn't become both good and affordable until the early '90s.
frink wrote:I predict that in the coming years sample technology will fork into two separate directions. There will still be a vein that focuses on real-time performance samples. However, more and more music technicians are dreaming about the power and grace of rendered sample splices with artificial intelligence applied to the performance. If, for example, I can send a MIDI file or a piece of sheet music and let the computer study it and apply some intelligent logic referring to its vast sample catalog, I am likely to have a much more realistic presentation than if the computer has to produce the sounds on-the-fly, not knowing what notes come before or after and having no cognisant understanding of the other instruments and their parts.
I don't see it as a necessary split, only a matter of sensible decoupling. If you have a very controllable real-time performance instrument, then an AI can do the performance using the same interface/protocol as any human would. Of course, machine-generated performance has been explored right, left and center in academic research, with tools like Csound, its predecessors and similar toolsets being developed over the years.
BTW, a MIDI file (or to be specific, an SMF) should be considered a recorded performance, not notation. A program like Rosegarden has a very clever custom event system (internally supplanting the MIDI events) to deal with both loose human performance values and the rigidity of sheet music values at the same time. On a second note, some of the most talented (and even less talented) sample based music "realizators" (wish that word existed in English) do "program" their pieces by performing each part and only doing minor editing of tempo maps, keyswitches and stray notes.
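Not Rosegarden's actual implementation, but here is a toy Python sketch of the general idea: keep the performed tick values from the SMF untouched, and derive the notation values from them by snapping to a grid.
[code]
# Toy sketch (not Rosegarden's code): every event carries both its raw
# performed tick value and a notation value snapped to a sixteenth grid.
PPQ = 480            # pulses per quarter note, a common SMF resolution
GRID = PPQ // 4      # sixteenth-note grid

def quantize(ticks, grid=GRID):
    """Snap a performed tick value to the nearest grid line."""
    return round(ticks / grid) * grid

# A sloppy human performance, in ticks:
performed = [2, 495, 948, 1430]
events = [{'performed': t, 'notated': quantize(t)} for t in performed]
# -> notated becomes 0, 480, 960, 1440 while the loose timing survives
[/code]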
frink wrote:In the beginning of synthesised music we focused on producing sounds from oscillators, using processing to shape them. Now I'm suggesting that we look at taking sound recordings from their original sources and attempt to micro-splice them together to produce performer-accurate representations of what is possible through that instrument. Essentially beat splicing at a micro level, with crossfades to hide the splicework.
Now we're talking!
Just a quick and very broad historical rundown for everyone's enjoyment, and not to be a nitpicking besserwisser:
It began back in the late 1800s and early 1900s with additive synthesis, but that turned out to be quite a burden for the creator/programmer of the virtual instruments (and a technical nightmare to make realistic). Attention then turned to subtractive synthesis, which was easy to implement and program but didn't produce any great realism (which was not necessarily the goal either). FM synthesis was introduced, but it was difficult to program for most people. Then sampling made inroads, and companies like E-MU knew how to combine the best of both worlds in terms of raw sampling and modulation from synthesis.

But all the great hardware samplers (there were some great AKAIs, Yamahas and E-MUs in the late '90s) were instantly killed by the release of the Nemesys GigaSampler, which was actually really basic (extremely basic compared to many of the hardware samplers available) and whose only differentiating feature was disk streaming capable of playing back really long samples (no more looping!). Native Instruments' release of Kontakt 2 rectified the situation by combining a flexible sampler with features such as scripting (for behavior) in addition to disk streaming. With instruments like Wallander's WIVI (although it's mainly a commercialization and improvement of academic techniques developed in the '80s) we have somewhat come full circle, back to where it started with additive synthesis, only now backed by the power of computers capable of reconstructing the instrument from actual samples.
frink wrote:I have no doubt that eventually we will get to the point of complete modelling synthesis.
Yes. But it has a long way to go, and the question is: will it be necessary? Especially with methods such as Tommasini's "sample modeling" and Wallander's analysis/resynthesis based on anechoic samples of instruments. The first one is in essence analogous to your "micro-splicing".
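For the curious, the core mechanism of such a micro-splice isn't mysterious. A rough numpy sketch, assuming two snippet arrays and an arbitrary 10 ms fade (a real tool would pick the fade length per transition):
[code]
# Butt two sample snippets together with a short equal-power crossfade
# to hide the seam. Sample rate and fade length are assumptions.
import numpy as np

def splice(a, b, sr=44100, fade_ms=10):
    """Join sample arrays a and b with an equal-power crossfade."""
    n = int(sr * fade_ms / 1000)               # crossfade length in samples
    t = np.linspace(0.0, np.pi / 2, n)
    fade_out, fade_in = np.cos(t), np.sin(t)   # cos^2 + sin^2 = 1: equal power
    overlap = a[-n:] * fade_out + b[:n] * fade_in
    return np.concatenate([a[:-n], overlap, b[n:]])
[/code]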
Just to clarify my previous language here and elsewhere:
* When I talk about synthesis, it includes sample based methods as a subset. They are just a form of synthesis/resynthesis using sampled waveforms as oscillators instead of generated ones (see the sketch after this list).
* When I talk about modulation, I'm speaking about a controller manipulating the parameter of some other controller (like my hand modulating the volume knob on the stereo for some amplitude modulation), which can be used for behavioral modeling. See my lauding of E-MU in one of the "Next Generation" threads.
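To make both bullets concrete, here is a small numpy sketch (everything in it is made up for illustration; a real sampler would interpolate and stream from disk):
[code]
# A sampled waveform used as an oscillator (looped single-cycle table),
# with a controller curve modulating its amplitude parameter.
import numpy as np

SR = 44100
cycle = np.sin(2 * np.pi * np.arange(64) / 64)  # stand-in for one sampled cycle

def sample_osc(freq, seconds, table=cycle, sr=SR):
    """Loop a sampled single cycle at a given pitch, like a primitive sampler."""
    phase = np.cumsum(np.full(int(sr * seconds), freq * len(table) / sr))
    return table[phase.astype(int) % len(table)]

tone = sample_osc(220.0, 2.0)
# "Modulation": a controller curve (here a slow ramp, think breath or CC 1)
# manipulating the amplitude of the sample based oscillator.
controller = np.linspace(0.0, 1.0, tone.size)
out = tone * controller
[/code]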
frink wrote:I just listened to the violin you refer to. This is very similar to stuff I've heard from Big Fish and London Solo Strings. I think the Stradivari tops it, but very good.
FYI, the Garritan Stradivari violin and Gofriller cello have been discontinued and the engineers behind it have moved on to take the technique further:
http://www.samplemodeling.com
frink wrote:Still, what I'm looking for is a way for solo musicians to participate in open source music by providing recordings of themselves playing their instruments that can be turned into sampled instruments. A true open orchestra that perhaps has chairs and everything. We need to provide a format that automates sample production and allows us to merely fine-tune the computer generated collection of samples. Much of what I wrote above came after writing a full specification for an open orchestra.
Yes, it has crossed my mind too. This is what currently fuels much of my research into sample recording (on a budget) utilizing an approximated free-field condition over a reflecting plane: making it possible for people around the world to produce samples in equivalent settings through a defined and standardized recording method.
Automation can be useful in the process, but I think manual editing will always be necessary to some degree (just like 3D replicators are cool but not always applicable). What the free and open source method can offer in the meantime is a distribution of the workload, so that the producer of samples doesn't necessarily need to edit anything him- or herself, similar to how translation and localization of FLOSS software are done today. Harness the power of the crowd!
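As a trivial illustration of the automated part (the threshold and timing values below are pure guesses that any real tool would expose as parameters), a first-pass Python splitter that chops a long take of individually played notes into candidate samples could look like this:
[code]
# Chop a long recording of separately played notes into individual
# samples by looking for stretches of silence between them.
import numpy as np

def split_on_silence(audio, sr=44100, threshold=0.01, min_silence=0.3):
    """Return (start, end) sample indices of non-silent regions."""
    loud = np.abs(audio) > threshold
    regions, start, gap = [], None, 0
    for i, is_loud in enumerate(loud):
        if is_loud:
            if start is None:
                start = i            # a new note begins
            gap = 0
        elif start is not None:
            gap += 1                 # count silence after the last loud sample
            if gap > min_silence * sr:
                regions.append((start, i - gap + 1))
                start, gap = None, 0
    if start is not None:
        regions.append((start, len(audio)))
    return regions
[/code]
Each region would then get a crowd-edited pass for tuning, naming and loop points, in the spirit of the distributed workflow above.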
frink wrote:I've thought of getting involved with the open orchestra project but it seems to have stagnated and may be better to start my own rather than seek to resurrect that one...
That's the right attitude!
It's always better to scratch your own itch as a start instead of waiting for someone else to do it for you. A project needs some traction to build momentum.