octal-dev

info on voice architecture (was re: simple modelling)


From: David O'Toole
Subject: info on voice architecture (was re: simple modelling)
Date: Mon Mar 12 14:42:02 2001

> It's almost identical to Geonik's Plucked String synth for Buzz, but
> with one important difference:  new plucks will cut off older ones. 
> I've thought of two ways (that don't involve allocating memory on new
> plucks) to fix this:
> 
> 1.  Allocate 16 or so buffers, and assign new plucks to them in some
> method.
>

Perfect question timing :-). 

This is what ox_channel is for. (I am renaming ox_track because the name
conflicts with another concept.) Let me walk you through how it works now.

Essentially, each voice (simultaneous note) of your machine is assigned
to a channel. You don't have to do any voice allocation; Octal will
assign note events to voices. All you have to do is keep, say, an
array of objects, each capable of producing one voice of your
machine. Every event (note on, note off, controller change, etc.)
comes with a channel number, so you just select the right object
before doing your ox_update work on it.
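Here is a minimal sketch of that dispatch, in C. The Voice and Event structs, NUM_VOICES, and dispatch_event are all assumptions for illustration, not the real Octal API; only the general shape (an array of per-channel voice objects, indexed by the event's channel number) comes from the text.

```c
#include <stddef.h>

#define NUM_VOICES 16   /* assumed fixed polyphony for the sketch */

typedef struct {
    int   in_use;   /* is this voice currently sounding? */
    float freq;     /* example per-voice state */
} Voice;

typedef struct {
    int channel;    /* every event carries its channel number */
    int note;       /* nonzero = note on, 0 = note off (simplified) */
} Event;

static Voice voices[NUM_VOICES];

/* Select the voice object for the event's channel and update it.
   In a real machine, this is roughly where your ox_update work
   on the selected voice would happen. */
static Voice *dispatch_event(const Event *ev)
{
    if (ev->channel < 0 || ev->channel >= NUM_VOICES)
        return NULL;                    /* ignore out-of-range channels */
    Voice *v = &voices[ev->channel];
    if (ev->note) {
        v->in_use = 1;
        v->freq = 440.0f;               /* placeholder: derive from ev->note */
    } else {
        v->in_use = 0;
    }
    return v;
}
```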

How you combine the output of each voice into the final output buffer is
up to you. During ox_work you might produce an output buffer from each
voice and then add them together; for more efficiency, you might
start with one blank buffer and have each voice add its output to that
buffer as it's being generated. (That avoids allocating a zillion
buffers.)
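The more efficient "add during generation" approach above can be sketched like this. Everything here is hypothetical scaffolding (a toy Voice that emits a constant level, a stand-in work() for the real ox_work); the point is only that all active voices accumulate into one shared buffer instead of each getting its own.

```c
#include <stddef.h>

#define NUM_VOICES 4
#define BUF_LEN    8

/* Toy voice: emits a constant level, standing in for real synthesis. */
typedef struct {
    int   in_use;
    float level;
} Voice;

/* Each voice adds its samples directly into the shared buffer,
   so only one output buffer is ever needed. */
static void voice_add(const Voice *v, float *buf, size_t n)
{
    for (size_t i = 0; i < n; i++)
        buf[i] += v->level;
}

/* Stand-in for the machine's ox_work: clear one blank buffer,
   then let every active voice accumulate into it. */
static void work(Voice *voices, float *buf, size_t n)
{
    for (size_t i = 0; i < n; i++)
        buf[i] = 0.0f;
    for (int c = 0; c < NUM_VOICES; c++)
        if (voices[c].in_use)
            voice_add(&voices[c], buf, n);
}
```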

How you respond to ox_channel() messages is also up to you. All it
really means is "prepare channel X for possible use real soon." If you
keep the objects around and don't need a lot of buffers, then you can
just set a channel's "in use" flag and not have to allocate/deallocate
memory when you receive channel messages. Matt, how do these ideas look
from your point of view?
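In the cheapest case, "prepare channel X" really is just a flag flip, with no allocation at all. A tiny sketch, again with assumed names (on_channel standing in for however your machine receives the ox_channel() message):

```c
#define NUM_VOICES 16   /* assumed fixed polyphony */

typedef struct {
    int in_use;         /* channel preparation just flips this */
} Voice;

static Voice voices[NUM_VOICES];

/* Called when Octal announces "prepare channel X for possible use".
   No memory is allocated or freed; the voice objects already exist. */
static void on_channel(int channel)
{
    if (channel >= 0 && channel < NUM_VOICES)
        voices[channel].in_use = 1;
}
```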

I will of course have much more detail when the manual is updated.  But
the basic idea is to create an object that captures the concept of one
voice in your machine, and decide how they will work together (mixing or
adding during generation, etc) to create a multi-voice machine.  

Voices need not have the same timbre. As in Buzz, each channel/track can
be set to a different waveform, and so on.

These are likely the very last big changes to the API. I would rather
make changes now, based on feedback from the first machine developers,
since these changes will make it easier to extend the API in the future
without breaking compatibility.


-- 
@@@ david o'toole
@@@ address@hidden
@@@ www.gnu.org/software/octal



