
Re: ReWire thingy


From: Bullwinkle J. Moose
Subject: Re: ReWire thingy
Date: Tue Jan 23 17:33:11 2001

(this is kind of long, sorry...)

i thought about it a bit more, and i think it would need to support two types of
connections.  The simpler type would be when one app is not actually streaming
into another.  Consider using some sort of sequencing / arranging app.  To use
something from Octal in the sequencing app, you don't really need to send the
output of Octal into the sequencing app; it just needs to know a little about the
'Octal track', such as how long it is.  Then it can put an icon representing the
Octal track up with the MIDI or Audio or whatever other tracks it has, and you
can shuffle it around and arrange it just like the other tracks.  Then it just
needs Octal to start and stop playing when it asks, and perhaps jump to certain
specific points within the track.
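
To make that concrete, the 'command only' connection could get by with a tiny
message format, something like this (the names are all made up, just a sketch
in C; a real wire format would also have to worry about padding and byte order):

/* transport.h -- hypothetical message format for the "command only"
 * connection described above. */
#include <stdint.h>

enum transport_cmd {
    CMD_START = 1,   /* begin playing from the current position */
    CMD_STOP  = 2,   /* stop playing, but keep the position */
    CMD_SEEK  = 3    /* jump to 'position' (in sample frames) */
};

struct transport_msg {
    uint8_t  cmd;       /* one of transport_cmd */
    uint32_t position;  /* only meaningful for CMD_SEEK */
};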

The other type of connection would be where audio is actually being streamed,
such as if you wanted Sweep to stream a sample into Octal, or if you wanted to
stream the output of a hard disk recorder into Octal to use Octal as a sort
of real-time effects processor (do you think Octal could do that, if the
machine were powerful enough?).  Start/Stop/Time Position signals would still be
needed...

What i was thinking was that each app would create its own port or socket (a local
one, not a network one, although maybe later it could be network enabled...), and
'register' that socket with some daemon.  That daemon would keep a list of the
apps available, and what they are available for.  Every app would need to accept
'command signals'; that would be its connection to the daemon.  Not all apps
would necessarily need to be able to receive audio, but they all should be
able to send their output to another app instead of whatever sound output
device.  Here is where ESound, or something like it, might come in.

Overall, it would be simpler if the apps would just send their output to a
port/socket when working with other apps.  That socket would lead into the
daemon, which would then either route it to another app, or to the output
device (mixing it with other audio streams if needed).  ESound automatically
'catches' audio output from other apps and mixes it down to a single stream for
simultaneous output on a single device; it seems it should be possible to send
that stream to another app instead.  But then how would it specify what app to
send it to?  Does it need to do that?  You could have the sending app specify
what app it wants to send its output to.  You could also have the receiving app
specify that it wants to receive the output of some specific app.  Also, you
could have a GUI to draw the connections, sort of like Octal's signal network
editor (is that what it is going to be called?  the window where you arrange
the machines and draw the connections between them?).  If the sending and
receiving apps can request where to send/receive from, the GUI might be
necessary to fine tune things.

i think it would be best if the apps working together send their output to a
port/socket that fed into the daemon, and then the daemon could mix them into a
single stream, adjusting formats as needed, and keeping levels as unchanged as
possible.  The daemon would then be sort of a big patchbay/synchronizer.
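
For the mixing part, the daemon's inner loop could be as simple as summing the
client buffers into a wide accumulator and clamping.  This sketch assumes every
client already delivers 16-bit interleaved samples at the same rate; the format
conversion mentioned above is left out:

/* Sum nclients buffers of nsamples 16-bit samples each into 'out',
 * saturating instead of wrapping around on overflow. */
#include <stddef.h>
#include <stdint.h>

void mix_clients(int16_t *out, int16_t *const *in,
                 size_t nclients, size_t nsamples)
{
    for (size_t s = 0; s < nsamples; s++) {
        int32_t acc = 0;                  /* wide accumulator */
        for (size_t c = 0; c < nclients; c++)
            acc += in[c][s];
        if (acc >  32767) acc =  32767;   /* clamp to 16-bit range */
        if (acc < -32768) acc = -32768;
        out[s] = (int16_t)acc;
    }
}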

If several apps were involved, switching between the different apps to adjust
levels could get a bit trying.  So it might be useful to have a mixer app that
does that.  It could be done so that a sequencer could send each track it had
open to a different channel on the mixer, so that the mixer really could take
over the mixing.  You could potentially have it set up like a real high end
mixer, with a couple of send and receive loops on each channel for real time
effects.  The mixer would be optional, but with it, the overall thing becomes
like a patch bay and a mixer for using a bunch of audio gear (the apps) together.
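
Per channel, that mixer would not need to track much state.  Something like
this hypothetical struct (every name here is invented) would cover a fader plus
a couple of send/receive loops:

/* One channel strip in the hypothetical mixer app. */
#include <stdint.h>

#define NUM_SENDS 2                 /* effect loops per channel */

struct mixer_channel {
    float gain;                     /* channel fader, 1.0 = unity */
    float send_level[NUM_SENDS];    /* how much feeds each effect loop */
    int   send_socket[NUM_SENDS];   /* socket out to the external effect */
    int   return_socket[NUM_SENDS]; /* socket back from the effect */
};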

To the app, it would not be too complicated.  It would 'register' with the
daemon, setting up a port/socket to send and receive commands and time
information, as well as a port/socket that it will send all of its output to,
and possibly a port/socket to receive audio on.  Multiple output ports/sockets
could be created for each app if the user wants to let the daemon take over
mixing.  i'm sure there are plenty of places to stick real time effects in
there.  Would that be something LADSPA is geared to?  The effect would only be
dealing with arbitrary audio data; it wouldn't need any kind of wavetable or
anything (at least, i don't think it would).
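
The registration step itself could be a few lines of ordinary BSD socket code.
The socket path and the REGISTER message here are invented for illustration;
only the socket calls themselves are standard:

/* Connect to the (hypothetical) daemon over a local socket and
 * announce what this app can do.  Returns the open socket, which
 * the app keeps for commands and time information, or -1 on error. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

int register_with_daemon(const char *app_name)
{
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return -1;
    }

    struct sockaddr_un addr;
    memset(&addr, 0, sizeof addr);
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, "/tmp/audio-daemon", sizeof addr.sun_path - 1);

    if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
        perror("connect");
        close(fd);
        return -1;
    }

    /* hypothetical one-line registration message */
    char msg[128];
    snprintf(msg, sizeof msg, "REGISTER %s CAN_SEND CAN_RECEIVE\n", app_name);
    write(fd, msg, strlen(msg));
    return fd;
}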

Basic MIDI signals could be used to deal with the Play/Stop/Time Position
stuff.  The harder part would be what message format would be used to specify
the format of the output.  If you are sending data from one app to another, the
receiving app probably needs to know the bit resolution and sample rate of the
incoming data.  How would a system mixer deal with that?  You know, like if one
app is sending data encoded as 24-bit at 96 kHz, and then another app sends
some data encoded as 16-bit at 44.1 kHz.  Is there already some method for
specifying this at the beginning of a stream that i do not know about?  You
would also need to be able to ask the receiving app if it can handle that
format, so that it can accept or decline it for compatibility reasons.
Perhaps the decline could include the closest format the receiver can handle,
so that, if the user wants, a conversion to a compatible format can be done?

As for time codes, i was thinking MIDI Time Code.  It is based on SMPTE.  There
are a couple of different variations on SMPTE based on different frame rate
formats, and i think MIDI Time Code would avoid those, while still being
easily mappable back to any version of SMPTE.  That would be nice, because
then it could work with video editing apps.  The video editors could focus on
editing video and hand the audio part off to a dedicated audio editor.
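
One point in MIDI Time Code's favor: the time it carries includes a two-bit
rate code distinguishing the common SMPTE frame rates, which is what makes
mapping back to any SMPTE variant straightforward.  A sketch (the conversion
ignores drop-frame for brevity):

/* The SMPTE-style time that MIDI Time Code carries. */
#include <stdint.h>

enum mtc_rate {          /* the two-bit rate field in MTC */
    MTC_24FPS  = 0,
    MTC_25FPS  = 1,
    MTC_30DROP = 2,      /* 29.97 fps drop-frame */
    MTC_30FPS  = 3
};

struct mtc_time {
    uint8_t hours, minutes, seconds, frames;
    enum mtc_rate rate;
};

/* Absolute frame count for the non-drop rates (drop-frame needs a
 * small correction that is omitted here). */
uint32_t mtc_to_frames(const struct mtc_time *t, uint32_t fps)
{
    return ((t->hours * 60u + t->minutes) * 60u + t->seconds) * fps
         + t->frames;
}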

aRts has two things going, kind of: MIDIbus and MCOP.  Neither one seems to be
very complete.  Over on the Brahms site
(http://lienhard.desy.de/mackag/homepages/jan/Brahms/) it suggests removing the
header and source file for MIDIbus to solve compile problems (which does not give
me a warm feeling about how far along it has come).  And the aRts site
(http://www.arts-project.org/) indicates that development on it is very slow.
MCOP seems to also be very early in development.  It seems to be a modification
of CORBA, altered to deal with multimedia.  It seems a bit more complicated than
what i am thinking of.  MIDIbus seems to be just a way to route MIDI messages
around between multiple programs and external MIDI hardware, so it does not seem
to go quite far enough.  But i may have missed something.

BSD style ports/sockets are just a type of inter-process communication, and that
is what all this essentially is: different processes communicating.  The socket
API was developed at Berkeley for BSD Unix in the early eighties, and it is one
of the older methods of IPC.  Everything i have ever read on them introduces
them as Berkeley style ports/sockets, but then goes on to describe them simply
as ports or sockets; i don't know of any other type.  i'm fairly sure it is
implemented on almost all Linux systems.  WinSock is supposed to be a fairly
similar (almost but not completely the same) implementation on MS operating
systems, so porting to MS-Windows shouldn't be too difficult.  i would guess
that other operating systems have relatively similar facilities, so it would be
relatively portable.
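
For anyone who has not used them, here is about the smallest possible demo of
local sockets as IPC: socketpair() hands back two connected sockets, and after
fork() each process keeps one end:

/* Two processes talking over a connected pair of local sockets. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/wait.h>

int main(void)
{
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0) {
        perror("socketpair");
        return 1;
    }

    if (fork() == 0) {             /* child: pretend to be one app */
        close(sv[0]);
        const char *msg = "hello from the other app";
        write(sv[1], msg, strlen(msg) + 1);
        return 0;
    }

    close(sv[1]);                  /* parent: the other app */
    char buf[64];
    read(sv[0], buf, sizeof buf);
    printf("received: %s\n", buf);
    wait(NULL);
    return 0;
}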

With all of these apps running at the same time, trying to work as close to
real time as possible, latency could become a real problem.  The web site for
Ardour (http://ardour.sourceforge.net/) mentioned a 'low latency' patch for
kernel version 2.4.0-test9 from Andrew Morton.  i think this is what they were
referring to -> http://www.uow.edu.au/~andrewm/linux/schedlat.html
Does anyone else know of any kernel patches to reduce latency?

Was any of that coherent?  Or am i going off in a completely wrong direction?



David O'Toole wrote:

> On Tue, 23 Jan 2001, Jared wrote:
>
> >       Have you had a look at esound? Am I thinking the right
> > thing?  Perhaps you could have an esound client plugin, so any source from
> > any esound aware app could go in?  Maybe in my voluminous spare time I
> > might have a poke...
> > J
>
> Possibly. I don't think ESound would be ideal, from what I have read its
> approach involves high latency. You might have noticed this where you
> press stop on XMMS and it takes a half-second to stop playing. But there
> are certainly things to look at.
>
> --
> @@@ david o'toole
> @@@ address@hidden
> @@@ www.gnu.org/software/octal
>
> _______________________________________________
> Octal-dev mailing list
> address@hidden
> http://mail.gnu.org/mailman/listinfo/octal-dev



