Re: NSSound Reimplementation
From: Christopher Armstrong
Subject: Re: NSSound Reimplementation
Date: Sun, 7 Jun 2009 11:10:53 +1000
Hi Stefan
I don't purport to be an expert on sound APIs, but I've worked a lot with
asynchronous APIs (marshalling calls between Swing and worker threads in
Java, and other multithreaded APIs).
On 07/06/2009, at 2:01 AM, address@hidden wrote:
> I guess I've been making a lot of noise about this lately; fact is, I'm
> kind of bored! Anyway, as most of you know, I've put some code out
> recently and got quite a bit of feedback. Reading some of the comments,
> I realized I hadn't thought this through as much as I had initially
> thought. In this e-mail I'm going to put forth some ideas for the
> reimplementation of NSSound and would really appreciate the community's
> opinion to make a better decision on the way to go. I know it's long,
> sorry, but I do have a tendency of doing that!
>
> * PulseAudio - Pros: cross-platform, powerful/large API, used by the
> GNOME project, a simple API is available, is a sound server. Cons:
> requires a dedicated main loop for the asynchronous API, and I still
> don't understand how it works (the asynchronous API, that is; the
> simple API is pretty straightforward).
Looking at PulseAudio's documentation, it appears that with the
asynchronous API you call some functions that run an exclusive PulseAudio
event loop on a separate thread. I'm not sure whether you need to spawn
that thread yourself or whether the library starts it for you. You could
alternatively integrate with the poll() mechanism, which may be simpler
or more difficult depending on what GNUstep uses.
The main GNUstep GUI loop will need to use the PulseAudio lock functions
before calling PulseAudio APIs. Callbacks that send data back to GNUstep
must respect the single-threaded nature of the GNUstep GUI thread, which
means the code they call must execute on the GNUstep run loop, not the
PulseAudio one. The only exception is passing data back: in the callback
function, you may need to copy the data PulseAudio gives you into a
GNUstep data structure (NSString, NSData, etc.) and pass that back to the
main GNUstep run loop. The GNUstep data structure will obviously have to
be instantiated in the callback, which means it must be one of the
thread-safe GNUstep APIs.
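The "copy in the callback" rule above can be sketched in plain C. This is an illustrative stand-in, not PulseAudio or GNUstep API: `copy_for_main_thread` and `owned_buf` are hypothetical names playing the role of "copy the library's transient buffer into an owned structure (the NSData analogue) before handing it to another thread".

```c
#include <stdlib.h>
#include <string.h>

/* An owned copy of the data, safe to hand to another thread; a stand-in
   for wrapping the bytes in an NSData. */
typedef struct {
    size_t len;
    char  *bytes;
} owned_buf;

/* Called on the sound thread: 'data' is typically only valid for the
   duration of the callback, so copy it before queueing it elsewhere. */
owned_buf *copy_for_main_thread(const void *data, size_t len)
{
    owned_buf *b = malloc(sizeof *b);
    b->len = len;
    b->bytes = malloc(len);
    memcpy(b->bytes, data, len);
    return b;
}

/* Called on the main thread once it has consumed the data. */
void owned_buf_free(owned_buf *b)
{
    free(b->bytes);
    free(b);
}
```

The point is simply that ownership transfers with the copy: the callback never lets the other thread see memory whose lifetime is controlled by the sound library.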
Reading the PulseAudio documentation, it does appear that you have to use
its locking functions before calling into its API, but this shouldn't
pose much of an issue if you follow the rules. For callback functions, I
would expect you'd use something like NSRunLoop's
-performSelector:target:argument:order:modes: method to marshal a call
back into the main GNUstep GUI loop. A dedicated main loop running on a
separate thread should pose less of a problem than something that needs
constant access to the main thread. As long as you can post messages into
its event loop, and the event loop is fairly responsive, everything
should operate smoothly.
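The marshalling idea can be shown with a minimal pending-call queue in C. This is only a sketch of the pattern that -performSelector:target:argument:order:modes: provides; all names (`post_to_main_loop`, `drain_pending`, `demo_bump`) are made up for illustration.

```c
#include <pthread.h>
#include <stdlib.h>

typedef void (*callp)(void *arg);

/* One queued call, appended from any thread, run on the main loop. */
typedef struct pending {
    callp fn;
    void *arg;
    struct pending *next;
} pending;

static pending *queue_head = NULL;
static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;

/* Called from the sound thread: enqueue work for the main loop. */
void post_to_main_loop(callp fn, void *arg)
{
    pending *p = malloc(sizeof *p);
    p->fn = fn;
    p->arg = arg;
    p->next = NULL;
    pthread_mutex_lock(&queue_lock);
    pending **tail = &queue_head;      /* append at tail: preserve order */
    while (*tail)
        tail = &(*tail)->next;
    *tail = p;
    pthread_mutex_unlock(&queue_lock);
}

/* Called by the main loop each iteration: drain and run pending calls.
   The lock is held only while detaching the list, never while running
   user code. */
void drain_pending(void)
{
    pthread_mutex_lock(&queue_lock);
    pending *p = queue_head;
    queue_head = NULL;
    pthread_mutex_unlock(&queue_lock);
    while (p) {
        pending *next = p->next;
        p->fn(p->arg);
        free(p);
        p = next;
    }
}

/* Example callback for demonstration. */
static int demo_counter = 0;
static void demo_bump(void *arg) { (void)arg; demo_counter++; }
```

Nothing runs until the main loop drains the queue, which is exactly the property you want: the callback posts, the GUI thread executes.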
IMHO the most important thing to remember with these sorts of APIs is
that the other thread can be in an inconsistent state compared to yours,
so you need a little extra state checking. For example, if you send a
"stop" command to the other thread while it is playing, it may not stop
synchronously; if you then send "stop" again, make sure the API either
ignores it, or that you catch any exceptions or error codes it raises
because playback has already stopped.
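A small C sketch of that state checking, assuming a hypothetical three-state player (the names and states here are illustrative, not any real sound API):

```c
#include <pthread.h>
#include <stdbool.h>

typedef enum { PLAYING, STOPPING, STOPPED } play_state;

static play_state state = PLAYING;
static pthread_mutex_t state_lock = PTHREAD_MUTEX_INITIALIZER;

/* Returns true if a stop was actually issued. A second "stop" while the
   player is already stopping (or stopped) is silently ignored rather
   than treated as an error. */
bool request_stop(void)
{
    bool issued = false;
    pthread_mutex_lock(&state_lock);
    if (state == PLAYING) {
        state = STOPPING;   /* sound thread later moves this to STOPPED */
        issued = true;
    }
    pthread_mutex_unlock(&state_lock);
    return issued;
}
```

Making the command idempotent at your side of the boundary means the caller never has to reason about what the other thread has or hasn't processed yet.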
> David A. also mentioned the possibility of a streaming architecture,
> which I like because it makes NSSound a lot more useful. With the
> current NSSound code, and my original submission, NSSound simply read
> the file/data whole, storing it in an NSData object and later playing
> that. Streaming would allow us to keep nothing but a pointer to the
> file/data (still in an NSData object) and decode it as we're playing.
> This is the design of all sound applications I've had the pleasure of
> using.
A streaming architecture sounds like a good idea, even if it requires
some extra plumbing, such as an NSData subclass. That way we know we can
scale to large files: e.g., a lot of the MP3 music I have consists of
whole sets running up to 2 hours long (~100-300 MB), which would be
impractical to load completely into memory.
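The streaming idea reduces to reading fixed-size chunks and decoding as you go, rather than slurping the whole file. A minimal C sketch, where `stream_file` and the `play_chunk` callback are hypothetical names standing in for the decode-and-play step:

```c
#include <stdio.h>
#include <stddef.h>

#define CHUNK 4096  /* read this much at a time, never the whole file */

/* Streams 'path' through 'play_chunk' (which may be NULL to just count),
   returning total bytes streamed, or -1 if the file can't be opened. */
long stream_file(const char *path,
                 void (*play_chunk)(const char *buf, size_t n))
{
    FILE *f = fopen(path, "rb");
    if (!f)
        return -1;
    char buf[CHUNK];
    size_t n;
    long total = 0;
    while ((n = fread(buf, 1, CHUNK, f)) > 0) {
        if (play_chunk)
            play_chunk(buf, n);   /* decode + hand to the output backend */
        total += n;
    }
    fclose(f);
    return total;
}
```

Memory use stays at one chunk regardless of file size, which is what makes the 2-hour, 300 MB case practical.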
> MY OPINION
> No matter what I do, it looks like a separate thread is going to have
> to be spawned to do the streaming. The problem there is that I've never
> programmed with threads before, which will be interesting for me since
> it'll be a learning experience. The easiest library to use is libao; it
> includes output to everything out there (from ALSA to PulseAudio to
> WINMM). OpenAL is also very nice, but it's asynchronous by design and
> doesn't lend itself very well to streaming. I also really like the idea
> of loadable bundles/plug-ins; this would allow quite a bit of
> flexibility, not only to GNUstep but to the application programmer.
> Lastly, moving the code to GNUstep-back, as suggested by David C.,
> seems like a good idea (especially with the plug-in based setup),
> removing the dependency on the -gui library and pushing it over to
> -back.
You shouldn't be afraid of extra threads; they're really not that bad if
you follow some simple design patterns when using them. An "asynchronous
notification" API (which sounds like what you have with PulseAudio) is
usually just a matter of calling functions in the right order and noting
whether they are blocking or non-blocking.
Hope this is of some help.
Cheers
Chris
--------
Christopher Armstrong
address@hidden