Re: NSSound Reimplementation
From: David Chisnall
Subject: Re: NSSound Reimplementation
Date: Sun, 7 Jun 2009 12:54:55 +0100
On 7 Jun 2009, at 02:10, Christopher Armstrong wrote:
Hi Stefan
I don't purport to be an expert on sound APIs, but I've played
around a lot with asynchronous APIs (marshalling in Java with Swing
and multithreaded APIs).
On 07/06/2009, at 2:01 AM, address@hidden wrote:
* PulseAudio - Pros: cross-platform, powerful/large API, used by the GNOME project, a simple API is available, is a sound server. Cons: requires a dedicated mainloop for the asynchronous API, and I still don't understand how that API works (the simple API is pretty straightforward).
Calling PulseAudio cross-platform is a stretch. As far as I can tell, it works moderately well with Ubuntu, less well with other Linux distributions, and is a mess everywhere else.
Looking at PulseAudio's documentation, it appears that with the asynchronous API you call some functions that run an exclusive PulseAudio event loop on a separate thread. I'm not sure whether you need to spawn that thread yourself or whether the library starts it for you. Alternatively, you could integrate with the poll() mechanism, which may be simpler or more difficult depending on what GNUstep uses.
See NSFileHandle for how to get notifications from file descriptors.
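Something along these lines should do it (untested sketch; fd is assumed to come from whatever backend is in use, and -soundDataAvailable: is just a placeholder selector):

  NSFileHandle *fh = [[NSFileHandle alloc] initWithFileDescriptor: fd
                                                   closeOnDealloc: NO];
  /* Ask for a notification on the current run loop when fd becomes
     readable, instead of running a private poll() loop. */
  [[NSNotificationCenter defaultCenter]
      addObserver: self
         selector: @selector(soundDataAvailable:)
             name: NSFileHandleDataAvailableNotification
           object: fh];
  [fh waitForDataInBackgroundAndNotify];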
David A. also mentioned the possibility of a streaming architecture, which I like because it makes NSSound a lot more useful. With the current NSSound code, and my original submission, NSSound simply read the file/data whole, storing it in an NSData object and later playing that. Streaming would allow us to keep nothing but a pointer to the file/data (still in an NSData object) and decode it as we play. This is the design of all sound applications I've had the pleasure of using.
A streaming architecture sounds like a good idea, even if it requires some extra plumbing, like an NSData subclass. This way we know we can scale to large files; e.g. a lot of the MP3 music I have is whole sets that run up to 2 hours long (~100-300MB), and this would be impractical to load completely into memory.
This is what mmap() is for. NSData already has a subclass on GNUstep
that wraps mmap(), so you only need 300MB of address space - not
excessive even on a 32-bit platform - and the OS will handle loading
and evicting the data when required.
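Roughly (untested; the path is made up, and on GNUstep the mapped-file subclass is picked for you behind the scenes):

  /* Returns an NSData whose bytes live in an mmap()ed region; pages
     are faulted in on demand and can be evicted again by the VM
     system, so a ~300MB file costs address space, not resident RAM. */
  NSData *song = [NSData dataWithContentsOfMappedFile:
                             @"/home/stefan/music/long-set.mp3"];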
If you need a streaming API, you shouldn't be using NSSound; you should be using something like Étoilé's MediaKit or Apple's QTKit.
MY OPINION
No matter what I do, it looks like a separate thread is going to have to be spawned to do the streaming. The problem there is that I've never programmed with threads before, which is interesting for me since it'll be a learning experience. The easiest library to use is libao; it includes output for everything out there (from ALSA to PulseAudio to WINMM). OpenAL is also very nice, but it's asynchronous by design and doesn't lend itself very well to streaming. I also really like the idea of loadable bundles/plug-ins; this would allow quite a bit of flexibility, not only for GNUstep but for the application programmer. Lastly, moving the code to GNUstep-back, as suggested by David C., seems like a good idea (especially with the plug-in based setup), removing the dependency on the -gui library but pushing it over to -back.
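For what it's worth, the libao path really is about as small as it gets. A rough, uncompiled sketch - samples/length stand for whatever PCM the decoder hands back:

  #include <ao/ao.h>

  ao_initialize();
  ao_sample_format fmt = { 0 };
  fmt.bits = 16;
  fmt.channels = 2;
  fmt.rate = 44100;
  fmt.byte_format = AO_FMT_LITTLE;
  /* ao_default_driver_id() picks ALSA, PulseAudio, OSS, WINMM, ...
     depending on how libao was built and configured. */
  ao_device *dev = ao_open_live(ao_default_driver_id(), &fmt, NULL);
  ao_play(dev, samples, length);   /* raw interleaved PCM */
  ao_close(dev);
  ao_shutdown();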
A pluggable architecture would be ideal. On platforms where the kernel exposes a sane interface (e.g. FreeBSD, Solaris) you don't have any extra dependencies, because the OSS APIs are just open()/read()/write()/ioctl() calls on the device[1]. On platforms with a second-rate sound subsystem in the kernel you can fall back to something like libao.
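On those platforms the whole output path looks something like this (sketch from memory, assuming the classic /dev/dsp device node):

  #include <fcntl.h>
  #include <unistd.h>
  #include <sys/ioctl.h>
  #include <sys/soundcard.h>

  int fd = open("/dev/dsp", O_WRONLY);
  int fmt = AFMT_S16_LE, channels = 2, rate = 44100;
  ioctl(fd, SNDCTL_DSP_SETFMT, &fmt);
  ioctl(fd, SNDCTL_DSP_CHANNELS, &channels);
  ioctl(fd, SNDCTL_DSP_SPEED, &rate);
  /* write() blocks while the device drains, which gives you flow
     control for free when streaming from a decoder thread. */
  write(fd, samples, length);
  close(fd);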
David
[1] See:
http://svn.gna.org/viewcvs/etoile/trunk/Etoile/Frameworks/MediaKit/oss.m?rev=3470