fluid-dev

Re: [fluid-dev] Thread safety long-term thoughts


From: David Henningsson
Subject: Re: [fluid-dev] Thread safety long-term thoughts
Date: Mon, 16 Nov 2009 21:51:34 +0100
User-agent: Thunderbird 2.0.0.23 (X11/20090817)

address@hidden wrote:
> Quoting David Henningsson <address@hidden>:
>> The fluid_synth is becoming increasingly large and complex. I've
>> started to think of it as two parts: a state machine and a voice
>> renderer. The voice renderer is strictly real-time, and corresponds
>> roughly to fluid_synth_one_block and everything below that.

> Just for clarification, are you referring to code organization and/or code changes?

Both. To increase cohesion, let's say that the fluid_synth object could own a fluid_voice_renderer object, which only deals with the voices and not the MIDI.
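To make the idea concrete, a minimal sketch of the split could look like the following. The type names mirror the discussion, but the layouts and the helper function are invented for illustration; they are not the actual FluidSynth structures.

```c
/* Hypothetical sketch: the synth keeps MIDI state and owns a renderer
 * that deals only with voices, never with MIDI. Layouts are invented. */
#include <stddef.h>

#define MAX_VOICES 256

typedef struct {
    int active;   /* is this voice currently sounding? */
    float phase;  /* playback position within the sample */
} fluid_voice_t;

typedef struct {
    fluid_voice_t voices[MAX_VOICES];  /* strict real-time part */
} fluid_voice_renderer_t;

typedef struct {
    int channel_program[16];           /* MIDI state machine part */
    fluid_voice_renderer_t *renderer;  /* owned; audio rendering only */
} fluid_synth_t;

/* Example of a renderer-local operation with no MIDI knowledge. */
static size_t renderer_count_active(const fluid_voice_renderer_t *r) {
    size_t n = 0;
    for (size_t i = 0; i < MAX_VOICES; i++)
        if (r->voices[i].active)
            n++;
    return n;
}
```

The point of the split is cohesion: everything under `fluid_voice_renderer_t` can be held to strict real-time rules, while the state-machine part can relax them.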

> Are you referring to fluid_synth.c in particular?

More the object than the file, but yes.

> As I get back into libInstPatch and Swami development, I'm going to start seriously considering adding libInstPatch support to FluidSynth.

Ah, and then there is the Swami use case, which has its unique requirements. Keep forgetting about that...

> If this goes well, then we may want to just make it the core of the instrument management in the future. Making FluidSynth GObject oriented isn't much of a step beyond that. With GObject introspection being a hot topic these days, that could lead to just about any language binding which supports it, automatically. This would be FluidSynth 2.0 though, and would probably change the API significantly.

So yet another U-turn on the glib dependency, dropping thoughts about iPhone, etc.?

>> The state machine is everything else, and a MIDI synthesizer is a state
>> machine; people expect to set a variable in there and be able to read it
>> correctly afterwards. On the other hand, we can probably ease the
>> real-time requirements on this part.

>> The state machine is multi-threaded by default, but we must be able to
>> switch it off to avoid overhead for some use cases, such as the
>> embedded ones and fast-render. The more MIDI events that can be
>> handled within a fixed time, the better, though. But for the harder
>> ones (e.g. program change) we are allowed to use mutexes.

> I don't think the multi-thread stuff adds too much overhead. If the fluid_synth_* functions get called from the synthesis thread, which is the case for fast render, then no queuing is done, in which case the only real overhead is checking to see if it's the synthesis thread and assigning the thread ID in fluid_synth_one_block.

In my long-term thoughts, I would like to avoid checking thread IDs as far as possible. We should instead assume that only one thread calls fluid_synth_one_block at a time, and that calls to the state machine are either synchronized or not, depending on the configuration / use case.

I think it is the right way to make us resistant to the thread-jumping problem Ebrahim reported.
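Under that assumption, the configurable synchronization could be sketched as below: the locking policy is chosen once at creation time, so the hot path never checks thread IDs. All names here are invented for illustration and are not the actual FluidSynth API.

```c
/* Sketch: synchronization is a creation-time choice, not a per-call
 * thread-ID check. Embedded and fast-render configurations set
 * threadsafe = 0 and pay no locking overhead at all. */
#include <pthread.h>

typedef struct {
    int threadsafe;         /* 0 for embedded / fast-render use cases */
    pthread_mutex_t mutex;  /* only touched when threadsafe != 0 */
    int program[16];        /* a piece of MIDI state, per channel */
} state_machine_t;

static void state_lock(state_machine_t *s) {
    if (s->threadsafe)
        pthread_mutex_lock(&s->mutex);
}

static void state_unlock(state_machine_t *s) {
    if (s->threadsafe)
        pthread_mutex_unlock(&s->mutex);
}

/* A "harder" event such as program change is allowed to take the mutex. */
static void state_program_change(state_machine_t *s, int chan, int prog) {
    state_lock(s);
    s->program[chan] = prog;
    state_unlock(s);
}
```

In the single-threaded configuration the lock functions reduce to a predictable branch, which is about as cheap as the thread-ID check but without the correctness risks when calls jump between threads.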

>> This also proposes moving the thread boundaries from before fluid_synth
>> to between the state machine and the voice renderer. The voice renderer
>> needs an in-queue of "voice events": events prepared so thoroughly that
>> the voice renderer can meet its real-time requirements.

>> This would also have the advantage of moving the sfloader callbacks
>> outside the most realtime-sensitive code.

> That seems like a good idea. A lot of state machine processing, though, relies on the current state of voices.

Hmm? It could be that Swami wants information about the current voices, but otherwise I would say that information flows from the state machine to the voice renderer only. What information does it need that comes from the voices rather than from the current MIDI state?
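A voice-event in-queue along those lines might be sketched as a single-producer/single-consumer ring buffer, so the real-time consumer never blocks. Everything below is an assumed layout, not existing FluidSynth code, and a production version would need atomic loads/stores or memory barriers on `head` and `tail`.

```c
/* Sketch of a "voice event" in-queue: the state machine prepares events
 * so fully (samples resolved, no sfloader callbacks left) that the
 * renderer can apply them in real time. SPSC ring buffer; in real code
 * head/tail would need atomics or barriers. */
typedef enum { VEVT_NOTEON, VEVT_NOTEOFF, VEVT_KILL_PRESET } vevt_type_t;

typedef struct {
    vevt_type_t type;
    int voice_id;
    /* fully resolved sample pointers etc. would go here */
} voice_event_t;

#define QSIZE 64  /* must be a power of two for the wrap trick below */

typedef struct {
    voice_event_t buf[QSIZE];
    unsigned head;  /* next write slot (producer: state machine) */
    unsigned tail;  /* next read slot (consumer: voice renderer) */
} vevt_queue_t;

static int vevt_push(vevt_queue_t *q, voice_event_t e) {
    if (q->head - q->tail == QSIZE)
        return 0;              /* full: producer must wait or drop */
    q->buf[q->head % QSIZE] = e;
    q->head++;
    return 1;
}

static int vevt_pop(vevt_queue_t *q, voice_event_t *out) {
    if (q->head == q->tail)
        return 0;              /* empty: nothing for the renderer */
    *out = q->buf[q->tail % QSIZE];
    q->tail++;
    return 1;
}
```

The renderer drains this queue at the start of each fluid_synth_one_block-style call, then renders; the state machine never touches voice data directly.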

>> However, nothing new without a downside. Since the sample is used by
>> the voice renderer, freeing a preset or soundfont is not solved easily.
>> In outline: first we should check whether there is an audio thread
>> running; if there isn't (embedded case, fast-render), we can just go
>> ahead. Otherwise, send a voice event saying we should kill active voices
>> referencing the preset or soundfont. We then block the call until the
>> audio thread has processed our message (simplest). Optionally we could
>> return asynchronously, but then we still need some kind of garbage
>> queue.

> I think reference counting would help a lot with this. When a voice is using a sample, it holds a reference. The sample references its parent SoundFont, etc. This is how libInstPatch works. If a SoundFont gets removed or changed, different presets get assigned, causing the samples to become unreferenced, ultimately freeing the SoundFont if no more references are held.

We must make sure freeing a SoundFont never happens in the audio thread, since that would break its real-time guarantees. So I don't see how this solves the problem.
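One way to combine the two ideas is the "garbage queue" mentioned above: the audio thread drops references but never calls free(); when a count hits zero it pushes the object onto a garbage list that a non-real-time thread drains. The sketch below uses invented names and an unsynchronized list head; real code would need atomic operations or a lock-free list.

```c
/* Sketch: deferred freeing so the audio thread stays real-time safe.
 * All names hypothetical; gc_head would need atomics in practice. */
#include <stdlib.h>

typedef struct soundfont {
    int refcount;
    struct soundfont *gc_next;  /* link in the garbage list */
} soundfont_t;

static soundfont_t *gc_head = NULL;

/* May be called from the audio thread: drop a reference, defer the free. */
static void soundfont_unref(soundfont_t *sf) {
    if (--sf->refcount == 0) {
        sf->gc_next = gc_head;  /* push; no free() here */
        gc_head = sf;
    }
}

/* Called from a non-real-time thread: actually free collected garbage. */
static int gc_collect(void) {
    int freed = 0;
    while (gc_head) {
        soundfont_t *sf = gc_head;
        gc_head = sf->gc_next;
        free(sf);
        freed++;
    }
    return freed;
}
```

With this split, reference counting still decides *when* an object is dead, while the garbage queue decides *where* the free() happens, keeping malloc/free out of the audio callback entirely.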

>> For the multi-core support to make a difference - assuming
>> rendering/interpolating voices is what takes the most time - it would
>> be nice to add a pre-renderer. This pre-renderer would be a function
>> that copies the current value of a voice, assumes nothing happens to
>> that voice, and renders a few buffers ahead, say 100-500 ms. It should
>> run in one or several non-realtime threads, depending on the number of
>> CPU cores. Now the voice renderer, after having processed its in-queue,
>> takes these pre-rendered buffers instead of rendering them directly,
>> assuming nothing happened to the voice and the renderer has the buffer
>> available.

> That sounds like a good idea. There could be some change prediction logic too, to select those voices deemed less likely to change (haven't been changed in a while or are known not to change for some time if playing back a MIDI file).

Jimmy's post got me thinking that perhaps we should cache the result of the rendering, especially for drum tracks which are often repetitive. At least it will speed up rendering of techno music ;-)

> Nice to hear your thoughts on FluidSynth future. It would be good to get an idea of the next phase of development. As I mentioned, I'll primarily be focusing on libInstPatch and Swami for the coming months. So the next release should probably be more focused on bug fixes, optimization, voice stealing improvements, etc., but limit the amount of new functionality or code overhaul.

I guess that's more reality-based, since we (well, mostly you) just did a lot of code overhaul, and I won't have time to do this change right now anyway. It's just that, given the recent posts, I can't help thinking that perhaps we didn't do the thread safety in the best way possible.

// David




