fluid-dev

Re: [fluid-dev] Thread safety long-term thoughts


From: josh
Subject: Re: [fluid-dev] Thread safety long-term thoughts
Date: Mon, 16 Nov 2009 00:37:40 -0800
User-agent: Internet Messaging Program (IMP) H3 (4.1.6)

Quoting David Henningsson <address@hidden>:
While the recent thread safety improvements are much better than the
previous handling (which had unpredictable crashes), the recent
postings, the shadow variable workaround, and the multi-core support
got me thinking. Treat this as long-term thoughts for discussion,
rather than something I plan to implement in the near future.

fluid_synth is becoming increasingly large and complex. I've
started to think of it as two parts: a state machine and a voice
renderer. The voice renderer is strictly real-time, and corresponds
roughly to fluid_synth_one_block and everything below it.


Just for clarification, are you referring to code organization and/or code changes?

Are you referring to fluid_synth.c in particular? I have also thought it would be good to break it up into the synthesis part (fluid_synth_core.c, perhaps) and to separate out other code too, like the tuning stuff (fluid_synth_tuning.c).

The synthesis portion is definitely more cleanly divided than in previous versions of FluidSynth, but I can also see room for improvement. I think some of those improvements may require API changes though, particularly in relation to the SoundFont loader API. Object reference counting and less exposure of the objects' internals would give us more flexibility.

As I get back into libInstPatch and Swami development, I'm going to start seriously considering adding libInstPatch support to FluidSynth. If that goes well, we may want to make it the core of instrument management in the future. Making FluidSynth GObject-oriented isn't much of a step beyond that, and with GObject introspection being a hot topic these days, it could automatically give us bindings for just about any language that supports it. This would be FluidSynth 2.0, though, and would probably change the API significantly.


The state machine is everything else. A MIDI synthesizer is a state
machine; people expect to set a variable in there and be able to read
it back correctly afterwards. On the other hand, we can probably ease
the real-time requirements on this part.

The state machine is multi-threaded by default, but we must be able to
switch that off to avoid overhead in some use cases, such as embedded
ones and fast-render. The more MIDI events that can be handled within
a fixed time, the better; but for the harder ones (e.g. program
change) we are allowed to use mutexes.



I don't think the multi-thread stuff adds too much overhead. If the fluid_synth_* functions get called from the synthesis thread, which is the case for fast render, then no queuing is done; the only real overhead is checking whether the caller is the synthesis thread and assigning the thread ID in fluid_synth_one_block.
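
To make that concrete, here is a minimal sketch of that check, assuming a pthreads build; all names below are made up for illustration, not the actual FluidSynth internals:

#include <pthread.h>

typedef struct {
    pthread_t synth_thread;  /* recorded by the rendering function */
    /* ... event queues, voices, other state ... */
} synth_t;

/* Stand-ins for the real event handling paths. */
static int noteon_direct(synth_t *s, int chan, int key, int vel)
{ (void)s; (void)chan; (void)key; (void)vel; return 0; }
static int noteon_enqueue(synth_t *s, int chan, int key, int vel)
{ (void)s; (void)chan; (void)key; (void)vel; return 0; }

/* Called at the top of each rendering cycle (the fluid_synth_one_block
 * role), so later calls from the same thread can be recognized. */
void record_synth_thread(synth_t *s)
{
    s->synth_thread = pthread_self();
}

/* Public-style entry point: queue only when called from another thread. */
int sketch_noteon(synth_t *s, int chan, int key, int vel)
{
    if (pthread_equal(pthread_self(), s->synth_thread))
        /* Fast-render / embedded case: we ARE the synthesis thread,
         * so handle the event directly with no queuing overhead. */
        return noteon_direct(s, chan, key, vel);

    return noteon_enqueue(s, chan, key, vel);
}

The point is that the test itself is just a thread-ID comparison, so the direct path costs almost nothing.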


This also proposes moving the thread boundary from before fluid_synth
to between the state machine and the voice renderer. The voice renderer
needs an in-queue of "voice events": events prepared far enough that
the voice renderer can meet its real-time requirements.

This would also have the advantage of moving the sfloader callbacks
out of the most real-time-sensitive code.



That seems like a good idea. A lot of state machine processing, though, relies on the current state of voices. It could be a problem to figure out how to expose this information in a lock-free manner. Perhaps I'm not quite seeing the details of what you are proposing, though.
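
One way to picture the in-queue is a single-producer/single-consumer ring buffer, which needs no mutex as long as only the state machine pushes and only the voice renderer pops. A rough C11 sketch, with a made-up event layout:

#include <stdatomic.h>
#include <stdbool.h>

#define QUEUE_SIZE 256   /* must be a power of two */

typedef struct {
    int type;            /* e.g. note-on, note-off, kill-voices */
    int param1, param2;
} voice_event_t;

typedef struct {
    voice_event_t events[QUEUE_SIZE];
    _Atomic unsigned head;   /* written only by the producer (state machine) */
    _Atomic unsigned tail;   /* written only by the consumer (voice renderer) */
} event_queue_t;

/* Producer side: the state machine pushes fully prepared events. */
bool queue_push(event_queue_t *q, const voice_event_t *ev)
{
    unsigned head = atomic_load_explicit(&q->head, memory_order_relaxed);
    unsigned tail = atomic_load_explicit(&q->tail, memory_order_acquire);

    if (head - tail == QUEUE_SIZE)
        return false;                     /* full: caller must retry */

    q->events[head & (QUEUE_SIZE - 1)] = *ev;
    atomic_store_explicit(&q->head, head + 1, memory_order_release);
    return true;
}

/* Consumer side: the voice renderer drains events at the start of each
 * block, before rendering, so it never blocks on a mutex. */
bool queue_pop(event_queue_t *q, voice_event_t *ev)
{
    unsigned tail = atomic_load_explicit(&q->tail, memory_order_relaxed);
    unsigned head = atomic_load_explicit(&q->head, memory_order_acquire);

    if (head == tail)
        return false;                     /* empty */

    *ev = q->events[tail & (QUEUE_SIZE - 1)];
    atomic_store_explicit(&q->tail, tail + 1, memory_order_release);
    return true;
}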


However, nothing new comes without a downside. Since samples are used
by the voice renderer, freeing a preset or SoundFont is not easily
solved. In outline: first we check whether an audio thread is running;
if there isn't one (embedded case, fast-render), we can just go ahead.
Otherwise we send a voice event saying active voices referencing the
preset or SoundFont should be killed. The simplest approach is to block
the call until the audio thread has processed our message. Optionally
we could return asynchronously, but then we still need some kind of
garbage queue.


I think reference counting would help a lot with this. When a voice is using a sample, it holds a reference. The sample references its parent SoundFont, etc. This is how libInstPatch works. If a SoundFont gets removed or changed, different presets get assigned, causing the samples to become unreferenced, ultimately freeing the SoundFont if no more references are held.
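
As a rough illustration of that ownership chain (the types and names below are invented for the sketch, not libInstPatch's actual API), atomic reference counts could look like:

#include <stdatomic.h>
#include <stdlib.h>

typedef struct {
    _Atomic int refcount;
    /* ... sample data, presets ... */
} soundfont_t;

typedef struct {
    _Atomic int refcount;
    soundfont_t *parent;     /* sample holds a reference to its SoundFont */
    /* ... sample frames ... */
} sample_t;

void soundfont_ref(soundfont_t *sf) { atomic_fetch_add(&sf->refcount, 1); }

void soundfont_unref(soundfont_t *sf)
{
    /* fetch_sub returns the previous value: 1 means we held the last ref */
    if (atomic_fetch_sub(&sf->refcount, 1) == 1)
        free(sf);
}

void sample_ref(sample_t *s) { atomic_fetch_add(&s->refcount, 1); }

void sample_unref(sample_t *s)
{
    if (atomic_fetch_sub(&s->refcount, 1) == 1) {
        soundfont_unref(s->parent);   /* release the parent SoundFont */
        free(s);
    }
}

A voice would take a reference on the sample for as long as it plays it; when the last voice drops the last sample reference, the SoundFont is finally freed, even if it was "removed" earlier.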



For the multi-core support to make a difference - assuming
rendering/interpolating voices is what takes the most time - it would
be nice to add a pre-renderer. This pre-renderer would be a function
that copies the current state of a voice, assumes nothing happens to
that voice, and renders a few buffers ahead, say 100-500 ms. It should
run in one or several non-realtime threads, depending on the number of
CPU cores. The voice renderer, after having processed its in-queue,
then takes these pre-rendered buffers instead of rendering them
directly, provided nothing has happened to the voice and the buffer
is available.


That sounds like a good idea. There could be some change-prediction logic too, to select the voices deemed least likely to change (ones that haven't been changed in a while, or that are known not to change for some time when playing back a MIDI file).
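
A rough sketch of how the pre-renderer hand-off could look, assuming a per-voice generation counter that the state machine bumps on every change (all names are hypothetical):

#include <stdatomic.h>
#include <string.h>

#define BLOCK_SIZE   64
#define AHEAD_BLOCKS 8       /* a few blocks of look-ahead per voice */

typedef struct {
    float params[16];             /* stand-in for the full voice state */
    _Atomic unsigned generation;  /* bumped on every state-machine change */
} voice_t;

typedef struct {
    float buf[AHEAD_BLOCKS][BLOCK_SIZE];
    unsigned generation;          /* voice generation the buffers were made for */
    _Atomic int ready;
} prerender_t;

/* Non-real-time worker thread: snapshot the voice, render ahead, publish. */
void prerender_voice(const voice_t *v, prerender_t *out)
{
    float params[16];
    unsigned gen = atomic_load(&v->generation);
    memcpy(params, v->params, sizeof params);    /* snapshot the state */

    for (int b = 0; b < AHEAD_BLOCKS; b++)
        for (int i = 0; i < BLOCK_SIZE; i++)
            /* Stub: a real pre-renderer would run the interpolation and
             * envelope code on the snapshot instead. */
            out->buf[b][i] = params[0];

    out->generation = gen;
    atomic_store(&out->ready, 1);
}

/* Real-time renderer: use the pre-rendered audio only if the voice is
 * unchanged; otherwise fall back to rendering the block directly. */
const float *fetch_block(const voice_t *v, const prerender_t *p, int block)
{
    if (atomic_load(&p->ready) &&
        p->generation == atomic_load(&v->generation) &&
        block < AHEAD_BLOCKS)
        return p->buf[block];     /* cheap path: already rendered */
    return NULL;                  /* render this block now */
}

The generation check is what makes it safe: any in-queue event that touches the voice invalidates the pre-rendered buffers, and the worker simply re-renders from the new state.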

// David



Nice to hear your thoughts on FluidSynth's future. It would be good to get an idea of the next phase of development. As I mentioned, I'll primarily be focusing on libInstPatch and Swami for the coming months, so the next release should probably focus on bug fixes, optimization, voice-stealing improvements, etc., and limit the amount of new functionality and code overhaul.

Josh




