
Re: [fluid-dev] Thread safety long-term thoughts


From: Ebrahim Mayat
Subject: Re: [fluid-dev] Thread safety long-term thoughts
Date: Tue, 17 Nov 2009 14:09:14 -0500


On Nov 16, 2009, at 12:47 AM, David Henningsson wrote:

While the recent thread safety improvements are much better than the previous handling (which had unpredictable crashes), the recent postings, the shadow variable workaround, and the multi-core support got me thinking. See this as long-term thoughts for discussion, rather than something I plan to implement in the near future.

The fluid_synth is becoming increasingly large and complex. I've started to think of it as two parts: a state machine and a voice renderer. The voice renderer is strictly real-time, and corresponds roughly to fluid_synth_one_block and everything below that.

The state machine is everything else, and a MIDI synthesizer is a state machine; people expect to set a variable in there and be able to read it back correctly afterwards. On the other hand, we can probably ease the real-time requirements on this part.

The state machine is multi-threaded by default, but we must be able to switch that off to avoid overhead in some use cases, such as the embedded ones and fast-render. The more MIDI events that can be handled within a fixed time, the better, though. But for the harder ones (e.g. program change) we are allowed to use mutexes.
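
A minimal sketch of what "mutexes only for the hard cases, switchable off" could mean; state_lock_t and the multithreaded flag are invented for illustration, not existing FluidSynth API:

#include <pthread.h>

/* Hypothetical: one mutex guards the expensive state changes (program
 * change, soundfont load). When multi-threading is switched off, for
 * the embedded and fast-render cases, the lock collapses to a no-op. */
typedef struct {
    int             multithreaded;   /* settable per use case */
    pthread_mutex_t mutex;           /* e.g. PTHREAD_MUTEX_INITIALIZER */
} state_lock_t;

static void state_lock(state_lock_t *l)
{
    if (l->multithreaded)
        pthread_mutex_lock(&l->mutex);
}

static void state_unlock(state_lock_t *l)
{
    if (l->multithreaded)
        pthread_mutex_unlock(&l->mutex);
}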

This also means moving the thread boundaries from before fluid_synth to between the state machine and the voice renderer. The voice renderer needs an in-queue of "voice events": events prepared far enough in advance that the voice renderer can meet its real-time requirements when processing them.
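
A minimal sketch of what such an in-queue could be, assuming a single-producer/single-consumer ring buffer; voice_event_t, voice_queue_push and voice_queue_pop are illustrative names only, and real code would need memory barriers or atomics around head/tail:

/* The state machine pushes fully prepared events; the voice renderer
 * pops them at the top of its rendering loop without ever blocking. */
#define VOICE_QUEUE_SIZE 1024            /* must be a power of two */

typedef struct {
    int   type;        /* start-voice, kill-voices-of-preset, ... */
    void *data;        /* pre-resolved sample / voice parameters  */
} voice_event_t;

typedef struct {
    voice_event_t events[VOICE_QUEUE_SIZE];
    volatile unsigned head;   /* written by the state machine only  */
    volatile unsigned tail;   /* written by the voice renderer only */
} voice_queue_t;

/* Called from the state machine (may block, allocate, call sfloader). */
static int voice_queue_push(voice_queue_t *q, voice_event_t ev)
{
    unsigned next = (q->head + 1) & (VOICE_QUEUE_SIZE - 1);
    if (next == q->tail)
        return -1;             /* full: caller decides to wait or drop */
    q->events[q->head] = ev;
    q->head = next;
    return 0;
}

/* Called from the voice renderer (must never block). */
static int voice_queue_pop(voice_queue_t *q, voice_event_t *out)
{
    if (q->tail == q->head)
        return -1;             /* empty */
    *out = q->events[q->tail];
    q->tail = (q->tail + 1) & (VOICE_QUEUE_SIZE - 1);
    return 0;
}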

This would also have the advantage of moving the sfloader callbacks outside the most realtime-sensitive code.

However, nothing new comes without a downside. Since the sample data is used by the voice renderer, freeing a preset or soundfont is not easily solved. In outline: first check whether an audio thread is running; if there isn't one (embedded case, fast-render), we can just go ahead. Otherwise, send a voice event saying we should kill the active voices referencing the preset or soundfont. We then block the call until the audio thread has processed our message (simplest). Optionally we could return asynchronously, but then we still need some kind of garbage queue.
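
In code, that delete path could look roughly like this; synth_remove_preset, the audio_thread_running flag, the renderer acknowledgement semaphore and the helper functions are all invented for illustration:

#include <semaphore.h>

typedef struct preset_t preset_t;

/* Placeholders for the real operations. */
static void preset_free(preset_t *p)            { (void)p; /* free sample data   */ }
static void send_kill_voices_event(preset_t *p) { (void)p; /* push a voice event */ }

typedef struct {
    int   audio_thread_running;   /* false in the embedded / fast-render cases */
    sem_t renderer_ack;           /* posted by the renderer once it has
                                     processed its in-queue                    */
} synth_ctx_t;

static int synth_remove_preset(synth_ctx_t *synth, preset_t *preset)
{
    if (!synth->audio_thread_running) {
        /* No renderer can be holding the samples: free immediately. */
        preset_free(preset);
        return 0;
    }

    /* Ask the renderer to kill every active voice referencing this preset... */
    send_kill_voices_event(preset);

    /* ...and, in the simplest (blocking) variant, wait until it has
     * processed the message. The asynchronous variant would return here
     * and put the preset on a garbage queue to be freed later. */
    sem_wait(&synth->renderer_ack);
    preset_free(preset);
    return 0;
}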

For the multi-core support to make a difference - assuming rendering/interpolating voices is what takes the most time - it would be nice to add a pre-renderer. This pre-renderer would be a function that copies the current state of a voice, assumes nothing happens to that voice, and renders a few buffers ahead, say 100-500 ms. It should run in one or several non-realtime threads, depending on the number of CPU cores. Now the voice renderer, after having processed its in-queue, takes these pre-rendered buffers instead of rendering them directly, assuming nothing happened to the voice and the renderer has the buffer available.
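
A rough sketch of that pre-renderer, assuming a copyable per-voice state and a block render function; voice_state_t, prerender_job_t and voice_render_block are invented names, and the real interpolation is only hinted at:

#include <pthread.h>

#define BLOCK_SAMPLES 64
#define LOOKAHEAD     16          /* a few hundred ms; tune per CPU core count */

typedef struct {
    /* Whatever the interpolator needs: phase, pitch, envelopes, sample ptr... */
    double phase, phase_incr, amp;
} voice_state_t;

typedef struct {
    voice_state_t snapshot;                      /* copy taken when the job is queued */
    float         buf[LOOKAHEAD][BLOCK_SAMPLES]; /* pre-rendered audio                */
    int           blocks_ready;                  /* filled in by the worker           */
    int           valid;                         /* cleared if an event touched the voice */
} prerender_job_t;

/* Stand-in for the real interpolation loop. */
static void voice_render_block(voice_state_t *v, float *out)
{
    for (int i = 0; i < BLOCK_SAMPLES; i++) {
        out[i] = (float)(v->amp * v->phase);
        v->phase += v->phase_incr;
    }
}

/* Non-realtime worker, started with pthread_create(&tid, NULL,
 * prerender_thread, job). The realtime renderer later consumes
 * job->buf only if job->valid is still set, i.e. nothing happened
 * to the voice in the meantime; otherwise it renders as today. */
static void *prerender_thread(void *arg)
{
    prerender_job_t *job = arg;
    voice_state_t v = job->snapshot;             /* work on the copy only */
    for (int b = 0; b < LOOKAHEAD && job->valid; b++) {
        voice_render_block(&v, job->buf[b]);
        job->blocks_ready = b + 1;
    }
    return NULL;
}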

David

Currently, fluidsynth has five threads.

5 process 6014 thread 0x6f03  0x936a11f8 in mach_msg_trap ()
4 process 6014 thread 0x6703 0x936a1278 in semaphore_timedwait_signal_trap ()
3 process 6014 thread 0x6203  0x936a11f8 in mach_msg_trap ()
2 process 6014 thread 0x1003  0x936a7c0c in __semwait_signal ()
* 1 process 6014 thread 0x10b  0x936ae6b8 in read$UNIX2003 ()

Of these threads, the first one is the shell process (fluidsynth.c). The second thread starts in fluid_synth_return_event_process_thread, which, together with fluid_synth_one_block, is declared in fluid_synth.c.

The other three threads (please correct me if I am wrong) correspond to the audio, MIDI and I/O procs.

How do the state machine and the voice renderer fit into this picture?

Thanks in advance,
Ebrahim



