
Re: [fluid-dev] Timing revisited


From: David Henningsson
Subject: Re: [fluid-dev] Timing revisited
Date: Sat, 25 Apr 2009 10:18:12 +0200
User-agent: Thunderbird 2.0.0.21 (X11/20090409)

> On Mon, 2009-04-20 at 20:59 +0200, David Henningsson wrote:
>> But I've recently come to think of a disadvantage as well. If we're
>> low-latency, it's important that fluid_synth_one_block finishes as
>> soon as possible. If we do more things, we risk a buffer underrun if
>> one of these calls takes an unexpectedly long time (e.g. when the
>> player loads another MIDI file). This brings me into thinking...
> Ohh, I get it now.  So the callback in that case would be running 
> synchronously.  That could indeed be a problem, as you mention.  Any 
> operating system calls can block for unspecified amounts of time (malloc, 
> fopen, etc) and should be avoided in the audio synthesis thread.

We're very far from a hard real-time system. We cannot protect ourselves
from anyone trying to send one million MIDI events at the same point in
time, which will lead to a buffer underrun anyway. So this is just about
lowering the probability a bit. (And instead of buffer underruns we get
untimed data, which is not good either, but perhaps better than an
underrun.)

>> > What about using some sort of message queue to pass the MIDI
>> > events to the synth?
>> ...well, something like that.
>> > I imagine it is probably good to try and avoid locks if at all
>> > possible in the synthesis thread, but perhaps some lock-less
>> > mechanism can be used (circular buffer for example) to pass the
>> > events.  Does this make sense?  glib has portable atomic integers
>> > which could be used for this task.
>> I don't know if the overhead of using atomic integers (compared to
>> ordinary non-thread-safe integers) is significant; perhaps we should
>> have a parameter in the synth that sets it in either "thread-safe" or
>> "non-thread-safe" mode. In the fast-file-rendering case, there is
>> just one thread, and a callback to the player is done after every
>> FluidBuffer samples. (Pasted from another thread; I think it belongs
>> here.)
> On single 32-bit CPU systems I think generally all 32-bit integers
> are atomic when simply reading/writing them.  It's when you have
> multiple CPUs that they may not be atomic.  Memory barrier tricks and
> other assembly instructions are used on specific architectures to
> provide additional atomic operations (add/sub/test, etc).  These
> instructions are generally very fast compared to the alternative of
> using mutexes, especially in the case where contention occurs.

That seems reasonable, although I'm a bit curious about the overhead
compared to ordinary non-thread-safe integers. Do atomic operations
block other CPUs and invalidate their caches, and what performance hit
does that mean for the other processors?
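To make the scheme concrete: a sketch of what "an atomic integer" buys us, using C11 stdatomic so it stands alone (glib's g_atomic_int_add() / g_atomic_int_get() provide the same operations portably). The counter name and the two functions are hypothetical, just illustrating a producer incrementing and a consumer draining without any mutex:

```c
#include <stdatomic.h>

/* Hypothetical shared counter: how many MIDI events are queued.
 * Incremented by the MIDI (producer) thread, drained by the
 * synthesis (consumer) thread. */
static atomic_int pending_events;

/* Producer side: a single lock-free read-modify-write, no mutex. */
void event_queued(void)
{
    atomic_fetch_add(&pending_events, 1);
}

/* Consumer side: atomically read the count and reset it to zero,
 * so no event is counted twice and none is lost. */
int drain_pending(void)
{
    return atomic_exchange(&pending_events, 0);
}
```

On the overhead question: on x86 this compiles to a lock-prefixed instruction, which does claim the cache line exclusively for one CPU, so there is some cross-CPU traffic under contention, but it is still far cheaper than a contended mutex.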

> If you have a queue with one consumer and one producer, then you can
> simply use an atomic integer to atomically add/subtract how many bytes
> are in the buffer, and the head and tail of the circular buffer are
> only accessed by one thread or the other.

We will probably have more than one producer, won't we? For example, if
somebody plays MIDI on his/her keyboard while another thread runs a MIDI
file player.
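The single-producer/single-consumer queue described above might look
roughly like this (a sketch only, using C11 atomics instead of glib's
wrappers; the event struct, queue size, and function names are made up
for illustration). Only the producer touches head, only the consumer
touches tail, and the shared count is the one atomic:

```c
#include <stdatomic.h>

#define QUEUE_SIZE 256          /* hypothetical capacity, in events */

typedef struct {                /* placeholder for a MIDI event */
    int type, chan, p1, p2;
} midi_event_t;

typedef struct {
    midi_event_t buf[QUEUE_SIZE];
    unsigned int head;          /* written only by the producer */
    unsigned int tail;          /* written only by the consumer */
    atomic_uint count;          /* events currently queued (shared) */
} event_queue_t;

/* Producer side (MIDI thread): returns 0 on success, -1 if full. */
int queue_push(event_queue_t *q, const midi_event_t *ev)
{
    if (atomic_load(&q->count) == QUEUE_SIZE)
        return -1;                      /* full: caller drops or retries */
    q->buf[q->head] = *ev;
    q->head = (q->head + 1) % QUEUE_SIZE;
    atomic_fetch_add(&q->count, 1);     /* publish only after the write */
    return 0;
}

/* Consumer side (synthesis thread): returns 0 on success, -1 if empty. */
int queue_pop(event_queue_t *q, midi_event_t *ev)
{
    if (atomic_load(&q->count) == 0)
        return -1;
    *ev = q->buf[q->tail];
    q->tail = (q->tail + 1) % QUEUE_SIZE;
    atomic_fetch_sub(&q->count, 1);     /* release the slot after reading */
    return 0;
}
```

Note this only stays lock-free with exactly one producer and one
consumer; with both a keyboard thread and a file-player thread, each
producer would need its own queue (or the head index would need a
compare-and-swap).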

>> The former. It seems to me that fluid_synth_one_block should not be
>> called from one thread at the same time as another thread calls
>> fluid_synth_handle_midi_event and friends. So either we have
>> concurrency issues, or I'm overlooking a smart and undocumented
>> locking mechanism. (synth->busy seems to prevent some things but not
>> all of them? And it is commented out in some places?)
> I wouldn't be surprised if there are threading issues, so what you
> have found is likely a valid issue.  I have also suspected such issues
> could exist, since there doesn't seem to be much regard for locking
> sensitive data or anything of the sort.  This could lead to synthesis
> issues (if multiple synthesis parameters are interdependent, or on
> multi-CPU systems), or in the worst case crashes.

Actually, what's surprising is that it does not crash often (or output
wrong data). That's why I thought there might be something I had
overlooked that makes it work anyway.

// David




