
Re: [fluid-dev] New development : system clock vs. audio clock


From: Bernat Arlandis i Mañó
Subject: Re: [fluid-dev] New development : system clock vs. audio clock
Date: Tue, 27 Jan 2009 17:49:43 +0100
User-agent: Mozilla-Thunderbird 2.0.0.19 (X11/20090103)

Antoine Schmitt wrote:
Hi Josh and Bernat,

The issue I fixed was for real-time rendering when using the sequencer. It was related not only to the ordinary latency caused by the size of the driver buffer, but also to unexpected behavior of the DSound driver, which, depending on the target hardware and other unknown factors, would actually request buffers in bulk: it would request 16 buffers in a row, thus multiplying the latency by 16. And this was not consistent (sometimes 1 buffer would be requested, sometimes 16). I have logs of this. It means that audio was, in a way, running much ahead of real time.
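(To put rough numbers on that, purely as an illustration: if one buffer is 64 samples at 44100 Hz, about 1.45 ms, then 16 buffers requested in a row means roughly 23 ms of audio rendered in one burst, well ahead of real time.)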

The result was that the "Sub audio buffer MIDI event processing" issue that Josh mentions was multiplied by 16, resulting in audible irregularities in rhythms. IIRC, MIDI playback is also attached to the system clock, with a timer, so this problem will also happen for MIDI file playback, not only for sequencer playback. [As a side note, there is some code redundancy, again IIRC, between the sequencer and the MIDI file playback. This could be factored out by, for example, having the MIDI file playback use the sequencer to insert MIDI events into the audio stream. End of side note.]

I fixed this by branching the sequencer on the audio time (how many samples have elapsed), _and_ by calling the sequencer routine just before filling each audio buffer.

-> I guess that I did not fix this same issue for MIDI file playback, then.
-> Also, I reduced the precision to a single buffer length; I did not address sub-buffer precision.
=> I guess this could really benefit from an overall cleanup.

As for the question of where to process the scheduled MIDI events (whether they come through the sequencer or through the MIDI file playback), I think that the only way to get consistent and reliable rendering is indeed to do it inside the callback from the audio driver, especially if the audio runs ahead of real time.
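To make that scheme concrete, here is a minimal sketch of the idea (hypothetical names and stub functions, not the actual FluidSynth code): the callback dispatches every event due by the current audio time, renders the buffer, and then advances the sample counter.

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical stand-ins for the real sequencer and synth. */
    static void sequencer_process_until(uint64_t now) { (void)now; }
    static void synth_render(float *out, size_t len) { (void)out; (void)len; }

    /* Audio clock: total samples handed to the driver so far. */
    static uint64_t samples_elapsed = 0;

    /* Called by the audio driver once per buffer. Events are dispatched
     * against the audio clock, so even if the driver asks for 16 buffers
     * in a row, each buffer still gets the events that belong to it. */
    static void audio_callback(float *out, size_t buffer_len)
    {
        sequencer_process_until(samples_elapsed); /* dispatch due events */
        synth_render(out, buffer_len);            /* fill one buffer */
        samples_elapsed += buffer_len;            /* advance the audio clock */
    }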


Thank you very much for taking the time to explain it; now I understand much better what you have done, and yes, it's related to Josh's proposal.

Before trying to solve this problem in the best way, we have to understand it well, and it's somewhat complex. I see two problems here:

1. A soundcard/driver with an unusually high minimal buffer size (and thus an unusually high latency).
2. FS doesn't work well with big audio output buffer sizes (or high latencies), since MIDI events get quantized to the buffer size.
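To put the quantization in perspective (illustrative numbers only): the timing error for any event is bounded by one buffer, i.e. the period size divided by the sample rate, so 64 samples at 44100 Hz gives steps of about 1.45 ms, well below audibility. It's only when the effective buffer grows, as with the 16-buffer bursts above, that problem #2 becomes audible.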

Problem #1 is the real problem, and it's related to the soundcard/system/driver, not FS; but it can be mostly ignored when you're only playing back MIDI files. You should check whether this is the problem that ASIO drivers solve on Windows; someone else will know better than me, poor Linux user. :)

Problem #1 could almost be worked around by solving problem #2, but not really fixed. Implementing Josh's proposal would complicate the code a lot, and it would hurt performance badly on systems that already have good latency, which means any modern computer with appropriate audio drivers and configuration.
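For reference, here is roughly what sub-buffer event processing implies (a hypothetical sketch under made-up names, not a patch): instead of dispatching all pending events once per buffer, the renderer splits the buffer at each event timestamp, which is where the extra bookkeeping and per-event rendering overhead come from.

    #include <stddef.h>
    #include <stdint.h>

    typedef struct { uint64_t time; /* timestamp in samples */ } midi_event_t;

    /* Hypothetical stubs; dispatch_event() is assumed to pop the queue. */
    static midi_event_t *peek_next_event(void) { return NULL; }
    static void dispatch_event(midi_event_t *ev) { (void)ev; }
    static void synth_render(float *out, size_t len) { (void)out; (void)len; }

    /* Render one buffer in slices, splitting at each event timestamp. */
    static void render_buffer(float *out, size_t buffer_len, uint64_t buffer_start)
    {
        size_t done = 0;
        while (done < buffer_len) {
            midi_event_t *ev = peek_next_event();
            uint64_t now = buffer_start + done;
            if (ev && ev->time < buffer_start + buffer_len) {
                /* Render up to the event, then apply it sample-accurately. */
                size_t slice = ev->time > now ? (size_t)(ev->time - now) : 0;
                if (slice > 0)
                    synth_render(out + done, slice);
                dispatch_event(ev);
                done += slice;
            } else {
                /* No more events in this buffer: render the rest at once. */
                synth_render(out + done, buffer_len - done);
                done = buffer_len;
            }
        }
    }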

Still, there might be something worth exploring there: maybe Antoine's patch could serve as a workaround for the latency problem in the DSound driver.

On 27 Jan 2009, at 03:32, Josh Green wrote:
It seems to me like using a system timer for MIDI file event timing
(something that has different resolutions depending on the system) is
going to be a lot less reliable than using the sound card time.  Again
though, I agree that this probably only benefits MIDI file
playback/rendering.

It depends on what you're looking for. If you see FS output only as a numeric series, then we should sacrifice everything for exact sample resolution. But this is sound, so latency and reliable performance are a lot more important than sample accuracy. Don't get me wrong, I would love to achieve sample accuracy with good performance, reliability and latency, but that's not realistic, especially since we're aimed at personal computers.

Still, there's good news: we can get sample accuracy with non-RT (or offline) rendering, which doesn't need any timers, and I'd like to do it for 2.0. This would be good for testing and also for master-track rendering of pre-recorded MIDI tracks.
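A minimal sketch of what that could look like (hypothetical stubs, not the actual API): with no real-time constraint there are no timers at all; the renderer synthesizes exactly up to each event's timestamp, applies the event, and writes the samples out, so the result is sample-accurate by construction.

    #include <stdint.h>
    #include <stdio.h>

    typedef struct { uint64_t time; /* timestamp in samples */ } midi_event_t;

    /* Hypothetical stubs standing in for the MIDI track and the synth. */
    static midi_event_t *next_event(void) { return NULL; } /* NULL at end */
    static void apply_event(midi_event_t *ev) { (void)ev; }
    static void synth_render(float *buf, size_t n) { while (n--) *buf++ = 0.0f; }

    /* Offline rendering loop: no timers, sample-accurate by construction. */
    static void render_offline(FILE *out, uint64_t total_samples)
    {
        float buf[512];
        uint64_t pos = 0;
        while (pos < total_samples) {
            midi_event_t *ev = next_event();
            uint64_t until = (ev && ev->time < total_samples) ? ev->time
                                                              : total_samples;
            while (pos < until) { /* render right up to the next event */
                size_t n = (until - pos < 512) ? (size_t)(until - pos) : 512;
                synth_render(buf, n);
                fwrite(buf, sizeof(float), n, out);
                pos += n;
            }
            if (ev && ev->time <= total_samples)
                apply_event(ev); /* applied exactly at ev->time */
        }
    }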
Josh Green also wrote:
What about just using it as a timing source? I still haven't thought it all through, but I could see how this could have its advantages.

Using the audio driver as a timing source could be an option for 2.0; in fact, I'd like to be able to use anything as a timing source. But there's a difference: there would be separate threads for MIDI and core processing with different priorities, same as now, but sharing the same timing source.
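A minimal sketch of such a shared timing source (assuming C11 atomics; names are made up): the audio thread publishes a sample counter once per rendered buffer, and the MIDI thread derives its notion of "now" from that counter instead of from the system clock.

    #include <stdatomic.h>
    #include <stddef.h>
    #include <stdint.h>

    #define SAMPLE_RATE 44100

    /* One timing source shared by the audio and MIDI threads. */
    static _Atomic uint64_t audio_clock_samples;

    /* Audio thread: advance the clock once per rendered buffer. */
    static void on_buffer_rendered(size_t buffer_len)
    {
        atomic_fetch_add(&audio_clock_samples, (uint64_t)buffer_len);
    }

    /* MIDI thread: current time in milliseconds, derived from the
     * audio clock rather than the system clock. */
    static double midi_time_now_ms(void)
    {
        return (double)atomic_load(&audio_clock_samples) * 1000.0 / SAMPLE_RATE;
    }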

We're getting into very complex issues that I think shouldn't be the most important thing right now, unless someone wants to experiment with them; but that kind of experimentation should be done in its own experimental branch. The 2.x branch should not be experimental.

Cheers.

--
Bernat Arlandis i Mañó




