fluid-dev

Re: [fluid-dev] New development


From: Josh Green
Subject: Re: [fluid-dev] New development
Date: Sun, 25 Jan 2009 18:29:27 -0800

On Mon, 2009-01-26 at 01:47 +0100, Bernat Arlandis i Mañó wrote:
> Specifically, on the modularization aspect I'd like to break FS down into 
> several libraries that could be built separately if needed. These libraries 
> might be, guessing a bit: fluid-synth (the synthesis engine), 
> fluid-soundfont (SoundFont loader), fluid-midi (MIDI playing 
> capabilities incl. MIDI drivers), fluid-midir (MIDI routing), 
> fluid-audio (audio conversion utilities and output drivers, maybe LADSPA 
> too), fluid-ctrl (shell and server).
> 
> Some of these components could grow and become independent projects. In 
> particular, I think MIDI routing could become a general library able 
> to read routing rules from an XML file, with a front end for 
> editing these files. Some other components might just disappear if 
> there's some external one that can do the same.
> 
> Being able to break it down and build it like this would be a good 
> modularization test. It would also help 3rd party developers take just 
> what they need and connect the parts in more flexible ways than is 
> possible now.
> 
> In some ways, the code has already been developed with these goals in 
> mind, so we're not that far off. It's really difficult to fully reach these 
> goals in one try, or even two, but we're already somewhat close.
> 

I think breaking FluidSynth into multiple libraries would likely
needlessly complicate things.  However, I think keeping the code well
modularized is good and would make splitting things off into separate
libraries easy, if and when it seems like a good idea.

> > - 24 bit sample support
> > - Sample streaming support
> >   
> 24-bit support is needed for complete SF2.04 support, and sample 
> streaming would be good too, especially with 24-bit samples. I thought 
> this belonged to libInstPatch, but no. These should be post-2.0.

libInstPatch *does* handle 24 bit audio, and floating point as well.  It
has its own audio management code for converting formats internally.
Swami and libInstPatch support 24 bit audio in SoundFont files;
FluidSynth does not.  That is what I would like to change.

> > - Sub audio buffer MIDI event processing
> >   
> This one would be hard to do, and I think it would hurt performance. I 
> don't think it's important to have such high MIDI resolution. We can 
> talk about this later, post-2.0.

I agree with this.  I think this was mainly an issue with some audio
drivers not processing audio at the buffer size to which they are set.
That seems to be more of an issue with the sound driver, though.

> > - Faster than realtime MIDI file to audio rendering
> >   
> When doing modularization, I'd like to implement external timing, that 
> is, synthesis and MIDI timing controlled by external functions. That 
> would make it really easy to do.

Yeah, I think most of the timing-related stuff in regards to MIDI
playback happens in realtime, rather than processing a queue of timed
events.  This is one area, though, where I am a bit in the dark as far as
the FluidSynth code base goes.  It would be nice to be able to render
a WAV file from a MIDI file and SoundFont and get the exact same audio
output every time.  This would also be extremely useful for SoundFont
compliance testing, something that I think really needs to be done with
FluidSynth.

> > - Overhaul SoundFont loader API (used only by Swami as far as I know)
> >   
> This means Swami depends on the FS SoundFont API; I thought libInstPatch 
> duplicated this functionality. This is in the pack, then.

No, that isn't quite right.  The SoundFont loader API is used for
synthesis in FluidSynth (not for loading SoundFont files themselves).
libInstPatch and Swami do their own instrument management, but when they
want to synthesize those instruments, the SoundFont loader API is used.
This API abstracts the synthesis into a set of voices which can be
created by the application.  The voices have a list of SoundFont-based
parameters, modulators and sample data.  In this way, FluidSynth can be
used to synthesize any format, at least within the confines of SoundFont
parameters.  It's a flexible API, but I think it could use some cleanup
and expansion of its capabilities (different audio formats, for
example, like 24 bit).

> > - Leverage off of libInstPatch (optional dependency perhaps, maybe not?)
> > which would add support for other formats and flexible framework for
> > managing/manipulating instruments.
> >   
> You could certainly help a lot with this.

I'm starting to think having libInstPatch be a dependency could be a
good move.  libInstPatch is itself a glib/gobject based library.  It has
some additional dependencies, but most of them are optional (the Python
binding, for example).  The components that would be of the most
interest would be the instrument loading and synthesis cache objects.
The cache allows for the "rendering" of instrument objects into a list
of potential voices.  When a MIDI note-on event occurs, these voices can
be selected in a lock-free fashion; the cache is generated at MIDI
program selection time.  It seems like FluidSynth should be able to take
advantage of this code, whether it be used in something like Swami or
standalone.

> That's the idea, although I wouldn't put the 1.x branch to rest so fast; 
> I think it's gonna take some time. Fixes could easily go to both 
> branches whenever possible, but serious new development should go to the 
> new branch.

Totally agree.

> With better modularization I expect it'll be easier for everyone to 
> focus on their preferred component, and each one could grow 
> independently of the others. It'd also be easier to implement 
> alternatives for components with little effort.
> 
> I thank you all for the positive and helpful responses. I need some time 
> to gather more information; then I will explain better what I would do, 
> and exchange points of view with you so we can work together on this 
> from the start.
> 
> Cheers.
> 

Seems like you have some good ideas.  Let's try to keep a good balance,
though, between modularity and simplicity.  Over-modularizing can make
things more complicated than they need to be, when it's really supposed
to have the opposite effect.

I think the next question is: when should we branch?  It probably makes
the most sense to release 1.0.9 and then branch off 2.x.  At some point
2.x would become the head and we would make a 1.x branch.

Some decisions should be made about what remains to put into 1.0.9.

What of the following should be added?
- PortAudio driver (it exists, does it just need to be improved?)
- Jack MIDI driver
- ASIO driver

If someone feels inspired to tackle any of these, speak up.  I don't
think we should hold back a release too long for any of these.  Jack
MIDI would be nice.

Best regards,
        Josh





