
Re: [fluid-dev] New development


From: Bernat Arlandis i Mañó
Subject: Re: [fluid-dev] New development
Date: Mon, 26 Jan 2009 15:49:28 +0100
User-agent: Mozilla-Thunderbird 2.0.0.19 (X11/20090103)

On Sun, 25 Jan 2009 18:29:27 -0800, Josh Green <address@hidden> wrote:
> I think breaking FluidSynth into multiple libraries would likely
> needlessly complicate things. However, I think keeping the code well
> modularized is good and would make splitting things off into separate
> libraries easy, if and when it seems like a good idea.

I agree, the possibility of building separate libraries, or a library with
just the chosen components, should only be implemented when needed. But
those components should have a well-defined API and be so independent of
each other that this becomes possible without touching the code. That's one
of the things I'd be most interested in achieving. At this point, though, I
don't understand how splitting into separate libs would complicate things.

Still, keep in mind that splitting into separate libs is not my main goal;
the goal is modularization good enough that such a split would be really
easy and practical.

> libInstPatch *does* handle 24-bit audio, and floating point as well. It's
> got its own audio management for converting formats internally and what
> not. Swami and libInstPatch support 24-bit audio in SoundFont files;
> FluidSynth does not. That is what I would like to change.

I'll need help there with libInstPatch integration. I don't know exactly
what libInstPatch can and can't do, but using it seems like a good idea.
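
On the 24-bit side, for reference: SoundFont 2.04 keeps the usual 16-bit
samples in the smpl chunk and puts the extra least-significant byte of each
sample point in a separate sm24 sub-chunk, so the reconstruction is just a
shift and an OR. A minimal sketch (the buffer names are mine, this isn't
existing FluidSynth code):

#include <stdint.h>
#include <stddef.h>

/* Combine the 16-bit 'smpl' data with the matching 'sm24' low bytes
 * into signed 24-bit samples stored in 32-bit ints.  Illustrative
 * only; a real loader would read both chunks from the file first. */
static void
combine_sf24(const int16_t *smpl, const uint8_t *sm24,
             int32_t *out, size_t nframes)
{
  size_t i;
  for (i = 0; i < nframes; i++)
    out[i] = ((int32_t)smpl[i] << 8) | sm24[i];
}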

> No, that isn't quite right. The SoundFont loader API is used for
> synthesis in FluidSynth (not for loading SoundFont files themselves).
> libInstPatch and Swami do their own instrument management, but when they
> want to synthesize those instruments, the SoundFont loader API is used.
> This API abstracts the synthesis into a set of voices which can be
> created by the application. The voices have a list of SoundFont-based
> parameters, modulators and sample data. In this way, FluidSynth can be
> used to synthesize any format, at least within the confines of SoundFont
> parameters. It's a flexible API, but I think it could use some cleanup
> and expansion of its capabilities (different audio formats, for example,
> like 24-bit).

That's really interesting; this is the part of FS I like the least. In
theory it would help support every sound font format, but in practice it
becomes very hard, because you end up implementing a synthesis engine
inside every font loader. There's another solution that I think would work
better; more on this later.
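
Just so we're talking about the same thing, here's roughly what a
loader-provided preset's noteon callback ends up doing with the public
voice API: allocate a voice for a sample, set SoundFont generators and
modulators on it, and start it. This is only a sketch; my_find_sample and
the generator value are placeholders of mine, not FluidSynth code:

#include <fluidsynth.h>

/* Hypothetical helper: pick the fluid_sample_t for this key/velocity.
 * Not part of FluidSynth; a real loader keeps this in its own data. */
extern fluid_sample_t *my_find_sample(fluid_preset_t *preset, int key, int vel);

/* Sketch of a custom preset's noteon callback: the synth is handed a
 * set of voices built from SoundFont-style parameters. */
int
my_preset_noteon(fluid_preset_t *preset, fluid_synth_t *synth,
                 int chan, int key, int vel)
{
  fluid_sample_t *sample = my_find_sample(preset, key, vel);
  fluid_voice_t *voice;

  voice = fluid_synth_alloc_voice(synth, sample, chan, key, vel);
  if (voice == NULL)
    return FLUID_FAILED;

  /* SoundFont generators describe the voice; the value here is an
   * illustrative placeholder */
  fluid_voice_gen_set(voice, GEN_ATTENUATION, 60.0f);

  fluid_synth_start_voice(synth, voice);
  return FLUID_OK;
}

Every loader has to provide something like this, which is why I say each
one ends up carrying a piece of the synthesis engine.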

> I'm starting to think having libInstPatch be a dependency could be a
> good move. libInstPatch is itself a glib/gobject based library. It has
> some additional dependencies, but most of them are optional (the Python
> binding, for example). The components that would be of the most interest
> would be the instrument loading and synthesis cache objects. The cache
> allows for the "rendering" of instrument objects into a list of
> potential voices. When a MIDI note-on event occurs, these voices can be
> selected in a lock-free fashion; the cache is generated at MIDI program
> selection time. It seems like FluidSynth should be able to take
> advantage of this code, whether it is used in something like Swami or
> standalone.

I really think all the SoundFont loader stuff should go there, once the
synthesis-related parts have been moved to the synth component.
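
Just to check I understand the cache idea (the names below are mine, this
is not the libInstPatch API): at program-select time the instrument gets
rendered once into a flat table of candidate voices, and the note-on path
only does key/velocity range matching over that table, with no locks and no
allocation:

/* Conceptual sketch of a note-on voice cache; invented names,
 * not the actual libInstPatch objects. */
typedef struct
{
  int key_lo, key_hi;       /* key range this voice responds to */
  int vel_lo, vel_hi;       /* velocity range */
  /* ... pre-computed generators, modulators, sample pointer ... */
} cached_voice_t;

typedef struct
{
  cached_voice_t *voices;   /* built when the MIDI program is selected */
  int count;
} voice_cache_t;

/* Called on note-on: no allocation, no locks, just range matching
 * over the pre-rendered table. */
static int
cache_select(const voice_cache_t *cache, int key, int vel,
             const cached_voice_t **sel, int max)
{
  int i, n = 0;
  for (i = 0; i < cache->count && n < max; i++)
  {
    const cached_voice_t *v = &cache->voices[i];
    if (key >= v->key_lo && key <= v->key_hi &&
        vel >= v->vel_lo && vel <= v->vel_hi)
      sel[n++] = v;
  }
  return n;
}

If that's right, I can see how FluidSynth could take advantage of it.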

> Seems like you have some good ideas. Let's try to keep a good balance,
> though, between modularity and simplicity. Over-modularizing stuff can
> make things more complicated than they need to be, when it's really
> supposed to have the opposite effect.

I don't like complicating things; I always try to follow the keep-it-simple
approach. If things ended up more complicated than they are now, I'd vote
for throwing the new branch in the bin, and I don't want it to come to
that. Keep in mind, though, that new development brings new things to
learn, and that's always a bit of work; still, learning the new code
shouldn't be harder than learning the old. Besides, as more features start
to appear they will add a bit more to learn, but that's unavoidable.

> I think the next question is: when should we branch? It probably makes
> the most sense to release 1.0.9 and then branch off 2.x. At some point
> 2.x would become the head and we would make a 1.x branch.

These should be totally independent; don't think of them as related in any
way. We can branch when it's needed, and 1.0.9 can be released whenever you
want. If you can wait a few days I'll kick off the new branch with a
proposal, and I'll also throw in a couple of fixes; they started as me just
playing with the code but have become a bit more serious.

Usually, in most projects, new development goes into the trunk and stable
releases are branches. However, since people here are used to having a
stable trunk, we can start the experimental 2.x work in a branch.

> Some decisions should be made about what remains to be put into 1.0.9.
>
> Which of the following should be added?
> - PortAudio driver (it exists, does it just need to be improved?)
> - Jack MIDI driver
> - ASIO driver

That's another discussion; we should think of these as two different and
independent development branches. Personally, I'm not interested in those
drivers, but someone might want to do them for 1.x, and we could merge them
later into the 2.x branch.

--
Bernat Arlandis i Mañó



