octave-maintainers

Re: Possible (summer of code) projects for Octave


From: Jaroslav Hajek
Subject: Re: Possible (summer of code) projects for Octave
Date: Tue, 4 Jan 2011 13:40:26 +0100

On Mon, Jan 3, 2011 at 10:22 PM, Søren Hauberg <address@hidden> wrote:
> Mon, 03 01 2011 at 21:49 +0100, Daniel Kraft wrote:
>> I'm also not really an expert or experienced with JIT technology or
>> related stuff, unfortunately.  What I do for gfortran/GCC is basically
>> front-end stuff (like implementation of Fortran 2003 OOP features) and I
>> did not yet touch any optimization or code-generation in GCC.  But on
>> the other hand, I guess that using existing frameworks like LLVM the
>> main concern is not compilation technology and code generation but more
>> about how to translate Octave into a more static form.
>
> The simplest thing (according to the compiler people I hang out with
> from time to time) would most likely be to output C++ code. Then the
> missing part is essentially "just" type estimation. I don't think this
> is particularly easy, but I'm really no expert here...
>
>>   But this means that currently there are no projects in planning or
>> development to tackle something like that?
>
> Not anything I know of.
>
>> My first thought was more about using simple parallel algorithms (and
>> probably mostly shared-memory with few cores rather than cluster
>> computing) for stuff like matrix/vector element-wise operations,
>> dot-products or BLAS in general.  Although, I think that it is not
>> always easy to come up with code that performs well on different
>> architectures or for different problem sizes -- and AFAIK, Octave uses
>> BLAS/LAPACK routines for (some of) those operations, right?  I don't
>> really know, but I could imagine that there are already projects out
>> there to develop (free and portable) parallel BLAS routines.  So maybe
>> one could "simply" try integrating them into Octave and implementing
>> some framework to control parallelization depending on the problem size
>> and user preferences or the like.
>
> I would tend to agree. Having mechanisms for switching BLAS/LAPACK
> implementation at run-time could potentially be quite nice. I'm not sure
> it's particularly easy, though.
>

I really doubt that it would be easy. It often takes some effort to tune
the configuration so that even a single BLAS/LAPACK library compiles
with Octave, let alone multiple ones. And for no obvious benefit, at
least none that I can see. A number of BLAS libraries (e.g. GotoBLAS)
are parallelized, and the number of threads is usually controllable by
some run-time mechanism, but that mechanism naturally differs between
libraries. NumPy is (or at least was when I last checked) strongly
ATLAS-biased, and I would not be surprised if they simply exposed the
feature for ATLAS alone.
Octave tries to be more neutral w.r.t. BLAS, so if anyone wanted to do
something like this in Octave, it would probably involve some
extensive configure-checking to detect the proper API. But otherwise
it wouldn't be that hard.


>> But of course the extension of selected existing Octave functions in
>> the way you mention above also seems like an interesting idea.  Do you
>> know what the opinions of "the community" are with respect to this
>> approach in general?  Would it be considered useful, and for which
>> functions / functionality?
>
> I would consider it useful. My guess would be that others would feel the
> same way if the code was simple enough to maintain. Such features would
> never stand a chance if they were too hard to maintain.
>
> Cheers
> Søren
>
>

