Bls: Bls: Bls: State of JIT compiler
From: Mario Ray Mahardhika
Subject: Bls: Bls: Bls: State of JIT compiler
Date: Fri, 25 Feb 2011 01:59:19 +0800 (SGT)
> Will you please reply on the list to keep this discussion public so others
> may comment?
Sorry, I forgot to use "reply all".
> Can you say more about exactly what you plan to do?
Just make a new "visitor" for the generated syntax tree which will build a
temporary LLVM module (for the current execution, i.e. from the prompt), then
execute it via the LLVM JIT. Used modules are compiled as well (or perhaps we
could offer to compile all the modules upon installation) and saved in files
for later use (recompiled when necessary).
> Why do you need the language grammar separate from the Octave sources?
> Octave's lexer and parser generate a parse tree, which is then
> evaluated (see the file src/pt-eval.cc).
I didn't really say that. It's OK if the sources have some kind of grammar
representation in a format expected by Bison, GOLD, or any other parser
generator.
> As I see it, the difficult parts of this are
> inferring types so that you can generate code that a C++ compiler can
> do something intelligent with, and also deciding which blocks of code
> to actually compile (full function bodies? just some loops?)
Yeah, type inference is indeed the most difficult part, due to the dynamic
typing of variables in the language. LLVM is statically and strongly typed,
but its pointers are just as flexible as C's.
> In any case, type inferencing is the key feature here. If you only operate
> on octave_value objects, you won't see much speedup, as a lot of the
> time is spent decoding exactly what each octave_value object contains
> and dispatching operations once the underlying types are decoded. To
> speed things up, you have to do the inferencing and dispatching ahead
> of time. I think that's what makes tracing JIT implementations
> interesting -- they don't attempt general type inferencing, but wait
> to see what values functions (or smaller blocks of code) are actually
> called with, compile for those conditions, and then when the same
> conditions are met, use the compiled code to perform the operations.
I need to look at the octave_value definition and how the current interpreter
uses it to decode types. And I guess I need to learn about this trace-based
JIT approach. From your explanation, it sounds much like the type inference
mechanism of pure functional languages (bottom-up type inference).
For a starting point, I'll try implementing it for simple arithmetic.
P.S.:
Geez, I just realized a simple "format xxx" would cause all the used modules
to be recompiled. Assuming it's not called as often as other constructs, maybe
it's safe to leave it like that.
- Re: State of JIT compiler, (continued)
Re: State of JIT compiler, Jordi Gutiérrez Hermoso, 2011/02/22
- Re: State of JIT compiler, Jordi Gutiérrez Hermoso, 2011/02/22
- Bls: State of JIT compiler, Mario Ray Mahardhika, 2011/02/23
- Re: State of JIT compiler, Jordi Gutiérrez Hermoso, 2011/02/23
- Bls: State of JIT compiler, Mario Ray Mahardhika, 2011/02/23
- Re: Bls: State of JIT compiler, Michael D Godfrey, 2011/02/23
- Re: Bls: State of JIT compiler, John W. Eaton, 2011/02/23
- Bls: State of JIT compiler, John W. Eaton, 2011/02/23
- Bls: Bls: Bls: State of JIT compiler, Mario Ray Mahardhika