Re: [MIT-Scheme-devel] floating-point environment


From: Taylor R Campbell
Subject: Re: [MIT-Scheme-devel] floating-point environment
Date: Fri, 15 Oct 2010 05:14:58 +0000
User-agent: IMAIL/1.21; Edwin/3.116; MIT-Scheme/9.0.1

   Date: Thu, 14 Oct 2010 13:02:24 -0700
   From: address@hidden (Matt Birkholz)

   Does the SIGFPE handler not do the usual "request interrupt" thing?
   It is a "trap", not "interrupt" handler, so its continuation is... ?

On SIGFPE, the system tries to find where the PC is, and the debugger
will report that location.  If the PC is in compiled Scheme code, then
the debugger can usually tell you what compiled block you were in, and
will use some heuristics to try to find what the nearest continuation
was to produce a stack trace.

Try it out -- write some hairy flonum arithmetic expression that
involves a division, pass in zero for the divisor, and see what the
debugger gives you.
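
For instance, something like this (untested, and the procedure name is
just for illustration) should land you in the debugger at the division
instead of quietly handing back an infinity:

(define (hairy x y)
  ;; The divisor is (flo:- x y), so equal arguments divide by zero;
  ;; with that exception unmasked, the division traps via SIGFPE to
  ;; the microcode error handler rather than yielding an infinity.
  (flo:/ (flo:+ (flo:* x x) (flo:* y y))
         (flo:- x y)))

(hairy 2. 2.)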

Note that control does not proceed in compiled Scheme code until the
next interrupt check -- instead, control immediately transfers to a
microcode error handler in the runtime.

This is all how MIT Scheme already works.  I am not aware of any other
Scheme system in which floating-point exceptions are unmasked at the
machine level -- most Schemes will just give NaN, infinity, or zero,
in exceptional situations, without any option to trap.  And sometimes
that may be what one wants, which is what FLO:SET-MASKED-EXCEPTIONS!
would be for.

   > [...]  But if we instead had primitives to read floating-point
   > values from and write them to octet vectors, which could be
   > relatively easily open-coded, I imagine the difference between
   > frobbing the machine's floating-point exception mask versus
   > involving the condition system would be pretty substantial.

   We have "floating-point vector primitives"

           flo:vector-cons
           flo:vector-ref
           flo:vector-set!
           flo:vector-length

   and they appear to already be open-coded!

What we don't have is a performant way to read IEEE 754 floating-point
values in from the network or write them to a file or anything --
floating-point vectors are useless for that.

   I am looking forward to using them.  Bonus question: how do I write
   the following so that it uses fancy SIMD instructions? :-)

Sorry, 'fraid you'll have to hack LIAR for that.

You can at least make the procedure a little more concise, and take
responsibility for checking types and ranges yourself.  Untested, but
something like this:

(define (transform-3d-points transform points)
  (guarantee-flo:vector-of-length transform 9)
  (map (lambda (point)
         (guarantee-flo:vector-of-length point 3)
         (let ((new (flo:vector-cons 3)))
           (declare (no-type-checks) (no-range-checks))
           (define-integrable (p i) (flo:vector-ref point i))
           (define-integrable (t i j) (flo:vector-ref transform (+ (* i 3) j)))
           (define-integrable (init i) (flo:vector-set! new i (f i)))
           (define-integrable (f i)
             (flo:+ (flo:* (p 0) (t i 0))
                    (flo:+ (flo:* (p 1) (t i 1))
                           (flo:* (p 2) (t i 2)))))
           ;; I presume you meant to initialize 0, 1, and 2, rather
           ;; than 0, 0, and 0?
           (init 0) (init 1) (init 2)
           ;; Return the freshly initialized vector from the mapped procedure.
           new))
       points))

(define-integrable (guarantee-flo:vector-of-length ...) ...)
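
Something along these lines might do for that guard -- assuming
FLO:FLONUM? answers true for flonum vectors (they share the flonum type
tag); substitute whatever predicate your runtime actually exposes:

(define-integrable (guarantee-flo:vector-of-length object length)
  ;; Check the type first, then the length, and signal an error if
  ;; either check fails.
  (if (not (and (flo:flonum? object)
                (fix:= (flo:vector-length object) length)))
      (error "Not a flonum vector of length:" length object)))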

LIAR's type and range checking is pretty hokey, so disabling type and
range checks is necessary to prevent boxing in immediate arithmetic
expressions.  Once you've disabled the checks, go wild integrating
arithmetic expressions, even variable references, as in (let ((x
(flo:...))) (declare (integrate x)) ...) -- if you don't integrate X,
it will be stored boxed on the stack.  The RTL common subexpression
eliminator will usually take care of the duplication caused by
integration anyway.
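
To make that concrete, here is an illustrative (untested) case: X below
is integrated into all three of its uses, so the intermediate flonum is
never boxed, and the RTL common subexpression eliminator should merge
the duplicated FLO:+ operations back into one.

(define (flo:cube-of-sum a b)
  (declare (no-type-checks) (no-range-checks))
  (let ((x (flo:+ a b)))
    (declare (integrate x))
    ;; Without the INTEGRATE declaration, X would be stored boxed on
    ;; the stack between these references.
    (flo:* x (flo:* x x))))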

Here's some code I wrote when I last needed to do reasonably fast
floating-point computation and I/O -- most of the important operations
(e.g., matrix multiplication) do not incur intermediate consing and
are totally open-coded when compiled:

<http://mumble.net/~campbell/tmp/mit-matrix.scm>
<http://mumble.net/~campbell/tmp/mit-vector.scm>
<http://mumble.net/~campbell/tmp/mit-flonum-bits.scm>

(The last is a sleazy hack I'd like to replace by new primitives.)


