Re: [MIT-Scheme-devel] possible bug in min function

From: Chris Hanson
Subject: Re: [MIT-Scheme-devel] possible bug in min function
Date: Sun, 11 Feb 2007 14:38:25 -0500
User-agent: Icedove 1.5.0.9 (X11/20061220)
Sean D'Epagnier wrote:
> I have been using MIT Scheme for nearest-neighbor algorithms, and often
> I have a default "distance" of IEEE 754 infinity for a point.
>
> I am looking to find the minimum, but when I execute the following:
>
> 1 ]=> (min 1 (/ 1.0 0.0))
>
> ;Value: #[+inf]
>
>
> This seems wrong. I have worked around it for now using something like:
>
> 1 ]=> (min (exact->inexact 1) (/ 1.0 0.0))
>
> ;Value: 1.
>
>
> Does this make any sense? Is it a bug? This is on a 64-bit Linux
> system with the C backend. I tried it on a 32-bit Linux system, and
> I get a division-by-zero error. Shouldn't both versions behave the same?
It looks like you've found two separate bugs. The first is in the
handling of a mixture of an infinity and a non-flonum. The second is
that the 64-bit platform isn't initializing its floating-point unit
correctly. The correct behavior in all situations should be that an
error is signalled.
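For readers unfamiliar with the first bug: under R5RS, `min` must return an inexact result whenever any argument is inexact ("contagion"), so `(min 1 (/ 1.0 0.0))` should yield `1.` rather than the infinity. The sketch below, in Python rather than Scheme and not taken from MIT Scheme's source, illustrates that contagion rule; `scheme_min` is a hypothetical helper name.

```python
import math

def scheme_min(*args):
    # Hedged sketch of the R5RS `min` contagion rule, not MIT Scheme's
    # actual implementation: the numerically smallest argument wins, but
    # if any argument is inexact (a float here), the result must be
    # returned as inexact as well.
    smallest = min(args)            # 1 < +inf, so this picks the exact 1
    if any(isinstance(a, float) for a in args):
        return float(smallest)      # contagion: exact 1 becomes 1.0
    return smallest

print(scheme_min(1, math.inf))      # prints 1.0, not inf
```

Sean's `exact->inexact` workaround performs this coercion by hand before the comparison, which is why it produces the expected `1.` on the 64-bit build.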