On Thu, 14 Apr 2022 15:11:54 -0700 (PDT)
Fred Wright <fw@fwright.net> wrote:
On Wed, 13 Apr 2022, Gary E. Miller wrote:
That log is your HPPOSLLH issue. I have a working fix for that
too, but not tonight.
That collides with what I was doing, as well as being slightly
wrong, and now I have to do more work to sort it out.
At least it works.
The general concept is the same as my change, i.e. combining the two
pieces as integers before a single conversion to FP. But:
Good.
1) You're defining the multiplier as 100L, when it should be 100LL.
With 100L on a 32-bit platform, it will compute the combined integer
in 32 bits and potentially overflow it.
which is why I used the (int64_t) cast: to ensure 64-bit math.
2) I'm not sure whether casting the constant to double actually
matters in this context, but it certainly doesn't hurt, so it might
as well be there.
My tests show it is required, to keep C from using (long double)
math when FLT_EVAL_METHOD != 0.
3) Changing the scaling from multiplication to division simply slows
it down without improving the accuracy.
Yes, and no. "/ 100" is exact, until the divide is done. "* 1e-2",
the old way, is a problem since a double or long double cannot store
1e-2 exactly. Since some C compilers store it as a (double) and
others as a (long double), that was one source of the problems.
This test case, which I posted earlier, shows the problem:
#include <float.h>
#include <stdio.h>

int r1;
double ten = 10.0;    /* runtime value, so the divide can't be folded away */

int main(void)
{
    /* 0.1 is a compile-time constant; 1.0 / ten is computed at run time.
     * Whether they compare equal depends on FLT_EVAL_METHOD. */
    r1 = 0.1 == (1.0 / ten);
    printf("FLT_EVAL_METHOD %d\n", FLT_EVAL_METHOD);
    printf("r1=%d\n", r1);
    return 0;
}
The problem is floating point constants at compile time!
It's tempting to think that
the division approach is more accurate since the constant can be
represented exactly,
Not worried about "accuracy", worried about portability. I changed it
because it broke portability.
but ultimately there needs to be a division by a
power of 10 at some point either way, with the only difference being
whether it does that at compile time in computing the constant, or at
run time.
Yup. And the compile-time randomness was one of the portability
problems.
The latter is more expensive, and not likely to be more
accurate unless the compiler is screwing up computing the constant in
the former case.
Again: not worried about "accuracy", worried about portability. I
changed it because it broke portability.