RE: Suggestion For mpz_sizeinbase() Function
From: David T. Ashley
Subject: RE: Suggestion For mpz_sizeinbase() Function
Date: Mon, 16 Jul 2001 20:18:33 -0400
Hi Kevin,
> chars_required = (12655 * bits_required)/42039 + 1
>
> Most machines will do this blindingly fast: one integer multiply as a
> processor instruction followed by one integer divide as a processor
> instruction.
>A fixed point binary approximation might be better, so it could be one
>mul and a shift, if that gave enough precision.
It depends on what you mean by "better", and it depends on what assumptions
you make about the computing platforms available.
If you assume that all platforms can natively multiply integers (which might
be a good assumption) but that not all platforms can natively divide
integers (I don't know if this is true or false), then fixed-point of the
form h/2**q is the better form, because I'm sure all machines can shift.
If, on the other hand, you assume that all platforms can divide integers up
to a certain size as part of the instruction set, the general h/k
approximations (Farey series) would in some sense be better, where "better"
means that you can usually choose h/k closer to some arbitrary real number.
It depends on the assumptions about the platforms. Can they all divide
integers? And the least capable platform--what sizes of integers can it
divide?
A few points about the Farey series are important. First, in most places it
is much denser in representable numbers than "fixed-point". For example,
consider the distance between 1/255 and 1/254 (pretty close together).
Second, the worst-case distance between terms, adjacent to the integers, is
quite bad. For example, consider the distance between 0/255 and 1/255
(quite a big gap). So, using the Farey series as a representational method
for real numbers isn't a perfect scheme. There are some bad worst cases
which approach the poorness of fixed point (like the 0/255 and 1/255 case
above).
I'm not averse to fixed-point approximations of the form h/2**q. This will
work, too. The guaranteed (worst-case) precision with this is about the
same as the worst case of the Farey series.
> The other issue to cover is in general if this method is used, whether the
> claim that the value returned will be exact or one greater will be met.
>That chars_per_bit_exactly is used in a few different places, it'd be
>good to convert all of them. I'm not sure if a rounded-up or
>rounded-down value would best suit.
Generally, you want a rounded up value (for something like the sizeinbase()
function). You want to overestimate rather than underestimate the space
requirements as the size of integers gets large.
> So, what does everyone think of using rational approximations instead of
> floating-point arithmetic? Do most machines provide floating point
> support, and is there any speed or code savings here?
>On the x86s (other than athlon) there's a penalty for switching
>between MMX and float code, so avoiding unnecessary float is a good
>thing. On some of the RISCs, on the other hand, float can be faster
>than integer, though it doesn't always give enough precision (see the
>tasks entry about mpf exponents >2^53).
>But speed in this radix conversion stuff isn't all that important,
>since no sensible program should be dominated by such things.
>Correctness is very important though, and getting away from the
>vagueries of floating point rounding modes would be good from that
>point of view.
I agree with you absolutely here. A lot of safety-critical software coding
standards positively prohibit floating-point in any form. "Vagueries" is
just the right word. Another good word would be "nebulousities" (just
joking--that ain't a word).
Dave.
_______________________________________________
Bug-gmp mailing list
address@hidden
http://mail.gnu.org/mailman/listinfo/bug-gmp