bug-glibc

jrand48() function


From: Jeff Higham
Subject: jrand48() function
Date: Fri, 19 Jan 2001 17:31:59 -0500 (EST)

Hi,

There appears to be a bug in the jrand48() standard C library function.
The man page for this function states:

    "...Then the appropriate number of bits,  according
    to  the  type  of data item to be returned, is copied from
    the  high-order  bits  of  Xi  and  transformed  into  the
    returned value."
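
Since the drand48() family stores the 48-bit state Xi with xsubi[0] as the
low-order 16 bits and xsubi[2] as the high-order 16 bits, the "appropriate
number of bits" for jrand48() is the high-order 32 bits of Xi, which (as far
as I can tell) is equivalent to:

    /* all 16 bits of xsubi[2], followed by all 16 bits of xsubi[1] */
    (xsubi[2] << 16) | xsubi[1]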
    
Line 48 of jrand48_r.c in version 2.2.1 of the library, however, seems to
be doing something a little different.  It reads:

    *result = ((xsubi[2] & 0x7fff) << 16) | xsubi[1];
    
whereas it should (I think) read:

    *result = ((xsubi[2] & 0xffff) << 16) | xsubi[1];
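
With the 0x7fff mask, the most significant bit of xsubi[2] (bit 47 of the
state) never makes it into the returned value.  For reference, here is a
minimal sketch of one jrand48() step as I read the man page, using the
documented default multiplier 0x5DEECE66D and addend 0xB; the name
jrand48_reference() is just for illustration:

    #include <stdint.h>

    /* One jrand48() step: advance Xi+1 = (a * Xi + c) mod 2^48, then
       return the high-order 32 bits of the new state as a signed value. */
    static int32_t jrand48_reference(unsigned short xsubi[3])
    {
        uint64_t x = ((uint64_t) xsubi[2] << 32)
                   | ((uint64_t) xsubi[1] << 16)
                   |  (uint64_t) xsubi[0];

        x = (0x5DEECE66Dull * x + 0xBull) & 0xFFFFFFFFFFFFull;

        xsubi[0] = (unsigned short) (x & 0xffff);
        xsubi[1] = (unsigned short) ((x >> 16) & 0xffff);
        xsubi[2] = (unsigned short) ((x >> 32) & 0xffff);

        /* All 16 bits of xsubi[2] belong in the high half of the result. */
        return (int32_t) (((uint32_t) xsubi[2] << 16) | xsubi[1]);
    }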
    
Indeed, the following test program prints the same result (-507604825) on
Solaris 2.6 and other UNIX platforms (each with its own implementation of
the C standard library), whereas the GNU C library on Linux prints
-1639878823:

    #include <stdlib.h>
    #include <stdio.h>

    int main(void)
    {
        int seed = 1621482298;
        unsigned short xsubi[3];

        /* 48-bit state Xi: xsubi[0] holds the low-order 16 bits,
           xsubi[2] the high-order 16 bits. */
        xsubi[0] = (unsigned short) 0x330e16;
        xsubi[1] = ((unsigned) seed) & 0xffff;
        xsubi[2] = ((unsigned) seed) >> 16;

        /* jrand48() returns a long uniformly distributed over [-2^31, 2^31). */
        printf("%ld\n", jrand48(xsubi));

        return 0;
    }

These results seem to be consistent with the suspected error in the code
mentioned above.

Some feedback on this problem would be much appreciated.

Sincerely,

Jeff Higham
Associate Financial Engineer
Algorithmics Incorporated
Toronto, Ontario, Canada

Email: address@hidden




