[Gcl-devel] Inefficient unsigned 32-bit arithmetic?


From: dshardin
Subject: [Gcl-devel] Inefficient unsigned 32-bit arithmetic?
Date: Wed, 23 Jun 2004 14:56:55 -0500


We would like to use more unsigned 32-bit types in our code (e.g., for hardware modeling), but we've noticed that GCL 2.6.1 compiles unsigned 32-bit arithmetic less efficiently than the corresponding signed 32-bit arithmetic.

Consider the following:

(defun +<s32> (x y)
  (declare (type (signed-byte 32) x)
           (type (signed-byte 32) y))
  (the (signed-byte 32) (+ (the (signed-byte 32) x)
                           (the (signed-byte 32) y))))

(defun +<u32> (x y)
  (declare (type (unsigned-byte 32) x)
           (type (unsigned-byte 32) y))
  (the (unsigned-byte 32) (+ (the (unsigned-byte 32) x)
                             (the (unsigned-byte 32) y))))

Note that the only difference between +<s32> and +<u32> is that +<s32> declares its arguments and result to be 32-bit signed integers, while +<u32> declares them to be 32-bit unsigned integers.  When we compile these to C, this is what we get (in part):

#include "temp.h"
void init_code(){do_init(VV);}


/*        function definition for +<S32>        */
static void L1()
{register object *base=vs_base;
        register object *sup=base+VM1; VC1
        vs_check;
        {long V1;
        long V2;
        V1=fix(base[0]);
        V2=fix(base[1]);
        vs_top=sup;
        goto TTL;
TTL:;
        base[2]= CMPmake_fixnum((long)(V1)+(V2));
        vs_top=(vs_base=base+2)+1;
        return;
        }
}


/*        function definition for +<U32>        */
static void L2()
{register object *base=vs_base;
        register object *sup=base+VM2; VC2
        vs_check;
        {IDECL(GEN V3,V3space,V3alloc);
        IDECL(GEN V4,V4space,V4alloc);
        SETQ_IO(V3,V3alloc,(base[0]),alloca);
        SETQ_IO(V4,V4alloc,(base[1]),alloca);
        vs_top=sup;
        goto TTL;
TTL:;
        V5 = make_integer(V3);
        V6 = make_integer(V4);
        base[2]= number_plus(V5,V6);
        vs_top=(vs_base=base+2)+1;
        return;
        }
}

The thing to notice here is that L1, the function definition for +<s32>, represents V1 and V2 as longs and adds them with C's native + operator, whereas L2, the function definition for +<u32>, uses make_integer() and number_plus() instead.  We're pretty sure this means that GCL uses bignums for 32-bit unsigned integers rather than representing them with C's unsigned long type and adding them with C's +.  Is this the case?  If so, is there a reason why the GCL compiler behaves this way?  We should also note that for unsigned *31*-bit types, make_integer() is *not* called on the addends, and C's native + *is* used.
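
For reference, the 31-bit variant we are referring to is just the same definition with the width changed; a sketch follows (the name +<u31> is purely illustrative):

;; Illustrative only: the same pattern as +<u32> above, with the width
;; dropped from 32 to 31.  With these declarations the compiled C keeps
;; the addends unboxed and adds them with C's native +, per the
;; observation above.
(defun +<u31> (x y)
  (declare (type (unsigned-byte 31) x)
           (type (unsigned-byte 31) y))
  (the (unsigned-byte 31) (+ (the (unsigned-byte 31) x)
                             (the (unsigned-byte 31) y))))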

This problem is a big deal to us because we're doing millions of these sorts of calculations in the course of a given hardware simulation run.


Thanks,

David Hardin

P.S.  It would also be nice if GCL knew about 64-bit native types.  The last I knew, "long long" wasn't part of the original ANSI C standard (it was added in C99), but it's well supported in GCC, as well as in other C compilers.
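
For concreteness, the 64-bit analogue of the definitions above is the sort of thing we would like to see compile down to native (unsigned) long long arithmetic; a sketch (again, the name is purely illustrative):

;; Hypothetical 64-bit analogue of +<u32>.  Ideally this would compile
;; to a single native unsigned long long addition in the generated C,
;; rather than going through bignum arithmetic.
(defun +<u64> (x y)
  (declare (type (unsigned-byte 64) x)
           (type (unsigned-byte 64) y))
  (the (unsigned-byte 64) (+ (the (unsigned-byte 64) x)
                             (the (unsigned-byte 64) y))))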
