
Re: [Tinycc-devel] Weird bitfield size handling, discrepancy with gcc


From: David Mertens
Subject: Re: [Tinycc-devel] Weird bitfield size handling, discrepancy with gcc
Date: Mon, 21 Nov 2016 22:15:55 -0500

Hello all,

I have finally found a bit of time to work on this. To reiterate: I've found that the layout of structs containing bitfields varies across compilers, and that tcc is incompatible with gcc on Linux. If a library was compiled with gcc on Linux, there is no way to compile binary-compatible consuming code with tcc. The test program and the known results are listed below.
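
To make the incompatibility concrete, here is a minimal sketch of my own (not part of the test below) that prints a field offset rather than a size. Assuming the layouts reported below, gcc places op_flags of struct t2 at offset 1 while tcc places it at offset 4, so any code exchanging such a struct across that boundary reads the wrong byte:

--------%<--------
#include <stddef.h> /* offsetof */
#include <stdint.h>
#include <stdio.h>

struct t2 {
    uint32_t op_type:1;
    uint8_t op_flags;
};

int main(void) {
    /* Expected: 1 under gcc's layout, 4 under tcc's current layout. */
    printf("t2 op_flags offset: %lu\n",
           (unsigned long) offsetof(struct t2, op_flags));
    return 0;
}
-------->%--------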

I believe the sensible thing to do is to make tcc more gcc-ish. This would involve (1) making tcc's default behavior match gcc's default, which appears to be consistent across platforms, and (2) providing the -mms-bitfields command-line option for Windows folks who need MS-compatible layout. This makes binary compatibility possible on both Linux and Windows, and asks no more of Windows folks than gcc does.
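
For what it's worth, gcc can already produce both layouts in a single translation unit through its x86-only ms_struct/gcc_struct type attributes, which are the per-struct equivalent of -mms-bitfields. A quick sketch (gcc-specific; tcc would not need to support these attributes for the proposal above):

--------%<--------
#include <stdint.h>
#include <stdio.h>

/* Identical members, two layout rules (gcc/x86 extension). */
struct __attribute__((gcc_struct)) g { uint32_t op_type:1; uint8_t op_flags; };
struct __attribute__((ms_struct))  m { uint32_t op_type:1; uint8_t op_flags; };

int main(void) {
    printf("gcc_struct: %lu\n", (unsigned long) sizeof(struct g)); /* expect 4 */
    printf("ms_struct:  %lu\n", (unsigned long) sizeof(struct m)); /* expect 8 */
    return 0;
}
-------->%--------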

Alternatively, we could make tcc's default behavior configurable. This would require an additional configure flag, plus both -mms-bitfields and -mno-ms-bitfields (gcc's name for the negative form) at the command line. And finally, it would probably be best to make the default configuration OS-dependent, roughly as sketched below.
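
If we went that route, I imagine the wiring would look something like this (purely illustrative names, not actual tcc option-handling code, though TCC_TARGET_PE is the real macro for tcc's Windows/PE target):

--------%<--------
/* Hypothetical sketch: configure-time default per target, overridden
 * at run time by -mms-bitfields / -mno-ms-bitfields. */
#ifdef TCC_TARGET_PE
# define DEFAULT_MS_BITFIELDS 1   /* Windows/PE target */
#else
# define DEFAULT_MS_BITFIELDS 0
#endif

int ms_bitfields = DEFAULT_MS_BITFIELDS; /* flipped by the flags above */
-------->%--------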

I prefer the first option because (a) it is simpler and (b) any configuration tools that think they're working with gcc will be smart enough to include the -mms-bitfields flag when it's needed on Windows. (Perl's build chain does this, for example.) Pragmatically, it is what I have the time to accomplish. I think the second approach is in some ways "better", but it'll also add a bunch of configuration code that I think tcc would be better without.

Preferences?
David


Here is a recap of the known results, plus a new one: mingw on Windows. (As I understand the two ABIs, the t2/t3 divergence comes down to the allocation rule: gcc packs the bitfield into its storage unit and lets the following byte share that unit, while the MS-style layout gives the bitfield a full unit of its declared type and starts op_flags in a new one.)

--------%<--------
#include <stdint.h>
#include <stdio.h>
struct t1 {
    uint8_t op_type:1;
    uint8_t op_flags;
};
struct t2 {
    uint32_t op_type:1;
    uint8_t op_flags;
};
struct t3 {
    unsigned op_type:1;
    char op_flags;
};

int main(void) {
    /* sizeof yields size_t; cast to unsigned long so the format
       specifier works everywhere, including msvcrt (no %zu). */
    printf("t1 struct size: %lu\n", (unsigned long) sizeof(struct t1));
    printf("t2 struct size: %lu\n", (unsigned long) sizeof(struct t2));
    printf("t3 struct size: %lu\n", (unsigned long) sizeof(struct t3));
    return 0;
}
-------->%--------

With tcc on 64-bit Linux, this prints:
t1 struct size: 2
t2 struct size: 8
t3 struct size: 8

With gcc on 64-bit Linux, this prints:
t1 struct size: 2
t2 struct size: 4
t3 struct size: 4

With i686-w64-mingw32 (i.e. with MinGW on 64-bit Windows), this prints:
t1 struct size: 2
t2 struct size: 4
t3 struct size: 4

According to Christian Jullien, VC++ (both 32- and 64-bit) prints:
t1 struct size: 2
t2 struct size: 8
t3 struct size: 8

--
 "Debugging is twice as hard as writing the code in the first place.
  Therefore, if you write the code as cleverly as possible, you are,
  by definition, not smart enough to debug it." -- Brian Kernighan
