[Tinycc-devel] Disabling memmove optimization

From: Raul Hernandez
Subject: [Tinycc-devel] Disabling memmove optimization
Date: Tue, 26 Apr 2022 11:04:02 +0200

Hi, list, 

I’ve noticed that, in some cases, TCC will emit calls to standard library functions such as memmove.

For example, the following snippet:

struct Big { void *a, *b, *c; };

struct Big some_function(struct Big b) {
    return b;
}

… compiles to something like this (cleaned up for readability; the full assembled code can be seen on Godbolt: https://godbolt.org/z/d4nh7oh63):

    push   rbp
    mov    rbp, rsp
    sub    rsp, 0x10
    lea    rsi, [rbp+0x10]
    mov    rdi, QWORD PTR [rbp-0x10]
    mov    edx, 0x18
    mov    eax, 0x0
    call   memmove

I guess TCC does this as either an optimization (to take advantage of vectorization in the implementation of memmove), or as a way of simplifying the generated code.
My question is: is there any way to disable this behavior? Ideally I'd like to be able to disable it for a single function, but I'd be happy with a compiler flag when invoking TCC, or a compile-time define when building it.
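For reference, the lowering TCC appears to perform is roughly equivalent to the following C. This is only a sketch of the transformation; the name `some_function_lowered` and the explicit out-parameter are mine, standing in for the hidden return slot the ABI uses for structs too large to fit in registers:

```c
#include <string.h>

struct Big { void *a, *b, *c; };

/* Returning a three-pointer struct by value goes through a hidden
   pointer to the caller's return slot; the copy into that slot is
   what shows up as the memmove call in the disassembly above. */
void some_function_lowered(struct Big *ret, struct Big *b) {
    memmove(ret, b, sizeof(struct Big)); /* 24 (0x18) bytes on x86-64 */
}
```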


This probably sounds like the XY problem; the reason why I need to change that behavior is that I’m trying to write a closure implementation for the V programming language. It works similarly to the approach described in https://nullprogram.com/blog/2017/01/08/, but I’m trying to write it in pure C for portability (so that it works with any calling convention and number of parameters). The implementation works correctly when compiled with GCC and clang, but sometimes fails under TCC because of the optimization I described. Any function calls within the closure wrapper will become invalid after the wrapper is copied somewhere else in memory, since the relative offset to that function will be different and the CPU will jump to a garbage location instead. 
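The failure mode can be sketched with a bit of address arithmetic. On x86-64, a near `call rel32` (opcode E8, 5 bytes total) transfers to the address of the next instruction plus the signed 32-bit displacement; the displacement is fixed for the wrapper's original address, so the identical bytes copied elsewhere resolve to a different target. The addresses in the usage note below are made up for illustration:

```c
#include <stdint.h>

/* Where a `call rel32` instruction at insn_addr actually jumps.
   5 is the length of the E8 rel32 encoding, so insn_addr + 5 is the
   address of the next instruction, the base the CPU adds rel32 to. */
uint64_t call_rel32_target(uint64_t insn_addr, int32_t rel32) {
    return insn_addr + 5 + (int64_t)rel32;
}
```

For example, with a displacement of 0x100 baked in, a wrapper at 0x400000 calls 0x400105, but the same bytes copied to 0x7f0000 call 0x7f0105, which is garbage unless the callee moved along with the wrapper.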

Thank you,

