qemu-devel

Re: [PATCH v5 20/20] linux-user/s390x: Add vdso


From: Philippe Mathieu-Daudé
Subject: Re: [PATCH v5 20/20] linux-user/s390x: Add vdso
Date: Thu, 7 Sep 2023 08:17:28 +0200
User-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0) Gecko/20100101 Thunderbird/102.15.0

On 6/9/23 19:56, Richard Henderson wrote:
On 9/4/23 08:00, Alex Bennée wrote:
Due to b4 dropping the vdso.so from the patch, this fails:

   Program build-vdso.sh found: YES (/home/alex/lsrc/qemu.git/linux-user/build-vdso.sh)
   ../../linux-user/s390x/meson.build:24:0: ERROR: File vdso.so does not exist.
   A full log can be found at /home/alex/lsrc/qemu.git/builds/all/meson-logs/meson-log.txt
   FAILED: build.ninja
   /home/alex/lsrc/qemu.git/builds/all/pyvenv/bin/meson --internal regenerate /home/alex/lsrc/qemu.git /home/alex/lsrc/qemu.git/builds/all
   ninja: error: rebuilding 'build.ninja': subcommand failed
     BUILD   aarch64-softmmu guest-tests
   tests/tcg/aarch64-softmmu: -march=armv8.3-a detected

which makes me think the dependencies are broken anyway because I have a
working s390x compiler:

   ➜  cat tests/tcg/s390x-linux-user/config-target.mak
   # Automatically generated by configure - do not modify
   TARGET_NAME=s390x
   TARGET=s390x-linux-user
   EXTRA_CFLAGS=
   CC=/home/alex/lsrc/qemu.git/builds/all/pyvenv/bin/python3 -B /home/alex/lsrc/qemu.git/tests/docker/docker.py --engine docker cc --cc s390x-linux-gnu-gcc -i qemu/debian-s390x-cross -s /home/alex/lsrc/qemu.git --
   CCAS=/home/alex/lsrc/qemu.git/builds/all/pyvenv/bin/python3 -B /home/alex/lsrc/qemu.git/tests/docker/docker.py --engine docker cc --cc s390x-linux-gnu-gcc -i qemu/debian-s390x-cross -s /home/alex/lsrc/qemu.git --
   AR=/home/alex/lsrc/qemu.git/builds/all/pyvenv/bin/python3 -B /home/alex/lsrc/qemu.git/tests/docker/docker.py --engine docker cc --cc s390x-linux-gnu-ar -i qemu/debian-s390x-cross -s /home/alex/lsrc/qemu.git --
   AS=/home/alex/lsrc/qemu.git/builds/all/pyvenv/bin/python3 -B /home/alex/lsrc/qemu.git/tests/docker/docker.py --engine docker cc --cc s390x-linux-gnu-as -i qemu/debian-s390x-cross -s /home/alex/lsrc/qemu.git --
   LD=/home/alex/lsrc/qemu.git/builds/all/pyvenv/bin/python3 -B /home/alex/lsrc/qemu.git/tests/docker/docker.py --engine docker cc --cc s390x-linux-gnu-ld -i qemu/debian-s390x-cross -s /home/alex/lsrc/qemu.git --
   NM=/home/alex/lsrc/qemu.git/builds/all/pyvenv/bin/python3 -B /home/alex/lsrc/qemu.git/tests/docker/docker.py --engine docker cc --cc s390x-linux-gnu-nm -i qemu/debian-s390x-cross -s /home/alex/lsrc/qemu.git --
   OBJCOPY=/home/alex/lsrc/qemu.git/builds/all/pyvenv/bin/python3 -B /home/alex/lsrc/qemu.git/tests/docker/docker.py --engine docker cc --cc s390x-linux-gnu-objcopy -i qemu/debian-s390x-cross -s /home/alex/lsrc/qemu.git --
   RANLIB=/home/alex/lsrc/qemu.git/builds/all/pyvenv/bin/python3 -B /home/alex/lsrc/qemu.git/tests/docker/docker.py --engine docker cc --cc s390x-linux-gnu-ranlib -i qemu/debian-s390x-cross -s /home/alex/lsrc/qemu.git --
   STRIP=/home/alex/lsrc/qemu.git/builds/all/pyvenv/bin/python3 -B /home/alex/lsrc/qemu.git/tests/docker/docker.py --engine docker cc --cc s390x-linux-gnu-strip -i qemu/debian-s390x-cross -s /home/alex/lsrc/qemu.git --
   BUILD_STATIC=y
   QEMU=/home/alex/lsrc/qemu.git/builds/all/qemu-s390x
   HOST_GDB_SUPPORTS_ARCH=y

We really need to express the dependency on
docker-image-debian-s390x-cross (when using containers) to ensure we can
build the vdso.so and not rely on the copy we have.
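Roughly something along these lines (a sketch only: the recipe and the
file names are illustrative, not the real meson/make plumbing; only the
docker-image-debian-s390x-cross target already exists in tests/docker):

   # Sketch: make the vdso rebuild depend on the cross container image,
   # so the image is (re)built before we try to link vdso.so.
   linux-user/s390x/vdso.so: docker-image-debian-s390x-cross
           $(CC) -nostdlib -shared -o $@ linux-user/s390x/vdso.S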

I think expressing the dependency is a mistake.

The major problem is network unreliability.  I installed a new VM to build-test this, and it took more than a dozen retries to get all of the docker images built.

What we do right now is determine whether docker or podman is present and working, and then *assume* we can make all of the cross-compilers work later, so we register them as cross-compilers early.

I think the only moderately reliable thing is to determine what containers are already present and working and use only those. Developers will need to manually rebuild containers periodically and then re-run configure to make those visible to the cross-build machinery.  Not completely ideal, of course, but nothing else is either.
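Concretely, configure could probe for images that already exist instead
of assuming they can be built later; a sketch (the image name and the
variable it sets are examples, and podman accepts the same subcommand):

   # Use the image only if it is already present locally; `docker image
   # inspect` exits non-zero when the image is missing.
   if docker image inspect qemu/debian-s390x-cross >/dev/null 2>&1; then
       container_cross_cc_s390x=yes   # register this cross-compiler
   fi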

We discussed this 1 or 2 years ago. My suggestion was that when we tag
a release, we also tag the gitlab docker images (with all the distro
packages installed at that tag). Then those working with the release
can pull the pre-installed image for that tag; since it won't need
newer packages, there is no need for network access within the image.

During the current development cycle, we can either use the latest
tagged image if it is sufficient, or we have to deal with the same
issues you mentioned (network instability, broken package deps from
time to time).
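For instance (the registry path and tag here are illustrative, not the
actual gitlab registry layout):

   # Pull the image that was tagged together with the release, instead
   # of rebuilding it (and needing network access for distro packages).
   docker pull registry.gitlab.com/qemu-project/qemu/debian-s390x-cross:v8.1.0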


