From: Alex Bennée
Subject: Re: virtio-pci in qemu-system-arm is broken in 8.2
Date: Thu, 21 Dec 2023 22:00:22 +0000
User-agent: mu4e 1.11.26; emacs 29.1

Alex Bennée <alex.bennee@linaro.org> writes:

> Michael Tokarev <mjt@tls.msk.ru> writes:
>
>> It looks like virtio-pci is entirely broken in qemu-system-arm, at least
>> in TCG mode running on x86.  The guest (a current Linux system) just does
>> not detect any virtio-pci devices at all.
>>
>> When 8.1 is booting, the following messages are displayed (Debian initramfs):
>>
>> Loading, please wait...
>> Starting systemd-udevd version 255-1
>> [    6.455941] virtio-pci 0000:00:01.0: enabling device (0100 -> 0103)
>> [    6.929155] virtio-pci 0000:00:02.0: enabling device (0100 -> 0103)
>> [    7.764652] virtio_blk virtio1: 2/0/0 default/read/poll queues
>> [    7.783216] virtio_blk virtio1: [vda] 2097026 512-byte logical blocks 
>> (1.07 GB/1024 MiB)
>> [    8.636453] virtio_net virtio0 enp0s1: renamed from eth0
>>
>> But when 8.2 is booting, it ends up at:
>>
>> Loading, please wait...
>> Starting systemd-udevd version 255-1
>> ...and nothing.  Here it waits for the root fs to appear, then drops
>> into the shell.
>>
>> git bisect points at this commit:
>>
>> commit b8f7959f28c4f36496bc0a694fa28bf5078152c5
>> Author: Peter Maydell <peter.maydell@linaro.org>
>> Date:   Mon Jul 24 18:43:33 2023 +0100
>>
>>     target/arm: Do all "ARM_FEATURE_X implies Y" checks in post_init
>>
>> Reverting this commit on top of 8.2.0 (or current qemu master)
>> makes things work again.
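>>
>> For reference, a rough sketch of the workflow (standard git commands,
>> assuming v8.1.0 is good and v8.2.0 is bad, with a build and boot test
>> at each step):
>>
>>   git bisect start
>>   git bisect bad v8.2.0
>>   git bisect good v8.1.0
>>   # build, boot the guest, check for virtio-pci devices, then mark
>>   # "git bisect good" or "git bisect bad" until the culprit is found
>>
>> and, to confirm, the revert on top of 8.2.0:
>>
>>   git revert b8f7959f28c4f36496bc0a694fa28bf5078152c5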
>>
>> It's interesting how we missed this entirely, as it was applied at the
>> beginning of the 8.2 development cycle; it's the 228th commit after
>> 8.1.0.
>>
>> It looks like we have quite a few more regressions like this in 8.2.0...
>> :(
>
> We have at least one test that does use virtio-pci on qemu-system-arm
> and passes. From:
>
>   ./pyvenv/bin/avocado run ./tests/avocado --filter-by-tags=arch:arm
>
> We can see:
>
>   grep "virtio" 
> /home/alex/avocado/job-results/job-2023-12-21T18.11-8dc03b2/job.log | grep pci
>   2023-12-21 18:14:31,294 machine L0470 DEBUG| VM launch command:
> './qemu-system-arm -display none -vga none -chardev socket,id=mon,fd=5
> -mon chardev=mon,mode=control -machine versatilepb -chardev
> socket,id=console,fd=19 -serial chardev:console -cpu arm926 -kernel
> /home/alex/avocado/data/cache/by_location/a8e6fbd14f0270fef06aaef9fc413c5a6ed71120/zImage
> -append printk.time=0 root=/dev/vda console=ttyAMA0 -blockdev
> driver=raw,file.driver=file,file.filename=/home/alex/avocado/job-results/job-2023-12-21T18.11-8dc03b2/test-results/tmp_dir1mqewwjv/40-._tests_avocado_tuxrun_baselines.py_TuxRunBaselineTest.test_armv5/rootfs.ext4,node-name=hd0
> -device virtio-blk-pci,drive=hd0 -dtb
> /home/alex/avocado/data/cache/by_location/a8e6fbd14f0270fef06aaef9fc413c5a6ed71120/versatile-pb.dtb'
>   2023-12-21 18:14:31,722 __init__         L0153 DEBUG| virtio-pci 
> 0000:00:0d.0: enabling device (0100 -> 0103)
>
> But obviously that's not enough coverage to catch this regression.

modified   tests/avocado/tuxrun_baselines.py
@@ -168,7 +168,7 @@ def run_tuxtest_tests(self, haltmsg):
     def common_tuxrun(self,
                       csums=None,
                       dt=None,
-                      drive="virtio-blk-device",
+                      drive="virtio-blk-pci",
                       haltmsg="reboot: System halted",
                       console_index=0):
         """

And then we get:

 (1/3) ./tests/avocado/tuxrun_baselines.py:TuxRunBaselineTest.test_armv5: PASS 
(5.64 s)
 (2/3) ./tests/avocado/tuxrun_baselines.py:TuxRunBaselineTest.test_armv7: FAIL: 
Failure message found in console: "Kernel panic - not syncing". Expected: 
"Welcome to TuxTest" (1.21 s)
 (3/3) ./tests/avocado/tuxrun_baselines.py:TuxRunBaselineTest.test_armv7be: 
FAIL: Failure message found in console: "Kernel panic - not syncing". Expected: 
"Welcome to TuxTest" (1.24 s)
RESULTS    : PASS 1 | ERROR 0 | FAIL 2 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0
JOB TIME   : 8.50 s

So I guess this somehow hits ARMv7 only. Maybe something about I/O
access?

  2023-12-21 18:21:29,424 __init__         L0153 DEBUG| pl061_gpio 
9030000.pl061: PL061 GPIO chip registered
  2023-12-21 18:21:29,427 __init__         L0153 DEBUG| pci-host-generic 
4010000000.pcie: host bridge /pcie@10000000 ranges:
  2023-12-21 18:21:29,428 __init__         L0153 DEBUG| pci-host-generic 
4010000000.pcie:       IO 0x003eff0000..0x003effffff -> 0x0000000000
  2023-12-21 18:21:29,428 __init__         L0153 DEBUG| pci-host-generic 
4010000000.pcie:      MEM 0x0010000000..0x003efeffff -> 0x0010000000
  2023-12-21 18:21:29,428 __init__         L0153 DEBUG| pci-host-generic 
4010000000.pcie:      MEM 0x8000000000..0xffffffffff -> 0x8000000000
  2023-12-21 18:21:29,429 __init__         L0153 DEBUG| pci-host-generic 
4010000000.pcie: can't claim ECAM area [mem 0x10000000-0x1fffffff]: address 
conflict with pcie@10000000 [mem 0x10000000-0x3efeffff]
  2023-12-21 18:21:29,429 __init__         L0153 DEBUG| pci-host-generic: probe 
of 4010000000.pcie failed with error -16
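
For reference, the failing ARMv7 case should be reproducible standalone
with something like the following (a sketch only; the zImage and rootfs
paths are placeholders for the tuxrun assets):

  qemu-system-arm -M virt -cpu cortex-a15 -display none \
      -kernel zImage -append "console=ttyAMA0 root=/dev/vda" \
      -blockdev driver=raw,file.driver=file,file.filename=rootfs.ext4,node-name=hd0 \
      -device virtio-blk-pci,drive=hd0 \
      -serial stdio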

>
>>
>> /mjt

-- 
Alex Bennée
Virtualisation Tech Lead @ Linaro


