qemu-devel

Re: [PATCH] tests: migration-test: Allow test to run without uffd


From: Thomas Huth
Subject: Re: [PATCH] tests: migration-test: Allow test to run without uffd
Date: Wed, 20 Jul 2022 16:11:43 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Thunderbird/91.10.0

On 19/07/2022 12.37, Daniel P. Berrangé wrote:
On Tue, Jul 19, 2022 at 12:28:24PM +0200, Thomas Huth wrote:
On 18/07/2022 21.14, Peter Xu wrote:
Hi, Thomas,

On Mon, Jul 18, 2022 at 08:23:26PM +0200, Thomas Huth wrote:
On 07/07/2022 20.46, Peter Xu wrote:
We used to stop running all tests if uffd is not detected.  However,
logically that's only needed for the postcopy tests, not the rest.

Keep running the rest when still possible.

Signed-off-by: Peter Xu <peterx@redhat.com>
---
    tests/qtest/migration-test.c | 11 +++++------
    1 file changed, 5 insertions(+), 6 deletions(-)
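
For illustration, the idea boils down to something like the sketch below.
This is not the actual hunk from the patch; the ufd_version_check() probe,
the qtest_add_func() registrations and the test names are assumed from
tests/qtest/migration-test.c:

  /* Probe userfaultfd once and only skip the postcopy tests when it is
   * missing, instead of bailing out of the whole test binary. */
  bool has_uffd = ufd_version_check();

  if (has_uffd) {
      qtest_add_func("/migration/postcopy/unix", test_postcopy);
      qtest_add_func("/migration/postcopy/recovery", test_postcopy_recovery);
  }

  /* Precopy and misc tests are registered unconditionally. */
  qtest_add_func("/migration/validate_uuid", test_validate_uuid);
  qtest_add_func("/migration/precopy/unix", test_precopy_unix);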

Did you test your patch in the gitlab-CI? I just added it to my testing-next
branch and the test is failing reproducibly on macOS here:

   https://gitlab.com/thuth/qemu/-/jobs/2736260861#L6275
   https://gitlab.com/thuth/qemu/-/jobs/2736623914#L6275

(without your patch the whole test is skipped instead)

Thanks for reporting this.

Is it easy to figure out which test was failing on your side?  I cannot
easily reproduce this here on macOS with an M1.

I've modified the yml file to only run the migration test in verbose mode
and got this:

...
ok 5 /x86_64/migration/validate_uuid_src_not_set
# starting QEMU: exec ./qemu-system-x86_64 -qtest unix:/tmp/qtest-58011.sock
-qtest-log /dev/null -chardev socket,path=/tmp/qtest-58011.qmp,id=char0 -mon
chardev=char0,mode=control -display none -accel kvm -accel tcg -name
source,debug-threads=on -m 150M -serial
file:/tmp/migration-test-ef2fMr/src_serial -drive
file=/tmp/migration-test-ef2fMr/bootsect,format=raw  -uuid
11111111-1111-1111-1111-111111111111 2>/dev/null -accel qtest
# starting QEMU: exec ./qemu-system-x86_64 -qtest unix:/tmp/qtest-58011.sock
-qtest-log /dev/null -chardev socket,path=/tmp/qtest-58011.qmp,id=char0 -mon
chardev=char0,mode=control -display none -accel kvm -accel tcg -name
target,debug-threads=on -m 150M -serial
file:/tmp/migration-test-ef2fMr/dest_serial -incoming
unix:/tmp/migration-test-ef2fMr/migsocket -drive
file=/tmp/migration-test-ef2fMr/bootsect,format=raw   2>/dev/null -accel
qtest
ok 6 /x86_64/migration/validate_uuid_dst_not_set
# starting QEMU: exec ./qemu-system-x86_64 -qtest unix:/tmp/qtest-58011.sock
-qtest-log /dev/null -chardev socket,path=/tmp/qtest-58011.qmp,id=char0 -mon
chardev=char0,mode=control -display none -accel kvm -accel tcg -name
source,debug-threads=on -m 150M -serial
file:/tmp/migration-test-ef2fMr/src_serial -drive
file=/tmp/migration-test-ef2fMr/bootsect,format=raw    -accel qtest
# starting QEMU: exec ./qemu-system-x86_64 -qtest unix:/tmp/qtest-58011.sock
-qtest-log /dev/null -chardev socket,path=/tmp/qtest-58011.qmp,id=char0 -mon
chardev=char0,mode=control -display none -accel kvm -accel tcg -name
target,debug-threads=on -m 150M -serial
file:/tmp/migration-test-ef2fMr/dest_serial -incoming
unix:/tmp/migration-test-ef2fMr/migsocket -drive
file=/tmp/migration-test-ef2fMr/bootsect,format=raw    -accel qtest
**
ERROR:../tests/qtest/migration-helpers.c:181:wait_for_migration_status:
assertion failed: (g_test_timer_elapsed() < MIGRATION_STATUS_WAIT_TIMEOUT)
Bail out!
ERROR:../tests/qtest/migration-helpers.c:181:wait_for_migration_status:
assertion failed: (g_test_timer_elapsed() < MIGRATION_STATUS_WAIT_TIMEOUT)

This is the safety net we put in to catch cases where the test has
got stuck. It is set at 2 minutes.

There's a chance that is too short, so one first step might be to
increase it to 10 minutes and see if the tests pass. If it still fails,
then it's likely a genuine bug.
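
For reference, the guard that fires in the log above lives in
tests/qtest/migration-helpers.c and looks roughly like the sketch below
(helper names and the exact body are assumed; the timeout is in seconds,
so 120 == 2 minutes):

  #define MIGRATION_STATUS_WAIT_TIMEOUT 120

  void wait_for_migration_status(QTestState *who,
                                 const char *goal, const char **ungoals)
  {
      g_test_timer_start();               /* GLib wall-clock test timer */
      while (!check_migration_status(who, goal, ungoals)) {
          usleep(1000);
          /* this is the assertion seen in the failure output above */
          g_assert(g_test_timer_elapsed() < MIGRATION_STATUS_WAIT_TIMEOUT);
      }
  }

Increasing the timeout then amounts to bumping that one constant.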

I tried increasing it to 5 minutes first, but that did not help. In a second try, I increased it to 10 minutes, and then the test indeed passed:

https://cirrus-ci.com/task/5819072351830016?logs=build#L7208

Could it maybe be accelerated, e.g. by tweaking the downtime limit again?
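
(A hypothetical sketch of such a tweak, using the migrate_set_parameter_int()
helper that migration-test.c already has; the wrapper name and the chosen
values are made up for illustration:)

  static void migrate_speed_up_convergence(QTestState *from)
  {
      /* allow a generous downtime (in ms) so the migration converges fast */
      migrate_set_parameter_int(from, "downtime-limit", 30 * 1000);
      /* and don't throttle the transfer (max-bandwidth is in bytes/s) */
      migrate_set_parameter_int(from, "max-bandwidth", 1000 * 1000 * 1000);
  }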

 Thomas



