qemu-devel

Re: Big TCG slowdown when using zstd with aarch64


From: Daniel P. Berrangé
Subject: Re: Big TCG slowdown when using zstd with aarch64
Date: Fri, 2 Jun 2023 10:10:27 +0100
User-agent: Mutt/2.2.9 (2022-11-12)

On Thu, Jun 01, 2023 at 11:06:42PM +0200, Juan Quintela wrote:
> 
> Hi
> 
> Before I continue investigating this further, do you have any clue what
> is going on here?  I am running qemu-system-aarch64 on x86_64.
> 
> $ time ./tests/qtest/migration-test -p 
> /aarch64/migration/multifd/tcp/plain/none


> real  0m4.559s
> user  0m4.898s
> sys   0m1.156s

> $ time ./tests/qtest/migration-test -p 
> /aarch64/migration/multifd/tcp/plain/zlib

> real  0m1.645s
> user  0m3.484s
> sys   0m0.512s
> $ time ./tests/qtest/migration-test -p 
> /aarch64/migration/multifd/tcp/plain/zstd

> real  0m48.022s
> user  8m17.306s
> sys   0m35.217s
> 
> 
> This test is very amenable to compression: basically we only modify one
> byte in each page, and all the pages are essentially the same.
> 
> no compression: 4.5 seconds
> zlib compression: 1.6 seconds (inside what I would expect)
> zstd compression: 48 seconds, what is going on here?

This is non-deterministic. I've seen *all* three cases complete in approx
1 second each. If I set 'QTEST_LOG=1', then very often the zstd test will
complete in < 1 second.

I notice the multifd tests are not sharing the setup logic with the
precopy tests, so they have not set any migration bandwidth limit.
IOW migration is running at full speed.
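
(The knob for that limit is the 'max-bandwidth' migration parameter; a
minimal sketch of capping it over QMP, with an illustrative value rather
than whatever the precopy tests actually pick, would be:

  { "execute": "migrate-set-parameters",
    "arguments": { "max-bandwidth": 33554432 } }

i.e. ~32 MB/s, since the parameter is in bytes per second.)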

What is happening is that the migration is running so fast that the guest
workload hasn't had the chance to dirty any memory, so the 'none' and 'zlib'
tests only copy about 15-30 MB of data; the rest is still all zeroes.

When it is fast, the zstd test also has a similarly low transfer of data,
but when it is slow it transfers a massive amount more, and goes
through a *huge* number of iterations.

e.g. I see dirty-sync-count over 1000:

{"return": {"expected-downtime": 221243, "status": "active", "setup-time": 1, 
"total-time": 44028, "ram": {"total": 291905536, "postcopy-requests": 0, 
"dirty-sync-count": 1516, "multifd-bytes": 24241675, "pages-per-second": 
804571, "downtime-bytes": 0, "page-size": 4096, "remaining": 82313216, 
"postcopy-bytes": 0, "mbps": 3.7536507936507939, "transferred": 25377710, 
"dirty-sync-missed-zero-copy": 0, "precopy-bytes": 1136035, "duplicate": 
124866, "dirty-pages-rate": 850637, "skipped": 0, "normal-bytes": 156904067072, 
"normal": 38306657}}}


I suspect that the zstd logic takes a little bit longer in setup,
which often allows the guest dirty workload to get ahead of
it, resulting in a huge amount of data to transfer. Every now and
then the compression code gets ahead of the workload and thus most
data is zeros and skipped.

IMHO this feels like just another example of compression being largely
useless. The CPU overhead of compression can't keep up with the guest
dirty workload, making the supposed network bandwidth saving irrelevant.
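
Putting rough numbers on that from the stats above: a dirty-pages-rate of
850637 pages/s at a 4096-byte page size is about 3.2 GiB/s of newly dirtied
memory, against a reported transfer rate of only ~3.75 Mbit/s, so the
migration falls further and further behind on every sync round.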

With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|



