qemu-devel

From: Stefan Hajnoczi
Subject: Re: [PATCH v2] coroutine: resize pool periodically instead of limiting size
Date: Mon, 11 Oct 2021 13:43:38 +0100

On Mon, Sep 13, 2021 at 04:35:24PM +0100, Stefan Hajnoczi wrote:
> It was reported that enabling SafeStack reduces IOPS significantly
> (>25%) with the following fio benchmark on virtio-blk using an NVMe host
> block device:
> 
>   # fio --rw=randrw --bs=4k --iodepth=64 --runtime=1m --direct=1 \
>       --filename=/dev/vdb --name=job1 --ioengine=libaio --thread \
>       --group_reporting --numjobs=16 --time_based \
>       --output=/tmp/fio_result
> 
> Serge Guelton and I found that SafeStack is not really at fault; it just
> increases the cost of coroutine creation. This fio workload exhausts the
> coroutine pool and coroutine creation becomes a bottleneck. Previous
> work by Honghao Wang also pointed to excessive coroutine creation.
> 
> Creating new coroutines is expensive due to allocating new stacks with
> mmap(2) and mprotect(2). Currently there are thread-local and global
> pools that recycle old Coroutine objects and their stacks but the
> hardcoded size limit of 64 for thread-local pools and 128 for the global
> pool is insufficient for the fio benchmark shown above.
> 
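As a rough illustration of that cost, here is a minimal, self-contained sketch (plain C, not QEMU's actual qemu_alloc_stack() code): every new stack takes an mmap(2) plus an mprotect(2) for the guard page, and a munmap(2) later when it is released.

  #include <stddef.h>
  #include <sys/mman.h>
  #include <unistd.h>

  static void *alloc_coroutine_stack(size_t size)
  {
      size_t page = (size_t)sysconf(_SC_PAGESIZE);
      void *base = mmap(NULL, size + page, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

      if (base == MAP_FAILED) {
          return NULL;
      }
      /* Make the lowest page inaccessible so a stack overflow faults instead
       * of silently corrupting adjacent memory. */
      if (mprotect(base, page, PROT_NONE) != 0) {
          munmap(base, size + page);
          return NULL;
      }
      return (char *)base + page;   /* usable stack begins above the guard */
  }

Pooling amortizes those syscalls by handing a finished coroutine's stack to the next qemu_coroutine_create() call instead of unmapping it.
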
> This patch changes the coroutine pool algorithm to a simple thread-local
> pool without a maximum size limit. Threads periodically shrink the pool
> down to a size sufficient for the maximum observed number of coroutines.
> 
> The global pool is removed by this patch. It can help to hide the fact
> that local pools are easily exhausted, but it doesn't fix the root
> cause. I don't think there is a need for a global pool because QEMU's
> threads are long-lived, so let's keep things simple.
> 
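For readers following along, a condensed sketch of that scheme (illustrative names only, not the code in this patch): each thread keeps an uncapped free list plus a high-water mark, and a periodic timer callback trims the list back down, never below the 64-entry minimum mentioned in the v2 note below.

  #define POOL_MIN_SIZE 64                     /* retained minimum, see v2 note */

  struct coroutine {
      struct coroutine *next;                  /* free-list link */
      /* ... stack pointer, entry point, ... */
  };

  static __thread struct coroutine *pool_head; /* per-thread pool, no size cap */
  static __thread unsigned int pool_size;
  static __thread unsigned int max_in_use;     /* peak usage since the last trim */

  extern void coroutine_delete(struct coroutine *co);  /* munmap()s the stack */

  /* Runs periodically from the thread's event loop. */
  static void pool_trim(void)
  {
      unsigned int keep = max_in_use > POOL_MIN_SIZE ? max_in_use : POOL_MIN_SIZE;

      while (pool_size > keep && pool_head != NULL) {
          struct coroutine *co = pool_head;

          pool_head = co->next;
          pool_size--;
          coroutine_delete(co);                /* stack returned to the OS here */
      }
      max_in_use = 0;                          /* measure the next interval afresh */
  }

The trade-off is memory held between trims in exchange for keeping mmap()/mprotect() off the hot path; the VSZ/RSS table below shows the pool shrinking back once fio stops.
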
> Performance of the above fio benchmark is as follows:
> 
>       Before   After
> IOPS     60k     97k
> 
> Memory usage varies over time as needed by the workload:
> 
>             VSZ (KB)             RSS (KB)
> Before fio  4705248              843128
> During fio  5747668 (+ ~1 GB)    849280
> After fio   4694996 (- ~1 GB)    845184
> 
> This confirms that coroutines are indeed being freed when no longer
> needed.
> 
> Thanks to Serge Guelton for working on identifying the bottleneck with
> me!
> 
> Reported-by: Tingting Mao <timao@redhat.com>
> Cc: Serge Guelton <sguelton@redhat.com>
> Cc: Honghao Wang <wanghonghao@bytedance.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Daniele Buono <dbuono@linux.vnet.ibm.com>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
> v2:
>  * Retained minimum pool size of 64 to keep latency low for threads that
>    perform I/O infrequently and to prevent possible regressions [Daniele]
> ---
>  include/qemu/coroutine-pool-timer.h | 36 ++++++++++++++++
>  include/qemu/coroutine.h            |  7 +++
>  iothread.c                          |  6 +++
>  util/coroutine-pool-timer.c         | 35 +++++++++++++++
>  util/main-loop.c                    |  5 +++
>  util/qemu-coroutine.c               | 66 ++++++++++++++++-------------
>  util/meson.build                    |  1 +
>  7 files changed, 126 insertions(+), 30 deletions(-)
>  create mode 100644 include/qemu/coroutine-pool-timer.h
>  create mode 100644 util/coroutine-pool-timer.c

Applied to my block-next tree:
https://gitlab.com/stefanha/qemu/commits/block-next

Stefan
