RE: [PATCH 1/1] util: adjust coroutine pool size to virtio block queue
From: Hiroki Narukawa
Subject: RE: [PATCH 1/1] util: adjust coroutine pool size to virtio block queue
Date: Fri, 28 Jan 2022 08:50:09 +0000
> >  /* Config size before the discard support (hide associated config fields) */
> >  #define VIRTIO_BLK_CFG_SIZE offsetof(struct virtio_blk_config, \
> > @@ -1222,6 +1223,8 @@ static void virtio_blk_device_realize(DeviceState *dev, Error **errp)
> >      for (i = 0; i < conf->num_queues; i++) {
> >          virtio_add_queue(vdev, conf->queue_size, virtio_blk_handle_output);
> >      }
> > +    qemu_coroutine_increase_pool_batch_size(conf->num_queues * conf->queue_size
> > +                                            / 2);
>
> Why "/ 2"?
In my understanding, a request on virtio-blk consumes 2 vring entries, one each
for rx and tx, so there can be num_queues * queue_size / 2 requests in flight
at the same time.
The starting point was that the coroutine pool size has long been 64, while the
default queue_size corresponds to 128 and the default num_queues to 1, and that
has seemed to work well.
The new value num_queues * queue_size / 2 also seems to work well, as a more
generously provisioned value.
>
> > virtio_blk_data_plane_create(vdev, conf, &s->dataplane, &err);
> > if (err != NULL) {
> > error_propagate(errp, err);
>
> Please handle hot unplug (->unrealize()) so the coroutine pool shrinks down
> again when virtio-blk devices are removed.
>
Added it in v3 and resent it.
> My main concern is memory footprint. A burst of I/O can create many coroutines
> and they might never be used again. But I think we can deal with that using a
> timer
> in a separate future patch (if necessary).
In my understanding the coroutine pool size does not limit peak memory
consumption, so even if pooled coroutines are temporarily released, that memory
is headroom qemu needs anyway to keep running while servicing disk I/O, which I
think users do not expect to be a memory-consuming task.
A timer to release unused memory would be nice, but how serious is the problem
in practice?