
Re: [RFC patch 0/1] block: vhost-blk backend


From: Andrey Zhadchenko
Subject: Re: [RFC patch 0/1] block: vhost-blk backend
Date: Thu, 28 Jul 2022 08:28:37 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Thunderbird/91.7.0


On 7/27/22 16:06, Stefano Garzarella wrote:
On Tue, Jul 26, 2022 at 04:15:48PM +0200, Denis V. Lunev wrote:
On 26.07.2022 15:51, Michael S. Tsirkin wrote:
On Mon, Jul 25, 2022 at 11:55:26PM +0300, Andrey Zhadchenko wrote:
Although QEMU virtio-blk is quite fast, there is still some room for improvement. Disk latency can be reduced if we handle virtio-blk requests in the host kernel, so we avoid a lot of syscalls and context switches.
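
For context, the setup flow for such a vhost backend is roughly: QEMU opens the vhost character device, hands it the guest memory layout plus the virtqueue addresses and eventfds, and from then on the kernel worker consumes requests straight from the ring. Below is a minimal sketch of that sequence using the generic vhost uAPI from <linux/vhost.h>; the /dev/vhost-blk node name and the device-specific backend ioctl are placeholders, not necessarily what this RFC defines.

/*
 * Illustrative only: rough order of ioctls a VMM issues to hand a
 * virtio-blk virtqueue to an in-kernel vhost backend.  The VHOST_SET_*
 * ioctls are the upstream ones from <linux/vhost.h>; the device node and
 * the commented-out backend ioctl are placeholders for this RFC.
 */
#include <fcntl.h>
#include <stddef.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

static int vhost_blk_setup(int backend_fd, int kick_fd, int call_fd,
                           struct vhost_memory *mem,
                           struct vhost_vring_addr *addr, unsigned int num)
{
    int vhost_fd = open("/dev/vhost-blk", O_RDWR);  /* node name is a guess */
    if (vhost_fd < 0)
        return -1;

    ioctl(vhost_fd, VHOST_SET_OWNER, NULL);         /* bind dev to this process */
    ioctl(vhost_fd, VHOST_SET_MEM_TABLE, mem);      /* guest RAM layout */

    struct vhost_vring_state state = { .index = 0, .num = num };
    ioctl(vhost_fd, VHOST_SET_VRING_NUM, &state);   /* ring size */
    state.num = 0;
    ioctl(vhost_fd, VHOST_SET_VRING_BASE, &state);  /* starting index */
    ioctl(vhost_fd, VHOST_SET_VRING_ADDR, addr);    /* desc/avail/used addresses */

    struct vhost_vring_file file = { .index = 0, .fd = kick_fd };
    ioctl(vhost_fd, VHOST_SET_VRING_KICK, &file);   /* guest -> host doorbell */
    file.fd = call_fd;
    ioctl(vhost_fd, VHOST_SET_VRING_CALL, &file);   /* host -> guest interrupt */

    /*
     * Hypothetical, modeled on vhost-net/vhost-scsi: point the kernel
     * worker at the image fd it should serve requests from, e.g.
     * ioctl(vhost_fd, VHOST_BLK_SET_BACKEND,
     *       &(struct vhost_vring_file){ .index = 0, .fd = backend_fd });
     */
    (void)backend_fd;

    return vhost_fd;
}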

The biggest disadvantage of this vhost-blk flavor is that it only supports the raw format.
Luckily, Kirill Thai proposed a device mapper driver for the QCOW2 format to attach files as block devices: https://www.spinics.net/lists/kernel/msg4292965.html
That one seems stalled. Do you plan to work on that too?
We have to. The difference in numbers, as you have seen below, is quite
big. We have waited for this patch to be sent to keep pushing.

It should be noted that maybe the talk at OSS this year could also push things a bit.

Cool, the results are similar to what I saw when I compared vhost-blk and io_uring passthrough with NVMe (Slide 7 here: [1]).

About QEMU block layer support, we recently started to work on libblkio [2]. Stefan also sent an RFC [3] to implement the QEMU BlockDriver.
Currently it supports virtio-blk devices using vhost-vdpa and vhost-user.
We could add support for vhost (kernel) as well, though we were thinking of leveraging vDPA to implement an in-kernel software device as well.

That way we could reuse a lot of the code to support both hardware and software accelerators.
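
To give an idea of what this looks like from the application side, here is a rough sketch of opening a virtio-blk device through libblkio's virtio-blk-vhost-vdpa driver and submitting a single read. It follows the libblkio documentation as I understand it; exact property names and signatures may differ between versions, so take it as an outline of the flow rather than reference code.

/*
 * Sketch: read 4 KiB from a virtio-blk device exposed via vhost-vdpa,
 * using libblkio [2].  Error handling omitted for brevity.
 */
#include <stdio.h>
#include <blkio.h>

int main(void)
{
    struct blkio *b;

    if (blkio_create("virtio-blk-vhost-vdpa", &b) < 0)
        return 1;

    /* char device of the vDPA instance (hardware or in-kernel software) */
    blkio_set_str(b, "path", "/dev/vhost-vdpa-0");
    blkio_connect(b);
    blkio_start(b);

    /* register a DMA-able buffer with the driver */
    struct blkio_mem_region region;
    blkio_alloc_mem_region(b, &region, 4096);
    blkio_map_mem_region(b, &region);

    /* queue 0: submit one 4 KiB read at offset 0 and wait for completion */
    struct blkioq *q = blkio_get_queue(b, 0);
    blkioq_read(q, 0, region.addr, 4096, NULL, 0);

    struct blkio_completion c;
    blkioq_do_io(q, &c, 1, 1, NULL);
    printf("read completed: ret=%d\n", c.ret);

    blkio_destroy(&b);
    return 0;
}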

In the talk [1] I describe the idea a little bit; a few months ago I did a PoC (unsubmitted RFC) to see if it was feasible, and the numbers were in line with vhost-blk.

Do you think we could join forces and just have an in-kernel vdpa-blk software device?

This seems worth trying. Why double the effort to do the same thing? Yet I would like to play a bit with your vdpa-blk PoC beforehand. Can you send it to me with some instructions on how to run it?


Of course we could have both vhost-blk and vdpa-blk, but with vDPA perhaps we can have one software stack to maintain for both HW and software accelerators.
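
As a small illustration of the single-stack point: userspace opens the same /dev/vhost-vdpa-N character device and speaks the same upstream vhost/vhost-vdpa uAPI whether the instance is backed by real hardware or by an in-kernel software device. A minimal probe (upstream ioctls only, error handling omitted) could look like:

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>
#include <linux/virtio_ids.h>

int main(void)
{
    int fd = open("/dev/vhost-vdpa-0", O_RDWR);
    if (fd < 0)
        return 1;

    __u32 device_id = 0;
    ioctl(fd, VHOST_VDPA_GET_DEVICE_ID, &device_id);  /* virtio device type */
    printf("device id: %u (VIRTIO_ID_BLOCK is %u)\n",
           device_id, (unsigned)VIRTIO_ID_BLOCK);

    __u64 features = 0;
    ioctl(fd, VHOST_GET_FEATURES, &features);         /* same call as any vhost dev */
    printf("device features: 0x%llx\n", (unsigned long long)features);

    return 0;
}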

Thanks,
Stefano

[1] https://kvmforum2021.sched.com/event/ke3a/vdpa-blk-unified-hardware-and-software-offload-for-virtio-blk-stefano-garzarella-red-hat
[2] https://gitlab.com/libblkio/libblkio
[3] https://lore.kernel.org/qemu-devel/20220708041737.1768521-1-stefanha@redhat.com/



