qemu-devel
Re: [RFC PATCH v4 00/20] vDPA shadow virtqueue


From: Eugenio Perez Martin
Subject: Re: [RFC PATCH v4 00/20] vDPA shadow virtqueue
Date: Tue, 12 Oct 2021 11:09:37 +0200

On Tue, Oct 12, 2021 at 6:06 AM Jason Wang <jasowang@redhat.com> wrote:
>
> On Tue, Oct 12, 2021 at 11:59 AM Jason Wang <jasowang@redhat.com> wrote:
> >
> >
> > On 2021/10/1 3:05 PM, Eugenio Pérez wrote:
> > > This series enables shadow virtqueue (SVQ) for vhost-vdpa devices. This
> > > is intended as a new method of tracking the memory the devices touch
> > > during a migration process: instead of relying on the vhost device's
> > > dirty logging capability, SVQ intercepts the VQ dataplane, forwarding the
> > > descriptors between VM and device. This way qemu is the effective
> > > writer of the guest's memory, as in qemu's virtio device operation.
> > >
> > > When SVQ is enabled, qemu offers a new vring to the device to read
> > > from and write into, and also intercepts kicks and calls between the
> > > device and the guest. Relaying used buffers causes the dirty memory to be
> > > tracked, but at this RFC stage SVQ is not enabled automatically on migration.
> > >
> > > It is based on the ideas of DPDK's SW-assisted live migration, in the
> > > series at https://patchwork.dpdk.org/cover/48370/ . However, this series
> > > does not map the shadow vq in the guest's VA, but in qemu's.
> > >
> > > For qemu to use shadow virtqueues, the guest virtio driver must not use
> > > features like event_idx or indirect descriptors. These limitations will
> > > be addressed in later series, but they are left out for simplicity at
> > > the moment.
> > >
> > > SVQ needs to be enabled with the QMP command:
> > >
> > > { "execute": "x-vhost-enable-shadow-vq",
> > >        "arguments": { "name": "dev0", "enable": true } }
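> > >
> > > For example, with the monitor exposed on a UNIX socket (the /tmp/qmp.sock
> > > path below is just an example), the command can be sent by hand with socat
> > > after the usual QMP capabilities negotiation; on success a plain
> > > {"return": {}} reply is expected. A rough session sketch:
> > >
> > >     $ socat - UNIX-CONNECT:/tmp/qmp.sock
> > >     {"QMP": {"version": {...}, "capabilities": []}}
> > >     {"execute": "qmp_capabilities"}
> > >     {"return": {}}
> > >     { "execute": "x-vhost-enable-shadow-vq",
> > >          "arguments": { "name": "dev0", "enable": true } }
> > >     {"return": {}}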
> > >
> > > This series includes some patches, to be deleted in the final version,
> > > that help with its testing. The first two of the series freely implement
> > > the feature to stop the device and be able to retrieve its status. It's
> > > intended to be used with the vp_vdpa driver in a nested environment. This
> > > driver also needs modifications to forward the new status bit.
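> > >
> > > Roughly, one possible sequence to get such a vhost-vdpa device in the L1
> > > guest (the PCI address, paths and device names below are just examples,
> > > and the stop/status part still needs the modified driver):
> > >
> > >     # load the vdpa pieces and rebind the virtio-net PCI device to vp_vdpa
> > >     modprobe vhost_vdpa
> > >     modprobe vp_vdpa
> > >     echo vp_vdpa > /sys/bus/pci/devices/0000:00:04.0/driver_override
> > >     echo 0000:00:04.0 > /sys/bus/pci/drivers/virtio-pci/unbind
> > >     echo 0000:00:04.0 > /sys/bus/pci/drivers_probe
> > >     # create the vdpa device with the iproute2 'vdpa' tool
> > >     vdpa dev add name vdpa0 mgmtdev pci/0000:00:04.0
> > >     # and point the nested qemu at the resulting vhost-vdpa chardev
> > >     qemu-system-x86_64 ... \
> > >       -netdev type=vhost-vdpa,vhostdev=/dev/vhost-vdpa-0,id=vhost-vdpa0 \
> > >       -device virtio-net-pci,netdev=vhost-vdpa0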
> > >
> > > Patches 2-8 prepare the SVQ and the QMP command to support guest-to-host
> > > notification forwarding. If SVQ is enabled with them applied and the
> > > device supports it, that part can be tested in isolation (for example,
> > > with networking), hopping through SVQ.
> > >
> > > The same is true of patches 9-13, but for device-to-guest
> > > notifications.
> > >
> > > The rest of the patches implement the actual buffer forwarding.
> > >
> > > Comments are welcome.
> >
> >
> > Hi Eugenio:
> >
> >
> > It would be helpful to have a public git repo to ease the review for us.
> >
> > Thanks
> >

Hi Jason,

I just pushed this tag to
https://github.com/eugpermar/qemu/tree/vdpa_sw_live_migration.d/vdpa-v4,
but let me know if another way is more convenient for you.

Thanks!

>
> Btw, we also need to measure the performance impact of the shadow virtqueue.
>

I will measure it in subsequent series, since I'm still making some
changes. At the moment I'm also testing with nested virtualization,
which can affect the results.

However, we need to take into account that this series still has a lot
of room for improvement. I would say that packed vq support and isolating
the code in its own AioContext could give a noticeable boost to the numbers.

Thanks!

> Thanks
>



