qemu-devel

From: Eugenio Perez Martin
Subject: Re: [RFC PATCH v9 03/23] vdpa: delay set_vring_ready after DRIVER_OK
Date: Fri, 8 Jul 2022 11:56:41 +0200

On Fri, Jul 8, 2022 at 11:06 AM Jason Wang <jasowang@redhat.com> wrote:
>
> On Thu, Jul 7, 2022 at 2:40 AM Eugenio Pérez <eperezma@redhat.com> wrote:
> >
> > To restore the device in the destination of a live migration we send the
> > commands through the control virtqueue. For a device to read CVQ it must
> > have received the DRIVER_OK status bit.
> >
> > However, this opens a window where the device could start receiving
> > packets in rx queue 0 before it receives the RSS configuration. To avoid
> > that, we will not send vring_enable until the device has consumed all of
> > the configuration.
> >
> > As a first step, reverse the DRIVER_OK and SET_VRING_ENABLE steps.
>
> I wonder if it's better to delay this to the series that implements
> migration since the shadow cvq doesn't depend on this?
>
> >
> > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> > ---
> >  hw/virtio/vhost-vdpa.c | 22 ++++++++++++++++------
> >  1 file changed, 16 insertions(+), 6 deletions(-)
> >
> > diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
> > index 66f054a12c..2ee8009594 100644
> > --- a/hw/virtio/vhost-vdpa.c
> > +++ b/hw/virtio/vhost-vdpa.c
> > @@ -728,13 +728,18 @@ static int vhost_vdpa_get_vq_index(struct vhost_dev *dev, int idx)
> >      return idx;
> >  }
> >
> > +/**
> > + * Set all vrings of the device to ready
> > + *
> > + * @dev: Vhost device
> > + */
> >  static int vhost_vdpa_set_vring_ready(struct vhost_dev *dev)
> >  {
> >      int i;
> >      trace_vhost_vdpa_set_vring_ready(dev);
> > -    for (i = 0; i < dev->nvqs; ++i) {
> > +    for (i = 0; i < dev->vq_index_end; ++i) {
> >          struct vhost_vring_state state = {
> > -            .index = dev->vq_index + i,
> > +            .index = i,
>
> Looks like a cleanup or bugfix which deserves a separate patch?
>
> >              .num = 1,
> >          };
> >          vhost_vdpa_call(dev, VHOST_VDPA_SET_VRING_ENABLE, &state);
> > @@ -1097,7 +1102,6 @@ static int vhost_vdpa_dev_start(struct vhost_dev *dev, bool started)
> >          if (unlikely(!ok)) {
> >              return -1;
> >          }
> > -        vhost_vdpa_set_vring_ready(dev);
> >      } else {
> >          ok = vhost_vdpa_svqs_stop(dev);
> >          if (unlikely(!ok)) {
> > @@ -1111,16 +1115,22 @@ static int vhost_vdpa_dev_start(struct vhost_dev *dev, bool started)
> >      }
> >
> >      if (started) {
> > +        int r;
> > +
> >          memory_listener_register(&v->listener, &address_space_memory);
> > -        return vhost_vdpa_add_status(dev, VIRTIO_CONFIG_S_DRIVER_OK);
> > +        r = vhost_vdpa_add_status(dev, VIRTIO_CONFIG_S_DRIVER_OK);
> > +        if (unlikely(r)) {
> > +            return r;
> > +        }
> > +        vhost_vdpa_set_vring_ready(dev);
>
> Interesting, does this mean we only enable the last two queues without
> this patch?
>

The function vhost_vdpa_set_vring_ready is changed in this patch:
instead of enabling only the vrings that belong to this particular
vhost_dev, it now enables every vring of the device, from 0 to
dev->vq_index_end.

In the case of networking, vq_index_end changes depending on whether
CVQ and MQ have been negotiated, so we should be safe here.

Based on your comments it's clear that this is an unexpected change,
and I need to describe it in the patch message :).

Thanks!
