Re: [PATCH v2 09/19] vhost: Track number of descs in SVQElement


From: Eugenio Perez Martin
Subject: Re: [PATCH v2 09/19] vhost: Track number of descs in SVQElement
Date: Fri, 15 Jul 2022 07:41:47 +0200

On Fri, Jul 15, 2022 at 6:10 AM Jason Wang <jasowang@redhat.com> wrote:
>
> On Fri, Jul 15, 2022 at 12:32 AM Eugenio Pérez <eperezma@redhat.com> wrote:
> >
> > Since CVQ will be able to modify elements, the number of descriptors
> > in the guest's chain may not match the number of descriptors exposed
> > to the device. Track them separately.
> >
> > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> > ---
> >  hw/virtio/vhost-shadow-virtqueue.h |  6 ++++++
> >  hw/virtio/vhost-shadow-virtqueue.c | 10 +++++-----
> >  2 files changed, 11 insertions(+), 5 deletions(-)
> >
> > diff --git a/hw/virtio/vhost-shadow-virtqueue.h b/hw/virtio/vhost-shadow-virtqueue.h
> > index f35d4b8f90..143c86a568 100644
> > --- a/hw/virtio/vhost-shadow-virtqueue.h
> > +++ b/hw/virtio/vhost-shadow-virtqueue.h
> > @@ -17,6 +17,12 @@
> >
> >  typedef struct SVQElement {
> >      VirtQueueElement elem;
> > +
> > +    /*
> > +     * Number of descriptors exposed to the device. May or may not match
> > +     * guest's
> > +     */
> > +    unsigned int ndescs;
> >  } SVQElement;
>
> Can we simplify things further by moving ndescs into a dedicated
> array in the svq?
>
> Then we don't need to bother with introducing SVQElement.
>

Yes, I'll move it to a desc_state array.
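
Something like the following rough sketch (SVQDescState and the exact
field layout are just a guess at the shape, pending the actual patch):

/* Sketch only: names are illustrative, not from a merged commit */
typedef struct SVQDescState {
    VirtQueueElement *elem;

    /*
     * Number of descriptors exposed to the device. May or may not match
     * the guest's view once CVQ modifies the element.
     */
    unsigned int ndescs;
} SVQDescState;

/* In VhostShadowVirtqueue, replacing ring_id_maps: one entry per head id */
SVQDescState *desc_state;

vhost_svq_get_buf() would then recover the chain length from the array
instead of from the element:

    SVQDescState *d = &svq->desc_state[used_elem.id];
    last_used_chain = vhost_svq_last_desc_of_chain(svq, d->ndescs,
                                                   used_elem.id);

Indexing by the head descriptor id keeps the lookup O(1) and avoids
carrying per-chain state inside the element itself.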

Thanks!

> Thanks
>
> >
> >  /* Shadow virtqueue to relay notifications */
> > diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
> > index 442ca3cbd3..3b112c4ec8 100644
> > --- a/hw/virtio/vhost-shadow-virtqueue.c
> > +++ b/hw/virtio/vhost-shadow-virtqueue.c
> > @@ -243,10 +243,10 @@ static int vhost_svq_add(VhostShadowVirtqueue *svq, const struct iovec *out_sg,
> >                            size_t in_num, SVQElement *svq_elem)
> >  {
> >      unsigned qemu_head;
> > -    unsigned ndescs = in_num + out_num;
> > +    svq_elem->ndescs = in_num + out_num;
> >      bool ok;
> >
> > -    if (unlikely(ndescs > vhost_svq_available_slots(svq))) {
> > +    if (unlikely(svq_elem->ndescs > vhost_svq_available_slots(svq))) {
> >          return -ENOSPC;
> >      }
> >
> > @@ -393,7 +393,7 @@ static SVQElement *vhost_svq_get_buf(VhostShadowVirtqueue *svq,
> >      SVQElement *elem;
> >      const vring_used_t *used = svq->vring.used;
> >      vring_used_elem_t used_elem;
> > -    uint16_t last_used, last_used_chain, num;
> > +    uint16_t last_used, last_used_chain;
> >
> >      if (!vhost_svq_more_used(svq)) {
> >          return NULL;
> > @@ -420,8 +420,8 @@ static SVQElement *vhost_svq_get_buf(VhostShadowVirtqueue *svq,
> >      }
> >
> >      elem = svq->ring_id_maps[used_elem.id];
> > -    num = elem->elem.in_num + elem->elem.out_num;
> > -    last_used_chain = vhost_svq_last_desc_of_chain(svq, num, used_elem.id);
> > +    last_used_chain = vhost_svq_last_desc_of_chain(svq, elem->ndescs,
> > +                                                   used_elem.id);
> >      svq->desc_next[last_used_chain] = svq->free_head;
> >      svq->free_head = used_elem.id;
> >
> > --
> > 2.31.1
> >
>



