qemu-devel
Re: [RFC PATCH v9 20/23] vdpa: Buffer CVQ support on shadow virtqueue


From: Jason Wang
Subject: Re: [RFC PATCH v9 20/23] vdpa: Buffer CVQ support on shadow virtqueue
Date: Thu, 14 Jul 2022 15:04:08 +0800

On Thu, Jul 14, 2022 at 2:54 PM Eugenio Perez Martin
<eperezma@redhat.com> wrote:
>
> > > > +static void vhost_vdpa_net_handle_ctrl_used(VhostShadowVirtqueue *svq,
> > > > +                                            void *vq_elem_opaque,
> > > > +                                            uint32_t dev_written)
> > > > +{
> > > > +    g_autoptr(CVQElement) cvq_elem = vq_elem_opaque;
> > > > +    virtio_net_ctrl_ack status = VIRTIO_NET_ERR;
> > > > +    const struct iovec out = {
> > > > +        .iov_base = cvq_elem->out_data,
> > > > +        .iov_len = cvq_elem->out_len,
> > > > +    };
> > > > +    const DMAMap status_map_needle = {
> > > > +        .translated_addr = (hwaddr)(uintptr_t)cvq_elem->in_buf,
> > > > +        .size = sizeof(status),
> > > > +    };
> > > > +    const DMAMap *in_map;
> > > > +    const struct iovec in = {
> > > > +        .iov_base = &status,
> > > > +        .iov_len = sizeof(status),
> > > > +    };
> > > > +    g_autofree VirtQueueElement *guest_elem = NULL;
> > > > +
> > > > +    if (unlikely(dev_written < sizeof(status))) {
> > > > +        error_report("Insufficient written data (%llu)",
> > > > +                     (long long unsigned)dev_written);
> > > > +        goto out;
> > > > +    }
> > > > +
> > > > +    in_map = vhost_iova_tree_find_iova(svq->iova_tree,
> > > > +                                       &status_map_needle);
> > > > +    if (unlikely(!in_map)) {
> > > > +        error_report("Cannot locate in mapping");
> > > > +        goto out;
> > > > +    }
> > > > +
> > > > +    switch (cvq_elem->ctrl.class) {
> > > > +    case VIRTIO_NET_CTRL_MAC:
> > > > +        break;
> > > > +    default:
> > > > +        error_report("Unexpected ctrl class %u", cvq_elem->ctrl.class);
> > > > +        goto out;
> > > > +    }
> > > > +
> > > > +    memcpy(&status, cvq_elem->in_buf, sizeof(status));
> > > > +    if (status != VIRTIO_NET_OK) {
> > > > +        goto out;
> > > > +    }
> > > > +
> > > > +    status = VIRTIO_NET_ERR;
> > > > +    virtio_net_handle_ctrl_iov(svq->vdev, &in, 1, &out, 1);
> > >
> > >
> > > I wonder if this is the best choice. It looks to me it might be better
> > > to extend the virtio_net_handle_ctrl_iov() logic:
> > >
> > > virtio_net_handle_ctrl_iov() {
> > >      if (svq enabled) {
> > >           host_elem = iov_copy(guest_elem);
> > >           vhost_svq_add(host_elem);
> > >           vhost_svq_poll(host_elem);
> > >      }
> > >      // userspace ctrl vq logic
> > > }
> > >
> > >
> > > This can help avoid coupling too much logic into the cvq (like the
> > > avail, used, and detach ops).
> > >
> >
> > Let me try that way and I'll come back to you.
> >
>
> The problem with that approach is that virtio_net_handle_ctrl_iov is
> called from the SVQ used handler. How could we call it otherwise? I
> find it pretty hard to do unless we return SVQ to the model where we
> used VirtQueue.handle_output, which was discarded long ago.

I'm not sure I get this. Can we simply let the cvq be trapped, as the
current userspace datapath does?

Thanks

>
> I'm about to send a new version, but I still need to call
> virtio_net_handle_ctrl_iov from the avail handler. At least the used
> and discard handlers are removed.
>
> Thanks!
>



