Re: [PATCH v4 0/2] Move ASID test to vhost-vdpa net initialization


From: Eugenio Perez Martin
Subject: Re: [PATCH v4 0/2] Move ASID test to vhost-vdpa net initialization
Date: Sun, 25 Jun 2023 11:11:55 +0200

On Fri, Jun 23, 2023 at 3:06 PM Peter Maydell <peter.maydell@linaro.org> wrote:
>
> On Fri, 26 May 2023 at 16:32, Eugenio Pérez <eperezma@redhat.com> wrote:
> >
> > QEMU v8.0 is able to switch dynamically between vhost-vdpa passthrough
> > and SVQ mode as long as the net device does not have CVQ.  The net device
> > state tracked (and migrated) through CVQ requires special care.
> >
> > A prerequisite to add CVQ to that framework is to determine whether devices
> > with CVQ are migratable or not at initialization time.  The solution is to
> > always shadow only the CVQ, and vq groups and ASID are used for that.
> >
> > However, the current QEMU version only checks the ASID at device start
> > (when the driver sets the DRIVER_OK status bit), not at device
> > initialization.  A check at
> > initialization time is required.  Otherwise, the guest would be able to set
> > and remove migration blockers at will [1].
> >
> > This series is a requisite for migration of vhost-vdpa net devices with CVQ.
> > However, it already makes sense on its own, as it reduces the number of
> > ioctls at migration time, decreasing the error paths there.
>
> Hi -- since you're working on the net_init_vhost_vdpa() code,
> would you mind having a look at Coverity CID 1490785?
> This is about a leak of the vdpa_device_fd. We fixed one
> instance of that leak in commit aed5da45daf734ddc54 but
> it looks like there's still a different leak:
>
>     for (i = 0; i < queue_pairs; i++) {
>         ncs[i] = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name,
>                                      vdpa_device_fd, i, 2, true, opts->x_svq,
>                                      iova_range, features);
>         if (!ncs[i])
>             goto err;
>     }
>
>     if (has_cvq) {
>         nc = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name,
>                                  vdpa_device_fd, i, 1, false,
>                                  opts->x_svq, iova_range, features);
>         if (!nc)
>             goto err;
>     }
>
>     return 0;
>
> In this code, if queue_pairs is non-zero we will use
> vdpa_device_fd because we pass it to net_vhost_vdpa_init().
> Similarly, if has_cvq is true then we'll also use the fd.
> But if queue_pairs is zero and has_cvq is false then we
> will not do anything with the fd, and will return 0,
> leaking the file descriptor.
>
> Maybe this combination is not supposed to happen, but
> I can't see anything in vhost_vdpa_get_max_queue_pairs()
> or in this function which guards against it. If it's
> an invalid setup we should detect it and return an
> error, I think.
>

Yes, a device that returns 0 max_vq_pairs would be invalid. I will
introduce a guard for that.
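
Something along these lines is what I have in mind; just a rough sketch,
with the error message, the return value and the exact placement in
net_init_vhost_vdpa() still to be decided:

    /* Right after queue_pairs is obtained from the device: reject a
     * device that reports zero queue pairs, so vdpa_device_fd can never
     * reach the "return 0" path unused and leak (Coverity CID 1490785). */
    if (queue_pairs == 0) {
        error_setg(errp, "vhost-vdpa: invalid device, no queue pairs");
        qemu_close(vdpa_device_fd);
        return -1;
    }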

Thanks for the heads up!

> thanks
> -- PMM
>



