Re: [Qemu-devel] [RFC v1 05/18] vfio/pci: add pasid alloc/free implementation


From: Liu, Yi L
Subject: Re: [Qemu-devel] [RFC v1 05/18] vfio/pci: add pasid alloc/free implementation
Date: Mon, 22 Jul 2019 07:02:51 +0000

> From: address@hidden [mailto:address@hidden] On Behalf
> Of David Gibson
> Sent: Wednesday, July 17, 2019 11:07 AM
> To: Liu, Yi L <address@hidden>
> Subject: Re: [RFC v1 05/18] vfio/pci: add pasid alloc/free implementation
> 
> On Tue, Jul 16, 2019 at 10:25:55AM +0000, Liu, Yi L wrote:
> > > From: address@hidden [mailto:address@hidden] On
> Behalf
> > > Of David Gibson
> > > Sent: Monday, July 15, 2019 10:55 AM
> > > To: Liu, Yi L <address@hidden>
> > > Subject: Re: [RFC v1 05/18] vfio/pci: add pasid alloc/free implementation
> > >
> > > On Fri, Jul 05, 2019 at 07:01:38PM +0800, Liu Yi L wrote:
> > > > This patch adds the VFIO implementation of
> > > > PCIPASIDOps.alloc_pasid()/free_pasid().
> > > > These two callbacks propagate guest PASID allocation and
> > > > free requests to the host via a VFIO container ioctl.
> > >
> > > As I said in an earlier comment, I think doing this on the device is
> > > conceptually incorrect.  I think we need an explicit notion of an SVM
> > > context (i.e. the namespace in which all the PASIDs live) - which will
> > > IIUC usually be shared amongst multiple devices.  The create and free
> > > PASID requests should be on that object.
> >
> > Actually, the allocation is not done on this device. System-wide, it is
> > done on a container. So perhaps it is the API interface that gives the
> > impression that this is done per-device.
> 
> Sorry, I should have been clearer.  I can see that at the VFIO level
> it is done on the container.  However the function here takes a bus
> and devfn, so this qemu internal interface is per-device, which
> doesn't really make sense.

Got it. The reason for passing the bus and devfn info is so that VFIO
can figure out which container the operation applies to. So far in QEMU,
there is no good way to connect the vIOMMU emulator and VFIO with regard
to SVM; the hw/pci layer was chosen based on some previous discussion.
But yes, I agree with you that we may need an explicit notion for SVM.
Do you think it would be good to introduce a new abstraction layer for
SVM (maybe named SVMContext)? The idea would be that the vIOMMU emulator
maintains the SVMContext instances and exposes an explicit interface for
VFIO to get them. VFIO then registers notifiers on the SVMContext. When
the vIOMMU emulator wants to do a PASID alloc/free, it fires the
corresponding notifier. Once the call reaches VFIO, the notifier
function itself figures out the container it is bound to. This way, it
is the duty of the vIOMMU emulator to pick the proper notifier to fire,
and from the interface point of view it is no longer per-device. It also
leaves the PASID management details to the vIOMMU emulator, as they can
be vendor specific. Does this make sense?
Also, I'd like to know if you have any other idea on it. That would
surely be helpful. :-)
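To make the proposal concrete, here is a rough, self-contained sketch of
the idea. All names here (SVMContext, SVMNotifier,
svm_context_register_notifier, svm_context_fire_pasid_alloc) are
hypothetical and not existing QEMU API; real code would use QEMU's
Notifier/NotifierList machinery. The point is only the shape of the
interface: VFIO registers a notifier whose opaque pointer is its own
container, the vIOMMU fires the notifier, and no bus/devfn crosses the
boundary.

```c
#include <stdint.h>
#include <stddef.h>

typedef struct SVMNotifier SVMNotifier;

/* Notifier returns an allocated PASID (>= 0) or a negative value on failure. */
typedef int (*SVMNotifyFn)(SVMNotifier *n, uint32_t min_pasid,
                           uint32_t max_pasid);

struct SVMNotifier {
    SVMNotifyFn notify;
    void *opaque;            /* e.g. the VFIOContainer this device is bound to */
    SVMNotifier *next;
};

typedef struct SVMContext {
    SVMNotifier *notifiers;  /* singly linked list, newest first */
} SVMContext;

/* VFIO side: hook a notifier onto the SVMContext obtained from the vIOMMU. */
static void svm_context_register_notifier(SVMContext *sc, SVMNotifier *n)
{
    n->next = sc->notifiers;
    sc->notifiers = n;
}

/* vIOMMU side: fire notifiers until one hands back a PASID. */
static int svm_context_fire_pasid_alloc(SVMContext *sc,
                                        uint32_t min_pasid,
                                        uint32_t max_pasid)
{
    SVMNotifier *n;

    for (n = sc->notifiers; n; n = n->next) {
        int pasid = n->notify(n, min_pasid, max_pasid);
        if (pasid >= 0) {
            return pasid;
        }
    }
    return -1;
}

/* Stand-in for the VFIO callback: real code would issue the PASID-alloc
 * ioctl on the container stored in n->opaque; this mock just returns
 * min_pasid when the range is sane. */
static int mock_vfio_pasid_alloc(SVMNotifier *n, uint32_t min_pasid,
                                 uint32_t max_pasid)
{
    (void)n;
    return min_pasid <= max_pasid ? (int)min_pasid : -1;
}
```

The mock shows where the existing container ioctl from the patch would
slot in, replacing the per-device bus/devfn lookup.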

> > Also, curious about the SVM context
> > concept: do you mean a per-VM context or a per-SVM-usage context?
> > Could you elaborate a little more? :-)
> 
> Sorry, I'm struggling to find a good term for this.  By "context" I
> mean a namespace containing a bunch of PASID address spaces, those
> PASIDs are then visible to some group of devices.

I see. Maybe the SVMContext instance above can include multiple PASID
address spaces. And again, I think this relationship should be
maintained in the vIOMMU emulator.
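One way to picture that relationship, as a tiny stand-alone sketch: an
SVMContext owning a table of PASID address spaces inside the vIOMMU
emulator. Every name and field here (PASIDAddressSpace, gpt_root, the
small fixed bound) is hypothetical and for illustration only; a real
vIOMMU would size and describe these per vendor.

```c
#include <stdint.h>
#include <stddef.h>

#define SVM_MAX_PASID 16  /* tiny bound for illustration only */

typedef struct PASIDAddressSpace {
    uint64_t gpt_root;  /* guest page-table root; format is vendor specific */
    int valid;
} PASIDAddressSpace;

/* The namespace: one SVMContext holds many PASID address spaces. */
typedef struct SVMContext {
    PASIDAddressSpace spaces[SVM_MAX_PASID];
} SVMContext;

/* Bind a guest page-table root to a PASID; fails if already bound. */
static int svm_context_bind(SVMContext *sc, uint32_t pasid, uint64_t gpt_root)
{
    if (pasid >= SVM_MAX_PASID || sc->spaces[pasid].valid) {
        return -1;
    }
    sc->spaces[pasid].gpt_root = gpt_root;
    sc->spaces[pasid].valid = 1;
    return 0;
}

/* Look up the address space for a PASID, or NULL if unbound. */
static PASIDAddressSpace *svm_context_lookup(SVMContext *sc, uint32_t pasid)
{
    if (pasid >= SVM_MAX_PASID || !sc->spaces[pasid].valid) {
        return NULL;
    }
    return &sc->spaces[pasid];
}
```

Since the vIOMMU emulator owns this table, PASID-to-address-space policy
stays vendor specific, which matches the point above.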

Thanks,
Yi Liu

> 
> >
> > Thanks,
> > Yi Liu
> >
> > > >
> > > > Cc: Kevin Tian <address@hidden>
> > > > Cc: Jacob Pan <address@hidden>
> > > > Cc: Peter Xu <address@hidden>
> > > > Cc: Eric Auger <address@hidden>
> > > > Cc: Yi Sun <address@hidden>
> > > > Cc: David Gibson <address@hidden>
> > > > Signed-off-by: Liu Yi L <address@hidden>
> > > > Signed-off-by: Yi Sun <address@hidden>
> > > > ---
> > > >  hw/vfio/pci.c | 61 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> > > >  1 file changed, 61 insertions(+)
> > > >
> > > > diff --git a/hw/vfio/pci.c b/hw/vfio/pci.c
> > > > index ce3fe96..ab184ad 100644
> > > > --- a/hw/vfio/pci.c
> > > > +++ b/hw/vfio/pci.c
> > > > @@ -2690,6 +2690,65 @@ static void vfio_unregister_req_notifier(VFIOPCIDevice *vdev)
> > > >      vdev->req_enabled = false;
> > > >  }
> > > >
> > > > +static int vfio_pci_device_request_pasid_alloc(PCIBus *bus,
> > > > +                                               int32_t devfn,
> > > > +                                               uint32_t min_pasid,
> > > > +                                               uint32_t max_pasid)
> > > > +{
> > > > +    PCIDevice *pdev = bus->devices[devfn];
> > > > +    VFIOPCIDevice *vdev = DO_UPCAST(VFIOPCIDevice, pdev, pdev);
> > > > +    VFIOContainer *container = vdev->vbasedev.group->container;
> > > > +    struct vfio_iommu_type1_pasid_request req;
> > > > +    unsigned long argsz;
> > > > +    int pasid;
> > > > +
> > > > +    argsz = sizeof(req);
> > > > +    req.argsz = argsz;
> > > > +    req.flag = VFIO_IOMMU_PASID_ALLOC;
> > > > +    req.min_pasid = min_pasid;
> > > > +    req.max_pasid = max_pasid;
> > > > +
> > > > +    rcu_read_lock();
> > > > +    pasid = ioctl(container->fd, VFIO_IOMMU_PASID_REQUEST, &req);
> > > > +    if (pasid < 0) {
> > > > +        error_report("vfio_pci_device_request_pasid_alloc:"
> > > > +                     " request failed, container: %p", container);
> > > > +    }
> > > > +    rcu_read_unlock();
> > > > +    return pasid;
> > > > +}
> > > > +
> > > > +static int vfio_pci_device_request_pasid_free(PCIBus *bus,
> > > > +                                              int32_t devfn,
> > > > +                                              uint32_t pasid)
> > > > +{
> > > > +    PCIDevice *pdev = bus->devices[devfn];
> > > > +    VFIOPCIDevice *vdev = DO_UPCAST(VFIOPCIDevice, pdev, pdev);
> > > > +    VFIOContainer *container = vdev->vbasedev.group->container;
> > > > +    struct vfio_iommu_type1_pasid_request req;
> > > > +    unsigned long argsz;
> > > > +    int ret = 0;
> > > > +
> > > > +    argsz = sizeof(req);
> > > > +    req.argsz = argsz;
> > > > +    req.flag = VFIO_IOMMU_PASID_FREE;
> > > > +    req.pasid = pasid;
> > > > +
> > > > +    rcu_read_lock();
> > > > +    ret = ioctl(container->fd, VFIO_IOMMU_PASID_REQUEST, &req);
> > > > +    if (ret != 0) {
> > > > +        error_report("vfio_pci_device_request_pasid_free:"
> > > > +                     " request failed, container: %p", container);
> > > > +    }
> > > > +    rcu_read_unlock();
> > > > +    return ret;
> > > > +}
> > > > +
> > > > +static PCIPASIDOps vfio_pci_pasid_ops = {
> > > > +    .alloc_pasid = vfio_pci_device_request_pasid_alloc,
> > > > +    .free_pasid = vfio_pci_device_request_pasid_free,
> > > > +};
> > > > +
> > > >  static void vfio_realize(PCIDevice *pdev, Error **errp)
> > > >  {
> > > >      VFIOPCIDevice *vdev = PCI_VFIO(pdev);
> > > > @@ -2991,6 +3050,8 @@ static void vfio_realize(PCIDevice *pdev, Error **errp)
> > > >      vfio_register_req_notifier(vdev);
> > > >      vfio_setup_resetfn_quirk(vdev);
> > > >
> > > > +    pci_setup_pasid_ops(pdev, &vfio_pci_pasid_ops);
> > > > +
> > > >      return;
> > > >
> > > >  out_teardown:
> > >
> >
> 
> --
> David Gibson                  | I'll have my music baroque, and my code
> david AT gibson.dropbear.id.au        | minimalist, thank you.  NOT _the_ _other_
>                               | _way_ _around_!
> http://www.ozlabs.org/~dgibson


