From: Peter Xu
Subject: Re: [Qemu-devel] [PATCH v2 3/6] hw/vfio/common: Do not replay IOMMU mappings in nested case
Date: Wed, 3 Jul 2019 18:21:22 +0800
User-agent: Mutt/1.11.4 (2019-03-13)

On Wed, Jul 03, 2019 at 11:04:38AM +0200, Auger Eric wrote:
> Hi Peter,

Hi, Eric,

> 
> On 7/3/19 7:41 AM, Peter Xu wrote:
> > On Mon, Jul 01, 2019 at 11:30:31AM +0200, Eric Auger wrote:
> >> In nested mode, the stage 1 translation tables are owned by
> >> the guest and there is no caching on host side. So there is
> >> no need to replay the mappings.
> >>
> >> As of today, the SMMUv3 nested mode is not yet implemented
> >> and there is no functional VFIO integration without it. But
> >> keeping the replay call would execute the default implementation
> >> of memory_region_iommu_replay and attempt to translate the whole
> >> address range, completely stalling QEMU. Keeping the MAP/UNMAP
> >> notifier registration allows a warning message to be emitted by
> >> the SMMUv3 device, telling the user which VFIO device will not
> >> function properly:
> >>
> >> "qemu-system-aarch64: -device vfio-pci,host=0000:89:00.0: warning:
> >> SMMUv3 does not support notification on MAP: device vfio-pci will not
> >> function properly"
> >>
> >> Besides, removing the replay call now allows the guest to boot.
> >>
> >> Signed-off-by: Eric Auger <address@hidden>
> >> ---
> >>  hw/vfio/common.c | 7 ++++++-
> >>  1 file changed, 6 insertions(+), 1 deletion(-)
> >>
> >> diff --git a/hw/vfio/common.c b/hw/vfio/common.c
> >> index a859298fda..9ea58df67a 100644
> >> --- a/hw/vfio/common.c
> >> +++ b/hw/vfio/common.c
> >> @@ -604,6 +604,7 @@ static void vfio_listener_region_add(MemoryListener *listener,
> >>      if (memory_region_is_iommu(section->mr)) {
> >>          VFIOGuestIOMMU *giommu;
> >>          IOMMUMemoryRegion *iommu_mr = IOMMU_MEMORY_REGION(section->mr);
> >> +        bool nested = false;
> >>          int iommu_idx;
> >>  
> >>          trace_vfio_listener_region_add_iommu(iova, end);
> >> @@ -631,8 +632,12 @@ static void vfio_listener_region_add(MemoryListener *listener,
> >>          QLIST_INSERT_HEAD(&container->giommu_list, giommu, giommu_next);
> >>  
> >>          memory_region_register_iommu_notifier(section->mr, &giommu->n);
> >> -        memory_region_iommu_replay(giommu->iommu, &giommu->n);
> >>  
> >> +        memory_region_iommu_get_attr(iommu_mr, IOMMU_ATTR_VFIO_NESTED,
> >> +                                     (void *)&nested);
> >> +        if (!nested) {
> >> +            memory_region_iommu_replay(iommu_mr, &giommu->n);
> >> +        }
> > 
> > For nested, do we need these IOMMU notifiers after all?
> > 
> > I'm asking because the no-IOMMU case of vfio_listener_region_add()
> > seems to me to suit nested page tables very well.  For example,
> > vfio does not need to listen to MAP events any more because we'll
> > simply share the guest IOMMU page table as the 1st level page table
> > of the host SMMU, IIUC.
> We don't need the MAP notifier but we need the UNMAP notifier: when the
> guest invalidates an ASID/IOVA we need to propagate this to the physical
> IOMMU.

Indeed we need the unmaps.  However, I have a major point of confusion
here: with nested mode, we should need unmap events for the 1st level
rather than the 2nd level, am I right?  I mean, the invalidate request
should be a GVA range rather than a GPA range?  While here, IIUC,
vfio_listener_region_add() works on the GPA address space.

I don't know the SMMU well enough, but on Intel there should be two
different kinds of invalidation messages.  Currently, because we still
don't support nested mode on Intel, the 1st level invalidation is not
yet implemented (VTD_INV_DESC_PIOTLB).  And IMHO if it is going to be
implemented, it should be different from the current
IOMMU_NOTIFIER_UNMAP in that it should not even need to be bound to a
memory region; modules like vfio should simply deliver that exact
message to the host IOMMU driver for the GVA range to be invalidated,
just as they will do with the root pointer of the guest's 1st level
page table.
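
To illustrate the pass-through idea discussed above, here is a rough,
non-upstream sketch of what forwarding a guest stage-1 invalidation to
the host could look like on the VFIO side.  The IOMMUNotifier,
IOMMUTLBEntry and VFIOGuestIOMMU types are the existing QEMU ones;
vfio_container_cache_invalidate() is a hypothetical helper standing in
for the cache-invalidation uAPI proposed in the companion kernel series:

/* Rough sketch, not upstream code: in nested mode the notifier does not
 * touch stage-2 mappings; it only forwards the guest's stage-1 (GVA)
 * invalidation to the host IOMMU driver.
 * vfio_container_cache_invalidate() is hypothetical. */
static void vfio_nested_unmap_notify(IOMMUNotifier *n, IOMMUTLBEntry *iotlb)
{
    VFIOGuestIOMMU *giommu = container_of(n, VFIOGuestIOMMU, n);

    /* iotlb->iova/addr_mask describe the range the guest invalidated;
     * pass it straight through instead of unmapping anything. */
    vfio_container_cache_invalidate(giommu->container, iotlb->iova,
                                    iotlb->addr_mask + 1);
}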

> 
> As mentioned in the cover letter, at the moment I still register both
> MAP/UNMAP notifiers, as the MAP notifier registration produces an
> explicit warning message in the SMMUv3 device. If I remove the
> registration we will lose this message. I hope this code is just an
> intermediate state towards the actual nested stage support.

I didn't see it in the cover letter.  Would you please provide a link
to the message?

> 
> > And if we have 2nd page table changes (like
> > memory hotplug) then IMHO vfio_listener_region_add() will do this for
> > us as well just like when there's no SMMU.
> 
> In the current integration (see [RFC v4 20/27] hw/vfio/common: Setup
> nested stage mappings, https://patchwork.kernel.org/patch/10962721/),
> I use a prereg_listener for stage 2 mappings.
> > 
> > Another thing is that IOMMU_ATTR_VFIO_NESTED will be the same for all
> > the memory regions, so it also seems a bit awkward to make it per
> > memory region.  If you look at the other real user of this mechanism
> > (IOMMU_ATTR_SPAPR_TCE_FD), that one really is per memory region.
> 
> That's correct, all SMMUv3 regions will return this value. But what
> other API can be used to query IOMMU-level attributes?
> 
> On the other hand,
> 
> Alexey's commit f1334de60b2 ("memory/iommu: Add get_attr()") says:
>     This adds get_attr() to IOMMUMemoryRegionClass, like
>     iommu_ops::domain_get_attr in the Linux kernel.
> 
> and DOMAIN_ATTR_NESTING is part of enum iommu_attr at the kernel level.

Yeah it's fine to me.
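
For reference, a minimal sketch (illustrative only, not upstream code)
of how an SMMUv3-style get_attr() hook could answer
memory_region_iommu_get_attr() for the IOMMU_ATTR_VFIO_NESTED attribute
added by this series:

/* Illustrative sketch: report that stage 1 is owned by the guest. */
static int smmuv3_get_attr(IOMMUMemoryRegion *iommu,
                           enum IOMMUMemoryRegionAttr attr,
                           void *data)
{
    if (attr == IOMMU_ATTR_VFIO_NESTED) {
        *(bool *)data = true;
        return 0;
    }
    return -EINVAL;   /* attribute not handled by this region */
}

The hook would be wired up in the region's class_init (imrc->get_attr =
smmuv3_get_attr), mirroring how spapr exposes IOMMU_ATTR_SPAPR_TCE_FD.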

Thanks,

-- 
Peter Xu


