Re: [PATCH v2 12/12] vfio: Remove 64-bit IOVA address space assumption


From: Alex Williamson
Subject: Re: [PATCH v2 12/12] vfio: Remove 64-bit IOVA address space assumption
Date: Wed, 20 Sep 2023 14:02:06 -0600

On Wed, 13 Sep 2023 10:01:47 +0200
Eric Auger <eric.auger@redhat.com> wrote:

> Now that we retrieve the usable IOVA ranges from the host,
> we know the physical IOMMU aperture and we can remove the
> assumption of a 64-bit IOVA space when calling
> vfio_host_win_add().
> 
> This works fine in general, but the case of an IOMMU memory
> region is trickier. For instance the virtio-iommu MR has a
> 64-bit aperture by default. If the physical IOMMU has a
> smaller aperture (typically the case for VT-d), the IOMMU MR
> would need to be resized when the latter is linked to a
> container. However this happens in vfio_listener_region_add(),
> when calling the IOMMU MR set_iova_ranges() callback, and it
> would lead to a recursive call to vfio_listener_region_add().
> This looks like an incorrect use of the memory API, causing
> for instance duplicate IOMMU MR notifier registration.
> 
> Until we find a better solution, make sure vfio_find_hostwin()
> is no longer called for IOMMU regions.

Thanks for your encouragement to double-check this; it does seem like
there are some gaps in the host window support.  First I guess I don't
understand why the last chunk here assumes a contiguous range.
Shouldn't we call vfio_host_win_add() for each IOVA range?

But then we have a problem that we don't necessarily get positive
feedback from memory_region_iommu_set_iova_ranges().  Did the vIOMMU
accept the ranges or not?  Only one vIOMMU implements the callback.
Should we only call memory_region_iommu_set_iova_ranges() if the range
doesn't align to a host window and should the wrapper return -ENOTSUP
if there is no vIOMMU support to poke holes in the range?
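
For the wrapper, something along these lines is what I have in mind (a
rough sketch only; the callback name and exact signature in this series
may differ):

    int memory_region_iommu_set_iova_ranges(IOMMUMemoryRegion *iommu_mr,
                                            GList *iova_ranges,
                                            Error **errp)
    {
        IOMMUMemoryRegionClass *imrc = IOMMU_MEMORY_REGION_GET_CLASS(iommu_mr);

        if (!imrc->iommu_set_iova_ranges) {
            /* No vIOMMU support to poke holes in the range. */
            return -ENOTSUP;
        }
        return imrc->iommu_set_iova_ranges(iommu_mr, iova_ranges, errp);
    }

With that, vfio_listener_region_add() would only need to call it when
the section does not already fit an existing host window.  Thanks,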

Alex

 
> Signed-off-by: Eric Auger <eric.auger@redhat.com>
> 
> ---
> 
> I have not found any working solution to the IOMMU MR resizing.
> So I can remove this patch or remove the check for IOMMU MR. Maybe
> this is an issue which can be handled separately?
> ---
>  hw/vfio/common.c | 25 ++++++++++++-------------
>  1 file changed, 12 insertions(+), 13 deletions(-)
> 
> diff --git a/hw/vfio/common.c b/hw/vfio/common.c
> index 26da38de05..40cac1ca91 100644
> --- a/hw/vfio/common.c
> +++ b/hw/vfio/common.c
> @@ -1112,13 +1112,6 @@ static void vfio_listener_region_add(MemoryListener *listener,
>  #endif
>      }
>  
> -    hostwin = vfio_find_hostwin(container, iova, end);
> -    if (!hostwin) {
> -        error_setg(&err, "Container %p can't map guest IOVA region"
> -                   " 0x%"HWADDR_PRIx"..0x%"HWADDR_PRIx, container, iova, 
> end);
> -        goto fail;
> -    }
> -
>      memory_region_ref(section->mr);
>  
>      if (memory_region_is_iommu(section->mr)) {
> @@ -1177,6 +1170,14 @@ static void vfio_listener_region_add(MemoryListener *listener,
>          return;
>      }
>  
> +    hostwin = vfio_find_hostwin(container, iova, end);
> +    if (!hostwin) {
> +        error_setg(&err, "Container %p can't map guest IOVA region"
> +                   " 0x%"HWADDR_PRIx"..0x%"HWADDR_PRIx, container, iova, 
> end);
> +        goto fail;
> +    }
> +
> +
>      /* Here we assume that memory_region_is_ram(section->mr)==true */
>  
>      /*
> @@ -2594,12 +2595,10 @@ static int vfio_connect_container(VFIOGroup *group, AddressSpace *as,
>          vfio_get_iommu_info_migration(container, info);
>          g_free(info);
>  
> -        /*
> -         * FIXME: We should parse VFIO_IOMMU_TYPE1_INFO_CAP_IOVA_RANGE
> -         * information to get the actual window extent rather than assume
> -         * a 64-bit IOVA address space.
> -         */
> -        vfio_host_win_add(container, 0, (hwaddr)-1, container->pgsizes);
> +        g_assert(container->nr_iovas);
> +        vfio_host_win_add(container, 0,
> +                          container->iova_ranges[container->nr_iovas - 1].end,
> +                          container->pgsizes);
>  
>          break;
>      }



