Re: [Qemu-devel] [RFC PATCH v2 3/3] VFIO: Type1 IOMMU mapping support for vGPU

From: Neo Jia
Subject: Re: [Qemu-devel] [RFC PATCH v2 3/3] VFIO: Type1 IOMMU mapping support for vGPU
Date: Thu, 10 Mar 2016 22:10:59 -0800
User-agent: Mutt/1.5.24 (2015-08-30)

On Fri, Mar 11, 2016 at 04:46:23AM +0000, Tian, Kevin wrote:
> > From: Neo Jia [mailto:address@hidden
> > Sent: Friday, March 11, 2016 12:20 PM
> >
> > On Thu, Mar 10, 2016 at 11:10:10AM +0800, Jike Song wrote:
> > >
> > > >> Is it supposed to be the caller who should set up the IOMMU via DMA
> > > >> APIs such as dma_map_page(), after calling vgpu_dma_do_translate()?
> > > >>
> > > >
> > > > Don't think you need to call dma_map_page here. Once you have the pfn
> > > > available to your GPU kernel driver, you can just go ahead and set up
> > > > the mapping as you normally do, such as by calling pci_map_sg and its
> > > > friends.
> > > >
> > >
> > > Technically it's definitely OK to call the DMA API from the caller rather
> > > than here; however, personally I think it is a bit counter-intuitive:
> > > IOMMU page tables should be constructed within the VFIO IOMMU driver.
> > >
> >
> > Hi Jike,
> >
> > For vGPU, what we have is just a virtual device and a fake IOMMU group;
> > therefore the actual interaction with the real GPU should be managed by
> > the GPU vendor driver.
> >
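(For concreteness, a rough sketch of the vendor-driver side of that flow is
below. It assumes the host pfn has already been obtained through the
translation path discussed above, e.g. the RFC's vgpu_dma_do_translate(); the
function name and error handling here are invented for illustration and are
not code from the patch.)

#include <linux/dma-mapping.h>
#include <linux/errno.h>
#include <linux/mm.h>
#include <linux/pci.h>

/* Map one translated host pfn for DMA by the physical GPU. */
static int example_vgpu_map_pfn(struct pci_dev *pdev, unsigned long host_pfn,
                                dma_addr_t *iova)
{
        struct page *page = pfn_to_page(host_pfn);

        /*
         * Plain DMA API usage from the vendor driver: if an IOMMU is present
         * this programs the IOMMU for pdev, otherwise it returns a direct (or
         * bounced) address. pci_map_sg() would be the equivalent call for a
         * whole scatterlist.
         */
        *iova = dma_map_page(&pdev->dev, page, 0, PAGE_SIZE,
                             DMA_BIDIRECTIONAL);
        if (dma_mapping_error(&pdev->dev, *iova))
                return -EIO;

        return 0;
}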
>
> Hi, Neo,
>
> Seems we have a different thought on this. Regardless of whether it's a
> virtual or physical device, imo VFIO should manage the IOMMU configuration.
> The only difference is:
>
> - for a physical device, VFIO directly invokes the IOMMU API to set the
>   IOMMU entry (GPA->HPA);
> - for a virtual device, VFIO invokes the kernel DMA APIs, which indirectly
>   lead to an IOMMU entry being set if CONFIG_IOMMU is enabled in the kernel
>   (GPA->IOVA);
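(To make the two paths above concrete, here is a minimal sketch of that split.
It is illustrative only, not code from the RFC or from VFIO; the helper name,
the is_vgpu flag, and the protection flags are assumptions.)

#include <linux/dma-mapping.h>
#include <linux/errno.h>
#include <linux/iommu.h>
#include <linux/mm.h>

/*
 * Hypothetical single entry point in the VFIO IOMMU backend: program the
 * IOMMU directly for an assigned physical device (GPA->HPA), or go through
 * the DMA API for a vGPU (GPA->IOVA) and report the resulting bus address
 * back to the caller.
 */
static int example_vfio_map_one(struct iommu_domain *domain, bool is_vgpu,
                                struct device *dev, unsigned long gpa,
                                struct page *page, dma_addr_t *iova)
{
        if (!is_vgpu) {
                /* Physical device: 1:1 GPA->HPA entry, as type1 does today. */
                *iova = gpa;
                return iommu_map(domain, gpa, page_to_phys(page), PAGE_SIZE,
                                 IOMMU_READ | IOMMU_WRITE);
        }

        /* Virtual device: let the DMA API choose the IOVA. */
        *iova = dma_map_page(dev, page, 0, PAGE_SIZE, DMA_BIDIRECTIONAL);
        return dma_mapping_error(dev, *iova) ? -EIO : 0;
}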
How does it make any sense for us to do a dma_map_page for a physical device
that we don't have any direct interaction with?
>
> This would provide a unified way to manage the translation in VFIO, and
> then the vendor-specific driver only needs to query and use the returned
> IOVA corresponding to a GPA.
>
> Doing so has another benefit: it makes the underlying vGPU driver VMM
> agnostic. For KVM, yes, we can use pci_map_sg. However, for Xen it's
> different (today Dom0 doesn't see an IOMMU; in the future there will be a
> PVIOMMU implementation), so a different code path is required. It's better
> to abstract such specific knowledge out of the vGPU driver, which would just
> use whatever dma_addr is returned by the other agent (VFIO here, or another
> Xen-specific agent) in a centralized way.
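(A hedged sketch of what such an abstraction could look like from the vGPU
driver's side; the ops structure and names below are invented for illustration
and are not part of the RFC or of VFIO.)

#include <linux/types.h>

/*
 * Hypothetical translation-agent interface: the vGPU vendor driver only
 * consumes dma_addr_t values, without knowing whether they came from
 * VFIO/type1 on KVM, a Xen-specific agent, or an identity mapping.
 */
struct vgpu_translate_ops {
        /* Resolve 'count' guest pfns starting at gfn into bus addresses. */
        int (*translate)(void *agent, unsigned long gfn, unsigned long count,
                         dma_addr_t *dma_addrs);
        /* Drop the pinning/mapping established by translate(). */
        void (*release)(void *agent, unsigned long gfn, unsigned long count);
};

With something like this, the vendor driver would simply program whatever
dma_addrs come back into the GPU page tables, and the KVM/Xen difference would
stay entirely inside the agent implementation.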
>
> Alex, what's your opinion on this?
>
> Thanks
> Kevin