From: David Hildenbrand
Subject: Re: [PATCH] udmabuf: revert 'Add support for mapping hugepages (v4)'
Date: Thu, 15 Jun 2023 11:48:34 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Thunderbird/102.12.0
>> Skimming over the shmem_read_mapping_page() users, I assume most of them
>> use a VM_PFNMAP mapping (or don't mmap them at all), where we won't be
>> messing with the struct page at all. (That might even allow you to mmap
>> hugetlb sub-pages, because the struct page -- and mapcount -- will be
>> ignored completely and not touched.)
>
> Oh, are you suggesting that if we do vma->vm_flags |= VM_PFNMAP in the
> mmap handler (mmap_udmabuf) and also do
> vmf_insert_pfn(vma, vmf->address, page_to_pfn(page)) instead of
> vmf->page = ubuf->pages[pgoff]; get_page(vmf->page); in the vma fault
> handler (udmabuf_vm_fault), we can avoid most of the pitfalls you have
> identified -- including with the usage of hugetlb sub-pages?
Yes, that's my thinking, but I have to do my homework first to see if that would really work for hugetlb.
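To make the suggestion concrete, here is a minimal sketch of what the PFN-based variant of the two udmabuf handlers could look like. This is an untested illustration of the idea discussed above, not a patch: the struct udmabuf fields (pages[], pagecount) follow the existing driver, while the exact flag handling (e.g. vm_flags_set() on recent kernels) is an assumption.

```c
/* Sketch only: fault handler that inserts the raw PFN instead of the
 * struct page, so no page refcount or mapcount is touched. */
static vm_fault_t udmabuf_vm_fault(struct vm_fault *vmf)
{
	struct vm_area_struct *vma = vmf->vma;
	struct udmabuf *ubuf = vma->vm_private_data;
	pgoff_t pgoff = vmf->pgoff;

	if (pgoff >= ubuf->pagecount)
		return VM_FAULT_SIGBUS;

	/* Replaces: vmf->page = ubuf->pages[pgoff]; get_page(vmf->page); */
	return vmf_insert_pfn(vma, vmf->address,
			      page_to_pfn(ubuf->pages[pgoff]));
}

static int mmap_udmabuf(struct dma_buf *buf, struct vm_area_struct *vma)
{
	struct udmabuf *ubuf = buf->priv;

	if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) == 0)
		return -EINVAL;

	vma->vm_ops = &udmabuf_vm_ops;
	vma->vm_private_data = ubuf;
	/* VM_PFNMAP tells the MM core this VMA maps raw PFNs, not pages;
	 * on kernels where vm_flags is no longer directly writable this
	 * would be vm_flags_set(vma, VM_PFNMAP | ...). */
	vma->vm_flags |= VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;
	return 0;
}
```

Whether this actually sidesteps the hugetlb sub-page pitfalls is exactly the open question in this thread.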
The thing is, I kind-of consider what udmabuf does a layer violation: we have a filesystem (shmem/hugetlb) that should handle mappings to user space. Yet, a driver decides to bypass that and simply map the pages ordinarily to user space. (This is revealed by the fact that hugetlb never maps sub-pages, yet udmabuf decides to do so.)
In an ideal world everybody would simply mmap() the original memfd, but given the offset+size configuration within the memfd, that might not always be desirable. As a workaround, we could mmap() only the PFNs, leaving the struct page unaffected.
I'll have to look closer into that.

--
Cheers,

David / dhildenb