From: Mario Casquero
Subject: Re: [PATCH v2 00/14] libvhost-user: support more memslots and cleanup memslot handling code
Date: Mon, 11 Mar 2024 21:00:27 +0100

This series has been successfully tested by QE. The qemu-storage-daemon was
started in the background with a RHEL 9.5 image and vhost-user-blk. After
that, a VM was booted with virtio-mem and vhost-user-blk-pci, and the HMP
command 'info mtree' confirmed that virtio-mem is making use of multiple
memslots.
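
For reference, a minimal sketch of such a setup; the image path, sizes, and
socket path below are placeholders, not the exact configuration used by QE:

  # Export the disk image over vhost-user-blk (run in the background):
  qemu-storage-daemon \
      --blockdev driver=file,filename=rhel-9.5.img,node-name=disk0 \
      --export type=vhost-user-blk,id=exp0,node-name=disk0,writable=on,addr.type=unix,addr.path=/tmp/vub.sock &

  # Boot the VM with shared guest memory (required for vhost-user), a
  # virtio-mem device with dynamic-memslots=on, and vhost-user-blk-pci:
  qemu-system-x86_64 -enable-kvm -machine q35,memory-backend=mem0 \
      -m 4G,maxmem=20G \
      -object memory-backend-memfd,id=mem0,size=4G,share=on \
      -object memory-backend-memfd,id=vmem0,size=16G,share=on \
      -device virtio-mem-pci,id=vm0,memdev=vmem0,requested-size=8G,dynamic-memslots=on \
      -chardev socket,id=char0,path=/tmp/vub.sock \
      -device vhost-user-blk-pci,chardev=char0

  # In the HMP monitor, 'info mtree' should then show the virtio-mem region
  # split across multiple memslots.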


On Wed, Feb 14, 2024 at 4:18 PM David Hildenbrand <david@redhat.com> wrote:
>
> This series adds support for more memslots (509) to libvhost-user, to
> make it fully compatible with virtio-mem, which uses up to 256 memslots
> across all memory devices in "dynamic-memslots" mode (more details
> in patch #2).
>
> With that in place, this series optimizes and extends memory region
> handling in libvhost-user:
> * Heavily deduplicate and clean up the memory region handling code
> * Speed up GPA->VA translation with many memslots using binary search
>   (see the sketch below)
> * Optimize mmap_offset handling to use it as fd_offset for mmap()
> * Avoid ring remapping when adding a single memory region
> * Avoid dumping all guest memory when the process crashes, as that can
>   allocate memory in sparse memory mappings
>
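
To illustrate the binary-search GPA->VA lookup mentioned in the list above,
here is a minimal sketch; the type, field, and function names are
illustrative assumptions, not the actual libvhost-user code:

  #include <stddef.h>
  #include <stdint.h>

  typedef struct MemRegion {
      uint64_t gpa_start;  /* guest physical start address */
      uint64_t size;       /* region size in bytes */
      uint64_t va_start;   /* where the region is mmap'ed in this process */
  } MemRegion;

  /* Regions are kept sorted by gpa_start and must not overlap. */
  static void *gpa_to_va(MemRegion *regions, size_t nregions, uint64_t gpa)
  {
      size_t low = 0, high = nregions;

      while (low < high) {
          size_t mid = low + (high - low) / 2;
          MemRegion *r = &regions[mid];

          if (gpa < r->gpa_start) {
              high = mid;
          } else if (gpa >= r->gpa_start + r->size) {
              low = mid + 1;
          } else {
              return (void *)(uintptr_t)(r->va_start + (gpa - r->gpa_start));
          }
      }
      return NULL;  /* not backed by any registered memslot */
  }
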
> I'm being very careful not to break some weird corner case that modern
> QEMU might no longer trigger, but that an older QEMU could have triggered
> or that some other frontend might still trigger.
>
> The only thing I am not careful about is forbidding memory regions that
> overlap in GPA space: such overlaps don't make any sense.
>
> With this series, virtio-mem (with dynamic-memslots=on) +
> qemu-storage-daemon works flawlessly and as expected in my tests.
>
> v1 -> v2:
> * Drop "libvhost-user: Fix msg_region->userspace_addr computation"
>  -> Not actually required
> * "libvhost-user: Factor out adding a memory region"
>  -> Make debug output more consistent (add missing ":")
> * "libvhost-user: Use most of mmap_offset as fd_offset"
>  -> get_fd_pagesize -> get_fd_hugepagesize; remove getpagesize()
>  -> "mmap_offset:" to "old mmap_offset:" in debug message
>  -> "adj mmap_offset:" to "new mmap_offset:" in debug message
>  -> Use "(unsigned int)fs.f_type"; the man page of fstatfs() calls out
>     that the type of f_type can vary depending on the architecture.
>     "unsigned int" is sufficient here (a sketch follows after this list).
>  -> Updated patch description
> * Added RBs+ACKs
> * Did a GitLab CI run; it seems to be happy regarding libvhost-user
>
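
As a minimal sketch of the f_type handling mentioned in the changelog above
(assuming Linux; this mirrors the idea, not necessarily the exact helper in
the patch):

  #include <errno.h>
  #include <stddef.h>
  #include <sys/vfs.h>
  #include <linux/magic.h>

  /* Return the huge page size if fd is backed by hugetlbfs, 0 otherwise. */
  static size_t get_fd_hugepagesize(int fd)
  {
      struct statfs fs;
      int ret;

      do {
          ret = fstatfs(fd, &fs);
      } while (ret != 0 && errno == EINTR);

      /*
       * fstatfs(2) notes that the type of f_type varies across
       * architectures; an unsigned int is enough to compare the magic.
       */
      if (ret == 0 && (unsigned int)fs.f_type == HUGETLBFS_MAGIC) {
          return fs.f_bsize;
      }
      return 0;
  }
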
> Cc: Michael S. Tsirkin <mst@redhat.com>
> Cc: Jason Wang <jasowang@redhat.com>
> Cc: Stefan Hajnoczi <stefanha@redhat.com>
> Cc: Stefano Garzarella <sgarzare@redhat.com>
> Cc: Germano Veit Michel <germano@redhat.com>
> Cc: Raphael Norwitz <raphael.norwitz@nutanix.com>
>
> David Hildenbrand (14):
>   libvhost-user: Dynamically allocate memory for memory slots
>   libvhost-user: Bump up VHOST_USER_MAX_RAM_SLOTS to 509
>   libvhost-user: Factor out removing all mem regions
>   libvhost-user: Merge vu_set_mem_table_exec_postcopy() into
>     vu_set_mem_table_exec()
>   libvhost-user: Factor out adding a memory region
>   libvhost-user: No need to check for NULL when unmapping
>   libvhost-user: Don't zero out memory for memory regions
>   libvhost-user: Don't search for duplicates when removing memory
>     regions
>   libvhost-user: Factor out search for memory region by GPA and simplify
>   libvhost-user: Speedup gpa_to_mem_region() and vu_gpa_to_va()
>   libvhost-user: Use most of mmap_offset as fd_offset
>   libvhost-user: Factor out vq usability check
>   libvhost-user: Dynamically remap rings after (temporarily?) removing
>     memory regions
>   libvhost-user: Mark mmap'ed region memory as MADV_DONTDUMP
>
>  subprojects/libvhost-user/libvhost-user.c | 595 ++++++++++++----------
>  subprojects/libvhost-user/libvhost-user.h |  10 +-
>  2 files changed, 334 insertions(+), 271 deletions(-)
>
> --
> 2.43.0
>
>



