qemu-devel

Re: [PATCH v1 4/9] vfio: Support for RamDiscardMgr in the !vIOMMU case


From: Alex Williamson
Subject: Re: [PATCH v1 4/9] vfio: Support for RamDiscardMgr in the !vIOMMU case
Date: Wed, 2 Dec 2020 16:26:33 -0700

On Thu, 19 Nov 2020 16:39:13 +0100
David Hildenbrand <david@redhat.com> wrote:

> Implement support for RamDiscardMgr, to prepare for virtio-mem
> support. Instead of mapping the whole memory section, we only map
> "populated" parts and update the mapping when notified about
> discarding/population of memory via the RamDiscardListener. Similarly, when
> syncing the dirty bitmaps, sync only the actually mapped (populated) parts
> by replaying via the notifier.
> 
> Small mapping granularity is problematic for vfio, because we might run out
> of mappings. Warn to at least make users aware that there is such a
> limitation and that we are dealing with a setup issue, e.g., of
> virtio-mem devices.
> 
> Using virtio-mem with vfio is still blocked via
> ram_block_discard_disable()/ram_block_discard_require() after this patch.
> 
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: "Michael S. Tsirkin" <mst@redhat.com>
> Cc: Alex Williamson <alex.williamson@redhat.com>
> Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
> Cc: Igor Mammedov <imammedo@redhat.com>
> Cc: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
> Cc: Peter Xu <peterx@redhat.com>
> Cc: Auger Eric <eric.auger@redhat.com>
> Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
> Cc: teawater <teawaterz@linux.alibaba.com>
> Cc: Marek Kedzierski <mkedzier@redhat.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
>  hw/vfio/common.c              | 233 ++++++++++++++++++++++++++++++++++
>  include/hw/vfio/vfio-common.h |  12 ++
>  2 files changed, 245 insertions(+)
> 
> diff --git a/hw/vfio/common.c b/hw/vfio/common.c
> index c1fdbf17f2..d52e7356cb 100644
> --- a/hw/vfio/common.c
> +++ b/hw/vfio/common.c
...
> +static void vfio_register_ram_discard_notifier(VFIOContainer *container,
> +                                               MemoryRegionSection *section)
> +{
> +    RamDiscardMgr *rdm = memory_region_get_ram_discard_mgr(section->mr);
> +    RamDiscardMgrClass *rdmc = RAM_DISCARD_MGR_GET_CLASS(rdm);
> +    MachineState *ms = MACHINE(qdev_get_machine());
> +    uint64_t suggested_granularity;
> +    VFIORamDiscardListener *vrdl;
> +    int ret;
> +
> +    vrdl = g_new0(VFIORamDiscardListener, 1);
> +    vrdl->container = container;
> +    vrdl->mr = section->mr;
> +    vrdl->offset_within_region = section->offset_within_region;
> +    vrdl->offset_within_address_space = section->offset_within_address_space;
> +    vrdl->size = int128_get64(section->size);
> +    vrdl->granularity = rdmc->get_min_granularity(rdm, section->mr);
> +
> +    /* Ignore some corner cases not relevant in practice. */
> +    g_assert(QEMU_IS_ALIGNED(vrdl->offset_within_region, TARGET_PAGE_SIZE));
> +    g_assert(QEMU_IS_ALIGNED(vrdl->offset_within_address_space,
> +                             TARGET_PAGE_SIZE));
> +    g_assert(QEMU_IS_ALIGNED(vrdl->size, TARGET_PAGE_SIZE));
> +
> +    /*
> +     * We assume initial RAM never has a RamDiscardMgr and that all memory
> +     * to eventually get hotplugged later could be coordinated via a
> +     * RamDiscardMgr ("worst case").
> +     *
> +     * We assume the Linux kernel is configured ("dma_entry_limit") for the
> +     * maximum of 65535 mappings and that we can consume roughly half of that


s/maximum/default/

Deciding we should only use half of it seems arbitrary.


> +     * for this purpose.
> +     *
> +     * In reality, we might also have RAM without a RamDiscardMgr in our device
> +     * memory region and might be able to consume more mappings.
> +     */
> +    suggested_granularity = pow2ceil((ms->maxram_size - ms->ram_size) / 32768);
> +    suggested_granularity = MAX(suggested_granularity, 1 * MiB);
> +    if (vrdl->granularity < suggested_granularity) {
> +        warn_report("%s: eventually problematic mapping granularity (%" PRId64
> +                    " MiB) with coordinated discards (e.g., 'block-size' in"
> +                    " virtio-mem). Suggested minimum granularity: %" PRId64
> +                    " MiB", __func__, vrdl->granularity / MiB,
> +                    suggested_granularity / MiB);
> +    }
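
To make the heuristic above concrete, a worked example (the numbers are
illustrative, derived only from the quoted code): with 1 TiB of hotpluggable
memory, i.e. maxram_size - ram_size = 1 TiB, we get 1 TiB / 32768 = 32 MiB
and pow2ceil(32 MiB) = 32 MiB, so any RamDiscardMgr granularity (e.g., a
virtio-mem 'block-size') below 32 MiB would trigger the warning. The 1 MiB
floor only takes effect with 32 GiB or less of hotpluggable memory.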


Starting with kernel 5.10 we have a way to get the instantaneous count of
available DMA mappings, so we could avoid assuming 64k when that's
available (see, e.g., s390_pci_update_dma_avail()).  Thanks,

Alex
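
For reference, a minimal sketch of what such a query could look like, using
the VFIO_IOMMU_TYPE1_INFO_DMA_AVAIL capability that 5.10 reports through
VFIO_IOMMU_GET_INFO. The helper name and the fixed-size buffer are
illustrative, not existing QEMU code; real code would retry with the argsz
the kernel reports:

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>    /* needs 5.10+ uapi headers */

/*
 * Illustrative helper: fetch the current number of available DMA
 * mappings from a type1 vfio container. Returns 0 on success.
 */
static int vfio_get_dma_avail(int container_fd, uint32_t *avail)
{
    uint8_t buf[1024];     /* fixed size for brevity */
    struct vfio_iommu_type1_info *info = (void *)buf;
    struct vfio_info_cap_header *hdr;

    memset(buf, 0, sizeof(buf));
    info->argsz = sizeof(buf);

    if (ioctl(container_fd, VFIO_IOMMU_GET_INFO, info)) {
        return -1;
    }
    if (!(info->flags & VFIO_IOMMU_INFO_CAPS) || !info->cap_offset) {
        return -1;         /* capability not reported (pre-5.10 kernel) */
    }

    /* Walk the capability chain; "next" is an offset from buf. */
    for (hdr = (void *)(buf + info->cap_offset); ;
         hdr = (void *)(buf + hdr->next)) {
        if (hdr->id == VFIO_IOMMU_TYPE1_INFO_DMA_AVAIL) {
            struct vfio_iommu_type1_info_dma_avail *cap = (void *)hdr;

            *avail = cap->avail;
            return 0;
        }
        if (!hdr->next) {
            return -1;
        }
    }
}

On kernels that don't report the capability, the caller would fall back to
assuming the dma_entry_limit default of 65535 mappings.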



