Re: [PATCH v3 0/5] Support message-based DMA in vfio-user server
From: Stefan Hajnoczi
Subject: Re: [PATCH v3 0/5] Support message-based DMA in vfio-user server
Date: Thu, 14 Sep 2023 10:39:12 -0400
On Thu, Sep 07, 2023 at 06:04:05AM -0700, Mattias Nissler wrote:
> This series adds basic support for message-based DMA in qemu's vfio-user
> server. This is useful for cases where the client does not provide file
> descriptors for accessing system memory via memory mappings. My motivating use
> case is to hook up device models as PCIe endpoints to a hardware design. This
> works by bridging the PCIe transaction layer to vfio-user, and the endpoint
> does not access memory directly, but sends memory request TLPs to the hardware
> design in order to perform DMA.
>
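
In QEMU terms, the picture described above amounts to backing the client's DMA
address space with an indirect (ops-backed) MemoryRegion whose callbacks turn
each access into a vfio-user DMA message instead of touching a shared mapping.
The following is only a rough sketch of that shape; send_dma_read_msg() and
send_dma_write_msg() are hypothetical placeholders for the protocol round
trip, not functions from the series or from libvfio-user:

#include "qemu/osdep.h"
#include "exec/memory.h"

/* Hypothetical helpers standing in for the vfio-user DMA read/write
 * message exchange with the client. */
static void send_dma_read_msg(void *opaque, hwaddr addr, void *buf,
                              unsigned size);
static void send_dma_write_msg(void *opaque, hwaddr addr, const void *buf,
                               unsigned size);

/* Every access to the region becomes one DMA request message. */
static uint64_t msg_dma_read(void *opaque, hwaddr addr, unsigned size)
{
    uint64_t val = 0;

    send_dma_read_msg(opaque, addr, &val, size);
    return val;
}

static void msg_dma_write(void *opaque, hwaddr addr, uint64_t val,
                          unsigned size)
{
    send_dma_write_msg(opaque, addr, &val, size);
}

static const MemoryRegionOps msg_dma_ops = {
    .read = msg_dma_read,
    .write = msg_dma_write,
    .endianness = DEVICE_NATIVE_ENDIAN,
};

A region created with memory_region_init_io() and these ops would then be
mapped into the device's DMA address space in place of a file-backed RAM
region.
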
> Note that there is some more work required on top of this series to get
> message-based DMA to really work well:
>
> * libvfio-user has a long-standing issue where socket communication gets
> messed up when messages are sent from both ends at the same time. See
> https://github.com/nutanix/libvfio-user/issues/279 for more details. I've
> been engaging there and a fix is in review.
>
> * qemu currently breaks down DMA accesses into chunks of size 8 bytes at
> maximum, each of which will be handled in a separate vfio-user DMA request
> message. This is quite terrible for large DMA accesses, such as when nvme
> reads and writes page-sized blocks for example. Thus, I would like to improve
> qemu to be able to perform larger accesses, at least for indirect memory
> regions. I have something working locally, but since this will likely result
> in more involved surgery and discussion, I am leaving this to be addressed in
> a separate patch.
Have you tried setting mr->ops->valid.max_access_size to something like
64 KB?
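
For reference, that limit lives in the MemoryRegionOps of the region being
accessed. Below is a minimal sketch of what the suggestion would look like;
the handler names are placeholders and the 64 KB figure is just the value
floated above. Note that the read/write callbacks still carry at most a
uint64_t per call, which is part of why genuinely larger transfers need the
more involved changes mentioned in the cover letter:

#include "qemu/osdep.h"
#include "qemu/units.h"
#include "exec/memory.h"

/* Placeholder handlers; in the vfio-user server these would issue DMA
 * request messages. */
static uint64_t dma_region_read(void *opaque, hwaddr addr, unsigned size);
static void dma_region_write(void *opaque, hwaddr addr, uint64_t val,
                             unsigned size);

static const MemoryRegionOps dma_region_ops = {
    .read = dma_region_read,
    .write = dma_region_write,
    .endianness = DEVICE_NATIVE_ENDIAN,
    .valid = {
        .min_access_size = 1,
        .max_access_size = 64 * KiB, /* sizes accepted from callers */
    },
    .impl = {
        .min_access_size = 1,
        .max_access_size = 8,        /* sizes the callbacks can handle */
    },
};
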
Paolo: Any suggestions for increasing DMA transaction sizes?
Stefan
>
> Changes from v1:
>
> * Address Stefan's review comments. In particular, enforce an allocation limit
> and don't drop the map client callbacks given that map requests can fail when
> hitting size limits.
>
> * libvfio-user version bump now included in the series.
>
> * Tested as well on big-endian s390x. This uncovered another byte order issue
> in vfio-user server code that I've included a fix for.
>
> Changes from v2:
>
> * Add a preparatory patch to make bounce buffering an AddressSpace-specific
> concept.
>
> * The total buffer size limit parameter is now per AddressSpace and can be
> configured per PCIDevice via a property.
>
> * Store a magic value in the first bytes of the bounce buffer struct as a
> best-effort measure to detect invalid pointers in address_space_unmap.
>
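
The magic-value idea mentioned in the last point can be illustrated roughly as
follows; the struct layout, constant, and helper are assumptions made for the
sake of the example, not the definitions used in the series:

#include "qemu/osdep.h"
#include "exec/memory.h"

/* Hypothetical marker value; the series may use a different constant. */
#define BOUNCE_BUFFER_MAGIC 0xb0b0b0b0b0b0b0b0ULL

typedef struct BounceBuffer {
    uint64_t magic;      /* set when the buffer is handed out by map() */
    MemoryRegion *mr;    /* region the buffer shadows */
    hwaddr addr;
    size_t len;
    uint8_t buffer[];    /* pointer returned to address_space_map() callers */
} BounceBuffer;

/*
 * In address_space_unmap(), a pointer that did not come from the bounce
 * buffer path will (with high probability) not be preceded by the magic
 * value, so stale or unrelated pointers can be caught on a best-effort
 * basis.
 */
static bool pointer_is_bounce_buffer(void *ptr)
{
    BounceBuffer *buf = container_of(ptr, BounceBuffer, buffer);

    return buf->magic == BOUNCE_BUFFER_MAGIC;
}
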
> Mattias Nissler (5):
> softmmu: Per-AddressSpace bounce buffering
> softmmu: Support concurrent bounce buffers
> Update subprojects/libvfio-user
> vfio-user: Message-based DMA support
> vfio-user: Fix config space access byte order
>
> hw/pci/pci.c | 8 ++
> hw/remote/trace-events | 2 +
> hw/remote/vfio-user-obj.c | 88 +++++++++++++++++--
> include/exec/cpu-common.h | 2 -
> include/exec/memory.h | 39 ++++++++-
> include/hw/pci/pci_device.h | 3 +
> softmmu/dma-helpers.c | 4 +-
> softmmu/memory.c | 4 +
> softmmu/physmem.c | 155 ++++++++++++++++++----------------
> subprojects/libvfio-user.wrap | 2 +-
> 10 files changed, 220 insertions(+), 87 deletions(-)
>
> --
> 2.34.1
>