qemu-devel

Re: RFC: New device for zero-copy VM memory access


From: Dr. David Alan Gilbert
Subject: Re: RFC: New device for zero-copy VM memory access
Date: Thu, 31 Oct 2019 13:24:43 +0000
User-agent: Mutt/1.12.1 (2019-06-15)

* address@hidden (address@hidden) wrote:
> Hi Dave,
> 
> On 2019-10-31 05:52, Dr. David Alan Gilbert wrote:
> > * address@hidden (address@hidden) wrote:
> > > Hi All,
> > > 
> > > Over the past week, I have been working to come up with a solution to
> > > the memory transfer performance issues that hinder the Looking Glass
> > > Project.
> > > 
> > > Currently Looking Glass works by using the IVSHMEM shared memory device
> > > which is fed by an application that captures the guest's video output.
> > > While this works it is sub-optimal because we first have to perform a
> > > CPU copy of the captured frame into shared RAM, and then back out again
> > > for display. Because the destination buffers are allocated by closed
> > > proprietary code (DirectX, or NVidia NvFBC) there is no way to have the
> > > frame placed directly into the IVSHMEM shared ram.
> > > 
> > > This new device, currently named `introspection` (which needs a more
> > > suitable name, porthole perhaps?), provides a means of translating
> > > guest physical addresses to host virtual addresses, and finally to the
> > > host offsets in RAM for file-backed memory guests. It does this by
> > > means of a simple protocol over a unix socket (chardev) which is
> > > supplied the appropriate fd for the VM's system RAM. The guest (in this
> > > case, Windows), when presented with the address of a userspace buffer
> > > and size, will mlock the appropriate pages into RAM and pass guest
> > > physical addresses to the virtual device.
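
To make the flow just described concrete, here is a minimal host-side
sketch.  The message layout and names are assumptions for illustration
only (the real wire format is whatever hw/misc/introspection.c in the tree
linked below implements): QEMU hands the consumer the fd backing guest RAM
over the chardev socket, then sends (gpa, size, ram offset) tuples which
the consumer can mmap directly, with no copy.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <unistd.h>

/* Hypothetical segment descriptor; the real device defines its own layout. */
struct ph_map_msg {
    uint64_t gpa;          /* guest physical address of the locked buffer */
    uint64_t size;         /* length in bytes                             */
    uint64_t ram_offset;   /* offset into the file backing the guest RAM  */
};

/* Receive the guest RAM fd that QEMU passes over the unix socket. */
static int recv_ram_fd(int sock)
{
    char data;
    char ctrl[CMSG_SPACE(sizeof(int))];
    struct iovec iov = { .iov_base = &data, .iov_len = 1 };
    struct msghdr msg = {
        .msg_iov = &iov, .msg_iovlen = 1,
        .msg_control = ctrl, .msg_controllen = sizeof(ctrl),
    };
    int fd = -1;

    if (recvmsg(sock, &msg, 0) <= 0)
        return -1;
    struct cmsghdr *c = CMSG_FIRSTHDR(&msg);
    if (c && c->cmsg_type == SCM_RIGHTS)
        memcpy(&fd, CMSG_DATA(c), sizeof(fd));
    return fd;
}

static void consume(int sock)
{
    int ram_fd = recv_ram_fd(sock);
    struct ph_map_msg m;

    /* Each message describes a buffer the guest has mlock'd; map it without
     * copying.  (A real client rounds ram_offset down to a page boundary for
     * mmap and adds the remainder back to the returned pointer.) */
    while (read(sock, &m, sizeof(m)) == sizeof(m)) {
        void *p = mmap(NULL, m.size, PROT_READ, MAP_SHARED,
                       ram_fd, (off_t)m.ram_offset);
        if (p == MAP_FAILED)
            continue;
        printf("gpa 0x%" PRIx64 " mapped at %p (%" PRIu64 " bytes)\n",
               m.gpa, p, m.size);
        /* ... hand p to the frame consumer ... */
        munmap(p, m.size);
    }
}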
> > 
> > Hi Geoffrey,
> >   I wonder if the same thing can be done by using the existing
> > vhost-user mechanism.
> > 
> >   vhost-user is intended for implementing a virtio device outside of the
> > qemu process; so it has a character device down which qemu passes
> > commands to the other process, though most commands then flow via the
> > virtio queues.   To be able to read the virtio queues, the external
> > process mmap's the same memory as the guest - it gets passed a 'set mem
> > table' command by qemu that includes fd's for the RAM, and includes
> > base/offset pairs saying that a particular chunk of RAM is mapped at a
> > particular guest physical address.
> > 
> >   Whether or not you make use of virtio queues, I think the mechanism
> > for the device to tell the external process the mappings might be what
> > you're after.
> > 
> > Dave
> > 
> 
> While normally I would be all for re-using such code, vhost-user, while
> very feature-complete from what I understand, is overkill for our
> requirements. It will still allocate a communication ring and an events
> system that we will not be using. The goal of this device is to provide a
> dumb & simple method of sharing system RAM, both for this project and for
> others that work on a simple polling mechanism; it is not intended to be
> an end-to-end solution like vhost-user is.
> 
> If you still believe that vhost-user should be used, I will do what I can
> to implement it, but for such a simple device I honestly believe it is
> overkill.
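
(For context, the kind of polling consumer meant here is roughly the
following; an illustrative sketch only, not Looking Glass code, and the
frame_hdr layout is invented.  The producer bumps a counter in the shared
mapping when a new frame is ready and the other side simply polls it, with
no rings or eventfds involved.)

#include <stdatomic.h>
#include <stdint.h>
#include <unistd.h>

struct frame_hdr {
    _Atomic uint32_t seq;           /* bumped by the producer per frame */
    uint32_t width, height, pitch;  /* frame geometry                   */
    /* pixel data follows in the shared mapping */
};

static void poll_frames(struct frame_hdr *hdr)
{
    uint32_t last = 0;

    for (;;) {
        uint32_t seq = atomic_load_explicit(&hdr->seq, memory_order_acquire);
        if (seq == last) {          /* nothing new: back off briefly */
            usleep(100);
            continue;
        }
        last = seq;
        /* ... consume the frame that follows the header ... */
    }
}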

It's certainly worth having a look at vhost-user even if you don't use
most of it;  you can configure it down to 1 (maybe 0?) queues if you're
really desperate - and you might find it comes in useful!  The actual
setup is pretty easy.

The process of synchronising with (potentially changing) host memory
mapping is a bit hairy; so if we can share it with vhost it's probably
worth it.
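
For reference, the shape of what a backend does with those regions is
roughly the following (a hand-written sketch, not the libvhost-user API;
message framing, error handling and unmapping on update are omitted).
Each region in the 'set mem table' message arrives with an fd plus a
guest-physical base, a size, QEMU's own mapping address, and an offset to
use with mmap:

#include <stddef.h>
#include <stdint.h>
#include <sys/mman.h>

struct mem_region {
    uint64_t guest_phys_addr;   /* where the chunk sits in guest physical space */
    uint64_t memory_size;       /* length of the chunk                          */
    uint64_t userspace_addr;    /* where QEMU itself has it mapped              */
    uint64_t mmap_offset;       /* offset to use when mmap()ing the passed fd   */
    void    *mmap_addr;         /* our own mapping, filled in by map_region()   */
};

static struct mem_region regions[8];    /* small fixed table for the sketch */
static unsigned nregions;

/* Map one region described by a 'set mem table' message. */
static int map_region(struct mem_region *r, int fd)
{
    r->mmap_addr = mmap(NULL, r->memory_size, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, (off_t)r->mmap_offset);
    return r->mmap_addr == MAP_FAILED ? -1 : 0;
}

/* Translate a guest physical address into a pointer in this process. */
static void *gpa_to_ptr(uint64_t gpa)
{
    for (unsigned i = 0; i < nregions; i++) {
        struct mem_region *r = &regions[i];
        if (gpa >= r->guest_phys_addr &&
            gpa <  r->guest_phys_addr + r->memory_size) {
            return (uint8_t *)r->mmap_addr + (gpa - r->guest_phys_addr);
        }
    }
    return NULL;    /* not covered by the current mem table */
}

When the layout changes (memory hotplug, for example) a fresh 'set mem
table' message arrives and the backend has to tear these mappings down and
rebuild the table, which is the hairy resynchronisation mentioned above.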

Dave

> -Geoff
> 
> > > This device and the windows driver have been designed in such a way
> > > that it's a utility device for any project and/or application that
> > > could make use of it. The PCI subsystem vendor and device ID are used
> > > to provide a means of device identification in cases where multiple
> > > devices may be in use for differing applications. This also allows one
> > > common driver to be used for any other projects wishing to build on
> > > this device.
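
As an illustration of that identification scheme: a host-side Linux
application could locate its instance by matching the subsystem IDs the
kernel exposes in sysfs, and a guest driver would do the equivalent match
during bus enumeration.  The IDs used below are placeholders, not assigned
values.

#include <dirent.h>
#include <stdio.h>
#include <string.h>

static int read_hex(const char *path, unsigned *val)
{
    FILE *f = fopen(path, "r");
    if (!f)
        return -1;
    int ok = fscanf(f, "%x", val) == 1;
    fclose(f);
    return ok ? 0 : -1;
}

/* Find the first PCI device whose subsystem IDs match, returning its
 * bus address (e.g. "0000:00:05.0") in 'bdf'. */
static int find_device(unsigned want_svid, unsigned want_sdid,
                       char *bdf, size_t len)
{
    DIR *d = opendir("/sys/bus/pci/devices");
    struct dirent *e;
    char path[300];
    unsigned svid, sdid;

    if (!d)
        return -1;
    while ((e = readdir(d))) {
        if (e->d_name[0] == '.')
            continue;
        snprintf(path, sizeof(path),
                 "/sys/bus/pci/devices/%s/subsystem_vendor", e->d_name);
        if (read_hex(path, &svid))
            continue;
        snprintf(path, sizeof(path),
                 "/sys/bus/pci/devices/%s/subsystem_device", e->d_name);
        if (read_hex(path, &sdid))
            continue;
        if (svid == want_svid && sdid == want_sdid) {
            snprintf(bdf, len, "%s", e->d_name);
            closedir(d);
            return 0;
        }
    }
    closedir(d);
    return -1;
}

/* e.g. find_device(0x4b4c, 0x0001, bdf, sizeof(bdf));   placeholder IDs */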
> > > 
> > > My ultimate goal is to get this to a state where it could be accepted
> > > upstream into Qemu at which point Looking Glass would be modified to
> > > use it instead of the IVSHMEM device.
> > > 
> > > My git repository with the new device can be found at:
> > > https://github.com/gnif/qemu
> > > 
> > > The new device is:
> > > https://github.com/gnif/qemu/blob/master/hw/misc/introspection.c
> > > 
> > > Looking Glass:
> > > https://looking-glass.hostfission.com/
> > > 
> > > The windows driver, while working, needs some cleanup before the
> > > source is published. I intend to maintain both this device and the
> > > windows driver, including producing a signed Windows 10 driver if
> > > Redhat are unwilling or unable.
> > > 
> > > Kind Regards,
> > > Geoffrey McRae
> > > 
> > > HostFission
> > > https://hostfission.com
> > > 
> > --
> > Dr. David Alan Gilbert / address@hidden / Manchester, UK
--
Dr. David Alan Gilbert / address@hidden / Manchester, UK



