Re: Outline for VHOST_USER_PROTOCOL_F_VDPA


From: Stefan Hajnoczi
Subject: Re: Outline for VHOST_USER_PROTOCOL_F_VDPA
Date: Wed, 30 Sep 2020 15:57:52 +0100

On Wed, Sep 30, 2020 at 04:07:59AM -0400, Michael S. Tsirkin wrote:
> On Tue, Sep 29, 2020 at 07:38:24PM +0100, Stefan Hajnoczi wrote:
> > On Tue, Sep 29, 2020 at 06:04:34AM -0400, Michael S. Tsirkin wrote:
> > > On Tue, Sep 29, 2020 at 09:57:51AM +0100, Stefan Hajnoczi wrote:
> > > > On Tue, Sep 29, 2020 at 02:09:55AM -0400, Michael S. Tsirkin wrote:
> > > > > On Mon, Sep 28, 2020 at 10:25:37AM +0100, Stefan Hajnoczi wrote:
> > > > > > Why extend vhost-user with vDPA?
> > > > > > ================================
> > > > > > Reusing VIRTIO emulation code for vhost-user backends
> > > > > > -----------------------------------------------------
> > > > > > It is a common misconception that a vhost device is a VIRTIO
> > > > > > device. VIRTIO devices are defined in the VIRTIO specification
> > > > > > and consist of a configuration space, virtqueues, and a device
> > > > > > lifecycle that includes feature negotiation. A vhost device is a
> > > > > > subset of the corresponding VIRTIO device. The exact subset
> > > > > > depends on the device type, and some vhost devices are closer to
> > > > > > the full functionality of their corresponding VIRTIO device than
> > > > > > others. The most well-known example is that vhost-net devices
> > > > > > have rx/tx virtqueues but lack the virtio-net control virtqueue.
> > > > > > Also, the configuration space and device lifecycle are only
> > > > > > partially available to vhost devices.
> > > > > > 
> > > > > > This difference makes it impossible to use a VIRTIO device as a
> > > > > > vhost-user device and vice versa. There is an impedance mismatch
> > > > > > and missing functionality. That's a shame because existing VIRTIO
> > > > > > device emulation code is mature and duplicating it to provide
> > > > > > vhost-user backends creates additional work.
> > > > > 
> > > > > 
> > > > > The biggest issue facing vhost-user and absent in vdpa is
> > > > > backend disconnect handling. This is the reason control path
> > > > > is kept under QEMU control: we do not need any logic to
> > > > > restore control path data, and we can verify a new backend
> > > > > is consistent with the old one.
> > > > 
> > > > I don't think using vhost-user with vDPA changes that. The VMM still
> > > > needs to emulate a virtio-pci/ccw/mmio device that the guest interfaces
> > > > with. If the device backend goes offline it's possible to restore that
> > > > state upon reconnection. What have I missed?
> > > 
> > > The need to maintain the state in a way that is robust
> > > against backend disconnects and can be restored.
> > 
> > QEMU is only bypassed for virtqueue accesses. Everything else still
> > goes through the virtio-pci emulation in QEMU (VIRTIO configuration
> > space, status register). vDPA doesn't change this.
> > 
> > Existing vhost-user messages can be kept if they are useful (e.g.
> > virtqueue state tracking). So I think the situation is no different than
> > with the existing vhost-user protocol.
> > 
> > > > Regarding reconnection in general, it currently seems like a partially
> > > > solved problem in vhost-user. There is the "Inflight I/O tracking"
> > > > mechanism in the spec and some wording about reconnecting the socket,
> > > > but in practice I wouldn't expect all device types, VMMs, or device
> > > > backends to actually support reconnection. This is an area where a
> > > > uniform solution would be very welcome too.
> > > 
> > > I'm not aware of big issues. What are they?
> > 
> > I think "Inflight I/O tracking" can only be used when request processing
> > is idempotent? In other words, it can only be used when submitting the
> > same request multiple times is safe.
> 
> 
> Not inherently, it just does not attempt to address this problem.
> 
> 
> Inflight tracking only tries to address issues on the guest side,
> that is, making sure the same buffer is used exactly once.
> 
> > A silly example where this recovery mechanism cannot be used is if a
> > device has a persistent counter that is incremented by the request. The
> > guest can't be sure that the counter will be incremented exactly once.
> > 
> > Another example: devices that support requests with compare-and-swap
> > semantics cannot use this mechanism. During recovery the compare will
> > fail if the request was just completing when the backend crashed.
> > 
> > Do I understand the limitations of this mechanism correctly? It doesn't
> > seem general and I doubt it can be applied to all existing device types.
> 
> A device with any kind of atomicity guarantees will have to use some
> internal mechanism (e.g. a log?) to ensure internal consistency; that
> is out of scope for tracking.

Rant warning, but probably useful to think about for future vhost-user
and vfio-user development... :)

IMO "Inflight I/O tracking" is best placed into libvhost-user instead of
the vhost-user protocol. Here is why:

QEMU's vhost-user code actually does nothing with the inflight data
except passing it back to the reconnected vhost-user device backend and
migrating it as an opaque blob.

The fact that it's opaque to QEMU is a warning sign. QEMU is simply a
mechanism for stashing a blob of data. Stashing data is generic
functionality and not specific to vhost-user devices. One could argue
it's convenient to have the inflight data available to QEMU for
reconnection, but as you said, device backends may still need to
maintain additional state.

It's not clear why the opaque inflight data is within the scope of
vhost-user while additional device backend data is outside it. This is
why I think "Inflight I/O tracking" shouldn't be part of the protocol.

"Inflight I/O tracking" should be a utility API in libvhost-user instead
of a vhost-user protocol feature. That way the backend can stash any
additional data it needs along with the virtqueues. There needs to be
device state save/load support in the vhost-user protocol but eventually
we'll need that anyway because some backends are stateful.
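
To make that concrete, here is a minimal sketch of what such a utility
API could look like, assuming the backend keeps the region in a file it
owns (e.g. on tmpfs, as in the idea quoted below) instead of
round-tripping it through QEMU. None of these names exist in
libvhost-user today; they are made up for illustration:

    /* Hypothetical helper: the backend owns a persistent mapping that
     * survives its own restart and lays out both inflight tracking and
     * any device-specific state in it. Not a real libvhost-user API. */
    #include <fcntl.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <sys/mman.h>
    #include <unistd.h>

    struct vu_backend_state {
        uint64_t inflight[64];   /* e.g. a bitmap of in-flight descriptors */
        uint8_t  device_state[]; /* whatever else the backend needs */
    };

    static struct vu_backend_state *
    vu_state_open(const char *path, size_t size)
    {
        int fd = open(path, O_RDWR | O_CREAT, 0600); /* e.g. a tmpfs file */
        if (fd < 0) {
            return NULL;
        }
        if (ftruncate(fd, size) < 0) {
            close(fd);
            return NULL;
        }
        void *p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        close(fd); /* the mapping keeps the contents alive */
        return p == MAP_FAILED ? NULL : p;
    }

On restart the backend reopens the same path and picks up its previous
state; QEMU never needs to see the region at all.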

> > > > There was discussion about recovering state in muser. The original idea
> > > > was for the muser kernel module to host state that persists across
> > > > device backend restart. That way the device backend can go away
> > > > temporarily and resume without guest intervention.
> > > > 
> > > > Then when the vfio-user discussion started the idea morphed into simply
> > > > keeping a tmpfs file for each device instance (no special muser.ko
> > > > support needed anymore). This allows the device backend to resume
> > > > without losing state. In practice a programming framework is needed to
> > > > make this easy and safe to use but it boils down to a tmpfs mmap.
> > > > 
> > > > > > If there was a way to reuse existing VIRTIO device emulation
> > > > > > code it would be easier to move to a multi-process architecture
> > > > > > in QEMU. Want to run --netdev user,id=netdev0 --device
> > > > > > virtio-net-pci,netdev=netdev0 in a separate, sandboxed process?
> > > > > > Easy, run it as a vhost-user-net device instead of as virtio-net.
> > > > > 
> > > > > Given vhost-user is using a socket, and given there's an elaborate
> > > > > protocol due to the need for backwards compatibility, it seems safer
> > > > > to have the vhost-user interface in a separate process too.
> > > > 
> > > > Right, with vhost-user only the virtqueue processing is done in the
> > > > device backend. The VMM still has to do the virtio transport emulation
> > > > (pci, ccw, mmio) and vhost-user connection lifecycle, which is complex.
> > > 
> > > IIUC all vfio-user does is add another protocol in the VMM and move
> > > code out of the VMM to the backend.
> > > 
> > > Architecturally I don't see why it's safer.
> > 
> > It eliminates one layer of device emulation (virtio-pci). Fewer
> > registers to emulate means a smaller attack surface.
> 
> Well it does not eliminate it as such, it moves it to the backend,
> which in a variety of setups is actually a more sensitive place as the
> backend can do things like access host storage/network that the VMM
> can be prevented from doing.
> 
> > It's possible to take things further, maybe with the proposed ioregionfd
> > mechanism, where the VMM's KVM_RUN loop no longer handles MMIO/PIO
> > exits. A separate process can handle them. Maybe some platform devices
> > need CPU state access though.
> > 
> > BTW I think the goal of removing as much emulation from the VMM as
> > possible is interesting.
> > 
> > Did you have some other approach in mind to remove the PCI and
> > virtio-pci device from the VMM?
> 
> Architecturally, I think we can have 3 processes:
> 
> 
> VMM -- guest device emulation -- host backend
> 
> 
> to me this looks like increasing our defence in depth, as opposed to
> just shifting things around ...

Cool idea.

Performance will be hard because the guest device emulation and the
host backend are separated into different processes.

There is also more communication code involved, which might make it
harder to change the guest device emulation <-> host backend
interfaces.

These are the challenges I see but it would be awesome to run guest
device emulation in a tightly sandboxed environment that has almost no
syscalls available.
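
Just to illustrate what "almost no syscalls" could mean: once the
vhost-user socket, ioeventfds, and shared memory are set up, the device
emulation process might be reducible to an allowlist along these lines.
This is a sketch using libseccomp; the exact syscall list is an
assumption and would have to be measured against a real backend:

    /* Sketch: lock a guest-device-emulation process down to a handful
     * of syscalls after all fds have been set up. The allowlist is
     * illustrative, not validated against a real device backend. */
    #include <seccomp.h>
    #include <stddef.h>

    static int sandbox_device_process(void)
    {
        scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_KILL_PROCESS);
        if (!ctx) {
            return -1;
        }

        const int allowed[] = {
            SCMP_SYS(read), SCMP_SYS(write), SCMP_SYS(recvmsg),
            SCMP_SYS(sendmsg), SCMP_SYS(ppoll), SCMP_SYS(futex),
            SCMP_SYS(exit_group),
        };
        for (size_t i = 0; i < sizeof(allowed) / sizeof(allowed[0]); i++) {
            if (seccomp_rule_add(ctx, SCMP_ACT_ALLOW, allowed[i], 0) < 0) {
                seccomp_release(ctx);
                return -1;
            }
        }

        int ret = seccomp_load(ctx);
        seccomp_release(ctx);
        return ret;
    }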

> > > Something like multi-process patches seems like a way to
> > > add defence in depth by having a process in the middle,
> > > outside both VMM and backend.
> > 
> > There is no third process in mpqemu. The VMM uses a UNIX domain socket
> > to communicate directly with the device backend. There is a PCI "proxy"
> > device in the VMM that does this communication when the guest accesses
> > registers. The device backend has a PCI "remote" host controller that a
> > PCIDevice instance is plugged into and the UNIX domain socket protocol
> > commands are translated into PCIDevice operations.
> 
> Yes, but does anything prevent us from further splitting the backend
> up into an emulation part and a host-side part?

See above.

Stefan
