Re: [Qemu-devel] [PATCH 0/3] Add dbus-vmstate


From: Daniel P. Berrangé
Subject: Re: [Qemu-devel] [PATCH 0/3] Add dbus-vmstate
Date: Wed, 10 Jul 2019 10:10:13 +0100
User-agent: Mutt/1.12.0 (2019-05-25)

On Tue, Jul 09, 2019 at 02:47:32PM +0400, Marc-André Lureau wrote:
> Hi
> 
> On Tue, Jul 9, 2019 at 1:02 PM Daniel P. Berrangé <address@hidden> wrote:
> >
> > On Tue, Jul 09, 2019 at 12:26:38PM +0400, Marc-André Lureau wrote:
> > > Hi
> > >
> > > On Mon, Jul 8, 2019 at 8:04 PM Daniel P. Berrangé <address@hidden> wrote:
> > > > QEMU already has a direct UNIX socket connection to the helper
> > > > processes in question. I'd much rather we just had another direct
> > > > UNIX socket connection to that helper, using D-Bus peer-to-peer.
> > > > The benefit of debugging doesn't feel compelling enough to justify
> > > > running an extra daemon for each VM.
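
A minimal sketch of that peer-to-peer mode, using GLib's GDBus over a
direct UNIX socket; the socket path and the program around it are
hypothetical placeholders, not anything from this series:

    #include <gio/gio.h>

    int main(void)
    {
        GError *err = NULL;
        /* Peer-to-peer: connect straight to the helper's UNIX socket.
         * G_DBUS_CONNECTION_FLAGS_MESSAGE_BUS_CONNECTION is deliberately
         * not set, so no dbus-daemon and no Hello() handshake are
         * involved -- the two processes speak D-Bus directly. */
        GDBusConnection *conn = g_dbus_connection_new_for_address_sync(
            "unix:path=/tmp/helper.sock",   /* hypothetical path */
            G_DBUS_CONNECTION_FLAGS_AUTHENTICATION_CLIENT,
            NULL,   /* no GDBusAuthObserver */
            NULL,   /* no GCancellable */
            &err);
        if (!conn) {
            g_printerr("p2p connect failed: %s\n", err->message);
            g_error_free(err);
            return 1;
        }
        /* Method calls and signals now flow directly between the peers. */
        g_object_unref(conn);
        return 0;
    }
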
> > >
> > > I wouldn't minimize the need for easier debugging. Debugging multiple
> > > processes talking to each other is really hard. Having a bus is
> > > awesome (if not required) in this case.
> > >
> > > There are other advantages of using a bus, those come to my mind:
> > >
> > > - fewer connections (bus topology)
> >
> > That applies to general use of DBus, but doesn't really apply to
> > the proposed QEMU usage, as every single helper is talking to the
> > same QEMU endpoint. So if we have 10 helpers, in p2p mode, we
> > get 10 sockets open between the helper & QEMU. In bus mode, we
> > get 10 sockets open between the helper & dbus and another socket
> > open between dbus & QEMU. The bus is only a win in connections
> > if you have a mesh-like connection topology, not hub & spoke.
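
Making the arithmetic explicit: with n helpers in a hub-and-spoke
layout, p2p needs n sockets while a bus needs n + 1 (n helper
connections plus QEMU's connection to the daemon); only in a full mesh
of n endpoints, where p2p needs n(n-1)/2 sockets against the bus's n,
does the bus reduce the connection count.
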
> 
> The mesh already exists, as it's not just QEMU that wants to talk to
> the helpers, but also the management layer and 3rd parties (debug
> tools, audit, other management tools etc). There are also cases where
> helpers may want to talk to each other. Taking networking as an
> example, two slirp interfaces may want to share the same DHCP,
> bootp/TFTP, filter/service provider. Redirection/forwarding may be
> provided on demand (chardev-like services). The same is probably true
> for block layers, security, GPU/display etc. In this case, the bus
> topology makes more sense than hiding it underneath point-to-point
> connections.
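
For contrast with the p2p sketch above, a minimal sketch of a helper
joining a shared bus with GDBus; the bus address and the well-known
name are hypothetical placeholders:

    #include <gio/gio.h>

    int main(void)
    {
        GError *err = NULL;
        /* Bus mode: each process opens a single connection to a shared
         * dbus-daemon, which routes messages between QEMU, the helpers
         * and any debug/management tool that attaches later. */
        GDBusConnection *conn = g_dbus_connection_new_for_address_sync(
            "unix:path=/run/qemu/vm0-bus",  /* hypothetical bus address */
            G_DBUS_CONNECTION_FLAGS_AUTHENTICATION_CLIENT |
            G_DBUS_CONNECTION_FLAGS_MESSAGE_BUS_CONNECTION,
            NULL, NULL, &err);
        if (!conn) {
            g_printerr("bus connect failed: %s\n", err->message);
            g_error_free(err);
            return 1;
        }
        /* Claim a well-known name so other peers on the bus can address
         * this process; ownership completes asynchronously, so a real
         * helper would run a GMainLoop after this call. */
        g_bus_own_name_on_connection(conn,
                                     "org.qemu.Helper1", /* hypothetical */
                                     G_BUS_NAME_OWNER_FLAGS_NONE,
                                     NULL, NULL, NULL, NULL);
        g_object_unref(conn);
        return 0;
    }
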

These are a lot of scenarios / use cases not described in the
cover letter for this series.

I'm reviewing this series from the POV of the need to transfer
vmstate from a helper back to QEMU, which was the scenario in
the cover letter. From this I see no need for a bus.

If you think there are more general use cases involving QEMU
backends that will need the bus, then I think the bigger picture
needs to be described when proposing the use of the bus, instead
of only describing the very simple vmstate use case as the
motivation.

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|


