Re: [RFC] dbus-vmstate: Connect to the dbus only during the migration phase


From: Daniel P. Berrangé
Subject: Re: [RFC] dbus-vmstate: Connect to the dbus only during the migration phase
Date: Wed, 2 Dec 2020 17:23:29 +0000
User-agent: Mutt/1.14.6 (2020-07-11)

On Wed, Dec 02, 2020 at 10:33:01PM +0530, priyankar jain wrote:
> On 02/12/20 9:46 pm, Daniel P. Berrangé wrote:
> > On Wed, Dec 02, 2020 at 09:25:27PM +0530, priyankar jain wrote:
> > > On 20/11/20 12:17 am, Daniel P. Berrangé wrote:
> > > > On Thu, Nov 19, 2020 at 06:28:55PM +0000, Priyankar Jain wrote:
> > > > > Today, dbus-vmstate maintains a constant connection to the dbus. This
> > > > > is problematic for a number of reasons:
> > > > > 1. If dbus-vmstate is attached during power-on, then the device holds
> > > > >    the unused connection for a long period of time until migration
> > > > >    is triggered, thus unnecessarily occupying dbus.
> > > > > 2. Similarly, if the dbus is restarted in the time period between VM
> > > > >    power-on (dbus-vmstate initialisation) and migration, then the
> > > > >    migration will fail. The only way to recover would be by
> > > > >    re-initialising the dbus-vmstate object.
> > > > > 3. If dbus is not available during VM power-on, then currently
> > > > >    dbus-vmstate initialisation fails, causing power-on to fail.
> > > > > 4. For a system with a large number of VMs, having multiple QEMUs
> > > > >    connected to the same dbus can lead to a DoS for new connections.
> > > > 
> > > > The expectation is that there is a *separate* dbus daemon created for
> > > > each QEMU instance. There should never be multiple QEMUs connected to
> > > > the same dbus instance, nor should it ever connect to the common dbus
> > > > instances provided by most Linux distros.
> > > > 
> > > > None of these 4 issues should apply when each QEMU has its own dedicated
> > > > dbus instance AFAICT.
> > > > 
> > > > 
> > > > Regards,
> > > > Daniel
> > > > 
> > > 
> > > How does having a separate dbus daemon resolve issue (2)? If any daemon
> > > restarts between VM power-on and migration, the dbus-vmstate object for
> > > that VM would have to be reinitialized, no?
> > 
> > The private dbus daemon for QEMU is expected to exist for the lifetime of
> > that QEMU process.
> 
> Totally agree on the expectation. But any external stimulus (maybe
> unintended) can easily break this condition, and that would indeed result
> in a situation where the VM is basically non-migratable until the VM is
> power-cycled or the dbus-vmstate object is removed by manual intervention.



> Secondly, having the dbus-vmstate backend connect to dbus at migration time
> reduces the complexity for any management plane to recover from these
> failure situations: it can monitor dbus and restart it with the same params
> if dbus gets killed, without affecting QEMU.

The vmstate support is just one use case for the dbus daemon.

The intention when specifying dbus was that this serves as a general-purpose
framework on which other functionality will be built for mgmt of helper
processes associated with a QEMU VM. So ultimately it is thought that the
dbus service will be relevant through the lifetime of the VM.

Obviously it doesn't appear that way right now, but I'm wary of optimizing
to dynamically connect/disconnect around migration since we're expecting it
to be in use at arbitrary times for other things long term.
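
For reference, the one-private-bus-per-VM arrangement described above looks
roughly like this; the socket path and VM name below are only illustrative,
and a real deployment would more likely use a dedicated config file with a
locked-down policy instead of the stock session config:

  # One private dbus-daemon per VM; never the shared system/session bus.
  dbus-daemon --session --fork --print-address \
      --address=unix:path=/tmp/dbus-vmstate-vm1.sock

  # Point this VM's dbus-vmstate object at that private bus.
  qemu-system-x86_64 ... \
      -object dbus-vmstate,id=dbus-vmstate0,addr=unix:path=/tmp/dbus-vmstate-vm1.sock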

> 
> > > Secondly, on a setup with a large number of VMs, having separate
> > > dbus-daemons leads to high cumulative memory usage by the dbus daemons.
> > > Is it a feasible approach to spawn a new dbus-daemon for every QEMU,
> > > given that the majority of the security aspect lies with the dbus peers,
> > > apart from the SELinux checks provided by dbus?
> > 
> > The memory usage of a dbus daemon shouldn't be that high. A large portion
> > of the memory footprint should be readonly pages shared between all dbus
> > processes. The private usage should be a function of the number of clients
> > and the message traffic. Do you have any measured figures you're concerned
> > with?
> 
> One of our setups had a long-running private dbus-daemon (nearly 4-5 days)
> on the destination hypervisor after performing migration, and it was showing
> the following memory usage (figures in kB):
> Virt  - 90980
> Rss   - 19576
> Total - 110556
>
> Extrapolating these figures to 100s of daemons results in considerable Rss
> usage. These figures were taken using the `top` Linux utility, but I had not
> considered the readonly shared pages aspect at the time of capture.

Yep, you can't just multiply 'Rss' by the number of instances, as that'll
way over-estimate the overhead. You need to look at the private Rss only.
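
The private portion can be read from /proc/<pid>/smaps_rollup; a rough
sketch (assuming a kernel new enough to expose smaps_rollup and that the
daemons show up as plain "dbus-daemon" processes):

  # Sum the private (non-shared) RSS of every dbus-daemon instance, in kB.
  total=0
  for pid in $(pgrep -x dbus-daemon); do
      kb=$(awk '/^Private_(Clean|Dirty):/ { s += $2 } END { print s+0 }' \
           "/proc/$pid/smaps_rollup")
      echo "pid $pid: ${kb} kB private"
      total=$((total + kb))
  done
  echo "total private: ${total} kB"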

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|



