From: Dr. David Alan Gilbert
Subject: Re: [PATCH v2 2/2] migration: savevm_state_handler_insert: constant-time element insertion
Date: Mon, 21 Oct 2019 09:14:44 +0100
User-agent: Mutt/1.12.1 (2019-06-15)

* David Gibson (address@hidden) wrote:
> On Fri, Oct 18, 2019 at 10:43:52AM +0100, Dr. David Alan Gilbert wrote:
> > * Laurent Vivier (address@hidden) wrote:
> > > On 18/10/2019 10:16, Dr. David Alan Gilbert wrote:
> > > > * Scott Cheloha (address@hidden) wrote:
> > > >> savevm_state's SaveStateEntry TAILQ is a priority queue.  Priority
> > > >> sorting is maintained by searching from head to tail for a suitable
> > > >> insertion spot.  Insertion is thus an O(n) operation.
> > > >>
> > > >> If we instead keep track of the head of each priority's subqueue
> > > >> within that larger queue we can reduce this operation to O(1) time.
> > > >>
> > > >> savevm_state_handler_remove() becomes slightly more complex to
> > > >> accommodate these gains: we need to replace the head of a priority's
> > > >> subqueue when removing it.
> > > >>
> > > >> With O(1) insertion, booting VMs with many SaveStateEntry objects is
> > > >> more plausible.  For example, a ppc64 VM with maxmem=8T has 40000 such
> > > >> objects to insert.
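
For illustration only, here is a minimal, self-contained sketch of the subqueue-head idea described above.  It is not the code from the patch, and the names (Entry, PRI_MAX, pri_head, insert_entry) are invented for the example: an array remembers the first element of each priority class, so insertion only has to find the head of the next lower non-empty class and link in before it.

/* Sketch only, not the QEMU patch: O(1) priority insertion into a TAILQ
 * by remembering the first element of each priority class. */
#include <assert.h>
#include <stdio.h>
#include <sys/queue.h>

#define PRI_MAX 4                         /* highest priority class */

typedef struct Entry {
    int priority;                         /* higher value sorts nearer the head */
    TAILQ_ENTRY(Entry) link;
} Entry;

static TAILQ_HEAD(EntryList, Entry) handlers = TAILQ_HEAD_INITIALIZER(handlers);
static Entry *pri_head[PRI_MAX + 1];      /* first entry of each priority class */

static void insert_entry(Entry *nse)
{
    Entry *se = NULL;
    int i;

    assert(nse->priority >= 0 && nse->priority <= PRI_MAX);

    /* Find the head of the next lower, non-empty priority class. */
    for (i = nse->priority - 1; i >= 0; i--) {
        se = pri_head[i];
        if (se) {
            break;
        }
    }

    /* Link in just before it, or at the tail if nothing lower exists. */
    if (se) {
        TAILQ_INSERT_BEFORE(se, nse, link);
    } else {
        TAILQ_INSERT_TAIL(&handlers, nse, link);
    }

    /* Record the new entry as class head if its class was empty. */
    if (!pri_head[nse->priority]) {
        pri_head[nse->priority] = nse;
    }
}

int main(void)
{
    Entry a = { .priority = 2 }, b = { .priority = 0 }, c = { .priority = 2 };
    Entry *e;

    insert_entry(&a);                     /* list: a                          */
    insert_entry(&b);                     /* list: a, b                       */
    insert_entry(&c);                     /* list: a, c, b - after its own    */
                                          /* class, before lower classes      */
    TAILQ_FOREACH(e, &handlers, link) {
        printf("%d\n", e->priority);      /* prints 2, 2, 0 */
    }
    return 0;
}

Removal then has the matching cost the commit message alludes to: when a class head itself is taken out, pri_head[] must be repointed to its successor (or cleared if the class becomes empty).
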
> > > > 
> > > > Separate from reviewing this patch, I'd like to understand why you've
> > > > got 40000 objects.  This feels very very wrong and is likely to cause
> > > > problems to random other bits of qemu as well.
> > > 
> > > I think the 40000 objects are the "dr-connectors" that are used to plug
> > > peripherals (memory, pci card, cpus, ...).
> > 
> > Yes, Scott confirmed that in the reply to the previous version.
> > IMHO nothing in qemu is designed to deal with that many devices/objects
> > - I'm sure that something other than the migration code is going to
> > get upset.
> 
> It kind of did.  Particularly when there was n^2 and n^3 behaviour in
> the property stuff, we had some ludicrously long startup times (hours)
> with large maxmem values.
> 
> Fwiw, the DRCs for PCI slots, CPUs and PHBs aren't really a problem.
> The problem is the memory DRCs: there's one for each LMB - each 256MiB
> chunk of memory (or possible memory).
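
As a back-of-the-envelope check (my arithmetic, not a figure from the thread): 8 TiB of maxmem at one DRC per 256 MiB LMB gives 32768 memory DRCs, so memory DRCs alone account for the bulk of the roughly 40000 SaveStateEntry objects the commit message cites.

/* Toy check of the LMB count for an 8 TiB maxmem guest. */
#include <stdio.h>

int main(void)
{
    unsigned long long maxmem = 8ULL << 40;       /* 8 TiB               */
    unsigned long long lmb    = 256ULL << 20;     /* one DRC per 256 MiB */

    printf("memory DRCs: %llu\n", maxmem / lmb);  /* prints 32768 */
    return 0;
}
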
> 
> > Is perhaps the structure wrong somewhere - should there be a single DRC
> > device that knows about all DRCs?
> 
> Maybe.  The tricky bit is how to get there from here without breaking
> migration or something else along the way.

Switch on the next machine type version - it doesn't matter if migration
is incompatible then.

Without knowing anything about the innards of DRCs, I suggest a 
DRCMulti that takes a parameter and represents 'n' DRCs at consecutive
chunks of memory.  Then use one DRCMulti for each RAMBlock or DIMM or
other conveniently sized thing.
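
Purely as a thought experiment on the DRCMulti idea above (nothing here reflects the real spapr DRC code, and every name - DRCMulti, drc_multi_new, LMB_SIZE - is hypothetical): keep the per-LMB state in arrays inside one object, so the number of migratable objects scales with the number of DIMM-sized regions rather than with the number of 256 MiB chunks.

/* Hypothetical sketch only - not the spapr DRC implementation. */
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

#define LMB_SIZE (256ULL << 20)          /* one logical memory block (LMB)    */

typedef struct DRCMulti {
    uint32_t first_drc_index;            /* index of the first DRC covered    */
    uint32_t nr_drcs;                    /* how many consecutive DRCs         */
    bool    *attached;                   /* nr_drcs flags, saved as one blob  */
    uint8_t *isolation_state;            /* nr_drcs states, saved as one blob */
} DRCMulti;

/* One DRCMulti per DIMM-sized region instead of one object per LMB. */
DRCMulti *drc_multi_new(uint32_t first_index, uint64_t region_size)
{
    DRCMulti *m = calloc(1, sizeof(*m));

    m->first_drc_index = first_index;
    m->nr_drcs = region_size / LMB_SIZE;
    m->attached = calloc(m->nr_drcs, sizeof(*m->attached));
    m->isolation_state = calloc(m->nr_drcs, sizeof(*m->isolation_state));
    return m;
}

The point being that the per-LMB state would migrate as arrays under a single SaveStateEntry, which is what takes the pressure off the handler queue.
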

Dave

> -- 
> David Gibson                   | I'll have my music baroque, and my code
> david AT gibson.dropbear.id.au | minimalist, thank you.  NOT _the_ _other_
>                                | _way_ _around_!
> http://www.ozlabs.org/~dgibson


--
Dr. David Alan Gilbert / address@hidden / Manchester, UK



