From: David Hildenbrand
Subject: Re: [PATCH v1 2/2] migration: Do not perform RAMBlock dirty sync during the first iteration
Date: Thu, 14 Nov 2024 22:16:41 +0100
User-agent: Mozilla Thunderbird

On 14.11.24 20:28, Peter Xu wrote:
> On Thu, Nov 14, 2024 at 10:02:37AM +0100, David Hildenbrand wrote:
>> On 13.11.24 21:12, Peter Xu wrote:
>>> On Wed, Nov 13, 2024 at 07:49:44PM +0100, David Hildenbrand wrote:
>>>> I think I had precisely that, and I recall you suggested to have it
>>>> only after the initial sync. That would work for me, but I'd still
>>>> like to understand why essentially none of the "discard" was
>>>> effective -- all of guest RAM got touched.
>>>
>>> Yes, it'll be interesting to know..
>>>
>>> One thing I'm wildly guessing is dirty_memory_extend(): maybe after
>>> the ramblock is created, nobody clears the "1"s there for each of
>>> the clients, including DIRTY_MEMORY_MIGRATION. Then they'll be
>>> synced into the ramblock bmap only in the initial sync, once per
>>> QEMU lifecycle.
>>
>> In ram_block_add() we do the
>>
>> cpu_physical_memory_set_dirty_range(new_block->offset,
>>                                     new_block->used_length,
>>                                     DIRTY_CLIENTS_ALL);
>>
>> and ramblock_dirty_bitmap_clear_discarded_pages() -> ... ->
>> migration_clear_memory_region_dirty_bitmap_range() ->
>> migration_clear_memory_region_dirty_bitmap()
>> won't end up clearing the bits in the dirty bitmap.
>>
>> First I thought it was because of:
>>
>> if (!rb->clear_bmap || !clear_bmap_test_and_clear(rb, page)) {
>>     return;
>> }
>>
>> But then I realized that even memory_region_clear_dirty_bitmap() will
>> not clear the ramblock_dirty_bitmap_ bits! It's only concerned with
>> listener->log_clear() calls.
>>
>> Looking at the DIRTY_MEMORY_BLOCK_SIZE users, only
>> cpu_physical_memory_sync_dirty_bitmap() and
>> cpu_physical_memory_clear_dirty_range() clear them, whereby the latter
>> is only used when resizing RAMBlocks.
>>
>> At first I wondered whether ramblock_dirty_bitmap_clear_discarded_pages()
>> should also call cpu_physical_memory_clear_dirty_range(), but then I am
>> not so sure that is really the right approach.

> That sounds actually reasonable to me so far.. What's the concern in
> your mind?

I think what I had in mind was that for the initial bitmap sync, when we
already set the bmap to all-1s, we could just clear the whole
ramblock_dirty_bitmap_ + KVM ... bitmaps.

So, instead of an "initial sync", we might just want to do an "initial
clearing" of all bitmaps.
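
Something like this (rough and untested; the helper name is made up, and
whether the KVM side can be handled the same way is exactly the open
question further below):

static void migration_bitmap_init_clear(void)
{
    RAMBlock *rb;

    RAMBLOCK_FOREACH_NOT_IGNORED(rb) {
        /*
         * rb->bmap starts out all ones anyway, so instead of folding
         * the all-ones ram_list bitmaps into it with an initial sync,
         * wipe them: this drops the "1"s left behind for all clients
         * by ram_block_add().
         */
        cpu_physical_memory_clear_dirty_range(rb->offset, rb->used_length);
    }
    /* The KVM/listener bitmaps would need an "initial clearing" too. */
}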

>> virtio-balloon calls qemu_guest_free_page_hint(), which calls
>>
>> migration_clear_memory_region_dirty_bitmap_range()
>> bitmap_clear()
>>
>> So it would maybe have the same issue.
>
> Should virtio-balloon do the same?

virtio-balloon is more interesting, because I assume here we could run
after the "initial clearing" and would want to mark it clean everywhere.
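
Roughly like this (an untested sketch; the helper name is made up, and
the locking and migration_dirty_pages accounting of the real code are
omitted):

static void guest_page_hint_mark_clean(RAMBlock *block, ram_addr_t offset,
                                       size_t len)
{
    unsigned long start = offset >> TARGET_PAGE_BITS;
    unsigned long npages = len >> TARGET_PAGE_BITS;

    /* What qemu_guest_free_page_hint() already clears today:
     * the listener/KVM bitmaps and the migration bitmap. */
    migration_clear_memory_region_dirty_bitmap_range(block, start, npages);
    bitmap_clear(block->bmap, start, npages);

    /* The hypothetical addition: also clear the ram_list
     * (DIRTY_MEMORY_*) bits, so a later bitmap sync cannot flush
     * stale "1"s back into block->bmap. */
    cpu_physical_memory_clear_dirty_range(block->offset + offset, len);
}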


> So I suppose the idea here is that some module may want to say "we
> should ignore these pages in the dirty bitmap", and so far that's only
> about migration.
>
> Then cpu_physical_memory_clear_dirty_range() does look like the right
> thing to do, in which case the bmap in ram_list used to be
> overlooked.. it seems.
>
> But of course, cpu_physical_memory_clear_dirty_range() still doesn't
> cover the migration bitmap itself, which is ramblock->bmap. So we'll
> need to switch from migration_clear_memory_region_dirty_bitmap() to
> something like cpu_physical_memory_clear_dirty_range(), just to cover
> the ram_list bitmaps, while keeping the rb->bmap operations as before..

For virtio-balloon likely yes. Regarding virtio-mem, maybe "initial
clearing" + only modifying the rb->bmap when processing discards could
work, and would even be more efficient; see the sketch below.

(But I'm confused because we have way too many bitmaps, and maybe the
KVM one could be problematic without an initial sync ... we'd want an
initial clearing for that as well ...)
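
For the virtio-mem discard path, I'm thinking of something like this
(rough and untested; the helper name is made up, and the RAMState
dirty-page accounting is ignored):

static void ramblock_discard_clear_migration_bitmap(RAMBlock *rb,
                                                    uint64_t start,
                                                    uint64_t length)
{
    /*
     * With an "initial clearing" in place, the ram_list and the
     * KVM/listener bitmaps are assumed to already be clean for this
     * range, so only the migration bitmap needs updating; no
     * listener->log_clear() round trip required.
     */
    bitmap_clear(rb->bmap, start >> TARGET_PAGE_BITS,
                 length >> TARGET_PAGE_BITS);
}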

--
Cheers,

David / dhildenb



