qemu-devel

Re: [PATCH] migration: support file: uri for source migration


From: Nikolay Borisov
Subject: Re: [PATCH] migration: support file: uri for source migration
Date: Mon, 12 Sep 2022 19:30:50 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Thunderbird/91.11.0



On 12.09.22 at 18:41, Daniel P. Berrangé wrote:
On Thu, Sep 08, 2022 at 01:26:32PM +0300, Nikolay Borisov wrote:
This is a prototype of supporting a 'file:' based uri protocol for
writing out the migration stream of qemu. Currently the code always
opens the file in DIO mode and adheres to an alignment of 64k to be
generic enough. However this comes with a problem - it requires copying
all data that we are writing (qemu metadata + guest ram pages) to a
bounce buffer so that we adhere to this alignment.
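
For readers following along, here is a rough sketch of what that approach amounts to (illustrative names only, not the patch's actual code): the file is opened with O_DIRECT and every write is staged through a 64KiB-aligned bounce buffer, since O_DIRECT requires the buffer address, file offset and length to all be suitably aligned.

/* Sketch only: open with O_DIRECT and stage writes through a
 * 64KiB-aligned bounce buffer. Names here are illustrative. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define MIG_DIO_ALIGN ((size_t)64 * 1024)

static int open_migration_file(const char *path)
{
    return open(path, O_WRONLY | O_CREAT | O_TRUNC | O_DIRECT, 0600);
}

static ssize_t bounce_write(int fd, const void *data, size_t len)
{
    void *bounce = NULL;
    size_t padded = (len + MIG_DIO_ALIGN - 1) & ~(MIG_DIO_ALIGN - 1);

    if (posix_memalign(&bounce, MIG_DIO_ALIGN, padded)) {
        return -1;
    }
    memset(bounce, 0, padded);
    memcpy(bounce, data, len);   /* the copy the cover letter laments */
    ssize_t ret = write(fd, bounce, padded);
    free(bounce);
    return ret;
}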

The ad-hoc device metadata clearly needs bounce buffers, since it
is splattered all over RAM with no concern for alignment. The use
of bounce buffers for this shouldn't be a performance issue though,
as the metadata is small relative to the size of the snapshot as a whole.

Bounce buffers can be eliminated altogether so long as we simply switch between buffered and DIO mode via fcntl().
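
A minimal sketch of that idea, assuming a plain POSIX fd (the helper name is made up, not an existing QEMU API): fcntl(F_SETFL) is allowed to change O_DIRECT on Linux, so unaligned metadata can go through the page cache while bulk RAM writes stay direct.

/* Sketch only: flip O_DIRECT on an already-open fd. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdbool.h>

static int set_dio(int fd, bool enable)   /* hypothetical helper */
{
    int flags = fcntl(fd, F_GETFL);
    if (flags < 0) {
        return -1;
    }
    flags = enable ? (flags | O_DIRECT) : (flags & ~O_DIRECT);
    return fcntl(fd, F_SETFL, flags);
}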


The guest RAM pages should not need bounce buffers at all when using
huge pages, as the alignment will already be way larger than required.
Guests with huge pages are the ones which are likely to have huge
RAM sizes and thus need the DIO mode, so we should be sorted for that.

When using small pages for guest RAM, if it is not already allocated
with suitable alignment, I feel like we should be able to make it
so that we allocate the RAM block with good alignment to avoid the
need for bounce buffers. This would address the less common case of
a guest with huge RAM size but not huge pages.
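
As an illustration of the kind of allocation meant here (not QEMU's actual helper, which has its own mmap-based allocators), one can over-map an anonymous region and trim the misaligned head and tail to guarantee a 64KiB-aligned start address:

/* Illustration only: 64KiB-aligned anonymous mapping via over-map + trim.
 * Assumes size is a multiple of the host page size. */
#define _GNU_SOURCE
#include <stdint.h>
#include <sys/mman.h>

#define RAM_ALIGN ((size_t)64 * 1024)

static void *alloc_aligned_ram(size_t size)
{
    size_t total = size + RAM_ALIGN;
    uint8_t *raw = mmap(NULL, total, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (raw == MAP_FAILED) {
        return NULL;
    }
    uint8_t *aligned = (uint8_t *)(((uintptr_t)raw + RAM_ALIGN - 1) &
                                   ~(uintptr_t)(RAM_ALIGN - 1));
    if (aligned != raw) {
        munmap(raw, aligned - raw);              /* drop misaligned head */
    }
    size_t tail = (raw + total) - (aligned + size);
    if (tail) {
        munmap(aligned + size, tail);            /* drop unused tail */
    }
    return aligned;
}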

RAM blocks are generally allocated with good alignment, because they are mmap()ed. However, as I was toying with eliminating bounce buffers for RAM, I hit an issue: the page headers being written (8 bytes each) are, naturally, not aligned. IMO the on-disk format can be changed the following way:


<ramblock header, containing the base address of the ramblock>; each subsequent page is then written at an offset from the base address of the ramblock, i.e. its index would be:

page_offset = page_addr - ramblock_base

The page is then written at ramblock_base (in the file) + page_offset. This would eliminate the page headers altogether, leaving only the initial ramblock header to align. However, it could mean having to issue one lseek per page written, to adjust the file position, which might not be a problem in itself but who knows. How does that sound to you?
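
For what it's worth, a sketch of the write path under that layout, assuming a hypothetical per-ramblock file offset recorded when the block header is emitted; pwrite() takes the target offset explicitly, so no separate lseek() per page would be needed:

/* Sketch only: write one page at its fixed location in the file.
 * block_file_offset is where this ramblock's data starts on disk
 * (a field assumed here, not part of the current stream format). */
#include <stdint.h>
#include <sys/types.h>
#include <unistd.h>

static ssize_t write_page_fixed(int fd, const void *host_page,
                                size_t page_size,
                                uint64_t page_addr, uint64_t ramblock_base,
                                off_t block_file_offset)
{
    off_t page_offset = page_addr - ramblock_base;
    return pwrite(fd, host_page, page_size, block_file_offset + page_offset);
}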


Thus if we assume guest RAM is suitably aligned, then we can avoid
bounce buffers for RAM pages, while still using bounce buffers for
the metadata.

With this code I get the following performance results:

           DIO     exec: cat > file     virsh --bypass-cache
           82            77                     81
           82            78                     80
           80            80                     82
           82            82                     77
           77            79                     77

AVG:       80.6          79.2                   79.4
stddev:    1.959         1.720                  2.05

All numbers are in seconds.

Those results are somewhat surprising to me, as I'd expected that doing the
writeout directly within qemu, avoiding the copying between qemu and
virsh's iohelper process, would result in a speed up. Clearly that's not
the case; I attribute this to the fact that all memory pages have to be
copied into the bounce buffer. There is more measurement/profiling
work that I'd have to do in order to (dis)prove this hypothesis, and I will
report back when I have the data.

When using the libvirt iohelper we have multiple CPUs involved. IOW the
bounce buffer copy is taking place on a separate CPU from the QEMU
migration loop. This ability to use multiple CPUs may well have balanced
out any benefit from doing DIO on the QEMU side.

If you eliminate bounce buffers for guest RAM and write it directly to
the fixed location on disk, then we should see the benefit - and if not
then something is really wrong in our thoughts.

However, I'm sending the code now as I'd like to facilitate a discussion
as to whether this approach would be acceptable for merging upstream.
Any ideas/comments would be much appreciated.

AFAICT this impl is still using the existing on-disk format, where RAM
pages are just written inline to the stream. For DIO benefit to be
maximised we need the on-disk format to be changed, so that the guest
RAM regions can be directly associated with fixed locations on disk.
This also means that if the guest dirties RAM while it is being saved, then
we overwrite the existing content on disk, such that restore only ever
needs to restore each RAM page once, instead of restoring every
dirtied version.
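
To illustrate the restore side of such a format (a sketch under the same assumed per-block file offset, not existing QEMU code): each page is read exactly once from its known location, however many times it was rewritten during save.

/* Sketch only: load one page straight into guest RAM from its fixed
 * location; rewrites during save simply overwrote this same offset. */
#include <sys/types.h>
#include <unistd.h>

static ssize_t read_page_fixed(int fd, void *host_page, size_t page_size,
                               off_t block_file_offset, off_t page_offset)
{
    return pread(fd, host_page, page_size, block_file_offset + page_offset);
}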


With regards,
Daniel


