From: Igor Mammedov
Subject: Re: [Qemu-devel] [PATCH for-4.2 v3 0/2] s390: stop abusing memory_region_allocate_system_memory()
Date: Mon, 5 Aug 2019 10:54:40 +0200

On Fri, 2 Aug 2019 17:04:21 +0200
Christian Borntraeger <address@hidden> wrote:

> On 02.08.19 16:59, Christian Borntraeger wrote:
> > 
> > 
> > On 02.08.19 16:42, Christian Borntraeger wrote:  
> >> On 02.08.19 15:32, Igor Mammedov wrote:  
> >>> Changelog:
> >>>   since v2:
> >>>     - break migration from old QEMU (since 2.12-4.1) for guests with >8TB RAM
> >>>       and drop the migratable aliases patch, as agreed during v2 review
> >>>     - drop 4.2 machines patch as it's not prerequisite anymore
> >>>   since v1:
> >>>     - include 4.2 machines patch for adding compat RAM layout on top
> >>>     - 2/4 add missing in v1 patch for splitting too big MemorySection on
> >>>           several memslots
> >>>     - 3/4 amend code path on alias destruction to ensure that RAMBlock is
> >>>           cleaned properly
> >>>     - 4/4 add compat machine code to keep old layout (migration-wise) for
> >>>           4.1 and older machines 
> >>>
> >>>
> >>> While looking into unifying guest RAM allocation to use hostmem backends
> >>> for initial RAM (especially when -mempath is used) and retiring the
> >>> memory_region_allocate_system_memory() API, leaving only a single hostmem
> >>> backend, I was inspecting how it is currently used by boards, and it turns
> >>> out several boards abuse it by calling the function several times (despite
> >>> the documented contract forbidding it).
> >>>
> >>> s390 is one such board, where a KVM limitation on memslot size got
> >>> propagated into the board design, and memory_region_allocate_system_memory()
> >>> was abused to satisfy the KVM requirement for the maximum RAM chunk size,
> >>> where a memory region alias would suffice.
> >>>
> >>> Unfortunately, the memory_region_allocate_system_memory() usage created a
> >>> migration dependency where guest RAM is transferred in the migration stream
> >>> as several RAMBlocks if it is larger than KVM_SLOT_MAX_BYTES. During v2
> >>> review it was agreed to ignore the migration breakage (documenting it in
> >>> the release notes) and keep only the KVM fix.
> >>>
> >>> In order to replace these several RAM chunks with a single memdev and keep
> >>> it working with the KVM memslot size limit, the following was done:
> >>>    * [1/2] split a too-big RAM chunk into several memory slots inside the
> >>>            KVM code, if necessary
> >>>    * [2/2] drop the manual RAM splitting in the s390 code
> >>>
> >>>
> >>> CC: address@hidden
> >>> CC: address@hidden
> >>> CC: address@hidden
> >>> CC: address@hidden
> >>> CC: address@hidden
> >>> CC: address@hidden  
> >>
> >> With the fixup this patch set seems to work on s390. I can start 9TB guests,
> >> and I can migrate smaller guests between 4.1+patch and 4.0 and 3.1. I
> >> currently cannot test migration of the 9TB guest due to lack of a 2nd system.
> > 
> > I have to correct myself. The 9TB guest started up, but it does not seem to
> > do anything useful (it hangs).
> 
> Seems that the userspace addr is wrong (it's the same for both slots).
> [pid 258234] ioctl(10, KVM_SET_USER_MEMORY_REGION, {slot=0, flags=0, guest_phys_addr=0, memory_size=8796091973632, userspace_addr=0x3fff7d00000}) = 0
> [pid 258234] ioctl(10, KVM_SET_USER_MEMORY_REGION, {slot=1, flags=0, guest_phys_addr=0x7fffff00000, memory_size=1099512676352, userspace_addr=0x3fff7d00000}) = 0

It's a bug in 1/2: I forgot to advance mem->ram along with mem->start_addr.
Let me fix it and simulate it on a small s390 host (/me sorry for the messy
patches). That won't test migration properly, but it should be sufficient for
testing the KVM code patch.



