From: Marcel Apfelbaum
Subject: Re: [Qemu-devel] [RFC v2 0/2] Add live migration support in the PVRDMA device
Date: Mon, 8 Jul 2019 21:58:36 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.6.1



On 7/8/19 12:38 PM, Daniel P. Berrangé wrote:
On Sat, Jul 06, 2019 at 10:04:55PM +0300, Marcel Apfelbaum wrote:
Hi Sukrit,

On 7/6/19 7:09 AM, Sukrit Bhatnagar wrote:
Changes in v2:

* Modify load_dsr() so that the dsr mapping is not performed if the dsr
    value is already non-NULL. Also move free_dsr() out of load_dsr() and
    call it right before load_dsr() where needed. These two changes allow
    us to call load_dsr() even when we have already done the dsr mapping
    and would like to go on with the rest of the mappings.

* Use VMStateDescription instead of SaveVMHandlers to describe the
    migration state. Also add fields for the parent PCI object and MSIX.

* Use a temporary structure (struct PVRDMAMigTmp) to hold some fields
    during migration. These fields, such as cmd_slot_dma and resp_slot_dma
    inside the dsr, do not fit into plain VMSTATE macros because their
    container (dsr_info->dsr) is not ready until it is mapped on the dest
    (see the rough sketch after this list).

* Perform the mappings for the CQ and event notification rings after the
    state is loaded. This extends the mappings performed in v1, following
    the flow of load_dsr(). All the mappings are now successfully done on
    the dest at state load.
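
For reference, a rough sketch of how the VMStateDescription plus the
temporary struct could be wired up. The names load_dsr(), dsr_info,
cmd_slot_dma, resp_slot_dma and struct PVRDMAMigTmp come from the
description above; VMSTATE_WITH_TMP, VMSTATE_PCI_DEVICE and VMSTATE_MSIX
are the stock QEMU macros. The exact fields and error handling here are
guesses, not the actual patch:

/* Rough sketch only -- not the actual patch. */

/* Staging area for fields that live inside the dsr: the dsr itself is
 * not mapped on the dest until state load, so these cannot be described
 * with plain VMSTATE macros on PVRDMADev directly. VMSTATE_WITH_TMP
 * fills in ->parent (which must be the first member) for us. */
struct PVRDMAMigTmp {
    PVRDMADev *parent;
    uint64_t cmd_slot_dma;
    uint64_t resp_slot_dma;
};

static int pvrdma_mig_tmp_pre_save(void *opaque)
{
    struct PVRDMAMigTmp *tmp = opaque;
    struct pvrdma_device_shared_region *dsr = tmp->parent->dsr_info.dsr;

    /* On the source the dsr is mapped, so just stage the two values. */
    tmp->cmd_slot_dma = dsr->cmd_slot_dma;
    tmp->resp_slot_dma = dsr->resp_slot_dma;
    return 0;
}

static int pvrdma_mig_tmp_post_load(void *opaque, int version_id)
{
    struct PVRDMAMigTmp *tmp = opaque;
    PVRDMADev *dev = tmp->parent;
    int rc;

    /* On the dest the dsr is still NULL, so load_dsr() performs the
     * mapping here (and, per the first change above, skips it when the
     * dsr is already non-NULL); it then continues with the CQ and event
     * notification ring mappings. */
    rc = load_dsr(dev);
    if (rc) {
        return rc;
    }

    dev->dsr_info.dsr->cmd_slot_dma = tmp->cmd_slot_dma;
    dev->dsr_info.dsr->resp_slot_dma = tmp->resp_slot_dma;
    return 0;
}

static const VMStateDescription vmstate_pvrdma_mig_tmp = {
    .name = "pvrdma-mig-tmp",
    .pre_save = pvrdma_mig_tmp_pre_save,
    .post_load = pvrdma_mig_tmp_post_load,
    .fields = (VMStateField[]) {
        VMSTATE_UINT64(cmd_slot_dma, struct PVRDMAMigTmp),
        VMSTATE_UINT64(resp_slot_dma, struct PVRDMAMigTmp),
        VMSTATE_END_OF_LIST()
    }
};

static const VMStateDescription vmstate_pvrdma = {
    .name = "pvrdma",
    .version_id = 1,
    .minimum_version_id = 1,
    .fields = (VMStateField[]) {
        /* Parent PCI object and MSIX state, as mentioned above. */
        VMSTATE_PCI_DEVICE(parent_obj, PVRDMADev),
        VMSTATE_MSIX(parent_obj, PVRDMADev),
        /* The dsr DMA address must be loaded before the tmp section,
         * because load_dsr() needs it to map the dsr on the dest. */
        VMSTATE_UINT64(dsr_info.dma, PVRDMADev),
        VMSTATE_WITH_TMP(PVRDMADev, struct PVRDMAMigTmp,
                         vmstate_pvrdma_mig_tmp),
        VMSTATE_END_OF_LIST()
    }
};

One nice property of VMSTATE_WITH_TMP is that it fills in tmp->parent
automatically, so pre_save/post_load can reach the device state without
any globals.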
Nice!

Link(s) to v1:
https://lists.gnu.org/archive/html/qemu-devel/2019-06/msg04924.html
https://lists.gnu.org/archive/html/qemu-devel/2019-06/msg04923.html


Things working now (were not working at the time of v1):

* vmxnet3 is migrating successfully. The issue was in the migration of
    its PCI configuration space, and is solved by the patch Marcel had sent:
    https://lists.gnu.org/archive/html/qemu-devel/2019-07/msg01500.html

* There is no longer any problem due to the BounceBuffers which were
    failing the DMA mapping calls in the state-load logic earlier. I am not
    sure exactly why it went away; my guess is that adding the PCI and MSIX
    state to the migration solved the issue.

I am sure it was connected somehow. Anyway, I am glad we can continue
with the project.

What is still needed:

* A workaround to get libvirt to support same-host migration. Since
    the problems faced in v1 (mentioned above) are out of the way, we
    can move further, and to do so we will need this.
[Adding Daniel and Michal]
Is there any way to test live migration of libvirt domains on the same host?
Even a 'hack' would be enough.
Create two VMs on your host and run the migration inside those. Or create
two containers if you want a lighter-weight solution. You must have two
completely independent libvirtd instances, sharing nothing, except
optionally where you store disk images.
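
For illustration, a rough sketch of driving such a migration through the
libvirt C API. The URIs and the domain name are made-up placeholders for
the two independent libvirtd instances:

#include <stdio.h>
#include <libvirt/libvirt.h>

int main(void)
{
    /* Hypothetical URIs: two independent libvirtd instances, e.g. one
     * per container/VM on the same physical host. */
    virConnectPtr src = virConnectOpen("qemu+ssh://192.168.122.10/system");
    virConnectPtr dst = virConnectOpen("qemu+ssh://192.168.122.11/system");
    if (!src || !dst) {
        fprintf(stderr, "failed to connect to libvirtd\n");
        return 1;
    }

    /* "pvrdma-guest" is a placeholder domain name. */
    virDomainPtr dom = virDomainLookupByName(src, "pvrdma-guest");
    if (!dom) {
        fprintf(stderr, "domain not found\n");
        return 1;
    }

    /* Live-migrate the domain to the destination libvirtd. */
    virDomainPtr migrated = virDomainMigrate(dom, dst, VIR_MIGRATE_LIVE,
                                             NULL, NULL, 0);
    if (!migrated) {
        fprintf(stderr, "migration failed\n");
        return 1;
    }

    virDomainFree(migrated);
    virDomainFree(dom);
    virConnectClose(dst);
    virConnectClose(src);
    return 0;
}

(Build with gcc migrate.c -lvirt. From the shell, virsh migrate --live
<domain> qemu+ssh://<other-instance>/system does the same.)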

We'll work with a live CD, so no storage is needed.

Thank you for the help!
Marcel

Regards,
Daniel



