Re: [Qemu-devel] Runtime-modified DIMMs and live migration issue
From: Andrey Korolyov
Subject: Re: [Qemu-devel] Runtime-modified DIMMs and live migration issue
Date: Fri, 19 Jun 2015 23:02:01 +0300
On Fri, Jun 19, 2015 at 7:57 PM, Andrey Korolyov <address@hidden> wrote:
>> I don't think that it could be ACPI-related in any way; instead, it
>> looks like a race in vhost or a similar mm-touching mechanism. The
>> repeated hits you mentioned should indeed be fixed as well, but they
>> can hardly be the reason for this problem.
>
> Please find attached a trace from a single DIMM plug-in. The
> configuration is -m 512 plus three 512 MB DIMMs at start, then a
> 512 MB DIMM is plugged in; the NUMA topology is created with a
> single node0.
Tried the same thing without vhost being involved, with the same
result. What is interesting, the second and subsequent migrations
succeed regardless of whether a workload was applied during them. So
only the first migration after a DIMM hotplug may fail (if the DIMMs
are plugged in separately between migrations, only the
migration-plus-workload following the hotplug event may crash the
guest kernel).
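To spell out the failing sequence as I understand it (monitor commands; the migration target URI is a placeholder):

```shell
# On the source VM, after boot:
# (qemu) object_add memory-backend-ram,id=mem4,size=512M
# (qemu) device_add pc-dimm,id=dimm4,memdev=mem4,node=0
#
# Start a memory-touching workload in the guest, then:
# (qemu) migrate -d tcp:destination-host:4444
#
# This first migration after the hotplug may crash the guest kernel;
# repeating the migration on the same VM afterwards completes fine,
# with or without the workload.
```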