From: Markus Armbruster
Subject: Re: [PATCH v4 0/7] Move memory listener register to vhost_vdpa_init
Date: Fri, 16 May 2025 08:40:37 +0200
User-agent: Gnus/5.13 (Gnus v5.13)
Jason Wang <jasowang@redhat.com> writes:
> On Thu, May 8, 2025 at 2:47 AM Jonah Palmer <jonah.palmer@oracle.com> wrote:
>>
>> Memory operations like pinning may take a lot of time at the
>> destination. Currently they are done after the source of the migration
>> is stopped and before the workload is resumed at the destination. This
>> is a period where neither traffic can flow nor the VM workload can
>> continue (downtime).
>>
>> We can do better, as we know the memory layout of the guest RAM at the
>> destination from the moment all devices are initialized. Moving that
>> operation allows QEMU to communicate the maps to the kernel while the
>> workload is still running on the source, so Linux can start mapping
>> them.
>>
>> As a small drawback, there is a window during initialization where QEMU
>> cannot respond to QMP etc. By some testing, this time is about
>> 0.2 seconds.
>
> Adding Markus to see if this is a real problem or not.
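
For concreteness, here is a minimal sketch of the idea under discussion:
registering the vhost-vdpa memory listener at init time rather than in the
device-start path, so the kernel can begin mapping and pinning guest RAM
while the source side of the migration is still running. This is not the
actual patch; the function names follow hw/virtio/vhost-vdpa.c, but the
bodies are simplified and the dev->vdev->dma_as access is an illustrative
assumption.

    /* Before: the listener is registered when the device starts.  On the
     * destination this happens only after the source has stopped, so all
     * of the map/pin time counts as downtime. */
    static int vhost_vdpa_dev_start(struct vhost_dev *dev, bool started)
    {
        struct vhost_vdpa *v = dev->opaque;

        if (started) {
            /* Pinning of guest RAM starts here, inside the downtime window. */
            memory_listener_register(&v->listener, dev->vdev->dma_as);
        } else {
            memory_listener_unregister(&v->listener);
        }
        return 0;
    }

    /* After: register the listener at init time.  The guest memory layout
     * is known once all devices are initialized, so QEMU can hand the maps
     * to the kernel while the workload still runs on the source. */
    static int vhost_vdpa_init(struct vhost_dev *dev, void *opaque, Error **errp)
    {
        struct vhost_vdpa *v = opaque;

        v->dev = dev;
        dev->opaque = opaque;
        memory_listener_register(&v->listener, dev->vdev->dma_as);
        return 0;
    }

The trade-off mentioned in the cover letter follows directly from this
move: the registration, and hence the initial round of map/pin calls, now
runs during QEMU's own initialization, which is the window where QMP is
reported to be unresponsive for about 0.2 seconds.
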
I guess the answer is "depends", and to get a more useful one, we need
more information.

When all you care about is the time from executing qemu-system-FOO to
the guest finishing booting, and the guest takes 10s to boot, then an
extra 0.2s won't matter much.

When a management application runs qemu-system-FOO several times to
probe its capabilities via QMP, then even milliseconds can hurt.

In what scenarios exactly is QMP delayed?

You told us an absolute delay you observed.  What's the relative delay,
i.e. what's the delay with and without these patches?

We need QMP to become available earlier in the startup sequence for
other reasons.  Could we bypass the delay that way?  Please understand
that this would likely be quite difficult: we know from experience that
messing with the startup sequence is prone to introducing subtle
compatibility breaks and even bugs.
> (I remember VFIO has some optimization in the speed of the pinning,
> could vDPA do the same?)
That's well outside my bailiwick :)
[...]