
Re: [PATCH] vdpa: dont check vhost_vdpa->suspended when unsupported


From: Zhu, Lingshan
Subject: Re: [PATCH] vdpa: dont check vhost_vdpa->suspended when unsupported
Date: Thu, 8 Jun 2023 21:19:34 +0800
User-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101 Firefox/102.0 Thunderbird/102.12.0



On 6/8/2023 6:59 PM, Eugenio Perez Martin wrote:
On Thu, Jun 8, 2023 at 11:34 AM Zhu, Lingshan <lingshan.zhu@intel.com> wrote:


On 6/8/2023 4:49 PM, Eugenio Perez Martin wrote:
On Thu, Jun 8, 2023 at 9:07 AM Zhu, Lingshan <lingshan.zhu@intel.com> wrote:

On 6/7/2023 9:51 PM, Eugenio Perez Martin wrote:
On Wed, Jun 7, 2023 at 11:09 AM Zhu Lingshan <lingshan.zhu@intel.com> wrote:
When reading the state of a virtqueue, vhost_vdpa needs
to check whether the device is suspended.

This commit verifies whether VHOST_BACKEND_F_SUSPEND is
negotiated when checking vhost_vdpa->suspended.

I'll add: Otherwise, qemu prints XXX error message.

On second thought, not returning an error prevents the caller
(vhost.c:vhost_virtqueue_stop) from recovering used idx from the
guest.
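
For context, the caller's recovery path looks roughly like this (a paraphrased sketch of the relevant part of vhost.c:vhost_virtqueue_stop(); error handling is elided and details may differ between QEMU versions):

    /* Paraphrased sketch, not the verbatim vhost.c code. */
    struct vhost_vring_state state = { .index = vhost_vq_index };
    int r = dev->vhost_ops->vhost_get_vring_base(dev, &state);
    if (r < 0) {
        /* The backend could not report a trustworthy index:
         * fall back to the guest's view of the ring. */
        virtio_queue_restore_last_avail_idx(vdev, idx);
    } else {
        /* The backend value is trusted and becomes the new last_avail_idx. */
        virtio_queue_set_last_avail_idx(vdev, idx, state.num);
    }

So if vhost_vdpa_get_vring_base() returns 0 with an untrustworthy value, this fallback is never taken.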

I'm not sure about the right fix for this, should we call
virtio_queue_restore_last_avail_idx here? Communicate that the backend
cannot suspend so vhost.c can print a better error message?
I agree it is better not to return an error.

Instead, if the device does not support _F_SUSPEND or fails to
suspend,
I think vhost_vdpa should try to read the last_avail_idx from
the device anyway. We can print an error message here,
like: failed to suspend, the value may not be reliable.

We need to drop that value if it is not reliable. If the device keeps
using descriptors, used_idx may increase beyond avail_idx, so the
usual approach is to just restore whatever used_idx value the guest had at
stop.
This interface is used to read the last_avail_idx; the ideal case
is to fetch it after the device has been suspended.

If the device is not suspended, the value may be unreliable
because the device may keep consuming descriptors.
However, at that moment the guest may be frozen as well,
so the guest wouldn't supply more available descriptors to the device.

Actually, if we cannot suspend the device we reset it, as we cannot
afford to modify used_idx.

I'm thinking now that your original idea was way better, to be honest.
Not calling vhost_get_vring_base when the VM is shutting down and
we're not migrating seems to fit the situation much better, and it
saves an ioctl / potential device call that serves no purpose.
Agree, but I failed to find a code path indicating the VM is in
the "shutting down" state, and it may be overkill to add
a new status field that is only used here.

So maybe a fix like this (sketched below):
1) if the device does not support _F_SUSPEND:
call virtio_queue_restore_last_avail_idx(dev, state->index)
2) if the device supports _F_SUSPEND but fails to suspend:
return -1, and vhost_virtqueue_stop() will call virtio_queue_restore_last_avail_idx()
in the original code path.

Does this look good to you?

I think that means the last_avail_idx may not be reliable but is still safe,
and better than reading nothing, because the last_avail_idx always lags
behind the in-guest avail_idx.
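
A minimal sketch of those two cases inside vhost_vdpa_get_vring_base() (illustrative only; the error message, the handling of ring->num, and the vhost/virtio queue index translation are simplifications, not the final patch):

    static int vhost_vdpa_get_vring_base(struct vhost_dev *dev,
                                         struct vhost_vring_state *ring)
    {
        struct vhost_vdpa *v = dev->opaque;

        if (!(dev->backend_cap & BIT_ULL(VHOST_BACKEND_F_SUSPEND))) {
            /* Case 1: _F_SUSPEND was never negotiated.  The device value
             * cannot be trusted, so recover last_avail_idx from the guest's
             * view and report that value back to keep the caller consistent.
             * (Index translation between vhost and virtio queue numbering
             * is glossed over here.) */
            error_report("vdpa: device does not support suspend, "
                         "recovering vring %u state from the guest",
                         ring->index);
            virtio_queue_restore_last_avail_idx(dev->vdev, ring->index);
            ring->num = virtio_queue_get_last_avail_idx(dev->vdev, ring->index);
            return 0;
        }

        if (!v->suspended) {
            /* Case 2: _F_SUSPEND is negotiated but the suspend call failed.
             * Return an error so vhost_virtqueue_stop() falls back to
             * virtio_queue_restore_last_avail_idx() on its existing path. */
            return -1;
        }

        return vhost_vdpa_call(dev, VHOST_GET_VRING_BASE, ring);
    }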

Assuming we're migrating and we neither reset nor suspend the device:
* guest sets avail_idx=3
* device fetches last_avail_idx=3
* guest sets avail_idx=6
* VM_SUSPEND
* we call the unreliable get_vring_base and obtain device state 3.
* the device is not reset, so it reads the guest's avail_idx = 6. It can
process the descriptors at avail_idx positions 3, 4 and 5, and write
that used_idx.
* virtio_load detects that inconsistency, as (uint)last_avail_idx -
(uint)used_idx > vring size.
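
For illustration, here is the kind of 16-bit wrap-around check that catches this situation (a standalone sketch, not the literal virtio_load() code):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Ring indices are free-running 16-bit counters, so the number of
     * in-flight descriptors is the wrapping difference
     * last_avail_idx - used_idx, and it can never legitimately exceed
     * the ring size. */
    static bool vring_indices_consistent(uint16_t last_avail_idx,
                                         uint16_t used_idx,
                                         uint16_t vring_num)
    {
        uint16_t inflight = (uint16_t)(last_avail_idx - used_idx);
        return inflight <= vring_num;
    }

    int main(void)
    {
        /* Scenario above: we saved last_avail_idx = 3, but the device kept
         * running and wrote used_idx = 6.  3 - 6 wraps to 65533, which is
         * larger than any ring size, so the state is rejected. */
        printf("%d\n", vring_indices_consistent(3, 6, 256)); /* prints 0 */
        printf("%d\n", vring_indices_consistent(6, 3, 256)); /* prints 1 */
        return 0;
    }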

So I think we need to keep printing an error and recovering from the
guest as a fallback.
so true, I think that is virtio_queue_restore_last_avail_idx()

I am not sure we can restore last_avail_idx with last_used_idx; there is
always a delta between them.

Yes, we assume it is safe to process these descriptors again or that
they'll be migrated when that is supported.

In any case, if a device does not work with this, we should be more
restrictive, not less.

Does it make sense to you?
Yes, so if the device doesn't support _F_SUSPEND, we call virtio_queue_restore_last_avail_idx()
in vhost_vdpa_get_vring_base(). Does that look good?

Thanks

Thanks!

Thanks

Thanks!

Thanks
Thanks!

Signed-off-by: Zhu Lingshan <lingshan.zhu@intel.com>
---
 hw/virtio/vhost-vdpa.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index b3094e8a8b..ae176c06dd 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -1397,7 +1397,7 @@ static int vhost_vdpa_get_vring_base(struct vhost_dev *dev,
         return 0;
     }
 
-    if (!v->suspended) {
+    if ((dev->backend_cap & BIT_ULL(VHOST_BACKEND_F_SUSPEND)) && (!v->suspended)) {
         /*
          * Cannot trust in value returned by device, let vhost recover used
          * idx from guest.
--
2.39.1




