
From: Lei Yang
Subject: Re: [PATCH 9.0 00/13] Consolidate common vdpa members in VhostVDPAShared
Date: Thu, 30 Nov 2023 16:19:20 +0800

On Thu, Nov 30, 2023 at 3:38 PM Eugenio Perez Martin
<eperezma@redhat.com> wrote:
>
> On Thu, Nov 30, 2023 at 4:22 AM Lei Yang <leiyang@redhat.com> wrote:
> >
> > Hi Eugenio
> >
> > QE performed regression testing after applying this patch series. The
> > series introduced a QEMU core dump bug; for the core dump information,
> > please review the attached file.
> >
>
> Hi Lei, thank you very much for the testing!
>
Hi Eugenio

> Can you describe the test steps that lead to the crash? I think you
> removed the vdpa device via QMP, but I'd like to be sure.

Yes, you're right. The core dump occurs when hot-unplugging the NIC; please
review the following simple test steps:
1. Create two vdpa devices (vdpa0 and vdpa1) with multiple queues
2. Boot a guest with vdpa0
3. set_link false for vdpa0
4. Hot-plug vdpa1
5. Stop and resume the guest via QMP
6. Hot-unplug vdpa1; the core dump is hit at this point
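
For reference, a rough QMP sketch of steps 3-6 (the netdev/device ids and
the vhost-vdpa character device path below are made-up placeholders, not
necessarily what the actual test used):

    # step 3: link down on the vdpa0 NIC (netdev id "hostnet0" assumed)
    { "execute": "set_link", "arguments": { "name": "hostnet0", "up": false } }

    # step 4: hot-plug vdpa1 as a multiqueue virtio-net device
    { "execute": "netdev_add", "arguments": { "type": "vhost-vdpa",
        "id": "hostnet1", "vhostdev": "/dev/vhost-vdpa-1", "queues": 4 } }
    { "execute": "device_add", "arguments": { "driver": "virtio-net-pci",
        "netdev": "hostnet1", "id": "net1", "mq": true } }

    # step 5: stop and resume the guest
    { "execute": "stop" }
    { "execute": "cont" }

    # step 6: hot-unplug vdpa1 (the crash is triggered here)
    { "execute": "device_del", "arguments": { "id": "net1" } }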

Thanks
Lei
>
> Thanks!
>
> > Tested-by: Lei Yang <leiyang@redhat.com>
> >
> >
> >
> >
> > On Sat, Nov 25, 2023 at 1:14 AM Eugenio Pérez <eperezma@redhat.com> wrote:
> > >
> > > Memory operations like pinning may take a lot of time at the
> > > destination.  Currently they are done after the source of the migration is
> > > stopped and before the workload is resumed at the destination.  This is a
> > > period where neither traffic can flow nor the VM workload can continue
> > > (downtime).
> > >
> > > We can do better, as we know the memory layout of the guest RAM at the
> > > destination from the moment the migration starts.  Moving that operation
> > > earlier allows QEMU to communicate the maps to the kernel while the
> > > workload is still running on the source, so Linux can start mapping them.
> > > Ideally, all the IOMMU is configured by then, but even if the vDPA parent
> > > driver uses an on-chip IOMMU and .set_map we still save all the pinning
> > > time.
> > >
> > > This is a first required step to consolidate all those members in a
> > > common struct.  It is needed because the destination does not know which
> > > vhost_vdpa struct will have the registered listener member, so it is
> > > easier to place them in a shared struct rather than to keep them in the
> > > vhost_vdpa struct.
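
(A rough sketch of the consolidation this describes, for orientation: the
members follow the patch subjects below, but the exact types and layout are
assumptions, not necessarily the code in the series.)

    /* Sketch only: state common to all vhost_vdpa instances of one device.
     * Types are guesses; MemoryListener, QLIST_HEAD, etc. come from the
     * usual QEMU headers. */
    typedef struct VhostVDPAShared {
        int device_fd;                       /* shared vhost-vdpa char device fd */
        VhostIOVATree *iova_tree;            /* SVQ IOVA -> HVA translations */
        struct vhost_vdpa_iova_range iova_range;
        MemoryListener listener;             /* registered once per device */
        uint64_t backend_cap;
        bool iotlb_batch_begin_sent;
        bool shadow_data;
        QLIST_HEAD(, vdpa_iommu) iommu_list;
    } VhostVDPAShared;

    struct vhost_vdpa {
        VhostVDPAShared *shared;             /* common state now lives here */
        /* per-instance members stay in this struct */
    };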
> > >
> > > v1 from RFC:
> > > * Fix vhost_vdpa_net_cvq_start checking for always_svq instead of
> > >   shadow_data.  This could cause CVQ not being shadowed if
> > >   vhost_vdpa_net_cvq_start was called in the middle of a migration.
> > >
> > > Eugenio Pérez (13):
> > >   vdpa: add VhostVDPAShared
> > >   vdpa: move iova tree to the shared struct
> > >   vdpa: move iova_range to vhost_vdpa_shared
> > >   vdpa: move shadow_data to vhost_vdpa_shared
> > >   vdpa: use vdpa shared for tracing
> > >   vdpa: move file descriptor to vhost_vdpa_shared
> > >   vdpa: move iotlb_batch_begin_sent to vhost_vdpa_shared
> > >   vdpa: move backend_cap to vhost_vdpa_shared
> > >   vdpa: remove msg type of vhost_vdpa
> > >   vdpa: move iommu_list to vhost_vdpa_shared
> > >   vdpa: use VhostVDPAShared in vdpa_dma_map and unmap
> > >   vdpa: use dev_shared in vdpa_iommu
> > >   vdpa: move memory listener to vhost_vdpa_shared
> > >
> > >  include/hw/virtio/vhost-vdpa.h |  36 +++++---
> > >  hw/virtio/vdpa-dev.c           |   7 +-
> > >  hw/virtio/vhost-vdpa.c         | 160 +++++++++++++++++----------------
> > >  net/vhost-vdpa.c               | 117 ++++++++++++------------
> > >  hw/virtio/trace-events         |  14 +--
> > >  5 files changed, 174 insertions(+), 160 deletions(-)
> > >
> > > --
> > > 2.39.3
> > >
> > >
>



