Re: [Qemu-devel] [RFC 1/1] hw/pvrdma: Add live migration support


From: Dr. David Alan Gilbert
Subject: Re: [Qemu-devel] [RFC 1/1] hw/pvrdma: Add live migration support
Date: Fri, 28 Jun 2019 12:26:34 +0100
User-agent: Mutt/1.12.0 (2019-05-25)

* Yuval Shaia (address@hidden) wrote:
> On Fri, Jun 21, 2019 at 08:15:41PM +0530, Sukrit Bhatnagar wrote:
> > Define and register SaveVMHandlers pvrdma_save and
> > pvrdma_load for saving and loading the device state,
> > which currently includes only the dma, command slot
> > and response slot addresses.
> > 
> > Remap the DSR, command slot and response slot upon
> > loading the addresses in the pvrdma_load function.
> > 
> > Cc: Marcel Apfelbaum <address@hidden>
> > Cc: Yuval Shaia <address@hidden>
> > Signed-off-by: Sukrit Bhatnagar <address@hidden>
> > ---
> >  hw/rdma/vmw/pvrdma_main.c | 56 +++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 56 insertions(+)
> > 
> > diff --git a/hw/rdma/vmw/pvrdma_main.c b/hw/rdma/vmw/pvrdma_main.c
> > index adcf79cd63..cd8573173c 100644
> > --- a/hw/rdma/vmw/pvrdma_main.c
> > +++ b/hw/rdma/vmw/pvrdma_main.c
> > @@ -28,6 +28,7 @@
> >  #include "sysemu/sysemu.h"
> >  #include "monitor/monitor.h"
> >  #include "hw/rdma/rdma.h"
> > +#include "migration/register.h"
> >  
> >  #include "../rdma_rm.h"
> >  #include "../rdma_backend.h"
> > @@ -592,9 +593,62 @@ static void pvrdma_shutdown_notifier(Notifier *n, void *opaque)
> >      pvrdma_fini(pci_dev);
> >  }
> >  
> > +static void pvrdma_save(QEMUFile *f, void *opaque)
> > +{
> > +    PVRDMADev *dev = PVRDMA_DEV(opaque);
> > +
> > +    qemu_put_be64(f, dev->dsr_info.dma);
> > +    qemu_put_be64(f, dev->dsr_info.dsr->cmd_slot_dma);
> > +    qemu_put_be64(f, dev->dsr_info.dsr->resp_slot_dma);
> > +}
> > +
> > +static int pvrdma_load(QEMUFile *f, void *opaque, int version_id)
> > +{
> > +    PVRDMADev *dev = PVRDMA_DEV(opaque);
> > +    PCIDevice *pci_dev = PCI_DEVICE(dev);
> > +
> > +    // Remap DSR
> > +    dev->dsr_info.dma = qemu_get_be64(f);
> > +    dev->dsr_info.dsr = rdma_pci_dma_map(pci_dev, dev->dsr_info.dma,
> > +                                    sizeof(struct pvrdma_device_shared_region));
> > +    if (!dev->dsr_info.dsr) {
> > +        rdma_error_report("Failed to map to DSR");
> > +        return -1;
> > +    }
> > +    qemu_log("pvrdma_load: successfully remapped to DSR\n");
> > +
> > +    // Remap cmd slot
> > +    dev->dsr_info.dsr->cmd_slot_dma = qemu_get_be64(f);
> > +    dev->dsr_info.req = rdma_pci_dma_map(pci_dev, dev->dsr_info.dsr->cmd_slot_dma,
> > +                                     sizeof(union pvrdma_cmd_req));
> > +    if (!dev->dsr_info.req) {
> 
> So this is where you hit the error, right?
> All looks good to me. I wonder why the first pci_dma_map works fine but
> the second fails at the point where the global BounceBuffer is marked as used.
> 
> Anyone?

I've got a few guesses:
  a) My reading of address_space_map is that it can hand out a pointer to a
single bounce buffer, which stays reserved until it is unmapped; so if the
mapping is going wrong and falling back to the bounce buffer, the 1st call
would work and only the 2nd would fail.
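
Very roughly, the behaviour I mean looks like the sketch below. This is only a
simplified illustration of the bounce-buffer fallback, not the actual QEMU
memory code, and the two helper functions are invented placeholders:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdlib.h>

    /* Invented helpers standing in for the real direct-RAM mapping path */
    bool range_is_direct_ram(uint64_t addr, uint64_t len);
    void *host_pointer_for(uint64_t addr);

    typedef struct BounceBuffer {
        bool  in_use;
        void *buffer;
    } BounceBuffer;

    static BounceBuffer bounce;              /* a single global instance */

    static void *map_guest_range(uint64_t addr, uint64_t len)
    {
        if (range_is_direct_ram(addr, len)) {
            return host_pointer_for(addr);   /* the usual, direct case */
        }
        /* Fallback: only one bounce-buffer mapping can be outstanding */
        if (bounce.in_use) {
            return NULL;                     /* a 2nd caller fails here */
        }
        bounce.in_use = true;
        bounce.buffer = malloc(len);
        return bounce.buffer;                /* released again on unmap */
    }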

  b) Perhaps the dma address read from the stream is bad, and isn't
pointing into RAM properly - and that's why you're getting a bounce
buffer.

  c) Do you have some ordering problems? i.e. is this code happening
before some other device has been loaded/initialised?  If you're relying
on some other state, then perhaps the right thing is to perform these
mappings later, at the end of migration, not during the loading itself.

Other notes:
  d) Use VMSTATE macros and structures rather than open-coding qemu_get
calls (a rough sketch follows after these notes).

  e) QEMU normally uses /* comments */ rather than //
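
The VMSTATE version could look something like the outline below. It is only a
sketch based on the fields this patch touches and on the assumption that
dsr_info.dma is a uint64_t; the exact field types, the remaining slot
mappings and error handling would need checking against the real structures:

    /* Sketch only; needs "migration/vmstate.h" in addition to the
     * headers pvrdma_main.c already includes. */
    static int pvrdma_post_load(void *opaque, int version_id)
    {
        PVRDMADev *dev = opaque;
        PCIDevice *pci_dev = PCI_DEVICE(dev);

        /* Redo the DSR mapping from the restored guest address */
        dev->dsr_info.dsr = rdma_pci_dma_map(pci_dev, dev->dsr_info.dma,
                                sizeof(struct pvrdma_device_shared_region));
        if (!dev->dsr_info.dsr) {
            rdma_error_report("Failed to map to DSR");
            return -1;
        }
        /* ... remap the command and response slots from the DSR here ... */
        return 0;
    }

    static const VMStateDescription vmstate_pvrdma = {
        .name = "pvrdma",
        .version_id = 1,
        .minimum_version_id = 1,
        .post_load = pvrdma_post_load,
        .fields = (VMStateField[]) {
            VMSTATE_UINT64(dsr_info.dma, PVRDMADev),
            VMSTATE_END_OF_LIST()
        }
    };

You'd then hook it up as the device's vmsd (e.g. from the class init) instead
of calling register_savevm_live in realize.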

Dave

> > +        rdma_error_report("Failed to map to command slot address");
> > +        return -1;
> > +    }
> > +    qemu_log("pvrdma_load: successfully remapped to cmd slot\n");
> > +
> > +    // Remap rsp slot
> 
> btw, this is an RFC so we are fine, but try to avoid this style of comments.
> 
> > +    dev->dsr_info.dsr->resp_slot_dma = qemu_get_be64(f);
> > +    dev->dsr_info.rsp = rdma_pci_dma_map(pci_dev, dev->dsr_info.dsr->resp_slot_dma,
> > +                                     sizeof(union pvrdma_cmd_resp));
> > +    if (!dev->dsr_info.rsp) {
> > +        rdma_error_report("Failed to map to response slot address");
> > +        return -1;
> > +    }
> > +    qemu_log("pvrdma_load: successfully remapped to rsp slot\n");
> > +
> > +    return 0;
> > +}
> > +
> > +static SaveVMHandlers savevm_pvrdma = {
> > +    .save_state = pvrdma_save,
> > +    .load_state = pvrdma_load,
> > +};
> > +
> >  static void pvrdma_realize(PCIDevice *pdev, Error **errp)
> >  {
> >      int rc = 0;
> > +    DeviceState *s = DEVICE(pdev);
> >      PVRDMADev *dev = PVRDMA_DEV(pdev);
> >      Object *memdev_root;
> >      bool ram_shared = false;
> > @@ -666,6 +720,8 @@ static void pvrdma_realize(PCIDevice *pdev, Error **errp)
> >      dev->shutdown_notifier.notify = pvrdma_shutdown_notifier;
> >      qemu_register_shutdown_notifier(&dev->shutdown_notifier);
> >  
> > +    register_savevm_live(s, "pvrdma", -1, 1, &savevm_pvrdma, dev);
> > +
> >  out:
> >      if (rc) {
> >          pvrdma_fini(pdev);
> > -- 
> > 2.21.0
> > 
> > 
> 
--
Dr. David Alan Gilbert / address@hidden / Manchester, UK


