
From: Laurent Vivier
Subject: Re: [Qemu-devel] [PULL 8/9] virtio-gpu: split virtio-gpu-pci & virtio-vga
Date: Mon, 24 Jun 2019 17:47:58 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.7.0

On 29/05/2019 06:40, Gerd Hoffmann wrote:
> From: Marc-André Lureau <address@hidden>
> 
> Add base classes that are common to vhost-user-gpu-pci and
> vhost-user-vga.
> 
> Signed-off-by: Marc-André Lureau <address@hidden>
> Message-id: address@hidden
> Signed-off-by: Gerd Hoffmann <address@hidden>
> ---
>  hw/display/virtio-vga.h     |  32 +++++++++
>  hw/display/virtio-gpu-pci.c |  52 +++++++++-----
>  hw/display/virtio-vga.c     | 135 ++++++++++++++++++------------------
>  MAINTAINERS                 |   2 +-
>  4 files changed, 137 insertions(+), 84 deletions(-)
>  create mode 100644 hw/display/virtio-vga.h
> 

This patch breaks something in migration (no guest OS is needed; the
failure reproduces during the SLOF boot sequence).

Tested by migrating between v4.0.0 (destination) and master (source).

v4.0.0: ppc64-softmmu/qemu-system-ppc64 -machine pseries-4.0 \
                                        -device virtio-gpu-pci \
                                        -serial mon:stdio -incoming tcp:0:4444

master: ppc64-softmmu/qemu-system-ppc64 -machine pseries-4.0 \
                                        -device virtio-gpu-pci \
                                        -serial mon:stdio


master: (qemu) migrate tcp:localhost:4444

v4.0.0:

  qemu-system-ppc64: get_pci_config_device: Bad config data: i=0x34 read: 98 device: 84 cmask: ff wmask: 0 w1cmask:0
  qemu-system-ppc64: Failed to load PCIDevice:config
  qemu-system-ppc64: Failed to load virtio-gpu:virtio
  qemu-system-ppc64: error while loading state for instance 0x0 of device 'pci@800000020000000:02.0/virtio-gpu'
  qemu-system-ppc64: load of migration failed: Invalid argument
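For context, a sketch of what the destination is complaining about (my
paraphrase, not QEMU source): get_pci_config_device compares each incoming
PCI config-space byte, masked by that byte's cmask, against the local
device's value. Offset 0x34 is the PCI capabilities pointer, so if the
capability layout changed between the two QEMU versions, the masked bytes
differ and the load is rejected. Roughly:

```python
# Hedged illustration of the per-byte comparison (names are mine, not
# QEMU's): a migrated config byte is accepted only if it matches the
# local device's byte under the check mask (cmask).
def config_byte_ok(incoming: int, local: int, cmask: int) -> bool:
    """Return True if the incoming config byte passes the cmask check."""
    return (incoming & cmask) == (local & cmask)

# Values from the error above: i=0x34, read: 98, device: 84, cmask: ff.
# With cmask 0xff every bit is checked, so 0x98 != 0x84 fails the load.
print(config_byte_ok(0x98, 0x84, 0xff))  # False -> "Bad config data"
print(config_byte_ok(0x84, 0x84, 0xff))  # True  -> byte would be accepted
```

With cmask of 0 the byte would be ignored entirely, which is why only
checked offsets like the capabilities pointer can break cross-version
migration this way.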

Is this something known?

Thanks,
Laurent
