qemu-devel

[Bug 1894818] Re: COLO's guest VNC client hang after failover


From: Derek Su
Subject: [Bug 1894818] Re: COLO's guest VNC client hang after failover
Date: Tue, 08 Sep 2020 16:54:25 -0000

Hi Lukas,

After fixing the misuse of AWD, there is still a high probability that the
VNC/RDP client hangs after the PVM dies and the SVM takes over.

Here are the steps to reproduce.

1. Start PVM script
```
imagefolder="/mnt/nfs2/vms"

qemu-system-x86_64 -enable-kvm -cpu qemu64,+kvmclock -m 4096 -smp 2 -qmp stdio \
   -device piix3-usb-uhci -device usb-tablet -name primary \
   -device virtio-net-pci,id=e0,netdev=hn0 \
   -netdev tap,id=hn0,br=br0,vhost=off,helper=/usr/local/libexec/qemu-bridge-helper \
   -chardev socket,id=mirror0,host=0.0.0.0,port=9003,server,nowait \
   -chardev socket,id=compare1,host=0.0.0.0,port=9004,server,wait \
   -chardev socket,id=compare0,host=127.0.0.1,port=9001,server,nowait \
   -chardev socket,id=compare0-0,host=127.0.0.1,port=9001 \
   -chardev socket,id=compare_out,host=127.0.0.1,port=9005,server,nowait \
   -chardev socket,id=compare_out0,host=127.0.0.1,port=9005 \
   -object filter-mirror,id=m0,netdev=hn0,queue=tx,outdev=mirror0 \
   -object filter-redirector,netdev=hn0,id=redire0,queue=rx,indev=compare_out \
   -object filter-redirector,netdev=hn0,id=redire1,queue=rx,outdev=compare0 \
   -object iothread,id=iothread1 \
   -object colo-compare,id=comp0,primary_in=compare0-0,secondary_in=compare1,outdev=compare_out0,iothread=iothread1 \
   -drive if=ide,id=colo-disk0,driver=quorum,read-pattern=fifo,vote-threshold=1,children.0.file.filename=$imagefolder/primary.qcow2,children.0.driver=qcow2 \
   -vnc :0 -S
```

2. Start SVM script
```
#!/bin/bash

imagefolder="/mnt/nfs2/vms"
primary_ip=127.0.0.1

qemu-img create -f qcow2 $imagefolder/secondary-active.qcow2 100G
qemu-img create -f qcow2 $imagefolder/secondary-hidden.qcow2 100G

qemu-system-x86_64 -enable-kvm -cpu qemu64,+kvmclock -m 4096 -smp 2 -qmp stdio \
   -device piix3-usb-uhci -device usb-tablet -name secondary \
   -device virtio-net-pci,id=e0,netdev=hn0 \
   -netdev tap,id=hn0,br=br0,vhost=off,helper=/usr/local/libexec/qemu-bridge-helper \
   -chardev socket,id=red0,host=$primary_ip,port=9003,reconnect=1 \
   -chardev socket,id=red1,host=$primary_ip,port=9004,reconnect=1 \
   -object filter-redirector,id=f1,netdev=hn0,queue=tx,indev=red0 \
   -object filter-redirector,id=f2,netdev=hn0,queue=rx,outdev=red1 \
   -object filter-rewriter,id=rew0,netdev=hn0,queue=all \
   -drive if=none,id=parent0,file.filename=$imagefolder/secondary.qcow2,driver=qcow2 \
   -drive if=none,id=childs0,driver=replication,mode=secondary,file.driver=qcow2,top-id=colo-disk0,file.file.filename=$imagefolder/secondary-active.qcow2,file.backing.driver=qcow2,file.backing.file.filename=$imagefolder/secondary-hidden.qcow2,file.backing.backing=parent0 \
   -drive if=ide,id=colo-disk0,driver=quorum,read-pattern=fifo,vote-threshold=1,children.0=childs0 \
   -vnc :1 \
   -incoming tcp:0.0.0.0:9998
```

3. On the Secondary VM's QEMU monitor, issue:
```
{'execute': 'qmp_capabilities'}
{'execute': 'nbd-server-start', 'arguments': {'addr': {'type': 'inet', 'data': {'host': '0.0.0.0', 'port': '9999'} } } }
{'execute': 'nbd-server-add', 'arguments': {'device': 'parent0', 'writable': true } }
```

4. On the Primary VM's QEMU monitor, issue:
```
{'execute': 'qmp_capabilities'}
{'execute': 'human-monitor-command', 'arguments': {'command-line': 'drive_add -n buddy driver=replication,mode=primary,file.driver=nbd,file.host=127.0.0.2,file.port=9999,file.export=parent0,node-name=replication0'}}
{'execute': 'x-blockdev-change', 'arguments': {'parent': 'colo-disk0', 'node': 'replication0' } }
{'execute': 'migrate-set-capabilities', 'arguments': {'capabilities': [ {'capability': 'x-colo', 'state': true } ] } }
{'execute': 'migrate', 'arguments': {'uri': 'tcp:127.0.0.2:9998' } }
```
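For anyone scripting the monitor dialogue above instead of typing it by hand, here is a minimal sketch. It is not part of my setup: it assumes QEMU is started with a TCP QMP socket (e.g. `-qmp tcp:127.0.0.1:4444,server,nowait`) rather than the `-qmp stdio` used in the scripts, and `qmp_command`/`qmp_session` are hypothetical helper names:

```python
import json
import socket

def qmp_command(execute, arguments=None):
    """Frame a QMP command as a single JSON line."""
    cmd = {"execute": execute}
    if arguments is not None:
        cmd["arguments"] = arguments
    return json.dumps(cmd)

def qmp_session(host, port, commands):
    """Connect to a QMP TCP socket, negotiate capabilities, send commands."""
    with socket.create_connection((host, port)) as sock:
        f = sock.makefile("rw")
        f.readline()                      # consume the QMP greeting banner
        for line in [qmp_command("qmp_capabilities")] + list(commands):
            f.write(line + "\n")
            f.flush()
            print(f.readline().rstrip())  # one reply line per command

# The primary-side COLO commands from step 4 (values as in this report):
primary_cmds = [
    qmp_command("migrate-set-capabilities",
                {"capabilities": [{"capability": "x-colo", "state": True}]}),
    qmp_command("migrate", {"uri": "tcp:127.0.0.2:9998"}),
]
# e.g. qmp_session("127.0.0.1", 4444, primary_cmds)
```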

5. Kill the PVM.

6. On the SVM, issue:
```
{'execute': 'nbd-server-stop'}
{'execute': 'x-colo-lost-heartbeat'}

{'execute': 'object-del', 'arguments':{ 'id': 'f2' } }
{'execute': 'object-del', 'arguments':{ 'id': 'f1' } }
{'execute': 'chardev-remove', 'arguments':{ 'id': 'red1' } }
{'execute': 'chardev-remove', 'arguments':{ 'id': 'red0' } }
```
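The failover sequence can likewise be kept as data and replayed over a QMP socket. A small sketch (again assuming a scripted QMP connection, not the `-qmp stdio` monitor used above); the ordering matches step 6, where the NBD server is stopped and the heartbeat declared lost before the filters and chardevs are torn down:

```python
import json

# Failover commands from step 6, in issue order: stop the NBD server and
# declare the heartbeat lost first, then remove the filter-redirector
# objects and their chardevs.
failover_cmds = [
    {"execute": "nbd-server-stop"},
    {"execute": "x-colo-lost-heartbeat"},
    {"execute": "object-del", "arguments": {"id": "f2"}},
    {"execute": "object-del", "arguments": {"id": "f1"}},
    {"execute": "chardev-remove", "arguments": {"id": "red1"}},
    {"execute": "chardev-remove", "arguments": {"id": "red0"}},
]

# One JSON object per line, as the QMP monitor expects:
payload = "\n".join(json.dumps(c) for c in failover_cmds)
```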

I use "-device virtio-net-pci" here, but after replacing it with "-device rtl8139",
the behavior seems normal.
Is "-device virtio-net-pci" supported by COLO?

Thanks.

Regards,
Derek

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1894818

Title:
  COLO's guest VNC client hang after failover

Status in QEMU:
  Invalid

Bug description:
  Hello,

  After setting up COLO's primary and secondary VMs,
  I installed vncserver and xrdp (apt install tightvncserver xrdp) inside the VM.

  I access the VM from another PC via VNC/RDP client, and everything is OK.
  Then, kill the primary VM and issue the failover commands.

  The expected result is that the VNC/RDP client can reconnect and
  resume automatically after failover. (I've confirmed the VNC/RDP
  client can reconnect automatically.)

  But in my test, the VNC client's screen hangs and cannot be recovered;
  I have to restart the VNC client manually.

  BTW, it works well after killing the SVM.

  Here is my QEMU networking device
  ```
  -device virtio-net-pci,id=e0,netdev=hn0 \
  -netdev 
tap,id=hn0,br=br0,vhost=off,helper=/usr/local/libexec/qemu-bridge-helper \
  ```

  Thanks.

  Regards,
  Derek

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1894818/+subscriptions


