Subject: Re: [Qemu-devel] [Qemu-block] [PATCH COLO-Block v6 07/16] Add new block driver interface to connect/disconnect the remote target
From: Wen Congyang
Date: Thu, 2 Jul 2015 08:55:29 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Thunderbird/31.7.0
On 07/02/2015 02:42 AM, Dr. David Alan Gilbert wrote:
> * Wen Congyang (address@hidden) wrote:
>> On 07/01/2015 04:11 PM, Dr. David Alan Gilbert wrote:
>>> * Wen Congyang (address@hidden) wrote:
>>>> On 07/01/2015 03:01 AM, Dr. David Alan Gilbert wrote:
>>>>> * Wen Congyang (address@hidden) wrote:
>>>>>> On 06/27/2015 03:03 AM, Dr. David Alan Gilbert wrote:
>>>>>
>>>>> <snip>
>>>>>
>>>>>>> Ah, I hadn't realised you could do that; so do you just do:
>>>>>>>
>>>>>>> migrate_set_parameter colo on
>>>>>>> migrate -d -b tcp:otherhhost:port
>>>>>>>
>>>>>>> How does the secondary know to feed that data straight into the disk
>>>>>>> without recording all the old data into the hidden-disk?
>>>>>>
>>>>>> The hidden disk and active disk will be made empty when block
>>>>>> replication starts.
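(As background for archive readers: in the setups discussed in this thread, the active disk and hidden disk on the secondary are created as empty qcow2 images before the secondary qemu starts; the backing chain (active -> hidden -> secondary's copy of the disk) is wired up at runtime by the replication driver via the backing_reference options. A sketch, using the file names from the secondary command line quoted below and the 40GB size mentioned later in this mail:)

```shell
# Create empty active and hidden disks for the secondary. They start
# empty; the replication driver attaches them on top of nbd_target1
# at runtime, so no -b backing option is given here.
qemu-img create -f qcow2 /run/colo-active-disk.qcow2 40G
qemu-img create -f qcow2 /run/colo-hidden-disk.qcow2 40G
```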
>>>>>
>>>>> Hmm, yes - I think I need to update to your current world; in the version
>>>>> from the end of May, I get an 'error while loading state for instance 0x0
>>>>> of device 'block'' if I try to use migrate -d -b (the bdrv_write fails).
>>>>
>>>> Can you give me both the primary and secondary qemu command lines? I
>>>> think the command line is wrong, and that is why disk migration fails.
>>>>
>>>
>>> Primary:
>>>
>>> ./try/bin/qemu-system-x86_64 -enable-kvm -nographic \
>>> -boot c -m 4096 -smp 4 -S \
>>> -name debug-threads=on -trace events=trace-file \
>>> -netdev tap,id=hn0,script=$PWD/ifup-prim,\
>>> downscript=no,colo_script=$PWD/qemu/scripts/colo-proxy-script.sh,colo_nicname=em4 \
>>> -device e1000,mac=9c:da:4d:1c:b5:89,id=net-pci0,netdev=hn0 \
>>> -drive if=virtio,driver=quorum,read-pattern=fifo,no-connect=on,\
>>> cache=none,aio=native,\
>>> children.0.file.filename=./bugzilla.raw,\
>>> children.0.driver=raw,\
>>> children.1.file.driver=nbd,\
>>> children.1.file.host=ibpair,\
>>> children.1.file.port=8889,\
>>> children.1.file.export=colo1,\
>>> children.1.driver=replication,\
>>> children.1.mode=primary,\
>>> children.1.ignore-errors=on
>>
>> Add id=nbd_target1 to the primary disk option, and try it. Disk migration
>> needs the same id to sync the disk.
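(For archive readers, concretely: the fix is to add an id= to the quoted primary -drive option. A sketch; everything except the added id=nbd_target1 line is exactly as quoted above:)

```shell
# Primary -drive option with only id=nbd_target1 added, so that
# 'migrate -d -b' can match it against the secondary's nbd_target1 drive.
-drive if=virtio,driver=quorum,read-pattern=fifo,no-connect=on,\
id=nbd_target1,\
cache=none,aio=native,\
children.0.file.filename=./bugzilla.raw,\
children.0.driver=raw,\
children.1.file.driver=nbd,\
children.1.file.host=ibpair,\
children.1.file.port=8889,\
children.1.file.export=colo1,\
children.1.driver=replication,\
children.1.mode=primary,\
children.1.ignore-errors=on
```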
>
> Thank you! That worked nicely.
> The only odd thing was that the hidden disk on the secondary went up to
> ~2GB in size during the disk copy (this is still on the version you posted
> at the end of May). I don't really understand why it was 2GB - the disk
> was 40GB, and qemu-img tells me that 2.6GB of it were used. Still, it
> would be good to avoid the overhead of going through the hidden disk on
> the secondary for the initial replication.
Yes, I have fixed it in v7: the backup job is started later.
Thanks
Wen Congyang
>
> Dave
>
>> Thanks
>> Wen Congyang
>>
>>>
>>>
>>> Secondary:
>>>
>>> ./try/bin/qemu-system-x86_64 -enable-kvm -nographic \
>>> -boot c -m 4096 -smp 4 -S \
>>> -name debug-threads=on -trace events=trace-file \
>>> -netdev tap,id=hn0,script=$PWD/ifup-slave,\
>>> downscript=no,colo_script=$PWD/qemu/scripts/colo-proxy-script.sh,colo_nicname=em4 \
>>> -device e1000,mac=9c:da:4d:1c:b5:89,id=net-pci0,netdev=hn0 \
>>> -drive if=none,driver=raw,file=bugzilla.raw,id=nbd_target1,cache=none,aio=native \
>>> -drive if=virtio,driver=replication,mode=secondary,export=colo1,\
>>> throttling.bps-total-max=70000000,\
>>> file.file.filename=/run/colo-active-disk.qcow2,\
>>> file.driver=qcow2,\
>>> file.backing_reference.drive_id=nbd_target1,\
>>> file.backing_reference.hidden-disk.file.filename=/run/colo-hidden-disk.qcow2,\
>>> file.backing_reference.hidden-disk.driver=qcow2,\
>>> file.backing_reference.hidden-disk.allow-write-backing-file=on \
>>> -incoming tcp:0:8888
>>>
>>>
>>> Thanks,
>>>
>>> Dave
>>>
>>>>>>>> If the user uses a mirror job, we don't cancel the mirror job now.
>>>>>>>
>>>>>>> It would be good to get it to work with mirror; that seems preferred
>>>>>>> these days over the old block migration.
>>>>>>
>>>>>> In normal migration, is mirror job created and cancelled by libvirt?
>>>>>
>>>>> Yes, I think so; you should be able to turn on full logging in libvirt
>>>>> and watch the QMP commands it sends.
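(For reference: one way to capture those QMP commands, assuming a stock libvirt reading the default /etc/libvirt/libvirtd.conf, is to raise the log level for the qemu driver and restart libvirtd:)

```shell
# /etc/libvirt/libvirtd.conf - key names per libvirt's logging settings.
# "1:qemu" logs the qemu driver at debug level, which includes the QMP
# traffic libvirt exchanges with each domain's monitor.
log_filters="1:qemu"
log_outputs="1:file:/var/log/libvirt/libvirtd.log"
```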
>>>>
>>>> Supporting the mirror job is on my TODO list now. But I think we should
>>>> focus on the basic function first.
>>>>
>>>> Thanks
>>>> Wen Congyang
>>>>
>>>>>
>>>>> Dave
>>>>>
>>>>> --
>>>>> Dr. David Alan Gilbert / address@hidden / Manchester, UK
>>>>>
>>>>
>>