qemu-devel

Re: [RFC PATCH 0/7] block-backend: Introduce I/O hang


From: cenjiahui
Subject: Re: [RFC PATCH 0/7] block-backend: Introduce I/O hang
Date: Tue, 29 Sep 2020 17:48:01 +0800
User-agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101 Thunderbird/68.2.2

On 2020/9/28 18:57, Kevin Wolf wrote:
> Am 27.09.2020 um 15:04 hat Ying Fang geschrieben:
>> A VM in the cloud environment may use a virtual disk as its backend storage,
>> and there are usually filesystems on the virtual block device. When the
>> backend storage is temporarily down, any I/O issued to the virtual block
>> device will cause an error. For example, an error in an ext4 filesystem
>> would make the filesystem read-only. However, cloud backend storage can
>> often be recovered quickly. For example, an IP-SAN may go down due to a
>> network failure and come back online soon after the network recovers. The
>> error in the filesystem, though, may not be recovered unless the device is
>> reattached or the system is restarted. So an I/O rehandle mechanism is
>> needed to implement self-healing.
>>
>> This patch series proposes a feature called I/O hang. It can rehandle AIOs
>> that fail with EIO without sending the error back to the guest. From the
>> guest's point of view, it is just as if an I/O were hanging and had not yet
>> returned. With this feature enabled, the guest resumes running smoothly
>> once the I/O recovers.
> 
> What is the problem with setting werror=stop and rerror=stop for the
> device? Is it that QEMU won't automatically retry, but management tool
> interaction is required to resume the guest?
When an I/O error occurs, simply setting werror=stop and rerror=stop pauses
the whole VM, making it unavailable. Moreover, the VM will not run again
until the management tool manually resumes it after the backend storage
recovers.

With the I/O hang mechanism, we can temporarily hang the I/Os while any other
services unrelated to the hung virtual block device, such as networking, keep
working. Besides, once the backend storage recovers, the I/O rehandle
mechanism automatically completes the hung I/Os and the VM continues its work.
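For reference, the werror/rerror behavior under discussion is configured per drive on the QEMU command line. A minimal sketch (the image path and device model here are hypothetical):

```shell
# On a read or write error, pause the VM instead of reporting the error to
# the guest; a management tool must later resume it (e.g. with the QMP
# 'cont' command) once the backend storage is healthy again.
qemu-system-x86_64 \
    -drive file=/path/to/disk.img,format=raw,if=virtio,werror=stop,rerror=stop
```

With this configuration the whole guest stops on the first failed request, which is exactly the behavior the I/O hang series tries to avoid: only the affected block device stalls, while the rest of the VM keeps running.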
> 
> I haven't checked your patches in detail yet, but implementing this
> functionality in the backend means that blk_drain() will hang (or if it
> doesn't hang, it doesn't do what it's supposed to do), making the whole
> QEMU process unresponsive until the I/O succeeds again. Amongst others,
> this would make it impossible to migrate away from a host with storage
> problems.
What if we disabled rehandling before blk_drain()?

Exactly. If the storage recovers during the migration iteration phase, the
migration can succeed; but if the storage has still not recovered at the
migration completion phase, the migration should fail and be cancelled.

Thanks,
Jiahui Cen
> 
> Kevin


