Re: [Qemu-devel] [PATCH] util/hbitmap: fix unaligned reset


From: Vladimir Sementsov-Ogievskiy
Subject: Re: [Qemu-devel] [PATCH] util/hbitmap: fix unaligned reset
Date: Mon, 5 Aug 2019 09:45:56 +0000

03.08.2019 0:19, Max Reitz wrote:
> On 02.08.19 20:58, Vladimir Sementsov-Ogievskiy wrote:
>> hbitmap_reset is broken: it rounds up the requested region. This leads to
>> the following bug, which the fixed test demonstrates:
>>
>> assume granularity = 2
>> set(0, 3) # count becomes 4
>> reset(0, 1) # count becomes 2
>>
>> But users of the interface assume that virtual bit 1 should still be
>> dirty, so hbitmap should report the count to be 4!
>>
>> In other words, because of granularity, when we set one "virtual" bit,
>> we make all "virtual" bits in the same chunk dirty. But this should
>> not be so for reset.
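
(To make the failing sequence concrete, here is a minimal sketch against the
public hbitmap API -- tests/test-hbitmap.c structures this differently, so
take it as illustration only:)

    #include "qemu/osdep.h"
    #include "qemu/hbitmap.h"

    /* The granularity parameter is a log2: each bitmap bit covers
     * 2^1 = 2 "virtual" bits, i.e. "granularity = 2" in the terms
     * used above. */
    HBitmap *hb = hbitmap_alloc(4, 1);

    hbitmap_set(hb, 0, 3);    /* dirties chunks [0,2) and [2,4): count = 4 */
    hbitmap_reset(hb, 0, 1);  /* buggy code rounds up and clears all of [0,2) */

    /* Virtual bit 1 must still be dirty, so this should hold;
     * without the fix, hbitmap_count() returns 2 instead. */
    assert(hbitmap_count(hb) == 4);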
>>
>> Fix this by aligning the bounds correctly.
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <address@hidden>
>> ---
>>
>> Hi all!
>>
>> Hmm, is it a bug or feature? :)
I don't have a test for mirror yet, but I think that sync mirror may be
broken because of this, as do_sync_target_write() seems to be using
unaligned reset.
> 
> Crap.
> 
> 
> Yes, you’re right.  This would fix it, and it wouldn’t be the worst way
> to fix it.
> 
> But I still don’t know whether this patch is the best way forward.  I
> think calling hbitmap_reset() with unaligned boundaries generally calls for
> trouble, as John has laid out.  If mirror’s do_sync_target_write() is
> the only offender right now, I’d prefer for hbitmap_reset() to assert
> that the boundaries are aligned (for 4.2),

OK, I agree that asserting this is better.

> and for
> do_sync_target_write() to be fixed (for 4.1? :-/).
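
For the assertion, presumably something like this (a sketch only; the final
check might also have to allow an unaligned tail at the very end of the
bitmap):

    /* util/hbitmap.c -- sketch of the proposed assertion */
    void hbitmap_reset(HBitmap *hb, uint64_t start, uint64_t count)
    {
        /* each bitmap bit covers 2^granularity "virtual" bits */
        uint64_t gran = 1ULL << hb->granularity;

        assert(QEMU_IS_ALIGNED(start, gran));
        assert(QEMU_IS_ALIGNED(count, gran));

        /* ... existing reset logic ... */
    }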
> 
> (A practical problem with this patch is that do_sync_target_write() will
> still do the write, but it won’t change anything in the bitmap, so the
> copy operation was effectively useless.)
> 
> I don’t know how to fix mirror exactly, though.  I have four ideas:
> 
> (A) Quick fix 1: do_sync_target_write() should shrink [offset, offset +
> bytes) such that it is aligned.  This would make it skip writes that
> don’t fill one whole chunk.
> 
> +: Simple fix.  Could go into 4.1.
> -: Makes copy-mode=write-blocking equal to copy-mode=background unless
>     you set the granularity to like 512. (Still beats just being
>     completely broken.)
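
(For reference, the clamping (A) describes would be something like the
following -- the variable names are illustrative, not the actual
do_sync_target_write() code:)

    uint64_t align = job->granularity;  /* chunk size of the dirty bitmap */
    uint64_t aligned_offset = QEMU_ALIGN_UP(offset, align);
    uint64_t aligned_end = QEMU_ALIGN_DOWN(offset + bytes, align);

    if (aligned_end <= aligned_offset) {
        return;  /* request covers no whole chunk: skip the target copy */
    }

    offset = aligned_offset;
    bytes = aligned_end - aligned_offset;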
> 
> (B) Quick fix 2: Setting the request_alignment block limit to the job’s
> granularity when in write-blocking mode.
> 
> +: Very simple fix.  Could go into 4.1.
> +: Every write will trigger a RMW cycle, which copies the whole chunk to
>     the target, so write-blocking will do what it’s supposed to do.
> -: request_alignment forces everything to have the same granularity, so
>     this slows down reads needlessly.  (But only for write-blocking.)
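
(Roughly like this, I imagine, assuming the mirror filter driver grows a
.bdrv_refresh_limits callback -- a sketch, not existing code:)

    static void bdrv_mirror_top_refresh_limits(BlockDriverState *bs,
                                               Error **errp)
    {
        MirrorBDSOpaque *s = bs->opaque;

        if (s->job && s->job->copy_mode == MIRROR_COPY_MODE_WRITE_BLOCKING) {
            /* force RMW so that every guest write copies whole chunks */
            bs->bl.request_alignment = s->job->granularity;
        }
    }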
> 
> (C) Maybe the right fix 1: Let do_sync_target_write() expand [offset,
> offset + bytes) such that it is aligned and read head and tail from the
> source node.  (So it would do the RMW itself.)
> 
> +: Doesn’t slow reads down.
> +: Writes to dirty areas will make them clean – which is what
>     write-blocking is for.
> -: Probably more complicated.  Nothing for 4.1.

This is how backup works.
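
(The expansion would be roughly like the following; the names are
illustrative and error handling is omitted -- backup's actual code is
organized differently:)

    /* inside do_sync_target_write(): expand [offset, offset + bytes)
     * to chunk alignment and read the head and tail from the source */
    uint64_t align = job->granularity;
    uint64_t start = QEMU_ALIGN_DOWN(offset, align);
    uint64_t end = QEMU_ALIGN_UP(offset + bytes, align);
    uint8_t *buf = qemu_blockalign(bs, end - start);

    if (start < offset) {  /* unaligned head */
        bdrv_co_pread(job->mirror_top_bs->backing, start,
                      offset - start, buf, 0);
    }
    if (end > offset + bytes) {  /* unaligned tail */
        bdrv_co_pread(job->mirror_top_bs->backing, offset + bytes,
                      end - (offset + bytes),
                      buf + (offset + bytes - start), 0);
    }
    /* then merge in the guest data, write [start, end) to the target,
     * and reset the now fully covered chunks in the dirty bitmap */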

> 
> (D) Maybe the right fix 2: Split BlockLimits.request_alignment into
> read_alignment and write_alignment.  Then do (B).

Right now it's OK, but if we implement a bitmap mode for mirror (which is
upcoming anyway, I think), it will slow down all writes, when we are only
interested in those that touch dirty parts.

> 
> In effect, this is more or less the same as (C), but probably in a
> simpler way.  Still not simple enough for 4.1, though.
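
(I.e., presumably something like this in BlockLimits -- a sketch of the
idea, not an actual proposal:)

    /* include/block/block_int.h: today there is only request_alignment;
     * (D) would split it in two */
    typedef struct BlockLimits {
        /* ... */
        uint32_t read_alignment;
        uint32_t write_alignment;
        /* ... */
    } BlockLimits;

With that, (B) would become "bs->bl.write_alignment = s->job->granularity",
leaving reads untouched.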
> 
> 
> So...  I’m inclined to do either (A) or (B) now, and then probably (D) for
> 4.2?  (And because (D) is an extension to (B), it would make sense to do
> (B) now, unless you’d prefer (A).)
> 
> Max
> 


-- 
Best regards,
Vladimir
