From: Vladimir Sementsov-Ogievskiy
Subject: Re: [bugfix ping2] Re: [PATCH v2 0/2] fix qcow2_can_store_new_dirty_bitmap
Date: Wed, 11 Dec 2019 08:10:12 +0000

10.12.2019 23:27, John Snow wrote:
> 
> 
> On 12/10/19 8:24 AM, Max Reitz wrote:
>> On 10.12.19 09:11, Max Reitz wrote:
>>> On 09.12.19 23:03, Eric Blake wrote:
>>>> On 12/9/19 11:58 AM, Max Reitz wrote:
>>>>> On 09.12.19 17:30, Max Reitz wrote:
>>>>>> On 02.12.19 15:09, Vladimir Sementsov-Ogievskiy wrote:
>>>>>>> Hi again!
>>>>>>>
>>>>>>> Still forgotten bug-fix :(
>>>>>>>
>>>>>>> Is it too late for 4.2?
>>>>>>
>>>>>> Sorry. :-/
>>>>>>
>>>>>> Yes, I think I just forgot it.  I don’t think it’s too important for
>>>>>> 4.2, so, well, it isn’t too bad, but...  Sorry.
>>>>>>
>>>>>>> I can't imagine a better test, and it tests exactly what is written in
>>>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=1712636
>>>>>>>
>>>>>>> (Hmm, actually, I doubt that it is a real use case; more probably it's
>>>>>>> a bug in the management layer)
>>>>>>>
>>>>>>> So, take this with or without the test, for 4.2 or 5.0.
>>>>>>
>>>>>> I was thinking of seeing whether I could write a quicker test, but of
>>>>>> course we should take the patch either way.
>>>>>
>>>>> OK, I give up.  It’s very much possible to create an image with 65535
>>>>> bitmaps very quickly (like, under a second) outside of qemu, but just
>>>>> opening it takes 2:30 min (because of the quadratic complexity of
>>>>> checking whether a bitmap of the same name already exists).
>>>>
>>>> Can we fix that to use a hash table for amortized O(1) lookup rather
>>>> than the current O(n) lookup?
>>>
>>> Not unreasonable, considering that this is probably what we would’ve
>>> done from the start in any language where hash tables are built in.
>>>
>>> But OTOH when you have 66k bitmaps, you probably have other problems.
>>> Like, writes being incredibly slow, because all those bitmaps have to be
>>> updated.
>>>
>>> (Well, you can technically have 99 % of them disabled, but who’d do such
>>> a thing?)
>>>
>>> ((Maybe I’ll look into it.))
>>
>> Hmm, now I did.  This gets the test down to 24 s.  Still not sure
>> whether it’s worth it, though...
>>
>> Max
>>
> 
> I agree we very likely have other problems once we reach resource usage
> of this level.
> 
> 
> Still, if we want to make this blazing fast for the love of doing so:
> 
> (1) Read in the directory *once*, and cache it. We have avoided doing
> this largely to feel more confident that the code is correct and is
> never working on an "outdated" version of the directory.
> 
> [On cache invalidation, we can write the directory back out to the
> bitmap, and delete our cache. The next time we need the list, we can
> reload it. This should alleviate consistency concerns.]

Note that in this case, if we want to modify the cached directory, the
bitmaps compatible bit must be cleared first (so, if we fail to flush the
directory at some point, we just lose the bitmaps, not consistency).
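
I mean roughly this ordering (a minimal sketch only; the helpers here are
hypothetical stand-ins, not the real qcow2 functions):

#include <stdbool.h>

static bool bitmaps_valid_bit;      /* stand-in for the header flag */

static int write_header_bit(bool value)
{
    bitmaps_valid_bit = value;      /* real code: rewrite + flush the header */
    return 0;
}

static int flush_bitmap_directory(void)
{
    return 0;                       /* real code: write the cached directory */
}

static int update_cached_directory(void)
{
    int ret;

    /* Invalidate first: if we crash mid-update, the stale on-disk bitmaps
     * are ignored (lost) instead of being read back as if consistent. */
    ret = write_header_bit(false);
    if (ret < 0) {
        return ret;
    }

    ret = flush_bitmap_directory();
    if (ret < 0) {
        return ret;                 /* bit stays cleared: bitmaps lost only */
    }

    /* Only after a successful flush are the bitmaps marked valid again. */
    return write_header_bit(true);
}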

Note 2: it would be interesting to know whether the existing qcow2 metadata
caching infrastructure could be reused.

> 
> 
> (2) Store the entries in an rbtree! 65536 entries is only ~16 lookups
> maximum in the worst case. I took a look at the Linux rbtree
> implementation and did some very quick back-of-the-envelope benchmarking
> of inserting strings (len=32) into a tree:
> 
> name generation 53151 usec
> insert [0-10] 5 usec
> insert [10-100] 14 usec
> insert [100-1000] 195 usec
> insert [1000-10000] 2919 usec
> insert [10000-65536] 41485 usec
> 
> This seems fast enough that we're likely going to be eclipsed just by
> other string handling concerns.
> 

Is an rbtree the best thing to use? Maybe it would be better to construct a
prefix tree? Still, I think g_hash_table should be enough, and it is the
simplest to use.
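
For illustration, a minimal sketch of the g_hash_table variant (keeping a
set of bitmap names beside the in-memory list is just my assumption for
the sketch, not what the code does today):

#include <glib.h>
#include <stdbool.h>

/* Build the set once while loading the bitmap directory; keys are owned
 * by the table and freed together with it. */
static GHashTable *bitmap_name_set_new(void)
{
    return g_hash_table_new_full(g_str_hash, g_str_equal, g_free, NULL);
}

/* Amortized O(1) duplicate check instead of rescanning the whole list.
 * g_hash_table_add() returns FALSE if the name was already present. */
static bool bitmap_name_set_add(GHashTable *names, const char *name)
{
    return g_hash_table_add(names, g_strdup(name));
}

The duplicate-name check in qcow2_can_store_new_dirty_bitmap() would then
be a single g_hash_table_contains() call.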

Still, I really doubt that it's worth it, as we never have that many bitmaps.

Actually, there is no reason to have more than one active bitmap. So we
may have a lot of disabled bitmaps, marking some history of the drive.

I think, in this case, the better optimization would be to teach QEMU not
to load disabled bitmaps' data, only their headers, and then load the data
on demand (for example, when the user wants to enable a bitmap or
otherwise use it).
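
Roughly like this (the type and the loader here are hypothetical, just to
show the idea):

#include <errno.h>
#include <stdbool.h>

/* Hypothetical loader: reads one bitmap's bit data from the image. */
unsigned long *load_bitmap_data_from_image(const char *name);

typedef struct LazyBitmap {
    char *name;
    bool enabled;
    unsigned long *data;    /* NULL until somebody actually needs it */
} LazyBitmap;

/* Called before enabling or merging a disabled bitmap: only then do we
 * pay for reading its data; at open time we read just the headers. */
static int lazy_bitmap_require_data(LazyBitmap *bm)
{
    if (bm->data) {
        return 0;
    }
    bm->data = load_bitmap_data_from_image(bm->name);
    return bm->data ? 0 : -EIO;
}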

And if we have a lot of active bitmaps (for example, to cover the time
regions [t0, t_current], [t1, t_current], [t2, t_current], ...), the
optimization is to use disabled bitmaps instead, and merge them when we
need to:

[t0, t1], [t1, t2], [t2, t_current]
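
(Recovering [t0, t_current] is then just a bitwise OR of the pieces; a toy
sketch, with the bitmap reduced to an array of words:)

#include <stddef.h>

/* Toy model: a dirty bitmap as an array of machine words. Merging two
 * adjacent history bitmaps is a bitwise OR:
 * [t0, t_current] = [t0, t1] | [t1, t2] | [t2, t_current] */
static void bitmap_merge(unsigned long *dst, const unsigned long *src,
                         size_t nwords)
{
    for (size_t i = 0; i < nwords; i++) {
        dst[i] |= src[i];
    }
}

QMP already exposes this merge as block-dirty-bitmap-merge, so the
management layer can reconstruct any [t_i, t_current] range on demand.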


-- 
Best regards,
Vladimir
