From: Vladimir Sementsov-Ogievskiy
Subject: Re: [Qemu-block] [PATCH RFC for-2.6 1/3] HBitmap: Introduce "meta" bitmap to track bit changes
Date: Wed, 30 Dec 2015 14:26:41 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Thunderbird/31.8.0
On 30.12.2015 14:07, Fam Zheng wrote:
> On Wed, 12/30 13:53, Vladimir Sementsov-Ogievskiy wrote:
>> On 07.12.2015 08:59, Fam Zheng wrote:
>>> The meta bitmap will have the same size and granularity as the tracked
>>> bitmap, and upon each bit toggle, the corresponding bit in the meta
>>> bitmap, at an identical position, will be set.
>>>
>>> Signed-off-by: Fam Zheng <address@hidden>
>>> ---
>>>  include/qemu/hbitmap.h |  7 +++++++
>>>  util/hbitmap.c         | 22 ++++++++++++++++++++++
>>>  2 files changed, 29 insertions(+)
>>>
>>> diff --git a/include/qemu/hbitmap.h b/include/qemu/hbitmap.h
>>> index bb94a00..09a6b06 100644
>>> --- a/include/qemu/hbitmap.h
>>> +++ b/include/qemu/hbitmap.h
>>> @@ -181,6 +181,13 @@ void hbitmap_iter_init(HBitmapIter *hbi, const HBitmap *hb, uint64_t first);
>>>   */
>>>  unsigned long hbitmap_iter_skip_words(HBitmapIter *hbi);
>>>
>>> +/* hbitmap_create_meta
>>> + * @hb: The HBitmap to operate on.
>>> + *
>>> + * Create a "meta" hbitmap to track dirtiness of the bits in this HBitmap.
>>> + */
>>> +HBitmap *hbitmap_create_meta(HBitmap *hb);
>>> +
>>>  /**
>>>   * hbitmap_iter_next:
>>>   * @hbi: HBitmapIter to operate on.
>>> diff --git a/util/hbitmap.c b/util/hbitmap.c
>>> index 50b888f..3ad406e 100644
>>> --- a/util/hbitmap.c
>>> +++ b/util/hbitmap.c
>>> @@ -81,6 +81,9 @@ struct HBitmap {
>>>       */
>>>      int granularity;
>>>
>>> +    /* A meta dirty bitmap to track the dirtiness of bits in this HBitmap. */
>>> +    HBitmap *meta;
>>> +
>>>      /* A number of progressively less coarse bitmaps (i.e. level 0 is the
>>>       * coarsest). Each bit in level N represents a word in level N+1 that
>>>       * has a set bit, except the last level where each bit represents the
>>> @@ -232,6 +235,7 @@ static inline bool hb_set_elem(unsigned long *elem, uint64_t start, uint64_t last)
>>>  /* The recursive workhorse (the depth is limited to HBITMAP_LEVELS)... */
>>>  static void hb_set_between(HBitmap *hb, int level, uint64_t start, uint64_t last)
>>>  {
>>> +    uint64_t save_start = start;
>>>      size_t pos = start >> BITS_PER_LEVEL;
>>>      size_t lastpos = last >> BITS_PER_LEVEL;
>>>      bool changed = false;
>>> @@ -252,6 +256,9 @@ static void hb_set_between(HBitmap *hb, int level, uint64_t start, uint64_t last)
>>>          }
>>>      }
>>>      changed |= hb_set_elem(&hb->levels[level][i], start, last);
>>> +    if (hb->meta && level == HBITMAP_LEVELS - 1 && changed) {
>>> +        hbitmap_set(hb->meta, save_start, last - save_start + 1);
>>> +    }
>>
>> I think now that the same may be accomplished for BdrvDirtyBitmap: all we
>> need is to return the "changed" status from hb_set_between and then from
>> hbitmap_set.
>
> That is true, but it makes the further optimization of *really* only
> setting the toggled meta bits much more difficult (i.e. when only a few
> bits between start and last are changed).
>
> I haven't written any code for that optimization, but I did base my other
> persistent dirty bitmap work on v2 of this series. It would be great if we
> can harmonize on this, so we both have a common base for the block dirty
> bitmap work. I can post v2 to the list very soon if the idea is okay for
> you; or if you have a preferred way, we can take a look together. What do
> you think?
Hmm, I see, an optimization is possible.. Something like calling hb_set_elem directly for the meta bitmap? A cool optimization could be done if the meta bitmap had the same granularity, but that is not the case.. Ok, I'll wait for your v2.
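To make the idea concrete, a rough sketch of what I mean (purely illustrative, assuming it would live inside util/hbitmap.c; hb_meta_update_word, its "old" parameter and its callers are hypothetical, nothing like this exists in the tree):

#include "qemu/host-utils.h" /* for ctzl() */

/* Hypothetical helper (not part of this patch): after one bottom-level
 * word of the bitmap has been updated, set meta bits only for the
 * positions that actually toggled, instead of for the whole
 * [start, last] range.  "old" is the word's value before the update. */
static void hb_meta_update_word(HBitmap *hb, size_t word_idx,
                                unsigned long old)
{
    unsigned long toggled =
        old ^ hb->levels[HBITMAP_LEVELS - 1][word_idx];

    while (toggled) {
        uint64_t pos = (uint64_t)word_idx * BITS_PER_LONG + ctzl(toggled);

        hbitmap_set(hb->meta, pos, 1); /* mark only the toggled position */
        toggled &= toggled - 1;        /* clear the lowest set bit */
    }
}

Whether the extra per-word work pays off compared to one hbitmap_set() over the whole range would of course have to be measured.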
> Fam
>
>>>
>>>      /* If there was any change in this layer, we may have to update
>>>       * the one above.
>>> @@ -298,6 +305,7 @@ static inline bool hb_reset_elem(unsigned long *elem, uint64_t start, uint64_t last)
>>>  /* The recursive workhorse (the depth is limited to HBITMAP_LEVELS)... */
>>>  static void hb_reset_between(HBitmap *hb, int level, uint64_t start, uint64_t last)
>>>  {
>>> +    uint64_t save_start = start;
>>>      size_t pos = start >> BITS_PER_LEVEL;
>>>      size_t lastpos = last >> BITS_PER_LEVEL;
>>>      bool changed = false;
>>> @@ -336,6 +344,10 @@ static void hb_reset_between(HBitmap *hb, int level, uint64_t start, uint64_t last)
>>>          lastpos--;
>>>      }
>>>
>>> +    if (hb->meta && level == HBITMAP_LEVELS - 1 && changed) {
>>> +        hbitmap_set(hb->meta, save_start, last - save_start + 1);
>>> +    }
>>> +
>>>      if (level > 0 && changed) {
>>>          hb_reset_between(hb, level - 1, pos, lastpos);
>>>      }
>>> @@ -384,6 +396,9 @@ void hbitmap_free(HBitmap *hb)
>>>      for (i = HBITMAP_LEVELS; i-- > 0; ) {
>>>          g_free(hb->levels[i]);
>>>      }
>>> +    if (hb->meta) {
>>> +        hbitmap_free(hb->meta);
>>> +    }
>>>      g_free(hb);
>>>  }
>>> @@ -493,3 +508,10 @@ bool hbitmap_merge(HBitmap *a, const HBitmap *b)
>>>      return true;
>>>  }
>>> +
>>> +HBitmap *hbitmap_create_meta(HBitmap *hb)
>>> +{
>>> +    assert(!hb->meta);
>>> +    hb->meta = hbitmap_alloc(hb->size, hb->granularity);
>>> +    return hb->meta;
>>> +}
>>
>> --
>> Best regards,
>> Vladimir
>> * now, @virtuozzo.com instead of @parallels.com. Sorry for this inconvenience.
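For completeness, this is how I understand the new API is meant to be consumed (a sketch only; the consumer side is not in this patch, and size, granularity, offset and count stand for the caller's values):

/* e.g. persistence code that flushes only the regions whose
 * dirty-bitmap bits changed since the last round */
HBitmap *bitmap = hbitmap_alloc(size, granularity);
HBitmap *meta = hbitmap_create_meta(bitmap); /* freed by hbitmap_free(bitmap) */

hbitmap_set(bitmap, offset, count); /* toggled bits get recorded in meta */

HBitmapIter iter;
int64_t pos;

hbitmap_iter_init(&iter, meta, 0);
while ((pos = hbitmap_iter_next(&iter)) >= 0) {
    /* the dirty bitmap changed around "pos" since the last round;
     * write that region of the bitmap back here */
}
hbitmap_reset(meta, 0, size); /* start a new tracking interval */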
--
Best regards,
Vladimir
* now, @virtuozzo.com instead of @parallels.com. Sorry for this inconvenience.