On Fri, Mar 20, 2015 at 03:16:58PM -0400, John Snow wrote:
+void hbitmap_truncate(HBitmap *hb, uint64_t size)
+{
+ bool shrink;
+ unsigned i;
+ uint64_t num_elements = size;
+ uint64_t old;
+
+ /* Size comes in as logical elements, adjust for granularity. */
+ size = (size + (1ULL << hb->granularity) - 1) >> hb->granularity;
+ assert(size <= ((uint64_t)1 << HBITMAP_LOG_MAX_SIZE));
+ shrink = size < hb->size;
+
+ /* bit sizes are identical; nothing to do. */
+ if (size == hb->size) {
+ return;
+ }
+
+ /* If we're losing bits, let's clear those bits before we invalidate all of
+ * our invariants. This helps keep the bitcount consistent, and will prevent
+ * us from carrying around garbage bits beyond the end of the map.
+ *
+ * Because clearing bits past the end of the map might reset bits we care
+ * about within the array, record the current value of the last bit we're
+ * keeping.
+ */
+ if (shrink) {
+ bool set = hbitmap_get(hb, num_elements - 1);
+ uint64_t fix_count = (hb->size << hb->granularity) - num_elements;
+
+ assert(fix_count);
+ hbitmap_reset(hb, num_elements, fix_count);
+ if (set) {
+ hbitmap_set(hb, num_elements - 1, 1);
+ }
Why is it necessary to set the last bit again (if it was set)? The comment
isn't clear to me.