Re: [Qemu-devel] [PATCH 4/8] block/backup: improve unallocated clusters skipping


From: Vladimir Sementsov-Ogievskiy
Subject: Re: [Qemu-devel] [PATCH 4/8] block/backup: improve unallocated clusters skipping
Date: Fri, 9 Aug 2019 10:25:06 +0000

09.08.2019 12:12, Vladimir Sementsov-Ogievskiy wrote:
> 09.08.2019 10:50, Vladimir Sementsov-Ogievskiy wrote:
>> 07.08.2019 21:01, Max Reitz wrote:
>>> On 07.08.19 10:07, Vladimir Sementsov-Ogievskiy wrote:
>>>> Limit block_status querying to request bounds on write notifier to
>>>> avoid extra seeking.
>>>
>>> I don’t understand this reasoning.  Checking whether something is
>>> allocated for qcow2 should just mean an L2 cache lookup.  Which we have
>>> to do anyway when we try to copy data off the source.
>>
>> But for raw it's seeking. I think we just shouldn't do any unnecessary
>> operations in copy-before-write handling. So instead of calling
>> block_status(request_start, disk_end) I think it's better to call
>> block_status(request_start, request_end). And it also makes this code
>> easier to reuse.
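
(For illustration, a minimal sketch of the two call shapes being contrasted
here; bs, pnum, request_start, request_end and disk_end are hypothetical
placeholders, and bdrv_is_allocated() stands in for the block-status query,
which for raw files boils down to lseek(SEEK_DATA)/lseek(SEEK_HOLE):)

    int64_t pnum;
    int ret;

    /* Querying up to the end of the disk can seek far beyond the range
     * the write notifier actually cares about: */
    ret = bdrv_is_allocated(bs, request_start,
                            disk_end - request_start, &pnum);

    /* Bounding the query by the request avoids that extra work in the
     * copy-before-write path: */
    ret = bdrv_is_allocated(bs, request_start,
                            request_end - request_start, &pnum);
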
>>
>>>
>>> I could understand saying this makes the code easier, but I actually
>>> don’t think it does.  If you implemented checking the allocation status
>>> only for areas where the bitmap is reset (which I think this patch
>>> should), then it’d just duplicate the existing loop.
>>>
>>>> Signed-off-by: Vladimir Sementsov-Ogievskiy <address@hidden>
>>>> ---
>>>>   block/backup.c | 38 +++++++++++++++++++++-----------------
>>>>   1 file changed, 21 insertions(+), 17 deletions(-)
>>>>
>>>> diff --git a/block/backup.c b/block/backup.c
>>>> index 11e27c844d..a4d37d2d62 100644
>>>> --- a/block/backup.c
>>>> +++ b/block/backup.c
>>>
>>> [...]
>>>
>>>> @@ -267,6 +267,18 @@ static int coroutine_fn backup_do_cow(BackupBlockJob *job,
>>>>       wait_for_overlapping_requests(job, start, end);
>>>>       cow_request_begin(&cow_request, job, start, end);
>>>> +    if (job->initializing_bitmap) {
>>>> +        int64_t off, chunk;
>>>> +
>>>> +        for (off = offset; off < end; off += chunk) {
>>>
>>> This is what I’m referring to, I think this loop should skip areas that
>>> are clean.
>>
>> Agree, will do.
> 
> Hmm, I remembered that I thought of it, but decided that it may not be
> efficient if bitmap fragmentation is higher than block-status fragmentation.
> So, if we don't know which is better, let's keep the simpler code.

Hmm2, but I see a compromise (maybe you meant exactly this): not using the
next zero as the limit (always using request_end instead), but skipping to
the next dirty bit before each iteration.
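
(A rough sketch of that compromise, for concreteness: the loop stays bounded
by request_end, but skips clean areas first. bdrv_dirty_bitmap_next_dirty()
is assumed here as a "find the next dirty bit" helper, mirroring the existing
bdrv_dirty_bitmap_next_zero(); it is not taken from this series:)

    for (off = offset; off < end; off += chunk) {
        /* Skip clean areas, so block_status is only queried where the
         * copy bitmap is still dirty: */
        off = bdrv_dirty_bitmap_next_dirty(job->copy_bitmap, off, end - off);
        if (off < 0) {
            break; /* nothing dirty left before request_end */
        }

        /* Still bounded by the request, not by the next zero bit: */
        ret = backup_bitmap_reset_unallocated(job, off, end - off, &chunk);
        if (ret < 0) {
            chunk = job->cluster_size;
        }
    }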

> 
>>
>>>
>>>> +            ret = backup_bitmap_reset_unallocated(job, off, end - off, &chunk);
>>>> +            if (ret < 0) {
>>>> +                chunk = job->cluster_size;
>>>> +            }
>>>> +        }
>>>> +    }
>>>> +    ret = 0;
>>>> +
>>>>       while (start < end) {
>>>>           int64_t dirty_end;
>>>> @@ -276,15 +288,6 @@ static int coroutine_fn backup_do_cow(BackupBlockJob *job,
>>>>               continue; /* already copied */
>>>>           }
>>>> -        if (job->initializing_bitmap) {
>>>> -            ret = backup_bitmap_reset_unallocated(job, start, &skip_bytes);
>>>> -            if (ret == 0) {
>>>> -                trace_backup_do_cow_skip_range(job, start, skip_bytes);
>>>> -                start += skip_bytes;
>>>> -                continue;
>>>> -            }
>>>> -        }
>>>> -
>>>>           dirty_end = bdrv_dirty_bitmap_next_zero(job->copy_bitmap, start,
>>>>                                                   end - start);
>>>>           if (dirty_end < 0) {
>>>
>>> Hm, you resolved that conflict differently from me.
>>>
>>> I decided the bdrv_dirty_bitmap_next_zero() call should go before the
>>> backup_bitmap_reset_unallocated() call so that we can then do a
>>>
>>>    dirty_end = MIN(dirty_end, start + skip_bytes);
>>>
>>> because we probably don’t want to copy anything past the range that
>>> backup_bitmap_reset_unallocated() has inquired about.
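
(Reconstructing that ordering as a hedged sketch, based only on the
description above rather than on Max's actual resolution; per the removed
hunk earlier in this patch, ret == 0 means the queried area was unallocated:)

    dirty_end = bdrv_dirty_bitmap_next_zero(job->copy_bitmap, start,
                                            end - start);
    if (dirty_end < 0) {
        dirty_end = end;
    }

    if (job->initializing_bitmap) {
        ret = backup_bitmap_reset_unallocated(job, start, &skip_bytes);
        if (ret == 0) {
            trace_backup_do_cow_skip_range(job, start, skip_bytes);
            start += skip_bytes;
            continue;
        }
        /* Don't copy anything past what the allocation query covered: */
        dirty_end = MIN(dirty_end, start + skip_bytes);
    }
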
>>>
>>>
>>> But then again this patch eliminates the difference anyway...
>>>> @@ -546,7 +549,8 @@ static int coroutine_fn backup_run(Job *job, Error **errp)
>>>>                   goto out;
>>>>               }
>>>> -            ret = backup_bitmap_reset_unallocated(s, offset, &count);
>>>> +            ret = backup_bitmap_reset_unallocated(s, offset, s->len - offset,
>>>> +                                                  &count);
>>>>               if (ret < 0) {
>>>>                   goto out;
>>>>               }
>>>>


-- 
Best regards,
Vladimir
