qemu-devel

Re: [PATCH 3/6] block/block-copy: refactor copying


From: Vladimir Sementsov-Ogievskiy
Subject: Re: [PATCH 3/6] block/block-copy: refactor copying
Date: Mon, 7 Oct 2019 16:29:09 +0000

07.10.2019 17:17, Max Reitz wrote:
> On 03.10.19 19:15, Vladimir Sementsov-Ogievskiy wrote:
>> Merge the copying code into one function, block_copy_do_copy(), which only
>> calls bdrv_* I/O functions and doesn't do any synchronization (like dirty
>> bitmap set/reset).
>>
>> Refactor the block_copy() function so that it makes the full decision about
>> the size of the chunk to be copied and does all the synchronization
>> (checking intersecting requests, setting/resetting dirty bitmaps).
>>
>> This will help to:
>>   - introduce parallel processing of block_copy iterations: we need to
>>     calculate the chunk size, start async chunk copying, and go to the
>>     next iteration
>>   - simplify synchronization improvements (like memory limiting in a
>>     further commit, and reducing the critical section (now we lock the
>>     whole requested range, when we actually only need to lock the dirty
>>     region we are handling at the moment))
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <address@hidden>
>> ---
>>   block/block-copy.c | 113 ++++++++++++++++++++-------------------------
>>   block/trace-events |   6 +--
>>   2 files changed, 53 insertions(+), 66 deletions(-)
> 
> Looks good to me, just some clean-up path nit picks below.
> 
>> diff --git a/block/block-copy.c b/block/block-copy.c
>> index 75287ce24d..cc49d2345d 100644
>> --- a/block/block-copy.c
>> +++ b/block/block-copy.c
>> @@ -126,25 +126,43 @@ void block_copy_set_callbacks(
>>   }
>>   
>>   /*
>> - * Copy range to target with a bounce buffer and return the bytes copied. If
>> - * error occurred, return a negative error number
>> + * block_copy_do_copy
>> + *
>> + * Do copy of a cluster-aligned chunk. @end is allowed to exceed s->len only
>> + * to cover the last cluster when s->len is not aligned to clusters.
>> + *
>> + * No sync here: neither bitmap nor intersecting-request handling, only copy.
>> + *
>> + * Returns 0 on success.
>>    */
>> -static int coroutine_fn block_copy_with_bounce_buffer(BlockCopyState *s,
>> -                                                      int64_t start,
>> -                                                      int64_t end,
>> -                                                      bool *error_is_read)
>> +static int coroutine_fn block_copy_do_copy(BlockCopyState *s,
>> +                                           int64_t start, int64_t end,
>> +                                           bool *error_is_read)
>>   {
>>       int ret;
>> -    int nbytes;
>> -    void *bounce_buffer = qemu_blockalign(s->source->bs, s->cluster_size);
>> +    int nbytes = MIN(end, s->len) - start;
>> +    void *bounce_buffer = NULL;
>>   
>>       assert(QEMU_IS_ALIGNED(start, s->cluster_size));
>> -    bdrv_reset_dirty_bitmap(s->copy_bitmap, start, s->cluster_size);
>> -    nbytes = MIN(s->cluster_size, s->len - start);
>> +    assert(QEMU_IS_ALIGNED(end, s->cluster_size));
>> +    assert(end < s->len || end == QEMU_ALIGN_UP(s->len, s->cluster_size));
>> +
>> +    if (s->use_copy_range) {
>> +        ret = bdrv_co_copy_range(s->source, start, s->target, start, nbytes,
>> +                                 0, s->write_flags);
>> +        if (ret < 0) {
>> +            trace_block_copy_copy_range_fail(s, start, ret);
>> +            s->use_copy_range = false;
>> +        } else {
>> +            return ret;
> 
> Maybe the “fail” label should be called “out” and then we could go there
> from here.  Doesn’t make much of a difference here (1 LoC either way),
> but maybe it’s a bit cleaner to always use the clean-up path in this
> function (even when there’s nothing to clean up).

Hmm, I always do an immediate return when possible (when the things that need
cleanup haven't happened yet). A matter of taste, of course; you are the
maintainer, so if you prefer the other way, I'll change it to a goto.
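For anyone following along, here is a small standalone C sketch (not QEMU
code, just plain malloc/free stand-ins) of the two styles in question:
returning immediately when the fast path succeeds and nothing has been
allocated yet, versus always leaving through a shared cleanup label.

#include <stdlib.h>
#include <string.h>

/* Generic illustration only: try a fast path first; on success either
 * return immediately (style of the posted patch) or jump to a shared
 * "out" label (the suggested alternative). */
static int copy_chunk(const char *src, char *dst, size_t nbytes, int fast_ok)
{
    int ret = 0;
    char *bounce = NULL;

    if (fast_ok) {
        memcpy(dst, src, nbytes);
        return 0;               /* immediate return: nothing to clean up yet */
        /* alternative: goto out;  -- always use the cleanup path            */
    }

    bounce = malloc(nbytes);
    if (!bounce) {
        ret = -1;
        goto out;
    }
    memcpy(bounce, src, nbytes);   /* stand-in for reading into the buffer */
    memcpy(dst, bounce, nbytes);   /* stand-in for writing it out */

out:
    free(bounce);                  /* free(NULL) is a no-op, so this is safe */
    return ret;
}

Either shape is correct here; the difference is only whether every exit goes
through the same label.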

> 
> *shrug*
> 
>> +        }
>> +    }
>> +
>> +    bounce_buffer = qemu_blockalign(s->source->bs, nbytes);
>>   
>>       ret = bdrv_co_pread(s->source, start, nbytes, bounce_buffer, 0);
>>       if (ret < 0) {
>> -        trace_block_copy_with_bounce_buffer_read_fail(s, start, ret);
>> +        trace_block_copy_read_fail(s, start, ret);
>>           if (error_is_read) {
>>               *error_is_read = true;
>>           }
> 
> [...]
> 
>> @@ -163,42 +181,12 @@ static int coroutine_fn block_copy_with_bounce_buffer(BlockCopyState *s,
>>   
>>       qemu_vfree(bounce_buffer);
>>   
>> -    return nbytes;
>> +    return 0;
>> +
>>   fail:
>>       qemu_vfree(bounce_buffer);
>> -    bdrv_set_dirty_bitmap(s->copy_bitmap, start, s->cluster_size);
>> -    return ret;
>> -
>> -}
> 
> Wouldn’t it be simpler to drop the “qemu_vfree(bounce_buffer); return
> 0;” above the fail label and just fall through?

Hmm, yes, that's the better and more usual pattern. Will do.
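As a concrete illustration of that fall-through shape (again just a
standalone sketch with libc stand-ins, not the actual block-copy code): the
error paths set ret and jump to the label, while the success path simply
continues into the same cleanup instead of duplicating the free and return.

#include <errno.h>
#include <stdlib.h>
#include <string.h>

/* Standalone sketch of the fall-through cleanup: one free, one return. */
static int copy_with_bounce(const unsigned char *src, unsigned char *dst,
                            size_t nbytes)
{
    int ret = 0;
    unsigned char *bounce = malloc(nbytes);

    if (!bounce) {
        return -ENOMEM;              /* nothing allocated yet */
    }

    memcpy(bounce, src, nbytes);     /* stand-in for the "read" step */
    if (bounce[0] == 0xff) {         /* stand-in for a write error */
        ret = -EIO;
        goto fail;
    }
    memcpy(dst, bounce, nbytes);     /* stand-in for the "write" step */

    /* no duplicated "free(bounce); return 0;" here -- just fall through */
fail:
    free(bounce);
    return ret;
}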

> 
> In any case:
> 
> Reviewed-by: Max Reitz <address@hidden>
> 


-- 
Best regards,
Vladimir
