From: Vladimir Sementsov-Ogievskiy
Subject: Re: [Qemu-devel] [PATCH v3 6/7] block/backup: teach backup_cow_with_bounce_buffer to copy more at once
Date: Tue, 13 Aug 2019 16:45:26 +0000

13.08.2019 19:30, Max Reitz wrote:
> On 13.08.19 17:32, Vladimir Sementsov-Ogievskiy wrote:
>> 13.08.2019 18:02, Max Reitz wrote:
>>> On 13.08.19 17:00, Vladimir Sementsov-Ogievskiy wrote:
>>>> 13.08.2019 17:57, Max Reitz wrote:
>>>>> On 13.08.19 16:39, Vladimir Sementsov-Ogievskiy wrote:
>>>>>> 13.08.2019 17:23, Max Reitz wrote:
>>>>>>> On 13.08.19 16:14, Vladimir Sementsov-Ogievskiy wrote:
> 
> [...]
> 
>>>>>>>> But still..
>>>>>>>>
>>>>>>>> Synchronous mirror allocates full-request buffers on guest write. Is 
>>>>>>>> it correct?
>>>>>>>>
>>>>>>>> If we assume that it is correct to double the memory usage of guest
>>>>>>>> operations, then for backup the problem is only in write_zero and
>>>>>>>> discard, where guest-assumed memory usage should be zero.
>>>>>>>
>>>>>>> Well, but that is the problem.  I didn’t say anything in v2, because I
>>>>>>> only thought of normal writes and I found it fine to double the memory
>>>>>>> usage there (a guest won’t issue huge write requests in parallel).  But
>>>>>>> discard/write-zeroes are a different matter.
>>>>>>>
>>>>>>>> And if we should distinguish writes from write_zeroes and discard,
>>>>>>>> it's better to postpone this improvement until after the backup-top
>>>>>>>> filter is merged.
>>>>>>>
>>>>>>> But do you need to distinguish it?  Why not just keep track of memory
>>>>>>> usage and put the current I/O coroutine to sleep in a CoQueue or
>>>>>>> something, and wake that up at the end of backup_do_cow()?
>>>>>>>
>>>>>>
>>>>>> 1. Because if we _can_ allow doubling of memory, it's more effective
>>>>>> not to restrict allocations on guest writes. It just seems to be the
>>>>>> more effective technique.
>>>>>
>>>>> But the problem with backup and zero writes/discards is that the memory
>>>>> is not doubled.  The request doesn’t need any memory, but the CBW
>>>>> operation does, and maybe lots of it.
>>>>>
>>>>> So the guest may issue many zero writes/discards in parallel and thus
>>>>> exhaust memory on the host.
>>>>
>>>> So this is the reason to separate writes from write-zeroes/discards. So
>>>> at least writes will be happy. And I think that writes are a more
>>>> frequent request than write-zeroes/discards.
>>>
>>> But that makes it complicated for no practical gain whatsoever.
>>>
>>>>>
>>>>>> 2. Anyway, I'd allow some always-available size to allocate - let it
>>>>>> be one cluster, which corresponds to the current behavior and prevents
>>>>>> a guest I/O hang in the worst case.
>>>>>
>>>>> The guest would only hang if we have to copy more than e.g. 64 MB at
>>>>> a time.  At which point I think it’s not unreasonable to sequentialize
>>>>> requests.
>>>
>>> Because of this.  How is it bad to start sequentializing writes when the
>>> data exceeds 64 MB?
>>>
>>
>> So you want a total memory limit of 64 MB? (with a possible parameter,
>> like in mirror)
>>
>> And allocation algorithm to copy count bytes:
>>
>> if free_mem >= count: allocate count bytes
>> else if free_mem >= cluster: allocate cluster and copy in a loop
>> else wait in co-queue until some memory available and retry
>>
>> Is it OK for you?
> 
> Sounds good to me, although I don’t know whether the second branch is
> necessary.  As I’ve said, the total limit is just an insurance against a
> guest that does some crazy stuff.
> 

I'm afraid that if there were one big request, it might wait forever while
smaller requests eat most of the available memory. So it would be an unfair
queue: smaller requests would get higher priority in the low-memory case.
With [2] it becomes fairer.
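A minimal sketch of the allocation scheme discussed above, including the
always-available-cluster fallback from point [2]. This is purely
illustrative Python (the class name, sizes, and use of a plain
threading.Condition are my own stand-ins; the real patch would be C using
QEMU's CoQueue to park coroutines), not the actual implementation:

```python
import threading

class CopyMemLimiter:
    """Illustrative memory limiter for copy-before-write buffers.

    Mirrors the proposed algorithm: grant the full request if it fits,
    fall back to one guaranteed cluster so a big request cannot starve
    behind many small ones, otherwise block until memory is released.
    """

    def __init__(self, total, cluster_size):
        self.free = total                  # total memory budget, in bytes
        self.cluster_size = cluster_size   # always-available fallback unit
        self.cond = threading.Condition()  # stand-in for QEMU's CoQueue

    def acquire(self, count):
        """Block until memory is granted; return bytes granted
        (either the full count, or one cluster)."""
        with self.cond:
            while True:
                if self.free >= count:
                    self.free -= count     # whole request fits: copy at once
                    return count
                if self.free >= self.cluster_size:
                    self.free -= self.cluster_size  # copy cluster-by-cluster
                    return self.cluster_size
                self.cond.wait()           # queue up until memory is freed

    def release(self, granted):
        with self.cond:
            self.free += granted
            self.cond.notify_all()         # wake waiters so they retry
```

A CBW operation of `count` bytes would loop acquire/copy/release until all
bytes are copied. The middle branch is what makes the queue fairer: a large
request that cannot get its full buffer still makes progress one cluster at
a time instead of waiting forever while small requests consume the budget.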



-- 
Best regards,
Vladimir
