From: Vladimir Sementsov-Ogievskiy
Subject: Re: [Qemu-devel] [PATCH] block: don't probe zeroes in bs->file by default on block_status
Date: Wed, 23 Jan 2019 11:53:38 +0000

22.01.2019 21:57, Kevin Wolf wrote:
> On 11.01.2019 at 12:40, Vladimir Sementsov-Ogievskiy wrote:
>> 11.01.2019 13:41, Kevin Wolf wrote:
>>> On 10.01.2019 at 14:20, Vladimir Sementsov-Ogievskiy wrote:
>>>> drv_co_block_status digs into bs->file for an additional, more accurate
>>>> search for holes inside a region reported as DATA by bs, since 5daa74a6ebc.
>>>>
>>>> This accuracy is not free: assume we have a qcow2 disk. qcow2 already
>>>> knows where the holes are and where the data is, but every block_status
>>>> request additionally calls lseek. For a big disk full of data, any
>>>> iterative copying block job (or img convert) will call lseek(SEEK_HOLE)
>>>> on every iteration, and each of these lseeks has to iterate through all
>>>> the metadata up to the end of the file. This is obviously inefficient
>>>> behavior, and for many scenarios we don't need this lseek at all.
>>>>
>>>> So, let's disable 5daa74a6ebc by default, leaving an option to return
>>>> the previous behavior, which is needed for scenarios with preallocated
>>>> images.
>>>>
>>>> Add iotest illustrating new option semantics.
>>>>
>>>> Signed-off-by: Vladimir Sementsov-Ogievskiy <address@hidden>
>>>
>>> I still think that an option isn't a good solution and we should try to
>>> use some heuristics instead.
>>
>> Do you think that heuristics would be better than a fair cache of lseek
>> results?
> 
> I just played a bit with this (qemu-img convert only), and how much
> caching lseek() results helps depends completely on the image. As it
> happened, my test image was the worst case where caching didn't buy us
> much. Obviously, I can just as easily construct an image where it makes
> a huge difference. I think that most real-world images should be able to
> take good advantage of it, though, and it doesn't hurt, so maybe that's
> a first thing that we can do in any case. It might not be the complete
> solution, though.
> 
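To make sure we are talking about the same idea, here is a rough standalone
sketch in plain C of what caching lseek() results could look like: remember
the last [data_start, hole_start) extent found by a SEEK_DATA/SEEK_HOLE pair
and answer later queries from it. This is not your actual patch, all names
are made up for illustration, and it assumes the file is not modified between
queries.

#define _GNU_SOURCE          /* for SEEK_DATA / SEEK_HOLE on glibc */
#include <stdbool.h>
#include <stdint.h>
#include <unistd.h>

typedef struct SeekCache {
    int64_t data_start;      /* first byte known to be data */
    int64_t hole_start;      /* first byte of the hole that follows */
    bool valid;
} SeekCache;

/*
 * Classify [offset, offset + bytes): set *is_data and *pnum (length of the
 * data or hole area, clamped to 'bytes').  Return false if lseek() fails
 * (e.g. ENXIO past EOF).
 */
static bool block_status_cached(int fd, SeekCache *c, int64_t offset,
                                int64_t bytes, bool *is_data, int64_t *pnum)
{
    int64_t end;

    if (c->valid && offset >= c->data_start && offset < c->hole_start) {
        /* Cache hit: this range is already known to be data, no syscall. */
        *is_data = true;
        end = c->hole_start;
    } else {
        off_t data = lseek(fd, offset, SEEK_DATA);
        if (data < 0) {
            return false;
        }
        if (data > offset) {
            /* [offset, data) is a hole. */
            *is_data = false;
            end = data;
        } else {
            off_t hole = lseek(fd, offset, SEEK_HOLE);
            if (hole < 0) {
                return false;
            }
            /* Remember the whole data extent for later queries. */
            c->data_start = offset;
            c->hole_start = hole;
            c->valid = true;
            *is_data = true;
            end = hole;
        }
    }

    *pnum = end - offset;
    if (*pnum > bytes) {
        *pnum = bytes;
    }
    return true;
}

A single-extent cache like this only helps when later queries land inside an
extent that has already been seen, which matters for how the test images below
behave.
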
> Let me explain my test images: The case where all of this actually
> matters for qemu-img convert is fragmented qcow2 images. If your image
> isn't fragmented, we don't do lseek() a lot anyway because a single
> bdrv_block_status() call already gives you the information for the whole
> image. So I constructed a fragmented image, by writing to it backwards:
> 
> ./qemu-img create -f qcow2 /tmp/test.qcow2 1G
> for i in $(seq 16384 -1 0); do
>      echo "write $((i * 65536)) 64k"
> done | ./qemu-io /tmp/test.qcow2
> 
> It's not really surprising that caching the lseek() results doesn't help
> much there as we're moving backwards and lseek() only returns results
> about the things after the current position, not before the current
> position. So this is probably the worst case.
> 
> So I constructed a second image, which is fragmented, too, but starts at
> the beginning of the image file:
> 
> ./qemu-img create -f qcow2 /tmp/test_forward.qcow2 1G
> for i in $(seq 0 2 16384); do
>      echo "write $((i * 65536)) 64k"
> done | ./qemu-io /tmp/test_forward.qcow2
> for i in $(seq 1 2 16384); do
>      echo "write $((i * 65536)) 64k"
> done | ./qemu-io /tmp/test_forward.qcow2
> 
> Here caching makes a huge difference:
> 
>      time ./qemu-img convert -p -n $IMG null-co://
> 
>                          uncached        cached
>      test.qcow2             ~145s         ~70s
>      test_forward.qcow2     ~110s        ~0.2s

I'm unsure about your results; at least, 0.2s means that we benefit from
cached reads, not from cached lseek.

= my results =

in short:

uncached read, first run after dropping caches (seconds):
+--------------+--------+------+------+-----------------+
|              | master | you  |  me  | master (retest) |
+--------------+--------+------+------+-----------------+
| test         |   30.4 | 32.6 | 33.9 |            32.4 |
| test_forward |   28.3 | 33.5 | 32.9 |            32.8 |
+--------------+--------+------+------+-----------------+

('you' is your patch, 'me' is my simple patch, see below)

(I retested master, as the first test run seemed noticeably faster than the
patched runs.) So, no significant difference can be seen (if we ignore the
first run), or I would need a lot more test runs to calculate a meaningful
average.
However, I don't expect any difference here; I'm afraid that lseek() time only
becomes noticeable in comparison with read() for much larger disks.
Also, the problems with lseek are mostly related to lseek bugs, and for now it
seems that my kernel is not buggy. What kernel and filesystem do you use to
get such a significant difference between cached and uncached lseek?
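
If it helps to compare environments, here is a throwaway micro-benchmark in
plain C (not part of any patch; the file to probe is just a command-line
argument) that times one lseek(SEEK_HOLE) call per 64k cluster of a file:

#define _GNU_SOURCE          /* for SEEK_HOLE on glibc */
#include <fcntl.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s FILE\n", argv[0]);
        return 1;
    }
    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    off_t size = lseek(fd, 0, SEEK_END);
    if (size < 0) {
        perror("lseek");
        return 1;
    }

    long calls = 0;
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (off_t off = 0; off < size; off += 65536) {
        /* The result is ignored; only the cost of the call matters here. */
        lseek(fd, off, SEEK_HOLE);
        calls++;
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%ld SEEK_HOLE calls in %.3f s (%.1f us/call)\n",
           calls, sec, sec * 1e6 / calls);
    close(fd);
    return 0;
}

Running it against the image file before and after dropping caches should show
whether lseek itself, rather than read, is what differs between our setups.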


On the other hand, the results with cached reads (seconds) are more interesting:

+--------------+--------+------+------+-----------------+
|              | master | you  |  me  | master (retest) |
+--------------+--------+------+------+-----------------+
| test         |   0.24 | 0.20 | 0.16 |            0.24 |
| test_forward |   0.24 | 0.16 | 0.16 |            0.24 |
+--------------+--------+------+------+-----------------+

And they show that my patch wins. So, no lseeks = no problems.
Moreover, keeping in mind that we in Virtuozzo don't have scenarios
where we would benefit from lseek, and after two slow-lseek bugs, it's
definitely safer for me to just keep one out-of-tree patch than to rely
on an lseek cache plus lseek, neither of which is needed in our case.
Of course, an lseek cache is a good thing, and your patch seems reasonable,
but I'll go with my patch anyway, and I have proposed an option to bring
such behavior to upstream, if someone wants it.

= more detailed test run description =

1. master branch

]# sync; echo 3 > /proc/sys/vm/drop_caches
]# time ./qemu-img convert -p -n /tmp/test.qcow2 null-co://
     (100.00/100%)

real    0m30.402s
user    0m0.361s
sys     0m0.859s
]# time ./qemu-img convert -p -n /tmp/test.qcow2 null-co://
     (100.00/100%)

real    0m0.240s
user    0m0.173s
sys     0m0.300s
]# time ./qemu-img convert -p -n /tmp/test.qcow2 null-co://
     (100.00/100%)

real    0m0.245s
user    0m0.166s
sys     0m0.286s
]# sync; echo 3 > /proc/sys/vm/drop_caches
]# time ./qemu-img convert -p -n /tmp/test_forward.qcow2 null-co://
     (100.00/100%)

real    0m28.291s
user    0m0.343s
sys     0m0.943s
]# time ./qemu-img convert -p -n /tmp/test_forward.qcow2 null-co://
     (100.00/100%)

real    0m0.231s
user    0m0.154s
sys     0m0.308s
]# time ./qemu-img convert -p -n /tmp/test_forward.qcow2 null-co://
     (100.00/100%)

real    0m0.241s
user    0m0.158s
sys     0m0.284s

2. your patch

]# sync; echo 3 > /proc/sys/vm/drop_caches
]# time ./qemu-img convert -p -n /tmp/test.qcow2 null-co://
     (100.00/100%)

real    0m32.634s
user    0m0.328s
sys     0m0.884s
]# time ./qemu-img convert -p -n /tmp/test.qcow2 null-co://
     (100.00/100%)

real    0m0.208s
user    0m0.169s
sys     0m0.304s
]# time ./qemu-img convert -p -n /tmp/test.qcow2 null-co://
     (100.00/100%)

real    0m0.189s
user    0m0.155s
sys     0m0.263s
]# time ./qemu-img convert -p -n /tmp/test.qcow2 null-co://
     (100.00/100%)

real    0m0.194s
user    0m0.154s
sys     0m0.273s
]# sync; echo 3 > /proc/sys/vm/drop_caches
]# time ./qemu-img convert -p -n /tmp/test_forward.qcow2 null-co://
     (100.00/100%)

real    0m33.486s
user    0m0.374s
sys     0m0.959s
]# time ./qemu-img convert -p -n /tmp/test_forward.qcow2 null-co://
     (100.00/100%)

real    0m0.160s
user    0m0.173s
sys     0m0.253s
]# time ./qemu-img convert -p -n /tmp/test_forward.qcow2 null-co://
     (100.00/100%)

real    0m0.165s
user    0m0.186s
sys     0m0.216s
]# time ./qemu-img convert -p -n /tmp/test_forward.qcow2 null-co://
     (100.00/100%)

real    0m0.168s
user    0m0.160s
sys     0m0.253s

3. my patch

]# sync; echo 3 > /proc/sys/vm/drop_caches
]# time ./qemu-img convert -p -n /tmp/test.qcow2 null-co://
     (100.00/100%)

real    0m33.915s
user    0m0.299s
sys     0m0.848s
]# time ./qemu-img convert -p -n /tmp/test.qcow2 null-co://
     (100.00/100%)

real    0m0.158s
user    0m0.158s
sys     0m0.267s
]# time ./qemu-img convert -p -n /tmp/test.qcow2 null-co://
     (100.00/100%)

real    0m0.167s
user    0m0.159s
sys     0m0.246s
]# time ./qemu-img convert -p -n /tmp/test.qcow2 null-co://
     (100.00/100%)

real    0m0.169s
user    0m0.175s
sys     0m0.236s
]# sync; echo 3 > /proc/sys/vm/drop_caches
]# time ./qemu-img convert -p -n /tmp/test_forward.qcow2 null-co://
     (100.00/100%)

real    0m32.980s
user    0m0.377s
sys     0m0.924s
]# time ./qemu-img convert -p -n /tmp/test_forward.qcow2 null-co://
     (100.00/100%)

real    0m0.152s
user    0m0.149s
sys     0m0.265s
]# time ./qemu-img convert -p -n /tmp/test_forward.qcow2 null-co://
     (100.00/100%)

real    0m0.162s
user    0m0.174s
sys     0m0.214s
]# time ./qemu-img convert -p -n /tmp/test_forward.qcow2 null-co://
     (100.00/100%)

real    0m0.164s
user    0m0.166s
sys     0m0.239s


4. master retest

]# sync; echo 3 > /proc/sys/vm/drop_caches
]# time ./qemu-img convert -p -n /tmp/test.qcow2 null-co://
     (100.00/100%)

real    0m32.418s
user    0m0.347s
sys     0m0.856s
]# time ./qemu-img convert -p -n /tmp/test.qcow2 null-co://
     (100.00/100%)

real    0m0.236s
user    0m0.153s
sys     0m0.316s
]# time ./qemu-img convert -p -n /tmp/test.qcow2 null-co://
     (100.00/100%)

real    0m0.232s
user    0m0.173s
sys     0m0.263s
]# time ./qemu-img convert -p -n /tmp/test.qcow2 null-co://
     (100.00/100%)

real    0m0.244s
user    0m0.160s
sys     0m0.287s
]# sync; echo 3 > /proc/sys/vm/drop_caches
]# time ./qemu-img convert -p -n /tmp/test_forward.qcow2 null-co://
     (100.00/100%)

real    0m32.846s
user    0m0.385s
sys     0m0.987s
]# time ./qemu-img convert -p -n /tmp/test_forward.qcow2 null-co://
     (100.00/100%)

real    0m0.243s
user    0m0.173s
sys     0m0.299s
]# time ./qemu-img convert -p -n /tmp/test_forward.qcow2 null-co://
     (100.00/100%)

real    0m0.243s
user    0m0.164s
sys     0m0.274s
]# time ./qemu-img convert -p -n /tmp/test_forward.qcow2 null-co://
     (100.00/100%)

real    0m0.243s
user    0m0.159s
sys     0m0.282s


= My patch =

diff --git a/block/io.c b/block/io.c
index bd9d688f8b..45e4a52ded 100644
--- a/block/io.c
+++ b/block/io.c
@@ -2186,34 +2186,6 @@ static int coroutine_fn bdrv_co_block_status(BlockDriverState *bs,
          }
      }

-    if (want_zero && local_file && local_file != bs &&
-        (ret & BDRV_BLOCK_DATA) && !(ret & BDRV_BLOCK_ZERO) &&
-        (ret & BDRV_BLOCK_OFFSET_VALID)) {
-        int64_t file_pnum;
-        int ret2;
-
-        ret2 = bdrv_co_block_status(local_file, want_zero, local_map,
-                                    *pnum, &file_pnum, NULL, NULL);
-        if (ret2 >= 0) {
-            /* Ignore errors.  This is just providing extra information, it
-             * is useful but not necessary.
-             */
-            if (ret2 & BDRV_BLOCK_EOF &&
-                (!file_pnum || ret2 & BDRV_BLOCK_ZERO)) {
-                /*
-                 * It is valid for the format block driver to read
-                 * beyond the end of the underlying file's current
-                 * size; such areas read as zero.
-                 */
-                ret |= BDRV_BLOCK_ZERO;
-            } else {
-                /* Limit request to the range reported by the protocol driver */
-                *pnum = file_pnum;
-                ret |= (ret2 & BDRV_BLOCK_ZERO);
-            }
-        }
-    }
-
  out:
      bdrv_dec_in_flight(bs);
      if (ret >= 0 && offset + *pnum == total_size) {



--
Best regards,
Vladimir
