Re: [PULL 02/17] block: Collapse padded I/O vecs exceeding IOV_MAX


From: Hanna Czenczek
Subject: Re: [PULL 02/17] block: Collapse padded I/O vecs exceeding IOV_MAX
Date: Fri, 9 Jun 2023 09:45:22 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Thunderbird/102.10.0

On 08.06.23 11:52, Peter Maydell wrote:
> On Mon, 5 Jun 2023 at 16:48, Hanna Czenczek <hreitz@redhat.com> wrote:
>> When processing vectored guest requests that are not aligned to the
>> storage request alignment, we pad them by adding head and/or tail
>> buffers for a read-modify-write cycle.
>
> Hi; Coverity complains (CID 1512819) that the assert added
> in this commit is always true:

>> @@ -1573,26 +1701,34 @@ static void bdrv_padding_destroy(BdrvRequestPadding *pad)
>>   static int bdrv_pad_request(BlockDriverState *bs,
>>                               QEMUIOVector **qiov, size_t *qiov_offset,
>>                               int64_t *offset, int64_t *bytes,
>> +                            bool write,
>>                               BdrvRequestPadding *pad, bool *padded,
>>                               BdrvRequestFlags *flags)
>>   {
>>       int ret;
>> +    struct iovec *sliced_iov;
>> +    int sliced_niov;
>> +    size_t sliced_head, sliced_tail;
>>
>>       bdrv_check_qiov_request(*offset, *bytes, *qiov, *qiov_offset, &error_abort);
>>
>> -    if (!bdrv_init_padding(bs, *offset, *bytes, pad)) {
>> +    if (!bdrv_init_padding(bs, *offset, *bytes, write, pad)) {
>>           if (padded) {
>>               *padded = false;
>>           }
>>           return 0;
>>       }
>>
>> -    ret = qemu_iovec_init_extended(&pad->local_qiov, pad->buf, pad->head,
>> -                                   *qiov, *qiov_offset, *bytes,
>> -                                   pad->buf + pad->buf_len - pad->tail,
>> -                                   pad->tail);
>> +    sliced_iov = qemu_iovec_slice(*qiov, *qiov_offset, *bytes,
>> +                                  &sliced_head, &sliced_tail,
>> +                                  &sliced_niov);
>> +
>> +    /* Guaranteed by bdrv_check_qiov_request() */
>> +    assert(*bytes <= SIZE_MAX);
>
> This one. (The Coverity complaint is because SIZE_MAX for it is
> UINT64_MAX and an int64_t can't possibly be bigger than that.)

> Is this because the assert() is deliberately handling the case
> of a 32-bit host where SIZE_MAX might not be UINT64_MAX? Or was
> the bound supposed to be SSIZE_MAX or INT64_MAX?

It’s supposed to be SIZE_MAX, because the subsequent bdrv_create_padded_qiov() call takes a size_t.  So it is indeed for the case where SIZE_MAX < UINT64_MAX, i.e. 32-bit hosts, yes.
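(To illustrate the hazard outside of QEMU: a minimal standalone sketch, with a hypothetical consume() standing in for the size_t-taking bdrv_create_padded_qiov().  On a 32-bit host, an int64_t byte count above 4 GiB silently wraps in the implicit conversion, which is exactly what the assertion is meant to rule out:)

#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in for a size_t-taking callee such as
 * bdrv_create_padded_qiov(); not actual QEMU code. */
static void consume(size_t bytes)
{
    printf("callee sees %zu bytes\n", bytes);
}

int main(void)
{
    int64_t bytes = INT64_C(5) * 1024 * 1024 * 1024; /* a 5 GiB request */

    /* Where SIZE_MAX == UINT32_MAX, this conversion wraps modulo 2^32,
     * so consume() sees 1 GiB instead of 5 GiB.  On an LP64 host it is
     * lossless, which is also why Coverity finds the assert trivially
     * true there. */
    consume((size_t)bytes);
    return 0;
}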

> Looking at bdrv_check_qiov_request(), it seems to check bytes
> against BDRV_MAX_LENGTH, which is defined as something somewhere
> near INT64_MAX. So on a 32-bit host I'm not sure that function
> does guarantee that the bytes count is going to be less than
> SIZE_MAX...

Ah, crap.  I was thinking of BDRV_REQUEST_MAX_BYTES, which is checked by bdrv_check_request32(); both callers of bdrv_pad_request() run that check before calling it.  So bdrv_pad_request() should use the same function.

I’ll send a patch to change the bdrv_check_qiov_request() call to bdrv_check_request32().  Because both callers (bdrv_co_preadv_part(), bdrv_co_pwritev_part()) already call it (the latter only for non-zero writes, but bdrv_pad_request() likewise is only called for non-zero writes), there should be no change in behavior, and the asserted condition should in practice already always hold (because the callers check it).
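(Concretely, a sketch of the intended change in bdrv_pad_request(), assuming bdrv_check_request32() keeps its current block/io.c signature and returns a negative errno on failure:)

ret = bdrv_check_request32(*offset, *bytes, *qiov, *qiov_offset);
if (ret < 0) {
    return ret;
}

...

/* Now actually guaranteed: bdrv_check_request32() bounds *bytes by
 * BDRV_REQUEST_MAX_BYTES, which is no greater than INT_MAX and
 * therefore fits in a size_t even on 32-bit hosts. */
assert(*bytes <= SIZE_MAX);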

Hanna



