
Re: [PATCH for-5.1] nbd: Fix large trim/zero requests


From: Eric Blake
Subject: Re: [PATCH for-5.1] nbd: Fix large trim/zero requests
Date: Thu, 23 Jul 2020 06:47:18 -0500
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.10.0

On 7/23/20 2:23 AM, Vladimir Sementsov-Ogievskiy wrote:
> 23.07.2020 00:22, Eric Blake wrote:
>> Although qemu as NBD client limits requests to <2G, the NBD protocol
>> allows clients to send requests almost all the way up to 4G.  But
>> because our block layer is not yet 64-bit clean, we accidentally wrap
>> such requests into a negative size, and fail with EIO instead of
>> performing the intended operation.
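
To make the wrap concrete, here is a minimal standalone illustration (hypothetical numbers, not code from the patch): a request length just under 4G is legal on the wire, but goes negative once it is narrowed to a signed 32-bit value, which is how the not-yet-64-bit-clean block layer sees it:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t nbd_len = 0xffc00000u;      /* ~4G request, valid per the NBD protocol */
        int32_t narrowed = (int32_t)nbd_len; /* the block layer's signed 32-bit view */

        /* Prints "4290772992 narrows to -4194304": a negative size, hence EIO */
        printf("%u narrows to %d\n", nbd_len, narrowed);
        return 0;
    }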


>> @@ -2378,8 +2378,17 @@ static coroutine_fn int nbd_handle_request(NBDClient *client,
>>           if (request->flags & NBD_CMD_FLAG_FAST_ZERO) {
>>               flags |= BDRV_REQ_NO_FALLBACK;
>>           }
>> -        ret = blk_pwrite_zeroes(exp->blk, request->from + exp->dev_offset,
>> -                                request->len, flags);
>> +        ret = 0;
>> +        /* FIXME simplify this when blk_pwrite_zeroes switches to 64-bit */
>> +        while (ret == 0 && request->len) {
>> +            int align = client->check_align ?: 1;
>> +            int len = MIN(request->len, QEMU_ALIGN_DOWN(BDRV_REQUEST_MAX_BYTES,
>> +                                                        align));
>> +            ret = blk_pwrite_zeroes(exp->blk, request->from + exp->dev_offset,
>> +                                    len, flags);
>> +            request->len -= len;
>> +            request->from += len;
>> +        }
>>           return nbd_send_generic_reply(client, request->handle, ret,
>>                                         "writing to file failed", errp);

It's easy enough to audit that blk_pwrite_zeroes() returns 0 (and not an arbitrary positive value) on success.
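
To see the per-iteration arithmetic of the new loop in isolation, here is a standalone sketch (the macros and constants are stand-ins for QEMU's MIN, QEMU_ALIGN_DOWN, and BDRV_REQUEST_MAX_BYTES, not the real definitions):

    #include <stdint.h>
    #include <stdio.h>

    #define ALIGN_DOWN(n, m) ((n) / (m) * (m))    /* round n down to a multiple of m */
    #define MIN(a, b)        ((a) < (b) ? (a) : (b))

    /* Cap one chunk at the largest aligned length the block layer accepts. */
    static uint32_t chunk_len(uint32_t remaining, uint32_t align, uint32_t max_bytes)
    {
        return MIN(remaining, ALIGN_DOWN(max_bytes, align));
    }

    int main(void)
    {
        uint32_t remaining = 0xffc00000u; /* ~4G request */
        uint32_t max = 0x7ffffe00u;       /* ~2G cap, stand-in for BDRV_REQUEST_MAX_BYTES */

        /* The ~4G request is served as two capped chunks instead of failing. */
        while (remaining) {
            uint32_t len = chunk_len(remaining, 512, max);
            printf("chunk of %u bytes\n", len);
            remaining -= len;
        }
        return 0;
    }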


>> @@ -2393,8 +2402,17 @@ static coroutine_fn int nbd_handle_request(NBDClient *client,
>>                                         "flush failed", errp);
>>
>>       case NBD_CMD_TRIM:
>> -        ret = blk_co_pdiscard(exp->blk, request->from + exp->dev_offset,
>> -                              request->len);
>> +        ret = 0;
>> +        /* FIXME simplify this when blk_co_pdiscard switches to 64-bit */
>> +        while (ret == 0 && request->len) {

> Did you check that none of the paths return a positive ret on success? I'd prefer to compare ret >= 0 in this temporary code so we don't have to care about that.

And for blk_co_pdiscard, the audit is already right here in the patch: pre-patch, ret = blk_co_pdiscard, followed by...


>> +            int align = client->check_align ?: 1;
>> +            int len = MIN(request->len, QEMU_ALIGN_DOWN(BDRV_REQUEST_MAX_BYTES,
>> +                                                        align));
>> +            ret = blk_co_pdiscard(exp->blk, request->from + exp->dev_offset,
>> +                                  len);
>> +            request->len -= len;
>> +            request->from += len;

> Hmm... modifying the function parameter. It's safe for now, as the nbd_handle_request() call is the last use of request in nbd_trip().

>> +        }
>>           if (ret == 0 && request->flags & NBD_CMD_FLAG_FUA) {

...a check for ret == 0.

>>               ret = blk_co_flush(exp->blk);
>>           }




Yes, I can respin the patch to use >= 0 as the check for success in both functions, but it doesn't change the behavior.
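
Concretely, the respin would only relax the success checks in the quoted hunks, along these lines (a sketch of the trim path, not the actual v2):

    ret = 0;
    /* FIXME simplify this when blk_co_pdiscard switches to 64-bit */
    while (ret >= 0 && request->len) {
        int align = client->check_align ?: 1;
        int len = MIN(request->len, QEMU_ALIGN_DOWN(BDRV_REQUEST_MAX_BYTES,
                                                    align));
        ret = blk_co_pdiscard(exp->blk, request->from + exp->dev_offset,
                              len);
        request->len -= len;
        request->from += len;
    }
    if (ret >= 0 && request->flags & NBD_CMD_FLAG_FUA) {
        ret = blk_co_flush(exp->blk);
    }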

--
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3226
Virtualization:  qemu.org | libvirt.org



