qemu-block

Re: [PATCH v5 5/5] parallels: Image repairing in parallels_open()


From: Alexander Ivanov
Subject: Re: [PATCH v5 5/5] parallels: Image repairing in parallels_open()
Date: Sun, 11 Jun 2023 16:45:04 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Thunderbird/102.11.0



On 6/9/23 15:41, Hanna Czenczek wrote:
On 09.06.23 15:21, Alexander Ivanov wrote:


On 6/2/23 16:59, Hanna Czenczek wrote:
On 29.05.23 17:15, Alexander Ivanov wrote:
Repair an image at opening if the image is unclean or out-of-image
corruption was detected.

Signed-off-by: Alexander Ivanov <alexander.ivanov@virtuozzo.com>
---
  block/parallels.c | 65 +++++++++++++++++++++++++----------------------
  1 file changed, 35 insertions(+), 30 deletions(-)

diff --git a/block/parallels.c b/block/parallels.c
index d64e8007d5..7bbd5cb112 100644
--- a/block/parallels.c
+++ b/block/parallels.c

[...]

@@ -1130,6 +1101,40 @@ static int parallels_open(BlockDriverState *bs, QDict *options, int flags,
          goto fail;
      }
      qemu_co_mutex_init(&s->lock);
+
+    if (le32_to_cpu(ph.inuse) == HEADER_INUSE_MAGIC) {
+        s->header_unclean = true;
+    }
+
+    for (i = 0; i < s->bat_size; i++) {
+        sector = bat2sect(s, i);
+        if (sector + s->tracks > s->data_end) {
+            s->data_end = sector + s->tracks;
+        }
+    }
+
+    /*
+     * We don't repair the image here if it's opened for checks. Also we don't
+     * want to change inactive images and can't change readonly images.
+     */
+    if ((flags & (BDRV_O_CHECK | BDRV_O_INACTIVE)) || !(flags & BDRV_O_RDWR)) {
+        return 0;
+    }
+
+    /*
+     * Repair the image if it's dirty or
+     * out-of-image corruption was detected.
+     */
+    if (s->data_end > file_nb_sectors || s->header_unclean) {
+        BdrvCheckResult res;
+        ret = bdrv_check(bs, &res, BDRV_FIX_ERRORS | BDRV_FIX_LEAKS);
+        if (ret < 0) {

Should we also verify that res->corruptions == res->corruptions_fixed && res->check_errors == 0?
If ret == 0, then res->check_errors must be 0 and res->corruptions must equal res->corruptions_fixed.

OK.
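For illustration, a minimal sketch of what such a stricter check could look like (my sketch, not part of the posted patch; it only relies on the BdrvCheckResult fields mentioned above):

```
    if (s->data_end > file_nb_sectors || s->header_unclean) {
        BdrvCheckResult res;
        ret = bdrv_check(bs, &res, BDRV_FIX_ERRORS | BDRV_FIX_LEAKS);
        /* Fail the open if the check errored out or could not fix everything. */
        if (ret < 0 || res.check_errors != 0 ||
            res.corruptions != res.corruptions_fixed) {
            error_setg(errp, "Could not repair corrupted image");
            ret = ret < 0 ? ret : -EINVAL;
            goto fail;
        }
    }
```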


+            error_free(s->migration_blocker);

I’d move this clean-up to a new error path below, then we could even reuse that where migrate_add_blocker() fails.
Is it guaranteed that s->migration_blocker is NULL at the beginning of parallels_open()? If so, it would be easy to move the clean-up;
otherwise it could complicate the code.

Three answers here:

First, I just realized that we probably need to undo the migrate_add_blocker() call, too, i.e. call migrate_del_blocker() here.

Second, I’m pretty sure that s->migration_blocker must be NULL before the error_setg(&s->migration_blocker) call, because error_setg() asserts that the *errp passed to it is NULL.

Third, I meant to add a new path e.g.:

```
fail_blocker:
    error_free(s->migration_blocker);
fail_format:
[...]
```

And then use `goto fail_blocker;` here and in the migrate_add_blocker() error path, so it shouldn’t really matter whether s->migration_blocker is NULL before the error_setg() call.  But then again, I think the probably necessary migrate_del_blocker() call complicates things further.
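Spelled out, that might look roughly like this (only a sketch extending the snippet above; both the repair error path and the migrate_add_blocker() error path would do `goto fail_blocker;`):

```
fail_blocker:
    /*
     * Sketch: undo the blocker setup in one place.  Whether this is also
     * safe when coming from the migrate_add_blocker() failure path, where
     * the blocker was never actually added, is part of the complication
     * mentioned above.
     */
    migrate_del_blocker(s->migration_blocker);
    error_free(s->migration_blocker);
fail_format:
[...]
```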

Hanna
Do we need to run the rest of the parallels_close() code?

    if ((bs->open_flags & BDRV_O_RDWR) && !(bs->open_flags & BDRV_O_INACTIVE)) {
        s->header->inuse = 0;
        parallels_update_header(bs);

        /* errors are ignored, so we might as well pass exact=true */
        bdrv_truncate(bs->file, s->data_end << BDRV_SECTOR_BITS, true,
                      PREALLOC_MODE_OFF, 0, NULL);
    }

    g_free(s->bat_dirty_bmap);

If so, maybe it would be better to call parallels_close()?
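To make that concrete, a rough sketch of the parallels_close() alternative (my assumption, not from the posted patch; it presumes that everything parallels_close() touches, such as the header mapping and the migration blocker, is already initialised at this point in parallels_open(), and it would need a forward declaration for parallels_close() if that is defined later in the file):

```
    if (s->data_end > file_nb_sectors || s->header_unclean) {
        BdrvCheckResult res;
        ret = bdrv_check(bs, &res, BDRV_FIX_ERRORS | BDRV_FIX_LEAKS);
        if (ret < 0) {
            error_setg_errno(errp, -ret, "Could not repair corrupted image");
            /* Undo the whole open sequence, including the in-use flag reset
             * and the truncation done by parallels_close(). */
            parallels_close(bs);
            return ret;
        }
    }
```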


Anyway, not wrong as-is, just a suggestion, so:

Reviewed-by: Hanna Czenczek <hreitz@redhat.com>

+            error_setg_errno(errp, -ret, "Could not repair corrupted image");
+            goto fail;
+        }
+    }
+
      return 0;
    fail_format:




--
Best regards,
Alexander Ivanov



