From: Sergio Lopez
Subject: Re: [PATCH v3 0/8] blockdev: avoid acquiring AioContext lock twice at do_drive_backup and do_blockdev_backup
Date: Tue, 19 Nov 2019 13:31:09 +0100
User-agent: mu4e 1.2.0; emacs 26.2

Kevin Wolf <address@hidden> writes:
> Am 19.11.2019 um 12:35 hat Sergio Lopez geschrieben:
>>
>> Kevin Wolf <address@hidden> writes:
>>
>> > Am 19.11.2019 um 11:54 hat Sergio Lopez geschrieben:
>> >>
>> >> Max Reitz <address@hidden> writes:
>> >>
>> >> > On 13.11.19 14:24, Sergio Lopez wrote:
>> >> >>
>> >> >> Sergio Lopez <address@hidden> writes:
>> >> >>
>> >> >>> address@hidden writes:
>> >> >>>
>> >> >>>> Patchew URL: https://patchew.org/QEMU/address@hidden/
>> >> >>>>
>> >> >>>>
>> >> >>>>
>> >> >>>> Hi,
>> >> >>>>
>> >> >>>> This series failed the docker-quick@centos7 build test. Please find
>> >> >>>> the testing commands and their output below. If you have Docker
>> >> >>>> installed, you can probably reproduce it locally.
>> >> >>>>
>> >> >>>> === TEST SCRIPT BEGIN ===
>> >> >>>> #!/bin/bash
>> >> >>>> make docker-image-centos7 V=1 NETWORK=1
>> >> >>>> time make docker-test-quick@centos7 SHOW_ENV=1 J=14 NETWORK=1
>> >> >>>> === TEST SCRIPT END ===
>> >> >>>>
>> >> >>>> TEST iotest-qcow2: 268
>> >> >>>> Failures: 141
>> >> >>>
>> >> >>> Hm... 141 didn't fail in my test machine. I'm going to have a look.
>> >> >>
>> >> >> So here's the output:
>> >> >>
>> >> >> --- /root/qemu/tests/qemu-iotests/141.out       2019-11-12 04:43:27.651557587 -0500
>> >> >> +++ /root/qemu/build/tests/qemu-iotests/141.out.bad     2019-11-13 08:12:06.575967337 -0500
>> >> >> @@ -10,6 +10,8 @@
>> >> >>  Formatting 'TEST_DIR/o.IMGFMT', fmt=IMGFMT size=1048576 backing_file=TEST_DIR/t.IMGFMT backing_fmt=IMGFMT
>> >> >>  {"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "job0"}}
>> >> >>  {"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "job0"}}
>> >> >> +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "paused", "id": "job0"}}
>> >> >> +{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "job0"}}
>> >> >>  {"error": {"class": "GenericError", "desc": "Node 'drv0' is busy: node is used as backing hd of 'NODE_NAME'"}}
>> >> >>  {"return": {}}
>> >> >>  {"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "job0"}}
>> >> >>
>> >> >> Those extra lines, the "paused" and "running", are a result of the job
>> >> >> being done in a transaction, within a drained section.
>> >> >>
>> >> >> We can update 141.out, but now I'm wondering: was it safe to create
>> >> >> the job at do_drive_backup() outside of a drained section, as
>> >> >> qmp_drive_backup was doing?
>> >> >
>> >> > I think it is. Someone needs to drain the source node before attaching
>> >> > the job filter (which intercepts writes), and bdrv_backup_top_append()
>> >> > does precisely this.
>> >> >
>> >> > If the source node is in an I/O thread, you could argue that the drain
>> >> > starts later than when the user has invoked the backup command, and so
>> >> > some writes might slip through. That’s correct. But at the same time,
>> >> > it’s impossible to drain it the instant the command is received. So
>> >> > some writes might always slip through (and the drain will not stop them
>> >> > either, it will just let them happen).
>> >> >
>> >> > Therefore, I think it’s fine the way it is.
>> >> >
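For reference, the ordering Max describes can be modelled with a small
self-contained toy program (stub types and helpers only; this is not the
real QEMU API, where bdrv_backup_top_append() lives in block/backup-top.c
and takes more parameters):

#include <stdio.h>

/* Toy stand-ins for QEMU's BlockDriverState and the drain helpers. */
typedef struct ToyBDS {
    const char *name;
    int quiesced;
} ToyBDS;

static void toy_drained_begin(ToyBDS *bs)
{
    bs->quiesced = 1;
    printf("drain %s: in-flight requests settle, new ones are held back\n",
           bs->name);
}

static void toy_drained_end(ToyBDS *bs)
{
    bs->quiesced = 0;
    printf("undrain %s: requests resume\n", bs->name);
}

/* Models the point made above: the copy-before-write filter has to be
 * inserted while the source is quiesced, so no write can bypass it. */
static void toy_backup_top_append(ToyBDS *source)
{
    toy_drained_begin(source);
    printf("insert backup-top filter above %s\n", source->name);
    toy_drained_end(source);
}

int main(void)
{
    ToyBDS src = { "drv0", 0 };
    toy_backup_top_append(&src);
    return 0;
}
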
>> >> >> Do you think there may be any potential drawbacks as a result of always
>> >> >> doing it now inside a drained section?
>> >> >
>> >> > Well, one drawback is clearly visible. The job goes to paused for no
>> >> > reason.
>> >>
>> >> This is something that already happens when requesting the drive-backup
>> >> through a transaction:
>> >>
>> >> {"execute":"transaction","arguments":{"actions":[{"type":"drive-backup","data":{"device":"drv0","target":"o.qcow2","sync":"full","format":"qcow2"}}]}}
>> >>
>> >> I don't think it makes sense to have two different behaviors for the
>> >> same action. So we either accept the additional pause+resume iteration
>> >> for qmp_drive_backup, or we remove the drained section from the
>> >> transaction-based one.
>> >>
>> >> What do you think?
>> >
>> > Draining all involved nodes is necessary for transactions, because you
>> > want a consistent backup across all involved disks. That is, you want it
>> > to be a snapshot at the same point in time for all of them - no requests
>> > may happen between starting backup on the first and the second disk.
>> >
>> > For a single device operation, this requirement doesn't exist, because
>> > there is nothing else that must happen at the same point in time.
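Put differently, a transaction has to quiesce every involved node before
any of the backup jobs is set up, and guest I/O only resumes once all of
them are in place. Reusing the toy helpers from the sketch above, the
ordering is roughly (illustrative only, not the actual transaction code):

/* Illustrative only: a consistent multi-disk backup drains all nodes
 * before any job is created, so every backup captures the same point
 * in time across all disks. */
static void toy_transaction_backup(ToyBDS *nodes, int n)
{
    int i;

    for (i = 0; i < n; i++) {
        toy_drained_begin(&nodes[i]);    /* stop new I/O on every disk */
    }
    for (i = 0; i < n; i++) {
        printf("create backup job for %s\n", nodes[i].name);
    }
    for (i = 0; i < n; i++) {
        toy_drained_end(&nodes[i]);      /* resume I/O; all jobs exist */
    }
}
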
>>
>> This poses a problem with the unification strategy you suggested for QMP
>> commands and transactions. I guess that, if we really want to preserve
>> the original behavior, we can extend DriveBackup to add a flag to
>> indicate whether the transaction should create a drained section or not.
>>
>> Does this sound reasonable to you?
>
> I think we can accept an unnecessary drain for the single-device case.
> It's only minimally worse than not draining early (because, as Max said,
> we'll drain the node anyway later).
>
> I'm not sure what the code looks like, but does the job go to paused
> even when it's already created inside the drained section? (As opposed
> to first creating the job and then draining.) I assume that this is what
> you're already doing, just double-checking.

Yes, that's the case. drive_backup_prepare() calls bdrv_drained_begin()
first, and then do_backup_common(), which creates the job.
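In the same toy notation as the sketches above, the sequence is roughly
(a sketch of the ordering only, assuming the matching drained_end happens
later in the action's clean phase; this is not the real blockdev.c code):

/* The job is created while the source is already quiesced, which is why
 * it briefly enters "paused" and produces the extra events in iotest 141. */
static void toy_drive_backup_prepare(ToyBDS *bs)
{
    toy_drained_begin(bs);                                     /* quiesce first */
    printf("do_backup_common: create job on %s\n", bs->name);  /* job starts paused */
}

static void toy_drive_backup_clean(ToyBDS *bs)
{
    toy_drained_end(bs);   /* drain ends here; the job goes back to running */
}
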
> If this is how things work, I'd just adjust the test output and explain
> the change in the commit message.

OK, I'll prepare a v4 with a rework of the patchset and an update to the
job.

Thanks,
Sergio.