qemu-devel

Re: [PATCH v2 06/14] block: remove AioContext locking


From: Stefan Hajnoczi
Subject: Re: [PATCH v2 06/14] block: remove AioContext locking
Date: Tue, 19 Dec 2023 15:04:01 -0500

On Tue, 19 Dec 2023 at 10:59, Kevin Wolf <kwolf@redhat.com> wrote:
>
> Am 05.12.2023 um 19:20 hat Stefan Hajnoczi geschrieben:
> > This is the big patch that removes
> > aio_context_acquire()/aio_context_release() from the block layer and
> > affected block layer users.
> >
> > There isn't a clean way to split this patch and the reviewers are likely
> > the same group of people, so I decided to do it in one patch.
> >
> > Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> > Reviewed-by: Eric Blake <eblake@redhat.com>
> > Reviewed-by: Kevin Wolf <kwolf@redhat.com>
> > Reviewed-by: Paul Durrant <paul@xen.org>
>
> > diff --git a/migration/block.c b/migration/block.c
> > index a15f9bddcb..2bcfcbfdf6 100644
> > --- a/migration/block.c
> > +++ b/migration/block.c
> > @@ -313,22 +311,10 @@ static int mig_save_device_bulk(QEMUFile *f, BlkMigDevState *bmds)
> >      block_mig_state.submitted++;
> >      blk_mig_unlock();
> >
> > -    /* We do not know if bs is under the main thread (and thus does
> > -     * not acquire the AioContext when doing AIO) or rather under
> > -     * dataplane.  Thus acquire both the iothread mutex and the
> > -     * AioContext.
> > -     *
> > -     * This is ugly and will disappear when we make bdrv_* thread-safe,
> > -     * without the need to acquire the AioContext.
> > -     */
> > -    qemu_mutex_lock_iothread();
> > -    aio_context_acquire(blk_get_aio_context(bmds->blk));
> >      bdrv_reset_dirty_bitmap(bmds->dirty_bitmap, cur_sector * BDRV_SECTOR_SIZE,
> >                              nr_sectors * BDRV_SECTOR_SIZE);
> >      blk->aiocb = blk_aio_preadv(bb, cur_sector * BDRV_SECTOR_SIZE, &blk->qiov,
> >                                  0, blk_mig_read_cb, blk);
> > -    aio_context_release(blk_get_aio_context(bmds->blk));
> > -    qemu_mutex_unlock_iothread();
> >
> >      bmds->cur_sector = cur_sector + nr_sectors;
> >      return (bmds->cur_sector >= total_sectors);
>
> With this hunk applied, qemu-iotests 183 fails:
>
> (gdb) bt
> #0  0x000055aaa7d47c09 in bdrv_graph_co_rdlock () at ../block/graph-lock.c:176
> #1  0x000055aaa7d3de2e in graph_lockable_auto_lock (x=<optimized out>) at /home/kwolf/source/qemu/include/block/graph-lock.h:215
> #2  blk_co_do_preadv_part (blk=0x7f38a4000f30, offset=0, bytes=1048576, qiov=0x7f38a40250f0, qiov_offset=qiov_offset@entry=0, flags=0) at ../block/block-backend.c:1340
> #3  0x000055aaa7d3e006 in blk_aio_read_entry (opaque=0x7f38a4025140) at ../block/block-backend.c:1620
> #4  0x000055aaa7e7aa5b in coroutine_trampoline (i0=<optimized out>, i1=<optimized out>) at ../util/coroutine-ucontext.c:177
> #5  0x00007f38d14dbe90 in __start_context () at /lib64/libc.so.6
> #6  0x00007f38b3dfa060 in  ()
> #7  0x0000000000000000 in  ()
>
> qemu_get_current_aio_context() returns NULL now. I don't completely
> understand why it depends on the BQL, but adding the BQL locking back
> fixes it.

Thanks for letting me know. I have reviewed migration/block.c and
agree that taking the BQL is correct.

I have inlined the fix below and will resend this patch.

Stefan
---
diff --git a/migration/block.c b/migration/block.c
index 2bcfcbfdf6..6ec6a1d6e6 100644
--- a/migration/block.c
+++ b/migration/block.c
@@ -311,10 +311,17 @@ static int mig_save_device_bulk(QEMUFile *f, BlkMigDevState *bmds)
     block_mig_state.submitted++;
     blk_mig_unlock();

+    /*
+     * The migration thread does not have an AioContext. Lock the BQL so that
+     * I/O runs in the main loop AioContext (see
+     * qemu_get_current_aio_context()).
+     */
+    qemu_mutex_lock_iothread();
     bdrv_reset_dirty_bitmap(bmds->dirty_bitmap, cur_sector * BDRV_SECTOR_SIZE,
                             nr_sectors * BDRV_SECTOR_SIZE);
     blk->aiocb = blk_aio_preadv(bb, cur_sector * BDRV_SECTOR_SIZE, &blk->qiov,
                                 0, blk_mig_read_cb, blk);
+    qemu_mutex_unlock_iothread();

     bmds->cur_sector = cur_sector + nr_sectors;
     return (bmds->cur_sector >= total_sectors);


