[Qemu-devel] [PULL 11/71] nbd: Restrict connection_co reentrance
From: Kevin Wolf
Subject: [Qemu-devel] [PULL 11/71] nbd: Restrict connection_co reentrance
Date: Mon, 25 Feb 2019 16:19:53 +0100
nbd_client_attach_aio_context() schedules connection_co in the new
AioContext and this way reenters it at whatever arbitrary yield point
the coroutine happens to sit. We can restrict this a bit to the one
function call where the coroutine actually sits waiting when it's idle.
This doesn't solve any bug yet, but it shows where in the code we need
to support this random reentrance and where we don't have to care.
Add FIXME comments for the existing bugs that the rest of this series
will fix.
Signed-off-by: Kevin Wolf <address@hidden>
Reviewed-by: Eric Blake <address@hidden>
---
block/nbd-client.h | 1 +
block/nbd-client.c | 22 ++++++++++++++++++++++
2 files changed, 23 insertions(+)
diff --git a/block/nbd-client.h b/block/nbd-client.h
index d990207a5c..09e03013d2 100644
--- a/block/nbd-client.h
+++ b/block/nbd-client.h
@@ -35,6 +35,7 @@ typedef struct NBDClientSession {
 
     NBDClientRequest requests[MAX_NBD_REQUESTS];
     NBDReply reply;
+    BlockDriverState *bs;
     bool quit;
 } NBDClientSession;
diff --git a/block/nbd-client.c b/block/nbd-client.c
index f0ad54ce21..5ce4aa9520 100644
--- a/block/nbd-client.c
+++ b/block/nbd-client.c
@@ -76,8 +76,24 @@ static coroutine_fn void nbd_connection_entry(void *opaque)
     Error *local_err = NULL;
 
     while (!s->quit) {
+        /*
+         * The NBD client can only really be considered idle when it has
+         * yielded from qio_channel_readv_all_eof(), waiting for data. This is
+         * the point where the additional scheduled coroutine entry happens
+         * after nbd_client_attach_aio_context().
+         *
+         * Therefore we keep an additional in_flight reference all the time
+         * and only drop it temporarily here.
+         *
+         * FIXME This is not safe because the QIOChannel could wake up the
+         * coroutine for a second time; it is not prepared for coroutine
+         * resumption from external code.
+         */
+        bdrv_dec_in_flight(s->bs);
         assert(s->reply.handle == 0);
         ret = nbd_receive_reply(s->ioc, &s->reply, &local_err);
+        bdrv_inc_in_flight(s->bs);
+
         if (local_err) {
             trace_nbd_read_reply_entry_fail(ret, error_get_pretty(local_err));
             error_free(local_err);
@@ -116,6 +132,8 @@ static coroutine_fn void nbd_connection_entry(void *opaque)
     s->quit = true;
     nbd_recv_coroutines_wake_all(s);
 
+    bdrv_dec_in_flight(s->bs);
+
     s->connection_co = NULL;
     aio_wait_kick();
 }
@@ -970,6 +988,8 @@ void nbd_client_attach_aio_context(BlockDriverState *bs,
 {
     NBDClientSession *client = nbd_get_client_session(bs);
     qio_channel_attach_aio_context(QIO_CHANNEL(client->ioc), new_context);
+
+    /* FIXME Really need a bdrv_inc_in_flight() here */
     aio_co_schedule(new_context, client->connection_co);
 }
@@ -1076,6 +1096,7 @@ static int nbd_client_connect(BlockDriverState *bs,
      * kick the reply mechanism. */
     qio_channel_set_blocking(QIO_CHANNEL(sioc), false, NULL);
     client->connection_co = qemu_coroutine_create(nbd_connection_entry,
                                                   client);
+    bdrv_inc_in_flight(bs);
     nbd_client_attach_aio_context(bs, bdrv_get_aio_context(bs));
 
     logout("Established connection with NBD server\n");
@@ -1108,6 +1129,7 @@ int nbd_client_init(BlockDriverState *bs,
 {
     NBDClientSession *client = nbd_get_client_session(bs);
 
+    client->bs = bs;
     qemu_co_mutex_init(&client->send_mutex);
     qemu_co_queue_init(&client->free_sema);
--
2.20.1