qemu-devel

Re: [PATCH v6 00/25] monitor: add asynchronous command type


From: Marc-André Lureau
Subject: Re: [PATCH v6 00/25] monitor: add asynchronous command type
Date: Fri, 13 Dec 2019 20:28:12 +0400

Hi

On Fri, Dec 13, 2019 at 8:04 PM Kevin Wolf <address@hidden> wrote:
>
> Am 08.11.2019 um 16:00 hat Marc-André Lureau geschrieben:
> > The following series implements an internal async command solution
> > instead. By introducing a session context and a command return
> > handler, QMP handlers can:
> > - defer the return, allowing the mainloop to reenter
> > - return only to the caller (instead of the broadcast event reply)
> > - optionally allow cancellation when the client is gone
> > - track on-going qapi command(s) per session
>
> This requires major changes to existing QMP command handlers. Did you
> consider at least optionally providing a way where instead of using the
> new async_fn, QMP already creates a coroutine in which the handler is
> executed?

Yes, but I don't see how this could be done without the basic callback
infrastructure I propose here. Also, forcing existing code to become
coroutine-aware is probably even more complicated.

>
> At least for some of the block layer commands, we could simply enable
> this without changing any of the code and would automatically get rid of
> blocking the guest while the command is doing I/O. If we need to
> implement .async_fn, in contrast, it would be quite a bit of boiler
> plate code for each single handler to create a coroutine for the real
> handler and to wrap all parameters in a struct.

We could eventually have the generator do that for you, and spawn the coroutine.

>
> I started playing a bit with this and it didn't look too bad, but we
> can't move every command handler to a coroutine without auditing it, so
> I would have had to add a new option to the QAPI schema - and at that
> point I thought that I might end up duplicating a lot of your code if I
> continued. So I'm now trying to get your opinion or advice before I
> continue with anything in that direction.

thanks for looking at this old series!

>
> > This does not introduce new QMP APIs or client-visible changes; the
> > commands are handled in order, and the replies still come in order
> > (even when handlers finish out of order).
> >
> > Existing qemu commands can be gradually replaced by async:true
> > variants when needed, while carefully reviewing the concurrency
> > aspects. The async:true command marshaller helpers are split in
> > two: the calling and the return functions. The command is called with a
> > QmpReturn context, that can return immediately or later, using the
> > generated return helper.
>
> This part would certainly become simpler with coroutines (the marshaller
> could stay unchanged).

That's not much change, honestly. I am not sure sneaking a coroutine
in behind its back is going to be simpler; I would need to look at it.

>
> > The screendump command is converted to an async:true version to solve
> > rhbz#1230527. The command shows basic cancellation (this could be
> > extended if needed). It could be further improved to do asynchronous
> > IO writes as well.
>
> After converting it to QIOChannel like you already do, I/O would
> automatically become asynchronous when run in a coroutine.
>
> If you don't want this yet, but only fix the BZ, I guess you could delay
> that patch until later and just have a single yield and reenter of the
> command handler coroutine like this:
>
>     co = qemu_coroutine_self();
>     aio_co_schedule(qemu_coroutine_get_aio_context(co), co);
>     qemu_coroutine_yield();
>

If various places in the code start doing that, we are in trouble: the
stack may grow, and cancellation becomes hairy.

Furthermore, in the case of screendump, the IO does not necessarily
happen within the coroutine context. In this case, we need to wait for
the QXL device to "flush" the screen. Communicating this event back to
the coroutine isn't simpler than what I propose here.

Thanks!

-- 
Marc-André Lureau
