Re: Maximum QMP reply size


From: Daniel P. Berrangé
Subject: Re: Maximum QMP reply size
Date: Fri, 16 Sep 2022 16:36:20 +0100
User-agent: Mutt/2.2.6 (2022-06-05)

On Fri, Sep 16, 2022 at 04:12:00PM +0100, Peter Maydell wrote:
> On Thu, 15 Sept 2022 at 16:21, Dr. David Alan Gilbert
> <dgilbert@redhat.com> wrote:
> >
> > * Peter Maydell (peter.maydell@linaro.org) wrote:
> > > On Tue, 6 Sept 2022 at 20:41, John Snow <jsnow@redhat.com> wrote:
> > > > Hi, I suspect I have asked this before, but I didn't write it down in
> > > > a comment, so I forget my justification...
> > > >
> > > > In the QMP lib, we need to set a buffering limit for how big a QMP
> > > > message can be -- In practice, I found that the largest possible
> > > > response was the QAPI schema reply, and I set the code to this:
> > > >
> > > >     # Maximum allowable size of read buffer
> > > >     _limit = (64 * 1024)
> > > >
> > > > However, I didn't document if this was a reasonable limit or just a
> > > > "worksforme" one. I assume that there's no hard limit for the protocol
> > > > or the implementation thereof in QEMU. Is there any kind of value here
> > > > that would be more sensible than another?
> > > >
> > > > I'm worried that if replies get bigger in the future (possibly in some
> > > > degenerate case I am presently unaware of) that the library default
> > > > will become nonsensical.
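As it happens, 64 KiB is also asyncio's default stream limit, and
overrunning it surfaces as an exception that a client can turn into a
clean failure rather than unbounded buffering. A minimal sketch of
such a cap, assuming a QMP server on a Unix socket at /tmp/qmp.sock
(the path and the single greeting read are illustrative, not the
library's actual code):

    import asyncio
    import json

    # Size-capped read of the QMP greeting. asyncio's 'limit' bounds
    # how much may buffer before the b'\n' delimiter is seen.
    async def capped_greeting(limit=64 * 1024):
        reader, writer = await asyncio.open_unix_connection(
            '/tmp/qmp.sock', limit=limit)
        try:
            line = await reader.readuntil(b'\n')  # greeting: one JSON line
            return json.loads(line)
        except asyncio.LimitOverrunError:
            # Peer sent more than 'limit' bytes without a newline:
            # treat it as misbehaving rather than buffering forever.
            raise RuntimeError('QMP reply exceeded read buffer limit')
        finally:
            writer.close()
            await writer.wait_closed()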
> > >
> > > There are some QMP commands which return lists of things
> > > where we put no inherent limit on how many things there
> > > are in the list, like qom-list-types. We'd have to be getting
> > > a bit enthusiastic about defining types for that to get
> > > up towards 64K's worth of response, but it's not inherently
> > > impossible. I think using human-monitor-command to send
> > > an 'xp' HMP command is also a way to get back an arbitrarily
> > > large string (just ask for a lot of memory to be dumped).
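For concreteness, a sketch of two wire messages that can provoke very
large replies; both commands are real QMP, but the xp count and
address below are invented purely for illustration:

    import json

    # Replies to these have no inherent size bound: qom-list-types
    # grows with however many types the binary registers, and xp's
    # output grows with the requested dump size.
    unbounded = [
        {'execute': 'qom-list-types', 'arguments': {'abstract': True}},
        {'execute': 'human-monitor-command',
         'arguments': {'command-line': 'xp /65536xb 0x100000'}},
    ]
    for cmd in unbounded:
        print(json.dumps(cmd))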
> >
> > We could put size limits on xp; most humans will only dump a few kB
> > maximum like that; anything larger and you can dump to file.
> 
> Sure, we could, but why? It's not clear to me why a consumer
> of QMP needs to impose a maximum message size limit on it:
> I thought it was just JSON. Fixed buffer sizes are very 1980s :-)

Well, even if a client parses the JSON as it streams in, rather than
reading the whole doc and then parsing it in one go, any sane QMP
client still needs to impose limits.

The QEMU process is an untrusted component in the stack, talking
to a trusted mgmt layer. If the QEMU process sends a 1 TB JSON
message as a QMP reply, the mgmt layer must not try to parse it,
as they'll let loose the kraken^H^H^H^H^H OOM killer.

To be robust against either a malicious or misbehaving QEMU,
clients need to impose a limit on the size of QMP response they'll
be willing to process. The challenge is figuring out what limit
is big enough to handle any conceivable valid message, while
being small enough to minimize denial of service risks.
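The shape of such a guard is simple enough; a minimal sketch,
assuming newline-delimited replies and an illustrative 10 MiB
ceiling:

    # Accumulate one newline-terminated reply from an untrusted peer,
    # refusing to buffer more than max_size bytes.
    def read_reply(sock, max_size=10 * 1024 * 1024):
        buf = bytearray()
        while b'\n' not in buf:
            chunk = sock.recv(4096)
            if not chunk:
                raise ConnectionError('QMP peer closed the connection')
            buf += chunk
            if len(buf) > max_size:
                raise RuntimeError('QMP reply exceeded size limit')
        line, _, _rest = bytes(buf).partition(b'\n')
        return line  # caller json.loads()es it; handling of _rest omitted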

NB, that's not the only thing clients need to do to protect against
a bad QEMU. Rate limiting consumption is potentially important too,
lest a bad QEMU inflict a DoS on the CPU by sending messages so
frequently that the QMP client burns 100% CPU just parsing them.
I've not seen any QMP client do this in practice, though; not even
libvirt has attempted it.

> If this is a common requirement then should we define something
> in the protocol where the client says "I can support messages
> up to this big" and then the server has to split things up?

Splitting doesn't help protect against the DoS, because the QMP
client would have to reassemble the pieces afterwards to process
the reply / async event.

With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|



