

From: Philippe Mathieu-Daudé
Subject: Re: [PATCH-for-5.0] qga-posix: Avoid crashing process when failing to allocate memory
Date: Tue, 31 Mar 2020 15:32:59 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.5.0

On 3/30/20 7:06 PM, Daniel P. Berrangé wrote:
On Mon, Mar 30, 2020 at 06:04:49PM +0200, Philippe Mathieu-Daudé wrote:
Cc'ing the people who responded to the thread you quoted.

On 3/30/20 4:11 PM, Markus Armbruster wrote:
Philippe Mathieu-Daudé <address@hidden> writes:
---
   qga/commands-posix.c | 8 +++++++-
   1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/qga/commands-posix.c b/qga/commands-posix.c
index 93474ff770..8f127788e6 100644
--- a/qga/commands-posix.c
+++ b/qga/commands-posix.c
@@ -493,7 +493,13 @@ struct GuestFileRead *qmp_guest_file_read(int64_t handle, bool has_count,
           gfh->state = RW_STATE_NEW;
       }

-    buf = g_malloc0(count+1);
+    buf = g_try_malloc0(count + 1);
+    if (!buf) {
+        error_setg(errp,
+                   "failed to allocate sufficient memory "
+                   "to complete the requested service");
+        return NULL;
+    }
       read_count = fread(buf, 1, count, fh);
       if (ferror(fh)) {
           error_setg_errno(errp, errno, "failed to read file");


On 3/25/20 7:19 AM, Dietmar Maurer wrote:
but error_setg() also calls malloc, so this does not help at all?

IIUC the problem, you can send a QMP command asking to read, let's say,
3GB of a file, and QEMU crashes. But this doesn't mean the heap is
empty; there are probably a few bytes still available, enough to
respond with an error message.
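
As a rough illustration of that point (a standalone sketch, not part of the
patch; whether the big request is refused at all depends on the host's
overcommit policy):

    #include <glib.h>
    #include <stdio.h>

    int main(void)
    {
        /* Ask for far more than the host is likely to have (64-bit host assumed). */
        gpointer big = g_try_malloc0((gsize)64 * 1024 * 1024 * 1024);

        if (!big) {
            /* The error path only needs a handful of bytes, so this small
             * allocation is still very likely to succeed. */
            gchar *msg = g_strdup_printf("failed to allocate %d GiB", 64);
            fprintf(stderr, "%s\n", msg);
            g_free(msg);
            return 1;
        }

        fprintf(stderr, "allocation succeeded (overcommit at work?)\n");
        g_free(big);
        return 0;
    }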

We've discussed how to handle out-of-memory conditions many times.
Here's one instance:

      Subject: When it's okay to treat OOM as fatal?
      Message-ID: <address@hidden>
      https://lists.nongnu.org/archive/html/qemu-devel/2018-10/msg03212.html

No improvement since then; there's no guidance on when to check for OOM.
Actual code tends to check only "large" allocations (for subjective
values of "large").

I reiterate my opinion that whatever OOM handling we have is too
unreliable to be worth much, since it can only help when (1) allocations
actually fail (they generally don't[*]), and (2) the allocation that
fails is actually handled (they generally aren't), and (3) the handling
actually works (we don't test OOM, so it generally doesn't).


[*] Linux overcommits memory, which means malloc() pretty much always
succeeds, but when you try to use "too much" of the memory you
supposedly allocated, a lethal signal is coming your way.  Read the
thread I quoted for examples.
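
For illustration, a minimal standalone program showing that behaviour (Linux
defaults assumed; it can get the process killed, so treat it as a thought
experiment rather than something to run):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        const size_t chunk = 1024UL * 1024 * 1024;   /* 1 GiB */
        char *bufs[64];
        int i, n = 0;

        /* Reserving address space usually succeeds well past physical RAM. */
        for (i = 0; i < 64; i++) {
            bufs[i] = malloc(chunk);
            if (!bufs[i]) {
                break;
            }
            n++;
        }
        printf("malloc() handed out %d GiB without complaint\n", n);

        /* Faulting the pages in is what eventually attracts the OOM killer. */
        for (i = 0; i < n; i++) {
            memset(bufs[i], 1, chunk);
        }
        return 0;
    }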

So this patch follows Stefan's reasoning:
https://lists.nongnu.org/archive/html/qemu-devel/2018-10/msg03525.html

   My thinking has been to use g_new() for small QEMU-internal structures
   and g_try_new() for large amounts of memory allocated in response to
   untrusted inputs.  (Untrusted inputs must never be used for unbounded
   allocation sizes but those bounded sizes can still be large.)
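
A minimal sketch of that rule of thumb, with illustrative names and an
arbitrary bound (this is not actual QEMU code):

    #include <glib.h>

    /* Small, fixed-size internal structure: the aborting g_new0() is fine. */
    typedef struct {
        int fd;
        int state;
    } FileHandle;

    static FileHandle *handle_new(int fd)
    {
        FileHandle *fh = g_new0(FileHandle, 1);
        fh->fd = fd;
        return fh;
    }

    /* Large buffer whose size comes from untrusted input: bound it first,
     * then use the g_try_* variant and let the caller report the failure. */
    #define MAX_READ_COUNT (10 * 1024 * 1024)   /* arbitrary illustrative bound */

    static guint8 *read_buffer_new(gsize count)
    {
        if (count > MAX_READ_COUNT) {
            return NULL;
        }
        return g_try_malloc0(count + 1);
    }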

In either case (g_malloc/g_try_malloc) we have a denial of service
(https://www.openwall.com/lists/oss-security/2018/10/17/4) and the service
is restarted.

Daniel suggests such behavior should be caught by an external firewall guard
(either on the process or on the network). This seems out of scope for QEMU
and hard to fix.

Did I?

IMHO if the guest agent is connected to a chardev that allows access to
a user that isn't the one running QEMU, that is serious mis-configuration.
IOW, it is user error, not a QEMU guest agent bug.  Firewalls are not
required; QEMU should be reconfigured to not make it accessible over TCP.
This is the same situation documented in commit
4f24430821c568936aeda417bbb00e989a9e1984
when it was claimed that QEMU had a security flaw because the QMP monitor
could be exposed over TCP.

The guest agent is a privileged interface to interact with a guest OS, and
as such it should only be made available to users who are trustworthy on
the host.

So, can we improve something? Or should we leave this code as it is?

Personally I would have just put a low, hard limit on "count" in the
guest agent spec & impl. eg don't allow count to be larger than 10 MB
and document that this command is just for limited, ad-hoc debugging,
such as log file access. In extremis we could make the hard limit be
a configuration parameter for the guest agent.
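
A possible shape of such a check, sketched outside the real
qmp_guest_file_read() with illustrative names; the 10 MB figure is just the
example above, and the configurable limit is hypothetical:

    #include <glib.h>
    #include <inttypes.h>
    #include <stdint.h>

    /* Hypothetical default; possibly made overridable through a guest-agent
     * configuration parameter as suggested above. */
    static int64_t max_read_count = 10 * 1024 * 1024;

    static guint8 *alloc_read_buf(int64_t count, gchar **errmsg)
    {
        if (count < 0 || count > max_read_count) {
            *errmsg = g_strdup_printf("count %" PRId64 " exceeds maximum of "
                                      "%" PRId64 " bytes", count, max_read_count);
            return NULL;
        }
        /* With the size bounded, the plain aborting allocator is acceptable. */
        return g_malloc0(count + 1);
    }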

This is clearly a smarter/cleaner fix, thanks!

The QEMU guest agent protocol is not a sensible way to access huge files
inside the guest. It requires the inefficient process of reading the
entire data into memory, then duplicating it again in base64 format, and
then copying it again in the JSON serializer / monitor code.
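
A rough back-of-the-envelope of that cost, assuming the usual 4/3 base64
expansion plus one more copy in the serializer (an approximation, not
measured QEMU behaviour):

    #include <stdio.h>

    int main(void)
    {
        double gib   = 1024.0 * 1024 * 1024;
        double count = 3 * gib;                  /* 3 GiB guest-file-read */
        double raw   = count;                    /* fread() buffer */
        double b64   = count / 3 * 4;            /* base64 copy */
        double json  = b64;                      /* serializer/monitor copy (approx.) */

        printf("peak transient memory ~ %.1f GiB\n", (raw + b64 + json) / gib);
        return 0;
    }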

For arbitrary general purpose file access, especially for large files,
use a real file transfer program or use a network block device, not the
QEMU guest agent.


Regards,
Daniel




