Re: [PATCH v4 0/6] net/tap: Fix QEMU frozen issue when the maximum number of file descriptors is very large


From: Daniel P. Berrangé
Subject: Re: [PATCH v4 0/6] net/tap: Fix QEMU frozen issue when the maximum number of file descriptors is very large
Date: Thu, 29 Jun 2023 10:05:25 +0100
User-agent: Mutt/2.2.9 (2022-11-12)

On Thu, Jun 29, 2023 at 11:33:29AM +0300, Michael Tokarev wrote:
> 28.06.2023 18:27, Bin Meng wrote:
> > 
> > The current code uses a brute-force traversal of all file descriptors,
> > which does not scale on a system where the maximum number of file
> > descriptors is set to a very large value (e.g. in a Docker container of
> > the Manjaro distribution it is 1073741816). QEMU simply appears frozen
> > during start-up.
> 
> So, the same question as before. *Why* do we close all file descriptors
> to begin with?

The O_CLOEXEC flag is a terrible design: the default behaviour of file
descriptors is to be leaked into every child process, unless code takes
explicit action to set O_CLOEXEC in each and every case. Even if they are
diligent about their own code, app developers can have zero confidence
that every library they use sets O_CLOEXEC. And it is not enough to set it
after the FD is open; it has to be set atomically when the FD is opened,
because otherwise threads create race conditions.
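
As a sketch (illustration only, with a placeholder path), the racy
pattern versus the atomic one:

#include <fcntl.h>
#include <unistd.h>

void open_example(void)
{
    /* Racy: another thread may fork()+execve() in the window between
     * open() and fcntl(), and the child inherits the FD. */
    int racy = open("/tmp/placeholder", O_RDONLY);
    if (racy >= 0) {
        fcntl(racy, F_SETFD, FD_CLOEXEC);
    }

    /* Atomic: close-on-exec is set by the same syscall that creates
     * the FD, so there is no window in which it can leak. */
    int safe = open("/tmp/placeholder", O_RDONLY | O_CLOEXEC);

    if (racy >= 0) {
        close(racy);
    }
    if (safe >= 0) {
        close(safe);
    }
}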

Leaking FDs is a security risk, and QEMU is an especially security
critical application. QEMU needs stronger guarantees than O_CLOEXEC
can offer, and a mass-close before execve is the only viable way to
achieve this.
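
For illustration only (this is not the exact code added by this series),
the mass-close does not have to iterate up to the RLIMIT_NOFILE limit: on
Linux it can use close_range(2) (kernel >= 5.9; the declaration assumes
glibc >= 2.34 and _GNU_SOURCE), falling back to walking the FDs that are
actually open via /proc/self/fd:

#define _GNU_SOURCE
#include <dirent.h>
#include <stdlib.h>
#include <unistd.h>

/* Close every FD above stderr before execve(), without touching FD
 * numbers that were never opened. */
static void close_fds_above_stderr(void)
{
    /* One syscall for the whole range, independent of the FD limit. */
    if (close_range(STDERR_FILENO + 1, ~0U, 0) == 0) {
        return;
    }

    /* Fallback: /proc/self/fd lists only the FDs that are open. */
    DIR *dir = opendir("/proc/self/fd");
    if (!dir) {
        return;
    }
    struct dirent *de;
    while ((de = readdir(dir)) != NULL) {
        int fd = atoi(de->d_name);
        if (fd > STDERR_FILENO && fd != dirfd(dir)) {
            close(fd);
        }
    }
    closedir(dir);
}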

With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|



