Re: [PATCH v3 0/6] net/tap: Fix QEMU frozen issue when the maximum number of file descriptors is very large
From: Michael Tokarev
Subject: Re: [PATCH v3 0/6] net/tap: Fix QEMU frozen issue when the maximum number of file descriptors is very large
Date: Wed, 28 Jun 2023 20:13:44 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Thunderbird/102.12.0
17.06.2023 08:36, Bin Meng wrote:
> The current code uses a brute-force traversal of all file descriptors,
> which does not scale on a system where the maximum number of file
> descriptors is set to a very large value (e.g. in a Docker container
> of the Manjaro distribution it is set to 1073741816). QEMU just looks
> frozen during start-up.
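The scaling problem in the quoted report can be sketched roughly like this (a sketch only, not QEMU's actual code; the function name is made up): the brute-force loop issues one close() per *possible* descriptor up to RLIMIT_NOFILE, so with the limit at ~1073741816 that is a billion syscalls, while close_range(2) (Linux 5.9+) releases the whole range in one call.

```c
#define _GNU_SOURCE
#include <unistd.h>
#include <sys/resource.h>
#include <sys/syscall.h>

/* Close every descriptor >= lowfd.  Fast path: one close_range(2)
 * syscall.  Fallback: one close(2) per possible fd up to the
 * RLIMIT_NOFILE soft limit -- O(rlim_cur) syscalls, which is where
 * the "frozen at startup" symptom comes from when the limit is huge. */
static void close_fds_from(unsigned int lowfd)
{
#ifdef SYS_close_range
    if (syscall(SYS_close_range, lowfd, ~0U, 0) == 0)
        return;                      /* fast path succeeded */
#endif
    struct rlimit rl;
    if (getrlimit(RLIMIT_NOFILE, &rl) != 0)
        rl.rlim_cur = 1024;          /* conservative fallback */
    for (unsigned int fd = lowfd; fd < rl.rlim_cur; fd++)
        close(fd);                   /* one syscall per possible fd */
}
```

Iterating over `/proc/self/fd` (closing only descriptors that actually exist) is the other common middle ground when close_range is unavailable.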
What's the reason to close all these file descriptors in the first place?
No other software I know does this.
For some situations, such closing is actively harmful. Consider, e.g.:

    flock lockfile qemu-system-foo ...

flock(1) opens the lock file, locks it using fcntl/flock, and executes
the command, keeping the file descriptor open across exec, so the file
stays locked until the process terminates. This works, and works well.
QEMU, with its close-everything approach, breaks this.
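A minimal sketch of what that flock(1) invocation does (function and file names are hypothetical, not QEMU or util-linux code): the locked descriptor is deliberately left open, with no FD_CLOEXEC, so the BSD lock persists for the lifetime of the exec'd program; if that program then closes every inherited fd, the lock is silently dropped.

```c
#include <sys/file.h>
#include <fcntl.h>
#include <unistd.h>

/* Lock `lockfile`, then exec cmd[0] with the locked fd still open.
 * flock(2) locks belong to the open file description, so the lock is
 * released only when the last fd referring to it is closed -- i.e.
 * normally when the exec'd process exits.  An exec'd program that
 * closes all inherited descriptors releases the lock prematurely. */
static int lock_and_exec(const char *lockfile, char *const cmd[])
{
    int fd = open(lockfile, O_CREAT | O_RDWR, 0644);
    if (fd < 0)
        return -1;
    if (flock(fd, LOCK_EX | LOCK_NB) != 0) {
        close(fd);
        return -1;               /* someone else holds the lock */
    }
    /* fd intentionally kept open (no FD_CLOEXEC) across exec */
    execvp(cmd[0], cmd);
    close(fd);                   /* only reached if exec failed */
    return -1;
}
```

The key property is that close() on the locking descriptor releases the lock, which is exactly why an indiscriminate close-everything pass in the exec'd process breaks the pattern.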
Why? :)
/mjt