
Re: What does the error "Process <URL> not running" mean?


From: Eli Zaretskii
Subject: Re: What does the error "Process <URL> not running" mean?
Date: Sun, 06 Feb 2022 18:54:08 +0200

> Date: Sun, 06 Feb 2022 16:24:15 +0000
> From: emacsq <laszlomail@protonmail.com>
> Cc: help-gnu-emacs@gnu.org
> 
> 
> > Just an idea. I'll check it when I have the time.
> 
> Apparently, it's too many open files:
> 
> ("make client process failed" "Too many open files" :name "example.com" ...)
> 
> Which is curious, because it's the same with url-queue-retrieve,
> which allows only 6 parallel processes by default.
> 
> I checked and on windows (where I tried) the default file
> handle limit is 512 for a process. (which can be increased with
> _setmaxstdio)

I don't think _setmaxstdio is relevant here: it only affects the
number of FILE objects a process can have at any given time (hence
the "stdio" part: it alludes to the stdio.h header).  IOW, it only
affects stream I/O functions: fscanf, fprintf, fread, fwrite, etc.
We don't use those in the networking implementation in Emacs; we use
low-level functions that go through file descriptors and even
lower-level handles.

It is much more probable that you are hitting the 32-subprocess
limit we impose on MS-Windows (for boring technical reasons related
to how we emulate 'pselect' and SIGCHLD there).  If a Lisp program
attempts to create more than 32 sub-processes/network connections at
the same time, it will indeed get "Too many open files" (EMFILE).
That limit cannot be lifted unless we reimplement core parts of
subprocess support on MS-Windows.  (If you are interested, read the
large comment around line 850 in w32proc.c.)
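
For illustration, if a Lisp program wants to stay well clear of that
ceiling, one option is to route the fetches through url-queue.el and
lower its parallelism.  A minimal sketch (the URLs are placeholders,
and killing the retrieval buffer in the callback is just one way to
release the resources promptly):

  (require 'url-queue)

  ;; Keep the number of simultaneous connections well below the
  ;; 32-connection ceiling on MS-Windows.  The default is 6.
  (setq url-queue-parallel-processes 4)

  (dolist (url '("https://example.com/a"
                 "https://example.com/b"))
    (url-queue-retrieve
     url
     (lambda (_status)
       ;; The retrieval buffer is current here; kill it when done
       ;; so its resources are released right away.
       (kill-buffer (current-buffer)))
     nil t))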

> 512 is not much, but if only 6 network calls are going on at the
> same time, then it should be enough.  Unless the Emacs Windows
> networking code does not close file handles in a timely manner.
> 
> Emacs letting the open handles linger for a while, or not closing
> them for some reason (handle leaking), could explain why fetching
> many URLs can run into the handle limit even if there are not many
> parallel fetches at the same time.

I don't think that's the case: we close the handles as soon as the
network connection is closed.
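
If you want to verify that from Lisp, a small sketch like this (the
function name is made up) counts the network connections Emacs has
open at any given moment; on MS-Windows this number, plus the number
of subprocesses, has to stay below 32:

  (require 'cl-lib)

  (defun my-count-network-connections ()
    "Return the number of live network connections Emacs holds."
    (cl-count-if (lambda (p) (eq (process-type p) 'network))
                 (process-list)))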


