
[Gcl-devel] Large memory

From: Camm Maguire
Subject: [Gcl-devel] Large memory
Date: Mon, 11 Apr 2016 15:50:37 -0400

Greetings!  As all may know, the 2.6.13 prerelease attempts to make
efficient use of whatever large memory is available at runtime.  One
quirk arises in trying to estimate the amount of memory that might be
needed should the user desire to compile files late in a job, which
traditionally invokes gcc via a call to system(), which in turn calls
fork().  Under Linux, fork() is a copy-on-write implementation, so the
memory overhead is just that needed to copy the kernel page tables.
Unfortunately, I have run into circumstances in which the kernel runs
out of the memory required to perform a fork() even though memory
operations in the running process are within bounds.  These
circumstances appear erratically under isolated combinations of RAM and
swap and ultimately stem from the kernel's overcommit/OOM
implementation.

I've put in a temporary heuristic to leave 15% of apparently available
memory free, but this is wasteful.  I am considering forking a minimal
process at startup just to receive and process gcc invocation requests,
as this is by far the most common use of fork() in gcl (though not the
only one -- see #'si::socket and #'si::run-process).

Anyone have any better ideas?
Camm Maguire                                        address@hidden
"The earth is but one country, and mankind its citizens."  --  Baha'u'llah
