
Re: SUMMARY: Too many cfagents running. Was: Load problem with cfservd


From: David E. Nelson
Subject: Re: SUMMARY: Too many cfagents running. Was: Load problem with cfservd
Date: Thu, 17 Mar 2005 12:48:38 -0600 (CST)


Hi Mark,

Would it be prudent to delete all *db files whenever cfengine is restarted, say when a host reboots or from '/etc/init.d/cfengine restart'?
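Something like this in the init script is what I have in mind (a sketch only; the stock /var/cfengine path and the start/restart layout are assumptions about a typical install):

    # Sketch: clear cfengine's Berkeley DB caches before (re)starting
    # the daemons; cfagent rebuilds them on its next run.
    case "$1" in
      start|restart)
            rm -f /var/cfengine/*db
            ;;
    esac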

Thanks,
         /\/elson


On Thu, 17 Mar 2005, Mark Burgess wrote:

On Thu, 2005-03-17 at 09:11 -0500, Baker, Darryl wrote:

Finally got all the problems fixed. First I installed the latest snapshot and the load dropped. We had some scripts using ssh that were setting up X window tunnels for no reason; we fixed those scripts and the load dropped further. I switched the configuration to "schedule = ( Min00_05 Min30_35 )" but left the rules saying Q1 and Q3, and nothing changed. Then I had an inspiration and removed /var/cfengine/*db. The load and the contention for the mutex dropped like a rock. Why?
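(For reference, the schedule/class pairing looks roughly like this in cfagent.conf; the shellcommands stanza is an invented placeholder, only the schedule line is from the real setup:)

    control:
       schedule = ( Min00_05 Min30_35 )   # windows in which cfexecd wakes cfagent

    shellcommands:
       Q1|Q3::                            # Min00_05 falls inside Q1, Min30_35 inside Q3
          "/usr/local/sbin/site_check"    # hypothetical command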

We believe that the main problem was that the kernel mutex we were spinning on is part of /dev/random, and that between all the ssh clients this machine spawns and the cfengine connections, it was only barely able to generate enough randomness and was blocking. But then why would removing those files improve things?
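(On a Linux host, one quick way to see whether the pool really is running dry is the kernel's entropy counter; the /proc path below assumes a 2.4/2.6-era Linux kernel:)

    # Entropy (in bits) remaining in the kernel pool; values near zero
    # mean readers of /dev/random will block.
    cat /proc/sys/kernel/random/entropy_avail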


This is almost certainly an internal problem in Berkeley DB, probably caused by having mixed versions or upgrades on your system. Sleepycat changes the internals quite often, and that causes compatibility problems.

That is why deleting the files in the stale format will cure the problem: cfengine simply recreates them, and the new files are no longer incompatible.
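(If you want to confirm a format mismatch before deleting anything, Berkeley DB ships a verifier; the utility name and file glob below are assumptions -- on many systems it is installed as db_verify, sometimes versioned like db4_verify:)

    # Sketch: verify each cfengine database file; a hard failure here
    # is consistent with a version/format mismatch.
    for f in /var/cfengine/*db; do
        db_verify "$f" || echo "$f: verification failed"
    done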

M



_______________________________________________
Help-cfengine mailing list
Help-cfengine@gnu.org
http://lists.gnu.org/mailman/listinfo/help-cfengine


--
~~ ** ~~  If you didn't learn anything when you broke it the 1st ~~ ** ~~
                        time, then break it again.



