
Re: cfservd memory/file handle leak?


From: Iain Morgan
Subject: Re: cfservd memory/file handle leak?
Date: Thu, 22 Sep 2005 11:46:49 -0700 (PDT)

I've seen situations where cfservd (2.1.13) on Solaris accumulates threads
over time; the threads never get expired. Presumably, if this goes on for a
protracted period, it could eventually prevent cfservd from accepting
new connections.
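
For what it's worth, a quick way to see whether threads really are piling up
is to ask ps for the LWP count. A minimal sketch in Python, assuming pgrep is
available and that "ps -o nlwp=" works on your platform (it does on Solaris
and on Linux procps); the pid lookup is only an illustration:

#!/usr/bin/env python
# Quick look at how many threads (LWPs) cfservd has accumulated.
# Finding the pid via pgrep is an assumption -- use your local method.
import subprocess

pid = subprocess.check_output(["pgrep", "-x", "cfservd"]).split()[0].decode()
nlwp = subprocess.check_output(["ps", "-o", "nlwp=", "-p", pid]).strip().decode()
print("cfservd (pid %s) currently has %s threads" % (pid, nlwp))

Watching that number over a day or two would confirm whether the threads
ever go away on their own.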

--
Iain Morgan

On Thu Sep 22 10:50:09 2005, Martin, Jason H wrote:
> 
> I've seen cfservd stop accepting connections before, but sadly I didn't
> get enough information about it to say why. Linux / 2.1.15.
> 
> -Jason Martin
> 
> > On Thu, 22 Sep 2005, Paul Krizak wrote:
> > 
> > > It appears that cfservd is leaking file handles and (possibly) memory.
> > > I ran cfagent -qB on 1225 hosts twice in a 24-hour period (~12 hrs
> > > between each run) and cfservd is now using over 120M of memory and is
> > > using 3705 file descriptors.  When it reaches the shell limit of 4096
> > > file descriptors, cfservd locks and refuses to accept more connections,
> > > though the process does not die.
> > >
> > > Has anybody else experienced this?  I hate to take the windoze approach
> > > and just restart cfservd every morning.
> > >
> > 
> > 
> > I've seen the growth in memory utilization, and I've also seen it stop
> > taking connections after a certain point.  I've seen this with 2.1.10
> > and still with 2.1.13.  I'm not planning on going to 2.1.15 until I have
> > my environment a bit more under control.  Right now I have cfagent kill
> > cfservd every night at midnight.  It then restarts it later in the
> > morning after backups and whatnot are done.
> 
> 
> _______________________________________________
> Help-cfengine mailing list
> Help-cfengine@gnu.org
> http://lists.gnu.org/mailman/listinfo/help-cfengine
> 
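
For anyone wanting to quantify the leak Paul describes, counting the entries
under /proc/<pid>/fd and comparing against the descriptor limit shows how
close cfservd is to the wall. A rough Python sketch, assuming pgrep and a
/proc filesystem (both Linux and Solaris expose /proc/<pid>/fd) and that it
runs as root; note that getrlimit() below reports the monitoring shell's own
limit, not necessarily the one cfservd inherited:

#!/usr/bin/env python
# Report cfservd's open descriptor count against the soft fd limit
# ("the shell limit of 4096" mentioned above).
import os
import resource
import subprocess

pid = subprocess.check_output(["pgrep", "-x", "cfservd"]).split()[0].decode()
nfds = len(os.listdir("/proc/%s/fd" % pid))          # one entry per open fd
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)

print("cfservd (pid %s): %d open fds (soft limit %d, hard limit %d)"
      % (pid, nfds, soft, hard))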




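And if the nightly kill-and-restart quoted above is the interim answer, it
can at least be made conditional on the leak actually approaching the limit.
A sketch along those lines, meant to be run from cron; the install path, the
threshold, and the assumption that cfservd backgrounds itself when started
with no arguments are all placeholders to adjust locally:

#!/usr/bin/env python
# Bounce cfservd only when it is close to running out of descriptors,
# instead of unconditionally every night.
import os
import signal
import subprocess
import time

CFSERVD = "/usr/local/sbin/cfservd"   # hypothetical install path
THRESHOLD = 3500                      # restart well before the 4096 limit

pid = int(subprocess.check_output(["pgrep", "-x", "cfservd"]).split()[0].decode())
nfds = len(os.listdir("/proc/%d/fd" % pid))

if nfds > THRESHOLD:
    os.kill(pid, signal.SIGTERM)      # ask cfservd to shut down
    time.sleep(5)                     # let it release its listening socket
    subprocess.call([CFSERVD])        # assumes cfservd daemonizes itself
    print("restarted cfservd: %d fds > threshold %d" % (nfds, THRESHOLD))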