Re: cfservd memory/file handle leak?
From: Mark Burgess
Subject: Re: cfservd memory/file handle leak?
Date: Wed, 21 Dec 2005 15:53:35 +0100
There is no reason not to upgrade.
On Wed, 2005-12-21 at 09:48 -0500, christian pearce wrote:
> Has anyone tried the latest and greatest? Is it fixed?
>
> USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
> root 2491 0.3 26.3 1080636 1069048 ? S Nov22 129:29 cfservd
>
> I am running 2.1.15 on RedHat. It has been running for over a month
> now and has consumed a gigabyte of memory.
>
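Growth like this can be tracked with a small monitoring sketch (script name and logging format are assumptions, Linux /proc layout assumed) that records cfservd's resident set size and open-descriptor count, e.g. from cron, so the leak rate is visible rather than discovered at 1 GB:

```shell
#!/bin/sh
# Sketch: log cfservd's RSS and open file-descriptor count (Linux /proc assumed).
PID=$(pgrep -x cfservd | head -n 1)
if [ -n "$PID" ]; then
    RSS_KB=$(ps -o rss= -p "$PID")        # resident set size in KB
    FDS=$(ls "/proc/$PID/fd" | wc -l)     # open descriptor count
    echo "$(date '+%F %T') pid=$PID rss_kb=$RSS_KB fds=$FDS"
fi
```

Appending the output to a file over a few days shows whether memory and descriptors grow with each cfagent connection or plateau.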
> On 9/24/05, Mark Burgess <Mark.Burgess@iu.hio.no> wrote:
> > Or you could try upgrading first to see if the problem has already been
> > fixed.
> > M
> >
> > On Fri, 2005-09-23 at 19:54 -0400, christian pearce wrote:
> > > There are definitely problems. We simulate configs of 10,000
> > > machines. By the end of the day we had 512M used up by cfservd. We
> > > are running 2.1.15. I have not had a chance to try and track it down.
> > > It is actually pretty tough to do this. I know there are tools, but
> > > I am not up to speed on them.
> > >
> > > On 9/22/05, Mark Burgess <Mark.Burgess@iu.hio.no> wrote:
> > > >
> > > > Upgrade, upgrade, upgrade! :)
> > > >
> > > > On Thu, 2005-09-22 at 11:46 -0700, Iain Morgan wrote:
> > > > > I've seen situations where cfservd (2.1.13) on Solaris accumulates
> > > > > threads over time. The threads don't get expired. Presumably, if this
> > > > > happens over a protracted period, it could eventually prevent cfservd
> > > > > from accepting new connections.
> > > > >
> > > > > --
> > > > > Iain Morgan
> > > > >
> > > > > On Thu Sep 22 10:50:09 2005, Martin, Jason H wrote:
> > > > > >
> > > > > > I've seen cfservd stop accepting connections before, but sadly I
> > > > > > didn't get enough information about it to say why. Linux / 2.1.15.
> > > > > >
> > > > > > -Jason Martin
> > > > > >
> > > > > > > On Thu, 22 Sep 2005, Paul Krizak wrote:
> > > > > > >
> > > > > > > > It appears that cfservd is leaking file handles and (possibly)
> > > > > > > > memory. I ran cfagent -qB on 1225 hosts twice in a 24-hour
> > > > > > > > period (~12 hrs between each run) and cfservd is now using over
> > > > > > > > 120M of memory and 3705 file descriptors. When it reaches the
> > > > > > > > shell limit of 4096 file descriptors, cfservd locks up and
> > > > > > > > refuses to accept more connections, though the process does not die.
> > > > > > > >
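The approach of that 4096-descriptor ceiling can be watched for directly. A sketch under Linux assumptions (the warning threshold and wording are hypothetical; the soft limit is read from /proc/PID/limits):

```shell
#!/bin/sh
# Sketch: warn before cfservd exhausts its file-descriptor soft limit (Linux).
PID=$(pgrep -x cfservd | head -n 1)
[ -n "$PID" ] || exit 0
OPEN=$(ls "/proc/$PID/fd" | wc -l)
# The soft limit is the 4th field of the "Max open files" line in /proc/PID/limits.
SOFT=$(awk '/^Max open files/ {print $4}' "/proc/$PID/limits")
[ -n "$SOFT" ] || exit 0
echo "cfservd open=$OPEN soft_limit=$SOFT"
if [ "$OPEN" -ge $((SOFT * 9 / 10)) ]; then
    echo "WARNING: cfservd is within 10% of its descriptor limit" >&2
fi
```

Run from cron, this gives advance notice instead of a silently wedged daemon.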
> > > > > > > > Has anybody else experienced this? I hate to take the windoze
> > > > > > > > approach and just restart cfservd every morning.
> > > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > I've seen the growth in memory utilization, and I've also seen it
> > > > > > > stop taking connections after a certain point. I've seen this with
> > > > > > > 2.1.10 and still with 2.1.13. I'm not planning on going to 2.1.15
> > > > > > > until I have my environment a bit more under control. Right now I
> > > > > > > have cfagent kill cfservd every night at midnight. It then restarts
> > > > > > > it later in the morning after backups and whatnot are done.
> > > > > >
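The midnight kill-and-later-restart workaround described above could also be expressed as a plain cron fragment (the thread drives it through cfagent instead; file path, times, and the cfservd install path here are all assumptions):

```shell
# /etc/cron.d/cfservd-restart (hypothetical; adjust paths and times to your site)
# Stop cfservd at midnight, start it again at 06:00 after backups finish.
0 0 * * * root /usr/bin/pkill -x cfservd
0 6 * * * root /usr/local/sbin/cfservd
```

This only papers over the leak; upgrading, as suggested elsewhere in the thread, is the real fix.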
> > > > > >
> > > > > > _______________________________________________
> > > > > > Help-cfengine mailing list
> > > > > > Help-cfengine@gnu.org
> > > > > > http://lists.gnu.org/mailman/listinfo/help-cfengine
> > > > > >
> > > > >
> > > > >
> > > >
> > > >
> > > >
> > > >
> > >
> > >
> > > --
> > > Christian Pearce
> > > Perfect Order, Inc.
> >
> >
>
>
> --
> Christian Pearce