help-cfengine

RE: Cfservd coredumps


From: Martin, Jason H
Subject: RE: Cfservd coredumps
Date: Tue, 22 Nov 2005 12:35:09 -0800

Along the 'run under gdb' lines, have you noticed that cfservd
threads/processes do not exit under gdb? I ran it under gdb and ended up
with several thousand defunct cfservd processes. The defunct processes
do not show up in the absence of gdb.

-Jason Martin

> -----Original Message-----
> From: Mark Burgess [mailto:address@hidden]
> Sent: Tuesday, November 22, 2005 12:31 PM
> To: Martin, Jason H
> Cc: address@hidden
> Subject: Re: Cfservd coredumps
> 
> 
> 
> Try
> 
> http://www.cfengine.org/confdir/segfault.html
> 
> 
> On Tue, 2005-11-22 at 11:29 -0800, Martin, Jason H wrote:
> > Hello, I am experiencing frequent cfservd coredumps (about every 30
> > minutes) with 2.1.14 and 2.1.17. This is on RHEL ES 2.1. I've pasted
> > the backtrace and logfile messages below.
> > 
> > Any suggestions on solving the problem?
> > 
> > Linux XXX 2.4.9-e.30smp #1 SMP Fri Nov 28 07:18:53 EST 2003 i686 
> > unknown glibc-2.2.4-32.8
> > 
> > # gdb /usr/local/src/cfengine-2.1.17/src/cfservd ./core.13063
> > GNU gdb Red Hat Linux (5.2-2)
> > Copyright 2002 Free Software Foundation, Inc.
> > GDB is free software, covered by the GNU General Public License, and
> > you are welcome to change it and/or distribute copies of it under
> > certain conditions.
> > Type "show copying" to see the conditions.
> > There is absolutely no warranty for GDB.  Type "show warranty" for
> > details.
> > This GDB was configured as "i386-redhat-linux"...
> > Core was generated by `/usr/local/src/cfengine-2.1.17/src/cfservd -F -d 2'.
> > Program terminated with signal 11, Segmentation fault.
> > Reading symbols from /usr/local/BerkeleyDB.4.4/lib/libdb-4.4.so...done.
> > Loaded symbols for /usr/local/BerkeleyDB.4.4/lib/libdb-4.4.so
> > Reading symbols from /lib/libnss_nis.so.2...done.
> > Loaded symbols for /lib/libnss_nis.so.2
> > Reading symbols from /lib/i686/libpthread.so.0...done.
> > Loaded symbols for /lib/i686/libpthread.so.0
> > Reading symbols from /lib/i686/libm.so.6...done.
> > Loaded symbols for /lib/i686/libm.so.6
> > Reading symbols from /lib/i686/libc.so.6...done.
> > Loaded symbols for /lib/i686/libc.so.6
> > Reading symbols from /lib/libnsl.so.1...done.
> > Loaded symbols for /lib/libnsl.so.1
> > Reading symbols from /lib/libnss_files.so.2...done.
> > Loaded symbols for /lib/libnss_files.so.2
> > Reading symbols from /lib/ld-linux.so.2...done.
> > Loaded symbols for /lib/ld-linux.so.2
> > Reading symbols from /lib/libnss_nisplus.so.2...done.
> > Loaded symbols for /lib/libnss_nisplus.so.2
> > Reading symbols from /lib/libnss_dns.so.2...done.
> > Loaded symbols for /lib/libnss_dns.so.2
> > Reading symbols from /lib/libresolv.so.2...done.
> > Loaded symbols for /lib/libresolv.so.2
> > #0  0x401daea8 in memmove (dest=0x2e0a4008, src=0x2e084201,
> > len=969818625) at ../sysdeps/generic/memmove.c:105
> > 105     ../sysdeps/generic/memmove.c: No such file or directory.
> >         in ../sysdeps/generic/memmove.c
> > 
> > RecvSocketStream(8)
> >     (Concatenated 8 from stream)
> > Transaction Receive [t 16][]
> > RecvSocketStream(16)
> >     (Concatenated 16 from stream)
> > Got Blowfish size 16
> > BinaryBuffer(16)[40b6350000100020000] = 16
> > cfservd: Host XXXXX granted access to /SOMEFILE
> > Clocks were off by 0
> > StatFile(/SOMEFILE)
> > cfservd: Host YYYY granted access to /OTHERFILE
> > Clocks were off by 0
> > StatFile(/OTHERFILE)
> > cfservd:
> > 
> > The debug output cuts off; I suspect unflushed buffers are the
> > problem. It would be nice if the debug output performed an fflush()
> > after each print to avoid losing data.
> > 
> > Thank you,
> > -Jason Martin
> > 
> > 
> > _______________________________________________
> > Help-cfengine mailing list
> > address@hidden 
> > http://lists.gnu.org/mailman/listinfo/help-cfengine
> 
> 



