gluster-devel

Re: [Gluster-devel] pre6 - SEGV on first try


From: Matt Paine
Subject: Re: [Gluster-devel] pre6 - SEGV on first try
Date: Wed, 25 Jul 2007 22:15:27 +1000

Hi August - great to hear someone's using the RPMs :)

I'd love to hear some feedback on your experience if you get a chance.
Any hints on how they can be improved would be fantastic too.

You can download prebuilt RPMs (built on CentOS 5) from the download
section now:

http://ftp.zresearch.com/pub/gluster/glusterfs/1.3-pre/RPMS/



Thanks.

Matt.



On Tue, 2007-07-24 at 12:14 -0400, August R. Wohlt wrote:
> Most excellent. thanks.
> 
> On 7/24/07, Amar S. Tumballi <address@hidden> wrote:
> >
> > Hi August,
> >  Hope this link helps you.
> >
> > http://www.gluster.org/docs/index.php/GlusterFS_Building_RPMs
> >
> > -amar
> >
> >  On 7/24/07, August R. Wohlt <address@hidden> wrote:
> >
> > > Hi avati -
> > >
> > > Indeed, I had the pre5_3 rpms installed earlier in the path. I've
> > > removed
> > > them, re-installed pre6 and everything looks good now. I'll go see if I
> > > can't get the clients to segfault now :-)
> > >
> > > As an aside, when will the rpms be updated to pre6 on the download site?
> > >
> > > thanks,
> > > :goose
> > >
> > > On 7/24/07, Anand Avati <address@hidden> wrote:
> > > >
> > > > August,
> > > >   thanks for trying pre6. The bug you have reported was fixed long
> > > > ago, well before pre6. Also, the line numbers in the backtrace don't
> > > > match those of pre6 either. Are you sure the server you ran is from
> > > > the pre6 release? Please confirm.
> > > >
> > > > thanks,
> > > > avati
> > > >
> > > > 2007/7/24, August R. Wohlt <address@hidden>:
> > > > >
> > > > > Hello,
> > > > >
> > > > > I downloaded pre6 today and compiled it. glusterfsd starts up
> > > > > successfully, but if I connect to the socket and then disconnect, it
> > > > > segfaults. It does this every time. The server never segfaulted with
> > > > > pre5 on the same configuration, though my clients did at random times
> > > > > after heavy load inside write-behind. Hope this report helps:
> > > > >
> > > > > Distro is CentOS 5 - stock kernel, latest yum updates, everything
> > > > > included in the installation.
> > > > >
> > > > > [address@hidden ~]$ uname -a
> > > > > Linux chai 2.6.18-8.1.8.el5xen #1 SMP Tue Jul 10 07:06:45 EDT 2007 x86_64 x86_64 x86_64 GNU/Linux
> > > > >
> > > > > fuse: fuse-2.7.0 - built by hand with standard installation:
> > > > > ./configure
> > > > > make
> > > > > make install
> > > > > modprobe fuse
> > > > >
> > > > > glusterfs downloaded, untarred and built like so:
> > > > >
> > > > > ./configure --prefix=/usr
> > > > > make CFLAGS='-g -O0'
> > > > > make install
> > > > >
> > > > > started with this config:
> > > > >
> > > > > volume test_ko
> > > > >   type storage/posix
> > > > >   option directory /home/vg_3ware1/test/ko
> > > > > end-volume
> > > > >
> > > > > volume test_op
> > > > >   type storage/posix
> > > > >   option directory /home/vg_3ware1/test/op
> > > > > end-volume
> > > > >
> > > > > ### Add network serving capability to above brick.
> > > > > volume server
> > > > >   type protocol/server
> > > > >   option transport-type tcp/server     # For TCP/IP transport
> > > > >   option bind-address 192.168.2.5     # Default is to listen on all interfaces
> > > > >   subvolumes test_ko test_op
> > > > >   option auth.ip.test_op.allow * # Allow access to "brick" volume
> > > > >   option auth.ip.test_ko.allow * # Allow access to "brick" volume
> > > > > end-volume
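For what it's worth, a matching client-side spec for mounting those two exports would look something like this (a hypothetical sketch - the remote-host address is taken from the server spec above and would need adjusting for other setups):

```
volume client_ko
  type protocol/client
  option transport-type tcp/client      # must match the server's transport
  option remote-host 192.168.2.5        # address the server binds to
  option remote-subvolume test_ko       # name of the exported volume
end-volume

volume client_op
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.2.5
  option remote-subvolume test_op
end-volume
```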
> > > > >
> > > > >
> > > > > To produce the segfault, I simply telnet to the port and immediately
> > > > > disconnect. Dumps every time:
> > > > >
> > > > > telnet 192.168.2.5 6996
> > > > > ^]quit
> > > > >
> > > > > Backtrace from gdb shows:
> > > > >
> > > > > [address@hidden ~]$ sudo gdb glusterfsd
> > > > > GNU gdb Red Hat Linux (6.5-16.el5rh)
> > > > > Copyright (C) 2006 Free Software Foundation, Inc.
> > > > > GDB is free software, covered by the GNU General Public License, and
> > > > > you are welcome to change it and/or distribute copies of it under
> > > > > certain conditions.
> > > > > Type "show copying" to see the conditions.
> > > > > There is absolutely no warranty for GDB.  Type "show warranty" for details.
> > > > > This GDB was configured as "x86_64-redhat-linux-gnu"...(no debugging symbols found)
> > > > > Using host libthread_db library "/lib64/libthread_db.so.1".
> > > > >
> > > > > (gdb) set args -N
> > > > > (gdb) run
> > > > > Starting program: /sbin/glusterfsd -N
> > > > > warning: Lowest section in system-supplied DSO at 0xffffe000 is .hash at ffffe0b4
> > > > > (no debugging symbols found)
> > > > > (no debugging symbols found)
> > > > > [Thread debugging using libthread_db enabled]
> > > > > [New Thread 4160661184 (LWP 26983)]
> > > > > glusterfsd: WARNING: ignoring stale pidfile for PID 26972
> > > > > [New Thread 4160658320 (LWP 26986)]
> > > > >
> > > > > Program received signal SIGSEGV, Segmentation fault.
> > > > > [Switching to Thread 4160661184 (LWP 26983)]
> > > > > 0x4a76f2a0 in pthread_mutex_lock () from //lib/libpthread.so.0
> > > > > (gdb) bt
> > > > > #0  0x4a76f2a0 in pthread_mutex_lock () from //lib/libpthread.so.0
> > > > > #1  0xf75dd895 in get_frame_for_transport (trans=0x8055ee0) at server-protocol.c:5525
> > > > > #2  0xf75de225 in notify (this=0x8050f28, event=2, data=0x8055ee0) at server-protocol.c:5638
> > > > > #3  0x47611c87 in transport_notify (this=0x0, event=0) at transport.c:152
> > > > > #4  0x476126f9 in sys_epoll_iteration (ctx=0xff89f348) at epoll.c:54
> > > > > #5  0x47611d5d in poll_iteration (ctx=0xff89f348) at transport.c:260
> > > > > #6  0x08049150 in main ()
> > > > >
> > > > > and the logs show:
> > > > >
> > > > > 2007-07-24 08:36:28 C [common-utils.c:208:gf_print_trace] debug-backtrace: Got signal (11), printing backtrace
> > > > > 2007-07-24 08:36:28 C [common-utils.c:210:gf_print_trace] debug-backtrace: //lib/libglusterfs.so.0(gf_print_trace+0x2d) [0x476107ed]
> > > > > 2007-07-24 08:36:28 C [common-utils.c:210:gf_print_trace] debug-backtrace: [0xffffe500]
> > > > > 2007-07-24 08:36:28 C [common-utils.c:210:gf_print_trace] debug-backtrace: //lib/glusterfs/1.3.0-pre5.3/xlator/protocol/server.so [0xf75dd895]
> > > > > 2007-07-24 08:36:28 C [common-utils.c:210:gf_print_trace] debug-backtrace: //lib/glusterfs/1.3.0-pre5.3/xlator/protocol/server.so(notify+0x1a5) [0xf75de225]
> > > > > 2007-07-24 08:36:28 C [common-utils.c:210:gf_print_trace] debug-backtrace: //lib/libglusterfs.so.0(transport_notify+0x37) [0x47611c87]
> > > > > 2007-07-24 08:36:28 C [common-utils.c:210:gf_print_trace] debug-backtrace: //lib/libglusterfs.so.0(sys_epoll_iteration+0xd9) [0x476126f9]
> > > > > 2007-07-24 08:36:28 C [common-utils.c:210:gf_print_trace] debug-backtrace: //lib/libglusterfs.so.0(poll_iteration+0x1d) [0x47611d5d]
> > > > > 2007-07-24 08:36:28 C [common-utils.c:210:gf_print_trace] debug-backtrace: [glusterfsd] [0x8049150]
> > > > > 2007-07-24 08:36:28 C [common-utils.c:210:gf_print_trace] debug-backtrace: //lib/libc.so.6(__libc_start_main+0xdc) [0x4a63edec]
> > > > > 2007-07-24 08:36:28 C [common-utils.c:210:gf_print_trace] debug-backtrace: [glusterfsd] [0x8048b91]
> > > > > 2007-07-24 08:37:13 E [protocol.c:262:gf_block_unserialize_transport] libglusterfs/protocol: full_read of header failed: peer (192.168.2.5)
> > > > >
> > > > > thanks for the great work!
> > > > > -goose
> > > > > _______________________________________________
> > > > > Gluster-devel mailing list
> > > > > address@hidden
> > > > > http://lists.nongnu.org/mailman/listinfo/gluster-devel
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > Anand V. Avati
> > >
> >
> >
> >
> > --
> > Amar Tumballi
> > http://amar.80x25.org
> > [bulde on #gluster/irc.gnu.org]




