From your inputs I can guess one thing:
"glusterfs: fuse_session.c:188: fuse_chan_receive: Assertion `ch->compat'
failed."
This error happens only if the fuse module/libfuse version is older than
2.6.x. Can you check that once?
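A quick way to run that check is sketched below. This is only a hedged sketch: tool and package names vary by distro (newer systems ship `fusermount3`), and the `fuse` pkg-config metadata may not be installed, so every probe is guarded.

```shell
#!/bin/sh
# Print the libfuse version, trying pkg-config first and falling back to
# fusermount; prints "unknown" when neither tool can answer.
get_fuse_version() {
    if command -v pkg-config >/dev/null 2>&1 && pkg-config --exists fuse 2>/dev/null; then
        pkg-config --modversion fuse
    elif command -v fusermount >/dev/null 2>&1; then
        fusermount -V 2>&1 | head -n1
    else
        echo "unknown"
    fi
}

ver=$(get_fuse_version)
echo "libfuse version: $ver"

# The in-kernel fuse module version can be read with modinfo, when available.
command -v modinfo >/dev/null 2>&1 && modinfo -F version fuse 2>/dev/null || true
```

If the reported version is below 2.6, that would match the assertion failure described above.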
-amar
On 5/30/07, Jonathan Newman < address@hidden> wrote:
>
> Once again, same net result as stated...here is the exact copy/paste of
> what I am doing to generate the core and backtrace:
>
> ypapa1 mnt # glusterfs --no-daemon -f /etc/glusterfs/glusterfs-client.vol -l /var/log/glusterfs/glusterfs.log /mnt/test
> glusterfs: fuse_session.c:188: fuse_chan_receive: Assertion `ch->compat'
> failed.
> Segmentation fault (core dumped)
> ypapa1 mnt # ls
> cdrom core.27841 floppy test
> ypapa1 mnt # gdb -c core.27841 glusterfs
> GNU gdb 6.4
> Copyright 2005 Free Software Foundation, Inc.
> GDB is free software, covered by the GNU General Public License, and you are
> welcome to change it and/or distribute copies of it under certain conditions.
> Type "show copying" to see the conditions.
> There is absolutely no warranty for GDB. Type "show warranty" for details.
> This GDB was configured as "i686-pc-linux-gnu"...Using host libthread_db
> library "/lib/libthread_db.so.1".
>
> Core was generated by `glusterfs --no-daemon -f
> /etc/glusterfs/glusterfs-client.vol -l /var/log/gluste'.
> Program terminated with signal 11, Segmentation fault.
>
> warning: Can't read pathname for load map: Input/output error.
> Reading symbols from /usr/lib/libglusterfs.so.0...done.
> Loaded symbols for /usr/lib/libglusterfs.so.0
> Reading symbols from /usr/lib/libfuse.so.2...done.
> Loaded symbols for /usr/lib/libfuse.so.2
> Reading symbols from /lib/librt.so.1...done.
> Loaded symbols for /lib/librt.so.1
> Reading symbols from /lib/libdl.so.2...done.
> Loaded symbols for /lib/libdl.so.2
> Reading symbols from /lib/libpthread.so.0...done.
> Loaded symbols for /lib/libpthread.so.0
> Reading symbols from /lib/libc.so.6...done.
> Loaded symbols for /lib/libc.so.6
> Reading symbols from /lib/ld-linux.so.2...done.
> Loaded symbols for /lib/ld-linux.so.2
> Reading symbols from /usr/lib/glusterfs/1.3.0-pre4/xlator/protocol/client.so...done.
> Loaded symbols for /usr/lib/glusterfs/1.3.0-pre4/xlator/protocol/client.so
> Reading symbols from /usr/lib/glusterfs/1.3.0-pre4/xlator/cluster/unify.so...done.
> Loaded symbols for /usr/lib/glusterfs/1.3.0-pre4/xlator/cluster/unify.so
> Reading symbols from /usr/lib/glusterfs/1.3.0-pre4/scheduler/rr.so...done.
> Loaded symbols for /usr/lib/glusterfs/1.3.0-pre4/scheduler/rr.so
> Reading symbols from /usr/lib/glusterfs/1.3.0-pre4/transport/tcp/client.so...done.
> Loaded symbols for /usr/lib/glusterfs/1.3.0-pre4/transport/tcp/client.so
> Reading symbols from /usr/lib/gcc/i686-pc-linux-gnu/4.1.1/libgcc_s.so.1...done.
> Loaded symbols for /usr/lib/gcc/i686-pc-linux-gnu/4.1.1/libgcc_s.so.1
> #0 0xb7f2e410 in ?? ()
> (gdb) bt
> #0 0xb7f2e410 in ?? ()
> #1 0xbfa1ad98 in ?? ()
> #2 0x0000000b in ?? ()
> #3 0x00006cc1 in ?? ()
> #4 0xb7ef02cd in raise () from /lib/libpthread.so.0
> #5 0xb7f237be in gf_print_trace (signum=6) at common-utils.c:221
> #6 0xb7f2e420 in ?? ()
> #7 0x00000006 in ?? ()
> #8 0x00000033 in ?? ()
> #9 0x00000000 in ?? ()
> (gdb)
>
>
> On 5/30/07, Amar S. Tumballi <address@hidden> wrote:
> >
> > Hi, you need to run 'gdb -c /core[.pid] glusterfs' and NOT 'gdb -c /core'.
> >
> > -Amar
> >
> > On 5/30/07, Jonathan Newman < address@hidden> wrote:
> > >
> > > Running gdb -c /core.[pid] results in the same exact output as I had
> > > already given you. Thoughts?
> > >
> > > -Jon
> > >
> > > On 5/30/07, Amar S. Tumballi < address@hidden> wrote:
> > > >
> > > > Hi Jon,
> > > > actually it would be great if you could run 'gdb -c /core[.pid]
> > > > glusterfs' as the command line instead of just 'gdb -c /core', because
> > > > it's not able to resolve symbols.
> > > >
> > > > Regards,
> > > > Amar
> > > >
> > > > On 5/30/07, Jonathan Newman <address@hidden > wrote:
> > > > >
> > > > > Here is the backtrace given for the core dump:
> > > > >
> > > > > Core was generated by `glusterfs --no-daemon -f
> > > > > /etc/glusterfs/glusterfs-client.vol -l /var/log/gluste'.
> > > > > Program terminated with signal 11, Segmentation fault.
> > > > >
> > > > > warning: Can't read pathname for load map: Input/output error.
> > > > > Reading symbols from /usr/lib/libglusterfs.so.0...done.
> > > > > Loaded symbols for /usr/lib/libglusterfs.so.0
> > > > > Reading symbols from /usr/lib/libfuse.so.2...done.
> > > > > Loaded symbols for /usr/lib/libfuse.so.2
> > > > > Reading symbols from /lib/librt.so.1...done.
> > > > > Loaded symbols for /lib/librt.so.1
> > > > > Reading symbols from /lib/libdl.so.2...done.
> > > > > Loaded symbols for /lib/libdl.so.2
> > > > > Reading symbols from /lib/libpthread.so.0...done.
> > > > > Loaded symbols for /lib/libpthread.so.0
> > > > > Reading symbols from /lib/libc.so.6...done.
> > > > > Loaded symbols for /lib/libc.so.6
> > > > > Reading symbols from /lib/ld-linux.so.2...done.
> > > > > Loaded symbols for /lib/ld-linux.so.2
> > > > > Reading symbols from /usr/lib/glusterfs/1.3.0-pre4/xlator/protocol/client.so...done.
> > > > > Loaded symbols for /usr/lib/glusterfs/1.3.0-pre4/xlator/protocol/client.so
> > > > > Reading symbols from /usr/lib/glusterfs/1.3.0-pre4/xlator/cluster/unify.so...done.
> > > > > Loaded symbols for /usr/lib/glusterfs/1.3.0-pre4/xlator/cluster/unify.so
> > > > > Reading symbols from /usr/lib/glusterfs/1.3.0-pre4/scheduler/rr.so...done.
> > > > > Loaded symbols for /usr/lib/glusterfs/1.3.0-pre4/scheduler/rr.so
> > > > > Reading symbols from /usr/lib/glusterfs/1.3.0-pre4/transport/tcp/client.so...done.
> > > > > Loaded symbols for /usr/lib/glusterfs/1.3.0-pre4/transport/tcp/client.so
> > > > > Reading symbols from /usr/lib/gcc/i686-pc-linux-gnu/4.1.1/libgcc_s.so.1...done.
> > > > > Loaded symbols for /usr/lib/gcc/i686-pc-linux-gnu/4.1.1/libgcc_s.so.1
> > > > > #0 0xb7fb4410 in ?? ()
> > > > > (gdb) bt
> > > > > #0 0xb7fb4410 in ?? ()
> > > > > #1 0xbfb226a8 in ?? ()
> > > > > #2 0x0000000b in ?? ()
> > > > > #3 0x00006c08 in ?? ()
> > > > > #4 0xb7f762cd in raise () from /lib/libpthread.so.0
> > > > > #5 0xb7fa97be in gf_print_trace (signum=6) at common-utils.c:221
> > > > > #6 0xb7fb4420 in ?? ()
> > > > > #7 0x00000006 in ?? ()
> > > > > #8 0x00000033 in ?? ()
> > > > > #9 0x00000000 in ?? ()
> > > > >
> > > > >
> > > > > Any help is much appreciated...thanks.
> > > > >
> > > > > -Jon
> > > > >
> > > > > On 5/30/07, Anand Avati <address@hidden > wrote:
> > > > > >
> > > > > > Jonathan,
> > > > > > it looks like the glusterfs client has exited or segfaulted. Is it
> > > > > > possible for you to get a backtrace from the core? (If it is not
> > > > > > generating a core, run 'ulimit -c unlimited' and then start
> > > > > > glusterfs with -N (non-daemon mode) and redo the steps to generate
> > > > > > the error.) That apart, please try the 1.3-pre4 release and see if
> > > > > > you still get the error. 1.2.3 is pretty old and a lot of things
> > > > > > have happened since.
> > > > > >
> > > > > > thanks,
> > > > > > avati
> > > > > >
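The core-capture workflow Avati describes can be sketched as a shell session. This is a hedged sketch only: the paths are the ones used elsewhere in this thread, and the mount command is guarded so it is a no-op on machines where glusterfs is not installed.

```shell
#!/bin/sh
# Lift the core-size limit so a crash in this shell's children drops a
# core file (named core or core.<pid>) in the working directory.
ulimit -c unlimited

# Run the client in the foreground (-N, non-daemon mode) and repeat the
# failing operation; guarded so this is skipped where glusterfs is absent.
if command -v glusterfs >/dev/null 2>&1; then
    glusterfs -N -f /etc/glusterfs/glusterfs-client.vol \
        -l /var/log/glusterfs/glusterfs.log /mnt/test
fi

# After the crash, load the core *together with the binary* so gdb can
# resolve symbols, then print the backtrace:
#   gdb -c core.<pid> glusterfs
#   (gdb) bt
```

Running gdb with only the core and not the binary is what produced the unresolved `?? ()` frames seen earlier in the thread.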
> > > > > > 2007/5/29, Jonathan Newman < address@hidden>:
> > > > > > > Hey guys, I am relatively new to glusterfs and am having a bit
> > > > > > > of difficulty getting a clustered fs up and running using it.
> > > > > > > Here are the details:
> > > > > > > GlusterFS package: 1.2.3
> > > > > > >
> > > > > > > 3 servers total, 2 running glusterfsd and 1 as client to mount
> > > > > > > the clustered fs.
> > > > > > > The glusterfsd-server.vol on the two servers is identical and
> > > > > > > contains:
> > > > > > > ### File: /etc/glusterfs-server.vol - GlusterFS Server Volume Specification
> > > > > > >
> > > > > > > ### Export volume "brick" with the contents of "/data" directory.
> > > > > > > volume brick
> > > > > > >   type storage/posix                 # POSIX FS translator
> > > > > > >   option directory /data             # Export this directory
> > > > > > > end-volume
> > > > > > >
> > > > > > > ### Add network serving capability to above brick.
> > > > > > > volume server
> > > > > > >   type protocol/server
> > > > > > >   option transport-type tcp/server   # For TCP/IP transport
> > > > > > >   option client-volume-filename /etc/glusterfs/glusterfs-client.vol
> > > > > > >   subvolumes brick
> > > > > > >   option auth.ip.brick.allow 10.*    # Allow access to "brick" volume
> > > > > > > end-volume
> > > > > > >
> > > > > > > The client file contains this:
> > > > > > > ### File: /etc/glusterfs/glusterfs-client.vol - GlusterFS Client Volume Specification
> > > > > > >
> > > > > > > ### Add client feature and attach to remote subvolume of server1
> > > > > > > volume client1
> > > > > > >   type protocol/client
> > > > > > >   option transport-type tcp/client   # for TCP/IP transport
> > > > > > >   option remote-host 10.20.70.1      # IP address of the remote brick
> > > > > > >   option remote-subvolume brick      # name of the remote volume
> > > > > > > end-volume
> > > > > > >
> > > > > > > ### Add client feature and attach to remote subvolume of server2
> > > > > > > volume client2
> > > > > > >   type protocol/client
> > > > > > >   option transport-type tcp/client   # for TCP/IP transport
> > > > > > >   option remote-host 10.20.70.2      # IP address of the remote brick
> > > > > > >   option remote-subvolume brick      # name of the remote volume
> > > > > > > end-volume
> > > > > > >
> > > > > > > ### Add unify feature to cluster "server1" and "server2".
> > > > > > > ### Associate an appropriate scheduler that matches your I/O demand.
> > > > > > > volume brick
> > > > > > >   type cluster/unify
> > > > > > >   subvolumes client1 client2
> > > > > > >   ### ** Round Robin (RR) Scheduler **
> > > > > > >   option scheduler rr
> > > > > > >   option rr.limits.min-free-disk 4GB # Units in KB, MB and GB are allowed
> > > > > > >   option rr.refresh-interval 10      # Check server brick after 10s.
> > > > > > > end-volume
> > > > > > >
> > > > > > > Server daemons on both servers are started using:
> > > > > > > /usr/sbin/glusterfsd --log-file=/var/log/glusterfs/glusterfs.log
> > > > > > >
> > > > > > > And then I mount the file system on the client using this command:
> > > > > > > /usr/sbin/glusterfs -f /etc/glusterfs/glusterfs-client.vol --log-file=/var/log/glusterfs/glusterfs.log /mnt/test
> > > > > > >
> > > > > > > All appears well and running mount on the client produces (among
> > > > > > > other items):
> > > > > > > glusterfs:17983 on /mnt/test type fuse (rw,allow_other,default_permissions)
> > > > > > >
> > > > > > > However the logs on the servers show (both show same output in logs):
> > > > > > > Tue May 29 11:56:29 2007 [DEBUG] tcp/server: Registering socket (4) for new transport object of 10.20.30.1
> > > > > > > Tue May 29 11:56:29 2007 [DEBUG] server-protocol: mop_setvolume: received port = 1020
> > > > > > > Tue May 29 11:56:29 2007 [DEBUG] server-protocol: mop_setvolume: IP addr = 10.*, received ip addr = 10.20.30.1
> > > > > > > Tue May 29 11:56:29 2007 [DEBUG] server-protocol: mop_setvolume: accepted client from 10.20.30.1
> > > > > > > Tue May 29 11:56:29 2007 [DEBUG] libglusterfs: full_rw: 0 bytes r/w instead of 113
> > > > > > > Tue May 29 11:56:29 2007 [DEBUG] libglusterfs: full_rw: Ñ÷·Ág, error string 'File exists'
> > > > > > > Tue May 29 11:56:29 2007 [DEBUG] libglusterfs/protocol: gf_block_unserialize_transport: full_read of header failed
> > > > > > > Tue May 29 11:56:29 2007 [DEBUG] protocol/server: cleaned up xl_private of 0x8050178
> > > > > > > Tue May 29 11:56:29 2007 [DEBUG] tcp/server: destroying transport object for 10.20.30.1:1020 (fd=4)
> > > > > > >
> > > > > > > AND running any sort of file operation from within /mnt/test yields:
> > > > > > > ~ # cd /mnt/test; ls
> > > > > > > ls: .: Transport endpoint is not connected
> > > > > > >
> > > > > > > 10.20.30.1 is the client and 10.20.70.[1,2] are the servers.
> > > > > > >
> > > > > > > Anyone have any pointers that may lead me in the correct direction?
> > > > > > >
> > > > > > > Thanks.
> > > > > > >
> > > > > > > -Jon
> > > > > > > _______________________________________________
> > > > > > > Gluster-devel mailing list
> > > > > > > address@hidden
> > > > > > > http://lists.nongnu.org/mailman/listinfo/gluster-devel
> > > > > > >
> > > > > >
> > > > > >
> > > > > > --
> > > > > > Anand V. Avati
> > > > > >
> > > > > _______________________________________________
> > > > > Gluster-devel mailing list
> > > > > address@hidden
> > > > > http://lists.nongnu.org/mailman/listinfo/gluster-devel
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > Amar Tumballi
> > > > http://amar.80x25.org
> > >
> > >
> > >
> >
> >
> > --
> > Amar Tumballi
> > http://amar.80x25.org
> > [bulde on #gluster/irc.gnu.org]
>
>
>
--
Amar Tumballi
http://amar.80x25.org
[bulde on #gluster/irc.gnu.org]