
Re: [Gluster-devel] glusterfs crash with mainline-2.5 patch 260 with dbe


From: Dale Dude
Subject: Re: [Gluster-devel] glusterfs crash with mainline-2.5 patch 260 with dbench
Date: Mon, 02 Jul 2007 17:59:59 -0400
User-agent: Thunderbird 2.0.0.5pre (Windows/20070701)

I pasted the bt from the coredump using the command in your reply: gdb glusterfsd -c /core.9682. glusterfsd dies, not glusterfs.

Just before it crashes, it sporadically says some files don't exist. But they do exist.

If I use 'gdb glusterfs -c /core.9682' (note glusterfs, not glusterfsd):
#0  0x00002aaaab07b35a in ?? ()
#1  0x00002aaaab5658bd in ?? ()
#2  0x0000003000000020 in ?? ()
#3  0x00002aaaab900020 in ?? ()
#4  0x000000000000000d in ?? ()
#5  0x00002aaaab900020 in ?? ()
#6  0x000000000000000d in ?? ()
#7  0x000000000000000c in ?? ()
#8  0x00002aaaaabd65be in ?? ()
#9  0x00007ffffffeef20 in ?? ()
#10 0x00007ffffffef018 in ?? ()
#11 0x00002aaaab073c56 in ?? ()
#12 0x000000000000000c in ?? ()
#13 0x00002aaaab06ccda in ?? ()
#14 0x00007ffffffeefd0 in ?? ()
#15 0x00007ffffffef00c in ?? ()
#16 0x00007ffffffef008 in ?? ()
#17 0x00007ffffffeefc8 in ?? ()
#18 0x00007ffffffef004 in ?? ()
#19 0x00007ffffffef000 in ?? ()
#20 0x00007ffffffeeffc in ?? ()
#21 0x00007ffffffeeff8 in ?? ()
#22 0x00007ffffffeeff4 in ?? ()
#23 0x00007ffffffeefe8 in ?? ()
#24 0x00007ffffffeedc0 in ?? ()
#25 0x00007ffffffeefe0 in ?? ()
#26 0x00007ffffffef01c in ?? ()
#27 0x00007ffffffef018 in ?? ()
#28 0x00007ffffffeefd8 in ?? ()
#29 0x00007ffffffef014 in ?? ()
#30 0x00007ffffffef010 in ?? ()
#31 0x0000000000508bc0 in ?? ()
#32 0x00002aaaab914c00 in ?? ()
#33 0x00002aaaab945f00 in ?? ()
#34 0x000000190000000d in ?? ()
#35 0x00002aaaab980ed0 in ?? ()
#36 0x0000000000000000 in ?? ()

=========================================

GLUSTERFSD.LOG:
2007-07-02 19:54:42 C [common-utils.c:205:gf_print_trace] debug-backtrace: Got signal (11), printing backtrace
2007-07-02 19:54:42 C [common-utils.c:207:gf_print_trace] debug-backtrace: /lib/libglusterfs.so.0(gf_print_trace+0x21) [0x2aaaaabced11]
2007-07-02 19:54:42 C [common-utils.c:207:gf_print_trace] debug-backtrace: /lib/libc.so.6 [0x2aaaab0391b0]
2007-07-02 19:54:42 C [common-utils.c:207:gf_print_trace] debug-backtrace: /lib/libc.so.6(strncpy+0x7a) [0x2aaaab07b35a]
2007-07-02 19:54:42 C [common-utils.c:207:gf_print_trace] debug-backtrace: /lib/glusterfs/1.3.0-pre5/xlator/protocol/server.so(server_writedir+0x39d) [0x2aaaab5658bd]



Harris Landgarten wrote:
Could you post the bt using gdb and the core. I think the devs are having a 
hard time reproducing this error.

Harris

----- Original Message -----
From: "Dale Dude" <address@hidden>
Cc: "gluster-devel" <address@hidden>
Sent: Monday, July 2, 2007 5:47:48 PM (GMT-0500) America/New_York
Subject: Re: [Gluster-devel] glusterfs crash with mainline-2.5 patch 260 with 
dbench

I had just turned it on, actually. Trying your 'du -h' freezes the whole box for a few seconds, and only glusterfsd cores:

#0  0x00002aaaab07b35a in strncpy () from /lib/libc.so.6
#1 0x00002aaaab66a8bd in server_writedir (frame=0x2aaaab908610, bound_xl=0x5097f0, params=<value optimized out>) at server-protocol.c:4520
#2  0x0000000000000000 in ?? ()


Harris Landgarten wrote:
Dale,

Have you had any problems with posix-locks. Yesterday I could not complete a du 
-h with it in the server chain. There have been a lot of fixes since but none I 
saw dealt directly with posix-locks.

Harris

----- Original Message -----
From: "Dale Dude" <address@hidden>
To: "gluster-devel" <address@hidden>
Sent: Monday, July 2, 2007 5:32:38 PM (GMT-0500) America/New_York
Subject: Re: [Gluster-devel] glusterfs crash with mainline-2.5 patch 260 with 
dbench

Reply to list. Sorry for the direct email Harris ;)

Dale Dude wrote:
I'm now just using the setup below. After taking io-cache out of the client chain there is no more crash. Thanks much for the ping, Harris.

Server: posix, posix-locks, io-threads, server
Client: client, io-threads, RR unify, writebehind, readahead
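(A client chain like that might be written roughly as the volfile sketch below. Volume names and option values are illustrative, in the 1.3-era spec-file format, and details such as the unify namespace volume are omitted; this is not Dale's actual config.)

```
volume client1
  type protocol/client
  option transport-type tcp/client
  option remote-host server1        # illustrative hostname
  option remote-subvolume brick
end-volume

volume iothreads
  type performance/io-threads
  subvolumes client1
end-volume

volume unify0
  type cluster/unify
  option scheduler rr               # the "RR unify" above
  subvolumes iothreads
end-volume

volume writebehind
  type performance/write-behind
  subvolumes unify0
end-volume

volume readahead
  type performance/read-ahead
  subvolumes writebehind
end-volume
```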

Dale

Harris Landgarten wrote:
Dale,

I was only using unify, readahead and writebehind on the client. I have 
io-threads on the server

Harris

----- Original Message -----
From: "Dale Dude" <address@hidden>
To: "gluster-devel" <address@hidden>
Sent: Monday, July 2, 2007 5:18:40 PM (GMT-0500) America/New_York
Subject: Re: [Gluster-devel] glusterfs crash with mainline-2.5 patch 260 with 
dbench

Removing performance/io-cache from glusterfs-client.vol solves this. Now only using writebehind.

Dale

Dale Dude wrote:
Ran again with patch 261 (which only seems to fix perms?) with the same results. Using Ubuntu Dapper (2.6.15-28-amd64-server) with fuse 2.6.5.

I only have one dbench test file so I have no choice.
-rw-r--r-- 1 root root 26214401 2005-11-19 21:19 /usr/share/dbench/client.txt

_______________________________________________
Gluster-devel mailing list
address@hidden
http://lists.nongnu.org/mailman/listinfo/gluster-devel







