
[Gluster-devel] 3.0.7 segfaults on startup

From: Reinis Rozitis
Subject: [Gluster-devel] 3.0.7 segfaults on startup
Date: Thu, 3 Feb 2011 14:52:56 +0200

I tried to upgrade from 3.0.5, but it crashes immediately on startup.

BTW, is the 3.0.x branch still supported at any level, or is 3.1.x recommended for production systems? Also, is it fine to use 3.1.x without the elastic management features (e.g. not creating a pool) and instead write the configuration yourself the old-fashioned way? I tried the latest 3.1.2 and it crashed in a similar way; only a GIT checkout build worked, but it sometimes gave strange, inconsistent directory readings (as if two different directories had suddenly been merged), with something like:

[2011-02-02 05:22:30.407925] E [posix.c:497:posix_lookup] posix: lstat on /gallery/233/957/10233957.jpg/1273473504_4046392.jpg failed: Not a directory
[2011-02-02 05:22:30.408344] W [fuse-bridge.c:190:fuse_entry_cbk] glusterfs-fuse: 5026901: LOOKUP() /gallery/233/957/10233957.jpg/1273473504_4046392.jpg => -1 (Not a directory)
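For what it's worth, the "Not a directory" (ENOTDIR) part comes straight from the kernel: it means an intermediate path component on the backend is a regular file rather than a directory, so the posix translator's lstat() fails before any GlusterFS logic runs. A minimal sketch of the same failure outside glusterfs (the temp directory merely stands in for the backend export):

```python
# An intermediate path component that is a regular FILE makes lstat()
# fail with ENOTDIR ("Not a directory"), exactly as in the log above.
import errno
import os
import tempfile

export = tempfile.mkdtemp()  # stand-in for the backend export directory
open(os.path.join(export, "10233957.jpg"), "w").close()  # a regular file

try:
    # Looking up a name "inside" a regular file fails in the kernel:
    os.lstat(os.path.join(export, "10233957.jpg", "1273473504_4046392.jpg"))
except OSError as e:
    print(os.strerror(e.errno))  # -> Not a directory
    assert e.errno == errno.ENOTDIR
```

So the log line suggests the backend briefly exposed /gallery/233/957/10233957.jpg as a file while a client expected it to be a directory, which matches the "merged directories" symptom.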

so I had to fall back to the original 3.0.5.

The setup is fairly simple: two big storage nodes which are also clients themselves.

OpenSuse 11.1

From the log:

Version      : glusterfs 3.0.7 built on Feb  3 2011 13:11:14
git: v3.0.7
Starting Time: 2011-02-03 13:44:51
Command line : /data/gluster/sbin/glusterfsd /data/storage
PID          : 28942
System name  : Linux
Nodename     : store224
Kernel Release : 2.6.31-44-default
Hardware Identifier: x86_64

Given volfile:
volume posix
  type storage/posix
  option directory /mnt/storage
end-volume

volume locks
  type features/locks
  subvolumes posix
end-volume

volume brick
  type performance/io-threads
  option thread-count 16
  subvolumes locks
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option transport.socket.nodelay on
  option auth.addr.brick.allow *
  subvolumes brick
end-volume

volume remote
  type protocol/client
  option transport-type tcp
  option remote-host
  option transport.socket.nodelay on
  option remote-subvolume brick
end-volume

volume replicate
  type cluster/replicate
  option read-subvolume locks
  option favorite-child locks
  subvolumes locks remote
end-volume

[2011-02-03 13:44:51] W [afr.c:2961:init] replicate: You have specified subvolume 'locks' as the 'favorite child'. This means that if a discrepancy in the content or attributes (ownership, permission, etc.) of a file is detected among the subvolumes, the file on 'locks' will be considered the definitive version and its contents will OVERWRITE the contents of the file on other subvolumes. All versions of the file except that on 'locks' WILL BE LOST.
[2011-02-03 13:44:51] N [afr.c:2662:notify] replicate: Subvolume 'locks' came back up; going online.
[2011-02-03 13:44:51] N [afr.c:2662:notify] replicate: Subvolume 'locks' came back up; going online.
[2011-02-03 13:44:51] N [fuse-bridge.c:2953:fuse_init] glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.13 kernel 7.12
[2011-02-03 13:44:51] N [afr.c:2662:notify] replicate: Subvolume 'locks' came back up; going online.
[2011-02-03 13:44:51] N [glusterfsd.c:1423:main] glusterfs: Successfully started
[2011-02-03 13:44:51] E [socket.c:802:socket_connect_finish] remote: connection to failed (Connection refused)
[2011-02-03 13:44:51] E [socket.c:802:socket_connect_finish] remote: connection to failed (Connection refused)
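The two "Connection refused" lines can be checked independently of glusterfs with a plain TCP connect to the remote brick. A minimal probe sketch (host and port here are placeholder assumptions; if I remember correctly, 3.0.x glusterfsd listens on 6996 by default unless transport.socket.listen-port overrides it):

```python
# Hypothetical reachability check for the remote brick mentioned in the
# "Connection refused" log lines. Host/port below are placeholders.
import socket

def brick_reachable(host, port, timeout=2.0):
    """Return True if something accepts a TCP connection on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timeout, unreachable host, etc.
        return False

# Prints False unless a glusterfsd (or anything else) is listening there:
print(brick_reachable("127.0.0.1", 6996))
```

If the probe fails, the crash may simply be racing the remote glusterfsd coming up, which would narrow the segfault down to the client-side reconnect path.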
pending frames:

patchset: v3.0.7
signal received: 11
time of crash: 2011-02-03 13:44:54
configuration details:
argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.0.7

Reinis Rozitis
