gluster-devel

Re: [Gluster-devel] GlusterFS /home and KDE


From: NovA
Subject: Re: [Gluster-devel] GlusterFS /home and KDE
Date: Mon, 7 May 2007 16:09:09 +0400

Avati,
  I've updated GlusterFS to patch-161 half an hour ago. The bug is
still there. :(
As I've found out, it's related to features/posix-locks, not
io-threads. If posix-locks is disabled in the server spec, KDE starts
flawlessly. Otherwise glusterfsd dies with the following backtrace:
--------
[May 07 15:33:04] [CRITICAL/common-utils.c:215/gf_print_trace()]
debug-backtrace:Got signal (11), printing backtrace
[May 07 15:33:04] [CRITICAL/common-utils.c:217/gf_print_trace()]
debug-backtrace:/usr/lib64/libglusterfs.so.0(gf_print_trace+0x21)
[0x2b25736faf71]
[May 07 15:33:04] [CRITICAL/common-utils.c:217/gf_print_trace()]
debug-backtrace:/lib64/libc.so.6 [0x2b25741725b0]
[May 07 15:33:04] [CRITICAL/common-utils.c:217/gf_print_trace()]
debug-backtrace:/usr/lib64/glusterfs/1.3.0-pre3/xlator/features/posix-locks.so
[0x2aaaaacb47b7]
[May 07 15:33:04] [CRITICAL/common-utils.c:217/gf_print_trace()]
debug-backtrace:/usr/lib64/glusterfs/1.3.0-pre3/xlator/protocol/server.so
[0x2aaaaaebf4fb]
[May 07 15:33:04] [CRITICAL/common-utils.c:217/gf_print_trace()]
debug-backtrace:/usr/lib64/glusterfs/1.3.0-pre3/xlator/protocol/server.so
[0x2aaaaaeba694]
[May 07 15:33:04] [CRITICAL/common-utils.c:217/gf_print_trace()]
debug-backtrace:/usr/lib64/libglusterfs.so.0(sys_epoll_iteration+0xd4)
[0x2b25736fc7e4]
[May 07 15:33:04] [CRITICAL/common-utils.c:217/gf_print_trace()]
debug-backtrace:[glusterfsd] [0x40164c]
[May 07 15:33:04] [CRITICAL/common-utils.c:217/gf_print_trace()]
debug-backtrace:/lib64/libc.so.6(__libc_start_main+0xf4)
[0x2b257415fae4]
[May 07 15:33:04] [CRITICAL/common-utils.c:217/gf_print_trace()]
debug-backtrace:[glusterfsd] [0x401149]
---------

My server.vol spec contains:
--------
volume disk
 type storage/posix              # POSIX FS translator
 option directory /mnt/hd        # Export this directory
end-volume

#volume brick
#  type features/posix-locks
#  subvolumes disk
#end-volume

volume brick
 type performance/io-threads
 option thread-count 8
 subvolumes disk
end-volume

### Add network serving capability to above brick
volume server
 type protocol/server
 option transport-type tcp/server     # For TCP/IP transport
# option bind-address 192.168.1.10     # Default is to listen on all interfaces
# option listen-port 6996              # Default is 6996
 option client-volume-filename /etc/glusterfs/client.vol
 subvolumes brick
 option auth.ip.brick.allow 10.1.0.*  # Allow access to "brick" volume
end-volume
-------
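
For reference, the configuration that triggers the crash chains the
posix-locks translator between storage/posix and io-threads (i.e. the
commented-out block above enabled). A minimal sketch; the intermediate
volume name "locks" is my own choice here, so that the io-threads
volume can keep the name "brick" that protocol/server exports:
--------
volume disk
 type storage/posix              # POSIX FS translator
 option directory /mnt/hd       # Export this directory
end-volume

volume locks                     # hypothetical name for this sketch
 type features/posix-locks       # enabling this triggers the SIGSEGV
 subvolumes disk
end-volume

volume brick
 type performance/io-threads
 option thread-count 8
 subvolumes locks                # posix-locks layered below io-threads
end-volume
--------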

With best regards,
 Andrey


2007/5/5, Anand Avati <address@hidden>:
Andrey,
  can you please confirm the bug with the latest TLA checkout? A few
fixes have been committed in various places, along with io-threads.




