Re: [Gluster-devel] Cascading different translators doesn't work as expected


From: yaomin @ gmail
Subject: Re: [Gluster-devel] Cascading different translators doesn't work as expected
Date: Tue, 6 Jan 2009 20:21:35 +0800

Krishna,

   1. The version is 1.3.9.
   2. The client and server vol files are in the attachments.
   3. The result of running "bt" in gdb is "No stack".

Thanks,
Yaomin

--------------------------------------------------
From: "Krishna Srinivas" <address@hidden>
Sent: Tuesday, January 06, 2009 5:36 PM
To: "yaomin @ gmail" <address@hidden>
Cc: <address@hidden>
Subject: Re: [Gluster-devel] Cascading different translators doesn't work as expected

Yaomin,

Can you:
* mention which version you are using
* send the modified client and server vol files (to check for any errors)
* send the gdb backtrace from the core file: run "gdb -c /core.pid glusterfs"
and then type "bt"
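For example (the core file name and the binary path here are
illustrative; substitute the actual pid and the path to your glusterfs
binary):

  # gdb -c /core.12345 /usr/local/sbin/glusterfs
  (gdb) bt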

Krishna

On Tue, Jan 6, 2009 at 2:43 PM, yaomin @ gmail <address@hidden> wrote:
Krishna,

    Thank you for your kind help before.

Following your advice, I ran into a new error. The storage node has
no log information, and the client's log looks like the following:

/lib64/libc.so.6[0x3fbb2300a0]
/usr/local/lib/glusterfs/1.3.9/xlator/cluster/afr.so(afr_setxattr+0x6a)[0x2aaaaaf0658a]
/usr/local/lib/glusterfs/1.3.9/xlator/cluster/stripe.so(notify+0x220)[0x2aaaab115c80]
/usr/local/lib/libglusterfs.so.0(default_notify+0x25)[0x2aaaaaab8f55]
/usr/local/lib/glusterfs/1.3.9/xlator/cluster/afr.so(notify+0x16d)[0x2aaaaaefc19d]
/usr/local/lib/glusterfs/1.3.9/xlator/protocol/client.so(notify+0x681)[0x2aaaaacebac1]
/usr/local/lib/libglusterfs.so.0(sys_epoll_iteration+0xbb)[0x2aaaaaabe14b]
/usr/local/lib/libglusterfs.so.0(poll_iteration+0x79)[0x2aaaaaabd509]
[glusterfs](main+0x66a)[0x4026aa]
/lib64/libc.so.6(__libc_start_main+0xf4)[0x3fbb21d8a4]
[glusterfs][0x401b69]
---------

[address@hidden ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2             9.5G  6.8G  2.2G  76% /
/dev/sda1             190M   12M  169M   7% /boot
tmpfs                1006M     0 1006M   0% /dev/shm
/dev/sda4             447G  2.8G  422G   1% /locfs
/dev/sdb1             459G  199M  435G   1% /locfsb
df: `/mnt/new': Transport endpoint is not connected

Thanks,
Yaomin
--------------------------------------------------
From: "Krishna Srinivas" <address@hidden>
Sent: Tuesday, January 06, 2009 1:09 PM
To: "yaomin @ gmail" <address@hidden>
Cc: <address@hidden>
Subject: Re: [Gluster-devel] Cascading different translators doesn't work as expected

Alfred,
Your vol files are wrong. You need to remove all the volume
definitions below "writeback" in the client vol file. In the server vol
file, the definitions of the performance translators have no effect.
Also, you need to use the "features/locks" translator above
"storage/posix".
Krishna

On Tue, Jan 6, 2009 at 8:51 AM, yaomin @ gmail <address@hidden>
wrote:
All,

    This problem seems to be a difficult one.

    There is a new problem I found while testing: when I kill all the
storage nodes, the client still tries to send data and does not quit.
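One knob that may help here, assuming this release supports it (not
verified against 1.3.9), is the transport timeout on the
protocol/client volumes, so the client gives up instead of retrying
forever. A sketch with illustrative names and values:

  volume client1
    type protocol/client
    option transport-type tcp/client
    option remote-host 192.168.13.5   # illustrative address
    option remote-subvolume locks
    option transport-timeout 30       # assumed option: fail after 30s
  end-volume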

Thanks,
Alfred
From: yaomin @ gmail
Sent: Monday, January 05, 2009 10:52 PM
To: Krishna Srinivas
Cc: address@hidden
Subject: Re: [Gluster-devel] Cascading different translators doesn't work as expected
Krishna,
    Thank you for your quick response.
    There are two log entries in the client's log file from when the
client was set up:
    2009-01-05 18:44:59 W [fuse-bridge.c:389:fuse_entry_cbk] glusterfs-fuse: 2: (34) / => 1 Rehashing 0/0
    2009-01-05 18:48:04 W [fuse-bridge.c:389:fuse_entry_cbk] glusterfs-fuse: 2: (34) / => 1 Rehashing 0/0

  There is no information at all in the storage node's log file.

  Although I changed the scheduler from ALU to RR, only the No.3
(192.168.13.5) and No.4 (192.168.13.7) storage nodes are working.
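For reference, the scheduler is selected inside the cluster/unify
volume; a minimal sketch of the RR form (volume and subvolume names
are illustrative):

  volume unify0
    type cluster/unify
    option scheduler rr          # was: option scheduler alu
    option namespace ns          # unify requires a namespace volume
    subvolumes node1 node2 node3 node4
  end-volume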

  Each machine has 2GB memory.

Thanks,
Alfred

Attachment: client.txt
Description: Text document

Attachment: server.txt
Description: Text document

