
Re: [Gluster-devel] about GlusterFS configuration


From: Onyx
Subject: Re: [Gluster-devel] about GlusterFS configuration
Date: Sun, 18 Nov 2007 14:43:36 +0100
User-agent: Thunderbird 2.0.0.6 (Windows/20070728)

We did some tests with latency and loss on an AFR link. Both have a significant impact on write performance. The write-behind translator helps a lot in this situation. You can easily test latency and loss on a link in the lab by using the network emulation (netem) functionality of Linux.
Like this for example:

tc qdisc add dev eth0 root netem delay 10ms
tc qdisc change dev eth0 root netem loss 0.5%
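
One caveat: as far as we have seen, a netem "change" replaces the previous parameter set rather than adding to it, so to emulate delay and loss at the same time, specify both in one command:

tc qdisc change dev eth0 root netem delay 10ms loss 0.5%

As for write-behind, the translator just stacks on top of the client volume in the client spec file. A minimal sketch, assuming a protocol/client volume named "client" (the volume name and the aggregate-size value are illustrative, check the docs for your version):

volume wb
  type performance/write-behind
  # batch many small writes into fewer, larger network writes
  # (the 128KB value is illustrative)
  option aggregate-size 128KB
  subvolumes client
end-volume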



Felix Chu wrote:
I am interested to see whether it is feasible to build a global clustered storage
across different IDCs with a single namespace, i.e. with the Gluster
servers located in different IDCs. Next week I will put together more detailed info about the test environment and send it to you.
Thanks again for your reply. This project is pretty good and we are happy to
continue testing it.

-----Original Message-----
From: address@hidden [mailto:address@hidden On
Behalf Of Krishna Srinivas
Sent: Friday, November 16, 2007 5:24 PM
To: Felix Chu
Cc: address@hidden
Subject: Re: [Gluster-devel] about GlusterFS configuration

Felix,

Sometimes touch does not call open(), so a better way would be the "od -N1" command.
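
For example, to force a self-heal check on every file in the volume, you could run something like this from a client (the mount path here is just an example):

find /mnt/glusterfs -type f -exec od -N1 {} \; > /dev/null

od -N1 opens each file and reads only its first byte, which is enough to trigger the self-heal on open().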

Regarding your setup, can you give more details? How would glusterfs be set up
across 20 data centers? What would the speed be between them?

Krishna

On Nov 16, 2007 2:38 PM, Felix Chu <address@hidden> wrote:
Hi Krishna,

Thanks for your quick reply.

About self-heal, does that mean that until open() is triggered, the whole
replication cluster will have one less replica than normal? If our
goal is to bring the replication status back to normal (the same number of
replicas as before), we need to trigger open() for all files stored in the
clustered file system, right? If so, is the easiest way to "touch *" in the
clustered mount point?

By the way, we will set up a testing environment to create a GlusterFS across
20 data centres, with point-to-point fiber between each data centre. The
longest distance between two data centres is about 1000 km. Do you think
GlusterFS can be applied in this kind of environment? Is there any minimum
network quality required between storage servers and clients?

Regards,
Felix


-----Original Message-----
From: address@hidden [mailto:address@hidden On
Behalf Of Krishna Srinivas
Sent: Friday, November 16, 2007 4:19 PM
To: Felix Chu
Cc: address@hidden
Subject: Re: [Gluster-devel] about GlusterFS configuration

On Nov 16, 2007 1:18 PM, Felix Chu <address@hidden> wrote:
Hi all,



I am a new user of the GlusterFS project. I have just started testing in a
local environment with 3 server nodes and 2 client nodes.



So far, it works fine and now I have two questions:



1.      I cannot clearly understand the option related to "namespace". I
find that most of the server conf files have separate "DS" and "NS"
volumes; what is the purpose of them?

The namespace is used:
* to assign inode numbers
* for readdir(): instead of reading the contents of all the subvolumes, unify
does readdir() just from the NS volume.

e.g. in
http://www.gluster.org/docs/index.php/GlusterFS_High_Availability_Storage_with_GlusterFS
there are "ds" and "ns" volumes in this config:

volume mailspool-ds
  type storage/posix
  option directory /home/export/mailspool
end-volume

volume mailspool-ns
  type storage/posix
  option directory /home/export/mailspool-ns
end-volume
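
For completeness, the ns volume is then wired into the unify translator. A rough sketch, with hypothetical subvolume names (server1-ds, server2-ds):

volume mailspool-unify
  type cluster/unify
  # inode numbers and readdir()s come from the namespace volume
  option namespace mailspool-ns
  # round-robin scheduler decides where new files are created
  option scheduler rr
  subvolumes server1-ds server2-ds
end-volume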



2.      In my testing environment, I applied the replication function to
replicate from one server to the other 2 servers. Then I unplugged one of the
servers. On the client side it was still OK to access the mount point. After a
period, I brought the unplugged server up again and found that the data from
the outage period does not appear on this server. Are any steps required to
sync data back to the newly recovered server?

You need to open() that file to trigger selfheal for that file.

Krishna





_______________________________________________
Gluster-devel mailing list
address@hidden
http://lists.nongnu.org/mailman/listinfo/gluster-devel



