
Re: [Gluster-devel] Performance


From: Raghavendra G
Subject: Re: [Gluster-devel] Performance
Date: Thu, 18 Mar 2010 08:46:25 +0400

Hi Roland,

What applications are you running on GlusterFS, and in particular, what is their I/O pattern? As a general guideline, you can try enabling or disabling each of the performance translators, observe the resulting gain or loss in performance, and tune the configuration accordingly.
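
For example, to take io-cache out of your client stack you only need to re-point the subvolume of the translator above it and remount (a sketch against the client volfile you posted; the volume names are yours, and the bypassed iocache block should be removed or commented out so it is not left dangling):

volume statprefetch
 type performance/stat-prefetch
 subvolumes readahead   # was: iocache
end-volume

The same edit, one translator at a time, lets you measure what each layer contributes.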

regards,
On Wed, Mar 17, 2010 at 9:05 PM, Roland Fischer <address@hidden> wrote:
Hi Community,

I need your help: I have performance problems with GlusterFS 3.0.0 and domUs (Xen).

I use two identical GlusterFS servers (physical hardware) and two Xen servers (also physical).

Currently I use client-side replication, which is awfully slow. A monitoring tool shows a lot of CPU wait time inside the domUs (before I switched to GlusterFS there was no CPU wait).

Is server-side replication faster, and is it failsafe? I mean: if one GlusterFS server goes down, does the other take over serving the domUs?

Is there anything in the volfiles that I can tune? Should I use server-side replication instead?
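
(What I mean by server-side replication is roughly the following shape in each server volfile -- just a sketch with invented names, where each server mirrors to its peer through a protocol/client volume:)

volume mirror
 type protocol/client
 option transport-type tcp
 option remote-host hostname-of-peer-server
 option remote-port 6997
 option remote-subvolume locks
end-volume

volume replicate
 type cluster/replicate
 subvolumes locks mirror
end-volume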

Should I use --disable-direct-io-mode? If yes, on the server side, the client side, or both? And how do I add it to fstab?
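
(As a guess at the syntax, assuming the direct-io-mode option of mount.glusterfs -- the mount point here is just a placeholder:)

/etc/glusterfs/mount-domU-images-client_repl.vol  /mnt/domU-images  glusterfs  defaults,direct-io-mode=disable  0  0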

Thank you for your help!!!

Server volfile:
cat /etc/glusterfs/export-domU-images-client_repl.vol
#############
volume posix
 type storage/posix
 option directory /GFS/domU-images
end-volume

volume locks
 type features/locks
 subvolumes posix
end-volume

volume domU-images
 type performance/io-threads
 option thread-count 8 # default is 16
 subvolumes locks
end-volume

volume server
 type protocol/server
 option transport-type tcp
 option auth.addr.domU-images.allow 192.*.*.*,127.0.0.1
 option transport.socket.listen-port 6997
 subvolumes domU-images
end-volume
######################

Client volfile:

cat /etc/glusterfs/mount-domU-images-client_repl.vol
volume gfs-01-01
 type protocol/client
 option transport-type tcp
 option remote-host hostname   # first GlusterFS server
 option transport.socket.nodelay on
 option remote-port 6997
 option remote-subvolume domU-images
 option ping-timeout 5
end-volume

volume gfs-01-02
 type protocol/client
 option transport-type tcp
 option remote-host hostname   # second GlusterFS server
 option transport.socket.nodelay on
 option remote-port 6997
 option remote-subvolume domU-images
 option ping-timeout 5
end-volume

volume gfs-replicate
 type cluster/replicate
 subvolumes gfs-01-01 gfs-01-02
end-volume

volume writebehind
 type performance/write-behind
 option cache-size 4MB   # write-behind buffer size
 subvolumes gfs-replicate
end-volume

volume readahead
 type performance/read-ahead
 option page-count 8              # cache per file  = (page-count x page-size)
 subvolumes writebehind
end-volume

volume iocache
 type performance/io-cache
 option cache-size 1GB   # 1GB cache is supported as of 3.0
 option cache-timeout 1
 subvolumes readahead
end-volume

volume statprefetch
 type performance/stat-prefetch
 subvolumes iocache
end-volume

#################################################


Best regards,
Roland



_______________________________________________
Gluster-devel mailing list
address@hidden
http://lists.nongnu.org/mailman/listinfo/gluster-devel



--
Raghavendra G

