
Re: [Gluster-devel] philosophy


From: Anand Avati
Subject: Re: [Gluster-devel] philosophy
Date: Wed, 21 Nov 2007 22:11:59 +0530

Chris,

  somehow I missed this mail. Please check my replies inline.

> Is there any philosophy in glusterfs as to what should happen on
> the server vs. the client and in which order the translators should be
> stacked?


 There is no hard rule about which component should go where. You can load any
of the performance, feature, and cluster translators on either the client or
the server. Each has different implications depending on which side it is
loaded. You need to understand what you get out of a translator and decide
whether you want it loaded on the client or the server. For example,
write-behind loaded on the server only cuts disk access time, while loading it
on the client cuts disk access plus network transfer time.
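As a rough sketch of the client-side case, a volfile might stack write-behind over the network layer like this (volume names and option values here are illustrative, not recommendations):

```
# client-side volfile sketch: writes are acknowledged and aggregated
# before crossing the wire, hiding network + disk latency
volume remote
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.0.1    # illustrative server address
  option remote-subvolume brick
end-volume

volume writebehind
  type performance/write-behind
  option aggregate-size 128KB       # illustrative value
  subvolumes remote
end-volume
```

Loading the same translator in the server-side volfile, just above the storage brick, would instead hide only the disk latency.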


> I've come to the conclusion that glusterfs on one brick with one
> real filesystem doesn't help much.  I have 4 filesystems set up and I
> can grab a few more if needed.


GlusterFS was designed as a clustered filesystem. It just happens to be
flexible enough to be used as a 1:1 network filesystem too. We have seen
GlusterFS peak at the network link speed on gig/e, and in dd tests its
throughput is >= NFS. In tests like IOzone where data is re-read, NFS does
heavy client-side page caching, which gives tremendous performance. You will
need to load io-cache to get the equivalent functionality.
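Along the same lines, io-cache would sit on top of the client stack; a minimal sketch (option values are illustrative and should be sized to the client's RAM):

```
# client-side volfile sketch: io-cache serves repeated reads from
# client memory, similar in effect to NFS page caching
volume remote
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.0.1    # illustrative server address
  option remote-subvolume brick
end-volume

volume iocache
  type performance/io-cache
  option cache-size 64MB            # illustrative value
  subvolumes remote
end-volume
```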

> And is there any way to gracefully turn striping on and off
> without off loading everything or not?  I'm thinking not.



Depends on how messy "gracefully" is. If you have your stripe volume separated
cleanly with the switch scheduler in unify, it should be a lot easier. But
there is no single ON/OFF switch.
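For reference, a layout that keeps the stripe volume cleanly separated behind unify might be sketched like this. The switch.case pattern syntax shown is my assumption from memory, so check the scheduler documentation before using it; brick names and sizes are illustrative:

```
# sketch: *.img files are routed to the stripe set, everything
# else goes to the plain bricks, so the stripe stays separable
volume stripe0
  type cluster/stripe
  option block-size 1MB             # illustrative stripe block size
  subvolumes brick3 brick4
end-volume

volume unify0
  type cluster/unify
  option namespace ns               # namespace volume, defined elsewhere
  option scheduler switch
  option switch.case *.img:stripe0  # assumed pattern syntax
  subvolumes brick1 brick2 stripe0
end-volume
```

With this separation, only the files matching the pattern land on the stripe, which is what makes later reorganization less messy.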


avati

-- 
It always takes longer than you expect, even when you take into account
Hofstadter's Law.

-- Hofstadter's Law

