Re: AFR load-balancing (was:Re: [Gluster-devel] Full bore.)


From: Krishna Srinivas
Subject: Re: AFR load-balancing (was:Re: [Gluster-devel] Full bore.)
Date: Mon, 19 Nov 2007 22:06:08 +0530

On Nov 19, 2007 9:39 PM, Jerker Nyberg <address@hidden> wrote:
> On Sat, 17 Nov 2007, Krishna Srinivas wrote:
>
> > Actually, if a file is read from a node and another application opens and
> > reads it, the reads should be done from the same node to take advantage
> > of the server-side io-caching and server-side kernel caching. As of now
> > we just hash the inode number to decide on the read-node, a plain and
> > simple mechanism.
> >
> > (We can think about schedulers similar to the ones in unify; they can't
> > be re-used, but conceptually they are similar: unify would schedule based
> > on disk space and afr would schedule based on CPU utilization.
> > In future versions though. Suggestions welcome :) )
>
> Just letting you know, this works as intended. However, in my case, when
> testing out the read balancing, I am afr'ing over seven servers and
> reading a single file from all the servers. If I could stripe the reads
> from the servers,

Striping reads is not implemented yet.
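
To illustrate the current mechanism: the read-node selection described
above amounts to something like the C sketch below. The names here
(afr_select_read_child, child_count) are made up for illustration and are
not the actual GlusterFS code.

    /* Sketch: pick a read-subvolume by hashing the inode number.
     * Every client reading the same inode lands on the same server,
     * so the server-side io-cache and kernel page cache stay warm. */
    #include <stdint.h>

    static int
    afr_select_read_child (uint64_t ino, int child_count)
    {
            return (int) (ino % child_count);  /* plain modulo hash */
    }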

> or even get the client to choose a different server than the first, I
> would be happy; this way I could scale the throughput when adding more
> servers. (But I could just let the clients read from themselves in the
> meantime.)

This can be done with the "option read-subvolume" option in AFR, right? Am I
missing something? You can load AFR on the client side and configure
each client to read from a different subvolume.
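
A minimal client-side volfile sketch of this setup (the host names and
volume names are made up, and the option syntax is from the 1.3-era docs,
so verify it against your release):

    # client-side AFR over two servers; this client prefers server1 for reads
    volume server1
      type protocol/client
      option transport-type tcp/client
      option remote-host 192.168.0.1
      option remote-subvolume brick
    end-volume

    volume server2
      type protocol/client
      option transport-type tcp/client
      option remote-host 192.168.0.2
      option remote-subvolume brick
    end-volume

    volume afr
      type cluster/afr
      subvolumes server1 server2
      # point each client at a different replica to spread read load
      option read-subvolume server1
    end-volume

The second client would set "option read-subvolume server2", and so on
across your seven servers.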

>
> Writes are slow (since a client needs to write to all servers), but
> perhaps it is possible to stack the afrs on the server side and let the
> servers do the replicating when writing... Hmmm...

Are you using write-behind?
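
For reference, write-behind is loaded as a performance translator on top
of the AFR volume on the client side, something like this (the option
names are from the 1.3-era docs as I recall them, so double-check them
for your release):

    volume writebehind
      type performance/write-behind
      subvolumes afr
      option aggregate-size 1MB  # batch small writes before sending them out
      option flush-behind on     # let close() return before the flush completes
    end-volume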

Krishna

>
> Since suggestions are welcome: a parity translator (RAID5/RAID6) on a file
> level (self-healing) would also be nice. I tried to look at the stripe
> code to figure out what is happening, but I didn't understand much. :) (I
> might have some time this spring to look at something like that, but that
> particular project is probably too much for me.)
>
> --jerker
>
>
>
> _______________________________________________
> Gluster-devel mailing list
> address@hidden
> http://lists.nongnu.org/mailman/listinfo/gluster-devel
>



