
Re: [Gluster-devel] raft consensus algorithm and glusterfs


From: Prashanth Pai
Subject: Re: [Gluster-devel] raft consensus algorithm and glusterfs
Date: Thu, 6 Feb 2014 04:15:21 -0500 (EST)

Hi,

There is already an ongoing effort to use Raft for synchronous replication. You
can find the repositories here:

http://review.gluster.org/#/q/project:glusterfs-nsr,n,z
https://forge.gluster.org/new-style-replication

Regards,
 -Prashanth Pai

----- Original Message -----
From: "Sharuzzaman Ahmat Raslan" <address@hidden>
To: "Gluster Devel" <address@hidden>
Sent: Thursday, February 6, 2014 1:53:45 PM
Subject: [Gluster-devel] raft consensus algorithm and glusterfs

Hi all, 

I'm not sure if you have all heard about the Raft consensus algorithm, but from the
paper, video, and slides on the Internet, I think this algorithm could help
solve a lot of distributed-coordination cases in GlusterFS.

For example, right now the initial mount of the filesystem can use any node's
IP, and later, when the application needs to read or write data, a node is
picked at random from the peer list.
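
A minimal sketch of that behaviour (the peer list and helper name are
hypothetical, purely for illustration):

    import random

    # Hypothetical peer list; in a real cluster the client would learn
    # these addresses at mount time.
    PEERS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

    def pick_peer():
        # Any peer can serve the request, so the choice is arbitrary;
        # there is no single coordinator.
        return random.choice(PEERS)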

With Raft, one of the nodes acts as a master after a successful election among
the peers. With one master in place, reads and writes can go directly to the
master, which replicates them to the other followers according to the
distribution requirements (distribute, replicate, stripe, distribute-replicate,
etc.).
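
A rough sketch of that write path, assuming the election has already happened
(all names here are hypothetical, not GlusterFS or Raft APIs; terms, failure
handling, and the election itself are omitted):

    class Node:
        def __init__(self, name):
            self.name = name
            self.log = []          # replicated write log

    class Cluster:
        def __init__(self, nodes):
            self.nodes = nodes
            self.leader = nodes[0] # assume the election already chose this node

        def write(self, entry):
            # Clients send every write to the leader first ...
            self.leader.log.append(entry)
            # ... and the leader replicates it to each follower,
            # counting acknowledgements (itself included).
            acks = 1
            for follower in self.nodes:
                if follower is not self.leader:
                    follower.log.append(entry)
                    acks += 1
            # Raft considers an entry committed once a majority of
            # nodes hold it in their logs.
            return acks > len(self.nodes) // 2

    cluster = Cluster([Node("n1"), Node("n2"), Node("n3")])
    assert cluster.write("write /some/file")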

I also believe that with this method (having a master), the locking issues that
happen with Samba could be reduced or resolved.

For more information, please visit 
http://raftconsensus.github.io/ 
https://www.youtube.com/watch?v=YbZ3zDzDnrw 

and the paper 
http://ramcloud.stanford.edu/raft.pdf 

Thanks. 


-- 
Sharuzzaman Ahmat Raslan 
