-:- Desine address@hidden has joined #gluster
wats up :)
I have an idea for gluster
say you have 5 servers configured with gluster
each server say has 100gb storage available
so that's a total of 500gb
ok
now here is the tricky bit
I want to have all servers backing up until 500gb so that they all have the same data
actually 100gb
so replicate up to 100gb
you want the same data replicated on all 5 servers?
then when it reaches 100gb the servers go into individual mode
ok, let me explain with 4 servers, it is easier
4 servers, 400gb total
so each server replicates up to 100gb
then when it reaches 100gb they go into dual mode
so now 2 servers replicate, up to 200gb
then when it reaches 200gb they divide and each stores 100gb
understand the concept?
4 servers with equal data can store only 100gb max
if you divide them and have 2 groups then you can store 200gb
in each group 2 machines work as one
2 servers
ok, got it
now as the data goes up they divide until you got 4, each storing 100gb of unique data :)
what do you think?
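The capacity arithmetic behind the idea can be sketched in a few lines. This is an illustrative helper, not anything from gluster itself: N identical servers are split into equal replica groups, each group stores one copy of its data, so usable capacity is the number of groups times the per-server capacity.

```python
def usable_capacity(servers: int, group_size: int, per_server_gb: int = 100) -> int:
    """Usable storage when `servers` nodes are split into replica
    groups of `group_size`; each group holds one copy of its data."""
    assert servers % group_size == 0, "groups must divide the servers evenly"
    groups = servers // group_size
    return groups * per_server_gb

# 4 servers replicating as one group: 100gb usable
print(usable_capacity(4, 4))  # 100
# divided into 2 groups of 2: 200gb usable
print(usable_capacity(4, 2))  # 200
# fully divided, every server on its own: 400gb usable
print(usable_capacity(4, 1))  # 400
```

Each division step halves the replication factor and doubles the usable space, which is exactly the 100gb → 200gb → 400gb progression described above.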
so until usage reaches 100gb, you want 4-way replication, and after that you want only 2-way replication
yeah, but this should be automated :)
ok, thinking about how you might do this
i think it's tricky
but i think it's a cool feature :)
one way to do this is
right now it can't be done automatically
yes
you'd need to majorly reorganize files when you 'divide'
well you can destroy data on 2 machines then start copying 100gb over
but before it reaches 100gb :)
so you get like 90gb say
yeah
so when it hits 90gb you activate double mode
it would not be 100gb
so you have 40gb to play with :)
as a cache to start transferring
then you get the client to only read from 2 machines while this is happening :)
until the system is divided
we'll be writing some infrastructure code for the 1.4 release, which will be used to write things like fsck and glusterfs-defrag
using that framework you'll be able to do this, with a few scripts (hopefully)
though doing this on-the-fly would be too much work
it would take time
* dtor is back from lunch
hey Desine
hey dtor
hey
what do you think of my idea?
Desine: idea sounds good, thinking how to implement it
it should be possible without 'moving' files around at all
sounds good :)
say you start off with 4 servers, S1, S2, S3, S4
and you have 4 files, A, B, C, D
initially:
S1: A B C D
yep
S2: A B C D
S3: A B C D
S4: A B C D
now all four servers are almost full (90-100%)
and the fifth file E is about to come
yep
now you delete files in a pattern from all 4 servers
S1: A B
S2: A B
S3: C D
S4: C D
and E goes to S1+S2 and F goes to S3+S4
sounds very smart
just by deleting files in a pattern we can 'spread thin'
you pretty much nailed it
another extension to the idea is to delete on-demand
i mean, when E is about to come, you need not delete A,B on S1,S2 and C,D on S3,S4..
you can just delete A on S1 and S2 and make place for E
and when F comes, delete C,D on S3,S4 making way for it