Re: [Gluster-devel] GlusterFS Example


From: DeeDee Park
Subject: Re: [Gluster-devel] GlusterFS Example
Date: Tue, 31 Jul 2007 02:57:34 +0000

Client unifies S1 and S2 (call him U1)
Client unifies S3 and S4 (call him U2)
Client then AFR's together U1 and U2 and S5

In this example, if I save a file and its *:3, it will appear....
* on U1 (either S1 or S2 depending on scheduler)
* on U2 (either S3 or S4 depending on sched.)
* on S5

Is that supported? I was trying to do something similar earlier, but was told that AFR goes under (is a subvolume of) unify since I was having too many problems. Can I create a couple of unifies, and then have both U1 and U2 as subvolumes of an AFR? If so, let me know, and I can test it.

Can I do something like the following?
Brick S1-1
Brick S1-2
Unify US1
  subvolumes S1-1, S1-2
Brick S2-1
Brick S2-2
Unify US2
  subvolumes S2-1, S2-2
AFR
  subvolumes US1, US2
  replicate *:2
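
Spelled out as a 1.3-style client spec file, I believe that stacking would look roughly like this. It is only a sketch: the volume names are invented, and I'm assuming the four bricks (s1-1, s1-2, s2-1, s2-2) and two namespace volumes (ns1, ns2) are already defined as protocol/client volumes above, since unify wants a namespace volume and a scheduler:

volume us1
  type cluster/unify
  option namespace ns1       # unify keeps its directory namespace here
  option scheduler rr        # round-robin new files between s1-1 and s1-2
  subvolumes s1-1 s1-2
end-volume

volume us2
  type cluster/unify
  option namespace ns2
  option scheduler rr
  subvolumes s2-1 s2-2
end-volume

volume afr
  type cluster/afr
  option replicate *:2       # two copies: one lands in us1, one in us2
  subvolumes us1 us2
end-volume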

From: Matt Paine <address@hidden>
To: address@hidden
Subject: Re: [Gluster-devel] GlusterFS Example
Date: Tue, 31 Jul 2007 09:15:32 +1000



option replicate *.tmp:1,*.html:4,*.php:4,*.db:1,*:2

In this case any .tmp or .db files will not get replicated (they will sit
on the first AFR brick only), all html and php files will be mirrored
across the first four bricks of the AFR, and all other files will be
mirrored across the first two bricks. (Note it is all on one line,
comma-delimited, with no spaces before or after the commas.)
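
To make that concrete, suppose the AFR has four subvolumes, called b1 to b4 here (made-up names), with that exact option line. As I understand it the first matching pattern decides the copy count, so files would land roughly like this:

scratch.tmp  -> b1                 (*.tmp:1, single copy)
index.html   -> b1, b2, b3, b4     (*.html:4)
page.php     -> b1, b2, b3, b4     (*.php:4)
cache.db     -> b1                 (*.db:1)
notes.txt    -> b1, b2             (falls through to the catch-all *:2)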


Right, but is it possible to just tell the config you want to store 2
copies of everything without telling each system where the redundant
copies should be stored? I.e. I am looking for something like RAID 6,
where I have, say, 8 bricks with two copies of every file. If I just enter
*:2 then the spare copies will all be on the first two bricks, and since
each of the 8 bricks has the same amount of storage I will run out of
space on the first two before the others, versus having the redundant
copies distributed over all 8 bricks.


Absolutely, except there is a bit more work to do. AFR works by starting at the first brick; any replication goes to the second, third, etc., as described above. But these are normal gluster bricks, so there is nothing stopping you from using a unify (or another AFR) as the first subvolume, another unify as the second, and just a posix brick for the rest (if that's what you want, of course). Let me try to explain...


Server one has one brick: S1
Server two has one brick: S2
Server three has one brick: S3
Server four has one brick: S4
Server five has one brick: S5

Client unifies S1 and S2 (call him U1)
Client unifies S3 and S4 (call him U2)
Client then AFR's together U1 and U2 and S5

In this example, if I save a file and its *:3, it will appear....
* on U1 (either S1 or S2 depending on scheduler)
* on U2 (either S3 or S4 depending on sched.)
* on S5

If I save a file as only *:1, it will appear only on U1 (either S1 or S2 depending on scheduler).

Ad nauseam.
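
A minimal sketch of just the top stanza for that layout (names made up; u1 and u2 would be defined as cluster/unify volumes, and s5 is a plain brick):

volume afr
  type cluster/afr
  option replicate *:3      # one copy each on u1, u2 and s5
  subvolumes u1 u2 s5
end-volume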


Of course there is nothing stopping you from unifying three bricks, or even unifying an AFR with an AFR.


i.e. (you might need a monospaced font to see this correctly...)

                        A5
            +-----------------------+
            |                       |
            U1                      U2
     +-------------+           +---------+
     |             |           |         |
     A1            A2          A3        A4
 +---+---+   +---+---+---+   +---+   +---+---+
 |   |   |   |   |   |   |   |   |   |   |   |
S01 S02 S03 S04 S05 S06 S07 S08 S09 S10 S11 S12


Where Sxx = Server bricks
       Ax = AFR brick
       Ux = Unify brick



So in this configuration (which I'm sure you have already worked out), if you save something with *:2, it will appear in both U1 and U2, which means (depending on the replicate spec of A1 to A4; assume *:2) it will appear in either A1 or A2 (because of the unify), AND it will also appear in either A3 or A4, and so on.
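
In spec-file terms the tree is just the same stanza types nested one level deeper. Here is a sketch of the left branch only, again with invented names, assuming s01 to s07 and a namespace volume are already defined, and that a3, a4 and u2 are built the same way from s08 to s12:

volume a1
  type cluster/afr
  option replicate *:2      # within this AFR: copies land on s01 and s02
  subvolumes s01 s02 s03
end-volume

volume a2
  type cluster/afr
  option replicate *:2
  subvolumes s04 s05 s06 s07
end-volume

volume u1
  type cluster/unify
  option namespace ns-u1
  option scheduler rr       # new files alternate between a1 and a2
  subvolumes a1 a2
end-volume

volume a5
  type cluster/afr
  option replicate *:2      # one copy ends up under u1, one under u2
  subvolumes u1 u2
end-volume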

I think I've laboured the point far enough :)






4) In my application the server is also the client. In my current config,
when one out of 3 servers is down I can no longer write, even though I am
telling my config to write to the local brick. Is there any way to fix
this? If a node is down I understand how I would not have access to its
files, but I don't want to take down the full cluster.

I'm not sure, sorry. Maybe one of the devs can clarify that. I know
there was an issue with an earlier release to do with AFR: when the
first brick was down, AFR was no longer available.


Ah... That could be it.

Which version of glusterfs are you using? tla, pre?

(That issue has been fixed for a little while now, so if you're using pre6 you shouldn't have come across it.)





Matt.



_______________________________________________
Gluster-devel mailing list
address@hidden
http://lists.nongnu.org/mailman/listinfo/gluster-devel





