Re: [Gluster-devel] Re: [Gluster-users] I/O fair share to avoid I/O bottlenecks on small clusters


From: Gordan Bobic
Subject: Re: [Gluster-devel] Re: [Gluster-users] I/O fair share to avoid I/O bottlenecks on small clusters
Date: Sun, 31 Jan 2010 13:52:27 +0000
User-agent: Thunderbird 2.0.0.22 (X11/20090625)

Traffic shaping with tc and iptables is your friend. ;)
Of course, if you are genuinely running out of bandwidth, nothing will fix the shortage itself, but if you merely need to make sure the bandwidth is distributed more fairly and sensibly between the machines, that can be done.

Typically I would put each guest VM into a class limited to 50% of the host bandwidth, give the classes equal priority, and add a few packet-scheduling tweaks (e.g. prioritizing small packets, ACKs, ssh, ping, etc.). That way the network I/O stays responsive, while no single VM is allowed to eat all the available bandwidth.
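
Something along these lines works as a starting point (a rough sketch only -- eth0, the 1 Gbit link rate and the guest addresses 192.168.122.11/.12 are placeholders, so substitute your own device, rates and VM addresses):

DEV=eth0
LINK=1000mbit      # total host uplink (assumption)
VMCEIL=500mbit     # hard cap per guest, i.e. 50% of the link

# Root HTB qdisc; anything unclassified falls into class 1:30
tc qdisc add dev "$DEV" root handle 1: htb default 30

# Parent class covering the whole link
tc class add dev "$DEV" parent 1: classid 1:1 htb rate $LINK

# Small, high-priority class for interactive traffic (ssh, ping, bare ACKs)
tc class add dev "$DEV" parent 1:1 classid 1:5 htb rate 50mbit ceil $LINK prio 0

# One class per guest VM: modest guarantee, ceiling at 50% of the link,
# equal priority so no single VM can starve the other
tc class add dev "$DEV" parent 1:1 classid 1:10 htb rate 200mbit ceil $VMCEIL prio 2
tc class add dev "$DEV" parent 1:1 classid 1:20 htb rate 200mbit ceil $VMCEIL prio 2

# Default class for the host's own traffic
tc class add dev "$DEV" parent 1:1 classid 1:30 htb rate 300mbit ceil $LINK prio 1

# Fair queuing inside each VM class so one flow can't hog the class
tc qdisc add dev "$DEV" parent 1:10 handle 10: sfq perturb 10
tc qdisc add dev "$DEV" parent 1:20 handle 20: sfq perturb 10

# Steer each guest's traffic into its class by source address
tc filter add dev "$DEV" parent 1: protocol ip prio 2 u32 match ip src 192.168.122.11/32 flowid 1:10
tc filter add dev "$DEV" parent 1: protocol ip prio 2 u32 match ip src 192.168.122.12/32 flowid 1:20

# Mark interactive and small packets with iptables and send them to class 1:5
iptables -t mangle -A POSTROUTING -o "$DEV" -p tcp --dport 22 -j MARK --set-mark 1
iptables -t mangle -A POSTROUTING -o "$DEV" -p icmp -j MARK --set-mark 1
iptables -t mangle -A POSTROUTING -o "$DEV" -p tcp -m length --length 0:128 -j MARK --set-mark 1
tc filter add dev "$DEV" parent 1: protocol ip prio 1 handle 1 fw flowid 1:5

The ceil values let a class borrow idle bandwidth from the parent, so a lone busy VM still gets its full 50%, while the rate guarantees only bite when everything is busy at once.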

Gordan

Mickey Mazarick wrote:
Can you tell us a little more about your setup? I'm running many hundreds of VMs on our cluster, but I've found InfiniBand is necessary if you have any large amount of I/O (databases, lots of drive access, etc.).

You may simply be saturating your I/O if you only have a single gigabit interface to your storage. Things like an NFS mount can direct all your I/O down one gig link, and that can be the death knell for your distributed parallel filer.
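
A quick way to confirm that is to watch the interface counters on the storage NIC while the slowdown is happening, e.g. with sar from the sysstat package (eth0 below is just a placeholder for your storage interface):

# one sample per second; look at the rxkB/s and txkB/s columns for eth0
sar -n DEV 1
# sustained values around ~110000-117000 kB/s mean the gigabit link is flat out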

-Mic

Ran wrote:
Hi,
I recently posted about a situation that happens when
a single virtual machine image takes down the entire server's I/O:
the whole storage becomes so slow that nothing works,
email, web, etc.
The Gluster guys replied that virtual machine hosting is a main goal of the storage
and that they will probably implement a fair-share I/O option to avoid these cases.

Can anyone tell me what the plans for this are? It appears to me that
this is one of the most important issues for such storage, since it is not
possible to run more than a few virtual machines in parallel.

Many thanks,