From: Jeff Darcy
Subject: Re: [Gluster-devel] Re: [Gluster-users] I/O fair share to avoid I/O bottlenecks on small clusters
Date: Mon, 01 Feb 2010 10:27:33 -0500
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.5) Gecko/20091209 Fedora/3.0-3.fc11 Thunderbird/3.0

On 02/01/2010 10:14 AM, Gordan Bobic wrote:
> Optimizing 
> file systems is a relatively complex thing and a lot of the conventional 
> wisdom is just plain wrong at times.

After approximately fifteen years of doing that kind of tuning, I
couldn't agree more.

>> Unfortunately, I/O traffic shaping is still in its infancy
>> compared to what's available for networking - or perhaps even "infancy"
>> is too generous.  As far as the I/O stack is concerned, all of the
>> traffic is coming from the glusterfsd process(es) without
>> differentiation, so even if the functionality to apportion I/O amongst
>> tasks existed it wouldn't be usable without more information.  Maybe
>> some day...
> 
> I don't think this would even be useful. It sounds like a request for 
> more finely grained (sub-process level!) control over disk I/O 
> prioritisation, without a clearly presented case that the current 
> functionality (ionice) is insufficient.

Does such a case really need to be made explicitly?  Dividing processes
into classes is all well and good, but there can still be contention
between processes in the same class.  Being able to resolve that
contention in a fair and/or deterministic way is still useful, and still
unaddressed.
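
To make the granularity concrete: the only knobs Linux exposes here are
the scheduling class and, within the best-effort class, a priority
level from 0 to 7, settable with ionice(1) or the underlying
ioprio_set(2) syscall.  A minimal sketch of my own (assuming Linux,
where only the CFQ elevator actually honours these values):

    /* Put the calling process in the best-effort class at level 4
     * (the default).  Linux has no glibc wrapper for ioprio_set, so
     * it goes through syscall(2); constants per ioprio_set(2). */
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    #define IOPRIO_CLASS_SHIFT      13
    #define IOPRIO_PRIO_VALUE(c, d) (((c) << IOPRIO_CLASS_SHIFT) | (d))
    #define IOPRIO_WHO_PROCESS      1
    #define IOPRIO_CLASS_BE         2   /* best-effort class */

    int main(void)
    {
        if (syscall(SYS_ioprio_set, IOPRIO_WHO_PROCESS, 0,
                    IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE, 4)) == -1) {
            perror("ioprio_set");
            return 1;
        }
        /* Any other process at BE/level 4 now contends with this one
         * on exactly equal footing; nothing finer can be expressed. */
        return 0;
    }

The shell equivalent is "ionice -c2 -n4 -p <pid>".  Two glusterfsd
processes configured this way are indistinguishable to the scheduler,
which is precisely the intra-class contention I mean.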

In any case, that might be a moot point.  I interpreted Ran's problem as
VMs running on GlusterFS *clients* causing contention at the GlusterFS
*servers*.  Maybe that was incorrect, but even if Ran doesn't face that
problem others do.  I certainly see and hear about it a lot from where I
sit at Red Hat, and no amount of tweaking at the hypervisor (i.e.
GlusterFS client) level will solve it.

> Hold on, you seem to be talking about something else here. You're 
> talking about clients not distributing their requests evenly across 
> servers. Is that really what the original problem was about?

My reading of Ran's mail at 09:13am on 01/30 says yes, but greater
clarity would certainly be welcome.



