[Gluster-devel] Performance Translators' Stability and Usefulness


From: Gordan Bobic
Subject: [Gluster-devel] Performance Translators' Stability and Usefulness
Date: Fri, 03 Jul 2009 23:24:48 +0100
User-agent: Thunderbird 2.0.0.22 (X11/20090625)

Just reading through the wiki on this and a few things are unclear, so I'm hoping someone can clarify.

1) readahead

- Is there any point in using this on systems where the interconnect is <= 1Gb/s? The wiki implies there isn't, but doesn't quite state it explicitly.

- Is there any point in using this on a server that is also its own client when used with replicate/afr? I'm guessing there isn't, since the local fs will be doing its own read-ahead, but I'd like some confirmation of that.
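
For concreteness, the sort of thing I mean is loading it on the client side directly above replicate, along these lines (volume names are made up and I'm going from memory on the option names, so treat this as a sketch rather than a working config):

volume afr
  type cluster/replicate
  subvolumes local remote
end-volume

# read-ahead stacked above replicate on the client
volume readahead
  type performance/read-ahead
  # number of pages to read ahead; the default is 2, I believe
  option page-count 4
  subvolumes afr
end-volume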

2) io-threads

Is this (usefully) applicable on the client side?
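
i.e. does stacking it above replicate in the client volfile actually buy anything, e.g. (thread count plucked out of thin air):

# io-threads on the client side, directly above replicate
volume iothreads
  type performance/io-threads
  # number of worker threads; just a guess at a sane value
  option thread-count 8
  subvolumes afr
end-volume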

3) io-cache

The wiki page has the same paragraph pasted for both io-threads and io-cache. Are they the same thing, or is this a documentation bug? What does io-cache do?
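
If io-cache really is a separate caching translator rather than another name for io-threads, I'd expect it to be loaded something like this (option names from memory, values arbitrary):

# io-cache on the client side, directly above replicate
volume iocache
  type performance/io-cache
  # total amount of data to cache
  option cache-size 64MB
  # seconds before a cached page is revalidated against the server
  option cache-timeout 1
  subvolumes afr
end-volume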

Finally - which translators are deemed stable (no known issues - memory leaks/bloat, crashes, corruption, etc.)?

Any particular suggestions on which combination of performance translators would be good for a shared-root AFR over a WAN? I already have read-subvolume set to the local mirror, but any improvement is welcome when latencies soar to 100ms and bandwidth gets hammered down to 1-2.5 Mb/s.
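
To make that concrete, assume a client-side graph along these lines (names invented and details simplified; the relevant bit is that read-subvolume points at the local mirror) - the question is what is worth stacking on top of the replicate volume:

# the node's own export
volume local
  type protocol/client
  option transport-type tcp
  option remote-host 127.0.0.1
  option remote-subvolume brick
end-volume

# the peer across the WAN
volume remote
  type protocol/client
  option transport-type tcp
  option remote-host peer.example.com
  option remote-subvolume brick
end-volume

volume afr
  type cluster/replicate
  # serve reads from the local mirror
  option read-subvolume local
  subvolumes local remote
end-volume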

Another thing - when a node works standalone in AFR, performance is pretty good, but as soon as a peer node joins, performance on the original (primary) node degrades quite significantly, even though the interconnect is direct gigabit, which shouldn't be adding any particular latency (< 0.1ms) or overhead, especially on the primary node. Is there any particular reason for this degradation? It's OK in normal usage, but some operations (e.g. building a big bootstrapping initrd - 50MB compressed, including all the kernel drivers) take nearly 10x longer when the peer joins than when the node is standalone. I expected some degradation, but only on the order of the added network latency, and this is way, way more. I tried with and without direct-io=off, and that didn't make a great amount of difference. Which performance translators are likely to help with this use case?
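
If it's the write path that hurts (every write having to complete on both replicas before it returns), would adding write-behind above replicate be the right thing to try? E.g. (option names from memory, values arbitrary):

# write-behind above replicate on the client
volume writebehind
  type performance/write-behind
  # amount of write data buffered before being flushed downstream
  option cache-size 4MB
  # if I understand it correctly, lets flush/close return before pending writes complete
  option flush-behind on
  subvolumes afr
end-volume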

Gordan



