
Re: [Gluster-devel] [RFC ] dictionary optimizations


From: Jeff Darcy
Subject: Re: [Gluster-devel] [RFC ] dictionary optimizations
Date: Wed, 04 Sep 2013 08:05:01 -0400
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20130623 Thunderbird/17.0.7

On 09/04/2013 04:27 AM, Xavier Hernandez wrote:
> I would also like to note that each node can store multiple elements. The
> current implementation creates a node for each byte in the key. In my
> implementation I only create a node when there is a prefix coincidence
> between two or more keys. This reduces the number of nodes and the number
> of indirections.
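
(For illustration, the layout being described is roughly the following
path-compressed form; the names and fields here are hypothetical, not taken
from the actual patch.)

    /*
     * Sketch: a node exists only where two or more keys diverge, and the
     * shared prefix is stored as a whole fragment instead of one node per
     * byte of the key.
     */
    #include <stddef.h>
    #include <string.h>

    typedef struct trie_node {
            const char       *prefix;     /* compressed fragment shared by everything below */
            size_t            prefix_len;
            void             *value;      /* non-NULL if a key terminates at this node */
            struct trie_node *child;      /* keys that extend the prefix */
            struct trie_node *next;       /* keys that diverge here (sibling) */
    } trie_node_t;

    /* Lookup compares whole fragments, so the number of indirections is
     * bounded by the number of divergence points, not by the key length. */
    static void *
    trie_lookup (trie_node_t *node, const char *key)
    {
            while (node) {
                    if (strncmp (key, node->prefix, node->prefix_len) == 0) {
                            key += node->prefix_len;
                            if (*key == '\0')
                                    return node->value;
                            node = node->child;
                    } else {
                            node = node->next;
                    }
            }
            return NULL;
    }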

Whatever we do, we should try to make sure that the changes are profiled
against real usage.  When I was making my own dict optimizations back in March
of last year, I started by looking at how they're actually used.  At that time,
a significant majority of dictionaries contained just one item.  That's why I
only implemented a simple mechanism to pre-allocate the first data_pair instead
of doing something more ambitious.  Even then, the difference in actual
performance or CPU usage was barely measurable.  Dict usage has certainly
changed since then, but I think you'd still be hard pressed to find a case
where a single dict contains more than a handful of entries, and approaches
that are optimized for dozens to hundreds might well perform worse than simple
ones (e.g. because of cache aliasing or branch misprediction).
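
As a rough illustration of that change, the idea was simply to carry storage
for one pair inside the dict itself so the common single-entry case costs no
extra allocation. The structures below are simplified sketches, not the real
dict_t/data_pair_t definitions:

    #include <stdlib.h>

    typedef struct data_pair {
            struct data_pair *next;
            char             *key;
            void             *value;
    } data_pair_t;

    typedef struct dict {
            int          count;
            data_pair_t *members;           /* singly-linked list of pairs */
            data_pair_t  free_pair;         /* built-in storage for the first pair */
            int          free_pair_in_use;
    } dict_t;

    static data_pair_t *
    pair_alloc (dict_t *dict)
    {
            if (!dict->free_pair_in_use) {
                    dict->free_pair_in_use = 1;
                    return &dict->free_pair;   /* one-entry dicts stay allocation-free */
            }
            return calloc (1, sizeof (data_pair_t));
    }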

If you're looking for other optimization opportunities that might provide even
bigger "bang for the buck" then I suggest that stack-frame or frame->local
allocations are a good place to start.  Or string copying in places like
loc_copy.  Or the entire fd_ctx/inode_ctx subsystem.  Let me know and I'll come
up with a few more.  To put a bit of a positive spin on things, the GlusterFS
code offers many opportunities for improvement in terms of CPU and memory
efficiency (though it's surprisingly still way better than Ceph in that regard).
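
To make the frame->local point a bit more concrete, the kind of recycling that
avoids a malloc/free round trip per call might look roughly like this. This is
an illustrative sketch only, not the existing libglusterfs mem-pool code:

    #include <pthread.h>
    #include <stdlib.h>
    #include <string.h>

    struct local_pool {
            void           **slots;      /* stack of recycled objects */
            int              top;
            int              capacity;
            size_t           obj_size;
            pthread_mutex_t  lock;
    };

    static void *
    local_get (struct local_pool *pool)
    {
            void *obj = NULL;

            pthread_mutex_lock (&pool->lock);
            if (pool->top > 0)
                    obj = pool->slots[--pool->top];
            pthread_mutex_unlock (&pool->lock);

            if (obj) {
                    memset (obj, 0, pool->obj_size);  /* callers expect a zeroed local */
                    return obj;
            }
            return calloc (1, pool->obj_size);        /* pool empty: fall back to malloc */
    }

    static void
    local_put (struct local_pool *pool, void *obj)
    {
            pthread_mutex_lock (&pool->lock);
            if (pool->top < pool->capacity) {
                    pool->slots[pool->top++] = obj;
                    obj = NULL;
            }
            pthread_mutex_unlock (&pool->lock);

            if (obj)
                    free (obj);                       /* pool full: release it */
    }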



