From: Nathaniel Smith
Subject: Re: [Monotone-devel] Performance improvement splitout
Date: Thu, 7 Sep 2006 20:20:30 -0700
User-agent: Mutt/1.5.12-2006-07-14

On Mon, Sep 04, 2006 at 05:42:45PM -0700, Eric Anderson wrote:
> I've gotten a number of the performance improvements split out for
> merging: 
> 
> net.venge.monotone.experiment.performance.inline-verify

Merged to mainline.

> net.venge.monotone.experiment.performance.vcache-size-hook

Still looks plausible, but I am lazy and didn't want to figure out
if this needs any adjustment to deal with the recent changes to roster
caching.  If you could resolve any conflicts and figure out if there
should be more changes, that would be cool.

> net.venge.monotone.experiment.performance.whitespace-trim

Merged to mainline.

> net.venge.monotone.experiment.performance.xdelta-speedup

I just added a fast path to widen<> on mainline for when the source
type is unsigned, and then turned all of this branch's static_casts
back into widens before merging to mainline.  This should still give
the same speedup (since all the widens in question should now compile
down into static_casts anyway), but you might want to
double-check.
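
For the record, a minimal sketch of the shape of that fast path
(illustrative only -- this is not monotone's actual widen<>, just
the idea):

  #include <limits>

  // Widen a value of type V into T (precondition: T is at least as
  // wide as V).  When the conversion is value-preserving it is just
  // a static_cast; since is_signed is a compile-time constant, the
  // branch disappears and the fast path inlines away entirely.
  template <typename T, typename V>
  inline T widen(V const & v)
  {
    if (!std::numeric_limits<V>::is_signed
        || std::numeric_limits<T>::is_signed)
      return static_cast<T>(v);   // fast path: unsigned source, etc.

    // Signed V -> unsigned T: zero-extend v's original bit pattern
    // by masking off the sign-extended high bits.
    T mask = static_cast<T>(-1) >> ((sizeof(T) - sizeof(V)) * 8);
    return static_cast<T>(v) & mask;
  }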

> net.venge.monotone.performance.experiment.botan-gzip

My memory on this one is that I didn't see anything wrong with not
clearing gzip memory, but that I didn't feel qualified to review the
implementation.

Looking at it now (having had a little brush with Botan's memory
allocation subsystem recently :-)), umm... this patch has an
unconditional printf("XXXXXXXXX\n") in it (in supposedly dead code,
but still); the comment on "paranoid_memory_clearing" is misleading
(it seems to say that if set, Botan in general will not memset(0)
things, but if you ever actually try that, Botan crashes horribly, so
paranoia isn't really the issue); and I don't think the API is one we
could convince Jack to take upstream.

The idea is good, but I think we can manage this a little more
elegantly without a big investment.

> In addition to the ones that have been discussed earlier, in testing
> with pulls over the local network, I found that both sides would idle
> during the transfer.  Interestingly, this doesn't happen in testing on
> one machine, even with a dual CPU machine so that both server and
> client can run at the same time.  I haven't split this one out yet:
> 
> 5cc1ed0346c5129ddd77ae895985bf743f349cae: skip calling select
> Skip calling select again if we processed some data from the remote
> side.  Somehow the call to select with a 1us timeout ends up waiting
> much of the time, leading to idleness in the client (on a pull).
> Oddly, making this change significantly increases the amount of user
> time; I haven't investigated why, since the tradeoff for reduced
> wall-clock time is a win.  Best guess is that because the client is
> running faster, it makes more recv calls for less data.

Hmm, I don't get it -- if the socket is readable (and presumably it
is, if we're getting bytes when we call recv), shouldn't select be
returning instantly anyway?  Your description seems to indicate that
it's actually blocking...
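
To make the puzzle concrete, the loop shape in question is roughly
this (a hypothetical sketch, not the actual netsync code -- names
and details invented):

  #include <cerrno>
  #include <sys/select.h>
  #include <sys/socket.h>
  #include <sys/time.h>
  #include <sys/types.h>

  // Hypothetical sketch of the service loop under discussion.  If
  // 'sock' still has unread bytes, select() should mark it readable
  // and return immediately, so the 1us timeout should never really
  // be hit -- which is why the observed blocking is surprising.
  bool service_connection(int sock)
  {
    char buf[4096];
    bool processed = false;  // did the last pass consume data?

    for (;;)
      {
        if (!processed)      // the proposed change: skip select()
          {                  // right after we got data from the peer
            fd_set rd;
            FD_ZERO(&rd);
            FD_SET(sock, &rd);
            struct timeval tv = { 0, 1 };   // the 1us timeout
            if (select(sock + 1, &rd, 0, 0, &tv) <= 0)
              continue;                     // nothing readable yet
          }

        ssize_t got = recv(sock, buf, sizeof buf, MSG_DONTWAIT);
        if (got > 0)
          {
            // ... feed 'got' bytes to the protocol state machine ...
            processed = true;
          }
        else if (got < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
          processed = false;   // drained; go back to select()ing
        else
          return got == 0;     // orderly shutdown (0) or error (<0)
      }
  }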

-- Nathaniel

-- 
"Of course, the entire effort is to put oneself
 Outside the ordinary range
 Of what are called statistics."
  -- Stephen Spender



