monotone-devel

Re: [Monotone-devel] net.venge.monotone.experiment.performance


From: Nathaniel Smith
Subject: Re: [Monotone-devel] net.venge.monotone.experiment.performance
Date: Wed, 2 Aug 2006 13:17:33 -0700
User-agent: Mutt/1.5.12-2006-07-14

On Tue, Aug 01, 2006 at 12:39:05AM -0700, Eric Anderson wrote:
> I've created a net.venge.monotone.experiment.performance branch to put
> a bunch of performance-enhancing patches on that may or may not be
> appropriate for mainline.
> 
> I'll try to send a summary from time to time of the results, and I'm
> trying to remember to put performance benchmarking in with each of the
> updates.

Cool.

> Suitable for mainline:
>   eddb7e59361efeb8d9300ba0ddd7483272565097:
>     Put an upper bound on the amount of memory consumed during a
>     single commit.  Right now a commit keeps all of the compressed
>     differences in memory, which is not a good thing on a big import of
>     an existing project.  The patch limits the amount stored in memory
>     to 16MB; it has no effect on sync, which already flushes every 1MB.
>     Detailed performance improvement included at the bottom, since I
>     forgot to include it in the commit message.

You use fprintf in one place.  In monotone, never use fprintf, or
stdio in general, or, well, any sort of standard library IO at all
:-).  Use P or W instead, with F for formatting.

Could you say a few more words to convince me of the correctness of
your approach?  I don't understand the existing pending-write code
well enough to comment knowledgeably, but sqlite already has
bounded-memory write buffering, so if we're not using it we probably
have some reason, and this code makes it so we silently use it in
some cases and not in others.  As a more specific worry, it doesn't
look like cancel_pending_write can possibly fulfill its contract now.
So, does this all work, and if so, why?
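To make the worry concrete, here is a sketch (hypothetical names and
structure, not monotone's actual code) of a size-bounded pending-write
scheme: once an automatic flush has pushed entries out, a later cancel
can no longer honor its remove-before-write contract.

```cpp
#include <cstddef>
#include <map>
#include <string>

// Hypothetical illustration of bounded-memory pending writes.
class pending_writes {
    std::map<std::string, std::string> pending; // id -> compressed delta
    std::size_t bytes;
    std::size_t const limit;
    std::size_t flushed_count;
public:
    explicit pending_writes(std::size_t lim)
        : bytes(0), limit(lim), flushed_count(0) {}

    void put(std::string const & id, std::string const & data) {
        bytes += data.size();
        pending[id] = data;
        if (bytes >= limit)
            flush(); // the new bounded-memory behavior
    }

    // Contract: remove a write before it reaches the database.
    // After flush() has run, the entry is gone from 'pending' and
    // cancel can only fail -- the concern raised above.
    bool cancel(std::string const & id) {
        std::map<std::string, std::string>::iterator i = pending.find(id);
        if (i == pending.end())
            return false;
        bytes -= i->second.size();
        pending.erase(i);
        return true;
    }

    void flush() {
        flushed_count += pending.size(); // stand-in for real db writes
        pending.clear();
        bytes = 0;
    }

    std::size_t flushed() const { return flushed_count; }
};
```

With a 16-byte limit, two 10-byte puts trigger a flush, after which a
cancel of either id fails even though the caller never asked for the
write to happen.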

>   4e99cc37f548b5884d63c48bc486dfe98c8d0bd2:
>     Add support for expedited parsing of rosters during annotation.
>     Also skip verification of SHA1 hashes, again only during annotation.
>     Worth a 5-20x speedup on annotation, but the faster parsing code may
>     not succeed on all rosters that the standard code should parse.  I
>     believe the faster parsing code will abort in any case where it
>     might do the wrong thing.

Do you have any measurements of how much of that gain comes from each
of the two optimizations?  Each half has different hurdles to overcome
to make it into mainline, so it'd be nice to be able to prioritize
them separately...
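For illustration, the abort-rather-than-guess pattern the commit
describes might look like this sketch (hypothetical names, not the
actual roster parser): the expedited path handles only the one layout
it was written for and throws on anything else, so the caller can
retry with the general parser.

```cpp
#include <stdexcept>
#include <string>
#include <utility>
#include <vector>

struct parse_error : std::runtime_error {
    explicit parse_error(std::string const & what)
        : std::runtime_error(what) {}
};

// Fast path: assumes every line is exactly "key value\n".
// Any deviation aborts instead of risking a wrong answer.
std::vector<std::pair<std::string, std::string> >
fast_parse(std::string const & text)
{
    std::vector<std::pair<std::string, std::string> > out;
    std::string::size_type pos = 0;
    while (pos < text.size()) {
        std::string::size_type eol = text.find('\n', pos);
        if (eol == std::string::npos)
            throw parse_error("missing final newline");
        std::string::size_type sp = text.find(' ', pos);
        if (sp == std::string::npos || sp > eol)
            throw parse_error("line is not 'key value'");
        out.push_back(std::make_pair(text.substr(pos, sp - pos),
                                     text.substr(sp + 1, eol - sp - 1)));
        pos = eol + 1;
    }
    return out;
}
```

A caller would wrap this in try/catch and fall back to the standard
parser on parse_error, so the fast path can only ever cost a retry,
never a wrong result.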

-- Nathaniel

-- 
"If you can explain how you do something, then you're very very bad at it."
  -- John Hopfield



