From: Nathaniel Smith
Subject: Re: HOWTO: benchmarking monotone (was Re: [Monotone-devel] "memory exhausted" error for 'mtn list status' command)
Date: Fri, 28 Jul 2006 14:48:02 -0700
User-agent: Mutt/1.5.12-2006-07-14

On Fri, Jul 28, 2006 at 11:58:29AM -0700, Eric Anderson wrote:
> Nathaniel Smith writes:
>  > Weirdly, the memtime data doesn't really reflect this.  For "before"
>  > I get:
>  > 
>  >   ls_unknown-avg-resident-MiB,4.35963439941,4.33722305298,4.40716743469
>  >   ls_unknown-avg-size-MiB,21.4379463196,21.345328331,21.6831884384
>  >   ls_unknown-max-resident-MiB,4.703125,4.69921875,4.70703125
>  >   ls_unknown-max-size-MiB,22.34375,22.34765625,22.34765625
>  >   ls_unknown-num-samples,78,65,75
>  >   ls_unknown-system-time,0.051,0.037,0.044
>  >   ls_unknown-user-time,0.478,0.470,0.477
>  >   ls_unknown-wall-time,0.549,0.519,0.529
>  > 
>  > And for "after":
>  > 
>  >   ls_unknown-avg-resident-MiB,3.36142158508,3.18865299225,3.06260299683
>  >   ls_unknown-avg-size-MiB,19.809679985,19.9870185852,16.5047159195
>  >   ls_unknown-max-resident-MiB,4.69140625,4.69140625,4.69140625
>  >   ls_unknown-max-size-MiB,21.83203125,21.83203125,21.83203125
>  >   ls_unknown-num-samples,299,208,93
>  >   ls_unknown-system-time,0.047,0.048,0.036
>  >   ls_unknown-user-time,0.458,0.477,0.461
>  >   ls_unknown-wall-time,2.092,1.467,0.743
>  > 
>  > If anything, this would suggest that memory usage had _increased_?
>  > That doesn't make much sense to me, and massif seems like the more
>  > trustworthy party here, with its fine-grained deterministic approach.
>  > The numbers also seem weirdly large -- maybe we're measuring how much
>  > of the binary has gotten swapped into memory, for instance?  Eric, any
>  > thoughts?
> 
> This looks perfectly correct assuming that "after" is the run that
> includes your fix.

Right.

> avg resident size has decreased from 4.35 MiB to around 3.2 MiB,
> and avg size has decreased from 21.4 MiB to 19.8 MiB.  The maximum
> values show a similar, smaller decrease.  The only odd thing is that
> all three "after" runs took far more wall-clock time, but similar
> system and user time.

Hrm, weird -- I actually re-ran these while writing the email, since
I'd deleted my first run, and I could swear the numbers were weirder
before :-).  (Weirder as in: the average resident sizes looked
swapped between the two runs.)  I still have some trouble
interpreting these -- why are the max-resident sizes essentially
unchanged?  I would have thought that was the most interesting
number, since peak memory usage is often more important than average
memory usage, and the total (as opposed to resident) numbers are
clearly counting something like the size of all linked libraries.
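
(For concreteness -- and this is just my reading of the Linux
counters, not anything specific to our instrumenter -- the "size"
figure presumably corresponds to VmSize in /proc/<pid>/status, which
counts every mapped page, binary and shared libraries included, while
"resident" corresponds to VmRSS, the pages actually in RAM.  A
minimal sketch for eyeballing the two, assuming Linux and nothing
else:

  /* Print VmSize vs VmRSS for the current process (Linux /proc).
   * VmSize counts every mapped page -- binary, shared libraries,
   * heap, stack -- while VmRSS counts only pages resident in RAM. */
  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
      FILE *f = fopen("/proc/self/status", "r");
      char line[256];
      if (!f)
          return 1;
      while (fgets(line, sizeof line, f))
          if (strncmp(line, "VmSize:", 7) == 0
              || strncmp(line, "VmRSS:", 6) == 0)
              fputs(line, stdout);
      fclose(f);
      return 0;
  }

Even that trivial program should report a VmSize of a few MiB, mostly
libc and friends, which would explain why the "size" numbers dwarf
anything the heap itself is doing.)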

Probably I/we just need more experience interpreting these numbers;
they're a little trickier to understand than I expected.  (It might
still be interesting to add an instrumenter that hooks into malloc
and reports peak _heap_ usage in particular, just because that's
presumably less noisy and easier to map back to the code.)
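
Something along those lines, say -- purely a sketch of the idea,
assuming glibc (for RTLD_NEXT and malloc_usable_size), ignoring
thread safety, realloc/calloc, and the wrinkle that dlsym itself may
allocate:

  /* LD_PRELOAD shim that wraps malloc/free and tracks the peak number
   * of live heap bytes.  Not thread-safe; realloc/calloc and the
   * dlsym-allocates-during-bootstrap problem are glossed over. */
  #define _GNU_SOURCE
  #include <dlfcn.h>
  #include <malloc.h>
  #include <stdio.h>

  static size_t live = 0, peak = 0;

  void *malloc(size_t n)
  {
      static void *(*real_malloc)(size_t);
      if (!real_malloc)
          real_malloc = (void *(*)(size_t)) dlsym(RTLD_NEXT, "malloc");
      void *p = real_malloc(n);
      if (p) {
          live += malloc_usable_size(p);
          if (live > peak)
              peak = live;
      }
      return p;
  }

  void free(void *p)
  {
      static void (*real_free)(void *);
      if (!real_free)
          real_free = (void (*)(void *)) dlsym(RTLD_NEXT, "free");
      if (p) {
          size_t sz = malloc_usable_size(p);
          /* Guard against frees of blocks allocated before we loaded. */
          live = sz > live ? 0 : live - sz;
      }
      real_free(p);
  }

  /* Print the peak when the instrumented process exits. */
  __attribute__((destructor))
  static void report(void)
  {
      fprintf(stderr, "peak heap: %zu bytes\n", peak);
  }

Built with something like "gcc -shared -fPIC -o shim.so shim.c -ldl"
and run under LD_PRELOAD, that would give one peak-heap figure per
run that shared-library mappings and sampling noise can't perturb.
(Untested, and the details would need care, but you get the idea.)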

-- Nathaniel

-- 
Details are all that matters; God dwells there, and you never get to
see Him if you don't struggle to get them right. -- Stephen Jay Gould



