libunwind-devel

Re: [Libunwind-devel] run-ptrace-mapper performance


From: Nurdin Premji
Subject: Re: [Libunwind-devel] run-ptrace-mapper performance
Date: Wed, 21 Mar 2007 09:34:43 -0400
User-agent: Thunderbird 1.5.0.10 (X11/20070302)

Running strace on both the old 0.98.5 version of libunwind and the 20070224 snapshot gives the following data.
The old code made roughly 1599 reads.
The new code made roughly 706400 reads.

The old code read only the first 4k of the maps file, while the new code reads the entire maps file (4k at a time).
http://sourceware.org/bugzilla/show_bug.cgi?id=4226
contains the strace logs for both.

Perhaps the size of the maps files could be lowered?
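
For what it's worth, here is a minimal C sketch of the two access patterns the strace logs suggest. It is not taken from either source tree; the helper names and the fixed 4096-byte buffer are just for illustration. The point is that the old code issues a single read of the first 4k, while the new code loops over the whole maps file in 4k chunks, so the number of read() calls grows with the size of the file.

/* Illustrative only -- not code from 0.98.5 or the 20070224 snapshot. */
#include <sys/types.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Old behaviour (roughly): a single read() of the first 4k of the maps file. */
static ssize_t read_maps_head(pid_t pid, char *buf, size_t len)
{
    char path[64];
    int fd;
    ssize_t n;

    snprintf(path, sizeof(path), "/proc/%d/maps", (int) pid);
    fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;
    n = read(fd, buf, len);        /* one read(), e.g. len == 4096 */
    close(fd);
    return n;
}

/* New behaviour (roughly): loop until EOF, 4k at a time, so the number of
 * read() calls grows with the size of the maps file -- and if the file is
 * re-read per unwind step, the totals add up quickly. */
static long count_maps_reads(pid_t pid)
{
    char path[64], buf[4096];
    int fd;
    ssize_t n;
    long reads = 0;

    snprintf(path, sizeof(path), "/proc/%d/maps", (int) pid);
    fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;
    while ((n = read(fd, buf, sizeof(buf))) > 0)
        reads++;
    close(fd);
    return reads;
}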

David Mosberger-Tang wrote:
80s *may* not be unreasonable since there is a lot of stuff going on
with DWARF-unwinding.  Unless there is a reason to believe something
is going wrong (a quick profile should tell that easily), I wouldn't
object to increasing the timeout.

 --david

On 3/20/07, Nurdin Premji <address@hidden> wrote:
About 80 seconds on an x86_64

David Mosberger-Tang wrote:
> Performance depends on a lot of factors (platform, machine, etc).  By
> default, remote-unwinding doesn't enable the cache, so it's certainly
> relatively slow.  I don't think I ever ran into troubles with the 30
> sec limit on ia64, but there isn't anything magic about that value
> either.  How big did you have to make the timeout to make it pass?
> What platform?  x86-64?
>
>  --david
>
> On 3/20/07, Nurdin Premji <address@hidden> wrote:
>> What is the performance of this test? I've found that with the
>> libunwind-20070224 snapshot I had to modify the alarm timeout to make it
>> pass. Is there a problem with the caching of the maps?
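
For reference, opting in to caching on a remote address space would look roughly like the sketch below. This is based on the documented unw_set_caching_policy() / libunwind-ptrace interface rather than on the test itself, and it assumes the target pid is already stopped under ptrace; whether UNW_CACHE_GLOBAL actually takes effect for remote unwinding in the 20070224 snapshot is the open question here.

#include <sys/types.h>
#include <stdio.h>
#include <libunwind-ptrace.h>

/* Sketch only: assumes the target pid has been PTRACE_ATTACHed and waited on. */
static int backtrace_remote(pid_t pid)
{
    unw_addr_space_t as;
    unw_cursor_t cursor;
    unw_word_t ip;
    void *upt;

    as = unw_create_addr_space(&_UPT_accessors, 0);
    if (!as)
        return -1;

    /* Remote unwinding leaves the cache off by default; ask for it explicitly. */
    unw_set_caching_policy(as, UNW_CACHE_GLOBAL);

    upt = _UPT_create(pid);
    if (!upt) {
        unw_destroy_addr_space(as);
        return -1;
    }

    if (unw_init_remote(&cursor, as, upt) == 0) {
        do {
            unw_get_reg(&cursor, UNW_REG_IP, &ip);
            printf("ip = %lx\n", (unsigned long) ip);
        } while (unw_step(&cursor) > 0);
    }

    _UPT_destroy(upt);
    unw_destroy_addr_space(as);
    return 0;
}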