
Re: [Libunwind-devel] Crash in libunwind 0.99 on x86_64


From: Arun Sharma
Subject: Re: [Libunwind-devel] Crash in libunwind 0.99 on x86_64
Date: Tue, 20 Apr 2010 09:52:20 -0700

On Tue, Apr 20, 2010 at 5:54 AM, Dave Wright <address@hidden> wrote:
> I got a crash in libunwind (called from tcmalloc) on Ubuntu 9.10 x86_64 (stack trace is below). It looks like it's trying to read address 0x8, which obviously is not valid. I noticed a post from a month ago where an explicit check was added for access to page 0, and a follow-up mentioning that msync() doesn't reliably catch certain invalid accesses:
> http://lists.gnu.org/archive/html/libunwind-devel/2009-12/msg00002.html
>
> I'm assuming the explicit check will prevent my specific crash in this instance, but my real question is whether libunwind is "reliable" enough on x86_64 to use at runtime in a production application (as tcmalloc does for sampling purposes), as opposed to using it only for profiling or crash stack dumps. Some of the posts on this list suggest that the library has a non-zero chance of segfaulting in some situations, which would make it unsuitable as a runtime feature of a production app.

libunwind is reliable only as long as the compiler-generated unwind information is reliable.

Not all addresses go through msync()-based validation; enabling validation on every dereference would impose a significant performance cost on tcmalloc.
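
For context, here is a minimal sketch of the idea behind msync()-based validation: before dereferencing a pointer, check whether the page containing it is mapped. This is not the actual libunwind code, and address_is_mapped() is a hypothetical helper name; it only illustrates why validating every access costs a system call.

    #include <stdint.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Return non-zero if the page containing 'addr' is mapped.
       msync() on an unmapped page fails with errno set to ENOMEM. */
    static int address_is_mapped(void *addr)
    {
      size_t page_size = (size_t) sysconf(_SC_PAGESIZE);
      void *page = (void *) ((uintptr_t) addr & ~(uintptr_t) (page_size - 1));

      return msync(page, page_size, MS_ASYNC) == 0;
    }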

Debugging the problem normally requires figuring out which of the 42 frames libunwind was unwinding when it accessed the bad address (run with UNW_DEBUG_LEVEL=x for verbose tracing) and examining what went wrong.

I would suggest testing with the git version to see whether the problem is resolved. Alternatively, stub out GetStackTrace() in tcmalloc (so that it simply does { return 0; }) if you don't care about sampled allocations.
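
For illustration, a minimal sketch of such a stub, assuming the GetStackTrace() signature from gperftools' stacktrace header (verify it against the tcmalloc version you build):

    /* Hypothetical stub: replaces tcmalloc's stack capture so sampled
       allocations never call into libunwind.  Reports zero frames. */
    int GetStackTrace(void **result, int max_depth, int skip_count)
    {
      (void) result;
      (void) max_depth;
      (void) skip_count;
      return 0;
    }

The trade-off is that heap profiles and allocation samples will no longer contain stack traces.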

 -Arun
