bug-glibc

Very inefficient malloc behavior


From: Fritz Boehm
Subject: Very inefficient malloc behavior
Date: Tue, 12 Jul 2005 09:32:37 -0500
User-agent: Mutt/1.4i

>Submitter-Id:  net
>Originator:
>Organization:  Intrinsity, Inc.

>Confidential:  no
>Synopsis:      malloc fails to satisfy a request when ample memory should be
available; MALLOC_CHECK_ also incorrectly reports "top chunk is corrupt".
>Severity:      serious
>Priority:      high
>Category:      libc
>Class:         sw-bug
>Release:       libc-2.3.2
>Environment:
Scientific Linux SL Release 3.0.2 (SL)
Kernel 2.4.21-15.0.2.ELsmp on an i686
Host type: i386-redhat-linux-gnu
System: Linux sim010.farm.intrinsity.com 2.4.21-15.0.2.ELsmp #1 SMP Fri Jun 18
12:05:01 CDT 2004 i686 athlon i386 GNU/Linux
Architecture: i686

Addons: linuxthreads c_stubs glibc-compat
Build CFLAGS: -march=i386 -DNDEBUG=1 -finline-limit=2000 -g -O3
Build CC: gcc
Compiler version: 3.2.3 20030502 (Red Hat Linux 3.2.3-24)
Kernel headers: 2.4.20
Symbol versioning: yes
Build static: yes
Build shared: yes
Build pic-default: no
Build profile: yes
Build omitfp: no
Build bounded: no
Build static-nss: no

>Description:
I'm seeing inefficient behavior from malloc on a Scientific Linux 3.0.2 box
compared to a Red Hat 8.0 box.  In addition to the inefficient behavior, I'm
also seeing a bogus message when using the MALLOC_CHECK_ environment variable.

Below is information for the SL box versus the RH box, along with a short
program that reproduces the behavior.  Let me know if any other information is
needed to understand the problem.

Scientific Linux box
--------------------
CPU speed as listed in /proc/cpuinfo:
cpu MHz         : 1468.511

info listed when telnet'ing into box:
Scientific Linux SL Release 3.0.2 (SL)
Kernel 2.4.21-15.0.2.ELsmp on an i686

% uname -a
Linux sim010.farm.intrinsity.com 2.4.21-15.0.2.ELsmp #1 SMP Fri Jun 18 12:05:01
CDT 2004 i686 athlon i386 GNU/Linux

% gcc -v
Reading specs from /usr/lib/gcc-lib/i386-redhat-linux/3.2.3/specs
Configured with: ../configure --prefix=/usr --mandir=/usr/share/man
--infodir=/usr/share/info --enable-shared --enable-threads=posix
--disable-checking --with-system-zlib --enable-__cxa_atexit
--host=i386-redhat-linux
Thread model: posix
gcc version 3.2.3 20030502 (Red Hat Linux 3.2.3-34)

% g++ -o prob prob.cc

% ./prob
out of memory at i 46674

Last info from a top run (redirected to a file) and examined after the
prob executable ended:
27504 fritz     25   0  168M 168M   604 R    93.5 16.7   1:49 prob


% setenv MALLOC_CHECK_ 1
% ./prob
malloc: using debugging hooks
malloc: top chunk is corrupt
out of memory at i 5901
27456 fritz     21   0 22920  22M   636 R    89.4  2.2   0:00 prob


Red Hat 8 machine
-----------------
CPU speed as listed in /proc/cpuinfo:
cpu MHz         : 1195.354

info listed when telnet'ing into the box:
Red Hat Linux release 8.0 (Psyche)
Kernel 2.4.18-14 on an i686

% uname -a
Linux schrems.eng.intrinsity.com 2.4.18-14 #1 Wed Sep 4 12:13:11 EDT 2002 i686
athlon i386 GNU/Linux

% g++ -o prob.rh8 prob.cc

% gcc -v
Reading specs from /usr/lib/gcc-lib/i386-redhat-linux/3.2/specs
Configured with: ../configure --prefix=/usr --mandir=/usr/share/man
--infodir=/usr/share/info --enable-shared --enable-threads=posix
--disable-checking --host=i386-redhat-linux --with-system-zlib
--enable-__cxa_atexit
Thread model: posix
gcc version 3.2 20020903 (Red Hat Linux 8.0 3.2-7)



% ./prob.rh8
28220 fritz     15   0 2925M 777M   252 D     0.4 77.1   0:20 prob.rh8
out of memory at i 1882141


% setenv MALLOC_CHECK_ 1
% ./prob.rh8
out of memory at i 1872841
28287 fritz     17   0 2940M 766M   368 R    11.0 76.1   0:20 prob.rh8

----------------------------------------------------------------------

/* start of prob.cc */
#include <stdlib.h>
#include <stdio.h>

int main()
{
    /* Repeatedly grab a large buffer, shrink it, and make a small
       allocation -- mimicking the allocation pattern of the original
       program. */
    for (int i = 0; i < 3000000; i++)
        {
        char *p = (char*)malloc(500000);
        if (p == NULL)
            {
            printf("out of memory at i %d\n", i);
            exit(1);
            }
        /* Shrink the block.  The return value is deliberately ignored,
           since p is never used again; the point is the shrink itself. */
        realloc(p, 500);
        malloc(1000);   /* small allocation, never freed */
        }
    return 0;
}
/* end of prob.cc */


Some notes:

1) The Scientific Linux box is a faster machine than the Red Hat 8.0 box.
Despite this, the executable takes 1:49 of cpu time on the SL box compared to
0:20 cpu time on the RH box.
2) On the SL box, malloc reports via a NULL return that no more memory is
available even though the executable's image is only 168 MB (on a 1 GB box).
On the RH box, malloc fails via a NULL return only once the image reaches
2925 MB (also on a 1 GB box).
3) With the MALLOC_CHECK_ environment variable set, the SL box reports that
the "top chunk is corrupt" before returning NULL, and the image only reaches
22 MB.  The RH box with MALLOC_CHECK_ reaches nearly the same image size as
without it, in nearly the same amount of time.  Also, very importantly, it
never complains that the top chunk is corrupt.  I have high confidence in the
GNU tools and took this message as golden, believing that I had corrupted the
heap somewhere.
4) I can't be sure that the above program captures the problem I'm seeing in
my real executable.  I arrived at it by guesswork, but it shows the same
symptoms.
5) I tried to use the glibcbug script, but I received this bounce:
<address@hidden>: host mx10.gnu.org[199.232.76.166] said: 550
    unknown user (in reply to RCPT TO command).
Last week I tried sending this directly to address@hidden, but it bounced.
I also tried address@hidden, but that bounced as well.  I'm sending this
manually to address@hidden.  Hopefully the format is correct.

>How-To-Repeat:
See description.
>Fix:
I was able to get around the problem in my program by not malloc'ing a large
buffer, and then realloc'ing down after I knew the precise size, but instead
using stack space for the huge buffer, which avoided the malloc/realloc
situation completely.




