bug-glibc

Memory allocation problems glibc-2.2.2


From: Brian Olsen
Subject: Memory allocation problems glibc-2.2.2
Date: Tue, 08 May 2001 09:26:15 -0600
User-agent: Mozilla/5.0 (X11; U; Linux 2.4.4 i686; en-US; rv:0.8.1+) Gecko/20010425

Here's the problem:

Our image caching code is causing memory usage to explode. The problem is that many allocations are made in the following pattern:

uchar* bigData = new uchar[ 128 * 128 * 3 ]; // Alloc 128x128 rgb tile
int* cacheId = new int;       // Create cache id for lookup

Doing this in a loop causes memory usage to explode.
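
To make the pattern concrete, here is a minimal self-contained sketch (the names, slot count, and iteration count are made up for illustration; the attached stressmem.cc below is the real test case):

typedef unsigned char uchar;

static const int kSlots = 256;     // how long each allocation stays live
static const int kIters = 100000;  // total number of allocations

int main()
{
        uchar* tiles[kSlots] = { 0 };  // large cached buffers
        int*   ids[kSlots]   = { 0 };  // small interleaved allocations
        for (int i = 0; i < kIters; ++i)
        {
                int slot = i % kSlots;
                delete[] tiles[slot];  // free what was allocated kSlots ago
                delete   ids[slot];    //   (deleting a null pointer is a no-op)
                tiles[slot] = new uchar[ 128 * 128 * 3 ];
                ids[slot]   = new int;
        }
        // cleanup of the final kSlots entries omitted for brevity
        return 0;
}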

I've attached a test program that shows off this problem.

Btw, I reported this about 8 months ago when it was observed on Red Hat 6.2 (glibc 2.1.3) and was told that it would be fixed. Now I'm on Red Hat 7.1 with glibc 2.2.2 and this still hasn't been fixed.

Thanks,
Brian

address@hidden
// 
//  -$RCSfile: stressmem.cc,v $
//  $Revision: 1.1.1.1 $
//  Rev-$Name:  $
//  Mod-$Date: 2001/05/08 15:09:37 $
// 
//  Copyright (c) 2000 Stellacore Corporation. All Rights Reserved.
//

/*! \file
\brief  Stress test memory allocation. Demonstrate potential new/delete bug.

This test program performs some stressful memory allocation. It was
written in response to an apparent process memory allocation bug observed
while running under (several versions of) Linux with the GNU compiler and
libraries.

The program allocates and frees a sequence of data units. Each data unit
is progressively larger than the previous one (size ramp-up model). For
example, it allocates in sizes of 1, 2, 3,... so the total size allocated
accumulates as 1, 3, 6,... When the total exceeds the #maxCacheSize
value, the program begins deleting the least recently allocated data
buffers and keeps deleting them until the total allocated size is again
less than #maxCacheSize.
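(Roughly: after n allocations the cache holds about n*(n+1)/2 bytes, so
with the default #maxCacheSize of 16 MiB the tail deletions start near
n = sqrt(2 * 16*1048576) ~= 5800, and from then on the live data stays
pinned at about 16 MiB.)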

Use one of the operating system tools to monitor the program - e.g. 'top'
or 'kpm' work well. Run the program with no arguments and it will behave
essentially as expected - e.g. it will grow in size until it consumes
an amount of memory approximately equal to the #maxCacheSize constant
below (plus some allowance for program and process overhead).

However, run the program with an extra argument and it complicates the
allocation scheme by adding an interruptive data allocation in between
the main allocation events. These interruptive events allocate small
(int-sized) data units, which apparently interfere with the memory
management of the system.

The observed behavior (on a Linux 2.2.14 system) is that the process size
"shoots the moon", rapidly growing far past the expected size of 'normal'
behavior, past the physical memory size and on into swap... (it is
normally a good idea to kill the process somewhere along the way).
NOTE: this behavior is *NOT* a leak of the integer allocations.

The interruptive data allocations (= new int;) are relatively small in
total size. Even if the memory allocation is performed on a per-paragraph
or per-page basis, the total allocated size would be minor! 
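(With the defaults below, at most #intQueueSize = 1024 of these ints are
live at once, so even at a full 4 KiB page apiece that is only 4 MiB -
the same bound the program prints at startup.)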

The interruptive data allocations are stored in the interruptive data
queue, #intQueue. This queue has a size controlled by #intQueueSize. Once
the queue is full, the interruptive data elements are deleted starting at
the tail of the queue. The size of this queue controls the bug behavior.
*/

#define VERBOSE   // print usage message - else NO I/O used

#if defined(VERBOSE)
#include <stdio.h>   // for usage message
#endif

#include <string.h>  // for memset

// some parameters useful for tuning pathology to system resource limits
typedef unsigned char uchar;
static const int oneMB = 1024*1024;          // duh!
static const int numDataUnits = 128*1024;    // # allocations in linear rampUp
static const int maxCacheSize = 16 * oneMB;  // amount held in allocation
//
// interruptive allocation -- intQueueSize determines the number of
// interrupt allocations held at any one time. E.g.
//    int intQueueSize = 2;             // freed on next loop
//    int intQueueSize = numDataUnits;  // never freed
// probably controls memory fragmentation
//
static const int intQueueSize = 1024;

int main(int argc, char* argv[])
{
        int show_bug = (1 + argc) % 2;  // any extra argument turns on the bug path
#if defined(VERBOSE)
        printf("Usage: %s [showbug]\n", argv[0]);
        if (show_bug)
        {
                printf("Showing BUG. Be ready to kill process quickly!\n");
        }
        else
        {
                printf("Hey! I'm behaving...\n");
        }
#endif

        // construct administrative container for fake data units
        int idat=0;
        uchar**  myDataList = new uchar* [numDataUnits];
        int*  mySizeList = new int[numDataUnits];
        for (idat=0 ; idat<numDataUnits ; idat++)
        {
                myDataList[idat] = (uchar*)0;
                mySizeList[idat] = 0;
        }

        // construct administrative container for interruptive memory allocation
        int** intQueue = new int* [intQueueSize];
        int ique;
        for (ique=0 ; ique<intQueueSize ; ique++)
        {
                intQueue[ique] = (int*)0;
        }

#if defined(VERBOSE)
        // cheery predictions that all should be fine
        printf("Expected interruptive allocation %d elements\n", intQueueSize);
        printf("at sizeof(int*)          =%12d bytes\n", 
intQueueSize*sizeof(int*));
        printf("at 16* (e.g. paragraphs) =%12d bytes\n", 16*intQueueSize);
        printf("at 4k* (e.g. page sizes) =%12d bytes\n", 4*1024*intQueueSize);
#endif

        // perform some gnarly memory exercises - without stretching first
        int head = 0;
        int tail = 0;
        int curSize = 0;
        for(idat=0 ; idat<numDataUnits ; idat++)
        {
                // allocate at head - write to force using space
                int dataSize = idat;  // rampUp model
                myDataList[head] = new uchar [dataSize];
                memset(myDataList[head], idat%256, dataSize);
                // track size for caching logic
                mySizeList[head] = dataSize;
                curSize += mySizeList[head];
                head++;

                // Note: Even with paragraph allocation in 16 byte chunks,
                // this should only add numDataUnits*16 bytes to process size
                // Even with entire page allocation, this should only add
                // order of intQueueSize * <x>kB - where x is page size.
                // But... It seems to really screw things up...
                if (show_bug)
                {
                        // leak a bit - but mostly just disrupt memory layout
                        int intHead = (  idat) % intQueueSize;
                        int intTail = (1+idat) % intQueueSize;
                        intQueue[intHead] = new int;
                        if (intQueue[intTail])
                        {
                                delete intQueue[intTail];
                        }
                }

                // delete from tail to maintain cache size
                while (curSize > maxCacheSize)
                {
                        delete[] myDataList[tail];
                        curSize -= mySizeList[tail];
                        tail++;
                }
        }
        // cleanup
        for (idat=tail ; idat<numDataUnits ; idat++)
        {
                delete[] myDataList[idat];
                curSize -= mySizeList[idat];
        }

        // free collection containers (the queued ints themselves are the
        // small intentional leak noted above)
        delete[] myDataList;
        delete[] mySizeList;
        delete[] intQueue;
        return 0;
}

