
Re: [Gcl-devel] efficiency of package symbol tables


From: Bruno Haible
Subject: Re: [Gcl-devel] efficiency of package symbol tables
Date: Wed, 24 Mar 2004 20:23:58 +0100
User-agent: KMail/1.5

Camm Maguire wrote:
> I can't think of anything better (right now) than assigning a mark
> cost proportional to the number of pointers in the object, and a sweep
> cost proportional to its size.  The difficulty comes with
> contiguous/relocatable blocks, i.e. in arrays, which could either be
> completely mark benign if the element type is fixnum, or quite costly
> if the element type is t.

True, but I consider this fine-tuning. If the fine-tuning is not
perfect, GC performance may not be optimal. But if you assume the
worst case (i.e. that arrays contain pointers everywhere), you are
still within a fixed percentage of the optimum.

> It would seem that a proper algorithm might also want to take into
> account the relative percentages of pages the user is actually
> allocating, or perhaps merely retaining.  One could automatically
> 'rebalance' the allocation percentages at some interval.

Yes, there are many ways to do fine-tuning. Some people base it on
the current state only, some also on the input/output statistics of
previous GCs. The former approach is more robust against unexpected
changes in the application's behaviour; the latter may be more
economical on average...

> I'm also a little unclear as to what you mean by moving the
> percentages closer together.

It's meant as a counter-goal that weighs against the goal of
optimizing GC time; the details can vary. I thought that when you
measure that, say, GCing a STRING page costs 0.5 ms and GCing a
CONS page costs 1 ms, then it would be logical to allow a fixed
budget of "allocation units" until the next GC, where a STRING page
counts as 0.5 allocation units and a CONS page as 1 allocation
unit. In other words, a GC would be triggered when

  0.5 * number of new STRING pages + 1 * number of new CONS pages > N.

But this point of view neglects the point of view of the other programs,
which compete for the memory pages in RAM, the L2 cache, and the L1 cache.
To take this into account I would change the trigger criterion to

  0.75 * number of new STRING pages + 1 * number of new CONS pages > N.

It's quite arbitrary, sure.

Bruno
