bug-gnu-utils

Re: burned by Makefile.in.in's $(DOMAIN).pot-update rule


From: Bruno Haible
Subject: Re: burned by Makefile.in.in's $(DOMAIN).pot-update rule
Date: Sat, 12 Jun 2010 01:19:34 +0200
User-agent: KMail/1.9.9

Paolo,

> > I think it is reasonable for 'grep' to allocate as much memory as the
> > longest line in the file has.
> 
> Of course. And unless your grep is broken in that it doesn't treat NUL
> bytes correctly, it cannot really special-case holes in any way.

It could, in theory, use a data structure for its "current line" that uses
run-length encoding or a special encoding for blocks of NULs. But it's not
reasonable to expect that tools like 'grep' or 'sed' or 'cat' do this.

> > I think 'core' files of a.out format had holes under Linux, but this was
> > fixed with the adoption of ELF, in 1995: AFAICS, ELF core files are not
> > sparse.
> 
> Why is this relevant?

I'm arguing that anyone who puts terabyte-sized files into directories that
are subject to development tools is responsible for the results. Jim can
count himself lucky that he's not using 'cvs': a simple "cvs status huge-file"
would have sent the huge file over the network [1], most certainly bringing
the CVS server machine to its knees.
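The hazard is easy to reproduce (assuming GNU coreutils; the filename is made up). A sparse file costs almost nothing on disk, but any tool that buffers it line by line sees its full logical size:

```shell
# Create a 1 MiB sparse file: no data blocks are written.
truncate -s 1M sparse.dat

# Disk usage is near zero; logical size is the full 1 MiB.
du -k sparse.dat
wc -c < sparse.dat

# The file is one giant NUL-filled "line" with no newline, so grep
# must buffer the whole logical size to scan it (-a forces text mode).
grep -ac x sparse.dat || true

rm -f sparse.dat
```

Scale that 1M up to 1T and the same commands still create the file instantly, but every line-oriented tool that touches it pays the full logical size in memory or traffic.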

Bruno

[1] http://lists.gnu.org/archive/html/bug-cvs/2007-01/msg00019.html


