Subject: Re: burned by Makefile.in.in's $(DOMAIN).pot-update rule
From: Jim Meyering
Date: Fri, 11 Jun 2010 23:48:40 +0200
Bruno Haible wrote:
>> This rule is the culprit:
>>
>> $(DOMAIN).pot-update: $(POTFILES) $(srcdir)/POTFILES.in remove-potcdate.sed
>>         if LC_ALL=C grep 'GNU @PACKAGE@' $(top_srcdir)/* 2>/dev/null | grep -v 'libtool:' >/dev/null; then \
>>           package_gnu='GNU '; \
>>         else \
>>           package_gnu=''; \
>>         fi; \
>
> These commands inspect only the files in the package to which that Makefile
> belongs.
If it inspected only the files in po/ where the Makefile.in.in
resides, it'd be no problem. I rarely use that as a work area.
However, it searches files in the top-level directory, $(top_srcdir),
which I do use.
In addition, it searches all (well, nearly all) files in that directory,
not just those that are part of the package.  If you could limit the
search to, say, automake's list of $(DISTFILES), that would be great.
If not, well, it's no big deal, since I do have a work-around
(http://git.savannah.gnu.org/cgit/coreutils.git/commit/?id=155c8148ae06dc),
but I do create such files regularly while developing and testing at least
grep, coreutils, and parted, so I will probably end up applying the same
fix in those projects, too.
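For illustration only, here is roughly the kind of restriction I have in
mind, assuming the list of distributed files were somehow made available
to po/Makefile.in.in (today it is not), so the probe never touches stray
files in a developer's working directory:

    # Hypothetical sketch: probe only files that automake would distribute,
    # instead of every file in $(top_srcdir).
    if { cd $(top_srcdir) && LC_ALL=C grep 'GNU @PACKAGE@' $(DISTFILES) 2>/dev/null; } \
         | grep -v 'libtool:' >/dev/null; then \
      package_gnu='GNU '; \
    else \
      package_gnu=''; \
    fi; \

The rest of the rule would be unchanged; only the set of files handed to
the first grep differs.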
> I think it is reasonable to assume that a user doesn't have weird
> files (terabyte-sized files, sockets, named pipes, etc.) in such a place.
> Otherwise he cannot even reasonably back up this directory.
Perhaps not by conventional tools, but with git it works fine
as long as you're content to back up only version-controlled files
and the repository itself.
> You (or some auxiliary script) might have issued similar 'grep' commands as
> well.
Not I. I learned long ago to avoid blind traversals and grep -r.
I often have outrageously abusive artifacts in a coreutils
working directory, since tests create (and sometimes are interrupted
or leave behind) hierarchies that take a very long time to traverse
using common tools.
>> I happened to have a sparse file with tiny actual size,
>> but an apparent size of a terabyte or two, so in a sense
>> it's my fault -- or maybe grep's for allocating so much
>> memory while looking for a newline.
>
> I think it is reasonable for 'grep' to allocate as much memory as the
> longest line in the file has. (There is some disagreement as to whether
> the time to process a sparse file should only take time proportional to
> the number of physical disk blocks, not to the file length. But given
> that there are no POSIX APIs to test for the extent of "holes" in files,
> only low-level ioctl calls, you can't expect all the utilities to use
> non-POSIX APIs.)
du can detect holes portably on most systems (compare its output with and
without --apparent-size), so detecting sparseness is feasible for a regular
file; and with a new enough Linux kernel, the FIEMAP ioctl gives us a way
to solve the problem on at least four types of file systems: ext4, btrfs,
xfs, and ocfs2.
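For example, here is a minimal sketch of that du-style check, assuming GNU
stat (the file name is made up):

    # A regular file is likely sparse when the bytes actually allocated
    # to it are fewer than its apparent size.
    f=some-test-file                                          # hypothetical name
    apparent=$(stat -c %s "$f")                               # apparent size, bytes
    allocated=$(( $(stat -c %b "$f") * $(stat -c %B "$f") ))  # blocks * block size
    if test "$allocated" -lt "$apparent"; then
      echo "$f appears to be sparse"
    fi

That only tells you a file is sparse, of course; knowing where the holes
actually are, so a tool can skip them, is what FIEMAP adds.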
> I think 'core' files of a.out format had holes under Linux, but this was fixed
> with the adoption of ELF, in 1995: AFAICS, ELF core files are not sparse.
>
>> One solution would be to limit the searched files
>> to those under version control, but that does not seem feasible.
>
> This rule that determines whether the package has the honorific "GNU" is
> known to be a heuristic. If you know a better way to do it, please say so?
> Probing www.gnu.org is probably not a good idea either, since that would
> not work without an internet connection.
>
>> Another is to allow people like me to replace the grep command
>> with something else.
>
> I think this would be too much tailored to the particular failure that you
> experienced. It is not general enough.
Letting me override some default (with no change in behavior for everyone
else) is not general enough?  I don't understand that.
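To make the suggestion concrete, here is a rough sketch (not a proposed
patch; the variable name is invented) of how Makefile.in.in could expose
the probe as an overridable command, with the default preserving today's
behavior exactly:

    # Hypothetical variable; a package could override it from po/Makevars.
    GNU_PACKAGE_PROBE = LC_ALL=C grep 'GNU @PACKAGE@' $(top_srcdir)/* 2>/dev/null

    $(DOMAIN).pot-update: $(POTFILES) $(srcdir)/POTFILES.in remove-potcdate.sed
            if $(GNU_PACKAGE_PROBE) | grep -v 'libtool:' >/dev/null; then \
              package_gnu='GNU '; \
            else \
              package_gnu=''; \
            fi; \

The rest of the rule would stay as it is.  A package whose top-level
directory may contain huge scratch files could then point the variable at a
command that reads only known files (or at 'false', if the package is not a
GNU one anyway), while everyone else keeps the current behavior without
touching anything.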