Re: [gNewSense-users] KFV non-free reporting should recurse
From: Bake Timmons
Subject: Re: [gNewSense-users] KFV non-free reporting should recurse
Date: Sat, 28 Jun 2008 19:12:13 -0400
User-agent: Gnus/5.11 (Gnus v5.11) Emacs/22.1 (gnu/linux)
>> Of course, dates and adopters have no dependence on subsections. All
>> other fields, however, do depend on not just the immediate content of
>> the directory (e.g., "pci" in this case) but also the subsections.

[snip]

>> What do you think? Unless I am wrong here, I will update kfv.el to do
>> this right.

> Yes, recursive numbers are useful for a good overview.
>
> Can the final output of KFV be a wiki page that simply lists everything
> non-free for removal? They'll be removed in batch, and digging through
> the 300+ wiki pages would just be a hassle.
Although the digging is already unnecessary(*), I propose that the
page(s) for non-free items be *in addition* to the current pages for
these reasons:
+ We can do it *sooner*, since adding such pages is *less work* than
adapting existing scripts to abandon the current output format (at
least in my case).
+ Some people may find the existing pages to be useful documentation
of the reasoning behind decisions. E.g., typically every *file
entry* links to a separate page stating why the file is free or
not.
+ The pages give the wiki a needed consistency, both in look and
purpose, which is particularly reassuring to newcomers, IMO.
IMO, the best implementation for batch use is a *cumulative* web page
(or pages) of non-free items. The check for removals would then
reduce to whether one or a few pages have changed at all. I could
easily add this feature to the next version of the kfv.el script. If
you want, the output could just be a plain list of path names of
non-free files on a page that would be updated every time a "section
page" containing non-free items is uploaded.
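For instance, here is a minimal sketch of such an update step. The
helper name kfv-update-cumulative and the one-path-per-line list file
are just assumptions for illustration, not part of the current kfv.el:

  ;; Sketch only: helper name and list-file format are assumptions.
  (defun kfv-update-cumulative (paths list-file)
    "Append any of PATHS not already in LIST-FILE, one per line."
    (let ((existing (when (file-exists-p list-file)
                      (with-temp-buffer
                        (insert-file-contents list-file)
                        (split-string (buffer-string) "\n" t)))))
      (with-temp-buffer
        (dolist (p paths)
          (unless (member p existing)
            (insert p "\n")
            (push p existing)))
        ;; Appending keeps the page cumulative, so a batch job can
        ;; watch this one file instead of 300+ section pages.
        (append-to-file (point-min) (point-max) list-file))))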
----------------------------------------------------------------------
(*) Even with the "reporting" error that I previously indicated, the
current wiki pages are *already* structured consistently enough to be
handled in batch. E.g., a script could just grep through the wiki
page sources for any table entry line not containing the string "N/A"
in the "Non-Free Reported" field and output the non-free file names
into a list. I am not familiar with the PmWiki installation details,
but I assume that the grepping would just be over a bunch of files
and would not have to hit the web server.
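To illustrate, here is a minimal Emacs Lisp sketch of such a grep. It
assumes the page sources sit together in one directory and that table
rows look like "||path||...||reported||" with the "Non-Free Reported"
field last; the real column order may differ:

  ;; Sketch only: the directory layout and row format are assumptions.
  (defun kfv-grep-non-free (dir)
    "Return paths from table rows in DIR whose last field is not \"N/A\"."
    (let (results)
      (dolist (page (directory-files dir t "\\`[^.]"))
        (when (file-regular-p page)
          (with-temp-buffer
            (insert-file-contents page)
            (goto-char (point-min))
            (while (re-search-forward
                    "^||\\([^|]+\\)||.*||\\([^|]*\\)||" nil t)
              (let ((path (match-string 1))
                    (reported (match-string 2)))
                ;; Keep the path when the final field is not "N/A".
                (unless (string-match "N/A" reported)
                  (push path results)))))))
      (nreverse results)))

Feeding the result into a removal script would then just be a matter
of writing the returned list to a file, one path per line.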