lynx-dev LYNX: more on L-page


From: David Combs
Subject: lynx-dev LYNX: more on L-page
Date: Thu, 27 Aug 1998 14:34:31 -0700 (PDT)

I use the L-page (the page you get via the L command) a lot,
especially on sites where lots of the links have
no visible label (maybe I should be using exploiter?),
and the L command saves the day by showing me the
link names -- often the names of the files indicate
what info they hold, and a "#" in the address tells me
that the target is in the same file.

Good stuff.

One thing I notice, however, is that lots of the links
have the SAME address -- you'd like to run the thing
through uniq, if you could.  :-)

What WOULD be useful, maybe, and this does involve some
coding, would be to put an asterisk or "(dup)" or something
to the left of every address that is the same as one
above it.  That is, only the first occurrence would NOT have
the "(dup)".

Anyway, such a notation would make it much easier not
to go down the same path twice.
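
Something like this, say (addresses made up, and I'm not trying
to reproduce the exact L-page layout, just the idea):

     1. http://www.example.com/index.html
     2. http://www.example.com/faq.html#contents
     3. (dup) http://www.example.com/index.html
     4. http://www.example.com/download.html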

(No, I do NOT want to back up and do another L so as
to get the fancy titles -- at least I think I don't --
although in THIS case, trying to avoid dups, it might
be ok.  But how often do I have to do this backing up?)

You know, a ^R on the L-page might be a neat way to do
this...  There is certainly no other use for it there.

---

There is STILL a need to be able to do an L that produces
a page WITHOUT the addresses being replaced, so you can
cut-and-paste it with full addresses.  And WITH the 
(dup) thingies.

---

Hey, the (dup) thingies don't need the addresses following
them at all: just the number of the FIRST link with
that address.  And, since that would take up very little
room on the line, you could THEN add the TITLE, as sort
of a freebie.  Of course, here you see the title only on
the duped lines.
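
In code terms that might look something like this (a sketch
only -- the names are made up, not anything from the Lynx source):

    #include <stdio.h>

    /* Sketch only -- hypothetical names, not Lynx's actual code.
     * A dup line shows the number of the FIRST link with the same
     * address, plus the title as the freebie; a first occurrence
     * keeps showing the address, as it does now.
     */
    static void print_list_line(int cur_num, int first_num,
                                const char *address, const char *title)
    {
        if (first_num >= 0)
            printf("%4d. (dup of %d) %s\n", cur_num, first_num, title);
        else
            printf("%4d. %s\n", cur_num, address);
    }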

...

Lots of ways to skin this cat!


----

I suppose the L-page dups would be detected via a hash
table.  I mean, suppose someone did an L on, e.g., my bookmark
file, with maybe 1200 links in it so far...  That N-squared
search of the entire table for each link would get expensive at
that size.
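
Something along these lines, I imagine -- just a sketch of the
idea, with made-up names, NOT anything from the Lynx source:

    #include <stdlib.h>
    #include <string.h>

    /* Sketch only.  Hash each address once while building the
     * L-page, so the whole pass is roughly linear instead of
     * N-squared.  (Error checking omitted; the caller's address
     * strings are assumed to stay around.)
     */
    #define NBUCKETS 1024

    struct seen {
        const char *address;    /* the link's URL */
        int first_num;          /* number of its first occurrence */
        struct seen *next;
    };

    static struct seen *buckets[NBUCKETS];

    static unsigned hash_address(const char *s)
    {
        unsigned h = 0;
        while (*s != '\0')
            h = h * 31 + (unsigned char) *s++;
        return h % NBUCKETS;
    }

    /* Return the number of the first link with this address, or -1
     * if it is new (in which case remember it for next time).
     */
    static int first_occurrence(const char *address, int cur_num)
    {
        unsigned h = hash_address(address);
        struct seen *e;

        for (e = buckets[h]; e != NULL; e = e->next)
            if (strcmp(e->address, address) == 0)
                return e->first_num;

        e = malloc(sizeof *e);
        e->address = address;
        e->first_num = cur_num;
        e->next = buckets[h];
        buckets[h] = e;
        return -1;
    }

The number it hands back is exactly what a "(dup of N)" line
would need.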

(Of course, I *could* read the code, and discover that all
kinds of things are already done via hashing...)

David
