
Re: LYNX-DEV Internal MIME types


From: Al Gilman
Subject: Re: LYNX-DEV Internal MIME types
Date: Sat, 26 Apr 1997 23:27:36 -0400 (EDT)

  From: Wayne Buttles <address@hidden>
  
  On Sat, 26 Apr 1997, Al Gilman wrote:
  
  > Orthogonalizing different dimensions of the content is moving in
  > the right direction, in my view, because of my assumptions about
  > the form of the solution to the access problem.
  
  OK, I don't think I have said anything up till now but I can't hold out
  any longer.  I have no idea what you are talking about most of the time.
  What does "Orthogonalizing different dimensions of the content" mean?
  
  I feel very under-educated reading your posts.
  
Thank you for speaking up.  I apologize.

However, to try to say this some different way, I need more of a
clue of where we can find common language.

My metaphoric use of physics and EE concepts taxes your parser.

I was addressing Klaus's one-liner about "what does the Netscape
parse strategy have to do with accessibility?"

The Netscape parse strategy is based on the idea that there are
different, mostly independent aspects of the material to be
presented.  How you manipulate one stack is independent of where
you are in another.

Have you looked into VRML?  I believe it has a world-model vs.
view-frame structure.

I am drawing heavily on a weird branch of data modeling called
information modeling.  It, in turn, relies heavily on concepts
from geometry, in which I am over-educated because of the way they
are used in mathematical physics.  I am not really conversant
with the particulars of VRML, but I believe that the principles
are the same.  If you look at the "model" MIME types, there are
more examples.

Let us take a very small example: the counter that tells you that
you are the 39457th visitor to the page.  Through a quirk of the
standards, to make this dynamic you have to make it an image,
despite the fact that what the image shows is a bitmap of text.
When the bitmap is built up from character values plus typeface
modulation, the characters can be stripped out and passed through
text-to-speech for the blind.  When the bitmap of the characters
is passed only as a GIF, that independence is lost: you eyeball
it or you get nothing.  The text-plus-font encoding is more
orthogonalized; it is broken down into more components, and the
components are more independent in how you process them.  In the
GIF case, there is a take-it-or-leave-it entirety, without any
structure by which you can decompose and re-compose presentation
views.
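To make the contrast concrete, here is a hedged sketch (the names
and data structures are invented for illustration; nothing here is
taken from Lynx or any real counter CGI) of why the decomposed form
survives a change of presentation medium and the opaque bitmap does
not:

```python
# Decomposed form: each digit carried as a character value plus a
# typeface "modulation".  The two components are independent.
counter_runs = [("3", "bold"), ("9", "bold"), ("4", "bold"),
                ("5", "bold"), ("7", "bold")]

def strip_for_speech(runs):
    """Drop the presentation component and keep the character
    component, which a text-to-speech engine can read aloud."""
    return "".join(char for char, _face in runs)

print(strip_for_speech(counter_runs))  # prints 39457

# Opaque form: the same digits rasterized into pixels (GIF-like).
# Character identity is gone; there is no component you can strip
# out and hand to text-to-speech -- it is eyeball-it-or-nothing.
counter_pixels = [
    [0, 1, 1, 0],
    [1, 0, 0, 1],
    [0, 1, 1, 0],
]
```

The point is not the particular data structure but that the
decomposed form lets each consumer re-compose a view from the
components it can use, while the bitmap admits only one view.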

Is this any clearer?

--
Al
;
; To UNSUBSCRIBE:  Send a mail message to address@hidden
;                  with "unsubscribe lynx-dev" (without the
;                  quotation marks) on a line by itself.
;
