
[Gzz-commits] manuscripts/AGPU paper.txt


From: Janne V. Kujala
Subject: [Gzz-commits] manuscripts/AGPU paper.txt
Date: Tue, 08 Apr 2003 04:24:29 -0400

CVSROOT:        /cvsroot/gzz
Module name:    manuscripts
Changes by:     Janne V. Kujala <address@hidden>        03/04/08 04:24:29

Modified files:
        AGPU           : paper.txt 

Log message:
        editing

CVSWeb URLs:
http://savannah.gnu.org/cgi-bin/viewcvs/gzz/manuscripts/AGPU/paper.txt.diff?tr1=1.2&tr2=1.3&r1=text&r2=text

Patches:
Index: manuscripts/AGPU/paper.txt
diff -u manuscripts/AGPU/paper.txt:1.2 manuscripts/AGPU/paper.txt:1.3
--- manuscripts/AGPU/paper.txt:1.2      Mon Apr  7 09:08:40 2003
+++ manuscripts/AGPU/paper.txt  Tue Apr  8 04:24:29 2003
@@ -1,101 +1,21 @@
+In this work, we are not using the GPU to simulate or model any
+real-world phenomena. Instead, we use the GPU to produce an endless
+variety of different, novel shapes.
+
 We present a perceptually designed hardware-accelerated algorithm for
 generating unique background textures for distinguishing documents.
-The procedurally generated unique backgrounds are used as a visualization of
-document identity. In our approach, each document has a different,
-easily distinguishable background texture.  The user can thus identify
-an item at a glance, even if only a *fragment* of the item is shown,
-without reading the title (which the fragment may not even show).
-The motivating example for unique backgrounds is the BuoyOING
-(Buoy-Oriented Interface, Next Generation) user interface, a
-focus+context interface for navigating hypertext.
-
-figxupdfdiag: The motivating example for unique backgrounds: the
-              BuoyOING focus+context interface for browsing
-              bidirectionally hyperlinked documents.  The interface
-              shows the relevant *fragments* of the other ends of the
-              links and animates them fluidly to the focus upon
-              traversing the link.  a) shows a small document network.
-              b) and c) show what a user sees while browsing the
-              network, b) without and c) with background texture.
-              There are three keyframes where the animation stops.
-              Two frames of each animation between the keyframes are
-              shown.  The unique backgrounds help the user notice that
-              the upper right buoy in the last keyframe is actually a
-              part of the same document (1) which was in the focus in
-              the first keyframe.  Our (as yet untested) hypothesis is
-              that this will aid user orientation.
-
+The procedurally generated unique backgrounds are used as a
+visualization of document identity. In our approach, each document has
+a different, easily distinguishable background texture.  The user can
+thus identify an item at a glance, even if only a *fragment* of the
+item is shown, without reading the title (which the fragment may not
+even show).  Because document visits tend to follow a Zipf-like
+distribution, the user should be able to learn the textures of the
+most frequently visited documents.
+See figxupdfdiag.
 
 An initial experiment has shown that the generated textures are indeed
 recognizable.
 
-
-Generating Unique Background Textures
-=====================================
-
-To be useful, the unique backgrounds should be easily distinguishable
-and recognizable, and should not significantly impair the reading of
-black text on top of them.
-
-The ability to distinguish a particular texture from a large set
-depends on the distribution of textures in the set.  For instance, it
-is intuitively clear that textures with independently random texel
-values would be a very bad choice: all such textures would look alike,
-being just noise.  In order to design a distinguishable distribution
-of textures, we have to take into account the properties of the human
-visual system.
-
-The simple model of texture perception we use assumes that at some
-point, the results from the different pre-attentive feature detectors,
-such as different shapes and colors, are combined to form an abstract
-*feature vector* (see Fig.~\ref{fig-perceptual}).  
-However, only a limited number of different features detected can be
-grouped into objects, indicating that the spatial resolution of the
-feature vector is quite low.  As a well-known example, conjunction
-coding is not preattentive: red squares are hard to find among green
-squares and red and green circles.
-
-fig-perceptual: The qualitative model of visual perception used to
-                create the algorithm.  The visual input is transformed
-                into a feature vector, which contains numbers
-                (activation levels) corresponding to e.g. colors,
-                edges, curves and small patterns.  The feature vector
-                is matched against the memorized textures.  In order
-                to generate recognizable textures, random seed values
-                should produce a distribution of feature vectors with
-                maximum entropy.
-
-From the model we can see that to be distinguishable, a feature vector
-for a given texture should always be the same.  Fragments of a
-non-repeating texture will be slightly different, resulting in
-slightly different vectors even if the local structure is the same.  A
-repeating texture should thus be easier to recognize.  Our anecdotal
-observations confirm this.
-
-Additionally, the entropy of the feature vectors over the distribution
-of textures should be maximized.  The distribution should contain
-occurrences of as many different features as possible, and the features
-should be distributed independently from each other.
-
-However, because of the limited spatial resolution of the feature
-vector, in any *single* texture, only a limited range of features
-should be used.
-
-In a sense, the model of perception should be *inverted* in order to
-produce a unique background from a random vector.  Features that are
-orthogonal for human perception (e.g., color and direction of fastest
-luminance change) should be independently random, and features not
-orthogonal (e.g. colors of neighbouring pixels) should be correlated
-so as to maximize the entropy.
-
-An important point in generating the backgrounds is that the texture
-appearance should have *no correlation* with any attribute or content
-of the document so that the textures of any hyperlinked documents are
-similar only by chance.
-
-Hardware-accelerated implementation
-===================================
-
 One major goal for the implementation is to support complicated
 mappings between paper and screen coordinates, such as fisheye
 distortion.  To make this simple, all processing when rendering the
@@ -111,78 +31,71 @@
 GL_ARB_fragment_program once suitable hardware and Linux drivers
 emerge.
 
-Colors
-------
+We use a small palette of colors for each unique background texture,
+selected randomly from a heuristic distribution.  The shapes of the
+final background texture are generated entirely from a small set of
+static *basis textures* bound to texture units with randomly chosen
+texture coordinate mappings. Even though the basis textures are RGB
+textures, they contain no color information: they are simply treated
+as 3- or 4-vectors and combined using the NVIDIA register combiners
+extension with the palette colors to produce the final fragment
+colors.
+
+Our need for the combiners is rather unconventional: we want to lose
+most of the original shapes of the basis textures in order to create
+new, different shapes from the interaction of the basis texture values
+and combiner parameters chosen randomly from the seed number.  For
+this, we use dot products of texture values with each other and with
+random constant vectors, and scale up with the register combiner
+output mappings to sharpen the result (see Fig.~\ref{fig-regcomb}).
+The resulting values are used for interpolating between the palette
+colors. 
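
A minimal C sketch of the kind of register-combiner setup described
above, assuming two basis textures bound to texture units 0 and 1 and
two palette colors; the particular units, colors, and scale factor are
illustrative placeholders, not the values produced by the seeded
heuristics.

    #define GL_GLEXT_PROTOTYPES  /* expose extension prototypes from glext.h */
    #include <GL/gl.h>
    #include <GL/glext.h>

    /* Sketch: one general combiner stage computes a scaled dot product of
     * two expanded texture values; the final combiner uses that value to
     * interpolate between two palette colors. */
    void setup_example_combiner(const GLfloat palette0[4],
                                const GLfloat palette1[4])
    {
        glEnable(GL_REGISTER_COMBINERS_NV);
        glCombinerParameteriNV(GL_NUM_GENERAL_COMBINERS_NV, 1);

        /* Stage 0: spare0 = (2*tex0 - 1) . (2*tex1 - 1), scaled by four
         * to sharpen the result. */
        glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_A_NV,
                          GL_TEXTURE0_ARB, GL_EXPAND_NORMAL_NV, GL_RGB);
        glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_B_NV,
                          GL_TEXTURE1_ARB, GL_EXPAND_NORMAL_NV, GL_RGB);
        glCombinerOutputNV(GL_COMBINER0_NV, GL_RGB,
                           GL_SPARE0_NV, GL_DISCARD_NV, GL_DISCARD_NV,
                           GL_SCALE_BY_FOUR_NV, GL_NONE,
                           GL_TRUE, GL_FALSE, GL_FALSE);

        /* Final combiner: out = spare0*color0 + (1 - spare0)*color1. */
        glCombinerParameterfvNV(GL_CONSTANT_COLOR0_NV, palette0);
        glCombinerParameterfvNV(GL_CONSTANT_COLOR1_NV, palette1);
        glFinalCombinerInputNV(GL_VARIABLE_A_NV, GL_SPARE0_NV,
                               GL_UNSIGNED_IDENTITY_NV, GL_RGB);
        glFinalCombinerInputNV(GL_VARIABLE_B_NV, GL_CONSTANT_COLOR0_NV,
                               GL_UNSIGNED_IDENTITY_NV, GL_RGB);
        glFinalCombinerInputNV(GL_VARIABLE_C_NV, GL_CONSTANT_COLOR1_NV,
                               GL_UNSIGNED_IDENTITY_NV, GL_RGB);
        glFinalCombinerInputNV(GL_VARIABLE_D_NV, GL_ZERO,
                               GL_UNSIGNED_IDENTITY_NV, GL_RGB);
    }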
 
-To maintain recognizability, we use a small palette of colors for each
-paper, selected randomly from a heuristic distribution.  The final
-image contains convex combinations of the palette colors.
-
-For readability, we only use colors with a CIE Lightness value over 80.
-
-Texture coordinates
--------------------
-
-The choice of the geometry of the repeating unit (a parallelogram)
-fixes an absolute scale for the paper.  The repeating unit should be
-fairly isotropic to avoid the degeneration of textures to diagonal
-lines, and the units for different textures should be relatively
-similar in size.  The repeating unit is chosen from a heuristic
-distribution satisfying these criteria.
-
-After a repeating unit is fixed, there is still freedom in choosing
-texture coordinates for each basis texture: any mapping of the texture
-is fine, as long as it repeats with the selected repeating unit. For
-example, a texture can repeat multiple times inside the repeating
-unit, or can be skewed w.r.t. the repeating unit.  Again, a heuristic
-distribution is used which does not skew or scale the basis texture
-too much too often.
-
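A minimal sketch of one way to realize the constraint above, assuming
a hypothetical helper (not the paper's code): if the repeating unit
has basis vectors v1 and v2 in paper coordinates, then any linear
texture-coordinate mapping T with T*v1 and T*v2 integer vectors
repeats with the unit, because the basis textures repeat with period
one in s and t.

    /* Sketch: T = N * V^-1 for a random small integer matrix N, so that
     * the unit's basis vectors map to integer texture-coordinate offsets.
     * Row-major 2x2 matrices; distribution parameters are hypothetical. */
    #include <stdlib.h>

    void random_repeating_mapping(const float v1[2], const float v2[2],
                                  float T[4])
    {
        /* Small random integer matrix N; small entries keep the basis
         * texture from being scaled or skewed too much. */
        int n00 = rand() % 3 - 1, n01 = rand() % 3 - 1;
        int n10 = rand() % 3 - 1, n11 = rand() % 3 - 1;
        if (n00 * n11 - n01 * n10 == 0) {        /* avoid a degenerate N */
            n00 = 1; n11 = 1; n01 = 0; n10 = 0;
        }

        /* Invert V = [v1 v2] (column vectors). */
        float det = v1[0] * v2[1] - v2[0] * v1[1];
        float inv[4] = {  v2[1] / det, -v2[0] / det,
                         -v1[1] / det,  v1[0] / det };

        /* T = N * V^-1. */
        T[0] = n00 * inv[0] + n01 * inv[2];
        T[1] = n00 * inv[1] + n01 * inv[3];
        T[2] = n10 * inv[0] + n11 * inv[2];
        T[3] = n10 * inv[1] + n11 * inv[3];
    }
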
-Basis textures
---------------
-
-The shapes of the final background texture are generated entirely from
-a small set of static *basis textures*.  Even though the basis
-textures are RGB textures, they contain no color information: they are
-simply treated as 3- or 4-vectors to be used in various ways to create
-shapes, and color is added by the register combiners using the palette
-selected as described above.
 
-fig-basis: The complete set of 2D basis textures used by our
-          implementation.  All textures shown in this article are
-          built from these textures and the corresponding HILO
-          textures for offsetting.
-
-On the NV25 architecture, the texture accesses can be customized
-further by the use of texture shading: the texture coordinates used by
-a texture unit can be made to depend on the result of a previous
-texture unit.  This can be used to create a large variety of
-shapes\cite{perlin-noise-intro}.  So far, we have only used offset
-textures with random offset matrices, but even they do improve the
-quality of the output.
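
A minimal sketch of an offset-texture configuration like the one
described above, assuming the basic GL_OFFSET_TEXTURE_2D_NV operation
from NV_texture_shader (which reads a DSDT-format texture on the
previous unit); the HILO-based offsetting mentioned in the text is the
analogous NV25 operation, and the matrix values here are hypothetical
stand-ins for the seeded random ones.

    /* Sketch: texture unit 1's coordinates are perturbed by unit 0's
     * texel values through a random 2x2 offset matrix.
     * Same GL headers as in the earlier combiner sketch. */
    void setup_offset_texture_example(void)
    {
        static const GLfloat offset_matrix[4] = { 0.13f, -0.41f,
                                                  0.37f,  0.08f };

        glActiveTextureARB(GL_TEXTURE0_ARB);   /* unit 0: offset-source texture */
        glTexEnvi(GL_TEXTURE_SHADER_NV, GL_SHADER_OPERATION_NV, GL_TEXTURE_2D);

        glActiveTextureARB(GL_TEXTURE1_ARB);   /* unit 1: offset basis-texture lookup */
        glTexEnvi(GL_TEXTURE_SHADER_NV, GL_SHADER_OPERATION_NV,
                  GL_OFFSET_TEXTURE_2D_NV);
        glTexEnvi(GL_TEXTURE_SHADER_NV, GL_PREVIOUS_TEXTURE_INPUT_NV,
                  GL_TEXTURE0_ARB);
        glTexEnvfv(GL_TEXTURE_SHADER_NV, GL_OFFSET_TEXTURE_MATRIX_NV,
                   offset_matrix);

        glEnable(GL_TEXTURE_SHADER_NV);
    }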
-
-Register combiners
-------------------
-
-The NVIDIA register combiners extension is used to combine the 3-
-and 4-vectors obtained from the basis textures and the palette colors
-into the final fragment color.  Our need for the combiners is rather
-unconventional: we want to lose most of the original shapes of the
-basis textures in order to create new, different shapes from the
-interaction of the basis texture values and combiner parameters chosen
-randomly from the seed number.  For this, we use dot products of
-texture values with each other and with random constant vectors, and
-scale up with the register combiner output mappings to sharpen the
-result (see Fig.~\ref{fig-regcomb}).  The resulting values are used
-for interpolating between the palette colors.  Because some basis
-textures have blurrier edges than others, the output scalings need to
-be adjusted depending on the basis textures selected.
 
 
+figxupdfdiag: The motivating example for unique backgrounds: the
+BuoyOING focus+context interface for browsing bidirectionally
+hyperlinked documents.  The interface shows the relevant *fragments*
+of the other ends of the links and animates them fluidly to the focus
+upon traversing the link.  a) shows a small document network.  b) and
+c) show what a user sees while browsing the network, b) without and c)
+with background texture.  There are three keyframes where the
+animation stops.  Two frames of each animation between the keyframes
+are shown.  The unique backgrounds help the user notice that the upper
+right buoy in the last keyframe is actually a part of the same
+document (1) which was in the focus in the first keyframe.  Our (as
+yet untested) hypothesis is that this will aid user orientation.
+
+fig-perceptual: The qualitative model of visual perception used to
+create the algorithm.  The visual input is transformed into a feature
+vector, which contains numbers (activation levels) corresponding to
+e.g. colors, edges, curves and small patterns.  The feature vector is
+matched against the memorized textures.  In order to generate
+recognizable textures, random seed values should produce a
+distribution of feature vectors with maximum entropy.
+
+fig-basis: The complete set of 2D basis textures used by our
+implementation.  All textures shown in this article are built from
+these textures and the corresponding HILO textures for offsetting.
+
 fig-regcomb: How the limited register combiners of the NV10
-            architecture can be used to generate shapes.  Top: the
-            two basis textures.  Bottom left: dot product of the
-            basis textures: 2(2a-1)\cdot(2b-1)+1/2, where a and b are
-            the texture RGB values.  Bottom right: dot product of the
-            basis textures squared: 32( (2a-1)\cdot(2b-1) )^2.  This
-            term can then be used to modulate between two colors.
+architecture can be used to generate shapes.  Top: the two basis
+textures.  Bottom left: dot product of the basis textures:
+2(2a-1)\cdot(2b-1)+1/2, where a and b are the texture RGB values.
+Bottom right: dot product of the basis textures squared: 32(
+(2a-1)\cdot(2b-1) )^2.  This term can then be used to modulate between
+two colors.
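
A plausible reading of these expressions in terms of standard
register-combiner primitives (an assumption; the caption does not
spell this out): the GL_EXPAND_NORMAL_NV input mapping takes an
unsigned texture value a in [0,1] to 2a-1, the AB dot product of two
such expanded values gives (2a-1)\cdot(2b-1), the per-stage output
scalings (scale by two or by four) supply the leading constants, and
feeding one stage's result into both inputs of a second dot-product
stage squares it, which is how a term like 32( (2a-1)\cdot(2b-1) )^2
can be built up.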
+
+fig-examples: A number of unique backgrounds generated by our system.
+This view can be rendered, without pre-rendering the textures, in 20
+ms on a GeForce4 Ti 4200 in a 1024x768 window (fill-rate/bandwidth
+limited).
+
+figxanalogicalexample: Two different screenshots of a structure of PDF
+documents viewed in a focus+context view.  The user interface shows
+relationships between specific points in the documents.  Each document
+has a unique background, which makes it easy to see that the fragment
+of a document on the left side of the bottom view is the document
+fully seen in the top view; without unique backgrounds, this would be
+relatively difficult and would require traversing the link.



