


From: Hermanni Hyytiälä
Subject: [Gzz-commits] gzz/Documentation/misc/hemppah-progradu mastert...
Date: Thu, 27 Feb 2003 09:09:20 -0500

CVSROOT:        /cvsroot/gzz
Module name:    gzz
Changes by:     Hermanni Hyytiälä <address@hidden>      03/02/27 09:09:20

Modified files:
        Documentation/misc/hemppah-progradu: masterthesis.tex 

Log message:
        More tightly structured overlays

CVSWeb URLs:
http://savannah.gnu.org/cgi-bin/viewcvs/gzz/gzz/Documentation/misc/hemppah-progradu/masterthesis.tex.diff?tr1=1.93&tr2=1.94&r1=text&r2=text

Patches:
Index: gzz/Documentation/misc/hemppah-progradu/masterthesis.tex
diff -u gzz/Documentation/misc/hemppah-progradu/masterthesis.tex:1.93 gzz/Documentation/misc/hemppah-progradu/masterthesis.tex:1.94
--- gzz/Documentation/misc/hemppah-progradu/masterthesis.tex:1.93       Thu Feb 27 08:52:24 2003
+++ gzz/Documentation/misc/hemppah-progradu/masterthesis.tex    Thu Feb 27 09:09:19 2003
@@ -386,10 +386,11 @@
 to implement identifier space.
 
 To store data into a tightly structured overlay, each application-specific
-key is \emph{mapped} by the overlay to a existing peer in the overlay. Each
-peer in the structured overlay maintains a \emph{routing table}, which consists
-of identifiers and IP addresses of other peers in the overlay. These are peer's
-neighbors in the overlay network. Figure \ref{fig:structured_hashing} 
+key is \emph{mapped} by the overlay to an existing peer in the overlay. Thus, the tightly
+structured overlay assigns a subset of all possible keys to every participating peer.
+Furthermore, each peer in the structured overlay maintains a \emph{routing table},
+which consists of the identifiers and IP addresses of other peers in the overlay.
+These are the peer's neighbors in the overlay network. Figure \ref{fig:structured_hashing}
 illustrates the process of data-to-key mapping in tightly structured overlays.
 
 \begin{figure}
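
To make the key-to-peer mapping described in the hunk above concrete, here is a minimal Python sketch. It is not taken from the thesis or from the gzz code base; the hash width, peer identifiers, IP addresses and function names are all invented for illustration. A key is hashed onto the identifier space and assigned to the first peer identifier at or after it (wrapping around), while each peer only keeps a small routing table of (identifier, IP address) pairs for its neighbors.

    import hashlib
    from bisect import bisect_left

    BITS = 16                                    # toy identifier space: 0 .. 2**16 - 1

    def key_id(key):
        """Hash an application-specific key onto the identifier space."""
        return int(hashlib.sha1(key.encode()).hexdigest(), 16) % (2 ** BITS)

    def responsible_peer(key, peer_ids):
        """Assign the key to the first peer identifier at or after it, wrapping around."""
        ids = sorted(peer_ids)
        i = bisect_left(ids, key_id(key))
        return ids[i % len(ids)]

    # One peer's routing table: identifiers and IP addresses of its overlay neighbors.
    routing_table = {4211: "10.0.0.7", 9820: "10.0.0.12", 14003: "10.0.0.31"}

    print(responsible_peer("urn:x-example:block-1", routing_table))

Overlays such as Chord use the same idea with consistent hashing, so that when a peer joins or leaves, only the keys in its immediate neighborhood of the identifier space have to move.
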
@@ -419,20 +420,22 @@
 must be constructed and maintained adaptively.
 
 Currently, all proposed tightly structured overlays provide at least 
-poly-logaritmical data lookup operations. However, there are some key 
+poly-logarithmic data lookup operations. However, there are some key 
 differences in routing algorithms. For example, Chord, Skip graphs and 
 Skipnet maintain a local data structure which resembles skip lists \cite{78977}.
 In Figure \ref{fig:structured_query}, we present an overview of Chord's lookup process.
 On the left side of the figure, we show the same data lookup process
 as a binary-tree abstraction. We can see that in each step, the distance between
 the query originator and the target in both methods is halved. Thus, the 
-locarithmic efficiency. Kademlia, Pastry and Tapestry uses balanced tree-like 
+lookup achieves logarithmic efficiency.
+
+Kademlia, Pastry and Tapestry use balanced tree-like 
 data structures. Figure \ref{fig:kademlia_lookup} shows the process of Kademlia
 data lookup. Viceroy maintains a butterfly data structure, which requires 
 only a constant number of neighbor peers while providing $O(\log{n})$ data lookup
 efficiency. Koorde, a recent modification of Chord, uses de Bruijn graphs to maintain
 local routing tables. Koorde requires each peer to have only about two links to other
-peers to to provide $O(\log{n})$ performance. 
+peers to provide $O(\log{n})$ performance. Peernet 
 
 \begin{figure}
 \centering
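
The logarithmic behaviour described above (and the "Chord example ?" note in the next hunk) can be illustrated with a small Chord-style lookup sketch in Python. It is not the thesis' own example: the ring size, peer identifiers and function names are invented. Each peer keeps fingers at exponentially spaced offsets, and every hop forwards the query to the finger closest to, but not past, the key.

    M = 6                                        # toy ring: identifiers 0 .. 2**6 - 1
    PEERS = [1, 8, 14, 21, 32, 38, 42, 48, 51, 56]

    def successor(ident):
        """First peer at or after ident, wrapping around the ring."""
        return next((p for p in PEERS if p >= ident), PEERS[0])

    def fingers(peer):
        """Chord-style finger table: neighbors at exponentially spaced offsets."""
        return [successor((peer + 2 ** i) % 2 ** M) for i in range(M)]

    def between(x, a, b):
        """True if x lies in the ring interval (a, b]."""
        return a < x <= b if a < b else x > a or x <= b

    def lookup(peer, key, hops=0):
        """Greedy lookup: forward to the furthest finger that does not pass the key."""
        succ = successor((peer + 1) % 2 ** M)
        if between(key, peer, succ):             # the successor stores the key
            return succ, hops
        nxt = max((f for f in fingers(peer) if f != key and between(f, peer, key)),
                  key=lambda f: (f - peer) % 2 ** M, default=succ)
        return lookup(nxt, key, hops + 1)

    print(lookup(8, 54))                         # -> (56, 2) on this toy ring

Because the fingers are spaced at powers of two, each hop roughly halves the remaining clockwise distance to the key, which is where the O(log n) hop bound comes from.
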
@@ -450,6 +453,10 @@
 \end{figure}
 
 
+Chord example ?
+
+abstraction: DHT, DOLR, multicast/anycast
+
 
 -service is data block, node/peer is a physical computer
 -*servers* self-organize towards a lookup network
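
The balanced tree-like structure attributed to Kademlia above comes from its XOR metric: the distance between two identifiers is their bitwise XOR, so peers sharing a longer prefix with the target are closer. A minimal sketch follows, again with invented 8-bit identifiers and function names rather than anything from the thesis.

    # Kademlia measures closeness as d(a, b) = a XOR b; a longer shared prefix
    # with the target means a smaller distance, i.e. a deeper common subtree.
    def xor_distance(a, b):
        return a ^ b

    def k_closest(target, known_peers, k=3):
        """The k known peers closest to the target; a lookup queries these next."""
        return sorted(known_peers, key=lambda p: xor_distance(p, target))[:k]

    known = [0b00010110, 0b01001100, 0b10100001, 0b11010011, 0b11011000]
    print([format(p, "08b") for p in k_closest(0b11010100, known)])

An iterative lookup repeats this step, asking the returned peers for contacts still closer to the target, so the shared prefix grows and the lookup again finishes in O(log n) steps.
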
@@ -532,17 +539,7 @@
 
 \subsection{Protocols}
 
-Measures:
 
-degree:
-
-hop count: 
-
-fault-tolerance
-
-maintenance overhead
-
-load balance
 
 
 
@@ -619,6 +616,18 @@
 
 
 \section{Summary}
+
+Measures:
+
+degree:
+
+hop count: 
+
+fault-tolerance
+
+maintenance overhead
+
+load balance
 
 
 \subsection{Differences}



