gzz-commits

[Gzz-commits] gzz/Documentation/misc/hemppah-progradu mastert...


From: Hermanni Hyytiälä
Subject: [Gzz-commits] gzz/Documentation/misc/hemppah-progradu mastert...
Date: Thu, 27 Feb 2003 04:21:01 -0500

CVSROOT:        /cvsroot/gzz
Module name:    gzz
Changes by:     Hermanni Hyytiälä <address@hidden>      03/02/27 04:21:01

Modified files:
        Documentation/misc/hemppah-progradu: masterthesis.tex 

Log message:
        More gnutella

CVSWeb URLs:
http://savannah.gnu.org/cgi-bin/viewcvs/gzz/gzz/Documentation/misc/hemppah-progradu/masterthesis.tex.diff?tr1=1.87&tr2=1.88&r1=text&r2=text

Patches:
Index: gzz/Documentation/misc/hemppah-progradu/masterthesis.tex
diff -u gzz/Documentation/misc/hemppah-progradu/masterthesis.tex:1.87 
gzz/Documentation/misc/hemppah-progradu/masterthesis.tex:1.88
--- gzz/Documentation/misc/hemppah-progradu/masterthesis.tex:1.87       Thu Feb 
27 03:26:09 2003
+++ gzz/Documentation/misc/hemppah-progradu/masterthesis.tex    Thu Feb 27 
04:21:00 2003
@@ -218,14 +218,14 @@
 
 \section{Centralized}
 
-Napster \cite{napsterurl} \footnote{We decided to include Napster in this 
section only because it has
-historical value (see previous section).}  was designed to to allow people to 
share music. 
-It was a hybrid Peer-to-Peer file-sharing system, i.e., the search index was 
centralized 
-and the distribution storage and serving of files was distributed. Peers in 
the Napster 
-network performed requests to the central directory server to find other peers 
hosting 
-desirable content. Since service requests was totally based on centralized 
index, 
-Napster didn't scale well because of constantly updated central directory, and 
had a 
-possibility to single point of failure. 
+Napster\footnote{We decided to include Napster in this section only because it
+has historical value (see the previous section).} \cite{napsterurl} was designed
+to allow people to share music. It was a hybrid Peer-to-Peer file-sharing
+system, i.e., the search index was centralized while the storage and serving of
+files were distributed. Peers in the Napster network sent requests to the
+central directory server to find other peers hosting desirable content. Since
+all service requests depended on the centralized index, Napster did not scale
+well: the central directory had to be updated constantly, and it was a single
+point of failure. 
 
 
 \section{Loosely structured}
@@ -244,17 +244,33 @@
 forwards the query to their neighbors. This leads in the situation where 
number of messages
 in the network can grow with $O(n^{2})$, where $n$ is the number of 
participating peers in the
 Gnutella network. To limit the amount of network traffic, Gnutella uses 
Time-To-Live-limited
-(TTL) flooding to distributed queries. Therefore, only peers that are TTL hops 
away from the
-query originator will forward the query or respond to the query.
+(TTL) flooding to distribute queries. Gnutella uses a breadth-first traversal
+with depth limit $T$ (e.g., 7), where $T$ is the system-wide maximum TTL of a
+message in hops. Therefore, only peers that are at most $T$ hops away from the
+query originator will forward the query or respond to it. In the Gnutella
+network, search results arrive quickly, because the breadth-first traversal
+sends queries to every possible neighbor. On the other hand, this method wastes
+resources and does not scale well.
+
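[Editorial note: the TTL-limited breadth-first flooding described above can be sketched as a small simulation. This is an illustrative model only, not Gnutella's actual wire protocol; the graph generator and all names are hypothetical.]

```python
import random
from collections import deque

def ttl_flood(adj, origin, ttl):
    """Simulate TTL-limited flooding: every peer forwards the query to all
    neighbors except the sender, decrementing the TTL at each hop.
    Returns (messages sent, peers reached). Deliveries to already-reached
    peers still count as messages, but are not forwarded again."""
    messages = 0
    reached = {origin}
    queue = deque([(origin, None, ttl)])  # (peer, sender, remaining TTL)
    while queue:
        peer, sender, t = queue.popleft()
        if t == 0:
            continue
        for nbr in adj[peer]:
            if nbr == sender:
                continue
            messages += 1              # each forward costs one message
            if nbr not in reached:     # forward only on first receipt
                reached.add(nbr)
                queue.append((nbr, peer, t - 1))
    return messages, reached

def random_graph(n, avg_degree, seed=0):
    """Random undirected graph with roughly the given average degree."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    while sum(len(v) for v in adj.values()) < n * avg_degree:
        a, b = rng.randrange(n), rng.randrange(n)
        if a != b:
            adj[a].add(b)
            adj[b].add(a)
    return adj

adj = random_graph(200, 4, seed=42)
for ttl in (2, 4, 7):
    msgs, reached = ttl_flood(adj, 0, ttl)
    print("TTL", ttl, "messages", msgs, "peers reached", len(reached))
```

Running the loop for increasing TTL values shows the trade-off discussed below: a larger TTL reaches more peers but the message count grows much faster than the number of peers reached, because cycles in the graph produce duplicate deliveries.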
+According to \cite{lv02searchreplication}, Gnutella's data lookup method,
+\emph{flooding}, has the following limitations. First, choosing the appropriate
+TTL in practice is not easy: if the TTL is too high, the query originator may
+unnecessarily strain the network; if the TTL is too low, the query originator
+might not find the desired data even though it is available somewhere in the
+network. Second, flooding generates many duplicate messages, especially in
+highly connected graphs. With these limitations, flooding creates significant
+message processing overhead for each query. As a result, flooding may increase
+the load on a participating peer to the point where it has to leave the
+network. 
 
- 
 
-Recently, however, there has been done research on topology properties of the 
Internet \cite{adamic99small}
-and the Gnutella network \cite{adamic02localsearch}, 
\cite{adamic01powerlawsearch}. Studies show
-that both networks has a power law distribution of links, i.e., a few peers 
have high connectivity
-and major of peers have low connectivity. 
-peers prefential attach
-to popular peers
+
+Lately, a lot of research has been done to improve Gnutella's data lookup
+efficiency and scalability. Adamic et al. \cite{adamic99small},
+\cite{adamic02localsearch}, \cite{adamic01powerlawsearch} have studied
+different random walk methods in power-law networks\footnote{In power-law
+networks, only a few peers have a high number of neighbor links and the
+majority of peers have a low number of neighbor links.} and have found that
+data lookup performance increases significantly when peers forwarding queries
+are instructed to select high-degree peers. However, it is not clear whether
+this algorithm is scalable or not.
+
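[Editorial note: a high-degree-seeking walk of the kind studied by Adamic et al. can be sketched as follows. The preferential-attachment graph generator, the function names, and all parameters are illustrative assumptions, not taken from the thesis or the cited papers.]

```python
import random

def preferential_attachment(n, m, seed=0):
    """Barabasi-Albert-style power-law graph: each new node links to m
    existing nodes chosen proportionally to their current degree."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    endpoints = []                       # node ids, one entry per link end
    for a in range(m + 1):               # start from a small complete core
        for b in range(a + 1, m + 1):
            adj[a].add(b); adj[b].add(a)
            endpoints += [a, b]
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(endpoints))   # degree-proportional pick
        for t in chosen:
            adj[new].add(t); adj[t].add(new)
            endpoints += [new, t]
    return adj

def high_degree_walk(adj, start, target, max_steps, rng):
    """Forward the query to the highest-degree neighbor not yet visited,
    falling back to a random neighbor when all have been visited.
    Returns the hop count on success, or None if max_steps is exhausted."""
    current, visited = start, {start}
    for step in range(1, max_steps + 1):
        unvisited = [p for p in adj[current] if p not in visited]
        if unvisited:
            current = max(unvisited, key=lambda p: len(adj[p]))
        else:
            current = rng.choice(sorted(adj[current]))
        visited.add(current)
        if current == target:
            return step
    return None

rng = random.Random(1)
adj = preferential_attachment(300, 2, seed=1)
hops = high_degree_walk(adj, start=299, target=0, max_steps=3000, rng=rng)
print("hops to target:", hops)
```

The intuition the walk exploits is that in a power-law network the few high-degree hubs see most of the content and most of the links, so climbing toward hubs first tends to shorten searches; whether this remains efficient as the network grows is exactly the open scalability question the paragraph above raises.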
 
 
 



