gzz-commits

[Gzz-commits] gzz/Documentation/misc/hemppah-progradu mastert...


From: Hermanni Hyytiälä
Subject: [Gzz-commits] gzz/Documentation/misc/hemppah-progradu mastert...
Date: Mon, 24 Feb 2003 08:36:49 -0500

CVSROOT:        /cvsroot/gzz
Module name:    gzz
Changes by:     Hermanni Hyytiälä <address@hidden>      03/02/24 08:36:48

Modified files:
        Documentation/misc/hemppah-progradu: masterthesis.tex 

Log message:
        More text

CVSWeb URLs:
http://savannah.gnu.org/cgi-bin/viewcvs/gzz/gzz/Documentation/misc/hemppah-progradu/masterthesis.tex.diff?tr1=1.64&tr2=1.65&r1=text&r2=text

Patches:
Index: gzz/Documentation/misc/hemppah-progradu/masterthesis.tex
diff -u gzz/Documentation/misc/hemppah-progradu/masterthesis.tex:1.64 gzz/Documentation/misc/hemppah-progradu/masterthesis.tex:1.65
--- gzz/Documentation/misc/hemppah-progradu/masterthesis.tex:1.64       Mon Feb 24 08:24:41 2003
+++ gzz/Documentation/misc/hemppah-progradu/masterthesis.tex    Mon Feb 24 08:36:48 2003
@@ -1554,8 +1554,7 @@
    \cite{overneturl}
    \cite{edonkey2kurl}
    
-   \cite{bittorrenturl}
-\cite{maymounkov03ratelesscodes}
+
 
 \section{Overview}
 
@@ -1645,41 +1644,27 @@
 attacks. Additionally, if possible, it would be beneficial if a Peer-to-Peer
 system would represent all named resources as keys.
 
-\section{Benefits over existing Peer-to-Peer file sharing systems}
-
 Since Storm uses the SHA-1 hash function for creating globally unique
 identifiers, we can, if necessary, check the integrity of a scroll
 block by re-computing the hash value for the block once it is fetched
 from the network. Indeed, all scroll blocks' identifiers are
-self-certifying. However, this is not very efficient when we want to
-fetch large amounts of data. One possibility is to use tree-based
-hash techniques (e.g., \cite{merkle87hashtree}, \cite{mohr02thex})
-for more efficient data fetching. Tree-based hash functions can be used
+self-certifying. However, this is not very efficient if we want to
+obtain large amounts of data. One possibility is to use tree-based
+hash techniques (e.g., \cite{merkle87hashtree}, \cite{mohr02thex}),
+which makes multisource downloads possible. Tree-based hash functions can be used
 to verify fixed-length segments of a data file, instead of the whole data file.
 Currently, Shareaza \cite{shareazaurl}, Overnet \cite{overneturl} and
 eDonkey2000 \cite{edonkey2kurl} use tree-based hashing for validating
-segments of a data file.
-
-
-- Easy syncing:
-  - Just copy a bunch of blocks
-  %- Documents can be synced & merged
-  %- Inter-document structures can be synced & merged
-  - Syncing can be done without merging immediately,
-    leaving two alternative versions current
-    (so e.g. an automated process is entirely possible,
-    even when there are conflicts)
-- Versioning
-
-From Benja's (plus antont and me) article:
-- Reliability (old versions, links work always, accessibility, append-and-delete)
-- Usability in the face of intermittent connectivity 
-  (includes syncing, finding a document if available...)  
-- Xanalogical structure 
-  (includes versioning, non-breaking links etc.)
-
--current p2p systems don't support all of these properties together
+segments of a data file. Other options for more efficient data
+fetching are multisource downloading (\cite{bittorrenturl}) and online
+codes (\cite{maymounkov03ratelesscodes}).
+
+Multisource downloads can be very useful when video, images or sound
+are stored using the Storm storage model. However, further research is
+required.
 
+For a more detailed discussion of Storm's storage model, see
+\cite{fallenstein03storm}.
 
 \section{Evaluation}
 

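The tree-based hashing mentioned above (Merkle trees, THEX) can likewise be sketched: hash fixed-length segments as leaves, then pairwise-combine hashes up to a single root. This simplified sketch omits the distinct leaf/internal-node prefixes real THEX uses, and the tiny segment size is only for illustration.

```python
import hashlib

SEGMENT = 4  # bytes per leaf; real systems use kilobyte-scale segments

def h(data: bytes) -> bytes:
    return hashlib.sha1(data).digest()

def merkle_root(data: bytes) -> bytes:
    # Hash each fixed-length segment, then combine adjacent pairs of
    # hashes until a single root hash remains.
    level = [h(data[i:i + SEGMENT]) for i in range(0, len(data), SEGMENT)] or [h(b"")]
    while len(level) > 1:
        level = [h(level[i] + (level[i + 1] if i + 1 < len(level) else b""))
                 for i in range(0, len(level), 2)]
    return level[0]

data = b"0123456789abcdef"
root = merkle_root(data)
# Corrupting any single segment changes the root, so each fixed-length
# piece fetched from an untrusted peer can be checked incrementally
# instead of re-hashing the whole file.
assert merkle_root(b"0123456789abcdXf") != root
```

This is what makes multisource downloads safe: segments arriving from different, mutually untrusting peers are each verified against the tree before being accepted.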


