
[Gzz-commits] manuscripts/storm article.rst


From: Hermanni Hyytiälä
Subject: [Gzz-commits] manuscripts/storm article.rst
Date: Mon, 03 Feb 2003 04:34:41 -0500

CVSROOT:        /cvsroot/gzz
Module name:    manuscripts
Changes by:     Hermanni Hyytiälä <address@hidden>      03/02/03 04:34:38

Modified files:
        storm          : article.rst 

Log message:
        Cleaning and fixing

CVSWeb URLs:
http://savannah.gnu.org/cgi-bin/viewcvs/gzz/manuscripts/storm/article.rst.diff?tr1=1.73&tr2=1.74&r1=text&r2=text

Patches:
Index: manuscripts/storm/article.rst
diff -u manuscripts/storm/article.rst:1.73 manuscripts/storm/article.rst:1.74
--- manuscripts/storm/article.rst:1.73  Sun Feb  2 23:12:19 2003
+++ manuscripts/storm/article.rst       Mon Feb  3 04:34:38 2003
@@ -15,8 +15,8 @@
 However, recent developments in peer-to-peer systems have
 rendered this assumption obsolete. Distributed hashtables
 [ref chord, can, tapestry, pastry, kademlia, symphony, viceroy]
-and similar systems [skip graph, swan] allow *location independent* routing 
-based on random identifiers on a global scale. This, we believe,
+and similar systems [skip graph, swan, peernet] allow *location independent* 
+routing based on random identifiers on a global scale. This, we believe,
 may be the most important result of intense peer-to-peer 
 research with regard to hypermedia.
 
@@ -259,16 +259,18 @@
 
 Immutable blocks have several benefits over existing systems...
 
-1) Storm's block storage makes it easy to replicate data between systems.
+Storm's block storage makes it easy to replicate data between systems.
 Different versions of the same document can easily coexist at this level,
 stored in different blocks. 
 [Previous sentence doesn't parse to me (what level ?) :( -Hermanni]
 To replicate all data from computer A
 on computer B, it suffices to copy all blocks from A to B that B
-does not already store.
-[Example of Lotus Notes' replication conficts ? -Hermanni]
+does not already store. On the other hand, several popular database management 
+systems (e.g. Lotus Notes [ref]) have complex replication schemes, which may 
+lead to awkward replication conflicts. 
+[Or does this belong to diff section ? -Hermanni]
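[The replication rule above — copy to B every block that B does not already store — can be sketched as follows; this is an illustrative sketch added for this draft, not code from the Storm implementation, and the names are hypothetical.]

```python
# Illustrative sketch: because Storm blocks are immutable, replication
# reduces to a set difference on block ids. Nothing is ever overwritten,
# so no replication conflict can arise.

def replicate(a_blocks, b_blocks):
    """Copy every block that A stores and B does not.

    a_blocks, b_blocks: dicts mapping block id -> block bytes.
    """
    for block_id, data in a_blocks.items():
        if block_id not in b_blocks:
            b_blocks[block_id] = data  # add only; existing blocks untouched

a = {"id1": b"version 1 of doc", "id2": b"version 2 of doc"}
b = {"id1": b"version 1 of doc"}
replicate(a, b)
assert b == a  # B now stores both versions side by side
```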
 
-2) Storm blocks are MIME messages [ref MIME], i.e., objects with
+Storm blocks are MIME messages [ref MIME], i.e., objects with
 a header and body as used in Internet mail or HTTP.
 This allows them to carry any metadata that can be carried
 in a MIME header, most importantly a content type.
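[A hedged sketch of what such a block could look like, using Python's standard email library to build a header/body pair; the helper name and layout are illustrative, not Storm's actual API.]

```python
# Sketch: a Storm-like block as a MIME message -- a header carrying a
# content type, followed by the body. Purely illustrative.
from email.message import Message
import hashlib

def make_block(body: bytes, content_type: str) -> bytes:
    msg = Message()
    msg["Content-Type"] = content_type
    msg.set_payload(body.decode("ascii"))
    return msg.as_bytes()

block = make_block(b"Hello, Storm!", "text/plain")
# The block id can then be derived from a hash of the whole block:
block_id = hashlib.sha1(block).hexdigest()
```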
@@ -283,12 +285,12 @@
     
 [analogy to regular Hash Table/DHT ? -Hermanni]
 
-3) Implementations may store blocks in RAM, in individual files,
+Implementations may store blocks in RAM, in individual files,
 in a Zip archive, in a database or through other means.
 We have implemented the first three (using hexadecimal
 representations of the block ids for file names).
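[The file-based variant mentioned above — one file per block, named by the hexadecimal representation of the id — can be sketched like this; the SHA-1-based id and the one-file-per-block layout follow the text, but the function names are illustrative.]

```python
# Sketch of file-based block storage: each block lives in its own file,
# named by the hex form of its SHA-1-derived id. Illustrative only.
import hashlib
import os
import tempfile

def store_block(directory: str, block: bytes) -> str:
    block_id = hashlib.sha1(block).hexdigest()
    with open(os.path.join(directory, block_id), "wb") as f:
        f.write(block)
    return block_id

def load_block(directory: str, block_id: str) -> bytes:
    with open(os.path.join(directory, block_id), "rb") as f:
        return f.read()

d = tempfile.mkdtemp()
bid = store_block(d, b"some immutable data")
assert load_block(d, bid) == b"some immutable data"
```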
 
-4) Storing all data in Storm blocks provides *reliability*:
+Storing all data in Storm blocks provides *reliability*:
 When saving a document, an application will only *add* blocks,
 never overwrite existing data. When a bug causes an application
 to write malformed data, only the changes from one session
@@ -296,11 +298,11 @@
 be accessible. (Footnote: This makes Storm well suited as a basis
 for implementing experimental projects (such as ours).)
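[The append-only behaviour described above can be made concrete with a small sketch, assuming hash-derived ids; this is an illustration for the draft, not the project's code.]

```python
# Sketch of append-only saving: every save *adds* a block; a bug in one
# session can at worst corrupt the newly added block, never earlier data.
import hashlib

storage = {}  # block id -> content

def save(content: bytes) -> str:
    bid = hashlib.sha1(content).hexdigest()
    storage[bid] = content  # add only; prior blocks are never overwritten
    return bid

v1 = save(b"draft 1")
v2 = save(b"draft 1, edited")
assert storage[v1] == b"draft 1"  # old version still intact after the edit
```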
 
-5) When used in a network environment, Storm ids do not provide
-a hint as to where in the network the matching block can be found.
+When used in a network environment, Storm ids do not provide
+a hint as to where in the network a specific block can be found.
 However, current peer-to-peer systems could be used to
 find blocks efficiently in a distributed fashion; for example, 
-Freenet [ref], a few recent Gnutella clients [e.g. ref: shareaza], 
+Freenet [ref], a few recent Gnutella clients (e.g. Shareaza [ref]), and 
 Overnet/eDonkey2000 [ref] also use SHA-1-based identifiers 
 [e.g. ref: magnet uri].
 (Footnote: However, we have not put a network implementation into regular use
@@ -308,9 +310,9 @@
 implementation experience.)
 We discuss peer-to-peer implementations in Section 7, below.
 
-6) The immutability of blocks should make caching trivial, since it is
+The immutability of blocks should make caching trivial, since it is
 never necessary to check for new versions of blocks.
-Since the same namespace [mention urn-5 ? -Hermanni] is used for local data and data
+Since the same namespace is used for local data and data
 retrieved from the network, online documents that have been
 permanently downloaded to the local harddisk can also be found
 by the caching mechanism. This is convenient for offline browsing,
@@ -738,14 +740,18 @@
 network is created and maintained and how queries are performed. DHT is seen as a 
 scalable approach and usually provides (poly)logarithmic bounds on *all* internal 
 operations (footnote about 'stable state' ?), while broadcasting can't achieve 
-either of these. 
+either of these.
+
+[footnote: It's not clear whether all proposed DHT designs can preserve
+(poly)logarithmic properties when nodes join and leave the system in a dynamic 
+manner.]
 
 In the DHT approach, both keys and the addresses of peers are mapped into one virtual 
-key space. The form of key space depends on implementation. The mapping makes 
-possible to assign number of data items to a peer, based on how 'close' data 
-item's and peer's keys are each other. Thus, DHT's overlay connectivity graph 
-is structured. On the other hand, the overlay connectivity graph of broadcasting 
-approach is formed more or less (depends on implementation) in a random manner. 
+key space. The form of the key space depends on the implementation (e.g. it can 
+be a circle). The mapping makes it possible to assign a number of data items to 
+a peer, based on how 'close' (e.g. numerically or by XOR) the data item's and 
+the peer's keys are to each other. Thus, the DHT's overlay connectivity graph 
+is structured. On the other hand, the overlay connectivity graph of the 
+broadcasting approach is formed more or less (depending on the implementation) 
+in a random manner. 
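+
+[The 'closeness' idea above can be sketched as follows, with keys as small
+integers; the XOR metric (as in Kademlia) and numerical distance are the two
+examples named in the text, and the function itself is illustrative.]

```python
# Sketch: assign a data item to the peer whose key is nearest under a
# chosen metric -- XOR distance or plain numerical distance.

def closest_peer(item_key: int, peer_keys, metric="xor"):
    if metric == "xor":
        dist = lambda p: item_key ^ p   # XOR metric, as in Kademlia
    else:
        dist = lambda p: abs(item_key - p)  # numerical distance
    return min(peer_keys, key=dist)

peers = [0b0001, 0b0100, 0b1110]
assert closest_peer(0b0101, peers) == 0b0100          # XOR distance 1
assert closest_peer(0b1111, peers, "num") == 0b1110   # numerically nearest
```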
 
 When performing queries in the broadcasting approach, a peer sends a query 
 request to a subset of its neighbors, and these peers forward it to their 
 subsequent neighbors. The 
@@ -763,14 +769,6 @@
 In the broadcasting approach, implementations differ mostly in the 
 *structural level* of the overlay network, i.e. super peers and peer clusters.
 
-Recent work [ref: peernet] has concentrated on developing p2p infrastructure
-at the *network* layer. This is a different alternative to existing p2p 
-infrastructures, which operate at the *application* layer. Initial design
-of the system is promising, since it has several benefits over application 
-level p2p infrastructures (see paper). However, further research has to be
-done in order to affirm the applicability of the technique. 
-The storm design presented here is independent of the network layer, i.e. it
-may be implemented as an overlay on an IP-based network or something else.
 
 Review of the use cases: what does storm in each?
 -------------------------------------------------



