[Gzz-commits] manuscripts/storm article.rst


From: Benja Fallenstein
Subject: [Gzz-commits] manuscripts/storm article.rst
Date: Sat, 08 Feb 2003 15:21:56 -0500

CVSROOT:        /cvsroot/gzz
Module name:    manuscripts
Changes by:     Benja Fallenstein <address@hidden>      03/02/08 15:21:56

Modified files:
        storm          : article.rst 

Log message:
        org

CVSWeb URLs:
http://savannah.gnu.org/cgi-bin/viewcvs/gzz/manuscripts/storm/article.rst.diff?tr1=1.115&tr2=1.116&r1=text&r2=text

Patches:
Index: manuscripts/storm/article.rst
diff -u manuscripts/storm/article.rst:1.115 manuscripts/storm/article.rst:1.116
--- manuscripts/storm/article.rst:1.115 Sat Feb  8 12:38:34 2003
+++ manuscripts/storm/article.rst       Sat Feb  8 15:21:55 2003
@@ -38,18 +38,19 @@
 However, recent developments in peer-to-peer systems have
 rendered this assumption obsolete. Structured overlay networks
 [ref chord, can, tapestry, pastry, kademlia, symphony, viceroy,
-skip graph, swan] allow *location independent* 
-routing based on random identifiers on a global scale. 
-It is now feasible to do a global search for all peers
-that have information about a given identifier.
+skip graph, swan] allow location-independent identifiers
+to be resolved on a global scale. 
+It is now feasible to do a global search for all information
+about a given identifier, on any peer in the network.
 This, we believe, may be the most important result of peer-to-peer 
 research with regard to hypermedia.
 
+We examine how location-independent identifiers can support *data mobility*.
 In today's computing world, documents move quite freely between 
 computers: being sent as e-mail attachments, carried around on disks,
 published on the web, moved between desktop and laptop systems,
 downloaded for off-line reading or copied between computers in a LAN. 
-We use *data mobility* as a collective term for the movement of documents
+We use 'data mobility' as a collective term for the movement of documents
 between computers (or locations on one computer, such as folders),
 and movement of content between documents (through copy&paste) [#]_.
 
@@ -58,9 +59,9 @@
    data mobility is neither the same as, nor limited to the physical
    movement of devices.
 
-We address two issues raised by data mobility:
+In this paper, we address two issues raised by data mobility:
 Dangling links and keeping track of alternative versions. 
-Resolvable location independent identifiers
+Resolvable location-independent identifiers
 make these issues much easier to deal with, since data
 can be recognized wherever it is moved [#]_. 
 
@@ -119,12 +120,16 @@
 byte sequences identified by cryptographic content hashes
 [ref ht'02 paper]. Additionally, Storm provides services
 for versioned data and Xanalogical storage [ref].
+We address the mobility of documents through block storage
+and versioning, while we use Xanalogical storage
+to address the movement of content between documents (copy&paste).
 
 .. [XXX figure of the different layers in Storm]
 
 The main contribution of this paper is the Storm design, 
-a hypermedia system built to make use of the emerging 
-peer-to-peer search technologies. Additionally, we hope to 
+a hypermedia system built to use the emerging 
+peer-to-peer search technologies to enhance data mobility. 
+Additionally, we hope to 
 provide an input to the ongoing discussion about peer-to-peer
 hypermedia systems [ref ht01, ht02].
 
@@ -168,13 +173,12 @@
 2. Related Work
 ===============
 
-2.1. Hypermedia and versioning
-------------------------------
+2.1. Dangling links
+-------------------
 
 The dangling link problem has received a lot of attention
 in hypermedia research [refs]. As examples, we examine the ways
-in which HTTP, Microcosm [ref], Chimera [ref] and Hyper-G [ref] 
-deal with the problem.
+in which HTTP, Microcosm [ref] and Hyper-G [ref] deal with the problem.
 
 In HTTP, servers are able to notify a client that a document
 has been moved, and redirect it accordingly [ref spec?]. However,
@@ -213,35 +217,20 @@
 is delivered to all interested servers, but requires that each
 interested server keeps a list of all the others.
 
-[XXX Chimera -- or any other distributed hypermedia system?]
+These approaches share the assumption that it is not possible
+to resolve a location-independent identifier. Otherwise,
+it would not be necessary to update links when a document
+is moved, nor would either of the servers storing two given documents
+need to know the links between them;
+knowing only a document's location-independent identifier,
+it would be possible to find both the document and links to it,
+no matter which peer in the network they are stored on.
+
+XXX Say something about the usual resolvable URN approaches
 
-All of these systems are built around the fundamental assumption
-that it is impossible to resolve a random [XXX] identifier.
-The use of location-independent identifiers
-for documents, resolved through a peer-to-peer lookup system, 
-makes notification of the servers storing links unnecessary; 
-when a document is moved, 
-but retains its identifier, it can be found by the same mechanism as
-before the move. It is possible to retrieve the document
-from any system storing a copy; this means that documents may be
-accessible even after the original publisher has taken them off-line [#]_.
-
-Conversely, an external link published by any host can be found
-when the endpoint of the link is known... XXX
-
-.. [#] Intentionally or unintentionally. We believe that it is 
-   a good thing if published documents remain available even when
-   the original publisher wants to retract them; however, discussion
-   of the ethical implications of this is outside the scope of this paper.
-   (But see [XXX search for refs! ;-)])
-   [Possible refs: http://www.openp2p.com/topics/p2p/p2p_law/.
-   However, they are necessarily directly related to this :( -Hermanni]
-
-Even Xanadu [ref], which went a long way to ensure that links do not break
-when their targets are copied from one document to another,
-required permanent connection to a network of servers to function. 
-Moreover, Xananu's 1988 incarnation [ref Green] addressed data 
-based on the address of a server holding a 'master copy.'
+
+2.2. Alternative versions
+-------------------------
 
 Likewise, version control systems like CVS or RCS [ref] usually assume
 a central server hosting a repository. The WebDAV/DeltaV protocols,
@@ -260,31 +249,12 @@
 as basis of communication channel among limited amount of participants. 
 Neither of these systems supports the immutability of data.
 
-CFS [ref], which is built upon Chord DHT peer-to-peer routing layer[ref], stores 
-data as blocks. However, CFS *splits* data (files) into several miniblocks and 
-spreads blocks over the available CFS servers. Freenet [ref] and PAST [ref],
-which is based on Pastry [ref], do not split files into blocks, since they store data 
-as whole files. All previously mentioned systems lack of the immutable 
-property which is used in Storm blocks.
-   
-Related work: we need something about p2p hypermedia: 
-[ref Bouvin, Wiil ("Peer-to-Peer Hypertext")]
-
-It's well recognized that references should not be by location [ref URN].
-
 [ref HTML version format proposal] Alternate versions important for
 authoring process [search refs]. (Note: Keeping track of versions
 structure is also \*hyper*media. Refs?) (WebDAV!)
 
-Nomadicity [ref]. Mobile users often have
-different machines, among which data must be Notes-replicated
-[ref Lotus Notes]. Also, caching of data for offline use.
-This also needs to be addressed for dialup users. Finally,
-train collaboration; this raises caching (local storage)
-as well as serverless versioning (like e-mail collaboration).
 
-
-2.2. Peer-to-peer systems
+2.3. Peer-to-peer systems
 -------------------------
 
 During the last few years, there have been a lot of research efforts related 
@@ -331,6 +301,26 @@
 In the broadcasting approach, implementations' differences mostly lie in the 
 *structural level* of the overlay network, i.e. super peers and peer clusters.
 
+CFS [ref], which is built upon the Chord DHT peer-to-peer routing layer [ref], stores
+data as blocks. However, CFS *splits* data (files) into several miniblocks and
+spreads the blocks over the available CFS servers. Freenet [ref] and PAST [ref],
+the latter of which is based on Pastry [ref], do not split files into blocks;
+they store data as whole files. All of the previously mentioned systems lack
+the immutability property of Storm blocks.
+
+
+2.4. Peer-to-peer hypermedia
+----------------------------
+
+Related work: we need something about p2p hypermedia: 
+[ref Bouvin, Wiil ("Peer-to-Peer Hypertext")]
+
+.. (Probabilistic access to documents may be ok in e.g. workgroups,
+   but does not really seem desirable. (At the ht'02 panel, Bouvin
+   said they might be ok, which others found very... bold.) 
+   One example may be a user's public comments on documents; 
+   these might be only available when that user is online.
+   
 
 3. Block storage
 ================
@@ -994,27 +984,6 @@
 XXX remove this section: p2p should be discussed in the
 relevant sections above (2-6).
 
-.. (Probabilistic access to documents may be ok in e.g. workgroups,
-   but does not really seem desirable. (At the ht'02 panel, Bouvin
-   said they might be ok, which others found very... bold.) 
-   One example may be a user's public comments on documents; 
-   these might be only available when that user is online.
-
-.. cf half-life of peers (Mojo Nation): Is it desirable that 'weak' peers
-   participate in a DHT? -- In Circle, peers must have been online
-   for at least an hour... In which ways, then, can 'weak' peers contribute
-   to the network in a p2p fashion? Caching is certainly one central
-   way, esp. when combined with multisource downloading (this can
-   potentially boost download speeds to the full available bandwidth).
-   This is a performance/reliability issue rather than something
-   changing the fundamental qualities of the network, but still important.
-
-   The important point about p2p publishing is that no account and setup
-   is necessary to start publishing.
-
-   One possibility: Use IBP for limited-time publishing, referring to
-   the location through the DHT? This might be related to p2p publishing.
-
 
 8. Experience and future directions
 ===================================
@@ -1115,6 +1084,21 @@
 When Xanalogical storage is not applied, using Storm as a
 replacement/equivalent of a conventional file and versioning system is
 trivial? 
+
+.. p2p -> cf half-life of peers (Mojo Nation): Is it desirable that 'weak' peers
+   participate in a DHT? -- In Circle, peers must have been online
+   for at least an hour... In which ways, then, can 'weak' peers contribute
+   to the network in a p2p fashion? Caching is certainly one central
+   way, esp. when combined with multisource downloading (this can
+   potentially boost download speeds to the full available bandwidth).
+   This is a performance/reliability issue rather than something
+   changing the fundamental qualities of the network, but still important.
+
+   The important point about p2p publishing is that no account and setup
+   is necessary to start publishing.
+
+   One possibility: Use IBP for limited-time publishing, referring to
+   the location through the DHT? This might be related to p2p publishing.
 
 
 9. Conclusions



