gzz-commits

[Gzz-commits] manuscripts/storm article.rst


From: Hermanni Hyytiälä
Subject: [Gzz-commits] manuscripts/storm article.rst
Date: Tue, 04 Feb 2003 07:12:53 -0500

CVSROOT:        /cvsroot/gzz
Module name:    manuscripts
Changes by:     Hermanni Hyytiälä <address@hidden>      03/02/04 07:12:53

Modified files:
        storm          : article.rst 

Log message:
        Reorg, fixes etc.

CVSWeb URLs:
http://savannah.gnu.org/cgi-bin/viewcvs/gzz/manuscripts/storm/article.rst.diff?tr1=1.81&tr2=1.82&r1=text&r2=text

Patches:
Index: manuscripts/storm/article.rst
diff -u manuscripts/storm/article.rst:1.81 manuscripts/storm/article.rst:1.82
--- manuscripts/storm/article.rst:1.81  Tue Feb  4 07:01:25 2003
+++ manuscripts/storm/article.rst       Tue Feb  4 07:12:52 2003
@@ -14,7 +14,7 @@
 unique random identifiers are not globally feasible for this reason.
 
 However, recent developments in peer-to-peer systems have
-rendered this assumption obsolete. Distributed hashtables
+rendered this assumption obsolete. Structured overlay networks
 [ref chord, can, tapestry, pastry, kademlia, symphony, viceroy]
 and similar systems [skip graph, swan, peernet] allow *location independent* 
 routing based on random identifiers on a global scale. This, we believe,
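[A minimal sketch of the location-independent routing idea might help here. The following toy consistent-hashing lookup is purely illustrative, in the spirit of Chord-style overlays; the node names and 32-bit identifier width are assumptions for the example, not anything from the Storm design. -ed.]

```python
import hashlib

def node_id(name: str) -> int:
    """Map a name onto a 32-bit identifier ring via SHA-1."""
    return int.from_bytes(hashlib.sha1(name.encode()).digest()[:4], "big")

def lookup(key: str, nodes: list[str]) -> str:
    """Return the node responsible for `key`: the first node clockwise
    from the key's position on the ring (Chord-style successor)."""
    kid = node_id(key)
    ring = sorted(nodes, key=node_id)
    for n in ring:
        if node_id(n) >= kid:
            return n
    return ring[0]  # wrap around the ring

nodes = ["peer-a", "peer-b", "peer-c", "peer-d"]
owner = lookup("some-random-block-id", nodes)
# The same key maps to the same node regardless of where the query
# starts -- the identifier is location independent.
assert owner == lookup("some-random-block-id", list(reversed(nodes)))
```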
@@ -22,11 +22,11 @@
 research with regard to hypermedia.
 
 In today's computing world, documents move quite freely between 
-computers, being sent as e-mail attachments, carried around on disks,
+computers: being sent as e-mail attachments, carried around on disks,
 published on the web, moved between desktop and laptop systems,
 downloaded for off-line reading or copied between computers in a LAN. 
-Often, the same document is independently modified 
-on two more more unconnected, separete systems. We address two issues
+Furthermore, the same document is independently modified 
+on two (or more) unconnected, separate systems. We address two issues
 raised by this ad hoc *data mobility* (see footnote) phenomenon: Dangling links, 
 and keeping track of alternative versions. Resolvable location independent identifiers
 make these issues much easier to deal with, since data
@@ -35,7 +35,7 @@
 situations with popular items.
 
 footnote: we emphasize the mobility of *data*. Indeed, technology used
-for moving data is not our primary concern. On the other hand, the
+for transferring data is not our primary concern. On the other hand, the
 mobility of people is obviously a particular reason for data mobility. 
 For data mobility, the mobility of people manifests as the mobility of the
 data processing devices among which the data is shared
@@ -49,22 +49,22 @@
 [ref ht'02 paper], unifying the namespaces of
 private data and documents published on the Internet by
 using the same identifiers for both.
-Storm has been partially implemented as a part of the Gzz project [ref], 
-which uses Storm exclusively for all disk storage. On top of Storm,
-we have built a system for storing mutable, versioned data
-and an implementation of Xanalogical storage [ref].
-[General figure of Storm, i.e. application layer, storm layer, 
-netowork layer ? -Hermanni]
+
 
 The main contributions of this paper are:
 - the Storm design, which employs new techniques for hypermedia 
 systems (location independent identifiers, immutable block storage, *working* links etc.)
 - the use of a p2p architecture in the hypermedia domain 
 
-Gzz provides a platform to build hypermedia applications upon.
-So far, we have only used Storm in our experimental
-hypermedia system, Gzz. No work on integrating Storm
-with current programs (in the spirit of Open Hypermedia)
+Currently, Storm has been partially implemented as a part of the Gzz 
+project [ref], which uses Storm exclusively for all disk storage.
+Gzz provides a general platform for building hypermedia applications upon.
+On top of Storm, we have built a system for storing mutable, versioned data
+and an implementation of Xanalogical storage [ref].
+[General figure of Storm, i.e. application layer, storm layer, 
+network layer ? -Hermanni]
+ 
+No work on integrating Storm with current programs (in the spirit of Open Hypermedia)
 has been done so far. It is not clear how far this is possible
 without changing applications substantially, if we are to take
 advantage of our implementation of Xanalogical storage.
@@ -85,35 +85,30 @@
 potential peer-to-peer implementations of Storm. In section 8, 
 we report on implementation experience and future directions. 
 Section 9 concludes the paper.
-[Suggestion: In next section, we describe 
-related work. In section 3, we give an overview of Storm. In 
-section 4, we discuss the details of basic storage unit. In 
-section 5, we discuss our implementation of Xanalogical storage 
-on top of the block system. In section 6 we discuss application-specific 
-reverse indexing of blocks by their content and techiques for 
-efficient versioned storage of mutable data on top of blocks. In 
-section 7, we discuss potential peer-to-peer implementations of Storm. 
-In section 8, we report on implementation experience and future 
-directions. Section 9 concludes the paper. -Hermanni]
 
 (where and how use cases? are not mentioned in either of the above, but are
 currently split in two - hinting somewhere in the beginning and reviewing at
 the end).
+[In appendix A ? -Hermanni]
 
 
 2. Related Work
 ===============
-...
-However, in advanced hypermedia systems such as Microcosm[] and Hyper-G[],
-several approaches to dealing with the dangling link and other link
-management problems have been developed. Microcosm addressed the linking
-problems of large archives by separating the links from the documents and
-storing them on dedicated linkbases, with the requirement that when a
-document where a position or document dependant link anchor occurs is moved
-or deleted, the hypermedia document management system, or hyperbase, ought
-to be informed [HymEbook?]. In Hyper-G, when there are similar changes, all
-other Hyper-G servers that reference that document can be informed, and an
-efficient protocol has been proposed for that purpose [kappe95scalable].
+
+In advanced hypermedia systems such as Microcosm[] and Hyper-G[],
+several approaches have been proposed to deal with dangling links and other link
+management problems. 
+
+Microcosm addressed the linking problems of large archives 
+by separating the links from the documents and storing them in dedicated 
+linkbases, with the following requirement: when a document (where a 
+position or document dependent link anchor occurs) is moved or deleted, 
+the hypermedia document management system (or hyperbase) ought to be informed 
+[HymEbook?]. 
+
+In Hyper-G, when there are similar changes, all other Hyper-G servers that 
+reference the document in question can be informed, and an efficient protocol has been 
+proposed for that purpose [kappe95scalable].
 (All that does not change the basic assumption, may even be seen as
 workarounds from the p2p solution's point of view?)
 [yet Chimera and other OHS?]
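[The linkbase idea above could be sketched roughly as below; the class and method names are hypothetical, purely for illustration, and not Microcosm's actual interfaces. -ed.]

```python
# Toy sketch of an external linkbase: links live outside the documents,
# keyed by (document id, anchor), so when a document moves only the
# linkbase must be told -- not every referring document.
# All names here are illustrative assumptions, not a real Microcosm API.

class Linkbase:
    def __init__(self):
        self._links = {}  # (doc_id, anchor) -> target doc_id

    def add_link(self, doc_id, anchor, target):
        self._links[(doc_id, anchor)] = target

    def resolve(self, doc_id, anchor):
        return self._links.get((doc_id, anchor))

    def document_moved(self, old_id, new_id):
        """The notification the hyperbase needs when a document moves."""
        self._links = {
            (new_id if d == old_id else d, a): (new_id if t == old_id else t)
            for (d, a), t in self._links.items()
        }

lb = Linkbase()
lb.add_link("doc1", "intro", "doc2")
lb.document_moved("doc2", "doc2-moved")
assert lb.resolve("doc1", "intro") == "doc2-moved"  # no dangling link
```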
@@ -153,14 +148,15 @@
 systems lack the immutable property which is used in Storm blocks.
 
    
-
+Related work: we need something about p2p hypermedia: 
+[ref Bouvin, Wiil ("Peer-to-Peer Hypertext")]
 
 [Note: The following are my notes for what should be written,
 not final text! --benja .. adding comments in the middle --antont]
 
 It's well recognized that references should not be by location [ref URN].
 
-(To explain data mobility:
+(To explain ad hoc *data mobility*:
 Data moves like this and that. The server/location paradigm
 is not suited to this: To support hypermedia functionality correctly,
 we need to recognize two copies of the *same* document.
@@ -171,7 +167,7 @@
 for this reason [ref TBL -- XXX note: not true like this; the ref I had
 in mind is http://www.w3.org/DesignIssues/NameMyth.html, and
 it's not about back links]. However, recent innovations in P2P have made
-scalable hashtables possible.
+scalable location independent routing/hashtables possible.
 
 ->
 Binding documents to servers has been necessary to make the Web scalable,
@@ -185,8 +181,8 @@
 
 {If standards could be agreed on, web servers should be able to
 self-organize into a DHT implementing bidi links. There has been
-interest in p2p hypermedia [ref Bouvin]. This would not, however,
-solve data mobility on disconnected clients.}?
+interest in p2p hypermedia [ref Bouvin, Wiil ("Peer-to-Peer Hypertext")]. 
+This would not, however, solve data mobility on disconnected clients.}?
 
 A second issue arising from data mobility is version consolidation
 (as well as simply keeping track of alt. versions). [ref HTML
@@ -267,17 +263,18 @@
 ================
 
 [Do we need a figure, which shows the overall structure of block storage
-with pointers and diffs ? -Hermanni]
+with pointers, diffs, etc.? -Hermanni]
 
 In our system, Storm (for *storage module*), all data is stored
 as *blocks*, byte sequences identified by a SHA-1 cryptographic content-hash 
-[ref SHA-1 and our ht'02 paper]. Blocks often have a similar granularity
+[ref SHA-1 and our ht'02 paper]. Blocks have a similar granularity
 as regular files, but they are immutable, since any change to the
 byte sequence would change the hash (and thus create a different block).
 Mutable data structures are built on top of the immutable blocks
 (see Section 6).
 
-Immutable blocks has several benefits over existing systems...
+Immutable blocks have several benefits over existing data storage 
+techniques:
 
 Storm's block storage makes it easy to replicate data between systems.
 Different versions of the same document can easily coexist at this level,
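[A concrete illustration of content-hash block identifiers, and of mutable data layered on immutable blocks, could go here. The `sha1:` prefix and the pointer dictionary below are illustrative assumptions for the sketch, not the exact Storm formats. -ed.]

```python
import hashlib

def block_id(data: bytes) -> str:
    """Name a block by its SHA-1 content hash."""
    return "sha1:" + hashlib.sha1(data).hexdigest()

store = {}                       # block id -> immutable byte sequence

def put(data: bytes) -> str:
    bid = block_id(data)
    store[bid] = data            # a block is never overwritten
    return bid

v1 = put(b"Draft of the Storm article.")
v2 = put(b"Draft of the Storm article, revised.")
assert v1 != v2                  # any byte change yields a new block

# Mutability is layered on top: a named pointer records the current
# version, while old blocks remain addressable and can coexist.
pointer = {"storm-article": v2}
assert store[pointer["storm-article"]].endswith(b"revised.")
```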
@@ -470,7 +467,7 @@
 implementation on top of a distributed hashtable
 will be trivial.
 
-[Benja, this might be useful for defining APIs for DHTs etc: 
+[Benja, this might be useful for defining Storm APIs for DHTs etc: 
 http://sahara.cs.berkeley.edu/jan2003-retreat/ravenben_api_talk.pdf
 Full paper will appear in IPTPS 2003 -Hermanni]
 



