[circle] How are data items stored/forwarded?


From: Paul Campbell
Subject: [circle] How are data items stored/forwarded?
Date: Sun, 3 Oct 2004 08:30:48 -0700
User-agent: Mutt/1.5.6i

I was spending some time poking through the DHT code over the last couple days.

Most of it makes a lot of sense... but then I started trying to figure out how
the data cache portion works. This isn't the "cache" in cache.py; I'm talking
about how node.py actually works internally.

It appears that node.publish stores the data locally in node.data (along with
several ancillary dictionaries such as node.data_timeout and node.data_priority,
which are used by the polling thread).
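
To check my reading, here is a rough sketch of what the local side of publish
seems to amount to (the attribute names come from node.py, but the method body
is my guess, not the actual code):

    import time

    class Node:
        def __init__(self):
            self.data = {}           # key -> value
            self.data_timeout = {}   # key -> expiry time (my guess)
            self.data_priority = {}  # key -> priority for the poll thread (my guess)

        def publish(self, key, value, timeout=600, priority=0):
            # publish only touches the local dictionaries; the polling
            # thread takes care of forwarding later.
            self.data[key] = value
            self.data_timeout[key] = time.time() + timeout
            self.data_priority[key] = priority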

It appears that the polling thread scans the data cache periodically and sends
"store link" commands with the data toward the neighbor that is closest
(in a DHT sense) to the destination. I assume from this that over time the
data item will migrate toward the intended destination
(log n * poll_interval expected seconds, to be exact). So there will be a
"thread" of copies across the DHT from the data source to the point on the ring
where the data is indexed, right?
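
In other words, I picture the poll loop doing roughly this (closest_neighbour
and send_store_link are placeholders for whatever the real routing calls are,
and POLL_INTERVAL is made up):

    import time

    POLL_INTERVAL = 60  # seconds, placeholder value

    def poll_loop(node):
        # Periodically push each cached item one DHT hop closer to the
        # node responsible for its key.
        while True:
            now = time.time()
            for key, value in list(node.data.items()):
                if node.data_timeout.get(key, 0) < now:
                    continue  # expired; let it fall out of the cache
                neighbour = node.closest_neighbour(key)          # placeholder
                if neighbour is not None and neighbour is not node:
                    node.send_store_link(neighbour, key, value)  # placeholder
            time.sleep(POLL_INTERVAL)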

Most Chord documents don't describe doing it like that. A node first does
a lookup on the ring to locate the node responsible for the data key,
and then sends the key/value pair directly to the responsible node (or nodes).
Usually the data has to be periodically refreshed by the source (a lease
mechanism).
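
That is, something closer to this (lookup and store stand in for the actual
RPCs, and the lease length is just for illustration):

    import time

    LEASE_SECONDS = 600  # illustrative lease length

    def publish_direct(node, key, value):
        # Standard Chord-style publish: route to the responsible node
        # once, then hand it the key/value pair directly.
        owner = node.lookup(key)  # O(log n) routing hops
        owner.store(key, value, lease=LEASE_SECONDS)

    def refresh_loop(node, key, value):
        # The source re-publishes before the lease expires, so the item
        # disappears once nobody refreshes it any more.
        while True:
            publish_direct(node, key, value)
            time.sleep(LEASE_SECONDS / 2)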

Is this actually how it works? If so, is there any value in inflating the
redundancy of every item by log(n) across the network (in expectation) when
the intermediate storing nodes get no perceived value from it, rather than
publishing directly to the correct destination(s)?

I could see redundancy value here, BUT that is already done as a separate
operation (inflating the number of copies by N, where N is usually 4).
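
Roughly like this (successors is a placeholder for however the replica set
is chosen):

    REPLICATION = 4  # the N mentioned above

    def publish_replicated(node, key, value):
        # Replication is already an explicit, separate step: store the
        # item on the N nodes responsible for the key.
        for owner in node.successors(key, REPLICATION):  # placeholder
            owner.store(key, value)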



