
Re: [Social-discuss] What I think GNU Social's structure should be


From: Sean Corbett
Subject: Re: [Social-discuss] What I think GNU Social's structure should be
Date: Sun, 28 Mar 2010 18:35:16 -0400
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1.8) Gecko/20100301 Shredder/3.0.3

> I think that the best design structure is one such that there is a core,
> which handles interactions between nodes, and a user interface, which
> communicates with the core over a defined protocol. This is inspired by
> the structure of the Deluge BitTorrent client[1], and the GNUnet
> system[2].

This is generally the layout I've had in my head since I started
thinking about how GNU Social would be implemented; I definitely agree
that this is the best course of action.
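
To make the core/UI split concrete, here's the kind of thing I'm
picturing: the core speaks a small, defined protocol (line-delimited
JSON over a local socket in this sketch), and any UI drives it through
that. All of the names and the wire format below are made up for
illustration; nothing here is settled.

    import json
    import socketserver

    class CoreHandler(socketserver.StreamRequestHandler):
        # One connected UI per handler; requests and replies are JSON lines.
        def handle(self):
            for line in self.rfile:
                request = json.loads(line)
                # e.g. {"method": "post_status", "params": {"text": "hi"}}
                if request.get("method") == "post_status":
                    reply = {"ok": True}
                else:
                    reply = {"ok": False, "error": "unknown method"}
                self.wfile.write((json.dumps(reply) + "\n").encode())

    if __name__ == "__main__":
        # Any UI (web, GTK, curses) connects here and speaks the protocol.
        with socketserver.TCPServer(("127.0.0.1", 7777), CoreHandler) as srv:
            srv.serve_forever()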

> Running a node on personal computing hardware introduces the problem of
> how to receive data when a node is offline. There are two ways to solve
> this that I can see right now: the centralized way and the decentralized
> way.

> The centralized way is to have a persistent service running on $1 web
> hosting that functions as a caching proxy for the PC-based node. The
> node will advertise that proxy as its address, the proxy will send
> requests for data to that node, and will cache the response. If other
> nodes send data to the offline node, the proxy will cache those and send
> them when its node gets back online.
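
For what it's worth, the proxy idea seems straightforward to sketch.
This only illustrates the queue-and-flush behavior described above; the
class and method names are invented, and a real proxy would of course
need persistence and authentication.

    from collections import deque

    class Proxy:
        def __init__(self):
            self.queue = deque()  # data pushed to us while the node is offline
            self.cache = {}       # last responses, served on the node's behalf

        def deliver(self, message):
            # Another node pushed data; hold it until our node polls us.
            self.queue.append(message)

        def request(self, key):
            # Answer requests for the offline node from the cache, if we can.
            return self.cache.get(key)

        def flush(self, node):
            # The home node is back online: hand over everything we held.
            while self.queue:
                node.receive(self.queue.popleft())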

> The decentralized way is to presume that a user's friends in the social
> network will have recently requested and cached their friend's data on
> their node. If a request to a given node fails, the request should be
> forwarded to that node's friends. Likewise, we should assume that friends
> will be willing to deliver messages when their friends' nodes come back
> online. This is inspired by "active migration" in GNUnet and the same
> property of the Freenet network[4].
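
The fallback logic would look something like this, I think. Here
fetch() and friends_of() are assumed primitives standing in for
whatever the real protocol ends up providing:

    def fetch_with_fallback(node, key, fetch, friends_of):
        # Try the node itself first, then fall back to its friends' caches.
        try:
            return fetch(node, key)
        except ConnectionError:
            for friend in friends_of(node):
                try:
                    cached = fetch(friend, key)
                    if cached is not None:
                        return cached
                except ConnectionError:
                    continue
        return None  # nobody we asked had a copy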

> If we are caching our data on other nodes, we want to make sure that our
> data is safe. I think the best way to do this is to create "groups" of
> other users, and encrypt content we only want them to see to their GNU
> Social public keys. For instance, let's say we want a status update to
> only be visible to a certain group. That status update will be [...]

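On the encryption point: the usual way to encrypt one piece of content
to a whole group is hybrid encryption, i.e. encrypt the update once
under a fresh symmetric key and then wrap that key for each member's
public key. The sketch below only shows that structure;
symmetric_encrypt and public_encrypt stand in for a real library (NaCl,
GPG), so this is not working crypto on its own.

    import os

    def encrypt_for_group(update, group_public_keys,
                          symmetric_encrypt, public_encrypt):
        session_key = os.urandom(32)  # one-time content key
        ciphertext = symmetric_encrypt(session_key, update)
        # Wrap the session key once per group member; only they can
        # unwrap it and read the update.
        wrapped = {member: public_encrypt(pk, session_key)
                   for member, pk in group_public_keys.items()}
        return {"ciphertext": ciphertext, "keys": wrapped}
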
I'm confident this scheme would be functional; however, the possibility
that someone's data just isn't available everywhere would still exist.
If one of my friends posts a link and some commentary, and a few hours
later one of my coworkers tells me to check it out, and it turns out
that data isn't available to me because I wasn't online while it was
cached somewhere, I'd be ticked. Or would this not happen?

Also, why is push delivery better than pull? (The following probably
stems from some misunderstanding.) If I have a few hundred friends spread
across a hundred cores in the network, and there are also a lot of people
using this core, wouldn't the server running my core get bogged down?
Wouldn't it just be more efficient to serve that data on a per-request
basis, and cache it at a remote server once this happens? So, if one of
my friends on a remote node in the example above requests my data, and
another on the same node requests it, that server can handle the
second request directly (provided other updates haven't happened in the
meantime).
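
Here's roughly what I mean by pull-plus-cache, as a sketch.
fetch_remote() and the TTL are assumptions for illustration; the point
is just that only the first request from a given core crosses the
network.

    import time

    CACHE = {}  # key -> (fetched_at, data), kept on the remote server
    TTL = 300   # seconds before we check the origin for newer updates

    def pull(key, fetch_remote):
        entry = CACHE.get(key)
        if entry and time.time() - entry[0] < TTL:
            return entry[1]       # later requests on this core hit the cache
        data = fetch_remote(key)  # only the first request crosses the network
        CACHE[key] = (time.time(), data)
        return data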

Please, please correct me if I've just plain overlooked something. :-/

--sean c.





