
Re: [Social-discuss] What I think GNU Social's structure should be


From: Ted Smith
Subject: Re: [Social-discuss] What I think GNU Social's structure should be
Date: Sun, 28 Mar 2010 20:16:38 -0400

On Sun, 2010-03-28 at 15:14 -0700, Jason Self wrote:
> Sean Corbett <address@hidden> wrote ..
> 
> > I'm confident this scheme would be functional, however the possibility
> > that someone's data just isn't available everywhere would still exist.
> > If one of my friends posts a link and some commentary, and a few hours
> > later one of my coworkers tells me to check it out, and it turns out
> > that data isn't available to me because I wasn't online while it was
> > cached somewhere, I'd be ticked. Or would this not happen?
> > 
> > Also, why is push delivery better than pull? (The following probably
> > stems from some misunderstanding) If I have a few hundred friends spread
> > across a hundred cores in the network, and there's also a lot of people
> > using this core, wouldn't the server running my core get bogged down?
> > Wouldn't it just be more efficient to serve that data on a per request
> > basis, and cache it at a remote server once this happens? So, if one of
> > my friends on a remote node in the example above requests my data, and
> > another on the same node requests it, that server can deal with the
> > second request directly (provided other updates haven't happened in the
> > meantime).
> > 
> > Please, please correct me if I've just plain overlooked something. :-/
> 
> I think it's best if my node, running my copy of GNU Social, is authoritative
> for my data: just like my webserver & my DNS server are authoritative for me.
> In the model mentioned earlier, where there's multiple copies of my data, I
> don't think that there's a way to guarantee that everyone has a copy, so why
> bother?

My model does not attempt to guarantee that everyone has a copy. It
exploits the likelihood that friends have recently requested and cached
their friends' data. This is possible to do in a monolithic program: if
I am address@hidden, and you are address@hidden, and I
request your profile data, or you push it to me, there's no reason I
can't cache it and serve it to others if your node is down.
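The cache-and-serve idea above could be sketched roughly as follows. This is
only an illustration of the proposal, not actual GNU Social code; the `Node`
class and its method names are hypothetical:

```python
class Node:
    """A user's node, holding its own profile plus cached copies of friends'."""

    def __init__(self, user, online=True):
        self.user = user
        self.online = online
        self.profile = {"user": user, "posts": []}
        self.cache = {}  # user id -> last-fetched copy of that user's profile

    def fetch_profile(self, origin):
        """Pull a friend's profile from their node, caching the result."""
        if origin.online:
            self.cache[origin.user] = dict(origin.profile)
            return self.cache[origin.user]
        return None  # origin unreachable; caller may retry later

    def serve_cached(self, user):
        """Serve a cached copy of another user's data (possibly stale)."""
        return self.cache.get(user)


# alice pulls bob's data while his node is up...
bob = Node("bob@node-b")
alice = Node("alice@node-a")
alice.fetch_profile(bob)

# ...bob's node then goes down, but a mutual friend can still get
# bob's (possibly stale) data from alice's cache.
bob.online = False
carol_view = alice.serve_cached("bob@node-b")
```

The point of the sketch is that caching falls out of ordinary request
handling: no separate replication step is needed, and staleness is accepted
as the cost of availability.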

> It greatly simplifies the design if we don't, and besides, in your example:
> if a friend of yours received a link & a comment from me, but my node went
> offline before you could get it from me, there's nothing to stop your friend
> from including that link & comment in their feed to you. (Or you could get
> it from me once my node is back online again.)
> 
I am proposing that friends cache other friends' pages when they request
them; that way there is some level of redundancy in the network. In
either system you can just wait and try again later.

> It's very easy to overdesign GNU Social. It needs to be simple.
> 
This is a complex design, but I think that, overall, isolating the
components behind strictly defined interfaces will make implementing (and
changing) GNU Social much simpler.
