
Re: [Social-discuss] PHP-Based GNU Social structure


From: cal
Subject: Re: [Social-discuss] PHP-Based GNU Social structure
Date: Tue, 30 Mar 2010 16:14:25 +0200
User-agent: Mutt/1.5.20 (2009-06-14)

On 09:52, Tue 30 Mar 10, Carlo von Loesch wrote:
> with a license that goes beyond the Affero GPL. It should be
> forbidden to run this software in virtual machines as the privacy
> of the users can no longer be secured. If you run the software on
> your own hardware server, even if it's in somebody else's rack,
> it is much harder to harvest. Maybe I'm just some five years early
> thinking about the dangers of VM computing, but that's traditionally
> my role hurrying ahead of time.

I'm not sure how we could check whether such a license is being applied,
although I see your point, and it is pretty scary...

Maybe we are abusing the term "federation" (meaning something in between
decentralized and distributed, depending on the day), and perhaps it is
a valid use case that different "federations" impose extra restrictions
for interconnecting with their graph. We tend to draw a scenario where
every site can talk with any other, but maybe *only maybe* some sites
want to federate among equals and deny access, let's say, to Facebook
users. From my view, the problem with overly permissive gateways is that
they don't leave any incentive to break away from the big-brother jails.
Do we still want to be viral?

> Concerning web interfaces as such.. I know well how powerful they
> have become and wouldn't be surprised if desktop interfaces were
> WebKit-based or similar. I would consider it a great move if we
> found a way to share HTML/JS or other web-oriented logic between a
> desktop and a web server implementation, so that the UI is
> fundamentally the same whether you run it locally or not.

Apart from the question of whether gnusocial itself forks some PHP
codebase or not, I would agree with this approach. I've never tested it,
but projects like pyjamas seem promising in that respect (http://pyjs.org).
Any experience with similar stuff?

> Concerning the protocol it still hurts to think it could be "web
> hook" based, but it could indeed be designed to be transport
> agnostic and run on several transports. I do however think it
> would be a good idea not to limit ourselves to round-robin
> distribution but to provide distribution trees. That means
> multicast structures on top of whatever is used.. be it HTTP
> or plain TCP/UDP. I can imagine designing a library for transport
> agnostic multicast, which allows distribution trees to use a
> hybrid of HTTP and plain networking.
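
Just to make sure I follow the idea, here is a toy sketch (in Python, and
all the names here are mine, not from any spec) of a distribution tree
where each edge carries its own transport, so one tree can mix HTTP web
hooks and plain sockets:

```python
# Toy sketch of transport-agnostic multicast: every node in the
# distribution tree is reached via its own transport function, so a
# single tree can be a hybrid of HTTP and plain UDP delivery.

def send_http(dest, payload, log):
    # stand-in for POSTing the payload to a web hook
    log.append(("http", dest, payload))

def send_udp(dest, payload, log):
    # stand-in for sending a plain UDP datagram
    log.append(("udp", dest, payload))

class TreeNode:
    def __init__(self, name, send=send_http):
        self.name = name
        self.send = send            # how *this* node is reached
        self.children = []

    def child(self, node):
        self.children.append(node)
        return node

    def distribute(self, payload, log):
        # forward to every child over its own transport, then recurse
        for c in self.children:
            c.send(c.name, payload, log)
            c.distribute(payload, log)

root = TreeNode("origin")
hub = root.child(TreeNode("hub.example", send_http))
hub.child(TreeNode("leaf-1", send_udp))
hub.child(TreeNode("leaf-2", send_http))

log = []
root.distribute("hello world", log)
# log now records one delivery per node, each via its own transport
```

The fan-out logic never looks at which transport it is calling, which I
take to be the point of a transport-agnostic multicast library.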

I was also missing some thoughts about the inverse direction: i.e.,
not only content publishing/distribution but also information discovery
and retrieval. (With public data or in a f2f darknet the idea, however
complicated, can be the same.)

One fancy thing about having data represented in RDF is how natural it
is to run SPARQL queries against it -- not only the raw data, but the
whole graph of concepts and relationships. SPARQL endpoints, again, can
be reached over whatever transport you like, with any ACLs we are
capable of designing (a la foaf+ssl, like TAAC does), and they could
potentially also route queries (there are some interesting algorithms
for semantic query routing out there) to the endpoints most likely to
answer. Of course an endpoint can choose whether to reply to your query
or to ignore it, depending on where you sit in its web of trust, and
your SPARQL endpoint could be a webservice/daemon on the site/laptop of
your choice. Query times will surely be slower, but I bet the quality
of your socially-filtered replies will beat the one-score-fits-all.

All this is to say that searching a distributed graph can be the
Achilles' heel of the system (we've seen that before), and I cannot
imagine how else we could solve it in the classical GLAMP way, apart
from waiting for some central crawler to come and index us. Surely my
problem is that of only finding nails while holding a hammer :)

By the way, I don't remember if it has been mentioned before, but the
approach of projects like http://smob.me seems the right choice to me
given our goals. ARC2 gives you the triplestore in PHP, the frontend
does SPARQL... Savvy users can opt to deploy a more robust triplestore,
the SPARQL endpoint can be encapsulated/translated to XMPP/PSYC, etc...
freedom to imagine ;)

My doubt: I'm not sure if forking elgg/XXX and adapting the backend is
gonna be more tedious than starting from scratch.

hail eris,
cal.

-- 
open your mind to money,
sell your soul to capital.

