
From: Christian Grothoff
Subject: Re: [GNUnet-developers] A thought/"feature request" regarding "The message from Tahrir Square"
Date: Thu, 08 Mar 2012 10:20:01 +0100
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.24) Gecko/20111114 Icedove/3.1.16

Hi!

The problem with dumping all data destined to a specific peer is that you don't know which data (a) belongs together (ECRS) and (b) goes to the same target, because of receiver-anonymity. Also, just tracking queries that came from a peer in the past and then answering via mail is likely not useful --- the peer likely doesn't care anymore (as it was just forwarding for someone else and has since forgotten about the state).

Another key point is that getting information in a timely manner is likely also crucial --- we don't need censorship-resistant anonymous networks to distribute information to prosecute WW II criminals. On the other hand, preventing collateral murder that still happens today requires more timely information flow and protection for those leaking information.

Now, in terms of countries going off-line, the current technical solution we have in GNUnet is content migration and selective replication. So if the country is off-line, peers of people who are likely to travel (i.e. journalists and others with a ticket to get out) can be configured to soak up data from other peers (migration) and users that have particularly interesting information can tell their peers to 'push' that information more strongly into the network (by picking a higher replication level).
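
For concreteness, roughly what that looks like (a hedged sketch; the option names below are from the 0.9.x configuration and the exact spelling may differ between versions): the traveller's peer enables content caching so it soaks up migrating data, the source peer enables pushing, and the publisher picks a higher replication level on the command line.

  [fs]
  # soak up content that migrates in from other peers
  CONTENT_CACHING = YES
  # actively push this peer's own content out into the network
  CONTENT_PUSHING = YES

  # publish with a higher replication level (see gnunet-publish --help)
  $ gnunet-publish -r 9 file-to-share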

The result is similar to your proposal in that the peers that receive data essentially build up a dump (database), and then that dump can be moved. Now, there is no 'intended target' for the dump, other than "the outside world". But once that peer connects on the outside, the information would leak out. And, as a possible additional benefit, the user who moved the data might never know what data he gathered, and might himself be unable to decrypt it.

So for censorship-resistant file-sharing, I think we already have a decent solution in place. I'm still interested in the area of ultra-high latency communication, but more for applications like e-mail where the target system might be offline, not so much for file-sharing. Now, maybe you were thinking about more messaging-like applications in your proposal; that's certainly something I'd like to eventually see. However, that's still rather a long way off, and I'm not sure it would be something to implement at the level of the transport API. It would probably be easier to do closer to the application level, especially as not all applications will be able to work with large delays.


Happy hacking!

Christian



On 03/08/2012 12:29 AM, address@hidden wrote:
Hello list,

hopefully my thoughts will not be considered too heretical. ;-) Honestly, 
please don't hesitate to tell me if this is the wrong project to ask for 
implementing this proposal. I know it is a major feature request, and this project 
at the moment aims at sharing data over active connections and in 
semi-real-time.

Taking "The message from Tahrir Square" into account it shows that the wired 
means of communication are shut down and the wireless wide-area alternatieves are either 
centralised and/or localiseable by technical measures. To adress this issue a trade of 
the luxury of real-time for a diversity of transport-channels should be possible.

Now my thought, rather a question, is:
Why not, as an alternative meta-method of transport, make a "dump" of all data 
destined to be sent to a specific peer and transport it otherwise, as a file?

These ways could be:
Sneakernet
USB dead drops
Bluetooth
(re)writeable CD/DVDs
diverse steganographic hacks
you name it...

Now I know this is an asynchronous method of communication and would not be usable for 
VPN and Tor-like applications, but it would combine advantages of the aforementioned 
ways (no data retention, no means of time-correlation analysis, and personal control over 
the data if you meet the respective person) with the "core values" of GNUnet 
(end-to-end encryption, anonymous file sharing, mutual authentication and, most important 
of all, plausible deniability).


So why not implement an API for these asynchronous transport channels?


 From my understanding, the current transport API assumes the channels to be synchronous and in 
semi-real-time, since you can open a channel and have a session to close. An asynchronous 
transport API would of course require some changes to the GNUnet framework. GNUnet would have to keep an 
account of "ongoing" communications to other peers (what has been requested/sent via an 
"offline packet") and keep this information across a reboot.

The API would need a queue function to collect everything that needs to be sent 
until it can be. Transport modules would center around the event of a suitable 
device being connected: upon connection, files/data are read, and answers and 
reactions are computed and written to the medium.
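
And a matching sketch for the device event (again plain C with made-up paths; 
actually detecting the mount would be platform-specific, e.g. via inotify or a 
hotplug hook): when a suitable medium appears, the queued records are copied 
onto it and the local queue is truncated so nothing is sent twice. Reading 
inbound data off the medium would work the same way in reverse.

  #include <stdio.h>

  /* Copy the queued records onto the freshly connected medium and then
   * truncate the local queue.  Paths are illustrative only. */
  static int
  flush_queue_to_medium (const char *queue_file, const char *medium_file)
  {
    FILE *in = fopen (queue_file, "rb");
    FILE *out;
    char buf[4096];
    size_t n;

    if (NULL == in)
      return -1;
    out = fopen (medium_file, "ab");
    if (NULL == out)
    {
      fclose (in);
      return -1;
    }
    while (0 < (n = fread (buf, 1, sizeof (buf), in)))
      fwrite (buf, 1, n, out);
    fclose (in);
    fclose (out);
    /* queue flushed; truncate it so messages are not sent twice */
    out = fopen (queue_file, "wb");
    if (NULL != out)
      fclose (out);
    return 0;
  }

  int
  main (void)
  {
    return flush_queue_to_medium ("queue-PEERID.dat",
                                  "/media/usb0/gnunet-drop.dat");
  }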

In this context a timeout would of course not exist.


As for me, I have some experience at coding, but "just implementing it and sending 
you the patch" is definitely above my level, also because it means a major change 
to the core framework.

If my thoughts are not completely lunatic, please let me know and I will post a 
more detailed feature request in Mantis.


Sincerely,
        a.jhonson



