[GNUnet-developers] Idea for file storage in GNUnet

From: hypothesys
Subject: [GNUnet-developers] Idea for file storage in GNUnet
Date: Thu, 6 Dec 2012 13:03:03 -0800 (PST)

Hello GNUnet Developers,

First of all, I apologize if this is not the correct place to discuss a
possible new feature for GNUnet. I am not from the IT field, so I cannot even
attempt to implement it myself; still, if you find the feature valuable, you
might consider implementing it, so I wanted to share it. Please bear in mind
that I am no expert and this may be infeasible for technical reasons that are
not obvious to me. In that case, please say so and I will not take more of
your time.

Some time ago I had the idea that GNUnet (as well as other projects) could
benefit from increased disk space for storage, and that using the free space
on a disk should be technically possible, if difficult.

On many filesystems, when a file is deleted it is not truly erased. In the
FAT filesystem, for example, the list of disk clusters occupied by the file
is erased from the file allocation table, marking those sectors as available.
I do not know how other filesystems handle this, but for the sake of argument
let's say that a header is instead applied to each piece of the file,
indicating that that portion of the hard disk is available to be reused:

/header/ data block Nº1; /header/ data block Nº2; /header/ data block Nº3; ...

If GNUnet were able to split the file data into data blocks (encrypted, of
course) and subsequently delete the data, while keeping both a checksum for
each data block and a record of its disk location, the free disk space of
computers on which GNUnet is installed could be used for storage without
compromising the normal functioning of those computers.
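The bookkeeping described above could be sketched as follows (a minimal
illustration, not GNUnet code; the 4096-byte block size and SHA-256 as the
checksum are my assumptions):

```python
import hashlib

BLOCK_SIZE = 4096  # assumed block size, purely for illustration


def index_blocks(data: bytes, start_offset: int = 0):
    """Split `data` into fixed-size blocks and record, for each block,
    its checksum and its (hypothetical) on-disk location."""
    records = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        records.append({
            "offset": start_offset + i,   # where the block would live on disk
            "length": len(block),
            "sha256": hashlib.sha256(block).hexdigest(),
        })
    return records


records = index_blocks(b"x" * 10000)
# 10000 bytes -> three blocks of 4096, 4096, and 1808 bytes
```

Only the small table of records would need to survive; the blocks themselves
sit in space the host OS considers free.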

This program, perhaps to be named gnunet-str (storage), would, at the moment
the data is stored, create a checksum for every encrypted data block and for
every "contiguous" group of blocks, as follows:

checksum(block 1); checksum(block 2); checksum(block 3); checksum(block 4); ...

but also

checksum(block 1 + block 2); checksum(block 3 + block 4); ...

and also

checksum(block 1 + block 2 + block 3 + block 4); ...

and continuing up to a single checksum over all the blocks.
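This checksum-of-checksums scheme is essentially a hash tree. A minimal
sketch (my own illustration, again assuming SHA-256; GNUnet's actual hashing
may differ):

```python
import hashlib


def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()


def checksum_levels(blocks):
    """Build checksum levels: the bottom level holds one hash per block,
    and each higher level hashes pairs from the level below, ending in a
    single checksum covering all blocks."""
    level = [h(b) for b in blocks]
    levels = [level]
    while len(level) > 1:
        paired = [level[i] + (level[i + 1] if i + 1 < len(level) else b"")
                  for i in range(0, len(level), 2)]
        level = [h(p) for p in paired]
        levels.append(level)
    return levels


levels = checksum_levels([b"block1", b"block2", b"block3", b"block4"])
# levels[0]: 4 per-block hashes; levels[1]: 2 pair hashes; levels[2]: the root
```

Checking the top-level checksum first, and descending only into groups that
fail, is what makes the later corruption check fast.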

In this way, it would be possible to ascertain quickly (by going from the
checksums of the block groups down to the individual blocks) which data had
been corrupted (by normal use of the host OS, or by a disk defragmentation)
and had to be replaced. The node could then signal to other GNUnet nodes:
"Of the data stored, only 70% (for example) is still uncorrupted. I can share
this 70%, but send me the other 30% back, or new files to store in this
space."
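The "how much survived" check amounts to comparing each stored block against
its recorded checksum. A small sketch (names and the corruption simulation
are mine, for illustration only):

```python
import hashlib


def sha(b: bytes) -> str:
    return hashlib.sha256(b).hexdigest()


def intact_fraction(blocks, expected_checksums):
    """Return the indices of blocks whose checksum still matches, and the
    fraction of stored data that survived the host OS's normal activity."""
    ok = [i for i, (blk, chk) in enumerate(zip(blocks, expected_checksums))
          if sha(blk) == chk]
    return ok, len(ok) / len(blocks)


original = [bytes([65 + i]) for i in range(10)]   # ten tiny "blocks"
checksums = [sha(b) for b in original]

stored = list(original)
stored[3] = b"X"   # simulate the host OS overwriting three blocks
stored[7] = b"Y"
stored[9] = b"Z"

surviving, frac = intact_fraction(stored, checksums)
# frac == 0.7: the node can share 70% and ask peers to resend the rest
```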

Such a solution would allow large amounts of storage: in theory, all the free
space on a host computer's hard drive could be used. Due to its nature, it
would not be possible to rely on the data remaining uncompromised without
implementing redundancy. If gnunet-str made x copies of file y, for example,
the probability of data corruption and loss could be greatly diminished.
Tahoe-LAFS and GNUnet are based on this principle (although I could be wrong,
as I'm no expert): redundancy of storage between multiple peers on the net.
If this redundancy could also be implemented locally, the total storage for
GNUnet would increase.
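The effect of the x copies can be made concrete with a back-of-the-envelope
calculation (assuming, simplistically, that each copy is corrupted
independently with the same probability):

```python
def loss_probability(p_corrupt: float, copies: int) -> float:
    """If each copy of a block is independently corrupted with probability
    p_corrupt, the block is lost only when every copy is corrupted."""
    return p_corrupt ** copies


# With a 10% chance that any single copy gets overwritten by the host OS:
# 1 copy keeps a 0.1 loss probability, while 3 copies bring it near 0.001.
single = loss_probability(0.1, 1)
triple = loss_probability(0.1, 3)
```

Real corruption is not independent (a defrag can wipe many blocks at once),
so this is only the optimistic bound, but it shows why even modest local
redundancy helps.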

Alternatively, rather than providing a greater amount of data storage, such a
feature could instead be used to boost GNUnet's efficiency: parts of a file
held on a distant node could also be made available on more nodes,
diminishing the distance between the "asking node" and the node that actually
has the file.

Do you think such a feature could be useful for GNUnet? Once again, do not
hesitate to say this idea is unfeasible for some reason; I just shared it in
the hope that it might be useful for an improved GNUnet.

-- hypothesys