[Gluster-devel] Archive integrity with hashing
From: Jeffry Molanus
Subject: [Gluster-devel] Archive integrity with hashing
Date: Mon, 23 Feb 2009 21:34:56 +0100
Hi all,
I've been using a CAS-based storage solution for archiving, and one of
the features that makes it suitable for archiving is that it has a
mechanism to determine whether the hash of an object (file) is still the
same as on first write. The system uses replication for "HA", and is a
node-based cluster implementation with a database containing the
metadata.
When a file is read from disk, the system calculates its hash and checks
whether it matches the database. If this fails, the copy of the file
that was created during replication is checked. If that one matches, a
new copy is replicated and the damaged file/disk is deleted/retired, so
I have two working copies again. (If both fail: data loss.)
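The read path described above could be sketched roughly as follows. This is a hypothetical illustration, not how any particular CAS product or Gluster implements it; the function names and the use of SHA-256 are my own assumptions:

```python
import hashlib
import os
import shutil


def sha256_of(path, chunk_size=65536):
    """Compute the SHA-256 digest of a file, streaming in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def verified_read(primary, replica, expected_hash):
    """Return the path of a copy whose hash matches the stored hash.

    If the primary copy is corrupt, re-replicate from the good replica
    and retire the damaged file, so two working copies exist again.
    Raises if both copies fail the check (data loss).
    """
    if sha256_of(primary) == expected_hash:
        return primary
    if sha256_of(replica) == expected_hash:
        os.remove(primary)              # retire the damaged copy
        shutil.copy2(replica, primary)  # restore a second working copy
        return replica
    raise IOError("both copies corrupt: data loss")
```

The key point is that the expected hash comes from the metadata database, recorded at first write, so silent bit rot on either copy is detectable on every read.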
Another reason the system is suitable for archiving is that, by means of
the hash, it can be detected whether the file changed during the initial
commit/write. This is of course not 100% safe, but it does add to the
"integrity" of the archive.
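A minimal sketch of that write-time check, again hypothetical and assuming SHA-256: hash the data as it streams in, then re-read what actually landed on disk and compare digests before recording the hash in the metadata database:

```python
import hashlib


def commit_with_hash(path, stream, chunk_size=65536):
    """Write incoming data while hashing it, then verify the stored file.

    Returns the digest to record in the metadata database, or raises
    if the on-disk contents differ from what was received.
    """
    h = hashlib.sha256()
    with open(path, "wb") as f:
        for chunk in iter(lambda: stream.read(chunk_size), b""):
            h.update(chunk)
            f.write(chunk)

    # Re-read what actually landed on disk and compare.
    v = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            v.update(chunk)
    if v.hexdigest() != h.hexdigest():
        raise IOError("file changed during initial commit")
    return h.hexdigest()
```

This catches corruption introduced between receiving the data and it reaching stable storage, which is the "changed during initial commit/write" case; it cannot, of course, detect a sender that transmitted bad data in the first place.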
Is there any support for this kind of extra checking in Gluster?
Regards, Jeffry