
Re: [rdiff-backup-users] Atomicity of backups?


From: dean gaudet
Subject: Re: [rdiff-backup-users] Atomicity of backups?
Date: Tue, 10 Dec 2002 11:13:32 -0800 (PST)

> But now that you mention it, and I am thinking in those terms, that
> journalling stuff sounds like a good idea (especially since, as I
> mentioned before, new metadata stuff is going to make this harder).
> So I guess the basic idea is I write what I'm going to do to a file,
> and then do it, and the next instance reads the file and sees when the
> crash happened?  Do I have to keep flush()ing the file then?  In
> practice does this make things much slower?  If anyone has any opinion
> on whether journalling would be appropriate here, or wants to point me
> towards some basic information on the topic, that might be useful.

for performance reasons you'd batch things up -- create a bunch of
temporary files, log all their names / relevant data, flush() and fsync()
your log, then begin moving the files into place.
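here's a rough sketch of that sequence in python (the function name,
log format, and paths are made up purely for illustration, not anything
rdiff-backup actually does):

    import os

    def commit_batch(entries, log_path):
        # entries: list of (temp_path, final_path) pairs; the temp
        # files are assumed to be fully written already
        # step 1: record what we intend to do, and force it to disk
        log = open(log_path, "w")
        for temp_path, final_path in entries:
            ino = os.stat(temp_path).st_ino
            log.write("%s\t%s\t%d\n" % (temp_path, final_path, ino))
        log.flush()                 # userspace buffers -> kernel
        os.fsync(log.fileno())      # kernel -> disk
        log.close()
        # step 2: only now start moving files into place
        for temp_path, final_path in entries:
            os.rename(temp_path, final_path)
        # step 3: once every rename is done the log can go away
        os.unlink(log_path)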

flush() is probably not sufficient (i'm guessing you're referring to
python's file flush()... libc's fflush() is definitely not sufficient) --
those only push userspace buffers into the kernel's cache... you also
need to fsync() to get the data onto the disk.

a logging filesystem will take care of all the syncing details for the
rename()s to move the files into place.

you probably want to record the inode numbers in your log so that during
recovery it's straightforward to determine if a rename has occurred or not.
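during recovery a check along these lines (logged_ino being whatever
your log recorded for the temp file) would tell you whether a given
rename made it -- ignoring inode reuse, which a real implementation
would want to think about:

    import os

    def rename_already_done(final_path, logged_ino):
        # if the destination exists and carries the inode number we
        # logged for the temp file, the rename completed before the crash
        try:
            return os.stat(final_path).st_ino == logged_ino
        except OSError:
            return False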

you actually don't even really need a log per se -- if the filesystem
is a logging filesystem, and your temporary filenames are appropriately
named so that you know where they belong, then the contents of a single
temp directory serve as your log :)
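for example, something like this -- the staging directory and the
filename encoding are invented here, the point is just that after a
crash everything left in the staging directory is exactly the work
that didn't finish:

    import os

    TMPDIR = "/backups/.staging"    # made-up staging directory

    def decode_destination(name):
        # temp files are assumed to be named after their destination,
        # with '/' escaped as '%2f', e.g. '%2fbackups%2ffoo'
        return name.replace("%2f", "/")

    def recover():
        # anything still sitting in the staging directory never made
        # it into place; its name tells us where it was headed
        for name in os.listdir(TMPDIR):
            os.rename(os.path.join(TMPDIR, name), decode_destination(name))
            # ...or unlink it instead, if rolling back is the right answer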

-dean


