From: John DiMatteo
Subject: [rdiff-backup-users] 17 days running "Metadata will be read from filesystem instead"
Date: Sun, 30 Nov 2014 14:33:19 -0700

Hello,

I have an rdiff-backup process that has been running for 17 days with
the following output:

   Warning, could not find mirror_metadata file.
   Metadata will be read from filesystem instead.

Is there any way for me to check how much progress rdiff-backup has
made, and whether I should expect this to finish in the next couple
of days?  top shows rdiff-backup fluctuating around 8-14% CPU, iotop
shows disk reads of 10-12 MB/sec (with disk writes at 0 when I
checked), and the backup is about 10 TB in size.  backup.log just
shows what was written to stdout.  This job normally completes in
1-6 hours.
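
(In case it helps with diagnosing this, below is a rough Python sketch
of how I could watch the process from the outside, assuming Linux /proc
is readable and I substitute the actual rdiff-backup PID; it only shows
cumulative I/O counters and currently open files, so it is a hint at
progress rather than a real percentage.)

    #!/usr/bin/env python
    # Rough progress check (assumes Linux and permission to read /proc/<pid>):
    # print the process's cumulative read/write byte counters and the files it
    # currently has open, which hints at where in the source tree the scan is.
    import os
    import sys

    pid = sys.argv[1]  # PID of the running rdiff-backup process

    with open("/proc/%s/io" % pid) as f:
        for line in f:
            if line.startswith(("read_bytes", "write_bytes")):
                print(line.strip())

    fd_dir = "/proc/%s/fd" % pid
    for fd in sorted(os.listdir(fd_dir)):
        try:
            print("%s -> %s" % (fd, os.readlink(os.path.join(fd_dir, fd))))
        except OSError:
            pass  # the fd may have been closed between listdir() and readlink()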

What is rdiff-backup doing, and can I estimate how long it should take
to finish?  Even if it has to read the whole 10 TB at 10 MB/sec, that
should have finished in about 12 days.  Here is the exact command I am
running:

rdiff-backup --backup-mode --exclude-other-filesystems --include
/grail/bam --include /grail/projects --include /grail/TONY --exclude
/grail /grail/ /crusader/backup/rdiff-backup/grail

Maybe it has to read both the source directory and the backup
repository, so perhaps I should estimate ~24 days to finish this run
at my very slow I/O rate?
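
For reference, here is the back-of-the-envelope arithmetic behind
those 12- and 24-day figures (assuming a steady ~10 MB/sec read rate,
which is just what iotop happened to show when I looked):

    # Back-of-the-envelope estimate, assuming a steady ~10 MB/sec read rate
    total_bytes = 10 * 1000 ** 4          # ~10 TB of data
    rate = 10 * 1000 ** 2                 # ~10 MB/sec observed in iotop
    one_pass_days = total_bytes / float(rate) / 86400
    print("one full read pass:   %.1f days" % one_pass_days)        # ~11.6 days
    print("two full read passes: %.1f days" % (2 * one_pass_days))  # ~23.1 days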

For context, a couple of weeks ago rdiff-backup failed when the
filesystem ran out of space, and when it ran again it failed while
regressing with "Exception 'Bad index order...'".  So I deleted the
earlier of the two .data files and moved the later .snapshot.gz file
out of the way, based on the suggestion at
http://www.nongnu.org/rdiff-backup/FAQ.html#regress_failure .  Could
my unusual setup of excluding everything and then including just a
couple of subdirectories be contributing to the slow run time?  (I
made this change after I ran out of space.)  I have some more notes
here: https://github.com/BradnerLab/pipeline/issues/41
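
(If it helps, here is a small Python sketch of how I can list what is
left in the repository's rdiff-backup-data directory after the cleanup
above; the path is from my setup, and I'm assuming the standard
current_mirror.* and mirror_metadata.* file names.)

    # List what remains in the repository's rdiff-backup-data directory
    # (path is from my setup; the patterns are the usual rdiff-backup names)
    import glob
    import os

    data_dir = "/crusader/backup/rdiff-backup/grail/rdiff-backup-data"
    for pattern in ("current_mirror.*", "mirror_metadata.*"):
        for path in sorted(glob.glob(os.path.join(data_dir, pattern))):
            print(path)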

I am a part-time volunteer at an open-source cancer research lab, and
we have some important genetic and research data to back up.  Any help
would be greatly appreciated!

Thanks,
John


