From: Dominic Raferd
Subject: Re: [rdiff-backup-users] Forgetting to run as root: how to recover quickly
Date: Tue, 31 Jul 2018 17:52:33 +0100

On Tue, 31 Jul 2018 at 16:53, Bill Harris <address@hidden> wrote:

> I've used rdiff-backup for years, and I'm mostly very happy with it.  There
> is one problem that crops up occasionally, and I haven't found a way around
> it yet.
>
> AFAICT, rdiff-backup likes running as root.  On rare occasion, I forget and
> start it as myself.  rdiff-backup complains, and, as I recall, offers to
> sudo itself (I'm running Debian Stable, which is not normally set up as a
> sudo system).
>
> If I enter a password (and perhaps even if I don't) and then hit Ctrl-c
> because I realize I messed up, I get the "it appears the last backup
> failed" message, and then I'm in for a long (about a day), full backup
> instead of the usual 15-45 minute incremental backup.
>
> Is there a way to recover in such a situation so that I don't have to wait
> for such a long backup to complete?  I presume rdiff-backup won't react
> well to my changing files during the backup.
>
> Is there a secure way to keep this from happening?  I could learn how
> setuid works, but I think that's an insecure approach.
>

Unless you are backing up system files, rdiff-backup doesn't have to run as
root, but in my experience it is wise always to run it as the same user for
a given repository. For instance, I think that if you run a backup as root
once, any subsequent run as another user to or from the same repository may
hit problems, because that user may be denied access to the rdiff-backup
control files (in the rdiff-backup-data subdirectory).
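
One cheap safeguard is a small wrapper that checks who owns the
repository's control files and refuses to run as anyone else. Something
like this (an untested sketch; /backup/repo and the source path are
placeholders for your own setup):

    #!/bin/sh
    # Only run rdiff-backup against this repository as the user who
    # owns its control files.
    REPO=/backup/repo                               # placeholder destination
    OWNER=$(stat -c %U "$REPO/rdiff-backup-data")   # owner of control files
    if [ "$(id -un)" != "$OWNER" ]; then
        echo "repository belongs to $OWNER, not $(id -un); aborting" >&2
        exit 1
    fi
    rdiff-backup /home "$REPO"                      # placeholder source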

The delay must be because, after you break a backup run, rdiff-backup has
to regress the backup to a consistent state. For it to take a day to do so
suggests you have a very large backup dataset and/or a very slow computer.
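
If memory serves, you can also trigger the regression yourself, rather
than letting the next backup run do it, e.g. (the path is a placeholder):

    rdiff-backup --check-destination-dir /backup/repo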

If possible, could you break the dataset down into smaller components?
Then, if you make the same mistake again, the regression will be much
quicker. I realise this means starting a load of new backup repositories
(which TBH is why I haven't done it in one case where I have an unwieldy
repository).
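
For what it's worth, the split might look something like this (again just
a sketch, not a tested setup; the component names and destination paths
are placeholders):

    #!/bin/sh
    # One repository per component: a broken run then only regresses
    # that component rather than the whole dataset.
    for d in home etc srv; do
        rdiff-backup "/$d" "/backup/$d"
    done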

