Hi,
I'm using duplicity to back up a Jenkins instance. The dataset basically only grows, with almost no changes to already backed-up data: new builds are added, old builds are never modified, and only the config files can change.
This data represents a big chunk of what we back up, and each full backup
doubles the storage used by the Jenkins backup, possibly putting us over
quota if we keep multiple full backups.
What is the best backup strategy for such a dataset? Is incremental-only a viable option? What does incremental-only cost in terms of data safety and restore/verification speed?
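For reference, the incremental-only schedule I have in mind would look roughly like this (the source path and target URL are placeholders, not our actual setup):

```shell
# Sketch of an incremental-only duplicity schedule.
# SRC and TARGET below are placeholders.
SRC=/var/lib/jenkins
TARGET=sftp://backup-host//backups/jenkins

# One initial full backup...
duplicity full "$SRC" "$TARGET"

# ...then only incrementals on every subsequent run:
duplicity incremental "$SRC" "$TARGET"

# Periodic integrity check. My understanding is that verify/restore cost
# grows with chain length, since duplicity has to replay every incremental
# volume since the last full backup:
duplicity verify "$TARGET" "$SRC"
```

My worry is the ever-growing chain: one corrupted incremental volume would break everything after it, and restores get slower the longer the chain gets.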