Re: [Duplicity-talk] Backing up desktop virtual machines?
From: edgar . soldin
Subject: Re: [Duplicity-talk] Backing up desktop virtual machines?
Date: Tue, 22 Jul 2014 11:03:24 +0200
User-agent: Mozilla/5.0 (Windows NT 5.1; rv:24.0) Gecko/20100101 Thunderbird/24.6.0
On 22.07.2014 09:18, Rubin Abdi wrote:
> address@hidden wrote on 2014-07-21 01:51:
>> how many GB's are we talking about. what's your OS?
>
> My generic Windows 7 testing vm is around 25GB, multiply that by a few
> and it adds up.
>
> The host machine is Debian Sid amd64.
>
>> incrementals do exactly that. make sure the virtual machine is
>> paused/stopped to prevent file system (fs) corruption.
>
> My understanding of incremental backups (which is what I believe I'm
> doing) is that they don't upload only the deltas of what's been updated
> within a file. If a change is detected, the whole file is uploaded.
> So if I boot into a Windows .vdi and immediately shut down the vm,
> Duplicity's incremental backup will reupload the whole 25GB
> .vdi file, and not just the smaller inner bits of the file that have changed.
>
> Please correct me if my notion on how this works is wrong.
no, duplicity uses librsync to back up only the changed parts of a file, via a
rolling checksum algorithm.
check the overview/stats output after an incremental backup. it shows how much
data was actually backed up.
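a minimal sketch of the rsync-style idea (not duplicity's or librsync's actual implementation; block size, hash choice, and function names here are illustrative): checksum the old file block by block, then scan the new version and keep only blocks whose checksum isn't already known. this is why rebooting a vm costs roughly "size of the changed blocks", not 25GB.

```python
import hashlib

BLOCK = 4  # tiny block size for the demo; real tools use far larger blocks


def block_sums(data):
    """Checksum every fixed-size block of the old file."""
    return {hashlib.md5(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)}


def changed_blocks(old, new):
    """Return the blocks of `new` not already present in `old`.

    A true rolling checksum (as in librsync) slides byte-by-byte, so it
    also matches blocks that merely shifted position; this fixed-grid
    version only illustrates why an unchanged region costs nothing."""
    known = block_sums(old)
    return [new[i:i + BLOCK]
            for i in range(0, len(new), BLOCK)
            if hashlib.md5(new[i:i + BLOCK]).hexdigest() not in known]


old = b"AAAABBBBCCCCDDDD"
new = b"AAAABBBBXXXXDDDD"        # one 4-byte block modified
print(changed_blocks(old, new))  # -> [b'XXXX'], the only block to store
```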
>> possible, but different backup sets sounds a more natural approach.
>>
>> however, if done via in/exclude duplicity would mask the source fs
>> accordingly and the files would appear as "freshly" created in fulls
>> and deleted in following incrementals (because missing in the source
>> fs).
>
> Ah, ok, I hadn't thought about that at all. That is annoying.
>
>> try to find out the bottleneck in your vm backup first. is it the
>> upload or the backup itself (test backup to local file:// target).
>> knowing that you might be able to speed up things or in the worst
>> case split backups.
>
> I believe the issue simply is virtual machines are large, and I don't
> need to back them up as frequently as everything else on this machine.
>
there are known issues with big backup sets, e.g. maximum file sizes on the
remote backend and such. so far i haven't heard of any reproducible performance
issues.
..ede/duply.net