qemu-discuss

Re: [Qemu-discuss] Incremental drive-backup with dirty bitmaps


From: Volker Cordes
Subject: Re: [Qemu-discuss] Incremental drive-backup with dirty bitmaps
Date: Mon, 28 Jan 2019 13:44:53 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.4.0

Hi,

I just wrote a little script doing this in an rsnapshot-like way using
snapshots. You can take a look at

https://github.com/vcordes79/qemu-backup.

Feedback and improvements are welcome.

Volker

On 22.01.19 at 20:29, Bharadwaj Rayala wrote:
> Hi,
> 
> TL;DR: I am trying to figure out a workflow for doing incremental
> drive-backups using dirty bitmaps. It feels like QEMU lacks some essential
> features to achieve it.
> 
> I am trying to build a backup workflow (program) using drive-backup along
> with dirty bitmaps to take backups of KVM VMs. Either a pull or a push model
> works for me. Since the drive-backup push model is already implemented, I am
> going forward with it. I am not able to figure out a few details and
> couldn't find any documentation around them. Any help would be appreciated.
> 
> Context: I would like to take recoverable, consistent, incremental
> backups of KVM VMs whose disks are backed by either qcow2 or raw images.
> Let's say there is a VM, vm1, with drive1 backed by the image chain (A <-- B).
> These are the rough steps I would like to take.
> 
> Method 1:
> Backup:
> 1. Perform a full backup using `drive-backup(drive1, sync=full, dest =
> /nfs/vm1/drive1)`. Use a transaction to do `block-dirty-bitmap-add(drive1,
> bitmap1)`. Store the VM config separately.
> 2. Perform an incremental backup using `drive-backup(drive1,
> sync=incremental, mode=existing, bitmap=bitmap1, dest=/nfs/vm1/drive1)`.
> Store the VM config separately.
> 3. Rinse and repeat. (The QMP for steps 1 and 2 is sketched below.)
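> Roughly, the QMP I have in mind for steps 1 and 2 (device and file names
> illustrative):
>
>   # step 1: full backup plus bitmap creation, done atomically
>   { "execute": "transaction",
>     "arguments": { "actions": [
>       { "type": "drive-backup",
>         "data": { "device": "drive1", "sync": "full",
>                   "format": "qcow2", "target": "/nfs/vm1/drive1" } },
>       { "type": "block-dirty-bitmap-add",
>         "data": { "node": "drive1", "name": "bitmap1" } } ] } }
>
>   # step 2: incremental into the existing target
>   { "execute": "drive-backup",
>     "arguments": { "device": "drive1", "sync": "incremental",
>                    "bitmap": "bitmap1", "mode": "existing",
>                    "format": "qcow2", "target": "/nfs/vm1/drive1" } }
>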
> Recovery (just the latest backup; incremental history not required):
>     Copy the full qcow2 from NFS to host storage. Spawn a new VM with the
> same VM config.
> Temporary quick recovery:
>     Create a new qcow2 layer on top of the existing /nfs/vm1/drive1 on the
> NFS storage itself. Spawn a new VM with its disk on the NFS storage itself;
> a sketch of the overlay follows.
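> A minimal sketch of that overlay (paths illustrative; -F needs a newer
> qemu-img, older ones take -o backing_fmt=qcow2 instead):
>
>   qemu-img create -f qcow2 \
>       -b /nfs/vm1/drive1 -F qcow2 \
>       /nfs/vm1/drive1-overlay.qcow2
>
> Writes then land in the overlay and the backup image stays untouched.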
> Issues I face:
> 1. Does drive-backup stall the guest for the whole time the block job is in
> progress? This is a strict no for me. I did not find any documentation
> regarding it, only a PowerPoint presentation (from kaskyapc) mentioning it.
> (Assuming yes!)
> 2. Is the backup consistent? Are the drive file systems quiesced on backup?
> (Assuming no!)
> 
> To achieve both of the above, one hack I could think of was to take a
> snapshot and read from the snapshot.
> 
> Method 2:
> 1. Perform a full backup using `drive-backup(drive1, sync=full, dest =
> /nfs/vm1/drive1)`. Use a transaction to do `block-dirty-bitmap-add(drive1,
> bitmap1)`. Store the VM config separately.
> 2. Perform the incremental backup by:
>      a. Add bitmap2 to drive1: `block-dirty-bitmap-add(drive1, bitmap2)`.
>      b. Take a VM snapshot of drive1 (exclude memory, quiesce). The drive1
> image chain is now A <-- B <-- C.
>      c. Take the incremental using bitmap1, but reading data from node B:
> `drive-backup(*#nodeB*, sync=incremental, mode=existing, bitmap=bitmap1,
> dest=/nfs/vm1/drive1)`
>      d. Delete bitmap1: `block-dirty-bitmap-delete(drive1, bitmap1)`.
>      e. Delete the VM snapshot on drive1. The drive1 image chain is now A <-- B.
>      f. bitmap2 now tracks the changes from incremental 1 to incremental 2.
> (The QMP for this attempt is sketched below.)
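> In QMP terms, roughly (node and file names illustrative; step c is the
> call that gets rejected, see below):
>
>   # (a)+(b): add bitmap2, then snapshot drive1 (quiesce via the guest
>   # agent's guest-fsfreeze-freeze/thaw around the snapshot)
>   { "execute": "block-dirty-bitmap-add",
>     "arguments": { "node": "drive1", "name": "bitmap2" } }
>   { "execute": "blockdev-snapshot-sync",
>     "arguments": { "device": "drive1", "format": "qcow2",
>                    "snapshot-file": "/images/vm1/drive1-C.qcow2" } }
>
>   # (c): read from node B while using bitmap1, which lives on drive1
>   { "execute": "drive-backup",
>     "arguments": { "device": "#nodeB", "sync": "incremental",
>                    "bitmap": "bitmap1", "mode": "existing",
>                    "format": "qcow2", "target": "/nfs/vm1/drive1" } }
>
>   # (d): drop bitmap1; (e) is a block-commit of C into B, elided here
>   { "execute": "block-dirty-bitmap-delete",
>     "arguments": { "node": "drive1", "name": "bitmap1" } }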
> 
> The drawback with this method would be (had it worked) that incremental
> backups would contain a superset of the blocks actually changed between this
> snapshot and the last one (incremental x would also contain blocks that
> changed while the incremental x-1 backup was in progress). But there are no
> correctness issues.
> 
> 
> *I cannot do this, because drive-backup does not allow the bitmap and the
> node the bitmap is attached to to be different. :( *
> Some other issues I was facing that I worked around:
> 1. Let's say I have to back up a VM with 2 disks (both at a fixed point in
> time; either both fail or both pass). To atomically do a bitmap-add and a
> drive-backup(sync=full) I can use transactions. To achieve a backup at a
> fixed point in time, I can use a transaction with multiple drive-backups. To
> either fail or succeed the whole backup (when multiple drives are present),
> I can use completion-mode = grouped. But then I can't combine them, as it's
> not supported, i.e. do a
>     Transaction{drive-backup(drive1), dirty-bitmap-add(drive1,
> bitmap1), drive-backup(drive2), dirty-bitmap-add(drive2, bitmap1),
> completion-mode=grouped}.
>  Workaround: create the bitmaps first, then take the full backup. Effect:
> incrementals would be a small superset of the actually changed blocks.
> The rejected transaction is spelled out below.
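> Spelled out (names illustrative), the combined transaction I would have
> liked:
>
>   { "execute": "transaction",
>     "arguments": {
>       "actions": [
>         { "type": "drive-backup",
>           "data": { "device": "drive1", "sync": "full",
>                     "format": "qcow2", "target": "/nfs/vm1/drive1" } },
>         { "type": "block-dirty-bitmap-add",
>           "data": { "node": "drive1", "name": "bitmap1" } },
>         { "type": "drive-backup",
>           "data": { "device": "drive2", "sync": "full",
>                     "format": "qcow2", "target": "/nfs/vm1/drive2" } },
>         { "type": "block-dirty-bitmap-add",
>           "data": { "node": "drive2", "name": "bitmap1" } } ],
>       "properties": { "completion-mode": "grouped" } } }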
> 2. Why do I need to dismiss old jobs to start a new job on a node? I want
> to retain the block-job end state for a day before I clear it, so I set
> auto-dismiss to false. This does not allow new jobs to run unless the old
> job is dismissed, even if state=concluded.
>  Workaround: none; store the end-job status somewhere else, as sketched
> below.
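> What that bookkeeping looks like (job id illustrative):
>
>   # record the final status ourselves, then dismiss to unblock the node
>   { "execute": "query-block-jobs" }
>   # -> e.g. [ { "device": "drive1", "status": "concluded", ... } ]
>   { "execute": "block-job-dismiss", "arguments": { "id": "drive1" } }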
> 3. Is there a way, pre 2.12, to achieve auto-finalize = false in a
> transaction? Can I somehow add a dummy block job that will only finish when
> I want to finalize the actual 2 disk block jobs? My backup workflow needs
> to run on environments pre 2.12.
>  Workaround: could not achieve this. So if an incremental fails after the
> block jobs succeed but before I can ensure success (I have to do some
> metadata operations on my side), I retry with sync=full mode.
> 
> 
> *So what is the recommended way of taking backups with incremental
> bitmaps?*
> Thank you for taking the time to read through this.
> 
> Best,
> Bharadwaj.
> 


-- 
Tel: +49 (0) 4489 408753
Fax: +49 (0) 4489 405735
mailto: address@hidden

freeline Computer GmbH & Co.KG
Wiekesch 1
26689 Apen
www.freeline-edv.de

Tax no. 69/201/57709
District Court Oldenburg HRA 203345
Personally liable partner: freeline Holding GmbH, District Court
Oldenburg HRB 206967
Managing directors: Volker and Tobias Cordes


