From: Eric Blake
Subject: Re: qcow2: Zero-initialization of external data files
Date: Thu, 9 Apr 2020 08:47:38 -0500
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.6.0
On 4/9/20 8:42 AM, Eric Blake wrote:
>>> I'd argue that requiring the user to pre-zero the raw data file is
>>> undesirable; and that we should instead fix our code to not report the
>>> image as reading all zeroes when creating with data_file_raw=on.
>>
>> OK. I think that could be achieved by just enforcing @preallocation to
>> be at least “metadata” whenever @data-file-raw is set. Would that make
>> sense?
>
> Is a preallocation of metadata sufficient to report things correctly? If
> so, it seems like a reasonable compromise to me. I was more envisioning
> a fix elsewhere: if we are reporting block status of what looks like an
> unallocated cluster, but data-file-raw is set, we change our answer to
> instead report it as allocated with unknown contents. But with
> preallocation, you either force the qcow2 file to list no cluster as
> unallocated (which matches the fact that the raw image really is fully
> allocated) while not touching the raw image, or you can go one step
> further and request full preallocation to wipe the raw image to 0 in the
> process.
What happens when an operation attempts to unmap things? Do we reject all unmap operations when data-file-raw is set (thus leaving a cluster marked as allocated at all times, if we can first guarantee that preallocation set things up that way)?
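For reference, the preallocation tradeoff under discussion can be exercised with qemu-img (external data files require QEMU 4.0 or later); the file names and the 100M size here are placeholders, and whether metadata preallocation is enough to fix the reported block status is exactly the open question above:

```shell
# Create the external raw data file; its contents start out untracked
# by the qcow2 metadata.
truncate -s 100M data.raw

# Wrap it in a qcow2 image. preallocation=metadata marks every cluster
# as allocated in the qcow2 L2 tables without touching the raw file;
# preallocation=full would additionally wipe the raw file to zero.
qemu-img create -f qcow2 \
    -o data_file=data.raw,data_file_raw=on,preallocation=metadata \
    image.qcow2 100M

# Inspect the reported block status: with preallocated metadata, no
# cluster should be reported as unallocated (i.e. reads-as-zero),
# matching the fact that the raw file is fully allocated.
qemu-img map --output=json image.qcow2
```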
-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3226
Virtualization:  qemu.org | libvirt.org