

From: John Snow
Subject: Re: [Qemu-devel] [Qemu-block] [PATCH v3 0/4] virtio/block: handle zoned backing devices
Date: Mon, 29 Jul 2019 17:23:06 -0400
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.8.0


On 7/26/19 7:42 PM, Dmitry Fomichev wrote:
> John, please see inline...
> 
> Regards,
> Dmitry
> 
> On Thu, 2019-07-25 at 13:58 -0400, John Snow wrote:
>>
>> On 7/23/19 6:19 PM, Dmitry Fomichev wrote:
>>> Currently, attaching zoned block devices (i.e., storage devices
>>> compliant to ZAC/ZBC standards) using several virtio methods doesn't
>>> work properly as zoned devices appear as regular block devices at the
>>> guest. This may cause unexpected i/o errors and, potentially, some
>>> data corruption.
>>>
>>
>> Hi, I'm quite uninitiated here, what's a zoned block device? What are
>> the ZAC/ZBC standards?
> Zoned block devices (ZBDs) are HDDs that use SMR (shingled magnetic
> recording). This type of recording, if applied to the entire drive, would
> only allow the drive to be written sequentially. To make such devices more
> practical, the entire LBA range of the drive is divided into zones. All
> writes within a particular zone must be sequential, but writing different
> zones can be done concurrently in a random manner. This sounds like a lot of
> hassle, but in return SMR can achieve up to 20% better areal data density
> compared to the most common PMR recording.
> 
> The same zoned model is used in the up-and-coming NVMe ZNS standard,
> even though the reason for adopting it in ZNS is different from SMR
> HDDs - easier flash erase block management.
> 
> ZBC is an INCITS T10 (SCSI) standard and ZAC is the corresponding T13 (ATA)
> standard.
> 
> The lack of limelight for these standards is explained by the fact that
> these devices are mostly used by cloud infrastructure providers for "cold"
> data storage, a purely enterprise application. Currently, both WDC and
> Seagate produce SMR drives in significant quantities and Toshiba has
> announced support for ZBDs in their future products.
> 
>>>
>> I've found this:
>> https://www.snia.org/sites/default/files/SDC/2016/presentations/smr/DamienLeMoal_ZBC-ZAC_Linux.pdf
>>
> AFAIK, the most useful collection of public resources about zoned block
> devices is this website -
> http://zonedstorage.io
> The site is maintained by our group at WDC (shameless plug here :) ).
> BTW, here is the page containing the links to T10/T13 standards
> (the access might be restricted for non-members of T10/T13 committees) -
> http://zonedstorage.io/introduction/smr/#governing-standards
> 
>> It looks like ZAC/ZBC are new commands -- what happens if we just don't
>> use them, exactly?
> The standards define three models of zoned block devices: drive-managed,
> host-aware and host-managed.
> 
> Drive-managed zoned devices behave just like regular SCSI/ATA devices and
> don't require any additional support. There is no point for manufacturers
> to market such devices as zoned. Host-managed and host-aware devices can
> read data exactly the same way as common SCSI/ATA drives, but there are
> I/O pattern limitations in the write path that the host must adhere to.
> 
> Host-aware drives will work without I/O errors under purely random write
> workload, but their performance might be significantly degraded
> compared to running them under zone-sequential workload. With
> host-managed drives, any non-sequential writes within zones will lead
> to an I/O error, most likely, "unaligned write".
> 
> It is important to mention that almost all zoned devices that are
> currently on the market are host-managed.
> 

OK, understood.

> ZAC/ZBC standards do add some new commands to the common SCSI/ACS
> command sets, but, at least for the host-managed model, simply never
> issuing these commands is not sufficient to utilize these devices.
> 
>>
>>> To be more precise, attaching a zoned device via virtio-pci-blk,
>>> virtio-scsi-pci/scsi-disk or virtio-scsi-pci/scsi-hd demonstrates the
>>> above behavior. The virtio-scsi-pci/scsi-block method works with a
>>> recent patch. The virtio-scsi-pci/scsi-generic method also appears to
>>> handle zoned devices without problems.
>>>
>>
>> What exactly fails, out of curiosity?
> The current Linux kernel is able to recognize zoned block devices and
> provide some means for the user to see that a particular device is zoned.
> For example, lsscsi will show "zbc" instead of "disk" for zoned devices.
> Another useful value is the "zoned" sysfs attribute that carries the
> zoned model of the drive. Without this patch, the attachment methods
> mentioned above present host-managed drives as regular drives at the
> guest system. There is no way for the user to figure out that they are
> dealing with a ZBD other than starting I/O and getting an "unaligned
> write" error.
> 

Mmhmm...

> The folks who designed ZAC/ZBC were very careful to prevent this, and
> it doesn't happen on bare metal. Host-managed drives have a distinctive
> SCSI device type, 0x14, and old kernels without zoned device support
> are simply unable to classify these drives during the device scan.
> The kernels with ZBD support are able to recognize
> a host-managed drive by its SCSI type and read some additional
> protocol-specific info from the drive that is necessary for the kernel
> to support it (how? see http://zonedstorage.io/linux/sched/).
> In QEMU, this SCSI device type mechanism currently only works for
> attachment methods that directly pass SCSI commands to the host OS
> during the initial device scan, i.e. scsi-block and scsi-generic.
> All other methods should be disabled until a meaningful way of handling
> ZBDs is developed for each of them (or disabled permanently for "legacy"
> attachment methods).
> 
>>
>> Naively, it seems strange to me that you'd have something that presents
>> itself as a block device but can't be used like one. Usually I expect to
>> see new features / types of devices used inefficiently when we aren't
>> aware of a special attribute/property they have, but not create data
>> corruption.
> Data corruption can theoretically happen, for example, if a regular hard
> drive is accidentally swapped for a zoned one in a complex environment
> under I/O. Any environment where this can potentially be a problem must
> have udev rules defined to prevent this situation. Without this type of
> patch, these udev rules will not work.
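For reference, such a rule might look something like this (a sketch only: it assumes the queue/zoned sysfs attribute is present, and the ID_ZONED property name is illustrative, not a standard udev property):

```
# Tag host-managed drives so higher-level automation can refuse to
# treat them as regular disks.
ACTION=="add|change", SUBSYSTEM=="block", KERNEL=="sd*", \
  ATTR{queue/zoned}=="host-managed", ENV{ID_ZONED}="host-managed"
```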
>>
>> The only reason I ask is because it seems odd that you need to add a
>> special flag to e.g. legacy IDE devices that explicitly says they don't
>> support zoned block devices -- instead of adding flags to virtio devices
>> that say they explicitly do support that feature set.
> The initial version of the patch set had some bits of code added in the
> drivers that are not capable of supporting zoned devices to check if the
> device is zoned and abort if it is. Kevin and Paolo suggested the current
> approach and I think it's a lot cleaner than the initial attempt since it
> minimizes the necessary changes across the whole set of block drivers. The
> flag is a true/false setting that is set individually by each driver. It
> is in line with two existing flags in blkconf_apply_backend_options(),
> "readonly" and "resizable". There is no "default" setting for any of these.

Thank you for the detailed explanation! This is good information to have
on the ML archive.

I'm still surprised that we need to prohibit IDE specifically from
interacting with drives of this type, as I would have hoped that the
kernel driver beneath our feet would have managed the access for us, but
I guess that's not true?

(If it isn't, I worry about what happens if we have a format layer
between us and the bare metal: if we write qcow2 to the block device
instead of raw, then even if we advertise to the emulated guest that
we're using a zoned device, we might remap things in/outside of zones,
and that coordination would be lost, wouldn't it?)

Not that I really desire people to use IDE emulators with fancy new
disks, it just seemed like an unusual patch.

If Kevin and Paolo are on board with the design, it's not my place to
try to begin managing this, it just caught my eye because it touched
something as old as IDE.

Thanks,
--js


