From: Kevin Wolf
Subject: Re: [PATCH v0 2/2] block: allow to set 'drive' property on a realized block device
Date: Mon, 16 Dec 2019 16:38:23 +0100
User-agent: Mutt/1.12.1 (2019-06-15)

On 16.12.2019 at 15:51, Denis Plotnikov wrote:
> On 13.12.2019 13:32, Kevin Wolf wrote:
> > On 18.11.2019 at 11:50, Denis Plotnikov wrote:
> >> Another problem here is that the "size" of the device dev may not
> >> match after setting a drive, so we should update it after setting
> >> the drive. It was found that this can be done by calling
> >> BlockDevOps.bdrv_parent_cb_resize.
> >>
> >> But I have some concerns about doing it this way. In the case of a
> >> virtio-scsi disk we have the following call stack:
> >>
> >>     bdrv_parent_cb_resize() calls ->
> >>         scsi_device_report_change(dev, SENSE_CODE(CAPACITY_CHANGED)) ->
> >>             virtio_scsi_change() ->
> >>                 virtio_scsi_push_event(s, dev, VIRTIO_SCSI_T_PARAM_CHANGE,
> >>                                        sense.asc | (sense.ascq << 8));
> > I think the safest option for now (and which should solve the case you
> > want to address) is checking whether old and new size match and
> > returning an error otherwise.
> >
> >> virtio_scsi_change pushes the event to the guest to make the guest
> >> ask to refresh the size. If I'm not mistaken, we can get a race
> >> condition here when another request is processed with the unchanged
> >> size and only then the size-changing request is processed.
> > I think this is actually a problem even without resizing: We need to
> > quiesce the device between removing the old root and inserting the new
> > one. The way to achieve this is probably by splitting blk_drain() into
> > a blk_drain_begin()/end() and then draining the BlockBackend here while
> > we're working on it.
> >
> > Kevin
> Why don't we use bdrv_drained_begin/end directly? This is what
> blk_drain does.
> If we want to split blk_drain, we must keep track of whether blk's bdrv
> doesn't change; otherwise we can end up calling drained_begin on one
> bdrv and drained_end on another if we do remove/insert in between.

Hmm, true, we would have to keep track of draining at the BlockBackend
level and consider it in blk_remove_bs() and blk_insert_bs(). Maybe
that's not worth it.
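
Roughly, what I mean by tracking it at the BlockBackend level would look
something like this (untested sketch; blk->quiesce_counter is a
hypothetical new field, and blk_drain_begin()/end() don't exist yet):

    /* Sketch only: keep a drained-section counter on the BlockBackend so
     * that blk_remove_bs()/blk_insert_bs() can compensate for it on the
     * node they detach/attach. */
    void blk_drain_begin(BlockBackend *blk)
    {
        blk->quiesce_counter++;
        if (blk_bs(blk)) {
            bdrv_drained_begin(blk_bs(blk));
        }
    }

    void blk_drain_end(BlockBackend *blk)
    {
        assert(blk->quiesce_counter > 0);
        blk->quiesce_counter--;
        if (blk_bs(blk)) {
            bdrv_drained_end(blk_bs(blk));
        }
    }

blk_remove_bs() would then call bdrv_drained_end() on the old node and
blk_insert_bs() would call bdrv_drained_begin() on the new one while
blk->quiesce_counter > 0, which is exactly the bookkeeping that may not
be worth it.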

If we use bdrv_drained_begin/end directly, I think we need to drain both
the old and the new root node during the process.
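
In code, the switch would then be bracketed like this (sketch only;
assumes both old_bs and new_bs are non-NULL, error handling omitted):

    /* Quiesce both roots for the whole remove/insert sequence */
    BlockDriverState *old_bs = blk_bs(blk);

    bdrv_drained_begin(old_bs);
    bdrv_drained_begin(new_bs);

    blk_remove_bs(blk);
    blk_insert_bs(blk, new_bs, errp);

    bdrv_drained_end(new_bs);
    bdrv_drained_end(old_bs);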

> Another thing is: should we really care about this if the VM is
> stopped and the sizes match?

How do we know that the VM is stopped? And why would we require this?
Your patch doesn't implement this, or at least doesn't check it, and it
seems a bit impractical, for example when all you want is to insert a
filter node.
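
For comparison, the size check I suggested above would be cheap; in the
property setter it could look roughly like this (sketch; the error_setg()
wording and exact placement are just illustration):

    /* Refuse a new root whose size differs from the old one, so no
     * capacity change notification to the guest is needed */
    int64_t old_len = bdrv_getlength(old_bs);
    int64_t new_len = bdrv_getlength(new_bs);

    if (old_len < 0 || new_len < 0 || old_len != new_len) {
        error_setg(errp, "New root node must have the same size as the "
                   "old one");
        return;
    }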

Kevin



