
From: David Hildenbrand
Subject: Re: [PATCH v5 07/18] s390x: protvirt: Inhibit balloon when switching to protected mode
Date: Fri, 27 Mar 2020 11:50:48 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.6.0

>> So, AFAIU, *any* virtio device (hypervisor side) has to present this
>> flag when PV is enabled. 
> 
> Yes, and having the balloon say bye-bye when running in PV mode is only a
> secondary objective. I've compiled some references:

Thanks!

> 
> "To summarize, the necessary conditions for a hack along these lines
> (using DMA API without VIRTIO_F_ACCESS_PLATFORM) are that we detect that:
> 
>   - secure guest mode is enabled - so we know that since we don't share
>     most memory regular virtio code won't
>     work, even though the buggy hypervisor didn't set 
> VIRTIO_F_ACCESS_PLATFORM" 
> (Michael Tsirkin, https://lkml.org/lkml/2020/2/20/1021)
> I.e.: PV but !VIRTIO_F_ACCESS_PLATFORM \implies buggy hypervisor
> 
> 
> "If VIRTIO_F_ACCESS_PLATFORM is set then things just work.  If
> VIRTIO_F_ACCESS_PLATFORM is clear device is supposed to have access to
> all of memory.  You can argue in various ways but it's easier to just
> declare a behaviour that violates this a bug."
> (Michael Tsirkin, https://lkml.org/lkml/2020/2/21/1626)
> This one is about all of the guest's memory, and not just the buffers
> transferred via the virtqueue, which surprised me a bit at first. But
> balloon actually needs this.
> 
> "A device SHOULD offer VIRTIO_F_ACCESS_PLATFORM if its access to memory
> is through bus addresses distinct from and translated by the platform to
> physical addresses used by the driver, and/or if it can only access
> certain memory addresses with said access specified and/or granted by
> the platform. A device MAY fail to operate further if
> VIRTIO_F_ACCESS_PLATFORM is not accepted. "
> (https://docs.oasis-open.org/virtio/virtio/v1.1/cs01/virtio-v1.1-cs01.html#x1-4120002)
> 

> 
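Side note: whether a given device actually demands this is, on the driver
side, just a feature-bit check - a minimal sketch (vdev being the driver's
struct virtio_device, and assuming headers that already use the bit's new
name rather than VIRTIO_F_IOMMU_PLATFORM):

  #include <linux/virtio_config.h>   /* virtio_has_feature() */

  /* true iff the device demands that buffer addresses be translated
     and/or granted by the platform before it may access them */
  bool restricted = virtio_has_feature(vdev, VIRTIO_F_ACCESS_PLATFORM);
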
>> In that regard, your patch makes perfect sense
>> (although I am not sure it's a good idea to overwrite these feature
>> bits
>> - maybe they should be activated on the cmdline permanently instead
>> when PV is to be used? (or enable )).
> 
> I didn't understand the last part. I believe preserving the user-specified
> value when not running in PV mode is better than the hard overwrite I did
> here. I wanted a discussion starter.
> 
> I think the other option (with respect to letting QEMU manage this for the
> user, i.e. what I try to do here) is to fence the conversion if virtio
> devices that do not offer VIRTIO_F_ACCESS_PLATFORM are attached, and to
> disallow hotplug of such devices at some point during the conversion.
> 
> I believe that alternative is even uglier.
> 
> IMHO we don't want the end user to fiddle with iommu_platform, because
> the only 'benefit' they get from that is the possibility of making a
> mistake. For example, I got an internal bug report saying virtio is broken
> with PV, which boiled down to an overlooked auto-generated NIC, which of
> course did not have iommu_platform (VIRTIO_F_ACCESS_PLATFORM) set.
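
For reference, doing it by hand means remembering something like this on
every single virtio device (illustrative command line only, the
netdev/device names are made up):

  qemu-system-s390x ... \
      -netdev tap,id=net0 \
      -device virtio-net-ccw,netdev=net0,iommu_platform=on

Forgetting iommu_platform=on on just one such device is enough to break
the guest under PV.
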
> 
>>
>>>
>>> The actual problem is that the pages denoted by the buffer
>>> transmitted via the virtqueue are normally not shared pages. I.e.
>>> the hypervisor cannot reuse them (which is the whole point of balloon
>>> inflate). To make this work, the guest would need to share the pages
>>> before saying 'host, these are in my balloon, so you can use them'.
>>> This is a piece of logic we
>>
>> What exactly would have to be done in the hypervisor to support it?
> 
> AFAIK nothing. The guest needs to share the pages, and everything works.
> Janosch, can you help me with this one? 
> 

See below, making this work on the hypervisor side would be much cleaner
IMHO, but most probably not possible due to guest integrity.

FWIW, "Free page reporting" will (never) work with PV, where there is
basically no manual "deflation" step anymore.

>>
>> Assume we have to trigger sharing/unsharing - this sounds like a very
>> architecture specific thing?
> 
> It is, but any guest having sovereignty over its memory may need
> something similar.
> 
>> Or is this e.g., doing a map/unmap
>> operation like mapping/unmapping the SG?
> 
> No, this is something different. We need stronger guarantees than the
> streaming portion of the DMA API provides. And what we actually want
> is not DMA but something very different.

Right, that's what I was expecting ...

> 
>>
>> Right now it sounds to me "we have to do $ARCHSPECIFIC when
>> inflating/deflating in the guest", which feels wrong.
>>
> 
> It is wrong in a sense. Drivers are mostly supposed to be portable. But
> balloon is not a run-of-the-mill device. I don't see any other way to
> make this work.

Well, it is mostly architecture independent until now ...
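
Just to make this concrete: on s390x the guest-side part of such an
inflate/deflate hook would presumably be little more than the existing UV
share/unshare helpers (hypothetical hook names, untested sketch):

  #include <linux/mm.h>
  #include <asm/io.h>   /* page_to_phys() */
  #include <asm/uv.h>   /* uv_set_shared(), uv_remove_shared() */

  /* inflate: hand the page over, so the host may actually reuse it */
  static int pv_balloon_share_page(struct page *page)
  {
          return uv_set_shared(page_to_phys(page));
  }

  /* deflate: take the page back before the guest touches it again */
  static int pv_balloon_unshare_page(struct page *page)
  {
          return uv_remove_shared(page_to_phys(page));
  }

The helpers are not the ugly part; the ugly part is where to hook them
into an otherwise architecture independent driver.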

> 
>>> need only if the host/the device does not have full access to the
>>> guest RAM. That is in my opinion why the balloon driver fences
>>> VIRTIO_F_ACCESS_PLATFORM.
>>> Does that make sense?
>>
>> Yeah, I understood the "device has to set VIRTIO_F_ACCESS_PLATFORM"
>> part. Struggling with the "what can the guest driver actually do" part.
>>
> 
> Let me try to reword this. The point of PV is that the guest has
> exclusive access to its pages unless the guest decides to share some
> of them using a dedicated ultravisor call.
> 
> The point of the memballoon is, as far as I understand, to effectively
> dynamically manage the guest's memory size within given boundaries, and
> without requiring memory hotplug. The basic idea is that the pages in
> the balloon belong to the host. The host attempting to re-use a
> non-shared page of a guest leads to problems. AFAIR the main problem
> was that should we ever want to deflate such a page (make it available
> for guest use again) we would need to do an import, and that can only
> work if we have the exact same content as when it was exported.
> Otherwise the integrity check fails, as if we had a malicious hypervisor
> trying to inject stuff into the guest.
> 
> I'm sure Janosch can provide a better explanation.
> 
> I really don't see another way how memory ballooning could work with
> something like PV without the balloon driver relinquishing the guest's
> ownership of the pages that are going to leave the guest via the balloon.
> 
> On that note, ccing the AMD SEV people. Balloon is at this point
> dysfunctional for them as well. @Tom: Right? If yes, what problems need
> to be solved so virtio-balloon can work under SEV?

SEV even pins all guest memory, so the balloon is useless there and would
even be dangerous to use.


Some thoughts:


1. I would really prefer it if there were a way to zero-out+share a page
and zero-out+unshare a page, triggered by the hypervisor. Then only the
hypervisor has to do "the right thing" when
inflating/deflating/rebooting etc. I know we can "unshare all" via the
UV - we e.g. have to do that on reboots. But I assume this might mess
with "guest integrity" (the hypervisor would technically be able to zero
out random guest pages) and is therefore not possible.


2. Have some other way to communicate "careful, ballooning won't work".
E.g., the guest detecting *itself* that it is running inside a PV
environment and not loading virtio-balloon until it properly
shares/unshares. Again, piggy-backing on IOMMU/VIRTIO_F_ACCESS_PLATFORM
somehow feels wrong.
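
The crudest form of that would be a check in the balloon driver's probe
function - really just a sketch, this would have to become a proper arch
hook instead of an s390x-only include:

  #include <linux/virtio.h>
  #include <asm/uv.h>   /* is_prot_virt_guest(), s390x only */

  static int virtballoon_probe(struct virtio_device *vdev)
  {
          /* refuse to load until proper sharing/unsharing is wired up */
          if (IS_ENABLED(CONFIG_PROTECTED_VIRTUALIZATION_GUEST) &&
              is_prot_virt_guest())
                  return -ENODEV;

          /* ... normal probe path continues here ... */
          return 0;
  }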


E.g., even once you support inflation/deflation in virtio-balloon under
PV, free page reporting could still not be supported. So it's more than
just a single arch-specific inflation/deflation callback.


And virtio-mem [1] will have similar issues once we want to use it on
s390x. But there, an arch-specific share/unshare callback should most
probably be sufficient. Still, there would have to be a way to block it
on s390x PV until that is implemented. Ideally it would be the same
mechanism as for virtio-balloon.

Again, being able to do that in the hypervisor instead of in the guest
would be much cleaner.

[1] https://lkml.kernel.org/r/address@hidden

-- 
Thanks,

David / dhildenb



