Re: [PATCH] x86: Add CPUID KVM support for new instruction WBNOINVD


From: Jim Mattson
Subject: Re: [PATCH] x86: Add CPUID KVM support for new instruction WBNOINVD
Date: Tue, 1 Oct 2019 10:23:31 -0700

On Tue, Oct 1, 2019 at 10:06 AM Sean Christopherson
<address@hidden> wrote:
>
> On Tue, Oct 01, 2019 at 07:20:17AM -0700, Jim Mattson wrote:
> > On Mon, Sep 30, 2019 at 5:45 PM Huang, Kai <address@hidden> wrote:
> > >
> > > On Mon, 2019-09-30 at 12:23 -0700, Jim Mattson wrote:
> > > > On Mon, Sep 30, 2019 at 10:54 AM Eduardo Habkost <address@hidden> wrote:
> > > > I had only looked at the SVM implementation of WBNOINVD, which is
> > > > exactly the same as the SVM implementation of WBINVD. So, the question
> > > > is, "why enumerate WBNOINVD if its implementation is exactly the same
> > > > as WBINVD?"
> > > >
> > > > WBNOINVD appears to be only partially documented in Intel document
> > > > 319433-037, "Intel® Architecture Instruction Set Extensions and Future
> > > > Features Programming Reference." In particular, there is no
> > > > documentation regarding the instruction's behavior in VMX non-root
> > > > mode. Does WBNOINVD cause a VM-exit when the VM-execution control,
> > > > "WBINVD exiting," is set? If so, does it have the same VM-exit reason
> > > > as WBINVD (54), or a different one? If it does have the same VM-exit
> > > > reason (a la SVM), how does one distinguish a WBINVD VM-exit from a
> > > > WBNOINVD VM-exit? If one can't distinguish (a la SVM), then it would
> > > > seem that the VMX implementation also implements WBNOINVD as WBINVD.
> > > > If that's the case, the question for VMX is the same as for SVM.
> > >
> > > Unfortunately, WBNOINVD's interaction with VMX has not been made
> > > public yet.
>
> Hint: WBNOINVD uses a previously ignored prefix, i.e. it looks a *lot*
>       like WBINVD...

Because of the opcode selection, I would assume that we're not going
to see a VM-execution control for "enable WBNOINVD." To avoid breaking
legacy hypervisors, then, I would expect the "enable WBINVD exiting"
control to apply to WBNOINVD as well, and I would expect the exit
reason to be the same for both instructions. The exit qualification
field is cleared for WBINVD exits, so perhaps we will see a bit in
that field set to one for WBNOINVD. If so, will this new behavior be
indicated by a bit in one of the VMX capability MSRs? That seems to be
a closely guarded secret, for some reason.

> > > I am reaching out internally to see when it can be done. I agree it may
> > > not be necessary to expose WBNOINVD if its implementation is exactly the
> > > same as WBINVD, but it also doesn't do any harm, right?
> >
> > If nested VMX changes are necessary to be consistent with hardware,
> > then enumerating WBNOINVD support in the guest CPUID information at
> > this time--without the attendant nested VMX changes--is premature. No
> > changes to nested SVM are necessary, so it's fine for AMD systems.
> >
> > If no changes to nested VMX are necessary, then it is true that
> > WBNOINVD can be emulated by WBINVD. However, it provides no value to
> > specifically enumerate the instruction.
> >
> > If there is some value that I'm missing, then why make guest support
> > for the instruction contingent on host support for the instruction?
> > KVM can implement WBNOINVD as WBINVD on any host with WBINVD,
> > regardless of whether or not the host supports WBNOINVD.
>
> Agreed.  To play nice with live migration, KVM should enumerate WBNOINVD
> regardless of host support.  Since WBNOINVD uses an ignored prefix, it
> will simply look like a regular WBINVD on platforms without WBNOINVD.
>
> Let's assume the WBNOINVD VM-Exit behavior is sane, i.e. allows software
> to easily differentiate between WBINVD and WBNOINVD.

That isn't the case with SVM, oddly.

> In that case, the
> value added would be that KVM can do WBNOINVD instead of WBINVD in the
> unlikely event that (a) KVM needs to execute WBINVD on behalf of the
> guest (because the guest has non-coherent DMA), (b) WBNOINVD is supported
> on the host, and (c) WBNOINVD is used by the guest (I don't think it would
> be safe to assume that the guest doesn't need the caches invalidated on
> WBINVD).

I agree that there would be value if KVM implemented WBNOINVD using
WBNOINVD, but that isn't what this change does. My question was, "What
is the value in enumerating WBNOINVD if KVM is just going to implement
it with WBINVD anyway?"


