From: Bruce Rogers
Subject: [Qemu-devel] current QEMU can't start pc-q35-2.12 SEV guest
Date: Thu, 15 Aug 2019 00:49:58 +0000

Hi,

I ran into a case where a SEV-enabled guest on a SEV-capable host, using
an older machine type, was no longer able to run after the QEMU version
was updated.

Specifically, the guest was installed and running under a v2.12 QEMU set
up for SEV (strictly speaking, it was v2.11 with SEV support backported,
but the details still apply), using a command line such as:

qemu-system-x86_64 -cpu EPYC-IBRS \
-machine pc-q35-2.12,accel=kvm,memory-encryption=sev0 \
-object sev-guest,id=sev0,...

The guest ran fine, using SEV memory encryption.

Later, QEMU was updated to v3.1.0, and the same guest now hung at boot
with the exact same command line. (Current QEMU still has the same
problem.)

Upon investigation, I found that the xlevel handling in
target/i386/cpu.c includes an explicit detection of SEV being enabled
and sets cpuid_min_xlevel in the CPUX86State structure to 0x8000001F,
the required minimum for SEV support. This value is normally used to set
the xlevel the guest sees, allowing it to use SEV.

The compat settings for the v2.12 machine type include an associated
xlevel value (0x8000000A). Unfortunately, the processing of the compat
settings is conflated with the logic for handling an xlevel explicitly
specified by the user on the command line, which is treated as an
"override" condition that supersedes the other xlevel selections the
QEMU cpu code would otherwise make.

So, in the scenario I describe above, the original, working case
provides a CPUID xlevel value of 0x8000001F to the guest (correct),
while the failing case ends up providing the value 0x8000000A
(incorrect).

It seems to me that the handling of the compat settings and the
explicit xlevel setting by the user should be processed separately, but
I don't see how to do that easily.

How should this problem be resolved?

In my case, I've added another sev_enabled() check to the code that
handles a user-provided xlevel value; if SEV is enabled, I still apply
the cpuid_min_xlevel value. This works for the time being, but doesn't
seem like the right solution.

Looking forward to your help with this issue.

Thanks,

Bruce
