From: Daniel P. Berrangé
Subject: Re: [PATCH] docs: Add measurement calculation details to amd-memory-encryption.txt
Date: Fri, 7 Jan 2022 20:18:59 +0000
User-agent: Mutt/2.1.3 (2021-09-10)

On Thu, Dec 16, 2021 at 11:41:27PM +0200, Dov Murik wrote:
> 
> 
> On 16/12/2021 18:09, Daniel P. Berrangé wrote:
> > On Thu, Dec 16, 2021 at 12:38:34PM +0200, Dov Murik wrote:
> >>
> >>
> >> On 14/12/2021 20:39, Daniel P. Berrangé wrote:
> >>> Is there any practical guidance we can give apps on the way the VMSAs
> >>> can be expected to be initialized? E.g. can they assume essentially
> >>> all fields in vmcb_save_area are 0-initialized except for certain
> >>> ones? Is initialization likely to vary at all across KVM or EDK2
> >>> versions or something?
> >>
> >> From my own experience, the VMSA of vcpu0 doesn't change; it is
> >> basically what QEMU sets up in x86_cpu_reset() (which is mostly zeros
> >> but not all).  I don't know if it may change in newer QEMU (machine
> >> types?) or kvm.  As for vcpu1+, in SEV-ES the CS:EIP for the APs is
> >> taken from a GUIDed table at the end of the OVMF image, and it
> >> actually changed a few months ago when the memory layout changed to
> >> support both TDX and SEV.
> > 
> > That is an unpleasantly large number of moving parts that could
> > potentially impact the expected state :-(  I think we need to
> > be careful to avoid gratuitous changes, to avoid creating a
> > combinatorial expansion in the number of possibly valid VMSA
> > blocks.
> > 
> > It makes me wonder if we need to think about defining some
> > standard approach for distro vendors (and/or cloud vendors)
> > to publish the expected contents for various combinations
> > of their software pieces.
> > 
> >>
> >>
> >> Here are the VMSAs for my 2-vcpu SEV-ES VM:
> >>
> >>
> >> $ hd vmsa/vmsa_cpu0.bin
> > 
> > ...snip...
> > 
> > Was there a nice approach / tool you used to capture
> > this initial state?
> > 
> 
> I wouldn't qualify this as nice: I ended up modifying my
> host kernel's kvm (see patch below).  Later I wrote a
> script to parse that hex dump from the kernel log into
> proper 4096-byte binary VMSA files (a sketch of such a
> parser follows the patch).
> 
> 
> 
> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> index 7fbce342eec4..4e45fe37b93d 100644
> --- a/arch/x86/kvm/svm/sev.c
> +++ b/arch/x86/kvm/svm/sev.c
> @@ -624,6 +624,12 @@ static int sev_launch_update_vmsa(struct kvm *kvm, struct kvm_sev_cmd *argp)
>                  */
>                 clflush_cache_range(svm->vmsa, PAGE_SIZE);
> 
> +                /* dubek */
> +                pr_info("DEBUG_VMSA - cpu %d START ---------------\n", i);
> +                print_hex_dump(KERN_INFO, "DEBUG_VMSA", DUMP_PREFIX_OFFSET, 16, 1, svm->vmsa, PAGE_SIZE, true);
> +                pr_info("DEBUG_VMSA - cpu %d END ---------------\n", i);
> +                /* ----- */
> +
>                 vmsa.handle = sev->handle;
>                 vmsa.address = __sme_pa(svm->vmsa);
>                 vmsa.len = PAGE_SIZE;

FWIW, I made a 1% less hacky solution by writing a systemtap
script. It will need the line number updating for every single
kernel version, but at least it doesn't require building a
custom kernel:

$ cat sev-vmsa.stp
function dump_vmsa(addr:long) {
  printf("VMSA\n")
  for (i = 0; i < 4096; i += 64) {
    printf("%.64M\n", addr + i)
  }
}

probe module("kvm_amd").statement("__sev_launch_update_vmsa@arch/x86/kvm/svm/sev.c:618")
{
  dump_vmsa($svm->vmsa)
}

The line number is that of the 'vmsa.handle = sev->handle' assignment.
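(Assuming kernel debuginfo for the kvm_amd module is installed, running something like `$ stap sev-vmsa.stp` while the SEV-ES guest is being launched should print one 4096-byte hex dump per vcpu.)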

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|



