From: David Gibson
Subject: [PATCH v4 03/12] target/ppc: Correct handling of real mode accesses with vhyp on hash MMU
Date: Wed, 19 Feb 2020 13:14:00 +1100
On ppc we have the concept of virtual hypervisor ("vhyp") mode, where we
only model the non-hypervisor-privileged parts of the cpu. Essentially we
model the hypervisor's behaviour from the point of view of a guest OS, but
we don't model the hypervisor's execution.
In particular, in this mode, qemu's notion of target physical address is
a guest physical address from the vcpu's point of view. So accesses in
guest real mode don't require translation. If we were modelling the
hypervisor's execution, we'd need to translate the guest physical address
into a host physical address.
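To make that concrete, the whole vhyp real-mode path boils down to
masking off the top nibble of the effective address. A minimal sketch
(the helper name is invented here for illustration; hwaddr and vaddr are
the usual qemu typedefs):

    /* Illustrative sketch, not part of the patch: under vhyp, the masked
     * EA already is the guest physical address that qemu's softmmu should
     * use; no offset is applied. */
    static hwaddr real_mode_ea_to_gpa_vhyp(vaddr eaddr)
    {
        /* In real mode the top 4 effective address bits are ignored */
        return eaddr & 0x0FFFFFFFFFFFFFFFULL;
    }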
Currently, we handle this sloppily: we rely on setting up the virtual LPCR
and RMOR registers so that GPAs are simply HPAs plus an offset, which we
set to zero. This is already conceptually dubious, since the LPCR and RMOR
registers don't exist in the non-hypervisor portion of the CPU. It gets
worse with POWER9, where RMOR and LPCR[VPM0] no longer exist at all.
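For reference, the old-style RMO mapping that the hack piggybacks on
looks roughly like this (helper name invented; env->rmls and SPR_RMOR
are the fields used in the diff below). With the virtual RMOR pinned to
zero and RMLS set large enough that the bounds check passes, the OR is a
no-op, which is the only reason the trick worked at all:

    /* Illustrative sketch, not part of the patch */
    static hwaddr real_mode_ea_to_gpa_rmo(CPUPPCState *env, vaddr eaddr)
    {
        hwaddr raddr = eaddr & 0x0FFFFFFFFFFFFFFFULL;

        if (raddr >= env->rmls) {
            return -1;                       /* outside the real mode area */
        }
        return raddr | env->spr[SPR_RMOR];   /* RMOR == 0 under the hack */
    }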
Clean this up by explicitly handling the vhyp case. While we're there,
remove some unnecessary nesting of if statements that made the logic to
select the correct real mode behaviour a bit less clear than it could be.
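The flattened logic reads as one four-way selector; condensed from the
hunks below, with each branch's action summarized in a comment:

    if (cpu->vhyp) {
        /* EA == GPA == qemu guest address: nothing to do */
    } else if (msr_hv || !env->has_hv_mode) {
        /* Bare-metal real mode: OR in HRMOR if the top EA bit is clear */
    } else if (env->spr[SPR_LPCR] & LPCR_VPM0) {
        /* Emulated VRMA: translate via the special VRMA SLB entry */
    } else {
        /* Emulated old-style RMO: bounds check against RMLS, OR in RMOR */
    }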
Signed-off-by: David Gibson <address@hidden>
Reviewed-by: Cédric Le Goater <address@hidden>
---
target/ppc/mmu-hash64.c | 60 ++++++++++++++++++++++++-----------------
1 file changed, 35 insertions(+), 25 deletions(-)
diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c
index a881876647..5fabd93c92 100644
--- a/target/ppc/mmu-hash64.c
+++ b/target/ppc/mmu-hash64.c
@@ -789,27 +789,30 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr,
          */
         raddr = eaddr & 0x0FFFFFFFFFFFFFFFULL;
 
-        /* In HV mode, add HRMOR if top EA bit is clear */
-        if (msr_hv || !env->has_hv_mode) {
+        if (cpu->vhyp) {
+            /*
+             * In virtual hypervisor mode, there's nothing to do:
+             *   EA == GPA == qemu guest address
+             */
+        } else if (msr_hv || !env->has_hv_mode) {
+            /* In HV mode, add HRMOR if top EA bit is clear */
             if (!(eaddr >> 63)) {
                 raddr |= env->spr[SPR_HRMOR];
             }
-        } else {
-            /* Otherwise, check VPM for RMA vs VRMA */
-            if (env->spr[SPR_LPCR] & LPCR_VPM0) {
-                slb = &env->vrma_slb;
-                if (slb->sps) {
-                    goto skip_slb_search;
-                }
-                /* Not much else to do here */
+        } else if (env->spr[SPR_LPCR] & LPCR_VPM0) {
+            /* Emulated VRMA mode */
+            slb = &env->vrma_slb;
+            if (!slb->sps) {
+                /* Invalid VRMA setup, machine check */
                 cs->exception_index = POWERPC_EXCP_MCHECK;
                 env->error_code = 0;
                 return 1;
-            } else if (raddr < env->rmls) {
-                /* RMA. Check bounds in RMLS */
-                raddr |= env->spr[SPR_RMOR];
-            } else {
-                /* The access failed, generate the approriate interrupt */
+            }
+
+            goto skip_slb_search;
+        } else {
+            /* Emulated old-style RMO mode, bounds check against RMLS */
+            if (raddr >= env->rmls) {
                 if (rwx == 2) {
                     ppc_hash64_set_isi(cs, SRR1_PROTFAULT);
                 } else {
@@ -821,6 +824,8 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr,
                 }
                 return 1;
             }
+
+            raddr |= env->spr[SPR_RMOR];
         }
         tlb_set_page(cs, eaddr & TARGET_PAGE_MASK, raddr & TARGET_PAGE_MASK,
                      PAGE_READ | PAGE_WRITE | PAGE_EXEC, mmu_idx,
@@ -953,22 +958,27 @@ hwaddr ppc_hash64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong addr)
         /* In real mode the top 4 effective address bits are ignored */
         raddr = addr & 0x0FFFFFFFFFFFFFFFULL;
 
-        /* In HV mode, add HRMOR if top EA bit is clear */
-        if ((msr_hv || !env->has_hv_mode) && !(addr >> 63)) {
+        if (cpu->vhyp) {
+            /*
+             * In virtual hypervisor mode, there's nothing to do:
+             *   EA == GPA == qemu guest address
+             */
+            return raddr;
+        } else if ((msr_hv || !env->has_hv_mode) && !(addr >> 63)) {
+            /* In HV mode, add HRMOR if top EA bit is clear */
             return raddr | env->spr[SPR_HRMOR];
-        }
-
-        /* Otherwise, check VPM for RMA vs VRMA */
-        if (env->spr[SPR_LPCR] & LPCR_VPM0) {
+        } else if (env->spr[SPR_LPCR] & LPCR_VPM0) {
+            /* Emulated VRMA mode */
             slb = &env->vrma_slb;
             if (!slb->sps) {
                 return -1;
             }
-        } else if (raddr < env->rmls) {
-            /* RMA. Check bounds in RMLS */
-            return raddr | env->spr[SPR_RMOR];
         } else {
-            return -1;
+            /* Emulated old-style RMO mode, bounds check against RMLS */
+            if (raddr >= env->rmls) {
+                return -1;
+            }
+            return raddr | env->spr[SPR_RMOR];
         }
     } else {
         slb = slb_lookup(cpu, addr);
--
2.24.1
- [PATCH v4 00/12] target/ppc: Correct some errors with real mode handling, David Gibson, 2020/02/18
- [PATCH v4 02/12] ppc: Remove stub of PPC970 HID4 implementation, David Gibson, 2020/02/18
- [PATCH v4 01/12] ppc: Remove stub support for 32-bit hypervisor mode, David Gibson, 2020/02/18
- [PATCH v4 05/12] spapr, ppc: Remove VPM0/RMLS hacks for POWER9, David Gibson, 2020/02/18
- [PATCH v4 03/12] target/ppc: Correct handling of real mode accesses with vhyp on hash MMU, David Gibson <=
- [PATCH v4 10/12] target/ppc: Only calculate RMLS derived RMA limit on demand, David Gibson, 2020/02/18
- [PATCH v4 12/12] target/ppc: Don't store VRMA SLBE persistently, David Gibson, 2020/02/18
- [PATCH v4 08/12] target/ppc: Streamline calculation of RMA limit from LPCR[RMLS], David Gibson, 2020/02/18
- [PATCH v4 06/12] target/ppc: Remove RMOR register from POWER9 & POWER10, David Gibson, 2020/02/18
- [PATCH v4 09/12] target/ppc: Correct RMLS table, David Gibson, 2020/02/18
- [PATCH v4 04/12] target/ppc: Introduce ppc_hash64_use_vrma() helper, David Gibson, 2020/02/18
- [PATCH v4 07/12] target/ppc: Use class fields to simplify LPCR masking, David Gibson, 2020/02/18
- [PATCH v4 11/12] target/ppc: Streamline construction of VRMA SLB entry, David Gibson, 2020/02/18