Re: [PATCH v3 04/12] target/ppc: Introduce ppc_hash64_use_vrma() helper
From: David Gibson
Subject: Re: [PATCH v3 04/12] target/ppc: Introduce ppc_hash64_use_vrma() helper
Date: Thu, 20 Feb 2020 14:10:48 +1100
On Wed, Feb 19, 2020 at 11:06:20AM -0300, Fabiano Rosas wrote:
> David Gibson <address@hidden> writes:
>
> > When running guests under a hypervisor, the hypervisor obviously needs to
> > be protected from guest accesses even if those are in what the guest
> > considers real mode (translation off). The POWER hardware provides two
> > ways of doing that: The old way has guest real mode accesses simply offset
> > and bounds checked into host addresses. It works, but requires that a
> > significant chunk of the guest's memory - the RMA - be physically
> > contiguous in the host, which is pretty inconvenient. The new way, known
> > as VRMA, has guest real mode accesses translated in roughly the normal way
> > but with some special parameters.
> >
> > In POWER7 and POWER8 the LPCR[VPM0] bit selected between the two modes, but
> > in POWER9 only VRMA mode is supported
>
> ... when translation is off, right? Because I see in the 3.0 ISA that
> LPCR[VPM1] is still there.
Right. This whole patch (and the whole series) is about when the
guest is in translation off mode.
>
> > and LPCR[VPM0] no longer exists. We
> > handle that difference in behaviour in ppc_hash64_set_isi(), but not in
> > other places where we blindly check LPCR[VPM0].
> >
> > Correct those instances with a new helper to tell if we should be in VRMA
> > mode.
> >
> > Signed-off-by: David Gibson <address@hidden>
> > Reviewed-by: Cédric Le Goater <address@hidden>
> > ---
> > target/ppc/mmu-hash64.c | 41 +++++++++++++++++++----------------------
> > 1 file changed, 19 insertions(+), 22 deletions(-)
> >
> > diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c
> > index 5fabd93c92..d878180df5 100644
> > --- a/target/ppc/mmu-hash64.c
> > +++ b/target/ppc/mmu-hash64.c
> > @@ -668,6 +668,19 @@ unsigned ppc_hash64_hpte_page_shift_noslb(PowerPCCPU *cpu,
> > return 0;
> > }
> >
> > +static bool ppc_hash64_use_vrma(CPUPPCState *env)
> > +{
> > + switch (env->mmu_model) {
> > + case POWERPC_MMU_3_00:
> > + /* ISAv3.0 (POWER9) always uses VRMA, the VPM0 field and RMOR
> > + * register no longer exist */
> > + return true;
> > +
> > + default:
> > + return !!(env->spr[SPR_LPCR] & LPCR_VPM0);
> > + }
> > +}
> > +
> > static void ppc_hash64_set_isi(CPUState *cs, uint64_t error_code)
> > {
> > CPUPPCState *env = &POWERPC_CPU(cs)->env;
> > @@ -676,15 +689,7 @@ static void ppc_hash64_set_isi(CPUState *cs, uint64_t error_code)
> > if (msr_ir) {
> > vpm = !!(env->spr[SPR_LPCR] & LPCR_VPM1);
> > } else {
> > - switch (env->mmu_model) {
> > - case POWERPC_MMU_3_00:
> > - /* Field deprecated in ISAv3.00 - interrupts always go to hyperv */
> > - vpm = true;
> > - break;
> > - default:
> > - vpm = !!(env->spr[SPR_LPCR] & LPCR_VPM0);
> > - break;
> > - }
> > + vpm = ppc_hash64_use_vrma(env);
> > }
> > if (vpm && !msr_hv) {
> > cs->exception_index = POWERPC_EXCP_HISI;
> > @@ -702,15 +707,7 @@ static void ppc_hash64_set_dsi(CPUState *cs, uint64_t dar, uint64_t dsisr)
> > if (msr_dr) {
> > vpm = !!(env->spr[SPR_LPCR] & LPCR_VPM1);
> > } else {
> > - switch (env->mmu_model) {
> > - case POWERPC_MMU_3_00:
> > - /* Field deprecated in ISAv3.00 - interrupts always go to hyperv */
> > - vpm = true;
> > - break;
> > - default:
> > - vpm = !!(env->spr[SPR_LPCR] & LPCR_VPM0);
> > - break;
> > - }
> > + vpm = ppc_hash64_use_vrma(env);
> > }
> > if (vpm && !msr_hv) {
> > cs->exception_index = POWERPC_EXCP_HDSI;
> > @@ -799,7 +796,7 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr,
> > if (!(eaddr >> 63)) {
> > raddr |= env->spr[SPR_HRMOR];
> > }
> > - } else if (env->spr[SPR_LPCR] & LPCR_VPM0) {
> > + } else if (ppc_hash64_use_vrma(env)) {
> > /* Emulated VRMA mode */
> > slb = &env->vrma_slb;
> > if (!slb->sps) {
> > @@ -967,7 +964,7 @@ hwaddr ppc_hash64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong addr)
> > } else if ((msr_hv || !env->has_hv_mode) && !(addr >> 63)) {
> > /* In HV mode, add HRMOR if top EA bit is clear */
> > return raddr | env->spr[SPR_HRMOR];
> > - } else if (env->spr[SPR_LPCR] & LPCR_VPM0) {
> > + } else if (ppc_hash64_use_vrma(env)) {
> > /* Emulated VRMA mode */
> > slb = &env->vrma_slb;
> > if (!slb->sps) {
> > @@ -1056,8 +1053,7 @@ static void ppc_hash64_update_vrma(PowerPCCPU *cpu)
> > slb->sps = NULL;
> >
> > /* Is VRMA enabled ? */
> > - lpcr = env->spr[SPR_LPCR];
> > - if (!(lpcr & LPCR_VPM0)) {
> > + if (ppc_hash64_use_vrma(env)) {
>
> Shouldn't this be !ppc_hash64_use_vrma(env)?
>
> And a comment about the original code: all other places that check
> LPCR_VPM0 do it after verifying that translation is off, except here
> (ppc_hash64_update_vrma). Could that be an issue?
>
> > return;
> > }
> >
> > @@ -1065,6 +1061,7 @@ static void ppc_hash64_update_vrma(PowerPCCPU *cpu)
> > * Make one up. Mostly ignore the ESID which will not be needed
> > * for translation
> > */
> > + lpcr = env->spr[SPR_LPCR];
> > vsid = SLB_VSID_VRMA;
> > vrmasd = (lpcr & LPCR_VRMASD) >> LPCR_VRMASD_SHIFT;
> > vsid |= (vrmasd << 4) & (SLB_VSID_L | SLB_VSID_LP);
>
--
David Gibson | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au | minimalist, thank you. NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson