From: Joel Stanley
Subject: Re: [RFC PATCH 1/3] target/ppc: Add LPAR-per-core vs per-thread mode flag
Date: Fri, 30 Jun 2023 08:17:23 +0000
On Thu, 29 Jun 2023 at 02:17, Nicholas Piggin <npiggin@gmail.com> wrote:
>
> The Power ISA has the concept of sub-processors:
>
> Hardware is allowed to sub-divide a multi-threaded processor into
> "sub-processors" that appear to privileged programs as multi-threaded
> processors with fewer threads.
>
> POWER9 and POWER10 have two modes: either every thread is a
> sub-processor, or all threads appear as one multi-threaded processor.
> In the user manuals these are known as "LPAR-per-thread" and
> "LPAR-per-core" (or "1LPAR"), respectively.
>
> The practical difference is that in LPAR-per-thread mode, non-hypervisor
> SPRs are not shared between threads and msgsndp cannot be used to
> message siblings. In 1LPAR mode some SPRs are shared and msgsndp is
> usable. LPAR-per-thread allows multiple partitions to run concurrently
> on the same core, and is a requirement for KVM to run on POWER9/10.
>
> Traditionally, SMT in PAPR environments including PowerVM and the
> pseries machine with KVM acceleration beahves as in 1LPAR mode. In
s/beahves/behaves/
> OPAL systems, LPAR-per-thread is used. When adding SMT to the powernv
> machine, it is preferable to emulate OPAL's LPAR-per-thread mode. To
> account for this difference, a flag is added, and SPRs may be classified
> as per-thread, per-core shared, or per-LPAR shared. Per-LPAR registers
> then behave as either per-thread or per-core shared, depending on the
> mode.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Nice description.
Reviewed-by: Joel Stanley <joel@jms.id.au>
As we make the emulation more accurate, we will want the 1LPAR state
to be reflected in the xscoms too.
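For anyone following along, a minimal sketch of how the new flag is
meant to be consumed (the helper name below is illustrative only, not
something this patch adds; the real checks live in the spr_read/write
callbacks in target/ppc/translate.c):

    /* Hypothetical helper: does a per-LPAR SPR use core-shared state?
     * In "LPAR per core" (1LPAR) mode all threads of the core share
     * the register, so accesses must be serialized across the core;
     * in LPAR-per-thread mode each thread keeps its own copy. */
    static bool spr_is_core_shared(CPUPPCState *env)
    {
        return !!(env->flags & POWERPC_FLAG_1LPAR);
    }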
> ---
>  hw/ppc/spapr_cpu_core.c |  2 ++
>  target/ppc/cpu.h        |  3 +++
>  target/ppc/cpu_init.c   | 12 ++++++++++++
>  target/ppc/translate.c  | 16 +++++++++++++---
>  4 files changed, 30 insertions(+), 3 deletions(-)
>
> diff --git a/hw/ppc/spapr_cpu_core.c b/hw/ppc/spapr_cpu_core.c
> index a4e3c2fadd..b482d9754a 100644
> --- a/hw/ppc/spapr_cpu_core.c
> +++ b/hw/ppc/spapr_cpu_core.c
> @@ -270,6 +270,8 @@ static bool spapr_realize_vcpu(PowerPCCPU *cpu, SpaprMachineState *spapr,
>     env->spr_cb[SPR_PIR].default_value = cs->cpu_index;
>     env->spr_cb[SPR_TIR].default_value = thread_index;
>
> +    cpu_ppc_set_1lpar(cpu);
> +
>     /* Set time-base frequency to 512 MHz. vhyp must be set first. */
>     cpu_ppc_tb_init(env, SPAPR_TIMEBASE_FREQ);
>
> diff --git a/target/ppc/cpu.h b/target/ppc/cpu.h
> index 94497aa115..beddc5db5b 100644
> --- a/target/ppc/cpu.h
> +++ b/target/ppc/cpu.h
> @@ -674,6 +674,8 @@ enum {
>     POWERPC_FLAG_SCV = 0x00200000,
>     /* Has >1 thread per core */
>     POWERPC_FLAG_SMT = 0x00400000,
> +    /* Using "LPAR per core" mode (as opposed to per-thread) */
> +    POWERPC_FLAG_1LPAR = 0x00800000,
> };
>
> /*
> @@ -1435,6 +1437,7 @@ void store_booke_tsr(CPUPPCState *env, target_ulong val);
> void ppc_tlb_invalidate_all(CPUPPCState *env);
> void ppc_tlb_invalidate_one(CPUPPCState *env, target_ulong addr);
> void cpu_ppc_set_vhyp(PowerPCCPU *cpu, PPCVirtualHypervisor *vhyp);
> +void cpu_ppc_set_1lpar(PowerPCCPU *cpu);
> int ppcmas_tlb_check(CPUPPCState *env, ppcmas_tlb_t *tlb, hwaddr *raddrp,
> target_ulong address, uint32_t pid);
> int ppcemb_tlb_search(CPUPPCState *env, target_ulong address, uint32_t pid);
> diff --git a/target/ppc/cpu_init.c b/target/ppc/cpu_init.c
> index aeff71d063..dc3a65a575 100644
> --- a/target/ppc/cpu_init.c
> +++ b/target/ppc/cpu_init.c
> @@ -6601,6 +6601,18 @@ void cpu_ppc_set_vhyp(PowerPCCPU *cpu, PPCVirtualHypervisor *vhyp)
>     env->msr_mask &= ~MSR_HVB;
> }
>
> +void cpu_ppc_set_1lpar(PowerPCCPU *cpu)
> +{
> +    CPUPPCState *env = &cpu->env;
> +
> +    /*
> +     * pseries SMT means "LPAR per core" mode, e.g., msgsndp is usable
> +     * between threads.
> +     */
> +    if (env->flags & POWERPC_FLAG_SMT) {
> +        env->flags |= POWERPC_FLAG_1LPAR;
> +    }
> +}
> #endif /* !defined(CONFIG_USER_ONLY) */
>
> #endif /* defined(TARGET_PPC64) */
> diff --git a/target/ppc/translate.c b/target/ppc/translate.c
> index 372ee600b2..ef186396b4 100644
> --- a/target/ppc/translate.c
> +++ b/target/ppc/translate.c
> @@ -256,6 +256,16 @@ static inline bool gen_serialize_core(DisasContext *ctx)
> }
> #endif
>
> +static inline bool gen_serialize_core_lpar(DisasContext *ctx)
> +{
> +    /* 1LPAR implies SMT */
> +    if (ctx->flags & POWERPC_FLAG_1LPAR) {
> +        return gen_serialize(ctx);
> +    }
> +
> +    return true;
> +}
> +
> /* SPR load/store helpers */
> static inline void gen_load_spr(TCGv t, int reg)
> {
> @@ -451,7 +461,7 @@ static void spr_write_CTRL_ST(DisasContext *ctx, int sprn, int gprn)
>
> void spr_write_CTRL(DisasContext *ctx, int sprn, int gprn)
> {
> -    if (!(ctx->flags & POWERPC_FLAG_SMT)) {
> +    if (!(ctx->flags & POWERPC_FLAG_1LPAR)) {
>         spr_write_CTRL_ST(ctx, sprn, gprn);
>         goto out;
>     }
> @@ -815,7 +825,7 @@ void spr_write_pcr(DisasContext *ctx, int sprn, int gprn)
> /* DPDES */
> void spr_read_dpdes(DisasContext *ctx, int gprn, int sprn)
> {
> -    if (!gen_serialize_core(ctx)) {
> +    if (!gen_serialize_core_lpar(ctx)) {
>         return;
>     }
>
> @@ -824,7 +834,7 @@ void spr_read_dpdes(DisasContext *ctx, int gprn, int sprn)
>
> void spr_write_dpdes(DisasContext *ctx, int sprn, int gprn)
> {
> -    if (!gen_serialize_core(ctx)) {
> +    if (!gen_serialize_core_lpar(ctx)) {
>         return;
>     }
>
> --
> 2.40.1