Re: [PATCH 1/4] target/ppc: Fix lqarx to set cpu_reserve


From: Nicholas Piggin
Subject: Re: [PATCH 1/4] target/ppc: Fix lqarx to set cpu_reserve
Date: Mon, 05 Jun 2023 12:33:44 +1000

On Mon Jun 5, 2023 at 2:05 AM AEST, Richard Henderson wrote:
> On 6/4/23 03:28, Nicholas Piggin wrote:
> > lqarx does not set cpu_reserve, which causes stqcx. to never succeed.
> > Fix this and slightly rearrange gen_load_locked so the two functions
> > match more closely.
> > 
> > Cc: qemu-stable@nongnu.org
> > Fixes: 94bf2658676 ("target/ppc: Use atomic load for LQ and LQARX")
> > Fixes: 57b38ffd0c6 ("target/ppc: Use tcg_gen_qemu_{ld,st}_i128 for LQARX, LQ, STQ")
> > Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> > ---
> > cpu_reserve got lost in the parallel part with the first patch, then
> > from the serial part when it was merged with the parallel part by the
> > second patch.
>
> Oops, sorry about that.

No problem, I really appreciate your work on ppc. ppc just should have
more unit tests, particularly for non-trivial instructions like lqarx,
which would have caught this. That's the real problem.
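
For example, a directed test could be as small as the sketch below. This
assumes gcc with -mcpu=power8 or later, so the 16-byte atomic builtin is
inlined as an lqarx/stqcx. loop rather than outlined to libatomic; with
the broken lqarx the stqcx. never succeeds, so the test hangs instead of
exiting:

    /* Sketch of a directed test for the lqarx/stqcx. path, not an
     * actual QEMU test case. */
    #include <stdio.h>

    int main(void)
    {
        __int128 val __attribute__((aligned(16))) = 1;
        __int128 expected = 1;

        /* With -mcpu=power8 this compiles to a lqarx / compare /
         * stqcx. / retry loop.  If lqarx never establishes a
         * reservation, stqcx. always fails and we loop forever. */
        int ok = __atomic_compare_exchange_n(&val, &expected,
                                             (__int128)2, 0,
                                             __ATOMIC_SEQ_CST,
                                             __ATOMIC_SEQ_CST);

        printf("cmpxchg16 %s\n", ok ? "succeeded" : "failed");
        return ok ? 0 : 1;
    }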

>
> > 
> > Thanks,
> > Nick
> > 
> >   target/ppc/translate.c | 3 ++-
> >   1 file changed, 2 insertions(+), 1 deletion(-)
> > 
> > diff --git a/target/ppc/translate.c b/target/ppc/translate.c
> > index 3650d2985d..e129cdcb8f 100644
> > --- a/target/ppc/translate.c
> > +++ b/target/ppc/translate.c
> > @@ -3583,8 +3583,8 @@ static void gen_load_locked(DisasContext *ctx, MemOp memop)
> >   
> >       gen_set_access_type(ctx, ACCESS_RES);
> >       gen_addr_reg_index(ctx, t0);
> > -    tcg_gen_qemu_ld_tl(gpr, t0, ctx->mem_idx, memop | MO_ALIGN);
> >       tcg_gen_mov_tl(cpu_reserve, t0);
> > +    tcg_gen_qemu_ld_tl(gpr, t0, ctx->mem_idx, memop | MO_ALIGN);
> >       tcg_gen_mov_tl(cpu_reserve_val, gpr);
>
> This change is wrong.  Reserve should not be set if the load faults.

Oh yeah, good catch.
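
Right, keeping the original ordering, i.e. only recording the
reservation once the load has completed, would look roughly like this
(just a sketch of the point, not the respin):

    gen_set_access_type(ctx, ACCESS_RES);
    gen_addr_reg_index(ctx, t0);
    /* Load first: if this faults we take the exception without
     * having established a reservation. */
    tcg_gen_qemu_ld_tl(gpr, t0, ctx->mem_idx, memop | MO_ALIGN);
    /* Only now record the reservation address and value. */
    tcg_gen_mov_tl(cpu_reserve, t0);
    tcg_gen_mov_tl(cpu_reserve_val, gpr);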

Thanks
Nick


