Re: [PATCH 12/24] tcg/aarch64: Implement negsetcond_*
From: Peter Maydell
Subject: Re: [PATCH 12/24] tcg/aarch64: Implement negsetcond_*
Date: Thu, 10 Aug 2023 17:39:08 +0100
On Tue, 8 Aug 2023 at 04:13, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> Trivial, as aarch64 has an instruction for this: CSETM.
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> tcg/aarch64/tcg-target.h | 4 ++--
> tcg/aarch64/tcg-target.c.inc | 12 ++++++++++++
> 2 files changed, 14 insertions(+), 2 deletions(-)
>
> diff --git a/tcg/aarch64/tcg-target.h b/tcg/aarch64/tcg-target.h
> index 6080fddf73..e3faa9cff4 100644
> --- a/tcg/aarch64/tcg-target.h
> +++ b/tcg/aarch64/tcg-target.h
> @@ -94,7 +94,7 @@ typedef enum {
> #define TCG_TARGET_HAS_mulsh_i32 0
> #define TCG_TARGET_HAS_extrl_i64_i32 0
> #define TCG_TARGET_HAS_extrh_i64_i32 0
> -#define TCG_TARGET_HAS_negsetcond_i32 0
> +#define TCG_TARGET_HAS_negsetcond_i32 1
> #define TCG_TARGET_HAS_qemu_st8_i32 0
>
> #define TCG_TARGET_HAS_div_i64 1
> @@ -130,7 +130,7 @@ typedef enum {
> #define TCG_TARGET_HAS_muls2_i64 0
> #define TCG_TARGET_HAS_muluh_i64 1
> #define TCG_TARGET_HAS_mulsh_i64 1
> -#define TCG_TARGET_HAS_negsetcond_i64 0
> +#define TCG_TARGET_HAS_negsetcond_i64 1
>
> /*
> * Without FEAT_LSE2, we must use LDXP+STXP to implement atomic 128-bit load,
> diff --git a/tcg/aarch64/tcg-target.c.inc b/tcg/aarch64/tcg-target.c.inc
> index 35ca80cd56..7d8d114c9e 100644
> --- a/tcg/aarch64/tcg-target.c.inc
> +++ b/tcg/aarch64/tcg-target.c.inc
> @@ -2262,6 +2262,16 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc,
>                        TCG_REG_XZR, tcg_invert_cond(args[3]));
>           break;
>
> +    case INDEX_op_negsetcond_i32:
> +        a2 = (int32_t)a2;
> +        /* FALLTHRU */
I see this is what we already do for setcond and movcond,
but how does it work when the 2nd input is a register?
Or is reg-reg guaranteed to always use the _i64 op?
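(Editorial aside: a minimal, hypothetical C sketch of what the cast does on the constant path, assuming a2 is held in a 64-bit TCGArg; this is not QEMU source, and whether the register case needs anything comparable is exactly what the question above asks.)

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t a2 = 0xffffffffu;       /* a 32-bit constant -1 */
    a2 = (int32_t)a2;                /* the cast from the patch: sign-extends the low 32 bits */
    printf("a2 = %#llx\n", (unsigned long long)a2);   /* 0xffffffffffffffff */
    return 0;
}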
> +    case INDEX_op_negsetcond_i64:
> +        tcg_out_cmp(s, ext, a1, a2, c2);
> +        /* Use CSETM alias of CSINV Wd, WZR, WZR, invert(cond). */
> +        tcg_out_insn(s, 3506, CSINV, ext, a0, TCG_REG_XZR,
> +                     TCG_REG_XZR, tcg_invert_cond(args[3]));
> +        break;
> +
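(Editorial aside: a minimal, hypothetical C model of what the emitted CMP + CSETM pair computes for a signed less-than, using the alias stated in the comment above, CSETM Xd, cond == CSINV Xd, XZR, XZR, invert(cond); the helper names are illustrative, not QEMU source.)

#include <stdint.h>
#include <stdio.h>

/* CSINV Xd, Xn, Xm, cond: Xd = cond ? Xn : ~Xm. */
static uint64_t csinv(uint64_t n, uint64_t m, int cond_holds)
{
    return cond_holds ? n : ~m;
}

/* negsetcond_i64 d, a1, a2, LT lowered as CMP a1, a2 then CSETM d, LT,
 * i.e. CSINV d, XZR, XZR, GE (the inverted condition):
 * all-ones when a1 < a2, zero otherwise. */
static uint64_t negsetcond_lt(int64_t a1, int64_t a2)
{
    int lt = a1 < a2;            /* CMP sets the flags */
    return csinv(0, 0, !lt);     /* 0xffffffffffffffff if lt, else 0 */
}

int main(void)
{
    printf("%#llx %#llx\n",
           (unsigned long long)negsetcond_lt(1, 2),
           (unsigned long long)negsetcond_lt(2, 1));   /* 0xffffffffffffffff 0 */
    return 0;
}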
>      case INDEX_op_movcond_i32:
>          a2 = (int32_t)a2;
>          /* FALLTHRU */
> @@ -2868,6 +2878,8 @@ static TCGConstraintSetIndex tcg_target_op_def(TCGOpcode op)
>      case INDEX_op_sub_i64:
>      case INDEX_op_setcond_i32:
>      case INDEX_op_setcond_i64:
> +    case INDEX_op_negsetcond_i32:
> +    case INDEX_op_negsetcond_i64:
>          return C_O1_I2(r, r, rA);
>
>      case INDEX_op_mul_i32:
thanks
-- PMM