[PULL v2 35/60] tcg/optimize: Split out fold_xx_to_i
From: Richard Henderson
Subject: [PULL v2 35/60] tcg/optimize: Split out fold_xx_to_i
Date: Thu, 28 Oct 2021 21:33:04 -0700
Pull the "op r, a, a => movi r, 0" optimization into a function,
and use it in the outer opcode fold functions.
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
tcg/optimize.c | 41 ++++++++++++++++++++++++-----------------
1 file changed, 24 insertions(+), 17 deletions(-)
diff --git a/tcg/optimize.c b/tcg/optimize.c
index 5f1bd7cd78..2f55dc56c0 100644
--- a/tcg/optimize.c
+++ b/tcg/optimize.c
@@ -695,6 +695,15 @@ static bool fold_const2(OptContext *ctx, TCGOp *op)
return false;
}
+/* If the binary operation has both arguments equal, fold to @i. */
+static bool fold_xx_to_i(OptContext *ctx, TCGOp *op, uint64_t i)
+{
+ if (args_are_copies(op->args[1], op->args[2])) {
+ return tcg_opt_gen_movi(ctx, op, op->args[0], i);
+ }
+ return false;
+}
+
/*
* These outermost fold_<op> functions are sorted alphabetically.
*/
@@ -744,7 +753,11 @@ static bool fold_and(OptContext *ctx, TCGOp *op)
static bool fold_andc(OptContext *ctx, TCGOp *op)
{
- return fold_const2(ctx, op);
+ if (fold_const2(ctx, op) ||
+ fold_xx_to_i(ctx, op, 0)) {
+ return true;
+ }
+ return false;
}
static bool fold_brcond(OptContext *ctx, TCGOp *op)
@@ -1224,7 +1237,11 @@ static bool fold_shift(OptContext *ctx, TCGOp *op)
static bool fold_sub(OptContext *ctx, TCGOp *op)
{
- return fold_const2(ctx, op);
+ if (fold_const2(ctx, op) ||
+ fold_xx_to_i(ctx, op, 0)) {
+ return true;
+ }
+ return false;
}
static bool fold_sub2_i32(OptContext *ctx, TCGOp *op)
@@ -1234,7 +1251,11 @@ static bool fold_sub2_i32(OptContext *ctx, TCGOp *op)
static bool fold_xor(OptContext *ctx, TCGOp *op)
{
- return fold_const2(ctx, op);
+ if (fold_const2(ctx, op) ||
+ fold_xx_to_i(ctx, op, 0)) {
+ return true;
+ }
+ return false;
}
/* Propagate constants and copies, fold constant expressions. */
@@ -1739,20 +1760,6 @@ void tcg_optimize(TCGContext *s)
break;
}
- /* Simplify expression for "op r, a, a => movi r, 0" cases */
- switch (opc) {
- CASE_OP_32_64_VEC(andc):
- CASE_OP_32_64_VEC(sub):
- CASE_OP_32_64_VEC(xor):
- if (args_are_copies(op->args[1], op->args[2])) {
- tcg_opt_gen_movi(&ctx, op, op->args[0], 0);
- continue;
- }
- break;
- default:
- break;
- }
-
/*
* Process each opcode.
* Sorted alphabetically by opcode as much as possible.
--
2.25.1