[PULL 42/56] tcg/optimize: Split out fold_ix_to_i
From: Richard Henderson
Subject: [PULL 42/56] tcg/optimize: Split out fold_ix_to_i
Date: Wed, 27 Oct 2021 19:41:17 -0700
Pull the "op r, 0, b => movi r, 0" optimization into a function,
and use it in fold_shift.
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
tcg/optimize.c | 28 ++++++++++------------------
1 file changed, 10 insertions(+), 18 deletions(-)
diff --git a/tcg/optimize.c b/tcg/optimize.c
index f5ab0500b7..bf74b77355 100644
--- a/tcg/optimize.c
+++ b/tcg/optimize.c
@@ -731,6 +731,15 @@ static bool fold_to_not(OptContext *ctx, TCGOp *op, int idx)
return false;
}
+/* If the binary operation has first argument @i, fold to @i. */
+static bool fold_ix_to_i(OptContext *ctx, TCGOp *op, uint64_t i)
+{
+ if (arg_is_const(op->args[1]) && arg_info(op->args[1])->val == i) {
+ return tcg_opt_gen_movi(ctx, op, op->args[0], i);
+ }
+ return false;
+}
+
/* If the binary operation has first argument @i, fold to NOT. */
static bool fold_ix_to_not(OptContext *ctx, TCGOp *op, uint64_t i)
{
@@ -1384,6 +1393,7 @@ static bool fold_sextract(OptContext *ctx, TCGOp *op)
static bool fold_shift(OptContext *ctx, TCGOp *op)
{
if (fold_const2(ctx, op) ||
+ fold_ix_to_i(ctx, op, 0) ||
fold_xi_to_x(ctx, op, 0)) {
return true;
}
@@ -1552,24 +1562,6 @@ void tcg_optimize(TCGContext *s)
break;
}
- /* Simplify expressions for "shift/rot r, 0, a => movi r, 0",
- and "sub r, 0, a => neg r, a" case. */
- switch (opc) {
- CASE_OP_32_64(shl):
- CASE_OP_32_64(shr):
- CASE_OP_32_64(sar):
- CASE_OP_32_64(rotl):
- CASE_OP_32_64(rotr):
- if (arg_is_const(op->args[1])
- && arg_info(op->args[1])->val == 0) {
- tcg_opt_gen_movi(&ctx, op, op->args[0], 0);
- continue;
- }
- break;
- default:
- break;
- }
-
/* Simplify using known-zero bits. Currently only ops with a single
output argument is supported. */
z_mask = -1;
--
2.25.1