From: Peter Maydell
Subject: [PULL 04/33] target/arm: Consistently use finalize_memop_asimd() for ASIMD loads/stores
Date: Mon, 19 Jun 2023 15:28:45 +0100
In the recent refactoring we missed a few places that should be
calling finalize_memop_asimd() for ASIMD loads and stores but
are instead just calling finalize_memop(); fix these.
For the disas_ldst_single_struct() and disas_ldst_multiple_struct()
cases, this is not a behaviour change because there the size
is never MO_128 and the two finalize functions do the same thing.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/tcg/translate-a64.c | 10 ++++++----
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index d271449431a..1108f8287b8 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -3309,6 +3309,7 @@ static void disas_ldst_reg_roffset(DisasContext *s, uint32_t insn,
if (!fp_access_check(s)) {
return;
}
+ memop = finalize_memop_asimd(s, size);
} else {
if (size == 3 && opc == 2) {
/* PRFM - prefetch */
@@ -3321,6 +3322,7 @@ static void disas_ldst_reg_roffset(DisasContext *s, uint32_t insn,
is_store = (opc == 0);
is_signed = !is_store && extract32(opc, 1, 1);
is_extended = (size < 3) && extract32(opc, 0, 1);
+ memop = finalize_memop(s, size + is_signed * MO_SIGN);
}
if (rn == 31) {
@@ -3333,7 +3335,6 @@ static void disas_ldst_reg_roffset(DisasContext *s, uint32_t insn,
tcg_gen_add_i64(dirty_addr, dirty_addr, tcg_rm);
- memop = finalize_memop(s, size + is_signed * MO_SIGN);
clean_addr = gen_mte_check1(s, dirty_addr, is_store, true, memop);
if (is_vector) {
@@ -3398,6 +3399,7 @@ static void disas_ldst_reg_unsigned_imm(DisasContext *s, uint32_t insn,
if (!fp_access_check(s)) {
return;
}
+ memop = finalize_memop_asimd(s, size);
} else {
if (size == 3 && opc == 2) {
/* PRFM - prefetch */
@@ -3410,6 +3412,7 @@ static void disas_ldst_reg_unsigned_imm(DisasContext *s, uint32_t insn,
is_store = (opc == 0);
is_signed = !is_store && extract32(opc, 1, 1);
is_extended = (size < 3) && extract32(opc, 0, 1);
+ memop = finalize_memop(s, size + is_signed * MO_SIGN);
}
if (rn == 31) {
@@ -3419,7 +3422,6 @@ static void disas_ldst_reg_unsigned_imm(DisasContext *s, uint32_t insn,
offset = imm12 << size;
tcg_gen_addi_i64(dirty_addr, dirty_addr, offset);
- memop = finalize_memop(s, size + is_signed * MO_SIGN);
clean_addr = gen_mte_check1(s, dirty_addr, is_store, rn != 31, memop);
if (is_vector) {
@@ -3861,7 +3863,7 @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
* promote consecutive little-endian elements below.
*/
clean_addr = gen_mte_checkN(s, tcg_rn, is_store, is_postidx || rn != 31,
- total, finalize_memop(s, size));
+ total, finalize_memop_asimd(s, size));
/*
* Consecutive little-endian elements from a single register
@@ -4019,7 +4021,7 @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
total = selem << scale;
tcg_rn = cpu_reg_sp(s, rn);
- mop = finalize_memop(s, scale);
+ mop = finalize_memop_asimd(s, scale);
clean_addr = gen_mte_checkN(s, tcg_rn, !is_load, is_postidx || rn != 31,
total, mop);
--
2.34.1