From: Richard Henderson
Subject: [PATCH v3 17/20] target/arm: Move mte check for store-exclusive
Date: Tue, 30 May 2023 12:14:35 -0700
Push the MTE check behind the exclusive_addr check.
Document the several ways in which this implementation is
still out of spec.
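
As a minimal sketch of the new ordering (hypothetical helper names, not
QEMU's actual API): the MTE check, which can fault, is reached only once
the address has passed the exclusive-monitor comparison, mirroring
AArch64.ExclusiveMonitorsPass().

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-ins, for illustration only. */
extern uint64_t monitor_addr;               /* plays cpu_exclusive_addr */
extern uint64_t strip_tbi(uint64_t va);     /* plays clean_data_tbi()   */
extern void mte_check(uint64_t va);         /* plays gen_mte_check1()   */
extern bool do_store(uint64_t va);

/*
 * A failing store-exclusive takes the early-out path and therefore
 * raises no tag-check fault; only a store that would actually pass
 * the monitor is subject to the MTE check.
 */
static bool store_exclusive(uint64_t va)
{
    if (strip_tbi(va) != monitor_addr) {
        return false;                       /* monitor miss: no MTE fault */
    }
    mte_check(va);                          /* may raise a tag-check fault */
    return do_store(va);
}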
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/tcg/translate-a64.c | 42 +++++++++++++++++++++++++++++-----
1 file changed, 36 insertions(+), 6 deletions(-)
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 49cb7a7dd5..9654c5746a 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -2524,17 +2524,47 @@ static void gen_store_exclusive(DisasContext *s, int rd, int rt, int rt2,
      */
     TCGLabel *fail_label = gen_new_label();
     TCGLabel *done_label = gen_new_label();
-    TCGv_i64 tmp, dirty_addr, clean_addr;
+    TCGv_i64 tmp, clean_addr;
     MemOp memop;
 
-    memop = (size + is_pair) | MO_ALIGN;
-    memop = finalize_memop(s, memop);
-
-    dirty_addr = cpu_reg_sp(s, rn);
-    clean_addr = gen_mte_check1(s, dirty_addr, true, rn != 31, memop);
+    /*
+     * FIXME: We are out of spec here.  We have recorded only the address
+     * from load_exclusive, not the entire range, and we assume that the
+     * size of the access on both sides match.  The architecture allows the
+     * store to be smaller than the load, so long as the stored bytes are
+     * within the range recorded by the load.
+     */
+    /* See AArch64.ExclusiveMonitorsPass() and AArch64.IsExclusiveVA(). */
+    clean_addr = clean_data_tbi(s, cpu_reg_sp(s, rn));
     tcg_gen_brcond_i64(TCG_COND_NE, clean_addr, cpu_exclusive_addr,
                        fail_label);
 
+    /*
+     * The write, and any associated faults, only happen if the virtual
+     * and physical addresses pass the exclusive monitor check.  These
+     * faults are exceedingly unlikely, because normally the guest uses
+     * the exact same address register for the load_exclusive, and we
+     * would have recognized these faults there.
+     *
+     * It is possible to trigger an alignment fault pre-LSE2, e.g. with an
+     * unaligned 4-byte write within the range of an aligned 8-byte load.
+     * With LSE2, the store would need to cross a 16-byte boundary when the
+     * load did not, which would mean the store is outside the range
+     * recorded for the monitor, which would have failed a corrected monitor
+     * check above.  For now, we assume no size change and retain the
+     * MO_ALIGN to let tcg know what we checked in the load_exclusive.
+     *
+     * It is possible to trigger an MTE fault, by performing the load with
+     * a virtual address with a valid tag and performing the store with the
+     * same virtual address and a different invalid tag.
+     */
+    memop = size + is_pair;
+    if (memop == MO_128 || !dc_isar_feature(aa64_lse2, s)) {
+        memop |= MO_ALIGN;
+    }
+    memop = finalize_memop(s, memop);
+    gen_mte_check1(s, cpu_reg_sp(s, rn), true, rn != 31, memop);
+
     tmp = tcg_temp_new_i64();
     if (is_pair) {
         if (size == 2) {
--
2.34.1
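
For reference, the MTE fault described in the final comment can be
provoked from guest code roughly as follows.  This is a sketch only: it
assumes a Linux/AArch64 process that has already mapped the buffer with
PROT_MTE, enabled synchronous tag checking and the tagged-address ABI,
and correctly tagged "good"; with_other_tag() and stxr_with_bad_tag()
are illustrative names, not part of the patch.

#include <stdint.h>

/* Flip bits in the logical tag field (bits 59:56), keeping the same VA. */
static inline uint64_t with_other_tag(uint64_t p)
{
    return p ^ (0x5ull << 56);
}

/* Arm the monitor through a valid tag, store through a mismatched one. */
static uint32_t stxr_with_bad_tag(uint64_t *good)
{
    uint64_t *bad = (uint64_t *)with_other_tag((uint64_t)good);
    uint64_t val;
    uint32_t status;

    __asm__ volatile(
        "ldxr %0, [%2]\n\t"        /* monitor armed via the valid tag */
        "stxr %w1, %0, [%3]\n\t"   /* same VA, wrong tag: MTE fault   */
        : "=&r"(val), "=&r"(status)
        : "r"(good), "r"(bad)
        : "memory");
    return status;                 /* not reached on a synchronous fault */
}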