[PULL 39/45] linux-user/aarch64: Move sve record checks into restore
From: Peter Maydell
Subject: [PULL 39/45] linux-user/aarch64: Move sve record checks into restore
Date: Mon, 11 Jul 2022 14:57:44 +0100
From: Richard Henderson <richard.henderson@linaro.org>
Move the checks out of the parsing loop and into the
restore function. This more closely mirrors the code
structure in the kernel, and is slightly clearer.
Reject rather than silently skip incorrect VL and SVE record sizes,
bringing our checks into line with those the kernel does.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220708151540.18136-40-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
linux-user/aarch64/signal.c | 51 +++++++++++++++++++++++++------------
1 file changed, 35 insertions(+), 16 deletions(-)
diff --git a/linux-user/aarch64/signal.c b/linux-user/aarch64/signal.c
index 9ff79da4be0..22d0b8b4ece 100644
--- a/linux-user/aarch64/signal.c
+++ b/linux-user/aarch64/signal.c
@@ -250,12 +250,36 @@ static void target_restore_fpsimd_record(CPUARMState *env,
}
}
-static void target_restore_sve_record(CPUARMState *env,
- struct target_sve_context *sve, int vq)
+static bool target_restore_sve_record(CPUARMState *env,
+ struct target_sve_context *sve,
+ int size)
{
- int i, j;
+ int i, j, vl, vq;
- /* Note that SVE regs are stored as a byte stream, with each byte element
+ if (!cpu_isar_feature(aa64_sve, env_archcpu(env))) {
+ return false;
+ }
+
+ __get_user(vl, &sve->vl);
+ vq = sve_vq(env);
+
+ /* Reject mismatched VL. */
+ if (vl != vq * TARGET_SVE_VQ_BYTES) {
+ return false;
+ }
+
+ /* Accept empty record -- used to clear PSTATE.SM. */
+ if (size <= sizeof(*sve)) {
+ return true;
+ }
+
+ /* Reject non-empty but incomplete record. */
+ if (size < TARGET_SVE_SIG_CONTEXT_SIZE(vq)) {
+ return false;
+ }
+
+ /*
+ * Note that SVE regs are stored as a byte stream, with each byte element
* at a subsequent address. This corresponds to a little-endian load
* of our 64-bit hunks.
*/
@@ -277,6 +301,7 @@ static void target_restore_sve_record(CPUARMState *env,
}
}
}
+ return true;
}
static int target_restore_sigframe(CPUARMState *env,
@@ -287,7 +312,7 @@ static int target_restore_sigframe(CPUARMState *env,
struct target_sve_context *sve = NULL;
uint64_t extra_datap = 0;
bool used_extra = false;
- int vq = 0, sve_size = 0;
+ int sve_size = 0;
target_restore_general_frame(env, sf);
@@ -321,15 +346,9 @@ static int target_restore_sigframe(CPUARMState *env,
if (sve || size < sizeof(struct target_sve_context)) {
goto err;
}
- if (cpu_isar_feature(aa64_sve, env_archcpu(env))) {
- vq = sve_vq(env);
- sve_size = QEMU_ALIGN_UP(TARGET_SVE_SIG_CONTEXT_SIZE(vq), 16);
- if (size == sve_size) {
- sve = (struct target_sve_context *)ctx;
- break;
- }
- }
- goto err;
+ sve = (struct target_sve_context *)ctx;
+ sve_size = size;
+ break;
case TARGET_EXTRA_MAGIC:
if (extra || size != sizeof(struct target_extra_context)) {
@@ -362,8 +381,8 @@ static int target_restore_sigframe(CPUARMState *env,
}
/* SVE data, if present, overwrites FPSIMD data. */
- if (sve) {
- target_restore_sve_record(env, sve, vq);
+ if (sve && !target_restore_sve_record(env, sve, sve_size)) {
+ goto err;
}
unlock_user(extra, extra_datap, 0);
return 0;
--
2.25.1