From: Richard Henderson
Subject: [PATCH v2 07/54] accel/tcg: Assert bits in range in tlb_flush_range_by_mmuidx*
Date: Thu, 14 Nov 2024 08:00:43 -0800
The only target that passes anything other than TARGET_LONG_BITS is Arm,
which only reduces bits based on TBI (top-byte ignore). There is no
point in handling odd combinations of parameters.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
accel/tcg/cputlb.c | 16 ++++------------
1 file changed, 4 insertions(+), 12 deletions(-)
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 1346a26d90..5510f40333 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -792,20 +792,16 @@ void tlb_flush_range_by_mmuidx(CPUState *cpu, vaddr addr,
     assert_cpu_is_self(cpu);
     assert(len != 0);
+    assert(bits > TARGET_PAGE_BITS && bits <= TARGET_LONG_BITS);
 
     /*
      * If all bits are significant, and len is small,
      * this devolves to tlb_flush_page.
      */
-    if (bits >= TARGET_LONG_BITS && len <= TARGET_PAGE_SIZE) {
+    if (bits == TARGET_LONG_BITS && len <= TARGET_PAGE_SIZE) {
         tlb_flush_page_by_mmuidx(cpu, addr, idxmap);
         return;
     }
-    /* If no page bits are significant, this devolves to tlb_flush. */
-    if (bits < TARGET_PAGE_BITS) {
-        tlb_flush_by_mmuidx(cpu, idxmap);
-        return;
-    }
 
     /* This should already be page aligned */
     d.addr = addr & TARGET_PAGE_MASK;
@@ -832,20 +828,16 @@ void tlb_flush_range_by_mmuidx_all_cpus_synced(CPUState *src_cpu,
     CPUState *dst_cpu;
 
     assert(len != 0);
+    assert(bits > TARGET_PAGE_BITS && bits <= TARGET_LONG_BITS);
 
     /*
      * If all bits are significant, and len is small,
      * this devolves to tlb_flush_page.
      */
-    if (bits >= TARGET_LONG_BITS && len <= TARGET_PAGE_SIZE) {
+    if (bits == TARGET_LONG_BITS && len <= TARGET_PAGE_SIZE) {
         tlb_flush_page_by_mmuidx_all_cpus_synced(src_cpu, addr, idxmap);
         return;
     }
-    /* If no page bits are significant, this devolves to tlb_flush. */
-    if (bits < TARGET_PAGE_BITS) {
-        tlb_flush_by_mmuidx_all_cpus_synced(src_cpu, idxmap);
-        return;
-    }
 
     /* This should already be page aligned */
     d.addr = addr & TARGET_PAGE_MASK;
--
2.43.0