[PULL 02/39] accel/tcg: Fix the comment for CPUTLBEntryFull
From: Richard Henderson
Subject: [PULL 02/39] accel/tcg: Fix the comment for CPUTLBEntryFull
Date: Fri, 15 Sep 2023 20:29:34 -0700
From: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
When the memory region is RAM, the lower TARGET_PAGE_BITS do not hold a
physical section number; their value is always 0.
Add a comment and an assert to make this clear.
Signed-off-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Message-Id: <20230901060118.379-1-zhiwei_liu@linux.alibaba.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
include/exec/cpu-defs.h | 12 ++++++------
accel/tcg/cputlb.c | 11 +++++++----
2 files changed, 13 insertions(+), 10 deletions(-)
diff --git a/include/exec/cpu-defs.h b/include/exec/cpu-defs.h
index fb4c8d480f..350287852e 100644
--- a/include/exec/cpu-defs.h
+++ b/include/exec/cpu-defs.h
@@ -100,12 +100,12 @@
 typedef struct CPUTLBEntryFull {
     /*
      * @xlat_section contains:
-     *  - in the lower TARGET_PAGE_BITS, a physical section number
-     *  - with the lower TARGET_PAGE_BITS masked off, an offset which
-     *    must be added to the virtual address to obtain:
-     *     + the ram_addr_t of the target RAM (if the physical section
-     *       number is PHYS_SECTION_NOTDIRTY or PHYS_SECTION_ROM)
-     *     + the offset within the target MemoryRegion (otherwise)
+     *  - For ram, an offset which must be added to the virtual address
+     *    to obtain the ram_addr_t of the target RAM
+     *  - For other memory regions,
+     *     + in the lower TARGET_PAGE_BITS, the physical section number
+     *     + with the TARGET_PAGE_BITS masked off, the offset within
+     *       the target MemoryRegion
      */
     hwaddr xlat_section;
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index c643d66190..03e27b2a38 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1193,6 +1193,7 @@ void tlb_set_page_full(CPUState *cpu, int mmu_idx,
     write_flags = read_flags;
     if (is_ram) {
         iotlb = memory_region_get_ram_addr(section->mr) + xlat;
+        assert(!(iotlb & ~TARGET_PAGE_MASK));
         /*
          * Computing is_clean is expensive; avoid all that unless
          * the page is actually writable.
@@ -1255,10 +1256,12 @@ void tlb_set_page_full(CPUState *cpu, int mmu_idx,
     /* refill the tlb */
     /*
-     * At this point iotlb contains a physical section number in the lower
-     * TARGET_PAGE_BITS, and either
-     *  + the ram_addr_t of the page base of the target RAM  (RAM)
-     *  + the offset within section->mr of the page base     (I/O, ROMD)
+     * When memory region is ram, iotlb contains a TARGET_PAGE_BITS
+     * aligned ram_addr_t of the page base of the target RAM.
+     * Otherwise, iotlb contains
+     *  - a physical section number in the lower TARGET_PAGE_BITS
+     *  - the offset within section->mr of the page base (I/O, ROMD) with the
+     *    TARGET_PAGE_BITS masked off.
      * We subtract addr_page (which is page aligned and thus won't
      * disturb the low bits) to give an offset which can be added to the
      * (non-page-aligned) vaddr of the eventual memory access to get
--
2.34.1