From: Richard Henderson
Subject: [PULL 09/22] accel/tcg: Replace target_ulong with vaddr in *_mmu_lookup()
Date: Mon, 26 Jun 2023 17:39:32 +0200
From: Anton Johansson <anjo@rev.ng>
Update atomic_mmu_lookup() and cpu_mmu_lookup() to take the guest
virtual address as a vaddr instead of a target_ulong.
Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230621135633.1649-10-anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
accel/tcg/cputlb.c | 6 +++---
accel/tcg/user-exec.c | 6 +++---
2 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index d873e58a5d..e02cfc550e 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1898,15 +1898,15 @@ static bool mmu_lookup(CPUArchState *env, vaddr addr, MemOpIdx oi,
* Probe for an atomic operation. Do not allow unaligned operations,
* or io operations to proceed. Return the host address.
*/
-static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
- MemOpIdx oi, int size, uintptr_t retaddr)
+static void *atomic_mmu_lookup(CPUArchState *env, vaddr addr, MemOpIdx oi,
+ int size, uintptr_t retaddr)
{
uintptr_t mmu_idx = get_mmuidx(oi);
MemOp mop = get_memop(oi);
int a_bits = get_alignment_bits(mop);
uintptr_t index;
CPUTLBEntry *tlbe;
- target_ulong tlb_addr;
+ vaddr tlb_addr;
void *hostaddr;
CPUTLBEntryFull *full;
diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
index d71e26a7b5..f8b16d6ab8 100644
--- a/accel/tcg/user-exec.c
+++ b/accel/tcg/user-exec.c
@@ -889,7 +889,7 @@ void page_reset_target_data(target_ulong start, target_ulong last) { }
/* The softmmu versions of these helpers are in cputlb.c. */
-static void *cpu_mmu_lookup(CPUArchState *env, abi_ptr addr,
+static void *cpu_mmu_lookup(CPUArchState *env, vaddr addr,
MemOp mop, uintptr_t ra, MMUAccessType type)
{
int a_bits = get_alignment_bits(mop);
@@ -1324,8 +1324,8 @@ uint64_t cpu_ldq_code_mmu(CPUArchState *env, abi_ptr addr,
/*
* Do not allow unaligned operations to proceed. Return the host address.
*/
-static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
- MemOpIdx oi, int size, uintptr_t retaddr)
+static void *atomic_mmu_lookup(CPUArchState *env, vaddr addr, MemOpIdx oi,
+ int size, uintptr_t retaddr)
{
MemOp mop = get_memop(oi);
int a_bits = get_alignment_bits(mop);
--
2.34.1