From: BALATON Zoltan
Subject: Re: [PATCH v4 6/6] hw/ppc/epapr: Do not swap ePAPR magic value
Date: Sun, 22 Dec 2024 20:08:21 +0100 (CET)

On Fri, 20 Dec 2024, Philippe Mathieu-Daudé wrote:
> The ePAPR magic value in $r6 doesn't need to be byte-swapped.
>
> See ePAPR-v1.1.pdf chapter 5.4.1 "Boot CPU Initial Register State"
> and the following mailing-list threads:
> https://lore.kernel.org/qemu-devel/CAFEAcA_NR4XW5DNL4nq7vnH4XRH5UWbhQCxuLyKqYk6_FCBrAA@mail.gmail.com/
> https://lore.kernel.org/qemu-devel/D6F93NM6OW2L.2FDO88L38PABR@gmail.com/
>
> Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
> Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
The Linux image I have still seems to boot on sam460ex, so

Tested-by: BALATON Zoltan <balaton@eik.bme.hu>

> ---
>  hw/ppc/sam460ex.c     | 2 +-
>  hw/ppc/virtex_ml507.c | 2 +-
>  2 files changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/hw/ppc/sam460ex.c b/hw/ppc/sam460ex.c
> index 78e2a46e753..db9c8f3fa6e 100644
> --- a/hw/ppc/sam460ex.c
> +++ b/hw/ppc/sam460ex.c
> @@ -234,7 +234,7 @@ static void main_cpu_reset(void *opaque)
>
>          /* Create a mapping for the kernel.  */
>          booke_set_tlb(&env->tlb.tlbe[0], 0, 0, 1 << 31);
> -        env->gpr[6] = tswap32(EPAPR_MAGIC);
> +        env->gpr[6] = EPAPR_MAGIC;
>          env->gpr[7] = (16 * MiB) - 8; /* bi->ima_size; */
>
>      } else {
> diff --git a/hw/ppc/virtex_ml507.c b/hw/ppc/virtex_ml507.c
> index f378e5c4a90..6197d31d88f 100644
> --- a/hw/ppc/virtex_ml507.c
> +++ b/hw/ppc/virtex_ml507.c
> @@ -119,7 +119,7 @@ static void main_cpu_reset(void *opaque)
>      /* Create a mapping spanning the 32bit addr space. */
>      booke_set_tlb(&env->tlb.tlbe[0], 0, 0, 1U << 31);
>      booke_set_tlb(&env->tlb.tlbe[1], 0x80000000, 0x80000000, 1U << 31);
> -    env->gpr[6] = tswap32(EPAPR_MAGIC);
> +    env->gpr[6] = EPAPR_MAGIC;
>      env->gpr[7] = bi->ima_size;
>  }
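
For reference, a minimal self-contained sketch of why the swap was wrong
(the EPAPR_MAGIC value and the bswap32 helper below are illustrative
assumptions, not QEMU code): tswap32() byte-swaps only when guest and host
endianness differ, and it is meant for values the guest re-reads from
memory as raw bytes. Registers in env->gpr[] are kept as plain host
integers, so on a little-endian host the swap handed the big-endian PPC
guest a scrambled magic value.

#include <stdint.h>
#include <stdio.h>

/* Assumed value for illustration, mirroring QEMU's EPAPR_MAGIC. */
#define EPAPR_MAGIC 0x45504150u

/* Hypothetical stand-in for what tswap32() reduces to on a
 * little-endian host running a big-endian PowerPC guest:
 * an unconditional byte swap. */
static uint32_t bswap32(uint32_t v)
{
    return (v >> 24) | ((v >> 8) & 0x0000ff00u) |
           ((v << 8) & 0x00ff0000u) | (v << 24);
}

int main(void)
{
    /* A register is modelled as a plain integer: the guest observes
     * exactly the value stored, so no byte-order conversion applies. */
    printf("r6 without swap: 0x%08x\n", EPAPR_MAGIC);          /* 0x45504150 */
    printf("r6 with swap:    0x%08x\n", bswap32(EPAPR_MAGIC)); /* 0x50415045 */
    return 0;
}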

