
Re: [PATCH 3/5] target/i386: Fix physical address truncation


From: Paolo Bonzini
Subject: Re: [PATCH 3/5] target/i386: Fix physical address truncation
Date: Sat, 23 Dec 2023 12:47:38 +0100



On Sat, Dec 23, 2023 at 11:34 Michael Brown <mcb30@ipxe.org> wrote:
> I am confused by how BOUND can result in an access to a linear address
> outside of the address-size range.  I don't know the internals well
> enough, but I'm guessing it might be in the line in helper_boundl():
>
>      high = cpu_ldl_data_ra(env, a0 + 4, GETPC());
>
> where an address is calculated as (a0 + 4) using a 64-bit target_ulong
> type with no truncation to 32 bits applied.
>
> If so, then ought the truncation to be applied on this line instead (and
> the equivalent in helper_boundw())?  My understanding (which may well be
> incorrect) is that the linear address gets truncated to the instruction
> address size (16 or 32 bits) before any conversion to a physical address
> takes place.

The linear address is the one that has the segment base added, and it is not truncated to the 16-bit address size (otherwise the whole A20 mechanism would not exist). The same should be true of e.g. an FSAVE instruction; the absence of truncation is what allows access slightly beyond the usual 1M+64K limit that is possible in real mode on 286 and later processors.
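
(A quick worked example of the real-mode arithmetic Paolo refers to, in
plain C and assuming nothing beyond the standard library:)

    #include <stdio.h>

    int main(void)
    {
        /* FFFF:0010 in real mode: linear = segment * 16 + offset. */
        unsigned linear = (0xFFFFu << 4) + 0x0010u;  /* 0x100000, just past 1M */
        printf("linear  = 0x%06X\n", linear);
        /* With the A20 line gated off, bit 20 is masked and the access
         * wraps back to 0x000000, as on an 8086: */
        printf("wrapped = 0x%06X\n", linear & 0xFFFFFu);
        return 0;
    }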

In big real mode with 32-bit addresses, it should not be possible to go beyond the 4G physical address by adding the segment base; the address should wrap around, and that is what I implemented. However, you're probably right that this patch has a hole for accesses made from 32-bit code segments with paging enabled. I think LMA was the wrong bit to test in all cases, and I am not even sure whether the masking must be applied even before the call to mmu_translate(). I will ponder it a bit and possibly send a revised version.
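
(A hedged sketch of the distinction Paolo is weighing; the helper name is
invented for illustration, although HF_LMA_MASK and HF_CS64_MASK are real
hflags bits in target/i386/cpu.h. Whether this is the right condition, and
where it must apply relative to mmu_translate(), is exactly the open
question:)

    /* Mask the linear address to 32 bits unless the CPU is in 64-bit
     * mode proper, i.e. long mode active (LMA) *and* a 64-bit code
     * segment.  Testing LMA alone would leave 32-bit code segments
     * running under a long-mode kernel (compatibility mode) unmasked. */
    static target_ulong linear_addr_mask(CPUX86State *env, target_ulong addr)
    {
        if (!(env->hflags & HF_LMA_MASK) || !(env->hflags & HF_CS64_MASK)) {
            addr = (uint32_t)addr;
        }
        return addr;
    }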

Paolo


> Regardless: this updated patch (in isolation) definitely fixes the issue
> that I observed, so I'm happy for an added
>
> Tested-by: Michael Brown <mcb30@ipxe.org>
>
> Thanks,
>
> Michael

