Re: [PATCH 06/37] target/i386: add ALU load/writeback core
From: Richard Henderson
Subject: Re: [PATCH 06/37] target/i386: add ALU load/writeback core
Date: Mon, 12 Sep 2022 11:02:01 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Thunderbird/91.11.0
On 9/12/22 00:03, Paolo Bonzini wrote:
Add generic code generation that takes care of preparing operands
around calls to decode.e.gen in a table-driven manner, so that ALU
operations need not take care of that.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
target/i386/tcg/decode-new.c.inc | 20 +++-
target/i386/tcg/decode-new.h | 1 +
target/i386/tcg/emit.c.inc | 152 +++++++++++++++++++++++++++++++
target/i386/tcg/translate.c | 24 +++++
4 files changed, 195 insertions(+), 2 deletions(-)
diff --git a/target/i386/tcg/decode-new.c.inc b/target/i386/tcg/decode-new.c.inc
index de8ef51a2d..7f76051b2d 100644
--- a/target/i386/tcg/decode-new.c.inc
+++ b/target/i386/tcg/decode-new.c.inc
@@ -228,7 +228,7 @@ static bool decode_op_size(DisasContext *s, X86OpEntry *e, X86OpSize size, MemOp
*ot = MO_64;
return true;
}
- if (s->vex_l && e->s0 != X86_SIZE_qq) {
+ if (s->vex_l && e->s0 != X86_SIZE_qq && e->s1 != X86_SIZE_qq) {
return false;
}
Squash back?
diff --git a/target/i386/tcg/emit.c.inc b/target/i386/tcg/emit.c.inc
index e86364ffc1..6fa0062d6a 100644
--- a/target/i386/tcg/emit.c.inc
+++ b/target/i386/tcg/emit.c.inc
@@ -29,3 +29,155 @@ static void gen_load_ea(DisasContext *s, AddressParts *mem)
TCGv ea = gen_lea_modrm_1(s, *mem);
gen_lea_v_seg(s, s->aflag, ea, mem->def_seg, s->override);
}
+
+static void gen_mmx_offset(TCGv_ptr ptr, X86DecodedOp *op)
+{
+ if (!op->has_ea) {
+ op->offset = offsetof(CPUX86State, fpregs[op->n].mmx);
+ } else {
+ op->offset = offsetof(CPUX86State, mmx_t0);
+ }
+ tcg_gen_addi_ptr(ptr, cpu_env, op->offset);
It's a shame to generate this so early, when you don't know if you'll need it. Better to
build these in the gen_binary_int_sse helper, immediately before they're required?
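Something like this, say (an untested sketch -- the op indices are made up, and I'm reusing the existing s->ptr0/s->ptr1 temporaries and the SSEFunc_0_epp typedef from translate.c):

static void gen_binary_int_sse(DisasContext *s, X86DecodedInsn *decode,
                               SSEFunc_0_epp fn)
{
    /* Materialize the env-relative pointers only when the helper
       call actually needs them.  */
    tcg_gen_addi_ptr(s->ptr0, cpu_env, decode->op[0].offset);
    tcg_gen_addi_ptr(s->ptr1, cpu_env, decode->op[2].offset);
    fn(cpu_env, s->ptr0, s->ptr1);
}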
+
+ /*
+ * ptr is for passing to helpers, and points to the MMXReg; op->offset
+ * is for TCG ops and points to the operand.
+ */
+ if (op->ot == MO_32) {
+ op->offset += offsetof(MMXReg, MMX_L(0));
+ }
I guess you'd need an op->offset_base if you do the above...
Switch and g_assert_not_reached on invalid ot?
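E.g. (untested; offset_base is the hypothetical new field, I'm assuming only MO_32 and MO_64 ever reach here for MMX, and the TCGv_ptr parameter goes away once the addi_ptr moves into the helper):

static void gen_mmx_offset(X86DecodedOp *op)
{
    if (!op->has_ea) {
        op->offset = offsetof(CPUX86State, fpregs[op->n].mmx);
    } else {
        op->offset = offsetof(CPUX86State, mmx_t0);
    }

    /* offset_base is for helpers and points to the MMXReg;
       offset is for TCG ops and points to the operand.  */
    op->offset_base = op->offset;
    switch (op->ot) {
    case MO_32:
        op->offset += offsetof(MMXReg, MMX_L(0));
        break;
    case MO_64:
        break;
    default:
        g_assert_not_reached();
    }
}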
+static int xmm_offset(MemOp ot)
+{
+ if (ot == MO_8) {
+ return offsetof(ZMMReg, ZMM_B(0));
+ } else if (ot == MO_16) {
+ return offsetof(ZMMReg, ZMM_W(0));
+ } else if (ot == MO_32) {
+ return offsetof(ZMMReg, ZMM_L(0));
+ } else if (ot == MO_64) {
+ return offsetof(ZMMReg, ZMM_Q(0));
+ } else if (ot == MO_128) {
+ return offsetof(ZMMReg, ZMM_X(0));
+ } else if (ot == MO_256) {
+ return offsetof(ZMMReg, ZMM_Y(0));
+ } else {
+ abort();
Switch, g_assert_not_reached().
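That is:

static int xmm_offset(MemOp ot)
{
    switch (ot) {
    case MO_8:
        return offsetof(ZMMReg, ZMM_B(0));
    case MO_16:
        return offsetof(ZMMReg, ZMM_W(0));
    case MO_32:
        return offsetof(ZMMReg, ZMM_L(0));
    case MO_64:
        return offsetof(ZMMReg, ZMM_Q(0));
    case MO_128:
        return offsetof(ZMMReg, ZMM_X(0));
    case MO_256:
        return offsetof(ZMMReg, ZMM_Y(0));
    default:
        g_assert_not_reached();
    }
}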
+static void gen_load_sse(DisasContext *s, TCGv temp, MemOp ot, int dest_ofs)
+{
+ if (ot == MO_8) {
+ gen_op_ld_v(s, MO_8, temp, s->A0);
+ tcg_gen_st8_tl(temp, cpu_env, dest_ofs);
+ } else if (ot == MO_16) {
+ gen_op_ld_v(s, MO_16, temp, s->A0);
+ tcg_gen_st16_tl(temp, cpu_env, dest_ofs);
+ } else if (ot == MO_32) {
+ gen_op_ld_v(s, MO_32, temp, s->A0);
+ tcg_gen_st32_tl(temp, cpu_env, dest_ofs);
+ } else if (ot == MO_64) {
+ gen_ldq_env_A0(s, dest_ofs);
+ } else if (ot == MO_128) {
+ gen_ldo_env_A0(s, dest_ofs);
+ } else if (ot == MO_256) {
+ gen_ldy_env_A0(s, dest_ofs);
+ }
Likewise: a switch with a g_assert_not_reached() default, as sketched above.
+static void gen_writeback(DisasContext *s, X86DecodedOp *op)
+{
+ switch (op->unit) {
+ case X86_OP_SKIP:
+ break;
+ case X86_OP_SEG:
+ /* Note that reg == R_SS in gen_movl_seg_T0 always sets is_jmp. */
+ gen_movl_seg_T0(s, op->n);
+ if (s->base.is_jmp) {
+ gen_jmp_im(s, s->pc - s->cs_base);
+ if (op->n == R_SS) {
+ s->flags &= ~HF_TF_MASK;
+ gen_eob_inhibit_irq(s, true);
+ } else {
+ gen_eob(s);
+ }
+ }
+ break;
+ case X86_OP_CR:
+ case X86_OP_DR:
+ /* TBD */
+ break;
Leave these adjacent to the default abort until they're needed?
+ default:
+ abort();
+ }
g_assert_not_reached().
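I.e. collapse the tail of the switch to

    case X86_OP_CR:
    case X86_OP_DR:
        /* Not implemented yet; keep adjacent to default.  */
    default:
        g_assert_not_reached();
    }

until somebody actually needs CR/DR writeback.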
+static inline void gen_ldy_env_A0(DisasContext *s, int offset)
+{
+ int mem_index = s->mem_index;
+ gen_ldo_env_A0(s, offset);
+ tcg_gen_addi_tl(s->tmp0, s->A0, 16);
+ tcg_gen_qemu_ld_i64(s->tmp1_i64, s->tmp0, mem_index, MO_LEUQ);
+ tcg_gen_st_i64(s->tmp1_i64, cpu_env, offset + offsetof(ZMMReg, ZMM_Q(2)));
+ tcg_gen_addi_tl(s->tmp0, s->A0, 24);
+ tcg_gen_qemu_ld_i64(s->tmp1_i64, s->tmp0, mem_index, MO_LEUQ);
+ tcg_gen_st_i64(s->tmp1_i64, cpu_env, offset + offsetof(ZMMReg, ZMM_Q(3)));
+}
+
+static inline void gen_sty_env_A0(DisasContext *s, int offset)
+{
+ int mem_index = s->mem_index;
+ gen_sto_env_A0(s, offset);
+ tcg_gen_addi_tl(s->tmp0, s->A0, 16);
+ tcg_gen_ld_i64(s->tmp1_i64, cpu_env, offset + offsetof(ZMMReg, ZMM_Q(2)));
+ tcg_gen_qemu_st_i64(s->tmp1_i64, s->tmp0, mem_index, MO_LEUQ);
+ tcg_gen_addi_tl(s->tmp0, s->A0, 24);
+ tcg_gen_ld_i64(s->tmp1_i64, cpu_env, offset + offsetof(ZMMReg, ZMM_Q(3)));
+ tcg_gen_qemu_st_i64(s->tmp1_i64, s->tmp0, mem_index, MO_LEUQ);
+}
No need for inline markers.
Note that there's an outstanding patch set that enforces alignment restrictions (for
ldy/sty it would only be for vmovdqa etc):
https://lore.kernel.org/qemu-devel/20220830034816.57091-2-ricky@rzhou.org/
but it's definitely something that ought to be built into the new decoder from the
start.
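For instance, gen_ldy_env_A0 could grow an alignment flag, checked only on the first access (untested sketch; the bool parameter is hypothetical, MO_ALIGN_32 is the existing MemOp flag, and gen_ldo_env_A0 is folded in so the check lands on the low quadword):

static void gen_ldy_env_A0(DisasContext *s, int offset, bool align)
{
    int mem_index = s->mem_index;
    MemOp mop = MO_LEUQ | (align ? MO_ALIGN_32 : 0);

    /* Only the first load carries the alignment check; the rest of
       the 32-byte access is then aligned by construction.  */
    tcg_gen_qemu_ld_i64(s->tmp1_i64, s->A0, mem_index, mop);
    tcg_gen_st_i64(s->tmp1_i64, cpu_env, offset + offsetof(ZMMReg, ZMM_Q(0)));
    for (int i = 1; i < 4; i++) {
        tcg_gen_addi_tl(s->tmp0, s->A0, 8 * i);
        tcg_gen_qemu_ld_i64(s->tmp1_i64, s->tmp0, mem_index, MO_LEUQ);
        tcg_gen_st_i64(s->tmp1_i64, cpu_env,
                       offset + offsetof(ZMMReg, ZMM_Q(i)));
    }
}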
r~