From: Emilio G. Cota
Subject: Re: [Qemu-devel] MTTCG qemu-softmmu aborted on watchpoint hit by atomic instruction
Date: Tue, 15 Jan 2019 09:51:17 -0500
User-agent: Mutt/1.9.4 (2018-02-28)

On Mon, Jan 14, 2019 at 18:49:43 -0800, Max Filippov wrote:
> Hello,
> 
> I tried to debug a guest application on SMP xtensa softmmu QEMU
> through the gdbserver and found that QEMU aborts when the guest
> uses an atomic operation to modify a memory location watched by
> the debugger. It exits with the following message:
> 
> ERROR: cpus.c:1848:qemu_mutex_lock_iothread_impl: assertion failed:
> (!qemu_mutex_iothread_locked())
> 
> and the reason is that io_writex invoked from the atomic operation
> calls qemu_mutex_lock_iothread but doesn't have a chance to call
> qemu_mutex_unlock_iothread, because it exits the cpu loop at the
> following place:
> 
> #0  __libc_siglongjmp (env=0x55555628c720, val=1) at longjmp.c:28
> #1  0x000055555577ef24 in cpu_loop_exit (cpu=0x55555628c660) at
> /home/jcmvbkbc/ws/m/awt/emu/xtensa/qemu/accel/tcg/cpu-exec-common.c:68
> #2  0x00005555556e23dd in check_watchpoint (offset=3700, len=4,
> attrs=..., flags=2) at
> /home/jcmvbkbc/ws/m/awt/emu/xtensa/qemu/exec.c:2762
(snip)
> #12 0x000055555577dfa1 in cpu_exec_step_atomic (cpu=0x55555628c660) at
> /home/jcmvbkbc/ws/m/awt/emu/xtensa/qemu/accel/tcg/cpu-exec.c:259
(snip)
> 
> It doesn't look like an xtensa-specific issue; any idea how to fix it?
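
(For illustration: a minimal, standalone sketch of the failure pattern described
above, in which a lock taken deep in a call chain is never released because
siglongjmp() unwinds straight back to the outer sigsetjmp(). This is not QEMU
code; the names mimic the QEMU functions involved but are purely illustrative.)

/*
 * Standalone sketch of the bug: the "iothread lock" is taken inside the
 * I/O path, a watchpoint hit longjmps out before the unlock runs, and the
 * next attempt to take the lock trips the "not already locked" assertion,
 * just like cpus.c:qemu_mutex_lock_iothread_impl() does in the report.
 */
#include <assert.h>
#include <pthread.h>
#include <setjmp.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t iothread_lock = PTHREAD_MUTEX_INITIALIZER;
static __thread bool iothread_locked;   /* mimics qemu_mutex_iothread_locked() */
static sigjmp_buf cpu_jmp_env;          /* mimics cpu->jmp_env */

static void lock_iothread(void)
{
    assert(!iothread_locked);           /* the assertion that fires */
    pthread_mutex_lock(&iothread_lock);
    iothread_locked = true;
}

static void unlock_iothread(void)
{
    iothread_locked = false;
    pthread_mutex_unlock(&iothread_lock);
}

/* Stands in for check_watchpoint() -> cpu_loop_exit(): unwinds via longjmp. */
static void hit_watchpoint(void)
{
    siglongjmp(cpu_jmp_env, 1);
}

/* Stands in for io_writex(): lock, access, unlock -- unless the access
 * longjmps out first, in which case the unlock is skipped. */
static void io_write(bool watched)
{
    lock_iothread();
    if (watched) {
        hit_watchpoint();               /* never returns */
    }
    unlock_iothread();
}

int main(void)
{
    for (volatile int i = 0; i < 2; i++) {
        if (sigsetjmp(cpu_jmp_env, 0) == 0) {
            io_write(true);             /* first pass leaves the lock held */
        }
        /* no cleanup here: the second pass asserts in lock_iothread() */
    }
    printf("not reached\n");
    return 0;
}
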

Can you please try the appended?

Thanks,

                Emilio

diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
index 870027d435..a5258bcbc8 100644
--- a/accel/tcg/cpu-exec.c
+++ b/accel/tcg/cpu-exec.c
@@ -266,6 +266,9 @@ void cpu_exec_step_atomic(CPUState *cpu)
 #ifndef CONFIG_SOFTMMU
         tcg_debug_assert(!have_mmap_lock());
 #endif
+        if (qemu_mutex_iothread_locked()) {
+            qemu_mutex_unlock_iothread();
+        }
         assert_no_pages_locked();
     }


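(The fix drops the lock in the sigsetjmp error path of cpu_exec_step_atomic(),
much like the recovery cpu_exec() already performs after a longjmp out of the
main execution loop. In terms of the standalone sketch above, the unwind path
gains a conditional unlock, roughly as follows; this reuses the helpers defined
there and is, again, only an illustration, not the QEMU code itself.)

/*
 * Sketch of the recovery the patch adds: after the siglongjmp unwind,
 * release the lock if the interrupted path left it held, so the next
 * access can take it again without tripping the assertion.
 */
static void step_once(void)
{
    if (sigsetjmp(cpu_jmp_env, 0) == 0) {
        io_write(true);                 /* may longjmp with the lock held */
        return;
    }
    /* unwound via siglongjmp: drop the lock if it is still ours */
    if (iothread_locked) {
        unlock_iothread();
    }
}
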
