Re: [PATCH] Semihost SYS_READC implementation (v6)


From: Alex Bennée
Subject: Re: [PATCH] Semihost SYS_READC implementation (v6)
Date: Tue, 17 Dec 2019 09:51:13 +0000
User-agent: mu4e 1.3.5; emacs 27.0.50

Paolo Bonzini <address@hidden> writes:

> On 17/12/19 09:38, Alex Bennée wrote:
>>   Thread 3 (Thread 0x7f8b1959e700 (LWP 14017)):
>>   #0  0x00007f8b2ada900c in futex_wait_cancelable (private=0, expected=0, futex_word=0x56213f5482e8 <console+136>) at ../sysdeps/unix/sysv/linux/futex-internal.h:88
>>   #1  0x00007f8b2ada900c in __pthread_cond_wait_common (abstime=0x0, mutex=0x56213f548298 <console+56>, cond=0x56213f5482c0 <console+96>) at pthread_cond_wait.c:502
>>   #2  0x00007f8b2ada900c in __pthread_cond_wait (cond=cond@entry=0x56213f5482c0 <console+96>, mutex=mutex@entry=0x56213f548298 <console+56>) at pthread_cond_wait.c:655
>>   #3  0x000056213ea31a40 in qemu_semihosting_console_inc (env=env@entry=0x56214138a680) at /home/alex/lsrc/qemu.git/hw/semihosting/console.c:151
>>   #4  0x000056213eab96e8 in do_arm_semihosting (env=env@entry=0x56214138a680) at /home/alex/lsrc/qemu.git/target/arm/arm-semi.c:805
>>   #5  0x000056213eacd521 in handle_semihosting (cs=<optimized out>) at /home/alex/lsrc/qemu.git/target/arm/helper.c:8476
>>   #6  0x000056213eacd521 in arm_cpu_do_interrupt (cs=<optimized out>) at /home/alex/lsrc/qemu.git/target/arm/helper.c:8522
>>   #7  0x000056213e9e53d0 in cpu_handle_exception (ret=<synthetic pointer>, cpu=0x5621411fe2f0) at /home/alex/lsrc/qemu.git/accel/tcg/cpu-exec.c:503
>>   #8  0x000056213e9e53d0 in cpu_exec (cpu=cpu@entry=0x562141381550) at /home/alex/lsrc/qemu.git/accel/tcg/cpu-exec.c:711
>>   #9  0x000056213e9b4f1f in tcg_cpu_exec (cpu=0x562141381550) at /home/alex/lsrc/qemu.git/cpus.c:1473
>>   #10 0x000056213e9b715b in qemu_tcg_cpu_thread_fn (arg=arg@entry=0x562141381550) at /home/alex/lsrc/qemu.git/cpus.c:1781
>>   #11 0x000056213ef026fa in qemu_thread_start (args=<optimized out>) at /home/alex/lsrc/qemu.git/util/qemu-thread-posix.c:519
>>   #12 0x00007f8b2ada2fa3 in start_thread (arg=<optimized out>) at pthread_create.c:486
>>   #13 0x00007f8b2acd14cf in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
>>
>>   Thread 1 (Thread 0x7f8b1c151680 (LWP 14010)):
>>   #0  0x00007f8b2ada900c in futex_wait_cancelable (private=0, expected=0, futex_word=0x56213f52c7c8 <qemu_pause_cond+40>) at ../sysdeps/unix/sysv/linux/futex-internal.h:88
>>   #1  0x00007f8b2ada900c in __pthread_cond_wait_common (abstime=0x0, mutex=0x56213f52c8c0 <qemu_global_mutex>, cond=0x56213f52c7a0 <qemu_pause_cond>) at pthread_cond_wait.c:502
>>   #2  0x00007f8b2ada900c in __pthread_cond_wait (cond=cond@entry=0x56213f52c7a0 <qemu_pause_cond>, mutex=mutex@entry=0x56213f52c8c0 <qemu_global_mutex>) at pthread_cond_wait.c:655
>>   #3  0x000056213ef02e2b in qemu_cond_wait_impl (cond=0x56213f52c7a0 <qemu_pause_cond>, mutex=0x56213f52c8c0 <qemu_global_mutex>, file=0x56213ef43700 "/home/alex/lsrc/qemu.git/cpus.c", line=1943) at /home/alex/lsrc/qemu.git/util/qemu-thread-posix.c:173
>>   #4  0x000056213e9b74a4 in pause_all_vcpus () at /home/alex/lsrc/qemu.git/cpus.c:1943
>>   #5  0x000056213e9b74a4 in pause_all_vcpus () at /home/alex/lsrc/qemu.git/cpus.c:1923
>>   #6  0x000056213e9b7532 in do_vm_stop (state=RUN_STATE_SHUTDOWN, send_stop=<optimized out>) at /home/alex/lsrc/qemu.git/cpus.c:1102
>>   #7  0x000056213e96b8fc in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at /home/alex/lsrc/qemu.git/vl.c:4473
>>
>> I guess my first question is why do we need a separate mutex/cond
>> variable for this operation? This seems like the sort of thing that the
>> BQL could protect.
>
> No, please do not introduce more uses of the BQL from the CPU thread.
> The problem seems to lie with the condition variable, not the mutex.

Well, in this case we are holding the BQL anyway, as we are being called
from the interrupt context. The BQL protects all shared HW state apart
from MMIO that is explicitly marked as doing its own locking. That said,
I don't know if the semihosting console will always be called from a
BQL-held context.
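
For reference, this is roughly the shape of the blocking path visible in
the Thread 3 trace (a sketch reconstructed from the backtrace rather than
the literal patch; the c->mut/c->cond/c->fifo field names are my stand-ins
for whatever the patch calls them):

  static int console_read_blocking(SemihostingConsole *c)
  {
      int ch;
      qemu_mutex_lock(&c->mut);
      /*
       * The vCPU thread sleeps here until the chardev handler
       * signals c->cond. Nothing in pause_all_vcpus() can kick us
       * out of this wait, so a "stop" or gdbstub pause wedges as
       * in the Thread 1 trace above.
       */
      while (fifo8_is_empty(&c->fifo)) {
          qemu_cond_wait(&c->cond, &c->mut);
      }
      ch = fifo8_pop(&c->fifo);
      qemu_mutex_unlock(&c->mut);
      return ch;
  }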

>
>> Secondly if the vCPU is paused (via console or gdbstub) we need to
>> unwind from our blocking position and be in a position to restart
>> cleanly.
>
> Perhaps if fifo8_is_empty(&c->fifo) the CPU could update the PC back to
> the SVC instruction and enter a halted state?  Perhaps with a new
> CPU_INTERRUPT_* flag that would be checked in arm_cpu_has_work.

I don't think the PC has been updated at this point, but either way we
don't want that logic living in the common semihosting code. If we
cpu_loop_exit() the exception is still in effect and will simply re-run
when we start again.

What we really want is to fall back to the same halting semantics that
leave us sitting in qemu_wait_io_event() until there is something to
process. Is there any particular reason a blocking semihosting read
shouldn't be treated like any other IO event?
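
Concretely I'm thinking of something like this (only a sketch, and it
assumes the chardev receive handler clears cs->halted and calls
qemu_cpu_kick() once a byte lands in the FIFO):

  int qemu_semihosting_console_inc(CPUArchState *env)
  {
      SemihostingConsole *c = &console;
      CPUState *cs = current_cpu;

      if (fifo8_is_empty(&c->fifo)) {
          /*
           * Nothing buffered yet: halt rather than block. The vCPU
           * thread unwinds into qemu_wait_io_event() where pause and
           * shutdown behave normally, and because the PC still points
           * at the semihosting trap the SYS_READC simply re-runs once
           * we are kicked awake.
           */
          cs->halted = 1;
          cs->exception_index = EXCP_HALTED;
          cpu_loop_exit(cs);
          /* not reached */
      }
      return (int) fifo8_pop(&c->fifo);
  }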

>
> Paolo


--
Alex Bennée


