Re: [Qemu-devel] [PATCH] chardev: race condition with tcp_chr_disconnect


From: Andrey Shinkevich
Subject: Re: [Qemu-devel] [PATCH] chardev: race condition with tcp_chr_disconnect
Date: Tue, 16 Jul 2019 13:08:31 +0000


On 15/07/2019 21:27, Paolo Bonzini wrote:
> On 15/07/19 19:23, Max Reitz wrote:
>> On 12.07.19 21:17, Andrey Shinkevich wrote:
>>> When tcp_chr_disconnect() is called, another thread may still be
>>> writing to the channel. This patch protects only the read operations
>>> that initiate the disconnection.
>>>
>>> Signed-off-by: Andrey Shinkevich <address@hidden>
>>> ---
>>
>> Have you looked at
>> https://lists.nongnu.org/archive/html/qemu-devel/2019-02/msg06174.html
>> already?  From a glance, it looks like that series supersedes this one.
>>
>> (No, I don’t know why the other series is delayed.
> 
> Because it broke some testcases in tests/vhost-user-test.  They are
> disabled by default, because AFAIR they broke on some CI environment,
> but they are supposed to work.
> 
> Paolo
> 
>> I keep reminding
>> Paolo of it.)
>>
>> Max
>>
> 

The test check-qtest-x86_64: tests/qos-test hangs with 
QTEST_VHOST_USER_FIXME set, even without the series applied:

Thread 4 (Thread 0x2ade7a2bb700 (LWP 492566)):
#0  0x00002ade6f5431c9 in syscall () at ../sysdeps/unix/sysv/linux/x86_64/syscall.S:38
#1  0x00005599dec08bb6 in qemu_futex_wait (f=0x5599df6651d4 <rcu_call_ready_event>, val=4294967295) at /home/andrey/git/qemu/include/qemu/futex.h:29
#2  0x00005599dec08d7f in qemu_event_wait (ev=0x5599df6651d4 <rcu_call_ready_event>) at util/qemu-thread-posix.c:442
#3  0x00005599dec21ea1 in call_rcu_thread (opaque=0x0) at util/rcu.c:260
#4  0x00005599dec08f2c in qemu_thread_start (args=0x5599e10568f0) at util/qemu-thread-posix.c:502
#5  0x00002ade6f236dd5 in start_thread (arg=0x2ade7a2bb700) at pthread_create.c:307
#6  0x00002ade6f548ead in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111

Thread 3 (Thread 0x2ade7a4bc700 (LWP 492567)):
#0  0x00002ade6f53e20d in poll () at ../sysdeps/unix/syscall-template.S:81
#1  0x00002ade56e7c32c in g_main_context_iterate.isra.19 () at /lib64/libglib-2.0.so.0
#2  0x00002ade56e7c67a in g_main_loop_run () at /lib64/libglib-2.0.so.0
#3  0x00005599de7f6772 in iothread_run (opaque=0x5599e1196a30) at iothread.c:82
#4  0x00005599dec08f2c in qemu_thread_start (args=0x5599e11a87a0) at util/qemu-thread-posix.c:502
#5  0x00002ade6f236dd5 in start_thread (arg=0x2ade7a4bc700) at pthread_create.c:307
#6  0x00002ade6f548ead in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111

Thread 2 (Thread 0x2ade7a7ae700 (LWP 492568)):
#0  0x00002ade6f23e361 in __sigwait (sig=0x2ade7a7ab750, set=<optimized out>) at ../sysdeps/unix/sysv/linux/sigwait.c:60
#1  0x00002ade6f23e361 in __sigwait (set=0x2ade7a7ab760, sig=0x2ade7a7ab750) at ../sysdeps/unix/sysv/linux/sigwait.c:95
#2  0x00005599de655fee in qemu_dummy_cpu_thread_fn (arg=0x5599e11a9eb0) at /home/andrey/git/qemu/cpus.c:1331
#3  0x00005599dec08f2c in qemu_thread_start (args=0x5599e11cd140) at util/qemu-thread-posix.c:502
#4  0x00002ade6f236dd5 in start_thread (arg=0x2ade7a7ae700) at pthread_create.c:307
#5  0x00002ade6f548ead in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111

Thread 1 (Thread 0x2ade5661c280 (LWP 492564)):
#0  0x00002ade6f53e2cf in __GI_ppoll (fds=0x5599e20e1360, nfds=9, timeout=<optimized out>, sigmask=0x0) at ../sysdeps/unix/sysv/linux/ppoll.c:56
#1  0x00005599dec0232d in qemu_poll_ns (fds=0x5599e20e1360, nfds=9, timeout=43793998964000) at util/qemu-timer.c:334
#2  0x00005599dec03510 in os_host_main_loop_wait (timeout=43793998964000) at util/main-loop.c:240
#3  0x00005599dec03634 in main_loop_wait (nonblocking=0) at util/main-loop.c:521
#4  0x00005599de7ff439 in main_loop () at vl.c:1791
#5  0x00005599de806dca in main (argc=19, argv=0x7ffe5b3b66a8, envp=0x7ffe5b3b6748) at vl.c:4473

Andrey
-- 
With the best regards,
Andrey Shinkevich

