Re: [Qemu-devel] [Qemu-block] RFC cdrom in own thread?
From: Peter Lieven
Subject: Re: [Qemu-devel] [Qemu-block] RFC cdrom in own thread?
Date: Thu, 18 Jun 2015 11:29:23 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Thunderbird/31.7.0
On 18.06.2015 at 10:42, Kevin Wolf wrote:
On 18.06.2015 at 10:30, Peter Lieven wrote:
On 18.06.2015 at 09:45, Kevin Wolf wrote:
On 18.06.2015 at 09:12, Peter Lieven wrote:
Thread 2 (Thread 0x7ffff5550700 (LWP 2636)):
#0 0x00007ffff5d87aa3 in ppoll () from /lib/x86_64-linux-gnu/libc.so.6
No symbol table info available.
#1 0x0000555555955d91 in qemu_poll_ns (fds=0x5555563889c0, nfds=3,
timeout=4999424576) at qemu-timer.c:326
ts = {tv_sec = 4, tv_nsec = 999424576}
tvsec = 4
#2 0x0000555555956feb in aio_poll (ctx=0x5555563528e0, blocking=true)
at aio-posix.c:231
node = 0x0
was_dispatching = false
ret = 1
progress = false
#3 0x000055555594aeed in bdrv_prwv_co (bs=0x55555637eae0, offset=4292007936,
qiov=0x7ffff554f760, is_write=false, flags=0) at block.c:2699
aio_context = 0x5555563528e0
co = 0x5555563888a0
rwco = {bs = 0x55555637eae0, offset = 4292007936,
qiov = 0x7ffff554f760, is_write = false, ret = 2147483647, flags = 0}
#4 0x000055555594afa9 in bdrv_rw_co (bs=0x55555637eae0, sector_num=8382828,
buf=0x7ffff44cc800 "(", nb_sectors=4, is_write=false, flags=0)
at block.c:2722
qiov = {iov = 0x7ffff554f780, niov = 1, nalloc = -1, size = 2048}
iov = {iov_base = 0x7ffff44cc800, iov_len = 2048}
#5 0x000055555594b008 in bdrv_read (bs=0x55555637eae0, sector_num=8382828,
buf=0x7ffff44cc800 "(", nb_sectors=4) at block.c:2730
No locals.
#6 0x000055555599acef in blk_read (blk=0x555556376820, sector_num=8382828,
buf=0x7ffff44cc800 "(", nb_sectors=4) at block/block-backend.c:404
No locals.
#7 0x0000555555833ed2 in cd_read_sector (s=0x555556408f88, lba=2095707,
buf=0x7ffff44cc800 "(", sector_size=2048) at hw/ide/atapi.c:116
ret = 32767
Here is the problem: the ATAPI emulation uses the synchronous blk_read()
instead of the AIO or coroutine interfaces. This means it keeps polling
for request completion while holding the BQL, until the request is
completed.
I will look at this.
I need some further help. My way to "emulate" a hung NFS server is to
block it in the firewall. Currently I face the problem that I cannot mount
a CD ISO via libnfs (nfs://) without hanging QEMU (I previously tried with
a kernel NFS mount). It reads a few sectors and then stalls (maybe another bug):
(gdb) thread apply all bt full
Thread 3 (Thread 0x7ffff0c21700 (LWP 29710)):
#0 qemu_cond_broadcast (address@hidden) at util/qemu-thread-posix.c:120
err = <optimized out>
__func__ = "qemu_cond_broadcast"
#1 0x0000555555911164 in rfifolock_unlock (address@hidden) at
util/rfifolock.c:75
__PRETTY_FUNCTION__ = "rfifolock_unlock"
#2 0x0000555555875921 in aio_context_release (address@hidden) at async.c:329
No locals.
#3 0x000055555588434c in aio_poll (address@hidden, address@hidden) at
aio-posix.c:272
node = <optimized out>
was_dispatching = false
i = <optimized out>
ret = <optimized out>
progress = false
timeout = 611734526
__PRETTY_FUNCTION__ = "aio_poll"
#4 0x00005555558bc43d in bdrv_prwv_co (address@hidden, address@hidden,
address@hidden, address@hidden, address@hidden(unknown: 0)) at block/io.c:552
aio_context = 0x5555562598b0
co = <optimized out>
rwco = {bs = 0x55555627c0f0, offset = 7038976, qiov = 0x7ffff0c208f0,
is_write = false, ret = 2147483647, flags = (unknown: 0)}
#5 0x00005555558bc533 in bdrv_rw_co (bs=0x55555627c0f0, address@hidden, address@hidden
"(", address@hidden, address@hidden,
address@hidden(unknown: 0)) at block/io.c:575
qiov = {iov = 0x7ffff0c208e0, niov = 1, nalloc = -1, size = 2048}
iov = {iov_base = 0x555557874800, iov_len = 2048}
#6 0x00005555558bc593 in bdrv_read (bs=<optimized out>, address@hidden, address@hidden
"(", address@hidden) at block/io.c:583
No locals.
#7 0x00005555558af75d in blk_read (blk=<optimized out>, address@hidden, address@hidden
"(", address@hidden) at block/block-backend.c:493
ret = <optimized out>
#8 0x00005555557abb88 in cd_read_sector (sector_size=<optimized out>, buf=0x555557874800
"(", lba=3437, s=0x55555760db70) at hw/ide/atapi.c:116
ret = <optimized out>
#9 ide_atapi_cmd_reply_end (s=0x55555760db70) at hw/ide/atapi.c:190
byte_count_limit = <optimized out>
size = <optimized out>
ret = 2
#10 0x00005555556398a6 in memory_region_write_accessor (mr=0x5555577f85d0, addr=<optimized
out>, value=0x7ffff0c20a68, size=2, shift=<optimized out>, mask=<optimized out>,
attrs=...)
at /home/lieven/git/qemu/memory.c:459
tmp = <optimized out>
#11 0x000055555563956b in access_with_adjusted_size (address@hidden, address@hidden,
address@hidden, access_size_min=<optimized out>, access_size_max=<optimized
out>,
address@hidden <memory_region_write_accessor>, address@hidden,
address@hidden) at /home/lieven/git/qemu/memory.c:518
access_mask = 65535
access_size = 2
i = <optimized out>
r = 0
#12 0x000055555563b3a9 in memory_region_dispatch_write (address@hidden, addr=0,
data=0, size=2, attrs=...) at /home/lieven/git/qemu/memory.c:1174
No locals.
#13 0x00005555555fcc00 in address_space_rw (as=0x555555d7c7c0 <address_space_io>,
address@hidden, attrs=..., address@hidden, address@hidden "", address@hidden,
address@hidden)
at /home/lieven/git/qemu/exec.c:2357
l = 2
ptr = <optimized out>
val = 0
addr1 = 0
mr = 0x5555577f85d0
result = 0
#14 0x0000555555638610 in kvm_handle_io (count=1, size=2, direction=<optimized out>,
data=<optimized out>, attrs=..., port=368) at /home/lieven/git/qemu/kvm-all.c:1636
i = 0
ptr = 0x7ffff7ff1000 ""
#15 kvm_cpu_exec (address@hidden) at /home/lieven/git/qemu/kvm-all.c:1804
attrs = {unspecified = 0, secure = 0, user = 0, stream_id = 0}
run = 0x7ffff7ff0000
ret = <optimized out>
run_ret = <optimized out>
#16 0x00005555556232f2 in qemu_kvm_cpu_thread_fn (arg=0x555556295c30) at
/home/lieven/git/qemu/cpus.c:976
cpu = 0x555556295c30
r = <optimized out>
#17 0x00007ffff5a49182 in start_thread (arg=0x7ffff0c21700) at
pthread_create.c:312
__res = <optimized out>
pd = 0x7ffff0c21700
now = <optimized out>
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140737232639744,
6130646130327736738, 1, 0, 140737232640448, 140737232639744,
-6130648513365749342, -6130659796022144606}, mask_was_saved = 0}}, priv = {pad
= {
0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0,
canceltype = 0}}}
not_first_call = <optimized out>
pagesize_m1 = <optimized out>
sp = <optimized out>
freesize = <optimized out>
__PRETTY_FUNCTION__ = "start_thread"
#18 0x00007ffff577647d in clone () at
../sysdeps/unix/sysv/linux/x86_64/clone.S:111
No locals.
Thread 2 (Thread 0x7ffff1911700 (LWP 29709)):
#0 syscall () at ../sysdeps/unix/sysv/linux/x86_64/syscall.S:38
No locals.
#1 0x00005555559006a2 in futex_wait (val=4294967295, ev=0x55555620a124
<rcu_call_ready_event>) at util/qemu-thread-posix.c:301
No locals.
#2 qemu_event_wait (address@hidden <rcu_call_ready_event>) at
util/qemu-thread-posix.c:399
value = <optimized out>
#3 0x00005555559114e6 in call_rcu_thread (opaque=<optimized out>) at
util/rcu.c:233
tries = 0
n = <optimized out>
node = <optimized out>
#4 0x00007ffff5a49182 in start_thread (arg=0x7ffff1911700) at
pthread_create.c:312
__res = <optimized out>
pd = 0x7ffff1911700
now = <optimized out>
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140737246205696,
6130646130327736738, 1, 0, 140737246206400, 140737246205696,
-6130651373813968478, -6130659796022144606}, mask_was_saved = 0}}, priv = {pad
= {
0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0,
canceltype = 0}}}
not_first_call = <optimized out>
pagesize_m1 = <optimized out>
sp = <optimized out>
freesize = <optimized out>
__PRETTY_FUNCTION__ = "start_thread"
#5 0x00007ffff577647d in clone () at
../sysdeps/unix/sysv/linux/x86_64/clone.S:111
No locals.
Thread 1 (Thread 0x7ffff7fc8a80 (LWP 29705)):
#0 __lll_lock_wait () at
../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135
No locals.
#1 0x00007ffff5a4b657 in _L_lock_909 () from
/lib/x86_64-linux-gnu/libpthread.so.0
No symbol table info available.
#2 0x00007ffff5a4b480 in __GI___pthread_mutex_lock (mutex=0x555555dd5880
<qemu_global_mutex>) at ../nptl/pthread_mutex_lock.c:79
__PRETTY_FUNCTION__ = "__pthread_mutex_lock"
type = 4294966784
#3 0x0000555555900039 in qemu_mutex_lock (address@hidden <qemu_global_mutex>)
at util/qemu-thread-posix.c:73
err = <optimized out>
__func__ = "qemu_mutex_lock"
#4 0x0000555555624cbc in qemu_mutex_lock_iothread () at
/home/lieven/git/qemu/cpus.c:1152
No locals.
#5 0x00005555558823fb in os_host_main_loop_wait (timeout=11000972) at
main-loop.c:241
ret = 1
spin_counter = 0
#6 main_loop_wait (nonblocking=<optimized out>) at main-loop.c:493
ret = 1
timeout = 1000
#7 0x00005555555f19ee in main_loop () at vl.c:1808
nonblocking = <optimized out>
last_io = 1
#8 main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at
vl.c:4470
i = <optimized out>
snapshot = <optimized out>
linux_boot = <optimized out>
initrd_filename = <optimized out>
kernel_filename = <optimized out>
kernel_cmdline = <optimized out>
boot_order = <optimized out>
boot_once = 0x0
ds = <optimized out>
cyls = <optimized out>
heads = <optimized out>
secs = <optimized out>
translation = <optimized out>
hda_opts = <optimized out>
opts = <optimized out>
icount_opts = <optimized out>
olist = <optimized out>
optind = 12
optarg = 0x0
loadvm = <optimized out>
machine_class = 0x55555623d910
cpu_model = <optimized out>
vga_model = 0x55555592b65b "std"
qtest_chrdev = <optimized out>
qtest_log = <optimized out>
pid_file = <optimized out>
incoming = <optimized out>
defconfig = <optimized out>
userconfig = 48
log_mask = <optimized out>
log_file = <optimized out>
mem_trace = {malloc = 0x55555570b380 <malloc_and_trace>, realloc = 0x55555570b340
<realloc_and_trace>, free = 0x55555570b300 <free_and_trace>, calloc = 0x0, try_malloc
= 0x0, try_realloc = 0x0}
trace_events = <optimized out>
trace_file = <optimized out>
maxram_size = <optimized out>
ram_slots = <optimized out>
vmstate_dump_file = <optimized out>
main_loop_err = 0x0
__func__ = "main"
Any ideas?
Peter