From: Alex Bennée
Subject: Re: [Qemu-devel] [PATCH 5/9] memory: let address_space_rw/ld*/st* run outside the BQL
Date: Wed, 24 Jun 2015 19:50:01 +0100
Paolo Bonzini <address@hidden> writes:
> On 24/06/2015 18:56, Alex Bennée wrote:
>> This is where I get confused about the advantage of this over, say,
>> same-pid recursive locking. If you use recursive locking you don't need
>> to add a bunch of state to remind you of when to release the lock. Then
>> you'd just need:
>>
>> static void prepare_mmio_access(MemoryRegion *mr)
>> {
>>     if (mr->global_locking) {
>>         qemu_mutex_lock_iothread();
>>     }
>>     if (mr->flush_coalesced_mmio) {
>>         qemu_mutex_lock_iothread();
>>         qemu_flush_coalesced_mmio_buffer();
>>         qemu_mutex_unlock_iothread();
>>     }
>> }
>>
>> and a bunch of:
>>
>> if (mr->global_locking)
>>     qemu_mutex_unlock_iothread();
>>
>> in the access functions. Although I suspect you could push the
>> mr->global_locking check up to the dispatch functions.
>>
>> Am I missing something here?
>
> The semantics of recursive locking with respect to condition variables
> are not clear. Either cond_wait releases all locks, and then the mutex
> can be released when the code doesn't expect it to be, or cond_wait
> doesn't release all locks and then you have deadlocks.
>
> POSIX says to do the latter:
>
>     "It is advised that an application should not use a
>     PTHREAD_MUTEX_RECURSIVE mutex with condition variables because
>     the implicit unlock performed for a pthread_cond_timedwait() or
>     pthread_cond_wait() may not actually release the mutex (if it
>     had been locked multiple times). If this happens, no other
>     thread can satisfy the condition of the predicate."
>
> So, recursive locking is okay if you don't have condition variables
> attached to the lock (and if you cannot do without it), but
> qemu_global_mutex does have them.
Ahh OK, so I was missing something ;-)
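
Just to check I follow, the failure mode would be something like this
(rough sketch, all names made up):

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t lock;     /* initialised as PTHREAD_MUTEX_RECURSIVE */
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static bool ready;

static void init_lock(void)
{
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
    pthread_mutex_init(&lock, &attr);
}

static void waiter(void)
{
    pthread_mutex_lock(&lock);
    pthread_mutex_lock(&lock);           /* recursion count is now 2 */
    while (!ready) {
        /* per the POSIX text above this may only drop one level, so
         * the mutex stays held while we sleep... */
        pthread_cond_wait(&cond, &lock);
    }
    pthread_mutex_unlock(&lock);
    pthread_mutex_unlock(&lock);
}

static void signaller(void)
{
    /* ...and this never gets the mutex, so the predicate is never
     * satisfied and the waiter never wakes up: deadlock */
    pthread_mutex_lock(&lock);
    ready = true;
    pthread_cond_signal(&cond);
    pthread_mutex_unlock(&lock);
}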
>
> QEMU has so far tried to use the solution that Stevens outlines here:
> https://books.google.it/books?id=kCTMFpEcIOwC&pg=PA434
>
> ... Leave the interfaces to func1 and func2 unchanged, and avoid
> a recursive mutex by providing a private version of func2,
> called func2_locked. To call func2_locked, hold the mutex
> embedded in the data structure whose address we pass as the
> argument.
>
> as a way to avoid recursive locking. This is much better because a) it
> is more efficient---taking locks can be expensive even if they're
> uncontended, especially if your VM spans multiple NUMA nodes on the
> host; b) it is always clear when a lock is taken and when it isn't.
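
If I'm reading that excerpt right, the pattern is roughly this (just a
sketch, structure and names invented):

#include <pthread.h>

struct dev {
    pthread_mutex_t lock;
    int state;
};

/* caller must already hold d->lock */
static void dev_kick_locked(struct dev *d)
{
    d->state++;
}

void dev_kick(struct dev *d)
{
    pthread_mutex_lock(&d->lock);
    dev_kick_locked(d);
    pthread_mutex_unlock(&d->lock);
}

void dev_reset(struct dev *d)
{
    pthread_mutex_lock(&d->lock);
    /* call the _locked variant rather than dev_kick() so we never
     * need to take the lock twice */
    dev_kick_locked(d);
    d->state = 0;
    pthread_mutex_unlock(&d->lock);
}

so it is always obvious from the function name whether the lock is
expected to be held or not.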
>
> Note that Stevens has another example right after this one of recursive
> locking, involving callbacks, but it's ill-defined. There's no reason
> for the "timeout" function in page 437 to hold the mutex when it calls
> "func". It can unlock before and re-lock afterwards, like QEMU's own
> timerlist_run_timers function.
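
And I take it the timer case just drops the lock around the callback,
something like (again a made-up sketch):

#include <pthread.h>
#include <stddef.h>

typedef void (*callback_fn)(void *opaque);

struct cb_list {
    pthread_mutex_t lock;
    callback_fn cb;        /* one pending callback, for brevity */
    void *opaque;
};

void run_pending(struct cb_list *l)
{
    pthread_mutex_lock(&l->lock);
    while (l->cb) {
        callback_fn cb = l->cb;
        void *opaque = l->opaque;
        l->cb = NULL;
        /* drop the lock around the callback so it is free to take
         * the same lock itself, no recursion required */
        pthread_mutex_unlock(&l->lock);
        cb(opaque);
        pthread_mutex_lock(&l->lock);
    }
    pthread_mutex_unlock(&l->lock);
}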
Unfortunately I can't read that link but it sounds like I should get
myself a copy of the book. I take it that approach wouldn't approve of:
static __thread int iothread_lock_count;

void qemu_mutex_lock_iothread(void)
{
    if (iothread_lock_count == 0) {
        qemu_mutex_lock(&qemu_global_mutex);
    }
    iothread_lock_count++;
}

void qemu_mutex_unlock_iothread(void)
{
    iothread_lock_count--;
    if (iothread_lock_count == 0) {
        qemu_mutex_unlock(&qemu_global_mutex);
    }
    if (iothread_lock_count < 0) {
        fprintf(stderr, "%s: error, too many unlocks %d\n", __func__,
                iothread_lock_count);
    }
}
That should achieve the same "only one lock held" semantics while
saving the calling code from having to track the locking state itself.
I guess it depends on whether there is ever going to be a situation
where we say "the lock is already held, so do something different"?
>
> Paolo
--
Alex Bennée