
Re: [STUMP] Recursive lock attempt error


From: Stefan Reichör
Subject: Re: [STUMP] Recursive lock attempt error
Date: Tue, 23 Sep 2008 15:37:08 +0200
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/23.0.60 (gnu/linux)

Yann Ramin <address@hidden> writes:

> I got this on my laptop quite a bit. I think I've managed to mask the
> problem by setting the sbcl/stumpwm processor affinity to just include a
> single processor. The syscall on Linux is sched_setaffinity
>
> Interestingly, on my 8 processor workstation, I never run into this
> problem. Only on the dual processor laptop. This may just be voodoo and
> have nothing to do with anything :-)

Thanks for the tip.
My machine uses hyperthreading, so the kernel sees two logical processors.

I wrote the script below to change the affinity mask from 3 (binary 11,
i.e. two bits set == both processors) to 1 (the first processor only).

But it does not seem to help :-(

I see two possible reasons:
* My script does not work
* sched_setaffinity does not fix my problem

Any ideas?

,----
| #!/usr/bin/env python
| 
| import ctypes, os
| 
| libc = ctypes.CDLL("libc.so.6")
| sched_setaffinity = libc.sched_setaffinity
| sched_getaffinity = libc.sched_getaffinity
| 
| # cpu_set_t is approximated by a single unsigned long, which is
| # enough for machines with at most 32 (or 64) processors.
| sched_setaffinity.argtypes = [ctypes.c_int, ctypes.c_int, ctypes.POINTER(ctypes.c_ulong)]
| sched_getaffinity.argtypes = [ctypes.c_int, ctypes.c_int, ctypes.POINTER(ctypes.c_ulong)]
| 
| def get_affinity(pid):
|     # Ask the kernel for the current affinity mask of pid.
|     mask = ctypes.c_ulong(0)
|     c_ulong_size = ctypes.sizeof(ctypes.c_ulong)
|     rv = sched_getaffinity(pid, c_ulong_size, mask)
|     if rv != 0:
|         print 'sched_getaffinity for pid %d failed, rv=%s' % (pid, rv)
|     return mask.value
| 
| def single_processor(pid):
|     # Pin pid to the first processor (mask 1 == binary 01).
|     affinity = 1
|     c_ulong_size = ctypes.sizeof(ctypes.c_ulong)
|     rv = sched_setaffinity(pid, c_ulong_size, ctypes.c_ulong(affinity))
|     if rv != 0:
|         print "sched_setaffinity %d := %d failed, rv=%s" % (pid, affinity, rv)
|     return get_affinity(pid)
| 
| # Find the pid of the running stumpwm process.
| stumpwm_processes = [line.strip().split() for line in os.popen("ps -C stumpwm").readlines()[1:]]
| if len(stumpwm_processes) > 1:
|     print "Warning: more than one stumpwm process running:", stumpwm_processes
| if len(stumpwm_processes) > 0:
|     stumpwm_pid = int(stumpwm_processes[0][0])
|     old_affinity = get_affinity(stumpwm_pid)
|     new_affinity = single_processor(stumpwm_pid)
|     print "Changed scheduler affinity mask for stumpwm (pid=%d) from %d to %d" % (stumpwm_pid, old_affinity, new_affinity)
| else:
|     print "Warning: no running stumpwm found"
`----
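
To rule out the first of the two possibilities above (the script simply
not taking effect), the mask the kernel actually reports can be read back
from /proc, independently of the ctypes calls. A minimal sketch, assuming
a kernel recent enough to expose the "Cpus_allowed:" line in
/proc/<pid>/status:

,----
| #!/usr/bin/env python
| # Sanity check: print the affinity mask the kernel reports for a pid.
| import sys
| 
| def cpus_allowed(pid):
|     # /proc/<pid>/status carries a "Cpus_allowed:" line (hex mask)
|     # on reasonably recent kernels.
|     for line in open("/proc/%d/status" % pid):
|         if line.startswith("Cpus_allowed:"):
|             return line.strip()
|     return None
| 
| print cpus_allowed(int(sys.argv[1]))
`----

If this still shows both bits set after running the script, the set call
is not sticking; if it shows 1 and the error persists, the affinity
workaround itself is not the fix. The taskset(1) utility from util-linux
("taskset -p 1 <pid>") should do the same pinning from the shell, which
would also separate the two cases.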



> Stefan Reichör wrote:
>> Hi!
>> 
>> I use stumpwm daily at work. I get the following error 6-7 times a day.
>> stumpwm restarts itself and I can continue to work with all my open
>> applications.
>> 
>> But it would be nice to see this problem fixed.
>> 
>> 
>> Thanks,
>>   Stefan.
>> 
>> Recursive lock attempt #S(SB-THREAD:MUTEX
>>                           :NAME "Scheduler lock"
>>                           :%OWNER NIL
>>                           :STATE 0).
>> 0: (SB-DEBUG::MAP-BACKTRACE #<CLOSURE (LAMBDA #) {B3ADAED}>)[:EXTERNAL]
>> 1: (SB-DEBUG:BACKTRACE 100 #<SB-IMPL::STRING-OUTPUT-STREAM {B3ADA99}>)
>> 2: (STUMPWM::BACKTRACE-STRING)
>> 3: ((LAMBDA (STUMPWM::C)) #<SIMPLE-ERROR {B3AD861}>)
>> 4: (SIGNAL #<SIMPLE-ERROR {B3AD861}>)[:EXTERNAL]
>> 5: (ERROR "Recursive lock attempt ~S.")[:EXTERNAL]
>> 6: (SB-THREAD:GET-MUTEX
>>     #<unavailable argument>
>>     #<unavailable argument>
>>     #<unavailable argument>)
>> 7: ((FLET SB-THREAD::%CALL-WITH-SYSTEM-MUTEX))
>> 8: ((FLET #:WITHOUT-INTERRUPTS-BODY-[CALL-WITH-SYSTEM-MUTEX]268))
>> 9: ((FLET #:CLEANUP-FUN-[RUN-EXPIRED-TIMERS]636))[:CLEANUP]
>> 10: (SB-IMPL::RUN-EXPIRED-TIMERS)
>> 11: ((FLET #:WITHOUT-INTERRUPTS-BODY-[INVOKE-INTERRUPTION]11))
>> 12: (SB-SYS:INVOKE-INTERRUPTION
>>      #<CLOSURE (FLET SB-UNIX::INTERRUPTION) {A9CFDD}>)
>> 13: ((FLET SB-UNIX::RUN-HANDLER)
>>      14
>>      #.(SB-SYS:INT-SAP #X00238008)
>>      #.(SB-SYS:INT-SAP #X00A9D5B8))
>> 14: ("foreign function: call_into_lisp")
>> 15: ("foreign function: funcall3")
>> 16: ("foreign function: interrupt_handle_now")
>> 17: ("foreign function: interrupt_handle_pending")
>> 18: ("foreign function: handle_trap")
>> 19: ("foreign function: #x8053F7A")
>> 20: ((FLET #:WITHOUT-INTERRUPTS-BODY-[CALL-WITH-RECURSIVE-LOCK]469))
>> 21: ((LAMBDA ()))
>> 22: ((LAMBDA ()))
>> 23: ((FLET SB-THREAD::%CALL-WITH-SYSTEM-MUTEX))
>> 24: ((FLET #:WITHOUT-INTERRUPTS-BODY-[CALL-WITH-SYSTEM-MUTEX]268))
>> 25: (SB-THREAD::CALL-WITH-SYSTEM-MUTEX
>>      #<CLOSURE (LAMBDA #) {B352E5D}>
>>      #S(SB-THREAD:MUTEX
>>         :NAME "Scheduler lock"
>>         :%OWNER #<SB-THREAD:THREAD "initial thread" {B2635A9}>
>>         :STATE 1)
>>      NIL)
>> 26: (SB-EXT:UNSCHEDULE-TIMER #<SB-EXT:TIMER {B34D051}>)
>> 27: ((FLET #:|CLEANUP-FUN-[#:G1102]1106|))[:CLEANUP]
>> 28: (XLIB:EVENT-LISTEN #<XLIB:DISPLAY :0 (The X.Org Foundation R60802000)> 
>> 59)
>> 29: (STUMPWM::STUMPWM-INTERNAL-LOOP)
>> 30: (STUMPWM::STUMPWM-INTERNAL-LOOP)[:EXTERNAL]
>> 31: (STUMPWM::STUMPWM-INTERNAL ":0")
>> 32: (STUMPWM ":0")
>> 33: ((LAMBDA ()))
>> 34: ((LABELS SB-IMPL::RESTART-LISP))
>> 



