Re: [Chicken-hackers] pending patches

From: Alan Post
Subject: Re: [Chicken-hackers] pending patches
Date: Fri, 11 Nov 2011 17:23:08 -0700

On Fri, Nov 11, 2011 at 02:29:59PM +0100, Christian Kellermann wrote:
> >
> This I will need a little more time, if someone likes to take over
> (or work in parallel) she is more than welcome to.  This is also a critical
> part that could need more eyes than mine.

I have tried to devote some attention to this one and found that I
am still no good at modifying Chicken and then successfully running
the result.  I do have some comments on the patch, which I had hoped
to provide as a patch, but since I seem unable to do that, I will
provide them more informally and beg your forgiveness.

diff --git a/library.scm b/library.scm
index 8075c2f..9b65ad2 100644
--- a/library.scm
+++ b/library.scm
@@ -1736,9 +1746,17 @@ EOF
 (define ##sys#stream-port-class
   (vector (lambda (p)                  ; read-char
-           (##core#inline "C_read_char" p) )
+           (let loop ()
+             (let ((c (##core#inline "C_read_char" p)))
+               (if (eq? -1 c)          ; EINTR
+                   (##sys#dispatch-interrupt loop)
+                   c))))
          (lambda (p)                   ; peek-char
-           (##core#inline "C_peek_char" p) )
+           (let loop ()
+             (let ((c (##core#inline "C_peek_char" p)))
+               (if (eq? -1 c)          ; EINTR
+                   (##sys#dispatch-interrupt loop)
+                   c))))
          (lambda (p c)                 ; write-char
            (##core#inline "C_display_char" p c) )
          (lambda (p s)                 ; write-string

There is a whole class of functions that might return EINTR, and
this set, read/peek, seems arbitrarily chosen to me: fcntl,
nanosleep, wait, and close, among others, can also return EINTR.  In
my program that does signal handling, I catch this condition in the
Scheme code and restart the operation.  This works for me, so I
don't strictly need the library to do it.  If it's going to,
however, it seems like it should do it everywhere it can happen.
(Other places in the code also do this; please consider this my
representative comment for all of those places.)

diff --git a/library.scm b/library.scm
index 8075c2f..9b65ad2 100644
--- a/library.scm
+++ b/library.scm
@@ -4344,10 +4373,23 @@ EOF
 (define ##sys#context-switch (##core#primitive "C_context_switch"))
+(define ##sys#signal-vector (make-vector 256 #f))
 (define (##sys#interrupt-hook reason state)

It turns out that there is a constant, _NSIG, defined in signal.h.
I am pretty certain it is available across Solaris, Linux, and BSD,
and its value is much closer to 32 than it is to 256.

While 256 is nice and arbitrarily high, the actual size this vector
needs is exactly what _NSIG gives you.  I think the vector should
use that value rather than a magic number.

diff --git a/runtime.c b/runtime.c
index a6b2d35..3b00673 100644
--- a/runtime.c
+++ b/runtime.c
@@ -159,6 +160,8 @@ extern void _C_do_apply_hack(void *proc, C_word *args, int count) C_noret;
 #define FILE_INFO_SIZE                 7
+#define MAX_PENDING_INTERRUPTS         100
 #ifdef C_DOUBLE_IS_32_BITS
 # define FLONUM_PRINT_PRECISION         7
@@ -441,6 +444,9 @@ static C_TLS FINALIZER_NODE
 static C_TLS void *current_module_handle;
 static C_TLS int flonum_print_precision = FLONUM_PRINT_PRECISION;
 static C_TLS HDUMP_BUCKET **hdump_table;
+static C_TLS int 
+  pending_interrupts[ MAX_PENDING_INTERRUPTS ],
+  pending_interrupts_count;
 /* Prototypes: */
@@ -690,6 +696,7 @@ int CHICKEN_initialize(int heap, int stack, int symbols, void *toplevel)
   chicken_is_running = chicken_ran_once = 0;
   interrupt_reason = 0;
+  pending_interrupts_count = 0;
   last_interrupt_latency = 0;
   C_interrupts_enabled = 1;
   C_initial_timer_interrupt_period =
@@ -4197,16 +4217,25 @@ C_regparm void C_fcall 
 C_regparm void C_fcall C_raise_interrupt(int reason)
 {
   if(C_interrupts_enabled) {
-    saved_stack_limit = C_stack_limit;
+    if(interrupt_reason) {
+      if(reason != C_TIMER_INTERRUPT_NUMBER) {
+       if(pending_interrupts_count < MAX_PENDING_INTERRUPTS) 
+         /* drop signals if too many */
+         pending_interrupts[ pending_interrupts_count++ ] = reason;
+      }
+    }
+    else {
+      saved_stack_limit = C_stack_limit;
     C_stack_limit = C_stack_pointer + 1000;
@@ -9168,3 +9209,19 @@ C_i_file_exists_p(C_word name, C_word file, C_word dir)
+C_regparm C_word C_fcall
+C_i_pending_interrupt(C_word dummy)
+{
+  int i;
+
+  if(interrupt_reason && interrupt_reason != C_TIMER_INTERRUPT_NUMBER) {
+    i = interrupt_reason;
+    interrupt_reason = 0;
+    return C_fix(i);
+  }
+
+  if(pending_interrupts_count > 0)
+    return C_fix(pending_interrupts[ --pending_interrupts_count ]);
+
+  return C_SCHEME_FALSE;
+}

I've been running this patch, and the code works exactly as you
would expect.  After further investigating how signal handling is
implemented, I believe we can also get away with a different
implementation.

Rather than storing a queue of pending signals, we are permitted to
store a masked set of pending signals, mimicking the structure the
kernel itself uses.  When a signal arrives, we use the signal number
as an index and mark that signal as pending.  We can then iterate
over the mask and deliver the pending signals.  It is possible that
a signal delivered twice or more will reach our Scheme code only
once, but the kernel already behaves this way, and we will always
deliver *a* signal.  In my testing, I saw SIGCHLD delivered 32 times
for 256 child processes.  30 or 31 signals instead of 32 is truly
not a problem.

There is nothing at all wrong with this implementation as it appears
in the patch, save for the theoretical case of receiving more
signals than we have space for in the pending queue.  I don't think
we could actually exceed 100, because the kernel will discard
duplicates and there just aren't enough signals to make it happen.
So I'm stating that we can get away with a weaker guarantee (a mask
rather than a queue) and still be every bit as reliable as we are now.

Again, I apologize that this is not in the form of a patch.

.i ma'a lo bradi cu penmi gi'e du
