Re: [MIT-Scheme-devel] reading a TCP socket

From: Taylor R Campbell
Subject: Re: [MIT-Scheme-devel] reading a TCP socket
Date: Tue, 28 Apr 2015 13:32:16 +0000
User-agent: IMAIL/1.21; Edwin/3.116; MIT-Scheme/9.1.99

   Date: Tue, 28 Apr 2015 14:55:46 +0300
   From: David Gray <address@hidden>

   I'm reading some data over a raw TCP socket; the
   server program sends me 0d, but what I read is 0a.
   I've used both read-string! and read-char and get
   the same result. Is there some character-encoding
   default that I need to override, or some binary mode?

By default, the MIT Scheme TCP sockets map {CR, CRLF, LF} -> LF on
input, and LF -> CRLF on output.  (Yes, that's rather silly.  It
happens to do the right thing for text-oriented protocols like SMTP
and HTTP.)  You can disable it with:

(port/set-line-ending socket 'NEWLINE)
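For instance, a minimal sketch (the host, port, and the surrounding read loop are hypothetical; `open-tcp-stream-socket` is the standard MIT Scheme way to get such a port):

```scheme
;; Connect to a hypothetical server that sends raw bytes,
;; including 0d (CR) characters we want to see unmodified.
(define socket (open-tcp-stream-socket "example.com" 12345))

;; Disable the {CR, CRLF, LF} -> LF input translation
;; (and the LF -> CRLF output translation).
(port/set-line-ending socket 'NEWLINE)

;; Now read-char returns #\return for an incoming 0d byte
;; instead of mapping it to #\newline.
(let loop ()
  (let ((c (read-char socket)))
    (if (not (eof-object? c))
        (begin
          (write-char c)
          (loop)))))
```

With 'NEWLINE set, both directions pass line endings through untranslated, which is what a binary or non-text protocol needs.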
