From: address@hidden
Subject: [lwip-users] sock_set_errno semantics in lwip 2.1.2
Date: Wed, 19 May 2021 12:01:32 +0200 (CEST)
Hi there, we have been using lwip in a FreeRTOS-based firmware for many years and are now upgrading from lwip 1.4.1 to 2.1.2.
We are using the following (pseudo)code to encapsulate recv() in a C++ class:
do
{
    int aCurrentCount = recv(<socket instance>, <ptr into recv buffer>, aCharsRequested, theTimeout ? MSG_DONTWAIT : 0);
    if (aCurrentCount > 0)
    {
        <characters were received from the socket; handle them>
    }
    else
    {
        int aError;
        socklen_t aReturnSize = sizeof(aError);
        getsockopt(<socket instance>, SOL_SOCKET, SO_ERROR, (void *)&aError, &aReturnSize);
        if (!aCurrentCount || (aCurrentCount < 0 && aError != EWOULDBLOCK))
        {
            <connection has gone down; close the socket, return failure or a partial read if applicable>
        }
    }
} <until the timeout occurred or all requested characters have been received>
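To make the intended control flow concrete, here is a minimal, compilable sketch of the same loop with a stub standing in for recv() (the stub and all names are illustrative only, not lwip code); the whole question is where the EWOULDBLOCK value that drives the retry decision should come from in 2.1.2:

```c
#include <assert.h>
#include <errno.h>
#include <string.h>

/* Hypothetical stub standing in for lwip's recv(): returns -1 with
 * errno = EWOULDBLOCK on the first call, then delivers 4 bytes. */
static int stub_calls = 0;
static int stub_recv(char *buf, int len)
{
    (void)len;
    if (stub_calls++ == 0) {
        errno = EWOULDBLOCK;
        return -1;
    }
    memcpy(buf, "data", 4);
    return 4;
}

/* Reads up to 'wanted' bytes, treating EWOULDBLOCK as "retry" and any
 * other error, or a 0 return (orderly shutdown), as "connection gone".
 * A real implementation would also honour the read timeout. */
static int read_some(char *buf, int wanted)
{
    int got = 0;
    while (got < wanted) {
        int n = stub_recv(buf + got, wanted - got);
        if (n > 0) {
            got += n;
        } else if (n == 0 || errno != EWOULDBLOCK) {
            return -1; /* peer closed or hard error */
        }
        /* EWOULDBLOCK: no data yet, retry */
    }
    return got;
}
```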
On 1.4.1 this worked fine. On 2.1.2, every recv() falls into the close-socket case.
I debugged this and found that in 2.1.2, sock_set_errno() has been rewritten so that it no longer sets the connection-specific error member pending_err, but instead sets only the global variable errno.
The getsockopt function, however, retrieves the pending_err member, which sock_set_errno never set.
Needless to say, in a multitasking environment with multiple sockets it is not feasible to keep connection-specific errors in a global variable, so I cannot look at the global errno to determine the failure code for this particular socket.
1.4.1 implemented the sock_set_errno macro so that the error was stored in a socket-specific variable.
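To illustrate the difference as I understand it (this is a paraphrase from memory, not the literal lwip source; the struct and names are stand-ins):

```c
#include <assert.h>

/* Stand-ins for a socket with a per-connection error slot and for the
 * global errno variable. */
struct fake_sock { int err; };
static int fake_errno;

/* 1.4.1-style: the error is recorded both per-socket and globally, so
 * getsockopt(SO_ERROR) can later read it back from the socket. */
#define sock_set_errno_141(sk, e) do { (sk)->err = (e); fake_errno = (e); } while (0)

/* 2.1.2-style: only the global errno is written; the per-socket slot
 * that getsockopt(SO_ERROR) reads is left untouched. */
#define sock_set_errno_212(sk, e) do { (void)(sk); fake_errno = (e); } while (0)
```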
If there is a thread-safe, per-socket way to implement the above sequence, how do I need to rewrite the code? If there is not, I consider the implementation a bug; how should I report it?
Thank you!