From: Simon Goldschmidt
Subject: [lwip-devel] [patch #9753] tcp: don't reset dupack count upon non-empty packet receive
Date: Wed, 30 Jan 2019 15:09:54 -0500 (EST)
User-agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36

URL:
  <https://savannah.nongnu.org/patch/?9753>

                 Summary: tcp: don't reset dupack count upon non-empty packet receive
                 Project: lwIP - A Lightweight TCP/IP stack
            Submitted by: goldsimon
            Submitted on: Wed 30 Jan 2019 08:09:53 PM UTC
                Category: TCP
                Priority: 5 - Normal
                  Status: None
                 Privacy: Public
             Assigned to: goldsimon
        Originator Email: 
             Open/Closed: Open
         Discussion Lock: Any
         Planned Release: None

    _______________________________________________________

Details:

From lwip-users by Solganik Alexander <address@hidden>:


According to RFC 5681:

https://tools.ietf.org/html/rfc5681

Section 3.2, Fast Retransmit/Fast Recovery:
The TCP sender SHOULD use the "fast retransmit" algorithm to detect
and repair loss, based on incoming duplicate ACKs.  The fast
retransmit algorithm uses the arrival of 3 duplicate ACKs (as defined
in section 2, without any intervening ACKs which move SND.UNA) as an
indication that a segment has been lost.  After receiving 3 duplicate
ACKs, TCP performs a retransmission of what appears to be the missing
segment, without waiting for the retransmission timer to expire.
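
For illustration, here is a minimal, self-contained sketch of that
trigger. It is not lwIP code, and every name in it is illustrative:

#include <stdint.h>

/* Sketch of the RFC 5681 fast-retransmit trigger; not lwIP code. */
struct conn {
  uint32_t snd_una;  /* oldest unacknowledged sequence number (SND.UNA) */
  uint8_t  dupacks;  /* count of consecutive duplicate ACKs */
};

/* Process an incoming ACK; returns 1 if the caller should retransmit
 * the presumed-lost segment without waiting for the RTO. */
static int on_ack(struct conn *c, uint32_t ackno, int is_duplicate)
{
  if ((int32_t)(ackno - c->snd_una) > 0) {
    c->snd_una = ackno;  /* this ACK moves SND.UNA: counting restarts */
    c->dupacks = 0;
    return 0;
  }
  if (is_duplicate && ++c->dupacks == 3) {
    return 1;  /* third duplicate ACK without SND.UNA moving */
  }
  return 0;
}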

Now consider the following scenario:
The server sends packets P0, P1, P2, ..., Pk to the client.
The client sends packets P'0, P'1, ..., P'k to the server.

That is, it is a pipelined conversation. Now let's assume that P1 is
lost. The client will send an empty "duplicate" ACK upon receiving
each of P2, P3, ... In addition, the client will also send new packets
carrying client data (P'0, P'1, ...), as permitted by the server's
receive window and the client's congestion window.

The current implementation resets the duplicate-ACK count upon
receiving packets from the client that carry new data. This in turn
prevents the server from performing fast recovery after three
duplicate ACKs are received. Resetting the count is not required
here, because the sender's unacknowledged window (SND.UNA) is not
moving.
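
To make that concrete, here is a hedged sketch of the classification
the patch argues for, reusing the illustrative struct conn from the
sketch above (still not the lwIP API): only an ACK that advances
SND.UNA resets the counter, while a segment that merely carries new
payload leaves it untouched.

/* Sketch only; illustrative names. seg_len > 0 means the incoming
 * segment carries new data from the peer. (The full RFC 5681
 * definition of a duplicate ACK has further conditions, e.g. an
 * unchanged advertised window, omitted here for brevity.) */
static void classify_ack(struct conn *c, uint32_t ackno, uint16_t seg_len)
{
  if ((int32_t)(ackno - c->snd_una) > 0) {
    /* Only this case moves SND.UNA, so only this case resets. */
    c->snd_una = ackno;
    c->dupacks = 0;
  } else if (ackno == c->snd_una && seg_len == 0) {
    c->dupacks++;  /* a duplicate ACK: keep counting */
  }
  /* A data-bearing segment with ackno == snd_una falls through both
   * branches: the duplicate-ACK count is deliberately left alone. */
}

In the scenario above, the client's data segments P'0, P'1, ... arrive
with ackno == snd_una and seg_len > 0, so they fall through both
branches and the three empty duplicate ACKs still trip fast
retransmit.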

Signed-off-by: Solganik Alexander <address@hidden>
---
 src/core/tcp_in.c | 7 -------
 1 file changed, 7 deletions(-)

diff --git a/src/core/tcp_in.c b/src/core/tcp_in.c
index f1f0dd0e..c4c1de94 100644
--- a/src/core/tcp_in.c
+++ b/src/core/tcp_in.c
@@ -1332,7 +1332,6 @@ tcp_receive(struct lwip_context *ctx, struct tcp_pcb *pcb)
   s16_t m;
   u32_t right_wnd_edge;
   u16_t new_tot_len;
-  int found_dupack = 0;
 #if TCP_OOSEQ_MAX_BYTES || TCP_OOSEQ_MAX_PBUFS
   u32_t ooseq_blen;
   u16_t ooseq_qlen;
@@ -1409,7 +1408,6 @@ tcp_receive(struct lwip_context *ctx, struct tcp_pcb *pcb)
             /* Clause 5 */
             if (pcb->lastack == ctx->ackno) {
               MIB2_STATS_INC(mib2.tcpdupacks);
-              found_dupack = 1;
               if ((u8_t)(pcb->dupacks + 1) > pcb->dupacks) {
                 ++pcb->dupacks;
               }
@@ -1427,11 +1425,6 @@ tcp_receive(struct lwip_context *ctx, struct tcp_pcb *pcb)
           }
         }
       }
-      /* If Clause (1) or more is true, but not a duplicate ack, reset
-       * count of consecutive duplicate acks */
-      if (!found_dupack) {
-        pcb->dupacks = 0;
-      }
     } else if (TCP_SEQ_BETWEEN(ctx->ackno, pcb->lastack+1, pcb->snd_nxt)) {
       /* We come here when the ACK acknowledges new data. */
       tcpwnd_size_t acked;
-- 
2.17.1




    _______________________________________________________

Reply to this item at:

  <https://savannah.nongnu.org/patch/?9753>

_______________________________________________
  Message sent via Savannah
  https://savannah.nongnu.org/



