[gPXE] any negative implications of setting TCP window size to (65536 - 4)?

Arends, R.R. r.r.arends at hro.nl
Fri Feb 19 03:10:22 EST 2010


In our environment (gigabit to 100 Mbit LAN) I see a negative impact when using a 65532-byte TCP window, at least with some workstations. For example, with a VM in VMware Workstation on Windows I see a lot of TCP retransmissions...

This is the case with 16k, 32k and 64k windows.

Switching back to 8k or 4k gives a much smoother TCP flow.

A dynamic TCP window would probably work a lot better, for us at least.

-reni


>>> On 18-2-2010 at 18:55, in message <4B7D7F0F.6070608 at myri.com>, Glenn Brown
<glenn at myri.com> wrote:
> Oops: I accidentally posted before providing details:
> 
> If gPXE TCP advertises a large window but the Ethernet driver only 
> provides a little buffering and cannot keep up with receives, dropping 
> when overwhelmed, this is no problem.  This case is handled by TCP 
> congestion control by design: It is merely a case of a slow link with 
> little buffering.
> 
> Similarly, the limited buffering is not a problem for multiple TCP 
> connections, because TCP congestion control will dynamically slow the 
> TCP senders to share the limited resources.
> 
> In fact, huge Ethernet receive buffers can actually hurt TCP performance 
> when they fill, because they increase the end-to-end latency of the 
> connection, increasing the round-trip-time measured by TCP, increasing 
> TCP's retransmission delay, slowing lost-packet recovery... and 
> lost-packet recovery is especially important in gPXE where out-of-order 
> receives are not supported.
> 
> Ideally, one wants to keep the Ethernet buffering just large enough to 
> smooth over transient spikes in the packet arrival rate and to smooth 
> over transient delays in received packet processing, such as interrupt 
> coalescing.  For gPXE you probably want just a few MTUs worth of 
> Ethernet buffering.  If TCP wants to support out-of-order receives, it 
> can provide its own buffering for this, dropping frames when buffering 
> is not available.  (Good driver<->TCP interfaces avoid copying 
> out-of-order receives to TCP storage by swapping buffers with the 
> Ethernet driver.  I do not know if gPXE can do this.)  This way, TCP can 
> drop-or-NACK based on buffer availability, speeding recovery when frames 
> are dropped by TCP instead of by Ethernet.  This approach also allows 
> large amounts of TCP buffering without the buffering increasing the TCP 
> connection round trip time, speeding recovery when frames are dropped in 
> the Internet or by the Ethernet driver.
> 
> So, IMHO, it would be nice to see gPXE support large dynamic TCP window 
> sizes without increasing buffering at the Ethernet level.  It should 
> work fine with gPXE's current no-out-of-order implementation, and 
> keeping the Ethernet buffering independent of TCP max window size is the 
> right thing to do even if gPXE later supports out-of-order receives.
> 
> FWIW,
> --Glenn
> _______________________________________________
> gPXE mailing list
> gPXE at etherboot.org 
> http://etherboot.org/mailman/listinfo/gpxe
