  * We can now safely modify the heap_size as we want. :)
  
==== Michael's reply to my recent patchset ====
<file>
Thank you for your TCP patches, some of which I have applied directly to iPXE,
some of which have inspired me to make my own improvements.  I have not been
commenting on your patches, but I have carefully reviewed every version of
your patch set, picked out those that I want to apply or rework, and
maintained a list of those that are not yet integrated into iPXE.  At this
point, I'm down to a list of three patches:

[tcp] Receive and Close flow adjustment

This is difficult to apply following the changes to support out-of-order
packets, and I'm not sure how valuable it is.  Since commit 9ff8229 ("[tcp]
Update received sequence number before delivering received data"), which fixes
a problem that you identified (thank you!), I think that the TCP state is
consistent at the time the data is delivered to the upper layer, so any
actions it might take are safe.

[tcp] Fix possible misjudged SYN/FIN ACKed status

I believe this is unnecessary; I'm pretty sure that iPXE will never send SYN
or FIN in a packet that also contains data.  Therefore, if new data is ACKed
while we are sending one of these flags, it must be ACKing these flags.  Have
you found a way in which it is possible for us to send both SYN/FIN and data
at the same time?

[tcp] Distinguish passive and active close with proper actions

This one I have not yet reviewed thoroughly, though I am expecting that it
will be applied in some form.

I am sorry that you have been disheartened by the lack of comments and
feedback on the gPXE mailing list.  I tend to avoid commenting on the gPXE
lists, because I do not find them to be a welcoming environment any more.

Michael
</file>

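The commit Michael refers to, 9ff8229, is essentially an ordering fix: advance the received sequence number before handing the data to the upper layer, so that anything the upper layer does in response (for example closing the connection) already sees consistent TCP state. Below is a minimal sketch of that ordering, not the commit itself; the rcv_ack field and the xfer_deliver_iob() call are written from memory of the gPXE-era code and may differ in detail.

<code c>
/* Illustrative sketch only, not the actual tcp.c: update the receive
 * counter first, then deliver.  If the upper layer (e.g. http.c)
 * reacts by calling xfer_close(), the FIN/ACK we send as a result is
 * built from already-updated state. */
static int tcp_rx_data ( struct tcp_connection *tcp,
                         struct io_buffer *iobuf ) {
    size_t len = iob_len ( iobuf );

    /* Update received sequence number before delivering ... */
    tcp->rcv_ack += len;

    /* ... then hand the data to the upper layer */
    return xfer_deliver_iob ( &tcp->xfer, iobuf );
}
</code>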
<file>
On Thursday 22 Jul 2010 11:09:23 Michael Brown wrote:
> [tcp] Receive and Close flow adjustment

> This is difficult to apply following the changes to support out-of-order
> packets, and I'm not sure how valuable it is.  Since commit 9ff8229 ("[tcp]
> Update received sequence number before delivering received data"), which
> fixes a problem that you identified (thank you!), I think that the TCP
> state is consistent at the time the data is delivered to the upper layer,
> so any actions it might take are safe.

Ignore that; I've just worked out that this patch makes perfect sense *if* we
also distinguish between passive and active close, and I see how it can be
implemented cleanly on top of the recent out-of-order changes.

tcp_rx_data() should add the I/O buffer to a list rather than delivering it via
xfer_deliver_iob().  tcp_rx_fin() should not call tcp_close().
tcp_process_rx_queue() should be adjusted to do something like:

  struct list_head received = LIST_HEAD_INIT ( received );
  ...
  while ( ! list_empty ( &tcp->rx_queue ) ) {
     ...
     tcp_rx_data ( tcp, seq, iob_disown ( iobuf ), &received );
     ...
  }

  list_for_each_entry_safe ( iobuf, tmp, &received, list ) {
    // deliver iobuf via xfer_deliver_iob()
  }

  if ( tcp->state & TCP_STATE_RCVD ( TCP_FIN ) )
    tcp_close ( tcp, 0 );

I think that would handle everything sensibly, and would mean that we could
properly handle passive close since, in the case of a received data+FIN packet
that will also cause our higher layer protocol (e.g. http.c) to close the
connection, tcp_rx_fin() would see the FIN before http.c called xfer_close().

Does this seem correct to you?  If so, would you like to put together a patch?

Michael
</file>

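Below is a slightly fuller sketch of how I read the proposal, to check my understanding before writing the patch. It is not code from the tree: the new tcp_rx_data() signature, the tcp_pop_in_order() helper, and the tcp->xfer field name are assumptions made only for illustration.

<code c>
/* Sketch of the proposed flow, with assumed helper names: collect
 * in-order payloads on a local list, deliver them only once the
 * receive state is fully updated, and act on a received FIN last. */
static void tcp_process_rx_queue ( struct tcp_connection *tcp ) {
    struct list_head received = LIST_HEAD_INIT ( received );
    struct io_buffer *iobuf;
    struct io_buffer *tmp;
    uint32_t seq;

    /* Drain the out-of-order queue; tcp_rx_data() now appends each
     * buffer to "received" instead of delivering it, and tcp_rx_fin()
     * only records that a FIN was seen. */
    while ( ( iobuf = tcp_pop_in_order ( tcp, &seq ) ) != NULL )
        tcp_rx_data ( tcp, seq, iob_disown ( iobuf ), &received );

    /* Deliver everything with the TCP state already consistent */
    list_for_each_entry_safe ( iobuf, tmp, &received, list ) {
        list_del ( &iobuf->list );
        xfer_deliver_iob ( &tcp->xfer, iobuf );
    }

    /* Passive close: the upper layer has seen all of the data (and
     * may already have called xfer_close()) before we act on the FIN */
    if ( tcp->state & TCP_STATE_RCVD ( TCP_FIN ) )
        tcp_close ( tcp, 0 );
}
</code>

If that matches what Michael has in mind, the patch mostly amounts to moving the xfer_deliver_iob() call out of tcp_rx_data() and dropping the tcp_close() call from tcp_rx_fin().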
<file>
Yes! And I think it is better than what I did in
"[tcp] Receive and Close flow adjustment".  Putting xfer_deliver_iob()
in tcp_process_rx_queue() seems more reasonable.

I would be honored to put together a patch for the passive close
facility and the approach you suggested above, on top of the current
iPXE TCP stack.

By the way, do you think it's reasonable to do something like the
"[tcp] Cleanup TCP closing actions" patch which I submitted
on the gpxe-devel list?[2]

I think it might be useful for the following reasons:
  1. We don't have to think about what tcp_close() would do if we call it
somewhere; the behavior of calling tcp_close() is always the same.
  2. It reduces some duplicate code.
  3. We don't have to separate tcp_close() from tcp_rx_fin(), and the
behavior stays the same.  It seems a little neater to me.

The above results are accomplished by:
  1. Separating the terminate action.
  2. Separating the terminate action.
  3. Separating the nullify-xfer-interface step.
     (Which is part of "[tcp] Receive and Close flow adjustment"[1])

[1]:
http://git.etherboot.org/?p=people/cooldavid/gpxe.git;a=commitdiff;h=8fc73d18c8528cbcc1b1c3849b51d3ee3682c937
[2]:
http://git.etherboot.org/?p=people/cooldavid/gpxe.git;a=commitdiff;h=660e96200f67c981e7397eb05fbb4e91ed253f50

Guo-Fu Tseng
</file>
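
To make the "separation" in the list above concrete, here is a hypothetical sketch of the shape I have in mind; the helper names tcp_nullify_xfer() and tcp_terminate() are invented here and do not come from the submitted patch.

<code c>
/* Hypothetical sketch (invented helper names, not the actual patch):
 * split the terminate and nullify-xfer steps out into helpers, so that
 * tcp_close() behaves the same no matter where it is called from,
 * including from tcp_rx_fin() on a passive close. */
static void tcp_nullify_xfer ( struct tcp_connection *tcp ) {
    /* Detach the upper layer: it can no longer feed us data or
     * re-enter tcp_close() via xfer_close(). */
    xfer_nullify ( &tcp->xfer );
    xfer_unplug ( &tcp->xfer );
}

static void tcp_terminate ( struct tcp_connection *tcp ) {
    /* Final teardown shared by every closing path */
    stop_timer ( &tcp->timer );
    list_del ( &tcp->list );
    ref_put ( &tcp->refcnt );
}

static void tcp_close ( struct tcp_connection *tcp, int rc ) {
    ( void ) rc;  /* close status; ignored in this simplified sketch */

    /* Always detach the upper layer first, so behaviour is the same
     * from every caller */
    tcp_nullify_xfer ( tcp );

    /* Nothing more to do if we are already closing */
    if ( tcp->state & TCP_STATE_SENT ( TCP_FIN ) )
        return;

    /* If the connection never completed its handshake, tear it down
     * immediately ... */
    if ( ! ( tcp->state & TCP_STATE_ACKED ( TCP_SYN ) ) ) {
        tcp_terminate ( tcp );
        return;
    }

    /* ... otherwise mark our FIN as pending and let the state machine
     * call tcp_terminate() once the close completes */
    tcp->state |= TCP_STATE_SENT ( TCP_FIN );
}
</code>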
