===== Week 7 [ Mon 5 Jul 2010 - Sun 11 Jul 2010 ] =====

==== Day 1 [ Mon 5 Jul 2010 ] ====

Git commit: [[http://git.etherboot.org/?p=people/andreif/gpxe.git;a=commit;h=959381fd2d516e50324047113bf7a5bd160d07e3|959381fd2d516e50324047113bf7a5bd160d07e3]]

Started working on the .transmit routine. As a minor off-topic note, I just want to say that even though this driver hasn't been as much fun to develop (because it involved a lot more copy and paste than the old one), it definitely compensated with a large number of theoretical concepts (see my previous entries for more details).

The first thing I ran into today was [[http://en.wikipedia.org/wiki/Large_segment_offload|large segment offload]]. LSO works by delegating the responsibility of packet fragmentation to the NIC, relieving the CPU of the task of splitting packets into segments of the appropriate size. This saves CPU cycles and increases performance. I made the connection with a [[http://portal.acm.org/citation.cfm?id=1298483|paper]] I read earlier this year, also related to offloading. Anyway, since gPXE does not support LSO, the related code will be ignored, but it was an interesting concept to read about nevertheless.

After that I removed the old way of representing the circular buffer (using two pointers), cleaned up the DMA-tx-related code, and got .transmit done. There isn't much to say about it: you just put the buffer's address into the descriptor, along with the size and the ownership bit (a rough sketch is at the bottom of this page). I suppose the NVREG_TXRXCTL_KICK write into the NvRegTxRxControl register makes the NIC re-evaluate the descriptors and send any new packets.

FIXME: forgot to fill in the size.

==== Day 2 [ Tue 6 Jul 2010 ] ====

Git commit: [[http://git.etherboot.org/?p=people/andreif/gpxe.git;a=commit;h=f75a876c5036ed50b81a6b7569eadf86c24474d7|f75a876c5036ed50b81a6b7569eadf86c24474d7]]

When I started working today on the implementation of the .poll routine, I realised that the Linux driver follows the same general principles as the pcnet32 driver. I adapted the pcnet32 code to use the forcedeth descriptors and it turned out pretty well (see the poll sketch at the bottom of this page). The Linux driver does some advanced error checking, but I think that is just for reporting stats to userspace, so I'll skip it. All I have to do now is clean up rx and re-implement the iobuf allocation routine so it can be used both at start-up and afterwards, when refilling rx entries.

==== Day 3 [ Wed 7 Jul 2010 ] ====

Git commit: [[http://git.etherboot.org/?p=people/andreif/gpxe.git;a=commit;h=e230901ae95b2abe730f704e302b8b495d8ec344|e230901ae95b2abe730f704e302b8b495d8ec344]]

I started fixing the .poll routine so it now processes packets only if interrupts are "signalled" in the NvRegIrqStatus register. Cleaned up rx and implemented a routine that refills the rx descriptors (also sketched at the bottom of this page). After that, I started testing the NIC. I did manage to fix some bugs related to bad initializations, too many descriptors, and the fact that I did not call ''netdev_link_up()''. The problem I have right now is that ''alloc_iob()'' fails and thus the whole ''forcedeth_alloc_rx()'' routine fails. I think I may be over-allocating memory somewhere or forgetting to free it; I can't think of any other reason for memory allocation to fail. I didn't get to dig too much into this, but I'm sure tomorrow will prove to be more fruitful.

In other news, Piotr reported a problem with the pcnet32 driver which, fortunately, was easy to fix. Thanks, Piotr!
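
For reference, here are rough sketches of the three routines mentioned this week. First, the .transmit flow from Day 1: fill the next free descriptor with the buffer's address, size and ownership bit, then kick the NIC. The descriptor layout (''struct ring_desc'' with ''buf''/''flaglen'' fields), the NV_TX_VALID flag and the private-data fields (''tx_ring'', ''tx_curr'', ''tx_fill_ctr'', ''mmio_addr'', ''txrxctl_bits'') are illustrative placeholders following the Linux forcedeth naming, not necessarily the exact names used in my driver; header includes and struct definitions are omitted.

<code c>
/* Rough sketch of .transmit: one descriptor per packet, then kick the NIC. */
static int forcedeth_transmit ( struct net_device *netdev,
                                struct io_buffer *iobuf ) {
    struct forcedeth_private *priv = netdev->priv;
    struct ring_desc *tx_desc;

    /* Refuse the packet if every descriptor is still owned by the NIC */
    if ( priv->tx_fill_ctr == TX_RING_SIZE ) {
        DBG ( "forcedeth: tx ring full\n" );
        return -ENOBUFS;
    }

    /* Buffer address, size and ownership (valid) bit go into the next
     * free descriptor. The exact size encoding still needs to be
     * checked against the documentation (see the FIXME above). */
    tx_desc = &priv->tx_ring[priv->tx_curr];
    tx_desc->buf = cpu_to_le32 ( virt_to_bus ( iobuf->data ) );
    tx_desc->flaglen = cpu_to_le32 ( NV_TX_VALID | iob_len ( iobuf ) );

    priv->tx_curr = ( priv->tx_curr + 1 ) % TX_RING_SIZE;
    priv->tx_fill_ctr++;

    /* Make the NIC re-evaluate the ring and send any new packets */
    writel ( NVREG_TXRXCTL_KICK | priv->txrxctl_bits,
             priv->mmio_addr + NvRegTxRxControl );

    return 0;
}
</code>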
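
Next, the receive half of .poll from Days 2 and 3: only do work when the NIC has signalled an interrupt in NvRegIrqStatus, walk the rx ring until a descriptor the NIC still owns is found, hand each completed buffer to the stack, then refill. The mask names (NVREG_IRQSTAT_MASK, NV_RX_AVAIL, NV_RX_DESCRIPTORMASK) and the bookkeeping fields are, again, illustrative rather than the final names.

<code c>
/* Rough sketch of the rx side of .poll. */
static void forcedeth_poll ( struct net_device *netdev ) {
    struct forcedeth_private *priv = netdev->priv;
    struct ring_desc *rx_desc;
    struct io_buffer *iobuf;
    uint32_t flaglen;
    size_t len;

    /* Nothing to do unless the NIC signalled an interrupt */
    if ( ! readl ( priv->mmio_addr + NvRegIrqStatus ) )
        return;
    /* Acknowledge the interrupt(s) */
    writel ( NVREG_IRQSTAT_MASK, priv->mmio_addr + NvRegIrqStatus );

    while ( 1 ) {
        rx_desc = &priv->rx_ring[priv->rx_curr];
        flaglen = le32_to_cpu ( rx_desc->flaglen );

        /* Stop at the first descriptor the NIC still owns */
        if ( flaglen & NV_RX_AVAIL )
            break;

        /* Hand the filled buffer up to the network stack */
        iobuf = priv->rx_iobuf[priv->rx_curr];
        len = ( flaglen & NV_RX_DESCRIPTORMASK );
        iob_put ( iobuf, len );
        netdev_rx ( netdev, iobuf );

        priv->rx_iobuf[priv->rx_curr] = NULL;
        priv->rx_curr = ( priv->rx_curr + 1 ) % RX_RING_SIZE;
        priv->rx_fill_ctr--;
    }

    /* Put fresh buffers back into the empty ring slots */
    forcedeth_alloc_rx ( netdev );
}
</code>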
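
Finally, a sketch of the refill routine along the lines of ''forcedeth_alloc_rx()'': allocate an io_buffer for every empty ring slot and hand the descriptor back to the NIC. This is where ''alloc_iob()'' is currently failing for me. RX_BUF_SIZE and the ring-bookkeeping fields are placeholders.

<code c>
/* Rough sketch of the rx refill path. */
static void forcedeth_alloc_rx ( struct net_device *netdev ) {
    struct forcedeth_private *priv = netdev->priv;
    struct ring_desc *rx_desc;
    struct io_buffer *iobuf;

    while ( priv->rx_fill_ctr < RX_RING_SIZE ) {
        /* This allocation is the one that currently fails */
        iobuf = alloc_iob ( RX_BUF_SIZE );
        if ( ! iobuf ) {
            DBG ( "forcedeth: alloc_iob() failed\n" );
            break;
        }

        priv->rx_iobuf[priv->rx_fill] = iobuf;

        /* Point the descriptor at the new buffer and give ownership
         * back to the NIC */
        rx_desc = &priv->rx_ring[priv->rx_fill];
        rx_desc->buf = cpu_to_le32 ( virt_to_bus ( iobuf->data ) );
        rx_desc->flaglen = cpu_to_le32 ( NV_RX_AVAIL | RX_BUF_SIZE );

        priv->rx_fill = ( priv->rx_fill + 1 ) % RX_RING_SIZE;
        priv->rx_fill_ctr++;
    }
}
</code>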

