===== Week 4 [ Mon 14 Jun 2010 - Sun 20 Jun 2010 ] =====

==== Day 1 [ Mon 14 Jun 2010 ] ====

Git commit: [[http://git.etherboot.org/?p=people/andreif/gpxe.git;a=commit;h=50c10591aaba4a808abd935df81df03cd0562e84|50c10591aaba4a808abd935df81df03cd0562e84]]

I've started working on the forcedeth driver's .probe routine. After several lines of code I remembered stefanha's advice that I should start familiarising myself with gPXE's core source code. And so I did.

Since I was working on .probe, I wanted to see where it is called from. The first hit I got was ''probe_devices()'' from ''core/device.c''. This goes through all root devices (which are system buses) and calls their .probe routines. In turn, such a routine (''pcibus_probe()'' for PCI) scans the bus and registers all devices by calling their drivers' .probe routines. One such device is the NIC I am writing the driver for :). So this is the code path from ''probe_devices()'' to my driver.

Naturally, the next question is: who calls ''probe_devices()''? This routine is pointed to by the .startup field of a ''struct startup_fn'' structure. gPXE keeps a table of such structures, one for each "entity" that has to be started and shut down at some point. These "entities" include the serial port, device drivers and UNDI. The .startup routines are called by the ''startup'' function from ''core/init.c''. Finally, ''startup'' is called from ''core/main.c'''s ''main()'', which is gPXE's entry point.

I will do this with every exposed routine from the forcedeth driver, so I can get a better understanding of how gPXE works. This might slow down my driver writing at times, but in the long run I will gain the ability to contribute more diverse patches (not just drivers).

I spent several hours just browsing the code without realising how much time had passed, so I didn't get as much work done on the forcedeth driver as I wanted. OTOH, the code browsing was a lot of fun, and I don't feel like I have wasted any time.
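To keep the shape of this chain somewhere I can come back to, here is a rough sketch of the driver-side end of it. It is not copied from the tree: the .probe/.remove bodies are empty stubs, the single PCI ID is just the first nForce entry from the Linux driver, and I'm quoting the macro and field names from memory, so treat it as an illustration of the pattern rather than real driver code.

<code c>
#include <gpxe/pci.h>

/* The call chain as I understand it today:
 *
 *   main()                    core/main.c, gPXE's entry point
 *    -> startup()             core/init.c, walks the table of struct startup_fn
 *     -> probe_devices()      core/device.c, walks the root devices
 *      -> pcibus_probe()      the PCI root bus's .probe
 *       -> forcedeth_probe()  my driver's .probe, found via the table below
 */

/* Stub .probe/.remove; the real bodies are what I am writing now */
static int forcedeth_probe ( struct pci_device *pci __unused,
			     const struct pci_device_id *id __unused ) {
	return 0;
}

static void forcedeth_remove ( struct pci_device *pci __unused ) {
}

/* The IDs this driver claims (just one nForce entry for illustration) */
static struct pci_device_id forcedeth_nics[] = {
	PCI_ROM ( 0x10de, 0x01c3, "nforce", "nForce Ethernet Controller" ),
};

/* The __pci_driver attribute drops this structure into the linker table
 * that pcibus_probe() walks; the startup_fn table that startup() walks
 * is built with the same linker-table trick. */
struct pci_driver forcedeth_driver __pci_driver = {
	.ids = forcedeth_nics,
	.id_count = ( sizeof ( forcedeth_nics ) / sizeof ( forcedeth_nics[0] ) ),
	.probe = forcedeth_probe,
	.remove = forcedeth_remove,
};
</code>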
==== Day 2 [ Tue 15 Jun 2010 ] ====

Git commit: [[http://git.etherboot.org/?p=people/andreif/gpxe.git;a=commit;h=706b0f41964e606f49b51250d13417a7c5874a26|706b0f41964e606f49b51250d13417a7c5874a26]]

Work continued on .probe, and I imported the PCI_ROMs and some header material from the Linux driver.

The first question I had today was about the way the driver obtains the I/O address. In the pcnet32 driver I simply read the address from ''pci_device->ioaddr'' and that was it. Apparently this is not correct for some devices, which have to call ioremap instead. I'm not sure what the explanation is, but I have a hunch that some devices don't expose their registers in a PCI region that maps to the I/O port space, but in a memory-mapped PCI region. Since x86 keeps these two address spaces separate, the way of accessing them differs: for memory-mapped registers you have to use ''ioremap'' to obtain a usable address and then access it with readl/writel rather than the usual inb/outb. Perhaps someone can correct me on this :)

The next problem is related to ''pci_resource_flags()''. Well, not the routine itself, but this specific sequence of [[http://lxr.linux.no/linux+*/drivers/net/forcedeth.c#L5657|code]]. I'll dig into it tomorrow.

Finally, the driver makes use of driver-specific data that is set based on the detected NIC. gPXE does have a ''driver_data'' field in ''struct pci_device_id'', but I noticed that it isn't used in the r8169.c driver (the driver I usually refer to for coding style and code organization). This has spawned a patch that I hope will be accepted into mainline :)
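To pin the hunch down, here is a minimal sketch of what I imagine the BAR handling could look like in .probe. ''forcedeth_map_regs'' is a name I made up, and I haven't verified the exact spelling of the header or the PCI_BASE_ADDRESS_* constants in gPXE, so this illustrates the idea rather than being working driver code.

<code c>
#include <stdint.h>
#include <gpxe/pci.h>
#include <gpxe/io.h>	/* where I expect ioremap()/readl() to live */

/* Decide how to reach the NIC's registers based on BAR0's type:
 * an I/O-port BAR is used directly through pci->ioaddr with inb()/outb(),
 * while a memory BAR has to be mapped with ioremap() and then accessed
 * with readl()/writel() on the returned pointer. */
void * forcedeth_map_regs ( struct pci_device *pci ) {
	uint32_t bar0;
	unsigned long start;
	unsigned long len;

	pci_read_config_dword ( pci, PCI_BASE_ADDRESS_0, &bar0 );
	if ( bar0 & PCI_BASE_ADDRESS_SPACE_IO ) {
		/* Port I/O: no mapping needed, keep using pci->ioaddr */
		return NULL;
	}

	/* Memory-mapped I/O: map the whole region */
	start = pci_bar_start ( pci, PCI_BASE_ADDRESS_0 );
	len = pci_bar_size ( pci, PCI_BASE_ADDRESS_0 );
	return ioremap ( start, len );
}
</code>

Presumably the ''pci_resource_flags()'' sequence in the Linux driver is doing the same kind of I/O-versus-memory test; tomorrow's reading should tell.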
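And since I'll need it for forcedeth anyway, here is a sketch of how I picture ''driver_data'' being used. Everything in it is made up for illustration: the flag name, the private structure and the probe body are mine, and the ID entry is spelled out by hand because I haven't checked yet whether the PCI_ROM() macro can fill the ''driver_data'' field (that's what the patch is about).

<code c>
#include <errno.h>
#include <stdlib.h>
#include <gpxe/pci.h>

/* Hypothetical per-board capability flag, modelled on the DEV_HAS_*
 * flags in the Linux forcedeth driver */
#define FORCEDETH_HAS_CHECKSUM 0x0001UL

/* Driver-private data; only the flags word matters for this sketch */
struct forcedeth_private {
	unsigned long flags;
};

/* An ID-table entry would carry its per-board flags in .driver_data,
 * written out by hand here, e.g.:
 *
 *   { .vendor = 0x10de, .device = 0x01c3, .name = "nforce",
 *     .driver_data = FORCEDETH_HAS_CHECKSUM },
 *
 * and .probe receives the matched entry, so the flags can simply be
 * copied into the private structure for later use: */
int forcedeth_probe_sketch ( struct pci_device *pci __unused,
			     const struct pci_device_id *id ) {
	struct forcedeth_private *priv;

	priv = zalloc ( sizeof ( *priv ) );
	if ( ! priv )
		return -ENOMEM;

	priv->flags = id->driver_data;
	/* The rest of probe would configure the NIC according to
	 * priv->flags; freed here only so the sketch stands alone. */
	free ( priv );
	return 0;
}
</code>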
==== Day 3 [ Wed 16 Jun 2010 ] ====

me.away()

==== Day 4 [ Thu 17 Jun 2010 ] ====

Git commit pcnet32: [[http://git.etherboot.org/?p=people/andreif/gpxe.git;a=commit;h=b6ecc1433ea4c364aefaaf3eec7cf35953c72def|b6ecc1433ea4c364aefaaf3eec7cf35953c72def]]

Git commit forcedeth: [[http://git.etherboot.org/?p=people/andreif/gpxe.git;a=commit;h=4daff4d50954e0670e5e8e24e43ea30ccbec4ab0|4daff4d50954e0670e5e8e24e43ea30ccbec4ab0]]

I started getting feedback on my pcnet32 driver, so I had to make some changes to it. The feedback keeps coming, so I'm going to wait until tomorrow to post the updated patch. A nice side effect is that I'm learning all sorts of developer tricks related to preparing patches, and the protocol one should follow when sending them. Stefan also suggested that I test a patch sent on qemu-devel related to their pcnet emulation, to see if our driver still works with it. I'll do this tomorrow.

Work on the forcedeth driver continued today. For now it is just a bunch of configuration settings derived from the NIC type, which are saved in the driver's private structure. I figure that once I get to the parts that actually do something, these will all make sense.

==== Day 5 [ Fri 18 Jun 2010 ] ====

Git commit: [[http://git.etherboot.org/?p=people/andreif/gpxe.git;a=commit;h=479f777706bb4c64f9a244846f7d63bdf1c47ae7|479f777706bb4c64f9a244846f7d63bdf1c47ae7]]

After making the pcnet32 modifications I decided to test the driver again, on all three VMs: VMWare, VBox and Qemu. I realised then that I hadn't been testing it as thoroughly as I should have. The following table shows the times for downloading a 100 MB image five times in a row:

^ VMWare ^ VBox ^ Qemu ^
| 59s | 169s | 67s |
| 34s | 151s | 71s |
| 41s | 153s | 65s |
| 52s | 148s | 71s |
| 45s | 166s | 67s |

Qemu behaves strangely in that it freezes the second time it tries to fetch the image. This also happens with the old driver (the one from before GSoC, not mine). It hangs for a while, then starts again, until eventually it stops for good. As it turns out, I wasn't giving Qemu enough RAM; the "-m 512" option did the trick. Thanks, stefanha!

Another problem is Virtual Box. I remember getting speeds similar to VMWare on VBox too, but these results show a different situation. I wonder what might cause such big differences between the two. The e1000 driver performs similarly. Here are two captures made with Wireshark that show the packet flow in VMWare and in VBox, for comparison:

VMWare:
{{:soc:2010:andreif:journal:vmwarecapture.png?1000| VMWare capture}}

VBox:
{{:soc:2010:andreif:journal:vboxcap.png?1000| Virtual Box capture}}

Note to self: try Guo-Fu's TCP patch, which might improve things in VBox.

The script I used:

<code>
#!gpxe
ifopen net0
dhcp net0
time imgfetch http://${net0/gateway}/100mb
time imgfetch http://${net0/gateway}/100mb
time imgfetch http://${net0/gateway}/100mb
time imgfetch http://${net0/gateway}/100mb
time imgfetch http://${net0/gateway}/100mb
exit
</code>

Someone should really make a test suite for drivers. Perhaps this would be a good first script.

Tomorrow's TODO: finish the patch, leave init_block as it was before, and return an error when the MAC address is not valid. Add some details to the patch description, perhaps a comparison against the old driver.

==== Day 6 [ Sat 19 Jun 2010 ] ====

I finished the pcnet32 patch today and uploaded version 2. Hopefully, people will find it OK.

Stefan reports some results in KVM: apparently disabling the BIOS console yields a definite speed improvement, which I will have to test in VBox too. I also received a response from one of the forcedeth driver devs; it was awesome that he took the time to write back. The forcedeth driver will mostly be a direct port of the Linux one. Another small project came up today: finding out why gPXE is so slow in VBox. I don't know yet whether the BIOS console is the cause, but I will find out over the next few days.

==== Day 7 [ Sun 20 Jun 2010 ] ====

Version 3 of the patch is up. At this rate, I will probably hold the record for the most revisions of a patch :). I also sent an email to Andrew de Quincey regarding forcedeth documentation.