Git commit: caf738b675be03f6b190518cba0e6ac5cadbc5ee
Started working on the .open implementation today, and the first couple of lines of code were related to the NIC's power mode. Now, I don't think you can interact with a NIC when it's off :) so I suspect the NIC starts out in some kind of low-power mode.
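To give an idea of what that step looks like, here is a minimal sketch of the power-up write; the register and flag names (and their values) are copied from memory out of the Linux driver, so treat them as assumptions rather than the final gPXE code:

```c
#include <stdint.h>
#include <gpxe/io.h>

/* Register offset and flag bits as I remember them from the Linux
 * forcedeth driver -- verify against forcedeth.c before trusting them. */
#define NvRegPowerState			0x26c
#define NVREG_POWERSTATE_POWEREDUP	0x8000
#define NVREG_POWERSTATE_VALID		0x0100

/* Bring the NIC out of its low-power state before touching anything else. */
static void nv_power_up ( void *ioaddr ) {
	uint32_t powerstate;

	powerstate = readl ( ioaddr + NvRegPowerState );
	powerstate |= ( NVREG_POWERSTATE_POWEREDUP | NVREG_POWERSTATE_VALID );
	writel ( powerstate, ioaddr + NvRegPowerState );
}
```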
Next up I learned about pause frames, which are a means of flow control in Ethernet. What happens is that a receiver overwhelmed by the amount of traffic sent to it starts sending out pause frames, which cause the sender to limit the amount of traffic it sends. This might sound good, but apparently it interferes with TCP's own flow control mechanisms, and this leads to poor performance. Some of the nForce NICs have support for pause frames.
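To make the mechanism a bit more concrete, here is what a pause frame actually looks like on the wire; this is plain IEEE 802.3 flow control, nothing forcedeth-specific, and the NIC's MAC builds these itself rather than the driver:

```c
#include <stdint.h>

/* Layout of an Ethernet PAUSE frame (MAC control frame). It is sent to
 * the reserved multicast address 01:80:C2:00:00:01 and tells the link
 * partner to stop transmitting for pause_time quanta of 512 bit times. */
struct eth_pause_frame {
	uint8_t  dst[6];	/* 01:80:C2:00:00:01 */
	uint8_t  src[6];	/* sender's MAC address */
	uint16_t ethertype;	/* 0x8808: MAC control */
	uint16_t opcode;	/* 0x0001: PAUSE */
	uint16_t pause_time;	/* 0 means "resume immediately" */
	uint8_t  pad[42];	/* padding up to the 60-byte minimum frame */
} __attribute__ (( packed ));
```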
Finally, the descriptor rings had to be set up. These differ a little from the pcnet32 descriptors. For starters, there are no separate descriptor formats for rx and tx. Both of them have the same layout:
```c
u32 buf;
u32 flaglen;
```
There is also an extended descriptor format that newer NICs use. Because rx and tx share the same layout, all of the descriptors (rx+tx) are stored in one large circular buffer.
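For reference, the extended descriptor carries 64-bit buffer addresses and VLAN information; the layout below is from my reading of the Linux driver, so take the field names as approximate:

```c
/* Extended descriptor format used by the newer NICs (layout as in the
 * Linux forcedeth driver, from memory). */
struct ring_desc_ex {
	u32 bufhigh;	/* upper 32 bits of the buffer address */
	u32 buflow;	/* lower 32 bits of the buffer address */
	u32 txvlan;	/* VLAN tag for tx */
	u32 flaglen;	/* flags + buffer length, as in the legacy format */
};
```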
Besides these, the driver stores an array of struct nv_skb_map, which has the following layout:

```c
struct nv_skb_map {
	struct sk_buff *skb;
	dma_addr_t dma;
	unsigned int dma_len:31;
	unsigned int dma_single:1;
	struct ring_desc_ex *first_tx_desc;
	struct nv_skb_map *next_tx_ctx;
};
```
The equivalent of the sk_buff structure in gPXE is the iobuf, so I renamed it struct nv_iob_map. Now, the question is: what does it do? The linked-list format suggests some sort of packet fragmentation, but I'm not sure yet; once I reach _transmit I'll probably figure it out.
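Just to have something concrete to aim for, here is a rough guess at what the renamed structure could look like in gPXE; the field set is my own speculation based on the Linux layout above, not the final port:

```c
#include <stdint.h>
#include <gpxe/iobuf.h>

struct ring_desc_ex;

/* Hypothetical gPXE counterpart of nv_skb_map: struct io_buffer takes
 * the place of sk_buff, and the DMA bookkeeping fields may well shrink
 * or disappear once the mapping question below is settled. */
struct nv_iob_map {
	struct io_buffer *iobuf;		/* replaces struct sk_buff *skb */
	uint32_t dma;				/* bus address of the buffer */
	struct ring_desc_ex *first_tx_desc;	/* first descriptor of this packet */
	struct nv_iob_map *next_tx_ctx;		/* next context in the tx chain */
};
```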
Git commit: 97ac5f59a2b43b760d85318f31fd175fca9be1ee
Continued work on the .open routine today. I left off yesterday at rx and tx ring initialization, which involves filling the fields described above with the appropriate values. The problematic part was understanding what pci_map_single() does, so I could replace it with the appropriate functionality in the gPXE driver.
First off, Stefan mentioned that gPXE does not do DMA mapping, which I suppose means that we can't explicitly declare a memory area that is accessible by both the CPU and the device. To be more exact, we do not have an API in place to do this automatically. Linux does, because there are architectures where using DMA takes more work, for example because of IOMMUs. An IOMMU holds mappings between a device's address space and the physical address space, which can be useful to overcome the limitations imposed by a limited number of address pins on the device's side.
Another use for the DMA API is implementing bounce buffers, which allow devices to DMA to/from non-DMA-able regions by going through buffers in a DMA-able region. The Linux API also offers a scatter-gather DMA abstraction.
In conclusion, I replaced the pci_map_single() call with a simple virt_to_bus() call.
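As a sketch of what that boils down to when attaching a buffer to an rx descriptor (the helper itself is hypothetical, and NV_RX_AVAIL is a flag name taken from the Linux driver):

```c
#include <stddef.h>
#include <stdint.h>
#include <byteswap.h>
#include <gpxe/io.h>
#include <gpxe/iobuf.h>

/* Legacy descriptor layout, as shown in the previous entry. */
struct ring_desc {
	uint32_t buf;
	uint32_t flaglen;
};

/* "Descriptor owned by the NIC" flag; name and value from the Linux driver. */
#define NV_RX_AVAIL 0x80000000

/* Hypothetical helper: hand an I/O buffer to the NIC via an rx descriptor.
 * Where Linux would call pci_map_single() to obtain a DMA address, gPXE
 * buffers are already DMA-able, so virt_to_bus() is all that is needed. */
static void nv_fill_rx_desc ( struct ring_desc *desc,
			      struct io_buffer *iobuf, size_t len ) {
	desc->buf = cpu_to_le32 ( virt_to_bus ( iobuf->data ) );
	desc->flaglen = cpu_to_le32 ( len | NV_RX_AVAIL );
}
```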
The next steps taken by the driver are:
Next, the driver sets up a slot time register. I already knew that slot time is the time it takes for a signal to travel the distance between two nodes, but I didn't expect to have control over it. The driver uses a pseudo-random number generator to set this value. I thought slot time only had to be considered in half-duplex modes, but some constant names (NVREG_SLOTTIME_10_100_FULL) have me thinking otherwise. I'm not too sure of the role of this slot time register.
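For what it's worth, the setup I have in mind looks roughly like this; NvRegSlotTime and NVREG_SLOTTIME_MASK are names (and values) I remember from the Linux driver, so they are assumptions, and gPXE's random() stands in for Linux's get_random_bytes():

```c
#include <stdint.h>
#include <stdlib.h>
#include <gpxe/io.h>

/* Names and values from memory of the Linux driver -- verify before use. */
#define NvRegSlotTime			0x9c
#define NVREG_SLOTTIME_MASK		0x000000ff
#define NVREG_SLOTTIME_10_100_FULL	0x00007f00

/* Rough sketch: seed the low bits of the slot time with a pseudo-random
 * value, as the Linux driver does, and OR in the 10/100 full-duplex value. */
static void nv_setup_slottime ( void *ioaddr ) {
	uint32_t slottime;

	slottime = random() & NVREG_SLOTTIME_MASK;
	slottime |= NVREG_SLOTTIME_10_100_FULL;
	writel ( slottime, ioaddr + NvRegSlotTime );
}
```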
From what I've read, .open continues with interrupt management, which I'll do tomorrow, and hopefully conclude the .open implementation with it.
Git commit: 6640adc6f8dad60ff709c51039568a6000cfcc6b
I managed to get .open done today, and since .close didn't involve much, I got that done too.
There wasn't much left to do. The Linux driver disables and then enables the interrupts, but since gPXE doesn't use them in non-UNDI mode, I just left them disabled. It continues by setting multicast masks, which I suspect enable the reception of all packets (earlier these masks were zeroed, which I think ignored everything on the wire). A similar mask setup was done in pcnet32. The bulk of today's code was in the nv_update_linkspeed routine, which handles the following cases:
The .close routine disables interrupts, stops rx and tx, and basically cleans up the rings. Tomorrow I'll finish clearing the rings by implementing the nv_drain_rxtx routine. I'll also do the switch setup that Stefan suggested and see if .open and .close work :)
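As a rough idea of what the rx half of that drain routine might look like in the gPXE port (the function and its parameters are my own sketch, not the final code):

```c
#include <stdint.h>
#include <gpxe/iobuf.h>

/* Legacy descriptor layout, repeated here so the sketch stands alone. */
struct ring_desc {
	uint32_t buf;
	uint32_t flaglen;
};

/* Hypothetical sketch of the rx half of nv_drain_rxtx: clear every
 * descriptor in the ring and free any io_buffer still attached to it. */
static void nv_drain_rx ( struct ring_desc *ring, struct io_buffer **iobufs,
			  unsigned int count ) {
	unsigned int i;

	for ( i = 0 ; i < count ; i++ ) {
		ring[i].buf = 0;
		ring[i].flaglen = 0;
		if ( iobufs[i] ) {
			free_iob ( iobufs[i] );
			iobufs[i] = NULL;
		}
	}
}
```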
me.away()
Today I started to set up the switch, which is in fact a router. I ran into every single issue mentioned on that page. I had to apply some settings multiple times before the router saved them, plus it disconnects every now and then. I disabled its internal DHCP server and its DNS, enabled NAT, and now the setup looks like this:
```
+-------------+
|192.168.2.101|
+             +
| DHCP server |
+             +    +--------------+
|   dev PC    |    | forcedeth PC |
+-------------+    +--------------+
       |                  |
       +------------------+
       |   192.168.2.1    |
       +                  +
       | best router ever |
       +------------------+
                |
         +-------------+
         |teh internets|
         +-------------+
```
After that, I installed dhcp3-server on my dev box and set it up. This took some time and I ran into some issues because I didn't have my IP statically configured. In the end, I booted a DSL (Damn Small Linux) image from the dev PC using the old forcedeth driver. Tomorrow, I'll configure it so it gets an image containing my forcedeth driver and finally test it.
The number of issues with that router is staggering, and I hope that, now that it is stable, I can finally get some work done. I've seen some mentions of low speeds when using it, so I'll keep that in mind when I test the driver's speeds.
me.rest()
Finally got around to testing .open and .close and they seem to work fine. I'll only be sure of this when I implement .transmit and .poll and see if the settings made in .open are correct.
Got some nice feedback regarding the driver after the weekly meeting. The plan is to finish the driver by the end of next week. So far so good.