
I have an Acer Aspire R1600-U910H with an nForce gigabit network adapter. Its maximum TCP throughput is about 25 MB/s, and it is apparently limited by the single-core Intel Atom 230: when the maximum throughput is reached, CPU usage is about 50%-60%, which corresponds to full utilization of one core on this Hyper-Threading-enabled CPU.

The same problem occurs on both Windows XP and Ubuntu 8.04. On Windows, I have installed the latest nForce chipset driver, disabled power-saving features, and enabled checksum offload. On Linux, the default driver has checksum offload enabled. There is no Linux driver available on Nvidia's website.

ethtool -k eth0 shows that checksum offload is enabled:

Offload parameters for eth0:
rx-checksumming: on
tx-checksumming: on
scatter-gather: on
tcp segmentation offload: on
udp fragmentation offload: off
generic segmentation offload: off

The following is the output of powertop when the network is idle:

Wakeups-from-idle per second : 61.9     interval: 10.0s
no ACPI power usage estimate available

Top causes for wakeups:
  90.9% (101.3)       <interrupt> : eth0
   4.5% (  5.0)             iftop : schedule_timeout (process_timeout)
   1.8% (  2.0)     <kernel core> : clocksource_register (clocksource_watchdog)
   0.9% (  1.0)            dhcdbd : schedule_timeout (process_timeout)
   0.5% (  0.6)     <kernel core> : neigh_table_init_no_netlink (neigh_periodic_timer)

And when the maximum throughput of about 25 MB/s is reached:

Wakeups-from-idle per second : 11175.5  interval: 10.0s
no ACPI power usage estimate available

Top causes for wakeups:
  99.9% (22097.4)       <interrupt> : eth0
   0.0% (  5.0)             iftop : schedule_timeout (process_timeout)
   0.0% (  2.0)     <kernel core> : clocksource_register (clocksource_watchdog)
   0.0% (  1.0)            dhcdbd : schedule_timeout (process_timeout)
   0.0% (  0.6)     <kernel core> : neigh_table_init_no_netlink (neigh_periodic_timer)

Notice the roughly 20,000 interrupts per second. Could this be the cause of the high CPU usage and low throughput? If so, how can I improve the situation?
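
In case it matters, here is how I would check whether the NIC and driver support interrupt coalescing; I don't know whether this particular driver honors these ethtool options, so treat this as a sketch (the rx-usecs value is just a guess):

# Show the current coalescing settings (fails if the driver doesn't support them)
$ ethtool -c eth0

# Ask the NIC to fire at most one RX interrupt every 200 microseconds
$ sudo ethtool -C eth0 rx-usecs 200

# Watch the actual interrupt rate for eth0
$ watch -n 1 'grep eth0 /proc/interrupts'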

As a reference, the other computers on the network can usually transfer at 50+ MB/s without problems. A computer with a Core 2 CPU generates only 5,000 interrupts per second when transferring at 110 MB/s; assuming interrupts scale linearly with throughput, that is roughly 20 times fewer interrupts than on the Atom system.

Would increasing the TCP window size solve the problem? Is it a global OS setting, or is it application specific?
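
For reference, my understanding is that on Linux the window is bounded both by system-wide sysctls and by per-socket buffer sizes; the values below are only illustrative:

# System-wide limits: min, default, max buffer sizes in bytes
$ sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem

# Example: raise the maximum receive/send buffers to 4 MB
$ sudo sysctl -w net.ipv4.tcp_rmem='4096 87380 4194304'
$ sudo sysctl -w net.ipv4.tcp_wmem='4096 65536 4194304'

# An application can also set its own per-socket window with
# setsockopt(SO_RCVBUF / SO_SNDBUF) before connecting.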

And a minor question: how can I find out which driver is in use for eth0?

  • Suggestion: Perhaps enabling jumbo frames would reduce the overhead and increase throughput. superuser.com/questions/95546/… Commented Mar 26, 2010 at 13:09
  • Thanks, I'll try. I did that on Windows with no effect, but now I realize I didn't turn it on on the other computers. I guess jumbo frames probably require support from both ends to work. (A sketch of what I'd run is below these comments.)
    – netvope
    Commented Mar 26, 2010 at 13:17
  • Jumbo frames definitely require both ends (i.e. computer and switch) to support them. I doubt you can increase the throughput on this hardware. I guess you would need to swap the Ethernet PHY for a better one (e.g. with hardware checksumming), which is unlikely to be possible here. (It's likely an onboard soldered Ethernet chip rather than a PCI module.)
    – MacLemon
    Commented Mar 26, 2010 at 18:44
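
A minimal sketch of what I'd run on the Linux side to try jumbo frames, assuming the NIC, driver, switch, and the far end all support an MTU of 9000 (other-host below is a placeholder):

# Check the current MTU
$ ip link show eth0

# Raise it to a jumbo-frame size (reverts on reboot unless made persistent)
$ sudo ip link set eth0 mtu 9000

# Verify that 9000-byte frames actually pass without fragmentation
# (8972 = 9000 - 20-byte IP header - 8-byte ICMP header)
$ ping -M do -s 8972 other-host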

3 Answers


It sounds like the network card has a fairly small buffer and is operating in interrupt mode. You might be able to increase throughput by switching to polling, if your NIC and driver support it.

However, the problem likely can't be completely resolved without switching to a NIC with a larger buffer, which probably isn't possible with that hardware.
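
If the driver turns out to be forcedeth (the usual driver for nForce NICs), some builds expose a module parameter that switches it from one interrupt per packet to timer-based interrupt moderation; I'm not certain the version shipped with 8.04 has it, so check modinfo first:

# List the parameters this forcedeth build supports
$ modinfo forcedeth | grep ^parm

# If optimization_mode is listed: 0 = throughput (interrupt per packet),
# 1 = CPU (timer-based moderation). Reload the module with it set:
$ sudo rmmod forcedeth
$ sudo modprobe forcedeth optimization_mode=1

# Make it persistent (file name/location may vary by distro)
$ echo 'options forcedeth optimization_mode=1' | sudo tee /etc/modprobe.d/forcedeth.conf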

  • Thanks. How do I switch to polling mode?
    – netvope
    Commented Mar 31, 2010 at 1:17

How can I find out which driver is in use for eth0?

Inspecting the output of dmesg might help.

Here is what I get on the computer I'm typing this answer on:

$ dmesg | grep ethernet
forcedeth: Reverse Engineered nForce ethernet driver. Version 0.62.

In some cases NIC support is built straight into the kernel (not as a module), in which case it won't appear in the output of lsmod.
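
Two other ways that work whether or not the driver is a module (assuming ethtool is installed; the sysfs path exists on any reasonably recent kernel):

# Ask ethtool directly; prints the driver name and version
$ ethtool -i eth0

# Or read it from sysfs; the symlink target is the driver name
$ readlink /sys/class/net/eth0/device/driver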

  • Nice! I got "forcedeth: Reverse Engineered nForce ethernet driver. Version 0.61."
    – netvope
    Commented Mar 31, 2010 at 1:16

Try using TCP Optimizer. Selecting its optimized recommendations consistently improves throughput over the default TCP settings.
