A gigabit network interface is CPU-limited to 25MB/s. How can I maximize the throughput?

I have an Acer Aspire R1600-U910H with an nForce gigabit network adapter. Its maximum TCP throughput is about 25MB/s, and it is apparently limited by the single-core Intel Atom 230: when the maximum throughput is reached, CPU usage is about 50%-60%, which corresponds to full utilization of one hardware thread on this Hyper-Threading-enabled CPU.
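
I assume this means one logical CPU is pegged while the other is mostly idle; if it matters, per-CPU usage can be checked with something like (mpstat needs the sysstat package):

mpstat -P ALL 1     # per-CPU utilization in one-second samples (requires sysstat)
top                 # pressing '1' toggles the per-CPU display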

The same problem occurs on both Windows XP and Ubuntu 8.04. On Windows, I have installed the latest nForce chipset driver, disabled power-saving features, and enabled checksum offload. On Linux, the default driver has checksum offload enabled. There is no Linux driver available on Nvidia's website.

ethtool -k eth0 shows that checksum offload is enabled:

Offload parameters for eth0:
rx-checksumming: on
tx-checksumming: on
scatter-gather: on
tcp segmentation offload: on
udp fragmentation offload: off
generic segmentation offload: off
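
One thing I notice is that generic segmentation offload is off. If the driver and my version of ethtool support it (I have not verified this), I believe it could be turned on like this, though as far as I understand GSO mainly helps the transmit side:

ethtool -K eth0 gso on     # try enabling generic segmentation offload
ethtool -k eth0            # re-check which offloads are actually active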

The following is the output of powertop when the network is idle:

Wakeups-from-idle per second : 61.9     interval: 10.0s
no ACPI power usage estimate available

Top causes for wakeups:
  90.9% (101.3)       <interrupt> : eth0
   4.5% (  5.0)             iftop : schedule_timeout (process_timeout)
   1.8% (  2.0)     <kernel core> : clocksource_register (clocksource_watchdog)
   0.9% (  1.0)            dhcdbd : schedule_timeout (process_timeout)
   0.5% (  0.6)     <kernel core> : neigh_table_init_no_netlink (neigh_periodic_timer)

And when the maximum throughput of about 25MB/s is reached:

Wakeups-from-idle per second : 11175.5  interval: 10.0s
no ACPI power usage estimate available

Top causes for wakeups:
  99.9% (22097.4)       <interrupt> : eth0
   0.0% (  5.0)             iftop : schedule_timeout (process_timeout)
   0.0% (  2.0)     <kernel core> : clocksource_register (clocksource_watchdog)
   0.0% (  1.0)            dhcdbd : schedule_timeout (process_timeout)
   0.0% (  0.6)     <kernel core> : neigh_table_init_no_netlink (neigh_periodic_timer)

Notice the roughly 20000 interrupts per second. Could this be the cause of the high CPU usage and low throughput? If so, how can I improve the situation?
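
I am wondering whether interrupt coalescing (letting the NIC raise one interrupt for several packets) would lower the load. If the driver exposes these settings at all (I have not checked whether it does), I believe the commands would look roughly like this; the numbers below are placeholders, not recommendations:

grep eth0 /proc/interrupts                    # raw interrupt count for the NIC, per CPU
ethtool -c eth0                               # show the current coalescing parameters
ethtool -C eth0 rx-usecs 100 rx-frames 64     # example only: delay the IRQ until 100 us or 64 frames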

As a reference, the other computers on the network can usually transfer at 50+MB/s without problems. A computer with a Core 2 CPU generates only about 5000 interrupts per second while transferring at 110MB/s; normalized to throughput, that is roughly 45 interrupts per MB versus about 880 per MB on the Atom system, i.e. about 20 times fewer (assuming interrupts scale linearly with throughput).

Can increasing the TCP window size solve the problem? Is it a general setting in the OS, or is it application-specific?

And a minor question: how can I find out which driver is in use for eth0?
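
For context, these are the places I would expect to look, though I have not confirmed they are the right ones for my setup. The kernel-wide TCP buffer limits live in sysctl (applications can still override them per socket with SO_RCVBUF/SO_SNDBUF), and the driver name should be visible via ethtool or sysfs:

sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem    # per-connection receive/send buffer sizes (min/default/max)
sysctl net.core.rmem_max net.core.wmem_max    # hard upper limits for socket buffers
ethtool -i eth0                               # driver name, version and bus info
readlink /sys/class/net/eth0/device/driver    # driver name via sysfs (last path component)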
