31

As part of my course I’ve been reading the paper Ethernet: Distributed Packet Switching for Local Computer Networks. I understand that “classic” Ethernet (over coaxial cable) has a maximum length of 2500m while Ethernet over twisted pair has a maximum length of just 100m.

While Googling for an answer I found a question on superuser whose accepted answer is:

The specification of 328 feet has to do entirely with collision detection in a CSMA/CD (Carrier Sense Multiple Access / Collision Detection) network. The length is limited by the fact that the shortest possible frame (64 bytes) can be sent out on the wire and, if a collision occurs, the sending node will still be sending that frame when it hears the collision.

However, I understand that full duplex, packet-switched Ethernet networks do not require collision detection because the connection is point to point (i.e. your computer is connected to an Ethernet switch - there are no other computers physically sharing the same cable with you) and data is sent and received on separate wire pairs. Full duplex communication gives every network node its own collision domain. This mode of operation completely avoids collisions and does not even implement the traditional Ethernet CSMA/CD protocol.

So, I must ask: why is Ethernet over Cat5 limited to 100m? It can't be because of collision detection, since full duplex Ethernet (which I suspect makes up almost 99% of all LANs, unless anyone is still running a bus network from 1995) does not suffer from collisions.

If I had to guess I would guess that it is due to attenuation and signal degradation over the copper wire.

3 Answers

18

First, you're correct in saying that it's not linked to CSMA/CD.

Second, you referenced a common but incorrect belief that CSMA/CD is the reason for the 10Base-T [half-duplex] 100m limit. Collision detection was a reason for - as you called it - the classic Ethernet network length of 2500m (and with ample margin: a minimum frame of 64 bytes at 10Mb/s would 'occupy' around 11000m of cable, or, to put it differently, a collision would be heard back by the sender at about the middle of the transmission).¹
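
As a rough sanity check of those numbers (the propagation speed of roughly 0.77c in coax is an assumption of mine, not a figure quoted from the standard):

    # Rough sketch: how much cable a minimum-size 10 Mb/s frame "occupies".
    # Bit rate and frame size are those given above; the propagation speed
    # (~0.77c in coax) is an assumed ballpark.
    BIT_RATE = 10e6             # 10 Mb/s
    MIN_FRAME_BITS = 64 * 8     # 512-bit minimum Ethernet frame
    PROP_SPEED = 0.77 * 3e8     # ~2.3e8 m/s in coaxial cable (assumption)

    frame_time = MIN_FRAME_BITS / BIT_RATE       # 51.2 microseconds
    length_on_wire = frame_time * PROP_SPEED     # roughly 11-12 km of cable

    # For collision detection the frame must outlast the round trip, so the
    # one-way network span is bounded by about half of that - which leaves
    # the 2500 m figure with plenty of margin.
    print(f"frame time: {frame_time * 1e6:.1f} us")
    print(f"cable 'occupied': {length_on_wire / 1000:.1f} km")
    print(f"one-way bound: {length_on_wire / 2000:.1f} km")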

So why 100m? It is linked to the electrical interface and signal characteristics outlined in the standard. One of the ideas behind twisted pair was to reuse existing cabling, and 100m was around the maximum length that still satisfied parameters like attenuation, crosstalk, etc.

From 802.3-2012 standard:

14.4.1 Overview
The medium for 10BASE-T is twisted-pair wiring. Since a significant number of 10BASE-T networks are installed utilizing in-place unshielded telephone wiring and typical telephony installation practices, the end-to-end path including different types of wiring, cable connectors, and cross connects must be considered.

(...omitted)

14.4.2 Transmission parameters
Each simplex link segment shall have the following characteristics. All characteristics specified apply to the total simplex link segment unless otherwise noted. These characteristics are generally met by 100 m of twisted-pair cable composed of 0.5 mm [24 AWG] twisted pairs.

That probably got carried over to newer/related standards (like the EIA/TIA ones mentioned elsewhere), although I have no hard proof of that.

I also found an interesting section in the Ethernet/IEEE 802.2 Family AMD Handbook confirming that 100m was not set in stone:

AM79C940 10Base-T interface
(...omitted) when Low Receive Threshold bit is set, (...) sensitivity of the 10Base-T MAU receiver is increased. This allows longer line lengths to be employed, exceeding the 100m target distance of normal 10Base-T (assuming typical 24AWG cable).

¹ Of course propagation delay had its role in twisted pair too, hence the 5-4-3 rule used in hub-only networks.

8

There are standards for the certification of copper cable that define tests that the cable must pass to be certified.

The one covering Cat5 is TIA/EIA-568.

Source: The Evolution of Copper Cabling Systems from Cat5 to Cat5e to Cat6

The TIA-EIA-568-A standard defined the testing limits for the following parameters for Category 5 cabling installations:

Length, Attenuation, Wiremap and Near End Crosstalk (NEXT).

The length requirement specified that a cable run from a Telecommunications Room to a work area outlet in a commercial building could not exceed 90 meters (295 feet).

This 90 meter distance is defined as the horizontal link. When adding patch cables in the Telecommunications Room to either cross-connect or interconnect with electronic equipment, and to connect devices at the work area outlet, the standard allows a total of ten meters of patch cabling to be added to the horizontal link. This 100 meter maximum distance (the maximum 90 meter horizontal link plus 10 meters of patch cords) is defined as the horizontal channel.
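
To illustrate that budget in code (the helper below is a hypothetical sketch, not part of any certification tool):

    # Minimal sketch of the 90 m + 10 m horizontal channel budget described above.
    # The function and its example inputs are hypothetical illustrations.
    MAX_HORIZONTAL_M = 90    # permanent link: telecom room to work-area outlet
    MAX_PATCH_M = 10         # total patch cords at both ends
    MAX_CHANNEL_M = MAX_HORIZONTAL_M + MAX_PATCH_M   # 100 m channel

    def channel_ok(horizontal_m: float, patch_m: float) -> bool:
        """True if a run fits the horizontal channel budget."""
        return (horizontal_m <= MAX_HORIZONTAL_M
                and patch_m <= MAX_PATCH_M
                and horizontal_m + patch_m <= MAX_CHANNEL_M)

    print(channel_ok(85, 8))   # True  - fits the budget
    print(channel_ok(95, 3))   # False - horizontal link alone is too long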

Attenuation is the loss of signal strength as the signal travels from the end of the cable at which it is generated to the opposite end at which it is received. Attenuation, also referred to as Insertion Loss, is measured in decibels (dB). For attenuation, the lower the dB value the better the performance: less signal is lost. This decrease is typically caused by absorption, reflection, diffusion, scattering, deflection, or dispersion of the original signal, and usually not by geometric spreading.
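
To make the dB figures concrete, here is a small sketch; the ~22 dB per 100 m figure is an assumed ballpark for Cat 5e around 100 MHz, not a value quoted from the standard:

    # Sketch: how insertion loss in dB translates into remaining signal power.
    # LOSS_DB_PER_100M is an assumed ballpark, not a certification limit.
    LOSS_DB_PER_100M = 22.0

    def remaining_power_fraction(length_m: float) -> float:
        loss_db = LOSS_DB_PER_100M * length_m / 100.0   # loss scales roughly linearly with length
        return 10 ** (-loss_db / 10.0)                  # dB -> power ratio

    for length in (50, 100, 150):
        pct = remaining_power_fraction(length) * 100
        print(f"{length:>3} m: about {pct:.2f}% of the transmitted power arrives")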

Wiremap is a continuity test. It assures that the conductors that make up the four twisted pairs in the cable are continuous from the termination point at one end of the link to the other. This test assures that the conductors are terminated correctly at each end and that none of the conductor pairs are crossed or short-circuited.

Near End Crosstalk (NEXT) measures the amount of signal coupled from one pair to another within the cable, caused by signal radiation at the transmitting (near) end of the cable. An example of crosstalk on voice channels is when extraneous conversations can be heard in the background during a telephone call; those signals are being induced onto the voice channel from another channel. The same thing occurs in data transmission: if the crosstalk is great enough, it will interfere with the signals received across the circuit. Crosstalk is measured in dB. The higher the dB value the better the performance: more of the signal is transmitted and less is lost due to coupling.
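
Putting the two together: the received signal has to stay above the coupled crosstalk, which is often expressed as the attenuation-to-crosstalk ratio (ACR). The dB values below are illustrative assumptions, not limits from the standard:

    # Sketch: ACR (attenuation-to-crosstalk ratio) = NEXT - insertion loss, in dB.
    # A positive ACR means the wanted signal arrives stronger than the coupled noise.
    def acr_db(next_db: float, insertion_loss_db: float) -> float:
        return next_db - insertion_loss_db

    # 100 m run: e.g. 30 dB NEXT and 22 dB insertion loss -> ~8 dB of headroom.
    print(acr_db(next_db=30.0, insertion_loss_db=22.0))   # 8.0

    # 150 m run: NEXT barely changes (it is a near-end effect), but insertion
    # loss keeps growing with length, so the headroom disappears.
    print(acr_db(next_db=30.0, insertion_loss_db=33.0))   # -3.0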

It must be 100m or less in length to be certified.

It is possible a longer cable will work - but it is not guaranteed. Shorter Cat 5 cables may also not work if there is a lot of EMI. Signal attenuation appears to be the limiting factor - too much signal loss and you can't guarantee 100 megabits per second.

  • 1
    That's a strange requirement that the cable should not be able to operate beyond 90 meters (+10 meters of patch). Do you know why that is?
    – user376151
    Commented Oct 7, 2014 at 19:56
  • 3
    It is possible a longer cable will work - but it is not guaranteed. Shorter Cat 5 cables may also not work if there is a lot of EMI. Signal attenuation appears to be the limiting factor - too much signal loss and you can't guarantee 100 megabits per second.
    – DavidPostill
    Commented Oct 7, 2014 at 20:30
  • 1
    @GeorgeRobinson Building on DavidPostill's point: All cables have resistance and impedance that affect the signal strength at the receiving NIC. Longer cables mean more attenuation. The signal must be strong enough to differentiate a high from a low. The higher the frequency a NIC uses to communicate over a cable, the more loss there is. E.g. CAT5e will support gigabit (at 350 MHz) up to a rated 100 meters. Longer CAT5e cables with 10/100/1000 NICs attached could potentially negotiate 100 megabit (at 31.25 MHz) on cables significantly longer than 100 meters because of the lower loss. Commented Aug 19, 2015 at 18:23
  • I have a remote site that has a ~500 foot cable run... they get 10Mbps and intermittent packet loss.
    – Nanban Jim
    Commented Jan 22, 2018 at 18:25
3

Most standards are written with a margin for error, typically for safety and to ensure a minimum of predictable performance in widely varying environments. The TIA-EIA-568-A standard is no different, meaning there is some room to fudge if you (and/or your customer) are willing to accept the increased risk that your system may not work as hoped/planned compared to shorter cabling.

For instance, copper's temperature coefficient of resistance, roughly 0.393 percent per degree C near room temperature, means conductor resistance drops somewhat at colder temperatures.
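
As a back-of-the-envelope example (the 20 C reference temperature and the linear approximation are my assumptions):

    # Rough sketch: how much copper resistance drops in a -10 degree F freezer.
    # Assumptions: alpha = 0.00393 per degree C, 20 C reference, linear model.
    alpha = 0.00393                     # temperature coefficient of copper
    t_ref = 20.0                        # reference temperature, degrees C
    t_freezer = (-10 - 32) * 5 / 9      # -10 F is about -23.3 C

    # Linear approximation: R(T) = R_ref * (1 + alpha * (T - t_ref))
    relative_r = 1 + alpha * (t_freezer - t_ref)
    print(f"resistance at {t_freezer:.1f} C is about {relative_r:.2f}x the 20 C value")
    # ~0.83, i.e. roughly 17% less resistive loss than at room temperature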

I just installed several wireless access points (WAPs) 420 feet from the switch in a -10 degree F freezer warehouse. The Fluke DTX 1500 Cable Analyzer says the Cat 6 cables fail due to distance and attenuation. Similarly, the APs warn that they're not getting enough Power over Ethernet (PoE) voltage, yet they still work fine with no packets lost and AirMagnet showing blue/green RF coverage. Except for the expected and annoying warnings, the customer is quite happy with the performance and with the cost savings of not having to install electricity for a heated IDF mounted 35 feet up in a sub-zero ceiling just to meet the somewhat arbitrary 100m requirement.
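
For what it's worth, here is a rough estimate of why the APs complain about PoE voltage on a 420-foot run; the wire gauge, resistance, and current figures are generic assumptions, not measurements from that site:

    # Rough sketch: DC voltage drop on a long PoE run.
    # Assumptions: 24 AWG copper (~0.084 ohm per meter per conductor), power
    # carried over two pairs with both conductors of each pair in parallel,
    # and a worst-case 802.3af current of about 350 mA.
    LENGTH_M = 420 * 0.3048      # about 128 m
    OHM_PER_M = 0.084            # per single 24 AWG conductor (assumption)
    CURRENT_A = 0.35

    conductor_r = OHM_PER_M * LENGTH_M   # one conductor, one way
    leg_r = conductor_r / 2              # two conductors in parallel per leg
    loop_r = 2 * leg_r                   # out and back
    drop_v = loop_r * CURRENT_A

    print(f"loop resistance: {loop_r:.1f} ohm, voltage drop: {drop_v:.1f} V")
    # Around 10-11 ohm and ~3.8 V of drop - enough to trigger a low-PoE warning
    # while often still leaving the device usable.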

Of course, YMMV (your mileage may vary), and I can't be held liable if your network fails at 100.01 meters of cabling in a hot, high-vibration, EMI-noisy environment.

  • 1
    Could also just change cabling technology used :shrug: Commented Dec 20, 2019 at 22:19
