
Traditional 802.3ad link aggregation only works when all of the links in the group terminate on the same pair of devices/switches. So you couldn't have a system with one half of a bonded link going into switch A and the other half going into switch B and expect LACP to work. I suppose STP (if enabled) would block one of the links to prevent a loop. Is that correct?

I realise that bonding/trunking can only provide double the bandwidth in specific circumstances, e.g. when there are separate communication flows. It wouldn't give a single flow between one source and one destination double the bandwidth.

What I'm looking for is a way of connecting 2 switches together with multiple links to provide N times the bandwidth between them. I guess that even with LACP on the uplink ports between the 2 switches, each discrete traffic flow would still be limited to the bandwidth of a single link:

  • so traffic flows from switch-port A-3 to B-9 and from B-4 to A-8 would each be able to hit close to 1Gbps (assuming there are 2 x 1Gbps links in the LACP group)
  • but a single flow from A-6 to B-3 would not be able to exceed 1Gbps (a rough sketch of the per-flow hashing follows this list)
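
Just to make that per-flow limit concrete, here is a rough Python sketch of the kind of transmit hash involved. The XOR-of-last-octet policy and the MAC addresses are illustrative assumptions only, not any particular vendor's algorithm:

    # Hypothetical sketch of an LACP-style transmit hash: XOR the last octet of
    # the source and destination MACs and take it modulo the number of member
    # links. Real switches/drivers use their own (often configurable) policies
    # such as layer2, layer2+3 or layer3+4; the MAC addresses here are made up.

    def pick_member_link(src_mac: str, dst_mac: str, n_links: int) -> int:
        """Return the index of the member link a frame of this flow will use."""
        src_octet = int(src_mac.split(":")[-1], 16)
        dst_octet = int(dst_mac.split(":")[-1], 16)
        return (src_octet ^ dst_octet) % n_links

    # Two different conversations can land on different members of a 2-link group...
    print(pick_member_link("00:11:22:33:44:03", "00:11:22:33:44:09", 2))  # 0
    print(pick_member_link("00:11:22:33:44:06", "00:11:22:33:44:03", 2))  # 1
    # ...but every frame of a single conversation hashes to the same member,
    # so no one flow can exceed the speed of one link.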

    1. Is my understanding above all correct?

    2. Are there any vendor-specific implementations/extensions that can allow a single physical server to run LACP across 2 switches?
      I suppose this is where stackable switches come in? Multiple physical switches configured as a single logical switch?

    3. Are there any vendor-specific implementations/extensions that can increase the bandwidth of a single traffic flow by simultaneously using multiple links?
      EDIT: On further thought this would be useless, as the rate of data going into the switch would still only be the bandwidth of a single port. Unless your server was connected to a 1Gbps port but the switches were connected together using a pair of 100Mbps ports.

  • One clarification is that you don't really increase the bandwidth by bonding ports from one device to another. A single flow will only traverse a single link. There are several algorithms to assign different flows to different links, but each flow will only exist on a single link, and each link may (and probably will) carry multiple flows. Ideally, a mix of traffic flows can see the theoretical increase in bandwidth, but any single flow is limited to the bandwidth of the single link it shares with the other flows on that link.
    – Ron Maupin
    Commented May 6, 2015 at 18:09
  • Yes, understood, thanks for the clarification, that's what I was trying to explain, but in a roundabout way.
    – batfastad
    Commented May 6, 2015 at 21:58
  • More generally, are there any reasons not to use an LACP-bonded pair of links between 2 switches? It gives multiple traffic flows access to a fatter pipe, making the hop between switches less of a bottleneck, and it can withstand an outage of either link (however unlikely). I suppose access switches usually have faster uplink ports to the core switches than their access ports. In my lab environment I'm looking to have my storage host and VM host connected to a pair of switches, with the environment able to survive a failure of either switch.
    – batfastad
    Commented May 6, 2015 at 21:59
  • Cisco recommends a 20:1 ratio of aggregate access-port bandwidth to distribution (uplink-port) bandwidth. You can design bonded uplinks to maintain that ratio, or better, in the event of a single link failure, and using bonded uplinks provides the fastest failover. The slowest failover, by at least an order of magnitude, is STP (even RSTP). The next slowest, by another order of magnitude, is a fast IGP. The best failover, by far, is PAgP/LACP/unconditional bonding.
    – Ron Maupin
    Commented May 7, 2015 at 2:30
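
To put the 20:1 guideline from the comments above into numbers, a quick back-of-the-envelope Python sketch. The port counts and speeds are purely illustrative assumptions:

    # Back-of-the-envelope oversubscription check with made-up numbers:
    # 48 x 1Gbps access ports, uplinked over a 2 x 1Gbps LACP bundle.

    access_bw_gbps = 48 * 1          # assumed access-port count and speed
    uplink_bw_gbps = 2 * 1           # assumed 2-member 1Gbps LACP bundle

    print(access_bw_gbps / uplink_bw_gbps)        # 24.0 -> 24:1 with both uplinks up
    print(access_bw_gbps / (uplink_bw_gbps - 1))  # 48.0 -> 48:1 after one member fails
    # Against a ~20:1 guideline this design is slightly oversubscribed even when
    # healthy and badly oversubscribed after a failure, so you would add members
    # or move to faster uplinks to stay within the ratio through a single failure.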

2 Answers

6
  1. Yes, if I am reading things correctly it appears your understanding is correct.
  2. Yes, there are implementations that will allow you to do link aggregation between a host and two switches. Switch stacking allows a stack of individual switches to be managed as one device; typically one of the switches in the stack becomes the master, allowing it to manage link aggregation across multiple member switches. A second option is virtual switching, which also allows this functionality across multiple switches even if they are not stacked. This typically requires higher-end hardware, specific software versions, and other prerequisites. Examples are Virtual Switching System (VSS) / Multichassis EtherChannel (MEC) / virtual port channel (vPC) from Cisco, or Virtual Chassis from Juniper.
  3. No. One of the hard invariants (i.e. absolute requirements) of L2 networking is the sequential delivery of frames. In link aggregation this is enforced by requiring a flow to traverse only one link in the group; if there is any sort of delay on that link, the invariant is still maintained. If a flow were traversing two links and one of the links experienced a delay (even a very short one), frames could be delivered out of order, violating the invariant (the sketch just after this list illustrates the difference).
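
A small, purely illustrative Python simulation of that point; the link delays and the round-robin spraying policy are assumptions, not how any real switch behaves:

    # A toy model of frame delivery: each frame is stamped with a send time,
    # assigned to a member link, and arrives after that link's fixed delay.

    def deliver(frames, link_of_frame, link_delay):
        """Return frame sequence numbers in arrival order."""
        arrivals = [(send_time + link_delay[link_of_frame(seq)], seq)
                    for send_time, seq in frames]
        return [seq for _, seq in sorted(arrivals)]

    # Ten frames of one flow, sent one time unit apart.
    frames = [(t, t) for t in range(10)]
    link_delay = {0: 5, 1: 12}   # link 1 happens to be a bit slower (assumed)

    # Per-packet spraying (round-robin): later frames on the faster link
    # overtake earlier frames on the slower one.
    print(deliver(frames, lambda seq: seq % 2, link_delay))
    # [0, 2, 4, 6, 1, 8, 3, 5, 7, 9]  (out of order)

    # Per-flow hashing: the whole flow stays on one link, order is preserved.
    print(deliver(frames, lambda seq: 0, link_delay))
    # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]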

Ultimately, if you are running into a need to exceed the speed of a link for a single flow, you would need to upgrade your interfaces to the next available speed technology (e.g. 1G to 10G, 10G to 40G, etc.). Cisco is also spearheading a push for "multigigabit" Ethernet, providing speeds of 2.5G or 5G over Cat5e/6 cabling at distances up to 100 meters.

  • Does Ethernet make any guarantee about frames being delivered in-order? My understanding was the forwarding plane places all packets in a flow on a single link due to the final load-balance/hash value being the same. Or is the choice of hashing vs per-packet load-balancing driven by the normally unstated in-order delivery requirement?
    – cpt_fink
    Commented May 7, 2015 at 4:56
  • @cpt_fink, L2 networking has both hard and soft invariants that it must abide by to function correctly. Sequential delivery is one of them, as there is no other way to sequence the frames. The way that link aggregation is designed to operate maintains this invariant. So yes, keeping a flow on a single link is due to the hash being used, but the hash was chosen to enforce the invariant.
    – YLearn
    Commented May 7, 2015 at 6:11
-1

Many LAN switches have the ability to stack, whereby multiple physical units operate as a single logical device (e.g. some Juniper and Cisco models). In these cases, multi-chassis EtherChannel is possible.

If your switches cannot be logically converged to operate as a single switch, you can still spread traffic across several links using IP load balancing. You would configure a unique subnet per uplink and then use a routing protocol with equal-cost multipath to balance the traffic across the links, as sketched below. Of course, this requires layer 3 switches (i.e. switches with routing capability).
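
A minimal sketch of that idea, assuming two equal-cost routed uplinks and a simple 5-tuple hash (real ECMP hashing is implementation specific); note that, like LACP, this still pins any single flow to one path:

    # Hypothetical ECMP-style next-hop selection over two routed uplinks.
    # The next-hop addresses and the CRC32 hash are illustrative assumptions;
    # real implementations use their own hardware hash functions.

    import zlib

    NEXT_HOPS = ["10.0.1.2", "10.0.2.2"]   # one point-to-point subnet per uplink

    def pick_next_hop(src_ip, dst_ip, proto, src_port, dst_port):
        """Hash the 5-tuple and pick one of the equal-cost next hops."""
        key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
        return NEXT_HOPS[zlib.crc32(key) % len(NEXT_HOPS)]

    # Different flows may be spread across both uplinks, but every packet of a
    # given flow hashes to the same next hop, so a single flow is still limited
    # to one link's bandwidth, just like LACP, only at layer 3.
    print(pick_next_hop("192.0.2.10", "198.51.100.20", "tcp", 51515, 445))
    print(pick_next_hop("192.0.2.11", "198.51.100.20", "tcp", 51999, 2049))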

Hope that helps.
