
I'm currently building a new version of my DIY router. It runs a minimal install of Ubuntu 22.04 LTS with firewalld on top, and the newer hardware has a mix of port speeds: 10G, 2.5G, and gigabit at the very least.

I have two issues here, and I'm hoping to resolve both by setting up a virtual switch across all my ports.

My 'traditional' builds bridge all the ports, and as I understand it, a bridge will only run as fast as the slowest port in use. I'm working around this by creating one bridge per interface speed. This is messy, and for DHCP it means one subnet per link speed I need to support.

I'd rather run a single subnet; my DHCP server is a little temperamental with multiple subnets, and flattening the network would reduce the complexity of managing DHCP and other services. I don't need multiple subnets in my base install.

My Netplan config for my test box looks like this. I think some of the 10-gig ports are actually 1-gig ports (different models have different ports, and I forgot the model I got was a mix); if so, I would have to split the gigabit ports off onto yet another bridge.

# This is the network config written by 'subiquity'
network:
  version: 2
  ethernets:
#10G ports Jumbo frames give slightly better speed
    eno1:
      dhcp4: true
      match:
          macaddress: 20:7c:14:f3:XX:X1
      set-name: SFP4
      mtu: "9000"
    # Top Left SFP cage

    eno2:
      dhcp4: true
      match:
          macaddress: 20:7c:14:f3:XX:X2
      set-name: SFP2
      mtu: "9000"
    # Bottom Left SFP cage

    eno3:
      dhcp4: true
      match:
          macaddress: 20:7c:14:f3:XX:X3
      set-name: SFP3
      mtu: "9000"
    # Top Right SFP cage

    eno4:
      dhcp4: true
      match:
          macaddress: 20:7c:14:f3:XX:X4
      set-name: SFP1
      mtu: "9000"
    # Bottom Right SFP cage

#2.5 gig ports follow physical labels

    enp4s0:
      dhcp4: true
      match:
          macaddress: 20:7c:14:f3:XX:Xa
      set-name: LAN1

    enp5s0:
      dhcp4: true
      match:
          macaddress: 20:7c:14:f3:XX:Xb
      set-name: LAN2

    enp6s0:
      dhcp4: true
      match:
          macaddress: 20:7c:14:f3:XX:Xc
      set-name: LAN3

    enp7s0:
      dhcp4: true
      match:
          macaddress: 20:7c:14:f3:XX:Xd
      set-name: LAN4

#wan port
    enp8s0:
      dhcp4: true
      match:
          macaddress: 20:7c:14:f3:XX:Xe
      set-name: WAN

#bridges - apparently bridges are only as fast as the slowest port,
#and I can only host one subnet per bridge, so I'm splitting up the 10G and 2.5G ports

  bridges:
    SFPBR:
      dhcp4: no
      addresses: [10.0.0.1/24]
      interfaces:
        - eno1
        - eno2
        - eno3
        - eno4

    LANBR:
      dhcp4: no
      addresses: [10.0.1.1/24]
      interfaces:
        - enp4s0
        - enp5s0
        - enp6s0
        - enp7s0

What I'd like to do is replace SFPBR and LANBR with a single virtual switch (practically, a single virtual interface) and expose a single IP/interface to talk to the 'switch'. I do actually like Netplan, so ideally I'd like to stick with it for configuration if possible.

How would I do this?

  • Granted, I've never tried it, but it would very much surprise me if a software bridge, which is even more flexible than a hardware switch, could not cope with different-speed interfaces. Since the question more or less hinges on that point: are you sure your assumption is correct?
    – Daniel B
    Commented Mar 10 at 8:53
  • No I'm not. And "put it all on a bridge, it'll be fine" is a very practical answer if I'm wrong.
    – Journeyman Geek
    Commented Mar 10 at 9:04

1 Answer


and as I understand it, a bridge will only run as fast as the slowest port in use

That's not usually the case with bridges.

Your description sounds like it's talking about the old physical Ethernet bridges from decades ago, before switching was invented (or dual-speed hubs, for that matter), which had this limitation because of how they worked physically. That's not what a bridge is in Netplan – there the term only refers to the general functionality of forwarding Ethernet frames (by learnt MAC addresses), not to the specifics and limitations of how that's done.

(In fact, if I remember correctly, it wasn't even "bridges" that had this issue in the first place – bridges were what one would use to solve the problem, by having the bridge forward between two different-rate ethernets. But I'm not really clear on the terminology that nearly predates me.)

A Netplan bridge (or more accurately a Linux bridge) is literally a "virtual software switch". It's called a "bridge" rather than a "switch" because calling it a "switch" would generally imply hardware acceleration of some kind, while in most cases it really just does the forwarding on the CPU. (If you have e.g. a multi-port NIC and the drivers support it, a Linux bridge will try to enable hardware offloading, but that wouldn't change things much.)

So even when the ports are bridged, each port still has its own point-to-point link which runs at whatever rate it needs, independently of the other ports, which also have their own rates; the bridge receives frames from port A, buffers them, and sends them out through port B. You can have a 10G port bridged to a 10M port and they'll work "as expected".
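In your case, that means SFPBR and LANBR can simply be merged into one bridge that holds the single address. A minimal sketch against your config above – the bridge name LAN is my own placeholder, and the ethernets: section stays as it is, apart from the DHCP note below:

  bridges:
    LAN:
      dhcp4: no
      addresses: [10.0.0.1/24]
      interfaces:
        - eno1
        - eno2
        - eno3
        - eno4
        - enp4s0
        - enp5s0
        - enp6s0
        - enp7s0

The bridge is then the single interface (and IP) that your DHCP server, firewall zones and so on talk to. The member ports themselves shouldn't run DHCP once they're enslaved, so the dhcp4: true entries on the eight LAN-side ethernets should become dhcp4: no; the WAN port keeps its own DHCP, as it stays outside the bridge.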

That being said, I can see some issues cropping up:

  • If the hardware is in fact a pair of multi-port NICs that Linux happens to support hardware switch offloading for, they might be doing the bridging in hardware in your current configuration, but would be forced to do the 10G forwarding on the CPU when joined into a single bridge. From your "jumbo frames" comment, though, it seems that this is not the case.

  • All ports that are bridged to a single subnet – whether in software (CPU) or hardware, whether in a 'bridge' or a 'switch' – really ought to have identical MTUs; otherwise things are only going to become more complex (see the sketch after this list).
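As a sketch of one way to do that, assuming you want to keep jumbo frames: give the 2.5G ports the same mtu: 9000 as the SFP ports, and set it on the bridge itself for good measure. Only enp4s0 is shown here; LAN2 to LAN4 follow the same pattern:

    enp4s0:
      dhcp4: no
      match:
          macaddress: 20:7c:14:f3:XX:Xa
      set-name: LAN1
      mtu: 9000

The alternative is of course to drop mtu: 9000 everywhere and run the whole bridge at the default 1500.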

  • More like they're bridged to a single interface that serves a subnet. Being incorrect about this makes my life a lot easier, and it does have two (or more?) multiport interfaces. I'll be running jumbo frames throughout the LAN, simply because it makes sense with my setup, and it shouldn't be too hard to do that across the network.
    – Journeyman Geek
    Commented Mar 10 at 9:03

