
Environment:

  • A router with a DHCP server. This router manages the subnet 172.16.0.1/16.
  • A host running Ubuntu 20.04 with a NIC named eno0.
  • A QEMU virtual machine running on the host.

Purpose:

Bridge the virtual machine with the host's NIC so that the VM can obtain an IP address via DHCP.

What I have tried:

  • Set up a bridge on the host and add the eno0 interface to it:

    ip link add name br0 type bridge
    ip link set br0 up
    ip link set eno0 master br0
    
  • Assign an IP to br0 via DHCP:

    dhclient br0
    
  • Run the QEMU machine with a tap network (see the note on the bridge helper's ACL after this list):

    qemu-system-x86_64 \
         -enable-kvm \
         -nographic \
         -drive format=raw,file=/path/to/img \
         -netdev tap,id=nic0,br=br0,helper=/opt/qemu/libexec/qemu-bridge-helper \
         -device e1000e,mac=52:54:00:12:34:50,netdev=nic0
    
  • After the virtual machine boots, try to obtain an IP via the udhcpc command (a dhclient variant in BusyBox):

    udhcpc -i eth0
    
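For the helper invocation above to succeed, qemu-bridge-helper must be allowed to attach taps to br0 via its ACL file. The path below is an assumption based on the /opt/qemu install prefix (the default is $sysconfdir/qemu/bridge.conf):

    echo 'allow br0' >> /opt/qemu/etc/qemu/bridge.conf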

So far, eth0 cannot be assigned an IP in the subnet 172.16.x.x/16.

So, is there anything wrong with my configuration?


Update:

To simplify the experiment, I use a dummy interface, say dm0, instead of eno0, and set up a DHCP server on that dm0 interface.

# Create a dummy interface
ip link add dm0 type dummy

# Create bridge and add dm0 to it
ip link add br0 type bridge
ip link set dm0 master br0

# Bring interfaces up
ip link set br0 up
ip link set dm0 up

# Set up a DHCP server on dm0; the managed subnet is 10.0.0.0/24.
# Assign an address to the dm0 interface
ip addr add dev dm0 10.0.0.1/24
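
# (The DHCP server command itself is not shown; assuming dnsmasq is the
# server in use, a minimal invocation serving this subnet might be:)
dnsmasq --interface=dm0 --bind-interfaces --dhcp-range=10.0.0.100,10.0.0.200,12h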

# Launch the QEMU virtual machine
qemu-system-x86_64 \
       -enable-kvm \
       -nographic \
       -drive format=raw,file=/path/to/img \
       -netdev tap,id=nic0,br=br0,helper=/opt/qemu/libexec/qemu-bridge-helper \
       -device e1000e,mac=52:54:00:12:34:50,netdev=nic0

# Try to obtain an IP address in the virtual machine
udhcpc -i eth0

Run tcpdump on both the dm0 and tap0 interfaces:

tcpdump -nli dm0
tcpdump -nli tap0

I can see that the DHCP request is received by the DHCP server, and the server responds by allocating a new IP address.

# From dm0 interface
17:44:50.237133 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from 52:54:00:12:34:51, length 300
17:44:50.237239 IP 10.0.0.1.67 > 10.0.0.200.68: BOOTP/DHCP, Reply, length 300

However, on the tap0 interface, only DHCP request packets can be seen. The response packets are missing.

17:44:50.237122 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from 52:54:00:12:34:51, length 300

It seems that the response packets are filtered out by the bridge, which is why the VM cannot obtain a valid address, and I don't know how to figure out what has happened.
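For reference, whether the tap port actually joined the bridge, and which MAC addresses the bridge has learned on it, can be checked with the bridge tool from iproute2:

bridge link show
bridge fdb show br br0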

  • 1. You'd expect the VM address in 172.16.0.x/24, not in 172.16.x.x/16. I insist on a correct netmask! 2. After the VM started, did you verify that your tap interface was put into the bridge? Did you check that the bridge itself sees the traffic on the tap port (with bridge fdb show, check for the VM MAC)? (Also, by the way, why are you using e1000e and not virtio-net-pci?) Commented Apr 11, 2021 at 16:58
  • Usually, you need to bring down eno0 before making it a bridge "slave". Have you tried that? Commented Apr 11, 2021 at 18:06
  • Have you enabled IP forwarding on the host? If not, traffic that enters the host but is not destined to the host, e.g. DHCP responses for the VM, will be ignored. You could also trace traffic to see if anything comes out of the VM or the DHCP server: tcpdump -neli NIC port 67 or port 68, where you set NIC to the interfaces between the VM and the DHCP server. Commented Apr 11, 2021 at 23:27
  • @berndbausch: no, IP forwarding isn't needed for Ethernet bridging. It is only needed to actually route IPv4 packets. Commented Apr 12, 2021 at 8:14
  • I had roughly the same issue, and it was solved by disabling the docker and containerd services plus a reboot; more info here: unix.stackexchange.com/questions/594387/…
    – Salamek
    Commented Jun 10, 2021 at 23:15

2 Answers


I promised to turn my comment, which solved the problem, into a proper answer and forgot. Shame on me. Let me fix that now; @DouglasSu, please accept so that future readers are aware of the solution.

The problem is that Netfilter (the Linux firewall) blocked forwarding of said packet. Why did it block a bridged packet?

There is a knob in the kernel that controls whether the IP-level firewall is called for bridged packets. In older distributions it was called net.bridge.bridge-nf-call-iptables, and it controlled system-wide behaviour (i.e. for all bridges simultaneously). On such an old system you may add the following to /etc/sysctl.d/bridge.conf:

net.bridge.bridge-nf-call-iptables = 0

Then iptables rules won't be traversed for bridged packets. To check the setting, you may run sysctl net.bridge.bridge-nf-call-iptables.
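On a running system the same knob can be flipped immediately, without a reboot. Note that this sysctl exists only while the br_netfilter module is loaded; if the module is absent, bridged packets are not passed to iptables in the first place:

# sysctl -w net.bridge.bridge-nf-call-iptables=0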

On newer systems, this is controllable per-bridge with the nf_call_iptables variable (and its friends):

# ip link add type bridge help
...
                  [ nf_call_iptables NF_CALL_IPTABLES ]
                  [ nf_call_ip6tables NF_CALL_IP6TABLES ]
                  [ nf_call_arptables NF_CALL_ARPTABLES ]
...

You may set it for each bridge individually, for example during creation:

# ip link add name br0 type bridge nf_call_iptables 0

and check the value using ip -d link show br0 or in /sys/class/net/br0/bridge/nf_call_iptables. There you can also change its value at runtime, as sketched below.
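For instance, assuming a reasonably recent iproute2, an existing bridge can be switched at runtime with either of:

# ip link set br0 type bridge nf_call_iptables 0
# echo 0 > /sys/class/net/br0/bridge/nf_call_iptables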

And, finally, you may properly configure iptables to allow bridged packets. There is a physdev match to determine which bridge port a packet entered from.
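For example, a minimal sketch of such a rule, accepting everything that is being bridged rather than routed (a starting point, not a complete policy):

# iptables -A FORWARD -m physdev --physdev-is-bridged -j ACCEPT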

  • Well, I wanted to tune my dnsmasq config for IPv6 clients. I created 3 VMs with tap networking. In the host system I connected all taps (tap1, tap2, tap3) to br1, configured with ip link add name br1 type bridge nf_call_iptables 0 nf_call_ip6tables 0. I configured VM1 with dnsmasq to offer the IP and IPv6 on the MAC address request: dhcp-host=mac,ipv4,ipv6,nameX.test.home,12h. If I set IPv4/IPv6 on all VMs manually, all VMs communicate well. When VM1 is set manually and I let VM2 and VM3 ask for DHCP, I can see the DHCPREQUEST with Wireshark, but no answer with DHCPOFFER. Any hint?
    – schweik
    Commented Jul 17, 2022 at 12:45
  • Excellent, thanks for writing this up. The issue occurred in my case only on servers running Docker. What appears to cause the issue is that installing Docker changes the default for the iptables FORWARD chain from ACCEPT to DROP. The cleaner solution (apart from reporting a bug with the docker.io package) would then be to add an ACCEPT at the end of the chain.
    – zwets
    Commented Jun 4, 2023 at 19:20
  • Not at all. In the case of Docker, you'd better move QEMU virtual machines away from Docker hosts. Docker is very neglectful of other network-related software and very intrusive into the networking stack, so basically it is best not to mix it with anything else on the host. Commented Jun 5, 2023 at 1:57

Try this:

sudo sysctl -w "net.ipv4.ip_forward=1"
sudo iptables -P FORWARD ACCEPT
  • Both commands in general have nothing to do with bridging, which is what this question is all about. Commented May 24, 2022 at 12:05

