
I am trying to set up LXC containers on top of Amazon. I am very new to Amazon, and to VPC especially; in fact, I created a VPC for the first time just to experiment with LXC.

My goal: I want LXC containers on Amazon instances, attached to a bridge-type network. That means I should be able to assign them either public IPs or private IPs that are reachable from other Amazon instances/LXC containers, just as on a physical LAN. For this I have been trying virsh (libvirt) with bridged networking, but I was never able to achieve what I wanted that way.

What I have done: I created a VPC with a single (public) subnet and launched a Debian instance in it. I installed LXC and successfully got NAT mode and routed mode working, but those gave the containers 192.168.122.0/24 addresses (libvirt's default network), although I was able to get internet access in the containers with some iptables rules.

After libvirt, I tried creating a bridge manually with bridge-utils, but had no luck assigning an IP to the container; my assumption is that the container should get a DHCP lease from Amazon's DHCP service. Finally, I associated another Elastic IP with the Debian instance and noted down the private IP it is NATed to. I then created a simple bridge on the host with eth0 attached, created a simple host-bridge network using libvirt, and hardcoded the noted NATed IP in the LXC config. When I started the LXC container, it got that NATed IP and I could ssh to it from the host, but I am not getting internet access in that container.
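
For reference, the iptables rules that gave the NAT-mode containers internet access were along these lines (a sketch, assuming libvirt's default 192.168.122.0/24 network behind virbr0):

echo 1 > /proc/sys/net/ipv4/ip_forward   # let the host route for the containers
iptables -t nat -A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -j MASQUERADE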

/etc/network/interfaces (host):

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual

auto br0
iface br0 inet dhcp
    bridge_ports eth0
    bridge_fd 0
    bridge_maxwait 0

virsh net-dumpxml host-bridge

 <network>
 <name>host-bridge</name>
 <uuid>7c41e4ce-311c-c78f-5ea3-a03a224e4a3c</uuid>
 <forward mode='bridge'/>
 <bridge name='br0' />
 </network>
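
For reference, a network like this gets loaded and started with the standard virsh commands (host-bridge.xml being a file containing the XML above):

virsh net-define host-bridge.xml    # register the network from the XML file
virsh net-start host-bridge         # create the network now
virsh net-autostart host-bridge     # and recreate it on every libvirt restart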

lxc config file

lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
#lxc.network.name = eth0
# 10.0.0.207 is the NATed private IP noted above
lxc.network.ipv4 = 10.0.0.207/24

container's /etc/network/interfaces:

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
address 10.0.0.207
netmask 255.255.255.0
gateway 10.0.0.1

My questions:

  1. What VPC setup should I choose for this scenario?
  2. What network mode makes my job easiest?
  3. At the least, with what I have achieved so far, how can I get internet access in the container?
  4. Without an Elastic IP, can't I have a private IP in the same subnet that is reachable from other instances and containers?

2 Answers


Amazon does not support dynamic IP addresses for servers inside a VPC. The one IP address assigned when an instance is started is the only one it can send from and receive on. Dynamically adding LXC container servers to an instance and assuming that bridged networking will make them work is not going to happen: Amazon's IP network is a special-purpose network for routing between instances, and it does not mimic a common Ethernet subnet - most features you would expect from one do not work.

If you need such features, you may be able to configure a virtual network subnet on top of the Amazon IP addresses, using something like GRE tunneling, VDE (Virtual Distributed Ethernet) or OpenVPN.
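
A minimal sketch of the GRE variant, assuming two instances with hypothetical private IPs 10.0.0.10 and 10.0.0.20, a br0 bridge as in the question, and security groups that permit GRE (IP protocol 47):

# on the 10.0.0.10 instance; swap local/remote on the peer
ip link add gretap1 type gretap local 10.0.0.10 remote 10.0.0.20
ip link set gretap1 up
brctl addif br0 gretap1   # containers behind both bridges now share one Ethernet segment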

However, if you want to do it purely with Amazon networking, I will try to explain your options.

Elastic IP addresses work so that the private IP address stays the same, but a one-to-one NAT to the Elastic IP happens if (and only if) the traffic gets routed to the internet gateway.

There is also the possibility of disabling the source/destination check on the network interface of the instance, which is usually done for routers. It is possible that this would allow multiple servers to reside on one instance, but it seems doubtful, as I don't believe Amazon would route the ARP requests to the server even in that case.
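
With the AWS CLI, for example, disabling the check looks like this (the instance ID is made up):

aws ec2 modify-instance-attribute --instance-id i-1a2b3c4d --no-source-dest-check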

Currently Amazon VPC supports multiple interfaces per instance (up to 2) and multiple IP addresses per instance (up to 8). This is something you can use if you are willing to configure the instance each time you add a server. These addresses are simply additional addresses that are allowed to communicate on the virtual interface of the instance. There is no DHCP support for them, so you need to configure the IP addresses manually after you've added them to your server from the management console. The normal way to use them is to just run "ip addr add" to put multiple IP addresses on the same interface, but I see no apparent reason why bridging and passing the addresses on to different interfaces wouldn't work as well.
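
A sketch of the "ip addr add" approach, assuming a secondary private IP 10.0.0.208 has already been added to the interface from the management console:

ip addr add 10.0.0.208/24 dev eth0   # no DHCP for secondary IPs: configure by hand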


I have the same need and I might have just found a way to do it.

My setup is:

An additional eth1 card with HWaddr 02:5f:fc:b3:0b:b9 and multiple IPs, 10.2.132.61 to 10.2.132.64

I want to use bridging and share the same NIC to give each of my containers one of the IPs configured for that interface.

My first attempt was to create br0 on eth1 and then use this configuration:

lxc.network.type = veth                
lxc.network.hwaddr = 02:5f:fc:b3:0b:b9 
lxc.network.flags = up                 
lxc.network.link = br0

The problem here is that when you start the container, br0 (the outer bridge) receives the container's DHCP packets with source MAC 02:5f:fc:b3:0b:b9, its own HWaddr, and the kernel complains with:

 kernel: [16809131.333956] br0: received packet on vethVIAEYB with own address as source address

The solution that I found is to change the MAC of the outer interfaces (br0 and eth1 on the host) to a random one. This way the container can get its DHCP lease straight away when it starts!

So I did

sudo ifconfig eth1 down
sudo ifconfig eth1 hw ether 00:80:48:BA:d1:30   # new random MAC for the host side
sudo ifconfig br0  hw ether 00:80:48:BA:d1:30   # keep the bridge MAC in sync
sudo ifconfig eth1 up

Then restarting the container did the trick!

It is now working with DHCP. To use the other registered IPs for that interface, I will obviously have to rely on setting the IP statically in the CONTAINER/config file.
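
A sketch of what such a static entry could look like, taking 10.2.132.62 from the range above (the MAC and gateway here are made-up placeholders):

lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
# placeholder MAC: must be unique per container
lxc.network.hwaddr = 02:5f:fc:b3:0b:ba
lxc.network.ipv4 = 10.2.132.62/24
# placeholder gateway for the subnet
lxc.network.ipv4.gateway = 10.2.132.1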
