
Thursday, July 25, 2013

Set up networking between Linux containers across EC2 instances (Ubuntu)

LXC is a good option for providing isolated execution environments to individual programs, including isolated physical resources such as CPU, memory, block I/O, and network devices, as well as software resources. However, setting up networking in an EC2 environment poses a unique challenge: Amazon Web Services only delivers packets whose source and destination IP addresses it knows about.

Assuming you are using the AWS VPC service, which allows you to use private addresses, you need to perform the following three major tasks:
1. Obtain IP addresses from AWS.
2. Update the host OS.
3. Properly configure the LXC instances.


For step 1, you can use the AWS web interface, their software development kits, or the EC2 APIs; a rough command-line sketch is shown below.
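For example, with the AWS command line tools you can create a second elastic network interface (ENI) with a private address and attach it to the instance, where it will show up as eth1. This is only a sketch; the subnet, security group, instance, and ENI IDs are placeholders, and the address follows the example used later in this post.

# Create a second ENI in your VPC subnet (the IDs and the address are placeholders).
$ aws ec2 create-network-interface --subnet-id subnet-xxxxxxxx --groups sg-xxxxxxxx --private-ip-address 10.0.1.218
# Attach it to the running instance as the second device; it will appear as eth1.
$ aws ec2 attach-network-interface --network-interface-id eni-xxxxxxxx --instance-id i-xxxxxxxx --device-index 1
# Optionally ask AWS for extra private addresses on that ENI for your containers.
$ aws ec2 assign-private-ip-addresses --network-interface-id eni-xxxxxxxx --secondary-private-ip-address-count 1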
For step 2, you need to update /etc/network/interfaces on the Ubuntu host.

Here, we will use eth0 as the management interface for ssh-ing into the EC2 instance, and eth1 as the production network interface for hosting a service that will run inside the Linux container.

On the Ubuntu host:
ubuntu@ip-10-0-1-xxx:~$ cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet dhcp
    post-up ip route add default via 10.0.1.1 dev eth0 table 1 # 10.0.1.1 is your gateway
    post-up ip rule add from 10.0.1.170/32 table 1 priority 500

auto eth1
iface eth1 inet dhcp
    post-up ip route add default via 10.0.1.1 dev eth1 table 2
    post-up ip rule add from 10.0.1.190/32 table 2 priority 600
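
Once the host comes up with both interfaces, you can sanity-check the per-interface routing with the ip command (commands only; output omitted here):

ubuntu@ip-10-0-1-xxx:~$ ip rule show
ubuntu@ip-10-0-1-xxx:~$ ip route show table 1
ubuntu@ip-10-0-1-xxx:~$ ip route show table 2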

For step 3, run ifconfig on the host to get eth1's IP address:
ubuntu@ip-10-0-1-xxx:~$ ifconfig eth1

Let's say AWS assigned 10.0.1.218 to eth1. In the container's configuration file:
lxc.network.type = phys
lxc.network.flags = up
lxc.network.link = eth1
lxc.network.name = eth1
lxc.network.ipv4 = 10.0.1.218
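
With lxc.network.type = phys, starting the container moves the physical eth1 device into the container's network namespace, so eth1 disappears from the host while the container runs. A rough sketch of starting it up (the container name mycontainer is just a placeholder):

ubuntu@ip-10-0-1-xxx:~$ sudo lxc-start -n mycontainer -d
ubuntu@ip-10-0-1-xxx:~$ ifconfig eth1   # eth1 is no longer visible on the host
ubuntu@ip-10-0-1-xxx:~$ sudo lxc-console -n mycontainer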

After starting your Linux container, update the container's /etc/network/interfaces, either through lxc-console or by editing the container's rootfs directly on the host. For some reason, the default route is not set up inside the container, so add it yourself:

post-up ip route add default via 10.0.1.1 # replace 10.0.1.1 with your gateway address
Now you can log in to the container from a remote EC2 instance in the same subnet.
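
For reference, the container's /etc/network/interfaces might end up looking roughly like this (a sketch that assumes the container keeps DHCP on eth1 and that 10.0.1.1 is your gateway):

# The loopback network interface
auto lo
iface lo inet loopback

# eth1 is the physical interface handed over from the host
auto eth1
iface eth1 inet dhcp
    post-up ip route add default via 10.0.1.1

From another EC2 instance in the same subnet you can then ssh to the container's address (10.0.1.218 in this example).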


3 comments:

  1. I realize this is an old post, but if 10.0.1.218 is the address AWS assigned to eth1, what are the other IP addresses referenced here, 10.0.1.170 and 10.0.1.190?

    Replies
    1. As it was 3 years ago, I cannot recall it correctly. I guess at that time I tried to set up a cluster of VMs, so I might have added them.

  2. I take it you were successful in getting containers to communicate with each other across EC2 hosts? We're tackling this problem ourselves now and have not had a lot of luck. We're dealing with CentOS and libvirt-lxc, but your posting seemed to offer some clues.
