Assuming you are using the AWS VPC service, which allows you to use private addresses, you need to complete the following three major tasks:
1. Obtain IP addresses from AWS.
2. Update the host OS network configuration.
3. Properly configure the LXC instances.
For step 1, you can use AWS web interface, their software development kits or ec2 APIs.
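For example, with the AWS CLI you can create a second elastic network interface in your subnet and attach it to the instance as eth1. This is only a sketch; the subnet, ENI, and instance IDs are placeholders you must replace with your own:
aws ec2 create-network-interface --subnet-id subnet-xxxxxxxx --description "production NIC"
aws ec2 attach-network-interface --network-interface-id eni-xxxxxxxx --instance-id i-xxxxxxxx --device-index 1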
For step 2, you need to update /etc/network/interfaces on the Ubuntu host.
Here, we will use eth0 as the management interface for SSHing into the EC2 instance, and eth1 as the production interface for a service that will run inside the Linux container. Since both interfaces live in the same subnet, each one gets its own routing table plus a source-based rule, so that traffic from each address leaves through its own interface instead of everything defaulting out eth0.
On the Ubuntu host:
ubuntu@ip-10-0-1-xxx:~$ cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
auto eth0
iface eth0 inet dhcp
post-up ip route add default via 10.0.1.1 dev eth0 table 1 # 10.0.1.1 is your gateway
post-up ip rule add from 10.0.1.170/32 table 1 priority 500 # 10.0.1.170 is eth0's private IP in this example
auto eth1
iface eth1 inet dhcp
post-up ip route add default via 10.0.1.1 dev eth1 table 2
post-up ip rule add from 10.0.1.190/32 table 2 priority 600 # 10.0.1.190 is eth1's private IP in this example
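After bringing the interfaces up (for example with sudo ifup eth1, or by rebooting), it is worth sanity-checking that the extra routing tables and rules exist; the table numbers and addresses here are the example values from above:
ubuntu@ip-10-0-1-xxx:~$ ip rule show # should list the priority 500 and 600 rules
ubuntu@ip-10-0-1-xxx:~$ ip route show table 2 # should show: default via 10.0.1.1 dev eth1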
For step 3, run ifconfig on the host to see which address AWS assigned to eth1:
ubuntu@ip-10-0-1-xxx:~$ ifconfig eth1
Let's say AWS assigned 10.0.1.218 to eth1. In the container's configuration file (typically /var/lib/lxc/<container>/config), set:
# pass the host's eth1 device directly into the container;
# the host loses the interface while the container is running
lxc.network.type = phys
lxc.network.flags = up
lxc.network.link = eth1
# name the device will have inside the container
lxc.network.name = eth1
lxc.network.ipv4 = 10.0.1.218
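With the configuration saved, start the container and attach to it; the container name web is only a placeholder for whatever yours is called:
ubuntu@ip-10-0-1-xxx:~$ sudo lxc-start -n web -d
ubuntu@ip-10-0-1-xxx:~$ sudo lxc-console -n web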
After starting up your Linux container, update the container's /etc/network/interfaces file, either through lxc-console or by editing the container's rootfs on the host. For some reason, the default route does not get set up inside the container automatically, so append:
post-up ip route add default via 10.0.1.1 # your gateway address
Now you can log in to the container from a remote EC2 instance in the same subnet.
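For reference, a minimal /etc/network/interfaces inside the container might look like the following, assuming the device keeps the name eth1 and DHCP assigns the address as configured above:
auto lo
iface lo inet loopback
auto eth1
iface eth1 inet dhcp
post-up ip route add default via 10.0.1.1 # your gateway address
From a neighboring instance, ssh ubuntu@10.0.1.218 should then reach the container.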