How to run Docker inside a Nova/LXD container

I’ve been experimenting with deploying OpenStack using Nova/LXD (instead of Nova/KVM) for quite some time, using conjure-up as the deployment tool. It is simple, easy to set up and use, and produces a usable OpenStack cluster.

However, I’ve been unable to run Docker inside a Nova instance (implemented as an LXD container) using an out-of-the-box installation deployed by conjure-up. The underlying reason is that the LXD container where nova-compute is hosted lacks some privileges. Also, inside this nova-compute container Nova/LXD spawns nested LXD containers, one for each Nova instance, which again lack some additional privileges required by Docker.

Short story: you can apply the docker LXD profile to both the nova-compute container and to the nested LXD containers inside it where you want to run Docker, and Docker will run fine:

⟫ juju status nova-compute
Model                         Controller                Cloud/Region         Version    SLA
conjure-openstack-novalx-1d1  conjure-up-localhost-718  localhost/localhost  2.2-beta4  unsupported

App                  Version  Status  Scale  Charm                Store       Rev  OS      Notes
lxd                  2.0.9    active      1  lxd                  jujucharms   10  ubuntu
neutron-openvswitch  10.0.0   active      1  neutron-openvswitch  jujucharms  240  ubuntu
nova-compute         15.0.2   active      1  nova-compute         jujucharms  266  ubuntu

Unit                      Workload  Agent  Machine  Public address  Ports  Message
nova-compute/0*           active    idle   4        10.0.8.61              Unit is ready
  lxd/0*                  active    idle            10.0.8.61              Unit is ready
  neutron-openvswitch/0*  active    idle            10.0.8.61              Unit is ready

Machine  State    DNS        Inst id        Series  AZ  Message
4        started  10.0.8.61  juju-59ffc3-4  xenial      Running
...

From the previous output, notice how the nova-compute/0 unit is running in machine #4, and that the underlying LXD container is named juju-59ffc3-4. Now, let’s see the LXD profiles used by this container:

⟫ lxc info juju-59ffc3-4 | grep Profiles
Profiles: default, juju-conjure-openstack-novalx-1d1

The docker LXD profile is missing from this container, which will cause any nested container that tries to use Docker to fail. Entering the nova-compute/0 container, we initially see no nested containers: since there are no Nova instances, there are no LXD containers. Remember that when using Nova/LXD, there is a 1:1 mapping between a Nova instance and an LXD container:

⟫ lxc exec juju-59ffc3-4 /bin/bash
root@juju-59ffc3-4:~# lxc list
+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+

Let’s spawn a Nova instance for testing:

⟫ juju ssh nova-cloud-controller/0
ubuntu@juju-59ffc3-13:~$ source novarc
ubuntu@juju-59ffc3-13:~$ openstack server create --flavor m1.small --image xenial-lxd --nic net-id=ubuntu-net test1

Now, if we take a look inside the nova-compute/0 container, we will see a nested container:

⟫ juju ssh nova-compute/0
ubuntu@juju-59ffc3-4:~$ sudo -i
root@juju-59ffc3-4:~# lxc list
+-------------------+---------+-------------------+------+------------+-----------+
|       NAME        |  STATE  |       IPV4        | IPV6 |    TYPE    | SNAPSHOTS |
+-------------------+---------+-------------------+------+------------+-----------+
| instance-00000001 | RUNNING | 10.101.0.9 (eth0) |      | PERSISTENT | 0         |
+-------------------+---------+-------------------+------+------------+-----------+
root@juju-59ffc3-4:~# lxc info instance-00000001 | grep Profiles
Profiles: instance-00000001

Here one can see that the nested container is using a profile named after the Nova instance. Let’s enter this nested container, install Docker and try to spawn a Docker container:

root@juju-59ffc3-4:~# lxc exec instance-00000001 /bin/bash
root@test1:~# apt-get update
...
root@test1:~# apt-get -y install docker.io
...
root@test1:~# docker run -it ubuntu /bin/bash
Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
b6f892c0043b: Pull complete
55010f332b04: Pull complete
2955fb827c94: Pull complete
3deef3fcbd30: Pull complete
cf9722e506aa: Pull complete
Digest: sha256:382452f82a8bbd34443b2c727650af46aced0f94a44463c62a9848133ecb1aa8
Status: Downloaded newer image for ubuntu:latest
docker: Error response from daemon: containerd: container not started.

As can be seen, Docker was unable to start the container.

The first thing to try is adding the docker LXD profile to the nested container, the one hosting our Nova instance:

⟫ juju ssh nova-compute/0
ubuntu@juju-59ffc3-4:~$ sudo -i
root@juju-59ffc3-4:~# lxc list
+-------------------+---------+-------------------+------+------------+-----------+
|       NAME        |  STATE  |       IPV4        | IPV6 |    TYPE    | SNAPSHOTS |
+-------------------+---------+-------------------+------+------------+-----------+
| instance-00000001 | RUNNING | 10.101.0.5 (eth0) |      | PERSISTENT | 0         |
+-------------------+---------+-------------------+------+------------+-----------+
root@juju-59ffc3-4:~# lxc info instance-00000001 | grep Profiles
Profiles: instance-00000001
root@juju-59ffc3-4:~# lxc profile apply instance-00000001 instance-00000001,docker
Profile instance-00000001,docker applied to instance-00000001
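
As mentioned at the beginning, the outer nova-compute container may also need the docker profile. The same command works from the LXD host; the profile names below come from the earlier lxc info output, so adjust them to match your deployment:

⟫ lxc profile apply juju-59ffc3-4 default,juju-conjure-openstack-novalx-1d1,docker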

Now, let’s try again to run a Docker container:

root@juju-59ffc3-4:~# lxc exec instance-00000001 /bin/bash
root@test1:~# docker run -it ubuntu /bin/bash
root@7fc441a9b0a5:/# uname -r
4.10.0-21-generic
root@7fc441a9b0a5:/#

But this, besides being a manual process, is not elegant. There is another solution which requires no operator intervention: a small Python patch to the Nova/LXD driver that allows selectively adding extra LXD profiles to Nova containers:

$ juju ssh nova-compute/0
ubuntu@juju-59ffc3-4:~$ sudo -i
root@juju-59ffc3-4:~# patch -d/ -p0 << EOF
--- /usr/lib/python2.7/dist-packages/nova_lxd/nova/virt/lxd/config.py.orig      2017-06-07 19:41:47.685278274 +0000
+++ /usr/lib/python2.7/dist-packages/nova_lxd/nova/virt/lxd/config.py   2017-06-07 19:42:58.891624467 +0000
@@ -56,11 +56,17 @@
         instance_name = instance.name
         try:
 
+            # Profiles to be applied to the container
+            profiles = [str(instance.name)]
+            lxd_profiles = instance.flavor.extra_specs.get('lxd:profiles')
+            if lxd_profiles:
+                profiles += lxd_profiles.split(',')
+
             # Fetch the container configuration from the current nova
             # instance object
             container_config = {
                 'name': instance_name,
-                'profiles': [str(instance.name)],
+                'profiles': profiles,
                 'source': self.get_container_source(instance),
                 'devices': {}
             }
EOF
root@juju-59ffc3-4:~# service nova-compute restart

Now, let’s create a new flavor named docker with an extra spec that adds the docker LXD profile to all instances based on this flavor:

⟫ juju ssh nova-cloud-controller/0
ubuntu@juju-59ffc3-13:~$ source novarc
ubuntu@juju-59ffc3-13:~$ openstack flavor create --disk 20 --vcpus 2 --ram 1024 docker
ubuntu@juju-59ffc3-13:~$ openstack flavor set --property lxd:profiles=docker docker
ubuntu@juju-59ffc3-13:~$ openstack server create --flavor docker --image xenial-lxd --nic net-id=ubuntu-net test2
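
At this point, the flavor and its extra spec can be double-checked with the following command (output omitted for brevity):

ubuntu@juju-59ffc3-13:~$ openstack flavor show docker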

Then, inside the nova-compute container:

⟫ juju ssh nova-compute/0
ubuntu@juju-59ffc3-4:~$ sudo -i
root@juju-59ffc3-4:~# lxc list
+-------------------+---------+--------------------------------+------+------------+-----------+
|       NAME        |  STATE  |              IPV4              | IPV6 |    TYPE    | SNAPSHOTS |
+-------------------+---------+--------------------------------+------+------------+-----------+
| instance-00000001 | RUNNING | 172.17.0.1 (docker0)           |      | PERSISTENT | 0         |
|                   |         | 10.101.0.9 (eth0)              |      |            |           |
+-------------------+---------+--------------------------------+------+------------+-----------+
| instance-00000003 | RUNNING | 10.101.0.8 (eth0)              |      | PERSISTENT | 0         |
+-------------------+---------+--------------------------------+------+------------+-----------+
root@juju-59ffc3-4:~# lxc info instance-00000003 | grep Profiles
Profiles: instance-00000003, docker
root@juju-59ffc3-4:~# lxc exec instance-00000003 /bin/bash
root@test2:~# apt-get update
...
root@test2:~# apt-get -y install docker.io
...
root@test2:~# docker run -it ubuntu /bin/bash
Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
b6f892c0043b: Pull complete
55010f332b04: Pull complete
2955fb827c94: Pull complete
3deef3fcbd30: Pull complete
cf9722e506aa: Pull complete
Digest: sha256:382452f82a8bbd34443b2c727650af46aced0f94a44463c62a9848133ecb1aa8
Status: Downloaded newer image for ubuntu:latest
root@fd74cfa04876:/# uname -r
4.10.0-21-generic
root@fd74cfa04876:/#

So, that’s it. This small patch, which enables support for the lxd:profiles extra spec, makes it much easier to run Docker inside Nova instances hosted in LXD containers.

Juju and apt-cacher

I’ve been playing quite a lot lately with Juju and other related software projects, like conjure-up and LXD. They make it so easy to spin complex software stacks like OpenStack up and down that you don’t even realize the cost until your hosting provider starts alerting you about high traffic consumption. And guess where most of this traffic comes from? Installing packages.

So I decided to save on bandwidth by using apt-cacher. It is straightforward to set up and get running. In the end, if you follow the steps described in the previous link (or this one), you will end up with a Perl program listening on port 3142 on your machine that you can use as an APT cache.
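
As a side note, any other machine or container can point plain APT at the same cache by dropping a one-line proxy setting into a file such as /etc/apt/apt.conf.d/01proxy (the file name and host below are placeholders, adjust to taste):

Acquire::http::Proxy "http://apt-cacher-host:3142";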

For Juju, one can use a YAML configuration file like this:

apt-http-proxy: http://localhost:3142
apt-https-proxy: http://localhost:3142

Then bootstrap Juju using the following command:

$ juju bootstrap --config config.yaml localhost lxd
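
If the controller is already bootstrapped, the same settings can also be applied to an existing model; if memory serves, with Juju 2.x this looks like:

$ juju model-config apt-http-proxy=http://localhost:3142 apt-https-proxy=http://localhost:3142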

For conjure-up, it is also very easy:

$ conjure-up \
    --apt-proxy http://localhost:3142 \
    --apt-https-proxy http://localhost:3142 \
    ...

Persistent loopback interfaces in Mac OS X

One of the things that I miss in Mac OS X is support for multiple loopback addresses. Not just 127.0.0.1, but anything in the form 127.* (e.g. 127.0.1.1 or 127.0.0.2).

To add an additional IPv4 address to the loopback interface, one can use the following command in a Mac OS X terminal:

$ sudo ifconfig lo0 alias 127.0.1.1
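
Should you ever need to undo this, the alias can be removed again with the -alias keyword:

$ sudo ifconfig lo0 -alias 127.0.1.1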

The problem is that this doesn’t persist across reboots. To make it persist, one can create a “launchd” daemon that configures this additional IPv4 address. Something like this:

$ cat << EOF | sudo tee -a /Library/LaunchDaemons/com.felipe-alfaro.loopback1.plist
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
  <dict>
    <key>Label</key>
    <string>com.felipe-alfaro.loopback1</string>
    <key>ProgramArguments</key>
    <array>
        <string>/sbin/ifconfig</string>
        <string>lo0</string>
        <string>alias</string>
        <string>127.0.1.1</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
  </dict>
</plist>
EOF

Then, start the service up:

$ sudo launchctl load /Library/LaunchDaemons/com.felipe-alfaro.loopback1.plist

And make sure it did work:

$ sudo launchctl list | grep com.felipe-alfaro
-   0   com.felipe-alfaro.loopback1
$ ifconfig lo0
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 16384
    options=1203<RXCSUM,TXCSUM,TXSTATUS,SW_TIMESTAMP>
    inet 127.0.0.1 netmask 0xff000000 
    inet6 ::1 prefixlen 128 
    inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1 
    inet 127.0.1.1 netmask 0xff000000 
    nd6 options=201<PERFORMNUD,DAD>

And, for the record, it is perfectly possible to have multiple services like the one above, one for each additional IPv4 address. Just make sure to give the .plist files different names, and to use different service names in the Label tag.
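
For instance, a second file (a hypothetical /Library/LaunchDaemons/com.felipe-alfaro.loopback2.plist) would be identical to the one above, except that its Label string would read com.felipe-alfaro.loopback2 and the last ProgramArguments string would read 127.0.0.2. It is then loaded the same way:

$ sudo launchctl load /Library/LaunchDaemons/com.felipe-alfaro.loopback2.plist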

Default user in WSL

The Windows Subsystem for Linux (WSL) defaults to running as the “root” user. To change that behavior, first create a Linux user. Let’s imagine this user is named “jdoe”. To have WSL start the session as “jdoe” instead of “root”, run the following command from a “cmd.exe” window:

C:\Users\JohnDoe> lxrun /setdefaultuser jdoe

Take into account that any running WSL session will be killed immediately.

QPID and OpenStack

If you are still using QPID in your OpenStack deployment, be careful with the QPID topology version used. It seems some components in Havana default to version 2 while others in Icehouse default to 1.

To avoid problems, perhaps you want to explicitly set the following configuration option in files like /etc/nova/nova.conf:

qpid_topology_version=1
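
For reference, in a Havana/Icehouse-era nova.conf this option normally lives in the [DEFAULT] section, so the relevant fragment would look something like:

[DEFAULT]
...
qpid_topology_version=1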

OpenStack with devstack in Ubuntu

Introduction

To play with OpenStack using devstack, I chose Ubuntu Server 12.04 LTS as the base operating system. To make things even easier, I decided to deploy the complete OpenStack stack inside a virtual machine under VMware (in my case, Fusion). Make sure you enable the following options, which are reachable under Virtual Machine > Settings … > Processors & Memory (tab), section Advanced options:

  • Enable hypervisor applications in this virtual machine: Enables running modern virtualisation applications by providing support for Intel VT-x/EPT inside the virtual machine.
  • Enable code profiling applications in this virtual machine: Enables running modern code profiling applications by providing support for CPU performance monitoring counters inside this virtual machine.

In addition, a custom VMware vmnet3 network interface is being used, configured to do NAT and to use the 192.168.100.0/24 subnet.

The actual deployment of OpenStack will use Neutron for networking and will install Ceilometer for monitoring and instrumentation.

Installing Ubuntu Server

The only relevant part is the partition scheme. I decided to use a 500MiB /boot partition formatted as Ext4, and to create an LVM volume group named cinder-volumes. Make sure this volume group is big enough to store the root file system, plus the swap file and other logical volumes that will be used by Cinder.

Once the system is running, make sure to apply any updates and security fixes, and to install some dependencies, like Git:

sudo apt-get update
sudo apt-get dist-upgrade
sudo apt-get install git

Network configuration

I prefer to use static IP addresses rather than relying on static leases via DHCP:

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet static
  address 192.168.100.10
  netmask 255.255.255.0
  gateway 192.168.100.2
  dns-nameservers 192.168.100.2

Prepare devstack

First, clone the devstack repository and switch to the proper branch. In this post, the stable/havana branch is used, but feel free to use something else:

git clone https://github.com/openstack-dev/devstack.git
cd devstack
git checkout stable/havana

Customize devstack

Devstack provides some sane defaults, but I prefer to use Neutron networking and to install Ceilometer. Below is an example of a possible localrc configuration file (which must be placed in the root of the devstack repository):

# MySQL
MYSQL_PASSWORD=nova

# RabbitMQ
RABBIT_PASSWORD=nova

# Keystone
ADMIN_PASSWORD=nova
SERVICE_PASSWORD=nova
SERVICE_TOKEN=nova

# Glance
# Nothing to config

# Nova
disable_service n-net

# Neutron
HOST_IP=192.168.100.10
Q_PLUGIN=ml2
Q_AGENT_EXTRA_OVS_OPTS=(tenant_network_type=local)
OVS_VLAN_RANGE=physnet1
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-eth0
enable_service neutron,q-svc,q-agt,q-dhcp,q-meta

# Ceilometer
enable_service ceilometer-acompute,ceilometer-acentral,ceilometer-anotification,ceilometer-collector
enable_service ceilometer-alarm-evaluator,ceilometer-alarm-notifier
enable_service ceilometer-api

# Heat
enable_service heat,h-api,h-api-cfn,h-api-cw,h-eng

# Others
LOGFILE=$DEST/logs/stack.sh.log

Install devstack

# ./stack.sh

This takes a very long time, especially on slow Internet connections.

Post-installation

Remove any bridges created by libvirtd that are not going to be used:

virsh net-destroy default
virsh net-undefine default

The next step is to configure the Open vSwitch bridge that will provide access to the real physical network (and to the Internet). First, add the eth0 interface to the br-eth0 bridge:

# ovs-vsctl add-port br-eth0 eth0
# ifconfig br-eth0 promisc up
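
The resulting bridge and port layout can be double-checked with:

# ovs-vsctl show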

Then, move the static IP address from eth0 to the br-eth0 bridge interface:

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet static
  up ifconfig $IFACE 0.0.0.0 up
  up ip link set $IFACE promisc on
  down ip link set $IFACE promisc off
  down ifconfig $IFACE down

# The Open vSwitch network interface
auto br-eth0
iface br-eth0 inet static
  address 192.168.100.10
  netmask 255.255.255.0
  gateway 192.168.100.2
  dns-nameservers 192.168.100.2
  up ip link set $IFACE promisc on
  down ip link set $IFACE promisc off

Authentication

To ease authentication when using OpenStack command-line tools, I suggest creating script files, one for each tenant and user, each exporting the right environment variables to operate as that particular tenant and user:

# cat keystone-admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=nova
PS1="\u@\h:\w (keystone-$OS_USERNAME)\$ "
source openrc

This file can be sourced anytime you want to operate as that user and tenant:

# source keystone-admin
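
Later in this post a keystone-demo file is sourced to operate as the demo tenant and user; assuming the demo user keeps the password configured in localrc, it would look analogous to the file above:

# cat keystone-demo
export OS_TENANT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=nova
PS1="\u@\h:\w (keystone-$OS_USERNAME)\$ "
source openrc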

Configuring a flat network

A flat network is one in which the OpenStack instances attached to it share a physical network, using a flat address space. Commonly, this flat network corresponds to a physical LAN segment that provides public or Internet connectivity.

# neutron net-create --tenant-id admin sharednet1 --shared --provider:network_type flat --provider:physical_network physnet1
# neutron subnet-create --tenant-id admin sharednet1 192.168.100.0/24 --gateway 192.168.100.2 --dns-nameserver 192.168.100.2 --allocation-pool start=192.168.100.150,end=192.168.100.200
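
To verify that the network and subnet were created as expected:

# neutron net-list
# neutron subnet-list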

Allow any traffic to/from the OpenStack instances

From now on, let’s use the “demo” tenant and “demo” user:

# source keystone-demo

The security group of the demo user in the demo tenant will be changed to allow any ingress and egress IP traffic:

# Obtain TenantA's default security group ID
# neutron --os-tenant-name demo --os-username demo security-group-list

# Enable ICMP and TCP ports
# neutron --os-tenant-name demo --os-username demo security-group-rule-create --protocol icmp --direction ingress {TenantA security group ID}
# neutron --os-tenant-name demo --os-username demo security-group-rule-create --protocol icmp --direction egress {TenantA security group ID}
# neutron --os-tenant-name demo --os-username demo security-group-rule-create --protocol tcp --direction egress --port-range-min 1 --port-range-max 65535 {TenantA security group ID}
# neutron --os-tenant-name demo --os-username demo security-group-rule-create --protocol tcp --direction ingress --port-range-min 1 --port-range-max 65535 {TenantA security group ID}
# neutron --os-tenant-name demo --os-username demo security-group-rule-create --protocol udp --direction egress --port-range-min 1 --port-range-max 65535 {TenantA security group ID}
# neutron --os-tenant-name demo --os-username demo security-group-rule-create --protocol udp --direction ingress --port-range-min 1 --port-range-max 65535 {TenantA security group ID}

References

http://wiki.stackinsider.com/index.php/DevStack_-_Single_Node_using_Neutron_FLAT_-_Havana

http://wiki.stackinsider.com/index.php/Native_Stack_-_Single_Node_using_Neutron_FLAT_-_Havana#Prepare_Tenant_Network

Installing OpenStack all-in-one using packstack from the master branch

If you ever want to install OpenStack using packstack from the master branch (Git repository), follow these steps:

$ git clone --recursive git://github.com/stackforge/packstack.git
$ cd packstack
$ ./bin/packstack --allinone --os-neutron-install=y --provision-demo=n --provision-all-in-one-ovs-bridge=n

Those commands clone the latest snapshot of packstack from its Git repository, then invoke packstack to install OpenStack, enabling Neutron (networking) and disabling provisioning of the demo tenant (and its related network configuration). The command-line flags provided to packstack are intended for running OpenStack on a single machine.
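
If you prefer to review or tweak the configuration before deploying, packstack can also generate an answer file first and then be run against it; something along these lines should work:

$ ./bin/packstack --gen-answer-file=answers.txt
$ ./bin/packstack --answer-file=answers.txt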
