Status bar in “screen”

In order to have screen behave a little bit more like byobu, edit or create (if not present) /etc/screenrc or ~/.screenrc and add the following lines:

autodetach on 
startup_message off 
hardstatus alwayslastline 
shelltitle 'bash'
hardstatus string '%{gk}[%{wk}%?%-Lw%?%{=b kR}(%{W}%n*%f %t%?(%u)%?%{=b kR})%{= w}%?%+Lw%?%? %{g}][%{d}%l%{g}][ %{= w}%Y/%m/%d %0C:%s%a%{g} ]%{W}'
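
To pick up these settings, just start a new screen session; from inside an already-running session you can also re-read the file (assuming GNU screen's default Ctrl-a command prefix) by pressing Ctrl-a, then typing :source ~/.screenrc. For example:

$ screen -S demo    # a new session reads /etc/screenrc and ~/.screenrc automatically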

Debugging MAAS 2.x Ephemeral images

MAAS 2.x relies on Ephemeral images during commissioning of nodes. Basically, an Ephemeral image consists of a kernel, a RAM disk and a squashfs filesystem that is booted over the network (PXE) and relies on cloud-init to perform discovery of a node’s hardware (e.g. number of CPUs, RAM, disks, etc.).

There are times when, for some reason, the commissioning process fails and you need to do some troubleshooting. Typically, the node boots over PXE but cloud-init fails, and you are left at the login screen of a non-configured host (e.g. the hostname is ‘ubuntu’). But Ephemeral images don’t allow anyone to log in interactively. The solution consists of injecting a backdoor into the Ephemeral image, such as setting a password for the root user. Next, I will explain how to do this.

Ephemeral images are downloaded from the Internet by the MAAS region controller and synchronized to MAAS rack controllers. They are published under:

https://images.maas.io/ephemeral-v3/daily/

Inside this directory, there is a subdirectory named after the Ubuntu release code name (e.g. Xenial):

https://images.maas.io/ephemeral-v3/daily/xenial/

Under this, another subdirectory named after the CPU architecture (e.g. AMD64):

https://images.maas.io/ephemeral-v3/daily/xenial/amd64/

And under this, another subdirectory named with some timestamp:

https://images.maas.io/ephemeral-v3/daily/xenial/amd64/20171011/

If you browse this location, you will find something like this:

[DIR]  ga-16.04/                12-Oct-2017 01:57       -
[DIR]  hwe-16.04-edge/          12-Oct-2017 01:57       -
[DIR]  hwe-16.04/               12-Oct-2017 01:57       -
[   ]  squashfs                 12-Oct-2017 01:57       156M
[TXT]  squashfs.manifest        12-Oct-2017 01:57       13K

The squashfs filesystem is shared among the different kernel/ramdisk combinations (GA, which stands for General Availability, HWE and HWE Edge). As mentioned before, these files are downloaded and kept updated on MAAS rack controllers under:

/var/lib/maas/boot-resources/snapshot-20171020-091808/ubuntu/amd64/hwe-16.04-edge/xenial/daily

The on-disk layout is different from the Web layout, as each kernel/ramdisk combination has its own subdirectory together with the squashfs filesystem. But let’s not digress. To introduce a backdoor, such as a password for the root user, do the following:

# cd /var/lib/maas/boot-resources/snapshot-20171020-091808/ubuntu/amd64/hwe-16.04-edge/xenial/daily
# unsquashfs squashfs
# openssl passwd -1 ubuntu
$1$lqVUYmVl$6QatT6qYPVpFo8nbEO4Ve1
# cp -r squashfs-root/etc/passwd squashfs-root/etc/passwd~
# sed 's,^root:x:0:0:root:/root:/bin/bash$,root:$1$lqVUYmVl$6QatT6qYPVpFo8nbEO4Ve1:0:0:root:/root:/bin/bash,g' > squashfs-root/etc/passwd < squashfs-root/etc/passwd~
# cp -r squashfs squashfs.dist
# mksquashfs squashfs-root squashfs -xattrs -comp xz -noappend
# chown maas:maas squashfs

Now that the squashfs image has been unpacked, patched and re-packed, one can try commissioning the node again. If it fails, one can now log in interactively at the console as user root with password ubuntu.
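
As a quick, optional sanity check before re-commissioning, the rebuilt image can be re-extracted into a temporary directory to confirm that the root entry now carries the password hash:

# unsquashfs -d /tmp/squashfs-check squashfs
# grep '^root:' /tmp/squashfs-check/etc/passwd
# rm -rf /tmp/squashfs-check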

Juju and apt-cacher

I’ve been playing quite a lot lately with Juju and other related software projects, like conjure-up and LXD. They make it so easy to spin complex software stacks like OpenStack up and down that you don’t even notice until your hosting provider starts alerting you about high traffic consumption. And guess where most of this traffic comes from? From installing packages.

So I decided to save on bandwidth by using apt-cacher. It is straightforward to set up and get running. In the end, if you follow the steps described in the previous link or this one, you will end up with a Perl program listening on port 3142 on your machine that you can use as an APT cache.

For Juju, one can use a YAML configuration file like this:

apt-http-proxy: http://localhost:3142
apt-https-proxy: http://localhost:3142

Then bootstrap Juju using the following command:

$ juju bootstrap --config config.yaml localhost lxd
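
Once the controller is up, it should be possible to double-check that the proxy settings were picked up by querying the model configuration (a quick verification, not strictly required):

$ juju model-config apt-http-proxy
$ juju model-config apt-https-proxy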

For conjure-up it is also very easy:

$ conjure-up \
    --apt-proxy http://localhost:3142 \
    --apt-https-proxy http://localhost:3142 \
    ...

OpenStack Newton and LXD

Background

This post is about deploying a minimal OpenStack Newton cluster atop LXD on a single machine. Most of what is mentioned here is based on OpenStack on LXD.

Introduction

The rationale behind using LXD is simplicity and feasibility: it doesn’t require more than one x86_64 server with 8 CPU cores, 64GB of RAM and an SSD drive large enough to perform an all-in-one deployment of OpenStack Newton.

According to Canonical, “LXD is a pure-container hypervisor that runs unmodified Linux guest operating systems with VM-style operations at incredible speed and density.” Instead of using full virtual machines to run the different OpenStack components, LXD is used, which allows for a higher “machine” (container) density. In practice, an LXD container behaves pretty much like a virtual or bare-metal machine.

For this experiment I will be using Ubuntu 16.04.2 on a machine with 128GB of RAM, 12 CPU cores and 4x240GB SSD drives configured as software RAID0. For increased performance and efficiency, ZFS is also used (on a dedicated partition, separate from the base OS) as the backing store for LXD.
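
If you want to prepare the ZFS backing store by hand, a dedicated pool can be created up front and then reused from lxd init (the partition name below is just an example; use whatever dedicated partition you set aside for LXD):

$ sudo zpool create lxd /dev/md0p2    # /dev/md0p2 is illustrative
$ sudo zpool list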

Preparation

$ sudo add-apt-repository ppa:juju/devel
$ sudo add-apt-repository ppa:ubuntu-lxc/lxd-stable
$ sudo apt update
$ sudo apt install \
    juju lxd zfsutils-linux squid-deb-proxy \
    python-novaclient python-keystoneclient \
    python-glanceclient python-neutronclient \
    python-openstackclient curl
$ git clone https://github.com/falfaro/openstack-on-lxd.git

It is important to run all the following commands inside the openstack-on-lxd directory where the Git repository has been cloned locally.
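
In other words, change into the freshly cloned directory before continuing:

$ cd openstack-on-lxd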

LXD set up

$ sudo lxd init

The relevant part here is the network configuration. IPv6 is not properly supported by Juju, so make sure not to enable it. For IPv4, use the 10.0.8.0/24 subnet and assign the 10.0.8.1 address to LXD itself. The DHCP range could be something like 10.0.8.2 to 10.0.8.200.
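
For reference, a rough non-interactive equivalent of those answers using the lxc CLI would be the following (this assumes a reasonably recent LXD and that the bridge created by lxd init is named lxdbr0):

$ lxc network set lxdbr0 ipv4.address 10.0.8.1/24
$ lxc network set lxdbr0 ipv4.dhcp.ranges 10.0.8.2-10.0.8.200
$ lxc network set lxdbr0 ipv6.address none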

NOTE: Having LXD listen on the network is also an option for remotely managing LXD, but beware of security issues when exposing it over a public network. Using ZFS (or Btrfs) should also increase performance and efficiency (e.g. copy-on-write saves disk space by avoiding duplicating the same base image bits across containers).

Using an MTU of 9000 for container interfaces will likely increase performance:

$ lxc profile device set default eth0 mtu 9000

The next step is to spawn an LXC container for testing purposes:

$ lxc launch ubuntu-daily:xenial openstack
$ lxc exec openstack bash
# exit

A specific LXC profile named juju-default will be used when deploying OpenStack. In particular, this profile allows nesting LXD (required by nova-compute), allows running privileged containers, and preloads certain kernel modules required inside the OpenStack containers.

$ lxc profile create juju-default 2>/dev/null || \
  echo "juju-default profile already exists"
$ cat lxd-profile.yaml | lxc profile edit juju-default
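
To confirm that the profile was populated correctly:

$ lxc profile show juju-default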

Bootstrap Juju controller

$ juju bootstrap --config config.yaml localhost lxd

Deploy OpenStack

$ juju deploy bundle-newton-novalxd.yaml
$ watch juju status

Testing

After Juju has finished deploying OpenStack, make sure there is a file named novarc in the current directory. This file has to be sourced in order to use the OpenStack CLI:

$ source novarc
$ openstack catalog list
$ nova service-list
$ neutron agent-list
$ cinder service-list

Create Nova flavors:

$ openstack flavor create --public \
    --ram   512 --disk  1 --ephemeral  0 --vcpus 1 m1.tiny
$ openstack flavor create --public \
    --ram  1024 --disk 20 --ephemeral 40 --vcpus 1 m1.small
$ openstack flavor create --public \
    --ram  2048 --disk 40 --ephemeral 40 --vcpus 2 m1.medium
$ openstack flavor create --public \
    --ram  8192 --disk 40 --ephemeral 40 --vcpus 4 m1.large
$ openstack flavor create --public \
    --ram 16384 --disk 80 --ephemeral 40 --vcpus 8 m1.xlarge

Add the typical SSH key:

$ openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey

Create a Neutron external network and a virtual network for testing:

$ ./neutron-ext-net \
    -g 10.0.8.1 -c 10.0.8.0/24 \
    -f 10.0.8.201:10.0.8.254 ext_net
$ ./neutron-tenant-net \
    -t admin -r provider-router \
    -N 10.0.8.1 internal 192.168.20.0/24

CAVEAT: Nova/LXD does not support use of QCOW2 images in Glance. Instead one has to use RAW images. For example:

$ curl http://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-root.tar.gz | \
  glance image-create --name xenial --disk-format raw --container-format bare

Then:

$ openstack server create \
    --image xenial --flavor m1.tiny --key-name mykey --wait \
    --nic net-id=$(neutron net-list | grep internal | awk '{ print $2 }') \
    openstack-on-lxd-ftw
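
Once the instance has been scheduled, its status can be watched with:

$ watch openstack server list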

NOTE: For reasons I do not yet understand, one can’t use a flavor other than m1.tiny. The reason is that this flavor is the only one that does not request any ephemeral disk. As soon as an ephemeral disk is requested, the LXD subsystem inside the nova-compute container complains with the following error:

$ juju ssh nova-compute/0
$ sudo tail -f /var/log/nova/nova-compute.log
...
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2078, in _build_resources
    yield resources
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1920, in _build_and_run_instance
    block_device_info=block_device_info)
  File "/usr/lib/python2.7/dist-packages/nova/virt/lxd/driver.py", line 317, in spawn
    self._add_ephemeral(block_device_info, lxd_config, instance)
  File "/usr/lib/python2.7/dist-packages/nova/virt/lxd/driver.py", line 1069, in _add_ephemeral
    raise exception.NovaException(reason)
NovaException: Unsupport LXD storage detected. Supported storage drivers are zfs and btrfs.

If Cinder is available, create a test Cinder volume:

$ cinder create --name testvolume 10
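
And check that the volume eventually becomes available:

$ cinder list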

Default user in WSL

The Windows Subsystem for Linux (WSL) defaults to running as the “root” user. In order to change that behavior, just create a Linux user. Let’s imagine this user is named “jdoe”. To have WSL start the session as “jdoe” instead of “root”, just run the following command from a “cmd.exe” window:

C:\Users\JohnDoe> lxrun /setdefaultuser jdoe

Take into account that any running WSL session will be killed immediately.
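
In case the “jdoe” user does not exist yet, it can be created beforehand from inside a WSL session (which, remember, runs as root by default), for example:

# adduser jdoe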

Create a local mirror of Ubuntu packages using apt-mirror

Sometimes, having a local mirror of Ubuntu packages can be useful: it can save tons of network bandwidth when installing an Ubuntu system multiple times. A typical example is testing, development and QA environments that rely on virtual machines. When installing a new Ubuntu system, just point the installer to the local Ubuntu mirror and you will save time and reduce your WAN/Internet traffic considerably.

In order to create and keep a local mirror of Ubuntu, you can use apt-mirror which is available in the universe repository. And, for the record, this post is heavily based on another one — Ubuntu – Set Up A Local Mirror.

Ubuntu, like many other Linux distributions, retrieves packages for installation over HTTP. Therefore, the first thing to do is to install Apache, if it is not already installed. And, at the same time, let’s install apt-mirror too:

$ sudo apt-get install apache2 apt-mirror

The next step consists of configuring apt-mirror. It reads its configuration from /etc/apt/mirror.list, whose syntax is very similar to that of /etc/apt/sources.list. By default, it mirrors packages for the architecture on which it is running, but you will likely want it to mirror packages for both x86_64 and i386 systems. Also, beware of the size of the local mirror: mirroring all the repositories can consume quite a lot of disk space on the local system (30GB or even more), so it is a good idea to mirror only those repositories that you need. Here is an example of my /etc/apt/mirror.list:

############# config ##################
#
# set base_path    /var/spool/apt-mirror
#
# set mirror_path  $base_path/mirror
# set skel_path    $base_path/skel
# set var_path     $base_path/var
# set cleanscript $var_path/clean.sh
# set defaultarch  
# set postmirror_script $var_path/postmirror.sh
# set run_postmirror 0
set nthreads     20
set _tilde 0
#
############# end config ##############

deb-amd64 http://archive.ubuntu.com/ubuntu trusty main restricted
deb-amd64 http://archive.ubuntu.com/ubuntu trusty-security main restricted
deb-amd64 http://archive.ubuntu.com/ubuntu trusty-updates main restricted
deb-i386 http://archive.ubuntu.com/ubuntu trusty main restricted
deb-i386 http://archive.ubuntu.com/ubuntu trusty-security main restricted
deb-i386 http://archive.ubuntu.com/ubuntu trusty-updates main restricted

clean http://archive.ubuntu.com/ubuntu

This configuration requests 20 download threads and mirrors only the main and restricted components (for trusty, trusty-security and trusty-updates) for x86_64 and i386 systems.

To initiate the mirror process, just run:

$ sudo apt-mirror

This will spawn worker threads that mirror the configured repositories into /var/spool/apt-mirror.

In order to serve this mirror via Apache, just create a symlink into the root Apache directory:

$ sudo ln -s /var/spool/apt-mirror/mirror/archive.ubuntu.com/ubuntu/ /var/www/html/ubuntu

It might also be a good idea to remove or rename /var/www/html/index.html so that one can browse the repository using a Web browser.

And finally, you can configure cron to run apt-mirror periodically, for example by adding the following line to your crontab:

@daily /usr/bin/apt-mirror
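
Client machines (or the Ubuntu installer) can then be pointed at the mirror with sources.list entries along these lines, where mirror.example.com is a placeholder for your mirror’s hostname:

deb http://mirror.example.com/ubuntu trusty main restricted
deb http://mirror.example.com/ubuntu trusty-security main restricted
deb http://mirror.example.com/ubuntu trusty-updates main restricted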

Self-signed certificates with OpenSSL

I’ve found that the easiest way to generate self-signed certificates in Debian derivatives, like Ubuntu, is by installing and using make-ssl-cert:

$ sudo apt-get install ssl-cert
$ make-ssl-cert /usr/share/ssl-cert/ssleay.cnf /path/to/cert-file.crt

This will invoke OpenSSL to generate an RSA key pair and a self-signed certificate. OpenSSL will ask for some information, like the Common Name for the certificate. When used to protect Web sites, the Common Name has to match the associated FQDN (fully-qualified domain name). For example, blog.felipe-alfaro.com.
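
If you prefer to call OpenSSL directly instead of using the Debian helper, a roughly equivalent invocation (file names and certificate lifetime are just examples) is:

$ openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=blog.felipe-alfaro.com" \
    -keyout /path/to/cert-file.key -out /path/to/cert-file.crt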

More information can be found by reading the README.Debian.gz file from the Apache2 documentation set:

$ zless /usr/share/doc/apache2/README.Debian.gz

Or online, by reading Apache and SSL, The Easy Way.

OpenVPN Server and OpenVPN Client on Android

The OpenVPN server configuration:

# cat /etc/openvpn/server.conf
port 1194
proto udp
dev tun

ca /etc/openvpn/open-rsa/keys/ca.crt
cert /etc/openvpn/open-rsa/keys/server.crt
key /etc/openvpn/open-rsa/keys/server.key
dh /etc/openvpn/open-rsa/keys/dh1024.pem

server 10.8.0.0 255.255.255.0
push "dhcp-option DNS 8.8.8.8"
push "dhcp-option DNS 8.8.4.4"

keepalive 10 60
comp-lzo
persist-key
persist-tun
status 1194.log
verb 3

client-config-dir ccd

The client-specific configuration, which specifies which subnets are accessible on the client:

# cat /etc/openvpn/ccd/android
iroute 10.42.242.0 255.255.255.0
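
Note that, for the server side to actually route traffic towards that client-side subnet, OpenVPN normally also needs a matching route statement in server.conf in addition to the iroute above; something like:

route 10.42.242.0 255.255.255.0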

Enable IP forwarding

# grep ip_forward /etc/sysctl.conf
net.ipv4.ip_forward=1
# sysctl -p

Enable NAT

# iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE
# service iptables save

Export the client certificate and private key using PKCS12 in order to then import them into the OpenVPN Client for Android:

# openssl pkcs12 -export \
    -in /etc/openvpn/open-rsa/keys/android.crt \
    -inkey /etc/openvpn/open-rsa/keys/android.key \
    -certfile /etc/openvpn/open-rsa/keys/ca.crt \
    -name android -out /tmp/android.p12

The resulting android.p12 file can be copied to the Android device and then imported into the OpenVPN Client for Android.
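
For reference, a minimal client profile matching the server settings above might look like the following (the remote address is a placeholder; the OpenVPN Client for Android builds an equivalent profile when you import the PKCS12 file and fill in the server details):

client
dev tun
proto udp
remote vpn.example.com 1194
pkcs12 android.p12
comp-lzo
nobind
persist-key
persist-tun
verb 3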

BTRFS and Ubuntu Lucid 10.04

Although Ubuntu Lucid 10.04 has native support for BTRFS, it does not handle auto-mounting a BTRFS volume during start-up very well. The problem seems to be that btrfsctl -a is not invoked during the boot process.

It takes some hacks to udev and initramfs to get this working. I found the solution in HowTO: Btrfs Root Installation:

This initramfs hook makes sure the btrfsctl binary gets copied into the RAM disk:

$ cat /usr/share/initramfs-tools/hooks/btrfs
#!/bin/sh -e
# initramfs hook for btrfs

if [ "$1" = "prereqs" ]; then
    exit 0
fi

. /usr/share/initramfs-tools/hook-functions

if [ -x "`which btrfsctl`" ]; then
    copy_exec "`which btrfsctl`" /sbin
fi

I believe the following is not strictly necessary unless you plan on having a BTRFS-based root filesystem:

$ cat /usr/share/initramfs-tools/modules.d/btrfs
# initramfs modules for btrfs
libcrc32c
crc32c
zlib_deflate
btrfs

This script loads the btrfs module while the system boots up and calls btrfsctl -a to scan and prepare the BTRFS volumes:

$ cat /usr/share/initramfs-tools/scripts/local-premount/btrfs
#!/bin/sh -e
# initramfs script for btrfs

if [ "$1" = "prereqs" ]; then
    exit 0
fi

modprobe btrfs

if [ -x /sbin/btrfsctl ]; then
    /sbin/btrfsctl -a 2>/dev/null
fi

Mark the scripts executable:

chmod +x /usr/share/initramfs-tools/scripts/local-premount/btrfs
chmod +x /usr/share/initramfs-tools/hooks/btrfs

Rebuild the initial RAM disks and GRUB environment:

update-initramfs -u -k all
update-grub
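
With the hook and script in place, the BTRFS volume should auto-mount from a regular /etc/fstab entry at boot; for example (the UUID and mount point below are illustrative):

UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /data  btrfs  defaults  0  0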

That should do it.