Tor with Brew on Mac OS X

To install the Tor service using Brew on Mac OS X:

$ brew install tor torsocks

However, this does not start the Tor service, nor does it set the service to load automatically at login. Since I don’t like things to be loaded automatically for me, I’ve created the following shell script to load or unload (start or stop) the Tor service manually on Mac OS X:

#!/bin/bash

function usage() {
  echo "usage: $0 start|stop";
  exit 1;
}

function tor_service() {
  launchctl $1 /usr/local/opt/tor/homebrew.mxcl.tor.plist
}

function start() {
  echo "$0: starting tor service...";
  tor_service load
}

function stop() {
  echo "$0: stopping tor service...";
  tor_service unload
}

function check() {
  echo "$0: checking if tor works...";
  if torsocks curl -s https://check.torproject.org | grep -q 'Congratulations. This browser is configured to use Tor.'; then
    echo 'The tor service works';
  else
    echo 'The tor service does NOT work';
  fi
}

case "$1" in
  help|--help|-h)
    usage;;

  start)
    start;;

  stop)
    stop;;

  check)
    check;;

  *)
    echo "error: missing or unrecognized command-line argument";
    usage;;
esac
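
Save the script as tor.sh and make it executable:

$ chmod +x tor.sh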

To start (load) the Tor service:

./tor.sh start

To stop (unload) the Tor service:

./tor.sh stop

To check whether the Tor service is working:

./tor.sh check

To tor-ify command-line tools like curl or wget:

torsocks wget https://check.torproject.org/
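
Incidentally, torsocks isn’t strictly required for tools with native SOCKS support. As a sketch, assuming Tor is listening on its default SOCKS port (9050), curl can talk to it directly:

$ curl --socks5-hostname 127.0.0.1:9050 https://check.torproject.org/

The --socks5-hostname variant makes curl resolve DNS names through the proxy as well, which avoids leaking DNS queries outside of Tor.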

High-availability in OpenStack Neutron (Icehouse)

If you ever want to deploy Neutron in OpenStack (Icehouse) in high-availability mode, where you have more than one network controller (node), you’ll have to take into account that most Neutron components will have to run in active-passive mode. Furthermore, virtual routers get associated with an L3 agent at creation time, and virtual networks with a DHCP agent. This association is established via the host name of the agent (L3 or DHCP). Unless explicitly configured, Neutron agents register themselves with a host name that matches the FQDN of the host where they are running.

An example: let’s imagine a scenario where we have two network nodes: nn1.example.com and nn2.example.com. By default, the L3 agent running on the host nn1.example.com will register itself with a host name of nn1.example.com. The same holds true for the DHCP agent. The L3 agent on host nn2.example.com is not running yet, but it’s configured in the same way as the other L3 agent. Hence, the L3 agent on host nn2.example.com will register itself with a host name of nn2.example.com.

Now, a user creates a virtual router and, at creation time, it gets associated with the L3 agent running on host nn1.example.com. At some point, host nn1.example.com fails. The L3 agent on host nn2.example.com will be brought up (for example, via Pacemaker). The problem is that the virtual router is still associated with an L3 agent named nn1.example.com, which is now unreachable. There is an L3 agent named nn2.example.com, but the router is not associated with it, so it won’t serve the router.

What’s the proper solution to fix this mess? To tell Neutron agents to register themselves with a fictitious, unique host name. Since there will only be one agent of the same type running at the same time (active-passive), it won’t cause any problems. How does one tell the Neutron agents in OpenStack (Icehouse) to use this fictitious name? Just add the following configuration option to /etc/neutron/neutron.conf inside the [DEFAULT] section:

[DEFAULT]
host = my-fictitious-host-name
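
Once the agents have been restarted with this setting, it’s worth verifying the registration. A quick check, assuming the neutron CLI is installed and your OpenStack credentials are sourced, is to list the agents and inspect the host column:

$ neutron agent-list

The L3 and DHCP agents should show up with my-fictitious-host-name as their host, no matter which physical network node is currently active.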

LG G3 (D855): How to remove custom carrier boot and shutdown animations

Tired of your carrier dropping custom boot and shutdown animations onto your phone, like Movistar does? Do you have an LG G3 D855? Is it rooted? Then it’s just a matter of removing the following files:

adb shell
su -
rm /data/shared/cust/bootanimation.zip
rm /data/shared/cust/shutdownanimation.zip

This will, hopefully, revert the phone to the stock boot and shutdown animations.

Installing python-glanceclient using Brew on Mac OS X 10.10

I was getting clang errors on ffi.h when trying to install python-glanceclient using pip:

$ pip install python-glanceclient
...
Installing collected packages: python-glanceclient, cryptography, jsonschema, jsonpatch, cffi, jsonpointer, pycparser
Running setup.py install for cryptography
Package libffi was not found in the pkg-config search path.
Perhaps you should add the directory containing `libffi.pc'
to the PKG_CONFIG_PATH environment variable
No package 'libffi' found
...
----------------------------------------
Cleaning up...
Command /usr/local/opt/python/bin/python2.7 -c "import setuptools, tokenize;__file__='/private/tmp/pip_build_brew/cryptography/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-jghJZG-record/install-record.txt --single-version-externally-managed --compile failed with error code 1 in /private/tmp/pip_build_brew/cryptography
Storing debug log for failure in /Users/brew/.pip/pip.log

A fix that seems to work is manually installing libffi and exporting PKG_CONFIG_PATH pointing to it:

$ brew install pkg-config libffi
$ export PKG_CONFIG_PATH=/usr/local/Cellar/libffi/3.0.13/lib/pkgconfig/
$ pip install cffi
$ pip install python-glanceclient
$ glance --version
0.14.1
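
Note that the Cellar path above is tied to libffi 3.0.13 and will break whenever the formula is upgraded. A slightly more robust variant, assuming a standard Homebrew layout, is to ask brew for the keg prefix instead of hard-coding the version:

$ export PKG_CONFIG_PATH="$(brew --prefix libffi)/lib/pkgconfig"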

Create a local mirror of Ubuntu packages using apt-mirror

Sometimes, having a local mirror of Ubuntu packages can be useful, as it can save tons of network bandwidth when installing an Ubuntu system multiple times. Examples of this are testing, development and QA environments that rely on virtual machines. When installing a new Ubuntu system, just point the installer to the local Ubuntu mirror and you’ll save time and reduce your WAN/Internet traffic considerably.

In order to create and keep a local mirror of Ubuntu, you can use apt-mirror, which is available in the universe repository. And, for the record, this post is heavily based on another one: Ubuntu – Set Up A Local Mirror.

Ubuntu, like many other Linux distributions, retrieves packages for installation over HTTP. Therefore, the first thing to do is to install Apache, if it’s not already installed. And, at the same time, let’s install apt-mirror too:

$ sudo apt-get install apache2 apt-mirror

The next step consists of configuring apt-mirror. It reads its configuration from /etc/apt/mirror.list, whose format is very similar to that of /etc/apt/sources.list. By default, it mirrors packages for the architecture on which it’s running, but you’ll likely want it to mirror packages for both x86_64 and i386 systems. Also, beware of the size of the local mirror: mirroring all the repositories can consume quite a lot of disk space in the local system (30GB or even more), so it’s a good idea to mirror only those repositories that you need. Here’s an example of my /etc/apt/mirror.list:

############# config ##################
#
# set base_path    /var/spool/apt-mirror
#
# set mirror_path  $base_path/mirror
# set skel_path    $base_path/skel
# set var_path     $base_path/var
# set cleanscript $var_path/clean.sh
# set defaultarch  
# set postmirror_script $var_path/postmirror.sh
# set run_postmirror 0
set nthreads     20
set _tilde 0
#
############# end config ##############

deb-amd64 http://archive.ubuntu.com/ubuntu trusty main restricted
deb-amd64 http://archive.ubuntu.com/ubuntu trusty-security main restricted
deb-amd64 http://archive.ubuntu.com/ubuntu trusty-updates main restricted
deb-i386 http://archive.ubuntu.com/ubuntu trusty main restricted
deb-i386 http://archive.ubuntu.com/ubuntu trusty-security main restricted
deb-i386 http://archive.ubuntu.com/ubuntu trusty-updates main restricted

clean http://archive.ubuntu.com/ubuntu

This configuration requests 20 download threads, and mirrors the main and restricted repositories for x86_64 and i386 systems exclusively.

To initiate the mirror process, just run:

$ sudo apt-mirror

This will spawn worker threads that will mirror the configured repositories into /var/spool/apt-mirror.
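
The clean directive in mirror.list also makes apt-mirror generate a cleanup script that removes obsolete files no longer present upstream. It is not run automatically; assuming the default base_path shown in the configuration above, you can run it by hand after a mirror pass:

$ sudo /var/spool/apt-mirror/var/clean.sh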

In order to serve this mirror via Apache, just create a symlink into the root Apache directory:

$ sudo ln -s /var/spool/apt-mirror/mirror/archive.ubuntu.com/ubuntu/ /var/www/html/ubuntu

It might also be a good idea to remove or rename /var/www/html/index.html so that one can browse the repository using a Web browser.
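
Clients can then be pointed at the mirror by replacing archive.ubuntu.com in their /etc/apt/sources.list. A sketch, where mirror.example.com stands in for your mirror’s host name (remember that only the mirrored components will be available):

deb http://mirror.example.com/ubuntu trusty main restricted
deb http://mirror.example.com/ubuntu trusty-security main restricted
deb http://mirror.example.com/ubuntu trusty-updates main restricted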

And finally, you can configure cron to run apt-mirror periodically. For example, by adding the following line to your crontab:

@daily /usr/bin/apt-mirror

How to configure MAAS to be able to boot KVM virtual machines

In order to allow MAAS to boot KVM virtual machines (via libvirt), these are the steps that one has to follow. They are intended for an Ubuntu system, but you can easily figure out how to make them work on Fedora or CentOS:

$ sudo apt-get install libvirt-bin

When adding nodes to MAAS that run as KVM virtual machines, the node configuration in MAAS will have to be updated to properly reflect the power type. In this case, the power type will be virsh. The virsh power type requires two fields: the “address” and the “power ID”. The “address” is just a libvirt URL. For example, qemu:///system for accessing libvirt on the local host, or qemu+ssh://root@hostname/system to access libvirt as root over SSH. The “power ID” field is just the virtual machine name or identifier.
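
For the record, these fields can also be set from the MAAS CLI instead of the Web UI. A sketch, assuming a logged-in CLI profile named admin and a node with system ID node-example (both placeholders; the exact field names may vary across MAAS releases):

$ maas admin node update node-example \
    power_type=virsh \
    power_parameters_power_address=qemu+ssh://root@hostname/system \
    power_parameters_power_id=vm-name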

In order to use SSH to access libvirt from MAAS, an SSH key pair will have to be generated for the maas user, and the public key uploaded to the host where the libvirt server is running:

$ sudo mkdir -p /home/maas
$ sudo chown maas:maas /home/maas
$ sudo chsh -s /bin/bash maas
$ sudo -u maas ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/maas/.ssh/id_rsa): 
Created directory '/home/maas/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/maas/.ssh/id_rsa.
Your public key has been saved in /home/maas/.ssh/id_rsa.pub.

Next, add the public key to /root/.ssh/authorized_keys on the host where the libvirt server is running, so that virsh can SSH into it as root without a password:

$ sudo -u maas ssh-copy-id root@hostname

Finally, as the maas user, test the connection:

$ sudo -u maas virsh -c qemu+ssh://root@hostname/system list --all