vsftpd: anonymous FTP uploads

To run an anonymous-only FTP server that accepts anonymous uploads, based on vsftpd on Ubuntu 9.10, I applied the following configuration changes:

--- /etc/vsftpd.conf.orig	2010-01-13 02:00:46.287216196 +0100
+++ /etc/vsftpd.conf	2010-01-13 01:59:34.787215294 +0100
@@ -26,16 +26,18 @@
 #local_enable=YES
 #
 # Uncomment this to enable any form of FTP write command.
-#write_enable=YES
+write_enable=YES
 #
 # Default umask for local users is 077. You may wish to change this to 022,
 # if your users expect that (022 is used by most other ftpd's)
 #local_umask=022
+anon_umask=0222
+file_open_mode=0666
 #
 # Uncomment this to allow the anonymous FTP user to upload files. This only
 # has an effect if the above global write enable is activated. Also, you will
 # obviously need to create a directory writable by the FTP user.
-#anon_upload_enable=YES
+anon_upload_enable=YES
 #
 # Uncomment this if you want the anonymous FTP user to be able to create
 # new directories.
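
With write_enable, anon_umask, file_open_mode and anon_upload_enable set, vsftpd still needs a directory the anonymous user can actually write to. A minimal sketch of the layout, using a local stand-in path (the real anonymous root on Ubuntu is typically /srv/ftp; adjust to your setup):

```shell
# vsftpd refuses anonymous uploads if the anonymous root itself is
# writable, so writes go into an "incoming" subdirectory instead.
FTP_ROOT="${FTP_ROOT:-./ftp-root}"   # stand-in for /srv/ftp
mkdir -p "$FTP_ROOT/incoming"
chmod 555 "$FTP_ROOT"                # anonymous root: not writable
chmod 777 "$FTP_ROOT/incoming"       # upload area: writable by the ftp user
```

On a real server you would rather chown the incoming directory to the ftp user than make it world-writable.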

Chromium and ERR_NAME_NOT_RESOLVED

While trying to use Chromium on a Ubuntu 64-bit machine, I discovered I wasn’t able to browse to any web page. I always got the following error message:

This webpage is not available.

The webpage at http://www.google.com/ might be temporarily down or it may have moved permanently to a new web address.

Here are some suggestions:
Reload this web page later.
More information on this error
Below is the original error message

Error 105 (net::ERR_NAME_NOT_RESOLVED): The server could not be found.

DNS name resolution was working properly, so it had to be something else. I searched for this error, and most of the results were about Chrome on Windows having problems with proxy or firewall configuration. But who cares about Windows? So, after digging a little bit more, I found the following issue on the official Google Code web site.

In the end, it was just a matter of:

$ sudo apt-get install lib32nss-mdns

Why Chromium has an explicit dependency on mDNS is something that still puzzles me.

libvirt and bridged networking

libvirt and virt-manager are a blessing. They bring powerful, free, open source management to Xen- and KVM-based virtualization environments.

I’ve been using both for quite a while. I’ve also always preferred bridged networking for my virtual machines over NAT. While NAT is non-disruptive and allows for isolation, I typically want easy access to services provided by my virtual machines, like SSH or NFSv4. It turns out that setting up bridged networking in libvirt is very easy, as long as the bridge interface is detected by libvirt.

The simplest solution consists of creating a bridge interface that enslaves all the physical network interfaces used to connect to the LAN or the Internet. For example, in Ubuntu, in order to enslave eth0 to a br0 bridge interface while using DHCP for IPv4 address configuration, /etc/network/interfaces needs to look like this:

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet manual

# The bridge
auto br0
iface br0 inet dhcp
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
        bridge_maxwait 0

Next time you create a virtual machine, it will be possible to use bridged networking in addition to NAT-based networking. There is one caveat, at least in Ubuntu: libvirt and virt-manager connect by default to qemu:///session instead of qemu:///system. This is neither good nor bad by itself: qemu:///session allows a non-privileged user to create and use virtual machines, with the virtual network interfaces used by those machines created and destroyed within the context of the user running virt-manager. Due to the lack of root privileges, however, such virtual machines are limited to QEMU’s usermode networking support.

In order to use advanced networking features like bridged networking, make sure you connect to qemu:///system instead. That is typically achieved by running virt-manager as root (which is not necessarily nice). I tried playing with udev, device ownership and permission masks, but it all boils down to the inability of a non-privileged user to use brctl to enslave network interfaces to a bridge.
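
Once connected to qemu:///system, pointing a guest at the bridge is a small change in its domain XML. A sketch of the relevant fragment (br0 matches the bridge defined above; the MAC address is a made-up example):

```xml
<interface type='bridge'>
  <mac address='52:54:00:12:34:56'/>
  <source bridge='br0'/>
</interface>
```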

Comments on building a new RAID5 array

I’ve rescued the following e-mail from Neil Brown about building a new RAID5 array in Linux, and why one of the disks is marked as a spare while the array is being constructed:

When creating a new raid5 array, we need to make sure the parity
blocks are all correct (obviously). There are several ways to do
this.

  1. Write zeros to all drives. This would make the array unusable until the clearing is complete, so isn’t a good option.
  2. Read all the data blocks, compute the parity block, and then write out the parity block. This works, but is not optimal. Remembering that the parity block is on a different drive for each ‘stripe’, think about what the read/write heads are doing. The heads on the ‘reading’ drives will be somewhere ahead of the heads on the ‘writing’ drive. Every time we step to a new stripe and change which is the ‘writing’ head, the other reading heads have to wait for the head that has just changed from ‘writing’ to ‘reading’ to catch up (finish writing, then start reading). Waiting slows things down, so this is uniformly sub-optimal.
  3. Read all data blocks and parity blocks, check the parity block to see if it is correct, and only write out a new block if it wasn’t. This works quite well if most of the parity blocks are correct as all heads are reading in parallel and are pretty-much synchronised. This is how the raid5 ‘resync’ process in md works. It happens after an unclean shutdown if the array was active at crash-time. However if most or even many of the parity blocks are wrong, this process will be quite slow as the parity-block drive will have to read-a-bunch, step-back, write-a-bunch. So it isn’t good for initially setting the parity.
  4. Assume that the parity blocks are all correct, but that one drive is missing (i.e. the array is degraded). This is repaired by reconstructing what should have been on the missing drive, onto a spare. This involves reading all the ‘good’ drives in parallel, calculating the missing block (whether data or parity) and writing it to the ‘spare’ drive. The ‘spare’ will be written to a few (10s or 100s of) blocks behind the blocks being read off the ‘good’ drives, but each drive will run completely sequentially and so at top speed.

On a new array where most of the parity blocks are probably bad, ‘4’
is clearly the best option. ‘mdadm’ makes sure this happens by creating a raid5 array not with N good drives, but with N-1 good drives and one spare. Reconstruction then happens and you should see exactly what was reported: reads from all but the last drive, writes to that last drive.
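
The reason option 4 works at all is that RAID5 parity is a plain XOR: any single missing block, data or parity, can be rebuilt by XOR-ing the surviving blocks of the stripe. A toy sketch over single bytes (the values are arbitrary examples):

```shell
# Toy RAID5 parity demo: parity = d0 XOR d1, and a lost block is
# recovered by XOR-ing everything that survives.
d0=170 d1=85                     # binary 10101010 and 01010101
parity=$(( d0 ^ d1 ))            # 255: every bit set
recovered_d0=$(( parity ^ d1 ))  # XOR d1 back out to recover d0
echo "parity=$parity recovered_d0=$recovered_d0"
```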

LVM snapshots and non-destructive Linux upgrades

This post roughly describes what I do when I want to non-destructively upgrade my Linux system. By non-destructive I mean a procedure that allows me to upgrade, but also to roll back if something goes wrong. As an example, I wanted to upgrade my Ubuntu system from Jaunty to Karmic. Since Karmic is still at Alpha 1, the chances of the upgrade going bad or eating my data were high. This is where LVM and LVM snapshots come into play.

Basically, the idea consists of taking a snapshot of the root filesystem using an LVM snapshot, then rebooting the system to use the filesystem from the LVM snapshot as the root filesystem, performing an in-place upgrade from Jaunty to Karmic, and seeing what happens. Since the upgrade takes place on a system whose root filesystem is mounted from the LVM snapshot, the original root volume is kept intact. Hence, if something goes wrong, I can always reboot using the original root filesystem, and the system will behave as if no modifications were made during the upgrade.

The LVM snapshot volume should be big enough to store, in the worst case, a completely new Linux installation. The average Ubuntu installation requires less than 8GB of disk, so the LVM snapshot should be about that size, plus some slack for downloading packages. In my case, since I have enough free disk space, I chose 16GB just to be on the safe side.

Resize the root filesystem and root volume

This step is only required if there is no free space in the volume group to accommodate the snapshot volume. In my case, the volume group is full, so I need to shrink the root filesystem and the root volume. resize2fs does not currently support shrinking a filesystem online, so I booted from the Jaunty LiveCD and entered rescue mode. From there:

# e2fsck -f /dev/root/root
# resize2fs /dev/root/root 80G
# lvresize -L 80G /dev/root/root

Create the LVM snapshot

To create a 16GB LVM snapshot volume of my root volume:

# lvcreate -s -n karmic-root -L 16G /dev/root/root

Prepare the boot environment

Mount the filesystem from the LVM snapshot volume and modify /etc/fstab to replace the device name of the original root filesystem with the device name where the snapshotted root filesystem lives:

# mount /dev/root/karmic-root /mnt
# vi /mnt/etc/fstab

In my case, the line for the new root filesystem looks like:

/dev/root/karmic-root /  ext4 defaults 0 1

Reboot into the snapshot

Trigger the grub menu and modify the kernel entry that corresponds to the Ubuntu system so that it uses the device name of the LVM snapshot. This is how the new kernel entry in the menu looks:

kernel /vmlinuz-2.6.28-11-generic root=/dev/root/karmic-root ro

Then press b to boot the system. The system should boot normally, but instead of using the original root filesystem it should be using the filesystem from the LVM snapshot:

$ grep /dev/mapper /proc/mounts
/dev/mapper/root-karmic--root / ext4 rw,relatime,errors=remount-ro,barrier=1,data=ordered 0 0

In-place upgrade

I won’t describe how to do an in-place upgrade of Ubuntu; there are many resources out there that already do. The point here is that the upgrade modifies the snapshot while the original root filesystem is kept intact.

Destroy the snapshot

If something goes wrong with the upgrade, and it usually does when upgrading to an Alpha version, bringing the system back to a usable state is just a matter of rebooting and picking the right entry in grub. In-place upgrades of Ubuntu will typically add a new kernel to the list of grub entries but won’t modify the existing ones.
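
The original grub entry, which still points at the untouched root volume, is the same as the snapshot entry shown earlier with only the root= device changed:

```
kernel /vmlinuz-2.6.28-11-generic root=/dev/root/root ro
```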

After rebooting into the original system, the snapshot can be removed:

# lvremove /dev/root/karmic-root

If you don’t intend to experiment with more upgrades, you may want to grow the root LVM volume, and then the root filesystem, back to their original sizes (the reverse of the order used when shrinking).

Scrolling with the Thinkpad's TrackPoint in Ubuntu 8.10 Intrepid

Recently, I upgraded to Ubuntu 8.10 Intrepid Ibex and found that my ThinkPad’s TrackPoint scrolling stopped working. Searching the Internet, I found a post called Scrolling with the Thinkpad’s TrackPoint in Ubuntu 8.10 Intrepid by Phil Sung that explains, in a clear and concise way, how to enable scrolling with the ThinkPad TrackPoint in Ubuntu 8.10.

I’m quoting what Phil says in his post:

Ubuntu Intrepid (8.10) switches to evdev for X server input, which has the unfortunate side effect of breaking old EmulateWheel configurations. So scrolling using the middle button + TrackPoint (which I absolutely love) was broken for a while. However, the version of evdev in Intrepid has now caught up and supports these features. Instead of modifying your xorg.conf, create a new file called /etc/hal/fdi/policy/mouse-wheel.fdi with the following contents:

<match key="info.product" string="TPPS/2 IBM TrackPoint">
 <merge key="input.x11_options.EmulateWheel" type="string">true</merge>
 <merge key="input.x11_options.EmulateWheelButton" type="string">2</merge>
 <merge key="input.x11_options.XAxisMapping" type="string">6 7</merge>
 <merge key="input.x11_options.YAxisMapping" type="string">4 5</merge>
 <merge key="input.x11_options.ZAxisMapping" type="string">4 5</merge>
 <merge key="input.x11_options.Emulate3Buttons" type="string">true</merge>
</match>

Thanks, Phil!

Linux, mplayer and ipcrm

For the past few days, after a few hours of uptime, mplayer refused to play videos: it was hanging while trying to open the ALSA audio output. I could verify this because passing -ao none as a command-line argument to mplayer fixed the problem (as in being able to play a video, just with no sound).

Tired of this, I decided to strace mplayer. I could see it was hanging on the semop() system call for a semaphore with an ID of 32768. Looking at /proc/sysvipc/sem I could see that semaphore ID 32768 existed even when mplayer was not running, and that this IPC resource was created by a process running as me.

I used ipcrm -s 32768 to remove this IPC resource, and that fixed the problem: I could play audio and video again. I haven’t been able to determine whether mplayer has a bug that prevents it from freeing its IPC semaphores or whether this is a bug in the ALSA library, though.
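
To spot stale semaphores like this one, it is enough to list the System V semaphore arrays; the semid column is what ipcrm -s expects:

```shell
# List System V semaphore arrays visible to this user; entries left behind
# by dead processes keep their semid until removed with ipcrm -s <semid>.
ipcs -s
```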

Configuring a diskless Ubuntu

This post is not about doing a PXE-based network installation of Ubuntu; there are already many posts describing how to do that. This post is about setting up an Ubuntu workstation in diskless mode, such that the workstation boots via PXE and the root filesystem is mounted over NFS.

The process consists of the following main steps:

  1. Setting up the DHCP server
  2. Setting up the TFTP server and configuring PXE boot
  3. Setting up the NFS server
  4. Bootstrapping a Ubuntu installation into the client’s root filesystem
  5. Booting up the diskless workstation

1. Setting up the DHCP server

PXE-enabled workstations need to receive an additional option during the DHCP negotiation that points them to a TFTP server from which the PXE boot loader can be downloaded. Configuring dnsmasq to hand this option to clients is as easy as adding the following line to /etc/dnsmasq.conf:

dhcp-boot=pxelinux.0,tftp.lan,10.42.242.13

and restarting the dnsmasq service.

2. Setting up the TFTP server and configuring PXE boot

Now that DHCP has been configured, the next step is setting up the TFTP server. The TFTP server will be used by PXE-compatible clients to download the PXE boot loader code, and also the Linux kernel and Linux initial RAM disk.

I will use H. Peter Anvin’s TFTP server, as it’s widely used and works fairly well:

root@tftp.lan:# apt-get install tftpd-hpa

tftpd-hpa does not integrate automatically with xinetd, so if you want to run the TFTP server under xinetd, you will have to create the following file, ported from the inetd entry that is automatically added to /etc/inetd.conf when tftpd-hpa is installed:

service tftp
{
        disable         = no
        id              = tftp-dgram
        socket_type     = dgram
        protocol        = udp
        user            = root
        wait            = yes
        server          = /usr/sbin/in.tftpd
        server_args     = -s /var/lib/tftpboot/
}

and restarting the xinetd service.

Configuring PXE boot is just a matter of copying the PXE boot loader code, a configuration file, the Linux kernel and initial RAM disks under the TFTP root. First, install syslinux and copy the PXE boot loader code to the TFTP server root:

root@tftp.lan:# apt-get install syslinux
root@tftp.lan:# cp /usr/lib/syslinux/pxelinux.0 /var/lib/tftpboot/
root@tftp.lan:# mkdir /var/lib/tftpboot/pxelinux.cfg

The PXE boot loader, pxelinux (part of syslinux), expects a configuration file describing the kernel to boot, its parameters, and the initial RAM disk to live in a directory named pxelinux.cfg just under the TFTP server root.

Client-specific configuration files can be created (based on the client’s MAC address, for example), but here we will create a configuration file suitable for any client. This file is called default and can be created by running the following commands:

root@tftp.lan:# KERNEL_VERSION=2.6.22-14-generic
root@tftp.lan:# NFS_IPADDR=$(host nfs.lan | cut -d' ' -f4)
root@tftp.lan:# cat > /var/lib/tftpboot/pxelinux.cfg/default << EOF
> LABEL linux
> KERNEL vmlinuz-${KERNEL_VERSION}
> APPEND root=/dev/nfs initrd=initrd.img-${KERNEL_VERSION} nfsroot=${NFS_IPADDR}:/home/nfsroot ip=dhcp rw
> EOF
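
For the client-specific case, pxelinux looks for a file named after the ARP hardware type (01 for Ethernet) followed by the client’s MAC address, lowercase and dash-separated, before falling back to default. A sketch using a made-up MAC address and a local stand-in for the TFTP root:

```shell
TFTP_ROOT="${TFTP_ROOT:-./tftpboot}"   # stand-in for /var/lib/tftpboot
MAC="00-11-22-33-44-55"                # example MAC address, dash-separated
mkdir -p "$TFTP_ROOT/pxelinux.cfg"
# A per-client file overrides pxelinux.cfg/default for this MAC only.
printf 'LABEL linux\nKERNEL vmlinuz-test\n' > "$TFTP_ROOT/pxelinux.cfg/01-$MAC"
```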

3. Setting up the NFS server

In this step, we will configure the NFS server and export the directory where the client’s root filesystem will be stored.

Let’s start by installing the NFS server packages:

root@nfs.lan:# apt-get install nfs-kernel-server nfs-common

I will use /home/nfsroot as the root for the client’s root filesystem

root@nfs.lan:# mkdir /home/nfsroot

Next, add the following line to /etc/exports in order to export the client’s root filesystem:

/home/nfsroot *(rw,no_subtree_check,async,no_root_squash)

Then re-export all the filesystems:

root@nfs.lan:# exportfs -avr

4. Bootstrapping a Ubuntu installation into the client’s root filesystem

The following steps will bootstrap the installation of a minimal Ubuntu Hardy Heron GNU/Linux system into the client’s root:

root@nfs.lan:# debootstrap --arch i386 hardy \
  /home/nfsroot http://ch.archive.ubuntu.com/ubuntu/

Only the minimum required packages will be downloaded from the Internet and installed into /home/nfsroot. The output of the previous command should look like this:

I: Retrieving Release
I: Retrieving Packages
I: Validating Packages
I: Resolving dependencies of required packages...
I: Resolving dependencies of base packages...
I: Found additional required dependencies: libdb4.6
I: Checking component main on http://ch.archive.ubuntu.com/ubuntu...
I: Retrieving adduser
I: Validating adduser
...
I: Base system installed successfully.

Once the system has been bootstrapped, we need to populate fstab. At least /proc and / must get mounted, but other filesystems might be referenced in this file too, like additional NFS exports, swap files, and so on. The difference with respect to a traditional, disk-based Ubuntu installation is that the root filesystem gets mounted via NFS by the initial RAM disk, and is referenced by the kernel’s /dev/nfs block device.

The contents of /home/nfsroot/etc/fstab should look like this:

# /etc/fstab: static file system information.
#
# <file system> <mount point> <type> <options> <dump> <pass>
proc            /proc         proc   defaults       0      0
/dev/nfs        /             nfs    defaults       0      0

Another difference from a traditional Ubuntu system is that we don’t want the network interfaces to be managed by Network Manager. In fact, the main (probably Ethernet) network interface was already configured by the kernel based on the configuration supplied via PXE. Letting Network Manager reconfigure or manage this interface might mean losing the connection to the NFS server and thus rendering the system unusable.

To stop Network Manager from managing the main network interface, the contents of /home/nfsroot/etc/network/interfaces must look like this:

# Used by ifup(8) and ifdown(8). See the interfaces(5) manpage or
# /usr/share/doc/ifupdown/examples for more information.

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface, commented out for NFS root
iface eth0 inet manual

Finally, we will give the client a generic name. This can be done by storing that generic name into /etc/hostname:

root@nfs.lan:# echo client.lan > /home/nfsroot/etc/hostname

5. Booting up the diskless workstation

Once everything is set up, the last thing to do is to test that the diskless workstation is able to boot via PXE. The mechanics of booting via PXE depend on the machine. On some machines, it’s just a matter of pressing ESC to get into a menu that allows you to override the boot device. On others, the same can be done by pressing F12 during the POST. On most modern systems it’s possible to configure the firmware to always boot from the network.

If the client is able to boot from PXE successfully, this is what you will briefly see on your screen:

CLIENT MAC ADDR: 00 11 22 33 44 55  GUID: 00000000 0000 0000 0000 000000000000
CLIENT IP: 192.168.0.2  MASK: 255.255.255.0  DHCP IP: 192.168.0.1
GATEWAY IP: 192.168.0.1

PXELINUX 3.36 Debian-2007-08-30  Copyright (C) 1994-2007 H. Peter Anvin
UNDI data segment at:   00094140
UNDI data segment size: 94B0
UNDI code segment at:   0009D5F0
UNDI code segment size: 20B0
PXE entry point found (we hope) at 9D5F:0106
My IP address seems to be C0A80001 192.168.0.2
ip=192.168.0.2:192.168.0.1:192.168.0.1:255.255.255.0
TFTP prefix:
Trying to load: pxelinux.cfg/00-01-02-03-04-05
Trying to load: pxelinux.cfg/C0A80001
Trying to load: pxelinux.cfg/C0A8000
Trying to load: pxelinux.cfg/C0A800
Trying to load: pxelinux.cfg/C0A80
Trying to load: pxelinux.cfg/C0A8
Trying to load: pxelinux.cfg/C0A
Trying to load: pxelinux.cfg/C0
Trying to load: pxelinux.cfg/C
Trying to load: pxelinux.cfg/default
boot:
Loading vmlinuz...

If the system boots up, we will be dropped into a text-mode console where we can log in as root with no password. Of course, the first thing you should do is set a password for the root user, but if you have been able to get to this point, you probably know how to do that 🙂

Once you are logged in to your new diskless workstation, you will probably want to add more functionality, for example the OpenSSH server or the packages for the typical Ubuntu desktop system. Before doing that, we need to add more Ubuntu repositories to /etc/apt/sources.list. This is how mine looks:

deb http://ch.archive.ubuntu.com/ubuntu/ hardy main restricted
deb-src http://ch.archive.ubuntu.com/ubuntu/ hardy main restricted
deb http://ch.archive.ubuntu.com/ubuntu/ hardy-updates main restricted
deb-src http://ch.archive.ubuntu.com/ubuntu/ hardy-updates main restricted
deb http://ch.archive.ubuntu.com/ubuntu/ hardy universe
deb-src http://ch.archive.ubuntu.com/ubuntu/ hardy universe
deb http://ch.archive.ubuntu.com/ubuntu/ hardy multiverse
deb-src http://ch.archive.ubuntu.com/ubuntu/ hardy multiverse
deb http://ch.archive.ubuntu.com/ubuntu/ hardy-backports main restricted universe multiverse
deb-src http://ch.archive.ubuntu.com/ubuntu/ hardy-backports main restricted universe multiverse
deb http://security.ubuntu.com/ubuntu hardy-security main restricted
deb-src http://security.ubuntu.com/ubuntu hardy-security main restricted
deb http://security.ubuntu.com/ubuntu hardy-security universe
deb-src http://security.ubuntu.com/ubuntu hardy-security universe
deb http://security.ubuntu.com/ubuntu hardy-security multiverse
deb-src http://security.ubuntu.com/ubuntu hardy-security multiverse

Finally, I decided to install OpenSSH, OpenNTPD and the typical Ubuntu desktop:

root@client.lan:# apt-get update
root@client.lan:# apt-get install openssh-server openntpd ubuntu-desktop

Installing FreeNX 0.7.1 on Ubuntu

Introduction

DISCLAIMER: The contents of this post are mostly based on the Manual Installation How-To. Thanks to Brent Davidson for writing such a nice HowTo, and to Fabian Franz for the beautiful open and free implementation of FreeNX.

I decided to use FreeNX instead of NoMachine’s own implementation due to the instability of the latter. Most of the time, I could not reconnect to my running sessions, or else NX decided to kill my running session and start a new one. FreeNX is a collection of shell scripts, which makes it easier to debug and troubleshoot problems.

The process described in this post starts with NoMachine’s binary components, downloadable from the Web, and then overwrites or replaces the key binary-only, closed components with FreeNX’s open and free shell scripts, which provide much more flexibility. In my own experience, FreeNX is more robust, stable and predictable, and easier to customize and debug, than NoMachine’s closed binary components.

Installing the base NoMachine’s NX binary components

Download nxclient, nxnode and nxserver from NoMachine as .tar.gz files. The files can be found on the NoMachine downloads page, or can be downloaded directly from this site, if you trust me:

For IA-32 systems:

For x86_64 systems:

Extract the files to /usr. Since the .tar.gz packages always contain relative pathnames that start with ./NX, this will create a whole directory tree under /usr/NX.

# tar -C /usr -xzf nxserver-3.0.0-79.i386.tar.gz
# tar -C /usr -xzf nxclient-3.0.0-84.i386.tar.gz
# tar -C /usr -xzf nxnode-3.0.0-93.i386.tar.gz

Compiling the NX compression libraries

Compiling nxcomp

Download the source code for nxcomp from NoMachine’s source code or here from
nxcomp-3.0.0-48.tar.gz.

The ./configure script is not very robust and doesn’t check for missing dependencies. These are the packages needed to compile nxcomp:

# apt-get install zlib1g-dev libX11-dev libjpeg-dev libpng12-dev \
    x11proto-xext-dev libxdamage-dev libxrandr-dev libxtst-dev \
    libaudiofile-dev

Configuring and building the library, then copying it to its final location, is as easy as running:

# tar -xzf nxcomp-3.0.0-48.tar.gz
# cd nxcomp
# ./configure --prefix=/usr/NX
# make
# cp -P libXcomp.so* /usr/NX/lib

Compiling nxcompext

Download the source code for nxcompext and nx-X11 from NoMachine’s source code or here from
nxcompext-3.0.0-18.tar.gz and nx-X11-3.0.0-37.tar.gz, and extract them:

# tar -xzf nxcompext-3.0.0-18.tar.gz
# tar -xzf nx-X11-3.0.0-37.tar.gz

Before compiling nxcompext, apply the NXlib-xgetioerror.patch.

# cd nxcompext
# patch -p0 < NXlib-xgetioerror.patch

This is required, or else the resulting libXcompext.so shared library will complain about the _XGetIOError symbol being undefined. In order to troubleshoot this, I had to enable logging in /usr/NX/etc/node.conf:

NX_LOG_LEVEL=7
SESSION_LOG_CLEAN=0
NX_LOG_SECURE=0

Then, looking at /var/log/nxserver.log I found the following error message:

Info: Established X client connection.
Info: Using shared memory parameters 1/1/1/4096K.
Info: Using alpha channel in render extension.
Info: Not using local device configuration changes.
/usr/NX/bin/nxagent: symbol lookup error: /usr/NX/lib/libXcompext.so.3:
undefined symbol: _XGetIOError
NX> 1006 Session status: closed

With the patch applied, the library builds and installs cleanly:

# ./configure --x-includes="/usr/include/xorg -I/usr/include/X11" --prefix=/usr/NX
# make
# cp -P libXcompext.so* /usr/NX/lib

Compiling nxcompshad

Download the source code for nxcompshad from NoMachine’s source code or here from
nxcompshad-3.0.0-19.tar.gz.

# tar -xzf nxcompshad-3.0.0-19.tar.gz
# cd nxcompshad
# ./configure --prefix=/usr/NX
# make
# cp -P libXcompshad.so* /usr/NX/lib

Compiling nxesd

Download the source code for nxesd from NoMachine’s source code or here from
nxesd-3.0.0-4.tar.gz.

# tar -xzf nxesd-3.0.0-4.tar.gz
# cd nxesd
# ./configure --prefix=/usr/NX
# make
# make install

Installing FreeNX

Download FreeNX from the FreeNX downloads page, or from this Web site at freenx-0.7.1.tar.gz, extract it, and apply the gentoo-nomachine.diff patch:

# tar -xzf freenx-X.Y.Z.tar.gz
# cd freenx-X.Y.Z
# patch -p0 < gentoo-nomachine.diff

The gentoo-nomachine.diff patch must be applied if you are using the /usr/NX directory structure that the NoMachine binaries use.

Next, we replace the original NoMachine key binaries (in fact, they are compiled Perl scripts) with the FreeNX shell scripts:

# cp -f nxkeygen /usr/NX/bin/
# cp -f nxloadconfig /usr/NX/bin/
# cp -f nxnode /usr/NX/bin/
# cp -f nxnode-login /usr/NX/bin/
# cp -f nxserver /usr/NX/bin/
# cp -f nxsetup /usr/NX/bin/
# cp -f nxcups-gethost /usr/NX/bin/

Next, we need to compile the nxserver-helper binary, which is used by the slave mode of nxnode. Basically, nxserver-helper runs a command with /dev/fd/3 and /dev/fd/4 mapped to the two ends of a UNIX socket.

# cd nxserver-helper
# make
# cp -f nxserver-helper /usr/NX/bin/

Before setting up FreeNX, install expect, the OpenSSH server, and smbmount/smbumount:

$ sudo apt-get install expect smbfs openssh-server

The next step creates symbolic links in /usr/bin to all FreeNX scripts that live in /usr/NX/bin and additional symbolic links for NX compatibility:

# ln -s /usr/NX/bin/nxserver /usr/bin/nxserver
# ln -s /usr/NX/bin/nxsetup /usr/sbin/nxsetup
# ln -s /usr/NX/bin/nxloadconfig /usr/sbin/nxloadconfig
# ln -s /usr/NX/lib/libXrender.so.1.2.2 /usr/NX/lib/libXrender.so.1.2
# ln -s /usr/NX/bin/nxagent /usr/NX/bin/nxdesktop
# ln -s /usr/NX/bin/nxagent /usr/NX/bin/nxviewer
# ln -s /usr/bin/foomatic-ppdfile /usr/lib/cups/driver/foomatic-ppdfile
# ln -s /etc/X11/xinit /etc/X11/xdm
# ln -s /sbin/mount.cifs /sbin/smbmount
# ln -s /sbin/umount.cifs /sbin/smbumount

The final step consists of running the installation stage of FreeNX:

# nxsetup --install

This will create the /usr/NX/var directory tree, create the nx user, and install the appropriate SSH keys (either NoMachine’s keys or custom keys).

Before using FreeNX, create the node.conf configuration file, which allows changing the behavior of FreeNX: logging, the path names of the scripts used to start GNOME or KDE, and so on:

# cd freenx-X.Y.Z
# cp node.conf.sample /usr/NX/etc/node.conf

Future development and ideas

  • Not having to depend on any single binary file from NoMachine.

    The idea is compiling all the components from source code, instead of starting with a binary distribution and replacing key components with their open and free counterparts.

  • Customizing FreeNX so that I can bypass NoMachine’s nxclient completely.

    Most of my network components are Kerberized and having to keep supplying my password to nxclient seems like a thing of the past to me. The idea is customizing FreeNX in such a way that I can leverage Kerberos authentication and drop password-based authentication completely.

Persistent storage device names in Linux

Recently I purchased two external USB hard drives, one from LACIE and another from IOMEGA, and plugged them into one of my Linux workstations. I wanted these drives to be auto-mounted during boot, so I created a couple of entries in /etc/fstab. The problem is that the Linux kernel does not always enumerate devices on a bus in the same order between reboots. Thus, sometimes the LACIE hard disk gets /dev/sdb and the IOMEGA gets /dev/sdc, and other times it is the other way around. In fact, it gets even worse, as my machine has a built-in 6-in-1 card reader, so I also have /dev/sd{d,e,f,g}.

Looking at /etc/udev/rules.d, I found that Ubuntu has a nice 65-persistent-disk.rules file with very smart and useful udev entries that create symlinks under /dev/disk by disk ID, disk label, device path and device UUID:

  • /dev/disk/by-id contains one symlink per physical device and partition. The symlink name is built using the device’s ID.
  • /dev/disk/by-label contains one symlink per partition. The symlink name is built using the partition volume label.
  • /dev/disk/by-path contains one symlink per physical device and partition. The symlink name is built using the device bus path.
  • /dev/disk/by-uuid contains one symlink per partition. The symlink name is built using the partition UUID.

For example:

$ tree /dev/disk
/dev/disk
|-- by-id
| |-- scsi-1ATA_WDC_WD1600JS-75NCB2_WD-WCANM44 -> ../../sda
| |-- scsi-1ATA_WDC_WD1600JS-75NCB2_WD-WCANM44-part1 -> ../../sda1
| |-- scsi-1ATA_WDC_WD1600JS-75NCB2_WD-WCANM44-part2 -> ../../sda2
| |-- scsi-1ATA_WDC_WD1600JS-75NCB2_WD-WCANM44-part5 -> ../../sda5
| |-- scsi-1ATA_WDC_WD1600JS-75NCB2_WD-WCANM44-part6 -> ../../sda6
| |-- usb-Generic_Flash_HS-CF_26020128B005 -> ../../sdd
| `-- usb-Generic_Flash_HS-COMBO_26020128B005 -> ../../sde
|-- by-label
| `-- DellUtility -> ../../sda1
|-- by-path
| |-- pci-0000:00:1f.1-scsi-0:0:0:0 -> ../../scd0
| |-- pci-0000:00:1f.2-scsi-0:0:0:0 -> ../../sda
| |-- pci-0000:00:1f.2-scsi-0:0:0:0-part1 -> ../../sda1
| |-- pci-0000:00:1f.2-scsi-0:0:0:0-part2 -> ../../sda2
| |-- pci-0000:00:1f.2-scsi-0:0:0:0-part5 -> ../../sda5
| |-- pci-0000:00:1f.2-scsi-0:0:0:0-part6 -> ../../sda6
| |-- usb-26020128B005:0:0:0 -> ../../sdd
| `-- usb-26020128B005:0:0:1 -> ../../sde
`-- by-uuid
|-- 07D6-0701 -> ../../sda1
|-- 29982b9e-63a4-4d3f-8b88-ebc9d000c09f -> ../../sda5
`-- 6aa8745b-dbe2-4386-9697-cc0c2dee27d4 -> ../../sda6

Basically, what I did was replace the entries I had previously created in /etc/fstab, which used absolute device names like /dev/sdb1, with /dev/disk/by-label/<VOLUME> entries instead.
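
For example, one of the external drives ends up with an /etc/fstab entry like this (LACIE is a hypothetical volume label; the mount point and filesystem type are made-up examples):

```
/dev/disk/by-label/LACIE  /media/lacie  ext3  defaults  0  2
```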

Name stability and predictability across reboots is quite a nice feature in Ubuntu Linux, and it can be easily ported to other Linux distributions simply by copying /etc/udev/rules.d/65-persistent-disk.rules.