Status bar in “screen”

In order to have screen behave a little bit more like byobu, edit or create (if not present) /etc/screenrc or ~/.screenrc and add the following lines:

autodetach on 
startup_message off 
hardstatus alwayslastline 
shelltitle 'bash'
hardstatus string '%{gk}[%{wk}%?%-Lw%?%{=b kR}(%{W}%n*%f %t%?(%u)%?%{=b kR})%{= w}%?%+Lw%?%? %{g}][%{d}%l%{g}][ %{= w}%Y/%m/%d %0C:%s%a%{g} ]%{W}'
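The new status line shows up in screen sessions started after the change. An already-running session can re-read the file from screen's command prompt (C-a :) using the built-in source command:

source ~/.screenrc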

OpenSolaris b134 text-based installation

Beginning with OpenSolaris b134 it is now possible to do a text-based installation. Some of the main differences between the new text-based installation and the LiveCD-based installation are:

  • The text-based installation supports serial console.
  • The text-based installation performs a minimal installation. X11, GNOME and many other packages are not installed by default, but they can be added later by following the instructions from the Text Installer b134-based project gate images.
  • The text-based installation offers more installation options, such as installing into a whole disk, a partition, or a slice within a Solaris2 partition. Again, this can be done by following the instructions from the Text Installer b134-based project gate images.

Being able to install OpenSolaris into a slice within a partition can be useful. For example, my OpenSolaris box has one 2TB disk and three 1.5TB disks. I want one ZFS pool to span four 1.5TB slices (one per drive) and the ZFS root pool to live on the remaining space of the 2TB disk. With the LiveCD this requires a lot of hacking, as described in HP Proliant DL180 G6 and ZFS (part V). With the text-based installation it is straightforward and does not require booting a Linux distribution to modify the partition table or an OpenSolaris LiveCD to adjust the slices.

Solaris, HP SmartArray and data corruption on shutdown

For quite some time I had been experiencing power-off problems on an HP Proliant server running Solaris (or OpenSolaris). Most of the time, the poweroff or init 5 commands would not cut the power and the machine would hang with the fans spinning at full speed. The only solution was to manually power cycle the machine or use the LOM to shut it down. This has caused data corruption for me several times, especially when the HP SmartArray batteries got discharged.

It turns out there is an HP support document, SHUTDOWN PROCEDURE REQUIRED for ProLiant Server Running Sun Solaris 10 to Properly Flush Cache on Smart Array Controller Prior to Shutdown, that describes my situation precisely. It seems to be an interaction problem between Solaris and the cpqary3 drivers (2.1.0 or older) that causes buffers in the HP SmartArray controller not to be flushed on shutdown. If the batteries then drain, data loss might eventually occur. Since I was indeed using cpqary3 driver version 2.1.0, I have just upgraded to cpqary3-2.2.0 to see if this solves my problems once and for all. We’ll see 🙂
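A quick way to see which cpqary3 version is currently loaded, before and after the upgrade, is to grep the loaded kernel modules (the module description normally carries the driver version):

$ modinfo | grep -i cpqary3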

Creating an OpenSolaris LiveUSB

Creating a bootable OpenSolaris LiveUSB might prove useful in some situations, for example when installing OpenSolaris on machines without a CD/DVD drive. First, download the OpenSolaris LiveCD image.

Next, install SUNWdistro-const:

$ pfexec pkg install SUNWdistro-const

Generate a USB image from the contents of the LiveCD image:

$ pfexec usbgen osol-0906-x86.iso osol-0906-x86-usb.img /var/tmp/solaris
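The last argument is a working directory that usbgen uses while building the image; I believe it has to exist beforehand, so create it first if needed:

$ mkdir -p /var/tmp/solaris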

Dump the USB image into the USB device, but first make sure the USB device is not already mounted (for example, automatically mounted by GNOME):

$ pfexec usbcopy osol-0906-x86-usb.img
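usbcopy will prompt for the target USB device. To check beforehand that GNOME has not auto-mounted the stick, and to unmount it if it has, something like the following works (the mount point is just an example):

$ mount | grep /media
$ pfexec umount /media/OpenSolaris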

ZFS, GRUB boot problems and "inconsistent file system structure"

Today one of my OpenSolaris boxes could not get to the GRUB menu. Instead, I was dropped at the GRUB prompt. I tried running:

bootfs rpool/ROOT/opensolaris
kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
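For reference, a full manual boot from the GRUB prompt on a stock OpenSolaris root pool would look roughly like this; it is a sketch, but note that the boot archive also has to be loaded before issuing boot:

bootfs rpool/ROOT/opensolaris
kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
module$ /platform/i86pc/$ISADIR/boot_archive
boot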

But this was failing with an “inconsistent file system structure” error. I searched the Internet and found ZFS rpool Upgrade and GRUB. What I did was boot from an OpenSolaris CD and then try to import the rpool ZFS pool:

mkdir /tmp/rpool
zpool import -R /tmp/rpool rpool

But this failed with an error message indicating that another system might be using the pool. To force the import, I used the following command instead:

zpool import -f -R /tmp/rpool rpool

This time it worked flawlessly, and I could check that the GRUB menu.lst file was in place and looked okay. So, before trying to rebuild the boot archives or reinstall GRUB, I rebooted the box. This time the GRUB menu came up and I could boot the system. I’m not entirely sure what happened yet.
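For anyone hitting the same problem, checking the menu.lst on the imported pool and exporting it cleanly before the reboot can look roughly like this (a sketch; the path assumes the default rpool mountpoint under the -R altroot):

ls -l /tmp/rpool/rpool/boot/grub/menu.lst
zpool export rpool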

OpenSolaris and power-off problems

Since I installed OpenSolaris on the HP Proliant DL180 G5, I’m constantly having problems with init 5 not being able to switch the power off on shutdown. Today, while searching a bit to see if anyone else had this problem, I came across the following thread: Shutting down PC. The comment by perksta is the most useful one:

Hi there,
I too was having this issue, hanging on shutdown. My setup is a ASUS P5E3-WS-Pro with QUAD core Q6600 2.4Ghz with AOC-SAT2-MV8. I found this blog :-

http://masafumi-ohta.blogspot.com/2008/10/workaroundeee-901-has-shutdown-problem_08.html

about an ASUS eee 901 which was worked around by offlining the extra cores before shutdown.

add “/usr/sbin/psradm -f 1 2 3” before “init 5” line in the file “/usr/lib/hal/sunos/hal-system-power-shutdown-sunos”

This has worked for me consistently now (about 20 shutdowns) I am using Solaris Express CE build 103.

I still get lots of ‘svc-syseventd stop’ errors during shutdown but at least it turns off reliably

To keep it short, running:

pfexec /usr/sbin/psradm -f 1 2 3
pfexec /sbin/init 5

Seems to do the trick, although I confess I’ve only used it a couple of times. Time will tell if this workaround works reliably or not.
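To make the workaround permanent for shutdowns initiated from GNOME, the psradm call can be placed in the HAL script mentioned in the quote, right before its existing init 5 line. This is only a sketch of the idea, not a verified excerpt of the file:

# /usr/lib/hal/sunos/hal-system-power-shutdown-sunos (sketched excerpt)
/usr/sbin/psradm -f 1 2 3   # offline the extra cores first
/sbin/init 5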

HP Proliant DL180 G6 and ZFS (part V)

Once I got the OpenSolaris installation kicked off, I faced another problem. The computer has three 1.5TB disks and one 2TB disk installed. This is suboptimal when configuring a RAID volume, but I thought I could cheat here. The cheat consists of slicing the 2TB disk into a 1.5TB slice and using the remaining 0.5TB to store the ZFS rpool.

As nice as it sounds, this turned out to be a little bit tricky. More than I expected. The challenge was convincing the OpenSolaris installer to create one 0.5TB slice and another 1.5TB slice. But I could not: the OpenSolaris installer creates a partition and a single slice that fills it entirely. So I either got a 2.0TB partition with a 2.0TB slice or a 0.5TB partition with a 0.5TB slice. In the end, what I did was create a 0.5TB partition, leaving the rest of the disk unused. The OpenSolaris installer created a SOLARIS2 partition and, inside it, a 0.5TB slice for the ZFS rpool. Once the installation was finished, I booted from a Ubuntu 9.04 LiveCD. As usual, Ubuntu did not require any driver disk or updates: it recognized the HP Smart Array controller automatically. Then I used fdisk to destroy and recreate the SOLARIS2 partition, but this time I recreated it to cover the whole disk. I had to make sure the partition was marked active or GRUB would not boot. Since OpenSolaris doesn’t like multiple partitions of the SOLARIS2 (hexadecimal bf) type, this is the only way I can think of to do what I want.
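For completeness, the fdisk session under Ubuntu went roughly like this; it is written from memory as a sketch, and the device name is an assumption:

$ sudo fdisk /dev/sdb
d    (delete the existing 0.5TB SOLARIS2 partition)
n    (recreate it as a primary partition spanning the whole disk)
t    (change its type back to bf, Solaris)
a    (mark it active so GRUB will boot)
w    (write the new table and exit)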

The next step was to boot from the OpenSolaris x86 LiveCD, apply the driver update and check that fdisk reported a single Solaris partition filling the entire 2TB disk. Then I summoned format to create a new s3 slice covering the remaining 1.5TB (the format dialogue is sketched after the table below). This is how the slices look now:

Current partition table (original):
Total disk cylinders available: 60796 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wm       1 - 15198      465.69GB    (15198/0/0)  976623480
  1 unassigned    wm       0                0         (0/0/0)              0
  2     backup    wu       0 - 60795        1.82TB    (60796/0/0) 3906750960
  3 unassigned    wm   15199 - 60795        1.36TB    (45597/0/0) 2930063220
  4 unassigned    wm       0                0         (0/0/0)              0
  5 unassigned    wm       0                0         (0/0/0)              0
  6 unassigned    wm       0                0         (0/0/0)              0
  7 unassigned    wm       0                0         (0/0/0)              0
  8       boot    wu       0 -     0       31.38MB    (1/0/0)          64260
  9 unassigned    wm       0                0         (0/0/0)              0
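For reference, slice 3 above was created from format's partition menu. The dialogue below is reconstructed as a sketch, with the starting cylinder and size taken from the table:

# format
(select the 2TB disk from the menu)
format> partition
partition> 3
Enter partition id tag[unassigned]:
Enter partition permission flags[wm]:
Enter new starting cyl[0]: 15199
Enter partition size[0b, 0c, 0.00mb, 0.00gb]: 45597c
partition> label
partition> quit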

Then, I only had to apply a label to the remaining three 1.5TB disks and create a corresponding s3 slice on each of them (a shortcut for this is sketched after the table below). All the s3 slices are about the same size:

Current partition table (original):
Total disk cylinders available: 60796 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders         Size            Blocks
  0 unassigned    wm       0                0         (0/0/0)              0
  1 unassigned    wm       0                0         (0/0/0)              0
  2     backup    wu       0 - 60795        1.36TB    (60796/0/0) 2930063220
  3 unassigned    wm       0 - 60795        1.36TB    (60796/0/0) 2930063220
  4 unassigned    wm       0                0         (0/0/0)              0
  5 unassigned    wm       0                0         (0/0/0)              0
  6 unassigned    wm       0                0         (0/0/0)              0
  7 unassigned    wm       0                0         (0/0/0)              0
  8       boot    wu       0 -     0       23.53MB    (1/0/0)          48195
  9 unassigned    wm       0                0         (0/0/0)              0
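Rather than repeating the format dialogue on each of the 1.5TB disks, the label of the first one can be copied to the other two with prtvtoc and fmthard. This is a sketch: the device names are a guess based on the pool layout below, and it only works because the disks are identical:

# prtvtoc /dev/rdsk/c9t1d0s2 | fmthard -s - /dev/rdsk/c9t2d0s2
# prtvtoc /dev/rdsk/c9t1d0s2 | fmthard -s - /dev/rdsk/c9t3d0s2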

And finally,

# zpool create -f zfs raidz1 c9t0d0s3 c9t1d0s3 c9t2d0s3 c9t3d0s3
# zpool list
NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
rpool   464G  5.50G   458G     1%  ONLINE  -
zfs    5.44T   152K  5.44T     0%  ONLINE  -