Once I got the OpenSolaris installation kicked off, I faced another problem. The computer has 3 x 1.5TB and 1 x 2TB disks installed. This is suboptimal when configuring a RAID volume, but I thought that I could cheat here. The cheat consists of slicing the 2TB disk into a 1.5TB slice and using the remaining 0.5TB to store the ZFS rpool.
As nice as it sounds, this turned out to be a little bit trickier than I expected. The challenge was convincing the OpenSolaris installer to create one 0.5TB slice and another 1.5TB slice. But I could not: OpenSolaris creates a partition and then a slice that fills it up entirely. So I either got a 2.0TB partition and a 2.0TB slice, or a 0.5TB partition and a 0.5TB slice. In the end, what I did was create a 0.5TB partition, leaving the rest of the disk unused. The OpenSolaris installer created a SOLARIS2 partition and, inside it, a 0.5TB slice for the ZFS rpool. Once the installation was finished, I booted from an Ubuntu 9.04 LiveCD. As usual, Ubuntu did not require any driver disk or updates: it recognized the HP Smart Array controller automatically. Then I used fdisk to destroy and recreate the SOLARIS2 partition, but this time covering the whole disk. I had to make sure the partition was marked active or GRUB would not boot. Since OpenSolaris doesn't like multiple partitions of SOLARIS2 (hexadecimal bf) type, this is the only way I can think of to do what I want.
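The fdisk dialogue on the Ubuntu side went roughly like this; it is a sketch from memory, and the /dev/cciss/c0d0 device name below is just a placeholder for how the cciss driver usually exposes a Smart Array logical drive:

# fdisk /dev/cciss/c0d0
   d        (delete the existing 0.5TB SOLARIS2 partition)
   n, p, 1  (create a new primary partition 1, accepting the default first and last cylinders)
   t, bf    (set its type back to SOLARIS2)
   a, 1     (mark it active so GRUB keeps booting)
   w        (write the new table and exit)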
The next step was to boot from the OpenSolaris x86 LiveCD, apply the driver update and check that fdisk reported a single Solaris partition filling the entire 2TB disk. Then I summoned format to create a new s3 slice covering the remaining 1.5TB. This is what the slices look like now:
Current partition table (original):
Total disk cylinders available: 60796 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wm       1 - 15198      465.69GB    (15198/0/0)   976623480
  1 unassigned    wm       0                 0         (0/0/0)              0
  2     backup    wu       0 - 60795         1.82TB    (60796/0/0) 3906750960
  3 unassigned    wm   15199 - 60795         1.36TB    (45597/0/0) 2930063220
  4 unassigned    wm       0                 0         (0/0/0)              0
  5 unassigned    wm       0                 0         (0/0/0)              0
  6 unassigned    wm       0                 0         (0/0/0)              0
  7 unassigned    wm       0                 0         (0/0/0)              0
  8       boot    wu       0 -     0        31.38MB    (1/0/0)          64260
  9 unassigned    wm       0                 0         (0/0/0)              0
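Creating that s3 slice is done from format's partition menu; the dialogue was roughly this (prompts abbreviated and reconstructed from memory, and c9t0d0 below is just a placeholder for the 2TB disk; the starting cylinder and the 45597-cylinder size come from the table above):

# format -d c9t0d0
format> partition
partition> 3
Enter partition id tag[unassigned]:
Enter partition permission flags[wm]:
Enter new starting cyl[0]: 15199
Enter partition size[0b, 0c, 0e, 0.00mb, 0.00gb]: 45597c
partition> label
partition> quit
format> quit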
Then, I only had to apply a label to the remaining 3 x 1.5TB disks and create the corresponding s3 slices on each of them. All s3 slices are about the same size:
Current partition table (original):
Total disk cylinders available: 60796 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders         Size            Blocks
  0 unassigned    wm       0                 0         (0/0/0)              0
  1 unassigned    wm       0                 0         (0/0/0)              0
  2     backup    wu       0 - 60795         1.36TB    (60796/0/0) 2930063220
  3 unassigned    wm       0 - 60795         1.36TB    (60796/0/0) 2930063220
  4 unassigned    wm       0                 0         (0/0/0)              0
  5 unassigned    wm       0                 0         (0/0/0)              0
  6 unassigned    wm       0                 0         (0/0/0)              0
  7 unassigned    wm       0                 0         (0/0/0)              0
  8       boot    wu       0 -     0        23.53MB    (1/0/0)          48195
  9 unassigned    wm       0                 0         (0/0/0)              0
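An alternative to repeating the format dialogue on each disk is to clone the label from the first 1.5TB disk to the other two with prtvtoc and fmthard. A quick sketch, assuming the 1.5TB disks are c9t1d0 through c9t3d0 (placeholder names) and already carry an SMI label:

# prtvtoc /dev/rdsk/c9t1d0s2 | fmthard -s - /dev/rdsk/c9t2d0s2
# prtvtoc /dev/rdsk/c9t1d0s2 | fmthard -s - /dev/rdsk/c9t3d0s2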
And finally,
# zpool create -f zfs raidz1 c9t0d0s3 c9t1d0s3 c9t2d0s3 c9t3d0s3
# zpool list
NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
rpool   464G  5.50G   458G     1%  ONLINE  -
zfs    5.44T   152K  5.44T     0%  ONLINE  -
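Note that zpool list reports the raw size of a raidz pool, so the 5.44T is simply 4 x 1.36TB; after one disk's worth of parity, the usable space is roughly 3 x 1.36TB, about 4.1TB, which is what zfs list shows. To double-check that all four s3 slices ended up in a single raidz1 vdev and are ONLINE:

# zpool status zfs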