Once I got the OpenSolaris installation kicked off, I faced another problem. The computer has 3 x 1.5TB disks and 1 x 2TB disk installed. This is suboptimal when configuring a RAID volume, but I thought I could cheat: slice the 2TB disk into a 1.5TB slice and use the remaining 0.5TB to store the ZFS rpool.
As nice as it sounds, this turned out to be trickier than I expected. The challenge was convincing the OpenSolaris installer to create one 0.5TB slice and another 1.5TB slice, but I could not: OpenSolaris creates a partition and then a slice that fills it up entirely. So I either got a 2.0TB partition with a 2.0TB slice, or a 0.5TB partition with a 0.5TB slice. In the end, what I did was create a 0.5TB partition, leaving the rest of the disk unused. The OpenSolaris installer created a
SOLARIS2 partition and, inside it, a 0.5TB slice for the ZFS rpool. Once the installation was finished, I booted from an Ubuntu 9.04 LiveCD. As usual, Ubuntu did not require any driver disk or updates: it recognized the HP Smart Array controller automatically. Then I used
fdisk to destroy and recreate the
SOLARIS2 partition, this time covering the whole disk. I had to make sure the partition was marked active or GRUB would not boot. Since OpenSolaris doesn't like multiple partitions of
bf (Solaris) type, this is the only way I could think of to do what I wanted.
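From memory, the fdisk session went roughly like this. This is a sketch, not a transcript, and the device name is an assumption (substitute whatever the Smart Array controller shows up as under Ubuntu):

```shell
# Hedged sketch: recreate the SOLARIS2 partition spanning the whole disk
# and mark it active. Run as root from the Ubuntu LiveCD.
# /dev/sda is an example device name, not necessarily the one on this box.
fdisk /dev/sda
#   d        delete the existing 0.5TB SOLARIS2 partition
#   n        new primary partition 1, accepting the defaults (whole disk)
#   t  bf    set the partition type back to bf (Solaris)
#   a  1     mark partition 1 active so GRUB keeps booting
#   w        write the partition table and exit
```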
The next step was to boot from the OpenSolaris x86 LiveCD, apply the driver update, and check that
fdisk reported a single Solaris partition that fills the entire 2TB disk. Then I summoned
format to create a new
s3 slice that covered the remaining 1.5TB. This is how the slices look now:
Current partition table (original):
Total disk cylinders available: 60796 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wm       1 - 15198      465.69GB    (15198/0/0)   976623480
  1 unassigned    wm       0                 0         (0/0/0)              0
  2     backup    wu       0 - 60795        1.82TB    (60796/0/0)  3906750960
  3 unassigned    wm   15199 - 60795        1.36TB    (45597/0/0)  2930063220
  4 unassigned    wm       0                 0         (0/0/0)              0
  5 unassigned    wm       0                 0         (0/0/0)              0
  6 unassigned    wm       0                 0         (0/0/0)              0
  7 unassigned    wm       0                 0         (0/0/0)              0
  8       boot    wu       0 -     0       31.38MB    (1/0/0)          64260
  9 unassigned    wm       0                 0         (0/0/0)              0
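For reference, creating the s3 slice in format's partition menu went more or less like this. The prompts are abbreviated from memory, so treat it as a sketch rather than a literal transcript:

```shell
# Hedged reconstruction of the format(1M) session on the 2TB disk.
format
#   (select the 2TB disk from the list; c9t0d0 in my case)
#   format> partition
#   partition> 3
#   Enter partition id tag:           unassigned
#   Enter partition permission flags: wm
#   Enter new starting cyl:           15199
#   Enter partition size:             45597c    (the remaining cylinders)
#   partition> label                  (write the new VTOC to disk)
#   partition> quit
```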
Then, I only had to apply a label to the remaining 3 x 1.5TB disks and create the corresponding
s3 slices on each of them. All
s3 slices are about the same size:
Current partition table (original):
Total disk cylinders available: 60796 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders         Size            Blocks
  0 unassigned    wm       0                 0         (0/0/0)              0
  1 unassigned    wm       0                 0         (0/0/0)              0
  2     backup    wu       0 - 60795        1.36TB    (60796/0/0)  2930063220
  3 unassigned    wm       0 - 60795        1.36TB    (60796/0/0)  2930063220
  4 unassigned    wm       0                 0         (0/0/0)              0
  5 unassigned    wm       0                 0         (0/0/0)              0
  6 unassigned    wm       0                 0         (0/0/0)              0
  7 unassigned    wm       0                 0         (0/0/0)              0
  8       boot    wu       0 -     0       23.53MB    (1/0/0)          48195
  9 unassigned    wm       0                 0         (0/0/0)              0
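Since the three 1.5TB disks are identical, the label only needs to be built once; it can then be cloned to the other two with prtvtoc and fmthard instead of repeating the format session for each disk. The device names below are from my setup and are an assumption on yours:

```shell
# Copy the VTOC from the first labeled 1.5TB disk to its two siblings.
# s2 is the whole-disk backup slice, the conventional source/target for this.
prtvtoc /dev/rdsk/c9t1d0s2 | fmthard -s - /dev/rdsk/c9t2d0s2
prtvtoc /dev/rdsk/c9t1d0s2 | fmthard -s - /dev/rdsk/c9t3d0s2
```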
# zpool create -f zfs raidz1 c9t0d0s3 c9t1d0s3 c9t2d0s3 c9t3d0s3
# zpool list
NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
rpool   464G  5.50G   458G   1%  ONLINE  -
zfs    5.44T   152K  5.44T   0%  ONLINE  -
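A note on the numbers: the 5.44T that zpool list reports is raw capacity across all four slices. A 4-way raidz1 dedicates one slice's worth of space to parity, so the actually usable space is roughly three quarters of that:

```shell
# zpool list shows raw capacity; a 4-way raidz1 keeps one member's worth
# of parity, so usable space is about 3/4 of the raw figure.
awk 'BEGIN { printf "%.2f TB usable\n", 5.44 * 3 / 4 }'   # prints 4.08 TB usable
```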