Archives For May 2010

Dynamic Root Disk, or DRD for short, is a nice and handy tool that IMHO every HP-UX sysadmin must know. In an HPVM-related post I showed how to use DRD to clone a virtual machine, but today I will explain the purpose DRD was intended for when it was first introduced… patching a server. I'm going to assume you have a spare disk for the task and, of course, that DRD is installed on the server.

1.- Clone the root disk.

root@sheldon:/ # drd clone -x overwrite=true -v -t /dev/disk/disk2

=======  04/21/10 09:05:53 EDT  BEGIN Clone System Image (user=root)  (jobid=sheldon-01)

* Reading Current System Information
* Selecting System Image To Clone
* Selecting Target Disk
* Selecting Volume Manager For New System Image
* Analyzing For System Image Cloning
* Creating New File Systems
* Copying File Systems To New System Image
* Making New System Image Bootable
* Unmounting New System Image Clone
* System image: "sysimage_001" on disk "/dev/disk/disk2"

=======  04/21/10 09:38:48 EDT  END Clone System Image succeeded. (user=root)  (jobid=sheldon-01)

root@sheldon:/ #

2.- Mount the image.

root@sheldon:/ # drd mount

=======  04/21/10 09:41:20 EDT  BEGIN Mount Inactive System Image (user=root)  (jobid=sheldon)

 * Checking for Valid Inactive System Image
 * Locating Inactive System Image
 * Mounting Inactive System Image

=======  04/21/10 09:41:31 EDT  END Mount Inactive System Image succeeded. (user=root)  (jobid=sheldon)

root@sheldon:/ #

Check the mount by displaying the drd00 volume group.

root@sheldon:/ # vgdisplay drd00

VG Name                     /dev/drd00
VG Write Access             read/write     
VG Status                   available                 
Max LV                      255    
Cur LV                      8      
Open LV                     8      
Max PV                      16     
Cur PV                      1      
Act PV                      1      
Max PE per PV               4356         
VGDA                        2   
PE Size (Mbytes)            32              
Total PE                    4346    
Alloc PE                    2062    
Free PE                     2284    
Total PVG                   0        
Total Spare PVs             0              
Total Spare PVs in use      0  

root@sheldon:/ #
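
The clone's file systems can also be checked with bdf; by default DRD mounts them under /var/opt/drd/mnts/sysimage_001. A minimal sketch (the mount point is the usual default, check your own drd mount output):

root@sheldon:/ # bdf | grep drd00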

3.- Apply the patches on the mounted clone.

root@sheldon:/ # drd runcmd swinstall -s /tmp/patches_01.depot

=======  04/21/10 09:42:55 EDT  BEGIN Executing Command On Inactive System Image (user=root)  (jobid=sheldon)

 * Checking for Valid Inactive System Image
 * Analyzing Command To Be Run On Inactive System Image
 * Locating Inactive System Image
 * Accessing Inactive System Image for Command Execution
 * Setting Up Environment For Command Execution
 * Executing Command On Inactive System Image
 * Using unsafe patch list version 20061206
 * Starting swagentd for drd runcmd
 * Executing command: "/usr/sbin/swinstall -s /tmp/patches_01.depot"

=======  04/21/10 09:42:59 EDT  BEGIN swinstall SESSION
 (non-interactive) (jobid=sheldon-0006) (drd session)

 * Session started for user "root@sheldon".

 * Beginning Selection

 ...
 ...
 ...

=======  04/21/10 09:44:37 EDT  END swinstall SESSION (non-interactive)
 (jobid=sheldon-0006) (drd session)

 * Command "/usr/sbin/swinstall -s /tmp/patches_01.depot" completed with the return
 code "0".
 * Stopping swagentd for drd runcmd
 * Cleaning Up After Command Execution On Inactive System Image

=======  04/21/10 09:44:38 EDT  END Executing Command On Inactive System Image succeeded. (user=root)  (jobid=sheldon)

root@sheldon:/ #

4.- Check the installed patches on the DRD image.

root@sheldon:/ # drd runcmd swlist patches_01

=======  04/21/10 09:45:29 EDT  BEGIN Executing Command On Inactive System Image (user=root)  (jobid=sheldon)

 * Checking for Valid Inactive System Image
 * Analyzing Command To Be Run On Inactive System Image
 * Locating Inactive System Image
 * Accessing Inactive System Image for Command Execution
 * Setting Up Environment For Command Execution
 * Executing Command On Inactive System Image
 * Executing command: "/usr/sbin/swlist patches_01"
# Initializing...
# Contacting target "sheldon"...
#
# Target:  sheldon:/
#

 # patches_01                    1.0            ACME Patching depot
   patches_01.acme-RUN
 * Command "/usr/sbin/swlist patches_01" completed with the return code "0".
 * Cleaning Up After Command Execution On Inactive System Image

=======  04/21/10 09:45:32 EDT  END Executing Command On Inactive System Image succeeded. (user=root)  (jobid=sheldon)

root@sheldon:/ #

5.- Activate the image and reboot the server.

At this point you only have to activate the patched image with the drd activate command and schedule a reboot of the server.
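
For instance, a minimal sketch of that two-step approach, activating now and leaving the reboot for the maintenance window (adjust the shutdown options to your own policy):

root@sheldon:/ # drd activate
root@sheldon:/ # shutdown -r -y 0    # run this when the maintenance window arrives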

If you want to activate and reboot at the same time use the -x reboot=true option as in the example below.

root@sheldon:/ # drd activate -x reboot=true

=======  04/21/10 09:52:26 EDT  BEGIN Activate Inactive System Image
 (user=root)  (jobid=sheldon)

 * Checking for Valid Inactive System Image
 * Reading Current System Information
 * Locating Inactive System Image
 * Determining Bootpath Status
 * Primary bootpath : 0/1/1/0.0.0 [/dev/disk/disk1] before activate.
 * Primary bootpath : 0/1/1/0.1.0 [/dev/disk/disk2] after activate.
 * Alternate bootpath : 0 [unknown] before activate.
 * Alternate bootpath : 0 [unknown] after activate.
 * HA Alternate bootpath : <none> [] before activate.
 * HA Alternate bootpath : <none> [] after activate.
 * Activating Inactive System Image
 * Rebooting System

If everything goes well after the reboot, give the patched server some time (I leave how long to your own criteria) before restoring the mirror.
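
Once the server is back, a quick sanity check is to confirm that it booted from the clone disk and that the patches are visible on the running image. A minimal sketch with standard commands, using the disk and depot names from this example:

root@sheldon:/ # setboot              # the primary boot path should now be the clone disk (0/1/1/0.1.0 here)
root@sheldon:/ # swlist patches_01    # the depot installed through drd runcmd should be listed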

Juanma.

If you need to determine the version of a Veritas disk group, it can be done in two ways:

  • vxdg command:

Execute vxdg list <diskgroup> and look for the version field in the output.

root@vmnode1:~# vxdg list dg_sap
Group:     dg_sap
dgid:      1273503890.14.vmnode1
import-id: 1024.10
flags:     cds
version:   140 <--- VERSION!
alignment: 8192 (bytes)
local-activation: read-write
ssb:            on
detach-policy: global
dg-fail-policy: dgdisable
copies:    nconfig=default nlog=default
config:    seqno=0.1076 permlen=24072 free=24068 templen=2 loglen=3648
config disk disk27 copy 1 len=24072 state=clean online
config disk disk28 copy 1 len=24072 state=clean online
log disk disk27 copy 1 len=3648
log disk disk28 copy 1 len=3648
root@vmnode1:~#
  • vxprint command:

Run vxprint -l <diskgroup> and again look for the version field, as shown in the example.

root@vmnode1:~# vxprint -l dg_sap
Disk group: dg_sap

Group:    dg_sap
info:     dgid=1273503890.14.vmnode1
version:  140 <--- VERSION!
alignment: 8192 (bytes)
activation: read-write
detach-policy: global
dg-fail-policy: dgdisable
copies:   nconfig=default nlog=default
devices:  max=32767 cur=1
minors:   >= 4000
cds=on

root@vmnode1:~#
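
If you just want the bare number, either output can be filtered. A trivial sketch reusing the disk group from the example above:

root@vmnode1:~# vxdg list dg_sap | awk '/^version/ {print $2}'
140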

And as Nelson Muntz likes to say… smell you later ;-)

Juanma.

In today's post I will show how to create and break a mirrored volume in Veritas Volume Manager and in Logical Volume Manager.

## LVM ##

Creating a mirror of a volume and later splitting it in LVM is quite easy and can be done with a few commands. I'm going to assume that the original volume is already created.

  • Extend the Volume Group that contains the lvol.

It has to be done with the same number of disks, and of the same size, as the ones already within the VG.

[root@sheldon] / # vgextend vg_oracle /dev/disk/disk26
Volume group "vg_oracle" has been successfully extended.
Volume Group configuration for /dev/vg_oracle has been saved in /etc/lvmconf/vg_oracle.conf
[root@sheldon] / #
  • Create the mirror.
[root@sheldon] / # lvextend -m 1 /dev/vg_oracle/lv_oracle /dev/disk/disk26
The newly allocated mirrors are now being synchronized. This operation will
take some time. Please wait ....
Logical volume "/dev/vg_oracle/lv_oracle" has been successfully extended.
Volume Group configuration for /dev/vg_oracle has been saved in /etc/lvmconf/vg_oracle.conf
[root@sheldon] / #
  • Check the configuration.
[root@sheldon] / # lvdisplay /dev/vg_oracle/lv_oracle
--- Logical volumes ---
LV Name                     /dev/vg_oracle/lv_oracle
VG Name                     /dev/vg_oracle
LV Permission               read/write                
LV Status                   available/syncd           
Mirror copies               1            
Consistency Recovery        MWC                 
Schedule                    parallel      
LV Size (Mbytes)            208             
Current LE                  13             
Allocated PE                26             
Stripes                     0       
Stripe Size (Kbytes)        0                   
Bad block                   NONE         
Allocation                  strict                    
IO Timeout (Seconds)        default             
Number of Snapshots         0  

[root@sheldon] / #
  • Perform the split.
[root@sheldon] / # lvsplit -s copy /dev/vg_oracle/lv_oracle
Logical volume "/dev/vg_oracle/lv_oraclecopy" has been successfully created with
character device "/dev/vg_oracle/rlv_oraclecopy".
Logical volume "/dev/vg_oracle/lv_oracle" has been successfully split.
Volume Group configuration for /dev/vg_oracle has been saved in /etc/lvmconf/vg_oracle.conf
[root@sheldon] / #
  • Reestablish the mirror.

If the VG is version 1.0 or 2.0 the merge cannot be performed while the group is in shared mode; for version 2.1 volume groups the lvmerge can be done in any mode.

The order of the lvmerge arguments is the copy FIRST and the master SECOND. This is very important if you don't want to sync the mirror in the wrong direction.

[root@sheldon] / # lvmerge /dev/vg_oracle/lv_oraclecopy /dev/vg_oracle/lv_oracle
Logical volume "/dev/vg_oracle/lv_oraclecopy" has been successfully merged
with logical volume "/dev/vg_oracle/lv_oracle".
Logical volume "/dev/vg_oracle/lv_oraclecopy" has been successfully removed.
Volume Group configuration for /dev/vg_oracle has been saved in /etc/lvmconf/vg_oracle.conf
[root@sheldon] / #

## VXVM ##

The process in VxVM is in many ways similar to the LVM one.

  • Add a new disk/disks to the diskgroup

Launch the vxdiskadm tool and select Add or initialize one or more disks.

[root@sheldon] / # vxdiskadm 

Volume Manager Support Operations
Menu: VolumeManager/Disk

 1      Add or initialize one or more disks
 2      Remove a disk
 3      Remove a disk for replacement
 4      Replace a failed or removed disk
 5      Mirror volumes on a disk
 6      Move volumes from a disk
 7      Enable access to (import) a disk group
 8      Remove access to (deport) a disk group
 9      Enable (online) a disk device
 10     Disable (offline) a disk device
 11     Mark a disk as a spare for a disk group
 12     Turn off the spare flag on a disk
 13     Remove (deport) and destroy a disk group
 14     Unrelocate subdisks back to a disk
 15     Exclude a disk from hot-relocation use
 16     Make a disk available for hot-relocation use
 17     Prevent multipathing/Suppress devices from VxVM's view
 18     Allow multipathing/Unsuppress devices from VxVM's view
 19     List currently suppressed/non-multipathed devices
 20     Change the disk naming scheme
 21     Change/Display the default disk layouts
 22     Mark a disk as allocator-reserved for a disk group
 23     Turn off the allocator-reserved flag on a disk
 list   List disk information

 ?      Display help about menu
 ??     Display help about the menuing system
 q      Exit from menus

Select an operation to perform:  1

Enter the disk, answer the questions according to your configuration, and exit the tool when the process is done.

Add or initialize disks
Menu: VolumeManager/Disk/AddDisks

 Use this operation to add one or more disks to a disk group.  You can
 add the selected disks to an existing disk group or to a new disk group
 that will be created as a part of the operation. The selected disks may
 also be added to a disk group as spares. Or they may be added as
 nohotuses to be excluded from hot-relocation use. The selected
 disks may also be initialized without adding them to a disk group
 leaving the disks available for use as replacement disks.

 More than one disk or pattern may be entered at the prompt.  Here are
 some disk selection examples:

 all:          all disks
 c3 c4t2:      all disks on both controller 3 and controller 4, target 2
 c3t4d2:       a single disk (in the c#t#d# naming scheme)
 xyz_0:        a single disk (in the enclosure based naming scheme)
 xyz_:         all disks on the enclosure whose name is xyz

 disk#:        a single disk (in the new naming scheme)

Select disk devices to add: [<pattern-list>,all,list,q,?]  disk28
Here is the disk selected.  Output format: [Device_Name]

 disk28

Continue operation? [y,n,q,?] (default: y)  

 You can choose to add this disk to an existing disk group, a
 new disk group, or leave the disk available for use by future
 add or replacement operations.  To create a new disk group,
 select a disk group name that does not yet exist.  To leave
 the disk available for future use, specify a disk group name
 of "none".

Which disk group [<group>,none,list,q,?] (default: none)  dg_sap

Use a default disk name for the disk? [y,n,q,?] (default: y)  n

Add disk as a spare disk for dg_sap? [y,n,q,?] (default: n)  

Exclude disk from hot-relocation use? [y,n,q,?] (default: n)  

Add site tag to disk? [y,n,q,?] (default: n)  

 The selected disks will be added to the disk group dg_sap with
 disk names that you will specify interactively.

 disk28

Continue with operation? [y,n,q,?] (default: y)  

 Initializing device disk28.

Enter desired private region length
[<privlen>,q,?] (default: 32768)  

Enter disk name for disk28 [<name>,q,?] (default: dg_sap02)  

 VxVM  NOTICE V-5-2-88
Adding disk device disk28 to disk group dg_sap with disk
 name dg_sap02.

Add or initialize other disks? [y,n,q,?] (default: n)
  • Check the configuration.
[root@sheldon] / # vxprint -g dg_sap
TY NAME         ASSOC        KSTATE   LENGTH   PLOFFS   STATE    TUTIL0  PUTIL0
dg dg_sap       dg_sap       -        -        -        -        -       -

dm dg_sap01     disk27       -        228224   -        -        -       -
dm dg_sap02     disk28       -        228224   -        -        -       -

v  sapvol       fsgen        ENABLED  204800   -        ACTIVE   -       -
pl sapvol-01    sapvol       ENABLED  204800   -        ACTIVE   -       -
sd dg_sap01-01  sapvol-01    ENABLED  204800   0        -        -       -
[root@sheldon] / #
  • Create the mirror.
[root@sheldon] / # vxassist -g dg_sap mirror sapvol dg_sap02
[root@sheldon] / #
[root@sheldon] / # vxprint -g dg_sap                        
TY NAME         ASSOC        KSTATE   LENGTH   PLOFFS   STATE    TUTIL0  PUTIL0
dg dg_sap       dg_sap       -        -        -        -        -       -

dm dg_sap01     disk27       -        228224   -        -        -       -
dm dg_sap02     disk28       -        228224   -        -        -       -

v  sapvol       fsgen        ENABLED  204800   -        ACTIVE   -       -
pl sapvol-01    sapvol       ENABLED  204800   -        ACTIVE   -       -
sd dg_sap01-01  sapvol-01    ENABLED  204800   0        -        -       -
pl sapvol-02    sapvol       ENABLED  204800   -        ACTIVE   -       -
sd dg_sap02-01  sapvol-02    ENABLED  204800   0        -        -       -
[root@sheldon] / #
  • Break the mirror.

To do this just disassociate the corresponding plex from the volume.

[root@sheldon] / # vxplex -g dg_sap dis sapvol-02
[root@sheldon] / # vxprint -g dg_sap             
TY NAME         ASSOC        KSTATE   LENGTH   PLOFFS   STATE    TUTIL0  PUTIL0
dg dg_sap       dg_sap       -        -        -        -        -       -

dm dg_sap01     disk27       -        228224   -        -        -       -
dm dg_sap02     disk28       -        228224   -        -        -       -

pl sapvol-02    -            DISABLED 204800   -        -        -       -
sd dg_sap02-01  sapvol-02    ENABLED  204800   0        -        -       -

v  sapvol       fsgen        ENABLED  204800   -        ACTIVE   -       -
pl sapvol-01    sapvol       ENABLED  204800   -        ACTIVE   -       -
sd dg_sap01-01  sapvol-01    ENABLED  204800   0        -        -       -
[root@sheldon] / #
  • Reattach the plex to the volume and reestablish the mirror.
[root@sheldon] / # vxplex -g dg_sap att sapvol sapvol-02
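
The reattach kicks off a resynchronization of the plex, which can take a while on large volumes. A small sketch of how to follow it, reusing the vxprint check from above plus vxtask:

[root@sheldon] / # vxtask list        # any running plex resync task will show up here
[root@sheldon] / # vxprint -g dg_sap  # once finished, both sapvol plexes should be ACTIVE again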

And we are done for now, more VxVM stuff in a future post :-)

Juanma.

DISCLAIMER NOTE: This method is based only on my personal experience working with HP-UX 11iv2, 11iv3 and EMC Symmetrix. I tested it with nearly a hundred LUNs from a DMX-3 and with six different servers. As far as I know this isn't an official or supported procedure from either EMC or HP.

Every time the storage people add a new LUN to your servers from an EMC disk array, they provide you with a Logical device ID (or LUN ID) to identify the disk with PowerPath. If you are on HP-UX 11iv2 there is no problem here: just run a simple powermt command and look for the new LUN.

[root@totoro] / # powermt display dev=all | more
...
...
Symmetrix ID=000281150123
Logical device ID=0CED
state=alive; policy=SymmOpt; priority=0; queued-IOs=0
==============================================================================
---------------- Host ---------------   - Stor -   -- I/O Path -  -- Stats ---
###  HW Path                I/O Paths    Interf.   Mode    State  Q-IOs Errors
==============================================================================
20 0/0/10/1/0.11.15.0.0.1.3 c7t1d3 SP A0 active alive 0 1
23 0/0/10/1/0.11.47.0.0.1.3 c8t1d3 SP B0 active alive 0 1
26 1/0/8/1/0.21.15.0.0.1.3 c10t1d3 SP A1 active alive 0 1
29 1/0/8/1/0.21.47.0.0.1.3 c11t1d3 SP B1 active alive 0 1
...
...

But if you are on 11.31 you will find a small problem performing this: PowerPath is not recommended on HP-UX 11iv3 because it can conflict with the new native multipathing of the v3.

You can use the trick of doing a simple ll -tr in the /dev/disk directory just after the hardware scan and the device file creation, but this is only valid if you are adding one or two disks of the same size. What if you have several disks of different sizes and want to use each disk for a different VG and/or task? The storage people will only give you the LUN IDs, and you will not have the tool to match those IDs with your disks.

Fortunately there is a way to circumvent the lack of PowerPath on 11iv3. We are going to use the same disk as in the previous example, the 0CED.

First get the disk's serial number with scsimgr.

[root@totoro] / # scsimgr get_attr -D /dev/rdisk/disk30 -a serial_number

 SCSI ATTRIBUTES FOR LUN : /dev/rdisk/disk30

name = serial_number
current = "100123CED000"
default =
saved =

Take note of the serial number.

100123CED000

As you can see, the last three digits of the LUN ID are included in the disk serial number, and if you look carefully you will also see the last four digits of the Symmetrix ID (0123) right before the LUN ID.
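
With that pattern in mind, matching a whole batch of new LUNs is just a matter of dumping the serial number of every disk and comparing the embedded digits with the LUN IDs provided by the storage team. A rough sketch that loops over the raw disk devices with the same scsimgr attribute used above:

[root@totoro] / # for d in /dev/rdisk/disk*; do printf "%s " "$d"; scsimgr get_attr -D $d -a serial_number | grep current; done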

Juanma.

PowerPath is EMC's multipathing software for Unix operating systems. If you have ever worked, or are going to work, in an environment that includes EMC storage systems, it is almost certain that PowerPath will be installed on the Unix hosts.

Following are some notes and tips I've been collecting since the very first time I came across PowerPath. Of course this isn't a full user guide, just a sort of personal quick reference. I decided to put it here in the hope that it will be helpful to someone, and for my own use.

  • Show powermt command version
[root@totoro] / # powermt version
EMC powermt for PowerPath (c) Version 5.1.0 (build 160)
  • Display PowerPath configuration.
[root@totoro] / # powermt display
Symmetrix logical device count=898
CLARiiON logical device count=0
Hitachi logical device count=0
Invista logical device count=0
HP xp logical device count=0
Ess logical device count=0
HP HSx logical device count=0
==============================================================================
----- Host Bus Adapters ---------  ------ I/O Paths -----  ------ Stats ------
###  HW Path                       Summary   Total   Dead  IO/Sec Q-IOs Errors
==============================================================================
 5 0/2/1/0.101.16.19.0           optimal      61      0       -     0      0
 6 0/2/1/0.101.16.19.1           optimal     102      0       -     0      0
 7 0/2/1/0.101.16.19.2           optimal      97      0       -     0      0
 8 0/2/1/0.101.16.19.3           optimal     113      0       -     0      0
 9 0/2/1/0.101.16.19.4           optimal      82      0       -     0      0
 11 0/2/1/0.101.43.19.0           optimal     128      0       -     0      0
 12 0/2/1/0.101.43.19.1           optimal      49      0       -     0      0
 13 0/2/1/0.101.43.19.2           optimal      57      0       -     0      0
 14 0/2/1/0.101.43.19.3           optimal      83      0       -     0      0
 15 0/2/1/0.101.43.19.4           optimal      74      0       -     0      0
 16 0/2/1/0.101.43.19.5           optimal      33      0       -     0      0
 17 0/2/1/0.101.43.19.6           optimal      19      0       -     0      0
 19 0/5/1/0.102.16.19.0           optimal      61      0       -     0      0
 20 0/5/1/0.102.16.19.1           optimal     102      0       -     0      0
 21 0/5/1/0.102.16.19.2           optimal      97      0       -     0      0
 22 0/5/1/0.102.16.19.3           optimal     113      0       -     0      0
 23 0/5/1/0.102.16.19.4           optimal      82      0       -     0      0
 25 0/5/1/0.102.43.19.0           optimal     128      0       -     0      0
 26 0/5/1/0.102.43.19.1           optimal      49      0       -     0      0
 27 0/5/1/0.102.43.19.2           optimal      57      0       -     0      0
 28 0/5/1/0.102.43.19.3           optimal      83      0       -     0      0
 29 0/5/1/0.102.43.19.4           optimal      74      0       -     0      0
 30 0/5/1/0.102.43.19.5           optimal      33      0       -     0      0
 31 0/5/1/0.102.43.19.6           optimal      19      0       -     0      0

[root@totoro] / #
  • Check for dead paths and remove them.
[root@sheldon] / # powermt display
Symmetrix logical device count=34
CLARiiON logical device count=0
Hitachi logical device count=0
Invista logical device count=0
HP xp logical device count=0
Ess logical device count=0
HP HSx logical device count=0
==============================================================================
----- Host Bus Adapters ---------  ------ I/O Paths -----  ------ Stats ------
###  HW Path                       Summary   Total   Dead  IO/Sec Q-IOs Errors
==============================================================================
 17 UNKNOWN                       failed        1      1       -     0      0
 31 UNKNOWN                       failed        1      1       -     0      0
 37 1/0/14/1/0.109.85.19.0        optimal      32      0       -     0      0
 39 0/0/14/1/0.110.85.19.0        optimal      32      0       -     0      0

[root@sheldon] / # powermt check
Warning: Symmetrix device path c17t9d6 is currently dead.
Do you want to remove it (y/n/a/q)? y
Warning: Symmetrix device path c31t9d6 is currently dead.
Do you want to remove it (y/n/a/q)? y
[root@sheldon] / #
  • List all devices.
[root@totoro] / # powermt display dev=all
  • Remove all devices.
[root@totoro] / # powermt remove dev=all
  • Add a new disk in HP-UX, configure it and save the config:

After a rescan of the disks with ioscan and the creation of the device files with insf, run powermt config to add the new disk to PowerPath.
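
A minimal sketch of that preliminary rescan step, using the standard class options of ioscan and insf (adjust to your own setup):

[root@totoro] / # ioscan -fnC disk    # probe the hardware and list the disk class with its device files
[root@totoro] / # insf -eC disk       # (re)create the special files for the disk class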

[root@totoro] / # powermt config

Now display all the devices and look for the Logical device ID of the disk.

[root@totoro] / # powermt display dev=all | more
...
...
Symmetrix ID=000287750035
Logical device ID=0004
state=alive; policy=SymmOpt; priority=0; queued-IOs=0
==============================================================================
---------------- Host ---------------   - Stor -   -- I/O Path -  -- Stats ---
###  HW Path                I/O Paths    Interf.   Mode    State  Q-IOs Errors
==============================================================================
20 0/0/10/1/0.11.15.0.0.1.3 c20t1d3 SP A0 active alive 0 1
23 0/0/10/1/0.11.47.0.0.1.3 c23t1d3 SP B0 active alive 0 1
26 1/0/8/1/0.21.15.0.0.1.3 c26t1d3 SP A1 active alive 0 1
29 1/0/8/1/0.21.47.0.0.1.3 c29t1d3 SP B1 active alive 0 1
...
...

If everything went fine, save the config.

[root@totoro] / # powermt save

And these are the most common tasks I’ve been doing with PowerPath. I’ll try to put some order into my notes and personal how-to files and write more posts like this one.

Juanma.

Last week was, without any doubt, one of the most exciting weeks of the year. The new Integrity servers have finally been unveiled.

This whole new line of Integrity machines is based on Tukwila, the latest iteration of the Itanium processor line, which was presented by Intel earlier this year, and with one exception all of them use the blade form factor. Let's take a quick look at the new servers.

  • Entry-level

In this area, and as the only rack server of the new line, we have the rx2800. At first look it seems no more than a remake of the rx2660, but if you dig deeper you will find a powerful machine with two quad-core or dual-core Itanium 9300 processors and a maximum of 192GB of RAM.

That’s a considerable amount of power for a server of this kind. I personally like this server and have to convince my manager to kindly donate one for my home lab ;-)

  • Mid-range

In the mid-range line there are three beautiful babies named BL860c_i2, BL870c_i2 and BL890c_i2.

The key to these new servers is modularity: the BL860c_i2 is the building block of its bigger sisters. HP has developed a new piece of hardware known as the Integrity Blade Link Assembly, which makes it possible to combine blade modules. The 870 is composed of two blade modules and the 890 of four; the 860 is no more than a single blade module with a single Blade Link Assembly on its front. This way of combining the blades makes the 890 the only 8-socket blade currently available.

The 870 and the 890, with 16 and 32 cores respectively, are the logical replacements for the rx7640 and rx8640, but as many people have been saying since they were publicly presented, there is the OLAR question, or rather the apparent lack of OLAR, which was in fact one of the key features of the mid-range cell-based Integrity servers. We'll see how this issue gets solved.

  • High-End

The new rx2800 and the new blades are great, but the real shock for everybody came when HP announced the new Superdome 2. Ladies and gentlemen, the new mission-critical computing era is here: forget those fat and proprietary racks, forget everything you know about high-end servers, and welcome to blade land.

This new version of the HP flagship is based on the blade concept. Instead of cells we have cell-blades inside a new 18U enclosure based on the HP C7000 Blade Enclosure. Just remember one word… commonality. The new Superdome 2 will share a lot of parts with the C7000 and can also be managed through the same tools, like the Onboard Administrator.

The specs of this baby are astonishing, and during the presentation at the HP Technology At Work event four different configurations were outlined, ranging from 8 sockets/32 cores in four cell-blades to a maximum of 64 sockets/256 cores in 32 cell-blades distributed across four enclosures in two racks. Like I said, astonishing :-D

There have been a lot of rumors during the last year about the future of HP-UX and Itanium, mainly because of the delays of the Tukwila processor. The discussion has recently reached ITRC.

But if any of you had doubts about the future of HP-UX, I firmly believe that HP sent a clear message in the opposite direction. HP-UX is probably the most robust and reliable Unix in the enterprise arena. And seriously, what are you going to replace it with? Linux? Solaris? Please ;-)

Juanma.

Hi everybody! No, you aren't that lucky, I'm not dead yet ;-). It's just that I've been very busy these past weeks: my current project is almost finished and there is a lot of stuff to deliver to the customer. This part also reminds me that, even knowing it is an essential part of any project, I hate writing documentation with all my heart.

But there is also great news for me: a new opportunity has arisen and I've been offered the chance to become not just a full-time HP-UX sysadmin but also a storage admin at another customer. I've never been a full storage guy, but I love new challenges, and working with HP storage hardware like XP and EVA looks great to me. If I finally manage to get assigned to that project you will see storage-related posts here from time to time.

But I don't want to write more; I have to be patient and let events develop by themselves. Wish me luck, my dear readers.

Juanma.