Archive for June, 2010

Configure AVIO Lan in HPVM Linux guests

The AVIO Lan drivers for Linux HPVM guests have been supported since HPVM 4.0, but as you will see, enabling them is a little more complicated than in HP-UX guests.

The first prerequisite is to have the HPVM management software installed; once you have this package in place, look for an RPM package called hpvm_lgssn in /opt/hpvm/guest-images/linux/DRIVERS.

root@hpvm-host:/opt/hpvm/guest-images/linux/DRIVERS # ll
total 584
 0 drwxr-xr-x 2 bin bin     96 Apr 13 18:47 ./
 0 drwxr-xr-x 5 bin bin     96 Apr 13 18:48 ../
 8 -r--r--r-- 1 bin bin   7020 Mar 27  2009 README
576 -rw-r--r-- 1 bin bin 587294 Mar 27  2009 hpvm_lgssn-4.1.0-3.ia64.rpm
root@hpvm-host:/opt/hpvm/guest-images/linux/DRIVERS #

Copy the package to the virtual machine with your favorite method and install it.

[sles10]:/var/tmp # rpm -ivh hpvm_lgssn-4.1.0-3.ia64.rpm
Preparing...                ########################################### [100%]
Installing...               ########################################### [100%]

[sles10]:/var/tmp #

Check the installation of the package.

[sles10]:~ # rpm -qa | grep hpvm
hpvm-4.1.0-1
hpvmprovider-4.1.0-1
hpvm_lgssn-4.1.0-3
[sles10]:~ #
[sles10]:~ # rpm -ql hpvm_lgssn
/opt/hpvm_drivers
/opt/hpvm_drivers/lgssn
/opt/hpvm_drivers/lgssn/LICENSE
/opt/hpvm_drivers/lgssn/Makefile
/opt/hpvm_drivers/lgssn/README
/opt/hpvm_drivers/lgssn/hpvm_guest.h
/opt/hpvm_drivers/lgssn/lgssn.h
/opt/hpvm_drivers/lgssn/lgssn_ethtool.c
/opt/hpvm_drivers/lgssn/lgssn_main.c
/opt/hpvm_drivers/lgssn/lgssn_recv.c
/opt/hpvm_drivers/lgssn/lgssn_recv.h
/opt/hpvm_drivers/lgssn/lgssn_send.c
/opt/hpvm_drivers/lgssn/lgssn_send.h
/opt/hpvm_drivers/lgssn/lgssn_trace.h
/opt/hpvm_drivers/lgssn/rh4
/opt/hpvm_drivers/lgssn/rh4/u5
/opt/hpvm_drivers/lgssn/rh4/u5/lgssn.ko
/opt/hpvm_drivers/lgssn/rh4/u6
/opt/hpvm_drivers/lgssn/rh4/u6/lgssn.ko
/opt/hpvm_drivers/lgssn/sles10
/opt/hpvm_drivers/lgssn/sles10/SP1
/opt/hpvm_drivers/lgssn/sles10/SP1/lgssn.ko
/opt/hpvm_drivers/lgssn/sles10/SP2
/opt/hpvm_drivers/lgssn/sles10/SP2/lgssn.ko
[sles10]:~ #

There are two ways to install the driver: compile it from source or use one of the pre-compiled modules. The pre-compiled modules are provided for the following distributions and kernels:

  • Red Hat 4 release 5 (2.6.9-55.EL)
  • Red Hat 4 release 6 (2.6.9-67.EL)
  • SLES10 SP1 (2.6.16.46-0.12)
  • SLES10 SP2 (2.6.16.60-0.21)

For other kernels you must compile the driver. The Linux box of the example runs a supported kernel and distro (SLES10 SP2), but instead of using the pre-compiled module I decided to go through the whole process.

Go to /opt/hpvm_drivers/lgssn, where you will find the sources of the driver. To compile and install it, execute a simple make install.

[sles10]:/opt/hpvm_drivers/lgssn # make install
make -C /lib/modules/2.6.16.60-0.21-default/build SUBDIRS=/opt/hpvm_drivers/lgssn modules
make[1]: Entering directory `/usr/src/linux-2.6.16.60-0.21-obj/ia64/default'
make -C ../../../linux-2.6.16.60-0.21 O=../linux-2.6.16.60-0.21-obj/ia64/default modules
 CC [M]  /opt/hpvm_drivers/lgssn/lgssn_main.o
 CC [M]  /opt/hpvm_drivers/lgssn/lgssn_send.o
 CC [M]  /opt/hpvm_drivers/lgssn/lgssn_recv.o
 CC [M]  /opt/hpvm_drivers/lgssn/lgssn_ethtool.o
 LD [M]  /opt/hpvm_drivers/lgssn/lgssn.o
 Building modules, stage 2.
 MODPOST
 CC      /opt/hpvm_drivers/lgssn/lgssn.mod.o
 LD [M]  /opt/hpvm_drivers/lgssn/lgssn.ko
make[1]: Leaving directory `/usr/src/linux-2.6.16.60-0.21-obj/ia64/default'
find /lib/modules/2.6.16.60-0.21-default -name lgssn.ko -exec rm -f {} \; || true
find /lib/modules/2.6.16.60-0.21-default -name lgssn.ko.gz -exec rm -f {} \; || true
install -D -m 644 lgssn.ko /lib/modules/2.6.16.60-0.21-default/kernel/drivers/net/lgssn/lgssn.ko
/sbin/depmod -a || true
[sles10]:/opt/hpvm_drivers/lgssn #

This will copy the driver to /lib/modules/&lt;KERNEL_VERSION&gt;/kernel/drivers/net/lgssn/.

To ensure that the new driver is loaded during operating system startup, first add the following line to /etc/modprobe.conf, one line for each interface configured for AVIO Lan.

alias eth1 lgssn
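
For example, assuming eth1 is the interface backed by the AVIO Lan driver, you can append the alias with a one-liner like this:

[sles10]:~ # echo "alias eth1 lgssn" >> /etc/modprobe.conf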

The HPVM 4.2 manual says you have to issue the command depmod -a in order to inform the kernel about the change, but if you look at the log above you will see that the last command executed by make install is precisely a depmod -a. Look into the modules.dep file to check that the corresponding line for the lgssn driver has been added.

[sles10]:~ # grep lgssn /lib/modules/2.6.16.60-0.21-default/modules.dep
/lib/modules/2.6.16.60-0.21-default/kernel/drivers/net/lgssn/lgssn.ko:
[sles10]:~ #

At this point, if you have previously reconfigured the virtual machine, load the module and restart the network services.

[sles10]:/opt/hpvm_drivers/lgssn # insmod /lib/modules/2.6.16.60-0.21-default/kernel/drivers/net/lgssn/lgssn.ko
[sles10]:/opt/hpvm_drivers/lgssn # lsmod |grep lgssn
lgssn                 576136  0
[sles10]:/opt/hpvm_drivers/lgssn #
[sles10]:/opt/hpvm_drivers/lgssn # service network restart
Shutting down network interfaces:
    eth0      device: Intel Corporation 82540EM Gigabit Ethernet Controller
    eth0      configuration: eth-id-2a:87:14:5c:f9:ed
    eth0                                                              done
    eth1      device: Hewlett-Packard Company Unknown device 1338
    eth1      configuration: eth-id-66:f3:f8:4e:37:d5
    eth1                                                              done
    eth2      device: Intel Corporation 82540EM Gigabit Ethernet Controller
    eth2      configuration: eth-id-0a:dc:fd:cb:2c:62
    eth2                                                              done
Shutting down service network  .  .  .  .  .  .  .  .  .  .  .  .  .  done
Hint: you may set mandatory devices in /etc/sysconfig/network/config
Setting up network interfaces:
    lo        
    lo       
              IP address: 127.0.0.1/8   
              IP address: 127.0.0.2/8   
Checking for network time protocol daemon (NTPD):                     running
    lo                                                                done
    eth0      device: Intel Corporation 82540EM Gigabit Ethernet Controller
    eth0      configuration: eth-id-2a:87:14:5c:f9:ed
Warning: Could not set up default route via interface
 Command ip route replace to default via 10.31.12.1 returned:
 . RTNETLINK answers: Network is unreachable
 Configuration line: default 10.31.12.1 - -
 This needs NOT to be AN ERROR if you set up multiple interfaces.
 See man 5 routes how to avoid this warning.

Checking for network time protocol daemon (NTPD):                     running
    eth0                                                              done
    eth1      device: Hewlett-Packard Company Unknown device 1338
    eth1      configuration: eth-id-66:f3:f8:4e:37:d5
    eth1      IP address: 10.31.4.16/24   
Warning: Could not set up default route via interface
 Command ip route replace to default via 10.31.12.1 returned:
 . RTNETLINK answers: Network is unreachable
 Configuration line: default 10.31.12.1 - -
 This needs NOT to be AN ERROR if you set up multiple interfaces.
 See man 5 routes how to avoid this warning.

Checking for network time protocol daemon (NTPD):                     running
    eth1                                                              done
    eth2      device: Intel Corporation 82540EM Gigabit Ethernet Controller
    eth2      configuration: eth-id-0a:dc:fd:cb:2c:62
    eth2      IP address: 10.31.12.11/24   
Checking for network time protocol daemon (NTPD):                     running
    eth2                                                              done
Setting up service network  .  .  .  .  .  .  .  .  .  .  .  .  .  .  done
[sles10]:/opt/hpvm_drivers/lgssn #

If you have not configured the network interfaces of the virtual machine yet, shut it down and, from the host, modify each virtual NIC of the guest. Take into account that AVIO Lan drivers are not supported with localnet virtual switches.

root@hpvm-host:~ # hpvmmodify -P sles10 -m network:avio_lan:0,2:vswitch:vlan2:portid:4
root@hpvm-host:~ # hpvmstatus -P sles10 -d
[Virtual Machine Devices]
...
[Network Interface Details]
network:lan:0,0,0x2A87145CF9ED:vswitch:localnet:portid:4
network:avio_lan:0,1,0x66F3F84E37D5:vswitch:vlan1:portid:4
network:avio_lan:0,2,0x0ADCFDCB2C62:vswitch:vlan2:portid:4
...
root@hpvm-host:~ #

Finally, start the virtual machine and check that everything went well and the drivers have been loaded.
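
A quick way to double-check from inside the guest, for instance:

[sles10]:~ # lsmod | grep lgssn
[sles10]:~ # ifconfig eth1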

Juanma

Boot disk structure on Integrity servers

June 24, 2010

The boot disk/disks of every Integrity server are divided into three partitions:

  1. EFI Partition: Contains the necessary tools and files to find and load the appropriate kernel. Here resides, for example, the hpux.efi utility.
  2. OS Partition: In the case of HP-UX it contains the LVM or VxVM structures, the kernel, and any filesystems that play a role during the boot process.
  3. HP Service Partition (HPSP).

EFI Partition

The Extensible Firmware Interface (EFI) partition is subdivided into three main areas:

  • MBR: The Master Boot Record, located at the top of the disk, a legacy Intel structure ignored by EFI.
  • GPT: Every EFI partition is assigned a unique identifier known as a GUID (Globally Unique Identifier). The locations of the GUIDs are stored in the EFI GUID Partition Table, or GPT. This critical structure is replicated at the top and the bottom of the disk.
  • EFI System Partition: This partition contains the OS loader responsible for loading the operating system during the boot process. On HP-UX disks the OS loader is the famous \efi\hpux\hpux.efi file. It also contains the \efi\hpux\auto file, which stores the system boot string, and some utilities as well.
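
If you are curious, the contents of the EFI System Partition can be listed from a running HP-UX system with the efi_ls utility; a quick sketch, assuming the boot disk is c0t0d0 and s1 is its EFI partition:

root@ignite:/ # efi_ls -d /dev/rdsk/c0t0d0s1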

OS Partition

The OS Partition obviously contains the operating system that runs on the server. An HP-UX partition contains a LIF area, a private region and a public region.

The Logical Interchange Format (LIF) boot area stores the following files:

  • ISL. Not used on Integrity.
  • AUTO. Not used on Integrity.
  • HPUX. Not used on Integrity.
  • LABEL. A binary file that contains the records of the locations of /stand and the primary swap.
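
The contents of the LIF area can be listed with the lifls command; for example, assuming the OS partition is s2 of the boot disk:

root@ignite:/ # lifls -l /dev/rdsk/c0t0d0s2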

The private region contains LVM and VxVM configuration information.

And the public region contains the corresponding volumes for:

  • stand: /stand filesystem including the HP-UX kernel.
  • swap: Primary swap space.
  • root: The root filesystem that includes /, /etc, /dev and /sbin.

HP Service Partition

The HP Service Partition, or HPSP, is a FAT-32 filesystem that contains several offline diagnostic utilities to be used on unbootable systems.

Juanma.

Howto enable HP-UX 11iv3 agile naming mode in VxVM

By default Veritas Volume Manager uses the HP-UX legacy naming scheme instead of the agile view; of course, for any HP-UX sysadmin this is completely unacceptable ;-) Below is a small procedure to change this behavior.

Display the VxVM disk information and get the current naming scheme.

root@robin:~# vxdisk list
DEVICE       TYPE            DISK         GROUP        STATUS
c0t0d0s2     auto:LVM        -            -            LVM
c0t1d0       auto:LVM        -            -            LVM
c0t2d0       auto:cdsdisk    labdg01      labdg        online
c0t3d0       auto:cdsdisk    labdg02      labdg        online
c0t4d0       auto:cdsdisk    labdg03      labdg        online
c0t5d0       auto:none       -            -            online invalid
c0t6d0       auto:none       -            -            online invalid
c0t7d0       auto:none       -            -            online invalid
c0t8d0s2     auto:hpdisk     rootdisk01   rootdg       online
c0t9d0s2     auto:hpdisk     rootdisk02   rootdg       online
root@robin:~#
root@robin:~# vxddladm get namingscheme
NAMING_SCHEME       PERSISTENCE         MODE                
===============================================
OS Native           Yes                 Legacy              
root@robin:~#

As you can see the mode is set to legacy and the disks are shown with their legacy device names. To change this, use the vxddladm command again.

root@robin:~# vxddladm set namingscheme=osn mode=new

The parameters used are namingscheme and mode. The available options for the first are:

  • ebn – Enclosure based names.
  • osn – Operating system names.

If ebn is used, neither legacy nor new mode can be set, since the hardware names provided by the disk array will be used; so use osn as the namingscheme.

The second parameter is mode, which of course defines the naming model that will be used in the osn naming scheme. The following three values can be set:

  • default
  • legacy
  • new

Now check the change by executing the vxdisk and vxddladm commands.

root@robin:~# vxdisk list
DEVICE       TYPE            DISK         GROUP        STATUS
disk4_p2     auto:LVM        -            -            LVM
disk6        auto:LVM        -            -            LVM
disk8        auto:cdsdisk    labdg01      labdg        online
disk10       auto:cdsdisk    labdg02      labdg        online
disk12       auto:cdsdisk    labdg03      labdg        online
disk14       auto:none       -            -            online invalid
disk16       auto:none       -            -            online invalid
disk18       auto:none       -            -            online invalid
disk20_p2    auto:hpdisk     rootdisk01   rootdg       online
disk22_p2    auto:hpdisk     rootdisk02   rootdg       online
root@robin:~#
root@robin:~# vxddladm get namingscheme
NAMING_SCHEME       PERSISTENCE         MODE                
===============================================
OS Native           Yes                 New                 
root@robin:~#

Of course the naming scheme can be set back to the legacy scheme using the same procedure.
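
For example, to revert:

root@robin:~# vxddladm set namingscheme=osn mode=legacy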

Juanma.

Categories: HP-UX

Basic volume tasks with the HP Lefthand CLIQ command-line

June 16, 2010

Following the series of posts about the HP Lefthand SAN systems, in this post I will explain the basic volume operations with CLIQ, the HP Lefthand SAN/iQ command line.

I used the Lefthand VSA and the ESX4 servers from my home lab to illustrate the procedure. The commands are executed locally in the VSA via SSH. The following tasks will be covered:

  • Volume creation.
  • Assign a volume to one or more hosts.
  • Volume deletion.

Volume creation

The command to use is createVolume. The available options for this command are:

  • Volumename.
  • Clustername: The cluster where the volume will be created.
  • Replication: The replication level, from 1 (none) to 4 (4-way replication).
  • Thinprovision: 1 (thin provisioning) or 0 (full provisioning).
  • Description.
  • Size: The size can be set in MB, GB or TB.

CLIQ>createVolume volumeName=vjm-cluster2 size=2GB clusterName=iSCSI-CL replication=1 thinProvision=1 description="vep01-02 datastore"

SAN/iQ Command Line Interface, v8.1.00.0047
(C) Copyright 2007-2009 Hewlett-Packard Development Company, L.P.

RESPONSE
 result         0
 processingTime 2539
 name           CliqSuccess
 description    Operation succeeded

CLIQ>

Assign a volume to the hosts

The command to use in this task is assignVolume. A few parameters are accepted by this command:

  • Volumename.
  • Initiator: The host IQNs. If the volume is going to be presented to more than one host, the IQNs of the servers must be separated by semicolons. One important tip: the operation must be done in one command. You cannot assign the volume to a host in one command and to a new host in a second command; the second one will overwrite the first instead of adding the volume to one more host.
  • Accessrights: The default is read-write (rw); read-only (r) or write-only (w) can also be set.

CLIQ>assignVolume volumeName=vjm-cluster2 initiator=iqn.1998-01.com.vmware:vep01-45602bf3;iqn.1998-01.com.vmware:vep02-5f779b32

SAN/iQ Command Line Interface, v8.1.00.0047
(C) Copyright 2007-2009 Hewlett-Packard Development Company, L.P.

RESPONSE
 result         0
 processingTime 4069
 name           CliqSuccess
 description    Operation succeeded

CLIQ>

Now that the volume is created and assigned to several servers, check its configuration with getVolumeInfo.

CLIQ>getVolumeInfo volumeName=vjm-cluster2

SAN/iQ Command Line Interface, v8.1.00.0047
(C) Copyright 2007-2009 Hewlett-Packard Development Company, L.P.

RESPONSE
 result         0
 processingTime 1480
 name           CliqSuccess
 description    Operation succeeded

 VOLUME
 thinProvision  true
 stridePages    32
 status         online
 size           2147483648
 serialNumber   17a1c11e939940a4f7e91ee43654c94b000000000000006b
 scratchQuota   4194304
 reserveQuota   536870912
 replication    1
 name           vjm-cluster2
 minReplication 1
 maxSize        14587789312
 iscsiIqn       iqn.2003-10.com.lefthandnetworks:vdn:107:vjm-cluster2
 isPrimary      true
 initialQuota   536870912
 groupName      VDN
 friendlyName   
 description    vep01-02 datastore
 deleting       false
 clusterName    iSCSI-CL
 checkSum       false
 bytesWritten   18087936
 blockSize      1024
 autogrowPages  512

 PERMISSION
 targetSecret    
 loadBalance     true
 iqn             iqn.1998-01.com.vmware:vep01-45602bf3
 initiatorSecret
 chapRequired    false
 chapName        
 authGroup       vep01
 access          rw

 PERMISSION
 targetSecret    
 loadBalance     true
 iqn             iqn.1998-01.com.vmware:vep02-5f779b32
 initiatorSecret
 chapRequired    false
 chapName        
 authGroup       vep02
 access          rw

CLIQ>

If you refresh the storage configuration of the ESX hosts through the vSphere Client, the new volume will be displayed.

Volume deletion

Finally, we are going to delete another volume that is no longer in use by the servers of my lab.

CLIQ>deleteVolume volumeName=testvolume

SAN/iQ Command Line Interface, v8.1.00.0047
(C) Copyright 2007-2009 Hewlett-Packard Development Company, L.P.

This operation is potentially irreversible.  Are you sure? (y/n) 

RESPONSE
 result         0
 processingTime 1416
 name           CliqSuccess
 description    Operation succeeded

CLIQ>
CLIQ>getvolumeinfo volumename=testvolume

SAN/iQ Command Line Interface, v8.1.00.0047
(C) Copyright 2007-2009 Hewlett-Packard Development Company, L.P.

RESPONSE
 result         8000100c
 processingTime 1201
 name           CliqVolumeNotFound
 description    Volume 'testvolume' not found

CLIQ>

And we are done. As always comments are welcome :-)

Juanma.

Categories: Storage

Identifying the HP EVA LUNs on HP-UX 11iv3

June 15, 2010

Yesterday’s post about CLARiiON reminded me of a similar issue I observed when the storage array is an HP EVA. If you ask for the disk serial number with scsimgr you always get the same number; in fact, this number is the serial of the HSV controller.

The key to match your disk in the HP-UX host with the LUN provided by the EVA arrays is the wwid attribute of the disk.

root@ignite:/ # scsimgr get_attr -D /dev/rdisk/disk10 -a wwid

        SCSI ATTRIBUTES FOR LUN : /dev/rdisk/disk10

name = wwid
current = 0x600508b40006cb700000600008bb0000
default =
saved = 

root@ignite:/ #

If you look for this value in Command View you will see that it is the same as the World Wide LUN Name and the UUID.
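
If you have to match many LUNs, a small loop can dump the wwid of every disk at once; a minimal sketch, assuming all the agile device files live under /dev/rdisk:

root@ignite:/ # for d in /dev/rdisk/disk*; do echo $d; scsimgr get_attr -D $d -a wwid | grep current; done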

### UPDATE ###

Thanks to my friend Jean and to Greg, who reminded me that, as Greg said in his comment, it is much easier to match the World Wide LUN Name with the evainfo tool. Thanks to both of you :-)

root@hpux-server # evainfo -aP

Devicefile                      Array                   WWNN                            Capacity        Controller/Port/Mode
/dev/rdisk/disk20       5001-4380-04C7-2D90 6005-08B4-000F-3EED-0000-5000-003A-0000      204800MB       Ctl-A/FP-2/Optimized
/dev/rdisk/disk21       5001-4380-04C7-2D90 6005-08B4-000F-3EED-0000-5000-0042-0000      204800MB       Ctl-A/FP-1/Optimized
/dev/rdisk/disk22       5001-4380-04C7-2D90 6005-08B4-000F-3EED-0000-5000-004A-0000       20480MB       Ctl-A/FP-1/Optimized
/dev/rdisk/disk23       5001-4380-04C7-2D90 6005-08B4-000F-3EED-0000-5000-004E-0000       71680MB       Ctl-A/FP-2/Optimized
/dev/rdisk/disk24       5001-4380-04C7-2D90 6005-08B4-000F-3EED-0000-5000-0052-0000       10240MB       Ctl-A/FP-1/Optimized
/dev/rdisk/disk25       5001-4380-04C7-2D90 6005-08B4-000F-3EED-0000-5000-0056-0000       10240MB       Ctl-A/FP-1/Optimized
/dev/rdisk/disk26       5001-4380-04C7-2D90 6005-08B4-000F-3EED-0000-5000-005A-0000       20480MB       Ctl-A/FP-1/Optimized
/dev/rdisk/disk27       5001-4380-04C7-2D90 6005-08B4-000F-3EED-0000-5000-005E-0000      245760MB       Ctl-A/FP-1/Optimized

Where can I get EVAinfo? Like Greg said, EVAinfo has been distributed on the HP StorageWorks Storage System Scripting Utility (SSSU) CD since version 8.0. Unfortunately I couldn’t find a public download URL yet, but the CD is distributed with the hardware, so if you own an EVA you probably already have the media.

Thanks to Jean, man it seems that I owe you more than a couple of beers ;-D, here is the URL of the CD. You will find the EVAinfo utility inside the HP StorageWorks Command View SSSU v9.2 software ISO.

https://h20392.www2.hp.com/portal/swdepot/displayProductInfo.do?productNumber=CommandViewEVA9.2

Juanma.

Howto identify the EMC LUN ID in HP-UX 11iv3 – CLARiiON arrays

June 14, 2010

In my previous post about EMC storage I showed a procedure to identify the ID of a LUN presented to an 11iv3 host without PowerPath, which is not recommended to be installed on 11.31. I stated that I had only tested the procedure with Symmetrix DMX arrays.

Jean Mesquida, a reader of this blog and a friend, tried the same procedure with CLARiiON cabinets and discovered that it didn’t work because the serial number attribute was the same for every disk. After that he performed some tests and provided a similar solution using the EMC inquiry tool.

The inquiry utility can be downloaded from the following URL:

ftp://ftp.emc.com/pub/symm3000/inquiry/

Following are his results; all credit for this post goes to him, I’m just publishing his work here. I also want to thank my friend Jesus at EMC who confirmed Jean’s procedure. Many thanks to both of you, without people like you this blog wouldn’t be possible :-)

And now the procedure:

Launch an inq against the CLARiiON array.

[hpux-server]root:/ #/usr/local/bin/inq -clariion
Inquiry utility, Version V7.3-891 (Rev 2.0)      (SIL Version V6.5.2.0 (Edit Level 891)
Copyright (C) by EMC Corporation, all rights reserved.
For help type inq -h.

................................................................

-------------------------------------------------------------------------------------------------
DEVICE              :VEND    :PROD            :REV   :SER NUM    :CAP(kb)      :VLU :CLUN:State
-------------------------------------------------------------------------------------------------
...
/dev/rdsk/c7t0d1    :DGC     :CX4-240WDR5     :HP03  :Ch2 CONT   :    83886080:   1:  2a:ASSIGNED
/dev/rdsk/c7t0d2    :DGC     :CX4-240WDR5     :HP03  :Ch2 CONT   :    83886080:   2:  2b:ASSIGNED
/dev/rdsk/c7t0d3    :DGC     :CX4-240WDR5     :HP03  :Ch2 CONT   :     5242880:   3:  2c:ASSIGNED
/dev/rdsk/c7t0d4    :DGC     :CX4-240WDR5     :HP03  :Ch2 CONT   :     5242880:   4:  2d:ASSIGNED
/dev/rdsk/c7t0d5    :DGC     :CX4-240WDR5     :HP03  :Ch2 CONT   :     5242880:   5:  2e:ASSIGNED
/dev/rdsk/c7t0d6    :DGC     :CX4-240WDR5     :HP03  :Ch2 CONT   :     5242880:   6:  2f:ASSIGNED
/dev/rdsk/c7t0d7    :DGC     :CX4-240WDR5     :HP03  :Ch2 CONT   :   419430400:   7:  30:ASSIGNED
/dev/rdsk/c8t0d1    :DGC     :CX4-240WDR5     :HP03  :Ch2 CONT   :    83886080:   1:  2a:ASSIGNED
/dev/rdsk/c8t0d2    :DGC     :CX4-240WDR5     :HP03  :Ch2 CONT   :    83886080:   2:  2b:ASSIGNED
/dev/rdsk/c8t0d3    :DGC     :CX4-240WDR5     :HP03  :Ch2 CONT   :     5242880:   3:  2c:ASSIGNED
/dev/rdsk/c8t0d4    :DGC     :CX4-240WDR5     :HP03  :Ch2 CONT   :     5242880:   4:  2d:ASSIGNED
/dev/rdsk/c8t0d5    :DGC     :CX4-240WDR5     :HP03  :Ch2 CONT   :     5242880:   5:  2e:ASSIGNED
/dev/rdsk/c8t0d6    :DGC     :CX4-240WDR5     :HP03  :Ch2 CONT   :     5242880:   6:  2f:ASSIGNED
/dev/rdsk/c8t0d7    :DGC     :CX4-240WDR5     :HP03  :Ch2 CONT   :   419430400:   7:  30:ASSIGNED
/dev/rdsk/c9t0d1    :DGC     :CX4-240WDR5     :HP03  :Ch2 CONT   :    83886080:   1:  2a:ASSIGNED
/dev/rdsk/c9t0d2    :DGC     :CX4-240WDR5     :HP03  :Ch2 CONT   :    83886080:   2:  2b:ASSIGNED
/dev/rdsk/c9t0d3    :DGC     :CX4-240WDR5     :HP03  :Ch2 CONT   :     5242880:   3:  2c:ASSIGNED
/dev/rdsk/c9t0d4    :DGC     :CX4-240WDR5     :HP03  :Ch2 CONT   :     5242880:   4:  2d:ASSIGNED
/dev/rdsk/c9t0d5    :DGC     :CX4-240WDR5     :HP03  :Ch2 CONT   :     5242880:   5:  2e:ASSIGNED
/dev/rdsk/c9t0d6    :DGC     :CX4-240WDR5     :HP03  :Ch2 CONT   :     5242880:   6:  2f:ASSIGNED
/dev/rdsk/c9t0d7    :DGC     :CX4-240WDR5     :HP03  :Ch2 CONT   :   419430400:   7:  30:ASSIGNED
/dev/rdsk/c10t0d1   :DGC     :CX4-240WDR5     :HP03  :Ch2 CONT   :    83886080:   1:  2a:ASSIGNED
/dev/rdsk/c10t0d2   :DGC     :CX4-240WDR5     :HP03  :Ch2 CONT   :    83886080:   2:  2b:ASSIGNED
/dev/rdsk/c10t0d3   :DGC     :CX4-240WDR5     :HP03  :Ch2 CONT   :     5242880:   3:  2c:ASSIGNED
/dev/rdsk/c10t0d4   :DGC     :CX4-240WDR5     :HP03  :Ch2 CONT   :     5242880:   4:  2d:ASSIGNED
/dev/rdsk/c10t0d5   :DGC     :CX4-240WDR5     :HP03  :Ch2 CONT   :     5242880:   5:  2e:ASSIGNED
/dev/rdsk/c10t0d6   :DGC     :CX4-240WDR5     :HP03  :Ch2 CONT   :     5242880:   6:  2f:ASSIGNED
/dev/rdsk/c10t0d7   :DGC     :CX4-240WDR5     :HP03  :Ch2 CONT   :   419430400:   7:  30:ASSIGNED
...

We are going to use the last disk (c10t0d7). Take a look at the CLUN column; this column gives the LUN ID (0x30 = 48, for instance) on the CLARiiON array. It is Jean’s understanding, and I fully agree with him, that the c10t0d7 disk matches LUN 48 on this CLARiiON array.
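
To convert the hexadecimal CLUN to decimal you can use, for instance, the shell's printf:

[hpux-server]root:/ # printf "%d\n" 0x30
48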

Finally, and to be more accurate, I modified the title of the other post to reflect that it only applies to Symmetrix arrays.

Juanma.

Categories: EMC, HP-UX, Storage

Retrieving Veritas license information

If you need to get the licensing information from the Veritas products installed on an HP-UX system, just execute the vxlicrep command.

Additionally, if you want to see which features of Veritas Volume Manager are available, you can do it with vxdctl license.
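
Both are simple one-liners; for example (output omitted):

root@robin:~# vxlicrep
root@robin:~# vxdctl license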

Juanma.

Categories: HP-UX

Strange NUMA errors on my virtualized ESX 4 hosts

The first time I installed an ESX 4 Update 1 on VMware Workstation an awful red message reporting some NUMA errors appeared on the main console screen.

At that time I decided to ignore it. It didn’t interfere with the normal functioning of the ESXs, and since I never went back to the console of the ESX, I just fired up the VM in Workstation and then started to work from the vSphere Client, so for a long time the error fell into oblivion.

This week I decided to install a new ESX4 and a couple of ESXi4 VMs in my home lab and the error appeared again, and this time the geek inside me couldn’t resist. After doing some research I found this VMware Knowledge Base article, which also pointed to a Dell document; both of them said that the error could be ignored because there is no real loss of service, something that I already knew x-). I finally found the solution in a VMTN post.

From the vSphere Client go to Configuration -> Software -> Advanced Settings and in the VMkernel area disable the VMkernel.Boot.userNUMAInfo setting.

After that, reboot your ESX and you will see that the error has disappeared.

I also noticed that the error is present on the virtualized ESXi as well, but to see it from the ESXi console press Alt-F11 and you will get to a screen almost identical to the one from the first screenshot.

Juanma.

CLIQ – The HP Lefthand SAN/iQ command-line

It seems that a very popular post, if not the most popular one, is the one about my first experiences with the P4000 virtual storage appliance, and because of that I decided to go deeper inside the VSA and the P4000 series and write about it.

The first post of this series is going to be about one of the lesser-known features of the P4000 series: its command line, known as CLIQ.

Typically any sysadmin would configure and manage P4000 storage nodes through the HP Lefthand Centralized Management Console graphical interface. But there is another way to do it: the SAN/iQ software has a very powerful command line that allows you to perform almost any task available in the CMC.

The command line can be accessed in two ways: remotely from any Windows machine or via SSH.

  • SSH access

To access the CLIQ via SSH, the connection has to be made to TCP port 16022 instead of the standard SSH port, using the management group administration user. The connection can be established to any storage node of the group, and the operation will apply to the whole group.
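
For example, assuming admin is the management group administration user and vsa01 is one of the storage nodes:

$ ssh -p 16022 admin@vsa01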

  • Remote access

The other way to use the CLIQ shell is from a Windows host with the HP Lefthand CLI shell installed on it. The software is included in the SAN/iQ Management Software DVD and can be obtained, along with other tools and documentation for the P4000 series, from the following URL: http://www.hp.com/go/p4000downloads.

Regarding the use of the remote CLIQ, there is one main difference with the on-node CLIQ: every command must include the address or DNS name of the storage node where the task is going to be performed and, at least, the username. The password can also be included, but for security reasons it is best not to do it and be prompted instead. An encrypted key file with the necessary credentials can be used if you don’t want to use the username and password parameters within the command.
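
A sketch of a remote command, assuming a storage node named vsa01 and the admin user (the password will be prompted since it is not given on the command line):

C:\> cliq getClusterInfo login=vsa01 userName=admin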

Of course, this kind of access is perfect for scripting and automating some tasks.

Juanma.

HPVM memory management

Like other virtualization software, HP Integrity Virtual Machines comes with several memory management capabilities. In this new post about HPVM I will try to explain what these capabilities are, their purpose, and how to configure and use them.

  • Dynamic memory

Dynamic memory is an HPVM feature that allows you to resize the amount of memory of a guest without rebooting it. The HPVM manual mentions an example in which dynamic memory is applicable:

…this feature allows a guest that is a Serviceguard node to be used as a standby server for multiple Serviceguard packages. When a package fails over to the guest, the guest memory can be changed to suit the requirements of the package before, during, and after the failover process.

Dynamic memory is only available on HP-UX guests with the guest management software installed.

Let’s see how to enable and configure dynamic memory.

The first thing to do is to enable it.

root@hinata:~ # hpvmmodify -P batman -x ram_dyn_type=driver

There are three possible values for the ram_dyn_type option:

  1. None: Self explanatory.
  2. Any: On the next boot the guest will check whether dynamic memory is enabled and the driver is loaded. If the dynamic memory driver is in place, the option will change its value to driver.
  3. Driver: When the ram_dyn_type is set to driver it means that every dynamic memory control and range is functional.

Specify the minimum amount of RAM to be allocated to the guest; the default unit is MB but GB can also be used.

root@hinata:~ # hpvmmodify -P batman -x ram_dyn_min=1024

Next set the maximum memory.

root@hinata:~ # hpvmmodify -P batman -x ram_dyn_max=4G

Set the amount of memory to be allocated when the guest starts; this value must be greater than the minimum one.

root@hinata:~ # hpvmmodify -P batman -x ram_dyn_target_start=2048
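
By the way, the four options can also be combined in a single hpvmmodify invocation; for example:

root@hinata:~ # hpvmmodify -P batman -x ram_dyn_type=driver -x ram_dyn_min=1024 -x ram_dyn_max=4G -x ram_dyn_target_start=2048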

Check the status of the guest to see the newly configured options.

root@hinata:~ # hpvmstatus -r -P batman
[Virtual Machine entitlements]
 Percent       Cumulative
#VCPUs Entitlement Maximum   Usage            Usage
====== =========== ======= ======= ================
 6       10.0%  100.0%    0.0%                0 

[Virtual CPU details]
vCPU Cumulative       Guest   Host    Cycles   Sampling
ID   Usage            percent percent achieved Interval
==== ================ ======= ======= ======== ===========
 0                0    0.0%    0.0%     0MHz   0 seconds
 1                0    0.0%    0.0%     0MHz   0 seconds
 2                0    0.0%    0.0%     0MHz   0 seconds
 3                0    0.0%    0.0%     0MHz   0 seconds
 4                0    0.0%    0.0%     0MHz   0 seconds
 5                0    0.0%    0.0%     0MHz   0 seconds 

[Virtual Machine Memory Entitlement]
DynMem  Memory   DynMem  DynMem DynMem  Comfort Total    Free   Avail    Mem    AMR     AMR
 Min   Entitle   Max    Target Current   Min   Memory  Memory  Memory  Press  Chunk   State
======= ======= ======= ======= ======= ======= ======= ======= ======= ===== ======= ========
 1024MB     0MB     4GB  4096MB  4096MB     0MB     4GB     0MB     0MB   0       0MB DISABLED

Once dynamic memory is properly configured, from the VM host, the memory of a guest can be manually resized to a value between the ram_dyn_min and ram_dyn_max parameters in increments of the default chunk size, which is 64MB.

root@hinata:~ # hpvmmodify -P batman -x ram_target=3136

There is one final option named dynamic_memory_control. With this option the system administrator can allow the root user of the guest to change dynamic memory options, from the guest side, while it is running. The dynamic_memory_control option is incompatible with automatic memory reallocation.
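
Enabling it is just another hpvmmodify option; a sketch, assuming the usual 0/1 toggle used by the other HPVM options:

root@hinata:~ # hpvmmodify -P batman -x dynamic_memory_control=1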

Just to show a small example from the guest side, to view the dynamic memory configuration:

root@batman:~# hpvmmgmt -V -l ram
[Dynamic Memory Information]
=======================================
Type                    : driver
Minimum memory          : 1024 MB
Target memory           : 4090 MB
Maximum memory          : 4096 MB
Current memory          : 4090 MB
Comfortable minimum     : 1850 MB
Boot memory             : 4090 MB
Free memory             : 2210 MB
Available memory        : 505 MB
Memory pressure         : 0
Memory chunksize        : 65536 KB
Driver Mode(s)          : STARTED ENABLED 

root@batman:~#
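
With dynamic_memory_control enabled, the guest root user can also trigger a resize from inside the virtual machine; a sketch, assuming hpvmmgmt accepts the same ram_target syntax as hpvmmodify:

root@batman:~# hpvmmgmt -x ram_target=2048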

  • Automatic memory reallocation

The new HPVM 4.2 version from March expands dynamic memory with an interesting feature called Automatic Memory Reallocation. This new feature makes it possible to automatically change the amount of memory used by a guest based on load conditions.

Automatic memory reallocation is only supported on HP-UX guests with dynamic memory enabled and with the guest management software installed.

Automatic memory reallocation can be configured in two ways:

  1. System-wide values.
  2. On a per-VM basis.

Neither way excludes the other: you can set the system-wide parameters for every VM and later customize some of the virtual machines, adjusting their parameters to any additional requirement.

Automatic memory reallocation is enabled by default on the VM host. To verify it, open the file /etc/rc.config.d/hpvmconf and check that HPVMAMRENABLE=0 is not set. The presence of hpvmamrd, the automatic memory reallocation daemon, can also be checked with a simple ps.
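
For example:

root@hinata:~ # grep HPVMAMRENABLE /etc/rc.config.d/hpvmconf
root@hinata:~ # ps -ef | grep hpvmamrd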

In the same file two system-wide tunables can be configured.

  1. HPVMCHUNKSIZE
  2. HPVMAMRWAITTIME

The first parameter determines the number of megabytes by which the guest will attempt to grow if there is memory pressure. If the parameter is not set, the default value is 256MB. The best practice is to make this parameter a multiple of the dynamic memory chunk size.

The second one sets the maximum number of seconds that any VM startup process will wait for memory before reporting a failure due to insufficient memory. The default value is 60 seconds and the maximum configurable is 600 seconds.

With the above parameters set to their defaults or customized, the next step is to enable automatic memory reallocation on the virtual machines. The AMR feature is disabled by default on the VMs. To enable it, use the amr_enable option.

root@hinata:~ # hpvmmodify -P batman -x amr_enable=1

Now set the memory entitlement for the virtual machine. The entitlement is the minimum amount of RAM guaranteed to the virtual machine.

root@hinata:~ # hpvmmodify -P batman -x ram_dyn_entitlement=1500

Take into account that if AMR is not enabled the entitlement can still be set, but it will not work, and any VM without the entitlement parameter set will be ignored by automatic memory reallocation.

The entitlement value can be modified online by the system administrator at any time, but there are some rules that apply:

  1. If there is not enough memory to grow the VM memory to the specified entitlement, the operation will fail.
  2. The memory of a virtual machine cannot be grown beyond its maximum memory.
  3. The virtual machine memory always has to be set to a value between the ram_dyn_min and ram_dyn_max parameters, no more, no less.

When the memory of a guest is resized, the HPVMCHUNKSIZE value is used by default, but a per-VM chunk size can also be set. To do so, use the amr_chunk_size parameter.

root@hinata:~ # hpvmmodify -P batman -x amr_chunk_resize=512

As with the system-wide parameter, the recommendation is to set the chunk size to a multiple of the dynamic memory chunk size.

Finally, to display the configuration and the current use of the virtual machine resource entitlements, use hpvmstatus -r.

root@hinata:~ # hpvmstatus -r
[Virtual Machine Resource Entitlement]
[Virtual CPU entitlement]
 Percent       Cumulative
Virtual Machine Name VM #  #VCPUs Entitlement Maximum   Usage            Usage
==================== ===== ====== =========== ======= ======= ================
rh-www                   1      4       50.0%  100.0%    0.0%                0
sql-dev                  2      4       50.0%  100.0%    0.3%         21611866
rhino                    3      4       50.0%  100.0%    0.0%                0
batman                   4      8       20.0%  100.0%    0.8%          1318996
robin                    5      8       20.0%  100.0%    0.8%            97993
falcon                   6      2       10.0%  100.0%    0.0%                0 

[Virtual Machine Memory Entitlement]
 DynMem  Memory   DynMem  DynMem DynMem  Comfort Total    Free   Avail    Mem    AMR     AMR
Virtual Machine Name  VM #   Min   Entitle   Max    Target Current   Min   Memory  Memory  Memory  Press  Chunk   State
==================== ===== ======= ======= ======= ======= ======= ======= ======= ======= ======= ===== ======= ========
rh-www                   1   512MB     0MB     8GB  8192MB  8192MB     0MB     8GB     0MB     0MB   0       0MB DISABLED
sql-dev                  2   512MB     0MB     8GB  8192MB  8192MB     0MB     8GB     0MB     0MB   0       0MB DISABLED
rhino                    3  1024MB  1500MB     6GB  2048MB  6144MB     0MB     6GB     0MB     0MB   0     256MB  ENABLED
batman                   4  1024MB  1500MB     4GB  4090MB  4090MB  1850MB     4GB  2214MB   500MB   0     256MB  ENABLED
robin                    5  1024MB  1500MB     4GB  4090MB  4090MB  1914MB     4GB  2165MB   531MB   0     256MB  ENABLED
falcon                   6   512MB     0MB     6GB  6144MB  6144MB     0MB     6GB     0MB     0MB   0       0MB DISABLED

I hope this helps to clarify how HPVM manages the memory of the virtual machines and how to customize its configuration. As always, any comments are welcome :-)

Juanma.
