Archives For February 2011

This post discusses iSCSI initiator configuration in RedHat Enterprise Linux 5; the method is also applicable to all RHEL5 derivatives. The iSCSI LUNs will be provided by an HP P4000 array.

First of all we need to get and install the iscsi-initiator-utils RPM package. You can use yum to install it from any supported repository for CentOS or RHEL, or you can download the package from RedHat Network if you have a valid RHN account and your system doesn't have an internet connection.

[root@rhel5 ~]# rpm -ivh /tmp/iscsi-initiator-utils-6.2.0.871-0.16.el5.x86_64.rpm
Preparing...                ########################################### [100%]
   1:iscsi-initiator-utils  ########################################### [100%]
[root@rhel5 ~]#
[root@rhel5 ~]# rpm -qa | grep iscsi
iscsi-initiator-utils-6.2.0.871-0.16.el5
[root@rhel5 ~]# rpm -qi iscsi-initiator-utils-6.2.0.871-0.16.el5
Name        : iscsi-initiator-utils        Relocations: (not relocatable)
Version     : 6.2.0.871                         Vendor: Red Hat, Inc.
Release     : 0.16.el5                      Build Date: Tue 09 Mar 2010 09:16:29 PM CET
Install Date: Wed 16 Feb 2011 11:34:03 AM CET      Build Host: x86-005.build.bos.redhat.com
Group       : System Environment/Daemons    Source RPM: iscsi-initiator-utils-6.2.0.871-0.16.el5.src.rpm
Size        : 1960412                          License: GPL
Signature   : DSA/SHA1, Wed 10 Mar 2010 04:26:37 PM CET, Key ID 5326810137017186
Packager    : Red Hat, Inc. <http://bugzilla.redhat.com/bugzilla>
URL         : http://www.open-iscsi.org
Summary     : iSCSI daemon and utility programs
Description :
The iscsi package provides the server daemon for the iSCSI protocol,
as well as the utility programs used to manage it. iSCSI is a protocol
for distributed disk access using SCSI commands sent over Internet
Protocol networks.
[root@rhel5 ~]#

Next we are going to configure the initiator. The iSCSI initiator is composed of two services, iscsi and iscsid; enable them to start at system startup using chkconfig.

[root@rhel5 ~]# chkconfig iscsi on
[root@rhel5 ~]# chkconfig iscsid on
[root@rhel5 ~]#
[root@rhel5 ~]# chkconfig --list | grep iscsi
iscsi           0:off   1:off   2:on    3:on    4:on    5:on    6:off
iscsid          0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@rhel5 ~]#
[root@rhel5 ~]#

Once iSCSI is configured, start the service.

[root@rhel5 ~]# service iscsi start
iscsid is stopped
Starting iSCSI daemon:                                     [  OK  ]
                                                           [  OK  ]
Setting up iSCSI targets: iscsiadm: No records found!
                                                           [  OK  ]
[root@rhel5 ~]#
[root@rhel5 ~]# service iscsi status
iscsid (pid  14170) is running...
[root@rhel5 ~]#

From the P4000 CMC we need to add the server to the management group configuration, just as we would with any other server.

The server IQN can be found in the file /etc/iscsi/initiatorname.iscsi.

[root@cl-node1 ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:2551bf29b48
[root@cl-node1 ~]#

Create any iSCSI volumes you need in the P4000 arrays and assign them to the RedHat system. Then, to discover the presented LUNs, run the iscsiadm command from the Linux server.

[root@rhel5 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.126.60
192.168.126.60:3260,1 iqn.2003-10.com.lefthandnetworks:mlab:62:lv-rhel01
[root@rhel5 ~]#
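If you would rather not restart the whole iscsi service later, a single discovered target can also be logged in with iscsiadm's node mode. The sketch below reuses the IQN and portal discovered above; it is an assumption of mine, not a step from the original procedure, and by default (DRY_RUN=1) it only prints the command so you can review it before running it against a real array.

```shell
#!/bin/sh
# Log in to one discovered iSCSI target without restarting the service.
# TARGET and PORTAL are the values returned by the discovery above;
# adjust them for your environment.
TARGET="iqn.2003-10.com.lefthandnetworks:mlab:62:lv-rhel01"
PORTAL="192.168.126.60:3260"

# Build the iscsiadm login command for a given target and portal.
login_cmd() {
    printf 'iscsiadm -m node -T %s -p %s --login\n' "$1" "$2"
}

if [ "${DRY_RUN:-1}" = "1" ]; then
    login_cmd "$TARGET" "$PORTAL"      # just show what would run
else
    eval "$(login_cmd "$TARGET" "$PORTAL")"
fi
```

The matching `--logout` form of the same command disconnects the session again.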

Restart the iSCSI initiator to make the new block device available to the operating system.

[root@rhel5 ~]# service iscsi restart
Stopping iSCSI daemon:
iscsid dead but pid file exists                            [  OK  ]
Starting iSCSI daemon:                                     [  OK  ]
                                                           [  OK  ]
Setting up iSCSI targets: Logging in to [iface: default, target: iqn.2003-10.com.lefthandnetworks:mlab:62:lv-rhel01, portal: 192.168.126.60,3260]
Login to [iface: default, target: iqn.2003-10.com.lefthandnetworks:mlab:62:lv-rhel01, portal: 192.168.126.60,3260]: successful
                                                           [  OK  ]
[root@rhel5 ~]#

Then check that the new disk is available; I used lsscsi, but fdisk -l will also do.

[root@rhel5 ~]# lsscsi
[0:0:0:0]    disk    VMware,  VMware Virtual S 1.0   /dev/sda
[2:0:0:0]    disk    LEFTHAND iSCSIDisk        9000  /dev/sdb
[root@rhel5 ~]#
[root@rhel5 ~]# fdisk -l /dev/sdb 

Disk /dev/sdb: 156.7 GB, 156766306304 bytes
255 heads, 63 sectors/track, 19059 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdb doesn't contain a valid partition table
[root@rhel5 ~]#

At this point the iSCSI configuration is done; the new LUNs will remain available across system reboots as long as the iscsi service is enabled.
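If you create a filesystem on the new LUN and want it mounted automatically at boot, the mount has to wait until the network and the iscsi service are up. A sketch of the /etc/fstab entry, assuming a hypothetical /dev/sdb1 partition formatted as ext3 and a /data mount point:

```
# /etc/fstab - example entry for an iSCSI-backed filesystem.
# _netdev delays the mount until the network is available;
# /dev/sdb1, ext3 and /data are assumptions for this example.
/dev/sdb1    /data    ext3    _netdev    0 0
```

Note that /dev/sdX names can change between reboots when more LUNs are presented, so mounting by filesystem label or by a persistent device name is safer in production.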

Juanma.

Anyone with some experience and knowledge of VMware HA knows how to perform a Reconfigure for HA operation on a host from the vSphere Client, and I'm no exception to that rule. However, I had never done it with PowerCLI.

I created a new cluster in my homelab and hit a problem in one of the hosts. I fixed the problem, put my mind to work, and after an hour or so digging through PowerCLI and the vSphere API Reference Documentation I came up with the following easy way to do it.

First we are going to create a variable containing the configuration of the ESXi host I wanted to reconfigure.

C:\
[vSphere PowerCLI] % $vmhost = Get-VMHost esxi06.vjlab.local
C:\
[vSphere PowerCLI] %
C:\
[vSphere PowerCLI] % $vmhost | Format-List

State                 : Connected
ConnectionState       : Connected
PowerState            : PoweredOn
VMSwapfileDatastoreId :
VMSwapfilePolicy      : Inherit
ParentId              : ClusterComputeResource-domain-c121
IsStandalone          : False
Manufacturer          : VMware, Inc.
Model                 : VMware Virtual Platform
NumCpu                : 2
CpuTotalMhz           : 5670
CpuUsageMhz           : 869
MemoryTotalMB         : 2299
MemoryUsageMB         : 868
ProcessorType         : Intel(R) Core(TM)2 Quad CPU    Q9550  @ 2.83GHz
HyperthreadingActive  : False
TimeZone              : UTC
Version               : 4.1.0
Build                 : 260247
Parent                : cluster3
VMSwapfileDatastore   :
StorageInfo           : HostStorageSystem-storageSystem-143
NetworkInfo           : esxi06:vjlab.local
DiagnosticPartition   : mpx.vmhba1:C0:T0:L0
FirewallDefaultPolicy :
ApiVersion            : 4.1
CustomFields          : {[com.hp.proliant, ]}
ExtensionData         : VMware.Vim.HostSystem
Id                    : HostSystem-host-143
Name                  : esxi06.vjlab.local
Uid                   : /VIServer=administrator@vcenter1.vjlab.local:443/VMHost=HostSystem-host-143/

C:\
[vSphere PowerCLI] %

Next, with the Get-View cmdlet, I retrieved the .NET view object behind the host's ID and stored it in another variable.

C:\
[vSphere PowerCLI] % Get-View $vmhost.Id

Runtime             : VMware.Vim.HostRuntimeInfo
Summary             : VMware.Vim.HostListSummary
Hardware            : VMware.Vim.HostHardwareInfo
Capability          : VMware.Vim.HostCapability
ConfigManager       : VMware.Vim.HostConfigManager
Config              : VMware.Vim.HostConfigInfo
Vm                  : {}
Datastore           : {Datastore-datastore-144}
Network             : {Network-network-11}
DatastoreBrowser    : HostDatastoreBrowser-datastoreBrowser-host-143
SystemResources     : VMware.Vim.HostSystemResourceInfo
Parent              : ClusterComputeResource-domain-c121
CustomValue         : {}
OverallStatus       : red
ConfigStatus        : red
ConfigIssue         : {0}
EffectiveRole       : {-1}
Permission          : {}
Name                : esxi06.vjlab.local
DisabledMethod      : {ExitMaintenanceMode_Task, PowerUpHostFromStandBy_Task, ReconnectHost_Task}
RecentTask          : {}
DeclaredAlarmState  : {alarm-1.host-143, alarm-101.host-143, alarm-102.host-143, alarm-103.host-143...}
TriggeredAlarmState : {}
AlarmActionsEnabled : True
Tag                 : {}
Value               : {}
AvailableField      : {com.hp.proliant}
MoRef               : HostSystem-host-143
Client              : VMware.Vim.VimClient

C:\
[vSphere PowerCLI] % $esxha = Get-View $vmhost.Id

Now, through the $esxha variable, I invoked the ReconfigureHostForDAS method to reconfigure the ESXi host. This method is part of the HostSystem object, and its description can be found in the vSphere API reference.

While it runs, the task is displayed in the vSphere Client. You can also monitor the operation with the Get-Task cmdlet.

Finally I created the script below to simplify things in the future :-)

# Reconfigure-VMHostHA.ps1
# PowerCLI script to reconfigure for VMware HA a VM Host
#
# Juan Manuel Rey - juanmanuel (dot) reyportal (at) gmail (dot) com
# https://jreypo.wordpress.com
#

param([string]$esx)

$vmhost = Get-VMHost $esx
$esxha = Get-View $vmhost.Id
$esxha.ReconfigureHostForDAS()

Juanma.

Hpasmcli, the HP Management Command Line Interface, is a scriptable command line tool to manage and monitor HP ProLiant servers through the hpasmd and hpasmxld daemons. It is part of the hp-health package that comes with the HP ProLiant Support Pack, or PSP.

[root@rhel4 ~]# rpm -qa | grep hp-health
hp-health-8.1.1-14.rhel4
[root@rhel4 ~]#
[root@rhel4 ~]# rpm -qi hp-health-8.1.1-14.rhel4
Name        : hp-health                    Relocations: (not relocatable)
Version     : 8.1.1                             Vendor: Hewlett-Packard Company
Release     : 14.rhel4                      Build Date: Fri 04 Jul 2008 07:04:51 PM CEST
Install Date: Thu 02 Apr 2009 05:10:48 PM CEST      Build Host: rhel4ebuild.M73C253-lab.net
Group       : System Environment            Source RPM: hp-health-8.1.1-14.rhel4.src.rpm
Size        : 1147219                          License: 2008 Hewlett-Packard Development Company, L.P.
Signature   : (none)
Packager    : Hewlett-Packard Company
URL         : http://www.hp.com/go/proliantlinux
Summary     : hp System Health Application and Command line Utility Package
Description :
This package contains the System Health Monitor for all hp Proliant systems
with ASM, ILO, & ILO2 embedded management asics.  Also contained are the
command line utilities.
[root@rhel4 ~]#
[root@rhel4 ~]# rpm -ql hp-health-8.1.1-14.rhel4
/etc/init.d/hp-health
/opt/hp/hp-health
/opt/hp/hp-health/bin
/opt/hp/hp-health/bin/IrqRouteTbl
/opt/hp/hp-health/bin/hpasmd
/opt/hp/hp-health/bin/hpasmlited
/opt/hp/hp-health/bin/hpasmpld
/opt/hp/hp-health/bin/hpasmxld
/opt/hp/hp-health/hprpm.xpm
/opt/hp/hp-health/sh
/opt/hp/hp-health/sh/hpasmxld_reset.sh
/sbin/hpasmcli
/sbin/hpbootcfg
/sbin/hplog
/sbin/hpuid
/usr/lib/libhpasmintrfc.so
/usr/lib/libhpasmintrfc.so.2
/usr/lib/libhpasmintrfc.so.2.0
/usr/lib/libhpev.so
/usr/lib/libhpev.so.1
/usr/lib/libhpev.so.1.0
/usr/lib64/libhpasmintrfc64.so
/usr/lib64/libhpasmintrfc64.so.2
/usr/lib64/libhpasmintrfc64.so.2.0
/usr/share/man/man4/hp-health.4.gz
/usr/share/man/man4/hpasmcli.4.gz
/usr/share/man/man7/hp_mgmt_install.7.gz
/usr/share/man/man8/hpbootcfg.8.gz
/usr/share/man/man8/hplog.8.gz
/usr/share/man/man8/hpuid.8.gz
[root@rhel4 ~]#

This handy tool can be used to view and modify several BIOS settings of the server and to monitor the status of the different hardware components like fans, memory modules, temperature, power supplies, etc.

It can be used in two ways:

  • Interactive shell
  • Within a script

The interactive shell supports TAB command completion and command recovery through a history buffer.

[root@rhel4 ~]# hpasmcli
HP management CLI for Linux (v1.0)
Copyright 2004 Hewlett-Packard Development Group, L.P.

--------------------------------------------------------------------------
NOTE: Some hpasmcli commands may not be supported on all Proliant servers.
      Type 'help' to get a list of all top level commands.
--------------------------------------------------------------------------
hpasmcli> help
CLEAR  DISABLE  ENABLE  EXIT  HELP  NOTE  QUIT  REPAIR  SET  SHOW
hpasmcli>

As can be seen in the above example, several main tasks can be performed; to get the usage of any command, simply type HELP followed by the command.

hpasmcli> help show
USAGE: SHOW [ ASR | BOOT | DIMM | F1 | FANS | HT | IML | IPL | NAME | PORTMAP | POWERSUPPLY | PXE | SERIAL | SERVER | TEMP | UID | WOL ]
hpasmcli>
hpasmcli> HELP SHOW BOOT
USAGE: SHOW BOOT: Shows boot devices.
hpasmcli>

In my experience SHOW is by far the most used command. Below are examples of some common tasks.

- Display general information of the server

hpasmcli> SHOW SERVER
System        : ProLiant DL380 G5
Serial No.    : XXXXXXXXX     
ROM version   : P56 11/01/2008
iLo present   : Yes
Embedded NICs : 2
        NIC1 MAC: 00:1c:c4:62:42:a0
        NIC2 MAC: 00:1c:c4:62:42:9e

Processor: 0
        Name         : Intel Xeon
        Stepping     : 11
        Speed        : 2666 MHz
        Bus          : 1333 MHz
        Core         : 4
        Thread       : 4
        Socket       : 1
        Level2 Cache : 8192 KBytes
        Status       : Ok

Processor: 1
        Name         : Intel Xeon
        Stepping     : 11
        Speed        : 2666 MHz
        Bus          : 1333 MHz
        Core         : 4
        Thread       : 4
        Socket       : 2
        Level2 Cache : 8192 KBytes
        Status       : Ok

Processor total  : 2

Memory installed : 16384 MBytes
ECC supported    : Yes
hpasmcli>

- Show current temperatures

hpasmcli> SHOW TEMP
Sensor   Location              Temp       Threshold
------   --------              ----       ---------
#1        I/O_ZONE             49C/120F   70C/158F
#2        AMBIENT              23C/73F    39C/102F
#3        CPU#1                30C/86F    127C/260F
#4        CPU#1                30C/86F    127C/260F
#5        POWER_SUPPLY_BAY     52C/125F   77C/170F
#6        CPU#2                30C/86F    127C/260F
#7        CPU#2                30C/86F    127C/260F

hpasmcli>

- Get the status of the server fans

hpasmcli> SHOW FAN
Fan  Location        Present Speed  of max  Redundant  Partner  Hot-pluggable
---  --------        ------- -----  ------  ---------  -------  -------------
#1   I/O_ZONE        Yes     NORMAL 45%     Yes        0        Yes          
#2   I/O_ZONE        Yes     NORMAL 45%     Yes        0        Yes          
#3   PROCESSOR_ZONE  Yes     NORMAL 41%     Yes        0        Yes          
#4   PROCESSOR_ZONE  Yes     NORMAL 36%     Yes        0        Yes          
#5   PROCESSOR_ZONE  Yes     NORMAL 36%     Yes        0        Yes          
#6   PROCESSOR_ZONE  Yes     NORMAL 36%     Yes        0        Yes           

hpasmcli>

- Show device boot order configuration

hpasmcli> SHOW BOOT
First boot device is: CDROM.
One time boot device is: Not set.
hpasmcli>

- Set USB key as first boot device

hpasmcli> SET BOOT FIRST USBKEY

- Show memory modules status

hpasmcli> SHOW DIMM
DIMM Configuration
------------------
Cartridge #:   0
Module #:      1
Present:       Yes
Form Factor:   fh
Memory Type:   14h
Size:          4096 MB
Speed:         667 MHz
Status:        Ok

Cartridge #:   0
Module #:      2
Present:       Yes
Form Factor:   fh
Memory Type:   14h
Size:          4096 MB
Speed:         667 MHz
Status:        Ok

Cartridge #:   0
Module #:      3
Present:       Yes
Form Factor:   fh
Memory Type:   14h
Size:          4096 MB
Speed:         667 MHz
Status:        Ok
...

In scripting mode hpasmcli can be used directly from the shell prompt with the -s option and the command between quotation marks; this allows you to process the output of the commands, as in the example below.

[root@rhel4 ~]# hpasmcli -s "show dimm" | egrep "Module|Status"
Module #:      1
Status:        Ok
Module #:      2
Status:        Ok
Module #:      3
Status:        Ok
Module #:      4
Status:        Ok
Module #:      5
Status:        Ok
Module #:      6
Status:        Ok
Module #:      7
Status:        Ok
Module #:      8
Status:        Ok
[root@rhel4 ~]#
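The same technique lends itself to simple health checks from monitoring scripts. The helper below is my own sketch (not part of the PSP): it parses the Module/Status layout shown above with awk and prints only the DIMMs whose status is not Ok.

```shell
#!/bin/sh
# check_dimms: read `hpasmcli -s "show dimm"` output on stdin and
# report any memory module whose Status line is not "Ok".
# A sketch based on the output layout shown above.
check_dimms() {
    awk '
        /^Module #:/ { module = $3 }    # remember the module number
        /^Status:/   { if ($2 != "Ok") printf "DIMM %s: %s\n", module, $2 }
    '
}
```

Used as `hpasmcli -s "show dimm" | check_dimms`, it prints nothing when all modules are healthy, which makes it easy to alert on non-empty output.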

To execute more than one command sequentially, separate them with a semicolon.

[root@rhel4 ~]# hpasmcli -s "show fan; show temp"

Fan  Location        Present Speed  of max  Redundant  Partner  Hot-pluggable
---  --------        ------- -----  ------  ---------  -------  -------------
#1   I/O_ZONE        Yes     NORMAL 45%     Yes        0        Yes          
#2   I/O_ZONE        Yes     NORMAL 45%     Yes        0        Yes          
#3   PROCESSOR_ZONE  Yes     NORMAL 41%     Yes        0        Yes          
#4   PROCESSOR_ZONE  Yes     NORMAL 36%     Yes        0        Yes          
#5   PROCESSOR_ZONE  Yes     NORMAL 36%     Yes        0        Yes          
#6   PROCESSOR_ZONE  Yes     NORMAL 36%     Yes        0        Yes           

Sensor   Location              Temp       Threshold
------   --------              ----       ---------
#1        I/O_ZONE             47C/116F   70C/158F
#2        AMBIENT              21C/69F    39C/102F
#3        CPU#1                30C/86F    127C/260F
#4        CPU#1                30C/86F    127C/260F
#5        POWER_SUPPLY_BAY     50C/122F   77C/170F
#6        CPU#2                30C/86F    127C/260F
#7        CPU#2                30C/86F    127C/260F

[root@rhel4 ~]#
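Output like the above also works well for basic threshold monitoring. The sketch below is an assumption of mine, not an hpasmcli feature: it parses SHOW TEMP sensor lines on stdin and warns whenever a reading comes within a given margin of its threshold.

```shell
#!/bin/sh
# check_temps [margin]: read `hpasmcli -s "show temp"` output on stdin
# and warn when a sensor is within <margin> degrees Celsius of its
# threshold (default 10). Based on the sensor line format shown above.
check_temps() {
    awk -v m="${1:-10}" '
        /^#/ {
            split($3, temp, "C")      # "49C/120F" -> temp[1] = 49
            split($4, thresh, "C")    # "70C/158F" -> thresh[1] = 70
            if (thresh[1] - temp[1] <= m)
                printf "WARN %s %s: %sC (threshold %sC)\n", $1, $2, temp[1], thresh[1]
        }
    '
}
```

For example, `hpasmcli -s "show temp" | check_temps 15` would flag any sensor running within 15 degrees of its limit.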

If you want to play more with hpasmcli, check its man page and the ProLiant Support Pack documentation.

Juanma.

Today I received a mail from the HP-UX Admin mailing list. The subject was “[HPADM] final message, list shutting down permanently” and the body contained the following message:

“dear all,

i truly regret that i have to announce the end of operations of the “hpux-admin” mailing list as you all have known it for a long time.

The last few years i have virtually lost all support for this activity from management. No surprise there, as there are almost no HP-UX systems remaining in use within my organization. And now i am more or less ‘ordered’ to shutdown, because the location of the system(s) will no longer be available. It was a converted office room and will be converted back to normal office space shortly.

This, combined with the fact that traffic on the list has decreased to almost zero, has prompted me to take the inevitable decision and ‘pull the plug’ …

[HPADM] closing down …”

I must admit that the lack of traffic is completely true; in the last year no more than a couple of dozen messages were sent to the list. But that fact doesn't ease the pain of losing one of the few resources about HP-UX “out there”, and one of the oldest. My personal gratitude and recognition to Bart Muyzer, who has run the list all these years.

It seems that the old mailing lists have no place in the new Web 2.0 world. It is the end of an era… not sure I like it.

Juanma.

Today, while I was setting up a new vCloud lab at home, I noticed that by mistake I had added one of the ESXi hosts to the wrong cluster, in the wrong datacenter.

To be honest, correcting this is not a big deal: put the host in maintenance mode, take it out of the cluster, and move it to the correct datacenter. With the vSphere Client it can be done with a couple of clicks and a simple drag and drop. But my mistake gave me the opportunity to correct it using PowerCLI and write this small but hopefully useful blog post.

To explain the scenario a bit: I currently have two datacenters in my homelab, one for my day-to-day tests and labs and another one for vCloud Director.

1 – Put the host in maintenance mode.

To do so we are going to use the Set-VMHost cmdlet.

C:\
[vSphere PowerCLI] % Set-VMHost -VMHost vcloud-esxi1.vjlab.local -State "Maintenance"

Name            ConnectionState PowerState      Id CpuUsage CpuTotal  Memory  Memory
                                                        Mhz      Mhz UsageMB TotalMB
----            --------------- ----------      -- -------- -------- ------- -------
vcloud-esxi1... Maintenance     PoweredOn  ...t-88      126     5670     873    3071

C:\
[vSphere PowerCLI] %

2 – Move the host out of the cluster.

To do this, use the Move-VMHost cmdlet.

C:\
[vSphere PowerCLI] % Move-VMHost -VMHost vcloud-esxi1.vjlab.local -Destination vjlab-dc

Name            ConnectionState PowerState      Id CpuUsage CpuTotal  Memory  Memory
                                                        Mhz      Mhz UsageMB TotalMB
----            --------------- ----------      -- -------- -------- ------- -------
vcloud-esxi1... Maintenance     PoweredOn  ...t-88       92     5670     870    3071

C:\
[vSphere PowerCLI] %

If you check the vSphere Client now, you will see the host out of the cluster but still in the same datacenter.

3 – Move the host to the correct datacenter.

Now that our host is in maintenance mode and out of the cluster, it is time to move it to the correct datacenter. Again we will use Move-VMHost.

C:\
[vSphere PowerCLI] % Move-VMHost -VMHost vcloud-esxi1.vjlab.local -Destination vjlab-vcloud -Verbose
VERBOSE: 03/02/2011 22:30:39 Move-VMHost Started execution
VERBOSE: Move host 'vcloud-esxi1.vjlab.local' into 'vjlab-vcloud'.
VERBOSE: 03/02/2011 22:30:41 Move-VMHost Finished execution

Name            ConnectionState PowerState      Id CpuUsage CpuTotal  Memory  Memory
                                                        Mhz      Mhz UsageMB TotalMB
----            --------------- ----------      -- -------- -------- ------- -------
vcloud-esxi1... Maintenance     PoweredOn  ...t-88       63     5670     870    3071

C:\
[vSphere PowerCLI] %

Finally, take the ESXi host out of maintenance mode.

C:\
[vSphere PowerCLI] % Set-VMHost -VMHost vcloud-esxi1.vjlab.local -State Connected

Name            ConnectionState PowerState      Id CpuUsage CpuTotal  Memory  Memory
                                                        Mhz      Mhz UsageMB TotalMB
----            --------------- ----------      -- -------- -------- ------- -------
vcloud-esxi1... Connected       PoweredOn  ...t-88       98     5670     870    3071

C:\
[vSphere PowerCLI] %

Check that everything is OK in the vSphere Client, and we are done.

Juanma.