
Posts Tagged ‘HP Lefthand’

iSCSI initiator configuration in RedHat Enterprise Linux 5

February 22, 2011

The following post discusses iSCSI initiator configuration in RedHat Enterprise Linux 5; the method is also applicable to all RHEL5 derivatives. The iSCSI LUNs will be provided by an HP P4000 array.

First of all we need to get and install the iscsi-initiator-utils RPM package. You can use yum to get and install the package from any supported repository for CentOS or RHEL, or you can download the package from RedHat Network if you have a valid RHN account and your system doesn’t have an internet connection.
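
If the server has access to a repository, the yum route is a one-liner:

[root@rhel5 ~]# yum install iscsi-initiator-utils

If not, install the downloaded RPM manually as shown below.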

[root@rhel5 ~]# rpm -ivh /tmp/iscsi-initiator-utils-6.2.0.871-0.16.el5.x86_64.rpm
Preparing...                ########################################### [100%]
   1:iscsi-initiator-utils  ########################################### [100%]
[root@rhel5 ~]#
[root@rhel5 ~]# rpm -qa | grep iscsi
iscsi-initiator-utils-6.2.0.871-0.16.el5
[root@rhel5 ~]# rpm -qi iscsi-initiator-utils-6.2.0.871-0.16.el5
Name        : iscsi-initiator-utils        Relocations: (not relocatable)
Version     : 6.2.0.871                         Vendor: Red Hat, Inc.
Release     : 0.16.el5                      Build Date: Tue 09 Mar 2010 09:16:29 PM CET
Install Date: Wed 16 Feb 2011 11:34:03 AM CET      Build Host: x86-005.build.bos.redhat.com
Group       : System Environment/Daemons    Source RPM: iscsi-initiator-utils-6.2.0.871-0.16.el5.src.rpm
Size        : 1960412                          License: GPL
Signature   : DSA/SHA1, Wed 10 Mar 2010 04:26:37 PM CET, Key ID 5326810137017186
Packager    : Red Hat, Inc. <http://bugzilla.redhat.com/bugzilla>
URL         : http://www.open-iscsi.org
Summary     : iSCSI daemon and utility programs
Description :
The iscsi package provides the server daemon for the iSCSI protocol,
as well as the utility programs used to manage it. iSCSI is a protocol
for distributed disk access using SCSI commands sent over Internet
Protocol networks.
[root@rhel5 ~]#

Next we are going to configure the initiator. The iSCSI initiator is composed of two services, iscsi and iscsid; enable them to start at system startup using chkconfig.

[root@rhel5 ~]# chkconfig iscsi on
[root@rhel5 ~]# chkconfig iscsid on
[root@rhel5 ~]#
[root@rhel5 ~]# chkconfig --list | grep iscsi
iscsi           0:off   1:off   2:on    3:on    4:on    5:on    6:off
iscsid          0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@rhel5 ~]#
[root@rhel5 ~]#

Once iSCSI is configured, start the service.

[root@rhel5 ~]# service iscsi start
iscsid is stopped
Starting iSCSI daemon:                                     [  OK  ]
                                                           [  OK  ]
Setting up iSCSI targets: iscsiadm: No records found!
                                                           [  OK  ]
[root@rhel5 ~]#
[root@rhel5 ~]# service iscsi status
iscsid (pid  14170) is running...
[root@rhel5 ~]#

From the P4000 CMC we need to add the server to the management group configuration, as we would do with any other server.

The server IQN can be found in the file /etc/iscsi/initiatorname.iscsi.

[root@cl-node1 ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:2551bf29b48
[root@cl-node1 ~]#
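
If you prefer a more descriptive initiator name, the file can simply be edited before starting the services; the IQN below is only an example:

[root@cl-node1 ~]# echo "InitiatorName=iqn.1994-05.com.redhat:cl-node1" > /etc/iscsi/initiatorname.iscsi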

Create any iSCSI volumes you need in the P4000 arrays and assign them to the RedHat system. Then, to discover the presented LUNs, run the iscsiadm command from the Linux server.

[root@rhel5 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.126.60
192.168.126.60:3260,1 iqn.2003-10.com.lefthandnetworks:mlab:62:lv-rhel01
[root@rhel5 ~]#

Restart the iSCSI initiator to make the new block device available to the operating system.

[root@rhel5 ~]# service iscsi restart
Stopping iSCSI daemon:
iscsid dead but pid file exists                            [  OK  ]
Starting iSCSI daemon:                                     [  OK  ]
                                                           [  OK  ]
Setting up iSCSI targets: Logging in to [iface: default, target: iqn.2003-10.com.lefthandnetworks:mlab:62:lv-rhel01, portal: 192.168.126.60,3260]
Login to [iface: default, target: iqn.2003-10.com.lefthandnetworks:mlab:62:lv-rhel01, portal: 192.168.126.60,3260]: successful
                                                           [  OK  ]
[root@rhel5 ~]#
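
Alternatively, instead of restarting the whole service you can log in to the new target manually with iscsiadm; a sketch using the target discovered above:

[root@rhel5 ~]# iscsiadm -m node -T iqn.2003-10.com.lefthandnetworks:mlab:62:lv-rhel01 -p 192.168.126.60 --login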

Then check that the new disk is available; I used lsscsi, but fdisk -l will also do.

[root@rhel5 ~]# lsscsi
[0:0:0:0]    disk    VMware,  VMware Virtual S 1.0   /dev/sda
[2:0:0:0]    disk    LEFTHAND iSCSIDisk        9000  /dev/sdb
[root@rhel5 ~]#
[root@rhel5 ~]# fdisk -l /dev/sdb 

Disk /dev/sdb: 156.7 GB, 156766306304 bytes
255 heads, 63 sectors/track, 19059 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdb doesn't contain a valid partition table
[root@rhel5 ~]#

At this point the iSCSI configuration is done; the new LUNs will remain available across system reboots as long as the iscsi service is enabled.
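
If you plan to create a filesystem on the new LUN and mount it at boot time, remember that iSCSI devices only appear after the network is up, so the mount must be flagged with the _netdev option. A minimal sketch, assuming the disk has been partitioned with fdisk and /data is the mount point:

[root@rhel5 ~]# mkfs.ext3 /dev/sdb1
[root@rhel5 ~]# mkdir /data
[root@rhel5 ~]# echo "/dev/sdb1  /data  ext3  _netdev  0 0" >> /etc/fstab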

Juanma.

HP P4000: Setup a two-node cluster

December 6, 2010

This post will outline the necessary steps to create a standard (non-multisite) HP P4000 cluster with two nodes. Creating a two-node cluster is a process very similar to the one-node cluster described in my first post about P4000 systems.

The cluster is composed of:

  • 2 HP P4000 Virtual Storage Appliances
  • 1 HP P4000 Failover Manager

The Failover Manager, or FOM, is a specialized version of the SAN/iQ software. It runs as a virtual appliance in VMware; though the most common setup is to run it on ESX/ESXi servers, running it under VMware Player or VMware Server is also supported.

The FOM integrates into a management group as a real manager and is intended only to provide quorum to the cluster; one of its main purposes is to provide quorum in multi-site clusters. I decided to use it in this post to make the example as real as possible.

To setup this cluster I used virtual machines inside VMware Workstation, but the same design can also be created with physical servers and P4000 storage systems.

From the Getting started screen launch the clusters wizard.


Select the two P4000 storage systems and enter the name of the Management Group.

During the group creation the wizard will ask to create a cluster; choose the two nodes as members of the cluster (we will add the FOM later) and assign a name to the cluster.

Next assign a virtual IP address to the cluster.

Enter the administrative level credentials for the cluster.

Finally the wizard will ask if you want to create volumes in the cluster; I didn’t take that option and finished the cluster creation process. You can also add the volumes later, as I described in one of my previous posts.

Now that the cluster is formed we are going to add the Failover Manager.

It is important to note that the FOM requires the same configuration as any VSA, as I depicted in my first post about the P4000 storage systems.

In the Centralized Management Console right-click on the FOM and select Add to existing management group.

Select the management group and click Add.

With this operation the cluster configuration is done. If everything went well, in the end you should have something like this.

Juanma.

HP P4000: Generating the CLIQ key file

November 2, 2010

As I explained in my first post about the SAN/iQ command line, when remotely managing a P4000 storage array, instead of providing the username/password credentials in every command you can specify an encrypted file which contains the user/password information.

To create this file, known as the key file, just use the createKey command and provide the username, password, array IP address or DNS name and the name of the file.
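
A sketch of the command from a Windows host — the parameter names follow the CLIQ documentation, and the credentials, IP address and file name are of course just examples:

C:\> cliq createKey userName=admin passWord=secret login=192.168.1.20 keyName=secure.key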

By default the key file is created in the user’s home directory, c:\Documents and Settings\<username> in Windows XP/2003 and C:\Users\<username> in Windows Vista/2008/7.

The file can also be stored in a secure location on the local network, in that case the full path to the key file must be provided.

Of course the main reason to create a key file, apart from easing daily management, is to provide a valid authentication mechanism for any automation script that you can create using the cliq.
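
For instance, once the key file exists a scripted command can authenticate without embedding the password; a hedged sketch, assuming the keyfile= parameter described in the CLIQ guide and example names:

C:\> cliq getVolumeInfo volumeName=testvolume login=192.168.1.20 keyfile=C:\Users\admin\secure.key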

Juanma.

Categories: HP, Storage

HP resources for VMware

October 20, 2010

The purpose of this post is to serve as a single point of reference for HP-related VMware resources.

I created the list for my personal use a while ago, but in the hope that it can be useful to someone else I decided to review and share it. I will try to keep the list up to date and also add it as a permanent page in the menu above.

General resources

VMware on ProLiant

HP StorageWorks

VDI

vCloud Director

HP Lefthand P4000 VSA verbose boot

October 13, 2010

If you are a user of the P4000 VSA you’ll be used to the quiet boot sequence of the SAN/iQ software: just a couple of messages until you get the login prompt.

But what if you want to watch the whole boot process to check error messages or something alike? There is an easy and simple solution: at the beginning of the boot sequence press ESC to stop the bootloader, and when the boot: prompt appears type vga and press Enter.
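
The interaction at the prompt is simply:

boot: vga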

After that you will have a normal boot process like with any other Linux system.

Juanma.

HP Lefthand VSA minimum memory requirements

September 17, 2010

This week I’ve been trying to stretch the virtualization resources of my homelab as much as possible. In my obsession to run as many VMs as possible I decided to lower the memory of some of them, including my storage appliances.

My VSAs are configured with various amounts of RAM ranging from 384MB to 1GB. I took the one I have in my laptop for demo purposes, powered it off, set the RAM to 256MB and fired it up again.

The VSA seemed to start without any problems and from the console everything looked fine.

I started the CMC and quickly noticed that something was wrong: the status of the storage server was “offline”.

I then looked into the alerts area and found one saying that there was not enough RAM to start the configured features.

OK then, the VSA doesn’t work with 256MB of RAM; so which value is the minimum required in order to run the storage services?

After looking into several docs I found the answer in the P4000 Quick Start VSA user guide. The minimum amount of RAM required is 384MB for the laptop version and 1GB for the ESX version. Also, in the VSA Install and Configure Guide, which comes with the VSA, the following values are provided for the ESX version and for the new Hyper-V version:

  • <500GB to 4.5TB – 1GB of RAM
  • 4.5TB to 9TB – 2GB of RAM
  • 9TB to 10TB – 3GB of RAM

After that I configured the VSA with 384MB again; the problem was fixed and the alarm disappeared.

Juanma.

Installing the HP Lefthand CMC in Linux

July 27, 2010

Maybe some of you are not aware of this, but the HP Lefthand Centralized Management Console application is available not only for Windows but also for Linux and HP-UX. The application is included on the SAN/iQ Management Software DVD that can be downloaded from here.

Burn the ISO or mount it in your Linux system. Navigate through the ISO to GUI/Linux/Disk1/InstData; there you will find two files and a directory named VM. Get into the directory and you will find the installer CMC_Installer.bin.
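
If you go the mount route, a loopback mount is enough; the ISO file name below is just an example:

root@wopr:~# mount -o loop /tmp/saniq_mgmt_software.iso /mnt/iso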

Launch the installer passing it the full path to the installer properties file, in this case the file MediaId.properties that can be found on GUI/Linux/Disk1/InstData.

root@wopr:/mnt/iso/GUI/Linux/Disk1/InstData/VM# ./CMC_Installer.bin -f /mnt/iso/GUI/Linux/Disk1/InstData/MediaId.properties

The CMC will be installed in /opt/LeftHandNetworks/UI. Once the installation is finished launch the CMC from the shell or create a launcher on your Gnome/KDE desktop, and voilà, you can now control your Lefthand storage systems from your favorite Linux distro.

Juanma.

Basic volume tasks with the HP Lefthand CLIQ command-line

June 16, 2010

Continuing with the series of posts about the HP Lefthand SAN systems, in this post I will explain the basic volume operations with CLIQ, the HP Lefthand SAN/iQ command line.

I used the Lefthand VSA and the ESX4 servers from my home lab to illustrate the procedure. The commands are executed locally in the VSA via SSH. The following tasks will be covered:

  • Volume creation.
  • Assign a volume to one or more hosts.
  • Volume deletion.

Volume creation

The command to use is createVolume. The available options for this command are:

  • Volumename.
  • Clustername: the cluster where the volume will be created.
  • Replication: the replication level, from 1 (none) to 4 (4-way replication).
  • Thinprovision: 1 (thin provisioning) or 0 (full provisioning).
  • Description.
  • Size: The size can be set in MB, GB or TB.

CLIQ>createVolume volumeName=vjm-cluster2 size=2GB clusterName=iSCSI-CL replication=1 thinProvision=1 description="vep01-02 datastore"

SAN/iQ Command Line Interface, v8.1.00.0047
(C) Copyright 2007-2009 Hewlett-Packard Development Company, L.P.

RESPONSE
 result         0
 processingTime 2539
 name           CliqSuccess
 description    Operation succeeded

CLIQ>

Assign a volume to the hosts

The command to use in this task is assignVolume. A few parameters are accepted by this command:

  • Volumename.
  • Initiator: the host IQNs. If the volume is going to be presented to more than one host, the IQNs must be separated by semicolons. One important tip: the operation must be done in a single command. You cannot assign the volume to a host in one command and to a new host in a second command; the second one will overwrite the first instead of adding the volume to one more host.
  • Access rights: the default is read-write (rw); read-only (r) or write-only (w) can also be set.

CLIQ>assignVolume volumeName=vjm-cluster2 initiator=iqn.1998-01.com.vmware:vep01-45602bf3;iqn.1998-01.com.vmware:vep02-5f779b32

SAN/iQ Command Line Interface, v8.1.00.0047
(C) Copyright 2007-2009 Hewlett-Packard Development Company, L.P.

RESPONSE
 result         0
 processingTime 4069
 name           CliqSuccess
 description    Operation succeeded

CLIQ>

And now that the volume is created and assigned to several servers, check its configuration with getVolumeInfo.

CLIQ>getVolumeInfo volumeName=vjm-cluster2

SAN/iQ Command Line Interface, v8.1.00.0047
(C) Copyright 2007-2009 Hewlett-Packard Development Company, L.P.

RESPONSE
 result         0
 processingTime 1480
 name           CliqSuccess
 description    Operation succeeded

 VOLUME
 thinProvision  true
 stridePages    32
 status         online
 size           2147483648
 serialNumber   17a1c11e939940a4f7e91ee43654c94b000000000000006b
 scratchQuota   4194304
 reserveQuota   536870912
 replication    1
 name           vjm-cluster2
 minReplication 1
 maxSize        14587789312
 iscsiIqn       iqn.2003-10.com.lefthandnetworks:vdn:107:vjm-cluster2
 isPrimary      true
 initialQuota   536870912
 groupName      VDN
 friendlyName   
 description    vep01-02 datastore
 deleting       false
 clusterName    iSCSI-CL
 checkSum       false
 bytesWritten   18087936
 blockSize      1024
 autogrowPages  512

 PERMISSION
 targetSecret    
 loadBalance     true
 iqn             iqn.1998-01.com.vmware:vep01-45602bf3
 initiatorSecret
 chapRequired    false
 chapName        
 authGroup       vep01
 access          rw

 PERMISSION
 targetSecret    
 loadBalance     true
 iqn             iqn.1998-01.com.vmware:vep02-5f779b32
 initiatorSecret
 chapRequired    false
 chapName        
 authGroup       vep02
 access          rw

CLIQ>

If you refresh the storage configuration of the ESX hosts through the vSphere Client, the new volume will be displayed.

Volume deletion

Finally we are going to delete another volume that is no longer in use by the servers of my lab.

CLIQ>deleteVolume volumeName=testvolume

SAN/iQ Command Line Interface, v8.1.00.0047
(C) Copyright 2007-2009 Hewlett-Packard Development Company, L.P.

This operation is potentially irreversible.  Are you sure? (y/n) 

RESPONSE
 result         0
 processingTime 1416
 name           CliqSuccess
 description    Operation succeeded

CLIQ>
CLIQ>getvolumeinfo volumename=testvolume

SAN/iQ Command Line Interface, v8.1.00.0047
(C) Copyright 2007-2009 Hewlett-Packard Development Company, L.P.

RESPONSE
 result         8000100c
 processingTime 1201
 name           CliqVolumeNotFound
 description    Volume 'testvolume' not found

CLIQ>

And we are done. As always, comments are welcome :-)

Juanma.

Categories: Storage

CLIQ – The HP Lefthand SAN/iQ command-line

It seems that a very popular post, if not the most popular one, is the one about my first experiences with the P4000 virtual storage appliance, and because of that I decided to go deeper into the VSA and the P4000 series and write about it.

The first post of this series is going to be about one of the less known features of the P4000 series: its command line, known as CLIQ.

Typically any sysadmin would configure and manage P4000 storage nodes through the HP Lefthand Centralized Management Console graphical interface. But there is another way to do it: the SAN/iQ software has a very powerful command line that allows you to perform almost any task available in the CMC.

The command line can be accessed in two ways: remotely from any Windows machine or via SSH.

  • SSH access

To access the CLIQ via SSH, the connection has to be made to TCP port 16022 instead of the standard SSH port, using the management group administration user. The connection can be established to any storage node of the group; the operation will apply to the whole group.
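
For example, with a management group user named admin and a node at 192.168.126.60 (both names are assumptions), the connection would look like:

ssh -p 16022 admin@192.168.126.60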

  • Remote access

The other way to use the CLIQ shell is from a Windows host with the HP Lefthand CLI shell installed on it. The software is included in the SAN/iQ Management Software DVD, which can be obtained along with other tools and documentation for the P4000 series at the following URL: http://www.hp.com/go/p4000downloads.

Regarding the use of the remote CLIQ, there is one main difference from the on-node CLIQ: every command must include the address or DNS name of the storage node where the task is going to be performed, and at least the username. The password can also be included, but for security reasons it is best not to do so and be prompted for it instead. An encrypted key file with the necessary credentials can be used instead if you don’t want to use the username and password parameters within the command.
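
A sketch of a remote command — the login= and userName= parameter names follow the CLIQ documentation, the IP and user are examples, and the password will be prompted for:

C:\> cliq getVolumeInfo volumeName=vjm-cluster2 login=192.168.126.60 userName=admin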

Of course this kind of access is perfect for scripting and automating some tasks.

Juanma.

Add iSCSI volumes from HP P4000 VSA to VMware ESXi4

April 17, 2010

In today’s post I will try to explain step by step how to add an iSCSI volume from the HP Lefthand P4000 VSA to a VMware ESXi4 server.

Step One: Create a volume.

Let’s suppose we already have a configured storage appliance; I showed how to create a cluster in my previous post, so I will not repeat that part here. Open the Centralized Management Console and go to Management group -> Cluster -> Volumes and Snapshots.

Click on Tasks and choose New Volume.

Enter the volume name, a small description and the size. The volume can also be assigned to any server already connected to the cluster; as we don’t have any server assigned yet, this option can be ignored for now.

In the Advanced tab the volume can be assigned to an existing cluster, and the RAID level, the volume type and the provisioning type can be set.

When everything is configured click OK and the volume will be created. After the creation process the new volume will show up in the CMC.

At this point we have a new volume with some RAID protection level, none in this particular case since it’s a single-node cluster. The next step is to assign the volume to a server.

Step Two: ESXi iSCSI configuration.

Connect to the chosen ESXi4 server through the vSphere Client, and from the Configuration tab in the right pane go to the Networking area and click the Add Networking link.

In the Add Networking Wizard select VMkernel as Connection Type.

Create a new virtual switch.

Enter the Port Group Properties; in my case only the label, as no other properties were relevant for me.

Set the IP settings, go to the Summary screen and click Finish.

The newly created virtual switch will appear in the Networking area.
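
For reference, the same VMkernel networking can also be set up from the command line in tech support mode; the switch name, port group label, NIC and IP below are assumptions for this lab:

~ # esxcfg-vswitch -a vSwitch1                                   # create the virtual switch
~ # esxcfg-vswitch -L vmnic1 vSwitch1                            # attach a physical uplink
~ # esxcfg-vswitch -A iSCSI vSwitch1                             # add the port group
~ # esxcfg-vmknic -a -i 192.168.126.21 -n 255.255.255.0 iSCSI    # create the VMkernel interface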

With the new virtual switch created, go to Storage Adapters; there you will see an iSCSI software adapter.

Click on Properties, and on the General tab click the Configure button and check the Enabled status box.

Once iSCSI is enabled its properties window will be populated.

Click Close; the server will ask to rescan the adapter, but at this point it is not necessary and can be skipped.
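
The software initiator can also be enabled from tech support mode with esxcfg-swiscsi; a quick sketch:

~ # esxcfg-swiscsi -e    # enable the software iSCSI initiator
~ # esxcfg-swiscsi -q    # query its current status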

Step Three: Add the volume to the ESXi server.

Now that we have our volume created and the iSCSI adapter of our ESXi server activated, the next logical step is to add the storage to the server.

On the HP Lefthand CMC go to the Servers area and add a new server.

Add a name for the server, a small description, check the Allow access via iSCSI box and select the authentication method. In the example I chose CHAP not required. With this option you only have to enter the Initiator Node Name, which you can grab from the details of the ESXi iSCSI adapter.

To finish the process click OK and you will see the newly added server. Go to the Volumes and Snapshots tab in the server configuration and from the Tasks menu assign a volume to the server.

Select the volume created at the beginning.

Now go back to the vSphere Client and launch the properties of the iSCSI adapter again. On the Dynamic Discovery tab add the virtual IP address of the VSA cluster.

Click Close and the server will ask again to rescan the adapter; this time say yes, and after the rescanning process the iSCSI LUN will show up.

Now a new datastore can be created in the ESXi server with the newly configured storage. Of course the same LUN can also be used to provide shared storage for more ESXi servers and for VMotion, HA or any other interesting VMware vSphere features. Maybe in another post ;-)

Juanma.
