Archives For P4000 VSA

Today has been a great day for the people at VMware: they presented vSphere 5 in a huge online event, and a lot of interesting new features were finally unveiled.

  • Storage DRS.
  • Profile-Driven Storage.
  • VAAI v2.
  • vSphere Auto Deploy.
  • Software FCoE initiator.
  • Up to 32 vCPUs and 1TB of RAM per virtual machine.
  • VMFS5.
  • New vSphere HA framework.
  • vSphere Web Client (we’ve been expecting something like this for Linux and Mac OS X).
  • vCenter Server Appliance. On Linux!
  • New vSphere Storage Appliance.

All of them are incredibly cool features however there is one that instantly got my attention:

“VSA – VMware vSphere Storage Appliance”: a virtual appliance that turns the DAS storage of a server into shared storage, aimed at customers with no budget to purchase a full working SAN or where performance is not a key factor. Wait a minute, this sounds familiar.

This is not a new feature. In fact HP (and Lefthand Networks, before it was acquired by HP) has been providing this capability on VMware for years through the HP P4000 VSA. After HP/Lefthand, other vendors like NetApp have released their own VSAs, not as feature-rich as the HP appliance of course ;-)

This is an interesting move by VMware, but I think it has its cons. The VSA is not included by default with the vSphere software; you have to pay for it. This is not bad per se, of course, since you also have to pay for the P4000 VSA.

That said, what is the advantage of running the VMware-provided VSA instead of the HP one? The only one I can think of is the integration of the management interface with vCenter Server and the vSphere Client. But with HP Insight Control for VMware vCenter Server you can also manage, at least in part, your VSAs, and I believe the HP CMC interface is feature-rich enough to use on its own.

Anyway, I’m dying to get my hands on this storage appliance and test it against my beloved P4000 VSA. In the meantime a nice introduction and a couple of videos of the product can be found here.

Like Calvin (@HPStorageGuy) said today in a tweet “Welcome to the past” ;-)

Juanma.

This post will outline the necessary steps to create a standard (non-multisite) HP P4000 cluster with two nodes. Creating a two-node cluster is a very similar process to the one-node cluster described in my first post about P4000 systems.

The cluster is composed of:

  • 2 HP P4000 Virtual Storage Appliances
  • 1 HP P4000 Failover Manager

The Failover Manager, or FOM, is a specialized version of the SAN/iQ software. It runs as a virtual appliance on VMware; though the most common setup is to run it on an ESX/ESXi server, running it under VMware Player or VMware Server is also supported.

The FOM integrates into a management group as a real manager and is intended only to provide quorum to the cluster; one of its main purposes is to provide quorum in multi-site clusters. I decided to use it in this post to make the example as realistic as possible.

To setup this cluster I used virtual machines inside VMware Workstation, but the same design can also be created with physical servers and P4000 storage systems.

From the Getting started screen launch the clusters wizard.


Select the two P4000 storage systems and enter the name of the Management Group.

During the group creation the wizard will ask to create a cluster. Choose the two nodes as members of the cluster (we will add the FOM later) and assign a name to the cluster.

Next assign a virtual IP address to the cluster.

Enter the administrative level credentials for the cluster.

Finally the wizard will ask if you want to create volumes in the cluster. I didn’t take that option and finished the cluster creation process. You can also add the volumes later, as I described in one of my previous posts.

Now that the cluster is formed, we are going to add the Failover Manager.

It is important to note that the FOM requires the same configuration as any VSA, as I depicted in my first post about the P4000 storage systems.

In the Central Management Console, right-click on the FOM and select Add to existing management group.

Select the management group and click Add.

With this operation the cluster configuration is done. If everything went well, in the end you should have something like this.

Juanma.

HP resources for VMware

October 20, 2010

The reason for this post is to provide a single point of reference for HP-related VMware resources.

I created the list for my personal use a while ago, but in the hope that it can be useful to someone else I decided to review and share it. I will try to keep the list up to date and also add it as a permanent page in the menu above.

General resources

VMware on ProLiant

HP StorageWorks

VDI

vCloud Director

If you are a user of the P4000 VSA you’ll be used to the quiet boot sequence of the SAN/iQ software: just a couple of messages until you get the login prompt.

But what if you want to watch the whole boot process to check for error messages or something similar? There is an easy and simple solution: at the beginning of the boot sequence press ESC to stop the bootloader, and when the boot: prompt appears type vga and press Enter.

After that you will have a normal boot process like with any other Linux system.

Juanma.

This week I’ve been trying to stretch the virtualization resources of my home lab as much as possible. In my obsession to run as many VMs as possible, I decided to lower the memory of some of them, including my storage appliances.

My VSAs are configured with various amounts of RAM ranging from 384MB to 1GB. I took the one I have in my laptop for demo purposes, powered it off, set the RAM to 256MB and fired it up again.

The VSA seemed to start without any problems and from the console everything looked fine.

I started the CMC and quickly noticed that something was wrong: the status of the storage server was “offline”.

I then looked into the alerts area and found one saying that there was not enough RAM to start the configured features.

OK then, the VSA doesn’t work with 256MB of RAM; so what is the minimum required to run the storage services?

After looking into several docs I found the answer in the P4000 VSA Quick Start user guide: the minimum amount of RAM required is 384MB for the laptop version and 1GB for the ESX version. Also, in the VSA Install and Configure Guide that comes with the VSA, the following values are provided for the ESX version and for the new Hyper-V version:

  • <500GB to 4.5TB – 1GB of RAM
  • 4.5TB to 9TB – 2GB of RAM
  • 9TB to 10TB – 3GB of RAM
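
These thresholds are easy to encode for quick capacity planning. Here is a minimal Python sketch (the function name is mine; the limits are the ones listed above for the ESX/Hyper-V version):

```python
def vsa_required_ram_gb(capacity_tb: float) -> int:
    """Minimum RAM in GB for a given managed capacity in TB,
    per the P4000 VSA Install and Configure Guide."""
    if capacity_tb <= 4.5:
        return 1
    if capacity_tb <= 9.0:
        return 2
    if capacity_tb <= 10.0:
        return 3
    raise ValueError("the VSA manages at most 10TB of storage")

print(vsa_required_ram_gb(2))   # 1
print(vsa_required_ram_gb(8))   # 2
```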

After that I configured the VSA again with 384MB; the problem was fixed and the alarm disappeared.

Juanma.

Continuing the series of posts about the HP Lefthand SAN systems, in this post I will explain the basic volume operations with CLIQ, the HP Lefthand SAN/iQ command line.

I used the Lefthand VSA and the ESX4 servers from my home lab to illustrate the procedure. The commands are executed locally on the VSA via SSH. The following tasks will be covered:

  • Volume creation.
  • Assign a volume to one or more hosts.
  • Volume deletion.

Volume creation

The command to use is createVolume. The available options for this command are:

  • Volumename.
  • Clustername: the cluster where the volume will be created.
  • Replication: the replication level, from 1 (none) to 4 (4-way replication).
  • Thinprovision: 1 (thin provisioning) or 2 (full provisioning).
  • Description.
  • Size: the size can be set in MB, GB or TB.

CLIQ>createVolume volumeName=vjm-cluster2 size=2GB clusterName=iSCSI-CL replication=1 thinProvision=1 description="vep01-02 datastore"

SAN/iQ Command Line Interface, v8.1.00.0047
(C) Copyright 2007-2009 Hewlett-Packard Development Company, L.P.

RESPONSE
 result         0
 processingTime 2539
 name           CliqSuccess
 description    Operation succeeded

CLIQ>

Assign a volume to the hosts

The command to use in this task is assignVolume. This command accepts a few parameters:

  • Volumename.
  • Initiator: The host IQN or IQNs. If the volume is going to be presented to more than one host, the IQNs of the servers must be separated by semicolons. One important tip: the operation must be done in a single command. You cannot assign the volume to one host in one command and to another host in a second command; the second command will overwrite the first instead of adding the volume to one more host.
  • Access rights: The default is read-write (rw); read-only (r) or write-only (w) can also be set.

CLIQ>assignVolume volumeName=vjm-cluster2 initiator=iqn.1998-01.com.vmware:vep01-45602bf3;iqn.1998-01.com.vmware:vep02-5f779b32

SAN/iQ Command Line Interface, v8.1.00.0047
(C) Copyright 2007-2009 Hewlett-Packard Development Company, L.P.

RESPONSE
 result         0
 processingTime 4069
 name           CliqSuccess
 description    Operation succeeded

CLIQ>
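
Because of the overwrite behavior described above, any script that assigns a volume should always build the complete initiator list in a single command. Here is a small Python sketch of that idea (the function name and the way it joins IQNs are mine, not part of CLIQ):

```python
def build_assign_command(volume_name: str, initiator_iqns: list[str]) -> str:
    """Compose one assignVolume command containing every initiator.

    SAN/iQ replaces a volume's assignment list rather than appending
    to it, so all IQNs must go in one semicolon-separated parameter.
    """
    if not initiator_iqns:
        raise ValueError("at least one initiator IQN is required")
    return (f"assignVolume volumeName={volume_name} "
            f"initiator={';'.join(initiator_iqns)}")

print(build_assign_command("vjm-cluster2",
                           ["iqn.1998-01.com.vmware:vep01-45602bf3",
                            "iqn.1998-01.com.vmware:vep02-5f779b32"]))
```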

Now that the volume is created and assigned to several servers, check its configuration with getVolumeInfo.

CLIQ>getVolumeInfo volumeName=vjm-cluster2

SAN/iQ Command Line Interface, v8.1.00.0047
(C) Copyright 2007-2009 Hewlett-Packard Development Company, L.P.

RESPONSE
 result         0
 processingTime 1480
 name           CliqSuccess
 description    Operation succeeded

 VOLUME
 thinProvision  true
 stridePages    32
 status         online
 size           2147483648
 serialNumber   17a1c11e939940a4f7e91ee43654c94b000000000000006b
 scratchQuota   4194304
 reserveQuota   536870912
 replication    1
 name           vjm-cluster2
 minReplication 1
 maxSize        14587789312
 iscsiIqn       iqn.2003-10.com.lefthandnetworks:vdn:107:vjm-cluster2
 isPrimary      true
 initialQuota   536870912
 groupName      VDN
 friendlyName   
 description    vep01-02 datastore
 deleting       false
 clusterName    iSCSI-CL
 checkSum       false
 bytesWritten   18087936
 blockSize      1024
 autogrowPages  512

 PERMISSION
 targetSecret    
 loadBalance     true
 iqn             iqn.1998-01.com.vmware:vep01-45602bf3
 initiatorSecret
 chapRequired    false
 chapName        
 authGroup       vep01
 access          rw

 PERMISSION
 targetSecret    
 loadBalance     true
 iqn             iqn.1998-01.com.vmware:vep02-5f779b32
 initiatorSecret
 chapRequired    false
 chapName        
 authGroup       vep02
 access          rw

CLIQ>

If you refresh the storage configuration of the ESX hosts through the vSphere Client, the new volume will be displayed.

Volume deletion

Finally, we are going to delete another volume that is no longer in use by the servers in my lab.

CLIQ>deleteVolume volumeName=testvolume

SAN/iQ Command Line Interface, v8.1.00.0047
(C) Copyright 2007-2009 Hewlett-Packard Development Company, L.P.

This operation is potentially irreversible.  Are you sure? (y/n) 

RESPONSE
 result         0
 processingTime 1416
 name           CliqSuccess
 description    Operation succeeded

CLIQ>
CLIQ>getvolumeinfo volumename=testvolume

SAN/iQ Command Line Interface, v8.1.00.0047
(C) Copyright 2007-2009 Hewlett-Packard Development Company, L.P.

RESPONSE
 result         8000100c
 processingTime 1201
 name           CliqVolumeNotFound
 description    Volume 'testvolume' not found

CLIQ>

And we are done. As always comments are welcome :-)

Juanma.

It seems that a very popular post, if not the most popular one, is the one about my first experiences with the P4000 virtual storage appliance, so I decided to dig deeper into the VSA and the P4000 series and write about it.

The first post of this series is going to be about one of the lesser-known features of the P4000 series: its command line, known as CLIQ.

Typically any sysadmin would configure and manage P4000 storage nodes through the HP Lefthand Centralized Management Console graphical interface. But there is another way: the SAN/iQ software has a very powerful command line that allows you to perform almost any task available in the CMC.

The command line can be accessed in two ways: remotely from any Windows machine, or via SSH.

  • SSH access

To access CLIQ via SSH, the connection has to be made to TCP port 16022 instead of the standard SSH port, using the management group administration user. The connection can be established to any storage node of the group; the operation will apply to the whole group.
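
For ad-hoc use a plain `ssh -p 16022 user@node` is enough; for scripting, the argument vector can be built programmatically. A minimal Python sketch (node address, user and command are placeholders for your own environment):

```python
import subprocess  # only needed if you actually run the command

def cliq_ssh_argv(node: str, user: str, command: str) -> list[str]:
    """Argument vector to run a CLIQ command on a storage node.
    SAN/iQ listens for SSH on TCP 16022, not the standard port 22."""
    return ["ssh", "-p", "16022", f"{user}@{node}", command]

argv = cliq_ssh_argv("192.168.1.20", "admin", "getGroupInfo")
print(" ".join(argv))
# To execute for real: subprocess.run(argv, check=True)
```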

  • Remote access

The other way to use the CLIQ shell is from a Windows host with the HP Lefthand CLI shell installed on it. The software is included in the SAN/iQ Management Software DVD and can be obtained, along with other tools and documentation for the P4000 series, at the following URL: http://www.hp.com/go/p4000downloads.

Regarding the use of the remote CLIQ, there is one main difference from the on-node CLIQ: every command must include the address or DNS name of the storage node where the task is going to be performed, and at least the username. The password can also be included, but for security reasons it is best not to do so and to be prompted instead. Alternatively, an encrypted key file with the necessary credentials can be used if you don’t want to put the username and password parameters in the command.

Of course this kind of access is perfect for scripting and automating tasks.
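
As an illustration of what a scripted invocation could look like, here is a Python sketch that composes a remote CLIQ command line. I’m assuming the login= and userName= parameters described above; the password is deliberately left out so CLIQ prompts for it:

```python
def remote_cliq_command(command: str, node: str, user: str, **options) -> str:
    """Compose a remote CLIQ invocation for a Windows host with the
    HP Lefthand CLI shell installed. The password parameter is omitted
    on purpose so that CLIQ prompts for it interactively."""
    parts = ["cliq", command]
    parts += [f"{key}={value}" for key, value in options.items()]
    parts += [f"login={node}", f"userName={user}"]
    return " ".join(parts)

print(remote_cliq_command("createVolume", "192.168.1.20", "admin",
                          volumeName="scripted-vol", size="2GB",
                          clusterName="iSCSI-CL"))
```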

Juanma.

In today’s post I will try to explain step by step how to add an iSCSI volume from the HP Lefthand P4000 VSA to a VMware ESXi4 server.

Step One: Create a volume.

Let’s suppose we already have a configured storage appliance; I showed how to create a cluster in my previous post, so I will not repeat that part here. Open the Centralized Management Console and go to Management group -> Cluster -> Volumes and Snapshots.

Click on Tasks and choose New Volume.

Enter the volume name, a small description and the size. The volume can also be assigned to any server already connected to the cluster; as we don’t have any server assigned, this option can be ignored for now.

In the Advanced tab the volume can be assigned to an existing cluster and the RAID level, the volume type and the provisioning type can be set.

When everything is configured, click OK and the volume will be created. After the creation process, the new volume will show up in the CMC.

At this point we have a new volume with some RAID protection level (none in this particular case, since it’s a single-node cluster). The next step is to assign the volume to a server.

Step Two: ESXi iSCSI configuration.

Connect to the chosen ESXi4 server through the vSphere Client and, from the Configuration tab in the right pane, go to the Networking area and click the Add Networking link.

In the Add Networking Wizard select VMkernel as Connection Type.

Create a new virtual switch.

Enter the Port Group Properties; in my case just the label, as no other properties were relevant for me.

Set the IP settings, go to Summary screen and click Finish.

The newly created virtual switch will appear in the Networking area.

With the new virtual switch created, go to Storage Adapters, where you will see an iSCSI software adapter.

Click on Properties, and on the General tab click the Configure button and check the Enabled status box.

Once iSCSI is enabled its properties window will be populated.

Click Close; the server will ask to rescan the adapter, but at this point it is not necessary and can be skipped.

Step Three:  Add the volume to the ESXi server.

Now that we have our volume created and the iSCSI adapter of our ESXi server activated, the next logical step is to add the storage to the server.

In the HP Lefthand CMC, go to the Servers area and add a new server.

Add a name for the server and a small description, check the Allow access via iSCSI box and select the authentication. In the example I chose CHAP not required. With this option you only have to enter the Initiator Node Name, which you can grab from the details of the ESXi iSCSI adapter.

To finish the process click OK and you will see the newly added server. Go to the Volumes and Snapshots tab in the server configuration and, from the Tasks menu, assign a volume to the server.

Select the volume created at the beginning.

Now go back to the vSphere Client and launch the properties of the iSCSI adapter again. On the Dynamic Discovery tab, add the virtual IP address of the VSA cluster.

Click Close and the server will ask again to rescan the adapter; this time say yes, and after the rescan the iSCSI LUN will show up.

Now a new datastore can be created in the ESXi host with the newly configured storage. Of course, the same LUN can also be used to provide shared storage for more ESXi servers and enable VMotion, HA or any other interesting VMware vSphere features. Maybe in another post ;-)

Juanma.

The P4000 Virtual Storage Appliance is a storage product from HP; its features include:

  • Synchronous replication to remote sites.
  • Snapshot capability.
  • Thin provisioning.
  • Network RAID.

It can be used to create a virtual iSCSI SAN to provide shared storage for ESX environments.

A 60-day trial is available here; it requires logging in with your HP Passport account. As I wanted to test it, that is what I did. There are two versions available: one for ESX, and a second one labeled as Laptop Demo which is optimized for VMware Workstation and also comes with the Centralized Management Console software. I chose the latter.

After importing the appliance into VMware Workstation you will see that it comes configured with “Other Linux 2.6.x kernel” as the guest OS, 1 vCPU, 384MB of RAM, two 308MB disks used for the OS of the VSA, and a 7.2GB disk. The three disks are SCSI and are configured as Independent-Persistent.

At that point I fired up the appliance and started to play with my VSA.

  • First Step: Basic configuration.

A couple of minutes after being started, the appliance will show the log-in prompt.

As instructed, enter “start”. You will be redirected to another log-in screen where you only have to press Enter, and then the configuration screen will appear.

The first section, General Settings, allows you to create an administrator account and to set the password of the default account.

Move on to the networking settings. The first screen asks you to choose the network interface; in my case I only had one.

Now you can proceed with the NIC configuration. The appliance will ask for confirmation before committing any changes you make to the VSA network configuration.

In the next area of the main menu, Network TCP Status, the speed of the network card can be forced to 1000Mb/s Full Duplex and the frame size can be set.

The final part is for group management configuration (in fact, for removing the VSA from a management group) and for resetting the VSA to its default settings.

Now we have our P4000 configured to be managed through the CMC. I will not explain the CMC installation since it’s almost just a “next -> next -> …” task.

  • Second step: VSA management through Centralized Management Console.

Launch the CMC. The Find Nodes Wizard will pop-up.

Choose the appropriate option in the next screen. To add a single VSA, choose the second one.

Enter the IP address of the appliance.

Click Finish and if the VSA is online the wizard will find it and add it to the CMC.

Now the VSA is managed through the CMC but it is not part of a management group.

  • Step Three: Add more storage.

The first basic task we’re going to do with the VSA, prior to the management group creation, is to add more storage.

As the VSA is a virtual machine, go to VMware Workstation or the vSphere Client, depending on which VSA version you are using, and edit the appliance settings.

If you look into the advanced configuration of the third disk, the 7.2GB one, you will see that it has SCSI address 1:0.

This is very important because the new disks must be added sequentially at addresses 1:1 through 1:4 in order to be detected by the VSA. If no data disk has been added to the VSA yet, the 1:0 SCSI address must be used for the first disk.

Add the new disk, and before finishing the process edit the advanced settings of the disk to set its SCSI address.
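
In VMX terms, the extra disks end up as entries like the following (the file names are illustrative, not from my actual setup); the key point is the sequential scsi1:1 through scsi1:4 addressing:

```
scsi1:1.present = "TRUE"
scsi1:1.fileName = "VSA-data2.vmdk"
scsi1:1.mode = "independent-persistent"
scsi1:2.present = "TRUE"
scsi1:2.fileName = "VSA-data3.vmdk"
scsi1:2.mode = "independent-persistent"
```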

Now go to the storage configuration in the CMC. You will see the new disk or disks as uninitialized.

Right click on the disk and select Add Disk to RAID.

Next you will see the disk active and already added to the RAID.

  • Step four: Management group creation.

We’re going to create the most basic configuration possible with a P4000 VSA: one VSA in a single management group, part of a single-node cluster.

From the Getting Started screen launch the Management Groups, Clusters and Volume Wizard.

Select New Management Group and enter the data of the new group.

Add the administrative user.

Set the time of the Management Group.

Create a new Standard Cluster.

Enter the name of the cluster and select the nodes of the cluster, in this particular set-up there is only one node.

Finally add a virtual IP for the cluster.

Once the cluster is created, the wizard will ask to create a volume. The volume can also be added to the cluster later.

After clicking Finish, the management group and the cluster will be created.

And we are done. In the next post about the P4000 I will show how to add an iSCSI volume to an ESXi4 server.

Juanma.