Archives For iSCSI

After my previous post about getting the iqn of an ESXi host using esxcli, Andy Banta (@andybanta) commented on Twitter that you can also change the iqn of the host with esxcli.

As he said, this is tremendously useful if you need to physically replace a server and don't want to modify your whole storage infrastructure: it's easier to just change the iqn of the new server and set it to the old name.

The task is as easy as the one described in the last post. Using the esxcli command with the iscsi namespace you can change both the name and the alias of the adapter.


As a precaution, first retrieve the current iqn to check that it is the correct server.

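Something along these lines should do it, assuming the software iSCSI adapter is vmhba33 (check with esxcli iscsi adapter list); the iqn is shown in the Name field of the output:

~ # esxcli iscsi adapter get --adapter=vmhba33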

To change the name you have to provide the adapter and the new name.

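The command would be something like this, again assuming vmhba33 as the adapter and reusing the old server's iqn as the new name; the --alias option works the same way if you also want to change the alias:

~ # esxcli iscsi adapter set --adapter=vmhba33 --name=iqn.1998-01.com.vmware:esx02-42b0f47e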

I hope you find this useful; any comments and suggestions are welcome, as always.

Juanma.

Back in 2010 I wrote a post about how to get the iSCSI iqn of an ESXi 4.x server using vSphere CLI from the vMA or any other system with the vCLI installed on it.

The method described in that article is still valid for ESXi 5.0, since the old vicfg and esxcfg commands are still available; however, with version 5.0 you can get the same result using the new esxcli namespaces. Here is how to do it.

The first task is to get a list of the iSCSI HBAs in order to find the name of the software iSCSI initiator.

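A sketch of the command; the software initiator will typically show up as something like vmhba33:

~ # esxcli iscsi adapter list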

Next we get the info of the adapter.

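Something like this, assuming vmhba33 is the software iSCSI adapter reported by the previous command:

~ # esxcli iscsi adapter get --adapter=vmhba33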

Look at the Name field to get the iqn and we are done.

Juanma.

The following post discusses iSCSI initiator configuration in Red Hat Enterprise Linux 5; the method is also applicable to all RHEL5 derivatives. The iSCSI LUNs will be provided by an HP P4000 array.

First of all we need to install the iscsi-initiator-utils RPM package. You can use yum to get and install it from any supported CentOS or RHEL repository, or you can download the package from Red Hat Network if you have a valid RHN account and your system doesn't have an internet connection.

[root@rhel5 ~]# rpm -ivh /tmp/iscsi-initiator-utils-6.2.0.871-0.16.el5.x86_64.rpm
Preparing...                ########################################### [100%]
   1:iscsi-initiator-utils  ########################################### [100%]
[root@rhel5 ~]#
[root@rhel5 ~]# rpm -qa | grep iscsi
iscsi-initiator-utils-6.2.0.871-0.16.el5
[root@rhel5 ~]# rpm -qi iscsi-initiator-utils-6.2.0.871-0.16.el5
Name        : iscsi-initiator-utils        Relocations: (not relocatable)
Version     : 6.2.0.871                         Vendor: Red Hat, Inc.
Release     : 0.16.el5                      Build Date: Tue 09 Mar 2010 09:16:29 PM CET
Install Date: Wed 16 Feb 2011 11:34:03 AM CET      Build Host: x86-005.build.bos.redhat.com
Group       : System Environment/Daemons    Source RPM: iscsi-initiator-utils-6.2.0.871-0.16.el5.src.rpm
Size        : 1960412                          License: GPL
Signature   : DSA/SHA1, Wed 10 Mar 2010 04:26:37 PM CET, Key ID 5326810137017186
Packager    : Red Hat, Inc. <http://bugzilla.redhat.com/bugzilla>
URL         : http://www.open-iscsi.org
Summary     : iSCSI daemon and utility programs
Description :
The iscsi package provides the server daemon for the iSCSI protocol,
as well as the utility programs used to manage it. iSCSI is a protocol
for distributed disk access using SCSI commands sent over Internet
Protocol networks.
[root@rhel5 ~]#

Next we are going to configure the initiator. The iSCSI initiator is composed of two services, iscsi and iscsid; enable them to start at system boot using chkconfig.

[root@rhel5 ~]# chkconfig iscsi on
[root@rhel5 ~]# chkconfig iscsid on
[root@rhel5 ~]#
[root@rhel5 ~]# chkconfig --list | grep iscsi
iscsi           0:off   1:off   2:on    3:on    4:on    5:on    6:off
iscsid          0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@rhel5 ~]#
[root@rhel5 ~]#

Once iSCSI is configured start the service.

[root@rhel5 ~]# service iscsi start
iscsid is stopped
Starting iSCSI daemon:                                     [  OK  ]
                                                           [  OK  ]
Setting up iSCSI targets: iscsiadm: No records found!
                                                           [  OK  ]
[root@rhel5 ~]#
[root@rhel5 ~]# service iscsi status
iscsid (pid  14170) is running...
[root@rhel5 ~]#

From the P4000 CMC we need to add the server to the management group configuration like we would do with any other server.

The server iqn can be found in the file /etc/iscsi/initiatorname.iscsi.

[root@cl-node1 ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:2551bf29b48
[root@cl-node1 ~]#

Create any iSCSI volumes you need on the P4000 arrays and assign them to the Red Hat system. Then, to discover the presented LUNs, run the iscsiadm command from the Linux server.

[root@rhel5 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.126.60
192.168.126.60:3260,1 iqn.2003-10.com.lefthandnetworks:mlab:62:lv-rhel01
[root@rhel5 ~]#

Restart the iSCSI initiator to make the new block device available to the operating system.

[root@rhel5 ~]# service iscsi restart
Stopping iSCSI daemon:
iscsid dead but pid file exists                            [  OK  ]
Starting iSCSI daemon:                                     [  OK  ]
                                                           [  OK  ]
Setting up iSCSI targets: Logging in to [iface: default, target: iqn.2003-10.com.lefthandnetworks:mlab:62:lv-rhel01, portal: 192.168.126.60,3260]
Login to [iface: default, target: iqn.2003-10.com.lefthandnetworks:mlab:62:lv-rhel01, portal: 192.168.126.60,3260]: successful
                                                           [  OK  ]
[root@rhel5 ~]#

Then check that the new disk is available; I used lsscsi, but fdisk -l will also do.

[root@rhel5 ~]# lsscsi
[0:0:0:0]    disk    VMware,  VMware Virtual S 1.0   /dev/sda
[2:0:0:0]    disk    LEFTHAND iSCSIDisk        9000  /dev/sdb
[root@rhel5 ~]#
[root@rhel5 ~]# fdisk -l /dev/sdb 

Disk /dev/sdb: 156.7 GB, 156766306304 bytes
255 heads, 63 sectors/track, 19059 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdb doesn't contain a valid partition table
[root@rhel5 ~]#

At this point the iSCSI configuration is done; the new LUNs will remain available across system reboots as long as the iscsi service is enabled.
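
As a side note, if you create a filesystem on the new LUN and want it mounted automatically at boot, the usual approach on RHEL5 is to mount it with the _netdev option, so the mount waits for the network and iSCSI services. A minimal sketch, after partitioning /dev/sdb with fdisk (the partition and mount point are just examples; in production it is safer to reference the filesystem by UUID or label, since iSCSI device names can change between boots):

[root@rhel5 ~]# mkfs.ext3 /dev/sdb1
[root@rhel5 ~]# mkdir /mnt/iscsi
[root@rhel5 ~]# echo "/dev/sdb1  /mnt/iscsi  ext3  _netdev  0 0" >> /etc/fstab
[root@rhel5 ~]# mount /mnt/iscsi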

Juanma.

In yesterday’s post I showed how to get the iSCSI iqn of an ESX(i) server using the vSphere CLI from the vMA and from the root shell of an ESX itself. Today it’s time to use PowerCLI to perform the same task.

The approach to be taken is very similar to the one we used to manage the multipathing configuration.

[vSphere PowerCLI] C:\> $h = Get-VMhost esx02.mlab.local
[vSphere PowerCLI] C:\> $hostview = Get-View $h.id
[vSphere PowerCLI] C:\> $storage = Get-View $hostview.ConfigManager.StorageSystem
[vSphere PowerCLI] C:\>
[vSphere PowerCLI] C:\> $storage.StorageDeviceInfo

HostBusAdapter              : {key-vim.host.ParallelScsiHba-vmhba0, key-vim.host.BlockHba-vmhba1, key-vim.host.BlockHba-vmhba32, key-vim.host.InternetScsiHba-vmhba33}
ScsiLun                     : {key-vim.host.ScsiLun-0005000000766d68626133323a303a30, key-vim.host.ScsiDisk-0000000000766d686261303a303a30}
ScsiTopology                : VMware.Vim.HostScsiTopology
MultipathInfo               : VMware.Vim.HostMultipathInfo
PlugStoreTopology           : VMware.Vim.HostPlugStoreTopology
SoftwareInternetScsiEnabled : True
DynamicType                 :
DynamicProperty             : 

[vSphere PowerCLI] C:\>
[vSphere PowerCLI] C:\> $storage.StorageDeviceInfo.HostBusAdapter | select IScsiName

IScsiName                                                                                                                                                               
---------                                                                                                                                                                

iqn.1998-01.com.vmware:esx02-42b0f47e                                                                                                                                    

[vSphere PowerCLI] C:\>
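
As a side note, assuming a PowerCLI version that includes the Get-VMHostHba cmdlet, the same information can probably be obtained with a one-liner along these lines:

[vSphere PowerCLI] C:\> Get-VMHostHba -VMHost esx02.mlab.local -Type IScsi | Select Device, IScsiName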

Juanma.

When you are trying to configure iSCSI on an ESX(i) server from the command line, it is clear that at some point you are going to need the iqn. Of course you can use the vSphere Client to get it, but the Unix geek inside me really wants to do it from the shell.

After some research through the vSphere CLI documentation and several blogs I found this post by Jon Owings (@2vcps).

First list the SCSI devices available in the system to get the iSCSI hba.

[root@esx02 ~]# esxcfg-scsidevs -a
vmhba0  mptspi            link-n/a  pscsi.vmhba0                            (0:0:16.0) LSI Logic / Symbios Logic LSI Logic Parallel SCSI Controller
vmhba1  ata_piix          link-n/a  ide.vmhba1                              (0:0:7.1) Intel Corporation Virtual Machine Chipset
vmhba32 ata_piix          link-n/a  ide.vmhba32                             (0:0:7.1) Intel Corporation Virtual Machine Chipset
vmhba33 iscsi_vmk         online    iscsi.vmhba33                           iSCSI Software Adapter         
[root@esx02 ~]#

After that Jon uses the command vmkiscsi-tool to get the iqn.

[root@esx02 ~]# vmkiscsi-tool -I -l vmhba33
iSCSI Node Name: iqn.1998-01.com.vmware:esx02-42b0f47e
[root@esx02 ~]#

A beauty, isn’t it? But I found one glitch: this method works from the ESX root shell, but how do I get the iqn from the vMA? Some of my hosts are ESXi, and even for the ESX hosts I use the vMA to perform my everyday administration tasks.

There is no vmkiscsi-tool command in the vMA; instead we are going to use the vicfg-iscsi or vicfg-scsidevs commands.

With vicfg-scsidevs we can obtain the iqn listed in the UID column.

[vi-admin@vma ~][esx02.mlab.local]$ vicfg-scsidevs -a             
Adapter_ID  Driver      UID                                     PCI      Vendor & Model
vmhba0      mptspi      pscsi.vmhba0                            (0:16.0) LSI Logic Parallel SCSI Controller
vmhba1      ata_piix    unknown.vmhba1                          (0:7.1)  Virtual Machine Chipset
vmhba32     ata_piix    ide.vmhba32                             (0:7.1)  Virtual Machine Chipset
vmhba33     iscsi_vmk   iqn.1998-01.com.vmware:esx02-42b0f47e   ()       iSCSI Software Adapter
[vi-admin@vma ~][esx02.mlab.local]$

And with vicfg-iscsi we can get the iqn by providing the vmhba device.

[vi-admin@vma ~][esx02.mlab.local]$ vicfg-iscsi --iscsiname --list vmhba33
iSCSI Node Name   : iqn.1998-01.com.vmware:esx02-42b0f47e
iSCSI Node Alias  :
[vi-admin@vma ~][esx02.mlab.local]$

The next logical step is to use PowerCLI to retrieve the iqn, but I’ll leave that for a future post.

Juanma.

This post will outline the necessary steps to create a standard (non-multisite) HP P4000 cluster with two nodes. Creating a two-node cluster is a process very similar to the one-node cluster described in my first post about P4000 systems.

The cluster is composed of:

  • 2 HP P4000 Virtual Storage Appliances
  • 1 HP P4000 Failover Manager

The Failover Manager, or FOM, is a specialized version of the SAN/iQ software. It runs as a VMware virtual appliance; though the most common setup is to run it on ESX/ESXi servers, running it under VMware Player or Server is also supported.

The FOM integrates into a management group as a real manager and is intended only to provide quorum to the cluster; one of its main purposes is to provide quorum in multi-site clusters. I decided to use it in this post to make the example as realistic as possible.

To setup this cluster I used virtual machines inside VMware Workstation, but the same design can also be created with physical servers and P4000 storage systems.

From the Getting started screen launch the clusters wizard.


Select the two P4000 storage systems and enter the name of the Management Group.

During the group creation the wizard will ask you to create a cluster; choose the two nodes as members of the cluster (we will add the FOM later) and assign a name to the cluster.

Next assign a virtual IP address to the cluster.

Enter the administrative level credentials for the cluster.

Finally the wizard will ask if you want to create volumes in the cluster; I didn’t take that option and finished the cluster creation process. You can also add the volumes later, as I described in one of my previous posts.

Now that the cluster is formed we are going to add the Failover Manager.

It is important to note that the FOM requires the same configuration as any VSA, as I described in my first post about the P4000 storage systems.

In the Centralized Management Console right-click on the FOM and select Add to existing management group.

Select the management group and click Add.

With this operation the cluster configuration is done. If everything went well, you should end up with something like this.

Juanma.

Continuing the series of posts about the HP Lefthand SAN systems, in this post I will explain the basic volume operations with CLIQ, the HP Lefthand SAN/iQ command line.

I used the Lefthand VSA and the ESX4 servers from my home lab to illustrate the procedure. The commands are executed locally in the VSA via SSH. The following tasks will be covered:

  • Volume creation.
  • Assign a volume to one or more hosts.
  • Volume deletion.

Volume creation

The command to use is createVolume. The available options for this command are:

  • Volumename.
  • Clustername: the cluster where the volume will be created.
  • Replication: the replication level, from 1 (none) to 4 (4-way replication).
  • Thinprovision: 1 (Thin provisioning) or 2 (Full provision).
  • Description.
  • Size: The size can be set in MB, GB or TB.
CLIQ>createVolume volumeName=vjm-cluster2 size=2GB clusterName=iSCSI-CL replication=1 thinProvision=1 description="vep01-02 datastore" 

SAN/iQ Command Line Interface, v8.1.00.0047
(C) Copyright 2007-2009 Hewlett-Packard Development Company, L.P.

RESPONSE
 result         0
 processingTime 2539
 name           CliqSuccess
 description    Operation succeeded

CLIQ>

Assign a volume to the hosts

The command to use for this task is assignVolume. A few parameters are accepted by this command:

  • Volumename.
  • Initiator: The host IQNs. If the volume is going to be presented to more than one host, the IQNs of the servers must be separated by semicolons. One important tip: the operation must be done in a single command. You cannot assign the volume to a host in one command and to a new host in a second command; the second one will overwrite the first instead of adding the volume to one more host.
  • Access rights: The default is read-write (rw); read-only (r) or write-only (w) can also be set.
CLIQ>assignVolume volumeName=vjm-cluster2 initiator=iqn.1998-01.com.vmware:vep01-45602bf3;iqn.1998-01.com.vmware:vep02-5f779b32

SAN/iQ Command Line Interface, v8.1.00.0047
(C) Copyright 2007-2009 Hewlett-Packard Development Company, L.P.

RESPONSE
 result         0
 processingTime 4069
 name           CliqSuccess
 description    Operation succeeded

CLIQ>

Now that the volume is created and assigned to several servers, check its configuration with getVolumeInfo.

CLIQ>getVolumeInfo volumeName=vjm-cluster2

SAN/iQ Command Line Interface, v8.1.00.0047
(C) Copyright 2007-2009 Hewlett-Packard Development Company, L.P.

RESPONSE
 result         0
 processingTime 1480
 name           CliqSuccess
 description    Operation succeeded

 VOLUME
 thinProvision  true
 stridePages    32
 status         online
 size           2147483648
 serialNumber   17a1c11e939940a4f7e91ee43654c94b000000000000006b
 scratchQuota   4194304
 reserveQuota   536870912
 replication    1
 name           vjm-cluster2
 minReplication 1
 maxSize        14587789312
 iscsiIqn       iqn.2003-10.com.lefthandnetworks:vdn:107:vjm-cluster2
 isPrimary      true
 initialQuota   536870912
 groupName      VDN
 friendlyName   
 description    vep01-02 datastore
 deleting       false
 clusterName    iSCSI-CL
 checkSum       false
 bytesWritten   18087936
 blockSize      1024
 autogrowPages  512

 PERMISSION
 targetSecret    
 loadBalance     true
 iqn             iqn.1998-01.com.vmware:vep01-45602bf3
 initiatorSecret
 chapRequired    false
 chapName        
 authGroup       vep01
 access          rw

 PERMISSION
 targetSecret    
 loadBalance     true
 iqn             iqn.1998-01.com.vmware:vep02-5f779b32
 initiatorSecret
 chapRequired    false
 chapName        
 authGroup       vep02
 access          rw

CLIQ>

If you refresh the storage configuration of the ESX hosts through the vSphere Client, the new volume will be displayed.

Volume deletion

Finally we are going to delete another volume that is no longer in use by the servers in my lab.

CLIQ>deleteVolume volumeName=testvolume

SAN/iQ Command Line Interface, v8.1.00.0047
(C) Copyright 2007-2009 Hewlett-Packard Development Company, L.P.

This operation is potentially irreversible.  Are you sure? (y/n) 

RESPONSE
 result         0
 processingTime 1416
 name           CliqSuccess
 description    Operation succeeded

CLIQ>
CLIQ>getvolumeinfo volumename=testvolume

SAN/iQ Command Line Interface, v8.1.00.0047
(C) Copyright 2007-2009 Hewlett-Packard Development Company, L.P.

RESPONSE
 result         8000100c
 processingTime 1201
 name           CliqVolumeNotFound
 description    Volume 'testvolume' not found

CLIQ>

And we are done. As always comments are welcome :-)

Juanma.

It seems that a very popular post, if not the most popular one, is the one about my first experiences with the P4000 virtual storage appliance, so I decided to dig deeper into the VSA and the P4000 series and write about it.

The first post of this series is about one of the lesser-known features of the P4000 series: its command line, known as CLIQ.

Typically any sysadmin would configure and manage P4000 storage nodes through the HP Lefthand Centralized Management Console graphical interface. But there is another way to do it: the SAN/iQ software has a very powerful command line that allows you to perform almost any task available in the CMC.

The command line can be accessed in two ways: remotely from any Windows machine, or via SSH.

  • SSH access

To access the CLIQ via SSH the connection has to be made to TCP port 16022, instead of the standard SSH port, using the management group administration user. The connection can be established to any storage node of the group; the operation will apply to the whole group.
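
For example, a connection to a node would look like this (the host name and the admin user are just placeholders for your storage node and management group administration account); after logging in you are dropped at the CLIQ> prompt shown in the examples of my other posts:

ssh -p 16022 admin@vsa01.mlab.local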

  • Remote access

The other way to use the CLIQ shell is from a Windows host with the HP Lefthand CLI shell installed on it. The software is included in the SAN/iQ Management Software DVD, which can be obtained along with other tools and documentation for the P4000 series at the following URL: http://www.hp.com/go/p4000downloads.

Regarding the use of the remote CLIQ, there is one main difference from the on-node CLIQ: every command must include the address or DNS name of the storage node where the task is going to be performed, and at least the username. The password can also be included, but for security reasons it is best not to do so and be prompted instead. An encrypted key file with the necessary credentials can be used if you don't want to use the username and password parameters within the command.
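
For illustration, a remote invocation would look roughly like this; the parameter names follow the CLIQ documentation as far as I recall, and the address and credentials are of course just placeholders (leave out passWord= to be prompted for it):

C:\> cliq getVolumeInfo volumeName=vjm-cluster2 login=192.168.1.20 userName=admin passWord=secret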

Of course this kind of access is perfect for scripting and automating tasks.

Juanma.

In today’s post I will try to explain step by step how to add an iSCSI volume from the HP Lefthand P4000 VSA to a VMware ESXi4 server.

Step One: Create a volume.

Let's suppose we already have a configured storage appliance; I showed how to create a cluster in my previous post, so I will not repeat that part here. Open the Centralized Management Console and go to Management group -> Cluster -> Volumes and Snapshots.

Click on Tasks and choose New Volume.

Enter the volume name, a short description and the size. The volume can also be assigned to any server already connected to the cluster; as we don't have any server assigned, this option can be ignored for now.

In the Advanced tab the volume can be assigned to an existing cluster, and the RAID level, the volume type and the provisioning type can be set.

When everything is configured click OK and the volume will be created. After the creation process, the new volume will show up in the CMC.

At this point we have a new volume with some RAID protection level (none in this particular case, since it's a single-node cluster). The next step is to assign the volume to a server.

Step Two: ESXi iSCSI configuration.

Connect to the chosen ESXi4 server through the vSphere Client and, from the Configuration tab in the right pane, go to the Networking area and click the Add Networking link.

In the Add Networking Wizard select VMkernel as Connection Type.

Create a new virtual switch.

Enter the Port Group properties; in my case just the label, as no other properties were relevant for me.

Set the IP settings, go to the Summary screen and click Finish.

The newly created virtual switch will appear in the Networking area.

With the new virtual switch created, go to Storage Adapters; there you will see an iSCSI software adapter.

Click on properties and on the General tab click the Configure button and check the Enabled status box.

Once iSCSI is enabled its properties window will be populated.

Click Close; the server will ask for a rescan of the adapter, but at this point it is not necessary, so it can be skipped.

Step Three: Add the volume to the ESXi server.

Now that we have our volume created and the iSCSI adapter of our ESXi server activated, the next logical step is to add the storage to the server.

On the HP Lefthand CMC go to the Servers area and add a new server.

Add a name for the server and a short description, check the Allow access via iSCSI box and select the authentication. In the example I chose CHAP not required. With this option you only have to enter the Initiator Node Name; you can grab it from the details of the ESXi iSCSI adapter.

To finish the process click OK and you will see the newly added server. Go to the Volumes and Snapshots tab in the server configuration and from the Tasks menu assign a volume to the server.

Select the volume created at the beginning.

Now go back to the vSphere Client and open the properties of the iSCSI adapter again. On the Dynamic Discovery tab add the virtual IP address of the VSA cluster.

Click Close and the server will again ask to rescan the adapter; this time say yes, and after the rescan the iSCSI LUN will show up.

Now a new datastore can be created in the ESXi host with the newly configured storage. Of course the same LUN can also be used to provide shared storage for more ESXi servers and for VMotion, HA or any other interesting VMware vSphere features. Maybe in another post ;-)

Juanma.

The P4000 Virtual Storage Appliance is a storage product from HP; its features include:

  • Synchronous replication to remote sites.
  • Snapshot capability.
  • Thin provisioning.
  • Network RAID.

It can be used to create a virtual iSCSI SAN to provide shared storage for ESX environments.

A 60-day trial is available here; it requires logging in with your HP Passport account. As I wanted to test it, that is what I did. There are two versions available: one for ESX and a second one, labeled as Laptop Demo, which is optimized for VMware Workstation and also comes with the Centralized Management Console software. I chose the latter.

After importing the appliance into VMware Workstation you will see that it comes configured with “Other Linux 2.6.x kernel” as the guest OS, 1 vCPU, 384MB of RAM, two 308MB disks used for the OS of the VSA and a 7.2GB disk. The three disks are SCSI and are configured as Independent-Persistent.

At that point I fired up the appliance and started to play with my VSA.

  • First Step: Basic configuration.

A couple of minutes after being started, the appliance will show the login prompt.

As instructed, enter “start”. You will be redirected to another login screen where you only have to press Enter, and then the configuration screen will appear.

The first section, “General Settings”, allows you to create an administrator account and to set the password of the default account.

Move on to the networking settings. The first screen asks you to choose the network interface; in my case I only had one.

And now you can proceed with the NIC configuration. It will ask for confirmation before committing any changes you make to the VSA network configuration.

In the next area of the main menu, Network TCP Status, the speed of the network card can be forced to 1000Mb/s Full Duplex and the frame size can be set.

The final part is for group management configuration, in fact to remove the VSA from a management group, and to reset the VSA to its default settings.

Now we have our P4000 configured to be managed through the CMC. I will not explain the CMC installation, since it is almost just a “next -> next -> …” task.

  • Second step: VSA management through Centralized Management Console.

Launch the CMC. The Find Nodes Wizard will pop up.

Choose the appropriate option in the next screen. To add a single VSA choose the second one.

Enter the IP address of the appliance.

Click Finish and if the VSA is online the wizard will find it and add it to the CMC.

Now the VSA is managed through the CMC but it is not part of a management group.

  • Step Three: Add more storage.

The first basic task we're going to do with the VSA, prior to management group creation, is to add more storage.

As the VSA is a virtual machine, go to VMware Workstation or the vSphere Client, depending on which VSA version you are using, and edit the appliance settings.

If you look into the advanced configuration of the third disk, the 7.2GB one, you will see that it has the 1:0 SCSI address.

This is very important because the new disks must be added sequentially at addresses 1:1 through 1:4 in order to be detected by the VSA. If no disk had been added to the VSA yet, the 1:0 SCSI address would have to be used for the first disk.

Add the new disk and, before finishing the process, edit the advanced settings of the disk and set the SCSI address.
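
For reference, setting the virtual device node in the GUI translates into .vmx entries roughly like the following; the disk file name is just an example, and the scsi1 controller line will already be present because of the existing 1:0 disk:

scsi1.present = "TRUE"
scsi1:1.present = "TRUE"
scsi1:1.fileName = "p4000-vsa-data1.vmdk"
scsi1:1.mode = "independent-persistent"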

Now in the CMC go to the storage configuration. You will see the new disk/disks as uninitialized.

Right click on the disk and select Add Disk to RAID.

Next you will see the disk active and already added to the RAID.

  • Step four: Management group creation.

We’re going to create the most basic configuration possible with a P4000 VSA: one VSA in a single management group and part of a single-node cluster.

From the Getting Started screen launch the Management Groups, Clusters and Volume Wizard.

Select New Management Group and enter the data of the new group.

Add the administrative user.

Set the time of the Management Group.

Create a new Standard Cluster.

Enter the name of the cluster and select its nodes; in this particular setup there is only one node.

Finally add a virtual IP for the cluster.

Once the cluster is created the wizard will ask to create a volume. The volume can also be added later to the cluster.

After we click Finish, the management group and the cluster will be created.

And we are done. In the next post about the P4000 I will show how to add an iSCSI volume to an ESXi4 server.

Juanma.