Archives For Storage

Welcome to the third post of my series about OpenStack. In the first and second posts we saw in detail how to prepare the basic network infrastructure of our future OpenStack cloud using VMware NSX. In this third one we are going to install and configure the KVM compute hosts and the shared storage of the lab.

KVM setup

Create and install two CentOS 6.4 virtual machines with 2 vCPU, 2 GB of RAM, 2 network interfaces (E1000) and one 16GB disk. For the partitioning scheme I used the following:

  • sda1 – 512MB – /boot
  • sda2 – Rest of the disk – LVM PV
    • lv_root – 13.5GB – /
    • lv_swap – 2GB – swap

Mark the Base and Standard groups to be installed and leave the rest unchecked. Set the hostname during the installation and leave the networking configuration with the default values. Bear in mind that you will need a DHCP server on your network; in my case I'm using the one that comes with VMware Fusion. If you don't have one, set a temporary IP address here in order to be able to install the KVM software. Once the installation is done reboot your virtual machine and open a root SSH session to proceed with the rest of the configuration tasks.

Disable SELinux with the setenforce command, and also modify the SELinux config file to disable it during OS boot. I do not recommend disabling SELinux in a production environment, but for a lab it will simplify things.

setenforce 0
cp /etc/selinux/config /etc/selinux/config.orig
sed -i s/SELINUX\=enforcing/SELINUX\=disabled/ /etc/selinux/config

Check that hardware virtualization support is activated.

egrep -i 'vmx|svm' /proc/cpuinfo

Install KVM packages.

yum install kvm libvirt python-virtinst qemu-kvm

After installing a ton of dependencies, and if nothing failed, enable and start the libvirtd service.

[root@kvm1 ~]# chkconfig libvirtd on
[root@kvm1 ~]# service libvirtd start
Starting libvirtd daemon:                                  [  OK  ]
[root@kvm1 ~]#

Verify that KVM has been correctly installed and it’s loaded and running on the system.

[root@kvm1 ~]# lsmod | grep kvm
kvm_intel              53484  0
kvm                   316506  1 kvm_intel
[root@kvm1 ~]#
[root@kvm1 ~]# virsh -c qemu:///system list
 Id    Name                           State
----------------------------------------------------

[root@kvm1 ~]#

Hypervisor networking setup

With KVM software installed and ready we can now move on to configure the networking for both hosts and integrate them into our NSX deployment.

Disable NetworkManager for both interfaces. Edit the /etc/sysconfig/network-scripts/ifcfg-ethX files and change the NM_CONTROLLED value to no.
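
A quick way to apply this change from the shell, as a sketch assuming your interfaces are named eth0 and eth1 (if the NM_CONTROLLED line is not present in a file, just add it manually):

sed -i 's/^NM_CONTROLLED=.*/NM_CONTROLLED=no/' /etc/sysconfig/network-scripts/ifcfg-eth0
sed -i 's/^NM_CONTROLLED=.*/NM_CONTROLLED=no/' /etc/sysconfig/network-scripts/ifcfg-eth1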

By default libvirt creates the virbr0 network bridge to be used by the virtual machines to access the external network through a NAT connection. We need to disable it to ensure that the bridge components of Open vSwitch can load without any errors.

virsh net-destroy default
virsh net-autostart --disable default

Install Open vSwitch

Copy the NSX OVS package to the KVM host and extract it.

[root@kvm1 nsx-ovs]# tar vxfz nsx-ovs-2.1.0-build33849-rhel64_x86_64.tar.gz
./
./nicira-flow-stats-exporter/
./nicira-flow-stats-exporter/nicira-flow-stats-exporter-4.1.0.32691-1.x86_64.rpm
./tcpdump-ovs-4.4.0.ovs2.1.0.33849-1.x86_64.rpm
./kmod-openvswitch-2.1.0.33849-1.el6.x86_64.rpm
./openvswitch-2.1.0.33849-1.x86_64.rpm
./nicira-ovs-hypervisor-node-2.1.0.33849-1.x86_64.rpm
./nicira-ovs-hypervisor-node-debuginfo-2.1.0.33849-1.x86_64.rpm
[root@kvm1 nsx-ovs]#

Install Open vSwitch packages.

rpm -Uvh kmod-openvswitch-2.1.0.33849-1.el6.x86_64.rpm
rpm -Uvh openvswitch-2.1.0.33849-1.x86_64.rpm

Verify that Open vSwitch service is enabled and start it.

[root@kvm1 ~]# chkconfig --list openvswitch
openvswitch     0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@kvm1 ~]#
[root@kvm1 ~]#
[root@kvm1 ~]# service openvswitch start
/etc/openvswitch/conf.db does not exist ... (warning).
Creating empty database /etc/openvswitch/conf.db           [  OK  ]
Starting ovsdb-server                                      [  OK  ]
Configuring Open vSwitch system IDs                        [  OK  ]
Inserting openvswitch module                               [  OK  ]
Starting ovs-vswitchd                                      [  OK  ]
Enabling remote OVSDB managers                             [  OK  ]
[root@kvm1 ~]#

Install the nicira-ovs-hypervisor-node package; this utility provides the infrastructure for distributed routing on the hypervisor. During the installation the integration bridge br-int and the OVS SSL credentials will be created.

[root@kvm1 ~]# rpm -Uvh nicira-ovs-hypervisor-node*.rpm
Preparing...                ########################################### [100%]
   1:nicira-ovs-hypervisor-n########################################### [ 50%]
   2:nicira-ovs-hypervisor-n########################################### [100%]
Running '/usr/sbin/ovs-integrate init'
successfully generated self-signed certificates..
successfully created the integration bridge..
[root@kvm1 ~]#

There are other packages, like nicira-flow-stats-exporter and tcpdump-ovs, but they are not required for OVS to function. We can proceed now with the OVS configuration.

Configure Open vSwitch

The first step is to create OVS bridges for each network interface card of the hypervisor.

ovs-vsctl add-br br0
ovs-vsctl br-set-external-id br0 bridge-id br0
ovs-vsctl set Bridge br0 fail-mode=standalone
ovs-vsctl add-port br0 eth0

If you were logged in through an SSH session you have probably noticed that your connection was lost; this is because the br0 interface has taken over the networking of the host and it doesn't have an IP address configured. To solve this, access the host console and edit the ifcfg-eth0 file so that it looks like this.

DEVICE=eth0
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br0
ONBOOT=yes
BOOTPROTO=none
IPV6INIT=no
NAME=eth0
HOTPLUG=no
HWADDR=00:0C:29:CA:34:FE
NM_CONTROLLED=no

Next create and edit ifcfg-br0 file.

DEVICE=br0
DEVICETYPE=ovs
TYPE=OVSBridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.82.42
NETMASK=255.255.255.0
GATEWAY=192.168.82.2
IPV6INIT=no
HOTPLUG=no

Restart the network service and test the connection.

service network restart

Repeat all the above steps for the second network interface.
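
For reference, a sketch of the equivalent OVS commands for the second interface (br1/eth1) follows; the ifcfg-eth1 and ifcfg-br1 files are built exactly like the ones above, using the addressing of your second network.

ovs-vsctl add-br br1
ovs-vsctl br-set-external-id br1 bridge-id br1
ovs-vsctl set Bridge br1 fail-mode=standalone
ovs-vsctl add-port br1 eth1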

Finally configure NSX Controller Cluster as manager in Open vSwitch.

ovs-vsctl set-manager ssl:192.168.82.44

Execute ovs-vsctl show command to review OVS current configuration.

[root@kvm1 ~]# ovs-vsctl show
383c3f17-5c53-4992-be8e-6e9b195e51d8
    Manager "ssl:192.168.82.44"
    Bridge "br1"
        fail_mode: standalone
        Port "br1"
            Interface "br1"
                type: internal
        Port "eth1"
            Interface "eth1"
    Bridge "br0"
        fail_mode: standalone
        Port "eth0"
            Interface "eth0"
        Port "br0"
            Interface "br0"
                type: internal
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "2.1.0.33849"
[root@kvm1 ~]#

Register OVS in NSX Controller

With our OVS instance installed and running we can now inform the NSX Controller of its existence, either via the NVP API or NSX Manager; in our case we will use the latter.

Log into NSX Manager as the admin user and go to the Dashboard. From the Summary of Transport Components table click Add in the Hypervisors row. Verify that Hypervisor is selected as the transport node type and move to the Basics screen. Enter a name for the hypervisor, usually the hostname of the server.

In Properties enter:

  • Integration bridge ID, which in our case is br-int.
  • Admin Status Enabled – enabled by default.

For the Credential screen we are going to need the SSL certificate that was created along with the integration bridge during the NSX OVS installation. The PEM certificate file is ovsclient-cert.pem and it is located in the /etc/openvswitch directory.

[root@kvm1 ~]# cat /etc/openvswitch/ovsclient-cert.pem
-----BEGIN CERTIFICATE-----
MIIDwjCCAqoCCQDZUob5H9tzvjANBgkqhkiG9w0BAQUFADCBojELMAkGA1UEBhMC
VVMxCzAJBgNVBAgTAkNBMRIwEAYDVQQHEwlQYWxvIEFsdG8xFTATBgNVBAoTDE9w
ZW4gdlN3aXRjaDEfMB0GA1UECxMWT3BlbiB2U3dpdGNoIGNlcnRpZmllcjE6MDgG
A1UEAxMxb3ZzY2xpZW50IGlkOjA4NWQwMTFiLTJiMzYtNGQ5My1iMWIyLWJjODIz
MDczYzE0YzAeFw0xNDA1MDQyMjE3NTVaFw0yNDA1MDEyMjE3NTVaMIGiMQswCQYD
VQQGEwJVUzELMAkGA1UECBMCQ0ExEjAQBgNVBAcTCVBhbG8gQWx0bzEVMBMGA1UE
ChMMT3BlbiB2U3dpdGNoMR8wHQYDVQQLExZPcGVuIHZTd2l0Y2ggY2VydGlmaWVy
MTowOAYDVQQDEzFvdnNjbGllbnQgaWQ6MDg1ZDAxMWItMmIzNi00ZDkzLWIxYjIt
YmM4MjMwNzNjMTRjMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAwgqT
hvG72vat0hXvTuukZOs6fM4CAphmN34l4415q/vReSM3upN+vOLoyGJ/8VJGdNXH
3Bsu6V58f6o8EPbfnhgqf2rCP0r5kiiN5SivsAWI5//ltV1GDFO4+8VpYAwn4Cbd
sNOuFEM1mKOR//IL3Riy9Nkh16wfLy44KEE9745uhZ9gW96AkSkBx1ajjUiApnjL
M6L2w/E4sxNeMDLf/VYlc/SuEg775D9iaPpA1haJt8FFw1g769FsR9Q0Fl+CoT7f
ggBZTKwwcoU+5Ew1mNlPV0Hm8vpFcXbtMBeuT9Fe7k4bC+UuQPaSnbPpbZMpx/wd
fHOdJpemcog/0EjOJQIDAQABMA0GCSqGSIb3DQEBBQUAA4IBAQDBPNM/uI25ofIl
AgCpG42UD3M/RZRPX0/6Be4jCTaAuET6J8wAKA4k1btA6UPt0M98N6o4y60Du2D+
ZwFOa2LSTXZB43X70XnDKxapDVqmhKtrmX2hL1NRD9RjTTx3TOXMOlUiUizRB1+L
d8MNhX3qrvOLeFOUnxm6C5RnI/HdqvS9TyxybX+Qfqit9Q66hbjAt9RribXSw21G
Ix8d9S4NyDO91mDstIcXeNRUk8K64gEQSKxQO9QKmVAQBIlYAJVVXzfkXyHEiKTe
0zIsW/oknwWeQMD9xSrKomY/5+LCuDM1jT5LcL8vxmrEVIrUjNqt4nQsT4mjooG+
XYf2HdXj
-----END CERTIFICATE-----
[root@kvm1 ~]#

Copy the contents of the file and paste them in the Security Certificate text box.

Finally add the Transport Connector with the values:

  • Transport Type: STT
  • Transport Zone UUID: the transport zone, in my case the UUID corresponding to vlab-transport-zone.
  • IP Address: the address of the br0 interface of the host.

Click Save & View and check that Management and OpenFlow connections are up.

GlusterFS setup

I chose GlusterFS for my OpenStack lab for two reasons: I have used it in the past, so this has been a good opportunity to refresh and enhance my rusty Gluster skills, and it is supported as a storage backend for Glance in OpenStack. Instead of going with CentOS again, this time I chose Fedora 20 for my Gluster VM. A real-world GlusterFS cluster will have at least two nodes, but for our lab one will be enough.

Create a Fedora x64 virtual machine with 1 vCPU, 1GB of RAM and one network interface. For the storage part use the following:

  • System disk: 16GB
  • Data disk: 72GB

Use the same partitioning scheme as the KVM hosts for the system disk. Choose a Minimal installation and add the Standard group. Configure the hostname and the IP address of the node, set the root password and create a user as administrator; I'm using my personal user jrey here.

Disable SELinux.

sudo setenforce 0
sudo cp /etc/selinux/config /etc/selinux/config.orig
sudo sed -i s/SELINUX\=enforcing/SELINUX\=disabled/ /etc/selinux/config

Stop and disable firewalld.

sudo systemctl disable firewalld.service
sudo systemctl stop firewalld.service

Install GlusterFS packages. There is no need to add any additional yum repository since Gluster is included in the standard Fedora repos.

sudo yum install glusterfs-server

Enable Gluster services.

sudo systemctl enable glusterd.service
sudo systemctl enable glusterfsd.service

Start Gluster services.

[jrey@gluster ~]$ sudo systemctl start glusterd.service
[jrey@gluster ~]$ sudo systemctl start glusterfsd.service
[jrey@gluster ~]$
[jrey@gluster ~]$ sudo systemctl status glusterd.service
glusterd.service - GlusterFS an clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)
   Active: active (running) since Mon 2014-04-28 17:17:35 CEST; 20s ago
  Process: 1496 ExecStart=/usr/sbin/glusterd -p /run/glusterd.pid (code=exited, status=0/SUCCESS)
 Main PID: 1497 (glusterd)
   CGroup: /system.slice/glusterd.service
           └─1497 /usr/sbin/glusterd -p /run/glusterd.pid

Apr 28 17:17:35 gluster.vlab.local systemd[1]: Started GlusterFS an clustered file-system server.
[jrey@gluster ~]$
[jrey@gluster ~]$ sudo systemctl status glusterfsd.service
glusterfsd.service - GlusterFS brick processes (stopping only)
   Loaded: loaded (/usr/lib/systemd/system/glusterfsd.service; enabled)
   Active: active (exited) since Mon 2014-04-28 17:17:45 CEST; 15s ago
  Process: 1515 ExecStart=/bin/true (code=exited, status=0/SUCCESS)
 Main PID: 1515 (code=exited, status=0/SUCCESS)

Apr 28 17:17:45 gluster.vlab.local systemd[1]: Starting GlusterFS brick processes (stopping only)...
Apr 28 17:17:45 gluster.vlab.local systemd[1]: Started GlusterFS brick processes (stopping only).
[jrey@gluster ~]$

Since we are running a one-node cluster there is no need to add any node to the trusted pool. In case you decide to run a multi-node environment, you can set up the pool by running the following command on each node of the cluster.

gluster peer probe <IP_ADDRESS_OF_OTHER_NODE>

Edit the data disk with fdisk and create a single partition. Format the partition as XFS.
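
If you prefer to script the partitioning step, a parted one-liner along these lines should create an equivalent single partition (a sketch; adjust the device name if your data disk is not /dev/sdb):

sudo parted -s /dev/sdb mklabel msdos mkpart primary 1MiB 100%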

[jrey@gluster ~]$ sudo mkfs.xfs -i size=512 /dev/sdb1
meta-data=/dev/sdb1              isize=512    agcount=4, agsize=4718528 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=18874112, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=9215, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[jrey@gluster ~]$

Create the mount point for the new filesystem, mount the partition and edit /etc/fstab accordingly to make the mount persistent across reboots.

sudo mkdir -p /data/glance/
sudo mount /dev/sdb1 /data/glance
sudo mkdir -p /data/glance/brick1
echo "/dev/sdb1 /data/glance xfs defaults 0 0" | sudo tee -a /etc/fstab

Create the Gluster volume and start it.

[jrey@gluster ~]$ sudo gluster volume create gv0 gluster.vlab.local:/data/glance/brick1
volume create: gv0: success: please start the volume to access data
[jrey@gluster ~]$
[jrey@gluster ~]$ sudo gluster volume start gv0
volume start: gv0: success
[jrey@gluster ~]$
[jrey@gluster ~]$ sudo gluster volume info

Volume Name: gv0
Type: Distribute
Volume ID: d1ad2d00-6210-4856-a5eb-26ddcba77a70
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: gluster.vlab.local:/data/glance/brick1
[jrey@gluster ~]$

The configuration of the Gluster node is finished. In the next article we will install and configure OpenStack, using the different components detailed in this and the previous parts of the series.

Please feel free to add any comment or correction.

Juanma.

This post will discuss iSCSI initiator configuration in Red Hat Enterprise Linux 5; the method is also applicable to all RHEL5 derivatives. The iSCSI LUNs will be provided by an HP P4000 array.

First of all we need to get and install the iscsi-initiator-utils RPM package. You can use yum to get and install the package from any supported repository for CentOS or RHEL, or you can download the package from Red Hat Network if you have a valid RHN account and your system doesn't have an internet connection.
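
If the server has repository access, a single yum command takes care of it; otherwise install the downloaded RPM as shown below.

yum install iscsi-initiator-utils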

[root@rhel5 ~]# rpm -ivh /tmp/iscsi-initiator-utils-6.2.0.871-0.16.el5.x86_64.rpm
Preparing...                ########################################### [100%]
   1:iscsi-initiator-utils  ########################################### [100%]
[root@rhel5 ~]#
[root@rhel5 ~]#rpm -qa | grep iscsi
iscsi-initiator-utils-6.2.0.871-0.16.el5
[root@rhel5 ~]# rpm -qi iscsi-initiator-utils-6.2.0.871-0.16.el5
Name        : iscsi-initiator-utils        Relocations: (not relocatable)
Version     : 6.2.0.871                         Vendor: Red Hat, Inc.
Release     : 0.16.el5                      Build Date: Tue 09 Mar 2010 09:16:29 PM CET
Install Date: Wed 16 Feb 2011 11:34:03 AM CET      Build Host: x86-005.build.bos.redhat.com
Group       : System Environment/Daemons    Source RPM: iscsi-initiator-utils-6.2.0.871-0.16.el5.src.rpm
Size        : 1960412                          License: GPL
Signature   : DSA/SHA1, Wed 10 Mar 2010 04:26:37 PM CET, Key ID 5326810137017186
Packager    : Red Hat, Inc. <http://bugzilla.redhat.com/bugzilla>
URL         : http://www.open-iscsi.org
Summary     : iSCSI daemon and utility programs
Description :
The iscsi package provides the server daemon for the iSCSI protocol,
as well as the utility programs used to manage it. iSCSI is a protocol
for distributed disk access using SCSI commands sent over Internet
Protocol networks.
[root@rhel5 ~]#

Next we are going to configure the initiator. The iSCSI initiator is composed of two services, iscsi and iscsid; enable them to start at system boot using chkconfig.

[root@rhel5 ~]# chkconfig iscsi on
[root@rhel5 ~]# chkconfig iscsid on
[root@rhel5 ~]#
[root@rhel5 ~]# chkconfig --list | grep iscsi
iscsi           0:off   1:off   2:on    3:on    4:on    5:on    6:off
iscsid          0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@rhel5 ~]#
[root@rhel5 ~]#

Once iSCSI is configured start the service.

[root@rhel5 ~]# service iscsi start
iscsid is stopped
Starting iSCSI daemon:                                     [  OK  ]
                                                           [  OK  ]
Setting up iSCSI targets: iscsiadm: No records found!
                                                           [  OK  ]
[root@rhel5 ~]#
[root@rhel5 ~]# service iscsi status
iscsid (pid  14170) is running...
[root@rhel5 ~]#

From the P4000 CMC we need to add the server to the management group configuration like we would do with any other server.

The server iqn can be found in the file /etc/iscsi/initiatorname.iscsi.

[root@cl-node1 ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:2551bf29b48
[root@cl-node1 ~]#

Create any iSCSI volumes you need in the P4000 arrays and assign them to the Red Hat system. Then, to discover the presented LUNs, run the iscsiadm command from the Linux server.

[root@rhel5 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.126.60
192.168.126.60:3260,1 iqn.2003-10.com.lefthandnetworks:mlab:62:lv-rhel01
[root@rhel5 ~]#

Restart the iSCSI initiator to make the new block device available to the operating system.

[root@rhel5 ~]# service iscsi restart
Stopping iSCSI daemon:
iscsid dead but pid file exists                            [  OK  ]
Starting iSCSI daemon:                                     [  OK  ]
                                                           [  OK  ]
Setting up iSCSI targets: Logging in to [iface: default, target: iqn.2003-10.com.lefthandnetworks:mlab:62:lv-rhel01, portal: 192.168.126.60,3260]
Login to [iface: default, target: iqn.2003-10.com.lefthandnetworks:mlab:62:lv-rhel01, portal: 192.168.126.60,3260]: successful
                                                           [  OK  ]
[root@rhel5 ~]#

Then check that the new disk is available; I used lsscsi, but fdisk -l will do the trick too.

[root@rhel5 ~]# lsscsi
[0:0:0:0]    disk    VMware,  VMware Virtual S 1.0   /dev/sda
[2:0:0:0]    disk    LEFTHAND iSCSIDisk        9000  /dev/sdb
[root@rhel5 ~]#
[root@rhel5 ~]# fdisk -l /dev/sdb 

Disk /dev/sdb: 156.7 GB, 156766306304 bytes
255 heads, 63 sectors/track, 19059 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdb doesn't contain a valid partition table
[root@rhel5 ~]#

At this point the iSCSI configuration is done; the new LUNs will remain available across system reboots as long as the iSCSI service is enabled.
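
If a discovered target does not log in automatically after a reboot, its record can be marked as automatic with iscsiadm; a sketch, using the target and portal discovered above:

iscsiadm -m node -T iqn.2003-10.com.lefthandnetworks:mlab:62:lv-rhel01 -p 192.168.126.60 --op update -n node.startup -v automatic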

Juanma.

What I like the most about PowerCLI is the power it gives you: instead of relying on a GUI every second, you just issue a couple of commands and everything gets done. Of course thin provisioning is no exception, and below is the way to convert a VM hard disk from thin to thick and vice versa.

The first task is to get the type of the hard disks. You can use the vSphere Client to check the format of a disk: just edit the VM settings and look at the hard disk properties.

However, using the vSphere Client is slow since you can only check one VM at a time; fortunately for us PowerCLI offers a clean and elegant way to retrieve the type of every disk of every virtual machine.
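
A minimal sketch of that one-liner, assuming a PowerCLI version whose HardDisk objects expose the StorageFormat property:

[vSphere PowerCLI] C:\> Get-VM | Get-HardDisk | Select-Object Parent, Name, StorageFormat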

Next we are going to convert the hard disk of the debian-vm1 virtual machine from thin to thick. To do so we are going to use the Set-HardDisk cmdlet.
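
If I remember correctly, the PowerCLI releases of that era exposed an -Inflate switch on Set-HardDisk for this; treat the following as a sketch and verify it against your PowerCLI version:

[vSphere PowerCLI] C:\> Get-HardDisk -VM debian-vm1 | Set-HardDisk -Inflate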

Now that the disk is converted, how can we revert it using only PowerCLI? The way to do it is to move the disk to another datastore and convert it during the process.

The cmdlet to use is again Set-HardDisk, this time with the -Datastore and -StorageFormat options. Take into account that these options can only be used when you are connected to the vCenter Server; you can't do this when directly connected to the ESX(i) host.
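
A sketch of that move-and-convert operation, where datastore2 is just a hypothetical destination datastore name:

[vSphere PowerCLI] C:\> Get-HardDisk -VM debian-vm1 | Set-HardDisk -Datastore datastore2 -StorageFormat Thin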

Any other methods to execute the same operation are always welcome so please comment :-)

Juanma.

Welcome to the third post of the Virtual Connect series!

The first two posts were about initial VC Domain creation and the Network setup. In this one I’ll explain the storage configuration of the Domain using the Fibre Channel Setup Wizard.

Please take into account that iSCSI is supported with Virtual Connect since version 3.10 of Virtual Connect Manager, and only with the Flex-10 and FlexFabric modules. However, I'm going to leave the iSCSI configuration for a future post, since I didn't have many opportunities to try it with VC, and write only about Fibre Channel.

Before we start with the wizard and all the setup tasks it is important to explain the Virtual Connect storage fundamentals.

The first concept to understand is the set of key Fibre Channel port types. There are three basic FC port types:

  • N_Port (Node Port) – An N_Port is a port within a node that provides Fibre Channel attachment, like an HBA port. VC-FC module uplink ports are N_Ports.
  • F_Port (Fabric Port) – This is a port on an FC switch connected to an N_Port and addressable by it. These are commonly used in edge or core switches. The VC-FC module's downlink ports are F_Ports in order to allow the HBAs to log into them.
  • E_Port (Expansion Port) – These are switch ports used for switch-to-switch connections, known as Inter Switch Links or ISLs.

Additionally there are two other ports, however these ports are not typically seen in Virtual Connect environments.

  • NL_Port (Node Loop Port) – An N_port capable of Arbitrated Loop function.
  • FL_Port (Fabric Loop Port) – An F_port capable of Arbitrated Loop function.

The next key concept to understand is N_Port ID Virtualization, or NPIV. It is a T11 FC standard that can be defined as a Fibre Channel facility allowing multiple N_Port_IDs to be assigned to a single N_Port; that is, a physical N_Port having multiple port WWNs. Of course the VC-FC module must be connected to a Fibre Channel switch that supports NPIV.

And how does Virtual Connect manage all this port stuff? I believe an image is worth a thousand words, so the diagrams below illustrate how FC ports and the SAN are managed with and without Virtual Connect.

As can be seen, the SAN switches that can be used in any blade enclosure, including the HP ones, like the Cisco MDS 9124e, are part of the SAN fabric; that means the enclosure itself is part of the fabric. These switches are connected to the SAN core via E_Ports or ISLs.

In this configuration the SAN boundary has been moved out of the enclosure. The VC-FC module includes an HBA Aggregator which is an NPIV device. It passes, transparently, the signals from multiple HBAs to a single switch port.

Here it is how the whole process would go:

  1. The VC-FC module uplink port would issue a Login Request, an FLOGI, to the SAN and advertise itself as an NPIV-capable port.
  2. Upon receiving an ACCept from the fabric it would begin to process server requests.
  3. The server HBAs would begin the normal fabric login process with their WWNs.
  4. The VC-FC module would translate the FLOGI requests into FDISC requests, since only one FLOGI is allowed per physical N_Port.
  5. The SAN switch would reply with an ACCept and provide the HBAs with fabric addresses.
  6. The ACCept frames would reach the HBAs uninterrupted.
  7. From then on all the traffic will be carried over the same link for all HBA connections.

Now that the basic concepts are explained and, hopefully, clear, it's time to configure the storage.

We are going to use the Fibre Channel Setup Wizard to:

  • Identify the World Wide Names (WWNs) to be used by the servers.
  • Define the available SAN fabrics.

You can launch the wizard either from the Tools menu in the Virtual Connect page or right after finishing the Network Setup Wizard. From the welcome screen click Next and move into the World Wide Name (WWN) Settings page.

On this first page you can specify whether you want to use the WWN settings that come with the Fibre Channel HBA card or the HP Virtual Connect supplied WWN settings.

Virtual Connect will assign both a port WWN and a node WWN to a Fibre Channel port, the node WWN will always be the same as the port WWN incremented by one.

There is a key advantage in configuring Virtual Connect to assign the WWNs: since it maintains a consistent storage identity, it allows blades to be replaced in case of failure without affecting the external SAN.

In the wizard select Virtual Connect assigned WWNs and click Next to move into the Assigned WWNs screen.

This screen is very similar to the MAC address range selection screen we saw in the previous post. Here you have to choose between a user-defined WWN range and an HP pre-defined one. You must ensure that the selected range is unique within the environment.

Next we are going to define the fabric; first you'll be presented with a screen asking if you want to define one.

After that we have to enter the Fabric name, assign the uplink ports and configure the speed.

After applying the configuration the wizard will move to the next screen, where it will ask if you want to create more fabrics; for the purposes of the example I decided to create another one named fabric_prod2.

When you are done with the second fabric, finish the wizard and the storage setup will be complete. You can review and modify the configuration from the Virtual Connect main interface.

The next post will be the last of the series and will discuss Virtual Connect Server Profiles. As always, any feedback is welcome :-)

Juanma.

During a previous project I had the opportunity to work very closely with the EMC people and Symmetrix arrays; in fact I got a couple of very good friends from that project. At the time I created a bunch of text files for my own reference about the EMC SRDF and Timefinder technologies.

Today I decided to review those files, give them some order (well, sort of) and put them here as a survival guide/quick reference in the hope that they will be of help to any of you. The first of these guides will be about EMC Symmetrix Timefinder.

I don't have sample output for every command, as it has been more than a year since I last worked with Timefinder; to complement my own samples I took several outputs from the Timefinder manuals.

This is not a complete Timefinder usage guide, just my personal notes taken from my direct experience with the product.

Timefinder Basics

EMC Timefinder is a replication solution that creates full volume copies. For the full-HP guys out there this is very similar to the XP or EVA Business Copy product.

There are two basic types of replication:

  • Timefinder/Clone – Creates point-in-time copies.
  • Timefinder/Snap – Creates pointer-based replicas (snapshots); only the changed data is written.

There are several optional components.

  • Timefinder/Mirror.
  • Timefinder/CG (Consistency Groups)
  • Timefinder/EIM (Exchange Integration Modules)
  • Timefinder/SIM (SQL Integration Modules)

Timefinder allows retaining multiple copies at different checkpoints for lowered RPO and RTO.

Symcli basics

Following is a list of the most basic symcli commands necessary to find your way around when you perform any Symmetrix task, including Timefinder.

- Get the list of the Symmetrix devices

root:/# symdev list
Symmetrix ID: 00029xxxxxxx
        Device Name           Directors                  Device
--------------------------- ------------- -------------------------------------
                                                                           Cap
Sym  Physical               SA :P DA :IT  Config        Attribute Sts      (MB)
--------------------------- ------------- -------------------------------------
0000 Not Visible            ???:? 01A:C0  BCV           Asst'd    RW       8632
0001 Not Visible            ???:? 16C:D0  BCV           Asst'd    RW       8632
0002 Not Visible            ???:? 01B:D0  BCV           Asst'd    RW       8632
0003 Not Visible            ???:? 16D:C0  BCV           Asst'd    RW       8632
…………………………………………………………………………………………………………………………………………………………………………………………………………………
0048 Not Visible            ???:? ???:??  VDEV          N/Grp'd   RW       8632
0049 Not Visible            ???:? ???:??  VDEV          N/Grp'd   RW       8632
004A Not Visible            ???:? ???:??  VDEV          N/Grp'd   RW       8632
004B Not Visible            ???:? ???:??  VDEV          N/Grp'd   NR       8632
004C Not Visible            ???:? ???:??  VDEV          N/Grp'd   NR       8632
004D Not Visible            ???:? ???:??  VDEV          N/Grp'd   NR       8632
004E Not Visible            ???:? ???:??  VDEV          N/Grp'd   NR       8632
004F Not Visible            ???:? ???:??  VDEV          N/Grp'd   NR       8632
0050 Not Visible            ???:? ???:??  VDEV          N/Grp'd   NR       8632
0051 Not Visible            ???:? ???:??  VDEV          N/Grp'd   NR       8632
0052 Not Visible            ???:? 16B:D1  2-Way Mir     N/A (SV)  RW       8632
0053 Not Visible            ???:? 01C:C0  2-Way Mir     N/A (SV)  RW       8632
0054 Not Visible            ???:? 16B:C0  2-Way Mir     N/A (SV)  RW       8632
…………………………………………………………………………………………………………………………………………………………………………………………………………………

- List all available devices from a device group

root:/# symld -g dg_oradev_01 list

- List host physical devices

root:/# sympd list

- List the disk groups:

root:/# /usr/symcli/bin/symdg list

                       D E V I C E      G R O U P S

                                                          Number of
 Name               Type     Valid  Symmetrix ID  Devs   GKs  BCVs  VDEVs

 dg_oracle_prod1    REGULAR  Yes    00029xxxxxxx    26     0    26      0
 dg_oracle_prod2    REGULAR  Yes    00029xxxxxxx    21     0    21      0
 dg_rac_01          RDF1     Yes    00029xxxxxxx    23     0    23      0
 dg_clvx_01         RDF1     Yes    00029xxxxxxx     5     0     5      0
 dg_oradev_01       REGULAR  Yes    00029xxxxxxx     3     0     0      0
 dg_timetest_02     RDF1     Yes    00029xxxxxxx    16     0    16      0
 grupo1             RDF1     Yes    00029xxxxxxx    22     0     0      0

root:/#

- Add devices to a disk group

  • Add physical devices
root:/# symld -g dg_oradev_01 add pd /dev/dsk/c2t4d12
  • Add Symmetrix devices
root:/# symld -g dg_oradev_01 add 006E

- Get diskgroup detailed info.

root:/# /usr/symcli/bin/symdg show dg_prod_01

Group Name:  dg_prod_01

    Group Type                                   : RDF1     (RDFA)
    Device Group in GNS                          : No
    Valid                                        : Yes
    Symmetrix ID                                 : 00029xxxxxxx
    Group Creation Time                          : Mon Nov 29 18:49:29 2007
    Vendor ID                                    : EMC Corp
    Application ID                               : ECC

    Number of STD Devices in Group               :    2
    Number of Associated GK's                    :    0
    Number of Locally-associated BCV's           :    2
    Number of Locally-associated VDEV's          :    0
    Number of Remotely-associated VDEV's(STD RDF):    0
    Number of Remotely-associated BCV's (STD RDF):    0
    Number of Remotely-associated BCV's (BCV RDF):    0
    Number of Remotely-assoc'd RBCV's (RBCV RDF) :    0

    Standard (STD) Devices (2):
        {
        --------------------------------------------------------------------
        Sym               Cap
        LdevName              PdevName                Dev  Att. Sts     (MB)
        --------------------------------------------------------------------
        DEV001                N/A                     01C8      RW      8714
        DEV002                N/A                     01C9      RW      8714
        }

    BCV Devices Locally-associated (2):
        {
        --------------------------------------------------------------------
        Sym               Cap
        LdevName              PdevName                Dev  Att. Sts     (MB)
        --------------------------------------------------------------------
        BCV001                N/A                     08A8      RW      8714
        BCV002                N/A                     08A9      RW      8714
        }

    Device Group RDF Information
        {
        RDF Type                               : R1
        RDF (RA) Group Number                  : 2               (01)

        Remote Symmetrix ID                    : 000287xxxxxx

        R2 Device Is Larger Than The R1 Device : False

        RDF Pair Configuration                 : Normal
        RDF STAR Mode                          : False

        RDF Mode                               : Synchronous
        RDF Adaptive Copy                      : Disabled
        RDF Adaptive Copy Write Pending State  : N/A
        RDF Adaptive Copy Skew (Tracks)        : 32767

        RDF Device Domino                      : Disabled

        RDF Link Configuration                 : Fibre
        RDF Link Domino                        : Disabled
        Prevent Automatic RDF Link Recovery    : Disabled
        Prevent RAs Online Upon Power ON       : Enabled

        Device RDF Status                      : Ready           (RW)

        Device RA Status                       : Ready           (RW)
        Device Link Status                     : Ready           (RW)

        Device Suspend State                   : N/A
        Device Consistency State               : Disabled
        RDF R2 Not Ready If Invalid            : Disabled

        Device RDF State                       : Ready           (RW)
        Remote Device RDF State                : Write Disabled  (WD)

        RDF Pair State (  R1 <===> R2 )        : Synchronized

        Number of R1 Invalid Tracks            : 0
        Number of R2 Invalid Tracks            : 0

        RDFA Information:
            {
            Session Number                     : 1
            Cycle Number                       : 0
            Number of Devices in the Session   : 491
            Session Status                     : Inactive

            Session Consistency State          : N/A
            Minimum Cycle Time                 : 00:00:30
            Average Cycle Time                 : 00:00:00
            Duration of Last cycle             : 00:00:00
            Session Priority                   : 33

            Tracks not Committed to the R2 Side: 0
            Time that R2 is behind R1          : 00:00:00
            R1 Side Percent Cache In Use       :  0
            R2 Side Percent Cache In Use       :  0

            }
        }

root:/#

Timefinder commands

- Associate BCVs to a device group. There are two ways:

root:/# symbcv -sid xxxx -g dg_oradev_01 associate dev 0001
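
The second form, by analogy with the symld example above, associates the BCV by its physical device name; treat the exact syntax as an assumption to verify against your Solutions Enabler documentation, and note that the device path below is just an illustrative one:

root:/# symbcv -sid xxxx -g dg_oradev_01 associate pd /dev/dsk/c2t4d13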

- Establish the mirrors

root:/# symmir -g dg_oradev_01 -full establish DEV001 BCV001

- Split operations.

root:/# symmir -g dg_oradev_01 split

There are several additional split modes and/or modifiers.

  • Instant
root:/# symmir -g dg_oradev_01 split -instant
  • Force
root:/# symmir -g dg_oradev_01 split -force
  • Differential
root:/# symmir -g dg_oradev_01 split -differential
  • Reverse
root/# symmir -g dg_oradev_01 reverse split
  • Reverse differential
root:/# symmir -g dg_oradev_01 reverse split -differential

- Restore the BCV mirrors. The restore operation will copy the data from the BCV to the Standard device.

  • Differential restore
root:/# symmir -g dg_oradev_01 restore
  • Full restore
root:/# symmir -g dg_oradev_01 -full restore

- Reestablish operations. It is very important to tell the difference between Restore and Reestablish. Reestablish will do a differential update from the Standard device to the BCV device.

root:/# symmir -g dg_oradev_01 establish

- Get the list of BCV devices

root:/# symbcv list

Symmetrix ID: 00029xxxxxxx
              BCV Device                   Standard Device          Status
------------------------------------ --------------------------- ------------
                              Inv.                        Inv.
Physical         Sym RDF Att. Tracks Physical        Sym  Tracks BCV <=> STD
------------------------------------ --------------------------- ------------
Not Visible      0030    (M)       0 N/A             N/A       0 NeverEstab
Not Visible      0031    (m)       - N/A             N/A       - NeverEstab
……………………………………………………………………………………………………………………………………………………………………………………………………………
c4t1d0s2         0088              0 c4t0d0s2        0084      0 Split
c4t1d1s2         0089              0 c4t0d1s2        0085      0 Split
c4t1d2s2         008A              0 c4t0d2s2        0086      0 Split
c4t1d3s2         008B              0 c4t0d3s2        0087      0 Split

- Get the state of mirroring of the device pairs within a device group

root:/# /usr/symcli/bin/symmir -g dg_oracle_prod_01 query

Device Group (DG) Name: dg_oracle_prod_01
DG's Type             : RDF1
DG's Symmetrix ID     : 00029xxxxxxx

     Standard Device                    BCV Device                  State
-------------------------- ------------------------------------- ------------
                    Inv.                                  Inv.
Logical        Sym  Tracks Logical              Sym       Tracks STD <=> BCV
-------------------------- ------------------------------------- ------------

DEV001         0184      0 BCV001               039C *         0 Split
DEV002         0186      0 BCV002               039E *         0 Split
DEV003         0187      0 BCV003               039F *         0 Split
DEV004         0188      0 BCV004               03A0 *         0 Split
DEV005         0189      0 BCV005               03A1 *         0 Split
DEV006         018E      0 BCV006               03A6 *         0 Split
DEV007         018F      0 BCV007               03A7 *         0 Split
DEV008         0190      0 BCV008               03A8 *         0 Split
DEV009         0191      0 BCV009               03A9 *         0 Split
DEV010         01C7      0 BCV010               08A7 *         0 Split
DEV011         01CD      0 BCV011               08AA *         0 Split

Total              -------                               -------
  Track(s)               0                                     0
  MB(s)                0.0                                   0.0

Legend:

(*): The paired BCV device is associated with this group.

root:/#

- List all BCV sessions in a Symmetrix array

root:/# symmir list -sid xxxx

Symmetrix ID: 00000000xxxx

  Standard Device       BCV Device              State
-------------------- ----------------------- --------------
           Invalid             Invalid   GBE
Sym        Tracks     Sym      Tracks         STD <=> BCV
-------------------- ----------------------- --------------
002B               0 0E0B            0   ... Synchronized
002E               0 0E00            0   ..X Synchronized
002E               0 0E0A            0   ... Synchronized
0032               0 0E0F            0   ... Split
00FF               0 00FD            0   ... Split
0DF5               0 0DA5            0   ..X Synchronized
0DF5               0 0DA4            0   ..X Synchronized
0F70               0 001B         3592   X.. SyncInProg
0F71               0 001C         4496   X.. SyncInProg
0F93               0 0DF9            0   ..X Split
1015               0 1069            0   X.. Synchronized

Total       --------          --------
  Tracks           0              8088
  MB(s)          0.0             505.5

And we are done. As I said this is not a full guide, so if there is anything that you don't get please leave a comment and I will try to clarify. Also, if any of you have additional tips or "recipes" for Timefinder please comment :-)

Juanma.

This post will outline the necessary steps to create a standard (non-multisite) HP P4000 cluster with two nodes. Creating a two-node cluster is a very similar process to the one-node cluster described in my first post about P4000 systems.

The cluster is composed of:

  • 2 HP P4000 Virtual Storage Appliances
  • 1 HP P4000 Failover Manager

The Failover Manager, or FOM, is a specialized version of the SAN/iQ software. It runs as a virtual appliance in VMware; though the most common situation is to run it on an ESX/ESXi server, running it under VMware Player or Server is also supported.

The FOM integrates into a management group as a real manager and is intended only to provide quorum to the cluster; one of its main purposes is to provide quorum in multi-site clusters. I decided to use it in this post to provide an example as real as possible.

To set up this cluster I used virtual machines inside VMware Workstation, but the same design can also be created with physical servers and P4000 storage systems.

From the Getting started screen launch the clusters wizard.

Select the two P4000 storage systems and enter the name of the Management Group.

During the group creation the wizard will ask you to create a cluster; choose the two nodes as members of the cluster (we will add the FOM later) and assign a name to the cluster.

Next assign a virtual IP address to the cluster.

Enter the administrative level credentials for the cluster.

Finally the wizard will ask if you want to create volumes in the cluster; I didn't take that option and finished the cluster creation process. You can also add the volumes later, as I described in one of my previous posts.

Now that the cluster is formed we are going to add the Failover Manager.

It is important to note that the FOM requires the same configuration as any VSA, as I depicted in my first post about the P4000 storage systems.

In the Central Management Console right-click the FOM and select Add to existing management group.

Select the management group and click Add.

With this operation the cluster configuration is done. If everything went well, in the end you should have a management group containing the two storage nodes, the FOM and the cluster.

Juanma.

Getting the multipathing policy using PowerCLI is a very simple and straightforward process that can be done with a few commands.

I tested this procedure in the past with ESX/ESXi 3.5 and 4.0.

Get the multipathing policy

[vSphere PowerCLI] C:\> $h = get-vmhost esx01.mlab.local
[vSphere PowerCLI] C:\> $hostview = get-view $h.id
[vSphere PowerCLI] C:\> $storage = get-view $hostView.ConfigManager.StorageSystem
[vSphere PowerCLI] C:\> $storage.StorageDeviceInfo.MultipathInfo.lun | select ID,Path,Policy

Id            Path                          Policy
--            ----                          ------
vmhba0:0:0    {vmhba0:0:0}                  VMware.Vim.HostMultipathInfoFixedLogicalUnitPolicy
vmhba1:1:0    {vmhba1:1:0, vmhba1:0:0}      VMware.Vim.HostMultipathInfoFixedLogicalUnitPolicy
vmhba1:0:5    {vmhba1:1:5, vmhba1:0:5}      VMware.Vim.HostMultipathInfoFixedLogicalUnitPolicy
vmhba1:0:1    {vmhba1:1:1, vmhba1:0:1}      VMware.Vim.HostMultipathInfoFixedLogicalUnitPolicy
vmhba1:0:12   {vmhba1:1:12, vmhba1:0:12}    VMware.Vim.HostMultipathInfoFixedLogicalUnitPolicy

Change the policy from fixed to round-robin

We are going to change the policy for the LUN 12.

[vSphere PowerCLI] C:\> $lunId = "vmhba1:0:12"
[vSphere PowerCLI] C:\> $storagepolicy = new-object VMware.Vim.HostMultipathInfoLogicalUnitPolicy
[vSphere PowerCLI] C:\> $storagepolicy.policy = "rr"
[vSphere PowerCLI] C:\> $storage.SetMultipathLunPolicy($lunId, $storagepolicy)

Finally check the new configuration

If you look closely at the last line you will see that the value has changed from VMware.Vim.HostMultipathInfoFixedLogicalUnitPolicy to VMware.Vim.HostMultipathInfoLogicalUnitPolicy.

[vSphere PowerCLI] C:\> $h = get-vmhost "ESXIPAddress"
[vSphere PowerCLI] C:\> $hostview = get-view $h.id
[vSphere PowerCLI] C:\> $storage = get-view $hostView.ConfigManager.StorageSystem
[vSphere PowerCLI] C:\> $storage.StorageDeviceInfo.MultipathInfo.lun | select ID,Path,Policy

Id            Path                          Policy
--            ----                          ------
vmhba0:0:0    {vmhba0:0:0}                  VMware.Vim.HostMultipathInfoFixedLogicalUnitPolicy
vmhba1:1:0    {vmhba1:1:0, vmhba1:0:0}      VMware.Vim.HostMultipathInfoFixedLogicalUnitPolicy
vmhba1:0:5    {vmhba1:1:5, vmhba1:0:5}      VMware.Vim.HostMultipathInfoFixedLogicalUnitPolicy
vmhba1:0:1    {vmhba1:1:1, vmhba1:0:1}      VMware.Vim.HostMultipathInfoFixedLogicalUnitPolicy
vmhba1:0:12   {vmhba1:1:12, vmhba1:0:12}    VMware.Vim.HostMultipathInfoLogicalUnitPolicy

Juanma.

As a small follow-up to yesterday’s post about NFS shares with Openfiler in the following article I will show how to add a new datastore to an ESX server using the vMA and PowerCLI.

- vMA

From the vMA shell we are going to use the command vicfg-nas. To clarify things a bit for the newcomers: vicfg-nas and esxcfg-nas are the same command; in fact esxcfg-nas is no more than a link to the former.

The option to create a new datastore is -a; additionally, the address/hostname of the NFS server, the share and a label for the new datastore must be provided.

[vi-admin@vma ~][esx01.mlab.local]$ vicfg-nas -l
No NAS datastore found
[vi-admin@vma ~][esx01.mlab.local]$ vicfg-nas -a -o openfiler.mlab.local -s /mnt/vg_nfs/lv_nfs01/nfs_datastore1 nfs_datastore1
Connecting to NAS volume: nfs_datastore1
nfs_datastore1 created and connected.
[vi-admin@vma ~][esx01.mlab.local]$

When the operation is done you can check the new datastore with vicfg-nas -l.

[vi-admin@vma ~][esx01.mlab.local]$ vicfg-nas -l
nfs_datastore1 is /mnt/vg_nfs/lv_nfs01/nfs_datastore1 from openfiler.mlab.local mounted
[vi-admin@vma ~][esx01.mlab.local]$

- PowerCLI

In the second part of the post we are going to use vSphere PowerCLI, which as you already know is a PowerShell snap-in to manage vSphere/VI3 infrastructures. I will write more about PowerCLI in the future since I'm very fond of it.

The cmdlet to create the new NFS datastore is New-Datastore, and you must provide the ESX host, the NFS server, the path of the share and a name for the datastore. Then you can check that the new datastore has been properly added with Get-Datastore.
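
A minimal sketch, reusing the same Openfiler share as the vMA example above:

[vSphere PowerCLI] C:\> New-Datastore -Nfs -VMHost esx01.mlab.local -Name nfs_datastore1 -NfsHost openfiler.mlab.local -Path /mnt/vg_nfs/lv_nfs01/nfs_datastore1
[vSphere PowerCLI] C:\> Get-Datastore -Name nfs_datastore1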

Juanma.

Even if you have access to enterprise-class storage appliances, like the HP P4000 VSA or the EMC Celerra VSA, an Openfiler storage appliance can be a great asset to your homelab, especially if you, like myself, run an "all virtual" homelab within VMware Workstation, since Openfiler is far less resource hungry than its enterprise counterparts.

Simon Seagrave (@Kiwi_Si) from TechHead.co.uk wrote an excellent article explaining how to add iSCSI LUNs from an Openfiler instance to your ESX/ESXi servers; if iSCSI is your "thing" you should check it out.

In this article I’ll explain how-to configure a NFS share in Openfiler and then add it as a datastore to your vSphere servers. I’ll take for granted that you already have an Openfiler server up and running.

1 – Enable NFS service

As always, point your browser to https://<openfiler_address>:446, log in, and from the main screen go to the Services tab and enable the NFSv3 service.

2 – Setup network access

From the System tab add the network of the ESX servers as authorized. I added the whole network segment, but you can also create network access rules per host in order to set up a more secure and granular access policy.

3 – Create the volumes

The next step is to create the volumes we are going to use as the base for the NFS shares. If, like me, you're a Unix/Linux geek, you surely understand the PV -> VG -> LV concepts perfectly; if not, I strongly recommend checking the TechHead article mentioned above, where Simon explains it very well, or, if you want to go a little deeper with volumes in Unix/Linux, my article about volume and filesystem basics in Linux and HP-UX.

First we need to create the physical volumes; go to the Volumes tab, enter the Block Devices section and edit the disk to be used for the volumes.

Create a partition and set the type to Physical Volume.

Once the Physical Volume is created go to the Volume Groups section and create a new VG and use for it the new PV.

Finally click on Add Volume. In this section you will have to choose the new VG that will contain the new volume, the size, name, description and, more importantly, the Filesystem/Volume Type. There are three types:

  • iSCSI
  • XFS
  • Ext3

The first is obviously intended for iSCSI volumes and the other two for NFS; the criterion to follow here is scalability, since ext3 supports up to 8TB and XFS up to 10TB.

Click Create and the new volume will be created.

4 – Create the NFS share

Go to the Shares tab, there you will find the new volume as an available share.

Just to clarify concepts, this volume IS NOT the real NFS share. We are going to create a folder in the volume and share that folder through NFS with our ESX/ESXi servers.

Click into the volume name and in the pop-up enter the name of the folder and click Create folder.

Select the folder and in the pop-up click the Make Share button.

Finally we are going to configure the newly created share; select the share to enter its configuration area.

Edit the share data to suit your needs and select the Access Control Mode. Two modes are available:

  • Public guest access – There is no user based authentication.
  • Controlled access – The authentication is defined in the Accounts section.

Since this is only for my homelab I chose Public guest access.

Next select the share type; for our case I obviously chose NFS and set the permissions to Read-Write.

You can also edit the NFS options and configure them to suit your personal preferences and/or specifications.

Just a final tip for the non-Unix people: if you want to check the NFS share, open an SSH session with the Openfiler server and as root issue the command showmount -e. The output should list the exported folder.

The Openfiler configuration is done, now we are going to create a new datastore in our ESX servers.

5 – Add the datastore to the ESX servers

Now that the share is created and configured it is time to add it to our ESX servers.

As usual, from the vSphere Client go to Configuration -> Storage -> Add Storage.

In the pop-up window choose Network File System.

Enter the Server, Folder and Datastore Name values.

Finally, check the data and click Finish. If everything goes well, after a few seconds the new datastore should appear.

And with this we are finished. If you see any mistake or have anything to add please comment :-)

Juanma.

As I explained in my first post about the SAN/iQ command line, to remotely manage a P4000 storage array instead of providing the username/password credentials in every command you can specify an encrypted file which contains the user/password information.

To create this file, known as the key file, just use the createKey command and provide the username, password, array IP address or DNS name and the name of the file.
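
From memory the syntax is along these lines; take it as a sketch, since parameter names may vary slightly between SAN/iQ versions, and the address 192.168.1.20 is just a placeholder for your own array:

cliq createKey userName=admin passWord=secret login=192.168.1.20 keyName=vsa1.key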

By default the key file is created in the user’s home directory, c:\Documents and Settings\<username> in Windows XP/2003 and C:\Users\<username> in Windows Vista/2008/7.

The file can also be stored in a secure location on the local network, in that case the full path to the key file must be provided.

Of course the main reason to create a key file, apart from easing the daily management, is to provide a valid authentication mechanism for any automation script that you create using the cliq.

Juanma.