Archives For ESX

One of the features I like most about esxtop/resxtop is the ability to create customized configurations. This feature lets you keep several pre-defined configuration files to use under certain circumstances; for example, you can have one just to check whether any virtual machines are swapping during a heavy workload period.

This post covers esxtop 4.x, the version that ships with vSphere 4.x, although most of it applies to previous versions as well. First, it's important to know that by default esxtop/resxtop stores its configuration in the file .esxtop4rc; in the vMA this file lives in the vi-admin user's home directory, and on the ESX(i) servers it lives in root's home directory.

Now let's create one as an example. I'm using resxtop from the vMA, so first launch it against the vCenter Server and select one of the ESX(i) hosts.

Now you should see the default esxtop screen. We are going to create a configuration that shows only some of the memory-related counters.

Show the memory screen by pressing m and from there press f to edit the fields to display.

Press the corresponding keys to enable/disable the fields, and the a or o keys to toggle their order, then press the space bar to finish. esxtop returns to the memory view and shows the newly selected counters.

At this point you can customize the fields to display in the other views (CPU, network, etc.). When you are done press W to save the config and enter the file name to save the new config in. If you don't enter a file name esxtop will save the changes to its default config file, /home/vi-admin/.esxtop4rc in this example.

Exit esxtop and run it again, this time loading the saved config file instead of the default one by using -c <config_file>.
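From the vMA the invocation would look something like this. This is a sketch using the hostnames of my lab and an assumed config file name (~/.esxtop4_mem); adjust them to your environment:

```
# Run resxtop against a host through vCenter, loading the custom config
# instead of the default ~/.esxtop4rc
resxtop --server vcenter1.vjlab.local --vihost esxi06.vjlab.local -c ~/.esxtop4_mem
```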

Finally, my advice is to read carefully the Interpreting esxtop 4.1 Statistics document and use the counters that best suit your needs.


Here are two quick ways to check the driver version of a network interface card. The commands must be executed in the ESX COS or ESX(i) Tech Support Mode.

  • vmkload_mod

vmkload_mod is a tool to manage VMkernel modules. It can be used to load and unload modules, list the loaded modules, and get general information and the available parameters of each module.

~ # vmkload_mod -s bnx2x | grep Version
 Version: Version 1.54.1.v41.1-1vmw, Build: 260247, Interface: ddi_9_1 Built on: May 18 2010
~ #
  • ethtool

ethtool is a Linux command-line tool that allows us to retrieve and modify the parameters of an Ethernet device. It is present in the vast majority of Linux systems, including the ESX Service Console. Fortunately for us, VMware has also included it within the busybox environment of ESXi.

~ # ethtool -i vmnic0
driver: bnx2x
version: 1.54.1.v41.1-1vmw
firmware-version: BC:5.2.7 PHY:baa0:0105
bus-info: 0000:02:00.0
~ #
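If you need to check several NICs at once, a quick shell loop over ethtool does the trick. This is just a sketch assuming the host has vmnic0 and vmnic1; adjust the list to your hardware:

```
# Print driver information for each NIC in the list
for nic in vmnic0 vmnic1; do
    echo "== ${nic} =="
    ethtool -i ${nic}
done
```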

If you want to use PowerCLI for this task you should check Julian Wood's (@julian_wood) excellent post about it.


Anyone with some experience and knowledge of VMware HA knows how to perform a Reconfigure for HA operation on a host from the vSphere Client, and I'm no exception to that rule. However, I had never done it with PowerCLI.

I created a new cluster in my homelab and found a problem in one of the hosts. I fixed the problem, put my mind to work, and after an hour or so digging through PowerCLI and the vSphere API Reference Documentation I came up with the following easy way to do it.

First we are going to create a variable that contains the configuration of the ESXi host I want to reconfigure.

[vSphere PowerCLI] % $vmhost = Get-VMHost esxi06.vjlab.local
[vSphere PowerCLI] %
[vSphere PowerCLI] % $vmhost | Format-List

State                 : Connected
ConnectionState       : Connected
PowerState            : PoweredOn
VMSwapfileDatastoreId :
VMSwapfilePolicy      : Inherit
ParentId              : ClusterComputeResource-domain-c121
IsStandalone          : False
Manufacturer          : VMware, Inc.
Model                 : VMware Virtual Platform
NumCpu                : 2
CpuTotalMhz           : 5670
CpuUsageMhz           : 869
MemoryTotalMB         : 2299
MemoryUsageMB         : 868
ProcessorType         : Intel(R) Core(TM)2 Quad CPU    Q9550  @ 2.83GHz
HyperthreadingActive  : False
TimeZone              : UTC
Version               : 4.1.0
Build                 : 260247
Parent                : cluster3
VMSwapfileDatastore   :
StorageInfo           : HostStorageSystem-storageSystem-143
NetworkInfo           : esxi06:vjlab.local
DiagnosticPartition   : mpx.vmhba1:C0:T0:L0
FirewallDefaultPolicy :
ApiVersion            : 4.1
CustomFields          : {[com.hp.proliant, ]}
ExtensionData         : VMware.Vim.HostSystem
Id                    : HostSystem-host-143
Name                  : esxi06.vjlab.local
Uid                   : /VIServer=administrator@vcenter1.vjlab.local:443/VMHost=HostSystem-host-143/

[vSphere PowerCLI] %

Next, with the Get-View cmdlet, I retrieved the .NET view object for the host ID and stored it in another variable.

[vSphere PowerCLI] % Get-View $vmhost.Id

Runtime             : VMware.Vim.HostRuntimeInfo
Summary             : VMware.Vim.HostListSummary
Hardware            : VMware.Vim.HostHardwareInfo
Capability          : VMware.Vim.HostCapability
ConfigManager       : VMware.Vim.HostConfigManager
Config              : VMware.Vim.HostConfigInfo
Vm                  : {}
Datastore           : {Datastore-datastore-144}
Network             : {Network-network-11}
DatastoreBrowser    : HostDatastoreBrowser-datastoreBrowser-host-143
SystemResources     : VMware.Vim.HostSystemResourceInfo
Parent              : ClusterComputeResource-domain-c121
CustomValue         : {}
OverallStatus       : red
ConfigStatus        : red
ConfigIssue         : {0}
EffectiveRole       : {-1}
Permission          : {}
Name                : esxi06.vjlab.local
DisabledMethod      : {ExitMaintenanceMode_Task, PowerUpHostFromStandBy_Task, ReconnectHost_Task}
RecentTask          : {}
DeclaredAlarmState  : {,,,}
TriggeredAlarmState : {}
AlarmActionsEnabled : True
Tag                 : {}
Value               : {}
AvailableField      : {com.hp.proliant}
MoRef               : HostSystem-host-143
Client              : VMware.Vim.VimClient

[vSphere PowerCLI] % $esxha = Get-View $vmhost.Id

Now through the $esxha variable I invoked the ReconfigureHostForDAS method to reconfigure the ESXi host. This method is part of the HostSystem object and its description can be found in the vSphere API reference.
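The invocation itself is a one-liner on the view object. ReconfigureHostForDAS_Task is the asynchronous variant documented in the API reference; the .NET view also exposes a synchronous ReconfigureHostForDAS wrapper that blocks until the task completes:

```
[vSphere PowerCLI] % $esxha.ReconfigureHostForDAS_Task()
```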

While it runs, the task is displayed in the vSphere Client. You can also monitor the operation with the Get-Task cmdlet.

Finally I created the below script to simplify things in the future :-)

# Reconfigure-VMHostHA.ps1
# PowerCLI script to reconfigure a VM Host for VMware HA
# Juan Manuel Rey - juanmanuel (dot) reyportal (at) gmail (dot) com
# Usage: Reconfigure-VMHostHA.ps1 <hostname>

param([string]$esx)

$vmhost = Get-VMHost $esx
$esxha = Get-View $vmhost.Id
$esxha.ReconfigureHostForDAS_Task()


Today while I was setting up a new vCloud lab at home I noticed that by mistake I had added one of the ESXi hosts to the wrong cluster, in the wrong datacenter.

To be honest, correcting this is not a big deal: just put the host in maintenance mode, move it out of the cluster and into the correct datacenter. With the vSphere Client it can be done with a couple of clicks and a simple drag and drop. But my mistake gave me the opportunity to correct it using PowerCLI and write this small but hopefully useful blog post.

To explain the scenario a bit: I currently have two datacenters in my homelab, one for my day-to-day tests and labs and another one for vCloud Director.

1 – Put the host in maintenance mode.

To do so we're going to use the Set-VMHost cmdlet.

[vSphere PowerCLI] % Set-VMHost -VMHost vcloud-esxi1.vjlab.local -State "Maintenance"

Name            ConnectionState PowerState      Id CpuUsage CpuTotal  Memory  Memory
                                                        Mhz      Mhz UsageMB TotalMB
----            --------------- ----------      -- -------- -------- ------- -------
vcloud-esxi1... Maintenance     PoweredOn  ...t-88      126     5670     873    3071

[vSphere PowerCLI] %

2 – Move the host out of the cluster.

To perform this use the Move-VMHost cmdlet.

[vSphere PowerCLI] % Move-VMHost -VMHost vcloud-esxi1.vjlab.local -Destination vjlab-dc

Name            ConnectionState PowerState      Id CpuUsage CpuTotal  Memory  Memory
                                                        Mhz      Mhz UsageMB TotalMB
----            --------------- ----------      -- -------- -------- ------- -------
vcloud-esxi1... Maintenance     PoweredOn  ...t-88       92     5670     870    3071

[vSphere PowerCLI] %

If you check the vSphere Client now, you will see the host out of the cluster but still in the same datacenter.

3 – Move the host to the correct datacenter.

Now that our host is in maintenance mode and out of the cluster it is time to move it to the correct datacenter. Again we will use Move-VMHost.

[vSphere PowerCLI] % Move-VMHost -VMHost vcloud-esxi1.vjlab.local -Destination vjlab-vcloud -Verbose
VERBOSE: 03/02/2011 22:30:39 Move-VMHost Started execution
VERBOSE: Move host 'vcloud-esxi1.vjlab.local' into 'vjlab-vcloud'.
VERBOSE: 03/02/2011 22:30:41 Move-VMHost Finished execution

Name            ConnectionState PowerState      Id CpuUsage CpuTotal  Memory  Memory
                                                        Mhz      Mhz UsageMB TotalMB
----            --------------- ----------      -- -------- -------- ------- -------
vcloud-esxi1... Maintenance     PoweredOn  ...t-88       63     5670     870    3071

[vSphere PowerCLI] %

Finally put the ESXi out of maintenance mode.

[vSphere PowerCLI] % Set-VMHost -VMHost vcloud-esxi1.vjlab.local -State Connected

Name            ConnectionState PowerState      Id CpuUsage CpuTotal  Memory  Memory
                                                        Mhz      Mhz UsageMB TotalMB
----            --------------- ----------      -- -------- -------- ------- -------
vcloud-esxi1... Connected       PoweredOn  ...t-88       98     5670     870    3071

[vSphere PowerCLI] %

Check that everything is OK with the vSphere Client and we are done.
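The whole sequence can be condensed into a few lines of PowerCLI. This is just a sketch using the names from my lab; adjust the host and datacenter names to your environment:

```
# Move an ESXi host to another datacenter:
# maintenance mode -> move -> reconnect
$esx = "vcloud-esxi1.vjlab.local"

Set-VMHost -VMHost $esx -State "Maintenance"
Move-VMHost -VMHost $esx -Destination (Get-Datacenter "vjlab-vcloud")
Set-VMHost -VMHost $esx -State "Connected"
```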


In yesterday's post I showed how to get the iSCSI IQN from an ESX(i) server using the vSphere CLI, from the vMA and from the root shell of ESX itself. Today it's PowerCLI's turn to perform the same task.

The approach to be taken is very similar to the one we used to manage the multipathing configuration.

[vSphere PowerCLI] C:\> $h = Get-VMhost esx02.mlab.local
[vSphere PowerCLI] C:\> $hostview = Get-View $h.Id
[vSphere PowerCLI] C:\> $storage = Get-View $hostview.ConfigManager.StorageSystem
[vSphere PowerCLI] C:\>
[vSphere PowerCLI] C:\> $storage.StorageDeviceInfo

HostBusAdapter              : {,,,}
ScsiLun                     : {,}
ScsiTopology                : VMware.Vim.HostScsiTopology
MultipathInfo               : VMware.Vim.HostMultipathInfo
PlugStoreTopology           : VMware.Vim.HostPlugStoreTopology
SoftwareInternetScsiEnabled : True
DynamicType                 :
DynamicProperty             : 

[vSphere PowerCLI] C:\>
[vSphere PowerCLI] C:\> $storage.StorageDeviceInfo.HostBusAdapter | select IScsiName


[vSphere PowerCLI] C:\>
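As a side note, PowerCLI also exposes a higher-level cmdlet for this; something like the following sketch should return the IQN directly, without dropping down to the view objects:

```
[vSphere PowerCLI] C:\> Get-VMHostHba -VMHost esx02.mlab.local -Type IScsi | Select Device, IScsiName
```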


When you are trying to configure iSCSI on an ESX(i) server from the command line, it is clear that at some point you are going to need the IQN. Of course you can use the vSphere Client to get it, but the Unix geek inside me really wants to do it from the shell.

After a little research through the vSphere CLI documentation and several blogs I found this post by Jon Owings (@2vcps).

First list the SCSI devices available in the system to get the iSCSI hba.

[root@esx02 ~]# esxcfg-scsidevs -a
vmhba0  mptspi            link-n/a  pscsi.vmhba0                            (0:0:16.0) LSI Logic / Symbios Logic LSI Logic Parallel SCSI Controller
vmhba1  ata_piix          link-n/a  ide.vmhba1                              (0:0:7.1) Intel Corporation Virtual Machine Chipset
vmhba32 ata_piix          link-n/a  ide.vmhba32                             (0:0:7.1) Intel Corporation Virtual Machine Chipset
vmhba33 iscsi_vmk         online    iscsi.vmhba33                           iSCSI Software Adapter         
[root@esx02 ~]#

After that, Jon uses the vmkiscsi-tool command to get the IQN.

[root@esx02 ~]# vmkiscsi-tool -I -l vmhba33
iSCSI Node Name:
[root@esx02 ~]#

Beautiful, isn't it? But I found one glitch: this method works from the ESX root shell, but how do I get the IQN from the vMA? Some of my hosts are ESXi, and even for the ESX hosts I use the vMA to perform my everyday administration tasks.

There is no vmkiscsi-tool command in the vMA; instead we are going to use the vicfg-iscsi or vicfg-scsidevs commands.

With vicfg-scsidevs we can obtain the IQN, listed in the UID column.

[vi-admin@vma ~][esx02.mlab.local]$ vicfg-scsidevs -a             
Adapter_ID  Driver      UID                                     PCI      Vendor & Model
vmhba0      mptspi      pscsi.vmhba0                            (0:16.0) LSI Logic Parallel SCSI Controller
vmhba1      ata_piix    unknown.vmhba1                          (0:7.1)  Virtual Machine Chipset
vmhba32     ata_piix    ide.vmhba32                             (0:7.1)  Virtual Machine Chipset
vmhba33     iscsi_vmk   ()       iSCSI Software Adapter
[vi-admin@vma ~][esx02.mlab.local]$

And with vicfg-iscsi we can get the IQN by providing the vmhba device.

[vi-admin@vma ~][esx02.mlab.local]$ vicfg-iscsi --iscsiname --list vmhba33
iSCSI Node Name   :
iSCSI Node Alias  :
[vi-admin@vma ~][esx02.mlab.local]$

The next logical step is to use PowerCLI to retrieve the IQN, but I'll leave that for a future post.


Getting the multipathing policy using PowerCLI is a very simple and straightforward process that can be done with a few commands.

I have tested this procedure in the past with ESX/ESXi 3.5 and 4.0.

Get the multipathing policy

[vSphere PowerCLI] C:\> $h = get-vmhost esx01.mlab.local
[vSphere PowerCLI] C:\> $hostview = get-view $h.Id
[vSphere PowerCLI] C:\> $storage = get-view $hostView.ConfigManager.StorageSystem
[vSphere PowerCLI] C:\> $storage.StorageDeviceInfo.MultipathInfo.lun | select ID,Path,Policy

Id            Path                          Policy
--            ----                          ------
vmhba0:0:0    {vmhba0:0:0}                  VMware.Vim.HostMultipathInfoFixedLogicalUnitPolicy
vmhba1:1:0    {vmhba1:1:0, vmhba1:0:0}      VMware.Vim.HostMultipathInfoFixedLogicalUnitPolicy
vmhba1:0:5    {vmhba1:1:5, vmhba1:0:5}      VMware.Vim.HostMultipathInfoFixedLogicalUnitPolicy
vmhba1:0:1    {vmhba1:1:1, vmhba1:0:1}      VMware.Vim.HostMultipathInfoFixedLogicalUnitPolicy
vmhba1:0:12   {vmhba1:1:12, vmhba1:0:12}    VMware.Vim.HostMultipathInfoFixedLogicalUnitPolicy

Change the policy from fixed to round-robin

We are going to change the policy for LUN 12.

[vSphere PowerCLI] C:\> $lunId = "vmhba1:0:12"
[vSphere PowerCLI] C:\> $storagepolicy = new-object VMware.Vim.HostMultipathInfoLogicalUnitPolicy
[vSphere PowerCLI] C:\> $storagepolicy.policy = "rr"
[vSphere PowerCLI] C:\> $storage.SetMultipathLunPolicy($lunId, $storagepolicy)
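More recent PowerCLI releases also offer a cmdlet-based alternative that avoids the view objects altogether. This is a sketch; the canonical name of the LUN will differ on your host:

```
[vSphere PowerCLI] C:\> Get-ScsiLun -VmHost esx01.mlab.local -CanonicalName "vmhba1:0:12" | Set-ScsiLun -MultipathPolicy "RoundRobin"
```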

Finally, check the new configuration.

If you look closely at the last line you will see that the value has changed from VMware.Vim.HostMultipathInfoFixedLogicalUnitPolicy to VMware.Vim.HostMultipathInfoLogicalUnitPolicy.

[vSphere PowerCLI] C:\> $h = get-vmhost "ESXIPAddress"
[vSphere PowerCLI] C:\> $hostview = get-view $h.Id
[vSphere PowerCLI] C:\> $storage = get-view $hostView.ConfigManager.StorageSystem
[vSphere PowerCLI] C:\> $storage.StorageDeviceInfo.MultipathInfo.lun | select ID,Path,Policy

Id            Path                          Policy
--            ----                          ------
vmhba0:0:0    {vmhba0:0:0}                  VMware.Vim.HostMultipathInfoFixedLogicalUnitPolicy
vmhba1:1:0    {vmhba1:1:0, vmhba1:0:0}      VMware.Vim.HostMultipathInfoFixedLogicalUnitPolicy
vmhba1:0:5    {vmhba1:1:5, vmhba1:0:5}      VMware.Vim.HostMultipathInfoFixedLogicalUnitPolicy
vmhba1:0:1    {vmhba1:1:1, vmhba1:0:1}      VMware.Vim.HostMultipathInfoFixedLogicalUnitPolicy
vmhba1:0:12   {vmhba1:1:12, vmhba1:0:12}    VMware.Vim.HostMultipathInfoLogicalUnitPolicy


As a small follow-up to yesterday's post about NFS shares with Openfiler, in the following article I will show how to add a new datastore to an ESX server using the vMA and PowerCLI.

– vMA

From the vMA shell we are going to use the vicfg-nas command. To clarify things a bit for the newcomers: vicfg-nas and esxcfg-nas are the same command; in fact esxcfg-nas is nothing more than a link to the former.

The option to create a new datastore is -a, and additionally the address/hostname of the NFS server, the share, and a label for the new datastore must be provided.

[vi-admin@vma ~][esx01.mlab.local]$ vicfg-nas -l
No NAS datastore found
[vi-admin@vma ~][esx01.mlab.local]$ vicfg-nas -a -o openfiler.mlab.local -s /mnt/vg_nfs/lv_nfs01/nfs_datastore1 nfs_datastore1
Connecting to NAS volume: nfs_datastore1
nfs_datastore1 created and connected.
[vi-admin@vma ~][esx01.mlab.local]$

When the operation is done you can check the new datastore with vicfg-nas -l.

[vi-admin@vma ~][esx01.mlab.local]$ vicfg-nas -l
nfs_datastore1 is /mnt/vg_nfs/lv_nfs01/nfs_datastore1 from openfiler.mlab.local mounted
[vi-admin@vma ~][esx01.mlab.local]$

– PowerCLI

In the second part of the post we are going to use vSphere PowerCLI, which as you already know is a PowerShell snap-in to manage vSphere/VI3 infrastructures. I will write more about PowerCLI in the future, since I'm very fond of it.

The cmdlet to create the new NFS datastore is New-Datastore; you must provide the ESX host, the NFS server, the path of the share, and a name for the datastore. Then you can check that the new datastore has been properly added with Get-Datastore.
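Put together, the PowerCLI equivalent of the vicfg-nas command above would look something like the following sketch, reusing the same names as the vMA example:

```
# Mount the NFS share as a datastore on the host, then verify it
New-Datastore -Nfs -VMHost esx01.mlab.local -Name nfs_datastore1 `
    -NfsHost openfiler.mlab.local -Path /mnt/vg_nfs/lv_nfs01/nfs_datastore1

Get-Datastore -Name nfs_datastore1
```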


This post is mostly for self-reference, but maybe someone will find it useful. Last night I decided to change the IP address of one of the Openfiler instances in my homelab, and instead of first removing the NFS shares from the ESX servers I simply made the changes.

After a restart of the network services on the Openfiler server to commit the changes, I found that the ESX servers saw the datastore as inactive.

First I tried to remove it from the vSphere Client, but I received an error message.

I quickly switched to an SSH session in the vMA to check the state of the datastore; it appeared as not mounted.

[vi-admin@vma /][esx01.mlab.local]$ esxcfg-nas -l
nfs_datastore1 is /mnt/vg_nfs/lv_nfs01/nfs_datastore1 from openfiler.mlab.local not mounted
[vi-admin@vma /][esx01.mlab.local]$

At this point I used the esxcfg-nas command to remove the datastore.

[vi-admin@vma /][esx01.mlab.local]$ esxcfg-nas -d nfs_datastore1
NAS volume nfs_datastore1 deleted.
[vi-admin@vma /][esx01.mlab.local]$ esxcfg-nas -l
No NAS datastore found
[vi-admin@vma /][esx01.mlab.local]$
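Once the Openfiler server is reachable at its new address, the datastore can be re-added the same way it was originally created. A sketch, with the new address left as a placeholder:

```
# Re-add the NFS datastore, pointing at the Openfiler's new address
esxcfg-nas -a -o <new_openfiler_address> -s /mnt/vg_nfs/lv_nfs01/nfs_datastore1 nfs_datastore1
```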

Very easy, isn't it? Oh, by the way, this just confirms one of my personal beliefs: “Where there is shell, there is a way” ;-)


Even if you have access to enterprise-class storage appliances, like the HP P4000 VSA or the EMC Celerra VSA, an Openfiler storage appliance can be a great asset for your homelab, especially if you, like myself, run an “all virtual” homelab within VMware Workstation, since Openfiler is far less resource-hungry than its enterprise counterparts.

Simon Seagrave (@Kiwi_Si) from TechHead wrote an excellent article explaining how to add iSCSI LUNs from an Openfiler instance to your ESX/ESXi servers; if iSCSI is your “thing” you should check it out.

In this article I'll explain how to configure an NFS share in Openfiler and then add it as a datastore to your vSphere servers. I'll take for granted that you already have an Openfiler server up and running.

1 – Enable NFS service

As always, point your browser to https://<openfiler_address>:446, log in, and from the main screen go to the Services tab and enable the NFSv3 service.

2 – Setup network access

From the System tab add the network of the ESX servers as authorized. I added the whole network segment, but you can also create network access rules per host in order to set up a more secure and granular access policy.

3 – Create the volumes

The next step is to create the volumes we are going to use as the base for the NFS shares. If, like me, you're a Unix/Linux geek, you surely understand the PV -> VG -> LV concepts perfectly. If not, I strongly recommend you check the TechHead article mentioned above, where Simon explains it very well, or, if you want to go a little deeper with volumes in Unix/Linux, my article about volume and filesystem basics in Linux and HP-UX.

First we need to create the physical volumes. Go to the Volumes tab, enter the Block Devices section, and edit the disk to be used for the volumes.

Create a partition and set the type to Physical Volume.

Once the Physical Volume is created, go to the Volume Groups section, create a new VG and use the new PV for it.

Finally, click on Add Volume. In this section you will have to choose the new VG that will contain the new volume, the size, a name and description and, more importantly, the Filesystem/Volume Type. There are three types:

  • iSCSI
  • XFS
  • Ext3

The first is obviously intended for iSCSI volumes and the other two for NFS. The criterion to follow here is scalability, since ext3 supports up to 8TB and XFS up to 10TB.

Click Create and the new volume will be created.

4 – Create the NFS share

Go to the Shares tab, there you will find the new volume as an available share.

Just to clarify concepts: this volume IS NOT the actual NFS share. We are going to create a folder inside the volume and share that folder through NFS with our ESX/ESXi servers.

Click on the volume name, and in the pop-up enter the name of the folder and click Create folder.

Select the folder, and in the pop-up click the Make Share button.

Finally, we are going to configure the newly created share; select the share to enter its configuration area.

Edit the share data to suit your needs and select the Access Control Mode. Two modes are available:

  • Public guest access – There is no user based authentication.
  • Controlled access – The authentication is defined in the Accounts section.

Since this is only for my homelab, I chose Public guest access.

Next select the share type; for our purposes I obviously chose NFS and set the permissions to Read-Write.

You can also edit the NFS options and configure them to suit your personal preferences and/or specifications.

Just a final tip for the non-Unix people: if you want to check the NFS share, open an SSH session with the Openfiler server and as root issue the command showmount -e.
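Assuming the volume and folder names used throughout this article, and a lab network segment of 192.168.1.0/24 (an assumption; yours will differ), the export list would look roughly like this:

```
[root@openfiler ~]# showmount -e
Export list for openfiler.mlab.local:
/mnt/vg_nfs/lv_nfs01/nfs_datastore1 192.168.1.0/255.255.255.0
```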

The Openfiler configuration is done, now we are going to create a new datastore in our ESX servers.

5 – Add the datastore to the ESX servers

Now that the share is created and configured it is time to add it to our ESX servers.

As usual, from the vSphere Client go to Configuration -> Storage -> Add Storage.

In the pop-up window choose Network File System.

Enter the Server, the Folder, and a Datastore Name.

Finally, check the data and click Finish. If everything goes well, after a few seconds the new datastore should appear.

And with this we are finished. If you see any mistake or have anything to add, please comment :-)