Archives For ESXi

In the series of posts about OpenStack and KVM we saw how to add a KVM node to NSX for multi-hypervisor environments as a transport node. In this post we will discuss how to perform the same procedure for an ESXi host.

NSX vSwitch installation

Before proceeding with the installation keep in mind that NSX vSwitch can coexist on an ESXi host only with the VMware Standard Switch; distributed switches are not supported.

Install the NSX vSwitch vib file using esxcli.

~ # esxcli software vib install --no-sig-check -v /tmp/vmware-nsxvswitch-2.1.3-35984-prod2013-stage-release.vib
Installation Result
   Message: Operation finished successfully.
   Reboot Required: false
   VIBs Installed: VMware_bootbank_vmware-nsxvswitch_2.1.3-35984
   VIBs Removed:
   VIBs Skipped:
~ #
~ # esxcli software vib list | grep nsx
vmware-nsxvswitch              2.1.3-35984                           VMware  VMwareCertified   2014-07-13
~ #

Check that a new virtual switch has been created on the host. Don’t use esxcli but the good old esxcfg-vswitch command, because for now there is no namespace available in esxcli for NSX vSwitch.

~ # esxcfg-vswitch -l
Switch Name      Num Ports   Used Ports  Configured Ports  MTU     Uplinks
vSwitch0         1536        7           128               1500    vmnic0,vmnic1

  PortGroup Name        VLAN ID  Used Ports  Uplinks
  vMotion               0        1           vmnic0,vmnic1
  Management Network    0        1           vmnic0,vmnic1

Switch Name      Num Ports   Used Ports  Configured Ports  MTU     Uplinks
vSwitch1         1536        6           128               1500    vmnic2,vmnic3

  PortGroup Name        VLAN ID  Used Ports  Uplinks
  vsan                  0        1           vmnic2,vmnic3

Switch Name      Num Ports   Used Ports  Uplinks
nsx-vswitch      1536        1

~ #

NSX vSwitch configuration

With NSX vSwitch installed, proceed to the configuration. First connect an uplink to the switch; this will create an NVS bridge, which is the equivalent of an OVS bridge in Open vSwitch.

nsxcli uplink/connect vmnic4

Set an IP address for the uplink. This IP address will be used later to create the transport tunneling endpoint when we connect the ESXi host as a transport node to NSX. You can also specify the VLAN tag by appending vlan=<vlan_id> as an additional parameter to the command.

nsxcli uplink/set-ip vmnic4 192.168.110.123 255.255.255.0
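For example, if the transport traffic had to be tagged with VLAN 110 (a hypothetical VLAN ID), the command would look like this:

nsxcli uplink/set-ip vmnic4 192.168.110.123 255.255.255.0 vlan=110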

Validate that the bridge is correctly configured. Use nsxcli port/show to verify the bridge and nsxcli uplink/show for the uplink.

~ # nsxcli port/show
br-int:
-------

br-vmnic4:
----------
vmnic4
vmk3

~ #

In the uplink/show output look for an entry like the one below.

==============================
vmnic4:
MAC       : 00:50:56:01:08:ca
Link      : Up
MTU       : 1500
IP config :
------------------------------
VMK intf  : vmk3
MAC addr  : 00:50:56:6b:ca:dd
Services  : NSX-Tunneling
VLAN      : 0
IP        : 192.168.110.123(Static)
Mask      : 255.255.255.0(Static)
..............................
------------------------------
Connection : NVS (uplink0)
Configured as standalone interface
==============================

You can also check the status of the vmkernel interface with esxcli and with nsxcli.

 ~ # esxcli network ip interface ipv4 get -i vmk3
Name  IPv4 Address     IPv4 Netmask   IPv4 Broadcast   Address Type  DHCP DNS
----  ---------------  -------------  ---------------  ------------  --------
vmk3  192.168.110.123  255.255.255.0  192.168.110.255  STATIC           false
~ #
~ # nsxcli vmknic/show vmk3
vmk3:
MAC addr  : 00:50:56:6b:ca:dd
Services  : NSX-Tunneling
VLAN      : 0
IP        : 192.168.110.123(Static)
Mask      : 255.255.255.0(Static)
Assoc with: vmnic4
..............................
~ #

The next step is to configure the gateway for NSX vSwitch.

~ # nsxcli gw/set tunneling 192.168.110.2
~ #
~ # nsxcli gw/show tunneling
Tunneling:
Configured default gateway       : 192.168.110.2
Currently active default gateway : 192.168.110.2 (Manual)
~ #

Connect the NSX vSwitch instance to the NSX Controller Cluster.

~ # nsxcli manager/set ssl:192.168.110.31
~ #
~ # nsx-dbctl show
e42912a7-693f-43ee-84d5-11b5bb3491eb
    Manager "ssl:192.168.110.31:6632"
    Bridge br-int
        fail_mode: secure
    Bridge "br-vmnic4"
        fail_mode: standalone
        Port "vmk3"
            Interface "vmk3"
        Port "vmnic4"
            Interface "vmnic4"
    ovs_version: "2.1.3.35984"
~ #

Create an opaque network. An opaque network is basically a transport bridge that will provide the network backend for the virtual machines. An opaque network must be identified at creation time by its type and ID.

In this particular case the ESXi host will later be added to a cluster acting as the Nova compute backend for my OpenStack lab, so the network type must be nsx.network and the UUID has to match the value configured for the integration_bridge setting in the nova.conf file. We also need to specify the port attach mode, which for OpenStack environments is manual.

~ # nsxcli network/add NSX-Bridge NSX-Bridge nsx.network manual
success
~ #
~ # nsxcli network/show
UUID                                        Name                    Type            Mode
----                                        ----                    ----            ----
NSX-Bridge                                  NSX-Bridge              nsx.network     manual
~ #
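For reference, a minimal sketch of the matching nova.conf fragment on the compute side; the section name is an assumption (it varies between OpenStack releases), the important part is that the value matches the opaque network UUID created above.

# Fragment of /etc/nova/nova.conf (sketch)
[vmware]
integration_bridge = NSX-Bridge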

Add ESXi as transport node

The final part of the procedure is to add our new ESXi server as a transport node to NSX. Log into the NSX Manager web UI and initiate the wizard to add a new Hypervisor. First, specify the name of the new hypervisor.


Set the integration bridge.


Select Security Certificate as credential type and paste the NSX vSwitch SSL certificate. The certificate can be retrieved from /etc/nsxvswitch/nsxvswitch-cert.pem.

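The certificate can be displayed, and copied from there, straight from the ESXi Shell:

~ # cat /etc/nsxvswitch/nsxvswitch-cert.pem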

Add an STT transport connector, using the IP address configured for the uplink.


Click Save & View and verify the new hypervisor configuration in NSX.


The setup of our new ESXi server within NSX is done. As always, comments are welcome.

Juanma.

ESXi 5.1 comes with many improvements, and one of them is a set of new namespaces and commands in esxcli.

Those new commands enable a system administrator to perform a shutdown, a reboot or a maintenance operation on a host.

Under the system namespace the new commands are the equivalents of the classic vicfg/esxcfg-hostops, which until now was the only way to perform this kind of operation with the vCLI, and they are also accessible locally in the ESXi Shell.


Maintenance mode operations

Getting the basic usage of the command is as simple as always. You can perform two operations.

  • Get the state of the host
  • Put the host in or out of Maintenance Mode
~ # esxcli system maintenanceMode 
Usage: esxcli system maintenanceMode {cmd} [cmd options]
Available Commands: 
  get                   Get the maintenance mode state of the system. 
  set                   Enable or disable the maintenance mode of the system. 
~ #
  • Get the state of the host
~ # esxcli system maintenanceMode get 
Disabled 
~ #
  • Put the host in Maintenance Mode
~ # esxcli system maintenanceMode set -e true -t 0 
~ # 
~ # esxcli system maintenanceMode get 
Enabled 
~ #

Power operations

With the shutdown command the host can be either rebooted or powered off. If the ESXi server is not in Maintenance Mode the operation will not be allowed.

~ # esxcli system shutdown 
Usage: esxcli system shutdown {cmd} [cmd options]
Available Commands: 
  poweroff              Power off the system. The host must be in maintenance mode. 
  reboot                Reboot the system. The host must be in maintenance mode. 
~ #

For both tasks the reason parameter is required; a delay interval in seconds can also be specified.

~ # esxcli system shutdown poweroff 
Error: Missing required parameter -r|--reason
Usage: esxcli system shutdown poweroff [cmd options]
Description: 
  poweroff              Power off the system. The host must be in maintenance mode.
Cmd options: 
  -d|--delay=<long>     Delay interval in seconds 
  -r|--reason=<str>     Reason for performing the operation (required) 
~ #
  • Power off the host
~ # esxcli system shutdown poweroff --delay=10 --reason="Hardware maintenance"
  • Reboot the host
~ # esxcli system shutdown reboot -d 10 -r "Patches applied"

Juanma.

After my previous post about getting the IQN of an ESXi host using esxcli, Andy Banta (@andybanta) commented on Twitter that you can also change the IQN of the host with esxcli.

As he said, it would be tremendously useful if you need to physically replace a server and don’t want to modify your whole storage infrastructure: it’s easier to simply set the IQN of the new server to the old name.

The task is as easy as the one described in the last post. Using the esxcli command with the iscsi namespace you can change both the name and the alias of the adapter.


As a precaution, first retrieve the current IQN to check that it’s the correct server.

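As a sketch, assuming the software iSCSI initiator is vmhba33 (the adapter name will vary on your host), the current IQN shows up in the Name field of:

~ # esxcli iscsi adapter get -A vmhba33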

To change the name you have to provide the adapter and the new name.

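A sketch of the rename, again assuming vmhba33 and a purely hypothetical IQN; double-check the options with esxcli iscsi adapter set --help on your build:

~ # esxcli iscsi adapter set -A vmhba33 -n iqn.1998-01.com.vmware:oldserver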

Hope you find this useful; as always, any comments and suggestions are welcome.

Juanma.

Back in 2010 I wrote a post about how to get the iSCSI IQN of an ESXi 4.x server using the vSphere CLI from the vMA or any other system with the vCLI installed on it.

The method described in that article is still valid for ESXi 5.0, since the old vicfg and esxcfg commands are still available. However, with version 5.0 you can get a similar result using the new esxcli namespaces; here is how to do it.

The first task is to get a list of the iSCSI HBAs in order to know the name of the software iSCSI initiator.

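A minimal sketch of that step; the software initiator will typically show up as one of the vmhba adapters:

~ # esxcli iscsi adapter list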

Next we get the info of the adapter.

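Again a sketch, assuming the initiator found in the previous step is vmhba33:

~ # esxcli iscsi adapter get -A vmhba33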

Look at the Name field to get the IQN and we are done.

Juanma.

Last night, during a patching job at a customer site, I ran into an error for several VMs when I put a host into maintenance mode and DRS tried to evacuate the virtual machines to the other nodes of the cluster.


Very strange, since as far as I could see the virtual machines were running without errors: I was logged into some of them through SSH and they also appeared as powered on in the vSphere Client.

I decided to go into Tech Support Mode on the ESXi host and check the virtual machine power state.

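A sketch of that check with vim-cmd; the VM ID (42 here) is hypothetical and comes from the getallvms listing:

~ # vim-cmd vmsvc/getallvms
~ # vim-cmd vmsvc/power.getstate 42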

Everything looked exactly as it should be, no error logs, nothing. At this point I decided to restart the ESXi management agents.

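From the ESXi shell that boils down to restarting hostd and vpxa (services.sh restart would bounce all the agents at once):

~ # /etc/init.d/hostd restart
~ # /etc/init.d/vpxa restart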

And it worked: after a few seconds I was able to perform a successful vMotion and the host could be evacuated.

Juanma.

This is a quick follow-up to the How to check the driver version of a network interface in ESX(i) post. That one covered ESX(i) 4.x, so I decided to write a small update for ESXi 5.0.

First I have to say that the two methods described in my first post still work in ESXi 5.0 Shell.

~ # vmware -l
VMware ESXi 5.0.0 GA
~ #
~ # vmkload_mod -s e1000 | grep Version
Version: Version 8.0.3.1-NAPI, Build: 456551, Interface: 9.2 Built on: Jul 29 2011
~ #
~ # ethtool -i vmnic0
driver: e1000
version: 8.0.3.1-NAPI
firmware-version: N/A
bus-info: 0000:02:00.0
~ #

Thanks to the new changes made by VMware in ESXi 5.0 we can now use esxcli to get the same result.

~ # esxcli system module get -m e1000
   Module: e1000
   Module File: /usr/lib/vmware/vmkmod/e1000
   License: GPL
   Version: Version 8.0.3.1-NAPI, Build: 456551, Interface: 9.2 Built on: Jul 29 2011
   Signed Status: VMware Signed
   Signature Issuer: VMware, Inc.
   Signature Digest: 1049 0611 a944 efc3 b683 341d 34b1 bebc 552d cb81 a874 ef4c 0562 8f25 2775 8c8d
   Signature FingerPrint: cb44 247a 1614 cea1 2079 362d ec86 9d0e
   Provided Namespaces:
   Required Namespaces: com.vmware.driverAPI@9.2.0.0, com.vmware.vmkapi@v2_0_0_0
~ #
~ # esxcli system module get -m e1000 | grep Version
   Version: Version 8.0.3.1-NAPI, Build: 456551, Interface: 9.2 Built on: Jul 29 2011
~ #

There is a big advantage to using esxcli over the other methods. In ESX(i) 4.x, and in ESXi 5.0 with the old procedure, you had to be logged into the host, but with esxcli the check can be performed remotely using the vSphere CLI.

vi-admin@vma:~[esxi5.vjlab.local]> esxcli system module get -m e1000
   Module: e1000
   Module File: /usr/lib/vmware/vmkmod/e1000
   License: GPL
   Version: Version 8.0.3.1-NAPI, Build: 456551, Interface: 9.2 Built on: Jul 29 2011
   Signed Status: VMware Signed
   Signature Issuer: VMware, Inc.
   Signature Digest: 1049 0611 a944 efc3 b683 341d 34b1 bebc 552d cb81 a874 ef4c 0562 8f25 2775 8c8d
   Signature FingerPrint: cb44 247a 1614 cea1 2079 362d ec86 9d0e
   Provided Namespaces:
   Required Namespaces: com.vmware.driverAPI@9.2.0.0, com.vmware.vmkapi@v2_0_0_0
vi-admin@vma:~[esxi5.vjlab.local]>
vi-admin@vma:~[esxi5.vjlab.local]> esxcli system module get -m e1000 | grep Version
   Version: Version 8.0.3.1-NAPI, Build: 456551, Interface: 9.2 Built on: Jul 29 2011
vi-admin@vma:~[esxi5.vjlab.local]>

But there is more: thanks to the Get-EsxCli cmdlet the same operation can be done using PowerCLI. Here is how.

First we need to set up the esxcli instance.

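A minimal sketch of that step; the server name is the lab host used elsewhere in these posts, so adjust it to your environment:

Connect-VIServer -Server esxi5.vjlab.local
$esxcli = Get-EsxCli -VMHost (Get-VMHost esxi5.vjlab.local)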

And now we issue the command using the name of the module as the argument; please pay attention to the syntax.

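A sketch of the call itself; with this generation of PowerCLI the esxcli methods take their arguments positionally, so the module name goes between the parentheses:

$esxcli.system.module.get("e1000")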

As you may have imagined, this procedure can be used to get info about any VMkernel module in the host, not just the network interface driver.

Juanma.

A first look into the vMA 5


Like the rest of the vSphere components, the vMA (vSphere Management Assistant) has been updated to the new version. In this post I will discuss the changes and features of the new vMA 5 and will show how to deploy and configure it.

First of all, for those of you who have been hiding in a cave and know nothing about the vMA, all you need to know for now is that it is a virtual appliance provided by VMware that allows system administrators to manage their virtual infrastructure and run scripts and agents that interact directly with vCenter Server and ESXi, without having to authenticate each time. I will not go deeper into this since there are tons of blog posts out there explaining it and it is also very well detailed in the vMA documentation.

- vMA 5 features and changes:

The vMA 5 is composed of the following elements:

  • SUSE Linux Enterprise Server 11 SP1 64-bit: This is a major change. Previous versions of the vMA were all based on Red Hat, either Red Hat Enterprise Linux or CentOS, but with the introduction of vSphere 5 all virtual appliances have been migrated to SUSE Linux Enterprise Server 11.
  • VMware Tools
  • vSphere CLI
  • vSphere SDK for Perl
  • Java JRE 1.6
  • vi-fastpass: The authentication component of the vMA.

As you can observe many of them are also present in former versions of the vMA.

Regarding the hardware requirements, they are again very similar to those of the vMA 4.

  • ESXi host capable of running 64-bit guests
  • 1 vCPU
  • 3GB of storage space
  • 512MB of RAM. This is the recommended memory size; the vMA can run with less RAM but its performance may suffer.

The new vMA can be deployed from the vSphere Client connected to a vCenter Server 5.0 or vCenter Server 4.x and can be run on the following vSphere releases:

  • vSphere 5.0
  • vSphere 4.1 and 4.1 Update 1
  • vSphere 4.0 Update 2

The systems that can be managed from the vMA are:

  • ESXi 3.5 Update 5
  • ESXi 4.0 Update 2
  • vSphere 4.1 and 4.1 Update 1
  • vSphere 5.0

- vMA 5 deployment and configuration:

We are going to deploy the vMA 5 through the vSphere Client in the same manner as the vMA 4. Go to File -> Deploy OVF template and when the pop-up shows up browse for the vMA OVF and click next.

Follow the screens until the last one to configure the datacenter and host or cluster where you want to deploy the vMA, configure the appliance to match your environment and click Finish to start the deployment.

When the deployment is finished, open the vMA virtual machine console and power it on. When the vMA boots for the first time it has to be configured.

Once the OS is up the first prompt will ask for the network configuration.

In the next step you’ll be asked for the vi-admin password. There is a major change here in comparison with the vMA 4: the vi-admin password now has stricter complexity requirements and must contain at least:

  • Eight characters.
  • One upper case character.
  • One lower case character.
  • One numeral character.
  • One symbol.

The reason for this new password policy comes from the SUSE Linux Enterprise Server operating system the vMA 5 is based on. William Lam (@lamw) provides the link to the Novell Knowledge Base article on this topic in his excellent post Tips and tricks for the vMA 5.

Once the network parameters and the vi-admin password are configured, the vMA is ready to manage your vSphere servers and the console screen will appear.

The vMA 5 can be managed in two ways:

  • Text-based console

From the text-based console you can launch the initial configuration of the vMA networking, set the timezone of the vMA, and log into the Linux command-line interface, as in previous releases of the vMA, to manage the appliance from the Linux shell and of course to manage your vSphere infrastructure. As always, SSH access to the shell is also available.

  • Browser-based Web UI

The Web UI only lets you manage the vMA itself, not the vCenter and ESX(i) servers. To access the Web UI point your browser to https://<vma_address_or_hostname>:5480 and log in as vi-admin. From there you can do the following tasks:

  • Check the status of the appliance, set the timezone and perform a system reboot or shutdown

  • Manage the appliance network and proxy server settings

  • Update the vMA 5

This last option is significant since this is now the way to update the vMA, because the vma-update utility has been removed.

Juanma.

If you need to put a host in maintenance mode and only have access through ESXi Tech Support Mode, either locally from the DCUI or remotely with SSH, in this quick post I’ll show you how to do it using the vim-cmd command.

Put the host in maintenance mode:
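A sketch of the command, using the hostsvc namespace of vim-cmd:

~ # vim-cmd hostsvc/maintenance_mode_enter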

Check the state of the host.
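The maintenance state is part of the host summary, so a quick grep does the job (a sketch):

~ # vim-cmd hostsvc/hostsummary | grep -i inMaintenanceMode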

Get the ESXi out of maintenance mode.
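And the matching command to take it out again:

~ # vim-cmd hostsvc/maintenance_mode_exit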

This procedure works in ESXi 4.x and ESXi 5.

Juanma.

If you are expecting to see a great whitebox like Phil Jaenke’s (@RootWyrm) BabyDragon, I’m sorry to say that you’ll be terribly disappointed, because unlike Phil’s beauty mine wasn’t built on purpose.

I’ve been running all my labs within VMware Workstation on my custom-made workstation, which by the way was running Windows 7 (64-bit). But recently I decided that it was time to move to a more reliable solution, so I converted my Windows 7 system into an ESXi server.

Surprisingly, when I installed ESXi 4.1 Update 1 everything was properly recognized, so I thought it could be helpful to post the hardware configuration for other vGeeks out there who might be looking for working hardware components for their homelabs.

  • Processor: Intel Core 2 Quad Q9550. Supports FT!
  • Memory: 8GB
  • Motherboard: Asrock Penryn 1600SLI-110dB
  • Nic: Embedded nVidia NForce Network Controller. Supported under the forcedeth driver
~ # ethtool -i vmnic0
driver: forcedeth
version: 0.61.0.1-1vmw
firmware-version:
bus-info: 0000:00:14.0
~ #
  • SATA controller: nVidia MCP51 SATA Controller.
~ # vmkload_mod -s sata_nv
vmkload_mod module information
 input file: /usr/lib/vmware/vmkmod/sata_nv.o
 Version: Version 2.0.0.1-1vmw, Build: 348481, Interface: ddi_9_1 Built on: Jan 12 2011
 License: GPL
 Required name-spaces:
  com.vmware.vmkapi@v1_1_0_0
 Parameters:
  heap_max: int
    Maximum attainable heap size for the driver.
  heap_initial: int
    Initial heap size allocated for the driver.
~ #
  • HDD1: 1 x 120GB SATA 7200RPM Seagate ST3120026AS.
  • HDD2: 1 x 1TB SATA 7200RPM Seagate ST31000528AS.

Finally, here is a screenshot of the vSphere Client connected to the vCenter VM, showing the summary of the host.

The other components of my homelab are a QNAP TS-219P NAS and an HP ProCurve 1810G-8 switch. I also have plans to add two more NICs and an SSD to the server as soon as possible and, of course, to build a new whitebox.

Juanma.

Suppose that you are trying to troubleshoot a network problem on your ESXi host and, as an experienced sysadmin, one of the first things you want to do is get the network connections of the host. But you are on ESXi, and that means there is no netstat, that handy Unix command that has saved your life so many times in the past.

Please don’t panic yet, because as always with VMware there is a solution for that: esxcli to the rescue. Here is how to list the network connections of your ESXi host, both for ESXi 4.1 and ESXi 5 :-)

ESXi 4.1

I tested it in ESXi 4.1 and ESXi 4.1 Update 1. The network namespace is not available in ESXi 4.0.
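Run locally from the ESXi 4.1 shell it looks like this; note that the namespace differs from ESXi 5:

~ # esxcli network connection list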

ESXi 5

I used Remote Tech Support (SSH), simply known as SSH in ESXi 5, in both examples, but you can also launch the command from the vMA or using the vSphere CLI from a Windows or a Linux machine.
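On ESXi 5 the equivalent command moves under the ip namespace:

~ # esxcli network ip connection list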

vMA 4.1

[vi-admin@vma ~]$ esxcli --server=arrakis.jreypo.local --username=root network connection list

vMA 5

vi-admin@vma5:~> esxcli --server=esxi5.jreypo.local --username=root network ip connection list

Juanma.