Archives For VMware

In a previous article I showed the process to patch an existing VIO 1.0 installation, which, as you could see, is a clean and easy process. VMware announced VMware Integrated OpenStack 2.0 during VMworld US and it finally became GA a few weeks ago.

This new version of VIO has all OpenStack code updated up to the latest Kilo release and comes packaged with many interesting features like Load-Balancing-as-a-Service (LBaaS) or auto-scaling capabilities based on Heat and Ceilometer.

With a new VIO version hot off the press, it is now time to upgrade your VIO 1.0.x environment to 2.0 and take advantage of all those great new goodies. The upgrade process is pretty straightforward and consists of three main stages.

  • Upgrade VIO Management Server
  • Deploy a new VIO 2.0 environment
  • Perform the data migration

Keep in mind that you will need enough hardware resources in your management cluster to host two full-fledged VIO installations at the same time during the migration process. Just for the sake of transparency, the lab environment where I tested the upgrade is based on vSphere 5.5 Update 2, NSX for vSphere 6.1.4 and VIO 1.0.2.

Step 1 – Upgrade VIO Management Server

From the VMware website download the .deb upgrade package and upload it to the VIO Management Server.

Download VIO upgrade package

Stage the upgrade package.

viouser@vio-oms:~$ sudo viopatch add -l vio-1.0-upgrade_2.0.0.3037964_all.deb
[sudo] password for viouser:
vio-1.0-upgrade_2.0.0.3037964_all.deb patch has been added.
viouser@vio-oms:~$ viopatch list
Name            Version       Type   Installed
--------------- ------------- ------ -----------
vio-1.0-upgrade infra  No
vio-patch-2 infra  Yes


Upgrade the management server with the viopatch command.

viouser@vio-oms:~$ sudo viopatch install -p vio-1.0-upgrade -v
Installing patch vio-1.0-upgrade version
Installation complete for patch vio-1.0-upgrade version

Go to the vSphere Web Client, log out and log back in to verify that the new version is correct.


Step 2 – Deploy a new VIO 2.0 environment

With the VIO Management Server upgraded it is now time to deploy a fresh 2.0 environment. In the VIO plugin go to the Manage section and a new Upgrades tab will be there.


Before starting with the deployment, check in the Networks tab that there are enough free IP addresses for the new deployment; if there aren't, add a new IP range.


Click on the Upgrade icon. Select whether you want to participate in the Customer Experience Improvement Program, my recommendation here is to say yes and help our engineering team improve the VIO upgrade experience even more ;-). Then enter the name for the new deployment.


Enter the public and private load-balanced IP addresses. Keep in mind that the public one must belong to the API subnet of the existing VIO 1.0 environment and the private one to the management network segment.


In the last screen review the configured values and click Finish. The new environment will be deployed and you will be able to monitor it from the Upgrades tab.


Step 3 – Migrate the data

With the new environment up and ready we can start the data migration. From the Upgrades tab right-click on your existing VIO 1.0 installation and select Migrate Data.


The migration wizard will ask for confirmation, click OK. During the data migration all OpenStack services will be unavailable.


When the migration process is finished the status of the new VIO 2.0 environment will appear as Migrated and the previous VIO 1.0 will appear as Stopped.


Open a browser and connect to the VIO 2.0 public IP to access the OpenStack Horizon interface, log in and verify that all your workloads, networks, images, etc. have been properly migrated. Log out from Horizon and go back to the Web Client. Now that the data has been migrated we need to migrate the original Public Virtual IP to the new environment.

Right-click on VIO 1.0 deployment and from the menu select Switch To New Deployment.


A new pop-up will appear asking for confirmation since, again, the OpenStack services will be unavailable during the IP reconfiguration.

After the reconfiguration the new VIO 2.0 deployment will be in Running status and the Public Virtual IP will be the same as the former 1.0 deployment.


The upgrade procedure is finished. We can now access Horizon using the existing FQDN, verify that everything is still working and enjoy your new OpenStack Kilo environment.


In the same way as patching, upgrading your OpenStack cloud with VIO does not have to be a painful experience; VIO provides the best OpenStack experience in a vSphere environment. Kudos to my colleagues of Team OpenStack @ VMware.

Happy Stacking!


As with the rest of the NSX for vSphere components, any competent admin will want to configure a remote syslog server for the NSX Controllers. In my homelab I have vRealize Log Insight, and I recently decided to configure it on my NSX Controllers and document the procedure here, mostly as a self-reference.

NSX Manager has the option to configure a remote syslog server using its management web site, but where is the option for the Controllers? Well, if you lurk around the NSX interface in vSphere Web Client you will quickly notice that the option is somehow missing. Actually the only way to enable it is using the NSX REST API. Let's see how to do it.

For this post I will use the Firefox REST Client add-on, but you can use your favorite REST client. First, any REST API call will require at least the Authentication header; in Firefox REST Client click on the Authentication drop-down menu, select Basic Authentication and enter the admin credentials.

Screen Shot 2015-09-30 at 02.25.19

Additionally, PUT and POST methods require a custom header that defines the content of the HTTP request body. Use these values:

  • Name: Content-Type
  • Value: application/xml

With these two headers set, enter the API URL; in my case it is:


To construct this URL you will need the controller ID, which can be found in the NSX interface in vSphere Web Client as shown below.

Screen Shot 2015-09-30 at 02.50.38

Select POST as the method. You will need to enter the body for the HTTP request in XML format. Use the code below as an example to build the content of the HTTP request body.

This XML code instructs NSX Manager to set the IP address in the syslogServer node as the remote syslog server for the controller in the URL. The protocol, port and log level are also defined.
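
For reference, the same call can also be made from the command line with curl instead of the browser add-on. The sketch below is not from the original post: the NSX Manager FQDN, credentials, controller ID and syslog IP are placeholders, and the URL pattern and XML element names should be double-checked against the NSX for vSphere API guide for your version.

# placeholders: adjust the manager FQDN, credentials, controller ID and syslog server IP
curl -k -u admin:VMware1! \
     -H "Content-Type: application/xml" \
     -X POST \
     -d '<controllerSyslogServer>
           <syslogServer>192.168.1.50</syslogServer>
           <port>514</port>
           <protocol>UDP</protocol>
           <level>INFO</level>
         </controllerSyslogServer>' \
     https://nsx-manager.lab.local/api/2.0/vdn/controller/controller-1/syslog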

Screen Shot 2015-09-17 at 01.14.19

Submit the request and if everything is configured as described you will receive a 200 OK status code.

Screen Shot 2015-09-17 at 01.15.06

At this point the syslog server is configured for the NSX Controllers (repeat the call for each controller ID to cover all of them); you can also check the status with an API call to the same URL using the GET method.
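
For instance, a quick check from the command line could look like this (same placeholder values as in the sketch above):

# returns the syslog configuration for the given controller ID
curl -k -u admin:VMware1! -X GET \
     https://nsx-manager.lab.local/api/2.0/vdn/controller/controller-1/syslog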

Comments are welcome.


It occurred to me recently that, after recovering a failed VIO deployment (in my case an issue with one of the database nodes), the OpenStack Cluster was still in Error state in the Web Client plugin.

Screen Shot 2015-06-01 at 12.25.15

After investigating a bit internally, and thanks to my colleague Dimitri Desmidt, I was able to solve it.

Open an SSH connection as viouser to the VIO Management Server and elevate to root. Launch psql with the following command:

/opt/vmware/vpostgres/current/bin/psql -U omsdb

From psql prompt execute:

update cluster set status='RUNNING';
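
If you prefer to skip the interactive prompt, the same update can likely be run as a one-liner; this is just a sketch that reuses the psql binary and the omsdb user from above:

# non-interactive variant of the update shown above
/opt/vmware/vpostgres/current/bin/psql -U omsdb -c "update cluster set status='RUNNING';"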

After that, log out and back in to the Web Client and the cluster will now be in Running status.


VMware has released the first patch for VMware Integrated OpenStack. This patch release comes with improvements to the installer and the Keystone service, and fixes some security issues. Review the Release Notes to get full details of what is included.

After the patch was released I thought it was a perfect time to upgrade my VIO lab, document the procedure and publish it, so without further ado let's get some patches installed!

Step 1 – Upload and install the patch

Get the patch from the VIO product download page; of course you need the proper rights to do it. The patch is a Debian package in deb format. There are some caveats here: the intended way to upload and install the patch is using vSphere Web Client from Manage -> Updates.

Screen Shot 2015-04-28 at 23.38.55

However, right after the release of the patch an issue was identified with this method, and until it is solved the safest way to proceed is using the CLI. Use your favorite SCP/SFTP client to upload the patch to the VIO Management Server as viouser.

Add the patch using viopatch command.

viouser@vio-manager:~$ sudo viopatch add -l /home/viouser/vio-patch-1_1.0.1.2668568_all.deb
[sudo] password for viouser:
/home/viouser/vio-patch-1_1.0.1.2668568_all.deb patch has been added.

List the patches to verify that it has been added.

viouser@vio-manager:~$ viopatch list
Name         Version        Type    Installed
-----------  -------------  ------  -----------
vio-patch-1  infra   No


Install the patch. Before installing, verify that the VIO Cluster is in Running status or the update will fail. The patch can also be applied before deploying OpenStack.

viouser@vio-manager:~$ sudo viopatch install --patch vio-patch-1 --version
[sudo] password for viouser:
Installing patch vio-patch-1 version
Installation complete for patch vio-patch-1 version

Step 2 – Verify the installation

Log out and log back in to vSphere Web Client. The new version and build number can be verified in the Summary tab.

Screen Shot 2015-04-29 at 01.09.21

Also in Manage -> Updates the newly installed patch can be seen in more detail.

Screen Shot 2015-04-29 at 01.09.53

And this is it. Anyone who has ever had to endure the pain of patching an OpenStack installation, whether a lab or a production environment, will surely appreciate how clean and easy the process is in VIO.


VMware Integrated OpenStack was made available for customers last Thursday. It is an exciting time to be part of VMware.

Coinciding with this announcement I have updated my original post about VIO installation with new screenshots and information from the GA version of VIO, and that post has also been published on our official VMware OpenStack Blog.

Also, if you want to quickly experience VMware and OpenStack all together, there is a new Hands-on Lab available about VIO: HOL-SDC-1420 VMware Integrated OpenStack and NSX.

Happy Stacking!


VMware Integrated OpenStack, or VIO, was announced during last year's VMworld in San Francisco and has finally been released today by VMware.

For me this is a very special release because I have been one of the lucky internal adopters and beta testers of VIO. I have spent many hours working with several VIO builds and trying to help our incredible engineering team. This is in my opinion a really solid and well designed product and will be a game changer in the OpenStack world. Honestly I cannot be more excited :)


VIO is basically a VMware-supported OpenStack distribution prepared to run on top of an existing VMware infrastructure. VMware Integrated OpenStack will empower any VMware administrator to easily deliver and operate an enterprise production-grade OpenStack cloud on VMware components. This means that you will be able to take advantage of all the great VMware vSphere features like HA, DRS or VSAN for your OpenStack cloud, and also extend and integrate it with other VMware management components like vRealize Operations and vRealize Log Insight.

VIO components

VIO is made up of two main building blocks: the VIO Manager and the OpenStack components. VIO is packaged as an OVA file that contains the VIO Manager server and an Ubuntu Linux virtual machine to be used as the template for the different OpenStack components.

The OpenStack services in VIO are deployed as a distributed highly available solution formed by the following components:

  • OpenStack controllers. Two virtual machines running Horizon Dashboard, Nova (API, scheduler and VNC) services, Keystone, Heat, Glance, and Cinder services in an active-active cluster.
  • Memcached cluster.
  • RabbitMQ cluster, for messaging services used by all OpenStack services.
  • Load Balancer virtual machines, an active-active cluster managing the internal and public virtual IP addresses.
  • Nova Compute machine, running the n-cpu service.
  • Database cluster. A three node MariaDB Galera cluster that stores the OpenStack metadata.
  • Object Storage machine, running Swift services.
  • DHCP nodes. These nodes are only required if NSX is not selected as provider for Neutron.

Installation requirements

To be able to successfully deploy VIO you will need at least the following:

  • One management cluster with two to three hosts, depending on the hardware resources of the hosts.
  • One Edge cluster. As with any NSX for vSphere deployment it is recommended to deploy a separate cluster to run all Edge gateway instances.
  • One compute cluster to be used by Nova to run instances. One ESXi host will be enough but again that will depend on how much resources are available and what kind of workloads you want to run.
  • Management network with at least 15 static IP addresses available.
  • External network with a minimum of two IP addresses available. This is the network where Horizon portal will be exposed and that will be used by the tenants to access OpenStack APIs and services.
  • Data network, only needed if NSX is going to be used. The different tenant logical networks will be created on top of this; the management network could be used, but it is recommended to have a separate network.
  • NSX for vSphere, 6.1.2 at minimum. It has to be set up prior to VIO deployment if the NSX plugin is going to be used with Neutron.
  • Distributed Port Group. In case of choosing DVS-based networking, a vSphere port-group tagged with VLAN 4095 must be set up. This port group will be used as the data network.

The hardware requirements are around 56 vCPU, 192GB of memory and 605GB of storage. To that you have to add NSX for vSphere required resources for the NSX Manager, the three NSX Controllers and the NSX Edge pool, if NSX is going to be used.

Anyway in a future post I will review in detail all the pre-requisites and their setup process for VIO, and the integration between NSX-v and Neutron.

VIO Installation

Now that we have seen a bit of VIO I am going to show how to perform an installation.

Deploying VIO Manager

The first step is to deploy VIO OVA on our management cluster. From vSphere Web Client launch the Deploy OVF Template wizard and enter the URL to the VIO OVA file.


Accept the EULA and proceed to configure the template. First, as with any OVA template, enter the name and the folder.


Select the datastore and the storage format.


Select the network for VIO Manager.


Now we will customize the template; this includes entering the VIO Manager server networking settings, NTP, SSO lookup service URL and syslog server.

Screen Shot 2015-01-31 at 00.32.27

Go through the next two screens, click Finish and start the deployment. Once it is finished you will have a new vApp with the two virtual machines. Our next step is to register the management server with vCenter: power on the OMS vApp and, when the management server is fully started, log out of vSphere Web Client. Log back in to vSphere Web Client and you will notice a new icon on the Home page.

Screen Shot 2015-02-01 at 02.34.35

Access the VIO plugin interface and in the Summary you should see that VIO Manager has automatically registered itself with vCenter.


From this screen you can also change the VIO Manager server in case you need to re-deploy a new one. To do so select the management server in the pop-up and click OK.

Screen Shot 2015-01-31 at 17.24.49

Accept the SSL certificate to finish the procedure.

Screen Shot 2014-08-23 at 01.41.07

VIO Manager Server will now be displayed as connected in the Summary tab.


Deploying OpenStack

With VIO Manager running and connected to our vCenter it is time now to deploy OpenStack. Proceed to the Getting Started tab and click Deploy OpenStack.


A new wizard will be launched. In the first screen we must select the deployment type. VIO allows you to deploy a new OpenStack installation or deploy from a previously saved template file.


Provide the vCenter administrative credentials.


Select the management cluster where we are going to deploy VIO.


Next you need to configure the Management and External networks. Select the appropriate vSphere port-groups for each network and fill in the network ranges, gateway, netmask and DNS server fields.


Enter the values for the load balancer configuration:

  • Public Virtual IP address
  • Public Hostname, this hostname must resolve to the Public IP address.


Add a cluster to be used for Nova.


Add the datastores to be used by Nova to store the different instances. If you have a VSAN datastore, keep in mind that to be able to use it with Nova the images stored in Glance have to be streamOptimized.
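
As an illustration (not taken from the original post), uploading a streamOptimized VMDK to Glance could look roughly like the sketch below; the image name and file are made up, and the vmware_disktype/vmware_adaptertype properties come from the OpenStack VMware driver documentation, so verify them against your VIO release.

# hypothetical example: image name, file and adapter type are placeholders
glance image-create --name ubuntu-14.04-stream --disk-format vmdk \
  --container-format bare --is-public True \
  --property vmware_disktype="streamOptimized" \
  --property vmware_adaptertype="lsiLogic" \
  --file ubuntu-14.04.vmdk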


Select the datastore to be used by Glance image service.


Configure Neutron networking. For Neutron there are two different options:

  • DVS-based networking
  • NSX networking

For DVS simply select the Virtual Distributed Switch where you created the port-group for the data network with the VLAN 4095 configured.

For NSX deployment you must enter:

  • NSX Manager IP address.
  • NSX Manager administrative username.
  • NSX Manager administrative user password.
  • VDN Scope. Basically the Transport Zone in NSX-v to be used as transport layer for data traffic.
  • Edge Cluster. A vSphere cluster to deploy the NSX Edge instances.
  • Virtual Distributed Switch for NSX networking.
  • External Network. This is a port group to be used as the external network by instances in OpenStack via a virtual router. This port group should be accessible from the compute, management and edge clusters.


During the Neutron configuration the wizard will connect to the NSX Manager with the provided credentials and will ask to accept the SSL certificate.


In the next screen the wizard will ask for the OpenStack admin user, password and project. Also you can select the Keystone type option:

  • Database
  • Active Directory as LDAP Server.


Finally, set the syslog server; it is not mandatory to set this value but it is highly recommended.


Review the configuration and click Finish.

Screen Shot 2015-01-31 at 20.43.54

The deployment will take some time, depending on your storage backend. In my testing lab it took around one hour, but it is a nested environment running on NFS, so you can expect much better times deploying in a real-world setup. When it is finished you can review the different components of VIO with vSphere Web Client in VMs and Templates; there will be a new folder structure containing all VIO virtual machines.


Validate your VIO installation

In your favorite browser open an HTTPS session against the public hostname or virtual IP address configured during VIO installation. The Horizon portal login page will display.

Screen Shot 2015-02-01 at 22.50.41

Enter the admin credentials and the OpenStack admin Overview page will show up. Then access the Hypervisors area and check that the cluster selected for Nova appears there.

Screen Shot 2015-02-01 at 22.53.19

At this point VIO is set up and you can start to work in Horizon or using the CLI as with any other OpenStack distribution.

Have fun and happy stacking!


A question I have heard a few times: what are the command equivalences between a standard Open vSwitch, running inside a Linux box, and the NSX vSwitch running inside ESXi? I have written this post to clarify this a bit.

There are four commands in NSX CLI that have equivalencies in the OVS world:

NVS Command     OVS Command
nsx-dbctl       ovs-vsctl
nsx-dpctl       ovs-dpctl
nsx-appctl      ovs-appctl
nsx-flowctl     ovs-flowctl


The nsx-dbctl command maps to its OVS equivalent ovs-vsctl. Sub-commands are the same; for example, nsx-dbctl show will produce output similar to ovs-vsctl show.

~ # nsx-dbctl show
    Manager "ssl:"
        is_connected: true
    Bridge "br-vmnic1"
        fail_mode: standalone
        Port "vmk3"
            Interface "vmk3"
        Port "vmnic1"
            Interface "vmnic1"
    Bridge br-int
        Controller "ssl:"
            is_connected: true
        Controller "unix:ovs-l3d.mgmt"
            is_connected: true
        fail_mode: secure
        Port "vNic.3000004"
            Interface "vNic.3000004"
        Port "vNic.3000006"
            Interface "vNic.3000006"
        Port "vNic.3000005"
            Interface "vNic.3000005"
    ovs_version: ""
~ #


The nsx-dpctl command maps to ovs-dpctl and, much like it, allows you to manage Open vSwitch datapaths.

~ # nsx-dpctl show
        lookups: hit:1770781 missed:192476 lost:0
        flows: 14
        port 50331650: vmnic1
        port 50331651: vmk3
        port 50331652: vNic.3000004
        port 50331653: vNic.3000005
        port 50331654: vNic.3000006
~ #


nsx-appctl allows the administrator to manage and configure the OVS daemons. It maps to the ovs-appctl command.

~ # nsx-appctl dpif/show
system@nsx-vswitch: hit:2230477 missed:148652
        flows: cur: 17, avg: 17, max: 33, life span: 1918447ms
        hourly avg: add rate: 66.907/min, del rate: 66.880/min
        daily avg: add rate: 43.476/min, del rate: 43.461/min
        overall avg: add rate: 60.918/min, del rate: 60.909/min
        br-int: hit:142949 missed:8461
                vNic.3000004 1/50331652: (system)
                vNic.3000005 2/50331653: (system)
                vNic.3000006 3/50331654: (system)
        br-vmnic1: hit:2087528 missed:140191
                vmk3 2/50331651: (system)
                vmnic1 1/50331650: (system)
~ #


nsx-flowctl is the equivalent of ovs-flowctl and allows you to manage NSX vSwitch flow tables, ports, etc.

~ # nsx-flowctl show br-bond0
OFPT_FEATURES_REPLY (xid=0x3): dpid:0000725d4492c540
n_tables:254, n_buffers:256
 1(vmnic4): addr:00:50:56:01:08:c6
     config:     0
     state:      0
     current:    1GB-FD
     advertised: 10MB-HD 10MB-FD 100MB-HD 100MB-FD 1GB-HD 1GB-FD
     supported:  10MB-HD 10MB-FD 100MB-HD 100MB-FD 1GB-HD 1GB-FD
     speed: 1000 Mbps now, 1000 Mbps max
 2(vmnic5): addr:00:50:56:01:08:c8
     config:     0
     state:      0
     current:    1GB-FD
     advertised: 10MB-HD 10MB-FD 100MB-HD 100MB-FD 1GB-HD 1GB-FD
     supported:  10MB-HD 10MB-FD 100MB-HD 100MB-FD 1GB-HD 1GB-FD
     speed: 1000 Mbps now, 1000 Mbps max
 3(vmk3): addr:00:50:56:66:57:18
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x6): frags=normal miss_send_len=0
~ #

Courteous comments are welcome.


VMware has released a new vRealize Operations Manager management pack for NSX Multi-hypervisor. This new management pack will allow vROps to extend its management capabilities into any NSX-MH infrastructure.

This management pack provides a great set of features, including:

  • Operational visibility into the different NSX-MH components, from NSX Manager to Controllers, transport nodes and logical elements of the network.
  • Search and drill down functionality to help the administrator monitor the health of the NSX objects.
  • Alerts and root cause problem solving capabilities by detecting configuration, connectivity and health deficiencies in the NSX environment.
  • Report templates for NSX Multi-Hypervisor environment.

The management pack requires vRealize Operations Manager 6.0 and can be downloaded from VMware Solutions Exchange.


To install this management pack go to Administration in the left pane.

Screen Shot 2014-12-16 at 01.15.10

From there go to Solutions and on the right pane click on the plus sign to add the new management pack.

Screen Shot 2014-12-16 at 01.15.24

Browse for the pack installation file, click Upload and then click Next when the installation file is uploaded.

Screen Shot 2014-12-16 at 01.16.27

Accept the EULA and proceed to the last screen. Wait until the management pack is installed and then click Finish.

Screen Shot 2014-12-16 at 01.19.10

Configure the adapter instance

The first task is to create the credentials for the solution. Access Administration -> Credentials and create a new credential for the NSX-MH Adapter. It has to include the administration credentials for the NSX Controller, NSX Manager and vCenter Server.

Screen Shot 2014-12-16 at 02.19.17

Next access Administration -> Solutions, select the NSX-MH pack and click on the gear icon.


On the pop-up window enter the IP address or the FQDN for:

  • NSX Controller
  • NSX Manager
  • vCenter Server

Only the first NSX Controller is needed.


Test the connection, accept the certificates for the different components and click Save Settings. After this the adapter is configured and will start collecting data; depending on the size of the NSX environment it will take some time to have a full collection of data.

NSX-MH dashboards

Out of the box the management pack comes with three dashboards.

  • NSX-MH Main: It provides an overview of the health of the different network objects

Screen Shot 2014-12-16 at 01.29.26

  • NSX-MH Topology: Provides details about the topology of a selected object, how it connects in the networks and a view of the related alerts and metrics.

Screen Shot 2014-12-15 at 02.30.37

  • NSX-MH Object Path: This dashboard enables the administrator to visually depict the path between two selected objects and verify how they are connected to each other and to other objects.

Screen Shot 2014-12-16 at 01.32.14



In the series of posts about OpenStack and KVM we saw how to add a KVM node to NSX for multi-hypervisor environments as a transport node. In this post we will discuss how to perform the same procedure for an ESXi host.

NSX vSwitch installation

Before proceeding with the installation keep in mind that NSX vSwitch can run on an ESXi host simultaneously only with the VMware Standard Switch; distributed switches are not supported.

Install the NSX vSwitch vib file using esxcli.

~ # esxcli software vib install --no-sig-check -v /tmp/vmware-nsxvswitch-2.1.3-35984-prod2013-stage-release.vib
Installation Result
   Message: Operation finished successfully.
   Reboot Required: false
   VIBs Installed: VMware_bootbank_vmware-nsxvswitch_2.1.3-35984
   VIBs Removed:
   VIBs Skipped:
~ #
~ # esxcli software vib list | grep nsx
vmware-nsxvswitch              2.1.3-35984                           VMware  VMwareCertified   2014-07-13
~ #

Check that a new virtual switch has been created on the host. Don't use esxcli but the good old esxcfg-vswitch command, because for now there is no esxcli namespace available for the NSX vSwitch.

~ # esxcfg-vswitch -l
Switch Name      Num Ports   Used Ports  Configured Ports  MTU     Uplinks
vSwitch0         1536        7           128               1500    vmnic0,vmnic1

  PortGroup Name        VLAN ID  Used Ports  Uplinks
  vMotion               0        1           vmnic0,vmnic1
  Management Network    0        1           vmnic0,vmnic1

Switch Name      Num Ports   Used Ports  Configured Ports  MTU     Uplinks
vSwitch1         1536        6           128               1500    vmnic2,vmnic3

  PortGroup Name        VLAN ID  Used Ports  Uplinks
  vsan                  0        1           vmnic2,vmnic3

Switch Name      Num Ports   Used Ports  Uplinks
nsx-vswitch      1536        1

~ #

NSX vSwitch configuration

With NSX vSwitch installed, proceed to the configuration. First connect an uplink to the switch; this will create an NVS bridge, which is the equivalent of an OVS bridge in Open vSwitch.

nsxcli uplink/connect vmnic4

Set an IP address for the uplink; this IP address will be used later to create the transport tunneling endpoint when we connect the ESXi host as a transport node to NSX. You can also specify the VLAN tag by appending vlan=<vlan_id> as an additional parameter to the command.

nsxcli uplink/set-ip vmnic4

Validate that the bridge is correctly configured. Use nsxcli port/show to verify the bridge and nsxcli uplink/show for the uplink.

~ # nsxcli port/show


~ #

In the uplink/show output look for an entry like the one below.

MAC       : 00:50:56:01:08:ca
Link      : Up
MTU       : 1500
IP config :
VMK intf  : vmk3
MAC addr  : 00:50:56:6b:ca:dd
Services  : NSX-Tunneling
VLAN      : 0
IP        :
Mask      :
Connection : NVS (uplink0)
Configured as standalone interface

You can also check the status of the vmkernel interface with esxcli and with nsxcli.

 ~ # esxcli network ip interface ipv4 get -i vmk3
Name  IPv4 Address     IPv4 Netmask   IPv4 Broadcast   Address Type  DHCP DNS
----  ---------------  -------------  ---------------  ------------  --------
vmk3  STATIC           false
~ #
~ # nsxcli vmknic/show vmk3
MAC addr  : 00:50:56:6b:ca:dd
Services  : NSX-Tunneling
VLAN      : 0
IP        :
Mask      :
Assoc with: vmnic4
~ #

The next step is to configure the gateway for the NSX vSwitch.

~ # nsxcli gw/set tunneling
~ #
~ # nsxcli gw/show tunneling
Configured default gateway       :
Currently active default gateway : (Manual)
~ #

Connect the NSX vSwitch instance to the NSX Controller Cluster.

~ # nsxcli manager/set ssl:
~ #
~ # nsx-dbctl show
    Manager "ssl:"
    Bridge br-int
        fail_mode: secure
    Bridge "br-vmnic4"
        fail_mode: standalone
        Port "vmk3"
            Interface "vmk3"
        Port "vmnic4"
            Interface "vmnic4"
    ovs_version: ""
~ #

Create an opaque network. An opaque network is basically a transport bridge that will provide the network backend for the virtual machines. Opaque networks must be identified during their creation based on their type and ID.

In this particular case the ESXi host will be added later to a cluster acting as the Nova compute backend for my OpenStack lab, so the network type must be set accordingly and the UUID has to match the one configured for the integration_bridge setting in the nova.conf file. We also need to specify the port attach mode, which for OpenStack environments is manual.

~ # nsxcli network/add NSX-Bridge NSX-Bridge manual
~ #
~ # nsxcli network/show
UUID                                        Name                    Type            Mode
----                                        ----                    ----            ----
NSX-Bridge                                  NSX-Bridge         manual
~ #

Add ESXi as transport node

The final part of the procedure is to add our new ESXi server as a transport node to NSX. Log into the NSX Manager web UI and initiate the wizard to add a new Hypervisor. First specify the name of the new hypervisor.

Screen Shot 2014-07-14 at 02.13.30

Set the integration bridge.

Screen Shot 2014-07-14 at 02.22.44

Select Security Certificate as credential type and paste the NSX vSwitch SSL certificate. The certificate can be retrieved from /etc/nsxvswitch/nsxvswitch-cert.pem.

Screen Shot 2014-07-14 at 02.29.50

Add an STT transport connector, using the IP address configured for the uplink.

Screen Shot 2014-07-14 at 02.31.57

Click Save & View and verify the new hypervisor configuration in NSX.

Screen Shot 2014-07-14 at 02.36.15

The setup of our new ESXi server within NSX is done. As always, comments are welcome.


Welcome to Part 4 of this series about OpenStack and VMware NSX. As a quick review, in the first three parts we described the different VMware NSX components and concepts, how to install and configure them, and also how to install and configure the KVM and GlusterFS nodes. In this fourth part of the series we will see how to deploy OpenStack in a three-node architecture and integrate it with our existing NSX installation.

If you remember the first post where I described the components of the lab, there were three OpenStack dedicated nodes:

  • Cloud controller node
  • Neutron networking node
  • Nova compute node

Instead of installing from scratch I decided to go with one of the OpenStack distributions: RDO. What is RDO and why did I decide on it? RDO is a community distribution of OpenStack sponsored by Red Hat. Yes, I just said Red Hat, so please stop the eye rolling.

RDO is the upstream version of RHEL OpenStack Platform, the commercial OpenStack offering from Red Hat. Over the last months I have tried several flavors of OpenStack, and while I still think that installing from scratch is the best way to learn (in fact that is what I did for my first labs), RDO gives me the possibility to quickly create my testing labs. Also, RHEL OSP version 4, based on RDO, is supported with VMware NSX and I really couldn't resist trying it.

Installation prerequisites

Before proceeding with the installation there are some preparations we need to perform on the OpenStack nodes.

SSH key generation

Generate a new SSH key to be later distributed on the OpenStack nodes during the installation. Use ssh-keygen to generate the new key.
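
For example, something along these lines (a minimal sketch; adjust key type, path and passphrase to your needs):

# generate an RSA key pair with no passphrase in the default location
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa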

Neutron server preparation

On the Neutron node install the NSX Open vSwitch version as described in Part 3 for the KVM nodes; the network interface configuration is quite similar.

With the network interface configuration files properly set up, exit your SSH session and log into the VM console to create the OVS bridges as in the example below.

ovs-vsctl add-br br-ex
ovs-vsctl br-set-external-id br-ex bridge-id br-ex
ovs-vsctl set Bridge br-ex fail-mode=standalone
ovs-vsctl add-port br-ex eth0

OpenStack installation

RDO relies on packstack for the installation of its different components. Packstack is a tool that installs all the required software on the nodes based on an answer file. Enable the RDO and EPEL repos and install the openstack-packstack package.

yum install -y
yum install -y
yum install -y openstack-packstack

Once it is installed, generate a new answer file; we will use this file as a template for our installation.

packstack --gen-answer-file rdo_answers.txt

Edit the packstack answer file and modify the following entries, leaving the rest with their default values. It is important not to eliminate any entry or the packstack execution will fail.

Deactivate services we do not want to deploy.


Nova settings.


And finally Neutron settings. Don’t set any L3 value since that part will be managed by NSX.
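
The exact entries did not survive in this copy of the post, so the snippet below is only an illustrative sketch: the key names vary between packstack releases and the host IP addresses are hypothetical, so cross-check everything against your own generated answer file.

# services we do not want packstack to deploy
CONFIG_SWIFT_INSTALL=n
CONFIG_CEILOMETER_INSTALL=n
CONFIG_NAGIOS_INSTALL=n
# Nova settings: point the compute role at the compute node (hypothetical IP)
CONFIG_NOVA_COMPUTE_HOSTS=192.168.1.62
# Neutron settings: run neutron-server on the network node, leave the L3 list empty (NSX will handle L3)
CONFIG_NEUTRON_SERVER_HOST=192.168.1.61
CONFIG_NEUTRON_L3_HOSTS=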


Launch OpenStack installation process.

packstack --answer-file rdo_answers.txt

The installation will take a while, so you'd better grab a cup of coffee and have a look at the output while the software installs on each of the three nodes. If everything goes as expected we should see a message similar to the one below at the end of the installation process.

 **** Installation completed successfully ******

Additional information:
 * Time synchronization installation was skipped. Please note that unsynchronized time on server instances might be problem for some OpenStack components.
 * File /root/keystonerc_admin has been created on OpenStack client host To use the command line tools you need to source the file.
 * To access the OpenStack Dashboard browse to
Please, find your login credentials stored in the keystonerc_admin in your home directory. 
 * Because of the kernel update the host requires reboot. 
 * Because of the kernel update the host requires reboot.
 * Because of the kernel update the host requires reboot.
 * The installation log file is available at: /var/tmp/packstack/20140617-001835-On5TCi/openstack-setup.log 
 * The generated manifests are available at: /var/tmp/packstack/20140617-001835-On5TCi/manifests 
[root@cloud-controller ~]#

Reboot the three nodes as instructed and proceed to the next step.

Configure Glance to use GlusterFS

RDO packstack cannot configure Glance to use GlusterFS as its storage backend during the installation, so it has to be configured afterwards. Fortunately the necessary steps are documented on the RDO site.

Stop Glance services.

service openstack-glance-registry stop
service openstack-glance-api stop

Install the required GlusterFS packages on the controller node.

yum install glusterfs-fuse glusterfs

Mount the GlusterFS share and set the ownership and permissions for the glance user.

mount -t glusterfs gluster.vlab.local:gv0 /var/lib/glance/images
chown -R glance:glance /var/lib/glance/images
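
To make the mount persistent across reboots you would typically also add an entry to /etc/fstab, something like the line below (a sketch reusing the same server and volume as the mount command above):

# append to /etc/fstab so the Glance images share is mounted at boot
gluster.vlab.local:gv0  /var/lib/glance/images  glusterfs  defaults,_netdev  0 0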

Start Glance services.

service openstack-glance-registry start
service openstack-glance-api start

With the installation finished, the OpenStack Horizon dashboard should be available at http://cloud_controller_fqdn/dashboard. Log in with the user admin; the password for this user can be found in the file /root/keystonerc_admin on the cloud controller node.

[root@cloud-controller ~]# cat keystonerc_admin
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=cd0ed5b5f251450f
export OS_AUTH_URL=
export PS1='[\u@\h \W(keystone_admin)]\$ '
[root@cloud-controller ~]#
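
To use the CLI tools just source that file first; a quick sanity check with the Havana-era clients could look like this (a sketch, not part of the original post):

source /root/keystonerc_admin
nova service-list     # nova services on the controller and compute node should show as up
neutron agent-list    # the agents registered by the network node should be listed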

If login fails with an unexpected error, check that the firewall is deactivated on all three nodes and that all services are up and running; in some of my deployments the Neutron server did not start after a reboot and I had to start it manually.
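
A minimal check on the affected node might look like this (a sketch; adjust to your init system and firewall policy):

service iptables status         # the firewall should be stopped, or properly configured
service neutron-server status   # check whether the Neutron server came up after the reboot
service neutron-server start    # start it manually if it did not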

Once logged into horizon navigate to Admin -> Hypervisor and check that the KVM hypervisor is properly registered.

Screen Shot 2014-06-17 at 01.56.04

Configure the NSX integration

At this point we have a working OpenStack installation with Neutron using the Open vSwitch plugin; now we will proceed to integrate our shiny OpenStack cloud with NSX.

Install NSX Neutron plugin

VMware provides a set of RPM packages containing the NSX plugin and a VMware-sanctioned version of Neutron; however, I found that these packages were older than my Havana installation and I didn't want to break any dependencies and spend hours trying to fix my installation.

A tar file containing all the source code for both the plugin and Neutron itself is also available, and instructions on how to compile and install it are provided in the NSX documentation. During my first trials I took this path, but this time I decided to use the upstream plugin instead since it was available in the RDO repositories.

yum install openstack-neutron-nicira

Configure NSX plugin

Register the Neutron server as a transport node on the NSX Controller Cluster.

ovs-vsctl set-manager ssl:

Stop neutron services.

service neutron-server stop

Edit the /etc/neutron/neutron.conf file and set the core_plugin value to neutron.plugins.nicira.NeutronPlugin.NvpPluginV2.
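
In neutron.conf that corresponds to a snippet like this (shown here just for clarity, using the plugin class named above):

# /etc/neutron/neutron.conf
[DEFAULT]
core_plugin = neutron.plugins.nicira.NeutronPlugin.NvpPluginV2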

Configure the nvp.ini file accordingly; this file can be found in /etc/neutron/plugins/nicira.

Set NSX admin user and password.

nvp_user = admin
nvp_password = admin

Configure NSX controllers IP addresses.

nvp_controllers =

Set the default Transport Zone UUID and the L3 and L2 gateway services UUIDs; these values can be retrieved from the NSX Manager web interface.

default_tz_uuid = b948fd35-5737-4a30-8741-43134771d40c
default_l3_gw_service_uuid = adee048c-3776-4bd2-ade1-42ab5c90bf9e

Configure metadata for Nova instances: set metadata_dhcp_host_route to False in the [DEFAULT] section. In the [nvp] section set the metadata mode to access_network.

enable_metadata_access_network = True
metadata_mode = access_network

Create a [database] section and configure the connection to the Neutron MySQL database; the data can be found in the neutron.conf file.

connection = mysql://neutron:ac2191a8661b4b66@

Finally, before starting the Neutron services, check nvp.ini with the neutron-check-nvp-config command. You should get something like this.

[root@neutron ~]# neutron-check-nvp-config /etc/neutron/plugins/nicira/nvp.ini
----------------------- Database Options -----------------------
        connection: mysql://neutron:ac2191a8661b4b66@
        retry_interval: 10
        max_retries: 10
-----------------------    NVP Options   -----------------------
        NVP Generation Timeout -1
        Number of concurrent connections to each controller 10
        max_lp_per_bridged_ls: 5000
        max_lp_per_overlay_ls: 256
-----------------------  Cluster Options -----------------------
        requested_timeout: 30
        retries: 2
        redirects: 2
        http_timeout: 10
Number of controllers found: 1
        Controller endpoint:
                Gateway(L3GatewayServiceConfig) uuid: adee048c-3776-4bd2-ade1-42ab5c90bf9e
        Transport zones: [u'b948fd35-5737-4a30-8741-43134771d40c']
[root@neutron ~]#

Start Neutron services

service neutron-server start

Create a network with the neutron command line to test that everything is working as expected.

[root@cloud-controller ~(keystone_admin)]# neutron net-create nsx-test-net
Created a new network:
| Field                 | Value                                |
| admin_state_up        | True                                 |
| id                    | 24f3b23f-a938-40e7-b026-14c8fb77ff34 |
| name                  | nsx-test-net                         |
| port_security_enabled | True                                 |
| shared                | False                                |
| status                | ACTIVE                               |
| subnets               |                                      |
| tenant_id             | 4d9fbabd4c9d4fa4a2185ff7559ae4e8     |
[root@cloud-controller ~(keystone_admin)]#
[root@cloud-controller ~(keystone_admin)]# neutron net-list
| id                                   | name         | subnets |
| 24f3b23f-a938-40e7-b026-14c8fb77ff34 | nsx-test-net |         |
[root@cloud-controller ~(keystone_admin)]#

Access the NSX Manager web interface, navigate to Logical Switches and confirm that a new logical switch with the same name and UUID as the new OpenStack network has been created.

Screen Shot 2014-06-21 at 22.15.20

Congratulations! We have successfully deployed a distributed installation of OpenStack with KVM as the underlying hypervisor, integrated with VMware NSX state-of-the-art network virtualization software. In future posts beyond this four-article series we will discuss some tips and other parts of OpenStack and NSX. Courteous comments are welcome.