In a previous article I showed the process to patch an existing VIO 1.0 installation, which, as you were able to see, is a clean and easy process. VMware announced VMware Integrated OpenStack 2.0 during VMworld US and it finally became GA a few weeks ago.

This new version of VIO updates all OpenStack code to the latest Kilo release and comes packaged with many interesting features, like Load-Balancing-as-a-Service (LBaaS) and auto-scaling capabilities based on Heat and Ceilometer.

With a new VIO version hot off the press, it is now time to upgrade your VIO 1.0.x environment to 2.0 and take advantage of all those great new goodies. The upgrade process is pretty straightforward and consists of three main stages.

  • Upgrade VIO Management Server
  • Deploy a new VIO 2.0 environment
  • Perform the data migration

Keep in mind that you will need enough hardware resources in your management cluster to host two full-fledged VIO installations at the same time during the migration process. For the sake of transparency, the lab environment where I tested the upgrade is based on vSphere 5.5 Update 2, NSX for vSphere 6.1.4 and VIO 1.0.2.

Step 1 – Upgrade VIO Management Server

Download the .deb upgrade package from the VMware website and upload it to the VIO Management Server.


Stage the upgrade package.

viouser@vio-oms:~$ sudo viopatch add -l vio-1.0-upgrade_2.0.0.3037964_all.deb
[sudo] password for viouser:
vio-1.0-upgrade_2.0.0.3037964_all.deb patch has been added.
viouser@vio-oms:~$ viopatch list
Name             Version        Type   Installed
---------------  -------------  -----  -----------
vio-1.0-upgrade  2.0.0.3037964  infra  No
vio-patch-2                     infra  Yes


Upgrade the management server with viopatch command.

viouser@vio-oms:~$ sudo viopatch install -p vio-1.0-upgrade -v 2.0.0.3037964
Installing patch vio-1.0-upgrade version 2.0.0.3037964
Installation complete for patch vio-1.0-upgrade version 2.0.0.3037964

Go to the vSphere Web Client, logout and log back in to verify that the new version is correct.


Step 2 – Deploy a new VIO 2.0 environment

With the VIO Management Server upgraded it is now time to deploy a fresh 2.0 environment. In the VIO plugin go to the Manage section and you will see a new Upgrades tab.


Before starting with the deployment, check in the Networks tab that there are enough free IP addresses for the new deployment; if there aren't, add a new IP range.


Click on the Upgrade icon. Select whether you want to participate in the Customer Experience Improvement Program, my recommendation here is to say yes to help our engineering team improve the VIO upgrade experience even more ;-), and enter the name for the new deployment.


Enter the public and private load-balanced IP addresses. Keep in mind that the public IP address must belong to the API subnet of the existing VIO 1.0 environment, and the private one to the management network segment.


In the last screen review the configured values and click Finish. The new environment will be deployed and you will be able to monitor it from the Upgrades tab.


Step 3 – Migrate the data

With the new environment up and ready we can start the data migration. From the Upgrades tab right-click on your existing VIO 1.0 installation and select Migrate Data.


The migration wizard will ask for confirmation, click OK. During the data migration all OpenStack services will be unavailable.


When the migration process is finished the status of the new VIO 2.0 environment will appear as Migrated and the previous VIO 1.0 will appear as Stopped.
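If you have the OpenStack Kilo command line clients handy, you can already run a quick sanity check against the new deployment at this point; the endpoint and credentials below are placeholders for your own environment.

export OS_AUTH_URL=https://<vio-2.0-public-ip>:5000/v2.0
export OS_USERNAME=admin
export OS_PASSWORD=<admin-password>
export OS_TENANT_NAME=admin

nova list          # instances survived the migration
neutron net-list   # tenant networks are still there
glance image-list  # images were migrated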


Open a browser and connect to the VIO 2.0 public IP to access the OpenStack Horizon interface, log in and verify that all your workloads, networks, images, etc. have been properly migrated. Log out from Horizon and go back to the Web Client. Now that the data has been migrated we need to migrate the original Public Virtual IP to the new environment.

Right-click on the VIO 1.0 deployment and select Switch To New Deployment from the menu.


A new pop-up will appear asking for confirmation since, again, OpenStack services will be unavailable during the IP reconfiguration.

After the reconfiguration the new VIO 2.0 deployment will be in Running status and the Public Virtual IP will be the same as the former 1.0 deployment.


The upgrade procedure is finished. We can now access Horizon using the existing FQDN, verify that everything is still working and enjoy our new OpenStack Kilo environment.


As with patching, upgrading your OpenStack cloud with VIO does not have to be a painful experience; VIO provides the best OpenStack experience in a vSphere environment. Kudos to my colleagues of Team OpenStack @ VMware.

Happy Stacking!


During this past week the OpenStack Foundation has finally released Liberty, the latest version of our favorite cloud software. This is the twelfth iteration of OpenStack and brings many great new features. To name the most interesting to me:

  • Enhanced Manageability – Finer-grained access controls and security settings, new role-based access control (RBAC) for Heat and Neutron, and simplified management capabilities.
  • Enhanced Orchestration – Heat comes with lots of new resources for management, automation and orchestration to support all new Liberty management capabilities.
  • Container Management – Magnum, the container management piece of OpenStack, makes its debut with Liberty, with support for container cluster management software like Kubernetes, Mesos and Docker Swarm.

I’m eager to get my hands dirty with Magnum and all new Heat capabilities.

Finally the OpenStack Foundation has released this great video showcasing Liberty core services.

You can also get the details about all changes in the Liberty Release Notes.

Happy stacking!


Fedora 22 was released a few months ago and, amongst many new features, it came with a replacement for yum as package manager called dnf, or DaNdiFied YUM. Yes, yum is still around, but it is now considered legacy software. DNF will also become the default package manager for RHEL and CentOS in the near future, so it is for the best that you get familiar with it sooner rather than later.

DNF Commands

The first thing you need to understand about dnf is that many commands are basically still the same but there are differences. Package management commands can be executed with almost the same syntax previously used with yum.

Search for a package.

[jrey@fed22-srv ~]$ sudo dnf search htop
Last metadata expiration check performed 1:25:54 ago on Mon Oct 5 23:47:45 2015.
=================================== N/S Matched: htop ====================================
htop.x86_64 : Interactive process viewer
php-lightopenid.noarch : PHP OpenID library
[jrey@fed22-srv ~]$

Install a package.

[jrey@fed22-srv ~]$ sudo dnf install htop

Remove a package.

[jrey@fed22-srv ~]$ sudo dnf remove htop

Get information about a package.

[jrey@fed22-srv ~]$ sudo dnf info htop
Last metadata expiration check performed 1:47:13 ago on Mon Oct 5 23:47:45 2015.
Available Packages
Name : htop
Arch : x86_64
Epoch : 0
Version : 1.0.3
Release : 4.fc22
Size : 91 k
Repo : fedora
Summary : Interactive process viewer
License : GPL+
Description : htop is an interactive text-mode process viewer for Linux, similar to
 : top(1).

[jrey@fed22-srv ~]$

Group and repository management commands are still the same as well.

[jrey@fed22-srv ~]$ sudo dnf repolist
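Group operations follow the same pattern; for example (the group name below is just an illustration, check dnf group list for what is available on your system):

[jrey@fed22-srv ~]$ sudo dnf group list
[jrey@fed22-srv ~]$ sudo dnf group install "Development Tools"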

Query the repositories for the package that provides a specific command.

[jrey@fed22-srv ~]$ sudo dnf repoquery --whatprovides htop
Last metadata expiration check performed 1:54:52 ago on Mon Oct 5 23:47:45 2015.
[jrey@fed22-srv ~]$

dnf comes with some powerful capabilities like history query.

[jrey@fed22-srv ~]$ sudo dnf history list
Last metadata expiration check performed 11 days, 19:14:54 ago on Wed Oct 7 02:56:21 2015.
ID | Command line             | Date and time    | Action  | Altered
 9 | history undo 8           | 2015-10-06 01:53 | Install | 1 
 8 | erase htop               | 2015-10-06 01:28 | Erase   | 1 
 7 | install htop -y          | 2015-10-06 01:28 | Install | 1 
 6 | remove htop              | 2015-10-06 01:14 | Erase   | 1 
 5 | install htop             | 2015-10-06 01:14 | Install | 1 
 4 | install make gcc kernel- | 2015-09-30 16:21 | Install | 9 
 3 | update                   | 2015-09-30 15:43 | I, U    | 112 
 2 | update                   | 2015-09-16 11:45 | I, O, U | 297 
 1 |                          | 2015-09-16 10:59 | Install | 658 EE
[jrey@fed22-srv ~]$

This can be especially helpful if you need to roll back a change, like cleaning up dependencies after uninstalling a package or reinstalling a removed package.

[jrey@fed22-srv ~]$ sudo dnf history undo 8

You can also look for duplicated packages among the installed ones.

[jrey@fed22-srv ~]$ sudo dnf repoquery --duplicated
Last metadata expiration check performed 0:30:42 ago on Tue Oct 6 02:48:41 2015.
[jrey@fed22-srv ~]$

Retrieve all available packages providing a specific piece of software or capability.

[jrey@fed22-srv ~]$ sudo dnf repoquery --whatprovides curl
Last metadata expiration check performed 0:38:00 ago on Tue Oct 6 02:48:41 2015.
[jrey@fed22-srv ~]$

This is a very basic introduction to dnf capabilities, but hopefully it gives you an idea of how it works. My advice is to review the DNF documentation for all the details.

The Photon Connection

VMware Photon comes with tdnf (Tiny DNF), a development by VMware that provides compatible repository and package management capabilities. Not every dnf command is available but the basic ones are there.

Package installation and updates.

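As a rough sketch (the package name is just an example of my own), installing and updating packages with tdnf follows the dnf syntax:

tdnf install wget
tdnf update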

Repository management.

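The familiar repolist subcommand is there too:

tdnf repolist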

In the future if I find the time I’ll write a new post with some advanced examples of dnf commands. Comments are welcome.


Lattice is the latest addition from Pivotal to its portfolio of open source projects. Lattice leverages various components from Cloud Foundry, in order to run containerized workloads in a cluster.

  • Diego – The new Cloud Foundry elastic runtime. Acts as an action-based scheduler and provides support for Docker images.
  • Doppler – The log and metric aggregator for the platform and the running workloads.
  • Gorouter – A software-based router with reverse proxy capabilities. Dynamically updated as the containers are spun up and down.

The first fact we need to understand about Lattice is that it is not intended to run production workloads. Instead Lattice is meant to be run on Virtualbox or VMware Fusion using Vagrant. In the end Lattice is an easy way to leverage all the power of Cloud Foundry for running containers on your laptop, without having to worry about all the Cloud Foundry installation details.

Deploying Lattice

Installing and running Lattice on your laptop is a relatively easy process. First, download the latest package from the GitHub Releases page for Lattice; there are packages available for Linux and OS X.

Unzip the package in a directory with the rest of your virtual machines.


Now copy the ltc utility to a directory in your PATH; I always use /usr/local/bin for this kind of binaries.
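A quick sketch of those two steps; the archive name and layout here are hypothetical, so adjust them to the release you actually downloaded:

# archive name and paths are hypothetical examples
unzip lattice-bundle-v0.4.0-osx.zip -d ~/VMs/lattice
sudo cp ~/VMs/lattice/ltc /usr/local/bin/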

Running Lattice

Running Lattice should be quite simple: change to the vagrant directory in the Lattice installation path, execute the vagrant up command and that's it. However there is a caveat: by default the Vagrantfile will use a default IP address if the LATTICE_SYSTEM_IP variable is not provided during the execution.

To avoid this issue pass the LATTICE_SYSTEM_IP variable to Vagrant during the execution. I personally have used both VMware Fusion and VMware AppCatalyst, but you can use Virtualbox too. For AppCatalyst the only requirement is to have appcatalyst-daemon running, since it is needed by the Vagrant provider.

LATTICE_SYSTEM_IP= vagrant up --provider vmware_fusion

With this we will have our Lattice instance up and running. Next we need to tell ltc how to connect to our Lattice instance; this operation is called targeting.

ltc target

With the API endpoint set, let's deploy our first application. For this example we will use the Lattice example app. Run the ltc create command with the name of the new app and the container image to be spun up as the arguments, as in the sketch below.

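Roughly, and assuming the publicly available Lattice example image, it looks like this (my-app is just a name I picked):

ltc create my-app cloudfoundry/lattice-app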

Open your favorite browser and access the application URL.

Screen Shot 2015-10-02 at 13.44.24

The index indicates the application instance we are hitting. Next we will scale up the application by adding two additional containers. Use ltc scale to add instances of the app and ltc status to retrieve its status, as sketched below.

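A rough sketch of those two commands, reusing the hypothetical app name from before:

ltc scale my-app 3
ltc status my-app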

Another useful operation with ltc is the capacity to get the logs for your app.

 2015-10-02 14:23:18 ☆ trantor in ~
○ → ltc logs my-app
10/02 14:30:11.10 [APP|2] Lattice-app. Says Hello. on index: 2
10/02 14:30:11.28 [APP|0] Lattice-app. Says Hello. on index: 0
10/02 14:30:11.60 [APP|1] Lattice-app. Says Hello. on index: 1
10/02 14:30:12.10 [APP|2] Lattice-app. Says Hello. on index: 2
10/02 14:30:12.28 [APP|0] Lattice-app. Says Hello. on index: 0
10/02 14:30:12.60 [APP|1] Lattice-app. Says Hello. on index: 1

I'll let you add more apps to Lattice and play around with ltc.

Comments are welcome, as always.


As with the rest of the NSX for vSphere components, any competent admin would like to configure a remote syslog server for the NSX Controllers. In my homelab I have vRealize Log Insight and recently I decided to configure it on my NSX Controllers and document the procedure here, mostly as a self-reference.

NSX Manager has the option to configure a remote syslog server using its management web site, but where is the option for the Controllers? Well, if you lurk around the NSX interface in vSphere Web Client you will quickly notice that the option is somehow missing. Actually the only way to enable it is using the NSX REST API. Let's see how to do it.

For this post I will use the Firefox REST Client add-on, but you can use your favorite REST client. First, any REST API call will require at least the Authentication header; in Firefox REST Client click on the Authentication drop-down menu, select Basic Authentication and enter the admin credentials.


Additionally, PUT and POST methods require a custom header that defines the content type of the HTTP request body. Use these values.

  • Name: Content-Type
  • Value: application/xml

With these two headers set, enter the API URL for the controller you want to configure.


To construct this URL you will need the controller ID, which can be obtained in the NSX interface in vSphere Web Client as shown below.

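For reference, the URL follows this pattern, where the manager address is a placeholder and controller-2 is just an example ID:

https://<nsx-manager-address>/api/2.0/vdn/controller/controller-2/syslog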

Select POST as the method. You will need to enter the body for the HTTP Request, in XML format. Use the below code as an example to build the content for the HTTP Request body.

This XML code will tell the NSX Manager to set the IP address in the syslogServer node as the remote syslog server for the controller in the URL. The protocol, port and log level are also defined.

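A minimal sketch of that XML body, with a placeholder syslog server address, would look like this:

<controllerSyslogServer>
  <syslogServer>192.168.1.10</syslogServer>
  <port>514</port>
  <protocol>UDP</protocol>
  <level>INFO</level>
</controllerSyslogServer>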

Submit the request and if everything is configured as described you will receive a 200 OK status code.


At this point, after repeating the call for each controller ID, the syslog server is configured for all NSX Controllers. You can check the configuration with an API call to the same URL using the GET method.

Comments are welcome.


It occurred to me recently that, after recovering a failed VIO deployment (in my case an issue with one of the database nodes), the OpenStack cluster still showed an Error state in the Web Client plugin.


After investigating a bit internally and thanks to my colleague Dimitri Desmidt I was able to solve it.

Open an SSH connection as viouser to the VIO Management Server and elevate to root. Launch psql with the following command:

/opt/vmware/vpostgres/current/bin/psql -U omsdb

From psql prompt execute:

update cluster set status='RUNNING';
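If you want to double-check the value before and after the change, a simple query against the same table will do (this is just a sanity check of my own, not part of the fix):

select status from cluster;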

After that, log out and log back in to the Web Client and the cluster will now be in Running status.


VMware has released the first patch for VMware Integrated OpenStack. This patch release comes with improvements around the installer and the Keystone service, and fixes some security issues. Review the Release Notes to get full details of what is included.

After the patch was released I thought it was a perfect time to upgrade my VIO lab, document the procedure and publish it, so without further ado let's get some patches installed!

Step 1 – Upload and install the patch

Get the patch from the VIO product download page; of course you need the proper rights to do it. The patch is a Debian package in .deb format. There are some caveats here: the intended way to upload and install the patch is using vSphere Web Client, from Manage -> Updates.


However, immediately after the release of the patch an issue was identified with this method, and until it is solved the safest way to apply the patch is using the CLI. Use your favorite SCP/SFTP client to upload the patch to the VIO Management Server as viouser.

Add the patch using viopatch command.

viouser@vio-manager:~$ sudo viopatch add -l /home/viouser/vio-patch-1_1.0.1.2668568_all.deb
[sudo] password for viouser:
/home/viouser/vio-patch-1_1.0.1.2668568_all.deb patch has been added.

List the patches to verify that has been added.

viouser@vio-manager:~$ viopatch list
Name         Version        Type   Installed
-----------  -------------  -----  -----------
vio-patch-1  1.0.1.2668568  infra  No


Install the patch. Before installing, verify that the VIO cluster is in Running status or the update will fail. The patch can also be applied before deploying OpenStack.

viouser@vio-manager:~$ sudo viopatch install --patch vio-patch-1 --version 1.0.1.2668568
[sudo] password for viouser:
Installing patch vio-patch-1 version 1.0.1.2668568
Installation complete for patch vio-patch-1 version 1.0.1.2668568

Step 2 – Verify the installation

Log out and log back in to vSphere Web Client. The new version and build number can be verified in the Summary tab.


Also in Manage -> Updates the newly installed patch can be seen in more detail.

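If you prefer the CLI, listing the patches again from the VIO Management Server should now report it as installed, along these lines:

viouser@vio-manager:~$ viopatch list
Name         Version        Type   Installed
-----------  -------------  -----  -----------
vio-patch-1  1.0.1.2668568  infra  Yes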

And this is it. Anyone that has ever had to endure the pain of patching an OpenStack installation, either in a lab or in a production environment, will surely appreciate how clean and easy the process is in VIO.


FirewallD, or Dynamic Firewall Manager, is the replacement for the IPTables firewall in Red Hat Enterprise Linux. The main improvement over IPTables is the capacity to make changes without the need to restart the whole firewall service.

FirewallD was first introduced in Fedora 18 and has been the default firewall mechanism for Fedora since then. Finally this year Red Hat decided to include it in RHEL 7, and of course it also made its way to the different RHEL clones like CentOS 7 and Scientific Linux 7.

Checking FirewallD service status

To get the basic status of the service simply use firewall-cmd --state.

[root@centos7 ~]# firewall-cmd --state
running
[root@centos7 ~]#

If you need to get a more detailed state of the service you can always use systemctl command.

[root@centos7 ~]# systemctl status firewalld.service
firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled)
   Active: active (running) since Wed 2014-11-19 06:47:42 EST; 32min ago
 Main PID: 873 (firewalld)
   CGroup: /system.slice/firewalld.service
           └─873 /usr/bin/python -Es /usr/sbin/firewalld --nofork --nopid

Nov 19 06:47:41 centos7.vlab.local systemd[1]: Starting firewalld - dynamic firewall daemon...
Nov 19 06:47:42 centos7.vlab.local systemd[1]: Started firewalld - dynamic firewall daemon.
[root@centos7 ~]#

To enable or disable FirewallD again use systemctl commands.

systemctl enable firewalld.service
systemctl disable firewalld.service

Managing firewall zones

FirewallD introduces the concept of zones; a zone is no more than a way to define the level of trust for a set of connections. A connection definition can only be part of one zone at a time, but zones can be grouped. There is a set of predefined zones:

  • Public – For use in public areas. Only selected incoming connections are accepted.
  • Drop – Any incoming network packets are dropped, there is no reply. Only outgoing network connections are possible.
  • Block – Any incoming network connections are rejected with an icmp-host-prohibited message for IPv4 and icmp6-adm-prohibited for IPv6. Only network connections initiated within this system are possible.
  • External – For use on external networks with masquerading enabled especially for routers. Only selected incoming connections are accepted.
  • DMZ – For computers in your DMZ network, with limited access to the internal network. Only selected incoming connections are accepted.
  • Work – For use in work areas. Only selected incoming connections are accepted.
  • Home – For use in home areas. Only selected incoming connections are accepted.
  • Trusted – All network connections are accepted.
  • Internal – For use on internal networks. Only selected incoming connections are accepted.

By default all interfaces are assigned to the public zone. Each zone is defined in its own XML file stored in /usr/lib/firewalld/zones. For example the public zone XML file looks like this.

[root@centos7 zones]# cat public.xml
<?xml version="1.0" encoding="utf-8"?>
<zone>
  <short>Public</short>
  <description>For use in public areas. You do not trust the other computers on networks to not harm your computer. Only selected incoming connections are accepted.</description>
  <service name="ssh"/>
  <service name="dhcpv6-client"/>
</zone>
[root@centos7 zones]#

Retrieve a simple list of the existing zones.

[root@centos7 ~]# firewall-cmd --get-zones
block dmz drop external home internal public trusted work
[root@centos7 ~]#

Get a detailed list of the same zones.

firewall-cmd --list-all-zones

Get the default zone.

[root@centos7 ~]# firewall-cmd --get-default-zone
public
[root@centos7 ~]#

Get the active zones.

[root@centos7 ~]# firewall-cmd --get-active-zones
public
  interfaces: eno16777736 virbr0
[root@centos7 ~]#

Get the details of a specific zone.

[root@centos7 zones]# firewall-cmd --zone=public --list-all
public (default, active)
  interfaces: eno16777736 virbr0
  services: dhcpv6-client ssh
  masquerade: no
  rich rules:

[root@centos7 zones]#

Change the default zone.

firewall-cmd --set-default-zone=home

Interfaces and sources

Zones can be bound to a network interface and to a specific network addressing or source.

Assign an interface to a different zone, the first command assigns it temporarily and the second makes it permanently.

firewall-cmd --zone=home --change-interface=eth0
firewall-cmd --permanent --zone=home --change-interface=eth0

Retrieve the zone an interface is assigned to.

[root@centos7 zones]# firewall-cmd --get-zone-of-interface=eno16777736
public
[root@centos7 zones]#

Bind the zone work to a source.

firewall-cmd --permanent --zone=work --add-source=

List the sources assigned to a zone, in this case work.

[root@centos7 ~]# firewall-cmd --permanent --zone=work --list-sources
[root@centos7 ~]#


Managing services

FirewallD can assign services permanently to a zone; for example, to assign the http service to the dmz zone. A service can also be assigned to multiple zones.

[root@centos7 ~]# firewall-cmd --permanent --zone=dmz --add-service=http
[root@centos7 ~]# firewall-cmd --reload
[root@centos7 ~]#

List the services assigned to a given zone.

[root@centos7 ~]# firewall-cmd --list-services --zone=dmz
http ssh
[root@centos7 ~]#

Other operations

Besides zone, interface and service management, FirewallD, like other firewalls, can perform several network-related operations like masquerading, setting direct rules and managing ports.

Masquerading and port forwarding

Add masquerading to a zone.

firewall-cmd --zone=external --add-masquerade

Query if masquerading is enabled in a zone.

[root@centos7 ~]# firewall-cmd --zone=external --query-masquerade
[root@centos7 ~]#

You can also set port redirection. For example to forward traffic originally intended for port 80/tcp to port 8080/tcp.

firewall-cmd --zone=external --add-forward-port=port=80:proto=tcp:toport=8080

A destination address can also be added to the above command.

firewall-cmd --zone=external --add-forward-port=port=80:proto=tcp:toport=8080:toaddr=

Set direct rules

Create a firewall rule for 8080/tcp port.

firewall-cmd --direct --add-rule ipv4 filter INPUT -p tcp -m state --state NEW -m tcp --dport 8080 -j ACCEPT

Port management

Allow a port temporarily in a zone.

firewall-cmd --zone=dmz --add-port=8080/tcp
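To make the port permanent you would combine it with the --permanent flag and a reload, just as we did for services:

firewall-cmd --permanent --zone=dmz --add-port=8080/tcp
firewall-cmd --reload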

Hopefully you found the post useful to start working with FirewallD. Comments are welcome.


VMware Integrated OpenStack was made available for customers last Thursday. It is an exciting time to be part of VMware.

Coincidentally with this announcement I have updated my original post about VIO installation with new screenshots and information from the GA version of VIO, and this post has also been published in our official VMware OpenStack Blog.

Also, if you want to quickly experience VMware and OpenStack all together, there is a new Hands On Lab available about VIO; its name is HOL-SDC-1420 VMware Integrated OpenStack and NSX.

Happy Stacking!


VMware Integrated OpenStack, or VIO, was announced during last year's VMworld in San Francisco and has finally been released by VMware today.

For me this is a very special release because I have been one of the lucky internal adopters and beta testers of VIO. I have spent many hours working with several VIO builds and trying to help our incredible engineering team. This is in my opinion a really solid and well designed product and will be a game changer in the OpenStack world. Honestly I cannot be more excited :)


VIO is basically a VMware-supported OpenStack distribution prepared to run on top of an existing VMware infrastructure. VMware Integrated OpenStack will empower any VMware administrator to easily deliver and operate an enterprise production-grade OpenStack cloud on VMware components. This means that you will be able to take advantage of all the great VMware vSphere features like HA, DRS or VSAN for your OpenStack cloud, and also extend and integrate it with other VMware management components like vRealize Operations and vRealize Log Insight.

VIO components

VIO is made of two main building blocks: the VIO Manager and the OpenStack components. VIO is packaged as an OVA file that contains the VIO Manager server and an Ubuntu Linux virtual machine to be used as the template for the different OpenStack components.

The OpenStack services in VIO are deployed as a distributed highly available solution formed by the following components:

  • OpenStack controllers. Two virtual machines running Horizon Dashboard, Nova (API, scheduler and VNC) services, Keystone, Heat, Glance, and Cinder services in an active-active cluster.
  • Memcached cluster.
  • RabbitMQ cluster, for messaging services used by all OpenStack services.
  • Load Balancer virtual machines, an active-active cluster managing the internal and public virtual IP addresses.
  • Nova Compute machine, running the n-cpu service.
  • Database cluster. A three node MariaDB Galera cluster that stores the OpenStack metadata.
  • Object Storage machine, running Swift services.
  • DHCP nodes. These nodes are only required if NSX is not selected as provider for Neutron.

Installation requirements

To be able to successfully deploy VIO you will need at least the following:

  • One management cluster with two to three hosts, depending on the hardware resources of the hosts.
  • One Edge cluster. As with any NSX for vSphere deployment it is recommended to deploy a separate cluster to run all Edge gateway instances.
  • One compute cluster to be used by Nova to run instances. One ESXi host will be enough but again that will depend on how much resources are available and what kind of workloads you want to run.
  • Management network with at least 15 static IP addresses available.
  • External network with a minimum of two IP addresses available. This is the network where Horizon portal will be exposed and that will be used by the tenants to access OpenStack APIs and services.
  • Data network, only needed if NSX is going to be used. The different tenant logical networks will be created on top of this. The management network can be used, but it is recommended to have a separate network.
  • NSX for vSphere, 6.1.2 at minimum. It has to be set up prior to the VIO deployment if the NSX plugin is going to be used with Neutron.
  • Distributed Port Group. In case of choosing DVS-based networking, a vSphere port-group tagged with VLAN 4095 must be set up. This port group will be used as the data network.

The hardware requirements are around 56 vCPU, 192GB of memory and 605GB of storage. To that you have to add the resources required by NSX for vSphere for the NSX Manager, the three NSX Controllers and the NSX Edge pool, if NSX is going to be used.

Anyway in a future post I will review in detail all the pre-requisites and their setup process for VIO, and the integration between NSX-v and Neutron.

VIO Installation

Now that we have seen a bit of VIO I am going to show how to perform an installation.

Deploying VIO Manager

The first step is to deploy VIO OVA on our management cluster. From vSphere Web Client launch the Deploy OVF Template wizard and enter the URL to the VIO OVA file.


Accept the EULA and proceed to configure the template. First, as with any OVA template, enter the name and the folder.


Select the datastore and the storage format.


Select the network for VIO Manager.


Now we will customize the template; this includes entering the VIO Manager server networking settings, NTP, SSO lookup service URL and Syslog server.


Go through the next two screens, click Finish and start the deployment. Once it is finished you will have a new vApp with the two virtual machines. Our next step is to register the management server with vCenter: power on the OMS vApp and, when the management server is fully started, log out of vSphere Web Client. Log back in to vSphere Web Client and you will notice a new icon in the Home page.


Access the VIO plugin interface and in the Summary you should see that VIO Manager has automatically registered itself with vCenter.


From this screen you can also change the VIO Manager server in case you need to re-deploy a new one. To do so select the management server in the pop-up and click OK.


Accept the SSL certificate to finish the procedure.


VIO Manager Server will now be displayed as connected in the Summary tab.


Deploying OpenStack

With VIO Manager running and connected to our vCenter it is time now to deploy OpenStack. Proceed to the Getting Started tab and click Deploy OpenStack.


A new wizard will be launched. In the first screen we must select the deployment type. VIO allows you to deploy a new OpenStack installation or to deploy from a previously saved template file.


Provide the vCenter administrative credentials.


Select the management cluster where we are going to deploy VIO.


Next you need to configure the Management and External networks. Select the appropriate vSphere port-groups for each network and fill in the network ranges, gateway, netmask and DNS server fields.


Enter the values for the load balancer configuration:

  • Public Virtual IP address
  • Public Hostname, this hostname must resolve to the Public IP address.


Add a cluster to be used for Nova.


Add the datastores to be used by Nova to store the different instances. If you have a VSAN datastore, keep in mind that to be able to use it with Nova the images stored in Glance have to be streamOptimized.
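For reference, uploading a streamOptimized image to Glance with the Kilo-era client looks roughly like this; the image name, file and adapter type are placeholders for your own values:

glance image-create --name ubuntu-14.04 \
  --disk-format vmdk --container-format bare \
  --property vmware_disktype="streamOptimized" \
  --property vmware_adaptertype="lsiLogic" \
  --file ubuntu-14.04-streamOptimized.vmdk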


Select the datastore to be used by Glance image service.


Configure Neutron networking. For Neutron there are two different options:

  • DVS-based networking
  • NSX networking

For DVS simply select the Virtual Distributed Switch where you created the port-group for the data network with the VLAN 4095 configured.

For NSX deployment you must enter:

  • NSX Manager IP address.
  • NSX Manager administrative username.
  • NSX Manager administrative user password.
  • VDN Scope. Basically the Transport Zone in NSX-v to be used as transport layer for data traffic.
  • Edge Cluster. A vSphere cluster to deploy the NSX Edge instances.
  • Virtual Distributed Switch for NSX networking.
  • External Network. This is a port group to be used as the external network by instances in OpenStack via a virtual router. This port group should be accessible from the compute, management and edge clusters.


During the Neutron configuration the wizard will connect to the NSX Manager with the provided credentials and will ask to accept the SSL certificate.


In the next screen the wizard will ask for the OpenStack admin user, password and project. Also you can select the Keystone type option:

  • Database
  • Active Directory as LDAP Server.


Finally set the syslog server; it is not mandatory to set this value but it is highly recommended.


Review the configuration and click Finish.



The deployment will take some time, depending on your storage backend. In my testing lab it took around one hour, but it is a nested environment running on NFS, so you can expect much better times deploying in a real-world setup. When it is finished you can review the different components of VIO in vSphere Web Client under VMs and Templates; there will be a new folder structure containing all the VIO virtual machines.


Validate your VIO installation

In your favorite browser open an HTTPS session against the public hostname or virtual IP address configured during VIO installation. The Horizon portal login page will display.


Enter the admin credentials and the OpenStack admin Overview page will show up. Then access the Hypervisors area and check that the cluster selected for Nova appears there.


At this point VIO is set up and you can start working in Horizon or with the CLI, as with any other OpenStack distribution.
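If you go the CLI route, a minimal sketch would be pointing the standard OpenStack clients at the public endpoint configured during the installation; the hostname and password below are placeholders:

export OS_AUTH_URL=https://<public-hostname>:5000/v2.0
export OS_USERNAME=admin
export OS_PASSWORD=<admin-password>
export OS_TENANT_NAME=admin

nova hypervisor-list
glance image-list
neutron net-list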

Have fun and happy stacking!