Archives For networking

In the series of posts about OpenStack and KVM we saw how to add a KVM node as a transport node to NSX for multi-hypervisor environments. In this post we will discuss how to perform the same procedure for an ESXi host.

NSX vSwitch installation

Before proceeding with the installation keep in mind that NSX vSwitch can coexist on an ESXi host only with the VMware Standard Switch; distributed switches are not supported.

Install the NSX vSwitch vib file using esxcli.

~ # esxcli software vib install --no-sig-check -v /tmp/vmware-nsxvswitch-2.1.3-35984-prod2013-stage-release.vib
Installation Result
   Message: Operation finished successfully.
   Reboot Required: false
   VIBs Installed: VMware_bootbank_vmware-nsxvswitch_2.1.3-35984
   VIBs Removed:
   VIBs Skipped:
~ #
~ # esxcli software vib list | grep nsx
vmware-nsxvswitch              2.1.3-35984                           VMware  VMwareCertified   2014-07-13
~ #

Check that a new virtual switch has been created on the host. Don't use esxcli but the good old esxcfg-vswitch command, because for now there is no namespace available in esxcli for NSX vSwitch.

~ # esxcfg-vswitch -l
Switch Name      Num Ports   Used Ports  Configured Ports  MTU     Uplinks
vSwitch0         1536        7           128               1500    vmnic0,vmnic1

  PortGroup Name        VLAN ID  Used Ports  Uplinks
  vMotion               0        1           vmnic0,vmnic1
  Management Network    0        1           vmnic0,vmnic1

Switch Name      Num Ports   Used Ports  Configured Ports  MTU     Uplinks
vSwitch1         1536        6           128               1500    vmnic2,vmnic3

  PortGroup Name        VLAN ID  Used Ports  Uplinks
  vsan                  0        1           vmnic2,vmnic3

Switch Name      Num Ports   Used Ports  Uplinks
nsx-vswitch      1536        1

~ #

NSX vSwitch configuration

With NSX vSwitch installed, proceed to the configuration. First connect an uplink to the switch; this will create an NVS bridge, which is the equivalent of an OVS bridge in Open vSwitch.

nsxcli uplink/connect vmnic4

Set an IP address for the uplink; this IP address will be used later to create the transport tunneling endpoint when we connect the ESXi host as a transport node to NSX. You can also specify the VLAN tag by appending vlan=<vlan_id> as an additional parameter to the command, as illustrated in the second example below.

nsxcli uplink/set-ip vmnic4 192.168.110.123 255.255.255.0
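
For instance, if the uplink traffic had to be tagged, the VLAN parameter would simply be appended to the same command. The VLAN ID 110 is only an illustrative value for this sketch.

nsxcli uplink/set-ip vmnic4 192.168.110.123 255.255.255.0 vlan=110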

Validate that the bridge is correctly configured. Use nsxcli port/show to verify the bridge and nsxcli uplink/show for the uplink.

~ # nsxcli port/show
br-int:
-------

br-vmnic4:
----------
vmnic4
vmk3

~ #

In the uplink/show output look for an entry like the one below.

==============================
vmnic4:
MAC       : 00:50:56:01:08:ca
Link      : Up
MTU       : 1500
IP config :
------------------------------
VMK intf  : vmk3
MAC addr  : 00:50:56:6b:ca:dd
Services  : NSX-Tunneling
VLAN      : 0
IP        : 192.168.110.123(Static)
Mask      : 255.255.255.0(Static)
..............................
------------------------------
Connection : NVS (uplink0)
Configured as standalone interface
==============================

You can also check the status of the vmkernel interface with esxcli and with nsxcli.

 ~ # esxcli network ip interface ipv4 get -i vmk3
Name  IPv4 Address     IPv4 Netmask   IPv4 Broadcast   Address Type  DHCP DNS
----  ---------------  -------------  ---------------  ------------  --------
vmk3  192.168.110.123  255.255.255.0  192.168.110.255  STATIC           false
~ #
~ # nsxcli vmknic/show vmk3
vmk3:
MAC addr  : 00:50:56:6b:ca:dd
Services  : NSX-Tunneling
VLAN      : 0
IP        : 192.168.110.123(Static)
Mask      : 255.255.255.0(Static)
Assoc with: vmnic4
..............................
~ #

The next step is to configure the gateway for NSX vSwitch.

~ # nsxcli gw/set tunneling 192.168.110.2
~ #
~ # nsxcli gw/show tunneling
Tunneling:
Configured default gateway       : 192.168.110.2
Currently active default gateway : 192.168.110.2 (Manual)
~ #

Connect the NSX vSwitch instance to the NSX Controller Cluster.

~ # nsxcli manager/set ssl:192.168.110.31
~ #
~ # nsx-dbctl show
e42912a7-693f-43ee-84d5-11b5bb3491eb
    Manager "ssl:192.168.110.31:6632"
    Bridge br-int
        fail_mode: secure
    Bridge "br-vmnic4"
        fail_mode: standalone
        Port "vmk3"
            Interface "vmk3"
        Port "vmnic4"
            Interface "vmnic4"
    ovs_version: "2.1.3.35984"
~ #

Create an opaque network. An opaque network is basically a transport bridge that will provide the network backend for the virtual machines. Opaque networks must be identified during their creation by their type and ID.

In this particular case the ESXi host will be added later to a cluster acting as the Nova compute backend for my OpenStack lab, so the network type must be nsx.network and the UUID has to match the value configured for the integration_bridge setting in the nova.conf file. We also need to specify the port attach mode, which for OpenStack environments is manual.
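
For reference, this is a minimal sketch of the nova.conf fragment I am referring to; it assumes the setting lives in the [DEFAULT] section and only illustrates that the value must match the opaque network created below.

[DEFAULT]
# must match the name/UUID of the opaque network created with nsxcli network/add
integration_bridge = NSX-Bridge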

~ # nsxcli network/add NSX-Bridge NSX-Bridge nsx.network manual
success
~ #
~ # nsxcli network/show
UUID                                        Name                    Type            Mode
----                                        ----                    ----            ----
NSX-Bridge                                  NSX-Bridge              nsx.network     manual
~ #

Add ESXi as transport node

The final part of the procedure is to add our new ESXi server as a transport node to NSX. Log into the NSX Manager web UI and launch the wizard to add a new Hypervisor. First specify the name of the new hypervisor.

Screen Shot 2014-07-14 at 02.13.30

Set the integration bridge.

Screen Shot 2014-07-14 at 02.22.44

Select Security Certificate as credential type and paste the NSX vSwitch SSL certificate. The certificate can be retrieved from /etc/nsxvswitch/nsxvswitch-cert.pem.
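
To display the certificate for copying, you can simply cat the file from the ESXi shell.

~ # cat /etc/nsxvswitch/nsxvswitch-cert.pem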

Screen Shot 2014-07-14 at 02.29.50

Add an STT transport connector, using the IP address configured for the uplink.

Screen Shot 2014-07-14 at 02.31.57

Click Save & View and verify the new hypervisor configuration in NSX.

Screen Shot 2014-07-14 at 02.36.15

The setup of our new ESXi server within NSX is done. As always, comments are welcome.

Juanma.

Welcome to Part 4 of this series about OpenStack and VMware NSX. As a quick review, in the first three parts we described the different VMware NSX components and concepts, how to install and configure them, and how to install and configure the KVM and GlusterFS nodes. In this fourth part of the series we will see how to deploy OpenStack in a three-node architecture and integrate it with our existing NSX installation.

If you remember the first post where I described the components of the lab, there were three OpenStack dedicated nodes:

  • Cloud controller node
  • Neutron networking node
  • Nova compute node

Instead of installing from scratch I decided to go with one of the OpenStack distributions: RDO. What is RDO and why did I choose it? RDO is a community distribution of OpenStack sponsored by Red Hat. Yes, I just said Red Hat, so please stop the eye rolling.

RDO is the upstream version of RHEL OpenStack Platform, the commercial version of OpenStack by Red Hat. During the last months I have tried several flavors of OpenStack, and while I still think that installing from scratch is the best way to learn (in fact that is what I did for my first labs), RDO gives me the ability to quickly create my testing labs. Also, RHEL OpenStack Platform version 4, based on RDO, is supported with VMware NSX and I really couldn't resist trying it.

Installation prerequisites

Before proceeding with the installation there are some preparations we need to perform on the OpenStack nodes.

SSH key generation

Generate a new SSH key to be distributed to the OpenStack nodes later during the installation. Use ssh-keygen to generate the new key.
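
A possible invocation is shown below, a sketch that assumes an RSA key with an empty passphrase for lab convenience.

ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa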

Neutron server preparation

On the Neutron node install the NSX Open vSwitch version as described in Part 3 for the KVM nodes; the network interface configuration is quite similar.

With the network interface configuration files properly set up, exit your SSH session and log into the VM console to create the OVS bridges like in the example below.

ovs-vsctl add-br br-ex
ovs-vsctl br-set-external-id br-ex bridge-id br-ex
ovs-vsctl set Bridge br-ex fail-mode=standalone
ovs-vsctl add-port br-ex eth0

OpenStack installation

RDO relies on packstack for the installation of its different components. Packstack is a tool that installs all the required software on the nodes based on an answer file. Enable the RDO and EPEL repos and install the openstack-packstack package.

yum install -y http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
yum install -y http://repos.fedorapeople.org/repos/openstack/openstack-havana/rdo-release-havana-8.noarch.rpm
yum install -y openstack-packstack

Once it is installed, generate a new answer file; we will use this file as a template for our installation.

packstack --gen-answer-file rdo_answers.txt

Edit the packstack answer file and modify the following entries, leaving the rest with their default values. It is important not to remove any entry or the packstack execution will fail.

Deactivate services we do not want to deploy.

CONFIG_SWIFT_INSTALL=n
CONFIG_CEILOMETER_INSTALL=n
CONFIG_NAGIOS_INSTALL=n
CONFIG_CINDER_INSTALL=n

Nova settings.

CONFIG_NOVA_COMPUTE_HOSTS=192.168.82.42
CONFIG_NOVA_NETWORK_HOSTS=

And finally Neutron settings. Don’t set any L3 value since that part will be managed by NSX.

CONFIG_NEUTRON_SERVER_HOST=192.168.82.41
CONFIG_NEUTRON_DHCP_HOSTS=192.168.82.41
CONFIG_NEUTRON_METADATA_HOSTS=192.168.82.41

Launch OpenStack installation process.

packstack --answer-file rdo_answers.txt

The installation will take a while, so you'd better grab a cup of coffee and have a look at the output while the software installs on each of the three nodes. If everything goes as expected we should see a message like the one below at the end of the installation process.

 **** Installation completed successfully ******

Additional information:
 * Time synchronization installation was skipped. Please note that unsynchronized time on server instances might be problem for some OpenStack components.
 * File /root/keystonerc_admin has been created on OpenStack client host 192.168.82.40. To use the command line tools you need to source the file.
 * To access the OpenStack Dashboard browse to http://192.168.82.40/dashboard.
Please, find your login credentials stored in the keystonerc_admin in your home directory. 
 * Because of the kernel update the host 192.168.82.42 requires reboot. 
 * Because of the kernel update the host 192.168.82.40 requires reboot.
 * Because of the kernel update the host 192.168.82.41 requires reboot.
 * The installation log file is available at: /var/tmp/packstack/20140617-001835-On5TCi/openstack-setup.log 
 * The generated manifests are available at: /var/tmp/packstack/20140617-001835-On5TCi/manifests 
[root@cloud-controller ~]#

Reboot the three nodes as instructed and proceed to the next step.

Configure Glance to use GlusterFS

RDO packstack cannot configure Glance to use GlusterFS as its storage backend during the installation, so it has to be configured afterwards. Fortunately the necessary steps are documented on the RDO site.

Stop Glance services.

service openstack-glance-registry stop
service openstack-glance-api stop

Install the required Gluster packages on the controller node.

yum install glusterfs-fuse glusterfs

Mount the GlusterFS share and set the ownership and permissions for the glance user.

mount -t glusterfs gluster.vlab.local:gv0 /var/lib/glance/images
chown -R glance:glance /var/lib/glance/images

Start Glance services.

service openstack-glance-registry start
service openstack-glance-api start

With the installation finished, the OpenStack Horizon dashboard should be available at http://cloud_controller_fqdn/dashboard. Log in with the user admin; the password for this user can be found in the file /root/keystonerc_admin on the cloud controller node.

[root@cloud-controller ~]# cat keystonerc_admin
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=cd0ed5b5f251450f
export OS_AUTH_URL=http://192.168.82.40:35357/v2.0/
export PS1='[\u@\h \W(keystone_admin)]\$ '
[root@cloud-controller ~]#

If the login fails with an unexpected error, check that the firewall is disabled on all three nodes and that all services are up and running; in some of my deployments the Neutron server did not start after a reboot and I had to start it manually.
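
As an illustration, on these CentOS 6 nodes those checks would look something like the commands below; adapt them to your own deployment.

service iptables stop
chkconfig iptables off
service neutron-server status
service neutron-server start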

Once logged into Horizon, navigate to Admin -> Hypervisor and check that the KVM hypervisor is properly registered.
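
If you prefer the command line, the same check can be done from the cloud controller with the nova client; this sketch assumes the keystonerc_admin credentials file created by packstack.

source /root/keystonerc_admin
nova hypervisor-list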

Screen Shot 2014-06-17 at 01.56.04

Configure the NSX integration

At this point we have a working OpenStack installation with Neutron using the Open vSwitch plugin; now we will proceed to integrate our shiny OpenStack cloud with NSX.

Install NSX Neutron plugin

VMware provides a set of RPM packages containing the NSX plugin and a VMware-sanctioned version of Neutron; however, I found that these packages were older than my Havana installation and I didn't want to break any dependencies and spend hours trying to fix my installation.

A tar file containing the sources for both the plugin and Neutron itself is also available, and instructions on how to compile and install it are provided in the NSX documentation. During my first trials I took this path, but this time I decided to use the upstream plugin instead since it was available in the RDO repositories.

yum install openstack-neutron-nicira

Configure NSX plugin

Register the Neutron server as a transport node on the NSX Controller Cluster.

ovs-vsctl set-manager ssl:192.168.82.45

Stop neutron services.

service neutron-server stop

Edit the /etc/neutron/neutron.conf file and set the core_plugin value to neutron.plugins.nicira.NeutronPlugin.NvpPluginV2.
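
The relevant fragment of neutron.conf would simply look like this.

[DEFAULT]
core_plugin = neutron.plugins.nicira.NeutronPlugin.NvpPluginV2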

Configure the nvp.ini file accordingly; this file can be found in /etc/neutron/plugins/nicira.

Set NSX admin user and password.

nvp_user = admin
nvp_password = admin

Configure NSX controllers IP addresses.

nvp_controllers = 192.168.82.45

Set the default Transport Zone UUID and the L3 and L2 gateway services UUIDs; these values can be retrieved from the NSX Manager web UI.

default_tz_uuid = b948fd35-5737-4a30-8741-43134771d40c
default_l3_gw_service_uuid = adee048c-3776-4bd2-ade1-42ab5c90bf9e

Configure metadata for the Nova instances: set metadata_dhcp_host_route to False in the [DEFAULT] section. In the [nvp] section set the metadata mode to access_network.

enable_metadata_access_network = True
metadata_mode = access_network

Create a [database] section and configure the connection to the Neutron MySQL database; the data can be found in the neutron.conf file.

[database]
connection = mysql://neutron:ac2191a8661b4b66@192.168.82.40/ovs_neutron

Finally, before starting the Neutron services, check nvp.ini with the neutron-check-nvp-config command. You should get something like this.

[root@neutron ~]# neutron-check-nvp-config /etc/neutron/plugins/nicira/nvp.ini
----------------------- Database Options -----------------------
        connection: mysql://neutron:ac2191a8661b4b66@192.168.82.40/ovs_neutron
        retry_interval: 10
        max_retries: 10
-----------------------    NVP Options   -----------------------
        NVP Generation Timeout -1
        Number of concurrent connections to each controller 10
        max_lp_per_bridged_ls: 5000
        max_lp_per_overlay_ls: 256
-----------------------  Cluster Options -----------------------
        requested_timeout: 30
        retries: 2
        redirects: 2
        http_timeout: 10
Number of controllers found: 1
        Controller endpoint: 192.168.82.45:443
                Gateway(L3GatewayServiceConfig) uuid: adee048c-3776-4bd2-ade1-42ab5c90bf9e
        Transport zones: [u'b948fd35-5737-4a30-8741-43134771d40c']
Done.
[root@neutron ~]#

Start Neutron services.

service neutron-server start

Create a network with the neutron command line to test that everything is working as expected.

[root@cloud-controller ~(keystone_admin)]# neutron net-create nsx-test-net
Created a new network:
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| id                    | 24f3b23f-a938-40e7-b026-14c8fb77ff34 |
| name                  | nsx-test-net                         |
| port_security_enabled | True                                 |
| shared                | False                                |
| status                | ACTIVE                               |
| subnets               |                                      |
| tenant_id             | 4d9fbabd4c9d4fa4a2185ff7559ae4e8     |
+-----------------------+--------------------------------------+
[root@cloud-controller ~(keystone_admin)]#
[root@cloud-controller ~(keystone_admin)]# neutron net-list
+--------------------------------------+--------------+---------+
| id                                   | name         | subnets |
+--------------------------------------+--------------+---------+
| 24f3b23f-a938-40e7-b026-14c8fb77ff34 | nsx-test-net |         |
+--------------------------------------+--------------+---------+
[root@cloud-controller ~(keystone_admin)]#

Access the NSX Manager web interface, navigate to Logical Switches and confirm that a new logical switch with the same name and UUID as the new OpenStack network has been created.

Screen Shot 2014-06-21 at 22.15.20

Congratulations! We have successfully deployed a distributed installation of OpenStack with KVM as the underlying hypervisor and integrated it with VMware NSX, a state-of-the-art network virtualization product. In future posts beyond this four-article series we will discuss some tips and other parts of OpenStack and NSX. Courteous comments are welcome.

Juanma.

Welcome to the third post of my series about OpenStack. In the first and second posts we saw in detail how to prepare the basic network infrastructure of our future OpenStack cloud using VMware NSX. In this third one we are going to install and configure the KVM compute host and the shared storage of the lab.

KVM setup

Create and install two CentOS 6.4 virtual machines with 2 vCPU, 2 GB of RAM, 2 network interfaces (E1000) and one 16GB disk. For the partitioning schema I have used the following one:

  • sda1 – 512MB – /boot
  • sda2 – Rest of the disk – LVM PV
    • lv_root – 13.5GB – /
    • lv_swap – 2GB – swap

Mark the Base and Standard groups to be installed and leave the rest unchecked. Set the hostname during the installation and leave the networking configuration with the default values. Please keep in mind that you will need a DHCP server on your network; in my case I'm using the one that comes with VMware Fusion. If you don't have one, you will have to set a temporary IP address here in order to be able to install the KVM software. Once the installation is done, reboot your virtual machine and open a root SSH session to proceed with the rest of the configuration tasks.

Disable SELinux with the setenforce command, and also modify the SELinux config to disable it during OS boot. I do not recommend disabling SELinux in a production environment, but for a lab it will simplify things.

setenforce 0
cp /etc/selinux/config /etc/selinux/config.orig
sed -i s/SELINUX\=enforcing/SELINUX\=disabled/ /etc/selinux/config

Check that hardware virtualization support is activated.

egrep -i 'vmx|svm' /proc/cpuinfo

Install KVM packages.

yum install kvm libvirt python-virtinst qemu-kvm

After installing a ton of dependencies, and if nothing failed, enable and start the libvirtd service.

[root@kvm1 ~]# chkconfig libvirtd on
[root@kvm1 ~]# service libvirtd start
Starting libvirtd daemon:                                  [  OK  ]
[root@kvm1 ~]#

Verify that KVM has been correctly installed and that it's loaded and running on the system.

[root@kvm1 ~]# lsmod | grep kvm
kvm_intel              53484  0
kvm                   316506  1 kvm_intel
[root@kvm1 ~]#
[root@kvm1 ~]# virsh -c qemu:///system list
 Id    Name                           State
----------------------------------------------------

[root@kvm1 ~]#

Hypervisor networking setup

With the KVM software installed and ready we can now move on to configuring the networking for both hosts and integrating them into our NSX deployment.

Disable Network Manager for both interfaces. Edit the /etc/sysconfig/network-scripts/ifcfg-ethX files and change the NM_CONTROLLED value to no.
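
A quick way to do this, assuming the NM_CONTROLLED line already exists in both files, is the pair of sed commands below; otherwise simply edit the files by hand.

# assumes the NM_CONTROLLED line is already present in each file
sed -i 's/^NM_CONTROLLED=.*/NM_CONTROLLED=no/' /etc/sysconfig/network-scripts/ifcfg-eth0
sed -i 's/^NM_CONTROLLED=.*/NM_CONTROLLED=no/' /etc/sysconfig/network-scripts/ifcfg-eth1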

By default libvirt creates the virbr0 network bridge, used by the virtual machines to access the external network through a NAT connection. We need to disable it to ensure that the bridge components of Open vSwitch can load without any errors.

virsh net-destroy default
virsh net-autostart --disable default

Install Open vSwitch

Copy the NSX OVS package to the KVM host and extract it.

[root@kvm1 nsx-ovs]# tar vxfz nsx-ovs-2.1.0-build33849-rhel64_x86_64.tar.gz
./
./nicira-flow-stats-exporter/
./nicira-flow-stats-exporter/nicira-flow-stats-exporter-4.1.0.32691-1.x86_64.rpm
./tcpdump-ovs-4.4.0.ovs2.1.0.33849-1.x86_64.rpm
./kmod-openvswitch-2.1.0.33849-1.el6.x86_64.rpm
./openvswitch-2.1.0.33849-1.x86_64.rpm
./nicira-ovs-hypervisor-node-2.1.0.33849-1.x86_64.rpm
./nicira-ovs-hypervisor-node-debuginfo-2.1.0.33849-1.x86_64.rpm
[root@kvm1 nsx-ovs]#

Install Open vSwitch packages.

rpm -Uvh kmod-openvswitch-2.1.0.33849-1.el6.x86_64.rpm
rpm -Uvh openvswitch-2.1.0.33849-1.x86_64.rpm

Verify that Open vSwitch service is enabled and start it.

[root@kvm1 ~]# chkconfig --list openvswitch
openvswitch     0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@kvm1 ~]#
[root@kvm1 ~]#
[root@kvm1 ~]# service openvswitch start
/etc/openvswitch/conf.db does not exist ... (warning).
Creating empty database /etc/openvswitch/conf.db           [  OK  ]
Starting ovsdb-server                                      [  OK  ]
Configuring Open vSwitch system IDs                        [  OK  ]
Inserting openvswitch module                               [  OK  ]
Starting ovs-vswitchd                                      [  OK  ]
Enabling remote OVSDB managers                             [  OK  ]
[root@kvm1 ~]#

Install the nicira-ovs-hypervisor-node package; this utility provides the infrastructure for distributed routing on the hypervisor. During the installation the integration bridge br-int and the OVS SSL credentials will be created.

[root@kvm1 ~]# rpm -Uvh nicira-ovs-hypervisor-node*.rpm
Preparing...                ########################################### [100%]
   1:nicira-ovs-hypervisor-n########################################### [ 50%]
   2:nicira-ovs-hypervisor-n########################################### [100%]
Running '/usr/sbin/ovs-integrate init'
successfully generated self-signed certificates..
successfully created the integration bridge..
[root@kvm1 ~]#

There are other packages, like nicira-flow-stats-exporter and tcpdump-ovs, but they are not needed for OVS to function. We can now proceed with the OVS configuration.

Configure Open vSwitch

The first step is to create an OVS bridge for each network interface card of the hypervisor.

ovs-vsctl add-br br0
ovs-vsctl br-set-external-id br0 bridge-id br0
ovs-vsctl set Bridge br0 fail-mode=standalone
ovs-vsctl add-port br0 eth0

If you were logged in through an SSH session you have probably noticed that your connection is lost; this is because the br0 interface has taken control of the networking of the host and it doesn't have an IP address configured. To solve this, access the host console and edit the ifcfg-eth0 file to look like this.

DEVICE=eth0
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br0
ONBOOT=yes
BOOTPROTO=none
IPV6INIT=no
NAME=eth0
HOTPLUG=no
HWADDR=00:0C:29:CA:34:FE
NM_CONTROLLED=no

Next create and edit ifcfg-br0 file.

DEVICE=br0
DEVICETYPE=ovs
TYPE=OVSBridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.82.42
NETMASK=255.255.255.0
GATEWAY=192.168.82.2
IPV6INIT=no
HOTPLUG=no

Restart the network service and test the connection.

service network restart

Repeat all the above steps for the second network interface.
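
For reference, the equivalent bridge commands for the second interface would be the ones below; they simply mirror the br0 example, and the br1/eth1 names match the ovs-vsctl show output further down. The corresponding ifcfg-eth1 and ifcfg-br1 files have to be created as well.

ovs-vsctl add-br br1
ovs-vsctl br-set-external-id br1 bridge-id br1
ovs-vsctl set Bridge br1 fail-mode=standalone
ovs-vsctl add-port br1 eth1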

Finally configure the NSX Controller Cluster as the manager in Open vSwitch.

ovs-vsctl set-manager ssl:192.168.82.44

Execute the ovs-vsctl show command to review the current OVS configuration.

[root@kvm1 ~]# ovs-vsctl show
383c3f17-5c53-4992-be8e-6e9b195e51d8
    Manager "ssl:192.168.82.44"
    Bridge "br1"
        fail_mode: standalone
        Port "br1"
            Interface "br1"
                type: internal
        Port "eth1"
            Interface "eth1"
    Bridge "br0"
        fail_mode: standalone
        Port "eth0"
            Interface "eth0"
        Port "br0"
            Interface "br0"
                type: internal
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "2.1.0.33849"
[root@kvm1 ~]#

Register OVS in NSX Controller

With our OVS instance installed and running, we can now inform the NSX Controller of its existence either via the NVP API or NSX Manager; in our case we will use the latter.

Log into NSX Manager as the admin user and go to the Dashboard. From the Summary of Transport Components table click Add in the Hypervisors row. Verify that Hypervisor is selected as the transport node type and move to the Basics screen. Enter a name for the hypervisor, usually the hostname of the server.

Screen Shot 2014-05-05 at 23.18.22

In Properties enter:

  • Integration bridge ID, in our case br-int.
  • Admin Status Enabled - Enabled by default.

Screen Shot 2014-05-05 at 23.29.03

For the Credential screen we are going to need the SSL certificate that was created along with the integration bridge during the NSX OVS installation. The PEM certificate file is ovsclient-cert.pem and it is located in the /etc/openvswitch directory.

[root@kvm1 ~]# cat /etc/openvswitch/ovsclient-cert.pem
-----BEGIN CERTIFICATE-----
MIIDwjCCAqoCCQDZUob5H9tzvjANBgkqhkiG9w0BAQUFADCBojELMAkGA1UEBhMC
VVMxCzAJBgNVBAgTAkNBMRIwEAYDVQQHEwlQYWxvIEFsdG8xFTATBgNVBAoTDE9w
ZW4gdlN3aXRjaDEfMB0GA1UECxMWT3BlbiB2U3dpdGNoIGNlcnRpZmllcjE6MDgG
A1UEAxMxb3ZzY2xpZW50IGlkOjA4NWQwMTFiLTJiMzYtNGQ5My1iMWIyLWJjODIz
MDczYzE0YzAeFw0xNDA1MDQyMjE3NTVaFw0yNDA1MDEyMjE3NTVaMIGiMQswCQYD
VQQGEwJVUzELMAkGA1UECBMCQ0ExEjAQBgNVBAcTCVBhbG8gQWx0bzEVMBMGA1UE
ChMMT3BlbiB2U3dpdGNoMR8wHQYDVQQLExZPcGVuIHZTd2l0Y2ggY2VydGlmaWVy
MTowOAYDVQQDEzFvdnNjbGllbnQgaWQ6MDg1ZDAxMWItMmIzNi00ZDkzLWIxYjIt
YmM4MjMwNzNjMTRjMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAwgqT
hvG72vat0hXvTuukZOs6fM4CAphmN34l4415q/vReSM3upN+vOLoyGJ/8VJGdNXH
3Bsu6V58f6o8EPbfnhgqf2rCP0r5kiiN5SivsAWI5//ltV1GDFO4+8VpYAwn4Cbd
sNOuFEM1mKOR//IL3Riy9Nkh16wfLy44KEE9745uhZ9gW96AkSkBx1ajjUiApnjL
M6L2w/E4sxNeMDLf/VYlc/SuEg775D9iaPpA1haJt8FFw1g769FsR9Q0Fl+CoT7f
ggBZTKwwcoU+5Ew1mNlPV0Hm8vpFcXbtMBeuT9Fe7k4bC+UuQPaSnbPpbZMpx/wd
fHOdJpemcog/0EjOJQIDAQABMA0GCSqGSIb3DQEBBQUAA4IBAQDBPNM/uI25ofIl
AgCpG42UD3M/RZRPX0/6Be4jCTaAuET6J8wAKA4k1btA6UPt0M98N6o4y60Du2D+
ZwFOa2LSTXZB43X70XnDKxapDVqmhKtrmX2hL1NRD9RjTTx3TOXMOlUiUizRB1+L
d8MNhX3qrvOLeFOUnxm6C5RnI/HdqvS9TyxybX+Qfqit9Q66hbjAt9RribXSw21G
Ix8d9S4NyDO91mDstIcXeNRUk8K64gEQSKxQO9QKmVAQBIlYAJVVXzfkXyHEiKTe
0zIsW/oknwWeQMD9xSrKomY/5+LCuDM1jT5LcL8vxmrEVIrUjNqt4nQsT4mjooG+
XYf2HdXj
-----END CERTIFICATE-----
[root@kvm1 ~]#

Copy the contents of the file and paste them in the Security Certificate text box.

Screen Shot 2014-05-05 at 23.36.28

Finally add the Transport Connector with the values:

  • Transport Type: STT
  • Transport Zone UUID: The transport zone, in my case the UUID corresponding to vlab-transport-zone.
  • IP Address – The address of the br0 interface of the host.

Screen Shot 2014-05-05 at 23.41.57

Click Save & View and check that Management and OpenFlow connections are up.

Screen Shot 2014-05-05 at 23.52.16

GlusterFS setup

I chose GlusterFS for my OpenStack lab for two reasons: I have used it in the past, so this has been a good opportunity to refresh and enhance my rusty Gluster skills, and it's supported as a storage backend for Glance in OpenStack. Instead of going with CentOS again, this time I chose Fedora 20 for my Gluster VM. A real-world GlusterFS cluster will have at least two nodes, but for our lab one will be enough.

Create a Fedora x64 virtual machine with 1 vCPU, 1GB of RAM and one network interface. For the storage part use the following:

  • System disk: 16GB
  • Data disk: 72GB

Use the same partitioning schema as the KVM hosts for the system disk. Choose a Minimal installation and add the Standard group. Configure the hostname and the IP address of the node, set the root password and create a user as administrator; here I'm using my personal user jrey.

Disable SELinux.

sudo setenforce 0
sudo cp /etc/selinux/config /etc/selinux/config.orig
sudo sed -i s/SELINUX\=enforcing/SELINUX\=disabled/ /etc/selinux/config

Stop and disable firewalld.

sudo systemctl disable firewalld.service
sudo systemctl stop firewalld.service

Install GlusterFS packages. There is no need to add any additional yum repository since Gluster is included in the standard Fedora repos.

sudo yum install glusterfs-server

Enable Gluster services.

sudo systemctl enable glusterd.service
sudo systemctl enable glusterfsd.service

Start Gluster services.

[jrey@gluster ~]$ sudo systemctl start glusterd.service
[jrey@gluster ~]$ sudo systemctl start glusterfsd.service
[jrey@gluster ~]$
[jrey@gluster ~]$ sudo systemctl status glusterd.service
glusterd.service - GlusterFS an clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)
   Active: active (running) since Mon 2014-04-28 17:17:35 CEST; 20s ago
  Process: 1496 ExecStart=/usr/sbin/glusterd -p /run/glusterd.pid (code=exited, status=0/SUCCESS)
 Main PID: 1497 (glusterd)
   CGroup: /system.slice/glusterd.service
           └─1497 /usr/sbin/glusterd -p /run/glusterd.pid

Apr 28 17:17:35 gluster.vlab.local systemd[1]: Started GlusterFS an clustered file-system server.
[jrey@gluster ~]$
[jrey@gluster ~]$ sudo systemctl status glusterfsd.service
glusterfsd.service - GlusterFS brick processes (stopping only)
   Loaded: loaded (/usr/lib/systemd/system/glusterfsd.service; enabled)
   Active: active (exited) since Mon 2014-04-28 17:17:45 CEST; 15s ago
  Process: 1515 ExecStart=/bin/true (code=exited, status=0/SUCCESS)
 Main PID: 1515 (code=exited, status=0/SUCCESS)

Apr 28 17:17:45 gluster.vlab.local systemd[1]: Starting GlusterFS brick processes (stopping only)...
Apr 28 17:17:45 gluster.vlab.local systemd[1]: Started GlusterFS brick processes (stopping only).
[jrey@gluster ~]$

Since we are running a one-node cluster there is no need to add any node to the trusted pool. In case you decide to run a multi-node environment, you can set up the pool by running the following command on each node of the cluster.

gluster peer probe <IP_ADDRESS_OF_OTHER_NODE>

Edit the data disk with fdisk and create a single partition. Format the partition as XFS.
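
If you prefer a non-interactive alternative to fdisk, a parted one-liner like the one below would create the single partition; it assumes the data disk is /dev/sdb, as in the rest of this section, and that the whole disk can be repartitioned.

sudo parted -s /dev/sdb mklabel msdos mkpart primary xfs 1MiB 100%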

[jrey@gluster ~]$ sudo mkfs.xfs -i size=512 /dev/sdb1
meta-data=/dev/sdb1              isize=512    agcount=4, agsize=4718528 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=18874112, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=9215, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[jrey@gluster ~]$

Create the mount point for the new filesystem, mount the partition and edit /etc/fstab accordingly to make the mount persistent across reboots.

sudo mkdir -p /data/glance/
sudo mount /dev/sdb1 /data/glance
sudo mkdir -p /data/glance/brick1
echo "/dev/sdb1 /data/glance xfs defaults 0 0" | sudo tee -a /etc/fstab

Create the Gluster volume and start it.

[jrey@gluster ~]$ sudo gluster volume create gv0 gluster.vlab.local:/data/glance/brick1
volume create: gv0: success: please start the volume to access data
[jrey@gluster ~]$
[jrey@gluster ~]$ sudo gluster volume start gv0
volume start: gv0: success
[jrey@gluster ~]$
[jrey@gluster ~]$ sudo gluster volume info

Volume Name: gv0
Type: Distribute
Volume ID: d1ad2d00-6210-4856-a5eb-26ddcba77a70
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: gluster.vlab.local:/data/glance/brick1
[jrey@gluster ~]$

The configuration of the Gluster node is finished. In the next article we will install and configure OpenStack using the different components detailed in the current and previous parts of the series.

Please feel free to add any comment or correction.

Juanma.

Welcome to Part 2 of this series about OpenStack and NSX. In the first part we defined the basic NSX concepts and components, installed and configured the NSX appliances and connected the NSX Controller Cluster with the NSX Manager. In this second part we will see how Transport and Logical networks are configured. Make yourself comfortable because this is going to be a long post :-)

To quickly refresh our concepts, remember that the logical network represents the virtual machine's point of view of the network and the transport network represents the underlying physical network through its transport nodes.

Configure the Transport Network

The first step to have a fully functional NSX infrastructure is to configure the Transport Network. The Transport Network is made of the Transport Zone and the Transport Nodes. These transport nodes can be NSX appliances like Service Nodes or Gateways and hypervisors like KVM or ESXi hosts. Third-party hardware L2 Gateways can also be transport nodes but those are out of the scope of this series.

Create a Transport Zone

A Transport Zone is a representation of the physical network used to send traffic between OVS instances. Without a transport zone the transport nodes cannot be connected to NSX, so it is mandatory to define it before performing any operation on them.

Select Network Components > Transport Layer > Transport Zones.

Screen Shot 2014-04-28 at 22.30.40

In the next screen click Add to launch the Create Transport Zone wizard. This same wizard can also be launched from the Dashboard screen: in the Summary of Transport Components area click the Add button in the Zones row.

Screen Shot 2014-04-28 at 22.36.05

Enter the name of the Transport Zone and click Save & View.

Screen Shot 2014-04-28 at 22.41.43

The new transport zone will now be available.

Screen Shot 2014-04-29 at 00.11.59

With the Transport Zone created we can start configuring the transport nodes.

Configure the Transport Nodes

As we detailed in Part 1, the Service Node appliances are installed and configured independently, like the rest of the appliances; however, they need to be added to the NSX Controller Cluster in order to be able to perform the offloading function for the OVS devices.

From the Summary of Transport Components section in the Dashboard screen click Add.

Screen Shot 2014-04-29 at 00.29.14

The Create Service Node window will show up. In the first screen select Service Node as the Transport Node Type and click Next.

Screen Shot 2014-04-29 at 00.31.59

Enter the display name for the Service Node, in this case nsxsn.

Screen Shot 2014-04-29 at 00.39.15

In the Properties screen you will see three settings available.

  • Management Rendezvous Server – Used to designate the Service Node as the Management Rendezvous; it will proxy management traffic between the NSX Controller Cluster and remote NSX Gateways.
  • Admin Status Enabled – Used to enable or disable the Transport Node.
  • Tunnel Keep Alive Spray – Used to improve the health testing of transport node’s tunnels.

For our lab leaving the default values will suffice.

Screen Shot 2014-04-29 at 00.55.46

For the next step, get the SSL certificate from the Service Node. Establish an SSH session with the appliance as the admin user and use the show switch certificate command. The output of the command can be a bit large; we just need the certificate itself.
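
For reference, the session would look something like this; the prompt is illustrative and the IP address is the Service Node connector address used later in this lab.

ssh admin@192.168.82.46
nsxsn # show switch certificate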

-----BEGIN CERTIFICATE-----
MIIDgjCCAmoCAQMwDQYJKoZIhvcNAQEEBQAwgYExCzAJBgNVBAYTAlVTMQswCQYD
VQQIEwJDQTEVMBMGA1UEChMMT3BlbiB2U3dpdGNoMREwDwYDVQQLEwhzd2l0Y2hj
YTE7MDkGA1UEAxMyT1ZTIHN3aXRjaGNhIENBIENlcnRpZmljYXRlICgyMDE0IEFw
ciAyNyAyMzoyMDowNSkwHhcNMTQwNDI3MjMyMDUzWhcNMjQwNDI0MjMyMDUzWjCB
izELMAkGA1UEBhMCVVMxCzAJBgNVBAgTAkNBMRUwEwYDVQQKEwxPcGVuIHZTd2l0
Y2gxHzAdBgNVBAsTFk9wZW4gdlN3aXRjaCBjZXJ0aWZpZXIxNzA1BgNVBAMTLmNs
aWVudCBpZDpjYWIwNWU2OS1iZjI5LTRkMjItYTY1Ny0zYTJhZThkNjgyY2MwggEi
MA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC9lAk6DZWO/miggjjXk4xQd3hv
ieTPpjklw6Q4UKW+qMt0GjhC06n/cVK4kR12v1aCcxsKWmPK8LC1vU68e2T61zLe
NjRYHfa9VhqKjAY5p9xPcmQGli8+Cff47LfUVylEA+74YNMDHCuJbMagtwJOXSUa
UpaB3EnsEu6C4d4RzMBn55tlDbWAuFojH9JAki3g4maMqJRhILRUYoUFoSknmUvC
8cm719TcXw4u5cAzNBC2mMv6uRuDd+l1VquhFkksP2Di3D0/kI2yBW7lgDRTE4fn
O8hLasNPuGg24mOAkW/OIvusieW2MSqEwhGV5+G4fRgbRAI1ijTRT1K4dZuhAgMB
AAEwDQYJKoZIhvcNAQEEBQADggEBAB5bqYe2LXlIbwHSx1j28d/5FBmGxMd5LUEB
h29B+nTj3wVZkZpIxFoP/QRhzMXPWGM1PeixWN9o8oSZfrCEA7yMcn3uMMwdAmNz
7eNv4svw19hccWEvdNRBkQKdDX1YKItwUJKqVMJnI2dCqsGD4H1R9uwU+QJuEIgm
VMEoHYq7TwQXX6TR1eebjOKdeg4laOcsKystHTW+wuMBfOfcoYIuEZMQ7SOsRANe
l1hm3VI7t1gxp38r9JbtEC2jCqqBMzR+ZrzmodLsn/VgFDv8QiUZ5tFHaWl+jhQ/
JWxXqjLo42B6fRUA04wF6tJKnu3KDaVFIx4ssvKw2Q5u6PNSf8I=
-----END CERTIFICATE-----

Go back to the Create Service Node window, select Security Certificate as the credential type and paste the certificate extracted from the Service Node into the Security Certificate text area.

Screen Shot 2014-04-29 at 01.08.43

The final step is to set up the Transport Connectors. Click Add Connector.

Screen Shot 2014-04-29 at 01.15.22

In the Create Transport Connector screen select STT as the Transport Type. Select the transport zone we created before and enter the IP address of the Service Node.

Screen Shot 2014-04-29 at 01.20.01

Once the Connector is created, click Save in the final screen and the new Service Node will be added to the NSX Controller.

Screen Shot 2014-04-29 at 01.25.08

Now we need to finish the Gateway appliance configuration in a similar way as we did with the Service Node. Again from the Dashboard and the Summary of Transport Components section, launch the Create Gateway wizard by clicking the Add button in the Gateways row. The rest of the steps are very similar to the Service Node process.

  • Select Gateway as Transport Node Type
  • Get the SSL certificate from NSX Gateway with the show switch certificate command.
  • Configure the credentials using the SSL certificate
  • Create an STT Transport Connector and set the IP address of the Gateway

All the above Transport Node related tasks can be achieved through the command line by using the request transport-node-register command. This is a hidden command that can be used to register Service Nodes or Gateways in an NSX Controller Cluster. According to the NSX documentation there are two versions of the command:

  • cert – Used for production environments
  • mgmt-ip – Used for testing environments

The first one will transmit the encoded PEM certificates to the NSX Controller while the second will use the appliance management IP as the credential. The arguments for both versions are:

  • controller-ip-url – Switch manager address of the NSX Controller Cluster, accepts IP or hostname and the TCP port to connect to.
  • ctrler-username – NSX administration account for the Controller.
  • ctrler-password – NSX administration account password.
  • mgmt-ip – The IP address of the transport node.
  • cert – As we detailed before, this one is mutually exclusive with mgmt-ip and vice versa.
  • rendezvous-yes-or-no – Simply pass yes or no to indicate whether the transport node is a Management Rendezvous Server.
  • tc-ip-address – IP address of the transport node connector.
  • tc-zone.uuid – Transport Zone to be associated with the transport node.
  • tc-type – Encapsulation format for the transport node’s transport connector.

With those arguments, a registration command for our Service Node would look like this.

request transport-node-register nsxc.vlab.local admin admin mgmt-ip no tc-id 192.168.82.46 tc-uuid b948fd35-5737-4a30-8741-43134771d40c tc-type STT

Create a Gateway Service

The next step would be to set up a Gateway Service. My lab lives within VMware Fusion and for now I don't really need an L2 or L3 Gateway Service, but since the purpose of this post is to illustrate NSX I decided to configure one and leave everything in place for a future expansion of the lab.

Remember that Gateway services can be of two types:

  • L2 Gateway Service – Expands a logical network by connecting it to a physical L2 segment.
  • L3 Gateway Service – Connects virtual router ports to physical IP networks.

It’s important to note that in an NSX deployment you may connect only one Gateway Service, either L2 or L3, to a given L2 physical segment.

L3 Service Setup

From the NSX Manager Dashboard go to the Summary of Transport Components section and in the Gateway Services row click Add. In the first step of the Create Gateway Service wizard select L3 Gateway Service from the drop-down menu.

Screen Shot 2014-05-03 at 13.11.51

In the second step configure the Display Name for the new service and click Next.

Screen Shot 2014-05-03 at 13.15.36

The third and final step is to bind our previously configured gateway node to the service. Click Add Gateway.

Screen Shot 2014-05-03 at 13.25.28

In the Edit Gateway pop-up select the UUID of the gateway node and the network interface to be used, and leave the Failure Zone ID field blank. This last field is used for high availability of L3 services; I will try to write about this subject in a future post.

Screen Shot 2014-05-03 at 13.27.17

Click Save & View and check the newly created Gateway Service.

Screen Shot 2014-05-03 at 13.40.48

L2 Service Setup

To create a new L2 Gateway Service follow the same procedure as with the L3 one and launch the Create Gateway Service wizard.

  • Select L2 Gateway Service.
  • Enter the name of the new service.
  • Add the gateway and fill in the UUID and network interface fields; this screen is slightly different since there is no Failure Zone ID field.

Screen Shot 2014-05-03 at 19.06.50

Please keep in mind that the example in the above screenshot will fail because you cannot use the same gateway appliance for two different L2 or L3 Gateway Services; if you need an L2 service, deploy a new gateway node and configure it following the above steps.

With this step our Transport Network is almost set up; the only part left would be to add the hypervisors to the Controller Cluster, but I'll leave that for the next article.

NSX Logical Network View

In any typical OpenStack deployment the logical network elements will usually be created and configured not through the NSX Manager but through the NVP API. The API would be called by the OpenStack Neutron module, using the Neutron plugin for VMware NSX, either from the Horizon dashboard or the Neutron command line. However, I decided to explain how to create and configure the different Logical Layer elements from NSX Manager.

Before starting with a simple walk-through of the process we first need to describe the different elements of the Logical Network. An NSX Logical Network provides functionality similar to that of a dedicated Ethernet switch. It recreates entities like switches, routers and ports and provides management functionality for them through the NVP API.

  • Logical Switch – Recreates an Ethernet-type L2 service-model, containing logical switch ports that can be configured to implement a set of security and QoS policies.
  • Logical Router – Provides L3 routing services for the logical network. Can be configured to offer other services such as NAT and routed connections to the external physical network.
  • Logical Switch Port – Represents and provides a logical connection point for virtual machine network interfaces (VIFs), router patch connections or an L2 gateway connection to an external network.
  • Logical Router Port – Provides the logical connection point for a patch connection to a switch or L3 gateway connections.
  • Logical Port Attachment – This is the logical equivalent of connecting a network cable between an interface and a switch port.

Create a Logical Switch

Let's start from the beginning and create a Logical Switch. From the Summary of Logical Components area in the Dashboard screen click Add in the Switches row.

Screen Shot 2014-05-05 at 22.09.20

Provide the name of the new switch and click Next.

Screen Shot 2014-05-05 at 22.14.23

In Properties there are two different settings:

  • Port Isolation Enabled – This setting basically disables VM to VM communication by preventing communication between the different logical ports of the switch.
  • Replication Mode – Determines which transport node handles replication of broadcast, unknown-unicast and multicast (BUM) traffic. There are two possible values:
    • Service Node – Traffic is sent to the NSX Service Node to be flooded to the L2 logical segment. This is the default and recommended setting.
    • Source Node – BUM traffic is handled directly by the source hypervisor instead of a Service Node.

Screen Shot 2014-05-05 at 22.16.54

Next specify the transport binding for the logical switch. Click Add Binding and select the Transport Type and the Transport Zone UUID; I selected STT and our previously created transport zone, respectively. For the transport type there are several options available:

  • STT
  • GRE
  • Bridge
  • IPsec GRE
  • IPsec STT
  • VXLAN

Screen Shot 2014-05-05 at 22.27.11

Click Save & View to review our new logical switch; leave the router connection for later.

Add Logical Switch Ports

Once one or more logical switches have been created we can start adding ports to them. Ports will provide connection points for our virtual machines. Click Add in the Logical Ports row and the Create Logical Switch Port wizard will be started.

Select the Logical Switch the port will belong to.

Screen Shot 2014-05-05 at 22.57.21

In Basics provide a descriptive name for the port; I tend to use the convention vm_name-port.

Screen Shot 2014-05-05 at 23.01.00

In the Properties screen you have the following fields available:

  • Port number – Optional parameter.
  • Admin Status Enabled – Enabled by default.
  • Logical Queue UUID – An optional parameter used to link the port to a QOS policy.

Screen Shot 2014-05-05 at 23.07.01

Leave the Mirror Targets settings with the default values and move forward to the Attachment screen. Select VIF (virtual machine interface) as the Attachment Type, then select a hypervisor and the network interface of the virtual machine.

Screen Shot 2014-05-06 at 01.32.23

Attachments can be of any of the following types:

  • None
  • Extended Network Bridge
  • Multi-Domain Interconnect
  • L2 Gateway
  • Patch to logical router port
  • VTEP L2 Gateway

For example, an Extended Network Bridge attachment would be configured like this.

Screen Shot 2014-05-06 at 01.36.12

Create a Logical Router

Launch the Create Logical Router dialog and set the name of the new router in the first screen.

Screen Shot 2014-05-06 at 01.44.19

In Properties select the Routing Type:

  • Routing Table – Allows you to define static routes on the logical router.
  • Single Default Route – Defines a single default route for all traffic, routing all traffic through the L3 Gateway connecting the router to the datacenter physical network.

Tick the Enable NAT Synchronization checkbox if you intend to provide NAT services through this logical router and want NAT rules to survive in the event of a Gateway failover.

Replication Mode works in the same way as in the Logical Switch; Service Node is selected by default.

Screen Shot 2014-05-06 at 01.55.07

Configure the Distributed Logical Router option. If the checkbox is unticked the logical router will be a centralized logical router, and all network traffic between virtual machines will be forwarded to the NSX Service Nodes. If you tick the checkbox it will be a distributed logical router that provides one-hop routing of VM-to-VM traffic; to be able to use this feature, all hypervisors running VMs that use this router must be in the same transport zone.

Screen Shot 2014-05-06 at 02.03.39

Click Save & View to finish the process and review the new router. Optionally at the last step you can assign an L3 Gateway Service and configure the corresponding Logical Router Port.

Select the UUID of the desired gateway service and configure the Logical Router Port settings. In the example I chose the basic configuration since I only needed to provide the IP address for the port.

Screen Shot 2014-05-06 at 02.12.52

Add a Logical Router Port

To create and assign a logical port to an existing router, launch the corresponding wizard from the Summary of Logical Components table in the Dashboard screen.

Select the Logical Router UUID from the drop-down.

Screen Shot 2014-05-06 at 22.12.57

Enter a name for the port and click Next to move to the Properties step. The Port Number and MAC Address fields are optional; leave Admin Status Enabled checked. In the IP Addresses table add the required IP address; it must be in IPv4 CIDR notation, for example 192.168.100.1/24.

Screen Shot 2014-05-06 at 22.19.26

Configure the attachment. For router ports the attachments can be set to one of the following types:

  • None
  • L3 Gateway
  • Patch

For my example lab this time I configured the attachment as a Patch one. You need to select the Logical Switch UUID and the Peer Port UUID; this peer port is a port on the logical switch, and you have to configure it either before creating the router port or at this step.

Screen Shot 2014-05-06 at 22.36.10

Click Save to finish the creation process.

This completes the logical network part. It's a very basic setup without any security or QoS services, but I hope that you gained a better understanding of the transport and logical network concepts and the relationships between their different components. In the third part of the series we will review how to set up the KVM hypervisor and connect it to the NSX infrastructure. Comments, corrections or questions are always welcome.

Juanma.

If you follow me on Twitter or Google+ you have probably seen an increased number of tweets and posts about OpenStack, DevStack, KVM and other Linux-related topics. It's no secret that I am a *nix guy; however, it wasn't until last year that I really discovered OpenStack. Oh yes, I knew about it, had read a ton of articles and watched some videos on YouTube, but I never had the opportunity to actually play with it until I sat in a Hands-on Lab about OpenStack and vSphere during VMworld in Barcelona last October. After VMworld I started a personal project to learn as much as possible about OpenStack, using some labs with KVM and vSphere to try to achieve a decent level of proficiency. Finally this year I was able to ramp up on NSX and decided to build a new lab with OpenStack, KVM and NSX and document my progress here on my blog. So without further ado here is my first series of posts about OpenStack and NSX.

During this series we will see how to deploy OpenStack with KVM as the underlying hypervisor and VMware NSX for the networking part. I intend to create a fairly comprehensive guide here for my personal reference and as a learning exercise. All posts of the series are based on my personal experience in a lab environment.

Lab components

To illustrate the posts I have created a lab with virtual machines running on VMware Fusion on my MacBook Pro, but you can use any virtualization software you want as long as it allows you to expose the virtualization extensions to the virtual machine, which is needed for the KVM compute node. We will need the following virtual machines:

  • Cloud controller node
  • Nova compute node with KVM
  • Neutron networking node
  • GlusterFS storage node
  • NSX Controller
  • NSX Manager
  • NSX Service Node
  • NSX Gateway

I'll provide the exact hardware configuration of each virtual machine in its own part. We will deploy OpenStack Havana, using as a reference one of the architectures described in the OpenStack Havana installation guide.

You are probably asking yourself why I'm using Havana when Icehouse was released just a few weeks ago. There are two reasons for this: first, when I started to create my lab and decided to document my progress here Icehouse wasn't out yet; and second, after it was released I decided to stick with Havana because the NSX plugin for Neutron, the OpenStack network module, has not yet been updated for Icehouse.

The software versions to be used are:

  • OpenStack Havana
  • CentOS 6.4 – For OpenStack nodes
  • Fedora 20 – For GlusterFS storage node
  • NSX for multi-hypervisor

I have another Fedora 20 virtual machine providing DNS and NTP services for the lab; I'm planning to add DHCP and OpenLDAP capabilities in the future.

NSX deployment overview

Screen Shot 2014-04-24 at 22.40.40

The Network Views

The first concept you need to understand in NSX is the network views. NSX defines two network views:

  • Logical Network View
  • Transport Network View

The Logical Network View is a representation of the network services and connectivity that a virtual machine "sees" in the cloud; basically, for the operating system running inside the VM, the logical network view is "the network" it is connected to. The Logical Network View is completely independent of the underlying physical network. It is made of the logical ports, switches and routers that interconnect the different virtual machines within a tenant and connect them to the outside physical network. In a cloud each tenant will have its own logical network view, isolated from other tenants' views.

The Transport Network View represents the physical devices that underlie the logical networks. These devices, or transport nodes as they are referred to, can be hypervisors and the network appliances interconnecting those hypervisors to the external physical network. Every one of these transport nodes must run an instance of Open vSwitch.

NSX Deployment Components

An NSX deployment is made up of the Control Plane and the Data Plane. Additionally there is a Management Plane, comprising the NSX Manager; this last one is not mandatory for an OpenStack deployment but it can be useful.

NSX Control Plane

The Control Plane is made of the NSX Controller Cluster. This is an OpenFlow controller that manages all the Open vSwitch devices running on the transport nodes, and a logical network manager that allows building and maintaining all the logical networks carried by the transport nodes. It provides consistency between the logical network view and the transport network view. Internally it has several roles to manage the different tasks it is responsible for.

  • Transport node management: Maintains connections with the different OVS instances.
  • Logical network management: Monitors when endhosts get connected and disconnected from OVS. Also implements logical connectivity and policies by configuring OVS forwarding states.
  • Data persistence and replication: Stores data from OVS devices and the NVP API to provide persistence across all nodes of the cluster in case of failure.
  • API server: Handles HTTP requests from external elements.

The NSX Controller is a scale-out cluster running on x86 hardware; it supports a minimum of three nodes and a maximum of five. Single-node clusters are not supported, although for the lab I deployed a single-node one.

NSX Data Plane

The Data Plane is implemented by the previously mentioned transport nodes, that is, OVS devices and NSX appliances, managed by the Controller Cluster.

Hypervisors: The compute nodes leveraging Open vSwitch to provide network connectivity for the virtual machines.

NSX Gateway/s: The NSX Gateways form the Gateway Service that allows a logical network to be attached to a physical network not managed by NSX. A gateway can be an L2 Gateway, which expands an L2 logical segment to include a physical one, or an L3 Gateway, which maps itself to a physical router port.

NSX Service Node/s: The Service Nodes are OVS-enabled appliances that provide extra processing capacity by offloading network packet processing from the hypervisor virtual switches. The types of operations managed by the service nodes include, for example, assisting with packet replication during broadcast/multicast operations or unknown multicast flooding in overlay logical networks.

NSX Management Plane

The NSX Management Plane is composed exclusively of the NSX Manager. It provides a different and friendlier way to interact with the NVP API and to configure the logical network components, for example through its web UI. In an OpenStack deployment there is no need to use it; however, it can be helpful for troubleshooting purposes.

NSX network appliances deployment

For our lab purposes create four Ubuntu x64 virtual machines with 1vCPU, 1GB of RAM, 1 network interface (E1000) and 16GB of disk.

NSX Controller

Power on the VM and on the boot screen select Automated Install.

The installation will take several minutes to finish. When it’s finished you will see a login prompt in the virtual machine console.

Log in as the admin user with password admin. In a normal deployment you would configure the admin user password with set admin user password, but for the lab it is not needed.

Set the IP address for the controller node.

nsx-controller # set network interface breth0 static 192.168.82.45 255.255.255.0
Setting IP for interface breth0...
Clearing DNS configuration...
nsx-controller # 
nsx-controller # show network interface breth0
IP config: static
Address: 192.168.82.45
Netmask: 255.255.255.0
Broadcast: 192.168.82.255
MTU: 1500
MAC: 00:0c:29:92:ce:0c
Admin-Status: UP
Link-Status: UP
SNMP: disabled
nsx-controller #

Configure the hostname.

nsx-controller # set hostname nsxc
nsxc #

Next configure the default route.

nsxc # add network route 0.0.0.0 0.0.0.0 192.168.82.2
nsxc #
nsxc # show network route
Prefix/Mask         Gateway         Metric  MTU     Iface
0.0.0.0/0           192.168.82.2    0       intf    breth0
192.168.82.0/24     0.0.0.0         0       intf    breth0
nsxc #

Set the address of the DNS and NTP servers.

nsxc # add network dns-server 192.168.82.110
nsxc #
nsxc # add network ntp-server 192.168.82.110
 * Stopping NTP server ntpd                                                                                                                                                          [ OK ]
Synchronizing with NTP servers. This may take a few seconds...
27 Apr 21:03:49 ntpdate[3755]: step time server 192.168.82.110 offset -7199.735794 sec
 * Starting NTP server ntpd                                                                                                                                                          [ OK ]
nsxc #

Set the management address of the control cluster.

set control-cluster management-address 192.168.82.45

Configure the IP address to be used for communication with the different transport nodes.

set control-cluster role switch_manager listen-ip 192.168.82.45

Configure the IP address to handle NVP API requests.

set control-cluster role api_provider listen-ip 192.168.82.45

Finally join the cluster; since this is the first node of the cluster the IP address has to be its own.

nsxc # join control-cluster 192.168.82.45
Clearing controller state and restarting
Stopping nicira-nvp-controller: [Done]
Clearing nicira-nvp-controller's state: OK
Starting nicira-nvp-controller: CLI revert file already exists
mapping eth0 -> bridged-pif
ssh stop/waiting
ssh start/running, process 5009
mapping breth0 -> eth0
mapping breth0 -> eth0
ssh stop/waiting
ssh start/running, process 5158
Setting core limit to unlimited
Setting file descriptor limit to 100000
 nicira-nvp-controller [OK]
** Watching control-cluster history; ctrl-c to exit **
===================================
Host nsx-controller
Node ffac511c-12b3-4dd0-baa7-632df4860521 (192.168.82.248)
  04/27 22:40:42: Initializing data contact with cluster
  04/27 22:40:49: Fetching initial configuration data
  04/27 22:40:51: Join complete
nsxc #

You can check at any moment the status of the node in the cluster with the show control-cluster status command.

nsxc # show control-cluster status
Type                Status                                       Since
--------------------------------------------------------------------------------
Join status:        Join complete                                04/27 22:40:51
Majority status:    Disconnected from cluster majority           04/27 22:53:44
Restart status:     This controller can be safely restarted      04/27 21:23:29
Cluster ID:         7837a89a-22f3-4c8c-8bef-c100886374e9
Node UUID:          7837a89a-22f3-4c8c-8bef-c100886374e9

Role                Configured status   Active status
--------------------------------------------------------------------------------
api_provider        enabled             activated
persistence_server  enabled             activated
switch_manager      enabled             activated
logical_manager     enabled             activated
directory_server    disabled            disabled
nsxc #

In a standard NSX deployment this would be the moment to add more nodes to the cluster, using the join control-cluster command again with the same IP address.
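For illustration, on a hypothetical second controller (nsxc2 below) that already has its own network settings configured, the command would be the same, pointing at the address of the existing cluster.

nsxc2 # join control-cluster 192.168.82.45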

NSX Gateway

Proceed with the Automated Install as in the Controller node. When the installation is done login as admin user.

Set IP address.

set network interface breth0 static 192.168.82.47 255.255.255.0

Set hostname.

set hostname nsxg

Configure the rest of the network parameters as in the Controller node and proceed to the gateway specific configuration.

nsxg # add switch manager 192.168.82.45
Waiting for the manager CA certificate to synchronize...
Manager CA certificate synchronized
nsxg #

NSX Service Node

Again launch the Automated Install and let it finish. As admin user configure the IP address…

set network interface breth0 static 192.168.82.46 255.255.255.0

…and the hostname.

set hostname nsxsn

Finish the network configuration as in the Gateway and the Controller and configure the Service Node to be aware of the Controller Cluster.

add switch manager 192.168.82.45

The above command will return an error like this.

Manager CA certificate failed to synchronize.  Verify
the manager is running on the specified IP address.

It’s normal since the Transport Node will not be able to connect to the NSX Controller Cluster until the cluster has been informed, either via NVP API or NSX Manager interface, about the existence of the Transport Node.

NSX Manager

Access the NSX Manager console; you should see a screen similar to those of the other appliances.

Set the IP and the hostname and configure the default route, DNS and NTP server.

set network interface breth0 static 192.168.82.47 255.255.255.0
set hostname nsxm
add network route 0.0.0.0 0.0.0.0 192.168.82.2
add network dns-server 192.168.82.110
add network ntp-server 192.168.82.110

With this we have completed the installation and initial configuration of our four NSX appliances. In a real world deployment we would have to add at least two more NSX Controller nodes to our cluster, and maybe one or more Gateways in order to set up L2 and L3 Gateway Services. The number of Service Nodes will depend on the expected load of our cloud.

Connect the NSX Manager to the Controller Cluster

Our next step is to connect our newly created NSX Controller Cluster with the NSX Manager. Access the NSX Manager web interface and log in as the admin user.

After the login the Manager will indicate that there is no Controller Cluster added.

Click the Add Cluster button and enter the data for the NSX Controller Cluster.

If the connection is successful a new screen will show up.

Provide the following information:

  • Name of the cluster
  • Contact email address of the administrator
  • Automatically Use New IPs – This setting, checked by default, will add all the IP addresses of the members of this cluster as eligible to receive API calls from the NSX Manager.
  • Make Active Cluster

In the next screen enter the IP address of your syslog server or click Use This NSX Manager to use the NSX Manager as syslog server.

After clicking Configure the Manager will finish the configuration of the Controller Cluster and go back to the previous screen, where we can see the new cluster we have just added to the Manager.

In the next post we will see how to configure NSX Transport and Logical network elements. As always comments are welcome.

Juanma.

With the release of ESXi 5.0 the esxcli command has also been vastly improved. One of these new capabilities is the ability to manage the DNS configuration of the server.

The basic syntax for dns is:

~# esxcli network ip dns

This gives you two namespaces to work with:

  • search
  • server

With the first one you can manage the suffixes for DNS search and with the second the DNS servers to be used by the ESXi host.

  • Server operations – list the configured servers, add a new server and remove a configured server.
  • Domain search operations – list the configured domain suffixes, add a new domain and remove a configured domain.
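For reference, this is a sketch of the commands behind those operations; the IP address and domain name are just placeholder values.

~ # esxcli network ip dns server list
~ # esxcli network ip dns server add --server=192.168.1.10
~ # esxcli network ip dns server remove --server=192.168.1.10
~ # esxcli network ip dns search list
~ # esxcli network ip dns search add --domain=example.com
~ # esxcli network ip dns search remove --domain=example.com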

Juanma.

Long time since my last post about HP Integrity Virtual Machines; well, you know I’ve been very occupied with vSphere and Linux, but that doesn’t mean I have completely eliminated HP-UX from my life, on the contrary… HP-UX ROCKS! :-D

This is just a quick post on how to map a specific port of a virtual switch to a specific VLAN. First retrieve the configuration of the vswitch.

[root@hpvmhost] ~ # hpvmnet -S devlan12
Name     Number State   Mode      NamePPA  MAC Address    IP Address
======== ====== ======= ========= ======== ============== ===============
devlan12      3 Up      Shared    lan4     0x000cfc0046b9 10.1.1.99    

[Port Configuration Details]
Port    Port         Port     Untagged Number of    Active VM
Number  State        Adaptor  VLANID   Reserved VMs
======= ============ ======== ======== ============ ============
1       Active       lan      none     1            oradev01
2       Active       lan      none     1            oradev02
3       Active       lan      none     1            oradev03
4       Active       lan      none     1            oradev04
5       Active       lan      none     1            nfstest01
6       Active       lan      none     1            linuxvm1
7       Active       lan      none     1            linuxvm2 

[root@hpvmhost] ~ #

We are going to map port 5 to VLAN 120 in order to isolate the traffic of that NFS server from the other virtual machines that aren’t on the same VLAN. Again the command to use is hpvmnet.

[root@hpvmhost] ~ # hpvmnet -S devlan12 -u portid:5:vlanid:120

If you display again the HPVM network configuration for the devlan12 vswitch the change will appear under the Untagged VLANID column.

[root@hpvmhost] ~ # hpvmnet -S devlan12
Name     Number State   Mode      NamePPA  MAC Address    IP Address
======== ====== ======= ========= ======== ============== ===============
devlan12      3 Up      Shared    lan4     0x000cfc0046b9 10.1.1.99    

[Port Configuration Details]
Port    Port         Port     Untagged Number of    Active VM
Number  State        Adaptor  VLANID   Reserved VMs
======= ============ ======== ======== ============ ============
1       Active       lan      none     1            oradev01
2       Active       lan      none     1            oradev02
3       Active       lan      none     1            oradev03
4       Active       lan      none     1            oradev04
5       Active       lan      120      1            nfstest01
6       Active       lan      none     1            linuxvm1
7       Active       lan      none     1            linuxvm2 

[root@hpvmhost] ~ #
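If you later need to undo the mapping and return the port to its untagged behaviour, the same option accepts none as the VLAN ID; this is just a sketch based on the syntax used above.

[root@hpvmhost] ~ # hpvmnet -S devlan12 -u portid:5:vlanid:none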

Juanma.

This is the fourth and last part of this series of posts about Virtual Connect; the first three covered the Domain setup, the network configuration and the storage configuration.

In this final post I will discuss Server Profiles: what they are and how to create them. As in the rest of the series I’m using Virtual Connect 3.10.

So, what is a Server Profile? We can define a Virtual Connect server profile as a logical grouping of attributes related to server connectivity that can be assigned to a server blade. You can see it as the connectivity personality of the server.

The server profile includes:

  • MAC address.
  • Preboot Execution Environment (PXE) enablement.
  • Network connection setting for each NIC port and WWN.
  • SAN fabric connection.
  • SAN boot parameter setting for each Fibre Channel HBA port.

Once the server profile is created you can apply it to any server within the VC Domain. There is a maximum of 64 fully populated VC Server Profiles in a VC Domain.

As we saw in the network and storage posts the VCM can be configured so that blade servers use their factory-default MACs/WWNs and serial numbers or Virtual Connect provided and administered ranges of MACs and WWNs. These MACs and WWNs will override the default MAC and WWN values when a server profile is applied to the server and appear to preboot environments and host operating systems as the hardware addresses.

When a server profile is assigned to a Device Bay the Virtual Connect Manager securely connects to the blade in the bay and configures the NIC ports with profile provided MAC addresses and PXE settings and the FC HBA ports with the appropriate WWNs and SAN boot settings. Additionally the VCM automatically connects the server to the specified networks and SAN fabrics.

This server profile can then be copied or reassigned to another server as needed without interrupting the server connectivity to the network and SAN.

Once a blade server has been assigned a server profile, and as long as it remains in the same device bay, it does not require further VC Manager configuration during a server or enclosure power cycle. Servers boot and access the network and fabric as soon as the server and interconnect modules are ready. If a server is inserted into a device bay that has already been assigned a server profile, VCM automatically updates the configuration of that server before it is allowed to power on and connect to the network.

If a blade server is moved from a Virtual Connect managed enclosure to a non VC managed one, all the ports automatically return to their original factory values and settings in order to prevent duplicate MACs and WWNs within the datacenter because of a blade server redeployment.

In addition to the above information the following points must be considered when working with server profiles:

  • Blade server and card firmware revision must be at a revision that supports Virtual Connect profile assignment.
  • Before creating the first server profile select whether to use Virtual Connect administered MAC and WWN ranges or the local factory default values.
  • After an enclosure is imported into a VC Domain the blades will remain isolated from networks and SAN fabrics until a server profile is assigned.
  • When using Virtual Connect administered MACs and/or WWNs, or when changing Fibre Channel boot parameters, the servers must be powered off in order to receive or relinquish a server profile.
  • Fibre Channel SAN connections will display in the server profile screen only if the VC-FC module in the enclosure is managed by Virtual Connect. If there is no VC-FC module the FC option wouldn’t appear in the server profile screen until a module has been added.
  • Some server profile SAN boot settings, like the controller boot order, are applied only after the server has been booted with the final mezzanine card configuration.
  • If PXE or SAN boot settings are made outside of Virtual Connect, the settings defined by the server profile will be restored after the blade server completes the next boot cycle.

If you have worked in the past with the 2.x Virtual Connect Manager revisions I’m sure that you will remember the Server Profile Wizard. That wizard has been removed from the 3.x revisions of VCM.

To start the server profile creation you have now to go to the Virtual Connect Home and in the Server area click on Define Server Profile.

In the Define Server Profile screen first enter the name of the profile, ESX01 in the example, and choose if you want to use factory default MAC and WWN or the VC-predefined.

Then move to Ethernet Network Connections. Here you can select the networks to assign to the ports, the port speed between AUTO, PREFERRED and CUSTOM and the PXE settings (ENABLED, DISABLED or USE-BIOS). By default there are only two connections created, to add more connections just right-click the area and choose Add connection.

In Network Name, if you choose Multiple Networks a new icon will appear that allows you to edit this connection type. Click it and a new section will show up; this section allows you to select the Shared Uplink Set and the networks. There is also a checkbox to force the same VLAN mappings as the Shared Uplink Set on the different networks.

The next area is the FC SAN Connections. Assign the modules in the bays to the corresponding fabric and set the port speed.

Also in this section you can define the SAN boot parameters: click on the checkbox, the page will dim and a pop-up will appear. There you can configure each FC connection as PRIMARY, SECONDARY, DISABLED or USE-BIOS and set the Target Port Name and the LUN.

Finally we can assign the profile to a server bay.

Click Apply and the new server profile will be created. You can always edit the existing server profiles from the Server Profiles screen in the VC administration interface.
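The same result can also be achieved from the Virtual Connect Manager CLI. The sketch below is only illustrative: the profile name, network, fabric and device bay are assumptions that would have to match your own domain.

->add profile ESX01
->add enet-connection ESX01 pxe=Enabled Network=prod_net_01
->add fc-connection ESX01 Fabric=SAN_A Speed=Auto
->assign profile ESX01 enc0:1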

And this is the end of the series. If you have followed the steps outlined in the four posts you will have a fully operational Virtual Connect Domain. Of course there are some topics I’d like to write about, like iSCSI, FlexFabric and the VCM command line, but I believe it’s better to do it in their own dedicated posts, stay tuned :-)

Juanma.

In the first post of the series I introduced to you HP Virtual Connect and showed how to use the Domain Wizard Setup to initially configure a VC domain. In the following article I will outline the use of the Network Setup Wizard and explain Virtual Connect networking concepts.

Before we begin to setup the network it would be very useful to clarify the Virtual Connect port terminology.

  • External port – The Ethernet connectors (SFP+ modules, either 1Gb or 10Gb, 10GBASE-CX4 and RJ-45) on the faceplate of the Ethernet module.
  • Stacking port – Ethernet external ports used to interconnect the VC Ethernet modules within a Virtual Connect Domain. The Ethernet modules automatically identify the stacking ports.
  • Uplink port – An external port configured within a Domain for use as a connection to the external networking equipment. These ports are defined within Virtual Connect by the enclosure name, the interconnect bay that contains the module and the port number.
  • Uplink port set – A set of uplink ports trunked together in order to provide improved throughput and availability.
  • Shared uplink port – An Ethernet uplink port that carries the traffic of multiple networks. The associated networks are mapped to a specific VLAN on the external connection, and the appropriate VLAN tags are removed or added as Ethernet packets enter or leave the VC Domain.
  • Shared uplink port set – A group of Ethernet uplinks trunked to provide improved throughput and availability to a VC Shared Uplink Set.

The Virtual Connect Network Setup Wizard will help to establish external Ethernet connectivity for the enclosure. With this wizard you will be able to:

  • Identify the MAC addresses to be used by the servers within the VC Domain.
  • Configure Server VLAN tagging.
  • Set up connections from the c-Class enclosure to the external networks.

The network connections can be:

  • Dedicated uplink to a specific Ethernet network.
  • Shared uplink sets.

The first screen of the wizard is the MAC Address Settings. Like every server on the market, the HP blades come with factory-default MAC addresses already assigned to their network cards. However Virtual Connect can override these values while the server remains in the enclosure.

Virtual Connect accesses the NICs through the Onboard Administrator and the server iLO to manage the MAC addresses. It provides 64 predefined and reserved MAC address ranges. The wizard will give you the option to use either an HP predefined range or a user defined one. HP recommends using the predefined ranges.

Once you have chosen the address range and clicked next, the wizard will ask for confirmation before continuing.

The next screen is Server VLAN Tagging Support. Here the wizard gives you two possible options:

  • Tunnel VLAN Tags
  • Map VLAN Tags

The first one, Tunnel VLAN Tags, supports VLAN tagging only on networks with dedicated uplinks, where all VLAN tags are passed through the VC Domain without modification; ports connected to networks using shared uplinks can only send and receive untagged frames.

On the other hand Map VLAN Tags allows you to add more than one network to an Ethernet server port and specify the VLAN mapping between server tags and VC-Enet networks. Also, VLAN tunneling will be disabled for VC Ethernet networks with dedicated uplinks.

There is also a checkbox on the page to force server connections to use the same VLAN mappings as the Shared Uplink Sets. If this option is enabled the server ports connected to multiple VC Ethernet networks are forced to use the same VLAN mappings as those used for the corresponding Shared Uplink Set, and the network connections can only be selected from a single Shared Uplink Set. When this option is not checked server network connections can be selected from any VC network and the external VLAN ID mappings can be manually edited. In the example of the screenshots I decided to check it.

Below are another two optional settings for link speed control when using mapped VLAN tags. These settings are:

  • Set a Custom value for Preferred Link Connection Speed. This value applies to server profiles with a Multiple Networks connection defined and the Port Speed Setting set to Preferred.
  • Set a Custom value for Maximum Link Connection Speed. This value limits the maximum port speed for multi-network connections when a Custom port speed is specified.

In our example we’re not going to check either of them. Click next to move into the Define Network Connection screen.

Choose the network type you want to define and click next. I chose the option for a connection with uplink(s) dedicated to a single network.

The Define Single Network page shows up. First define the network name (prod_net_01 in my example). There are three configurable values.

  • Smart Link – With this option enabled Virtual Connect will drop the Ethernet link on every server connected to that network if the link to the external switches is lost.
  • Private Network – This option is intended to provide extra network security by isolating all server ports from each other within the VC Domain. All packets will be sent through the VC Domain and out the uplink ports, so the communication between the servers will go through an external L3 router that will redirect the traffic back to the Domain.
  • Enable VLAN Tunneling.

Click the Advanced button to configure the Advanced Network Settings. Set the network link speeds that best suit your configuration.

Again from the Define Single Network page we are going to assign a port to our network. Click on Add Port and select an uplink port.

Set the Connection Mode to Auto if the ports are trunked and to Failover if not.

Click Apply and move onto the next screen. From this screen you can create as many additional networks as you need.
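For reference, a rough VCM CLI equivalent of the single network just created would look like the lines below; the uplink port identifier is an assumption and depends on the enclosure, module and port you actually use.

->add network prod_net_01
->add uplinkport enc0:1:1 Network=prod_net_01 Speed=Auto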

Now we are going to create a network using VLAN tagging. Click Next and move again into the Define Network Connection page, select Connection with uplink(s) carrying multiple networks (using VLAN tagging) and click Next. The Define Shared Uplink Port Set page will be displayed.

A shared uplink is the way Virtual Connect identifies which uplinks carry multiple networks over the same cable. On shared uplinks the VLAN tags are added when packets leave the enclosure and removed when they enter it. The external switch and the Virtual Connect Manager must be configured with the same VLAN tag ID for each network on the shared uplinks. A shared uplink set enables multiple ports to be added in order to support port aggregation and link failover, with a consistent set of VLAN tags. Virtual Connect has no restriction on which VLAN IDs can be used, so the VLANs already used in the external infrastructure can be used here.

Since the VLAN tags are removed or added as soon as the packet enters or leaves the VC Ethernet module shared uplink, they have no relevance after the packet enters the enclosure. Identifying an associated network as the native VLAN will cause all untagged incoming packets to be placed onto that network; just one network can be designated as the native VLAN.

To finish the network creation assign a name (up to 64 characters with no spaces), add a port using the drop-down menu like in the single network process described above and add the networks you want to associate to the uplink. Finally click Apply.

In the final screen you will now see the three networks associated with a Shared Uplink Set. You can also check this from the Virtual Connect Manager page in the Ethernet Networks area.
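A rough CLI sketch of the same shared uplink set configuration, with made-up names, port and VLAN IDs, would be along these lines:

->add uplinkset vc_uplink_01
->add uplinkport enc0:1:2 UplinkSet=vc_uplink_01 Speed=Auto
->add network prod_net_02 UplinkSet=vc_uplink_01 VLanID=102
->add network prod_net_03 UplinkSet=vc_uplink_01 VLanID=103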

And we are done with the Network Setup. In the next post I will cover the storage part. As always any feedback is welcome :-)

Juanma.

A friend asked me last week if I could produce a document for him explaining the initial basic setup of Virtual Connect. I decided that instead it would be better and more helpful to write a series of blog posts; here is the first of them for you to enjoy.

Virtual Connect is a technology developed by Hewlett-Packard for the HP BladeSystem c-Class enclosures. It provides server-edge and I/O virtualization in order to simplify the setup, maintenance and administration of server connections. It comprises a set of interconnect modules, both Ethernet and Fibre Channel, and a piece of software known as Virtual Connect Manager.

Virtual Connect Manager, or VCM, is the single point of administration for Virtual Connect. Under the hood VCM is software embedded into the VC Ethernet module; it can be accessed through a web-based interface or a command line, either with a serial connection to the Ethernet module or through an SSH connection to it.
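As a quick illustration of the command line access mentioned above (the IP address is a placeholder), an SSH session to the VC Ethernet module drops you into the VCM CLI, where the usual show commands are available:

$ ssh Administrator@192.168.1.100
->show domain
->show enclosure
->show profile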

From the VCM only a single domain, with up to four enclosures, can be managed.

For large-scale infrastructures there is a more scalable version of VCM known as Virtual Connect Enterprise Manager, or VCEM. Unlike VCM, Virtual Connect Enterprise Manager is not embedded into the VC-Enet module; it is separate software that must be installed on another server. VCEM extends the VC management capabilities up to 250 domains and hundreds of blade servers.

The current series of articles will focus only on the Virtual Connect Manager GUI. Please take into account that I’m using Virtual Connect version 3.10 throughout the series and that there are some differences from the VC 2.x revisions.

When you login into the VCM for the first time a series of wizards will show up to help you with the initial setup of the domain. This article will cover the first of those wizards, the Domain Setup Wizard.

This wizard will allow you to:

  • Import enclosure configuration and communication settings
  • Name the domain
  • Set the IP address of the Virtual Connect Manager
  • Set up the local user accounts and its permissions and privileges
  • Confirm that the stacking links provide connectivity and redundancy

After the informative screen the first step will display. Here you have to provide the enclosure Onboard Administrator IP address and credentials; these credentials must have administrative level access. Click next when finished.

Now VC Domain Wizard will import all the servers and VC interconnect modules within the enclosure.

In the next screen select the enclosure to import and click next.

A pop-up will show up to inform you that the networking of all the blades within the enclosure will be disabled until VC networking is properly configured. Of course it will ask for confirmation.

After finishing the import the wizard will go to the General Settings part. The Domain Setup Wizard automatically assigns a domain name based on the enclosure name; you can change the name when running the setup wizard or at any time later from the Domain Settings screen. The Virtual Connect domain name should be unique and can be up to 31 characters without spaces or special characters.

Next step is to configure the local user accounts.

By default the only local account is Administrator; this account cannot be deleted nor have its domain privileges removed. You can also add up to 32 accounts with a combination of up to four levels of access. The available levels are:

  • Virtual Connect Domain
  • Server
  • Networking
  • Storage

There is also an Advanced area for each account where you can set Strong Passwords requirement and the minimum password length.
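Local accounts can also be created from the VCM CLI once the domain exists; this is a minimal sketch with an invented user name and password, where the privileges map to the four access levels listed above.

->add user jdoe password=MyP4ssw0rd privileges=domain,server,network,storage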

With this the Domain Setup Wizard is done. In the next article I will write about the network setup of the enclosure using the Network Setup Wizard.

Juanma.