Archives For VMware

A question I’ve heard a few times: what are the command equivalences between a standard Open vSwitch running inside a Linux box and the NSX vSwitch running inside ESXi? I have written this post to clarify this a bit.

There are four commands in the NSX CLI that have equivalents in the OVS world:

NVS Command     OVS Command
nsx-dbctl       ovs-vsctl
nsx-dpctl       ovs-dpctl
nsx-appctl      ovs-appctl
nsx-flowctl     ovs-ofctl

nsx-dbctl

The nsx-dbctl command, like its OVS equivalent ovs-vsctl, is used to query and configure the vSwitch database. Sub-commands are the same; for example, nsx-dbctl show will produce output similar to ovs-vsctl show.

~ # nsx-dbctl show
ec451c1a-0258-423a-b406-dec83af4b241
    Manager "ssl:192.168.110.201:6632"
        is_connected: true
    Bridge "br-vmnic1"
        fail_mode: standalone
        Port "vmk3"
            Interface "vmk3"
        Port "vmnic1"
            Interface "vmnic1"
    Bridge br-int
        Controller "ssl:192.168.110.201:6633"
            is_connected: true
        Controller "unix:ovs-l3d.mgmt"
            is_connected: true
        fail_mode: secure
        Port "vNic.3000004"
            Interface "vNic.3000004"
        Port "vNic.3000006"
            Interface "vNic.3000006"
        Port "vNic.3000005"
            Interface "vNic.3000005"
    ovs_version: "2.0.2.31704"
~ #

nsx-dpctl

The nsx-dpctl command maps to ovs-dpctl and, much like it, allows you to manage Open vSwitch datapaths.

~ # nsx-dpctl show
system@nsx-vswitch:
        lookups: hit:1770781 missed:192476 lost:0
        flows: 14
        port 50331650: vmnic1
        port 50331651: vmk3
        port 50331652: vNic.3000004
        port 50331653: vNic.3000005
        port 50331654: vNic.3000006
~ #

nsx-appctl

nsx-appctl allows the administrator to manage and configure the OVS daemons. It maps to the ovs-appctl command.

~ # nsx-appctl dpif/show
system@nsx-vswitch: hit:2230477 missed:148652
        flows: cur: 17, avg: 17, max: 33, life span: 1918447ms
        hourly avg: add rate: 66.907/min, del rate: 66.880/min
        daily avg: add rate: 43.476/min, del rate: 43.461/min
        overall avg: add rate: 60.918/min, del rate: 60.909/min
        br-int: hit:142949 missed:8461
                vNic.3000004 1/50331652: (system)
                vNic.3000005 2/50331653: (system)
                vNic.3000006 3/50331654: (system)
        br-vmnic1: hit:2087528 missed:140191
                vmk3 2/50331651: (system)
                vmnic1 1/50331650: (system)
~ #

nsx-flowctl

nsx-flowctl is the equivalent of ovs-ofctl and allows you to manage NSX vSwitch flow tables, ports, etc.

~ # nsx-flowctl show br-bond0
OFPT_FEATURES_REPLY (xid=0x3): dpid:0000725d4492c540
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_NW_TOS SET_TP_SRC SET_TP_DST ENQUEUE
 1(vmnic4): addr:00:50:56:01:08:c6
     config:     0
     state:      0
     current:    1GB-FD
     advertised: 10MB-HD 10MB-FD 100MB-HD 100MB-FD 1GB-HD 1GB-FD
     supported:  10MB-HD 10MB-FD 100MB-HD 100MB-FD 1GB-HD 1GB-FD
     speed: 1000 Mbps now, 1000 Mbps max
 2(vmnic5): addr:00:50:56:01:08:c8
     config:     0
     state:      0
     current:    1GB-FD
     advertised: 10MB-HD 10MB-FD 100MB-HD 100MB-FD 1GB-HD 1GB-FD
     supported:  10MB-HD 10MB-FD 100MB-HD 100MB-FD 1GB-HD 1GB-FD
     speed: 1000 Mbps now, 1000 Mbps max
 3(vmk3): addr:00:50:56:66:57:18
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x6): frags=normal miss_send_len=0
~ #

Courteous comments are welcome.

Juanma.

VMware has released a new vRealize Operations Manager management pack for NSX Multi-hypervisor. This new management pack will allow vROps to extend its management capabilities into any NSX-MH infrastructure.

This management pack provides a great set of features, including:

  • Operational visibility into the different NSX-MH components, from NSX Manager to Controllers, transport nodes and logical elements of the network.
  • Search and drill down functionality to help the administrator monitor the health of the NSX objects.
  • Alerts and root cause problem solving capabilities by detecting configuration, connectivity and health deficiencies in the NSX environment.
  • Report templates for NSX Multi-Hypervisor environment.

The management pack requires vRealize Operations Manager 6.0 and can be downloaded from VMware Solutions Exchange.

Installation

To install this management pack go to Administration in the left pane.

From there go to Solutions and on the right pane click on the plus sign to add the new management pack.

Browse for the pack installation file, click Upload and then click Next when the installation file is uploaded.

Accept the EULA and proceed to the last screen. Wait until the management pack is installed and then click Finish.

Configure the adapter instance

The first task is to create the credentials for the solution. Access Administration -> Credentials and create a new credential for the NSX-MH Adapter. It has to include the administration credentials for the NSX Controller, NSX Manager and vCenter Server.

Next access Administration -> Solutions, select the NSX-MH pack and click on the gear icon.

On the pop-up window enter the IP address or the FQDN for:

  • NSX Controller
  • NSX Manager
  • vCenter Server

Only the first NSX Controller is needed.

Test the connection, accept the certificates for the different components and click Save Settings. After this the adapter is configured and will start collecting data; depending on the size of the NSX environment, it will take some time to get a full collection of data.

NSX-MH dashboards

Out of the box the management pack comes with three dashboards.

  • NSX-MH Main: It provides an overview of the health of the different network objects

  • NSX-MH Topology: Provides details about the topology of a selected object, how it connects in the networks and a view of the related alerts and metrics.

  • NSX-MH Object Path: This dashboard enables the administrator to visually depict the path between two selected objects and verify how they are connected to each other and to other objects.

Juanma.

In the series of posts about OpenStack and KVM we saw how to add a KVM node to NSX for multi-hypervisor environments as a transport node. In this post we will discuss how to perform the same procedure for an ESXi host.

NSX vSwitch installation

Before proceeding with the installation keep in mind that NSX vSwitch can run on an ESXi host simultaneously only with the VMware Standard Switch; distributed switches are not supported.

Install the NSX vSwitch vib file using esxcli.

~ # esxcli software vib install --no-sig-check -v /tmp/vmware-nsxvswitch-2.1.3-35984-prod2013-stage-release.vib
Installation Result
   Message: Operation finished successfully.
   Reboot Required: false
   VIBs Installed: VMware_bootbank_vmware-nsxvswitch_2.1.3-35984
   VIBs Removed:
   VIBs Skipped:
~ #
~ # esxcli software vib list | grep nsx
vmware-nsxvswitch              2.1.3-35984                           VMware  VMwareCertified   2014-07-13
~ #

Check that a new virtual switch has been created on the host. Don’t use esxcli but the good old esxcfg-vswitch command, because for now there is no esxcli namespace available for the NSX vSwitch.

~ # esxcfg-vswitch -l
Switch Name      Num Ports   Used Ports  Configured Ports  MTU     Uplinks
vSwitch0         1536        7           128               1500    vmnic0,vmnic1

  PortGroup Name        VLAN ID  Used Ports  Uplinks
  vMotion               0        1           vmnic0,vmnic1
  Management Network    0        1           vmnic0,vmnic1

Switch Name      Num Ports   Used Ports  Configured Ports  MTU     Uplinks
vSwitch1         1536        6           128               1500    vmnic2,vmnic3

  PortGroup Name        VLAN ID  Used Ports  Uplinks
  vsan                  0        1           vmnic2,vmnic3

Switch Name      Num Ports   Used Ports  Uplinks
nsx-vswitch      1536        1

~ #

NSX vSwitch configuration

With NSX vSwitch installed proceed to the configuration. First connect an uplink to the switch; this will create an NVS bridge, which is the equivalent of an OVS bridge in Open vSwitch.

nsxcli uplink/connect vmnic4

Set an IP address for the uplink; this IP address will be used later to create the transport tunneling endpoint when we connect the ESXi host as a transport node to NSX. You can also specify the VLAN tag by appending vlan=<vlan_id> as an additional parameter to the command.

nsxcli uplink/set-ip vmnic4 192.168.110.123 255.255.255.0

Validate that the bridge is correctly configured. Use nsxcli port/show to verify the bridge and nsxcli uplink/show for the uplink.

~ # nsxcli port/show
br-int:
-------

br-vmnic4:
----------
vmnic4
vmk3

~ #

In the uplink/show output look for an entry like the one below.

==============================
vmnic4:
MAC       : 00:50:56:01:08:ca
Link      : Up
MTU       : 1500
IP config :
------------------------------
VMK intf  : vmk3
MAC addr  : 00:50:56:6b:ca:dd
Services  : NSX-Tunneling
VLAN      : 0
IP        : 192.168.110.123(Static)
Mask      : 255.255.255.0(Static)
..............................
------------------------------
Connection : NVS (uplink0)
Configured as standalone interface
==============================

You can also check the status of the vmkernel interface with esxcli and with nsxcli.

 ~ # esxcli network ip interface ipv4 get -i vmk3
Name  IPv4 Address     IPv4 Netmask   IPv4 Broadcast   Address Type  DHCP DNS
----  ---------------  -------------  ---------------  ------------  --------
vmk3  192.168.110.123  255.255.255.0  192.168.110.255  STATIC           false
~ #
~ # nsxcli vmknic/show vmk3
vmk3:
MAC addr  : 00:50:56:6b:ca:dd
Services  : NSX-Tunneling
VLAN      : 0
IP        : 192.168.110.123(Static)
Mask      : 255.255.255.0(Static)
Assoc with: vmnic4
..............................
~ #

The next step is to configure the gateway for the NSX vSwitch.

~ # nsxcli gw/set tunneling 192.168.110.2
~ #
~ # nsxcli gw/show tunneling
Tunneling:
Configured default gateway       : 192.168.110.2
Currently active default gateway : 192.168.110.2 (Manual)
~ #

Connect NSX vSwitch instance to NSX controller cluster.

~ # nsxcli manager/set ssl:192.168.110.31
~ #
~ # nsx-dbctl show
e42912a7-693f-43ee-84d5-11b5bb3491eb
    Manager "ssl:192.168.110.31:6632"
    Bridge br-int
        fail_mode: secure
    Bridge "br-vmnic4"
        fail_mode: standalone
        Port "vmk3"
            Interface "vmk3"
        Port "vmnic4"
            Interface "vmnic4"
    ovs_version: "2.1.3.35984"
~ #

Create an opaque network. An opaque network is basically a transport bridge that will provide the network backend for the virtual machines. Opaque networks must be identified during creation by their type and ID.

In this particular case the ESXi host will later be added to a cluster acting as the Nova compute backend for my OpenStack lab, so the network type must be nsx.network and the UUID has to match the one configured for the integration_bridge setting in the nova.conf file. We also need to specify the port attach mode, which for OpenStack environments is manual.
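
For reference, this is roughly where that setting lives on the Nova side (an illustrative fragment; the exact section and option placement depend on your OpenStack release and the NSX plugin in use):

# /etc/nova/nova.conf (illustrative)
[DEFAULT]
# Must match the opaque network created below with nsxcli network/add
integration_bridge = NSX-Bridge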

~ # nsxcli network/add NSX-Bridge NSX-Bridge nsx.network manual
success
~ #
~ # nsxcli network/show
UUID                                        Name                    Type            Mode
----                                        ----                    ----            ----
NSX-Bridge                                  NSX-Bridge              nsx.network     manual
~ #

Add ESXi as transport node

The final part of the procedure is to add our new ESXi server as transport node to NSX. Log into NSX Manager web UI and initiate the wizard to add a new Hypervisor. First specify the name of the new hypervisor.

Set the integration bridge.

Select Security Certificate as credential type and paste the NSX vSwitch SSL certificate. The certificate can be retrieved from /etc/nsxvswitch/nsxvswitch-cert.pem.
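
You can dump it straight from the ESXi shell:

~ # cat /etc/nsxvswitch/nsxvswitch-cert.pem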

Add an STT transport connector, using the IP address configured for the uplink.

Click Save & View and verify the new hypervisor configuration in NSX.

The setup of our new ESXi server within NSX is done. As always, comments are welcome.

Juanma.

Welcome to the third post of my series about OpenStack. In the first and second posts we saw in detail how to prepare the basic network infrastructure of our future OpenStack cloud using VMware NSX. In this third one we are going to install and configure the KVM compute host and the shared storage of the lab.

KVM setup

Create and install two CentOS 6.4 virtual machines with 2 vCPU, 2 GB of RAM, 2 network interfaces (E1000) and one 16GB disk. For the partitioning schema I have used the following one:

  • sda1 – 512MB – /boot
  • sda2 – Rest of the disk – LVM PV
    • lv_root – 13.5GB – /
    • lv_swap – 2GB – swap

Mark the Base and Standard groups to be installed and leave the rest unchecked. Set the hostname during the installation and leave the networking configuration with the default values. Bear in mind that you will need a DHCP server on your network; in my case I’m using the one that comes with VMware Fusion. If you don’t have one, you will have to set a temporary IP address here in order to be able to install the KVM software. Once the installation is done, reboot the virtual machine and open a root SSH session to proceed with the rest of the configuration tasks.

Disable SELinux with the setenforce command, and also modify the SELinux config to disable it during OS boot. I do not recommend disabling SELinux in a production environment, but for a lab it will simplify things.

setenforce 0
cp /etc/selinux/config /etc/selinux/config.orig
sed -i s/SELINUX\=enforcing/SELINUX\=disabled/ /etc/selinux/config

Check that hardware virtualization support is activated.

egrep -i 'vmx|svm' /proc/cpuinfo

Install KVM packages.

yum install kvm libvirt python-virtinst qemu-kvm

After installing a ton of dependencies, and if nothing failed, enable and start the libvirtd service.

[root@kvm1 ~]# chkconfig libvirtd on
[root@kvm1 ~]# service libvirtd start
Starting libvirtd daemon:                                  [  OK  ]
[root@kvm1 ~]#

Verify that KVM has been correctly installed and it’s loaded and running on the system.

[root@kvm1 ~]# lsmod | grep kvm
kvm_intel              53484  0
kvm                   316506  1 kvm_intel
[root@kvm1 ~]#
[root@kvm1 ~]# virsh -c qemu:///system list
 Id    Name                           State
----------------------------------------------------

[root@kvm1 ~]#

Hypervisor networking setup

With KVM software installed and ready we can now move on to configure the networking for both hosts and integrate them into our NSX deployment.

Disable NetworkManager for both interfaces. Edit the /etc/sysconfig/network-scripts/ifcfg-ethX files and change the NM_CONTROLLED value to no.
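
A quick way to flip that flag on both files (a minimal sketch; it assumes an NM_CONTROLLED line already exists in each file):

sed -i 's/^NM_CONTROLLED=.*/NM_CONTROLLED=no/' /etc/sysconfig/network-scripts/ifcfg-eth0
sed -i 's/^NM_CONTROLLED=.*/NM_CONTROLLED=no/' /etc/sysconfig/network-scripts/ifcfg-eth1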

By default libvirt creates the virbr0 network bridge, used by the virtual machines to access the external network through a NAT connection. We need to disable it to ensure that the bridge components of Open vSwitch can load without any errors.

virsh net-destroy default
virsh net-autostart --disable default

Install Open vSwitch

Copy the NSX OVS package to the KVM host and extract it.

[root@kvm1 nsx-ovs]# tar vxfz nsx-ovs-2.1.0-build33849-rhel64_x86_64.tar.gz
./
./nicira-flow-stats-exporter/
./nicira-flow-stats-exporter/nicira-flow-stats-exporter-4.1.0.32691-1.x86_64.rpm
./tcpdump-ovs-4.4.0.ovs2.1.0.33849-1.x86_64.rpm
./kmod-openvswitch-2.1.0.33849-1.el6.x86_64.rpm
./openvswitch-2.1.0.33849-1.x86_64.rpm
./nicira-ovs-hypervisor-node-2.1.0.33849-1.x86_64.rpm
./nicira-ovs-hypervisor-node-debuginfo-2.1.0.33849-1.x86_64.rpm
[root@kvm1 nsx-ovs]#

Install Open vSwitch packages.

rpm -Uvh kmod-openvswitch-2.1.0.33849-1.el6.x86_64.rpm
rpm -Uvh openvswitch-2.1.0.33849-1.x86_64.rpm

Verify that Open vSwitch service is enabled and start it.

[root@kvm1 ~]# chkconfig --list openvswitch
openvswitch     0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@kvm1 ~]#
[root@kvm1 ~]#
[root@kvm1 ~]# service openvswitch start
/etc/openvswitch/conf.db does not exist ... (warning).
Creating empty database /etc/openvswitch/conf.db           [  OK  ]
Starting ovsdb-server                                      [  OK  ]
Configuring Open vSwitch system IDs                        [  OK  ]
Inserting openvswitch module                               [  OK  ]
Starting ovs-vswitchd                                      [  OK  ]
Enabling remote OVSDB managers                             [  OK  ]
[root@kvm1 ~]#

Install the nicira-ovs-hypervisor-node package; this utility provides the infrastructure for distributed routing on the hypervisor. During the installation the integration bridge br-int and the OVS SSL credentials will be created.

[root@kvm1 ~]# rpm -Uvh nicira-ovs-hypervisor-node*.rpm
Preparing...                ########################################### [100%]
   1:nicira-ovs-hypervisor-n########################################### [ 50%]
   2:nicira-ovs-hypervisor-n########################################### [100%]
Running '/usr/sbin/ovs-integrate init'
successfully generated self-signed certificates..
successfully created the integration bridge..
[root@kvm1 ~]#

There are other packages, like nicira-flow-stats-exporter and tcpdump-ovs, but they are not needed for OVS to function. We can proceed now with the OVS configuration.

Configure Open vSwitch

The first step is to create an OVS bridge for each network interface card of the hypervisor.

ovs-vsctl add-br br0
ovs-vsctl br-set-external-id br0 bridge-id br0
ovs-vsctl set Bridge br0 fail-mode=standalone
ovs-vsctl add-port br0 eth0

If you were logged in over SSH you have probably noticed that your connection was lost; this is because the br0 interface has taken over the networking of the host and doesn’t have an IP address configured yet. To solve this, access the host console and edit the ifcfg-eth0 file to look like this.

DEVICE=eth0
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br0
ONBOOT=yes
BOOTPROTO=none
IPV6INIT=no
NAME=eth0
HOTPLUG=no
HWADDR=00:0C:29:CA:34:FE
NM_CONTROLLED=no

Next create and edit ifcfg-br0 file.

DEVICE=br0
DEVICETYPE=ovs
TYPE=OVSBridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.82.42
NETMASK=255.255.255.0
GATEWAY=192.168.82.2
IPV6INIT=no
HOTPLUG=no

Restart the network service and test the connection.

service network restart

Repeat all the above steps for the second network interface.
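
For reference, the bridge commands for the second interface look like this (a sketch assuming eth1/br1, matching the ovs-vsctl show output below; the ifcfg-eth1 and ifcfg-br1 files follow the same pattern with your own addressing):

ovs-vsctl add-br br1
ovs-vsctl br-set-external-id br1 bridge-id br1
ovs-vsctl set Bridge br1 fail-mode=standalone
ovs-vsctl add-port br1 eth1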

Finally configure NSX Controller Cluster as manager in Open vSwitch.

ovs-vsctl set-manager ssl:192.168.82.44

Execute ovs-vsctl show command to review OVS current configuration.

[root@kvm1 ~]# ovs-vsctl show
383c3f17-5c53-4992-be8e-6e9b195e51d8
    Manager "ssl:192.168.82.44"
    Bridge "br1"
        fail_mode: standalone
        Port "br1"
            Interface "br1"
                type: internal
        Port "eth1"
            Interface "eth1"
    Bridge "br0"
        fail_mode: standalone
        Port "eth0"
            Interface "eth0"
        Port "br0"
            Interface "br0"
                type: internal
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "2.1.0.33849"
[root@kvm1 ~]#

Register OVS in NSX Controller

With our OVS instance installed and running we can now inform the NSX Controller of its existence, either via the NVP API or NSX Manager; in our case we will use the latter.

Log into NSX Manager as the admin user and go to the Dashboard; from the Summary of Transport Components table, click Add in the Hypervisors row. Verify that Hypervisor is selected as the transport node type and move to the Basics screen. Enter a name for the hypervisor, usually the hostname of the server.

In Properties enter:

  • Integration bridge ID, which for us is br-int.
  • Admin Status Enabled – Enabled by default.

For the Credential screen we are going to need the SSL certificate that was created along with the integration bridge during the NSX OVS installation. The PEM certificate file is ovsclient-cert.pem and is in /etc/openvswitch directory.

[root@kvm1 ~]# cat /etc/openvswitch/ovsclient-cert.pem
-----BEGIN CERTIFICATE-----
MIIDwjCCAqoCCQDZUob5H9tzvjANBgkqhkiG9w0BAQUFADCBojELMAkGA1UEBhMC
VVMxCzAJBgNVBAgTAkNBMRIwEAYDVQQHEwlQYWxvIEFsdG8xFTATBgNVBAoTDE9w
ZW4gdlN3aXRjaDEfMB0GA1UECxMWT3BlbiB2U3dpdGNoIGNlcnRpZmllcjE6MDgG
A1UEAxMxb3ZzY2xpZW50IGlkOjA4NWQwMTFiLTJiMzYtNGQ5My1iMWIyLWJjODIz
MDczYzE0YzAeFw0xNDA1MDQyMjE3NTVaFw0yNDA1MDEyMjE3NTVaMIGiMQswCQYD
VQQGEwJVUzELMAkGA1UECBMCQ0ExEjAQBgNVBAcTCVBhbG8gQWx0bzEVMBMGA1UE
ChMMT3BlbiB2U3dpdGNoMR8wHQYDVQQLExZPcGVuIHZTd2l0Y2ggY2VydGlmaWVy
MTowOAYDVQQDEzFvdnNjbGllbnQgaWQ6MDg1ZDAxMWItMmIzNi00ZDkzLWIxYjIt
YmM4MjMwNzNjMTRjMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAwgqT
hvG72vat0hXvTuukZOs6fM4CAphmN34l4415q/vReSM3upN+vOLoyGJ/8VJGdNXH
3Bsu6V58f6o8EPbfnhgqf2rCP0r5kiiN5SivsAWI5//ltV1GDFO4+8VpYAwn4Cbd
sNOuFEM1mKOR//IL3Riy9Nkh16wfLy44KEE9745uhZ9gW96AkSkBx1ajjUiApnjL
M6L2w/E4sxNeMDLf/VYlc/SuEg775D9iaPpA1haJt8FFw1g769FsR9Q0Fl+CoT7f
ggBZTKwwcoU+5Ew1mNlPV0Hm8vpFcXbtMBeuT9Fe7k4bC+UuQPaSnbPpbZMpx/wd
fHOdJpemcog/0EjOJQIDAQABMA0GCSqGSIb3DQEBBQUAA4IBAQDBPNM/uI25ofIl
AgCpG42UD3M/RZRPX0/6Be4jCTaAuET6J8wAKA4k1btA6UPt0M98N6o4y60Du2D+
ZwFOa2LSTXZB43X70XnDKxapDVqmhKtrmX2hL1NRD9RjTTx3TOXMOlUiUizRB1+L
d8MNhX3qrvOLeFOUnxm6C5RnI/HdqvS9TyxybX+Qfqit9Q66hbjAt9RribXSw21G
Ix8d9S4NyDO91mDstIcXeNRUk8K64gEQSKxQO9QKmVAQBIlYAJVVXzfkXyHEiKTe
0zIsW/oknwWeQMD9xSrKomY/5+LCuDM1jT5LcL8vxmrEVIrUjNqt4nQsT4mjooG+
XYf2HdXj
-----END CERTIFICATE-----
[root@kvm1 ~]#

Copy the contents of the file and paste them in the Security Certificate text box.

Finally add the Transport Connector with the values:

  • Transport Type: STT
  • Transport Zone UUID: The transport zone, in my case the UUID corresponding to vlab-transport-zone.
  • IP Address: The address of the br0 interface of the host.

Click Save & View and check that Management and OpenFlow connections are up.

GlusterFS setup

I chose GlusterFS for my OpenStack lab for two reasons. I have used it in the past, so this has been a good opportunity to refresh and enhance my rusty Gluster skills, and it’s supported as a storage backend for Glance in OpenStack. Instead of going with CentOS again, this time I chose Fedora 20 for my Gluster VM; a real-world GlusterFS cluster will have at least two nodes, but for our lab one will be enough.

Create a Fedora x64 virtual machine with 1 vCPU, 1GB of RAM and one network interface. For the storage part use the following:

  • System disk: 16GB
  • Data disk: 72GB

Use the same partitioning schema as the KVM hosts for the system disk. Choose a Minimal installation and add the Standard group. Configure the hostname and the IP address of the node, set the root password and create a user as administrator; here I’m using my personal user, jrey.

Disable SELinux.

sudo setenforce 0
sudo cp /etc/selinux/config /etc/selinux/config.orig
sudo sed -i s/SELINUX\=enforcing/SELINUX\=disabled/ /etc/selinux/config

Stop and disable firewalld.

sudo systemctl disable firewalld.service
sudo systemctl stop firewalld.service

Install GlusterFS packages. There is no need to add any additional yum repository since Gluster is included in the standard Fedora repos.

sudo yum install glusterfs-server

Enable Gluster services.

sudo systemctl enable glusterd.service
sudo systemctl enable glusterfsd.service

Start Gluster services.

[jrey@gluster ~]$ sudo systemctl start glusterd.service
[jrey@gluster ~]$ sudo systemctl start glusterfsd.service
[jrey@gluster ~]$
[jrey@gluster ~]$ sudo systemctl status glusterd.service
glusterd.service - GlusterFS an clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)
   Active: active (running) since Mon 2014-04-28 17:17:35 CEST; 20s ago
  Process: 1496 ExecStart=/usr/sbin/glusterd -p /run/glusterd.pid (code=exited, status=0/SUCCESS)
 Main PID: 1497 (glusterd)
   CGroup: /system.slice/glusterd.service
           └─1497 /usr/sbin/glusterd -p /run/glusterd.pid

Apr 28 17:17:35 gluster.vlab.local systemd[1]: Started GlusterFS an clustered file-system server.
[jrey@gluster ~]$
[jrey@gluster ~]$ sudo systemctl status glusterfsd.service
glusterfsd.service - GlusterFS brick processes (stopping only)
   Loaded: loaded (/usr/lib/systemd/system/glusterfsd.service; enabled)
   Active: active (exited) since Mon 2014-04-28 17:17:45 CEST; 15s ago
  Process: 1515 ExecStart=/bin/true (code=exited, status=0/SUCCESS)
 Main PID: 1515 (code=exited, status=0/SUCCESS)

Apr 28 17:17:45 gluster.vlab.local systemd[1]: Starting GlusterFS brick processes (stopping only)...
Apr 28 17:17:45 gluster.vlab.local systemd[1]: Started GlusterFS brick processes (stopping only).
[jrey@gluster ~]$

Since we are running a one-node cluster there is no need to add any node to the trusted pool. In case you decide to run a multinode environment, you can set up the pool by running the following command on each node of the cluster.

gluster peer probe <IP_ADDRESS_OF_OTHER_NODE>

Edit the data disk with fdisk and create a single partition. Format the partition as XFS.
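
If you prefer to script the partitioning instead of walking through fdisk interactively, parted can do it in one shot (a sketch; it assumes /dev/sdb is the blank data disk):

sudo parted -s /dev/sdb mklabel msdos
sudo parted -s /dev/sdb mkpart primary xfs 1MiB 100%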

[jrey@gluster ~]$ sudo mkfs.xfs -i size=512 /dev/sdb1
meta-data=/dev/sdb1              isize=512    agcount=4, agsize=4718528 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=18874112, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=9215, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[jrey@gluster ~]$

Create the mount point for the new filesystem, mount the partition and edit /etc/fstab accordingly to make it persistent across reboots.

sudo mkdir -p /data/glance/
sudo mount /dev/sdb1 /data/glance
sudo mkdir -p /data/glance/brick1
echo "/dev/sdb1 /data/glance xfs defaults 0 0" | sudo tee -a /etc/fstab

Create the Gluster volume and start it.

[jrey@gluster ~]$ sudo gluster volume create gv0 gluster.vlab.local:/data/glance/brick1
volume create: gv0: success: please start the volume to access data
[jrey@gluster ~]$
[jrey@gluster ~]$ sudo gluster volume start gv0
volume start: gv0: success
[jrey@gluster ~]$
[jrey@gluster ~]$ sudo gluster volume info

Volume Name: gv0
Type: Distribute
Volume ID: d1ad2d00-6210-4856-a5eb-26ddcba77a70
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: gluster.vlab.local:/data/glance/brick1
[jrey@gluster ~]$

The configuration of the Gluster node is finished. In the next article we will install and configure OpenStack using the different components detailed in this and the previous parts of the series.

Please feel free to add any comment or correction.

Juanma.

Welcome to Part 2 of this series about OpenStack and NSX. In the first part we defined the basic NSX concepts and components, installed and configured the NSX appliances and connected the NSX Controller Cluster with the NSX Manager. In this second part we will see how Transport and Logical networks are configured, get yourself comfortable because this is going to be a long post :-)

To quickly refresh our concepts remember that the logical network represents the virtual machine point of view of the network and the transport network represents the underlying physical network through its transport nodes.

Configure the Transport Network

The first step to have a fully functional NSX infrastructure is to configure the Transport Network. The Transport Network is made of the Transport Zone and the Transport Nodes. These transport nodes can be NSX appliances like Service Nodes or Gateways and hypervisors like KVM or ESXi hosts. Third-party hardware L2 Gateways can also be transport nodes but those are out of the scope of this series.

Create a Transport Zone

A Transport Zone is a representation of the physical network used to send traffic between OVS instances. Without a transport zone the transport nodes cannot be connected to NSX, so it is mandatory to define it before performing any operation on them.

Select Network Components > Transport Layer > Transport Zones.

In the next screen click Add to launch the Create Transport Zone wizard. This same wizard can also be launched from the Dashboard screen: in the Summary of Transport Components area, click the Add button in the Zones row.

Enter the name of the Transport Zone and click Save & View.

The new transport zone will now be available.

With the Transport Zone created we can start configuring the transport nodes.

Configure the Transport Nodes

As we detailed in Part 1, the Service Node appliances are installed and configured independently, like the rest of the appliances; however, they need to be added to the NSX Controller Cluster in order to perform the offloading function for the OVS devices.

From the Summary of Transport Components section in the Dashboard screen click Add.

The Create Service Node window will show up. In the first screen select Service Node as the Transport Node Type and click Next.

Enter the display name for the Service Node, in this case nsxsn.

In the Properties screen you will see three settings available.

  • Management Rendezvous Server – Used to designate the Service Node as a Management Rendezvous Server; it will proxy management traffic between the NSX Controller Cluster and remote NSX Gateways.
  • Admin Status Enabled – Used to enable or disable the Transport Node.
  • Tunnel Keep Alive Spray – Used to improve the health testing of transport node’s tunnels.

For our lab leaving the default values will suffice.

For the next step get the SSL certificate from the Service Node. Establish an SSH session with the appliance as the admin user and use the show switch certificate command. The output of the command can be a bit large; we just need the certificate itself.

-----BEGIN CERTIFICATE-----
MIIDgjCCAmoCAQMwDQYJKoZIhvcNAQEEBQAwgYExCzAJBgNVBAYTAlVTMQswCQYD
VQQIEwJDQTEVMBMGA1UEChMMT3BlbiB2U3dpdGNoMREwDwYDVQQLEwhzd2l0Y2hj
YTE7MDkGA1UEAxMyT1ZTIHN3aXRjaGNhIENBIENlcnRpZmljYXRlICgyMDE0IEFw
ciAyNyAyMzoyMDowNSkwHhcNMTQwNDI3MjMyMDUzWhcNMjQwNDI0MjMyMDUzWjCB
izELMAkGA1UEBhMCVVMxCzAJBgNVBAgTAkNBMRUwEwYDVQQKEwxPcGVuIHZTd2l0
Y2gxHzAdBgNVBAsTFk9wZW4gdlN3aXRjaCBjZXJ0aWZpZXIxNzA1BgNVBAMTLmNs
aWVudCBpZDpjYWIwNWU2OS1iZjI5LTRkMjItYTY1Ny0zYTJhZThkNjgyY2MwggEi
MA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC9lAk6DZWO/miggjjXk4xQd3hv
ieTPpjklw6Q4UKW+qMt0GjhC06n/cVK4kR12v1aCcxsKWmPK8LC1vU68e2T61zLe
NjRYHfa9VhqKjAY5p9xPcmQGli8+Cff47LfUVylEA+74YNMDHCuJbMagtwJOXSUa
UpaB3EnsEu6C4d4RzMBn55tlDbWAuFojH9JAki3g4maMqJRhILRUYoUFoSknmUvC
8cm719TcXw4u5cAzNBC2mMv6uRuDd+l1VquhFkksP2Di3D0/kI2yBW7lgDRTE4fn
O8hLasNPuGg24mOAkW/OIvusieW2MSqEwhGV5+G4fRgbRAI1ijTRT1K4dZuhAgMB
AAEwDQYJKoZIhvcNAQEEBQADggEBAB5bqYe2LXlIbwHSx1j28d/5FBmGxMd5LUEB
h29B+nTj3wVZkZpIxFoP/QRhzMXPWGM1PeixWN9o8oSZfrCEA7yMcn3uMMwdAmNz
7eNv4svw19hccWEvdNRBkQKdDX1YKItwUJKqVMJnI2dCqsGD4H1R9uwU+QJuEIgm
VMEoHYq7TwQXX6TR1eebjOKdeg4laOcsKystHTW+wuMBfOfcoYIuEZMQ7SOsRANe
l1hm3VI7t1gxp38r9JbtEC2jCqqBMzR+ZrzmodLsn/VgFDv8QiUZ5tFHaWl+jhQ/
JWxXqjLo42B6fRUA04wF6tJKnu3KDaVFIx4ssvKw2Q5u6PNSf8I=
-----END CERTIFICATE-----

Go back to the Create Service Node window and select Security Certificate as credential type and paste the certificate extracted from the Service Node in the Security Certificate text area.

The final step is to setup the Transport Connectors. Click Add Connector.

In the Create Transport Connector screen select STT as the Transport Type. Select the transport zone that we created before and enter the IP address of the Service Node.

Once the Connector is created, click Save in the final screen and the new Service Node will be added to the NSX Controller.

Now we need to finish the Gateway appliance configuration, in a similar way as we did with the Service Node. Again from the Dashboard and the Summary of Transport Components section, launch the Create Gateway wizard by clicking the Add button in the Gateways row. The rest of the steps are very similar to the Service Node process.

  • Select Gateway as Transport Node Type
  • Get the SSL certificate from NSX Gateway with the show switch certificate command.
  • Configure the credentials using the SSL certificate
  • Create an STT Transport Connector and set the IP address of the Gateway

All the above transport node related tasks can also be achieved through the command line by using the request transport-node-register command. This is a hidden command that can be used to register Service Nodes or Gateways in an NSX Controller Cluster. According to the NSX documentation there are two versions of the command:

  • cert – Used for production environments
  • mgmt-ip – Used for testing environments

The first one will transmit the encoded PEM certificates to the NSX Controller while the second will use the appliance management IP as the credential. The arguments for both versions are:

  • controller-ip-url – Switch manager address of the NSX Controller Cluster, accepts IP or hostname and the TCP port to connect to.
  • ctrler-username – NSX administration account for the Controller.
  • ctrler-password – NSX administration account password.
  • mgmt-ip – The IP address of the transport node.
  • cert – As we detailed before, this one is mutually exclusive with mgmt-ip and vice versa.
  • rendezvous-yes-or-no – Simply pass yes or no to indicate that the transport node is a Management Rendezvous Server one.
  • tc-ip-address – IP address of the transport node connector.
  • tc-zone.uuid – Transport Zone to be associated with the transport node.
  • tc-type – Encapsulation format for the transport node’s transport connector.

With those arguments a registering command for our Service Node would be like this.

request transport-node-register nsxc.vlab.local admin admin mgmt-ip no tc-id 192.168.82.46 tc-uuid b948fd35-5737-4a30-8741-43134771d40c tc-type STT

Create a Gateway Service

The next step is to set up a Gateway Service. My lab lives within VMware Fusion and for now I don’t really need an L2 or L3 Gateway Service, but since the purpose of this post is to illustrate NSX I decided to configure one and leave everything in place for a future expansion of the lab.

Remember that Gateway services can be of two types:

  • L2 Gateway Service – Expands a logical network by connecting it to a physical L2 segment.
  • L3 Gateway Service – Connects virtual router ports to physical IP networks.

It’s important to note that in an NSX deployment you may connect only one Gateway Service, either L2 or L3, to a given L2 physical segment.

L3 Service Setup

From NSX Manager Dashboard go to Summary of Transport Components section and in Gateway Services click Add. In the first step of the Create Gateway Service wizard select the L3 Gateway Service from the drop-down menu.

In the second step configure the Display Name for the new service and click Next.

The third and final step is to bind our previously configured gateway node to the service. Click Add Gateway.

In the Edit Gateway pop-up select the UUID of the gateway node and the network interface to be used; leave the Failure Zone ID field blank. This last field is used for high availability of L3 services; I will try to write about this subject in a future post.

Click Save & View and check the newly created Gateway Service.

L2 Service Setup

To create a new L2 Gateway Service follow the same procedure as with L3 one and launch the Create Gateway Service wizard.

  • Select L2 Gateway Service.
  • Enter the name of the new service.
  • Add the gateway and fill in the UUID and network interface fields, this screen is slightly different since there is no Failure Zone ID field.

Please bear in mind that the example in the above screenshot will fail, because you cannot use the same gateway appliance for two different L2 or L3 Gateway Services; if you need an L2 service, deploy a new gateway node and configure it following the above steps.

With this step our Transport Network is almost set up; the only part left is to add the hypervisors to the Controller Cluster, but I’ll leave that for the next article.

NSX Logical Network View

In any typical OpenStack deployment the logical network elements will usually be created and configured not through NSX Manager but through the NVP API. The API is called by the OpenStack Neutron module, using the Neutron plugin for VMware NSX, either from the Horizon dashboard or the Neutron command line. However, I decided to explain how to create and configure the different Logical Layer elements from NSX Manager.
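
As an illustration, once Neutron is wired to NSX a tenant network and router would typically be created with the Neutron CLI instead of NSX Manager; something like this (the names here are made up for the example):

neutron net-create web-net
neutron subnet-create web-net 172.16.10.0/24 --name web-subnet
neutron router-create web-router
neutron router-interface-add web-router web-subnet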

Before starting with a simple walk-through of the process we first need to describe the different elements of the Logical Network. The NSX Logical Network provides functionality similar to a dedicated Ethernet switch. It recreates entities like switches, routers and ports, and provides management functionality for them through the NVP API.

  • Logical Switch – Recreates an Ethernet-type L2 service-model, containing logical switch ports that can be configured to implement a set of security and QoS policies.
  • Logical Router – Provides L3 routing services for the logical network. Can be configured to offer other services such as NAT and routed connections to the external physical network.
  • Logical Switch Port – Represents and provides a logical connection point for virtual machines network interfaces (VIF), router patch connections or an L2 gateway connection to an external network.
  • Logical Router Port – Provides the logical connection point for a patch connection to a switch or L3 gateway connections.
  • Logical Port Attachment – This is the logical equivalent of connecting a network cable between an interface and a switch port.

Create a Logical Switch

Let’s start from the beginning and create a Logical Switch. From the Summary of Logical Components area in the Dashboard screen, click Add in the Switches row.

Provide the name of the new switch and click Next.

In Properties there are two different settings:

  • Port Isolation Enabled – This setting basically disables VM to VM communication by preventing communication between the different logical ports of the switch.
  • Replication Mode – Determines which transport node handles replication of broadcast, unknown-unicast and multicast (BUM) traffic. There are two possible values:
    • Service Node – Traffic is sent to the NSX Service Node to be flooded to the L2 logical segment. This is the default and recommended setting.
    • Source Node – BUM traffic is handled directly by the source hypervisor instead of a Service Node.

Next specify the transport binding for the logical switch. Click Add Binding and select the Transport Type and the Transport Zone UUID; I’ve selected STT and our previously created transport zone, respectively. There are several transport types available:

  • STT
  • GRE
  • Bridge
  • IPsec GRE
  • IPsec STT
  • VXLAN

Click Save & View to review our new logical switch; leave the router connection for later.

Add Logical Switch Ports

Once one or more logical switches have been created we can start adding ports to them. Ports will provide connection points to our virtual machines. Click Add in the Logical Ports row and the Create Logical Switch Port wizard will be started.

Select the Logical Switch the port will belong to.

In Basics provide a descriptive name for the port; I tend to use the convention vm_name-port.

In the Properties screen you have the following fields available:

  • Port number – Optional parameter.
  • Admin Status Enabled – Enabled by default.
  • Logical Queue UUID – An optional parameter used to link the port to a QoS policy.

Leave the Mirror Targets settings with the default values and move forward to the Attachment screen. Select VIF (virtual machine interface) as the Attachment Type, then select a hypervisor and the network interface of the virtual machine.

Attachments can be of the following types:

  • None
  • Extended Network Bridge
  • Multi-Domain Interconnect
  • L2 Gateway
  • Patch to logical router port
  • VTEP L2 Gateway

For example, an Extended Network Bridge attachment would be configured like this.

Create a Logical Router

Launch the Create Logical Router dialog and set the name of the new router in the first screen.

In Properties select the Routing Type:

  • Routing Table – Allows you to define static routes on the logical router.
  • Single Default Route – Defines a single default route for all traffic, routing all traffic through the L3 Gateway connecting the router to the datacenter physical network.

Tick the Enable NAT Synchronization checkbox if you want this logical router to provide NAT services and want the NAT rules to survive in the event of a Gateway failover.

Replication Mode works in the same way as in the Logical Switch; Service Node is selected by default.

Configure the Distributed Logical Router option. If the checkbox is unticked the logical router will be a centralized one, and all network traffic between virtual machines will be forwarded to the NSX Service Nodes. If you tick the checkbox it will be a distributed logical router, providing one-hop routing of VM-to-VM traffic; to be able to use this feature, all hypervisors running VMs that use this router must be in the same transport zone.

Click Save & View to finish the process and review the new router. Optionally at the last step you can assign an L3 Gateway Service and configure the corresponding Logical Router Port.

Select the UUID of the desired gateway service and configure the Logical Router Port settings. In the example I chose the basic configuration, since I only need to provide the IP address for the port.

Add a Logical Router Port

To create and assign a logical port to an existing router, launch the corresponding wizard from the Summary of Logical Components table in the Dashboard screen.

Select the Logical Router UUID from the drop-down.

Enter a name for the port and click Next to move to the Properties step. The Port Number and MAC Address fields are optional; leave Admin Status Enabled checked. In the IP Addresses table add the required IP address; it must be in IPv4 CIDR notation.

Configure the attachment. For router ports the attachments can be set to one of the following types:

  • None
  • L3 Gateway
  • Patch

For my example lab I configured the attachment as a Patch one. You need to select the Logical Switch UUID and the Peer Port UUID; this peer port is a port on the logical switch, and you have to configure it either before creating the router port or at this step.

Click Save to finish the creation process.

This completes the logical network part. It’s a very basic setup, without any security or QoS services, but I hope that you gained a better understanding of the transport and logical network concepts and the relationships between their different components. In the third part of the series we will review how to set up the KVM hypervisor and connect it to the NSX infrastructure. Comments, corrections or questions are always welcome.

Juanma.

vCenter Chargeback provides a fully featured API that allows you to automate many tasks, like user and rights management, cost configuration or reporting.

The Chargeback API is REST-based; this means that it receives requests and sends responses using the HTTP protocol and methods. The CBM API implements a set of basic CRUD operations, and each of them maps to an HTTP method as shown in the table below.

HTTP Method     CRUD Operation
POST            CREATE
GET             READ
PUT             UPDATE/CREATE
DELETE          DELETE

The API syntax is actually very simple; it is composed of:

  • Request method
  • Base URL
  • API signature

It’s better illustrated with an example:

POST https://chargeback.corp.local/vCenter-CB/api/login?version=2.5

We can map the above example to the different elements of the API syntax:

  • Request method: POST
  • Base URL: https://chargeback.corp.local
  • API signature: /vCenter-CB/api/login

We have also included the API version; I usually include the version as a URL parameter, but as we will see later it is not really required. Some of the tasks will also need URL parameters, placed after the signature.

If there is a need for more complex information, either in the request or the response, an XML payload has to be sent, just like in many other REST APIs. Even to perform a simple login an XML has to be sent, like the next example.
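
The login payload looks roughly like this (a hedged reconstruction: the Request envelope and namespace match the responses shown later in this post, but check the element names against the CBM API reference for your version):

<?xml version="1.0" encoding="UTF-8"?>
<Request xmlns="http://www.vmware.com/vcenter/chargeback/2.0">
  <Users>
    <User>
      <Name>admin</Name>
      <Password>changeme</Password>
    </User>
  </Users>
</Request>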

For our first ride with the CBM API we will use the Firefox REST Client add-on; this handy add-on provides a visual and easy way to quickly ramp up with any REST API. I personally have used it a lot with Chargeback to try the different API operations during a development project for a customer.

I’m not going to review every possible API call, just a few examples to illustrate how it works.

Login operation:

This is the most basic operation of all. In the REST Client paste the XML payload in the Body area, select POST as the method to use and fill the URL field.

Get hierarchy list:

Not every task needs an XML payload; in the following example we are going to get a list of the hierarchies using a GET method with no message body. The URL to make the request would be:

https://<chargeback_server>/vCenter-CB/api/hierarchies?version=2.5
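
If you prefer the command line to the browser add-on, the same request can be made with curl (a sketch; -k skips certificate validation, which is fine for a lab, and it assumes you saved the session cookie from the login call with -c cookies.txt):

curl -k -b cookies.txt "https://chargeback.corp.local/vCenter-CB/api/hierarchies?version=2.5"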

After executing the request we can see in the REST Client the response from Chargeback in XML format.

If we go to Chargeback web UI we’ll see the listed hierarchies.

Get all Pricing Models:

Another simple request with no XML payload, with a similar syntax to the previous one:

GET https://<chargeback_server>/vCenter-CB/api/costModels?version=2.5

It will, however, produce a much more detailed XML response, with the details of each of the configured Pricing Models.

<?xml version="1.0" encoding="UTF-8"?>
<Response xmlns="http://www.vmware.com/vcenter/chargeback/2.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" status="success" isValidLicense="true">
  <CostModels>
    <CostModel id="31">
      <Name>Default Allocation Based Chargeback Pricing Model</Name>
      <Description>(DONT DELETE) This is only for optimization reports, only base rates are allowed for editing.</Description>
      <Currency id="104">
        <Name>USD</Name>
      </Currency>
    </CostModel>
    <CostModel id="30">
      <Name>Default Chargeback Pricing Model</Name>
      <Description>This is the default pricing model shipped with VMware vCenter Chargeback Manager.</Description>
      <Currency id="104">
        <Name>USD</Name>
      </Currency>
    </CostModel>
    <CostModel id="558">
      <Name>VMware Cloud Director Actual Usage Pricing Model</Name>
      <Description>Apply this pricing model to charge for actual usage in hierarchy</Description>
      <Currency id="104">
        <Name>USD</Name>
      </Currency>
    </CostModel>
    <CostModel id="548">
      <Name>VMware Cloud Director Allocation Pool Pricing Model</Name>
      <Description>Apply this pricing model on vDC with Allocation model as 'Allocation Pool' in hierarchy</Description>
      <Currency id="104">
        <Name>USD</Name>
      </Currency>
    </CostModel>
    <CostModel id="550">
      <Name>VMware Cloud Director Networks Pricing Model</Name>
      <Description>Apply this pricing model on organization networks in hierarchy</Description>
      <Currency id="104">
        <Name>USD</Name>
      </Currency>
    </CostModel>
    <CostModel id="556">
      <Name>VMware Cloud Director Overage Allocation Pool Pricing Model</Name>
      <Description>Apply this pricing model to charge for overage on vDC with Allocation model as 'Allocation Pool' in hierarchy</Description>
      <Currency id="104">
        <Name>USD</Name>
      </Currency>
    </CostModel>
    <CostModel id="554">
      <Name>VMware Cloud Director Pay As You Go - Fixed Charging Pricing Model</Name>
      <Description>Apply this pricing model for 'Fixed charging' on vDC with Allocation model as 'Pay As You Go' in hierarchy</Description>
      <Currency id="104">
        <Name>USD</Name>
      </Currency>
    </CostModel>
    <CostModel id="552">
      <Name>VMware Cloud Director Pay As You Go - Resource Based Charging Pricing Model</Name>
      <Description>Apply this pricing model for 'Resource based charging' on vDC with Allocation model as 'Pay As You Go' in hierarchy</Description>
      <Currency id="104">
        <Name>USD</Name>
      </Currency>
    </CostModel>
    <CostModel id="546">
      <Name>VMware Cloud Director Reservation Pool Pricing Model</Name>
      <Description>Apply this pricing model on vDC with Allocation model as 'Reservation Pool' in hierarchy</Description>
      <Currency id="104">
        <Name>USD</Name>
      </Currency>
    </CostModel>
  </CostModels>
</Response>

Add a new hierarchy:

Adding a hierarchy is invoked using a POST method, which corresponds to the CREATE operation from the table at the beginning of the post. The syntax for the request would be:

POST https://<chargeback_server>/vCenter-CB/api/hierarchy

In this case I’m not going to pass the version as a parameter. An XML payload with the details of the new hierarchy is required.
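
The payload is along these lines (an illustrative reconstruction; the hierarchy name and description are made up, and the element names should be checked against the CBM API reference):

<?xml version="1.0" encoding="UTF-8"?>
<Request xmlns="http://www.vmware.com/vcenter/chargeback/2.0">
  <Hierarchy>
    <Name>Engineering</Name>
    <Description>Hierarchy created through the API</Description>
  </Hierarchy>
</Request>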

Log in to the Chargeback web interface to check that the new hierarchy is there.

I hope that now you have at least a general understanding of how the Chargeback API works and how easy it is to interact with it. In the second post of the series we will review how to automate Chargeback using vCenter Orchestrator.

Juanma.

VMware has released VMware vSphere Mobile Watchlist. It is available for Android and iOS (iPhone only for now) and will enable any system administrator to keep an eye on their most critical apps from their phones.

It is a very intuitive app to use, below are a series of screenshots from the app installed on my iPhone 5 and connected to my homelab vCenter Server.

From the main screen you can add virtual machines from your vCenter inventory to the default watchlist or create a new watchlist.

Once you have added several virtual machines to your list you can check them in a glance in list or grid mode.

VM watchlist

Tap on a VM and you will access its details, configured resources, VM Tools state, related objects, etc.

As you can see from the screenshot this is a multi-panel view; slide to the left and you can get a console screenshot of the virtual machine and perform different actions on it.

Console screenshot    

I hope this is a step towards a new set of mobile apps from VMware focused on the administration of the different components of a virtual and cloud infrastructure :)

Juanma.

Customers usually ask how to monitor their vCenter Chargeback installations, so I finally decided to write a small post listing the services and processes of the different Chargeback components.

Each Windows service and its path to executable:

  • VMware vCenter Chargeback: C:\Program Files (x86)\VMware\VMware vCenter Chargeback\apache-tomcat\bin\tomcat6.exe
  • VMware vCenter Chargeback – VMware Cloud Director DataCollector: C:\Program Files (x86)\VMware\VMware vCenter Chargeback\VMware Cloud Director DataCollector\JavaService.exe
  • VMware vCenter Chargeback – vShield Manager DataCollector: C:\Program Files (x86)\VMware\VMware vCenter Chargeback\vShield Manager DataCollector\JavaService.exe
  • VMware vCenter Chargeback DataCollector-Embedded: C:\Program Files (x86)\VMware\VMware vCenter Chargeback\DataCollector-Embedded\JavaService.exe
  • VMware vCenter Chargeback Load Balancer: C:\Program Files (x86)\VMware\VMware vCenter Chargeback\Apache2.2\bin\httpd.exe

Bear in mind that if the vShield and vCloud DataCollectors are installed on the same server as the Chargeback Server, the paths will be slightly different:

  • VMware vCenter Chargeback – vShield Manager DataCollector-Embedded: C:\Program Files (x86)\VMware\VMware vCenter Chargeback\vShield Manager DataCollector-Embedded\JavaService.exe
  • VMware vCenter Chargeback DataCollector-Embedded: C:\Program Files (x86)\VMware\VMware vCenter Chargeback\DataCollector-Embedded\JavaService.exe
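
A quick way to check that they are all up from PowerShell (a sketch; it matches on the display names listed above, so adjust the pattern if your installation differs):

Get-Service -DisplayName "VMware vCenter Chargeback*" | Format-Table Status, DisplayName -AutoSize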

Juanma.

I found this error last week during a deployment at a customer. The vCenter Infrastructure Navigator appliance does not keep its configured hostname after a reboot; it gets reset to the default localhost.localdom value.

Setting it again in the administration web interface doesn’t solve the problem; it will be lost again after the next reboot.

The problem is in the vami_set_hostname script: it has a HOSTNAME variable set to localhost.localdom, and if the script fails to do a reverse lookup of the hostname from the IP address using the host command, the hostname is set back to that default value.

To fix this, edit the file, which can be found in /opt/vmware/share/vami, and set the value of the variable to your hostname. After that, reboot the appliance and check that everything works as expected.
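
The change amounts to something like this (an illustrative sketch; vin.corp.local is a made-up hostname and the surrounding script contents vary by appliance version):

# /opt/vmware/share/vami/vami_set_hostname
HOSTNAME=vin.corp.local    # was: HOSTNAME=localhost.localdom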

Juanma.

I thought it would be worth writing a quick post to share the script I’ve been using to create CBM databases in customer installations. The original script wasn’t mine, honestly I don’t know who wrote it, and I’ve modified it to suit my needs.

Just remember to adjust the paths of the files, their sizes, and the usernames and passwords to your environment standards. Hope you find it helpful.

Juanma.