Archives For Virtualization

Welcome to the third post of my series about OpenStack. In the first and second posts we saw in detail how to prepare the basic network infrastructure of our future OpenStack cloud using VMware NSX. In this third one we are going to install and configure the KVM compute host and the shared storage of the lab.

KVM setup

Create and install two CentOS 6.4 virtual machines with 2 vCPU, 2 GB of RAM, 2 network interfaces (E1000) and one 16GB disk. For the partitioning schema I have used the following one:

  • sda1 – 512MB – /boot
  • sda2 – Rest of the disk – LVM PV
    • lv_root – 13.5GB – /
    • lv_swap – 2GB – swap

Mark the Base and Standard groups to be installed and leave the rest unchecked. Set the hostname during the installation and leave the networking configuration with the default values. Bear in mind that you will need a DHCP server on your network; in my case I'm using the one that comes with VMware Fusion. If you don't have one, set a temporary IP address here in order to be able to install the KVM software. Once the installation is done, reboot your virtual machine and open a root SSH session to proceed with the rest of the configuration tasks.

Disable SELinux with the setenforce command and also modify the SELinux config to disable it during OS boot. I do not recommend disabling SELinux in a production environment, but for a lab it will simplify things.

setenforce 0
cp /etc/selinux/config /etc/selinux/config.orig
sed -i s/SELINUX\=enforcing/SELINUX\=disabled/ /etc/selinux/config
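
You can confirm the runtime change with getenforce; after the setenforce command above it should report Permissive.

getenforce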

Check that hardware virtualization support is activated.

egrep -i 'vmx|svm' /proc/cpuinfo

Install KVM packages.

yum install kvm libvirt python-virtinst qemu-kvm

After installing a ton of dependencies, and if nothing failed, enable and start the libvirtd service.

[root@kvm1 ~]# chkconfig libvirtd on
[root@kvm1 ~]# service libvirtd start
Starting libvirtd daemon:                                  [  OK  ]
[root@kvm1 ~]#

Verify that KVM has been correctly installed and it’s loaded and running on the system.

[root@kvm1 ~]# lsmod | grep kvm
kvm_intel              53484  0
kvm                   316506  1 kvm_intel
[root@kvm1 ~]#
[root@kvm1 ~]# virsh -c qemu:///system list
 Id    Name                           State
----------------------------------------------------

[root@kvm1 ~]#

Hypervisor networking setup

With KVM software installed and ready we can now move on to configure the networking for both hosts and integrate them into our NSX deployment.

Disable Network Manager for both interfaces. Edit the /etc/sysconfig/network-scripts/ifcfg-ethX files and change the NM_CONTROLLED value to no.
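
A quick way to apply the change to both files (assuming the NM_CONTROLLED line is already present in them; add it manually otherwise) is:

sed -i 's/^NM_CONTROLLED=.*/NM_CONTROLLED=no/' /etc/sysconfig/network-scripts/ifcfg-eth0
sed -i 's/^NM_CONTROLLED=.*/NM_CONTROLLED=no/' /etc/sysconfig/network-scripts/ifcfg-eth1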

By default libvirt creates the virbr0 network bridge, used by the virtual machines to access the external network through a NAT connection. We need to disable it to ensure that the bridge components of Open vSwitch can load without any errors.

virsh net-destroy default
virsh net-autostart --disable default
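
You can confirm that the default network is gone with:

virsh net-list --all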

Install Open vSwitch

Copy the NSX OVS package to the KVM host and extract it.

[root@kvm1 nsx-ovs]# tar vxfz nsx-ovs-2.1.0-build33849-rhel64_x86_64.tar.gz
./
./nicira-flow-stats-exporter/
./nicira-flow-stats-exporter/nicira-flow-stats-exporter-4.1.0.32691-1.x86_64.rpm
./tcpdump-ovs-4.4.0.ovs2.1.0.33849-1.x86_64.rpm
./kmod-openvswitch-2.1.0.33849-1.el6.x86_64.rpm
./openvswitch-2.1.0.33849-1.x86_64.rpm
./nicira-ovs-hypervisor-node-2.1.0.33849-1.x86_64.rpm
./nicira-ovs-hypervisor-node-debuginfo-2.1.0.33849-1.x86_64.rpm
[root@kvm1 nsx-ovs]#

Install Open vSwitch packages.

rpm -Uvh kmod-openvswitch-2.1.0.33849-1.el6.x86_64.rpm
rpm -Uvh openvswitch-2.1.0.33849-1.x86_64.rpm

Verify that Open vSwitch service is enabled and start it.

[root@kvm1 ~]# chkconfig --list openvswitch
openvswitch     0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@kvm1 ~]#
[root@kvm1 ~]#
[root@kvm1 ~]# service openvswitch start
/etc/openvswitch/conf.db does not exist ... (warning).
Creating empty database /etc/openvswitch/conf.db           [  OK  ]
Starting ovsdb-server                                      [  OK  ]
Configuring Open vSwitch system IDs                        [  OK  ]
Inserting openvswitch module                               [  OK  ]
Starting ovs-vswitchd                                      [  OK  ]
Enabling remote OVSDB managers                             [  OK  ]
[root@kvm1 ~]#

Install the nicira-ovs-hypervisor-node package; this utility provides the infrastructure for distributed routing on the hypervisor. During the installation the integration bridge br-int and the OVS SSL credentials will be created.

[root@kvm1 ~]# rpm -Uvh nicira-ovs-hypervisor-node*.rpm
Preparing...                ########################################### [100%]
   1:nicira-ovs-hypervisor-n########################################### [ 50%]
   2:nicira-ovs-hypervisor-n########################################### [100%]
Running '/usr/sbin/ovs-integrate init'
successfully generated self-signed certificates..
successfully created the integration bridge..
[root@kvm1 ~]#

There are other packages, like nicira-flow-stats-exporter and tcpdump-ovs, but they are not needed for OVS to function. We can now proceed with the OVS configuration.

Configure Open vSwitch

The first step is to create OVS bridges for each network interface card of the hypervisor.

ovs-vsctl add-br br0
ovs-vsctl br-set-external-id br0 bridge-id br0
ovs-vsctl set Bridge br0 fail-mode=standalone
ovs-vsctl add-port br0 eth0

If you were logged in through an SSH session you have probably noticed that your connection was lost; this is because the br0 interface has taken control of the networking of the host and it doesn't have an IP address configured. To solve this, access the host console, then edit the ifcfg-eth0 file and modify it to look like this.

DEVICE=eth0
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br0
ONBOOT=yes
BOOTPROTO=none
IPV6INIT=no
NAME=eth0
HOTPLUG=no
HWADDR=00:0C:29:CA:34:FE
NM_CONTROLLED=no

Next create and edit ifcfg-br0 file.

DEVICE=br0
DEVICETYPE=ovs
TYPE=OVSBridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.82.42
NETMASK=255.255.255.0
GATEWAY=192.168.82.2
IPV6INIT=no
HOTPLUG=no

Restart the network service and test the connection.

service network restart

Repeat all the above steps for the second network interface.
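
For reference, the equivalent commands for the second interface would look like this, assuming it is eth1 and its bridge is named br1, as in the ovs-vsctl output shown later; you will also need ifcfg-eth1 and ifcfg-br1 files analogous to the ones above.

ovs-vsctl add-br br1
ovs-vsctl br-set-external-id br1 bridge-id br1
ovs-vsctl set Bridge br1 fail-mode=standalone
ovs-vsctl add-port br1 eth1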

Finally configure NSX Controller Cluster as manager in Open vSwitch.

ovs-vsctl set-manager ssl:192.168.82.44

Execute ovs-vsctl show command to review OVS current configuration.

[root@kvm1 ~]# ovs-vsctl show
383c3f17-5c53-4992-be8e-6e9b195e51d8
    Manager "ssl:192.168.82.44"
    Bridge "br1"
        fail_mode: standalone
        Port "br1"
            Interface "br1"
                type: internal
        Port "eth1"
            Interface "eth1"
    Bridge "br0"
        fail_mode: standalone
        Port "eth0"
            Interface "eth0"
        Port "br0"
            Interface "br0"
                type: internal
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "2.1.0.33849"
[root@kvm1 ~]#

Register OVS in NSX Controller

With our OVS instance installed and running we can now inform the NSX Controller of its existence, either via the NVP API or the NSX Manager; in our case we will use the latter.

Log into NSX Manager as the admin user and go to the Dashboard. From the Summary of Transport Components table click Add in the Hypervisors row. Verify that Hypervisor is selected as the transport node type and move to the Basics screen. Enter a name for the hypervisor, usually the hostname of the server.

Screen Shot 2014-05-05 at 23.18.22

In Properties enter:

  • Integration bridge ID, in our case br-int.
  • Admin Status Enabled – enabled by default.

Screen Shot 2014-05-05 at 23.29.03

For the Credential screen we are going to need the SSL certificate that was created along with the integration bridge during the NSX OVS installation. The PEM certificate file is ovsclient-cert.pem and it is located in the /etc/openvswitch directory.

[root@kvm1 ~]# cat /etc/openvswitch/ovsclient-cert.pem
-----BEGIN CERTIFICATE-----
MIIDwjCCAqoCCQDZUob5H9tzvjANBgkqhkiG9w0BAQUFADCBojELMAkGA1UEBhMC
VVMxCzAJBgNVBAgTAkNBMRIwEAYDVQQHEwlQYWxvIEFsdG8xFTATBgNVBAoTDE9w
ZW4gdlN3aXRjaDEfMB0GA1UECxMWT3BlbiB2U3dpdGNoIGNlcnRpZmllcjE6MDgG
A1UEAxMxb3ZzY2xpZW50IGlkOjA4NWQwMTFiLTJiMzYtNGQ5My1iMWIyLWJjODIz
MDczYzE0YzAeFw0xNDA1MDQyMjE3NTVaFw0yNDA1MDEyMjE3NTVaMIGiMQswCQYD
VQQGEwJVUzELMAkGA1UECBMCQ0ExEjAQBgNVBAcTCVBhbG8gQWx0bzEVMBMGA1UE
ChMMT3BlbiB2U3dpdGNoMR8wHQYDVQQLExZPcGVuIHZTd2l0Y2ggY2VydGlmaWVy
MTowOAYDVQQDEzFvdnNjbGllbnQgaWQ6MDg1ZDAxMWItMmIzNi00ZDkzLWIxYjIt
YmM4MjMwNzNjMTRjMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAwgqT
hvG72vat0hXvTuukZOs6fM4CAphmN34l4415q/vReSM3upN+vOLoyGJ/8VJGdNXH
3Bsu6V58f6o8EPbfnhgqf2rCP0r5kiiN5SivsAWI5//ltV1GDFO4+8VpYAwn4Cbd
sNOuFEM1mKOR//IL3Riy9Nkh16wfLy44KEE9745uhZ9gW96AkSkBx1ajjUiApnjL
M6L2w/E4sxNeMDLf/VYlc/SuEg775D9iaPpA1haJt8FFw1g769FsR9Q0Fl+CoT7f
ggBZTKwwcoU+5Ew1mNlPV0Hm8vpFcXbtMBeuT9Fe7k4bC+UuQPaSnbPpbZMpx/wd
fHOdJpemcog/0EjOJQIDAQABMA0GCSqGSIb3DQEBBQUAA4IBAQDBPNM/uI25ofIl
AgCpG42UD3M/RZRPX0/6Be4jCTaAuET6J8wAKA4k1btA6UPt0M98N6o4y60Du2D+
ZwFOa2LSTXZB43X70XnDKxapDVqmhKtrmX2hL1NRD9RjTTx3TOXMOlUiUizRB1+L
d8MNhX3qrvOLeFOUnxm6C5RnI/HdqvS9TyxybX+Qfqit9Q66hbjAt9RribXSw21G
Ix8d9S4NyDO91mDstIcXeNRUk8K64gEQSKxQO9QKmVAQBIlYAJVVXzfkXyHEiKTe
0zIsW/oknwWeQMD9xSrKomY/5+LCuDM1jT5LcL8vxmrEVIrUjNqt4nQsT4mjooG+
XYf2HdXj
-----END CERTIFICATE-----
[root@kvm1 ~]#

Copy the contents of the file and paste them in the Security Certificate text box.

Screen Shot 2014-05-05 at 23.36.28

Finally add the Transport Connector with the following values:

  • Transport Type: STT
  • Transport Zone UUID: the UUID of the transport zone, in my case the one corresponding to vlab-transport-zone.
  • IP Address: the address of the br0 interface of the host.

Screen Shot 2014-05-05 at 23.41.57

Click Save & View and check that Management and OpenFlow connections are up.

Screen Shot 2014-05-05 at 23.52.16
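
You can also double-check the management connection from the KVM host side; the is_connected field of the OVS Manager table should show true.

ovs-vsctl list Manager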

GlusterFS setup

I chose GlusterFS for my OpenStack lab for two reasons: I have used it in the past, so this has been a good opportunity for me to refresh and enhance my rusty Gluster skills, and it is supported as a storage backend for Glance in OpenStack. Instead of going with CentOS again, this time I chose Fedora 20 for my Gluster VM. A real-world GlusterFS cluster will have at least two nodes, but for our lab one will be enough.

Create a Fedora x64 virtual machine with 1 vCPU, 1GB of RAM and one network interface. For the storage part use the following:

  • System disk: 16GB
  • Data disk: 72GB

Use the same partitioning schema as the KVM hosts for the system disk. Choose a Minimal installation and add the Standard group. Configure the hostname and the IP address of the node, set the root password and create a user as administrator; I'm using my personal user jrey here.

Disable SELinux.

sudo setenforce 0
sudo cp /etc/selinux/config /etc/selinux/config.orig
sudo sed -i s/SELINUX\=enforcing/SELINUX\=disabled/ /etc/selinux/config

Stop and disable firewalld.

sudo systemctl disable firewalld.service
sudo systemctl stop firewalld.service

Install GlusterFS packages. There is no need to add any additional yum repository since Gluster is included in the standard Fedora repos.

sudo yum install glusterfs-server

Enable Gluster services.

sudo systemctl enable glusterd.service
sudo systemctl enable glusterfsd.service

Start Gluster services.

[jrey@gluster ~]$ sudo systemctl start glusterd.service
[jrey@gluster ~]$ sudo systemctl start glusterfsd.service
[jrey@gluster ~]$
[jrey@gluster ~]$ sudo systemctl status glusterd.service
glusterd.service - GlusterFS an clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)
   Active: active (running) since Mon 2014-04-28 17:17:35 CEST; 20s ago
  Process: 1496 ExecStart=/usr/sbin/glusterd -p /run/glusterd.pid (code=exited, status=0/SUCCESS)
 Main PID: 1497 (glusterd)
   CGroup: /system.slice/glusterd.service
           └─1497 /usr/sbin/glusterd -p /run/glusterd.pid

Apr 28 17:17:35 gluster.vlab.local systemd[1]: Started GlusterFS an clustered file-system server.
[jrey@gluster ~]$
[jrey@gluster ~]$ sudo systemctl status glusterfsd.service
glusterfsd.service - GlusterFS brick processes (stopping only)
   Loaded: loaded (/usr/lib/systemd/system/glusterfsd.service; enabled)
   Active: active (exited) since Mon 2014-04-28 17:17:45 CEST; 15s ago
  Process: 1515 ExecStart=/bin/true (code=exited, status=0/SUCCESS)
 Main PID: 1515 (code=exited, status=0/SUCCESS)

Apr 28 17:17:45 gluster.vlab.local systemd[1]: Starting GlusterFS brick processes (stopping only)...
Apr 28 17:17:45 gluster.vlab.local systemd[1]: Started GlusterFS brick processes (stopping only).
[jrey@gluster ~]$

Since we are running a one-node cluster there is no need to add any node to the trusted pool. In case you decide to run a multinode environment, you can set up the pool by running the following command on each node of the cluster.

gluster peer probe <IP_ADDRESS_OF_OTHER_NODE>
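
You can then verify the trusted pool with:

gluster peer status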

Edit the data disk with fdisk and create a single partition. Format the partition as XFS.
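
If you prefer a non-interactive approach, here is a minimal sketch assuming the data disk is /dev/sdb and using parted instead of fdisk:

sudo parted -s /dev/sdb mklabel msdos
sudo parted -s /dev/sdb mkpart primary xfs 1MiB 100%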

[jrey@gluster ~]$ sudo mkfs.xfs -i size=512 /dev/sdb1
meta-data=/dev/sdb1              isize=512    agcount=4, agsize=4718528 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=18874112, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=9215, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[jrey@gluster ~]$

Create the mount point for the new filesystem, mount the partition and edit /etc/fstab accordingly to make it persistent to reboots.

sudo mkdir -p /data/glance/
sudo mount /dev/sdb1 /data/glance
sudo mkdir -p /data/glance/brick1
sudo echo "/dev/sdb1 /data/glance xfs defaults 0 0" >> /etc/fstab

Create the Gluster volume and start it.

[jrey@gluster ~]$ sudo gluster volume create gv0 gluster.vlab.local:/data/glance/brick1
volume create: gv0: success: please start the volume to access data
[jrey@gluster ~]$
[jrey@gluster ~]$ sudo gluster volume start gv0
volume start: gv0: success
[jrey@gluster ~]$
[jrey@gluster ~]$ sudo gluster volume info

Volume Name: gv0
Type: Distribute
Volume ID: d1ad2d00-6210-4856-a5eb-26ddcba77a70
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: gluster.vlab.local:/data/glance/brick1
[jrey@gluster ~]$

The configuration of the Gluster node is finished. In the next article we will install and configure OpenStack using the different components detailed in the current and previous parts of the series.

Please feel free to add any comment or correction.

Juanma.

If you follow me on Twitter or Google+ you have probably seen an increased number of tweets and posts about OpenStack, DevStack, KVM and other Linux-related topics. It's no secret that I am a *nix guy, however it wasn't until last year that I really discovered OpenStack. Oh yes, I knew about it, had read a ton of articles and watched some videos on YouTube, but I never had the opportunity to actually play with it until I sat in a Hands-on Lab about OpenStack and vSphere during VMworld in Barcelona last October. After VMworld I started a personal project to learn as much as possible about OpenStack, using some labs with KVM and vSphere to try to achieve a decent level of proficiency. Finally this year I was able to ramp up on NSX and decided to build a new lab with OpenStack, KVM and NSX and document my progress here in my blog. So without further ado, here it is, my first series of posts about OpenStack and NSX.

During this series we will see how to deploy OpenStack with KVM as the underlying hypervisor and VMware NSX for the networking part. I intended to create a fairly comprehensive guide here for my personal reference and as a learning exercise. All posts of the series are based on my personal experience in a lab environment.

Lab components

To illustrate the posts I have created a lab with virtual machines running on VMware Fusion on my MacBook Pro, but you can use any virtualization software you want, as long as it allows you to expose the virtualization extensions to the virtual machine, which is needed for the KVM compute node. We will need the following virtual machines:

  • Cloud controller node
  • Nova compute node with KVM
  • Neutron networking node
  • GlusterFS storage node
  • NSX Controller
  • NSX Manager
  • NSX Service Node
  • NSX Gateway

I'll provide the exact hardware configuration of each virtual machine in its own part. We will deploy OpenStack Havana, using as a reference one of the architectures described in the OpenStack Havana installation guide.

You are probably asking yourself why I'm using Havana when Icehouse was released just a few weeks ago. There are two reasons for this: first, when I started to create my lab and decided to document my progress here, Icehouse wasn't out yet; and after it was released I decided to stick with Havana because the NSX plugin for Neutron, the OpenStack network module, has not been updated for Icehouse yet.

The software versions to be used are:

  • OpenStack Havana
  • CentOS 6.4 – For OpenStack nodes
  • Fedora 20 – For GlusterFS storage node
  • NSX for multi-hypervisor

I have another Fedora 20 virtual machine providing DNS and NTP services for the lab; I'm planning to add DHCP and OpenLDAP capabilities in the future.

NSX deployment overview

Screen Shot 2014-04-24 at 22.40.40

The Network Views

The first concept you need to understand in NSX is the network views. NSX defines two network views:

  • Logical Network View
  • Transport Network View

The Logical Network View is a representation of the network services and connectivity that a virtual machine "sees" in the cloud; basically, for the operating system running inside the VM, the logical network view is "the network" it is connected to. The Logical Network View is completely independent from the underlying physical network. It is made of the logical ports, switches and routers that interconnect the different virtual machines within a tenant and connect them to the outside physical network. In a cloud each tenant will have its own logical network view, which will be isolated from other tenants' views.

The Transport Network View represents the physical devices that underlie the logical networks. These devices, or transport nodes as they are referred to, can be the hypervisors and the network appliances interconnecting those hypervisors with the external physical network. Every one of these transport nodes must run an instance of Open vSwitch.

NSX Deployment Components

An NSX deployment is made up of the Control Plane and the Data Plane. Additionally there is a Management Plane, comprising the NSX Manager; the latter is not mandatory for an OpenStack deployment but it can be useful.

NSX Control Plane

The Control Plane is made up of the NSX Controller Cluster. This is an OpenFlow controller that manages all the Open vSwitch devices running on the transport nodes, and a logical network manager that allows building and maintaining all the logical networks carried by the transport nodes. It provides consistency between the logical network view and the transport network view. Internally it has several roles to manage the different tasks it is responsible for.

  • Transport node management: Maintains connections with the different OVS instances.
  • Logical network management: Monitors when endhosts get connected and disconnected from OVS. Also implements logical connectivity and policies by configuring OVS forwarding states.
  • Data persistence and replication: Stores data from the OVS devices and the NVP API to provide persistence across all nodes of the cluster in case of failure.
  • API server: Handles HTTP requests from external elements.

The NSX Controller is a scale-out cluster running on x86 hardware; it supports a minimum of three nodes and a maximum of five. Single-node clusters are not supported, although for the lab I deployed a single-node one.

NSX Data Plane

The Data Plane is implemented by the previously mentioned transport nodes, that is, the OVS devices and NSX appliances managed by the Controller Cluster.

Hypervisors: The compute nodes leveraging Open vSwitch to provide network connectivity for the virtual machines.

NSX Gateway/s: The NSX Gateways form the Gateway Service that allows a logical network to be attached to a physical network not managed by NSX. A gateway can be an L2 Gateway, which expands an L2 logical segment to include a physical one, or an L3 Gateway, which maps itself to a physical router port.

NSX Service Node/s: The Service Nodes are OVS-enabled appliances that provide extra processing capacity by offloading network packet processing from the hypervisor virtual switches. The kinds of operations handled by the Service Nodes are, for example, assisting with packet replication during broadcast/multicast operations or unknown unicast flooding in overlay logical networks.

NSX Management Plane

The NSX Management Plane is composed exclusively of the NSX Manager. It provides a different and more friendly way to interact with the NVP API and to configure the logical network components, for example through its web UI. In an OpenStack deployment there is no need to use it, however it can be helpful for troubleshooting purposes.

NSX network appliances deployment

For our lab purposes create four Ubuntu x64 virtual machines with 1vCPU, 1GB of RAM, 1 network interface (E1000) and 16GB of disk.

NSX Controller

Power on the VM and on the boot screen select Automated Install.

Screen Shot 2014-04-27 at 20.49.15

The installation will take several minutes to finish. When it’s finished you will see a prompt like this in the virtual machine console.

Screen Shot 2014-04-27 at 22.42.11

Log in as the admin user with password admin. In a normal deployment you would configure the admin user password with set admin user password, but for the lab it is not needed.

Set the IP address for the controller node.

nsx-controller # set network interface breth0 static 192.168.82.45 255.255.255.0
Setting IP for interface breth0...
Clearing DNS configuration...
nsx-controller # 
nsx-controller # show network interface breth0
IP config: static
Address: 192.168.82.45
Netmask: 255.255.255.0
Broadcast: 192.168.82.255
MTU: 1500
MAC: 00:0c:29:92:ce:0c
Admin-Status: UP
Link-Status: UP
SNMP: disabled
nsx-controller #

Configure the hostname.

nsx-controller # set hostname nsxc
nsxc #

Next configure the default route.

nsxc # add network route 0.0.0.0 0.0.0.0 192.168.82.2
nsxc #
nsxc # show network route
Prefix/Mask         Gateway         Metric  MTU     Iface
0.0.0.0/0           192.168.82.2    0       intf    breth0
192.168.82.0/24     0.0.0.0         0       intf    breth0
nsxc #

Set the address of the DNS and NTP servers.

nsxc # add network dns-server 192.168.82.110
nsxc #
nsxc # add network ntp-server 192.168.82.110
 * Stopping NTP server ntpd                                                                                                                                                          [ OK ]
Synchronizing with NTP servers. This may take a few seconds...
27 Apr 21:03:49 ntpdate[3755]: step time server 192.168.82.110 offset -7199.735794 sec
 * Starting NTP server ntpd                                                                                                                                                          [ OK ]
nsxc #

Set the management address of the control cluster.

set control-cluster management-address 192.168.82.45

Configure the IP address to be used for communication with the different transport nodes.

set control-cluster role switch_manager listen-ip 192.168.82.45

Configure the IP address to handle NVP API requests.

set control-cluster role api_provider listen-ip 192.168.82.45

Finally join the cluster; since this is the first node of the cluster, the IP has to be its own.

nsxc # join control-cluster 192.168.82.45
Clearing controller state and restarting
Stopping nicira-nvp-controller: [Done]
Clearing nicira-nvp-controller's state: OK
Starting nicira-nvp-controller: CLI revert file already exists
mapping eth0 -> bridged-pif
ssh stop/waiting
ssh start/running, process 5009
mapping breth0 -> eth0
mapping breth0 -> eth0
ssh stop/waiting
ssh start/running, process 5158
Setting core limit to unlimited
Setting file descriptor limit to 100000
 nicira-nvp-controller [OK]
** Watching control-cluster history; ctrl-c to exit **
===================================
Host nsx-controller
Node ffac511c-12b3-4dd0-baa7-632df4860521 (192.168.82.248)
  04/27 22:40:42: Initializing data contact with cluster
  04/27 22:40:49: Fetching initial configuration data
  04/27 22:40:51: Join complete
nsxc #

You can check the status of the node in the cluster at any moment with the show control-cluster status command.

nsxc # show control-cluster status
Type                Status                                       Since
--------------------------------------------------------------------------------
Join status:        Join complete                                04/27 22:40:51
Majority status:    Disconnected from cluster majority           04/27 22:53:44
Restart status:     This controller can be safely restarted      04/27 21:23:29
Cluster ID:         7837a89a-22f3-4c8c-8bef-c100886374e9
Node UUID:          7837a89a-22f3-4c8c-8bef-c100886374e9

Role                Configured status   Active status
--------------------------------------------------------------------------------
api_provider        enabled             activated
persistence_server  enabled             activated
switch_manager      enabled             activated
logical_manager     enabled             activated
directory_server    disabled            disabled
nsxc #

In a standard NSX deployment, now would be the moment to add more nodes to the cluster, using again the join control-cluster command with the same IP address.
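
On each additional controller node, after the same basic network setup, you would simply run the join against the first node's address:

join control-cluster 192.168.82.45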

NSX Gateway

Proceed with the Automated Install as in the Controller node. When the installation is done login as admin user.

Set IP address.

set network interface breth0 static 192.168.82.47 255.255.255.0

Set hostname.

set hostname nsxg

Configure the rest of the network parameters as in the Controller node and proceed to the gateway specific configuration.

nsxg # add switch manager 192.168.82.45
Waiting for the manager CA certificate to synchronize...
Manager CA certificate synchronized
nsxg #

NSX Service Node

Again launch the Automated Install and let it finish. As admin user configure the IP address…

set network interface breth0 static 192.168.82.46 255.255.255.0

…and the hostname.

set hostname nsxsn

Finish the network configuration as in the Gateway and the Controller, and configure the Service Node to be aware of the Controller Cluster.

add switch manager 192.168.82.45

The above command will return an error like this.

Manager CA certificate failed to synchronize.  Verify
the manager is running on the specified IP address.

This is normal, since the Transport Node will not be able to connect to the NSX Controller Cluster until the cluster has been informed, either via the NVP API or the NSX Manager interface, of the existence of the Transport Node.

NSX Manager

Access the NSX Manager console; you should see a screen similar to this one.

Screen Shot 2014-04-28 at 00.47.54

Set the IP and the hostname and configure the default route, DNS and NTP server.

set network interface breth0 static 192.168.82.47 255.255.255.0
set hostname nsxm
add network route 0.0.0.0 0.0.0.0 192.168.82.2
add network dns-server 192.168.82.110
add network ntp-server 192.168.82.110

With this we have completed the installation and initial configuration of our four NSX appliances. In a real-world deployment we would have to add at least two more NSX Controller nodes to our cluster, and maybe one or more gateways in order to set up L2 and L3 Gateway Services. The number of Service Nodes will depend on the expected load of our cloud.

Connect the NSX Manager to the Controller Cluster

Our next step is to connect our newly created NSX Controller Cluster with the NSX Manager. Access the NSX Manager web interface and log in as the admin user.

Screen Shot 2014-04-28 at 01.10.19

After the login the Manager will indicate that there is no Controller Cluster added.

Screen Shot 2014-04-28 at 01.15.33

Click the Add Cluster button and enter the data for the NSX Controller Cluster.

Screen Shot 2014-04-28 at 01.26.03

If the connection is successful, a new screen will show up.

Screen Shot 2014-04-28 at 01.36.29

Provide the following information:

  • Name of the cluster
  • Contact email address of the administrator
  • Automatically Use New IPs – This setting, checked by default, will add all the IP addresses of the members of this cluster as eligible to receive API calls from the NSX Manager.
  • Make Active Cluster

In the next screen enter the IP address of your syslog server, or click Use This NSX Manager to use the NSX Manager itself as the syslog server.

Screen Shot 2014-04-28 at 01.57.43

After clicking Configure, the Manager will finish the configuration of the Controller Cluster and go back to the previous screen, where we can see the new cluster we have just added to the Manager.

Screen Shot 2014-04-28 at 02.01.44

In the next post we will see how to configure NSX Transport and Logical network elements. As always comments are welcome.

Juanma.

VMware has released VMware vSphere Mobile Watchlist. It is available for Android and iOS (iPhone only for now) and will enable any system administrator to keep an eye on their most critical apps from their phone.

It is a very intuitive app to use; below is a series of screenshots from the app installed on my iPhone 5 and connected to my homelab vCenter Server.

From the main screen you can add virtual machines from your vCenter inventory to the default watchlist or create a new watchlist.

Once you have added several virtual machines to your list you can check them at a glance in list or grid mode.

VM watchlist

Tap on a VM and you will access its details, configured resources, VM Tools state, related objects, etc.

As you can see from the screenshot this is a multi-pane screen, so slide to the left and you can get a console screenshot of the virtual machine and perform different actions on it.

Console screenshot    

I hope this is a step towards a new set of mobile apps from VMware focused on the administration of the different components of a virtual and cloud infrastructure :)

Juanma.

I became aware of this issue last week after installing a Fedora 18 virtual machine on Fusion 5. The installation of the Tools went as expected, but when the install process launched the vmware-config-tools.pl script I got the typical error of not being able to find the Linux kernel headers.

Searching for a valid kernel header path...
The path "" is not a valid path to the 3.7.2-204.fc18.x86_64 kernel headers.
Would you like to change it? [yes]

I installed the kernel headers and devel packages with yum.

[root@fed18 ~]# yum install kernel-headers kernel-devel

I fired up the configuration script again and got the same error. The problem is that since kernel 3.7 all the kernel header files have been relocated to a new path, and because of that the script is not able to find them. To solve it, just create a symlink of the version.h file from the new location to the old one.

[root@fed18 src]# ln -s /usr/src/kernels/3.7.2-204.fc18.x86_64/include/generated/uapi/linux/version.h /lib/modules/3.7.2-204.fc18.x86_64/build/include/linux/
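
A more general form of the same workaround, assuming the header layout is the same on other 3.7+ kernels, would be:

ln -s /usr/src/kernels/$(uname -r)/include/generated/uapi/linux/version.h /lib/modules/$(uname -r)/build/include/linux/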

With the problem fixed I launched the config script again and the tools finally got configured without problems.

[root@fed18 ~]# vmware-config-tools.pl 
Initializing...

Making sure services for VMware Tools are stopped.
Stopping Thinprint services in the virtual machine:
 Stopping Virtual Printing daemon: done
Stopping vmware-tools (via systemctl): [ OK ]

The VMware FileSystem Sync Driver (vmsync) allows external third-party backup 
software that is integrated with vSphere to create backups of the virtual 
machine. Do you wish to enable this feature? [no]

Before you can compile modules, you need to have the following installed...
make
gcc
kernel headers of the running kernel

Searching for GCC...
Detected GCC binary at "/bin/gcc".
The path "/bin/gcc" appears to be a valid path to the gcc binary.
Would you like to change it? [no]

Searching for a valid kernel header path...
Detected the kernel headers at 
"/lib/modules/3.7.2-204.fc18.x86_64/build/include".
The path "/lib/modules/3.7.2-204.fc18.x86_64/build/include" appears to be a 
valid path to the 3.7.2-204.fc18.x86_64 kernel headers.
Would you like to change it? [no]

Juanma.

Last week vSphere 5 Update 1 was released by VMware. Along with the main products, some of the SDKs and automation tools were also updated, including the vMA.

As you may remember from my first post about vMA 5, the classic vma-update utility is no longer available, so to be able to update our vMA to the new version we have to use the Web UI. Following is the procedure to perform the upgrade.

First access the web interface using the vi-admin user as always.

image

From the main screen go to the Update tab. In the Status screen click on Check Updates.

image

After a few seconds a message will appear showing the new update available.

image

Click on Install Updates and after asking for confirmation the update process will start.

image

Once the update process is complete the appliance will ask for a system reboot.

image

Go to the System tab and perform the reboot. After the reboot is done you can check the new version in the appliance console,

image

And in the /etc/vma-release file.

vi-admin@vma:~> cat /etc/vma-release
vMA 5.0.0 BUILD-643553

Copyright (C) 1998-2011 VMware, Inc. All rights reserved.
This product is protected by U.S. and international copyright and
intellectual property laws. VMware products are covered by one or more U.S.
Patent Numbers D617,808, D617,809, D617,810, D617,811, 6,075,938,
6,397,242, 6,496,847, 6,704,925, 6,711,672, 6,725,289, 6,735,601,
6,785,886, 6,789,156, 6,795,966, 6,880,022, 6,883,095, 6,940,980,
6,944,699, 6,961,806, 6,961,941, 6,970,562, 7,017,041, 7,055,032,
7,065,642, 7,069,413, 7,069,435, 7,082,598, 7,089,377, 7,111,086,
7,111,145, 7,117,481, 7,149,310, 7,149,843, 7,155,558, 7,222,221,
7,260,815, 7,260,820, 7,269,683, 7,275,136, 7,277,998, 7,277,999,
7,278,030, 7,281,102, 7,290,253, 7,343,599, 7,356,679, 7,386,720,
7,409,487, 7,412,492, 7,412,702, 7,424,710, 7,428,636, 7,433,951,
7,434,002, 7,447,854, 7,447,903, 7,467,067, 7,475,002, 7,478,173,
7,478,180, 7,478,218, 7,478,388, 7,484,208, 7,487,313, 7,487,314,
7,490,216, 7,500,048, 7,506,122, 7,516,453, 7,529,897, 7,543,301,
7,555,747, 7,565,527, 7,571,471, 7,577,722, 7,581,064, 7,590,715,
7,590,982, 7,594,111, 7,596,594, 7,596,697, 7,599,493, 7,603,704,
7,606,868, 7,620,523, 7,620,766, 7,620,955, 7,624,240, 7,630,493,
7,636,831, 7,657,659, 7,657,937, 7,665,088, 7,672,814, 7,680,919,
7,689,986, 7,693,996, 7,694,101, 7,702,843, 7,707,185, 7,707,285,
7,707,578, 7,716,446, 7,734,045, 7,734,911, 7,734,912, 7,735,136,
7,743,389, 7,761,917, 7,765,543, 7,774,391, 7,779,091, 7,783,779,
7,783,838, 7,793,279, 7,797,748, 7,801,703, 7,802,000, 7,802,248,
7,805,676, 7,814,495, 7,823,145, 7,831,661, 7,831,739, 7,831,761,
7,831,773, 7,840,790, 7,840,839, 7,840,993, 7,844,954, 7,849,098,
7,853,744, 7,853,960, 7,856,419, 7,856,531, 7,856,637, 7,865,663,
7,869,967, 7,886,127, 7,886,148, 7,886,346, 7,890,754, 7,895,437,
7,908,646, 7,912,951, 7,921,197, 7,925,850; patents pending.
VMware, the VMware "boxes" logo and design, Virtual SMP and VMotion are
registered trademarks or trademarks of VMware, Inc. in the United States
and/or other jurisdictions. All other marks and names mentioned herein may
be trademarks of their respective companies.
vi-admin@vma:~>

The above procedure uses the default VMware repository, and your appliance must be able to resolve public DNS addresses and access the internet in order to download the upgrade bits.

Juanma.

Yes, you have heard correctly. The people from Veeam are willing to kindly give away a free full pass to VMworld or TechEd 2012; you choose the conference and they provide the pass.

The promotion is called Virtualization Lovers and you only have to register at their website:

You have until March 26, 2012.

Good luck!

Juanma.

Once again my colleague at HP Eric Siebert (@ericsiebert) has opened up the polls to elect the Top VMware and Virtualization blogs.

This year yours truly is lucky enough to be on the list of candidates, so if you feel that my work here has been of help I'll be grateful for your vote. However, I'm a guy with his feet on the ground and I am perfectly aware that there are a ton of great blogs out there.

My personal Top Five are:

Now go to http://vote.vsphere-land.com/ and vote for your favorite blog!

Juanma.

Today a co-worker asked me how to list the packages installed on an ESXi 4.1 Update 1 server. In the ESX COS we had the Red Hat rpm command, but in ESXi there is no rpm and of course there is no COS.

His intention was to look for the version of the qla2xxx driver, and my first thought was to use vmkload_mod. The problem is that with this command you can get the version of a driver already loaded by the VMkernel, and we wanted to look for the version of a driver that was installed but not loaded.

I tried esxupdate with no luck.

~ # esxupdate query
----Bulletin ID----- -----Installed----- --------------Summary---------------
ESXi410-201101223-UG 2011-01-13T05:09:39 3w-9xxx: scsi driver for VMware ESXi
ESXi410-201101224-UG 2011-01-13T05:09:39 vxge: net driver for VMware ESXi    
~ #

Then I suddenly remembered that the ESXi Tech Support Mode is based on BusyBox. If you have ever used a BusyBox environment, like a QNAP NAS, you will probably remember that the way to install new software over the network is with the ipkg command, and that the syntax to list the software packages already installed is ipkg list_installed.

~ # ipkg list_installed
emulex-cim-provider - 410.2.0.32.1-207424 -
lsi-provider - 410.04.V0.24-140815 -
qlogic-fchba-provider - 400.1.1.8-140815 -
vmware-esx-drivers-ata-libata - 400.2.00.1-1vmw.1.4.348481 -
vmware-esx-drivers-ata-pata-amd - 400.0.2.4.1-1vmw.1.4.348481 -
vmware-esx-drivers-ata-pata-atiixp - 400.0.4.3.1-1vmw.1.4.348481 -
vmware-esx-drivers-ata-pata-cmd64x - 400.0.2.1.1-1vmw.1.4.348481 -
vmware-esx-drivers-ata-pata-hpt3x2n - 400.0.3.0.1-1vmw.1.4.348481 -
vmware-esx-drivers-ata-pata-pdc2027x - 400.0.74ac5.1-1vmw.1.4.348481 -
vmware-esx-drivers-ata-pata-serverworks - 400.0.3.7.1-1vmw.1.4.348481 -
vmware-esx-drivers-ata-pata-sil680 - 400.0.3.2.1-1vmw.1.4.348481 -
vmware-esx-drivers-ata-pata-via - 400.0.1.14.1-1vmw.1.4.348481 -
vmware-esx-drivers-block-cciss - 400.3.6.14.10.1-2vmw.1.4.348481 -
vmware-esx-drivers-char-hpcru - 400.1.1.0.1-1vmw.1.4.348481 -
vmware-esx-drivers-char-pseudo-char-dev - 400.0.0.1.1-1vmw.1.4.348481 -
vmware-esx-drivers-char-random - 400.1.0.0.1-1vmw.1.4.348481 -
vmware-esx-drivers-char-tpm-tis - 400.0.0.1.1-1vmw.1.4.348481 -
vmware-esx-drivers-ehci-ehci-hcd - 400.1.0.0.1-1vmw.1.4.348481 -
vmware-esx-drivers-hid-hid - 400.2.6.0.1-1vmw.1.4.348481 -
vmware-esx-drivers-ioat-ioat - 400.2.15.0.1-1vmw.1.4.348481 -
vmware-esx-drivers-ipmi-ipmi-devintf - 400.39.2.0.1-1vmw.1.4.348481 -
vmware-esx-drivers-ipmi-ipmi-msghandler - 400.39.2.0.1-1vmw.1.4.348481 -
vmware-esx-drivers-ipmi-ipmi-si-drv - 400.39.2.0.1-1vmw.1.4.348481 -
vmware-esx-drivers-net-bnx2 - 400.2.0.7d-3vmw.1.4.348481 -
vmware-esx-drivers-net-bnx2x - 400.1.54.1.v41.1-2vmw.1.4.348481 -
vmware-esx-drivers-net-cdc-ether - 400.1.0.0.1-2vmw.1.4.348481 -
vmware-esx-drivers-net-cnic - 400.1.9.7d.rc2.3.1-2vmw.1.4.348481 -
vmware-esx-drivers-net-e1000 - 400.8.0.3.2-1vmw.1.4.348481 -
vmware-esx-drivers-net-e1000e - 400.1.1.2.1-1vmw.1.4.348481 -
vmware-esx-drivers-net-enic - 400.1.4.0.261-1vmw.1.4.348481 -
vmware-esx-drivers-net-forcedeth - 400.0.61.0.1-1vmw.1.4.348481 -
vmware-esx-drivers-net-igb - 400.1.3.19.12.2-2vmw.1.4.348481 -
vmware-esx-drivers-net-ixgbe - 400.2.0.38.2.5.1-1vmw.1.4.348481 -
vmware-esx-drivers-net-nx-nic - 400.4.0.550.1-1vmw.1.4.348481 -
vmware-esx-drivers-net-s2io - 400.2.1.4.13427.1-1vmw.1.4.348481 -
vmware-esx-drivers-net-sky2 - 400.1.20-1vmw.1.4.348481 -
vmware-esx-drivers-net-tg3 - 400.3.86.0.1-1vmw.1.4.348481 -
vmware-esx-drivers-net-usbnet - 400.1.0.0.1-2vmw.1.4.348481 -
vmware-esx-drivers-ohci-usb-ohci - 400.1.0.0.1-1vmw.1.4.348481 -
vmware-esx-drivers-sata-ahci - 400.2.0.0.1-5vmw.1.4.348481 -
vmware-esx-drivers-sata-ata-piix - 400.2.00ac6.1-3vmw.1.4.348481 -
vmware-esx-drivers-sata-sata-nv - 400.2.0.0.1-1vmw.1.4.348481 -
vmware-esx-drivers-sata-sata-promise - 400.1.04.0.1-1vmw.1.4.348481 -
vmware-esx-drivers-sata-sata-sil - 400.2.0.0.1-1vmw.1.4.348481 -
vmware-esx-drivers-sata-sata-svw - 400.2.0.0.1-1vmw.1.4.348481 -
vmware-esx-drivers-scsi-aacraid - 400.4.1.1.5.1-1vmw.1.4.348481 -
vmware-esx-drivers-scsi-adp94xx - 400.1.0.8.12.1-1vmw.1.4.348481 -
vmware-esx-drivers-scsi-aic79xx - 400.3.2.0.1-1vmw.1.4.348481 -
vmware-esx-drivers-scsi-bnx2i - 400.1.8.11t5.rc2.8.1-4vmw.1.4.348481 -
vmware-esx-drivers-scsi-fnic - 400.1.1.0.113.2-4vmw.1.4.348481 -
vmware-esx-drivers-scsi-hpsa - 400.3.6.14.45-4vmw.1.4.348481 -
vmware-esx-drivers-scsi-ips - 400.7.12.06.1-3vmw.1.4.348481 -
vmware-esx-drivers-scsi-iscsi-linux - 400.1.0.0.1-1vmw.1.4.348481 -
vmware-esx-drivers-scsi-lpfc820 - 400.8.2.1.30.1-58vmw.1.4.348481 -
vmware-esx-drivers-scsi-megaraid-mbox - 400.2.20.5.1.4-1vmw.1.4.348481 -
vmware-esx-drivers-scsi-megaraid-sas - 400.4.0.14.1-18vmw.1.4.348481 -
vmware-esx-drivers-scsi-megaraid2 - 400.2.00.4.1-4vmw.1.4.348481 -
vmware-esx-drivers-scsi-mpt2sas - 400.04.255.03.00.1-6vmw.1.4.348481 -
vmware-esx-drivers-scsi-mptsas - 400.4.21.00.01.1-6vmw.1.4.348481 -
vmware-esx-drivers-scsi-mptspi - 400.4.21.00.01.1-6vmw.1.4.348481 -
vmware-esx-drivers-scsi-qla2xxx - 400.831.k1.28.1-1vmw.1.4.348481 -
vmware-esx-drivers-scsi-qla4xxx - 400.5.01.03.1-10vmw.1.4.348481 -
vmware-esx-drivers-scsi-sample-iscsi - 400.1.0.0-1vmw.1.4.348481 -
vmware-esx-drivers-uhci-usb-uhci - 400.3.0.0.1-1vmw.1.4.348481 -
vmware-esx-drivers-usb-storage-usb-storage - 400.1.0.0.1-1vmw.1.4.348481 -
vmware-esx-drivers-usbcore-usb - 400.1.0.0.1-1vmw.1.4.348481 -
vmware-esx-drivers-vmklinux-vmklinux - 4.1.0-1.4.348481 -
Successfully terminated.
~ #

There you are :-) There is one gotcha to get the version: it starts just after the 400 prefix.
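
For example, to check the qla2xxx driver we were after, you can filter the listing above:

~ # ipkg list_installed | grep qla2xxx
vmware-esx-drivers-scsi-qla2xxx - 400.831.k1.28.1-1vmw.1.4.348481 -

Here the driver version is 831.k1.28.1, the part that starts just after the 400 prefix.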

Next task of course was to do the same in ESXi 5.0.

~ # ipkg list_installed
-sh: ipkg: not found
~ #

Ouch! ipkg has been removed from ESXi 5.0. The key to getting the same list is esxcli.

~ # esxcli software vib list
Name                  Version                             Vendor  Acceptance Level  Install Date
--------------------  ----------------------------------  ------  ----------------  ------------
ata-pata-amd          0.3.10-3vmw.500.0.0.469512          VMware  VMwareCertified   2011-09-07 
ata-pata-atiixp       0.4.6-3vmw.500.0.0.469512           VMware  VMwareCertified   2011-09-07 
ata-pata-cmd64x       0.2.5-3vmw.500.0.0.469512           VMware  VMwareCertified   2011-09-07 
ata-pata-hpt3x2n      0.3.4-3vmw.500.0.0.469512           VMware  VMwareCertified   2011-09-07 
ata-pata-pdc2027x     1.0-3vmw.500.0.0.469512             VMware  VMwareCertified   2011-09-07 
ata-pata-serverworks  0.4.3-3vmw.500.0.0.469512           VMware  VMwareCertified   2011-09-07 
ata-pata-sil680       0.4.8-3vmw.500.0.0.469512           VMware  VMwareCertified   2011-09-07 
ata-pata-via          0.3.3-2vmw.500.0.0.469512           VMware  VMwareCertified   2011-09-07 
block-cciss           3.6.14-10vmw.500.0.0.469512         VMware  VMwareCertified   2011-09-07 
ehci-ehci-hcd         1.0-3vmw.500.0.0.469512             VMware  VMwareCertified   2011-09-07 
esx-base              5.0.0-0.0.469512                    VMware  VMwareCertified   2011-09-07 
esx-tboot             5.0.0-0.0.469512                    VMware  VMwareCertified   2011-09-07 
ima-qla4xxx           2.01.07-1vmw.500.0.0.469512         VMware  VMwareCertified   2011-09-07 
ipmi-ipmi-devintf     39.1-4vmw.500.0.0.469512            VMware  VMwareCertified   2011-09-07 
ipmi-ipmi-msghandler  39.1-4vmw.500.0.0.469512            VMware  VMwareCertified   2011-09-07 
ipmi-ipmi-si-drv      39.1-4vmw.500.0.0.469512            VMware  VMwareCertified   2011-09-07 
misc-cnic-register    1.1-1vmw.500.0.0.469512             VMware  VMwareCertified   2011-09-07 
misc-drivers          5.0.0-0.0.469512                    VMware  VMwareCertified   2011-09-07 
net-be2net            4.0.88.0-1vmw.500.0.0.469512        VMware  VMwareCertified   2011-09-07 
net-bnx2              2.0.15g.v50.11-5vmw.500.0.0.469512  VMware  VMwareCertified   2011-09-07 
net-bnx2x             1.61.15.v50.1-1vmw.500.0.0.469512   VMware  VMwareCertified   2011-09-07 
net-cnic              1.10.2j.v50.7-2vmw.500.0.0.469512   VMware  VMwareCertified   2011-09-07 
net-e1000             8.0.3.1-2vmw.500.0.0.469512         VMware  VMwareCertified   2011-09-07 
net-e1000e            1.1.2-3vmw.500.0.0.469512           VMware  VMwareCertified   2011-09-07 
net-enic              1.4.2.15a-1vmw.500.0.0.469512       VMware  VMwareCertified   2011-09-07 
net-forcedeth         0.61-2vmw.500.0.0.469512            VMware  VMwareCertified   2011-09-07 
net-igb               2.1.11.1-3vmw.500.0.0.469512        VMware  VMwareCertified   2011-09-07 
net-ixgbe             2.0.84.8.2-10vmw.500.0.0.469512     VMware  VMwareCertified   2011-09-07 
net-nx-nic            4.0.557-3vmw.500.0.0.469512         VMware  VMwareCertified   2011-09-07 
net-r8168             8.013.00-3vmw.500.0.0.469512        VMware  VMwareCertified   2011-09-07 
net-r8169             6.011.00-2vmw.500.0.0.469512        VMware  VMwareCertified   2011-09-07 
net-s2io              2.1.4.13427-3vmw.500.0.0.469512     VMware  VMwareCertified   2011-09-07 
net-sky2              1.20-2vmw.500.0.0.469512            VMware  VMwareCertified   2011-09-07 
net-tg3               3.110h.v50.4-4vmw.500.0.0.469512    VMware  VMwareCertified   2011-09-07 
ohci-usb-ohci         1.0-3vmw.500.0.0.469512             VMware  VMwareCertified   2011-09-07 
sata-ahci             3.0-6vmw.500.0.0.469512             VMware  VMwareCertified   2011-09-07 
sata-ata-piix         2.12-4vmw.500.0.0.469512            VMware  VMwareCertified   2011-09-07 
sata-sata-nv          3.5-3vmw.500.0.0.469512             VMware  VMwareCertified   2011-09-07 
sata-sata-promise     2.12-3vmw.500.0.0.469512            VMware  VMwareCertified   2011-09-07 
sata-sata-sil         2.3-3vmw.500.0.0.469512             VMware  VMwareCertified   2011-09-07 
sata-sata-svw         2.3-3vmw.500.0.0.469512             VMware  VMwareCertified   2011-09-07 
scsi-aacraid          1.1.5.1-9vmw.500.0.0.469512         VMware  VMwareCertified   2011-09-07 
scsi-adp94xx          1.0.8.12-6vmw.500.0.0.469512        VMware  VMwareCertified   2011-09-07 
scsi-aic79xx          3.1-5vmw.500.0.0.469512             VMware  VMwareCertified   2011-09-07 
scsi-bnx2i            1.9.1d.v50.1-3vmw.500.0.0.469512    VMware  VMwareCertified   2011-09-07 
scsi-fnic             1.5.0.3-1vmw.500.0.0.469512         VMware  VMwareCertified   2011-09-07 
scsi-hpsa             5.0.0-17vmw.500.0.0.469512          VMware  VMwareCertified   2011-09-07 
scsi-ips              7.12.05-4vmw.500.0.0.469512         VMware  VMwareCertified   2011-09-07 
scsi-lpfc820          8.2.2.1-18vmw.500.0.0.469512        VMware  VMwareCertified   2011-09-07 
scsi-megaraid-mbox    2.20.5.1-6vmw.500.0.0.469512        VMware  VMwareCertified   2011-09-07 
scsi-megaraid-sas     4.32-1vmw.500.0.0.469512            VMware  VMwareCertified   2011-09-07 
scsi-megaraid2        2.00.4-9vmw.500.0.0.469512          VMware  VMwareCertified   2011-09-07 
scsi-mpt2sas          06.00.00.00-5vmw.500.0.0.469512     VMware  VMwareCertified   2011-09-07 
scsi-mptsas           4.23.01.00-5vmw.500.0.0.469512      VMware  VMwareCertified   2011-09-07 
scsi-mptspi           4.23.01.00-5vmw.500.0.0.469512      VMware  VMwareCertified   2011-09-07 
scsi-qla2xxx          901.k1.1-14vmw.500.0.0.469512       VMware  VMwareCertified   2011-09-07 
scsi-qla4xxx          5.01.03.2-3vmw.500.0.0.469512       VMware  VMwareCertified   2011-09-07 
uhci-usb-uhci         1.0-3vmw.500.0.0.469512             VMware  VMwareCertified   2011-09-07 
tools-light           5.0.0-0.0.469512                    VMware  VMwareCertified   2011-09-07 
~ #
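
To check a specific driver, such as the qla2xxx one we were after, you can simply filter the list:

~ # esxcli software vib list | grep qla2xxx
scsi-qla2xxx          901.k1.1-14vmw.500.0.0.469512       VMware  VMwareCertified   2011-09-07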

A final thought for all of you starting with vSphere 5: esxcli is the key in the ESXi 5.0 shell.

Juanma.

With the release of ESXi 5.0 the esxcli command has also been vastly improved. One of these new capabilities is the ability to manage the DNS configuration of the server.

The basic syntax for dns is:

~# esxcli network ip dns

This gives you two namespaces to work with:

  • search
  • server

esxcli_dns1

With the first one you can manage the suffixes for DNS search, and the second is for the DNS servers to be used by the ESXi host; a sketch of the corresponding commands follows the examples below.

  • Server operations
    List the servers configured:

image

Add a new server:

image

Remove a configured server:

image

  • Domain search operations

List configured domain suffixes:

image

Add a new domain:

image

Remove a configured domain:

image
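
For reference, here is a sketch of the corresponding commands; the IP address and domain used here are just placeholders, replace them with your own values.

esxcli network ip dns server list
esxcli network ip dns server add --server=192.168.1.10
esxcli network ip dns server remove --server=192.168.1.10
esxcli network ip dns search list
esxcli network ip dns search add --domain=example.com
esxcli network ip dns search remove --domain=example.com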

Juanma.

If you are expecting to see a whitebox as great as Phil Jaenke's (@RootWyrm) BabyDragon, I'm sorry to say that you'll be terribly disappointed, because unlike Phil's beauty mine wasn't built on purpose.

I've been running all my labs within VMware Workstation on my custom-made workstation, which by the way was running Windows 7 (64-bit). But recently I decided that it was time to move to a more reliable solution, so I converted my Windows 7 system into an ESXi server.

Surprisingly when I installed ESXi 4.1 Update 1 everything was properly recognized so I thought it could be of help to post the hardware configuration for other vGeeks out there that could be looking for working hardware components for their homelabs.

  • Processor: Intel Core 2 Quad Q9550. Supports FT!
  • Memory: 8GB
  • Motherboard: Asrock Penryn 1600SLI-110dB
  • Nic: Embedded nVidia NForce Network Controller. Supported under the forcedeth driver
~ # ethtool -i vmnic0
driver: forcedeth
version: 0.61.0.1-1vmw
firmware-version:
bus-info: 0000:00:14.0
~ #
  • SATA controller: nVidia MCP51 SATA Controller.
~ # vmkload_mod -s sata_nv
vmkload_mod module information
 input file: /usr/lib/vmware/vmkmod/sata_nv.o
 Version: Version 2.0.0.1-1vmw, Build: 348481, Interface: ddi_9_1 Built on: Jan 12 2011
 License: GPL
 Required name-spaces:
  com.vmware.vmkapi@v1_1_0_0
 Parameters:
  heap_max: int
    Maximum attainable heap size for the driver.
  heap_initial: int
    Initial heap size allocated for the driver.
~ #
  • HDD1: 1 x 120GB SATA 7200RPM Seagate ST3120026AS.
  • HDD2: 1 x 1TB SATA 7200RPM Seagate ST31000528AS.

Finally, here is a screenshot of the vSphere Client connected to the vCenter VM and showing the summary of the host.

The other components of my homelab are a QNAP TS-219P NAS and an HP ProCurve 1810G-8 switch. I also have plans to add two more NICs and an SSD to the server as soon as possible, and of course to build a new whitebox.

Juanma.