Archives For DevOps

Lattice is the latest addition from Pivotal to its portfolio of open source projects. Lattice leverages various components from Cloud Foundry to run containerized workloads in a cluster:

  • Diego – The new Cloud Foundry elastic runtime. Acts as an action-based scheduler and provides support for Docker images.
  • Doppler – The log and metric aggregator for the platform and the running workloads.
  • Gorouter – A software-based router with reverse proxy capabilities. Dynamically updated as the containers are spun up and down.

The first fact we need to understand about Lattice is that it is not intended to run production workloads. Instead, Lattice is meant to be run on VirtualBox or VMware Fusion using Vagrant. In the end, Lattice is an easy way to leverage all the power of Cloud Foundry for running containers on your laptop, without having to bother with all the Cloud Foundry installation details.

Deploying Lattice

Installing and running Lattice on your laptop is a relatively easy process. First, download the latest package from the GitHub Releases page for Lattice; there are packages available for Linux and OS X.

Unzip the package in a directory with the rest of your virtual machines.

unzip lattice-bundle-v0.4.3-osx.zip

Now copy the ltc utility to a directory in your PATH; I always use /usr/local/bin for this kind of binary.
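
For example, assuming the bundle unzips to a folder named after the release (adjust the path to match your download):

cp lattice-bundle-v0.4.3-osx/ltc /usr/local/bin/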

Running Lattice

Running Lattice should be quite simple: just change to the vagrant directory in the Lattice installation path, execute the vagrant up command and that’s it. However there is a caveat: by default the Vagrantfile will use the IP address 192.168.11.11 if the LATTICE_SYSTEM_IP variable is not provided during the execution.

To avoid this issue pass the LATTICE_SYSTEM_IP variable to Vagrant during the execution. I personally have used both VMware Fusion and VMware AppCatalyst, but you can use VirtualBox too. For AppCatalyst the only requirement is to have appcatalyst-daemon running, since it is needed by the Vagrant provider.
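
If you go the AppCatalyst route, the daemon can be launched by hand before running Vagrant; the path below assumes the default install location, so adjust it if yours differs:

/opt/vmware/appcatalyst/bin/appcatalyst-daemon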

LATTICE_SYSTEM_IP=192.168.161.11 vagrant up --provider vmware_fusion

With this we will have our Lattice instance up and running. Next we need to tell ltc how to connect to it; this operation is called targeting.

ltc target 192.168.161.11.xip.io

With the API endpoint set, let’s deploy our first application; for this example we will use the Lattice example app. Run the ltc create command with the name of the new app and the Docker image to be spun up as the arguments.
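
A minimal sketch of that command, assuming the example app’s image is cloudfoundry/lattice-app (check the Lattice documentation for the exact image name):

ltc create my-app cloudfoundry/lattice-app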

Open your favorite browser and access https://my-app.192.168.161.11.xip.io.

The index shown on the page indicates the instance we are accessing. Next we will scale up the application by adding two additional containers. Use ltc scale to add additional instances of the app and ltc status to retrieve the status.
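
Following the same example, something along these lines should do it (the app name and instance count match the scenario above):

ltc scale my-app 3
ltc status my-app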

Another useful operation with ltc is the ability to get the logs for your app.

 2015-10-02 14:23:18 ☆ trantor in ~
○ → ltc logs my-app
10/02 14:30:11.10 [APP|2] Lattice-app. Says Hello. on index: 2
10/02 14:30:11.28 [APP|0] Lattice-app. Says Hello. on index: 0
10/02 14:30:11.60 [APP|1] Lattice-app. Says Hello. on index: 1
10/02 14:30:12.10 [APP|2] Lattice-app. Says Hello. on index: 2
10/02 14:30:12.28 [APP|0] Lattice-app. Says Hello. on index: 0
10/02 14:30:12.60 [APP|1] Lattice-app. Says Hello. on index: 1

I’ll let you add more apps to Lattice and play around with ltc.

Comments are welcome, as always.

Juanma.

As with the rest of the NSX for vSphere components, any competent admin will want to configure a remote syslog server for the NSX Controllers. In my homelab I have vRealize Log Insight, and recently I decided to configure it on my NSX Controllers and document the procedure here, mostly as a self-reference.

NSX Manager has the option to configure a remote syslog server from its management web site, but where is the option for the Controllers? Well, if you look around the NSX interface in the vSphere Web Client you will quickly notice that the option is somehow missing. Actually the only way to enable it is through the NSX REST API. Let’s see how to do it.

For this post I will use the Firefox REST Client add-on, but you can use your favorite REST client. First of all, any REST API call will require at least the authentication header: in the Firefox REST Client click on the Authentication drop-down menu, select Basic Authentication and enter the admin credentials.

Additionally, the PUT and POST methods will require you to set a custom header that defines the content type of the HTTP request body, with the following values:

  • Name: Content-Type
  • Value: application/xml

With these two headers set, enter the API URL; in my case it is:

https://nsxm-01.mcorp.local/api/2.0/vdn/controller/controller-1/syslog

To construct this URL you will need the controller ID, which can be obtained from the NSX interface in the vSphere Web Client.

Select POST as the method. You will also need to enter the body for the HTTP request, in XML format. Use the code below as an example to build the content of the HTTP request body.
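
A sketch of such a body, assuming the controllerSyslogServer element from the NSX API (replace the IP address with that of your own syslog server):

<controllerSyslogServer>
  <syslogServer>192.168.1.10</syslogServer>
  <port>514</port>
  <protocol>UDP</protocol>
  <level>INFO</level>
</controllerSyslogServer>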

This XML code will tell NSX Manager to set the IP address in the syslogServer node as the remote syslog server for the controller in the URL. The protocol, port and log level are also defined.

Submit the request and if everything is configured as described you will receive a 200 OK status code.

At this point the syslog server is configured for all NSX Controllers. You can check the configuration with an API call to the same URL, this time selecting the GET method.
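
If you prefer the command line over the browser add-on, a hypothetical curl equivalent of that check would be (-k skips certificate validation, common in homelabs; -u prompts for the admin password):

curl -k -u admin https://nsxm-01.mcorp.local/api/2.0/vdn/controller/controller-1/syslog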

Comments are welcome.

Juanma.

Cockpit is a new web-based manager for Linux servers. It provides system administrators with a user-friendly interface to manage their Linux servers, it includes multi-server management capabilities and, more importantly, it creates no interference or disconnect between tasks done from the web and tasks done from the command line. This last feature is especially useful for those of us who like to combine the GUI with our console habits.

The stable version of Cockpit comes installed and enabled by default in Fedora 21 Server. It can also be found in CentOS/RHEL 7 Atomic, Fedora 21 Atomic and Fedora 21 Cloud, and there are plans to support Arch Linux in the near future.
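
On systems where it is present but not active, Cockpit is socket-activated through systemd; a minimal sketch using the upstream unit name and default port:

systemctl enable cockpit.socket
systemctl start cockpit.socket

After that, point your browser to https://<server>:9090.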

Let’s review now some of the features of Cockpit. As said before, multiple servers can be managed from the same Cockpit instance.

Once you access one of the managed nodes, Cockpit presents a general overview of the server with real-time charts of CPU, memory, disk I/O and network traffic.

On the left pane there is a series of actionable items that give you access to the different subsystems of the node, like Networking, Storage, User Accounts and even the status of the Docker containers running on the server, if the Docker service has been enabled.

There is also a system services view; when a service is selected Cockpit will display its details.

The Networking area displays traffic for the selected interface and the journal of the networking system, and even allows you to create a new bond interface or a new bridge, or to add a new VLAN tag to the interface.

The Storage view displays similar information for the disks: detailed information for each of them, the LVM configuration of the server, and the ability to perform different storage-related operations.

The Journal view lets you review the systemd journal. You can go back seven days into the log and filter on the type of messages.
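
For comparison, a roughly equivalent query from the console would be the following journalctl sketch (not something Cockpit itself runs for you):

journalctl --since "7 days ago" -p err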

After using Cockpit for some time in my lab I can say that I genuinely love it. The interface is pretty fast, it uses systemd for everything and it does not interfere with my console-based admin habits; on the contrary, it is a great complement to them.

Juanma.

VMware has released a new vRealize Operations Manager management pack for NSX Multi-hypervisor. This new management pack will allow vROps to extend its management capabilities into any NSX-MH infrastructure.

This management pack provides a great set of features, including:

  • Operational visibility into the different NSX-MH components, from NSX Manager to Controllers, transport nodes and logical elements of the network.
  • Search and drill down functionality to help the administrator monitor the health of the NSX objects.
  • Alerts and root cause problem solving capabilities, detecting configuration, connectivity and health deficiencies in the NSX environment.
  • Report templates for NSX Multi-Hypervisor environment.

The management pack requires vRealize Operations Manager 6.0 and can be downloaded from VMware Solutions Exchange.

Installation

To install this management pack go to Administration in the left pane.

From there go to Solutions and on the right pane click on the plus sign to add the new management pack.

Browse for the pack installation file, click Upload and then click Next when the installation file is uploaded.

Accept the EULA and proceed to the last screen. Wait until the management pack is installed and then click Finish.

Configure the adapter instance

The first task is to create the credentials for the solution. Access Administration -> Credentials and create a new credential for the NSX-MH Adapter. It has to include the administration credentials for the NSX Controller, NSX Manager and vCenter Server.

Next access Administration -> Solutions, select the NSX-MH pack and click on the gear icon.

On the pop-up window enter the IP address or the FQDN for:

  • NSX Controller
  • NSX Manager
  • vCenter Server

Only the first NSX Controller is needed.

Test the connection, accept the certificates for the different components and click Save Settings. After this the adapter is configured and will start collecting data; depending on the size of the NSX environment, it will take some time to have a full collection of data.

NSX-MH dashboards

Out of the box the management pack comes with three dashboards.

  • NSX-MH Main: It provides an overview of the health of the different network objects.

  • NSX-MH Topology: Provides details about the topology of a selected object, how it connects in the networks and a view of the related alerts and metrics.

  • NSX-MH Object Path: This dashboard enables the administrator to visually depict the path between two selected objects and verify how they are connected to each other and to other objects.

Juanma.


Anyone with some experience and knowledge of VMware HA knows how to perform a Reconfigure for HA operation on a host from the vSphere Client, and I’m no exception to that rule. However, I had never done it with PowerCLI.

I created a new cluster in my homelab with a problem in one of the hosts. I fixed the problem, put my mind to work, and after an hour or so digging through PowerCLI and the vSphere API Reference Documentation I came up with the following easy way to do it.

First we are going to create a variable containing the configuration of the ESXi host we want to reconfigure.

C:\
[vSphere PowerCLI] % $vmhost = Get-VMHost esxi06.vjlab.local
C:\
[vSphere PowerCLI] %
C:\
[vSphere PowerCLI] % $vmhost | Format-List

State                 : Connected
ConnectionState       : Connected
PowerState            : PoweredOn
VMSwapfileDatastoreId :
VMSwapfilePolicy      : Inherit
ParentId              : ClusterComputeResource-domain-c121
IsStandalone          : False
Manufacturer          : VMware, Inc.
Model                 : VMware Virtual Platform
NumCpu                : 2
CpuTotalMhz           : 5670
CpuUsageMhz           : 869
MemoryTotalMB         : 2299
MemoryUsageMB         : 868
ProcessorType         : Intel(R) Core(TM)2 Quad CPU    Q9550  @ 2.83GHz
HyperthreadingActive  : False
TimeZone              : UTC
Version               : 4.1.0
Build                 : 260247
Parent                : cluster3
VMSwapfileDatastore   :
StorageInfo           : HostStorageSystem-storageSystem-143
NetworkInfo           : esxi06:vjlab.local
DiagnosticPartition   : mpx.vmhba1:C0:T0:L0
FirewallDefaultPolicy :
ApiVersion            : 4.1
CustomFields          : {[com.hp.proliant, ]}
ExtensionData         : VMware.Vim.HostSystem
Id                    : HostSystem-host-143
Name                  : esxi06.vjlab.local
Uid                   : /VIServer=administrator@vcenter1.vjlab.local:443/VMHost=HostSystem-host-143/

C:\
[vSphere PowerCLI] %

Next, with the Get-View cmdlet, I retrieved the .NET view object of the host and stored it in another variable.

C:\
[vSphere PowerCLI] % Get-View $vmhost.Id

Runtime             : VMware.Vim.HostRuntimeInfo
Summary             : VMware.Vim.HostListSummary
Hardware            : VMware.Vim.HostHardwareInfo
Capability          : VMware.Vim.HostCapability
ConfigManager       : VMware.Vim.HostConfigManager
Config              : VMware.Vim.HostConfigInfo
Vm                  : {}
Datastore           : {Datastore-datastore-144}
Network             : {Network-network-11}
DatastoreBrowser    : HostDatastoreBrowser-datastoreBrowser-host-143
SystemResources     : VMware.Vim.HostSystemResourceInfo
Parent              : ClusterComputeResource-domain-c121
CustomValue         : {}
OverallStatus       : red
ConfigStatus        : red
ConfigIssue         : {0}
EffectiveRole       : {-1}
Permission          : {}
Name                : esxi06.vjlab.local
DisabledMethod      : {ExitMaintenanceMode_Task, PowerUpHostFromStandBy_Task, ReconnectHost_Task}
RecentTask          : {}
DeclaredAlarmState  : {alarm-1.host-143, alarm-101.host-143, alarm-102.host-143, alarm-103.host-143...}
TriggeredAlarmState : {}
AlarmActionsEnabled : True
Tag                 : {}
Value               : {}
AvailableField      : {com.hp.proliant}
MoRef               : HostSystem-host-143
Client              : VMware.Vim.VimClient

C:\
[vSphere PowerCLI] % $esxha = Get-View $vmhost.Id

Now, through the $esxha variable, I invoked the ReconfigureHostForDAS method to reconfigure the ESXi host. This method is part of the HostSystem object and its description can be found in the vSphere API reference.
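
The call itself is a one-liner, the same invocation the script below wraps:

C:\
[vSphere PowerCLI] % $esxha.ReconfigureHostForDAS()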

Once the method is invoked the task is displayed in the vSphere Client. You can also monitor the operation with the Get-Task cmdlet.

Finally I created the below script to simplify things in the future :-)

# Reconfigure-VMHostHA.ps1
# PowerCLI script to reconfigure for VMware HA a VM Host
#
# Juan Manuel Rey - juanmanuel (dot) reyportal (at) gmail (dot) com
# https://jreypo.wordpress.com
#

param([string]$esx)

$vmhost = Get-VMHost $esx
$esxha = Get-View $vmhost.Id
$esxha.ReconfigureHostForDAS()
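
A hypothetical run against the host from this post, assuming the script is in the current directory:

C:\
[vSphere PowerCLI] % .\Reconfigure-VMHostHA.ps1 -esx esxi06.vjlab.local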

Juanma.

Hpasmcli, the HP Management Command Line Interface, is a scriptable command-line tool to manage and monitor HP ProLiant servers through the hpasmd and hpasmxld daemons. It is part of the hp-health package that comes with the HP ProLiant Support Pack, or PSP.

[root@rhel4 ~]# rpm -qa | grep hp-health
hp-health-8.1.1-14.rhel4
[root@rhel4 ~]#
[root@rhel4 ~]# rpm -qi hp-health-8.1.1-14.rhel4
Name        : hp-health                    Relocations: (not relocatable)
Version     : 8.1.1                             Vendor: Hewlett-Packard Company
Release     : 14.rhel4                      Build Date: Fri 04 Jul 2008 07:04:51 PM CEST
Install Date: Thu 02 Apr 2009 05:10:48 PM CEST      Build Host: rhel4ebuild.M73C253-lab.net
Group       : System Environment            Source RPM: hp-health-8.1.1-14.rhel4.src.rpm
Size        : 1147219                          License: 2008 Hewlett-Packard Development Company, L.P.
Signature   : (none)
Packager    : Hewlett-Packard Company
URL         : http://www.hp.com/go/proliantlinux
Summary     : hp System Health Application and Command line Utility Package
Description :
This package contains the System Health Monitor for all hp Proliant systems
with ASM, ILO, & ILO2 embedded management asics.  Also contained are the
command line utilities.
[root@rhel4 ~]#
[root@rhel4 ~]# rpm -ql hp-health-8.1.1-14.rhel4
/etc/init.d/hp-health
/opt/hp/hp-health
/opt/hp/hp-health/bin
/opt/hp/hp-health/bin/IrqRouteTbl
/opt/hp/hp-health/bin/hpasmd
/opt/hp/hp-health/bin/hpasmlited
/opt/hp/hp-health/bin/hpasmpld
/opt/hp/hp-health/bin/hpasmxld
/opt/hp/hp-health/hprpm.xpm
/opt/hp/hp-health/sh
/opt/hp/hp-health/sh/hpasmxld_reset.sh
/sbin/hpasmcli
/sbin/hpbootcfg
/sbin/hplog
/sbin/hpuid
/usr/lib/libhpasmintrfc.so
/usr/lib/libhpasmintrfc.so.2
/usr/lib/libhpasmintrfc.so.2.0
/usr/lib/libhpev.so
/usr/lib/libhpev.so.1
/usr/lib/libhpev.so.1.0
/usr/lib64/libhpasmintrfc64.so
/usr/lib64/libhpasmintrfc64.so.2
/usr/lib64/libhpasmintrfc64.so.2.0
/usr/share/man/man4/hp-health.4.gz
/usr/share/man/man4/hpasmcli.4.gz
/usr/share/man/man7/hp_mgmt_install.7.gz
/usr/share/man/man8/hpbootcfg.8.gz
/usr/share/man/man8/hplog.8.gz
/usr/share/man/man8/hpuid.8.gz
[root@rhel4 ~]#

This handy tool can be used to view and modify several BIOS settings of the server and to monitor the status of the different hardware components like fans, memory modules, temperature, power supplies, etc.

It can be used in two ways:

  • Interactive shell
  • Within a script

The interactive shell supports TAB command completion and command recovery through a history buffer.

[root@rhel4 ~]# hpasmcli
HP management CLI for Linux (v1.0)
Copyright 2004 Hewlett-Packard Development Group, L.P.

--------------------------------------------------------------------------
NOTE: Some hpasmcli commands may not be supported on all Proliant servers.
      Type 'help' to get a list of all top level commands.
--------------------------------------------------------------------------
hpasmcli> help
CLEAR  DISABLE  ENABLE  EXIT  HELP  NOTE  QUIT  REPAIR  SET  SHOW
hpasmcli>

As can be seen in the above example, several main tasks can be performed; to get the usage of any command simply type HELP followed by the command.

hpasmcli> help show
USAGE: SHOW [ ASR | BOOT | DIMM | F1 | FANS | HT | IML | IPL | NAME | PORTMAP | POWERSUPPLY | PXE | SERIAL | SERVER | TEMP | UID | WOL ]
hpasmcli>
hpasmcli> HELP SHOW BOOT
USAGE: SHOW BOOT: Shows boot devices.
hpasmcli>

In my experience SHOW is by far the most used command. Following are examples of some of the tasks.

– Display general information of the server

hpasmcli> SHOW SERVER
System        : ProLiant DL380 G5
Serial No.    : XXXXXXXXX     
ROM version   : P56 11/01/2008
iLo present   : Yes
Embedded NICs : 2
        NIC1 MAC: 00:1c:c4:62:42:a0
        NIC2 MAC: 00:1c:c4:62:42:9e

Processor: 0
        Name         : Intel Xeon
        Stepping     : 11
        Speed        : 2666 MHz
        Bus          : 1333 MHz
        Core         : 4
        Thread       : 4
        Socket       : 1
        Level2 Cache : 8192 KBytes
        Status       : Ok

Processor: 1
        Name         : Intel Xeon
        Stepping     : 11
        Speed        : 2666 MHz
        Bus          : 1333 MHz
        Core         : 4
        Thread       : 4
        Socket       : 2
        Level2 Cache : 8192 KBytes
        Status       : Ok

Processor total  : 2

Memory installed : 16384 MBytes
ECC supported    : Yes
hpasmcli>

– Show current temperatures

hpasmcli> SHOW TEMP
Sensor   Location              Temp       Threshold
------   --------              ----       ---------
#1        I/O_ZONE             49C/120F   70C/158F
#2        AMBIENT              23C/73F    39C/102F
#3        CPU#1                30C/86F    127C/260F
#4        CPU#1                30C/86F    127C/260F
#5        POWER_SUPPLY_BAY     52C/125F   77C/170F
#6        CPU#2                30C/86F    127C/260F
#7        CPU#2                30C/86F    127C/260F

hpasmcli>

– Get the status of the server fans

hpasmcli> SHOW FAN
Fan  Location        Present Speed  of max  Redundant  Partner  Hot-pluggable
---  --------        ------- -----  ------  ---------  -------  -------------
#1   I/O_ZONE        Yes     NORMAL 45%     Yes        0        Yes          
#2   I/O_ZONE        Yes     NORMAL 45%     Yes        0        Yes          
#3   PROCESSOR_ZONE  Yes     NORMAL 41%     Yes        0        Yes          
#4   PROCESSOR_ZONE  Yes     NORMAL 36%     Yes        0        Yes          
#5   PROCESSOR_ZONE  Yes     NORMAL 36%     Yes        0        Yes          
#6   PROCESSOR_ZONE  Yes     NORMAL 36%     Yes        0        Yes           

hpasmcli>

– Show device boot order configuration

hpasmcli> SHOW BOOT
First boot device is: CDROM.
One time boot device is: Not set.
hpasmcli>

– Set USB key as first boot device

hpasmcli> SET BOOT FIRST USBKEY

– Show memory modules status

hpasmcli> SHOW DIMM
DIMM Configuration
------------------
Cartridge #:   0
Module #:      1
Present:       Yes
Form Factor:   fh
Memory Type:   14h
Size:          4096 MB
Speed:         667 MHz
Status:        Ok

Cartridge #:   0
Module #:      2
Present:       Yes
Form Factor:   fh
Memory Type:   14h
Size:          4096 MB
Speed:         667 MHz
Status:        Ok

Cartridge #:   0
Module #:      3
Present:       Yes
Form Factor:   fh
Memory Type:   14h
Size:          4096 MB
Speed:         667 MHz
Status:        Ok
...

In scripting mode hpasmcli can be used directly from the shell prompt with the -s option and the command between quotation marks; this of course allows you to process the output of the commands, like in the example below.

[root@rhel4 ~]# hpasmcli -s "show dimm" | egrep "Module|Status"
Module #:      1
Status:        Ok
Module #:      2
Status:        Ok
Module #:      3
Status:        Ok
Module #:      4
Status:        Ok
Module #:      5
Status:        Ok
Module #:      6
Status:        Ok
Module #:      7
Status:        Ok
Module #:      8
Status:        Ok
[root@rhel4 ~]#

To execute more than one command sequentially separate them with a semicolon.

[root@rhel4 ~]# hpasmcli -s "show fan; show temp"

Fan  Location        Present Speed  of max  Redundant  Partner  Hot-pluggable
---  --------        ------- -----  ------  ---------  -------  -------------
#1   I/O_ZONE        Yes     NORMAL 45%     Yes        0        Yes          
#2   I/O_ZONE        Yes     NORMAL 45%     Yes        0        Yes          
#3   PROCESSOR_ZONE  Yes     NORMAL 41%     Yes        0        Yes          
#4   PROCESSOR_ZONE  Yes     NORMAL 36%     Yes        0        Yes          
#5   PROCESSOR_ZONE  Yes     NORMAL 36%     Yes        0        Yes          
#6   PROCESSOR_ZONE  Yes     NORMAL 36%     Yes        0        Yes           

Sensor   Location              Temp       Threshold
------   --------              ----       ---------
#1        I/O_ZONE             47C/116F   70C/158F
#2        AMBIENT              21C/69F    39C/102F
#3        CPU#1                30C/86F    127C/260F
#4        CPU#1                30C/86F    127C/260F
#5        POWER_SUPPLY_BAY     50C/122F   77C/170F
#6        CPU#2                30C/86F    127C/260F
#7        CPU#2                30C/86F    127C/260F

[root@rhel4 ~]#

If you want to play more with hpasmcli, check its man page and the ProLiant Support Pack documentation.

Juanma.