
Team OpenStack @ VMware

In a previous article I showed the process to patch an existing VIO 1.0 installation, which, as you were able to see, is a clean and easy process. VMware announced VMware Integrated OpenStack 2.0 during VMworld US and it finally became generally available a few weeks ago.

This new version of VIO has all the OpenStack code updated to the latest Kilo release and comes packaged with many interesting features, like Load-Balancing-as-a-Service (LBaaS) and auto-scaling capabilities based on Heat and Ceilometer.

With a new VIO version hot off the press it is now time to upgrade your VIO 1.0.x environment to 2.0 and take advantage of all those great new goodies. The upgrade process is pretty straightforward and consists of three main stages.

  • Upgrade VIO Management Server
  • Deploy a new VIO 2.0 environment
  • Perform the data migration

Keep in mind that you will need enough hardware resources in your management cluster to host two full-fledged VIO installations at the same time during the migration process. Just for the sake of transparency, the lab environment where I tested the upgrade is based on vSphere 5.5 Update 2, NSX for vSphere 6.1.4 and VIO 1.0.2.

Step 1 – Upgrade VIO Management Server

Download the .deb upgrade package from the VMware website and upload it to the VIO Management Server.

Download VIO upgrade package
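The upload itself can be done with scp, for example; the package name matches the one staged below, and the viouser account and vio-oms hostname are the ones used throughout this post, so adjust them to your environment:

scp vio-1.0-upgrade_2.0.0.3037964_all.deb viouser@vio-oms:~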

Stage the upgrade package.

viouser@vio-oms:~$ sudo viopatch add -l vio-1.0-upgrade_2.0.0.3037964_all.deb
[sudo] password for viouser:
vio-1.0-upgrade_2.0.0.3037964_all.deb patch has been added.
viouser@vio-oms:~$ viopatch list
Name            Version       Type   Installed
--------------- ------------- ------ -----------
vio-1.0-upgrade 2.0.0.3037964 infra  No
vio-patch-2     1.0.2.2813500 infra  Yes

viouser@vio-oms:~$

Upgrade the management server with the viopatch command.

viouser@vio-oms:~$ sudo viopatch install -p vio-1.0-upgrade -v 2.0.0.3037964
Installing patch vio-1.0-upgrade version 2.0.0.3037964
done
Installation complete for patch vio-1.0-upgrade version 2.0.0.3037964
viouser@vio-oms:~$
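Optionally, you can confirm from the management server that the upgrade patch now shows as installed before moving on:

viouser@vio-oms:~$ viopatch list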

Go to the vSphere Web Client, log out and log back in, and verify that the new version is correct.

vio-oms-upgraded

Step 2 – Deploy a new VIO 2.0 environment

With the VIO Management Server upgraded it is now time to deploy a fresh 2.0 environment. In the VIO plugin go to the Manage section and a new Upgrades tab will be there.

vio-upgrades-tab

Before starting with the deployment, check in the Networks tab that there are enough free IP addresses for the new deployment; if there aren’t, add a new IP range.

new_ip_range

Click on the Upgrade icon. Choose whether you want to participate in the Customer Experience Improvement Program, my recommendation here is to say yes to help our engineering team improve the VIO upgrade experience even more ;-), and enter the name for the new deployment.

deployment_name

Enter the public and private load-balanced IP addresses. Keep in mind that the public IP address must belong to the API subnet of the existing VIO 1.0 environment, and the private one to the management network segment.

lb_vio2_ips

In the last screen review the configured values and click Finish. The new environment will be deployed and you will be able to monitor it from the Upgrades tab.

vio2_new_deployment

Step 3 – Migrate the data

With the new environment up and ready we can start the data migration. From the Upgrades tab right-click on your existing VIO 1.0 installation and select Migrate Data.

migrate_vio_data

The migration wizard will ask for confirmation, click OK. During the data migration all OpenStack services will be unavailable.

data_migration

When the migration process is finished the status of the new VIO 2.0 environment will appear as Migrated and the previous VIO 1.0 will appear as Stopped.

vio_migrated

Open a browser, connect to the VIO 2.0 public IP to access the OpenStack Horizon interface, log in and verify that all your workloads, networks, images, etc. have been properly migrated. Log out from Horizon and go back to the Web Client. Now that the data has been migrated we need to migrate the original Public Virtual IP to the new environment.

Right-click on the VIO 1.0 deployment and from the menu select Switch To New Deployment.

switch_vio_ip

A new pop-up will appear asking for confirmation, since once again the OpenStack services will be unavailable during the IP reconfiguration.

After the reconfiguration the new VIO 2.0 deployment will be in Running status and the Public Virtual IP will be the same as the former 1.0 deployment.

migration_finished

The upgrade procedure is finished. We can now access Horizon using the existing FQDN, verify that everything is still working and enjoy our new OpenStack Kilo environment.

horizon_kilo

Just like patching, upgrading your OpenStack cloud with VIO does not have to be a painful experience; VIO provides the best OpenStack experience in a vSphere environment. Kudos to my colleagues of the Team OpenStack @ VMware.

Happy Stacking!

Juanma.

Lattice is the latest addition from Pivotal to its portfolio of open source projects. Lattice leverages various components from Cloud Foundry in order to run containerized workloads in a cluster.

  • Diego – The new Cloud Foundry elastic runtime. Acts as an action-based scheduler and provides support for Docker images.
  • Doppler – The log and metric aggregator for the platform and the running workloads.
  • Gorouter – A software-based router with reverse proxy capabilities. Dynamically updated as the containers are spun up and down.

The first fact we need to understand about Lattice is that it is not intended to run production workloads; instead it is meant to be run on VirtualBox or VMware Fusion using Vagrant. In the end Lattice is an easy way to leverage all the power of Cloud Foundry for running containers on your laptop, without having to bother with all the Cloud Foundry installation details.

Deploying Lattice

Installing and running Lattice on your laptop is a relatively easy process. First download the latest package from the Lattice GitHub Releases page; there are packages available for Linux and OS X.

Unzip the package in a directory with the rest of your virtual machines.

unzip lattice-bundle-v0.4.3-osx.zip

Now copy the ltc utility to a directory in your PATH; I always use /usr/local/bin for this kind of binary.
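Assuming the ltc binary sits at the top level of the unzipped bundle, something like this will do:

sudo cp lattice-bundle-v0.4.3-osx/ltc /usr/local/bin/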

Running Lattice

Running Lattice should be quite simple: just change to the vagrant directory in the Lattice installation path, execute the vagrant up command and that’s it. However there is a caveat: by default the Vagrantfile will use the IP address 192.168.11.11 if the LATTICE_SYSTEM_IP variable is not provided during the execution.

To avoid this issue pass the LATTICE_SYSTEM_IP variable to Vagrant during the execution. I personally have used both VMware Fusion and VMware AppCatalyst but you can use VirtualBox too. For AppCatalyst the only requirement is to have the appcatalyst-daemon running, since it is needed by the Vagrant provider.
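For example, assuming the bundle was unzipped in the current directory, first change into its vagrant directory and then bring the VM up:

cd lattice-bundle-v0.4.3-osx/vagrant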

LATTICE_SYSTEM_IP=192.168.161.11 vagrant up --provider vmware_fusion

With this we will have our Lattice instance up and running. Next we need to tell ltc how to connect to our Lattice instance; this operation is called targeting.

ltc target 192.168.161.11.xip.io

With the API endpoint set, let’s deploy our first application; for this example we will use the Lattice example app. Run the ltc create command with the name of the new app and the container image to be spun up as the arguments.

Screen Shot 2015-10-02 at 13.33.37
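For reference, such a command would look roughly like this; my-app is the application name used in the rest of this post, while the cloudfoundry/lattice-app image name is an assumption, so substitute your own Docker image if needed:

ltc create my-app cloudfoundry/lattice-app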

Open your favorite browser and access https://my-app.192.168.161.11.xip.io.

Screen Shot 2015-10-02 at 13.44.24

The index indicates the container instance that we are accessing. Next we will scale up the application by adding two additional containers. Use ltc scale to add additional instances of the app and ltc status to retrieve its status.

Screen Shot 2015-10-02 at 14.22.40
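The scale and status commands are along these lines, assuming we want three instances in total for the same my-app application:

ltc scale my-app 3
ltc status my-app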

Another useful capability of ltc is getting the logs of your app.

 2015-10-02 14:23:18 ☆ trantor in ~
○ → ltc logs my-app
10/02 14:30:11.10 [APP|2] Lattice-app. Says Hello. on index: 2
10/02 14:30:11.28 [APP|0] Lattice-app. Says Hello. on index: 0
10/02 14:30:11.60 [APP|1] Lattice-app. Says Hello. on index: 1
10/02 14:30:12.10 [APP|2] Lattice-app. Says Hello. on index: 2
10/02 14:30:12.28 [APP|0] Lattice-app. Says Hello. on index: 0
10/02 14:30:12.60 [APP|1] Lattice-app. Says Hello. on index: 1

I’ll let you add more apps to Lattice and play around with ltc.

Comments are welcome, as always.

Juanma.

As with the rest of the NSX for vSphere components, any competent admin will want to configure a remote syslog server for the NSX Controllers. In my homelab I have vRealize Log Insight, and recently I decided to configure it on my NSX Controllers and document the procedure here, mostly as a self-reference.

NSX Manager has the option to configure a remote syslog server using its management web site, but where is the option for the Controllers? Well, if you lurk around the NSX interface in the vSphere Web Client you will quickly notice that the option is somehow missing. Actually the only way to enable it is using the NSX REST API. Let’s see how to do it.

For this post I will use the Firefox REST Client add-on, but you can use your favorite REST client. Firstly, any REST API call will require at least the authentication header; in the Firefox REST Client click on the Authentication drop-down menu, select Basic Authentication and enter the admin credentials.

Screen Shot 2015-09-30 at 02.25.19

Additionally, the PUT and POST methods require you to set a custom header that defines the content type of the HTTP request body. Use these values:

  • Name: Content-Type
  • Value: application/xml

With these two headers set, enter the API URL; in my case it is:

https://nsxm-01.mcorp.local/api/2.0/vdn/controller/controller-1/syslog

To construct this URL you will need the controller ID, which can be obtained from the NSX interface in the vSphere Web Client as shown below.

Screen Shot 2015-09-30 at 02.50.38

Select POST as the method. You will need to enter the body for the HTTP request in XML format; use the code below as an example to build it.

This XML code tells NSX Manager to set the IP address in the syslogServer node as the remote syslog server for the controller specified in the URL. The protocol, port and log level are also defined.
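If you prefer the command line, the same request can be sent with curl. This is only a rough sketch: the syslog server IP and admin user are placeholders, and the body assumes the controllerSyslogServerInfo format, so check it against the NSX API guide for your version:

curl -k -u admin -X POST -H "Content-Type: application/xml" \
  -d '<controllerSyslogServerInfo><syslogServer>192.168.1.50</syslogServer><port>514</port><protocol>UDP</protocol><level>INFO</level></controllerSyslogServerInfo>' \
  https://nsxm-01.mcorp.local/api/2.0/vdn/controller/controller-1/syslog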

Screen Shot 2015-09-17 at 01.14.19

Submit the request and if everything is configured as described you will receive a 200 OK status code.

Screen Shot 2015-09-17 at 01.15.06

Repeat the call for each controller ID and at this point the syslog server will be configured for all the NSX Controllers. You can check the configuration using an API call with the same URL and the GET method.
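The check can also be scripted with curl, for instance:

curl -k -u admin https://nsxm-01.mcorp.local/api/2.0/vdn/controller/controller-1/syslog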

Comments are welcome.

Juanma.

Being used to having Cockpit in my Fedora 21 Server VMs, I decided that having it also on my CentOS machines would be awesome. Unfortunately I quickly found that Cockpit was not available in the CentOS repositories. Of course I knew that Cockpit comes installed and enabled by default in the CentOS 7 Atomic host image, so I figured that those packages had to be hidden in some Atomic-related repo.

After looking around a bit I finally found on GitHub the sig-atomic-buildscripts repository that belongs to the CentOS Project. This repository contains several scripts and files intended to build your own CentOS Atomic host, including virt7-testing.repo, the Yum repository file needed for Cockpit.

Clone the GitHub repository.

git clone https://github.com/baude/sig-atomic-buildscripts

Copy the virt7-testing.repo file to /etc/yum.repos.d and install Cockpit.
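Assuming the repo file sits at the top level of the cloned repository, the copy looks like this:

cp sig-atomic-buildscripts/virt7-testing.repo /etc/yum.repos.d/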

yum install cockpit

Enable the Cockpit socket.

[root@webtest ~]# systemctl enable cockpit.socket
ln -s '/usr/lib/systemd/system/cockpit.socket' '/etc/systemd/system/sockets.target.wants/cockpit.socket'
[root@webtest ~]#

Add Cockpit to the list of trusted services in FirewallD.

[root@webtest ~]# firewall-cmd --permanent --zone=public --add-service=cockpit
success
[root@webtest ~]#
[root@webtest ~]# firewall-cmd --reload
success
[root@webtest ~]#
[root@webtest ~]# firewall-cmd --list-services
cockpit dhcpv6-client ssh
[root@webtest ~]#

Start Cockpit socket.

systemctl start cockpit.socket

Do not try to access Cockpit yet, there is an issue with running Cockpit on stock CentOS/RHEL 7. To be able to start it we first need to modify the service file to disable SSL. Edit the file /usr/lib/systemd/system/cockpit.service and modify the ExecStart line to look like this:

ExecStart=/usr/libexec/cockpit-ws --no-tls

I know this procedure makes Cockpit unsuitable for a production environment on RHEL 7, at least for now, but this is for my lab environment and I can live with it.

Reload systemd.

systemctl daemon-reload

Restart Cockpit.

systemctl restart cockpit

Access the Cockpit web interface on port 9090, log in as root and have fun :-)

Screen Shot 2015-01-09 at 01.57.51

Juanma.

 

fedora_infinity_140x140

Cockpit is a new web-based server manager to administer Linux servers. It provides system administrators with a user-friendly interface to manage their Linux servers, includes multi-server management capabilities and, more importantly, creates no interference or disconnection between the tasks done from the web and those done from the command line. This last feature is especially useful if, like me, you are used to administering your servers from the command line.

By default Cockpit, in its stable version, comes installed and enabled in Fedora 21 Server. It can also be found in CentOS/RHEL 7 Atomic, Fedora 21 Atomic and Fedora 21 Cloud, and there are plans to support Arch Linux in the near future.

Let’s now review some of the features of Cockpit. As said before, multiple servers can be managed from the same Cockpit instance.

Screen Shot 2014-12-31 at 19.26.00

Once you access one of the managed nodes it will present a general overview of the server with real-time charts of CPU, memory, disk I/O and network traffic.

Screen Shot 2014-12-31 at 19.37.53

On the left pane there are a series of actionable items that will give you access to the different subsystems of the node like Networking, Storage, User Accounts and even the status of the Docker containers running on the server, if the Docker service has been enabled.

System services view.

Screen Shot 2014-12-31 at 19.52.53

When a process is selected Cockpit will display its details.

Screen Shot 2015-01-08 at 12.11.27

The Networking area displays traffic for the selected interface and the journal of the networking system, and even allows you to create a new bond interface or a new bridge, or add a new VLAN tag to the interface.

Screen Shot 2014-12-31 at 19.53.18

The Storage view displays similar information for the disks: it shows detailed information for each of them, lets you review the LVM configuration of the server and allows you to perform different storage-related operations.

Screen Shot 2014-12-31 at 19.53.45

The Journal view lets you review the systemd journal. You can go back seven days into the log and filter on the type of messages.

Screen Shot 2014-12-31 at 19.54.29

After using Cockpit for some time in my lab I can say that I genuinely love it. The interface is pretty fast, it uses systemd for everything and it does not interfere with my console-based admin habits; on the contrary, it is a great complement to them.

Juanma.

Anyone with some experience and knowledge of VMware HA knows how to perform a Reconfigure for HA operation on a host from the vSphere Client, and I’m no exception to that rule. However I had never done it with PowerCLI.

I created a new cluster in my homelab and had a problem with one of the hosts. I fixed the problem, put my mind to work and, after an hour or so digging through PowerCLI and the vSphere API Reference Documentation, I came up with the following easy way to do it.

First we are going to create a variable that contains the configuration of the ESXi host we want to reconfigure.

C:\
[vSphere PowerCLI] % $vmhost = Get-VMHost esxi06.vjlab.local
C:\
[vSphere PowerCLI] %
C:\
[vSphere PowerCLI] % $vmhost | Format-List

State                 : Connected
ConnectionState       : Connected
PowerState            : PoweredOn
VMSwapfileDatastoreId :
VMSwapfilePolicy      : Inherit
ParentId              : ClusterComputeResource-domain-c121
IsStandalone          : False
Manufacturer          : VMware, Inc.
Model                 : VMware Virtual Platform
NumCpu                : 2
CpuTotalMhz           : 5670
CpuUsageMhz           : 869
MemoryTotalMB         : 2299
MemoryUsageMB         : 868
ProcessorType         : Intel(R) Core(TM)2 Quad CPU    Q9550  @ 2.83GHz
HyperthreadingActive  : False
TimeZone              : UTC
Version               : 4.1.0
Build                 : 260247
Parent                : cluster3
VMSwapfileDatastore   :
StorageInfo           : HostStorageSystem-storageSystem-143
NetworkInfo           : esxi06:vjlab.local
DiagnosticPartition   : mpx.vmhba1:C0:T0:L0
FirewallDefaultPolicy :
ApiVersion            : 4.1
CustomFields          : {[com.hp.proliant, ]}
ExtensionData         : VMware.Vim.HostSystem
Id                    : HostSystem-host-143
Name                  : esxi06.vjlab.local
Uid                   : /VIServer=administrator@vcenter1.vjlab.local:443/VMHost=HostSystem-host-143/

C:\
[vSphere PowerCLI] %

Next, with the Get-View cmdlet, I retrieved the .NET object behind the host ID and stored it in another variable.

C:\
[vSphere PowerCLI] % Get-View $vmhost.Id

Runtime             : VMware.Vim.HostRuntimeInfo
Summary             : VMware.Vim.HostListSummary
Hardware            : VMware.Vim.HostHardwareInfo
Capability          : VMware.Vim.HostCapability
ConfigManager       : VMware.Vim.HostConfigManager
Config              : VMware.Vim.HostConfigInfo
Vm                  : {}
Datastore           : {Datastore-datastore-144}
Network             : {Network-network-11}
DatastoreBrowser    : HostDatastoreBrowser-datastoreBrowser-host-143
SystemResources     : VMware.Vim.HostSystemResourceInfo
Parent              : ClusterComputeResource-domain-c121
CustomValue         : {}
OverallStatus       : red
ConfigStatus        : red
ConfigIssue         : {0}
EffectiveRole       : {-1}
Permission          : {}
Name                : esxi06.vjlab.local
DisabledMethod      : {ExitMaintenanceMode_Task, PowerUpHostFromStandBy_Task, ReconnectHost_Task}
RecentTask          : {}
DeclaredAlarmState  : {alarm-1.host-143, alarm-101.host-143, alarm-102.host-143, alarm-103.host-143...}
TriggeredAlarmState : {}
AlarmActionsEnabled : True
Tag                 : {}
Value               : {}
AvailableField      : {com.hp.proliant}
MoRef               : HostSystem-host-143
Client              : VMware.Vim.VimClient

C:\
[vSphere PowerCLI] % $esxha = Get-View $vmhost.Id

Now, through the $esxha variable, I invoked the ReconfigureHostForDAS method to reconfigure the ESXi host. This method is part of the HostSystem object and its description can be found in the vSphere API reference.

Once the method is invoked the task is displayed in the vSphere Client. You can also monitor the operation with the Get-Task cmdlet.

Finally I created the below script to simplify things in the future :-)

# Reconfigure-VMHostHA.ps1
# PowerCLI script to reconfigure for VMware HA a VM Host
#
# Juan Manuel Rey - juanmanuel (dot) reyportal (at) gmail (dot) com
# https://jreypo.wordpress.com
#

param([string]$esx)

$vmhost = Get-VMHost $esx
$esxha = Get-View $vmhost.Id
$esxha.ReconfigureHostForDAS()

Juanma.

Hpasmcli, the HP Management Command Line Interface, is a scriptable command-line tool to manage and monitor HP ProLiant servers through the hpasmd and hpasmxld daemons. It is part of the hp-health package that comes with the HP ProLiant Support Pack, or PSP.

[root@rhel4 ~]# rpm -qa | grep hp-health
hp-health-8.1.1-14.rhel4
[root@rhel4 ~]#
[root@rhel4 ~]# rpm -qi hp-health-8.1.1-14.rhel4
Name        : hp-health                    Relocations: (not relocatable)
Version     : 8.1.1                             Vendor: Hewlett-Packard Company
Release     : 14.rhel4                      Build Date: Fri 04 Jul 2008 07:04:51 PM CEST
Install Date: Thu 02 Apr 2009 05:10:48 PM CEST      Build Host: rhel4ebuild.M73C253-lab.net
Group       : System Environment            Source RPM: hp-health-8.1.1-14.rhel4.src.rpm
Size        : 1147219                          License: 2008 Hewlett-Packard Development Company, L.P.
Signature   : (none)
Packager    : Hewlett-Packard Company
URL         : http://www.hp.com/go/proliantlinux
Summary     : hp System Health Application and Command line Utility Package
Description :
This package contains the System Health Monitor for all hp Proliant systems
with ASM, ILO, & ILO2 embedded management asics.  Also contained are the
command line utilities.
[root@rhel4 ~]#
[root@rhel4 ~]# rpm -ql hp-health-8.1.1-14.rhel4
/etc/init.d/hp-health
/opt/hp/hp-health
/opt/hp/hp-health/bin
/opt/hp/hp-health/bin/IrqRouteTbl
/opt/hp/hp-health/bin/hpasmd
/opt/hp/hp-health/bin/hpasmlited
/opt/hp/hp-health/bin/hpasmpld
/opt/hp/hp-health/bin/hpasmxld
/opt/hp/hp-health/hprpm.xpm
/opt/hp/hp-health/sh
/opt/hp/hp-health/sh/hpasmxld_reset.sh
/sbin/hpasmcli
/sbin/hpbootcfg
/sbin/hplog
/sbin/hpuid
/usr/lib/libhpasmintrfc.so
/usr/lib/libhpasmintrfc.so.2
/usr/lib/libhpasmintrfc.so.2.0
/usr/lib/libhpev.so
/usr/lib/libhpev.so.1
/usr/lib/libhpev.so.1.0
/usr/lib64/libhpasmintrfc64.so
/usr/lib64/libhpasmintrfc64.so.2
/usr/lib64/libhpasmintrfc64.so.2.0
/usr/share/man/man4/hp-health.4.gz
/usr/share/man/man4/hpasmcli.4.gz
/usr/share/man/man7/hp_mgmt_install.7.gz
/usr/share/man/man8/hpbootcfg.8.gz
/usr/share/man/man8/hplog.8.gz
/usr/share/man/man8/hpuid.8.gz
[root@rhel4 ~]#

This handy tool can be used to view and modify several BIOS settings of the server and to monitor the status of the different hardware components like fans, memory modules, temperature, power supplies, etc.

It can be used in two ways:

  • Interactive shell
  • Within a script

The interactive shell supports TAB command completion and command recovery through a history buffer.

[root@rhel4 ~]# hpasmcli
HP management CLI for Linux (v1.0)
Copyright 2004 Hewlett-Packard Development Group, L.P.

--------------------------------------------------------------------------
NOTE: Some hpasmcli commands may not be supported on all Proliant servers.
      Type 'help' to get a list of all top level commands.
--------------------------------------------------------------------------
hpasmcli> help
CLEAR  DISABLE  ENABLE  EXIT  HELP  NOTE  QUIT  REPAIR  SET  SHOW
hpasmcli>

As can be seen in the above example, several main tasks can be performed; to get the usage of any command simply use HELP followed by the command.

hpasmcli> help show
USAGE: SHOW [ ASR | BOOT | DIMM | F1 | FANS | HT | IML | IPL | NAME | PORTMAP | POWERSUPPLY | PXE | SERIAL | SERVER | TEMP | UID | WOL ]
hpasmcli>
hpasmcli> HELP SHOW BOOT
USAGE: SHOW BOOT: Shows boot devices.
hpasmcli>

In my experience SHOW is the most used command of them all. Following are examples of some of these tasks.

– Display general information of the server

hpasmcli> SHOW SERVER
System        : ProLiant DL380 G5
Serial No.    : XXXXXXXXX     
ROM version   : P56 11/01/2008
iLo present   : Yes
Embedded NICs : 2
        NIC1 MAC: 00:1c:c4:62:42:a0
        NIC2 MAC: 00:1c:c4:62:42:9e

Processor: 0
        Name         : Intel Xeon
        Stepping     : 11
        Speed        : 2666 MHz
        Bus          : 1333 MHz
        Core         : 4
        Thread       : 4
        Socket       : 1
        Level2 Cache : 8192 KBytes
        Status       : Ok

Processor: 1
        Name         : Intel Xeon
        Stepping     : 11
        Speed        : 2666 MHz
        Bus          : 1333 MHz
        Core         : 4
        Thread       : 4
        Socket       : 2
        Level2 Cache : 8192 KBytes
        Status       : Ok

Processor total  : 2

Memory installed : 16384 MBytes
ECC supported    : Yes
hpasmcli>

– Show current temperatures

hpasmcli> SHOW TEMP
Sensor   Location              Temp       Threshold
------   --------              ----       ---------
#1        I/O_ZONE             49C/120F   70C/158F
#2        AMBIENT              23C/73F    39C/102F
#3        CPU#1                30C/86F    127C/260F
#4        CPU#1                30C/86F    127C/260F
#5        POWER_SUPPLY_BAY     52C/125F   77C/170F
#6        CPU#2                30C/86F    127C/260F
#7        CPU#2                30C/86F    127C/260F

hpasmcli>

– Get the status of the server fans

hpasmcli> SHOW FAN
Fan  Location        Present Speed  of max  Redundant  Partner  Hot-pluggable
---  --------        ------- -----  ------  ---------  -------  -------------
#1   I/O_ZONE        Yes     NORMAL 45%     Yes        0        Yes          
#2   I/O_ZONE        Yes     NORMAL 45%     Yes        0        Yes          
#3   PROCESSOR_ZONE  Yes     NORMAL 41%     Yes        0        Yes          
#4   PROCESSOR_ZONE  Yes     NORMAL 36%     Yes        0        Yes          
#5   PROCESSOR_ZONE  Yes     NORMAL 36%     Yes        0        Yes          
#6   PROCESSOR_ZONE  Yes     NORMAL 36%     Yes        0        Yes           

hpasmcli>

– Show device boot order configuration

hpasmcli> SHOW BOOT
First boot device is: CDROM.
One time boot device is: Not set.
hpasmcli>

– Set USB key as first boot device

hpasmcli> SET BOOT FIRST USBKEY

– Show memory modules status

hpasmcli> SHOW DIMM
DIMM Configuration
------------------
Cartridge #:   0
Module #:      1
Present:       Yes
Form Factor:   fh
Memory Type:   14h
Size:          4096 MB
Speed:         667 MHz
Status:        Ok

Cartridge #:   0
Module #:      2
Present:       Yes
Form Factor:   fh
Memory Type:   14h
Size:          4096 MB
Speed:         667 MHz
Status:        Ok

Cartridge #:   0
Module #:      3
Present:       Yes
Form Factor:   fh
Memory Type:   14h
Size:          4096 MB
Speed:         667 MHz
Status:        Ok
...

In scripting mode hpasmcli can be used directly from the shell prompt with the -s option and the command between quotation marks; this of course allows you to process the output of the commands, like in the example below.

[root@rhel4 ~]# hpasmcli -s "show dimm" | egrep "Module|Status"
Module #:      1
Status:        Ok
Module #:      2
Status:        Ok
Module #:      3
Status:        Ok
Module #:      4
Status:        Ok
Module #:      5
Status:        Ok
Module #:      6
Status:        Ok
Module #:      7
Status:        Ok
Module #:      8
Status:        Ok
[root@rhel4 ~]#

To execute more than one command sequentially separate them with a semicolon.

[root@rhel4 ~]# hpasmcli -s "show fan; show temp"

Fan  Location        Present Speed  of max  Redundant  Partner  Hot-pluggable
---  --------        ------- -----  ------  ---------  -------  -------------
#1   I/O_ZONE        Yes     NORMAL 45%     Yes        0        Yes          
#2   I/O_ZONE        Yes     NORMAL 45%     Yes        0        Yes          
#3   PROCESSOR_ZONE  Yes     NORMAL 41%     Yes        0        Yes          
#4   PROCESSOR_ZONE  Yes     NORMAL 36%     Yes        0        Yes          
#5   PROCESSOR_ZONE  Yes     NORMAL 36%     Yes        0        Yes          
#6   PROCESSOR_ZONE  Yes     NORMAL 36%     Yes        0        Yes           

Sensor   Location              Temp       Threshold
------   --------              ----       ---------
#1        I/O_ZONE             47C/116F   70C/158F
#2        AMBIENT              21C/69F    39C/102F
#3        CPU#1                30C/86F    127C/260F
#4        CPU#1                30C/86F    127C/260F
#5        POWER_SUPPLY_BAY     50C/122F   77C/170F
#6        CPU#2                30C/86F    127C/260F
#7        CPU#2                30C/86F    127C/260F

[root@rhel4 ~]#
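To give an idea of how the scripting mode can be put to work, here is a minimal monitoring sketch; the FAILED and DEGRADED strings are assumptions about what hpasmcli reports for unhealthy components, so adjust the pattern to the output of your own servers.

#!/bin/bash
# Hypothetical health check built on hpasmcli scripting mode.
# Mails root if any fan or DIMM line contains an assumed failure keyword.
OUTPUT=$(hpasmcli -s "show fan; show dimm")
if echo "$OUTPUT" | grep -Eq "FAILED|DEGRADED"; then
    echo "$OUTPUT" | mail -s "Hardware alert on $(hostname)" root
fi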

If you want to play more with hpasmcli go to its man page and to the ProLiant Support Pack documentation.

Juanma.