Archives For HPVM

Long time since my last post about HP Integrity Virtual Machines. As you know I’ve been very busy with vSphere and Linux, but that doesn’t mean I have completely removed HP-UX from my life, on the contrary… HP-UX ROCKS! :-D

This is just a quick post on how to map a specific port of virtual switch to a specific VLAN. First retrieve the configuration of the vswitch.

[root@hpvmhost] ~ # hpvmnet -S devlan12
Name     Number State   Mode      NamePPA  MAC Address    IP Address
======== ====== ======= ========= ======== ============== ===============
devlan12      3 Up      Shared    lan4     0x000cfc0046b9 10.1.1.99    

[Port Configuration Details]
Port    Port         Port     Untagged Number of    Active VM
Number  State        Adaptor  VLANID   Reserved VMs
======= ============ ======== ======== ============ ============
1       Active       lan      none     1            oradev01
2       Active       lan      none     1            oradev02
3       Active       lan      none     1            oradev03
4       Active       lan      none     1            oradev04
5       Active       lan      none     1            nfstest01
6       Active       lan      none     1            linuxvm1
7       Active       lan      none     1            linuxvm2 

[root@hpvmhost] ~ #

We are going to map port 5 to VLAN 120 in order to isolate the traffic of that NFS server from the other virtual machines, which aren’t on the same VLAN. Again the command to use is hpvmnet.

[root@hpvmhost] ~ # hpvmnet -S devlan12 -u portid:5:vlanid:120

If you display the HPVM network configuration for the devlan12 vswitch again, the change will appear under the Untagged VLANID column.

[root@hpvmhost] ~ # hpvmnet -S devlan12
Name     Number State   Mode      NamePPA  MAC Address    IP Address
======== ====== ======= ========= ======== ============== ===============
devlan12      3 Up      Shared    lan4     0x000cfc0046b9 10.1.1.99    

[Port Configuration Details]
Port    Port         Port     Untagged Number of    Active VM
Number  State        Adaptor  VLANID   Reserved VMs
======= ============ ======== ======== ============ ============
1       Active       lan      none     1            oradev01
2       Active       lan      none     1            oradev02
3       Active       lan      none     1            oradev03
4       Active       lan      none     1            oradev04
5       Active       lan      120      1            nfstest01
6       Active       lan      none     1            linuxvm1
7       Active       lan      none     1            linuxvm2 

[root@hpvmhost] ~ #

Juanma.

This week HP has released OpenVMS 8.4 for Alpha and Integrity servers.

This new version of this robust and reliable operating system introduces a bunch of great new features. Some of the most important for me are:

  • Support for the new Integrity i2 Blades.
  • IP Cluster Interconnect for Alpha and Integrity platforms.
  • Integration with the VSE suite.
  • iCAP support on Integrity cell-based servers.
  • Full Operating System provisioning capabilities for up to eight servers.
  • System Management Homepage.
  • Enhanced management capabilities with more WBEM based providers on Blade systems.

But one above all has caught my attention:

  • OpenVMS supported as HPVM guest.

Yes, finally, OpenVMS has reached the stable and supported state as an HPVM guest and, you know what, it comes with full AVIO support for both storage and networking. This feature expands the virtualization portfolio on the Itanium platform even more and opens a new era for OpenVMS users.

I don’t know how you feel about this but I’m really excited :-)

Juanma.

The AVIO LAN drivers for Linux HPVM guests have been supported since HPVM 4.0, but as you will see, enabling them is a little more complicated than in HP-UX guests.

The first prerequisite is to have the HPVM management software installed; once you have this package in place, look for an RPM package called hpvm_lgssn in /opt/hpvm/guest-images/linux/DRIVERS.

root@hpvm-host:/opt/hpvm/guest-images/linux/DRIVERS # ll
total 584
 0 drwxr-xr-x 2 bin bin     96 Apr 13 18:47 ./
 0 drwxr-xr-x 5 bin bin     96 Apr 13 18:48 ../
 8 -r--r--r-- 1 bin bin   7020 Mar 27  2009 README
576 -rw-r--r-- 1 bin bin 587294 Mar 27  2009 hpvm_lgssn-4.1.0-3.ia64.rpm
root@hpvm-host:/opt/hpvm/guest-images/linux/DRIVERS #

Copy the package to the virtual machine with your favorite method and install it.

[sles10]:/var/tmp # rpm -ivh hpvm_lgssn-4.1.0-3.ia64.rpm
Preparing...                ########################################### [100%]
Installing...               ########################################### [100%]

[sles10]:/var/tmp #

Check the installation of the package.

[sles10]:~ # rpm -qa | grep hpvm
hpvm-4.1.0-1
hpvmprovider-4.1.0-1
hpvm_lgssn-4.1.0-3
[sles10]:~ #
[sles10]:~ # rpm -ql hpvm_lgssn
/opt/hpvm_drivers
/opt/hpvm_drivers/lgssn
/opt/hpvm_drivers/lgssn/LICENSE
/opt/hpvm_drivers/lgssn/Makefile
/opt/hpvm_drivers/lgssn/README
/opt/hpvm_drivers/lgssn/hpvm_guest.h
/opt/hpvm_drivers/lgssn/lgssn.h
/opt/hpvm_drivers/lgssn/lgssn_ethtool.c
/opt/hpvm_drivers/lgssn/lgssn_main.c
/opt/hpvm_drivers/lgssn/lgssn_recv.c
/opt/hpvm_drivers/lgssn/lgssn_recv.h
/opt/hpvm_drivers/lgssn/lgssn_send.c
/opt/hpvm_drivers/lgssn/lgssn_send.h
/opt/hpvm_drivers/lgssn/lgssn_trace.h
/opt/hpvm_drivers/lgssn/rh4
/opt/hpvm_drivers/lgssn/rh4/u5
/opt/hpvm_drivers/lgssn/rh4/u5/lgssn.ko
/opt/hpvm_drivers/lgssn/rh4/u6
/opt/hpvm_drivers/lgssn/rh4/u6/lgssn.ko
/opt/hpvm_drivers/lgssn/sles10
/opt/hpvm_drivers/lgssn/sles10/SP1
/opt/hpvm_drivers/lgssn/sles10/SP1/lgssn.ko
/opt/hpvm_drivers/lgssn/sles10/SP2
/opt/hpvm_drivers/lgssn/sles10/SP2/lgssn.ko
[sles10]:~ #

There are two ways to install the driver: compile it, or use one of the pre-compiled modules. The pre-compiled modules are for the following distributions and kernels:

  • Red Hat 4 release 5 (2.6.9-55.EL)
  • Red Hat 4 release 6 (2.6.9-67.EL)
  • SLES10 SP1 (2.6.16.46-0.12)
  • SLES10 SP2 (2.6.16.60-0.21)

For other kernels you must compile the driver. The Linux box of the example was running a supported kernel and distro (SLES10 SP2), but instead of using the pre-compiled module I decided to go through the whole process.
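The kernel-to-module mapping above can be sketched as a small shell helper. This is just an illustration built from the package listing, not an HPVM tool; the kernel version is hardcoded as an example where you would normally use the output of uname -r.

```shell
# Map a kernel version to the matching pre-compiled lgssn module.
# Mapping taken from the supported list above; an empty result means
# there is no pre-built module and the driver must be compiled.
kver=2.6.16.60-0.21-default   # example: the SLES10 SP2 kernel

case "$kver" in
    2.6.9-55.EL*)    dir=rh4/u5 ;;
    2.6.9-67.EL*)    dir=rh4/u6 ;;
    2.6.16.46-0.12*) dir=sles10/SP1 ;;
    2.6.16.60-0.21*) dir=sles10/SP2 ;;
    *)               dir= ;;
esac

module=${dir:+/opt/hpvm_drivers/lgssn/$dir/lgssn.ko}
echo "${module:-no pre-built module, compile from source}"
```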

Go to /opt/hpvm_drivers/lgssn, where you will find the sources of the driver. To compile and install it, execute a simple make install.

[sles10]:/opt/hpvm_drivers/lgssn # make install
make -C /lib/modules/2.6.16.60-0.21-default/build SUBDIRS=/opt/hpvm_drivers/lgssn modules
make[1]: Entering directory `/usr/src/linux-2.6.16.60-0.21-obj/ia64/default'
make -C ../../../linux-2.6.16.60-0.21 O=../linux-2.6.16.60-0.21-obj/ia64/default modules
 CC [M]  /opt/hpvm_drivers/lgssn/lgssn_main.o
 CC [M]  /opt/hpvm_drivers/lgssn/lgssn_send.o
 CC [M]  /opt/hpvm_drivers/lgssn/lgssn_recv.o
 CC [M]  /opt/hpvm_drivers/lgssn/lgssn_ethtool.o
 LD [M]  /opt/hpvm_drivers/lgssn/lgssn.o
 Building modules, stage 2.
 MODPOST
 CC      /opt/hpvm_drivers/lgssn/lgssn.mod.o
 LD [M]  /opt/hpvm_drivers/lgssn/lgssn.ko
make[1]: Leaving directory `/usr/src/linux-2.6.16.60-0.21-obj/ia64/default'
find /lib/modules/2.6.16.60-0.21-default -name lgssn.ko -exec rm -f {} \; || true
find /lib/modules/2.6.16.60-0.21-default -name lgssn.ko.gz -exec rm -f {} \; || true
install -D -m 644 lgssn.ko /lib/modules/2.6.16.60-0.21-default/kernel/drivers/net/lgssn/lgssn.ko
/sbin/depmod -a || true
[sles10]:/opt/hpvm_drivers/lgssn #

This will copy the driver to /lib/modules/<KERNEL_VERSION>/kernel/drivers/net/lgssn/.

To ensure that the new driver is loaded during the startup of the operating system, first add the following line to /etc/modprobe.conf, one line for each interface configured for AVIO LAN.

alias eth1 lgssn

The HPVM 4.2 manual says you have to issue the command depmod -a in order to inform the kernel about the change, but if you look at the log above you will see that the last command executed by make install is a depmod -a. Look into the modules.dep file to check that the corresponding line for the lgssn driver has been added.

[sles10]:~ # grep lgssn /lib/modules/2.6.16.60-0.21-default/modules.dep
/lib/modules/2.6.16.60-0.21-default/kernel/drivers/net/lgssn/lgssn.ko:
[sles10]:~ #

At this point, if you have previously reconfigured the virtual machine, load the module and restart the network services.

[sles10]:/opt/hpvm_drivers/lgssn # insmod /lib/modules/2.6.16.60-0.21-default/kernel/drivers/net/lgssn/lgssn.ko
[sles10]:/opt/hpvm_drivers/lgssn # lsmod |grep lgssn
lgssn                 576136  0
[sles10]:/opt/hpvm_drivers/lgssn #
[sles10]:/opt/hpvm_drivers/lgssn # service network restart
Shutting down network interfaces:
    eth0      device: Intel Corporation 82540EM Gigabit Ethernet Controller
    eth0      configuration: eth-id-2a:87:14:5c:f9:ed
    eth0                                                              done
    eth1      device: Hewlett-Packard Company Unknown device 1338
    eth1      configuration: eth-id-66:f3:f8:4e:37:d5
    eth1                                                              done
    eth2      device: Intel Corporation 82540EM Gigabit Ethernet Controller
    eth2      configuration: eth-id-0a:dc:fd:cb:2c:62
    eth2                                                              done
Shutting down service network  .  .  .  .  .  .  .  .  .  .  .  .  .  done
Hint: you may set mandatory devices in /etc/sysconfig/network/config
Setting up network interfaces:
    lo        
    lo       
              IP address: 127.0.0.1/8   
              IP address: 127.0.0.2/8   
Checking for network time protocol daemon (NTPD):                     running
    lo                                                                done
    eth0      device: Intel Corporation 82540EM Gigabit Ethernet Controller
    eth0      configuration: eth-id-2a:87:14:5c:f9:ed
Warning: Could not set up default route via interface
 Command ip route replace to default via 10.31.12.1 returned:
 . RTNETLINK answers: Network is unreachable
 Configuration line: default 10.31.12.1 - -
 This needs NOT to be AN ERROR if you set up multiple interfaces.
 See man 5 routes how to avoid this warning.

Checking for network time protocol daemon (NTPD):                     running
    eth0                                                              done
    eth1      device: Hewlett-Packard Company Unknown device 1338
    eth1      configuration: eth-id-66:f3:f8:4e:37:d5
    eth1      IP address: 10.31.4.16/24   
Warning: Could not set up default route via interface
 Command ip route replace to default via 10.31.12.1 returned:
 . RTNETLINK answers: Network is unreachable
 Configuration line: default 10.31.12.1 - -
 This needs NOT to be AN ERROR if you set up multiple interfaces.
 See man 5 routes how to avoid this warning.

Checking for network time protocol daemon (NTPD):                     running
    eth1                                                              done
    eth2      device: Intel Corporation 82540EM Gigabit Ethernet Controller
    eth2      configuration: eth-id-0a:dc:fd:cb:2c:62
    eth2      IP address: 10.31.12.11/24   
Checking for network time protocol daemon (NTPD):                     running
    eth2                                                              done
Setting up service network  .  .  .  .  .  .  .  .  .  .  .  .  .  .  done
[sles10]:/opt/hpvm_drivers/lgssn #

If you have not yet configured the network interfaces of the virtual machine, shut down the virtual machine and, from the host, modify each virtual NIC of the guest. Take into account that AVIO LAN drivers are not supported with localnet virtual switches.

root@hpvm-host:~ # hpvmmodify -P sles10 -m network:avio_lan:0,2:vswitch:vlan2:portid:4
root@hpvm-host:~ # hpvmstatus -P sles10 -d
[Virtual Machine Devices]
...
[Network Interface Details]
network:lan:0,0,0x2A87145CF9ED:vswitch:localnet:portid:4
network:avio_lan:0,1,0x66F3F84E37D5:vswitch:vlan1:portid:4
network:avio_lan:0,2,0x0ADCFDCB2C62:vswitch:vlan2:portid:4
...
root@hpvm-host:~ #

Finally start the virtual machine and check that everything went well and the drivers have been loaded.

Juanma

Like other virtualization software, HP Integrity Virtual Machines comes with several memory management capabilities. In this new post about HPVM I will try to explain what these capabilities are, their purpose, and how to configure and use them.

  • Dynamic memory

Dynamic memory is an HPVM feature that allows you to resize the amount of memory of a guest without rebooting it. The HPVM manual mentions an example in which dynamic memory is applicable.

…this feature allows a guest that is a Serviceguard node to be used as a standby server for multiple Serviceguard packages. When a package fails over to the guest, the guest memory can be changed to suit the requirements of the package before, during, and after the failover process.

Dynamic memory is only available on HP-UX guests with the guest management software installed.

Let’s see how to enable and configure dynamic memory.

The first thing to do is to enable it.

root@hinata:~ # hpvmmodify -P batman -x ram_dyn_type=driver

There are three possible values for the ram_dyn_type option:

  1. none: Self-explanatory.
  2. any: At the next boot the guest will check whether dynamic memory is enabled and the driver is loaded. If the dynamic memory driver is in place, the option will change its value to driver.
  3. driver: When ram_dyn_type is set to driver, every dynamic memory control and range is functional.

Specify the minimum amount of RAM to be allocated to the guest; the default unit is MB, but GB can also be used.

root@hinata:~ # hpvmmodify -P batman -x ram_dyn_min=1024

Next set the maximum memory.

root@hinata:~ # hpvmmodify -P batman -x ram_dyn_max=4G

Set the amount of memory to be allocated when the guest starts; this value must be greater than the minimum one.

root@hinata:~ # hpvmmodify -P batman -x ram_dyn_target_start=2048

Check the status of the guest to see the newly configured options.

root@hinata:~ # hpvmstatus -r -P batman
[Virtual Machine entitlements]
 Percent       Cumulative
#VCPUs Entitlement Maximum   Usage            Usage
====== =========== ======= ======= ================
 6       10.0%  100.0%    0.0%                0 

[Virtual CPU details]
vCPU Cumulative       Guest   Host    Cycles   Sampling
ID   Usage            percent percent achieved Interval
==== ================ ======= ======= ======== ===========
 0                0    0.0%    0.0%     0MHz   0 seconds
 1                0    0.0%    0.0%     0MHz   0 seconds
 2                0    0.0%    0.0%     0MHz   0 seconds
 3                0    0.0%    0.0%     0MHz   0 seconds
 4                0    0.0%    0.0%     0MHz   0 seconds
 5                0    0.0%    0.0%     0MHz   0 seconds 

[Virtual Machine Memory Entitlement]
DynMem  Memory   DynMem  DynMem DynMem  Comfort Total    Free   Avail    Mem    AMR     AMR
 Min   Entitle   Max    Target Current   Min   Memory  Memory  Memory  Press  Chunk   State
======= ======= ======= ======= ======= ======= ======= ======= ======= ===== ======= ========
 1024MB     0MB     4GB  4096MB  4096MB     0MB     4GB     0MB     0MB   0       0MB DISABLED

Once dynamic memory is properly configured, the memory of a guest can be manually resized from the VM host to any value between the ram_dyn_min and ram_dyn_max parameters, in increments of the default chunk size, which is 64MB.

root@hinata:~ # hpvmmodify -P batman -x ram_target=3136
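The 3136 above is not arbitrary: it is a multiple of the 64MB chunk size that falls between ram_dyn_min and ram_dyn_max. Rounding a requested size up to a chunk boundary is plain arithmetic; the requested value below is just an example.

```shell
# Round a requested size (in MB) up to the next 64MB chunk boundary.
chunk=64
request=3100                                        # example request in MB
target=$(( ( (request + chunk - 1) / chunk ) * chunk ))
echo "$target"                                      # a valid ram_target value
```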

There is one final option, named dynamic_memory_control. With this option the system administrator can allow the root user of the guest to change dynamic memory options, from the guest side, while it is running. The dynamic_memory_control option is incompatible with automatic memory reallocation.

Just to show a small example from the guest side, this is how to view the dynamic memory configuration:

root@batman:~# hpvmmgmt -V -l ram
[Dynamic Memory Information]
=======================================
Type                    : driver
Minimum memory          : 1024 MB
Target memory           : 4090 MB
Maximum memory          : 4096 MB
Current memory          : 4090 MB
Comfortable minimum     : 1850 MB
Boot memory             : 4090 MB
Free memory             : 2210 MB
Available memory        : 505 MB
Memory pressure         : 0
Memory chunksize        : 65536 KB
Driver Mode(s)          : STARTED ENABLED 

root@batman:~#

  • Automatic memory reallocation

The new HPVM 4.2 version from March expands dynamic memory with an interesting feature called automatic memory reallocation. This new feature makes it possible to automatically change the amount of memory used by a guest based on load conditions.

Automatic memory reallocation is only supported on HP-UX guests with dynamic memory enabled and with the guest management software installed.

Automatic memory reallocation can be configured in two ways:

  1. System-wide values.
  2. On a per-VM basis.

Neither way excludes the other: you can set the system-wide parameters for every VM and later customize some of the virtual machines, adjusting their parameters to any additional requirement.

Automatic memory reallocation is enabled by default on the VM host. To verify this, open the file /etc/rc.config.d/hpvmconf and check that HPVMAMRENABLE=0 is not set. The hpvmamrd process, the automatic memory reallocation daemon, can also be checked with a simple ps.

In the same file two system-wide tunables can be configured.

  1. HPVMCHUNKSIZE
  2. HPVMAMRWAITTIME

The first parameter determines the number of megabytes by which a guest will attempt to grow when there is memory pressure. If the parameter is not set, the default value is 256MB. The best practice is to make this parameter a multiple of the dynamic memory chunk size.

The second one sets the maximum number of seconds that any VM startup process will wait for memory before reporting a failure due to insufficient memory. The default value is 60 seconds and the maximum configurable value is 600 seconds.
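Put together, the relevant fragment of /etc/rc.config.d/hpvmconf could look like the sketch below. The values are purely illustrative, not settings you must change; leaving a tunable unset keeps its default.

```shell
# /etc/rc.config.d/hpvmconf (fragment, example values)
HPVMAMRENABLE=1       # unset or 1 = automatic memory reallocation enabled
HPVMCHUNKSIZE=256     # MB a guest grows by under memory pressure (default 256)
HPVMAMRWAITTIME=120   # seconds a VM start waits for memory (default 60, max 600)
```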

With the above parameters set to their defaults or customized, the next step is to enable automatic memory reallocation in the virtual machines. The AMR feature is disabled by default on the VMs. To enable it, use the amr_enable option.

root@hinata:~ # hpvmmodify -P batman -x amr_enable=1

Now set the memory entitlement for the virtual machine. The entitlement is the minimum amount of RAM guaranteed to the virtual machine.

root@hinata:~ # hpvmmodify -P batman -x ram_dyn_entitlement=1500

Take into account that if AMR is not enabled the entitlement can still be set, but it will have no effect; and any VM without the entitlement parameter set will be ignored by automatic memory reallocation.

The entitlement value can be modified online by the system administrator at any time, but some rules apply:

  1. If there is not enough memory to grow the VM memory to the specified entitlement, the operation will fail.
  2. The memory of a virtual machine cannot be grown beyond its maximum memory.
  3. The virtual machine memory always has to stay between the ram_dyn_min and ram_dyn_max parameters, no more, no less.
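The rules above can be expressed as a small pre-flight check. This is plain shell with example values in MB, only to make the logic explicit; HPVM itself performs these checks when you run hpvmmodify.

```shell
# Pre-flight check for a new memory value, mirroring the three rules above.
ram_dyn_min=1024     # example guest settings, in MB
ram_dyn_max=4096
host_free=8192       # example memory available on the VM host
new_value=1500

if [ "$new_value" -gt "$host_free" ]; then
    result="fail: not enough memory on the host"       # rule 1
elif [ "$new_value" -gt "$ram_dyn_max" ]; then
    result="fail: beyond the guest maximum memory"     # rule 2
elif [ "$new_value" -lt "$ram_dyn_min" ]; then
    result="fail: below ram_dyn_min"                   # rule 3
else
    result="ok"
fi
echo "$result"
```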

When the memory of a guest is resized, the HPVMCHUNKSIZE value is used by default, but a per-VM chunk size can also be set. To do so, use the amr_chunk_size parameter.

root@hinata:~ # hpvmmodify -P batman -x amr_chunk_size=512

As with the system-wide parameter, the recommendation is to set the chunk size to a multiple of the dynamic memory chunk size.

Finally, to display the configuration and the current use of the virtual machines’ resource entitlements, use hpvmstatus -r.

root@hinata:~ # hpvmstatus -r
[Virtual Machine Resource Entitlement]
[Virtual CPU entitlement]
 Percent       Cumulative
Virtual Machine Name VM #  #VCPUs Entitlement Maximum   Usage            Usage
==================== ===== ====== =========== ======= ======= ================
rh-www                   1      4       50.0%  100.0%    0.0%                0
sql-dev                  2      4       50.0%  100.0%    0.3%         21611866
rhino                    3      4       50.0%  100.0%    0.0%                0
batman                   4      8       20.0%  100.0%    0.8%          1318996
robin                    5      8       20.0%  100.0%    0.8%            97993
falcon                   6      2       10.0%  100.0%    0.0%                0 

[Virtual Machine Memory Entitlement]
 DynMem  Memory   DynMem  DynMem DynMem  Comfort Total    Free   Avail    Mem    AMR     AMR
Virtual Machine Name  VM #   Min   Entitle   Max    Target Current   Min   Memory  Memory  Memory  Press  Chunk   State
==================== ===== ======= ======= ======= ======= ======= ======= ======= ======= ======= ===== ======= ========
rh-www                   1   512MB     0MB     8GB  8192MB  8192MB     0MB     8GB     0MB     0MB   0       0MB DISABLED
sql-dev                  2   512MB     0MB     8GB  8192MB  8192MB     0MB     8GB     0MB     0MB   0       0MB DISABLED
rhino                    3  1024MB  1500MB     6GB  2048MB  6144MB     0MB     6GB     0MB     0MB   0     256MB  ENABLED
batman                   4  1024MB  1500MB     4GB  4090MB  4090MB  1850MB     4GB  2214MB   500MB   0     256MB  ENABLED
robin                    5  1024MB  1500MB     4GB  4090MB  4090MB  1914MB     4GB  2165MB   531MB   0     256MB  ENABLED
falcon                   6   512MB     0MB     6GB  6144MB  6144MB     0MB     6GB     0MB     0MB   0       0MB DISABLED

I hope this helps to clarify how HPVM manages the memory of the virtual machines and how to customize its configuration. As always, any comment is welcome :-)

Juanma.

Last month HP released the latest update of HP-UX 11iv3: Update 6, also known as the March 2010 Update, or 11.36… I decided some time ago not to try to understand why we have such a silly naming scheme for HP-UX.

Anyway, putting aside the philosophical rambling, HP-UX 11iv3 Update 6 is here and it comes full of new features. The following ones stand out, at least for me.

  • Improved system boot time thanks to RCEnhancement. Oliver wrote about it several weeks ago; until this release it was only available in the Software Depot, and now it is included in the install media.
  • A new DRD version capable of synchronizing the clone with the running system.
  • LVM updated to version 2.2. With this version we get logical volume snapshots (can’t wait to try this :-D), logical volume migration within the same VG through the new lvmove command, and boot support for LVM 2.2 volume groups. The last one is very cool because until this release vg00 was stuck on LVM 1.0. Ignite-UX has also been updated to take advantage of this feature, and we’ll be asked to choose between LVM 1.0 and LVM 2.2 bootable volume groups.

This release comes bundled with the new HPVM version, 4.2, which also adds a bunch of new features. To name a few:

  • Automatic memory reallocation.
  • The ability to suspend and resume a guest.
  • Support for CVM/CFS backing stores for HPVM Serviceguard cluster packages.
  • Encryption of the data during online VM migration.
  • AVIO support for Veritas Volume Manager based backing stores.

In short, there are a lot of interesting new features and a lot of issues have also been fixed. As I said at the beginning, we still have to live with an odd naming scheme, but in the end I’m quite happy with this new release, at least in theory, because I haven’t had the opportunity to try it yet. I will very soon though, since I’m planning to deploy HPVM 4.2 in the near future. In fact my HPVM 3.5 to 4.1 migration project has become a 3.5 to 4.2 migration, how cool is that, eh! ;-)

Juanma.

As I have said many times, my current HPVM version is 3.5, so it doesn’t support guest online migration. But lacking the online migration feature doesn’t mean that we cannot migrate Integrity VMs between hosts.

Currently there are two methods to perform migrations:

  • HPVM commands.
  • MC/ServiceGuard.

In this post I will only cover the HPVM way. I will leave HPVM ServiceGuard clusters for a future post, but as many of you already know, moving a guest between cluster nodes is like moving any other ServiceGuard package, since the guests are managed by SG as packages.

PREREQUISITES:

There is a certain list of prerequisites the guest has to meet in order to be successfully migrated between hosts.

  • Off-line state:

This is pretty obvious, of course: the guest must be off.

  • SSH configuration:

On both hosts, root must have SSH access to the other host through public key authentication.

root@ivmcl01:~ # hpvmmigrate -P hpvm1 -h ivmcl02
hpvmmigrate: SSH execution error. Make sure ssh is setup right on both source and target systems.

  • Shared devices:

If the guest has a shared device, like the CD/DVD drive of the host, the device has to be removed from the guest configuration.

root@ivmcl01:~ # hpvmmigrate -P hpvm1 -h ivmcl02
hpvmmigrate: Device /dev/rdsk/c1t4d0 is shared.  Guest with shared storage devices cannot be migrated.

  • Storage devices:

There are two considerations about storage devices.

The storage devices of the guest must be physical disks. Migration of guests with lvols as storage devices is only supported starting with the HPVM 4.1 release.

root@ivmcl01:~ # hpvmmigrate -P hpvm1 -h ivmcl02
hpvmmigrate: Target VM Host error - Device does not exist.
hpvmmigrate: See HPVM command log file on target VM Host for more detail.

The WWID of the device must be the same in both HPVM hosts.

root@ivmcl01:~ # hpvmmigrate -P hpvm1 -h ivmcl02
hpvmmigrate: Device WWID does not match.
hpvmmigrate: See HPVM command log file on target VM Host for more detail.

  • Network configuration:

The virtual switch the guest is connected to must be configured on the same network card on both hosts. For example, if vswitch vlan2 is using lan0 on host1, it must be using lan0 on host2 or the migration will fail.

root@ivmcl01:~ # hpvmmigrate -P hpvm1 -h ivmcl02
hpvmmigrate: Target VM Host error - vswitch validation failed.
hpvmmigrate: See HPVM command log file on target VM Host for more detail.

PROCEDURE:

If all the prerequisites explained above are met by our guest, we can proceed with the migration. The command to use is hpvmmigrate; the name or VM number of the guest and the hostname of the destination server have to be provided. Some of the resources of the virtual machine, such as the number of CPUs, the amount of RAM or the machine label, can also be modified.

root@ivmcl01:~ # hpvmmigrate -P hpvm1 -h ivmcl02
hpvmmigrate: Guest migrated successfully.
root@ivmcl01:~ #

Check the existence of the migrated guest in the destination host.

root@ivmcl02:~ # hpvmstatus
[Virtual Machines]
Virtual Machine Name VM #  OS Type State     #VCPUs #Devs #Nets Memory  Runsysid
==================== ===== ======= ========= ====== ===== ===== ======= ========
oratest01                1 HPUX    On (OS)        4    10     3   16 GB        0
oratest02                2 HPUX    On (OS)        4     8     3   16 GB        0
sapvm01                  3 HPUX    Off            3     8     3    8 GB        0
sapvm02                  4 HPUX    Off            3     7     3    8 GB        0
sles01                   5 LINUX   On (OS)        1     4     3    4 GB        0
rhel01                   6 LINUX   Off            1     4     3    4 GB        0
hp-vxvm                  7 HPUX    On (OS)        2    17     3    6 GB        0
ws2003                   8 WINDOWS Off            4     4     3   12 GB        0
hpvm1                   10 HPUX    Off            1     1     1    3 GB        0
root@ivmcl02:~ #

As you can see, once all the prerequisites have been met the migration is quite easy.

CONCLUSION:

Even with the disadvantage of lacking online migration, the guest migration feature can be useful to balance the load between HPVM hosts.

Juanma.

Welcome again to “HPVM World!” my dear readers :-D

I have to say that even with the initial disappointment about hpvmclone, cloning IVMs was a lot of fun, but I believe the post-cloning tasks weren’t very clear, at least for me, so I decided to write this follow-up post to clarify that part.

Let’s assume we already have a cloned virtual machine. In this particular case I used dd to clone the virtual disk and later created the IVM and added the storage device and the other resources, but this also applies to the other methods with minor changes.

[root@hpvmhost] ~ # hpvmstatus -P vmnode2 -d
[Virtual Machine Devices]

[Storage Interface Details]
disk:scsi:0,0,0:lv:/dev/vg_vmtest/rvmnode2disk
dvd:scsi:0,0,1:disk:/dev/rdsk/c1t4d0

[Network Interface Details]
network:lan:0,1,0xB20EBA14E76C:vswitch:localnet
network:lan:0,2,0x3E9492C9F615:vswitch:vlan02

[Misc Interface Details]
serial:com1::tty:console
[root@hpvmhost] ~ #

We start the virtual machine and access its console. Now we are going to follow some of the final steps of the third method described in my previous post. From the main EFI Boot Manager, select the Boot option maintenance menu option.

EFI Boot Manager ver 1.10 [14.62] [Build: Mon Oct  1 09:27:26 2007]

Please select a boot option

     EFI Shell [Built-in]                                           
     Boot option maintenance menu                                    

     Use ^ and v to change option(s). Use Enter to select an option

Select Boot from a file and then select the first partition:

EFI Boot Maintenance Manager ver 1.10 [14.62]

Boot From a File.  Select a Volume

    IA64_EFI [Acpi(PNP0A03,0)/Pci(0|0)/Scsi(Pun0,Lun0)/HD(Part1,Sig
    IA64_EFI [Acpi(PNP0A03,0)/Pci(0|0)/Scsi(Pun0,Lun0)/HD(Part3,Sig7
    Removable Media Boot [Acpi(PNP0A03,0)/Pci(0|0)/Scsi(Pun1,Lun0)]
    Load File [Acpi(PNP0A03,0)/Pci(1|0)/Mac(B20EBA14E76C)]          
    Load File [Acpi(PNP0A03,0)/Pci(2|0)/Mac(3E9492C9F615)]        
    Load File [EFI Shell [Built-in]]                                
    Legacy Boot
    Exit

Enter the EFI directory, then the HPUX directory, and finally select hpux.efi. Like I said before, this part is very similar to the final steps of Method 3.

EFI Boot Maintenance Manager ver 1.10 [14.62]

Select file or change to new directory:

       03/09/10  03:45p <DIR>       4,096 .                         
       03/09/10  03:45p <DIR>       4,096 ..                        
       03/10/10  04:21p           657,609 hpux.efi                  
       03/09/10  03:45p            24,576 nbp.efi                   
       Exit

After this the machine will boot.

   Filename: \EFI\HPUX\hpux.efi
 DevicePath: [Acpi(PNP0A03,0)/Pci(0|0)/Scsi(Pun0,Lun0)/HD(Part1,Sig71252358-2BCD-11DF-8000-D6217B60E588)/\EFI\HPUX\hpux.efi]
   IA-64 EFI Application 03/10/10  04:21p     657,609 bytes

(C) Copyright 1999-2008 Hewlett-Packard Development Company, L.P.
All rights reserved

HP-UX Boot Loader for IPF  --  Revision 2.036

Press Any Key to interrupt Autoboot
\EFI\HPUX\AUTO ==> boot vmunix
Seconds left till autoboot -   0
AUTOBOOTING...> System Memory = 2042 MB
loading section 0
..................................................................................... (complete)
loading section 1
............... (complete)
loading symbol table
loading System Directory (boot.sys) to MFS
.....
loading MFSFILES directory (bootfs) to MFS
..................
Launching /stand/vmunix
SIZE: Text:43425K + Data:7551K + BSS:22118K = Total:73096K
...

When the VM is up, log in as root. The first tasks, as always, are to change the hostname and network configuration to avoid conflicts.

Next we are going to recreate lvmtab, since the current one contains the LVM configuration of the source virtual machine. Performing a simple vgdisplay will show it.

root@vmnode2:/# vgdisplay
vgdisplay: Warning: couldn't query physical volume "/dev/disk/disk15_p2":
The specified path does not correspond to physical volume attached to
this volume group
vgdisplay: Warning: couldn't query all of the physical volumes.
--- Volume groups ---
VG Name                     /dev/vg00
VG Write Access             read/write     
VG Status                   available                 
Max LV                      255    
Cur LV                      8      
Open LV                     8      
Max PV                      16     
Cur PV                      1      
Act PV                      0      
Max PE per PV               3085         
VGDA                        0   
PE Size (Mbytes)            8               
Total PE                    0       
Alloc PE                    0       
Free PE                     0       
Total PVG                   0        
Total Spare PVs             0              
Total Spare PVs in use      0

root@vmnode2:/#

To correct this, remove the /etc/lvmtab file and run vgscan.

root@vmnode2:/# rm /etc/lvmtab
/etc/lvmtab: ? (y/n) y
root@vmnode2:/var/tmp/software# vgscan
Creating "/etc/lvmtab".
vgscan: Couldn't access the list of physical volumes for volume group "/dev/vg00".
Physical Volume "/dev/dsk/c1t1d0" contains no LVM information

*** LVMTAB has been created successfully.
*** Do the following to resync the information on the disk.
*** #1.  vgchange -a y
*** #2.  lvlnboot -R
root@vmnode2:/#

Follow the steps recommended in the vgscan output. The first step only applies if there are other VGs in the system; if vg00 is the only one, it is already active and that step is not necessary.

Running lvlnboot -R is mandatory, since we need to recover and update the links to the lvols in the Boot Data Reserved Area of the boot disk.

root@vmnode2:/# lvlnboot -R
Volume Group configuration for /dev/vg00 has been saved in /etc/lvmconf/vg00.conf
root@vmnode2:/#

Now that the LVM configuration is fixed, try the vgdisplay command again.

root@vmnode2:/# vgdisplay
--- Volume groups ---
VG Name                     /dev/vg00
VG Write Access             read/write
VG Status                   available
Max LV                      255
Cur LV                      8
Open LV                     8
Max PV                      16
Cur PV                      1
Act PV                      1
Max PE per PV               3085
VGDA                        2
PE Size (Mbytes)            8
Total PE                    3075
Alloc PE                    2866
Free PE                     209
Total PVG                   0
Total Spare PVs             0
Total Spare PVs in use      0

root@vmnode2:/#

With the LVM configuration fixed, the next step is to set the system's boot disk.

root@vmnode2:/# setboot -p /dev/disk/disk21_p2
Primary boot path set to 0/0/0/0.0x0.0x0 (/dev/disk/disk21_p2)
root@vmnode2:/#
root@vmnode2:/# setboot
Primary bootpath : 0/0/0/0.0x0.0x0 (/dev/rdisk/disk21)
HA Alternate bootpath :
Alternate bootpath :

Autoboot is ON (enabled)
root@vmnode2:/#

Finally, reboot the virtual machine. If we did everything correctly, a new boot option will be available in the EFI Boot Manager.

EFI Boot Manager ver 1.10 [14.62] [Build: Mon Oct  1 09:27:26 2007]

Please select a boot option

    HP-UX Primary Boot: 0/0/0/0.0x0.0x0                             
    EFI Shell [Built-in]                                            
    Boot option maintenance menu                                    

    Use ^ and v to change option(s). Use Enter to select an option

Let the system boot by itself through the new default option (HP-UX Primary Boot) and we are done.

Any feedback would be welcome.

Juanma.

Cloning HPVM guests

March 9, 2010 — 7 Comments

Our next step in the wonderful HPVM World is… cloning virtual machines.

If you have used VMware Virtual Infrastructure, you are probably used to the easy “right-click and clone VM” procedure. Sadly, HPVM cloning has nothing in common with it. In fact, the process to clone a virtual machine can be a little creepy.

Of course there is an hpvmclone command, and anyone might think, as I did the first time I had to clone an IVM, that you only have to provide the source VM and the new VM name and voilà, everything will be done:

[root@hpvmhost] ~ # hpvmclone -P ivm1 -N ivm_clone01
[root@hpvmhost] ~ # hpvmstatus
[Virtual Machines]
Virtual Machine Name VM #  OS Type State     #VCPUs #Devs #Nets Memory  Runsysid
==================== ===== ======= ========= ====== ===== ===== ======= ========
ivm1                     9 HPUX    Off            3     3     2    2 GB        0
ivm2                    10 HPUX    Off            1     7     1    3 GB        0
ivm_clone01             11 HPUX    Off            3     3     2    2 GB        0
[root@hpvmhost] ~ #

The new virtual machine can be seen and everything seems to be fine, but when you ask for the configuration details of the new IVM a nasty surprise appears: the storage devices have not been cloned; instead, it looks like hpvmclone simply mapped the devices of the source IVM to the new IVM:

[root@hpvmhost] ~ # hpvmstatus -P ivm_clone01
[Virtual Machine Details]
Virtual Machine Name VM #  OS Type State
==================== ===== ======= ========
ivm_clone01             11 HPUX    Off       

[Authorized Administrators]
Oper Groups: 
Admin Groups:
Oper Users:  
Admin Users:  

[Virtual CPU Details]
#vCPUs Entitlement Maximum
====== =========== =======
     3       20.0%  100.0% 

[Memory Details]
Total    Reserved
Memory   Memory 
=======  ========
   2 GB     64 MB 

[Storage Interface Details]
Guest                                 Physical
Device  Adaptor    Bus Dev Ftn Tgt Lun Storage   Device
======= ========== === === === === === ========= =========================
disk    scsi         0   2   0   0   0 lv        /dev/vg_vmtest/rivm1d1
disk    scsi         0   2   0   1   0 lv        /dev/vg_vmtest/rivm1d2
dvd     scsi         0   2   0   2   0 disk      /dev/rdsk/c1t4d0

[Network Interface Details]
Interface Adaptor    Name/Num   PortNum Bus Dev Ftn Mac Address
========= ========== ========== ======= === === === =================
vswitch   lan        vlan02     11        0   0   0 f6-fb-bf-41-78-63
vswitch   lan        localnet   10        0   1   0 2a-69-35-d5-c1-5f

[Misc Interface Details]
Guest                                 Physical
Device  Adaptor    Bus Dev Ftn Tgt Lun Storage   Device
======= ========== === === === === === ========= =========================
serial  com1                           tty       console
[root@hpvmhost] ~ #

With this configuration the virtual machines can’t be booted at the same time. So, what is the purpose of hpvmclone if the newly cloned node can’t be used simultaneously with the original? Honestly, this makes no sense, at least to me.

At that point, and since I really wanted to use both machines in a test cluster, I decided to do a little research through Google and ITRC.

After re-reading the official documentation, a few dozen posts about HPVM cloning and HPVM in general, and a few very nice posts on Daniel Parkes’ HP-UX Tips & Tricks site, I finally came up with three different methods to successfully and “physically” clone an Integrity Virtual Machine.

METHOD 1: Using dd.

  • Create the LVM structure for the new virtual machine on the host.
  • Use dd to copy every storage device from the source virtual machine.
[root@hpvmhost] ~ # dd if=/dev/vg_vmtest/rivm1d1 of=/dev/vg_vmtest/rclone01_d1 bs=1024k
12000+0 records in
12000+0 records out
[root@hpvmhost] ~ #
[root@hpvmhost] ~ # dd if=/dev/vg_vmtest/rivm1d2 of=/dev/vg_vmtest/rclone01_d2 bs=1024k
12000+0 records in
12000+0 records out
[root@hpvmhost] ~ #
  • Using hpvmclone, create the new machine; in the same command add the new storage devices and delete the old ones from its configuration. Any other resource can also be modified at this point, just as with hpvmcreate.
[root@hpvmhost] ~ # hpvmclone -P ivm1 -N clone01 -d disk:scsi:0,2,0:lv:/dev/vg_vmtest/rivm1d1 \
> -d disk:scsi:0,2,1:lv:/dev/vg_vmtest/rivm1d2 \
> -a disk:scsi::lv:/dev/vg_vmtest/rclone01_d1 \
> -a disk:scsi::lv:/dev/vg_vmtest/rclone01_d2 \
> -l "Clone-cluster 01" \
> -B manual
[root@hpvmhost] ~ #
  • Start the new virtual machine and make the necessary changes to the guest OS (network, hostname, etc).
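For guests with several backing lvols, the dd step above can be wrapped in a small loop. A sketch, using plain files in place of the raw devices so it can be tried anywhere; the prefixes stand in for the /dev/vg_vmtest paths from the example:

```shell
# Sketch: copy every backing store of the source VM to the clone's lvols.
# Plain files stand in for the raw devices here; on the host the source
# and destination would be e.g. /dev/vg_vmtest/rivm1d1 and
# /dev/vg_vmtest/rclone01_d1.
SRC_PREFIX=/tmp/rivm1d
DST_PREFIX=/tmp/rclone01_d

# two small dummy "disks" for the demo
echo "disk one" > ${SRC_PREFIX}1
echo "disk two" > ${SRC_PREFIX}2

for n in 1 2; do
    dd if=${SRC_PREFIX}${n} of=${DST_PREFIX}${n} bs=1024k 2>/dev/null
done

cmp -s ${SRC_PREFIX}1 ${DST_PREFIX}1 && echo "copy 1 OK"   # -> copy 1 OK
```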

METHOD 2: Clone the virtual storage devices at the same time the IVM is cloned.

Yes, yes and yes, it can be done with hpvmclone: you have to use the -b switch and provide the storage resource to use.

I didn’t test this procedure with devices other than the boot disk(s). The man page of the command and the HPVM documentation state that this option specifies the boot device of the clone, but I used it to clone one virtual machine with a single boot disk and another with two disks, and in both cases it worked without problems.

  • As in METHOD 1 create the necessary LVM infrastructure for the new IVM.
  • Once the lvols are created clone the virtual machine.
[root@hpvmhost] ~ # hpvmclone -P ivm1 -N vxcl01 -a disk:scsi::lv:/dev/vg_vmtest/rvxcl01_d1 \
> -a disk:scsi::lv:/dev/vg_vmtest/rvxcl01_d2 \
> -b disk:scsi:0,2,0:lv:/dev/vg_vmtest/rvxcl01_d1 \
> -b disk:scsi:0,2,1:lv:/dev/vg_vmtest/rvxcl01_d2 \
> -d disk:scsi:0,2,0:lv:/dev/vg_vmtest/rivm1d1 \
> -d disk:scsi:0,2,1:lv:/dev/vg_vmtest/rivm1d2 \
> -B manual
12000+0 records in
12000+0 records out
hpvmclone: Virtual storage cloned successfully.
12000+0 records in
12000+0 records out
hpvmclone: Virtual storage cloned successfully.
[root@hpvmhost] ~ #
  • Start the virtual machine.
  • Now log into the virtual machine to check the start-up process and to make any change needed.

METHOD 3: Dynamic Root Disk.

Since with DRD a clone of the vg00 can be produced we can use it too to clone an Integrity Virtual Machine.

  • The first step is to create a new lvol to contain the clone of vg00; it has to be at least the same size as the original disk.
  • Install the latest supported DRD version on the virtual machine to be cloned.
  • Add the new volume to the source virtual machine and from the guest OS re-scan for the new disk.
  • Now proceed with the DRD clone.
root@ivm2:~# drd clone -v -x overwrite=true -t /dev/disk/disk15   

=======  03/09/10 15:45:15 MST  BEGIN Clone System Image (user=root)  (jobid=ivm2)

       * Reading Current System Information
       * Selecting System Image To Clone
       * Converting legacy Dsf "/dev/dsk/c0t0d0" to "/dev/disk/disk3"
       * Selecting Target Disk
NOTE:    There may be LVM 2 volumes configured that will not be recognized.
       * Selecting Volume Manager For New System Image
       * Analyzing For System Image Cloning
       * Creating New File Systems
       * Copying File Systems To New System Image
       * Copying File Systems To New System Image succeeded.
       * Making New System Image Bootable
       * Unmounting New System Image Clone
       * System image: "sysimage_001" on disk "/dev/disk/disk15"

=======  03/09/10 16:05:20 MST  END Clone System Image succeeded. (user=root)  (jobid=ivm2)

root@ivm2:~#
  • Mount the new image.
root@ivm2:~# drd mount -v

=======  03/09/10 16:09:08 MST  BEGIN Mount Inactive System Image (user=root)  (jobid=ivm2)

       * Checking for Valid Inactive System Image
       * Locating Inactive System Image
       * Preparing To Mount Inactive System Image
       * Selected inactive system image "sysimage_001" on disk "/dev/disk/disk15".
       * Mounting Inactive System Image
       * System image: "sysimage_001" on disk "/dev/disk/disk15"

=======  03/09/10 16:09:26 MST  END Mount Inactive System Image succeeded. (user=root)  (jobid=ivm2)

root@ivm2:~#
  • On the mounted image, edit the netconf file: set the hostname to “” and remove any network configuration such as IP address, gateway, etc. The image is mounted on /var/opt/drd/mnts/sysimage_001.
  • Move or delete the DRD XML registry file in /var/opt/drd/mnts/sysimage_001/var/opt/drd/registry in order to avoid any problems during the boot of the clone since the source disk will not be present.
  • Unmount the image.
root@ivm2:~# drd umount -v 

=======  03/09/10 16:20:45 MST  BEGIN Unmount Inactive System Image (user=root)  (jobid=ivm2)

       * Checking for Valid Inactive System Image
       * Locating Inactive System Image
       * Preparing To Unmount Inactive System Image
       * Unmounting Inactive System Image
       * System image: "sysimage_001" on disk "/dev/disk/disk15"

=======  03/09/10 16:20:58 MST  END Unmount Inactive System Image succeeded. (user=root)  (jobid=ivm2)

root@ivm2:~#
  • Now we are going to create the new virtual machine with hpvmclone. Of course, the new IVM could also be created with hpvmcreate, adding the new disk as its boot disk.
[root@hpvmhost] ~ # hpvmclone -P ivm2 -N ivm3 -B manual -d disk:scsi:0,1,0:lv:/dev/vg_vmtest/rivm2disk
[root@hpvmhost] ~ # hpvmstatus -P ivm3
[Virtual Machine Details]
Virtual Machine Name VM #  OS Type State
==================== ===== ======= ========
ivm3                     4 HPUX    Off       

[Authorized Administrators]
Oper Groups: 
Admin Groups:
Oper Users:  
Admin Users:  

[Virtual CPU Details]
#vCPUs Entitlement Maximum
====== =========== =======
     1       10.0%  100.0% 

[Memory Details]
Total    Reserved
Memory   Memory 
=======  ========
   3 GB     64 MB 

[Storage Interface Details]
Guest                                 Physical
Device  Adaptor    Bus Dev Ftn Tgt Lun Storage   Device
======= ========== === === === === === ========= =========================
dvd     scsi         0   1   0   1   0 disk      /dev/rdsk/c1t4d0
disk    scsi         0   1   0   2   0 lv        /dev/vg_vmtest/rivm3disk

[Network Interface Details]
Interface Adaptor    Name/Num   PortNum Bus Dev Ftn Mac Address
========= ========== ========== ======= === === === =================
vswitch   lan        vlan02     11        0   0   0 52-4f-f9-5e-02-82

[Misc Interface Details]
Guest                                 Physical
Device  Adaptor    Bus Dev Ftn Tgt Lun Storage   Device
======= ========== === === === === === ========= =========================
serial  com1                           tty       console
[root@hpvmhost] ~ #
  • The final step is to boot the newly created machine; from the EFI menu we’re going to create a new boot option.
  • First select the Boot option maintenance menu:
EFI Boot Manager ver 1.10 [14.62] [Build: Mon Oct  1 09:27:26 2007]

Please select a boot option

    HP-UX Primary Boot: 0/0/1/0.0.0                                
    EFI Shell [Built-in]                                           
    Boot option maintenance menu                                    

    Use ^ and v to change option(s). Use Enter to select an option
  • Now go to Add a Boot Option.
EFI Boot Maintenance Manager ver 1.10 [14.62]

Main Menu. Select an Operation

        Boot from a File                                           
        Add a Boot Option                                          
        Delete Boot Option(s)                                      
        Change Boot Order                                           

        Manage BootNext setting                                    
        Set Auto Boot TimeOut                                       

        Select Active Console Output Devices                       
        Select Active Console Input Devices                        
        Select Active Standard Error Devices                        

        Cold Reset                                                 
        Exit                                                       

    Timeout-->[10] sec SystemGuid-->[5A0F8F26-2BA2-11DF-9C04-001A4B07F002]
    SerialNumber-->[VM01010008          ]
  • Select the first partition of the disk.
EFI Boot Maintenance Manager ver 1.10 [14.62]

Add a Boot Option.  Select a Volume

    IA64_EFI [Acpi(PNP0A03,0)/Pci(1|0)/Scsi(Pun2,Lun0)/HD(Part1,Sig7
    IA64_EFI [Acpi(PNP0A03,0)/Pci(1|0)/Scsi(Pun2,Lun0)/HD(Part3,Sig7
    Removable Media Boot [Acpi(PNP0A03,0)/Pci(1|0)/Scsi(Pun1,Lun0)]
    Load File [Acpi(PNP0A03,0)/Pci(0|0)/Mac(524FF95E0282)]         
    Load File [EFI Shell [Built-in]]                               
    Legacy Boot                                                    
    Exit
  • Select the first option.
EFI Boot Maintenance Manager ver 1.10 [14.62]

Select file or change to new directory:

       03/09/10  03:45p <DIR>       4,096 EFI                      
       [Treat like Removable Media Boot]                           
    Exit
  • Enter the HPUX directory.
EFI Boot Maintenance Manager ver 1.10 [14.62]

Select file or change to new directory:

       03/09/10  03:45p <DIR>       4,096 .                        
       03/09/10  03:45p <DIR>           0 ..                       
       03/09/10  03:45p <DIR>       4,096 HPUX                     
       03/09/10  03:45p <DIR>       4,096 Intel_Firmware           
       03/09/10  03:45p <DIR>       4,096 diag                     
       03/09/10  03:45p <DIR>       4,096 hp                       
       03/09/10  03:45p <DIR>       4,096 tools                    
    Exit
  • Select the hpux.efi file.
EFI Boot Maintenance Manager ver 1.10 [14.62]

Select file or change to new directory:

       03/09/10  03:45p <DIR>       4,096 .                        
       03/09/10  03:45p <DIR>       4,096 ..                       
       03/09/10  03:45p           654,025 hpux.efi                 
       03/09/10  03:45p            24,576 nbp.efi                  
    Exit
  • Enter BOOTDISK as description and None as BootOption Data Type. Save changes.
Filename: \EFI\HPUX\hpux.efi
DevicePath: [Acpi(PNP0A03,0)/Pci(1|0)/Scsi(Pun2,Lun0)/HD(Part1,Sig71252358-2BCD-11DF-8000-D6217B60E588)/\EFI\HPUX\hpux.efi]

IA-64 EFI Application 03/09/10  03:45p     654,025 bytes

Enter New Description:  BOOTDISK
New BootOption Data. ASCII/Unicode strings only, with max of 240 characters

Enter BootOption Data Type [A-Ascii U-Unicode N-No BootOption] :  None

Save changes to NVRAM [Y-Yes N-No]:
  • Go back to the EFI main menu and boot from the new option.
EFI Boot Manager ver 1.10 [14.62] [Build: Mon Oct  1 09:27:26 2007]

Please select a boot option
HP-UX Primary Boot: 0/0/1/0.0.0
EFI Shell [Built-in]
BOOTDISK
Boot option maintenance menu

Use ^ and v to change option(s). Use Enter to select an option
Loading.: BOOTDISK
Starting: BOOTDISK

(C) Copyright 1999-2006 Hewlett-Packard Development Company, L.P.
All rights reserved

HP-UX Boot Loader for IPF  --  Revision 2.035

Press Any Key to interrupt Autoboot
\EFI\HPUX\AUTO ==> boot vmunix
Seconds left till autoboot -   0
AUTOBOOTING...> System Memory = 3066 MB
loading section 0
.................................................................................. (complete)
loading section 1
.............. (complete)
loading symbol table
loading System Directory (boot.sys) to MFS
.....
loading MFSFILES directory (bootfs) to MFS
................
Launching /stand/vmunix
SIZE: Text:41555K + Data:6964K + BSS:20747K = Total:69267K
  • Finally, the OS will ask some questions about the network configuration and other parameters; answer whatever best suits your needs.
_______________________________________________________________________________

                       Welcome to HP-UX!

Before using your system, you will need to answer a few questions.

The first question is whether you plan to use this system on a network.

Answer "yes" if you have connected the system to a network and are ready
to link with a network.

Answer "no" if you:

     * Plan to set up this system as a standalone (no networking).

     * Want to use the system now as a standalone and connect to a
       network later.
_______________________________________________________________________________

Are you ready to link this system to a network?

Press [y] for yes or [n] for no, then press [Enter] y
...

And we are done.

Conclusions: I have to say that at first the HPVM cloning system disappointed me, but after a while I got used to it.

In my opinion the best of the above methods is the second, at least if you have a single boot disk, and I really can’t see a reason to have a vg00 with several PVs on a virtual machine. If you have an IVM as a template and need to produce many copies as quickly as possible, this method is perfect.

Of course there is a fourth method: Our beloved Ignite-UX. But I will write about it in another post.

Juanma.

[root@hpvmhost] ~ # hpvmstatus -P hpvxcl01
[Virtual Machine Details]
Virtual Machine Name VM #  OS Type State
==================== ===== ======= ========
hpvxcl01                11 HPUX    Off

[Authorized Administrators]
Oper Groups:
Admin Groups:
Oper Users:
Admin Users:

[Virtual CPU Details]
#vCPUs Entitlement Maximum
====== =========== =======
     3       20.0%  100.0%

[Memory Details]
Total    Reserved
Memory   Memory
=======  ========
   2 GB     64 MB

[Storage Interface Details]
Guest                                 Physical
Device  Adaptor    Bus Dev Ftn Tgt Lun Storage   Device
======= ========== === === === === === ========= =========================
disk    scsi         0   2   0   0   0 lv        /dev/vg_vmtest/rivm1d1
disk    scsi         0   2   0   1   0 lv        /dev/vg_vmtest/rivm1d2
dvd     scsi         0   2   0   2   0 disk      /dev/rdsk/c1t4d0
disk    scsi         0   2   0   3   0 lv        /dev/vg_vmtest/rivm1sd1
disk    scsi         0   2   0   4   0 lv        /dev/vg_vmtest/rivm1sd2
disk    scsi         0   2   0   5   0 lv        /dev/vg_vmtest/rivm1sd3
disk    scsi         0   2   0   6   0 lv        /dev/vg_vmtest/rivm1md1
disk    scsi         0   2   0   7   0 lv        /dev/vg_vmtest/rivm1md2
disk    scsi         0   2   0   8   0 lv        /dev/vg_vmtest/rivm1md3
disk    scsi         0   2   0   9   0 lv        /dev/vg_vmtest/rivm1lmd1
disk    scsi         0   2   0  10   0 lv        /dev/vg_vmtest/rivm1lmd2

[Network Interface Details]
Interface Adaptor    Name/Num   PortNum Bus Dev Ftn Mac Address
========= ========== ========== ======= === === === =================
vswitch   lan        swtch502   11        0   0   0 f6-fb-bf-41-78-63
vswitch   lan        localnet   10        0   1   0 2a-69-35-d5-c1-5f

[Misc Interface Details]
Guest                                 Physical
Device  Adaptor    Bus Dev Ftn Tgt Lun Storage   Device
======= ========== === === === === === ========= =========================
serial  com1                           tty       console
[root@hpvmhost] ~ #

Continuing with my HPVM re-learning process, today I’ve been playing around with my virtual switches and a question arose.

How can I move a vNic from one vSwitch to another?

I discovered it is not a difficult task, with just one important point to take into account: the virtual machine must be powered off. This kind of change can’t be done while the IVM is online, at least with HPVM 3.5. I never used the 4.0 or 4.1 releases of HPVM, and I didn’t find anything in the documentation that suggests a different behavior.

To perform the operation we’re going to use, as usual ;-), hpvmmodify. It comes with the -m switch to modify the I/O resources of an existing virtual machine, but you have to specify the hardware address of the device. To identify the address of the network card, launch hpvmstatus with -d; this option shows the output in the format used on the command line.

[root@hpvmhost] ~ # hpvmstatus -P ivm1 -d
[Virtual Machine Devices]
...
[Network Interface Details]
network:lan:0,0,0x56E9E3096A22:vswitch:vlan02
network:lan:0,1,0xAED6F7FA4E3E:vswitch:localnet
...
[root@hpvmhost] ~ #

As can be seen in the Network Interface Details, the third field shows, separated by commas, the LAN bus, the device number and the MAC address of the vNIC. We only need the first two values, that is, the bus and device number: “0,0” in our example.
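Extracting that pair can be scripted; the following is just a parsing sketch against the sample line shown above:

```shell
# Parse a "network:lan:bus,dev,mac:vswitch:name" line from hpvmstatus -d
# output and keep only the bus,device pair that hpvmmodify -m needs.
line='network:lan:0,0,0x56E9E3096A22:vswitch:vlan02'

hwaddr=$(echo "$line" | awk -F: '{print $3}')   # -> 0,0,0x56E9E3096A22
busdev=$(echo "$hwaddr" | cut -d, -f1,2)        # -> 0,0
echo "$busdev"
```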

Now we can proceed.

[root@hpvmhost] ~ # hpvmmodify -P ivm1 -m network:lan:0,0:vswitch:vlan03
[root@hpvmhost] ~ #
[root@hpvmhost] ~ # hpvmstatus -P ivm1
[Virtual Machine Details]
Virtual Machine Name VM #  OS Type State
==================== ===== ======= ========
ivm1                     9 HPUX    On (OS)   
...
[Network Interface Details]
Interface Adaptor    Name/Num   PortNum Bus Dev Ftn Mac Address
========= ========== ========== ======= === === === =================
vswitch   lan        vlan03     9         0   0   0 56-e9-e3-09-6a-22
vswitch   lan        localnet   9         0   1   0 ae-d6-f7-fa-4e-3e
...
[root@hpvmhost] ~ #

And we are done.

I will write a few additional posts covering more HPVM tips, small and big, as I practice them on my lab server.

Juanma.

Yes, I have to admit it: it’s been a while since I last created an Integrity Virtual Machine. My last job didn’t have HPVM, and here the VMs were already running when I arrived. So a few weeks ago I decided to cut my teeth again with HPVM, especially since I am pushing very hard for an OS and HPVM version upgrade of the IVM cluster, which is currently running HP-UX 11.23 with HPVM 3.5.

The first logical step to get proficient again with IVM is to create a new virtual machine. I asked Javi, our storage guy, for a new LUN, and after adding it to my lab server I started the whole process.

Some of the steps, like VG and LV creation, are obvious to any HP-UX sysadmin, but I decided to show the commands to maintain some consistency across this how-to/checklist/whatever-you-like-to-call-it.

  • Create a volume group for the IVM virtual disks.
[root@hpvmhost] ~ # vgcreate -s 16 -e 6000 vg_vmtest /dev/dsk/c15t7d1
Volume group "/dev/vg_vmtest" has been successfully created.
Volume Group configuration for /dev/vg_vmtest has been saved in /etc/lvmconf/vg_vmtest.conf
[root@hpvmhost] ~ #
[root@hpvmhost] ~ # vgextend vg_vmtest /dev/dsk/c5t7d1  /dev/dsk/c7t7d1 /dev/dsk/c13t7d1
Volume group "vg_vmtest" has been successfully extended.
Volume Group configuration for /dev/vg_vmtest has been saved in /etc/lvmconf/vg_vmtest.conf
[root@hpvmhost] ~ # vgdisplay -v vg_vmtest
--- Volume groups ---
VG Name                     /dev/vg_vmtest
VG Write Access             read/write     
VG Status                   available                 
Max LV                      255    
Cur LV                      0      
Open LV                     0      
Max PV                      16     
Cur PV                      1      
Act PV                      1      
Max PE per PV               6000         
VGDA                        2   
PE Size (Mbytes)            16              
Total PE                    3199    
Alloc PE                    0       
Free PE                     3199    
Total PVG                   0        
Total Spare PVs             0              
Total Spare PVs in use      0                     

--- Physical volumes ---
PV Name                     /dev/dsk/c15t7d1
PV Name                     /dev/dsk/c5t7d1  Alternate Link
PV Name                     /dev/dsk/c7t7d1  Alternate Link
PV Name                     /dev/dsk/c13t7d1 Alternate Link
PV Status                   available                
Total PE                    3199    
Free PE                     3199    
Autoswitch                  On        
Proactive Polling           On               

[root@hpvmhost] ~ #
  • Create one lvol for each disk you want to add to your virtual machine, of course these lvols must belong to the volume group previously created.
[root@hpvmhost] ~ # lvcreate -L 12000 -n ivm1d1 vg_vmtest
Logical volume "/dev/vg_vmtest/ivm1d1" has been successfully created with
character device "/dev/vg_vmtest/rivm1d1".
Logical volume "/dev/vg_vmtest/ivm1d1" has been successfully extended.
Volume Group configuration for /dev/vg_vmtest has been saved in /etc/lvmconf/vg_vmtest.conf
[root@hpvmhost] ~ #
[root@hpvmhost] ~ # lvcreate -L 12000 -n ivm1d2 vg_vmtest
Logical volume "/dev/vg_vmtest/ivm1d2" has been successfully created with
character device "/dev/vg_vmtest/rivm1d2".
Logical volume "/dev/vg_vmtest/ivm1d2" has been successfully extended.
Volume Group configuration for /dev/vg_vmtest has been saved in /etc/lvmconf/vg_vmtest.conf
[root@hpvmhost] ~ #
  • Now we’re going to do the real stuff: create the IVM with the hpvmcreate command and use hpvmstatus to check that everything went well:
[root@hpvmhost] ~ # hpvmcreate -P ivm1 -O hpux  
[root@hpvmhost] ~ # hpvmstatus -P ivm1
[Virtual Machine Details]
Virtual Machine Name VM #  OS Type State
==================== ===== ======= ========
ivm1                     8 HPUX    Off       

[Authorized Administrators]
Oper Groups:  
Admin Groups:
Oper Users:   
Admin Users:  

[Virtual CPU Details]
#vCPUs Entitlement Maximum
====== =========== =======
 1       10.0%  100.0% 

[Memory Details]
Total    Reserved
Memory   Memory  
=======  ========
 2 GB     64 MB 

[Storage Interface Details]
Guest                                 Physical
Device  Adaptor    Bus Dev Ftn Tgt Lun Storage   Device
======= ========== === === === === === ========= =========================

[Network Interface Details]
Interface Adaptor    Name/Num   PortNum Bus Dev Ftn Mac Address
========= ========== ========== ======= === === === =================

[Misc Interface Details]
Guest                                 Physical
Device  Adaptor    Bus Dev Ftn Tgt Lun Storage   Device
======= ========== === === === === === ========= =========================
serial  com1                           tty       console
[root@hpvmhost] ~ #

We have a new virtual machine created but with no resources at all.

If you have read the HPVM documentation, and you should, you probably know that every resource can be assigned at creation time, but I like to add them later, one by one.

From now on we’re going to use hpvmstatus to verify every change made. This command can be invoked without options to show a general summary, or it can query a single virtual machine; a verbose mode is also available with -V. Take a look at its man page for more options.

  • Add more CPU and RAM. The default values are 1 vCPU and 2 GB of RAM; more can be assigned with hpvmmodify:
[root@hpvmhost] ~ # hpvmmodify -P ivm1 -c 2
[root@hpvmhost] ~ # hpvmmodify -P ivm1 -r 4G
[root@hpvmhost] ~ # hpvmstatus
[Virtual Machines]
Virtual Machine Name VM #  OS Type State     #VCPUs #Devs #Nets Memory  Runsysid
==================== ===== ======= ========= ====== ===== ===== ======= ========
oratest01                1 HPUX    On (OS)        4    10     3   16 GB        0
oratest02                2 HPUX    On (OS)        4     8     3   16 GB        0
sapvm01                  3 HPUX    Off            3     8     3    8 GB        0
sapvm02                  4 HPUX    Off            3     7     3    8 GB        0
sles01                   5 LINUX   On (OS)        1     4     3    4 GB        0
rhel01                   6 LINUX   Off            1     4     3    4 GB        0
hp-vxvm                  7 HPUX    On (OS)        2    17     3    6 GB        0
ivm1                     8 HPUX    Off            2     0     0    4 GB        0
[root@hpvmhost] ~ #
  • With the CPUs and RAM done, it’s time to add the storage devices; as always, we’re going to use hpvmmodify:
[root@hpvmhost] ~ # hpvmmodify -P ivm1 -a disk:scsi::lv:/dev/vg_vmtest/rivm1d1
[root@hpvmhost] ~ # hpvmmodify -P ivm1 -a disk:scsi::lv:/dev/vg_vmtest/rivm1d2
[root@hpvmhost] ~ # hpvmmodify -P ivm1 -a dvd:scsi::disk:/dev/rdsk/c1t4d0
[root@hpvmhost] ~ # hpvmstatus -P ivm1
[Virtual Machine Details]
Virtual Machine Name VM #  OS Type State
==================== ===== ======= ========
ivm1                     8 HPUX    Off       

[Authorized Administrators]
Oper Groups:  
Admin Groups:
Oper Users:   
Admin Users:  

[Virtual CPU Details]
#vCPUs Entitlement Maximum
====== =========== =======
 2       10.0%  100.0% 

[Memory Details]
Total    Reserved
Memory   Memory  
=======  ========
 4 GB     64 MB 

[Storage Interface Details]
Guest                                 Physical
Device  Adaptor    Bus Dev Ftn Tgt Lun Storage   Device
======= ========== === === === === === ========= =========================
disk    scsi         0   2   0   0   0 lv        /dev/vg_vmtest/rivm1d1
disk    scsi         0   2   0   1   0 lv        /dev/vg_vmtest/rivm1d2
dvd     scsi         0   2   0   2   0 disk      /dev/rdsk/c1t4d0

[Network Interface Details]
Interface Adaptor    Name/Num   PortNum Bus Dev Ftn Mac Address
========= ========== ========== ======= === === === =================

[Misc Interface Details]
Guest                                 Physical
Device  Adaptor    Bus Dev Ftn Tgt Lun Storage   Device
======= ========== === === === === === ========= =========================
serial  com1                           tty       console
[root@hpvmhost] ~ #

An important tip about the storage devices: remember that you have to use the character device file of the LV. If a block device is used you will get the following error:

[root@hpvmhost] ~ # hpvmmodify -P ivm1 -a disk:scsi::lv:/dev/vg_vmtest/ivm1d1
hpvmmodify: WARNING (ivm1): Expecting a character device file for disk backing file, but '/dev/vg_vmtest/ivm1d1' appears to be a block device.
hpvmmodify: ERROR (ivm1): Illegal blk device '/dev/vg_vmtest/ivm1d1' as backing device.
hpvmmodify: ERROR (ivm1): Unable to add device '/dev/vg_vmtest/ivm1d1'.
hpvmmodify: Unable to create device disk:scsi::lv:/dev/vg_vmtest/ivm1d1.
hpvmmodify: Unable to modify the guest.
[root@hpvmhost] ~ #
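That error can be caught before calling hpvmmodify by testing the device file type first. Below is a minimal sketch: the add_lv_disk helper is hypothetical (not an HPVM command), and it only prints the hpvmmodify command it would run instead of executing it.

```shell
# Hypothetical wrapper: refuse block device files before handing them
# to hpvmmodify, which only accepts the character (raw) device file.
add_lv_disk() {
  vm="$1"; lv="$2"
  if [ -c "$lv" ]; then
    # Dry run: print the command that would be executed.
    echo "hpvmmodify -P $vm -a disk:scsi::lv:$lv"
  else
    echo "skipping $lv: not a character device file" >&2
    return 1
  fi
}

# /dev/null is a character device, so this prints the hpvmmodify command.
add_lv_disk ivm1 /dev/null
```

On a real host you would pass the raw LV path, e.g. /dev/vg_vmtest/rivm1d1, and drop the echo so the command actually runs.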
  • Virtual Networking 1: First check the available virtual switches with hpvmnet:
[root@hpvmhost] / # hpvmnet
Name     Number State   Mode      NamePPA  MAC Address    IP Address
======== ====== ======= ========= ======== ============== ===============
localnet      1 Up      Shared             N/A            N/A
vlan02        2 Up      Shared    lan3     0x000000000000 192.168.1.12
vlan03        3 Up      Shared    lan4     0x001111111111 10.10.3.4
[root@hpvmhost] / #
  • Virtual Networking 2: Add a couple of vNICs to the virtual machine.
[root@hpvmhost] / # hpvmmodify -P ivm1 -a network:lan:vswitch:vlan02
[root@hpvmhost] / # hpvmmodify -P ivm1 -a network:lan:vswitch:localnet
[root@hpvmhost] / #
[root@hpvmhost] / # hpvmstatus -P ivm1
[Virtual Machine Details]
Virtual Machine Name VM #  OS Type State
==================== ===== ======= ========
ivm1                     8 HPUX    Off       

[Authorized Administrators]
Oper Groups:  
Admin Groups:
Oper Users:   
Admin Users:  

[Virtual CPU Details]
#vCPUs Entitlement Maximum
====== =========== =======
 2       10.0%  100.0% 

[Memory Details]
Total    Reserved
Memory   Memory  
=======  ========
 4 GB     64 MB 

[Storage Interface Details]
Guest                                 Physical
Device  Adaptor    Bus Dev Ftn Tgt Lun Storage   Device
======= ========== === === === === === ========= =========================
disk    scsi         0   2   0   0   0 lv        /dev/vg_vmtest/rivm1d1
disk    scsi         0   2   0   1   0 lv        /dev/vg_vmtest/rivm1d2
dvd     scsi         0   2   0   2   0 disk      /dev/rdsk/c1t4d0

[Network Interface Details]
Interface Adaptor    Name/Num   PortNum Bus Dev Ftn Mac Address
========= ========== ========== ======= === === === =================
vswitch   lan        vlan02     8         0   0   0 56-e9-e3-09-6a-22
vswitch   lan        localnet   8         0   1   0 ae-d6-f7-fa-4e-3e

[Misc Interface Details]
Guest                                 Physical
Device  Adaptor    Bus Dev Ftn Tgt Lun Storage   Device
======= ========== === === === === === ========= =========================
serial  com1                           tty       console
[root@hpvmhost] / #
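If the guest needs a vNIC on each of several vswitches, the two hpvmmodify calls above can be generated in a loop. A dry-run sketch, reusing the guest and vswitch names from the examples (it only prints the commands):

```shell
# Generate one "add network" command per vswitch for guest ivm1.
# Dry run: the commands are printed, not executed.
cmds=$(for vsw in vlan02 localnet; do
  echo "hpvmmodify -P ivm1 -a network:lan:vswitch:$vsw"
done)
echo "$cmds"
```

Removing the echo inside the loop (and running the command directly) would apply the changes for real.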
  • And we have an IVM ready to be used. To start it, use the hpvmstart command and access its console with hpvmconsole; the interface is almost identical to the GSP/MP.
[root@hpvmhost] ~ # hpvmstart -P ivm1
(C) Copyright 2000 - 2008 Hewlett-Packard Development Company, L.P.
Opening minor device and creating guest machine container
Creation of VM, minor device 3
Allocating guest memory: 4096MB
  allocating low RAM (0-80000000, 2048MB)
/opt/hpvm/lbin/hpvmapp (/var/opt/hpvm/uuids/2b3b1198-2062-11df-9e06-001a4b07f002/vmm_config.current): Allocated 2147483648 bytes at 0x6000000100000000
  allocating high RAM (100000000-180000000, 2048MB)
/opt/hpvm/lbin/hpvmapp (/var/opt/hpvm/uuids/2b3b1198-2062-11df-9e06-001a4b07f002/vmm_config.current): Allocated 2147483648 bytes at 0x6000000200000000
    locking memory: 100000000-180000000
    allocating datalogger memory: FF800000-FF840000 (256KB for 155KB)
/opt/hpvm/lbin/hpvmapp (/var/opt/hpvm/uuids/2b3b1198-2062-11df-9e06-001a4b07f002/vmm_config.current): Allocated 262144 bytes at 0x6000000300000000
    locking datalogger memory
  allocating firmware RAM (fff00000-fff20000, 128KB)
/opt/hpvm/lbin/hpvmapp (/var/opt/hpvm/uuids/2b3b1198-2062-11df-9e06-001a4b07f002/vmm_config.current): Allocated 131072 bytes at 0x6000000300080000
    locked SAL RAM: 00000000fff00000 (8KB)
    locked ESI RAM: 00000000fff02000 (8KB)
    locked PAL RAM: 00000000fff04000 (8KB)
    locked Min Save State: 00000000fff06000 (8KB)
    locked datalogger: 00000000ff800000 (256KB)
Loading boot image
Image initial IP=102000 GP=67E000
Initialize guest memory mapping tables
Starting event polling thread
Starting thread initialization
Daemonizing....
hpvmstart: Successful start initiation of guest 'ivm1'
[root@hpvmhost] ~ #
[root@hpvmhost] ~ # hpvmconsole -P ivm1

   vMP MAIN MENU

         CO: Console
         CM: Command Menu
         CL: Console Log
         SL: Show Event Logs
         VM: Virtual Machine Menu
         HE: Main Help Menu
         X: Exit Connection

[ivm1] vMP> co

       (Use Ctrl-B to return to vMP main menu.)

- - - - - - - - - - Prior Console Output - - - - - - - - - -

And we are finished. I’m not going through the installation process since it’s not the objective of this post, and it’s already well covered in the HP-UX documentation.

I really enjoyed writing this post; it has been a very useful exercise to re-learn the roots of HPVM and a very good starting point for the HP-UX/HPVM upgrade I’m going to undertake during the next few weeks.

Juanma.