Archives For Itanium

Creating a Veritas Volume Manager boot disk from an existing LVM boot disk probably looks to many like a very complicated process, but nothing could be further from reality. In fact the whole conversion can be done with one command, vxcp_lvmroot. In this post I will try to clarify the process and explain some of the underlying mechanisms.

I’m going to take for granted that all of you understand the basic structure of boot disks on Itanium servers. If you have read my post about boot disk structure on Integrity servers you will remember that the disks are composed of three partitions:

  • EFI
  • OS Partition.
  • HPSP – HP Service Partition.

For the purpose of this post the only relevant partition is the OS Partition, also labeled HPUX on HP-UX hosts.

Unlike LVM, where the volumes are named with numbers (lvol1, lvol2…), in VxVM the volumes follow a specific naming convention that reflects the usage of each one of them:

  • standvol
  • swapvol
  • rootvol
  • usrvol
  • varvol
  • tmpvol
  • optvol

Veritas volumes also support a usetype field that provides additional information about the volume to VxVM itself. The three most common usetypes on HP-UX are:

  • fsgen – File systems and general purpose volumes
  • swap – Swap volumes
  • root – Used for the volume that contains the root file system

The following restrictions must be taken into account for any VxVM boot disk:

  • As in LVM the volumes involved in the boot process (standvol, swapvol and rootvol) must be contiguous.
  • The above volumes can have only one subdisk each and cannot span additional disks.
  • The volumes within the root disk can not use dirty region logging (DRL).
  • The private region size is 1MB rather than the default value of 32MB.
  • The /stand file system can only be configured with VxFS disk layout version 5 or the system will not boot (a quick check is sketched after this list).
  • In PA-RISC systems the /stand file system must be HFS, this is necessary because the PA-RISC HP-UX kernel loader is not VxFS-aware.
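
The layout-version restriction above can be checked with fstyp before starting; a quick sketch, assuming /stand is still the LVM /dev/vg00/lvol1:

root@robin:/# fstyp -v /dev/vg00/lvol1 | grep -i version

The reported version must be 5.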

Following is an example to illustrate the process.

First, with diskinfo, verify the size of the current boot disk and the new disk to check that they are the same.

root@robin:/# diskinfo /dev/rdsk/c0t0d0
SCSI describe of /dev/rdsk/c0t0d0:
             vendor: HP      
         product id: Virtual LvDisk  
               type: direct access
               size: 40960000 Kbytes
   bytes per sector: 512
root@robin:/#
root@robin:/# diskinfo /dev/rdsk/c0t8d0
SCSI describe of /dev/rdsk/c0t8d0:
             vendor: HP      
         product id: Virtual LvDisk  
               type: direct access
               size: 40960000 Kbytes
   bytes per sector: 512
root@robin:/#

After that, scrub the new disk. This prevents possible problems during the creation process, because if vxcp_lvmroot encounters LVM structures on the disk it will fail.

root@robin:~# dd if=/dev/zero of=/dev/rdsk/c0t8d0 bs=1048576 count=1024  
1024+0 records in
1024+0 records out
root@robin:~#

Finally launch the vxcp_lvmroot command. Before commencing the copy, vxcp_lvmroot will determine how many disks are required and will ensure that enough disks have been specified.

Each of the disks given for the conversion will be checked to make sure it isn't in use as an LVM, VxVM or raw disk. Once those checks have passed, the disks are given VxVM media names: the disk or disks containing the root get rootdisk## names and any other disks that are part of rootdg get rootaux## names, where ## is a number starting at 01.

root@robin:~# /etc/vx/bin/vxcp_lvmroot -v -b c0t8d0
VxVM vxcp_lvmroot INFO V-5-2-4668 10:42: Bootdisk is configured with new-style DSF
VxVM vxcp_lvmroot INFO V-5-2-2499 10:42: Gathering information on LVM root volume group vg00
VxVM vxcp_lvmroot INFO V-5-2-2441 10:42: Checking specified disk(s) for usability
VxVM vxcp_lvmroot INFO V-5-2-4679 10:42: Using legacy-style DSF names
VxVM vxcp_lvmroot INFO V-5-2-2566 10:42: Preparing disk c0t8d0 as a VxVM root disk
VxVM vxcp_lvmroot INFO V-5-2-3767 10:42: Disk c0t8d0 is now EFI partitioned disk c0t8d0s2
VxVM vxcp_lvmroot INFO V-5-2-2537 10:42: Initializing DG rootdg with disk c0t8d0s2 as DM rootdisk01
VxVM vxcp_lvmroot INFO V-5-2-1606 10:42: Copying /dev/vg00/lvol1 (vxfs) to /dev/vx/dsk/rootdg/standvol
VxVM vxcp_lvmroot INFO V-5-2-1604 10:42: Cloning /dev/vg00/lvol2 (swap) to /dev/vx/dsk/rootdg/swapvol
VxVM vxcp_lvmroot INFO V-5-2-1606 10:42: Copying /dev/vg00/lvol3 (vxfs) to /dev/vx/dsk/rootdg/rootvol
VxVM vxcp_lvmroot INFO V-5-2-1606 10:43: Copying /dev/vg00/lvol4 (vxfs) to /dev/vx/dsk/rootdg/homevol
VxVM vxcp_lvmroot INFO V-5-2-1606 10:43: Copying /dev/vg00/lvol5 (vxfs) to /dev/vx/dsk/rootdg/optvol
VxVM vxcp_lvmroot INFO V-5-2-1606 10:50: Copying /dev/vg00/lvol6 (vxfs) to /dev/vx/dsk/rootdg/tmpvol
VxVM vxcp_lvmroot INFO V-5-2-1606 10:50: Copying /dev/vg00/lvol7 (vxfs) to /dev/vx/dsk/rootdg/usrvol
VxVM vxcp_lvmroot INFO V-5-2-1606 10:55: Copying /dev/vg00/lvol8 (vxfs) to /dev/vx/dsk/rootdg/varvol
VxVM vxcp_lvmroot INFO V-5-2-1606 10:58: Copying /dev/vg00/lv_crash (vxfs) to /dev/vx/dsk/rootdg/crashvol
VxVM vxcp_lvmroot INFO V-5-2-4678 10:58: Setting up disk c0t8d0s2 as a boot disk
VxVM vxcp_lvmroot INFO V-5-2-1638 10:59: Installing fstab and fixing dev nodes on new root FS
VxVM vxcp_lvmroot INFO V-5-2-2538 10:59: Installing bootconf & rootconf files in new stand FS
VxVM vxcp_lvmroot INFO V-5-2-2462 10:59: Current setboot values:
VxVM vxcp_lvmroot INFO V-5-2-2569 10:59: Primary:       0/0/0/0.0x0.0x0 /dev/rdisk/disk4
VxVM vxcp_lvmroot INFO V-5-2-2416 10:59: Alternate:      
VxVM vxcp_lvmroot INFO V-5-2-4676 10:59: Making disk /dev/rdisk/disk20_p2 the primary boot disk
VxVM vxcp_lvmroot INFO V-5-2-4663 10:59: Making disk /dev/rdisk/disk4_p2 the alternate boot disk
VxVM vxcp_lvmroot INFO V-5-2-4671 10:59: Disk c0t8d0s2 is now a VxVM rootable boot disk
root@robin:~#

Now to verify the new VxVM boot disk, first check the newly created rootdg diskgroup.

root@robin:~# vxprint -htg rootdg
DG NAME         NCONFIG      NLOG     MINORS   GROUP-ID
ST NAME         STATE        DM_CNT   SPARE_CNT         APPVOL_CNT
DM NAME         DEVICE       TYPE     PRIVLEN  PUBLEN   STATE
RV NAME         RLINK_CNT    KSTATE   STATE    PRIMARY  DATAVOLS  SRL
RL NAME         RVG          KSTATE   STATE    REM_HOST REM_DG    REM_RLNK
CO NAME         CACHEVOL     KSTATE   STATE
VT NAME         RVG          KSTATE   STATE    NVOLUME
V  NAME         RVG/VSET/CO  KSTATE   STATE    LENGTH   READPOL   PREFPLEX UTYPE
PL NAME         VOLUME       KSTATE   STATE    LENGTH   LAYOUT    NCOL/WID MODE
SD NAME         PLEX         DISK     DISKOFFS LENGTH   [COL/]OFF DEVICE   MODE
SV NAME         PLEX         VOLNAME  NVOLLAYR LENGTH   [COL/]OFF AM/NM    MODE
SC NAME         PLEX         CACHE    DISKOFFS LENGTH   [COL/]OFF DEVICE   MODE
DC NAME         PARENTVOL    LOGVOL
SP NAME         SNAPVOL      DCO
EX NAME         ASSOC        VC                       PERMS    MODE     STATE
SR NAME         KSTATE

dg rootdg       default      default  4466000  1276076559.38.robin

dm rootdisk01   c0t8d0s2     auto     1024     40035232 -

v  crashvol     -            ENABLED  ACTIVE   4194304  SELECT    -        fsgen
pl crashvol-01  crashvol     ENABLED  ACTIVE   4194304  CONCAT    -        RW
sd rootdisk01-09 crashvol-01 rootdisk01 28778496 4194304 0        c0t8d0s2 ENA

v  homevol      -            ENABLED  ACTIVE   155648   SELECT    -        fsgen
pl homevol-01   homevol      ENABLED  ACTIVE   155648   CONCAT    -        RW
sd rootdisk01-04 homevol-01  rootdisk01 7077888 155648  0         c0t8d0s2 ENA

v  optvol       -            ENABLED  ACTIVE   9560064  SELECT    -        fsgen
pl optvol-01    optvol       ENABLED  ACTIVE   9560064  CONCAT    -        RW
sd rootdisk01-05 optvol-01   rootdisk01 7233536 9560064 0         c0t8d0s2 ENA

v  rootvol      -            ENABLED  ACTIVE   1048576  SELECT    -        root
pl rootvol-01   rootvol      ENABLED  ACTIVE   1048576  CONCAT    -        RW
sd rootdisk01-03 rootvol-01  rootdisk01 6029312 1048576 0         c0t8d0s2 ENA

v  standvol     -            ENABLED  ACTIVE   1835008  SELECT    -        fsgen
pl standvol-01  standvol     ENABLED  ACTIVE   1835008  CONCAT    -        RW
sd rootdisk01-01 standvol-01 rootdisk01 0      1835008  0         c0t8d0s2 ENA

v  swapvol      -            ENABLED  ACTIVE   4194304  SELECT    -        swap
pl swapvol-01   swapvol      ENABLED  ACTIVE   4194304  CONCAT    -        RW
sd rootdisk01-02 swapvol-01  rootdisk01 1835008 4194304 0         c0t8d0s2 ENA

v  tmpvol       -            ENABLED  ACTIVE   524288   SELECT    -        fsgen
pl tmpvol-01    tmpvol       ENABLED  ACTIVE   524288   CONCAT    -        RW
sd rootdisk01-06 tmpvol-01   rootdisk01 16793600 524288 0         c0t8d0s2 ENA

v  usrvol       -            ENABLED  ACTIVE   6217728  SELECT    -        fsgen
pl usrvol-01    usrvol       ENABLED  ACTIVE   6217728  CONCAT    -        RW
sd rootdisk01-07 usrvol-01   rootdisk01 17317888 6217728 0        c0t8d0s2 ENA

v  varvol       -            ENABLED  ACTIVE   5242880  SELECT    -        fsgen
pl varvol-01    varvol       ENABLED  ACTIVE   5242880  CONCAT    -        RW
sd rootdisk01-08 varvol-01   rootdisk01 23535616 5242880 0        c0t8d0s2 ENA
root@robin:~#

Verify the contents of the LABEL file.

root@robin:~# vxvmboot -v /dev/rdsk/c0t8d0s2

LIF Label File @ (1k) block # 834 on VxVM Disk /dev/rdsk/c0t8d0s2:
Label Entry: 0, Boot Volume start:     3168; length: 1792 MB
Label Entry: 1, Root Volume start:  6032480; length: 1024 MB
Label Entry: 2, Swap Volume start:  1838176; length: 4096 MB
Label Entry: 3, Dump Volume start:  1838176; length: 4096 MB
root@robin:~#

Check the new boot paths and if everything is OK reboot the server.

root@robin:~# setboot -v
Primary bootpath : 0/0/0/0.0x8.0x0 (/dev/rdisk/disk20)
HA Alternate bootpath :
Alternate bootpath : 0/0/0/0.0x0.0x0 (/dev/rdisk/disk4)

Autoboot is ON (enabled)
setboot: error accessing firmware - Function is not available
The firmware of your system does not support querying or changing the SpeedyBoot
settings.
root@robin:~#
root@robin:~# shutdown -ry 0 

SHUTDOWN PROGRAM
06/09/10 11:11:37 WETDST

Broadcast Message from root (console) Wed Jun  9 11:11:37...
SYSTEM BEING BROUGHT DOWN NOW ! ! !

...

If everything went as expected the server will boot from the new disk and the migration process will be finished.
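
After the reboot a quick sanity check, nothing VxVM-specific, is to confirm that the root file systems and the primary swap now live on the rootdg volumes; for example:

root@robin:~# bdf / /stand
root@robin:~# swapinfo -t

Both should now report devices under /dev/vx/dsk/rootdg instead of /dev/vg00.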

Juanma.

The AVIO Lan drivers for Linux HPVM guests have been supported since HPVM 4.0, but as you will see enabling them is a little more complicated than in HP-UX guests.

The first prerequisite is to have the HPVM management software installed. Once you have this package installed, look for an RPM package called hpvm_lgssn in /opt/hpvm/guest-images/linux/DRIVERS.

root@hpvm-host:/opt/hpvm/guest-images/linux/DRIVERS # ll
total 584
 0 drwxr-xr-x 2 bin bin     96 Apr 13 18:47 ./
 0 drwxr-xr-x 5 bin bin     96 Apr 13 18:48 ../
 8 -r--r--r-- 1 bin bin   7020 Mar 27  2009 README
576 -rw-r--r-- 1 bin bin 587294 Mar 27  2009 hpvm_lgssn-4.1.0-3.ia64.rpm
root@hpvm-host:/opt/hpvm/guest-images/linux/DRIVERS #

Copy the package to the virtual machine with your favorite method and install it.

[sles10]:/var/tmp # rpm -ivh hpvm_lgssn-4.1.0-3.ia64.rpm
Preparing...                ########################################### [100%]
Installing...               ########################################### [100%]

[sles10]:/var/tmp #

Check the installation of the package.

[sles10]:~ # rpm -qa | grep hpvm
hpvm-4.1.0-1
hpvmprovider-4.1.0-1
hpvm_lgssn-4.1.0-3
[sles10]:~ #
[sles10]:~ # rpm -ql hpvm_lgssn
/opt/hpvm_drivers
/opt/hpvm_drivers/lgssn
/opt/hpvm_drivers/lgssn/LICENSE
/opt/hpvm_drivers/lgssn/Makefile
/opt/hpvm_drivers/lgssn/README
/opt/hpvm_drivers/lgssn/hpvm_guest.h
/opt/hpvm_drivers/lgssn/lgssn.h
/opt/hpvm_drivers/lgssn/lgssn_ethtool.c
/opt/hpvm_drivers/lgssn/lgssn_main.c
/opt/hpvm_drivers/lgssn/lgssn_recv.c
/opt/hpvm_drivers/lgssn/lgssn_recv.h
/opt/hpvm_drivers/lgssn/lgssn_send.c
/opt/hpvm_drivers/lgssn/lgssn_send.h
/opt/hpvm_drivers/lgssn/lgssn_trace.h
/opt/hpvm_drivers/lgssn/rh4
/opt/hpvm_drivers/lgssn/rh4/u5
/opt/hpvm_drivers/lgssn/rh4/u5/lgssn.ko
/opt/hpvm_drivers/lgssn/rh4/u6
/opt/hpvm_drivers/lgssn/rh4/u6/lgssn.ko
/opt/hpvm_drivers/lgssn/sles10
/opt/hpvm_drivers/lgssn/sles10/SP1
/opt/hpvm_drivers/lgssn/sles10/SP1/lgssn.ko
/opt/hpvm_drivers/lgssn/sles10/SP2
/opt/hpvm_drivers/lgssn/sles10/SP2/lgssn.ko
[sles10]:~ #

There are two ways to install the driver: compile it or use one of the pre-compiled modules. The pre-compiled modules are for the following distributions and kernels:

  • Red Hat 4 release 5 (2.6.9-55.EL)
  • Red Hat 4 release 6 (2.6.9-67.EL)
  • SLES10 SP1 (2.6.16.46-0.12)
  • SLES10 SP2 (2.6.16.60-0.21)

For other kernels you must compile the driver. The Linux box in this example had a supported kernel and distro (SLES10 SP2), but instead of using the pre-compiled module I decided to go through the whole process.

Go to /opt/hpvm_drivers/lgssn, where you will find the sources of the driver. To compile and install it, execute a simple make install.

[sles10]:/opt/hpvm_drivers/lgssn # make install
make -C /lib/modules/2.6.16.60-0.21-default/build SUBDIRS=/opt/hpvm_drivers/lgssn modules
make[1]: Entering directory `/usr/src/linux-2.6.16.60-0.21-obj/ia64/default'
make -C ../../../linux-2.6.16.60-0.21 O=../linux-2.6.16.60-0.21-obj/ia64/default modules
 CC [M]  /opt/hpvm_drivers/lgssn/lgssn_main.o
 CC [M]  /opt/hpvm_drivers/lgssn/lgssn_send.o
 CC [M]  /opt/hpvm_drivers/lgssn/lgssn_recv.o
 CC [M]  /opt/hpvm_drivers/lgssn/lgssn_ethtool.o
 LD [M]  /opt/hpvm_drivers/lgssn/lgssn.o
 Building modules, stage 2.
 MODPOST
 CC      /opt/hpvm_drivers/lgssn/lgssn.mod.o
 LD [M]  /opt/hpvm_drivers/lgssn/lgssn.ko
make[1]: Leaving directory `/usr/src/linux-2.6.16.60-0.21-obj/ia64/default'
find /lib/modules/2.6.16.60-0.21-default -name lgssn.ko -exec rm -f {} \; || true
find /lib/modules/2.6.16.60-0.21-default -name lgssn.ko.gz -exec rm -f {} \; || true
install -D -m 644 lgssn.ko /lib/modules/2.6.16.60-0.21-default/kernel/drivers/net/lgssn/lgssn.ko
/sbin/depmod -a || true
[sles10]:/opt/hpvm_drivers/lgssn #

This will copy the driver to /lib/modules/<KERNEL_VERSION>/kernel/drivers/net/lgssn/.

To ensure that the new driver is loaded during operating system startup, first add the following line to /etc/modprobe.conf, one line for each interface configured for AVIO Lan.

alias eth1 lgssn

The HPVM 4.2 manual says you have to issue depmod -a in order to inform the kernel about the change, but if you look at the log above you will see that the last command executed by make install is already a depmod -a. Look into the modules.dep file to check that the corresponding line for the lgssn driver has been added.

[sles10]:~ # grep lgssn /lib/modules/2.6.16.60-0.21-default/modules.dep
/lib/modules/2.6.16.60-0.21-default/kernel/drivers/net/lgssn/lgssn.ko:
[sles10]:~ #

At this point, if you have previously reconfigured the virtual machine, load the module and restart the network services.

[sles10]:/opt/hpvm_drivers/lgssn # insmod /lib/modules/2.6.16.60-0.21-default/kernel/drivers/net/lgssn/lgssn.ko
[sles10]:/opt/hpvm_drivers/lgssn # lsmod |grep lgssn
lgssn                 576136  0
[sles10]:/opt/hpvm_drivers/lgssn #
[sles10]:/opt/hpvm_drivers/lgssn # service network restart
Shutting down network interfaces:
    eth0      device: Intel Corporation 82540EM Gigabit Ethernet Controller
    eth0      configuration: eth-id-2a:87:14:5c:f9:ed
    eth0                                                              done
    eth1      device: Hewlett-Packard Company Unknown device 1338
    eth1      configuration: eth-id-66:f3:f8:4e:37:d5
    eth1                                                              done
    eth2      device: Intel Corporation 82540EM Gigabit Ethernet Controller
    eth2      configuration: eth-id-0a:dc:fd:cb:2c:62
    eth2                                                              done
Shutting down service network  .  .  .  .  .  .  .  .  .  .  .  .  .  done
Hint: you may set mandatory devices in /etc/sysconfig/network/config
Setting up network interfaces:
    lo        
    lo       
              IP address: 127.0.0.1/8   
              IP address: 127.0.0.2/8   
Checking for network time protocol daemon (NTPD):                     running
    lo                                                                done
    eth0      device: Intel Corporation 82540EM Gigabit Ethernet Controller
    eth0      configuration: eth-id-2a:87:14:5c:f9:ed
Warning: Could not set up default route via interface
 Command ip route replace to default via 10.31.12.1 returned:
 . RTNETLINK answers: Network is unreachable
 Configuration line: default 10.31.12.1 - -
 This needs NOT to be AN ERROR if you set up multiple interfaces.
 See man 5 routes how to avoid this warning.

Checking for network time protocol daemon (NTPD):                     running
    eth0                                                              done
    eth1      device: Hewlett-Packard Company Unknown device 1338
    eth1      configuration: eth-id-66:f3:f8:4e:37:d5
    eth1      IP address: 10.31.4.16/24   
Warning: Could not set up default route via interface
 Command ip route replace to default via 10.31.12.1 returned:
 . RTNETLINK answers: Network is unreachable
 Configuration line: default 10.31.12.1 - -
 This needs NOT to be AN ERROR if you set up multiple interfaces.
 See man 5 routes how to avoid this warning.

Checking for network time protocol daemon (NTPD):                     running
    eth1                                                              done
    eth2      device: Intel Corporation 82540EM Gigabit Ethernet Controller
    eth2      configuration: eth-id-0a:dc:fd:cb:2c:62
    eth2      IP address: 10.31.12.11/24   
Checking for network time protocol daemon (NTPD):                     running
    eth2                                                              done
Setting up service network  .  .  .  .  .  .  .  .  .  .  .  .  .  .  done
[sles10]:/opt/hpvm_drivers/lgssn #

If you have not yet reconfigured the network interfaces of the virtual machine, shut it down and, from the host, modify each virtual NIC of the guest. Take into account that AVIO Lan drivers are not supported with localnet virtual switches.

root@hpvm-host:~ # hpvmmodify -P sles10 -m network:avio_lan:0,2:vswitch:vlan2:portid:4
root@hpvm-host:~ # hpvmstatus -P sles10 -d
[Virtual Machine Devices]
...
[Network Interface Details]
network:lan:0,0,0x2A87145CF9ED:vswitch:localnet:portid:4
network:avio_lan:0,1,0x66F3F84E37D5:vswitch:vlan1:portid:4
network:avio_lan:0,2,0x0ADCFDCB2C62:vswitch:vlan2:portid:4
...
root@hpvm-host:~ #

Finally start the virtual machine and check that everything went well and the drivers have been loaded.
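
A minimal check, assuming the guest is called sles10 as in this example and eth1 is one of the AVIO interfaces: start the guest from the host and, once it is up, confirm that the interface is bound to the lgssn module.

root@hpvm-host:~ # hpvmstart -P sles10
[sles10]:~ # lsmod | grep lgssn
[sles10]:~ # ethtool -i eth1

The driver field reported by ethtool -i should read lgssn.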

Juanma

The boot disk/disks of every Integrity server are divided into three partitions:

  1. EFI Partition: Contains the necessary tools and files to find and load the appropriate kernel. Here resides for example the hpux.efi utility.
  2. OS Partition: In the case of HP-UX it contains the LVM or VxVM structure, the kernel and any filesystem that plays a role during the boot process.
  3. HP Service Partition (HPSP).

EFI Partition

The Extensible Firmware Interface (EFI) partition is subdivided into three main areas:

  • MBR: The Master Boot Record, located at the top of the disk, a legacy Intel structure ignored by EFI.
  • GPT: Every partition on the disk is assigned a unique identifier known as a GUID (Globally Unique Identifier). The locations of the GUIDs are stored in the EFI GUID Partition Table, or GPT. This critical structure is replicated at the top and the bottom of the disk.
  • EFI System Partition: This partition contains the OS loader responsible for loading the operating system during the boot process. On HP-UX disks the OS loader is the famous \efi\hpux\hpux.efi file. It also contains the \efi\hpux\auto file, which stores the system boot string, and some utilities as well.
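
On a running HP-UX system the contents of the EFI System Partition can be listed with the efi_ls utility; a small sketch, assuming the boot disk is c0t0d0 and its EFI partition is exposed as the s1 device file:

root@host:/# efi_ls -d /dev/rdsk/c0t0d0s1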

OS Partition

The OS Partition obviously contains the operating system that runs on the server. An HP-UX partition contains a LIF area, a private region and a public region.

The Logical Interchange Format (LIF) boot area stores the following files:

  • ISL. Not used on Integrity.
  • AUTO. Not used on Integrity.
  • HPUX. Not used on Integrity.
  • LABEL. A binary file that contains the records of the locations of /stand and the primary swap.
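
On a live system the LIF directory can be listed with lifls; a sketch, assuming the boot disk is c0t0d0 and the HP-UX partition is the s2 device file:

root@host:/# lifls -l /dev/rdsk/c0t0d0s2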

The private region contains LVM and VxVM configuration information.

And the public region contains the corresponding volumes for:

  • stand: /stand filesystem including the HP-UX kernel.
  • swap: Primary swap space.
  • root: The root filesystem that includes /, /etc, /dev and /sbin.

HP Service Partition

The HP Service Partition, or HPSP, is a FAT-32 filesystem that contains several offline diagnostic utilities to be used on unbootable systems.

Juanma.

Like other virtualization software, HP Integrity Virtual Machines comes with several memory management capabilities. In this new post about HPVM I will try to explain what these capabilities are, their purpose, and how to configure and use them.

  • Dynamic memory

Dynamic memory is an HPVM feature that allows you to resize the amount of memory of a guest without rebooting it. The HPVM manual mentions an example in which dynamic memory is applicable.

…this feature allows a guest that is a Serviceguard node to be used as a standby server for multiple Serviceguard packages. When a package fails over to the guest, the guest memory can be changed to suit the requirements of the package before, during, and after the failover process.

Dynamic memory is only available on HP-UX guests with the guest management software installed.

Let's see how to enable and configure dynamic memory.

The first thing to do is to enable dynamic memory.

root@hinata:~ # hpvmmodify -P batman -x ram_dyn_type=driver

There are three possible values for the ram_dyn_type option:

  1. None: Self explanatory.
  2. Any: At the next boot the guest will check whether dynamic memory is enabled and the driver is loaded. If the dynamic memory driver is in place the option changes its value to driver.
  3. Driver: When ram_dyn_type is set to driver, all dynamic memory controls and ranges are functional.

Specify the minimum amount of RAM to be allocated to the guest; the default unit is MB but GB can also be used.

root@hinata:~ # hpvmmodify -P batman -x ram_dyn_min=1024

Next set the maximum memory.

root@hinata:~ # hpvmmodify -P batman -x ram_dyn_max=4G

Set the amount of memory to be allocated when the guest starts; this value must be greater than the minimum one.

root@hinata:~ # hpvmmodify -P batman -x ram_dyn_target_start=2048

Check the status of the guest to see the newly configured options.

root@hinata:~ # hpvmstatus -r -P batman
[Virtual Machine entitlements]
 Percent       Cumulative
#VCPUs Entitlement Maximum   Usage            Usage
====== =========== ======= ======= ================
 6       10.0%  100.0%    0.0%                0 

[Virtual CPU details]
vCPU Cumulative       Guest   Host    Cycles   Sampling
ID   Usage            percent percent achieved Interval
==== ================ ======= ======= ======== ===========
 0                0    0.0%    0.0%     0MHz   0 seconds
 1                0    0.0%    0.0%     0MHz   0 seconds
 2                0    0.0%    0.0%     0MHz   0 seconds
 3                0    0.0%    0.0%     0MHz   0 seconds
 4                0    0.0%    0.0%     0MHz   0 seconds
 5                0    0.0%    0.0%     0MHz   0 seconds 

[Virtual Machine Memory Entitlement]
DynMem  Memory   DynMem  DynMem DynMem  Comfort Total    Free   Avail    Mem    AMR     AMR
 Min   Entitle   Max    Target Current   Min   Memory  Memory  Memory  Press  Chunk   State
======= ======= ======= ======= ======= ======= ======= ======= ======= ===== ======= ========
 1024MB     0MB     4GB  4096MB  4096MB     0MB     4GB     0MB     0MB   0       0MB DISABLED

Once dynamic memory is properly configured, from the VM host, the memory of a guest can be manually resized to a value between the ram_dyn_min and ram_dyn_max parameters in increments of the default chunk size, which is 64MB.

root@hinata:~ # hpvmmodify -P batman -x ram_target=3136

There is one final option named dynamic_memory_control. With this option the system administrator can allow the root user of the guest to change dynamic memory settings, from the guest side, while it is running. The dynamic_memory_control option is incompatible with automatic memory reallocation.
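
A sketch of both sides of that, reusing the guest from this example; the guest-side resize with hpvmmgmt is only an illustration of what becomes possible once the option is set:

root@hinata:~ # hpvmmodify -P batman -x dynamic_memory_control=1
root@batman:~# hpvmmgmt -x ram_target=2048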

Just to show a small example from the guest side, to view the dynamic memory configuration:

root@batman:~# hpvmmgmt -V -l ram
[Dynamic Memory Information]
=======================================
Type                    : driver
Minimum memory          : 1024 MB
Target memory           : 4090 MB
Maximum memory          : 4096 MB
Current memory          : 4090 MB
Comfortable minimum     : 1850 MB
Boot memory             : 4090 MB
Free memory             : 2210 MB
Available memory        : 505 MB
Memory pressure         : 0
Memory chunksize        : 65536 KB
Driver Mode(s)          : STARTED ENABLED 

root@batman:~#

  • Automatic memory reallocation

The new HPVM 4.2 version from March expands dynamic memory with an interesting feature called Automatic Memory Reallocation. This new feature makes it possible to automatically adjust the amount of memory used by a guest based on load conditions.

Automatic memory reallocation is only supported on HP-UX guests with dynamic memory enabled and with the guest management software installed.

Automatic memory reallocation can be configured in two ways:

  1. System-wide values.
  2. On a per-VM basis.

The two are not mutually exclusive: you can set the system-wide parameters for every VM and later customize some of the virtual machines, adjusting their parameters to any additional requirement.

Automatic memory reallocation is enabled by default on the VM host. To verify it, open the file /etc/rc.config.d/hpvmconf and check that HPVMAMRENABLE=0 is not set. The presence of hpvmamrd, the automatic memory reallocation daemon, can also be checked with a simple ps.
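
For example, a quick check using nothing more than the file and daemon just mentioned:

root@hinata:~ # grep HPVMAMRENABLE /etc/rc.config.d/hpvmconf
root@hinata:~ # ps -ef | grep -i amr | grep -v grep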

In the same file two system-wide tunables can be configured.

  1. HPVMCHUNKSIZE
  2. HPVMAMRWAITTIME

The first parameter determines the number of megabytes by which a guest will attempt to grow if there is memory pressure. If the parameter is not set the default value is 256MB. The best practice is to make this parameter a multiple of the dynamic memory chunk size.

The second one sets the maximum number of seconds that any VM startup process will wait for memory before reporting a failure due to insufficient memory. The default value is 60 seconds and the maximum configurable value is 600 seconds.
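
If you need values other than the defaults, they go into /etc/rc.config.d/hpvmconf as plain shell variables; the figures below are only an illustration:

HPVMCHUNKSIZE=256
HPVMAMRWAITTIME=120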

With the above parameters left at their defaults or customized, the next step is to enable automatic memory reallocation in the virtual machines. The AMR feature is DISABLED by default on the VMs. To enable it, use the amr_enable option.

root@hinata:~ # hpvmmodify -P batman -x amr_enable=1

Now set the memory entitlement for the virtual machine. The entitlement is the minimum amount of RAM guaranteed to the virtual machine.

root@hinata:~ # hpvmmodify -P batman -x ram_dyn_entitlement=1500

Take into account that if AMR is not enabled the entitlement can still be set but it will have no effect, and any VM without the entitlement parameter set will be ignored by automatic memory reallocation.

The entitlement value can be modified online by the system administrator at any time, but there are some rules that apply:

  1. If there is not enough memory to grow the VM memory to the specified entitlement the operation will fail.
  2. The memory of a virtual machine cannot be grown beyond its maximum memory.
  3. The virtual machine memory always has to be set to a value between the ram_dyn_min and ram_dyn_max parameters.

When the memory of a guest is resized, the HPVMCHUNKSIZE value is used by default, but a per-VM chunk size can also be set. To do so use the amr_chunk_size parameter.

root@hinata:~ # hpvmmodify -P batman -x amr_chunk_resize=512

As with the system-wide parameter, the recommendation is to set the chunk size to a multiple of the dynamic memory chunk size.

Finally, to display the configuration and the current use of the virtual machines' resource entitlements, use hpvmstatus -r.

root@hinata:~ # hpvmstatus -r
[Virtual Machine Resource Entitlement]
[Virtual CPU entitlement]
 Percent       Cumulative
Virtual Machine Name VM #  #VCPUs Entitlement Maximum   Usage            Usage
==================== ===== ====== =========== ======= ======= ================
rh-www                   1      4       50.0%  100.0%    0.0%                0
sql-dev                  2      4       50.0%  100.0%    0.3%         21611866
rhino                    3      4       50.0%  100.0%    0.0%                0
batman                   4      8       20.0%  100.0%    0.8%          1318996
robin                    5      8       20.0%  100.0%    0.8%            97993
falcon                   6      2       10.0%  100.0%    0.0%                0 

[Virtual Machine Memory Entitlement]
 DynMem  Memory   DynMem  DynMem DynMem  Comfort Total    Free   Avail    Mem    AMR     AMR
Virtual Machine Name  VM #   Min   Entitle   Max    Target Current   Min   Memory  Memory  Memory  Press  Chunk   State
==================== ===== ======= ======= ======= ======= ======= ======= ======= ======= ======= ===== ======= ========
rh-www                   1   512MB     0MB     8GB  8192MB  8192MB     0MB     8GB     0MB     0MB   0       0MB DISABLED
sql-dev                  2   512MB     0MB     8GB  8192MB  8192MB     0MB     8GB     0MB     0MB   0       0MB DISABLED
rhino                    3  1024MB  1500MB     6GB  2048MB  6144MB     0MB     6GB     0MB     0MB   0     256MB  ENABLED
batman                   4  1024MB  1500MB     4GB  4090MB  4090MB  1850MB     4GB  2214MB   500MB   0     256MB  ENABLED
robin                    5  1024MB  1500MB     4GB  4090MB  4090MB  1914MB     4GB  2165MB   531MB   0     256MB  ENABLED
falcon                   6   512MB     0MB     6GB  6144MB  6144MB     0MB     6GB     0MB     0MB   0       0MB DISABLED

I hope this helps to clarify how HPVM manages the memory of the virtual machines and how to customize its configuration. As always any comment would be welcome :-)

Juanma.

Last week was, without any doubt, one of the most exciting weeks of the year. The new Integrity servers have finally been unveiled.

This whole new line of Integrity machines is based on Tukwila, the latest iteration of the Itanium processor line, which was presented by Intel early this year, and with one exception all of them use the blade form factor. Let's take a quick look at the new servers.

  • Entry-level

In this area, and as the only rack server of the new line, we have the rx2800. At first look it seems no more than a remake of the rx2660, but if you go deeper you will find a powerful machine with two quad-core or dual-core Itanium 9300 processors and a maximum of 192GB of RAM.

That’s a considerable amount of power for a server of this kind. I personally like this server and I have to convince my manager to kindly donate one for my home lab ;-)

  • Mid-range

In the mid-range line there are three beautiful babies named BL860c_i2, BL870c_i2 and BL890c_i2.

The key to these new servers is modularity; the BL860c_i2 is the base of its bigger sisters. HP has developed a new piece of hardware known as the Integrity Blade Link Assembly, which makes it possible to combine blade modules. The 870 is composed of two blade modules and the 890 of four. The 860 is no more than a single blade module with a single Blade Link Assembly on its front. This way of combining the blades makes the 890 the only 8-socket blade currently available.

The 870 and the 890, with 16 and 32 cores respectively, are the logical replacement for the rx7640 and rx8640, but as many people have been saying since they were publicly presented there is the OLAR question, or rather the apparent lack of OLAR, which was in fact one of the key features of the mid-range cell-based Integrity servers. We'll see how this issue is solved.

  • High-End

The new rx2800 and the new blades are great, but the real shock for everybody came when HP announced the new Superdome 2. Ladies and gentlemen, the new mission critical computing era is here: forget those fat and proprietary racks, forget everything you know about high-end servers and welcome to blade land.

This new version of the HP flagship is based on the blade concept. Instead of cells we have cell-blades inside a new 18U enclosure based on the HP C7000 Blade Enclosure. Just remember one word… commonality. The new Superdome 2 will share a lot of parts with the C7000 and can also be managed through the same tools, like the Onboard Administrator.

The specs of this baby are astonishing and during the presentation at the HP Technology At Work event four different configurations were outlined ranging from 8 sockets/32 cores in four blade-cells to a maximum of 64 sockets/256 cores in 32 cell-blades distributed through four enclosures in two racks. Like I said, astonishing :-D

There have been a lot of rumors during the last year about the future of HP-UX and Itanium, mainly because of the delays of the Tukwila processor. The discussion has recently reached ITRC.

But if any of you had doubts about HP-UX's future, I firmly believe that HP sent a clear message in the opposite direction. HP-UX is probably the most robust and reliable Unix in the enterprise arena. And seriously, what are you going to use to replace it? Linux? Solaris? Please ;-)

Juanma.

I was playing this afternoon with DRD on an 11.23 machine and, just after launching the clone process, I decided to stop it with Ctrl-C since I wasn't logging the session and wanted to. The process stopped with an error and I was sent back to the shell.

       * Copying File Systems To New System Image
ERROR:   Exiting due to keyboard interrupt.
       * Unmounting New System Image Clone
       * System image: "sysimage_001" on disk "/dev/dsk/c2t1d0"
ERROR:   Caught signal SIGINT.  This process is running critical code, this signal will be handled shortly.
ERROR:   Caught signal SIGINT.  This process is running critical code, this signal will be handled shortly.
ERROR:   Caught signal SIGINT.  This process is running critical code, this signal will be handled shortly.
ERROR:   Caught signal SIGINT.  This process is running critical code, this signal will be handled shortly.
ERROR:   Unmounting the file system fails.
         - Unmounting the clone image fails.
         - The "umount" command returned  "13". The "sync" command returned  "0". The error messages produced are the following: ""
       * Unmounting New System Image Clone failed with 5 errors.
       * Copying File Systems To New System Image failed with 6 errors.
=======  04/21/10 08:20:19 EDT  END Clone System Image failed with 6 errors. (user=root)  (jobid=ivm-v2)

I know it is a very bad idea, but it's not a production server, just a virtual machine I use to perform tests. In fact my stupid behavior gave me the opportunity to discover and play with a pretty funny bunch of errors. Here is how I managed to resolve them.

I launched the clone process again in preview mode, just in case, and DRD failed with the following error.

[ivm-v2]/ # drd clone -p -v -t /dev/dsk/c2t1d0

=======  04/21/10 08:22:01 EDT  BEGIN Clone System Image Preview (user=root)  (jobid=ivm-v2)

        * Reading Current System Information
        * Selecting System Image To Clone
        * Selecting Target Disk
 ERROR:   Selection of the target disk fails.
          - Selecting the target disk fails.
          - Validation of the disk "/dev/dsk/c2t1d0" fails with the following error(s):
          - Target volume group device entry "/dev/drd00" exists. Run "drd umount" before proceeding.
        * Selecting Target Disk failed with 1 error.

=======  04/21/10 08:22:10 EDT  END Clone System Image Preview failed with 1 error. (user=root)  (jobid=ivm-v2)

It seems that the original process just left the image mounted, but trying drd umount as the DRD output suggested didn't work. The image was only partially created; yeah, I created a beautiful mess ;-)

At that point, and in another “clever” move, instead of simply removing the drd00 volume group I just deleted /dev/drd00… who’s da man!! Or, like we say in Spain, ¡Con dos cojones!

DRD, of course, failed with a new error.

ERROR:   Selection of the target disk fails.
         - Selecting the target disk fails.
         - Validation of the disk "/dev/dsk/c2t1d0" fails with the following error(s):
         - Target volume group "/dev/drd00" found in logical volume table. "/etc/lvmtab" is corrupt and must be fixed before proceeding.
       * Selecting Target Disk failed with 1 error.

Well it wasn’t so bad. I recreated /etc/lvmtab and yes… I fired up my friend Dynamic Root Disk in preview mode.

[ivm-v2]/ # rm -f /etc/lvmtab
[ivm-v2]/ # vgscan -v
Creating "/etc/lvmtab".
vgscan: Couldn't access the list of physical volumes for volume group "/dev/vg00".
Invalid argument
Physical Volume "/dev/dsk/c3t2d0" contains no LVM information

/dev/vg00
/dev/dsk/c2t0d0s2

Following Physical Volumes belong to one Volume Group.
Unable to match these Physical Volumes to a Volume Group.
Use the vgimport command to complete the process.
/dev/dsk/c2t1d0s2

Scan of Physical Volumes Complete.
*** LVMTAB has been created successfully.
*** Do the following to resync the information on the disk.
*** #1.  vgchange -a y
*** #2.  lvlnboot -R
[ivm-v2]/ # lvlnboot -R
Volume Group configuration for /dev/vg00 has been saved in /etc/lvmconf/vg00.conf
[ivm-v2]/ #
[ivm-v2]/ # drd clone -p -v -t /dev/dsk/c2t1d0

=======  04/21/10 08:26:06 EDT  BEGIN Clone System Image Preview (user=root)  (jobid=ivm-v2)

       * Reading Current System Information
       * Selecting System Image To Clone
       * Selecting Target Disk
ERROR:   Selection of the target disk fails.
         - Selecting the target disk fails.
         - Validation of the disk "/dev/dsk/c2t1d0" fails with the following error(s):
         - The disk "/dev/dsk/c2t1d0" contains data. To overwrite this disk use the option "-x overwrite=true".
       * Selecting Target Disk failed with 1 error.

=======  04/21/10 08:26:13 EDT  END Clone System Image Preview failed with 1 error. (user=root)  (jobid=ivm-v2)

I couldn't believe it. Another error? Why in the hell did I get involved with DRD? But I am a sysadmin, and a stubborn one. I looked at the disk and discovered that it had been partitioned by the first failed DRD cloning process. I just wiped out the whole disk with idisk and, just in case, I used the overwrite option.

[ivm-v2]/ # idisk -p /dev/rdsk/c2t1d0
idisk version: 1.31

EFI Primary Header:
        Signature                 = EFI PART
        Revision                  = 0x10000
        HeaderSize                = 0x5c
        HeaderCRC32               = 0xe19d8a07
        MyLbaLo                   = 0x1
        AlternateLbaLo            = 0x1117732f
        FirstUsableLbaLo          = 0x22
        LastUsableLbaLo           = 0x1117730c
        Disk GUID                 = d79b52fa-4d43-11df-8001-d6217b60e588
        PartitionEntryLbaLo       = 0x2
        NumberOfPartitionEntries  = 0xc
        SizeOfPartitionEntry      = 0x80
        PartitionEntryArrayCRC32  = 0xca7e53ce

  Primary Partition Table (in 512 byte blocks):
    Partition 1 (EFI):
        Partition Type GUID       = c12a7328-f81f-11d2-ba4b-00a0c93ec93b
        Unique Partition GUID     = d79b550c-4d43-11df-8002-d6217b60e588
        Starting Lba              = 0x22
        Ending Lba                = 0xfa021
    Partition 2 (HP-UX):
        Partition Type GUID       = 75894c1e-3aeb-11d3-b7c1-7b03a0000000
        Unique Partition GUID     = d79b5534-4d43-11df-8003-d6217b60e588
        Starting Lba              = 0xfa022
        Ending Lba                = 0x110af021
    Partition 3 (HPSP):
        Partition Type GUID       = e2a1e728-32e3-11d6-a682-7b03a0000000
        Unique Partition GUID     = d79b5552-4d43-11df-8004-d6217b60e588
        Starting Lba              = 0x110af022
        Ending Lba                = 0x11177021

[ivm-v2]/ #
[ivm-v2]/ # idisk -R /dev/rdsk/c2t1d0
idisk version: 1.31
********************** WARNING ***********************
If you continue you will destroy all partition data on this disk.
Do you wish to continue(yes/no)? yes

Don’t know why but I was pretty sure that DRD was going to fail again… and it did.

=======  04/21/10 08:27:02 EDT  BEGIN Clone System Image Preview (user=root)  (jobid=ivm-v2)

       * Reading Current System Information
       * Selecting System Image To Clone
       * Selecting Target Disk
       * The disk "/dev/dsk/c2t1d0" contains data which will be overwritten.
       * Selecting Volume Manager For New System Image
       * Analyzing For System Image Cloning
ERROR:   Analysis of file system creation fails.
         - Analysis of target fails.
         - The analysis step for creation of an inactive system image failed.
         - The default DRD mount point "/var/opt/drd/mnts/sysimage_001/" cannot be used due to the following error(s):
         - The mount point /var/opt/drd/mnts/sysimage_001/ is not an empty directory as required.
       * Analyzing For System Image Cloning failed with 1 error.

=======  04/21/10 08:27:09 EDT  END Clone System Image Preview failed with 1 error. (user=root)  (jobid=ivm-v2)

After a quick check I found that the original image was still mounted.

[ivm-v2]/ # mount
/ on /dev/vg00/lvol3 ioerror=mwdisable,delaylog,dev=40000003 on Wed Apr 21 07:29:37 2010
/stand on /dev/vg00/lvol1 ioerror=mwdisable,log,tranflush,dev=40000001 on Wed Apr 21 07:29:38 2010
/var on /dev/vg00/lvol8 ioerror=mwdisable,delaylog,dev=40000008 on Wed Apr 21 07:29:50 2010
/usr on /dev/vg00/lvol7 ioerror=mwdisable,delaylog,dev=40000007 on Wed Apr 21 07:29:50 2010
/tmp on /dev/vg00/lvol4 ioerror=mwdisable,delaylog,dev=40000004 on Wed Apr 21 07:29:50 2010
/opt on /dev/vg00/lvol6 ioerror=mwdisable,delaylog,dev=40000006 on Wed Apr 21 07:29:50 2010
/home on /dev/vg00/lvol5 ioerror=mwdisable,delaylog,dev=40000005 on Wed Apr 21 07:29:50 2010
/net on -hosts ignore,indirect,nosuid,soft,nobrowse,dev=1 on Wed Apr 21 07:30:26 2010
/var/opt/drd/mnts/sysimage_001 on /dev/drd00/lvol3 ioerror=nodisable,delaylog,dev=40010003 on Wed Apr 21 08:19:46 2010
/var/opt/drd/mnts/sysimage_001/stand on /dev/drd00/lvol1 ioerror=nodisable,delaylog,dev=40010001 on Wed Apr 21 08:19:46 2010
/var/opt/drd/mnts/sysimage_001/tmp on /dev/drd00/lvol4 ioerror=nodisable,delaylog,dev=40010004 on Wed Apr 21 08:19:46 2010
/var/opt/drd/mnts/sysimage_001/home on /dev/drd00/lvol5 ioerror=nodisable,delaylog,dev=40010005 on Wed Apr 21 08:19:46 2010
/var/opt/drd/mnts/sysimage_001/opt on /dev/drd00/lvol6 ioerror=nodisable,delaylog,dev=40010006 on Wed Apr 21 08:19:46 2010
/var/opt/drd/mnts/sysimage_001/usr on /dev/drd00/lvol7 ioerror=nodisable,delaylog,dev=40010007 on Wed Apr 21 08:19:46 2010
/var/opt/drd/mnts/sysimage_001/var on /dev/drd00/lvol8 ioerror=nodisable,delaylog,dev=40010008 on Wed Apr 21 08:19:47 2010
[ivm-v2]/ #

I had to unmount the filesystems of the image one by one and, after almost committing suicide with a rack rail, I launched the clone again, this time without the preview; if I was going to play a stupid role, at least it was going to be the most stupid one in the world x-)

[ivm-v2]/ # drd clone -x overwrite=true -v -t /dev/dsk/c2t1d0

=======  04/21/10 08:38:22 EDT  BEGIN Clone System Image (user=root)  (jobid=rx260-02)

       * Reading Current System Information
       * Selecting System Image To Clone
       * Selecting Target Disk
       * The disk "/dev/dsk/c2t1d0" contains data which will be overwritten.
       * Selecting Volume Manager For New System Image
       * Analyzing For System Image Cloning
       * Creating New File Systems
ERROR:   Clone file system creation fails.
         - Creating the target file systems fails.
         - Command "/opt/drd/lbin/drdconfigure" fails with the return code 255. The entire output from the command is given below:
         - Start of output from /opt/drd/lbin/drdconfigure:
         -        * Creating LVM physical volume "/dev/rdsk/c2t1d0s2" (0/1/1/0.1.0).
                  * Creating volume group "drd00".
           ERROR:   Command "/sbin/vgcreate -A n -e 4356 -l 255 -p 16 -s 32 /dev/drd00
                    /dev/dsk/c2t1d0s2" failed.

         - End of output from /opt/drd/lbin/drdconfigure
       * Creating New File Systems failed with 1 error.
       * Unmounting New System Image Clone
       * System image: "sysimage_001" on disk "/dev/dsk/c2t1d0"

=======  04/21/10 08:38:46 EDT  END Clone System Image failed with 1 error. (user=root)  (jobid=rx260-02)

[ivm-v2]/ #

I thought every possible error was fixed, but there it was: DRD saying that it failed with a bogus "return code 255". Oh yes, very insightful, because it's not a 254 or a 256, it is a 255, and everybody knows what that means… Shit! I don't know what it means. Yes, it was true, I didn't know what "return code 255" stood for. After a small search on ITRC there was only one entry about a similar case, only one. I managed to create a beautiful error, don't you think?

The thing is that there was a mismatch between the minor numbers the kernel believed were in use and those actually visible in the device files. DRD always tries to use the next free minor number based on the device files, and since in my case only one was visible there but the kernel thought there were two in use, one from vg00 and another from the failed clone, it failed.
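
To see which group minor numbers the device files actually claim, a simple listing is enough; this quick check is my addition, not part of the original output:

[ivm-v2]/ # ll /dev/*/group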

The solution is to cheat the kernel by creating a fake group device file using the minor number the kernel thinks is in use.

[ivm-v2]/dev # mkdir fake
[ivm-v2]/dev # cd fake
[ivm-v2]/dev/fake # mknod group c 64 0x010000
[ivm-v2]/dev/fake #

After that I launched DRD and everything went smoothly.
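
Once the clone has finished the fake node has served its purpose and, as a simple cleanup of my own (not part of the original fix), can be removed:

[ivm-v2]/ # rm /dev/fake/group
[ivm-v2]/ # rmdir /dev/fake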

Fortunately everything happened in a test virtual machine, and at any step of my frustrating trip through self-generated DRD errors I could have reset the VM and started over with a clean system, but since the purpose of Dynamic Root Disk is to minimize the downtime of production systems a reboot was not an option, at least not the first one on the list.

The credit for the solution goes to Judit Wathen from the Dynamic Root Disk Team at HP, continue with your great work :-D

Juanma.

As I have already said many times, my current HPVM version is 3.5, so it doesn't support guest online migration. But lacking the online migration feature doesn't mean that we cannot migrate Integrity VMs between hosts.

Currently there are two methods to perform migrations:

  • HPVM commands.
  • MC/ServiceGuard.

In this post I will only cover the HPVM way. I will leave HPVM ServiceGuard clusters for a future post but as many of you already know moving a guest between cluster nodes is like moving any other ServiceGuard package since the guests are managed by SG as packages.

PREREQUISITES:

There is a list of prerequisites the guest has to meet in order to be successfully migrated between hosts.

  • Off-line state:

This is pretty obvious, of course: the guest must be off.

  • SSH configuration:

On each host, root must have SSH access to the other host through public key authentication.

root@ivmcl01:~ # hpvmmigrate -P hpvm1 -h ivmcl02
hpvmmigrate: SSH execution error. Make sure ssh is setup right on both source and target systems.
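
A minimal way to set that up, sketched with the host names of this example: generate a key for root and append its public key to the other host's authorized_keys.

root@ivmcl01:~ # ssh-keygen -t rsa
root@ivmcl01:~ # cat ~/.ssh/id_rsa.pub | ssh ivmcl02 'cat >> ~/.ssh/authorized_keys'
root@ivmcl01:~ # ssh ivmcl02 hostname

The last command should return the remote hostname without asking for a password; repeat the exchange in the opposite direction from ivmcl02.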
  • Shared devices:

If the guest has a shared device like the CD/DVD of the host, the device has to be deleted from the guest configuration.

root@ivmcl01:~ # hpvmmigrate -P hpvm1 -h ivmcl02
hpvmmigrate: Device /dev/rdsk/c1t4d0 is shared.  Guest with shared storage devices cannot be migrated.
  • Storage devices:

There are two considerations about storage devices.

The storage devices of the guest must be physical disks. Migration of guests with lvols as storage devices is only supported starting with the HPVM 4.1 release.

root@ivmcl01:~ # hpvmmigrate -P hpvm1 -h ivmcl02
hpvmmigrate: Target VM Host error - Device does not exist.
hpvmmigrate: See HPVM command log file on target VM Host for more detail.

The WWID of the device must be the same in both HPVM hosts.

root@ivmcl01:~ # hpvmmigrate -P hpvm1 -h ivmcl02
hpvmmigrate: Device WWID does not match.
hpvmmigrate: See HPVM command log file on target VM Host for more detail.
  • Network configuration:

The virtual switch the guest is connected to must be configured on the same network card in both hosts. For example, if vswitch vlan2 is using lan0 on host1, it must be using lan0 on host2 or the migration will fail.

root@ivmcl01:~ # hpvmmigrate -P hpvm1 -h ivmcl02
hpvmmigrate: Target VM Host error - vswitch validation failed.
hpvmmigrate: See HPVM command log file on target VM Host for more detail.
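
To compare the vswitch-to-NIC mapping up front, hpvmnet can be run on both hosts; a quick check with the host names of this example (the output lists each vswitch with the interface backing it):

root@ivmcl01:~ # hpvmnet
root@ivmcl02:~ # hpvmnet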

PROCEDURE:

If all the prerequisites explained above are met by our guest, we can proceed with the migration. The command to use is hpvmmigrate; the name or VM number of the guest and the hostname of the destination server have to be provided. Some of the resources of the virtual machine, like the number of CPUs, the amount of RAM or the machine label, can also be modified during the migration.

root@ivmcl01:~ # hpvmmigrate -P hpvm1 -h ivmcl02
hpvmmigrate: Guest migrated successfully.
root@ivmcl01:~ #

Check the existence of the migrated guest in the destination host.

root@ivmcl02:~ # hpvmstatus
[Virtual Machines]
Virtual Machine Name VM #  OS Type State     #VCPUs #Devs #Nets Memory  Runsysid
==================== ===== ======= ========= ====== ===== ===== ======= ========
oratest01                1 HPUX    On (OS)        4    10     3   16 GB        0
oratest02                2 HPUX    On (OS)        4     8     3   16 GB        0
sapvm01                  3 HPUX    Off            3     8     3    8 GB        0
sapvm02                  4 HPUX    Off            3     7     3    8 GB        0
sles01                   5 LINUX   On (OS)        1     4     3    4 GB        0
rhel01                   6 LINUX   Off            1     4     3    4 GB        0
hp-vxvm                  7 HPUX    On (OS)        2    17     3    6 GB        0
ws2003                   8 WINDOWS Off            4     4     3   12 GB        0
hpvm1                   10 HPUX    Off            1     1     1    3 GB        0
root@ivmcl02:~ #

As you can see once all the prerequisites have been met the migration is quite easy.

CONCLUSION:

Even with the disadvantage of lacking online migration, the guest migration feature can be useful for balancing the load between HPVM hosts.

Juanma.

Welcome again to “HPVM World!” my dear readers :-D

I have to say that, even with the initial disappointment about hpvmclone, cloning IVMs was a lot of fun, but I believe the post-cloning tasks weren't very clear, at least for me, so I decided to write this follow-up post to clarify that part.

Let's assume we already have a cloned virtual machine. In this particular case I used dd to clone the virtual disk and later created the IVM and added the storage device and the other resources, but this also applies to the other methods with minor changes.
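
A rough sketch of those steps, using the volume group and resource names visible below; the source lvol name (rvmnode1disk) and the size are assumptions made only for illustration:

[root@hpvmhost] ~ # lvcreate -L 20480 -n vmnode2disk /dev/vg_vmtest
[root@hpvmhost] ~ # dd if=/dev/vg_vmtest/rvmnode1disk of=/dev/vg_vmtest/rvmnode2disk bs=1024k
[root@hpvmhost] ~ # hpvmcreate -P vmnode2 -O hpux
[root@hpvmhost] ~ # hpvmmodify -P vmnode2 -a disk:scsi::lv:/dev/vg_vmtest/rvmnode2disk
[root@hpvmhost] ~ # hpvmmodify -P vmnode2 -a dvd:scsi::disk:/dev/rdsk/c1t4d0

The network interfaces are added the same way with -a and the corresponding vswitch.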

[root@hpvmhost] ~ # hpvmstatus -P vmnode2 -d
[Virtual Machine Devices]

[Storage Interface Details]
disk:scsi:0,0,0:lv:/dev/vg_vmtest/rvmnode2disk
dvd:scsi:0,0,1:disk:/dev/rdsk/c1t4d0

[Network Interface Details]
network:lan:0,1,0xB20EBA14E76C:vswitch:localnet
network:lan:0,2,0x3E9492C9F615:vswitch:vlan02

[Misc Interface Details]
serial:com1::tty:console
[root@hpvmhost] ~ #

We start the virtual machine and access its console. Now we are going to follow some of the final steps of the third method described in my previous post. From the main EFI Boot Manager select the Boot option maintenance menu option.

EFI Boot Manager ver 1.10 [14.62] [Build: Mon Oct  1 09:27:26 2007]

Please select a boot option

     EFI Shell [Built-in]                                           
     Boot option maintenance menu                                    

     Use ^ and v to change option(s). Use Enter to select an option

Select Boot from a file and then select the first partition:

EFI Boot Maintenance Manager ver 1.10 [14.62]

Boot From a File.  Select a Volume

    IA64_EFI [Acpi(PNP0A03,0)/Pci(0|0)/Scsi(Pun0,Lun0)/HD(Part1,Sig
    IA64_EFI [Acpi(PNP0A03,0)/Pci(0|0)/Scsi(Pun0,Lun0)/HD(Part3,Sig7
    Removable Media Boot [Acpi(PNP0A03,0)/Pci(0|0)/Scsi(Pun1,Lun0)]
    Load File [Acpi(PNP0A03,0)/Pci(1|0)/Mac(B20EBA14E76C)]          
    Load File [Acpi(PNP0A03,0)/Pci(2|0)/Mac(3E9492C9F615)]        
    Load File [EFI Shell [Built-in]]                                
    Legacy Boot
    Exit

Enter the EFI directory, then the HPUX directory, and finally select the hpux.efi file. Like I said before, this part is very similar to the final steps of Method 3.

EFI Boot Maintenance Manager ver 1.10 [14.62]

Select file or change to new directory:

       03/09/10  03:45p <DIR>       4,096 .                         
       03/09/10  03:45p <DIR>       4,096 ..                        
       03/10/10  04:21p           657,609 hpux.efi                  
       03/09/10  03:45p            24,576 nbp.efi                   
       Exit

After this the machine will boot.

   Filename: \EFI\HPUX\hpux.efi
 DevicePath: [Acpi(PNP0A03,0)/Pci(0|0)/Scsi(Pun0,Lun0)/HD(Part1,Sig71252358-2BCD-11DF-8000-D6217B60E588)/\EFI\HPUX\hpux.efi]
   IA-64 EFI Application 03/10/10  04:21p     657,609 bytes

(C) Copyright 1999-2008 Hewlett-Packard Development Company, L.P.
All rights reserved

HP-UX Boot Loader for IPF  --  Revision 2.036

Press Any Key to interrupt Autoboot
\EFI\HPUX\AUTO ==> boot vmunix
Seconds left till autoboot -   0
AUTOBOOTING...> System Memory = 2042 MB
loading section 0
..................................................................................... (complete)
loading section 1
............... (complete)
loading symbol table
loading System Directory (boot.sys) to MFS
.....
loading MFSFILES directory (bootfs) to MFS
..................
Launching /stand/vmunix
SIZE: Text:43425K + Data:7551K + BSS:22118K = Total:73096K
...

When the VM is up, log in as root. The first tasks, as always, are to change the hostname and network configuration to avoid conflicts.

Next we are going to recreate lvmtab, since the current one contains the LVM configuration of the source virtual machine. A simple vgdisplay will show it.

root@vmnode2:/# vgdisplay
vgdisplay: Warning: couldn't query physical volume "/dev/disk/disk15_p2":
The specified path does not correspond to physical volume attached to
this volume group
vgdisplay: Warning: couldn't query all of the physical volumes.
--- Volume groups ---
VG Name                     /dev/vg00
VG Write Access             read/write     
VG Status                   available                 
Max LV                      255    
Cur LV                      8      
Open LV                     8      
Max PV                      16     
Cur PV                      1      
Act PV                      0      
Max PE per PV               3085         
VGDA                        0   
PE Size (Mbytes)            8               
Total PE                    0       
Alloc PE                    0       
Free PE                     0       
Total PVG                   0        
Total Spare PVs             0              
Total Spare PVs in use      0

root@vmnode2:/#

To correct this, remove the /etc/lvmtab file and launch a vgscan.

root@vmnode2:/# rm /etc/lvmtab
/etc/lvmtab: ? (y/n) y
root@vmnode2:/var/tmp/software# vgscan
Creating "/etc/lvmtab".
vgscan: Couldn't access the list of physical volumes for volume group "/dev/vg00".
Physical Volume "/dev/dsk/c1t1d0" contains no LVM information

*** LVMTAB has been created successfully.
*** Do the following to resync the information on the disk.
*** #1.  vgchange -a y
*** #2.  lvlnboot -R
root@vmnode2:/#

Follow the steps recommended in the vgscan output. The first step only applies if there are other VGs in the system; if vg00 is the only one it is already active, so that step is not necessary.
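If the guest did have additional volume groups, the first step would simply be a vgchange per group; a minimal sketch, with a made-up VG name:

root@vmnode2:/# vgchange -a y /dev/vgdata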

Running lvlnboot -R is mandatory, since we need to recover and update the links to the lvols in the Boot Data Reserved Area of the boot disk.

root@vmnode2:/# lvlnboot -R
Volume Group configuration for /dev/vg00 has been saved in /etc/lvmconf/vg00.conf
root@vmnode2:/#

Now that the LVM configuration is fixed, run vgdisplay again.

root@vmnode2:/# vgdisplay
--- Volume groups ---
VG Name                     /dev/vg00
VG Write Access             read/write
VG Status                   available
Max LV                      255
Cur LV                      8
Open LV                     8
Max PV                      16
Cur PV                      1
Act PV                      1
Max PE per PV               3085
VGDA                        2
PE Size (Mbytes)            8
Total PE                    3075
Alloc PE                    2866
Free PE                     209
Total PVG                   0
Total Spare PVs             0
Total Spare PVs in use      0

root@vmnode2:/#

With the LVM configuration fixed, the next step is to tell the system which disk to boot from, using setboot.

root@vmnode2:/# setboot -p /dev/disk/disk21_p2
Primary boot path set to 0/0/0/0.0x0.0x0 (/dev/disk/disk21_p2)
root@vmnode2:/#
root@vmnode2:/# setboot
Primary bootpath : 0/0/0/0.0x0.0x0 (/dev/rdisk/disk21)
HA Alternate bootpath :
Alternate bootpath :

Autoboot is ON (enabled)
root@vmnode2:/#

Finally, reboot the virtual machine; if we did everything correctly a new boot option will be available in the EFI Boot Manager.
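A hedged example of the reboot itself, assuming nobody needs to be warned and an immediate restart is acceptable:

root@vmnode2:/# shutdown -ry 0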

EFI Boot Manager ver 1.10 [14.62] [Build: Mon Oct  1 09:27:26 2007]

Please select a boot option

    HP-UX Primary Boot: 0/0/0/0.0x0.0x0                             
    EFI Shell [Built-in]                                            
    Boot option maintenance menu                                    

    Use ^ and v to change option(s). Use Enter to select an option

Let the system boot by itself through the new default option (HP-UX Primary Boot) and we are done.

Any feedback would be welcome.

Juanma.

Cloning HPVM guests

March 9, 2010 — 6 Comments

Our next step in the wonderful HPVM World is… cloning virtual machines.

If you have used VMware Virtual Infrastructure cloning, you are probably used to the easy “right-click and clone VM” procedure. Sadly, HPVM cloning has nothing in common with it. In fact the process to clone a virtual machine can be a little clunky.

Of course there is an hpvmclone command, and anyone could think, as I did the first time I had to clone an IVM, that you only have to provide the source VM and the new VM name and voilà, everything will be done:

[root@hpvmhost] ~ # hpvmclone -P ivm1 -N ivm_clone01
[root@hpvmhost] ~ # hpvmstatus
[Virtual Machines]
Virtual Machine Name VM #  OS Type State     #VCPUs #Devs #Nets Memory  Runsysid
==================== ===== ======= ========= ====== ===== ===== ======= ========
ivm1                     9 HPUX    Off            3     3     2    2 GB        0
ivm2                    10 HPUX    Off            1     7     1    3 GB        0
ivm_clone01             11 HPUX    Off            3     3     2    2 GB        0
[root@hpvmhost] ~ #

The new virtual machine can be seen and everything seems to be fine, but when you ask for the configuration details of the new IVM a nasty surprise appears: the storage devices have not been cloned; instead it looks like hpvmclone simply mapped the devices of the source IVM to the new IVM:

[root@hpvmhost] ~ # hpvmstatus -P ivm_clone01
[Virtual Machine Details]
Virtual Machine Name VM #  OS Type State
==================== ===== ======= ========
ivm_clone01             11 HPUX    Off       

[Authorized Administrators]
Oper Groups: 
Admin Groups:
Oper Users:  
Admin Users:  

[Virtual CPU Details]
#vCPUs Entitlement Maximum
====== =========== =======
     3       20.0%  100.0% 

[Memory Details]
Total    Reserved
Memory   Memory 
=======  ========
   2 GB     64 MB 

[Storage Interface Details]
Guest                                 Physical
Device  Adaptor    Bus Dev Ftn Tgt Lun Storage   Device
======= ========== === === === === === ========= =========================
disk    scsi         0   2   0   0   0 lv        /dev/vg_vmtest/rivm1d1
disk    scsi         0   2   0   1   0 lv        /dev/vg_vmtest/rivm1d2
dvd     scsi         0   2   0   2   0 disk      /dev/rdsk/c1t4d0

[Network Interface Details]
Interface Adaptor    Name/Num   PortNum Bus Dev Ftn Mac Address
========= ========== ========== ======= === === === =================
vswitch   lan        vlan02     11        0   0   0 f6-fb-bf-41-78-63
vswitch   lan        localnet   10        0   1   0 2a-69-35-d5-c1-5f

[Misc Interface Details]
Guest                                 Physical
Device  Adaptor    Bus Dev Ftn Tgt Lun Storage   Device
======= ========== === === === === === ========= =========================
serial  com1                           tty       console
[root@hpvmhost] ~ #

With this configuration the virtual machines can’t be booted at the same time. So, what is the purpose of hpvmclone if the newly cloned node can’t be used simultaneously with the original? Honestly, this makes no sense, at least to me.
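A quick way to see the overlap is to dump both guests’ device lists in command-line format with hpvmstatus -d and compare the disk entries; output omitted here:

[root@hpvmhost] ~ # hpvmstatus -P ivm1 -d | grep disk
[root@hpvmhost] ~ # hpvmstatus -P ivm_clone01 -d | grep disk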

At that point, since I really wanted to use both machines in a test cluster, I decided to do a little research through Google and ITRC.

After re-reading the official documentation, a few dozen posts about HPVM cloning and HPVM in general, and a few very nice posts on Daniel Parkes’ HP-UX Tips & Tricks site, I finally came up with three different methods to successfully and “physically” clone an Integrity Virtual Machine.

METHOD 1: Using dd.

  • Create the LVM structure for the new virtual machine on the host (see the lvcreate sketch right after this list).
  • Use dd to copy every storage device from the source virtual machine.
[root@hpvmhost] ~ # dd if=/dev/vg_vmtest/rivm1d1 of=/dev/vg_vmtest/rclone01_d1 bs=1024k
12000+0 records in
12000+0 records out
[root@hpvmhost] ~ #
[root@hpvmhost] ~ # dd if=/dev/vg_vmtest/rivm1d2 of=/dev/vg_vmtest/rclone01_d2 bs=1024k
12000+0 records in
12000+0 records out
[root@hpvmhost] ~ #
  • Using hpvmclone, create the new machine; in the same command add the new storage devices and delete the old ones from its configuration. Any other resource can also be modified at this point, just as with hpvmcreate.
[root@hpvmhost] ~ # hpvmclone -P ivm1 -N clone01 -d disk:scsi:0,2,0:lv:/dev/vg_vmtest/rivm1d1 \
> -d disk:scsi:0,2,1:lv:/dev/vg_vmtest/rivm1d2 \
> -a disk:scsi::lv:/dev/vg_vmtest/rclone01_d1 \
> -a disk:scsi::lv:/dev/vg_vmtest/rclone01_d2 \
> -l "Clone-cluster 01" \
> -B manual
[root@hpvmhost] ~ #
  • Start the new virtual machine and make the necessary changes to the guest OS (network, hostname, etc).
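The first bullet of this list glosses over the host-side LVM work; a minimal sketch, where the logical volume names and the 12000 MB size (matching the 12000 one-megabyte records copied by dd above) are assumptions:

[root@hpvmhost] ~ # lvcreate -L 12000 -n clone01_d1 /dev/vg_vmtest
[root@hpvmhost] ~ # lvcreate -L 12000 -n clone01_d2 /dev/vg_vmtest

lvcreate also creates the matching raw devices (/dev/vg_vmtest/rclone01_d1 and rclone01_d2) that dd and hpvmclone use above.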

METHOD 2: Clone the virtual storage devices at the same time the IVM is cloned.

Yes, yes and yes, it can be done with hpvmclone: you have to use the -b switch and provide the storage resource to use.

I didn’t really test this procedure with devices other than the boot disk or disks. The man page of the command and the HPVM documentation state that this option is used to specify the boot device of the clone, but I used it to clone one virtual machine with a single boot disk and another with two disks, and in both cases it worked without problems.

  • As in METHOD 1 create the necessary LVM infrastructure for the new IVM.
  • Once the lvols are created clone the virtual machine.
[root@hpvmhost] ~ # hpvmclone -P ivm1 -N vxcl01 -a disk:scsi::lv:/dev/vg_vmtest/rvxcl01_d1 \
> -a disk:scsi::lv:/dev/vg_vmtest/rvxcl01_d2 \
> -b disk:scsi:0,2,0:lv:/dev/vg_vmtest/rvxcl01_d1 \
> -b disk:scsi:0,2,1:lv:/dev/vg_vmtest/rvxcl01_d2 \
> -d disk:scsi:0,2,0:lv:/dev/vg_vmtest/rivm1d1 \
> -d disk:scsi:0,2,1:lv:/dev/vg_vmtest/rivm1d2 \
> -B manual
12000+0 records in
12000+0 records out
hpvmclone: Virtual storage cloned successfully.
12000+0 records in
12000+0 records out
hpvmclone: Virtual storage cloned successfully.
[root@hpvmhost] ~ #
  • Start the virtual machine.
  • Now log into the virtual machine to check the start-up process and to make any changes needed.

METHOD 3: Dynamic Root Disk.

Since DRD can produce a clone of vg00, we can also use it to clone an Integrity Virtual Machine.
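The host-side preparation implied by the first and third bullets below is not shown in the transcripts; a hedged sketch, where the lvol name and size, the guest resource and the rescan commands (an 11i v3 guest is assumed) are all assumptions:

[root@hpvmhost] ~ # lvcreate -L 12000 -n ivm3disk /dev/vg_vmtest
[root@hpvmhost] ~ # hpvmmodify -P ivm2 -a disk:scsi::lv:/dev/vg_vmtest/rivm3disk
root@ivm2:~# ioscan -fNC disk
root@ivm2:~# insf -eC disk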

  • The first step is to create a new lvol on the host that will contain the clone of vg00; it has to be at least the same size as the original disk.
  • Install the latest supported DRD version on the virtual machine to be cloned.
  • Add the new volume to the source virtual machine and from the guest OS re-scan for the new disk.
  • Now proceed with the DRD clone.
root@ivm2:~# drd clone -v -x overwrite=true -t /dev/disk/disk15   

=======  03/09/10 15:45:15 MST  BEGIN Clone System Image (user=root)  (jobid=ivm2)

       * Reading Current System Information
       * Selecting System Image To Clone
       * Converting legacy Dsf "/dev/dsk/c0t0d0" to "/dev/disk/disk3"
       * Selecting Target Disk
NOTE:    There may be LVM 2 volumes configured that will not be recognized.
       * Selecting Volume Manager For New System Image
       * Analyzing For System Image Cloning
       * Creating New File Systems
       * Copying File Systems To New System Image
       * Copying File Systems To New System Image succeeded.
       * Making New System Image Bootable
       * Unmounting New System Image Clone
       * System image: "sysimage_001" on disk "/dev/disk/disk15"

=======  03/09/10 16:05:20 MST  END Clone System Image succeeded. (user=root)  (jobid=ivm2)

root@ivm2:~#
  • Mount the new image.
root@ivm2:~# drd mount -v

=======  03/09/10 16:09:08 MST  BEGIN Mount Inactive System Image (user=root)  (jobid=ivm2)

       * Checking for Valid Inactive System Image
       * Locating Inactive System Image
       * Preparing To Mount Inactive System Image
       * Selected inactive system image "sysimage_001" on disk "/dev/disk/disk15".
       * Mounting Inactive System Image
       * System image: "sysimage_001" on disk "/dev/disk/disk15"

=======  03/09/10 16:09:26 MST  END Mount Inactive System Image succeeded. (user=root)  (jobid=ivm2)

root@ivm2:~#
  • On the mounted image, edit the netconf file: set the hostname to “” and remove any network configuration such as IP address, gateway, etc. The image is mounted on /var/opt/drd/mnts/sysimage_001.
  • Move or delete the DRD XML registry file in /var/opt/drd/mnts/sysimage_001/var/opt/drd/registry, to avoid problems when the clone boots, since the source disk will not be present.
  • Unmount the image.
root@ivm2:~# drd umount -v 

=======  03/09/10 16:20:45 MST  BEGIN Unmount Inactive System Image (user=root)  (jobid=ivm2)

       * Checking for Valid Inactive System Image
       * Locating Inactive System Image
       * Preparing To Unmount Inactive System Image
       * Unmounting Inactive System Image
       * System image: "sysimage_001" on disk "/dev/disk/disk15"

=======  03/09/10 16:20:58 MST  END Unmount Inactive System Image succeeded. (user=root)  (jobid=ivm2)

root@ivm2:~#
  • Now we are going to create the new virtual machine with hpvmclone. Of course, the new IVM could instead be created with hpvmcreate, adding the new disk as its boot disk (see the sketch after the listing below).
[root@hpvmhost] ~ # hpvmclone -P ivm2 -N ivm3 -B manual -d disk:scsi:0,1,0:lv:/dev/vg_vmtest/rivm2disk
[root@hpvmhost] ~ # hpvmstatus -P ivm3
[Virtual Machine Details]
Virtual Machine Name VM #  OS Type State
==================== ===== ======= ========
ivm3                     4 HPUX    Off       

[Authorized Administrators]
Oper Groups: 
Admin Groups:
Oper Users:  
Admin Users:  

[Virtual CPU Details]
#vCPUs Entitlement Maximum
====== =========== =======
     1       10.0%  100.0% 

[Memory Details]
Total    Reserved
Memory   Memory 
=======  ========
   3 GB     64 MB 

[Storage Interface Details]
Guest                                 Physical
Device  Adaptor    Bus Dev Ftn Tgt Lun Storage   Device
======= ========== === === === === === ========= =========================
dvd     scsi         0   1   0   1   0 disk      /dev/rdsk/c1t4d0
disk    scsi         0   1   0   2   0 lv        /dev/vg_vmtest/rivm3disk

[Network Interface Details]
Interface Adaptor    Name/Num   PortNum Bus Dev Ftn Mac Address
========= ========== ========== ======= === === === =================
vswitch   lan        vlan02     11        0   0   0 52-4f-f9-5e-02-82

[Misc Interface Details]
Guest                                 Physical
Device  Adaptor    Bus Dev Ftn Tgt Lun Storage   Device
======= ========== === === === === === ========= =========================
serial  com1                           tty       console
[root@hpvmhost] ~ #
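As noted in the step above, the clone could also have been registered from scratch with hpvmcreate instead of copying ivm2’s definition; a hedged sketch, where the vCPU count, memory size and vswitch are taken from the listing above and the remaining options are assumptions:

[root@hpvmhost] ~ # hpvmcreate -P ivm3 -O hpux -c 1 -r 3G -B manual \
> -a disk:scsi::lv:/dev/vg_vmtest/rivm3disk \
> -a network:lan::vswitch:vlan02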
  • The final step is to boot the newly created machine; from the EFI menu we’re going to create a new boot option.
  • First select the Boot option maintenance menu:
EFI Boot Manager ver 1.10 [14.62] [Build: Mon Oct  1 09:27:26 2007]

Please select a boot option

    HP-UX Primary Boot: 0/0/1/0.0.0                                
    EFI Shell [Built-in]                                           
    Boot option maintenance menu                                    

    Use ^ and v to change option(s). Use Enter to select an option
  • Now go to Add a Boot Option.
EFI Boot Maintenance Manager ver 1.10 [14.62]

Main Menu. Select an Operation

        Boot from a File                                           
        Add a Boot Option                                          
        Delete Boot Option(s)                                      
        Change Boot Order                                           

        Manage BootNext setting                                    
        Set Auto Boot TimeOut                                       

        Select Active Console Output Devices                       
        Select Active Console Input Devices                        
        Select Active Standard Error Devices                        

        Cold Reset                                                 
        Exit                                                       

    Timeout-->[10] sec SystemGuid-->[5A0F8F26-2BA2-11DF-9C04-001A4B07F002]
    SerialNumber-->[VM01010008          ]
  • Select the first partition of the disk.
EFI Boot Maintenance Manager ver 1.10 [14.62]

Add a Boot Option.  Select a Volume

    IA64_EFI [Acpi(PNP0A03,0)/Pci(1|0)/Scsi(Pun2,Lun0)/HD(Part1,Sig7
    IA64_EFI [Acpi(PNP0A03,0)/Pci(1|0)/Scsi(Pun2,Lun0)/HD(Part3,Sig7
    Removable Media Boot [Acpi(PNP0A03,0)/Pci(1|0)/Scsi(Pun1,Lun0)]
    Load File [Acpi(PNP0A03,0)/Pci(0|0)/Mac(524FF95E0282)]         
    Load File [EFI Shell [Built-in]]                               
    Legacy Boot                                                    
    Exit
  • Select the first option.
EFI Boot Maintenance Manager ver 1.10 [14.62]

Select file or change to new directory:

       03/09/10  03:45p <DIR>       4,096 EFI                      
       [Treat like Removable Media Boot]                           
    Exit
  • Enter the HPUX directory.
EFI Boot Maintenance Manager ver 1.10 [14.62]

Select file or change to new directory:

       03/09/10  03:45p <DIR>       4,096 .                        
       03/09/10  03:45p <DIR>           0 ..                       
       03/09/10  03:45p <DIR>       4,096 HPUX                     
       03/09/10  03:45p <DIR>       4,096 Intel_Firmware           
       03/09/10  03:45p <DIR>       4,096 diag                     
       03/09/10  03:45p <DIR>       4,096 hp                       
       03/09/10  03:45p <DIR>       4,096 tools                    
    Exit
  • Select the hpux.efi file.
EFI Boot Maintenance Manager ver 1.10 [14.62]

Select file or change to new directory:

       03/09/10  03:45p <DIR>       4,096 .                        
       03/09/10  03:45p <DIR>       4,096 ..                       
       03/09/10  03:45p           654,025 hpux.efi                 
       03/09/10  03:45p            24,576 nbp.efi                  
    Exit
  • Enter BOOTDISK as description and None as BootOption Data Type. Save changes.
Filename: \EFI\HPUX\hpux.efi
DevicePath: [Acpi(PNP0A03,0)/Pci(1|0)/Scsi(Pun2,Lun0)/HD(Part1,Sig71252358-2BCD-11DF-8000-D6217B60E588)/\EFI\HPUX\hpux.efi]

IA-64 EFI Application 03/09/10  03:45p     654,025 bytes

Enter New Description:  BOOTDISK
New BootOption Data. ASCII/Unicode strings only, with max of 240 characters

Enter BootOption Data Type [A-Ascii U-Unicode N-No BootOption] :  None

Save changes to NVRAM [Y-Yes N-No]:
  • Go back to the EFI main menu and boot from the new option.
EFI Boot Manager ver 1.10 [14.62] [Build: Mon Oct  1 09:27:26 2007]

Please select a boot option
HP-UX Primary Boot: 0/0/1/0.0.0
EFI Shell [Built-in]
BOOTDISK
Boot option maintenance menu

Use ^ and v to change option(s). Use Enter to select an option
Loading.: BOOTDISK
Starting: BOOTDISK

(C) Copyright 1999-2006 Hewlett-Packard Development Company, L.P.
All rights reserved

HP-UX Boot Loader for IPF  --  Revision 2.035

Press Any Key to interrupt Autoboot
\EFI\HPUX\AUTO ==> boot vmunix
Seconds left till autoboot -   0
AUTOBOOTING...> System Memory = 3066 MB
loading section 0
.................................................................................. (complete)
loading section 1
.............. (complete)
loading symbol table
loading System Directory (boot.sys) to MFS
.....
loading MFSFILES directory (bootfs) to MFS
................
Launching /stand/vmunix
SIZE: Text:41555K + Data:6964K + BSS:20747K = Total:69267K
  • Finally the OS will ask some questions about the network configuration and other parameters; answer whatever best suits your needs.
_______________________________________________________________________________

                       Welcome to HP-UX!

Before using your system, you will need to answer a few questions.

The first question is whether you plan to use this system on a network.

Answer "yes" if you have connected the system to a network and are ready
to link with a network.

Answer "no" if you:

     * Plan to set up this system as a standalone (no networking).

     * Want to use the system now as a standalone and connect to a
       network later.
_______________________________________________________________________________

Are you ready to link this system to a network?

Press [y] for yes or [n] for no, then press [Enter] y
...

And we are done.

Conclusions: I have to say that at the beginning the HPVM cloning system disappointed me, but after a while I got used to it.

In my opinion the best of the above methods is the second, at least if you have a single boot disk, and I really can’t see a reason to have a vg00 with several PVs on a virtual machine. If you have an IVM as a template and need to produce many copies as quickly as possible, this method is perfect.

Of course there is a fourth method: Our beloved Ignite-UX. But I will write about it in another post.

Juanma.

[root@hpvmhost] ~ # hpvmstatus -P hpvxcl01
[Virtual Machine Details]
Virtual Machine Name VM #  OS Type State
==================== ===== ======= ========
hpvxcl01                11 HPUX    Off

[Authorized Administrators]
Oper Groups:
Admin Groups:
Oper Users:
Admin Users:

[Virtual CPU Details]
#vCPUs Entitlement Maximum
====== =========== =======
     3       20.0%  100.0%

[Memory Details]
Total    Reserved
Memory   Memory
=======  ========
   2 GB     64 MB

[Storage Interface Details]
Guest                                 Physical
Device  Adaptor    Bus Dev Ftn Tgt Lun Storage   Device
======= ========== === === === === === ========= =========================
disk    scsi         0   2   0   0   0 lv        /dev/vg_vmtest/rivm1d1
disk    scsi         0   2   0   1   0 lv        /dev/vg_vmtest/rivm1d2
dvd     scsi         0   2   0   2   0 disk      /dev/rdsk/c1t4d0
disk    scsi         0   2   0   3   0 lv        /dev/vg_vmtest/rivm1sd1
disk    scsi         0   2   0   4   0 lv        /dev/vg_vmtest/rivm1sd2
disk    scsi         0   2   0   5   0 lv        /dev/vg_vmtest/rivm1sd3
disk    scsi         0   2   0   6   0 lv        /dev/vg_vmtest/rivm1md1
disk    scsi         0   2   0   7   0 lv        /dev/vg_vmtest/rivm1md2
disk    scsi         0   2   0   8   0 lv        /dev/vg_vmtest/rivm1md3
disk    scsi         0   2   0   9   0 lv        /dev/vg_vmtest/rivm1lmd1
disk    scsi         0   2   0  10   0 lv        /dev/vg_vmtest/rivm1lmd2

[Network Interface Details]
Interface Adaptor    Name/Num   PortNum Bus Dev Ftn Mac Address
========= ========== ========== ======= === === === =================
vswitch   lan        swtch502   11        0   0   0 f6-fb-bf-41-78-63
vswitch   lan        localnet   10        0   1   0 2a-69-35-d5-c1-5f

[Misc Interface Details]
Guest                                 Physical
Device  Adaptor    Bus Dev Ftn Tgt Lun Storage   Device
======= ========== === === === === === ========= =========================
serial  com1                           tty       console
[root@hpvmhost] ~ #

Continuing with my HPVM re-learning process, today I’ve been playing around with my virtual switches and a question arose.

How can I move a vNic from one vSwitch to another?

I discovered it is not a difficult task; there is just one important point to take into account: the virtual machine must be powered off. This kind of change can’t be done while the IVM is online, at least with HPVM 3.5. I have never used the 4.0 or 4.1 releases of HPVM and I didn’t find anything in the documentation that suggests a different behavior.
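So before touching anything, make sure the guest is actually down; checking and stopping it from the host looks like this (hpvmstop may ask for confirmation):

[root@hpvmhost] ~ # hpvmstatus -P ivm1
[root@hpvmhost] ~ # hpvmstop -P ivm1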

To perform the operation we’re going to use, as usual ;-), hpvmmodify. It has a -m switch to modify the I/O resources of an existing virtual machine, but you have to specify the hardware address of the device. To identify the address of the network card, launch hpvmstatus with -d; this option shows the output in the format used on the command line.

[root@hpvmhost] ~ # hpvmstatus -P ivm1 -d
[Virtual Machine Devices]
...
[Network Interface Details]
network:lan:0,0,0x56E9E3096A22:vswitch:vlan02
network:lan:0,1,0xAED6F7FA4E3E:vswitch:localnet
...
[root@hpvmhost] ~ #

As can be seen in the Network Interface Details, the third field shows, separated by commas, the LAN bus, the device number and the MAC address of the vNic. We only need the first two values, that is the bus and device number, “0,0” in our example.

Now we can proceed.

[root@hpvmhost] ~ # hpvmmodify -P ivm1 -m network:lan:0,0:vswitch:vlan03   
[root@hpvmhost] ~ #
[root@hpvmhost] ~ # hpvmstatus -P ivm1
[Virtual Machine Details]
Virtual Machine Name VM #  OS Type State
==================== ===== ======= ========
ivm1                     9 HPUX    On (OS)   
...
[Network Interface Details]
Interface Adaptor    Name/Num   PortNum Bus Dev Ftn Mac Address
========= ========== ========== ======= === === === =================
vswitch   lan        vlan03     9         0   0   0 56-e9-e3-09-6a-22
vswitch   lan        localnet   9         0   1   0 ae-d6-f7-fa-4e-3e
...
[root@hpvmhost] ~ #

And we are done.

I will write a few additional posts covering more HPVM tips, small and big, as I practice them on my lab server.

Juanma.