Archives For Dynamic Root Disk

Dynamic Root Disk, or DRD for short, is a nice and handy tool that IMHO every HP-UX Sysadmin must know. In an HPVM-related post I showed how to use DRD to clone a virtual machine, but today I will explain the purpose DRD was intended for when it was first introduced… patching a server. I'm going to assume you have a spare disk for the task and, of course, have DRD installed on the server.
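Before touching anything, you can preview the whole operation; with -p DRD analyzes the clone without writing a single block to the target disk (just a sketch, same options as the real run below):

root@sheldon:/ # drd clone -p -x overwrite=true -v -t /dev/disk/disk2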

1.- Clone the root disk.

root@sheldon:/ # drd clone -x overwrite=true -v -t /dev/disk/disk2

=======  04/21/10 09:05:53 EDT  BEGIN Clone System Image (user=root)  (jobid=sheldon-01)

* Reading Current System Information
* Selecting System Image To Clone
* Selecting Target Disk
* Selecting Volume Manager For New System Image
* Analyzing For System Image Cloning
* Creating New File Systems
* Copying File Systems To New System Image
* Making New System Image Bootable
* Unmounting New System Image Clone
* System image: "sysimage_001" on disk "/dev/disk/disk2"

=======  04/21/10 09:38:48 EDT  END Clone System Image succeeded. (user=root)  (jobid=sheldon-01)

root@sheldon:/ #

2.- Mount the image.

root@sheldon:/ # drd mount

=======  04/21/10 09:41:20 EDT  BEGIN Mount Inactive System Image (user=root)  (jobid=sheldon)

 * Checking for Valid Inactive System Image
 * Locating Inactive System Image
 * Mounting Inactive System Image

=======  04/21/10 09:41:31 EDT  END Mount Inactive System Image succeeded. (user=root)  (jobid=sheldon)

root@sheldon:/ #

Check the mount by displaying the drd00 volume group.

root@sheldon:/ # vgdisplay drd00

VG Name                     /dev/drd00
VG Write Access             read/write     
VG Status                   available                 
Max LV                      255    
Cur LV                      8      
Open LV                     8      
Max PV                      16     
Cur PV                      1      
Act PV                      1      
Max PE per PV               4356         
VGDA                        2   
PE Size (Mbytes)            32              
Total PE                    4346    
Alloc PE                    2062    
Free PE                     2284    
Total PVG                   0        
Total Spare PVs             0              
Total Spare PVs in use      0  

root@sheldon:/ #
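You can also double-check that the clone's filesystems are hanging under the default DRD mount point with something as simple as:

root@sheldon:/ # mount | grep drd00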

3.- Apply the patches on the mounted clone.

root@sheldon:/ # drd runcmd swinstall -s /tmp/patches_01.depot

=======  04/21/10 09:42:55 EDT  BEGIN Executing Command On Inactive System Image (user=root)  (jobid=sheldon)

 * Checking for Valid Inactive System Image
 * Analyzing Command To Be Run On Inactive System Image
 * Locating Inactive System Image
 * Accessing Inactive System Image for Command Execution
 * Setting Up Environment For Command Execution
 * Executing Command On Inactive System Image
 * Using unsafe patch list version 20061206
 * Starting swagentd for drd runcmd
 * Executing command: "/usr/sbin/swinstall -s /tmp/patches_01.depot"

=======  04/21/10 09:42:59 EDT  BEGIN swinstall SESSION
 (non-interactive) (jobid=sheldon-0006) (drd session)

 * Session started for user "root@sheldon".

 * Beginning Selection

 ...
 ...
 ...

=======  04/21/10 09:44:37 EDT  END swinstall SESSION (non-interactive)
 (jobid=sheldon-0006) (drd session)

 * Command "/usr/sbin/swinstall -s /tmp/patches_01.depot" completed with the return
 code "0".
 * Stopping swagentd for drd runcmd
 * Cleaning Up After Command Execution On Inactive System Image

=======  04/21/10 09:44:38 EDT  END Executing Command On Inactive System Image succeeded. (user=root)  (jobid=sheldon)

root@sheldon:/ #
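Keep in mind that drd runcmd only accepts a handful of "DRD-safe" commands (swinstall, swremove, swlist and a few others). And if you want to see what the depot would install before committing, swinstall's preview option should work through runcmd as well; a sketch:

root@sheldon:/ # drd runcmd swinstall -p -s /tmp/patches_01.depot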

4.- Check the installed patches on the DRD image.

root@sheldon:/ # drd runcmd swlist patches_01

=======  04/21/10 09:45:29 EDT  BEGIN Executing Command On Inactive System Image (user=root)  (jobid=sheldon)

 * Checking for Valid Inactive System Image
 * Analyzing Command To Be Run On Inactive System Image
 * Locating Inactive System Image
 * Accessing Inactive System Image for Command Execution
 * Setting Up Environment For Command Execution
 * Executing Command On Inactive System Image
 * Executing command: "/usr/sbin/swlist patches_01"
# Initializing...
# Contacting target "sheldon"...
#
# Target:  sheldon:/
#

 # patches_01                    1.0            ACME Patching depot
   patches_01.acme-RUN
 * Command "/usr/sbin/swlist patches_01" completed with the return code "0".
 * Cleaning Up After Command Execution On Inactive System Image

=======  04/21/10 09:45:32 EDT  END Executing Command On Inactive System Image succeeded. (user=root)  (jobid=sheldon)

root@sheldon:/ #

5.- Activate the image and reboot the server.

At this point you only have to activate the patched image with the drd activate command and schedule a reboot of the server.
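A sketch of that two-step variant, with the reboot left for your maintenance window:

root@sheldon:/ # drd activate
root@sheldon:/ # shutdown -r -y 0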

If you want to activate and reboot at the same time use the -x reboot=true option as in the example below.

root@sheldon:/ # drd activate -x reboot=true

=======  04/21/10 09:52:26 EDT  BEGIN Activate Inactive System Image
 (user=root)  (jobid=sheldon)

 * Checking for Valid Inactive System Image
 * Reading Current System Information
 * Locating Inactive System Image
 * Determining Bootpath Status
 * Primary bootpath : 0/1/1/0.0.0 [/dev/disk/disk1] before activate.
 * Primary bootpath : 0/1/1/0.1.0 [/dev/disk/disk2] after activate.
 * Alternate bootpath : 0 [unknown] before activate.
 * Alternate bootpath : 0 [unknown] after activate.
 * HA Alternate bootpath : <none> [] before activate.
 * HA Alternate bootpath : <none> [] after activate.
 * Activating Inactive System Image
 * Rebooting System

If everything goes well after the reboot, give the patched server some time, how long I leave to your own judgement, before restoring the mirror.
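And if the patches misbehave, the beauty of this setup is that the old root disk is now the inactive image, so from the booted clone the same activate command should point the boot path back at the original and get you home:

root@sheldon:/ # drd activate -x reboot=true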

Juanma.

I was playing this afternoon with DRD on an 11.23 machine and, just after launching the clone process, I decided to stop it with Ctrl-C since I wasn't logging the session and I wanted to. The process stopped with an error and I was sent back to the shell.

       * Copying File Systems To New System Image
ERROR:   Exiting due to keyboard interrupt.
       * Unmounting New System Image Clone
       * System image: "sysimage_001" on disk "/dev/dsk/c2t1d0"
ERROR:   Caught signal SIGINT.  This process is running critical code, this signal will be handled shortly.
ERROR:   Caught signal SIGINT.  This process is running critical code, this signal will be handled shortly.
ERROR:   Caught signal SIGINT.  This process is running critical code, this signal will be handled shortly.
ERROR:   Caught signal SIGINT.  This process is running critical code, this signal will be handled shortly.
ERROR:   Unmounting the file system fails.
         - Unmounting the clone image fails.
         - The "umount" command returned  "13". The "sync" command returned  "0". The error messages produced are the following: ""
       * Unmounting New System Image Clone failed with 5 errors.
       * Copying File Systems To New System Image failed with 6 errors.
=======  04/21/10 08:20:19 EDT  END Clone System Image failed with 6 errors. (user=root)  (jobid=ivm-v2)

I know it is a very bad idea, but it's not a production server, just a virtual machine I use to perform tests. In fact my stupid behaviour gave me the opportunity to discover and play with a pretty funny bunch of errors. Here is how I managed to resolve it.

I launched the clone process again in preview mode, just in case, and DRD failed with the following error.

[ivm-v2]/ # drd clone -p -v -t /dev/dsk/c2t1d0

=======  04/21/10 08:22:01 EDT  BEGIN Clone System Image Preview (user=root)  (jobid=ivm-v2)

        * Reading Current System Information
        * Selecting System Image To Clone
        * Selecting Target Disk
 ERROR:   Selection of the target disk fails.
          - Selecting the target disk fails.
          - Validation of the disk "/dev/dsk/c2t1d0" fails with the following error(s):
          - Target volume group device entry "/dev/drd00" exists. Run "drd umount" before proceeding.
        * Selecting Target Disk failed with 1 error.

=======  04/21/10 08:22:10 EDT  END Clone System Image Preview failed with 1 error. (user=root)  (jobid=ivm-v2)

It seems that the original process just left the image mounted, but trying drd umount, just like the DRD output said, didn't work. The image was only partially created; yeah, I had created a beautiful mess ;-)

At that point, and in another "clever" move, instead of simply removing the drd00 volume group I just deleted /dev/drd00… who's da man!! Or, as we say in Spain, ¡con dos cojones!
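(For the record, the proper way to get rid of a half-created volume group would have been vgexport, which cleans up both /etc/lvmtab and the device files in one shot; something like the line below.)

[ivm-v2]/ # vgexport /dev/drd00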

DRD, of course, failed with a new error.

ERROR:   Selection of the target disk fails.
         - Selecting the target disk fails.
         - Validation of the disk "/dev/dsk/c2t1d0" fails with the following error(s):
         - Target volume group "/dev/drd00" found in logical volume table. "/etc/lvmtab" is corrupt and must be fixed before proceeding.
       * Selecting Target Disk failed with 1 error.

Well, it wasn't so bad. I recreated /etc/lvmtab and, yes… fired up my friend Dynamic Root Disk in preview mode again.

[ivm-v2]/ # rm -f /etc/lvmtab
[ivm-v2]/ # vgscan -v
Creating "/etc/lvmtab".
vgscan: Couldn't access the list of physical volumes for volume group "/dev/vg00".
Invalid argument
Physical Volume "/dev/dsk/c3t2d0" contains no LVM information

/dev/vg00
/dev/dsk/c2t0d0s2

Following Physical Volumes belong to one Volume Group.
Unable to match these Physical Volumes to a Volume Group.
Use the vgimport command to complete the process.
/dev/dsk/c2t1d0s2

Scan of Physical Volumes Complete.
*** LVMTAB has been created successfully.
*** Do the following to resync the information on the disk.
*** #1.  vgchange -a y
*** #2.  lvlnboot -R
[ivm-v2]/ # lvlnboot -R
Volume Group configuration for /dev/vg00 has been saved in /etc/lvmconf/vg00.conf
[ivm-v2]/ #
[ivm-v2]/ # drd clone -p -v -t /dev/dsk/c2t1d0

=======  04/21/10 08:26:06 EDT  BEGIN Clone System Image Preview (user=root)  (jobid=ivm-v2)

       * Reading Current System Information
       * Selecting System Image To Clone
       * Selecting Target Disk
ERROR:   Selection of the target disk fails.
         - Selecting the target disk fails.
         - Validation of the disk "/dev/dsk/c2t1d0" fails with the following error(s):
         - The disk "/dev/dsk/c2t1d0" contains data. To overwrite this disk use the option "-x overwrite=true".
       * Selecting Target Disk failed with 1 error.

=======  04/21/10 08:26:13 EDT  END Clone System Image Preview failed with 1 error. (user=root)  (jobid=ivm-v2)

I couldn't believe it. Another error? Why in the hell did I get involved with DRD? But I am a Sysadmin, and a stubborn one. I looked at the disk and discovered that it had been partitioned by the first failed DRD cloning process. I wiped out the whole disk with idisk and, just in case, kept the overwrite option.

[ivm-v2]/ # idisk -p /dev/rdsk/c2t1d0
idisk version: 1.31

EFI Primary Header:
        Signature                 = EFI PART
        Revision                  = 0x10000
        HeaderSize                = 0x5c
        HeaderCRC32               = 0xe19d8a07
        MyLbaLo                   = 0x1
        AlternateLbaLo            = 0x1117732f
        FirstUsableLbaLo          = 0x22
        LastUsableLbaLo           = 0x1117730c
        Disk GUID                 = d79b52fa-4d43-11df-8001-d6217b60e588
        PartitionEntryLbaLo       = 0x2
        NumberOfPartitionEntries  = 0xc
        SizeOfPartitionEntry      = 0x80
        PartitionEntryArrayCRC32  = 0xca7e53ce

  Primary Partition Table (in 512 byte blocks):
    Partition 1 (EFI):
        Partition Type GUID       = c12a7328-f81f-11d2-ba4b-00a0c93ec93b
        Unique Partition GUID     = d79b550c-4d43-11df-8002-d6217b60e588
        Starting Lba              = 0x22
        Ending Lba                = 0xfa021
    Partition 2 (HP-UX):
        Partition Type GUID       = 75894c1e-3aeb-11d3-b7c1-7b03a0000000
        Unique Partition GUID     = d79b5534-4d43-11df-8003-d6217b60e588
        Starting Lba              = 0xfa022
        Ending Lba                = 0x110af021
    Partition 3 (HPSP):
        Partition Type GUID       = e2a1e728-32e3-11d6-a682-7b03a0000000
        Unique Partition GUID     = d79b5552-4d43-11df-8004-d6217b60e588
        Starting Lba              = 0x110af022
        Ending Lba                = 0x11177021

[ivm-v2]/ #
[ivm-v2]/ # idisk -R /dev/rdsk/c2t1d0
idisk version: 1.31
********************** WARNING ***********************
If you continue you will destroy all partition data on this disk.
Do you wish to continue(yes/no)? yes

Don’t know why but I was pretty sure that DRD was going to fail again… and it did.

=======  04/21/10 08:27:02 EDT  BEGIN Clone System Image Preview (user=root)  (jobid=ivm-v2)

       * Reading Current System Information
       * Selecting System Image To Clone
       * Selecting Target Disk
       * The disk "/dev/dsk/c2t1d0" contains data which will be overwritten.
       * Selecting Volume Manager For New System Image
       * Analyzing For System Image Cloning
ERROR:   Analysis of file system creation fails.
         - Analysis of target fails.
         - The analysis step for creation of an inactive system image failed.
         - The default DRD mount point "/var/opt/drd/mnts/sysimage_001/" cannot be used due to the following error(s):
         - The mount point /var/opt/drd/mnts/sysimage_001/ is not an empty directory as required.
       * Analyzing For System Image Cloning failed with 1 error.

=======  04/21/10 08:27:09 EDT  END Clone System Image Preview failed with 1 error. (user=root)  (jobid=ivm-v2)

After a quick check I found that the original image was still mounted.

[ivm-v2]/ # mount
/ on /dev/vg00/lvol3 ioerror=mwdisable,delaylog,dev=40000003 on Wed Apr 21 07:29:37 2010
/stand on /dev/vg00/lvol1 ioerror=mwdisable,log,tranflush,dev=40000001 on Wed Apr 21 07:29:38 2010
/var on /dev/vg00/lvol8 ioerror=mwdisable,delaylog,dev=40000008 on Wed Apr 21 07:29:50 2010
/usr on /dev/vg00/lvol7 ioerror=mwdisable,delaylog,dev=40000007 on Wed Apr 21 07:29:50 2010
/tmp on /dev/vg00/lvol4 ioerror=mwdisable,delaylog,dev=40000004 on Wed Apr 21 07:29:50 2010
/opt on /dev/vg00/lvol6 ioerror=mwdisable,delaylog,dev=40000006 on Wed Apr 21 07:29:50 2010
/home on /dev/vg00/lvol5 ioerror=mwdisable,delaylog,dev=40000005 on Wed Apr 21 07:29:50 2010
/net on -hosts ignore,indirect,nosuid,soft,nobrowse,dev=1 on Wed Apr 21 07:30:26 2010
/var/opt/drd/mnts/sysimage_001 on /dev/drd00/lvol3 ioerror=nodisable,delaylog,dev=40010003 on Wed Apr 21 08:19:46 2010
/var/opt/drd/mnts/sysimage_001/stand on /dev/drd00/lvol1 ioerror=nodisable,delaylog,dev=40010001 on Wed Apr 21 08:19:46 2010
/var/opt/drd/mnts/sysimage_001/tmp on /dev/drd00/lvol4 ioerror=nodisable,delaylog,dev=40010004 on Wed Apr 21 08:19:46 2010
/var/opt/drd/mnts/sysimage_001/home on /dev/drd00/lvol5 ioerror=nodisable,delaylog,dev=40010005 on Wed Apr 21 08:19:46 2010
/var/opt/drd/mnts/sysimage_001/opt on /dev/drd00/lvol6 ioerror=nodisable,delaylog,dev=40010006 on Wed Apr 21 08:19:46 2010
/var/opt/drd/mnts/sysimage_001/usr on /dev/drd00/lvol7 ioerror=nodisable,delaylog,dev=40010007 on Wed Apr 21 08:19:46 2010
/var/opt/drd/mnts/sysimage_001/var on /dev/drd00/lvol8 ioerror=nodisable,delaylog,dev=40010008 on Wed Apr 21 08:19:47 2010
[ivm-v2]/ #

I had to unmount the filesystems of the image one by one and, after almost committing suicide with a rack rail, I launched the clone again, this time without the preview; if I was going to play a stupid role, at least it was going to be the most stupid one in the world x-)
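In case you hit the same situation, a quick loop like this should do it, unmounting the children before the root of the image (paths taken from the mount output above):

[ivm-v2]/ # for fs in stand tmp home opt usr var
> do umount /var/opt/drd/mnts/sysimage_001/$fs
> done
[ivm-v2]/ # umount /var/opt/drd/mnts/sysimage_001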

[ivm-v2]/ # drd clone -x overwrite=true -v -t /dev/dsk/c2t1d0

=======  04/21/10 08:38:22 EDT  BEGIN Clone System Image (user=root)  (jobid=rx260-02)

       * Reading Current System Information
       * Selecting System Image To Clone
       * Selecting Target Disk
       * The disk "/dev/dsk/c2t1d0" contains data which will be overwritten.
       * Selecting Volume Manager For New System Image
       * Analyzing For System Image Cloning
       * Creating New File Systems
ERROR:   Clone file system creation fails.
         - Creating the target file systems fails.
         - Command "/opt/drd/lbin/drdconfigure" fails with the return code 255. The entire output from the command is given below:
         - Start of output from /opt/drd/lbin/drdconfigure:
         -        * Creating LVM physical volume "/dev/rdsk/c2t1d0s2" (0/1/1/0.1.0).
                  * Creating volume group "drd00".
           ERROR:   Command "/sbin/vgcreate -A n -e 4356 -l 255 -p 16 -s 32 /dev/drd00
                    /dev/dsk/c2t1d0s2" failed.

         - End of output from /opt/drd/lbin/drdconfigure
       * Creating New File Systems failed with 1 error.
       * Unmounting New System Image Clone
       * System image: "sysimage_001" on disk "/dev/dsk/c2t1d0"

=======  04/21/10 08:38:46 EDT  END Clone System Image failed with 1 error. (user=root)  (jobid=rx260-02)

[ivm-v2]/ #

I thought every possible error was fixed, but there it was, DRD saying it had failed with a bogus return code 255. Oh yes, very insightful, because it's not a 254 or a 256, it is a 255, and everybody knows what that means… Shit! I don't know what it means. Yes, it was true, I didn't know what "return code 255" stood for. After a small search on ITRC there was only one entry about a similar case, only one. I managed to create a beautiful error, don't you think?

The issue was a mismatch between the minor numbers the kernel believed were in use and those actually visible in the device files. DRD always tries to use the next free minor number based on the device files, and since in my case only one was visible but the kernel thought two were in use, one from vg00 and another from the failed clone, it failed.

The solution is to cheat the kernel by creating a fake group device file that uses the minor number the kernel thinks is in use.

[ivm-v2]/dev # mkdir fake
[ivm-v2]/dev # cd fake
[ivm-v2]/dev/fake # mknod group c 64 0x010000
[ivm-v2]/dev/fake #
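By the way, an easy way to see which minor numbers the device files already claim is to list the group files; each one is a character device with major number 64 and the volume group's minor number (just like the mknod above), so the fake entry now sits next to vg00's:

[ivm-v2]/dev/fake # ll /dev/*/group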

After that I launched DRD and everything went smoothly.

Fortunately everything happened on a test virtual machine, and at any step of my frustrating trip through self-generated DRD errors I could have reset the VM and started over with a clean system; but since the purpose of Dynamic Root Disk is to minimize the downtime of production systems, a reboot was not an option, at least not the first one on the list.

The credit for the solution goes to Judit Wathen from the Dynamic Root Disk Team at HP, continue with your great work :-D

Juanma.


Cloning HPVM guests

March 9, 2010

Our next step in the wonderful HPVM World is… cloning virtual machines.

If you have used VMware Virtual Infrastructure cloning, you are probably used to the easy "right-click and clone VM" procedure. Sadly, HPVM cloning has nothing in common with it. In fact the process to clone a virtual machine can be a little creepy.

Of course there is an hpvmclone command, and anyone might think, as I did the first time I had to clone an IVM, that you only have to provide the source VM and the new VM name and voilà, everything will be done:

[root@hpvmhost] ~ # hpvmclone -P ivm1 -N ivm_clone01
[root@hpvmhost] ~ # hpvmstatus
[Virtual Machines]
Virtual Machine Name VM #  OS Type State     #VCPUs #Devs #Nets Memory  Runsysid
==================== ===== ======= ========= ====== ===== ===== ======= ========
ivm1                     9 HPUX    Off            3     3     2    2 GB        0
ivm2                    10 HPUX    Off            1     7     1    3 GB        0
ivm_clone01             11 HPUX    Off            3     3     2    2 GB        0
[root@hpvmhost] ~ #

The new virtual machine can be seen and everything seems fine, but when you ask for the configuration details of the new IVM a nasty surprise appears… the storage devices have not been cloned; instead, it looks like hpvmclone simply mapped the devices of the source IVM to the new IVM:

[root@hpvmhost] ~ # hpvmstatus -P ivm_clone01
[Virtual Machine Details]
Virtual Machine Name VM #  OS Type State
==================== ===== ======= ========
ivm_clone01             11 HPUX    Off       

[Authorized Administrators]
Oper Groups: 
Admin Groups:
Oper Users:  
Admin Users:  

[Virtual CPU Details]
#vCPUs Entitlement Maximum
====== =========== =======
     3       20.0%  100.0% 

[Memory Details]
Total    Reserved
Memory   Memory 
=======  ========
   2 GB     64 MB 

[Storage Interface Details]
Guest                                 Physical
Device  Adaptor    Bus Dev Ftn Tgt Lun Storage   Device
======= ========== === === === === === ========= =========================
disk    scsi         0   2   0   0   0 lv        /dev/vg_vmtest/rivm1d1
disk    scsi         0   2   0   1   0 lv        /dev/vg_vmtest/rivm1d2
dvd     scsi         0   2   0   2   0 disk      /dev/rdsk/c1t4d0

[Network Interface Details]
Interface Adaptor    Name/Num   PortNum Bus Dev Ftn Mac Address
========= ========== ========== ======= === === === =================
vswitch   lan        vlan02     11        0   0   0 f6-fb-bf-41-78-63
vswitch   lan        localnet   10        0   1   0 2a-69-35-d5-c1-5f

[Misc Interface Details]
Guest                                 Physical
Device  Adaptor    Bus Dev Ftn Tgt Lun Storage   Device
======= ========== === === === === === ========= =========================
serial  com1                           tty       console
[root@hpvmhost] ~ #

With this configuration the virtual machines can't be booted at the same time. So what is the purpose of hpvmclone if the newly cloned node can't be used simultaneously with the original? Honestly, this makes no sense, at least to me.

At that point and since I really wanted to use both machines in a test cluster I decided to do a little research through Google and ITRC.

After reading the official documentation again, a few dozen posts regarding HPVM cloning and HPVM in general, and a few very nice posts on Daniel Parkes' HP-UX Tips & Tricks site, I finally came up with three different methods to successfully and "physically" clone an Integrity Virtual Machine.

METHOD 1: Using dd.

  • Create the LVM structure for the new virtual machine on the host (a sketch of this step follows the list).
  • Use dd to copy every storage device from the source virtual machine.
[root@hpvmhost] ~ # dd if=/dev/vg_vmtest/rivm1d1 of=/dev/vg_vmtest/rclone01_d1 bs=1024k
12000+0 records in
12000+0 records out
[root@hpvmhost] ~ #
[root@hpvmhost] ~ # dd if=/dev/vg_vmtest/rivm1d2 of=/dev/vg_vmtest/rclone01_d2 bs=1024k
12000+0 records in
12000+0 records out
[root@hpvmhost] ~ #
  • Using hpvmclone, create the new machine and, in the same command, add the new storage devices and delete the old ones from its configuration; any resource can also be modified at this point, just like with hpvmcreate.
[root@hpvmhost] ~ # hpvmclone -P ivm1 -N clone01 -d disk:scsi:0,2,0:lv:/dev/vg_vmtest/rivm1d1 \
> -d disk:scsi:0,2,1:lv:/dev/vg_vmtest/rivm1d2 \
> -a disk:scsi::lv:/dev/vg_vmtest/rclone01_d1 \
> -a disk:scsi::lv:/dev/vg_vmtest/rclone01_d2 \
> -l "Clone-cluster 01" \
> -B manual
[root@hpvmhost] ~ #
  • Start the new virtual machine and make the necessary changes to the guest OS (network, hostname, etc).
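For the first bullet above, creating the target lvols could look something like this; the lvol names are illustrative and the 12000 MB size simply matches what the dd transcripts copied:

[root@hpvmhost] ~ # lvcreate -L 12000 -n clone01_d1 /dev/vg_vmtest
[root@hpvmhost] ~ # lvcreate -L 12000 -n clone01_d2 /dev/vg_vmtest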

METHOD 2: Clone the virtual storage devices at the same time the IVM is cloned.

Yes, yes and yes, it can be done with hpvmclone: you have to use the -b switch and provide the storage resource to use.

I didn't test this procedure with devices other than boot disks. The man page of the command and the HPVM documentation state that this option can be used to specify the boot device of the clone, but I used it to clone one virtual machine with a single boot disk and another with two, and in both cases it worked without problems.

  • As in METHOD 1 create the necessary LVM infrastructure for the new IVM.
  • Once the lvols are created clone the virtual machine.
[root@hpvmhost] ~ # hpvmclone -P ivm1 -N vxcl01 -a disk:scsi::lv:/dev/vg_vmtest/rvxcl01_d1 \
> -a disk:scsi::lv:/dev/vg_vmtest/rvxcl01_d2 \
> -b disk:scsi:0,2,0:lv:/dev/vg_vmtest/rvxcl01_d1 \
> -b disk:scsi:0,2,1:lv:/dev/vg_vmtest/rvxcl01_d2 \
> -d disk:scsi:0,2,0:lv:/dev/vg_vmtest/rivm1d1 \
> -d disk:scsi:0,2,1:lv:/dev/vg_vmtest/rivm1d2 \
> -B manual
12000+0 records in
12000+0 records out
hpvmclone: Virtual storage cloned successfully.
12000+0 records in
12000+0 records out
hpvmclone: Virtual storage cloned successfully.
[root@hpvmhost] ~ #
  • Start the virtual machine.
  • Now log into the virtual machine to check the start-up process and to make any changes needed.
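To make sure the clone really got its own disks this time, ask for its details again; the Storage Interface Details section should now point at the rvxcl01 lvols instead of the source's:

[root@hpvmhost] ~ # hpvmstatus -P vxcl01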

METHOD 3: Dynamic Root Disk.

Since DRD can produce a clone of vg00, we can use it to clone an Integrity Virtual Machine too.

  • First step is to create a new lvol that will contain the clone of the vg00; it has to be at least the same size as the original disk.
  • Install the latest supported DRD version on the virtual machine to clone.
  • Add the new volume to the source virtual machine and from the guest OS re-scan for the new disk.
  • Now proceed with the DRD clone.
root@ivm2:~# drd clone -v -x overwrite=true -t /dev/disk/disk15   

=======  03/09/10 15:45:15 MST  BEGIN Clone System Image (user=root)  (jobid=ivm2)

       * Reading Current System Information
       * Selecting System Image To Clone
       * Converting legacy Dsf "/dev/dsk/c0t0d0" to "/dev/disk/disk3"
       * Selecting Target Disk
NOTE:    There may be LVM 2 volumes configured that will not be recognized.
       * Selecting Volume Manager For New System Image
       * Analyzing For System Image Cloning
       * Creating New File Systems
       * Copying File Systems To New System Image
       * Copying File Systems To New System Image succeeded.
       * Making New System Image Bootable
       * Unmounting New System Image Clone
       * System image: "sysimage_001" on disk "/dev/disk/disk15"

=======  03/09/10 16:05:20 MST  END Clone System Image succeeded. (user=root)  (jobid=ivm2)

root@ivm2:~#
  • Mount the new image.
root@ivm2:~# drd mount -v

=======  03/09/10 16:09:08 MST  BEGIN Mount Inactive System Image (user=root)  (jobid=ivm2)

       * Checking for Valid Inactive System Image
       * Locating Inactive System Image
       * Preparing To Mount Inactive System Image
       * Selected inactive system image "sysimage_001" on disk "/dev/disk/disk15".
       * Mounting Inactive System Image
       * System image: "sysimage_001" on disk "/dev/disk/disk15"

=======  03/09/10 16:09:26 MST  END Mount Inactive System Image succeeded. (user=root)  (jobid=ivm2)

root@ivm2:~#
  • On the mounted image edit the netconf file and modify the hostname to “” and remove any network configuration such as IP address, gateway, etc. The image is mounted on /var/opt/drd/mnts/sysimage_001.
  • Move or delete the DRD XML registry file in /var/opt/drd/mnts/sysimage_001/var/opt/drd/registry in order to avoid any problems during the boot of the clone since the source disk will not be present.
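For these two steps, something along these lines should work (blank HOSTNAME, IP_ADDRESS[0], ROUTE_GATEWAY[0] and friends in netconf; the registry file name below is an assumption, so list the directory first to see what is actually there):
root@ivm2:~# vi /var/opt/drd/mnts/sysimage_001/etc/rc.config.d/netconf
root@ivm2:~# ls /var/opt/drd/mnts/sysimage_001/var/opt/drd/registry
root@ivm2:~# mv /var/opt/drd/mnts/sysimage_001/var/opt/drd/registry/registry.xml /var/tmp/  # file name assumed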
  • Unmount the image.
root@ivm2:~# drd umount -v 

=======  03/09/10 16:20:45 MST  BEGIN Unmount Inactive System Image (user=root)  (jobid=ivm2)

       * Checking for Valid Inactive System Image
       * Locating Inactive System Image
       * Preparing To Unmount Inactive System Image
       * Unmounting Inactive System Image
       * System image: "sysimage_001" on disk "/dev/disk/disk15"

=======  03/09/10 16:20:58 MST  END Unmount Inactive System Image succeeded. (user=root)  (jobid=ivm2)

root@ivm2:~#
  • Now we are going to create the new virtual machine with hpvmclone. Of course, the new IVM could also be created with hpvmcreate, adding the new disk as its boot disk.
[root@hpvmhost] ~ # hpvmclone -P ivm2 -N ivm3 -B manual -d disk:scsi:0,1,0:lv:/dev/vg_vmtest/rivm2disk
[root@hpvmhost] ~ # hpvmstatus -P ivm3
[Virtual Machine Details]
Virtual Machine Name VM #  OS Type State
==================== ===== ======= ========
ivm3                     4 HPUX    Off       

[Authorized Administrators]
Oper Groups: 
Admin Groups:
Oper Users:  
Admin Users:  

[Virtual CPU Details]
#vCPUs Entitlement Maximum
====== =========== =======
     1       10.0%  100.0% 

[Memory Details]
Total    Reserved
Memory   Memory 
=======  ========
   3 GB     64 MB 

[Storage Interface Details]
Guest                                 Physical
Device  Adaptor    Bus Dev Ftn Tgt Lun Storage   Device
======= ========== === === === === === ========= =========================
dvd     scsi         0   1   0   1   0 disk      /dev/rdsk/c1t4d0
disk    scsi         0   1   0   2   0 lv        /dev/vg_vmtest/rivm3disk

[Network Interface Details]
Interface Adaptor    Name/Num   PortNum Bus Dev Ftn Mac Address
========= ========== ========== ======= === === === =================
vswitch   lan        vlan02     11        0   0   0 52-4f-f9-5e-02-82

[Misc Interface Details]
Guest                                 Physical
Device  Adaptor    Bus Dev Ftn Tgt Lun Storage   Device
======= ========== === === === === === ========= =========================
serial  com1                           tty       console
[root@hpvmhost] ~ #
  • Final step is to boot the newly created machine; from the EFI menu we're going to create a new boot option.
  • First select the Boot option maintenance menu:
EFI Boot Manager ver 1.10 [14.62] [Build: Mon Oct  1 09:27:26 2007]

Please select a boot option

    HP-UX Primary Boot: 0/0/1/0.0.0                                
    EFI Shell [Built-in]                                           
    Boot option maintenance menu                                    

    Use ^ and v to change option(s). Use Enter to select an option
  • Now go to Add a Boot Option.
EFI Boot Maintenance Manager ver 1.10 [14.62]

Main Menu. Select an Operation

        Boot from a File                                           
        Add a Boot Option                                          
        Delete Boot Option(s)                                      
        Change Boot Order                                           

        Manage BootNext setting                                    
        Set Auto Boot TimeOut                                       

        Select Active Console Output Devices                       
        Select Active Console Input Devices                        
        Select Active Standard Error Devices                        

        Cold Reset                                                 
        Exit                                                       

    Timeout-->[10] sec SystemGuid-->[5A0F8F26-2BA2-11DF-9C04-001A4B07F002]
    SerialNumber-->[VM01010008          ]
  • Select the first partition of the disk.
EFI Boot Maintenance Manager ver 1.10 [14.62]

Add a Boot Option.  Select a Volume

    IA64_EFI [Acpi(PNP0A03,0)/Pci(1|0)/Scsi(Pun2,Lun0)/HD(Part1,Sig7
    IA64_EFI [Acpi(PNP0A03,0)/Pci(1|0)/Scsi(Pun2,Lun0)/HD(Part3,Sig7
    Removable Media Boot [Acpi(PNP0A03,0)/Pci(1|0)/Scsi(Pun1,Lun0)]
    Load File [Acpi(PNP0A03,0)/Pci(0|0)/Mac(524FF95E0282)]         
    Load File [EFI Shell [Built-in]]                               
    Legacy Boot                                                    
    Exit
  • Select the first option.
EFI Boot Maintenance Manager ver 1.10 [14.62]

Select file or change to new directory:

       03/09/10  03:45p <DIR>       4,096 EFI                      
       [Treat like Removable Media Boot]                           
    Exit
  • Enter the HPUX directory.
EFI Boot Maintenance Manager ver 1.10 [14.62]

Select file or change to new directory:

       03/09/10  03:45p <DIR>       4,096 .                        
       03/09/10  03:45p <DIR>           0 ..                       
       03/09/10  03:45p <DIR>       4,096 HPUX                     
       03/09/10  03:45p <DIR>       4,096 Intel_Firmware           
       03/09/10  03:45p <DIR>       4,096 diag                     
       03/09/10  03:45p <DIR>       4,096 hp                       
       03/09/10  03:45p <DIR>       4,096 tools                    
    Exit
  • Select the hpux.efi file.
EFI Boot Maintenance Manager ver 1.10 [14.62]

Select file or change to new directory:

       03/09/10  03:45p <DIR>       4,096 .                        
       03/09/10  03:45p <DIR>       4,096 ..                       
       03/09/10  03:45p           654,025 hpux.efi                 
       03/09/10  03:45p            24,576 nbp.efi                  
    Exit
  • Enter BOOTDISK as description and None as BootOption Data Type. Save changes.
Filename: \EFI\HPUX\hpux.efi
DevicePath: [Acpi(PNP0A03,0)/Pci(1|0)/Scsi(Pun2,Lun0)/HD(Part1,Sig71252358-2BCD-11DF-8000-D6217B60E588)/\EFI\HPUX\hpux.efi]

IA-64 EFI Application 03/09/10  03:45p     654,025 bytes

Enter New Description:  BOOTDISK
New BootOption Data. ASCII/Unicode strings only, with max of 240 characters

Enter BootOption Data Type [A-Ascii U-Unicode N-No BootOption] :  None

Save changes to NVRAM [Y-Yes N-No]:
  • Go back to the EFI main menu and boot from the new option.
EFI Boot Manager ver 1.10 [14.62] [Build: Mon Oct  1 09:27:26 2007]

Please select a boot option
HP-UX Primary Boot: 0/0/1/0.0.0
EFI Shell [Built-in]
BOOTDISK
Boot option maintenance menu

Use ^ and v to change option(s). Use Enter to select an option
Loading.: BOOTDISK
Starting: BOOTDISK

(C) Copyright 1999-2006 Hewlett-Packard Development Company, L.P.
All rights reserved

HP-UX Boot Loader for IPF  --  Revision 2.035

Press Any Key to interrupt Autoboot
\EFI\HPUX\AUTO ==> boot vmunix
Seconds left till autoboot -   0
AUTOBOOTING...> System Memory = 3066 MB
loading section 0
.................................................................................. (complete)
loading section 1
.............. (complete)
loading symbol table
loading System Directory (boot.sys) to MFS
.....
loading MFSFILES directory (bootfs) to MFS
................
Launching /stand/vmunix
SIZE: Text:41555K + Data:6964K + BSS:20747K = Total:69267K
  • Finally the OS will ask some questions about the network configuration and other parameters; answer whatever best suits your needs.
_______________________________________________________________________________

                       Welcome to HP-UX!

Before using your system, you will need to answer a few questions.

The first question is whether you plan to use this system on a network.

Answer "yes" if you have connected the system to a network and are ready
to link with a network.

Answer "no" if you:

     * Plan to set up this system as a standalone (no networking).

     * Want to use the system now as a standalone and connect to a
       network later.
_______________________________________________________________________________

Are you ready to link this system to a network?

Press [y] for yes or [n] for no, then press [Enter] y
...

And we are done.

Conclusions: I have to say that at the beginning the HPVM cloning system disappointed me, but after a while I got used to it.

In my opinion the best method of the above is the second one if you have a single boot disk, and I really can't see a reason to have a vg00 with several PVs on a virtual machine. If you have an IVM as a template and need to produce many copies as quickly as possible, this method is perfect.

Of course there is a fourth method: Our beloved Ignite-UX. But I will write about it in another post.

Juanma.

[root@hpvmhost] ~ # hpvmstatus -P hpvxcl01
[Virtual Machine Details]
Virtual Machine Name VM #  OS Type State
==================== ===== ======= ========
hpvxcl01                11 HPUX    Off

[Authorized Administrators]
Oper Groups:
Admin Groups:
Oper Users:
Admin Users:

[Virtual CPU Details]
#vCPUs Entitlement Maximum
====== =========== =======
     3       20.0%  100.0%

[Memory Details]
Total    Reserved
Memory   Memory
=======  ========
   2 GB     64 MB

[Storage Interface Details]
Guest                                 Physical
Device  Adaptor    Bus Dev Ftn Tgt Lun Storage   Device
======= ========== === === === === === ========= =========================
disk    scsi         0   2   0   0   0 lv        /dev/vg_vmtest/rivm1d1
disk    scsi         0   2   0   1   0 lv        /dev/vg_vmtest/rivm1d2
dvd     scsi         0   2   0   2   0 disk      /dev/rdsk/c1t4d0
disk    scsi         0   2   0   3   0 lv        /dev/vg_vmtest/rivm1sd1
disk    scsi         0   2   0   4   0 lv        /dev/vg_vmtest/rivm1sd2
disk    scsi         0   2   0   5   0 lv        /dev/vg_vmtest/rivm1sd3
disk    scsi         0   2   0   6   0 lv        /dev/vg_vmtest/rivm1md1
disk    scsi         0   2   0   7   0 lv        /dev/vg_vmtest/rivm1md2
disk    scsi         0   2   0   8   0 lv        /dev/vg_vmtest/rivm1md3
disk    scsi         0   2   0   9   0 lv        /dev/vg_vmtest/rivm1lmd1
disk    scsi         0   2   0  10   0 lv        /dev/vg_vmtest/rivm1lmd2

[Network Interface Details]
Interface Adaptor    Name/Num   PortNum Bus Dev Ftn Mac Address
========= ========== ========== ======= === === === =================
vswitch   lan        swtch502   11        0   0   0 f6-fb-bf-41-78-63
vswitch   lan        localnet   10        0   1   0 2a-69-35-d5-c1-5f

[Misc Interface Details]
Guest                                 Physical
Device  Adaptor    Bus Dev Ftn Tgt Lun Storage   Device
======= ========== === === === === === ========= =========================
serial  com1                           tty       console
[root@hpvmhost] ~ #