Archive for March, 2010

Guest migration in HPVM 3.5

March 24, 2010

As I have said many times, my current HPVM version is 3.5, so it doesn't support online guest migration. But lacking the online migration feature doesn't mean that we cannot migrate Integrity VMs between hosts.

Currently there are two methods to perform migrations:

  • HPVM commands.
  • MC/ServiceGuard.

In this post I will only cover the HPVM way. I will leave HPVM ServiceGuard clusters for a future post, but as many of you already know, moving a guest between cluster nodes is like moving any other ServiceGuard package, since the guests are managed by SG as packages.

PREREQUISITES:

There is a list of prerequisites the guest has to meet in order to be successfully migrated between hosts.

  • Off-line state:

This one is pretty obvious, of course: the guest must be off.

  • SSH configuration:

On both hosts, root must have SSH access to the other host through public key authentication, otherwise hpvmmigrate will fail (a key-setup sketch follows the error below).

root@ivmcl01:~ # hpvmmigrate -P hpvm1 -h ivmcl02
hpvmmigrate: SSH execution error. Make sure ssh is setup right on both source and target systems.
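
A minimal key-setup sketch, assuming HP-UX Secure Shell (OpenSSH) is installed on both hosts and that the hostnames resolve: generate a key pair for root, append the public key to root's authorized_keys on the other host, then repeat in the opposite direction and test.

root@ivmcl01:~ # ssh-keygen -t rsa
root@ivmcl01:~ # cat ~/.ssh/id_rsa.pub | ssh root@ivmcl02 "mkdir -p ~/.ssh; cat >> ~/.ssh/authorized_keys"
root@ivmcl01:~ # ssh ivmcl02 hostname    # must not prompt for a password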
  • Shared devices:

If the guest has a shared device, such as the host's CD/DVD drive, the device has to be deleted from the guest configuration (see the sketch below the error output).

root@ivmcl01:~ # hpvmmigrate -P hpvm1 -h ivmcl02
hpvmmigrate: Device /dev/rdsk/c1t4d0 is shared.  Guest with shared storage devices cannot be migrated.
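
A hedged sketch of removing the shared DVD before migrating: list the guest devices with hpvmstatus -d to get the exact resource string, then delete it with hpvmmodify -d. The dvd:scsi address below is just an example; use whatever your own output shows.

root@ivmcl01:~ # hpvmstatus -P hpvm1 -d | grep dvd
dvd:scsi:0,0,1:disk:/dev/rdsk/c1t4d0
root@ivmcl01:~ # hpvmmodify -P hpvm1 -d dvd:scsi:0,0,1:disk:/dev/rdsk/c1t4d0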
  • Storage devices:

There are two considerations about storage devices.

The storage devices of the guest must be physical disks. Migration of guests with lvols as storage devices is supported only in the HPVM 4.1 release.

root@ivmcl01:~ # hpvmmigrate -P hpvm1 -h ivmcl02
hpvmmigrate: Target VM Host error - Device does not exist.
hpvmmigrate: See HPVM command log file on target VM Host for more detail.

The WWID of the device must be the same in both HPVM hosts.

root@ivmcl01:~ # hpvmmigrate -P hpvm1 -h ivmcl02
hpvmmigrate: Device WWID does not match.
hpvmmigrate: See HPVM command log file on target VM Host for more detail.
  • Network configuration:

The virtual switch the guest is connected to must be configured on the same network card in both hosts. For example, if vswitch vlan2 uses lan0 on host1, it must also use lan0 on host2 or the migration will fail (a quick check is sketched below the error output).

root@ivmcl01:~ # hpvmmigrate -P hpvm1 -h ivmcl02
hpvmmigrate: Target VM Host error - vswitch validation failed.
hpvmmigrate: See HPVM command log file on target VM Host for more detail.
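
A way to verify this, sketched under the assumption that the vswitch is called vlan2 on both hosts: run hpvmnet on each host and compare the NamePPA column, which must point to the same physical NIC.

root@ivmcl01:~ # hpvmnet | grep vlan2
root@ivmcl02:~ # hpvmnet | grep vlan2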

PROCEDURE:

If all the prerequisites explained before are met by our guest, we can proceed with the migration. The command to use is hpvmmigrate; the name or number of the VM and the hostname of the destination server have to be provided. Some of the resources of the virtual machine, like the number of CPUs, the amount of RAM or the machine label, can also be modified (see the sketch after the basic migration below).

root@ivmcl01:~ # hpvmmigrate -P hpvm1 -h ivmcl02
hpvmmigrate: Guest migrated successfully.
root@ivmcl01:~ #
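
A hedged sketch of the resource changes mentioned above, assuming hpvmmigrate accepts the same -c (vCPUs), -r (memory) and -l (label) switches as hpvmmodify; check the man page of your HPVM release before relying on this.

root@ivmcl01:~ # hpvmmigrate -P hpvm1 -h ivmcl02 -c 2 -r 4G -l "Migrated guest"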

Check that the migrated guest now exists on the destination host.

root@ivmcl02:~ # hpvmstatus
[Virtual Machines]
Virtual Machine Name VM #  OS Type State     #VCPUs #Devs #Nets Memory  Runsysid
==================== ===== ======= ========= ====== ===== ===== ======= ========
oratest01                1 HPUX    On (OS)        4    10     3   16 GB        0
oratest02                2 HPUX    On (OS)        4     8     3   16 GB        0
sapvm01                  3 HPUX    Off            3     8     3    8 GB        0
sapvm02                  4 HPUX    Off            3     7     3    8 GB        0
sles01                   5 LINUX   On (OS)        1     4     3    4 GB        0
rhel01                   6 LINUX   Off            1     4     3    4 GB        0
hp-vxvm                  7 HPUX    On (OS)        2    17     3    6 GB        0
ws2003                   8 WINDOWS Off            4     4     3   12 GB        0
hpvm1                   10 HPUX    Off            1     1     1    3 GB        0
root@ivmcl02:~ #

As you can see once all the prerequisites have been met the migration is quite easy.

CONCLUSION:

Even with the disadvantage of lacking online migration, the guest migration feature can be useful to balance the load between HPVM hosts.

Juanma.

HPVM clones first boot tasks

March 17, 2010

Welcome again to “HPVM World!” my dear readers :-D

I have to say that, even with the initial disappointment about hpvmclone, cloning IVMs was a lot of fun, but I believe the post-cloning tasks weren't very clear, at least for me, so I decided to write this follow-up post to clarify that part.

Let's assume we already have a cloned virtual machine. In this particular case I used dd to clone the virtual disk and later created the IVM and added the storage device and the other resources, but this also applies to the other methods with minor changes.

[root@hpvmhost] ~ # hpvmstatus -P vmnode2 -d
[Virtual Machine Devices]

[Storage Interface Details]
disk:scsi:0,0,0:lv:/dev/vg_vmtest/rvmnode2disk
dvd:scsi:0,0,1:disk:/dev/rdsk/c1t4d0

[Network Interface Details]
network:lan:0,1,0xB20EBA14E76C:vswitch:localnet
network:lan:0,2,0x3E9492C9F615:vswitch:vlan02

[Misc Interface Details]
serial:com1::tty:console
[root@hpvmhost] ~ #

We start the virtual machine and access its console. Now we are going to follow some of the final steps of the third method described in my previous post. From the main EFI Boot Manager select the Boot option maintenance menu option.

EFI Boot Manager ver 1.10 [14.62] [Build: Mon Oct  1 09:27:26 2007]

Please select a boot option

     EFI Shell [Built-in]                                           
     Boot option maintenance menu                                    

     Use ^ and v to change option(s). Use Enter to select an option

Select Boot from a file and the select the first partition:

EFI Boot Maintenance Manager ver 1.10 [14.62]

Boot From a File.  Select a Volume

    IA64_EFI [Acpi(PNP0A03,0)/Pci(0|0)/Scsi(Pun0,Lun0)/HD(Part1,Sig
    IA64_EFI [Acpi(PNP0A03,0)/Pci(0|0)/Scsi(Pun0,Lun0)/HD(Part3,Sig7
    Removable Media Boot [Acpi(PNP0A03,0)/Pci(0|0)/Scsi(Pun1,Lun0)]
    Load File [Acpi(PNP0A03,0)/Pci(1|0)/Mac(B20EBA14E76C)]          
    Load File [Acpi(PNP0A03,0)/Pci(2|0)/Mac(3E9492C9F615)]        
    Load File [EFI Shell [Built-in]]                                
    Legacy Boot
    Exit

Enter the EFI directory, then the HPUX directory, and finally select the hpux.efi file. Like I said before, this part is very similar to the final steps of Method 3.

EFI Boot Maintenance Manager ver 1.10 [14.62]

Select file or change to new directory:

       03/09/10  03:45p <DIR>       4,096 .                         
       03/09/10  03:45p <DIR>       4,096 ..                        
       03/10/10  04:21p           657,609 hpux.efi                  
       03/09/10  03:45p            24,576 nbp.efi                   
       Exit

After this the machine will boot.

   Filename: \EFI\HPUX\hpux.efi
 DevicePath: [Acpi(PNP0A03,0)/Pci(0|0)/Scsi(Pun0,Lun0)/HD(Part1,Sig71252358-2BCD-11DF-8000-D6217B60E588)/\EFI\HPUX\hpux.efi]
   IA-64 EFI Application 03/10/10  04:21p     657,609 bytes

(C) Copyright 1999-2008 Hewlett-Packard Development Company, L.P.
All rights reserved

HP-UX Boot Loader for IPF  --  Revision 2.036

Press Any Key to interrupt Autoboot
\EFI\HPUX\AUTO ==> boot vmunix
Seconds left till autoboot -   0
AUTOBOOTING...> System Memory = 2042 MB
loading section 0
..................................................................................... (complete)
loading section 1
............... (complete)
loading symbol table
loading System Directory (boot.sys) to MFS
.....
loading MFSFILES directory (bootfs) to MFS
..................
Launching /stand/vmunix
SIZE: Text:43425K + Data:7551K + BSS:22118K = Total:73096K
...

When the VM is up, log in as root. The first tasks, as always, are to change the hostname and network configuration to avoid conflicts with the source machine (a sketch follows).
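
A minimal sketch of that clean-up, assuming the standard HP-UX tools; the values themselves are up to you:

root@vmnode2:/# set_parms hostname            # set the new hostname
root@vmnode2:/# set_parms ip_address          # set the new IP address
root@vmnode2:/# vi /etc/rc.config.d/netconf   # or edit HOSTNAME, IP_ADDRESS[0], etc. by hand
root@vmnode2:/# vi /etc/hosts                 # and update the hosts file accordingly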

Next we are going to recreate lvmtab, since the current one contains the LVM configuration of the source virtual machine. A simple vgdisplay will show the problem.

root@vmnode2:/# vgdisplay
vgdisplay: Warning: couldn't query physical volume "/dev/disk/disk15_p2":
The specified path does not correspond to physical volume attached to
this volume group
vgdisplay: Warning: couldn't query all of the physical volumes.
--- Volume groups ---
VG Name                     /dev/vg00
VG Write Access             read/write     
VG Status                   available                 
Max LV                      255    
Cur LV                      8      
Open LV                     8      
Max PV                      16     
Cur PV                      1      
Act PV                      0      
Max PE per PV               3085         
VGDA                        0   
PE Size (Mbytes)            8               
Total PE                    0       
Alloc PE                    0       
Free PE                     0       
Total PVG                   0        
Total Spare PVs             0              
Total Spare PVs in use      0

root@vmnode2:/#

To correct this remove the /etc/lvmtab file and launch a vgscan.

root@vmnode2:/# rm /etc/lvmtab
/etc/lvmtab: ? (y/n) y
root@vmnode2:/var/tmp/software# vgscan
Creating "/etc/lvmtab".
vgscan: Couldn't access the list of physical volumes for volume group "/dev/vg00".
Physical Volume "/dev/dsk/c1t1d0" contains no LVM information

*** LVMTAB has been created successfully.
*** Do the following to resync the information on the disk.
*** #1.  vgchange -a y
*** #2.  lvlnboot -R
root@vmnode2:/#

Follow the steps recommended in the vgscan output. The first step only applies if there are other VGs in the system; if there is only vg00 it is already active, so this step is not necessary (a small sketch follows).
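
Step #1 from the vgscan output as a tiny sketch: vg00 is already active, but any additional volume group would be activated like this (the /dev/vgdata name is just a placeholder).

root@vmnode2:/# vgchange -a y /dev/vgdata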

Running lvlnboot -R is mandatory, since we need to recover and update the links to the lvols in the Boot Data Reserved Area of the boot disk.

root@vmnode2:/# lvlnboot -R
Volume Group configuration for /dev/vg00 has been saved in /etc/lvmconf/vg00.conf
root@vmnode2:/#

Now that the LVM configuration is fixed, try the vgdisplay command again.

root@vmnode2:/# vgdisplay
--- Volume groups ---
VG Name                     /dev/vg00
VG Write Access             read/write
VG Status                   available
Max LV                      255
Cur LV                      8
Open LV                     8
Max PV                      16
Cur PV                      1
Act PV                      1
Max PE per PV               3085
VGDA                        2
PE Size (Mbytes)            8
Total PE                    3075
Alloc PE                    2866
Free PE                     209
Total PVG                   0
Total Spare PVs             0
Total Spare PVs in use      0

root@vmnode2:/#

With the LVM configuration fixed, the next step is to tell the system which disk to boot from.

root@vmnode2:/# setboot -p /dev/disk/disk21_p2
Primary boot path set to 0/0/0/0.0x0.0x0 (/dev/disk/disk21_p2)
root@vmnode2:/#
root@vmnode2:/# setboot
Primary bootpath : 0/0/0/0.0x0.0x0 (/dev/rdisk/disk21)
HA Alternate bootpath :
Alternate bootpath :

Autoboot is ON (enabled)
root@vmnode2:/#

Finally reboot the virtual machine and if we did everything correctly a new boot option will be available in the EFI Boot Manager.

EFI Boot Manager ver 1.10 [14.62] [Build: Mon Oct  1 09:27:26 2007]

Please select a boot option

    HP-UX Primary Boot: 0/0/0/0.0x0.0x0                             
    EFI Shell [Built-in]                                            
    Boot option maintenance menu                                    

    Use ^ and v to change option(s). Use Enter to select an option

Let the system boot by itself through the new default option (HP-UX Primary Boot) and we are done.

Any feedback would be welcome.

Juanma.

Cloning HPVM guests

March 9, 2010

Our next step in the wonderful HPVM World is… cloning virtual machines.

If you have used VMware Virtual Infrastructure cloning, you are probably used to the easy "right-click and clone VM" procedure. Sadly, HPVM cloning has nothing in common with it. In fact, the process to clone a virtual machine can be a little creepy.

Of course there is an hpvmclone command, and anyone could think, as I did the first time I had to clone an IVM, that you only have to provide the source VM and the new VM name and voilà, everything will be done:

[root@hpvmhost] ~ # hpvmclone -P ivm1 -N ivm_clone01
[root@hpvmhost] ~ # hpvmstatus
[Virtual Machines]
Virtual Machine Name VM #  OS Type State     #VCPUs #Devs #Nets Memory  Runsysid
==================== ===== ======= ========= ====== ===== ===== ======= ========
ivm1                     9 HPUX    Off            3     3     2    2 GB        0
ivm2                    10 HPUX    Off            1     7     1    3 GB        0
ivm_clone01             11 HPUX    Off            3     3     2    2 GB        0
[root@hpvmhost] ~ #

The new virtual machine can be seen and everything seems to be fine, but when you ask for the configuration details of the new IVM a nasty surprise appears… the storage devices have not been cloned; instead it looks like hpvmclone simply mapped the devices of the source IVM to the new IVM:

[root@hpvmhost] ~ # hpvmstatus -P ivm_clone01
[Virtual Machine Details]
Virtual Machine Name VM #  OS Type State
==================== ===== ======= ========
ivm_clone01             11 HPUX    Off       

[Authorized Administrators]
Oper Groups: 
Admin Groups:
Oper Users:  
Admin Users:  

[Virtual CPU Details]
#vCPUs Entitlement Maximum
====== =========== =======
     3       20.0%  100.0% 

[Memory Details]
Total    Reserved
Memory   Memory 
=======  ========
   2 GB     64 MB 

[Storage Interface Details]
Guest                                 Physical
Device  Adaptor    Bus Dev Ftn Tgt Lun Storage   Device
======= ========== === === === === === ========= =========================
disk    scsi         0   2   0   0   0 lv        /dev/vg_vmtest/rivm1d1
disk    scsi         0   2   0   1   0 lv        /dev/vg_vmtest/rivm1d2
dvd     scsi         0   2   0   2   0 disk      /dev/rdsk/c1t4d0

[Network Interface Details]
Interface Adaptor    Name/Num   PortNum Bus Dev Ftn Mac Address
========= ========== ========== ======= === === === =================
vswitch   lan        vlan02     11        0   0   0 f6-fb-bf-41-78-63
vswitch   lan        localnet   10        0   1   0 2a-69-35-d5-c1-5f

[Misc Interface Details]
Guest                                 Physical
Device  Adaptor    Bus Dev Ftn Tgt Lun Storage   Device
======= ========== === === === === === ========= =========================
serial  com1                           tty       console
[root@hpvmhost] ~ #

With this configuration the virtual machines can't be booted at the same time. So, what is the purpose of hpvmclone if the newly cloned machine can't be used simultaneously with the original? Honestly, this makes no sense, at least to me.

At that point and since I really wanted to use both machines in a test cluster I decided to do a little research through Google and ITRC.

After re-reading the official documentation, a few dozen posts about HPVM cloning and HPVM in general, and a few very nice posts on Daniel Parkes' HP-UX Tips & Tricks site, I finally came up with three different methods to successfully and "physically" clone an Integrity Virtual Machine.

METHOD 1: Using dd.

  • Create the LVM structure for the new virtual machine on the host.
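The lvol creation for this first step might look like this (a sketch only: the names match the dd example below and the 12000 MB size is an assumption; the clone lvols must be at least as large as the source ones):

[root@hpvmhost] ~ # lvcreate -L 12000 -n clone01_d1 vg_vmtest
[root@hpvmhost] ~ # lvcreate -L 12000 -n clone01_d2 vg_vmtest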
  • Use dd to copy every storage device from the source virtual machine.
[root@hpvmhost] ~ # dd if=/dev/vg_vmtest/rivm1d1 of=/dev/vg_vmtest/rclone01_d1 bs=1024k
12000+0 records in
12000+0 records out
[root@hpvmhost] ~ #
[root@hpvmhost] ~ # dd if=/dev/vg_vmtest/rivm1d2 of=/dev/vg_vmtest/rclone01_d2 bs=1024k
12000+0 records in
12000+0 records out
[root@hpvmhost] ~ #
  • Using hpvmclone, create the new machine and, in the same command, add the new storage devices and delete the old ones from its configuration; any resource can also be modified at this point, just like with hpvmcreate.
[root@hpvmhost] ~ # hpvmclone -P ivm1 -N clone01 -d disk:scsi:0,2,0:lv:/dev/vg_vmtest/rivm1d1 \
> -d disk:scsi:0,2,1:lv:/dev/vg_vmtest/rivm1d2 \
> -a disk:scsi::lv:/dev/vg_vmtest/rclone01_d1 \
> -a disk:scsi::lv:/dev/vg_vmtest/rclone01_d2 \
> -l "Clone-cluster 01" \
> -B manual
[root@hpvmhost] ~ #
  • Start the new virtual machine and make the necessary changes to the guest OS (network, hostname, etc).

METHOD 2: Clone the virtual storage devices at the same time the IVM is cloned.

Yes, yes and yes, it can be done with hpvmclone: you have to use the -b switch and provide the storage resource to use.

I really didn't test this procedure with devices other than the boot disk or disks. In theory, the man page of the command and the HPVM documentation state that this option can be used to specify the boot device of the clone, but I used it to clone a virtual machine with one boot disk and another with two disks, and in both cases it worked without problems.

  • As in METHOD 1 create the necessary LVM infrastructure for the new IVM.
  • Once the lvols are created clone the virtual machine.
[root@hpvmhost] ~ # hpvmclone -P ivm1 -N vxcl01 -a disk:scsi::lv:/dev/vg_vmtest/rvxcl01_d1 \
> -a disk:scsi::lv:/dev/vg_vmtest/rvxcl01_d2 \
> -b disk:scsi:0,2,0:lv:/dev/vg_vmtest/rvxcl01_d1 \
> -b disk:scsi:0,2,1:lv:/dev/vg_vmtest/rvxcl01_d2 \
> -d disk:scsi:0,2,0:lv:/dev/vg_vmtest/rivm1d1 \
> -d disk:scsi:0,2,1:lv:/dev/vg_vmtest/rivm1d2 \
> -B manual
12000+0 records in
12000+0 records out
hpvmclone: Virtual storage cloned successfully.
12000+0 records in
12000+0 records out
hpvmclone: Virtual storage cloned successfully.
[root@hpvmhost] ~ #
  • Start the virtual machine.
  • Now log into the virtual machine to check the start-up process and to make any change needed.

METHOD 3: Dynamic Root Disk.

Since DRD can produce a clone of vg00, we can also use it to clone an Integrity Virtual Machine.

  • The first step is to create a new lvol that will contain the clone of vg00; it has to be at least the same size as the original disk.
  • Install the latest supported DRD version on the virtual machine to be cloned.
  • Add the new volume to the source virtual machine and from the guest OS re-scan for the new disk.
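The rescan inside the guest might look like this (a sketch assuming the standard HP-UX tools; the device names will differ on your system):

root@ivm2:~# ioscan -fnC disk     # probe for the new disk
root@ivm2:~# insf -eC disk        # create the device special files
root@ivm2:~# ioscan -funC disk    # confirm the new disk is visible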
  • Now proceed with the DRD clone.
root@ivm2:~# drd clone -v -x overwrite=true -t /dev/disk/disk15   

=======  03/09/10 15:45:15 MST  BEGIN Clone System Image (user=root)  (jobid=ivm2)

       * Reading Current System Information
       * Selecting System Image To Clone
       * Converting legacy Dsf "/dev/dsk/c0t0d0" to "/dev/disk/disk3"
       * Selecting Target Disk
NOTE:    There may be LVM 2 volumes configured that will not be recognized.
       * Selecting Volume Manager For New System Image
       * Analyzing For System Image Cloning
       * Creating New File Systems
       * Copying File Systems To New System Image
       * Copying File Systems To New System Image succeeded.
       * Making New System Image Bootable
       * Unmounting New System Image Clone
       * System image: "sysimage_001" on disk "/dev/disk/disk15"

=======  03/09/10 16:05:20 MST  END Clone System Image succeeded. (user=root)  (jobid=ivm2)

root@ivm2:~#
  • Mount the new image.
root@ivm2:~# drd mount -v

=======  03/09/10 16:09:08 MST  BEGIN Mount Inactive System Image (user=root)  (jobid=ivm2)

       * Checking for Valid Inactive System Image
       * Locating Inactive System Image
       * Preparing To Mount Inactive System Image
       * Selected inactive system image "sysimage_001" on disk "/dev/disk/disk15".
       * Mounting Inactive System Image
       * System image: "sysimage_001" on disk "/dev/disk/disk15"

=======  03/09/10 16:09:26 MST  END Mount Inactive System Image succeeded. (user=root)  (jobid=ivm2)

root@ivm2:~#
  • On the mounted image edit the netconf file and modify the hostname to “” and remove any network configuration such as IP address, gateway, etc. The image is mounted on /var/opt/drd/mnts/sysimage_001.
  • Move or delete the DRD XML registry file in /var/opt/drd/mnts/sysimage_001/var/opt/drd/registry in order to avoid any problems during the boot of the clone since the source disk will not be present.
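A combined sketch of these last two steps, assuming the image is mounted under /var/opt/drd/mnts/sysimage_001 and that the registry file is named registry.xml (check the actual name in that directory before moving anything):

root@ivm2:~# vi /var/opt/drd/mnts/sysimage_001/etc/rc.config.d/netconf
# blank out HOSTNAME, IP_ADDRESS[0], SUBNET_MASK[0], ROUTE_GATEWAY[0], ROUTE_DESTINATION[0], ...
root@ivm2:~# mv /var/opt/drd/mnts/sysimage_001/var/opt/drd/registry/registry.xml \
> /var/opt/drd/mnts/sysimage_001/var/opt/drd/registry/registry.xml.orig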
  • Unmount the image.
root@ivm2:~# drd umount -v 

=======  03/09/10 16:20:45 MST  BEGIN Unmount Inactive System Image (user=root)  (jobid=ivm2)

       * Checking for Valid Inactive System Image
       * Locating Inactive System Image
       * Preparing To Unmount Inactive System Image
       * Unmounting Inactive System Image
       * System image: "sysimage_001" on disk "/dev/disk/disk15"

=======  03/09/10 16:20:58 MST  END Unmount Inactive System Image succeeded. (user=root)  (jobid=ivm2)

root@ivm2:~#
  • Now we are going to create the new virtual machine with hpvmclone. Of course, the new IVM could also be created with hpvmcreate, adding the new disk as its boot disk.
[root@hpvmhost] ~ # hpvmclone -P ivm2 -N ivm3 -B manual -d disk:scsi:0,1,0:lv:/dev/vg_vmtest/rivm2disk
[root@hpvmhost] ~ # hpvmstatus -P ivm3
[Virtual Machine Details]
Virtual Machine Name VM #  OS Type State
==================== ===== ======= ========
ivm3                     4 HPUX    Off       

[Authorized Administrators]
Oper Groups: 
Admin Groups:
Oper Users:  
Admin Users:  

[Virtual CPU Details]
#vCPUs Entitlement Maximum
====== =========== =======
     1       10.0%  100.0% 

[Memory Details]
Total    Reserved
Memory   Memory 
=======  ========
   3 GB     64 MB 

[Storage Interface Details]
Guest                                 Physical
Device  Adaptor    Bus Dev Ftn Tgt Lun Storage   Device
======= ========== === === === === === ========= =========================
dvd     scsi         0   1   0   1   0 disk      /dev/rdsk/c1t4d0
disk    scsi         0   1   0   2   0 lv        /dev/vg_vmtest/rivm3disk

[Network Interface Details]
Interface Adaptor    Name/Num   PortNum Bus Dev Ftn Mac Address
========= ========== ========== ======= === === === =================
vswitch   lan        vlan02     11        0   0   0 52-4f-f9-5e-02-82

[Misc Interface Details]
Guest                                 Physical
Device  Adaptor    Bus Dev Ftn Tgt Lun Storage   Device
======= ========== === === === === === ========= =========================
serial  com1                           tty       console
[root@hpvmhost] ~ #
  • The final step is to boot the newly created machine; from the EFI menu we're going to create a new boot option.
  • First select the Boot option maintenance menu:
EFI Boot Manager ver 1.10 [14.62] [Build: Mon Oct  1 09:27:26 2007]

Please select a boot option

    HP-UX Primary Boot: 0/0/1/0.0.0                                
    EFI Shell [Built-in]                                           
    Boot option maintenance menu                                    

    Use ^ and v to change option(s). Use Enter to select an option
  • Now go to Add a Boot Option.
EFI Boot Maintenance Manager ver 1.10 [14.62]

Main Menu. Select an Operation

        Boot from a File                                           
        Add a Boot Option                                          
        Delete Boot Option(s)                                      
        Change Boot Order                                           

        Manage BootNext setting                                    
        Set Auto Boot TimeOut                                       

        Select Active Console Output Devices                       
        Select Active Console Input Devices                        
        Select Active Standard Error Devices                        

        Cold Reset                                                 
        Exit                                                       

    Timeout-->[10] sec SystemGuid-->[5A0F8F26-2BA2-11DF-9C04-001A4B07F002]
    SerialNumber-->[VM01010008          ]
  • Select the first partition of the disk.
EFI Boot Maintenance Manager ver 1.10 [14.62]

Add a Boot Option.  Select a Volume

    IA64_EFI [Acpi(PNP0A03,0)/Pci(1|0)/Scsi(Pun2,Lun0)/HD(Part1,Sig7
    IA64_EFI [Acpi(PNP0A03,0)/Pci(1|0)/Scsi(Pun2,Lun0)/HD(Part3,Sig7
    Removable Media Boot [Acpi(PNP0A03,0)/Pci(1|0)/Scsi(Pun1,Lun0)]
    Load File [Acpi(PNP0A03,0)/Pci(0|0)/Mac(524FF95E0282)]         
    Load File [EFI Shell [Built-in]]                               
    Legacy Boot                                                    
    Exit
  • Select the first option.
EFI Boot Maintenance Manager ver 1.10 [14.62]

Select file or change to new directory:

       03/09/10  03:45p <DIR>       4,096 EFI                      
       [Treat like Removable Media Boot]                           
    Exit
  • Enter the HPUX directory.
EFI Boot Maintenance Manager ver 1.10 [14.62]

Select file or change to new directory:

       03/09/10  03:45p <DIR>       4,096 .                        
       03/09/10  03:45p <DIR>           0 ..                       
       03/09/10  03:45p <DIR>       4,096 HPUX                     
       03/09/10  03:45p <DIR>       4,096 Intel_Firmware           
       03/09/10  03:45p <DIR>       4,096 diag                     
       03/09/10  03:45p <DIR>       4,096 hp                       
       03/09/10  03:45p <DIR>       4,096 tools                    
    Exit
  • Select the hpux.efi file.
EFI Boot Maintenance Manager ver 1.10 [14.62]

Select file or change to new directory:

       03/09/10  03:45p <DIR>       4,096 .                        
       03/09/10  03:45p <DIR>       4,096 ..                       
       03/09/10  03:45p           654,025 hpux.efi                 
       03/09/10  03:45p            24,576 nbp.efi                  
    Exit
  • Enter BOOTDISK as description and None as BootOption Data Type. Save changes.
Filename: \EFI\HPUX\hpux.efi
DevicePath: [Acpi(PNP0A03,0)/Pci(1|0)/Scsi(Pun2,Lun0)/HD(Part1,Sig71252358-2BCD-11DF-8000-D6217B60E588)/\EFI\HPUX\hpux.efi]

IA-64 EFI Application 03/09/10  03:45p     654,025 bytes

Enter New Description:  BOOTDISK
New BootOption Data. ASCII/Unicode strings only, with max of 240 characters

Enter BootOption Data Type [A-Ascii U-Unicode N-No BootOption] :  None

Save changes to NVRAM [Y-Yes N-No]:
  • Go back to the EFI main menu and boot from the new option.
EFI Boot Manager ver 1.10 [14.62] [Build: Mon Oct  1 09:27:26 2007]

Please select a boot option
HP-UX Primary Boot: 0/0/1/0.0.0
EFI Shell [Built-in]
BOOTDISK
Boot option maintenance menu

Use ^ and v to change option(s). Use Enter to select an option
Loading.: BOOTDISK
Starting: BOOTDISK

(C) Copyright 1999-2006 Hewlett-Packard Development Company, L.P.
All rights reserved

HP-UX Boot Loader for IPF  --  Revision 2.035

Press Any Key to interrupt Autoboot
\EFI\HPUX\AUTO ==> boot vmunix
Seconds left till autoboot -   0
AUTOBOOTING...> System Memory = 3066 MB
loading section 0
.................................................................................. (complete)
loading section 1
.............. (complete)
loading symbol table
loading System Directory (boot.sys) to MFS
.....
loading MFSFILES directory (bootfs) to MFS
................
Launching /stand/vmunix
SIZE: Text:41555K + Data:6964K + BSS:20747K = Total:69267K
  • Finally, the OS will ask some questions about the network configuration and other parameters; answer whatever best suits your needs.
_______________________________________________________________________________

                       Welcome to HP-UX!

Before using your system, you will need to answer a few questions.

The first question is whether you plan to use this system on a network.

Answer "yes" if you have connected the system to a network and are ready
to link with a network.

Answer "no" if you:

     * Plan to set up this system as a standalone (no networking).

     * Want to use the system now as a standalone and connect to a
       network later.
_______________________________________________________________________________

Are you ready to link this system to a network?

Press [y] for yes or [n] for no, then press [Enter] y
...

And we are done.

Conclusions: I have to say that at first the HPVM cloning system disappointed me, but after a while I got used to it.

In my opinion the best of the above methods is the second one if you have a single boot disk, and I really can't see a reason to have a vg00 with several PVs on a virtual machine. If you have an IVM as a template and need to produce many copies as quickly as possible, this method is perfect.

Of course there is a fourth method: Our beloved Ignite-UX. But I will write about it in another post.

Juanma.

[root@hpvmhost] ~ # hpvmstatus -P hpvxcl01
[Virtual Machine Details]
Virtual Machine Name VM #  OS Type State
==================== ===== ======= ========
hpvxcl01                11 HPUX    Off

[Authorized Administrators]
Oper Groups:
Admin Groups:
Oper Users:
Admin Users:

[Virtual CPU Details]
#vCPUs Entitlement Maximum
====== =========== =======
     3       20.0%  100.0%

[Memory Details]
Total    Reserved
Memory   Memory
=======  ========
   2 GB     64 MB

[Storage Interface Details]
Guest                                 Physical
Device  Adaptor    Bus Dev Ftn Tgt Lun Storage   Device
======= ========== === === === === === ========= =========================
disk    scsi         0   2   0   0   0 lv        /dev/vg_vmtest/rivm1d1
disk    scsi         0   2   0   1   0 lv        /dev/vg_vmtest/rivm1d2
dvd     scsi         0   2   0   2   0 disk      /dev/rdsk/c1t4d0
disk    scsi         0   2   0   3   0 lv        /dev/vg_vmtest/rivm1sd1
disk    scsi         0   2   0   4   0 lv        /dev/vg_vmtest/rivm1sd2
disk    scsi         0   2   0   5   0 lv        /dev/vg_vmtest/rivm1sd3
disk    scsi         0   2   0   6   0 lv        /dev/vg_vmtest/rivm1md1
disk    scsi         0   2   0   7   0 lv        /dev/vg_vmtest/rivm1md2
disk    scsi         0   2   0   8   0 lv        /dev/vg_vmtest/rivm1md3
disk    scsi         0   2   0   9   0 lv        /dev/vg_vmtest/rivm1lmd1
disk    scsi         0   2   0  10   0 lv        /dev/vg_vmtest/rivm1lmd2

[Network Interface Details]
Interface Adaptor    Name/Num   PortNum Bus Dev Ftn Mac Address
========= ========== ========== ======= === === === =================
vswitch   lan        swtch502   11        0   0   0 f6-fb-bf-41-78-63
vswitch   lan        localnet   10        0   1   0 2a-69-35-d5-c1-5f

[Misc Interface Details]
Guest                                 Physical
Device  Adaptor    Bus Dev Ftn Tgt Lun Storage   Device
======= ========== === === === === === ========= =========================
serial  com1                           tty       console
[root@hpvmhost] ~ #

Moving vNICs between vSwitches

March 3, 2010

Continuing with my HPVM re-learning process, today I've been playing around with my virtual switches and a question arose.

How can I move a vNic from one vSwitch to another?

I discovered it is not a difficult task; there is just one important point to take into account: the virtual machine must be powered off (a sketch follows). This kind of change can't be done while the IVM is online, at least with HPVM 3.5; I never used the 4.0 or 4.1 releases of HPVM and I didn't find anything in the documentation that suggests a different behavior.
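
If the guest is still running, a minimal sketch of stopping it first (or simply shut it down from inside the guest OS):

[root@hpvmhost] ~ # hpvmstop -P ivm1
[root@hpvmhost] ~ # hpvmstatus -P ivm1    # the State column should now show Off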

To perform the operation we're going to use, as usual ;-), hpvmmodify. It comes with the -m switch to modify the I/O resources of an existing virtual machine, but you have to specify the hardware address of the device. To identify the address of the network card, launch hpvmstatus with -d; this option shows the output in the format used on the command line.

[root@hpvmhost] ~ # hpvmstatus -P ivm1 -d
[Virtual Machine Devices]
...
[Network Interface Details]
network:lan:0,0,0x56E9E3096A22:vswitch:vlan02
network:lan:0,1,0xAED6F7FA4E3E:vswitch:localnet
...
[root@hpvmhost] ~ #

As can be seen in the Network Interface Details, the third field shows, separated by commas, the LAN bus, the device number and the MAC address of the vNIC. We only need the first two values, that is, the bus and device number: "0,0" in our example.

Now we can proceed.

[root@hpvmhost] ~ # hpvmmodify -P ivm1 -m network:lan:0,0:vswitch:vlan03
[root@hpvmhost] ~ #
[root@hpvmhost] ~ # hpvmstatus -P ivm1
[Virtual Machine Details]
Virtual Machine Name VM #  OS Type State
==================== ===== ======= ========
ivm1                     9 HPUX    On (OS)   
...
[Network Interface Details]
Interface Adaptor    Name/Num   PortNum Bus Dev Ftn Mac Address
========= ========== ========== ======= === === === =================
vswitch   lan        vlan03     9         0   0   0 56-e9-e3-09-6a-22
vswitch   lan        localnet   9         0   1   0 ae-d6-f7-fa-4e-3e
...
[root@hpvmhost] ~ #

And we are done.

I will write a few additional posts covering more HPVM tips, small ones and big ones, as I practice them on my lab server.

Juanma.

Coming back to the IVM world

March 2, 2010

Yes, I have to admit it: it's been a while since the last time I created an Integrity Virtual Machine. In my last job we didn't have HPVM, and here the VMs were already running when I arrived. So a few weeks ago I decided to cut my teeth again with HPVM, especially since I am pushing very hard for an OS and HPVM version upgrade of the IVM cluster, which is currently running HP-UX 11.23 with HPVM 3.5.

The first logical step to get proficient again with IVM is to create a new virtual machine. I asked Javi, our storage guy, for a new LUN and, after adding it to my lab server, I started the whole process.

Some of the steps are obvious for any HP-UX sysadmin, like VG and LV creation, but I decided to show the commands in order to maintain some consistency across this how-to/checklist/whatever-you-like-to-call-it.

  • Create a volume group for the IVM virtual disks.
[root@hpvmhost] ~ # vgcreate -s 16 -e 6000 vg_vmtest /dev/dsk/c15t7d1
Volume group "/dev/vg_vmtest" has been successfully created.
Volume Group configuration for /dev/vg_vmtest has been saved in /etc/lvmconf/vg_vmtest.conf
[root@hpvmhost] ~ #
[root@hpvmhost] ~ # vgextend vg_vmtest /dev/dsk/c5t7d1  /dev/dsk/c7t7d1 /dev/dsk/c13t7d1
Volume group "vg_vmtest" has been successfully extended.
Volume Group configuration for /dev/vg_vmtest has been saved in /etc/lvmconf/vg_vmtest.conf
[root@hpvmhost] ~ # vgdisplay -v vg_vmtest
--- Volume groups ---
VG Name                     /dev/vg_vmtest
VG Write Access             read/write     
VG Status                   available                 
Max LV                      255    
Cur LV                      0      
Open LV                     0      
Max PV                      16     
Cur PV                      1      
Act PV                      1      
Max PE per PV               6000         
VGDA                        2   
PE Size (Mbytes)            16              
Total PE                    3199    
Alloc PE                    0       
Free PE                     3199    
Total PVG                   0        
Total Spare PVs             0              
Total Spare PVs in use      0                     

--- Physical volumes ---
PV Name                     /dev/dsk/c15t7d1
PV Name                     /dev/dsk/c5t7d1  Alternate Link
PV Name                     /dev/dsk/c7t7d1  Alternate Link
PV Name                     /dev/dsk/c13t7d1 Alternate Link
PV Status                   available                
Total PE                    3199    
Free PE                     3199    
Autoswitch                  On        
Proactive Polling           On               

[root@hpvmhost] ~ #
  • Create one lvol for each disk you want to add to your virtual machine; of course, these lvols must belong to the volume group previously created.
[root@hpvmhost] ~ # lvcreate -L 12000 -n ivm1d1 vg_vmtest
Logical volume "/dev/vg_vmtest/ivm1d1" has been successfully created with
character device "/dev/vg_vmtest/rivm1d1".
Logical volume "/dev/vg_vmtest/ivm1d1" has been successfully extended.
Volume Group configuration for /dev/vg_vmtest has been saved in /etc/lvmconf/vg_vmtest.conf
[root@hpvmhost] ~ #
[root@hpvmhost] ~ # lvcreate -L 12000 -n ivm1d2 vg_vmtest
Logical volume "/dev/vg_vmtest/ivm1d2" has been successfully created with
character device "/dev/vg_vmtest/rivm1d2".
Logical volume "/dev/vg_vmtest/ivm1d2" has been successfully extended.
Volume Group configuration for /dev/vg_vmtest has been saved in /etc/lvmconf/vg_vmtest.conf
[root@hpvmhost] ~ #
  • Now we're going to do some real stuff. Create the IVM with the hpvmcreate command and use hpvmstatus to check that everything went well:
[root@hpvmhost] ~ # hpvmcreate -P ivm1 -O hpux  
[root@hpvmhost] ~ # hpvmstatus -P ivm1
[Virtual Machine Details]
Virtual Machine Name VM #  OS Type State
==================== ===== ======= ========
ivm1                     8 HPUX    Off       

[Authorized Administrators]
Oper Groups:  
Admin Groups:
Oper Users:   
Admin Users:  

[Virtual CPU Details]
#vCPUs Entitlement Maximum
====== =========== =======
 1       10.0%  100.0% 

[Memory Details]
Total    Reserved
Memory   Memory  
=======  ========
 2 GB     64 MB 

[Storage Interface Details]
Guest                                 Physical
Device  Adaptor    Bus Dev Ftn Tgt Lun Storage   Device
======= ========== === === === === === ========= =========================

[Network Interface Details]
Interface Adaptor    Name/Num   PortNum Bus Dev Ftn Mac Address
========= ========== ========== ======= === === === =================

[Misc Interface Details]
Guest                                 Physical
Device  Adaptor    Bus Dev Ftn Tgt Lun Storage   Device
======= ========== === === === === === ========= =========================
serial  com1                           tty       console
[root@hpvmhost] ~ #

We have a new virtual machine created but with no resources at all.

If you have read the HPVM documentation, and you should, you probably know that every resource can be assigned at creation time, but I like to add them later one by one (a sketch of the all-in-one form follows).
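
For the record, a hedged sketch of an all-in-one creation, assuming hpvmcreate accepts the same -c, -r and -a switches used with hpvmmodify below; check the man page of your release before relying on it.

[root@hpvmhost] ~ # hpvmcreate -P ivm1 -O hpux -c 2 -r 4G \
> -a disk:scsi::lv:/dev/vg_vmtest/rivm1d1 \
> -a network:lan:vswitch:vlan02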

From now on we're going to use hpvmstatus to verify every change made. This command can be invoked without options to show a general summary, or it can query a single virtual machine; a verbose mode is also available with -V. Take a look at its man page for more options.

  • Add more CPU and RAM. The default values are 1 vCPU and 2 GB of RAM; more can be assigned with hpvmmodify:
[root@hpvmhost] ~ # hpvmmodify -P ivm1 -c 2
[root@hpvmhost] ~ # hpvmmodify -P ivm1 -r 4G
[root@hpvmhost] ~ # hpvmstatus
[Virtual Machines]
Virtual Machine Name VM #  OS Type State     #VCPUs #Devs #Nets Memory  Runsysid
==================== ===== ======= ========= ====== ===== ===== ======= ========
oratest01                1 HPUX    On (OS)        4    10     3   16 GB        0
oratest02                2 HPUX    On (OS)        4     8     3   16 GB        0
sapvm01                  3 HPUX    Off            3     8     3    8 GB        0
sapvm02                  4 HPUX    Off            3     7     3    8 GB        0
sles01                   5 LINUX   On (OS)        1     4     3    4 GB        0
rhel01                   6 LINUX   Off            1     4     3    4 GB        0
hp-vxvm                  7 HPUX    On (OS)        2    17     3    6 GB        0
ivm1                     8 HPUX    Off            2     0     0    4 GB        0
[root@hpvmhost] ~ #
  • With the CPUs and RAM done, it's time to add the storage devices; as always we're going to use hpvmmodify:
[root@hpvmhost] ~ # hpvmmodify -P ivm1 -a disk:scsi::lv:/dev/vg_vmtest/rivm1d1
[root@hpvmhost] ~ # hpvmmodify -P ivm1 -a disk:scsi::lv:/dev/vg_vmtest/rivm1d2
[root@hpvmhost] ~ # hpvmmodify -P ivm1 -a dvd:scsi::disk:/dev/rdsk/c1t4d0
[root@hpvmhost] ~ # hpvmstatus -P ivm1
[Virtual Machine Details]
Virtual Machine Name VM #  OS Type State
==================== ===== ======= ========
ivm1                     8 HPUX    Off       

[Authorized Administrators]
Oper Groups:  
Admin Groups:
Oper Users:   
Admin Users:  

[Virtual CPU Details]
#vCPUs Entitlement Maximum
====== =========== =======
 2       10.0%  100.0% 

[Memory Details]
Total    Reserved
Memory   Memory  
=======  ========
 4 GB     64 MB 

[Storage Interface Details]
Guest                                 Physical
Device  Adaptor    Bus Dev Ftn Tgt Lun Storage   Device
======= ========== === === === === === ========= =========================
disk    scsi         0   2   0   0   0 lv        /dev/vg_vmtest/rivm1d1
disk    scsi         0   2   0   1   0 lv        /dev/vg_vmtest/rivm1d2
dvd     scsi         0   2   0   2   0 disk      /dev/rdsk/c1t4d0

[Network Interface Details]
Interface Adaptor    Name/Num   PortNum Bus Dev Ftn Mac Address
========= ========== ========== ======= === === === =================

[Misc Interface Details]
Guest                                 Physical
Device  Adaptor    Bus Dev Ftn Tgt Lun Storage   Device
======= ========== === === === === === ========= =========================
serial  com1                           tty       console
[root@hpvmhost] ~ #

An important tip about the storage devices: remember that you have to use the character device file of the LV. If a block device is used, you will get the following error:

[root@hpvmhost] ~ # hpvmmodify -P ivm1 -a disk:scsi::lv:/dev/vg_vmtest/ivm1d1
hpvmmodify: WARNING (ivm1): Expecting a character device file for disk backing file, but '/dev/vg_vmtest/ivm1d1' appears to be a block device.
hpvmmodify: ERROR (ivm1): Illegal blk device '/dev/vg_vmtest/ivm1d1' as backing device.
hpvmmodify: ERROR (ivm1): Unable to add device '/dev/vg_vmtest/ivm1d1'.
hpvmmodify: Unable to create device disk:scsi::lv:/dev/vg_vmtest/ivm1d1.
hpvmmodify: Unable to modify the guest.
[root@hpvmhost] ~ #
  • Virtual networking 1: First check the available virtual switches with hpvmnet:
[root@hpvmhost] / # hpvmnet
Name     Number State   Mode      NamePPA  MAC Address    IP Address
======== ====== ======= ========= ======== ============== ===============
localnet      1 Up      Shared             N/A            N/A
vlan02        2 Up      Shared    lan3     0x000000000000 192.168.1.12
vlan03        3 Up      Shared    lan4     0x001111111111 10.10.3.4
[root@hpvmhost] / #
  • Virtual Networking 2: Add a couple of vnics to the virtual machine.
[root@hpvmhost] / # hpvmmodify -P ivm1 -a network:lan:vswitch:vlan02
[root@hpvmhost] / # hpvmmodify -P ivm1 -a network:lan:vswitch:localnet
[root@hpvmhost] / #
[root@hpvmhost] / # hpvmstatus -P ivm1
[Virtual Machine Details]
Virtual Machine Name VM #  OS Type State
==================== ===== ======= ========
ivm1                     8 HPUX    Off       

[Authorized Administrators]
Oper Groups:  
Admin Groups:
Oper Users:   
Admin Users:  

[Virtual CPU Details]
#vCPUs Entitlement Maximum
====== =========== =======
 2       10.0%  100.0% 

[Memory Details]
Total    Reserved
Memory   Memory  
=======  ========
 4 GB     64 MB 

[Storage Interface Details]
Guest                                 Physical
Device  Adaptor    Bus Dev Ftn Tgt Lun Storage   Device
======= ========== === === === === === ========= =========================
disk    scsi         0   2   0   0   0 lv        /dev/vg_vmtest/rivm1d1
disk    scsi         0   2   0   1   0 lv        /dev/vg_vmtest/rivm1d2
dvd     scsi         0   2   0   2   0 disk      /dev/rdsk/c1t4d0

[Network Interface Details]
Interface Adaptor    Name/Num   PortNum Bus Dev Ftn Mac Address
========= ========== ========== ======= === === === =================
vswitch   lan        vlan02     8         0   0   0 56-e9-e3-09-6a-22
vswitch   lan        localnet   8         0   1   0 ae-d6-f7-fa-4e-3e

[Misc Interface Details]
Guest                                 Physical
Device  Adaptor    Bus Dev Ftn Tgt Lun Storage   Device
======= ========== === === === === === ========= =========================
serial  com1                           tty       console
[root@hpvmhost] / #
  • And we have an IVM ready to be used. To start it, use the hpvmstart command and access its console with hpvmconsole; the interface is almost identical to the GSP/MP.
[root@hpvmhost] ~ # hpvmstart -P ivm1
(C) Copyright 2000 - 2008 Hewlett-Packard Development Company, L.P.
Opening minor device and creating guest machine container
Creation of VM, minor device 3
Allocating guest memory: 4096MB
  allocating low RAM (0-80000000, 2048MB)
/opt/hpvm/lbin/hpvmapp (/var/opt/hpvm/uuids/2b3b1198-2062-11df-9e06-001a4b07f002/vmm_config.current): Allocated 2147483648 bytes at 0x6000000100000000
  allocating high RAM (100000000-180000000, 2048MB)
/opt/hpvm/lbin/hpvmapp (/var/opt/hpvm/uuids/2b3b1198-2062-11df-9e06-001a4b07f002/vmm_config.current): Allocated 2147483648 bytes at 0x6000000200000000
    locking memory: 100000000-180000000
    allocating datalogger memory: FF800000-FF840000 (256KB for 155KB)
/opt/hpvm/lbin/hpvmapp (/var/opt/hpvm/uuids/2b3b1198-2062-11df-9e06-001a4b07f002/vmm_config.current): Allocated 262144 bytes at 0x6000000300000000
    locking datalogger memory
  allocating firmware RAM (fff00000-fff20000, 128KB)
/opt/hpvm/lbin/hpvmapp (/var/opt/hpvm/uuids/2b3b1198-2062-11df-9e06-001a4b07f002/vmm_config.current): Allocated 131072 bytes at 0x6000000300080000
    locked SAL RAM: 00000000fff00000 (8KB)
    locked ESI RAM: 00000000fff02000 (8KB)
    locked PAL RAM: 00000000fff04000 (8KB)
    locked Min Save State: 00000000fff06000 (8KB)
    locked datalogger: 00000000ff800000 (256KB)
Loading boot image
Image initial IP=102000 GP=67E000
Initialize guest memory mapping tables
Starting event polling thread
Starting thread initialization
Daemonizing....
hpvmstart: Successful start initiation of guest 'ivm1'
[root@hpvmhost] ~ #
[root@hpvmhost] ~ # hpvmconsole -P ivm1

   vMP MAIN MENU

         CO: Console
         CM: Command Menu
         CL: Console Log
         SL: Show Event Logs
         VM: Virtual Machine Menu
         HE: Main Help Menu
         X: Exit Connection

[ivm1] vMP> co

       (Use Ctrl-B to return to vMP main menu.)

- - - - - - - - - - Prior Console Output - - - - - - - - - -

And we are finished. I'm not going through the OS installation process since it's not the objective of this post and it is well covered in the HP-UX documentation.

I really enjoyed writing this post; it has been a very useful exercise to re-learn the roots of HPVM and a very good starting point for the HP-UX/HPVM upgrade I'm going to undertake during the next few weeks.

Juanma.
