I was playing this afternoon with DRD on an HP-UX 11.23 machine, and just after launching the clone process I decided to stop it with Ctrl-C, since I wasn't logging the session and wanted to. The process stopped with an error and I was sent back to the shell.
 * Copying File Systems To New System Image
ERROR:   Exiting due to keyboard interrupt.
 * Unmounting New System Image Clone
 * System image: "sysimage_001" on disk "/dev/dsk/c2t1d0"
ERROR:   Caught signal SIGINT. This process is running critical code, this signal will be handled shortly.
ERROR:   Caught signal SIGINT. This process is running critical code, this signal will be handled shortly.
ERROR:   Caught signal SIGINT. This process is running critical code, this signal will be handled shortly.
ERROR:   Caught signal SIGINT. This process is running critical code, this signal will be handled shortly.
ERROR:   Unmounting the file system fails.
       - Unmounting the clone image fails.
       - The "umount" command returned "13".
         The "sync" command returned "0".
         The error messages produced are the following:
         ""
 * Unmounting New System Image Clone failed with 5 errors.
 * Copying File Systems To New System Image failed with 6 errors.
=======  04/21/10 08:20:19 EDT  END Clone System Image failed with 6 errors. (user=root)  (jobid=ivm-v2)
I know it is a very bad idea, but it's not a production server, just a virtual machine I use to perform tests. In fact my stupid behavior gave me the opportunity to discover and play with a pretty funny bunch of errors. Here is how I managed to resolve it.
I launched the clone process again in preview mode, just in case, and DRD failed with the following error.
[ivm-v2]/ # drd clone -p -v -t /dev/dsk/c2t1d0
=======  04/21/10 08:22:01 EDT  BEGIN Clone System Image Preview (user=root)  (jobid=ivm-v2)
 * Reading Current System Information
 * Selecting System Image To Clone
 * Selecting Target Disk
ERROR:   Selection of the target disk fails.
       - Selecting the target disk fails.
       - Validation of the disk "/dev/dsk/c2t1d0" fails with the following error(s):
       - Target volume group device entry "/dev/drd00" exists. Run "drd umount" before proceeding.
 * Selecting Target Disk failed with 1 error.
=======  04/21/10 08:22:10 EDT  END Clone System Image Preview failed with 1 error. (user=root)  (jobid=ivm-v2)
It seems that the interrupted process just left the image mounted, but running drd umount, just like the DRD output said, didn't work. The image was only partially created; yeah, I created a beautiful mess ;-)
At that point, in another "clever" move, instead of simply removing the drd00 volume group I just deleted /dev/drd00… who's da man!! Or, as we say in Spain, ¡con dos cojones!
DRD, of course, failed with a new error.
ERROR:   Selection of the target disk fails.
       - Selecting the target disk fails.
       - Validation of the disk "/dev/dsk/c2t1d0" fails with the following error(s):
       - Target volume group "/dev/drd00" found in logical volume table. "/etc/lvmtab" is corrupt and must be fixed before proceeding.
 * Selecting Target Disk failed with 1 error.
Well, it wasn't so bad. I recreated /etc/lvmtab and then, yes… I fired up my friend Dynamic Root Disk in preview mode again.
[ivm-v2]/ # rm -f /etc/lvmtab
[ivm-v2]/ # vgscan -v
Creating "/etc/lvmtab".
vgscan: Couldn't access the list of physical volumes for volume group "/dev/vg00".
Invalid argument
Physical Volume "/dev/dsk/c3t2d0" contains no LVM information

/dev/vg00
/dev/dsk/c2t0d0s2

Following Physical Volumes belong to one Volume Group.
Unable to match these Physical Volumes to a Volume Group.
Use the vgimport command to complete the process.
/dev/dsk/c2t1d0s2

Scan of Physical Volumes Complete.
*** LVMTAB has been created successfully.
*** Do the following to resync the information on the disk.
*** #1.  vgchange -a y
*** #2.  lvlnboot -R
[ivm-v2]/ # lvlnboot -R
Volume Group configuration for /dev/vg00 has been saved in /etc/lvmconf/vg00.conf
[ivm-v2]/ #
[ivm-v2]/ # drd clone -p -v -t /dev/dsk/c2t1d0
=======  04/21/10 08:26:06 EDT  BEGIN Clone System Image Preview (user=root)  (jobid=ivm-v2)
 * Reading Current System Information
 * Selecting System Image To Clone
 * Selecting Target Disk
ERROR:   Selection of the target disk fails.
       - Selecting the target disk fails.
       - Validation of the disk "/dev/dsk/c2t1d0" fails with the following error(s):
       - The disk "/dev/dsk/c2t1d0" contains data. To overwrite this disk use the option "-x overwrite=true".
 * Selecting Target Disk failed with 1 error.
=======  04/21/10 08:26:13 EDT  END Clone System Image Preview failed with 1 error. (user=root)  (jobid=ivm-v2)
I couldn't believe it. Another error? Why the hell did I get involved with DRD? But I am a sysadmin, and a stubborn one. I looked at the disk and discovered that it had been partitioned by the first failed DRD cloning process. I wiped out the whole disk with idisk and, just in case, also used the overwrite option.
[ivm-v2]/ # idisk -p /dev/rdsk/c2t1d0
idisk version: 1.31

EFI Primary Header:
        Signature                 = EFI PART
        Revision                  = 0x10000
        HeaderSize                = 0x5c
        HeaderCRC32               = 0xe19d8a07
        MyLbaLo                   = 0x1
        AlternateLbaLo            = 0x1117732f
        FirstUsableLbaLo          = 0x22
        LastUsableLbaLo           = 0x1117730c
        Disk GUID                 = d79b52fa-4d43-11df-8001-d6217b60e588
        PartitionEntryLbaLo       = 0x2
        NumberOfPartitionEntries  = 0xc
        SizeOfPartitionEntry      = 0x80
        PartitionEntryArrayCRC32  = 0xca7e53ce

Primary Partition Table (in 512 byte blocks):
        Partition 1 (EFI):
                Partition Type GUID    = c12a7328-f81f-11d2-ba4b-00a0c93ec93b
                Unique Partition GUID  = d79b550c-4d43-11df-8002-d6217b60e588
                Starting Lba           = 0x22
                Ending Lba             = 0xfa021
        Partition 2 (HP-UX):
                Partition Type GUID    = 75894c1e-3aeb-11d3-b7c1-7b03a0000000
                Unique Partition GUID  = d79b5534-4d43-11df-8003-d6217b60e588
                Starting Lba           = 0xfa022
                Ending Lba             = 0x110af021
        Partition 3 (HPSP):
                Partition Type GUID    = e2a1e728-32e3-11d6-a682-7b03a0000000
                Unique Partition GUID  = d79b5552-4d43-11df-8004-d6217b60e588
                Starting Lba           = 0x110af022
                Ending Lba             = 0x11177021
[ivm-v2]/ #
[ivm-v2]/ # idisk -R /dev/rdsk/c2t1d0
idisk version: 1.31
********************** WARNING ***********************
If you continue you will destroy all partition data on this disk.
Do you wish to continue(yes/no)? yes
I don't know why, but I was pretty sure DRD was going to fail again… and it did.
=======  04/21/10 08:27:02 EDT  BEGIN Clone System Image Preview (user=root)  (jobid=ivm-v2)
 * Reading Current System Information
 * Selecting System Image To Clone
 * Selecting Target Disk
 * The disk "/dev/dsk/c2t1d0" contains data which will be overwritten.
 * Selecting Volume Manager For New System Image
 * Analyzing For System Image Cloning
ERROR:   Analysis of file system creation fails.
       - Analysis of target fails.
       - The analysis step for creation of an inactive system image failed.
       - The default DRD mount point "/var/opt/drd/mnts/sysimage_001/" cannot be used due to the following error(s):
       - The mount point /var/opt/drd/mnts/sysimage_001/ is not an empty directory as required.
 * Analyzing For System Image Cloning failed with 1 error.
=======  04/21/10 08:27:09 EDT  END Clone System Image Preview failed with 1 error. (user=root)  (jobid=ivm-v2)
After a quick check I found that the partially created image was still mounted.
[ivm-v2]/ # mount
/ on /dev/vg00/lvol3 ioerror=mwdisable,delaylog,dev=40000003 on Wed Apr 21 07:29:37 2010
/stand on /dev/vg00/lvol1 ioerror=mwdisable,log,tranflush,dev=40000001 on Wed Apr 21 07:29:38 2010
/var on /dev/vg00/lvol8 ioerror=mwdisable,delaylog,dev=40000008 on Wed Apr 21 07:29:50 2010
/usr on /dev/vg00/lvol7 ioerror=mwdisable,delaylog,dev=40000007 on Wed Apr 21 07:29:50 2010
/tmp on /dev/vg00/lvol4 ioerror=mwdisable,delaylog,dev=40000004 on Wed Apr 21 07:29:50 2010
/opt on /dev/vg00/lvol6 ioerror=mwdisable,delaylog,dev=40000006 on Wed Apr 21 07:29:50 2010
/home on /dev/vg00/lvol5 ioerror=mwdisable,delaylog,dev=40000005 on Wed Apr 21 07:29:50 2010
/net on -hosts ignore,indirect,nosuid,soft,nobrowse,dev=1 on Wed Apr 21 07:30:26 2010
/var/opt/drd/mnts/sysimage_001 on /dev/drd00/lvol3 ioerror=nodisable,delaylog,dev=40010003 on Wed Apr 21 08:19:46 2010
/var/opt/drd/mnts/sysimage_001/stand on /dev/drd00/lvol1 ioerror=nodisable,delaylog,dev=40010001 on Wed Apr 21 08:19:46 2010
/var/opt/drd/mnts/sysimage_001/tmp on /dev/drd00/lvol4 ioerror=nodisable,delaylog,dev=40010004 on Wed Apr 21 08:19:46 2010
/var/opt/drd/mnts/sysimage_001/home on /dev/drd00/lvol5 ioerror=nodisable,delaylog,dev=40010005 on Wed Apr 21 08:19:46 2010
/var/opt/drd/mnts/sysimage_001/opt on /dev/drd00/lvol6 ioerror=nodisable,delaylog,dev=40010006 on Wed Apr 21 08:19:46 2010
/var/opt/drd/mnts/sysimage_001/usr on /dev/drd00/lvol7 ioerror=nodisable,delaylog,dev=40010007 on Wed Apr 21 08:19:46 2010
/var/opt/drd/mnts/sysimage_001/var on /dev/drd00/lvol8 ioerror=nodisable,delaylog,dev=40010008 on Wed Apr 21 08:19:47 2010
[ivm-v2]/ #
I had to unmount the file systems of the image one by one, and after almost committing suicide with a rack rail I launched the clone again, this time without the preview; if I was going to play a stupid role, at least it was going to be the most stupid one in the world x-)
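The one-by-one unmount is easy to get wrong if a child file system is still mounted under its parent. Here is a minimal sketch of the idea: sort the clone's mount points by depth and unmount the deepest first. The mount list is inlined as sample data for illustration; on the real box you would feed it from the output of mount, and replace the echo with the real umount.

```shell
# Sample DRD clone mount points (a subset of the real list above).
mnts="/var/opt/drd/mnts/sysimage_001
/var/opt/drd/mnts/sysimage_001/stand
/var/opt/drd/mnts/sysimage_001/var
/var/opt/drd/mnts/sysimage_001/usr"

# Sort by path depth, deepest first, so no unmount fails on a busy child.
echo "$mnts" | awk -F/ '{ print NF, $0 }' | sort -rn | cut -d' ' -f2- |
while read fs; do
    echo "umount $fs"   # replace echo with the real umount command
done
```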
[ivm-v2]/ # drd clone -x overwrite=true -v -t /dev/dsk/c2t1d0
=======  04/21/10 08:38:22 EDT  BEGIN Clone System Image (user=root)  (jobid=rx260-02)
 * Reading Current System Information
 * Selecting System Image To Clone
 * Selecting Target Disk
 * The disk "/dev/dsk/c2t1d0" contains data which will be overwritten.
 * Selecting Volume Manager For New System Image
 * Analyzing For System Image Cloning
 * Creating New File Systems
ERROR:   Clone file system creation fails.
       - Creating the target file systems fails.
       - Command "/opt/drd/lbin/drdconfigure" fails with the return code 255. The entire output from the command is given below:
       - Start of output from /opt/drd/lbin/drdconfigure:
       - * Creating LVM physical volume "/dev/rdsk/c2t1d0s2" (0/1/1/0.1.0).
         * Creating volume group "drd00".
         ERROR: Command "/sbin/vgcreate -A n -e 4356 -l 255 -p 16 -s 32 /dev/drd00 /dev/dsk/c2t1d0s2" failed.
       - End of output from /opt/drd/lbin/drdconfigure
 * Creating New File Systems failed with 1 error.
 * Unmounting New System Image Clone
 * System image: "sysimage_001" on disk "/dev/dsk/c2t1d0"
=======  04/21/10 08:38:46 EDT  END Clone System Image failed with 1 error. (user=root)  (jobid=rx260-02)
[ivm-v2]/ #
I thought every possible error was fixed, but there was DRD saying it had failed with a bogus "return code 255". Oh yes, very insightful: it's not a 254 or a 256, it's a 255, and everybody knows what that means… Shit! I didn't. Yes, it's true, I didn't know what "return code 255" stood for. After a small search on ITRC there was only one entry about a similar case, only one. I managed to create a beautiful error, don't you think?
The thing is that there was a mismatch between the minor numbers the kernel believed were in use and those actually visible in the device files. DRD always tries to use the next free minor number based on the device files; in my case only one was visible, but the kernel thought two were in use, one from vg00 and another from the failed clone, so it failed.
The solution is to cheat the kernel by creating a fake group device file that uses the minor number the kernel thinks is in use.
[ivm-v2]/dev # mkdir fake
[ivm-v2]/dev # cd fake
[ivm-v2]/dev/fake # mknod group c 64 0x010000
[ivm-v2]/dev/fake #
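In case you wonder where the 0x010000 came from: the group files of the existing volume groups show which minor numbers are visible in the device files, and the fake one has to claim the next minor the kernel still believes is taken. A small illustrative sketch, with a sample line of `ll /dev/*/group` output inlined (on a real system you would pipe the command itself):

```shell
# Sample ll output for vg00's group file: major 64, minor 0x000000.
# The next minor after the visible ones (0x010000 here) is the one
# the fake group device has to use.
ll_out="crw-r--r--   1 root   sys   64 0x000000 Apr 21 07:29 /dev/vg00/group"
echo "$ll_out" | awk '{ print $6, $NF }'   # prints minor and path
```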
After that I launched DRD and everything went smoothly.
Fortunately everything happened in a test virtual machine, and at any step of my frustrating trip through self-generated DRD errors I could have reset the VM and started over with a clean system. But since the purpose of Dynamic Root Disk is to minimize the downtime of production systems, a reboot was not an option, at least not the first one on the list.
In today’s post I will try to explain step by step how to add an iSCSI volume from the HP Lefthand P4000 VSA to a VMware ESXi4 server.
Step One: Create a volume.
Let's suppose we already have a configured storage appliance; I showed how to create a cluster in my previous post, so I will not repeat that part here. Open the Centralized Management Console and go to the Management group -> Cluster -> Volumes and Snapshots.
Click on Tasks and choose New Volume.
Enter the volume name, a small description and the size. The volume can also be assigned to any server already connected to the cluster; as we don't have any server assigned yet, this option can be ignored for now.
In the Advanced tab the volume can be assigned to an existing cluster and the RAID level, the volume type and the provisioning type can be set.
When everything is configured, click OK and the volume will be created. After the creation process, the new volume will show up in the CMC.
At this point we have a new volume with some RAID protection level, none in this particular case since it's a single-node cluster. The next step is to assign the volume to a server.
Step Two: ESXi iSCSI configuration.
Connect to the chosen ESXi4 server through the vSphere Client and, from the Configuration tab in the right pane, go to the Networking area and click the Add Networking link.
In the Add Networking Wizard select VMkernel as Connection Type.
Create a new virtual switch.
Enter the Port Group Properties; in my case only the label, as no other properties were relevant for me.
Set the IP settings, go to Summary screen and click Finish.
The newly created virtual switch will appear in the Networking area.
With the new virtual switch created, go to Storage Adapters, where you will see an iSCSI software adapter.
Click on properties and on the General tab click the Configure button and check the Enabled status box.
Once iSCSI is enabled its properties window will be populated.
Click Close; the server will ask for a rescan of the adapter, but at this point it is not necessary, so it can be skipped.
Step Three: Add the volume to the ESXi server.
Now that we have our volume created and the iSCSI adapter of our ESXi server activated, the next logical step is to add the storage to the server.
In the HP Lefthand CMC go to the Servers area and add a new server.
Add a name for the server, a small description, check the Allow access via iSCSI box and select the authentication. In the example I chose CHAP not required; with this option you only have to enter the Initiator Node Name, which you can grab from the details of the ESXi iSCSI adapter.
To finish the process click OK and you will see the newly added server. Go to the Volumes and Snapshots tab of the server configuration and, from the Tasks menu, assign a volume to the server.
Select the volume created at the beginning.
Now go back to the vSphere Client and open the properties of the iSCSI adapter again. On the Dynamic Discovery tab add the virtual IP address of the VSA cluster.
Click Close and the server will ask again to rescan the adapter; this time say yes, and after the rescanning process the iSCSI LUN will show up.
Now in the ESXi server a new Datastore can be created on the newly configured storage. Of course the same LUN can also be used to provide shared storage for more ESXi servers and for VMotion, HA or any other interesting VMware vSphere features. Maybe in another post ;-)
The P4000 Virtual Storage Appliance is a storage product from HP; its features include:
- Synchronous replication to remote sites.
- Snapshot capability.
- Thin provisioning.
- Network RAID.
It can be used to create a virtual iSCSI SAN to provide shared storage for ESX environments.
A 60-day trial is available here; it requires logging in with your HP Passport account. As I wanted to test it, that is what I did. There are two versions available: one for ESX, and a second one labeled Laptop Demo, which is optimized for VMware Workstation and also comes with the Centralized Management Console software. I chose the latter.
After importing the appliance into VMware Workstation you will see that it comes configured with "Other Linux 2.6.x kernel" as guest OS, 1 vCPU, 384MB of RAM, two 308MB disks used for the OS of the VSA, and a 7.2GB disk. The three disks are SCSI and configured as Independent-Persistent.
At that point I fired up the appliance and started to play with my VSA.
- First Step: Basic configuration.
A couple of minutes after being started, the appliance will show the log-in prompt.
As instructed enter “start”. You will be redirected to another log-in screen where you only have to press Enter and then the configuration screen will appear.
The first section, “General Settings”, allows you to create an administrator account and to set the password of the default account.
Move to the networking settings. The first screen asks you to choose the network interface; in my case I only had one.
And now you can proceed with the NIC configuration. You will be asked for confirmation before any changes to the VSA network configuration are committed.
In the next area of the main menu, Network TCP Status, the speed of the network card can be forced to 1000Mb/s Full Duplex and the Frame Size can be set.
The final part is for group management configuration (in fact, for removing the VSA from a management group) and for resetting the VSA to its default settings.
Now we have our P4000 configured to be managed through the CMC. I will not explain the CMC installation since it's almost just a “next->next->…” task.
- Second step: VSA management through Centralized Management Console.
Launch the CMC. The Find Nodes Wizard will pop-up.
Choose the appropriate option in the next screen. To add a single VSA choose the second one.
Enter the IP address of the appliance.
Click Finish and if the VSA is online the wizard will find it and add it to the CMC.
Now the VSA is managed through the CMC but it is not part of a management group.
- Step Three: Add more storage.
The first basic task we're going to do with the VSA, prior to management group creation, is to add more storage.
As the VSA is a virtual machine, go to VMware Workstation or the vSphere Client, depending on which VSA version you are using, and edit the appliance settings.
If you look into the advanced configuration of the third disk, the 7.2GB one, you will see that it has the 1:0 SCSI address.
This is very important because the new disks must be added sequentially, at addresses 1:1 through 1:4, in order to be detected by the VSA; the 1:0 SCSI address is already used by the first disk of the VSA.
Add the new disk and before finishing the process edit the advanced settings of the disk and set the SCSI address.
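If you prefer to skip the GUI dialogs, the same sequential addressing can be expressed directly in the appliance's .vmx file. This is only a sketch: the scsi1:N.present and scsi1:N.fileName keys are the standard .vmx disk entries, but the vmdk file names below are made up for illustration.

```shell
# Generate the .vmx stanzas for data disks at SCSI 1:1 through 1:4.
# Append the output to the appliance's .vmx file (while it is powered off).
i=1
while [ $i -le 4 ]; do
    echo "scsi1:$i.present = \"TRUE\""
    echo "scsi1:$i.fileName = \"vsa-data$i.vmdk\""   # hypothetical file names
    i=$((i + 1))
done
```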
Now in the CMC go to the storage configuration. You will see the new disk(s) as uninitialized.
Right click on the disk and select Add Disk to RAID.
Next you will see the disk active and already added to the RAID.
- Step four: Management group creation.
We’re going to create the most basic configuration possible with a P4000 VSA. One VSA in a single management group and part of a single-node cluster.
From the Getting Started screen launch the Management Groups, Clusters and Volume Wizard.
Select New Management Group and enter the data of the new group.
Add the administrative user.
Set the time of the Management Group.
Create a new Standard Cluster.
Enter the name of the cluster and select the nodes of the cluster, in this particular set-up there is only one node.
Finally add a virtual IP for the cluster.
Once the cluster is created the wizard will ask to create a volume. The volume can also be added later to the cluster.
After we click finish the management group and the cluster will be created.
And we are done. In the next post about the P4000 I will show how to add an iSCSI volume to an ESXi4 server.
Last month HP released the latest update of HP-UX 11iv3: Update 6, or the March 2010 Update, or 11.36… I decided some time ago not to try to understand why we have such a stupid naming scheme for HP-UX.
Anyway, putting aside the philosophical rambling, HP-UX 11iv3 update 6 is here and it came full of new features. The following ones stand out, at least for me.
- Improved system boot time thanks to RCEnhancement; Oliver wrote about it several weeks ago. Until this release it was only available in the Software Depot; now it is included in the install media.
- New DRD version capable of synchronizing the clone with the running system.
- LVM updated to version 2.2. With this version we get logical volume snapshots (can't wait to try this :-D), logical volume migration within the same VG through the new lvmove command, and boot support for LVM 2.2 volume groups. This is very cool because until this release vg00 was stuck on LVM 1.0. Ignite-UX has also been updated to take advantage of this feature, and we'll be asked to choose between LVM 1.0 and LVM 2.2 bootable volume groups.
This release comes bundled with the new HPVM version, 4.2, which also adds a bunch of new features. To name a few:
- Automatic memory reallocation.
- The capability to suspend and resume a guest.
- Support for CVM/CFS backing stores for HPVM Serviceguard cluster packages.
- Encryption of the data during Online VM migration.
- AVIO support for Veritas Volume Manager based backing stores.
In short, there are a lot of interesting new features, and a lot of issues have also been fixed. As I said at the beginning, we still have to live with an odd naming scheme, but in the end I'm quite happy with this new release, at least in theory, because I haven't had the opportunity to try it yet. I will very soon though, since I'm planning to deploy HPVM 4.2 in the near future. In fact my HPVM 3.5 to 4.1 migration project has become a 3.5 to 4.2 migration, how cool is that eh! ;-)
First something I completely forgot in my first post. I discovered OpenVZ thanks to Vivek Gite’s great site nixCraft. This post and the previous one are inspired by his nice series of posts about OpenVZ. Now the show can begin :-)
As I said in my first post about OpenVZ, I decided to set up a test server. Since I didn't have a spare box in my homelab I created a VM inside VMware Workstation; the performance isn't the same as on a physical server, but this is a test-and-learn environment, so it will suffice.
There is a Debian-based bare-metal installer ISO named Proxmox Virtual Environment, and OpenVZ is also supported in many Linux distributions, each with its own installation method, but I chose CentOS for my Host node server because it is one of my favorite Linux server distros.
- Add the yum repository to the server:
[root@openvz ~]# cd /etc/yum.repos.d/
[root@openvz yum.repos.d]# ls
CentOS-Base.repo  CentOS-Media.repo
[root@openvz yum.repos.d]# wget http://download.openvz.org/openvz.repo
--2010-04-04 00:53:12--  http://download.openvz.org/openvz.repo
Resolving download.openvz.org... 188.8.131.52
Connecting to download.openvz.org|184.108.40.206|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3182 (3.1K) [text/plain]
Saving to: `openvz.repo'

100%[==========================================================================================>] 3,182       --.-K/s   in 0.1s

2010-04-04 00:53:14 (22.5 KB/s) - `openvz.repo' saved [3182/3182]

[root@openvz yum.repos.d]# rpm --import http://download.openvz.org/RPM-GPG-Key-OpenVZ
[root@openvz yum.repos.d]#
- Install the OpenVZ kernel. In my particular case I used the basic kernel, but SMP+PAE, PAE and Xen kernels are also available:
[root@openvz yum.repos.d]# yum install ovzkernel
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * addons: ftp.dei.uc.pt
 * base: ftp.dei.uc.pt
 * extras: ftp.dei.uc.pt
 * openvz-kernel-rhel5: openvz.proserve.nl
 * openvz-utils: openvz.proserve.nl
 * updates: ftp.dei.uc.pt
addons                | 951 B    00:00
base                  | 2.1 kB   00:00
extras                | 2.1 kB   00:00
openvz-kernel-rhel5   | 951 B    00:00
openvz-utils          | 951 B    00:00
updates               | 1.9 kB   00:00
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package ovzkernel.i686 0:2.6.18-164.15.1.el5.028stab068.9 set to be installed
--> Finished Dependency Resolution

Dependencies Resolved

====================================================================================================================================
 Package           Arch      Version                              Repository              Size
====================================================================================================================================
Installing:
 ovzkernel         i686      2.6.18-164.15.1.el5.028stab068.9     openvz-kernel-rhel5     19 M

Transaction Summary
====================================================================================================================================
Install      1 Package(s)
Update       0 Package(s)
Remove       0 Package(s)

Total download size: 19 M
Is this ok [y/N]: y
Downloading Packages:
ovzkernel-2.6.18-164.15.1.el5.028stab068.9.i686.rpm   | 19 MB   00:19
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing     : ovzkernel     1/1

Installed:
  ovzkernel.i686 0:2.6.18-164.15.1.el5.028stab068.9

Complete!
[root@openvz yum.repos.d]#
- Install the OpenVZ management utilities:
[root@openvz yum.repos.d]# yum install vzctl vzquota
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * addons: centos.cict.fr
 * base: ftp.dei.uc.pt
 * extras: centos.cict.fr
 * openvz-kernel-rhel5: mirrors.ircam.fr
 * openvz-utils: mirrors.ircam.fr
 * updates: ftp.dei.uc.pt
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package vzctl.i386 0:3.0.23-1 set to be updated
--> Processing Dependency: vzctl-lib = 3.0.23-1 for package: vzctl
--> Processing Dependency: libvzctl-0.0.2.so for package: vzctl
---> Package vzquota.i386 0:3.0.12-1 set to be updated
--> Running transaction check
---> Package vzctl-lib.i386 0:3.0.23-1 set to be updated
--> Finished Dependency Resolution

Dependencies Resolved

====================================================================================================================================
 Package           Arch      Version        Repository        Size
====================================================================================================================================
Installing:
 vzctl             i386      3.0.23-1       openvz-utils      143 k
 vzquota           i386      3.0.12-1       openvz-utils       82 k
Installing for dependencies:
 vzctl-lib         i386      3.0.23-1       openvz-utils      175 k

Transaction Summary
====================================================================================================================================
Install      3 Package(s)
Update       0 Package(s)
Remove       0 Package(s)

Total download size: 400 k
Is this ok [y/N]: y
Downloading Packages:
(1/3): vzquota-3.0.12-1.i386.rpm     |  82 kB   00:00
(2/3): vzctl-3.0.23-1.i386.rpm       | 143 kB   00:00
(3/3): vzctl-lib-3.0.23-1.i386.rpm   | 175 kB   00:00
------------------------------------------------------------------------------------------------------------------------------------
Total                                                         201 kB/s | 400 kB   00:01
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing     : vzctl-lib     1/3
  Installing     : vzquota       2/3
  Installing     : vzctl         3/3

Installed:
  vzctl.i386 0:3.0.23-1     vzquota.i386 0:3.0.12-1

Dependency Installed:
  vzctl-lib.i386 0:3.0.23-1

Complete!
[root@openvz yum.repos.d]#
- Configure the kernel. The following adjustments must be made in the /etc/sysctl.conf file:
# On Hardware Node we generally need
# packet forwarding enabled and proxy arp disabled
net.ipv4.ip_forward = 1
net.ipv6.conf.default.forwarding = 1
net.ipv6.conf.all.forwarding = 1
net.ipv4.conf.default.proxy_arp = 0

# Enables source route verification
net.ipv4.conf.all.rp_filter = 1

# Enables the magic-sysrq key
kernel.sysrq = 1

# We do not want all our interfaces to send redirects
net.ipv4.conf.default.send_redirects = 1
net.ipv4.conf.all.send_redirects = 0
- Disable SELinux:
[root@openvz ~]# cat /etc/sysconfig/selinux
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#       enforcing - SELinux security policy is enforced.
#       permissive - SELinux prints warnings instead of enforcing.
#       disabled - SELinux is fully disabled.
SELINUX=disabled
# SELINUXTYPE= type of policy in use. Possible values are:
#       targeted - Only targeted network daemons are protected.
#       strict - Full SELinux protection.
SELINUXTYPE=targeted

# SETLOCALDEFS= Check local definition changes
SETLOCALDEFS=0
[root@openvz ~]#
- Reboot the server with the new kernel.
- Check the OpenVZ service:
[root@openvz ~]# chkconfig --list vz
vz              0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@openvz ~]# service vz status
OpenVZ is running...
[root@openvz ~]#
The first part is over, now we are going to create a VPS as a proof of concept.
- Download the template of the Linux distribution to install as VPS and place it in /vz/template/cache.
[root@openvz /]# cd vz/template/cache/
[root@openvz cache]# wget http://download.openvz.org/template/precreated/centos-5-x86.tar.gz
--2010-04-04 23:20:20--  http://download.openvz.org/template/precreated/centos-5-x86.tar.gz
Resolving download.openvz.org... 220.127.116.11
Connecting to download.openvz.org|18.104.22.168|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 179985449 (172M) [application/x-gzip]
Saving to: `centos-5-x86.tar.gz'

100%[==========================================================================================>] 179,985,449   987K/s   in 2m 58s

2010-04-04 23:23:19 (988 KB/s) - `centos-5-x86.tar.gz' saved [179985449/179985449]

[root@openvz cache]#
- Create a new virtual machine using the template.
[root@openvz cache]# vzctl create 1 --ostemplate centos-5-x86
Creating container private area (centos-5-x86)
Performing postcreate actions
Container private area was created
[root@openvz cache]#
- We have a basic VPS created, but it needs more tweaking before we can start it: set the IP address, the DNS server, the hostname, a name to identify it on the Host node, and finally the onboot parameter to automatically start the container with the host.
[root@openvz cache]# vzctl set 1 --ipadd 192.168.1.70 --save
Saved parameters for CT 1
[root@openvz cache]# vzctl set 1 --name vps01 --save
Name vps01 assigned
Saved parameters for CT 1
[root@openvz cache]# vzctl set 1 --hostname vps01 --save
Saved parameters for CT 1
[root@openvz cache]# vzctl set 1 --nameserver 192.168.1.1 --save
Saved parameters for CT 1
[root@openvz cache]# vzctl set 1 --onboot yes --save
Saved parameters for CT 1
[root@openvz cache]#
- Start the container and check it with vzlist.
[root@openvz cache]# vzctl start vps01
Starting container ...
Container is mounted
Adding IP address(es): 192.168.1.70
Setting CPU units: 1000
Configure meminfo: 65536
Set hostname: vps01
File resolv.conf was modified
Container start in progress...
[root@openvz cache]#
[root@openvz cache]# vzlist
      CTID      NPROC STATUS  IP_ADDR         HOSTNAME
         1         10 running 192.168.1.70    vps01
[root@openvz cache]#
- Enter the container and check that its operating system is up and running.
[root@openvz cache]# vzctl enter vps01
entered into CT 1
[root@vps01 /]#
[root@vps01 /]# free -m
             total       used       free     shared    buffers     cached
Mem:           256          8        247          0          0          0
-/+ buffers/cache:          8        247
Swap:            0          0          0
[root@vps01 /]# uptime
 02:02:11 up 8 min,  0 users,  load average: 0.00, 0.00, 0.00
[root@vps01 /]#
- To finish the test stop the container.
[root@openvz /]# vzctl stop 1
Stopping container ...
Container was stopped
Container is unmounted
[root@openvz /]#
[root@openvz /]# vzlist -a
      CTID      NPROC STATUS  IP_ADDR         HOSTNAME
         1          - stopped 192.168.1.70    vps01
[root@openvz /]#
And as I like to say… we are done ;-) Next time will try to cover more advanced topics.
Long time since my last post. I’ve been on holidays! :-D
But don’t worry my dear readers, I did not fall into laziness and the few times I wasn’t playing with my son I’ve been playing in my homelab with other virtualization technologies, storage appliances, my ESXi servers… what can I say I’m a Geek. One of the most interesting technologies I’ve been playing with is OpenVZ.
OpenVZ is a container-based, operating system-level virtualization technology for Linux; if you have ever worked with Solaris 10 Zones, this is very similar. The OpenVZ project is supported by Parallels, which has also based its commercial solution, Virtuozzo, on OpenVZ.
OpenVZ (and other container-based technologies) differs from technologies like VMware or HPVM in that the latter virtualize an entire machine with its own OS, disks, RAM, etc. OpenVZ, on the contrary, uses a single Linux kernel and creates multiple isolated instances. Of course there are pros and cons; just to name a couple:
- VMware, HPVM and other true hypervisors are more flexible, since many different operating systems can be run on top of them; OpenVZ, on the contrary, can only run Linux instances.
- OpenVZ, since it is not a real hypervisor, does not have that overhead, so it is very fast.
Glossary of terms:
- Host node, CT0, VE0: The host where the containers run.
- VPS, VE: The containers themselves. One Host node can run multiple VPSs, and each VPS can run a different Linux distribution such as Gentoo, Ubuntu, CentOS, etc., but every VPS operates under the same Linux kernel.
- CTID: ConTainer’s IDentifier. A unique number that every VPS has and used to manage it.
- Beancounters: The beancounters, also known as UBC Parameter Units, are nothing but a set of limits defined by the system administrator. The beancounters assure that no VPS can abuse the resources of the Host node. The whole list of Beancounter is described in detail in the OpenVZ wiki.
- VPS templates: The templates are the images used to create new containers.
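As a quick illustration of how beancounters are expressed: each one is a barrier:limit pair passed to vzctl set. The kmemsize values below are made-up examples, and the vzctl command is only printed here, not executed; on a real Host node you would run it directly and then inspect /proc/user_beancounters inside the container.

```shell
# A beancounter is a barrier:limit pair; split it the way vzctl expects.
bc="2211840:2359296"      # example kmemsize barrier and limit, in bytes
barrier=${bc%%:*}
limit=${bc##*:}
echo "vzctl set 1 --kmemsize ${barrier}:${limit} --save"
```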
OpenVZ directory structure:
- /vz: The default main directory.
- /vz/private: Where the VPS are stored.
- /vz/template/cache: The path where the templates for the different Linux distributions are stored.
- /etc/vz: Configuration directory for OpenVZ.
- /etc/vz/vz.conf: OpenVZ configuration file.
The amount of resources of the Host node available to the Virtual Environments can be managed in four different ways.
- Two-Level Disk Quota: The administrator of the Host node can set-up disk quotas for each container.
- Fair CPU scheduler: It is a two-level scheduler with a first level scheduler deciding which container is given the CPU time slice and on the second level the standard Linux scheduler decides which process to run in that container.
- I/O scheduler: Very similar to the CPU scheduler, it is also a two-level scheduler. Priorities are assigned to each container, and the I/O scheduler distributes the available bandwidth according to those priorities.
- User Beancounters.
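For example, the two-level disk quota is set per container as a soft:hard pair of 1K blocks. The sizes and the 10% headroom below are arbitrary examples, and the vzctl command is only printed, not executed, since this is just a sketch of the syntax:

```shell
# Compute a soft/hard disk quota pair for container 1 (example values).
soft_kb=10485760                      # 10 GB soft limit, in 1K blocks
hard_kb=$((soft_kb + soft_kb / 10))   # hard limit 10% above the soft one
echo "vzctl set 1 --diskspace ${soft_kb}:${hard_kb} --save"
```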
I'm now in the process of setting up an OpenVZ test server in my homelab, so I will try to cover some of its features more in depth in a future post.