A very controversial title I know, but please hold your fire and let me explain.
For many of us in the VMware community, VMware Workstation was, and still is, one of the main tools we use to learn and play with our favorite virtualization goodies. I personally have it installed on my laptop and use it every day to reproduce customer setups at small scale, perform tests, and try new things in the many products I have installed, from ESX 3.5 to ESXi 5.0, vCenter, the vMA, VDR, HP P4000 VSAs, etc.
However, until very recently, a newcomer to the virtualization world who really wanted to learn Hyper-V had no choice but to get a couple of physical systems and some kind of shared storage compatible with MSCS.
Now, ironically thanks to VMware, that situation has changed. In fact, it has improved dramatically: anyone who wants to learn how things are done in the Hyper-V world now has VMware Workstation 8 and the free edition of ESXi 5 at their disposal, where they can run Hyper-V nested and even power on VMs.
With a moderately powerful whitebox running the free ESXi 5, or a workstation-class system running Workstation 8, plus an MSDN subscription, a sysadmin can try very complex setups in a homelab and sharpen their Hyper-V skills. And that can be a great asset for Microsoft, in the same way it has been for VMware.
I have to admit that, no matter how much I like VMware, I will use that solution to learn a bit of Hyper-V… but only after I get my VCP5 by the end of the year ;-)
So, what do you think? Will VMware's new star products help Hyper-V adoption, now that more Windows sysadmins can learn Microsoft's hypervisor? Or will this have no effect, with VMware vSphere reigning over the datacenter until the end of days?
Please comment! :)
If you are expecting to see a whitebox as great as Phil Jaenke's (@RootWyrm) BabyDragon, I'm sorry to say that you'll be terribly disappointed, because unlike Phil's beauty, mine wasn't built on purpose.
I’ve been running all my labs within VMware Workstation on my custom-made workstation, which by the way was running Windows 7 (64-bit). But recently I decided it was time to move to a more reliable solution, so I converted my Windows 7 system into an ESXi server.
Surprisingly, when I installed ESXi 4.1 Update 1 everything was properly recognized, so I thought it could help to post the hardware configuration for other vGeeks out there looking for working hardware components for their homelabs.
- Processor: Intel Core 2 Quad Q9550. Supports FT!
- Memory: 8GB
- Motherboard: Asrock Penryn 1600SLI-110dB
- NIC: Embedded nVidia nForce network controller. Supported under the forcedeth driver
~ # ethtool -i vmnic0
driver: forcedeth
version: 0.61.0.1-1vmw
firmware-version:
bus-info: 0000:00:14.0
~ #
- SATA controller: nVidia MCP51 SATA Controller.
~ # vmkload_mod -s sata_nv
vmkload_mod module information
 input file: /usr/lib/vmware/vmkmod/sata_nv.o
 Version: Version 126.96.36.199-1vmw, Build: 348481, Interface: ddi_9_1 Built on: Jan 12 2011
 License: GPL
 Required name-spaces:
  com.vmware.vmkapi@v1_1_0_0
 Parameters:
  heap_max: int
   Maximum attainable heap size for the driver.
  heap_initial: int
   Initial heap size allocated for the driver.
~ #
- HDD1: 1 x 120GB SATA 7200RPM Seagate ST3120026AS.
- HDD2: 1 x 1TB SATA 7200RPM Seagate ST31000528AS.
Finally, here is a screenshot of the vSphere Client connected to the vCenter VM, showing the summary of the host.
The other components of my homelab are a QNAP TS-219P NAS and an HP ProCurve 1810G-8 switch. I also have plans to add two more NICs and an SSD to the server as soon as possible and, of course, to build a new whitebox.
If you are wondering if you can run your vSphere 5 lab nested on ESXi 4.1, the answer is yes.
I used Eric Gray’s (@eric_gray) procedure VMware ESX 4 can even virtualize itself to create the VMs. For the guest OS type I tried both Red Hat Enterprise Linux 5 (64-bit) and Red Hat Enterprise Linux 6 (64-bit), and both worked without a hitch.
Here they are running on top of my whitebox, which is running ESXi 4.1 Update 1; the left one (esxi5) was created as RHEL6 and the right one (esxi5-02) as RHEL5.
I also added the monitor_control.restrict_backdoor option, but I haven’t tried to run nested VMs yet. I’ll do so later and will update the post with the results.
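For reference, the option is just an extra line in the nested ESXi VM’s .vmx file. A minimal sketch of appending it from the host’s Tech Support Mode shell — the datastore path and VM name here are examples, so adjust them to your own setup:

```shell
# Append the backdoor restriction to the nested ESXi VM's .vmx file
# (example path; edit it to match your datastore and VM name).
# The VM should be powered off while you edit its configuration.
echo 'monitor_control.restrict_backdoor = "TRUE"' >> /vmfs/volumes/datastore1/esxi5/esxi5.vmx
```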
When you are configuring iSCSI on an ESX(i) server from the command line, it’s clear that at some point you are going to need the IQN. Of course you can use the vSphere Client to get the IQN, but the Unix geek inside me really wants to do it from the shell.
First, list the SCSI devices available on the system to identify the iSCSI HBA.
[root@esx02 ~]# esxcfg-scsidevs -a
vmhba0   mptspi     link-n/a   pscsi.vmhba0    (0:0:16.0) LSI Logic / Symbios Logic LSI Logic Parallel SCSI Controller
vmhba1   ata_piix   link-n/a   ide.vmhba1      (0:0:7.1)  Intel Corporation Virtual Machine Chipset
vmhba32  ata_piix   link-n/a   ide.vmhba32     (0:0:7.1)  Intel Corporation Virtual Machine Chipset
vmhba33  iscsi_vmk  online     iscsi.vmhba33   iSCSI Software Adapter
[root@esx02 ~]#
After that, use the vmkiscsi-tool command to get the IQN.
[root@esx02 ~]# vmkiscsi-tool -I -l vmhba33
iSCSI Node Name: iqn.1998-01.com.vmware:esx02-42b0f47e
[root@esx02 ~]#
Beautiful, isn’t it? But I found one glitch: this method works from the ESX root shell, but how do I get the IQN from the vMA? Some of my hosts are ESXi, and even for the ESX hosts I use the vMA to perform my everyday administration tasks.
There is no vmkiscsi-tool command in the vMA; instead we are going to use the vicfg-iscsi or vicfg-scsidevs commands.
With vicfg-scsidevs we can obtain the IQN, listed in the UID column.
[vi-admin@vma ~][esx02.mlab.local]$ vicfg-scsidevs -a
Adapter_ID  Driver     UID                                     PCI       Vendor & Model
vmhba0      mptspi     pscsi.vmhba0                            (0:16.0)  LSI Logic Parallel SCSI Controller
vmhba1      ata_piix   unknown.vmhba1                          (0:7.1)   Virtual Machine Chipset
vmhba32     ata_piix   ide.vmhba32                             (0:7.1)   Virtual Machine Chipset
vmhba33     iscsi_vmk  iqn.1998-01.com.vmware:esx02-42b0f47e   ()        iSCSI Software Adapter
[vi-admin@vma ~][esx02.mlab.local]$
And with vicfg-iscsi we can get the IQN by providing the vmhba device.
[vi-admin@vma ~][esx02.mlab.local]$ vicfg-iscsi --iscsiname --list vmhba33
iSCSI Node Name  : iqn.1998-01.com.vmware:esx02-42b0f47e
iSCSI Node Alias :
[vi-admin@vma ~][esx02.mlab.local]$
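If you want just the bare IQN for use in a script, a little awk does the trick — a minimal sketch that assumes the “iSCSI Node Name” line format shown above:

```shell
# Extract only the IQN from the vicfg-iscsi output, for scripting.
# /iSCSI Node Name/ matches the right line; $NF prints its last field (the IQN).
vicfg-iscsi --iscsiname --list vmhba33 | awk '/iSCSI Node Name/ {print $NF}'
```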
The next logical step is to use PowerCLI to retrieve the IQN, but I’ll leave that for a future post.
This post is mostly for self-reference, but maybe someone will find it useful. Last night I decided to change the IP address of one of the Openfiler instances in my homelab and, instead of first removing the NFS shares from the ESX servers, I simply made the change.
After a restart of the network services in the Openfiler server to commit the changes I found that the ESX servers saw the datastore as inactive.
First I tried to remove it from the vSphere Client and I received the following error message:
I quickly switched to an SSH session in the vMA to check the state of the datastore; it appeared as not mounted.
[vi-admin@vma /][esx01.mlab.local]$ esxcfg-nas -l
nfs_datastore1 is /mnt/vg_nfs/lv_nfs01/nfs_datastore1 from openfiler.mlab.local not mounted
[vi-admin@vma /][esx01.mlab.local]$
At this point I used the esxcfg-nas command to remove the datastore.
[vi-admin@vma /][esx01.mlab.local]$ esxcfg-nas -d nfs_datastore1
NAS volume nfs_datastore1 deleted.
[vi-admin@vma /][esx01.mlab.local]$ esxcfg-nas -l
No NAS datastore found
[vi-admin@vma /][esx01.mlab.local]$
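With the new IP in place, re-adding the datastore is just as easy from the vMA. A sketch using the names from my lab — swap in your own Openfiler host, export path and datastore label:

```shell
# Re-create the NFS datastore pointing at the Openfiler server
# (-o host, -s remote share path, then the datastore label;
#  hostname, path and label here are from my lab, adjust to yours)
esxcfg-nas -a -o openfiler.mlab.local -s /mnt/vg_nfs/lv_nfs01/nfs_datastore1 nfs_datastore1

# Verify it now shows as mounted
esxcfg-nas -l
```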
Very easy, isn’t it? Oh, by the way, this just confirms one of my personal beliefs: “Where there is shell, there is a way” ;-)
Even if you have access to enterprise-class storage appliances, like the HP P4000 VSA or the EMC Celerra VSA, an Openfiler storage appliance can be a great asset to your homelab. Especially if, like me, you run an “all virtual” homelab within VMware Workstation, since Openfiler is far less resource-hungry than its enterprise counterparts.
Simon Seagrave (@Kiwi_Si) from TechHead.co.uk wrote an excellent article explaining how to add iSCSI LUNs from an Openfiler instance to your ESX/ESXi servers; if iSCSI is your “thing” you should check it out.
In this article I’ll explain how to configure an NFS share in Openfiler and then add it as a datastore to your vSphere servers. I’ll take for granted that you already have an Openfiler server up and running.
1 – Enable NFS service
As always, point your browser to https://<openfiler_address>:446, log in, and from the main screen go to the Services tab and enable the NFSv3 service as shown below.
2 – Setup network access
From the System tab, add the network of the ESX servers as authorized. I added the whole network segment, but you can also create per-host network access rules to set up a more secure and granular access policy.
3 – Create the volumes
The next step is to create the volumes we are going to use as the base for the NFS shares. If, like me, you’re a Unix/Linux geek, you surely understand the PV -> VG -> LV concepts perfectly. If not, I strongly recommend the TechHead article mentioned above, where Simon explains them very well, or, if you want to go a little deeper with volumes in Unix/Linux, my article about volume and filesystem basics in Linux and HP-UX.
First we need to create the physical volumes. Go to the Volumes tab, enter the Block Devices section, and edit the disk to be used for the volumes.
Create a partition and set the type to Physical Volume.
Once the Physical Volume is created go to the Volume Groups section and create a new VG and use for it the new PV.
Finally, click on Add Volume. In this section you will have to choose the new VG that will contain the new volume, its size, name, description and, more importantly, the Filesystem/Volume Type. There are three types:
The first is obviously intended for iSCSI volumes and the other two for NFS; the criterion to follow here is scalability, since ext3 supports up to 8TB and XFS up to 10TB.
Click Create and the new volume will be created.
4 – Create the NFS share
Go to the Shares tab, there you will find the new volume as an available share.
Just to clarify concepts: this volume IS NOT the actual NFS share. We are going to create a folder inside the volume and share that folder through NFS with our ESX/ESXi servers.
Click on the volume name and, in the pop-up, enter the name of the folder and click Create folder.
Select the folder and in the pop-up click the Make Share button.
Finally we are going to configure the newly created share; select the share to enter its configuration area.
Edit the share data to suit you and select the Access Control Mode. Two modes are available:
- Public guest access – There is no user based authentication.
- Controlled access – The authentication is defined in the Accounts section.
Since this is only for my homelab, I chose Public guest access.
Next select the share type; for our purposes I obviously chose NFS and set the permissions to Read-Write.
You can also edit the NFS options and configure them to suit your personal preferences and/or specifications.
Just a final tip for the non-Unix folks: if you want to check the NFS share, open an SSH session with the Openfiler server and, as root, issue the command showmount -e. The output should look like this.
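For illustration, the session would look something like the sketch below; the export path and network here are examples, and yours will reflect your own volume, folder and access rules:

```shell
# List the NFS exports on the Openfiler server (run as root on the Openfiler box)
showmount -e localhost
# Example output (path and client network depend on your setup):
#   Export list for localhost:
#   /mnt/vg_nfs/lv_nfs01/nfs_datastore1 192.168.1.0/255.255.255.0
```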
The Openfiler configuration is done, now we are going to create a new datastore in our ESX servers.
5 – Add the datastore to the ESX servers
Now that the share is created and configured it is time to add it to our ESX servers.
As usual, from the vSphere Client go to Configuration -> Storage -> Add Storage.
In the pop-up window choose Network File System.
Fill in the Server, Folder and Datastore Name fields.
Finally, check the data and click Finish. If everything goes well, after a few seconds the new datastore should appear.
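And since where there is shell, there is a way: the same datastore can also be added from the vMA with vicfg-nas, skipping the vSphere Client entirely. A sketch with example values — swap in your own Openfiler host, export path and datastore name:

```shell
# Add the NFS datastore from the vMA instead of the vSphere Client
# (-o server, -s exported folder, then the datastore name; all example values)
vicfg-nas -a -o openfiler.mlab.local -s /mnt/vg_nfs/lv_nfs01/nfs_datastore1 nfs_datastore1

# List the NAS datastores to confirm it mounted
vicfg-nas -l
```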
And with this we are finished. If you see any mistake or have anything to add please comment :-)