During the migration of an ESX 4.x host to ESXi 5.0 the whole process can be monitored directly from the console of the server.
Once the process has started, press Alt-F1 to access the console and log in as root with a blank password.
From here you can go to the /var/log folder and monitor the ESXi log files with the tail command.
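For example, a quick sketch (the exact log file names vary between builds; check what is actually in /var/log on yours):

```shell
# From the ESXi console (Alt-F1), see which logs are available
cd /var/log
ls
# Follow the general system log as it is written
tail -f messages
```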
Also, by pressing Alt-F12 you will see the vmkernel log, which shows the upgrade process in real time. Once the log reaches the point shown in the screenshot, the upgrade is complete.
At this point, and before restarting the host, you can go back to the ESXi console and review the ESXi install log, esxi_install.log, which is in fact a symlink to the file weasel.log.
In this log file you can observe the whole migration process. I strongly recommend spending a few minutes on it, since you will learn a lot about what the ESXi installation process does under the hood.
Finally, and only as a curiosity: after the reboot, if you log into the ESXi Shell, a message indicating that the system has been migrated to ESXi 5.0 will be displayed before the prompt.
As a small follow-up to yesterday’s post about NFS shares with Openfiler, in the following article I will show how to add a new datastore to an ESX server using the vMA and PowerCLI.
From the vMA shell we are going to use the vicfg-nas command. To clarify things a bit for the newcomers: vicfg-nas and esxcfg-nas are the same command; in fact, esxcfg-nas is no more than a link to the former.
The option to create a new datastore is -a; additionally, the address/hostname of the NFS server, the share and a label for the new datastore must be provided.
[vi-admin@vma ~][esx01.mlab.local]$ vicfg-nas -l
No NAS datastore found
[vi-admin@vma ~][esx01.mlab.local]$ vicfg-nas -a -o openfiler.mlab.local -s /mnt/vg_nfs/lv_nfs01/nfs_datastore1 nfs_datastore1
Connecting to NAS volume: nfs_datastore1
nfs_datastore1 created and connected.
[vi-admin@vma ~][esx01.mlab.local]$
When the operation is done you can check the new datastore with vicfg-nas -l.
[vi-admin@vma ~][esx01.mlab.local]$ vicfg-nas -l
nfs_datastore1 is /mnt/vg_nfs/lv_nfs01/nfs_datastore1 from openfiler.mlab.local mounted
[vi-admin@vma ~][esx01.mlab.local]$
In the second part of the post we are going to use vSphere PowerCLI, which, as you already know, is a PowerShell snap-in to manage vSphere/VI3 infrastructures. I will write more about PowerCLI in the future since I’m very fond of it.
The cmdlet to create the new NFS datastore is New-Datastore and you must provide the ESX host, the NFS server, the path of the share and a name for the datastore. Then you can check that the new datastore has been properly added with Get-Datastore.
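A minimal PowerCLI sketch of the same operation (host, server and share names are the ones from my lab; adapt them to yours):

```powershell
# Connect to the ESX host (or to vCenter) first
Connect-VIServer -Server esx01.mlab.local

# Create the NFS datastore: host, NFS server, share path and datastore name
New-Datastore -Nfs -VMHost esx01.mlab.local -Name nfs_datastore1 `
    -NfsHost openfiler.mlab.local -Path /mnt/vg_nfs/lv_nfs01/nfs_datastore1

# Check that the new datastore has been properly added
Get-Datastore -Name nfs_datastore1
```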
This post is mostly for self-reference, but maybe someone will find it useful. Last night I decided to change the IP address of one of the Openfiler instances in my homelab, and instead of first removing the NFS shares from the ESX servers I simply made the change.
After restarting the network services on the Openfiler server to commit the changes, I found that the ESX servers saw the datastore as inactive.
First I tried to remove it from the vSphere Client and I received the following error message:
I quickly switched to an SSH session on the vMA to check the state of the datastore; it appeared as not mounted.
[vi-admin@vma /][esx01.mlab.local]$ esxcfg-nas -l
nfs_datastore1 is /mnt/vg_nfs/lv_nfs01/nfs_datastore1 from openfiler.mlab.local not mounted
[vi-admin@vma /][esx01.mlab.local]$
At this point I used the esxcfg-nas command to remove the datastore.
[vi-admin@vma /][esx01.mlab.local]$ esxcfg-nas -d nfs_datastore1
NAS volume nfs_datastore1 deleted.
[vi-admin@vma /][esx01.mlab.local]$ esxcfg-nas -l
No NAS datastore found
[vi-admin@vma /][esx01.mlab.local]$
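Once the stale entry is gone, the share can be mounted again pointing at the NFS server's new address; a sketch (the IP below is hypothetical):

```shell
# Re-add the share using the NFS server's new address (example IP)
esxcfg-nas -a -o 192.168.1.50 -s /mnt/vg_nfs/lv_nfs01/nfs_datastore1 nfs_datastore1
# Verify that it now shows as mounted
esxcfg-nas -l
```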
Very easy, isn’t it? Oh, by the way, this just confirms one of my personal beliefs: “Where there is shell, there is a way” ;-)
Even if you have access to enterprise-class storage appliances like the HP P4000 VSA or the EMC Celerra VSA, an Openfiler storage appliance can be a great asset to your homelab. Especially if you, like myself, run an “all virtual” homelab within VMware Workstation, since Openfiler is far less resource hungry than its enterprise counterparts.
Simon Seagrave (@Kiwi_Si) from TechHead.co.uk wrote an excellent article explaining how to add iSCSI LUNs from an Openfiler instance to your ESX/ESXi servers; if iSCSI is your “thing” you should check it out.
In this article I’ll explain how-to configure a NFS share in Openfiler and then add it as a datastore to your vSphere servers. I’ll take for granted that you already have an Openfiler server up and running.
1 – Enable NFS service
As always, point your browser to https://&lt;openfiler_address&gt;:446, log in, and from the main screen go to the Services tab and enable the NFSv3 service as shown below.
2 – Setup network access
From the System tab, add the network of the ESX servers as authorized. I added the whole network segment, but you can also create network access rules per host in order to set up a more secure and granular access policy.
3 – Create the volumes
The next step is to create the volumes we are going to use as the base for the NFS shares. If, like me, you’re a Unix/Linux geek, you surely understand the PV -> VG -> LV concepts perfectly. If not, I strongly recommend the TechHead article mentioned above, where Simon explains them very well, or, if you want to go a little deeper with volumes in Unix/Linux, my article about volume and filesystem basics in Linux and HP-UX.
First we need to create the physical volumes; go to the Volumes tab, enter the Block Devices section and edit the disk to be used for the volumes.
Create a partition and set the type to Physical Volume.
Once the Physical Volume is created, go to the Volume Groups section, create a new VG and use the new PV for it.
Finally, click on Add Volume. In this section you will have to choose the new VG that will contain the new volume, the size, name and description and, more importantly, the Filesystem/Volume Type. There are three types:
The first is obviously intended for iSCSI volumes and the other two for NFS; the criterion to follow here is scalability, since ext3 supports up to 8TB and XFS up to 10TB.
Click Create and the new volume will be created.
4 – Create the NFS share
Go to the Shares tab, there you will find the new volume as an available share.
Just to clarify concepts: this volume IS NOT the real NFS share. We are going to create a folder in the volume and share that folder through NFS with our ESX/ESXi servers.
Click on the volume name, and in the pop-up enter the name of the folder and click Create folder.
Select the folder and in the pop-up click the Make Share button.
Finally we are going to configure the newly created share; select the share to enter its configuration area.
Edit the share data to suit your needs and select the Access Control Mode. Two modes are available:
- Public guest access – There is no user-based authentication.
- Controlled access – The authentication is defined in the Accounts section.
Since this is only for my homelab, I chose Public guest access.
Next, select the share type; for our purposes I obviously chose NFS and set the permissions to Read-Write.
You can also edit the NFS options and configure them to suit your personal preferences and/or specifications.
Just a final tip for the non-Unix people: if you want to check the NFS share, open an SSH session to the Openfiler server and, as root, issue the command showmount -e. The output should look like this.
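On my server it looked roughly like the transcript below (the exported path and the authorized network are of course specific to your setup):

```shell
[root@openfiler ~]# showmount -e
Export list for openfiler.mlab.local:
/mnt/vg_nfs/lv_nfs01/nfs_datastore1 192.168.1.0/255.255.255.0
```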
The Openfiler configuration is done, now we are going to create a new datastore in our ESX servers.
5 – Add the datastore to the ESX servers
Now that the share is created and configured it is time to add it to our ESX servers.
As usual, from the vSphere Client go to Configuration -> Storage -> Add Storage.
In the pop-up window choose Network File System.
Enter the Server, the Folder and the Datastore Name.
Finally, check the data and click Finish. If everything goes well, after a few seconds the new datastore should appear.
And with this we are finished. If you see any mistake or have anything to add please comment :-)
The reason for this post is to try to provide a single point of reference for HP-related VMware resources.
I created the list for my personal use a while ago, but in the hope that it can be useful for someone else I decided to review and share it. I will try to keep the list up to date and also add it as a permanent page in the menu above.
- HP virtualization with VMware – This is the main page about VMware in the HP site. It has dozens of links to White Papers, webinars, podcasts and other HP sites about VMware.
- HP and VMware Virtualization Alliance – The HP-VMware Alliance page in the VMware site. It has several areas that outline the different HP-VMware joint solutions.
- VMware Enterprise Library at HP – Case studies, White Papers and Datasheets.
- HP Insight Control for VMware vCenter Server
VMware on ProLiant
- ProLiant server VMware support matrix – This page is the Rosetta Stone for every VMware installation on HP hardware. It has every HP ProLiant blade/server cross-referenced in a table with every ESX/ESXi version from 2.1 to 4.1. The vSphere tab also has a column about VMware FT support.
- VMware demos on HP hardware – This site has a few interesting videos demoing VMware products on HP hardware.
- ESX4 images for the G7 ProLiant Blades.
- HP sizing tool for VMware vSphere
- HP Management Agents for ESX 4.x
- HP Virtual Connect Flex-10 and VMware vSphere 4.0
- Cisco Nexus 1000V on HP BladeSystem
- VMware Storage Solutions from HP – Includes the ESX/ESXi 3.x and 4.x support matrices for HP Storageworks systems.
- Running VMware vSphere 4 on HP LeftHand P4000 SAN Solutions – Excellent White Paper, a must for every VMware-Lefthand infrastructure.
- HP EVA and vSphere 4 best practices
- HP XP24000 and vSphere 4 best practices
- VMware vCenter Plug-in for HP StorageWorks Arrays – Great video by Calvin Zito (@HPStorageGuy)
- HP P4000 VAAI demo – Video of the demo showed at VMworld 2010 San Francisco.
- HP StorageWorks drivers – Including the virtualization adapters for VMware SRM for EVA, XP and P4000 systems
- HP P4000 VSA – Product page of my beloved VSA :-)
- HP Client Virtualization – HP main site about VDI, not exclusively about VMware but very interesting.
- HP Virtual Desktop Infrastructure with VMware View – HP VDI solution with VMware View main site.
- HP Reference Architecture for VMware View with HP StorageWorks P4800 BladeSystem SAN
- HP Cloud Map with BladeSystem Matrix for VMware vCloud Director – A demo showing what can be done by combining the HP Matrix and the awesome vCloud Director.
If you are in the virtualization business you probably know that since the release of vSphere back in 2009, web access to the ESX servers has been disabled by default. I never really minded this, I still have nightmares about the awful and useless web interface of ESX 3, and to be honest, who needs web access when you have SSH access, PowerCLI and the almighty vSphere Client?
But recently I found myself with only a Linux machine and no remote access to the vCenter Server. With such a limited range of resources I decided to try the web access, but first I had to enable it.
The first step is to log into the host via SSH. Once you are inside the ESX, from a root shell execute the following command to start the service.
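A sketch of that step; I'm assuming the service name here, based on the one used with chkconfig later in this post:

```shell
# Start the Web Access service on the ESX 4.x host
service vmware-webAccess start
```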
Now you can point your web browser to http://<esx_ip_address>/ui and login as root, you will note that the interface is pretty much the same as in VMware Server 2.0.
After that I wanted to make the change permanent, and like on any normal RedHat Linux server I issued the classic chkconfig command.
[root@esx41-01 ~]# chkconfig vmware-webAccess on
I thought that everything was done; nothing could be further from the truth. After a reboot of the server the Web Access was gone.
At that point I no longer needed to access the ESX through the web, so I did not spend more time on this; but later, with one of my ESX servers at home, I finally found how to permanently enable the Web Access.
From the vSphere Client, go to the Configuration tab of the ESX host and edit the Security Profile in the Software area. The pop-up window will show a list of services; look for the Web Access service and check it.
If you now switch to an SSH session and ask for the status of the service, you will see that it is started and enabled.
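A quick sketch of that check from the SSH session (again assuming the vmware-webAccess service name):

```shell
# Current status of the service
service vmware-webAccess status
# Runlevels in which it will start at boot
chkconfig --list vmware-webAccess
```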
Reboot the server, if you can of course ;-), and you’ll see that the change is permanent.
The first time I installed an ESX 4 Update 1 on VMware Workstation, an awful red message reporting some NUMA errors appeared on the main console screen.
At that time I decided to ignore it. It didn’t interfere with the normal functioning of the ESX, and since I never went back to the console of the ESX, I just fired up the VM in Workstation and then started to work from the vSphere Client. For a long time the error fell into oblivion.
This week I decided to install a new ESX4 and a couple of ESXi4 VMs in my home lab, and the error appeared again. This time the geek inside me couldn’t resist, and after doing some research I found this VMware Knowledge Base article, which also pointed to a Dell document; both of them said that the error could be ignored because there is no real loss of service, something that I already knew x-). I finally found the solution in a VMTN post.
From the vSphere Client, go to Configuration -> Software -> Advanced Settings and in the VMkernel area disable the VMkernel.Boot.userNUMAInfo setting.
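If you prefer the command line, the same setting can be changed from a root shell on the host; a sketch (I have only verified the vSphere Client route, and I'm assuming the advanced option maps to this esxcfg-advcfg path):

```shell
# Disable the userNUMAInfo advanced option (0 = disabled)
esxcfg-advcfg -s 0 /VMkernel/Boot/userNUMAInfo
```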
After that, reboot your ESX and you will see that the error has disappeared.
I also noticed that the error is present on the virtualized ESXi, but to see it from the ESXi console press Alt-F11 and you will get to a screen almost identical to the one in the first screenshot.