Today a co-worker asked me how to list the packages installed on an ESXi 4.1 Update 1 server. In the ESX COS we had the Red Hat rpm command, but in ESXi there is no rpm and, of course, there is no COS.
His intention was to look up the version of the qla2xxx driver. My first thought was vmkload_mod; the problem is that this command can only report the version of a driver already loaded by the VMkernel, and we wanted the version of a driver that was installed but not loaded.
I tried esxupdate with no luck.
~ # esxupdate query
----Bulletin ID----- -----Installed----- --------------Summary---------------
ESXi410-201101223-UG 2011-01-13T05:09:39 3w-9xxx: scsi driver for VMware ESXi
ESXi410-201101224-UG 2011-01-13T05:09:39 vxge: net driver for VMware ESXi
~ #
Then I suddenly remembered that the ESXi Tech Support Mode is based on BusyBox. If you have ever used a BusyBox environment, like a QNAP NAS, you will probably remember that the way to install new software over the network is the ipkg command, and that the syntax to list the packages already installed is ipkg list_installed.
~ # ipkg list_installed
emulex-cim-provider - 410.2.0.32.1-207424 -
lsi-provider - 410.04.V0.24-140815 -
qlogic-fchba-provider - 400.1.1.8-140815 -
vmware-esx-drivers-ata-libata - 400.2.00.1-1vmw.1.4.348481 -
vmware-esx-drivers-ata-pata-amd - 400.0.2.4.1-1vmw.1.4.348481 -
vmware-esx-drivers-ata-pata-atiixp - 400.0.4.3.1-1vmw.1.4.348481 -
vmware-esx-drivers-ata-pata-cmd64x - 400.0.2.1.1-1vmw.1.4.348481 -
vmware-esx-drivers-ata-pata-hpt3x2n - 400.0.3.0.1-1vmw.1.4.348481 -
vmware-esx-drivers-ata-pata-pdc2027x - 400.0.74ac5.1-1vmw.1.4.348481 -
vmware-esx-drivers-ata-pata-serverworks - 400.0.3.7.1-1vmw.1.4.348481 -
vmware-esx-drivers-ata-pata-sil680 - 400.0.3.2.1-1vmw.1.4.348481 -
vmware-esx-drivers-ata-pata-via - 400.0.1.14.1-1vmw.1.4.348481 -
vmware-esx-drivers-block-cciss - 400.3.6.14.10.1-2vmw.1.4.348481 -
vmware-esx-drivers-char-hpcru - 400.1.1.0.1-1vmw.1.4.348481 -
vmware-esx-drivers-char-pseudo-char-dev - 400.0.0.1.1-1vmw.1.4.348481 -
vmware-esx-drivers-char-random - 400.1.0.0.1-1vmw.1.4.348481 -
vmware-esx-drivers-char-tpm-tis - 400.0.0.1.1-1vmw.1.4.348481 -
vmware-esx-drivers-ehci-ehci-hcd - 400.1.0.0.1-1vmw.1.4.348481 -
vmware-esx-drivers-hid-hid - 400.2.6.0.1-1vmw.1.4.348481 -
vmware-esx-drivers-ioat-ioat - 400.2.15.0.1-1vmw.1.4.348481 -
vmware-esx-drivers-ipmi-ipmi-devintf - 400.39.2.0.1-1vmw.1.4.348481 -
vmware-esx-drivers-ipmi-ipmi-msghandler - 400.39.2.0.1-1vmw.1.4.348481 -
vmware-esx-drivers-ipmi-ipmi-si-drv - 400.39.2.0.1-1vmw.1.4.348481 -
vmware-esx-drivers-net-bnx2 - 400.2.0.7d-3vmw.1.4.348481 -
vmware-esx-drivers-net-bnx2x - 400.1.54.1.v41.1-2vmw.1.4.348481 -
vmware-esx-drivers-net-cdc-ether - 400.1.0.0.1-2vmw.1.4.348481 -
vmware-esx-drivers-net-cnic - 400.1.9.7d.rc2.3.1-2vmw.1.4.348481 -
vmware-esx-drivers-net-e1000 - 400.8.0.3.2-1vmw.1.4.348481 -
vmware-esx-drivers-net-e1000e - 400.1.1.2.1-1vmw.1.4.348481 -
vmware-esx-drivers-net-enic - 400.1.4.0.261-1vmw.1.4.348481 -
vmware-esx-drivers-net-forcedeth - 400.0.61.0.1-1vmw.1.4.348481 -
vmware-esx-drivers-net-igb - 400.1.3.19.12.2-2vmw.1.4.348481 -
vmware-esx-drivers-net-ixgbe - 400.2.0.38.2.5.1-1vmw.1.4.348481 -
vmware-esx-drivers-net-nx-nic - 400.4.0.550.1-1vmw.1.4.348481 -
vmware-esx-drivers-net-s2io - 400.2.1.4.13427.1-1vmw.1.4.348481 -
vmware-esx-drivers-net-sky2 - 400.1.20-1vmw.1.4.348481 -
vmware-esx-drivers-net-tg3 - 400.3.86.0.1-1vmw.1.4.348481 -
vmware-esx-drivers-net-usbnet - 400.1.0.0.1-2vmw.1.4.348481 -
vmware-esx-drivers-ohci-usb-ohci - 400.1.0.0.1-1vmw.1.4.348481 -
vmware-esx-drivers-sata-ahci - 400.2.0.0.1-5vmw.1.4.348481 -
vmware-esx-drivers-sata-ata-piix - 400.2.00ac6.1-3vmw.1.4.348481 -
vmware-esx-drivers-sata-sata-nv - 400.2.0.0.1-1vmw.1.4.348481 -
vmware-esx-drivers-sata-sata-promise - 400.1.04.0.1-1vmw.1.4.348481 -
vmware-esx-drivers-sata-sata-sil - 400.2.0.0.1-1vmw.1.4.348481 -
vmware-esx-drivers-sata-sata-svw - 400.2.0.0.1-1vmw.1.4.348481 -
vmware-esx-drivers-scsi-aacraid - 400.4.1.1.5.1-1vmw.1.4.348481 -
vmware-esx-drivers-scsi-adp94xx - 400.1.0.8.12.1-1vmw.1.4.348481 -
vmware-esx-drivers-scsi-aic79xx - 400.3.2.0.1-1vmw.1.4.348481 -
vmware-esx-drivers-scsi-bnx2i - 400.1.8.11t5.rc2.8.1-4vmw.1.4.348481 -
vmware-esx-drivers-scsi-fnic - 400.1.1.0.113.2-4vmw.1.4.348481 -
vmware-esx-drivers-scsi-hpsa - 400.3.6.14.45-4vmw.1.4.348481 -
vmware-esx-drivers-scsi-ips - 400.7.12.06.1-3vmw.1.4.348481 -
vmware-esx-drivers-scsi-iscsi-linux - 400.1.0.0.1-1vmw.1.4.348481 -
vmware-esx-drivers-scsi-lpfc820 - 400.8.2.1.30.1-58vmw.1.4.348481 -
vmware-esx-drivers-scsi-megaraid-mbox - 400.2.20.5.1.4-1vmw.1.4.348481 -
vmware-esx-drivers-scsi-megaraid-sas - 400.4.0.14.1-18vmw.1.4.348481 -
vmware-esx-drivers-scsi-megaraid2 - 400.2.00.4.1-4vmw.1.4.348481 -
vmware-esx-drivers-scsi-mpt2sas - 400.04.255.03.00.1-6vmw.1.4.348481 -
vmware-esx-drivers-scsi-mptsas - 400.4.21.00.01.1-6vmw.1.4.348481 -
vmware-esx-drivers-scsi-mptspi - 400.4.21.00.01.1-6vmw.1.4.348481 -
vmware-esx-drivers-scsi-qla2xxx - 400.831.k1.28.1-1vmw.1.4.348481 -
vmware-esx-drivers-scsi-qla4xxx - 400.5.01.03.1-10vmw.1.4.348481 -
vmware-esx-drivers-scsi-sample-iscsi - 400.1.0.0-1vmw.1.4.348481 -
vmware-esx-drivers-uhci-usb-uhci - 400.3.0.0.1-1vmw.1.4.348481 -
vmware-esx-drivers-usb-storage-usb-storage - 400.1.0.0.1-1vmw.1.4.348481 -
vmware-esx-drivers-usbcore-usb - 400.1.0.0.1-1vmw.1.4.348481 -
vmware-esx-drivers-vmklinux-vmklinux - 4.1.0-1.4.348481 -
Successfully terminated.
~ #
There you are :-) There is one gotcha when reading the version: the actual driver version starts just after the leading 400.
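To go straight to the driver we were after, the listing can be filtered with grep; the line below is taken from the full output above:

```shell
~ # ipkg list_installed | grep qla2xxx
vmware-esx-drivers-scsi-qla2xxx - 400.831.k1.28.1-1vmw.1.4.348481 -
```

Applying the gotcha, the installed qla2xxx driver version is 831.k1.28.1.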
The next task, of course, was to do the same in ESXi 5.0.
~ # ipkg list_installed
-sh: ipkg: not found
~ #
Ouch! ipkg has been removed from ESXi 5.0. The key to getting the same list is esxcli.
~ # esxcli software vib list
Name                  Version                               Vendor  Acceptance Level  Install Date
--------------------  ------------------------------------  ------  ----------------  ------------
ata-pata-amd          0.3.10-3vmw.500.0.0.469512            VMware  VMwareCertified   2011-09-07
ata-pata-atiixp       0.4.6-3vmw.500.0.0.469512             VMware  VMwareCertified   2011-09-07
ata-pata-cmd64x       0.2.5-3vmw.500.0.0.469512             VMware  VMwareCertified   2011-09-07
ata-pata-hpt3x2n      0.3.4-3vmw.500.0.0.469512             VMware  VMwareCertified   2011-09-07
ata-pata-pdc2027x     1.0-3vmw.500.0.0.469512               VMware  VMwareCertified   2011-09-07
ata-pata-serverworks  0.4.3-3vmw.500.0.0.469512             VMware  VMwareCertified   2011-09-07
ata-pata-sil680       0.4.8-3vmw.500.0.0.469512             VMware  VMwareCertified   2011-09-07
ata-pata-via          0.3.3-2vmw.500.0.0.469512             VMware  VMwareCertified   2011-09-07
block-cciss           3.6.14-10vmw.500.0.0.469512           VMware  VMwareCertified   2011-09-07
ehci-ehci-hcd         1.0-3vmw.500.0.0.469512               VMware  VMwareCertified   2011-09-07
esx-base              5.0.0-0.0.469512                      VMware  VMwareCertified   2011-09-07
esx-tboot             5.0.0-0.0.469512                      VMware  VMwareCertified   2011-09-07
ima-qla4xxx           2.01.07-1vmw.500.0.0.469512           VMware  VMwareCertified   2011-09-07
ipmi-ipmi-devintf     39.1-4vmw.500.0.0.469512              VMware  VMwareCertified   2011-09-07
ipmi-ipmi-msghandler  39.1-4vmw.500.0.0.469512              VMware  VMwareCertified   2011-09-07
ipmi-ipmi-si-drv      39.1-4vmw.500.0.0.469512              VMware  VMwareCertified   2011-09-07
misc-cnic-register    1.1-1vmw.500.0.0.469512               VMware  VMwareCertified   2011-09-07
misc-drivers          5.0.0-0.0.469512                      VMware  VMwareCertified   2011-09-07
net-be2net            126.96.36.199-1vmw.500.0.0.469512     VMware  VMwareCertified   2011-09-07
net-bnx2              2.0.15g.v50.11-5vmw.500.0.0.469512    VMware  VMwareCertified   2011-09-07
net-bnx2x             1.61.15.v50.1-1vmw.500.0.0.469512     VMware  VMwareCertified   2011-09-07
net-cnic              1.10.2j.v50.7-2vmw.500.0.0.469512     VMware  VMwareCertified   2011-09-07
net-e1000             188.8.131.52-2vmw.500.0.0.469512      VMware  VMwareCertified   2011-09-07
net-e1000e            1.1.2-3vmw.500.0.0.469512             VMware  VMwareCertified   2011-09-07
net-enic              184.108.40.206a-1vmw.500.0.0.469512   VMware  VMwareCertified   2011-09-07
net-forcedeth         0.61-2vmw.500.0.0.469512              VMware  VMwareCertified   2011-09-07
net-igb               220.127.116.11-3vmw.500.0.0.469512    VMware  VMwareCertified   2011-09-07
net-ixgbe             18.104.22.168.2-10vmw.500.0.0.469512  VMware  VMwareCertified   2011-09-07
net-nx-nic            4.0.557-3vmw.500.0.0.469512           VMware  VMwareCertified   2011-09-07
net-r8168             8.013.00-3vmw.500.0.0.469512          VMware  VMwareCertified   2011-09-07
net-r8169             6.011.00-2vmw.500.0.0.469512          VMware  VMwareCertified   2011-09-07
net-s2io              22.214.171.12427-3vmw.500.0.0.469512  VMware  VMwareCertified   2011-09-07
net-sky2              1.20-2vmw.500.0.0.469512              VMware  VMwareCertified   2011-09-07
net-tg3               3.110h.v50.4-4vmw.500.0.0.469512      VMware  VMwareCertified   2011-09-07
ohci-usb-ohci         1.0-3vmw.500.0.0.469512               VMware  VMwareCertified   2011-09-07
sata-ahci             3.0-6vmw.500.0.0.469512               VMware  VMwareCertified   2011-09-07
sata-ata-piix         2.12-4vmw.500.0.0.469512              VMware  VMwareCertified   2011-09-07
sata-sata-nv          3.5-3vmw.500.0.0.469512               VMware  VMwareCertified   2011-09-07
sata-sata-promise     2.12-3vmw.500.0.0.469512              VMware  VMwareCertified   2011-09-07
sata-sata-sil         2.3-3vmw.500.0.0.469512               VMware  VMwareCertified   2011-09-07
sata-sata-svw         2.3-3vmw.500.0.0.469512               VMware  VMwareCertified   2011-09-07
scsi-aacraid          126.96.36.199-9vmw.500.0.0.469512     VMware  VMwareCertified   2011-09-07
scsi-adp94xx          188.8.131.52-6vmw.500.0.0.469512      VMware  VMwareCertified   2011-09-07
scsi-aic79xx          3.1-5vmw.500.0.0.469512               VMware  VMwareCertified   2011-09-07
scsi-bnx2i            1.9.1d.v50.1-3vmw.500.0.0.469512      VMware  VMwareCertified   2011-09-07
scsi-fnic             184.108.40.206-1vmw.500.0.0.469512    VMware  VMwareCertified   2011-09-07
scsi-hpsa             5.0.0-17vmw.500.0.0.469512            VMware  VMwareCertified   2011-09-07
scsi-ips              7.12.05-4vmw.500.0.0.469512           VMware  VMwareCertified   2011-09-07
scsi-lpfc820          220.127.116.11-18vmw.500.0.0.469512   VMware  VMwareCertified   2011-09-07
scsi-megaraid-mbox    18.104.22.168-6vmw.500.0.0.469512     VMware  VMwareCertified   2011-09-07
scsi-megaraid-sas     4.32-1vmw.500.0.0.469512              VMware  VMwareCertified   2011-09-07
scsi-megaraid2        2.00.4-9vmw.500.0.0.469512            VMware  VMwareCertified   2011-09-07
scsi-mpt2sas          06.00.00.00-5vmw.500.0.0.469512       VMware  VMwareCertified   2011-09-07
scsi-mptsas           4.23.01.00-5vmw.500.0.0.469512        VMware  VMwareCertified   2011-09-07
scsi-mptspi           4.23.01.00-5vmw.500.0.0.469512        VMware  VMwareCertified   2011-09-07
scsi-qla2xxx          901.k1.1-14vmw.500.0.0.469512         VMware  VMwareCertified   2011-09-07
scsi-qla4xxx          5.01.03.2-3vmw.500.0.0.469512         VMware  VMwareCertified   2011-09-07
uhci-usb-uhci         1.0-3vmw.500.0.0.469512               VMware  VMwareCertified   2011-09-07
tools-light           5.0.0-0.0.469512                      VMware  VMwareCertified   2011-09-07
~ #
A final thought for all of you starting with vSphere 5: esxcli is the key to the ESXi 5.0 shell.
As we found before for netstat, there is no arp command available from within ESXi Tech Support Mode. So how can you list the ARP table entries if you need to? And how can you do it remotely, either with vCLI or PowerCLI?
In this quick post I'll show you the different ways to list the ARP table entries of an ESXi server, as always both for ESXi 4 and ESXi 5.
Tech Support Mode
From ESXi Tech Support Mode we need to rely on esxcli.
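A quick sketch of the local commands; note that ESXi 5.0 adds an ip level to the namespace that ESXi 4.1 does not have:

```shell
# ESXi 4.1 Tech Support Mode
~ # esxcli network neighbor list
# ESXi 5.0 Shell
~ # esxcli network ip neighbor list
```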
Again we need esxcli, this time from the vSphere CLI, in order to get the ARP table remotely.
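The syntax is the same as in the local shell, plus the connection options; the hostnames below follow the earlier netstat examples:

```shell
# ESXi 4.1
[vi-admin@vma ~]$ esxcli --server=arrakis.jreypo.local --username=root network neighbor list
# ESXi 5.0
vi-admin@vma5:~> esxcli --server=esxi5.jreypo.local --username=root network ip neighbor list
```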
In this case we are going to use esxcli as well, but through the Get-EsxCli cmdlet. First we retrieve the esxcli instance and then we get the ARP table list.
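A minimal PowerCLI sketch; the host name is just an example, and on ESXi 4.1 the namespace drops the ip level:

```powershell
# Retrieve the esxcli instance for the host
$esxcli = Get-EsxCli -VMHost esxi5.jreypo.local
# ESXi 5.0: list the ARP table entries
$esxcli.network.ip.neighbor.list() | Format-Table -AutoSize
# ESXi 4.1 equivalent: $esxcli.network.neighbor.list()
```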
In a previous post I described how to get the network connections of an ESXi server using esxcli from Tech Support Mode and the vSphere CLI. Below I'll show you how to get the same information from ESXi 4.1 and 5.0 hosts using PowerCLI.
The key to performing this task is the Get-EsxCli cmdlet. This command was introduced with PowerCLI 4.1.1 and its purpose is to expose the esxcli framework.
The first task with Get-EsxCli is to create a wrapper, using a variable that will expose the esxcli functionality.
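Something like this, assuming you are already connected to the host with Connect-VIServer (the host name is an example):

```powershell
Connect-VIServer -Server esx01.mlab.local -User root
$esxcli = Get-EsxCli -VMHost esx01.mlab.local
$esxcli   # typing the variable alone lists the exposed esxcli namespaces
```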
As can be seen in the screenshot, all the namespaces of my whitebox are exposed, just like with the esxcli command. Now we are going to get the network connections of the host.
Finally, the following is the syntax to get the network connections of an ESXi 5 server.
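Using the $esxcli wrapper, the calls mirror the shell syntax; a sketch for both versions:

```powershell
# ESXi 4.1
$esxcli.network.connection.list() | Format-Table
# ESXi 5.0
$esxcli.network.ip.connection.list() | Format-Table
```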
In both cases I used the Format-Table cmdlet to get the output in an easily readable and useful format.
If you need to put a host in maintenance mode and only have access through ESXi Tech Support Mode, either locally from the DCUI or remotely with SSH, in the following quick post I'll show you how to do it using the vim-cmd command.
Put the host in maintenance mode:
Check the state of the host:
Take the ESXi host out of maintenance mode:
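The three steps map to the following vim-cmd calls (a sketch; the grep is just one way to check the state):

```shell
# Enter maintenance mode
~ # vim-cmd hostsvc/maintenance_mode_enter
# Check the state of the host
~ # vim-cmd hostsvc/hostsummary | grep inMaintenanceMode
# Exit maintenance mode
~ # vim-cmd hostsvc/maintenance_mode_exit
```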
This procedure works in ESXi 4.x and ESXi 5.
Suppose you are trying to troubleshoot a network problem on your ESXi host; as an experienced sysadmin, one of the first things you do is get the network connections of the host. But you are on ESXi, and that means there is no netstat, that handy Unix command that saved your life so many times in the past.
Please don't panic yet, because as always with VMware there is a solution: esxcli to the rescue. Here is the way to list the network connections of your ESXi host, both for ESXi 4.1 and ESXi 5 :-)
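From the local shell the commands are (note the extra ip namespace level in ESXi 5):

```shell
# ESXi 4.1 Tech Support Mode
~ # esxcli network connection list
# ESXi 5.0 Shell
~ # esxcli network ip connection list
```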
I tested it in ESXi 4.1 and ESXi 4.1 Update 1. The network namespace is not available in ESXi 4.0.
I used Remote Tech Support (SSH), known simply as SSH in ESXi 5, in both examples, but you can also launch the command from the vMA or with vSphere CLI from a Windows or Linux machine.
[vi-admin@vma ~]$ esxcli --server=arrakis.jreypo.local --username=root network connection list
vi-admin@vma5:~> esxcli --server=esxi5.jreypo.local --username=root network ip connection list
If you are wondering if you can run your vSphere 5 lab nested on ESXi 4.1, the answer is yes.
I used Eric Gray's (@eric_gray) procedure, VMware ESX 4 can even virtualize itself, to create the VMs. For the guest OS type I tried Red Hat Enterprise Linux 5 (64-bit) and Red Hat Enterprise Linux 6 (64-bit), and both worked without a glitch.
Here they are running on top of my whitebox, which runs ESXi 4.1 Update 1; the one on the left (esxi5) was created as RHEL6 and the one on the right (esxi5-02) as RHEL5.
I also added the monitor_control.restrict_backdoor option, but I have not yet tried to run nested VMs. I'll do it later and will update the post with the results.
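For reference, the option is a single line in each nested ESXi VM's .vmx file:

```
monitor_control.restrict_backdoor = "true"
```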
As a small follow-up to yesterday's post about NFS shares with Openfiler, in the following article I will show how to add a new NFS datastore to an ESX server using the vMA and PowerCLI.
From the vMA shell we are going to use the vicfg-nas command. To clarify things a bit for the newcomers: vicfg-nas and esxcfg-nas are the same command; in fact, esxcfg-nas is no more than a link to the former.
The option to create a new datastore is -a; additionally, the address/hostname of the NFS server, the share and a label for the new datastore must be provided.
[vi-admin@vma ~][esx01.mlab.local]$ vicfg-nas -l
No NAS datastore found
[vi-admin@vma ~][esx01.mlab.local]$ vicfg-nas -a -o openfiler.mlab.local -s /mnt/vg_nfs/lv_nfs01/nfs_datastore1 nfs_datastore1
Connecting to NAS volume: nfs_datastore1
nfs_datastore1 created and connected.
[vi-admin@vma ~][esx01.mlab.local]$
When the operation is done you can check the new datastore with vicfg-nas -l.
[vi-admin@vma ~][esx01.mlab.local]$ vicfg-nas -l
nfs_datastore1 is /mnt/vg_nfs/lv_nfs01/nfs_datastore1 from openfiler.mlab.local mounted
[vi-admin@vma ~][esx01.mlab.local]$
In the second part of the post we are going to use vSphere PowerCLI, which as you already know is a PowerShell snap-in to manage vSphere/VI3 infrastructures. I will write more about PowerCLI in the future, since I'm very fond of it.
The cmdlet to create the new NFS datastore is New-Datastore and you must provide the ESX host, the NFS server, the path of the share and a name for the datastore. Then you can check that the new datastore has been properly added with Get-Datastore.
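A sketch using the same host, server and share names as the vMA example above:

```powershell
New-Datastore -Nfs -VMHost esx01.mlab.local -Name nfs_datastore1 `
  -NfsHost openfiler.mlab.local -Path /mnt/vg_nfs/lv_nfs01/nfs_datastore1
Get-Datastore -Name nfs_datastore1
```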
This post is mostly for self-reference, but maybe someone will find it useful. Last night I decided to change the IP address of one of the Openfiler instances in my homelab and, instead of removing the NFS shares from the ESX servers first, I simply made the changes.
After a restart of the network services in the Openfiler server to commit the changes I found that the ESX servers saw the datastore as inactive.
First I tried to remove it from the vSphere Client and I received the following error message:
I quickly switched to an SSH session on the vMA to check the state of the datastore; it appeared as not mounted.
[vi-admin@vma /][esx01.mlab.local]$ esxcfg-nas -l
nfs_datastore1 is /mnt/vg_nfs/lv_nfs01/nfs_datastore1 from openfiler.mlab.local not mounted
[vi-admin@vma /][esx01.mlab.local]$
At this point I used the esxcfg-nas command to remove the datastore.
[vi-admin@vma /][esx01.mlab.local]$ esxcfg-nas -d nfs_datastore1
NAS volume nfs_datastore1 deleted.
[vi-admin@vma /][esx01.mlab.local]$ esxcfg-nas -l
No NAS datastore found
[vi-admin@vma /][esx01.mlab.local]$
Very easy, isn't it? Oh, by the way, this just confirms one of my personal beliefs: "Where there is a shell, there is a way" ;-)
Even if you have access to enterprise-class storage appliances, like the HP P4000 VSA or the EMC Celerra VSA, an Openfiler storage appliance can be a great asset in your homelab, especially if, like me, you run an "all virtual" homelab within VMware Workstation, since Openfiler is far less resource-hungry than its enterprise counterparts.
Simon Seagrave (@Kiwi_Si) from TechHead.co.uk wrote an excellent article explaining how to add iSCSI LUNs from an Openfiler instance to your ESX/ESXi servers; if iSCSI is your "thing" you should check it out.
In this article I'll explain how to configure an NFS share in Openfiler and then add it as a datastore to your vSphere servers. I'll take for granted that you already have an Openfiler server up and running.
1 – Enable NFS service
As always, point your browser to https://<openfiler_address>:446, log in, and from the main screen go to the Services tab and enable the NFSv3 service as shown below.
2 – Setup network access
From the System tab add the network of the ESX servers as authorized. I added the whole network segment, but you can also create network access rules per host in order to set up a more secure and granular access policy.
3 – Create the volumes
The next step is to create the volumes we are going to use as the base for the NFS shares. If, like me, you're a Unix/Linux geek, you surely understand the PV -> VG -> LV concepts perfectly; if not, I strongly recommend you check the TechHead article mentioned above, where Simon explains it very well, or, if you want to go a little deeper into volumes in Unix/Linux, my article about volume and filesystem basics in Linux and HP-UX.
First we need to create the physical volumes; go to the Volumes tab, enter the Block Devices section and edit the disk to be used for the volumes.
Create a partition and set the type to Physical Volume.
Once the Physical Volume is created go to the Volume Groups section and create a new VG and use for it the new PV.
Finally click on Add Volume. In this section you will have to choose the new VG that will contain the new volume, the size, the name and description and, most importantly, the Filesystem/Volume Type. There are three types:
The first is obviously intended for iSCSI volumes and the other two for NFS; the criterion to follow here is scalability, since ext3 supports up to 8TB and XFS up to 10TB.
Click Create and the new volume will be created.
4 – Create the NFS share
Go to the Shares tab, there you will find the new volume as an available share.
Just to clarify concepts: this volume IS NOT the actual NFS share. We are going to create a folder inside the volume and share that folder through NFS with our ESX/ESXi servers.
Click on the volume name and, in the pop-up, enter the name of the folder and click Create folder.
Select the folder and in the pop-up click the Make Share button.
Finally we are going to configure the newly created share; select the share to enter its configuration area.
Edit the share data to suit your needs and select the Access Control Mode. Two modes are available:
- Public guest access – There is no user based authentication.
- Controlled access – The authentication is defined in the Accounts section.
Since this is only for my homelab, I chose Public guest access.
Next select the share type; for our purposes I obviously chose NFS and set the permissions to Read-Write.
You can also edit the NFS options and configure them to suit your personal preferences and/or specifications.
Just a final tip for the non-Unix people: if you want to check the NFS share, open an SSH session with the Openfiler server and, as root, issue the command showmount -e. The output should look like this.
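Something like this, assuming the share and network access rule created earlier (the network below is an example):

```shell
[root@openfiler ~]# showmount -e
Export list for openfiler.mlab.local:
/mnt/vg_nfs/lv_nfs01/nfs_datastore1 192.168.1.0/255.255.255.0
```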
The Openfiler configuration is done, now we are going to create a new datastore in our ESX servers.
5 – Add the datastore to the ESX servers
Now that the share is created and configured it is time to add it to our ESX servers.
As usual, from the vSphere Client go to Configuration -> Storage -> Add Storage.
In the pop-up window choose Network File System.
Fill in the Server, Folder and Datastore Name fields.
Finally, check the data and click Finish. If everything goes well, after a few seconds the new datastore should appear.
And with this we are finished. If you see any mistake or have anything to add please comment :-)
The reason for this post is to try to provide a single point of reference for HP-related VMware resources.
I created the list for my personal use a while ago, but in the hope that it can be useful for someone else I decided to review and share it. I will try to keep the list up to date and also add it as a permanent page in the menu above.
- HP virtualization with VMware – This is the main page about VMware in the HP site. It has dozens of links to White Papers, webinars, podcasts and other HP sites about VMware.
- HP and VMware Virtualization Alliance – The HP-VMware Alliance page in the VMware site. It has several areas that outline the different HP-VMware joint solutions.
- VMware Enterprise Library at HP – Case studies, White Papers and Datasheets.
- HP Insight Control for VMware vCenter Server
VMware on ProLiant
- ProLiant server VMware support matrix – This page is the Rosetta Stone for every VMware installation on HP hardware. It has every HP ProLiant blade/server cross-referenced in a table with every ESX/ESXi version from 2.1 to 4.1. The vSphere tab also has a column about VMware FT support.
- VMware demos in HP hardware – This site has a few interesting videos demoing VMware products in HP hardware.
- ESX4 images for the G7 ProLiant Blades.
- HP sizing tool for VMware vSphere
- HP Management Agents for ESX 4.x
- HP Virtual Connect Flex-10 and VMware vSphere 4.0
- Cisco Nexus 1000V on HP BladeSystem
- VMware Storage Solutions from HP – Includes the ESX/ESXi 3.x and 4.x support matrices for HP Storageworks systems.
- Running VMware vSphere 4 on HP LeftHand P4000 SAN Solutions – Excellent White Paper, a must for every VMware-Lefthand infrastructure.
- HP EVA and vSphere 4 best practices
- HP XP24000 and vSphere 4 best practices
- VMware vCenter Plug-in for HP StorageWorks Arrays – Great video by Calvin Zito (@HPStorageGuy)
- HP P4000 VAAI demo – Video of the demo showed at VMworld 2010 San Francisco.
- HP StorageWorks drivers – Including the virtualization adapters for VMware SRM for EVA, XP and P4000 systems
- HP P4000 VSA – Product page of my beloved VSA :-)
- HP Client Virtualization – HP's main site about VDI, not exclusively about VMware but very interesting.
- HP Virtual Desktop Infrastructure with VMware View – HP VDI solution with VMware View main site.
- HP Reference Architecture for VMware View with HP StorageWorks P4800 BladeSystem SAN
- HP Cloud Map with BladeSystem Matrix for VMware vCloud Director – A demo showing what can be done by combining the HP Matrix and the awesome vCloud Director.