If you need to put a host in maintenance mode and only have access through ESXi Tech Support Mode, either locally from the DCUI or remotely over SSH, this quick post shows how to do it with the vim-cmd command.
Put the host in maintenance mode:
Check the state of the host.
Take the host out of maintenance mode.
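The three steps above map to the following vim-cmd invocations; this is a sketch from memory of the hostsvc namespace, so treat the grep filter on the hostsummary output as an assumption and adjust it to what your build prints:

```shell
# Enter maintenance mode (running VMs must be powered off or migrated first)
vim-cmd hostsvc/maintenance_mode_enter

# Check the state of the host; inMaintenanceMode should now read "true"
vim-cmd hostsvc/hostsummary | grep inMaintenanceMode

# Take the host out of maintenance mode
vim-cmd hostsvc/maintenance_mode_exit
```

These commands are ESXi-only, so you must run them in Tech Support Mode on the host itself.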
This procedure works in ESXi 4.x and ESXi 5.
If you were hoping to see a whitebox as great as Phil Jaenke's (@RootWyrm) BabyDragon, I'm sorry to say you'll be terribly disappointed: unlike Phil's beauty, mine wasn't built on purpose.
I've been running all my labs inside VMware Workstation on my custom-made workstation, which by the way was running Windows 7 (64-bit). But recently I decided it was time to move to a more reliable solution, so I converted my Windows 7 system into an ESXi server.
Surprisingly, when I installed ESXi 4.1 Update 1 everything was properly recognized, so I thought it could help other vGeeks out there looking for working hardware components for their home labs if I posted my configuration.
- Processor: Intel Core 2 Quad Q9550. Supports FT!
- Memory: 8GB
- Motherboard: Asrock Penryn 1600SLI-110dB
- NIC: Embedded nVidia nForce network controller, supported by the forcedeth driver.
~ # ethtool -i vmnic0
driver: forcedeth
version: 0.61.0.1-1vmw
firmware-version:
bus-info: 0000:00:14.0
~ #
- SATA controller: nVidia MCP51 SATA Controller.
~ # vmkload_mod -s sata_nv
vmkload_mod module information
 input file: /usr/lib/vmware/vmkmod/sata_nv.o
 Version: Version 184.108.40.206-1vmw, Build: 348481, Interface: ddi_9_1
 Built on: Jan 12 2011
 License: GPL
 Required name-spaces:
  com.vmware.vmkapi@v1_1_0_0
 Parameters:
  heap_max: int
   Maximum attainable heap size for the driver.
  heap_initial: int
   Initial heap size allocated for the driver.
~ #
- HDD1: 1 x 120GB SATA 7200RPM Seagate ST3120026AS.
- HDD2: 1 x 1TB SATA 7200RPM Seagate ST31000528AS.
Finally, here is a screenshot of the vSphere Client connected to the vCenter VM, showing the summary of the host.
The other components of my homelab are a QNAP TS-219P NAS and an HP ProCurve 1810G-8 switch. I also plan to add two more NICs and an SSD to the server as soon as possible, and of course to build a new whitebox.
Suppose you are troubleshooting network problems on your ESXi host; as an experienced sysadmin, one of the first things you'd do is check the host's network connections. But this is ESXi, which means there is no netstat, that handy Unix command that has saved your life so many times in the past.
Please don't panic yet, because as always with VMware there is a solution: esxcli to the rescue. Here is how to list the network connections of your ESXi host, in both ESXi 4.1 and ESXi 5 :-)
I tested it in ESXi 4.1 and ESXi 4.1 Update 1. The network namespace is not available in ESXi 4.0.
I used Remote Tech Support (SSH), known simply as SSH in ESXi 5, in both examples, but you can also launch the command from the vMA or with the vSphere CLI from a Windows or Linux machine.
[vi-admin@vma ~]$ esxcli --server=arrakis.jreypo.local --username=root network connection list
vi-admin@vma5:~> esxcli --server=esxi5.jreypo.local --username=root network ip connection list
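The same commands also work locally in Tech Support Mode on the host, without the --server and --username options. A quick filter for established TCP sessions might look like this; the grep pattern is my own addition, based on the state column the command prints, so adjust it to your output:

```shell
# ESXi 4.1 local shell
esxcli network connection list | grep ESTABLISHED

# ESXi 5 local shell (note the extra "ip" level in the namespace)
esxcli network ip connection list | grep ESTABLISHED
```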
If you are wondering if you can run your vSphere 5 lab nested on ESXi 4.1, the answer is yes.
I used Eric Gray's (@eric_gray) procedure, VMware ESX 4 can even virtualize itself, to create the VMs. For the guest OS type I tried both Red Hat Enterprise Linux 5 (64-bit) and Red Hat Enterprise Linux 6 (64-bit), and both worked without a hitch.
Here they are running on top of my whitebox, which runs ESXi 4.1 Update 1; the one on the left (esxi5) was created as RHEL6 and the one on the right (esxi5-02) as RHEL5.
I also added the monitor_control.restrict_backdoor option, but I haven't tried running nested VMs yet. I'll do it later and update the post with the results.
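For reference, the relevant .vmx entries from Eric Gray's procedure look roughly like this; the guestOS value reflects the RHEL6 type I selected, and the option names should be double-checked against the original post:

```
# Guest OS type for the nested ESXi VM (RHEL6 64-bit in my case)
guestOS = "rhel6-64"
# Needed so that VMs can run inside the nested hypervisor
monitor_control.restrict_backdoor = "TRUE"
```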