Fun with VMware ESXi

In the course of rolling out new hardware we encountered some problems with our up-and-running VMware ESXi servers. I won’t go into that now, but I will go over the fun things we found out. Most of these are probably documented already, but I figured I’d put them here so I have some place to look the next time I need something!


First, managing through the client is nice, but sometimes we need more, so below are the steps to enable SSH on your ESXi host:

  1. ALT+F1 at the ESXi host console
  2. Type “unsupported” (no text will display)
  3. vi /etc/inetd.conf
  4. remove the # sign in front of #ssh stream tcp nowait root…
  5. cat /var/run/
  6. kill -HUP <pid>, where <pid> is the number returned from step 5
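The steps above boil down to uncommenting one line and sending inetd a HUP. Here’s a sketch run against a throwaway sample file rather than the live /etc/inetd.conf; the exact ssh entry and the name of inetd’s PID file under /var/run vary by build, so treat both as assumptions and check your host:

```shell
# Work on a sample copy; on the host you'd edit /etc/inetd.conf itself.
# The line below only approximates the stock entry (exact arguments vary).
printf '#ssh stream tcp nowait root /sbin/dropbearmulti dropbear -i\n' \
    > /tmp/inetd.conf.sample

# Step 4: strip the leading "#" from the ssh line.
sed -i 's/^#ssh/ssh/' /tmp/inetd.conf.sample

# Confirm the service line is now active.
grep '^ssh' /tmp/inetd.conf.sample

# Steps 5-6 on the host: read inetd's PID from its file under /var/run
# (the exact filename is an assumption -- look in /var/run on your host),
# then signal inetd to reload:  kill -HUP "$(cat /var/run/<inetd pid file>)"
```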

You should now be able to SSH into your ESXi host using PuTTY, ssh, or whatever you like!


One of the most annoying things for us is the lack of logs from ESXi, not to mention that they roll after each reboot, so you lose them. You can configure both local and remote syslogging; I’ll give you the steps for each. My remote syslog server is a Windows 2008 R2 box that I’m playing with, and I’m using Kiwi Syslog Server to receive the logs.

Install Kiwi

  1. Agree to the license terms
  2. Select to run as a service
  3. Use the LocalSystem account
  4. WebAccess requires a license, so there’s no point in checking it
  5. Choose a “Normal” install unless you need to set things up differently
  6. Select your destination and click Install

Enable SysLog

  1. Connect to your ESXi host from the VMware Client
  2. Click the “Summary” tab
  3. Browse your local datastore and note its name
  4. Create a folder called “logs” inside
  5. Close the Datastore Browser
  6. Click the “Configuration” tab
  7. Click “Advanced Settings”
  8. Select Syslog

    1. Syslog.Local.DatastorePath = [Name of Datastore] logs/messages.log
    2. Syslog.Remote.Hostname = IP Address of syslog server
  9. Click OK
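If you’d rather do this from the command line than the client, esxcfg-advcfg from the unsupported console should be able to write the same advanced options. This is only a sketch: the option paths are assumed to mirror the Advanced Settings names, and the datastore name and IP address are placeholders, so verify all of them on your build first:

```shell
# Set the local log path and remote syslog host (values are placeholders;
# option paths are assumed to mirror the GUI's Syslog.* setting names).
esxcfg-advcfg -s '[datastore1] logs/messages.log' /Syslog/Local/DatastorePath
esxcfg-advcfg -s 192.168.1.50 /Syslog/Remote/Hostname

# Read one back to confirm it took:
esxcfg-advcfg -g /Syslog/Remote/Hostname
```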

Connect to your syslog server and you should now see that logs are available. The nice thing is that since these logs are stored on a different server, you don’t lose them when the ESXi host reboots. The logs stored on the datastore also survive reboots; it’s nice to have both in case your syslog server fails.


Apparently ESXi is very touchy about iSCSI targets. If something gets fouled up, it seems the only way to clear it out is to reboot. This is not always fun, and while rebooting was the only thing that worked for us, here are some interesting tidbits we found while working with technical support:


This file contains connection information for iSCSI targets on the host.


This file contains all the iSCSI targets that this host has discovered. We believe this file is auto-generated, but what we do notice is that old targets persist even though they are not listed in this file. More digging may be required to find additional files that may contain outdated iSCSI targets.


This file contains a listing of all the iSCSI targets that the host is currently bound to. The recommendation from technical support is that this and the preceding file be renamed to try and resolve the iSCSI issue.

esxcfg-swiscsi -d

This command disables the software iSCSI initiator

esxcfg-swiscsi -k

This command kills the software iSCSI initiator

esxcfg-swiscsi -e

This command enables the software iSCSI initiator

esxcfg-swiscsi -s

This command triggers a rescan on the iSCSI initiator; this is similar to clicking Rescan on the Storage tab.
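Put together, the bounce-and-rescan cycle we ran with support looks like this from the unsupported console (for us a reboot was still what ultimately cleared the stale targets):

```shell
# Disable, kill, re-enable, and rescan the software iSCSI initiator.
esxcfg-swiscsi -d   # disable the software initiator
esxcfg-swiscsi -k   # kill the running initiator
esxcfg-swiscsi -e   # enable it again
esxcfg-swiscsi -s   # rescan -- same as Rescan on the Configuration > Storage tab
```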

Windows 2008 R2 Clustering

Our current file server cluster falls out of warranty in about a month, and as it was over-spec’d to begin with we made a purchase of new hardware from Dell. We bought several things: two new servers for the file sharing cluster, a new SQL server, a new backup server, a new virtual server, and two new iSCSI SANs. I’ll talk about these as we get to them; for the time being I’ll talk about Failover Clustering in Windows 2008 R2.

I started working on a post about setting up a Hyper-V cluster, but I haven’t gotten around to finishing it, so I’ll use this as a starting point. I won’t go into any great detail on how to install things; the installation is rather straightforward. I will mention a few things first, though:

  • The Failover Clustering feature is only available on Windows 2008 Enterprise or above
  • If you’re running an iSCSI SAN, enable the Multipath I/O feature
  • Do yourself a favor and enable the SNMP Feature

Considerations for File server clusters

  • Enable the File Server role if you’re setting up a file server cluster
  • Enable the Remote Volume Management rule in the Windows Firewall

Once everything is installed and ready to go, you can run the Cluster Validation Report, and it will tell you what, if anything, you need to resolve before getting the cluster up and running. If you happen to be running a non-Microsoft DNS server (BIND), the following information will be important for you.

Once we had our cluster set up, we noticed repeated errors for Event ID 1196. After googling, I found a nice blog post on how DNS works for Windows clusters, which specifically assumed a Microsoft DNS server. A little more googling turned up KB 977158. That article directly related to our situation, and after applying the hotfix it mentions, our cluster was finally able to come online. You can read the details of the problem in the article itself, but if you are running BIND for name resolution and want a Windows 2008 R2 cluster, install the hotfix.

When setting up a file server cluster in Windows 2003, you were given very little help with configuration. You had to create the folder in advance and define your NTFS permissions; if you were running Storage Server or R2 you could then define your folder-based quotas, and then, and only then, could you create your File Share resource on the cluster.

The first thing you will want to do is make sure that you have disks available for storage, then create your first File Server resource. Once your File Server resource is online, right-click it and choose “Add a shared folder”. From that point on, a wizard guides you through everything you need to set up; below are screenshots that step you through each screen. Once you click Finish, it sets up everything you want before making the share available on the cluster! You no longer have to visit multiple consoles; it’s all literally at your fingertips.

I know I get excited about many things, but I know for me, this will make things so much simpler!