Apr 15

ESXi SAN boot and persistent storage

Despite the long history I have with servers and SAN storage, the only time I'd ever configured boot from SAN was for Solaris SPARC boxes.

We recently bought new B200 M3 blades for our UCS infrastructure, and I unintentionally omitted local disks from that order (we usually do a basic RAID 0 across two local disks to boot ESXi from).

I was actually OK with this, as we have an amazing EMC SAN, and having fewer disks to die, and less power to consume, is always a good thing.  So, this was the first time I'd implemented booting ESXi from SAN.

The issue was that vSphere was telling me “System logs are stored on nonpersistent storage”, which is fair if you care about system logs surviving a host reboot, which I do.

Easy setup

Though beyond the scope of this post, it's worth noting that actually doing it is quite simple: create a new LUN, zone it in the fabric appropriately, make sure it's LUN0, and off you go.

Size matters

I discovered after doing one or two installs that the size of the boot LUN matters, because of a concept in ESXi called persistent storage.  In short, if ESXi doesn't have enough space for its scratch partition on "persistent" storage (i.e. a local disk, a SAN LUN, etc.), it creates one in the host ramdisk on each boot (making it volatile).  This partition is, of course, laid out when you install ESXi, and isn't easily changed after install; it's actually quicker to just reinstall.

From VMware’s Installing or upgrading to ESXi 5.1 best practices (2032756):

Installing ESXi 5.1 requires a boot device that is a minimum of 1GB in size. When booting from a local disk or SAN/iSCSI LUN, a 5.2GB disk is required to allow for the creation of the VMFS volume and a 4GB scratch partition on the boot device. If a smaller disk or LUN is used, the installer attempts to allocate a scratch region on a separate local disk. If a local disk cannot be found, the scratch partition (/scratch) is located on the ESXi host ramdisk, linked to /tmp/scratch. You can reconfigure /scratch to use a separate disk or LUN. For best performance and memory optimization, VMware recommends that you do not leave /scratch on the ESXi host ramdisk.
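You can check which situation you've landed in from the ESXi shell.  A quick way (assuming ESXi 5.x, where the ScratchConfig advanced options exist) is:

```shell
# Show the scratch location ESXi is actually using right now
esxcli system settings advanced list -o /ScratchConfig/CurrentScratchLocation

# The /scratch symlink tells the same story: a /vmfs/volumes target means
# persistent storage, a /tmp target means the volatile ramdisk
ls -l /scratch
```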

My mistake was to pick a rounded-off size, e.g. 5GB, for my boot LUNs.  I was 200MB off 😉

The magic number

The trick is sizing the LUN just a little bigger, to satisfy (or perhaps fool) ESXi into doing a fully-local install.  I'm a purist, and refused to just go to, say, 6GB, so I experimented a little and pretty much fluked it on the first go.

The challenge for me was converting 5.2GB into MB, because EMC Unisphere doesn't accept decimals.  Then there's the question of whether this particular "thing" calculates units in powers of two or not.  Turns out the math was simple:

5.2 × 1024 = 5,324.8
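If you'd rather not trust mental arithmetic, the same conversion and round-up can be done with a throwaway one-liner (plain POSIX awk, nothing ESXi-specific):

```shell
# Convert 5.2GB to MB (binary units) and round up to the next whole MB
awk 'BEGIN { mb = 5.2 * 1024; printf "%.1f MB, round up to %d MB\n", mb, int(mb) + (mb > int(mb)) }'
# -> 5324.8 MB, round up to 5325 MB
```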

Round that up to 5325MB and ESXi likes that, and partitions the disk thusly:

vfat         4.0G  31.8M      4.0G   1% /vmfs/volumes/<long ID>
vfat       249.7M 130.2M    119.5M  52% /vmfs/volumes/<long ID>
vfat       249.7M   8.0K    249.7M   0% /vmfs/volumes/<long ID>
vfat       285.8M 201.9M     83.9M  71% /vmfs/volumes/<long ID>

That loosely translates into the following symlink for /scratch (sadly, there is no mount command in ESXi):

lrwxrwxrwx 1 root root 49 Apr 2 08:15 scratch -> /vmfs/volumes/<long ID>

If the storage were on non-persistent media, /scratch would instead be linked under /tmp.
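If you do find yourself in the ramdisk situation, repointing the configured scratch location at persistent storage (rather than resizing the boot LUN and reinstalling) is also possible; VMware documents this via the ScratchConfig.ConfiguredScratchLocation advanced option.  A sketch, where the datastore name and .locker directory are placeholders to substitute for your own:

```shell
# Create a directory on persistent storage to hold scratch data
# (the .locker name is just a convention, not a requirement)
mkdir -p /vmfs/volumes/datastore1/.locker

# Repoint the configured scratch location; takes effect on the next reboot
vim-cmd hostsvc/advopt/update ScratchConfig.ConfiguredScratchLocation string /vmfs/volumes/datastore1/.locker
```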

The other method

Some prefer to use a shared datastore (e.g. via NFS) with a separate subdirectory for each host, changing the Syslog.global.logDir variable on each host to point to it.  The main goal there is to save SAN LUN space.  Personally, I prefer the logs to live with the host, as it were, on its own LUN.
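For completeness, that variable can be set from the host shell as well as from the vSphere client; a sketch, where the shared datastore path and per-host subdirectory are placeholders:

```shell
# Point syslog at a per-host directory on a shared datastore
esxcli system syslog config set --logdir=/vmfs/volumes/shared-ds/logs/esxi01

# Reload syslog so the new directory takes effect
esxcli system syslog reload
```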

About the author


I love technology, and have been involved with it for over thirty years. I'm an IT manager, and a seasoned network, storage, Unix and virtualisation guy. I love to code (mmm, sweet sweet Python), and I use django, SQLAlchemy, Eve and pytest when I'm behaving. I'm also a DJ and photographer.
