How to Set Up Swap in ESXi 6.5

Another VMware basics post, and this time I will teach you how to set up swap for an ESXi host.  The swap here is for the VMkernel, and it can basically provide a performance boost for hosts that are heavily utilized.  For myself, I created a RAID 1 volume of two Intel DC S3710 800GB SSDs for this.

The steps to do this are easy.  First, create a datastore on your volume of choice.  It is best to use a low-latency SSD that is local to the host.

Then we go to Host -> Manage and, under the System tab, select Swap.


Then we click Edit Settings and change the datastore to the one we just created.


That’s it, now we have the swap configured there.

If we browse to the datastore, we can see that a swap file has been created.

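For reference, the same setting can also be applied from the ESXi shell with esxcli (assuming SSH or the ESXi Shell is enabled on the host).  This is only a minimal sketch, and the datastore name below is just an example, so substitute the name of the datastore you created.

# Check the current system swap configuration
esxcli sched swap system get

# Point system swap at the SSD-backed datastore (name is an example)
esxcli sched swap system set --datastore-enabled true --datastore-name SSD-Swap

# Verify that a swap file was created on the datastore
ls -lh /vmfs/volumes/SSD-Swap/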

How to Add a Physical NIC to a Virtual Switch

With my brand new VMware ESXi installation, it takes some time to configure everything to your liking and to best practices.  ESXi 6.5 is not shy about warning you when you do not have redundancy or when something is not configured as it should be.

With my setup I have two 1GbE NICs, but ESXi only configured one NIC since the second was not active.  Now that it is cabled and active, let’s add it to the virtual switch.

At the main ESXi home page, click on Networking and verify that each NIC you wish to use shows as Enabled and reports the correct link speed for the adapter.

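If you prefer to check this from the ESXi shell (again assuming SSH or the ESXi Shell is enabled), the same information is available with esxcli:

# List physical NICs with link status, speed, duplex, and driver
esxcli network nic list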

For my installation the only two NICs we worry about are the first two, as vmnic2 corresponds to a 10GbE Mellanox card that has yet to be configured.

If we click on vSwitch0 we notice that VMware has a warning!


If we click Actions we can choose “Add Uplink”.


We can see that VMware has added vmnic1.  Now wasn’t that simple?  No digging for the correct setting, just nice and easy.


Once we save and refresh the view, we see that the warning is gone and that both adapters are now configured on the switch.


If we look at the Monitor tab under the VM Network, we can see in the Events that the warning has cleared, and why.

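The same change can also be made from the command line.  This is a minimal sketch assuming the default vSwitch0 and that vmnic1 is the NIC being added:

# Add vmnic1 as an uplink to vSwitch0
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch0

# If the new NIC does not show as active, add it to the failover order as well
esxcli network vswitch standard policy failover set --active-uplinks vmnic0,vmnic1 --vswitch-name vSwitch0

# Confirm that both uplinks now show on the switch
esxcli network vswitch standard list --vswitch-name vSwitch0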

ESXi 6.5 Install on SuperMicro Xeon D Hardware

The purpose of this guide is very simple: show an ESXi install from start to finish and help me remember how I did the install if I need to do it again.  Hopefully it will help guide some first-timers as well.

Our hardware is as follows:

SuperMicro X10SDV-7TP4F

4X Intel DC S3710 400GB SATA SSD

1X SuperMicro 64GB SATA DOM

Mellanox ConnectX-2 10GbE Card

64GB DDR4 ECC Memory

For me it’s simple, as I have a SuperMicro motherboard with console redirection.  I can just mount the ISO via the console instead of having to create a USB installer with Rufus, or so I thought.  Java is a bit screwy on Sierra, but I had an old VMware USB installer, so I chose to use that along with the iKVM client in Chrome.


The F function keys also don’t work all that well, so I had to use the virtual keyboard, which allowed me to boot from the USB drive.

Once the installer loads, you will be prompted to hit Enter to continue.


Press F11 to accept the EULA and continue.

The installer will scan for drives to install to and then present you with your options.


For my purposes I will be using the SATA SSD, which happens to be a SuperMicro 64GB SATA DOM.  The NVMe drive will be used to host the vSphere and vRealize VMs.  The Intel DC S3710 400GB drives will be used to host VMs.  I plan on testing software RAID for these drives.  The issue will be managing VM start order: if the VM that hosts the software RAID starts more slowly than the VMs that use it for storage, those VMs will never start up.

Next we select the keyboard layout.


Then we set a root password.  Do not forget this.  If you do, it can be difficult to get the system working again without a full reinstall.


The installer will probe for hardware.

Now it will confirm with you that this is indeed the drive you wish to install on, and it will proceed with the install once you hit F11.  Take a little break and let VMware install.  For us this is version 6.0, not 6.5, which is the current version.  We will walk through upgrading to the latest version later.


For us the installation took less than two minutes.

If all goes well, upon reboot you will see the ESXi console screen.


That’s it!  You have a working VMware ESXi 6.0 installation.  In our next post, we will work on upgrading VMware ESXi to version 6.5.
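As a preview, the upgrade can also be done straight from the ESXi shell against VMware’s online depot.  This is only a sketch; the image profile name below is a placeholder, so list the available profiles first and substitute the 6.5 profile you actually want.

# Allow the host to reach the online depot over HTTP/HTTPS
esxcli network firewall ruleset set -e true -r httpClient

# List the image profiles available in the depot, then pick a 6.5 profile
esxcli software sources profile list -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml

# Upgrade to the chosen profile (placeholder name shown) and reboot afterwards
esxcli software profile update -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml -p ESXi-6.5.0-XXXXXXXX-standard
reboot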

Building a Home Lab – Back to the Future!

In my current role, I don’t do much systems engineering work, and frankly it’s boring not to.

I’ve decided that I need to keep my skills up, so I’m working on rebuilding in my home lab what we had with the old team.

Compiling a list, this is what I need to build and what I hope to blog about.  I find that simple how-to guides are prevalent across the web, so I’ll blog about the pain points I encounter.

  1.  System Center Operations Manager 2016
  2.  System Center Virtual Machine Manager 2016 (substituting VMware ESXi and vSphere)
  3.  Windows Server Update Services
  4.  Windows Deployment Services
  5.  Active Directory
  6.  Windows DNS
  7.  Puppet
  8.  Foreman (though not technically in our old environment, I would like to use it for *nix deployments)
  9.  SQL Clustering and Replication
  10.  Virtual Desktop Infrastructure

My Active Directory, Windows DNS, Foreman, and Puppet installations are already complete.  My vSphere and ESXi systems are complete as well.

My entire infrastructure is running as VMs off a single ESXi server built on the following hardware.

Hardware

SUPERMICRO MBD-X10SRL-F Server Motherboard LGA 2011 R3

Intel Xeon E5-2609 V4 1.7 GHz 20MB L3 Cache LGA 2011 85W BX80660E52609V4 Server Processor

8X SAMSUNG 16GB 288-Pin DDR4 SDRAM ECC Registered DDR4 2133 (PC4 17000) Server Memory Model M393A2G40DB0-CPB

SUPERMICRO SSD-DM064-PHI SATA DOM (SuperDOM) Solutions

Intel RS3DC080 PCI-Express 3.0 x8 Low Profile Ready SATA / SAS Controller Card

Intel 750 Series AIC 400GB PCI-Express 3.0 x4 MLC Internal Solid State Drive (SSD) SSDPEDMW400G4X1

4X HGST Hitachi Ultrastar SSD400M 400GB SAS 6Gb/s SSD 2.5″ HUSML4040ASS600 (SSD DataStore)

4X Samsung SM843T 480GB SATA3 2.5″ Enterprise SSD (SSD DataStore 2)

Intel Ethernet Converged Network Adapter X540-T2

Intel Ethernet Server Adapter I350-F4

MikroTik CRS125-24G-1S-IN Cloud Router Switch, 24x 10/100/1000 Mbit/s Gigabit Ethernet with Auto-MDI/X, fully manageable Layer 3, RouterOS v6, Level 5 license.

ASUS XG-U2008 Unmanaged 2-port 10G and 8-port Gigabit Switch

Notes

  • Both SSD datastores are running in RAID 5
  • All parts except the Intel 750 SSD, motherboard, and CPU were purchased on eBay.  I saved hundreds by doing that.  DDR4 ECC is dirt cheap on eBay right now.
  • I will run out of storage space and memory long before I run out of CPU
    • To up the memory I’ll need to purchase 32GB DIMMs, as I am maxed out with 16GB DIMMs
  • The MikroTik switch handles 1G traffic
  • The ASUS switch handles 10G traffic between my NAS and my VMware server
    • Currently the 10G NIC is directly connected to one of my VMs, which allows me to transfer data over the 10G NICs
  • Breaking down the storage: NappItDataStore is my NAS, and it holds ISO files for OS installs and other installers.  All told, VM storage is only 2.8TB.  I would love to consolidate down to 4X 1.2TB NVMe SSDs and currently have my eye on this.  At $500 a drive, it’s too expensive.  Hopefully these drives will start to come down on the used market.  At that point I could eliminate the Intel RAID card and go with all PCIe SSDs.
