ESXi 6 Home Lab

Building a Home Lab – Back to the Future!

In my current role, I don’t do much Systems Engineering work, and frankly it’s boring not to.

I’ve decided that I need to keep my skills up and I’m working on rebuilding in my home lab what we had with the old team.

Compiling a list, this is what I need to build and what I hope to blog about.  Simple how-to guides are prevalent across the web, so I’ll blog about the pain points I encounter.

  1.  System Center Operations Manager 2016
  2.  System Center Virtual Machine Manager 2016 (with VMware ESXi and vSphere substituted)
  3.  Windows Server Update Services
  4.  Windows Deployment Services
  5.  Active Directory
  6.  Windows DNS
  7.  Puppet
  8.  Foreman (though not technically in our old environment, I would like to use it for *nix deployments)
  9.  SQL Clustering and Replication
  10.  Virtual Desktop Infrastructure

My Active Directory, Windows DNS, Foreman, and Puppet installations are already complete.  My vSphere and ESXi systems are complete as well.

My entire infrastructure runs as VMs on an ESXi server built from the following:


SUPERMICRO MBD-X10SRL-F Server Motherboard LGA 2011 R3

Intel Xeon E5-2609 V4 1.7 GHz 20MB L3 Cache LGA 2011 85W BX80660E52609V4 Server Processor

8X SAMSUNG 16GB 288-Pin DDR4 SDRAM ECC Registered DDR4 2133 (PC4 17000) Server Memory Model M393A2G40DB0-CPB


Intel RS3DC080 PCI-Express 3.0 x8 Low Profile Ready SATA / SAS Controller Card

Intel 750 Series AIC 400GB PCI-Express 3.0 x4 MLC Internal Solid State Drive (SSD) SSDPEDMW400G4X1

4X HGST HITACHI Ultrastar SSD400M 400GB SAS 6GB SSD 2.5″ HUSML4040ASS600 (SSD DataStore)


Intel Ethernet Converged Network Adapter X540-T2

Intel Ethernet Server Adapter I350-F4

MikroTik CRS125-24G-1S-IN Cloud Router Switch, 24x 10/100/1000 Mbit/s Gigabit Ethernet with Auto-MDI/X, fully manageable Layer 3, RouterOS v6, Level 5 license

ASUS XG-U2008 Unmanaged 2-port 10G and 8-port Gigabit Switch


  • Both SSD datastores are running in RAID 5
  • All Parts except the Intel 750 SSD, Motherboard, and CPU were purchased on eBay.   I saved hundreds by doing that.  DDR4 ECC is dirt cheap on eBay right now.
  • I will run out of storage space and memory long before I run out of CPU
    • To up the memory I’ll need to purchase 32GB DIMMs, as I am maxed out with 16GB DIMMs
  • The Mikrotik Switch Handles 1G Traffic
  • The Asus Switch Handles 10G Traffic between my NAS and my Vmware Server
    • Currently the 10G NIC is direct-connected to one of my VMs, which allows me to transfer data over the 10G NICs
  • The storage breakdown is below.  NappItDataStore is my NAS; it holds ISO files for OS installs and other installers.  All told, VM storage is only 2.8TB.  I would love to consolidate down to 4x 1.2TB NVMe SSDs and currently have my eye on this.  At $500 a drive, it’s too expensive.  Hopefully these drives will start to come down on the used market.  At that point I could eliminate the Intel RAID card and go with all PCIe SSDs.
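For reference, the RAID 5 math on the SSD datastore is simple. A minimal sketch, assuming four 400GB drives, of how usable capacity works out (one drive’s worth of space goes to parity):

```shell
# RAID 5 usable capacity = (N - 1) x drive size; one drive's worth goes to parity.
n=4          # number of 400GB SAS SSDs in the array
size_gb=400
echo "usable: $(( (n - 1) * size_gb )) GB"   # prints "usable: 1200 GB"
```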





Puppet Agent on Windows 2016

In a past job, I worked with another Senior Engineer who was big on Puppet.  Before I left to take another position, I was starting to learn it.  I continue to try to find time to improve my skills, and now that I have a working test lab I started to dig into it again.  That coworker was mainly a Linux guy; I started to dig into Puppet on Windows, but I never got too far.


I’ve started to get my lab, for lack of a better term, Puppetified.  All nodes went pretty easily once I started to remember what I had done.  I’ve been able to get most of my Windows and Linux nodes online.  I have yet to get my OmniOS host online.

Quick tips

To get an Agent Running

  1. Test -> puppet agent -t
  2. List and Sign the Cert on the Master – puppet cert list / puppet cert sign <hostname>
  3. Test Again -> puppet agent -t
  4. Verify on Foreman (Optional)

To Install on Windows

  1.  Be sure the agent version you are using is not newer than your puppetmaster instance can handle.  Otherwise you will receive an error similar to “puppet the environment must be purely alphanumeric not ‘puppet-ca’”
  2. Find a working node (Linux in my case) and run “puppet agent --version” to see what the master is running
  3. Uninstall previous failed puppet installations
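The version mismatch in step 1 is easy to check up front. A quick sketch, using GNU sort -V and hypothetical version numbers, of comparing the agent you are about to install against the master:

```shell
# Compare agent vs. master versions with a natural version sort (GNU sort -V).
master="3.7.2"      # hypothetical puppetmaster version
agent="4.10.12"     # hypothetical agent you are about to install
newest=$(printf '%s\n' "$master" "$agent" | sort -V | tail -n 1)
if [ "$newest" = "$agent" ] && [ "$agent" != "$master" ]; then
  echo "agent ($agent) is newer than master ($master) - pick an older agent"
else
  echo "agent version is fine"
fi
```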

General Tips

  1.  Make sure you have a CNAME of puppet.domain to your puppetmaster server.
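The record itself looks something like the sketch below, assuming a hypothetical lab.example zone and a master host named puppetmaster:

```
; in the lab.example zone file
puppet    IN  CNAME  puppetmaster.lab.example.
```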

Puppet on Windows Server 2016

Windows Server 2016 is currently not supported, but the install can work.  I have yet to test additional functionality but I will soon.

  •  Install the correct puppet agent.  In my case this was 3.7.2
  • After trying to click on the Puppet folder in the Start menu and finding I was unable to, I decided to pull up the command prompt.
  •   CD to the bin folder for Puppet
    •  cd “C:\Program Files\Puppet Labs\Puppet\bin”
  • Execute the puppet_shell bat file
    • puppet_shell.bat
  • Run your typical agent install steps




Benchmark Time!

I’ve been slacking a bit with getting Benchmarks out, but I’ve done some simple ones.

This is the current hardware and software

6X HGST 6TB He Drives


SuperMicro X10SDV-4C-TLN4F Motherboard

LSI 9211-8i HBA

Supermicro SSD-DM064-PHI 64GB SATA DOM (Disk on Module)

Fractal Node 804 Case

Intel S3710 200GB SSD (SLOG)

Intel X540-T2 10G Controller

Napp-It ZFS appliance v. 16.08.03 Pro Aug.02.2016

OmniOS 5.11 omnios-r151018-95eaa7e June 2016

In general the results are what I hoped for.  I wanted to max out a single 1G link or 10G link.
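For context, the theoretical ceilings I was aiming at work out as below, ignoring protocol overhead:

```shell
# Line-rate ceiling in MB/s: (gigabits x 1000) / 8 bits per byte, overhead ignored.
for gbps in 1 10; do
  echo "${gbps}G link: $(( gbps * 1000 / 8 )) MB/s max"
done
```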

1G Connection via ASUS XG-U2008 Switch and Intel I219V 10 Threads Overlapped IO


10G Connection via ASUS XG-U2008 Switch and Intel x540-T2 10 Threads Overlapped IO


1G Connection via ASUS XG-U2008 Switch and Intel I219V IO Comparison


10G Connection via ASUS XG-U2008 Switch and Intel x540-T2 IO Comparison


The 10G speeds aren’t as consistent as I’d like but they are definitely pretty good.  Overall I’m happy with the build.


Fun with Nessus

Hat tip to LifeHacker for posting about Nessus.

I figured I’d give it a go on my network, and see what could be found.  I’ve worked in a previous life with Security Remediation, and while not fun it’s a necessary evil now.  I do this for fun, and I know as my lovely fiancee tells me repeatedly, I am a giant nerd.

The download and installation was simple enough, and I spent more time downloading and installing Windows Server 2016.  I’m hoping to have a Foreman Installation up and running soon that will automate my Windows Installs.  It’s easy enough to template a Windows 2016 VM with vSphere, but I’d like to learn another skill with Foreman and Puppet.

Installation was straightforward

  1.  Create an Administration Account
  2. Enter your key.  Hint: remember to put the dashes in, as the GUI does not add them for you.
  3. Download the Nessus definitions and initialize; note this takes a while.


It took a good twenty minutes between download and initialization.

Pretty simple: select New Scan and pick one.  I chose Basic Scan, told it to hit my entire subnet, and away it went.


The scanner works pretty quickly, and by clicking on the name you can see the results in real time.  I do have a fair amount of vulnerabilities to look into, some of which I know about and some of which I didn’t.  It’d be a fun exercise to try to clean and secure my environment.


Some are also useless.  For example, the below Warning popped up.  All the devices are Routers or other networking gear so you would expect that to be the case.  I guess it’s good to have if the device turns out to be a server.  It’s also nice to see that Nessus tells you how to fix the vulnerability.


Sometimes Nessus does its best but fails pretty miserably.  I’ll give it credit that the device below is indeed a printer.


Total scan time for about 30 devices, a mix of Linux, Solaris, and Windows plus other network gear, was about 30 minutes.



OmniOS and getting SmartmonTools to Work

I spent a couple of days, off and on, trying to get smartmontools to work on OmniOS.  I saw some conflicting info on whether it should work out of the box; for me, at least, it did not.  Below is what I did to get it working.

Don’t bother building from scratch.  Add the below repository and install the package.

pkg set-publisher -O uulm.mawi
pkg search -pr smartmontools
pkg install smartmontools
root@OmniOS:/root# pkg info -r smartmontools
          Name: system/storage/smartmontools
       Summary: Control and monitor storage systems using SMART
         State: Installed
     Publisher: uulm.mawi
       Version: 6.3
        Branch: 0.151012
Packaging Date: Mon Sep 29 13:22:53 2014
          Size: 1.83 MB
          FMRI: pkg://uulm.mawi/system/storage/smartmontools@6.3-0.151012:20140929T132253Z

The smartmontools file in /etc/default/smartmontools might have been created automatically, though I’m not sure whether it was left over from previous attempts.  Either way, below is what you need.

# Defaults for smartmontools initscript (/etc/init.d/smartmontools)
# This is a POSIX shell fragment

# List of devices you want to explicitly enable S.M.A.R.T. for
# Not needed (and not recommended) if the device is monitored by smartd
#enable_smart="/dev/hda /dev/hdb"

# uncomment to start smartd on system startup

# uncomment to pass additional options to smartd on startup

Then add your disks to /etc/smartd.conf.  Mine looked like the below; yours will differ.

/dev/rdsk/c1t5000CCA232C02D87d0 -a -d sat,12
/dev/rdsk/c1t5000CCA232C0D31Bd0 -a -d sat,12 
/dev/rdsk/c1t5000CCA232C0EA80d0 -a -d sat,12 
/dev/rdsk/c1t5000CCA232C1543Cd0 -a -d sat,12 
/dev/rdsk/c1t5000CCA232C0AD56d0 -a -d sat,12 
/dev/rdsk/c1t5000CCA232C0BBA6d0 -a -d sat,12 
/dev/rdsk/c1t5000039FF4CF3EA6d0 -a -d sat,12 
/dev/rdsk/c1t5000039FF4E58676d0 -a -d sat,12 
/dev/rdsk/c3t5d0 -a -d sat,12
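If you’d rather not type the device paths by hand, smartctl --scan can list them, and a one-liner can append the options. A sketch with sample lines standing in for the real scan output, which is machine-specific:

```shell
# Turn `smartctl --scan`-style lines into smartd.conf entries.
# The two sample lines below stand in for the real scan output on this box.
printf '%s\n' \
  '/dev/rdsk/c1t5000CCA232C02D87d0 -d scsi' \
  '/dev/rdsk/c3t5d0 -d scsi' |
awk '{print $1 " -a -d sat,12"}'
```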

If everything works you will see the below.  I do have an issue where one of my disks is repeatedly parking loudly; I’ll need to do some research to see why.  It is good to know that all my disks are healthy.



OmniOS and a Misunderstanding

I decided to go with OmniOS as it seems to be the OS everyone recommends for ZFS and napp-it, and it works with my 10G networking.  Installation went smoothly and without issues.

Unfortunately, that was the only thing that went well.  No configuration would allow me to get a 10G write above 180-190MB/s which is just not acceptable.  I would have expected a greater than 60% increase in speed.


Then it dawned on me: I was being limited by the speed of the media I was copying my files from (a USB 3.0 drive).  In practice, a SATA-interfaced drive is only going to hit 160-190MB/s.

So I copied a file locally and witnessed the same speeds.  Then I copied that file from a local drive on a VM to the mounted SMB share, and boom, there were my speeds, topping out at 400+MB/s.
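The back-of-the-envelope math makes the bottleneck obvious. A sketch of copy times for a hypothetical 50GB file at the two observed rates:

```shell
# Seconds to copy 50GB (51200 MB) at each observed throughput.
size_mb=51200
for rate in 190 400; do
  echo "${rate} MB/s: $(( size_mb / rate )) seconds"
done
```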


OpenIndiana Install – Take Two

I decided to retry the OpenIndiana install, and this time I re-downloaded the ISO.  Surprisingly, the ISO booted quickly and without issue.

The first issue I ran into was that the OpenIndiana text installer only saw the 6TB drives I had installed.  To get the installer to see the SATA DOM, I needed to remove the HBA cables and re-run the installer.  As you can see, the installer then saw the 64GB SATA DOM and the 200GB Intel SSD.


After configuring the network and setting the usernames and passwords, the installer did its thing.


After a quick install, the system was rebooted and the HBA drives reconnected.

As for configuration, I followed directions from one of my favorite server/hardware sites, ServeTheHome.

  • Simply launch a Terminal in OpenIndiana, then type su
  • Next, launch the installer command: wget -O - | perl
  • Now you can sit back and watch the magic.

The install went without issue, but OpenIndiana currently does not have drivers for the X557 Intel NICs built into the SuperMicro X10SDV-4C-TLN4F motherboard.  Testing on 1G links was similar to what we saw in FreeNAS.  Currently, FreeNAS is the only OS I know of with X557 NIC support.



NAS OS Install – A Journey in Patience


I’m in the process of setting up the OS for the new NAS server, and I have to say, the OpenIndiana installer is slow.  I waited 10 minutes after seeing the Sun copyright banner, then 15, then 20, and eventually 30 minutes.  At that point I downloaded FreeNAS 9.10.1 and prepared to mount it via the IPMI on the SuperMicro Xeon D board.  But lo and behold, we had movement!  Then we stalled again.  I was starting to have flashbacks of Solaris on Sun hardware with software RAID on 16-drive storage arrays running early versions of ZFS.

Then I got smart, or so I thought.  I removed the USB expander and plugged the USB drive directly into the motherboard.  Unfortunately, it didn’t do a thing.


So instead, I decided to mount an ISO via the IPMI and go with FreeNAS.  Everything was going well.


Okay, we are making progress.


After logging in and doing some basic setup, I created the first Volume.


Speed tests were as expected: one mirror gives single-disk performance, and two mirrors give two-disk performance.
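That scaling is just vdev count times single-disk throughput. A sketch, assuming a hypothetical 180MB/s sequential rate per drive (not a measured figure):

```shell
# Striped mirrors scale sequential throughput roughly with vdev count.
disk_mbs=180   # assumed single-disk sequential rate, for illustration only
for vdevs in 1 2 3; do
  echo "${vdevs} mirror vdev(s): ~$(( vdevs * disk_mbs )) MB/s"
done
```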

I still plan on installing OpenIndiana on this machine in the future and benchmarking it as well.  My 10G switch and new card have arrived, so I’ll be sure to put those benchmarks up too.


NAS Hiccup

As I was starting the NAS build, I noticed an issue: I had ordered the wrong 10G NIC for my motherboard.  My motherboard uses 10G RJ45, and I ordered a NIC that uses SFPs.  I also had to add a larger SATA DOM, as OmniOS and napp-it need the space.  I decided to add a SLOG drive even though it isn’t needed for just streaming; if I decide to do iSCSI or NFS for VMware, it will be useful.

The final build is below.  This won’t stop me from continuing the build, but it does prevent 10G tests for now.

6X HGST 6TB He Drives


SuperMicro X10SDV-4C-TLN4F Motherboard

LSI 9211-8i HBA

Supermicro SSD-DM064-PHI 64GB SATA DOM (Disk on Module)

Fractal Node 804 Case

Intel S3710 200GB SSD (SLOG)

Intel X540-T2 10G Controller


Okay what happened with the NAS Server?

I had this question asked earlier today, and the plain answer is “Stuff”.

I decided that I needed more expandability in the server, and since I was using a Fractal Node 804 as my gaming case, I swapped the gaming board and components into a Fractal Node 304.  That frees the 804 for the NAS, giving me up to eight 3.5in HDDs and a lot more real estate.

A couple of notes on the Fractal Node 304 case:

  1.  Cable management is a pain.  See the bundle of cables by the back of the GPU?  That’s what I mean.  There is only room there because I don’t have the HDD sleds in.
  2. The PSU design is smart, but it’s a problem if you need to access it.  Case in point: I didn’t flip the PSU switch before powering on and had to remove the cover to flip it.

So what am I running in my gaming box and what games do I play?

It’s a rather simple build running Windows 10 to play No Man’s Sky.