No 10GbE NICs showing in VMware ESXi 6.x with the X10SDV-7TP4F

When I started building my VMware ESXi server, I did not have a switch that could handle 10GbE SFP+.  Now that I have a Dell X1052, I figured I would cable up a 10GbE DAC and get moving.  Much to my surprise, I got a link light on the switch, but not on the motherboard.

Capture.PNG

Digging into the VMware side, I noticed that the 10GbE NICs were not available, only the 1GbE NICs.

Capture

A quick Google search brought me to a great site for VMware knowledge, tinkertry.com.  It appears that the drivers for the 10GbE NICs are not loaded.  So, following the directions there, we open an SSH console and enter the following command.

esxcli software vib install -v https://cdn.tinkertry.com/files/net-ixgbe_4.5.1-1OEM.600.0.0.2494585.vib --no-sig-check

We then reboot the host so the new VIB is loaded.
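After the reboot, you can also confirm from the SSH shell that the driver claimed the ports.  This is just a quick sanity check I would suggest; the vmnic numbering below is an assumption and may differ on your host.

```shell
# List all physical NICs; the two 10GbE ports should now appear.
esxcli network nic list

# Confirm the driver in use for one of the new ports
# (vmnic4 is a guess -- substitute the name from the list above).
esxcli network nic get -n vmnic4 | grep -i driver
```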

Capture

Lo and behold, on reboot we see the two 10GbE NICs.

Capture

Easy ESXi Patch Updates

If you are not familiar with v-front.de and you have ESXi servers, you are really doing yourself a disservice.  v-front.de maintains a repository of all ESXi updates and makes it very easy to update your servers when the newest patches come out.

Lately I have been having a series of interesting GUI crashes with ESXi, no matter what OS or browser I use.  Knowing I was a bit behind on patches, I decided to update.

From their patching site I was able to grab the latest software profile and install it.  The steps are easy, and I will detail them here.

    1. Enable the SSH shell for your ESXi host. On the Host tab, click Actions -> Services -> Enable Secure Shell (SSH).
    2. Going to the patching site and clicking on the latest update, v-front.de lists the steps to update our host.
      esxcli network firewall ruleset set -e true -r httpClient
      esxcli software profile update -p ESXi-6.5.0-20170404001-standard \
      -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml
      esxcli network firewall ruleset set -e false -r httpClient
    3. Reboot and then verify that the VIBs have updated.
      [root@esxi:~] esxcli software vib list | grep esx-base
      esx-base                       6.5.0-0.19.5310538                    VMware    VMwareCertified     2017-05-13
      [root@esxi:~] esxcli software vib list | grep esx-ui
      esx-ui                         1.18.0-5270848                        VMware    VMwareCertified     2017-05-13
      [root@esxi:~] esxcli software vib list | grep vsan
      vsan                           6.5.0-0.19.5310540                    VMware    VMwareCertified     2017-05-13
      vsanhealth                     6.5.0-0.19.5310541                    VMware    VMwareCertified     2017-05-13
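If you want to preview what a profile update will change before committing to it, `esxcli software profile update` supports a dry run.  A sketch using the same profile as above:

```shell
esxcli network firewall ruleset set -e true -r httpClient

# --dry-run reports which VIBs would be installed, updated, or removed
# without actually changing the host.
esxcli software profile update -p ESXi-6.5.0-20170404001-standard \
  -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml \
  --dry-run

esxcli network firewall ruleset set -e false -r httpClient
```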

VMware ESXi Passthrough and FreeNAS

I recently had user colicstew comment that they could not get FreeNAS 9.10.2-U3 to see drives on their LSI 2116.  Luckily, I have some unused 2TB drives and an LSI 2116 controller as part of my Xeon D lab.

IMG_0561.jpg

IMG_0560.jpg

The first step was to pass through my LSI 2116 controller and reboot the server.  You should see Enabled/Active for the LSI controller, which we do.

Snip20170511_1.png

Next I created a VM.

Snip20170511_2.png

I chose to install the OS on one of my VMware Datastores.

Snip20170511_3.png

On the Hardware Settings I set as shown.

Snip20170511_4.png

Then I clicked Add other device -> PCI Device and added the LSI controller.

Snip20170511_5.png

Finally, I added a CD-ROM drive and attached the FreeNAS ISO downloaded from here.

Snip20170511_7.png

Snip20170511_8.png

Now we boot the VM.

Snip20170511_9.png

Then we select Install on the first screen.

Snip20170511_13.png

One of the oddities I ran into while creating this tutorial: the key sequence that releases the mouse from the VMware Remote Console involves the Escape key, and in the FreeNAS installer Escape bounces you back to the initial setup screen, so the remaining pictures were taken with my phone.

On the next screen you will be warned about only having 4GB of memory.  Since I'm only using this as a test system, I am not concerned.  If you were running this as an actual file server, you would want at minimum 1GB of memory per TB of storage.
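As a rough sketch of that rule of thumb (the 8GB base figure is an assumption taken from FreeNAS's commonly cited minimum, not from this post):

```shell
#!/bin/sh
# Rule of thumb: base RAM plus at least 1GB per TB of raw storage.
BASE_GB=8      # assumed FreeNAS base recommendation
STORAGE_TB=8   # four 2TB drives, as in this build
echo "Recommended RAM: $((BASE_GB + STORAGE_TB))GB"
```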

IMG_0562.jpg

On this screen we see five drives.  One is the drive we will install the FreeNAS OS to, and the other four are the 2TB Hitachi drives I attached to the Xeon D board.

IMG_0563.jpg

Next you are warned that FreeNAS will clear the drive you are installing to.

IMG_0564.jpg

Next you are asked to set the root password.

IMG_0565.jpg

Finally you are asked to choose a BIOS Mode.  

IMG_0566.jpg

At this point, I went to install the system, but no matter what I did, it would not install the OS, failing with the error below.

Snip20170511_15.png

The problem here is that the drive that we are installing the OS to is thin provisioned.  FreeNAS does not like this.

Snip20170511_16.png

The fix is to create the disk as “Thick provisioned, eagerly zeroed”.

Snip20170511_17.png
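If you would rather not recreate the disk, an alternative (assuming shell access to the host; the VMDK path below is hypothetical) is to inflate the existing thin disk in place with vmkfstools:

```shell
# Power off the VM first.  --inflatedisk converts a thin-provisioned
# VMDK to eager-zeroed thick in place.
vmkfstools --inflatedisk /vmfs/volumes/datastore1/freenas/freenas.vmdk
```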

Once that was fixed, the OS installation continued without issue.

Snip20170511_18.png

Once completed, you will see the following success message.

Snip20170511_19.png

Then I chose option four to shut down the system.  Be sure to remove the DVD drive, or the system will continue to boot from the installer.  Once that is done, your system will work as expected.

Snip20170511_23.png

Unfortunately for troubleshooting, I had no issues getting the system up and running.  colicstew, feel free to email me via the contact method and we can see about troubleshooting this issue.  I would start with verifying that you have the correct breakout cable; I used one of the four breakout cables from the motherboard.

 

How to Enable SNMPD on ESXi 6.5

I had some difficulties enabling SNMPD from the GUI on ESXi 6.5 and kept receiving the following error:

Failed – An error occurred during host configuration

A quick search led me here.

Running the steps below, listed at the above link, allowed me to start SNMPD in the ESXi GUI without issue.

esxcli system snmp set -r
esxcli system snmp set -c YOUR_STRING
esxcli system snmp set -p 161
esxcli system snmp set -L "City, State, Country"
esxcli system snmp set -C noc@example.com
esxcli system snmp set -e yes
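As a follow-up sanity check (not part of the original fix), the configuration can be verified and the agent restarted from the shell; the community string and hostname below are placeholders:

```shell
# Show the current SNMP agent configuration.
esxcli system snmp get

# Restart the SNMP daemon so the new settings take effect.
/etc/init.d/snmpd restart

# From another machine, walk the agent to confirm it responds.
snmpwalk -v2c -c YOUR_STRING esxi-host system
```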

Software Raid in Windows and NVMe U.2 Drive Benchmark

I have recently acquired a couple of Intel 750 NVMe U.2 drives to play around with.  In order to utilize these drives, you need to source a SuperMicro AOC-SLG3-2E4 NVMe PCIe card or a similar variant.  A good write-up on the available HBAs comes from our good friends at ServeTheHome.

In my VMware Server, I have passed through the two NVMe drives into a Windows 2016 VM.  From there we launch Disk Management, and below we see the two drives.

Snip20170427_1.png

For our testing we want raw speed first, so we right-click on a drive and select “New Striped Volume”.  If we wanted redundancy, we would choose a Mirrored Volume.  If we just wanted the combined storage of the two drives to appear to the OS as one drive, we would choose a Spanned Volume.

Snip20170427_2.png

From there we make sure both of our drives are selected.

Snip20170427_3.png

We assign the drive a letter, tell the OS to format the drive, give it a name if we wish, and click Finish.
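For repeatability, the same striped volume can be sketched as a diskpart script (the disk numbers, label, and letter are assumptions for this illustration; run it with `diskpart /s stripe.txt` from an elevated prompt, and note it wipes the listed disks):

```
rem stripe.txt -- destructive: erases disks 1 and 2
select disk 1
convert dynamic
select disk 2
convert dynamic
create volume stripe disk=1,2
format fs=ntfs quick label=NVMeStripe
assign letter=S
```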

We will get a warning, as the OS will convert the disks from basic to dynamic.

Snip20170427_4.png

A note: if you receive an error message stating that there is not enough space on the drive, be sure both disks use the same partition style before you start and that the same amount of space is available on each drive.  For me, one drive was GPT and the other was Basic, resulting in a slight mismatch of available space.  Once both drives were set to GPT, the available space matched and the mirror operation could continue.

The drives will format and then appear once completed.

Snip20170427_5.png

Once formatted we can run some benchmarks against the hardware.

Snip20170428_7.png

In a RAID 0, this is essentially what we expect to see: double the reads and double the writes.

The ATTO results are also as expected.  These are some ludicrous speeds, but you need an application that can actually take advantage of them.

Snip20170428_8.png

How to Setup Swap in ESXi 6.5

Another VMware basics post; this time I will teach you how to set up swap for an ESXi host.  The swap here is for the VMkernel, and it can provide a performance boost for hosts that are heavily utilized.  For myself, I created a RAID 1 volume of two Intel DC S3710 800GB SSDs for this.

The steps to do this are easy.  First, create a datastore on your volume of choice.  It is best to use a low-latency SSD that is local to the host.

Then we go to Host -> Manage and, under the System tab, select Swap.

Snip20170417_9.png

Then we click Edit Settings and change the datastore to the one we just created.

Snip20170417_10.png

That’s it; the swap is now configured there.

If we browse to the datastore, we can see a swap file has been created.

Snip20170417_11.png
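The same check can be made from the SSH shell; the datastore name below is a placeholder for whichever one you selected:

```shell
# Host swap files are created on the chosen datastore.
ls -lh /vmfs/volumes/swap-datastore/
```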

Hardware Passthrough in VMware ESXi 6.5

VMware ESXi 6.5 continues to improve the simplicity of its platform as new releases come out.  Today I’ll show you how simple it is to pass through a hardware device to a VM.

In this case we are going to pass through the motherboard disk controller, an LSI 2116, to a VM in order to work on a disk that I will write about in a later post.

From the main ESXi home page we click on Manage and then Hardware.

Snip20170328_1.png

You will be presented with a long list of all the hardware components that your system presents to ESXi.  For our purposes we need to find the LSI 2116 Controller.  Click the checkbox next to the device and you will get a notification stating you need to reboot in order to enable the passthrough of this device.

Snip20170328_2

Snip20170328_3

Now we simply reboot the system.  Once the system is back up, we take the VM on which we wish to use the hardware.  Simply edit the settings on the VM, click Add other device, and select PCI Device.

Snip20170328_5

Then we choose our device from the drop-down.  For us, since we only have this one device passed through, it is the only choice.  Save the configuration and you will clearly see the device added.

Snip20170328_7.png
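Under the hood, the passthrough is recorded in the VM’s .vmx file as pciPassthru entries.  A representative fragment (the address and IDs shown are hypothetical, not taken from this system):

```
pciPassthru0.present = "TRUE"
pciPassthru0.id = "0000:03:00.0"
pciPassthru0.deviceId = "0x0097"
pciPassthru0.vendorId = "0x1000"
```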

In our case we have added a 10K disk to the storage controller; it has a unique setup on it, but more on that later.  Once we’ve booted an OS, or in our case the CentOS 7 installer, I am happy to see that the disk shows up.

Snip20170328_8.png