No 10GbE NICs Showing in VMware ESXi 6.x with the X10SDV-7TP4F

When I started building my VMware ESXi server, I did not have a switch that could handle 10GbE SFP+.  Now that I have a Dell X1052, I figured I would cable up a 10GbE DAC and get moving.  Much to my surprise, I received a link light on the switch, but not on the motherboard.


Digging into the VMware side, I noticed that the 10GbE NICs were not available, only the 1GbE NICs.


A quick Google search brought me to a great site for VMware knowledge, tinkertry.com.  It appears that the drivers for the 10GbE NICs are not loaded.  So, following the directions here, we open the SSH console and enter the following command.

esxcli software vib install -v https://cdn.tinkertry.com/files/net-ixgbe_4.5.1-1OEM.600.0.0.2494585.vib --no-sig-check

We then reboot our host so the new VIB is loaded.


Lo and behold, on reboot we see the two 10GbE NICs.

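You can also confirm from the command line over SSH; a quick sketch (the vmnic numbering on your host may differ):

      # List the NICs ESXi has loaded a driver for; the 10GbE ports should
      # now show up as additional vmnics using the ixgbe driver.
      esxcli network nic list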


Easy ESXi Patch Updates

If you are not familiar with v-front.de and you have ESXi servers, you are really doing yourself a disservice.  v-front.de maintains a repository of all ESXi updates and makes it very easy for you to update your servers when the newest patches come out.

Lately I have been having a series of interesting GUI crashes with ESXi, no matter what OS or browser I use.  Knowing I was a bit behind on patches, I decided to update.

From their patching site I was able to grab the latest software profile and install it.  The steps to do this are easy, and I will detail them here.

    1. Enable the SSH shell for your ESXi host. At the Host tab, click Actions -> Services -> Enable Secure Shell (SSH).
    2. Go to the patching site and click on the latest update; v-front.de lists the steps to update our host.
      esxcli network firewall ruleset set -e true -r httpClient
      esxcli software profile update -p ESXi-6.5.0-20170404001-standard \
      -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml
      esxcli network firewall ruleset set -e false -r httpClient
    3. Reboot and then verify that the VIBs have updated; we can also confirm the active image profile, as shown after this list.
      [root@esxi:~] esxcli software vib list | grep esx-base
      esx-base                       6.5.0-0.19.5310538                    VMware    VMwareCertified     2017-05-13
      [root@esxi:~] esxcli software vib list | grep esx-ui
      esx-ui                         1.18.0-5270848                        VMware    VMwareCertified     2017-05-13
      [root@esxi:~] esxcli software vib list | grep vsan
      vsan                           6.5.0-0.19.5310540                    VMware    VMwareCertified     2017-05-13
      vsanhealth                     6.5.0-0.19.5310541                    VMware    VMwareCertified     2017-05-13
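
If you also want to double-check which image profile the host is now running, esxcli can report it; a quick sketch:

      # Show the image profile currently applied to this host
      esxcli software profile get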

VMware ESXi Passthrough and FreeNAS

I recently had user colicstew comment that they could not get FreeNAS 9.10.2 U3 to see drives on their LSI 2116.  Luckily, I have some unused 2TB drives and an LSI 2116 controller as part of my Xeon D lab.


The first step was to pass through my LSI 2116 controller and reboot the server.  You should see Enabled/Active for the LSI controller, which we do.


Next I created a VM.

I chose to install the OS on one of my VMware Datastores.


On the hardware settings screen, I configured the VM as shown.


Then I clicked Add other device -> PCI Device and added the LSI controller.

Finally, I added a CD-ROM drive and attached the FreeNAS ISO downloaded from here.


Then we boot the VM.


Then we select Install at the first screen.


One of the oddities I ran into while creating this tutorial is that the key sequence used to release the mouse from the VMware Remote Console includes the Escape key, which the FreeNAS installer treats as a command to bounce you back to the initial setup screen, so the remaining pictures were taken with my phone.

On the next screen you will be warned about only having 4GB of memory.  Since I’m only using this as a test system, I am not concerned.  If you were running this as an actual file server, you would want a minimum of 1GB of memory per TB of storage.

On this screen we see five drives.  One is the drive we will install the FreeNAS OS to, and the other four are the 2TB Hitachi drives I attached to the Xeon D board.


Next you are warned that FreeNAS will clear the drive you are installing to.


Next you are asked to set the root password.


Finally you are asked to choose a BIOS Mode.  


At this point I went to install the system, but no matter what I did, it would not install the OS, failing with the error below.


The problem here is that the drive that we are installing the OS to is thin provisioned.  FreeNAS does not like this.


The fix is to recreate the disk as “Thick provisioned, eagerly zeroed”.

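Alternatively, if you would rather not delete and recreate the disk, you should be able to inflate the existing thin VMDK to eager-zeroed thick from the ESXi shell with vmkfstools; a sketch, with a placeholder datastore and VM name, and with the VM powered off:

      # Inflate a thin-provisioned disk to eagerzeroedthick in place
      # (point it at the descriptor .vmdk, not the -flat.vmdk file)
      vmkfstools --inflatedisk /vmfs/volumes/datastore1/freenas/freenas.vmdk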

Once that was fixed, the OS installation continued without issue.


Once completed, you will see the following success message.

Then I chose option four to shut down the system.  Be sure to remove the DVD drive, or the system will continue to boot from the installer.  Once that is complete, your system will work as expected.


Unfortunately, I had no issues getting the system up and running.  colicstew, feel free to email me via the contact method and we can see about troubleshooting this issue.  I would start with verifying that you have the correct breakout cable; I used one of the four breakout cables from the motherboard.

 

How to Set Up Swap in ESXi 6.5

Another VMware basics post, and this time I will teach you how to set up swap for an ESXi host.  The swap here is for the VMkernel, and it can basically create a performance boost for hosts that are heavily utilized.  For myself, I created a RAID 1 volume of two Intel DC3710 800GB SSDs for this.

The steps to do this are easy.  First, create a datastore on your volume of choice.  It is best to use a low-latency SSD that is local to the host.

Then we go to Host -> Manage and, under the System tab, select Swap.


Then we click Edit Settings and change the datastore to the one we just created.


That’s it; the host swap is now configured there.

If we browse to the datastore, we can see that a swap file has been created.

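The same setting can also be made and verified from the ESXi shell; a quick sketch, assuming the new datastore is named SwapDS (append --help if the option names differ on your build):

      # Point host swap at the new datastore and enable it
      esxcli sched swap system set --datastore-enabled true --datastore-name SwapDS
      # Confirm the current host swap configuration
      esxcli sched swap system get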

Cannot Create Datastore on VMware ESXi 6.5

I have been in the process of rebuilding my Plex server and restoring the data.  I popped in four 4TB WD Red NAS drives and attached them to my 12G controller.  I figured it would be easy to create a datastore across the RAID 5 volume and continue on my way.  Unfortunately that was not the case, and I kept receiving errors similar to the one seen below.

While rather annoying, what I believe happens is that the drives had a previous partition table on them that ESXi just cannot read or write to.  So what we have to do is some configuration in the ESXi shell.

First we need to enable SSH on this host, which is very simple.  At the Host tab, click Actions -> Services -> Enable Secure Shell (SSH).  ESXi will enable the service and pop up a reminder warning for you.


Next we SSH into our ESXi Host.


We then need to determine the disk ID of the device we wish to fix.  For us this is easy, as ESXi appends it to the name of the device under Storage -> Devices.

From there we go back to the shell and cd to /dev/disks, where we should see a bunch of disk IDs.

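Before wiping anything, it is worth checking what partition table is currently on the disk; a quick sketch, with a placeholder disk ID:

      # Show the current partition label and any partitions on the device
      partedUtil getptbl /dev/disks/naa.xxxxxxxxxxxxxxxx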

Then we run the following, substituting the disk ID we noted earlier:

      partedUtil mklabel /dev/disks/diskid msdos


Then we run through the steps to create the new datastore, and after a bit of a wait we should see the new datastore created.


So there we have it: we have our Plex datastore, and I can create my new disk and start sharing my media, much to my fiancée’s enjoyment.  She has really missed her Power Rangers.  Also, don’t forget to turn off SSH access for safety’s sake.

 

 

Hardware Passthrough in VMware ESXi 6.5

VMware ESXi 6.5 continues to improve the simplicity of using its platform as new versions come out.  Today I’ll show you how simple it is to pass through a hardware device to a VM.

In this case we are going to pass through the motherboard disk controller, an LSI 2116, to a VM in order to work on a disk that I will write about in a later post.

From the main ESXi home page we click on Manage and then Hardware.


You will be presented with a long list of all the hardware components that your system exposes to ESXi.  For our purposes we need to find the LSI 2116 controller.  Click the checkbox next to the device and you will get a notification stating that you need to reboot in order to enable passthrough of this device.

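If you like to double-check from the shell, the PCI device list will show the controller along with details such as whether it is passthrough capable; a quick sketch (field names can vary by build):

      # List all PCI devices ESXi sees; find the LSI entry and note its
      # address and Passthru Capable field
      esxcli hardware pci list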

Now we simply reboot the system.  Once the system is back up, we take the VM we wish to use the hardware with, edit the VM’s settings, click Add other device, and select PCI Device.


Then we choose our device from the drop-down.  For us, since we only have this one device passed through, that is the only choice.  Save the configuration and you will clearly see the device added.


In our case we have added a 10K disk with a unique setup on it to the storage controller, but more on that later.  Once we’ve booted an OS, in our case the CentOS 7 installer, I am happy to see that the disk shows up.


How to Configure your Datastores in ESXi 6.5

Another basics post for those of us who have not had the experience of installing and configuring ESXi 6.5.  Today we will configure our datastores.

Once we have logged into our ESXi client web page, click on Storage and then the Datastores tab.


The initial datastore is the media that you installed your ESXi “OS” on; for me it was a 64GB SATADOM.  I do not wish to place any VMs on this, but I do have an NVMe drive installed in the machine that I want my vSphere and vRealize components to live on.  The thinking is that if the NVMe drive fails, I will have backups of those VMs and will be able to purchase and install a new drive, then restore the VMs.

To set up the new datastore, click New datastore, select Create new VMFS datastore, and click Next.

Then we would select our device to install the datastore on and continue on our merry way.  Wait, though: does anyone see the NVMe drive?  Why isn’t it there?


Now, it’s possible that this is a quirk of the new install, or it’s a quirk of the HTML5 client.  I have noticed, at least with vSphere in my job, that you still have to fall back to the “fat” client from time to time.  In this case we do not.

If we click on the Devices tab and highlight the NVMe drive, we get the option to select New datastore.


When we click that, we get the option to name our datastore, partition it, and then complete the steps.  It seems to be just a shortened version of the datastore creation wizard, and the wizard on the Datastores tab apparently doesn’t have the necessary code to recognize the NVMe drive.

So first we name the drive.


Then we partition it, utilizing the entire drive.


Accept the warning and we are done.


Going back to the Datastores tab, we see the two datastores.

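For the command-line inclined, the same result can be verified over SSH; a quick sketch:

      # List mounted datastores/filesystems and their capacity
      esxcli storage filesystem list
      # Show which device backs each VMFS datastore
      esxcli storage vmfs extent list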