Home Lab, NAS

Cheap 10gbe Network Adapter for Synology DS3018xs

In preparation for Boot from SAN and other network-intensive work with my Synology, I looked into getting a 10GbE adapter.  Unfortunately, the Synology-branded 10GbE adapters are extremely expensive.

I was lucky enough to have a Mellanox ConnectX-2 card kicking around.  These single-port 10GbE cards can be found on eBay for less than $20.

After 3D printing a low-profile bracket for the card found on Thingiverse, I was able to power up the Synology with the card installed and connect over 10GbE without issue.


Although I have yet to test the adapter under a normal workload, Synology supports other Mellanox cards on its other devices.  We can also see, by SSHing into the Synology, that the device correctly loaded the mlx4_en driver.

[Mon Feb 26 11:22:58 2018] IPv6: ADDRCONF(NETDEV_UP): eth4: link is not ready
[Mon Feb 26 11:22:58 2018] mlx4_en: eth4: Link Up
[Mon Feb 26 11:22:58 2018] IPv6: ADDRCONF(NETDEV_CHANGE): eth4: link becomes ready

In the coming weeks I hope to publish a series of posts detailing Boot from SAN configurations for ESXi, as well as new posts and benchmarks working with Intel’s Xeon Bronze series of chips.


Home Lab, Monitoring, NAS

How to Install SNMP on UnRAID6

One of the things I like to have in my lab environment is the ability to monitor all of my OSes and keep an eye on things such as temperatures, disk space, and other sensors.  I was disheartened to find that UnRAID 6 did not have SNMP installed or configured.  After some searching, I was able to figure out how to get SNMP installed.

First, Log into your UnRAID Web page and Click on Plugins

Next, copy and paste the NerdPack plugin URL into the URL field and click Install.  NerdPack installs the prerequisites that you need for SNMP.  You will then see a plugin window pop up.


Then reboot the server.  This step is not strictly necessary, but I prefer to do it after each plugin install.

Next, go to Settings, then Nerd Pack.


Find the entry for Perl and click the slider.


Click Apply at the bottom.


The package manager will launch a window and you will see the package install.


Then install the UnRAID SNMP plugin, following the same steps as the previous plugin install.


Log in to the host via SSH and verify that SNMP is working by executing:

snmpwalk -v2c -c public localhost

You should see output similar to below.


Now you should be able to import your host into an SNMP-based monitoring system.


ESXi6, Home Lab, NAS, VMware

VMware ESXi Passthrough and FreeNAS

I recently had user colicstew comment that they could not get FreeNAS 9.10.2-U3 to see drives on their LSI 2116.  Luckily, I have some unused 2TB drives and an LSI 2116 controller as part of my Xeon D lab.


The first step was to pass through my LSI 2116 controller and reboot the server.  You should see Enabled/Active for the LSI controller, which we do.


Next, I created a VM.

I chose to install the OS on one of my VMware Datastores.


I set the hardware settings as shown.


Then I clicked Add other device -> PCI Device and added the LSI controller.

Finally, I added a CD-ROM drive and attached the FreeNAS ISO downloaded from here.



Now we boot the VM.


Then we select Install at the first screen.


One of the oddities I ran into while creating this tutorial: the key sequence that releases the mouse from the VMware Remote Console is the Escape key, which in the FreeNAS installer bounces you back to the initial setup screen, so the remaining pictures were taken with my phone.

On the next screen you will be warned about only having 4GB of memory.  Since I’m only using this as a test system, I am not concerned.  If you were running this as an actual file server, you would want a minimum of 1GB of memory per TB of storage.

On this screen we see five drives.  One is the drive we will install the FreeNAS OS to, and the other four are 2TB Hitachi drives I attached to the Xeon D board.


Next you are warned that FreeNAS will clear the drive you are installing to.


Next you are asked to set the root password.


Finally you are asked to choose a BIOS Mode.  


At this point, I went to install the system, but no matter what I did, it would not install the OS, failing with the below error.


The problem here is that the drive that we are installing the OS to is thin provisioned.  FreeNAS does not like this.


The fix is to recreate the disk as “Thick provisioned, eagerly zeroed”.


Once that was fixed, the OS installation continued without issue.


Once completed, you will see the following success message.

Then I chose option four to shut down the system.  Be sure to remove the DVD drive, or the system will boot into the installer again.  Once that is done, your system will work as expected.


Unfortunately for troubleshooting purposes, I had no issues getting the system up and running.  colicstew, feel free to email me via the contact method and we can see about troubleshooting this issue.  I would start by verifying you have the correct breakout cable; I used one of the four breakout cables connected to the motherboard.



UnRAID - Storage Server on a USB Drive

I’m in the process of converting my lab environment into a separate storage server and virtualization platform.  Today we will focus on the software RAID storage server side with UnRAID.

I’m using the below setup as a test bench.  Under no circumstances should you use a desktop setup for production, but for a test bench to gauge software performance, this is perfectly fine.

  • Intel i3-6100T
  • Gigabyte GA-Z170N-Gaming 5
  • 8GB DDR4
  • 4X Western Digital 4.0TB NASWare Drives
  • 128GB OCZ RD400 NVMe M.2 SSD

Setup is very easy: either create or purchase a USB drive with UnRAID and boot from it.  Once booted, navigate to the UnRAID web page.


Then I navigated to the Main tab and configured my drives: three for data and one for parity.  Then we played the waiting game.  For 12TB of usable space, we waited nine-plus hours for the parity disk to sync.


As some veterans of UnRAID may have noticed, I’m running version 6.1.8 in the above screenshots.  Since I have an NVMe M.2 SSD in this setup, I decided to use it as a cache drive.


I decided to run some benchmarks to see how the system fares.  The system is only outfitted with a 1G network card, so we are limited to about 117MB/s as a best-case scenario.  We ran into an interesting situation, though: when benchmarking, we started with the 1GiB file size in CrystalDiskMark, but got abysmal numbers.


So I decided to benchmark from the lower end, starting with the 50MiB file size, and sure enough we hit speeds that maxed out the network adapter.  The trend continued with the 500MiB file size as well.

So I went back to the 1GiB file size benchmark and, lo and behold, we were hitting good speeds.


There must be some caching mechanism that determines when to read and write from the cache and when not to.  I suspect that because the cache had only just been added to the array, the system was not “smart” enough to use it yet.

We can see the test consistently max out the Network Adapter.


Here is a 2GiB File Size Benchmark, which is in line with the previous.


I would expect similar performance until we have a file larger than the 128GB cache drive.  I’ll need to find a benchmark tool that can use 128GiB file sizes, as CrystalDiskMark only goes to 32GiB.  I also plan to run these same tests over a 10G network, but I am currently waiting on the arrival of a second Xeon D motherboard and have yet to order the 10G adapter for that board.  We also have an 8GB ZeusRAM SSD on the way; that drive is a favorite of the FreeNAS crowd.


Benchmark Time!

I’ve been slacking a bit with getting Benchmarks out, but I’ve done some simple ones.

This is the current hardware and software:

  • 6X HGST 6TB He Drives
  • SuperMicro X10SDV-4C-TLN4F Motherboard
  • LSI 9211-8i HBA
  • Supermicro SSD-DM064-PHI 64GB SATA DOM (Disk on Module)
  • Fractal Node 804 Case
  • Intel S3710 200GB SSD (SLOG)
  • Intel X540-T2 10G Controller
  • Napp-It ZFS appliance v. 16.08.03 Pro Aug.02.2016
  • OmniOS 5.11 omnios-r151018-95eaa7e June 2016

In general the results are what I hoped for: I wanted to max out a single 1G link or 10G link.

1G Connection via ASUS XG-U2008 Switch and Intel I219V 10 Threads Overlapped IO


10G Connection via ASUS XG-U2008 Switch and Intel x540-T2 10 Threads Overlapped IO


1G Connection via ASUS XG-U2008 Switch and Intel I219V IO Comparison


10G Connection via ASUS XG-U2008 Switch and Intel x540-T2 IO Comparison


The 10G speeds aren’t as consistent as I’d like, but they are definitely pretty good.  Overall, I’m happy with the build.


OmniOS and getting SmartmonTools to Work

I spent a couple of days, off and on, trying to get smartmontools to work on OmniOS.  I saw some conflicting info on whether it should work out of the box; for me, at least, it did not.  Below is what I did to get it working.

Don’t bother building from source.  Add the repository below and install the package.

pkg set-publisher -O http://scott.mathematik.uni-ulm.de/release uulm.mawi
pkg search -pr smartmontools
pkg install smartmontools
root@OmniOS:/root# pkg info -r smartmontools
          Name: system/storage/smartmontools
       Summary: Control and monitor storage systems using SMART
         State: Installed
     Publisher: uulm.mawi
       Version: 6.3
        Branch: 0.151012
Packaging Date: Mon Sep 29 13:22:53 2014
          Size: 1.83 MB
          FMRI: pkg://uulm.mawi/system/storage/smartmontools@6.3-0.151012:20140929T132253Z

The smartmontools defaults file in /etc/default/smartmontools might have been created automatically, though I’m not sure whether it was left over from previous attempts.  Either way, below is what you need.

# Defaults for smartmontools initscript (/etc/init.d/smartmontools)
# This is a POSIX shell fragment

# List of devices you want to explicitly enable S.M.A.R.T. for
# Not needed (and not recommended) if the device is monitored by smartd
#enable_smart="/dev/hda /dev/hdb"

# uncomment to start smartd on system startup
start_smartd=yes

# uncomment to pass additional options to smartd on startup
#smartd_opts="--interval=1800"

Then add your disks to /etc/smartd.conf.  Mine looked like the below; yours will differ.

/dev/rdsk/c1t5000CCA232C02D87d0 -a -d sat,12
/dev/rdsk/c1t5000CCA232C0D31Bd0 -a -d sat,12 
/dev/rdsk/c1t5000CCA232C0EA80d0 -a -d sat,12 
/dev/rdsk/c1t5000CCA232C1543Cd0 -a -d sat,12 
/dev/rdsk/c1t5000CCA232C0AD56d0 -a -d sat,12 
/dev/rdsk/c1t5000CCA232C0BBA6d0 -a -d sat,12 
/dev/rdsk/c1t5000039FF4CF3EA6d0 -a -d sat,12 
/dev/rdsk/c1t5000039FF4E58676d0 -a -d sat,12 
/dev/rdsk/c3t5d0 -a -d sat,12

If everything works, you will see the below.  I do have an issue where one of my disks is repeatedly parking loudly; I’ll need to do some research to find out why.  Still, it is good to know that all my disks are healthy.



OmniOS and a Misunderstanding

I decided to go with OmniOS, as it seems to be the OS everyone recommends for ZFS and napp-it, and it works with the 10G networking.  Installation went smoothly and without issue.

Unfortunately, that was the only thing that went well.  No configuration would allow me to get a 10G write above 180-190MB/s, which is just not acceptable; I would have expected a greater than 60% increase in speed.


Then it dawned on me: I was being limited by the speed of the media I was copying my files from (a USB 3.0 drive).  The spinning disk behind that interface is only ever going to hit 160-190MB/s.

So I copied a file locally and witnessed the same speeds, then copied that file from a local drive on a VM to the mounted SMB share, and boom, there were my speeds, topping out at 400+MB/s.


OpenIndiana Install – Take Two

I decided to retry the OpenIndiana install, and this time I re-downloaded the ISO.  This time, the ISO booted quickly and without issue.

The first issue I ran into was that the OpenIndiana text installer only saw the 6TB drives I had installed.  To get the installer to see the SATA DOM, I needed to remove the HBA cables and then re-run the installer.  As you can see, the installer then saw the 64GB SATA DOM and the 200GB Intel SSD.


Once the network was configured and the usernames and passwords set, the installer did its thing.


After a quick install, the system was rebooted and the HBA drives reconnected.

As for configuration, I followed directions from one of my favorite server/hardware sites, ServeTheHome:

  • Simply launch a Terminal in OpenIndiana, then type su
  • Next, launch the installer command: wget -O - http://www.napp-it.org/nappit | perl
  • Now you can sit back and watch the magic.

The install went without issue, but OpenIndiana currently does not have drivers for the Intel X557 NICs built into the SuperMicro X10SDV-4C-TLN4F motherboard.  Testing on 1G links was similar to what we saw in FreeNAS.  Currently, FreeNAS is the only OS I know of that supports the X557 NICs.



NAS OS Install – A Journey in Patience


I’m in the process of setting up the OS for the new NAS server, and I have to say, the OpenIndiana installer is slow.  I waited 10 minutes after seeing the Sun copyright banner, then 15, then 20, and eventually 30 minutes.  At that point I decided to download FreeNAS 9.10.1 and prepared to mount it via the IPMI on the SuperMicro Xeon D board.  But lo and behold, we had movement!  Then we stalled again.  I was starting to have flashbacks of Solaris on Sun hardware with software RAID on 16-drive storage arrays running early versions of ZFS.

Then I got smart, or so I thought.  I removed the USB extender and plugged the USB drive directly into the motherboard.  Unfortunately, it didn’t do a thing.


So instead, I decided to mount an ISO via the IPMI and go with FreeNAS.  Everything was going well.


Okay, we are making progress.


After logging in and doing some basic setup, I created the first Volume.


Speed tests were as expected: one mirror gives one disk’s performance, two mirrors give two disks’ performance.

I still plan on installing OpenIndiana on this machine in the future and benchmarking it as well.  My 10G switch and new card have arrived, so I’ll be sure to put those benchmarks up too.


NAS Hiccup

So as I was starting the NAS build, I noticed an issue: I had ordered the wrong 10G NIC for my motherboard.  My motherboard uses 10G RJ45, and I ordered a NIC that uses SFP+ modules.  I also had to add a larger SATA DOM, as OmniOS and napp-it need the space.  I decided to add a SLOG drive even though it isn’t needed for just streaming; if I decide to do iSCSI or NFS for VMware, it will be useful.

The final build is below.  The wrong NIC will not stop me from continuing the build, but it will prevent 10G tests for now.

  • 6X HGST 6TB He Drives
  • SuperMicro X10SDV-4C-TLN4F Motherboard
  • LSI 9211-8i HBA
  • Supermicro SSD-DM064-PHI 64GB SATA DOM (Disk on Module)
  • Fractal Node 804 Case
  • Intel S3710 200GB SSD (SLOG)
  • Intel X540-T2 10G Controller