How to Install SNMP on UnRAID 6

One of the things I like to have in my lab environment is the ability to monitor all of my OSes and keep an eye on things such as temperatures, disk space, and other sensors.  I was disheartened to find that UnRAID 6 did not have SNMP installed or configured.  After some searching I was able to figure out how to get SNMP installed.

First, log into your UnRAID web page and click on Plugins.

Next, copy and paste the NerdPack plugin URL into the URL field and click Install.  NerdPack installs the prerequisites you need in order to install SNMP.  You will then see a plugin window pop up.


Then reboot the server.  This step is not strictly necessary, but I prefer to do it after each plugin install.

Next, go to Settings and then Nerd Pack.


Find the entry for Perl and click the slider


Click Apply at the bottom.


The package manager will launch a window and you will see the package install.


Then install the UnRAID SNMP plugin, following the same steps as the previous plugin install.


Log into the host via SSH and verify that SNMP is working by executing:

snmpwalk -v2c -c public localhost

You should see a long stream of OID and value pairs in response.


Now you should be able to import the host into any SNMP-based monitoring system.
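
If you want to spot-check the kinds of data mentioned at the start (disk space, uptime, and so on) before pointing a monitoring system at the host, a few targeted queries work well.  This is only a sketch: it assumes the plugin's snmpd exposes the standard MIB-2 and HOST-RESOURCES trees under the default "public" read-only community, and that the MIB files are installed so the names resolve (substitute numeric OIDs if not).

snmpget  -v2c -c public localhost SNMPv2-MIB::sysDescr.0              # OS/kernel description
snmpget  -v2c -c public localhost SNMPv2-MIB::sysUpTime.0             # agent uptime
snmpwalk -v2c -c public localhost HOST-RESOURCES-MIB::hrStorageTable  # mounted filesystems, sizes, and usage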


VMware ESXi Passthrough and FreeNAS

I recently had user colicstew comment that they could not get FreeNAS 9.10.2 U3 to see drives on their LSI 2116.  Luckily I have some unused 2TB drives and an LSI 2116 controller as part of my Xeon D lab.


The first step was to pass through my LSI 2116 controller and reboot the server.  You should see Enabled/Active for the LSI controller, which we do.
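
If you prefer to double-check from the command line, you can list the host's PCI devices from an SSH session and look for the controller.  A minimal sketch; the grep pattern is an assumption and depends on how your controller reports its device name:

# List PCI devices and look for the LSI controller entry
esxcli hardware pci list | grep -i lsi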


Next I created a VM.

I chose to install the OS on one of my VMware Datastores.


On the Hardware Settings screen I configured the VM.


Then I clicked Add other device -> PCI device and added the LSI controller.

Finally, I added a CD-ROM drive and attached the FreeNAS ISO I had downloaded.



With everything in place, we boot the VM.


Then we select Install at the first screen.


One oddity I ran into while creating this tutorial: the key sequence that releases the mouse from the VMware Remote Console involves the Escape key, which the FreeNAS installer treats as a command to bounce you back to the initial setup screen, so the remaining pictures were taken with my phone.

On the next screen you will be warned about only having 4GB of memory.  Since I’m only using this as a test system, I am not concerned.  If you were running this as an actual file server you would want, at a minimum, 1GB of memory per TB of storage; with the four 2TB data drives used here, that rule of thumb alone would call for roughly 8GB of RAM.

On this screen we see five drives.  One is the drive we will install the FreeNAS OS to, and the other four are the 2TB Hitachi drives I attached to the Xeon D board.


Next you are warned that FreeNAS will clear the drive you are installing to.


Next you are asked to set the root password.


Finally you are asked to choose a BIOS Mode.  


At this point I went to install the system, but no matter what I did, it would not install the OS and kept failing with the same error.


The problem here is that the drive we are installing the OS to is thin provisioned, and FreeNAS does not like this.


The fix is to create the disk as “Thick provisioned, eagerly zeroed”.
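
If you would rather do this from the ESXi shell than recreate the disk in the UI, vmkfstools can create an eagerly zeroed disk up front or inflate an existing thin one in place.  A hedged sketch; the datastore path and size below are illustrative placeholders, not the ones from this build:

# Create a new eagerly zeroed thick disk (example path and size)
vmkfstools -c 16G -d eagerzeroedthick /vmfs/volumes/datastore1/freenas/freenas-boot.vmdk
# Or inflate an existing thin-provisioned disk to eagerly zeroed thick
vmkfstools -j /vmfs/volumes/datastore1/freenas/freenas-boot.vmdk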


Once that was fixed, the OS installation continued without issue.


Once completed, you will see a success message.

Then I chose option four to shut down the system.  Be sure to remove the DVD drive, or the system will continue to boot from the installer.  Once that is completed, your system will work as expected.


Unfortunately for troubleshooting purposes, I had no issues getting the system up and running.  colicstew, feel free to email me via the contact method and we can see about troubleshooting this further.  I would start by verifying you have the correct breakout cable; I used one of the four breakout cables from the motherboard.

 

UnRAID: Storage Server on a USB Drive

I’m in the process of splitting my lab environment into a separate storage server and virtualization platform.  Today we will focus on the software RAID storage server side with UnRAID.

I’m using the setup below as a test bench.  Under no circumstances should you use a desktop setup for production, but for a test bench to gauge software performance, this is perfectly fine.

  • Intel i3-6100T
  • Gigabyte GA-Z170N-Gaming 5
  • 8GB DDR4
  • 4X Western Digital 4.0TB NASWare Drives
  • 128GB OCZ RD400 NVMe M.2 SSD

Setup is very easy: either create or purchase a USB drive with UnRAID on it and boot from it.  Once booted, navigate to the UnRAID web page.


Then I navigated to the Main tab and configured my drives, setting three for data and one for parity.  Then we played the waiting game: for 12TB of usable space, we waited nine-plus hours for the parity disk to sync, which is roughly what you would expect for writing a 4TB parity disk end to end at spinning-disk speeds of around 120MB/s.


As some veterans of UnRAID may have noticed, I was running version 6.18 for this build.  Since I have an NVMe M.2 SSD in this setup, I decided to use it as a cache drive.


I decided to run some benchmarks to see how the system fares.  The system is only outfitted with a 1G network card, so we are limited to about 117MB/s as a best-case scenario.  We ran into an interesting situation, though: when benchmarking, we started with the 1GiB file size in CrystalDiskMark but got abysmal numbers.


So I decided to start at the lower end with the 50MiB file size, and sure enough we hit speeds that maxed out the network adapter.  The trend continued with the 500MiB file size as well.

So I went back to the 1GiB file size benchmark and, lo and behold, we were hitting good speeds.


There must be some type of caching mechanism that determines when to write and read from the cache and when not to.  I think because the system had just added the cache to the array, it was not “smart” enough to use it yet.

We can see the test consistently maxing out the network adapter.


Here is a 2GiB file size benchmark, which is in line with the previous results.


I would expect similar performance until we have a file larger than the 128GB cache drive.  I’ll need to find a benchmark tool that can use 128GiB file sizes, as CrystalDiskMark only goes to 32GiB.  I also plan to run these same tests over a 10G network, but I am currently waiting on the arrival of a second Xeon D motherboard and have yet to order the 10G adapter for that board.  We also have an 8GB ZeusRAM SSD on the way; that drive is a favorite of the FreeNAS crowd.

Benchmark Time!

I’ve been slacking a bit with getting Benchmarks out, but I’ve done some simple ones.

This is the current hardware and software:

  • 6X HGST 6TB He drives
  • 64GB DDR4 ECC
  • SuperMicro X10SDV-4C-TLN4F motherboard
  • LSI 9211-8i HBA
  • Supermicro SSD-DM064-PHI 64GB SATA DOM (Disk on Module)
  • Fractal Node 804 case
  • Intel S3710 200GB SSD (SLOG)
  • Intel X540-T2 10G controller
  • Napp-It ZFS appliance v. 16.08.03 Pro Aug.02.2016
  • OmniOS 5.11 omnios-r151018-95eaa7e June 2016

In general the results are what I hoped for.  I wanted to max out a single 1G link or 10G link, which works out to roughly 118MB/s and 1.2GB/s of real-world throughput, respectively.

1G Connection via ASUS XG-U2008 Switch and Intel I219V 10 Threads Overlapped IO


10G Connection via ASUS XG-U2008 Switch and Intel x540-T2 10 Threads Overlapped IO


1G Connection via ASUS XG-U2008 Switch and Intel I219V IO Comparison


10G Connection via ASUS XG-U2008 Switch and Intel x540-T2 IO Comparison


The 10G speeds aren’t as consistent as I’d like but they are definitely pretty good.  Overall I’m happy with the build.

OmniOS and Getting smartmontools to Work

I spent a couple of days, off and on, trying to get smartmontools to work on OmniOS.  I saw some conflicting info on whether it should work out of the box, and for me at least it did not.  Below is what I did to get it working.

Don’t bother building from scratch.  Add the below repository and install the package.

pkg set-publisher -O http://scott.mathematik.uni-ulm.de/release uulm.mawi
pkg search -pr smartmontools
pkg install smartmontools
root@OmniOS:/root# pkg info -r smartmontools
          Name: system/storage/smartmontools
       Summary: Control and monitor storage systems using SMART
         State: Installed
     Publisher: uulm.mawi
       Version: 6.3
        Branch: 0.151012
Packaging Date: Mon Sep 29 13:22:53 2014
          Size: 1.83 MB
          FMRI: pkg://uulm.mawi/system/storage/smartmontools@6.3-0.151012:20140929T132253Z

The file /etc/default/smartmontools might have been created automatically, but I’m not sure if it was left over from my previous attempts.  Either way, below is what you need:

# Defaults for smartmontools initscript (/etc/init.d/smartmontools)
# This is a POSIX shell fragment

# List of devices you want to explicitly enable S.M.A.R.T. for
# Not needed (and not recommended) if the device is monitored by smartd
#enable_smart="/dev/hda /dev/hdb"

# uncomment to start smartd on system startup
start_smartd=yes

# uncomment to pass additional options to smartd on startup
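# (--interval=1800 makes smartd poll the disks every 30 minutes)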
smartd_opts="--interval=1800"

Then add your disks to /etc/smartd.conf.  Mine looked like the below; yours will differ.

/dev/rdsk/c1t5000CCA232C02D87d0 -a -d sat,12
/dev/rdsk/c1t5000CCA232C0D31Bd0 -a -d sat,12 
/dev/rdsk/c1t5000CCA232C0EA80d0 -a -d sat,12 
/dev/rdsk/c1t5000CCA232C1543Cd0 -a -d sat,12 
/dev/rdsk/c1t5000CCA232C0AD56d0 -a -d sat,12 
/dev/rdsk/c1t5000CCA232C0BBA6d0 -a -d sat,12 
/dev/rdsk/c1t5000039FF4CF3EA6d0 -a -d sat,12 
/dev/rdsk/c1t5000039FF4E58676d0 -a -d sat,12 
/dev/rdsk/c3t5d0 -a -d sat,12
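
To sanity-check the setup, you can start the daemon and query one disk by hand.  This is a hedged sketch: it assumes the package ships the init script referenced in the /etc/default/smartmontools comments above, and it reuses the same -d sat,12 device type from smartd.conf.

# Start smartd and confirm it is running
/etc/init.d/smartmontools start
ps -ef | grep smartd
# Query a single disk directly with the same device type smartd uses
smartctl -a -d sat,12 /dev/rdsk/c1t5000CCA232C02D87d0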

If everything works, smartd will begin reporting on your disks.  I do have an issue where one of my disks is repeatedly parking loudly; I’ll need to do some research to see why.  It is good to know that all my disks are healthy, though.


OmniOS and a Misunderstanding

I decided to go with OmniOS, as it seems to be the OS everyone recommends for ZFS and napp-it, and it works with 10G networking.  Installation went smoothly and without issues.

Unfortunately, that was the only thing that went well.  No configuration would allow me to get a 10G write above 180-190MB/s, which is just not acceptable; I would have expected much more than a 60% increase in speed.


Then it dawned on me: I was being limited by the speed of the media I was copying my files from (a USB 3.0 drive).  In theory a SATA-interfaced drive like that is only going to hit 160-190MB/s.

So I copied a file locally and saw the same speeds, then copied that file from a local drive on a VM to the mounted SMB share, and boom, there were my speeds, topping out at 400+MB/s.

OpenIndiana Install – Take Two

I decided to retry the OpenIndiana install, this time with a freshly re-downloaded ISO.  It booted quickly and without issue.

The first issue I ran into was that the OpenIndiana text installer only saw the 6TB drives I had installed.  In order to get the installer to see the SATA DOM, I needed to remove the HBA cables and then re-run the installer.  After that, the installer saw the 64GB SATA DOM and the 200GB Intel SSD.


After configuring the network and setting the usernames and passwords, the installer did its thing.


After a quick install, the system was rebooted and the HBA drives reconnected.
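
Before moving on to napp-it, it is worth confirming that the OS sees every disk again now that the HBA is cabled back up.  On OpenIndiana and other illumos-based systems, the format utility lists every disk it can find; feeding it an empty stdin just prints the list and exits without changing anything:

# List all visible disks; the HBA-attached drives should appear alongside the SATA DOM and SSD
echo | format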

As for configuration, I followed the directions from one of my favorite server/hardware sites, ServeTheHome:

  • Simply launch a Terminal in OpenIndiana, then type su
  • Next, we will launch the installer command (spelled out after this list): wget -O - www.napp-it.org/nappit | perl
  • Now you can sit back and watch the magic.
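
For reference, those two steps typed at a shell are just the following; the installer script is fetched from napp-it.org and piped straight into perl, exactly as the ServeTheHome directions describe:

su                                         # become root
wget -O - www.napp-it.org/nappit | perl    # download the napp-it installer and run it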

The install went without issue, but currently OpenIndiana does not have drivers for the Intel X557 NICs that are built into the SuperMicro X10SDV-4C-TLN4F motherboard.  Testing on 1G links was similar to what we saw in FreeNAS.  Currently, FreeNAS is the only OS I know of that supports the X557 NICs.