Windows 2016 Storage Spaces Testing

Setup is rather simple for Storage Spaces on Server 2016: open Server Manager, click File and Storage Services, and then Storage Pools.

From there you will see the Primordial pool and, on the bottom right, the physical disks available for use.
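
The same view is available from PowerShell; if you want to sanity-check which drives are eligible before touching the GUI, something like this works (the output columns are my choice, not the wizard's):

# List the disks that are eligible to join a storage pool
Get-PhysicalDisk -CanPool $true | Select-Object FriendlyName, MediaType, BusType, Size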

From there, click Tasks and then New Storage Pool.

Name your Pool

Select the drives you wish to use.  For us, we chose 4 disks to create a mirrored pool.  Selecting five drives or including the NVMe drive caused a failure to create a virtual disk.

Click Create
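
For the script-minded, the pool can also be created with the storage cmdlets; a minimal sketch, assuming a pool name of TestPool (my choice) and that the four disks are the only poolable drives:

# Build the pool from all currently poolable disks
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "TestPool" -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks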

Go to Virtual Disks, click Tasks, and then New Virtual Disk.

Name the virtual disk and select “Create storage tiers on this virtual disk” if you have an SSD in the machine.  In our case, we have an M.2 Samsung SSD in the system.

Since this is not an enclosure, we do not select Enclosure Awareness.

We now make a decision between capacity, speed, and redundancy.  We want the redundancy, but we can’t have it here because we need an even number of same-sized disks.  If we were to choose only 4 disks, we would be able to use Mirror, but for just testing a les…
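
PowerShell will also tell you which resiliency settings the pool supports and how many disk failures each tolerates (TestPool again being the hypothetical name from the sketch above):

# Show the layouts the pool offers and their redundancy
Get-StoragePool -FriendlyName "TestPool" | Get-ResiliencySetting | Select-Object Name, NumberOfDataCopies, PhysicalDiskRedundancy
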
Since we chose Tiered Storage, we cannot choose thin provisioning.
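
The tiered layout can be scripted too; a rough sketch with made-up tier sizes — note there is no -ProvisioningType Thin here, because tiered spaces are always fixed:

# Define an SSD tier and an HDD tier inside the pool
$ssd = New-StorageTier -StoragePoolFriendlyName "TestPool" -FriendlyName "SSDTier" -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName "TestPool" -FriendlyName "HDDTier" -MediaType HDD

# Carve a mirrored, tiered virtual disk from the two tiers
# (Mirror needs at least two physical disks in each tier)
New-VirtualDisk -StoragePoolFriendlyName "TestPool" -FriendlyName "TieredDisk" -StorageTiers $ssd, $hdd -StorageTierSizes 100GB, 500GB -ResiliencySettingName Mirror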

Once the creation is complete, go and format the disk.
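
From PowerShell, initialize, partition, and format is a single pipeline (using the hypothetical TieredDisk name from above):

# Bring the new virtual disk online as an NTFS volume
Get-VirtualDisk -FriendlyName "TieredDisk" | Get-Disk | Initialize-Disk -PassThru | New-Partition -AssignDriveLetter -UseMaximumSize | Format-Volume -FileSystem NTFS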

Now we benchmark, and it becomes immediately apparent that Spaces fails terribly on 4K writes while doing really well on sequential transfers.  I would expect better 4K writes from a single 7200 RPM drive.
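
If you want to reproduce the 4K write result from the command line rather than a GUI benchmark, Microsoft's DiskSpd is one option; the target path and sizes below are placeholders:

# 30-second 4K random-write test: 4 threads, 32 outstanding IOs, caching disabled
diskspd.exe -c1G -d30 -r -w100 -b4K -t4 -o32 -Sh E:\spaces-test.dat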

Building a Home Lab – Back to the Future!

In my current role I don’t do much systems engineering work, and frankly it’s boring not to.

I’ve decided that I need to keep my skills up, so I’m working on rebuilding in my home lab what we had with the old team.

Compiling a list, this is what I need to build and what I hope to blog about.  Simple how-to guides are already prevalent across the web, so I’ll blog about the pain points I encounter instead.

  1.  System Center Operations Manager 2016
  2.  System Center Virtual Machine Manager 2016 (with VMware ESXi and vSphere substituted in)
  3.  Windows Server Update Services
  4.  Windows Deployment Services
  5.  Active Directory
  6.  Windows DNS
  7.  Puppet
  8.  Foreman (though not technically in our old environment, I would like to use it for *nix deployments)
  9.  SQL Clustering and Replication
  10.  Virtual Desktop Infrastructure

My Active Directory, Windows DNS, Foreman, and Puppet installations are already complete.  My vSphere and ESXi systems are complete as well.

My entire infrastructure is running as VMs off a single ESXi server, built on the following:

Hardware

SUPERMICRO MBD-X10SRL-F Server Motherboard LGA 2011 R3

Intel Xeon E5-2609 V4 1.7 GHz 20MB L3 Cache LGA 2011 85W BX80660E52609V4 Server Processor

8X SAMSUNG 16GB 288-Pin DDR4 SDRAM ECC Registered DDR4 2133 (PC4 17000) Server Memory Model M393A2G40DB0-CPB

SUPERMICRO SSD-DM064-PHI SATA DOM (SuperDOM) Solutions

Intel RS3DC080 PCI-Express 3.0 x8 Low Profile Ready SATA / SAS Controller Card

Intel 750 Series AIC 400GB PCI-Express 3.0 x4 MLC Internal Solid State Drive (SSD) SSDPEDMW400G4X1

4X HGST HITACHI Ultrastar SSD400M 400GB SAS 6GB SSD 2.5″ HUSML4040ASS600 (SSD DataStore)

4X SAMSUNG SM843T 480GB SATA3 2.5″ SSD INTERNAL SOLID STATE ENTERPRISE DRIVE LAPTOP (SSD DataStore 2)

Intel Ethernet Converged Network Adapter X540-T2

Intel Ethernet Server Adapter I350-F4

MikroTik CRS125-24G-1S-IN Cloud Router Gigabit Switch, 24x 10/100/1000 Mbit/s Gigabit Ethernet with AutoMDI/X, fully manageable Layer 3, RouterOS v6, Level 5 license

ASUS XG-U2008 Unmanaged 2-port 10G and 8-port Gigabit Switch

Notes

  • Both SSD datastores are running in RAID 5.
  • All parts except the Intel 750 SSD, motherboard, and CPU were purchased on eBay.  I saved hundreds by doing that; DDR4 ECC is dirt cheap on eBay right now.
  • I will run out of storage space and memory long before I run out of CPU.
    • To up the memory I’ll need to purchase 32GB DIMMs, as I am maxed out with 16GB DIMMs.
  • The MikroTik switch handles 1G traffic.
  • The ASUS switch handles 10G traffic between my NAS and my VMware server.
    • Currently the 10G NIC is directly connected to one of my VMs, which allows me to transfer data over the 10G NICs.
  • The storage breakdown is below.  NappItDataStore is my NAS; it holds ISO files for OS installs and other installers.  All told, VM storage is only 2.8TB.  I would love to consolidate down to 4 x 1.2TB NVMe SSDs and currently have my eye on this.  At $500 a drive it’s too expensive, but hopefully these drives will start to come down on the used market.  At that point I could eliminate the Intel RAID card and go with all PCIe SSDs.

(Screenshot: storage breakdown by datastore.)

Early NAS Benchmarks

I’ve been racking my brain trying to figure out this Synology NAS.  I wanted a very simple benchmark, as I don’t need to go into complex workloads.  The complication is that I’m using Synology Hybrid RAID.  From what I can see in mdadm, Synology created a RAID 5 (md2) from a slice of each disk, a RAID 1 (md3) from the leftover slices on the two 6TB drives, and then joined the two arrays into the single roughly 10TB volume.  It is an interesting way to handle different-sized disks.

@NAS:/$ sudo mdadm --detail /dev/md2
/dev/md2:
        Version : 1.2
  Creation Time : Sun Jun 5 14:32:46 2016
     Raid Level : raid5
     Array Size : 8776306368 (8369.74 GiB 8986.94 GB)
  Used Dev Size : 2925435456 (2789.91 GiB 2995.65 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Tue Oct 4 16:12:35 2016
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           Name : DiskStation:2
           UUID : 7bda9cd2:56823007:93f428e6:169ca70e
         Events : 1336

    Number   Major   Minor   RaidDevice State
       4       8        5        0      active sync   /dev/sda5
       5       8       21        1      active sync   /dev/sdb5
       2       8       37        2      active sync   /dev/sdc5
       3       8       53        3      active sync   /dev/sdd5
@NAS:/$ sudo mdadm --detail /dev/md3
/dev/md3:
        Version : 1.2
  Creation Time : Sat Jun 25 05:33:13 2016
     Raid Level : raid1
     Array Size : 2930246912 (2794.50 GiB 3000.57 GB)
  Used Dev Size : 2930246912 (2794.50 GiB 3000.57 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Tue Oct 4 16:19:16 2016
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : NAS:3 (local to host NAS)
           UUID : 57946685:1d64279d:4929b9da:9c01f117
         Events : 915

    Number   Major   Minor   RaidDevice State
       0       8        6        0      active sync   /dev/sda6
       2       8       22        1      active sync   /dev/sdb6

These benchmark speeds are to be expected.  I plan to benchmark the Synology in different RAID modes as well, but for now I’ve benchmarked md2 (RAID 5) and md1 (RAID 1).

@NAS:/$ sudo hdparm -t /dev/md2
/dev/md2:
Timing buffered disk reads: 422 MB in 3.00 seconds = 140.61 MB/sec

@NAS:/$ sudo hdparm -T /dev/md2

/dev/md2:
Timing cached reads: 1478 MB in 2.00 seconds = 738.35 MB/sec

@NAS:/$ sudo hdparm -t /dev/md1

/dev/md1:
Timing buffered disk reads: 498 MB in 3.04 seconds = 163.86 MB/sec

@NAS:/$ sudo hdparm -T /dev/md1

/dev/md1:
Timing cached reads: 1508 MB in 2.00 seconds = 754.19 MB/sec