How to Install vRealize Operations

The first step is to obtain the vRealize Operations Manager appliance and deploy it on your VMware server.  Once the appliance is uploaded and the installation is complete, you will see the below screen on your VM.
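If you prefer the command line to the Deploy OVF wizard, VMware's ovftool can push the appliance to a host directly. This is only a sketch; the OVA filename, VM name, datastore, network, and host below are placeholders for your environment.

```shell
# Hypothetical ovftool deploy of the vROps appliance; substitute your own
# OVA path, datastore, network name, and ESXi host/credentials.
ovftool \
  --acceptAllEulas \
  --name=vrops-01 \
  --datastore=datastore1 \
  --network="VM Network" \
  --diskMode=thin \
  vRealize-Operations-Manager-Appliance.ova \
  'vi://root@esxi-host.example.com/'
```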


Once you go to the URL you will find the Initial Setup screen.


I chose New Installation and started by setting the Administrator’s Password.


I then accepted the default certificate.


I named my Cluster Node.  We can add additional nodes later.


I then completed the install by clicking Finish.

Now we wait while the Appliance configures itself.


Then we need to start vRealize Operations Manager.


Eventually the Node will show a State of Running and Online.


Once the node is up and running, you will have to log in.


Note that the login username is admin and not root.

Now we have to do more configuration.


Accept the EULA, enter our Product Key, and opt in or out of the Customer Experience Improvement Program (CEIP).


Finally we configure the installation.


First, in order to gather information about our VMs and ESXi servers, we need to configure the vSphere Adapter.


Once the adapter is configured you should see Collecting.


For a time you will see your items in a grey color while the information is aggregated.


After a while you will see the aggregation working and less and less grey.



How to Install vCenter Server Appliance 6.0

There are only so many installations of a Windows or Linux box you can do before you ask yourself if there is an easier way.  In order to use templates and automate your VM-building tasks, one of the choices is to use vSphere with a variety of tools that plug in, such as Vagrant, Chef, or Puppet.

First we must acquire the VCSA 6.0 ISO and have a VM or other Windows host on which to run the installer.  For me this is just my Windows 10 laptop.

We browse the ISO to /vcsa/VMWare-ClientIntegrationPlugin-6.0.0.exe and install the plugin.  This is needed to run the installer, which is web-browser based.

Once installed, we open vcsa-setup.html.


On Internet Explorer, you will be prompted twice to accept Client Integration Plugin access to your system.  Accept the access and you will see the install window for vCenter Server Appliance 6.0.


First we click Install and accept the License Agreement.

Then we enter the IP, user, and password for our ESXi host.


You will be prompted to trust your SSL certificate, and since it’s the default install certificate, it will be for localhost.localdomain.  If you have changed your hostname it won’t match and won’t be trusted.  You can click Yes to accept and continue.


The installer will validate your setup and let you know if any connection issues are found.  Then it asks you to set the appliance name and OS password.  Set these as appropriate.  When setting the OS password, be sure to include an uppercase letter, a lowercase letter, and a special character, as the installer requires them.


For our installation, since it is such a small vCenter install, we are going to go with the Embedded Platform Services Controller.


Next we create our SSO (Single Sign-On) domain.  If you plan to have Active Directory integration, you need to be sure that your domain name and site name are different from your Active Directory forest name.


Next you pick your appliance size.  The Tiny size fits our installation well, but just in case we decide to go bigger, I went with the Small size.


Next, select your datastore.  Be sure there is enough space to fit your VMs.  The installer states 150GB is needed, so I have chosen a datastore with at least that much free.  I have also enabled Thin Disk Mode, as it only allocates storage as needed.


We chose the embedded database.


We chose the appropriate network settings.  Since my entire homelab is 10GbE, I want to be sure that I am using a 10GbE virtual switch/network.


The installer will warn you not to use DHCP, but if you use DHCP reservations you will be fine.  Finally, accept the Customer Experience Improvement Program settings and finish the installer so that it can build your VM.  If you want to see how the installer is doing, hop over to the web page for your ESXi host.


Once the VM is built you can also watch it do the installation.


Once it’s finished, verify you can log in and off you go!


No 10GbE NICs showing in VMware 6.x with X10SDV-7TP4F

When I started building my VMware ESXi server, I did not have a switch that could handle 10GbE SFP+.  Now that I have a Dell X1052, I figured I would cable up a 10GbE DAC and get moving.  Much to my surprise, I received a link light on the switch, but not on the motherboard.


Digging into the VMware side, I noticed that the 10GbE NICs are not available, only the 1GbE NICs.


A quick Google search brought me to a great site for VMware knowledge, tinkertry.com.  It appears that the drivers for the 10GbE NICs are not loaded.  So, following the directions there, we open the SSH console and enter the following command:

esxcli software vib install -v https://cdn.tinkertry.com/files/net-ixgbe_4.5.1-1OEM.600.0.0.2494585.vib --no-sig-check

We then reboot our host to get the new VIB loaded.


Lo and behold, on reboot we see the two 10GbE NICs.

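You can also confirm the result from the SSH console rather than the GUI; a quick sketch, run on the ESXi host:

```shell
# List all vmnics -- the two 10GbE ports should now appear alongside
# the 1GbE ports.
esxcli network nic list

# Confirm the ixgbe VIB we just installed is present.
esxcli software vib list | grep ixgbe
```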

Easy ESXi Patch Updates

If you are not familiar with v-front.de and you have ESXi servers, you are really doing yourself a disservice.  v-front.de maintains a repository of all ESXi updates and makes it very easy to update your servers when the newest patches come out.

Lately I have been having a series of interesting GUI crashes with ESXi no matter what OS or browser I use.  Knowing I was a bit behind on patches, I decided to update.

From their patching site I was able to grab the latest software profile and install it.  The steps are easy and I will detail them here.

    1.  Enable the SSH shell for your ESXi host. On the Host tab, click Actions -> Services -> Enable Secure Shell (SSH).
    2. Go to the patching site and click on the latest update; v-front.de lists the steps to update your host:
      esxcli network firewall ruleset set -e true -r httpClient
      esxcli software profile update -p ESXi-6.5.0-20170404001-standard \
        -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml
      esxcli network firewall ruleset set -e false -r httpClient
    3. Reboot and then verify that the VIBs have updated:
      [root@esxi:~] esxcli software vib list | grep esx-base
      esx-base                       6.5.0-0.19.5310538                    VMware    VMwareCertified     2017-05-13
      [root@esxi:~] esxcli software vib list | grep esx-ui
      esx-ui                         1.18.0-5270848                        VMware    VMwareCertified     2017-05-13
      [root@esxi:~] esxcli software vib list | grep vsan
      vsan                           6.5.0-0.19.5310540                    VMware    VMwareCertified     2017-05-13
      vsanhealth                     6.5.0-0.19.5310541                    VMware    VMwareCertified     2017-05-13
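As a sanity check, it can also be worth recording the running build before you patch and comparing it after the reboot:

```shell
# Show the installed ESXi version and build number (run on the host).
esxcli system version get

# Shorter form with the same information.
vmware -vl
```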

VMware ESXi Passthrough and FreeNAS

I recently had a user, colicstew, comment that they could not get FreeNAS 9.10.2 U3 to see drives on their LSI 2116.  Luckily, I have some unused 2TB drives and an LSI 2116 controller as part of my Xeon D lab.

IMG_0561.jpgIMG_0560.jpg

The first step was to pass through my LSI 2116 controller and reboot the server.  You should see Enabled/Active for the LSI controller, which we do.

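If you want to double-check the passthrough from the shell instead of the GUI, a quick sketch (the exact device names will vary by board):

```shell
# List PCI devices and look for the LSI controller's entry.
esxcli hardware pci list

# After a successful passthrough, the host should no longer claim the
# controller as one of its own storage adapters.
esxcli storage core adapter list
```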

Next I created a VM.

I chose to install the OS on one of my VMware Datastores.


I configured the hardware settings as shown.


Then I clicked Add other device -> PCI Device and added the LSI Controller.

Finally, I added a CD-ROM drive and attached the FreeNAS ISO downloaded from here.


We then boot the VM.


Then we select Install at the first screen.


One oddity I ran into while creating this tutorial: the key sequence that releases the mouse from the VMware Remote Console is also the Escape key in the FreeNAS installer, which bounces you back to the initial setup screen.  So the remaining pictures are via my phone.

On the next screen you will be warned about only having 4GB of memory.  Since I’m only using this as a test system, I am not concerned.  If you were running this as an actual file server, you would want a minimum of 1GB of memory per TB of storage.

On this screen we see five drives.  One is the drive we will install the FreeNAS OS to, and the other four are the 2TB Hitachi drives I attached to the Xeon D board.


Next you are warned that FreeNAS will clear the drive you are installing to.


Next you are asked to set the root password.


Finally you are asked to choose a BIOS Mode.  


At this point, I went to install the system, but no matter what I did, it would not install the OS, failing with the below error.


The problem here is that the drive that we are installing the OS to is thin provisioned.  FreeNAS does not like this.


The fix is to create the disk as “Thick provisioned, eagerly zeroed”.

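If you already created the disk thin, you don't necessarily have to recreate it: with the VM powered off, vmkfstools can inflate a thin disk to eagerly zeroed thick in place. The path below is a placeholder; point it at your own VMDK.

```shell
# Inflate a thin-provisioned VMDK to eagerly zeroed thick (the VM must be
# powered off first). The datastore path here is hypothetical.
vmkfstools --inflatedisk /vmfs/volumes/datastore1/freenas/freenas.vmdk
```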

Once that was fixed, the OS installation continued without issue.


Once completed, you will see the following success message.

Then I chose option four to shut down the system.  Be sure to remove the DVD drive, or the system will continue to boot from the installer.  Once that is done, your system will work as expected.


Unfortunately, I had no issues getting the system up and running.  colicstew, feel free to email me via the contact method and we can see about troubleshooting this issue.  I would start with verifying you have the correct breakout cable; I used one of the four breakout cables from the motherboard.


How to Enable SNMPD on ESXi 6.5

I had some difficulties enabling SNMPD from the GUI on ESXi 6.5 and kept receiving the following error:

Failed – An error occurred during host configuration

A quick search led me here.

Running the steps listed at the above link allowed me to start SNMPD in the ESXi GUI without issue:

esxcli system snmp set -r
esxcli system snmp set -c YOUR_STRING
esxcli system snmp set -p 161
esxcli system snmp set -L "City, State, Country"
esxcli system snmp set -C noc@example.com
esxcli system snmp set -e yes
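To confirm the agent actually took the settings, you can read the configuration back on the host and then walk it from a client; the hostname and community string below are placeholders:

```shell
# On the ESXi host: dump the running SNMP agent configuration.
esxcli system snmp get

# From a Linux client with net-snmp installed (placeholder host/community):
snmpwalk -v 2c -c YOUR_STRING esxi-host.example.com system
```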

Software Raid in Windows and NVMe U.2 Drive Benchmark

I have recently acquired a couple of Intel 750 NVMe U.2 drives to play around with.  In order to utilize these drives, you need to source a SuperMicro AOC-SLG3-2E4 NVMe PCIe card or a similar variant.  A good write-up on the available HBAs is from our good friends at ServeTheHome.

In my VMware Server, I have passed through the two NVMe drives into a Windows 2016 VM.  From there we launch Disk Management, and below we see the two drives.


For our testing we want raw speed first, so we right-click on a drive and select “New Striped Volume”.  If we wanted redundancy, we would choose a Mirrored Volume.  If we just wanted the combined storage of the two drives to appear to the OS as one drive, we would choose a Spanned Volume.


From there we make sure both of our drives are selected.


We assign the drive a letter, tell the OS to format the drive, give it a name if we wish, and click Finish.

We will get a warning, as the OS will convert the disk from basic to dynamic.

A note: if you receive an error message stating that there is not enough space on the drive, be sure the disks have the same type before you start and that the same amount of space is available on each drive.  For me, one drive was GPT and the other was Basic, resulting in a slight mismatch of drive space.  Once both drives were set to GPT, the drive space was the same and the operation could continue.

The drives will format and then appear once completed.


Once formatted we can run some benchmarks against the hardware.


In a RAID 0, this is essentially what we expect to see: double the reads and double the writes.
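The arithmetic behind that expectation is simple: a two-drive stripe should roughly double a single drive's sequential throughput. A tiny shell sketch with placeholder single-drive numbers (substitute your own benchmark results):

```shell
# Placeholder single-drive sequential figures in MB/s; not measured values.
single_read=2200
single_write=900
drives=2

# A two-drive stripe should roughly scale sequential throughput linearly.
echo "Expected RAID0 read:  $((single_read * drives)) MB/s"
echo "Expected RAID0 write: $((single_write * drives)) MB/s"
```

Real-world results land a little under this ideal, since striping overhead and the HBA's PCIe lanes take their cut.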

The ATTO results are also as expected.  There are some ludicrous speeds here, but you need an application that can actually take advantage of them.
