Home Lab: A List of uncommon or niche products


Part of the joy of building out a home lab or virtualization workstation is finding those one-off items that enable you to build something great, cheap, and unique. Below is a list of some of those niche items and distributors I’ve found along the way. I’ll continue to update this post as we go along, and I encourage you to post up some of your findings too!

Sybausa.com

This place is full of all types of unique adapters and gadgets to make your home lab or workstation PC better. What I like about their product line is the focus on cards with a PCIe x1 connector. Various server-based add-on cards (for example, 2/4-port NICs) typically require a PCIe x4 or x8 slot. However, most home lab boards have plenty of x1 slots and little to no support for x4 and x8. Syba makes a plethora of add-on cards that support x1. Their only downside is poor documentation/support.

Some products I like from them —

  • 2-Port GbE PCIe x1 card (SY-PEX24028): I own and use several of these, and they seem to work quite well. Dislikes – no jumbo frames, and it uses a Realtek 8111E chipset, which means you must add the drivers yourself to support ESXi (see the sketch after this list)
  • Another cool item they make is an M.2 to 4-port SATA III adapter. This little RAID controller plugs directly into an M.2 slot and supports 4 SATA devices. I think this would be handy for smaller systems (i.e., NUC builds), but performance might be a concern.
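
If you want to run one of these Realtek-based cards under ESXi without building a custom ISO, the driver can also be added to an already-installed host, provided the host has at least one working NIC in the meantime. A minimal sketch from the ESXi shell, assuming the community net55-r8168 offline bundle (the same one used in my custom ISO post later on this page) has been copied to a datastore; the path is just an example:

# Allow community-supported VIBs on this host
esxcli software acceptance set --level=CommunitySupported

# Install the offline bundle, then reboot so the driver loads
esxcli software vib install -d /vmfs/volumes/datastore1/net55-r8168-8.039.01-napi-offline_bundle.zip
reboot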

StarTech.com

StarTech is really becoming a great company with a very diverse and well-supported, well-documented product line. I think they are really starting to give Black Box a run for their money. I really like their cable and adapter card lines.

I’ve been using their StarTech USB to DB9 null modem cable to run the CLI on my Netgear managed switch since 2012 and have yet to have an issue with it.

William Lam has blogged many times around the use of NUC style home labs with StarTech Single and Dual USB 3.0 network adapters.

Winyao

Winyao is a “boutique” distributor specializing in NICs, fiber adapters, and transceivers. One item I find of value is their PCIe x1 dual NIC with an Intel or Broadcom chipset. Personally, I don’t know much about this company, nor do I own any of their products, but at $40-$60 per brand-new adapter, I wish I had found them before buying the Syba adapters.

Fractal Design

If you are looking for your next server, workstation, media, or top-of-the-line PC case, then take a peek at Fractal Design. Founded in 2007 and based out of Sweden, they have really started to dominate the custom case design market. Their innovative designs blend elegance with flexibility, which I might add is a hard combination to find. I like their Arc Midi and Arc Mini R2 lines of cases for home lab build-outs. However, when or if my trusty Antec Sonata from 2003 lets me down, Fractal will be next on my list. Here is a great blog post from Erik Bussink around his use of Fractal Design for his 2014 home lab.

If you like my ‘no-nonsense’ blog articles that get straight to the point… then post a comment or let me know… Else, I’ll start writing boring blog content.

Gigabyte Firmware / BIOS update for MergePoint Embedded Management Software and Motherboard


You’d think by now manufacturers would have a solid and concise process around updating their products. They are quick to warn users not to update their BIOS unless there is a problem, and quick to state that if there is a problem they usually won’t support it. This total cycle of disservice is a constant for low-end manufacturers; heck, even some high-end server platforms have the same issues. I had these same concerns when I started to look into updating my current MX31-BS0 motherboard (Mobo).

What can soften this blow a bit? How about the ability to update your BIOS remotely? This is a great feature of the MX31-BS0, and in this blog post I’ll show you how I updated the BIOS and the remote MergePoint EMS (MP-EMS) firmware too.

Initial Steps –

  • My system is powered off and the power supply is still providing standby power to the Mobo.
  • I have set up remote access to the MP-EMS site with an IP address and have access to it via a browser. Additionally, I have validated the vKVM function works without issue.
  • I downloaded the correct Mobo BIOS and BMC (or MP-EMS) firmware and have extracted these files.
  • The steps below were completed on a Gigabyte MX31-BS0 going from BIOS F01 > F10 and MP-EMS 8.01 > 8.41; your system may vary.

1 – Access the MergePoint EMS site

Start out by going to the IP address for the MP-EMS site. From the initial display screen, we can see the MP-EMS firmware version but not the Platform (or Mobo) BIOS version. Why not, you may ask? Well, the MP-EMS will only display Mobo information when the Mobo is powered on. Before you power on your Mobo, I would recommend opening the vKVM session so that you can see the boot screen. When you power on your Mobo (MP-EMS > Power > Control > Power On), use the vKVM screen to halt at the boot menu, or even go into setup and disable all the boot devices.

In this pic, we can see my firmware for the MP-EMS is 8.01, and the BIOS is blank, as the Mobo is not powered on.

2 – Selecting the Mobo BIOS Update

I chose the following to update the Mobo BIOS. Start out by uploading the file: Update > ‘BIOS & ME’ > Choose File > Image.RBU > Upload.

Once the upload is complete, click on ‘Update’ to proceed. NOTE: a warning dialog box appeared for me stating the system would be powered off to update the BIOS. Good thing I’m in the boot menu, as the system will just directly power off with no regard for the system state.

3 – Installing the Mobo BIOS Update: Be Patient for the BIOS install to complete

Once I saw the message ‘BIOS firmware image has been updated successfully’, I exited the browser session and vKVM. Note: I’d recommend closing the browser out entirely and then reopening a new session.


I then restarted my vKVM and MP-EMS sessions and powered on my Mobo, which allowed the BIOS update to continue.

Here is the patience part – my system was going from BIOS F01 > F10 and it rebooted 2 times to complete the update. Be patient; it will complete.

Here is the behavior I noted:

  • First reboot – The system posted normally, cleared the screen, and then displayed a warning message in white text about the BIOS booting with default settings. Very shortly after, it rebooted again.
  • On the 2nd reboot, it posted normally and I pressed F10 to get back to the boot menu. I did this because next we’ll need to update the MP-EMS firmware.

Once the system had rebooted, I refreshed my MP-EMS screen and voilà, there it was: BIOS version F10.

4 – Selecting the MP-EMS Firmware

While the Mobo was booted and sitting at the boot menu, I went into the MP-EMS session and chose the following: Update > BMC > Choose File > 841.img > Upload.


5 – Installing the MP-EMS firmware update

Once the file was uploaded, I could see the Current and New versions. I then chose the Update button, which promptly disconnected my vKVM session, and the Status changed from None to a % Complete.

Again, be patient and allow the system to update. For my system, the % Complete seemed to hang a few times, but it did eventually finish.


At 100% complete, my system did an auto-reboot. When I heard my system beep, I closed my MP-EMS session and started anew.


Shortly after the system booted, I went into the MP-EMS and validated the firmware was now 8.41.


Wrapping this up…

Ever heard the saying “It really is a simple process, we just make it complicated”? Recent BIOS updates and overall system management can feel this way even for simple tasks. Not trying to date myself, but BIOS/firmware updates have been around for decades now. I’ve done countless updates where it was as simple as extracting an update to media and letting it complete on its own. Now, one could argue that systems are more complicated and that local boot devices don’t scale well for large environments, and I’d say both are very true, but that doesn’t mean the process can’t be made simpler.

My recommendation to firmware/BIOS manufacturers – invest in simplicity or make it a requirement for your suppliers. You’ll have happier customers, fewer service calls, and more $$ in your pocket. But then again, if you do, what would I have to blog about?

Am I happy with the way I have to update this Mobo? Yes, I am. For the price I paid, it’s really nice to have a headless environment that I can remotely update. I won’t have to do it very often, so I’m glad I wrote down my steps in this blog.

If you like my ‘no-nonsense’ blog articles that get straight to the point… then post a comment or let me know… Else, I’ll start writing boring blog content.

Home Lab Gen IV – Part III: Best ESXi White box Mobo yet?


It’s been a while since I posted about the Gen IV home lab, but what can I say, I’ve been a bit busy. Initially, when I decided to start this refresh, I planned to wipe just the software, add InfiniBand, and keep most of the hardware. However, as I got into this transformation, I decided it was time for a hardware refresh too. About that time is when I found what could be a nearly perfect ESXi white box mobo.

In this post, I want to write a bit about this new mobo and why I think it’s a great choice for a home lab. The past workhorse of my home lab was my trusty MSI Z68MS-G45(B3) Rev 3.0 (AKA MSI-7676). I started using them in 2012 and had 3 of these mobos. They were solid performers and treated me very well. However, they were starting to age a bit, so I sold them off to a good buddy of mine and used those resources to help fund 3 new mobos, DDR4 RAM, and CPUs.

My new workhorse –

Items kept from Home Lab Gen III:

  • 3 x Antec Sonata Gen I and III cases, each with a 500W Antec PS: I’ve had one of these cases since 2003; now that is some serious return on investment
  • Each server: 2 x 2TB SATA HDD, 1 x 60GB or 90GB SATA III SSD, 1 x 60GB HDD (boot)
  • Each server: 1 x Syba SY-PEX24028 dual-port Gigabit Ethernet network adapter

New Items:

  • 3 x Gigabyte MX31-BS0 – So feature-rich, and I found them for $139 each; this is partly why I feel it’s the best ESXi white box mobo
  • 3 x Intel Xeon E3-1230 v5 – I bought the one without the GPU and saved some $$
  • 3 x 32GB DDR4 RAM – Nothing special here, just 2133MHz DDR4 RAM
  • 3 x Mellanox ConnectX InfiniBand cards (more to come on this soon)

Why I chose the Gigabyte MX31-BS0 –

Likes:

  • Headless environment: This Mobo comes with an AST2400 BMC for headless management, which means I’m no longer tied to my KVM. With a Java-enabled browser, I can view the host screen, reboot, go into the BIOS, perform BIOS updates, view hardware, and make adjustments as if I were physically at the box
  • Virtual media: I can now virtually mount ISOs to the ESXi host without being at the console (still to test an ESXi install)
  • Onboard 2D video: No VGA card needed; the onboard video controller takes care of it all. Why is this important? You can save money by choosing a CPU that doesn’t have an integrated GPU, since the onboard video does this for you
  • vSphere HCL support: Really? Yep, most of the components on this mobo are on the HCL, and Gigabyte lists ESXi 6 as a supported OS. It’s not 100% HCL, but for a white box it’s darn close
  • Full x16 PCIe slot: Goes right into the CPU
  • Full x8 PCIe slot: Goes into the C232 chipset
  • M.2 socket: Supporting 10Gb/s for SSD cards
  • 4 x SATA III ports (white)
  • 2 x SATA III ports usable as SATA DOM ports (orange) with onboard power connectors
  • 2 x Intel i210 1GbE (HCL-supported) NICs
  • E3 v5 Xeon support
  • 64GB RAM support (ECC or non-ECC)
  • 1 x onboard USB 2.0 port

Dislikes (very few):

  • The manual is terrible
  • The Mobo power connector sits horizontal to the board, which made it a bit tight in a common case
  • The 4 x SATA III ports (white) are horizontal too; again, hard to seat and maintain
  • No audio (really not needed, but would be nice)
  • For some installs, it could be a bit limited on PCIe slots

Some pics:

The pic directly below shows 2 windows: Window 1 has the large Gigabyte logo; this is the headless environment controls. From here you can control your host and launch the video viewer (window 2). The video viewer allows you to control your host just as if you were physically there. In window 2, I’m in the BIOS settings for the ESXi host.

This is a stock photo of the MX31-BS0. It’s a bit limited on PCIe slots; however, I don’t need many, as soon I’ll have 20Gb/s InfiniBand running on this board. But that is another post soon to come!

If you like my ‘no-nonsense’ blog articles that get straight to the point… then post a comment or let me know… Else, I’ll start writing boring blog content.

DCUI from ssh for vSphere 6 — so awesome!


This is one of those great command-line items to put in your toolkit that will impress your co-workers. I think this is one of the least-known commands, but it can have a huge impact on an admin’s ability to manage their environment. The vSphere command is simply ‘dcui’, and it is a very simple way to access the DCUI without having to go into your remote IPMI tools (iLO, iDRAC, KVM over IP, etc.). The only downside compared to IPMI tools is that it doesn’t work across a reboot, as you’ll lose your ssh session.

How to use it:

  • After your server has fully booted, start an ssh session to your target server and log on
  • From the command prompt, type in dcui and press Enter

  • From there you can use the DCUI remotely.
  • Press Ctrl+C to exit
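
Here’s what the whole round trip looks like from a terminal; the hostname is just an example:

ssh root@esxi01.lab.local    # log on to the target host
dcui                         # the DCUI now renders inside your ssh session
                             # navigate as usual, then press Ctrl+C to drop back to the shell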

Tips:

  • Have your ssh screen sized the way you want it prior to going into the DCUI. If you resize after connecting, it will exit out of the DCUI
  • The dcui command worked great in PuTTY, but it did not work with the Mac Terminal program. Not sure why, but if you got this working on a Mac then post up!

Reference: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2039638

If you like my ‘no-nonsense’ blog articles that get straight to the point… then post a comment or let me know… Else, I’ll start writing boring blog content.

Home Lab Gen IV – Part II: Lab Clean Up and Adding Realtek 8168 NIC Drivers to ESXi 6u2 ISO


To prep my home lab for ESXi 6.0U2 with VSAN + InfiniBand, I wanted to ensure it was in pristine condition. It had been running ESXi 5.5 + VSAN for many years, but it was in need of some updates. I plan to fully wipe my environment (no backups) and reinstall it all. Yes, that’s right, I’m going to wipe it all – this means goodbye to those Windows 2008 VMs I’ve been hanging on to for years now. Tip: If you’d like to understand my different home lab generations, please see my dedicated page around this topic.

In this post, I am going to focus on listing out my current to-do items, then describe how to flatten all SSDs/HDDs, and finally build a custom ESXi 6.0U2 ISO with Realtek 8168 drivers.

Current to-do list –

Completed

  • PM the hosts – While they are off, it’s a good time to do some routine preventive maintenance (Complete)
  • BIOS and firmware – Check all Mobo BIOS, pNIC, and HDD/SSD firmware (Complete)
  • Netgear switch firmware – It’s doubtful there’s an update, but always worth a check (Complete)
  • Flatten all SSDs/HDDs with MiniTool Partition Wizard (this post)
  • Create an ISO with ESXi 6.0U2 and Realtek 8168 drivers (this post)

Still to do

  • Install a Windows 2012 Server VM for DNS and AD services (local disk)
  • Install the vCenter Server Appliance (local disk)
  • Get InfiniBand functional (needs work)
  • Set up the FT and VSAN networks
  • Enable VSAN
  • Rebuild the VM environment

Flatten all SSDs/HDDs with MiniTool Partition Wizard

Installing VSAN fresh into an environment requires the SSDs/HDDs to be free of data and partition information. MiniTool Partition Wizard is a FREE bootable software product that allows you to remove all the partitions on your ESXi hosts and other PCs. You can download it here >> https://www.partitionwizard.com/partition-wizard-bootable-cd.html

Once I created the boot CD and booted the product, I was quickly able to see all the HDDs/SSDs in my host.

I simply right-clicked on each disk and chose ‘Delete All Partitions’.

After choosing ‘Delete All Partitions’ for all my disks, I clicked ‘Apply’ in the upper right-hand corner. The following window appeared, I chose ‘Yes’ to apply the pending changes, and it removed all the partitions on all my disks quite quickly.

Create an ISO with ESXi 6.0U2 and Realtek 8168 Drivers

ESXi no longer ships with Realtek network drivers, so home lab users who need these drivers will have to create a custom ISO to add them back in. Keep in mind these drivers are unsupported by VMware, so use them at your own risk. My trusty ESXi-Customizer GUI program is no more for ESXi 6; it has moved to a CLI-based product. However, PowerCLI has all the functionality I need to build my custom ISO, so in this section I’ll be using PowerCLI to create it. Keep in mind these are the steps that worked for me; your environment may vary.

To get started, you will need two files and PowerCLI installed on a Windows PC.

  1. File 1: VMware ESXi offline bundle (ZIP) >> www.vmware.com/download

  2. File 2: Realtek 8168 offline bundle >> https://vibsdepot.v-front.de/wiki/index.php/Net55-r8168

  3. PowerCLI download and install >> https://communities.vmware.com/community/vmtn/automationtools/powercli

  Tip: If you don’t know PowerCLI, try starting here

  4. Place the files from steps 1 and 2 into a c:\tmp folder

– POWERCLI COMMANDS – For each command, I have included a screenshot and the actual command, allowing you to copy, paste, and edit for your environment.

  1. Add the ESXi 6.0U2 and Realtek 8168 bundles to the local software depot

Add-EsxSoftwareDepot C:\tmp\update-from-esxi6.0-6.0_update02.zip

Add-EsxSoftwareDepot C:\tmp\net55-r8168-8.039.01-napi-offline_bundle.zip

2. Confirm the products are in the depot

Get-EsxSoftwareDepot

3. List out the ESXi Image Profiles

Get-EsxImageProfile

4. Create a Clone Image to be modified – Ensure you are targeting the “ESXi…..standard” profile from step 3

New-EsxImageProfile -CloneProfile ESXi-6.0.0-20160302001-standard -Name "RealTek8168a"

Forward-looking tip: Whatever name you choose will show up in your boot ISO

5. Set the acceptance level to CommunitySupported – remember, Realtek drivers are unsupported by VMware

Set-EsxImageProfile -Name RealTek8168a -AcceptanceLevel CommunitySupported

When prompted for ImageProfile, enter – RealTek8168a

6. Ensure the Realtek net55-r8168 driver is loaded from the local depot (screenshot shortened)

Get-EsxSoftwarePackage

7. Add the Realtek software package to the profile

Add-EsxSoftwarePackage

ImageProfile: RealTek8168a

SoftwarePackage[0]: net55-r8168 8.039.01-napi

Tip: You MUST enter the full name here; if you just use the short name, it will not work

8. Validate the Realtek drivers are now part of the RealTek8168a profile (screenshot shortened)

(Get-EsxImageProfile "RealTek8168a").VibList

9. Export the profile to an ISO

Export-EsxImageProfile -ImageProfile "RealTek8168a" -ExportToIso -FilePath c:\tmp\RealTek8168a.iso

And that’s it… now with my cleaned/updated hosts, flattened HDDs/SSDs, and a newly pressed custom ISO, I am ready to install ESXi onto my systems. Next steps for me will be to install ESXi, the AD/DNS VM, and the vCenter Server Appliance. However, my next post will be focused on getting InfiniBand running in my environment.

If you like my ‘no-nonsense’ blog articles that get straight to the point… then post a comment or let me know… Else, I’ll start writing boring blog content.

Home Lab Gen IV – Part I: To InfiniBand and beyond!


I’ve been running ESXi 5.5 with VSAN using a Netgear 24-port managed gigabit switch for some time now, and though it has performed okay, I’d like to step up my home lab to support the emerging vSphere features (VSAN 6.x, FT-SMP, and faster vMotion). To support some of these features, 10Gb/s is HIGHLY recommended if not fully required. Looking at 10GbE switches and pNICs, the cost is prohibitive for a home lab. I’ve toyed around with InfiniBand in the past (see my Xsigo posts here), and since then I’ve always wanted to use this SUPER fast and cost-effective technology. Historically, the cost to do HPC (high-performance computing) has always been very high. However, in recent years the InfiniBand price per port has become very cost effective for the home lab.

Let’s take a quick peek at the speed InfiniBand brings. When most of us were still playing around with 100Mb/s Ethernet, InfiniBand was already providing 10Gb/s per port, way back in 2001. And in most cases InfiniBand switches have a non-blocking backplane, so a 24-port, 10Gb/s-per-port, full-duplex, non-blocking switch will support 480Gb/s! Over time InfiniBand speeds have greatly increased, and the older switches have dropped in price, making InfiniBand a good choice for a growing home lab. For most home labs, a 40Gb/s-per-port QDR switch is financially achievable. Even the 20Gb/s DDR or 10Gb/s SDR switches give ample speed and are VERY cost effective. However, step above QDR and you’ll find the price point is a bit too steep for home lab use.

So let’s take a look at the price/speed comparisons for InfiniBand vs. 10Gb/s Ethernet.

                     10Gb/s                       20Gb/s                       40Gb/s
InfiniBand HCA       2-port 10Gb/s ($15-$75)      2-port 20Gb/s ($20-$100)     2-port 40Gb/s ($30-$150)
InfiniBand Switch    24-port SDR (~$30-$70)       24-port DDR (~$70-$120)      8-36 port QDR (~$250-$500)
InfiniBand Cable     CX4 (SFF-8470) ($15-$30)     CX4 (SFF-8470) ($15-$30)     QSFP (SFF-8436) ($15-$30)
Ethernet Switch      8-port 10GbE ($700-$900)     n/a                          n/a
Ethernet pNIC        2-port 10GbE ($300-$450)     n/a                          n/a
Ethernet Cable       1M/3ft CAT 6a ($5-$10)       n/a                          n/a

Let’s break this down a bit further. I used the high-dollar figure for each line item above (except InfiniBand cables, which I priced at $15 each) and figured 3 HCAs or pNICs and 6 cables for my 3 hosts.

Ethernet 10Gb/s – (3-host total cost $2310)

  • Cost per switch port – $900 switch / 8 ports = $112.50 per port
  • Cost to enable 3 hosts with pNICs and cables – (3 hosts x $450 pNIC) + ((2 cables x 3 hosts) x $10 each) = $1410 for three hosts, or $470 per host
  • Total cost to enable 3 hosts plus the switch – $1410 + $900 = $2310
  • Fully populated 8-port switch supporting 4 hosts = $2780

InfiniBand SDR 10Gb/s – (3-host total cost $385)

  • Cost per switch port – $70 / 24 ports = $2.91 per port
  • Host costs – (3 hosts x $75 HCA) + ((2 cables x 3 hosts) x $15) = $315 ($105 per host)
  • Total cost to enable 3 hosts plus the switch – $315 + $70 = $385
  • Fully populated 24-port switch supporting 12 hosts = $1330
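
If you want to sanity-check my math, here is the SDR line as a quick shell calculation (note the $15 cable price the per-host figures are built on; the same formula applies to the DDR and QDR tiers):

hosts=3; hca=75; cables_per_host=2; cable=15; switch=70
per_host=$(( hca + cables_per_host * cable ))    # $75 + 2 x $15 = $105 per host
echo $(( hosts * per_host + switch ))            # 3 x $105 + $70 = $385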

InfiniBand DDR 20Gb/s – (3-host total cost $510)

  • Cost per switch port – $120 / 24 ports = $5 per port
  • Host costs – (3 hosts x $100 HCA) + ((2 cables x 3 hosts) x $15) = $390 ($130 per host)
  • Total cost to enable 3 hosts plus the switch – $390 + $120 = $510
  • Fully populated 24-port switch supporting 12 hosts = $1680

InfiniBand QDR 40Gb/s – (3-host total cost $1040)

  • Cost per switch port – $500 / 24 ports = $20.83 per port
  • Host costs – (3 hosts x $150 HCA) + ((2 cables x 3 hosts) x $15) = $540 ($180 per host)
  • Total cost to enable 3 hosts plus the switch – $540 + $500 = $1040
  • Fully populated 24-port switch supporting 12 hosts = $2660

From these costs you can clearly see that InfiniBand is TRULY the best value for speed and port price. Even if you got a great deal on 10GbE, let’s say 50% off, it would still be slower and cost you more. Heck, for the price difference you could easily buy an extra InfiniBand switch as a backup.

With this in mind, my plan is to replace my backend GbE network with InfiniBand, using IPoIB (IP over InfiniBand) for VSAN, vMotion, and FT traffic, and keeping my 1GbE network for the VMs and ESXi management traffic. However, without knowledge, wisdom cannot be achieved. So, my next steps are to learn more about InfiniBand, review these great videos by Mellanox, and then come up with a plan to move forward using this technology.

Check out these Videos: InfiniBand Principles Every HPC Expert MUST Know!

Pathping for Windows – think of it as a better way to ping


I was working on a remote server today and needed better stats than ping and trace route provide. I could not install additional software, and then I came across the Windows command ‘pathping’. It’s hard to believe this tool has been around since the NT4 days and I don’t recall ever hearing about it. Give it a go when you get a chance; I’m adding it to my “virtual” tool belt.
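
Usage is as simple as ping; a quick example from a command prompt (the target address is just an example):

pathping -n 8.8.8.8

The -n switch skips reverse DNS lookups on each hop. Note that pathping pauses to gather loss statistics after the trace, so expect the full run to take a few minutes.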

More information here… https://en.wikipedia.org/wiki/PathPing

If you like my ‘no-nonsense’ blog articles that get straight to the point… then post a comment or let me know… Else, I’ll start writing boring blog content.

Quick ways to check disk alignment with ESXi and Windows VMs


There are two simple checks a virtual infrastructure (VI) admin should be doing to ensure ESXi datastores and Windows VMs are properly aligned. If either is misaligned, performance issues will follow. Though I’m not going to get into the whys and hows of alignment issues, I will show you how to quickly check.

1 – ESXi Datastores (DS)

By default, if the VI admin formats the DS with vCenter Server, or while directly connected to a host via the VI Client, the starting sector will be 2048. A starting sector of 2048 will satisfy nearly all of the storage vendors out there; however, the 2048 starting sector should still be validated with your storage vendor.

If the VI admin chooses to format the DS via a script, then they should choose a starting sector of 2048, or whatever the storage vendor recommends.

Example – partedUtil setptbl $disk gpt “1 2048…..” More info here on partedUtil
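
For reference, a fully spelled-out invocation looks something like the sketch below; the device name and end sector are illustrative placeholders, while the long GUID is the standard VMFS partition type GUID:

partedUtil setptbl "/vmfs/devices/disks/naa.50026b7268099fbd" gpt "1 2048 3907024064 AA31E02A400F11DB9590000C2911D1B8 0"

Reading the quoted partition spec left to right: partition number, start sector, end sector, partition type GUID, and the attribute flag.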

Here is a simple command to check your “Start Sector”. SSH or direct console into a host that has the DSs you want to check and run this command.

~ # esxcli storage core device partition list

(Screenshot: partition list output showing the start sector for each device)

Some notes about this –

Red box – the local boot disk, so its starting sector will be 64; this is okay, it’s just an ESXi boot disk

Yellow, green, and blue – all VSAN disks, and all have a starting sector of 2048 << This is what I’m looking for; I want to make sure all DS disks start at 2048, because if not they could experience performance issues.

2 – Windows VM Check

Windows checks are pretty easy too; the starting sector offset should be 2048. Note the screenshot below shows a Partition Starting Offset of 1,048,576, and note that it’s labeled in bytes, not sectors. To find the starting sector, just divide the Partition Starting Offset by the Bytes per Sector. Simple math tells us it’s right – 1048576 / 512 = 2048 sectors. If your Partition Starting Offset is anything other than 1,048,576 bytes (2048 sectors), then the VM is not aligned and will need to be adjusted.

From a command prompt, type in ‘msinfo32.exe’ to bring up this screen, navigate as shown below, and note your Partition Starting Offset.
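
If you’d rather not dig through a GUI, wmic will report the same value from that command prompt (it queries the same data msinfo32 displays):

wmic partition get Name, StartingOffset

Divide StartingOffset by your Bytes per Sector (typically 512) and you should land on 2048, matching the math above.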

(Screenshot: msinfo32 showing the Partition Starting Offset)


VSAN – Setting up VSAN Observer in my Home Lab


VSAN Observer is a slick way to display diagnostic statistics not only around how the VSAN is performing but how the VMs are as well.

Here are the commands I entered in my Home Lab to enable and disable the Observer.

Note: this is a diagnostic tool and should not be allowed to run for long periods of time, as it will consume many GB of disk space. Ctrl+C will stop the collection.

How to Start the collection….

  • vCenter239:~ # rvc root@localhost << Log on to the vCenter Server Appliance | Note: you may have to enable SSH
  • password:
  • /localhost> cd /localhost/Home.Lab
  • /localhost/Home.Lab> cd computers/Home.Lab.C1 << Navigate to your cluster | My datacenter is Home.Lab, and my cluster is Home.Lab.C1
  • /localhost/Home.Lab/computers/Home.Lab.C1> vsan.observer ~/computers/Home.Lab.C1 --run-webserver --force << Enter this command to get things started; keep in mind double dashes (“--”) are used in front of run-webserver and force
  • [2014-09-17 03:39:54] INFO WEBrick 1.3.1
  • [2014-09-17 03:39:54] INFO ruby 1.9.2 (2011-07-09) [x86_64-linux]
  • [2014-09-17 03:39:54] WARN TCPServer Error: Address already in use – bind(2)
  • Press <Ctrl>+<C> to stop observing at any point ...[2014-09-17 03:39:54] INFO WEBrick::HTTPServer#start: pid=25461 port=8010 << Note the Port and that Ctrl+C to stop
  • 2014-09-17 03:39:54 +0000: Collect one inventory snapshot
  • Query VM properties: 0.05 sec
  • Query Stats on 172.16.76.231: 0.65 sec (on ESX: 0.15, json size: 241KB)
  • Query Stats on 172.16.76.233: 0.63 sec (on ESX: 0.15, json size: 241KB)
  • Query Stats on 172.16.76.232: 0.68 sec (on ESX: 0.15, json size: 257KB)
  • Query CMMDS from 172.16.76.231: 0.74 sec (json size: 133KB)
  • 2014-09-17 03:40:15 +0000: Live-Processing inventory snapshot
  • 2014-09-17 03:40:15 +0000: Collection took 20.77s, sleeping for 39.23s
  • 2014-09-17 03:40:15 +0000: Press <Ctrl>+<C> to stop observing
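
Distilled down, without the console output, the start sequence is just four commands (the datacenter and cluster names match my lab; substitute yours):

rvc root@localhost
cd /localhost/Home.Lab
cd computers/Home.Lab.C1
vsan.observer ~/computers/Home.Lab.C1 --run-webserver --force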

How to stop the collection… Note: the collection has to be started and running to view the web statistics shown in the screenshots below

  • ^C2014-09-17 03:40:26 +0000: Execution interrupted, wrapping up … << Control+C is entered and the observer goes into shutdown mode
  • [2014-09-17 03:40:26] INFO going to shutdown …
  • [2014-09-17 03:40:26] INFO WEBrick::HTTPServer#start done.
  • /localhost/Home.Lab/computers/Home.Lab.C1>

How to launch the web interface…

I used Firefox to log on to the web interface of VSAN Observer; IE didn’t seem to function correctly.

Simply go to http://[IP of vCenter Server]:8010 Note: this is the port number noted above when starting, and it’s http, not https.

So what does it look like, and what is the purpose of each screen? Note: by default, the ‘? What am I looking at’ help is not displayed; I expanded this view to enhance the description of each screenshot.

References:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2064240

http://www.yellow-bricks.com/2013/10/21/configure-virtual-san-observer-monitoring/

VSAN – The Migration from FreeNAS


Well folks, it’s my long-awaited blog post around moving my home lab from FreeNAS to VMware VSAN.

Here are the steps I took to migrate my Home Lab GEN II with FreeNAS to Home Lab GEN III with VSAN.

Note –

  • I am not putting a focus on ESXi setup, as I want to focus on the steps to set up VSAN.
  • My home lab is in no way on the VMware HCL; if you are building something like this for production, you should use the VSAN HCL as your reference

The Plan –

  • Meet the Requirements
  • Backup VMs
  • Update and Prepare Hardware
  • Distribute Existing hardware to VSAN ESXi Hosts
  • Install ESXi on all Hosts
  • Setup VSAN

The Steps –

Meet the Requirements – Detailed list here

  • Minimum of three hosts
  • Each host has a minimum of one SSD and one HDD
  • The hosts must be managed by vCenter Server 5.5 and configured as a Virtual SAN cluster
  • Minimum 6GB RAM per host
  • Each host has a pass-through RAID controller as specified in the HCL. The RAID controller must be able to present disks directly to the host without a RAID configuration.
  • 1Gb NICs; I’ll be running 2 x 1Gb/s NICs. However, 10Gb and jumbo frames are recommended
  • A VSAN VMkernel port configured on every host participating in the cluster (a CLI sketch follows this list)
  • All disks that will be allocated to VSAN should be clear of any data.
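
For the VSAN VMkernel port requirement above, the Web Client steps later in this post are the easy route, but it can also be done from the ESXi shell. A minimal sketch, assuming a portgroup named ‘VSAN’ already exists and using illustrative vmk/IP values:

# Create a VMkernel interface on the VSAN portgroup and give it a static IP
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=VSAN
esxcli network ip interface ipv4 set -i vmk1 -I 172.16.76.241 -N 255.255.255.0 -t static

# Tag the new interface for VSAN traffic
esxcli vsan network ipv4 add -i vmk1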

Backup Existing VMs

  • No secret here around backups. I just used the vCenter Server OVF Export to a local disk to back up all my critical VMs
  • More Information Here
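
If you have a pile of VMs to protect, the same OVF export can be scripted with VMware’s OVF Tool instead of clicking through the client. A hedged one-liner; the host IP, VM name, and output path are placeholders:

ovftool vi://root@172.16.76.231/MyCriticalVM C:\backup\MyCriticalVM.ovf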

Update and Prepare Hardware

  • Update all motherboard (Mobo) BIOS and disk firmware
  • Remove all HDDs/SSDs from the FreeNAS SAN
  • Remove any data from the HDDs/SSDs. Either of these tools does the job

Distribute Existing hardware to VSAN ESXi Hosts

  • Current lab – 1 x VMware Workstation PC, 2 x ESXi hosts booting to USB (hosts 1 and 2), 1 x FreeNAS SAN
  • Desired lab – 3 x ESXi hosts with VSAN and 1 x Workstation PC
  • End results after moves
    • All hosts ESXi 5.5U1 with VSAN enabled
    • Host 1 – MSI 7676, i7-3770, 24GB RAM, boot 160GB HDD, VSAN disks (2 x 2TB SATA II HDD, 1 x 60GB SATA III SSD), 5 x pNICs
    • Host 2 – MSI 7676, i7-2600, 32GB RAM, boot 160GB HDD, VSAN disks (2 x 2TB SATA II HDD, 1 x 90GB SATA III SSD), 5 x pNICs
    • Host 3 – MSI 7676, i7-2600, 32GB RAM, boot 160GB HDD, VSAN disks (2 x 2TB SATA II HDD, 1 x 90GB SATA III SSD), 5 x pNICs
    • Note – I have ditched my Gigabyte Z68XP-UD3 Mobo and bought another MSI 7676 board. I started this VSAN conversion with it and it began giving me fits again, similar to the past. There are many web posts about bugs with this board. I am simply done with it and will move to a more reliable Mobo that has been working well for me.

Install ESXi on all Hosts

  • Starting with host 1
    • Prior to install, ensure all data has been removed and all disks show up in the BIOS in AHCI mode
    • Install ESXi to the local boot HDD
    • Set up the ESXi base IP address via the direct console, set DNS, disable IPv6, and enable the shell and SSH
    • Using the VI Client, set up the basic ESXi networking and vSwitch
    • Using the VI Client, I restored the vCSA and my AD server from OVF and powered them on
    • Once booted, I logged into the vCSA via the Web Client
    • I built out the datacenter and added host 1
    • Created a cluster, but only enabled EVC to support my different Intel CPUs
    • Cleaned up any old DNS settings and ensured all ESXi hosts are correct
    • From the Web Client, validated that the 2 x HDDs and 1 x SSD are present in the host
    • Installed ESXi hosts 2 and 3, followed most of these steps, and added them to the cluster

Setup VSAN

  • Logon to the Web Client
    • Ensure on all the hosts:
      • Networking is set up and all functions are working
      • NTP is working
      • All expected HDDs for VSAN are reporting in to ESXi
    • Create a vSwitch for VSAN and attach networking to it
      • I attached 2 x 1Gb/s NICs; for my load that should be enough
    • Assign the VSAN license key
      • Click on the Cluster > Manage > Settings > Virtual SAN Licensing > Assign License Key

  • Enable VSAN
    • Under Virtual SAN, click on General, then Edit
    • Choose ‘Turn on Virtual SAN’
    • Set ‘Add disks to storage’ to Manual
    • Note – for a system on the HCL, chances are the Automatic setting will work without issue. However, my system is not on any VMware HCL and I want to control the drives added to my disk group.

  • Add disks to VSAN
    • Under Virtual SAN, click on ‘Disk Management’
    • Choose the icon with the check boxes on it
    • Finally, add the disks you want in your disk group

  • Allow VSAN to complete its tasks; you can check on its progress by going to ‘Tasks’

  • Once complete, ensure all disks report in as healthy.

  • Ensure the VSAN General tab comes up correctly
    • 3 hosts
    • 3 of 3 SSDs
    • 6 of 6 data disks

  • Check to see if the datastore is online

Summary –

Migrating from FreeNAS to VSAN was a relatively simple process. I simply moved, prepared, and installed, and the product came right up. My only issue was working with a faulty Gigabyte Mobo, which I resolved by replacing it. I’ll post up more as I continue to work with VSAN. If you are interested in more detail around VSAN, I would recommend the following book.