
VSAN – The Migration from FreeNAS


Well folks, it's my long-awaited blog post about moving my home lab from FreeNAS to VMware VSAN.

Here are the steps I took to migrate from my Home Lab GEN II (FreeNAS) to Home Lab GEN III (VSAN).

Note –

  • I am not focusing on the ESXi setup itself, as I want to focus on the steps to set up VSAN.
  • My home lab is in no way on the VMware HCL. If you are building something like this for production, use the VSAN HCL as your reference

The Plan –

  • Meet the Requirements
  • Back up VMs
  • Update and Prepare Hardware
  • Distribute Existing hardware to VSAN ESXi Hosts
  • Install ESXi on all Hosts
  • Setup VSAN

The Steps –

Meet the Requirements – Detailed list here

  • Minimum of three hosts
  • Each host has a minimum of one SSD and one HDD
  • Each host must be managed by vCenter Server 5.5 and configured as part of a Virtual SAN cluster
  • Minimum of 6GB RAM per host
  • Each host has a pass-through RAID controller as specified in the HCL. The RAID controller must be able to present disks directly to the host without a RAID configuration.
  • 1Gb NIC minimum; I'll be running 2 x 1Gbps NICs. However, 10GbE and jumbo frames are recommended
  • VSAN VMkernel port configured on every host participating in the cluster
  • All disks that will be allocated to VSAN should be clear of any data
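
If you want to double-check a host against these requirements from the ESXi shell, something like the following should work on 5.5 (output will vary by hardware):

    # Show local disks, whether ESXi sees each as SSD or HDD, and VSAN eligibility
    vdq -q
    # Confirm a VMkernel interface is tagged for Virtual SAN traffic
    esxcli vsan network list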

Backup Existing VMs

  • No secret here around backups. I just used the vCenter Server OVF Export to a local disk to back up all my critical VMs (a scripted alternative with ovftool is sketched below)
  • More Information Here
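
If you prefer something scriptable over clicking through the Web Client, VMware's ovftool can pull the same OVF exports from vCenter. A rough sketch; the vCenter address, inventory path, and VM name below are placeholders from my lab:

    # Export a (powered-off) VM from vCenter to a local OVF package
    ovftool "vi://administrator@vcsa.lab.local/HomeLab/vm/DC01" "D:\Backups\DC01\DC01.ovf"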

Update and Prepare Hardware

  • Update all Motherboard (Mobo) BIOS and disk Firmware
  • Remove all HDDs/SSDs from the FreeNAS SAN
  • Remove any data from the HDDs/SSDs. Either of these tools does the job, or see the quick wipe sketch below
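
For the wipe itself, I just boot the old SAN from a Linux live USB and zero out the start of each drive, which clears the old partition tables. A minimal sketch; this is destructive, and /dev/sdb is only an example device, so verify with lsblk first:

    # Identify the disk you intend to wipe
    lsblk -o NAME,SIZE,MODEL
    # Destroy the partition table and the first 100MB of /dev/sdb
    dd if=/dev/zero of=/dev/sdb bs=1M count=100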

Distribute Existing hardware to VSAN ESXi Hosts

  • Current Lab – 1 x VMware Workstation PC, 2 x ESXi hosts booting from USB (Hosts 1 and 2), 1 x FreeNAS SAN
  • Desired Lab – 3 x ESXi hosts with VSAN and 1 x Workstation PC
  • End result after the moves
    • All Hosts ESXi 5.5U1 with VSAN enabled
    • Host 1 – MSI 7676, i7-3770, 24GB RAM, Boot 160GB HDD, VSAN disks (2 x 2TB HDD SATA II, 1 x 60GB SSD SATA III), 5 x pNICs
    • Host 2 – MSI 7676, i7-2600, 32 GB RAM, Boot 160GB HDD, VSAN disks (2 x 2TB HDD SATA II, 1 x 90 GB SSD SATA III), 5 x pNICs
    • Host 3 – MSI 7676, i7-2600, 32 GB RAM, Boot 160GB HDD, VSAN disks (2 x 2TB HDD SATA II, 1 x 90 GB SSD SATA III), 5 x pNICs
    • Note – I have ditched my Gigabyte Z68XP-UD3 mobo and bought another MSI 7676 board. I started this VSAN conversion with the Gigabyte board and it started giving me fits again, similar to the past. There are many posts on the web about bugs with this board. I am simply done with it and will move to a more reliable mobo that has been working well for me.

Install ESXi on all Hosts

  • Starting with Host 1
    • Prior to install, ensure all data has been removed and all disks show up in the BIOS in AHCI mode
    • Install ESXi to the local boot HDD
    • Set up the ESXi base IP address and DNS via the direct console, disable IPv6, and enable the shell and SSH
    • Using the VI Client, set up the basic ESXi networking and vSwitch
    • Using the VI Client, I restored the vCSA and my AD server from OVF and powered them on
    • Once they booted, I logged into the vCSA via the Web Client
    • I built out a Datacenter and added Host 1
    • Created a cluster, but only enabled EVC to support my different Intel CPUs
    • Cleaned up any old DNS settings and ensured all ESXi host records are correct
    • From the Web Client, validated that 2 x HDDs and 1 x SSD are present in the host (see the command sketch after this list)
    • Installed ESXi on Hosts 2 and 3, followed most of these same steps, and added them to the cluster
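
With SSH enabled, the disk check is also easy from the ESXi shell; every device reports an 'Is SSD' field, which confirms the SSD is actually being detected as one:

    # Expect two HDDs (Is SSD: false) and one SSD (Is SSD: true) per host
    esxcli storage core device list | grep -iE "Display Name|Is SSD|Size"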

Setup VSAN

  • Log on to the Web Client
    • Ensure that on all the hosts:
      • Networking is set up and all functions are working
      • NTP is working
      • All expected HDDs/SSDs for VSAN are reporting in to ESXi
    • Create a vSwitch for VSAN and attach networking to it (a per-host CLI sketch follows this list)
      • I attached 2 x 1Gbps NICs; for my load that should be enough
    • Assign the VSAN License Key
      • Click on the Cluster > Manage > Settings > Virtual SAN Licensing > Assign License Key
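
For reference, the same VSAN networking can be laid down per host from the shell. A minimal sketch, assuming vSwitch2, uplink vmnic3, a port group named VSAN, vmk2, and a 172.16.10.x VSAN network; all of these names are just placeholders from my lab:

    esxcli network vswitch standard add -v vSwitch2
    esxcli network vswitch standard uplink add -v vSwitch2 -u vmnic3
    esxcli network vswitch standard portgroup add -v vSwitch2 -p VSAN
    esxcli network ip interface add -i vmk2 -p VSAN
    esxcli network ip interface ipv4 set -i vmk2 -I 172.16.10.11 -N 255.255.255.0 -t static
    # Tag the new VMkernel port for Virtual SAN traffic
    esxcli vsan network ipv4 add -i vmk2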

  • Enable VSAN
    • Under Virtual SAN click on General then Edit
    • Choose ‘Turn on Virtual SAN’
    • Set ‘Add disks to storage’ to Manual
    • Note – for a system on the HCL, chances are the Automatic setting will work without issue. However, my system is not on any VMware HCL and I want to control which drives are added to my disk group.
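
Once VSAN is turned on, each host should report that it has joined the cluster. A quick sanity check from any host's shell:

    # Shows whether the host is enabled for VSAN, the sub-cluster UUID, and its role
    esxcli vsan cluster get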

       

  • Add Disks to VSAN
    • Under Virtual SAN click on ‘Disk Management’
    • Choose the icon with the check boxes on it
    • Finally, add the disks you want in your disk group (a CLI equivalent is sketched below)
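
The same manual disk-group claim can be done per host from the shell if the Web Client gives you trouble. A sketch; the device identifiers are placeholders you would pull from 'esxcli storage core device list' or 'vdq -q':

    # Build the disk group: one SSD (-s) fronting one or more HDDs (-d, repeatable)
    esxcli vsan storage add -s <SSD_device_id> -d <HDD_device_id_1> -d <HDD_device_id_2>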

  • Allow VSAN to complete its tasks; you can check on its progress by going to ‘Tasks’

  • Once complete ensure all disks report in as healthy.

  • Ensure the VSAN General tab is reporting correctly
    • 3 Hosts
    • 3 of 3 SSDs
    • 6 of 6 Data disks

  • Check to see that the datastore is online
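
From the shell, you can also confirm what each host has claimed and that the VSAN datastore has mounted:

    # Disks claimed by VSAN on this host (1 x SSD and 2 x HDD in my layout)
    esxcli vsan storage list
    # The vsanDatastore should show up alongside the local VMFS volumes
    esxcli storage filesystem list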

 

Summary –

Migrating from FreeNAS to VSAN was a relatively simple process. I simply moved, prepared, and installed, and the product came right up. My only issue was a faulty Gigabyte mobo, which I resolved by replacing it. I'll post more as I continue to work with VSAN. If you are interested in more detail around VSAN, I would recommend the following book.

Geeks.com – Time to Say goodbye for now


I was a bit shell-shocked when I went to one of my favorite online stores, geeks.com, only to find out they had closed.

They had been open for 17 years and were one of the first sites I trusted for buying quality products, new or used.

They had a lot of common items but every now and then they had something different or unique. It was one of the reasons why I kept coming back.

I had recommended geeks.com many times, and everyone I sent there let me know what excellent service and products they had.

Well Geeks.com – I salute you – you had a good run, I’m sorry to see you go, and I hope one day you return!

Just a quick note: if you liked geeks.com, then check out http://www.pacificgeek.com/; they are very similar in product and layout.

Home Lab – VMware ESXi 5.1 with iSCSI and freeNAS


Recently I updated my home lab with a freeNAS server (post here). In this post, I will cover my iSCSI setup with freeNAS and ESXi 5.1.

Keep this in mind when reading – this post is about my home lab. My home lab is not a high-performance production environment; its intent is to allow me to test and validate virtualization software. You might question some of the choices I have made here, but keep in mind I've made these choices because they fit my environment and its intent.

Overall Hardware…

Click on these links for more information on my lab setup…

  • ESXi Hosts – 2 x ESXi 5.1, Intel Core i7, USB boot, 32GB RAM, 5 x NICs
  • freeNAS SAN – freeNAS 8.3.0, 5 x 2TB SATA III, 8GB RAM, Zotac M880G-ITX Mobo
  • Networking – Netgear GSM7324 with several VLANs and routing set up

Here are the overall goals…

  • Setup iSCSI connection from my ESXi Hosts to my freeNAS server
  • Use the SYBA dual NICs to make balanced connections to my freeNAS server
  • Enable balancing or teaming where I can
  • Support a CIFS Connection

Here is the basic setup…

freeNAS Settings

Create 3 networks on separate VLANs – 1 for CIFS and 2 for iSCSI (no need for freeNAS teaming)

CIFS

The CIFS settings are simple. I followed the freeNAS guide and set up a CIFS share.

iSCSI

Create 2 x iSCSI LUNs, 500GB each

Set up the basic iSCSI settings under “Services > iSCSI”

  • I used this doc to help with the iSCSI setup
  • The only exception is – Enable both of the iSCSI network adapters in the “Portals” area

ESXi Settings

Set up your iSCSI vSwitch and attach two dedicated NICs

Set up two VMkernel ports for iSCSI connections

Ensure that the first VMkernel port group (iSCSI72) uses ONLY vmnic0 as its active uplink, and vice versa for iSCSI73

Enable the iSCSI LUNs by following the standard VMware instructions

Note – Ensure you bind BOTH iSCSI VMkernel ports to the software iSCSI adapter
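
Port binding can also be done with esxcli if you prefer the shell. A sketch, assuming the software iSCSI adapter came up as vmhba33 and the two iSCSI VMkernel ports are vmk1 and vmk2; check your own adapter and vmk numbers first:

    # Bind both iSCSI VMkernel ports to the software iSCSI adapter
    esxcli iscsi networkportal add -A vmhba33 -n vmk1
    esxcli iscsi networkportal add -A vmhba33 -n vmk2
    # Verify the bindings
    esxcli iscsi networkportal list -A vmhba33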

Once you have your connectivity working, it’s time to setup round robin for path management.

Right-click on one of the LUNs and choose ‘Manage Paths…’

Change the path selection on both LUNs to ‘Round Robin’

Tip – If you make changes to your iSCSI settings after the fact, check your path selection again, as it may revert to the default
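
Setting and re-checking the path policy is quick from the command line too. A sketch; <naa.id> stands in for the actual LUN identifier on your hosts:

    # Set Round Robin on a LUN, then confirm it stuck
    esxcli storage nmp device set --device <naa.id> --psp VMW_PSP_RR
    esxcli storage nmp device list --device <naa.id>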

Notes and other Thoughts…

Browser Cache Issues — I had issues with freeNAS not updating information in its web interface, even after reboots of the NAS and my PC. I moved to Firefox and all the issues went away; clearing my cache in IE also resolved them there.

Jumbo Frames — Can I use jumbo frames with the SYBA dual NICs (SY-PEX24028)? Short answer: NO, I was unable to get them to work in ESXi 5.1. SYBA tech support stated the max jumbo frame size for this card is 7168 and that it supports Windows OSes only. I could get ESXi to accept a 4096 frame size but nothing larger; however, when it was enabled, none of the LUNs would connect, and once I moved the frame size back to 1500 everything worked perfectly. I beat this up pretty hard, adjusting all types of ESXi, networking, and freeNAS settings, but in the end I decided the 7% boost that jumbo frames offer wasn’t worth the time or effort.
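
For anyone who wants to retest jumbo frames on their own gear, the MTU has to be raised end to end (vSwitch, VMkernel ports, physical switch, and the NAS) and then verified with an unfragmented ping. A sketch of the ESXi side only; vSwitch1, vmk1, and the freeNAS IP are placeholders from my lab:

    # Raise the MTU on the iSCSI vSwitch and a VMkernel port
    esxcli network vswitch standard set -v vSwitch1 -m 9000
    esxcli network ip interface set -i vmk1 -m 9000
    # Send an 8972-byte payload (9000 minus headers) with don't-fragment set
    vmkping -d -s 8972 192.168.73.10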

Summary…

These settings enable my 2 ESXi hosts to balance their connections to the iSCSI LUNs hosted by my freeNAS server without the use of freeNAS network teaming or aggregation. It is by far the simplest way to set this up, and the out-of-the-box performance works well.

My advice is – go simple with these settings for your home lab and save your time for beating up more important issues like “how do I shut down Windows 8” :)

I hope you found this post useful and if you have further questions or comments feel free to post up or reach out to me.

Home Lab – freeNAS build with LIAN LI PC-Q25, and Zotac M880G-ITX


I’ve decided to repurpose my IOMega IX4 and build out a freeNAS server for my ever-growing home lab. In this blog post I’m not going to get into the reasons why I chose freeNAS (trust me, I ran through a lot of open source NAS software), but rather focus on the actual hardware build of the NAS device.

Here are the hardware components I chose to build my freeNAS box with…

Tip – Watch for sales on all these items, the prices go up and down daily…

Factors in choosing this hardware…

  • Case – the LIAN LI case supports 7 hard disks (5 being hot-swap) in a small and very quiet enclosure. Need I say more…
  • Power supply – Usually I go with an Antec power supply; however, this time I’m tight on budget, so I went with a Cooler Master 80 PLUS rated power supply
  • Motherboard – The case and the NAS software I chose really drove the mobo selection. I played with a bunch of open source NAS software in VMs; once I made my choice on the case and on freeNAS, it was as simple as finding a board that fit both. However, there were 2 options I was keen on – 1) 6 SATA III ports (to support all the hard disks), and 2) a PCIe x1 slot (to support the dual-port NIC). Note – I removed the onboard wireless NIC and the antenna; no need for them on this NAS device
  • NIC – the SYBA dual NIC is the same one I have used in both of my ESXi hosts; they run on the Realtek 8111e chipset and have served me well. The mobo I chose has the same chipset, so they should integrate well into my environment.
  • RAM – 8GB of RAM. Since I will have ~7TB of usable space with freeNAS, and the general rule of thumb is 1GB of RAM per 1TB of storage, 8GB should be enough.
  • Hard Disks – I chose the hard disks mainly on price, speed, and size. These hard disks are NOT rated for use above RAID 1; however, I believe they will serve my needs accordingly. If you are looking for high-performance, high-duty-cycle HDs, then go with an enterprise-class SAS or SATA disk.
  • SSD – I’ll use this for the cache setup with freeNAS; I just wanted it to be SATA III

Install Issues and PIC’s

What went well…

  • Hard disk installs into case went well
  • Mobo came up without issue
  • freeNAS 8.3.xx installed without issue

Minor Issues….

  • Had to modify (actually drill out) the mounting plate on the LIAN LI case to fit the Cooler Master Power supply
  • LIAN LI mobo mount points were off by about a quarter inch, which leaves a gap when installing the NIC card
  • LIAN LI case is tight in areas where the Mobo power supply edge connector meets the hard disk tray

PICS…

LIAN LI Case

5 Seagate HD’s installed…

Rear view…

Side Panel…

Zotac Mobo with RAM

Removal of the Wireless NIC….

Zotac Mobo installed in case with dual NIC…

Everything Mounted (Except for the SSD)….

Home Lab – More updates to my design


Most recently I posted about adding a Layer 3 switch to my growing home lab. The Netgear Layer 3 switch I added (GSM7324) is performing quite well. In fact, it’s quite zippy compared to my older switches, and for the price it was worth it. However, my ever-growing home lab is having some growing pains – 2 to be exact.

In this post I’ll outline the issues, the solutions I’ve chosen, and my new direction for my home lab.

The issues…

Initially, my thought was that I could use my single ESXi host and Workstation with specific VMs to cover most of my lab needs.

There were two issues I ran into: 1 – Workstation doesn’t support VLANs, and 2 – my trusty IOMega IX4 wasn’t performing very well.

Issue 1 – Workstation VLANs

Plain and simple, Workstation doesn’t support VLANs, and working with only one ESXi host is preventing me from fully using my lab and switch.

Issue 2 – IOMega IX4 Performance

My IOMega IX4 has been a very reliable appliance and it has done its job quite well.

However, when I put any type of load on it (more than one or two VMs booting), its performance becomes a bit intolerable.

The Solutions…

Issue 1 – Workstation VLANs

I plan to still use Workstation for testing of newer ESXi platforms and various software components

I will install a second ESXi host similar to the one I built earlier this year, only this time both hosts will have 32GB of RAM.

The second Host will allow me to test more advanced software and develop my home lab further.

Issue 2 – IOMega IX4 Performance

I’ve decided to separate my personal data from my home lab data.

I will use my IX4 for personal needs and build a new NAS for my home lab.

A New Direction…

My intent is to build out a second ESXi Physical Host and ~9TB FreeNAS server so that I can support a vCloud Director lab environment.

vCD will enable me to spin up multiple test labs and continue to do the testing that I need.

 

So that’s it for now… I’m off to build my second host and my freeNAS server…

Thank you Computer Gods for your divine intervention and BIOS Settings


I’ve been in IT for over 20 years now and in my time I’ve seen some crazy stuff like –

  • Grass growing in a Unisys Green Screen terminal that was sent in for repair by a Lumber yard
  • A goofy screen saver on an IBM PS/2 running OS/2 kept bringing down Token Ring till we found it

But this, friends, is one of the weirder issues I’ve come across…

This all started back in March 2012. I bought some more RAM and a pair of 2TB Hitachi HDs for my Workstation 8 PC. I needed to expand my system and Newegg had a great deal on them. I imaged up my existing Windows 7 OS and pushed it down to the new HD. When the system booted, I noticed that it was running very slowly. I figured this to be an issue with the imaging process, so I decided to install Windows 7 from scratch, but I ran into various installation issues and slowness problems. I put my old Samsung HD back in my system and it booted fine. When I plugged the new Hitachi HD into the system as a second HD via SATA or USB, the problems started again: decreased performance, programs not loading, and choppy video. I repeated these same steps with the 2nd Hitachi HD that I bought and it had the same issues.

A bit perplexed at this point, I figured I had a pair of bad HDs or bad HD firmware. Newegg would not take back the HDs, so I started working with Hitachi. I tried an HD firmware update and RMA’d both HDs, and I still had the same issue. Hitachi sent me a different (but slower) model of HD and it worked fine. So now I knew there was something up with this model of HD.

I started working with Gigabyte – same deal as with Hitachi: a BIOS update, an RMA for a new system board revision (now I’m at Rev 1.3), and I still had the same issue. I sent an HD to Gigabyte in California and they could not reproduce the problem. I’ll spare you all the details, but trust me, I tried every combination I could think of. At this point I had been at this for 5 months and I still could not use my new HDs, and then I discovered the following…

I put in a PCI (Not PCIe) VGA video card into my system and it works…

and then it hit me – “I wonder if this is some weird HDMI Video HD conflict problem”

I asked Gigabyte if disabling onboard HDMI video might help.

They were unsure, but I tried it anyway and sure enough I found the solution!

It was like the computer gods had finally shone down on me from above – halle-freaking-lujah…..

 

 

 

Here are the overall symptoms….

Windows 7 x64 Enterprise or Professional installer fails to load or complete the installation process

If the installation completes, mouse movements are choppy, the system locks up or will not boot

When attaching the Hitachi HD to a booted system via USB, the system will start to exhibit performance issues.

Here is what I found out….

Any Combination of the following products will result in a failure…. Change any one out and it works!

1 x Gigabyte Z68XP-UD3 (Rev 1.0 and 1.3)

1 x Hitachi GST Deskstar 5K3000 HDS5C3020ALA632

1 x PCIe video card with HDMI output (I tried the following cards with the same results – ZOTAC ZT-40604-10L GeForce GT 430 and EVGA GeForce GT 610)

Here is the solution to making them work together….

BIOS under Advanced BIOS Settings – Change On Board VGA to ‘Enable if No Ext PEG’

This simple setting disabled the onboard HDMI video and resolved the conflict between these products.

Summary….

I got to meet some really talented engineers at Hitachi and Gigabyte. All were friendly and worked with me to solve my issue. One person, Danny from Gigabyte, was the most responsive and talented mobo engineer I’ve met. Even though in the end I found my own solution, I wouldn’t have made it there without some of their expert guidance!

Whitebox ESXi 5.x Diskless install


I wanted to build a simple diskless ESXi 5.x server that I could use as an extension to my Workstation 8 lab.

Here’s the build I completed today….

  • Antec Sonata Gen I Case (Own, Buy for ~$59)
  • Antec Earth Watts 650 PS (Own, Buy for ~$70)
  • MSI Z68MS-G45(B3) Rev 3.0 AKA MS-7676 (currently $59 at Fry’s)
  • Intel i7-2600 CPU LGA 1155 (Own, Buy for ~$300)
  • 16GB DDR3-1600 Corsair RAM (Own, Buy for ~$80)
  • Intel PCIe NIC (Own, Buy for ~$20)
  • Super Deluxe VMware 1GB USB Stick (Free!)
  • Classy VMware Sticker on front (Free)

Total Build Cost New — $590

My total cost, as I already owned the hardware – $60 :)

ESXi Installation –

  • Installed ESXi 5.0 via USB CD ROM to the VMware 1GB USB Stick
  • No install issues
  • All NICs and video recognized
  • It’s a very quiet running system that I can use as an extension from my Workstation 8 Home lab…
Front View with Nice VMware Sticker!
Rear View with 1GB VMware USB Stick
System Board with CPU, RAM and NIC – Look Mom no Hard Disks!
Model Detail on the MSI System board, ESXi reports the Mobo as a MS-7676