** NOV-2020 Update: I’ve updated my Home Lab to Gen 7; please go to my BLOG Series for more information on this update. **
** MAR-2020 Update: Though I had good luck with the HP 593742-001 NC523SFP DUAL PORT SFP+ 10Gb card in my Gen 4 Home Lab, I found it faulty when running in my Gen 5 Home Lab. It could be that I was using a PCIe x4 slot in Gen 4, or it could be that the card runs too hot to touch. This card has since been removed from the VMware HCL, HP has advisories out about it, and after doing some poking around there seem to be lots of issues with it. **
Original Post Below:
I have decided to update my Home Lab to Generation V. In doing this I am going to follow the best practices laid out in my ‘Home Lab Generations’ and ‘VMware Home Labs: A Definitive Guide’ posts. As you read through the ‘Home Lab Generations’ page, you should notice a theme around planning each generation and documenting its outcomes and unplanned items. In this blog post, I am going to start laying out the design considerations, which include the initial use case/goals and the resources needed for Gen V.
First off, let’s answer why I am updating my home lab. Over the past 4+ Home Lab generations, I had deemed that CPUs with 4 physical cores and up to 32GB RAM would meet the demands of my use cases, and in most cases they did. Most recently, however, I started running into resource constraints when I wanted to use multiple VMware products, which forced me to do a bit of shuffling to run the software I wanted. This is not the fault of VMware; it’s just that there are so many products with resource demands, and my current home lab was undersized. Additionally, the fan noise from the InfiniBand switch and other gear was just too loud.
First – Here are my initial use cases and goals:
- Be able to run vSphere 6.x and vSAN Environment
- Reuse as much as possible from Gen IV Home lab, this will keep costs down
- Choose products that bring value to the goals and are cost effective; being on the VMware HCL is a plus but not necessary for a home lab
- Move networking (vSAN / FT) from 40Gb InfiniBand to a 10GbE switch
- Have enough CPU cores and RAM to be able to support multiple VMware products (ESXi, VCSA, vSAN, vRO, vRA, NSX, LogInsight)
- Be able to fit the environment into 3 ESXi hosts
- The environment should run well, but it doesn’t have to be a production-level environment
Second – Evaluate Software, Hardware, and VM requirements:
Before I run off and start buying items, I need to look at the software requirements on the hardware. Using the table from my ‘Home Labs: A Definitive Guide’, I can start to figure out how much CPU, RAM, and disk space I’ll need. Through experience working with these products, I already know my dual-port 10GbE per-host network is adequate to support them. It’s the other items I’m concerned with in this build.
Using this information I can quickly see I need the following across all my hosts:
- CPU: ~32 cores, of which 30 are vCores and 4 are pCores
- RAM: ~95GB RAM, of which 64GB used by VMs and 32 GB RAM used by ESXi + vSAN
- Disk: ~1.3 TB of disk space
- For vROPS I used this sizer: http://vropssizer.vmware.com/sizing-wizard/choose-installation
Lastly, I figure I run between 20-30 VMs for testing (Windows, Linux, etc.), which can be oversubscribed:
- CPU: 30 x 2 vCPU = ~60 vCPU
- RAM: 30 x 8GB = ~240GB
- Disk: 30 x 30GB = ~900GB
Total resource needs for the Cluster:
- CPUs: 92 cores
- RAM: 335GB
- Disk: 2.2TB
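As a quick back-of-the-napkin check, the cluster totals above can be reproduced from the two lists; here is a small Python sketch (the inputs are just the figures already stated, nothing new):

```python
# Sanity-check the cluster totals using the figures from the two lists above.
# VMware-product footprint: ~32 cores, ~95GB RAM, ~1.3TB disk.
# Test-VM pool: 30 VMs at 2 vCPU / 8GB RAM / 30GB disk each.
product_cores, product_ram_gb, product_disk_tb = 32, 95, 1.3
vm_count, vm_vcpu, vm_ram_gb, vm_disk_gb = 30, 2, 8, 30

total_cores = product_cores + vm_count * vm_vcpu             # 32 + 60
total_ram = product_ram_gb + vm_count * vm_ram_gb            # 95 + 240
total_disk = product_disk_tb + vm_count * vm_disk_gb / 1000  # 1.3 + 0.9

print(f"CPU: {total_cores} cores, RAM: {total_ram}GB, Disk: {total_disk:.1f}TB")
# → CPU: 92 cores, RAM: 335GB, Disk: 2.2TB
```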
Third – Home Lab Design Considerations
As you can see from the totals above, my existing Gen IV Home Lab would not be able to keep up. Do keep in mind the CPU/RAM totals assume a 1:1 ratio and don’t take consolidation into account; for a home lab I should be able to reduce these numbers quite a bit. What I do next is review my Home Lab design considerations. These, plus the information from step two, will help me decide which hardware to select.
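To make the consolidation point concrete, here is a sketch of what an oversubscription ratio does to the raw numbers. The 4:1 vCPU-to-pCore ratio is my own rule-of-thumb assumption for lab workloads, not a VMware recommendation:

```python
# Raw 1:1 vCPU demand from the sizing step, consolidated with an assumed
# 4:1 vCPU-to-pCore oversubscription ratio (a lab rule of thumb).
raw_vcpus = 92
ratio = 4
hosts = 3

pcores_needed = -(-raw_vcpus // ratio)       # ceiling division: 23 pCores
cores_per_host = -(-pcores_needed // hosts)  # minimum pCores per host

print(f"{pcores_needed} pCores total, at least {cores_per_host} per host")
# → 23 pCores total, at least 8 per host
```

At that ratio, dual 8-core Xeons per host (16 pCores each, 48 across the cluster) would leave comfortable headroom.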
Home Lab Design Considerations:
Initial Cost – How much does the Home Lab solution cost to build out?
This is always top of mind for me, and I do a lot of cost comparisons, research, and evaluation. For this build I found that reusing what I have, plus purchasing a few more items, kept my cost lower and delivered more value than buying new or even used hardware.
Noise – When the home lab is running, how much noise will it produce, and are the noise levels appropriate for your use case?
In this design I’m looking to reduce fan noise. My lab is in my home office, so it needs to be whisper quiet.
Heat / Power Consumption – Does the home lab produce too much heat for the intended location?
Heat and power are always a balancing act. I want something that will not heat up my room and has enough cores to do the job, but doesn’t consume so much power that I don’t want to turn it on.
Monthly Operational Cost – Based on power draw (watts) and the average cost of electricity in the USA, what is the estimated cost of running 24x7 for 30 days?
My home lab power needs are as follows:
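As a rough way to estimate that monthly number, here is a small sketch. The wattage and electricity rate are placeholder assumptions on my part (~150W per host, ~$0.13/kWh); substitute measured values from a power meter for a real estimate:

```python
# Rough 24x7 monthly operating-cost estimator. The draw and rate below are
# placeholder assumptions, not measurements from my hosts.
watts_per_host = 150
hosts = 3
rate_per_kwh = 0.13
hours = 24 * 30  # 24x7 for a 30-day month

kwh = watts_per_host * hosts / 1000 * hours
monthly_cost = kwh * rate_per_kwh
print(f"{kwh:.0f} kWh/month, about ${monthly_cost:.2f}")
# → 324 kWh/month, about $42.12
```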
Footprint, Space, and Flexibility – How much space does the solution take up? Based on the type of product you choose, how flexible is the solution when hardware or other changes are needed to expand?
What I’m really looking for are 3 tall tower PC cases with maximum flexibility: where the power supplies are located, the number of drives each can hold, support for many different motherboards, and vertical/horizontal slots for host cards.
Future Proofing – Software products are constantly adding new requirements for home labs (for example: 10GbE networking, or more HDDs/SSDs). How does the solution align to bleeding-edge products without a major overhaul?
If I build this system beefy enough, I should be in a position to run just about any software that comes my way.
Hands-on Software – Measures viability from the ESXi layer through the entire stack of products.
My new system should be designed to accommodate the software stack mentioned in step two.
Hands-on Hardware – Considers the effectiveness of the hardware solution against real-world technologies.
Choosing a system that allows the most flexibility is key here to being future-looking.
ESXi / vSAN HCL Support – How does the hardware align to the hardware compatibility guides?
Not top of mind for a home lab, as I’m not looking for VMware to support it. However, the closer I can get to the HCL, the better off I will be.
Hyper-Converged Infrastructure – How well does the solution adapt to HCI (vSAN)?
I should ensure that the JBOD disk controller and NIC each have a PCIe x8 slot or better, and that I can fit many drives into my case.
Refresh Cost – Financially, what would it take to refresh, replace, or update the hardware solution? Consider how adaptable the solution is to changing hardware and software demands.
I want to choose products that are cost effective and that I can reuse down the road. This should put my lab in a position to keep costs down.
Fourth – Choosing Hardware
Based on my estimates above, I’m going to need a very flexible case, a dual-CPU motherboard, lots of RAM, and good network connectivity.
Here is what each Host will have:
- Rosewill RISE Case
- JINGSHA EATX X79 Dual CPU motherboard (Worked with 6.7, did not test with 7.0)
- 128GB DDR3 ECC RAM
- 4 x 200GB SAS SSD
- 4 x 600GB SAS HDD
- 1 x IBM 5210 JBOD Disk Controller
- 1 x HP 593742-001 NC523SFP DUAL PORT SFP+ 10Gb card (found faulty with ESXi)
- Connect into a MikroTik 10GbE CN309 switch
Here are the resources I’ll need to build out my 3 hosts:
- To meet the initial use case/goals, I will be investing quite a bit into this total refresh.
- Here are some of the initial Gen V resource choices (still in the works and not all proven out):
- Purchase Items:
- Mobo: JINGSHA EATX X79 Dual CPU motherboard LGA 2011, supports Xeon v2 processors (Alibaba $86)
- Mobo Stands: 4mm Nylon Plastic Pillar (Amazon $8)
- RAM: 128GB DDR3 ECC (eBay $110)
- CPU: Xeon E5-2640 v2, 8 Cores / 16 HT (eBay $30)
- CPU Cooler: DEEPCOOL GAMMAXX 400 (Amazon $19)
- Video: ASUS Neon PCIe 1x with DMS-59 Splitter (eBay $15)
- Video Riser: PCI-E 1x to 16x Riser Adapter (Amazon $4)
- Disk: 600GB SSD (eBay $80 for 10 drives)
- Power Supply Adapter: Dual 8(4+4) Pin Male Motherboard Power Adapter Cable (Amazon $11)
- Power Supply Extension Cable: StarTech.com 8in 24 Pin ATX 2.01 Power Extension Cable (Amazon $9)
- CableCreation Internal Mini SAS SFF-8643 to (4) 29pin SFF-8482 (Amazon $18)
- Case: Rosewill RISE Glow EATX (Newegg $54)
- Existing Items I’ll move over from the old 3 Hosts:
- Power Supplies
- 200GB SAS SSD
- 600GB SAS HDD
- 2TB SATA HDD
- 64GB USB Thumb Drive
- IBM 5210 JBOD Disk Controller
- CableCreation Internal Mini SAS SFF-8643 to (4) 29pin SFF-8482 connectors with SATA Power,1M
- HP 593742-001 NC523SFP DUAL PORT SFP+ 10Gb SERVER ADAPTER W/ HIGH PROFILE BRACKET
- HP 684517-001 Twinax SFP 10gbe 0.5m DAC Cable Assembly
The total cost for me to upgrade each server using purchased and existing items came out to ~$425 US each. If you built this configuration without existing items, the cost would be around ~$850 US. Clearly, reusing my existing hardware and taking a step back to older Xeons and DDR3 RAM saved quite a few dollars.
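Those per-host figures make the reuse savings easy to total up across the cluster (a trivial sketch using only the numbers quoted above):

```python
# Totaling the reuse savings from the per-host figures quoted above.
cost_with_reuse = 425  # per host, purchased + existing items
cost_all_new = 850     # per host, buying everything outright
hosts = 3

savings = (cost_all_new - cost_with_reuse) * hosts
print(f"Cluster cost: ${cost_with_reuse * hosts}, saved ~${savings} by reusing parts")
# → Cluster cost: $1275, saved ~$1275 by reusing parts
```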
Next steps for me are to finalize my orders and start the assembly process. I’ll post again soon about my progress. For now, here are a few pre-deployment pics from the build – ~Enjoy!
If you like my ‘no-nonsense’ videos and blogs that get straight to the point… then post a comment or let me know… Else, I’ll start posting really boring content!