In part 3 of this series asustor joins the 10GbE NAS Home Lab build! In this video I take a first look and unbox the asustor DRIVESTOR 2 PRO and LOCKERSTOR 10. I also go over some of their features.
** Products Seen in this Video **
DRIVESTOR 2 PRO
asustor – https://www.asustor.com/product?p_id=64
In part 2 of this series I dissect the Synology DiskStation DS1621+. This is a pretty long video – best to watch it at 2x speed :) Note: This was a loaner unit and I do not recommend others do this as it may void the warranty. I made this video simply to show others what the insides look like. Reach out if you have questions.
** PART Seen in this Video **
1x Synology 6 bay NAS DiskStation DS1621+ https://www.amazon.com/Synology-Bay-DiskStation-DS1621-Diskless/dp/B08HYQJJ62
2x Synology M.2 2280 NVMe SSD SNV3400 800GB https://www.amazon.com/Synology-2280-NVMe-SNV3400-800GB/dp/B08WLJYY76/
In this new video series I’ll be testing several NAS products in my NAS test lab.
My plan is to set up a new 10GbE network with 2 x Windows 10 PCs with 10GbE NICs, laptops, cell phones, VMware ESXi, and various other devices to see how they perform with the different NAS devices. Additionally, I’ll be going over the NAS devices and their software options too. A big part of my Home Network is the use of PLEX. Seeing how these devices handle PLEX and their other built-in apps should make for some interesting content.
It all starts with this blog and in this initial video I go over some of the parts I’ve assembled for my NAS development lab. As the series progresses I plan to enhance the lab and how the devices interact with it.
** Advisement **
- 07/30/21: I am just starting to work with these components and set them up. Everything tells me they should work together. However, I have not tested them together.
- Any products that I blog/vblog about may or may not work – YOU ultimately assume all risk
** PARTS Seen in this Video **
- 1x Synology 6 bay NAS DiskStation DS1621+ https://www.amazon.com/Synology-Bay-DiskStation-DS1621-Diskless/dp/B08HYQJJ62
- 2x Synology M.2 2280 NVMe SSD SNV3400 800GB https://www.amazon.com/Synology-2280-NVMe-SNV3400-800GB/dp/B08WLJYY76/
- 1x Synology 10Gb Ethernet Adapter 2 SFP+ Ports (E10G21-F2), Black https://www.amazon.com/Synology-Ethernet-Adapter-Ports-E10G21-F2/dp/B08WLJQYL2
- 10x Cable Matters 5-Pack Snagless Short Cat6A (SSTP, SFTP) Shielded Ethernet Cable in Black 7 ft https://www.amazon.com/gp/product/B00HEM5FEI
- 2x ASUS XG-C100C 10G Network Adapter Pci-E X4 Card with Single RJ-45 Port https://www.amazon.com/gp/product/B072N84DG6/
- 1x MikroTik 9-Port Desktop Switch, 1 Gigabit Ethernet Port, 8 SFP+ 10Gbps Ports (CRS309-1G-8S+IN) https://www.amazon.com/gp/product/B07NFXN4SS/
- 8x FLYPROFiber 10GBase-T SFP+ to RJ45 for MikroTik https://www.amazon.com/gp/product/B08FXBFZP8/
- 10x HP 684517-001 TWINAX SFP+ 10GBE 0.5M DAC Cable Assy 611980001 4N6H4-01 https://www.ebay.com/itm/264745339079?ssPageName=STRK%3AMEBIDX%3AIT&_trksid=p2060353.m2749.l2649
A few of you reached out and asked that I create a video that shows the SATA cage installed. The point of this video is to show what a Rosewill SATA cage looks like when installed in an Antec Sonata III 500, its overall noise level, and its activity lights.
In this vblog I go over the Rosewill RSV-SATA-Cage-34 and some of its features. I plan to use it in a host where I have a RAID group for file storage.
Not too long ago I updated my Gen 4 Home Lab to Gen 5 and posted many blogs and videos around this. The Gen 5 Lab ran well for vSphere 6.7 deployments, but moving into vSphere 7.0 I had a few issues adapting it. Mostly these issues were with the design of the Jingsha motherboard. I noted most of these challenges in the Gen 5 wrap-up video. Additionally, I had some new networking requirements, mainly around adding multiple Intel NIC ports, and Home Lab Gen 5 was not going to adapt well or would be very costly to adapt. These combined challenges forced my hand to migrate to what I’m calling Home Lab Gen 7. Wait a minute, what happened to Home Lab Gen 6? I decided to align my Home Lab generation numbers with the vSphere release numbers, so I skipped Gen 6 to align.
First: I review my design goals:
- Be able to run vSphere 7.x and vSAN Environment
- Reuse as much as possible from the Gen 5 Home Lab to keep costs down
- Choose products that bring value to the goals and are cost effective; if they are on the VMware HCL that’s a plus, but not necessary for a home lab
- Keep networking (vSAN / FT) on 10Gbe MikroTik Switch
- Support 4 x Intel Gbe Networks
- Ensure there will be enough CPU cores and RAM to be able to support multiple VMware products (ESXi, VCSA, vSAN, vRO, vRA, NSX, LogInsight)
- Be able to fit the environment into 3 ESXi Hosts
- The environment should run well, but doesn’t have to be a production level environment
Second – Evaluate Software, Hardware, and VM requirements:
My calculated numbers from my Gen 5 build will stay rather static for Gen 7. The only update for Gen 7 is to use the updated requirements table which can be found here >> ‘HOME LABS: A DEFINITIVE GUIDE’
Third – Home Lab Design Considerations
This too will be very similar to Gen 5, but I did review this table and make any last changes to my design
Fourth – Choosing Hardware
Based on my estimations above I’m going to need a very flexible Mobo, supporting lots of RAM, good network connectivity, and should be as compatible as possible with my Gen 5 hardware. I’ve reused many parts from Gen 5 but the main change came with the Supermicro Motherboard and the addition of 2TB SAS HDD listed below.
Note: I’ve listed the newer items in italics; all other parts I’ve carried over from Gen 5.
- My Gen 7 Home Lab is based on vSphere 7 (VCSA, ESXi, and vSAN) and it contains 3 x ESXi Hosts, 1 x Windows 10 Workstation, 4 x Cisco Switches, 1 x MikroTik 10gbe Switch, 2 x APC UPS
- Rosewill RISE Glow EATX (Newegg $54)
- CPU: Xeon E5-2640 v2 8 Cores / 16 HT (Ebay $30 each)
- CPU Cooler: DEEPCOOL GAMMAXX 400 (Amazon $19)
- 128GB DDR3 ECC RAM (Ebay $170)
- 64GB USB Thumb Drive (Boot)
- 2 x 200GB SAS SSD (vSAN Cache)
- 2 x 2TB SAS HDD (vSAN Capacity – See this post)
- 1 x 2TB SATA (Extra Space)
- SAS Controller:
- 1 x IBM 5210 JBOD (Ebay)
- CableCreation Internal Mini SAS SFF-8643 to (4) 29pin SFF-8482 (Amazon $18)
- Motherboard Integrated i350 1gbe 4 Port
- 1 x Mellanox ConnectX-3 Dual Port (HP INFINIBAND 4X DDR PCI-E HCA CARD 452372-001)
- Power Supply:
- Antec Earthwatts 500-600 Watt (Adapters needed to support case and motherboard connections)
- Core VM Switches:
- 2 x Cisco 3560CG (WS-C3560CG-8TC-S 8 Gigabit Ports, 2 Uplink)
- 2 x Cisco 2960 (WS-C2960G-8TC-L)
- 10GbE Network:
- 1 x MikroTik CRS309-1G-8S+IN 8 Port SFP+ Switch (carried over from Gen 5)
Battery Backup UPS:
- 2 x APC NS1250
Windows 10 Workstation:
- Case: Phanteks Enthoo Pro series PH-ES614PC_BK Black Steel
- Motherboard: MSI PRO Z390-A PRO
- CPU: Intel Core i7-8700
- RAM: 64GB DDR4 RAM
- 1TB NVMe
Thanks for reading, please do reach out if you have any questions.
If you like my ‘no-nonsense’ videos and blogs that get straight to the point… then post a comment or let me know… Else, I’ll start posting really boring content!
VMware announced the GA Releases of the following: VMware PowerCLI 12.1.0
See the base table for all the technical enablement links including a VMworld 2020 session and new Hands On Lab
VMware PowerCLI is a command-line and scripting tool built on Windows PowerShell, and provides more than 700 cmdlets for managing and automating vSphere, VMware Cloud Director, vRealize Operations Manager, vSAN, NSX-T, VMware Cloud Services, VMware Cloud on AWS, VMware HCX, VMware Site Recovery Manager, and VMware Horizon environments.
VMware PowerCLI 12.1.0 introduces the following new features, changes, and improvements:
- Added cmdlets for
- Added support for
Ensure the following software is present on your system
In VMware PowerCLI 12.1.0, the following modules have been updated:
- Release Notes: Click Here | What’s New in This Release | Resolved Issues | Known Issues
- docs.vmware.com/pCLI: Introduction | Installing | Configuring | cmdlet Reference
- Compatibility Information: Interoperability Matrix | Upgrade Path Matrix
- Blogs & Infolinks: VMware What’s New pCLI vRLCM | VMware What’s New pCLI with AWS | PM’s Blog pCLI SSO
- VMworld 2020 Sessions: PowerCLI: Into the Deep [HCP1286]
- Hands On Labs: HOL-2111-04-SDC – VMware vSphere Automation – PowerCLI
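If you install PowerCLI from the PowerShell Gallery, moving to 12.1.0 and confirming your module versions is a quick check. A minimal sketch (Install-Module/Update-Module come from PowerShellGet, not from PowerCLI itself):
Install-Module -Name VMware.PowerCLI -Scope CurrentUser   # first-time install
Update-Module -Name VMware.PowerCLI                        # or update an existing install to 12.1.0
Get-Module -Name VMware.* -ListAvailable | Select-Object Name, Version   # confirm what was updated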
VMware announced the GA Releases of the following:
- VMware vCenter 7.0 Update 1
- VMware ESXi 7.0 Update 1
- VMware vSAN 7.0 Update 1
See the base table for all the technical enablement links, now including VMworld 2020 OnDemand Sessions
- vCenter Server 7.0 Update 1 | ISO Build 16860138
- ESXi 7.0 Update 1 | ISO Build 16850804
- VMware vSAN 7.0 Update 1 | Build 16850804
What’s New vCenter Server
Inclusive terminology: In vCenter Server 7.0 Update 1, as part of a company-wide effort to remove instances of non-inclusive language in our products, the vSphere team has made changes to some of the terms used in the vSphere Client. APIs and CLIs still use legacy terms, but updates are pending in an upcoming release.
Upgrade/Install Considerations vCenter
Before upgrading to vCenter Server 7.0 Update 1, you must confirm that the Link Aggregation Control Protocol (LACP) mode is set to enhanced, which enables the Multiple Link Aggregation Control Protocol (the multipleLag parameter) on the VMware vSphere Distributed Switch (VDS) in your vCenter Server system.
If the LACP mode is set to basic, indicating One Link Aggregation Control Protocol (singleLag), the distributed virtual port groups on the vSphere Distributed Switch might lose connection after the upgrade and affect the management vmknic, if it is on one of the dvPort groups. During the upgrade precheck, you see an error such as Source vCenter Server has instance(s) of Distributed Virtual Switch at unsupported lacpApiVersion.
For more information on converting to Enhanced LACP Support on a vSphere Distributed Switch, see VMware knowledge base article 2051311. For more information on the limitations of LACP in vSphere, see VMware knowledge base article 2051307.
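If you prefer to check this from PowerCLI before upgrading, the LACP API version is exposed on the VDS configuration object. A minimal sketch (the property path is taken from the vSphere API; adjust for your own switches):
Get-VDSwitch | Select-Object Name, Version, @{Name='LacpApiVersion'; Expression={$_.ExtensionData.Config.LacpApiVersion}}
# 'multipleLag' = enhanced mode (safe to upgrade); 'singleLag' = basic mode (convert first, see KB 2051311)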
Product Support Notices
What’s New ESXi
Upgrade/Install Considerations ESXi
In vSphere 7.x, the Update Manager plug-in, used for administering vSphere Update Manager, is replaced with the Lifecycle Manager plug-in. Administrative operations for vSphere Update Manager are still available under the Lifecycle Manager plug-in, along with new capabilities for vSphere Lifecycle Manager.
You can also update ESXi hosts without using the Lifecycle Manager plug-in, and use an image profile instead. To do this, you must manually download the patch offline bundle ZIP file from the VMware download page or the Product Patches page and use the esxcli software profile command.
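One way to drive that esxcli namespace from PowerCLI is Get-EsxCli -V2. This is just a sketch, assuming the host is already in maintenance mode; the host name, depot path, and profile name below are placeholders:
$esxcli = Get-EsxCli -VMHost 'esx01.lab.local' -V2
$arguments = $esxcli.software.profile.update.CreateArgs()
$arguments.depot   = '/vmfs/volumes/datastore1/VMware-ESXi-7.0U1-depot.zip'   # offline bundle ZIP uploaded to a datastore
$arguments.profile = 'ESXi-7.0U1-16850804-standard'                           # image profile name inside the bundle
$esxcli.software.profile.update.Invoke($arguments)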
What’s New vSAN
vSAN 7.0 Update 1 introduces the following new features and enhancements:
Scale Without Compromise
Note: vSAN 7.0 Update 1 improves CPU performance by standardizing task timers throughout the system. This change addresses issues with timers activating earlier or later than requested, resulting in degraded performance for some workloads.
Upgrade/Install Considerations vSAN
For instructions about upgrading vSAN, see the vSAN Documentation: Upgrading the vSAN Cluster | Before You Upgrade | Upgrading vCenter Server | Upgrading Hosts
Note: Before performing the upgrade, please review the most recent version of the VMware Compatibility Guide to validate that the latest vSAN version is available for your platform.
vSAN 7.0 Update 1 is a new release that requires a full upgrade to vSphere 7.0 Update 1. Perform the following tasks to complete the upgrade:
1. Upgrade to vCenter Server 7.0 Update 1. For more information, see the VMware vSphere 7.0 Update 1 Release Notes.
Note: vSAN retired disk format version 1.0 in vSAN 7.0 Update 1. Disks running disk format version 1.0 are no longer recognized by vSAN. vSAN will block upgrade through vSphere Update Manager, ISO install, or esxcli to vSAN 7.0 Update 1. To avoid these issues, upgrade disks running disk format version 1.0 to a higher version. If you have disks on version 1, a health check alerts you to upgrade the disk format version.
Disk format version 1.0 does not have performance and snapshot enhancements, and it lacks support for advanced features including checksum, deduplication and compression, and encryption. For more information about vSAN disk format version, see KB2145267.
Upgrading the On-disk Format for Hosts with Limited Capacity
During an upgrade of the vSAN on-disk format from version 1.0 or 2.0, a disk group evacuation is performed. The disk group is removed and upgraded to on-disk format version 13.0, and the disk group is added back to the cluster. For two-node or three-node clusters, or clusters without enough capacity to evacuate each disk group, select Allow Reduced Redundancy from the vSphere Client. You also can use the following RVC command to upgrade the on-disk format: vsan.ondisk_upgrade --allow-reduced-redundancy
When you allow reduced redundancy, your VMs are unprotected for the duration of the upgrade, because this method does not evacuate data to the other hosts in the cluster. It removes each disk group, upgrades the on-disk format, and adds the disk group back to the cluster. All objects remain available, but with reduced redundancy.
If you enable deduplication and compression during the upgrade to vSAN 7.0 Update 1, you can select Allow Reduced Redundancy from the vSphere Client.
For information about maximum configuration limits for the vSAN 7.0 Update 1 release, see the Configuration Maximums documentation.
- Release Notes vCenter: Click Here | What’s New | Earlier Releases | Patch Info | Installation & Upgrade Notes | Product Support Notices
- Release Notes ESXi: Click Here | What’s New | Earlier Releases | Patch Info | Product Support Notices | Resolved Issues | Known Issues
- Release Notes vSAN: Click Here | What’s New | VMware vSAN Community | Upgrades for This Release | Limitations | Known Issues
- docs.vmware/vCenter: Installation & Setup | vCenter Server Upgrade | vCenter Server Configuration
- docs.vmware/ESXi: Installation & Setup | Upgrading | Managing Host and Cluster Lifecycle | Host Profiles | Networking | Storage | Security
- docs.vmware/vSAN: Using vSAN Policies | Expanding & Managing a vSAN Cluster | Device Management | Increasing Space Efficiency | Encryption
- Compatibility Information: Interoperability Matrix vCenter | Configuration Maximums vSphere (All) | Ports Used vSphere (All)
- Blogs & Infolinks: What’s New with VMware vSphere 7 Update 1 | Main VMware Blog vSphere 7 | vSAN | vSphere | vCenter Server
- Download: vSphere | vSAN
- VMworld 2020 OnDemand (Free Account Needed): Deep Dive: What’s New with vCenter Server [HCP1100] | 99 Problems, But A vSphere Upgrade Ain’t One [HCP1830]
- VMworld HOL Walkthrough (VMworld Account Needed): Introduction to vSphere Performance [HOL-2104-95-ISM]
My GEN5 Home Lab is ever expanding and the space demands on the vSAN cluster were becoming more apparent. This past weekend I updated my vSAN 7 cluster capacity disks from 6 x 600GB SAS HDD to 6 x 2TB SAS HDD and it went very smoothly. Below are my notes and the order I followed around this upgrade. Additionally, I created a video blog (link further below) around these steps. Lastly, I can’t stress this enough – this is my home lab and not a production environment. The steps in this blog/video are just how I went about it and are not intended for any other purpose.
- 3 x ESXi 7.0 Hosts (Supermicro X9DRD-7LN4F-JBOD, Dual E5 Xeon, 128GB RAM, 64GB USB Boot)
- vSAN Storage is:
- 600GB SAS Capacity HDD
- 200GB SAS Cache SSD
- 2 Disk Groups per host (1 x 200GB SSD + 1 x 600GB HDD)
- IBM 5210 HBA Disk Controller
- vSAN Datastore Capacity: ~3.5TB
- Amount Allocated: ~3.7TB
- Amount in use: ~1.3TB
- Keep the 6 x 200GB SAS Cache SSD Drives
- Remove 6 x 600GB HDD Capacity Disk from hosts
- Replace with 6 x 2TB HDD Capacity Disks
- Upgraded vSAN Datastore ~11TB
- I chose to back up (via clone to offsite storage) and power off most of my VMs
- I clicked on the Cluster > Configure > vSAN > Disk Management
- I selected the one host I wanted to work with and then the Disk group I wanted to work with
- I located one of the capacity disks (600GB) and clicked on it
- I noted its NAA ID (will need later)
- I then clicked on “Pre-check Data Migration” and chose ‘full data migration’
- The test completed successfully
- Back at the Disk Management screen I clicked on the HDD I was working with
- Next I clicked on the ellipsis (…) and chose ‘Remove’
- A new window appeared and for vSAN Data Migration I chose ‘Full Data Migration’, then clicked Remove
- I monitored the progress in ‘Recent Tasks’
- Depending on how much data needed to be migrated, and whether other objects were being resynced, each drive could take a while. For me this was ~30-90 minutes per drive
- Once the data migration was complete, I went to my host and found the WWN # on the physical disk that matched the NAA ID I noted earlier
- While the system was still running, I removed the disk from the chassis and replaced it with the new 2TB HDD
- Back at vCenter Server I clicked on the Host in the Cluster > Configure > Storage > Storage Devices
- I made sure the new 2TB drive was present (there is also a PowerCLI sketch for this check after the post-upgrade summary below)
- I clicked on the 2TB drive, chose ‘Erase Partitions’, and clicked OK
- I clicked on the Cluster > Configure > vSAN > Disk Management > ‘Claim Unused Disks’
- A new window appeared and I chose ‘Capacity’ for the 2TB HDD, ‘Cache’ for the 200GB SSD drives, and clicked OK
- Recent Tasks showed the disk being added
- When it was done I clicked on the newly added disk group and ensured it was in a healthy state
- I repeated this process until all the new HDDs were added
- After the upgrade the vSAN Storage is:
- 2TB SAS Capacity HDD
- 200GB SAS Cache SSD
- 2 Disk Groups per host (1 x 200GB SSD + 1 x 2TB HDD)
- IBM 5210 HBA Disk Controller
- vSAN Datastore is ~11.7TB
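As referenced in the steps above, the same disk checks can be done from PowerCLI instead of the vSphere Client. A rough sketch under my lab’s assumptions (the host name is a placeholder; the vSAN cmdlets come from the VMware.VimAutomation.Storage module):
$vmhost = Get-VMHost 'esx01.lab.local'
# List physical disks with their NAA IDs (canonical names) and sizes - handy for matching against the WWN on the drive label
Get-ScsiLun -VmHost $vmhost -LunType disk | Select-Object CanonicalName, CapacityGB, Vendor, Model
# Show the cache/capacity disks vSAN has claimed in each disk group on this host
Get-VsanDisk -VsanDiskGroup (Get-VsanDiskGroup -VMHost $vmhost)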
Notes & other thoughts:
- I was able to complete the upgrade in this order due to the nature of my home lab components. Mainly because I’m running a SAS storage HBA that is just a JBOD controller supporting hot-pluggable drives.
- Make sure you run the data migration pre-checks and follow any advice it has. This came in very handy.
- If you don’t have enough space to fully evacuate a capacity drive, you will either have to add more storage or completely remove VMs from the cluster.
- Checking Cluster > Monitor > vSAN > Resyncing Objects gave me a good idea of when I should start my next migration. I looked for it to be complete before starting. If you have a very active cluster this may be harder to achieve.
- Checking the vSAN cluster health should also be done, especially Cluster > Monitor > Skyline Health > Data > vSAN Object Health; any issues in these areas should be looked into prior to migration
- In most cases, the disk NAA ID reported in vCenter Server/vSAN coincides with the WWN number printed on the HDD
- By changing my HDDs from 600GB SAS 10K to 2TB SAS 7.2K there will be a performance hit. However, my lab needed more space and 10k-15K drives were just out of my budget.
- I can’t recommend this reference link from VMware enough: Expanding and Managing a vSAN Cluster
If you like my ‘no-nonsense’ videos and blogs that get straight to the point… then post a comment or let me know… Else, I’ll start posting really boring content!
One of the challenges in running a VMware-based home lab is the ability to work with old / inexpensive hardware but run the latest software. It’s a balance that is sometimes frustrating, but when it works it is very rewarding. Most recently I decided to move to 10GbE from my InfiniBand 40Gb network. Part of this transition was to create an ESXi ISO with the latest build (6.7U3) and appropriate network card drivers. In this video blog post I’ll show 9 easy steps to create your own customized ESXi ISO and how to pinpoint IO cards on the VMware HCL.
** Update 03/06/2020 ** Though I had good luck with the HP 593742-001 NC523SFP DUAL PORT SFP+ 10Gb card in my Gen 4 Home Lab, I found it faulty when running in my Gen 5 Home Lab. It could be that I was using a PCIe x4 slot in Gen 4, or it could be that the card runs too hot to touch. The card has since been removed from the VMware HCL, HP has advisories out about it, and after doing some poking around there seem to be lots of issues with it. I’m looking for a replacement and may go with the HP NC550SFP. However, the steps in this video aren’t specific to this card; they help you better understand how to add drivers into an ISO.
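As an aside, pinpointing an IO card on the HCL comes down to its PCI vendor/device IDs. A minimal PowerCLI sketch for pulling them (the host name and DeviceClass filter are illustrative):
Get-VMHostPciDevice -VMHost 'esx01.lab.local' -DeviceClass NetworkController | Select-Object Name, VendorId, DeviceId, SubVendorId, SubDeviceId
# The HCL search expects these IDs in hex (VID/DID/SVID/SSID); convert the decimal values as needed.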
Here are the written steps I took from my video blog. If you are looking for more detail, watch the video.
Before you start – make sure you have PowerCLI installed, have downloaded these files, and have placed these files in c:\tmp.
- Download driver –
- QLogic qlcnic Driver: https://my.vmware.com/group/vmware/details?downloadGroup=DT-ESXI60-QLOGIC-QLCNIC-61191&productId=491
- Note: Extract the offline bundle from this package
- Download ESXi –
- ESXi Update ZIP File: vmware.com/downloads
- Note: make sure you download the Update ZIP file and not the ESXi ISO file
I started up PowerCLI and ran the following commands:
1) Add the ESXi Update ZIP file to the depot:
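The command here is Add-EsxSoftwareDepot pointed at the update ZIP sitting in c:\tmp (adjust the file name to match your download):
Add-EsxSoftwareDepot C:\tmp\update-from-esxi6.7-6.7_update03.zip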
2) Add the QLogic Offline Bundle ZIP file to the depot:
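Same cmdlet, pointed at the offline bundle extracted from the driver package (the ZIP name below is a placeholder for whatever the extracted bundle is called):
Add-EsxSoftwareDepot C:\tmp\qlcnic-offline_bundle.zip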
3) Make sure the files from steps 1 and 2 are in the depot:
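To confirm both depots loaded, list the image profiles (from the ESXi update ZIP) and the driver package (from the offline bundle):
Get-EsxImageProfile
Get-EsxSoftwarePackage -Vendor q*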
4) Show the profile names from update-from-esxi6.7-6.7_update03. The default command only shows part of the name. To correct this and see the full name, add ‘| select name’:
Get-EsxImageProfile | select name
5) Create a clone profile to start working with.
New-EsxImageProfile -cloneprofile ESXi-6.7.0-20190802001-standard -Name ESXi-6.7.0-20190802001-standard-QLogic -Vendor QLogic
6) Validate the QLogic driver is loaded in the local depot. It should match the driver from step 2. Make sure you note the name and version number columns. We’ll need to combine these two with a space in the next step.
Get-EsxSoftwarePackage -Vendor q*
7) Add the software package to the cloned profile. Tip: For ‘SoftwarePackage:’ you should enter the ‘name’ space ‘version number’ from step 6. If you just use the short name it might not work.
SoftwarePackage: net-qlcnic 6.1.191-1OEM.600.0.0.2494585
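I ran Add-EsxSoftwarePackage interactively and answered the ImageProfile and SoftwarePackage prompts as shown above. A non-interactive equivalent would look roughly like this (passing the package object rather than the name/version string):
$pkg = Get-EsxSoftwarePackage -Name net-qlcnic -Version "6.1.191-1OEM.600.0.0.2494585"
Add-EsxSoftwarePackage -ImageProfile ESXi-6.7.0-20190802001-standard-QLogic -SoftwarePackage $pkg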
8) Optional: Compare the profiles, to see differences, and ensure the driver file is in the profile.
Get-EsxImageProfile | select name   # Run this if you need a reminder of the profile names
Compare-EsxImageProfile -ComparisonProfile ESXi-6.7.0-20190802001-standard-QLogic -ReferenceProfile ESXi-6.7.0-20190802001-standard
9) Create the ISO
Export-EsxImageProfile -ImageProfile "ESXi-6.7.0-20190802001-standard-QLogic" -ExportToIso -FilePath c:\tmp\ESXi-6.7.0-20190802001-standard-QLogic.iso
That’s it! If you like my ‘no-nonsense’ videos and blogs that get straight to the point… then post a comment or let me know… Else, I’ll start posting boring video blogs!
Cross vSAN Cluster support for FT