VMware Workstation Gen 9: BOM2 P1 Motherboard upgrade
**Urgent Note** The Gigabyte mobo in BOM2 was initially working well in my deployment. However, shortly after I completed this post, the mobo failed. I was able to return it, but the cost to replace it had doubled. I'm currently looking for a different mobo and will post about it soon.
To take the next step in deploying a VCF 9 Simple stack with VCF Automation, I’m going to need to make some updates to my Workstation Home Lab. BOM1 simply doesn’t have enough RAM, and I’m a bit concerned about VCF Automation being CPU hungry. In this blog post I’ll cover some of the products I chose for BOM2.
Although my ASRock Rack motherboard (BOM1) was performing well, it was constrained by available memory capacity. I had additional 32 GB DDR4 modules on hand, but all RAM slots were already populated. I considered upgrading to higher-capacity DIMMs; however, the cost was prohibitive. Ultimately, replacing the motherboard proved to be a more cost-effective solution, allowing me to leverage the memory I already owned.
The mobo I chose was the Gigabyte MD71-HB0. It was rather affordable, but it lacked PCIe bifurcation, a feature I needed to run dual NVMe disks in one PCIe slot. To overcome this I chose the RIITOP M.2 NVMe SSD to PCI-e 3.1 adapter cards. These cards essentially emulate a bifurcated PCIe slot, which allows two NVMe disks to share a single PCIe slot.

The table below outlines the changes planned for BOM2. Few products from the original configuration went unused, and after migrating components, the updated build provides more than sufficient resources to meet my VCF 9 compute/RAM requirements.
Pro Tip: When assembling new hardware, I take a methodical, incremental approach. I install and validate one component at a time, which makes troubleshooting far easier if an issue arises. I typically start with the CPUs and a minimal amount of RAM, then scale up to the full memory configuration, followed by the video card, add-in cards, and then storage. It’s a practical application of the old adage: don’t bite off more than you can chew—or in this case, compute.
| KEEP from BOM1 | Added to create BOM2 | UNUSED |
| --- | --- | --- |
| Case: Phanteks Enthoo Pro series PH-ES614PC_BK Black Steel | Mobo: Gigabyte MD71-HB0 | Mobo: ASRock Rack EPC621D8A |
| CPU: 1 x Xeon Gold 6252 ES (Engineering Sample), 24 pCores | CPU: 1 x Xeon Gold 6252 ES (Engineering Sample), new net total 48 pCores | NVMe Adapter: 3 x Supermicro PCI-E Add-On Card for up to two NVMe SSDs |
| Cooler: 1 x Noctua NH-D9 DX-3647 4U | Cooler: 1 x Noctua NH-D9 DX-3647 4U | 10GbE NIC: ASUS XG-C100C 10G Network Adapter |
| RAM: 384GB (4 x 64GB Samsung M393A8G40MB2-CVFBY + 4 x 32GB Micron MTA36ASF4G72PZ-2G9E2) | RAM: 8 x 32GB Micron MTA36ASF4G72PZ-2G9E2, new net total 640GB | |
| NVMe: 2 x 1TB NVMe (Win 11 Boot Disk and Workstation VMs) | NVMe Adapter: 3 x RIITOP M.2 NVMe SSD to PCI-e 3.1 | |
| NVMe: 6 x Sabrent 2TB ROCKET NVMe PCIe (Workstation VMs) | Disk Cables: 2 x Slimline SAS 4.0 SFF-8654 | |
| HDD: 1 x Seagate IronWolf Pro 18TB | | |
| SSD: 1 x 3.84TB Intel D3-4510 (Workstation VMs) | | |
| Video Card: GIGABYTE GeForce GTX 1650 SUPER | | |
| Power Supply: Antec NeoECO Gold ZEN 700W | | |
PCIe Slot Placement:
For the best performance, PCIe slot placement is really important. Things to consider: the speed and size of the devices, and how the data will flow. Typically, if data has to flow between CPUs or through the C622 chipset, some latency, though minor, is induced. A larger video card, like the GTX 1650 Super, needs to be placed in a PCIe slot that supports its length and doesn't interfere with onboard connectors or RAM modules.
Using Fig-1 below, here is how I laid out my devices.
- Slot 2: video card. The card is two slots wide and covers Slot 1, the slowest PCIe slot.
- Slot 3: open.
- Slots 4, 5, and 6: the RIITOP cards, each carrying dual NVMe disks.
- Slimline 1 (connected to CPU 1): my two SATA drives. These ports are typically for U.2 drives, but they also work with SATA drives.
Why this PCIe layout? By isolating all my primary disks on CPU 1, their traffic doesn't cross between CPUs or go through the C622 chipset. My two 1TB NVMe disks are attached to CPU 0; they have no impact on my VCF environment, as one is used to boot the system and the other supports less important VCF VMs.
Other Thoughts:
- I did look at other mobos, workstations, and servers, but most were really expensive. My upgrade options were also constrained by the products I had on hand (DDR4 RAM and the Xeon 6252 LGA-3647 CPUs), which narrowed what I could select from.
- The RIITOP cards added quite a bit of expense to this deployment, so look for a mobo that supports bifurcation and matches your needs. Even so, this combination plus the additional parts cost more than 50% less than simply upgrading to higher-capacity RAM modules.
- The Gigabyte mobo requires 2 CPUs if you want to use all the PCIe slots.
- Updating the Gigabyte firmware and BMC was a bit wonky. I've seen and blogged about these mobo issues before; hopefully their newer products have improved.
- The layout (Fig-1) of the Gigabyte mobo included support for SlimLine U.2 connectors. These will come in handy if I deploy my U.2 Optane Disks.
(Fig-1)

Now starts the fun, in the next posts I’ll reinstall Windows 11, performance tune it, and get my VCF 9 Workstation VMs operational.
VMware Workstation Gen 9: Part 3 Windows Core Services and Routing
A big part of my nested VCF 9 environment relies on core services: AD, NTP, DHCP, and RAS. These are supplied by my Windows Server (aka AD230.nested.local). Of those services, RAS will enable routing between the LAN segments and allow for Internet access. Additionally, I have a VM named DomainTools, which I use for testing network connectivity with SSH, WinSCP, and other tools. In this blog post I'll create both of these VMs and adapt them to work in my new VCF 9 environment.
Create the Windows Server and establish core services
A few years back I published a Workstation 17 YouTube multipart series on how to create a nested vSphere 8 with vSAN ESA. Part of that series was creating a Windows Server with core services. For my VCF 9 environment I’ll need to create a new Windows server with the same core services. To create a similar Windows Server I used my past 2 videos: VMware Workstation 17 Nested Home Lab Part 4A and 4B.
Updating the Windows Server for the VCF 9 environment
Now that I have established AD230 I need to update it to match the VCF 9 networks. I’ll be adding additional vNICs, attaching them to networks, and then ensuring traffic can route via the RAS service. Additionally, I created a new Windows 11 VM named DomainTools. I’ll use DomainTools for network connectivity testing and other functions. Fig-1 shows the NIC to network layout that I will be following.
(Fig-1)

Adjustments to AD230 and DomainTools
I power off AD230 and DomainTools. On both I add the appropriate vNICs and align them to the LAN segments. Next, I edit each VM's configuration (.vmx) file, changing the vNICs from "e1000e" to "vmxnet3".
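Below is a minimal Python sketch of that .vmx edit, assuming the VM is powered off; the file path is a placeholder, so point it at your own VM's configuration file.

```python
# Swap every vNIC in a Workstation .vmx file from "e1000e" to "vmxnet3".
# The path below is hypothetical; point it at your own VM's .vmx file
# and make sure the VM is powered off first.
from pathlib import Path

vmx = Path(r"C:\VMs\DomainTools\DomainTools.vmx")      # placeholder path
text = vmx.read_text()
vmx.with_suffix(".bak").write_text(text)               # keep a backup copy
vmx.write_text(text.replace('"e1000e"', '"vmxnet3"'))  # e.g. ethernet0.virtualDev = "vmxnet3"
```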

Starting with DomainTools, I power it on and, for each NIC, input the IPv4 information (IP address, subnet, VLAN ID) and optionally disable IPv6. The only NIC to get a default gateway is NIC1. TIP – To identify the NICs, I disconnect a NIC in the VM settings and watch for it to show as unplugged in Windows Networking; this way I know which NIC is assigned to which LAN segment. Additionally, in Windows Networking I give each NIC a verbose name to help identify it.
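If you prefer a scripted cross-check, here's a small Python sketch (it assumes the third-party psutil package is installed in the guest) that prints each Windows adapter with its IPv4 addresses, making it easy to confirm the verbose names line up with the right LAN segments.

```python
# List each Windows network adapter with its IPv4 address(es) so the
# renamed NICs can be matched against their LAN segments at a glance.
import socket
import psutil  # third-party package: pip install psutil

for nic_name, addrs in psutil.net_if_addrs().items():
    ipv4 = [a.address for a in addrs if a.family == socket.AF_INET]
    print(f"{nic_name}: {', '.join(ipv4) if ipv4 else 'no IPv4 assigned'}")
```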

I make the same network adjustments to AD230, and I update its DNS service to serve DNS only from the 10.0.10.230 network adapter.

Once completed, I do a ping test between all the networks on AD230 and DomainTools to validate that IP connectivity works. TIP – Use ipconfig at the CLI to check your adapter IP settings. If ping is not working, a firewall may be enabled.
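As a rough illustration of that ping test, here's a Python sketch; the target list is only an example based on the 10.0.x.x addressing used in this lab, so adjust it to your own segments.

```python
# Ping a short list of lab addresses from Windows and flag anything that
# does not reply (often a sign the Windows firewall is still enabled).
import subprocess

targets = ["10.0.10.230", "10.0.11.1", "10.0.11.228"]  # example addresses
for ip in targets:
    ok = subprocess.run(["ping", "-n", "2", ip],
                        capture_output=True).returncode == 0
    print(f"{ip}: {'reachable' if ok else 'no reply - check firewall/routing'}")
```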
Setting up RAS on AD230
Once you have your network set up correctly, validate that RAS has accepted your new adapters and their information. On AD230 I go into RAS > IPv4 > General and validate that my network adapters are present.
Looking ahead: RAS seemed to work right out of the box with no config needed. In all my testing below it worked fine, though this may change as I advance my lab. If so, I'll be sure to update my blog.

Next I need to validate routing between the different LAN segments, using the DomainTools VM to ensure routing is working correctly. You may notice that VCF appliances are present in some of my testing results; I added this testing part after I had completed my VCF deployment.
I need to test all of the VLAN networks. On the DomainTools VM, I disable each network adapter except for the one I want to test. In this case I disabled every adapter except 10-0-11-228 (VLAN 11 – VM NIC3) and then added the gateway IP of 10.0.11.1 (the IP address assigned to my AD230 RAS server).

Next I run ipconfig to validate the IP address and use Angry IP Scanner to locate devices on the 10.0.10.x network. Several devices responded and resolved their DNS names, proving that DomainTools is successfully routing from the 11 network into the 10 network. I'll repeat this process, plus an Internet check, on all the remaining networks.
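For anyone who wants to script that check instead of using Angry IP Scanner, here's a hedged Python sketch that sweeps the 10.0.10.x range and attempts a reverse DNS lookup for each responder, mirroring what the scan above demonstrated.

```python
# Sweep 10.0.10.1-254 from the VLAN 11 adapter and try a reverse DNS lookup
# for every host that answers, confirming routing through the 10.0.11.1 gateway.
import socket
import subprocess

for host in range(1, 255):
    ip = f"10.0.10.{host}"
    if subprocess.run(["ping", "-n", "1", "-w", "500", ip],
                      capture_output=True).returncode != 0:
        continue  # no reply within 500 ms
    try:
        name = socket.gethostbyaddr(ip)[0]
    except OSError:
        name = "no reverse DNS record"
    print(f"{ip} -> {name}")
```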

Now that we have a stable network and core Windows services established, we are ready to move on to ESX host deployment and initial configuration.
Why your Home Lab needs a non-static port group.
We've all been there: during a recovery or migration of a VCSA server we get the error "Addition or reconfiguration of network adapters attached to non-ephemeral distributed virtual port groups is not supported." But what does this mean, and how do I prepare for it? In this blog post I'll cover some of the basics and how I set up my home lab.

What do non-ephemeral and ephemeral mean?
- Non-ephemeral, or static binding, is a port group setting that guarantees a port in the vDS. Think of it like seats at a table: once a seat is assigned, it's always reserved for that assignment.
- Ephemeral, or non-static binding, will not guarantee a port in the switch. It's more like first come, first seated: if you leave the table, someone else can take your spot.
- Of course you'd want to make sure your ESXi hosts and important VMs, like the VCSA appliance, have a "reserved seat at the table," and this is why vDS port groups are static by default.
- See this KB for more information.
What are some of the impacts of not having a non-static port group?
- If you are doing a migration or recovery of a VM, you'll sometimes end up at the ESXi Host Client.
- At some point during the network discovery process, it will determine that the target network is statically bound.
- For example, when restoring a VCSA server, if the vDS port group it's using has static (non-ephemeral) binding, it will surely throw the error above.

How do I prepare my Home Lab?
- Choice 1 – simply create a vDS port group with the "Ephemeral – no binding" setting that uses the same uplinks as the network I want to communicate on (see the sketch after this list).
- Choice 2 – set your management vDS port group to "Ephemeral – no binding".
- Doing one of these two ahead of time allows the correct network to be chosen during a recovery or migration.
- Example – The screenshot below is from a migration of a VCSA 8 server. When I get to step 4, I'm able to choose a non-static network. Had I not set up this port group ahead of time, the migration would have been more difficult.
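If you'd rather script Choice 1 than click through the UI, here's a rough pyVmomi sketch; the vCenter address, credentials, vDS name, and port group name are all placeholders, and it assumes the pyvmomi package is installed.

```python
# Create an "Ephemeral - no binding" port group on an existing vDS ahead of
# time so a VCSA restore/migration has a non-static network to attach to.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only: skip certificate checks
si = SmartConnect(host="vcsa.nested.local", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)  # placeholder credentials
content = si.RetrieveContent()

# Locate the vDS by name (placeholder name).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
dvs = next(d for d in view.view if d.name == "vds-mgmt")
view.Destroy()

# "ephemeral" is the API value behind the UI's "Ephemeral - no binding" option.
spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
    name="pg-mgmt-ephemeral", type="ephemeral")
dvs.AddDVPortgroup_Task([spec])  # watch the task complete in the vSphere client

Disconnect(si)
```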

Want more information?
- Check out this design link that explains how VCF assigns static and non-static port groups.
- Tech UnGlued did a good video around this topic.