homelab

VMware Workstation Gen 9: BOM2 P4 Workstation/Win11 Performance enhancements

Posted on Updated on

Many factors can impact the performance of your Workstation VMs. Running a VCF 9 stack on VMware Workstation demands every ounce of performance your Windows 11 host can provide. To ensure a smooth lab experience, certain optimizations are essential. In this post, I’ll walk through the key adjustments to maximize efficiency and responsiveness.

Note: There are a LOT of settings I changed to improve performance. I take a structured approach, trying things one at a time rather than applying everything at once. The items listed below are what worked for my system, and they are recommended for that use case only. Unless otherwise stated, the VMs and Workstation were powered down during these adjustments.

Host BIOS/UEFI Settings

  • There are several settings that help ensure stable performance with a Supermicro X11DPH-T.
  • Here is what I modified on my system.
  • Enter Setup, confirm/adjust the following, and save the changes:
    • Advanced > CPU Configuration
      • Hyper-Threading > Enabled
      • Cores Enabled > 0 (0 means all cores are enabled)
      • Hardware Prefetcher > Enabled
      • Advanced Power Management Configuration
        • Power Technology > Custom
        • Power Performance Tuning > BIOS Controls EPB
        • Energy Performance BIAS Setting > Maximum Performance
        • CPU C State Control, All Disabled
    • Advanced > Chipset Configuration > North Bridge > Memory Configuration
      • Memory Frequency > 2933

Hardware Design

  • In the VMware Workstation Gen 9: BOM1 and BOM2 blogs we covered hardware design as it related to the intended load of nested VMs.
  • Topics we covered were:
    • Fast Storage: NVMe, SSD, and U.2 all contribute to VM performance
    • Placement of VM files: We placed and isolated our ESX VMs on specific disks which helps to ensure better performance
    • PCIe Placement: Using the System Block diagram I placed the devices in their optimal locations
    • Ample RAM: Include more than enough RAM to support the VCF 9 VMs
    • CPU cores: Design enough CPU cores to support the VCF 9 VMs
    • Video Card: Using a power-efficient GPU can help boost VM performance

VM Design

  • Disk Choices: Match the VM disk type to the physical drive type it runs on. Example – an NVMe virtual disk placed on a physical NVMe drive.
  • CPU Settings: Match the physical CPU socket layout in the VM CPU settings. Example – a VM needs 8 cores and the physical host has 2 CPU sockets with 24 cores per socket: set up the VM with 2 CPUs and 4 cores per CPU.
  • vHardware Choices: When creating a VM, Workstation should auto-populate hardware settings. The best vNIC to use is vmxnet3. You can use the Guest OS Guide to validate which virtual hardware devices are compatible.
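To illustrate the bullets above, here is a hedged sketch of the corresponding entries in a VM’s .vmx file. Workstation normally writes these for you; the disk file name is a placeholder, not a value from my build:

```ini
; 2 virtual sockets x 4 cores per socket = 8 vCPUs total
numvcpus = "8"
cpuid.coresPerSocket = "4"

; vmxnet3 paravirtual NIC for best network performance
ethernet0.virtualDev = "vmxnet3"

; NVMe virtual controller for a disk stored on a physical NVMe drive
nvme0.present = "TRUE"
nvme0:0.fileName = "esx-node1.vmdk"
```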

Fresh Installs

  • There’s nothing like a fresh install of the base OS to provide a reliable foundation for performance improvements.
  • When Workstation is installed it adapts to the base OS, and there can be performance gains from this adaptation.
  • However, if you upgrade the OS (Win10 to Win11) with Workstation already installed, you should always fully uninstall and then reinstall Workstation after the upgrade for optimal performance.
  • Additionally, when installing Workstation I ensure that Hyper-V is disabled, as it can impact Workstation performance.

Exclude Virtual Machine Directories From Antivirus Tools

NOTE — AV exceptions exclude certain files, folders, and processes from being scanned. By adding these you can improve Workstation performance but there are security risks in enabling AV Exceptions. Users should do what’s best for their environment. Below is how I set up my environment.

  • Script: Use a script to create AV Exceptions. For an example check out my blog – Using PowerShell to setup AV exceptions for Workstation 25H2u1 and Windows 11.
  • Manual Steps: Manually setup the following exceptions for Windows 11.
    • Open Virus and Threat Protection
    • Virus & threat protection settings > Manage Settings
    • Under ‘Exclusions’ choose ‘Add or remove exclusions’
    • Click on ‘+ Add an exclusion’
    • Choose your type (File, Folder, File Type, Process)
    • File Type: Exclude these specific VMware file types from being scanned: 
      • .vmdk: Virtual machine disk files (the largest and most I/O intensive).
      • .vmem: Virtual machine paging/memory files.
      • .vmsn: Virtual machine snapshot files.
      • .vmsd: Metadata for snapshots.
      • .vmss: Suspended state files.
      • .lck: Disk consistency lock files.
      • .nvram: Virtual BIOS/firmware settings. 
    • Folder: Exclude the following directories to prevent your antivirus from interfering with VM operations: 
      • VM Storage Folders: Exclude the main directory where you store your virtual machines
      • Installation Folder: Exclude the VMware Workstation installation path (default: C:\Program Files (x86)\VMware\VMware Workstation\).
      • VMware Tools: If you have the VMware Tools installation files extracted locally, exclude that folder as well. 
    • Process: Adding these executable processes to your antivirus exclusion list can prevent lag caused by the AV monitoring VMware’s internal actions: 
      • vmware.exe: The main Workstation interface.
      • vmware-vmx.exe: The core process that actually runs each virtual machine.
      • vmnat.exe: Handles virtual networking (NAT).
      • vmnetdhcp.exe: Handles DHCP for virtual networks.
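If you prefer the command line over the manual steps above, the same kinds of exclusions can be added with Microsoft Defender’s Add-MpPreference cmdlet from an elevated PowerShell prompt. The VM storage path below is an example; use your own:

```powershell
# Add one folder, one extension, and one process exclusion to Microsoft Defender
Add-MpPreference -ExclusionPath "D:\Virtual Machines"
Add-MpPreference -ExclusionExtension ".vmdk"
Add-MpPreference -ExclusionProcess "C:\Program Files (x86)\VMware\VMware Workstation\x64\vmware-vmx.exe"

# Review what is currently excluded
Get-MpPreference | Select-Object ExclusionPath, ExclusionExtension, ExclusionProcess
```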

Power Plan

Typically, Windows 11 has the “Balanced” power plan enabled by default. Though those settings are fine for normal use cases, using your system as a dedicated VMware Workstation host calls for a better plan.

Below I show 2 ways to adjust a power plan. 1) Using a script to create a custom plan or 2) manually make similar adjustments.

  • 1) Script: I created a script that creates a custom power plan named “VMware Workstation Performance Plan” and makes all the needed changes for my system. You can find my blog here.
  • 2) Manual Adjustments:
    • Open the power plan. Control Panel > Hardware and Sound > Power Options > Change settings that are currently unavailable
    • You might see on every page “Change settings that are currently unavailable”, just click on it before making changes.
    • Set Power Plan:
      • Click on ‘Show additional plans’ to reveal the hidden plans.
      • Choose either “Ultimate Performance” or “High Performance” plan and then click on “Change plan settings”
      • Hard Disk > 0 Minutes
      • Wireless Adapter Settings > Max Performance
      • USB > Hub Selective Suspend Time out > 0
      • PCI Express > Link State Power Management > off
      • Processor power management > Both to 100%
      • Display > Turn off Display > Never
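The manual plan changes above can also be made from an elevated prompt with powercfg. The GUID below is the well-known ID Windows uses for the hidden Ultimate Performance scheme; activate the clone using the new GUID that the first command prints:

```powershell
# Clone the hidden Ultimate Performance plan (well-known scheme GUID)
powercfg /duplicatescheme e9a42b02-d5df-448d-aa00-03f14749eb61

# Mirror the manual adjustments on the active plan (0 = never)
powercfg /change disk-timeout-ac 0       # Hard disk: never turn off
powercfg /change monitor-timeout-ac 0    # Display: never turn off
powercfg /change standby-timeout-ac 0    # Never sleep while on AC power
```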

Power Throttling

Power throttling in Windows 11 is an intelligent, user-aware feature that automatically limits CPU resources for background tasks to conserve energy and extend battery life. By identifying non-essential, background-running applications, it reduces power consumption without slowing down active, foreground apps.

To determine if it is active, go into Settings > System > Power and look for Power Mode.

If you are using a high-performance power plan, this feature is usually disabled.

If you are running a power plan where power throttling is enabled, and you don’t want to disable it globally, you can still maximize performance by disabling it for just the Workstation executable:

powercfg /powerthrottling disable /path "C:\Program Files (x86)\VMware\VMware Workstation\x64\vmware-vmx.exe"
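To confirm the override took effect, powercfg can list the current power throttling overrides; vmware-vmx.exe should appear in the output:

```powershell
# Show processes with power throttling overrides
powercfg /powerthrottling list
```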

Sleep States

Depending on your hardware you may or may not have various sleep states enabled. Ultimately, for my deployment I don’t want any enabled.

To check which states are available, from a command prompt type ‘powercfg /a’ and adjust as needed.
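As an example, here is how I check the available states and turn off hibernation (which also disables Fast Startup):

```powershell
# Show which sleep states this system supports
powercfg /a

# Disable hibernation; removes hiberfil.sys and disables Fast Startup
powercfg /h off
```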

Memory Page files

In my design I don’t plan to overcommit physical RAM (640GB) for my nested VMs. To maximize performance and ensure VMware Workstation uses physical memory exclusively, I follow these steps: Configure Global Memory Preferences, Disable Memory Trimming for each VM, Force RAM-Only Operation, and adjust the Windows Page Files.

  • 1) Configure Global Memory Preferences: This setting tells VMware how to prioritize physical RAM for all virtual machines running on the host. 
    • Open Workstation > Edit > Preferences > Memory
    • In the Additional memory section, select the radio button for “Fit all virtual machine memory into reserved host RAM”.
  • 2) Disable Memory Trimming for each VM: Windows and VMware use “trimming” to reclaim unused VM memory for the host. Since RAM will not be overallocated, I disable this to prevent VMs from ever swapping to disk.
    • Right-click your VM and select Settings
    • Go to the Options tab and select the Advanced category.
    • Check the box for “Disable memory page trimming”.
    • Click OK and restart the VM
  • 3) Force RAM-Only Operation (config.ini): This is an advanced step that prevents VMware from creating .vmem swap files, forcing it to use physical RAM or the Windows Page File instead.
    • Close VMware Workstation completely.
    • Navigate to C:\ProgramData\VMware\VMware Workstation\ in File Explorer (Note: ProgramData is a hidden folder).
    • Open the file named config.ini with Notepad (you may need to run Notepad as Administrator).
    • Add the following lines to the end of the file:
      • mainMem.useNamedFile = "FALSE"
      • prefvmx.minVmMemPct = "100"
    • Save the file and restart your computer
  • 4) Windows Page Files: With 640GB of RAM Windows 11 makes a huge memory page file. Though I don’t need one this large I still need one for crash dumps, core functionality, and memory management. According to Microsoft, for a high-memory workstation or server, a fixed page file of 16GB to 32GB is the “sweet spot.” I’m going a bit larger.
    • Go to System > About > Advanced system Settings
    • System Properties window appears, under Performance choose ‘Settings’
    • Performance Options appears > Advanced > under Virtual memory choose ‘change’
    • Uncheck ‘Automatically manage paging…’
    • Choose Custom size: Initial size 64000 MB, Maximum size 84000 MB
    • Click ‘Set’ > OK
    • Restart the computer
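The same page-file change can be scripted. This is a hedged sketch using the Win32_PageFileSetting CIM class (sizes are in MB, matching the values above); note that if automatic management was previously on, the page file setting object may not exist until after a reboot:

```powershell
# Turn off automatic page file management
$cs = Get-CimInstance -ClassName Win32_ComputerSystem
Set-CimInstance -InputObject $cs -Property @{ AutomaticManagedPagefile = $false }

# Set a fixed page file on C: (values in MB: 64GB initial, 84GB maximum)
$pf = Get-CimInstance -ClassName Win32_PageFileSetting | Where-Object { $_.Name -like "C:*" }
Set-CimInstance -InputObject $pf -Property @{ InitialSize = 64000; MaximumSize = 84000 }

# Reboot for the new page file size to take effect
```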

Windows Visual Effects Performance

The visual effects in Windows 11 can be very helpful but they can also minimally slow down your performance. I prefer to create a custom profile and only enable ‘Smooth edges of screen fonts’

  • Go to System > About > Advanced system Settings
  • System Properties window appears,
  • On the Advanced Tab, under Performance choose ‘Settings’
  • On the Visual Effects tab choose ‘Custom’ and select ‘Smooth edges of screen fonts’

Disable BitLocker

Windows 11 (especially version 24H2 and later) may automatically re-enable encryption during a fresh install or major update. By default, installing Windows 11 requires a TPM 2.0 chip and UEFI firmware with Secure Boot enabled. BitLocker uses these features to “do its work”.

But, there are a couple of ways to disable BitLocker.

  • Create a Custom ISO
    • My deployment doesn’t have a TPM module, nor is Secure Boot enabled. To bypass these requirements I used Rufus to make the Windows 11 USB install disk. This means BitLocker cannot be enabled.
  • Registry Edit (Post-Installation – may already be set):
    • Press Win + R, type regedit, and press Enter
    • Navigate to: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\BitLocker
    • Right-click in the right pane, select New > DWORD (32-bit) Value
    • Name it PreventDeviceEncryption and set its value to 1
    • Disable the Service:
      • Press Win + R, type services.msc, and press Enter.
      • Find BitLocker Drive Encryption Service.
      • Right-click it, select Properties, set the “Startup type” to Disabled, and click Apply.
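The registry and service steps above can also be run from an elevated prompt. BDESVC is the internal name of the BitLocker Drive Encryption Service, and manage-bde confirms no volumes are encrypted:

```powershell
# Block automatic device encryption (same value the registry steps set)
reg add "HKLM\SYSTEM\CurrentControlSet\Control\BitLocker" /v PreventDeviceEncryption /t REG_DWORD /d 1 /f

# Disable the BitLocker service (use sc.exe; plain 'sc' is a PowerShell alias)
sc.exe config BDESVC start= disabled

# Verify that no volumes are encrypted
manage-bde -status
```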

Disable Side-Channel Mitigations: Disabling these can boost performance, especially on older processors, but may reduce security.

  • Open the Windows Security app by searching for it in the Start menu.
  • Select Device security from the left panel.
  • Click on the Core isolation details link.
  • Toggle the switch for Memory integrity to Off.
  • Select Yes when the User Account Control (UAC) prompt appears.
  • Restart your computer for the changes to take effect

Note: if your host is running Hyper-V virtualization, you may need to check the “disable side channel mitigations for Hyper-V enabled hosts” option in the advanced options for each Workstation VM.
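That per-VM checkbox maps to a single .vmx entry, so with the VM powered off you can also set it directly. This is a hedged sketch of the key I understand Workstation writes for the option:

```ini
; Disable side-channel mitigations for this VM (Hyper-V enabled hosts)
ulm.disableMitigations = "TRUE"
```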

Clean out unused Devices:

Windows leaves behind all types of unused devices that are hidden from your view in Device Manager. Though these are usually pretty harmless, it’s a best practice to clean them up from time to time.

The quickest way to do this is using a tool called Device Cleanup Tool. Check out my video for more how to with this tool.

Here is Device Cleanup Tool running on my newly (<2 months) installed system. As you can see unused devices can build up even after a short time frame.

Debloat, Clean up, and so much more

There are several standard Windows features, pieces of software, and cleanup tools that can impact the performance of my deployment. I prefer to run tools that help optimize Windows due to their ability to complete tasks quickly. The tool I use to debloat and clean up my system is Winutil. It’s been a proven utility not only for optimizing systems, installing software, and applying updates, but for helping to maintain them too. For more information about Winutil check out their most recent update.

For ‘Tweaking’ new installs I do the following:

  • Launch the WinUtil program
  • Click on Tweaks
  • Choose Standard
  • Unselect ‘Run Disk Cleanup’
  • Click on Run Tweaks

Additionally, you may have noticed Winutil can create an Ultimate Performance power plan. That may come in handy.

Remove Windows Programs:

Here is a list of all the Windows Programs I remove, they are simply not needed for a Workstation Deployment. Some of these can be removed using the WinUtil.

  • Cortana
  • Copilot
  • Camera
  • Game Bar
  • Teams
  • News
  • Mail and Calendar
  • Maps
  • Microsoft OneDrive
  • Microsoft To Do
  • Movies and TV
  • People
  • Phone Link
  • Solitaire
  • Sticky Notes
  • Tips
  • Weather
  • Xbox / Xbox Live

References and Other Performance Articles:

Using PowerShell to setup AV exceptions for Workstation 25H2u1 and Windows 11


Adding AV exceptions to your Workstation deployment on Windows 11 can really improve the overall performance. In this blog post I’ll share my exclusion script recently used on my environment.

NOTE: You cannot just cut, paste, and run the script below. There are parts of the script that have to be configured for the intended system.

What the script does.

  • For Windows 11 users with Microsoft’s Virus & threat protection enabled, this script tells AV not to scan specific file types, folders, and processes related to VMware Workstation.
  • It will ignore exceptions that already exist.
  • It will display appropriate messages (Successful, or Already Exists) as it completes tasks.

What is the risk?

  • Adding an exception (or exclusion) to antivirus (AV) software, while sometimes necessary for application functionality, significantly lowers the security posture of a device. The primary risk is creating a security blind spot where malicious code can be downloaded, stored, and executed without being detected.
  • Use at your own risk. This code is for my personal use.

What will the code do?

It will add several exclusions listed below.

  • File Type: Exclude these specific VMware file types from being scanned: 
    • .vmdk: Virtual machine disk files (the largest and most I/O intensive).
    • .vmem: Virtual machine paging/memory files.
    • .vmsn: Virtual machine snapshot files.
    • .vmsd: Metadata for snapshots.
    • .vmss: Suspended state files.
    • .lck: Disk consistency lock files.
    • .nvram: Virtual BIOS/firmware settings. 
  • Folder: Unique to my deployment, it will exclude the following directories to prevent your antivirus from interfering with VM operations: 
    • VM Storage Folders: Exclude the main directories where I store my virtual machines.
    • Installation Folder: Exclude the VMware Workstation installation path (default: C:\Program Files (x86)\VMware\VMware Workstation).
  • Process:
    • vmware.exe: The main Workstation interface.
    • vmware-vmx.exe: The core process that actually runs each virtual machine.
    • vmnat.exe: Handles virtual networking (NAT).
    • vmnetdhcp.exe: Handles DHCP for virtual networks.

The Script

Under the section ‘#1. Define your exclusions’ is where I adapted this code to match my environment.

# Check for Administrator privileges
if (-not ([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole]::Administrator)) {
    Write-Warning "Please run this script as an Administrator."
    break
}

# 1. Define your exclusions

# This is where you put in YOUR folder exclusions
$folders = @("C:\Program Files (x86)\VMware\VMware Workstation", "D:\Virtual Machines", "F:\Virtual Machines", "G:\Virtual Machines", "H:\Virtual Machines", "I:\Virtual Machines", "J:\Virtual Machines", "K:\Virtual Machines", "L:\Virtual Machines")

# These are the common process exclusions
$processes = @("C:\Program Files (x86)\VMware\VMware Workstation\vmware.exe", "C:\Program Files (x86)\VMware\VMware Workstation\x64\vmware-vmx.exe", "C:\Program Files (x86)\VMware\VMware Workstation\vmnat.exe", "C:\Program Files (x86)\VMware\VMware Workstation\vmnetdhcp.exe")

# These are the common extension exclusions (.vmsn included so snapshot files are covered)
$extensions = @(".vmdk", ".vmem", ".vmsn", ".vmsd", ".vmss", ".lck", ".nvram")

# Retrieve current settings once for efficiency
$currentPrefs = Get-MpPreference
Write-Host "Checking and applying Windows Defender Exclusions..." -ForegroundColor Cyan

# --- Validate and Add Folders ---
foreach ($folder in $folders) {
    if ($currentPrefs.ExclusionPath -contains $folder) {
        Write-Host "Note: Folder exclusion already exists, skipping: $folder" -ForegroundColor Yellow
    } else {
        Add-MpPreference -ExclusionPath $folder
        Write-Host "Successfully added folder: $folder" -ForegroundColor Green
    }
}

# --- Validate and Add Processes ---
foreach ($proc in $processes) {
    if ($currentPrefs.ExclusionProcess -contains $proc) {
        Write-Host "Note: Process exclusion already exists, skipping: $proc" -ForegroundColor Yellow
    } else {
        Add-MpPreference -ExclusionProcess $proc
        Write-Host "Successfully added process: $proc" -ForegroundColor Green
    }
}

# --- Validate and Add Extensions ---
foreach ($ext in $extensions) {
    if ($currentPrefs.ExclusionExtension -contains $ext) {
        Write-Host "Note: Extension exclusion already exists, skipping: $ext" -ForegroundColor Yellow
    } else {
        Add-MpPreference -ExclusionExtension $ext
        Write-Host "Successfully added extension: $ext" -ForegroundColor Green
    }
}

Write-Host "`nAll exclusion checks complete." -ForegroundColor Cyan

The Output

In the output below, when the script creates an item successfully it shows in green. If it detects a duplicate it outputs a message in yellow. I ran the script with a .vmdk exclusion already existing to test it out.

When it’s complete, the AV exclusions in Windows should look similar to the partial screenshot below.

To view the exclusions, in Win 11 open ‘Virus & Threat Protection’ > Manage Settings > under Exclusions choose ‘Add or remove exclusions’

VMware Workstation Gen 9: BOM2 P3 Workstation Installation and configuration


Now that my hardware and OS are ready the next step is installing Workstation and adding my previously configured VCF 9 VMs. In this blog I’ll cover these steps and get the VCF Environment up and running.

Workstation Pro 25H2u1 Update

I’ll need to download VMware Workstation Pro 25H2u1. The good news is, it’s free and users can download it at the Broadcom support portal. You can find it there under FREE Downloads.

Tip: Don’t forget to click on the “Terms and Conditions” link, then click the “I agree…” check box. It’s required before you can download this product.

Before I install Workstation I validate that Windows Hyper-V is not enabled. To do this, I go into Windows Features, ensure that Hyper-V and Windows Hypervisor Platform are NOT checked.
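A quick PowerShell sketch of that validation (run elevated); it lists the Hyper-V related optional features and shows how they can be disabled if found enabled:

```powershell
# Check whether Hyper-V related optional features are enabled
Get-WindowsOptionalFeature -Online |
    Where-Object { $_.FeatureName -match "Hyper-V|HypervisorPlatform" } |
    Select-Object FeatureName, State

# If any show as Enabled, disable them (a reboot is required):
# Disable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-All
# Disable-WindowsOptionalFeature -Online -FeatureName HypervisorPlatform
```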

Next I ensure I set a static IP address to my Windows system and give the NIC a unique name.
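For reference, the static IP and NIC rename can be done in PowerShell as well; the adapter name and addresses below are placeholders, not values from this build:

```powershell
# Rename the adapter so it's easy to identify in the Virtual Network Editor
Rename-NetAdapter -Name "Ethernet" -NewName "WS-Uplink"

# Assign a static IP and DNS (example values; adjust for your network)
New-NetIPAddress -InterfaceAlias "WS-Uplink" -IPAddress 192.168.1.50 -PrefixLength 24 -DefaultGateway 192.168.1.1
Set-DnsClientServerAddress -InterfaceAlias "WS-Uplink" -ServerAddresses 192.168.1.1
```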

Tip: Want a quick way to get to your network adapters in Windows 11? Check out my blog post on how to make a super quick Network Settings shortcut.

Once confirmed I install Workstation Pro 25H2u1. For more information on how to install Workstation 25H2u1 see my blog.

Restore Workstation LAN Segments

After the Workstation installation is complete, I go into the Virtual Network Editor. I delete the other VMnets and adjust VMnet0 to match the correct network adapter.

Next I add in one VM and then recreate all the VLAN Segments. For more information on this process, see my post under LAN Segments.

I add in the rest of my VMs and simply assign their LAN Segments.

This is what I love about Workstation: I was able to change and reconstruct a new server, migrate storage devices, and then recover my entire VCF 9 environment. In my next post I’ll cover how I set up Windows 11 for better performance.

Upgrading Workstation 25H2 to 25H2u1


On February 26, 2026, VMware released Workstation Pro 25H2u1. It’s an update that repairs a few bugs and includes security patches. In this blog I’ll cover how to upgrade to it.

Helpful links

Meet the Requirements:

When installing/upgrading Workstation on Windows most folks seem to overlook the requirements for Workstation and just install the product. You can review the requirements here.

There are a couple of items folks commonly miss when installing Workstation.

  • The number one issue is the Processor Requirements for Host Systems. Either they have a CPU that is not supported, or they simply did not enable virtualization support in the BIOS.
  • The second item is Microsoft Hyper-V enabled systems. Workstation supports a system with Hyper-V enabled, but for the BEST performance it’s best to just disable these features.
  • Next, if you’ve upgraded your OS but never reinstalled Workstation, it’s advised to uninstall and then reinstall Workstation.
  • Lastly, if doing a fresh OS install, ensure drivers are updated and DirectX is at a supported version.

How to download Workstation

Download the Workstation Pro 25H2u1 for Windows. Make sure you click on the ‘Terms and Conditions’ AND check the box. Only then can you click on the download icon.

Choose your install path

Before you do a fresh install:

If you are upgrading Workstation, review the following:

  • Ensure your environment is compatible with the requirements
  • If you have existing VMs:

Note: If you are upgrading the Windows OS or in the past have done a Windows Upgrade (Win 10 to 11), you must uninstall Workstation first, and then reinstall Workstation. More information here.

Upgrading Workstation 25H2 to 25H2u1

Run the downloaded file, wait a minute for it to confirm space requirements, then click Next.

Accept the EULA.

Compatible setup check

Confirm install directory

Check for updates and join the CEIP program.

Allow it to create shortcuts

Click Upgrade to complete the upgrade

Click on Finish to complete the Wizard.

Lastly, check the version number of Workstation. Open Workstation > Help > About

VMware Workstation Gen 9: BOM2 P1 Motherboard upgrade


To take the next step in deploying my nested VCF 9 and adding VCF Automation, I’m going to need to make some updates to my Workstation Home Lab. BOM1 simply doesn’t have enough RAM, and I’m a bit concerned about VCF Automation being CPU/Core demanding. In this blog post I’ll cover some of the products I chose for BOM2.

A bit of Background

It should be noted, my ASRock Rack motherboard (BOM1) was performing well with nested VCF9. However, it was constrained by its available memory slots plus only supported one CPU. I considered upgrading to higher-capacity DIMMs; however, the cost was prohibitive. Ultimately, replacing the motherboard proved to be a more cost-effective solution, allowing me to leverage the memory and CPU I already owned.

Initially, I chose the Gigabyte MD71-HB0. At the time it was rather affordable, but it lacked PCIe bifurcation. Bifurcation is a feature I needed to support dual NVMe disks in one PCIe slot. To overcome this I chose the RIITOP M.2 NVMe SSD to PCI-e 3.1 adapter. These cards essentially emulate a bifurcated PCIe slot but added additional expense to the solution. Though I was able to get my nested VCF environment up and running, it was short-lived due to a physical fault. I was able to return the board, but to buy it again the cost had doubled, so I went a different direction. If you are interested in my write-up about the Gigabyte mobo click here, but do know I am no longer using it.

My Motherboard choice for BOM2

I started looking for a motherboard that would fit my needs. Some of the features I was looking for were: support for dual Gold 6252 CPUs, support for my existing 32/64GB RAM modules, adequate PCIe slots, fit in my case, reasonable power needs, and support for bifurcation. The motherboard I chose was the SuperMicro X11DPH-T. Buying it refurbished was a way to keep the cost down and meet my needs.

The migration from BOM1 to BOM2

The table below outlines the changes planned from BOM1 to BOM2. There were minimal unused products from the original configuration, and after migrating components, the updated build will provide more than sufficient resources to meet my VCF 9 compute/RAM requirements.

Pro Tip: When assembling new hardware, I take a methodical, incremental approach. I install and validate one component at a time, which makes troubleshooting far easier if an issue arises. I typically start with the CPUs and a minimal amount of RAM, then scale up to the full memory configuration, followed by the video card, add-in cards, and then storage. It’s a practical application of the old adage: don’t bite off more than you can chew—or in this case, compute.

KEEP from BOM1:

  • Cooler: 1 x Noctua NH-D9 DX-3647 4U
  • RAM: 384GB (4 x 64GB Samsung M393A8G40MB2-CVFBY and 4 x 32GB Micron MTA36ASF4G72PZ-2G9E2)
  • NVMe: 6 x Sabrent 2TB ROCKET NVMe PCIe (Workstation VMs)
  • HDD: 1 x Seagate IronWolf Pro 18TB
  • SSD: 1 x 3.84TB Intel D3-4510 (Workstation VMs)
  • Case: Phanteks Enthoo Pro series PH-ES614PC_BK Black Steel
  • Power Supply: MAG A1000GL 1000 Watt

Added to create BOM2:

  • Mobo: SuperMicro X11DPH-T
  • CPU: 2 x Xeon Gold ES 6252 (new net total 48 pCores)
  • Cooler: 1 x Noctua NH-D9 DX-3647 4U
  • RAM: 8 x 32GB Micron MTA36ASF4G72PZ-2G9E2 (new net total 640GB)
  • NVMe Adapter: 3 x Supermicro PCI-E Add-On Card for up to two NVMe SSDs
  • Disk Cables: 2 x Mini SAS to 4 SATA Cable, 36 Pin SFF 8087
  • Boot/Extra Disk: 2 x Optane 4800X 1.5TB Disk
  • NVMe Adapter: 2 x PCIe 4x to U.2 NVMe Adapter

UNUSED:

  • Mobo: ASRock Rack EPC621D8A
  • CPU: 1 x Xeon Gold ES 6252 (ES means Engineering Sample)
  • 10GbE NIC: ASUS XG-C100C 10G Network Adapter
  • NVMe: 2 x 1TB NVMe (Win 11 Boot Disk and Workstation VMs)
  • Video Card: GIGABYTE GeForce GTX 1650 SUPER

Noise and Power

I commonly get asked, “How is the noise and power consumption on this build?” Fan noise is whisper quiet. It’s one of the reasons I chose to do a DIY build over buying a server. The Phanteks Enthoo case fans and the Noctua fans do a great job keeping the noise levels down. They may spin up from time to time, but it’s nothing compared to the noise a server chassis might make. For power I’m seeing it nominally at ~135 Watts. However, I haven’t spun up my workloads yet, so this may increase.

Uniqueness with the SuperMicro X11DPH-T

Issue 1 – Xeon Gold 6252 Engineering Samples (ES) issues with RAM

I had to switch from ES CPUs to GA released CPUs. The good news was that the price of the Xeon Gold 6252 is at an all-time low. The ES CPUs had issues with memory timing. With an ES CPU it’s sometimes hard to pinpoint why it is failing, but once I replaced them the following errors went away. With the cost being so low for actual GA CPUs, I will avoid using ES CPUs for this build.

  • Memory training failure. – Assertion
  • Failing DIMM: DIMM location (Uncorrectable memory component found). (P2-DIMME1) – Assertion

Issue 2 – Fan header blocked by PCIe slot.

The 2nd CPU fan header is blocked if a PCIe card is in this slot. I can only assume they had no other choice, but why put it here?

Issue 3 – NVMe placement

The NVMe slots are placed directly behind most of the PCIe slots, at the same level as the PCIe slots. This blocks the insertion of any long PCIe cards. So if you want to install a long GPU/video card, you’ll have to forgo the onboard NVMe.

Issue 4 – Blocked I-SATA Ports

The edge connectors for the I-SATA ports can become blocked if you are using a long (225mm or greater) PCIe card.

PCIe Slot Placement:

For the best disk performance, PCIe slot placement is really important. Things to consider are the speed and size of the devices, and how the data will flow. Typically, if data has to flow between CPUs or through the C621 chipset then, though minor, some latency is induced. If you have a larger video card, like the 1650 Super, it’ll need to be placed in a PCIe slot that supports its length and doesn’t interfere with onboard connectors or RAM modules.

The best way to layout your PCIe Devices is to look at a System Block Diagram (Fig-1). A good one will give you all types of information that you can use to optimize your deployment. Things I look for are the PCIe slots, how fast they are, which CPU are they attached to, and are they shared with other devices.

Fig-1

Using Fig-1, here is how I laid out my devices.

  • Slots 7 and 6: Optane 1.5TB disks, used for boot and VMs
  • Slots 5, 4, and 3: dual 2TB NVMe disks
  • I-SATA ports for SATA drives (18TB HDD backup, 3.84TB SSD for VMs)

Other Thoughts:

  • I did look for other mobos, workstations, and servers but most were really expensive. The upgrades I had to choose from were a bit constrained due to the products I had on hand (DDR4 RAM and the Xeon 6252 LGA-3647 CPUs). This narrowed what I could select from.
  • The SuperMicro motherboard requires 2 CPUs if you want to use all the PCIe slots.

Now starts the fun, in the next posts I’ll finalize the install of Windows 11/Workstation, tune its performance, and get my VCF 9 Workstation VMs operational.

My Silicon Treasures: Mapping My Home Lab Motherboards Since 2009


I’ve been architecting home labs since the 90s, an era dominated by bare-metal Windows Servers and Cisco products. In 2008, my focus shifted toward virtualization, specifically building out VMware-based environments. What began as repurposing spare hardware for VMware Workstation quickly evolved. As my resource requirements scaled, I transitioned to dedicated server builds. Aside from a brief stint with Gen8 enterprise hardware, my philosophy has always been “built, not bought,” favoring custom component selection over off-the-shelf rack servers. I’ve documented this architectural evolution over the years, and in this post, I’m diving into the specific motherboards that powered my past home labs.

Gen 1: 2009-2011 GA-EP43-UD3L Workstation 7 | ESX 3-4.x

Back in 2009, I was working for a local hospital in Phoenix and running the Phoenix VMUG. I deployed a Workstation 7 home lab on this Gigabyte motherboard. Though my deployment was simple, I was able to deploy ESX 3.5 – 4.x with only 8GB of RAM and attach it to an IOMega ix4-200d. I used it at our Phoenix VMUG meetings to teach others about home labs. I found the receipt for the CPU ($150) and motherboard ($77); wow, prices sure have changed.

REF Link – Home Lab – Install of ESX 3.5 and 4.0 on Workstation 7

Gen2: 2011-2013 Gigabyte GA-Z68XP-UD3 Workstation 8 | ESXi 4-5

Gen1 worked quite well for what I needed, but it was time to expand as I started working for VMware as a Technical Account Manager. I needed to keep my skills sharp and deploy more complex home lab environments. Though I didn’t know it back then, this was the start of my HOME LABS: A DEFINITIVE GUIDE. I really started to blog about the plan to update and why I was making different choices. I ran into a very unusual issue that neither Gigabyte nor Hitachi could figure out, which I blogged about here.

Deployed with an i7-2600 ($300), Gigabyte GA-Z68XP-UD3 ($150), and 16GB DDR3 RAM

REF Link: Update to my Home Lab with VMware Workstation 8 – Part 1 Why

Gen2: Zotac M880G-ITX then the ASRock FM2A85X-ITX | FreeNAS Server

Back in the day I needed better performance from my shared storage, as the IOMega had reached its limits. Enter the short-lived FreeNAS server in my home lab. Yes, it did perform better, but man, it was full of bugs and issues, some due to the Zotac motherboard and some due to FreeNAS. I was happy to move on to vSAN with Gen3.

REF: Home Lab – freeNAS build with LIAN LI PC-Q25, and Zotac M880G-ITX

Gen3: 2012-2016 MSI Z68MA-G45 (B3) | ESXi 5-6

I needed to expand my home lab into dedicated hosts. Enter the MSI Z68MA-G45 (B3). It would become my workhorse, expanding from one server with the Gen 2 Workstation to 3 dedicated hosts running vSAN.

REF: VSAN – The Migration from FreeNAS

Gen4: 2016-2019 Gigabyte MX31-BS0 

This mobo was used in my ‘To InfiniBand and beyond’ blog series. It had some “wonkiness” with its firmware updates, but other than that it was a solid performer. Deployed with an E3-1500 and 32GB RAM.

REF: Home Lab Gen IV – Part I: To InfiniBand and beyond!

Gen 5: 2019-2020 JINGSHA X79

I had maxed out Gen 4 and really needed to expand my CPU cores and RAM, hence the blog series title – ‘The Quest for More Cores!’. Deployed with 128GB RAM and a Xeon E5-2640 v2 with 8 cores, it fit the bill. This series is where I started YouTube videos and documenting my builds per my design guides. Though this mobo was good for its design, its lack of PCIe slots made it short-lived.

REF: Home Lab GEN V: The Quest for More Cores! – First Look

Gen 7: 2020-2023: Supermicro X9DRD-7LN4F-JBOD and the MSI PRO Z390-A PRO

The Gen 5 motherboard fell short when I wanted to deploy an all-flash vSAN based on NVMe. With this Supermicro motherboard I had no issues with IO, deploying it as all-flash vSAN. It also caught the attention of Intel, which offered me their Optane drives to create an all-flash Optane system. More on that in Gen 8.

The MSI motherboard was a needed update to my VMware Workstation system. I built it up as a Workstation / Plex server and it did this job quite well.

This generation is when I started to align my Gen #s to vSphere releases, which makes them much easier to track.

REF: Home Lab Generation 7: Updating from Gen 5 to Gen 7

Gen 8: 2023-2024 Dell T7820 VMware Dedicated Hosts

With some support from Intel I was able to uplift my 3 x Dell T7820 workstations into a great home lab. They supplied Engineering Sample CPUs, RAM, and Optane disks. Plus, I was able to coordinate the distribution of Optane disks to vExperts globally. It was a great homelab and I learned a ton!

REF: Home Lab Generation 8 Parts List (Part 2)

Gen 8-9: 2023-2026 ASRock Rack EPC621D8A VMware Workstation Motherboard

Evolving my Workstation PC, I used this ASRock Rack motherboard. It was the perfect solution for running nested clusters of ESXi VMs with vSAN ESA. Until very recently it was a really solid mobo; I even got it to run a nested VCF 9 simple install.

REF: Announcing my Generation 8 Super VMware Workstation!

Gen 9: 2024 – Current

As of this date it’s still under development. See my Home Lab BOM for more information. However, I’m moving my home lab to a nested-only VCF 9 deployment on Workstation rather than dedicated servers.

VMware Workstation Gen 9: Part 3 Windows Core Services and Routing

Posted on Updated on

A big part of my nested VCF 9 environment relies on core services. Core services are AD, NTP, DHCP, and RAS. Core services are supplied by my Windows Server (aka AD230.nested.local). Of those services, RAS will enable routing between the LAN Segments and allow for Internet access. Additionally, I have a VM named DomainTools. DomainTools is used for testing network connectivity, SSH, WinSCP, and other tools. In this blog I’ll create both of these VMs and adapt them to work in my new VCF 9 environment.

Create the Windows Server and establish core services

A few years back I published a Workstation 17 YouTube multipart series on how to create a nested vSphere 8 with vSAN ESA. Part of that series was creating a Windows Server with core services. For my VCF 9 environment I’ll need to create a new Windows server with the same core services. To create a similar Windows Server I used my past 2 videos: VMware Workstation 17 Nested Home Lab Part 4A and 4B.

Windows Server updates for the VCF 9 environment

Now that I have established AD230 I need to update it to match the VCF 9 networks. I’ll be adding additional vNICs, attaching them to networks, and then ensuring traffic can route via the RAS service. Additionally, I created a new Windows 11 VM named DomainTools. I’ll use DomainTools for network connectivity testing and other functions. Fig-1 shows the NIC to network layout that I will be following.

(Fig-1)

Adjustments to AD230 and DomainTools

I power off AD230 and DomainTools. On both, I add the appropriate vNICs and align them to the LAN segments. Next, I edit their VMware VM configuration (.vmx) files, changing the vNICs from “e1000e” to “vmxnet3”.
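Swapping every vNIC from “e1000e” to “vmxnet3” is a simple text edit in each VM’s .vmx file, done with the VM powered off. As a minimal sketch, assuming the stock `ethernetN.virtualDev` key names Workstation writes, a small script can make the change for every vNIC at once:

```python
import re

def convert_vnics(vmx_text: str) -> str:
    """Rewrite every ethernetN.virtualDev entry from e1000e to vmxnet3."""
    return re.sub(
        r'(?m)^(ethernet\d+\.virtualDev\s*=\s*)"e1000e"',
        r'\1"vmxnet3"',
        vmx_text,
    )

# Example .vmx fragment with two vNICs
sample = 'ethernet0.virtualDev = "e1000e"\nethernet1.virtualDev = "e1000e"\n'
print(convert_vnics(sample))
```

Back up the .vmx before editing; a copy makes rollback trivial if Workstation objects to the change.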

Starting with DomainTools: for each NIC, I power the VM on, input the IPv4 information (IP address, subnet, VLAN ID), and optionally disable IPv6. The only NIC to get a default gateway is NIC1. TIP – To identify the NICs, I disconnect a NIC in the VM settings and watch for it to show as unplugged in Windows Networking; this way I know which NIC is assigned to which LAN Segment. Additionally, in Windows Networking I give each NIC a verbose name to help identify it.
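An alternative to the disconnect trick: each vNIC’s MAC address is recorded in the .vmx file as `ethernetN.generatedAddress`, and Windows shows the same MAC in `getmac` or `ipconfig /all`, so matching the two identifies every NIC without unplugging anything. A minimal sketch (the sample MACs below are made up):

```python
import re

def vmx_nic_macs(vmx_text: str) -> dict:
    """Map each ethernetN device to its MAC address from a .vmx file."""
    pattern = r'(?m)^(ethernet\d+)\.generatedAddress\s*=\s*"([^"]+)"'
    return {dev: mac.lower() for dev, mac in re.findall(pattern, vmx_text)}

sample = (
    'ethernet0.generatedAddress = "00:0C:29:AA:BB:01"\n'
    'ethernet1.generatedAddress = "00:0C:29:AA:BB:02"\n'
)
print(vmx_nic_macs(sample))
```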

I make the same network adjustments to AD230 and I update its DNS service to only supply DNS from the 10.0.10.230 network adapter.

Once completed, I do a ping test between all the networks from AD230 and DomainTools to validate IP connectivity. TIP – Use ipconfig at the CLI to check your adapter IP settings. If ping is not working, a firewall may be enabled.

Setting up RAS on AD230

Once you have your network set up correctly, validate that RAS has accepted your new adapters and their information. On AD230 I go into RAS > IPv4 > General.

I validate that my network adapters are present.

Looking ahead — RAS seemed to work right out of the box with no config needed. In all my testing below it worked fine, this may change as I advance my lab. If so, I’ll be sure to update my blog.

Next I need to validate routing between the different LAN Segments. To do this I’ll use the DomainTools VM to ensure routing is working correctly. You may notice in some of my testing results that VCF Appliances are present. I added this testing part after I had completed my VCF deployment.

I need to test all of the VLAN networks. On the DomainTools VM, I disable each network adapter except for the one I want to test. In this case I disabled every adapter except for 10-0-11-228 (VLAN 11 – VM NIC3). I then add the gateway IP of 10.0.11.1 (this is the IP address assigned to my AD230 RAS server).

Next I run ipconfig to validate the IP address, then use Angry IP Scanner to locate devices on the 10.0.10.x network. Several devices responded and resolved their DNS names, proving that DomainTools is successfully routing from the 11 network into the 10 network. I’ll repeat this process, plus an internet check, on all the remaining networks.
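My lab follows a simple convention: VLAN N lives on 10.0.N.0/24 with the RAS gateway on .1. That convention is specific to my addressing plan, but a quick sketch shows how the subnet and gateway for each per-VLAN test can be derived from it:

```python
import ipaddress

def lan_segment(vlan_id: int) -> tuple:
    """Derive the subnet and RAS gateway for a VLAN, assuming 10.0.<vlan>.0/24."""
    net = ipaddress.ip_network(f"10.0.{vlan_id}.0/24")
    gateway = net.network_address + 1  # RAS gateway sits on .1
    return str(net), str(gateway)

for vlan in (10, 11):
    subnet, gw = lan_segment(vlan)
    print(f"VLAN {vlan}: subnet {subnet}, gateway {gw}")
```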

Now that we have a stable network and core Windows services established, we are ready to move on to ESX host deployment and initial configuration.

Why your Home Lab needs a non-static port group.

Posted on Updated on

We’ve all been there: during a recovery or migration of a VCSA server we get the error – “Addition or reconfiguration of network adapters attached to non-ephemeral distributed virtual port groups is not supported.” But what does this mean, and how do I prepare for it? In this blog post I’ll cover some of the basics and how I set up my home lab.

What does non-ephemeral and ephemeral mean?

  • Non-ephemeral, or static, binding is a port group setting that guarantees a port in the vDS. Think of it like seats at a table: once a seat is assigned, it’s always reserved for that assignment.
  • Ephemeral, or non-static, binding will not guarantee a port in the switch. It’s first come, first seated at the table; if you leave the table, someone else can take your spot.
  • Of course you’d want to make sure your ESXi hosts and important VMs like the VCSA appliance have a “reserved seat at the table,” and this is why vDS port groups are static by default.
  • See this KB for more information.

What are some of the impacts of not having a non-static port group?

  • If you are doing a migration or recovery of a VM, you’ll sometimes end up at the ESXi Host Client.
  • At some point during the network discovery process it’ll determine the target network is static bound.
  • As an example, when restoring a VCSA server, if the vDS port group it’s using has static (non-ephemeral) binding, it will surely throw the error.

How do I prepare my Home Lab?

  • Choice 1 – Simply create a vDS port group with the “Ephemeral – no binding” setting that uses the same uplinks as the network I want to communicate on.
  • Choice 2 – Set your management vDS port group to “Ephemeral – no binding”.
  • By doing one of these two ahead of time, the correct network can be chosen.
  • Example – The screenshot below is a migration of a VCSA 8 server. When I get to step 4, I’m able to choose a non-static network. Had I not set up this port group ahead of time, the migration would have been more difficult.

Want more information?

  • Check out this design link that explains how VCF assigns static and non-static port groups.
  • Tech UnGlued did a good video around this topic.

VMware Workstation Gen 9: BOM2 P1 Motherboard upgrade (Failed Gigabyte board)

Posted on Updated on

**Urgent Note** The Gigabyte mobo in BOM2 was initially working well in my deployment. However, shortly after I completed this post, the mobo failed. I was able to return it, but the replacement cost had doubled. I replaced this mobo with a SuperMicro board but am keeping this post up in case someone finds it useful.

To take the next step in deploying a VCF 9 Simple stack with VCF Automation, I’m going to need to make some updates to my Workstation Home Lab. BOM1 simply doesn’t have enough RAM, and I’m a bit concerned about VCF Automation being CPU hungry. In this blog post I’ll cover some of the products I chose for BOM2.

Although my ASRock Rack motherboard (BOM1) was performing well, it was constrained by available memory capacity. I had additional 32 GB DDR4 modules on hand, but all RAM slots were already populated. I considered upgrading to higher-capacity DIMMs; however, the cost was prohibitive. Ultimately, replacing the motherboard proved to be a more cost-effective solution, allowing me to leverage the memory I already owned.

The mobo I chose was the Gigabyte MD71-HB0. It was rather affordable, but it lacked PCIe bifurcation, a feature I needed to support two NVMe disks in one PCIe slot. To overcome this I chose the RIITOP M.2 NVMe SSD to PCI-e 3.1 adapter. These cards essentially emulate a bifurcated PCIe slot, which allows for dual NVMe disks in a single PCIe slot.

The table below outlines the changes planned for BOM2. Few products from the original configuration went unused, and after migrating components, the updated build will provide more than sufficient resources to meet my VCF 9 compute/RAM requirements.

Pro Tip: When assembling new hardware, I take a methodical, incremental approach. I install and validate one component at a time, which makes troubleshooting far easier if an issue arises. I typically start with the CPUs and a minimal amount of RAM, then scale up to the full memory configuration, followed by the video card, add-in cards, and then storage. It’s a practical application of the old adage: don’t bite off more than you can chew—or in this case, compute.

KEEP from BOM1:

  • Case: Phanteks Enthoo Pro series PH-ES614PC_BK Black Steel
  • CPU: 1 x Xeon Gold ES 6252 (ES means Engineering Sample), 24 pCores
  • Cooler: 1 x Noctua NH-D9 DX-3647 4U
  • RAM: 384GB (4 x 64GB Samsung M393A8G40MB2-CVFBY, 4 x 32GB Micron MTA36ASF4G72PZ-2G9E2)
  • NVMe: 2 x 1TB NVMe (Win 11 Boot Disk and Workstation VMs)
  • NVMe: 6 x Sabrent 2TB ROCKET NVMe PCIe (Workstation VMs)
  • HDD: 1 x Seagate IronWolf Pro 18TB
  • SSD: 1 x 3.84TB Intel D3-4510 (Workstation VMs)
  • Video Card: GIGABYTE GeForce GTX 1650 SUPER
  • Power Supply: Antec NeoECO Gold ZEN 700W

Added to create BOM2:

  • Mobo: Gigabyte MD71-HB0
  • CPU: 1 x Xeon Gold ES 6252, for a new net total of 48 pCores
  • Cooler: 1 x Noctua NH-D9 DX-3647 4U
  • RAM: 8 x 32GB Micron MTA36ASF4G72PZ-2G9E2, for a new net total of 640GB
  • NVMe Adapter: 3 x RIITOP M.2 NVMe SSD to PCI-e 3.1
  • Disk Cables: 2 x Slimline SAS 4.0 SFF-8654

UNUSED:

  • Mobo: ASRock Rack EPC621D8A
  • NVMe Adapter: 3 x Supermicro PCI-E Add-On Card for up to two NVMe SSDs
  • 10Gbe NIC: ASUS XG-C100C 10G Network Adapter

PCIe Slot Placement:

For the best performance, PCIe slot placement is really important. Things to consider: the speed and size of the devices, and how the data will flow. Typically, if data has to flow between CPUs or through the C622 chipset, some latency, though minor, is induced. If you have a larger video card, like the 1650 SUPER, it’ll need to be placed in a PCIe slot that supports its length and doesn’t interfere with onboard connectors or RAM modules.

Using Fig-1 below, here is how I laid out my devices.

  • Slot 2 for Video Card. The Video card is 2 slots wide and covers Slot 1 the slowest PCIe slot
  • Slot 3 Open
  • Slot 4, 5, and 6 are the RIITOP cards with the dual NVMe
  • Slimline 1 (connected to CPU 1) has my 2 SATA drives; typically these ports are for U.2 drives, but they also work with SATA drives.

Why this PCIe layout? By isolating all my primary disks on CPU1, disk traffic doesn’t cross between CPUs or pass through the C622 chipset. My 2 NVMe disks will be attached to CPU0; they will be non-impactful to my VCF environment, as one is used to boot the system and the other supports unimportant VCF VMs.

Other Thoughts:

  • I did look at other mobos, workstations, and servers, but most were really expensive. The upgrades I could choose from were constrained by the products I had on hand (DDR4 RAM and the Xeon 6252 LGA-3647 CPUs), which narrowed my options.
  • Adding the RIITOP cards added quite a bit of expense to this deployment. Look for mobos that support bifurcation and match your needs. Even so, this combination plus the additional parts cost more than 50% less than simply updating the RAM modules.
  • The Gigabyte mobo requires 2 CPUs if you want to use all the PCIe slots.
  • Updating the Gigabyte firmware and BMC was a bit wonky. I’ve seen and blogged about these mobo issues before, hopefully their newer products have improved.
  • The layout (Fig-1) of the Gigabyte mobo included support for SlimLine U.2 connectors. These will come in handy if I deploy my U.2 Optane Disks.

(Fig-1)

Now the fun starts: in the next posts I’ll reinstall Windows 11, performance tune it, and get my VCF 9 Workstation VMs operational.