VMware Workstation Gen 9: BOM2 P4 Workstation/Win11 Performance enhancements


Many factors can impact the performance of your Workstation VMs. Running a VCF 9 stack on VMware Workstation demands every ounce of performance your Windows 11 host can provide. To ensure a smooth lab experience, certain optimizations are essential. In this post, I’ll walk through the key adjustments to maximize efficiency and responsiveness.

Note: There are a LOT of settings I changed to improve performance. I take a structured approach, trying changes one at a time rather than applying them all at once. The items listed below are what worked for my system, and they are recommended for that use case only. Unless otherwise stated, the VMs and Workstation were powered down during these adjustments.

Host BIOS/UEFI Settings

  • There are several settings to ensure stable performance with a Supermicro X11DPH-T.
  • Here is what I modified on my system.
  • Enter Setup, confirm/adjust the following, and save the changes:
    • Advanced > CPU Configuration
      • Hyper-Threading > Enabled
      • Cores Enabled > 0
      • Hardware Prefetcher > Enabled
      • Advanced Power Management Configuration
        • Power Technology > Custom
        • Power Performance Tuning > BIOS Controls EPB
        • Energy Performance BIAS Setting > Maximum Performance
        • CPU C State Control, All Disabled
    • Advanced > Chipset Configuration > North Bridge > Memory Configuration
      • Memory Frequency > 2933

Hardware Design

  • In the VMware Workstation Gen 9: BOM1 and BOM2 blogs we covered hardware design as it related to the intended load of nested VMs.
  • Topics we covered were:
    • Fast Storage: NVMe, SSD, and U.2 all contribute to VM performance
    • Placement of VM files: We placed and isolated our ESX VMs on specific disks which helps to ensure better performance
    • PCIe Placement: Using the System Block diagram I placed the devices in their optimal locations
    • Ample RAM: Include more than enough RAM to support the VCF 9 VMs
    • CPU cores: Design enough CPU cores to support the VCF 9 VMs
    • Video Card: Using a power-efficient GPU can help boost VM performance

VM Design

  • Disk Choices: Matched the VM disk type to the physical drive type they are running on. Example – NVMe physical to a VMs NVMe disk
  • CPU Settings: Match the VM CPU topology to the physical CPU socket(s). Example – a VM needs 8 cores on a physical host with 2 CPU sockets and 24 cores per socket: set the VM to 2 processors with 4 cores each (2 x 4 = 8)
  • vHardware Choices: When creating a VM, Workstation should auto-populate hardware settings. Best vNIC to use is the vmxnet3. You can use the Guest OS Guide to validate which virtual hardware devices are compatible.

Fresh Installs

  • There’s nothing like a fresh install of the base OS to provide a reliable foundation for performance improvements.
  • When Workstation is installed it adapts to the base OS, and there can be performance gains due to this adaptation.
  • However, if you upgrade the OS (Win10 to Win11) with Workstation already installed, you should always fully uninstall and reinstall Workstation post-upgrade for optimal performance.
  • Additionally, when installing Workstation I ensure that Hyper-V is disabled as it can impact Workstation performance.
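As a sketch of that last check, the Hyper-V features can be inspected and turned off from an elevated PowerShell prompt. The feature names below are the ones I’d expect on current Windows 11 builds; verify them against your system before running, and note a reboot is required afterward.

```powershell
# Check whether the Hyper-V features are currently enabled
Get-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-All
Get-WindowsOptionalFeature -Online -FeatureName HypervisorPlatform

# Disable them (reboot required for the change to take effect)
Disable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-All
Disable-WindowsOptionalFeature -Online -FeatureName HypervisorPlatform
```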

Exclude Virtual Machine Directories From Antivirus Tools

NOTE — AV exceptions exclude certain files, folders, and processes from being scanned. Adding these can improve Workstation performance, but enabling AV exceptions carries security risks. Users should do what’s best for their environment. Below is how I set up my environment.

  • Script: Use a script to create AV Exceptions. For an example check out my blog – Using PowerShell to setup AV exceptions for Workstation 25H2u1 and Windows 11.
  • Manual Steps: Manually set up the following exceptions for Windows 11.
    • Open Virus and Threat Protection
    • Virus & threat protection settings > Manage Settings
    • Under ‘Exclusions’ choose ‘Add or remove exclusions’
    • Click on ‘+ Add an exclusion’
    • Choose your type (File, Folder, File Type, Process)
    • File Type: Exclude these specific VMware file types from being scanned: 
      • .vmdk: Virtual machine disk files (the largest and most I/O intensive).
      • .vmem: Virtual machine paging/memory files.
      • .vmsn: Virtual machine snapshot files.
      • .vmsd: Metadata for snapshots.
      • .vmss: Suspended state files.
      • .lck: Disk consistency lock files.
      • .nvram: Virtual BIOS/firmware settings. 
    • Folder: Exclude the following directories to prevent your antivirus from interfering with VM operations 
      • Installation Folder: Exclude the VMware Workstation installation path (default: C:\Program Files (x86)\VMware\VMware Workstation\).
      • VM Storage Folders: Exclude the main directory where you store your virtual machines.
      • VMware Tools: If you have the VMware Tools installation files extracted locally, exclude that folder as well. 
    • Process: Adding these executable processes to your antivirus exclusion list can prevent lag caused by the AV monitoring VMware’s internal actions: 
      • vmware.exe: The main Workstation interface.
      • vmware-vmx.exe: The core process that actually runs each virtual machine.
      • vmnat.exe: Handles virtual networking (NAT).
      • vmnetdhcp.exe: Handles DHCP for virtual networks.

Power Plan

Typically, Windows 11 has the “Balanced” power plan enabled by default. Though these settings are good for normal use cases, using your system as a dedicated VMware Workstation host calls for a better plan.

Below I show two ways to adjust a power plan: 1) use a script to create a custom plan, or 2) manually make similar adjustments.

  • 1) Script: I created a script that creates a custom power plan named “VMware Workstation Performance Plan” and makes all the needed changes for my system. You can find my blog here.
  • 2) Manual Adjustments:
    • Open the power plan: Control Panel > Hardware and Sound > Power Options
    • You might see “Change settings that are currently unavailable” on a page; just click it before making changes.
    • Set Power Plan:
      • Click on ‘Show additional plans’ to reveal the hidden plans.
      • Choose either “Ultimate Performance” or “High Performance” plan and then click on “Change plan settings”
      • Hard Disk > 0 Minutes
      • Wireless Adapter Settings > Max Performance
      • USB > Hub Selective Suspend Time out > 0
      • PCI Express > Link State Power Management > off
      • Processor power management > Both to 100%
      • Display > Turn off Display > Never
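Most of the manual steps above can also be scripted with powercfg. This is a hedged sketch: the GUID below is the well-known Ultimate Performance scheme ID, and the duplicated plan gets a new GUID that the first command prints, which you then paste into the second command.

```powershell
# Clone the hidden Ultimate Performance plan; note the new GUID it prints
powercfg /duplicatescheme e9a42b02-d5df-448d-aa00-03f14749eb61

# Activate the clone (replace <new-guid> with the GUID printed above)
powercfg /setactive <new-guid>

# Never power down disks or the display while on AC (values in minutes)
powercfg /change disk-timeout-ac 0
powercfg /change monitor-timeout-ac 0

# Pin the processor minimum state to 100% on the active plan, then re-apply it
powercfg /setacvalueindex scheme_current sub_processor procthrottlemin 100
powercfg /setactive scheme_current
```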

Power Throttling

Power throttling in Windows 11 is an intelligent, user-aware feature that automatically limits CPU resources for background tasks to conserve energy and extend battery life. By identifying non-essential, background-running applications, it reduces power consumption without slowing down active, foreground apps.

To determine if it is active, go into System > Power and look for Power Mode.

If you are using a high-performance power plan, this feature is usually disabled.

If you are running a power plan where this is enabled, and you don’t want to disable it globally, you can still maximize performance by disabling power throttling for the Workstation executable:

powercfg /powerthrottling disable /path "C:\Program Files (x86)\VMware\VMware Workstation\x64\vmware-vmx.exe"
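If I want to confirm the override took, powercfg can list the recorded power throttling overrides. To my knowledge this subcommand is available on current Windows 11 builds:

```powershell
powercfg /powerthrottling list
```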

Sleep States

Depending on your hardware you may or may not have different Sleep states enabled. Ultimately, for my deployment I don’t want any enabled.

To check which sleep states are available, from a command prompt run ‘powercfg /a’ and adjust as needed.
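On my host the relevant commands look like the sketch below. Hibernation in particular is worth turning off on a dedicated VM host, because it also disables Fast Startup and reclaims the hiberfil.sys file.

```powershell
# List which sleep states this hardware/firmware supports
powercfg /a

# Disable hibernation (removes hiberfil.sys and the S4 state)
powercfg /h off

# Never go to standby while on AC power
powercfg /change standby-timeout-ac 0
```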

Memory Page files

In my design I don’t plan to overcommit physical RAM (640GB) for my nested VMs. To maximize performance and ensure VMware Workstation uses physical memory exclusively, I follow these steps: configure global memory preferences, disable memory trimming for each VM, force RAM-only operation, and adjust the Windows page file.

  • 1) Configure Global Memory Preferences: This setting tells VMware how to prioritize physical RAM for all virtual machines running on the host. 
    • Open Workstation > Edit > Preferences > Memory
    • In the Additional memory section, select the radio button for “Fit all virtual machine memory into reserved host RAM”.
  • 2) Disable Memory Trimming for each VM: Windows and VMware use “trimming” to reclaim unused VM memory for the host. Since RAM will not be overallocated, I disable this to prevent VMs from ever swapping to disk.
    • Right-click your VM and select Settings
    • Go to the Options tab and select the Advanced category.
    • Check the box for “Disable memory page trimming”.
    • Click OK and restart the VM
  • 3) Force RAM-Only Operation (config.ini): This is an advanced step that prevents VMware from creating .vmem swap files, forcing it to use physical RAM or the Windows Page File instead.
    • Close VMware Workstation completely.
    • Navigate to C:\ProgramData\VMware\VMware Workstation\ in File Explorer (Note: ProgramData is a hidden folder).
    • Open the file named config.ini with Notepad (you may need to run Notepad as Administrator).
    • Add the following lines to the end of the file:
      • mainMem.useNamedFile = "FALSE"
      • prefvmx.minVmMemPct = "100"
    • Save the file and restart your computer
  • 4) Windows Page Files: With 640GB of RAM Windows 11 makes a huge memory page file. Though I don’t need one this large I still need one for crash dumps, core functionality, and memory management. According to Microsoft, for a high-memory workstation or server, a fixed page file of 16GB to 32GB is the “sweet spot.” I’m going a bit larger.
    • Go to System > About > Advanced system Settings
    • System Properties window appears, under Performance choose ‘Settings’
    • Performance Options appears > Advanced > under Virtual memory choose ‘change’
    • Uncheck ‘Automatically manage paging…’
    • Choose Custom size; set Initial size to 64000 MB and Maximum size to 84000 MB
    • Click ‘Set’ > OK
    • Restart the computer
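The page file step can also be done from PowerShell via the Win32_ComputerSystem and Win32_PageFileSetting CIM classes. This sketch assumes the page file lives on C: (sizes are in MB, and a reboot is still required; if no Win32_PageFileSetting instance is returned, the dialog method above is the safer route):

```powershell
# Take over page file management from Windows
$cs = Get-CimInstance -ClassName Win32_ComputerSystem
$cs | Set-CimInstance -Property @{ AutomaticManagedPagefile = $false }

# Pin C:'s page file to a fixed 64000-84000 MB range
$pf = Get-CimInstance -ClassName Win32_PageFileSetting |
    Where-Object { $_.Name -like 'C:*' }
$pf | Set-CimInstance -Property @{ InitialSize = 64000; MaximumSize = 84000 }
```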

Windows Visual Effects Performance

The visual effects in Windows 11 can be very helpful, but they can also slightly degrade performance. I prefer to create a custom profile and only enable ‘Smooth edges of screen fonts’.

  • Go to System > About > Advanced system Settings
  • System Properties window appears,
  • On the Advanced Tab, under Performance choose ‘Settings’
  • On the Visual Effects tab choose ‘Custom’ and select only ‘Smooth edges of screen fonts’
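For what it’s worth, the dialog’s radio-button choice appears to map to the VisualFXSetting registry value (my understanding: 0 = let Windows decide, 1 = best appearance, 2 = best performance, 3 = custom). A hedged sketch of setting it to custom; sign out and back in for it to apply:

```powershell
# 3 = "Custom" (the individual effect checkboxes then apply)
Set-ItemProperty -Path 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Explorer\VisualEffects' `
    -Name VisualFXSetting -Value 3 -Type DWord
```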

Disable BitLocker

Windows 11 (especially version 24H2 and later) may automatically enable device encryption during a fresh install or major update. By default, installing Windows 11 requires a TPM 2.0 chip and UEFI firmware with Secure Boot enabled, and BitLocker uses these features to “do its work”.

But, there are a couple of ways to disable BitLocker.

  • Create a Custom ISO
    • My deployment doesn’t have a TPM module, nor is Secure Boot enabled. To overcome these requirements I used Rufus to make the Windows 11 USB install disk. This means BitLocker cannot be enabled.
  • Registry Edit (Post-Installation – may already be set):
    • Press Win + R, type regedit, and press Enter
    • Navigate to: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\BitLocker
    • Right-click in the right pane, select New > DWORD (32-bit) Value
    • Name it PreventDeviceEncryption and set its value to 1
    • Disable the Service:
      • Press Win + R, type services.msc, and press Enter.
      • Find BitLocker Drive Encryption Service.
      • Right-click it, select Properties, set the “Startup type” to Disabled, and click Apply.
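The registry and service steps above can also be done from an elevated prompt, roughly as follows (BDESVC is the service name behind “BitLocker Drive Encryption Service”):

```powershell
# Prevent automatic device encryption
reg add "HKLM\SYSTEM\CurrentControlSet\Control\BitLocker" /v PreventDeviceEncryption /t REG_DWORD /d 1 /f

# Disable the BitLocker Drive Encryption Service
sc.exe config BDESVC start= disabled

# Confirm no volume is encrypted
manage-bde -status
```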

Disable Side-Channel Mitigations: Disabling these can boost performance, especially on older processors, but may reduce security.

  • Open the Windows Security app by searching for it in the Start menu.
  • Select Device security from the left panel.
  • Click on the Core isolation details link.
  • Toggle the switch for Memory integrity to Off.
  • Select Yes when the User Account Control (UAC) prompt appears.
  • Restart your computer for the changes to take effect

Note: if your host is running Hyper-V virtualization, you may need to check the “Disable side channel mitigations for Hyper-V enabled hosts” option in the advanced options for each VM.
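Based on VMware’s published guidance (an assumption on my part, so verify against the current KB before relying on it), that per-VM checkbox corresponds to a single .vmx line, which can also be added manually while the VM is powered off:

```
ulm.disableMitigations = "TRUE"
```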

Clean out unused Devices:

Windows leaves behind all types of unused devices that are hidden from view in Device Manager. Though these are usually harmless, it’s a best practice to clean them up from time to time.

The quickest way to do this is using a tool called Device Cleanup Tool. Check out my video for more how to with this tool.

Here is Device Cleanup Tool running on my newly (<2 months) installed system. As you can see unused devices can build up even after a short time frame.

Debloat, Clean up, and so much more

There are several standard Windows features, pieces of software, and cleanup tools that can impact the performance of my deployment. I prefer to run tools that help optimize Windows because they complete these tasks quickly. The tool I use to debloat and clean up my system is WinUtil. It’s a proven utility not only for optimizing systems, installing software, and applying updates, but for helping to maintain them too. For more information about WinUtil check out their most recent update.

For ‘Tweaking’ new installs I do the following:

  • Launch the WinUtil program
  • Click on Tweaks
  • Choose Standard
  • Unselect ‘Run Disk Cleanup’
  • Click on Run Tweaks

Additionally, you may have noticed WinUtil can create an Ultimate Performance power plan. That may come in handy.

Remove Windows Programs:

Here is a list of all the Windows programs I remove; they are simply not needed for a Workstation deployment. Some of these can be removed using WinUtil.

  • Cortana
  • Copilot
  • Camera
  • Game Bar
  • Teams
  • News
  • Mail and Calendar
  • Maps
  • Microsoft OneDrive
  • Microsoft To Do
  • Movies and TV
  • People
  • Phone Link
  • Solitaire
  • Sticky Notes
  • Tips
  • Weather
  • Xbox / Xbox Live


Using PowerShell to setup AV exceptions for Workstation 25H2u1 and Windows 11


Adding AV exceptions to your Workstation deployment on Windows 11 can really improve the overall performance. In this blog post I’ll share my exclusion script recently used on my environment.

NOTE: You cannot just cut, paste, and run the script below. There are parts of the script that have to be configured for the intended system.

What the script does.

  • For Windows 11 users with Microsoft’s Virus & threat protection enabled this script will tell AV to not scan specific file types, folders, and processes related to VMware Workstation.
  • It will ignore exceptions that already exist.
  • It will display appropriate messages (Successful, or Already Exists) as it completes tasks.

What is the risk?

  • Adding an exception (or exclusion) to antivirus (AV) software, while sometimes necessary for application functionality, significantly lowers the security posture of a device. The primary risk is creating a security blind spot where malicious code can be downloaded, stored, and executed without being detected.
  • Use at your own risk. This code is for my personal use.

What will the code do?

It will add several exclusions listed below.

  • File Type: Exclude these specific VMware file types from being scanned: 
    • .vmdk: Virtual machine disk files (the largest and most I/O intensive).
    • .vmem: Virtual machine paging/memory files.
    • .vmsn: Virtual machine snapshot files.
    • .vmsd: Metadata for snapshots.
    • .vmss: Suspended state files.
    • .lck: Disk consistency lock files.
    • .nvram: Virtual BIOS/firmware settings. 
  • Folder: Unique to my deployment it will exclude the following directories to prevent your antivirus from interfering with VM operations 
    • VMware Installation folder
    • VM Storage Folders: Exclude the main directory where I store my virtual machines.
    • Installation Folder: Exclude the VMware Workstation installation path (default: C:\Program Files (x86)\VMware\VMware Workstation).
  • Process:
    • vmware.exe: The main Workstation interface.
    • vmware-vmx.exe: The core process that actually runs each virtual machine.
    • vmnat.exe: Handles virtual networking (NAT).
    • vmnetdhcp.exe: Handles DHCP for virtual networks.

The Script

Under the section ‘# 1. Define your exclusions’ is where I adapted this code to match my environment.

# Check for Administrator privileges
if (-not ([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole]::Administrator)) {
    Write-Warning "Please run this script as an Administrator."
    break
}

# 1. Define your exclusions

# This is where you put in YOUR folder exclusions
$folders = @("C:\Program Files (x86)\VMware\VMware Workstation", "D:\Virtual Machines", "F:\Virtual Machines", "G:\Virtual Machines", "H:\Virtual Machines", "I:\Virtual Machines", "J:\Virtual Machines", "K:\Virtual Machines", "L:\Virtual Machines")

# These are the common process exclusions

$processes = @("C:\Program Files (x86)\VMware\VMware Workstation\vmware.exe", "C:\Program Files (x86)\VMware\VMware Workstation\x64\vmware-vmx.exe", "C:\Program Files (x86)\VMware\VMware Workstation\vmnat.exe", "C:\Program Files (x86)\VMware\VMware Workstation\vmnetdhcp.exe")

# These are the common extension exclusions (.vmsn included to match the file-type list above)

$extensions = @(".vmdk", ".vmem", ".vmsn", ".vmsd", ".vmss", ".lck", ".nvram")

# Retrieve current settings once for efficiency
$currentPrefs = Get-MpPreference
Write-Host "Checking and applying Windows Defender Exclusions..." -ForegroundColor Cyan

# --- Validate and Add Folders ---
foreach ($folder in $folders) {
    if ($currentPrefs.ExclusionPath -contains $folder) {
        Write-Host "Note: Folder exclusion already exists, skipping: $folder" -ForegroundColor Yellow
    } else {
        Add-MpPreference -ExclusionPath $folder
        Write-Host "Successfully added folder: $folder" -ForegroundColor Green
    }
}

# --- Validate and Add Processes ---
foreach ($proc in $processes) {
    if ($currentPrefs.ExclusionProcess -contains $proc) {
        Write-Host "Note: Process exclusion already exists, skipping: $proc" -ForegroundColor Yellow
    } else {
        Add-MpPreference -ExclusionProcess $proc
        Write-Host "Successfully added process: $proc" -ForegroundColor Green
    }
}

# --- Validate and Add Extensions ---
foreach ($ext in $extensions) {
    if ($currentPrefs.ExclusionExtension -contains $ext) {
        Write-Host "Note: Extension exclusion already exists, skipping: $ext" -ForegroundColor Yellow
    } else {
        Add-MpPreference -ExclusionExtension $ext
        Write-Host "Successfully added extension: $ext" -ForegroundColor Green
    }
}

Write-Host "`nAll exclusion checks complete." -ForegroundColor Cyan

The Output

In the output below, when the script creates an item successfully it will show in green. If it detects a duplicate it will output a message in yellow. I ran the script with a .vmdk exclusion already existing to test it out.

When it’s complete, the AV exclusions in Windows should look similar to the partial screenshot below.

To view the exclusions in Win 11, open ‘Virus & threat protection’ > Manage settings > under Exclusions choose ‘Add or remove exclusions’
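You can also list the exclusions the script created straight from PowerShell (an elevated prompt may be required to see the values):

```powershell
Get-MpPreference |
    Select-Object ExclusionPath, ExclusionProcess, ExclusionExtension |
    Format-List
```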

VMware Workstation Gen 9: BOM2 P3 Workstation Installation and configuration


Now that my hardware and OS are ready the next step is installing Workstation and adding my previously configured VCF 9 VMs. In this blog I’ll cover these steps and get the VCF Environment up and running.

Workstation Pro 25H2u1 Update

I’ll need to download VMware Workstation Pro 25H2u1. The good news is, it’s free and users can download it at the Broadcom support portal. You can find it there under FREE Downloads.

Tip: Don’t forget to click on the “Terms and Conditions” link, then click the “I agree…” check box. It’s required before you can download this product.

Before I install Workstation I validate that Windows Hyper-V is not enabled. To do this, I go into Windows Features and ensure that Hyper-V and Windows Hypervisor Platform are NOT checked.

Next I set a static IP address on my Windows system and give the NIC a unique name.
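A sketch of those two NIC steps in PowerShell; the adapter name, IP, gateway, and DNS below are placeholders for my environment, not values you should copy:

```powershell
# Give the uplink NIC a unique, recognizable name
Rename-NetAdapter -Name 'Ethernet' -NewName 'WS-Uplink'

# Assign a static IPv4 address and DNS server (example values)
New-NetIPAddress -InterfaceAlias 'WS-Uplink' -IPAddress 192.168.1.50 `
    -PrefixLength 24 -DefaultGateway 192.168.1.1
Set-DnsClientServerAddress -InterfaceAlias 'WS-Uplink' -ServerAddresses 192.168.1.1
```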

Tip: Want a quick way to get to your network adapters in Windows 11? Check out my blog post on how to make a super quick Network Settings shortcut.

Once confirmed I install Workstation Pro 25H2u1. For more information on how to install Workstation 25H2u1 see my blog.

Restore Workstation LAN Segments

After the Workstation installation is complete, I go into the Virtual Network Editor. I delete the other VMnets and adjust VMnet0 to match the correct network adapter.

Next I add in one VM and then recreate all the LAN Segments. For more information on this process, see my post under LAN Segments.

I add in the rest of my VM’s and simply assign their LAN Segments.

This is what I love about Workstation: I was able to rebuild on a new server, migrate storage devices, and then recover my entire VCF 9 environment. In my next post I’ll cover how I set up Windows 11 for better performance.

Upgrading Workstation 25H2 to 25H2u1


On February 26, 2026, VMware released Workstation Pro 25H2u1. It’s an update that repairs a few bugs and includes security patches. In this blog I’ll cover how to upgrade to it.


Meet the Requirements:

When installing or upgrading Workstation on Windows, most folks seem to overlook the requirements for Workstation and just install the product. You can review the requirements here.

There are a couple of items folks commonly miss when installing Workstation.

  • The number one issue is Processor Requirements for Host Systems. Either they have a CPU that is not supported or they simply did not enable virtualization support in the BIOS.
  • The second item is Microsoft Hyper-V enabled systems. Workstation supports a system with Hyper-V enabled, but for the BEST performance it’s best to just disable these features.
  • Next, if you’ve upgraded your OS but never reinstalled Workstation, it’s advised to uninstall and then reinstall Workstation.
  • Lastly, if doing a fresh OS install, ensure drivers are updated and DirectX is at a supported version.

How to download Workstation

Download the Workstation Pro 25H2u1 for Windows. Make sure you click on the ‘Terms and Conditions’ AND check the box. Only then can you click on the download icon.

Choose your install path

Before you do a fresh install:

If you are upgrading Workstation, review the following:

  • Ensure your environment is compatible with the requirements
  • If you have existing VMs:

Note: If you are upgrading the Windows OS, or have done a Windows upgrade (Win 10 to 11) in the past, you must uninstall Workstation first and then reinstall it. More information here.

Upgrading Workstation 25H2 to 25H2u1

Run the downloaded file, wait a minute for it to confirm space requirements, then click Next.

Accept the EULA.

Compatible setup check

Confirm install directory

Check for updates and join the CEIP program.

Allow it to create shortcuts

Click Upgrade to complete the upgrade

Click on Finish to complete the Wizard.

Lastly, check the version number of Workstation. Open Workstation > Help > About

VMware Workstation Gen 9: BOM2 P2 Device Checks and Windows 11 Install


For the Gen 9 BOM2 project, I have opted for a clean installation of Windows 11 to ensure a baseline of stability and performance. This transition necessitates a full reconfiguration of both the operating system and my primary Workstation environment. In this post, I will ensure devices are correctly deployed, install Windows 11, and run a quick benchmark test. Please note that this is not intended to be an exhaustive guide, but rather a technical log of my personal implementation process.

Validate Hardware Components

After the hardware configuration is complete, it’s best to ensure it is recognized by the motherboard. There are quite a few hardware items being carried over from BOM1 plus several new items, so it’s important these items are recognized before the installation of Windows 11.

Using IPMIView with the SuperMicro X11DPH-T is quite handy. IPMIView enables me to see all types of data and allows for remote console access without physically being at the console. Simply connect a network cable to the IPMI port (Fig-1); by default it will get a DHCP address, or you can set the IP address in the BIOS. Next, via https, go to the assigned address, log in (by default the username and password are both ADMIN), and you’ll have access to the IPMIView console. From this console you can manually set the IP address and VLAN ID, get remote access to the console, and much more.

Fig-1

The SuperMicro IPMIView allows me to view some of the system hardware. After logging in I find the information under System > Hardware Information. I simply click on a device and it will expand more information.

IPMIView is a bit limited in what it can show. To view settings around the PCIe slots or CPU configuration I’ll need to access the BIOS. While in the BIOS I validate that the CPU settings have Virtual Machine Extensions (VMX) enabled. This is a requirement for Workstation.
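Later, once Windows is installed, the same thing can be cross-checked from the OS side; Win32_Processor exposes a VirtualizationFirmwareEnabled property (note it can read False once a hypervisor such as Hyper-V owns the hardware, so interpret it with care):

```powershell
Get-CimInstance -ClassName Win32_Processor |
    Select-Object Name, VirtualizationFirmwareEnabled
```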

Next I check on the PCIe devices via the bifurcation settings. I’m looking to ensure the PCIe devices run at the expected link speed. The auto mode for bifurcation worked without issue; it detected every device and speed, and there was no need for any change. To validate this, while in the BIOS I went into Advanced > Chipset Config > North Bridge > IIO Configuration > CPU1 and validated the IOU# settings are set to Auto. I repeat this for CPU2. Then, just below the CPUs, I drill down on each CPU port to ensure the PCIe Link Status, Max, and speed are aligned to the device specifications. I use the System Block Diagram from my last post to ID the CPU, then the CPU port number, which leads me to the PCIe slot number. From there I can determine which hardware device is connected. In Fig-2 below, I’m looking at one half of the 8x PCIe card in Slot 5. Auto mode detected it perfectly.

Fig-2

Adjust Power Settings in BIOS

  • There are several settings to ensure stable performance with a Supermicro X11DPH-T.
  • Enter Setup, confirm/adjust the following, and save the changes:
    • Advanced > CPU Configuration
      • Hyper-Threading > Enabled
      • Cores Enabled > 0
      • Hardware Prefetcher > Enabled
      • Advanced Power Management Configuration
        • Power Technology > Custom
        • Power Performance Tuning > BIOS Controls EPB
        • Energy Performance BIAS Setting > Maximum Performance
        • CPU C State Control, All Disabled
    • Advanced > Chipset Configuration > North Bridge > Memory Configuration
      • Memory Frequency > 2933

Windows 11 Install

Once all the hardware is confirmed I create my Windows 11 boot USB using Rufus and boot to it. For more information on this process see my past video around creating it.

Next I install Windows 11 and after it’s complete I update the following drivers.

At this point all the correct drivers should be installed, I validate this by going into Device Manager and ensuring all devices have been recognized.

I then go into Disk Manager and ensure all the drives have the same drive letter as they did in BOM1. If they don’t match up I use Disk Manager to align them.

Install Other Software Tools

Quick Benchmark

After I installed Windows 11 Pro, I ran a quick ATTO benchmark on my devices. I do this to ensure the drives are working optimally, plus it’ll serve as a baseline if I have issues in the future. There is nothing worse than a disk that is not performing well, and it’s better to get performance issues sorted out early on.

These are the results of the 1.5TB Optane Disks.

I tested all 6 of the Rocket 2TB NVMe Disks, here are results for 3 of them, each one on a different PCIe slot.

Lastly, I tested the Intel 3.84TB SSD.

With the hardware confirmed and the OS installed I’m now ready to install Workstation 25H2 and configure it.

VMware Workstation Gen 9: BOM2 P1 Motherboard upgrade


To take the next step in deploying my nested VCF 9 and adding VCF Automation, I’m going to need to make some updates to my Workstation Home Lab. BOM1 simply doesn’t have enough RAM, and I’m a bit concerned about VCF Automation being CPU/Core demanding. In this blog post I’ll cover some of the products I chose for BOM2.

A bit of Background

It should be noted, my ASRock Rack motherboard (BOM1) was performing well with nested VCF9. However, it was constrained by its available memory slots plus only supported one CPU. I considered upgrading to higher-capacity DIMMs; however, the cost was prohibitive. Ultimately, replacing the motherboard proved to be a more cost-effective solution, allowing me to leverage the memory and CPU I already owned.

Initially, I chose the Gigabyte MD71-HB0. At the time it was rather affordable, but it lacked PCIe bifurcation, a feature I needed to support dual NVMe disks in one PCIe slot. To overcome this I chose the RIITOP M.2 NVMe SSD to PCIe 3.1 adapter. These cards essentially emulate a bifurcated PCIe slot but added additional expense to the solution. Though I was able to get my nested VCF environment up and running, it was short-lived due to a physical fault. I was able to return the board, but buying it again would have cost double, so I went a different direction. If you are interested in my write-up about the Gigabyte mobo click here, but do know I am no longer using it.

My Motherboard choice for BOM2

I started looking for a motherboard that would fit my needs. Some of the features I was looking for were: support for dual Gold 6252 CPUs, support for my existing 32/64GB RAM modules, adequate PCIe slots, a fit for my case, reasonable power needs, and support for bifurcation. The motherboard I chose was the SuperMicro X11DPH-T. Buying it refurbished was a way to keep the cost down and meet my needs.

The migration from BOM1 to BOM2

The table below outlines the changes planned from BOM1 to BOM2. There were minimal unused products from the original configuration, and after migrating components, the updated build will provide more than sufficient resources to meet my VCF 9 compute/RAM requirements.

Pro Tip: When assembling new hardware, I take a methodical, incremental approach. I install and validate one component at a time, which makes troubleshooting far easier if an issue arises. I typically start with the CPUs and a minimal amount of RAM, then scale up to the full memory configuration, followed by the video card, add-in cards, and then storage. It’s a practical application of the old adage: don’t bite off more than you can chew—or in this case, compute.

| KEEP from BOM1 | Added to create BOM2 | UNUSED |
| --- | --- | --- |
| | Mobo: SuperMicro X11DPH-T | Mobo: ASRock Rack EPC621D8A |
| CPU: 1 x Xeon Gold ES 6252 (ES means Engineering Samples) | CPU: 2 x Xeon Gold ES 6252, new net total 48 pCores | |
| Cooler: 1 x Noctua NH-D9 DX-3647 4U | Cooler: 1 x Noctua NH-D9 DX-3647 4U | 10Gbe NIC: ASUS XG-C100C 10G Network Adapter |
| RAM: 384GB (4 x 64GB Samsung M393A8G40MB2-CVFBY, 4 x 32GB Micron MTA36ASF4G72PZ-2G9E2) | RAM: 8 x 32GB Micron MTA36ASF4G72PZ-2G9E2, new net total 640GB | |
| NVMe: 6 x Sabrent 2TB ROCKET NVMe PCIe (Workstation VMs) | NVMe Adapter: 3 x Supermicro PCI-E Add-On Card for up to two NVMe SSDs | NVMe: 2 x 1TB NVMe (Win 11 Boot Disk and Workstation VMs) |
| HDD: 1 x Seagate IronWolf Pro 18TB | Disk Cables: 2 x Mini SAS to 4 SATA Cable, 36 Pin SFF 8087 | Video Card: GIGABYTE GeForce GTX 1650 SUPER |
| SSD: 1 x 3.84TB Intel D3-4510 (Workstation VMs) | Boot/Extra Disk: 2 x Optane 4800x 1.5TB Disk | |
| Case: Phanteks Enthoo Pro series PH-ES614PC_BK Black Steel | 2 x PCIe 4x to U.2 NVMe Adapter | |
| Power Supply: MAG A1000GL 1000 Watt | | |

Noise and Power

I commonly get asked, “How is the noise and power consumption on this build?” Fan noise is whisper quiet, which is one of the reasons I chose a DIY build over buying a server. The Phanteks Enthoo case fans and the Noctua fans do a great job keeping noise levels down. They may spin up from time to time, but it's nothing compared to the noise a server chassis can make. For power, I'm seeing ~135 Watts nominally. However, I haven't spun up my workloads yet, so this may increase.

Uniqueness with the SuperMicro X11DPH-T

Issue 1 – Xeon Gold 6252 Engineering Samples (ES) issues with RAM

I had to switch from ES CPUs to GA (General Availability) released CPUs. The good news is that the price of the Xeon Gold 6252 is at an all-time low. The ES CPUs had issues with memory timing. With an ES CPU it's sometimes hard to pinpoint why it is failing, but once I replaced them the following errors went away. With the cost of actual GA CPUs being so low, I will avoid using ES CPUs for this build.

  • Memory training failure. – Assertion
  • Failing DIMM: DIMM location (Uncorrectable memory component found). (P2-DIMME1) – Assertion

Issue 2 – Fan header blocked by PCIe slot.

The 2nd CPU fan header is blocked if a PCIe card is installed in the adjacent slot. I can only assume they had no other choice, but why put it there?

Issue 3 – NVMe placement

The NVMe slots are placed directly behind most of the PCIe slots, at the same level as the PCIe cards. This blocks the insertion of any long PCIe cards, so if you want to install a long GPU/video card you'll have to forgo the onboard NVMe slots.

Issue 4 – Blocked I-SATA Ports

The edge connectors for the I-SATA ports can become blocked if you are using a long (225mm or greater) PCIe card.

PCIe Slot Placement:

For the best disk performance, PCIe slot placement is really important. Things to consider: the speed and size of the devices, and how the data will flow. If data has to flow between CPUs or through the C622 chipset, some latency, though minor, is induced. If you have a larger video card, like the GTX 1650 SUPER, it'll need to be placed in a PCIe slot that supports its length and doesn't interfere with onboard connectors or RAM modules.

The best way to layout your PCIe Devices is to look at a System Block Diagram (Fig-1). A good one will give you all types of information that you can use to optimize your deployment. Things I look for are the PCIe slots, how fast they are, which CPU are they attached to, and are they shared with other devices.

Fig-1

Using Fig-1, here is how I laid out my devices.

  • Slots 7 and 6: Optane 1.5TB disks, used for boot and VMs
  • Slots 5, 4, and 3: dual 2TB NVMe disks
  • I-SATA ports: SATA drives (18TB HDD for backups, 3.84TB SSD for VMs)

Other Thoughts:

  • I did look for other mobos, workstations, and servers but most were really expensive. The upgrades I had to choose from were a bit constrained due to the products I had on hand (DDR4 RAM and the Xeon 6252 LGA-3647 CPUs). This narrowed what I could select from.
  • The SuperMicro motherboard requires 2 CPUs if you want to use all the PCIe slots.

Now the fun starts. In the next posts I'll finalize the install of Windows 11/Workstation, tune its performance, and get my VCF 9 Workstation VMs operational.

Windows 11 Workstation VM asking for encryption password that you did not explicitly set

Posted on Updated on

I had created a Windows 11 25H2 VM in Workstation and then moved it to a new deployment of Workstation. Upon power-up, the VM stated I must supply a password (Fig-1) because the VM was encrypted. In this post I'll cover why this happened and how I got around it.

Note: Disabling TPM/Secure Boot is not recommended for any system. Bypassing security leaves systems open to attack. If you are curious about VMware system hardening, check out this great video by Bob Plankers.

(Fig-1)

Why did this happen? As of VMware Workstation 17, encryption is required for a VM with a TPM 2.0 device, and TPM 2.0 is a requirement for Windows 11. When you create a new Windows 11 x64 VM, the New VM Wizard (Fig-2) asks you to set an encryption password or auto-generate one. This enables the VM to support the Windows 11 requirements for TPM/Secure Boot.

(Fig-2)

I didn't set a password, so where is the auto-generated password kept? If you allowed VMware to auto-generate the password, it is likely stored in your host machine's credential manager. On Windows, open the Windows Credential Manager (search for “Credential Manager” in the Start Menu) and look for an entry related to VMware, specifically something like “VMware Workstation”.

I don't have access to the PC where the auto-generated password was kept, so how did I get around this? I edited the VM's VMX configuration file, commenting out the following entries, and then added the VM back into Workstation. Note: this removes the vTPM device from the virtual hardware and is not recommended.

# vmx.encryptionType
# encryptedVM.guid
# vtpm.ekCSR
# vtpm.ekCRT
# vtpm.present
# encryption.keySafe
# encryption.data
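The VMX edit above can be scripted. Below is a minimal Python sketch (an illustration, not a VMware tool) that comments out those same encryption/vTPM keys in a .vmx file's text; keep a copy of the original file before writing anything back.

```python
# Keys tied to VM encryption and the vTPM, taken from the list above.
ENCRYPTION_KEYS = {
    "vmx.encryptionType",
    "encryptedVM.guid",
    "vtpm.ekCSR",
    "vtpm.ekCRT",
    "vtpm.present",
    "encryption.keySafe",
    "encryption.data",
}

def comment_out_encryption(vmx_text: str) -> str:
    """Return the VMX text with encryption-related entries commented out."""
    out = []
    for line in vmx_text.splitlines():
        # VMX entries look like: key = "value"
        key = line.split("=", 1)[0].strip()
        if key in ENCRYPTION_KEYS and not line.lstrip().startswith("#"):
            out.append("# " + line)
        else:
            out.append(line)
    return "\n".join(out)
```

Run it against a copy of the .vmx, review the result, then re-add the VM to Workstation. As noted above, this strips the vTPM from the virtual hardware, so treat it as a last resort.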

How could I avoid this going forward? Two options:

Option 1 – When creating the VM, set and record the password.

Option 2 – To avoid this altogether, use Rufus to create a new VM without TPM/Secure Boot enabled.

  • Use Rufus to create a bootable USB drive with Windows 11. When prompted choose the options to disable Secure Boot and TPM 2.0.
  • Once the USB is created, create a new Windows 11 x64 VM in Workstation.
  • For creation options choose Typical > choose I will install the OS later > choose Win11x64 for the OS > choose a name/location > note the encryption password > Finish
  • When the VM is completed, edit its settings > remove the Trusted Platform Module > then go to Options > Access Control > Remove Encryption > put in the password to remove it > OK
  • Now attach the Rufus USB to the VM and boot to it.
  • From there install Windows 11.

Wrapping this up: bypassing security allowed me to access my VM again. However, it leaves the VM more vulnerable to attack. In the end, I re-enabled security on this VM and properly recorded its password.

VMware Workstation Gen 8: Environment Revitalization

Posted on Updated on

In my last blog post, I shared my journey of upgrading to Workstation 17.6.4 (build-24832109) and ensuring I could start up my Workstation VMs. In this installment, we dive deeper into getting the environment ready and performing a backup.

Keep in mind my Gen 8 Workstation had been powered down for almost a year, so there were some things I had to do to get it ready. I see this post as informational; if you already have a stable environment, you can skip these sections. However, you may find value in these steps if you are trying to revitalize an environment that has been shut down for a long period of time.

Before we get started, a little background.

This revitalization follows my designs that were published on my Workstation Home Lab YouTube series. That series focused on building a nested home lab using Workstation 17 and vSphere 8. Nesting with Workstation can evoke comparisons to the movie Inception, where everything is multi-layered. Below is a brief overview of my Workstation layout, aimed at ensuring we all understand which layer we are at.

  • Layer 1 – Physical Layer:
    • The physical hardware I use to support this VMware Workstation environment is a supercharged computer with lots of RAM and CPU, plus high-speed drives. More information here.
    • Base OS is Windows 11
    • VMware Workstation build is 17.6.4 build-24832109
  • Layer 2 – Workstation VMs: (Blue Box in diagram)
    • I have 5 key VMs that run directly on Workstation.
    • These VMs are: a Win2022 Server, VCSA 8u2, and 3 x ESXi 8u2 hosts
    • The Win2022 Server runs the following services: AD, DNS, DHCP, and RAS
    • The current state of these VMs is suspended.
  • Layer 3 – Workload VMs: (Purple box)
    • The 3 nested ESXi hosts run several VMs

Let's get started!

Challenges:

1) Changes to License keys.

My vSphere environment's vExpert license keys have expired. Those keys were based on vSphere 8.0u2 and were only good for one year. Since the release of vSphere 8.0u2b, subscription keys are needed. This means that to apply my new license keys I'll have to upgrade vSphere.

TIP: Being a Broadcom VMware employee, I'm not eligible for VMUG or vExpert keys, but if you are interested in the process check out a post by Daniel Kerr. He did a great write-up.

2) Root Password is incorrect.

My root password into VCSA is not working and will need to be corrected.

3) VCSA Machine Certs need to be renewed.

There are several certificates that are expired and will need to be renewed. This is blocking me from being able to log on to the VCSA management console.

4) Time Sync needs to be updated.

I've changed locations, so the time zone will need to be updated along with NTP.

Here are the steps I took to resume my vSphere Environment.

The beauty of working with Workstation is the ability to back up and/or snapshot Workstation VMs as files and restore them when things fail. I took many snapshots and restored this lab a few times as I attempted to restart it. Restarting this lab was a bit of a learning process, as it took a few attempts to find everything that needed attention. Additionally, some of the processes you would follow in the real world didn't apply here. So if you're a bit concerned by some of the steps below, trust me, I tried the correct way first and it simply didn't work out.

1) Startup Workstation VM AD222:

At this point, I have only resumed AD222.

The other VMs rely on this Windows 2022 VM for its services. First, I need to get this system up and validate that all of its services are operational.

  • I used the Server Manager Dashboard as a quick way to see if everything was working properly.
  • From this dashboard I could see that my services were working, and upon checking the red areas I found only a non-issue: the Google updater service was stopped.
  • Run and Install Windows Updates
  • Network Time Checks (NTP)
    • All my VMs get their time from this AD server, so having it correct is important.
    • I ensure the local time on the server is correct. From CLI I type in ‘w32tm /tz’ and confirm the time zone is correct.
    • Using the ‘net time’ command I confirm the local date/time matches the GUI clock in the Windows server.
    • Using ‘w32tm /query /status’ I confirm that time is syncing properly
    • Note: My time ‘Source’ is noted as ‘Local CMOS Clock’. This is okay for my private Workstation environment. Had this been production, we would have wanted a better time source.
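The time checks above are easy to wrap in a script. Here's a Python sketch that parses the `Key: Value` lines printed by `w32tm /query /status` and flags the 'Local CMOS Clock' source mentioned in the note; the sample text is illustrative, not captured from my server.

```python
def parse_w32tm_status(output: str) -> dict:
    """Parse the 'Key: Value' lines printed by `w32tm /query /status`."""
    status = {}
    for line in output.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")  # split on the first colon only
            status[key.strip()] = value.strip()
    return status

# Illustrative sample, trimmed to the fields checked in this post.
sample = """Stratum: 1 (primary reference - syncd by radio clock)
Source: Local CMOS Clock
Last Successful Sync Time: 1/2/2025 9:00:01 AM"""

info = parse_w32tm_status(sample)
if info.get("Source") == "Local CMOS Clock":
    print("Time source is the local CMOS clock - fine for a private lab only")
```

On a real host you would feed the function the actual command output, e.g. `subprocess.run(["w32tm", "/query", "/status"], capture_output=True, text=True).stdout`.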

2) Fix VCSA223 Server Root Password:

At this point, only VCSA223 has been resumed, and AD222 is powered on.

Though I was initially able to access VCSA via the vSphere Client, I eventually determined I was unable to log in to the VCSA appliance via DCUI, SSH, or management GUI. The root password was incorrect and needed to be reset.

To fix the password issue I needed to gracefully shut down the VCSA appliance and follow KB 322247. In Workstation I simply right-clicked on the VCSA appliance > Power > Shut Down Guest.

3) Cannot access the VCSA GUI: Error 503 Service Unavailable.

After fixing the VCSA password, I was now able to access it via the SSH and DCUI consoles. However, I was unable to bring up the vSphere Client or the VCSA management GUI; the management GUI simply stated “503 Service Unavailable”.

To resolve this issue I used the following KBs:

4) VCSA Management GUI Updates

  • I accessed the VCSA Management GUI and validated/updated its NTP settings.
  • Next I mounted the most recent VCSA ISO and updated the appliance to 8.0.3.24853646

5) Updating ESXi

  • At this point only my AD and VCSA servers have been resumed. My ESXi hosts are still suspended.
  • To start the update from 8.0U2b to 8.0U3e, I chose to resume and then immediately shut down all 3 ESXi hosts. This step may seem a bit harsh, but no matter how gracefully I tried to resume these VMs, I ran into issues.
  • While shut down I mounted VMware-VMvisor-Installer-8.0U3e-24677879.x86_64.ISO and booted/upgraded each ESXi host.

6) License keys in VCSA

Now that everything is powered on, I was able to log in to the vSphere Client. The first thing I noticed was that the VMware license keys (VCSA, vSAN, ESXi) had all expired.

I updated the license keys in this order:

  • First – Update the VCSA License Key
  • Second – Update the vSAN License Key
  • Third – Update the ESXi Host License Key

7) Restarting vSAN

  • When I shut down or suspend my Workstation home lab, I always shut down my workload VMs and do a proper shutdown of vSAN.
  • After I confirmed all my hosts were licensed and connected properly, I simply went into the cluster > Configure > vSAN Services to restart vSAN.

8) Backup VM’s

Now that my environment is working properly, it's time to do a proper shutdown, remove all snapshots, and then take a backup of my Workstation VMs.

With Workstation, a simple Windows file copy from source to target is all that is needed. In my case, I have a large HDD where I store my backups. In Windows, I simply right-click on the Workstation VM's folder and choose Copy, then go to the target location, right-click, and choose Paste.

TIP: I keep track of my backups and notes with a simple notepad. This way I don’t forget their state.

And that's it. After being down for over a year, my Workstation home lab Gen 8 is now fully functional and backed up. I'll continue to use it for vSphere 8 testing as I build out a new VCF 9 environment.

Thanks for reading and please feel free to ask any questions or comments.

VMware Workstation Gen 9: BOM2 P1 Motherboard upgrade (Failed Gigabyte board)

Posted on Updated on

**Urgent Note** The Gigabyte mobo in BOM2 initially worked well in my deployment. However, shortly after I completed this post, the mobo failed. I was able to return it, but the replacement cost had doubled. I replaced this mobo with a SuperMicro board, but I'm keeping this post up in case someone finds it useful.

To take the next step in deploying a VCF 9 Simple stack with VCF Automation, I’m going to need to make some updates to my Workstation Home Lab. BOM1 simply doesn’t have enough RAM, and I’m a bit concerned about VCF Automation being CPU hungry. In this blog post I’ll cover some of the products I chose for BOM2.

Although my ASRock Rack motherboard (BOM1) was performing well, it was constrained by available memory capacity. I had additional 32 GB DDR4 modules on hand, but all RAM slots were already populated. I considered upgrading to higher-capacity DIMMs; however, the cost was prohibitive. Ultimately, replacing the motherboard proved to be a more cost-effective solution, allowing me to leverage the memory I already owned.

The mobo I chose was the Gigabyte MD71-HB0. It was rather affordable, but it lacked PCIe bifurcation, a feature I needed to support dual NVMe disks in one PCIe slot. To overcome this I chose the RIITOP M.2 NVMe SSD to PCI-e 3.1 adapter. These cards essentially emulate a bifurcated PCIe slot, which allows for dual NVMe disks in a single PCIe slot.

The table below outlines the changes planned for BOM2. There were few unused products from the original configuration, and after migrating components, the updated build provides more than enough resources to meet my VCF 9 compute/RAM requirements.

Pro Tip: When assembling new hardware, I take a methodical, incremental approach. I install and validate one component at a time, which makes troubleshooting far easier if an issue arises. I typically start with the CPUs and a minimal amount of RAM, then scale up to the full memory configuration, followed by the video card, add-in cards, and then storage. It’s a practical application of the old adage: don’t bite off more than you can chew—or in this case, compute.

KEEP from BOM1:
  • Case: Phanteks Enthoo Pro series PH-ES614PC_BK Black Steel
  • CPU: 1 x Xeon Gold ES 6252 (ES means Engineering Samples), 24 pCores
  • Cooler: 1 x Noctua NH-D9 DX-3647 4U
  • RAM: 384GB (4 x 64GB Samsung M393A8G40MB2-CVFBY, 4 x 32GB Micron MTA36ASF4G72PZ-2G9E2)
  • NVMe: 2 x 1TB NVMe (Win 11 Boot Disk and Workstation VMs)
  • NVMe: 6 x Sabrent 2TB ROCKET NVMe PCIe (Workstation VMs)
  • HDD: 1 x Seagate IronWolf Pro 18TB
  • SSD: 1 x 3.84TB Intel D3-4510 (Workstation VMs)
  • Video Card: GIGABYTE GeForce GTX 1650 SUPER
  • Power Supply: Antec NeoECO Gold ZEN 700W

Added to create BOM2:
  • Mobo: Gigabyte MD71-HB0
  • CPU: 1 x Xeon Gold ES 6252 (new net total 48 pCores)
  • Cooler: 1 x Noctua NH-D9 DX-3647 4U
  • RAM: new net total 640GB (8 x 32GB Micron MTA36ASF4G72PZ-2G9E2)
  • NVMe Adapter: 3 x RIITOP M.2 NVMe SSD to PCI-e 3.1
  • Disk Cables: 2 x Slimline SAS 4.0 SFF-8654

UNUSED:
  • Mobo: ASRock Rack EPC621D8A
  • NVMe Adapter: 3 x Supermicro PCI-E Add-On Cards for up to two NVMe SSDs
  • 10GbE NIC: ASUS XG-C100C 10G Network Adapter

PCIe Slot Placement:

For the best performance, PCIe slot placement is really important. Things to consider: the speed and size of the devices, and how the data will flow. If data has to flow between CPUs or through the C622 chipset, some latency, though minor, is induced. If you have a larger video card, like the GTX 1650 SUPER, it'll need to be placed in a PCIe slot that supports its length and doesn't interfere with onboard connectors or RAM modules.

Using Fig-1 below, here is how I laid out my devices.

  • Slot 2: video card. The video card is 2 slots wide and covers Slot 1, the slowest PCIe slot
  • Slot 3: open
  • Slots 4, 5, and 6: the RIITOP cards with the dual NVMe disks
  • Slimline 1 (connected to CPU 1): my 2 SATA drives. Typically these ports are for U.2 drives, but they also work with SATA drives.

Why this PCIe layout? By isolating all my primary disks on CPU 1, data doesn't cross between CPUs or pass through the C622 chipset. My 2 x 1TB NVMe disks are attached to CPU 0; they have no impact on my VCF environment, as one is used to boot the system and the other supports unimportant VCF VMs.

Other Thoughts:

  • I did look for other mobos, workstations, and servers but most were really expensive. The upgrades I had to choose from were a bit constrained due to the products I had on hand (DDR4 RAM and the Xeon 6252 LGA-3647 CPUs). This narrowed what I could select from.
  • Adding the RIITOP cards added quite a bit of expense to this deployment, so look for mobos that support bifurcation and match your needs. Even so, this combination plus the additional parts cost more than 50% less than simply upgrading to higher-capacity RAM modules.
  • The Gigabyte mobo requires 2 CPUs if you want to use all the PCIe slots.
  • Updating the Gigabyte firmware and BMC was a bit wonky. I’ve seen and blogged about these mobo issues before, hopefully their newer products have improved.
  • The layout (Fig-1) of the Gigabyte mobo included support for SlimLine U.2 connectors. These will come in handy if I deploy my U.2 Optane Disks.

(Fig-1)

Now the fun starts. In the next posts I'll reinstall Windows 11, tune its performance, and get my VCF 9 Workstation VMs operational.