VMware Workstation Gen 9: BOM2 P4 Workstation/Win11 Performance enhancements
There can be a multitude of factors that could impact performance of your Workstation VMs. Running a VCF 9 stack on VMware Workstation demands every ounce of performance your Windows 11 host can provide. To ensure a smooth lab experience, certain optimizations are essential. In this post, I’ll walk through the key adjustments to maximize efficiency and responsiveness.
Note: There are a LOT of settings I changed to improve performance. I take a structured approach, trying things slowly rather than applying everything at once. The items listed below are what worked for my system and are recommended for that use case only. Unless otherwise stated, the VMs and Workstation were powered down during these adjustments.
Host BIOS/UEFI Settings
- There are several settings to ensure stable performance with a Supermicro X11DPH-T.
- Here is what I modified on my system.
- Enter Setup, confirm/adjust the following, and save the changes:
- Advanced > CPU Configuration
- Hyper-Threading > Enabled
- Cores Enabled > 0
- Hardware Prefetcher > Enabled
- Advanced Power Management Configuration
- Power Technology > Custom
- Power Performance Tuning > BIOS Controls EPB
- Energy Performance BIAS Setting > Maximum Performance
- CPU C State Control, All Disabled
- Advanced > Chipset Configuration > North Bridge > Memory Configuration
- Memory Frequency > 2933
Hardware Design
- In the VMware Workstation Gen 9: BOM1 and BOM2 blogs we covered hardware design as it related to the intended load of nested VMs.
- Topics we covered were:
- Fast Storage: NVMe, SSD, and U.2 all contribute to VM performance
- Placement of VM files: We placed and isolated our ESX VMs on specific disks which helps to ensure better performance
- PCIe Placement: Using the System Block diagram I placed the devices in their optimal locations
- Ample RAM: Include more than enough RAM to support the VCF 9 VMs
- CPU cores: Design enough CPU cores to support the VCF 9 VMs
- Video Card: Using a power-efficient GPU can help boost VM performance
VM Design
- Disk Choices: Match the VM disk type to the physical drive type it runs on. Example – an NVMe physical drive backing a VM's NVMe virtual disk.
- CPU Settings: Match the VM CPU settings to the physical CPU socket layout. Example – a VM needs 8 cores and the physical host has 2 CPU sockets with 24 cores per socket; set the VM up for 2 processors with 4 cores each.
- vHardware Choices: When creating a VM, Workstation should auto-populate hardware settings. Best vNIC to use is the vmxnet3. You can use the Guest OS Guide to validate which virtual hardware devices are compatible.
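The disk, CPU, and NIC choices above all end up as entries in the VM's .vmx file. As an illustration only (values follow the 2-socket/4-core example; your device names and counts will differ), the relevant keys look roughly like this:

```
numvcpus = "8"
cpuid.coresPerSocket = "4"
ethernet0.virtualDev = "vmxnet3"
nvme0.present = "TRUE"
```

Here numvcpus is the total vCPU count, cpuid.coresPerSocket splits those vCPUs into virtual sockets (8 / 4 = 2 sockets), ethernet0.virtualDev selects the vmxnet3 paravirtual NIC, and nvme0.present enables a virtual NVMe controller to match a physical NVMe drive.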
Fresh Installs
- There's nothing like a fresh install of the base OS to provide a reliable foundation for performance improvements.
- When Workstation is installed it adapts to the base OS, and there can be performance gains from this adaptation.
- However, if you upgrade the OS (Win10 to Win11) with Workstation already installed, you should fully uninstall and reinstall Workstation after the upgrade for optimal performance.
- Additionally, when installing Workstation I ensure that Hyper-V is disabled as it can impact Workstation performance.
Exclude Virtual Machine Directories From Antivirus Tools
NOTE — AV exceptions exclude certain files, folders, and processes from being scanned. By adding these you can improve Workstation performance but there are security risks in enabling AV Exceptions. Users should do what’s best for their environment. Below is how I set up my environment.
- Script: Use a script to create AV Exceptions. For an example check out my blog – Using PowerShell to setup AV exceptions for Workstation 25H2u1 and Windows 11.
- Manual Steps: Manually set up the following exclusions for Windows 11.
- Open Virus and Threat Protection
- Virus & threat protection settings > Manage Settings
- Under 'Exclusions' choose 'Add or remove exclusions'
- Click on ‘+ Add an exclusion’
- Choose your type (File, Folder, File Type, Process)
- File Type: Exclude these specific VMware file types from being scanned:
- .vmdk: Virtual machine disk files (the largest and most I/O intensive).
- .vmem: Virtual machine paging/memory files.
- .vmsn: Virtual machine snapshot files.
- .vmsd: Metadata for snapshots.
- .vmss: Suspended state files.
- .lck: Disk consistency lock files.
- .nvram: Virtual BIOS/firmware settings.
- Folder: Exclude the following directories to prevent your antivirus from interfering with VM operations:
- VM Storage Folders: Exclude the main directory where you store your virtual machines.
- Installation Folder: Exclude the VMware Workstation installation path (default: C:\Program Files (x86)\VMware\VMware Workstation\).
- VMware Tools: If you have the VMware Tools installation files extracted locally, exclude that folder as well.
- Process: Adding these executable processes to your antivirus exclusion list can prevent lag caused by the AV monitoring VMware’s internal actions:
- vmware.exe: The main Workstation interface.
- vmware-vmx.exe: The core process that actually runs each virtual machine.
- vmnat.exe: Handles virtual networking (NAT).
- vmnetdhcp.exe: Handles DHCP for virtual networks.
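If you prefer to script the Defender exclusions above instead of clicking through Windows Security, the built-in Add-MpPreference cmdlet covers all three exclusion types. A minimal sketch, run from an elevated PowerShell prompt (the D:\VMs path is a hypothetical VM storage folder; substitute your own):

```powershell
# File-type exclusions for the VMware extensions listed above
Add-MpPreference -ExclusionExtension "vmdk","vmem","vmsn","vmsd","vmss","lck","nvram"

# Folder exclusions: install path and (example) VM storage directory
Add-MpPreference -ExclusionPath "C:\Program Files (x86)\VMware\VMware Workstation"
Add-MpPreference -ExclusionPath "D:\VMs"   # hypothetical VM storage folder

# Process exclusions for the Workstation executables
Add-MpPreference -ExclusionProcess "vmware.exe","vmware-vmx.exe","vmnat.exe","vmnetdhcp.exe"

# Review what is now excluded
Get-MpPreference | Select-Object ExclusionExtension, ExclusionPath, ExclusionProcess
```

Add-MpPreference appends to the existing lists, so it's safe to run incrementally as you add VM storage locations.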
Power Plan
Typically, Windows 11 has the "Balanced" power plan enabled by default. Though these settings are good for normal use cases, using your system as a dedicated VMware Workstation host calls for a better plan.
Below I show 2 ways to adjust a power plan. 1) Using a script to create a custom plan or 2) manually make similar adjustments.
- 1) Script: I created a script that creates a custom power plan named “VMware Workstation Performance Plan” and makes all the needed changes for my system. You can find my blog here.

- 2) Manual Adjustments:
- Open the power plan: Control Panel > Hardware and Sound > Power Options.
- If you see "Change settings that are currently unavailable" on a page, click it before making changes.
- Set Power Plan:
- Click on ‘Hide Additional Plans’.
- Choose either “Ultimate Performance” or “High Performance” plan and then click on “Change plan settings”
- Hard Disk > 0 Minutes
- Wireless Adapter Settings > Max Performance
- USB > Hub Selective Suspend Time out > 0
- PCI Express > Link State Power Management > off
- Processor power management > Both to 100%
- Display > Turn off Display > Never
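The same plan changes can be scripted with powercfg from an elevated command prompt. A sketch, assuming you want the hidden Ultimate Performance scheme plus the timeouts above (the long GUID is the well-known Ultimate Performance scheme ID; replace <new-guid> with the GUID the first command prints):

```shell
:: Duplicate the hidden Ultimate Performance scheme; note the new GUID it prints
powercfg /duplicatescheme e9a42b02-d5df-448d-aa00-03f14749eb61
:: Activate it (replace <new-guid> with the GUID from the previous command)
powercfg /setactive <new-guid>

:: Never turn off disks or the display while on AC power
powercfg /change disk-timeout-ac 0
powercfg /change monitor-timeout-ac 0

:: Keep the processor at 100% minimum and maximum state
powercfg /setacvalueindex scheme_current sub_processor PROCTHROTTLEMIN 100
powercfg /setacvalueindex scheme_current sub_processor PROCTHROTTLEMAX 100
:: Turn off PCIe Link State Power Management
powercfg /setacvalueindex scheme_current sub_pciexpress ASPM 0
powercfg /setactive scheme_current
```

Run `powercfg /query` afterwards to confirm the values stuck.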
Power Throttling
Power throttling in Windows 11 is an intelligent, user-aware feature that automatically limits CPU resources for background tasks to conserve energy and extend battery life. By identifying non-essential, background-running applications, it reduces power consumption without slowing down active, foreground apps.
To determine if it is active, go into Settings > System > Power and look for Power Mode.
If you are using a high performance power plan usually this feature is disabled.

If you are running a power plan where this is enabled, and you don’t want to disable it, then you can maximize your performance by disabling power throttling for the Workstation executable.
powercfg /powerthrottling disable /path "C:\Program Files (x86)\VMware\VMware Workstation\x64\vmware-vmx.exe"
Sleep States
Depending on your hardware you may or may not have different Sleep states enabled. Ultimately, for my deployment I don’t want any enabled.
To check which sleep states are available, run 'powercfg /a' from a command prompt and adjust as needed.
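The check and cleanup can be done in a few commands; a sketch, assuming you want sleep and hibernation fully off:

```shell
:: List which sleep states this hardware/firmware supports
powercfg /a
:: Disable hibernation (also removes hiberfil.sys and Fast Startup)
powercfg /hibernate off
:: Never sleep while on AC power
powercfg /change standby-timeout-ac 0
powercfg /change hibernate-timeout-ac 0
```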

Memory Page files
In my design I don't plan to overcommit physical RAM (640GB) for my nested VMs. To maximize performance and ensure VMware Workstation uses physical memory exclusively, I follow these steps: configure global memory preferences, disable memory trimming for each VM, force RAM-only operation, and adjust the Windows page files.
- 1) Configure Global Memory Preferences: This setting tells VMware how to prioritize physical RAM for all virtual machines running on the host.
- Open Workstation > Edit > Preferences > Memory
- In the Additional memory section, select the radio button for “Fit all virtual machine memory into reserved host RAM”.

- 2) Disable Memory Trimming for each VM: Windows and VMware use “trimming” to reclaim unused VM memory for the host. Since RAM will not be overallocated, I disable this to prevent VMs from ever swapping to disk.
- Right-click your VM and select Settings
- Go to the Options tab and select the Advanced category.
- Check the box for “Disable memory page trimming”.
- Click OK and restart the VM

- 3) Force RAM-Only Operation (config.ini): This is an advanced step that prevents VMware from creating .vmem swap files, forcing it to use physical RAM or the Windows Page File instead.
- Close VMware Workstation completely.
- Navigate to C:\ProgramData\VMware\VMware Workstation\ in File Explorer (Note: ProgramData is a hidden folder).
- Open the file named config.ini with Notepad (you may need to run Notepad as Administrator).
- Add the following lines to the end of the file:
- mainMem.useNamedFile = "FALSE"
- prefvmx.minVmMemPct = "100"
- Save the file and restart your computer
- 4) Windows Page Files: With 640GB of RAM Windows 11 makes a huge memory page file. Though I don’t need one this large I still need one for crash dumps, core functionality, and memory management. According to Microsoft, for a high-memory workstation or server, a fixed page file of 16GB to 32GB is the “sweet spot.” I’m going a bit larger.
- Go to System > About > Advanced system Settings
- System Properties window appears, under Performance choose ‘Settings’
- Performance Options appears > Advanced > under Virtual memory choose ‘change’
- Uncheck ‘Automatically manage paging…’
- Choose Custom size, MIN 64000 and MAX 84000
- Click ‘Set’ > OK
- Restart the computer
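The same page-file change can be made with PowerShell CIM cmdlets instead of the System Properties dialog. A sketch, assuming a single page file on C: and the 64000/84000 MB sizes above (run elevated; a reboot is still required):

```powershell
# Turn off "Automatically manage paging file size for all drives"
Get-CimInstance Win32_ComputerSystem |
    Set-CimInstance -Property @{ AutomaticManagedPagefile = $false }

# Set a fixed custom size (values are in MB) on the C: page file
Get-CimInstance Win32_PageFileSetting |
    Where-Object { $_.Name -like 'C:*' } |
    Set-CimInstance -Property @{ InitialSize = 64000; MaximumSize = 84000 }
```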

Windows Visual Effects Performance
The visual effects in Windows 11 can be very helpful, but they can also slow down your performance slightly. I prefer to create a custom profile and only enable 'Smooth edges of screen fonts'.
- Go to System > About > Advanced system Settings
- The System Properties window appears; on the Advanced tab, under Performance choose 'Settings'
- On the Visual Effects tab choose 'Custom' and select 'Smooth edges of screen fonts'

Disable BitLocker
Windows 11 (especially version 24H2 and later) may automatically re-enable encryption during a fresh install or major update. By default, installing Windows 11 requires a TPM 1.2 or higher chip (TPM 2.0 is recommended/standard for Win11) and UEFI firmware with Secure Boot enabled. BitLocker relies on these features to do its work.
But, there are a couple of ways to disable BitLocker.
- Create a Custom ISO
- My deployment doesn't have a TPM module, nor is Secure Boot enabled. To overcome these requirements I used Rufus to make the Windows 11 USB install disk. This means BitLocker cannot be enabled.
- Registry Edit (Post-Installation – may already be set):
- Press Win + R, type regedit, and press Enter
- Navigate to: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\BitLocker
- Right-click in the right pane, select New > DWORD (32-bit) Value
- Name it PreventDeviceEncryption and set its value to 1
- Disable the Service:
- Press Win + R, type services.msc, and press Enter.
- Find BitLocker Drive Encryption Service.
- Right-click it, select Properties, set the “Startup type” to Disabled, and click Apply.
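Both the registry value and the service change above can be applied from an elevated prompt; a sketch (BDESVC is the service name behind "BitLocker Drive Encryption Service"):

```shell
:: Registry value from the steps above
reg add "HKLM\SYSTEM\CurrentControlSet\Control\BitLocker" /v PreventDeviceEncryption /t REG_DWORD /d 1 /f

:: Disable and stop the BitLocker service
sc config BDESVC start= disabled
sc stop BDESVC
```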
Disable Side-Channel Mitigations
Disabling these can boost performance, especially on older processors, but may reduce security.
- Open the Windows Security app by searching for it in the Start menu.
- Select Device security from the left panel.
- Click on the Core isolation details link.
- Toggle the switch for Memory integrity to Off.
- Select Yes when the User Account Control (UAC) prompt appears.
- Restart your computer for the changes to take effect
Note: if your host is running Hyper-V virtualization, you may need to check the "disable side channel mitigations for Hyper-V enabled hosts" option in the advanced options for each Workstation VM.
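If you'd rather script the Memory integrity toggle, it maps to a registry value. A sketch (run elevated and reboot afterwards; as noted above, this reduces security):

```shell
reg add "HKLM\SYSTEM\CurrentControlSet\Control\DeviceGuard\Scenarios\HypervisorEnforcedCodeIntegrity" /v Enabled /t REG_DWORD /d 0 /f
```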

Clean out unused Devices:
Windows leaves behind all types of unused devices that are hidden from view in Device Manager. Though these are usually pretty harmless, it's a best practice to clean them up from time to time.
The quickest way to do this is with a tool called Device Cleanup Tool. Check out my video for more on how to use this tool.
Here is Device Cleanup Tool running on my newly (<2 months) installed system. As you can see unused devices can build up even after a short time frame.

Debloat, Clean up, and so much more
There are several standard Windows features, software packages, and cleanup tools that can impact the performance of my deployment. I prefer to run tools that help optimize Windows because they complete tasks quickly. The tool I use to debloat and clean up my system is WinUtil. It's a proven utility for optimizing systems, installing software and updates, and helping to maintain them. For more information about WinUtil check out their most recent update.
For ‘Tweaking’ new installs I do the following:
- Launch the WinUtil program
- Click on Tweaks
- Choose Standard
- Unselect ‘Run Disk Cleanup’
- Click on Run Tweaks
Additionally, you may have noticed WinUtil can create an Ultimate Performance power plan. That may come in handy.

Remove Windows Programs:
Here is a list of all the Windows programs I remove; they are simply not needed for a Workstation deployment. Some of these can be removed using WinUtil.
- Cortana
- Copilot
- Camera
- Game Bar
- Teams
- News
- Mail and Calendar
- Maps
- Microsoft OneDrive
- Microsoft To Do
- Movies and TV
- People
- Phone Link
- Solitaire
- Sticky Notes
- Tips
- Weather
- Xbox / Xbox Live
References and Other Performance Articles:
VMware Workstation Gen 9: BOM2 P3 Workstation Installation and configuration
Now that my hardware and OS are ready the next step is installing Workstation and adding my previously configured VCF 9 VMs. In this blog I’ll cover these steps and get the VCF Environment up and running.
Workstation Pro 25H2u1 Update
I’ll need to download VMware Workstation Pro 25H2u1. The good news is, it’s free and users can download it at the Broadcom support portal. You can find it there under FREE Downloads.
Tip: Don’t forget to click on the “Terms and Conditions” link, then click the “I agree…” check box. It’s required before you can download this product.

Before I install Workstation I validate that Windows Hyper-V is not enabled. To do this, I go into Windows Features, ensure that Hyper-V and Windows Hypervisor Platform are NOT checked.
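This check can also be done from an elevated PowerShell prompt; a sketch using the optional-feature names for Hyper-V and the Windows Hypervisor Platform:

```powershell
# Check the current state of both features
Get-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-All, HypervisorPlatform |
    Select-Object FeatureName, State

# Disable them if they report Enabled (reboot required)
Disable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-All
Disable-WindowsOptionalFeature -Online -FeatureName HypervisorPlatform

# Also keep the Windows hypervisor from launching at boot
bcdedit /set hypervisorlaunchtype off
```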

Next I set a static IP address on my Windows system and give the NIC a unique name.
Tip: Want a quick way to get to your network adapters in Windows 11? Check out my blog post on how to make a super quick Network Settings shortcut.
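Setting the static IP and renaming the NIC can also be done with the built-in NetTCPIP cmdlets; a sketch with hypothetical adapter names and addresses (substitute your own):

```powershell
# Give the NIC a unique, descriptive name (example names/addresses only)
Rename-NetAdapter -Name "Ethernet" -NewName "Workstation-Uplink"

# Assign a static IPv4 address, gateway, and DNS server
New-NetIPAddress -InterfaceAlias "Workstation-Uplink" `
    -IPAddress 192.168.1.50 -PrefixLength 24 -DefaultGateway 192.168.1.1
Set-DnsClientServerAddress -InterfaceAlias "Workstation-Uplink" -ServerAddresses 192.168.1.1
```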

Once confirmed I install Workstation Pro 25H2u1. For more information on how to install Workstation 25H2u1 see my blog.
Restore Workstation LAN Segments
After the Workstation installation is complete, I go into the Virtual Network Editor. I delete the other VMnets and adjust VMnet0 to match the correct network adapter.

Next I add in one VM and then recreate all the VLAN Segments. For more information on this process, see my post under LAN Segments.

I add in the rest of my VMs and simply assign their LAN Segments.

This is what I love about Workstation, I was able to change and reconstruct a new server, migrate storage devices, and then recover my entire VCF 9 environment. In my next post I’ll cover how I set up Windows 11 for better performance.
Upgrading Workstation 25H2 to 25H2u1
On February 26, 2026, VMware released Workstation Pro 25H2u1. It's an update that fixes a few bugs and includes security patches. In this blog I'll cover how to upgrade to it.
Helpful links
- Release Notes: VMware Workstation Pro 25H2u1 Release Notes
- Free Download: Download 25H2U1 < Recommend logging into support.broadcom.com first, then paste this link into your browser.
- Documentation: VMware Workstation Pro 25H2 Documentation
Meet the Requirements:
When installing/upgrading Workstation on Windows most folks seem to overlook the requirements for Workstation and just install the product. You can review the requirements here.
There are a couple of items folks commonly miss when installing Workstation.
- The number one issue is Processor Requirements for Host Systems. Either the CPU is not supported, or Virtualization support was simply not enabled in the BIOS.
- The second item is Microsoft Hyper-V enabled systems. Workstation supports a system with Hyper-V enabled, but for the BEST performance it's best to just disable these features.
- Next, if you've upgraded your OS but never reinstalled Workstation, it's advised to uninstall and then reinstall Workstation.
- Lastly, if doing a fresh OS install, ensure drivers are updated and DirectX is at a supported version.
How to download Workstation
Download the Workstation Pro 25H2u1 for Windows. Make sure you click on the ‘Terms and Conditions’ AND check the box. Only then can you click on the download icon.

Choose your install path
Before you do a fresh install:
- Comply with the requirements
- For Accelerated 3D Graphics, ensure DirectX and Video card drivers are updated
- Review: Install Workstation Pro on a Windows Host
If you are upgrading Workstation, review the following:
- Ensure your environment is compatible with the requirements
- If you have existing VMs:
- Document your network settings
- Shut down your current version of Workstation
- Review: Upgrading Workstation Pro
Note: If you are upgrading the Windows OS or in the past have done a Windows Upgrade (Win 10 to 11), you must uninstall Workstation first, and then reinstall Workstation. More information here.
Upgrading Workstation 25H2 to 25H2u1
Run the downloaded file, wait a minute for it to confirm space requirements, then click Next

Accept the EULA.

Compatible setup check

Confirm install directory

Check for updates and join the CEIP program.

Allow it to create shortcuts

Click Upgrade to complete the upgrade

Click on Finish to complete the Wizard.

Lastly, check the version number of Workstation. Open Workstation > Help > About

VMware Workstation Gen 9: BOM2 P2 Device Checks and Windows 11 Install
For the Gen 9 BOM2 project, I have opted for a clean installation of Windows 11 to ensure a baseline of stability and performance. This transition necessitates a full reconfiguration of both the operating system and my primary Workstation environment. In this post, I will ensure devices are correctly deployed, install Windows 11, and run a quick benchmark test. Please note that this is not intended to be an exhaustive guide, but rather a technical log of my personal implementation process.
Validate Hardware Components
After the hardware configuration is complete, it's best to ensure everything is recognized by the motherboard. There are quite a few hardware items being carried over from BOM1 plus several new items, so it's important these items are recognized before the installation of Windows 11.
Using IPMIView with the SuperMicro X11DPH-T is quite handy. IPMIView enables me to see all types of data and allows for remote console access without physically being at the console. Simply connect a network cable to the IPMI port (Fig-1); by default it will get a DHCP address, or you can set the IP address in the BIOS. Next, via HTTPS, go to the assigned address, log in (by default the username and password are both ADMIN), and you'll have access to the IPMIView console. From this console you can manually set the IP address and VLAN ID, remotely access the console, and much more.
Fig-1

The SuperMicro IPMIView allows me to view some of the system hardware. After logging in I find the information under System > Hardware Information. I simply click on a device and it will expand more information.

The IPMIView is a bit limited in what it can show. To view settings around the PCIe slots or CPU configuration I'll need to access the BIOS. While in the BIOS I validate that the CPU settings have Virtual Machine Extensions (VMX) enabled. This is a requirement for Workstation.

Next I check on the PCIe devices via the bifurcation settings. I'm looking here to ensure the PCIe devices match the expected link speed. The auto mode for bifurcation worked without issue; it detected every device and speed, and there was no need for any change. To validate this, while in the BIOS I went into Advanced > Chipset Config > North Bridge > IIO Configuration > CPU1 and validated the IOU# settings are set to Auto. I repeat this for CPU2. Then, just below the CPUs, I drill down on each CPU port to ensure the PCIe Link Status, Max, and speed are aligned to the device specifications. I use the System Block Diagram from my last post to ID the CPU, then the CPU port number, which leads me to the PCIe slot number. From there I can determine which hardware device is connected. In Fig-2 below, I'm looking at one half of the 8x PCIe card in Slot 5. Auto mode detected it perfectly.
Fig-2

Adjust Power Settings in BIOS
- There are several settings to ensure stable performance with a Supermicro X11DPH-T.
- Enter Setup, confirm/adjust the following, and save the changes:
- Advanced > CPU Configuration
- Hyper-Threading > Enabled
- Cores Enabled > 0
- Hardware Prefetcher > Enabled
- Advanced Power Management Configuration
- Power Technology > Custom
- Power Performance Tuning > BIOS Controls EPB
- Energy Performance BIAS Setting > Maximum Performance
- CPU C State Control, All Disabled
- Advanced > Chipset Configuration > North Bridge > Memory Configuration
- Memory Frequency > 2933
Windows 11 Install
Once all the hardware is confirmed I create my Windows 11 boot USB using Rufus and boot to it. For more information on this process see my past video around creating it.
Next I install Windows 11 and after it’s complete I update the following drivers.
- Install Intel Chipset drivers
- Install Intel NIC Drivers
- Run Windows updates
At this point all the correct drivers should be installed, I validate this by going into Device Manager and ensuring all devices have been recognized.
I then go into Disk Manager and ensure all the drives have the same drive letter as they did in BOM1. If they don’t match up I use Disk Manager to align them.
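Drive letters can also be checked and realigned from PowerShell; a sketch (the disk/partition numbers and the letter are example values; confirm yours with the first command before changing anything):

```powershell
# List partitions with their current letters
Get-Partition | Select-Object DiskNumber, PartitionNumber, DriveLetter, Size

# Assign the letter a data partition had under BOM1 (example values)
Set-Partition -DiskNumber 2 -PartitionNumber 1 -NewDriveLetter V
```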

Install Other Software Tools
Quick Bench Mark
After I installed Windows 11 Pro, I ran a quick ATTO benchmark on my devices. I do this to ensure the drives are working optimally, plus it'll serve as a baseline if I have issues in the future. There is nothing worse than a disk that is not performing well, and it's better to get performance issues sorted out early on.
These are the results of the 1.5TB Optane Disks.

I tested all 6 of the Rocket 2TB NVMe Disks, here are results for 3 of them, each one on a different PCIe slot.

Lastly, I tested the Intel 3.84TB SSD.

With the hardware confirmed and the OS installed I’m now ready to install Workstation 25H2 and configure it.
VMware Workstation Gen 9: BOM2 P1 Motherboard upgrade
To take the next step in deploying my nested VCF 9 and adding VCF Automation, I’m going to need to make some updates to my Workstation Home Lab. BOM1 simply doesn’t have enough RAM, and I’m a bit concerned about VCF Automation being CPU/Core demanding. In this blog post I’ll cover some of the products I chose for BOM2.
A bit of Background
It should be noted, my ASRock Rack motherboard (BOM1) was performing well with nested VCF9. However, it was constrained by its available memory slots plus only supported one CPU. I considered upgrading to higher-capacity DIMMs; however, the cost was prohibitive. Ultimately, replacing the motherboard proved to be a more cost-effective solution, allowing me to leverage the memory and CPU I already owned.
Initially, I chose the Gigabyte MD71-HB0. At the time it was rather affordable, but it lacked PCIe bifurcation, a feature I needed to support dual NVMe disks in one PCIe slot. To overcome this I chose the RIITOP M.2 NVMe SSD to PCI-e 3.1 adapter. These cards essentially emulate a bifurcated PCIe slot but added expense to the solution. Though I was able to get my nested VCF environment up and running, it was short-lived due to a physical fault. I was able to return the board, but buying it again would have cost double, so I went a different direction. If you are interested in my write-up about the Gigabyte mobo click here, but do know I am no longer using it.
My Motherboard choice for BOM2
I started looking for a motherboard that would fit my needs. Some of the features I was looking for were: support for dual Gold 6252 CPUs, support for my existing 32/64GB RAM modules, adequate PCIe slots, a fit for my case, reasonable power needs, and support for bifurcation. The motherboard I chose was the SuperMicro X11DPH-T. Buying it refurbished was a way to keep the cost down and meet my needs.

The migration from BOM1 to BOM2
The table below outlines the changes planned from BOM1 to BOM2. There were minimal unused products from the original configuration, and after migrating components, the updated build will provide more than sufficient resources to meet my VCF 9 compute/RAM requirements.
Pro Tip: When assembling new hardware, I take a methodical, incremental approach. I install and validate one component at a time, which makes troubleshooting far easier if an issue arises. I typically start with the CPUs and a minimal amount of RAM, then scale up to the full memory configuration, followed by the video card, add-in cards, and then storage. It’s a practical application of the old adage: don’t bite off more than you can chew—or in this case, compute.
| KEEP from BOM1 | Added to create BOM2 | UNUSED |
| Mobo: SuperMicro X11DPH-T | Mobo: ASRock Rack EPC621D8A | |
| CPU: 2 x Xeon Gold ES 6252 New net total 48 pCores | CPU: 1 x Xeon Gold ES 6252 (ES means Engineering Samples) | |
| Cooler: 1 x Noctua NH-D9 DX-3647 4U | Cooler: 1 x Noctua NH-D9 DX-3647 4U | 10Gbe NIC: ASUS XG-C100C 10G Network Adapter |
| RAM: 384GB 4 x 64GB Samsung M393A8G40MB2-CVFBY 4 x 32GB Micron MTA36ASF4G72PZ-2G9E2 | RAM: New net total 640GB 8 x 32GB Micron MTA36ASF4G72PZ-2G9E2 | |
| NVMe: 6 x Sabrent 2TB ROCKET NVMe PCIe (Workstation VMs) | NVMe Adapter: 3 x Supermicro PCI-E Add-On Card for up to two NVMe SSDs | NVMe: 2 x 1TB NVMe (Win 11 Boot Disk and Workstation VMs) |
| HDD: 1 x Seagate IronWolf Pro 18TB | Disk Cables: 2 x Mini SAS to 4 SATA Cable, 36 Pin SFF 8087 | Video Card: GIGABYTE GeForce GTX 1650 SUPER |
| SSD: 1 x 3.84TB Intel D3-4510 (Workstation VMs) | Boot/Extra Disk: 2 x Optane 4800X 1.5TB Disk | |
| Case: Phanteks Enthoo Pro series PH-ES614PC_BK Black Steel | 2 x PCIe 4x to U.2 NVMe Adapter | |
| Power Supply: MAG A1000GL 1000 Watt | ||
Noise and Power
I commonly get asked, "How is the noise and power consumption on this build?" Fan noise is whisper quiet. It's one of the reasons I chose to do a DIY build over buying a server. The Phanteks Enthoo case fans and the Noctua fans do a great job keeping the noise levels down. They may spin up from time to time, but it's nothing compared to the noise a server chassis might make. For power I'm seeing nominally ~135 Watts. However, I haven't spun up my workloads yet, so this may increase.
Uniqueness with the SuperMicro X11DPH-T
Issue 1 – Xeon Gold 6252 Engineering Samples (ES) issues with RAM
I had to switch from ES CPUs to GA released CPUs. The good news was the price of the Xeon Gold 6252 is at an all-time low. The ES CPUs had issues with memory timing. When using an ES CPU it's sometimes hard to pinpoint why it's failing, but once I replaced them the following errors went away. With the cost of actual GA CPUs being so low, I will avoid using ES CPUs for this build.
- Memory training failure. – Assertion
- Failing DIMM: DIMM location (Uncorrectable memory component found). (P2-DIMME1) – Assertion
Issue 2 – Fan header blocked by PCIe slot.
The 2nd CPU fan header is blocked if a PCIe Card is in this slot. I can only assume they had no other choice, but putting it here?

Issue 3 – NVMe placement
The NVMe slots are placed directly behind most of the PCIe slots, and they are at the same level as the PCIe slot. This blocks the insertion of any long PCIe cards. So if you want to put in a long GPU/Video card you’ll need to not use the onboard NVMe.

Issue 4 – Blocked I-SATA Ports
The edge connectors for the I-SATA ports can become blocked if you are using a long (225mm or greater) PCIe card.

PCIe Slot Placement:
For the best disk performance, PCIe Slot placement is really important. Things to consider – speed and size of the devices, and how the data will flow. Typically if data has to flow between CPUs or through the C621 chipset then, though minor, some latency is induced. If you have a larger video card, like the Super 1650, it’ll need to be placed in a PCIe slot that supports its length plus doesn’t interfere with onboard connectors or RAM modules.
The best way to layout your PCIe Devices is to look at a System Block Diagram (Fig-1). A good one will give you all types of information that you can use to optimize your deployment. Things I look for are the PCIe slots, how fast they are, which CPU are they attached to, and are they shared with other devices.
Fig-1

Using Fig-1, here is how I laid out my devices.
- Slot 7-6: Optane 1.5TB disks, used for boot and VMs
- Slot 5, 4, and 3: Dual 2TB NVMe disks
- I-SATA ports for SATA drives (18TB HDD backup, 3.84TB SSD VMs)

Other Thoughts:
- I did look for other mobos, workstations, and servers but most were really expensive. The upgrades I had to choose from were a bit constrained due to the products I had on hand (DDR4 RAM and the Xeon 6252 LGA-3647 CPUs). This narrowed what I could select from.
- The SuperMicro motherboard requires 2 CPUs if you want to use all the PCIe slots.
Now starts the fun, in the next posts I’ll finalize the install of Windows 11/Workstation, tune its performance, and get my VCF 9 Workstation VMs operational.
My Silicon Treasures: Mapping My Home Lab Motherboards Since 2009
I've been architecting home labs since the 90s—an era dominated by bare-metal Windows Servers and Cisco products. In 2008, my focus shifted toward virtualization, specifically building out VMware-based environments. What began as repurposing spare hardware for VMware Workstation quickly evolved. As my resource requirements scaled, I transitioned to dedicated server builds. Aside from a brief stint with Gen8 enterprise hardware, my philosophy has always been "built, not bought," favoring custom component selection over off-the-shelf rack servers. I've documented this architectural evolution over the years, and in this post, I'm diving into the specific motherboards that powered my past home labs.
Gen 1: 2009-2011 GA-EP43-UD3L Workstation 7 | ESX 3-4.x
Back in 2009, I was working for a local hospital in Phoenix and running the Phoenix VMUG. I deployed a Workstation 7 home lab on this Gigabyte motherboard. Though my deployment was simple, I was able to deploy ESX 3.5 – 4.x with only 8GB of RAM and attach it to an IOMega ix4-200d. I used it at our Phoenix VMUG meetings to teach others about home labs. I found the receipt for the CPU ($150) and motherboard ($77); wow, prices sure have changed.
REF Link – Home Lab – Install of ESX 3.5 and 4.0 on Workstation 7

Gen2: 2011-2013 Gigabyte GA-Z68XP-UD3 Workstation 8 | ESXi 4-5
Gen1 worked quite well for what I needed, but it was time to expand as I started working for VMware as a Technical Account Manager. I needed to keep my skills sharp and deploy more complex home lab environments. Though I didn't know it back then, this was the start of my HOME LABS: A DEFINITIVE GUIDE. I really started to blog about the plan to update and why I was making different choices. I ran into a very unique issue that even Gigabyte and Hitachi couldn't figure out, which I blogged about here.
Deployed with an i7-2600 ($300), Gigabyte GA-Z68XP-UD3 ($150), and 16GB DDR3 RAM
REF Link: Update to my Home Lab with VMware Workstation 8 – Part 1 Why

Gen2: Zotac M880G-ITX then the ASRock FM2A85X-ITX | FreeNAS Server
Back in the day I needed better performance from my shared storage, as the IOMega had reached its limits. Enter the short-lived FreeNAS server in my home lab. Yes, it did perform better, but man, it was full of bugs and issues, some due to the Zotac motherboard and some due to FreeNAS. I was happy to move on to vSAN with Gen 3.
REF: Home Lab – freeNAS build with LIAN LI PC-Q25, and Zotac M880G-ITX



Gen3: 2012-2016 MSI Z68MA-G45 (B3) | ESXi 5-6
I needed to expand my home lab into dedicated hosts. Enter the MSI Z68MA-G45 (B3). It became my workhorse, expanding the lab from the single Gen 2 Workstation server to 3 dedicated hosts running vSAN.
REF: VSAN – The Migration from FreeNAS


Gen4: 2016-2019 Gigabyte MX31-BS0
This mobo was used in my ‘To InfiniBand and beyond’ blog series. It had some “wonkiness” with its firmware updates, but other than that it was a solid performer. Deployed with an E3-1500 and 32GB RAM.
REF: Home Lab Gen IV – Part I: To InfiniBand and beyond!

Gen 5: 2019-2020 JINGSHA X79
I had maxed out Gen 4 and really needed to expand my CPU cores and RAM, hence the blog series title: ‘The Quest for More Cores!’. Deployed with 128GB RAM and a Xeon E5-2640 v2 (8 cores), it fit the bill. This series is where I started YouTube videos and documenting my builds per my design guides. Though this mobo was good for its design, its lack of PCIe slots made it short-lived.
REF: Home Lab GEN V: The Quest for More Cores! – First Look

Gen 7: 2020-2023: Supermicro X9DRD-7LN4F-JBOD and the MSI PRO Z390-A PRO
The Gen 5 motherboard fell short when I wanted to deploy an all-flash vSAN based on NVMe. With this Supermicro motherboard I had no issues with IO or with deploying it as all-flash vSAN. It also gathered the attention of Intel, who offered me their Optane drives to create an all-flash Optane system. More on that in Gen 8.
The MSI motherboard was a needed update to my VMware Workstation system. I built it up as a Workstation / Plex server and it did this job quite well.
This generation is when I started to align my Gen numbers to vSphere releases. It makes them much easier to track.
REF: Home Lab Generation 7: Updating from Gen 5 to Gen 7



Gen 8: 2023-2024 Dell T7820 VMware Dedicated Hosts
With some support from Intel I was able to uplift my 3 x Dell T7820 workstations into a great home lab. They supplied engineering-sample CPUs, RAM, and Optane disks. Plus, I was able to coordinate the distribution of Optane disks to vExperts globally. It was a great home lab and I learned a ton!
REF: Home Lab Generation 8 Parts List (Part 2)

Gen 8-9: 2023-2026 ASRock Rack EPC621D8A VMware Workstation Motherboard
Evolving my Workstation PC, I used this ASRock Rack motherboard. It was the perfect solution for running nested clusters of ESXi VMs with vSAN ESA. Until very recently it was a really solid mobo, and I even got it to run a nested VCF 9 simple install.
REF: Announcing my Generation 8 Super VMware Workstation!

Gen 9: 2024 – Current
As of this date it’s still under development. See my Home Lab BOM for more information. However, I’m moving my home lab to nested-only VCF 9 deployments on Workstation rather than dedicated servers.
VMware Workstation Gen 9: FAQs
I compiled a list of frequently asked questions (FAQs) around my Gen 9 Workstation build. I’ll update it from time to time, but do feel free to reach out if you have additional questions.
Last Update: 02/04/2026
General FAQs
Why Generation 9? Starting with my Gen 7 build, the Gen number aligns to the version of vSphere it was designed for. So, Gen 9 = VCF 9. It also helps my readers track the generations that interest them the most.
Why are you running Workstation vs. dedicated ESX servers? I’m pivoting my home lab strategy, moving from a complex multi-server setup to a streamlined, single-host configuration on VMware Workstation. Managing multiple hosts, though it gives real-world experience, wasn’t meeting my needs for quick system recovery or testing different software versions. With Workstation, I can run and deploy multiple types of home labs with simple backup/recovery, and Workstation’s snapshot manager allows me to roll back labs quite quickly. I find Workstation more adaptable, making my lab time about learning rather than maintenance.
What are your goals with Gen 9? To develop and build a platform able to run the full VCF 9 product stack for home lab use. See Gen 9 Part 1 for more information on goals.
Where can I find your Gen 9 Workstation Build series? All of my most popular content, including the Gen 9 Workstation builds can be found under Best of VMX.
What version of Workstation are you using? Currently VMware Workstation 25H2; this may change over time, so see my Home Lab BOM for more details.
How performant is running VCF 9 on Workstation? In my testing I’ve had adequate success with a simple VCF install on BOM1. Clicks throughout the various applications didn’t seem to lag. I plan to expand to a full VCF install under BOM2 and will do some performance testing soon.
What core services are needed to support this VCF deployment? Core services are supplied via Windows Server. They include AD, DNS, NTP, RAS, and DHCP, with DNS, NTP, and RAS being the most important.
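Since DNS is the service that most often trips up a VCF deployment, I find a scripted sanity check useful. Here is a minimal sketch in shell, run from any Linux VM on the lab network; the DNS server IP and the hostnames are example values from this series, so substitute your own:

```shell
#!/bin/sh
# Quick DNS sanity check for the lab's core services.
# DNS_SERVER and the hostnames below are EXAMPLE values - adjust for your lab.
DNS_SERVER="${DNS_SERVER:-192.168.1.230}"

check_record() {
  # Resolve a name against the lab DNS server and report pass/fail.
  name="$1"
  if nslookup "$name" "$DNS_SERVER" >/dev/null 2>&1; then
    echo "OK   $name"
  else
    echo "FAIL $name"
  fi
}

# Set RUN_CHECKS=1 to query the real DNS server:
if [ -n "$RUN_CHECKS" ]; then
  check_record vcsa234.nested.local
  check_record sddcmgr108.nested.local
  check_record nsxmgr.nested.local
fi
```

I run the same kind of check for reverse records, since the VCF Installer validates both directions.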
BOM FAQs
Where can I find your Bill of Materials (BOM)? See my Home Lab BOM page.
Why 2 BOMs for Gen 9? Initially I started with the hardware I had; this became BOM1. It worked perfectly for a simple VCF install. Eventually I needed to expand my RAM to support the entire VCF stack. I had 32GB DDR4 modules on hand, but the BOM1 motherboard was fully populated. It was less expensive to buy a motherboard with enough RAM slots, plus I could add a 2nd CPU. This upgrade became BOM2. Additionally, it gives my readers ideas for different configurations that might work for them.
What can I run on BOM1? I have successfully deployed a simple VCF deployment, but I don’t recommend running VCF Automation on this BOM. See the Best of VMX section for a 9 part series.
What VCF 9 products are running in BOM1? Initial components include: VCSA, VCF Operations, VCF Collector, NSX Manager, Fleet Manager, and SDDC Manager all running on the 3 x Nested ESX Hosts.
What are your plans for BOM2? Currently under development, but I would like to see if I can push the full VCF stack to it.
What can I run on BOM2? Under development, updates soon.
Are you running both BOM configurations? No, I’m only running one at a time. Currently running BOM2.
Do I really need this much hardware? No, you don’t. The parts listed in my BOM are just how I did it. I used some parts I had on hand and bought some used. My recommendation is to use what you have and upgrade when you need to.
What should I do to help with performance? Invest in high-speed disks, CPU cores, and RAM. I highly recommend plenty of properly deployed NVMe disks for your nested ESX hosts, and make performance adjustments to Windows 11. Check out VMware Workstation Gen 9 P4: BOM2 Workstation/Win11 Performance enhancements for suggestions.
What do I need for multiple NVMe drives? If you plan to run multiple NVMe drives in a single PCIe slot, you’ll need a motherboard that supports bifurcation OR a PCIe NVMe adapter that supports it. Not all NVMe adapters are the same, so do your research before buying.
VMware Workstation Gen 9: Part 9 Shutting down and starting up the environment
Deploying the VCF 9 environment onto Workstation was a great learning process. However, I use my server for other purposes and rarely run it 24/7. After its initial deployment, my first tasks are shutting down the environment, backing it up, and then starting it back up. In this blog post I’ll document how I accomplish this.
NOTE:
- Licensing should be completed for the VCF 9 environment before performing the steps below. If not, the last step, vSAN shutdown, will cause an error. There is a simple workaround.
- I fully complete each step before moving to the next. Some steps can take a while to complete.
How to shut down my VCF Environment.
My main reference for the VCF 9 shutdown procedure is the VCF 9 documentation on techdocs.broadcom.com (see REF URLs below). The section “Shutdown and Startup of VMware Cloud Foundation” is well detailed, and I have placed the main URL in the reference URLs below. For my environment I need to focus on shutting down my Management Domain, as it also houses my workload VMs.
Here is the order in which I shut down my environment. This may change over time as I add other components.
Note – it is advised to complete each step fully before proceeding to the next step.
| Shutdown Order | SDDC Component |
|---|---|
| In vCenter, shut down all non-essential guest VMs | |
| 1 – Not needed, not deployed yet | VCF Automation |
| 2 – Not needed, not deployed yet | VCF Operations for Networks |
| 3 – From VCSA234, locate the VCF Operations collector appliance (opscollectorappliance). – Right-click the appliance and select Power > Shut down Guest OS. – In the confirmation dialog box, click Yes. – Wait for it to fully power off | VCF Operations collector |
| 4 – Not needed, not deployed yet | VCF Operations for logs |
| 5 – Not needed, not deployed yet | VCF Identity Broker |
| 6 – From vcsa234, in the VMs and Templates inventory, locate the VCF Operations fleet management appliance (fleetmgmtappliance.nested.local) – Right-click the VCF Operations fleet management appliance and select Power > Shut down Guest OS. – In the confirmation dialog box, click Yes. – Wait for it to fully power off | VCF Operations fleet management |
| 7 – You shut down VCF Operations by first taking the cluster offline and then shutting down the appliances of the VCF Operations cluster. – Log in to the VCF Operations administration UI at the https://vcfcops.nested.local/admin URL as the admin local user. – Take the VCF Operations cluster offline. On the System status page, click Take cluster offline. – In the Take cluster offline dialog box, provide the reason for the shutdown and click OK. – Wait for the Cluster status to read Offline. This operation might take about an hour to complete. (With no data mine took <10 mins) – Log in to vCenter for the management domain at https://vcsa234.nested.local/ui as a user with the Administrator role. – There could be other options for shutting down this appliance. Using Broadcom KB 341964 as a reference, I determined my next step is to simply right-click the vcfcops appliance and select Power > Shut down Guest OS. – In the VMs and Templates inventory, locate the VCF Operations appliance. – Right-click the appliance and select Power > Shut down Guest OS. – In the confirmation dialog box, click Yes. – This operation takes several minutes to complete. – Wait for it to fully power off | VCF Operations |
| 8 – Not Needed, not deployed yet | VMware Live Site Recovery for the management domain |
| 9 – Not Needed, not deployed yet | NSX Edge nodes |
| 10 – I continue shutting down the NSX infrastructure in the management domain and a workload domain by shutting down the one-node NSX Manager using the vSphere Client. – Log in to vCenter for the management domain at https://vcsa234.nested.local/ui as a user with the Administrator role. – Identify the vCenter instance that runs NSX Manager. – In the VMs and Templates inventory, locate the NSX Manager (nsxmgr.nested.local) appliance. – Right-click the NSX Manager appliance and select Power > Shut down Guest OS. – In the confirmation dialog box, click Yes. – This operation takes several minutes to complete. – Wait for it to fully power off | NSX Manager |
| 11 – Shut down the SDDC Manager appliance in the management domain by using the vSphere Client. – Log in to vCenter for the management domain at https://vcsa234.nested.local/ui as a user with the Administrator role. – In the VMs and templates inventory, expand the management domain vCenter Server tree and expand the management domain data center. – Right-click the SDDC Manager appliance (SDDCMGR108.nested.local) and click Power > Shut down Guest OS. – In the confirmation dialog box, click Yes. – This operation takes several minutes to complete. – Wait for it to fully power off | SDDC Manager |
| 12 – You use the vSAN shutdown cluster wizard in the vSphere Client to gracefully shut down the vSAN clusters in a management domain. The wizard shuts down the vSAN storage and the ESX hosts added to the cluster. – Identify the cluster that hosts the management vCenter for this management domain. – This cluster must be shut down last. – Log in to vCenter for the management domain at https://vcsa234.nested.local/ui as a user with the Administrator role. – For a vSAN cluster, verify the vSAN health and resynchronization status. – In the Hosts and Clusters inventory, select the cluster and click the Monitor tab. – In the left pane, navigate to vSAN Skyline health and verify the status of each vSAN health check category. – In the left pane, under vSAN Resyncing objects, verify that all synchronization tasks are complete. – Shut down the vSAN cluster. – In the inventory, right-click the vSAN cluster and select vSAN > Shutdown cluster. – In the Shutdown Cluster wizard, verify that all pre-checks are green and click Next. – Review the vCenter Server notice and click Next. – Enter a reason for performing the shutdown, and click Shutdown. – Briefly monitor the progress of the vSAN shutdown in vCenter. Eventually, VCSA will be shut down and connectivity to it will be lost. I then monitor the shutdown of my ESX hosts in Workstation. – The shutdown operation is complete after all ESX hosts are stopped. | Shut Down vSAN and the ESX Hosts in the Management Domain OR Manually Shut Down and Restart the vSAN Cluster. If vSAN fails to shut down due to a license issue, then under the vSAN Cluster > Configure > Services, choose ‘Resume Shutdown’ (Fig-3) |
| Next the ESX hosts will power off, and then I can do a graceful shutdown of my Windows server AD230. In Workstation, simply right-click this VM > Power > Shut Down Guest. Once all Workstation VMs are powered off, I can run a backup or exit Workstation and power off my server. | Power off AD230 |
(Fig-3)

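That last Workstation-side check can also be scripted. Workstation’s `vmrun` CLI lists running VMs and can request a graceful guest shutdown, so a small sketch like this confirms only AD230 remains before powering it off; the .vmx path is an example from my lab:

```shell
#!/bin/sh
# Sketch: verify only AD230 is still running, then shut its guest OS
# down gracefully via Workstation's vmrun CLI. The .vmx path is an
# EXAMPLE - point it at your own AD230 VM.
AD230_VMX="C:/VMs/AD230/AD230.vmx"

running_count() {
  # The first line of 'vmrun list' reads: "Total running VMs: N"
  vmrun list | head -n 1 | awk '{print $NF}'
}

shutdown_last_vm() {
  if [ "$(running_count)" -eq 1 ]; then
    # 'soft' asks VMware Tools for a guest OS shutdown, not a hard power-off.
    vmrun stop "$AD230_VMX" soft
  else
    echo "Other VMs still running - wait for the ESX hosts to stop."
  fi
}
# shutdown_last_vm   # uncomment to run on a machine with vmrun on the PATH
```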
Backing up my VCF Environment
With my environment fully shut down, I can now start the backup process. See my blog Backing up Workstation VMs with PowerShell for more details.
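My actual process is PowerShell-based (see the linked post), but the cold-backup idea itself is simple: with every VM powered off, archive each VM folder to a dated file. A hedged shell sketch of that outline, with placeholder SRC/DEST paths:

```shell
#!/bin/sh
# Cold-backup sketch: with all Workstation VMs powered off, archive each
# VM directory to a dated tarball. SRC and DEST are PLACEHOLDER paths -
# my real process uses PowerShell (see the linked post).
SRC="${SRC:-/d/VMs}"
DEST="${DEST:-/e/Backups}"

backup_name() {
  # e.g. AD230 -> AD230-2026-02-04.tar.gz
  echo "$1-$(date +%F).tar.gz"
}

run_backup() {
  for vm in "$SRC"/*/; do
    name=$(basename "$vm")
    tar -czf "$DEST/$(backup_name "$name")" -C "$SRC" "$name"
  done
}
# run_backup   # uncomment once SRC/DEST point at your disks
```

Backing up only while the VMs are powered off keeps the .vmdk files consistent, which is why this step comes after the full environment shutdown.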
How to restart my VCF Environment.
| Startup Order | SDDC Component |
|---|---|
| PRE-STEP: – Power on my Workstation server and start Workstation. – In Workstation, power on my AD230 VM and verify all the core services (AD, DNS, NTP, and RAS) are working okay. Start up the VCF Cluster: 1 – One at a time, power on each ESX Host. – vCenter is started automatically. Wait until vCenter is running and the vSphere Client is available again. – Log in to vCenter at https://vcsa234.nested.local/ui as a user with the Administrator role. – Restart the vSAN cluster. In the Hosts and Clusters inventory, right-click the vSAN cluster and select vSAN > Restart cluster. – In the Restart Cluster dialog box, click Restart. – Choose the vSAN cluster > Configure > vSAN > Services to see the vSAN Services page. This will display information about the restart process. – After the cluster has been restarted, check the vSAN health service and resynchronization status, and resolve any outstanding issues. Select the cluster and click the Monitor tab. – In the left pane, under vSAN > Resyncing objects, verify that all synchronization tasks are complete. – In the left pane, navigate to vSAN Skyline health and verify the status of each vSAN health check category. | Start vSAN and the ESX Hosts in the Management Domain / Start ESX Hosts with NFS or Fibre Channel Storage in the Management Domain |
| 2 – From vcsa234, locate the sddcmgr108 appliance. – In the VMs and Templates inventory, right-click the SDDC Manager appliance > Power > Power On. – Wait for this VM to boot. Check it by going to https://sddcmgr108.nested.local – As it’s getting ready you may see “VMware Cloud Foundation is initializing…” – Eventually you’ll be presented with the SDDC Manager page. – Exit this page. | SDDC Manager |
| 3 – From VCSA234, locate the nsxmgr VM, then right-click and select Power > Power on. – This operation takes several minutes to complete before the NSX Manager cluster becomes fully operational again and its user interface accessible. – Log in to NSX Manager for the management domain at https://nsxmgr.nested.local as admin. – Verify the system status of the NSX Manager cluster. – On the main navigation bar, click System. – In the left pane, navigate to Configuration > Appliances. – On the Appliances page, verify that the NSX Manager cluster has a Stable status and all NSX Manager nodes are available. Notes — Give it time. – You may see the Cluster status go from Unavailable > Degraded; ultimately you want it to show Available. – In the Node pane under Service Status you can click the # next to Degraded. This will pop up the appliance details and show which items are degraded. – If you click Alarms, you can see which alarms might need to be addressed. | NSX Manager |
| 4 – Not Needed, not deployed yet | NSX Edge |
| 5 – Not Needed, not deployed yet | VMware Live Site Recovery |
| 6 – From vcsa234, locate the vcfops.nested.local appliance. – Following the order described in Broadcom KB 341964. – For my environment I simply right-click the appliance and select Power > Power On. – Log in to the VCF Operations administration UI at the https://vcfops.nested.local/admin URL as the admin local user. – You may see ‘Retrieving Cluster Status’; give it time. Mine took under 2 minutes. – On the System status page, under Cluster Status, click Bring Cluster Online. – You may see ‘Retrieving Cluster Status’ again; give it time. Notes — Give it time. – This operation might take about an hour to complete. – Mine took under 15 minutes to come online. – The Cluster Status update may read ‘Going Online’. – To the right of the node name, the other columns continue to update, eventually showing ‘Running’ and ‘Online’. – Cluster Status will eventually go to ‘Online’. | VCF Operations |
| 7 – From vcsa234 locate the VCF Operations fleet management appliance (fleetmgmtappliance.nested.local) Right-click the VCF Operations fleet management appliance and select Power > Power On. – In the confirmation dialog box, click Yes. – Allow it to boot Note – Direct access to VCF Ops Fleet Management appliance is disabled. Go to VCF Operations > Fleet Mgmt > Lifecycle > VCF Management for appliance management. | VCF Operations fleet management |
| 8 – Not Needed, not deployed yet | VCF Identity Broker |
| 9 – Not Needed, not deployed yet | VCF Operations for logs |
| 10 – From vcsa234, locate the VCF Operations collector appliance (opscollectorappliance). Right-click the VCF Operations collector appliance and select Power > Power On. In the confirmation dialog box, click Yes. | VCF Operations collector |
| 11 – Not Needed, not deployed yet | VCF Operations for Networks |
| 12 – Not Needed, not deployed yet | VCF Automation |
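Most of the startup time in the table above is spent waiting for the various UIs (vCenter, SDDC Manager, VCF Operations) to come back. A small poll-until-up helper saves refreshing browser tabs; a sketch, with an example URL from this series:

```shell
#!/bin/sh
# Poll a URL until it responds or a timeout expires. The URL below is an
# EXAMPLE from my lab; adjust the number of tries and interval to taste.
wait_for_url() {
  url="$1"
  tries="${2:-60}"            # default: 60 tries x 10s = ~10 minutes
  i=0
  while [ "$i" -lt "$tries" ]; do
    # -k: the appliances use self-signed certs; -s: silent; discard body
    if curl -ks -o /dev/null --max-time 5 "$url"; then
      echo "UP $url"
      return 0
    fi
    i=$((i + 1))
    sleep 10
  done
  echo "TIMEOUT $url"
  return 1
}
# wait_for_url https://vcsa234.nested.local/ui
```

Note this only confirms the web server is answering; services like vSAN resync or the VCF Operations cluster status still need the checks described in the table.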
REF:
VMware Workstation Gen 9: Part 7 Deploying VCF 9.0.1
Now that I have set up a VCF 9 offline depot and downloaded the installation media, it’s time to move on to installing VCF 9 on my Workstation environment. In this blog I’ll document the steps I took to complete this.
PRE-Steps
1) One of the more important steps is making sure I back up my environment and delete any VM snapshots. This way my environment is ready for deployment.
2) Make sure your Windows 11 PC power plan is set to High Performance and does not put the computer to sleep.
3) Next, since my hosts are brand new, their self-signed certificates need updating. See the following URLs.
- VCF Installer fails to add hosts during deployment due to hostname mismatch with subject alternative name
- Regenerate the Self-Signed Certificate on ESX Hosts
4) I didn’t set up all of the DNS names ahead of time; I prefer to do it as I go through the VCF Installer. However, I test all my current DNS settings, and test the newly entered ones as I go.
5) Review the Planning and Resource Workbook.
6) Ensure the NTP Service is running on each of your hosts.
7) The VCF Installer 9.0.1 has some extra features to allow non-vSAN-certified disks to pass validation. However, nested hosts will fail the HCL checks. Simply add the line below to /etc/vmware/vcf/domainmanager/application-prod.properties and then restart the SDDC Domain Manager service with the command: systemctl restart domainmanager
This allows me to acknowledge the errors and move the deployment forward.

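Scripted, that change boils down to appending one line and restarting the service. A sketch for the VCF Installer appliance, where `KEY=VALUE` is a placeholder for the actual property shown in the screenshot (I’m deliberately not reproducing it here):

```shell
#!/bin/sh
# Sketch: append the HCL-override property to domainmanager's config and
# restart the service. KEY=VALUE is a PLACEHOLDER - use the actual line
# from the screenshot. Run on the VCF Installer appliance.
PROP='KEY=VALUE'
FILE=/etc/vmware/vcf/domainmanager/application-prod.properties

append_once() {
  # Append the line only if it is not already present (idempotent).
  grep -qxF "$1" "$2" 2>/dev/null || echo "$1" >> "$2"
}

apply() {
  append_once "$PROP" "$FILE"
  systemctl restart domainmanager
}
# apply   # uncomment to run on the appliance
```

The idempotent append matters if you re-run the pre-steps: duplicating the property line in the file gains nothing and makes later cleanup messier.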
Installing VCF 9 with the VCF Installer
I log into the VCF Installer.

I click on ‘Depot Settings and Binary Management’

I click on ‘Configure’ under Offline Depot, and then click Configure.

I confirm the Offline Depot connection is active.

I choose ‘9.0.1.0’ next to Version, select all components except VMware Cloud Automation, then click Download.

Allow the downloads to complete.

All selected components should state “Success” and the Download Summary for VCF should state “Partially Downloaded” (since VCF Automation was skipped) when they are finished.

Click Return Home and choose VCF under Deployment Wizard.

This is my first deployment so I’ll choose ‘Deploy a new VCF Fleet’

The Deploy VCF Fleet Wizard starts and I’ll input all the information for my deployment.
For Existing Components I simply choose Next, as I don’t have any.

I filled in the following information about my environment, chose Simple Deployment, and clicked Next.

I filled out the VCF Operations information and created the DNS records. Once complete, I clicked Next.

I chose “I want to connect a VCF Automation instance later” and clicked Next.

Filled out the information for vCenter

Entered the details for NSX Manager.

Left the storage items as default.

Added my 3 x ESX 9 hosts, confirmed all fingerprints, and clicked Next.
Note: if you skipped the prerequisite for the self-signed host certificates, you may want to go back and complete it before proceeding with this step.

Filled out the network information based on my VLAN plan.

For the Distributed Switch, click Select for Custom Switch Configuration, set MTU 9000 and 8 Uplinks, choose all services, then scroll down.

Renamed each port group, chose the network adapters and their networks, updated the NSX settings, then clicked Next.






Entered the name of the new SDDC Manager, updated its name in DNS, then clicked Next.

Reviewed the deployment information and chose next.
TIP – Download this information as a JSON Spec; it can save you a lot of typing if you have to deploy again.

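Before filing the spec away for a redeploy, it’s worth confirming the exported file parses cleanly. A small sketch using python3’s stdlib JSON parser from shell; the filename is an example:

```shell
#!/bin/sh
# Sanity-check the exported deployment spec before relying on it for a
# redeploy. The filename is an EXAMPLE - use the name you saved it as.
SPEC="${SPEC:-vcf-deploy-spec.json}"

validate_spec() {
  # python3 -m json.tool exits non-zero on malformed JSON.
  if python3 -m json.tool "$1" >/dev/null 2>&1; then
    echo "VALID $1"
  else
    echo "INVALID $1"
  fi
}
# validate_spec "$SPEC"
```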
Allow it to validate the deployment information.

I reviewed the validation warnings, clicked “Acknowledge all Warnings” at the top, and clicked ‘DEPLOY’ to move to the next step.


Allow the deployment to complete.

Once completed, I download the JSON Spec, review and document the passwords (Fig-1), and then log into VCF Operations (Fig-2).
(Fig-1)

(Fig-2)

Now that I have a VCF 9.0.1 deployment complete I can move on to Day N tasks. Thanks for reading and reach out if you have any questions.
VMware Workstation Gen 9: Part 6 VCF Offline Depot
To deploy VCF 9, the VCF Installer needs access to the VCF installation media, or binaries. This is done by configuring the Depot options in the VCF Installer. To move on to the next part, users will need to complete this step using the resources available to them. In this blog article I’m going to supply some resources to help users perform these functions.
Why only supply resources? When it comes to downloading and accessing VCF 9 installation media, as a Broadcom/VMware employee I’m not granted the same access as users. I have an internal process for accessing the installation media. That process is not publicly available, nor would it be helpful to users. This is why I’m supplying information and resources to help users through this step.
What are the Depot choices in the VCF Installer?
Users have 2 options: 1) connect to an online depot, or 2) use an offline depot.

What are the requirements for the 2 Depot options?
1) Connect to an online depot — Users need an entitled support.broadcom.com account and a download token. Once their token is authenticated, they can download.

See these URLs for more information:
2) Offline Depot — This option may be more common for users building out home labs.
See these URLs for more information:
- Set Up an Offline Depot Web Server for VMware Cloud Foundation
- Set Up an Offline Depot Web Server for VMware Cloud Foundation << Use this method if you want to set up HTTPS on Photon OS.
- How to deploy VVF/VCF 9.0 using VMUG Advantage & VCP-VCF Certification Entitlement
- Setting up a VCF 9.0 Offline Depot
I’ll be using the offline depot method to download my binaries, and in the next part I’ll deploy VCF 9.0.1.
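For a quick lab test of the offline depot idea (the guides above cover proper nginx and HTTPS setups), a depot directory can be served with python3’s built-in web server. A sketch only; the directory and port are example values:

```shell
#!/bin/sh
# Minimal offline-depot server for LAB TESTING only: serve the depot
# directory over plain HTTP with python3's built-in server. DEPOT_DIR
# and PORT are EXAMPLE values; the linked guides cover nginx/HTTPS.
DEPOT_DIR="${DEPOT_DIR:-/depot}"
PORT="${PORT:-8080}"

serve_depot() {
  # Fails fast if the depot directory does not exist.
  cd "$DEPOT_DIR" && python3 -m http.server "$PORT"
}
# serve_depot   # then point the VCF Installer at http://<host>:8080/
```

This serves unauthenticated plain HTTP, which is fine for an isolated lab network; for anything else, follow the HTTPS guide above.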