VMware Workstation Gen 9: FAQs
I compiled a list of frequently asked questions (FAQs) around my Gen 9 Workstation build. I’ll be updating it from time to time, but do feel free to reach out if you have additional questions.
Last Update: 01/29/2026
General FAQs
Why Generation 9? To track my home labs, I call them Generation plus a number. The Gen number aligns with the version of vSphere the lab was designed for.
Why are you running Workstation vs. dedicated ESX servers? I’m pivoting my home lab strategy: I’ve moved from a complex multi-server setup to a streamlined, single-host configuration using VMware Workstation. Managing multiple hosts, though it gives real-world experience, wasn’t meeting my needs when it came to rolling back from a crash or testing different software versions. With Workstation, I can run multiple labs at once and use Workstation’s snapshot manager as an ‘undo’ button. It’s much more adaptable, making my lab time about learning rather than maintenance.
Where can I find your Gen 9 Workstation Build series? All of my most popular content, including the Gen 9 Workstation can be found under Best of VMX.
What VCF 9 products are running in BOM1? Initial components include: VCSA, VCF Operations, VCF Collector, NSX Manager, Fleet Manager, and SDDC Manager all running on the 3 x Nested ESX Hosts.
What are your plans for BOM2? Currently under development, but I would like to see if I can push the full VCF stack to it.
What version of Workstation are you using? Currently, VMware Workstation 25H2.
What core services are needed to support this VCF Deployment? Core services are supplied via Windows Server. They include AD, DNS, NTP, RAS, and DHCP, with DNS, NTP, and RAS being the most important. (A quick sanity check is sketched below.)
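If you want a quick way to sanity-check those services, here's a minimal sketch I run from PowerShell on the Windows Server. The hostname is from my nested.local lab, so substitute your own records.

```powershell
# Quick health checks for the core services (run on the Windows Server).
Resolve-DnsName vcsa234.nested.local    # DNS: forward lookup resolves
w32tm /query /status                    # NTP: the time service is syncing
Get-Service DNS, NTDS, RemoteAccess |   # DNS Server, AD DS, and RAS services
    Format-Table Name, Status
```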
How performant is running VCF 9 on Workstation? In my testing I’ve had adequate success with a simple VCF install on BOM1. Clicks throughout the various applications didn’t seem to lag. I plan to expand to a full VCF install under BOM2 and will do some performance testing soon.
BOM FAQs
Where can I find your Bill of Materials (BOM)? See my Home Lab BOM page.
Why 2 BOMs for Gen 9? Initially, I started with the hardware I had; this became BOM1. It worked perfectly for a simple VCF install. Eventually, I needed to expand my RAM to support the entire VCF stack. I had 32GB DDR4 modules on hand, but the BOM1 motherboard was fully populated. It was less expensive to buy a motherboard that had enough RAM slots, plus I could add in a 2nd CPU. This upgrade became BOM2.
What can I run on BOM1? I have successfully completed a simple VCF deployment, but I don’t recommend running VCF Automation on this BOM. See the Best of VMX section for a 9-part series.
What can I run on BOM2? Under development, updates soon.
Are you running both BOM configurations? No, I’m only running one at a time; currently that’s BOM2.
Why list 2 BOMs? It gives my readers some ideas of different configurations that might work for them.
Do I really need this much hardware? No, you don’t. The parts on my BOM are just how I did it, using some parts I had on hand and some I bought used. My recommendation is to use what you have and upgrade if you need to.
What should I do to help with performance? Invest in high-speed disks, CPU cores, and RAM. I highly recommend lots of NVMe disks for your nested ESX hosts.
VMware Workstation Gen 9: Part 9 Shutting down and starting up the environment
Deploying the VCF 9 environment on to Workstation was a great learning process. However, I use my server for other purposes and rarely run it 24/7. After its initial deployment, my first task is shutting down the environment, backing it up, and then starting it up. In this blog post I’ll document how I accomplish this.
NOTE: Users should license their VCF 9 environment before performing the steps below. If not, the last step, vSAN Shutdown, will cause an error. There is a simple workaround (see the table below).
How to shut down my VCF Environment.
My main reference for VCF 9 shutdown procedures is the VCF 9 documentation on techdocs.broadcom.com; the section on “Shutdown and Startup of VMware Cloud Foundation” is well detailed, and I have placed the main URL in the reference URLs below. For my environment I need to focus on shutting down my Management Domain, as it also houses my workload VMs.
Here is the order in which I shut down my environment. This may change over time as I add other components.
| Shutdown Order | SDDC Component |
|---|---|
| 1 – Not needed, not deployed yet | VCF Automation |
| 2 – Not needed, not deployed yet | VCF Operations for Networks |
| 3 – From vcsa234, locate the VCF Operations collector appliance (opscollectorappliance). – Right-click the appliance and select Power > Shut down Guest OS. – In the confirmation dialog box, click Yes. | VCF Operations collector |
| 4 – Not needed, not deployed yet | VCF Operations for logs |
| 5 – Not needed, not deployed yet | VCF Identity Broker |
| 6 – From vcsa234, in the VMs and Templates inventory, locate the VCF Operations fleet management appliance (fleetmgmtappliance.nested.local) – Right-click the VCF Operations fleet management appliance and select Power > Shut down Guest OS. – In the confirmation dialog box, click Yes. | VCF Operations fleet management |
| 7 – You shut down VCF Operations by first taking the cluster offline and then shutting down the appliances of the VCF Operations cluster. – Log in to the VCF Operations administration UI at https://vcops.nested.local/admin as the admin local user. – Take the VCF Operations cluster offline. On the System status page, click Take cluster offline. – In the Take cluster offline dialog box, provide the reason for the shutdown and click OK. Wait for the Cluster status to read Offline. This operation might take about an hour to complete. (With no data, mine took <10 mins.) – Log in to vCenter for the management domain at https://vcsa234.nested.local/ui as a user with the Administrator role. – After reading Broadcom KB 341964, I determined my next step is simply to shut down the appliance: in the VMs and Templates inventory, locate the VCF Operations appliance, right-click it, and select Power > Shut down Guest OS. – In the confirmation dialog box, click Yes. This operation takes several minutes to complete. | VCF Operations |
| 8 – Not Needed, not deployed yet | VMware Live Site Recovery for the management domain |
| 9 – Not Needed, not deployed yet | NSX Edge nodes |
| 10 – I continue shutting down the NSX infrastructure in the management domain and a workload domain by shutting down the one-node NSX Manager using the vSphere Client. – Log in to vCenter for the management domain at https://vcsa234.nested.local/ui as a user with the Administrator role. – Identify the vCenter instance that runs NSX Manager. – In the VMs and Templates inventory, locate the NSX Manager appliance (nsxmgr.nested.local). – Right-click the NSX Manager appliance and select Power > Shut down Guest OS. – In the confirmation dialog box, click Yes. – This operation takes several minutes to complete. | NSX Manager |
| 11 – Shut down the SDDC Manager appliance in the management domain by using the vSphere Client. – Log in to vCenter for the management domain at https://vcsa234.nested.local/ui as a user with the Administrator role. – In the VMs and templates inventory, expand the management domain vCenter Server tree and expand the management domain data center. – Expand the Management VMs folder. – Right-click the SDDC Manager appliance (SDDCMGR108.nested.local) and click Power > Shut down Guest OS. – In the confirmation dialog box, click Yes. This operation takes several minutes to complete. | SDDC Manager |
| 12 – You use the vSAN Shutdown Cluster wizard in the vSphere Client to gracefully shut down the vSAN clusters in a management domain. The wizard shuts down the vSAN storage and the ESX hosts added to the cluster. – Identify the cluster that hosts the management vCenter for this management domain; this cluster must be shut down last. – Log in to vCenter for the management domain at https://vcsa234.nested.local/ui as a user with the Administrator role. – For a vSAN cluster, verify the vSAN health and resynchronization status. – In the Hosts and Clusters inventory, select the cluster and click the Monitor tab. – In the left pane, navigate to vSAN Skyline health and verify the status of each vSAN health check category. – In the left pane, under vSAN Resyncing objects, verify that all synchronization tasks are complete. – Shut down the vSAN cluster. In the inventory, right-click the vSAN cluster and select vSAN Shutdown cluster. – In the Shutdown Cluster wizard, verify that all pre-checks are green and click Next. Review the vCenter Server notice and click Next. – If vCenter is running on the selected cluster, note the orchestration host details. Connection to vCenter is lost because the wizard shuts it down. The shutdown operation is complete after all ESX hosts are stopped. – Enter a reason for performing the shutdown, and click Shutdown. | Shut Down vSAN and the ESX Hosts in the Management Domain, OR Manually Shut Down and Restart the vSAN Cluster. If vSAN fails to shut down due to a license issue, go to the vSAN cluster > Configure > Services and choose ‘Resume Shutdown’ (Fig-3). |
| Next the ESX hosts will power off, and then I can do a graceful shutdown of my Windows server AD230. In Workstation, simply right-click on this VM > Power > Shut Down Guest. Once all Workstation VMs are powered off, I can run a backup or exit Workstation and power off my server. | Power off AD230 |
(Fig-3)

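For readers who would rather script the appliance shutdowns than click through the vSphere Client, below is a minimal PowerCLI sketch of the same order. The VM names come from my lab, and it deliberately stops short of the vSAN cluster shutdown, which I leave to the wizard. Remember to take the VCF Operations cluster offline in its admin UI first.

```powershell
# Hedged PowerCLI sketch of my appliance shutdown order (steps 3-11 above).
Connect-VIServer -Server vcsa234.nested.local

$shutdownOrder = @(
    'opscollectorappliance'   # 3  - VCF Operations collector
    'fleetmgmtappliance'      # 6  - VCF Operations fleet management
    'vcops'                   # 7  - VCF Operations (cluster taken offline first!)
    'nsxmgr'                  # 10 - NSX Manager
    'sddcmgr108'              # 11 - SDDC Manager
)

foreach ($name in $shutdownOrder) {
    $vm = Get-VM -Name $name -ErrorAction SilentlyContinue
    if ($vm -and $vm.PowerState -eq 'PoweredOn') {
        Stop-VMGuest -VM $vm -Confirm:$false       # graceful guest OS shutdown
        while ((Get-VM -Name $name).PowerState -ne 'PoweredOff') {
            Start-Sleep -Seconds 15                # wait before the next appliance
        }
    }
}
```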
How to restart my VCF Environment.
| Startup Order | SDDC Component |
|---|---|
| PRE-STEP: – Power on my Workstation server and start Workstation. – In Workstation, power on my AD230 VM and verify all the core services (AD, DNS, NTP, and RAS) are working okay. Start up the VCF cluster: 1 – Power on each ESX host, one at a time. – vCenter is started automatically. Wait until vCenter is running and the vSphere Client is available again. – Log in to vCenter at https://vcsa234.nested.local/ui as a user with the Administrator role. – Restart the vSAN cluster. In the Hosts and Clusters inventory, right-click the vSAN cluster and select vSAN Restart cluster. – In the Restart Cluster dialog box, click Restart. – Choose the vSAN cluster > Configure > vSAN > Services to see the vSAN Services page, which displays information about the restart process. – After the cluster has been restarted, check the vSAN health service and resynchronization status, and resolve any outstanding issues. Select the cluster and click the Monitor tab. – In the left pane, under vSAN > Resyncing objects, verify that all synchronization tasks are complete. – In the left pane, navigate to vSAN Skyline health and verify the status of each vSAN health check category. | Start vSAN and the ESX Hosts in the Management Domain OR Start ESX Hosts with NFS or Fibre Channel Storage in the Management Domain |
| 2 – From vcsa234, locate the sddcmgr108 appliance. – In the VMs and Templates inventory, right-click the SDDC Manager appliance > Power > Power On. – Wait for this VM to boot. Check it by going to https://sddcmgr108.nested.local – As it’s getting ready you may see “VMware Cloud Foundation is initializing…” – Eventually you’ll be presented with the SDDC Manager page. – Exit this page. | SDDC Manager |
| 3 – From vcsa234, locate the nsxmgr VM, then right-click and select Power > Power on. – This operation takes several minutes to complete, until the NSX Manager cluster becomes fully operational again and its user interface is accessible. – Log in to NSX Manager for the management domain at https://nsxmgr.nested.local as admin. – Verify the system status of the NSX Manager cluster. – On the main navigation bar, click System. – In the left pane, navigate to Configuration > Appliances. – On the Appliances page, verify that the NSX Manager cluster has a Stable status and all NSX Manager nodes are available. Notes — Give it time. – You may see the Cluster status go from Unavailable > Degraded; ultimately you want it to show Available. – In the node’s Service Status you can click on the # next to Degraded. This pops up the appliance details and shows you which items are degraded. – If you click on Alarms, you can see which alarms might need to be addressed. | NSX Manager |
| 4 – Not Needed, not deployed yet | NSX Edge |
| 5 – Not Needed, not deployed yet | VMware Live Site Recovery |
| 6 – From vcsa234, locate the vcfops.nested.local appliance. – Following the order described in Broadcom KB 341964, for my environment I simply right-click on the appliance and select Power > Power On. – Log in to the VCF Operations administration UI at https://vcfops.nested.local/admin as the admin local user. – On the System status page, click Bring Cluster Online. This operation might take about an hour to complete. Notes: – The Cluster Status may read ‘Going Online’ and then finally ‘Online’. – Node statuses may start to appear, eventually showing ‘Running’ and ‘Online’. – Took <15 mins to come Online. | VCF Operations |
| 7 – From vcsa234 locate the VCF Operations fleet management appliance (fleetmgmtappliance.nested.local) Right-click the VCF Operations fleet management appliance and select Power > Power On. In the confirmation dialog box, click Yes. | VCF Operations fleet management |
| 8 – Not Needed, not deployed yet | VCF Identity Broker |
| 9 – Not Needed, not deployed yet | VCF Operations for logs |
| 10 – From vcsa234, locate the VCF Operations collector appliance (opscollectorappliance). Right-click the VCF Operations collector appliance and select Power > Power On. In the confirmation dialog box, click Yes. | VCF Operations collector |
| 11 – Not Needed, not deployed yet | VCF Operations for Networks |
| 12 – Not Needed, not deployed yet | VCF Automation |
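For completeness, here is the mirror-image PowerCLI sketch for startup, once vSAN and vCenter are back: it powers the appliances on in table order and waits for VMware Tools before moving on. Same caveat as before — the VM names are from my lab.

```powershell
# Power the appliances back on in startup order (steps 2-10 above).
Connect-VIServer -Server vcsa234.nested.local

$startupOrder = @(
    'sddcmgr108'              # 2  - SDDC Manager
    'nsxmgr'                  # 3  - NSX Manager
    'vcops'                   # 6  - VCF Operations (bring the cluster online in its admin UI after)
    'fleetmgmtappliance'      # 7  - VCF Operations fleet management
    'opscollectorappliance'   # 10 - VCF Operations collector
)

foreach ($name in $startupOrder) {
    Start-VM -VM $name
    Wait-Tools -VM $name -TimeoutSeconds 900   # wait for VMware Tools before the next one
}
```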
REF:
VMware Workstation Gen 9: Part 7 Deploying VCF 9.0.1
Now that I have set up a VCF 9 Offline Depot and downloaded the installation media, it’s time to move on to installing VCF 9 in my Workstation environment. In this blog I’ll document the steps I took to complete this.
PRE-Steps
1) One of the more important steps is making sure I back up my environment and delete any VM snapshots. This way my environment is ready for deployment.
2) Make sure your Windows 11 PC power plan is set to High Performance and does not put the computer to sleep.
3) Next, since my hosts are brand new, they need their self-signed certificates updated. See the following URLs:
- VCF Installer fails to add hosts during deployment due to hostname mismatch with subject alternative name
- Regenerate the Self-Signed Certificate on ESX Hosts
4) I didn’t set up all of the DNS names ahead of time; I prefer to do it as I’m going through the VCF Installer. However, I test all my current DNS settings, and test the newly entered ones as I go.
5) Review the Planning and Resource Workbook.
6) Ensure the NTP Service is running on each of your hosts.
7) The VCF Installer 9.0.1 has some extra features to allow non-vSAN-certified disks to pass the validation section. However, nested hosts will fail the HCL checks. Simply add the line below to /etc/vmware/vcf/domainmanager/application-prod.properties and then restart the SDDC Domain Manager service with the command: systemctl restart domainmanager
This allows me to acknowledge the errors and move the deployment forward. (The change is sketched below.)
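For reference, the change looks like the sketch below, run over SSH on the appliance where the Domain Manager service runs. I’m leaving the property itself as a placeholder here — use the exact line shown in the screenshot below.

```bash
# Append the HCL-bypass property (placeholder -- see the screenshot for the real line),
# then restart the Domain Manager service.
echo 'property.from.screenshot=value' >> /etc/vmware/vcf/domainmanager/application-prod.properties
systemctl restart domainmanager
```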

Installing VCF 9 with the VCF Installer
I log into the VCF Installer.

I click on ‘Depot Settings and Binary Management’

I click on ‘Configure’ under Offline Depot and then click Configure.

I confirm the Offline Depot connection is active.

I choose ‘9.0.1.0’ next to Version, select all components except VMware Cloud Automation, then click Download.

Allow the downloads to complete.

All selected components should state “Success” and the Download Summary for VCF should state “Partially Downloaded” when they are finished.

Click return home and choose VCF under Deployment Wizard.

This is my first deployment so I’ll choose ‘Deploy a new VCF Fleet’

The Deploy VCF Fleet Wizard starts and I’ll input all the information for my deployment.
For Existing Components I simply choose next as I don’t have any.

I filled in the following information about my environment, chose Simple Deployment, and clicked Next.

I filled out the VCF Operations information and created their DNS records. Once complete I clicked on next.

I chose “I want to connect a VCF Automation instance later” and clicked Next.

Filled out the information for vCenter

Entered the details for NSX Manager.

Left the storage items as default.

Added in my 3 x ESX 9 Hosts, confirmed all fingerprints, and clicked on next.
Note: if you skipped the Pre-requisite for the self-signed host certificates, you may want to go back and update it before proceeding with this step.

Filled out the network information based on my VLAN plan.

For the Distributed Switch, click Select for Custom Switch Configuration, set MTU 9000 and 8 uplinks, choose all services, then scroll down.

Renamed each port group, chose the network adapters and their networks, updated the NSX settings, then clicked Next.






Entered the name of the new SDDC Manager, updated its name in DNS, then clicked Next.

Reviewed the deployment information and chose next.
TIP – Download this information as a JSON spec; it can save you a lot of typing if you have to deploy again.

Allow it to validate the deployment information.

I reviewed the validation warnings, clicked “Acknowledge all Warnings” at the top, and clicked ‘DEPLOY’ to move to the next step.


Allow the deployment to complete.

Once completed, I download the JSON spec, review and document the passwords (Fig-1), and then log into VCF Operations (Fig-2).
(Fig-1)

(Fig-2)

Now that I have a VCF 9.0.1 deployment complete I can move on to Day N tasks. Thanks for reading and reach out if you have any questions.
VMware Workstation Gen 9: Part 1 Goals, Requirements, and a bit of planning
It’s time to build my VMware Workstation–based home lab with VCF 9. In a recent blog post, I documented my upgrade journey from VMware Workstation 17 to 25H2. In this installment, we’ll go deeper into the goals, requirements, and overall planning for this new environment. As you read through this series, you may notice that I refer to VCF 9.0.1 simply as VCF 9 or VCF for brevity.
Important Notes:
- VMware Workstation Gen 9 series is still a work in progress. Some aspects of the design and deployment may change as the lab evolves, so readers should consider this a living build. I recommend waiting until the series is complete before attempting to replicate the environment in your own lab.
- There are some parts in this series where I am unable to assist users; instead, I provide resources and advice to help users through those phases. These areas are the VCF Offline Depot and licensing your environment. As a Broadcom/VMware employee, I am not granted the same access as customers; I have an internal process for accessing resources, and that process would not be helpful to users.
Overall Goals
- Build a nested minimal VCF 9.0.1 environment based on VMware Workstation 25H2 running on Windows 11 Pro.
- Both Workload and Management Domains will run on the same set of nested ESX Hosts.
- Using the VCF Installer I’ll initially deploy the VCF 9 Management Domain Components as a Simple Model.
- Initial components include: VCSA, VCF Operations, VCF Collector, NSX Manager, Fleet Manager, and SDDC Manager all running on the 3 x Nested ESX Hosts.
- Workstation Nested VMs are:
- 3 x ESX 9.0.1 Hosts
- 1 x VCF Installer
- 1 x VCF Offline Depot Appliance
- 1 x Windows 2022 Server (Core Services)
- Core Services supplied via Windows Server: AD, DNS, NTP, RAS, and DHCP.
- Networking: Private to Workstation, support VLANs, and support an MTU of 9000. Routing and internet access supplied by the Windows Server VM. (A quick MTU check is sketched after this list.)
- Should be able to run minimal workload VMs on the nested ESX hosts.
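Since MTU 9000 is an explicit goal, it’s worth verifying that jumbo frames actually pass end to end. A quick check from any Windows guest (the target IP is a placeholder for another lab host):

```powershell
# 8972 bytes of ICMP payload + 28 bytes of headers = 9000; -f forbids fragmentation.
# If jumbo frames aren't passing, expect "Packet needs to be fragmented but DF set."
ping -f -l 8972 192.168.10.10
# On an ESX host, the rough equivalent is: vmkping -d -s 8972 <target-ip>
```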
Hardware BOM
If you are interested in the hardware I’m running to create this environment, please see my Bill of Materials (BOM) page.
Additionally, check out the FAQ page for more information.
Deployment Items
To deploy the VCF Simple model I’ll need to make sure I have my ESX 9.0.1 Hosts configured properly. With a simple deployment we’ll deploy the 7 required appliances running on the Nested ESX hosts. Additionally, directly on Workstation we’ll be running the AD server, VCF Offline Depot tool, and the VCF Installer appliance.

Using the chart below I can get an idea of how many cores and how much RAM and disk will be needed. The one item that stands out to me is the component with the highest core count. In this case it’s VCF Automation at 24 cores. This is important, as I’ll need to make sure my nested ESX servers match or exceed 24 cores; if not, VCF Automation will not be able to deploy (a quick check is sketched below). Additionally, I’ll need to make sure I have enough RAM, disk, and space for workload VMs.
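Once vCenter is up, a PowerCLI one-liner like this (a sketch, assuming the VMware.PowerCLI module) will flag any nested host that can’t satisfy the 24-core requirement:

```powershell
# List any ESX host with fewer than 24 CPU cores.
Get-VMHost | Where-Object { $_.NumCpu -lt 24 } | Select-Object Name, NumCpu
```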

Workstation Items
My overall plan is to build out a Windows Server, 3 x ESX 9 hosts, VCF Installer, and the VCF Depot Appliance. Each one of these will be deployed directly onto Workstation. Once the VCF Installer is deployed it will take care of deploying and setting up the necessary VMs.
NOTE: In the network layout below, hosts that are blue in color are running directly on Workstation, and those in purple will be running on the nested ESX hosts.
Network Layout

One of the main network requirements for VCF is supporting VLAN networks. My Gen 8 Workstation deployment did not use VLAN networks. Workstation can pass tagged VLAN packets via LAN Segments. The configuration of LAN Segments is done in the VM’s Workstation settings, not via the Virtual Network Editor. We’ll cover this creation soon.
In the next part of this series I’ll show how I used Workstation Templates to create my VMs and align them to the underlying hardware.
Migrating VMs running on Workstation 25H2
I came across a need to migrate my Workstation-based VCSA 8 appliance into the vSphere/vSAN cluster. Both the VCSA and the vSphere cluster are VMs, and both are running on my Gen 8 Workstation home lab. In this case I need to complete this migration to prepare for a VCSA 9 upgrade: part of the VCSA 9 upgrade process requires the VCSA to be running directly on an ESXi host, and the move will better align my Workstation environment with VCF 9 standards. In this blog I’m going to demonstrate how I migrated my VCSA 8 appliance.

Note: These steps were performed on my Gen 8 Workstation home lab as I start preparing it for Gen 9 with VCF 9. Though I will try to write this blog post rather generally, it may contain references to my home lab. For more information about this home lab, check out my recent blog post.
Some options for migration:
- Connect to Server option in Workstation
- Workstation offers a convenient ‘Connect to Server’ feature. This allows users to connect to an ESX or VCSA server. When connected you can migrate VMs.
- However, this solution won’t work in my case, as my VCSA and ESXi hosts are on a private network that is inaccessible from my Workstation PC. Check out this link for more information >> ‘Connect to a Remote Server’
- VMware vCenter Converter Standalone (6.6.0 or 9.x)
- VMware vCenter Converter Standalone is a free product allowing for live or powered-off P2V and V2V migrations. You simply install it on a supported OS and migrate your VM to a target host.
- However, this solution doesn’t support migrating VMware Appliance VM’s.
- Use Workstation to Export to OVF
- OVF is a way to back up your VMs to files and prepare them to be imported to a different host. Workstation allows users to export VMs to an OVF file. Once exported, I can go to the ESXi host and import.
Option 3 is the one I chose, and here are the steps:
Pre checks:
- I reviewed how many cores the ESXi target host supports (8 cores) and how many cores the VCSA 8 Appliance (4 cores) was deployed with. I do this check to ensure the ESXi host will support the workload.
- I check the HDD size of the VCSA VM (~120GB Used) and ensure I have enough vSAN Storage (~3TB Free) to support it.
- Ensure you have root access to the VCSA server and the ESXi hosts
- Important — Check to ensure there is an ephemeral/non-static-binding vDS port group and that it is connected to the same network the VCSA server requires (a PowerCLI sketch follows this list). For more information about static/non-static port groups, see my blog about setting this up.
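If you’d rather script that last pre-check than click through the vSphere Client, here’s a minimal PowerCLI sketch; the switch name, port group name, and VLAN ID are placeholders from my lab.

```powershell
# Create an ephemeral-binding port group on the management vDS.
$vds = Get-VDSwitch -Name 'vDS-Mgmt'                # placeholder switch name
New-VDPortgroup -VDSwitch $vds -Name 'Mgmt-Ephemeral' -PortBinding 'Ephemeral' -VlanId 10
```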
Let’s migrate the VCSA server.
With the VMs powered off, I remove any Workstation snapshots on the VCSA 8 appliance and all 3 vSAN ESXi hosts.

I power up the vSphere 8 environment (AD, VCSA, ESXi hosts) and ensure everything is functioning properly.
In the vSphere Client, I ensure there is an appropriate Ephemeral or non-static binding port group attached to the management network.

Then I gracefully shut down the VCSA server. I do not power off the ESXi hosts.
In Workstation I choose the VCSA appliance then choose File > Export to OVF

I choose a location and file name, and click Save.

Workstation creates the OVF files and displays a progress bar. Depending on the size this could take some time to complete.
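As an aside, Workstation also bundles OVF Tool if you prefer a command-line export. A sketch — the install path and VM paths are from my machine and may differ on yours:

```powershell
# CLI alternative to File > Export to OVF.
$ovftool = 'C:\Program Files (x86)\VMware\VMware Workstation\OVFTool\ovftool.exe'
& $ovftool 'D:\VMs\VCSA8\VCSA8.vmx' 'D:\Export\VCSA8\VCSA8.ovf'
```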

Once completed, I open up the ESXi Host Client on the target host. Then I right-click on Host and choose Create/Register VM.

Choose ‘Deploy a virtual machine from an OVF or OVA file’ then Next.

Enter the name you want for the VM. Choose ‘Click to Select files or drag/drop’, I choose the location where the OVF files are, select ALL the files (not just the OVF file), click on Open and then Next.

Next choose the target datastore. I choose the vsanDatastore.

Validate that the Network Mappings point to the non-static port group, then click Next.

Then I click on Finish.

Several tasks are created, and I monitor the progress in the Recent Tasks display. The task named ‘Import VApp’ tracks the progress of the entire import; when it completes, the OVF import is done. Depending on the size this could take some time.

Once the transfer is complete, I boot the VCSA server. Once it is ready, I log into it via the vSphere Client. From there I right-click the VM > Settings > Network Adapter > Browse Network and choose the static-bound port group.
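That final network swap can also be scripted with PowerCLI — a sketch, assuming my VM and port group names:

```powershell
# Move the imported VCSA back onto the static-binding port group.
Get-VM -Name 'VCSA8' | Get-NetworkAdapter |
    Set-NetworkAdapter -Portgroup (Get-VDPortgroup -Name 'Mgmt-Static') -Confirm:$false
```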

And, that’s all folks. My Workstation based VCSA 8 Appliance has been migrated to a vSphere Cluster which is running as Workstation VMs.
Thanks for reading, and I do hope you picked up a tip or two. Please do reach out if you have any questions or comments.
VMware Workstation Gen 8: Environment Revitalization
In my last blog post, I shared my journey of upgrading to Workstation build 17.6.4 (build-24832109), plus ensuring I could start up my Workstation VMs. In this installment, we dive deeper into getting the environment ready and performing a backup.
Keep in mind my Gen 8 Workstation has been powered down for almost a year, so there are some things I have to do to get it ready. I see this blog as more informational and if users already have a stable environment you can skip these sections. However, you may find value in these steps if you are trying to revitalize an environment that has been shut down for a long period of time.
Before we get started, a little background.
This revitalization follows my designs that were published on my Workstation Home Lab YouTube series. That series focused on building a nested home lab using Workstation 17 and vSphere 8. Nesting with Workstation can evoke comparisons to the movie Inception, where everything is multi-layered. Below is a brief overview of my Workstation layout, aimed at ensuring we all understand which layer we are at.
- Layer 1 – Physical Layer:
- The physical hardware I use to support this VMware Workstation environment is a supercharged computer with lots of RAM, CPU, and high-speed drives. More information here.
- Base OS is Windows 11
- VMware Workstation build is 17.6.4 build-24832109
- Layer 2 – Workstation VMs: (Blue Box in diagram)
- I have 4 key VMs that run directly on Workstation.
- These VMs are: Win2022 Server, VCSA 8u2, and 3 x ESXi 8u2 hosts
- The Win2022 Server has the following services: AD, DNS, DHCP, and RAS
- Current state of these VMs is suspended.
- Layer 3 – Workload VMs: (Purple box)
- The 3 nested ESXi hosts have several VMs

Let’s get started!
Challenges:
1) Changes to License keys.
My vSphere environment’s vExpert license keys are expired. Those keys were based on vSphere 8.0u2 and were only good for one year. Since the release of vSphere 8.0u2b, subscription keys are needed. This means that to apply my new license keys I’ll have to upgrade vSphere.
TIP: Being a Broadcom VMware employee, I’m not eligible for VMUG or vExpert keys, but if you are interested in the process check out a post by Daniel Kerr. He did a great write-up.
2) Root Password is incorrect.
My root password into VCSA is not working and will need to be corrected.
3) VCSA Machine Certs need to be renewed.
There are several certificates that are expired and will need to be renewed. This is blocking me from being able to log on to the VCSA management console.
4) Time Sync needs to be updated.
I’ve changed locations, so the time zone will need to be updated along with NTP.
Here are the steps I took to resume my vSphere Environment.
The beauty of working with Workstation is the ability to back up and/or snapshot Workstation VMs as files and restore them when things fail. I took many snapshots and restored this lab a few times as I attempted to restart it. Restarting this lab was a bit of a learning process, as it took a few attempts to find everything that needed attention. Additionally, some of the processes you would follow in the real world didn’t apply here. So if you’re a bit concerned by some of the steps below, trust me, I tried the correct way first and it simply didn’t work out.
1) Startup Workstation VM AD222:
At this point – I have only resumed AD222.
The other VMs rely on the Windows 2022 VM for its services. First, I need to get this system up and validate that all of its services are operational.
- I used the Server Manager Dash Board as a quick way to see if everything is working properly.
- From this dashboard I can see that my services are working, and upon checking the red areas I found a non-issue: the Google updater was stopped.

- Run and Install Windows Updates
- Network Time Checks (NTP)
- All my VMs get their time from this AD server, so it being correct is important.
- I ensure the local time on the server is correct. From CLI I type in ‘w32tm /tz’ and confirm the time zone is correct.
- Using the ‘net time’ command I confirm the local date/time matches the GUI clock in the Windows server.
- Using ‘w32tm /query /status’ I confirm that time is syncing properly
- Note: My time ‘Source’ is noted as ‘Local CMOS Clock’. This is okay for my private Workstation environment. Had this been production, we would have wanted a better time source. (The three checks are collected below.)
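For convenience, here are the three checks in one place:

```powershell
# Time sanity checks on the AD server.
w32tm /tz              # time zone is correct
net time               # local date/time matches the GUI clock
w32tm /query /status   # sync source and offset look sane
```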

2) Fix VCSA223 Server Root Password:
At this point I have only resumed VCSA223; AD222 is powered on.
Though I was initially able to access VCSA via the vSphere Client, I eventually determined I was unable to log in to the VCSA appliance via DCUI, SSH, or management GUI. The root password was incorrect and needed to be reset.
To fix the password issue I need to gracefully shut down the VCSA appliance and follow KB 322247. In Workstation I simply right-clicked on the VCSA appliance > Power > Shut Down Guest

3) Cannot access the VCSA GUI: Error 503 – Service Not Available.
After fixing the VCSA password I was now able to access it via the SSH and DCUI consoles. However, I was unable to bring up the vSphere Client or the VCSA Management GUI. The management GUI simply stated ‘503 service not available’.
To resolve this issue I used the following KB’s
- 344201 Verify and resolve expired vCenter Server certificates using the command line interface
- Used this KB to help determine which certificates needed attention.
- Found an expired machine certificate (a sample check appears after this list).
- 385107 vCert – Scripted vCenter expired certificate replacement
- Following this KB, I downloaded vCert to the VCSA appliance via WinSCP.
- I used KB 326317 WinSCP adjustment to help download the file.
- After I completed the install section I chose option 6 and reset all the certificates.
- Next I rebooted the VCSA appliance.
- After the reboot I was now able to access the VCSA vSphere Client and Management GUI.
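If you’d like to eyeball the expiry dates yourself before running vCert, KB 344201 walks through checks along these lines — a sketch run over SSH on the VCSA, not the full KB procedure:

```bash
# Print the Machine SSL certificate from VECS and check its validity window.
/usr/lib/vmware-vmafd/bin/vecs-cli entry list --store MACHINE_SSL_CERT --text | grep -E 'Not Before|Not After'
```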
4) VCSA Management GUI Updates
- I accessed the VCSA Management GUI and validated/updated its NTP settings.

- Next I mounted the most recent VCSA ISO and updated the appliance to 8.0.3.24853646
5) Updating ESXi
- At this point only my AD and VCSA servers have been resumed. My ESXi hosts are still suspended.
- To start the update from 8.0U2b to 8.0U3e, I chose to resume and then immediately shut down all 3 ESXi hosts. This step may seem a bit harsh, but no matter how I tried to be graceful about resuming these VMs, I ran into issues.
- While shut down I mounted VMware-VMvisor-Installer-8.0U3e-24677879.x86_64.ISO and booted/upgraded each ESXi host.
6) License keys in VCSA
Now that everything is powered on, I was able to log in to the vSphere Client. The first thing I noticed was that the VMware license keys (VCSA, vSAN, ESXi) were all expired.
I updated the license keys in this order:
- First – Update the VCSA License Key
- Second – Update the vSAN License Key
- Third – Update the ESXi Host License Key
7) Restarting vSAN
- When I shut down or suspend my Workstation home lab I always shut down my workload VMs and do a proper shutdown of vSAN.
- After I confirmed all my hosts were licensed and connected properly, I simply went into the cluster > Configure > vSAN Services and restarted vSAN.
8) Backup VMs
Now that my environment is working properly, it’s time to do a proper shutdown, remove all snapshots, and then take a backup of my Workstation VMs.
With Workstation, a simple Windows file copy from source to target is all that is needed. In my case I have a large HDD where I store my backups. In Windows I simply right-click on the Workstation VMs folder and choose Copy, then go to the target location, right-click, and choose Paste (a robocopy equivalent is sketched below).
TIP: I keep track of my backups and notes with a simple notepad. This way I don’t forget their state.
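If you want something a bit more robust than a drag-and-drop copy, robocopy does the same job with retries and a log. Paths here are examples from my setup:

```powershell
# Mirror the Workstation VM folder to the backup drive (restartable, logged).
robocopy C:\VMs\Gen8 E:\Backups\Gen8 /MIR /Z /R:2 /W:5 /LOG:E:\Backups\gen8-backup.log
```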
And that’s it: after being down for over a year, my Workstation home lab Gen 8 is now fully functional and backed up. I’ll continue to use it for vSphere 8 testing as I build out a new VCF 9 environment.
Thanks for reading and please feel free to ask any questions or comments.
VCF9: What’s new in Licensing
VCF9 offers so many fantastic enhancements. There were many standout items which are getting their fair share of publicity. However, I wasn’t seeing many posts around the changes to licensing. There are several new and impactful requirements for licensing which deserve some attention. This post is a culmination of data and documentation I found on the Broadcom website and is publicly available; I just repurposed and organized it a bit.
Quick Summary –
- You now manage your licenses through VCF Operations across your entire fleet and can manage licenses for multiple VCF Operations instances from the VCF Business Services console (vcf.broadcom.com), a part of the Broadcom Support Portal.
- To license a VCF9 deployment, customers must deploy VCF Operations and a vCenter server. Then, in the VCF Business Services console, attach their license key to their site ID and register the VCF Operations instance. Next, deploy a secure license file to VCF Operations. Lastly, VCF Operations deploys keys to the vCenter server to be attached to hosts.
Quick Walk Through:
- Your VCF9 Subscription is tied to your site ID.
- In this example we have 300 Cores of VCF.
- Your VCF Operations is registered in the VCF Business Services console and tied to this site ID.
- In the VCF Business Services console, you allocate cores and create a Secure license file.
- This Secure License file is deployed to VCF Operations.
- In this example 256 cores were allocated to a Secure license file.
- Via VCF Operations, the Secure License file is attached to a vCenter Server instance
- vCenter Server allocates cores to hosts
- In this example, you can see where Hosts 1 & 2 received 128 cores each, but there were not enough cores for the 3rd cluster.
- 180 days (6 months) later, VCF Operations automatically reaches out to the VCF Business Services console and reports in.
What is the VCF Business Services console?
- VCF Business Services console provides the ability to manage licenses, VMware Cloud Foundation Usage Meter appliances, user roles, and resource access.
- More information here
Licensing Types:
- There are two types of licenses
- Primary licenses, such as VMware Cloud Foundation and VMware vSphere Foundation licenses.
- Add-on licenses, such as vSAN add-on capacity or VMware Private AI Foundation with NVIDIA licenses.
- NOTE: You no longer license individual components such as NSX, HCX, VCF Automation, and so on. Instead, for VCF and vSphere Foundation, you have a single license capacity provided for that product.
Licensing Modes:
- Connected Mode:
- Most customers will have a “connected” mode, or what some call a phone-home mode.
- License usage reports are required at least once every 180 days to maintain your licenses and you must update your license to confirm that the license usage report was submitted.
- This data is sent to the VCF Business Services console automatically, and licenses can be updated with a button click.
- Disconnected Mode:
- If VCF Operations is registered in disconnected mode, to report license usage, you generate a usage file and upload it in the VCF Business Services console. For detailed instructions for both connected and disconnected registration modes, see Updating Licenses.
- Critical Infrastructure Mode:
- This mode is reserved for critical infrastructure. Think military or federal use.
- This is a very uncommon mode and isn’t intended for customer consumption.
Other Notes:
- Manage licenses and assign them to vCenter instances from VCF Operations. All hosts and components connected to a vCenter instance with an assigned license are automatically licensed from vCenter assignments.
- VCF Operations can be connected to the VCF Business Services console for faster licensing, updates, and automated reporting. VCF Operations can also operate in disconnected mode.
- Fewer licenses to manage.
- Now, instead of 11 license keys, there are only two licenses for VCF – “VMware Cloud Foundation (cores)” and “VMware vSAN (TiBs)”. vSphere Foundation follows this same pattern.
- Multiple subscriptions pool together into a single license that can optionally be split later.
- All licenses can be applied into your environment by importing a single license file. For connected VCF Operations instances, the first license file will download automatically after you complete the registration.
- License your vCenter, ESX hosts, NSX, VCF Operations, HCX, VCF Automation, and other components by assigning the license to the vCenter instance.
- License usage must be submitted from VCF Operations every 180 days, or hosts will disconnect from the vCenter instance and new workloads cannot be started (existing workloads will not be proactively stopped). If VCF Operations is in connected mode, license usage submission is automatic but still must be confirmed in VCF Operations by clicking Update Licenses. For VCF Operations in disconnected mode, follow the steps in the documentation to submit license usage.
- Hosts are automatically reconnected to the respective vCenter instance with full capabilities when a valid license is applied and/or license usage is submitted and license refreshed.
- Dynamic license quantity adjustment means that license changes made in the VCF Business Services console do not require reassignment.
- Visualize a unified view of your usage over time for your fleet in VCF Operations and across multiple VCF Operations instances in the VCF Business Services console.
- Evaluation Mode has been extended to 90 days.
- The license usage file only records the following license usage data points: the usage generation timestamp, utilization details for both post-version 9 and pre-version 9 licenses, the unique VCF Operations instance ID, a unique identifier for the usage report, a list of post-version 9 licenses added to VCF Operations but currently unused, any detected usage anomalies, and the active status. The license usage file exclusively gathers this specific information and, for clarity, does not collect personal or customer data.