VMware Workstation Gen 9: BOM2 P4 Workstation/Win11 Performance enhancements
There can be a multitude of factors that could impact performance of your Workstation VMs. Running a VCF 9 stack on VMware Workstation demands every ounce of performance your Windows 11 host can provide. To ensure a smooth lab experience, certain optimizations are essential. In this post, I’ll walk through the key adjustments to maximize efficiency and responsiveness.
Note: There are a LOT of settings I changed to improve performance. I take a structured approach, trying things slowly rather than applying everything at once. The items listed below are what worked for my system, and they are recommended for that use case only. Unless otherwise stated, the VMs and Workstation were powered down during these adjustments.
Host BIOS/UEFI Settings
- There are several settings to ensure stable performance with a Supermicro X11DPH-T.
- Here is what I modified on my system.
- Enter Setup, confirm/adjust the following, and save the changes:
- Advanced > CPU Configuration
- Hyper-Threading > Enabled
- Cores Enabled > 0 (0 = all cores enabled)
- Hardware Prefetcher > Enabled
- Advanced Power Management Configuration
- Power Technology > Custom
- Power Performance Tuning > BIOS Controls EPB
- Energy Performance BIAS Setting > Maximum Performance
- CPU C State Control > All Disabled
- Advanced > Chipset Configuration > North Bridge > Memory Configuration
- Memory Frequency > 2933
Hardware Design
- In the VMware Workstation Gen 9: BOM1 and BOM2 blogs we covered hardware design as it related to the intended load of nested VMs.
- Topics we covered were:
- Fast Storage: NVMe, SSD, and U.2 all contribute to VM performance
- Placement of VM files: We placed and isolated our ESX VMs on specific disks which helps to ensure better performance
- PCIe Placement: Using the System Block diagram I placed the devices in their optimal locations
- Ample RAM: Include more than enough RAM to support the VCF 9 VMs
- CPU cores: Design enough CPU cores to support the VCF 9 VMs
- Video Card: Using a power-efficient GPU can help boost VM performance
VM Design
- Disk Choices: Matched the VM disk type to the physical drive type it runs on. Example – a VM's NVMe virtual disk placed on a physical NVMe drive
- CPU Settings: Match the VM CPU topology to the physical CPU socket layout. Example – a VM needs 8 cores and the physical host has 2 CPU sockets with 24 cores per socket; set up the VM with 2 CPUs and 4 cores each
- vHardware Choices: When creating a VM, Workstation should auto-populate hardware settings. Best vNIC to use is the vmxnet3. You can use the Guest OS Guide to validate which virtual hardware devices are compatible.
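For reference, these choices end up in the VM's .vmx file. A minimal sketch of the vNIC setting (the ethernet0 device number is an assumption for a single-NIC VM):

```
ethernet0.present = "TRUE"
ethernet0.virtualDev = "vmxnet3"
```

Workstation normally sets this for you when the correct guest OS is selected, so this is mainly useful for checking an existing VM.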
Fresh Installs
- There’s nothing like a fresh install of the base OS as a reliable foundation for performance improvements.
- When Workstation is installed it adapts to the base OS, and there can be performance gains from this adaptation.
- However, if you upgrade the OS (Win10 to Win11) with Workstation already installed, you should fully uninstall and reinstall Workstation after the upgrade for optimal performance.
- Additionally, when installing Workstation I ensure that Hyper-V is disabled as it can impact Workstation performance.
Exclude Virtual Machine Directories From Antivirus Tools
NOTE — AV exceptions exclude certain files, folders, and processes from being scanned. By adding these you can improve Workstation performance but there are security risks in enabling AV Exceptions. Users should do what’s best for their environment. Below is how I set up my environment.
- Script: Use a script to create AV Exceptions. For an example check out my blog – Using PowerShell to setup AV exceptions for Workstation 25H2u1 and Windows 11.
- Manual Steps: Manually setup the following exceptions for Windows 11.
- Open Virus and Threat Protection
- Virus & threat protection settings > Manage Settings
- Under ‘Exclusions’ choose ‘Add or remove exclusions’
- Click on ‘+ Add an exclusion’
- Choose your type (File, Folder, File Type, Process)
- File Type: Exclude these specific VMware file types from being scanned:
- .vmdk: Virtual machine disk files (the largest and most I/O intensive).
- .vmem: Virtual machine paging/memory files.
- .vmsn: Virtual machine snapshot files.
- .vmsd: Metadata for snapshots.
- .vmss: Suspended state files.
- .lck: Disk consistency lock files.
- .nvram: Virtual BIOS/firmware settings.
- Folder: Exclude the following directories to prevent your antivirus from interfering with VM operations:
- VM Storage Folders: Exclude the main directory where you store your virtual machines.
- Installation Folder: Exclude the VMware Workstation installation path (default: C:\Program Files (x86)\VMware\VMware Workstation\).
- VMware Tools: If you have the VMware Tools installation files extracted locally, exclude that folder as well.
- Process: Adding these executable processes to your antivirus exclusion list can prevent lag caused by the AV monitoring VMware’s internal actions:
- vmware.exe: The main Workstation interface.
- vmware-vmx.exe: The core process that actually runs each virtual machine.
- vmnat.exe: Handles virtual networking (NAT).
- vmnetdhcp.exe: Handles DHCP for virtual networks.
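If you use Microsoft Defender, the exclusions above can also be added with the built-in Add-MpPreference cmdlets from an elevated PowerShell prompt. This is a sketch for my setup; D:\VMs is a placeholder for your actual VM storage folder:

```powershell
# Run from an elevated PowerShell session (Microsoft Defender only)

# File-type exclusions for VMware's I/O-heavy files
".vmdk", ".vmem", ".vmsn", ".vmsd", ".vmss", ".lck", ".nvram" |
    ForEach-Object { Add-MpPreference -ExclusionExtension $_ }

# Folder exclusions: install path and VM storage (placeholder path)
Add-MpPreference -ExclusionPath "C:\Program Files (x86)\VMware\VMware Workstation"
Add-MpPreference -ExclusionPath "D:\VMs"   # <-- your VM storage folder

# Process exclusions
"vmware.exe", "vmware-vmx.exe", "vmnat.exe", "vmnetdhcp.exe" |
    ForEach-Object { Add-MpPreference -ExclusionProcess $_ }
```

You can verify the result with Get-MpPreference. Third-party AV products have their own interfaces for the same exclusions.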
Power Plan
Typically, Windows 11 has the “Balanced” power plan enabled by default. Though these settings are good for normal use cases, using your system as a dedicated VMware Workstation host calls for a better plan.
Below I show two ways to adjust the power plan: 1) use a script to create a custom plan, or 2) make similar adjustments manually.
- 1) Script: I created a script that creates a custom power plan named “VMware Workstation Performance Plan” and makes all the needed changes for my system. You can find my blog here.

- 2) Manual Adjustments:
- Open the power plan. Control Panel > Hardware and Sound > Power Options > Change settings that are currently unavailable
- You might see on every page “Change settings that are currently unavailable”, just click on it before making changes.
- Set Power Plan:
- Click on ‘Hide Additional Plans’.
- Choose either “Ultimate Performance” or “High Performance” plan and then click on “Change plan settings”
- Hard Disk > Turn off hard disk after > 0 minutes (never)
- Wireless Adapter Settings > Max Performance
- USB > Hub Selective Suspend Time out > 0
- PCI Express > Link State Power Management > off
- Processor power management > Both to 100%
- Display > Turn off Display > Never
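A sketch of the equivalent powercfg commands (the GUID is the built-in Ultimate Performance scheme; verify the resulting values with powercfg /query afterward):

```powershell
# Clone the hidden Ultimate Performance plan; powercfg prints the new GUID,
# which you then activate with: powercfg /setactive <new GUID>
powercfg -duplicatescheme e9a42b02-d5df-448d-aa00-03f14749eb61

# On the active scheme, never turn off disks, the display, or sleep (AC power)
powercfg /change disk-timeout-ac 0
powercfg /change monitor-timeout-ac 0
powercfg /change standby-timeout-ac 0
```

The less common settings (USB selective suspend, PCI Express link state) are easier to confirm in the Control Panel UI as described above.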
Power Throttling
Power throttling in Windows 11 is an intelligent, user-aware feature that automatically limits CPU resources for background tasks to conserve energy and extend battery life. By identifying non-essential, background-running applications, it reduces power consumption without slowing down active, foreground apps.
To determine if it is active, go into System > Power and look for Power Mode.
If you are using a high performance power plan usually this feature is disabled.

If you are running a power plan where this is enabled, and you don’t want to disable it, then you can maximize your performance by disabling power throttling for the Workstation executable.
powercfg /powerthrottling disable /path "C:\Program Files (x86)\VMware\VMware Workstation\x64\vmware-vmx.exe"
Sleep States
Depending on your hardware you may or may not have different sleep states enabled. Ultimately, for my deployment I don’t want any enabled.
To check which sleep states are available, open a command prompt, type ‘powercfg /a’, and adjust as needed.
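For example, a few powercfg commands I find useful here (hibernate and standby are the usual suspects):

```powershell
powercfg /a                            # list sleep states available on this system
powercfg /hibernate off                # disable hibernation (removes hiberfil.sys)
powercfg /change standby-timeout-ac 0  # never sleep while on AC power
```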

Memory Page files
In my design I don’t plan to overcommit physical RAM (640 GB) for my nested VMs. To maximize performance and ensure VMware Workstation uses physical memory exclusively, I follow these steps: configure global memory preferences, disable memory trimming for each VM, force RAM-only operation, and adjust the Windows page file.
- 1) Configure Global Memory Preferences: This setting tells VMware how to prioritize physical RAM for all virtual machines running on the host.
- Open Workstation > Edit > Preferences > Memory
- In the Additional memory section, select the radio button for “Fit all virtual machine memory into reserved host RAM”.

- 2) Disable Memory Trimming for each VM: Windows and VMware use “trimming” to reclaim unused VM memory for the host. Since RAM will not be overallocated, I disable this to prevent VMs from ever swapping to disk.
- Right-click your VM and select Settings
- Go to the Options tab and select the Advanced category.
- Check the box for “Disable memory page trimming”.
- Click OK and restart the VM
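If you prefer editing the .vmx file directly (with the VM powered off), my understanding is the checkbox maps to the memory trim rate setting:

```
MemTrimRate = "0"
```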

- 3) Force RAM-Only Operation (config.ini): This is an advanced step that prevents VMware from creating .vmem swap files, forcing it to use physical RAM or the Windows page file instead.
- Close VMware Workstation completely.
- Navigate to C:\ProgramData\VMware\VMware Workstation\ in File Explorer (Note: ProgramData is a hidden folder).
- Open the file named config.ini with Notepad (you may need to run Notepad as Administrator).
- Add the following lines to the end of the file:
- mainMem.useNamedFile = "FALSE"
- prefvmx.minVmMemPct = "100"
- Save the file and restart your computer
- 4) Windows Page Files: With 640GB of RAM Windows 11 makes a huge memory page file. Though I don’t need one this large I still need one for crash dumps, core functionality, and memory management. According to Microsoft, for a high-memory workstation or server, a fixed page file of 16GB to 32GB is the “sweet spot.” I’m going a bit larger.
- Go to System > About > Advanced system Settings
- System Properties window appears, under Performance choose ‘Settings’
- Performance Options appears > Advanced > under Virtual memory choose ‘change’
- Uncheck ‘Automatically manage paging…’
- Choose Custom size, Initial size (MIN) 64000 MB and Maximum size (MAX) 84000 MB
- Click ‘Set’ > OK
- Restart the computer
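The same page file change can be scripted. This is a hedged sketch using the CIM classes (run elevated; sizes in MB to match the values above; it assumes a page file instance already exists on C: once automatic management is off):

```powershell
# Turn off "Automatically manage paging file size for all drives"
$cs = Get-CimInstance -ClassName Win32_ComputerSystem
$cs | Set-CimInstance -Property @{ AutomaticManagedPagefile = $false }

# Set a fixed page file on C: (if no instance exists yet, create one
# with New-CimInstance -ClassName Win32_PageFileSetting first)
Get-CimInstance -ClassName Win32_PageFileSetting |
    Where-Object Name -like "C:*" |
    Set-CimInstance -Property @{ InitialSize = 64000; MaximumSize = 84000 }

# Reboot for the change to take effect
```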

Windows Visual Effects Performance
The visual effects in Windows 11 can be very helpful, but they can also slightly slow down your performance. I prefer to create a custom profile and only enable ‘Smooth edges of screen fonts’.
- Go to System > About > Advanced system Settings
- System Properties window appears,
- On the Advanced Tab, under Performance choose ‘Settings’
- On the Visual Effects tab choose ‘Custom’ and select only ‘Smooth edges of screen fonts’

Disable BitLocker
Windows 11 (especially version 24H2 and later) may automatically re-enable encryption during a fresh install or major update. By default to install Windows 11 it requires TPM 1.2 or higher chip (TPM 2.0 recommended/standard for Win11), and UEFI firmware with Secure Boot enabled. BitLocker uses these features to “do its work”.
But, there are a couple of ways to disable BitLocker.
- Create a Custom ISO
- My deployment doesn’t have a TPM module, nor is Secure Boot enabled. To bypass these requirements I used Rufus to make the Windows 11 USB install disk. This means BitLocker cannot be enabled.
- Registry Edit (Post-Installation – may already be set):
- Press Win + R, type regedit, and press Enter
- Navigate to: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\BitLocker
- Right-click in the right pane, select New > DWORD (32-bit) Value
- Name it PreventDeviceEncryption and set its value to 1
- Disable the Service:
- Press Win + R, type services.msc, and press Enter.
- Find BitLocker Drive Encryption Service.
- Right-click it, select Properties, set the “Startup type” to Disabled, and click Apply.
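Both steps can also be done from an elevated prompt; the PreventDeviceEncryption value and the BDESVC service name match the steps above:

```powershell
# Registry value from the steps above (run elevated)
reg add "HKLM\SYSTEM\CurrentControlSet\Control\BitLocker" `
    /v PreventDeviceEncryption /t REG_DWORD /d 1 /f

# Disable the BitLocker Drive Encryption Service
sc.exe config BDESVC start= disabled
```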
Disable Side-Channel Mitigations
Disabling these can boost performance, especially on older processors, but may reduce security.
- Open the Windows Security app by searching for it in the Start menu.
- Select Device security from the left panel.
- Click on the Core isolation details link.
- Toggle the switch for Memory integrity to Off.
- Select Yes when the User Account Control (UAC) prompt appears.
- Restart your computer for the changes to take effect
Note: if your host is running Hyper-V virtualization, you may need to check the “Disable side channel mitigations for Hyper-V enabled hosts” option in the advanced options for each Workstation VM.
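My understanding is the Memory integrity toggle maps to the HVCI registry value below (reboot required); prefer the Windows Security UI where possible, as this path is an assumption worth verifying against current Microsoft documentation:

```powershell
# Equivalent registry toggle for Memory integrity (HVCI); run elevated, then reboot
reg add "HKLM\SYSTEM\CurrentControlSet\Control\DeviceGuard\Scenarios\HypervisorEnforcedCodeIntegrity" `
    /v Enabled /t REG_DWORD /d 0 /f
```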

Clean out unused Devices:
Windows leaves behind all types of unused devices that are hidden from view in Device Manager. Though these are usually pretty harmless, it’s a best practice to clean them up from time to time.
The quickest way to do this is with a tool called Device Cleanup Tool. Check out my video for more on how to use this tool.
Here is Device Cleanup Tool running on my newly (<2 months ago) installed system. As you can see, unused devices can build up even over a short time frame.

Debloat, Clean up, and so much more
There are several standard Windows features, software packages, and cleanup tasks that can impact the performance of my deployment. I prefer to run tools that help optimize Windows because they complete these tasks quickly. The tool I use to debloat and clean up my system is WinUtil. It’s a proven utility for optimizing systems, installing software and updates, and helping to maintain them. For more information about WinUtil check out their most recent update.
For ‘Tweaking’ new installs I do the following:
- Launch the WinUtil program
- Click on Tweaks
- Choose Standard
- Unselect ‘Run Disk Cleanup’
- Click on Run Tweaks
Additionally, you may have noticed WinUtil can create an Ultimate Performance power plan. That may come in handy.
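For reference, WinUtil is typically launched from an elevated PowerShell prompt with its one-line command (verify against the project’s current documentation before running):

```powershell
irm "https://christitus.com/win" | iex
```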

Remove Windows Programs:
Here is a list of all the Windows programs I remove; they are simply not needed for a Workstation deployment. Some of these can be removed using WinUtil.
- Cortana
- Copilot
- Camera
- Game Bar
- Teams
- News
- Mail and Calendar
- Maps
- Microsoft OneDrive
- Microsoft To Do
- Movies and TV
- People
- Phone Link
- Solitaire
- Sticky Notes
- Tips
- Weather
- Xbox / Xbox Live
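Many of these can also be removed per-user with Remove-AppxPackage. The wildcard package names below are assumptions, so verify them first with Get-AppxPackage:

```powershell
# Remove a few of the apps listed above for the current user.
# Package name patterns are guesses - confirm with:
#   Get-AppxPackage | Select-Object Name
"*Cortana*", "*Copilot*", "*WindowsCamera*", "*XboxGameOverlay*",
"*MicrosoftSolitaireCollection*", "*MicrosoftStickyNotes*", "*BingWeather*" |
    ForEach-Object {
        Get-AppxPackage -Name $_ |
            Remove-AppxPackage -ErrorAction SilentlyContinue
    }
```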
VMware Workstation Gen 9: Part 9 Shutting down and starting up the environment
Deploying the VCF 9 environment on to Workstation was a great learning process. However, I use my server for other purposes and rarely run it 24/7. After its initial deployment, my first task is shutting down the environment, backing it up, and then starting it up. In this blog post I’ll document how I accomplish this.
NOTE:
- Licensing should be completed for a VCF 9 environment before performing the steps below. If not, the last step, vSAN Shutdown, will cause an error. There is a simple workaround.
- I do fully complete each step before moving to the next. Some steps can take some time to complete.
How to shut down my VCF environment
My main reference for VCF 9 shutdown procedures is the VCF 9 documentation on techdocs.broadcom.com (see REF URLs below). The section on “Shutdown and Startup of VMware Cloud Foundation” is well detailed, and I have placed the main URL in the reference URLs below. For my environment I need to focus on shutting down my Management Domain, as it also houses my Workload VMs.
Here is the order in which I shutdown my environment. This may change over time as I add other components.
Note – it is advised to complete each step fully before proceeding to the next step.
| Shutdown Order | SDDC Component |
|---|---|
| In vCenter, shut down all non-essential guest VMs | |
| 1 – Not needed, not deployed yet | VCF Automation |
| 2 – Not needed, not deployed yet | VCF Operations for Networks |
| 3 – From VCSA234, locate the VCF Operations collector appliance (opscollectorappliance). – Right-click the appliance and select Power > Shut down Guest OS. – In the confirmation dialog box, click Yes. – Wait for it to fully power off | VCF Operations collector |
| 4 – Not needed, not deployed yet | VCF Operations for logs |
| 5 – Not needed, not deployed yet | VCF Identity Broker |
| 6 – From vcsa234, in the VMs and Templates inventory, locate the VCF Operations fleet management appliance (fleetmgmtappliance.nested.local) – Right-click the VCF Operations fleet management appliance and select Power > Shut down Guest OS. – In the confirmation dialog box, click Yes. – Wait for it to fully power off | VCF Operations fleet management |
| 7 – You shut down VCF Operations by first taking the cluster offline and then shutting down the appliances of the VCF Operations cluster. – Log in to the VCF Operations administration UI at the https://vcfcops.nested.local/admin URL as the admin local user. – Take the VCF Operations cluster offline. On the System status page, click Take cluster offline. – In the Take cluster offline dialog box, provide the reason for the shutdown and click OK. – Wait for the Cluster status to read Offline. This operation might take about an hour to complete. (With no data mine took <10 mins) – Log in to vCenter for the management domain at https://vcsa234.nested.local/ui as a user with the Administrator role. – There could be other options for shutting down this appliance. Using Broadcom KB 341964 as a reference, I determined my next step is to simply right-click the vcfcops appliance and select Power > Shut down Guest OS. – In the VMs and Templates inventory, locate a VCF Operations appliance. – Right-click the appliance and select Power > Shut down Guest OS. – In the confirmation dialog box, click Yes. – This operation takes several minutes to complete. – Wait for it to fully power off | VCF Operations |
| 8 – Not Needed, not deployed yet | VMware Live Site Recovery for the management domain |
| 9 – Not Needed, not deployed yet | NSX Edge nodes |
| 10 – I continue shutting down the NSX infrastructure in the management domain and a workload domain by shutting down the one-node NSX Manager by using the vSphere Client. – Log in to vCenter for the management domain at https://vcsa234.nested.local/ui as a user with the Administrator role. – Identify the vCenter instance that runs NSX Manager. – In the VMs and Templates inventory, locate the NSX Manager (nsxmgr.nested.local) appliance. – Right-click the NSX Manager appliance and select Power > Shut down Guest OS. – In the confirmation dialog box, click Yes. – This operation takes several minutes to complete. – Wait for it to fully power off | NSX Manager |
| 11 – Shut down the SDDC Manager appliance in the management domain by using the vSphere Client. – Log in to vCenter for the management domain at https://vcsa234.nested.local/ui as a user with the Administrator role. – In the VMs and templates inventory, expand the management domain vCenter Server tree and expand the management domain data center. – Right-click the SDDC Manager appliance (SDDCMGR108.nested.local) and click Power > Shut down Guest OS. – In the confirmation dialog box, click Yes. – This operation takes several minutes to complete. – Wait for it to fully power off | SDDC Manager |
| 12 – You use the vSAN shutdown cluster wizard in the vSphere Client to gracefully shut down the vSAN clusters in a management domain. The wizard shuts down the vSAN storage and the ESX hosts added to the cluster. – Identify the cluster that hosts the management vCenter for this management domain. – This cluster must be shut down last. – Log in to vCenter for the management domain at https://vcsa234.nested.local/ui as a user with the Administrator role. – For a vSAN cluster, verify the vSAN health and resynchronization status. – In the Hosts and Clusters inventory, select the cluster and click the Monitor tab. – In the left pane, navigate to vSAN Skyline health and verify the status of each vSAN health check category. – In the left pane, under vSAN Resyncing objects, verify that all synchronization tasks are complete. – Shut down the vSAN cluster. – In the inventory, right-click the vSAN cluster and select vSAN > Shutdown cluster. – In the Shutdown Cluster wizard, verify that all pre-checks are green and click Next. – Review the vCenter Server notice and click Next. – Enter a reason for performing the shutdown, and click Shutdown. – Briefly monitor the progress of the vSAN shutdown in vCenter. Eventually, VCSA will be shut down and connectivity to it will be lost. I then monitor the shutdown of my ESX hosts in Workstation. – The shutdown operation is complete after all ESX hosts are stopped. | Shut Down vSAN and the ESX Hosts in the Management Domain. If vSAN fails to shut down due to a license issue, then under the vSAN Cluster > Configure > Services, choose ‘Resume Shutdown’ (Fig-3) |
| Next the ESX hosts will power off, and then I can do a graceful shutdown of my Windows server AD230. In Workstation, simply right-click on this VM > Power > Shut Down Guest. Once all Workstation VMs are powered off, I can run a backup or exit Workstation and power off my server. | Power off AD230 |
(Fig-3)
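The first pre-step in the shutdown table (shutting down all non-essential guest VMs) can also be scripted with PowerCLI. A sketch assuming the VMware.PowerCLI module is installed; the server and VM names are examples from my lab, so adjust them to yours:

```powershell
# Connect to the management vCenter (prompts for credentials)
Connect-VIServer -Server vcsa234.nested.local

# Graceful guest OS shutdown of non-essential VMs (requires VMware Tools)
Get-VM -Name "testvm*" | Stop-VMGuest -Confirm:$false
```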

Backing up my VCF Environment
With my environment fully shut down, now I can start the backup process. See my blog Backing up Workstation VMs with PowerShell for more details.
How to restart my VCF environment
| Startup Order | SDDC Component |
|---|---|
| PRE-STEP: – Power on my Workstation server and start Workstation. – In Workstation power on my AD230 VM and verify all the core services (AD, DNS, NTP, and RAS) are working okay. Start up the VCF Cluster: 1 – One at a time power on each ESX Host. – vCenter is started automatically. Wait until vCenter is running and the vSphere Client is available again. – Log in to vCenter at https://vcsa234.nested.local/ui as a user with the Administrator role. – Restart the vSAN cluster. In the Hosts and Clusters inventory, right-click the vSAN cluster and select vSAN > Restart cluster. – In the Restart Cluster dialog box, click Restart. – Choose the vSAN cluster > Configure > vSAN > Services to see the vSAN Services page. This will display information about the restart process. – After the cluster has been restarted, check the vSAN health service and resynchronization status, and resolve any outstanding issues. Select the cluster and click the Monitor tab. – In the left pane, under vSAN > Resyncing objects, verify that all synchronization tasks are complete. – In the left pane, navigate to vSAN Skyline health and verify the status of each vSAN health check category. | Start vSAN and the ESX Hosts in the Management Domain |
| 2 – From vcsa234 locate the sddcmgr108 appliance. – In the VMs and templates inventory, Right Click on the SDDC Manager appliance > Power > Power On. – Wait for this vm to boot. Check it by going to https://sddcmgr108.nested.local – As its getting ready you may see “VMware Cloud Foundation is initializing…” – Eventually you’ll be prompted by the SDDC Manager page. – Exit this page. | SDDC Manager |
| 3 – From the VCSA234 locate the nsxmgr VM then Right-click, select Power > Power on. – This operation takes several minutes to complete until the NSX Manager cluster becomes fully operational again and its user interface – accessible. – Log in to NSX Manager for the management domain at https://nsxmgr.nested.local as admin. – Verify the system status of NSX Manager cluster. – On the main navigation bar, click System. – In the left pane, navigate to Configuration > Appliances. – On the Appliances page, verify that the NSX Manager cluster has a Stable status and all NSX Manager nodes are available. Notes — Give it time. – You may see the Cluster status go from Unavailable > Degraded, ultimately you want it to show Available. – In the Node under Service Status you can click on the # next to Degraded. This will pop up the Appliance details and will show you which item are degraded. – If you click on Alarms, you can see which alarms might need addressed | NSX Manager |
| 4 – Not Needed, not deployed yet | NSX Edge |
| 5 – Not Needed, not deployed yet | VMware Live Site Recovery |
| 6 – From vcsa234, locate the vcfops.nested.local appliance. – Following the order described in Broadcom KB 341964, for my environment I simply right-click on the appliance and select Power > Power On. – Log in to the VCF Operations administration UI at the https://vcfops.nested.local/admin URL as the admin local user. – You may see ‘Retrieving Cluster Status’; give it time. Mine took <2 mins. – On the System status page, under Cluster Status, click Bring Cluster Online. Notes — Give it time. – This operation might take about an hour to complete. – Mine took <15 mins to come Online. – Cluster Status may read ‘Going Online’. – To the right of the node name, the other columns continue to update, eventually showing ‘Running’ and ‘Online’. – Cluster Status will eventually go to ‘Online’ | VCF Operations |
| 7 – From vcsa234 locate the VCF Operations fleet management appliance (fleetmgmtappliance.nested.local) Right-click the VCF Operations fleet management appliance and select Power > Power On. – In the confirmation dialog box, click Yes. – Allow it to boot Note – Direct access to VCF Ops Fleet Management appliance is disabled. Go to VCF Operations > Fleet Mgmt > Lifecycle > VCF Management for appliance management. | VCF Operations fleet management |
| 8 – Not Needed, not deployed yet | VCF Identity Broker |
| 9 – Not Needed, not deployed yet | VCF Operations for logs |
| 10 – From vcsa234, locate a VCF Operations collector appliance. (opscollectorappliance) Right-click the VCF Operations collector appliance and select Power > Power On. In the configuration dialog box, click Yes. | VCF Operations collector |
| 11 – Not Needed, not deployed yet | VCF Operations for Networks |
| 12 – Not Needed, not deployed yet | VCF Automation |
REF:
VMware Workstation Gen 9: Part 7 Deploying VCF 9.0.1
Now that I have set up a VCF 9 offline depot and downloaded the installation media, it’s time to move on to installing VCF 9 on my Workstation environment. In this blog I’ll document the steps I took to complete this.
PRE-Steps
1) One of the more important steps is making sure I backup my environment and delete any VM snapshots. This way my environment is ready for deployment.
2) Make sure your Windows 11 PC power plan is set to High Performance and does not put the computer to sleep.
3) Next, since my hosts are brand new, their self-signed certificates need to be updated. See the following URLs:
- VCF Installer fails to add hosts during deployment due to hostname mismatch with subject alternative name
- Regenerate the Self-Signed Certificate on ESX Hosts
4) I didn’t set up all of the DNS names ahead of time; I prefer to do it as I’m going through the VCF Installer. However, I test all my current DNS settings, and test the newly entered ones as I go.
5) Review the Planning and Resource Workbook.
6) Ensure the NTP Service is running on each of your hosts.
7) The VCF Installer 9.0.1 has some extra features to allow non-vSAN certified disks to pass the validation section. However, nested hosts will fail the HCL checks. Simply add the line below to the /etc/vmware/vcf/domainmanager/application-prod.properties and then restart the SDDC Domain Manager services with the command: systemctl restart domainmanager
This allows me to acknowledge the errors and move the deployment forward.

Installing VCF 9 with the VCF Installer
I log into the VCF Installer.

I click on ‘Depot Settings and Binary Management’

I click on ‘Configure’ under Offline Depot.

I confirm the Offline Depot connection is active.

I choose ‘9.0.1.0’ next to Version, select all components except VMware Cloud Automation, then click on Download.

Allow the downloads to complete.

All selected components should state “Success” and the Download Summary for VCF should state “Partially Downloaded” when they are finished.

Click return home and choose VCF under Deployment Wizard.

This is my first deployment so I’ll choose ‘Deploy a new VCF Fleet’

The Deploy VCF Fleet Wizard starts and I’ll input all the information for my deployment.
For Existing Components I simply choose next as I don’t have any.

I filled in the following information about my environment, chose simple deployment, and clicked on Next.

I filled out the VCF Operations information and created their DNS records. Once complete I clicked on next.

I chose “I want to connect a VCF Automation instance later” and clicked Next.

Filled out the information for vCenter

Entered the details for NSX Manager.

Left the storage items as default.

Added in my 3 x ESX 9 Hosts, confirmed all fingerprints, and clicked on next.
Note: if you skipped the Pre-requisite for the self-signed host certificates, you may want to go back and update it before proceeding with this step.

Filled out the network information based on our VLAN plan.

For Distributed Switch, click on Select for Custom Switch Configuration, set MTU 9000 and 8 uplinks, choose all services, then scroll down.

Renamed each port group, chose the network adapters and their networks, updated NSX settings, then chose Next.






Entered the name of the new SDDC Manager and updated its name in DNS, then clicked on Next.

Reviewed the deployment information and chose next.
TIP – Download this information as a JSON spec; it can save you a lot of typing if you have to deploy again.

Allow it to validate the deployment information.

I reviewed the validation warnings, clicked “Acknowledge all Warnings” at the top, and clicked ‘DEPLOY’ to move to the next step.


Allow the deployment to complete.

Once completed, I download the JSON spec, review and document the passwords (Fig-1), and then log into VCF Operations (Fig-2).
(Fig-1)

(Fig-2)

Now that I have a VCF 9.0.1 deployment complete I can move on to Day N tasks. Thanks for reading and reach out if you have any questions.
VMware Workstation Gen 9: Part 6 VCF Offline Depot
To deploy VCF 9 the VCF Installer needs access to the VCF installation media or binaries. This is done by enabling Depot Options in the VCF Installer. For users to move to the next part, they will need to complete this step using resources available to them. In this blog article I’m going to supply some resources to help users perform these functions.
Why only supply resources? When it comes to downloading and accessing VCF 9 installation media, as a Broadcom/VMware employee I am not granted the same access as users. I have an internal process to access the installation media. These processes are not publicly available, nor would they be helpful to users. This is why I’m supplying information and resources to help users through this step.
What are the Depot choices in the VCF Installer?
Users have 2 options: 1) connect to an online depot, or 2) use an offline depot.

What are the requirements for the 2 Depot options?
1) Connect to an online depot — Users need to have an entitled support.broadcom.com account and a download token. Once their token is authenticated they are enabled to download.

2) Offline Depot – This option may be more common for users building out Home labs.
See these URLs for more information:
- Set Up an Offline Depot Web Server for VMware Cloud Foundation
- Set Up an Offline Depot Web Server for VMware Cloud Foundation << Use this method if you want to setup https on the Photon OS.
- How to deploy VVF/VCF 9.0 using VMUG Advantage & VCP-VCF Certification Entitlement
- Setting up a VCF 9.0 Offline Depot
I’ll be using the Offline Depot method to download my binaries and in the next part I’ll be deploying VCF 9.0.1.
VMware Workstation Gen 9: Part 5 Deploying the VCF Installer with VLANs
The VCF Installer (aka SDDC Manager Appliance) is the appliance that will allow me to deploy VCF onto my newly created ESX hosts. The VCF Installer can be deployed onto an ESX host or directly on Workstation. There are a couple of challenges with this deployment in my home lab, and in this blog post I’ll cover how I overcame them. It should be noted, the modifications below are strictly for my home lab use.
Challenge 1: VLAN Support
By default the VCF Installer doesn’t support VLANs. It’s a funny quandary, as VCF 9 requires VLANs. Most production environments will allow you to deploy the VCF Installer and be able to route to a vSphere environment. However, in my Workstation home lab I use LAN Segments, which are local to Workstation. To overcome this issue I’ll need to add VLAN support to the VCF Installer.
Challenge 2: Size Requirements
The installer takes up a massive 400+ GB of disk space, 16GB of RAM, and 4 vCPUs. The current configuration of my ESX hosts doesn't have a datastore large enough to deploy it to, plus vSAN is not set up yet. To overcome this issue I'll deploy it as a Workstation VM and attach it to the correct LAN Segment.
In the steps below I’ll show you how I added a VLAN to the VCF Installer, deployed it directly on Workstation, and ensured it’s communicating with my ESX Hosts.
Deploy the VCF Installer
Download the VCF Installer OVA and place the file in a location where Workstation can access it.
In Workstation click on File > Open. Choose the location of your OVA file and click open.
Check the Accept box > Next

Choose your location for the VCF Installer Appliance to be deployed. Additionally, you can change the name of the VM. Then click Next.

Fill in the passwords, hostname, and NTP Server. Do not click on Import at this time. Click on ‘Network Configuration’.

Enter the network configuration and click on import.

Allow the import to complete.

Allow the VM to boot.

Change the VCF Installer Network Adapter Settings to match the correct LAN Segment. In this case I chose 10 VLAN Management.

Set up a network adapter with VLAN support for the VCF Installer.
1) Log in as root and create the following file:

vi /etc/systemd/network/10-eth0.10.netdev
Press Insert, then add the following:
[NetDev]
Name=eth0.10
Kind=vlan
[VLAN]
Id=10
Press Escape, then type :wq! and press Enter to save.

2) Create the following file.
vi /etc/systemd/network/10-eth0.10.network
Press Insert and add the following:
[Match]
Name=eth0.10
[Network]
DHCP=no
Address=10.0.10.110/24
Gateway=10.0.10.230
DNS=10.0.10.230
Domain=nested.local
Press Escape, then type :wq! and press Enter to save.

3) Modify the original network file
vi /etc/systemd/network/10-eth0.network
Press Insert, remove the static IP address configuration, and change the file to the following:
[Match]
Name=eth0
[Network]
VLAN=eth0.10
Press Escape, then type :wq! and press Enter to save.

4) Update the permissions on the newly created files:
chmod 644 /etc/systemd/network/10-eth0.10.netdev
chmod 644 /etc/systemd/network/10-eth0.10.network
chmod 644 /etc/systemd/network/10-eth0.network
5) Restart the service or restart the VM:
systemctl restart systemd-networkd
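The file edits in steps 1–5 above can also be scripted in one pass. This is a sketch using my lab's values (VLAN 10, 10.0.10.110/24, gateway/DNS 10.0.10.230); NETDIR defaults to the systemd-networkd directory and can be pointed elsewhere for a dry run:

```shell
# Sketch: recreate the three systemd-networkd files from steps 1-3.
# Values (VLAN 10, 10.0.10.110/24, 10.0.10.230) are from my lab.
NETDIR="${NETDIR:-/etc/systemd/network}"

# Step 1: define the VLAN netdev
cat > "$NETDIR/10-eth0.10.netdev" <<'EOF'
[NetDev]
Name=eth0.10
Kind=vlan

[VLAN]
Id=10
EOF

# Step 2: static addressing for the VLAN interface
cat > "$NETDIR/10-eth0.10.network" <<'EOF'
[Match]
Name=eth0.10

[Network]
DHCP=no
Address=10.0.10.110/24
Gateway=10.0.10.230
DNS=10.0.10.230
Domain=nested.local
EOF

# Step 3: attach the VLAN to eth0 (replaces the static IP config)
cat > "$NETDIR/10-eth0.network" <<'EOF'
[Match]
Name=eth0

[Network]
VLAN=eth0.10
EOF

# Steps 4-5: fix permissions, then restart networking (uncomment on the appliance)
chmod 644 "$NETDIR"/10-eth0.10.netdev "$NETDIR"/10-eth0.10.network "$NETDIR"/10-eth0.network
# systemctl restart systemd-networkd
```

Adjust the VLAN ID and addresses for your own segments before running it.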
6) Check the network status of the newly created interface eth0.10:
networkctl status

7) Do a ping test from the VCF Installer appliance, and try an SSH session from another device on the same VLAN. In my case I pinged 10.0.10.230.
Note – The firewall needs to be adjusted to allow other devices to ping the VCF Installer appliance.

Next I do a ping to an internet location to confirm this appliance can route to the internet.
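To make these checks repeatable, a small sweep can ping each target and report pass/fail. The default targets here are assumptions from my lab (the AD230 gateway plus an example internet host); pass your own as arguments:

```shell
# Ping each target once and report the result.
# Default targets are from my lab; pass your own as arguments.
ping_sweep() {
    for target in "$@"; do
        if ping -c 1 -W 2 "$target" >/dev/null 2>&1; then
            echo "OK   $target"
        else
            echo "FAIL $target"
        fi
    done
}

ping_sweep 10.0.10.230 vmware.com
```

Remember that the appliance firewall note above applies here too: a FAIL may mean a firewall, not a routing problem.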

8) Allow SSH access to the VCF Installer Appliance
Follow this blog to allow SSH access.
From the Windows AD server or another device on the same network, PuTTY into the VCF Installer appliance.

Adjust the VCF Installer firewall to allow inbound traffic on the new adapter
Note – This might be a good time to take a snapshot of this VM.
1) From SSH, check the firewall rules for the VCF Installer with the following command:
iptables -L --verbose --line-numbers
From this output I can see that eth0 is set up to allow access to HTTPS, ping, and other services. However, there are no rules for the eth0.10 adapter. I'll need to adjust the firewall to allow this traffic.

Next I insert a new rule allowing all traffic through eth0.10 and check the rule list.
iptables -I INPUT 4 -i eth0.10 -j ACCEPT

The firewall rules are not persistent across reboots. To make the current rules survive a restart, I need to save the running rule set.
Save Config Commands

Restart the appliance and make sure you can now access the VCF Installer webpage. I do a ping test again just to be sure.

Now that the VCF Installer is installed and working on VLANs, I'm ready to deploy the VCF Offline Depot tool into my environment, and in my next blog post I'll do just that.
VMware Workstation Gen 9 Part 4 ESX Host Deployment and initial configuration
Now that I've created 3 ESX hosts from templates, it's time to install ESX. To do this I simply power on the hosts and follow the prompts. The only requirement at this point is that my Windows Server and core services be up and functional. In this blog we'll complete the installation of ESX.
Choose a host then click on “Power on this virtual machine”.

The host should boot to the ESX ISO I chose when I created my template.
Press Enter to continue

Press F11 to accept and continue

If the correct boot disk is selected, press Enter to continue.

Press Enter to accept the US default keyboard layout

Enter a root password and press Enter.

Press Enter at the warning about CPU support.

Press F11 to install

Allow ESX to install.

Disconnect the media and press Enter to reboot

Once rebooted, press F2 to customize the system and log in with the root password

Choose Configure Management Network > Network Adapters, validate that vmnic0 is selected, then press Escape

Choose VLAN (optional) > enter 10 for my VLAN > press Enter to exit

Choose IPv4 Configuration, enter the following for the VCF9111 host, and then press Enter.

Choose DNS Configuration and enter the following.

Press Escape to go to the main screen. Press Y to restart management. Arrow down to 'Enable ESXi Shell' and press Enter, then do the same for SSH. Both should now be enabled.

Press Escape and choose Configure Management Network. Next choose IPv6 Configuration, choose “Disable IPv6” and press enter.

Press Escape and the host will prompt you to reboot, press Y to reboot.

Test connectivity
From the AD server simply ping the VCF9111 host. This test ensures DNS is working properly and the LAN Segment is passing VLAN10.

From here I repeat this process for the other 2 hosts, only assigning them unique IPs.
Next up Deploying the VCF Installer with VLANs.
VMware Workstation Gen 9: Part 3 Windows Core Services and Routing
A big part of my nested VCF 9 environment relies on core services. Core services are AD, DNS, NTP, DHCP, and RAS. Core services are supplied by my Windows Server (aka AD230.nested.local). Of those services, RAS will enable routing between the LAN Segments and allow for internet access. Additionally, I have a VM named DomainTools. DomainTools is used for testing network connectivity, SSH, WinSCP, and other tools. In this blog I'll create both of these VMs and adapt them to work in my new VCF 9 environment.
Create the Window Server and establish core services
A few years back I published a Workstation 17 YouTube multipart series on how to create a nested vSphere 8 with vSAN ESA. Part of that series was creating a Windows Server with core services. For my VCF 9 environment I’ll need to create a new Windows server with the same core services. To create a similar Windows Server I used my past 2 videos: VMware Workstation 17 Nested Home Lab Part 4A and 4B.
Windows Server updates for the VCF 9 environment
Now that I have established AD230 I need to update it to match the VCF 9 networks. I’ll be adding additional vNICs, attaching them to networks, and then ensuring traffic can route via the RAS service. Additionally, I created a new Windows 11 VM named DomainTools. I’ll use DomainTools for network connectivity testing and other functions. Fig-1 shows the NIC to network layout that I will be following.
(Fig-1)

Adjustments to AD230 and DomainTools
I power off AD230 and DomainTools. On both I add the appropriate vNICs and align them to the LAN Segments. Next, I edit their VMware VM configuration files (.vmx), changing the vNICs from "e1000e" to "vmxnet3".
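If you'd rather not hand-edit each file, the .vmx change can be scripted with sed from a shell (Git Bash, WSL, or similar). This is a sketch; the VMX path is a placeholder for your own file, and the VM must be powered off:

```shell
# Switch every vNIC in a .vmx from e1000e to vmxnet3 (VM powered off).
# VMX is a placeholder path -- point it at your own .vmx file.
VMX="${VMX:-DomainTools.vmx}"
cp "$VMX" "$VMX.bak"   # keep a backup of the original config first
sed -i 's/^\(ethernet[0-9]*\.virtualDev *= *\)"e1000e"/\1"vmxnet3"/' "$VMX"
grep '\.virtualDev' "$VMX"   # confirm every adapter now shows vmxnet3
```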

Starting with DomainTools for each NIC, I power it on, input the IPv4 information (IP Address, Subnet, VLAN ID), and optionally disable IPv6. The only NIC to get a Default Gateway is NIC1. TIP – To ID the NICs, I disconnect the NIC in the VM settings and watch for it to show unplugged in Windows Networking. This way I know which NIC is assigned to which LAN Segment. Additionally, in Windows Networking I add a verbose name to the NIC to help ID it.

I make the same network adjustments to AD230 and I update its DNS service to only supply DNS from the 10.0.10.230 network adapter.

Once completed, I do a ping test between all the networks for AD230 and DomainTools to validate that IP connectivity works. TIP – Use ipconfig at the CLI to check your adapter IP settings. If ping is not working, a firewall may be enabled.
Setting up RAS on AD230
Once you have your network set up correctly, validate that RAS has accepted your new adapters and their information. On AD230 I go into RAS > IPv4 > General
I validate that my network adapters are present.
Looking ahead — RAS seemed to work right out of the box with no config needed. In all my testing below it worked fine, this may change as I advance my lab. If so, I’ll be sure to update my blog.

Next I need to validate routing between the different LAN Segments. To do this I’ll use the DomainTools VM to ensure routing is working correctly. You may notice in some of my testing results that VCF Appliances are present. I added this testing part after I had completed my VCF deployment.
I need to test all of the VLAN networks. On the DomainTools VM, I disable each network adapter except for the one I want to test. In this case I disabled every adapter except for 10-0-11-228 (VLAN 11 – VM NIC3). I then add the gateway IP of 10.0.11.1 (this is the IP address assigned to my AD230 RAS server).

Next I do an ipconfig to validate the IP address, and use Angry IP Scanner to locate devices on the 10.0.10.x network. Several devices responded and resolved their DNS names, proving that DomainTools is successfully routing from the 11 network into the 10 network. I'll repeat this process, plus an internet check, on all the remaining networks.

Now that we have a stable network and core Windows services established, we are ready to move on to ESX host deployment and initial configuration.
VMware Workstation Gen 9: Part 2 Using Workstation Templates
Workstation templates are a quick and easy way to create VMs with common settings. My nested VCF 9 ESX Hosts have some commonalities where they could benefit from template deployments. In this blog post I’ll show you how I use Workstation templates to quickly deploy these hosts and the hardware layout.
My nested ESX hosts have a lot of settings. Between RAM, CPU, disk, and networking there are tons of clicks per host, which is prone to mistakes. The LAN Segments, as an example, entail 8 clicks per network adapter; that's 192 clicks (8 clicks × 8 adapters × 3 hosts) to set up my 3 ESX hosts. Templates cover about 95% of the settings; the only caveat is the disk deployment. Each host has a unique disk layout, which I cover below.
There are 2 things I do before creating my VM templates: 1) set up my VM folder structure, and 2) set up LAN Segments.
VM folder Structure
The 3 x nested ESX hosts in my VCF 9 cluster will be using vSAN ESA. These nested ESX hosts will have 5 virtual NVMe disks (142GB boot, and 4 x 860GB for vSAN). These virtual NVMe disks will be placed onto 2 physical 2TB NVMe disks. At the physical Windows 11 layer I created folders for the 5 virtual NVMe disks of each host. On physical disk 1 I created BOOT, ESA DISK 1, and ESA DISK 2 folders. Then on physical disk 2 I created ESA DISK 3 and ESA DISK 4 folders. I have found this keeps my VMs' disks more organized and running efficiently. Later in this post we'll create and position these disks into these folders.
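As a quick sanity check on this layout, the nominal (fully grown) footprint of the virtual disks can be tallied from the figures above. Since these are single-file vmdks that typically grow on demand unless preallocated, the 2TB physical disks are not consumed immediately:

```shell
# Tally the nominal virtual disk footprint from the layout above:
# per host, 1 x 142GB boot disk plus 4 x 860GB vSAN ESA disks; 3 hosts.
hosts=3
boot_gb=142
esa_disks=4
esa_gb=860

per_host=$((boot_gb + esa_disks * esa_gb))   # 142 + 3440 = 3582
total=$((hosts * per_host))                  # 3 x 3582 = 10746
echo "per-host footprint: ${per_host} GB"
echo "total across ${hosts} hosts: ${total} GB (disks grow on demand)"
```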

Setup LAN Segments
Prior to creating a Workstation VM Template I need to create my LAN Segments. Workstation LAN Segments allow VLAN traffic to pass. VLANs are a requirement of VCF 9. Using any Workstation VM, choose a network adapter > LAN Segments > LAN Segments Button. The “Global LAN Segments” window appears, click on Add, name your LAN Segment, and OK when you are done.
For my use case I need to make 4 LAN Segments to support the network configuration for my VCF 9 deployment.

Pro-Tip: These are Global LAN Segments, which makes them universally available—once created, every VM can select and use them. Create these first before you create your ESX VM’s or Templates.
Create your ESX Workstation Template
To save time and create all my ESX hosts with similar settings I used a Workstation Template.
NOTE: The screenshot to the right shows the final configuration.
1) I created an ESX 9 VM in Workstation:
- Click on File > New Virtual Machine
- Chose Custom
- For Hardware I chose Workstation 25H2
- Chose my Installer disc (iso) for VCF 9
- Chose my directory and gave it a name of VCF9 ESX Template
- Chose 1 Processor with 24 Cores (Matches my underlying hardware)
- 117GB of RAM > Next
- Use NAT on the networking > Next
- Paravirtualized SCSI > Next
- NVMe for the Disk type > Next
- Create a new Virtual Disk > Next
- 142GB for Disk Size > Store as a Single File > Next
- Confirm the correct Directory > Next
- Click on the Customize Hardware button
- Add in 8 NICs > Close
- Make sure Power on this VM after creation is NOT checked > Finish
- Go back in to VM Settings and align your Network adapters to your LAN Segments
- NIC 0 and 2 > 10 VLAN Management
- NIC 3 and 4 > 11 VLAN ESA Network
- NIC 5 and 6 > 12 VLAN FT vMo RPL
- NIC 7 and 8 > 13 VLAN VM Network

Note: You might have noticed we didn’t add the vSAN disks in this deployment, we’ll create them manually below.
2) Next we’ll turn this VM into a Template
Go to VM Settings > Options > Advanced > Check Box “Use this virtual machine as a linked clone template” and click on ok.

Next, make a snapshot of the VM. Right click on the VM > choose Snapshot > Take Snapshot. In the description I put in "Initial hardware configuration."

Deploy the ESX Template
I'll need to create 3 ESX hosts based on the ESX template. I'll use my template to create these VMs, and then I'll add their unique hard drives.
Right click on the ESX Template > Manage > Clone

Click Next > Choose “The current state of the VM” > Choose “Create a full clone”
Input a name for the VM
MOST Important – Make sure you select the correct disk and folder you want the boot disk to be deployed to. In Fig-1 below, I'm deploying my second ESX host's boot disk, so I chose its BOOT folder.
Click on finish > The VM is created > click on close
(Fig-1)

Adding the vSAN Disks
Since we are using unique vSAN disk folders and locations we need to add our disks manually.
For each nested ESX host I right click on the VM > Settings
Click on Add > chose Hard disk > Next > NVMe > Create New Virtual Disk
Type in the size (860GB) > Store as a single file > Next
Rename the disk filename to reflect the nested vSAN ESA disk number
Choose the correct folder > Save
Repeat for the next 3 disks, placing each one in the correct folder
When I'm done I've created 4 x 860GB disks for each host, all as single files, and all in unique folders on their designated physical disks.
(Fig-2, below) I’m creating the first vSAN ESA disk named VCF9112-DISK1.vmdk
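If you'd rather script the disk creation than click through it four times per host, vmware-vdiskmanager (bundled with Workstation) can pre-create the vmdk files; note that attaching them to the NVMe controller still happens in VM Settings. The base path and host name below are placeholders for my layout (and for brevity one base path is shown, though in my layout disks 1–2 and 3–4 live on different physical drives):

```shell
# Pre-create the four 860GB vSAN ESA vmdks for one host.
# base/host are placeholders; -t 0 = growable single-file disk.
# Prints the commands instead of running them when the tool is absent.
make_esa_disks() {
    base="$1"; host="$2"
    for d in 1 2 3 4; do
        target="$base/ESA DISK $d/$host-DISK$d.vmdk"
        if command -v vmware-vdiskmanager >/dev/null 2>&1; then
            vmware-vdiskmanager -c -s 860GB -a lsilogic -t 0 "$target"
        else
            echo "would run: vmware-vdiskmanager -c -s 860GB -a lsilogic -t 0 '$target'"
        fi
    done
}

make_esa_disks "F:/Virtual Machines/VCF9112" "VCF9112"
```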

That’s it!
Workstation Templates save me a bunch of time when creating these 3 ESX Hosts. Next we’ll cover Windows Core Services and Routing.
VMware Workstation Gen 9: Part 1 Goals, Requirements, and a bit of planning
It’s time to build my VMware Workstation–based home lab with VCF 9. In a recent blog post, I documented my upgrade journey from VMware Workstation 17 to 25H2. In this installment, we’ll go deeper into the goals, requirements, and overall planning for this new environment. As you read through this series, you may notice that I refer to VCF 9.0.1 simply as VCF 9 or VCF for brevity.
Important Notes:
- VMware Workstation Gen 9 series is still a work in progress. Some aspects of the design and deployment may change as the lab evolves, so readers should consider this a living build. I recommend waiting until the series is complete before attempting to replicate the environment in your own lab.
- There are some parts of this series where I am unable to assist users. In lieu, I provide resources and advice to help users through those phases: the VCF Offline Depot and licensing your environment. As Broadcom/VMware employees, we are not granted the same access as users. I have an internal process to access resources, and those processes would not be helpful to users.
Overall Goals
- Build a nested minimal VCF 9.0.1 environment based on VMware Workstation 25H2 running on Windows 11 Pro.
- Both Workload and Management Domains will run on the same set of nested ESX Hosts.
- Using the VCF Installer I’ll initially deploy the VCF 9 Management Domain Components as a Simple Model.
- Initial components include: VCSA, VCF Operations, VCF Collector, NSX Manager, Fleet Manager, and SDDC Manager all running on the 3 x Nested ESX Hosts.
- Workstation Nested VMs are:
- 3 x ESX 9.0.1 Hosts
- 1 x VCF Installer
- 1 x VCF Offline Depot Appliance
- 1 x Windows 2022 Server (Core Services)
- Core Services supplied via Windows Server: AD, DNS, NTP, RAS, and DHCP.
- Networking: Private to Workstation, support VLANs, and support MTU of 9000. Routing and internet access supplied by the Windows Server VM.
- Should be able to run minimal workload VM’s on nested ESX Hosts.
Hardware BOM
If you are interested in the hardware I'm running to create this environment, please see my Bill of Materials (BOM) page.
Additionally, check out the FAQ page for more information.
Deployment Items
To deploy the VCF Simple model I’ll need to make sure I have my ESX 9.0.1 Hosts configured properly. With a simple deployment we’ll deploy the 7 required appliances running on the Nested ESX hosts. Additionally, directly on Workstation we’ll be running the AD server, VCF Offline Depot tool, and the VCF Installer appliance.

Using the chart below I can get an idea of how many cores and how much RAM and disk will be needed. The item that stands out to me is the component with the highest core count: in this case it's VCF Automation at 24 cores. This is important, as I'll need to make sure my nested ESX servers match or exceed 24 cores. If not, VCF Automation will not be able to deploy. Additionally, I'll need to make sure I have enough RAM and disk, plus space for workload VMs.

Workstation Items
My overall plan is to build out a Windows Server, 3 x ESX 9 hosts, VCF Installer, and the VCF Depot Appliance. Each one of these will be deployed directly onto Workstation. Once the VCF Installer is deployed it will take care of deploying and setting up the necessary VMs.
NOTE: In the network layout below, hosts that are blue in color are running directly on Workstation, and those in purple will be running on the nested ESX hosts.
Network Layout

One of the main network requirements for VCF is supporting VLAN networks. My Gen8 Workstation deployment did not use VLAN networks. Workstation can pass tagged VLAN packets via LAN Segments. The configuration of LAN Segments is done in each VM's Workstation settings, not via the Virtual Network Editor. We'll cover this creation soon.
In the next part of this series I’ll show how I used Workstation Templates to create my VMs and align them to the underlying hardware.
Resources:
Backing up Workstation VMs with PowerShell
It's pretty common for me to back up my Workstation VMs, and I'm always looking for a quick way to accomplish this. I've been using SyncBack Free for many years, but I've recently outgrown it. In this blog I'll show you the script I wrote to back up my VMs to a target location.
My Workstation server has many data disks with many folders for my VMs. I back up my VMs to a large hard disk, and then regularly I'll offload these backups to a NAS for archive purposes. This keeps the VMs local for quick restores, and the NAS provides further protection.

My PowerShell 7 script is rather simple.
- Defines my sources
- Prompts for a target folder
- Asks if you want to simulate a backup
- Runs Robocopy to copy (or simulate copying) the files while appending to a log file
- Appends a date stamp to the folder and file names
It’s a pretty simple process but it works quite well.
Write-Output "`n**** Workstation VM Backups for VCF 9 vSAN ESA 3 Node *****`n"
# Define Sources
$source1 = "d:\Virtual Machines\VCF 9 vSAN ESA 3 Node"
$source2 = "f:\Virtual Machines\VCF 9 vSAN ESA 3 Node"
$source3 = "g:\Virtual Machines\VCF 9 vSAN ESA 3 Node"
$source4 = "h:\Virtual Machines\VCF 9 vSAN ESA 3 Node"
$source5 = "i:\Virtual Machines\VCF 9 vSAN ESA 3 Node"
$source6 = "j:\Virtual Machines\VCF 9 vSAN ESA 3 Node"
$source7 = "k:\Virtual Machines\VCF 9 vSAN ESA 3 Node"
$source8 = "l:\Virtual Machines\VCF 9 vSAN ESA 3 Node"
$source9 = "D:\Virtual Machines\Domain Services\DomainToolsVM - 12 05 2025"
# Function user selected destination folder
function Select-FolderDialog {
param([string]$Description="Select an EMPTY folder",
[string]$RootFolder="MyComputer")
# Load the necessary assembly
Add-Type -AssemblyName System.Windows.Forms
# Create an instance of the FolderBrowserDialog object
$objForm = New-Object System.Windows.Forms.FolderBrowserDialog
$objForm.RootFolder = $RootFolder
$objForm.Description = $Description
# Show the dialog box
$Show = $objForm.ShowDialog()
# Check if the user clicked 'OK' and return the selected path
if ($Show -eq "OK") {
# Dispose of the dialog object before returning the selected path
$SelectedPath = $objForm.SelectedPath
$objForm.Dispose()
return $SelectedPath
} else {
# Dispose of the dialog object before exiting
$objForm.Dispose()
Write-Error "****Operation cancelled by user****"
pause
exit 1
}
}
Write-Output "`n***** Choose Destination Folder *****"
# Prompt user for destination folder
$selectedFolderPath = Select-FolderDialog -Description "Please choose the destination folder"
if ($selectedFolderPath) {
Write-Host "You selected: $selectedFolderPath"
# You can now use $selectedFolderPath in the rest of your script
}
Write-output "`n****Choose Robo options****"
# Robocopy options
# /E Copies subdirectories. This option automatically includes empty directories.
# /TEE Writes the status output to the console window, and to the log file.
# /ZB Uses restartable mode; if access is denied, falls back to backup mode
# /R:# Number of retries
# /W:# Wait time between retries
# /J Unbuffered IO for faster large file backups
# /L Simulates the backup (lists files without copying)
# https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/robocopy
#To simulate backup or not
$question = "Do you want to run a simulated backup? (Y/N)"
do {
$response = Read-Host -Prompt $question
# Use ToLower() for case-insensitive comparison
$response = $response.ToLower()
} until ($response -eq 'y' -or $response -eq 'n')
if ($response -eq 'y') {
Write-Host "Continuing... with Simulated Robocopy backup`n"
$robocopyoptions = @("/E", "/TEE", "/ZB", "/R:2", "/W:10", "/J", "/L")
} else {
Write-Host "Continuing.... with Robocopy backup`n"
$robocopyoptions = @("/E", "/TEE", "/ZB", "/R:2", "/W:10", "/J")
}
Write-Output "`n****Robocopy START****"
#Define Log location
$logfile = $selectedFolderPath + "\WorkstationBackupLog.txt"
# Start Robocopy and append to log file
robocopy $source1 $selectedFolderPath $robocopyoptions /LOG+:$logfile
robocopy $source2 $selectedFolderPath $robocopyoptions /LOG+:$logfile
robocopy $source3 $selectedFolderPath $robocopyoptions /LOG+:$logfile
robocopy $source4 $selectedFolderPath $robocopyoptions /LOG+:$logfile
robocopy $source5 $selectedFolderPath $robocopyoptions /LOG+:$logfile
robocopy $source6 $selectedFolderPath $robocopyoptions /LOG+:$logfile
robocopy $source7 $selectedFolderPath $robocopyoptions /LOG+:$logfile
robocopy $source8 $selectedFolderPath $robocopyoptions /LOG+:$logfile
robocopy $source9 $selectedFolderPath $robocopyoptions /LOG+:$logfile
Write-Output "****Robocopy FINISH****"
Write-Output "`n****Rename Files START****"
#Rename Folders/file with date stamp
$DateStamp = Get-Date -Format "_yyyy-MM-dd"
Get-ChildItem -Path $selectedFolderPath -Directory | ForEach-Object {
# Construct the new name: original name + date stamp
$NewName = $_.Name + $DateStamp
# Rename the item (folder)
Rename-Item -Path $_.FullName -NewName $NewName
}
Get-ChildItem -Path $selectedFolderPath -File | Rename-Item -NewName {
$_.BaseName + $DateStamp + $_.Extension
}
Write-Output "****Rename Files FINISH****"
# Exit
Write-Output "`n`n****Script finished. Press Enter to exit.****"
pause
