Why I didn’t choose a Noctua replacement fan


We’ve all been there: we’ve picked out a new router, switch, or other device for our home lab, and the fans are LOUD. The first thing we do is replace those fans with something a little quieter. We hit up our favorite online store, maybe read some reviews, and choose a fan that fits. Sometimes that fan is an expensive Noctua because its promise of quiet is so alluring. After the swap the device is a bit quieter, but now the fan error lights are on or the fan malfunctions. Clearly it’s the wrong fan for our device.

In this blog I’ll go over some of the items you should look for when buying a replacement fan for your devices; they can help you find a better fit without breaking your wallet. Fair warning: the stock fans in these engineered devices were designed to be optimal for said device. Altering them in any way can be harmful to the device, and working on electronics without proper training is never advised.

First, identify the stock fan in your device and find its datasheet. You may need to remove the fan from your device to read its label. I recently replaced some fans in my Mellanox IS5022 InfiniBand switch. The stock fan was made by Delta, model EPB0142VHD, subtype -R00; it has 3 wires and is a 12 V DC brushless fan that draws 0.18 amps. I call out the subtype because it is very important when identifying your stock fan. In this case, if I just search for the model I’ll get the wrong fan information; in fact, EPB0142VHD with no subtype has only 2 wires.

Second, I review the stock fan specification datasheet. I already know the Voltage and Amp rating but here are the things I also need:

  • Fan size – 40mm x 40mm x 20mm
  • Hole mount size – 32 mm between mount points
  • Hole diameter – 3.5 mm
  • Length of wires – 330 mm
  • The 3 wires and their purpose – 12v, Ground, and Locked Rotor
  • dB noise rating – 32–36 dBA
  • RPM – 9000
  • CFM – 10

Not sure if you caught it, but identifying the 3 wires on the stock fan is critical if you want to avoid those error lights. Most 3-wire fans will have 12 V DC and ground. It’s that 3rd wire that makes them unique, and it’s one of the more important items you must pin down to select the correct replacement fan.

The 3 most common types of 3 wire fans are:

  • Step RPM speed – think of this like gears on a bike: the fan steps from one RPM to another. Most have between 3 and 5 speed steps.
  • PWM – Pulse Width Modulation allows granular speed control. Instead of instantly stepping to the next speed, the fan is gradually sped up and down.
  • Locked rotor (sometimes called alert) – a fan-spin error signal. Normally, the fan spins at one speed. 40 mm locked-rotor fans seem to be the most common in routers, switches, and similar devices.
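To make the step-vs-PWM distinction concrete, here is a small illustrative model. All thresholds and RPM values are made up for the example, not taken from any datasheet.

```python
# Illustrative model of two speed-control styles (numbers are invented).

def step_rpm(temp_c):
    """Step-speed fan: jumps between a few fixed RPMs, like gears on a bike."""
    steps = [(40, 3000), (55, 5000), (70, 7000)]  # (threshold C, RPM)
    for threshold, rpm in steps:
        if temp_c < threshold:
            return rpm
    return 9000  # full speed above the last threshold

def pwm_rpm(duty_pct, min_rpm=1500, max_rpm=9000):
    """PWM fan: duty cycle maps (roughly linearly) onto a continuous RPM range."""
    duty_pct = max(0, min(100, duty_pct))
    return min_rpm + (max_rpm - min_rpm) * duty_pct / 100

print(step_rpm(50))   # 5000 - snaps to the nearest step
print(pwm_rpm(50))    # 5250.0 - anywhere in the range
```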

Another item is the length of the wires. The datasheet shows 330 mm (±10 mm); however, the fan you order could be shorter. It’s best to measure the stock fan and make sure the replacement you ordered has enough length, or room to stash the wires if they are too long.

Third, now that I understand my stock fan, I’m ready to choose a replacement that meets my goal of reducing fan noise. In most cases, fan noise is reduced by lowering the RPM. There are also fans specifically designed to reduce noise, but they can be expensive. I looked thoroughly at 40 mm Noctua fans, but none of them matched the voltage and locked-rotor requirements. However, I still see a lot of folks buying Noctua 40 mm fans and then complaining about fan error lights or malfunctions. Most just ignore the errors or alter the fan wires to send a false signal to the device; I recommend neither.

In this case I chose the Sunon MagLev KDE1204PKV3 MS.AR.GN, a 40x40x20 mm 3-pin low-speed fan (5200 RPM, 6.3 CFM, locked-rotor alarm signal). It costs about $6.50 US, compared to $14 US for a non-compliant Noctua.

How do the stock and replacement fans compare:

| Item (recommendation) | Delta EPB0142VHD-R00 | Sunon KDE1204PKV3 MS.AR.GN |
|---|---|---|
| DC volts (match) | 12 | 12 |
| Amps (do not exceed stock) | 0.18 | 0.03 |
| Fan size (match) | 40mm x 40mm x 20mm | 40mm x 40mm x 20mm |
| Hole mount size (match) | 32 mm | 32 mm |
| Hole diameter (close match) | 3.5 mm | 4 mm |
| Length of wires (match) | 330 mm | 300 mm |
| 3-wire purpose (match) | 12v, Ground, Locked Rotor | 12v, Ground, Locked Rotor |
| dB noise rating (reduce) | 32–36 dBA | 18 dBA |
| RPM (close match) | 9000 | 5200 |
| CFM (close match) | 10 | 6.3 |
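The numbers above can be sanity-checked against the fan affinity laws: airflow scales roughly linearly with RPM, and noise changes by roughly 50 · log10(rpm_new / rpm_old) dB. These are approximations for geometrically similar fans, so treat the results as ballpark figures only; the Sunon’s MagLev bearing and blade design also change the picture.

```python
import math

# Ballpark check of the replacement using the fan affinity laws.
stock_rpm, stock_cfm = 9000, 10.0   # Delta EPB0142VHD-R00
new_rpm = 5200                      # Sunon KDE1204PKV3

predicted_cfm = stock_cfm * new_rpm / stock_rpm          # airflow ~ RPM
predicted_db_change = 50 * math.log10(new_rpm / stock_rpm)  # noise ~ 50*log10

print(f"Predicted CFM: {predicted_cfm:.1f} (datasheet says 6.3)")
print(f"Predicted noise change: {predicted_db_change:.1f} dB")
```

The predicted airflow (~5.8 CFM) lands close to the 6.3 CFM datasheet value, while the measured noise drop is larger than the affinity law alone predicts, which is what you would hope for from a fan designed to be quiet.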

Fourth, prepare the fan for installation. One item I didn’t mention is the fan edge connector. Most datasheets don’t include information on the edge connector, as device manufacturers may customize it. At this fan size, the edge connectors seem to be a standard size with some variants.

Some fans will need their wire order changed to match the circuitry on the device. Aligning these pins is critical; if they are wrong you could damage your device. For example, if your replacement fan came with Pin 1 12v (red), Pin 2 ground (black), and Pin 3 motor lock (yellow, sometimes white or blue), you might need to reorder them to match your device. Simply use a wire-pin removal tool, apply light downward pressure, and push the pin out. Then reorder the pins to match your device and you are good to go.

Next, the replacement fan’s mounting holes might be a factor. Some replacement fans come with screws or bolts you may be able to use. If not, you may be able to use the stock hardware or hardware you provide. Either way, depending on the hole size, you may have to work this out a bit. In my case, the stock fan screws worked perfectly. Tip – don’t over-tighten or force screws in; it may damage your fan.

If your stock fan had a protective sleeve over the wires, you may want to reuse it, as some devices have sharp metal edges that can cut into your wires; fan vibration makes this more likely. As an alternative, consider adding heat shrink when you re-pin the fan.

Lastly, how did my selection perform? The Sunon is a very close replacement for the Delta. Its reduced RPM and CFM drop the rated noise from 32–36 dBA to 18 dBA. Since I chose a replacement fan that is not an exact match, I’ll need to monitor the device and ensure its temps stay within normal thresholds.

Very unscientifically, I used a dB meter app on my smartphone to measure the Delta and Sunon fans. The noise reduction was notable, and best of all, no fan error lights.

In summary, there is no doubt that Noctua makes a quality fan, but they can be expensive and sometimes do not meet the requirements of your stock fans. If you can find one that does, it may be worth the extra spend. With just a bit of research, though, you are sure to land on a replacement fan that meets your goals and doesn’t break your wallet. My goal was to reduce fan noise in my home lab, and by doing my homework I hit a home run with the first fan I chose.

Thanks for reading and do feel free to leave a comment or suggestion.

10Gbe NAS Home Lab: Part 8 Interconnecting MikroTik Switches


It’s been a long wait for Part 8, but I was able to release it today! If you are interested in how to performance-test the network side of your storage environment, this session might help. The purpose of this session is to show how to interconnect two MikroTik switches and ensure their performance is optimal compared to a single switch. The two NAS devices in this session have different physical capabilities, and this is by no means a comparison of their performance; the results are merely data points. Users should work with their vendor of choice to ensure the best performance and optimization.

10Gb Switch Options for VMware Home Lab


With so many 10Gbe switch options out there for VMware home labs, I thought I would take some time to create a list of some of the more common choices.

Where did I get this data?

William Lam started the VMware Community Homelab project a few years ago. It allows home lab users to enter information about their labs, and to date the community has entered over 125 different VMware home labs. When a user registers, they provide a URL that leads to their home lab bill of materials (BOM) or a description of their lab. It’s a great resource when you want to see what others are doing, and it was my primary data source for the results below.

On to the Results!

Over this past weekend, I took some time to review all the VMware Community Homelab project links and specifically documented everyone who noted their 10Gb switch. I found 25 users who listed a 10Gbe switch. As I went to each link, I documented the switch, its 10Gb port count, who made it, the model, a current price, and a helpful link.

Here are the TOP 3 most popular and a curious switch:

#1 – With a user count of 7, the Ubiquiti UniFi US-16-XG was the most used switch by a single model. I also noticed many of Ubiquiti’s other products in users’ home labs.

#2 – MikroTik, with a user count of 8 across 4 different models. Their products are known to be very cost-effective for 10Gbe, so it’s no wonder they are in the top 3.

#3 – Our surprise result, with a user count of 4 across 2 models, is Netgear. Then again, it’s no surprise: Netgear has been making great home lab products for decades, and they seem to be fairly popular in the 10Gbe arena.

Lastly, a curious switch I noted was the Brocade Communications BR-VDX6720-24-R VDX 6720. With 24 ports of SFP+ 10Gbe, it’s got me curious why you can find these on eBay for ~$150. This is one switch I’ll have to look into.

This table contains the total results and extra information:

| Count | 10Gb Ports | Ports | Manufacturer | Product | USD Cost (05/2022) | Notes |
|---|---|---|---|---|---|---|
| 7 | 16 | 12 x 10G SFP+, 4 x 10Gbe RJ45 | Ubiquiti | UniFi US-16-XG 10G | $600–800 | |
| 4 | 8 | 8 x 10Gb SFP+, 1 x 1Gbe RJ45 | MikroTik | CRS309-1G-8S+IN | $269 | |
| 2 | 8 | 8 x 10Gbe RJ45, 2 x RJ45/SFP+ combo | Netgear | Prosafe XS708T | $850 | |
| 2 | 16 | 16 x 10Gb SFP+, 1 x 1Gbe RJ45 | MikroTik | CRS317-1G-16S+RM | $400 | |
| 2 | 8 | 8 x 10Gbe RJ45, 1 x 10Gb RJ45/SFP+ combo | Netgear | XS708E | EOL | |
| 1 | 12 | 8 x 10Gbe RJ45, 4 x combo (TP and SFP+), 1 x 10/100 RJ45 | MikroTik | CRS312-4C+8XG-RM | $625 | |
| 1 | 8 | 8 x 10Gbe RJ45 | Buffalo | BS-XP20 | EOL | |
| 1 | 24 | 24 x 10Gb SFP+ | Lenovo | RackSwitch G8124E | EOL | |
| 1 | 24 | 24 x 10Gb SFP+ | Brocade Communications | BR-VDX6720-24-R VDX 6720 | EOL, $150–400 | Hard to find information on this switch |
| 1 | See note | 48 x 1Gbe RJ45, 4 x 40Gb QSFP+ | Cisco | N3K-C3064PQ-10GX Nexus 3064 | $1,200 | The 4 x 40Gb QSFP+ can be split into multiple 10Gb SFP+ |
| 1 | 4 | 4 x 10Gb SFP+ | MikroTik | CRS305-1G-4S+IN | $140 | |
| 1 | 4 | 4 x 10Gb SFP+, 24 x 1Gbe RJ45 | Cisco | 3750-24P w/ C3KX-NM-10G 3K-X network module | | |
| 1 | 8 | 4 x 10Gb SFP+, 4 x 10Gb RJ45/SFP+ combo | Qnap | QSW-804-4C | $500 | |
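One quick way to compare the table is cost per 10Gb port. The sketch below uses the prices above (low end of any range; EOL models without current pricing are skipped); the dictionary is just a transcription of the table, not live data.

```python
# Cost per 10Gb port from the table above: {name: (10Gb ports, USD, 05/2022)}.
switches = {
    "Ubiquiti UniFi US-16-XG": (16, 600),
    "MikroTik CRS309-1G-8S+IN": (8, 269),
    "Netgear Prosafe XS708T": (8, 850),
    "MikroTik CRS317-1G-16S+RM": (16, 400),
    "MikroTik CRS312-4C+8XG-RM": (12, 625),
    "MikroTik CRS305-1G-4S+IN": (4, 140),
    "Qnap QSW-804-4C": (8, 500),
}

# Sort cheapest-per-port first.
for name, (ports, usd) in sorted(switches.items(), key=lambda kv: kv[1][1] / kv[1][0]):
    print(f"{name}: ${usd / ports:.2f} per 10Gb port")
```

By this (admittedly crude) metric, the MikroTik CRS317-1G-16S+RM comes out cheapest at $25 per 10Gb port, which lines up with MikroTik’s reputation for cost-effective 10Gbe.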

Update: Here are a few switches that folks mentioned to me in the comments but that were not part of the VMware Community Homelab listing:

It was a bit of a surprise that the following switch vendors were not mentioned by users: Linksys, Aruba (now HPE), Juniper, and Extreme Networks.

For a really good list of Network Switch and Router vendors check out this wiki page.

Lastly, it should be noted there is another way for home lab users to enter their BOMs. Recently, a VMware Fling known as Solution Designer began allowing home lab users to enter their data. Here is a quick description of the new service:

The Solution Designer Fling provides a platform to manage custom VMware solutions. Building a custom VMware solution involves many challenging tasks. One of the most difficult is continuous manual verifications: checking the interoperability of multiple VMware products and performing compatible hardware validations. Solution Designer seeks to resolve these issues by automating repetitive manual steps and collecting scattered resources in a single platform.

Note: The only downside to this Fling is that you can only see your own data, not others’.

To sum it up, I’m sure this table is less than 100% accurate when it comes to VMware home labs. In viewing the listings on the VMware Community Homelab project, I found many dead user links and incomplete BOMs. The list above is more about how many folks are using which switch than about the specifics of each switch; those are something you may want to review at a deeper level. Still, it’s a good start, and the table above should come in handy if you are looking to compare some common 10Gbe switches for your home lab.

Thanks for reading and if I missed your switch, please do comment below and I’ll be glad to add it!

10Gbe NAS Home Lab Part 7: Network testing with iperf3 on containers


In Part 7 I go over how I used iperf3 to test between my different NAS devices and Windows PCs. Each NAS device is running Docker and has an Ubuntu container with iperf3 installed. If you want more information on how I set up the container, check out my other post here.
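When running many container-to-container tests, it helps that iperf3 can emit machine-readable results with its `-J` flag. A small parser like the sketch below makes it easy to log throughput from a batch of runs; the sample string is a trimmed-down stand-in for real `iperf3 -c <host> -J` output, which nests receiver throughput under `end.sum_received.bits_per_second` for TCP tests.

```python
import json

def gbps_from_iperf3_json(raw: str) -> float:
    """Return receiver-side throughput in Gbit/s from `iperf3 -c <host> -J` output."""
    result = json.loads(raw)
    return result["end"]["sum_received"]["bits_per_second"] / 1e9

# Trimmed-down sample of the JSON structure iperf3 -J prints for a TCP test:
sample = '{"end": {"sum_received": {"bits_per_second": 9412345678.0}}}'
print(f"{gbps_from_iperf3_json(sample):.2f} Gbit/s")  # 9.41 Gbit/s
```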



Set NSX-T 3.1.1 Password Expiration Policy for Home Labs


If you run a home lab like I do, then sometimes VMs are powered off until you need them. In my case I use NSX, but not as frequently as I’d like, so it’s not uncommon for my NSX Manager passwords to expire. VMware NSX-T has a preconfigured password expiration policy of 90 days. When the expiration day is near, a notification is displayed in the web interface; or, in my case, the passwords had already expired so I couldn’t log in at all. There are 3 preconfigured local users: admin, audit, and root. All passwords have to be changed after 90 days. In this blog I’m going to cover how I set the policy to not expire.

First off, a bit of warning: I wouldn’t recommend this for a production environment, and I’d follow your best practices around password policies.


There are 3 x ESXi 7u2d hosts with 3 NSX-T 3.1.1 Manager nodes in my environment. The NSX-T Manager nodes have a virtual IP (VIP) that allows me to access the NSX web GUI. No Edge nodes are installed.


  • Ensure your vSphere hosts are in a healthy state and all NSX Manager VMs are powered on
  • Ensure you can log on to the NSX-T environment as admin and root on all Manager nodes; update passwords if needed
  • If your admin password has already expired, log on via SSH to the NSX VIP as admin and update the password. If you log on as root, the NSX CLI commands will not be enabled.


The following commands remove the password expiration policy. If you have multiple Manager appliances, the commands only need to be executed on one node.

  • Connect directly to an NSX-T Manager or the VIP address with SSH
  • Log in as admin (this is key to enabling the NSX CLI command set)
  • Enter clear user [username] password-expiration
    • NSXMGR220> clear user admin password-expiration
      NSXMGR220> clear user root password-expiration
      NSXMGR220> clear user audit password-expiration 
  • Validate the password expiration with get user [username] password-expiration
    • NSXMGR220> get user admin password-expiration
      Sat Nov 06 2021 UTC 18:52:00.552
      Password expiration not configured for this user
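If you find yourself doing this across rebuilt labs, a tiny helper can generate the full command set for all three local users so nothing gets missed. This is a hypothetical convenience script, not an NSX tool; the user list and command syntax come straight from the steps above, and you would still paste the output into an admin SSH session (or feed it to an SSH library).

```python
# Hypothetical helper: build the NSX CLI password-expiration commands
# for all three preconfigured local users (admin, root, audit).

LOCAL_USERS = ("admin", "root", "audit")

def expiration_commands(action: str = "clear") -> list:
    """action is 'clear' (disable expiry) or 'get' (check the current setting)."""
    if action not in ("clear", "get"):
        raise ValueError("action must be 'clear' or 'get'")
    return [f"{action} user {user} password-expiration" for user in LOCAL_USERS]

for cmd in expiration_commands("clear"):
    print(cmd)  # e.g. clear user admin password-expiration
```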

GA Release VMware NSX-T Data Center 3.1 | Announcement, information, and links


VMware Announced the GA Releases of VMware NSX-T Data Center 3.1

See the base table for all the technical enablement links including VMworld 2020 sessions and new Hands On Labs.

Release Overview
VMware NSX-T Data Center 3.1.0   |  Build 17107167

What’s New
NSX-T Data Center 3.1 includes a large list of new features to offer new functionalities for virtualized networking and security for private, public, and multi-clouds. Highlights include new features and enhancements in the following focus areas:

  • Cloud-scale Networking: Federation enhancements, Enhanced Multicast capabilities.
  • Move to Next Gen SDN: Simplified migration from NSX-V to NSX-T
  • Intrinsic Security: Distributed IPS, FQDN-based Enhancements
  • Lifecycle and monitoring: NSX-T support with vSphere Lifecycle Manager (vLCM), simplified installation, enhanced monitoring, search and filtering.
  • Federation is now considered production ready.

 In addition to these enhancements, the following capabilities and improvements have been added.

  • Federation

Support for standby Global Manager Cluster

Global Manager can now have an active cluster and a standby cluster in another location. Latency between active and standby cluster must be a maximum of 150ms round-trip time.

With the support of Federation upgrade and Standby GM, Federation is now considered production ready.

  • L2 Networking

Change the display name for TCP/IP stack: The netstack keys remain “vxlan” and “hyperbus” but the display name in the UI is now “nsx-overlay” and “nsx-hyperbus”.

The display name will change in both the list of Netstacks and list of VMKNICs

This change will be visible with vCenter 6.7

Improvements in L2 Bridge Monitoring and Troubleshooting

Consistent terminology across documentation, UI and CLI

Addition of new CLI commands to get summary and detailed information on L2 Bridge profiles and stats

Log messages to identify the bridge profile, the reason for the state change, as well as the logical switch(es) impacted

Support TEPs in different subnets to fully leverage different physical uplinks

A Transport Node can have multiple host switches attaching to several Overlay Transport Zones. However, the TEPs for all those host switches need to have an IP address in the same subnet. This restriction has been lifted to allow you to pin different host switches to different physical uplinks that belong to different L2 domains.

Improvements in IP Discovery and NS Groups: IP Discovery profiles can now be applied to NS Groups simplifying usage for Firewall Admins.

  • L3 Networking

Policy API enhancements

Ability to configure BFD peers on gateways and forwarding up timer per VRF through policy API.

Ability to retrieve the proxy ARP entries of gateway through policy API.

  • Multicast

NSX-T 3.1 is a major release for Multicast, which extends its feature set and confirms its status as enterprise ready for deployment.

Support for Multicast Replication on Tier-1 gateway. Allows you to turn on multicast for a Tier-1 with a Tier-1 Service Router (mandatory requirement) and have multicast receivers and sources attached to it.

Support for IGMPv2 on all downlinks and uplinks from Tier-1

Support for PIM-SM on all uplinks (config max supported) between each Tier-0 and all TORs  (protection against TOR failure)

Ability to run Multicast in A/S and Unicast ECMP in A/A from Tier-1 → Tier-0 → TOR 

Please note that Unicast ECMP will not be supported from ESXi host → T1 when it is attached to a T1 which also has Multicast enabled.

Support for static RP programming and learning through BS & Support for Multiple Static RPs

Distributed Firewall support for Multicast Traffic

Improved Troubleshooting: This adds the ability to configure IGMP Local Groups on the uplinks so that the Edge can act as a receiver. This will greatly help in triaging multicast issues by being able to attract multicast traffic of a particular group to Edge.

  • Edge Platform and Services

Inter TEP communication within the same host: Edge TEP IP can be on the same subnet as the local hypervisor TEP.

Support for redeployment of Edge node: A defunct Edge node, VM or physical server, can be replaced with a new one without requiring it to be deleted.

NAT connection limit per Gateway: The maximum NAT sessions can be configured per Gateway.

  • Firewall

Improvements in FQDN-based Firewall: You can define FQDNs that can be applied to a Distributed Firewall. You can either add individual FQDNs or import a set of FQDNs from CSV files.

Firewall Usability Features

  • Firewall Export & Import: NSX now provides the option for you to export and import firewall rules and policies as CSVs.
  • Enhanced Search and Filtering: Improved search indexing and filtering options for firewall rules based on IP ranges.
  • Distributed Intrusion Detection/Prevention System (D-IDPS)

Distributed IPS

NSX-T will have a Distributed Intrusion Prevention System. You can block threats based on signatures configured for inspection.

Enhanced dashboard to provide details on threats detected and blocked.

IDS/IPS profile creation is enhanced with Attack Types, Attack Targets, and CVSS scores to create more targeted detection.

  • Load Balancing

HTTP server-side Keep-alive: An option to keep one-to-one mapping between the client side connection and the server side connection; the backend connection is kept until the frontend connection is closed.

HTTP cookie security compliance: Support for “httponly” and “secure” options for HTTP cookie.

A new diagnostic CLI command: The single command captures various troubleshooting outputs relevant to Load Balancer.

  • VPN

TCP MSS Clamping for L2 VPN: The TCP MSS Clamping feature allows L2 VPN session to pass traffic when there is MTU mismatch.

  • Automation, OpenStack and API

NSX-T Terraform Provider support for Federation: The NSX-T Terraform Provider extends its support to NSX-T Federation. This allows you to create complex logical configurations with networking, security (segment, gateways, firewall etc.) and services in an infra-as-code model. For more details, see the NSX-T Terraform Provider release notes.

Conversion to NSX-T Policy Neutron Plugin for OpenStack environment consuming Management API: Allows you to move an OpenStack with NSX-T environment from the Management API to the Policy API. This gives you the ability to move an environment deployed before NSX-T 2.5 to the latest NSX-T Neutron Plugin and take advantage of the latest platform features.

Ability to change the order of NAT and firewall on the OpenStack Neutron Router: This gives you the choice of the order of operation between NAT and firewall in your deployment. At the OpenStack Neutron Router level (mapped to a Tier-1 in NSX-T), the order of operation can be defined to be either NAT then firewall, or firewall then NAT. This is a global setting for a given OpenStack platform.

NSX Policy API Enhancements: Ability to filter and retrieve all objects within a subtree of the NSX Policy API hierarchy. In previous versions, filtering was done from the root of the tree (policy/api/v1/infra?filter=Type-); this release lets you retrieve all objects from sub-trees instead. For example, a network admin can look at all Tier-0 configurations with /policy/api/v1/infra/tier-0s?filter=Type- instead of specifying all the Tier-0 related objects from the root.
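As a rough sketch of how the subtree filtering might be called from a script: the helper below just builds the two URL shapes described above. The manager host name is a placeholder, and the `Type-` filter value is left exactly as shown in the release notes (the full expression depends on the object type you want).

```python
from urllib.parse import urlencode

def policy_filter_url(manager: str, subtree: str = "", filter_expr: str = "Type-") -> str:
    """Build an NSX Policy API URL that filters objects, optionally under a subtree."""
    base = f"https://{manager}/policy/api/v1/infra"
    if subtree:
        base += f"/{subtree}"
    return f"{base}?{urlencode({'filter': filter_expr})}"

# Old style: filter from the root of the policy tree.
print(policy_filter_url("nsxmgr.lab.local"))
# New in 3.1: filter within a subtree, e.g. all Tier-0 configuration.
print(policy_filter_url("nsxmgr.lab.local", "tier-0s"))
```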

  • Operations

NSX-T support with vSphere Lifecycle Manager (vLCM): Starting with vSphere 7.0 Update 1, VMware NSX-T Data Center can be supported on a cluster that is managed with a single vSphere Lifecycle Manager (vLCM) image. As a result, NSX Manager can be used to install, upgrade, or remove NSX components on the ESXi hosts in a cluster that is managed with a single image.

  • Hosts can be added and removed from a cluster that is managed with a single vSphere Lifecycle Manager and enabled with VMware NSX-T Data Center.
  • Both VMware NSX-T Data Center and ESXi can be upgraded in a single vSphere Lifecycle Manager remediation task. The workflow is supported only if you upgrade from VMware NSX-T Data Center version 3.1.
  • Compliance can be checked, a remediation pre-check report can be generated, and a cluster can be remediated with a single vSphere Lifecycle Manager image and that is enabled with VMware NSX-T Data Center.

Simplification of host/cluster installation with NSX-T: Through the “Getting Started” button in the VMware NSX-T Data Center user interface, simply select the cluster of hosts that needs to be installed with NSX, and the UI will automatically prompt you with a network configuration that is recommended by NSX based on your underlying host configuration. This can be installed on the cluster of hosts thereby completing the entire installation in a single click after selecting the clusters. The recommended host network configuration will be shown in the wizard with a rich UI, and any changes to the desired network configuration before NSX installation will be dynamically updated so users can refer to it as needed.

Enhancements to in-place upgrades: Several enhancements have been made to the VMware NSX-T Data Center in-place host upgrade process, like increasing the max limit of virtual NICs supported per host, removing previous limitations, and reducing the downtime in data path during in-place upgrades. Refer to the VMware NSX-T Data Center Upgrade Guide for more details.

Reduction of VIB size in NSX-T: VMware NSX-T Data Center 3.1.0 has a smaller VIB footprint in all NSX host installations so that you are able to install ESX and other 3rd-party VIBs along with NSX on your hypervisors.

Enhancements to Physical Server installation of NSX-T: To simplify the workflow of installing VMware NSX-T Data Center on Physical Servers, the entire end-to-end physical server installation process is now through the NSX Manager. The need for running Ansible scripts for configuring host network connectivity is no longer a requirement.

ERSPAN support on a dedicated network stack with ENS: ERSPAN can now be configured on a dedicated network stack i.e., vmk stack and supported with the enhanced NSX network switch i.e., ENS, thereby resulting in higher performance and throughput for ERSPAN Port Mirroring.

Singleton Manager with vSphere HA: NSX now supports the deployment of a single NSX Manager in production deployments. This can be used in conjunction with vSphere HA to recover a failed NSX Manager. Please note that the recovery time for a single NSX Manager using backup/restore or vSphere HA may be much longer than the availability provided by a cluster of NSX Managers.

Log consistency across NSX components: Consistent logging format and documentation across different components of NSX so that logs can be easily parsed for automation and you can efficiently consume the logs for monitoring and troubleshooting.

Support for Rich Common Filters: This is to support rich common filters for operations features like packet capture, port mirroring, IPFIX, and latency measurements for increasing the efficiency of customers while using these features. Currently, these features have either very simple filters which are not always helpful, or no filters leading to inconvenience.

CLI Enhancements: Several CLI-related enhancements have been made in this release:

  • CLI “get” commands are now accompanied by timestamps to help with debugging
  • GET / SET / RESET the Virtual IP (VIP) of the NSX Management cluster through CLI
  • While debugging through the central CLI, run ping commands directly on the local machines, eliminating the extra steps needed to log in to the machine and do the same
  • View the list of cores on any NSX component through CLI
  • Use the “*” operator in CLI
  • Commands for debugging L2 Bridge through CLI have also been introduced in this release

Distributed Load Balancer Traceflow: Traceflow now supports Distributed Load Balancer for troubleshooting communication failures from endpoints deployed in vSphere with Tanzu to a service endpoint via the Distributed Load Balancer.

  •  Monitoring

Events and Alarms

  • Capacity Dashboard: Maximum Capacity, Maximum Capacity Threshold, Minimum Capacity Threshold
  • Edge Health: Standby move to different Edge node, Datapath thread deadlocked, NSX-T Edge core file has been generated, Logical Router failover event, Edge process failed, Storage Latency High, Storage Error
  • IDS/IPS: NSX-IDPS Engine Up/Down, NSX-IDPS Engine CPU Usage exceeded 75%, NSX-IDPS Engine CPU Usage exceeded 85%, NSX-IDPS Engine CPU Usage exceeded 95%, Max events reached, NSX-IDPS Engine Memory Usage exceeded 75%, NSX-IDPS Engine Memory Usage exceeded 85%, NSX-IDPS Engine Memory Usage exceeded 95%
  • IDFW: Connectivity to AD server, Errors during Delta Sync
  • Federation: GM to GM Split Brain
  • Communication: Control Channel to Transport Node Down, Control Channel to Transport Node Down for too Long, Control Channel to Manager Node Down, Control Channel to Manager Node Down for too Long, Management Channel to Transport Node Down, Management Channel to Transport Node Down for too Long, Manager FQDN Lookup Failure, Manager FQDN Reverse Lookup Failure

ERSPAN for ENS fast path: Support port mirroring for ENS fast path.

System Health Plugin Enhancements: System Health plugin enhancements and status monitoring of processes running on different nodes to ensure that system is running properly by on-time detection of errors.

Live Traffic Analysis & Tracing: A live traffic analysis tool to support bi-directional traceflow between on-prem and VMC data centers.

Latency Statistics and Measurement for UA Nodes: Latency measurements between NSX Manager nodes per NSX Manager cluster and between NSX Manager clusters across different sites.

Performance Characterization for Network Monitoring using Service Insertion: To provide performance metrics for network monitoring using Service Insertion.

  • Usability and User Interface

Graphical Visualization of VPN: The Network Topology map now visualizes the VPN tunnels and sessions that are configured. This aids you to quickly visualize and troubleshoot VPN configuration and settings.

Dark Mode: NSX UI now supports dark mode. You can toggle between light and dark mode.

Firewall Export & Import: NSX now provides the option for you to export and import firewall rules and policies as CSVs.

Enhanced Search and Filtering: Improved the search indexing and filtering options for firewall rules based on IP ranges.

Reducing Number of Clicks: With this UI enhancement, NSX-T now offers a convenient and easy way to edit Network objects.

  • Licensing

Multiple license keys: NSX now has the ability to accept multiple license keys of same edition and metric. This functionality allows you to maintain all your license keys without having to combine your license keys.

License Enforcement: NSX-T now ensures that users are license-compliant by restricting access to features based on license edition. New users will be able to access only those features that are available in the edition that they have purchased. Existing users who have used features that are not in their license edition will be restricted to only viewing the objects; create and edit will be disallowed.

New VMware NSX Data Center Licenses: Adds support for new VMware NSX Firewall and NSX Firewall with Advanced Threat Prevention license introduced in October 2020, and continues to support NSX Data Center licenses (Standard, Professional, Advanced, Enterprise Plus, Remote Office Branch Office) introduced in June 2018, and previous VMware NSX for vSphere license keys. See VMware knowledge base article 52462 for more information about NSX licenses.

  • AAA and Platform Security

Security Enhancements for Use of Certificates And Key Store Management: With this architectural enhancement, NSX-T offers a convenient and secure way to store and manage a multitude of certificates that are essential for platform operations and be in compliance with industry and government guidelines. This enhancement also simplifies API use to install and manage certificates.

Alerts for Audit Log Failures: Audit logs play a critical role in managing cybersecurity risks within an organization and are often the basis of forensic analysis, security analysis and criminal prosecution, in addition to aiding with diagnosis of system performance issues. Complying with NIST-800-53 and industry-benchmark compliance directives, NSX offers alert notification via alarms in the event of failure to generate or process audit data.

Custom Role Based Access Control: Users desire the ability to configure roles and permissions that are customized to their specific operating environment. The custom RBAC feature allows granular feature-based privilege customization capabilities enabling NSX customers the flexibility to enforce authorization based on least privilege principles. This will benefit users in fulfilling specific operational requirements or meeting compliance guidelines. Please note in NSX-T 3.1, only policy based features are available for role customization.

FIPS – Interoperability with vSphere 7.x: Cryptographic modules in use with NSX-T are FIPS 140-2 validated since NSX-T 2.5. This change extends formal certification to incorporate module upgrades and interoperability with vSphere 7.0.

  • NSX Data Center for vSphere to NSX-T Data Center Migration

Migration of NSX for vSphere Environment with vRealize Automation: The Migration Coordinator now interacts with vRealize Automation (vRA) in order to migrate environments where vRealize Automation provides automation capabilities. This will offer a first set of topologies which can be migrated in an environment with vRealize Automation and NSX-T Data Center. Note: This will require support on vRealize Automation.

Modular Distributed Firewall Config Migration: The Migration Coordinator is now able to migrate firewall configurations and state from an NSX Data Center for vSphere environment to an NSX-T Data Center environment. This functionality allows a customer to migrate virtual machines (using vMotion) from one environment to the other while keeping their firewall rules and state.

Migration of Multiple VTEP: The NSX Migration Coordinator now has the ability to migrate environments deployed with multiple VTEPs.

Increase Scale in Migration Coordinator to 256 Hosts: The Migration Coordinator can now migrate up to 256 hypervisor hosts from NSX Data Center for vSphere to NSX-T Data Center.

Migration Coordinator coverage of Service Insertion and Guest Introspection: The Migration Coordinator can migrate environments with Service Insertion and Guest Introspection. This will allow partners to offer a solution for migration integrated with the complete migration workflow.

Upgrade Considerations
API Deprecations and Behavior Changes

Retention Period of Unassigned Tags: In NSX-T 3.0.x, NSX Tags with 0 Virtual Machines assigned are automatically deleted by the system after five days. In NSX-T 3.1.0, the system task has been modified to run on a daily basis, cleaning up unassigned tags that are older than one day. There is no manual way to force delete unassigned tags.

I recommend reviewing the known-issues sections: General  |  Installation  |  Upgrade  |  NSX Edge  |  NSX Cloud  |  Security  |  Federation

Enablement Links
Release Notes Click Here  |  What’s New  |  General Behavior Changes  |  API and CLI Resources  |  Resolved Issues  |  Known Issues Installation Guide  |  Administration Guide  |  Upgrade Guide  |  Migration Coordinator  |  VMware NSX Intelligence

REST API Reference Guide  |  CLI Reference Guide  |  Global Manager REST API

Upgrading Docs Upgrade Checklist  |  Preparing to Upgrade  |  Upgrading  |  Upgrading NSX Cloud Components  |  Post-Upgrade Tasks

Troubleshooting Upgrade Failures

Installation Docs Preparing for Installation  |  NSX Manager Installation  |  Installing NSX Manager Cluster on vSphere  |  Installing NSX Edge

vSphere Lifecycle Manager  |  Host Profile integration  |  Getting Started with Federation  |  Getting Started with NSX Cloud

Migrating Docs Migrating NSX Data Center for vSphere  |  Migrating vSphere Networking  |  Migrating NSX Data Center for vSphere with vRA
Requirements Docs NSX Manager Cluster  |  System  |  NSX Manager VM & Host Transport Node System
NSX Edge VM System  |  NSX Edge Bare Metal  |  Bare Metal Server System  |  Bare Metal Linux Container
Compatibility Information Ports Used  |  Compatibility Guide (Select NSX-T)  |  Product Interoperability Matrix  |
Downloads Click Here
Hands On Labs (New) HOL-2103-01-NET – VMware NSX for vSphere Advanced Topics

HOL-2103-02-NET – VMware NSX Migration Coordinator

HOL-2103-91-NET – VMware NSX for vSphere Flow Monitoring and Traceflow

HOL-2122-01-NET – NSX Cloud Consistent Networking and Security across Enterprise, AWS & Azure

HOL-2122-91-ISM – NSX Cloud Consistent Networking and Security across Enterprise, AWS & Azure Lightning Lab

VMworld 2020 Sessions Update on NSX-T Switching: NSX on VDS (vSphere Distributed Switch) VCNC1197

Demystifying the NSX-T Data Center Control Plane VCNC1164

NSX-T security and compliance deep dive ISNS2256

NSX Data Center for vSphere to NSX-T Migration: Real-World Experience VCNC1590

Blogs NSX-T 3.0 – Innovations in Cloud, Security, Containers, and Operations


Create an ESXi installation ISO with custom drivers in 9 easy steps!

Video Posted on Updated on

One of the challenges in running a VMware-based home lab is working with old / inexpensive hardware while running the latest software. It's a balance that is sometimes frustrating, but when it works it is very rewarding. Most recently I decided to move to 10GbE from my InfiniBand 40Gb network. Part of this transition was to create an ESXi ISO with the latest build (6.7U3) and the appropriate network card drivers. In this video blog post I'll show 9 easy steps to create your own customized ESXi ISO and how to pinpoint IO cards on the VMware HCL.

** Update 06/22/2022 **  If you are looking to use USB NICs with ESXi, check out the new fling (USB Network Native Driver for ESXi) that helps with this.  This Fling supports the most popular USB network adapter chipsets: ASIX USB 2.0 gigabit (ASIX88178a), ASIX USB 3.0 gigabit (ASIX88179), Realtek USB 3.0 gigabit (RTL8152/RTL8153), and Aquantia AQC111U.

NOTE – Flings are NOT supported by VMware

** Update 03/06/2020 ** Though I had good luck with the HP 593742-001 NC523SFP dual-port SFP+ 10Gb card in my Gen 4 home lab, I found it faulty when running in my Gen 5 home lab.  It could be that I was using a PCIe x4 slot in Gen 4, or that the card runs too hot to touch.  For now the card has been removed from the VMware HCL, HP has advisories out about it, and after doing some poking around there seem to be lots of issues with it.  I'm looking for a replacement and may go with the HP NC550SFP.  However, this doesn't mean the steps in this video are only for this card; the steps in this video help you to better understand how to add drivers to an ISO.

Here are the written steps I took from my video blog.  If you are looking for more detail, watch the video.

Before you start – make sure you have PowerCLI installed, have downloaded these files, and have placed them in c:\tmp.


I started up PowerCLI and did the following commands:

1) Add the ESXi Update ZIP file to the depot:

Add-EsxSoftwareDepot C:\tmp\

2) Add the QLogic Offline Bundle ZIP file to the depot:

Add-EsxSoftwareDepot 'C:\tmp\'

3) Make sure the files from step 1 and 2 are in the depot:


4) Show the profile names from update-from-esxi6.7-6.7_update03. The default command only shows part of the name; to see the full name, append '| select name'.

Get-EsxImageProfile | select name

5) Create a clone profile to start working with.

New-EsxImageProfile -cloneprofile ESXi-6.7.0-20190802001-standard -Name ESXi-6.7.0-20190802001-standard-QLogic -Vendor QLogic

6) Validate the QLogic driver is loaded in the local depot.  It should match the driver from step 2.  Make sure you note the Name and Version columns.  We'll need to combine these two with a space in the next step.

Get-EsxSoftwarePackage -Vendor q*

7) Add the software package to the cloned profile. Tip: When prompted for 'SoftwarePackage[0]:', enter the 'name', a space, then the 'version number' from step 6.  If you just use the short name it might not work.

Add-EsxSoftwarePackage

ImageProfile: ESXi-6.7.0-20190802001-standard-QLogic
SoftwarePackage[0]: net-qlcnic 6.1.191-1OEM.600.0.0.2494585

8) Optional: Compare the profiles, to see differences, and ensure the driver file is in the profile.

Get-EsxImageProfile | select name   << Run this if you need a reminder on the profile names

Compare-EsxImageProfile -ComparisonProfile ESXi-6.7.0-20190802001-standard-QLogic -ReferenceProfile ESXi-6.7.0-20190802001-standard

9) Create the ISO

Export-EsxImageProfile -ImageProfile "ESXi-6.7.0-20190802001-standard-QLogic" -ExportToIso -FilePath c:\tmp\ESXi-6.7.0-20190802001-standard-QLogic.iso

That’s it!  If you like my ‘no-nonsense’ videos and blogs that get straight to the point… then post a comment or let me know… Else, I’ll start posting boring video blogs!



FIX for Netgear Orbi Router / Firewall blocks additional subnets

Posted on Updated on

**2021-NOV Update**  With the release of Orbi Router Firmware Version V2.7.3.22 the telnet option is no longer available in the debug menu.  This means the steps below will not work unless you are on an earlier router firmware version.  I looked for other Orbi solutions but didn't find any.  However, I solved this issue by placing an additional firewall, using NAT, between VLAN 74 and VLAN 75.  If you find an Orbi solution, please post a comment and I'll be glad to update this blog.

In April 2019 I decided to update my home network with the Orbi WiFi System (RBK50) AC3000 by Netgear.  My previous Netgear WiFi router worked quite well, but I really needed something to support multiple locations seamlessly.

The Orbi mesh has a primary device and allows satellites to be connected to it.  It creates a WiFi mesh that allows devices to go from room to room or building to building seamlessly.  I've had it up for a while now and it's been working out great – that is, until I decided to ask it to route more than one subnet.  In this blog I'll show you the steps I took to overcome this feature limitation, but like all content on my blog this is for my reference. Use at your own risk.

To understand the problem we need to first understand the network layout.   My Orbi Router is the gateway of last resort and it supplies DHCP and DNS services. In my network I have two subnets, which are untagged VLANs known as VLAN 74 – 172.16.74.x/24 and VLAN 75 – 172.16.75.x/24.   VLAN 74 is used by my home devices and VLAN 75 is where I manage my ESXi hosts.  I have enabled RIP v2 on the Orbi and on the Dell 6224 switch.  The routing tables are populated correctly, and I can ping from any internal subnet to any host without issue, except when the Orbi is involved.


Issue:  Hosts on VLAN 75 are not able to get to the internet.  Hosts on VLAN 75 can resolve DNS names, but they cannot ping any host on the Inet. Conversely, VLAN 74 can ping Inet hosts and get to the internet.  I'd like my hosts on VLAN 75 to have all the same functionality as my hosts on VLAN 74.

Findings:  By default, the primary Orbi router blocks any host that is not on VLAN 74 from getting to the INET.  I believe Netgear added this block to limit the number of devices the Orbi could NAT.  I can only guess that either the router just can't handle the load or this is the maximum Netgear tested it to.  I found this firewall block by logging into the CLI of my Orbi and looking at the iptables settings.  There I could clearly see a firewall rule blocking hosts that were not part of VLAN 74.

Solution:  Adjust the Orbi to allow all VLAN traffic (USE AT YOUR OWN RISK)

  1. Enable Telnet access on your Primary Orbi Router.
    1. Go to http://{your orbi ip address}/debug.htm
    2. Choose ‘Enable Telnet’ (**reminder to disable this when done**)
    3. Telnet into the Orbi Router (I just used putty)
    4. Log on as root using your router's main password
  2. I issued the command ‘iptables -t filter -L loc2net’. Using the output of this command I can see where line 5 is dropping all traffic that is not (!) VLAN74.
  3. Let’s remove this firewall rule. The one I want to target is the 5th in the list, yours may vary.  This command will remove it ‘iptables -t filter -D loc2net 5’
    • NOTES:
    • Router Firmware Version V2.5.1.16 (Noted: 10.2020) — It appears that more recent firmware updates have changed the targeting steps.  I noticed in Router Firmware Version V2.5.1.16 I had to add 2 to the targeted line number to remove it with the iptables command.  This may vary for the device being worked on.
    • Router Firmware Version V2.5.2.4 (Noted: Jan-2021) — It appears the targeting steps are fixed again in this version.
    • Again, as with all my posts, blogs, and videos, this is for my records and not for any intended purpose.
  4. Next, we need to clean up some post routing issues ‘iptables -t nat -I POSTROUTING 1 -o brwan -j MASQUERADE’
  5. A quick test and I can now PING and get to the internet from VLAN 75
  6. Disconnect from Telnet and disable it on your router.

Note:  Unfortunately, this is not a permanent fix.  Once you reboot your router the old settings come back.  The good news is, it's only two to three lines to fix this problem.  Check out the links below for more information and a script.

Easy Copy Commands for my reference:

iptables -t filter -L loc2net

iptables -t filter -D loc2net 7  << Check this number

iptables -t nat -I POSTROUTING 1 -o brwan -j MASQUERADE
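Since the fix has to be reapplied after every reboot, the three commands above can be bundled into a small script. This is only a sketch of my own (not anything from Netgear): it prints the commands for review rather than running them, because the rule number (5 for me) varies by firmware version and must be verified against the `loc2net` listing first.

```shell
#!/bin/sh
# Print the Orbi firewall-fix commands so they can be reviewed and then
# pasted into the router's telnet session. The rule number defaults to 5,
# which is only what I saw on my firmware - check yours first with
# 'iptables -t filter -L loc2net'.
RULE_NUM="${1:-5}"

orbi_fix_commands() {
  echo "iptables -t filter -L loc2net"
  echo "iptables -t filter -D loc2net $1"
  echo "iptables -t nat -I POSTROUTING 1 -o brwan -j MASQUERADE"
}

orbi_fix_commands "$RULE_NUM"
```

Passing a different rule number as the first argument (e.g. `sh orbi-fix.sh 7`) covers the firmware versions where the line offset changed.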

If you like my ‘no-nonsense’ blog articles that get straight to the point… then post a comment or let me know… Else, I’ll start writing boring blog content.


No web interface on a Dell PowerConnect 6224 Switch

Posted on Updated on

I picked up a Dell PowerConnect 6224 switch the other day as my older Netgear switch (2007) finally died.  After connecting via console cable (9600,8,1,none) I updated the firmware image to the latest revision. I then followed the "Dell Easy Setup Wizard", which, by the way, stated the web interface would work after the wizard completed. After completing the easy wizard I opened a browser to the switch IP address, which failed.  I then pinged the switch IP address – yep, it is replying.  Next, I rebooted the switch – still no web interface connection.

How did I fix this?

1- While in the console, entered into config mode, and issued the following command.

console(config)#ip http server

2- Next I issued a ‘show run’ to ensure the command was present

console#show run
!Current Configuration:
!System Description “PowerConnect 6224,, VxWorks 6.5”
!System Software Version
!Cut-through mode is configured as disabled
member 1 1
ip address
ip default-gateway
ip http server
username “admin” password HASHCODE level 15 encrypted
snmp-server community public rw

3 – This time I connected to the switch via a browser without issue.

4 – Finally, saved the running-configuration

console#copy running-config startup-config

This operation may take a few minutes.
Management interfaces will not be available during this time.

Are you sure you want to save? (y/n) y

Configuration Saved!
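If you capture the `show run` output to a text file from your terminal session, a quick grep confirms the line survived the save. This is just a sketch of mine; the capture file name is hypothetical.

```shell
#!/bin/sh
# Sanity-check a saved 'show run' capture for the http server line.
# The default file name is hypothetical - pass your own capture as $1.
CONFIG_DUMP="${1:-switch-running-config.txt}"

has_http_server() {
  # True (exit 0) only if the exact config line is present in the file
  grep -q "^ip http server" "$1" 2>/dev/null
}

if has_http_server "$CONFIG_DUMP"; then
  echo "ip http server is enabled"
else
  echo "ip http server missing - run it again in config mode"
fi
```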

Summary:  These were some pretty basic commands to get the HTTP service up and running, but I'm sure I'll run into this again and I'll have this blog to refer to.  Next, I'm off to set up some VLANs and a few static routes.

If you like my ‘no-nonsense’ blog articles that get straight to the point… then post a comment or let me know… Else, I’ll start writing boring blog content.

vCenter Server 6.5 – Migrate VMs from Virtual Distributed Switch to Standard Switch

Posted on

I was doing some testing the other day and had migrated about 25 test VMs to a Virtual Distributed Switch (vDS) from a Standard Switch (vSS). However, to complete my testing I needed the VMs migrated back to a vSS and the vDS deleted entirely. In this blog I'm going to document the steps I took to accomplish this.

The first step in completing this migration is to migrate the VMs' port groups from the vDS port group to the vSS port group. One option would be to edit each VM, but with 24 VMs each with 2 vNICs that would be about 100+ clicks to complete = very slow. A more efficient way is to use the vDS migrate-VMs option.

  • In the vCenter Server 6.5 HTML client, I clicked on Network > then under my vDS I right-clicked on my active vDS port group
  • I then chose "Migrate VMs to Another Network"


Second, I chose BROWSE > a new pop-up window appeared and I chose the vSS port group that I wanted to migrate to, then selected OK > I confirmed the 'Destination Network' was correct, then clicked Next

Third, I wanted to migrate both network adapters, so I selected 'All virtual machines' then Next

Note: If I wanted to migrate individual vNICs instead of all of them, I could have expanded each VM and chosen which network adapter to migrate.

Fourth, I reviewed the confirmation page and confirmed all 24 VMs would be migrated, then clicked Next

Fifth, here is the final report… all the VMs have been migrated

Finally, now that all the VMs have been removed from the vDS port group, I can remove the vDS switch from my test environment.

If you like my ‘no-nonsense’ blog articles that get straight to the point… then post a comment or let me know… Else, I’ll start writing boring blog content.