Networking

GA Release VMware NSX-T Data Center 3.1 | Announcement, information, and links


VMware announced the GA release of VMware NSX-T Data Center 3.1.

See the Enablement Links table below for all the technical enablement links, including VMworld 2020 sessions and new Hands On Labs.

Release Overview
VMware NSX-T Data Center 3.1.0   |  Build 17107167

What’s New
NSX-T Data Center 3.1 adds a broad set of new features and functionality for virtualized networking and security across private, public, and multi-cloud environments. Highlights include new features and enhancements in the following focus areas:

  • Cloud-scale Networking: Federation enhancements, enhanced Multicast capabilities.
  • Move to Next Gen SDN: Simplified migration from NSX-V to NSX-T.
  • Intrinsic Security: Distributed IPS, FQDN-based enhancements.
  • Lifecycle and Monitoring: NSX-T support with vSphere Lifecycle Manager (vLCM), simplified installation, enhanced monitoring, search and filtering.
  • Federation is now considered production ready.

 In addition to these enhancements, the following capabilities and improvements have been added.

  • Federation

Support for standby Global Manager Cluster

Global Manager can now have an active cluster and a standby cluster in another location. Latency between active and standby cluster must be a maximum of 150ms round-trip time.

With the support of Federation upgrade and Standby GM, Federation is now considered production ready.

  • L2 Networking

Change the display name for TCP/IP stack: The netstack keys remain “vxlan” and “hyperbus” but the display name in the UI is now “nsx-overlay” and “nsx-hyperbus”.

The display name will change in both the list of netstacks and the list of VMKNICs.

This change is visible with vCenter 6.7.

Improvements in L2 Bridge Monitoring and Troubleshooting

Consistent terminology across documentation, UI and CLI

Addition of new CLI commands to get summary and detailed information on L2 Bridge profiles and stats

Log messages to identify the bridge profile, the reason for the state change, as well as the logical switch(es) impacted

Support TEPs in different subnets to fully leverage different physical uplinks

A Transport Node can have multiple host switches attached to several overlay Transport Zones. Previously, the TEPs for all those host switches needed to have IP addresses in the same subnet. This restriction has been lifted, allowing you to pin different host switches to different physical uplinks that belong to different L2 domains.

Improvements in IP Discovery and NS Groups: IP Discovery profiles can now be applied to NS Groups, simplifying usage for firewall admins.

  • L3 Networking

Policy API enhancements

Ability to configure BFD peers on gateways and the forwarding-up timer per VRF through the Policy API.

Ability to retrieve the proxy ARP entries of a gateway through the Policy API.

  • Multicast

NSX-T 3.1 is a major release for multicast, extending the feature set and confirming its status as enterprise ready for deployment.

Support for multicast replication on the Tier-1 gateway. This allows you to turn on multicast for a Tier-1 that has a Tier-1 Service Router (a mandatory requirement) and attach multicast receivers and sources to it.

Support for IGMPv2 on all downlinks and uplinks from the Tier-1.

Support for PIM-SM on all uplinks (up to the supported configuration maximum) between each Tier-0 and all TORs (protection against TOR failure).

Ability to run multicast in Active/Standby and unicast ECMP in Active/Active from Tier-1 → Tier-0 → TOR.

Please note that unicast ECMP is not supported from the ESXi host when it is attached to a Tier-1 that also has multicast enabled.

Support for static RP programming and RP learning through bootstrap, and support for multiple static RPs.

Distributed Firewall support for Multicast Traffic

Improved Troubleshooting: Adds the ability to configure IGMP local groups on the uplinks so that the Edge can act as a receiver. This greatly helps in triaging multicast issues by attracting multicast traffic for a particular group to the Edge.

  • Edge Platform and Services

Inter TEP communication within the same host: Edge TEP IP can be on the same subnet as the local hypervisor TEP.

Support for redeployment of Edge node: A defunct Edge node, VM or physical server, can be replaced with a new one without requiring it to be deleted.

NAT connection limit per Gateway: The maximum number of NAT sessions can be configured per gateway.

  • Firewall

Improvements in FQDN-based Firewall: You can define FQDNs that can be used in Distributed Firewall rules, either by adding individual FQDNs or by importing a set of FQDNs from a CSV file.

Firewall Usability Features

  • Firewall Export & Import: NSX now provides the option for you to export and import firewall rules and policies as CSVs.
  • Enhanced Search and Filtering: Improved search indexing and filtering options for firewall rules based on IP ranges.

  • Distributed Intrusion Detection/Prevention System (D-IDPS)

Distributed IPS

NSX-T now includes a Distributed Intrusion Prevention System. You can block threats based on signatures configured for inspection.

Enhanced dashboard to provide details on threats detected and blocked.

IDS/IPS profile creation is enhanced with Attack Types, Attack Targets, and CVSS scores to create more targeted detection.

  • Load Balancing

HTTP server-side keep-alive: An option to keep a one-to-one mapping between the client-side connection and the server-side connection; the back-end connection is kept open until the front-end connection is closed.

HTTP cookie security compliance: Support for “httponly” and “secure” options for HTTP cookie.

A new diagnostic CLI command: The single command captures various troubleshooting outputs relevant to Load Balancer.

  • VPN

TCP MSS Clamping for L2 VPN: The TCP MSS clamping feature allows an L2 VPN session to pass traffic when there is an MTU mismatch.

  • Automation, OpenStack and API

NSX-T Terraform Provider support for Federation: The NSX-T Terraform Provider extends its support to NSX-T Federation. This allows you to create complex logical configurations with networking, security (segments, gateways, firewalls, etc.) and services in an infrastructure-as-code model. For more details, see the NSX-T Terraform Provider release notes.

Conversion to NSX-T Policy Neutron Plugin for OpenStack environments consuming the Management API: Allows you to move an OpenStack-with-NSX-T environment from the Management API to the Policy API. This lets you move an environment deployed before NSX-T 2.5 to the latest NSX-T Neutron plugin and take advantage of the latest platform features.

Ability to change the order of NAT and firewall on the OpenStack Neutron Router: This gives you the choice, per deployment, of the order of operations between NAT and the firewall. At the OpenStack Neutron Router level (mapped to a Tier-1 in NSX-T), the order of operations can be defined as either NAT then firewall, or firewall then NAT. This is a global setting for a given OpenStack platform.

NSX Policy API Enhancements: Ability to filter and retrieve all objects within a subtree of the NSX Policy API hierarchy. In previous versions, filtering was done from the root of the tree (policy/api/v1/infra?filter=Type-); this enhancement lets you retrieve all objects from a sub-tree instead. For example, a network admin can look at all Tier-0 configurations simply with /policy/api/v1/infra/tier-0s?filter=Type- instead of specifying all the Tier-0 related objects from the root.
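
As a rough sketch of what a subtree call looks like (the manager hostname and admin credentials below are placeholders I made up, not values from the release notes; the filter parameter shown above can be appended to narrow the results further):

curl -k -u admin 'https://nsx-mgr.example.com/policy/api/v1/infra/tier-0s'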

  • Operations

NSX-T support with vSphere Lifecycle Manager (vLCM): Starting with vSphere 7.0 Update 1, VMware NSX-T Data Center can be supported on a cluster that is managed with a single vSphere Lifecycle Manager (vLCM) image. As a result, NSX Manager can be used to install, upgrade, or remove NSX components on the ESXi hosts in a cluster that is managed with a single image.

  • Hosts can be added to and removed from a cluster that is managed with a single vSphere Lifecycle Manager image and enabled with VMware NSX-T Data Center.
  • Both VMware NSX-T Data Center and ESXi can be upgraded in a single vSphere Lifecycle Manager remediation task. The workflow is supported only if you upgrade from VMware NSX-T Data Center version 3.1.
  • Compliance can be checked, a remediation pre-check report can be generated, and a cluster can be remediated when it is managed with a single vSphere Lifecycle Manager image and enabled with VMware NSX-T Data Center.

Simplification of host/cluster installation with NSX-T: Through the "Getting Started" button in the VMware NSX-T Data Center user interface, simply select the cluster of hosts to be installed with NSX, and the UI automatically prompts you with a network configuration recommended by NSX based on your underlying host configuration. That configuration can then be installed on the cluster of hosts, completing the entire installation in a single click after selecting the clusters. The recommended host network configuration is shown in the wizard with a rich UI, and any changes to the desired network configuration made before NSX installation are dynamically reflected so users can refer to it as needed.

Enhancements to in-place upgrades: Several enhancements have been made to the VMware NSX-T Data Center in-place host upgrade process, such as increasing the maximum number of virtual NICs supported per host, removing previous limitations, and reducing data-path downtime during in-place upgrades. Refer to the VMware NSX-T Data Center Upgrade Guide for more details.

Reduction of VIB size in NSX-T: VMware NSX-T Data Center 3.1.0 has a smaller VIB footprint in all NSX host installations, so that you are able to install ESXi and other third-party VIBs along with NSX on your hypervisors.

Enhancements to Physical Server installation of NSX-T: To simplify the workflow of installing VMware NSX-T Data Center on physical servers, the entire end-to-end physical server installation process is now driven through the NSX Manager. Running Ansible scripts to configure host network connectivity is no longer required.

ERSPAN support on a dedicated network stack with ENS: ERSPAN can now be configured on a dedicated network stack (i.e., a vmk stack) and is supported with the enhanced NSX network switch (ENS), resulting in higher performance and throughput for ERSPAN port mirroring.

Singleton Manager with vSphere HA: NSX now supports the deployment of a single NSX Manager in production deployments. This can be used in conjunction with vSphere HA to recover a failed NSX Manager. Please note that the recovery time for a single NSX Manager using backup/restore or vSphere HA may be much longer than the availability provided by a cluster of NSX Managers.

Log consistency across NSX components: Consistent logging format and documentation across different components of NSX so that logs can be easily parsed for automation and you can efficiently consume the logs for monitoring and troubleshooting.

Support for Rich Common Filters: Rich common filters are now supported for operations features like packet capture, port mirroring, IPFIX, and latency measurements, increasing your efficiency when using these features. Previously, these features had either very simple filters, which were not always helpful, or no filters at all.

CLI Enhancements: Several CLI related enhancements have been made in this release:

  • CLI "get" commands are now accompanied by timestamps to help with debugging.
  • GET / SET / RESET the Virtual IP (VIP) of the NSX Management cluster through the CLI.
  • While debugging through the central CLI, run ping commands directly on the local machines, eliminating the extra steps needed to log in to the machine and do the same.
  • View the list of core files on any NSX component through the CLI.
  • Use the "*" wildcard operator in the CLI.
  • Commands for debugging the L2 Bridge through the CLI have also been introduced in this release.

Distributed Load Balancer Traceflow: Traceflow now supports Distributed Load Balancer for troubleshooting communication failures from endpoints deployed in vSphere with Tanzu to a service endpoint via the Distributed Load Balancer.

  •  Monitoring

Events and Alarms

  • Capacity Dashboard: Maximum Capacity, Maximum Capacity Threshold, Minimum Capacity Threshold
  • Edge Health: Standby move to different Edge node, Datapath thread deadlocked, NSX-T Edge core file has been generated, Logical Router failover event, Edge process failed, Storage Latency High, Storage Error
  • IDS/IPS: NSX-IDPS Engine Up/Down, NSX-IDPS Engine CPU Usage exceeded 75%, NSX-IDPS Engine CPU Usage exceeded 85%, NSX-IDPS Engine CPU Usage exceeded 95%, Max events reached, NSX-IDPS Engine Memory Usage exceeded 75%, NSX-IDPS Engine Memory Usage exceeded 85%, NSX-IDPS Engine Memory Usage exceeded 95%
  • IDFW: Connectivity to AD server, Errors during Delta Sync
  • Federation: GM to GM Split Brain
  • Communication: Control Channel to Transport Node Down, Control Channel to Transport Node Down for too Long, Control Channel to Manager Node Down, Control Channel to Manager Node Down for too Long, Management Channel to Transport Node Down, Management Channel to Transport Node Down for too Long, Manager FQDN Lookup Failure, Manager FQDN Reverse Lookup Failure

ERSPAN for ENS fast path: Port mirroring is now supported on the ENS fast path.

System Health Plugin Enhancements: System Health plugin enhancements and status monitoring of processes running on different nodes ensure the system is running properly through timely detection of errors.

Live Traffic Analysis & Tracing: A live traffic analysis tool to support bi-directional traceflow between on-prem and VMC data centers.

Latency Statistics and Measurement for UA Nodes: Latency measurements between NSX Manager nodes per NSX Manager cluster and between NSX Manager clusters across different sites.

Performance Characterization for Network Monitoring using Service Insertion: Provides performance metrics for network monitoring using Service Insertion.

  • Usability and User Interface

Graphical Visualization of VPN: The Network Topology map now visualizes the VPN tunnels and sessions that are configured. This helps you quickly visualize and troubleshoot VPN configuration and settings.

Dark Mode: NSX UI now supports dark mode. You can toggle between light and dark mode.

Firewall Export & Import: NSX now provides the option for you to export and import firewall rules and policies as CSVs.

Enhanced Search and Filtering: Improved the search indexing and filtering options for firewall rules based on IP ranges.

Reducing Number of Clicks: With this UI enhancement, NSX-T now offers a convenient and easy way to edit Network objects.

  • Licensing

Multiple license keys: NSX can now accept multiple license keys of the same edition and metric. This functionality allows you to maintain all your license keys without having to combine them.

License Enforcement: NSX-T now ensures that users are license-compliant by restricting access to features based on license edition. New users will be able to access only those features that are available in the edition that they have purchased. Existing users who have used features that are not in their license edition will be restricted to only viewing the objects; create and edit will be disallowed.

New VMware NSX Data Center Licenses: Adds support for new VMware NSX Firewall and NSX Firewall with Advanced Threat Prevention license introduced in October 2020, and continues to support NSX Data Center licenses (Standard, Professional, Advanced, Enterprise Plus, Remote Office Branch Office) introduced in June 2018, and previous VMware NSX for vSphere license keys. See VMware knowledge base article 52462 for more information about NSX licenses.

  • AAA and Platform Security

Security Enhancements for Use of Certificates And Key Store Management: With this architectural enhancement, NSX-T offers a convenient and secure way to store and manage a multitude of certificates that are essential for platform operations and be in compliance with industry and government guidelines. This enhancement also simplifies API use to install and manage certificates.

Alerts for Audit Log Failures: Audit logs play a critical role in managing cybersecurity risks within an organization and are often the basis of forensic analysis, security analysis and criminal prosecution, in addition to aiding with diagnosis of system performance issues. Complying with NIST-800-53 and industry-benchmark compliance directives, NSX offers alert notification via alarms in the event of failure to generate or process audit data.

Custom Role Based Access Control: Users want the ability to configure roles and permissions customized to their specific operating environment. The custom RBAC feature provides granular, feature-based privilege customization, giving NSX customers the flexibility to enforce authorization based on least-privilege principles. This helps users fulfill specific operational requirements or meet compliance guidelines. Please note that in NSX-T 3.1, only Policy-based features are available for role customization.

FIPS – Interoperability with vSphere 7.x: Cryptographic modules in use with NSX-T are FIPS 140-2 validated since NSX-T 2.5. This change extends formal certification to incorporate module upgrades and interoperability with vSphere 7.0.

  • NSX Data Center for vSphere to NSX-T Data Center Migration

Migration of NSX for vSphere Environment with vRealize Automation: The Migration Coordinator now interacts with vRealize Automation (vRA) in order to migrate environments where vRealize Automation provides automation capabilities. This will offer a first set of topologies which can be migrated in an environment with vRealize Automation and NSX-T Data Center. Note: This will require support on vRealize Automation.

Modular Distributed Firewall Config Migration: The Migration Coordinator is now able to migrate firewall configuration and state from an NSX Data Center for vSphere environment to an NSX-T Data Center environment. This functionality allows a customer to migrate virtual machines (using vMotion) from one environment to the other and keep their firewall rules and state.

Migration of Multiple VTEP: The NSX Migration Coordinator now has the ability to migrate environments deployed with multiple VTEPs.

Increase Scale in Migration Coordinator to 256 Hosts: The Migration Coordinator can now migrate up to 256 hypervisor hosts from NSX Data Center for vSphere to NSX-T Data Center.

Migration Coordinator coverage of Service Insertion and Guest Introspection: The Migration Coordinator can migrate environments with Service Insertion and Guest Introspection. This allows partners to offer a migration solution integrated with the complete migration workflow.

Upgrade Considerations
API Deprecations and Behavior Changes

Retention Period of Unassigned Tags: In NSX-T 3.0.x, NSX Tags with 0 Virtual Machines assigned are automatically deleted by the system after five days. In NSX-T 3.1.0, the system task has been modified to run on a daily basis, cleaning up unassigned tags that are older than one day. There is no manual way to force delete unassigned tags.

I recommend reviewing the known issues sections: General  |  Installation  |  Upgrade  |  NSX Edge  |  NSX Cloud  |  Security  |  Federation

Enablement Links
Release Notes Click Here  |  What’s New  |  General Behavior Changes  |  API and CLI Resources  |  Resolved Issues  |  Known Issues
docs.vmware.com/NSX-T Installation Guide  |  Administration Guide  |  Upgrade Guide  |  Migration Coordinator  |  VMware NSX Intelligence

REST API Reference Guide  |  CLI Reference Guide  |  Global Manager REST API

Upgrading Docs Upgrade Checklist  |  Preparing to Upgrade  |  Upgrading  |  Upgrading NSX Cloud Components  |  Post-Upgrade Tasks

Troubleshooting Upgrade Failures

Installation Docs Preparing for Installation  |  NSX Manager Installation  |  Installing NSX Manager Cluster on vSphere  |  Installing NSX Edge

vSphere Lifecycle Manager  |  Host Profile integration  |  Getting Started with Federation  |  Getting Started with NSX Cloud

Migrating Docs Migrating NSX Data Center for vSphere  |  Migrating vSphere Networking  |  Migrating NSX Data Center for vSphere with vRA
Requirements Docs NSX Manager Cluster  |  System  |  NSX Manager VM & Host Transport Node System
NSX Edge VM System  |  NSX Edge Bare Metal  |  Bare Metal Server System  |  Bare Metal Linux Container
Compatibility Information Ports Used  |  Compatibility Guide (Select NSX-T)  |  Product Interoperability Matrix  |
Downloads Click Here
Hands On Labs (New) HOL-2103-01-NET – VMware NSX for vSphere Advanced Topics

HOL-2103-02-NET – VMware NSX Migration Coordinator

HOL-2103-91-NET – VMware NSX for vSphere Flow Monitoring and Traceflow

HOL-2122-01-NET – NSX Cloud Consistent Networking and Security across Enterprise, AWS & Azure

HOL-2122-91-ISM – NSX Cloud Consistent Networking and Security across Enterprise, AWS & Azure Lightning Lab

VMworld 2020 Sessions Update on NSX-T Switching: NSX on VDS (vSphere Distributed Switch) VCNC1197

Demystifying the NSX-T Data Center Control Plane VCNC1164

NSX-T security and compliance deep dive ISNS2256

NSX Data Center for vSphere to NSX-T Migration: Real-World Experience VCNC1590

Blogs NSX-T 3.0 – Innovations in Cloud, Security, Containers, and Operations
 

 

Create an ESXi installation ISO with custom drivers in 9 easy steps!


One of the challenges in running a VMware-based home lab is working with old / inexpensive hardware while running the latest software. It's a balance that is sometimes frustrating, but when it works it is very rewarding. Most recently I decided to move to 10GbE from my InfiniBand 40Gb network. Part of this transition was to create an ESXi ISO with the latest build (6.7U3) and the appropriate network card drivers. In this video blog post I'll show 9 easy steps to create your own customized ESXi ISO and how to pinpoint IO cards on the VMware HCL.

** Update 03/06/2020 ** Though I had good luck with the HP 593742-001 NC523SFP DUAL PORT SFP+ 10Gb card in my Gen 4 home lab, I found it faulty when running in my Gen 5 home lab. Could be I was using a PCIe x4 slot in Gen 4, or it could be that the card runs too hot to touch. For now this card has been removed from the VMware HCL, HP has advisories out about it, and after doing some poking around there seem to be lots of issues with it. I'm looking for a replacement and may go with the HP NC550SFP. However, the steps in this video aren't only for this card; they help you better understand how to add drivers to an ISO.

Here are the written steps I took from my video blog.  If you are looking for more detail, watch the video.

Before you start, make sure you have PowerCLI installed, have downloaded these files, and have placed them in c:\tmp.

I started up PowerCLI and ran the following commands:

1) Add the ESXi Update ZIP file to the depot:

Add-EsxSoftwareDepot C:\tmp\update-from-esxi6.7-6.7_update03.zip

2) Add the QLogic offline bundle ZIP file to the depot:

Add-EsxSoftwareDepot 'C:\tmp\qlcnic-esx55-6.1.191-offline_bundle-2845912.zip'

3) Make sure the files from step 1 and 2 are in the depot:

Get-EsxSoftwareDepot

4) Show the profile names from update-from-esxi6.7-6.7_update03. The default command only shows part of the name; to see the full name, append '| select name':

Get-EsxImageProfile | select name

5) Create a clone profile to start working with.

New-EsxImageProfile -cloneprofile ESXi-6.7.0-20190802001-standard -Name ESXi-6.7.0-20190802001-standard-QLogic -Vendor QLogic

6) Validate the QLogic driver is loaded in the local depot. It should match the driver from step 2. Make sure you note the name and version number columns; we'll need to combine these two with a space in the next step.

Get-EsxSoftwarePackage -Vendor q*

7) Add the software package to the cloned profile. Tip: For 'SoftwarePackage:' you should enter the 'name', a space, then the 'version number' from step 6. If you just use the short name it might not work.

Add-EsxSoftwarePackage

ImageProfile: ESXi-6.7.0-20190802001-standard-QLogic
SoftwarePackage[0]: net-qlcnic 6.1.191-1OEM.600.0.0.2494585

8) Optional: Compare the profiles, to see differences, and ensure the driver file is in the profile.

Get-EsxImageProfile | select name   << Run this if you need a reminder on the profile names

Compare-EsxImageProfile -ComparisonProfile ESXi-6.7.0-20190802001-standard-QLogic -ReferenceProfile ESXi-6.7.0-20190802001-standard

9) Create the ISO

Export-EsxImageProfile -ImageProfile "ESXi-6.7.0-20190802001-standard-QLogic" -ExportToIso -FilePath c:\tmp\ESXi-6.7.0-20190802001-standard-QLogic.iso

That’s it!  If you like my ‘no-nonsense’ videos and blogs that get straight to the point… then post a comment or let me know… Else, I’ll start posting boring video blogs!


 

FIX for Netgear Orbi Router / Firewall blocks additional subnets


Last April my trusty Netgear switch finally gave in. I bought a nifty Dell PowerConnect 6224 switch and have been working with it off and on. About the same time, I decided to update my home network with the Orbi WiFi System (RBK50) AC3000 by Netgear. My previous Netgear WiFi router worked quite well, but I really needed something to support multiple locations seamlessly.

The Orbi mesh has a primary device and allows satellites to be connected to it. It creates a WiFi mesh that lets devices move from room to room or building to building seamlessly. I've had it up for a while now and it's been working out great; that is, until I decided to ask it to route more than one subnet. In this blog I'll show you the steps I took to overcome this feature limitation, but like all content on my blog this is for my reference; travel, use, or follow at your own risk.

To understand the problem we first need to understand the network layout. My Orbi router is the gateway of last resort and it supplies DHCP and DNS services. In my network I have two subnets, which are untagged VLANs: VLAN 74 (172.16.74.x/24) and VLAN 75 (172.16.75.x/24). VLAN 74 is used by my home devices and VLAN 75 is where I manage my ESXi hosts. I have enabled RIP v2 on the Orbi and on the Dell 6224 switch. The routing tables are populated correctly, and I can ping from any internal subnet to any host without issue, except when the Orbi is involved.

Issue: Hosts on VLAN 75 are not able to get to the internet. Hosts on VLAN 75 can resolve DNS names (example: yahoo.com) but they cannot ping any host on the internet. Conversely, VLAN 74 can ping internet hosts and get to the internet. I'd like my hosts on VLAN 75 to have all the same functionality as my hosts on VLAN 74.

Findings: By default, the primary Orbi router blocks any host that is not on VLAN 74 from getting to the internet. I believe Netgear enabled this block to limit the number of devices the Orbi could NAT; I can only guess that either the router just can't handle the load, or this was the maximum Netgear tested it to. I found this firewall block by logging into the CLI of my Orbi and looking at the iptables settings. There I could clearly see a firewall rule blocking hosts that were not part of VLAN 74.

Solution:  Adjust the Orbi to allow all VLAN traffic (USE AT YOUR OWN RISK)

  1. Enable Telnet access on your Primary Orbi Router.
    1. Go to http://{your orbi ip address}/debug.htm
    2. Choose ‘Enable Telnet’ (**reminder to disable this when done**)
    3. Telnet into the Orbi Router (I just used putty)
    4. Logon as root using your routers main password
  2. I issued the command 'iptables -t filter -L loc2net'. From the output of this command I can see that line 5 is dropping all traffic that is not (!) VLAN 74.
  3. Let's remove this firewall rule. The one I want to target is the 5th in the list; yours may vary. This command will remove it: 'iptables -t filter -D loc2net 5'
    • NOTES:
    • Router Firmware Version V2.5.1.16 (Noted: 10.2020): It appears that more recent firmware updates have changed the targeting steps. I noticed in Router Firmware Version V2.5.1.16 I had to add 2 to the targeted line number to remove it with the iptables command. This may vary for the device being worked on.
    • Router Firmware Version V2.5.2.4 (Noted: Jan-2021): It appears the targeting steps are fixed again in this version.
    • Again, as with all my posts, blogs, and videos, this is for my records and not for any intended purpose.
  4. Next, we need to clean up a post-routing issue: 'iptables -t nat -I POSTROUTING 1 -o brwan -j MASQUERADE'
  5. A quick test and I can now PING and get to the internet from VLAN 75
  6. Disconnect from Telnet and disable it on your router.

Note: Unfortunately, this is not a permanent fix. Once you reboot your router the old settings come back. The good news is it's only two or three lines to fix this problem. Check out the links below for more information and a script.

Easy Copy Commands for my reference:

iptables -t filter -L loc2net

iptables -t filter -D loc2net 7  << Check this number

iptables -t nat -I POSTROUTING 1 -o brwan -j MASQUERADE
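
For convenience, here is a minimal sketch of those same commands collected into one snippet you could paste back in over telnet after a reboot. The rule number is from my firmware version and will likely differ on yours, so always list the chain first and verify before deleting anything:

# Re-apply the Orbi firewall fix after a reboot (run from the Orbi telnet shell).
# List the loc2net chain and confirm which rule drops non-VLAN-74 traffic.
iptables -t filter -L loc2net
# Delete that rule (5 on my firmware; check this number on yours).
iptables -t filter -D loc2net 5
# Restore the post-routing masquerade rule.
iptables -t nat -I POSTROUTING 1 -o brwan -j MASQUERADE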

If you like my ‘no-nonsense’ blog articles that get straight to the point… then post a comment or let me know… Else, I’ll start writing boring blog content.


No web interface on a Dell PowerConnect 6224 Switch


I picked up a Dell PowerConnect 6224 switch the other day as my older Netgear switch (2007) finally died. After connecting to it via console cable (9600,8,1,none) I updated the firmware image to the latest revision. I then followed the "Dell Easy Setup Wizard", which, by the way, stated the web interface would work after the wizard completed. After completing the easy wizard I opened a browser to the switch IP address, which failed. I then pinged the switch IP address; yes, it replied. Next, I rebooted the switch; still no connection.

How did I fix this?

1- I went back into the console and entered the following command:

console(config)#ip http server

2- Next I issued a 'show run' to ensure the command was present:

console#show run
!Current Configuration:
!System Description “PowerConnect 6224, 3.3.18.1, VxWorks 6.5”
!System Software Version 3.3.18.1
!Cut-through mode is configured as disabled
!
configure
stack
member 1 1
exit
ip address 172.16.74.254 255.255.255.0
ip default-gateway 172.16.74.1
ip http server
username “admin” password HASHCODE level 15 encrypted
snmp-server community public rw
exit

3 – This time I connected to the switch via a browser without issue.

4 – Finally, I saved the running configuration:

console#copy running-config startup-config

This operation may take a few minutes.
Management interfaces will not be available during this time.

Are you sure you want to save? (y/n) y

Configuration Saved!
console#

Summary: These were some pretty basic commands to get the HTTP service up and running, but I'm sure I'll run into this again and I'll have this blog to refer to. Next, I'm off to set up some VLANs and a few static routes.

If you like my ‘no-nonsense’ blog articles that get straight to the point… then post a comment or let me know… Else, I’ll start writing boring blog content.

vCenter Server 6.5 – Migrate VMs from Virtual Distributed Switch to Standard Switch


I was doing some testing the other day and had migrated about 25 test VMs from a Standard Switch (vSS) to a Virtual Distributed Switch (vDS). However, to complete my testing I needed the VMs migrated back to a vSS so I could delete the vDS entirely. In this blog I'm going to document the steps I took to accomplish this.

The first step in completing this migration is to move the VMs from the vDS port group to the vSS port group. One option could be editing each VM, but with 24 VMs, each with 2 vNICs, that would be about 100+ clicks to complete, which is very slow. A more efficient way is to use the vDS "Migrate VMs" option.

  • In the vCenter Server 6.5 HTML client, I clicked on Network, then under my vDS I right-clicked on my active vDS port group
  • I then chose "Migrate VMs to Another Network"

 

Second, I chose Browse; a new pop-up window appeared and I chose the vSS port group that I wanted to migrate to, then selected OK. I confirmed the 'Destination Network' was correct, then clicked Next.

Third, I wanted to migrate both network adapters, so I selected 'All virtual machines', then Next.

Note: If I wanted to migrate individual vNICs instead of all of them, I could have expanded each VM and then chosen which network adapter to migrate.

Fourth, I reviewed the confirmation page, confirmed all 24 VMs would be migrated, then clicked Next.

Fifth, here is the final report… All the VMs have been migrated.

Finally, now that all VMs have been removed from the vDS port group, I can remove the vDS from my test environment.

If you like my ‘no-nonsense’ blog articles that get straight to the point… then post a comment or let me know… Else, I’ll start writing boring blog content.

Home Lab Gen IV – Part V Installing Mellanox HCAs with ESXi 6.5


The next step on my InfiniBand home lab journey was getting the InfiniBand HCAs to play nice with ESXi. To do this I needed to update the HCA firmware, which proved to be a bit of a challenge. In this blog post I go into how I solved this issue and got them working with ESXi 6.5.

My initial HCA selection was the ConnectX (aka HP INFINIBAND 4X DDR PCI-E HCA CARD 452372-001) and the Mellanox MHGA28-XTC InfiniHost III HCA; these two cards proved to be a challenge when updating their firmware. I tried all types of operating systems, different drivers, different motherboards, and different MFT tool versions, but they would not update or be recognized by the OS. The only thing I didn't try was Linux. The Mellanox forums are filled with folks trying to solve these issues with mixed success. I went with these cheaper cards and they simply do not have the product support necessary. I don't recommend the use of these cards with ESXi and have migrated to a ConnectX-3, which you will see below.

Updating the ConnectX 3 Card:

After a little trial and error, here is how I updated the firmware on the ConnectX-3. I found the ConnectX-3 card worked very well with Windows 2012; I was able to install the latest Mellanox OFED for Windows (aka the Windows drivers for the Mellanox HCA card) and update the firmware very smoothly.

First, I confirmed the drivers via Windows Device Manager (update to the latest if needed).

Once you confirm Windows device functionality, install the Mellanox Firmware Tools for Windows (aka WinMFT).

Next, it’s time to update the HCA firmware. To do this you need to know the exact model number and sometimes the card revision. Normally this information can be found on the back of your HCA. With this in hand go to the Mellanox firmware page and locate your card then download the update.

After you download the firmware, place it in an accessible directory. Next, open a command prompt, navigate to the WinMFT directory, and use the 'mst status' command to reveal the HCA identifier, or MST Device Name. If this command works, it is a good sign your HCA is working properly and communicating with the OS. Next, I used the flint command to update my firmware. The syntax is: flint -d <MST Device Name> -i <Firmware Name> burn
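
As a sketch of what that looks like on a ConnectX-3, the device name and firmware file name below are made-up examples; use the value reported by 'mst status' and the file name from your firmware download:

flint -d mt4099_pci_cr0 -i fw-ConnectX3-rel.bin burn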

Tip: If you are having trouble with your Mellanox HCA I highly recommend the Mellanox communities. The community there is generally very responsive and helpful!

Installation of ESXi 6.5 with Mellanox ConnectX-3

I would love to tell you how easy this was, but the truth is it was hard. Again, old HCAs with new ESXi doesn't equal easy or simple to install, but it does equal home lab fun. Let me save you hours of work; here is the simple solution for getting Mellanox ConnectX cards working with ESXi 6.5. In the end I was able to get ESXi 6.5 working with my ConnectX card (aka HP INFINIBAND 4X DDR PCI-E HCA CARD 452372-001) and with my ConnectX-3 CX354A.

Tip: I do not recommend the use of the ConnectX card (aka HP INFINIBAND 4X DDR PCI-E HCA CARD 452372-001) with ESXi 6.x. No matter how I tried, I could not update its firmware and it has VERY limited or non-existent support. Save time: go with a ConnectX-3 or above.

After I installed ESXi 6.5, I ran the following commands and it worked like a champ.

Disable native driver for vRDMA

  • esxcli system module set --enabled=false -m=nrdma
  • esxcli system module set --enabled=false -m=nrdma_vmkapi_shim
  • esxcli system module set --enabled=false -m=nmlx4_rdma
  • esxcli system module set --enabled=false -m=vmkapi_v2_3_0_0_rdma_shim
  • esxcli system module set --enabled=false -m=vrdma

Uninstall default driver set

  • esxcli software vib remove -n net-mlx4-en
  • esxcli software vib remove -n net-mlx4-core
  • esxcli software vib remove -n nmlx4-rdma
  • esxcli software vib remove -n nmlx4-en
  • esxcli software vib remove -n nmlx4-core
  • esxcli software vib remove -n nmlx5-core

Install Mellanox OFED 1.8.2.5 for ESXi 6.x.

  • esxcli software vib install -d /var/log/vmware/MLNX-OFED-ESX-1.8.2.5-10EM-600.0.0.2494585.zip


After a quick reboot, I had 40Gb networking up and running. I did a few vmkpings between hosts and they pinged perfectly.
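
If you want to double-check the same things from the ESXi shell, here is a rough sketch; the vmkernel interface name and the peer IP are examples from my lab layout, so substitute your own:

esxcli software vib list | grep -i mlx     # list installed VIBs and look for the Mellanox (mlx) entries
vmkping -I vmk1 172.16.75.12               # ping another host's vmkernel IP over the 40Gb interface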

So, what's next? Now that I have the HCA working, I need to get VSAN (if possible) working with my new high-speed network, but that, folks, is another post.

If you like my ‘no-nonsense’ blog articles that get straight to the point… then post a comment or let me know… Else, I’ll start writing boring blog content.

FREE 5-part VMware NSX webcast series!


The VMware education department is starting a free 5-session NSX webcast series. Below is information on session one, starting on February 01, 2018. However, when you click on the RSVP button you'll have the option to register for all 5 sessions! I'd recommend registering now and taking advantage of this great opportunity.

TIP: After going to the RSVP webpage, pay attention to the session description, because sessions are offered in different regions (AMER, APJ, and EMEA). I'd suggest choosing the one closest to your region.

Here are the session descriptions for AMER. All regions seem to have the same descriptions; the only difference is the time zone.

Session 1: Simplify Network Provisioning with Logical Routing and Switching using VMware NSX (AMER Session)

Date: Thursday, February 01, 2018

Time: 08:00 AM Pacific Standard Time

Duration: 1 hour

Summary

Did you know it’s possible to extend LANs beyond their previous boundaries and optimize routing in the data center? Or decouple virtual network operations from your physical environment to literally eliminate potential network disruptions for future deployments? Join us to learn how VMware NSX can make these a reality. We’ll also cover the networking components of NSX to help you understand how they provide solutions to three classic pain points in network operations:

  • Non-disruptive network changes
  • Optimized East-West routing
  • Reduction in deployment time through

Session 2: Automate Your Network Services Deployments with VMware NSX and vRealize Automation (AMER Session)

Date: Thursday, February 22, 2018

Time: 08:00 AM Pacific Standard Time

Duration: 1 hour

Summary

Can you automate your Software-Defined Data Center (SDDC) without automating network services? Of course not! In this session, we’ll discuss building your vRealize Automation blueprints with automated network services deployments from VMware NSX.

Session 3: Design Multi-Layered Security in the Software-Defined Datacenter using VMware vSphere 6.5 and VMware NSX 6.3 (AMER Session)

Date: Thursday, March 08, 2018

Time: 08:00 AM Pacific Standard Time

Duration: 1 hour

Summary

Did you know that more than 1.5 billion data records were compromised in the first half of 2017? Experts are expecting these numbers to grow. Are you prepared?  Join us to learn how a design based on VMware vSphere and VMware NSX can help you protect the integrity of your information as well as your organization. Among the areas covered will be the VMware ESXi host within vSphere that includes the host firewall and virtual machine encryption, along with the VMware vCenter layer that provides certificate management. We’ll also dive into a number of features within NSX, including the distributed Logical Router and Distributed Firewall that protect traffic within the data center and the Edge Services Gateway that secures north/south traffic through the edge firewall and virtual private network.

Session 4: Advanced VMware NSX: Demystifying the VTEP, MAC, and ARP Tables (AMER Session)

Date: Thursday, March 29, 2018

Time: 08:00 AM Pacific Daylight Time

Duration: 1 hour

Summary

The VMware NSX controllers are the central control point for all logical switches within a network and maintain information for all virtual machines, hosts, logical switches, and VXLANs. If you ever wanted to efficiently troubleshoot end-to-end communications in an NSX environment, it is imperative to understand the role of the NSX controllers, what information they maintain, and how the tables are populated. Well, look no further. Give us an hour and you will see the various agents that the NSX controllers use for proper functionality, and learn to use the NSX Central CLI to display the contents of the VTEP, MAC, and ARP tables. We will examine scenarios that would cause the contents of these tables to change and confirm the updates. Finally, we will examine, in detail, Controller Disconnected Operation and how this feature can minimize downtime.

Session 5: That Firewall Did What? Advanced Troubleshooting for the VMware NSX Distributed Firewall (AMER Session)

Date: Thursday, April 19, 2018

Time: 08:00 AM Pacific Daylight Time

Duration: 1 hour

Summary:

The VMware NSX Distributed Firewall (DFW) is the hottest topic within the NSX community. It is the WOW of micro-segmentation. But many questions arise. Who made the rule? Who changed the rule? Is the rule working? Where are these packets being stopped? Why aren’t these packets getting through? What is happening with my implementation of the DFW? These questions can be answered using the native NSX tools. We will give you an overview on how to track, manage, and troubleshoot packets traveling through the DFW using a combination of User Interface (UI) tools, the VMware Command Line Interface (vCLI) to view logs manually, and integrating with VMware vRealize Log Insight (vRLI) and VMware vRealize Network Insight (vRNI).

If you like my ‘no-nonsense’ blog articles that get straight to the point… then post a comment or let me know… Else, I’ll start writing boring blog content.

Review: Wireless Network Devices All in One vs. Standalone


I'm sure, like most of my fellow computer geeks, I get asked quite a bit about home wireless networks. Well, I've been in the market for a new cable modem and router, and in the past I've never recommended the "all in one" solution (meaning cable modem and router/firewall in one unit). Mainly this recommendation was based on my field experience back in 2007 and seeing so many of them fail. This week, going against my own advice, I gave the Netgear C3700-100NAS all-in-one a try for $99. Not a bad deal: it means fewer cables, it has an integrated DOCSIS 3.0 cable modem, and it's on Cox Phoenix AZ's supported list. This unit worked well for about 20 minutes, and as I was reading reviews about its issues, it started having them. Over and over again it slowed down. You'd think by 2016 they'd have the all-in-one finally figured out, but alas they don't. My recommendation still stands: avoid the all-in-one.

What I have been recommending for home users in the Phoenix area with Cox Cable running their 60-100 Mbps internet are the Arris Motorola SB6141 cable modem and the NETGEAR WNDR4500-100PAS N900 wireless router. I've had this combo since 2012 and it's been rock solid. If I do have an issue with this combo, it is usually outside of their control, meaning the cable company is having an issue.

I bought my pair at Newegg: the WNDR4500 for $89 and the SB6141 for $69.

When I bought mine (Feb-2016), Newegg also gave me a TP-LINK TL-WR841N and an N150 wireless router.

If there is enough interest in the post, I’ll post up how the other 2 routers work out. Enjoy!

If you like my ‘no-nonsense’ blog articles that get straight to the point… then post a comment or let me know… Else, I’ll start writing boring blog content.

Solved: WARNING: Link is up but PHY type 0x3 is not recognized – Can cause ESXi 6 purple screens


The Error >> When running an Intel X710 NIC with the ESXi i40e driver, you notice your vmkernel.log is completely full of the error "WARNING: Link is up but PHY type 0x3 is not recognized".

The Solution >> Ensure the X710 firmware is at 17.5.11 (shown as 5.04 in ESXi) and the ESXi i40e driver is at 1.4.26 or 1.4.28, and these errors stop.

The Follow-up >> Check your NIC on the VMware HCL for the correct driver/firmware guidance. This is the link I used.
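
To see which driver and firmware you are currently running, something like this from the ESXi shell works (vmnic0 is just an example; use whichever vmnic maps to your X710):

esxcli network nic get -n vmnic0       # shows the driver name, driver version, and firmware version
esxcli software vib list | grep i40e   # shows the installed i40e driver VIB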

Other notes…

Sending millions of these PHY errors to your event logs could be causing other issues for your ESXi host. Look for local boot disk latency or networking errors in your ESXi host event logs. Once you apply this solution these issues should stop. If not, you may have other issues impacting your boot disks.

*Updates*

  • After applying this solution we then noticed the vmkernel log started to populate with 'driver issue detected, PF reset issued'; the solution for this is to disable TSO/LRO (VMware KB 205140; see the sketch after this list).
  • 04-10-2017 There is a new VMware driver listed for the X710; I will be testing it soon and will post the results. The release notes indicate fixes for the following:
    – Fix duplicate multicast packet issue
    – Fix PSOD caused by small TSO segmentation
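
For reference, here is a minimal sketch of the advanced settings usually involved in turning TSO/LRO off. These option names are my assumption of what applies here, so confirm against the KB for your build before changing anything:

esxcli system settings advanced set -o /Net/UseHwTSO -i 0             # disable hardware TSO (IPv4)
esxcli system settings advanced set -o /Net/UseHwTSO6 -i 0            # disable hardware TSO (IPv6)
esxcli system settings advanced set -o /Net/TcpipDefLROEnabled -i 0   # disable LRO on the default TCP/IP stack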

If you like my ‘no-nonsense’ blog articles that get straight to the point… then post a comment or let me know… Else, I’ll start writing boring blog content.

ESXi Host NIC failure and the Web client vSwitch orange line doesn’t move? — The results are Shocking!


Okay, the title was a bit dramatic, but it got your attention. Now, keeping with my quest to deliver no-nonsense blog articles, here is what the orange line means…

Question 1 – What is the function of the orange line when selecting a vmnic, port group, or vSwitch while viewing them in the Web client network settings?

The orange line is showing you the teaming order for the pNICs or vmnics based on their vSwitch or port group teaming policy. In this screenshot, the policy is Active / Active for both vmnic0 and 1.

The orange line will not move to the other pNICs unless they are marked as "active" in the teaming policy. "Active in the teaming policy" vs. "which pNIC is passing traffic" are two different things; the orange line is not a representation of the latter.

 

Question 2 – How can I tell which pNIC is currently passing traffic?

The Web or Thick client vSwitch display (aka the orange line) doesn’t display the pNIC which is currently passing network traffic. You need to use ESXTOP to determine the active pNIC.

Simply go into ESXTOP, press 'n' for the network view, find your vSwitch, and it will lead you to the pNIC currently being used to pass traffic.

 

Question 3 – I had a pNIC failure; why isn't the Web client moving the orange line to the standby NIC?

Again… the orange line ONLY points to the Active pNIC in the teaming policy. In the screenshot below, the teaming policy is set up with vmnic3 as Active and vmnic2 as Standby.

Even though vmnic3 is down, traffic should be flowing through vmnic2. Use ESXTOP to determine this (See Question 2)

 

 

If you like my ‘no-nonsense’ blog articles that get straight to the point… then post a comment or let me know… Else, I’ll start writing boring blog content.