Possible Security issues with #solarwinds #loggly and #Trojan #BrowserAssistant PS


Lately, I haven’t had much involvement with malware, trojans, and viruses.  Recently, however, Norton Family started alerting me to a few websites I didn’t recognize on one of my personal PCs.  Norton reported these three sites: loggly.com | pads289.net | sun346.net.  I also noticed that all three sites showed up at the same date/time, and the Pads/Sun sites had the same ID number in their URLs (see pic below).  This behavior just seemed odd.  I didn’t initially recognize any of these sites, but a quick search revealed loggly.com was a SolarWinds product.  My mind started to wander: could this be related to their recent security issues?  Just to be clear, this post isn’t about any current issues with SolarWinds, VMware, or others; these issues were on my personal network.  I’m posting this information because many of us are working from home, have kids doing online school, and the last thing we need is a pesky virus slowing things down.

I use Norton Family on all of my personal PCs, and the first thing I did was block the sites on the affected PC and via the Internet firewall.

Next, I started searching the Internet to see what I could find out about these three sites.  Multiple security sites turned up no warnings on these URLs, no blacklists, whois looked normal; pretty much nothing alarming.  In fact, I was even running the Sophos UTM Home firewall, and it never alerted on this either.  Going directly to these sites resulted in a blank page.  Additionally, the PC seemed to run normally, with no popups or site redirection.  Really, it had no issues at all except that it just kept going to these odd sites.

That’s when I found urlscan.io.  I pointed it at one of the sites and I noticed there were several update.txt files.

When I clicked on update.txt, it brought me to this screen where I could view the contents of the text file via the screenshot.

A few things stood out in the text file: ‘Realistic Media Inc.’, ‘Browser Assistant’, and an MSI installable.  These looked like a program that could be installed on a PC.

Looking at the installed programs on the affected PC, I found a match.
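
If you want to check for this from a script instead of clicking through Apps & Features, here is a rough Python sketch that enumerates the standard Windows uninstall registry keys and flags anything matching the names from the update.txt file (‘Browser Assistant’ / ‘Realistic Media’).  Everything beyond those registry paths and names is just illustrative.

# List installed programs from the Windows uninstall registry keys and flag
# entries matching the names seen in the update.txt file.
import winreg

SUSPECT_NAMES = ("browser assistant", "realistic media")

UNINSTALL_KEYS = [
    (winreg.HKEY_LOCAL_MACHINE, r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall"),
    (winreg.HKEY_LOCAL_MACHINE, r"SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall"),
    (winreg.HKEY_CURRENT_USER, r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall"),
]

def installed_programs():
    for hive, path in UNINSTALL_KEYS:
        try:
            key = winreg.OpenKey(hive, path)
        except OSError:
            continue
        for i in range(winreg.QueryInfoKey(key)[0]):
            try:
                sub = winreg.OpenKey(key, winreg.EnumKey(key, i))
                name, _ = winreg.QueryValueEx(sub, "DisplayName")
                yield name
            except OSError:
                continue

for name in installed_programs():
    flag = " <-- suspicious" if any(s in name.lower() for s in SUSPECT_NAMES) else ""
    print(name + flag)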

A quick search, and sure enough lots of hits on this Trojan.

Next I ran Microsoft Safety Scanner, which removed some of it, and then I uninstalled the ‘Browser Assistant’ program.

Lastly, I sent an email to AWS and SolarWinds asking them to look into this issue.

Within 24 hours, Amazon responded with:  “The security concern that you have reported is specific to a customer application and / or how an AWS customer has chosen to use an AWS product or service.  To be clear, the security concern you have reported cannot be resolved by AWS but must be addressed by the customer, who may not be aware of or be following our recommended security best practices.  We have passed your security concern on to the specific customer for their awareness and potential mitigation.”

Within 24 hours, SolarWinds responded as well; they are working with me to see if there are any issues with this.

Summary:

This pattern for Trojans and malware/adware probably isn’t new to security folks, but either way I hope this blog helps you better understand odd behavior on your personal network.

Thanks for reading and please do reach out if you have any questions.

Reference Links / Tools:

If you like my ‘no-nonsense’ videos and blogs that get straight to the point… then post a comment or let me know… Else, I’ll start posting really boring content!

GA Release #VMware #vSphere + #vSAN 7.0 Update 1c/P02 | Announcement, information, and links


Announcing GA Releases of the following

  • VMware vSphere 7.0 Update 1c/P02 (Including Tanzu)
  • VMware vSAN™ 7.0 Update 1c/P02

Note: The included ESXi patch pertains to the Low severity Security Advisory for VMSA-2020-0029 & CVE-2020-3999

See the base table for all the technical enablement links.

Release Overview
vCenter Server 7.0 Update 1c | ISO Build 1732751

ESXi 7.0 Update 1c | ISO Build 17325551

What’s New vCenter
  • Physical NIC statistics: vCenter Server 7.0 Update 1c adds five physical NIC statistics: droppedRx, droppedTx, errorsRx, RxCRCErrors, and errorsTx, to the hostd.log file at /var/run/log/hostd.log to enable you to detect uncorrected networking errors and take the necessary corrective action (see the log-parsing sketch after this list)
  • Advanced Cross vCenter vMotion: With vCenter Server 7.0 Update 1c, in the vSphere Client, you can use the Advanced Cross vCenter vMotion feature to manage the bulk migration of workloads across vCenter Server systems in different vCenter Single Sign-On domains. Advanced Cross vCenter vMotion does not depend on vCenter Enhanced Linked Mode or Hybrid Linked Mode and works for both on-premises and cloud environments. Advanced Cross vCenter vMotion facilitates your migration from VMware Cloud Foundation 3 to VMware Cloud Foundation 4, which includes vSphere with Tanzu Kubernetes Grid, and delivers a unified platform for both VMs and containers, allowing operators to provision Kubernetes clusters from vCenter Server. The feature also allows a smooth transition to the latest version of vCenter Server by simplifying workload migration from any vCenter Server instance running 6.x or later
  • Parallel remediation on hosts in clusters that you manage with vSphere Lifecycle Manager baselines: With vCenter Server 7.0 Update 1c, you can run parallel remediation on ESXi hosts in maintenance mode in clusters that you manage with vSphere Lifecycle Manager baselines
  • Third-party plug-ins to manage services on the vSAN Data Persistence platform: With vCenter Server 7.0 Update 1c, you can enable third-party plug-ins to manage services on the vSAN Data Persistence platform from the vSphere Client, the same way you manage your vCenter Server system. For more information, see the vSphere with Tanzu Configuration and Management documentation.
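
Since these counters are written to hostd.log as plain text, they can be picked out with a quick script.  Below is a minimal sketch; the path and counter names come from the release note above, but the log-line pattern the regex assumes is a guess, so check a real hostd.log from 7.0 Update 1c and adjust it.

# Scan /var/run/log/hostd.log for the physical NIC counters added in
# vCenter/ESXi 7.0 Update 1c and print any line reporting a nonzero value.
# NOTE: the "<counter> ... <number>" pattern is an assumption about the log
# format; adjust the regex after inspecting a real hostd.log.
import re

COUNTERS = ("droppedRx", "droppedTx", "errorsRx", "RxCRCErrors", "errorsTx")
PATTERN = re.compile(r"(" + "|".join(COUNTERS) + r")\D+(\d+)")

with open("/var/run/log/hostd.log", errors="replace") as log:
    for line in log:
        for name, value in PATTERN.findall(line):
            if int(value) > 0:
                print(line.rstrip())
                break
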
What’s New vSphere With Tanzu
Supervisor Cluster

  • Supervisor Namespace Isolation with Dedicated T1 Router – Supervisor Clusters using NSX-T networking use a new topology where each namespace has its own dedicated T1 router.
    • Newly created Supervisor Clusters use this new topology automatically.
    • Existing Supervisor Clusters are migrated to this new topology during an upgrade.

  • Supervisor Clusters Support NSX-T 3.1.0 – Supervisor Clusters are compatible with NSX-T 3.1.0
  • Supervisor Cluster Version 1.16.x Support Removed – Supervisor Cluster version 1.16.x is now removed. Supervisor Clusters running 1.16.x should be upgraded to a newer version

Tanzu Kubernetes Grid Service for vSphere

  • HTTP/HTTPS Proxy Support  – Newly created Tanzu Kubernetes clusters can use a global HTTP/HTTPS Proxy for egress traffic as well as for pulling container images from internet registries.
  • Integration with Registry Service – Newly created Tanzu Kubernetes clusters work out of the box with the vSphere Registry Service. Existing clusters, once updated to a new version, also work with the Registry Service.
  • Configurable Node Storage  – Tanzu Kubernetes clusters can now mount an additional storage volume to virtual machines thereby increasing available node storage capacity. This enables users to deploy larger container images that might exceed the default 16GB root volume size.
  • Improved status information – WCPCluster and WCPMachine Custom Resource Definitions now implement conditional status reporting. Successful Tanzu Kubernetes cluster lifecycle management depends on a number of subsystems (for example, Supervisor, storage, networking) and understanding failures can be challenging. Now the WCPCluster and WCPMachine CRDs surface common status and failure conditions to ease troubleshooting.

Missing new default VM Classes introduced in vSphere 7.0 U1

  • After upgrading to vSphere 7.0.1, and then performing a vSphere Namespaces update of the Supervisor Cluster, running the command “kubectl get virtualmachineclasses” did not list the new VM class sizes 2x-large, 4x-large, 8x-large. This has been resolved and all Supervisor Clusters will be configured with the correct set of default VM Classes. 
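
If you would rather verify the VM classes from code than with kubectl, here is a rough sketch using the official Kubernetes Python client.  The API group and version (vmoperator.vmware.com / v1alpha1) are my assumption about how the Supervisor Cluster exposes VirtualMachineClass objects, so confirm them with ‘kubectl api-resources’ before relying on this.

# List VirtualMachineClass resources on a Supervisor Cluster to confirm the
# default classes (including 2x-large, 4x-large, 8x-large) are present.
# Assumes you are already logged in via 'kubectl vsphere login'; the group and
# version below are assumptions, not confirmed values.
from kubernetes import client, config

config.load_kube_config()  # uses the current kubeconfig context
api = client.CustomObjectsApi()

classes = api.list_cluster_custom_object(
    group="vmoperator.vmware.com", version="v1alpha1",
    plural="virtualmachineclasses")

for item in classes.get("items", []):
    print(item["metadata"]["name"])
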
What’s New ESXi
  • With ESXi 7.0 Update 1c, you can use the --remote-host-max-msg-len parameter to set the maximum length of syslog messages, up to 16 KiB, before they must be split. By default, the ESXi syslog daemon (vmsyslogd) strictly adheres to the maximum message length of 1 KiB set by RFC 3164, and longer messages are split into multiple parts. Set the maximum message length up to the smallest length supported by any of the syslog receivers or relays involved in the syslog infrastructure
  • With ESXi 7.0 Update 1c, you can use the installer boot option systemMediaSize to limit the size of system storage partitions on the boot media. If your system has a small footprint that does not require the maximum 138 GB system-storage size, you can limit it to the minimum of 33 GB. The systemMediaSize parameter accepts the following values:
    • min (33 GB, for single disk or embedded servers)
    • small (69 GB, for servers with at least 512 GB RAM)
    • default (138 GB)
    • max (consume all available space, for multi-terabyte servers)

The selected value must fit the purpose of your system. For example, a system with 1 TB of memory must use at least the small option (69 GB) for system storage. To set the boot option at install time, for example systemMediaSize=small, refer to Enter Boot Options to Start an Installation or Upgrade Script. For more information, see VMware knowledge base article 81166.

VMSA-2020-0029 Information for ESXi
VMSA-2020-0029 Severity: Low
CVSSv3 Range 3.3
Issue date: 12/17/2020
CVE numbers: CVE-2020-3999
Synopsis: VMware ESXi, Workstation, Fusion and Cloud Foundation updates address a denial of service vulnerability (CVE-2020-3999)
ESXi 7 Patch Info: VMware Patch Release ESXi 7.0 ESXi70U1c-17325551
This section derives from our full VMware Security Advisory VMSA-2020-0029 covering ESXi only.  It is accurate at the time of creation and it is recommended you reference the full VMSA for expanded or updated information.
What’s New vSAN
vSAN 7.0 Update 1c/P02 includes the following summarized fixes, as documented within the Resolved Issues sections for vCenter & ESXi:

  • Enhancements to DOM scrubber functionality
  • Improvements in checksum verification during write prepare in LLOG
  • Persistence in network settings of witness appliance while creating witness VM
  • Enhancement in storage capacity/usage calculation on host level
  • NFS File bench performance improvements
  • LSOM fixes for random high write latency spikes in vSAN all-flash
  • File services improvements

 

Technical Enablement
Release Notes vCenter Click Here  |  What’s New  |  Patches Contained in this Release  |  Product Support Notices  |  Resolved Issues  |  Known Issues
Release Notes ESXi Click Here  |  What’s New  |  Patches Contained in this Release  |  Product Support Notices  |  Resolved Issues  |  Known Issues
Release Notes vSAN 7.0 U1 Click Here  |  What’s New  |  VMware vSAN Community  |  Upgrades for This Release  |  Limitations  |  Known Issues
Release Notes Tanzu Click Here  |  What’s New  |  Learn About vSphere with Tanzu  |  Known Issues
docs.vmware.com/vSphere vCenter Server Upgrade  |   ESXi Upgrade  |  Upgrading vSAN Cluster  |   Tanzu Configuration & Management
Download Click Here
Compatibility Information ports.vmware.com/vSphere 7 + vSAN  |  Configuration Maximums vSphere 7  |  Compatibility Matrix  |  Interoperability
VMSA Reference VMSA-2020-0029  |  VMware Patch Release ESXi 7.0 ESXi70U1c-17325551

GA Release VMware NSX Data Center for vSphere 6.4.9 | Announcement, information, and links


Announcing GA Releases of the following

  • VMware NSX Data Center for vSphere 6.4.9 (See the base table for all the technical enablement links.)

 

Release Overview
VMware NSX Data Center for vSphere 6.4.9 | Build 17267008 

NSX for vSphere 6.4 End Of General Support Was Extended to 01/16/2022

lifecycle.vmware.com

What’s New
NSX Data Center for vSphere 6.4.9 adds usability enhancements and addresses a number of specific customer bugs. 

  • vSphere 7.0 Update 1 Support
  • VMware NSX – Functionality Updates for vSphere Client (HTML): The following VMware NSX features are now available through the vSphere Client: Service Definitions for Guest Introspection and Network Introspection. For a list of supported functionality, please see VMware NSX for vSphere UI Plug-in Functionality in vSphere Client.
  • Guest Introspection: Adds the ability to change logging level without requiring a restart of 3rd-party Guest Introspection partner service.
Minimum Supported Versions & Deprecation Notes
VMware declares minimum supported versions. This content has been simplified; please view the full details in the Versions, System Requirements, and Installation section.

For vSphere 6.5:

Recommended: 6.5 Update 3 Build Number 14020092.
Important: If you are using NSX Guest Introspection on vSphere 6.5, vSphere 6.5 P03 or higher is recommended.

VMware Product Interoperability Matrix | NSX-V 6.4.9 & vSphere 6.5

For vSphere 6.7:

Recommended: 6.7 Update 2
Important:  If you are using NSX Guest Introspection on vSphere 6.7, please refer to Knowledge Base Article KB57248 prior to installing NSX 6.4.6, and consult VMware Customer Support for more information.

For vSphere 7, Update 1 is now supported

Note: vSphere 6.0 has reached End of General Support and is not supported with NSX 6.4.7 onwards.

Guest Introspection for Windows

It is recommended that you upgrade VMware Tools to 10.3.10 before upgrading NSX for vSphere.

End of Life and End of Support Warnings

For information about NSX and other VMware products that must be upgraded soon, please consult the VMware Lifecycle Product Matrix.

  • NSX for vSphere 6.1.x reached End of Availability (EOA) and End of General Support (EOGS) on January 15, 2017. (See also VMware knowledge base article 2144769.)
  • vCNS Edges are no longer supported. You must upgrade to an NSX Edge before upgrading to NSX 6.3 or later.
  • NSX for vSphere 6.2.x has reached End of General Support (EOGS) as of August 20, 2018.

General Behavior Changes

If you have more than one vSphere Distributed Switch, and if VXLAN is configured on one of them, you must connect any Distributed Logical Router interfaces to port groups on that vSphere Distributed Switch. Starting in NSX 6.4.1, this configuration is enforced in the UI and API. In earlier releases, you were not prevented from creating an invalid configuration.  If you upgrade to NSX 6.4.1 or later and have incorrectly connected DLR interfaces, you will need to take action to resolve this. See the Upgrade Notes for details.

In NSX 6.4.7, the following functionality is deprecated in vSphere Client 7.0:

  • NSX Edge: SSL VPN-Plus (see KB79929 for more information)
  • Tools: Endpoint Monitoring (all functionality)
  • Tools: Flow Monitoring (Flow Monitoring Dashboard, Details by Service, and Configuration)
  • System Events: NSX Ticket Logger

For the complete list of NSX installation prerequisites, see the System Requirements for NSX section in the NSX Installation Guide.

For installation instructions, see the NSX Installation Guide or the NSX Cross-vCenter Installation Guide.

Also refer to the complete Deprecated and Discontinued Functionality section for all deprecated features, API removals, and behavior changes.

General Upgrade Considerations
For more information, notes, and considerations for upgrading, please see the Upgrade Notes & FIPS Compliance section.

  • To upgrade NSX, you must perform a full NSX upgrade including host cluster upgrade (which upgrades the host VIBs). For instructions, see the NSX Upgrade Guide including the Upgrade Host Clusters section.
  • Upgrading NSX VIBs on host clusters using VUM is not supported. Use Upgrade Coordinator, Host Preparation, or the associated REST APIs to upgrade NSX VIBs on host clusters.
  • System Requirements: For information on system requirements while installing and upgrading NSX, see the System Requirements for NSX section in NSX documentation.
  • Upgrade path for NSX: The VMware Product Interoperability Matrix provides details about the upgrade paths from VMware NSX.
  • Cross-vCenter NSX upgrade is covered in the NSX Upgrade Guide.
  • Downgrades are not supported:
    • Always capture a backup of NSX Manager before proceeding with an upgrade.
    • Once NSX has been upgraded successfully, NSX cannot be downgraded.
  • To validate that your upgrade to NSX 6.4.x was successful see knowledge base article 2134525.
  • There is no support for upgrades from vCloud Networking and Security to NSX 6.4.x. You must upgrade to a supported 6.2.x release first.
  • Interoperability: Check the VMware Product Interoperability Matrix for all relevant VMware products before upgrading.
    • Upgrading to NSX Data Center for vSphere 6.4.7: VIO is not compatible with NSX 6.4.7 due to multiple scale issues.
    • Upgrading to NSX Data Center for vSphere 6.4: NSX 6.4 is not compatible with vSphere 5.5.
    • Upgrading to NSX Data Center for vSphere 6.4.5: If NSX is deployed with VMware Integrated OpenStack (VIO), upgrade VIO to 4.1.2.2 or 5.1.0.1, as 6.4.5 is incompatible with previous releases due to spring package update to version 5.0.
    • Upgrading to vSphere 6.5: When upgrading to vSphere 6.5a or later 6.5 versions, you must first upgrade to NSX 6.3.0 or later. NSX 6.2.x is not compatible with vSphere 6.5. See Upgrading vSphere in an NSX Environment in the NSX Upgrade Guide.
    • Upgrading to vSphere 6.7: When upgrading to vSphere 6.7 you must first upgrade to NSX 6.4.1 or later. Earlier versions of NSX are not compatible with vSphere 6.7. See Upgrading vSphere in an NSX Environment in the NSX Upgrade Guide.
  • Partner services compatibility: If your site uses VMware partner services for Guest Introspection or Network Introspection, you must review the  VMware Compatibility Guide before you upgrade, to verify that your vendor’s service is compatible with this release of NSX.
  • Networking and Security plug-in: After upgrading NSX Manager, you must log out and log back in to the vSphere Web Client. If the NSX plug-in does not display correctly, clear your browser cache and history. If the Networking and Security plug-in does not appear in the vSphere Web Client, reset the vSphere Web Client server as explained in the NSX Upgrade Guide.
  • Stateless environments: In NSX upgrades in a stateless host environment, the new VIBs are pre-added to the Host Image profile during the NSX upgrade process. As a result, the NSX upgrade process on stateless hosts follows a different sequence (see the Upgrade Notes for the sequence).
  • Service Definitions functionality is not supported in NSX 6.4.7 UI with vSphere Client 7.0:
    For example, if you have an old Trend Micro Service Definition registered with vSphere 6.5 or 6.7, follow one of these two options:
    1. Option #1: Before upgrading to vSphere 7.0, navigate to the Service Definition tab in the vSphere Web Client, edit the Service Definition to 7.0, and then upgrade to vSphere 7.0.
    2. Option #2: After upgrading to vSphere 7.0, run the following NSX API to add or edit the Service Definition to 7.0.

POST  https://<nsmanager>/api/2.0/si/service/<service-id>/servicedeploymentspec/versioneddeploymentspec
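
For reference, a rough sketch of that call with Python is below.  Only the URL comes from the note above; the hostname, service ID, and request body are placeholders, and the exact versionedDeploymentSpec XML must be taken from the NSX API Guide or your partner’s documentation.

# Sketch of the Service Definition update call referenced above (Option #2).
# The URL is from the release notes; the body is a PLACEHOLDER -- pull the
# correct versionedDeploymentSpec XML from the NSX API Guide before use.
import requests

NSX_MANAGER = "nsxmanager.example.local"   # hypothetical hostname
SERVICE_ID = "service-1"                   # hypothetical service ID
AUTH = ("admin", "password")

body = "<versionedDeploymentSpec>...</versionedDeploymentSpec>"  # placeholder

resp = requests.post(
    f"https://{NSX_MANAGER}/api/2.0/si/service/{SERVICE_ID}"
    "/servicedeploymentspec/versioneddeploymentspec",
    data=body, headers={"Content-Type": "application/xml"},
    auth=AUTH, verify=False)  # verify=False only for lab/self-signed certs
resp.raise_for_status()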

Upgrade Consideration for NSX Components
Support for VM Hardware version 11 for NSX components

  • For new installs of NSX Data Center for vSphere 6.4.2, the NSX components (Manager, Controller, Edge, Guest Introspection) are on VM Hardware version 11.
  • For upgrades to NSX Data Center for vSphere 6.4.2, the NSX Edge and Guest Introspection components are automatically upgraded to VM Hardware version 11. The NSX Manager and NSX Controller components remain on VM Hardware version 8 following an upgrade. Users have the option to upgrade the VM Hardware to version 11. Consult KB (https://kb.vmware.com/s/article/1010675) for instructions on upgrading VM Hardware versions.
  • For new installs of NSX 6.3.x, 6.4.0, 6.4.1, the NSX components (Manager, Controller, Edge, Guest Introspection) are on VM Hardware version 8.

NSX Manager Upgrade

  • Important: If you are upgrading NSX 6.2.0, 6.2.1, or 6.2.2 to NSX 6.3.5 or later, you must complete a workaround before starting the upgrade. See VMware Knowledge Base article 000051624 for details.
  • If you are upgrading from NSX 6.3.3 to NSX 6.3.4 or later you must first follow the workaround instructions in VMware Knowledge Base article 2151719.
  • If you use SFTP for NSX backups, change to hmac-sha2-256 after upgrading to 6.3.0 or later because there is no support for hmac-sha1. See VMware Knowledge Base article 2149282  for a list of supported security algorithms.
  • When you upgrade NSX Manager to NSX 6.4.1, a backup is automatically taken and saved locally as part of the upgrade process. See Upgrade NSX Manager for more information.
  • When you upgrade to NSX 6.4.0, the TLS settings are preserved. If you have only TLS 1.0 enabled, you will be able to view the NSX plug-in in the vSphere Web Client, but NSX Managers are not visible. There is no impact to datapath, but you cannot change any NSX Manager configuration. Log in to the NSX appliance management web UI at https://nsx-mgr-ip/ and enable TLS 1.1 and TLS 1.2. This reboots the NSX Manager appliance.

Controller Upgrade

  • The NSX Controller cluster must contain three controller nodes. If it has fewer than three controllers, you must add controllers before starting the upgrade. See Deploy NSX Controller Cluster for instructions.
  • In NSX 6.3.3, the underlying operating system of the NSX Controller changes. This means that when you upgrade from NSX 6.3.2 or earlier to NSX 6.3.3 or later, instead of an in-place software upgrade, the existing controllers are deleted one at a time, and new Photon OS based controllers are deployed using the same IP addresses.

When the controllers are deleted, this also deletes any associated DRS anti-affinity rules. You must create new anti-affinity rules in vCenter to prevent the new controller VMs from residing on the same host.

See Upgrade the NSX Controller Cluster for more information on controller upgrades.

 Host Cluster Upgrade

  • If you upgrade from NSX 6.3.2 or earlier to NSX 6.3.3 or later, the NSX VIB names change.
    The esx-vxlan and esx-vsip VIBs are replaced with esx-nsxv if you have NSX 6.3.3 or later installed on ESXi 6.0 or later.
  • Rebootless upgrade and uninstall on hosts: On vSphere 6.0 and later, once you have upgraded from NSX 6.2.x to NSX 6.3.x or later, any subsequent NSX VIB changes will not require a reboot. Instead hosts must enter maintenance mode to complete the VIB change. This affects both NSX host cluster upgrade, and ESXi upgrade. See the NSX Upgrade Guide for more information.

NSX Edge Upgrade

  • Validation added in NSX 6.4.1 to disallow invalid distributed logical router configurations: In environments where VXLAN is configured and more than one vSphere Distributed Switch is present, distributed logical router interfaces must be connected to the VXLAN-configured vSphere Distributed Switch only. Upgrading a DLR to NSX 6.4.1 or later will fail in those environments if the DLR has interfaces connected to a vSphere Distributed Switch that is not configured for VXLAN. Use the API to connect any incorrectly configured interfaces to port groups on the VXLAN-configured vSphere Distributed Switch. Once the configuration is valid, retry the upgrade. You can change the interface configuration using

PUT /api/4.0/edges/{edgeId} or PUT /api/4.0/edges/{edgeId}/interfaces/{index}. See the NSX API Guide for more information.
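
As a rough illustration of that flow (not an exact payload), the sketch below reads the current interface configuration, repoints it at a port group on the VXLAN-configured VDS, and PUTs it back.  The edge ID, interface index, port group IDs, and the <connectedToId> element name are assumptions; confirm the schema in the NSX API Guide before using anything like this.

# Sketch of fixing an incorrectly connected DLR interface before retrying the
# upgrade: GET the interface, repoint it at a port group on the VXLAN VDS, PUT
# it back. The element holding the connected port group is assumed to be
# <connectedToId>; verify against your API response.
import requests

NSX_MANAGER = "nsxmanager.example.local"   # hypothetical
EDGE_ID = "edge-10"                        # the DLR's edge ID (hypothetical)
INDEX = 2                                  # interface index to fix
OLD_PORTGROUP = "dvportgroup-999"          # current, incorrect port group
NEW_PORTGROUP = "dvportgroup-101"          # port group on the VXLAN VDS
AUTH = ("admin", "password")

url = f"https://{NSX_MANAGER}/api/4.0/edges/{EDGE_ID}/interfaces/{INDEX}"
current = requests.get(url, auth=AUTH, verify=False)
current.raise_for_status()

fixed = current.text.replace(f"<connectedToId>{OLD_PORTGROUP}</connectedToId>",
                             f"<connectedToId>{NEW_PORTGROUP}</connectedToId>")

resp = requests.put(url, data=fixed,
                    headers={"Content-Type": "application/xml"},
                    auth=AUTH, verify=False)
resp.raise_for_status()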

  • Delete UDLR Control VM from vCenter Server that is associated with secondary NSX Manager before upgrading UDLR from 6.2.7 to 6.4.5:
    In a multi-vCenter environment, when you upgrade NSX UDLRs from 6.2.7 to 6.4.5, the upgrade of the UDLR virtual appliance (UDLR Control VM) fails on the secondary NSX Manager, if HA is enabled on the UDLR Control VM. During the upgrade, the VM with ha index #0 in the HA pair is removed from the NSX database; but, this VM continues to exist on the vCenter Server. Therefore, when the UDLR Control VM is upgraded on the secondary NSX Manager, the upgrade fails because the name of the VM clashes with an existing VM on the vCenter Server. To resolve this issue, delete the Control VM from the vCenter Server that is associated with the UDLR on the secondary NSX Manager, and then upgrade the UDLR from 6.2.7 to 6.4.5.
  • Host clusters must be prepared for NSX before upgrading NSX Edge appliances: Management-plane communication between NSX Manager and Edge via the VIX channel is no longer supported starting in 6.3.0. Only the message bus channel is supported. When you upgrade from NSX 6.2.x or earlier to NSX 6.3.0 or later, you must verify that host clusters where NSX Edge appliances are deployed are prepared for NSX, and that the messaging infrastructure status is GREEN. If host clusters are not prepared for NSX, upgrade of the NSX Edge appliance will fail. See Upgrade NSX Edge in the NSX Upgrade Guide for details.
  • Upgrading Edge Services Gateway (ESG):
    Starting in NSX 6.2.5, resource reservation is carried out at the time of NSX Edge upgrade. When vSphere HA is enabled on a cluster having insufficient resources, the upgrade operation may fail due to vSphere HA constraints being violated.

To avoid such upgrade failures, perform the following steps before you upgrade an ESG:

The following resource reservations are used by the NSX Manager if you have not explicitly set values at the time of install or upgrade.

  1. Always ensure that your installation follows the best practices laid out for vSphere HA. Refer to knowledge base article KB1002080.
  2. Use the NSX tuning configuration API:
    PUT https://<nsxmanager>/api/4.0/edgePublish/tuningConfiguration
    ensuring that the values for edgeVCpuReservationPercentage and edgeMemoryReservationPercentage fit within the available resources for the form factor (see table above for defaults).
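
A minimal sketch of working with that API is below.  Only the URL and the two parameter names come from the note above; the payload format is an assumption, so GET the current configuration first and mirror whatever document your NSX Manager returns.

# Sketch of checking and adjusting Edge resource-reservation tuning before an
# ESG upgrade. Only the endpoint and the two parameter names
# (edgeVCpuReservationPercentage / edgeMemoryReservationPercentage) come from
# the release notes; the payload format is an assumption.
import requests

NSX_MANAGER = "nsxmanager.example.local"   # hypothetical
AUTH = ("admin", "password")
URL = f"https://{NSX_MANAGER}/api/4.0/edgePublish/tuningConfiguration"

# 1. Read the current tuning configuration to see the exact document format.
current = requests.get(URL, auth=AUTH, verify=False)
current.raise_for_status()
print(current.text)

# 2. Edit the returned document so the two reservation percentages fit your
#    cluster's free capacity, then PUT the whole document back, e.g.:
# updated = current.text  # ...with the two percentage values adjusted...
# requests.put(URL, data=updated, auth=AUTH, verify=False,
#              headers={"Content-Type": "application/xml"}).raise_for_status()
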
  • Disable vSphere’s Virtual Machine Startup option where vSphere HA is enabled and Edges are deployed. After you upgrade your 6.2.4 or earlier NSX Edges to 6.2.5 or later, you must turn off the vSphere Virtual Machine Startup option for each NSX Edge in a cluster where vSphere HA is enabled and Edges are deployed. To do this, open the vSphere Web Client, find the ESXi host where NSX Edge virtual machine resides, click Manage > Settings, and, under Virtual Machines, select VM Startup/Shutdown, click Edit, and make sure that the virtual machine is in Manual mode (that is, make sure it is not added to the Automatic Startup/Shutdown list).
  • Before upgrading to NSX 6.2.5 or later, make sure all load balancer cipher lists are colon separated. If your cipher list uses another separator such as a comma, make a PUT call to https://nsxmgr_ip/api/4.0/edges/EdgeID/loadbalancer/config/applicationprofiles and replace each  <ciphers> </ciphers> list in <clientssl> </clientssl> and <serverssl> </serverssl> with a colon-separated list. For example, the relevant segment of the request body might look like the following. Repeat this procedure for all application profiles:

<applicationProfile>
  <name>https-profile</name>
  <insertXForwardedFor>false</insertXForwardedFor>
  <sslPassthrough>false</sslPassthrough>
  <template>HTTPS</template>
  <serverSslEnabled>true</serverSslEnabled>
  <clientSsl>
    <ciphers>AES128-SHA:AES256-SHA:ECDHE-ECDSA-AES256-SHA</ciphers>
    <clientAuth>ignore</clientAuth>
    <serviceCertificate>certificate-4</serviceCertificate>
  </clientSsl>
  <serverSsl>
    <ciphers>AES128-SHA:AES256-SHA:ECDHE-ECDSA-AES256-SHA</ciphers>
    <serviceCertificate>certificate-4</serviceCertificate>
  </serverSsl>
</applicationProfile>
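
If there are many application profiles to touch, the change can be scripted.  The sketch below assumes the applicationprofiles URL above also supports GET, pulls the current configuration, swaps commas for colons inside each <ciphers> element, and PUTs the document back; treat it as an illustration rather than a tested tool.

# Replace comma-separated cipher lists with colon-separated ones in all load
# balancer application profiles (GET current config, fix <ciphers>, PUT back).
import re
import requests

NSX_MANAGER = "nsxmgr.example.local"   # hypothetical
EDGE_ID = "edge-5"                     # hypothetical load balancer edge
AUTH = ("admin", "password")

url = (f"https://{NSX_MANAGER}/api/4.0/edges/{EDGE_ID}"
       "/loadbalancer/config/applicationprofiles")

current = requests.get(url, auth=AUTH, verify=False)
current.raise_for_status()

def fix_ciphers(match):
    # Keep the tags, replace commas inside the cipher list with colons.
    return "<ciphers>%s</ciphers>" % match.group(1).replace(",", ":")

fixed = re.sub(r"<ciphers>(.*?)</ciphers>", fix_ciphers, current.text, flags=re.S)

resp = requests.put(url, data=fixed,
                    headers={"Content-Type": "application/xml"},
                    auth=AUTH, verify=False)
resp.raise_for_status()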

 

  • Set Correct Cipher version for Load Balanced Clients on vROps versions older than 6.2.0: vROps pool members on vROps versions older than 6.2.0 use TLS version 1.0 and therefore you must set a monitor extension value explicitly by setting “ssl-version=10” in the NSX Load Balancer configuration. See Create a Service Monitor in the NSX Administration Guide for instructions.

{
  "expected" : null,
  "extension" : "ssl-version=10",
  "send" : null,
  "maxRetries" : 2,
  "name" : "sm_vrops",
  "url" : "/suite-api/api/deployment/node/status",
  "timeout" : 5,
  "type" : "https",
  "receive" : null,
  "interval" : 60,
  "method" : "GET"
}

  • After upgrading to NSX 6.4.6, L2 bridges and interfaces on a DLR cannot connect to logical switches belonging to different transport zones:  In NSX 6.4.5 or earlier, L2 bridge instances and interfaces on a Distributed Logical Router (DLR) supported use of logical switches that belonged to different transport zones. Starting in NSX 6.4.6, this configuration is not supported. The L2 bridge instances and interfaces on a DLR must connect to logical switches that are in a single transport zone. If logical switches from multiple transport zones are used, edge upgrade is blocked during pre-upgrade validation checks when you upgrade NSX to 6.4.6. To resolve this edge upgrade issue, ensure that the bridge instances and interfaces on a DLR are connected to logical switches in a single transport zone.
  • After upgrading to NSX 6.4.7, bridges and interfaces on a DLR cannot connect to dvPortGroups belonging to different VDS: If such a configuration is present, NSX Manager upgrade to 6.4.7 is blocked in pre-upgrade validation checks. To resolve this, ensure that interfaces and L2 bridges of a DLR are connected to a single VDS.
  • After upgrading to NSX 6.4.7, a DLR cannot be connected to VLAN-backed port groups if the transport zone of the logical switch it is connected to spans more than one VDS: This is to ensure correct alignment of DLR instances with logical switch dvPortGroups across hosts. If such configuration is present, NSX Manager upgrade to 6.4.7 is blocked in pre-upgrade validation checks. To resolve this issue, ensure that there are no logical interfaces connected to VLAN-backed port groups, if a logical interface exists with a logical switch belonging to a transport zone spanning multiple VDS.
  • After upgrading to NSX 6.4.7, different DLRs cannot have their interfaces and L2 bridges on a same network: If such a configuration is present, NSX Manager upgrade to 6.4.7 is blocked in pre-upgrade validation checks. To resolve this issue, ensure that a network is used in only a single DLR.

 

Technical Enablement
Release Notes Click Here  |  What’s New  |  Versions, System Requirements, and Installation  |  Deprecated and Discontinued Functionality

Upgrade Notes  |  FIPS Compliance  |  Resolved Issues  |  Known Issues

docs.vmware.com/nsx-v Installation  |   Cross-vCenter Installation  |   Administration  |   Upgrade  |   Troubleshooting  |   Logging & System Events

API Guide  |  vSphere CLI Guide  |  vSphere Configuration Maximums

Networking Documentation Transport Zones  |  Logical Switches  |  Configuring Hardware Gateway  |  L2 Bridges  |  Routing  |  Logical Firewall

Firewall Scenarios  |  Identity Firewall Overview  |  Working with Active Directory Domains  |  Using SpoofGuard

Virtual Private Networks (VPN)  |  Logical Load Balancer  |  Other Edge Services

Compatibility Information Interoperability Matrix  |  Configuration Maximums  | ports.vmware.com/NSX-V
Download Click Here
VMware HOLs HOL-2103-01-NET – VMware NSX for vSphere Advanced Topics

 

Using vRealize Log Insight to troubleshoot #ESXi 7 Error – Host hardware voltage System board 18 VBAT


This blog post demonstrates how I used vRealize Log Insight (vRLI) to simplify what seemed like a complex issue.  I use vRLI all the time to parse log files from my devices (hosts, VMs, etc.), pinpoint data, and resolve issues.  In this case a simple CMOS battery was the culprit, but it’s the power of vRLI that allowed me to find detailed enough information to pinpoint the problem.

Recently I was doing some updates on my Home Lab Gen 7 and noticed this error kept popping up – ‘Host hardware voltage’.  At first I started thinking it might be time for a new power supply; this seemed pretty serious.

Next I started looking into this error.  On the host I went into Monitor > Hardware Health > Sensors.  The first sensor to appear gave me some detail around the sensor fault but not quite enough information to figure out what the issue was.  I noted the sensor information – ‘System Board 18 VBAT’

I went into the Supermicro management interface to see if I could find out more.  I found some additional information around VBAT.  It looks like 3.3V DC is what it’s expecting, and the event log was registering errors around it, but there still wasn’t enough to know exactly what was faulting.

With this information I launched vRLI and went into Interactive Analytics.  I chose the last 48 hours and typed ‘vbat’ into the search field.  The first hit that came up stated – ‘Sensor 56 type voltage, Description System Board 18 VBAT state assert for…’  This was very similar to the errors I noted from ESXi and from the Supermicro motherboard.
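
The same search can be scripted against the vRLI REST API if you want to automate this kind of check.  The sketch below is how I recall that API working (a session login followed by an events query with a text-contains constraint); the hostname and credentials are placeholders, and the endpoints should be verified against your vRLI version’s API documentation.

# Query vRealize Log Insight for events containing 'vbat'. Endpoints and
# fields are recalled from the vRLI REST API and should be verified against
# your version's API docs before relying on this.
import requests

VRLI = "vrli.example.local"            # hypothetical vRLI hostname
CREDS = {"username": "admin", "password": "password", "provider": "Local"}

session = requests.post(f"https://{VRLI}:9543/api/v1/sessions",
                        json=CREDS, verify=False)
session.raise_for_status()
token = session.json()["sessionId"]

# A time-range constraint can be appended to the path per the API docs.
events = requests.get(f"https://{VRLI}:9543/api/v1/events/text/CONTAINS%20vbat",
                      headers={"Authorization": f"Bearer {token}"},
                      verify=False)
events.raise_for_status()
for event in events.json().get("events", []):
    print(event.get("text"))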

Finally, a quick Google search led me to an Intel webpage.  Turns out VBAT was just a CMOS battery issue.

I powered down the host and pulled out the old CMOS battery.  The old battery was pretty warm to the touch, and when I placed it on a volt meter it read less than one volt.

I checked the voltage on the new battery, which came back at 3.3V, and inserted it into the host.  Since the change, the system board has not reported any new errors.

Next I went back into vRLI to ensure the error had disappeared from the logs.  I typed in ‘vbat’, set my date/time range, and viewed the results.  From the results, you can see that the errors stopped at about 16:00.  That is about the time I put the new battery in, and it has been error free for the last hour.  Over the next day or two I’ll check back and make sure it stays error free.  Additionally, if I wanted to, I could set up an alarm to trigger if the log entry returns.

It’s results like this that are why I like using vRLI to help me troubleshoot, resolve, alert, and monitor.

If you like my ‘no-nonsense’ videos and blogs that get straight to the point… then post a comment or let me know… Else, I’ll start posting really boring content!

 

 

 

 

Update to VMware Security-Advisory VMSA-2020-0023.1 | Critical, Important CVSSv3 5.9-9.8 OpenSLP | New ESXi Patches Released


The VMware Security team released this updated information; follow up with VMware if you have questions.

 

Important Update Notes

The ESXi patches released on October 20, 2020 did not address CVE-2020-3992 completely. The ESXi patches listed in the Response Matrix in section 3a have been updated to contain the complete fix for CVE-2020-3992.

In Reference to OpenSLP vulnerability in Section 3a

VMware ESXi 7.0 ESXi70U1a-17119627   (Updated)

Download
Documentation

VMware ESXi 6.7 ESXi670-202011301-SG  (Updated)
Download
Documentation

Note: VMware Cloud Foundation ESXi 3.x & 4.x patches are still pending at this time.

VMSA-2020-0023.1 Severity: Critical
CVSSv3 Range 5.9-9.8
Issue date: 10/20/2020 and updated 11/04/2020
Synopsis: VMware ESXi, vCenter, Workstation, Fusion and NSX-T updates address multiple security vulnerabilities
CVE numbers: CVE-2020-3981   CVE-2020-3982  CVE-2020-3992  CVE-2020-3993  CVE-2020-3994  CVE-2020-3995

 

 

1. Impacted Products
  • VMware ESXi
  • VMware vCenter
  • VMware Workstation Pro / Player (Workstation)
  • VMware Fusion Pro / Fusion (Fusion)
  • NSX-T
  • VMware Cloud Foundation
2. Introduction
Multiple vulnerabilities in VMware ESXi, Workstation, Fusion and NSX-T were privately reported to VMware. Updates are available to remediate these vulnerabilities in affected VMware products.
3a. ESXi  OpenSLP remote code execution vulnerability (CVE-2020-3992)  Critical
IMPORTANT: The ESXi patches released on October 20, 2020 did not address CVE-2020-3992 completely, see section (3a) Notes for an update.

 Description:
OpenSLP as used in ESXi has a use-after-free issue. VMware has evaluated the severity of this issue to be in the Critical severity range with a maximum CVSSv3 base score of 9.8.

Known Attack Vectors

A malicious actor residing in the management network who has access to port 427 on an ESXi machine may be able to trigger a use-after-free in the OpenSLP service resulting in remote code execution.

Resolution: To remediate CVE-2020-3992, apply the patches listed in the ‘Fixed Version’ column of the ‘Response Matrix’ found below.

Workarounds: Workarounds for CVE-2020-3992 are listed in the ‘Workarounds’ column of the ‘Response Matrix’ below.

Notes

The ESXi patches released on October 20, 2020 did not address CVE-2020-3992 completely. The ESXi patches listed in the Response Matrix below are updated versions that contain the complete fix for CVE-2020-3992.
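
While rolling out the updated patches (or the KB76372 workaround), it can be handy to see which hosts still answer on the OpenSLP port mentioned in the attack vector.  Below is a small, generic TCP probe of port 427; the host list is a placeholder, and a closed port alone does not prove the patch or workaround is in place.

# Quick check of which ESXi hosts still answer on TCP 427 (the OpenSLP port
# named in the attack vector above). Purely a reachability probe -- it does
# not validate patch level or the KB76372 workaround.
import socket

HOSTS = ["esxi01.example.local", "esxi02.example.local"]  # hypothetical hosts

for host in HOSTS:
    try:
        with socket.create_connection((host, 427), timeout=3):
            print(f"{host}: port 427 open")
    except OSError:
        print(f"{host}: port 427 closed/unreachable")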

Response Matrix Critical
Product Version Running On CVE Identifier CVSSv3 Fixed Version Workarounds
ESXi 7.0 Any CVE-2020-3992 9.8 ESXi70U1a-17119627 Updated KB76372
ESXi 6.7 Any CVE-2020-3992 9.8 ESXi670-202011301-SG  Updated KB76372
ESXi 6.5 Any CVE-2020-3992 9.8 ESXi650-202011401-SG KB76372
Cloud Foundation (ESXi) 4.x Any CVE-2020-3992 9.8 Patch Pending KB76372
Cloud Foundation (ESXi) 3.x Any CVE-2020-3992 9.8 Patch Pending KB76372
Only section 3a has been updated at this time; the rest of the VMSA is the same. Only the links to the new ESXi 7 U1a and 6.7 updates have been included below this line.
3b. NSX-T Man-in-the-Middle vulnerability MITM (CVE-2020-3993) Important
Description:
VMware NSX-T contains a security vulnerability that exists in the way it allows a KVM host to download and install packages from NSX Manager. VMware has evaluated the severity of this issue to be in the Important severity range with a maximum CVSSv3 base score of 7.5.

Known Attack Vectors

A malicious actor with MITM positioning may be able to exploit this issue to compromise the transport node.

Resolution: To remediate CVE-2020-3993 apply the patches listed in the ‘Fixed Version’ column of the ‘Response Matrix’ found below.

Workarounds: None

Response Matrix Important
Product Version Running On CVE Identifier CVSSv3 Fixed Version Workarounds
NSX-T 3.x Any CVE-2020-3993 7.5 3.0.2 None
NSX-T 2.5.x Any CVE-2020-3993 7.5 2.5.2.2.0 None
Cloud Foundation (NSX-T) 4.x Any CVE-2020-3993 7.5 4.1 None
Cloud Foundation (NSX-T) 3.x Any CVE-2020-3993 7.5 3.10.1.1 None
3c. Time-of-check to time-of-use TOCTOU out-of-bounds read vulnerability (CVE-2020-3981)  Important
Description:
VMware ESXi, Workstation and Fusion contain an out-of-bounds read vulnerability due to a time-of-check time-of-use issue in the ACPI device. VMware has evaluated the severity of this issue to be in the Important severity range with a maximum CVSSv3 base score of 7.1.

Known Attack Vectors

A malicious actor with administrative access to a virtual machine may be able to exploit this issue to leak memory from the vmx process.

Resolution: To remediate CVE-2020-3981 apply the patches listed in the ‘Fixed Version’ column of the ‘Response Matrix’ found below.

 Workarounds: None

Response Matrix Important
Product Version Running On CVE Identifier CVSSv3 Fixed Version Workarounds
ESXi 7.0 Any CVE-2020-3981 7.1 ESXi_7.0.1-0.0.16850804 None
ESXi 6.7 Any CVE-2020-3981 7.1 ESXi670-202008101-SG None
ESXi 6.5 Any CVE-2020-3981 7.1 ESXi650-202007101-SG None
Fusion 12.x OS X CVE-2020-3981 N/A Unaffected N/A
Fusion 11.x OS X CVE-2020-3981 7.1 11.5.6 None
Workstation 16.x Any CVE-2020-3981 N/A Unaffected N/A
Workstation 15.x Any CVE-2020-3981 7.1 Patch pending None
Cloud Foundation (ESXi) 4.x Any CVE-2020-3981 7.1 4.1 None
Cloud Foundation (ESXi) 3.x Any CVE-2020-3981 7.1 3.10.1 None
3d. TOCTOU out-of-bounds write vulnerability (CVE-2020-3982)  Moderate
Description:
VMware ESXi, Workstation and Fusion contain an out-of-bounds write vulnerability due to a time-of-check time-of-use issue in the ACPI device. VMware has evaluated the severity of this issue to be in the Moderate severity range with a maximum CVSSv3 base score of 5.9.

Known Attack Vectors

A malicious actor with administrative access to a virtual machine may be able to exploit this vulnerability to crash the virtual machine’s vmx process or corrupt the hypervisor’s memory heap.

Resolution To remediate CVE-2020-3982 apply the patches listed in the ‘Fixed Version’ column of the ‘Response Matrix’ found below.

 Workarounds: None

Response Matrix Moderate
Product Version Running On CVE Identifier CVSSv3 Fixed Version Workarounds
ESXi 7.0 Any CVE-2020-3982 5.9 ESXi_7.0.1-0.0.16850804 None
ESXi 6.7 Any CVE-2020-3982 5.9 ESXi670-202008101-SG None
ESXi 6.5 Any CVE-2020-3982 5.9 ESXi650-202007101-SG None
Fusion 12.x OS X CVE-2020-3982 N/A Unaffected N/A
Fusion 11.x OS X CVE-2020-3982 5.9 11.5.6 None
Workstation 16.x Any CVE-2020-3982 N/A Unaffected N/A
Workstation 15.x Any CVE-2020-3982 5.9 Patch pending None
Cloud Foundation (ESXi) 4.x Any CVE-2020-3982 5.9 4.1 None
Cloud Foundation (ESXi) 3.x Any CVE-2020-3982 5.9 3.10.1 None
3e. vCenter Server update function MITM vulnerability (CVE-2020-3994)  Important
Description:  VMware vCenter Server contains a session hijack vulnerability in the vCenter Server Appliance Management Interface update function due to a lack of certificate validation. VMware has evaluated the severity of this issue to be in the Important severity range with a maximum CVSSv3 base score of 7.5.

Known Attack Vectors A malicious actor with network positioning between vCenter Server and an update repository may be able to perform a session hijack when the vCenter Server Appliance Management Interface is used to download vCenter updates.

Resolution To remediate CVE-2020-3994 apply the patches listed in the ‘Fixed Version’ column of the ‘Response Matrix’ found below.

 Workarounds: None 

Response Matrix Important
Product Version Running On CVE Identifier CVSSv3 Fixed Version Workarounds
vCenter Server 7.0 Any CVE-2020-3994 N/A Unaffected N/A
vCenter Server 6.7 vAppliance CVE-2020-3994 7.5 6.7u3 None
vCenter Server 6.7 Windows CVE-2020-3994 N/A Unaffected N/A
vCenter Server 6.5 vAppliance CVE-2020-3994 7.5 6.5u3k None
vCenter Server 6.5 Windows CVE-2020-3994 N/A Unaffected N/A
Cloud Foundation (vCenter) 4.x Any CVE-2020-3994 N/A Unaffected N/A
Cloud Foundation (vCenter) 3.x Any CVE-2020-3994 7.5 3.9.0 None
3f. VMCI host driver memory leak vulnerability (CVE-2020-3995)  Important
Description:  The VMCI host drivers used by VMware hypervisors contain a memory leak vulnerability. VMware has evaluated the severity of this issue to be in the Important severity range with a maximum CVSSv3 base score of 7.1.

Known Attack Vectors A malicious actor with access to a virtual machine may be able to trigger a memory leak issue resulting in memory resource exhaustion on the hypervisor if the attack is sustained for extended periods of time.

 Resolution To remediate CVE-2020-3995 apply the patches listed in the ‘Fixed Version’ column of the ‘Response Matrix’ found below.

 Workarounds: None.

Response Matrix Important
Product Version Running On CVE Identifier CVSSv3 Fixed Version Workarounds
ESXi 7.0 Any CVE-2020-3995 N/A Unaffected N/A
ESXi 6.7 Any CVE-2020-3995 7.1 ESXi670-201908101-SG None
ESXi 6.5 Any CVE-2020-3995 7.1 ESXi650-201907101-SG None
Fusion 11.x Any CVE-2020-3995 7.1 11.1.0 None
Workstation 15.x Any CVE-2020-3995 7.1 15.1.0 None
Cloud Foundation (ESXi) 4.x Any CVE-2020-3995 N/A Unaffected N/A
Cloud Foundation (ESXi) 3.x Any CVE-2020-3995 7.1 3.9.0 None
4. References
VMware ESXi 7.0 ESXi70U1a-17119627   (Updated)

Download
Documentation

VMware ESXi 6.7 ESXi670-202011301-SG  (Updated)
Download
Documentation

VMware ESXi670-202008101-SG  (Included with August’s Release of ESXi670-202008001)

Download
Documentation

 VMware ESXi 6.7 ESXi670-202010401-SG
Download
Documentation

VMware vCenter Server 6.7u3

Download
Documentation

VMware vCenter Server 6.5u3k

Download
Documentation

VMware Workstation Pro 15.6

Download

Documentation

VMware Workstation Player 15.6
Download
Documentation

VMware Fusion 11.5.6
Download
Documentation

 VMware NSX-T 3.0.2
Download
Documentation

 VMware NSX-T 2.5.2.2.0
Download

Documentation

VMware vCloud Foundation 4.1

Download

Documentation

VMware vCloud Foundation 3.10.1 & 3.10.1.1

Download
Documentation

VMware vCloud Foundation 3.9.0

Download
Documentation

Mitre CVE Dictionary Links:
CVE-2020-3981
CVE-2020-3982
CVE-2020-3992
CVE-2020-3993
CVE-2020-3994
CVE-2020-3995 

FIRST CVSSv3 Calculator:

CVE-2020-3981
CVE-2020-3982 

CVE-2020-3992

CVE-2020-3993

CVE-2020-3994

CVE-2020-3995

5. Change Log
2020-10-20 VMSA-2020-0023 Initial security advisory.

2020-11-04 VMSA-2020-0023.1 Updated ESXi patches for section 3a

Disclaimer
This enablement email derives from our VMware Security Advisory and is accurate at the time of creation.  Bulletins may be updated periodically; when using this email as future reference material, please refer to the full and updated VMware Security Advisory VMSA-2020-0023.1

Updating #VMware #HomeLab Gen 5 to Gen 7


Not too long ago I updated my Gen 4 Home Lab to Gen 5, and I posted many blogs and videos around it.  The Gen 5 lab ran well for vSphere 6.7 deployments, but moving into vSphere 7.0 I had a few issues adapting it, mostly with the design of the Jingsha motherboard.  I noted most of these challenges in the Gen 5 wrap-up video. Additionally, I had some new networking requirements, mainly around adding multiple Intel NIC ports, and Home Lab Gen 5 was not going to adapt well or would be very costly to adapt.  These combined needs forced my hand to migrate to what I’m calling Home Lab Gen 7.  Wait a minute, what happened to Home Lab Gen 6? I decided to align my Home Lab generation numbers with the vSphere release number, so I skipped Gen 6.

First, I reviewed my design goals:

  • Be able to run vSphere 7.x and vSAN Environment
  • Reuse as much as possible from Gen 5 Home lab, this will keep costs down
  • Choose products that bring value to the goals and are cost effective; if they are on the VMware HCL, that’s a plus but not necessary for a home lab
  • Keep networking (vSAN / FT) on 10Gbe MikroTik Switch
  • Support 4 x Intel Gbe Networks
  • Ensure there will be enough CPU cores and RAM to be able to support multiple VMware products (ESXi, VCSA, vSAN, vRO, vRA, NSX, LogInsight)
  • Be able to fit the environment into 3 ESXi Hosts
  • The environment should run well, but doesn’t have to be a production level environment

Second – Evaluate Software, Hardware, and VM requirements:

My calculated numbers from my Gen 5 build will stay rather static for Gen 7.  The only update for Gen 7 is to use the updated requirements table which can be found here >>  ‘HOME LABS: A DEFINITIVE GUIDE’

Third – Home Lab Design Considerations

This too is very similar to Gen 5, but I did review this table and make any last changes to my design.

Fourth – Choosing Hardware

Based on my estimations above, I’m going to need a very flexible motherboard supporting lots of RAM and good network connectivity, and it should be as compatible as possible with my Gen 5 hardware.  I’ve reused many parts from Gen 5, but the main changes came with the Supermicro motherboard and the addition of the 2TB SAS HDDs listed below.

Note: I’ve listed the newer items in italics; all other parts are carried over from Gen 5.

Overview:

  • My Gen 7 Home Lab is based on vSphere 7 (VCSA, ESXi, and vSAN) and it contains 3 x ESXi Hosts, 1 x Windows 10 Workstation,  4 x Cisco Switches, 1 x MikroTik 10gbe Switch, 2 x APC UPS

ESXi Hosts:

  • Case:
  • Motherboard:
  • CPU:
    • CPU: Xeon E5-2640 v2 8 Cores / 16 HT (Ebay $30 each)
    • CPU Cooler: DEEPCOOL GAMMAXX 400 (Amazon $19)
  • RAM:
    • 128GB DDR3 ECC RAM (Ebay $170)
  • Disks:
    • 64GB USB Thumb Drive (Boot)
    • 2 x 200 SAS SSD (vSAN Cache)
    • 2 x 2TB SAS HDD (vSAN Capacity – See this post)
    • 1 x 2TB SATA (Extra Space)
  • SAS Controller:
    • 1 x IBM 5210 JBOD (Ebay)
    • CableCreation Internal Mini SAS SFF-8643 to (4) 29pin SFF-8482 (Amazon $18)
  • Network:
    • Motherboard Integrated i350 1gbe 4 Port
    • 1 x MellanoxConnectX3 Dual Port (HP INFINIBAND 4X DDR PCI-E HCA CARD 452372-001)
  • Power Supply:
    • Antec Earthwatts 500-600 Watt (Adapters needed to support case and motherboard connections)
      • Adapter: Dual 8(4+4) Pin Male for Motherboard Power Adapter Cable (Amazon $11)
      • Adapter: LP4 Molex Male to ATX 4 pin Male Auxiliary (Amazon $11)
      • Power Supply Extension Cable: StarTech.com 8in 24 Pin ATX 2.01 Power Extension Cable (Amazon $9)

Network:

  • Core VM Switches:
    • 2 x Cisco 3650 (WS-C3560CG-8TC-S 8 Gigabit Ports, 2 Uplink)
    • 2 x Cisco 2960 (WS-C2960G-8TC-L)
  • 10gbe Network:
    • 1 x MikroTik 10gbe CN309 (Used for vSAN and Replication Network)
    • 2 ea. x HP 684517-001 Twinax SFP 10gbe 0.5m DAC Cable (Ebay)
    • 2 ea. x MELLANOX QSFP/SFP ADAPTER 655874-B21 MAM1Q00A-QSA (Ebay)

Battery Backup UPS:

  • 2 x APC NS1250

Windows 10 Workstation:

Thanks for reading, please do reach out if you have any questions.

If you like my ‘no-nonsense’ videos and blogs that get straight to the point… then post a comment or let me know… Else, I’ll start posting really boring content!

#VMware OCTO Initiative: Nonprofit Connect – Complementary Education and Enablement General Links


The VMware Office of the CTO Ambassadors (CTOA) is an internal VMware program that allows field employees to connect and advocate for their customers’ needs inside of VMware.  Additionally, the CTOA program enables field employees to engage in initiatives to better serve our customers.  This past year I’ve been working on a CTOA initiative known as Nonprofit Connect (NPC). NPC has partnered with the VMware Foundation to help VMware nonprofit customers through more effective and sustainable technology.  Part of this program was creating and updating an enablement guide which helps nonprofits gain access to resources.  This resource is open to all our customers and is publicly posted >> NPC Enablement Guide

Michelle Kaiser is leading the Nonprofit Connect initiative and from what I’ve seen she and the team are doing a great job — Keep up the good work!

More information around NPC, CTOA, and the VMware Foundation can be found in the links below:

GA Release VMware NSX-T Data Center 3.1 | Announcement, information, and links


VMware announced the GA release of VMware NSX-T Data Center 3.1

See the base table for all the technical enablement links including VMworld 2020 sessions and new Hands On Labs.

Release Overview
VMware NSX-T Data Center 3.1.0   |  Build 17107167

What’s New
NSX-T Data Center 3.1 includes a large list of new features to offer new functionalities for virtualized networking and security for private, public, and multi-clouds. Highlights include new features and enhancements in the following focus areas:

  • Cloud-scale Networking: Federation enhancements, Enhanced Multicast capabilities.
  • Move to Next Gen SDN: Simplified migration from NSX-V to NSX-T.
  • Intrinsic Security: Distributed IPS, FQDN-based Enhancements
  • Lifecycle and monitoring: NSX-T support with vSphere Lifecycle Manager (vLCM), simplified installation, enhanced monitoring, search and filtering.
  • Federation is now considered production ready.

 In addition to these enhancements, the following capabilities and improvements have been added.

  • Federation

Support for standby Global Manager Cluster

Global Manager can now have an active cluster and a standby cluster in another location. Latency between active and standby cluster must be a maximum of 150ms round-trip time.

With the support of Federation upgrade and Standby GM, Federation is now considered production ready.

  • L2 Networking

Change the display name for TCP/IP stack: The netstack keys remain “vxlan” and “hyperbus” but the display name in the UI is now “nsx-overlay” and “nsx-hyperbus”.

The display name will change in both the list of Netstacks and list of VMKNICs

This change will be visible with vCenter 6.7

Improvements in L2 Bridge Monitoring and Troubleshooting

Consistent terminology across documentation, UI and CLI

Addition of new CLI commands to get summary and detailed information on L2 Bridge profiles and stats

Log messages to identify the bridge profile, the reason for the state change, as well as the logical switch(es) impacted

Support TEPs in different subnets to fully leverage different physical uplinks

A Transport Node can have multiple host switches attaching to several Overlay Transport Zones. However, the TEPs for all those host switches need to have an IP address in the same subnet. This restriction has been lifted to allow you to pin different host switches to different physical uplinks that belong to different L2 domains.

Improvements in IP Discovery and NS Groups: IP Discovery profiles can now be applied to NS Groups simplifying usage for Firewall Admins.

  • L3 Networking

Policy API enhancements

Ability to configure BFD peers on gateways and forwarding up timer per VRF through policy API.

Ability to retrieve the proxy ARP entries of gateway through policy API.

  • Multicast

NSX-T 3.1 is a major release for Multicast, which extends its feature set and confirms its status as enterprise ready for deployment.

Support for Multicast Replication on the Tier-1 gateway. Allows you to turn on multicast for a Tier-1 with a Tier-1 Service Router (mandatory requirement) and have multicast receivers and sources attached to it.

Support for IGMPv2 on all downlinks and uplinks from Tier-1

Support for PIM-SM on all uplinks (config max supported) between each Tier-0 and all TORs  (protection against TOR failure)

Ability to run Multicast in A/S and Unicast ECMP in A/A from Tier-1 → Tier-0 → TOR 

Please note that Unicast ECMP will not be supported from ESXi host → T1 when it is attached to a T1 which also has Multicast enabled.

Support for static RP programming and learning through BS & Support for Multiple Static RPs

Distributed Firewall support for Multicast Traffic

Improved Troubleshooting: This adds the ability to configure IGMP Local Groups on the uplinks so that the Edge can act as a receiver. This will greatly help in triaging multicast issues by being able to attract multicast traffic of a particular group to Edge.

  • Edge Platform and Services

Inter TEP communication within the same host: Edge TEP IP can be on the same subnet as the local hypervisor TEP.

Support for redeployment of Edge node: A defunct Edge node, VM or physical server, can be replaced with a new one without requiring it to be deleted.

NAT connection limit per Gateway: The maximum NAT sessions can be configured per Gateway.

  • Firewall

Improvements in FQDN-based Firewall: You can define FQDNs that can be applied to a Distributed Firewall. You can either add individual FQDNs or import a set of FQDNs from CSV files.

Firewall Usability Features

  • Firewall Export & Import: NSX now provides the option for you to export and import firewall rules and policies as CSVs.
  • Enhanced Search and Filtering: Improved search indexing and filtering options for firewall rules based on IP ranges.
  • Distributed Intrusion Detection/Prevention System (D-IDPS)

Distributed IPS

NSX-T will have a Distributed Intrusion Prevention System. You can block threats based on signatures configured for inspection.

Enhanced dashboard to provide details on threats detected and blocked.

IDS/IPS profile creation is enhanced with Attack Types, Attack Targets, and CVSS scores to create more targeted detection.

  • Load Balancing

HTTP server-side Keep-alive: An option to keep one-to-one mapping between the client side connection and the server side connection; the backend connection is kept until the frontend connection is closed.

HTTP cookie security compliance: Support for “httponly” and “secure” options for HTTP cookie.

A new diagnostic CLI command: The single command captures various troubleshooting outputs relevant to Load Balancer.

  • VPN

TCP MSS Clamping for L2 VPN: The TCP MSS Clamping feature allows L2 VPN session to pass traffic when there is MTU mismatch.

  • Automation, OpenStack and API

NSX-T Terraform Provider support for Federation: The NSX-T Terraform Provider extends its support to NSX-T Federation. This allows you to create complex logical configurations with networking, security (segment, gateways, firewall etc.) and services in an infra-as-code model. For more details, see the NSX-T Terraform Provider release notes.

Conversion to NSX-T Policy Neutron Plugin for OpenStack environment consuming Management API: Allows you to move an OpenStack with NSX-T environment from the Management API to the Policy API. This gives you the ability to move an environment deployed before NSX-T 2.5 to the latest NSX-T Neutron Plugin and take advantage of the latest platform features.

Ability to change the order of NAT and firewall (FWaaS) on the OpenStack Neutron Router: This gives you the choice, per deployment, of the order of operation between NAT and firewall. At the OpenStack Neutron Router level (mapped to a Tier-1 in NSX-T), the order of operation can be defined as either NAT then firewall, or firewall then NAT. This is a global setting for a given OpenStack platform.

NSX Policy API Enhancements: Ability to filter and retrieve all objects within a subtree of the NSX Policy API hierarchy. In previous versions, filtering could only be done from the root of the tree (policy/api/v1/infra?filter=Type-); this enhancement allows you to retrieve all objects from sub-trees instead. For example, a network admin can look at all Tier-0 configurations with /policy/api/v1/infra/tier-0s?filter=Type- instead of specifying all the Tier-0 related objects from the root.
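Here's a rough sketch of how that subtree query could be driven from PowerShell with Invoke-RestMethod. The manager address and credentials are placeholders, and the filter value is left exactly as the release notes show it; check the NSX Policy API reference for the filter syntax that applies to the objects you care about.

# Hedged sketch: subtree query against the Policy API instead of walking /infra from the root.
# $nsxManager and the credentials are hypothetical placeholders.
$nsxManager = "nsx-mgr.lab.local"
$cred = Get-Credential    # NSX admin credentials
$pair = "$($cred.UserName):$($cred.GetNetworkCredential().Password)"
$headers = @{ Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes($pair)) }

# Query the tier-0s subtree; append the object type to the filter per the API docs
$uri = "https://$nsxManager/policy/api/v1/infra/tier-0s?filter=Type-"
$result = Invoke-RestMethod -Uri $uri -Method Get -Headers $headers -SkipCertificateCheck   # -SkipCertificateCheck requires PowerShell 7
$result | ConvertTo-Json -Depth 5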

  • Operations

NSX-T support with vSphere Lifecycle Manager (vLCM): Starting with vSphere 7.0 Update 1, VMware NSX-T Data Center can be supported on a cluster that is managed with a single vSphere Lifecycle Manager (vLCM) image. As a result, NSX Manager can be used to install, upgrade, or remove NSX components on the ESXi hosts in a cluster that is managed with a single image.

  • Hosts can be added to and removed from a cluster that is managed with a single vSphere Lifecycle Manager image and enabled with VMware NSX-T Data Center.
  • Both VMware NSX-T Data Center and ESXi can be upgraded in a single vSphere Lifecycle Manager remediation task. The workflow is supported only if you upgrade from VMware NSX-T Data Center version 3.1.
  • Compliance can be checked, a remediation pre-check report can be generated, and a cluster that is managed with a single vSphere Lifecycle Manager image and enabled with VMware NSX-T Data Center can be remediated.

Simplification of host/cluster installation with NSX-T: Through the “Getting Started” button in the VMware NSX-T Data Center user interface, simply select the cluster of hosts that needs NSX installed, and the UI automatically prompts you with a network configuration recommended by NSX based on your underlying host configuration. That configuration can then be applied to the cluster, completing the entire installation in a single click after selecting the clusters. The recommended host network configuration is shown in the wizard with a rich UI, and any changes to the desired network configuration before NSX installation are dynamically updated so you can refer to them as needed.

Enhancements to in-place upgrades: Several enhancements have been made to the VMware NSX-T Data Center in-place host upgrade process, like increasing the max limit of virtual NICs supported per host, removing previous limitations, and reducing the downtime in data path during in-place upgrades. Refer to the VMware NSX-T Data Center Upgrade Guide for more details.

Reduction of VIB size in NSX-T: VMware NSX-T Data Center 3.1.0 has a smaller VIB footprint in all NSX host installations, so you can install ESXi and other third-party VIBs alongside NSX on your hypervisors.

Enhancements to Physical Server installation of NSX-T: To simplify the workflow of installing VMware NSX-T Data Center on physical servers, the entire end-to-end physical server installation process is now driven through the NSX Manager. Running Ansible scripts to configure host network connectivity is no longer required.

ERSPAN support on a dedicated network stack with ENS: ERSPAN can now be configured on a dedicated network stack (vmk stack) and is supported with the enhanced NSX network switch (ENS), resulting in higher performance and throughput for ERSPAN port mirroring.

Singleton Manager with vSphere HA: NSX now supports the deployment of a single NSX Manager in production deployments. This can be used in conjunction with vSphere HA to recover a failed NSX Manager. Please note that the recovery time for a single NSX Manager using backup/restore or vSphere HA may be much longer than the availability provided by a cluster of NSX Managers.

Log consistency across NSX components: Consistent logging format and documentation across different components of NSX so that logs can be easily parsed for automation and you can efficiently consume the logs for monitoring and troubleshooting.

Support for Rich Common Filters: Adds rich common filters to operations features such as packet capture, port mirroring, IPFIX, and latency measurements. Previously, these features had either very simple filters that were not always helpful, or no filters at all.

CLI Enhancements: Several CLI-related enhancements have been made in this release:

  • CLI “get” commands are now accompanied by timestamps to help with debugging
  • GET / SET / RESET the Virtual IP (VIP) of the NSX Management cluster through CLI
  • While debugging through the central CLI, run ping commands directly on the local machines, eliminating the extra steps needed to log in to the machine and do the same
  • View the list of core files on any NSX component through CLI
  • Use the “*” operator in CLI
  • Commands for debugging L2 Bridge through CLI have also been introduced in this release

Distributed Load Balancer Traceflow: Traceflow now supports Distributed Load Balancer for troubleshooting communication failures from endpoints deployed in vSphere with Tanzu to a service endpoint via the Distributed Load Balancer.

  •  Monitoring

Events and Alarms

  • Capacity Dashboard: Maximum Capacity, Maximum Capacity Threshold, Minimum Capacity Threshold
  • Edge Health: Standby move to different edge node, Datapath thread deadlocked, NSX-T Edge core file has been generated, Logical Router failover event, Edge process failed, Storage Latency High, Storage Error
  • IDS/IPS: NSX-IDPS Engine Up/Down, NSX-IDPS Engine CPU Usage exceeded 75%, NSX-IDPS Engine CPU Usage exceeded 85%, NSX-IDPS Engine CPU Usage exceeded 95%, Max events reached, NSX-IDPS Engine Memory Usage exceeded 75%, NSX-IDPS Engine Memory Usage exceeded 85%, NSX-IDPS Engine Memory Usage exceeded 95%
  • IDFW: Connectivity to AD server, Errors during Delta Sync
  • Federation: GM to GM Split Brain
  • Communication: Control Channel to Transport Node Down, Control Channel to Transport Node Down for too Long, Control Channel to Manager Node Down, Control Channel to Manager Node Down for too Long, Management Channel to Transport Node Down, Management Channel to Transport Node Down for too Long, Manager FQDN Lookup Failure, Manager FQDN Reverse Lookup Failure

ERSPAN for ENS fast path: Support port mirroring for ENS fast path.

System Health Plugin Enhancements: System Health plugin enhancements and status monitoring of processes running on different nodes, ensuring the system is running properly through timely detection of errors.

Live Traffic Analysis & Tracing: A live traffic analysis tool to support bi-directional traceflow between on-prem and VMC data centers.

Latency Statistics and Measurement for UA Nodes: Latency measurements between NSX Manager nodes per NSX Manager cluster and between NSX Manager clusters across different sites.

Performance Characterization for Network Monitoring using Service Insertion: To provide performance metrics for network monitoring using Service Insertion.

  • Usability and User Interface

Graphical Visualization of VPN: The Network Topology map now visualizes the VPN tunnels and sessions that are configured. This helps you quickly visualize and troubleshoot VPN configuration and settings.

Dark Mode: NSX UI now supports dark mode. You can toggle between light and dark mode.

Firewall Export & Import: NSX now provides the option for you to export and import firewall rules and policies as CSVs.

Enhanced Search and Filtering: Improved the search indexing and filtering options for firewall rules based on IP ranges.

Reducing Number of Clicks: With this UI enhancement, NSX-T now offers a convenient and easy way to edit Network objects.

  • Licensing

Multiple license keys: NSX can now accept multiple license keys of the same edition and metric. This allows you to maintain all of your license keys without having to combine them.

License Enforcement: NSX-T now ensures that users are license-compliant by restricting access to features based on license edition. New users will be able to access only those features that are available in the edition that they have purchased. Existing users who have used features that are not in their license edition will be restricted to only viewing the objects; create and edit will be disallowed.

New VMware NSX Data Center Licenses: Adds support for new VMware NSX Firewall and NSX Firewall with Advanced Threat Prevention license introduced in October 2020, and continues to support NSX Data Center licenses (Standard, Professional, Advanced, Enterprise Plus, Remote Office Branch Office) introduced in June 2018, and previous VMware NSX for vSphere license keys. See VMware knowledge base article 52462 for more information about NSX licenses.

  • AAA and Platform Security

Security Enhancements for Use of Certificates And Key Store Management: With this architectural enhancement, NSX-T offers a convenient and secure way to store and manage the multitude of certificates that are essential for platform operations and to comply with industry and government guidelines. This enhancement also simplifies API use to install and manage certificates.

Alerts for Audit Log Failures: Audit logs play a critical role in managing cybersecurity risks within an organization and are often the basis of forensic analysis, security analysis and criminal prosecution, in addition to aiding with diagnosis of system performance issues. Complying with NIST-800-53 and industry-benchmark compliance directives, NSX offers alert notification via alarms in the event of failure to generate or process audit data.

Custom Role Based Access Control: Users want the ability to configure roles and permissions customized to their specific operating environment. The custom RBAC feature allows granular, feature-based privilege customization, giving NSX customers the flexibility to enforce authorization based on least-privilege principles. This benefits users in fulfilling specific operational requirements or meeting compliance guidelines. Please note that in NSX-T 3.1, only policy-based features are available for role customization.

FIPS – Interoperability with vSphere 7.x: Cryptographic modules in use with NSX-T are FIPS 140-2 validated since NSX-T 2.5. This change extends formal certification to incorporate module upgrades and interoperability with vSphere 7.0.

  • NSX Data Center for vSphere to NSX-T Data Center Migration

Migration of NSX for vSphere Environment with vRealize Automation: The Migration Coordinator now interacts with vRealize Automation (vRA) in order to migrate environments where vRealize Automation provides automation capabilities. This will offer a first set of topologies which can be migrated in an environment with vRealize Automation and NSX-T Data Center. Note: This will require support on vRealize Automation.

Modular Distributed Firewall Config Migration: The Migration Coordinator is now able to migrate firewall configuration and state from an NSX Data Center for vSphere environment to an NSX-T Data Center environment. This functionality allows a customer to migrate virtual machines (using vMotion) from one environment to the other and keep their firewall rules and state.

Migration of Multiple VTEP: The NSX Migration Coordinator now has the ability to migrate environments deployed with multiple VTEPs.

Increase Scale in Migration Coordinator to 256 Hosts: The Migration Coordinator can now migrate up to 256 hypervisor hosts from NSX Data Center for vSphere to NSX-T Data Center.

Migration Coordinator coverage of Service Insertion and Guest Introspection: The Migration Coordinator can migrate environments with Service Insertion and Guest Introspection. This allows partners to offer a migration solution integrated with the complete migration workflow.

Upgrade Considerations
API Deprecations and Behavior Changes

Retention Period of Unassigned Tags: In NSX-T 3.0.x, NSX Tags with 0 Virtual Machines assigned are automatically deleted by the system after five days. In NSX-T 3.1.0, the system task has been modified to run on a daily basis, cleaning up unassigned tags that are older than one day. There is no manual way to force delete unassigned tags.

I recommend reviewing the known issues sections: General  |  Installation  |  Upgrade  |  NSX Edge  |  NSX Cloud  |  Security  |  Federation

Enablement Links
Release Notes Click Here  |  What’s New  |  General Behavior Changes  |  API and CLI Resources  |  Resolved Issues  |  Known Issues
docs.vmware.com/NSX-T Installation Guide  |  Administration Guide  |  Upgrade Guide  |  Migration Coordinator  |  VMware NSX Intelligence

REST API Reference Guide  |  CLI Reference Guide  |  Global Manager REST API

Upgrading Docs Upgrade Checklist  |  Preparing to Upgrade  |  Upgrading  |  Upgrading NSX Cloud Components  |  Post-Upgrade Tasks

Troubleshooting Upgrade Failures

Installation Docs Preparing for Installation  |  NSX Manager Installation  |  Installing NSX Manager Cluster on vSphere  |  Installing NSX Edge

vSphere Lifecycle Manager  |  Host Profile integration  |  Getting Started with Federation  |  Getting Started with NSX Cloud

Migrating Docs Migrating NSX Data Center for vSphere  |  Migrating vSphere Networking  |  Migrating NSX Data Center for vSphere with vRA
Requirements Docs NSX Manager Cluster  |  System  |  NSX Manager VM & Host Transport Node System
NSX Edge VM System  |  NSX Edge Bare Metal  |  Bare Metal Server System  |  Bare Metal Linux Container
Compatibility Information Ports Used  |  Compatibility Guide (Select NSX-T)  |  Product Interoperability Matrix  |
Downloads Click Here
Hands On Labs (New) HOL-2103-01-NET – VMware NSX for vSphere Advanced Topics

HOL-2103-02-NET – VMware NSX Migration Coordinator

HOL-2103-91-NET – VMware NSX for vSphere Flow Monitoring and Traceflow

HOL-2122-01-NET – NSX Cloud Consistent Networking and Security across Enterprise, AWS & Azure

HOL-2122-91-ISM – NSX Cloud Consistent Networking and Security across Enterprise, AWS & Azure Lightning Lab

VMworld 2020 Sessions Update on NSX-T Switching: NSX on VDS (vSphere Distributed Switch) VCNC1197

Demystifying the NSX-T Data Center Control Plane VCNC1164

NSX-T security and compliance deep dive ISNS2256

NSX Data Center for vSphere to NSX-T Migration: Real-World Experience VCNC1590

Blogs NSX-T 3.0 – Innovations in Cloud, Security, Containers, and Operations

VCSA 7 Error in method invocation [Errno 2] No such file or directory: ‘/storage/core/software-update/updates/index’

Posted on Updated on

This could be my shortest blog to date, but it’s still good to note this error.

In my home lab I wanted to update my VCSA 7 appliance to 7.0u1.  I went into the VCSA Management site, chose Update, and the auto-update started to look for files in the default repository.  Then I got the following error:

Error in method invocation [Errno 2] No such file or directory: ‘/storage/core/software-update/updates/index’

Doing a bit of research, I found out that when the VCSA cannot locate the default vmware.com repository, it will display this error.

In my case, my VCSA could not access the internet so it couldn’t locate the repository. Once I corrected a network issue, the VCSA was able to access the repository and it downloaded the upgrade options.
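If you hit the same error, a quick sanity check is to confirm that the update repository host resolves and is reachable on TCP 443. Here's a minimal sketch from a Windows admin box, assuming the default repository host is vapp-updates.vmware.com (verify this against the repository URL configured in your own VCSA, and remember the VCSA itself is what ultimately needs this access):

# Hedged sketch: check DNS resolution and HTTPS reachability of the assumed default repo host.
$repoHost = "vapp-updates.vmware.com"                 # assumption - confirm in your VCSA update settings
Resolve-DnsName -Name $repoHost                       # does the name resolve?
Test-NetConnection -ComputerName $repoHost -Port 443  # can we open TCP 443 to it?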

If you like my ‘no-nonsense’ blog articles that get straight to the point… then post a comment or let me know… Else, I’ll start writing boring blog content.

GA Release VMware PowerCLI 12.1.0 | Announcement, information, and links

Posted on

VMware announced the GA release of VMware PowerCLI 12.1.0.

See the base table for all the technical enablement links, including a VMworld 2020 session and a new Hands On Lab.

 

Release Overview
VMware PowerCLI is a command-line and scripting tool built on Windows PowerShell, and provides more than 700 cmdlets for managing and automating vSphere, VMware Cloud Director, vRealize Operations Manager, vSAN, NSX-T, VMware Cloud Services, VMware Cloud on AWS, VMware HCX, VMware Site Recovery Manager, and VMware Horizon environments.
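If you're new to PowerCLI, getting 12.1.0 installed and connected looks roughly like this (standard cmdlets from the PowerShell Gallery; the vCenter name is a placeholder for your own environment):

# Minimal sketch: install/update PowerCLI and connect to a vCenter.
Install-Module -Name VMware.PowerCLI -Scope CurrentUser                  # or Update-Module if already installed
Connect-VIServer -Server vcsa.lab.local -Credential (Get-Credential)     # vcsa.lab.local is a placeholder
Get-VM | Select-Object Name, PowerState                                  # quick check that the session works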

 

What’s New
VMware PowerCLI 12.1.0 introduces the following new features, changes, and improvements:

Added cmdlets for the following (see the discovery sketch after this list)

  • vSphere Lifecycle Manager
  • Managing Workload Management clusters in vSphere with Tanzu
  • Specifying cluster’s EDRS policies in VMware Cloud on AWS
  • Managing Cloud Native Storage volumes
  • Managing vSAN secure disk wipe
  • Managing Virtual Volume (vVol) storage containers
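A simple way to see what a given module actually exposes after you update is Get-Command. The module names below come from the Updated Components list further down; which of the new cmdlets lands in which module isn't spelled out in the release notes, so treat this as a discovery sketch rather than a definitive mapping:

# Hedged sketch: list the cmdlets shipped by a couple of the updated modules.
Get-Command -Module VMware.VimAutomation.WorkloadManagement | Select-Object Name   # Workload Management / Tanzu
Get-Command -Module VMware.CloudServices | Select-Object Name                      # VMware Cloud Services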

New Features

  • The VMware Cloud on AWS module is extended with support for the i3en host type, large appliance size SDDCs, and adding new hosts to specific clusters
  • Implemented seamless integration between the VMware Cloud on AWS module and the vSphere module to allow an easier way to connect to the cloud SDDC
  • Content Library enhancements to allow uploading from internet and datastore URLs

Added support for

  • Secure Encrypted Virtualization
  • Site Recovery Manager 8.3.1
  • VMware Horizon 7.13
Upgrade Considerations
Ensure the following software is present on your system (a quick version check is sketched after the table):

OS Type | .NET Version | PowerShell Version
Windows | .NET Framework 4.7.2 or later | Windows PowerShell 5.1
Linux | .NET Core 3.1 | PowerShell 7
macOS | .NET Core 3.1 | PowerShell 7
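A quick way to confirm your PowerShell version and the currently installed PowerCLI version before upgrading (standard cmdlets, nothing PowerCLI-specific beyond the module name):

# Check the PowerShell version against the table above
$PSVersionTable.PSVersion
# Check which PowerCLI version (if any) is currently installed
Get-Module -Name VMware.PowerCLI -ListAvailable | Select-Object Name, Version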
Updated Components
In VMware PowerCLI 12.1.0, the following modules have been updated:

  • VMware.PowerCLI: Provides a root module which other modules are dependent on. This ensures the PowerCLI product can be installed, upgraded, and removed as a complete package if needed.
  • VMware.VimAutomation.Core: Provides cmdlets for automated administration of the vSphere environment.
  • VMware.VimAutomation.Common: Provides functionality that is common to all PowerCLI modules. This module has no cmdlets, but is required for other modules to function correctly.
  • VMware.VimAutomation.Sdk: Provides SDK functionality that is needed by all PowerCLI modules. This module has no cmdlets, but is required for other modules to function correctly.
  • VMware.VimAutomation.Vds: Provides cmdlets for managing vSphere distributed switches and distributed port groups.
  • VMware.VimAutomation.Cis.Core: Provides cmdlets for managing vSphere Automation SDK servers.
  • VMware.VimAutomation.Storage: Provides cmdlets for managing vSphere policy-based storage.
  • VMware.VimAutomation.StorageUtility: Provides utility scripts for storage.
  • VMware.VumAutomation: Provides cmdlets for automating vSphere Update Manager features.
  • VMware.VimAutomation.Srm: Provides cmdlets for managing VMware Site Recovery Manager features.
  • VMware.VimAutomation.HorizonView: Provides cmdlets for automating VMware Horizon features.
  • VMware.VimAutomation.Vmc: Provides cmdlets for managing VMware Cloud on AWS features.
  • VMware.Vim: Provides vSphere low-level binding libraries. This module has no cmdlets.
  • VMware.VimAutomation.Security: Provides cmdlets for managing vSphere Security, including virtual Trusted Platform Module.
  • VMware.VimAutomation.Hcx: Provides cmdlets for managing VMware HCX features.
  • VMware.VimAutomation.WorkloadManagement: Provides cmdlets for managing Project Pacific.
  • VMware.CloudServices: Provides cmdlets for managing VMware Cloud Services
Enablement Links
Release Notes Click Here  |  What’s New in This Release  |  Resolved Issues  |  Known Issues
docs.vmware.com/pCLI Introduction  |  Installing  |  Configuring  |  cmdlet Reference
Compatibility Information Interoperability Matrix  |  Upgrade Path Matrix
Blogs & Infolinks VMware What’s New pCLI vRLCM  |  VMware What’s New pCLI with AWS  |  PM’s Blog pCLI SSO
Download Click Here
VMworld 2020 Sessions PowerCLI: Into the Deep [HCP1286]
Hands On Labs HOL-2111-04-SDC – VMware vSphere Automation – PowerCLI