GA Release #VMware #ESXi 7.0 Update 2 | ISO Build 17630552 | Announcement, information, and links

VMware announced the GA Release of the following:

  • VMware ESXi 7.0 Update 2

See the table at the end of this post for all the technical enablement links.

Product Overview
ESXi 7.0 Update 2 | ISO Build 17630552
What’s New
  • ESXi 7.0 Update 2 supports vSphere Quick Boot on the following servers:
    • Dell Inc. PowerEdge M830, PowerEdge R830
    • HPE ProLiant XL675d Gen10 Plus
    • Lenovo ThinkSystem SR 635, ThinkSystem SR 655
  • Some ESXi configuration files become read-only: As of ESXi 7.0 Update 2, configuration formerly stored in /etc/keymap, /etc/vmware/welcome, /etc/sfcb/sfcb.cfg, /etc/vmware/snmp.xml, /etc/vmware/logfilters, /etc/vmsyslog.conf, and /etc/vmsyslog.conf.d/*.conf now resides in the ConfigStore database. You can modify this configuration only by using ESXCLI commands, not by editing the files (see the example after this list). For more information, see VMware knowledge base articles 82637 and 82638.
  • VMware vSphere Virtual Volumes statistics for better debugging: With ESXi 7.0 Update 2, you can track performance statistics for vSphere Virtual Volumes to quickly identify issues such as latency in third-party VASA provider responses. By using a set of commands, you can get statistics for all VASA providers in your system, or for a specified namespace or entity in the given namespace, or enable statistics tracking for the complete namespace. For more information, see Collecting Statistical Information for vVols.
  • NVIDIA Ampere architecture support: vSphere 7.0 Update 2 adds support for the NVIDIA Ampere architecture, which enables you to perform high-end AI/ML training and ML inference workloads by using the accelerated capacity of the A100 GPU. In addition, vSphere 7.0 Update 2 improves GPU sharing and utilization by supporting the Multi-Instance GPU (MIG) technology. With vSphere 7.0 Update 2, you also see enhanced performance of device-to-device communication, building on the existing NVIDIA GPUDirect functionality, by enabling Address Translation Services (ATS) and Access Control Services (ACS) at the PCIe bus layer in the ESXi kernel. Read more here…

  • Support for Mellanox ConnectX-6 200G NICs: ESXi 7.0 Update 2 supports Mellanox Technologies MT28908 Family (ConnectX-6) and Mellanox Technologies MT2892 Family (ConnectX-6 Dx) 200G NICs.
  • Performance improvements for AMD Zen CPUs: With ESXi 7.0 Update 2, out-of-the-box optimizations can increase AMD Zen CPU performance by up to 30% in various benchmarks. The updated ESXi scheduler takes full advantage of the AMD NUMA architecture to make the most appropriate placement decisions for virtual machines and containers. AMD Zen CPU optimizations allow a higher number of VMs or container deployments with better performance.
  • Reduced compute and I/O latency, and jitter for latency sensitive workloads: Latency sensitive workloads, such as in financial and telecom applications, can see significant performance benefit from I/O latency and jitter optimizations in ESXi 7.0 Update 2. The optimizations reduce interference and jitter sources to provide a consistent runtime environment. With ESXi 7.0 Update 2, you can also see higher speed in interrupt delivery for passthrough devices.
  • Confidential vSphere Pods on a Supervisor Cluster in vSphere with Tanzu: Starting with vSphere 7.0 Update 2, you can run confidential vSphere Pods, keeping guest OS memory encrypted and protected against access from the hypervisor, on a Supervisor Cluster in vSphere with Tanzu. You can configure confidential vSphere Pods by adding Secure Encrypted Virtualization-Encrypted State (SEV-ES) as an extra security enhancement. For more information, see Deploy a Confidential vSphere Pod.
  • vSphere Lifecycle Manager fast upgrades: Starting with vSphere 7.0 Update 2, you can significantly reduce upgrade time and system downtime, and minimize system boot time, by suspending virtual machines to memory and using the Quick Boot functionality. You can configure vSphere Lifecycle Manager to suspend virtual machines to memory instead of migrating them, powering them off, or suspending them to disk when you update an ESXi host. For more information, see Configuring vSphere Lifecycle Manager for Fast Upgrades.
  • Encrypted Fault Tolerance log traffic: Starting with vSphere 7.0 Update 2, you can encrypt Fault Tolerance log traffic to get enhanced security. vSphere Fault Tolerance performs frequent checks between the primary and secondary VMs to enable quick resumption from the last successful checkpoint. The checkpoint contains the VM state that has been modified since the previous checkpoint. Encrypting the log traffic prevents malicious access or network attacks.
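
For the read-only configuration files noted above, a minimal sketch of the equivalent ESXCLI workflow for a few of the affected files (run in an ESXi shell or over SSH; the host values below are placeholders, and the exact options should be verified with esxcli ... --help on your build):

# Replaces editing /etc/vmware/welcome (DCUI welcome message)
esxcli system welcomemsg set -m "Authorized administrators only"

# Replaces editing /etc/vmsyslog.conf (remote syslog target), then reload the daemon
esxcli system syslog config set --loghost="udp://192.0.2.50:514"
esxcli system syslog reload

# Replaces editing /etc/vmware/snmp.xml (enable the SNMP agent with a community string)
esxcli system snmp set --communities public --enable true
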
Upgrade Considerations
  • In the Lifecycle Manager plug-in of the vSphere Client, the release date for the ESXi 7.0.2 base image, profiles, and components is 2021-02-17. This is expected. Only the rollup bulletin has a release date of 2021-03-09, so that you can filter correctly by release date.
  • Starting with vSphere 7.0, VMware uses components for packaging VIBs along with bulletins. The ESXi and esx-update bulletins are dependent on each other. Always include both in a single ESXi host patch baseline or include the rollup bulletin in the baseline to avoid failure during host patching.
  • When patching ESXi hosts by using VMware Update Manager from a version prior to ESXi 7.0 Update 2, it is strongly recommended to use the rollup bulletin in the patch baseline. If you cannot use the rollup bulletin, be sure to include all of the following packages in the patching baseline; if they are not included, the update operation fails (a quick way to check what is already installed is shown after this list):
    • VMware-vmkusb_0.1-1vmw.701.0.0.16850804 or higher
    • VMware-vmkata_0.1-1vmw.701.0.0.16850804 or higher
    • VMware-vmkfcoe_1.0.0.2-1vmw.701.0.0.16850804 or higher
    • VMware-NVMeoF-RDMA_1.0.1.2-1vmw.701.0.0.16850804 or higher
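
A quick way to confirm which of these driver VIBs are already installed on a host before you patch; this is a hedged sketch using the standard esxcli software commands (the grep pattern simply matches the component names listed above):

# List installed VIBs and filter for the drivers called out in the baseline requirements
esxcli software vib list | grep -iE 'vmkusb|vmkata|vmkfcoe|nvme'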

Product Support Notices

  • Removal of SHA1 from Secure Shell (SSH): In vSphere 7.0 Update 2, the SHA-1 cryptographic hashing algorithm is removed from the SSHD default configuration.
  • Standard formats of log files and syslog transmissions: In a future major ESXi release, VMware plans to standardize the formats of all ESXi log files and syslog transmissions. This standardization affects the metadata associated with each log file line or syslog transmission. For example, the time stamp, programmatic source identifier, message severity, and operation identifier data. For more information, visit https://core.vmware.com/esxi-log-message-formats. 

 

  • Impact on ESXi upgrade due to an expired ESXi VIB certificate: see VMware knowledge base article 76555.

 

Refer to the Interoperability Matrix for more product support notices.

Technical Enablement
Release Notes Click Here  |  What’s New  |  Patches Contained in this Release  |  Product Support Notices  |  Resolved Issues  |  Known Issues
docs.vmware.com/vSphere  Installation and Setup  |  Upgrade  |  vSphere Virtual Machine Administration  |  vSphere Host Profiles  |  vSphere Networking

vSphere Storage  |  vSphere Security  |  vSphere Resource Management  |  vSphere Availability  |  Monitoring & Performance

vSphere Single Host Management – VMware Host Client

More Documentation vSphere Security Configuration Guide 7
Compatibility Information Configuration Maximums  |  Interoperability Matrix  |  Upgrade Paths  |  ports.vmware.com/vSphere7
Download Click Here
Blogs Multiple Machine Learning Workloads Using GPUs: New Features in vSphere 7 Update 2

Introducing the vSphere Native Key Provider

ESXi Log Message Formats

Videos Quicker ESXi Host Upgrades with Suspend to Memory (4 min video)

Introduction to vSphere Native Key Provider video (9 min video)

HOLs HOL-2111-03-SDC – VMware vSphere – Security Getting Started

Explore the new security features of vSphere, including the Trusted Platform Module (TPM) 2.0 for ESXi, the Virtual TPM 2.0 for virtual machines (VM), and support for Microsoft Virtualization Based Security (VBS)

HOL-2111-05-SDC – VMware vSphere Automation and Development – API and SDK

The vSphere Automation API and SDK are developer-friendly and have simplified interfaces.

 

GA Release #VMware vCenter Server 7.0 Update 2 | ISO Build 17694817 | Announcement, information, and links

VMware announced the GA Release of the following:

  • VMware vCenter Server 7.0 Update 2

See the table at the end of this post for all the technical enablement links.

Product Overview
vCenter Server 7.0 Update 2 | ISO Build 17694817
What’s New
New in vSphere 7 Update 2, vMotion can take full advantage of high-bandwidth NICs for even faster live migrations. Unlike in previous vSphere versions, no manual tuning is required to achieve this.

The evolution of vMotion

  • vSphere with Tanzu Load Balancer support encompasses access to the Supervisor Cluster, Tanzu Kubernetes Grid clusters, and Kubernetes Services of type LoadBalancer deployed in the TKG clusters. Users are allocated a single Virtual IP (VIP) to access the Supervisor Cluster Kubernetes API. Traffic is spread across the three Kubernetes Controllers that make up the Supervisor Cluster. Read more here…

  • New CLI deployment of vCenter Server: With vCenter Server 7.0 Update 2, by using the vCSA_with_cluster_on_ESXi.json template, you can bootstrap a single-node vSAN cluster and enable vSphere Lifecycle Manager cluster image management when deploying vCenter Server on an ESXi host (see the sketch after this list). For more information, see JSON Templates for CLI Deployment of the vCenter Server Appliance.
  • Parallel remediation on hosts in clusters that you manage with vSphere Lifecycle Manager baselines: With vCenter Server 7.0 Update 2, to reduce the time needed for patching or upgrading the ESXi hosts in your environment, you can enable vSphere Lifecycle Manager to remediate in parallel the hosts within a cluster by using baselines. You can remediate in parallel only ESXi hosts that are already in maintenance mode. You cannot remediate in parallel hosts in a vSAN cluster. For more information, see Remediating ESXi Hosts Against vSphere Lifecycle Manager Baselines and Baseline Groups.
  • Improved vSphere Lifecycle Manager error messages: vCenter Server 7.0 Update 2 introduces improved error messages that help you better understand the root cause of issues such as nodes skipped during upgrades and updates, hardware compatibility problems, or ESXi installation and update failures during vSphere Lifecycle Manager operations.
  • Increased scalability with vSphere Lifecycle Manager: With vCenter Server 7.0 Update 2, the number of ESXi hosts that a single vSphere Lifecycle Manager image can manage increases from 280 to 400.
  • Upgrade and migration from NSX-T-managed Virtual Distributed Switches to vSphere Distributed Switches: By using vSphere Lifecycle Manager baselines, you can upgrade your system to vSphere 7.0 Update 2 and simultaneously migrate from NSX-T-managed Virtual Distributed Switches to vSphere Distributed Switches for clusters enabled with VMware NSX-T Data Center. For more information, see Using vSphere Lifecycle Manager to Migrate an NSX-T Virtual Distributed Switch to a vSphere Distributed Switch.
  • Create new clusters by importing the desired software specification from a single reference host: With vCenter Server 7.0 Update 2, you can save time and effort, and ensure that all necessary components and images are available in the vSphere Lifecycle Manager depot, by importing the desired software specification from a single reference host when creating a new cluster. You do not need to compose or validate a new image: during image import, vSphere Lifecycle Manager extracts the software specification from the reference host, along with the software depot associated with the image, into the vCenter Server instance where you create the cluster. You can import an image from an ESXi host in the same or a different vCenter Server instance. You can also import an image from an ESXi host that is not managed by vCenter Server, and either move the reference host to the cluster or use the image on the host and seed it to the new cluster without moving the host. For more information, see Create a Cluster That Uses a Single Image by Importing an Image from a Host.
  • Enable vSphere with Tanzu on a cluster managed by the vSphere Lifecycle Manager: As a vSphere administrator, you can enable vSphere with Tanzu on vSphere clusters that you manage with a single VMware vSphere Lifecycle Manager image. You can then use the Supervisor Cluster while it is managed by vSphere Lifecycle Manager. For more information, see Working with vSphere Lifecycle Manager.
  • vSphere Lifecycle Manager fast upgrades: Starting with vSphere 7.0 Update 2, you can configure vSphere Lifecycle Manager to suspend virtual machines to memory instead of migrating them, powering them off, or suspending them to disk. For more information, see Configuring vSphere Lifecycle Manager for Fast Upgrades.
  • Confidential vSphere Pods on a Supervisor Cluster in vSphere with Tanzu: Starting with vSphere 7.0 Update 2, you can run confidential vSphere Pods, keeping guest OS memory encrypted and protected against access from the hypervisor, on a Supervisor Cluster in vSphere with Tanzu. You can configure confidential vSphere Pods by adding Secure Encrypted Virtualization-Encrypted State (SEV-ES) as an extra security enhancement. For more information, see Deploy a Confidential vSphere Pod.
  • In-product feedback: vCenter Server 7.0 Update 2 introduces an in-product feedback option in the vSphere Client that enables you to provide real-time ratings and comments on key VMware vSphere workflows and features.
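
For the CLI deployment item above, this is a minimal sketch of how the template-driven install is typically run from the vCenter Server appliance installer media. The paths and the JSON contents are environment-specific, and the flag names below come from the standard vcsa-deploy CLI installer; verify them with vcsa-deploy install --help on your installer version:

# Run the built-in prechecks against the filled-in template (no changes are made)
./vcsa-deploy install --accept-eula --precheck-only /path/to/vCSA_with_cluster_on_ESXi.json

# Then perform the actual deployment, which bootstraps the single-node vSAN cluster
./vcsa-deploy install --accept-eula --acknowledge-ceip /path/to/vCSA_with_cluster_on_ESXi.json
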
Upgrade Considerations
Product Support Notices

  • Deprecation of SSPI, CAC and RSA: In a future vSphere release, VMware plans to discontinue support for Windows Session Authentication (SSPI), Common Access Card (CAC), and RSA SecurID for vCenter Server. In place of SSPI, CAC, or RSA SecurID, users and administrators can configure and use Identity Federation with a supported Identity Provider to sign in to their vCenter Server system.
  • Removal of SHA1 from Secure Shell (SSH): In vSphere 7.0 Update 2, the SHA-1 cryptographic hashing algorithm is removed from the SSHD default configuration.
  • Support for Federal Information Processing Standards (FIPS): FIPS will be added to and enabled by default in vCenter Server in a future release of vSphere. FIPS support is also available but not enabled by default in vCenter Server 7.0 Update 2, and can be enabled by following the steps described in vCenter Server and FIPS.
  • Client plug-ins compliance with FIPS: In a future vSphere release, all client plug-ins for vSphere must become compliant with the Federal Information Processing Standards (FIPS). When FIPS is enabled by default in the vCenter Server, you cannot use local plug-ins that do not conform to the standards. For more information, see Preparing Local Plug-ins for FIPS Compliance.
  • PowerCLI support for updating vSphere Native Key Providers: PowerCLI support for updating vSphere Native Key Providers will be added in an upcoming PowerCLI release. For more information, see VMware knowledge base article 82732.
  • Site Recovery Manager 8.4 and vSphere Replication 8.4 support: If virtual machine encryption is switched on, Site Recovery Manager 8.4 and vSphere Replication 8.4 do not support vSphere 7.0 Update 2.

Refer to our Interoperability Matrix for more product support notices.

Technical Enablement
Release Notes Click Here  |  What’s New  | Patches Contained in this Release  |  Product Support Notices  |  Resolved Issues  |  Known Issues
docs.vmware.com/vSphere vCenter Server Installation and Setup  |  vCenter Server Upgrade  |  vSphere Authentication  |  Managing Host and Cluster Lifecycle

vCenter Server Configuration  |   vCenter Server and Host Management

More Documentation vSphere with Tanzu   |  vSphere Bitfusion
Compatibility Information Configuration Maximums  |  Interoperability Matrix  |  Upgrade Paths  |  ports.vmware.com/vSphere7
Download Click Here
Blogs Announcing: vSphere 7 Update 2 Release

Faster vMotion Makes Balancing Workloads Invisible

vSphere With Tanzu – NSX Advanced Load Balancer Essentials

The AI-Ready Enterprise Platform: Unleashing AI for Every Enterprise

REST API Modernization

Videos What’s New (35 mins)

Learn About the vMotion Improvements in vSphere 7 (8 min video)

vSphere Lifecycle Manager – Host Seeding Demo (5 min video)

HOLs HOL-2104-01-SDC – Introduction to vSphere Performance

This lab showcases what is new in vSphere 7.0 with regards to performance.

HOL-2113-01-SDC – vSphere with Tanzu

vSphere 7 with Tanzu is the new generation of vSphere for modern applications and it is available standalone on vSphere or as part of VMware Cloud Foundation

HOL-2147-01-ISM Accelerate Machine Learning in vSphere Using GPUs

In this lab, you will learn how you can accelerate Machine Learning Workloads on vSphere using GPUs. VMware vSphere combines GPU power with the management benefits of vSphere

 

 

GA Release #VMware #NSX-T Data Center 3.1.1 | Build 17483185 | Announcement, information, and links

VMware announced the GA Release of VMware NSX-T Data Center 3.1.1.

See the table at the end of this post for all the technical enablement links.

Product Overview
VMware NSX-T Data Center 3.1.1   |   Build 17483185
What’s New
NSX-T Data Center 3.1.1 provides a variety of new features and enhancements for virtualized networking and security across private, public, and multi-cloud environments. Highlights include the following focus areas.

L3 Networking

OSPFv2 Support on Tier-0 Gateways

NSX-T Data Center now supports OSPF version 2 as a dynamic routing protocol between Tier-0 gateways and physical routers. OSPF can be enabled only on external interfaces, and all of them must be in the same OSPF area (standard area or NSSA), even across multiple Edge Nodes. This simplifies migration to NSX-T Data Center for existing NSX for vSphere deployments that already use OSPF.

NSX Data Center for vSphere to NSX-T Data Center Migration

Support of Universal Objects Migration for a Single Site

You can migrate your NSX Data Center for vSphere environment deployed with a single NSX Manager in Primary mode (not secondary). As this is a single NSX deployment, the objects (local and universal) are migrated to local objects on a local NSX-T.  This feature does not support cross-vCenter environments with Primary and Secondary NSX Managers.

Migration of NSX-V Environment with vRealize Automation – Phase 2

The Migration Coordinator interacts with vRealize Automation (vRA) to migrate environments where vRealize Automation provides automation capabilities. This release adds additional topologies and use cases to those already supported in NSX-T 3.1.0.

Modular Migration for Hosts and Distributed Firewall

The NSX-T Migration Coordinator adds a new mode to migrate only the distributed firewall configuration and the hosts, leaving the logical topology (L3 topology, services) for you to complete. You can benefit from the in-place migration offered by the Migration Coordinator (hosts moved from NSX-V to NSX-T while going through maintenance mode, firewall states and memberships maintained, layer 2 extended between NSX for vSphere and NSX-T during migration) while you, or third-party automation, deploy the Tier-0/Tier-1 gateways and related services, giving greater flexibility in terms of topologies. This feature is available from the UI and the API.

Modular Migration for Distributed Firewall available from UI

The NSX-T user interface now exposes the Modular Migration of firewall rules. This feature was introduced in 3.1.0 (API only) and allows the migration of firewall configurations, memberships and state from an NSX Data Center for vSphere environment to an NSX-T Data Center environment. This feature simplifies lift-and-shift migration where you vMotion VMs between an environment with hosts with NSX for vSphere and another environment with hosts with NSX-T by migrating firewall rules and keeping states and memberships (hence maintaining security between VMs in the old environment and the new one).

Fully Validated Scenario for Lift and Shift Leveraging vMotion, Distributed Firewall Migration and L2 Extension with Bridging

This feature supports the complete scenario for migration between two parallel environments (lift and shift), leveraging the NSX-T bridge to extend L2 between NSX for vSphere and NSX-T, the modular Distributed Firewall migration, and vMotion of workloads.

Identity Firewall

NSX Policy API support for Identity Firewall configuration – Setup of Active Directory, for use in Identity Firewall rules, can now be configured through NSX Policy API (https://<nsx-mgr>/policy/api/v1/infra/firewall-identity-stores), equivalent to existing NSX Manager API (https://<nsx-mgr>/api/v1/directory/domains).
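
A hedged example of calling the endpoints above with curl (the manager address and credentials are placeholders; NSX Manager accepts basic authentication here, and the paths are exactly as listed above):

# List identity stores (Active Directory) configured for Identity Firewall via the Policy API
curl -k -u 'admin:<password>' https://<nsx-mgr>/policy/api/v1/infra/firewall-identity-stores

# Equivalent read through the existing NSX Manager API
curl -k -u 'admin:<password>' https://<nsx-mgr>/api/v1/directory/domains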

Advanced Load Balancer Integration

Support Policy API for Avi Configuration

The NSX Policy API can be used to manage the NSX Advanced Load Balancer configurations of virtual services and their dependent objects. The unique object types are exposed via the https://<nsx-mgr>/policy/api/v1/infra/alb-<objecttype> endpoints.

Service Insertion Phase 2

This feature supports the Transparent LB in NSX-T advanced load balancer (Avi). Avi sends the load balanced traffic to the servers with the client’s IP as the source IP. This feature leverages service insertion to redirect the return traffic back to the service engine to provide transparent load balancing without requiring any server-side modification.

Edge Platform and Services

DHCPv4 Relay on Service Interface

Tier-0 and Tier-1 Gateways support DHCPv4 Relay on Service Interfaces, enabling a third-party DHCP server to be located on a physical network.

AAA and Platform Security

Guest Users – Local User accounts: NSX customers integrate their existing corporate identity store to onboard users for normal operations of NSX-T. However, a limited set of local users is still essential for identity and access management in several scenarios: (1) bootstrapping and operating NSX during the early stages of deployment, before identity sources are configured; (2) failure of communication with, or access to, the corporate identity repository; and (3) managing NSX in a specific compliant state that caters to industry or federal regulations. In the first two cases, local users are effective in bringing NSX-T back to normal operational status. To enable these use cases and ease of operations, two guest local users have been introduced in 3.1.1, in addition to the existing admin and audit local users. The NSX admin has extended privileges to manage the lifecycle of these users (for example, password rotation), including the ability to customize and assign appropriate RBAC permissions. Note that the local user capability is available via API and UI on both NSX-T Local Managers (LM) and Global Managers (GM), but is unavailable on edge nodes in 3.1.1. The guest users are disabled by default, must be explicitly activated for consumption, and can be disabled at any time.
FIPS-Compliant Bouncy Castle Upgrade: NSX-T 3.1.1 contains an updated version of the FIPS-compliant Bouncy Castle module (v1.0.2.1). The Bouncy Castle module is a collection of Java-based cryptographic libraries, functions, and APIs, and is used extensively in NSX-T Manager. The upgraded version resolves critical security bugs and facilitates compliant and secure operation of NSX-T.

NSX Cloud

NSX Marketplace Appliance in Azure: Starting with NSX-T 3.1.1, you have the option to deploy the NSX management plane and control plane fully in the public cloud (Azure only for NSX-T 3.1.1; AWS will be supported in a future release). The NSX management/control plane components and the NSX Cloud Public Cloud Gateway (PCG) are packaged as VHDs and made available in the Azure Marketplace. For a greenfield deployment in the public cloud, you also have the option to use a 'one-click' Terraform script to perform the complete installation of NSX in Azure.

NSX Cloud Service Manager HA: If you deploy the NSX management/control plane in the public cloud, NSX Cloud Service Manager (CSM) also has HA. The PCG is already deployed in Active-Standby mode, which provides HA.

NSX Cloud for Horizon Cloud VDI enhancements: Starting with NSX-T 3.1.1, when using NSX Cloud to protect Horizon VDIs in Azure, you can install the NSX agent as part of the Horizon Agent installation in the VDIs. This feature also addresses one of the challenges of having multiple components (VDIs, PCG, etc.) and their respective OS versions: any version of the PCG can work with any version of the agent on the VM. If there is an incompatibility, it is displayed in the NSX Cloud Service Manager (CSM), leveraging the existing framework.

Operations

UI-based Upgrade Readiness Tool for migration from NVDS to VDS with NSX-T Data Center

To migrate Transport Nodes from NVDS to VDS with NSX-T, you can use the Upgrade Readiness Tool present in the Getting Started wizard in the NSX Manager user interface. Use the tool to get recommended VDS with NSX configurations, create or edit the recommended VDS with NSX, and then automatically migrate the switch from NVDS to VDS with NSX while upgrading the ESX hosts to vSphere Hypervisor (ESXi) 7.0 U2.

Licensing

Enable VDS in all vSphere Editions for NSX-T Data Center Users: Starting with NSX-T 3.1.1, you can use VDS with all vSphere editions. You are entitled to an equivalent number of CPU licenses for VDS, ensuring that you can instantiate a VDS.

Container Networking and Security

This release supports a maximum of 50 ESXi clusters per vCenter Server enabled with vLCM, on clusters enabled for vSphere with Tanzu, as documented at configmax.vmware.com.

Upgrade Considerations
API Deprecations and Behavior Changes

Retention Period of Unassigned Tags: In NSX-T 3.0.x, NSX Tags with 0 Virtual Machines assigned are automatically deleted by the system after five days. In NSX-T 3.1.0, the system task has been modified to run on a daily basis, cleaning up unassigned tags that are older than one day. There is no manual way to force delete unassigned tags.

Duplicate certificate extensions not allowed:

Starting with NSX-T 3.1.1, NSX-T rejects x509 certificates with duplicate extensions (or fields), following RFC guidelines and industry best practices for secure certificate management. Note that this does not impact certificates that are already in use prior to upgrading to 3.1.1; the checks are enforced when NSX administrators attempt to replace existing certificates or install new certificates after NSX-T 3.1.1 has been deployed.
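
Before replacing or installing a certificate on an NSX-T 3.1.1 manager, you can inspect it for duplicated extensions; a minimal sketch using standard OpenSSL tooling (the file name is a placeholder):

# Print the certificate's X509v3 extensions; each extension name should appear only once
openssl x509 -in new-cert.pem -noout -text | grep -A1 'X509v3'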

Enablement Links
Release Notes Click Here  |  What’s New   |  Compatibility & System Requirements  |  API Deprecations & Behavior Changes

API & CLI Resources  |  Resolved Issues  |  Known Issues

docs.vmware.com/NSX-T Click Here  |   Installation Guide  |  Administration Guide  |  Upgrade Guide  |  Migration Coordinator Guide
Upgrading Docs Data Center Upgrade Checklist  |  Preparing to Upgrade  |  Upgrading  |  Upgrading Cloud Components  |  Post-Upgrade Tasks

Troubleshooting Upgrade Failures  |  Upgrading Federation Deployment

NSX Container Guides For Kubernetes and Cloud Foundry – Installation & Administration Guide  |  For OpenShift – Installation & Administration Guide
API Guides REST API Reference Guide  |  CLI Reference Guide  |  Global Manager REST API
Download Click Here
Blogs NSX-T Data Center Migration Coordinator – Modular Migration
Compatibility & Requirements Interoperability  |  Upgrade Paths  |  ports.vmware.com/NSX-T

 

Possible Security issues with #solarwinds #loggly and #Trojan #BrowserAssistant PS

Lately, I haven't had much involvement with malware, trojans, and viruses. However, most recently Norton Family started to alert me to a few websites I didn't recognize on one of my personal PCs. Norton reported these three sites: loggly.com | pads289.net | sun346.net. Something I also noticed was that all three sites were being contacted at the same date/time, and the Pads/Sun sites had the same ID number in their URL (see pic below). This behavior just seemed odd. I didn't initially recognize any of these sites, but a quick search revealed loggly.com is a SolarWinds product. My mind started to wander: could this be related to their recent security issues? Just to be clear, this post isn't about any current issues with SolarWinds, VMware, or others. These issues were located on my personal network. I'm posting this information because I know many of us are working from home, have kids doing online school, and the last thing we need is a pesky virus slowing things down.

I use Norton Family on all of my personal PCs, and the first thing I did was block the sites on the affected PC and via the Internet firewall.

Next, I started searching the Internet to see what I could find out about these three sites. Checking the URLs against multiple security sites turned up no warnings and no blacklists, and whois looked normal; pretty much nothing alarming. In fact, I was even running the Sophos UTM Home firewall, and it never alerted on this either. Going directly to these sites resulted in a blank page. Additionally, the PC seemed to run normally: no popups, no redirection of sites. Really it had no issues at all, except it just kept going to these odd sites.

That’s when I found urlscan.io.  I pointed it at one of the sites and I noticed there were several update.txt files.

When I clicked on update.txt, it brought me to this screen, where I could view the text file via the screenshot.

One thing I noticed about the text file was 'Realistic Media Inc.' and 'Browser Assistant', and that it was MSI installable. These looked like programs that could be installed on a PC.

Looking at the installed programs on the affected PC, I found a match.

A quick search, and sure enough there were lots of hits on this Trojan.

Next I ran Microsoft Safety Scanner, which removed some of it, and then I uninstalled the 'Browser Assistant' program.

Lastly, I sent an email to AWS and SolarWinds asking them to look into this issue.

Within 24 hours Amazon responded with: "The security concern that you have reported is specific to a customer application and / or how an AWS customer has chosen to use an AWS product or service. To be clear, the security concern you have reported cannot be resolved by AWS but must be addressed by the customer, who may not be aware of or be following our recommended security best practices. We have passed your security concern on to the specific customer for their awareness and potential mitigation."

Within 24 hours SolarWinds responded that they are working with me to see if there are any issues with this.

Summary:

This pattern for Trojans or malware/adware probably isn't new to security folks, but either way I hope this blog helps you better understand odd behavior on your personal network.

Thanks for reading and please do reach out if you have any questions.

Reference Links / Tools:

If you like my ‘no-nonsense’ videos and blogs that get straight to the point… then post a comment or let me know… Else, I’ll start posting really boring content!

GA Release #VMware #vSphere + #vSAN 7.0 Update 1c/P02 | Announcement, information, and links

Announcing GA Releases of the following

  • VMware vSphere 7.0 Update 1c/P02 (Including Tanzu)
  • VMware vSAN™ 7.0 Update 1c/P02

Note: The included ESXi patch pertains to the Low severity Security Advisory for VMSA-2020-0029 & CVE-2020-3999

See the table at the end of this post for all the technical enablement links.

Release Overview
vCenter Server 7.0 Update 1c | ISO Build 1732751

ESXi 7.0 Update 1c | ISO Build 17325551

What’s New vCenter
  • Physical NIC statistics: vCenter Server 7.0 Update 1c adds five physical NIC statistics: droppedRx, droppedTx, errorsRx, RxCRCErrors, and errorsTx, to the hostd.log file at /var/run/log/hostd.log, to enable you to detect uncorrected networking errors and take corrective action (see the example after this list).
  • Advanced Cross vCenter vMotion: With vCenter Server 7.0 Update 1c, in the vSphere Client, you can use the Advanced Cross vCenter vMotion feature to manage the bulk migration of workloads across vCenter Server systems in different vCenter Single Sign-On domains. Advanced Cross vCenter vMotion does not depend on vCenter Enhanced Linked Mode or Hybrid Linked Mode and works for both on-premises and cloud environments. Advanced Cross vCenter vMotion facilitates your migration from VMware Cloud Foundation 3 to VMware Cloud Foundation 4, which includes vSphere with Tanzu Kubernetes Grid, and delivers a unified platform for both VMs and containers, allowing operators to provision Kubernetes clusters from vCenter Server. The feature also allows a smooth transition to the latest version of vCenter Server by simplifying workload migration from any vCenter Server instance of 6.x or later.
  • Parallel remediation on hosts in clusters that you manage with vSphere Lifecycle Manager baselines: With vCenter Server 7.0 Update 1c, you can run parallel remediation on ESXi hosts in maintenance mode in clusters that you manage with vSphere Lifecycle Manager baselines
  • Third-party plug-ins to manage services on the vSAN Data Persistence platform: With vCenter Server 7.0 Update 1c, you can enable third-party plug-ins to manage services on the vSAN Data Persistence platform from the vSphere Client, the same way you manage your vCenter Server system. For more information, see the vSphere with Tanzu Configuration and Management documentation.
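
For the physical NIC statistics item above, a quick hedged check from an ESXi shell to see whether any of the new counters are being reported (the log path and counter names are taken from the release note text):

# Look for the new physical NIC drop/error counters in the hostd log
grep -E 'droppedRx|droppedTx|errorsRx|RxCRCErrors|errorsTx' /var/run/log/hostd.log
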
What’s New vSphere With Tanzu
Supervisor Cluster

  • Supervisor Namespace Isolation with Dedicated T1 Router – Supervisor Clusters using the NSX-T network now use a new topology where each namespace has its own dedicated T1 router.
    • Newly created Supervisor Clusters use this new topology automatically.
    • Existing Supervisor Clusters are migrated to this new topology during an upgrade.
  • Supervisor Clusters Support NSX-T 3.1.0 – Supervisor Clusters are compatible with NSX-T 3.1.0.
  • Supervisor Cluster Version 1.16.x Support Removed – Supervisor Cluster version 1.16.x support is now removed. Supervisor Clusters running 1.16.x should be upgraded to a new version.

Tanzu Kubernetes Grid Service for vSphere

  • HTTP/HTTPS Proxy Support – Newly created Tanzu Kubernetes clusters can use a global HTTP/HTTPS proxy for egress traffic as well as for pulling container images from internet registries.
  • Integration with Registry Service – Newly created Tanzu Kubernetes clusters work out of the box with the vSphere Registry Service. Existing clusters, once updated to a new version, also work with the Registry Service.
  • Configurable Node Storage – Tanzu Kubernetes clusters can now mount an additional storage volume to virtual machines, thereby increasing available node storage capacity. This enables users to deploy larger container images that might exceed the default 16 GB root volume size.
  • Improved Status Information – WCPCluster and WCPMachine Custom Resource Definitions now implement conditional status reporting. Successful Tanzu Kubernetes cluster lifecycle management depends on a number of subsystems (for example, Supervisor, storage, networking), and understanding failures can be challenging. WCPCluster and WCPMachine CRDs now surface common status and failure conditions to ease troubleshooting.

Missing new default VM Classes introduced in vSphere 7.0 U1

  • After upgrading to vSphere 7.0.1, and then performing a vSphere Namespaces update of the Supervisor Cluster, running the command “kubectl get virtualmachineclasses” did not list the new VM class sizes 2x-large, 4x-large, 8x-large. This has been resolved and all Supervisor Clusters will be configured with the correct set of default VM Classes. 
What's New ESXi
  • With ESXi 7.0 Update 1c, you can use the --remote-host-max-msg-len parameter to set the maximum length of syslog messages, up to 16 KiB, before they must be split. By default, the ESXi syslog daemon (vmsyslogd) strictly adheres to the maximum message length of 1 KiB set by RFC 3164, and longer messages are split into multiple parts. Set the maximum message length to the smallest length supported by any of the syslog receivers or relays involved in the syslog infrastructure.
  • With ESXi 7.0 Update 1c, you can use the installer boot option systemMediaSize to limit the size of system storage partitions on the boot media. If your system has a small footprint that does not require the maximum 138 GB system-storage size, you can limit it to the minimum of 33 GB. The systemMediaSize parameter accepts the following values:
    • min (33 GB, for single disk or embedded servers)
    • small (69 GB, for servers with at least 512 GB RAM)
    • default (138 GB)
    • max (consume all available space, for multi-terabyte servers)

The selected value must fit the purpose of your system. For example, a system with 1 TB of memory must use at least the small option (69 GB) for system storage. To set the boot option at install time, for example systemMediaSize=small, refer to Enter Boot Options to Start an Installation or Upgrade Script. For more information, see VMware knowledge base article 81166.
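
A minimal sketch of applying the boot option interactively: at the ESXi installer boot screen, press Shift+O and append the option to the boot line that is already displayed (the value shown assumes a server with at least 512 GB of RAM, per the list above):

# Appended to the installer's existing boot options at the Shift+O prompt
<existing boot options> systemMediaSize=small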

VMSA-2020-0029 Information for ESXi
Advisory: VMSA-2020-0029 (Severity: Low)
CVSSv3 Range: 3.3
Issue date: 12/17/2020
CVE numbers: CVE-2020-3999
Synopsis: VMware ESXi, Workstation, Fusion and Cloud Foundation updates address a denial of service vulnerability (CVE-2020-3999)
ESXi 7 Patch Info: VMware Patch Release ESXi 7.0 ESXi70U1c-17325551
This section is derived from the full VMware Security Advisory VMSA-2020-0029 and covers ESXi only. It was accurate at the time of writing; reference the full VMSA for expanded or updated information.
What’s New vSAN
vSAN 7.0 Update 1c/P02 includes the following summarized fixes, as documented in the Resolved Issues sections for vCenter Server and ESXi:

  • Enhancements to the DOM scrubber functionality
  • Improvements in checksum verification during write prepare in LLOG
  • Persistence in network settings of witness appliance while creating witness VM
  • Enhancement in storage capacity/usage calculation on host level
  • NFS File bench performance improvements
  • LSOM fixes for random high write latency spikes in vSAN all-flash
  • File services improvements

 

Technical Enablement
Release Notes vCenter Click Here  |  What’s New  |  Patches Contained in this Release  |  Product Support Notices  |  Resolved Issues  |  Known Issues
Release Notes ESXi Click Here  |  What’s New  |  Patches Contained in this Release  |  Product Support Notices  |  Resolved Issues  |  Known Issues
Release Notes vSAN 7.0 U1 Click Here  |  What’s New  |  VMware vSAN Community  |  Upgrades for This Release  |  Limitations  |  Known Issues
Release Notes Tanzu Click Here  |  What’s New  |  Learn About vSphere with Tanzu  |  Known Issues
docs.vmware.com/vSphere vCenter Server Upgrade  |   ESXi Upgrade  |  Upgrading vSAN Cluster  |   Tanzu Configuration & Management
Download Click Here
Compatibility Information ports.vmware.com/vSphere 7 + vSAN  |  Configuration Maximums vSphere 7  |  Compatibility Matrix  |  Interoperability
VMSA Reference VMSA-2020-0029  |  VMware Patch Release ESXi 7.0 ESXi70U1c-17325551

GA Release VMware NSX Data Center for vSphere 6.4.9 | Announcement, information, and links

Announcing GA Releases of the following

  • VMware NSX Data Center for vSphere 6.4.9 (See the table at the end of this post for all the technical enablement links.)

 

Release Overview
VMware NSX Data Center for vSphere 6.4.9 | Build 17267008 

NSX for vSphere 6.4 End of General Support was extended to 01/16/2022

lifecycle.vmware.com

What’s New
NSX Data Center for vSphere 6.4.9 adds usability enhancements and addresses a number of specific customer bugs. 

  • vSphere 7.0 Update 1 Support
  • VMware NSX – Functionality Updates for vSphere Client (HTML): The following VMware NSX features are now available through the vSphere Client: Service Definitions for Guest Introspection and Network Introspection. For a list of supported functionality, please see VMware NSX for vSphere UI Plug-in Functionality in vSphere Client.
  • Guest Introspection: Adds the ability to change logging level without requiring a restart of 3rd-party Guest Introspection partner service.
Minimum Supported Versions & Deprecation Notes
VMware declares minimum supported versions. This content has been simplified; please view the full details in the Versions, System Requirements, and Installation section.

For vSphere 6.5:

Recommended: 6.5 Update 3 Build Number 14020092.
Important: If you are using NSX Guest Introspection on vSphere 6.5, vSphere 6.5 P03 or higher is recommended.

VMware Product Interoperability Matrix | NSX-V 6.4.9 & vSphere 6.5

For vSphere 6.7:

Recommended: 6.7 Update 2
Important:  If you are using NSX Guest Introspection on vSphere 6.7, please refer to Knowledge Base Article KB57248 prior to installing NSX 6.4.6, and consult VMware Customer Support for more information.

For vSphere 7, Update 1 is now supported

Note vSphere 6.0 has reached End of General Support and is not supported with NSX 6.4.7 onwards.

Guest Introspection for Windows

It is recommended that you upgrade VMware Tools to 10.3.10 before upgrading NSX for vSphere.

End of Life and End of Support Warnings

For information about NSX and other VMware products that must be upgraded soon, please consult the VMware Lifecycle Product Matrix.

  • NSX for vSphere 6.1.x reached End of Availability (EOA) and End of General Support (EOGS) on January 15, 2017. (See also VMware knowledge base article 2144769.)
  • vCNS Edges no longer supported. You must upgrade to an NSX Edge first before upgrading to NSX 6.3 or later.
  • NSX for vSphere 6.2.x has reached End of General Support (EOGS) as of August 20, 2018.

General Behavior Changes

If you have more than one vSphere Distributed Switch, and if VXLAN is configured on one of them, you must connect any Distributed Logical Router interfaces to port groups on that vSphere Distributed Switch. Starting in NSX 6.4.1, this configuration is enforced in the UI and API. In earlier releases, you were not prevented from creating an invalid configuration.  If you upgrade to NSX 6.4.1 or later and have incorrectly connected DLR interfaces, you will need to take action to resolve this. See the Upgrade Notes for details.

In NSX 6.4.7, the following functionality is deprecated in vSphere Client 7.0:

  • NSX Edge: SSL VPN-Plus (see KB79929 for more information)
  • Tools: Endpoint Monitoring (all functionality)
  • Tools: Flow Monitoring (Flow Monitoring Dashboard, Details by Service, and Configuration)
  • System Events: NSX Ticket Logger

For the complete list of NSX installation prerequisites, see the System Requirements for NSX section in the NSX Installation Guide.

For installation instructions, see the NSX Installation Guide or the NSX Cross-vCenter Installation Guide.

Also refer to the complete Deprecated and Discontinued Functionality section for all deprecated features, API removals, and behavior changes.

General Upgrade Considerations
For more information, notes, and considerations for upgrading, see the Upgrade Notes & FIPS Compliance sections.

  • To upgrade NSX, you must perform a full NSX upgrade including host cluster upgrade (which upgrades the host VIBs). For instructions, see the NSX Upgrade Guide including the Upgrade Host Clusters section.
  • Upgrading NSX VIBs on host clusters using VUM is not supported. Use Upgrade Coordinator, Host Preparation, or the associated REST APIs to upgrade NSX VIBs on host clusters.
  • System Requirements: For information on system requirements while installing and upgrading NSX, see the System Requirements for NSX section in NSX documentation.
  • Upgrade path for NSX: The VMware Product Interoperability Matrix provides details about the upgrade paths from VMware NSX.
  • Cross-vCenter NSX upgrade is covered in the NSX Upgrade Guide.
  • Downgrades are not supported:
    • Always capture a backup of NSX Manager before proceeding with an upgrade.
    • Once NSX has been upgraded successfully, NSX cannot be downgraded.
  • To validate that your upgrade to NSX 6.4.x was successful see knowledge base article 2134525.
  • There is no support for upgrades from vCloud Networking and Security to NSX 6.4.x. You must upgrade to a supported 6.2.x release first.
  • Interoperability: Check the VMware Product Interoperability Matrix for all relevant VMware products before upgrading.
    • Upgrading to NSX Data Center for vSphere 6.4.7: VIO is not compatible with NSX 6.4.7 due to multiple scale issues.
    • Upgrading to NSX Data Center for vSphere 6.4: NSX 6.4 is not compatible with vSphere 5.5.
    • Upgrading to NSX Data Center for vSphere 6.4.5: If NSX is deployed with VMware Integrated OpenStack (VIO), upgrade VIO to 4.1.2.2 or 5.1.0.1, as 6.4.5 is incompatible with previous releases due to spring package update to version 5.0.
    • Upgrading to vSphere 6.5: When upgrading to vSphere 6.5a or later 6.5 versions, you must first upgrade to NSX 6.3.0 or later. NSX 6.2.x is not compatible with vSphere 6.5. See Upgrading vSphere in an NSX Environment in the NSX Upgrade Guide.
    • Upgrading to vSphere 6.7: When upgrading to vSphere 6.7 you must first upgrade to NSX 6.4.1 or later. Earlier versions of NSX are not compatible with vSphere 6.7. See Upgrading vSphere in an NSX Environment in the NSX Upgrade Guide.
  • Partner services compatibility: If your site uses VMware partner services for Guest Introspection or Network Introspection, you must review the  VMware Compatibility Guide before you upgrade, to verify that your vendor’s service is compatible with this release of NSX.
  • Networking and Security plug-in: After upgrading NSX Manager, you must log out and log back in to the vSphere Web Client. If the NSX plug-in does not display correctly, clear your browser cache and history. If the Networking and Security plug-in does not appear in the vSphere Web Client, reset the vSphere Web Client server as explained in the NSX Upgrade Guide.
  • Stateless environments: In NSX upgrades in a stateless host environment, the new VIBs are pre-added to the Host Image profile during the NSX upgrade process. As a result, the NSX upgrade process on stateless hosts follows a modified sequence.
  • Service Definitions functionality is not supported in NSX 6.4.7 UI with vSphere Client 7.0:
    For example, if you have an old Trend Micro Service Definition registered with vSphere 6.5 or 6.7, follow one of these two options:
    1. Option #1: Before upgrading to vSphere 7.0, navigate to the Service Definition tab in the vSphere Web Client, edit the Service Definition to 7.0, and then upgrade to vSphere 7.0.
    2. Option #2: After upgrading to vSphere 7.0, run the following NSX API to add or edit the Service Definition to 7.0.

POST  https://<nsmanager>/api/2.0/si/service/<service-id>/servicedeploymentspec/versioneddeploymentspec

Upgrade Consideration for NSX Components
Support for VM Hardware version 11 for NSX components

  • For new installs of NSX Data Center for vSphere 6.4.2, the NSX components (Manager, Controller, Edge, Guest Introspection) are on VM Hardware version 11.
  • For upgrades to NSX Data Center for vSphere 6.4.2, the NSX Edge and Guest Introspection components are automatically upgraded to VM Hardware version 11. The NSX Manager and NSX Controller components remain on VM Hardware version 8 following an upgrade. Users have the option to upgrade the VM Hardware to version 11. Consult KB (https://kb.vmware.com/s/article/1010675) for instructions on upgrading VM Hardware versions.
  • For new installs of NSX 6.3.x, 6.4.0, 6.4.1, the NSX components (Manager, Controller, Edge, Guest Introspection) are on VM Hardware version 8.

NSX Manager Upgrade

  • Important: If you are upgrading NSX 6.2.0, 6.2.1, or 6.2.2 to NSX 6.3.5 or later, you must complete a workaround before starting the upgrade. See VMware Knowledge Base article 000051624 for details.
  • If you are upgrading from NSX 6.3.3 to NSX 6.3.4 or later you must first follow the workaround instructions in VMware Knowledge Base article 2151719.
  • If you use SFTP for NSX backups, change to hmac-sha2-256 after upgrading to 6.3.0 or later because there is no support for hmac-sha1. See VMware Knowledge Base article 2149282  for a list of supported security algorithms.
  • When you upgrade NSX Manager to NSX 6.4.1, a backup is automatically taken and saved locally as part of the upgrade process. See Upgrade NSX Manager for more information.
  • When you upgrade to NSX 6.4.0, the TLS settings are preserved. If you have only TLS 1.0 enabled, you will be able to view the NSX plug-in in the vSphere Web Client, but NSX Managers are not visible. There is no impact to datapath, but you cannot change any NSX Manager configuration. Log in to the NSX appliance management web UI at https://nsx-mgr-ip/ and enable TLS 1.1 and TLS 1.2. This reboots the NSX Manager appliance.

Controller Upgrade

  • The NSX Controller cluster must contain three controller nodes. If it has fewer than three controllers, you must add controllers before starting the upgrade. See Deploy NSX Controller Cluster for instructions.
  • In NSX 6.3.3, the underlying operating system of the NSX Controller changes. This means that when you upgrade from NSX 6.3.2 or earlier to NSX 6.3.3 or later, instead of an in-place software upgrade, the existing controllers are deleted one at a time, and new Photon OS based controllers are deployed using the same IP addresses.

When the controllers are deleted, this also deletes any associated DRS anti-affinity rules. You must create new anti-affinity rules in vCenter to prevent the new controller VMs from residing on the same host.

See Upgrade the NSX Controller Cluster for more information on controller upgrades.

 Host Cluster Upgrade

  • If you upgrade from NSX 6.3.2 or earlier to NSX 6.3.3 or later, the NSX VIB names change.
    The esx-vxlan and esx-vsip VIBs are replaced with esx-nsxv if you have NSX 6.3.3 or later installed on ESXi 6.0 or later.
  • Rebootless upgrade and uninstall on hosts: On vSphere 6.0 and later, once you have upgraded from NSX 6.2.x to NSX 6.3.x or later, any subsequent NSX VIB changes will not require a reboot. Instead hosts must enter maintenance mode to complete the VIB change. This affects both NSX host cluster upgrade, and ESXi upgrade. See the NSX Upgrade Guide for more information.

NSX Edge Upgrade

  • Validation added in NSX 6.4.1 to disallow invalid distributed logical router configurations: In environments where VXLAN is configured and more than one vSphere Distributed Switch is present, distributed logical router interfaces must be connected to the VXLAN-configured vSphere Distributed Switch only. Upgrading a DLR to NSX 6.4.1 or later will fail in those environments if the DLR has interfaces connected to a vSphere Distributed Switch that is not configured for VXLAN. Use the API to connect any incorrectly configured interfaces to port groups on the VXLAN-configured vSphere Distributed Switch. Once the configuration is valid, retry the upgrade. You can change the interface configuration using:

PUT /api/4.0/edges/{edgeId} or PUT /api/4.0/edges/{edgeId}/interfaces/{index}. See the NSX API Guide for more information.

  • Delete UDLR Control VM from vCenter Server that is associated with secondary NSX Manager before upgrading UDLR from 6.2.7 to 6.4.5:
    In a multi-vCenter environment, when you upgrade NSX UDLRs from 6.2.7 to 6.4.5, the upgrade of the UDLR virtual appliance (UDLR Control VM) fails on the secondary NSX Manager, if HA is enabled on the UDLR Control VM. During the upgrade, the VM with ha index #0 in the HA pair is removed from the NSX database; but, this VM continues to exist on the vCenter Server. Therefore, when the UDLR Control VM is upgraded on the secondary NSX Manager, the upgrade fails because the name of the VM clashes with an existing VM on the vCenter Server. To resolve this issue, delete the Control VM from the vCenter Server that is associated with the UDLR on the secondary NSX Manager, and then upgrade the UDLR from 6.2.7 to 6.4.5.
  • Host clusters must be prepared for NSX before upgrading NSX Edge appliances: Management-plane communication between NSX Manager and Edge via the VIX channel is no longer supported starting in 6.3.0. Only the message bus channel is supported. When you upgrade from NSX 6.2.x or earlier to NSX 6.3.0 or later, you must verify that host clusters where NSX Edge appliances are deployed are prepared for NSX, and that the messaging infrastructure status is GREEN. If host clusters are not prepared for NSX, upgrade of the NSX Edge appliance will fail. See Upgrade NSX Edge in the NSX Upgrade Guide for details.
  • Upgrading Edge Services Gateway (ESG):
    Starting in NSX 6.2.5, resource reservation is carried out at the time of NSX Edge upgrade. When vSphere HA is enabled on a cluster having insufficient resources, the upgrade operation may fail due to vSphere HA constraints being violated.

To avoid such upgrade failures, perform the following steps before you upgrade an ESG:

The following resource reservations are used by the NSX Manager if you have not explicitly set values at the time of install or upgrade.

  1. Always ensure that your installation follows the best practices laid out for vSphere HA. Refer to document KB1002080 .
  2. Use the NSX tuning configuration API:
    PUT https://<nsxmanager>/api/4.0/edgePublish/tuningConfiguration
    ensuring that values for edgeVCpuReservationPercentage and edgeMemoryReservationPercentage fit within available resources for the form factor (see table above for defaults).
  • Disable vSphere’s Virtual Machine Startup option where vSphere HA is enabled and Edges are deployed. After you upgrade your 6.2.4 or earlier NSX Edges to 6.2.5 or later, you must turn off the vSphere Virtual Machine Startup option for each NSX Edge in a cluster where vSphere HA is enabled and Edges are deployed. To do this, open the vSphere Web Client, find the ESXi host where NSX Edge virtual machine resides, click Manage > Settings, and, under Virtual Machines, select VM Startup/Shutdown, click Edit, and make sure that the virtual machine is in Manual mode (that is, make sure it is not added to the Automatic Startup/Shutdown list).
  • Before upgrading to NSX 6.2.5 or later, make sure all load balancer cipher lists are colon separated. If your cipher list uses another separator such as a comma, make a PUT call to https://nsxmgr_ip/api/4.0/edges/EdgeID/loadbalancer/config/applicationprofiles and replace each  <ciphers> </ciphers> list in <clientssl> </clientssl> and <serverssl> </serverssl> with a colon-separated list. For example, the relevant segment of the request body might look like the following. Repeat this procedure for all application profiles:

<applicationProfile>
  <name>https-profile</name>
  <insertXForwardedFor>false</insertXForwardedFor>
  <sslPassthrough>false</sslPassthrough>
  <template>HTTPS</template>
  <serverSslEnabled>true</serverSslEnabled>
  <clientSsl>
    <ciphers>AES128-SHA:AES256-SHA:ECDHE-ECDSA-AES256-SHA</ciphers>
    <clientAuth>ignore</clientAuth>
    <serviceCertificate>certificate-4</serviceCertificate>
  </clientSsl>
  <serverSsl>
    <ciphers>AES128-SHA:AES256-SHA:ECDHE-ECDSA-AES256-SHA</ciphers>
    <serviceCertificate>certificate-4</serviceCertificate>
  </serverSsl>
</applicationProfile>

 

  • Set the correct cipher version for load-balanced clients on vROps versions older than 6.2.0: vROps pool members running versions older than 6.2.0 use TLS version 1.0, so you must set a monitor extension value explicitly by adding "ssl-version=10" to the NSX Load Balancer configuration. See Create a Service Monitor in the NSX Administration Guide for instructions.

{
  "expected": null,
  "extension": "ssl-version=10",
  "send": null,
  "maxRetries": 2,
  "name": "sm_vrops",
  "url": "/suite-api/api/deployment/node/status",
  "timeout": 5,
  "type": "https",
  "receive": null,
  "interval": 60,
  "method": "GET"
}

  • After upgrading to NSX 6.4.6, L2 bridges and interfaces on a DLR cannot connect to logical switches belonging to different transport zones:  In NSX 6.4.5 or earlier, L2 bridge instances and interfaces on a Distributed Logical Router (DLR) supported use of logical switches that belonged to different transport zones. Starting in NSX 6.4.6, this configuration is not supported. The L2 bridge instances and interfaces on a DLR must connect to logical switches that are in a single transport zone. If logical switches from multiple transport zones are used, edge upgrade is blocked during pre-upgrade validation checks when you upgrade NSX to 6.4.6. To resolve this edge upgrade issue, ensure that the bridge instances and interfaces on a DLR are connected to logical switches in a single transport zone.
  • After upgrading to NSX 6.4.7, bridges and interfaces on a DLR cannot connect to dvPortGroups belonging to different VDS: If such a configuration is present, NSX Manager upgrade to 6.4.7 is blocked in pre-upgrade validation checks. To resolve this, ensure that interfaces and L2 bridges of a DLR are connected to a single VDS.
  • After upgrading to NSX 6.4.7, a DLR cannot be connected to VLAN-backed port groups if the transport zone of the logical switch it is connected to spans more than one VDS: This ensures correct alignment of DLR instances with logical switch dvPortGroups across hosts. If such a configuration is present, NSX Manager upgrade to 6.4.7 is blocked in pre-upgrade validation checks. To resolve this issue, ensure that no logical interfaces are connected to VLAN-backed port groups if a logical interface exists on a logical switch belonging to a transport zone that spans multiple VDS.
  • After upgrading to NSX 6.4.7, different DLRs cannot have their interfaces and L2 bridges on the same network: If such a configuration is present, NSX Manager upgrade to 6.4.7 is blocked in pre-upgrade validation checks. To resolve this issue, ensure that a network is used by only a single DLR.
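Before starting the upgrade, it can help to audit what each DLR interface and bridge is attached to. The sketch below uses the NSX-v API; the interfaces endpoint for distributed routers, the edge-5 ID, and the credentials are my assumptions, so confirm the path against the API Guide linked in the Technical Enablement table.

# List the interfaces of a distributed logical router (edge-5 is a placeholder edge ID)
curl -k -u 'admin:PASSWORD' \
  https://nsxmgr_ip/api/4.0/edges/edge-5/interfaces
# Review the connected network for each interface and bridge to confirm everything maps
# to logical switches in a single transport zone (and, for 6.4.7, a single VDS).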

 

Technical Enablement
Release Notes Click Here  |  What’s New  |  Versions, System Requirements, and Installation  |  Deprecated and Discontinued Functionality

Upgrade Notes  |  FIPS Compliance  |  Resolved Issues  |  Known Issues

docs.vmware.com/nsx-v Installation  |   Cross-vCenter Installation  |   Administration  |   Upgrade  |   Troubleshooting  |   Logging & System Events

API Guide  |  vSphere CLI Guide  |  vSphere Configuration Maximums

Networking Documentation Transport Zones  |  Logical Switches  |  Configuring Hardware Gateway  |  L2 Bridges  |  Routing  |  Logical Firewall

Firewall Scenarios  |  Identity Firewall Overview  |  Working with Active Directory Domains  |  Using SpoofGuard

Virtual Private Networks (VPN)  |  Logical Load Balancer  |  Other Edge Services

Compatibility Information Interoperability Matrix  |  Configuration Maximums  | ports.vmware.com/NSX-V
Download Click Here
VMware HOLs HOL-2103-01-NET – VMware NSX for vSphere Advanced Topics

 

Using vRealize Log Insight to troubleshoot #ESXi 7 Error – Host hardware voltage System board 18 VBAT

Posted on

This blog post demonstrates how I used vRLI to solve what seemed like a complex issue and simplify the outcome.  I use vRLI all the time to parse log files from my devices (hosts, VMs, etc.), pinpoint data, and resolve issues.  In this case a simple CMOS battery was the culprit, but it’s the power of vRLI that let me find detailed enough information to pinpoint the problem.

Recently I was doing some updates on my Home Lab Gen 7 and noticed this error kept popping up – ‘Host hardware voltage’.  At first I thought it might be time for a new power supply; it seemed pretty serious.

Next I started looking into the error.  On the host I went into Monitor > Hardware Health > Sensors.  The first sensor to appear gave me some detail around the fault, but not quite enough information to figure out what the issue was.  I noted the sensor information – ‘System Board 18 VBAT’.

I went into the Supermicro Management interface to see if I could find more information, and I found some detail around VBAT.  It looks like 3.3v DC is what it’s expecting, and the event log was registering errors around it, but that still wasn’t enough to know exactly what was faulting.
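As a side note, the same sensor data can also be pulled from the ESXi shell.  This is a hedged sketch assuming ESXi 6.5 or later, where the esxcli hardware ipmi namespace is available; check the command reference for your build.

# Dump the IPMI sensor data repository and look for the battery voltage sensor
esxcli hardware ipmi sdr list | grep -i vbat

# Review recent entries in the IPMI system event log for related voltage assertions
esxcli hardware ipmi sel list | tail -n 20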

With this information I launched vRLI and went into Interactive Analytics.  I chose the last 48 hours and typed ‘vbat’ into the search field.  The first hit that came up stated – ‘Sensor 56 type voltage, Description System Board 18 VBAT state assert for…’  This was very similar to the errors I noted from ESXi and from the Supermicro motherboard.

Finally, a quick Google search led me to an Intel webpage.  It turns out VBAT was just a CMOS battery issue.

I powered down the host and pulled out the old CMOS battery.  The old battery was pretty warm to the touch, and when I placed it on a volt meter it read less than one volt.

I checked the voltage on the new battery; it read 3.3v, so I inserted it into the host.  Since the change, the system board has not reported any new errors.

Next I went back into vRLI to ensure the error had disappeared from the logs.  I typed in ‘vbat’, set my date/time range, and viewed the results.  From the results, you can see that the errors stopped at about 16:00 hours.  That is about the time I put the new battery in, and it has been error free for the last hour.  Over the next day or two I’ll check back and make sure it stays error free.  Additionally, if I wanted to, I could set up an alarm to trigger if the log entry returns.

It’s results like this that are why I like using vRLI to help me troubleshoot, resolve, alert, and monitor.

If you like my ‘no-nonsense’ videos and blogs that get straight to the point… then post a comment or let me know… Else, I’ll start posting really boring content!

 

 

 

 

Update to VMware Security-Advisory VMSA-2020-0023.1 | Critical, Important CVSSv3 5.9-9.8 OpenSLP | New ESXi Patches Released

Posted on Updated on

The VMware Security team released this updated information; follow up with VMware if you have questions.

 

Important Update Notes

The ESXi patches released on October 20, 2020 did not address CVE-2020-3992 completely. The ESXi patches listed in the Response Matrix in section 3a have been updated to contain the complete fix for CVE-2020-3992.

In Reference to OpenSLP vulnerability in Section 3a

VMware ESXi 7.0 ESXi70U1a-17119627   (Updated)

Download
Documentation

VMware ESXi 6.7 ESXi670-202011301-SG  (Updated)
Download
Documentation

Note: Patches for VMware Cloud Foundation (ESXi) 3.x and 4.x are still pending at this time.

  • VMware ESXi

  • VMware vCenter
  • VMware Workstation Pro / Player (Workstation)
  • VMware Fusion Pro / Fusion (Fusion)
  • NSX-T
  • VMware Cloud Foundation
VMSA-2020-0023.1 Severity: Critical
CVSSv3 Range 5.9-9.8
Issue date: 10/20/2020 and updated 11/04/2020
Synopsis: VMware ESXi, vCenter, Workstation, Fusion and NSX-T updates address multiple security vulnerabilities
CVE numbers: CVE-2020-3981   CVE-2020-3982  CVE-2020-3992  CVE-2020-3993  CVE-2020-3994  CVE-2020-3995

 

 

1. Impacted Products
  • VMware ESXi
  • VMware vCenter
  • VMware Workstation Pro / Player (Workstation)
  • VMware Fusion Pro / Fusion (Fusion)
  • NSX-T
  • VMware Cloud Foundation
2. Introduction
Multiple vulnerabilities in VMware ESXi, Workstation, Fusion and NSX-T were privately reported to VMware. Updates are available to remediate these vulnerabilities in affected VMware products.
3a. ESXi  OpenSLP remote code execution vulnerability (CVE-2020-3992)  Critical
IMPORTANT: The ESXi patches released on October 20, 2020 did not address CVE-2020-3992 completely, see section (3a) Notes for an update.

 Description:
OpenSLP as used in ESXi has a use-after-free issue. VMware has evaluated the severity of this issue to be in the Critical severity range with a maximum CVSSv3 base score of 9.8.

Known Attack Vectors

A malicious actor residing in the management network who has access to port 427 on an ESXi machine may be able to trigger a use-after-free in the OpenSLP service resulting in remote code execution.

Resolution To remediate CVE-2020-3992 apply the patches listed in the ‘Fixed Version’ column of the ‘Response Matrix’ found below.

Workarounds Workarounds for CVE-2020-3992 have been listed in the ‘Workarounds’ column of the ‘Response Matrix’ below.

Notes

The ESXi patches released on October 20, 2020 did not address CVE-2020-3992 completely. The ESXi patches listed in the Response Matrix below are updated versions that contain the complete fix for CVE-2020-3992.

Response Matrix Critical
Product Version Running On CVE Identifier CVSSv3 Fixed Version Workarounds
ESXi 7.0 Any CVE-2020-3992 9.8 ESXi70U1a-17119627 Updated KB76372
ESXi 6.7 Any CVE-2020-3992 9.8 ESXi670-202011301-SG  Updated KB76372
ESXi 6.5 Any CVE-2020-3992 9.8 ESXi650-202011401-SG KB76372
Cloud Foundation (ESXi) 4.x Any CVE-2020-3992 9.8 Patch Pending KB76372
Cloud Foundation (ESXi) 3.x Any CVE-2020-3992 9.8 Patch Pending KB76372
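If patching has to wait, the KB76372 workaround amounts to stopping the SLP service on each ESXi host and keeping it disabled.  The commands below are a minimal sketch of that procedure as I understand it from the KB; confirm them against the current article before using them in production.

# Stop the SLP service on the ESXi host
/etc/init.d/slpd stop

# Block inbound SLP traffic and keep the service from starting after reboot
esxcli network firewall ruleset set --ruleset-id CIMSLP --enabled false
chkconfig slpd off

# Verify the service is now disabled
chkconfig --list | grep slpd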
Only section 3a has been updated at this time.  The rest of the VMSA is the same; only the links to the new ESXi 7.0 U1a and 6.7 updates have been included below this line.
3b. NSX-T Man-in-the-Middle vulnerability MITM (CVE-2020-3993) Important
Description:
VMware NSX-T contains a security vulnerability that exists in the way it allows a KVM host to download and install packages from NSX Manager. VMware has evaluated the severity of this issue to be in the Important severity range with a maximum CVSSv3 base score of 7.5.

Known Attack Vectors

A malicious actor with MITM positioning may be able to exploit this issue to compromise the transport node.

Resolution To remediate CVE-2020-3993 apply the patches listed in the ‘Fixed Version’ column of the ‘Response Matrix’ found below.

Workarounds: None

Response Matrix Important
Product Version Running On CVE Identifier CVSSv3 Fixed Version Workarounds
NSX-T 3.x Any CVE-2020-3993 7.5 3.0.2 None
NSX-T 2.5.x Any CVE-2020-3993 7.5 2.5.2.2.0 None
Cloud Foundation (NSX-T) 4.x Any CVE-2020-3993 7.5 4.1 None
Cloud Foundation (NSX-T) 3.x Any CVE-2020-3993 7.5 3.10.1.1 None
3c. Time-of-check to time-of-use TOCTOU out-of-bounds read vulnerability (CVE-2020-3981)  Important
Description:
VMware ESXi, Workstation and Fusion contain an out-of-bounds read vulnerability due to a time-of-check time-of-use issue in ACPI device. VMware has evaluated the severity of this issue to be in the Important severity range with a maximum CVSSv3 base score of 7.1.

Known Attack Vectors

A malicious actor with administrative access to a virtual machine may be able to exploit this issue to leak memory from the vmx process.

Resolution To remediate CVE-2020-3981 apply the patches listed in the ‘Fixed Version’ column of the ‘Response Matrix’ found below.

 Workarounds: None

Response Matrix Important
Product Version Running On CVE Identifier CVSSv3 Fixed Version Workarounds
ESXi 7.0 Any CVE-2020-3981 7.1 ESXi_7.0.1-0.0.16850804 None
ESXi 6.7 Any CVE-2020-3981 7.1 ESXi670-202008101-SG None
ESXi 6.5 Any CVE-2020-3981 7.1 ESXi650-202007101-SG None
Fusion 12.x OS X CVE-2020-3981 N/A Unaffected N/A
Fusion 11.x OS X CVE-2020-3981 7.1 11.5.6 None
Workstation 16.x Any CVE-2020-3981 N/A Unaffected N/A
Workstation 15.x Any CVE-2020-3981 7.1 Patch pending None
Cloud Foundation (ESXi) 4.x Any CVE-2020-3981 7.1 4.1 None
Cloud Foundation (ESXi) 3.x Any CVE-2020-3981 7.1 3.10.1 None
3d. TOCTOU out-of-bounds write vulnerability (CVE-2020-3982)  Moderate
Description:
VMware ESXi, Workstation and Fusion contain an out-of-bounds write vulnerability due to a time-of-check time-of-use issue in ACPI device. VMware has evaluated the severity of this issue to be in the Moderate severity range with a maximum CVSSv3 base score of 5.9.

Known Attack Vectors

A malicious actor with administrative access to a virtual machine may be able to exploit this vulnerability to crash the virtual machine’s vmx process or corrupt hypervisor’s memory heap.

Resolution To remediate CVE-2020-3982 apply the patches listed in the ‘Fixed Version’ column of the ‘Response Matrix’ found below.

 Workarounds: None

Response Matrix Moderate
Product Version Running On CVE Identifier CVSSv3 Fixed Version Workarounds
ESXi 7.0 Any CVE-2020-3982 5.9 ESXi_7.0.1-0.0.16850804 None
ESXi 6.7 Any CVE-2020-3982 5.9 ESXi670-202008101-SG None
ESXi 6.5 Any CVE-2020-3982 5.9 ESXi650-202007101-SG None
Fusion 12.x OS X CVE-2020-3982 N/A Unaffected N/A
Fusion 11.x OS X CVE-2020-3982 5.9 11.5.6 None
Workstation 16.x Any CVE-2020-3982 N/A Unaffected N/A
Workstation 15.x Any CVE-2020-3982 5.9 Patch pending None
Cloud Foundation (ESXi) 4.x Any CVE-2020-3982 5.9 4.1 None
Cloud Foundation (ESXi) 3.x Any CVE-2020-3982 5.9 3.10.1 None
3e. vCenter Server update function MITM vulnerability (CVE-2020-3994)  Important
Description:  VMware vCenter Server contains a session hijack vulnerability in the vCenter Server Appliance Management Interface update function due to a lack of certificate validation. VMware has evaluated the severity of this issue to be in the Important severity range with a maximum CVSSv3 base score of 7.5.

Known Attack Vectors A malicious actor with network positioning between vCenter Server and an update repository may be able to perform a session hijack when the vCenter Server Appliance Management Interface is used to download vCenter updates.

Resolution To remediate CVE-2020-3994 apply the patches listed in the ‘Fixed Version’ column of the ‘Response Matrix’ found below.

 Workarounds: None 

Response Matrix Important
Product Version Running On CVE Identifier CVSSv3 Fixed Version Workarounds
vCenter Server 7.0 Any CVE-2020-3994 N/A Unaffected N/A
vCenter Server 6.7 vAppliance CVE-2020-3994 7.5 6.7u3 None
vCenter Server 6.7 Windows CVE-2020-3994 N/A Unaffected N/A
vCenter Server 6.5 vAppliance CVE-2020-3994 7.5 6.5u3k None
vCenter Server 6.5 Windows CVE-2020-3994 N/A Unaffected N/A
Cloud Foundation (vCenter) 4.x Any CVE-2020-3994 N/A Unaffected N/A
Cloud Foundation (vCenter) 3.x Any CVE-2020-3994 7.5 3.9.0 None
3f. VMCI host driver memory leak vulnerability (CVE-2020-3995)  Important
Description:  The VMCI host drivers used by VMware hypervisors contain a memory leak vulnerability. VMware has evaluated the severity of this issue to be in the Important severity range with a maximum CVSSv3 base score of 7.1.

Known Attack Vectors A malicious actor with access to a virtual machine may be able to trigger a memory leak issue resulting in memory resource exhaustion on the hypervisor if the attack is sustained for extended periods of time.

 Resolution To remediate CVE-2020-3995 apply the patches listed in the ‘Fixed Version’ column of the ‘Response Matrix’ found below.

 Workarounds: None.

Response Matrix Important
Product Version Running On CVE Identifier CVSSv3 Fixed Version Workarounds
ESXi 7.0 Any CVE-2020-3995 N/A Unaffected N/A
ESXi 6.7 Any CVE-2020-3995 7.1 ESXi670-201908101-SG None
ESXi 6.5 Any CVE-2020-3995 7.1 ESXi650-201907101-SG None
Fusion 11.x Any CVE-2020-3995 7.1 11.1.0 None
Workstation 15.x Any CVE-2020-3995 7.1 15.1.0 None
Cloud Foundation (ESXi) 4.x Any CVE-2020-3995 N/A Unaffected N/A
Cloud Foundation (ESXi) 3.x Any CVE-2020-3995 7.1 3.9.0 None
4. References
VMware ESXi 7.0 ESXi70U1a-17119627   (Updated)

Download
Documentation

VMware ESXi 6.7 ESXi670-202011301-SG  (Updated)
Download
Documentation

VMware ESXi670-202008101-SG  (Included with August’s Release of ESXi670-202008001)

Download
Documentation

 VMware ESXi 6.7 ESXi670-202010401-SG
Download
Documentation

VMware vCenter Server 6.7u3

Download
Documentation

VMware vCenter Server 6.5u3k

Download
Documentation

VMware Workstation Pro 15.6

Download

Documentation

VMware Workstation Player 15.6
Download
Documentation

VMware Fusion 11.5.6
Download
Documentation

 VMware NSX-T 3.0.2
Download
Documentation

 VMware NSX-T 2.5.2.2.0
Download

Documentation

VMware Cloud Foundation 4.1

Download

Documentation

VMware Cloud Foundation 3.10.1 & 3.10.1.1

Download
Documentation

VMware Cloud Foundation 3.9.0

Download
Documentation

Mitre CVE Dictionary Links:
CVE-2020-3981
CVE-2020-3982
CVE-2020-3992
CVE-2020-3993
CVE-2020-3994
CVE-2020-3995 

FIRST CVSSv3 Calculator:

CVE-2020-3981
CVE-2020-3982 

CVE-2020-3992

CVE-2020-3993

CVE-2020-3994

CVE-2020-3995

5. Change Log
2020-10-20 VMSA-2020-0023 Initial security advisory.

2020-11-04 VMSA-2020-0023.1 Updated ESXi patches for section 3a

Disclaimer
This enablement email derives from our VMware Security Advisory and is accurate at the time of creation.  Bulletins may be updated periodically; when using this email as future reference material, please refer to the full and updated VMware Security Advisory VMSA-2020-0023.1.

Updating #VMware #HomeLab Gen 5 to Gen 7

Posted on Updated on

Not too long ago I updated my Gen 4 Home Lab to Gen 5, and I posted many blogs and videos around it.  The Gen 5 lab ran well for vSphere 6.7 deployments, but moving into vSphere 7.0 I had a few issues adapting it.  Mostly these issues were with the design of the Jingsha motherboard; I noted most of these challenges in the Gen 5 wrap-up video.  Additionally, I had some new networking requirements, mainly around adding multiple Intel NIC ports, and Home Lab Gen 5 was not going to adapt well, or would be very costly to adapt.  These combined factors forced my hand to migrate to what I’m calling Home Lab Gen 7.  Wait a minute, what happened to Home Lab Gen 6?  I decided to align my Home Lab generation numbers with the vSphere release number, so I skipped Gen 6.

First – Review my design goals:

  • Be able to run vSphere 7.x and vSAN Environment
  • Reuse as much as possible from Gen 5 Home lab, this will keep costs down
  • Choose products that bring value to the goals and are cost effective; if they are on the VMware HCL that’s a plus, but not necessary for a home lab
  • Keep networking (vSAN / FT) on the 10GbE MikroTik switch
  • Support 4 x Intel GbE networks
  • Ensure there will be enough CPU cores and RAM to support multiple VMware products (ESXi, VCSA, vSAN, vRO, vRA, NSX, Log Insight)
  • Be able to fit the environment into 3 ESXi hosts
  • The environment should run well, but doesn’t have to be a production level environment

Second – Evaluate Software, Hardware, and VM requirements:

My calculated numbers from my Gen 5 build will stay rather static for Gen 7.  The only update for Gen 7 is to use the updated requirements table which can be found here >>  ‘HOME LABS: A DEFINITIVE GUIDE’

Third – Home Lab Design Considerations

This too is very similar to Gen 5, but I did review this table and made some final changes to my design.

Fourth – Choosing Hardware

Based on my estimates above, I’m going to need a very flexible motherboard supporting lots of RAM, good network connectivity, and as much compatibility as possible with my Gen 5 hardware.  I’ve reused many parts from Gen 5, but the main changes came with the Supermicro motherboard and the addition of the 2TB SAS HDDs listed below.

Note: I’ve listed the newer items in italics; all other parts are carried over from Gen 5.

Overview:

  • My Gen 7 Home Lab is based on vSphere 7 (VCSA, ESXi, and vSAN) and contains 3 x ESXi hosts, 1 x Windows 10 workstation, 4 x Cisco switches, 1 x MikroTik 10GbE switch, and 2 x APC UPS

ESXi Hosts:

  • Case:
  • Motherboard:
  • CPU:
    • CPU: Xeon E5-2640 v2 8 Cores / 16 HT (Ebay $30 each)
    • CPU Cooler: DEEPCOOL GAMMAXX 400 (Amazon $19)
  • RAM:
    • 128GB DDR3 ECC RAM (Ebay $170)
  • Disks:
    • 64GB USB Thumb Drive (Boot)
    • 2 x 200GB SAS SSD (vSAN Cache)
    • 2 x 2TB SAS HDD (vSAN Capacity – See this post)
    • 1 x 2TB SATA (Extra Space)
  • SAS Controller:
    • 1 x IBM 5210 JBOD (Ebay)
    • CableCreation Internal Mini SAS SFF-8643 to (4) 29pin SFF-8482 (Amazon $18)
  • Network:
    • Motherboard Integrated i350 1gbe 4 Port
    • 1 x Mellanox ConnectX-3 Dual Port (HP INFINIBAND 4X DDR PCI-E HCA CARD 452372-001)
  • Power Supply:
    • Antec Earthwatts 500-600 Watt (Adapters needed to support case and motherboard connections)
      • Adapter: Dual 8(4+4) Pin Male for Motherboard Power Adapter Cable (Amazon $11)
      • Adapter: LP4 Molex Male to ATX 4 pin Male Auxiliary (Amazon $11)
      • Power Supply Extension Cable: StarTech.com 8in 24 Pin ATX 2.01 Power Extension Cable (Amazon $9)

Network:

  • Core VM Switches:
    • 2 x Cisco 3560 (WS-C3560CG-8TC-S, 8 Gigabit ports, 2 uplinks)
    • 2 x Cisco 2960 (WS-C2960G-8TC-L)
  • 10gbe Network:
    • 1 x MikroTik 10GbE CRS309 (used for vSAN and Replication network)
    • 2 ea. x HP 684517-001 Twinax SFP 10gbe 0.5m DAC Cable (Ebay)
    • 2 ea. x MELLANOX QSFP/SFP ADAPTER 655874-B21 MAM1Q00A-QSA (Ebay)

Battery Backup UPS:

  • 2 x APC NS1250

Windows 10 Workstation:

Thanks for reading, please do reach out if you have any questions.

If you like my ‘no-nonsense’ videos and blogs that get straight to the point… then post a comment or let me know… Else, I’ll start posting really boring content!

#VMware OCTO Initiative: Nonprofit Connect – Complementary Education and Enablement General Links

Posted on Updated on

The VMware Office of the CTO Ambassadors (CTOA) is an internal VMware program that allows field employees to connect and advocate for their customers’ needs inside VMware.  Additionally, the CTOA program enables field employees to engage in initiatives to better serve our customers.  This past year I’ve been working on a CTOA initiative known as Nonprofit Connect (NPC).  NPC has partnered with the VMware Foundation to help VMware nonprofit customers through more effective and sustainable technology.  Part of this program was creating and updating an enablement guide that helps nonprofits gain access to resources.  This resource is open to all our customers and is publicly posted >> NPC Enablement Guide

Michelle Kaiser is leading the Nonprofit Connect initiative and from what I’ve seen she and the team are doing a great job — Keep up the good work!

More information around NPC, CTOA, and the VMware Foundation can be found in the links below: