
VMware vSphere 7.0 Update 1 | vCenter, ESXi, vSAN | Information


VMware announced the GA Releases of the following:

  • VMware vCenter 7.0 Update 1
  • VMware ESXi 7.0 Update 1
  • VMware vSAN 7.0 Update 1

See the base table for all the technical enablement links, now including VMworld 2020 OnDemand Sessions.

Release Overview
vCenter Server 7.0 Update 1 | ISO Build 16860138

ESXi 7.0 Update 1 | ISO Build 16850804

VMware vSAN 7.0 Update 1 | Build 16850804

What’s New vCenter Server
Inclusive terminology: In vCenter Server 7.0 Update 1, as part of a company-wide effort to remove instances of non-inclusive language in our products, the vSphere team has made changes to some of the terms used in the vSphere Client. APIs and CLIs still use legacy terms, but updates are pending in an upcoming release.

  • vSphere Accessibility Enhancements: vCenter Server 7.0 Update 1 comes with significant accessibility enhancements based on recommendations by the Accessibility Conformance Report (ACR), which is the internationally accepted standard.  Read more
  • vSphere Ideas Portal: With vCenter Server 7.0 Update 1, any user with a valid my.vmware.com account can submit feature requests by using the vSphere Ideas portal. Read more
  • Enhanced vSphere Lifecycle Manager hardware compatibility pre-checks for vSAN environments: vCenter Server 7.0 Update 1 adds vSphere Lifecycle Manager hardware compatibility pre-checks. Read more
  • Increased scalability with vSphere Lifecycle Manager: vSphere Lifecycle Manager operations with ESXi hosts and clusters now scale up to:
    • 64 supported clusters from 15
    • 96 supported ESXi hosts within a cluster from 64. For vSAN environments, the limit is still 64
    • 280 supported ESXi hosts managed by a vSphere Lifecycle Manager Image from 150
    • 64 clusters on which you can run remediation in parallel, if you initiate remediation at a data center level, from 15
  • vSphere Lifecycle Manager support for coordinated upgrades between availability zones: With vCenter Server 7.0 Update 1, to prevent overlapping operations, vSphere Lifecycle Manager updates fault domains in vSAN clusters in a sequence. ESXi hosts within each fault domain are still updated in a rolling fashion. For vSAN stretched clusters, the first fault domain is always the preferred site.
  • Extended list of supported Red Hat Enterprise Linux and Ubuntu versions for the VMware vSphere Update Manager Download Service (UMDS): vCenter Server 7.0 Update 1 adds new Red Hat Enterprise Linux and Ubuntu versions that UMDS supports. For the complete list of supported versions, see Supported Linux-Based Operating Systems for Installing UMDS.
  • Silence Alerts button in VMware Skyline Health – With vCenter Server 7.0 Update 1, you can stop alerts for certain health checks, such as notifications for known issues, by using the Silence Alerts button.  Read more
  • Configure SMTP authentication: vCenter Server 7.0 Update 1 adds support for SMTP authentication in the vCenter Server Appliance to enable sending alerts and alarms by email in secure mode. See Configure Mail Sender Settings. Read more
  • System virtual machines for vSphere Cluster Services: In vCenter Server 7.0 Update 1, vSphere Cluster Services adds a set of system virtual machines in every vSphere cluster to ensure the healthy operation of VMware vSphere Distributed Resource Scheduler. For more information, see VMware knowledge base articles KB80472, KB79892, and KB80483.
  • Licensing for VMware Tanzu Basic: With vCenter Server 7.0 Update 1, licensing for VMware Tanzu Basic splits into separate license keys for vSphere 7 Enterprise Plus and VMware Tanzu Basic. In vCenter Server 7.0 Update 1, you must provide either a vSphere 7 Enterprise Plus license key or a vSphere 7 Enterprise Plus with an add-on for Kubernetes license key to enable the Enterprise Plus functionality for ESXi hosts. In addition, you must provide a VMware Tanzu Basic license key to enable Kubernetes functionality for all ESXi hosts that you want to use as part of a Supervisor Cluster.
    When you upgrade a 7.0 deployment to 7.0 Update 1, existing Supervisor Clusters automatically start a 60-day evaluation mode. If you do not install a VMware Tanzu Basic license key and assign it to existing Supervisor Clusters within 60 days, you see some limitations in the Kubernetes functionality. For more information, see Licensing for vSphere with Tanzu and VMware knowledge base article KB80868.
  • For VMware vSphere with Tanzu updates, see VMware vSphere with Tanzu Release Notes.
Upgrade/Install Considerations vCenter
Before upgrading to vCenter Server 7.0 Update 1, you must confirm that the Link Aggregation Control Protocol (LACP) mode is set to enhanced, which enables the Multiple Link Aggregation Control Protocol (the multipleLag parameter) on the VMware vSphere Distributed Switch (VDS) in your vCenter Server system.

If the LACP mode is set to basic, indicating One Link Aggregation Control Protocol (singleLag), the distributed virtual port groups on the vSphere Distributed Switch might lose connection after the upgrade and affect the management vmknic, if it is on one of the dvPort groups. During the upgrade precheck, you see an error such as Source vCenter Server has instance(s) of Distributed Virtual Switch at unsupported lacpApiVersion.

For more information on converting to Enhanced LACP Support on a vSphere Distributed Switch, see VMware knowledge base article 2051311. For more information on the limitations of LACP in vSphere, see VMware knowledge base article 2051307.
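To verify from the host side before you start, a minimal hedged check (run in the ESXi Shell of a host attached to the VDS; expect output only where LACP is configured):

esxcli network vswitch dvs vmware lacp config get   # lists the LAGs and LACP mode the host sees on its distributed switches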

Product Support Notices

  • vCenter Server 7.0 Update 1 does not support VMware Site Recovery Manager 8.3.1.
  • Deprecation of Server Message Block (SMB) protocol version 1.0
    File-based backup and restore of vCenter Server by using Server Message Block (SMB) protocol version 1.0 is deprecated in vCenter Server 7.0 Update 1. Removal of SMB v1.0 is due in a future vSphere release.
  • End of General Support for VMware Tools 9.10.x and 10.0.x. See the VMware Product Lifecycle Matrix.
  • Deprecation of the VMware Service Lifecycle Manager API
    VMware plans to deprecate the VMware Service Lifecycle Manager API (vmonapi service) in a future release. For more information, see VMware knowledge base article 80775.
  • End of support for Internet Explorer 11
    Removal of Internet Explorer 11 from the list of supported browsers for the vSphere Client is due in a future vSphere release.
  • VMware Host Client in maintenance mode
What’s New ESXi

  • ESXi 7.0 Update 1 supports vSphere Quick Boot on the following servers:
    • HPE ProLiant BL460c Gen9
    • HPE ProLiant DL325 Gen10 Plus
    • HPE ProLiant DL360 Gen9
    • HPE ProLiant DL385 Gen10 Plus
    • HPE ProLiant XL225n Gen10 Plus
    • HPE Synergy 480 Gen9
  • Enhanced vSphere Lifecycle Manager hardware compatibility pre-checks for vSAN environments: ESXi 7.0 Update 1 adds vSphere Lifecycle Manager hardware compatibility pre-checks. The pre-checks automatically trigger after certain change events such as modification of the cluster desired image or addition of a new ESXi host in vSAN environments. Also, the hardware compatibility framework automatically polls the Hardware Compatibility List database at predefined intervals for changes that trigger pre-checks as necessary.
  • Increased number of vSphere Lifecycle Manager concurrent operations on clusters: With ESXi 7.0 Update 1, if you initiate remediation at a data center level, the number of clusters on which you can run remediation in parallel increases from 15 to 64.
  • vSphere Lifecycle Manager support for coordinated updates between availability zones: With ESXi 7.0 Update 1, to prevent overlapping operations, vSphere Lifecycle Manager updates fault domains in vSAN clusters in a sequence. ESXi hosts within each fault domain are still updated in a rolling fashion. For vSAN stretched clusters, the first fault domain is always the preferred site.
  • Extended list of supported Red Hat Enterprise Linux and Ubuntu versions for the VMware vSphere Update Manager Download Service (UMDS): ESXi 7.0 Update 1 adds new Red Hat Enterprise Linux and Ubuntu versions that UMDS supports. For the complete list of supported versions, see Supported Linux-Based Operating Systems for Installing UMDS.
  • Improved control of VMware Tools time synchronization: With ESXi 7.0 Update 1, you can select a VMware Tools time synchronization mode from the vSphere Client instead of using the command prompt. When you navigate to VM Options > VMware Tools > Synchronize Time with Host, you can select Synchronize at startup and resume (recommended), Synchronize time periodically, or, if no option is selected, you can prevent synchronization. (See the in-guest check after this list.)
  • Increased Support for Multi-Processor Fault Tolerance (SMP-FT) maximums: With ESXi 7.0 Update 1, you can configure more SMP-FT VMs, and more total SMP-FT vCPUs in an ESXi host, or a cluster, depending on your workloads and capacity planning.
  • Virtual hardware version 18: ESXi 7.0 Update 1 introduces virtual hardware version 18 to enable support for virtual machines with higher resource maximums, and:
    • Secure Encrypted Virtualization – Encrypted State (SEV-ES)
    • Virtual remote direct memory access (vRDMA) native endpoints
    • EVC Graphics Mode (vSGA).
  • Increased resource maximums for virtual machines and performance enhancements:
    • With ESXi 7.0 Update 1, you can create virtual machines with three times more virtual CPUs and four times more memory, enabling applications with larger memory and CPU footprints to scale in an almost linear fashion, comparable with bare metal. Virtual machine resource maximums increase to 768 vCPUs from 256, and to 24 TB of virtual RAM from 6 TB. Still, not over-committing memory remains a best practice. Only virtual machines with hardware version 18 and operating systems supporting such large configurations can be set up with these resource maximums.
    • Performance enhancements in ESXi that support the larger scale of virtual machines include widening of the physical address, address space optimizations, better NUMA awareness for guest virtual machines, and more scalable synchronization techniques. vSphere vMotion is also optimized to work with the larger virtual machine configurations.
    • ESXi hosts with AMD processors can support virtual machines with twice as many vCPUs (256) and up to 8 TB of RAM.
    • Persistent memory (PMEM) support is up twofold to 12 TB from 6 TB for both Memory Mode and App Direct Mode.
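As a quick aside on the time synchronization item above, you can also inspect and toggle the mode from inside a guest with the VMware Tools command-line utility. A minimal sketch, assuming a guest with VMware Tools installed and vmware-toolbox-cmd on the path:

vmware-toolbox-cmd timesync status    # prints Enabled or Disabled
vmware-toolbox-cmd timesync disable   # stop periodic syncing with the host clock
vmware-toolbox-cmd timesync enable    # resume syncing with the host clock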
Upgrade/Install Considerations ESXi
In vSphere 7.x, the Update Manager plug-in, used for administering vSphere Update Manager, is replaced with the Lifecycle Manager plug-in. Administrative operations for vSphere Update Manager are still available under the Lifecycle Manager plug-in, along with new capabilities for vSphere Lifecycle Manager.

The typical way to apply patches to ESXi 7.x hosts is by using the vSphere Lifecycle Manager. For details, see About vSphere Lifecycle Manager and vSphere Lifecycle Manager Baselines and Images.

You can also update ESXi hosts without using the Lifecycle Manager plug-in, and use an image profile instead. To do this, you must manually download the patch offline bundle ZIP file from the VMware download page or the Product Patches page and use the esxcli software profile command.
For more information, see the Upgrading Hosts by Using ESXCLI Commands and the VMware ESXi Upgrade guide.
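As a rough sketch of that esxcli path (the datastore path, bundle file name, and profile name below are placeholders; list the profiles in your bundle first):

esxcli software sources profile list -d /vmfs/volumes/datastore1/VMware-ESXi-7.0U1-depot.zip   # show the image profiles inside the bundle
esxcli software profile update -d /vmfs/volumes/datastore1/VMware-ESXi-7.0U1-depot.zip -p ESXi-7.0U1-standard   # apply the chosen profile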

What’s New vSAN
vSAN 7.0 Update 1 introduces the following new features and enhancements:

Scale Without Compromise

  • HCI Mesh. HCI Mesh is a software-based approach for disaggregation of compute and storage resources in vSAN. HCI Mesh brings together multiple independent vSAN clusters by enabling cross-cluster utilization of remote datastore capacity within vCenter Server. HCI Mesh enables you to efficiently utilize and consume data center resources, which provides simple storage management at scale.
  • vSAN File Service enhancements. Native vSAN File Service includes support for SMB file shares. Support for Microsoft Active Directory, Kerberos authentication, and scalability improvements also are available.
  • Compression-only vSAN. You can enable compression independently of deduplication, which provides a storage efficiency option for workloads that cannot take advantage of deduplication. With compression-only vSAN, a failed capacity device only impacts that device and not the entire disk group.
  • Increased usable capacity. Internal optimizations allow vSAN to no longer need the 25-30% of free space available for internal operations and host failure rebuilds. The amount of space required is a deterministic value based on deployment variables, such as size of the cluster and density of storage devices. These changes provide more usable capacity for workloads.
  • Shared witness for two-node clusters. vSAN 7.0 Update 1 enables a single vSAN witness host to manage multiple two-node clusters. A single witness host can support up to 64 clusters, which greatly reduces operational and resource overhead.

Simplify Operations

  • vSAN Data-in-Transit encryption. This feature enables secure over-the-wire encryption of data traffic between nodes in a vSAN cluster. vSAN data-in-transit encryption is a cluster-wide feature and can be enabled independently or along with vSAN data-at-rest encryption. Traffic encryption uses the same FIPS 140-2 validated cryptographic module as existing encryption features and does not require the use of a KMS server.
  • Enhanced data durability during maintenance mode. This improvement protects the integrity of data when you place a host into maintenance mode with the Ensure Accessibility option. All incremental writes that would have been written to the host in maintenance are now redirected to another host, if one is available. This feature benefits VMs configured with PFTT=1, and also provides an alternative to using PFTT=2 for ensuring data integrity during maintenance operations.
  • vLCM enhancements. vSphere Lifecycle Manager (vLCM) is a solution for unified software and firmware lifecycle management. In this release, vLCM is enhanced with firmware support for Lenovo ReadyNodes, awareness of vSAN stretched cluster and fault domain configurations, additional hardware compatibility pre-checks, and increased scalability for concurrent cluster operations.
  • Reserved capacity. You can enable capacity reservations for internal cluster operations and host failure rebuilds. Reservations are soft-thresholds designed to prevent user-driven provisioning activity from interfering with internal operations, such as data rebuilds, rebalancing activity, or policy re-configurations.
  • Default gateway override. You can override the default gateway for a VMkernel adapter to provide a different gateway for the vSAN network. This feature simplifies routing configuration for stretched clusters, two-node clusters, and fault domain deployments that previously required manual configuration of static routes. Static routes are no longer necessary.
  • Faster vSAN host restarts. The time interval for a planned host restart has been reduced by persisting in-memory metadata to disk before the restart or shutdown. This method reduces the time required for hosts in a vSAN cluster to restart, which decreases the overall cluster downtime during maintenance windows.
  • Workload I/O analysis. Analyze VM I/O metrics with IOInsight, a monitoring and troubleshooting tool that is integrated directly into vCenter Server. Gain a detailed view of VM I/O characteristics such as performance, I/O size and type, read/write ratio, and other important data metrics. You can run IOInsight operations against VMs, hosts, or the entire cluster.
  • Consolidated I/O performance view. You can select multiple VMs, and display a combined view of storage performance metrics such as IOPS, throughput, and latency. You can compare storage performance characteristics across multiple VMs.
  • VM latency monitoring with IOPS limits. This improvement in performance monitoring helps you distinguish the periods of latency that can occur due to enforced IOPS limits. This view can help organizations that set IOPS limits in VM storage policies.
  • Secure drive erase. Securely wipe flash storage devices before decommissioning them from a vSAN cluster through a set of new PowerCLI or API commands. Use these commands to safely erase data in accordance with NIST standards.
  • Data migration pre-check for disks. vSAN’s data migration pre-check for host maintenance mode now includes support for individual disk devices or entire disk groups. This offers more granular pre-checks for disk or disk group decommissioning.
  • VPAT section 508 compliant. vSAN is compliant with the Voluntary Product Accessibility Template (VPAT). VPAT section 508 compliance ensures that vSAN has had a thorough audit of accessibility requirements and has instituted product changes for proper compliance.

 Note: vSAN 7.0 Update 1 improves CPU performance by standardizing task timers throughout the system. This change addresses issues with timers activating earlier or later than requested, resulting in degraded performance for some workloads.

Upgrade/Install Considerations vSAN
For instructions about upgrading vSAN, see the vSAN Documentation: Upgrading the vSAN Cluster  |  Before You Upgrade  |  Upgrading vCenter Server  |  Upgrading Hosts

Note: Before performing the upgrade, please review the most recent version of the VMware Compatibility Guide to validate that the latest vSAN version is available for your platform.

vSAN 7.0 Update 1 is a new release that requires a full upgrade to vSphere 7.0 Update 1. Perform the following tasks to complete the upgrade:

1. Upgrade to vCenter Server 7.0 Update 1. For more information, see the VMware vSphere 7.0 Update 1 Release Notes.
2. Upgrade hosts to ESXi 7.0 Update 1. For more information, see the VMware vSphere 7.0 Update 1 Release Notes.
3. Upgrade the vSAN on-disk format to version 13.0. If upgrading from on-disk format version 3.0 or later, no data evacuation is required (metadata update only).

Note: vSAN retired disk format version 1.0 in vSAN 7.0 Update 1. Disks running disk format version 1.0 are no longer recognized by vSAN. vSAN blocks upgrades to vSAN 7.0 Update 1 through vSphere Update Manager, ISO install, or esxcli. To avoid these issues, upgrade disks running disk format version 1.0 to a higher version. If you have disks on version 1.0, a health check alerts you to upgrade the disk format version.

Disk format version 1.0 does not have performance and snapshot enhancements, and it lacks support for advanced features including checksum, deduplication and compression, and encryption. For more information about vSAN disk format versions, see KB2145267.
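If you want to confirm the current on-disk format from an ESXi host before upgrading, one hedged way (the field is named 'On-disk format version' in recent releases, but check your build's output):

esxcli vsan storage list | grep -i "format version"   # prints one format-version line per claimed disk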

Upgrading the On-disk Format for Hosts with Limited Capacity

During an upgrade of the vSAN on-disk format from version 1.0 or 2.0, a disk group evacuation is performed. The disk group is removed and upgraded to on-disk format version 13.0, and the disk group is added back to the cluster. For two-node or three-node clusters, or clusters without enough capacity to evacuate each disk group, select Allow Reduced Redundancy from the vSphere Client. You also can use the following RVC command to upgrade the on-disk format: vsan.ondisk_upgrade --allow-reduced-redundancy
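A minimal RVC sketch of that command, run from the vCenter Server appliance shell (the datacenter and cluster names below are placeholders for your own inventory path):

rvc administrator@vsphere.local@localhost
vsan.ondisk_upgrade --allow-reduced-redundancy /localhost/MyDatacenter/computers/MyCluster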

When you allow reduced redundancy, your VMs are unprotected for the duration of the upgrade, because this method does not evacuate data to the other hosts in the cluster. It removes each disk group, upgrades the on-disk format, and adds the disk group back to the cluster. All objects remain available, but with reduced redundancy.

If you enable deduplication and compression during the upgrade to vSAN 7.0 Update 1, you can select Allow Reduced Redundancy from the vSphere Client.

Limitations

For information about maximum configuration limits for the vSAN 7.0 Update 1 release, see the Configuration Maximums documentation.

Technical Enablement
Release Notes vCenter Click Here  |  What’s New  |  Earlier Releases  |  Patch Info  |  Installation & Upgrade Notes   |  Product Support Notices

Resolved Issues  |  Known Issues

Release Notes ESXi Click Here  |  What’s New  |  Earlier Releases  |  Patch Info  |  Product Support Notices  |  Resolved Issues  |  Known Issues
Release Notes vSAN Click Here  |  What’s New  |  VMware vSAN Community  |  Upgrades for This Release  |  Limitations  |  Known Issues
docs.vmware/vCenter Installation & Setup  |   vCenter Server Upgrade  |   vCenter Server Configuration
docs.vmware/ESXi Installation & Setup  |  Upgrading   |   Managing Host and Cluster Lifecycle  |   Host Profiles  |   Networking  |   Storage  |   Security

Resource Management  |   Availability  |  Monitoring & Performance

docs.vmware/vSAN Using vSAN Policies  |  Expanding & Managing a vSAN Cluster  |  Device Management  |  Increasing Space Efficiency  |  Encryption

Upgrading the vSAN Cluster  |  Before You Upgrade  |  Upgrading vCenter Server  |  Upgrading Hosts

Compatibility Information Interoperability Matrix vCenter  |  Configuration Maximums vSphere (All)  |  Ports Used vSphere (All)

Interoperability Matrix ESXi  |  Interoperability Matrix vSAN  |  Configuration Maximums vSAN  |  Ports Used vSAN

Blogs & Infolinks What’s New with VMware vSphere 7 Update 1  |  Main VMware Blog vSphere 7    |  vSAN  |  vSphere  |   vCenter Server

Announcing the ESXi-Arm Fling  |  In-Product Evaluation of vSphere with Tanzu

vSphere 7 Update 1 – Unprecedented Scalability

YouTube A Quick Look at What’s New in vSphere 7 Update 1  |  vSphere with Tanzu Overview in 3 Minutes

VMware vSphere with Tanzu webpage  |  eBook: Deliver Developer-Ready Infrastructure Using vSphere with Tanzu

What’s New in vSAN 7 Update 1   |  PM’s Blog, Cormac vSAN 7.0 Update 1

Download vSphere   |   vSAN
VMworld 2020 OnDemand

(Free Account Needed)

Deep Dive: What’s New with vCenter Server [HCP1100]    |   99 Problems, But A vSphere Upgrade Ain’t One [HCP1830]

Certificate Management in vSphere [HCP2050]      |     Connect vSAN Capacity Across Clusters with VMware HCI Mesh [DEM3206]

Deep Dive: vSphere 7 Developer Center [HCP1211]

More vSphere & vSAN VMworld Sessions

VMworld HOL Walkthrough

(VMworld Account Needed)

Introduction to vSphere Performance [HOL-2104-95-ISM]

VMware vSphere – What’s New [HOL-2111-95-ISM]

What’s New in vSAN – Getting Started [HOL-2108-95-ISM]

Step by Step: Upgrading the capacity disks in a vSAN 7 Hybrid Cluster


My GEN5 Home Lab is ever expanding and the space demands on the vSAN cluster were becoming more apparent.  This past weekend I updated my vSAN 7 cluster capacity disks from 6 x 600GB SAS HDD to 6 x 2TB SAS HDD and it went very smoothly.   Below are my notes and the order I followed around this upgrade.  Additionally, I created a video blog (link further below) around these steps.  Lastly, I can’t stress this enough – this is my home lab and not a production environment. The steps in this blog/video are just how I went about it and are not intended for any other purpose.

Current Cluster:

  • 3 x ESXi 7.0 Hosts (Supermicro X9DRD-7LN4F-JBOD, Dual E5 Xeon, 128GB RAM, 64GB USB Boot)
  • vSAN Storage is:
    • 600GB SAS Capacity HDD
    • 200GB SAS Cache SSD
    • 2 Disk Groups per host (1 x 200GB SSD + 1 x 600GB HDD)
    • IBM 5210 HBA Disk Controller
    • vSAN Datastore Capacity: ~3.5TB
    • Amount Allocated: ~3.7TB
    • Amount in use: ~1.3TB

Proposed Change:

  • Keep the 6 x 200GB SAS Cache SSD drives
  • Remove 6 x 600GB HDD Capacity Disk from hosts
  • Replace with 6 x 2TB HDD Capacity Disks
  • Upgraded vSAN Datastore ~11TB

Upgrade Notes:

  1. I chose to back up (via clone to offsite storage) and power off most of my VMs
  2. I clicked on the Cluster > Configure > vSAN > Disk Management
  3. I selected the one host I wanted to work with and then the disk group I wanted to work with
  4. I located one of the capacity disks (600GB) and clicked on it
  5. I noted its NAA ID (needed later)
  6. I then clicked on ‘Pre-check Data Migration’ and chose ‘Full data migration’
  7. The test completed successfully
  8. Back at the Disk Management screen I clicked on the HDD I was working with
  9. Next I clicked on the ellipsis and chose ‘Remove’
  10. A new window appeared, and for vSAN Data Migration I chose ‘Full Data Migration’, then clicked Remove
  11. I monitored the progress in ‘Recent Tasks’
  12. Depending on how much data needed to be migrated, and whether other objects were being resynced, each drive could take a while. For me it was ~30-90 mins per drive
  13. Once the data migration was complete, I went to my host and found the WWN# of the physical disk that matched the NAA ID from Step 5 (see the command sketch after this list)
  14. While the system was still running, I removed the disk from the chassis and replaced it with the new 2TB HDD
  15. Back at vCenter Server I clicked on the Host in the Cluster > Configure > Storage > Storage Devices
  16. I made sure the new 2TB drive was present
  17. I clicked on the 2TB drive, chose ‘Erase partitions’, and clicked OK
  18. I clicked on the Cluster > Configure > vSAN > Disk Management > ‘Claim Unused Disks’
  19. A new window appeared; I chose ‘Capacity’ for the 2TB HDD, ‘Cache’ for the 200GB SSD drives, and clicked OK
  20. Recent Tasks showed the disk being added
  21. When it was done I clicked on the newly added disk group and ensured it was in a healthy state
  22. I repeated this process until all the new HDDs were added
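Here is the command sketch referenced in Step 13, a hedged way to tie the NAA ID to a physical drive from the ESXi Shell (the naa value below is a placeholder for the ID noted in Step 5):

esxcli storage core device list -d naa.5000c500xxxxxxxx   # full details for that one device
ls /vmfs/devices/disks/ | grep -i naa                     # quick list of every NAA-named disk the host sees

The NAA ID typically embeds the drive's WWN, which is what's printed on the disk label.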

Final Outcome:

  • After upgrade the vSAN Storage is:
    • 2TB SAS Capacity HDD
    • 200GB SAS Cache SSD
    • 2 Disk Groups per host (1 x 200GB SSD + 1 x 2TB HDD)
    • IBM 5210 HBA Disk Controller
    • vSAN Datastore is ~11.7TB

Notes & other thoughts:

  • I was able to complete the upgrade in this order due to the nature of my home lab components, mainly because I’m running a SAS storage HBA that is just a JBOD controller supporting hot-pluggable drives.
  • Make sure you run the data migration pre-checks and follow any advice it has.  This came in very handy.
  • If you don’t have enough space to fully evacuate a capacity drive, you will either have to add more storage or completely remove VMs from the cluster.
  • Checking Cluster > Monitor > vSAN > Resyncing Objects gave me a good idea of when I should start my next migration. I looked for it to be complete before starting. If you have a very active cluster this may be harder to achieve.
  • The vSAN cluster health should also be checked, especially Cluster > Monitor > Skyline Health > Data > vSAN Object Health; any issues in these areas should be looked into prior to migration.
  • In most cases, the disk NAA ID reported in vCenter Server/vSAN coincides with the WWN number printed on the HDD.
  • By changing my HDDs from 600GB SAS 10K to 2TB SAS 7.2K there will be a performance hit. However, my lab needed more space and 10k-15K drives were just out of my budget.
  • Can’t recommend this reference Link from VMware enough: Expanding and Managing a vSAN Cluster

 

Video Blog:

Various Photos:

If you like my ‘no-nonsense’ videos and blogs that get straight to the point… then post a comment or let me know… Else, I’ll start posting really boring content!

Create an ESXi installation ISO with custom drivers in 9 easy steps!


One of the challenges in running a VMware based home lab is the ability to work with old / inexpensive hardware but run the latest software. It’s a balance that is sometimes frustrating, but when it works it is very rewarding. Most recently I decided to move to 10GbE from my InfiniBand 40Gb network. Part of this transition was to create an ESXi ISO with the latest build (6.7U3) and appropriate network card drivers. In this video blog post I’ll show 9 easy steps to create your own customized ESXi ISO and how to pinpoint IO cards on the VMware HCL.

** Update 03/06/2020 ** Though I had good luck with the HP 593742-001 NC523SFP DUAL PORT SFP+ 10Gb card in my Gen 4 Home Lab, I found it faulty when running in my Gen 5 Home Lab. Could be I was using a PCIe x4 slot in Gen 4, or it could be that the card runs too hot to touch. The card has since been removed from the VMware HCL, HP has advisories out about it, and after doing some poking around there seem to be lots of issues with it. I’m looking for a replacement and may go with the HP NC550SFP. However, this doesn’t mean the steps in this video are only for this card; the steps in this video help you to better understand how to add drivers into an ISO.

Here are the written steps I took from my video blog.  If you are looking for more detail, watch the video.

Before you start – make sure you have PowerCLI installed, have downloaded these files, and have placed them in c:\tmp.

I started up PowerCLI and ran the following commands:

1) Add the ESXi Update ZIP file to the depot:

Add-EsxSoftwareDepot C:\tmp\update-from-esxi6.7-6.7_update03.zip

2) Add the QLogic offline bundle ZIP file to the depot:

Add-EsxSoftwareDepot 'C:\tmp\qlcnic-esx55-6.1.191-offline_bundle-2845912.zip'

3) Make sure the files from steps 1 and 2 are in the depot:

Get-EsxSoftwareDepot

4) Show the profile names from update-from-esxi6.7-6.7_update03. The default command only shows part of the name; to see the full name, pipe to ‘| select name’:

Get-EsxImageProfile | select name

5) Create a clone profile to start working with.

New-EsxImageProfile -cloneprofile ESXi-6.7.0-20190802001-standard -Name ESXi-6.7.0-20190802001-standard-QLogic -Vendor QLogic

6) Validate the QLogic driver is loaded in the local depot. It should match the driver from step 2. Make sure you note the name and version number columns; we’ll need to combine these two with a space in the next step.

Get-EsxSoftwarePackage -Vendor q*

7) Add the software package to the cloned profile. Tip: For ‘SoftwarePackage:’ enter the ‘name’, a space, then the ‘version number’ from step 6. If you just use the short name it might not work.

Add-EsxSoftwarePackage

ImageProfile: ESXi-6.7.0-20190802001-standard-QLogic
SoftwarePackage[0]: net-qlcnic 6.1.191-1OEM.600.0.0.2494585
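If you prefer to skip the interactive prompts, the same step can be written as one line (parameter values taken from the transcript above):

Add-EsxSoftwarePackage -ImageProfile ESXi-6.7.0-20190802001-standard-QLogic -SoftwarePackage 'net-qlcnic 6.1.191-1OEM.600.0.0.2494585'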

8) Optional: Compare the profiles, to see differences, and ensure the driver file is in the profile.

Get-EsxImageProfile | select name   << Run this if you need a reminder on the profile names

Compare-EsxImageProfile -ComparisonProfile ESXi-6.7.0-20190802001-standard-QLogic -ReferenceProfile ESXi-6.7.0-20190802001-standard

9) Create the ISO

Export-EsxImageProfile -ImageProfile "ESXi-6.7.0-20190802001-standard-QLogic" -ExportToIso -FilePath c:\tmp\ESXi-6.7.0-20190802001-standard-QLogic.iso

That’s it!  If you like my ‘no-nonsense’ videos and blogs that get straight to the point… then post a comment or let me know… Else, I’ll start posting boring video blogs!

Cross vSAN Cluster support for FT

 

FIX for Netgear Orbi Router / Firewall blocks additional subnets


Last April my trusty Netgear switch finally gave in. I bought a nifty Dell PowerConnect 6224 switch and have been working with it off and on. About the same time, I decided to update my home network with the Orbi WiFi System (RBK50) AC3000 by Netgear. My previous Netgear WiFi router worked quite well but I really needed something to support multiple locations seamlessly.

The Orbi mesh has a primary device and allows for satellites to be connected to it. It creates a WiFi mesh that allows devices to go from room to room or building to building seamlessly. I’ve had it up for a while now and it’s been working out great – that is, until I decided to ask it to route more than one subnet. In this blog I’ll show you the steps I took to overcome this feature limitation, but like all content on my blog this is for my reference – travel, use, or follow at your own risk.

To understand the problem we need to first understand the network layout. My Orbi router is the gateway of last resort and it supplies DHCP and DNS services. In my network I have two subnets which are untagged VLANs known as VLAN 74 – 172.16.74.x/24 and VLAN 75 – 172.16.75.x/24. VLAN 74 is used by my home devices and VLAN 75 is where I manage my ESXi hosts. I have enabled RIP v2 on the Orbi and on the Dell 6224 switch. The routing tables are populated correctly, and I can ping from any internal subnet to any host without issue, except when the Orbi is involved.

Issue: Hosts on VLAN 75 are not able to get to the internet. Hosts on VLAN 75 can resolve DNS names (example: yahoo.com) but cannot ping any host on the internet. Conversely, VLAN 74 can ping internet hosts and get to the internet. I’d like my hosts on VLAN 75 to have all the same functionality as my hosts on VLAN 74.

Findings: By default, the primary Orbi router blocks any host that is not on VLAN 74 from getting to the internet. I believe Netgear enabled this block to limit the number of devices the Orbi could NAT. I can only guess that either the router just can’t handle the load or this was a maximum Netgear tested it to. I found this firewall block by logging into the CLI of my Orbi and looking at the iptables settings. There I could clearly see a firewall rule blocking hosts that were not part of VLAN 74.

Solution:  Adjust the Orbi to allow all VLAN traffic (USE AT YOUR OWN RISK)

  1. Enable Telnet access on your Primary Orbi Router.
    1. Go to http://{your orbi ip address}/debug.htm
    2. Choose ‘Enable Telnet’ (**reminder to disable this when done**)
    3. Telnet into the Orbi Router (I just used putty)
    4. Logon as root using your routers main password
  2. I issued the command 'iptables -t filter -L loc2net'. Using the output of this command I could see that line 5 was dropping all traffic that is not (!) VLAN 74.
  3. Let’s remove this firewall rule. The one I want to target is the 5th in the list; yours may vary. This command will remove it: 'iptables -t filter -D loc2net 5'

    ** Update 10-2020 ** It appears that more recent firmware updates have changed the targeting for the steps below. I noticed in Router Firmware Version V2.5.1.16 I had to add 2 to the targeted line number to remove it with the iptables command. This may vary for the device being worked on. Again, all my posts, blogs, and videos are for my records and not for any intended purpose.

  4. Next, we need to clean up some post routing issues ‘iptables -t nat -I POSTROUTING 1 -o brwan -j MASQUERADE’
  5. A quick test and I can now PING and get to the internet from VLAN 75
  6. Disconnect from Telnet and disable it on your router.

Note: Unfortunately, this is not a permanent fix. Once you reboot your router the old settings come back. The good news is, it’s only two to three lines to fix this problem. Check out the links below for more information and a script.

Easy Copy Commands for my reference:

iptables -t filter -L loc2net

iptables -t filter -D loc2net 7  << Check this number

iptables -t nat -I POSTROUTING 1 -o brwan -j MASQUERADE
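And a minimal shell sketch that strings the fix together for re-applying after a reboot (the rule number is an assumption; verify it first with --line-numbers):

#!/bin/sh
# list the loc2net chain with rule numbers; find the rule dropping non-VLAN74 traffic
iptables -t filter -L loc2net --line-numbers
# delete the drop rule (5 is a placeholder; use the number from the listing above)
iptables -t filter -D loc2net 5
# re-insert the NAT masquerade rule for the WAN bridge
iptables -t nat -I POSTROUTING 1 -o brwan -j MASQUERADE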

If you like my ‘no-nonsense’ blog articles that get straight to the point… then post a comment or let me know… Else, I’ll start writing boring blog content.

Home Lab Gen IV – Part V Installing Mellanox HCAs with ESXi 6.5


The next step on my InfiniBand home lab journey was getting the InfiniBand HCAs to play nice with ESXi. To do this I needed to update the HCA firmware, which proved to be a bit of a challenge. In this blog post I go into how I solved this issue and got them working with ESXi 6.5.

My initial HCA selection was the ConnectX (aka HP INFINIBAND 4X DDR PCI-E HCA CARD 452372-001) and the Mellanox MHGA28-XTC InfiniHost III HCA; these two cards proved to be a challenge when updating their firmware. I tried all types of operating systems, different drivers, different mobos, and MFT tools versions, but they would not update or be recognized by the OS. The only thing I didn’t try was a Linux OS. The Mellanox forums are filled with folks trying to solve these issues with mixed success. I went with these cheaper cards and they simply do not have the product support necessary. I don’t recommend the use of these cards with ESXi and have migrated to a ConnectX-3, which you will see below.

Updating the ConnectX 3 Card:

After a little trial and error, here is how I updated the firmware on the ConnectX-3. I found the ConnectX-3 card worked very well with Windows 2012; I was able to install the latest Mellanox OFED for Windows (aka Windows drivers for the Mellanox HCA card) and update the firmware very smoothly.

First, I confirmed the drivers via Windows Device Manager (update to the latest if needed).

Once you confirm Windows device functionality, install the Mellanox Firmware Tools for Windows (aka WinMFT).

Next, it’s time to update the HCA firmware. To do this you need to know the exact model number and sometimes the card revision. Normally this information can be found on the back of your HCA. With this in hand go to the Mellanox firmware page and locate your card then download the update.

After you download the firmware, place it in an accessible directory. Next, open the CLI, navigate to the WinMFT directory, and use the ‘mst status’ command to reveal the HCA identifier, or MST Device Name. If this command works, it is a good sign your HCA is working properly and communicating with the OS. Next, I used the flint command to update my firmware. The syntax is: flint -d <MST Device Name> -i <Firmware Name> burn
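A hedged example of that flow (the MST device name and firmware file below are placeholders; mst status prints your real device name, and the firmware file comes from the Mellanox download):

mst status                                  # note the MST device name
flint -d mt4099_pci_cr0 -i fw-ConnectX3-rel.bin burn
flint -d mt4099_pci_cr0 query               # confirm the new firmware version after the burn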

Tip: If you are having trouble with your Mellanox HCA I highly recommend the Mellanox communities. The community there is generally very responsive and helpful!

Installation of ESXi 6.5 with Mellanox ConnectX-3

I would love to tell you how easy this was, but the truth is it was hard. Again, old HCAs with new ESXi doesn’t equal easy or simple to install, but it does equal home lab fun. Let me save you hours of work. Here is the simple solution when trying to get Mellanox ConnectX cards working with ESXi 6.5. In the end I was able to get ESXi 6.5 working with my ConnectX card (aka HP INFINIBAND 4X DDR PCI-E HCA CARD 452372-001) and with my ConnectX-3 CX354A.

Tip: I do not recommend the use of the ConnectX card (aka HP INFINIBAND 4X DDR PCI-E HCA CARD 452372-001) with ESXi 6.x. No matter how I tried, I could not update its firmware and it has VERY limited or non-existent support. Save time; go with ConnectX-3 or above.

After I installed ESXi 6.5 I ran the following commands and it worked like a champ.

Disable native driver for vRDMA

  • esxcli system module set --enabled=false -m=nrdma
  • esxcli system module set --enabled=false -m=nrdma_vmkapi_shim
  • esxcli system module set --enabled=false -m=nmlx4_rdma
  • esxcli system module set --enabled=false -m=vmkapi_v2_3_0_0_rdma_shim
  • esxcli system module set --enabled=false -m=vrdma

Uninstall default driver set

  • esxcli software vib remove -n net-mlx4-en
  • esxcli software vib remove -n net-mlx4-core
  • esxcli software vib remove -n nmlx4-rdma
  • esxcli software vib remove -n nmlx4-en
  • esxcli software vib remove -n nmlx4-core
  • esxcli software vib remove -n nmlx5-core

Install Mellanox OFED 1.8.2.5 for ESXi 6.x.

  • esxcli software vib install -d /var/log/vmware/MLNX-OFED-ESX-1.8.2.5-10EM-600.0.0.2494585.zip
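A quick hedged sanity check after the reboot (VIB and NIC names vary by card and driver bundle):

esxcli software vib list | grep -i mlx   # confirm the Mellanox OFED VIBs are installed
esxcli network nic list                  # the ConnectX ports should now appear as vmnics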

After a quick reboot, I got 40Gb networking up and running. I did a few vmkpings between hosts and they pinged perfectly.

So, what’s next? Now that I have the HCA working I need to get vSAN (if possible) working with my new high-speed network, but that, folks, is another post.

If you like my ‘no-nonsense’ blog articles that get straight to the point… then post a comment or let me know… Else, I’ll start writing boring blog content.

The 3 Amigos – NUC, LIAN LI, and Cooler Master


Today I wanted to look at the Cooler Master Elite 110 and compare it a bit to some other cases.

Let’s see how its footprint measures up to some familiar cases. I stacked it up against the Intel NUC5i7RYH and my Lian Li PC-Q25, and surprisingly the Elite 110 is like a big cube reminiscent of older Shuttle cases. The size is nice for a small-footprint PC, but depending on your use it may be too bulky for appliance-based work. One thing I did note: the manufacturer states the case is 20.8 cm wide but my measurements come out close to 21.2 cm.

Note: I used my Lian Li case for my FreeNAS build, it’s a great case for those wanting to build a NAS (Click here for more PICS)

Inside the Elite 110 there are your standard edge cables (USB, audio, switches, and lights). The power button is located front bottom center and is the Cooler Master logo. On the right-hand side are all your typical USB 3.0, audio, reset, and HDD LED connectors.

The case allows for a maximum of 3 x 3.5″ or 4 x 2.5″ disk drives. You can also work this into different combinations. For example, 3 x 3.5″ HDD and 1 x 2.5″ SSD could make a vSAN hybrid combination, or 3 x 2.5″ SSD for vSAN all-flash and 1 x 3.5″ for the boot disk.

These disk drives can be mounted on the left-hand side and top. When mounting the disks I found it better to orient the SATA and power connectors toward the rear.

Top Mount – Allows for 2 x 3.5″ or 2 x 2.5″.  In the photo below I’m using 1 x 3.5″  and 1 x 2.5″

Left Side Mount – Only allows for 1 x 3.5″ or 2 x 2.5″ disk drives. In this photo I’m showing the 3.5″ disk mounted in its only position, and the 2.5″ disk is unmounted to show some of the mount points.

The rear of the case will accept a standard ATX power supply, which sticks out about an inch. The case also supports two PCI slots, which should be enough for most ITX motherboards with one or two PCI slots.

Inside we find only 4 pre-threaded motherboard mount points and a 120 mm fan. The fan’s power cable can connect to the power supply or to your motherboard.

Quick summary – The Elite 110 is a nice budget case. Depending on your use case it could make a nice case for your home lab, NAS server, or even a vSAN box. Its footprint is a bit too big for appliance-based needs and the case metal is thin. I don’t like the fact that there are only 4 mount points for your motherboard; this is fine for an ITX board with a single PCI slot but not so good for dual. This is not a fault of the Elite 110 but more of an ATX/mATX/ITX standards problem. With no mount points near the second PCI slot, a lot of pressure is put on your motherboard during card insertion. This could lead to cards being mis-inserted.

Overall, for the $35 I spent on this case it’s a pretty good value. Further photos can be found here on Newegg, and if you hurry the case is $28 with a rebate.

Manufacture Links:

If you like my ‘no-nonsense’ blog articles that get straight to the point… then post a comment or let me know… Else, I’ll start writing boring blog content.

Limited vCenter Server options with Windows 2016


If you plan to update your vCenter Server to Windows 2016 then you might want to make sure you do your homework. After reviewing the following KB, it’s apparent that vCenter Server for Windows on Windows 2016 is only supported with vCenter Server 6.5. This might be a great time to consider moving to the vCenter Server Appliance (aka VCSA).

Here is the KB around the compatibility – https://kb.vmware.com/s/article/2091273?language=en_US

vSphere 6.0 / 6.5 Cross reference build release for ESXi, vSAN, and vCenter Server


I love the Correlating build numbers and versions of VMware products KB (1014508). This one KB has made my job, and I’m sure yours too, so much easier. Before this KB was released it was a bit difficult to correlate build, patch, and update levels to vSphere environments. Now with just a few clicks one can find all this information and more. However, I really need the ability to correlate multiple core products. Typically, I work with ESXi, vCenter Server, and vSAN. So, today I took the time to align all this information.

It took me about 5 mins to build the chart below but it will save me loads of time. I can’t tell you how many times I’ve been asked which version of ESXi was related to which version of vSAN, and oh, what version of vCenter Server was released with it? Well, with the chart below you can answer those questions and more.

~ Enjoy!

vSAN version | ESXi version | Release Date | Build Number | vCenter Server version | Release Date | Build Number
vSAN 6.6.1 | ESXi 6.5 Update 1 | 7/27/2017 | 5969303 | vCenter Server 6.5 Update 1 | 7/27/2017 | 5973321
n/a | n/a | n/a | n/a | vCenter Server 6.5 0e Express Patch 3 | 6/15/2017 | 5705665
vSAN 6.6 | ESXi 6.5.0d | 4/18/2017 | 5310538 | vCenter Server 6.5 0d Express Patch 2 | 4/18/2017 | 5318154
vSAN 6.5 Express Patch 1a | ESXi 6.5 Express Patch 1a | 3/28/2017 | 5224529 | vCenter Server 6.5 0c Express Patch 1b | 4/13/2017 | 5318112
vSAN 6.5 Patch 01 | ESXi 6.5 Patch 01 | 3/9/2017 | 5146846 | vCenter Server 6.5 0b Patch 1 | 3/14/2017 | 5178943
vSAN 6.5.0a | ESXi 6.5.0a | 2/2/2017 | 4887370 | vCenter Server 6.5 0a Express Patch 1 | 2/2/2017 | 4944578
vSAN 6.5 | ESXi 6.5 GA | 11/15/2016 | 4564106 | vCenter Server 6.5 GA | 11/15/2016 | 4602587
vSAN 6.2 Patch 5 | ESXi 6.0 Patch 5 | 7/11/2017 | 5572656 | n/a | n/a | n/a
vSAN 6.2 Express Patch 7c | ESXi 6.0 Express Patch 7c | 3/28/2017 | 5251623 | vCenter Server 6.0 Update 3b | 4/13/2017 | 5318200/5318203
vSAN 6.2 Express Patch 7a | ESXi 6.0 Express Patch 7a | 3/28/2017 | 5224934 | vCenter Server 6.0 Update 3a | 3/21/2017 | 5183549
vSAN 6.2 Update 3 | ESXi 6.0 Update 3 | 2/24/2017 | 5050593 | vCenter Server 6.0 Update 3 | 2/24/2017 | 5112527
vSAN 6.2 Patch 4 | ESXi 6.0 Patch 4 | 11/22/2016 | 4600944 | vCenter Server 6.0 Update 2a | 11/22/2016 | 4541947
vSAN 6.2 Express Patch 7 | ESXi 6.0 Express Patch 7 | 10/17/2016 | 4510822 | n/a | n/a | n/a
vSAN 6.2 Patch 3 | ESXi 6.0 Patch 3 | 8/4/2016 | 4192238 | n/a | n/a | n/a
vSAN 6.2 Express Patch 6 | ESXi 6.0 Express Patch 6 | 5/12/2016 | 3825889 | n/a | n/a | n/a
vSAN 6.2 | ESXi 6.0 Update 2 | 3/16/2016 | 3620759 | vCenter Server 6.0 Update 2 | 3/16/2016 | 3634793
vSAN 6.1 Express Patch 5 | ESXi 6.0 Express Patch 5 | 2/23/2016 | 3568940 | n/a | n/a | n/a
vSAN 6.1 Update 1b | ESXi 6.0 Update 1b | 1/7/2016 | 3380124 | vCenter Server 6.0 Update 1b | 1/7/2016 | 3339083
vSAN 6.1 Express Patch 4 | ESXi 6.0 Express Patch 4 | 11/25/2015 | 3247720 | n/a | n/a | n/a
vSAN 6.1 U1a (Express Patch 3) | ESXi 6.0 U1a (Express Patch 3) | 10/6/2015 | 3073146 | n/a | n/a | n/a
vSAN 6.1 | ESXi 6.0 U1 | 9/10/2015 | 3029758 | vCenter Server 6.0 Update 1 | 9/10/2015 | 3018524
vSAN 6.0.0b | ESXi 6.0.0b | 7/7/2015 | 2809209 | vCenter Server 6.0.0b | 7/7/2015 | 2776511
vSAN 6.0 Express Patch 2 | ESXi 6.0 Express Patch 2 | 5/14/2015 | 2715440 | n/a | n/a | n/a
vSAN 6.0 Express Patch 1 | ESXi 6.0 Express Patch 1 | 4/9/2015 | 2615704 | vCenter Server 6.0.0a | 4/16/2015 | 2656760
vSAN 6.0 | ESXi 6.0 GA | 3/12/2015 | 2494585 | vCenter Server 6.0 GA | 3/12/2015 | 2559268

If you like my ‘no-nonsense’ blog articles that get straight to the point… then post a comment or let me know… Else, I’ll start writing boring blog content.

2 VMTools Secrets your mother never told you about!


These are pretty common asks amongst operators of ESXi: ‘Which VMTools version came with my ESXi host?’ and ‘Where can I view and download all the VMTools directly?’ The answers are below; the outputs aren’t pretty, but they sure are useful!

1st – Check out the URL below to see the mapping of ESXi host builds to the VMware Tools versions released with them.

https://packages.vmware.com/tools/versions

2nd – Where can I view and download all the VMTools directly?

https://packages.vmware.com/tools/esx/index.html
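As a hedged host-side check, the bundled VMware Tools rides in the tools-light VIB, so its version tells you what shipped with your build (output formatting varies by release):

esxcli software vib get -n tools-light | grep -i version   # the VIB version tracks the bundled VMware Tools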

Finally, if you read this far then you are in luck; here is the best tip: watch this video and you’ll know more about VMTools than your mom :)

http://vmware.mediasite.com/mediasite/Play/6d33be3f5da840a19ec1997e220aedfe1d

If you like my ‘no-nonsense’ blog articles that get straight to the point… then post a comment or let me know… Else, I’ll start writing boring blog content.

 

Home Lab Gen IV – Part IV: Overcoming installation challenges


One of the joys of working with a home lab is doing something that no one has done before. Sure, your configuration might be similar to others, but in a way your home lab is unique. However, with this uniqueness comes its share of installation challenges. My new lab was no exception; there were a few challenges and one major issue I uncovered while setting up this new environment. In this blog post I am going to review the environment I am working on, break down some of the hardware layout/placement challenges, the fun of using the Mac PowerBook to complete the installation, and finally how I overcame the ESXi installation challenges.

Here is my new environment:

  • Mac Powerbook with macOS Sierra (Used for remote connection into my environment, normally I use a PC)
  • Gigabyte MX31-BS0
  • Intel Xeon E3-1230 v5
  • 32GB DDR4 RAM
  • 1 x Mellanox Connectx InfiniBand HCA
  • 4 x 200GB SSD, 1 x 64GB USB (Boot)
  • 1 x IBM M5210 JBOD SAS Controller
  • 1 x Mini SAS SFF-8643 to (4) 29pin SFF-8482
  • 1 x 64GB USB Boot Stick:

Hardware layout/placement challenges:

32GB of RAM: Pay attention to the placement of the RAM. Channel 1 is the two slots closest to the CPU, channel 2 the two farthest away. Normally you would place the RAM pairs in like-colored slots; however, this mobo is a bit different.

Mellanox Connectx InfiniBand HCA: Placed it in the 16x slot right next to the CPU. The HCA requires an 8x slot so this slot should not slow it down. No BIOS changes were required and I could see this HCA in the BIOS.

IBM M5210 JBOD SAS Controller: Placed it in the 8x slot, which goes through the C232 chipset on the motherboard. Next, I needed to update the firmware, but this proved to be a challenge. Keep in mind the M5210 with NO cache will not allow you to enter its BIOS management page (aka MegaRAID WebBIOS). This means you’ll need to use the command line or other software to update it and view its information. Initially, I tried several command-line options (UEFI shell, DOS CLI, etc.) with the MegaRAID CLI, but I just could not find the right combination to get it to work. My solution — I simply used an older SSD drive, installed Windows Server 2012 on it, and used the Windows EXE to update the firmware. It worked perfectly with no issues.

After the update, I had some issues decoding the M5210’s running firmware version vs. the vSAN HCL. As you know, when running vSAN in a home lab, the closer you are to the HCL and vSAN HCL the better. (NOTE: as I’m sure you know, production environments MUST match the HCLs.) The published firmware version on the vSAN HCL is 4.660.00-8218. However, when the M5210 boots it shows 24.16.0-0104.

Solution: When you are looking at the boot screen you are seeing the FW package number, not the firmware version of your controller. Simply look at the release notes for that ‘FW Package’ and you’ll find the correct MR FW versions that match the vSAN HCL.

IBM / Lenovo doesn’t make it easy to find the firmware for this device.

Here are a few more recent links:

Boot Screen

Release notes
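If you have the Avago/LSI StorCLI utility handy, a hedged way to read both numbers without rebooting (controller index /c0 is an assumption; on Windows, swap grep for findstr):

storcli64 /c0 show all | grep -i firmware   # shows the Firmware Package Build (the boot-screen number) and the Firmware Version (the number the vSAN HCL lists)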

200GB SSD: The Sonata cases I am using are a bit dated but they fully meet my needs, so there is no need to replace them. There are 4 x 3.5″ bottom-mount disk trays in each case. Bottom mount means you insert your 3.5″ drive into the tray and bolt it to the tray from the bottom. I bought several 3.5″-to-2.5″ converters to mount my 2.5″ SSDs. However, the converters didn’t have bottom-mount holes that lined up with the standard 3.5″ holes. Fix — I used a hole in the existing tray to secure the converter to the tray. I also made sure I mounted the converter as far back as I could to ensure the SAS cables would not press against the side of the case. This mount position moved the drives back about 1.5″ (38mm). The red line in the PIC shows where the original mount point was.

Mini SAS SFF-8643 to (4) 29-pin SFF-8482: From the PIC above you can see the disk end of the SAS cables. What is nice about them is each one is labeled with a disk number, power is integrated, and all 4 drives go back to a single connector. The only downside to the cable I bought was that it seemed a bit frail, so if you plan to mod your environment frequently, I’d recommend looking into a better-quality cable. If you are interested in more about SAS and the associated cables, I would recommend this wiki page – https://en.wikipedia.org/wiki/Serial_Attached_SCSI

64GB USB Boot Stick: I decided to use the internal USB port, freeing up the rear ports for other items. The USB stick I am using is the SanDisk Ultra Fit 64GB USB 3.0 Flash Drive. ESXi will only take up ~10GB of this stick, so is 64GB overkill? Keep in mind I plan to run vSAN 6.6.x, and one of its benefits is that log files now write to RAM and, in case of a system failure, can be written to the USB stick. However, the default partition sizes (2.5GB for diags) might not be large enough. The vSAN team has released a nifty script that will estimate and resize your USB partitions. I’ll cover this topic in later posts and show you how to “auto-resize” your USB storage after you have installed vSAN.

Fun with the MAC:

Function keys: One of the challenges was Mac keyboard mapping into the remote KVM. For some reason, the function keys on a Mac always assume you want their special function vs. the F# key you are pushing. This proves to be a challenge when you are trying to pass standard function keys. Simple fix: System Preferences > Keyboard > ensure ‘Use F1, F2, etc. as standard function keys’ is checked.

Another option for F# keys is to create a macro inside the vKVM Viewer to pass the key. The screenshot below shows where you can set up user-defined macros; in the background is the MergePoint console for one of my ESXi hosts.

Java: One of the joys of this motherboard is the use of the vKVM viewer and virtual media. However, these functions need Java installed and up to date to work properly. If your Java is behind, trust me, just update; it’ll save you hours of pain. Here is the remaining gotcha. In the MergePoint web page, you simply click on the ‘Launch Java vKVM Viewer’ button to start your host remote session. The webpage will download a .jnlp file. If you just click on this file you are presented with an error stating it can’t be opened because it is from an unidentified developer. Solution – after the Java app downloads, click on the down arrow next to the file and choose ‘Show in Finder’. When Finder launches, select that file by holding down the Control key and right-clicking on it. A pop-up window will appear; release the Control key and choose Open. This allows you to override the ‘unidentified developer’ error and launch the viewer.

ESXi Installation:

Setting up the ESXi hosts had one big challenge: after the install of ESXi I could not see my SAS disks. I am using the ESXi 6.5U1 Rollup ISO to do my installs, and my main goal was to install and boot ESXi from the 64GB USB stick and be able to access the 4 x 200GB SSD attached to the IBM M5210.

Problem – During the install of ESXi, I booted the host using the ESXi 6.5 ISO via the virtual media console. The installer program would recognize the IBM M5210 controller, the attached 4 x SAS disks, and the 64GB USB stick. The installation would complete without issue. However, after ESXi booted, the SAS disks and the controller would not appear, but I could see the 64GB USB stick.

Other observations –

First, in the ESXi log files I noticed the megasas driver was having firmware issues:

2017-09-21T10:26:31.310Z cpu5:66065)<6>megasas: Waiting for FW to come to ready state
2017-09-21T10:26:31.310Z cpu5:66065)<7>megasas: FW in FAULT state!!
2017-09-21T10:26:31.310Z cpu5:66065)WARNING: vmklinux: pci_announce_device:1486: PCI: driver megaraid_sas probe failed for device 0000:07:00.0
2017-09-21T10:26:31.310Z cpu5:66065)LinPCI: LinuxPCI_DeviceUnclaimed:257: Device 0000:07:00.0 unclaimed.

And even though ESXi saw the M5210 as vmhba1, its status was unknown:

vmhba1 Avago (LSI) MegaRAID SAS Invader Controller
vmhba1 0000:07:00.0 PCI 0:0:29:0 PCI 0:7:0:0 Slot1 UNKNOWN

Second, I use the Partition Wizard bootable ISO to remove all partitions prior to installing ESXi. I noted that sometimes after booting it as virtual media it would see the 4 x SAS disks and other times it would not.

Third, installation of ESXi onto a SAS or SATA SSD as the boot disk worked perfectly. After booting I could see the M5210 and SAS disks, but my goal of using the 64GB USB stick as the boot device was not achieved.

Fourth, occasionally when I booted the ESXi host from the USB stick it would work okay, but upon reboot it would not.

Final solution – The core reason why I could not see the SAS disks with ESXi or Partition Wizard was that the boot type was UEFI and not legacy. During boot, the boot order would sometimes change if I had virtual media connected, meaning sometimes it would boot the 64GB USB stick or Partition Wizard as UEFI and other times as legacy. Apparently, UEFI boot was giving the M5210 firmware issues, not allowing the SAS disks to come online.

FIX – I went into the BIOS of the motherboard > Advanced > CSM Configuration > changed ‘Boot option filter’ to ‘Legacy Only’ and all my issues went away.

Summary – I spent a lot of after-hours and weekends working out the various installation tweaks, but what can I say, this is the joy of setting up a home lab! My hope is that in some way this post helps you move your home lab forward too. In my next post, I’ll go over how to enable the InfiniBand HCA in ESXi 6.5.

If you like my ‘no-nonsense’ blog articles that get straight to the point… then post a comment or let me know… Else, I’ll start writing boring blog content.