
VMware vSphere 7.0 Update 1 | vCenter, ESXi, vSAN | Information


VMware announced the GA Releases of the following:

  • VMware vCenter 7.0 Update 1
  • VMware ESXi 7.0 Update 1
  • VMware vSAN 7.0 Update 1

See the table below for all the technical enablement links, now including VMworld 2020 OnDemand Sessions.

Release Overview
vCenter Server 7.0 Update 1 | ISO Build 16860138

ESXi 7.0 Update 1 | ISO Build 16850804

VMware vSAN 7.0 Update 1 | Build 16850804

What’s New vCenter Server
Inclusive terminology: In vCenter Server 7.0 Update 1, as part of a company-wide effort to remove instances of non-inclusive language in our products, the vSphere team has made changes to some of the terms used in the vSphere Client. APIs and CLIs still use legacy terms, but updates are pending in an upcoming release.

  • vSphere Accessibility Enhancements: vCenter Server 7.0 Update 1 comes with significant accessibility enhancements based on recommendations by the Accessibility Conformance Report (ACR), which is the internationally accepted standard.  Read more
  • vSphere Ideas Portal: With vCenter Server 7.0 Update 1, any user with a valid my.vmware.com account can submit feature requests by using the vSphere Ideas portal. Read more
  • Enhanced vSphere Lifecycle Manager hardware compatibility pre-checks for vSAN environments: vCenter Server 7.0 Update 1 adds vSphere Lifecycle Manager hardware compatibility pre-checks. Read more
  • Increased scalability with vSphere Lifecycle Manager: vSphere Lifecycle Manager operations with ESXi hosts and clusters now scale up to:
    • 64 supported clusters, up from 15
    • 96 supported ESXi hosts within a cluster, up from 64 (for vSAN environments, the limit remains 64)
    • 280 supported ESXi hosts managed by a single vSphere Lifecycle Manager image, up from 150
    • 64 clusters on which you can run remediation in parallel when you initiate remediation at the data center level, up from 15
  • vSphere Lifecycle Manager support for coordinated upgrades between availability zones: With vCenter Server 7.0 Update 1, to prevent overlapping operations, vSphere Lifecycle Manager updates fault domains in vSAN clusters in a sequence. ESXi hosts within each fault domain are still updated in a rolling fashion. For vSAN stretched clusters, the first fault domain is always the preferred site.
  • Extended list of supported Red Hat Enterprise Linux and Ubuntu versions for the VMware vSphere Update Manager Download Service (UMDS): vCenter Server 7.0 Update 1 adds new Red Hat Enterprise Linux and Ubuntu versions that UMDS supports. For the complete list of supported versions, see Supported Linux-Based Operating Systems for Installing UMDS.
  • Silence Alerts button in VMware Skyline Health: With vCenter Server 7.0 Update 1, you can stop alerts for certain health checks, such as notifications for known issues, by using the Silence Alerts button. Read more
  • Configure SMTP authentication: vCenter Server 7.0 Update 1 adds support for SMTP authentication in the vCenter Server Appliance to enable sending alerts and alarms by email in secure mode. See Configure Mail Sender Settings. Read more
  • System virtual machines for vSphere Cluster Services: In vCenter Server 7.0 Update 1, vSphere Cluster Services adds a set of system virtual machines in every vSphere cluster to ensure the healthy operation of VMware vSphere Distributed Resource Scheduler. For more information, see VMware knowledge base articles KB80472, KB79892, and KB80483.
  • Licensing for VMware Tanzu Basic: With vCenter Server 7.0 Update 1, licensing for VMware Tanzu Basic splits into separate license keys for vSphere 7 Enterprise Plus and VMware Tanzu Basic. In vCenter Server 7.0 Update 1, you must provide either a vSphere 7 Enterprise Plus license key or a vSphere 7 Enterprise Plus with an add-on for Kubernetes license key to enable the Enterprise Plus functionality for ESXi hosts. In addition, you must provide a VMware Tanzu Basic license key to enable Kubernetes functionality for all ESXi hosts that you want to use as part of a Supervisor Cluster.
    When you upgrade a 7.0 deployment to 7.0 Update 1, existing Supervisor Clusters automatically start a 60-day evaluation mode. If you do not install a VMware Tanzu Basic license key and assign it to existing Supervisor Clusters within 60 days, you see some limitations in the Kubernetes functionality. For more information, see Licensing for vSphere with Tanzu and VMware knowledge base article KB80868.
  • For VMware vSphere with Tanzu updates, see VMware vSphere with Tanzu Release Notes.
Upgrade/Install Considerations vCenter
Before upgrading to vCenter Server 7.0 Update 1, you must confirm that the Link Aggregation Control Protocol (LACP) mode is set to enhanced, which enables the Multiple Link Aggregation Control Protocol (the multipleLag parameter) on the VMware vSphere Distributed Switch (VDS) in your vCenter Server system.

If the LACP mode is set to basic, indicating One Link Aggregation Control Protocol (singleLag), the distributed virtual port groups on the vSphere Distributed Switch might lose connection after the upgrade and affect the management vmknic, if it is on one of the dvPort groups. During the upgrade precheck, you see an error such as Source vCenter Server has instance(s) of Distributed Virtual Switch at unsupported lacpApiVersion.

For more information on converting to Enhanced LACP Support on a vSphere Distributed Switch, see VMware knowledge base article 2051311. For more information on the limitations of LACP in vSphere, see VMware knowledge base article 2051307.

Product Support Notices

  • vCenter Server 7.0 Update 1 does not support VMware Site Recovery Manager 8.3.1.
  • Deprecation of Server Message Block (SMB) protocol version 1.0
    File-based backup and restore of vCenter Server by using Server Message Block (SMB) protocol version 1.0 is deprecated in vCenter Server 7.0 Update 1. Removal of SMBv1 is due in a future vSphere release.
  • End of General Support for VMware Tools 9.10.x and 10.0.x. See the VMware Product Lifecycle Matrix.
  • Deprecation of the VMware Service Lifecycle Manager API
    VMware plans to deprecate the VMware Service Lifecycle Manager API (vmonapi service) in a future release. For more information, see VMware knowledge base article 80775.
  • End of support for Internet Explorer 11
    Removal of Internet Explorer 11 from the list of supported browsers for the vSphere Client is due in a future vSphere release.
  • VMware Host Client in maintenance mode
What’s New ESXi

  • ESXi 7.0 Update 1 supports vSphere Quick Boot on the following servers:
    • HPE ProLiant BL460c Gen9
    • HPE ProLiant DL325 Gen10 Plus
    • HPE ProLiant DL360 Gen9
    • HPE ProLiant DL385 Gen10 Plus
    • HPE ProLiant XL225n Gen10 Plus
    • HPE Synergy 480 Gen9
  • Enhanced vSphere Lifecycle Manager hardware compatibility pre-checks for vSAN environments: ESXi 7.0 Update 1 adds vSphere Lifecycle Manager hardware compatibility pre-checks. The pre-checks automatically trigger after certain change events such as modification of the cluster desired image or addition of a new ESXi host in vSAN environments. Also, the hardware compatibility framework automatically polls the Hardware Compatibility List database at predefined intervals for changes that trigger pre-checks as necessary.
  • Increased number of vSphere Lifecycle Manager concurrent operations on clusters: With ESXi 7.0 Update 1, if you initiate remediation at a data center level, the number of clusters on which you can run remediation in parallel, increases from 15 to 64 clusters.
  • vSphere Lifecycle Manager support for coordinated updates between availability zones: With ESXi 7.0 Update 1, to prevent overlapping operations, vSphere Lifecycle Manager updates fault domains in vSAN clusters in a sequence. ESXi hosts within each fault domain are still updated in a rolling fashion. For vSAN stretched clusters, the first fault domain is always the preferred site.
  • Extended list of supported Red Hat Enterprise Linux and Ubuntu versions for the VMware vSphere Update Manager Download Service (UMDS): ESXi 7.0 Update 1 adds new Red Hat Enterprise Linux and Ubuntu versions that UMDS supports. For the complete list of supported versions, see Supported Linux-Based Operating Systems for Installing UMDS.
  • Improved control of VMware Tools time synchronization: With ESXi 7.0 Update 1, you can select a VMware Tools time synchronization mode from the vSphere Client instead of using the command prompt. When you navigate to VM Options > VMware Tools > Synchronize Time with Host, you can select Synchronize at startup and resume (recommended) or Synchronize time periodically, or leave both options cleared to prevent synchronization. (A guest-side command-line equivalent is sketched after this list.)
  • Increased Support for Multi-Processor Fault Tolerance (SMP-FT) maximums: With ESXi 7.0 Update 1, you can configure more SMP-FT VMs, and more total SMP-FT vCPUs in an ESXi host, or a cluster, depending on your workloads and capacity planning.
  • Virtual hardware version 18: ESXi 7.0 Update 1 introduces virtual hardware version 18 to enable support for virtual machines with higher resource maximums, and:
    • Secure Encrypted Virtualization – Encrypted State (SEV-ES)
    • Virtual remote direct memory access (vRDMA) native endpoints
    • EVC Graphics Mode (vSGA).
  • Increased resource maximums for virtual machines and performance enhancements:
    • With ESXi 7.0 Update 1, you can create virtual machines with three times more virtual CPUs and four times more memory to enable applications with larger memory and CPU footprint to scale in an almost linear fashion, comparable with bare metal. Virtual machine resource maximums are up to 768 vCPUs from 256 vCPUs, and to 24 TB of virtual RAM from 6 TB. Still, not over-committing memory remains a best practice. Only virtual machines with hardware version 18 and operating systems supporting such large configurations can be set up with these resource maximums.
    • Performance enhancements in ESXi that support the larger scale of virtual machines include widening of the physical address, address space optimizations, better NUMA awareness for guest virtual machines, and more scalable synchronization techniques. vSphere vMotion is also optimized to work with the larger virtual machine configurations.
    • ESXi hosts with AMD processors can support virtual machines with twice as many vCPUs (256) and up to 8 TB of RAM.
    • Persistent memory (PMEM) support doubles from 6 TB to 12 TB for both Memory Mode and App Direct Mode.
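As referenced in the VMware Tools time-synchronization item above, the same modes can also be checked or changed from inside the guest with the VMware Tools command-line utility. This is a hedged sketch of the long-standing Tools CLI rather than anything new in 7.0 Update 1; on Windows guests the equivalent binary is VMwareToolboxCmd.exe with the same arguments:

vmware-toolbox-cmd timesync status << reports Enabled or Disabled
vmware-toolbox-cmd timesync enable << periodic synchronization with the host
vmware-toolbox-cmd timesync disable << no periodic synchronization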
Upgrade/Install Considerations ESXi
In vSphere 7.x, the Update Manager plug-in, used for administering vSphere Update Manager, is replaced with the Lifecycle Manager plug-in. Administrative operations for vSphere Update Manager are still available under the Lifecycle Manager plug-in, along with new capabilities for vSphere Lifecycle Manager.

The typical way to apply patches to ESXi 7.x hosts is by using the vSphere Lifecycle Manager. For details, see About vSphere Lifecycle Manager and vSphere Lifecycle Manager Baselines and Images.

You can also update ESXi hosts without using the Lifecycle Manager plug-in, and use an image profile instead. To do this, you must manually download the patch offline bundle ZIP file from the VMware download page or the Product Patches page and use the esxcli software profile command.
For more information, see the Upgrading Hosts by Using ESXCLI Commands and the VMware ESXi Upgrade guide.
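For reference, a minimal sketch of the image-profile approach from the ESXi Shell, assuming the offline bundle has already been copied to a datastore (the depot path and profile name are placeholders – list the profiles in your bundle first and substitute your own):

~ # esxcli software sources profile list -d /vmfs/volumes/datastore1/ESXi-7.0U1-depot.zip << list the image profiles in the bundle
~ # esxcli software profile update -d /vmfs/volumes/datastore1/ESXi-7.0U1-depot.zip -p <profile-name> << apply the selected profile
~ # reboot

Place the host in maintenance mode first, and prefer "profile update" over "profile install" unless you intend to overwrite third-party VIBs already on the host.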

What’s New vSAN
vSAN 7.0 Update 1 introduces the following new features and enhancements:

Scale Without Compromise

  • HCI Mesh. HCI Mesh is a software-based approach for disaggregation of compute and storage resources in vSAN. HCI Mesh brings together multiple independent vSAN clusters by enabling cross-cluster utilization of remote datastore capacity within vCenter Server. HCI Mesh enables you to efficiently utilize and consume data center resources, which provides simple storage management at scale.
  • vSAN File Service enhancements. Native vSAN File Service includes support for SMB file shares. Support for Microsoft Active Directory, Kerberos authentication, and scalability improvements also are available.
  • Compression-only vSAN. You can enable compression independently of deduplication, which provides a storage efficiency option for workloads that cannot take advantage of deduplication. With compression-only vSAN, a failed capacity device only impacts that device and not the entire disk group.
  • Increased usable capacity. Internal optimizations allow vSAN to no longer need the 25-30% of free space available for internal operations and host failure rebuilds. The amount of space required is a deterministic value based on deployment variables, such as size of the cluster and density of storage devices. These changes provide more usable capacity for workloads.
  • Shared witness for two-node clusters. vSAN 7.0 Update 1 enables a single vSAN witness host to manage multiple two-node clusters. A single witness host can support up to 64 clusters, which greatly reduces operational and resource overhead.

Simplify Operations

  • vSAN Data-in-Transit encryption. This feature enables secure over-the-wire encryption of data traffic between nodes in a vSAN cluster. vSAN data-in-transit encryption is a cluster-wide feature and can be enabled independently of or along with vSAN data-at-rest encryption. Traffic encryption uses the same FIPS 140-2 validated cryptographic module as existing encryption features and does not require use of a KMS server.
  • Enhanced data durability during maintenance mode. This improvement protects the integrity of data when you place a host into maintenance mode with the Ensure Accessibility option. All incremental writes that would have been written to the host in maintenance are now redirected to another host, if one is available. This feature benefits VMs that have PFTT=1 configured, and also provides an alternative to using PFTT=2 for ensuring data integrity during maintenance operations.
  • vLCM enhancements. vSphere Lifecycle Manager (vLCM) is a solution for unified software and firmware lifecycle management. In this release, vLCM is enhanced with firmware support for Lenovo ReadyNodes, awareness of vSAN stretched cluster and fault domain configurations, additional hardware compatibility pre-checks, and increased scalability for concurrent cluster operations.
  • Reserved capacity. You can enable capacity reservations for internal cluster operations and host failure rebuilds. Reservations are soft-thresholds designed to prevent user-driven provisioning activity from interfering with internal operations, such as data rebuilds, rebalancing activity, or policy re-configurations.
  • Default gateway override. You can override the default gateway for the VMkernel adapter to provide a different gateway for the vSAN network. This feature simplifies routing configuration for stretched clusters, two-node clusters, and fault domain deployments that previously required manual configuration of static routes. Static routes are no longer necessary.
  • Faster vSAN host restarts. The time interval for a planned host restart has been reduced by persisting in-memory metadata to disk before the restart or shutdown. This method reduces the time required for hosts in a vSAN cluster to restart, which decreases the overall cluster downtime during maintenance windows.
  • Workload I/O analysis. Analyze VM I/O metrics with IOInsight, a monitoring and troubleshooting tool that is integrated directly into vCenter Server. Gain a detailed view of VM I/O characteristics such as performance, I/O size and type, read/write ratio, and other important data metrics. You can run IOInsight operations against VMs, hosts, or the entire cluster.
  • Consolidated I/O performance view. You can select multiple VMs, and display a combined view of storage performance metrics such as IOPS, throughput, and latency. You can compare storage performance characteristics across multiple VMs.
  • VM latency monitoring with IOPS limits. This improvement in performance monitoring helps you distinguish the periods of latency that can occur due to enforced IOPS limits. This view can help organizations that set IOPS limits in VM storage policies.
  • Secure drive erase. Securely wipe flash storage devices before decommissioning them from a vSAN cluster through a set of new PowerCLI or API commands. Use these commands to safely erase data in accordance with NIST standards.
  • Data migration pre-check for disks. vSAN’s data migration pre-check for host maintenance mode now includes support for individual disk devices or entire disk groups. This offers more granular pre-checks for disk or disk group decommissioning.
  • VPAT section 508 compliant. vSAN is compliant with the Voluntary Product Accessibility Template (VPAT). VPAT section 508 compliance ensures that vSAN has had a thorough audit of accessibility requirements and has instituted product changes for proper compliance.

 Note: vSAN 7.0 Update 1 improves CPU performance by standardizing task timers throughout the system. This change addresses issues with timers activating earlier or later than requested, resulting in degraded performance for some workloads.

Upgrade/Install Considerations vSAN
For instructions about upgrading vSAN, see the vSAN Documentation: Upgrading the vSAN Cluster  |  Before You Upgrade  |  Upgrading vCenter Server  |  Upgrading Hosts

Note: Before performing the upgrade, please review the most recent version of the VMware Compatibility Guide to validate that the latest vSAN version is available for your platform.

vSAN 7.0 Update 1 is a new release that requires a full upgrade to vSphere 7.0 Update 1. Perform the following tasks to complete the upgrade:

1. Upgrade to vCenter Server 7.0 Update 1. For more information, see the VMware vSphere 7.0 Update 1 Release Notes.
2. Upgrade hosts to ESXi 7.0 Update 1. For more information, see the VMware vSphere 7.0 Update 1 Release Notes.
3. Upgrade the vSAN on-disk format to version 13.0. If upgrading from on-disk format version 3.0 or later, no data evacuation is required (metadata update only).

 Note: vSAN retired disk format version 1.0 in vSAN 7.0 Update 1. Disks running disk format version 1.0 are no longer recognized by vSAN, and vSAN blocks upgrades to vSAN 7.0 Update 1 through vSphere Update Manager, ISO install, or esxcli while such disks are present. To avoid these issues, upgrade disks running disk format version 1.0 to a higher version. If you have disks on version 1.0, a health check alerts you to upgrade the disk format version.

Disk format version 1.0 does not have performance and snapshot enhancements, and it lacks support for advanced features including checksum, deduplication and compression, and encryption. For more information about vSAN disk format version, see KB2145267.
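To check which on-disk format version your disks are currently on, the vSphere Client shows it under Cluster > Configure > vSAN > Disk Management. A hedged command-line alternative from an SSH session to a host (field names can vary slightly between releases):

~ # esxcli vsan storage list | grep -i "format version" << prints the on-disk format version for each device claimed by vSAN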

Upgrading the On-disk Format for Hosts with Limited Capacity

During an upgrade of the vSAN on-disk format from version 1.0 or 2.0, a disk group evacuation is performed. The disk group is removed, upgraded to on-disk format version 13.0, and added back to the cluster. For two-node or three-node clusters, or clusters without enough capacity to evacuate each disk group, select Allow Reduced Redundancy from the vSphere Client. You also can use the following RVC command to upgrade the on-disk format: vsan.ondisk_upgrade --allow-reduced-redundancy
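A minimal RVC sketch of that command, with placeholder datacenter and cluster names (note the double dashes, which blog formatting tends to mangle into a single long dash; without the flag, a full evacuation is attempted per disk group):

rvc administrator@vsphere.local@localhost << log on to RVC on vCenter Server; the credentials are an example
/localhost> cd /localhost/<Datacenter>/computers
/localhost/<Datacenter>/computers> vsan.ondisk_upgrade <Cluster> --allow-reduced-redundancy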

When you allow reduced redundancy, your VMs are unprotected for the duration of the upgrade, because this method does not evacuate data to the other hosts in the cluster. It removes each disk group, upgrades the on-disk format, and adds the disk group back to the cluster. All objects remain available, but with reduced redundancy.

If you enable deduplication and compression during the upgrade to vSAN 7.0 Update 1, you can select Allow Reduced Redundancy from the vSphere Client.

Limitations

For information about maximum configuration limits for the vSAN 7.0 Update 1 release, see the Configuration Maximums  documentation.

Technical Enablement
Release Notes vCenter Click Here  |  What’s New  |  Earlier Releases  |  Patch Info  |  Installation & Upgrade Notes   |  Product Support Notices

Resolved Issues  |  Known Issues

Release Notes ESXi Click Here  |  What’s New  |  Earlier Releases  |  Patch Info  |  Product Support Notices  |  Resolved Issues  |  Known Issues
Release Notes vSAN Click Here  |  What’s New  |  VMware vSAN Community  |  Upgrades for This Release  |  Limitations  |  Known Issues
docs.vmware/vCenter Installation & Setup  |   vCenter Server Upgrade  |   vCenter Server Configuration
Docs.vmware/ESXi Installation & Setup  |  Upgrading   |   Managing Host and Cluster Lifecycle  |   Host Profiles  |   Networking  |   Storage  |   Security

Resource Management  |   Availability  |  Monitoring & Performance

docs.vmware/vSAN Using vSAN Policies  |  Expanding & Managing a vSAN Cluster  |  Device Management  |  Increasing Space Efficiency  |  Encryption

Upgrading the vSAN Cluster  |  Before You Upgrade  |  Upgrading vCenter Server  |  Upgrading Hosts

Compatibility Information Interoperability Matrix vCenter  |  Configuration Maximums vSphere (All)  |  Ports Used vSphere (All)

Interoperability Matrix ESXi  |  Interoperability Matrix vSAN  |  Configuration Maximums vSAN  |  Ports Used vSAN

Blogs & Infolinks What’s New with VMware vSphere 7 Update 1  |  Main VMware Blog vSphere 7    |  vSAN  |  vSphere  |   vCenter Server

Announcing the ESXi-Arm Fling  |  In-Product Evaluation of vSphere with Tanzu

vSphere 7 Update 1 – Unprecedented Scalability

YouTube A Quick Look at What’s New in vSphere 7 Update 1  |  vSphere with Tanzu Overview in 3 Minutes

VMware vSphere with Tanzu webpage  |  eBook: Deliver Developer-Ready Infrastructure Using vSphere with Tanzu

What’s New in vSAN 7 Update 1   |  PM’s Blog, Cormac vSAN 7.0 Update 1

Download vSphere   |   vSAN
VMworld 2020 OnDemand

(Free Account Needed)

Deep Dive: What’s New with vCenter Server [HCP1100]    |   99 Problems, But A vSphere Upgrade Ain’t One [HCP1830]

Certificate Management in vSphere [HCP2050]      |     Connect vSAN Capacity Across Clusters with VMware HCI Mesh [DEM3206]

Deep Dive: vSphere 7 Developer Center [HCP1211]

More vSphere & vSAN VMworld Sessions

VMworld HOL Walkthrough

(VMworld Account Needed)

Introduction to vSphere Performance [HOL-2104-95-ISM]

VMware vSphere – What’s New [HOL-2111-95-ISM]

What’s New in vSAN – Getting Started [HOL-2108-95-ISM]

Step by Step: Upgrading the capacity disks in a vSAN 7 Hybrid Cluster


My GEN5 home lab is ever expanding, and the space demands on the vSAN cluster were becoming more apparent. This past weekend I updated my vSAN 7 cluster capacity disks from 6 x 600GB SAS HDD to 6 x 2TB SAS HDD, and it went very smoothly. Below are my notes and the order I followed for this upgrade. Additionally, I created a video blog (link further below) around these steps. Lastly, I can’t stress this enough – this is my home lab and not a production environment. The steps in this blog/video are just how I went about it and are not intended for any other purpose.

Current Cluster:

  • 3 x ESXi 7.0 Hosts (Supermicro X9DRD-7LN4F-JBOD, Dual E5 Xeon, 128GB RAM, 64GB USB Boot)
  • vSAN Storage is:
    • 600GB SAS Capacity HDD
    • 200GB SAS Cache SSD
    • 2 Disk Groups per host (1 x 200GB SSD + 1 x 600GB HDD)
    • IBM 5210 HBA Disk Controller
    • vSAN Datastore Capacity: ~3.5TB
    • Amount Allocated: ~3.7TB
    • Amount in use: ~1.3TB

Proposed Change:

  • Keep the 6 x 200GB SAS Cache SSD drives
  • Remove 6 x 600GB HDD Capacity Disk from hosts
  • Replace with 6 x 2TB HDD Capacity Disks
  • Upgraded vSAN Datastore: ~11TB

Upgrade Notes:

  1. I chose to back up (via clone to offsite storage) and power off most of my VMs
  2. I clicked on the Cluster > Configure > vSAN > Disk Management
  3. I selected the one host I wanted to work with and then the Disk group I wanted to work with
  4. I located one of the capacity disks (600GB) and clicked on it
  5. I noted its NAA ID (will need later)
  6. I then clicked on “Pre-check Data Migration” and chose ‘Full data migration’
  7. The test completed successfully
  8. Back at the Disk Management screen, I clicked on the HDD I was working with
  9. Next I clicked on the ellipsis dots and chose ‘Remove’
  10. A new window appeared, and for vSAN Data Migration I chose ‘Full Data Migration’, then clicked Remove
  11. I monitored the progress in ‘Recent Tasks’
  12. Depending on how much data needed to be migrated, and whether other objects were being resynced, it could take a bit of time per drive. For me this was ~30-90 minutes per drive
  13. Once the data migration was complete, I went to my host and found the WWN # of the physical disk that matched the NAA ID from Step 5 (a couple of helpful commands are sketched after this list)
  14. While the system was still running, I removed the disk from the chassis and replaced it with the new 2TB HDD
  15. Back at vCenter Server, I clicked on the host in the cluster, then Configure > Storage > Storage Devices
  16. I made sure the new 2TB drive was present
  17. I clicked on the 2TB drive, chose ‘Erase partitions’, and clicked OK
  18. I clicked on the Cluster > Configure > vSAN > Disk Management > ‘Claim Unused Disks’
  19. A new window appeared; I chose ‘Capacity’ for the 2TB HDD and ‘Cache’ for the 200GB SSD drives, then clicked OK
  20. Recent Tasks showed the disk being added
  21. When it was done, I clicked on the newly added disk group and ensured it was in a healthy state
  22. I repeated this process until all the new HDDs were added
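As mentioned in step 13, a couple of commands that can help match the vSAN device to the physical drive are sketched below – run from an SSH session to the host, with the NAA ID as a placeholder:

~ # esxcli vsan storage list << lists the devices vSAN has claimed, including their NAA IDs and disk-group membership
~ # esxcli storage core device list -d naa.xxxxxxxxxxxxxxxx << shows details (model, display name, size) for a single device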

Final Outcome:

  • After upgrade the vSAN Storage is:
    • 2TB SAS Capacity HDD
    • 200GB SAS Cache SSD
    • 2 Disk Groups per host (1 x 200GB SSD + 1 x 2TB HDD)
    • IBM 5210 HBA Disk Controller
    • vSAN Datastore is ~11.7TB

Notes & other thoughts:

  • I was able to complete the upgrade in this order due to the nature of my home lab components, mainly because I’m running a SAS storage HBA that is just a JBOD controller supporting hot-pluggable drives.
  • Make sure you run the data migration pre-checks and follow any advice it has.  This came in very handy.
  • If you don’t have enough space to fully evacuate a capacity drive, you will either have to add more storage or completely remove VMs from the cluster.
  • Checking Cluster > Monitor > vSAN > Resyncing Objects gave me a good idea of when I should start my next migration. I looked for it to be complete before starting. If you have a very active cluster, this may be harder to achieve.
  • Checking the vSAN cluster health should also be done, especially Cluster > Monitor > Skyline Health > Data > vSAN Object Health; any issues in these areas should be looked into prior to migration
  • The disk NAA ID reported in vCenter Server/vSAN usually, but not always, coincides with the WWN number printed on the HDD
  • By changing my HDDs from 600GB SAS 10K to 2TB SAS 7.2K there will be a performance hit. However, my lab needed more space and 10k-15K drives were just out of my budget.
  • Can’t recommend this reference Link from VMware enough: Expanding and Managing a vSAN Cluster

 

Video Blog:

Various Photos:

If you like my ‘no-nonsense’ videos and blogs that get straight to the point… then post a comment or let me know… Else, I’ll start posting really boring content!

Upgrading or adding New Hard Disks to the IOMega / EMC / Lenovo ix4-200d


I currently have an IOMega ix4-200d with 4 x 500GB Hard Disk Drives (HDD). I am in the process of rebuilding my vSAN home lab to all flash, which means I’ll have plenty of spare 2TB HDDs – so why not repurpose them to upgrade my IOMega? Updating the HDDs in an IOMega is a pretty simple process; however, documenting and waiting are most of the battle.

There are two different ways you can update your IOMega: 1) via the command line and 2) via the web client. From what I understand, the command-line version is far faster. However, I wanted to document the non-command-line version, as most of the blogs around this process were a bit sparse on the details. I started off by reading a few blog posts on the non-command-line version of this upgrade. From there I came up with the basic steps and filled in the blanks as I went along. Below are the steps I took to update mine; your steps might vary. After documenting this process I can now see why most of the blogs were sparse on the details – there are a lot of steps and details to complete this task. So, be prepared, as this process can be quite lengthy.

NOTES:

  • YOU WILL LOSE YOUR DATA, SO BACK IT UP
  • You will lose the IOMega configuration (documenting it might be helpful)

Here are the steps I took:

  • Ensure you can logon to the website of your IOMega Device (lost the password – follow these steps)
  • Backup the IOMega Configuration
    • If needed, screenshot the configuration or document how it is set up
  • Backup the data (YOU WILL LOSE YOUR DATA)
    • For me, I have an external 3TB USB disk and I used Syncback via my Windows PC to back up the data
  • Firmware: ensure the new HDDs and the IOMega IX4 are up to date
    • Seagate Disks ST2000DM001 -9YN164
    • Iomega IX4-200d (Product is EOL, no updates from Lenovo)
  • After backing up the data, power off the IOMega, unplug the power, and remove the cover
  • Remove the non-boot 500GB disks from the IOMega and label them (Disks 2-4), do not remove Disk 1
    Special Notes:
    • From what I read usually Disk 1 is the “boot” disk for the IOMega
    • In my case, it was Disk 1
    • For some of you, it may not be. One way to find out is to remove disks 2-4 and see if the IOMega boots. If so, you found it; if not, power off and try with only disk 2, and so on, until you find the right disk
  • Replace Disks 2-4 with the new HDD, in my case I put in the 2TB HDDs
  • Power on system (Don’t forget to plug it back in)
    • The IOMega display may note there are new disks added, just push the down arrow till you see the main screen
    • Also at this point, you won’t see the correct size as we need to adjust for the new disks
  • Go into IOMega web client

    • Settings > Disks Storage
    • Choose “Click here for steps…”
    • Check box to authorize overwrite

  • About a minute or two later my IOMega Auto Restarted
    • Note: Yours may not, give it some time and if not go to the Dashboard and choose restart
  • After reboot, I noted my configuration was gone but the Parity was reconstructing with 500GB disks
    • This is expected, as the system is replicating the parity to the new disks
    • This step took about 12+ hours to complete

  • After the reconstruction, I went into the web client and the IOMega configuration was gone. It asked me to type in the device name, time zone, and email, and then it auto-rebooted
  • After the reboot I noted all the disks were now healthy and part of the current 1.4TB parity set. This size is expected.

  • Now that the IOMega has accepted the 3 x 2TB disks, we need to break the parity group and add the final 2TB HDD
  • First, you have to delete the shares before you can change the parity type.
    • Shared Storage > Delete both shares and check to confirm delete

  • Now go to — Settings > disks > Manage Disks > Data Protection
    • Choose “Without data protection”
    • Check the box to change data protection

  • Once complete, power off the IOMega
    • Dashboard > Shutdown > Allow device to shutdown
  • After it powers off, replace Disk 1 with the last 2TB disk
  • Power On
  • Validate all disks are online
    • Go to Settings > Disks > “Click here for steps….” Then check box to authorize overwrite, choose OK.

  • After the last step observe the error message below and press ‘OK’

  • Go to Dashboard > Restart to restart the IOMega
  • After the restart the display should show “The filesystem is being prepared” with a progress bar, allow this to finish
  • Now create the Parity set with the new 2TB Disks
    • First, remove all Shared folders (See earlier steps if needed)
    • Second go to Settings > Disks > Manage Disks > Data Protection > Choose Parity > Next

  • Choose “check this box….” then click on apply…

  • After clicking apply my screen updated with a reconstruction of 0% and the display screen on the IOMega showed a progress bar too.
  • Mine took more than 24 hours to complete the rebuild.

  • After the rebuild is complete, restore the config
  • Finally, restore your data. Again, I used syncback to copy my data back

If you like my ‘no-nonsense’ blog articles that get straight to the point… then post a comment or let me know… Else, I’ll start writing boring blog content.

Quick ways to check disk alignment for ESXi Datastores and Windows VMs


There are two simple checks a virtual infrastructure (VI) admin should be doing to ensure ESXi datastores and Windows VMs are properly aligned. If either is misaligned, performance issues will follow. I’m not going to get into the whys and hows of alignment issues, but I will show you how to quickly check.

1 – ESXi Datastores (DS)

By default, if the VI admin formats a target datastore with vCenter Server, or while directly connected to a host via the VI Client, the starting sector will be 2048. A starting sector of 2048 satisfies nearly all of the storage vendors out there; however, you should still validate a 2048 starting sector with your storage vendor.

If the VI admin chose to format the DS via a script, then they should choose a starting sector of 2048, or whatever the storage vendor recommends.

Example – partedUtil setptbl $disk gpt “1 2048 …” More info here on partedUtil

Here is a simple command to check your “Start Sector”. SSH or direct console into a host that has the DSs you want to check and run this command.

~ # esxcli storage core device partition list

[Screenshot: esxistartingsector]

Some notes about this –

RED Box – This is the local boot disk, so its starting sector will be 64. This is not an issue, as it is the ESXi boot disk.

Yellow, Green, and Blue – These are all VSAN disks, and all have a starting sector of 2048 << This is what I’m looking for; I want to make sure all DS disks start at 2048. If not, they could experience performance issues.

2 – Windows VM Check

Windows checks are pretty easy too; the starting sector offset should be 2048. Note the screenshot below shows a Partition Starting Offset of 1,048,576, and note it’s labeled in bytes, not sectors. To find the starting sector, just divide the Partition Starting Offset by the Bytes/Sector. Simple math tells us it’s right – 1,048,576 / 512 = 2048 sectors. If your Partition Starting Offset is anything other than 1,048,576 bytes (2048 sectors), then the VM is not aligned and will need to be adjusted.

To find your Partition Starting Offset, from a Windows command prompt type ‘msinfo32.exe’, go to Components > Storage > Disks, and note your Partition Starting Offset.

[Screenshot: windowsstartingsector]
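If you would rather not click through msinfo32, the same numbers can be pulled from a Windows command prompt with WMI – a small sketch (divide StartingOffset by BlockSize exactly as above):

C:\> wmic partition get BlockSize,Index,Name,StartingOffset

On an aligned disk the first partition reports a StartingOffset of 1048576 and a BlockSize of 512, which works out to sector 2048.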

 

 

VSAN – Setting up VSAN Observer in my Home Lab


VSAN Observer is a slick way to display diagnostic statistics, not only around how the VSAN is performing but also how the VMs are performing.

Here are the commands I entered in my Home Lab to enable and disable the Observer.

Note: this is a diagnostic tool and should not be allowed to run for long periods of time as it will consume many GB of disk space. Ctrl+C will stop the collection

How to Start the collection….

  • vCenter239:~ # rvc root@localhost << Log on to the vCenter Server Appliance | Note you may have to enable SSH
  • password:
  • /localhost> cd /localhost/Home.Lab
  • /localhost/Home.Lab> cd computers/Home.Lab.C1 << Navigate to your cluster | My datacenter is Home.Lab, and my cluster is Home.Lab.C1
  • /localhost/Home.Lab/computers/Home.Lab.C1> vsan.observer ~/computers/Home.Lab.C1 --run-webserver --force << Enter this command to get things started; keep in mind double dashes “--” are used in front of run-webserver and force
  • [2014-09-17 03:39:54] INFO WEBrick 1.3.1
  • [2014-09-17 03:39:54] INFO ruby 1.9.2 (2011-07-09) [x86_64-linux]
  • [2014-09-17 03:39:54] WARN TCPServer Error: Address already in use – bind(2)
  • Press <Ctrl>+<C> to stop observing at any point ...[2014-09-17 03:39:54] INFO WEBrick::HTTPServer#start: pid=25461 port=8010 << Note the Port and that Ctrl+C to stop
  • 2014-09-17 03:39:54 +0000: Collect one inventory snapshot
  • Query VM properties: 0.05 sec
  • Query Stats on 172.16.76.231: 0.65 sec (on ESX: 0.15, json size: 241KB)
  • Query Stats on 172.16.76.233: 0.63 sec (on ESX: 0.15, json size: 241KB)
  • Query Stats on 172.16.76.232: 0.68 sec (on ESX: 0.15, json size: 257KB)
  • Query CMMDS from 172.16.76.231: 0.74 sec (json size: 133KB)
  • 2014-09-17 03:40:15 +0000: Live-Processing inventory snapshot
  • 2014-09-17 03:40:15 +0000: Collection took 20.77s, sleeping for 39.23s
  • 2014-09-17 03:40:15 +0000: Press <Ctrl>+<C> to stop observing

How to stop the collection… Note: the collection has to be started and running to view the web statistics shown in the screenshots below

  • ^C2014-09-17 03:40:26 +0000: Execution interrupted, wrapping up … << Control+C is entered and the observer goes into shutdown mode
  • [2014-09-17 03:40:26] INFO going to shutdown …
  • [2014-09-17 03:40:26] INFO WEBrick::HTTPServer#start done.
  • /localhost/Home.Lab/computers/Home.Lab.C1>

How to launch the web interface…

I used Firefox to log on to the web interface of VSAN Observer; IE didn’t seem to function correctly

Simply go to http://[IP of vCenter Server]:8010 Note: this is the port number noted above when starting, and it’s http, not https

 

So what does it look like and what is the purpose of each screen… Note: By default the ‘? What am I looking at’ pane is not displayed; I expanded this view to enhance the description of the screenshots.

 

 

 

 

References:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2064240

http://www.yellow-bricks.com/2013/10/21/configure-virtual-san-observer-monitoring/

VSAN – The Migration from FreeNAS


Well folks, it’s my long-awaited blog post about moving my home lab from FreeNAS to VMware VSAN.

Here are the steps I took to migrate my Home Lab GEN II with FreeNAS to Home Lab GEN III with VSAN.

Note –

  • I am not putting a focus on ESXi setup as I want to focus on the steps to setup VSAN.
  • My home lab is in no way on the VMware HCL, if you are building something like this for production you should use the VSAN HCL as your reference

The Plan –

  • Meet the Requirements
  • Backup VM’s
  • Update and Prepare Hardware
  • Distribute Existing hardware to VSAN ESXi Hosts
  • Install ESXi on all Hosts
  • Setup VSAN

The Steps –

Meet the Requirements – Detailed list here

  • Minimum of three hosts
  • Each host has a minimum of one SSD and one HDD
  • The host must be managed by vCenter Server 5.5 and configured as a Virtual SAN cluster
  • Min 6GB RAM
  • Each host has a Pass-thru RAID controller as specified in the HCL. The RAID controller must be able to present disks directly to the host without a RAID configuration.
  • 1Gb NIC; I’ll be running 2 x 1Gbps NICs. However, 10Gb networking and jumbo frames are recommended
  • VSAN VMkernel port configured on every host participating in the cluster.
  • All disks that will be allocated to VSAN should be clear of any data.

Backup Existing VMs

  • No secret here around backups. I just used vCenter Server OVF Export to a local disk to back up all my critical VMs
  • More Information Here

Update and Prepare Hardware

  • Update all Motherboard (Mobo) BIOS and disk Firmware
  • Remove all HDDs/SSDs from the FreeNAS SAN
  • Remove any data from the HDDs/SSDs. Either of these tools does the job

Distribute Existing hardware to VSAN ESXi Hosts

  • Current Lab – 1 x VMware Workstation PC, 2 x ESXi Hosts boot to USB (Host 1 and 2), 1 x FreeNAS SAN
  • Desired Lab – 3 x ESXi hosts with VSAN and 1 x Workstation PC
  • End Results after moves
    • All Hosts ESXi 5.5U1 with VSAN enabled
    • Host 1 – MSI 7676, i7-3770, 24GB RAM, Boot 160GB HDD, VSAN disks (2 x 2TB HDD SATA II, 1 x 60GB SSD SATA III), 5 x pNICs
    • Host 2 – MSI 7676, i7-2600, 32 GB RAM, Boot 160GB HDD, VSAN disks (2 x 2TB HDD SATA II, 1 x 90 GB SSD SATA III), 5 x pNICs
    • Host 3 – MSI 7676, i7-2600, 32 GB RAM, Boot 160GB HDD, VSAN disks (2 x 2TB HDD SATA II, 1 x 90 GB SSD SATA III), 5 x pNICs
    • Note – I have ditched my Gigabyte Z68XP-UD3 Mobo and bought another MSI 7676 board. I started this VSAN conversion with it, and it began to give me fits again, similar to the past. There are many web posts about bugs with this board. I am simply done with it and will move to a more reliable Mobo that is working well for me.

Install ESXi on all Hosts

  • Starting with Host 1
    • Prior to install, ensure all data has been removed and all disks show up in the BIOS in AHCI mode
    • Install ESXi to Local Boot HD
    • Set up the ESXi base IP address via the direct console, set DNS, disable IPv6, and enable the shell and SSH
    • Using the VI Client, set up the basic ESXi networking and vSwitch
    • Using the VI Client, I restored the vCSA and my AD server from OVF and powered them on
    • Once booted I logged into the vCSA via the web client
    • I built out the datacenter and added host 1
    • Created a cluster, but only enabled EVC to support my different Intel CPUs
    • Cleaned up any old DNS settings and ensured all ESXi hosts were correct
    • From the web client, validated that 2 x HDD and 1 x SSD were present in the host
    • Installed ESXi on hosts 2 and 3, following most of these steps, and added them to the cluster

Setup VSAN

  • Log on to the web client
    • Ensure on all the hosts
      • Networking is setup and all functions are working
      • NTP is working
      • All expected HDDs for VSAN are reporting in to ESXi
    • Create a vSwitch for VSAN and attach networking to it
      • I attached 2 x 1Gbps NICs; for my load that should be enough
    • Assign the VSAN License Key
      • Click on the Cluster > Manage > Settings > Virtual SAN Licensing > Assign License Key

  • Enable VSAN
    • Under Virtual SAN click on General then Edit
    • Choose ‘Turn on Virtual SAN’
    • Set ‘Add disks to storage’ to Manual
    • Note – for a system on the HCL, chances are the Automatic setting will work without issue. However, my system is not on any VMware HCL and I want to control which drives are added to my disk group.

       

  • Add Disks to VSAN
    • Under Virtual SAN click on ‘Disk Management’
    • Choose the ICON with the Check boxes on it
    • Finally add the disks you want in your disk group

  • Allow VSAN to complete its tasks, you can check on its progress by going to ‘Tasks’

  • Once complete ensure all disks report in as healthy.

  • Ensure the VSAN General tab is coming up correctly
    • 3 hosts
    • 3 of 3 SSDs
    • 6 of 6 data disks

  • Check to see if the datastore is online (a quick shell check is sketched below)
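For the quick shell check mentioned above – a hedged sketch from an SSH session to any of the hosts (these esxcli namespaces exist in ESXi 5.5, though the output fields have changed in later releases):

~ # esxcli vsan cluster get << shows the cluster UUID, this host's membership state, and the member count
~ # esxcli vsan storage list << confirms which SSDs and HDDs this host has contributed to VSAN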

 

Summary –

Migrating from FreeNAS to VSAN was a relatively simple process. I simply moved, prepared, and installed, and the product came right up. My only issue was working with a faulty Gigabyte Mobo, which I resolved by replacing it. I’ll post more as I continue to work with VSAN. If you are interested in more detail around VSAN, I would recommend the following book.

Turning a ‘No you cannot attend’ to a ‘Yes’ for VMworld


I’ve been lucky enough to make it to every VMworld since 2008, and 2014 will be my 7th time in a row. In this blog post I wanted to share with you a breakdown of some of the tips and tricks I’ve used to get to these events. As the former Phoenix VMUG leader, I’ve shared these tips with fellow VMUG users, and now I’m sharing them with all of you. Users would tell me cost is the number one reason why they don’t go – “My company sees value in this event but will not pay for it.” This breaks down to food, hotel, travel, and the infamous golden ticket, aka the VMworld pass. So how do users overcome the cost to attend? That is what this blog post is all about…

Working with your employer –

Having your employer pick up the tab not only benefits them as a company but yourself too. As you know, VMworld is full of great content and the socialization aspects are second to none. Chances are you’ll be asked to put together a total cost to attend, and this cost can be quite high for some companies on a tight budget. My suggestion is, if you are getting the big ‘No’, then work with your boss on the total costs. First find out why it’s a ‘No’ and look for opportunities to overcome this. Maybe your company will pay for some of the items. Example – they might be able to cover airfare, but the rest is on you. Don’t forget, if your company has a VMware TAM (Technical Account Manager), reach out to them. Even if you are not directly working with the TAM, they are your best resource not only for VMware technology but also for getting you to VMworld. They don’t have passes, but they usually know the community very well and can assist.

Sometimes I hear “My employer will not allow me to accept gifts”. True, your company may have a policy around the type of gifts you can receive, and by all means follow this policy. However, keep in mind you may be able to take vacation time and represent yourself at this event, not your employer. Then there is a possibility gifts could be accepted, but on the premise that you don’t represent your company. Some companies are okay with this, but just make sure yours is. If you are able to do this, I would suggest you represent it as ‘personal development’.

How do I get a free VMworld Pass?

This can be your biggest challenge. However here are some ways to get your hands on one.

  • Give-a-ways
    • I can’t tell you how many vendors have giveaway contests right now – hit them early and enter as many contests as you can find
    • Tips-
      • When you enter, find out who your local vendor contact is and let them know you entered. Then stay in contact with them.
      • Keep in mind not all contests are the same, some are based on random drawing and others are not. This is why I say keep in contact with the vendor.
      • How do I find give-a-ways >> Google ‘VMworld getting there for free’
  • Get the word out
    • Tell your boss, workmates, vendors, and partners.  Post on Twitter, Linked-In, etc. and Repeat again and again. By doing this you let others know about your strong interest in getting there, in turn they might get a lead for you.
    • Most importantly, reach out to your local VMUG leader and ask them for tips in your area. They are usually well connected and might have a lead for you as well.
    • Follow Twitter and Linked-In – You never know who is going to post up “I have a pass and need to give it to someone”. Yes that is right, before the event you can transfer a pass to someone.
      • New to Twitter, need contacts? It’s pretty simple to get started. Simply find the #VMworld hashtag, see who is posting to it, and start following them. Then look at all their contacts and follow them too; soon you’ll have a gaggle of folks.
    • This sounds like work. Why do all this? Simple: a distributed coverage model. The more people who know, the more likely they are to help, and in turn the more likely you’ll succeed
  • Don’t forgo an Expo-Only or Solutions Exchange Pass
    • If you get offered this pass take it. I can’t tell you how many vendors have these passes and have trouble giving them away, seriously this is gold but folks don’t know how to leverage them.
    • First off, there is a TON of value in this pass.
    • Second this pass can get you on to the Solutions Exchange floor where all the vendors and partners are.
      • Once there, start talking to all the vendors, fellow attendees, all those folks you met on Twitter, etc., as you never know who has a full pass they couldn’t get rid of – take it and upgrade yours.
    • Third, while you are there with an Expo Pass use Twitter and the VMworld hash tags to let folks know you’re here and you are looking for a full pass.
    • Stop by the VMUG booth on the Expo floor, you never know who will be there and you never know if users there might be able to help you.
  • Vendors and Partners
    • Find out who is sponsoring VMworld this year, and then…
      • Start calling the ones you know well, ask them for support getting there.
      • Don’t forget to call the ones you don’t know so well too.
      • If you have an upcoming deal on the table with a vendor, inquire if they will throw in passes, travel, etc.

What about Food, Hotel, and Travel Costs?

  • Food
    • There will be free food everywhere; in fact, feel free to give some to the homeless – I usually do.
    • If you get a pass then lunch and usually breakfast are included.
    • For dinner, find out where the nightly events are as they usually have food.
    • Talk with Vendors as they might take you out, you never know.
  • Hotel
    • Ask a Vendor to pay for just the room or ask them to gift hotel points to you.
    • Room Share with someone at the event << Think about it, you won’t be in the room that often and chances are from 7AM till 10PM you’ll be out of your room.
    • Use travel sites to cut down the cost.
      • Secret Hotels: Best Western Carriage Inn and The Mosser. Good if you’re on a budget but chances are they are full this year (2014).
    • Use your hotel or other travel points to book the hotel for free.
    • Get a low cost hotel away from the event, but watch your travel costs.
  • Travel
    • Airfare
      • Ask a vendor to pay for just the airfare, or maybe they have points they can gift you.
      • Use your own travel points to pay for this.
    • Rideshare to the event
      • See if one of your connections is driving to the event, and offer to split fuel costs.
      • You drive someone to the event, and they pick up the hotel, or vice versa.
    • Local Travel
      • Use the following –
        • VMworld Shuttle
        • Bus
        • Uber
        • BART
      • Once again hit up those vendors, they might have a way to get you around for free

Finally here is a breakdown of how I got to so many events and how/who paid for it….

Year | Pass | Travel | Food | Hotel
2008 VMworld | Vendor Sponsor – Full Pass | Employer Paid | Vendor / Event | Employer Paid
2009 VMworld | VMUG Sponsored – Full Pass | Vendor paid for Airfare with Miles | Vendor / Event | Employer Paid
2010 VMworld | VMUG Sponsored – Full Pass | Vendor paid for Airfare with Miles | Vendor / Event | Vendor Sponsored
2011 VMworld | Vendor Sponsor – Expo Pass but I got an upgrade to Full by asking others | I drove two others and I paid for the fuel | Vendor / Event | Travel Companion paid for room
2012 VMworld | Employee Labs | Employer Paid | Employer Paid | Employer Paid
2013 VMworld | Employee TAM | Employer Paid | Employer Paid | Employer Paid
2014 VMworld | Employee TAM | Employer Paid | Employer Paid | Employer Paid

Summing it up…

My take is this, if you REALLY want to go you’ll get there but sometimes it takes effort to do so and if you do it right it might not cost you a thing. Don’t let anything stop you and find your way there.

Finally, after you’ve been to the event don’t forget about the folks who got you there and say ‘Thank you’. Then over the next year continue to build this relationship, as you never know if you’ll need help again, or you want to help someone else get there.

vSAN 1.0 Released – Home lab update, here I come!


In case you missed the vSAN announcement and demo on www.vmware.com/now, here is a quick review…

  • General Availability of Virtual SAN 1.0 the week of March 10th
  • vSphere 5.5 Update 1 will support VSAN GA
  • Support for 32 hosts in a Virtual SAN cluster
  • Support for 3200 VMs in a Virtual SAN cluster
    • Note, due to HA restrictions only 2048 VMs can be HA protected
  • Full support for VMware Horizon / View
  • Elastic and Linear Scalability for both capacity and performance
  • VSAN is not a virtual storage appliance (VSA). Performance is much better than any VSA!
  • 2 Million IOPS validated in a 32 host Virtual SAN cluster
  • ~ 4.5PB in a 32 host cluster
  • 13 different VSAN Ready Node configurations between Cisco, IBM, Fujitsu, and Dell available at GA, with more coming soon

Elaboration and analysis: http://www.theregister.co.uk/2014/03/06/vsan_emerges_at_a_whopping_32_nodes_and_two_meeelion_iops/

VSAN Hands-on Labs (already available): https://blogs.vmware.com/hol/2014/03/click-go-take-vsan-hands-labs.html

Cormac as always does a great review as well — http://cormachogan.com/2014/03/06/virtual-san-vsan-announcement-review/

 

vSAN will be the next direction for my home lab, as I plan to move away from what is, in my opinion, a buggy FreeNAS product.

High speed networking is required for the replication network and my back plane will be something like this — http://www.bussink.ch/?p=1183

I’ll post up more as it progresses.

Enjoy!

vCenter Server datastores for heartbeats


I recently did some exploring in my home lab around datastore heartbeats and came up with the following notes on how to determine which ones are active, how to change the defaults, and why vCenter Server might not choose a datastore.

http://pubs.vmware.com/vsphere-50/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-50-availability-guide.pdf

Page 16

vCenter Server selects a preferred set of datastores for heartbeating. This selection is made to maximize the number of hosts that have access to a heartbeating datastore and minimize the likelihood that the datastores are backed by the same storage array or NFS server. To replace a selected datastore, use the Cluster Settings dialog box of the vSphere Client to specify the heartbeating datastores. The Datastore Heartbeating tab lets you specify alternative datastores. Only datastores mounted by at least two hosts are available. You can also see which datastores vSphere HA has selected for use by viewing the Heartbeat Datastores tab of the HA Cluster Status dialog box.


Only use these settings if you want to override the default vCenter Server choice.
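A related override worth knowing about, for reference: the vSphere HA advanced option das.heartbeatDsPerHost raises the number of heartbeat datastores vCenter Server selects per host from the default of 2 up to a maximum of 5. It is set under the cluster's vSphere HA > Advanced Options, for example:

das.heartbeatDsPerHost = 4 << vCenter Server will then try to pick four heartbeat datastores per host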

Here is an article around why it might not choose a Datastore…

http://pubs.vmware.com/vsphere-50/index.jsp#com.vmware.vsphere.troubleshooting.doc_50/GUID-333C3315-A862-470E-8DA9-6FE45C8C8E38.html?resultof=%2522%2568%2565%2561%2572%2574%2562%2565%2561%2574%2569%256e%2567%2522%2520%2522%2568%2565%2561%2572%2574%2562%2565%2561%2574%2522%2520

User-Preferred Datastore is Not Chosen

vCenter Server might not choose a datastore that you specify as a preference for vSphere HA storage heartbeating.

Problem

You can specify the datastores preferred for storage heartbeating, and based on this preference, vCenter Server determines the final set of datastores to use. However, vCenter Server might not choose the datastores that you specify.

Cause

This problem can occur in the following cases:

The specified number of datastores is more than is required. vCenter Server chooses the optimal number of required datastores out of the stated user preference and ignores the rest.

A specified datastore is not optimal for host accessibility and storage backing redundancy. More specifically, the datastore might not be chosen if it is accessible to only a small set of hosts in the cluster. A datastore also might not be chosen if it is on the same LUN or the same NFS server as datastores that vCenter Server has already chosen.

A specified datastore is inaccessible because of storage failures, for example, storage array all paths down or permanent device loss.

If the cluster contains a network partition, or if a host is unreachable or isolated, the host continues to use the existing heartbeat datastores even if the user preferences change.

Solution

Verify that all the hosts in the cluster are reachable and have the vSphere HA agent running.

Also, ensure that the specified datastores are accessible to most, if not all, hosts in the cluster and that the datastores are on different LUNs or NFS servers.

Home Lab – Adding freeNAS 8.3 iSCSI LUNS to ESXi 5.1


About half a year ago I set up my freeNAS iSCSI SAN, created 2 x 500GB iSCSI LUNs, and attached them to ESXi 5.1. These were ample for quite a while; however, I now have the need to add additional LUNs… My first thought was – “Okay, okay, where are my notes on adding LUNs…” They are non-existent… Eureka! It’s time for a new blog post… So here are my new notes around adding iSCSI LUNs with freeNAS to my ESXi 5.1 home lab – as always, read and use at your own risk. :)

  1. Start in the FreeNAS admin webpage for your device. Choose Storage > Expand Volumes > expand the volume you want to work with > choose Create ZFS Volume and fill out the Create Volume pop-up.

When done, click on Add and ensure it shows up under the Storage tab


  2. On the left-hand pane click on Services > iSCSI > Device Extents > View Device Extents. Type in your extent name, choose the disk device that you just created in Step 1, and choose OK

     

  3. Click on Associated Targets > Add Extent to Target, choose your target, and select the new extent

     

  4. To add the LUN to ESXi, do the following… Log into the Web Client for vCenter Server, then navigate to a host > Manage > Storage > Storage Devices > Rescan Host

    If done correctly, your new LUN should show up below. TIP – ID the LUN by its location number; in this case it’s 4

  5. Ensure you’re on the host in the left pane > Related Objects > Datastores > Add Datastore

     

  6. Type in the Name > VMFS Type > choose the right LUN (4) > VMFS Version (5) > Partition Layout (All or Partial) > Review > Finish

     

  7. Set up multipathing – select a host > Manage > Storage > Storage Devices > select the LUN > slide down the Device Details property box and choose Edit Multipathing

     

     

  8. Choose Round Robin and click OK

     

  9. Validate all datastores still have Round Robin enabled. There are two ways to do this (an esxcli alternative is sketched after the summary).
    1. Click on the LUN > Paths. Status should read Active (I/O) for both paths
    2. Click on the LUN > Properties > Edit Multipathing – the Path Selection Policy should state Round Robin (see the pic in Step 8)

     

     

    Summary – These steps worked like a charm for me, then again my environment was already set up; hopefully these steps are helpful to you.
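As referenced in step 9, the Round Robin policy can also be checked or set from the ESXi Shell instead of the Web Client – a hedged sketch, with the device ID as a placeholder:

~ # esxcli storage nmp device list --device=naa.xxxxxxxxxxxxxxxx << shows the current Path Selection Policy for the LUN
~ # esxcli storage nmp device set --device=naa.xxxxxxxxxxxxxxxx --psp=VMW_PSP_RR << switches that LUN to Round Robin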