Storage

VMware Workstation 17 Nested vSAN ESA Overview


In this high-level video I give an overview of my #VMware #workstation setup running 3 x nested ESXi 8 hosts, vSAN ESA, VCSA, and a Windows 2022 AD. Additionally, I show some early performance results using HCIBench.

For more information on my VMware Workstation Generation 8 build, check out my latest BOM here

How to upgrade a Dell T7820 to a U.2 Backplane


In this video I show how I upgraded my Dell T7820 SATA backplane to a U.2 backplane. I’m doing this upgrade to enable support for 2 x #intel #Optane drives. I’ll be using these #Dell #T7820 Workstations for my Next Generation #homelab where I’ll need 4 x Intel Optane drives to support #VMware #vsan ESA.

Part installed in this video: (XN8TT) Dell Precision T7820/T5820 U.2 NVMe Solid State Drive Backplane Kit, found used on eBay.

For more information on my Next Generation 8 Home Lab based on the Dell T7820, check out my blog series at https://vmexplorer.com/blog-series/

First Look GEN8 ESXi/vSAN ESA 8 Home Lab (Part 1)


I’m kicking off my next generation home lab with this first look into my choice for an ESXi/vSAN 8 host. There will be more videos to come as this series evolves!

10Gbe NAS Home Lab: Part 8 Interconnecting MikroTik Switches


It’s been a long wait for Part 8 but I was able to release it today! If you are interested in how to network performance test your storage environment, this session might help. The purpose of this session is to show how to interconnect two MikroTik switches and ensure their performance is optimal when compared to a single switch. The two NAS devices in this session have different physical capabilities and by no means is this a comparison of their performance. The results are merely data points. Users should work with their vendor of choice to ensure best performance and optimization.

Home Lab Generation 7: Upgrading and Replacing a vSAN 7 Cache Disk


In this video I go over some of the rationale and the steps I took to replace the 2 x 200GB SAS SSD cache disks in my vSAN 7 cluster with a 512GB NVMe flash device.
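The swap itself is done in the vSphere Client in the video, but as a rough CLI sketch of the same disk-group change (the device IDs are placeholders, and the disk group’s data should be evacuated first, e.g. by entering maintenance mode with full data migration):

  • esxcli vsan storage list  << identify the current cache device (the one reporting “Is Capacity Tier: false”)
  • esxcli vsan storage remove -s [old cache device ID]  << removing the cache device removes the whole disk group
  • esxcli vsan storage add -s [new NVMe device ID] -d [capacity device ID]  << rebuild the disk group on the new 512GB NVMe cache device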


*Products in this video*
Sabrent 512 Rocket – https://www.sabrent.com/product/SB-ROCKET-512/512gb-rocket-nvme-pcie-m-2-2280-internal-ssd-high-performance-solid-state-drive/#description

Dual M.2 PCIe Adapter Card for NVMe/SATA – https://www.amazon.com/gp/product/B08MZGN1C5

Quick NAS Topics: Serial USB Server with the LOCKERSTOR 10


In this Quick NAS Topic video I go over how to install the VirtualHere USB Server on the LOCKERSTOR 10 and its client on my Windows 10 PC. This enables the client to establish a link to a USB null modem cable which is connected directly to the NAS. Once established, I’m able to use PuTTY to open a serial connection over that link.
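As a side note (not shown in the video), once the VirtualHere client has mapped the cable to a local COM port, the same serial session can also be opened from the command line with PuTTY’s plink; the COM port and line settings below are placeholders for whatever your device actually enumerates as:

  • plink -serial COM5 -sercfg 115200,8,n,1,N  << COM5 at 115200 baud, 8 data bits, no parity, 1 stop bit, no flow control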

** Products in this Video **

10Gbe NAS Home Lab Part 7: Network testing with iperf3 on containers


In Part 7 I go over how I used iperf3 to test between my different NAS devices and Windows PCs. Each NAS device is running Docker and has an Ubuntu container with iperf3 installed. If you want more information on how I set up the container, check out my other post here.
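As a rough sketch of what a run between two of the containers can look like (the parallel-stream and duration flags here are just examples for pushing a 10Gb link, not necessarily the exact values used in the video):

  • iperf3 -s  << on the NAS container acting as the server
  • iperf3 -c [Server IP] -P 4 -t 30  << on the client container: 4 parallel streams for 30 seconds
  • iperf3 -c [Server IP] -P 4 -t 30 -R  << same test with -R to reverse the direction of traffic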


Quick NAS Topics: Create your own iperf3 Docker Container


In this Quick NAS Topic video, and in the steps further below, I use Docker to create an Ubuntu container with Linux networking tools and iperf3.

This video is a supplement for the 10Gbe Home NAS Lab Part 7. In Part 7 I show how to use these containers to network performance test the 3 NAS devices I have.  

Notes:

Docker Ubuntu/iperf3 Basic Steps: replace the items in between [ ] with your own values; the brackets themselves should be removed.

  • On the NAS:
    1. Ensure the devices can access the internet; otherwise (not covered in this blog) you’ll need to manually export and import the images.
    2. Ensure Docker CE and, if needed, Shell In A Box and Portainer are installed and the basic configuration is done. The Synology didn’t need Shell In A Box or Portainer.
    3. Test Docker Install
      • docker -v << Shows the version
      • docker images << Show the images that are available
      • docker ps  << Shows the running containers
    4. Elevate local privileges to run docker commands
      • It may be necessary to use ‘sudo’ in front of docker commands to get them to execute, followed by the admin/root password.  Example:  sudo docker ps
    5. Download and run Ubuntu
      • docker pull ubuntu   << Image is located here https://hub.docker.com/_/ubuntu
      • docker run -it ubuntu bash  << Creates an instance of this image for us to modify and opens up the terminal
    6. Update the running Ubuntu container
      • apt-get -y update
      • apt-get install -y iproute2
      • apt-get install -y net-tools
      • apt-get install -y iputils-ping
      • apt-get install -y iperf3
      • Test with ping and iperf3 -v
      • Do not exit the container yet
    7. Commit and push the new image
      • docker ps -l  << Check for the latest running container, and note the Container ID of the container that was just updated with these steps
      • docker commit  [Container ID]  [repository name]/[insert-container-name] 
      • docker images  << will validate that the image is now there
      • docker push [repository name]/[Container you want to push] 
  • Testing Steps (an example run is shown after these steps)
    • Check basic ping between all devices
    • Put one device in server mode: iperf3 -s
    • On the other device, start the test: iperf3 -c [Target IP]
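Example run: a minimal sketch assuming the committed image was pushed to a registry both NAS devices can reach; the [repository name]/[insert-container-name] placeholders are the same ones used above, and --net=host is my assumption so the container answers on the NAS’s own IP instead of a published port (alternatively, publish the iperf3 port with -p 5201:5201).

  • sudo docker pull [repository name]/[insert-container-name]  << on each NAS that will take part in the test
  • sudo docker run -it --rm --net=host [repository name]/[insert-container-name] bash  << start a throwaway container on the host network
  • iperf3 -s  << inside the container on the NAS acting as the server
  • iperf3 -c [Target IP]  << inside the container on the other NAS, where [Target IP] is the server NAS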

Quick NAS Topics: Changing Storage Pool from RAID 1 to RAID 5 with the Synology 1621+


In this not-so-Quick NAS Topic I cover how to expand a RAID 1 volume and migrate it to a RAID 5 storage pool on the Synology 1621+. Along the way we find a disk with some bad sectors, run an extended test, and then finalize the migration.
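The whole process is driven through DSM’s Storage Manager in the video, but as a minimal command-line sketch of the extended disk test (assuming smartctl is available over SSH on your DSM build; /dev/sata1 is a placeholder for the suspect drive, and older DSM versions use /dev/sdX naming instead):

  • sudo smartctl -t long /dev/sata1  << start an extended (long) SMART self-test on the suspect disk
  • sudo smartctl -a /dev/sata1  << review the results once the test completes, including reallocated and pending sector counts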

** Products / Links Seen in this Video **

Synology DiskStation DS1621+ — https://www.synology.com/en-us/products/DS1621+

Home Lab Generation 7: Part 1 – Change Rationale for software and hardware changes


Well, it’s that time of year again: time to deploy new changes, upgrades, and some new hardware. I’ll be updating my ESXi hosts and vCenter Server from 7U2d to the latest vSphere 7 Update 3a. Additionally, I’ll be swapping out the IBM 5210 JBOD for a Dell HBA330, and lastly I’ll change my boot device to a more reliable, persistent disk. I have 3 x ESXi hosts with vSAN, vDS switches, and NSX-T. If you want to understand my environment a bit better, check out this page on my blog. In this two-part blog I’ll go through the steps I took to update my home lab and some of the rationale behind it.

There are two main parts to the blog:

  • Part 1 – Change Rationale for software and hardware changes – In this part I’ll explain some of my thoughts around why I’m making these software and hardware changes.
  • Part 2 – Installation and Upgrade Steps – These are the high-level steps I took to change and upgrade my home lab.

Part 1 – Change Rationale for software and hardware changes:

There are three key changes that I plan to make to my environment:

  • One – Update to vSphere 7U3a
    • vSphere 7U3 brought many new changes to vSphere, including many needed feature updates to vCenter Server and ESXi. Additionally, there have been several important bug fixes and corrections that vSphere 7U3 and 7U3a address. For more information on the updates in vSphere 7U3, please see “vSphere 7 Update 3 – What’s New” by Bob Plankers. For even more detail, check out the release notes.
    • Part of my rationale for upgrading is to prepare to talk with my customers about the benefits of this update. I always test out the latest updates on Workstation first, then migrate those learnings into my home lab.
  • Two – Change out the IBM 5210 JBOD
    • The IBM 5210 JBOD is a carry-over component from my vSphere 6.x vSAN environment. It worked well with vSphere 6.x and 7U1. However, starting with 7U2 it began to exhibit stuck I/O issues and the occasional PSOD. This card was only certified with vSphere/vSAN 6.x, and at some point the cache module became a requirement. My choices at this point were to add a cache module to this controller (~$50 each) and hope it behaves better, or make a change. I decided to change to the Dell HBA330 (~$70 each). The HBA330 is a JBOD controller that Dell essentially worked with VMware to create for vSAN. It is on the vSphere/vSAN 7U3 HCL and should have a long life there too. Additionally, the HBA330 edge connectors (Mini SAS SFF-8643) line up with my existing SAS break-out cables. When I compared the benefits of the Dell HBA330 to upgrading the cache module on the IBM 5210, the HBA330 was the clear choice. The trick is finding an HBA330 that is cost effective and comes with a full-sized slot cover; it’s a bit tricky, but you can find them on eBay if you look a bit harder.

  • Three – Change my boot disk
    • Last September (2021), VMware announced that boot from USB is going to change and customers were advised to plan ahead for these upcoming changes. My current hosts are using cheap SanDisk 64GB USB memory sticks. It’s something I would never recommend for a production environment, but for a home lab they worked okay. I originally chose them during my Home Lab Gen 5 updates as I needed to do testing with USB-booted hosts. Now that VMware has deprecated support for USB/SD devices, it’s time to make a change. Point of clarity: the word deprecated can mean different things to different people. However, in the software industry deprecated means “to discourage the use of (something, such as a software product) in favor of a newer or better alternative.” vSphere 7 treats USB/SD-booted hosts as deprecated: they are still supported, and customers are highly advised to plan ahead. As of this writing, the next release (legacy is a fancy word for vSphere.NEXT) will require USB/SD-booted hosts to have a persistent disk, and in the long term USB/SD-booted hosts will no longer be supported at all. Customers should seek guidance from VMware when making these changes.

    • The requirement for being in a “Long Term Supported” configuration is that the ESXi host boots from an HDD, SSD, or PCIe device. In my case, I didn’t want to add more disks to my system and chose to go with a PCIe SSD/NVMe adapter card. I chose this PCIe device, which supports an M.2 (SATA SSD) and an NVMe device in one slot, and I decided to go with a Kingston A400 240GB internal M.2 SSD as my boot disk. The A400 at 240GB should be more than enough to boot the ESXi hosts and keep up with their disk demands going forward. A quick way to confirm what a host is booting from is shown in the sketch after this list.
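As a quick sanity check before and after the boot-device change (a sketch of what I would expect a 7.x host to show, not steps taken from this post), the ESXi shell can confirm what the host is actually booting from:

  • esxcli storage filesystem list  << on a persistent install you should see BOOTBANK1, BOOTBANK2, and an OSDATA volume on the new device
  • ls -l /bootbank  << the symlink target shows which boot volume is in use
  • esxcli system boot device get  << reports the boot filesystem UUID (and PXE boot NICs, if any)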


Final thoughts and an important warning. Making changes that affect your current environment is never easy, but it is sometimes necessary, and a little planning can make the journey a bit easier. I’ll be testing these changes over the next few months and will post up if issues occur. A word of warning, though: adding new devices to an environment can directly impact your ability to migrate or upgrade your hosts. Due to the hardware decisions I have made, a direct ESXi upgrade is not possible; I’ll have to remove my current hosts from vCenter Server (and the other software) and do a fresh installation. Those details and more will be in Part 2 – Installation and Upgrade Steps.

Opportunity for vendor improvement – this is where backup vendors like Synology, Asustor, Veeam, Veritas, NAKIVO, and Acronis could really shine. If they could back up and restore an ESXi host to dissimilar hardware or boot disks, it would be a huge improvement for VI admins, especially those with tens of thousands of hosts that need to move from USB to persistent disks. This is not a new ask; VI admins have been requesting this option for years. Maybe now these companies will listen, as many users and their hosts are going to be affected by these upcoming requirements.