Home Lab

Home Lab Generation 7: Part 1 – Change Rationale for software and hardware changes


Well, it’s that time of year again – time to deploy new changes, upgrades, and add some new hardware.  I’ll be updating my ESXi hosts and vCenter Server from 7U2d to the latest vSphere 7 Update 3a. Additionally, I’ll be swapping out the IBM 5210 JBOD for a Dell HBA330, and lastly I’ll change my boot device to a more reliable, persistent disk.  I have 3 x ESXi hosts with vSAN, vDS switches, and NSX-T.  If you want to understand my environment a bit better, check out this page on my blog.  In this two-part blog I’ll go through the steps I took to update my home lab and some of the rationale behind it.

There are two main parts to the blog:

  • Part 1 – Change Rationale for software and hardware changes – In this part I’ll explain some of my thoughts around why I’m making these software and hardware changes. 
  • Part 2 – Installation and Upgrade Steps – These are the high-level steps I took to change and upgrade my home lab

Part 1 – Change Rationale for software and hardware changes:

There are three key changes that I plan to make to my environment:

  • One – Update to vSphere 7U3a
    • vSphere 7U3 has brought many new changes to vSphere, including many needed feature updates to vCenter Server and ESXi.  Additionally, there have been several important bug fixes that vSphere 7U3 and 7U3a address. For more information on the updates in vSphere 7U3, please see “vSphere 7 Update 3 – What’s New” by Bob Plankers.  For even more information, check out the release notes.   
    • Part of my rationale in upgrading is to prepare to talk with my customers about the benefits of this update.  I always test out the latest updates on Workstation first, then migrate those learnings into my home lab.  
  • Two – Change out the IBM 5210 JBOD
    • The IBM 5210 JBOD is a carry over component from my vSphere 6.x vSAN environment. It worked well with vSphere 6.x and 7U1.  However, starting in 7U2 it started to exhibit stuck IO issues and the occasional PSOD.  This card was only certified with vSphere/vSAN 6.x and at some point the cache module became a requirement.  My choices at this point are to update this controller with a cache module (~$50 each) and hope it works better or make a change.  In this case I decided to make a change to the Dell HBA330 (~$70 each).  The HBA330 is a JBOD controller that Dell pretty much worked with VMware to create for vSAN.  It is on the vSphere/vSAN 7U3 HCL and should have a long life there too.  Additionally, the HBA330 edge connectors (Mini SAS SFF-8643) line up with the my existing SAS break-out cables. When I compare the benefits of the Dell HBA330 to upgrading the cache module for the IBM 5210 the HBA330 was the clear choice.  The trick is finding a HBA330 that is cost effective and comes with a full sized slot cover.  Its a bit tricky but you can find them on eBay, just have to look a bit harder.

  • Three – Change my boot disk
    • In September 2021, VMware announced that boot from USB is going to change, and customers were advised to plan ahead for these upcoming changes.   My current hosts are using cheap SanDisk 64GB USB memory sticks.  It’s something I would never recommend for a production environment, but for a home lab they worked okay.  I originally chose them during my Home Lab Gen 5 updates, as I needed to do testing with USB-booted hosts.  Now that VMware has deprecated support for USB/SD devices, it’s time to make a change. Point of clarity: the word deprecated can mean different things to different people.  However, in the software industry deprecated means “discourage the use of (something, such as a software product) in favor of a newer or better alternative”.  vSphere 7 is in a deprecated mode when it comes to USB/SD-booted hosts: they are still supported, and customers are highly advised to plan ahead. As of this writing, legacy (legacy is a fancy word for vSphere.NEXT) USB-booted hosts will require a persistent disk, and eventually (Long Term Supported) USB/SD-booted hosts will no longer be supported.  Customers should seek guidance from VMware when making these changes.

    • The requirement for being in a “Long Term Supported” mode is to have the ESXi host boot from an HDD, SSD, or PCIe device.  In my case, I didn’t want to add more disks to my system and chose to go with a PCIe SSD/NVMe card. I chose a PCIe device that supports M.2 (SATA SSD) and NVMe devices in one slot, and I decided to go with a Kingston A400 240GB M.2 SATA SSD as my boot disk. The A400 with 240GB should be more than enough to boot the ESXi hosts and keep up with their disk demands going forward.   

 

Final thoughts and an important warning.  Making changes that affect your current environment is never easy, but it is sometimes necessary.  With a little planning the journey can be a bit easier.  I’ll be testing these changes over the next few months and will post up if issues occur.  However, a bit of warning – adding new devices to an environment can directly impact your ability to migrate or upgrade your hosts.  Due to the hardware decisions I have made, a direct ESXi upgrade is not possible; I’ll have to back my current hosts out of vCenter Server (and other software) and do a new installation.  However, those details and more will be in Part 2 – Installation and Upgrade Steps.

Opportunity for vendor improvement – this is where backup vendors like Synology, ASUSTOR, Veeam, Veritas, NAKIVO, and Acronis could really shine.  If they could back up and restore an ESXi host to dissimilar hardware or boot disks, it would be a huge improvement for VI Admins, especially those with tens of thousands of hosts that need to change from USB to persistent disks.  This is not a new ask – VI admins have been asking for this option for years – and now maybe these companies will listen, as many users and their hosts are going to be affected by these upcoming requirements.

kubeAcademy Building Applications for Kubernetes: Docker Desktop Installation for Windows 10


While taking the kubeAcademy course ‘Building Applications for Kubernetes’, the first lesson was about setting up your workstation to complete the course. Though the first lesson was good, the instructions were based on macOS, and how to install on Windows was only lightly touched on. I soon found out why: the Windows 10 install of Docker Desktop and tools isn’t a simple process. In this video I go through the choices I made to get my workstation up and running. Moving past lesson one it became obvious that most of these courses are based on CLI commands common on macOS (example cat, v, and rm).  If you choose the Windows install, be aware you’ll need to translate commands like these and more.  I highly recommend the macOS install if you want to really align with these courses.
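For what it’s worth, many of the common Unix commands translate directly in PowerShell, which ships aliases for them – a quick sketch (the file name is just a placeholder):

cat .\notes.txt      << built-in alias for Get-Content
rm .\notes.txt       << built-in alias for Remove-Item
ls .\                << built-in alias for Get-ChildItem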

Post Video Corrections and Observations:

  1. In the video I showed how to remove the Ubuntu image via Containers / Apps.  To fully remove the Ubuntu image, do so in Docker Desktop > Images > 3 dots > Delete; wait about a minute or two and it will disappear.  The equivalent CLI commands are sketched below.
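If you prefer the command line, the same cleanup can be done with the Docker CLI – a minimal sketch, assuming the image is tagged ubuntu:latest (your tag may differ):

docker images                 << list local images and confirm the tag
docker container prune        << remove any stopped containers still referencing the image
docker rmi ubuntu:latest      << remove the image itself (add -f to force if needed)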

Some Links from this video:

Home Lab Generation 7: Updating from Gen 5 to Gen 7


Not too long ago I updated my Gen 4 Home Lab to Gen 5, and I posted many blogs and videos around it.  The Gen 5 lab ran well for vSphere 6.7 deployments, but moving into vSphere 7.0 I had a few issues adapting it.  Mostly these issues were with the design of the Jingsha motherboard.  I noted most of these challenges in the Gen 5 wrap-up video. Additionally, I had some new networking requirements, mainly around adding multiple Intel NIC ports, and Home Lab Gen 5 was not going to adapt well or would be very costly to adapt.  These combined requirements forced my hand to migrate to what I’m calling Home Lab Gen 7.  Wait a minute, what happened to Home Lab Gen 6? I decided to align my Home Lab generation numbers to the vSphere release number, so I skipped Gen 6.

First – Review my design goals:

  • Be able to run a vSphere 7.x and vSAN environment
  • Reuse as much as possible from the Gen 5 home lab; this will keep costs down
  • Choose products that bring value to the goals and are cost effective; if they are on the VMware HCL that’s a plus, but not necessary for a home lab
  • Keep networking (vSAN / FT) on the 10GbE MikroTik switch
  • Support 4 x Intel GbE networks
  • Ensure there will be enough CPU cores and RAM to support multiple VMware products (ESXi, VCSA, vSAN, vRO, vRA, NSX, Log Insight)
  • Be able to fit the environment into 3 ESXi hosts
  • The environment should run well, but doesn’t have to be a production-level environment

Second – Evaluate Software, Hardware, and VM requirements:

My calculated numbers from my Gen 5 build will stay rather static for Gen 7.  The only update for Gen 7 is to use the updated requirements table, which can be found here >>  ‘HOME LABS: A DEFINITIVE GUIDE’

Third – Home Lab Design Considerations

This too will be very similar to Gen 5, but I did review this table and made any last changes to my design.

Fourth – Choosing Hardware

Based on my estimates above, I’m going to need a very flexible motherboard supporting lots of RAM, good network connectivity, and as much compatibility as possible with my Gen 5 hardware.  I’ve reused many parts from Gen 5, but the main changes came with the Supermicro motherboard and the addition of the 2TB SAS HDDs listed below.

Note: I’ve listed the newer items in italics; all other parts I’ve carried over from Gen 5.

Overview:

  • My Gen 7 Home Lab is based on vSphere 7 (VCSA, ESXi, and vSAN) and it contains 3 x ESXi hosts, 1 x Windows 10 workstation, 4 x Cisco switches, 1 x MikroTik 10GbE switch, and 2 x APC UPSs

ESXi Hosts:

  • Case:
  • Motherboard:
  • CPU:
    • CPU: Xeon E5-2640 v2 8 Cores / 16 HT (eBay $30 each)
    • CPU Cooler: DEEPCOOL GAMMAXX 400 (Amazon $19)
    • CPU Cooler Bracket: Rectangle Socket 2011 CPU Cooler Mounting Bracket (eBay $16)
  • RAM:
    • 128GB DDR3 ECC RAM (eBay $170)
  • Disks:
    • 64GB USB Thumb Drive (Boot)
    • 2 x 200GB SAS SSD (vSAN Cache)
    • 2 x 2TB SAS HDD (vSAN Capacity – See this post)
    • 1 x 2TB SATA (Extra Space)
  • SAS Controller:
    • 1 x IBM 5210 JBOD (eBay)
    • CableCreation Internal Mini SAS SFF-8643 to (4) 29-pin SFF-8482 (Amazon $18)
  • Network:
    • Motherboard-integrated i350 1GbE 4-port
    • 1 x Mellanox ConnectX-3 Dual Port (HP INFINIBAND 4X DDR PCI-E HCA CARD 452372-001)
  • Power Supply:
    • Antec Earthwatts 500-600 Watt (Adapters needed to support case and motherboard connections)
      • Adapter: Dual 8(4+4) Pin Male for Motherboard Power Adapter Cable (Amazon $11)
      • Adapter: LP4 Molex Male to ATX 4 pin Male Auxiliary (Amazon $11)
      • Power Supply Extension Cable: StarTech.com 8in 24 Pin ATX 2.01 Power Extension Cable (Amazon $9)

Network:

  • Core VM Switches:
    • 2 x Cisco 3560CG (WS-C3560CG-8TC-S, 8 Gigabit ports, 2 uplinks)
    • 2 x Cisco 2960 (WS-C2960G-8TC-L)
  • 10GbE Network:
    • 1 x MikroTik 10GbE CRS309 (used for the vSAN and replication network)
    • 2 ea. x HP 684517-001 Twinax SFP+ 10GbE 0.5m DAC cable (eBay)
    • 2 ea. x MELLANOX QSFP/SFP ADAPTER 655874-B21 MAM1Q00A-QSA (eBay)

Battery Backup UPS:

  • 2 x APC NS1250

Windows 10 Workstation:

Thanks for reading, please do reach out if you have any questions.

If you like my ‘no-nonsense’ videos and blogs that get straight to the point… then post a comment or let me know… Else, I’ll start posting really boring content!

Home Lab Generation 7: Upgrading vSAN 7 Hybrid capacity step by step


My Gen 7 Home Lab is ever expanding, and the space demands on the vSAN cluster were becoming more apparent.  This past weekend I updated my vSAN 7 cluster capacity disks from 6 x 600GB SAS HDDs to 6 x 2TB SAS HDDs, and it went very smoothly.   Below are my notes and the order I followed for this upgrade.  Additionally, I created a video blog (link further below) around these steps.  Lastly, I can’t stress this enough – this is my home lab and not a production environment. The steps in this blog/video are just how I went about it and are not intended for any other purpose.

Current Cluster:

  • 3 x ESXi 7.0 Hosts (Supermicro X9DRD-7LN4F-JBOD, Dual E5 Xeon, 128GB RAM, 64GB USB Boot)
  • vSAN Storage is:
    • 600GB SAS Capacity HDD
    • 200GB SAS Cache SSD
    • 2 Disk Groups per host (1 x 200GB SSD + 1 x 600GB HDD)
    • IBM 5210 HBA Disk Controller
    • vSAN Datastore Capacity: ~3.5TB
    • Amount Allocated: ~3.7TB
    • Amount in use: ~1.3TB

Proposed Change:

  • Keep the 6 x 200GB SAS Cache SSD drives
  • Remove 6 x 600GB HDD Capacity Disk from hosts
  • Replace with 6 x 2TB HDD Capacity Disks
  • Upgraded vSAN Datastore: ~11TB

Upgrade Notes:

  1. I chose to back up (via clone to offsite storage) and power off most of my VMs
  2. I clicked on the Cluster > Configure > vSAN > Disk Management
  3. I selected the host I wanted to work with and then the disk group I wanted to work with
  4. I located one of the capacity disks (600GB) and clicked on it
  5. I noted its NAA ID (needed later)
  6. I then clicked on “Pre-check Data Migration” and chose ‘Full data migration’
  7. The pre-check completed successfully
  8. Back at the Disk Management screen I clicked on the HDD I was working with
  9. Next I clicked on the ellipsis (…) and chose ‘Remove’
  10. A new window appeared and for vSAN Data Migration I chose ‘Full Data Migration’, then clicked Remove
  11. I monitored the progress in ‘Recent Tasks’
  12. Depending on how much data needed to be migrated, and whether other objects were being resynced, it could take a bit of time per drive.  For me this was ~30-90 mins per drive
  13. Once the data migration was complete, I went to my host and found the WWN # of the physical disk that matched the NAA ID from step 5
  14. While the system was still running, I removed the disk from the chassis and replaced it with the new 2TB HDD
  15. Back at vCenter Server I clicked on the host, then Configure > Storage > Storage Devices
  16. I made sure the new 2TB drive was present
  17. I clicked on the 2TB drive, chose ‘Erase partitions’, and clicked OK
  18. I clicked on the Cluster > Configure > vSAN > Disk Management > ‘Claim Unused Disks’
  19. A new window appeared; I chose ‘Capacity’ for the 2TB HDDs, ‘Cache’ for the 200GB SSD drives, and clicked OK
  20. Recent Tasks showed the disks being added
  21. When it was done I clicked on the newly added disk group and ensured it was in a healthy state
  22. I repeated this process until all the new HDDs were added (a few host-side commands for cross-checking disk IDs are sketched below)
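For steps 5, 13, and 16 it can also help to confirm the disk identifiers from the ESXi host itself.  A minimal sketch, run over SSH on the host – these aren’t required for the GUI steps above, and the NAA ID shown is just a placeholder for the one you noted in step 5:

esxcli storage core device list | grep -i naa                 << lists the NAA IDs of all devices
esxcli storage core device list -d naa.5000c5001234abcd       << details for a single disk (placeholder ID)
esxcli vsan storage list                                      << shows which disks vSAN currently claims
vdq -q                                                        << quick per-disk state/eligibility check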

Final Outcome:

  • After upgrade the vSAN Storage is:
    • 2TB SAS Capacity HDD
    • 200GB SAS Cache SSD
    • 2 Disk Groups per host (1 x 200GB SSD + 1 x 2TB HDD)
    • IBM 5210 HBA Disk Controller
    • vSAN Datastore is ~11.7TB

Notes & other thoughts:

  • I was able to complete the upgrade in this order due to the nature of my home lab components – mainly because I’m running a SAS storage HBA that is just a JBOD controller supporting hot-pluggable drives.
  • Make sure you run the data migration pre-checks and follow any advice they give.  This came in very handy.
  • If you don’t have enough space to fully evacuate a capacity drive, you will either have to add more storage or completely remove VMs from the cluster.
  • Checking Cluster > Monitor > vSAN > Resyncing Objects gave me a good idea of when I should start my next migration.  I waited for it to complete before starting the next one. If you have a very active cluster this may be harder to achieve.
  • Checking the vSAN cluster health should also be done, especially Cluster > Monitor > Skyline Health > Data > vSAN Object Health; any issues in these areas should be looked into prior to migration (see the host-side checks sketched after this list).
  • The disk NAA ID reported in vCenter Server/vSAN usually, but not always, coincides with the WWN number on the HDD.
  • By changing my HDDs from 600GB SAS 10K to 2TB SAS 7.2K there will be a performance hit. However, my lab needed more space and 10K-15K drives were just out of my budget.
  • I can’t recommend this reference link from VMware enough: Expanding and Managing a vSAN Cluster
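If you prefer to watch resync and object health from a host shell rather than the UI, these esxcli commands should show the same picture – a sketch, assuming vSAN 6.6 or later where the ‘esxcli vsan debug’ namespace is available:

esxcli vsan debug resync summary get          << bytes/objects left to resync
esxcli vsan debug object health summary get   << object health rollup, similar to Skyline Health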

 

Video Blog:

Various Photos:

If you like my ‘no-nonsense’ videos and blogs that get straight to the point… then post a comment or let me know… Else, I’ll start posting really boring content!

Create an ESXi installation ISO with custom drivers in 9 easy steps!


One of the challenges in running a VMware-based home lab is the ability to work with old / inexpensive hardware while running the latest software. It’s a balance that is sometimes frustrating, but when it works it is very rewarding. Most recently I decided to move to 10GbE from my InfiniBand 40Gb network. Part of this transition was to create an ESXi ISO with the latest build (6.7U3) and the appropriate network card drivers. In this video blog post I’ll show 9 easy steps to create your own customized ESXi ISO and how to pinpoint I/O cards on the VMware HCL.

** Update 06/22/2022 **  If you are looking to do USB NICs with ESXi check out the new fling (USB Network Native Driver for ESXi) that helps with this.  This Fling supports the most popular USB network adapter chipsets ASIX USB 2.0 gigabit network ASIX88178a, ASIX USB 3.0 gigabit network ASIX88179, Realtek USB 3.0 gigabit network RTL8152/RTL8153 and Aquantia AQC111U. https://flings.vmware.com/usb-network-native-driver-for-esxi

NOTE – Flings are NOT supported by VMware

** Update 03/06/2020 ** Though I had good luck with the HP 593742-001 NC523SFP DUAL PORT SFP+ 10Gb card in my Gen 4 Home Lab, I found it faulty when running in my Gen 5 Home Lab.  Could be I was using a PCIe x4 slot in Gen 4, or it could be that the card runs too hot to touch.  For now this card has been removed from the VMware HCL, HP has advisories out about it, and after doing some poking around there seem to be lots of issues with it.  I’m looking for a replacement and may go with the HP NC550SFP.   However, the steps in this video aren’t only for this card; they help you better understand how to add drivers into an ISO.

Here are the written steps I took from my video blog.  If you are looking for more detail, watch the video.

Before you start – make sure you have PowerCLI installed, have downloaded these files, and have placed them in c:\tmp.

 

I started up PowerCLI and did the following commands:

1) Add the ESXi Update ZIP file to the depot:

Add-EsxSoftwareDepot C:\tmp\update-from-esxi6.7-6.7_update03.zip

2) Add the QLogic Offline Bundle ZIP file to the depot:

Add-EsxSoftwareDepot 'C:\tmp\qlcnic-esx55-6.1.191-offline_bundle-2845912.zip'

3) Make sure the files from step 1 and 2 are in the depot:

Get-EsxSoftwareDepot

4) Show the profile names from update-from-esxi6.7-6.7_update03. The default command only shows part of the name; to see the full name use ‘| select name’.

Get-EsxImageProfile | select name

5) Create a clone profile to start working with.

New-EsxImageProfile -cloneprofile ESXi-6.7.0-20190802001-standard -Name ESXi-6.7.0-20190802001-standard-QLogic -Vendor QLogic

6) Validate the QLogic driver is loaded in the local depot.  It should match the driver from step 2.  Make sure you note the name and version number columns.  We’ll need to combine these two with a space in the next step.

Get-EsxSoftwarePackage -Vendor q*

7) Add the software package to the cloned profile. Tip: For ‘SoftwarePackage:’ you should enter the name, a space, and the version number from step 6.  If you just use the short name it might not work (a non-interactive alternative is sketched below).

Add-EsxSoftwarePackage

ImageProfile: ESXi-6.7.0-20190802001-standard-QLogic
SoftwarePackage[0]: net-qlcnic 6.1.191-1OEM.600.0.0.2494585
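If you’d rather avoid the interactive prompts (and the short-name ambiguity), the package can be resolved to an object first and passed in as parameters – a sketch using the profile and package names from steps 5 and 6:

$pkg = Get-EsxSoftwarePackage -Name net-qlcnic
Add-EsxSoftwarePackage -ImageProfile ESXi-6.7.0-20190802001-standard-QLogic -SoftwarePackage $pkg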

8) Optional: Compare the profiles, to see differences, and ensure the driver file is in the profile.

Get-EsxImageProfile | select name   << Run this if you need a reminder on the profile names

Compare-EsxImageProfile -ComparisonProfile ESXi-6.7.0-20190802001-standard-QLogic -ReferenceProfile ESXi-6.7.0-20190802001-standard

9) Create the ISO

Export-EsxImageProfile -ImageProfile "ESXi-6.7.0-20190802001-standard-QLogic" -ExportToIso -FilePath c:\tmp\ESXi-6.7.0-20190802001-standard-QLogic.iso

That’s it!  If you like my ‘no-nonsense’ videos and blogs that get straight to the point… then post a comment or let me know… Else, I’ll start posting boring video blogs!

Cross vSAN Cluster support for FT

 

FIX for Netgear Orbi Router / Firewall blocks additional subnets


In April 2019 my trusty Netgear switch finally gave in.  I bought a nifty Dell PowerConnect 6224 switch and have been working with it off and on.  Around the same time, I decided to update my home network with the Orbi WiFi System (RBK50) AC3000 by Netgear.  My previous Netgear WiFi router worked quite well, but I really needed something to support multiple locations seamlessly.

The Orbi mesh has a primary device and allows satellites to be connected to it.  It creates a WiFi mesh that allows devices to go from room to room or building to building seamlessly.  I’ve had it up for a while now and it’s been working out great – that is, until I decided to ask it to route more than one subnet.   In this blog I’ll show you the steps I took to overcome this feature limitation, but like all content on my blog this is for my reference – travel, use, or follow at your own risk.

**2021-NOV Update**  Per the last Orbi update that I deployed (Router Firmware Version V2.7.3.22), the telnet option is no longer available in the debug menu.  This means the steps below will not work unless you are on an earlier router firmware version.  I looked for solutions but didn’t find any.  However, I solved this issue by adding an additional firewall doing NAT between VLAN 74 and VLAN 75.  If you find a solution, please post a comment and I’ll be glad to update this blog.

To understand the problem we need to first understand the network layout.   My Orbi router is the gateway of last resort and it supplies DHCP and DNS services. In my network I have two subnets, which are untagged VLANs known as VLAN 74 – 172.16.74.x/24 and VLAN 75 – 172.16.75.x/24.   VLAN 74 is used by my home devices and VLAN 75 is where I manage my ESXi hosts.  I have enabled RIP v2 on the Orbi and on the Dell 6224 switch.  The routing tables are populated correctly, and I can ping from any internal subnet to any host without issue, except when the Orbi is involved.

 

Issue:  Hosts on VLAN 75 are not able to get to the internet.  Hosts on VLAN 75 can resolve DNS names (example: yahoo.com) but cannot ping any host on the internet. Conversely, VLAN 74 can ping internet hosts and get to the internet.  I’d like my hosts on VLAN 75 to have all the same functionality as my hosts on VLAN 74.

Findings:  By default, the primary Orbi router is blocking any host that is not on VLAN 74 from getting to the internet.  I believe Netgear enabled this block to limit the number of devices the Orbi could NAT.  I can only guess that either the router just can’t handle the load or this was the maximum Netgear tested it to.  I found this firewall block by logging into the CLI of my Orbi and looking at the iptables settings.  There I could clearly see a firewall rule blocking hosts that were not part of VLAN 74.

Solution:  Adjust the Orbi to allow all VLAN traffic (USE AT YOUR OWN RISK)

  1. Enable Telnet access on your Primary Orbi Router.
    1. Go to http://{your orbi ip address}/debug.htm
    2. Choose ‘Enable Telnet’ (**reminder to disable this when done**)
    3. Telnet into the Orbi Router (I just used putty)
    4. Log on as root using your router’s main password
  2. I issued the command ‘iptables -t filter -L loc2net’. Using the output of this command I could see that line 5 was dropping all traffic that is not (!) VLAN 74.
  3. Let’s remove this firewall rule. The one I want to target is the 5th in the list; yours may vary.  This command will remove it: ‘iptables -t filter -D loc2net 5’
    • NOTES:
    • Router Firmware Version V2.5.1.16 (Noted: 10.2020) – It appears that more recent firmware updates have changed the targeting steps.  I noticed in Router Firmware Version V2.5.1.16 I had to add 2 to the targeted line number to remove it with the iptables command.  This may vary for the device being worked on.
    • Router Firmware Version V2.5.2.4  (Noted: Jan-2021) – It appears the targeting for these steps is now fixed in this version.
    • Again, as with all my posts, blogs, and videos – these are for my records and not for any other intended purpose. 
  4. Next, we need to clean up a post-routing issue: ‘iptables -t nat -I POSTROUTING 1 -o brwan -j MASQUERADE’
  5. A quick test and I could now ping and get to the internet from VLAN 75
  6. Disconnect from Telnet and disable it on your router.

Note:  Unfortunately, this is not a permanent fix.  Once you reboot your router the old settings come back.  The good news is, it’s only two to three lines to fix the problem.  Check out the links below for more information and a script.

Easy Copy Commands for my reference:

iptables -t filter -L loc2net

iptables -t filter -D loc2net 7  << Check this number

iptables -t nat -I POSTROUTING 1 -o brwan -j MASQUERADE
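One extra command that can save some guesswork – iptables can print rule numbers directly, which makes it easier to confirm which line to delete (a sketch; assuming the iptables build on your Orbi firmware supports the flag):

iptables -t filter -L loc2net --line-numbers   << shows each rule with its line number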

If you like my ‘no-nonsense’ blog articles that get straight to the point… then post a comment or let me know… Else, I’ll start writing boring blog content.

REF:

No web interface on a Dell PowerConnect 6224 Switch


I picked up a Dell PowerConnect 6224 switch the other day as my older Netgear switch (2007) finally died.  After connecting via console cable (9600,8,1,none) I updated the firmware image to the latest revision. I then followed the “Dell Easy Setup Wizard”, which, by the way, stated the web interface would work after the wizard completed. After completing the easy wizard I opened a browser to the switch IP address, which failed.   I then pinged the switch IP address – yep, it was replying.  Next, I rebooted the switch – still no web interface connection.

How did I fix this?

1 – While in the console, I entered config mode and issued the following command.

console(config)#ip http server

2 – Next I issued a ‘show run’ to ensure the command was present

console#show run
!Current Configuration:
!System Description “PowerConnect 6224, 3.3.18.1, VxWorks 6.5”
!System Software Version 3.3.18.1
!Cut-through mode is configured as disabled
!
configure
stack
member 1 1
exit
ip address 172.16.74.254 255.255.255.0
ip default-gateway 172.16.74.1
ip http server
username “admin” password HASHCODE level 15 encrypted
snmp-server community public rw
exit

3 – This time I connected to the switch via a browser without issue.

4 – Finally, I saved the running configuration

console#copy running-config startup-config

This operation may take a few minutes.
Management interfaces will not be available during this time.

Are you sure you want to save? (y/n) y

Configuration Saved!
console#

Summary:  These were some pretty basic commands to get the HTTP service up and running, but I’m sure I’ll run into this again and I’ll have this blog to refer to.  Next, I’m off to set up some VLANs and a few static routes.

If you like my ‘no-nonsense’ blog articles that get straight to the point… then post a comment or let me know… Else, I’ll start writing boring blog content.

Home Lab Gen IV – Part V Installing Mellanox HCAs with ESXi 6.5


The next step on my InfiniBand home lab journey was getting the InfiniBand HCAs to play nice with ESXi. To do this I needed to update the HCA firmware, which proved to be a bit of a challenge. In this blog post I go into how I solved this issue and got them working with ESXi 6.5.

My initial HCA selection was the ConnectX (aka HP INFINIBAND 4X DDR PCI-E HCA CARD 452372-001) and the Mellanox MHGA28-XTC InfiniHost III HCA; these two cards proved to be a challenge when updating their firmware. I tried all types of operating systems, different drivers, different mobos, and different MFT tool versions, but they would not update or be recognized by the OS. The only thing I didn’t try was Linux. The Mellanox forums are filled with folks trying to solve these issues with mixed success. I went with these cheaper cards and they simply do not have the product support necessary. I don’t recommend the use of these cards with ESXi and have migrated to a ConnectX-3, which you will see below.

Updating the ConnectX-3 Card:

After a little trial and error, here is how I updated the firmware on the ConnectX-3. I found the ConnectX-3 card worked very well with Windows 2012; I was able to install the latest Mellanox OFED for Windows (aka the Windows drivers for the Mellanox HCA card) and update the firmware very smoothly.

First, I confirmed the drivers via Windows Device Manager (update to the latest if needed).

Once you confirm Windows device functionality, install the Mellanox Firmware Tools for Windows (aka WinMFT).

Next, it’s time to update the HCA firmware. To do this you need to know the exact model number and sometimes the card revision. Normally this information can be found on the back of your HCA. With this in hand go to the Mellanox firmware page and locate your card then download the update.

After you download the firmware, place it in an accessible directory. Next open the CLI, navigate to the WinMFT directory, and use the ‘mst status’ command to reveal the HCA identifier, or MST device name. If this command works, it is a good sign your HCA is working properly and communicating with the OS. Next, I used the flint command to update the firmware. The syntax is: flint -d <MST Device Name> -i <Firmware Name> burn (a worked example is sketched below).
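To make the syntax concrete, here is what a session can look like – the device name and firmware file below are placeholders, so substitute the values from your own ‘mst status’ output and the exact .bin file you downloaded:

mst status                                         << reveals the MST device name, e.g. mt4099_pci_cr0
flint -d mt4099_pci_cr0 query                      << optional: shows the firmware currently on the card
flint -d mt4099_pci_cr0 -i fw-ConnectX3.bin burn   << burns the downloaded firmware (file name is a placeholder)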

Tip: If you are having trouble with your Mellanox HCA I highly recommend the Mellanox communities. The community there is generally very responsive and helpful!

Installation of ESXi 6.5 with Mellanox ConnectX-3

I would love to tell you how easy this was, but the truth is it was hard. Again, old HCAs with new ESXi don’t equal an easy or simple install, but they do equal home lab fun. Let me save you hours of work. Here is the simple solution for getting Mellanox ConnectX cards working with ESXi 6.5. In the end I was able to get ESXi 6.5 working with my ConnectX card (aka HP INFINIBAND 4X DDR PCI-E HCA CARD 452372-001) and with my ConnectX-3 CX354A.

Tip: I do not recommend the use of the ConnectX card (aka HP INFINIBAND 4X DDR PCI-E HCA CARD 452372-001) with ESXi 6.x. No matter how I tried, I could not update its firmware, and it has VERY limited or non-existent support. Save time and go with a ConnectX-3 or above.

After I installed ESXi 6.5 I ran the following commands and it worked like a champ.

Disable native driver for vRDMA

  • esxcli system module set --enabled=false -m=nrdma
  • esxcli system module set --enabled=false -m=nrdma_vmkapi_shim
  • esxcli system module set --enabled=false -m=nmlx4_rdma
  • esxcli system module set --enabled=false -m=vmkapi_v2_3_0_0_rdma_shim
  • esxcli system module set --enabled=false -m=vrdma

Uninstall default driver set

  • esxcli software vib remove -n net-mlx4-en
  • esxcli software vib remove -n net-mlx4-core
  • esxcli software vib remove -n nmlx4-rdma
  • esxcli software vib remove -n nmlx4-en
  • esxcli software vib remove -n nmlx4-core
  • esxcli software vib remove -n nmlx5-core

Install Mellanox OFED 1.8.2.5 for ESXi 6.x.

  • esxcli software vib install -d /var/log/vmware/MLNX-OFED-ESX-1.8.2.5-10EM-600.0.0.2494585.zip
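After the VIB install and a reboot, a couple of quick checks can confirm the OFED VIBs landed and the ports are visible – a sketch (the VIB and vmnic names on your host may differ):

esxcli software vib list | grep -i mlx    << the Mellanox OFED VIBs should now be listed
esxcli network nic list                   << the ConnectX ports should show up as vmnics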


After a quick reboot, I got 40Gb networking up and running. I did a few vmkpings between hosts and they pinged perfectly.

So, what’s next? Now that I have the HCAs working I need to get vSAN (if possible) working with my new high-speed network, but that, folks, is another post.

If you like my ‘no-nonsense’ blog articles that get straight to the point… then post a comment or let me know… Else, I’ll start writing boring blog content.

The 3 Amigos – NUC, LIAN LI, and Cooler Master


Today I wanted to look at the Cooler Master Elite 110 and compare it a bit to some other cases.

Let’s see how its footprint measures up to some familiar cases.  I stacked it up against the Intel NUC5i7RYH and my Lian Li PC-Q25, and surprisingly the Elite 110 is like a big cube reminiscent of older Shuttle cases. The size is nice for a small-footprint PC, but depending on your use it may be too bulky for appliance-based work. One thing I did note: the manufacturer states the case is 20.8 cm wide, but my measurements came out close to 21.2 cm.

Note: I used my Lian Li case for my FreeNAS build, it’s a great case for those wanting to build a NAS (Click here for more PICS)

Inside the Elite 110 there are your standard edge cables (USB, audio, switches, and lights). The power button is located front bottom center and doubles as the Cooler Master logo. On the right-hand side are your typical USB 3.0, audio, reset, and HDD LED.

The case allows for a maximum of 3 x 3.5″ or 4 x 2.5″ disk drives.  You can also work this into different combinations. For example – 3 x 3.5″ HDD and 1 x 2.5″ SSD could make a vSAN hybrid combination, or 3 x 2.5″ SSD for vSAN all-flash and 1 x 3.5″ for the boot disk.

The mount points for these disk drives are on the left-hand side and the top. When mounting the disks I found it better to face the SATA and power connectors toward the rear.

Top Mount – Allows for 2 x 3.5″ or 2 x 2.5″.  In the photo below I’m using 1 x 3.5″  and 1 x 2.5″

Left Side Mount – Only allows for 1 x 3.5″ or 2 x 2.5″ disk drives.  In this photo I’m showing the 3.5″ disk mounted in its only position and the 2.5″ disk is unmounted to show some of the mount points.

The rear of the case allows for a standard ATX power supply, which sticks out about an inch. The case also supports two PCI slots, which should be enough for most ITX motherboards with one or two PCI slots.

Inside we find only four pre-threaded motherboard mount points and a 120 mm fan.  The fan’s power cable can connect to the power supply or to your motherboard.

Quick summary – The Elite 110 is a nice budget case. Depending on your use case it could make a nice case for your home lab, NAS server, or even a vSAN box. Its footprint is a bit too big for appliance-based needs and the case metal is thin. I don’t like the fact that there are only four mount points for your motherboard; this is fine for an ITX board with a single PCI slot but not so good for dual. This is not a fault of the Elite 110 but more of an ATX/mATX/ITX standards problem. With no mount point near the second PCI slot, it puts a lot of pressure on your motherboard during card insertion.  This could lead to cards being mis-inserted.

Overall for the $35 I spent on this case it’s a pretty good value. Further photos can be found here on NewEgg and if you hurry the case is $28 with a rebate.

Manufacturer Links:

If you like my ‘no-nonsense’ blog articles that get straight to the point… then post a comment or let me know… Else, I’ll start writing boring blog content.