ESXi

Upgrading or adding New Hard Disks to the IOMega / EMC / Lenovo ix4-200d


I currently have an IOMega ix4-200d with 4 x 500GB hard disk drives (HDDs). I am in the process of rebuilding my vSAN home lab to all flash, which means I’ll have plenty of spare 2TB HDDs. So why not repurpose them to upgrade my IOMega? Updating the HDDs in an IOMega is a pretty simple process; however, documenting and waiting make up most of the battle.

There are two ways you can update your IOMega: via the command line or via the web client. From what I understand, the command-line version is far faster. However, I wanted to document the non-command-line version, as most of the blogs around this process were a bit sparse on the details. I started off by reading a few blog posts on the non-command-line version of this upgrade, came up with the basic steps, and filled in the blanks as I went along. Below are the steps I took to update mine; your steps might vary. After documenting this process I can see why most of the blogs were sparse on the details: there are a lot of steps to complete this task. So be prepared, as this process can be quite lengthy.

NOTES:

  • YOU WILL LOSE YOUR DATA, SO BACK IT UP (a backup sketch follows these notes)
  • You will lose the IOMega configuration (documenting it might be helpful)
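
The steps below use SyncBack on Windows for the backup, but if you would rather script the backup from a Linux or macOS machine, here is a minimal sketch using rsync. The share name, mount point, and target path are placeholders for illustration only.

    # Mount the IOMega CIFS share read-only (share name and credentials are examples)
    sudo mkdir -p /mnt/ix4-backup
    sudo mount -t cifs //ix4-200d/backups /mnt/ix4-backup -o ro,username=admin

    # Copy everything to a local USB disk, preserving attributes and timestamps
    rsync -av --progress /mnt/ix4-backup/ /media/usb3tb/ix4-backup/

    # Unmount when finished
    sudo umount /mnt/ix4-backup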

Here are the steps I took:

  • Ensure you can log on to the web interface of your IOMega device (lost the password? follow these steps)
  • Backup the IOMega Configuration
    • If needed, screenshot the configuration or document how it is set up
  • Backup the data (YOU WILL LOSE YOUR DATA)
    • In my case, I used an external 3TB USB disk and SyncBack on my Windows PC to back up the data
  • Firmware: ensure the new HDDs and the IOMega IX4 are up to date
    • Seagate disks: ST2000DM001-9YN164
    • Iomega IX4-200d (Product is EOL, no updates from Lenovo)
  • After backing up the data, power off the IOMega, unplug the power, and remove the cover
  • Remove the non-boot 500GB disks from the IOMega and label them (Disks 2-4); do not remove Disk 1
    Special Notes:
    • From what I read, Disk 1 is usually the “boot” disk for the IOMega
    • In my case, it was Disk 1
    • For some of you, it may not be. One way to find out is to remove Disks 2-4 and see if the IOMega boots. If it does, you’ve found the boot disk; if not, power off and try with only Disk 2 installed, and so on until you find the right disk
  • Replace Disks 2-4 with the new HDDs; in my case, I put in the 2TB drives
  • Power on the system (don’t forget to plug it back in)
    • The IOMega display may note that new disks were added; just push the down arrow until you see the main screen
    • Also, at this point you won’t see the correct capacity yet, as we still need to adjust for the new disks
  • Go into IOMega web client

    • Settings > Disks Storage
    • Choose “Click here for steps…”
    • Check box to authorize overwrite

  • About a minute or two later, my IOMega auto-restarted
    • Note: yours may not; give it some time, and if it doesn’t, go to the Dashboard and choose Restart
  • After the reboot, I noted my configuration was gone but the parity was reconstructing, still sized to the 500GB disks
    • This is expected, as the system is rebuilding the parity onto the new disks
    • This step took over 12 hours to complete

  • After the reconstruction, I went into the web client and the IOMega configuration was gone. It asked me to enter the device name, time zone, and email address, and then it automatically rebooted
  • After the reboot, I noted all the disks were healthy and part of the current 1.4TB parity set. This size is expected, as the parity set is still constrained by the remaining 500GB boot disk

  • Now that the IOMega has accepted the 3 x 2TB disks, we need to break the parity group and add the final 2TB HDD
  • First, you have to delete the shares before you can change the parity type.
    • Shared Storage > Delete both shares and check to confirm delete

  • Now go to — Settings > disks > Manage Disks > Data Protection
    • Choose “Without data protection”
    • Check the box to change data protection

  • Once complete, power off the IOMega
    • Dashboard > Shutdown > Allow device to shutdown
  • After it powers off, replace Disk 1 with the last 2TB disk
  • Power On
  • Validate all disks are online
    • Go to Settings > Disks > “Click here for steps…”, then check the box to authorize the overwrite and choose OK.

  • After the last step, observe the error message below and press ‘OK’

  • Go to Dashboard > Restart to restart the IOMega
  • After the restart, the display should show “The filesystem is being prepared” with a progress bar; allow this to finish
  • Now create the Parity set with the new 2TB Disks
    • First, remove all Shared folders (See earlier steps if needed)
    • Second, go to Settings > Disks > Manage Disks > Data Protection > Choose Parity > Next

  • Choose “check this box…” and then click Apply

  • After clicking Apply, my screen updated with a reconstruction at 0%, and the display on the IOMega showed a progress bar too.
  • Mine took more than 24 hours to complete the rebuild.

  • After the rebuild is complete, restore the configuration
  • Finally, restore your data. Again, I used SyncBack to copy my data back

If you like my ‘no-nonsense’ blog articles that get straight to the point… then post a comment or let me know… Else, I’ll start writing boring blog content.

Home Lab: A List of uncommon or niche products


Part of the joy of building out a home lab or virtualization workstation is finding those one-off items that enable you to build something great, cheap, and unique. Below is a list of some of those niche items and distributors I’ve found along the way. I’ll continue to update this post as we go along, and I encourage you to post some of your findings too!

Sybausa.com

This place is full of all types of unique adapters and gadgets to make your home lab or workstation PC better. What I like about their product line is its focus on cards that fit a PCIe x1 slot. Various server-based add-on cards (for example, 2/4-port NIC cards) typically require a PCIe x4 or x8 slot, yet most home lab boards have plenty of x1 slots and little to no support for x4 and x8. Syba seems to make a “plethora” of add-on cards that work in an x1 slot. The only downside is poor documentation and support.

Some products I like from them:

  • 2-port GbE PCIe x1 card (SY-PEX24028): I own and use several of these, and they seem to work quite well. Dislikes: no jumbo frames, and it uses a Realtek 8111e chipset, which means you must add the driver yourself to support ESXi (see the sketch after this list)
  • Another cool item they make is an M.2 to 4-port SATA III adapter. This little RAID controller plugs directly into an M.2 slot and allows for four more SATA devices. I think this would be handy for smaller systems (i.e., NUC builds)
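
Since the SY-PEX24028 uses a Realtek chipset that ESXi does not support out of the box, the driver has to be added by hand. Here is a minimal sketch of one way to do it from the ESXi shell, assuming you have already downloaded a community Realtek 8168/8111 driver VIB and copied it to a datastore (the file name below is only an example):

    # Allow community-supported VIBs on this host
    esxcli software acceptance set --level=CommunitySupported

    # Install the Realtek driver VIB from a datastore (use the full path to your VIB)
    esxcli software vib install -v /vmfs/volumes/datastore1/net55-r8168-8.045a-napi.x86_64.vib

    # Reboot the host so the new NIC driver loads
    reboot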

StarTech.com

StarTech is becoming a great company with a very diverse, well-supported, and well-documented product line. I think they are really starting to give Black Box a run for their money. I really like their cable and adapter card lines.

I’ve been using their StarTech USB to DB9 null modem adapter to run the CLI on my Netgear managed switch since 2012 and have yet to have an issue with it.

William Lam has blogged many times about the use of NUC-style home labs with StarTech single and dual USB 3.0 network adapters.

Winyao

Winyao is a “boutique” distributor specializing in NICs, fibre adapters, and transceivers. One item I find of value is their PCIe x1 dual NIC with an Intel or Broadcom chipset. Personally, I don’t know much about this company, nor do I own any of their products, but at $40-$60 per brand-new adapter, I wish I had found them before buying the Syba adapters.

Fractal Design

If you are looking for your next server, workstation, media, or top-of-the-line PC case, then take a peek at Fractal Design. Founded in 2007 and based out of Sweden, they have really started to dominate the custom case design market. Their innovative designs blend elegance with flexibility, which I might add is a hard combination to find. I like their Arc Midi and Arc Mini R2 lines of cases for home lab build-outs. If my trusty Antec Sonata from 2003 ever lets me down, Fractal will be next on my list. Here is a great blog post from Erik Bussink about his use of Fractal Design for his 2014 home lab.

** 09/06/2017 – Here are some updates to this list **

BitFenix – Cases and products

I came across this interesting case/mod company that builds all kinds of custom cases, cables, etc., to mod your PCs. I like the Prodigy Mini-ITX case; with two PCI slots and a spare slot for disks or other mods, it could be a good fit for a NAS project. However, I’m not fond of the excessive top and bottom ornaments.


ASUS

ASUS came out with a great M.2 to U.2 option that lets users connect U.2 NVMe SSDs over a mini-SAS HD style cable. They claim this option will help users get extreme performance out of these drives. There are some constraints around this (cables, disks, chipsets, etc.), so read up on it before you buy.


If you like my ‘no-nonsense’ blog articles that get straight to the point… then post a comment or let me know… Else, I’ll start writing boring blog content.

vSAN – Working with the vSAN HCL Database


The vSAN HCL DB is a local file that enables vCenter Server to validate your vSAN hardware deployment. This local DB file contains information about the supported products on the VMware compatibility guides. Part of the vSAN health checks is validating the age of the vSAN HCL DB file. The initial release of the health feature ships with a copy of the vSAN HCL DB that was current at release, but this copy of the database will become outdated over time. The file can be updated via an internet connection or through a manual download (see the KBs below). If the HCL DB file is not updated, you will see a warning once it is more than 90 days old and an error at 180 days. These alerts in no way affect your vSAN cluster; they are merely informational alarms.
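
For vCenter Servers without an internet connection, the manual update boils down to downloading the HCL database file on a connected machine and then uploading it through the health UI. A quick sketch, assuming the download URL from the KBs is still current (verify it against the KB before relying on it):

    # On an internet-connected machine, grab the latest vSAN HCL database file
    curl -L -o all.json https://partnerweb.vmware.com/service/vsan/all.json

    # Copy all.json somewhere reachable from the Web Client, then use the
    # "Update from file" option in the vSAN health / HCL database settings to load it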

You can find this check by clicking on your vSAN cluster > Monitor > Virtual SAN > Health and then expanding Hardware compatibility (see the PIC below). Under Hardware compatibility, you will see various checks that validate your installation. ‘vSAN HCL DB up to date’ is the check that will alarm when needed.
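
If you prefer the command line, the same health checks can also be pulled from any ESXi host in the cluster. A hedged sketch, assuming vSAN 6.6 or later where the esxcli vsan health namespace exists (the exact test name can vary by build, so copy it from the list output):

    # List all vSAN health checks and their current status
    esxcli vsan health cluster list

    # Show details for a single check, for example the HCL DB age check
    esxcli vsan health cluster get -t "vSAN HCL DB up-to-date"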

You might be thinking –

“I validated my vSAN deployment against the HCL and VCG when it was initially built, so why do I need to recheck it over and over?” There are a few good reasons why this validation is important. First, new firmware and drivers are validated on a routine basis; keeping on top of these will help ensure your vSAN cluster works optimally and is less problematic. Second, just because a component was listed on the VCG doesn’t necessarily mean it will stay on the VCG. Allowing vSAN to check itself will not only save you time but also identify any potential issues.

“My vSAN cluster doesn’t have an internet connection and I am pretty good about keeping up to date on the VCG. Do I really need these checks, and if not, how can I disable them?” First off, I would not recommend disabling them, but there may be a need for it. It may well be true that your company does a good job of manually checking the VCG, but automating these checks would only help your efforts and would be more efficient. However, there are some deployments where automated checks may not be desirable. For those cases, follow this guidance to disable them: Cluster > Manage > Virtual SAN > General > Internet Connectivity > Disable Auto HCL update

For more information about the vSAN HCL DB, including how to disable and update it, please see the following KBs:

In this PIC, I’m showing where you can locate the vSAN HCL DB check status.


If you like my ‘no-nonsense’ blog articles that get straight to the point… then post a comment or let me know… Else, I’ll start writing boring blog content.

VSAN – What’s new in vSAN 6.6 Video Demo


What a great video posted by Duncan and VMware! In a short 10-minute video, he is able to hit upon some of the new features within vSAN 6.6.

Gigabyte Firmware / BIOS update for MergePoint Embedded Management Software and Motherboard


** Update **

I wrote this post when I first got my MX31-BS0; since then, I have updated my BIOS several times using this process. Here are my notes from my most recent updates:

  • 09/2018 – A mix of updates for my three hosts: updated the MX31-BS0 BIOS from R03 or R08 to R10, and updated MergePoint from 8.58 to 8.73 on two hosts (the third host was already on 8.73)
    • Noted behavior:
      • After the BIOS update completed, the mobo powered off instead of rebooting as it had with previous updates. I had to power the mobo back on to complete the BIOS install; the mobo then rebooted one more time as expected.
      • Even though the mobo had been warm booted, the BIOS version in the MergePoint web interface still showed the old version. However, the boot BIOS screen reflected the update. A full power disconnect of the mobo and a few refreshes of the web browser allowed MergePoint to report R10. I did not see this behavior with the MergePoint EMS firmware update; it promptly reported 8.73.
  • 05/2018 – Updated one host’s MX31-BS0 BIOS from R03 to R08 and MergePoint from 8.58 to 8.73.
    • Blog readers noted issues going to R08 and could not connect to vKVM; I didn’t have any issues with the update. It looks like it was a Java 8 update issue (see the post comments for more info)
  • 09/2017 – Updated the MX31-BS0 BIOS from F10 to R03 and MergePoint from 8.41 to 8.58.
  • 03/2017 – Original update documented below. Updated the MX31-BS0 BIOS from F01 to F10 and MergePoint from 8.01 to 8.41.

**** Blog Post ****

You’d think by now manufacturers would have a solid and concise process around updating their products. They are quick to warn users not to update their BIOS unless there is a problem, and quick to state that if there is a problem they usually won’t support it. This total cycle of disservice is a constant for low-end manufacturers; heck, even some high-end server platforms have the same issues. I had these same concerns when I started to look into updating my current MX31-BS0 motherboard (mobo).

What can soften this blow a bit? How about the ability to update your BIOS remotely? This is a great feature of the MX31-BS0, and in this blog post I’ll show you how I updated the BIOS and the remote MergePoint EMS (MP-EMS) firmware too.

Initial Steps –

  • My system is powered off and the power supply can supply power to the mobo.
  • I have set up remote access to the MP-EMS site with an IP address and can access it via a browser. Additionally, I have validated that the vKVM function works without issue
  • I downloaded the correct Mobo BIOS and BMC or MP-EMS Firmware and have extracted these files
  • The steps below were completed on a Gigabyte MX31-BS0 going from BIOS F01 > F10 and MP-EMS 8.01 > 8.41; your system may vary

1 – Access the MergePoint EMS site

Start out by going to the IP address for the MP-EMS site. From the initial display screen, we can see the MP-EMS firmware version but not the platform (or mobo) BIOS version. Why not, you may ask? Well, the MP-EMS will only display mobo information when the mobo is powered on. Before you power on your mobo, I would recommend opening the vKVM session so that you can see the boot screen. When you power on your mobo (MP-EMS > Power > Control > Power On), use the vKVM screen to halt at the boot menu or even go into setup and disable all the boot devices.
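
As a side note, since the MP-EMS is just a BMC, you can usually do the same power control from a shell with ipmitool, assuming IPMI over LAN is enabled on the BMC (the IP address and credentials below are placeholders):

    # Check the current power state of the mobo via the BMC
    ipmitool -I lanplus -H 192.168.1.50 -U admin -P 'password' chassis power status

    # Power the mobo on (equivalent to MP-EMS > Power > Control > Power On)
    ipmitool -I lanplus -H 192.168.1.50 -U admin -P 'password' chassis power on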

In this PIC, we can see the MP-EMS firmware is 8.01 and the BIOS field is blank, as the mobo is not powered on.

2 – Selecting the Mobo BIOS Update

I chose the following to update the mobo BIOS. Start out by uploading the file: Update > ‘BIOS & ME’ > Choose File > Image.RBU > Upload

Once the upload is complete, click on ‘Update’ to proceed. NOTE: a warning dialog box appeared for me stating the system would be powered off to update the BIOS. Good thing I’m in the boot menu, as the system will just power off directly with no regard for the system state.

3 – Installing the Mobo BIOS Update: Be Patient for the BIOS install to complete

Once I saw the message ‘BIOS firmware image has been updated successfully’, I exited the browser session and vKVM. Note: I’d recommend closing the browser entirely and then opening a new session.


I then restarted my vKVM and MP-EMS sessions and powered on my mobo. This allowed the BIOS update to continue.

Here is the patience part: my system was going from BIOS F01 > F10, and it rebooted twice to complete the update. Be patient; it will complete.

Here is the behavior I noted:

  • First reboot – The system POSTed normally, cleared the screen, and then displayed a white-text warning message about the BIOS booting with default settings. Very shortly after, it rebooted again.
  • On the second reboot, it POSTed normally and I pressed F10 to get back to the boot menu. I did this because next we’ll need to update the MP-EMS firmware.

Once the system had rebooted, I refreshed my MP-EMS screen and voila, there it was: BIOS version F10.

** Note – Sometimes (not every time) the MP-EMS screen would show the old BIOS version number, even though the updated version was shown on the BIOS screen itself. A cold boot didn’t always fix this, but eventually the MP-EMS would update and reflect the correct BIOS version. **

4 – Selecting the MP-EMS Firmware

While the mobo was booted and I was in the boot menu, I went into the MP-EMS session and chose the following: Update > BMC > Choose File > 841.img > Upload


5 – Installing the MP-EMS firmware update

Once the file was uploaded, I could see the current and new versions. I then chose the Update button, which promptly disconnected my vKVM session, and the status changed from None to a percentage completed.

Again, be patient and allow the system to update. For my systems, the percent complete seemed to hang a few times, but the total process, for me, took about


At 100% complete, my system did an auto-reboot. When I heard my system beep, I closed my MP-EMS session and started a new one.


Shortly after the system booted, I went into the MP-EMS and validated that the firmware was now 8.41.
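
If you want a second opinion on the BMC firmware version outside of the web interface, ipmitool can report it as well, again assuming IPMI over LAN is enabled (IP address and credentials are placeholders):

    # "Firmware Revision" in the output should line up with the MP-EMS version (8.41 in my case)
    ipmitool -I lanplus -H 192.168.1.50 -U admin -P 'password' mc info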


Wrapping this up…

Ever heard the saying “It really is a simple process; we just make it complicated”? Recent BIOS updates and overall system management sometimes feel this way when trying to do simple tasks. Not to date myself, but BIOS/firmware updates have been around for decades now. I’ve done countless updates where it was simply a matter of extracting an update to removable media and letting it complete on its own. Now, one could argue that systems are more complicated and local boot devices don’t scale well for large environments, and I’d say both are very true, but that doesn’t mean the process can’t be made simpler.

My recommendation to firmware/BIOS manufacturers: invest in simplicity or make it a requirement for your suppliers. You’ll have happier customers, fewer service calls, and more $$ in your pocket. But then again, if you do, what would I have to blog about?

Am I happy with the way I have to update this Mobo? Yes, I am happy with it. For the price I paid it’s really nice to have a headless environment that I can remotely update. I won’t have to do it very often so I’m glad I wrote down my steps in this blog.

If you like my ‘no-nonsense’ blog articles that get straight to the point… then post a comment or let me know… Else, I’ll start writing boring blog content.

Home Lab Gen IV – Part III: Best ESXi White box Mobo yet?


Initially, when I decided to refresh my home lab to Generation IV, I planned to wipe just the software, add InfiniBand, and keep most of the hardware. However, as I got into this transformation, I decided it was time for a hardware refresh too, including moving to all-flash vSAN.

In this post, I wanted to write a bit more about my new motherboard (mobo) and why I think it’s a great choice for a home lab. The past workhorse of my home lab has been my trusty MSI Z68MS-G45(B3) Rev 3.0 (AKA MSI-7676). I bought three MSI-7676 boards in 2012, and they have been solid performers that treated me very well. However, they were starting to age a bit, so I sold them off to a good buddy of mine and used those funds for my new items.

My new workhorse –

Items kept from Home Lab Gen III:

  • 3 x Antec Sonata Gen I and III cases, each with a 500W Antec PSU: I’ve had one of these cases since 2003; now that is some serious return on investment

New Items:

  • 3 x Gigabyte MX31-BS0 – So feature-rich, and I found them for $139 each, which is partly why I feel it’s the best ESXi white box mobo
  • 3 x Intel Xeon E3-1230 v5 – I bought the one without the integrated GPU and saved some $$
  • 3 x 32GB DDR4 RAM – Nothing special here, just 2133MHz DDR4 RAM
  • 3 x Mellanox ConnectX InfiniBand cards (more to come on this soon)
  • 4 x 200GB SSD, 1 x 64GB USB (Boot)
  • 1 x IBM M5210 JBOD SAS Controller

Why I chose the Gigabyte MX31-BS0 –

Likes:

  • Headless environment: This mobo comes with an AST2400 BMC for headless management. This means I am no longer tied to my KVM. With a Java-enabled browser, I can view the host screen, reboot, go into the BIOS, apply BIOS updates, view hardware, and make adjustments as if I were physically at the box
  • Virtual media: I can now virtually mount ISOs on the ESXi host without being at the console (still to test an ESXi install)
  • Onboard 2D video: No VGA card needed; the onboard video controller takes care of it all. Why is this important? You can save money by choosing a CPU that doesn’t have an integrated GPU, because the onboard video covers it
  • vSphere HCL support: Really? Yep, most of the components on this mobo are on the HCL, and Gigabyte lists ESXi 6 as a supported OS. It’s not 100% HCL, but for a white box it’s darn close
  • Full x16 PCIe socket: Goes right into the CPU << used for the InfiniBand HCA
  • Full x8 PCIe socket: Goes into the C232 << used for the IBM M5210
  • M.2 Socket: Supporting 10Gb/s for SSD cards
  • 4 x SATA III ports (white)
  • 2 x SATA III ports (orange) that can be used for SATA DOMs, with onboard power connectors
  • 2 x Intel i210 1GbE (HCL supported) NICs – see the quick check after this list
  • E3 v5 Xeon Support
  • 64GB RAM Support (ECC or Non-ECC Support)
  • 1 x Onboard USB 2.0 Port (Great for a boot drive)
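
Once ESXi is installed, here is a quick sanity check from the ESXi shell to confirm the two onboard i210 NICs and the M5210 SAS controller came up with supported drivers. This is just a minimal sketch; the vmnic names and output will vary by host and ESXi version:

    # List the physical NICs, their drivers, and link state (the two i210s should appear here)
    esxcli network nic list

    # Show driver and firmware details for the first onboard NIC (name may differ on your host)
    esxcli network nic get -n vmnic0

    # List storage adapters; the IBM M5210 should show up with its driver
    esxcli storage core adapter list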

Dislikes: (Very little)

  • Manual is terrible
  • The mobo power connector sits parallel to the board, which made it a bit tight in a common case
  • The 4 x SATA III ports (white) are parallel to the board too, again making them hard to seat and maintain
  • No Audio (Really not needed, but would be nice)
  • For some installs, it could be a bit limited on PCIe Ports

Some PICS:

The pic directly below shows two windows. Window 1 has the large Gigabyte logo; this is the headless environment’s control page. From here you can control your host and launch the video viewer (window 2). The video viewer allows you to control your host just as if you were physically there. In window 2, I’m in the BIOS settings for the ESXi host.

This is a stock photo of the MX31-BS0. It’s a bit limited on PCIe slots; however, I don’t need many, as soon I’ll have 20Gb/s InfiniBand running on this board, but that is another post soon to come!

If you like my ‘no-nonsense’ blog articles that get straight to the point… then post a comment or let me know… Else, I’ll start writing boring blog content.

DCUI from ssh for vSphere 6 — so awesome!


This is one of those great command-line items to put in your toolkit that will impress your co-workers. I think this is one of the least-known commands, but it could have a huge impact on an admin’s ability to manage their environment. The command is simply ‘dcui’, and it is a very simple way to access the DCUI without having to go into your remote IPMI tools (iLO, iDRAC, KVM over IP, etc.). The only downside compared to IPMI tools is that it doesn’t work across a reboot, as you’ll lose your ssh session.

How to use it:

  • After your server is fully booted, start an ssh session to your target server and log on
  • From the command prompt, type dcui and press Enter

  • From there you can use the DCUI remotely (see the sketch after this list)
  • Press Ctrl+C to exit
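
Here is the whole flow in one place. The host name is a placeholder, and SSH must already be enabled on the host:

    # SSH to the ESXi host and launch the DCUI in the terminal
    ssh root@esxi01.lab.local
    dcui

    # When finished, press Ctrl+C to drop back to the shell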

Tips:

  • Set your ssh window size where you want it before going into the DCUI. If you resize after connecting, it will exit out of the DCUI
  • The dcui command worked great in PuTTY, but it did not work with the macOS Terminal program. Not sure why, but if you got this working on a Mac then post up!

Reference: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2039638

If you like my ‘no-nonsense’ blog articles that get straight to the point… then post a comment or let me know… Else, I’ll start writing boring blog content.

Using VMware Fusion for your VM Remote Console


These last few months I’ve been working to totally rebuild my home lab, and I ran into a neat feature of Fusion. This blog article is a quick tip on using Fusion for your VM remote console.

Issue – When you want to start a remote console to your VMs, you typically download and install the VMRC (VMware Remote Console) application. Sometimes getting it to run can be a bit of a burden (normally an OS issue).

Observation – While on my Mac, I was setting up a VM via the host web client and needed to mount an ISO. When I right-clicked on the VM, I chose ‘Launch Remote Console’ instead of the usual ‘Download VMRC’.

After clicking, I was prompted to choose Fusion.

And there it was… a simple way to work with VMs via Fusion! From there I mounted my ISO and started the rebuild of my home lab.

If you like my ‘no-nonsense’ blog articles that get straight to the point… then post a comment or let me know… Else, I’ll start writing boring blog content.

Honeywell Next Generation Platform with Dell FX2 + VMware VSAN


I wish that over these past years I could have blogged in technical detail about all the great things I’ve experienced working for VMware. A big part of my job as a VMware TAM is being a trusted advisor and helping VMware customers build products they can resell to their customers. These past years I’ve worked directly with my customer to help them build a better offering, and very soon it will be released. Below is a tweet from Michael Dell around the Honeywell Next Generation Platform and an in-depth video by Paul Hodge. The entire team (Honeywell, Dell, and VMware) has been working tirelessly to make this product great. It’s been a long haul with so many late nights and deadlines, but like so many others on this team, I’m honored to say I put my personal stamp on this product. Soon it will be deployed globally, and it’s a great day for Honeywell, Dell, and VMware. You all should be proud!