Gigabyte Firmware / BIOS update for MergePoint Embedded Management Software and Motherboard


You’d think by now manufacturers would have a solid and concise process for updating their products. They are quick to warn users not to update their BIOS unless there is a problem, and just as quick to state that if the update causes a problem they usually won’t support it. This cycle of disservice is a constant for low-end manufacturers; heck, even some high-end server platforms have the same issues. I had these same concerns when I started looking into updating my current MX31-BS0 Motherboard (mobo).

What can soften this blow a bit? How about the ability to update your BIOS remotely? This is a great feature of the MX31-BS0, and in this blog post I’ll show you how I updated the BIOS and the remote MergePoint EMS (MP-EMS) firmware too.

Initial Steps –

  • My system is powered off and the power supply can supply power to the mobo.
  • I have set up remote access to the MP-EMS site with an IP address and can reach it via a browser. Additionally, I have validated the vKVM function works without issue.
  • I downloaded the correct mobo BIOS and BMC (MP-EMS) firmware and have extracted these files.
  • The steps below were completed on a Gigabyte MX31-BS0 going from BIOS F01 > F10 and MP-EMS 8.01 > 8.41; your system may vary.

1 – Access the MergePoint EMS site

Start out by going to the IP address for the MP-EMS site. From the initial display screen we can see the MP-EMS firmware version but not the Platform (or mobo) BIOS version. Why not, you may ask? Well, the MP-EMS will only display mobo information when the mobo is powered on. Before you power on your mobo, I would recommend opening a vKVM session so that you can see the boot screen. When you power on your mobo (MP-EMS > Power > Control > Power On), use the vKVM screen to halt at the ‘boot menu’ or even go into setup and disable all the boot devices.
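As a side note, AST2400-based management controllers like this one generally also speak standard IPMI over LAN, so the power-on step can often be scripted instead of clicked. This is an untested sketch using ipmitool; the IP address and credentials are placeholders, and it assumes IPMI over LAN is enabled on your MP-EMS:

# IP and credentials below are placeholders for your MP-EMS
ipmitool -I lanplus -H 192.168.1.50 -U admin -P changeme chassis power status

# Power the mobo on, then use vKVM to halt at the boot menu as described above
ipmitool -I lanplus -H 192.168.1.50 -U admin -P changeme chassis power on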

In this pic, we can see my MP-EMS firmware is 8.01 and the BIOS version is blank because the mobo is not powered on.

2 – Selecting the Mobo BIOS Update

I chose the following to update the mobo BIOS. Start out by uploading the file: Update > ‘BIOS & ME’ > Choose File > Image.RBU > Upload

Once the upload is complete, click on ‘Update’ to proceed. NOTE: a warning dialog box appeared for me stating the system would be powered off to update the BIOS. Good thing I was sitting in the boot menu, as the system will directly power off with no regard for the system state.

3 – Installing the Mobo BIOS Update: Be Patient for the BIOS install to complete

Once I saw the message that the ‘BIOS firmware image has been updated successfully’, I exited the browser session and vKVM. Note: I’d recommend closing the browser out entirely and then reopening a new session.


I then restarted my vKVM and MP-EMS sessions and powered on my mobo, which allowed the BIOS update to continue.

Here is the patience part – my system was going from BIOS F01 > F10, and it rebooted 2 times to complete the update. Be patient; it will complete.

Here is the behavior I noted:

  • First reboot – the system posted normally, cleared the screen, and then a white-text warning stated the BIOS had booted with default settings. Very shortly after, it rebooted again.
  • On the 2nd reboot it posted normally, and I pressed F10 to get back to the boot menu. I did this because next we’ll need to update the MP-EMS firmware.

Once the system had rebooted, I refreshed my MP-EMS screen and voilà, there it was: BIOS Version F10.

4 – Selecting the MP-EMS Firmware

While the mobo was booted and I was sitting in the boot menu, I went into the MP-EMS session and chose the following: Update > BMC > Choose File > 841.img > Upload


5 – Installing the MP-EMS firmware update

Once the file was uploaded I could see the Current and New versions. I then clicked the Update button, which promptly disconnected my vKVM session, and the Status changed from None to a % Completed.

Again, be patient and allow the system to update. For my system the % Completed seemed to hang a few times, but the process ran through to completion.


At 100% complete my system did an auto-reboot. When I heard the system beep, I closed my MP-EMS session and started anew.


Shortly after the system booted, I went into the MP-EMS and validated the firmware was now 8.41.


Wrapping this up…

Ever heard the saying “It really is a simple process, we just make it complicated”? Recent BIOS updates and overall system management sometimes feel this way, even for simple tasks. Not trying to date myself, but BIOS/firmware updates have been around for decades now. I’ve done countless updates where it was as simple as extracting an update to removable media and letting it complete on its own. Now, one could argue that systems are more complicated and that local boot devices don’t scale well for large environments, and I’d say both are very true, but that doesn’t mean the process can’t be made simpler.

My recommendation to firmware/BIOS manufacturers — invest in simplicity or make it a requirement for your suppliers. You’ll have happier customers, fewer service calls, and more $$ in your pocket. But then again, if you do, what would I have to blog about?

Am I happy with the way I have to update this mobo? Yes, I am. For the price I paid, it’s really nice to have a headless environment that I can update remotely. I won’t have to do it very often, so I’m glad I wrote down my steps in this blog.

If you like my ‘no-nonsense’ blog articles that get straight to the point… then post a comment or let me know… Else, I’ll start writing boring blog content.

Home Lab Gen IV – Part III: Best ESXi White box Mobo yet?


It’s been a while since I posted about the Gen IV home lab, but what can I say, I’ve been a bit busy. Initially, when I decided to start this refresh I planned to wipe just the software, add InfiniBand, and keep most of the hardware. However, as I got into this transformation I decided it was time for a hardware refresh too. About that time is when I found what could be a nearly perfect ESXi white box mobo.

In this post, I wanted to write a bit about this new mobo and why I think it’s a great choice for a home lab. The past workhorse of my home lab was my trusty MSI Z68MS-G45(B3) Rev 3.0 (AKA MSI-7676). I started using them in 2012 and had 3 of these mobos. They were solid performers and treated me very well. However, they were starting to age a bit, so I sold them off to a good buddy of mine and used those resources to help fund 3 new mobos, DDR4, and CPUs.

My new workhorse –

Items kept from Home Lab Gen III:

  • 3 x Antec Sonata Gen I and III cases, each with a 500W Antec PS: I’ve had one of these cases since 2003; now that is some serious return on investment
  • Each server: 2 x SATA III 2TB HDD, 1 x SATA III 60GB or 90GB SSD, 1 x 60GB HDD (Boot)
  • Each server: 1 x SYBA SY-PEX24028 dual-port Gigabit Ethernet network adapter

New Items:

  • 3 x Gigabyte MX31-BS0 – So feature rich, and I found them for $139 each; this is partly why I feel it’s the best ESXi white box mobo
  • 3 x Intel Xeon E3-1230 v5 – I bought the one without the integrated GPU and saved some $$
  • 3 x 32GB DDR4 RAM – Nothing special here, just 2133MHz DDR4 RAM
  • 3 x Mellanox ConnectX InfiniBand cards (more to come on this soon)

Why I chose the Gigabyte MX31-BS0 –

Likes:

  • Headless environment: This mobo comes with an AST2400 management chip for headless operation, which means I am no longer tied to my KVM. With a Java-enabled browser, I can view the host screen, reboot, go into the BIOS, apply BIOS updates, view hardware, and make adjustments as if I were physically at the box
  • Virtual Media: I can now virtually mount ISOs to the ESXi host without being directly at the console (still to test an ESXi install)
  • Onboard 2D Video: No VGA card needed; the onboard video controller takes care of it all. Why is this important? You can save money by choosing a CPU without an integrated GPU, since the onboard video covers it
  • vSphere HCL Support: Really? Yep, most of the components on this mobo are on the HCL and Gigabyte lists ESXi 6 as a supported OS. It’s not 100% HCL, but for a white box it’s darn close
  • Full 16x PCIe Socket: Goes right into the CPU
  • Full 8x PCIe Socket: Goes into the C232
  • M.2 Socket: Supporting 10Gb/s for SSD cards
  • 4 x SATA III ports (white)
  • 2 x SATA III ports (orange) usable for SATADOMs, with onboard power connectors
  • 2 x Intel i210 1GbE (HCL supported) NICs
  • E3 v5 Xeon Support
  • 64GB RAM Support (ECC or non-ECC)
  • 1 x Onboard USB 2.0 Port

Dislikes: (Very little)

  • Manual is terrible
  • The mobo power connector sits horizontal to the board, which made it a bit tight in a common case
  • The 4 x SATA III ports (white) are horizontal too; again, hard to seat and maintain
  • No audio (really not needed, but would be nice)
  • For some installs, it could be a bit limited on PCIe slots

Some PICS:

The pic directly below shows 2 windows. Window 1, with the large Gigabyte logo, is the headless environment controls; from here you can control your host and launch the video viewer (window 2). The video viewer allows you to control your host just as if you were physically there. In window 2, I’m in the BIOS settings for the ESXi host.

This is a stock photo of the MX31-BS0. It’s a bit limited on PCIe slots; however, I don’t need many, as soon I’ll have 20Gb/s InfiniBand running on this board. But that is another post soon to come!

If you like my ‘no-nonsense’ blog articles that get straight to the point… then post a comment or let me know… Else, I’ll start writing boring blog content.

DCUI from ssh for vSphere 6 — so awesome!


This is one of those great command-line items to put in your toolkit that will impress your co-workers. I think it is one of the least known commands, yet it can have a huge impact on an admin’s ability to manage their environment. The command is simply ‘dcui’, and it is a very simple way to access the DCUI without having to go into your remote IPMI tools (iLO, iDRAC, KVM over IP, etc.). The only downside compared to IPMI tools is that it doesn’t survive a reboot, since you’ll lose your ssh session.

How to use it:

  • After your server is fully booted, start an ssh session to your target server and log on
  • From the command prompt, type dcui and press Enter (see the sketch after this list)
  • From there you can use the DCUI remotely
  • Press Ctrl+C to exit
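Putting it together, the whole round trip looks like the sketch below. The hostname is a placeholder for your own ESXi host, and SSH must already be enabled on it:

# Hostname is a placeholder; SSH must be enabled on the host
ssh root@esxi01.lab.local
# At the ESXi shell prompt, launch the console UI
dcui
# Ctrl+C drops you back to the ESXi shell; 'exit' closes the ssh session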

Tips:

  • Have your ssh screen sized the way you want it before going into the DCUI. If you resize after connecting, it will exit the DCUI
  • The dcui command worked great in PuTTY, but it did not work with the Mac Terminal program. Not sure why, but if you get this working on a Mac then post up!

Reference: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2039638

If you like my ‘no-nonsense’ blog articles that get straight to the point… then post a comment or let me know… Else, I’ll start writing boring blog content.

Using VMware Fusion for your VM Remote Console


These last few months I’ve been working to totally rebuild my home lab, and I ran into a neat feature of Fusion. This blog article is a quick tip on using Fusion for your VM remote console.

Issue – When you want to start a remote console to your VMs, you typically download and install VMRC (VMware Remote Console). Sometimes getting it to run can be a bit of a burden (normally an OS issue).

Observation – While on my Mac, I was setting up a VM via the Web Host Client and needed to mount an ISO. When I right-clicked on the VM, I chose ‘Launch Remote Console’ vs. the normal ‘Download VMRC’.

After clicking, I was prompted to choose Fusion.

And there it was… a simple way to work with VMs via Fusion! From there I mounted my ISO and started the rebuild of my home lab.

If you like my ‘no-nonsense’ blog articles that get straight to the point… then post a comment or let me know… Else, I’ll start writing boring blog content.

Honeywell Next Generation Platform with Dell FX2 + VMware VSAN


I wish that over these past years I could have blogged in technical detail about all the great things I’ve experienced working for VMware. A big part of my job as a VMware TAM is being a trusted advisor and helping VMware customers build products they can resell to their customers. These past years I’ve worked directly with my customer to help them build a better offering, and very soon it will be released. Below is a tweet from Michael Dell about the Honeywell Next Generation Platform and an in-depth video by Paul Hodge. The entire team (Honeywell, Dell, and VMware) has been working tirelessly to make this product great. It’s been a long haul with many late nights and deadlines, BUT like so many others on this team I’m honored to say I put my personal stamp on this product. Soon it will be deployed globally, and it’s a great day for Honeywell, Dell, and VMware. You all should be proud!

Passed VMware Certified Professional 6 – Data Center Virtualization Delta Exam 2V0-621D


I passed my VCP 6 – DCV Delta Exam today! Here are some of my notes around it.

  • The test was 65 questions and you have 75 minutes to complete it; I had about 30 minutes left.
  • I did get several configuration maximum questions. The questions were more about applying the known maximums vs. just memorizing the data points.
  • I did get some Virtual SAN and vROps questions, and LOTS of questions around Lockdown Mode.
  • Know your licensing models and which features belong to which.
  • Lots of questions around iSCSI, FCoE, APD, path loss, and PSP modes.
  • Know all about SSO and authentication types, and don’t forget those newer features like Content Library.
  • Make sure you review vSphere Replication and those always-fun resource pools.
  • Know your command-line tools – esxcli and the like.
  • In general, questions were straight to the point vs. lengthy paragraphs.
  • I followed the blueprint, read my documents, and made sure I read every ‘Note’ section.
  • For more information about this test click here

What’s new in vSphere 6.5

Posted on Updated on

Hey folks — this great video came my way today. Watch Kevin Steil (Southeast VMware Technical Account Manager (TAM) Team Lead) talk about HOL-1710-SDC-3 – vSphere with Operations Management: Product Deep Dive, which introduces the cool new features coming in vSphere 6.5.

Home Lab Gen IV – Part II: Lab Clean Up and Adding Realtek 8168 NIC Drivers to ESXi 6u2 ISO


To prep my Home Lab for ESXi 6.0U2 with VSAN + IB, I wanted to ensure it was in pristine condition. It had been running ESXi 5.5 + VSAN for many years, but it was in need of some updates. I plan to fully wipe my environment (no backups) and reinstall it all. Yes, that’s right, I’m going to wipe it all – this means goodbye to those Windows 2008 VMs I’ve been hanging on to for years now. Tip: If you’d like to understand my different home lab generations, please see my dedicated page on this topic.

In this post, I am going to focus on listing out my current to-do items, then describe how to flatten all SSDs/HDDs, and finally build a custom ESXi 6.0U2 ISO with Realtek 8168 drivers.

Current to-do list –

Completed

  • PM the hosts – While they are off, it’s a good time to do some routine PM (Complete)
  • BIOS and firmware – Check all mobo BIOS, pNIC, and HDD/SSD firmware (Complete)
  • Netgear switch firmware – It’s doubtful there’s an update, but always worth a check (Complete)
  • Flatten all SSDs/HDDs with MiniTool Partition Wizard (This Post)
  • Create ISO with ESXi 6.0U2 and Realtek 8168 drivers (This Post)

Still to do

  • Install Windows 2012 Server VM for DNS and AD services (local disk)
  • Install vCenter Server Appliance (local disk)
  • Get InfiniBand functional (needs work)
  • Set up FT and VSAN networks
  • Enable VSAN
  • Rebuild VM environment

Flatten all SSDs/HDDs with MiniTool Partition Wizard

Installing VSAN fresh into an environment requires the SSDs/HDDs to be free of data and partition information. MiniTool Partition Wizard is a FREE bootable software product that lets you remove all the partitions on your ESXi hosts and other PCs. You can download it here >> https://www.partitionwizard.com/partition-wizard-bootable-cd.html

Once I created the boot CD and let the product boot, I was quickly able to see all the HDDs/SSDs in my host.

I simply right-clicked on each disk and chose ‘Delete All Partitions’.

After choosing ‘Delete All Partitions’ for all my disks, I clicked ‘Apply’ in the upper right-hand corner. The following window appeared; I chose ‘Yes’ to apply the pending changes, and it removed all the partitions on all my disks quite quickly.
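As an aside, if your old ESXi install still boots and the disks are no longer claimed by VSAN, partedUtil can do the same wipe from the ESXi shell. This is a hedged sketch of an alternative I did not use here; the naa.* device name is a placeholder, and be very sure you are not pointing it at your boot disk:

# List the disk device names (the naa.* name below is a placeholder)
ls /vmfs/devices/disks/
# Show the current partition table for a disk
partedUtil getptbl /vmfs/devices/disks/naa.600508b1001c5a1c
# Write an empty msdos label, wiping all partition entries on that disk
partedUtil setptbl /vmfs/devices/disks/naa.600508b1001c5a1c msdos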

Create ISO with ESXi 6.0U2 and Realtek 8168 Drivers

ESXi no longer ships with Realtek network drivers, so home lab users who need them will have to create a custom ISO to add these drivers back in. Keep in mind these drivers are unsupported by VMware, so use at your own risk. My trusty ESXi-Customizer GUI program is no more for ESXi 6; it has moved to a CLI-based product. However, PowerCLI has all the functionality I need to build my custom ISO, so in this section I’ll be using PowerCLI to create it. Keep in mind these are the steps that worked for me; your environment may vary.

To get started you will need two files and PowerCLI Installed on a Windows PC.

  1. File 1: VMware Offline ZIP >> www.vmware.com/download

  2. File 2: Realtek 8168 offline bundle >> https://vibsdepot.v-front.de/wiki/index.php/Net55-r8168

  3. PowerCLI download and install >> https://communities.vmware.com/community/vmtn/automationtools/powercli

Tip: If you don’t know PowerCLI try starting here

  4. Place the files from steps 1 and 2 into the c:\tmp folder

–POWERCLI COMMANDS– For each command I have included a screenshot and the actual command, allowing you to copy, paste, and edit it for your environment.

  1. Add the ESXi 6.0U2 and Realtek 8168 bundles to the local software depot

Add-EsxSoftwareDepot C:\tmp\update-from-esxi6.0-6.0_update02.zip

Add-EsxSoftwareDepot C:\tmp\net55-r8168-8.039.01-napi-offline_bundle.zip

2. Confirm the bundles are in the depot

Get-EsxSoftwareDepot

3. List out the ESXi Image Profiles

Get-EsxImageProfile

4. Create a Clone Image to be modified – Ensure you are targeting the “ESXi…..standard” profile from step 3

New-EsxImageProfile -CloneProfile ESXi-6.0.0-20160302001-standard -Name "RealTek8186a"

Forward-Looking Tip: Whatever name you choose here will show up in your boot ISO

5. Set the Acceptance Level to Community Supported – Remember RealTek is unsupported by VMware

Set-EsxImageProfile -Name RealTek8186a -AcceptanceLevel CommunitySupported

For ImageProfile Enter – RealTek8186a

6. Ensure the Realtek net55-r8168 driver is loaded from the local depot (Screenshot shortened)

Get-EsxSoftwarePackage

7. Add the RealTek software package to the profile

Add-EsxSoftwarePackage

ImageProfile: RealTek8186a

SoftwarePackage[0]: net55-r8168 8.039.01-napi

Tip: You MUST enter the full name here; if you just use the short name, it will not work

8. Validate the RealTek drivers are now part of the RealTek8186a Profile (Screenshot shortened)

(Get-EsxImageProfile "RealTek8186a").VibList

9. Export the profile to an ISO

Export-EsxImageProfile -ImageProfile "RealTek8186a" -ExportToIso -FilePath c:\tmp\RealTek8186a.iso
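If you’d rather paste the whole thing at once, here is the same sequence as one block. Treat it as a sketch under the assumptions above: both bundles sitting in C:\tmp and the profile name from step 3 as it appeared on my depot, so check what Get-EsxImageProfile reports before running it:

# Load both offline bundles into the session depot
Add-EsxSoftwareDepot C:\tmp\update-from-esxi6.0-6.0_update02.zip
Add-EsxSoftwareDepot C:\tmp\net55-r8168-8.039.01-napi-offline_bundle.zip

# Clone the standard profile (confirm the exact name with Get-EsxImageProfile first)
New-EsxImageProfile -CloneProfile ESXi-6.0.0-20160302001-standard -Name "RealTek8186a"

# Realtek drivers are community supported, so lower the acceptance level
Set-EsxImageProfile -ImageProfile RealTek8186a -AcceptanceLevel CommunitySupported

# Pull the driver package from the depot and add it to the cloned profile
Add-EsxSoftwarePackage -ImageProfile RealTek8186a -SoftwarePackage (Get-EsxSoftwarePackage net55-r8168)

# Verify the VIB made it in, then press the ISO
(Get-EsxImageProfile "RealTek8186a").VibList
Export-EsxImageProfile -ImageProfile "RealTek8186a" -ExportToIso -FilePath C:\tmp\RealTek8186a.iso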

And that’s it… now with my cleaned/updated hosts, flattened HDDs/SSDs, and a newly pressed custom ISO, I am ready to install ESXi onto my systems. Next steps for me will be to install ESXi, the AD/DNS VM, and the vCenter Server Appliance. However, my next post will be focused on getting InfiniBand running in my environment.

If you like my ‘no-nonsense’ blog articles that get straight to the point… then post a comment or let me know… Else, I’ll start writing boring blog content.

P2V GOLD – Remove all Windows Non-Present devices at once via GUI and CLI!


Issue >> If you’ve done any type of Windows P2V (Physical 2 Virtual) then you know all about the value of removing non-present or ghosted devices. Normally non-present devices are harmless, but from time to time they can cause you an issue or two. P2V best practice is to remove non-present devices, leaving a pristine OS. The issue with removing non-present devices is the time it takes to complete the task. Currently, you have to go to the command line, enter a few commands, and then manually remove each non-present device from Device Manager (the manual method is sketched below). If you have to remove 200+ non-present devices, that could take several hours. Until now…
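For reference, the manual method looks like this from an elevated command prompt; the environment variable is Windows’ long-documented switch for exposing non-present devices in Device Manager:

set devmgr_show_nonpresent_devices=1
start devmgmt.msc
REM In Device Manager choose View > Show hidden devices, then
REM right-click each grayed-out (non-present) device and uninstall it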

Solution >> I located 3 great tools that remove all the non-present devices at once — Device Cleanup Tool (GUI based), Device Cleanup Cmd (CLI based), and GhostBuster (GUI based). All the links are below.

Other Notes >> Personally, I used the Device Cleanup Tool GUI, and I was able to remove 213 devices from my recent P2V. Not only did it clean up my OS, but it also fixed a pesky USB issue I was having.

Device Cleanup Tool V0.5 – removes non-present devices from the Windows device management

Device Cleanup CMD

GhostBuster

If you like my ‘no-nonsense’ blog articles that get straight to the point… then post a comment or let me know… Else, I’ll start writing boring blog content.

Home Lab Gen IV – Part I: To InfiniBand and beyond!


I’ve been running ESXi 5.5 with VSAN on a Netgear 24-port managed gigabit switch for some time now, and though it has performed okay, I’d like to step up my home lab to support the emerging vSphere features (VSAN 6.x, SMP-FT, and faster vMotion). For some of these features 10Gb/s is HIGHLY recommended, if not fully required. Looking at 10GbE switches and pNICs, the cost is very prohibitive for a home lab. I’ve toyed around with InfiniBand in the past (see my Xsigo posts here), and since then I’ve always wanted to use this SUPER fast and cost-effective technology. Initially, the cost to do HPC (high-performance computing) was always very expensive. However, in recent years the InfiniBand price per port has become very reasonable for the home lab.

Let’s take a quick peek at the speed InfiniBand brings. When most of us were still playing around with 100Mb/s Ethernet, InfiniBand was already delivering 10Gb/s; it has done so since 2001. When I say 10Gb/s, I’m talking about each port being able to produce 10Gb/s, and in most cases InfiniBand switches have a non-blocking backplane. So a 24-port InfiniBand switch with 10Gb/s per port, full duplex, and a non-blocking backplane will support 480Gb/s! Over time InfiniBand speeds have greatly increased, and the older switches have dropped in price, making InfiniBand a good choice for a growing home lab. For most home labs a 40Gb/s-per-port QDR switch is financially achievable. Even the 20Gb/s DDR or 10Gb/s SDR switches give ample speed and are VERY cost effective. However, step above QDR and you’ll find the price point is a bit too steep for home lab use.

So let’s take a look at the price / speed comparisons for InfiniBand vs. 10Gb/s Ethernet.

                     10Gb/s                        20Gb/s                        40Gb/s
InfiniBand HCA       2 Port 10Gb/s ($15-$75)       2 Port 20Gb/s ($20-$100)      2 Port 40Gb/s ($30-$150)
InfiniBand Switch    24 Ports SDR (~$30-$70)       24 Ports DDR (~$70-$120)      8-36 Ports QDR (~$250-$500)
InfiniBand Cable     CX4 (SFF-8470) ($15-$30)      CX4 (SFF-8470) ($15-$30)      QSFP (SFF-8436) ($15-$30)
Ethernet Switch      8 Ports 10GbE ($700-$900)     –                             –
Ethernet pNIC        2 Port 10GbE ($300-$450)      –                             –
Ethernet Cable       1M/3ft CAT 6a ($5-$10)        –                             –

Let’s break this down a bit further. I used the high dollar figure from each line item above and figured 3 x HCAs or pNICs and 6 cables (2 per host) for my 3 hosts; there is a short calculation sketch after the breakdowns below.

Ethernet 10Gb/s – (3 Host Total Cost $2310)

  • Cost per switch port – $900 switch / 8 ports = $112.50 per port
  • Cost to enable 3 hosts with 3 pNICs and 2 cables each – (3 hosts x $450 pNIC) + ((2 cables x 3 hosts) x $10 each) = $1410 for three hosts, or $470 per host
  • Total cost to enable 3 hosts plus the switch – $1410 + $900 = $2310
  • Fully populated 8-port switch supporting 4 hosts = $2780

InfiniBand SDR 10Gb/s – (3 Host Total Cost $475)

  • Cost per switch port – $70 / 24 ports = $2.92 per port
  • Host costs – (3 hosts x $75 HCA) + ((2 cables x 3 hosts) x $30 each) = $405, or $135 per host
  • Total cost to enable 3 hosts plus the switch – $405 + $70 = $475
  • Fully populated 24-port switch supporting 12 hosts = $1690

InfiniBand DDR 20Gb/s – (3 Host Total Cost $600)

  • Cost per switch port – $120 / 24 ports = $5 per port
  • Host costs – (3 hosts x $100 HCA) + ((2 cables x 3 hosts) x $30 each) = $480, or $160 per host
  • Total cost to enable 3 hosts plus the switch – $480 + $120 = $600
  • Fully populated 24-port switch supporting 12 hosts = $2040

InfiniBand QDR 40Gb/s – (3 Host Total Cost $1130)

  • Cost per switch port – $500 / 24 ports = $20.83 per port
  • Host costs – (3 hosts x $150 HCA) + ((2 cables x 3 hosts) x $30 each) = $630, or $210 per host
  • Total cost to enable 3 hosts plus the switch – $630 + $500 = $1130
  • Fully populated 24-port switch supporting 12 hosts = $3020
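To make the model explicit, here is the same arithmetic as a small PowerShell function (PowerShell simply because the PowerCLI posts already use it). The prices are the high-end figures from the table above:

# Cost for N hosts: (adapter + cables-per-host x cable price) x hosts + one switch
function Get-FabricCost ($AdapterUSD, $CableUSD, $SwitchUSD, $HostCount = 3, $CablesPerHost = 2) {
    ($AdapterUSD + $CablesPerHost * $CableUSD) * $HostCount + $SwitchUSD
}

Get-FabricCost 450 10 900   # 10GbE Ethernet = 2310
Get-FabricCost  75 30  70   # InfiniBand SDR = 475
Get-FabricCost 100 30 120   # InfiniBand DDR = 600
Get-FabricCost 150 30 500   # InfiniBand QDR = 1130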

From these costs you can clearly see that InfiniBand is TRULY the best value for speed and port price. Even if you got a great deal, let’s say 50% off on 10GbE, it would still be slower than QDR and would cost you more. Heck, for the price difference you could easily buy an extra InfiniBand switch as a backup.

With this in mind, my plan is to replace my backend GbE network with InfiniBand, using IPoIB (IP over InfiniBand) for VSAN, vMotion, and FT traffic and my 1GbE network for the VMs and ESXi management traffic. However, without knowledge, wisdom cannot be achieved. So my next steps are to learn more about InfiniBand, review these great videos by Mellanox, and then come up with a plan to move forward using this technology.

Check out these Videos: InfiniBand Principles Every HPC Expert MUST Know!