Test Lab

Gigabyte Firmware / BIOS update for MergePoint Embedded Management Software and Motherboard


You’d think by now manufacturers would have a solid and concise process for updating their products. They are quick to warn users not to update their BIOS unless there is a problem, and just as quick to state that if something goes wrong they usually won’t support it. This cycle of disservice is a constant for low-end manufacturers; heck, even some high-end server platforms have the same issues. I had these same concerns when I started looking into updating my current MX31-BS0 Motherboard (mobo).

What can soften this blow a bit? How about the ability to update your BIOS remotely? This is a great feature of the MX31-BS0, and in this blog post I’ll show you how I updated the BIOS and the remote MergePoint EMS (MP-EMS) firmware too.

Initial Steps –

  • My system is powered off, but the power supply is still feeding standby power to the mobo.
  • I have set up remote access to the MP-EMS site with an IP address and have access to it via a browser. Additionally, I have validated that the vKVM function works without issue.
  • I downloaded the correct mobo BIOS and BMC (or MP-EMS) firmware and have extracted these files.
  • The steps below were completed on a Gigabyte MX31-BS0 going from BIOS F01 > F10 and MP-EMS 8.01 > 8.41; your system may vary.

1 – Access the MergePoint EMS site

Start out by going to the IP address for the MP-EMS site. From the initial display screen, we can see the MP-EMS firmware version but not the Platform (or mobo) BIOS version. Why not, you may ask? Well, the MP-EMS will only display mobo information when the mobo is powered on. Before you power on your mobo, I would recommend opening a vKVM session so that you can see the boot screen. When you power on your mobo (MP-EMS > Power > Control > Power On), use the vKVM screen to halt at the boot menu, or even go into setup and disable all the boot devices.

In this PIC, we can see my Firmware for the MP-EMS is 8.01 and the BIOS is blank as the Mobo is not powered on.
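As an aside, AST2400-based BMCs like this one typically also speak standard IPMI 2.0 over LAN, so power control can usually be scripted instead of clicking through the web UI. A hedged sketch only (it assumes IPMI-over-LAN is enabled; the IP and credentials below are placeholders):

ipmitool -I lanplus -H 192.168.1.50 -U admin -P YourPassword chassis power status << Check the current power state
ipmitool -I lanplus -H 192.168.1.50 -U admin -P YourPassword chassis power on << Same effect as MP-EMS > Power > Control > Power On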

2 – Selecting the Mobo BIOS Update

I chose the following to update the mobo BIOS. Start out by uploading the file: Update > ‘BIOS & ME’ > Choose File > Image.RBU > Upload

Once the upload is complete, click on ‘Update’ to proceed. NOTE: a warning dialog box appeared for me stating the system would be powered off to update the BIOS. Good thing I’m sitting in the boot menu, as the system will just power off directly with no regard for the system state.

3 – Installing the Mobo BIOS Update: Be Patient for the BIOS install to complete

Once I saw the message that the ‘BIOS firmware image has been updated successfully’, I exited the browser session and vKVM. Note: I’d recommend closing the browser out entirely and then reopening a new session.


I then restarted my vKVM and MP-EMS sessions and powered on my mobo. This allowed the BIOS update to continue.

Here is the patience part – my system was going from BIOS F01 > F10, and it rebooted 2 times to complete the update. Be patient; it will complete.

Here is the behavior I noted:

  • First reboot – the system POSTed normally, cleared the screen, and then a white-text warning message stated that the BIOS had booted to default settings. Very shortly after, it rebooted again.
  • On the 2nd reboot, it POSTed normally and I pressed F10 to get back to the boot menu. I did this because next we’ll need to update the MP-EMS firmware.

Once the system had rebooted, I refreshed my MP-EMS screen and voila, there it was: BIOS Version F10.

4 – Selecting the MP-EMS Firmware

While the mobo is booted and I’m in the boot menu, I went into the MP-EMS session and chose the following: Update > BMC > Choose File > 841.img > Upload


5 – Installing the MP-EMS firmware update

Once the file was uploaded, I could see the Current and New versions. I then chose the Update button, which promptly disconnected my vKVM session, and the Status changed from None to a % Completed.

Again, be patient and allow the system to update. For my system, the % Complete seemed to hang a few times but the total process, for me, took about


At 100% complete, my system did an auto-reboot. When I heard my system beep, I closed my MP-EMS session and started anew.


Shortly after the system booted, I went into the MP-EMS and validated the firmware was now 8.41.


Wrapping this up…

Ever heard the saying “It really is a simple process, we just make it complicated”? Recent BIOS updates and overall system management sometimes feel this way when trying to do something simple. Not trying to date myself, but BIOS/firmware updates have been around for decades now. I’ve done countless updates where it was as simple as extracting an update to basic media and letting it complete on its own. Now, one could argue that systems are more complicated and that local boot devices don’t scale well for large environments, and I’d say both are very true, but that doesn’t mean the process can’t be made simpler.

My recommendation to firmware/BIOS manufacturers — invest in simplicity or make it a requirement for your suppliers. You’ll have happier customers, fewer service calls, and more $$ in your pocket, but then again, if you do, what would I have to blog about?

Am I happy with the way I have to update this mobo? Yes, I am happy with it. For the price I paid, it’s really nice to have a headless environment that I can remotely update. I won’t have to do it very often, so I’m glad I wrote down my steps in this blog.

If you like my ‘no-nonsense’ blog articles that get straight to the point… then post a comment or let me know… Else, I’ll start writing boring blog content.

Home Lab Gen IV – Part III: Best ESXi White box Mobo yet?


Initially, when I decided to refresh my Home Lab to Gen IV, I planned to wipe just the software, add InfiniBand, and keep most of the hardware. However, as I got into this transformation, I decided it was time for a hardware refresh too, including a move to All-Flash vSAN.

In this post, I want to write a bit more about my new mobo and why I think it’s a great choice for a home lab. The past workhorse of my home lab was my trusty MSI Z68MS-G45(B3) Rev 3.0 (AKA MSI-7676). I bought 3 MSI-7676 boards in 2012, and they were solid performers that treated me very well. However, they were starting to age a bit, so I sold them off to a good buddy of mine and used those funds for my new items.

My new workhorse –

Items kept from Home Lab Gen III:

  • 3 x Antec Sonata Gen I and III cases, each with a 500W PS by Antec: I’ve had one of these cases since 2003; now that is some serious return on investment

New Items:

  • 3 x Gigabyte MX31-BS0 – So feature rich, I found them for $139 each, and this is partly why I feel it’s the best ESXi white box mobo
  • 3 x Intel Xeon E3-1230 v5 – I bought the one without the GPU and saved some $$
  • 3 x 32GB DDR4 RAM – Nothing special here, just 2133MHz DDR4 RAM
  • 3 x Mellanox Connectx InfiniBand cards (More to come on this soon)
  • 4 x 200GB SSD, 1 x 64GB USB (Boot)
  • 1 x IBM M5210 JBOD SAS Controller

Why I chose the Gigabyte MX31-BS0 –

Likes:

  • Headless environment: This mobo comes with an AST2400 headless chipset. This means I am no longer tied to my KVM. With a Java-enabled browser, I can view the host screen, reboot, go into the BIOS, apply BIOS updates, view hardware, and make adjustments as if I were physically at the box
  • Virtual Media: I can now virtually mount ISOs to the ESXi host without being directly at the console (still to test the ESXi install)
  • Onboard 2D Video: No VGA card needed; the onboard video controller takes care of it all. Why is this important? You can save money by choosing a CPU that doesn’t have an integrated GPU, since the onboard video does this for you
  • vSphere HCL Support: Really? Yep, most of the components on this mobo are on the HCL, and Gigabyte lists ESXi 6 as a supported OS. It’s not 100% HCL, but for a white box it’s darn close
  • Full 16x PCIe Socket: Goes right into the CPU << Used for the Infiniband HCA
  • Full 8x PCIe Socket: Goes into the C232  << Used for the IBM M5210
  • M.2 Socket: Supporting 10Gb/s for SSD cards
  • 4 x SATA III ports (white)
  • 2 x SATA III ports that can be used as SATADOM ports (orange) with onboard power connectors
  • 2 x Intel i210 1Gbe (HCL supported) NICs
  • E3 v5 Xeon Support
  • 64GB RAM Support (ECC or Non-ECC Support)
  • 1 x Onboard USB 2.0 Port (Great for a boot drive)

Dislikes: (Very little)

  • Manual is terrible
  • The mobo power connector is horizontal to the board, which made it a bit tight in a common case
  • The 4 x SATA III ports (white) are horizontal too; again, hard to seat and maintain
  • No Audio (Really not needed, but would be nice)
  • For some installs, it could be a bit limited on PCIe Ports

Some PICs:

The pic directly below shows 2 windows: window 1 has the large Gigabyte logo; this is the headless environment control interface. From here you can control your host and launch the video viewer (window 2). The video viewer allows you to control your host just as if you were physically there. In window 2, I’m in the BIOS settings for the ESXi host.

This is a stock photo of the MX31-BS0. It’s a bit limited on PCIe ports; however, I don’t need many, since I’ll soon have 20Gb/s InfiniBand running on this board, but that is another post soon to come!

If you like my ‘no-nonsense’ blog articles that get straight to the point… then post a comment or let me know… Else, I’ll start writing boring blog content.

VSAN – Setting up VSAN Observer in my Home Lab


VSAN Observer is a slick way to display diagnostic statistics, not only around how the VSAN is performing but how the VMs are as well.

Here are the commands I entered in my Home Lab to enable and disable the Observer.

Note: this is a diagnostic tool and should not be left running for long periods of time, as it will consume many GB of disk space. Ctrl+C will stop the collection.

How to Start the collection….

  • vCenter239:~ # rvc root@localhost << Log on to the vCenter Server Appliance | Note you may have to enable SSH
  • password:
  • /localhost> cd /localhost/Home.Lab
  • /localhost/Home.Lab> cd computers/Home.Lab.C1 << Navigate to your cluster | My Datacenter is Home.Lab, and my cluster is Home.Lab.C1
  • /localhost/Home.Lab/computers/Home.Lab.C1> vsan.observer ~/computers/Home.Lab.C1 --run-webserver --force << Enter this command to get things started; keep in mind double dashes "--" are used in front of run-webserver and force
  • [2014-09-17 03:39:54] INFO WEBrick 1.3.1
  • [2014-09-17 03:39:54] INFO ruby 1.9.2 (2011-07-09) [x86_64-linux]
  • [2014-09-17 03:39:54] WARN TCPServer Error: Address already in use – bind(2)
  • Press <Ctrl>+<C> to stop observing at any point ...[2014-09-17 03:39:54] INFO WEBrick::HTTPServer#start: pid=25461 port=8010 << Note the Port and that Ctrl+C to stop
  • 2014-09-17 03:39:54 +0000: Collect one inventory snapshot
  • Query VM properties: 0.05 sec
  • Query Stats on 172.16.76.231: 0.65 sec (on ESX: 0.15, json size: 241KB)
  • Query Stats on 172.16.76.233: 0.63 sec (on ESX: 0.15, json size: 241KB)
  • Query Stats on 172.16.76.232: 0.68 sec (on ESX: 0.15, json size: 257KB)
  • Query CMMDS from 172.16.76.231: 0.74 sec (json size: 133KB)
  • 2014-09-17 03:40:15 +0000: Live-Processing inventory snapshot
  • 2014-09-17 03:40:15 +0000: Collection took 20.77s, sleeping for 39.23s
  • 2014-09-17 03:40:15 +0000: Press <Ctrl>+<C> to stop observing
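By the way, if you would rather capture the stats for offline review, vsan.observer can also write out an HTML bundle when you stop it. A hedged example (the /tmp path is just an illustration):

/localhost/Home.Lab/computers/Home.Lab.C1> vsan.observer ~/computers/Home.Lab.C1 --run-webserver --force --generate-html-bundle /tmp << Same as above, but also drops an observer HTML bundle into /tmp when you Ctrl+C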

How to stop the collection… Note: the collection has to be started and running to view the web statistics shown in the screenshots below

  • ^C2014-09-17 03:40:26 +0000: Execution interrupted, wrapping up … << Control+C is entered and the observer goes into shutdown mode
  • [2014-09-17 03:40:26] INFO going to shutdown …
  • [2014-09-17 03:40:26] INFO WEBrick::HTTPServer#start done.
  • /localhost/Home.Lab/computers/Home.Lab.C1>

How to launch the web interface…

I used Firefox to log on to the web interface of VSAN Observer; IE didn’t seem to function correctly.

Simply go to http://[IP of vCenter Server]:8010 – Note: this is the port number noted above when starting the observer, and it’s http, not https.

 

So what does it look like and what is the purpose of each screen? Note: by default the ‘? What am I looking at’ pane is not displayed; I expanded this view to enhance the description of each screenshot.

 

 

 

 

References:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2064240

http://www.yellow-bricks.com/2013/10/21/configure-virtual-san-observer-monitoring/

Home Lab – Adding freeNAS 8.3 iSCSI LUNS to ESXi 5.1


About half a year ago I set up my freeNAS iSCSI SAN, created 2 x 500GB iSCSI LUNs, and attached them to ESXi 5.1. These were ample for quite a while. However, I now have the need to add additional LUNs…. My first thought was – “Okay, okay, where are my notes on adding LUNs…” They are non-existent… Eureka! It’s time for a new blog post… So here are my new notes on adding iSCSI LUNs with freeNAS to my ESXi 5.1 home lab – as always, read and use at your own risk.

  1. Start in the FreeNAS admin webpage for your device. Choose Storage > Expand Volumes > Expand the volume you want to work with > Choose Create ZFS volume and fill out the Create Volume pop-up.

When done, click on Add and ensure it shows up under the Storage tab.


  2. In the left-hand pane, click on Services > iSCSI > Device Extents > View Device Extents. Type in your Extent Name, choose the Disk Device that you just created in Step 1, and choose OK

     

  3. Click on Associated Targets > Add Extent to Target, choose your Target, and select the new Extent

     

  4. To add the LUN to ESXi, do the following… Log into the Web Client for vCenter Server, navigate to a host > Manage > Storage > Storage Devices > Rescan Host (a command-line alternative is sketched after this list)

    If done correctly, your new LUN should show up below. TIP – ID the LUN by its location number; in this case it’s 4

  5. Ensure you’re on the Host in the left pane > Related Objects > Datastores > Add Datastore

     

  6. Type in the Name > VMFS Type > Choose the right LUN (4) > VMFS Version (5) > Partition Layout (All or Partial) > Review > Finish

     

  7. Set up multi-pathing – Select a Host > Manage > Storage > Storage Devices > Select the LUN > Scroll down the Device Details property box and choose Edit Multipathing

     

     

  8. Choose Round Robin and click on OK (an esxcli alternative is sketched after this list)

     

  9. Validate all datastores still have Round Robin enabled. There are 2 ways to do this:
    1. Click on the LUN > Paths. Status should read Active (I/O) for both paths
    2. Click on the LUN > Properties > Edit Multipathing – the Path section policy should state Round Robin (see the PIC in Step 7)
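For those comfortable with SSH, both the rescan in Step 4 and the Round Robin policy in Steps 8-9 can also be done with esxcli on the host. A hedged sketch for stock ESXi 5.x (the naa ID is a placeholder for your LUN’s device identifier):

esxcli storage core adapter rescan --all << Rescan all HBAs, including the iSCSI software adapter, for new LUNs
esxcli storage nmp device list << List devices and their current path selection policy
esxcli storage nmp device set --device <naa.id> --psp VMW_PSP_RR << Set the path selection policy to Round Robin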

     

     

    Summary – These steps worked like a charm for me; then again, my environment was already set up. Hopefully these steps are helpful to you too.

ESXi Q&A Boot Options – USB, SD, & HD


Here are some of my notes around boot options for ESXi.

The post covers a lot of information especially around booting to SD or USB.

Enjoy!

What are the Options to install ESXi?

  • Interactive ESXi Installation
  • Scripted ESXi Installation
  • vSphere Auto Deploy ESXi Installation Option – vSphere 5 Only
  • Customizing Installations with ESXi Image Builder CLI – vSphere 5 Only

 

What are the boot media options for ESXi Installs?

The following boot media are supported for the ESXi installer:

  • Boot from a CD/DVD
  • Boot from a USB flash drive.
  • PXE boot from the network. PXE Booting the ESXi Installer
  • Boot from a remote location using a remote management application.

     

What are the acceptable targets to install/boot ESXi to and are there any dependencies?

ESXi 5.0 supports installing on and booting from the following storage systems:

  • SATA disk drives – SATA disk drives connected behind supported SAS controllers or supported on-board SATA controllers.
    • Note – ESXi does not support using local, internal SATA drives on the host server to create VMFS datastores that are shared across multiple ESXi hosts.
  • Serial Attached SCSI (SAS) disk drives. Supported for installing ESXi 5.0 and for storing virtual machines on VMFS partitions.
  • Dedicated SAN disk on Fibre Channel or iSCSI
  • USB devices. Supported for installing ESXi 5.0. For a list of supported USB devices, see the VMware Compatibility Guide at http://www.vmware.com/resources/compatibility.

 

Storage Requirements for ESXi 5.0 Installation

  • Installing ESXi 5.0 requires a boot device that is a minimum of 1GB in size.
  • When booting from a local disk or SAN/iSCSI LUN, a 5.2GB disk is required to allow for the creation of the VMFS volume and a 4GB scratch partition on the boot device.
  • If a smaller disk or LUN is used, the installer will attempt to allocate a scratch region on a separate local disk.
  • If a local disk cannot be found the scratch partition, /scratch, will be located on the ESXi host ramdisk, linked to /tmp/scratch.
  • You can reconfigure /scratch to use a separate disk or LUN. For best performance and memory optimization, VMware recommends that you do not leave /scratch on the ESXi host ramdisk.
    • To reconfigure /scratch, see Set the Scratch Partition from the vSphere Client.
    • Due to the I/O sensitivity of USB and SD devices the installer does not create a scratch partition on these devices. As such, there is no tangible benefit to using large USB/SD devices as ESXi uses only the first 1GB.
    • When installing on USB or SD devices, the installer attempts to allocate a scratch region on an available local disk or datastore.
    • If no local disk or datastore is found, /scratch is placed on the ramdisk. You should reconfigure /scratch to use a persistent datastore following the installation (one way to do this from the ESXi shell is sketched below).
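For reference, here is a minimal sketch of repointing /scratch from the ESXi shell; the datastore path and directory name are assumptions for illustration, and the change takes effect after a reboot:

mkdir /vmfs/volumes/datastore1/.locker-esx01 << Create a persistent scratch directory
vim-cmd hostsvc/advopt/update ScratchConfig.ConfiguredScratchLocation string /vmfs/volumes/datastore1/.locker-esx01 << Point /scratch at it
reboot << Takes effect on the next boot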

10 Great things to know about booting ESXi from USB – http://blogs.vmware.com/esxi/2011/09/booting-esxi-off-usbsd.html <<< This is worth a read and should clear up a LOT of questions….

How do we update a USB Boot Key?

It follows the same procedure as any other install or upgrade; to the infrastructure, a USB-booted host acts the same.
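For example, on an ESXi 5.x host booted from USB, an offline-bundle patch applies just as it would on a disk-booted host. A hedged sketch (the depot path and profile name are illustrative):

esxcli software sources profile list -d /vmfs/volumes/datastore1/ESXi500-update-depot.zip << List the image profiles inside the downloaded depot
esxcli software profile update -d /vmfs/volumes/datastore1/ESXi500-update-depot.zip -p ESXi-5.0.0-standard << Apply the chosen profile, then reboot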

Can an ESXi host access USB devices? i.e., can an external USB hard disk be connected directly to the ESXi host for copying data?

  • Yes, this can be done; see the KB below – ‘Accessing USB storage and other USB devices from the service console’
  • However, the technology that supports USB device pass-through from an ESX/ESXi host to a virtual machine does not support simultaneous USB device connections from USB pass-through and from the service console.
  • This means the host is in either pass-through (to the VM) or service console mode.

References –

vSphere 5 Documentation Center (Mainly Under ‘vSphere Installation and Setup’)

http://pubs.vmware.com/vsphere-50/index.jsp?topic=/com.vmware.vsphere.install.doc_50/GUID-33C3E7D5-20D0-4F84-B2E3-5CD33D32EAA8.html

 

Installing ESXi Installable onto a USB drive or SD flash card

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1020655

 

USB support for ESX/ESXi 4.1 and ESXi 5.0

http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=ex&bbid=TSEBB_1297203662351&url=&stateId=0 0 319975740&dialogID=319971446&docTypeID=DT_KB_1_1&externalId=1022290&sliceId=1&rfId=

 

VMware support for USB/SD devices used for installing VMware ESXi

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1010574

 

Installing ESXi 5.0 on a supported USB flash drive or SD flash card

http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&externalId=2004784&sliceId=1&docTypeID=DT_KB_1_1&dialogID=319971409&stateId=0 0 319975522

 

Accessing USB storage and other USB devices from the service console

http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&externalId=1023976&sliceId=1&docTypeID=DT_KB_1_1&dialogID=319971551&stateId=0 0 319979288


 

Update to my Home Lab with VMware Workstation 8 – Part 2 Fun with a Windows 7 Installer


Part 1 of this series outlined the hardware I wanted to purchase and some of the ideas I had around the products.

I created an image of the current install of Windows 7, then booted it on my new hardware, and to my surprise there weren’t any hidden files or drivers that needed adjusting.

It worked quite well; so well it was scary, but simply impressive…. Sure beats those old XP days when you had to just about tear the OS apart to get it to work.

However, I would like this install of Workstation 8 to run on a fresh copy of Windows 7, so I have decided to reinstall.

Now, this shouldn’t warrant a blog post; however, the way I had to get Windows 7 to behave is why I’m posting.

In this post I go into getting Windows 7 to install properly when you don’t have a proper installation CD.

 

The CD I own for Windows 7 is a Windows-based installation only; you cannot create a boot CD to install the OS fresh.

Trust me, I tried many ways, but it just doesn’t work…

 

Here is what I wanted to accomplish –

1. I’d like a fresh copy of Windows 7 Installed on to my system

2. I need to enable AHCI in my system BIOS (for more info see here http://en.wikipedia.org/wiki/Advanced_Host_Controller_Interface)

I found on the Corsair blogs that my SSD will run much better with AHCI enabled in the BIOS.

Unfortunately, this pretty much demands a reinstall. I’m okay with that because a reinstall is what I want to do anyway.
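Side note: I’ve read that Windows 7 can often be switched to AHCI without a reinstall by enabling the in-box msahci driver first and then flipping the BIOS. A hedged sketch from an elevated command prompt (reboot afterward, then enable AHCI):

reg add HKLM\SYSTEM\CurrentControlSet\services\msahci /v Start /t REG_DWORD /d 0 /f << Sets the AHCI driver to load at boot

In my case I wanted the fresh install anyway, so I didn’t go this route.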

 

Issues –

1. The version of Windows 7 I have is an upgrade or restore only version.

2. Currently AHCI is not enabled in my BIOS

 

Here’s how I did it… Oh, did it take some trickery and learning but it worked..

Know this…

Windows 7 will do a recovery install to your current HD (C:) or to a new HD (E:).

If you install to your current HD (C:), it will install into a WINDOWS.001 folder and leave lots of old files lying around.

Not ideal, as I want a pristine install.

Do this…

From Windows I initiated the install, chose a custom install, and chose my E: drive (at the time, E: was just a blank HD).


Windows did its typical install, copying files, and then rebooted the system.

During the reboot I enabled AHCI on ALL controllers in the BIOS << THIS IS a VERY important step; if you miss it, Windows will install in IDE mode

Windows completed the install and booted to the E: drive.

Having E:\ be the boot drive with Windows in E:\Windows is not ideal. I really want Windows 7 on my C: drive.

I formatted my C: drive and ran the Windows install again, only this time choosing the C: HD.

Windows completed the install and rebooted.

 

When I was done, Windows 7 was a fresh install running on the C: drive.

 

Summary…

I’ve got to tell you, it was a chore figuring this out. It seems very simple now, but I went through imaging processes, partition changes, drive renames, lots of blog posts, KBs, etc…

Nothing worked well, and it took up hours of my time. This pattern worked for me; Windows 7 installed properly and it’s working quite well.

Now it’s on to installing Workstation 8…

Test Lab – Day 5 Expanding the IOMega ix12-300r


Recently I installed an IOMega ix12-300r for our ESX test lab and it’s doing quite well.

However, I wanted to push our IOMega to about 1Gb/s of sustained NFS traffic out of the available 2Gb/s.

To do this I needed to expand our 2 drive storage pool to 4 drives.

 

From a previous post we created 3 storage pools as seen below.

Storage Pool 0 (SP0) 4 Drives for basic file shares (CIFS)

Storage Pool 1 (SP1_NFS) 2 drives for ESX NFS Shares only

Storage Pool 2 (SP2_iSCSI) 2 drives dedicated for ESX iSCSI only

In this post I’m going to delete the unused SP2_iSCSI pool and add those drives to SP1_NFS.

Note: This procedure is simply the pattern I used in my environment. I’m not stating this is the right way but simply the way it was done. I don’t recommend you use this pattern or use it for any type of validation. These are simply my notes, for my personal records, and nothing more.

 

Under Settings, select Storage Pools.

 

Select the Trash Can icon to delete the storage pool.

It prompted me to confirm and it deleted the storage pool.

Next I chose the Edit icon on SP1_NFS, selected the drives I wanted, chose RAID 5, and pressed Apply.

From there it started to expand the 2-disk RAID 1 into a 4-disk RAID 5 storage pool.

Screenshot from the IOMega ix12 while it is being expanded…

 

I then went to the Dashboard, and under Status I could view the progress…

 

ALL this with NO downtime to ESX; in fact, I’m writing this post from a VM as the expansion is happening.

It took about 11 Hours to rebuild the RAID set.

Tip: Use the event log under settings to determine how long the rebuild took.

The next day I checked in on ESX and it was reporting the updated store size.

 

Summary…

Being able to expand the storage pool that houses your ESXi test environment with no downtime is extremely beneficial and a very cool feature.

Once again IOMega is living up to its tag line – IOmega Kicks NAS!

Tomorrow we’ll see how it performs when we push a higher load to it.

Test Lab – Day 4 Xsigo Redundancy testing with ESXi


Today I tested Xsigo redundancy capabilities within the ESXi test environment.

So far I have built up an environment with 4 x ESXi 4.1 hosts, each with a Single VM, and 2 Xsigo VP780’s.

Each VM is pretty much idle for this test; however, tomorrow I plan to introduce some heavier IP and NFS traffic and re-run the tests below.

I used a Laptop and the ESXi console in tech support mode to capture the results.
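For reference, the monitoring itself was nothing fancy; here is a sketch of the kinds of commands used (the IPs are placeholders):

ping -t 192.168.10.31 << From the Windows laptop: continuous ping against a host, VM, or vCenter
vmkping 192.168.20.50 << From the ESXi console in tech support mode: ping the NFS storage over the vmkernel interface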

Keep in mind this deployment is a SINGLE site scenario.

This means both Xsigo are considered at the same site and each ESXi host is connected to the A & B Xsigo.


Note: This test procedure is simply the pattern I used to test my environment. I’m not stating this is the right way to test an environment but simply the way it was done. I don’t recommend you use this pattern to test your systems or use it for validation. These are simply my notes, for my personal records, and nothing more.

Reminder:

XNA, XNB are Xsigo Network on Xsigo Device A or B and are meant for IP Network Traffic.

XSA, XSB are Xsigo Storage or NFS on Xsigo Device A or B and are meant for NFS Data Traffic.

Test 1 – LIVE I/O Card Replacement for Bay 10 for IP Networking

Summary –

Xsigo A sent a message to Xsigo support stating the I/O Module had an issue. Xsigo support contacted me and mailed out the replacement module.

The affected module controls the IP network traffic (VM, Management, vMotion).

Usually, an I/O Module going out is bad news. However, this is a POC (Proof of Concept) so I used this “blip” to our advantage and captured the test results.

Device – Xsigo A

Is the module to be affected currently active? Yes

Pre-Procedure –

Validate by Xsigo CLI – ‘show vnics’ to see if vnics are in the up state – OKAY

Ensure ESX Hosts vNICs are in Active mode and not standby – OKAY

Ensure ESX Hosts network configuration is setup for XNA and XNB in Active Mode – OKAY

Procedure –

Follow replacement procedure supplied with I/O Replacement Module

Basic Steps supplied by Xsigo –

  • Press the Eject button for 5 seconds to gracefully shut down the I/O card
  • Wait for the LED to go solid blue
  • Remove the card
  • Insert the new card
  • Wait for the I/O card to come online; the LED will go from blue to yellow/green
    • The Xsigo VP780 will update the card as needed (firmware & attached vNICs)
  • Once the card is online, you’re ready to go

Expected results –

All active IP traffic for ESXi (including VM’s) will continue to pass through XNB

All active IP traffic for ESXi (including VM’s) might see a quick drop depending on which XN# is active

vCenter Server should show XNA as unavailable until new I/O Module is online

The I/O Module should take about 5 Minutes to come online

How I will quantify results –

All active IP traffic for ESXi (including VM’s) will continue to pass through XNB

  • Active PING to ESXi Host (Management Network, VM’s) and other devices to ensure they stay up

All active IP traffic for ESXi (including VM’s) might see a quick drop depending on which XN# is active

  • Active PING to ESXi Host (Management Network, VM’s)

vCenter Server should show XNA as unavailable until new I/O Module is online

  • In vCenter Server under Network Configuration check to see if XNA goes down and back to active

The I/O Module should take about 5 Minutes to come online

  • I will monitor the I/O Module to see how long it takes to come online

Actual Results –

Pings –

From Device | Destination Device | Type | Result During | Result Coming Online / After
External Laptop | Windows 7 VM | VM | No Ping Loss | No Ping Loss
External Laptop | vCenter Server | VM | One Ping Loss | No Ping Loss
External Laptop | ESX Host 1 | ESX | One Ping Loss | One Ping Loss
External Laptop | ESX Host 2 | ESX | One Ping Loss | One Ping Loss
External Laptop | ESX Host 3 | ESX | No Ping Loss | One Ping Loss
External Laptop | ESX Host 4 | ESX | No Ping Loss | No Ping Loss
ESX Host | IOMega Storage | NFS | No Ping Loss | No Ping Loss


From vCenter Server –

XNA status showing down during module removal on all ESX Hosts

vCenter Server triggered the ‘Network uplink redundancy lost’ – Alarm

I/O Module Online –

The I/O Module took about 4 minutes to come online.

Test 1 Summary –

All results were as expected. There was only very minor ping loss, which for us is nothing to be worried about.

Test 2 – Remove fibre 10gig Links on Bay 10 for IP Networking

Summary –

This test will simulate fibre connectivity going down for the IP network traffic.

I will simulate the outage by disconnecting the fibre connection from Xsigo A, measure/record the results, return the environment to normal, and then repeat for Xsigo B.

Device – Xsigo A and B

Is this device currently active? Yes

Pre-Procedure –

Validate by Xsigo CLI – ‘show vnics’ to see if the vnics are in the up state

  • Xsigo A and B are reporting both I/O Modules are functional

Ensure ESX Host vNICs are in Active mode and not standby

  • vCenter server is reporting all communication is normal

Procedure –

Remove the fibre connection from I/O Module in Bay 10 – Xsigo A

Measure results via Ping and vCenter Server

Replace the cable, ensure system is stable, and repeat for Xsigo B device

Expected results –

All active IP traffic for ESXi (including VM’s) will continue to pass through the redundant XN# adapter

All active IP traffic for ESXi (including VM’s) might see a quick drop if its traffic is flowing through the affected adapter.

vCenter Server should show XN# as unavailable until fibre is reconnected

How I will quantify results –

All active IP traffic for ESXi (including VM’s) will continue to pass through the redundant XN# adapter

  • Using PING the ESXi Hosts (Management Network, VM’s) and other devices to ensure they stay up

All active IP traffic for ESXi (including VM’s) might see a quick drop depending on which XN# is active

  • Active PING to ESXi Host (Management Network, VM’s)

vCenter Server should show XN# as unavailable until the fibre is reconnected

  • In vCenter Server under Network Configuration, check to see if the affected XN# goes down and comes back to active

Actual Results –

Xsigo A Results…

Pings –

From Device | Destination Device | Type | Result During | Result Coming Online / After
External Laptop | Windows 7 VM | VM | No Ping Loss | No Ping Loss
External Laptop | vCenter Server | VM | No Ping Loss | One Ping Loss
External Laptop | ESX Host 1 | ESX | One Ping Loss | No Ping Loss
External Laptop | ESX Host 2 | ESX | No Ping Loss | One Ping Loss
External Laptop | ESX Host 3 | ESX | No Ping Loss | One Ping Loss
External Laptop | ESX Host 4 | ESX | One Ping Loss | One Ping Loss
ESX Host | IOMega Storage | NFS | No Ping Loss | No Ping Loss


From vCenter Server –

XNA status showing down during fibre removal on all ESX Hosts

vCenter Server triggered the ‘Network uplink redundancy lost’ – Alarm

Xsigo B Results…

Pings –

From Device | Destination Device | Type | Result During | Result Coming Online / After
External Laptop | Windows 7 VM | VM | One Ping Loss | One Ping Loss
External Laptop | vCenter Server | VM | No Ping Loss | No Ping Loss
External Laptop | ESX Host 1 | ESX | No Ping Loss | No Ping Loss
External Laptop | ESX Host 2 | ESX | No Ping Loss | No Ping Loss
External Laptop | ESX Host 3 | ESX | No Ping Loss | No Ping Loss
External Laptop | ESX Host 4 | ESX | No Ping Loss | One Ping Loss
ESX Host | IOMega Storage | NFS | No Ping Loss | No Ping Loss


From vCenter Server –

XNB status showing down during fibre removal on all ESX Hosts

vCenter Server triggered the ‘Network uplink redundancy lost’ – Alarm

Test 2 Summary –

All results were as expected. There was only very minor ping loss, which for us is nothing to be worried about.

Test 3 – Remove fibre 10g Links to NFS

Summary –

This test will simulate fibre connectivity going down for the NFS network.

I will simulate the outage by disconnecting the fibre connection from Xsigo A, measure/record the results, return the environment to normal, and then repeat for Xsigo B.

Device – Xsigo A and B

Is this device currently active? Yes

Pre-Procedure –

Validate by Xsigo CLI – ‘show vnics’ to see if the vnics are in the up state

  • Xsigo A and B are reporting both I/O Modules are functional

Ensure ESX Host vNICs are in Active mode and not standby

  • vCenter server is reporting all communication is normal

Procedure –

Remove the fibre connection from I/O Module in Bay 11 – Xsigo A

Measure results via Ping, vCenter Server, and check for any VM GUI hesitation.

Replace the cable, ensure system is stable, and repeat for Xsigo B device

Expected results –

All active NFS traffic for ESXi (including VM’s) will continue to pass through the redundant XS# adapter

All active NFS traffic for ESXi (including VM’s) might see a quick drop if its traffic is flowing through the affected adapter.

vCenter Server should show XS# as unavailable until fibre is reconnected

I don’t expect ESXi to take any of the NFS datastores offline

How I will quantify results –

All active NFS traffic for ESXi (including VM’s) will continue to pass through the redundant XS# adapter

  • Active PING to ESXi Host (Management Network, VM’s) and other devices to ensure they stay up

All active NFS traffic for ESXi (including VM’s) might see a quick drop depending on which XS# is active

  • Active PING to ESXi Host (Storage, Management Network, VM’s)

vCenter Server should show XS# as unavailable until fibre is reconnected

  • In vCenter Server under Network Configuration check to see if XS# goes down and back to active

I don’t expect ESXi to take any of the NFS datastores offline

  • In vCenter Server under storage, I will determine if the store goes offline

Actual Results –

Xsigo A Results…

Pings –

From Device | Destination Device | Type | Result During | Result Coming Online / After
External Laptop | Windows 7 VM | VM | No Ping Loss | No Ping Loss
External Laptop | vCenter Server | VM | No Ping Loss | No Ping Loss
External Laptop | ESX Host 1 | ESX | No Ping Loss | No Ping Loss
External Laptop | ESX Host 2 | ESX | No Ping Loss | No Ping Loss
External Laptop | ESX Host 3 | ESX | No Ping Loss | No Ping Loss
External Laptop | ESX Host 4 | ESX | No Ping Loss | No Ping Loss
ESX Host | IOMega Storage | NFS | No Ping Loss | Two Ping Loss


From vCenter Server –

XSA & XSB status showing down during fibre removal on all ESX Hosts

vCenter Server triggered the ‘Network uplink redundancy lost’ – Alarm

No VM GUI Hesitation reported

Xsigo B Results…

Pings –

From Device | Destination Device | Type | Result During | Result Coming Online / After
External Laptop | Windows 7 VM | VM | No Ping Loss | No Ping Loss
External Laptop | vCenter Server | VM | No Ping Loss | No Ping Loss
External Laptop | ESX Host 1 | ESX | No Ping Loss | No Ping Loss
External Laptop | ESX Host 2 | ESX | No Ping Loss | No Ping Loss
External Laptop | ESX Host 3 | ESX | No Ping Loss | No Ping Loss
External Laptop | ESX Host 4 | ESX | No Ping Loss | No Ping Loss
ESX Host | IOMega Storage | NFS | No Ping Loss | No Ping Loss


From vCenter Server –

XSB status showing down during fibre removal on all ESX Hosts

vCenter Server triggered the ‘Network uplink redundancy lost’ – Alarm

No VM GUI Hesitation reported

Test 3 Summary –

All results were as expected. There was only very minor ping loss, which for us is nothing to be worried about.

Test 4 – Remove InfiniBand cables from the ESXi HBAs

Summary –

During this test, I will remove all the InfiniBand cables (4 of them) from the ESXi HBAs.

I will disconnect the Infiniband connection to Xsigo A first, measure/record the results, return the environment to normal, and then repeat for Xsigo B.

Pre-Procedure –

Validate by Xsigo CLI – ‘show vnics’ to see if the vnics are in the up state

  • Xsigo A and B are reporting both I/O Modules are functional

Ensure ESX Host vNICs are in Active mode and not standby

  • vCenter server is reporting all communication is normal

Procedure –

Remove the InfiniBand cable from each ESXi Host attaching to Xsigo A

Measure results via Ping, vCenter Server, and check for any VM GUI hesitation.

Replace the cables, ensure system is stable, and repeat for Xsigo B device

Expected results –

ALL active traffic (IP or NFS) for ESXi (including VM’s) will continue to pass through the redundant XNB or XSB accordingly.

All active traffic (IP or NFS) for ESXi (including VM’s) might see a quick drop if its traffic is flowing through the affected adapter.

vCenter Server should show XNA and XSA as unavailable until cable is reconnected

I don’t expect ESXi to take any of the NFS datastores offline

How I will quantify results –

ALL active traffic (IP or NFS) for ESXi (including VM’s) will continue to pass through the redundant XNB or XSB accordingly.

  • Active PING to ESXi Host (Management Network, VM’s) and other devices to ensure they stay up

All active traffic (IP or NFS) for ESXi (including VM’s) might see a quick drop if its traffic is flowing through the affected adapter.

  • Active PING to ESXi Host (Storage, Management Network, VM’s)

vCenter Server should show XNA and XSA as unavailable until cable is reconnected

  • In vCenter Server under Network Configuration, check to see if XNA and XSA go down and come back to active

I don’t expect ESXi to take any of the NFS datastores offline

  • In vCenter Server under storage, I will determine if the store goes offline

Actual Results –

Xsigo A Results…

Pings –

From Device | Destination Device | Type | Result During | Result Coming Online / After
External Laptop | Windows 7 VM | VM | No Ping Loss | Two Ping Loss
External Laptop | vCenter Server | VM | No Ping Loss | No Ping Loss
External Laptop | ESX Host 1 | ESX | No Ping Loss | No Ping Loss
External Laptop | ESX Host 2 | ESX | No Ping Loss | No Ping Loss
External Laptop | ESX Host 3 | ESX | No Ping Loss | No Ping Loss
External Laptop | ESX Host 4 | ESX | No Ping Loss | No Ping Loss
ESX Host | IOMega Storage | NFS | No Ping Loss | No Ping Loss


From vCenter Server –

XSA & XNA status showing down during cable removal on all ESX Hosts

vCenter Server triggered the ‘Network uplink redundancy lost’ – Alarm

No VM GUI Hesitation reported

NFS Storage did not go offline

Xsigo B Results…

Pings –

From Device | Destination Device | Type | Result During | Result Coming Online / After
External Laptop | Windows 7 VM | VM | No Ping Loss | No Ping Loss
External Laptop | vCenter Server | VM | One Ping Loss | One Ping Loss
External Laptop | ESX Host 1 | ESX | No Ping Loss | No Ping Loss
External Laptop | ESX Host 2 | ESX | No Ping Loss | One Ping Loss
External Laptop | ESX Host 3 | ESX | No Ping Loss | No Ping Loss
External Laptop | ESX Host 4 | ESX | One Ping Loss | No Ping Loss
ESX Host | IOMega Storage | NFS | No Ping Loss | No Ping Loss


From vCenter Server –

XNB & XSB status showing down during cable removal on all ESX Hosts

vCenter Server triggered the ‘Network uplink redundancy lost’ – Alarm

NFS Storage did not go offline

Test 4 Summary –

All results were as expected. There was only very minor ping loss, which for us is nothing to be worried about.

Test 5 – Pull Power on the active Xsigo VP780

Summary –

During this test, I will remove all the power cords from Xsigo A.

I will disconnect the power cords from Xsigo A first, measure/record the results, return the environment to normal, and then repeat for Xsigo B.

Pre-Procedure –

Validate by Xsigo CLI – ‘show vnics’ to see if the vnics are in the up state

  • Xsigo A and B are reporting both I/O Modules are functional

Ensure ESX Host vNICs are in Active mode and not standby

  • vCenter server is reporting all communication is normal

Procedure –

Remove power cables from Xsigo A

Measure results via Ping, vCenter Server, and check for any VM GUI hesitation.

Replace the cables, ensure system is stable, and repeat for Xsigo B device

Expected results –

ALL active traffic (IP or NFS) for ESXi (including VM’s) will continue to pass through the redundant XNB or XSB accordingly.

All active traffic (IP or NFS) for ESXi (including VM’s) might see a quick drop if its traffic is flowing through the affected adapter.

vCenter Server should show XNA and XSA as unavailable until power is restored

I don’t expect ESXi to take any of the NFS datastores offline

How I will quantify results –

ALL active traffic (IP or NFS) for ESXi (including VM’s) will continue to pass through the redundant XNB or XSB accordingly.

  • Active PING to ESXi Host (Management Network, VM’s) and other devices to ensure they stay up

All active traffic (IP or NFS) for ESXi (including VM’s) might see a quick drop if its traffic is flowing through the affected adapter.

  • Active PING to ESXi Host (Storage, Management Network, VM’s)

vCenter Server should show XNA and XSA as unavailable until power is restored

  • In vCenter Server under Network Configuration, check to see if XNA and XSA go down and come back to active

I don’t expect ESXi to take any of the NFS datastores offline

  • In vCenter Server under storage, I will determine if the store goes offline

Actual Results –

Xsigo A Results…

Pings –

From Device | Destination Device | Type | Result During | Result Coming Online / After
External Laptop | Windows 7 VM | VM | No Ping Loss | No Ping Loss
External Laptop | vCenter Server | VM | One Ping Loss | One Ping Loss
External Laptop | ESX Host 1 | ESX | One Ping Loss | One Ping Loss
External Laptop | ESX Host 2 | ESX | One Ping Loss | One Ping Loss
External Laptop | ESX Host 3 | ESX | No Ping Loss | No Ping Loss
External Laptop | ESX Host 4 | ESX | No Ping Loss | One Ping Loss
ESX Host | IOMega Storage | NFS | No Ping Loss | No Ping Loss


From vCenter Server –

XSA & XNA status showing down during the removal on all ESX Hosts

vCenter Server triggered the ‘Network uplink redundancy lost’ – Alarm

No VM GUI Hesitation reported

NFS Storage did not go offline

Xsigo B Results…

Pings –

From Device | Destination Device | Type | Result During | Result Coming Online / After
External Laptop | Windows 7 VM | VM | One Ping Loss | No Ping Loss
External Laptop | vCenter Server | VM | One Ping Loss | One Ping Loss
External Laptop | ESX Host 1 | ESX | No Ping Loss | No Ping Loss
External Laptop | ESX Host 2 | ESX | No Ping Loss | One Ping Loss
External Laptop | ESX Host 3 | ESX | One Ping Loss | One Ping Loss
External Laptop | ESX Host 4 | ESX | One Ping Loss | No Ping Loss
ESX Host | IOMega Storage | NFS | One Ping Loss | No Ping Loss


From vCenter Server –

XNB & XSB status showing down during power removal on all ESX Hosts

vCenter Server triggered the ‘Network uplink redundancy lost’ – Alarm

No VM GUI Hesitation reported

NFS Storage did not go offline

Test 5 Summary –

All results were as expected. There was only very minor ping loss, which for us is nothing to be worried about.

It took about 10 minutes for the Xsigo to come up and be back online, from the point I pulled the power cords to the point ESXi reported the vNICs were online.

Overall Thoughts…

Under very low load, the Xsigo performed as expected with ESXi. So far the redundancy testing is going well.

Tomorrow I plan to place a pretty hefty load on the Xsigo and IOMega to see how they will perform under the same conditions.

I’m looking forward to seeing if the Xsigo can perform just as well under load.

Trivia Question…

How do you know if someone has rebooted and watched an Xsigo boot?

This very cool logo comes up on the bootup screen! Now that’s Old School and very cool!

Test Lab – Day 2 CLI with the Xsigo!


Yesterday I did about 90% of the hardware install. Today, Day 2, our Xsigo SE will be here to assist with the installation and configuration of the Xsigo with the ESX hosts.

Today’s Goals..

  • Install the 2nd Xsigo VP780
  • Install VMware ESXi 4.1 on 4 servers with the Xsigo drivers
  • Configure both Xsigo VP780’s

 

Install 2nd Xsigo VP780…

Day 2 started out with a gift from Mr. FedEx: the parts we needed to install the 2nd Xsigo. Only yesterday afternoon we discovered we were missing some power cords and mounting brackets. A couple of quick calls to Xsigo and voila, parts were on their way. Props to Xsigo for a VERY quick response to this issue!

Based on the lessons learned from Day 1, we mounted the 2nd Xsigo VP780 and it went much smoother. Notice the WE part of installing the VP780: these things are heavy and large, and you’ll need some help (or a giant with huge hands) to install them into a rack. See the install manual for more information.

When we powered them up I was amazed by the amount of air they moved through the device >> Very NICE!

Keep in mind that at this point all the test lab hardware, including the Xsigo fibre modules (2 x 10gig fibre modules per device) and networking, is mounted and interconnected…

 

Install VMware ESXi 4.1 on 4 servers with Xsigo Drivers…

You’ll need the Xsigo drivers installed for ESXi to recognize the InfiniBand cards and for proper communication.

There are two installation options…

  1. Install ESXi 4.1 and add the Xsigo Drivers after the install.
  2. Download the drivers and re-master the ESXi ISO yourself (a good option if you’re building / rebuilding lots of servers)

We chose to re-master the ESXi ISO with the Xsigo drivers.

Here is the link to master the ISO

I won’t bore you with the details of installing ESXi; however, the only gotcha I ran into was the Dell R5400 SATA RAID controller.

I set up a SATA RAID group; during the ESXi install it recognized the RAID volume, and ESXi installed to it without issue.

However after the reboot of the host it would not boot to this volume.

I didn’t have time to troubleshoot, for now we just broke the RAID group, reinstalled, and it worked perfectly.

ESXi Management NICs…

Our test lab network will be isolated from production network traffic. However, one of our servers will need to be in the production environment, so we set up one physical NIC (pNIC) on the production network. This will allow us to temporarily transfer VMs from production to test; we’ll then disconnect this pNIC and set up ESXi to use the Xsigo NIC for management.

(More to come on this on Day 3)

 

Configure both Xsigo VP780’s…

Configuring the VP780 was very simple. We attached a laptop to the Xsigo, and in about 20 commands our Xsigo was up and running.

These are the basic commands we used to set up our pair of Xsigo’s (A and B); the commands below reflect B only.

The commands would be the same for the A Xsigo; simply change the appropriate parameters…

NOTE: I don’t recommend you execute these commands in your environment, keep in mind these are for my reference ONLY… I also recommend you contact your Xsigo representative for assistance.

 

Here are the commands we executed..

 

Getting into the Xsigo VP780…

We used a standard Xsigo provided rollover cable plugged into Serial1. (Serial2 is for Tech / Debug – Don’t use)

We connected to the console via PuTTY or Absolute Telnet (COM settings are 115200,8,1,None,None).

Tip: All default passwords are in the CLI Config Guide by Xsigo

 

Setup the Xsigo via the Wizard…

Once connected, we used the XgOS Configuration Wizard and entered the following…

Welcome to XgOS

Copyright (c) 2007-2010 Xsigo Systems, Inc. All rights reserved.

 

Enter “help” for information on available commands.

 

Would you like to use the XgOS Configuration Wizard? [Y/n]

Hostname: xsigo-b

Domain: YOURDOMAIN.COM

Is this Director to be designated as the IB subnet manager (leave as Y unless using an external, non-Xsigo subnet manager) ? [Y/n]

Do you want this Director to send diagnostic data to Xsigo periodically? [Y/n]

Please input the ‘root’ password: ****

Please confirm the ‘root’ password: ****

Please input the ‘admin’ password: *****

Please confirm the ‘admin’ password: *****

Please input the ‘recovery-password’: ****

Please confirm the ‘recovery-password’: ****

IP Address [static/DHCP]: 555.555.555.555

IP Address [static/DHCP]:

Enter NTP Server 1: 555.555.555.555

Enter NTP Server 2:

Enter Timezone [<Tab><Tab> for the list of Timezones]: America_Phoenix

Welcome to XgOS

Copyright (c) 2007-2010 Xsigo Systems, Inc. All rights reserved.

 

Enter “help” for information on available commands.

admin@xsigo-b[xsigo]

 

Now it’s time to set up the Xsigo…

Place the Xsigo into Trunk Mode..

Port 10 and Port 11 are the 10gig Fibre Modules; this command places them in Trunk Mode

set ethernet-port 10/1 -mode=trunk << Port 10 will be used for our IP Network (Vlans for Guests, vmotion, hosts, etc)

set ethernet-port 11/1 -mode=trunk << Port 11 will be used for our NFS

Rear of VP780

Ensure Trunk Mode is activated..

Use the command ‘show ethernet-port ‘

admin@xsigo-b[xsigo] show ethernet-port

 

name type state descr mode flags lag access-vlan vnics vlans

——————————————————————————-

10/1 nwEthernet10GbPort up/up trunk -s— 1 0 none

11/1 nwEthernet10GbPort up/up trunk -s— 1 0 none

2 records displayed

 

Setup Phone Home for Support…

set system phone-home -customer-name="YOUR COMPANY NAME HERE"

set system phone-home -contact-email-address=YOURNAME@YOURDOMAIN.COM

set system phone-home -contact-phone-numbers="555-555-5555"

set system phone-home proxy [YOUR PROXY IP HERE] [PROXY PORT if needed, default is 3128]

Note: For this command the syntax is [PROXY IP Address], one space, [PROXY PORT]; don’t use ‘:’ as the separator.
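So, with the placeholders filled in, a hypothetical proxy entry would look like this (the IP and port are illustrative):

set system phone-home proxy 10.1.1.5 3128 << [PROXY IP] space [PROXY PORT]; no ':' separator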

 

Once completed, confirm your information…

Enter the command ‘show system phone-home’

admin@xsigo-b[xsigo] show system phone-home

——————————————————————————-

enabled true

freq weekly

next Fri Jan 14 12:44:52 MST 2011

notify no

strip yes

alarm yes

name COMPANYNAME

email EMAIL@EMAIL.com

phone 5555555555

copy

p-host 555.555.555.555:3128

p-user

——————————————————————————-

1 record displayed

admin@xsigo-b[xsigo]

 

Check on the Phone Home Log….

admin@xsigo-b[xsigo] showlog phonehome.log

Wed Jan 5 17:30:33 MST 2011: Phone home successful to http://phone-home.xsigo.com:6522

Wed Jan 5 18:04:14 MST 2011: Phone home successful to http://phone-home.xsigo.com:6522

Wed Jan 5 18:04:38 MST 2011: Phone home successful to http://phone-home.xsigo.com:6522

[Press CTRL-C to Exit]

admin@xsigo-b[xsigo]

Tip: your log might be empty until it has something to send

 

Ensure your Physical servers are attached…

As expected, all 4 servers are attached to this Xsigo. (If they don’t show up here, it could be an interconnect or ESXi issue.)

Enter the command ‘show physical-server’ to view your connected servers.

admin@xsigo-b[xsigo] show physical-server

——————————————————————————-

name localhost <<< This is the ESXi Hostname

guid 2c903000b4df5

descr

port xsigo-001397001:ServerPort2 << This is the Xsigo Port the Server is connected to

os VMware/ESXi-4.1.0:xg-3.5.0-1-246491/x86_64 << This is the version of ESX & Xsigo Driver

version 2.7.0/3.0.0

server-profile << Notice this is blank, We configured it next

——————————————————————————-

name localhost

guid 2c903000b4ea5

descr

port xsigo-001397001:ServerPort3

os VMware/ESXi-4.1.0:xg-3.5.0-1-246491/x86_64

version 2.7.0/3.0.0

server-profile

——————————————————————————-

name localhost

guid 2c903000b4ea9

descr

port xsigo-001397001:ServerPort4

os VMware/ESXi-4.1.0:xg-3.5.0-1-246491/x86_64

version 2.7.0/3.0.0

server-profile

——————————————————————————-

name localhost

guid 2c903000b5095

descr

port xsigo-001397001:ServerPort1

os VMware/ESXi-4.1.0:xg-3.5.0-1-246491/x86_64

version 2.7.0/3.0.0

server-profile

——————————————————————————-

4 records displayed

 

Create Server Profiles…

Creating a server profile enables you to assign devices to your specific host.

In our case we used the ESX Hostname as the Xsigo Server Profile name.

This will help us to keep the profiles well organized.

Keep in mind YOURSERVERNAME# equals your ESX Hostname and it will become your Xsigo Server Profile Name…

Long way to create a Server Profile…

add server-profile [server profile name]

View the new server profile…

admin@xsigo-b[xsigo] show server-profile

name state descr connection def-gw vnics vhbas

——————————————————————————-

YOURSERVER1 up/unassigned 0 0

1 record displayed

 

Assign the server profile to a port on the Xsigo…

set server-profile YOURSERVER1 connect localhost@xsigo-001397001:ServerPort1

 

Short way to create a Server Profile…

add server-profile YOURSERVER2 localhost@xsigo-001397001:ServerPort2

add server-profile YOURSERVER3 localhost@xsigo-001397001:ServerPort3

add server-profile YOURSERVER4 localhost@xsigo-001397001:ServerPort4

 

Then use show server-profile to confirm your entries…

admin@xsigo-b[xsigo] show server-profile

name state descr connection def-gw vnics vhbas

——————————————————————————-

Yourserver3 up/up localhost@xsigo-001397001:ServerPort3 0 0

Yourserver4 up/up localhost@xsigo-001397001:ServerPort4 0 0

Yourserver1 up/up localhost@xsigo-001397001:ServerPort1 0 0

Yourserver2 up/up localhost@xsigo-001397001:ServerPort2 0 0

4 records displayed

admin@xsigo-b[xsigo]

 

 

Set up and attach the virtual NICs to your server profiles…

In this step we created our Xsigo vNICs and attached them to the appropriate server profiles and 10gig modules.

When complete, each of our ESXi servers will have 4 Xsigo vNICs.

(2 vNICs for IP Network, 2 vNICs for Storage network)

 

Decoding the command…

The command ‘add vnic xnb.yourservername1 10/1 -mode=trunk’ breaks down to…

add vnic << Add vNIC Command

xnb << The vNIC name (xnb = Xsigo, IP Network, B Xsigo device; xsb = Xsigo, Storage Network, B Xsigo device)

yourservername1 << Which profile to attach to

10/1 << Which Module on the Xsigo to attach to

-mode=trunk << What transport mode

These are the commands we entered…

IP Network vNICS

admin@xsigo-b[xsigo] add vnic xnb.yourservername1 10/1 -mode=trunk

admin@xsigo-b[xsigo] add vnic xnb.yourservername2 10/1 -mode=trunk

admin@xsigo-b[xsigo] add vnic xnb.Yourservername3 10/1 -mode=trunk

admin@xsigo-b[xsigo] add vnic xnb.Yourservername4 10/1 -mode=trunk

 

Storage vNICS

admin@xsigo-b[xsigo] add vnic xsb.Yourservername1 11/1 -mode=trunk

admin@xsigo-b[xsigo] add vnic xsb.Yourservername2 11/1 -mode=trunk

admin@xsigo-b[xsigo] add vnic xsb.Yourservername3 11/1 -mode=trunk

admin@xsigo-b[xsigo] add vnic xsb.Yourservername4 11/1 -mode=trunk

 

Results from ESXi…

 

Other Information…

 

Set System back to factory Defaults…

If needed, you can set the System back to factory Defaults by the following command.

When complete you will need to access the system via Serial Cable.

Here are the steps:

set system factory-default

confirm << Type in 'confirm'; my PuTTY session exited and the system shut down

NOTE: This command will erase the configuration from the Xsigo. Do it with caution

Tip: this will cause the system to shut down, which means someone will have to manually power it back on.

 

Upgrade the XgOS via USB…

Download XgOS 2.8.5 to a USB stick…

We inserted the stick into the USB Port on the VP780, then executed this command

system upgrade file://usb/xsigo-2.8.5.xpf

 

Other Handy commands…

show system status

show system

show system version

show system warnings

show serial

show system info

history

 

CLI Fun…

One thing I like about the CLI for Xsigo is TAB at the end of the command (most modern CLI’s have this and it sure is handy)

If I type in set system phone-home[Press TAB] it displays possible completions and qualifiers and then it displays the last command I typed in.

admin@xsigo-b[xsigo] set system phone-home [Press TAB]

Possible completions:

disable Disable phone home

enable Enable phone home

noproxy Don’t use HTTP Proxy

proxy HTTP Proxy config

snooze Hit the snooze button

[Optional qualifiers]

-contact-email-address Email address for Xsigo technical support to contact when a problem is discovered. (or ‘none’)

-contact-phone-numbers Telephone number for Xsigo technical support to contact when a problem is discovered. (comma separated, or ‘none’)

-copy-url URL to send audit copy to

-customer-name Customer name (or ‘none’)

-frequency Phone home frequency (relative to when it is set)

-notify Will Xsigo notify you when problems are detected?

-send-alarms Send major alarms to Xsigo?

-strip-private Strip private information from phone-home data

Repeat ‘?’ for detailed help.

admin@xsigo-b[xsigo] set system phone-home

 

Day 2 Summary..

The pair of Xsigo’s were very easy to configure and install. I enjoyed working with the Xsigo CLI; it is very well thought out, and I plan to write an additional blog about it alone.

Besides the very few and sometimes self-inflicted gotchas, things went smoothly.

It was nice to have a Xsigo SE on site to assist with the initial install, and I’m looking forward to tomorrow when we spin up some VMs and then test!

 

Still to do…

  • Copy vCenter Server & other VM’s from Production to this test environment
  • Test, Test, Test and more testing..

Test Lab – The Plan and Layout with Xsigo, Juniper, IOMega, VMware, and HP/Dell Servers


This week I have the pleasure of setting up a pretty cool test lab with Xsigo, Juniper, IOMega, VMware, and HP/Dell servers.

I’ll be posting up some more information as the days go on…

The idea and approval for the lab came up pretty quickly, and we are still defining all the goals we’d like to accomplish.

I’m sure the list will grow with time; however, here are the initial goals we laid out.

Goals…

  1. Network Goals
    1. Deploy the vChassis solution by Juniper (Server Core and WAN Core)
    2. Deploy OSPF routing (particularly between sites)
    3. Multicast testing
    4. Layer 2 tests for VMs
    5. Throughput monitoring
  2. VMware Goals
    1. Test EVC from the old Dell quad-core servers to the new HP Nehalems
    2. Test long-distance vMotion & long-distance cluster failures from Site 1 to Site 2
    3. Play around with ESXi 4.1
  3. Xsigo Goals
    1. Test redundant controller failover with VMware
    2. Throughput between sites, servers, and storage

Caveats…

  • We don’t have dual storage devices to test SAN replication; however, the IOMega will be “spanned” across the metro core
  • Even though this is a “Site to Site” design, this is a lab and all equipment is in the same site
  • The simulated 10Gb/s site-to-site vChassis connection is merely a 10Gb/s fibre cable (we are working on simulating latency)
  • Xsigo recommends 2 controllers per site and DOES NOT recommend this setup for a production environment; however, this is a test lab, not production

The Hardware..

2 x Xsigo VP780’s with dual 10Gb/s modules; all server hardware will be dual-connected

2 x HP DL360 G6, single quad-core Nehalem, 24GB RAM, InfiniBand DDR HBA, gNICs for mgmt (really not needed, but nice to have)

2 x Dell Precision Workstation R5400, dual quad-core, 16GB RAM, InfiniBand DDR HBA, gNICs for mgmt (really not needed, but nice to have)

6 x Juniper EX4200’s (using Virtual Chassis and interconnect stacking cables)
6 x Juniper EX4200’s (using Virtual Chassis and Interconnect Stacking Cables)