Home Lab

Test Lab – The Plan and Layout (with Xsigo, Juniper, Iomega, VMware, and HP/Dell servers)


This week I have the pleasure of setting up a pretty cool test lab with Xsigo, Juniper, Iomega, VMware, and HP/Dell servers.

I’ll be posting up some more information as the days go on…

The idea and approval for the lab came up pretty quickly and we are still defining all the goals we’d like to accomplish.

I’m sure the list will grow with time, but here are the initial goals we laid out.

Goals…

  1. Network Goals
    1. Deploy the vChassis solution by Juniper (server core and WAN core)
    2. Deploy OSPF routing (particularly between sites)
    3. Multicast testing
    4. Layer 2 tests for VMs
    5. Throughput monitoring
  2. VMware Goals
    1. Test EVC from the old Dell quad-core servers to the new HP Nehalem servers
    2. Test long-distance vMotion and long-distance cluster failures from Site 1 to Site 2
    3. Play around with ESXi 4.1
  3. Xsigo Goals
    1. Test redundant controller failover with VMware
    2. Throughput between sites, servers, and storage

Caveats…

  • We don’t have dual storage devices to test SAN replication; however, the Iomega will be “spanned” across the metro core
  • Even though this is a “site to site” design, this is a lab and all of the equipment is physically in the same site
  • The simulated 10Gb/s site-to-site vChassis connection is merely a 10Gb/s fibre cable (we are still working on simulating latency; see the sketch after this list)
  • Xsigo recommends 2 controllers per site and DOES NOT recommend this setup for a production environment; however, this is a test lab, not production.
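
Speaking of simulating latency (see the caveat above), here is a rough sketch of one way to fake WAN delay on the site-to-site link: drop a Linux box into the 10Gb/s path and shape its interface with tc/netem. This is purely my own illustration of an approach we might try, not something we have built; the interface name and delay value are placeholders.

    # Hypothetical sketch (illustration only, not part of the lab build): simulate
    # WAN latency by adding a fixed delay with tc/netem on a Linux box in the path.
    import subprocess

    IFACE = "eth1"      # interface facing "Site 2" (placeholder)
    DELAY_MS = 5        # one-way delay to inject, in milliseconds (placeholder)

    def add_latency(iface: str, delay_ms: int) -> None:
        """Add a fixed delay to all traffic leaving the given interface."""
        subprocess.run(["tc", "qdisc", "add", "dev", iface, "root",
                        "netem", "delay", f"{delay_ms}ms"], check=True)

    def clear_latency(iface: str) -> None:
        """Remove the netem qdisc again."""
        subprocess.run(["tc", "qdisc", "del", "dev", iface, "root"], check=True)

    if __name__ == "__main__":
        add_latency(IFACE, DELAY_MS)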

The Hardware..

2 x Xsigo VP780s with dual 10Gb/s modules; all server hardware will be dual-connected

2 x HP DL360 G6, single quad-core Nehalem, 24GB RAM, Infiniband DDR HBA, gigabit NICs for management (not really needed but nice to have)

2 x Dell Precision Workstation R5400, dual quad-core, 16GB RAM, Infiniband DDR HBA, gigabit NICs for management (not really needed but nice to have)

6 x Juniper EX4200s (using Virtual Chassis and interconnect stacking cables)

Working with the Iomega ix12-300r


 

I installed an Iomega ix12-300r for our ESX test lab and I must say it’s just as feature-rich as my personal ix4 and ix2.

I enjoy working with this device for its simplicity and feature depth. It’s very easy to deploy and it’s a snap to integrate with ESX.

 

Here are some of the things I like about the ix12 and a high-level overview of enabling it with ESX.

Note: Keep in mind most of the features below are available on the ix2 and ix4 lines, but not all.

See http://iomega.com/nas/us-nas-comp.html for more information about the ix line and its features…

 

The Drives…

Our ix12 (the ix## indicates the number of drive bays in the unit, i.e. ix2 = 2 drives, ix4 = 4 drives) is populated with 8 x 1TB drives.

By default the 8TB unit comes with 4 x 2TB drives; I opted to buy a 4TB unit and expand it by 4TB, giving us the 8 x 1TB drives.

The drives are Seagate Barracuda Green SATA 3Gb/s 1TB drives (ST31000520AS, SATA II Rev 2.6, 5.9K RPM); they should perform nicely for our environment…

(But like most techies, I wish they were faster.)

More information here about the drives and SATA 2.6 vs 3.x

http://www.seagate.com/ww/v/index.jsp?vgnextoid=9d373f15020b0210VgnVCM1000001a48090aRCRD#tTabContentSpecifications

http://www.serialata.org/documents/SATA-6-Gbs-The-Path-from-3gbs-to-6gbs.pdf

 

Storage Pools…

A storage pool is not a new concept, but in a device this cost-effective it’s unheard of.

Basically, I’m dividing up my 8 drives like this..

Storage Pool 0 (SP0) – 4 drives for basic file shares (CIFS)

Storage Pool 1 (SP1_NFS) – 2 drives for ESX NFS shares only

Storage Pool 2 (SP2_iSCSI) – 2 drives dedicated to ESX iSCSI only

I could have placed all 8 drives into one Storage pool but…

One of our requirements was to have SP0 isolated from SP1 and SP2 for separation reasons…

 

NO Downtime for RAID Expansion… Sweet…

Another great feature: NO downtime to expand your RAID5 set.

Simply edit the storage pool, choose your new drive, and click Apply.

 

The RAID set will rebuild and you’re all done!

Note: the downside is that if you decide to remove a drive from a RAID set, you’ll have to rebuild the entire set.

Tip: To check the status of your RAID reconstruction, look on the Dashboard under Status or at the bottom of the home page.

Mine reconstructed all 3 storage pools, across all of the drives, at the same time in about 4.5 hours…


 

Teaming your NICs!

The ix12 comes with 4 x 1Gb NICs; these can be bonded together, left separate, or a mix of both.

You can set up your bonded NICs in Adaptive Load Balancing, Link Aggregation (LG), or Failover mode.

In our case we bonded NICs 3 and 4 with LG for ESX NFS/iSCSI traffic and set NIC 1 up for our CIFS traffic.

For the most part setting up the networking is simple and easy to do.

Simply enter your IPs, choose whether to bond, and click Apply.

Note: Don’t uncheck DHCP on unused adapters; if you do, you’ll get an invalid IP address error when you click Apply.

Also, making changes in the network area usually requires a reboot of the device. Tip: set up your networking first.

 

Adding the NFS Folder to your ESX server

Note: These steps assume you have completed the Iomega installation (enabled iSCSI, NFS, file shares, etc.), networking, and your ESX environment…

From the ix12 web interface, simply add a folder on the correct storage pool.

In our case I chose the folder name ESX_NFS and the SP1_NFS storage pool.

Tip: ALL folders are broadcast on all networks and protocols… I haven’t found a way to isolate folders to specific networks or protocols.

If needed, make sure your security is enabled… I plan to talk with Iomega about this…

 

In vCenter Server, add NAS storage and point it to the ix12.

Note: use /nfs/[folder name] for the folder name…
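
If you ever want to script this step instead of clicking through vCenter, here is a minimal sketch using the pyVmomi Python SDK. This is my addition, not part of the original walkthrough; the vCenter name, credentials, host name, and ix12 IP are all placeholders.

    # Hypothetical sketch: mounting the ix12 NFS export as a datastore through the
    # vSphere API with pyVmomi. All names, credentials, and IPs are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.lab.local", user="administrator", pwd="secret",
                      sslContext=ssl._create_unverified_context())
    esx = si.content.searchIndex.FindByDnsName(dnsName="esx01.lab.local",
                                               vmSearch=False)

    spec = vim.host.NasVolume.Specification(
        remoteHost="192.168.1.50",    # ix12 bonded NIC (placeholder IP)
        remotePath="/nfs/ESX_NFS",    # note the /nfs/ prefix called out above
        localPath="ix12_NFS",         # datastore name that will appear in vCenter
        accessMode="readWrite")
    esx.configManager.datastoreSystem.CreateNasDatastore(spec)
    Disconnect(si)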

 

Once it’s connected it will show up as an NFS datastore!

 

Adding iSCSI to your ESX Server..

Note: This assumes you have set up your ESX environment to support iSCSI with the ix12…
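
For reference, here is roughly what that ESX-side prep looks like if you script it with pyVmomi instead of the VI Client: enable the software iSCSI initiator, point it at the ix12, and rescan. This is my own illustrative sketch with placeholder names and IPs, not a step taken from the ix12 documentation.

    # Hypothetical sketch: enabling the software iSCSI initiator on an ESX host and
    # adding the ix12 as a send target with pyVmomi. Names and IPs are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.lab.local", user="administrator", pwd="secret",
                      sslContext=ssl._create_unverified_context())
    esx = si.content.searchIndex.FindByDnsName(dnsName="esx01.lab.local",
                                               vmSearch=False)
    storage = esx.configManager.storageSystem

    # Turn on the software iSCSI initiator.
    storage.UpdateSoftwareInternetScsiEnabled(True)

    # Find the iSCSI HBA (assumes the software initiator is the only iSCSI HBA)
    # and point it at the ix12 (placeholder IP).
    hba = next(a for a in storage.storageDeviceInfo.hostBusAdapter
               if isinstance(a, vim.host.InternetScsiHba))
    target = vim.host.InternetScsiHba.SendTarget(address="192.168.1.50", port=3260)
    storage.AddInternetScsiSendTargets(iScsiHbaDevice=hba.device, targets=[target])

    # Rescan so the new LUN shows up.
    storage.RescanAllHba()
    Disconnect(si)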

Add your shared storage as an iSCSI drive, set your iSCSI drive name, and select the correct storage pool.

Next, set the size of the iSCSI device; in this case we have 922GB free but can only allocate 921.5GB.

After clicking Apply, you should see the information screen…

 

In vCenter Server ensure you can see the iSCSI drive..

Add the iSCSI disk…

Give this disk a name…

 

Choose the right block size… (on VMFS-3 the block size sets the maximum file size: 1MB blocks allow files up to 256GB, 2MB up to 512GB, 4MB up to 1TB, and 8MB up to 2TB)

Finally there she is… one 920GB iSCSI disk…
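
If you want to script the vCenter side of this as well, here is a rough pyVmomi sketch (again my own illustration with placeholder names, not part of the original walkthrough); it takes the first disk that is eligible for a new VMFS volume and formats it with an explicit block size.

    # Hypothetical sketch: formatting the new ix12 iSCSI LUN as a VMFS datastore
    # with pyVmomi, choosing the block size explicitly. Names are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect

    si = SmartConnect(host="vcenter.lab.local", user="administrator", pwd="secret",
                      sslContext=ssl._create_unverified_context())
    esx = si.content.searchIndex.FindByDnsName(dnsName="esx01.lab.local",
                                               vmSearch=False)
    ds_sys = esx.configManager.datastoreSystem

    # Pick the first disk eligible for a new VMFS volume (assumes the ix12 LUN is
    # the only unclaimed disk on this host).
    disk = ds_sys.QueryAvailableDisksForVmfs()[0]

    # Ask the host for a create spec for that disk, then set the name and block size.
    option = ds_sys.QueryVmfsDatastoreCreateOptions(devicePath=disk.devicePath)[0]
    option.spec.vmfs.volumeName = "ix12_iSCSI"
    option.spec.vmfs.blockSizeMb = 4    # 4MB blocks = files up to 1TB on VMFS-3
    ds_sys.CreateVmfsDatastore(spec=option.spec)
    Disconnect(si)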

 

Summary…

From a price vs. performance standpoint, the Iomega line of NAS devices (ix2, ix4, and our ix12) simply ROCKS.

It will be hard to find such a feature-rich product that costs so little.

This post has merely scratched the surface of these devices’ features. It’s really hard to believe that 10+ years ago Iomega was known only for Zip and Jaz drives…

Their new slogan is “Iomega Kicks NAS”, and from what I’ve seen it does!

 

Follow-up posts…

Over the next couple of months I hope to performance-test my VMs against the ix12.

I’d also like to figure out the protocol multi-tenancy issue (CIFS, NFS, and iSCSI broadcasting over all NICs).

I’ll post the results as they come in.

vSphere: NUMA 706: Can’t boot system as genuine NUMA


If you install vSphere on non-NUMA hardware, the following warning message will be displayed on the Service Console splash screen:

cpu0:0)NUMA: 706: Can’t boot system as genuine NUMA. Booting with 1 fake node(s)

To resolve the warning message, uncheck the advanced option VMkernel.Boot.useNUMAInfo in the host’s Advanced Settings.
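
If you would rather flip that setting programmatically than through the VI Client’s Advanced Settings dialog, something like the following pyVmomi sketch should do it. Treat it as an assumption on my part: the exact option key and value type can vary between ESX releases, so verify against your host first.

    # Hypothetical sketch: clearing the VMkernel.Boot.useNUMAInfo advanced option
    # with pyVmomi. The option key name and value type may differ by ESX release.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="esx01.lab.local", user="root", pwd="secret",
                      sslContext=ssl._create_unverified_context())
    esx = si.content.searchIndex.FindByDnsName(dnsName="esx01.lab.local",
                                               vmSearch=False)

    opt = vim.option.OptionValue(key="VMkernel.Boot.useNUMAInfo", value=False)
    esx.configManager.advancedOption.UpdateOptions(changedValue=[opt])
    Disconnect(si)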

Home Lab – Workstation 7 to 7.1 Upgrade


I upgraded my Home Lab from Workstation 7.0 to 7.1 tonight..

More info on my home lab here…
http://vmexplorer.blogspot.com/2010/02/home-lab-install-of-esx-35-and-40-on.html

Upgrade Steps I took…

  1. First step was to uninstall Workstation 7, then install 7.1
    • Note: The installer will do this automatically if needed
  2. Once the uninstall is complete, a reboot is necessary
  3. After the reboot I noticed Windows 7 reconfiguring the network adapters
    • Note: At this point, if you need to adjust your local subnets, now might be a good time; once you install 7.1 it will reconfigure all the vmnets around them.
  4. The install of Workstation 7.1 is pretty simple: choose Custom, click Next a few times, and do one reboot
  5. After the reboot Windows 7 finds the new network adapters, and it was all done

What I noticed after the upgrade..

  • WS 7.1 launched without issues; it didn’t require me to input my serial number again, and it came right up.
  • I opened up the Virtual Network Editor, and it took about a minute to assign subnets to the 8 different vmnets. (This is something I should have documented better, as I don’t recall all the subnets. However, I did have 2 documented.)
  • When I powered on my good old XP VM, Windows 7 on the host detected that it needed an updated USB driver; it quickly went to the update site and downloaded the driver with no issue. In the XP VM I updated the VMware Tools, rebooted, and it worked normally.
  • One new thing: the VMware Tools icon is now grey and white.
  • I powered up my ESX test environment..
    • 1st, my vCenter Server is connected to VMnet0 in auto-bridged mode
    • On power-up I noticed the VM had been switched from a static IP to DHCP
    • I corrected this by re-entering its static IP and it functioned normally
    • 2nd, I powered up my ESX 3.5 host
    • It booted fine and attached itself to the vCenter Server without issue
    • 3rd, I powered up my ESX 4.0 host
    • It booted fine and attached itself to the vCenter Server without issue

Final thoughts…

This upgrade was a good warm-up for the next Workstation upgrade that I need to do.
This environment was pretty simple, nothing very complex, and the upgrade pretty much went smoothly.
I think the best rule of thumb is: before you upgrade, know and document your lab, then upgrade.
My home lab was only partially documented; the upgrade would have gone more smoothly if it had been fully documented.

Next up… an upgrade of a more complex Workstation lab with an Iomega iSCSI NAS and multiple subnets…
I’ll post up how it goes…

Here’s what’s new with WS 7.1… I got this from the VMware site…

http://www.vmware.com/support/ws71/doc/releasenotes_ws71.html#whatsnew

What’s New

This release of VMware Workstation adds the following new features and support:

•New Support for 32-Bit and 64-Bit Operating Systems

•New Features in VMware Workstation

New Support for 32-Bit and 64-Bit Operating Systems

This release provides support for the following host and guest operating systems:

Operating System – Host and Guest Support

Ubuntu 8.04.4 – Host and guest

Ubuntu 10.04 – Host and guest

OpenSUSE 11.2 – Host and guest

Red Hat Enterprise Linux 5.5 – Host and guest

Fedora 12 – Guest

Debian 5.0.4 – Guest

Mandriva 2009.1 – Guest

New Features in VMware Workstation

•OpenGL 2.1 Support for Windows 7 and Windows Vista Guests — Improves the ability to run graphics-based applications in virtual machines.

•Improved Graphics Performance — Enhanced performance with better benchmarks, frame rates, and improved rendering on Windows 7 and Windows Vista guests allows you to run various graphics-based applications. In addition, major improvements in video playback enable you to play high-resolution videos in virtual machines.

•Automatic Software Updates — Download and install VMware Tools and receive maintenance updates when available.

•Direct Launch — Drag guest applications from the Unity start menu directly onto the host desktop. Double-click the shortcut to open the guest application. The shortcut remains on the desktop after you exit Unity and close VMware Workstation.

•Autologon — Save your login credentials and bypass the login dialog box when you power on a Windows guest. Use this feature if you restart the guest frequently and want to avoid entering your login credentials. You can enable Autologon and use direct launch to open guest applications from the host.

•OVF 1.1 Support — Import or export virtual machines and vApps to upload them to VMware vSphere or VMware vCloud. The VMware OVF Tool is a command-line utility bundled in the VMware Workstation installer. Use this tool along with VMware Workstation to convert VMware .vmx files to .ovf format or vice versa. VMware recommends that you use the OVF command-line utility. For more information, see the OVF Web site and OVF Tool User Guide.

•Eight-Way SMP Support — Create and run virtual machines with a total of up to eight-processor cores.

•2TB Virtual Disk Support — Maximum virtual disk and raw disk size increased from 950GB to 2TB.

•Encryption Enhancements — VMware Workstation includes support for Intel’s Advanced Encryption Standard instruction set (AES-NI) to improve performance while encrypting and decrypting virtual machines and faster run-time access to encrypted virtual machines on new processors.

•Memory Management — User interface enhancements have simplified the handling of increased virtual memory capacity.

•User Experience Improvement Program — Help VMware improve future versions of the product by participating in the User Experience Improvement Program. Participation in the program is voluntary and you can opt out at any time. When you participate in the User Experience Improvement Program, your computer sends anonymous information to VMware, which may include product configuration, usage and performance data, virtual machine configuration, usage and performance data, and information about your host system specifications and configuration.

The User Experience Improvement Program does not collect any personal data, such as your name, address, telephone number, or email address that can be used to identify or contact you. No user identifiable data such as the product license key or MAC address are sent to VMware. VMware does not store your IP address with the data that is collected.

For more information about the User Experience Improvement Program, click the Learn More link during installation or from the VMware Workstation Preferences menu.

Home Lab – Install of ESX 3.5 and 4.0 on Workstation 7


Tonight I got the pleasure of working on my home lab a bit..

Here is what I am currently running..

Antec Sonata Gen 1 Case
Antec 650 Earth Watts Power Supply
Gigabyte EP43-UD3L MB
Intel® Core™2 Quad Processor Q9400 2.66GHz/1333FSB/6MB Cache
Cooler Master TX3
8GB of Patriot DIMM 2GB PC2-5300U CL4-4-4-12 (DDR2-667) (PEP22G5300LL)
500GB/300GB/160GB SATA 3.0 HDs
Windows 7 – 64 Bit
VMware Workstation 7

Installation of ESX 4.0 was easy… just follow the steps to create a new VM and choose ESX 4.0

Installation of ESX 3.5 was a bit tricky at first… I did the usual Google search for answers, but everything was about Workstation 6.5 and how to modify the vmx config file…

I ended up doing the following and it seems to be working well:

Create a custom VM
Choose “I will install the OS later”
Select “Red Hat Enterprise Linux 5 64-bit”
Accept the defaults for the rest
When it completes, set the VM to boot from your ESX 3.5 media so that you can install the OS
Complete the OS install and you’re done.
Much easier than WS 6.5!

Mine ran without issue and it really moves..
In fact I installed it with my ESX 4.0 VM running in the background..

So far Workstation 7 seems to be a big improvement and it’s quite speedy for me.

Home Lab – GS724AT – ProSafe® 24-port Gigabit Smart Switch with Advanced Features


GS724AT – ProSafe® 24-port Gigabit Smart Switch with Advanced Features

I found this switch that I believe will do VLANs and tagging for only $340; not too bad for a 24-port gigabit switch, and it seems like a deal for an ESX home lab.

Home Lab – Iomega NAS for ESX



Iomega has some cool NAS devices on the cheap… they look like they’d be good for a small business or a home lab!

Here is a cool review on it…
Iomega’s ix4-200d: A Killer Desktop Storage Array

Here is the one I’m looking at for my home lab..

iomega website ix4-200d

ESX / ESXi 4.0 Whitebox HCL


I found this cool link for whiteboxing your ESX servers… check it out!
Thanks to a fellow VMUG User (Vlad N)

ESX / ESXi 4.0 Whitebox HCL: “Motherboards and unsupported servers that work with ESX 4.0 and / or ESXi 4.0 Installable. Last updated – 2010.02.02”