This week I have the pleasure of setting up a pretty cool test lab with Xsigo, Juniper, Iomega, VMware, and HP/Dell servers.
I’ll be posting up some more information as the days go on…
The idea and approval for the lab came together pretty quickly, and we are still defining all the goals we'd like to accomplish.
I'm sure the list will grow with time; however, here are the initial goals we laid out.
- Deploy the vChassis (Virtual Chassis) solution by Juniper (server core and WAN core)
- Deploy OSPF routing (particularly between sites)
- Multicast testing
- Layer 2 tests for VMs
- Throughput monitoring
- Test EVC from the old Dell quad-core servers to the new HP Nehalems
- Test long-distance vMotion and long-distance cluster failures from Site 1 to Site 2
- Play around with ESXi 4.1
- Test redundant controller failover with VMware
- Throughput between sites, servers, and storage
- We don't have dual storage devices to test SAN replication; however, the Iomega will be "spanned" across the metro core
- Even though this is a "site to site" design, this is a lab and all the equipment is in the same site
- The simulated 10Gb/s site-to-site vChassis connection is merely a 10Gb/s fibre cable (we are working on simulating latency; see the sketch after this list)
- Xsigo recommends 2 controllers per site and DOES NOT recommend this setup for a production environment; however, this is a test lab, not production.
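For the latency piece, one option we're looking at is slipping a Linux box in line on the "metro" link and shaping it with netem. Here's a minimal sketch of the idea; the interface name and delay value are placeholders, not our final numbers:

```python
import subprocess

# Assumption: a Linux box bridges the two "sites" and eth1 is the port
# facing Site 2. netem delays every egress packet on that interface.
LINK_IFACE = "eth1"   # placeholder: the bridge port toward Site 2
DELAY_MS = 5          # placeholder: ~5 ms each way, a metro-style ~10 ms RTT

def add_latency(iface: str, delay_ms: int) -> None:
    """Attach a netem qdisc that delays all egress traffic on iface."""
    subprocess.run(
        ["tc", "qdisc", "add", "dev", iface, "root", "netem",
         "delay", f"{delay_ms}ms"],
        check=True,
    )

def clear_latency(iface: str) -> None:
    """Remove the netem qdisc and return the link to normal."""
    subprocess.run(["tc", "qdisc", "del", "dev", iface, "root"], check=True)

if __name__ == "__main__":
    add_latency(LINK_IFACE, DELAY_MS)
```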
Here's the hardware lineup:
- 2 x Xsigo VP780s with dual 10Gb/s modules; all server hardware will be dual-connected
- 2 x HP DL360 G6, single quad-core Nehalem, 24GB RAM, InfiniBand DDR HBA, gNICs for management (really not needed but nice to have)
- 2 x Dell Precision Workstation R5400, dual quad-core, 16GB RAM, InfiniBand DDR HBA, gNICs for management (really not needed but nice to have)
- 6 x Juniper EX4200s (using Virtual Chassis and interconnect stacking cables)
I installed an Iomega ix12-300r for our ESX test lab, and I must say it's just as feature-rich as my personal ix4 and ix2.
I enjoy working with this device for its simplicity and feature depth. It's very easy to deploy and a snap to integrate with ESX.
Here are some of the things I like about the ix12, plus a high-level overview of enabling it with ESX.
Note: Keep in mind that most of the features below are available on the ix2 and ix4 lines, but not all.
See http://iomega.com/nas/us-nas-comp.html for more information about the ix line and its features…
Our ix12 (the ix## indicates the number of drive bays in the unit, i.e. ix2 = 2 drives, ix4 = 4 drives) is populated with 8 x 1TB drives.
By default the 8TB unit comes with 4 x 2TB drives; I opted to buy a 4TB unit and expand it by 4TB, giving us the 8 x 1TB drives.
The drives are Seagate Barracuda Green SATA 3Gb/s 1TB hard drives (ST31000520AS, SATA II Rev 2.6, 5.9K RPM); they should perform nicely for our environment…
(But like most techies, I wish they were faster.)
More information about the drives and SATA 2.6 vs. 3.x is here.
A storage pool is not a new concept, but in a device this cost-effective it's almost unheard of.
Basically, I'm dividing up my 8 drives like this:
Storage Pool 0 (SP0): 4 drives for basic file shares (CIFS)
Storage Pool 1 (SP1_NFS): 2 drives for ESX NFS shares only
Storage Pool 2 (SP2_iSCSI): 2 drives dedicated to ESX iSCSI only
I could have placed all 8 drives into one storage pool, but one of our requirements was to keep SP0 isolated from SP1 and SP2 for separation reasons…
NO downtime for RAID expansion… Sweet…
Another great feature is that there's NO downtime to expand your RAID5 set.
Simply edit the storage pool, choose your new drive, and click Apply.
The RAID set will rebuild and you're all done!
Note: the downside to this… if you decide to remove a drive from a RAID set, you'll have to rebuild the entire set.
TIP: To check the status of your RAID reconstruction, look on the Dashboard under Status, or at the bottom of the home page.
Mine reconstructed the 3 storage pools, all 8 drives at the same time, in about 4.5 hours…
Teaming your NICs!
The ix12 comes with 4 x 1Gb NICs; these can be bonded together, kept separate, or a mix of both.
You can set up your bonded NICs in Adaptive Load Balancing, Link Aggregation (LG), or Failover mode.
In our case we bonded NICs 3 and 4 with LG for ESX NFS/iSCSI traffic and set up NIC 1 for our CIFS traffic.
For the most part, setting up the networking is simple and easy to do.
Simply enter your IPs, choose whether to bond, and click Apply.
Note: Don't uncheck DHCP on unused adapters; if you do, you'll get an invalid IP address error when you click Apply.
Also, making changes in the network area usually requires a reboot of the device. Tip: Set up your network first.
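If you bond uplinks on the ESX side the same way, remember that static link aggregation pairs with the "Route based on IP hash" teaming policy on the vSwitch. Here's a hedged sketch using pyVmomi (VMware's Python SDK); the vCenter address, credentials, and vSwitch name are placeholders for this lab, and the script just grabs the first host it finds:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholders: point these at your own vCenter and credentials.
si = SmartConnect(host="vcenter.lab.local", user="administrator",
                  pwd="password",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Grab the first ESX host in inventory (fine for a small lab).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = view.view[0]
view.Destroy()

net_sys = host.configManager.networkSystem
for vsw in host.config.network.vswitch:
    if vsw.name == "vSwitch1":  # assumed vSwitch carrying storage traffic
        spec = vsw.spec
        # "loadbalance_ip" is "Route based on IP hash" in the vSphere
        # Client, the policy that matches static link aggregation.
        spec.policy.nicTeaming.policy = "loadbalance_ip"
        net_sys.UpdateVirtualSwitch(vswitchName="vSwitch1", spec=spec)

Disconnect(si)
```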
Adding the NFS Folder to your ESX server
Note: These steps assume you've completed the Iomega installation (enabled iSCSI, NFS, file shares, etc.), networking, and your ESX environment…
From the ix12 web interface, simply add a folder on the correct storage pool.
In our case I chose the folder name ESX_NFS and the SP1_NFS storage pool.
Tip: ALL folders are broadcast on all networks and protocols… I haven't found a way to isolate folders to specific networks or protocols.
If needed, make sure your security is enabled… I plan to talk with Iomega about this…
In vCenter Server, add NAS storage and point it at the ix12.
Note: use /nfs/[folder name] for the folder path…
Once it's connected, it will show up as an NFS datastore!
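If you'd rather script that vCenter step than click through it, here's a minimal pyVmomi sketch. The ix12 IP, export name, and datastore label are placeholders, and `host` is the same vim.HostSystem object looked up in the teaming sketch above:

```python
from pyVmomi import vim

def mount_ix12_nfs(host: vim.HostSystem) -> vim.Datastore:
    """Mount the ix12's NFS export as a datastore on one ESX host."""
    spec = vim.host.NasVolume.Specification(
        remoteHost="192.168.1.50",  # placeholder: ix12 IP on the NFS bond
        remotePath="/nfs/ESX_NFS",  # the /nfs/[folder name] path from above
        localPath="ix12_NFS",       # datastore name shown in vCenter
        accessMode="readWrite",
    )
    return host.configManager.datastoreSystem.CreateNasDatastore(spec)
```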
Adding iSCSI to your ESX server…
Note: This assumes you've set up your ESX environment to support iSCSI with the ix12… (a sketch of that prep follows below)
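For reference, that ESX-side prep boils down to enabling the software iSCSI initiator and pointing it at the ix12 as a send target. A pyVmomi sketch, with the same `host` object assumption as the earlier sketches and a placeholder portal IP:

```python
from pyVmomi import vim

def prep_software_iscsi(host: vim.HostSystem) -> None:
    """Enable software iSCSI and add the ix12 as a send target."""
    ss = host.configManager.storageSystem
    ss.UpdateSoftwareInternetScsiEnabled(enabled=True)

    # Find the software iSCSI HBA that the call above just enabled.
    for hba in ss.storageDeviceInfo.hostBusAdapter:
        if isinstance(hba, vim.host.InternetScsiHba) and hba.isSoftwareBased:
            target = vim.host.InternetScsiHba.SendTarget(
                address="192.168.1.50",  # placeholder: ix12 iSCSI portal IP
                port=3260,               # default iSCSI port
            )
            ss.AddInternetScsiSendTargets(
                iScsiHbaDevice=hba.device, targets=[target])
```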
Add your shared storage as an iSCSI drive, set your iSCSI drive name, and select the correct storage pool.
Next, set the size of the iSCSI device; in this case we have 922GB free but can only allocate 921.5GB.
After clicking Apply, you should see the information screen…
In vCenter Server, ensure you can see the iSCSI drive…
Add the iSCSI disk…
Give this disk a name…
Choose the right block size… (on VMFS-3, the block size sets the maximum file size: 1MB blocks allow 256GB files, 2MB allow 512GB, 4MB allow 1TB, and 8MB allow 2TB)
Finally there she is… one 920GB iSCSI disk…
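And the scripted version of those last few clicks: rescan, grab the new ix12 LUN, and format it as VMFS with your chosen block size. Another hedged pyVmomi sketch; the datastore name is a placeholder, and in a real script you'd match the disk by device path instead of taking the first unclaimed one:

```python
from pyVmomi import vim

def create_vmfs_on_ix12(host: vim.HostSystem) -> vim.Datastore:
    """Rescan for the new iSCSI LUN and carve a VMFS datastore from it."""
    host.configManager.storageSystem.RescanAllHba()

    ds_sys = host.configManager.datastoreSystem
    disks = ds_sys.QueryAvailableDisksForVmfs()
    disk = disks[0]  # assumption: the ix12 LUN is the only unclaimed disk

    # Ask ESX for a valid create spec for this disk, then adjust it.
    option = ds_sys.QueryVmfsDatastoreCreateOptions(
        devicePath=disk.devicePath)[0]
    spec = option.spec
    spec.vmfs.volumeName = "ix12_iSCSI"  # placeholder datastore name
    spec.vmfs.blockSizeMb = 8            # 8MB blocks -> ~2TB max file size
    return ds_sys.CreateVmfsDatastore(spec)
```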
From a price-vs.-performance standpoint, the Iomega line of NAS devices (ix2, ix4, and our ix12) simply ROCKS.
It will be hard to find such a feature-rich product that costs so little.
This post has merely scratched the surface of these devices' features. It's really hard to believe that 10+ years ago Iomega was known only for Zip and Jaz drives…
Their new slogan is "Iomega Kicks NAS", and from what I've seen, they do!
Follow-up posts…
Over the next couple of months I hope to performance-test my VMs against the ix12.
I'd also like to figure out the protocol multi-tenancy issue (CIFS, NFS, and iSCSI broadcasting over all NICs).
I'll post the results as they come in…