So you bought an EqualLogic SAN, now what… part two

… belay those orders for setting up the SAN stress testing. Let’s first get some alerts working, along with the call-home functionality. This email from EQL lays it out well:

*****

Using rinetd for management of a closed network
Solution Details 
Network Access for Management and Notification

Networks set up for iSCSI SANs often have limited connectivity to the general “public” network. This poses significant problems when management and event notifications use standard TCP/IP protocols. One example is trying to configure event notification via SMTP: if the iSCSI network does not have an SMTP server, and there is no gateway from the iSCSI network to the public network, notifications of significant events cannot be delivered to the responsible persons.

Additionally, management of the EqualLogic array would have to be done from one of the servers directly connected to the iSCSI network.

One method of circumventing this problem is to use port redirection. This is a procedure that takes requests on a particular port and interface, and routes them to another node. One program that has been used successfully is rinetd, available from http://www.boutell.com/rinetd/ . This program can be used to route SMTP, HTTP, telnet, and SSH traffic to and from an array through a Windows system to one or more nodes on the public network.

The following lines are a configuration that will:
1. Route all SMTP traffic received on interface 192.168.30.201 to the system at address 192.168.10.200.
2. Route all HTTP traffic received on interface 192.168.10.201 to the array at address 192.168.30.10.
3. Route all traffic received on any interface for ports 3002 and 3003 to the array at address 192.168.30.10.

192.168.30.201 25 192.168.10.200 25
192.168.10.201 80 192.168.30.10 80
0.0.0.0 3002 192.168.30.10 3002
0.0.0.0 3003 192.168.30.10 3003
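For the curious, what rinetd does here can be sketched in a few lines of Python: listen on one interface/port and copy bytes to another host. This is only an illustration of the idea, not a replacement for rinetd, and the addresses in the comment are the example addresses from the config above:

```python
import socket
import threading

def pipe(src, dst):
    """Copy bytes from src to dst until src closes."""
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    except OSError:
        pass
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def start_forwarder(listen_addr, target_addr):
    """Listen on listen_addr and forward each incoming connection
    to target_addr. Returns the listening socket; serving happens
    in a background thread."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(listen_addr)
    server.listen(5)

    def serve():
        while True:
            try:
                client, _ = server.accept()
            except OSError:
                return  # listener was closed
            upstream = socket.create_connection(target_addr)
            # One thread per direction, so traffic flows both ways.
            threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
            threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

    threading.Thread(target=serve, daemon=True).start()
    return server

# The first line of the rinetd config above, expressed with this sketch:
# start_forwarder(("192.168.30.201", 25), ("192.168.10.200", 25))
```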

Using two tools from the Microsoft Windows Server 2003 Resource Kit Tools, this program can be set up to run as a service: srvany.exe allows rinetd to be run as a service, and instsrv.exe does the actual installation. They are available at:

http://www.microsoft.com/downloads/details.aspx?displaylang=en&familyid=9d467a69-57ff-4ae7-96ee-b18c4790cffd

You may also find this related solution of use:
Network ports used by a PS Series group.

FYI – there is no equivalent tool in Windows 2008. The customer will have to set up a router.

*****

Instead of using srvany.exe, I got clued in to using ServiceEx from this post: http://blog.ehuna.org/2009/10/an_easier_way_to_access_the_wi.html

So configure your rinetd (note: it looks like the Java portion of the Group Manager uses ports 3002 and 3003, hence their use in the example from EQL).

And then use ServiceEx to make it into a service. The cool thing about this is that you can also access group management from any PC as well! When I used rinetd, it didn’t want to bind to port 80 because, of course, the vCenter web server was running there. So I just bound it to 8080.

Ok, so now you have a way to get your SMTP messages out, so go and enable the call-home functionality on the EqualLogic. If all goes well you should get a couple of email messages, and maybe a phone call letting you know they heard from you.
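If you want to check the SMTP leg of the redirect by hand before trusting the array with it, you can throw a test message at the rinetd listener yourself. A minimal sketch, assuming the 192.168.30.201 listener from the config above; the sender and recipient addresses are made-up examples:

```python
import smtplib
from email.message import EmailMessage

def build_test_mail(sender, recipient):
    """Build a simple test message to push through the SMTP redirect."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = "EqualLogic e-mail notification test"
    msg.set_content("If you can read this, SMTP redirection is working.")
    return msg

msg = build_test_mail("san-alerts@example.com", "admin@example.com")

# Uncomment to actually send through the rinetd listener on the
# iSCSI-side interface (example address from the config above):
# with smtplib.SMTP("192.168.30.201", 25, timeout=10) as s:
#     s.send_message(msg)
```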

Ok, now let’s try stress testing the SAN. This is my crude and not very scientific method. First things first, create a virtual machine with two NICs (so we can use MPIO), and it needs to have a server OS. Put those NICs on the same subnet as your SAN so they can talk to it (I like the VMXNET 3; I have no idea if this is good or bad for iSCSI, but it seems to work). Edit the properties to enable jumbo frames. Download the HIT (Host Integration Tools) and install them. Of course it’s yet another download that is called Setup.exe; maybe my whining will change this… probably not… anyways, the HIT installs the EqualLogic Multipath I/O DSM, integrates with the Microsoft built-in MPIO and the Microsoft iSCSI Initiator, adds an iSCSI Initiator properties tab, and enables the Dell EqualLogic MPIO tab.

If you have an outside-facing NIC configured, you will want to exclude that NIC’s subnet from MPIO. To do so, press Start, Programs, EqualLogic, and then go to the Remote Setup Wizard (I have no idea why this setting is here; it seems a button off of the new tab in the iSCSI initiator would make more sense, but that is just me). The third radio button is “Configure MPIO settings for this computer”; this is where you can exclude the desired subnet.

Ok, so on the SAN create four volumes, then use the iSCSI initiator to connect to them (using MPIO).

I’m not sure if it matters whether, during discovery, you hard-code the iSCSI initiator to use a specific adapter, choose one of the adapters, and then do this again for the other adapter.

But next, fire up Iometer: use one worker and then control-click to select the four volumes. Use 128 for the # of Outstanding I/Os on this tab. Create a new specification (on the Access Specifications tab): move the slider to 100% sequential, the other slider to 100% read, and set the Transfer Request Size to 64 kilobytes. Don’t forget to press Add to move it to the Assigned Access Specifications. On Results Display, set the update frequency to five seconds (so you can see what it’s doing), and then on Test Setup choose how long you want to run it for. Also, don’t forget to save your settings.
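A quick back-of-envelope on those Iometer numbers: 128 outstanding I/Os of 64 KB each works out to 8 MB in flight per worker, which is why a single worker is enough to keep the array busy with large sequential reads. A tiny sanity-check calculation:

```python
# Iometer settings from above: 128 outstanding I/Os, 64 KB transfers.
outstanding_ios = 128
transfer_kb = 64

# Data in flight per worker, in megabytes.
in_flight_mb = outstanding_ios * transfer_kb / 1024
print(in_flight_mb)  # 8.0
```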

Now that this is all set up, hit the green flag to see some action. What you should see in SAN Headquarters (as well as in Iometer) is over 100 MB/s of read throughput. To really stress test this sucker, clone your VM a couple of times, create four more volumes for each new VM, attach each new VM to its respective new volumes, and then let it rip. I got it up to around 400 MB a second this way and then left it to run over the weekend. After coming back on Monday, the SAN had transferred in total, I think, somewhere around 70 terabytes.
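That weekend total passes a rough sanity check: at ~400 MB/s sustained, two full days of runtime moves on the order of 66 TB, which is in the same ballpark as the ~70 TB observed (the exact figure depends on how long it actually ran and how steady the rate was):

```python
# Rough check: sustained rate from the Iometer run, times two days.
rate_mb_s = 400
seconds = 48 * 3600  # two full days

# Total moved, converted from MB to TB (1 TB = 1024**2 MB).
total_tb = rate_mb_s * seconds / 1024**2
print(round(total_tb, 1))  # ~66 TB
```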

Well if you still feel the need for punishment stay tuned for part three of this post.

6 comments

  1. If only I had found this post earlier… I’ve just set up our Dell EQL PS4000/6000 SANs and had so many questions that are answered in your posts. I also would have liked to do the stress testing like you did, but never got the chance due to deadlines… great post.

  2. We recently acquired three Dell EqualLogic PS6000s and are using VMware. My IT guy is saying that the EQL is much slower than a physical server attached to an MD1000. How could this be? Is this possible? For example, he did the following test:
    “Just to add to my previous email that today I created a 14 GB test set of files and tried to copy them from one logical drive on Lancer to another logical drive on Lancer (physical server) while at the same time copying same files from one logical drive on SonOfRobin to another logical drive on SonOfRobin (new virtual server used in the latest performance tests)

    It took 4 min and 45 sec to perform this task on the physical server. It took 11 min and 17 sec (!!!) to perform this task on the virtual server.”

    Please point me in the right direction as to what we are doing wrong. Thanks, Neil.

    • Hi Neil,
      Well first off I would like to note that I am very much still learning this stuff so take everything with a grain of salt 🙂 But I guess my first question is, did I read it right that this was a comparison of a physical machine to a virtual machine?

      That being the case, I don’t think it’s a true apples-to-apples comparison to start out with.

      First off, you might take a performance hit when you start going virtual. What are the specs of your ESX machines compared to the physical machine?

      Also, is the virtual machine a physical-to-virtual conversion? I only ask because it is named SonOfRobin (which sounds like it was P2V’d). Typically the best way to create virtual machines (if performance is desired) is to start from scratch.

      Is your OS partition aligned on the VM? Is your data partition aligned on the VM? How are you accessing the data partition? Is it iSCSI from within the VM? How many NICs are you using for iSCSI?

      What type of drives are in your PS6000s? Are they all together in one pool so that the volume spans all three for performance? What is your switching infrastructure? Is flow control enabled on the switches? Jumbo frames?

      What RAID level is on your MD1000, what RAID level is on your PS6000?

      What version of ESX are you running?

      So, a bucket of questions for you 🙂 but hopefully some things to pursue 🙂

      -Michael
