4 Operator


9.3K Posts

December 25th, 2011 11:00

A few basics:

- did you install the MD3000i host software on the Windows 2008 R2 server? (Installing both the host and management components is fine.)

- what is the subnet mask on your iSCSI IP addresses? If you're using anything smaller than a 24-bit mask, you need to fix that.

- is your iSCSI subnet (or subnets) different from your LAN subnet?

- what does your iSCSI network consist of: direct connect, a dedicated iSCSI VLAN on an existing LAN switch, or a dedicated iSCSI switch (what brand and model)?

Depending on where your bottleneck is located, creating multiple virtual disks in a single disk group can definitely decrease performance: the virtual disks all share the same physical disks, so if one virtual disk is being hit hard, the others have to wait on it.

4 Posts

December 25th, 2011 19:00

"- did you install the MD3000i host software on the Windows 2008 R2 server? (Installing both the host and management components is fine.)"

Yes, and I made sure to do so from the latest MD3000i resource disk available on the site.  Both the host and management components were installed.

"- what is the subnet mask on your iSCSI IP addresses? If you're using anything smaller than a 24-bit mask, you need to fix that."

"- what does your iSCSI network consist of: direct connect, a dedicated iSCSI VLAN on an existing LAN switch, or a dedicated iSCSI switch (what brand and model)?"

On the single controller in the MD3000i, one iSCSI port has an address of 10.0.1.1 and the other 10.0.1.2.  Both cables go into a PowerConnect 5424 switch that was configured using a Dell guide specifically about setting the switch up for iSCSI use.  As an aside, this switch in its current configuration was also in place before the rebuild of the MD3000i, when it achieved the 66MB/s performance.  Finally, the server has two ports dedicated to iSCSI on a Broadcom NIC, one with an address of 10.0.1.101 and the other 10.0.2.101.

Right now, this server is the only one that accesses the MD3000i.  In the iSCSI Initiator settings on the server, I see two "Favorite Targets", though I am noticing something odd.  One of them makes sense: it has a source IP of 10.0.1.101 and a destination IP of 10.0.1.1.  The other, however, has a blank source IP and a destination IP of 10.0.2.1.  Shouldn't that target also have a proper source IP?

4 Operator


9.3K Posts

December 26th, 2011 18:00

But what subnets (subnet masks) are you using for iSCSI? Are they 10.0.1.x /24 and 10.0.2.x /24, or are you using something smaller than a 24-bit mask? If it's the latter, your two iSCSI ports are actually in the same subnet, which could very well be part of the problem.
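The subnet arithmetic behind this point is easy to verify. Here's a minimal sketch (Python purely for illustration; the thread's actual environment is a Windows 2008 R2 server, and the addresses are the ones quoted above) showing why an 8-bit mask lumps both server iSCSI ports into one subnet, while 24-bit masks separate them:

```python
import ipaddress

# The two server-side iSCSI NIC addresses mentioned in this thread.
port_a = ipaddress.ip_address("10.0.1.101")
port_b = ipaddress.ip_address("10.0.2.101")

# With an 8-bit mask (255.0.0.0), everything in 10.x.x.x is one subnet,
# so both ports land in the same broadcast domain.
wide = ipaddress.ip_network("10.0.0.0/8")
print(port_a in wide and port_b in wide)  # True -> same subnet

# With 24-bit masks (255.255.255.0), each port gets its own subnet,
# which is what multipath iSCSI setups generally expect.
net_a = ipaddress.ip_network("10.0.1.0/24")
net_b = ipaddress.ip_network("10.0.2.0/24")
print(port_a in net_a, port_b in net_a)  # True False -> properly separated
```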

Also: is the 5424 plugged into your LAN in any way? If you used this article to configure the switch, you don't want the switch plugged into your LAN; the only connections to this switch should be the iSCSI ports from the MD3000i and the NICs on the server(s) that are used for iSCSI.

4 Posts

December 27th, 2011 06:00

I apologize; apparently my Christmas reading comprehension was lacking.  The two were on the same subnet, 255.0.0.0.  I have since changed them both to 255.255.255.0.  I did use that linked article, and while the 5424 was plugged into my LAN before, it no longer is.

After making those changes, I am still seeing about 46MB/s.  What else should I be looking at?

Thank you for the help so far.

4 Posts

December 27th, 2011 09:00

Well, it seems that my testing methodology may be flawed, so now I am sheepishly wondering A) whether I am inventing my own problems and B) what the proper testing methodology for a SAN is.

Here's the background. To test the MD3000i, both before and after, I've been using IOMeter. I removed all but one worker process, set it up to run against the volume on the MD3000i, used an "All in one" access specification, and then duplicated it so that there were ten worker processes before clicking start. Before the rebuild this would get me about 66MB/s once it stabilized; after, it gets me the 46MB/s. In both tests the MTU was initially set to 1500; I only raised it to 9000 after noticing the reduced performance (and it was something I wanted to do after the rebuild anyway).

However, I was going to gather some more data on how fast copying from local drive > MD3000i and MD3000i > MD3000i (two different partitions) was, and noticed something.  I copied a 30GB .bak file using Windows, and it showed an average speed of ~110MB/s. On a recommendation I ran ATTO Disk Benchmark against the volume instead of IOMeter, at its default settings. It shows at the top end a 172MB/s write and a 164MB/s read, with a fairly linear drop in speed as you get closer to the 0.5KB read/write sizes.

Which one should be considered definitive for testing purposes, and which is more credible?  Really, what is the proper way to test disk bandwidth, since clearly different tools can show wildly different results?
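One reason the tools disagree is transfer size: with small blocks, per-I/O overhead dominates and measured MB/s falls, even on identical hardware, which matches the drop ATTO shows toward 0.5KB. The effect can be sketched locally with a hypothetical helper that times plain buffered file writes (not a SAN, so the absolute numbers mean nothing; only the trend across block sizes is the point):

```python
import os
import tempfile
import time

def write_throughput(block_size, total_bytes=8 * 1024 * 1024):
    """Write total_bytes to a temp file in block_size chunks; return MB/s.

    Hypothetical illustration only: this measures local buffered writes,
    not iSCSI traffic, and flushes to disk once at the end.
    """
    buf = b"\x00" * block_size
    with tempfile.NamedTemporaryFile(delete=False) as f:
        path = f.name
        start = time.perf_counter()
        written = 0
        while written < total_bytes:
            f.write(buf)
            written += block_size
        f.flush()
        os.fsync(f.fileno())
        elapsed = time.perf_counter() - start
    os.unlink(path)
    return (written / (1024 * 1024)) / elapsed

# Smaller blocks mean more write() calls for the same data,
# so the reported throughput typically drops.
for size in (512, 64 * 1024, 1024 * 1024):
    print(f"{size:>7} B blocks: {write_throughput(size):8.1f} MB/s")
```

The same logic applies to the benchmarks in the thread: a tool averaging many transfer sizes (IOMeter's "All in one") will report a lower figure than a large-block sequential copy, without either being "wrong".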
