iSCSI and Switch Setup
I posted a question earlier and received no reply, so I will reword it and try again.
I have purchased the following devices:
1 PowerVault MD3220i with redundant iSCSI controllers
2 PowerConnect 6224 Switches
2 PowerEdge R410 servers with dual onboard GbE NICs
I have used the default config for the SAN which is:
RAID CM 0 port 0 : 192.168.130.101
RAID CM 0 port 1 : 192.168.131.101
RAID CM 0 port 2 : 192.168.132.101
RAID CM 0 port 3 : 192.168.133.101
RAID CM 1 port 0 : 192.168.130.102
RAID CM 1 port 1 : 192.168.131.102
RAID CM 1 port 2 : 192.168.132.102
RAID CM 1 port 3 : 192.168.133.102
In this configuration it looks like I should have 4 iSCSI VLANs (one for each subnet), but a lot of posts recommend only 2 VLANs. What does Dell recommend, and what would the setup be?
Also, the 2 servers we have will be set up in a VMware high-availability configuration, so one server will be the main and the other the backup (I believe that's the way it should be), or should they both be used? How do you connect the 2 NICs on each server? Should one be used for iSCSI, or both, with multiple VLANs including the iSCSI?
Any help would be appreciated.
mbartle
January 15th, 2012 18:00
Also, what RAID level should be set on the MD3220i for this setup? It came already set as RAID 6, but some posts mention that it should be set to RAID 1/10 for performance. Is this correct?
Kong Yang
January 15th, 2012 21:00
Hi Malcolm,
Let me address your VMware HA with two servers question first. You should utilize them both. If one of the servers in your ESXi cluster fails, the VMs will automatically restart on the other server.
As for your configuration setup, have you seen the two videos on setup & VMware integration?
Also, this answered thread might provide insights into the MD3200i iSCSI VLAN setup even though it's a Hyper-V cluster:
en.community.dell.com/.../19373464.aspx
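To make the four-subnet layout concrete, here is a minimal sketch of how each host's iSCSI ports could be spread across the array's default subnets; the host-side .20x addresses and vmk names are hypothetical examples, not Dell-recommended values:

```python
# Hypothetical host-side addressing plan for the MD3220i's four default
# iSCSI subnets. The .101/.102 targets are the array's factory defaults;
# the .20x host addresses and vmk names are made-up examples.
SUBNETS = ["192.168.130", "192.168.131", "192.168.132", "192.168.133"]

def host_iscsi_plan(host_id):
    """Give a host one vmkernel port on each iSCSI subnet."""
    plan = []
    for i, subnet in enumerate(SUBNETS):
        plan.append({
            "vmk": f"vmk{i + 1}",                    # vmk0 left for management
            "host_ip": f"{subnet}.{200 + host_id}",  # e.g. host 1 -> .201
            "targets": [f"{subnet}.101", f"{subnet}.102"],  # CM0 / CM1 ports
        })
    return plan

for port in host_iscsi_plan(host_id=1):
    print(port["vmk"], port["host_ip"], "->", ", ".join(port["targets"]))
```

Note that each subnet reaches one port on each controller module, so you keep redundant paths whether you carve those subnets into four VLANs or collapse them onto two.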
For your RAID level question, here's a great write-up comparing RAID 6 and RAID 10: www.techrepublic.com/.../2689
It really depends on your environment's needs, i.e., your application profile and quality-of-service metrics, your capacity requirements, and your rebuild-time requirements.
Regards,
KongY@Dell
mbartle
January 17th, 2012 07:00
Thanks for the info; it covered a couple of things I did not know. Very helpful.
A question on RAID 10.
We have 6 x 2 TB drives in our array, and when I connect to it using the PowerVault Modular Disk Storage Manager it says there are 11 TB available and 0 configured. This makes sense. When I set it up for RAID 10 and checked it the next day, it said there were 3.8 TB configured and 1.8 TB available. I believe this is because there was a hot spare, and thus only 4 of the 6 drives were used for RAID 10. Is that correct? Do I require a hot spare in a RAID 10 setup?
Kong Yang
January 17th, 2012 14:00
Hi Malcolm,
I'm glad that the info helped you out :)
That is correct - if you have a hot spare and only 6 drives, RAID 10 can only use 4 of the 6 drives, since there would be no mirrored pair for the remaining drive. With RAID 10, I am not a fan of a hot spare if you only have an even number of drives in the storage array. In the event of a failure, you can always hot-plug another drive into the failed mirror's slot and initiate a rebuild. What a hot spare gives you in most cases is that the RAID controller, after sensing a drive failure, will pull in the hot spare and should auto-initiate a rebuild without manual intervention. In the case where you do want a hot spare, an N+1 drive scenario would be best (where N is an even number), because you can fully utilize all N drives while keeping 1 drive as the hot spare.
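As a quick sanity check on the numbers you saw, here is a minimal sketch of the RAID 10 capacity arithmetic; the ~1.9 TB usable per nominal 2 TB drive is an assumed formatted-capacity figure, not a value read from your array:

```python
def raid10_layout(total_drives, hot_spares, drive_tb):
    """RAID 10 mirrors whole drive pairs; any odd drive left over after
    reserving hot spares cannot join the array and stays unconfigured."""
    data_drives = total_drives - hot_spares
    pairs = data_drives // 2
    leftover = data_drives % 2
    usable_tb = pairs * drive_tb        # each pair contributes one drive's capacity
    unconfigured_tb = leftover * drive_tb
    return usable_tb, unconfigured_tb

# 6 x 2 TB drives with 1 hot spare; ~1.9 TB usable per drive is an assumption
usable, free = raid10_layout(6, hot_spares=1, drive_tb=1.9)
print(f"configured ~{usable} TB, still available ~{free} TB")  # ~3.8 TB and ~1.9 TB
```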
Regards,
KongY@Dell
mbartle
January 17th, 2012 14:00
Gotcha,
What would you suggest for the iSCSI setup? We purchased 2 R410's with only their 2 onboard GbE NICs. It doesn't seem to me that these would be enough to carry both the iSCSI traffic and the network traffic for the VMs, and what would be the point of the 2 iSCSI RAID controllers with 4 ports each if you only have 2 ports to connect on the servers? Should I get the 4-port GbE module for each server and dedicate those to iSCSI? Also, will ESXi 5 support those modules?
Sorry about all the questions.
mbartle
January 18th, 2012 06:00
Hopefully this is the last question:
Can you add drives to a RAID 10 on an MD3200i without destroying the RAID and rebuilding it?
JOHNADCO
January 18th, 2012 12:00
You can add free capacity on the Modify tab of the Storage Manager software, but I have not tried adding to a RAID 10 before.
mbartle
January 18th, 2012 12:00
I have ordered the 4-port card. Just to clarify: the 4 ports would be for iSCSI, 1 onboard port for vMotion, and the other onboard port for LAN traffic?
I read the document you sent on RAID, and it seems to say that disk writes are more than twice as fast for RAID 10 while disk reads are about the same. We are eventually going to deploy some thin clients to replace 2 or 3 desktops. Would the decision on RAID affect a user's performance when using their virtual desktop?
JOHNADCO
January 18th, 2012 12:00
PS: RAID 10 is good, but generally people will have way more spindles than you have. I think you would probably get better overall performance with your drive count in a RAID 5.
We use R415's, and you have to be careful how you configure them to make sure the one expansion slot is free. If the slot is available, you can go for a dual-port Intel card for not much money; Dell charges too much for its multiport NICs at the time of order. You generally want two paths for iSCSI at a minimum, plus at least one separate port for LAN traffic, so three ports minimum. Four is really nice because you have an extra port for a separate vMotion network.
Kong Yang
January 19th, 2012 09:00
Malcolm,
RAID decisions can factor into the end-user experience with a virtual desktop. Factors to consider: read/write disk latencies, I/Os per second (IOPS), the number of disk spindles, the application and I/O profile, and caching on the VMs. You will need to measure the performance metrics mentioned above before and after, and at least match them to provide a similar end-user experience in terms of quality of service.
A good rule of thumb for IOPS calculations: RAID 10 can sustain about 180-200 IOPS per spindle for random I/O, and RAID 5 can sustain about 150 IOPS per spindle (not counting the hot spare and one parity drive's worth of spindles) for similar random I/O.
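To put rough numbers on those rules of thumb for a 6-drive array with a hot spare, here is a minimal sketch; the per-spindle figures are the estimates above, not measured values:

```python
def usable_spindles_raid10(total, hot_spares):
    # RAID 10 needs whole mirrored pairs; an odd leftover drive sits idle
    return ((total - hot_spares) // 2) * 2

def usable_spindles_raid5(total, hot_spares):
    # subtract the hot spare and one drive's worth of parity
    return total - hot_spares - 1

drives, spares = 6, 1
raid10_iops = usable_spindles_raid10(drives, spares) * 190  # ~180-200 IOPS/spindle
raid5_iops = usable_spindles_raid5(drives, spares) * 150    # ~150 IOPS/spindle
print(f"RAID 10: ~{raid10_iops} IOPS, RAID 5: ~{raid5_iops} IOPS")
# RAID 10: ~760 IOPS, RAID 5: ~600 IOPS
```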
Regards,
KongY@Dell
JOHNADCO
January 19th, 2012 13:00
For the best redundancy, go ahead and use one onboard LAN port for iSCSI and one port on the 4-port card for iSCSI. That way, either the card or the LAN subsection of the motherboard can fail and you will still have access to your storage.
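Here is a minimal sketch of that split, assuming the final build ends up with 2 onboard ports plus the 4-port card; the role assignments are illustrative labels, not a Dell recommendation:

```python
# Illustrative role map for 2 onboard + 4 add-in GbE ports, keeping one
# iSCSI path on each piece of hardware so a single card or LOM failure
# cannot take down all storage paths. The names are made-up examples.
PORTS = {
    "onboard0": "iSCSI path A",
    "onboard1": "LAN / VM traffic",
    "addin0": "iSCSI path B",
    "addin1": "iSCSI path C",
    "addin2": "iSCSI path D",
    "addin3": "vMotion",
}

iscsi_hardware = {name[:-1] for name, role in PORTS.items() if "iSCSI" in role}
assert iscsi_hardware == {"onboard", "addin"}, "iSCSI should span both NICs"
```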
mbartle
January 31st, 2012 07:00
Question
I ordered and received the Dell / Intel Gigabit ET quad-port copper PCIe x4 NICs.
I went to install them and realized that there is a PERC H700 RAID controller in each of the systems. Obviously there is only one PCIe slot, so the RAID controllers will have to be removed. (I should have ordered these servers myself.) Can I remove these cards and use the onboard RAID without much performance degradation?
mbartle
January 31st, 2012 08:00
SAS 6/iR card? Do you mean onboard?
Do RAID options really matter? The SAN provides the main storage anyway, and if something fails it moves over to the backup server anyway. Am I right on this?
Why did you order a replacement host? Couldn't you have just taken out the RAID controller, installed the 4-port GbE card, and then used the onboard RAID? Is there something I'm missing?
JOHNADCO
January 31st, 2012 08:00
Bummer.... We ran into the same thing with R415's.
Luckily only one host was ordered wrong, but it was months before we realized it. They told me I could not change it after the fact, and that they would not take the host back as it had been too long. I had to order a replacement host.
It's really important on these to use the SAS 6/iR card to keep that slot available. That really limits your RAID options, though it's not too bad with 2 TB SATA drives; you can have 2 TB mirrored at least.
Kong Yang
February 20th, 2012 08:00
Malcolm,
The SAS 6/iR is the onboard controller.
Onboard RAID matters when you need the redundancy and availability of the local drives for your local data. You get some level of redundancy and availability with every RAID level except RAID 0, which just spans across disks to give you capacity and spindles, but at the cost of losing the whole volume if just one drive in the RAID 0 set fails.
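As a rough illustration of why a RAID 0 span gets riskier as you add spindles, here is a minimal sketch; the 3% annual drive failure rate is an assumed example figure, not a measured one:

```python
# A RAID 0 volume survives only if every member drive survives, so its
# survival probability falls as drives are added. The 3% annual failure
# rate per drive is an assumed illustrative figure.
def raid0_survival(n_drives, annual_failure_rate=0.03):
    return (1 - annual_failure_rate) ** n_drives

for n in (1, 2, 4, 6):
    print(f"{n} drives: {raid0_survival(n):.1%} chance the volume survives the year")
```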