
September 4th, 2012 21:00

Jumbo frames on default VLAN (PC6224) & RAID on 24-drive PS6100

Hi,

We've got a couple of PS4000 units connected via two PC6224 switches, linked via a dual-port SFP+ LAG. The first 16 ports on each switch are assigned to VLAN 1, which is the default VLAN, and the rest to VLAN 2 for management traffic.

Jumbo frames are configured, but can someone confirm whether it's true that it's a bad idea to run iSCSI traffic with jumbo frames on the default VLAN, given that "The Link Layer on the default VLAN was designed to provide reliable communication for 1600 bytes, when using Jumbo on the default VLAN, the larger Jumbo frames would not be reliable, and you would see packets dropped."

We've also just ordered a PS6100 with 24x 3TB drives, and we're planning to set it up in a totally separate infrastructure from the PS4000s. Any advice on best practice for configuring the RAID level? For the previous two PS4000 units we used RAID 6, and since each chassis holds only 16 drives, it's essentially two 16-drive RAID 6 sets together in a single storage pool. But given that we're now dealing with 24 drives in a single chassis, would RAID 50 make more sense than sticking with RAID 6?

According to the Config guide, on PS6100 units with 24 drives, RAID 6 is implemented as multiple RAID 6 sets within a single chassis rather than a single set, totalling 23 data and parity drives with 1 hot spare.

If I'm not mistaken, for such large RAID sets it makes more sense, for reliability, rebuild time, etc., to use RAID 50 instead of RAID 6?

Thanks in advance

4 Operator • 1.8K Posts

September 5th, 2012 00:00

Hello,

In August 2012, Dell published a new RAID configuration guide, and with the latest firmware (5.2.5) there are some changes. For all models with drive sizes >= 1TB they strongly suggest RAID 6 because of the long rebuild times of those drives. The chances are higher that you get a double drive failure during a rebuild, and with RAID 6 you will survive this.

They also removed the ability to create a RAID 5 set from the GUI; you can now only do it from the command line. If needed, I can upload the document.

Regards

Joerg

4 Operator • 9.3K Posts

September 5th, 2012 06:00

Actually, the document says that RAID 6 or RAID 10 is recommended for 1TB or larger drives.

It also says that RAID 6 is good for sequential reads and writes and random reads, but for random writes (e.g., when virtualizing and/or running database-type applications), RAID 10 is better.

7 Technologist • 729 Posts

September 5th, 2012 07:00

Regarding Jumbo support on the Default VLAN:

The largest supported frame size on VLAN 1 is typically 1600 bytes, commonly referred to as a “Baby Giant” in Cisco documentation. It is recommended that user traffic be configured on VLANs other than VLAN 1, primarily to prevent unnecessary user broadcast and multicast traffic from being processed by the Network Management Processor (NMP) of the supervisor, and to avoid unreliable handling of jumbo frames (i.e., sometimes packets go through as jumbo, and sometimes they don’t).

There is a way to enable jumbo on the default VLAN (disabling LLDP transmit and receive, then setting each port to use an MTU of 9216, and enabling spanning-tree portfast), but the benefits of doing this don’t outweigh the simple task of setting up separate VLANs for iSCSI and your user network traffic.

So the best practice is not to use Jumbo on the default VLAN.
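For illustration, moving iSCSI onto its own VLAN with jumbo frames on a PC6224 looks roughly like the sketch below. This is PowerConnect 62xx-style syntax only; the VLAN ID (100) and port (1/g1) are placeholders, and exact commands vary by firmware, so verify against your switch's CLI guide before applying anything.

    ! Illustrative sketch only: VLAN 100 and port 1/g1 are placeholders.
    ! Repeat the interface block for each iSCSI-facing port.
    vlan database
    vlan 100
    exit
    !
    ! Put the iSCSI port in the new VLAN with jumbo frames enabled
    interface ethernet 1/g1
    switchport mode access
    switchport access vlan 100
    mtu 9216
    spanning-tree portfast
    exit

With iSCSI isolated this way, the default VLAN keeps its standard MTU and LLDP settings, and you avoid the workaround above entirely.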

-joe

2 Posts

September 5th, 2012 15:00

Thanks Joe-S. Unfortunately, that's exactly how we've got it set up at the moment (LLDP disabled, MTU 9216, and portfast on the default VLAN), but we'll do a reconfig and switch iSCSI over to a new VLAN.

DevMgr/Joerg: I was under the impression that rebuild time would be substantially longer for a 24x 3TB drive array on RAID 6 compared to RAID 50. While it's true that RAID 6 can tolerate a second failed drive anywhere in the 24-drive array during a rebuild, RAID 50 should also be able to tolerate a second failed drive in the array as long as it's not in the same RAID 5 set, and if I'm not mistaken, on the PS6100 that's implemented as two RAID 5 sets of 11 drives, each with 1 hot spare.

So with RAID 6 it can tolerate 2 of the 23 data/parity drives failing, i.e. 8.7%, whereas with RAID 50 it can tolerate 1 of the 11 data/parity drives in each RAID 5 set failing, i.e. 9%. The risk seems similar, but with a lower rebuild time, and 2 hot-spare disks out of 24 drives instead of just 1.
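To sanity-check those percentages, here's a quick Python sketch (the 23-drive RAID 6 set and the two 11-drive RAID 5 sets are my reading of the config guide, not something I've verified):

    # Assumed layouts: RAID 6 = one 23-drive data+parity set + 1 hot spare;
    # RAID 50 = two 11-drive RAID 5 sets + 2 hot spares.
    raid6_set = 23
    raid5_set = 11

    # Worst-case tolerated failures as a fraction of each set:
    print(f"RAID 6 : {2 / raid6_set:.1%} of the set can fail")   # ~8.7%
    print(f"RAID 50: {1 / raid5_set:.1%} of each set can fail")  # ~9.1%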

Would appreciate your thoughts.

5 Practitioner • 274.2K Posts

September 7th, 2012 21:00

Re: rebuild times. If you had two failures in RAID 6, the rebuild would take longer, but with only one failure in either case it's about the same, since RAID 50 is just two RAID 5 RAIDsets. Likewise, if you had a failure in each RAID 5 RAIDset at the same time, it would take longer to rebuild.

Correct, RAID 50 can tolerate two drive failures as long as they aren't in the same RAIDset. But RAID 5/50 is simply riskier than RAID 6, especially as drives get larger and larger: the window for a second failure grows and must be understood. If you don't use RAID 6, then more frequent replication and/or backups should be considered to mitigate the risk of data loss.
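To put a rough number on that window (an illustrative Python calculation only, reusing the two 11-drive RAID 5 sets assumed earlier in the thread): once one drive has failed, RAID 6 survives any second failure, but RAID 50 survives only if the second failure lands in the other RAID 5 set.

    # After one failure in an 11-drive RAID 5 set, 10 drives remain in the
    # degraded set and 11 in the healthy set. A second random failure is
    # fatal only if it hits the degraded set.
    degraded, healthy = 11 - 1, 11
    p_fatal = degraded / (degraded + healthy)
    print(f"RAID 50: {p_fatal:.0%} chance the second failure loses data")  # ~48%
    print("RAID 6 : 0% (any two concurrent failures are tolerated)")

So even though the headline percentages look similar, RAID 50's exposure during a rebuild is much higher.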
