
August 13th, 2016 23:00

CX4-120, new raid groups slow disk writes.

Hey,

We just acquired a CX4-120 that was used by another department. After recreating the RAID groups and configuring the required LUNs, the write speeds are very low. I was wondering if this is due to the background initialization running on the LUNs. This is quite a small box, with around 30 disks (a mix of FC and SATA), so I would expect fairly high IOPS, but an average write speed of 20MB/s seems quite low.

Thanks!

Etienne

1 Rookie • 20.4K Posts

August 14th, 2016 05:00

Did you check whether write cache is configured and enabled? How's the read speed?
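As a sketch, on a CX4 the cache state can be read with Navisphere Secure CLI; the SP address and credentials below are placeholders, and `getcache` is the classic CLARiiON cache-status command:

```shell
# Placeholder SP hostname and credentials -- substitute your own.
# getcache reports whether read/write cache is enabled on each SP,
# plus the low/high watermarks.
naviseccli -h spa-hostname -user admin -password mypassword -scope 0 getcache
```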

4 Posts

August 14th, 2016 23:00

It's enabled on both SPs, currently at 60% low and 80% high watermarks. Disk read is around 180MB/s. Could these low write speeds be related to the zeroing of the disks?

1 Rookie • 20.4K Posts

August 15th, 2016 04:00

I have not tested performance right after RG creation, but I would not expect the impact to be very significant; it should be a background task. How are you testing write performance?

4 Posts

August 15th, 2016 07:00

I'm using dd:

[root@xxxxxxxx ~]# dd if=/dev/zero of=/dev/mapper/IQ_MAIN bs=128 count=50000

50000+0 records in

50000+0 records out

6400000 bytes (6.4 MB) copied, 0.32338 s, 19.8 MB/s
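One thing worth noting about the dd test itself: without a suffix, bs=128 means 128 *bytes* per write, so the measurement is dominated by per-request overhead rather than the array. A sketch of a more representative sequential test (same device path as above; destructive to the LUN, and oflag=direct assumed supported on this kernel):

```shell
# Large block size, and O_DIRECT to bypass the page cache entirely.
# WARNING: this overwrites the target device.
dd if=/dev/zero of=/dev/mapper/IQ_MAIN bs=1M count=1024 oflag=direct
```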

Dirty caching has been set to 0 so writes flush directly to disk:


vm.dirty_background_ratio = 0
vm.dirty_background_bytes = 0
vm.dirty_ratio = 0
vm.dirty_ratio = 10
### added dirty_expire and dirty_writeback
vm.dirty_expire_centisecs = 500
vm.dirty_writeback_centisecs = 100
vm.dirty_bytes = 0
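A side note on the settings above: when a key appears twice in sysctl.conf, the last occurrence wins, so the effective value here should be vm.dirty_ratio = 10, not 0. The values the kernel actually applied can be read back (standard sysctl usage):

```shell
# Read back the live values; duplicates in the config file are
# resolved last-one-wins, so this shows what is really in effect.
sysctl vm.dirty_ratio vm.dirty_background_ratio \
       vm.dirty_expire_centisecs vm.dirty_writeback_centisecs
```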

Not sure what else I could check; maybe the LUN configuration is what is causing the issue?

Thanks!

4.5K Posts

August 15th, 2016 08:00

Write performance depends on a lot of different factors: the amount of write cache memory assigned (it should be much larger than the read cache memory), the design of the raid group (RAID 5, RAID 6, RAID 10), the speed of the disks in the raid group (FC at 10K or 15K RPM, or SATA at 7200 RPM), the host connection type (FC or iSCSI), the speed of the connection, etc.

If you could provide a more detailed description of the type of disks used in each raid group and the raid type, that would help.

The ratings for the different disks in the CX4 are below; these are the "rule-of-thumb" numbers EMC uses to help determine how a particular workload will behave. The throughput numbers are based on a small-block (less than 32KB IO size), random workload with a read/write ratio of 80/20. The bandwidth numbers are based on an IO size of 64KB or larger.

Throughput (IO/s):

15K RPM FC = 180 IOPS
10K RPM FC = 120 IOPS
7200 RPM SATA = 80 IOPS

Bandwidth (MB/s):

15K RPM FC = 12-14 MB/s
10K RPM FC = 10-12 MB/s
7200 RPM SATA = 8 MB/s

All of these are general in nature and are affected by other activity occurring on the array. See the following document for a more detailed look at how performance is measured.

https://www.emc.com/collateral/hardware/white-papers/h5773-clariion-best-practices-performance-availability-wp.pdf

glen

4 Posts

August 15th, 2016 23:00

Thanks Glen for the info provided, appreciate it. I already had a look at the doc and it's quite informative. Regarding IO/s and bandwidth, I'm aware of how much throughput each disk type can provide, and that's what is strange here: even though we have multiple SATA disks, I cannot get past the mentioned 20MB/s. Below are the RAID groups we have, with the disk type and number of disks in each:

RAID GROUP 0 - RAID 1/0, FC, 2 disks
RAID GROUP 1 - RAID 1/0, FC, 8 disks
RAID GROUP 2 - RAID 5, SATA2, 5 disks
RAID GROUP 3 - RAID 5, SATA2, 4 disks
RAID GROUP 4 - SATA2, hot spare
RAID GROUP 5 - FC, hot spare

On top of these we created the LUNs to be used by our platform: R1/0 LUN 1 on RAID GROUP 0, R1/0 LUN 2 on RAID GROUP 1, multiple R5 LUNs on RAID GROUP 2, and one R5 LUN on RAID GROUP 3.

Thanks!

Etienne.

4.5K Posts

August 16th, 2016 08:00

Let's take RG 2 as an example: this is a 4+1 R5, and each disk can handle about 6MB/s, so with 5 disks that would be about 30MB/s, without anything else going on in that raid group.

You could try using IOmeter as a better test. Set it up to run 100% writes, 100% sequential, 64KB IO size; this would give you the best possible test. IOmeter uses "Workers", and the more threads you generate, the faster the performance, so set up at least 8 workers and set the queue depth to 10. Single-threaded tests are not a good measure of an array's potential, as the array is designed to work best with multiple threads.
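If the test host is Linux, a roughly equivalent load can be generated with fio instead of IOmeter; a sketch assuming fio is installed (100% sequential writes, 64KB blocks, 8 workers, queue depth 10; destructive to the target LUN):

```shell
# Mirrors the IOmeter settings described above.
# WARNING: --filename points at a raw device, so this overwrites data.
fio --name=seqwrite --filename=/dev/mapper/IQ_MAIN \
    --rw=write --bs=64k --numjobs=8 --iodepth=10 \
    --direct=1 --ioengine=libaio \
    --runtime=60 --time_based --group_reporting
```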

glen
