32 Posts
2
3448
Write Pending too high
During a database restore activity we found our cache utilization went very high, to almost 98%.
Later we were able to sort it out by implementing a host I/O limit on the server.
Now we have noticed that one of the LUNs has a very high Write Pending count.
Notice the highlighted LUN 0892: it is bound to a SATA pool, doing only 63.2 IOPS, yet its Write Pending count goes up to 2.8 lakh (280,000) tracks.
Can anyone explain what actually happened here?
How can 63 IOPS lead to 2.8 lakh WP?
Quincy561
1.3K Posts
0
December 16th, 2014 09:00
Too much FE write workload for the BE resources. Increase the BE capability to destage (add disks, add DAs, change the RAID type, use faster disks, etc.).
If it is the SATA pool that is overloaded, you can use DCP (Dynamic Cache Partitioning) to fence the writes to that pool, so other writes going to EFD or FC don't suffer as well.
You could also apply host I/O limits if you know which SGs are overwhelming the back end.
Neel_c
32 Posts
0
December 16th, 2014 06:00
Cache Size (Mirrored) : 240640 (MB)
# of Available Cache Slots : 3282344
Max # of System Write Pending Slots : 2461758
Max # of DA Write Pending Slots : 0
Max # of Device Write Pending Slots : 123087
Replication Cache Usage (Percent) : 0
Max # of Replication Cache Slots : 461579
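A quick sanity check on the counters listed above (values taken straight from the symcfg output): the per-device WP ceiling works out to 5% of the system-wide WP ceiling, which is the usual Symmetrix relationship.

```python
# Sanity-check the relationship between the listed cache counters
# (values copied from the symcfg list -v output above).
system_wp_slots = 2_461_758   # Max # of System Write Pending Slots
device_wp_slots = 123_087     # Max # of Device Write Pending Slots

# The per-device ceiling comes out to 5% of the system WP ceiling.
ratio = device_wp_slots / system_wp_slots
print(f"device/system WP ratio: {ratio:.3f}")  # ~0.050
```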
I understand that, but check the IOs on that device: it's just 63. Then check the WP for that device. It doesn't seem to add up.
Quincy561
1.3K Posts
0
December 16th, 2014 06:00
Looks like a lot of other devices had high WP counts as well. How many slots can be WP? Per device and system? You can get this from a symcfg list -v
Generally high WP counts are because the back-end cannot keep up with front-end writes.
Quincy561
1.3K Posts
0
December 16th, 2014 08:00
Is that device a meta volume? The count (123,087) is per member.
Quincy561
1.3K Posts
1
December 16th, 2014 08:00
So that is only about 23,000 per member.
I don't see the IO size; maybe they are 1 MB IOs, which might lead to more WP tracks, or maybe the IO rate was higher before, leading to high WP counts.
Again, if you have high WP counts, the BE can't keep up with the FE, but other devices could be contributing to the BE issue too.
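Quincy's per-member figure checks out arithmetically: the observed ~2.8 lakh WP tracks spread across the 12 meta members is well under the per-member device WP ceiling from the symcfg output.

```python
# Per-member arithmetic for the 12-member meta device,
# using the numbers quoted earlier in the thread.
observed_wp_tracks = 280_000  # ~2.8 lakh observed WP tracks
meta_members = 12
per_member_limit = 123_087    # Max # of Device Write Pending Slots (per member)

per_member_wp = observed_wp_tracks / meta_members
print(f"~{per_member_wp:,.0f} WP tracks per member")
print(per_member_wp < per_member_limit)  # still under the per-member ceiling
```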
Neel_c
32 Posts
0
December 16th, 2014 08:00
Yes, it's a 12-member meta device.
Still, WP going that high for a device with just 63 IOPS doesn't make sense.
WP can go high only in the case of intensive IOs, writes to be precise. How is it that just 63 IOPS are leading to a WP of almost 2.8 lakh?
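It can make sense because WP is a level, not a rate: it grows whenever tracks enter cache faster than the back end destages them. A rough model with assumed numbers (a 64 KB track size, Quincy's guess of 1 MB host writes, and a hypothetical SATA destage rate) shows how 63 write IOPS can still build a large backlog:

```python
# Illustrative model (assumed numbers, not measured from this array):
# a modest write IOPS rate can accumulate a large WP backlog when the
# back end destages slower than the front end writes.
TRACK_KB = 64          # track size assumed at 64 KB
io_size_kb = 1024      # assume large 1 MB host writes (Quincy's guess above)
write_iops = 63

tracks_in_per_sec = write_iops * io_size_kb / TRACK_KB  # tracks/s into cache
destage_per_sec = 300  # hypothetical SATA back-end destage rate

backlog_growth = tracks_in_per_sec - destage_per_sec    # tracks/s accumulating
seconds_to_280k = 280_000 / backlog_growth
print(f"~{seconds_to_280k / 60:.0f} minutes to reach 280,000 WP tracks")
```

Under these assumptions the backlog reaches 2.8 lakh tracks in only a few minutes, even though the IOPS number looks tiny.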
Neel_c
32 Posts
0
December 16th, 2014 09:00
So is it a case of queuing at the BE?
How do we rectify it?
We had a serious issue when the cache utilization went up to 99%.
gautam_sunil
7 Posts
0
December 22nd, 2014 23:00
Have you created and enabled a DSE pool in your environment? A DSE pool helps in lowering the WP level.
sauravrohilla
859 Posts
1
December 23rd, 2014 01:00
A DSE pool is helpful when SRDF is not well planned; it does not help in every high-WP case.
Regards,
Saurabh Rohilla