schandran1
3 Posts
0
November 17th, 2013 02:00
Hi,
I'm not sure if I've understood you correctly... If you want to find out how your LDEV is spread across the tiers in FAST VP, this syntax will show you how many tracks are in each tier, based on the FAST VP policy associated with the LDEV's storage group.
# symfast -sid <SymmID> show -association -sg <SgName> -v
Example Output:
Policy Name : APP_Cost_Reduction
Priority : 2
RDF Coordination : Disabled
Tiers(2)
{
--------------------------------------------------------------------
                                            L
                                 Max SG     O          Target
Tier Name                        Type Percent C Tech   Protection
-------------------------------- ---- -------- - ----- -------------
Tier1_FC_R1                      VP        100   FC    RAID-1
Tier2_SATA_R6                    VP         50   SATA  RAID-6(6+2)
}
Legend:
Tier Type : DP = Disk Group Provisioning, VP = Virtual Pools
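As a rough illustration (assuming the output layout shown in the example above, which may vary by Solutions Enabler version), a small script can pull the tier table out of the `symfast` output for quick comparison across storage groups:

```python
# Sketch: parse the tier table from `symfast show -association` output.
# Assumes the column layout shown in the example above; real output can
# differ by Solutions Enabler version, so treat this as illustrative only.

def parse_tiers(output: str):
    """Return a list of dicts, one per tier line inside the Tiers {...} block."""
    tiers = []
    in_block = False
    for line in output.splitlines():
        stripped = line.strip()
        if stripped == "{":
            in_block = True
            continue
        if stripped == "}":
            in_block = False
            continue
        if not in_block or not stripped or stripped.startswith("-"):
            continue
        parts = stripped.split()
        # Expect data rows of the form: <name> <type> <max_sg_percent> <tech> <protection>;
        # the 5-token check also skips the multi-line column headers.
        if len(parts) == 5 and parts[1] in ("VP", "DP"):
            tiers.append({
                "name": parts[0],
                "type": parts[1],
                "max_sg_percent": int(parts[2]),
                "tech": parts[3],
                "protection": parts[4],
            })
    return tiers

sample = """\
Tiers(2)
{
Tier1_FC_R1                      VP        100   FC    RAID-1
Tier2_SATA_R6                    VP         50   SATA  RAID-6(6+2)
}
"""

for t in parse_tiers(sample):
    print(t["name"], t["tech"], t["protection"])
```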
karimsid
3 Posts
0
November 17th, 2013 07:00
Hi Mr.Chandran,
my question is if there is read from the LUN (ldev) how do I know if the read was from sata blocks or FC blocks?
is there report I can run to find out from performance analyzer?
Regards,
KD
PedalHarder
3 Apprentice
•
465 Posts
1
November 17th, 2013 14:00
As Schandran says, you can find out how much capacity is on what tier per lun, but the customer tools available can't tell you where an individual IO is serviced from.
There are, however, performance metrics that can help determine where a performance issue may lie. The heat map in U4V is a good place to start. You can also get stats at the disk level showing read and write response times, which can indicate whether a disk pool is being overworked.
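Those disk-level response-time stats are typically derived from cumulative counters: total time spent on reads divided by the number of reads over a sampling interval. A minimal sketch of that calculation, with hypothetical counter names and values:

```python
# Sketch: derive average read response time for an interval from two samples
# of cumulative counters. Counter names and values are made up; real metric
# names depend on the tool exporting them.

def avg_read_response_ms(sample_start: dict, sample_end: dict) -> float:
    """Average read response time (ms) over the interval between two samples."""
    delta_time_ms = sample_end["read_time_ms"] - sample_start["read_time_ms"]
    delta_reads = sample_end["read_count"] - sample_start["read_count"]
    if delta_reads == 0:
        return 0.0
    return delta_time_ms / delta_reads

t0 = {"read_time_ms": 500_000, "read_count": 100_000}
t1 = {"read_time_ms": 560_000, "read_count": 105_000}

print(avg_read_response_ms(t0, t1))  # 60,000 ms over 5,000 reads -> 12.0 ms
```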
schandran1
3 Posts
0
November 17th, 2013 21:00
Hi KD,
As far as I know, FAST VP's data gathering, analysis, and chunk movement are done by the Symmetrix software as part of Enginuity. Customers have no access to those reports; maybe L3 or Engineering support can access them.
Just curious: what would you conclude by knowing where your LDEV's reads come from? I would prefer to leave that to FAST VP, because if it finds that your LDEV has a read bottleneck, it will promote the chunks to the highest tier allowed by your policy; if not, it will retain them in the default bound pool or demote them to the lowest tier.
Also, if you really have a read IO issue, I would suggest moving that LDEV/storage group to a FAST SSD policy tier if you have one, because SSDs are proven performers for heavy random reads.
Thank You!
rawstorage
3 Apprentice
•
423 Posts
0
November 18th, 2013 00:00
JasonC is correct; the only reports we can gather are for capacity usage per LUN. If you are experiencing latency, it's most likely that a pool is being overworked. You can drill down with diagnostic views in Unisphere Performance Analyzer to see which LUNs are busiest at the time you experience the issue and which pools are affected. It may be that heavy writes or something else is contributing. A holistic look at all components in the array at the time of the issue will show whether you have a bottleneck, and a plan of action can be drawn up.
If you are experiencing performance issues, it may be a good idea to have your local performance FSS take a look and check your system from top to toe.
karimsid
3 Posts
0
November 18th, 2013 16:00
The reason I would like to know is that our FAST policy is disabled due to high pool utilization. We are manually moving the workload to pinned FC (by vMotion). When a user reports reads being slow, I am curious to know whether the latency is due to the read blocks being fetched from SATA; since FAST is disabled, the array can't take any action to promote them to a higher tier. We would like to make sure that the VMs we are moving are the right candidates for the move to a pinned FC LUN. I hope that makes sense.
SYMCLIGuy
98 Posts
0
November 19th, 2013 06:00
There are several FAST VP dashboards included in the performance component of Unisphere for VMAX. One of these dashboards is FAST by Storage Group. Charts included on this dashboard include average read response time from disk for the whole SG, so you can tell overall how the SG is performing. It also includes charts detailing the backend throughput by tier, the number of backend requests by tier, and the IO density by tier. From this, you should be able to determine if there is a significant amount of IO being driven through any particular tier.
You will not be able to tell which tier a specific read request came from.
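For context, the "IO density by tier" chart mentioned above reflects back-end IOs relative to the capacity allocated on each tier. A rough sketch of that ratio, with hypothetical numbers (the exact definition Unisphere uses may differ):

```python
# Sketch: IO density as back-end IOPS per GB allocated on a tier.
# All figures below are hypothetical, for illustration only.

def io_density(backend_iops: float, allocated_gb: float) -> float:
    """Back-end IOs per second per allocated GB; 0.0 if nothing is allocated."""
    return backend_iops / allocated_gb if allocated_gb else 0.0

tiers = {
    "FC":   {"backend_iops": 4000.0, "allocated_gb": 8000.0},
    "SATA": {"backend_iops": 1200.0, "allocated_gb": 24000.0},
}

for name, t in tiers.items():
    print(name, round(io_density(t["backend_iops"], t["allocated_gb"]), 3))
```

A much higher density on the SATA tier than expected would suggest hot data is sitting on the slow tier.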
karimsid
3 Posts
0
November 19th, 2013 10:00
That is a high-level overview by tier and SG. Our storage group has 62 LUNs, so it does not help when we have that many LUNs in the SG. Looking at these responses, my request seems to be legitimate. Is there a possibility of submitting it as an RFE, or is this the right forum to discuss that? I am not sure whether any of the responses are from EMC employees. Thanks for all your replies and for taking the time to respond.
cheers
Karim
PedalHarder
3 Apprentice
•
465 Posts
1
November 19th, 2013 14:00
Yes, you have responses from EMC employees in this thread. Most of the performance issues we deal with are resolved without going to the level of detail you are talking about. To perform an IO and also record, on an IO-by-IO basis, where the data came from is a significant overhead: the data needs to be stored somewhere, and the additional processing will slow the IO rate down. I don't think this is a practical solution.
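To put that overhead in perspective, here is a back-of-envelope estimate (with assumed figures) of what per-IO provenance logging would cost in storage alone:

```python
# Back-of-envelope sketch: storage cost of recording, per IO, which tier
# serviced it. All figures are assumptions chosen for illustration.

iops = 20_000            # assumed sustained IO rate for one array
bytes_per_record = 16    # e.g. timestamp + device + LBA + tier id
seconds_per_day = 86_400

bytes_per_day = iops * bytes_per_record * seconds_per_day
gb_per_day = bytes_per_day / 1024**3

print(round(gb_per_day, 1))  # roughly 25.7 GB of log data per day
```

And that is before counting the CPU cycles spent writing each record in the IO path.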
If you have a performance difficulty that is not explained by the metrics available to you, the best approach is to open a Service Request with EMC. We have performance specialists who can help answer the questions and resolve the issue.
mkdurrani
2 Posts
0
November 19th, 2013 17:00
Thanks for all the responses. Can someone tag this as answered and close the thread. I am not able to see that option.
Karim