
February 1st, 2013 08:00

How to identify where the performance bottleneck is in an I/O stack?

We run Oracle and Red Brick databases on filesystems on AIX LPARs; we also run Oracle on ASM/raw devices.

These LPARs are virtualized in an NPIV configuration, i.e., their virtual HBAs are mapped to the physical 8 Gb HBAs on the VIO server.
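For reference, one can confirm from the client LPAR that these are NPIV virtual FC client adapters (the adapter name fcs0 below is an example):

# FC adapters on the client LPAR; NPIV clients show up as
# "Virtual Fibre Channel Client Adapter"
lsdev -Cc adapter | grep fcs
lscfg -vl fcs0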

My question, as stated in the subject, is: how do I identify where the performance bottleneck is in the I/O stack?

More details:

1. iostat on the AIX servers reports alarming average read response times; we have seen response times as high as 150 ms.
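To separate time spent queued on the host from time the storage takes to service a request, the extended iostat statistics help; a minimal sketch (disk name, interval, and count are examples):

# Per-disk extended statistics: read/write avgserv is the average
# service time in the storage, the queue section's avgtime/avgwqsz
# show host-side queuing, and sqfull counts how often the service
# queue (queue_depth) filled up
iostat -D hdisk4 5 3

# -a adds adapter-level statistics to the report
iostat -aD 5 3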

2. I know that both the disks (queue_depth) and the HBAs/adapters (num_cmd_elems) have queue limits.
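To see where things stand against those limits (device names are examples):

# Current queue depth on a disk
lsattr -El hdisk4 -a queue_depth

# Current command elements on the (virtual) FC adapter
lsattr -El fcs0 -a num_cmd_elems

# Adapter statistics; a growing "No Command Resource Count" here
# indicates num_cmd_elems is being exhausted
fcstat fcs0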

3. I am also aware that the total I/O response time includes the time taken to traverse from the host to the disks in the storage array, which involves the cache on the CLARiiONs as well.

4. How do I determine whether the bad response times are due to hitting the adapter limits, the disk queue limits, or the cache on the storage array itself?
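One way to start ruling the array cache in or out, assuming Navisphere CLI access to the storage processors (the SP address is a placeholder):

# Cache state and dirty-page levels on the CLARiiON SP; a disabled
# write cache or dirty pages pinned at the high watermark points at
# the array rather than the host
naviseccli -h <SP_IP> getcache

As a rough rule of thumb: if iostat's sqfull keeps climbing, the disk queue_depth is the ceiling; if fcstat's "No Command Resource Count" keeps climbing, num_cmd_elems is; if both host-side queues are calm but avgserv is still high, the latency is coming from beyond the host, i.e., the fabric or the array.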

Please suggest or point me in the right direction.

thanks


February 6th, 2013 20:00

You really need to look at the array performance metrics to understand what is going on in the array that can be contributing to the service times. In your case, that would be Navisphere Analyzer for the CLARiiON.
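If Analyzer logging is running on the SPs, the performance archives can also be managed from the CLI; a sketch, assuming the analyzer subcommands below exist in your FLARE/Navisphere release (options vary by release, so check the naviseccli help on your system):

# Start Analyzer logging if it is not already running
naviseccli -h <SP_IP> analyzer -start

# List the .nar performance archives available on the SP
naviseccli -h <SP_IP> analyzer -archive -list

In Analyzer, the LUN response time, utilization, and queue length, together with the SP dirty-page percentages, show whether the array itself is the bottleneck.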
