Oracle block size, LVM stripe size, RAID stripe size and thin device extent size

December 4th, 2012 01:00

Hello Experts

Can anyone explain whether we need to match the Oracle block size, LVM stripe size, RAID stripe size and thin device extent size, and why?

Thanks!

December 4th, 2012 07:00

Zhaos: To take it a step further, in years past we used to review the OS block size as well.

For older versions of the Oracle database, one does need to review the parameters you listed above.

See the following doc for an example:

http://docs.oracle.com/cd/B10501_01/server.920/a96533/iodesign.htm
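As a rough illustration of that exercise (DB_BLOCK_SIZE and DB_FILE_MULTIBLOCK_READ_COUNT are standard Oracle init parameters; the numbers below are examples only, not recommendations), you'd compare the database's largest multiblock read against the stripe layout:

    # Rough sketch of the old sizing exercise: compare Oracle's maximum
    # multiblock read size against the stripe layout. Values are
    # illustrative, not recommendations.

    KB = 1024
    db_block_size = 8 * KB                 # DB_BLOCK_SIZE (common default)
    db_file_multiblock_read_count = 64     # DB_FILE_MULTIBLOCK_READ_COUNT

    max_io = db_block_size * db_file_multiblock_read_count   # 512K per read

    stripe_size = 32 * KB                  # stripe unit per device
    devices = 8
    full_stripe_width = stripe_size * devices                # 256K

    chunks = max_io // stripe_size                           # 16 stripe units
    print(f"max multiblock read: {max_io // KB}K")
    print(f"full stripe width:   {full_stripe_width // KB}K")
    print(f"devices touched by one read: {min(devices, chunks)}")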

Are you looking for specific information on certain database releases or types of EMC arrays? Is this for a customer that may be interested in FAST VP, and are they also looking at ASM?

Thanks

December 7th, 2012 07:00

Zhaos:

My take on this is yes; at the very least we should consider how all the layers interact.

For example, if you move a thin device extent (Symmetrix VMAX), then you move 768K (unfortunately this cannot be adjusted in the current version) or 7.5MB, depending on the algorithm.

- How many Oracle blocks fit in 768K? How many rows does that represent? (See the sketch after this list.)

- How many different tables can be part of that?

- If you use Oracle ASM with the default AU size of 1MB, what's the effect of chopping each AU into 2 or sometimes 3 pieces and putting them on different tiers?

- What's the effect of forcing larger AU sizes? (I always recommend at least 8 or 16MB)

- Is it possible that a piece of a REDO log, a TEMP table, an INDEX and a DATA segment are moved all at once as part of the 768K or 7.5MB? Is that efficient from a tiering perspective?

- Is it possible that you move hot and cold data at the same time?

- If so, are there ways to optimize?
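To make the first few questions concrete, here is a back-of-the-envelope sketch (assuming the default 8K Oracle block size and AU starts aligned to the AU size; the 768K figure is the VMAX thin extent size from above):

    # Back-of-the-envelope numbers for the questions above. Assumes the
    # default 8K Oracle block size; 768K is the VMAX thin extent size.

    KB = 1024
    EXTENT = 768 * KB          # VMAX thin device extent
    BLOCK = 8 * KB             # DB_BLOCK_SIZE (default)

    print(f"Oracle blocks per extent: {EXTENT // BLOCK}")   # 96 blocks

    def au_pieces(au_size, au_start, extent=EXTENT):
        """Count the pieces one ASM AU is chopped into by extent
        boundaries (multiples of `extent`)."""
        first = au_start // extent
        last = (au_start + au_size - 1) // extent
        return last - first + 1

    # A default 1MB AU never fits inside one 768K extent: it lands on
    # 2 pieces, or 3 when its start is misaligned to the extent grid.
    print(au_pieces(1024 * KB, 0))          # 2
    print(au_pieces(1024 * KB, 760 * KB))   # 3

    # With 8MB or 16MB AUs, an AU spans many whole extents; only its
    # first and last extents can be shared with a neighbouring AU, so
    # most extents hold one contiguous range of a single ASM file.
    print(au_pieces(16 * 1024 * KB, 760 * KB))  # 23 extent-sized pieces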

Consider striping. Let's say you stripe with a 32K stripe size across 8 devices (the full stripe width is then 256K). A short sequential I/O of 256K now generates 8 random I/Os of 32K each. The storage system cannot identify this as sequential I/O. If you don't stripe, you might have a single random seek followed by 7x 32K of sequential I/O served from storage cache (prefetch).

If the sequential read is larger than this 256K (say, a full table scan of many GB), then the box has to keep track of 8 prefetch tasks (not one).

Which one is more storage friendly?
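To see that carving concretely, here's a toy model of how a striped volume splits one host read (purely illustrative, not any particular LVM's implementation):

    # Toy model of how one sequential host read is carved up by striping.
    # Assumes the 32K stripe unit across 8 devices discussed above.

    KB = 1024
    STRIPE = 32 * KB
    DEVICES = 8

    def split_read(offset, size, stripe=STRIPE, devices=DEVICES):
        """Return the per-device I/Os (device, dev_offset, length)
        that a single host read of `size` bytes at `offset` becomes."""
        ios = []
        pos, end = offset, offset + size
        while pos < end:
            chunk = min(stripe - pos % stripe, end - pos)
            n = pos // stripe                       # stripe unit index
            dev = n % devices                       # device holding it
            dev_off = (n // devices) * stripe + pos % stripe
            ios.append((dev, dev_off, chunk))
            pos += chunk
        return ios

    # One 256K sequential read -> 8 separate 32K I/Os, one per device;
    # each device sees a single isolated request it cannot recognise
    # as part of a sequential stream.
    for dev, off, length in split_read(0, 256 * KB):
        print(f"device {dev}: {length // KB}K at offset {off // KB}K")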

If the stripe holds a REDO log, however, and you're using synchronous mirroring (SRDF/S), then striping allows more parallelism on the replication links and write performance might improve.

And so on...
