
September 22nd, 2009 02:00

granularity of load balancing

What's the granularity of load balancing: per I/O or per LUN? In an extreme case, if I have only one LUN, will all I/O to that LUN use different paths?

341 Posts

September 22nd, 2009 14:00

The sequence number of each Fibre Channel I/O (frame) is contained within the frame itself, as part of the Fibre Channel protocol.
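
In other words, because every frame carries its own sequence identifiers, the receiving end can restore payload order no matter how frames interleave across paths. Below is a minimal Python sketch of that idea; the Frame fields and the reassemble helper are illustrative stand-ins based on the SEQ_ID/SEQ_CNT header fields, not a real Fibre Channel stack:

from collections import namedtuple

# Illustrative frame model: seq_id identifies the sequence, seq_cnt orders
# frames within it, payload is the data carried (bytes).
Frame = namedtuple("Frame", ["seq_id", "seq_cnt", "payload"])

def reassemble(frames):
    """Group frames by sequence and restore payload order via the sequence count."""
    sequences = {}
    for f in frames:
        sequences.setdefault(f.seq_id, []).append(f)
    # Sort each sequence by its per-frame count and stitch the payloads back together.
    return {
        seq_id: b"".join(f.payload for f in sorted(group, key=lambda f: f.seq_cnt))
        for seq_id, group in sequences.items()
    }

Even if the frames above are passed in out of arrival order, the result for each seq_id comes back in sequence-count order.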

341 Posts

September 22nd, 2009 02:00

PowerPath load balancing is per I/O.

PowerPath load balancing in an active/active environment works as follows [very rough explanation]:

PowerPath will use the path previously chosen for I/O against the same device as long as the load on this path is not too high. This explains why, as long as the load generated by the host is not too high, the I/Os are always sent along the same path (giving the impression that PowerPath load balancing is broken!). [This can be seen from powermt display - you will see more Q-IOs on one path than on the others.]

In summary, unless the path that was last used for I/O to that device is very busy, PowerPath will route the I/O along the same path. Otherwise, it will route the I/O down an alternate path (balancing the I/Os across the alternate paths).

Alternatively, you can choose the round-robin policy, where each I/O is sent down a different path in turn.
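
To make that behaviour concrete, here is a rough Python sketch of per-I/O path selection under both policies. This is not PowerPath code: the busy_threshold value, the q_io counters and the class itself are assumptions used only to illustrate the "reuse the last path unless it is busy" logic and the round-robin alternative.

from itertools import cycle

class PathSelector:
    def __init__(self, paths, busy_threshold=8):
        self.paths = paths                      # e.g. ["hba0:spA", "hba1:spB"]
        self.busy_threshold = busy_threshold    # assumed cutoff for "path too busy"
        self.q_io = {p: 0 for p in paths}       # queued I/Os per path (like powermt Q-IOs)
        self.last = {}                          # last path used per device
        self._rr = cycle(paths)

    def adaptive(self, device):
        """Reuse the previous path for this device unless it is too busy."""
        path = self.last.get(device)
        if path is None or self.q_io[path] >= self.busy_threshold:
            # Switch to the least-loaded alternate path.
            path = min(self.paths, key=lambda p: self.q_io[p])
        self.last[device] = path
        self.q_io[path] += 1    # dispatched; would be decremented on completion (not shown)
        return path

    def round_robin(self, device):
        """Send each I/O down the next path in turn, regardless of load."""
        return next(self._rr)

With a light host load, adaptive() keeps returning the same path for a device, which matches the uneven Q-IOs seen in powermt display; only when that path's queue crosses the threshold does I/O move to the least-loaded alternative.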

2 Intern • 1.3K Posts

September 22nd, 2009 03:00

Also, when ISL-connected nodes are involved without a trunking license (the ISLs are NOT load balanced, just failover), you may observe that the Q-IOs are different/higher on some paths.

125 Posts

September 22nd, 2009 08:00

If there are 100 I/Os to one LUN, say IO1 to IO100 ordered by time, and these I/Os are distributed across many paths, which part keeps the order, and how is the order kept?
