Performance Essential: Batch performance review

March 16th, 2009 13:00

Hello.

I have a client running PSP 3.0.4. The information in the control repository (CONREP) has gone without review for a very long time (years). Recently, my client focused on one particular batch job that appeared to be running too long. They deleted and redefined the CONREP records for this job and observed a dramatic reduction in run time.

My question is: what approach or method would be useful for reevaluating the CONREP entries to see if any (more) have become less than optimized?

Message was edited by: Benevolent Mainframe Moderator; spelling.
David Yates

154 Posts

March 16th, 2009 13:00

This thread was spawned from SR 27743864. At the Benevolent Host S/W & Mainframe Forum Moderator's request, the customer agreed to move the issue to the EMC Customer Support forums for the benefit of other EMC customers who may have similar Control Repository questions or concerns.

I'll forward this to the SMEs for their thoughtful review.

Best regards,
Dave Yates
EMC TSE3
Benevolent Host S/W & Mainframe Forum Moderator

24 Posts

March 17th, 2009 09:00

The first question is: why are they less than optimized? Job steps can be less than optimized for one of three reasons:

1. An LSR/NSR non-transparency was detected and the file was forced to LSR to prevent data loss.
2. ANALYZE was fooled into calculating that the other I/O methodology (LSR vs NSR) would provide better throughput.
3. ANALYZE was invoked on too few I/Os (EXCPTHLD is set too low; the default is 100).

If the reason was an LSR/NSR non-transparency issue, then the JOBNAME used on the control repository record will be VRPLRC20. An LSR/NSR non-transparency issue occurs when the program fails to process data properly in an LSR environment. As such, these records should not be deleted unless the program is rewritten so that LSR/NSR non-transparency issues no longer occur. In an earlier post, http://forums.emc.com/forums/thread.jspa?threadID=94113&tstart=0, I discussed LSR/NSR non-transparency issues and the problems that can occur when trying to circumvent them.

In the other two cases the JOBNAME on the control repository record will be VANALYZE. Simply deleting the ANALYZE control repository records without identifying why they were written will only mean that ANALYZE will rewrite them the next time the job runs.
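
To make that review systematic, here is a minimal sketch, in Python, of the triage just described. It assumes a hypothetical export of the control repository in which each record carries a JOBNAME field; the record layout and field names are illustrative, not the actual PSP format.

# Hypothetical triage of exported control repository records.
# The record layout (dicts with JOBNAME/DATASET keys) is an assumption
# for illustration; it is not the actual PSP export format.

NON_TRANSPARENCY = "VRPLRC20"  # written when a file was forced to LSR
ANALYZE_WRITTEN = "VANALYZE"   # written by ANALYZE

def triage(records):
    """Split records into keep / investigate piles per the rules above."""
    keep, investigate = [], []
    for rec in records:
        if rec["JOBNAME"] == NON_TRANSPARENCY:
            # Do not delete: the program mishandles data under LSR, and
            # only rewriting the program removes the need for this record.
            keep.append(rec)
        elif rec["JOBNAME"] == ANALYZE_WRITTEN:
            # Deleting without finding the root cause is pointless:
            # ANALYZE will simply rewrite the record on the next run.
            investigate.append(rec)
        else:
            # Manually written records: review separately.
            keep.append(rec)
    return keep, investigate

keep, investigate = triage([
    {"JOBNAME": "VRPLRC20", "DATASET": "PROD.MASTER.KSDS"},
    {"JOBNAME": "VANALYZE", "DATASET": "PROD.HIST.KSDS"},
])
print(len(keep), "to keep,", len(investigate), "to investigate")

The point of the split is simply that VRPLRC20 records protect data integrity, while VANALYZE records are performance decisions that can go stale.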

The first reason is that ANALYZE was fooled into calculating that the other I/O methodology (LSR vs NSR) would provide better throughput; this happens when sorted transactions are processed using LSR or randomized transactions are processed using NSR. In either situation the existing control repository record should be updated to change the I/O methodology. In the first case, it will not be enough to change FORCENSR to FORCELSR; the parameter CTLSEQ will also have to be added.

The second reason is that EXCPTHLD (the default is 100) and IMPTHLD (the default is 10%) are too low for the site. EXCPTHLD is the threshold used to determine whether ANALYZE should be invoked at all, and IMPTHLD is the threshold used to determine whether ANALYZE should write a record changing the I/O methodology.

To determine whether EXCPTHLD needs to be adjusted, one must know the average number of I/Os for the majority of jobs in the entire site. This does not mean the average number of transactions; it means the average number of I/Os PSP does. PSP causes multiple transactions to be done in each I/O, so the number of transactions and the number of I/Os can be substantially different. For example, in one of my test jobs, 27,664,455 puts were reduced to 38,018 I/Os (writes to an AIX during a build index) and 59,463,999 gets were reduced to 10,197 I/Os (reads of the base cluster during the same build index). In this test job the base cluster had just been loaded and so had no CI or CA splits; CI and CA splits can cause additional I/O. If most of the jobs have large I/O counts, then EXCPTHLD should be raised.

IMPTHLD is expressed as a percentage; it is the minimum percent of improvement gained before ANALYZE will actually change the I/O methodology. In most cases increasing EXCPTHLD is sufficient. Once EXCPTHLD has been updated, records written by ANALYZE may be deleted. Note that increasing EXCPTHLD will not prevent ANALYZE from being fooled into calculating that the other I/O methodology (LSR vs NSR) would provide better throughput when sorted transactions are processed using LSR or randomized transactions are processed using NSR, so the control repository should still be monitored and jobs examined.
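
To make the arithmetic concrete, here is a minimal sketch, in Python, of the two thresholds acting as gates, using the I/O counts from my test job. How ANALYZE actually measures improvement is internal to PSP; the improvement_pct input below is an assumed stand-in for that calculation.

# Illustrative sketch of the EXCPTHLD / IMPTHLD gates described above.

EXCPTHLD = 100   # default: minimum I/O count before ANALYZE is invoked
IMPTHLD = 10.0   # default: minimum % improvement before a record is written

def analyze_would_write(io_count, improvement_pct,
                        excpthld=EXCPTHLD, impthld=IMPTHLD):
    if io_count < excpthld:
        return False                    # too few I/Os: ANALYZE not invoked
    return improvement_pct >= impthld   # below IMPTHLD: no record written

# Transactions vs. I/Os from the test job above: PSP packs many
# transactions into each physical I/O, so the two counts differ widely.
puts, put_ios = 27_664_455, 38_018  # writes to the AIX during build index
gets, get_ios = 59_463_999, 10_197  # reads of the base cluster

print(f"{puts / put_ios:,.0f} puts per I/O")  # about 728
print(f"{gets / get_ios:,.0f} gets per I/O")  # about 5,832

# With I/O counts this large, nearly every job clears the default
# EXCPTHLD of 100, which is why a site like this should raise it.
print(analyze_would_write(io_count=put_ios, improvement_pct=12.0))  # True

The per-I/O ratios are the useful output here: they show why sizing EXCPTHLD by transaction counts instead of actual I/O counts will put the threshold in the wrong place.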