In my application, we are facing failures due to unavailability of loader slots. The utilities (FastLoad, MultiLoad and FastExport) are triggered via the PowerCenter ETL tool, and the PowerCenter workflows are in turn controlled by CA7 mainframe jobs for all the applications in my company.
As many batches run at the same time for various projects, I suspect that all available loader slots are consumed during some peak intervals, leading to this issue.
I have tried to analyse the total number of active sessions during each hour of the day as below:
Step 1: Created a volatile table to store some values from DBQLogTbl
Create volatile table XYZ as
( select SessionID as session_id, UserName as username
       , min(StartTime) as start_time, max(FirstRespTime) as end_time
       , count(*) as no_of_sessions
  from dbc.DBQLogTbl
  where AppID in ('TPTEXP','TPTLOAD','TPTUPD')
  group by 1,2
) with data primary index (session_id)
on commit preserve rows;
Step 3: Extracted the total number of sessions between 01:00 and 01:59 using the below query
select sum(no_of_sessions) from XYZ
where extract(hour from start_time) = 1;
Once I get the data for all the hourly intervals, I can pick the intervals with peak sessions and then look up the username via the session_id to identify the project name. Once we identify the list of batches running simultaneously for different projects, the batch trigger times can be moved to avoid conflicts.
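For example, once a peak hour is known, a drill-down like this could list the users (and hence projects) active in that hour — just a sketch over the same XYZ table, assuming it carries username and session_id columns, with 2 as a hypothetical peak hour:

```sql
select username
     , count(distinct session_id) as sessions
from XYZ
where extract(hour from start_time) = 2  -- hypothetical peak hour
group by 1
order by 2 desc;
```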
Can anyone please advise if this is a correct approach, or suggest a more efficient way of doing this?
Also, let me know if, for step 3, I can get the total number of sessions for each hour with a single query instead of manually editing the hour.
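One way to avoid editing the hour manually might be a plain GROUP BY — a sketch over the same XYZ table; note that it credits each session only to the hour it started in, so sessions spanning several hours are undercounted:

```sql
select extract(hour from start_time) as hr
     , sum(no_of_sessions) as total_sessions
from XYZ
group by 1
order by 1;
```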
Thanks for your help!!
You don't have to care about the number of sessions, only the number of jobs.
And if you got a table with 24 rows, one for each hour, you can simply join to it:
select hr, count(XYZ.session_id) as jobs
from
 ( select day_of_calendar - 1 as hr
   from sys_calendar.calendar
   where hr between 0 and 23
 ) as hours
left join XYZ
  on hr between extract(hour from XYZ.start_time)
            and extract(hour from XYZ.end_time)
group by 1
order by 1;
But what you actually need is the number of jobs running in parallel and not the number of jobs per hour.
If you're on TD13.10 you can use TD_SEQUENCED_COUNT to calculate this:
WITH cte (dummy, session_id, pd) AS
 ( SELECT 0 as dummy -- TD_SEQUENCED_COUNT needs something to partition by
         ,session_id
         ,period(start_time, end_time) as pd
   FROM XYZ
 )
SELECT BEGIN(pd) AS Start_TS
      ,END(pd) AS End_TS
      ,parallel_jobs
FROM TABLE
 ( TD_SEQUENCED_COUNT(NEW VARIANT_TYPE(cte.dummy), cte.pd)
   RETURNS (dummy byteint, pd period(timestamp(6)), parallel_jobs integer)
   HASH BY dummy
   LOCAL ORDER BY dummy, pd
 ) AS dt
ORDER BY 1,2;
Btw, instead of failing, a job might be put in a queue.
Concurrency of load jobs is usually controlled by TASM.
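To check whether load jobs are already being delayed (queued) rather than failed, DBQL's DelayTime column can be summed per hour — a sketch, assuming query logging is enabled for the users running those utilities:

```sql
select extract(hour from StartTime) as hr
     , sum(DelayTime) as total_delay_secs
from dbc.DBQLogTbl
where AppID in ('TPTEXP','TPTLOAD','TPTUPD')
group by 1
order by 1;
```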