Best ways to migrate code from Mainframes to Unix

Hi,

We have a large system with several Teradata boxes for the different applications and countries of our organization. All of the overnight batch jobs that process the business logic are BTEQ, FastLoad, and MultiLoad scripts run from JCL in a mainframe environment. In this environment we can run JCL-based BTEQ jobs with several steps and restart a job from any selected step.

Now we are introducing near-real-time applications: custom-built Unix-based applications that will load at regular intervals ranging from 15 minutes to several hours. On the mainframe we had code version management and security, such as hiding production logon parameters from normal users, and we usually applied production code changes through mainframe tools like Endevor. On Unix we need a similar environment for managing the code and securing the applications. Can you please share your best practices and any tools to use in a Unix environment? How do you manage code and credential security in these scenarios? Could you also pass on any Unix-based sample jobs with multiple steps, showing how to achieve the restartability from a particular step mentioned above?

Along with this, can you please mention any useful scheduling tools for Unix that integrate easily with custom-built and near-real-time applications (since these involve passing dynamic parameters to jobs using file triggers rather than traditional time triggers)?

Thanks in advance.

Re: Best ways to migrate code from Mainframes to Unix

Hello,

For security of logons, the passwords can be stored in an encrypted file on Unix. The BTEQ scripts can then refer to it in the logon statement, with the username passed in as a parameter.
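
As a minimal sketch of that idea (the file names, the openssl encryption choice, and the tdprod TDPID are all illustrative assumptions, not a fixed convention):

#!/bin/sh
# run_bteq.sh -- decrypt a stored password at runtime and log on to Teradata.
# Assumes the password was encrypted beforehand with something like:
#   echo "$TD_PASSWORD" | openssl enc -aes-256-cbc -salt \
#       -pass file:/secure/td_key -out /secure/td_pwd.enc
# and that /secure is readable only by the batch user (chmod 700).

TD_USER=$1    # username passed as a parameter, as described above

# Decrypt the password into a variable; it never touches disk in clear text.
TD_PWD=`openssl enc -aes-256-cbc -d -in /secure/td_pwd.enc -pass file:/secure/td_key`

# The here-document keeps the logon out of the process list and command history.
bteq <<EOF
.logon tdprod/${TD_USER},${TD_PWD}
SELECT DATE;
.logoff
.quit
EOF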

Unlike JCL, where you execute a BTEQ/FastLoad/MultiLoad as a step of a job, one approach on Unix is to create a job chain in the scheduler (IBM TWS or Redwood Cronacle, for example) where each step of the chain is a BTEQ/MultiLoad/FastLoad step. Each step is executed like any other shell script (sh filename.sh parameter1 parameter2), which in turn invokes BTEQ and can use the parameters. As far as I know, these schedulers support file triggers just like dataset triggers on mainframes, and they can easily restart from any particular step in the job chain.
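
To give concrete shape to the restart question, here is a minimal sketch of a multi-step driver script with a checkpoint file (the step scripts, paths, and checkpoint location are illustrative; a scheduler like TWS or Cronacle would normally do this bookkeeping for you):

#!/bin/sh
# job_chain.sh -- run batch steps in order, restartable from any step.
# Usage: job_chain.sh [start_step]
# With no argument, it resumes after the last step recorded as successful.

CHKPT=/var/batch/job_chain.chkpt      # records the last successfully completed step

LAST=`cat $CHKPT 2>/dev/null`
[ -z "$LAST" ] && LAST=0
if [ -n "$1" ]; then
    START=$1                          # explicit restart point, like a JCL RESTART=
else
    START=`expr $LAST + 1`
fi

run_step() {
    STEP=$1
    SCRIPT=$2
    if [ "$STEP" -lt "$START" ]; then
        echo "Skipping step $STEP ($SCRIPT)"
        return 0
    fi
    echo "Running step $STEP ($SCRIPT)"
    sh "$SCRIPT" || { echo "Step $STEP ($SCRIPT) failed" >&2; exit 1; }
    echo "$STEP" > "$CHKPT"           # checkpoint only after the step succeeds
}

run_step 1 extract_data.sh            # e.g. a BTEQ export
run_step 2 load_staging.sh            # e.g. an MLOAD into a staging table
run_step 3 apply_business_logic.sh    # e.g. a BTEQ applying the business logic

rm -f "$CHKPT"                        # clean completion: clear the checkpoint
echo "Job chain completed"

Running sh job_chain.sh with no argument resumes after the last good step; sh job_chain.sh 2 forces a restart from step 2, much like restarting a JCL job at a named step.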

The mainframe provides very useful configuration management tools like ChangeMan and Endevor. I am not sure whether Unix has anything like that built in; you may have to manage the code through a separate version control tool like SVN.
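
If SVN is chosen, a release flow might look like the following (the repository URL, tag name, and production path are purely illustrative):

# Tag the tested trunk as a release
svn copy http://svnserver/repo/batch/trunk \
         http://svnserver/repo/batch/tags/REL-2013-06 \
         -m "Tag release for the June change window"

# Deploy: export the tag (no .svn metadata) into the production script directory
svn export --force http://svnserver/repo/batch/tags/REL-2013-06 /prod/batch/scripts

Restricting write access on /prod/batch/scripts to a dedicated deployment id gives a rough equivalent of the controlled promotion you get from Endevor.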

By the way, like you I have worked mostly on mainframes, and only recently switched to a different project on Unix.

Regards,
Ayush Jain