Thanks SteveF. for your detailed response and explanation. I really appreciate it and am sure others will find this very helpful. I'll definitely try to use the offered templates to take advantage of the benefits you outlined in your response.
Thanks again for your help!!
Please excuse my ignorance as I'm no unix expert. I'm unable to view the template files in the following path. Whenever I attempt to vi these templates it just thinks I'm creating a new file.
-r-xr-xr-x 1 sysdba users 3931 2017-05-29 23:02 $DATACONNECTOR_CONSUMER.txt
-r-xr-xr-x 1 sysdba users 5717 2017-05-29 23:02 $DATACONNECTOR_PRODUCER.txt
-r-xr-xr-x 1 sysdba users 2817 2017-05-29 23:02 $DDL.txt
-r-xr-xr-x 1 sysdba users 3183 2017-05-29 23:02 $DELETER.txt
-r-xr-xr-x 1 sysdba users 3779 2017-05-29 23:02 $EXPORT.txt
-r-xr-xr-x 1 sysdba users 2607 2017-05-29 23:02 $FE_OUTMOD.txt
-r-xr-xr-x 1 sysdba users 5707 2017-05-29 23:02 $FILE_READER.txt
-r-xr-xr-x 1 sysdba users 3917 2017-05-29 23:02 $FILE_WRITER.txt
-r-xr-xr-x 1 sysdba users 2235 2017-05-29 23:02 $FL_INMOD.txt
-r-xr-xr-x 1 sysdba users 3103 2017-05-29 23:02 $INSERTER.txt
-r-xr-xr-x 1 sysdba users 3677 2017-05-29 23:02 $LOAD.txt
-r-xr-xr-x 1 sysdba users 2501 2017-05-29 23:02 $ML_INMOD.txt
-r-xr-xr-x 1 sysdba users 2586 2017-05-29 23:02 $ODBC.txt
-r-xr-xr-x 1 sysdba users 2143 2017-05-29 23:02 $OS_COMMAND.txt
-r-xr-xr-x 1 sysdba users 2560 2017-05-29 23:02 $SCHEMAP.txt
-r-xr-xr-x 1 sysdba users 3497 2017-05-29 23:02 $SELECTOR.txt
-r-xr-xr-x 1 sysdba users 4275 2017-05-29 23:02 $STREAM.txt
-r-xr-xr-x 1 sysdba users 4147 2017-05-29 23:02 $UPDATE.txt
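One likely cause, for what it's worth: these filenames begin with `$`, and an unquoted `$NAME` in a POSIX shell is expanded as a variable. Since `$LOAD` is unset, `vi $LOAD.txt` collapses to `vi .txt` and opens a brand-new file. Quoting or escaping the name avoids the expansion. A small sketch of the behavior (using `cat` in place of `vi` so it runs non-interactively):

```shell
# The template filenames start with '$', which the shell treats as a
# variable reference: "vi $LOAD.txt" expands the (unset) variable $LOAD
# to nothing, so vi opens a brand-new file named ".txt".
cd "$(mktemp -d)"                    # scratch dir so the demo is harmless
echo 'template body' > '$LOAD.txt'   # a file literally named $LOAD.txt
unset LOAD
echo $LOAD.txt      # unquoted: expands to just ".txt"
echo '$LOAD.txt'    # single-quoted: the literal filename survives
cat '$LOAD.txt'     # likewise, vi '$LOAD.txt' or vi \$LOAD.txt works
```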
I'm asking because I tried the example you shared with me and am getting the following error.
sysdba@arctdat18-18:/archive/teradata/tpt-scripts/tpt-usb-test> tbuild -f tkt-driver-load.txt -v tkt-driver-load-vars.txt
Teradata Parallel Transporter Version 14.10.00.19
TPT_INFRA: TPT04128: Error: The job variables supported for providing DBS logon
parameters are unassigned, and the job script does not provide logon parameters.
TPT cannot logon to a DBS to generate a required schema.
Job script preprocessing failed.
Job terminated with status 8.
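The TPT04128 message above means the logon job variables the templates expect were never assigned. The file passed with `-v` is just a comma-separated list of `name = 'value'` assignments. A minimal sketch of such a file, reusing the TdpId and user from this thread; the variable names here are assumptions, since they vary by release, so check the template files for the exact names your version expects:

```
/* tkt-driver-load-vars.txt -- variable names are version-dependent;
   verify them against the installed template files */
SourceTdpId         = '192.999.999.130'
,SourceUserName     = 'tptarchive'
,SourceUserPassword = 'xxxx'
,TargetTdpId        = '192.999.999.130'
,TargetUserName     = 'tptarchive'
,TargetUserPassword = 'xxxx'
```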
I also looked at the templates to make sure I'm using the template job-variable names in my variables file, and they are exactly what you shared with me earlier in this discussion thread.
Thanks for your help!
I've gotten past the variable naming error but am now receiving this error.
$LOAD: TPT10507: CLI Error 235: MTDP: EM_GSSINITFAIL(235): call to gss_init failed. Failure Credentials cache file '/tmp/krb5cc_509' not found
I attempted to rename the file /usr/teragss/linux-i386/client/etc/tdgssconfig.bin.prebuilt to /usr/teragss/linux-i386/client/etc/tdgssconfig.bin, a recommendation I found in another thread, but that did not correct my issue.
These same scripts in my "hard-coded" version work just fine. I am only receiving these errors since I've tried to utilize the templates.
Any idea what the problem is now?
Thanks for your help,
What version of TPT are you running?
Are you sure that all of the TTU components were installed at the same time, all using the same release version?
The "gss" error you are getting is due to either TeraGSS or CLIv2 not being installed properly.
Most likely, CLIv2 is trying to use TeraGSS and it is not installed properly.
This error should be occurring no matter how you run TPT, because whether you are loading to Teradata or extracting from Teradata, we go through CLIv2.
It is just that with templates, we use CLIv2 much earlier in the job.
We are using TTU version 14.10 and TPT 14.10. We had to stick with the older version of TTU due to a compatibility issue with Protegrity.
Could I possibly resolve this issue by reinstalling TTU 14.10 on top of this current version?
I suppose you can try; I am not the install expert. :)
Incidentally, templates and job variable names have changed quite a bit since 14.10, so some of my previous comments may have led you astray.
I will go back and look, but I am not sure you posted the TTU version when you first reported an issue.
Always a good idea to do so; I am not sure why everyone doesn't do that. :)
Yes, I figured that out; apparently the LoadTargetTable variable name needed to be SourceTargetTable for my version (14.10).
I think we had some issues with this TTU install. When we upgraded to 15.10, they were NOT supposed to upgrade the TTU, but they did. They then had to back it down to 14.10, though I'm not sure how they did that. A few months later, I installed TPT 14.10 on this Teradata system. Not sure if that is the root cause of this gss/CLIv2 error I'm experiencing, though. I was able to run the following "hard-coded" TPT script successfully just now. Below are the job output details and the actual TPT script that I ran.
The other thing I noticed is that whether I load this DRIVER table using TPT or just by running the same SQL in SQL Assistant, it takes about the same time: 23 minutes to load the empty table using TPT versus 21 minutes using SQL Assistant. I thought TPT would be much faster. I specified a max of 6 sessions, but when the job ran it actually acquired 32 sessions, 25 of which were active during the acquisition phase. Then towards the end of the job it showed 28 active sessions in Viewpoint before completing. Not sure why it is grabbing so many sessions; is there a default set somewhere for TPT jobs, possibly in TASM? Also, why might TPT run just as long as straight SQL doing the same thing with a single session?
Thanks again for your help!
Here is the output from the job run:
Teradata Parallel Transporter Version 14.10.00.19
Job log: /opt/teradata/client/14.10/tbuild/logs/sysdba-46.out
Job id is sysdba-46, running on arctdat18-18
Found CheckPoint file: /opt/teradata/client/14.10/tbuild/checkpoint/sysdbaLVCP
This is a restart job; it restarts at step MAIN_STEP.
Teradata Parallel Transporter Export Operator Version 14.10.00.19
ex_TKT_ID: private log specified: tpt_test_log
Teradata Parallel Transporter Load Operator Version 14.10.00.19
ld_TKT_ARCHIVE_DRIVER: private log specified: load_log
ex_TKT_ID: connecting sessions
ld_TKT_ARCHIVE_DRIVER: connecting sessions
ld_TKT_ARCHIVE_DRIVER: preparing target table
ld_TKT_ARCHIVE_DRIVER: entering Acquisition Phase
ex_TKT_ID: sending SELECT request
ex_TKT_ID: entering End Export Phase
ex_TKT_ID: Total Rows Exported: 6594958
ld_TKT_ARCHIVE_DRIVER: entering Application Phase
ld_TKT_ARCHIVE_DRIVER: Statistics for Target Table: 'DBA_AP.TKT_ARCHIVE_DRIVER'
ld_TKT_ARCHIVE_DRIVER: Total Rows Sent To RDBMS: 6594958
ld_TKT_ARCHIVE_DRIVER: Total Rows Applied: 6594958
ld_TKT_ARCHIVE_DRIVER: Total Rows in Error Table 1: 0
ld_TKT_ARCHIVE_DRIVER: Total Rows in Error Table 2: 0
ld_TKT_ARCHIVE_DRIVER: Total Duplicate Rows: 0
ld_TKT_ARCHIVE_DRIVER: disconnecting sessions
ex_TKT_ID: disconnecting sessions
ex_TKT_ID: Total processor time used = '1.38 Second(s)'
ex_TKT_ID: Start : Wed Apr 18 15:36:42 2018
ex_TKT_ID: End : Wed Apr 18 15:59:29 2018
ld_TKT_ARCHIVE_DRIVER: Total processor time used = '1.72 Second(s)'
ld_TKT_ARCHIVE_DRIVER: Start : Wed Apr 18 15:36:42 2018
ld_TKT_ARCHIVE_DRIVER: End : Wed Apr 18 15:59:34 2018
Job step MAIN_STEP completed successfully
Job sysdba completed successfully
Job start: Wed Apr 18 15:36:38 2018
Job end: Wed Apr 18 15:59:34 2018
Here is the TPT script that I ran in the above job run:
DEFINE JOB TKT_ARCHIVE_DRIVER_LOAD
DESCRIPTION 'Loads the Driver table used by the archive/purge of TKT tables.'
(
  DEFINE SCHEMA TKT_ARCHIVE_DRIVER
  (
    /* column definitions omitted from the original post */
  );

  DEFINE OPERATOR ex_TKT_ID
  TYPE EXPORT
  SCHEMA TKT_ARCHIVE_DRIVER
  ATTRIBUTES
  (
    VARCHAR UserName = 'tptarchive',
    VARCHAR UserPassword = 'sfadfsdf',
    VARCHAR TdpId = '192.999.999.130',
    VARCHAR PrivateLogName = 'tpt_test_log',
    VARCHAR SelectStmt = '
      LOCKING IDW_DATA.TKT FOR ACCESS MODE
      /* SELECT list and FROM clause omitted from the original post */
      WHERE DT_OF_ISSUE BETWEEN ''2017-09-01'' AND ''2017-09-02'' ;'
  );

  DEFINE OPERATOR ld_TKT_ARCHIVE_DRIVER
  TYPE LOAD
  SCHEMA *
  ATTRIBUTES
  (
    VARCHAR PrivateLogName = 'load_log',
    VARCHAR TdpId = '192.999.999130',
    VARCHAR UserName = 'tptarchive',
    VARCHAR UserPassword = 'asdfgdfgdfg',
    VARCHAR LogTable = 'DBA_AP.TKT_ARC_DRVR_LG',
    VARCHAR ErrorTable1 = 'DBA_AP.TKT_ARC_DRVR_ET',
    VARCHAR ErrorTable2 = 'DBA_AP.TKT_ARC_DRVR_UV',
    VARCHAR TargetTable = 'DBA_AP.TKT_ARCHIVE_DRIVER'
  );

  APPLY
  ('INSERT INTO DBA_AP.TKT_ARCHIVE_DRIVER
    /* VALUES clause omitted from the original post */ ;')
  TO OPERATOR (ld_TKT_ARCHIVE_DRIVER)
  SELECT * FROM OPERATOR (ex_TKT_ID);
);
As far as sessions are concerned, if the number used is different from the number you specified, that is most likely due to TASM.
If you look at the operator's private log, it will tell you why the number of sessions was changed.
The TASM integration requirements specified that the number of sessions from TASM would be absolute.
Thus, it is possible that a job will use more sessions than you specified in the MaxSessions attribute.
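For reference, the session request itself is made per operator through the MaxSessions and MinSessions attributes, along these lines (a sketch with only the session attributes shown; as noted, TASM can still override the values):

```
DEFINE OPERATOR ld_example
TYPE LOAD
SCHEMA *
ATTRIBUTES
(
  INTEGER MaxSessions = 6,  /* upper bound requested by the job */
  INTEGER MinSessions = 1   /* minimum acceptable number of sessions */
);
```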
As for performance, take a look at the private logs from both the Export operator and the Load operator to see if it is obvious why the numbers are the way they are.
I doubt the Export operator is the bottleneck, and the Load operator is pretty fast.
Also look at the Acquisition Phase time in the Load operator. That might be pretty quick.
It is possible that the elapsed time is being spent more in the Application Phase than the Acquisition Phase.
The Client application (TPT) has no control over the Application Phase. We just issue the request and wait until it is done.
I'm having trouble locating the private log files for the TPT job. I see the standard log in the following path on my system.
Any idea where the private logs are written by default?
"private logs" are virtual logs inside the .out file.
If you assign a name to the PrivateLogName attributes of the various operators, then the output from those operators can be easily sorted out from the rest of the information.
You need to use the "tlogview" tool to extract out the information from the TPT logs into a readable format.
You can also use "tdlog" (which is just a script wrapper around "tlogview") to get the information.
tlogview -j <job-id> -f "*" -g
By using the -j command line option, "tlogview" will find the log no matter where it resides.