I need to ingest a huge volume of data from Teradata systems to flat files on a Unix system. I would like to know whether TPT provides better performance than the FastExport utility, and whether the TPT Export operator offers any advantages or disadvantages compared to FastExport.
Well, not exactly.
Yes, TPT is the product that everyone should be using going forward.
We are not adding any features to FastExport, and that utility will most likely not support new DBS features either.
TPT is the client load/unload product going forward, and it offers parallelism on the client load server, plus a whole slew of advanced features.
For example, just for your particular scenario, if you want to export the rows from a single table into multiple files, FastExport cannot do that, but TPT can.
FastExport will process the blocks in order.
TPT Export can run multiple processes, each controlling multiple data sessions, so you get parallelism extracting the data from Teradata. You can also achieve parallelism by writing the data to multiple smaller files instead of a single large file (depending on how you want your data written out, and what you will be doing with it after it is written to files).
Does TPT support the case below?
I have 100,000 records in a single table, and when exporting I want those 100,000 records automatically split into multiple files, say 10 files with 10,000 records each.
If TPT supports this, could anybody give a detailed example?
Yes, TPT supports the writing of data to multiple files.
Just use the Export operator to extract the data from Teradata.
Use the DataConnector operator as the file writer.
Specify how many instances you want to use (each instance will write to its own file).
Make sure to use the -C (uppercase) command line option to round robin the data to the instances of the DC operator.
The rows will not be perfectly distributed to each file, but it will be close.
The data moves from the Export operator to the DC operator in blocks, and each block can have a different number of rows (depending on whether VARCHAR data is being extracted, in which case the size of each row can differ).
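The steps above can be sketched as a TPT job script. Every name below (job, schema, operators, logon values, paths) is a placeholder, and the schema must be edited to match your actual table:

```
DEFINE JOB export_to_files
DESCRIPTION 'Export one table to multiple flat files'
(
  DEFINE SCHEMA table_schema
  (
    /* Placeholder columns -- replace with your table's layout.
       For Format = 'Delimited', all columns must be VARCHAR. */
    col1 VARCHAR(50),
    col2 VARCHAR(100)
  );

  DEFINE OPERATOR export_op
  TYPE EXPORT
  SCHEMA table_schema
  ATTRIBUTES
  (
    VARCHAR TdpId        = 'mytdpid',
    VARCHAR UserName     = 'myuser',
    VARCHAR UserPassword = 'mypassword',
    VARCHAR SelectStmt   = 'SELECT * FROM mydb.mytable;'
  );

  DEFINE OPERATOR file_writer
  TYPE DATACONNECTOR CONSUMER
  SCHEMA table_schema
  ATTRIBUTES
  (
    VARCHAR DirectoryPath = '/outdir',
    VARCHAR FileName      = 'export.dat',
    VARCHAR Format        = 'Delimited',
    VARCHAR OpenMode      = 'Write'
  );

  /* 10 writer instances -> 10 output files; 2 export instances reading. */
  APPLY TO OPERATOR (file_writer[10])
  SELECT * FROM OPERATOR (export_op[2]);
);
```

Run it with the round-robin option mentioned above, e.g. `tbuild -f export_job.txt -C`. With 10 writer instances, each instance writes its own file, so roughly 10,000 of the 100,000 rows should land in each file (not an exact split, for the block-size reason just described).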
I was trying to export 76 million rows to a text file.
I wrote the script below in a text file:
.BEGIN EXPORT SESSIONS 2;
.EXPORT OUTFILE C:\abc.txt MODE RECORD FORMAT TEXT;
select * from <db.tablename>
Then I opened a command prompt and ran it as:
FEXP < FINAL_OUTPUT.txt > Error_file.txt
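For reference, the fragment posted above is missing the header and trailer statements a FastExport script needs. A complete script would look roughly like this (the logon values are placeholders, and the log table name is taken from the later post in this thread):

```
.LOGTABLE databasename.abc_log;
.LOGON tdpid/username,password;
.BEGIN EXPORT SESSIONS 2;
.EXPORT OUTFILE C:\abc.txt MODE RECORD FORMAT TEXT;
SELECT * FROM <db.tablename>;
.END EXPORT;
.LOGOFF;
```

Note that the UTY8713/8055 error below is a server-side disconnect (session forced off by PMPC or the gateway), not a script syntax problem, so the missing statements alone would not explain the failure.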
It ran for around four hours and then gave the error message below:
**** 17:13:24 UTY8713 RDBMS failure, 8055: Session forced off by PMPC or
gtwglobal or security violation.
**** 17:13:24 UTY8756 Retrieval Rows statistics:
Elapsed time: 04:00:42 (hh:mm:ss)
CPU time: 22.8385 Seconds
**** 17:13:27 UTY1024 Session modal request, 'DATABASE', re-executed.
**** 17:13:28 UTY1024 Session modal request, 'SET
QUERY_BAND='UTILITYNAME=FASTEXP;' UPDATE FOR SESSION;', re-executed.
= Logoff/Disconnect =
**** 17:13:28 UTY6215 The restart log table was not dropped by this task.
**** 17:13:50 UTY6212 A successful disconnect was made from the RDBMS.
**** 17:13:50 UTY2410 Total processor time used = '23.6186 Seconds'
. Start : 13:11:57 - THU NOV 30, 2017
. End : 17:13:50 - THU NOV 30, 2017
. Highest return code encountered = '12'.
Therefore I want to know if it is OK to drop the "databasename.abc_log" table.
After dropping the log table, do we need to issue RELEASE MLOAD too, or do any more sessions need to be released?
If so, how?
RELEASE MLOAD only applies when you are trying to fix an issue after using MultiLoad to load a table, not after using FastExport to export data from a table.
(And you should really post this issue as a new thread with a new subject, not as a continuation of the previous discussion.)
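To sum up the usual recovery steps: once you have decided not to restart the failed export, dropping the restart log table named in the output is the standard cleanup, and no lock release is needed. A minimal sketch (verify the table name matches the .LOGTABLE in your script before dropping):

```
-- Cleanup after an abandoned FastExport job (run from BTEQ or any SQL client).
-- The log table name comes from the job output quoted above.
DROP TABLE databasename.abc_log;

-- No RELEASE MLOAD is required: that statement clears the load lock left by a
-- failed MultiLoad job, and FastExport places no such lock on the table.
```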