Every operator attribute is exposed to tdload.
However, TrimColumns applies only to the file reader, not to the file writer.
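For example, with tdload you would set it through a job-variables file on the source side. The variable name below follows the naming convention of the standard TPT job templates and is an assumption; verify it against the template documentation for your TPT release:

```
/* jobvars.txt -- passed as: tdload -j jobvars.txt                    */
/* SourceTrimColumns is assumed here to map to the file reader's      */
/* TrimColumns attribute; valid values include 'None', 'Leading',     */
/* 'Trailing', and 'Both'.                                            */
SourceTrimColumns = 'Both'
,SourceTextDelimiter = '|'
```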
Fred, I am testing my TPT script on an AWS Teradata instance; it is a 2-node, 6-AMP system.
It is taking 16 minutes to export 1.6 GB of data. I tried setting MaxSessions to 6 each for the source and target.
Can the session count be increased, or will I get the same throughput?
If you are using the Export operator for extracting the data out of the database, then you cannot increase the number of sessions.
The Export operator supports a maximum of 1 session per available AMP.
The Load and Update operators have the same limit: a maximum of 1 session per available AMP.
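So on the 2-node, 6-AMP system described above, settings like the following are already at the ceiling (the job-variable names are assumed from the standard tdload/template conventions and may differ in your release):

```
/* 6 AMPs => at most 6 sessions for Export, and 6 for Load/Update */
SourceMaxSessions = 6    /* Export operator: capped at 1 per AMP  */
,TargetMaxSessions = 6   /* Load/Update operator: same cap        */
```

Raising either value above the AMP count will not add sessions.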
I was able to export 2.5 GB in less than 5 minutes, with the Export operator instances set to 4 and MaxSessions set to 24.
But I cannot make it any faster.
Also, we have Fallback-protected tables of about 30 GB; for those the export does not even start running.
I have a few more questions:
1) Should we use BTEQ in place of tdload for smaller tables?
2) A decimal value of 56.90 is stored as 56.9 in the output files, and I need the trailing zero for data validation. Is that possible? A value such as 56.62 comes out fine in both tdload and FastExport. Please assist.
I have a 40-AMP Teradata test environment.
I am trying to extract a table's data using the TPT Export operator, with MaxSessions set to 10.
The extracted file size is 5.9 GB, but it takes 16 minutes to extract the full table.
Please advise how I can increase the extraction speed. Can we use NoSpool?
I do not want the output file to be split; I want a single output file.
I am not quite sure why this question is being posted to this thread.
NoSpool mode will be faster than Spool mode because it removes the time the DBS takes to spool the data from the elapsed time of the job.
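If you want to try it, the Export operator exposes a SpoolMode attribute. A minimal sketch of the relevant part of a script (schema and table names are placeholders):

```
DEFINE OPERATOR EXPORT_OP
TYPE EXPORT
SCHEMA TABLE_SCHEMA              /* placeholder schema name */
ATTRIBUTES
(
  VARCHAR SpoolMode  = 'NoSpool',  /* 'Spool' (default), 'NoSpool',
                                      or 'NoSpoolOnly'; with 'NoSpool'
                                      the DBS falls back to spooling if
                                      the request is not eligible      */
  VARCHAR SelectStmt = 'SELECT * FROM MyDb.MyTable;'
);
```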
After that, the speed of the job will most likely be bottlenecked at your disk I/O speed.
Since you want a single output file, you are basically eliminating any type of performance gain from parallelism.
Have you tried outputting the data to multiple files, and then merging the files after TPT completes?
That might be faster than trying to write out ~6GB to a single file.
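As a sketch, the post-job merge is just a concatenation. The part-file names below are hypothetical; when the DataConnector writer runs with multiple instances, the per-instance file names are derived from the FileName attribute, so check what your job actually produces:

```shell
# Simulate four per-instance output parts, then merge them into one file.
# Real part names depend on your DataConnector writer's FileName setting.
for i in 1 2 3 4; do
  printf 'rows from instance %s\n' "$i" > "export.dat-$i"
done

# The merge itself: concatenate the parts in order into a single file.
cat export.dat-1 export.dat-2 export.dat-3 export.dat-4 > export.dat
wc -l export.dat
```

This keeps the parallel write path fully parallel and pays the single-file cost only once, after the TPT job has finished.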