We need to export and place a file directly in an S3 bucket, rather than having TPT create a directory based on the filename and then write files whose names start with F00000 inside it, using the Teradata S3 access module. Please let us know if there is any way we could do this.
If you set S3SinglePartFile=True, you don't get files in subdirectories with F00000 names. There is, however, a file size limit of 10,000 * bufsize. Typically that is 80 GB, but you can make the buffer size larger.
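As a rough sketch of where that option goes, the setting is passed in the DataConnector operator's access module init string. The bucket, prefix, object name, and region below are placeholders, and the exact parameter spellings should be verified against the Teradata Access Module for Amazon S3 documentation:

  VARCHAR AccessModuleInitStr = 'S3Bucket=my-bucket S3Prefix="exports/" S3Object=mytable.csv S3Region=us-east-1 S3SinglePartFile=True'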
If you use the index feature of the DataConnector (file writer) operator, you can improve on the file size limitation by having it create multiple files directly in the bucket (or in the folder selected by the prefix); the filenames are suffixed with 00, 01, 02, etc. These files are directly importable without having to deal with F00000 names. If you used 20 as the operator index, you would have a limit of 20 * 10,000 * bufsize. Further, if you specify a filename that ends in .gz, the files will be gzip-compressed, which can help a lot, though it is hard to predict by how much. If you are compressing, using a DataConnector operator index also helps performance considerably, because compression is CPU bound and each indexed instance gets its own CPU core.
The only downside of doing it this way is that you need some idea of how large the exported object will be, because if any single object created this way exceeds 10,000 * bufsize, the job will fail, and only at the end.
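To make the above concrete, here is a minimal sketch of a TPT job that pairs the Export operator with an indexed DataConnector consumer writing compressed output to S3. The system name, credentials, schema, select statement, bucket, and access module parameter values are placeholders/assumptions to adapt; the access module name shown is the assumed Windows library, with the .so equivalent on Linux.

  DEFINE JOB EXPORT_TO_S3
  DESCRIPTION 'Export a table directly to an S3 bucket'
  (
    DEFINE SCHEMA SOURCE_SCHEMA
    (
      COL1 VARCHAR(20),
      COL2 VARCHAR(100)
    );

    DEFINE OPERATOR SQL_EXPORT
    TYPE EXPORT
    SCHEMA SOURCE_SCHEMA
    ATTRIBUTES
    (
      VARCHAR TdpId        = 'mytdsystem',     /* placeholder Teradata system name */
      VARCHAR UserName     = 'myuser',
      VARCHAR UserPassword = 'mypassword',
      VARCHAR SelectStmt   = 'SELECT COL1, COL2 FROM mydb.mytable;'
    );

    DEFINE OPERATOR S3_WRITER
    TYPE DATACONNECTOR CONSUMER
    SCHEMA *
    ATTRIBUTES
    (
      VARCHAR FileName            = 'mytable.csv.gz',   /* .gz suffix => compressed output */
      VARCHAR Format              = 'Delimited',
      VARCHAR TextDelimiter       = '|',
      VARCHAR OpenMode            = 'Write',
      VARCHAR AccessModuleName    = 'libs3axsmod.dll',  /* assumed Windows name; libs3axsmod.so on Linux */
      VARCHAR AccessModuleInitStr = 'S3Bucket=my-bucket S3Prefix="exports/" S3Region=us-east-1'
    );

    /* [20] runs 20 writer instances, so the output files are suffixed 00, 01, 02, ...
       and the overall size limit becomes 20 * 10,000 * bufsize */
    APPLY TO OPERATOR (S3_WRITER[20])
    SELECT * FROM OPERATOR (SQL_EXPORT[2]);
  );

S3 credentials are typically picked up from the usual AWS credential locations or supplied through additional access module parameters; check the Access Module for Amazon S3 documentation for the exact options available in your TTU release.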
We have a similar requirement. If you could share a sample script, it would be very helpful.
We need to export large tables from Teradata and place them in AWS S3. We only have Windows machines with the latest TTU.
Any help is much appreciated.