We are facing a failure while loading a file into a table using the TPT utility. We have loaded files many times successfully, but this time it is a very large file (>100 GB), which is probably causing the issue.
Error Observed:
FILE_READER: TPT19350 I/O error on file '/dev/aps/file123.dat'.
FILE_READER: TPT19417 pmRead failed. Buffer overflow (17)
FILE_READER: TPT19305 Fatal error reading data.
I did some research based on the error codes but couldn't find anything solid to narrow down the root cause.
Could you please shed some light on what the actual issue could be?
Hi, sorry I was out for a couple of weeks and couldn't respond.
The Teradata client version is 13.0.
Platform : AIX
We are calling TPT from a shell script with parameters.
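Roughly, the call looks like this (the job script name, job name, and job variable names below are placeholders, not our real ones):

    #!/bin/sh
    # Invoke TPT for one input file; SourceFile and TargetTable are
    # job variables referenced inside the (placeholder) job script.
    DATAFILE=/dev/aps/file123.dat
    tbuild -f load_job.tpt \
        -j file123_load \
        -u "SourceFile='${DATAFILE}', TargetTable='MYDB.MYTABLE'"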
We have loaded hundreds of files successfully without any issues to date. I think it's the volume of this huge file (>100 GB) that's causing the issue.
What exact version of 13.0?
We have released several patches since its GCA version.
And 13.0 is not an officially supported version of TPT, so my first response is for you to update TPT to the latest patch.
Then re-run the job.
If there is still an issue, let me know and we can dig further.
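By the way, if you are not sure which patch level you are on, the version is printed near the top of the console output of every tbuild run, so grepping a recent job's output (the log file name here is just an example) should show it:

    grep "Teradata Parallel Transporter" job_output.log

Look for a line such as "Teradata Parallel Transporter Version 13.00.00.xx".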
We are using TPT version 13.00.00.02 and are facing the same issue while loading a 320 GB file. Immediate help is required here.
Please upgrade to the latest efix version and try again.
If the problem persists, let me know (and let me know what version you used).
Hi Steve, thanks for your responses.
I have a general question. When loads of smaller files (up to 80-90 GB) go through successfully using the same scripts and the same version of TPT, that indicates the script and the version are actually working. If the framework worked for none of the loads, we could assume that the version doesn't support the load process.
Could you please share your thoughts on how a version/patch issue could impact only higher volumes? Also, could you please share whether addressing bulk volumes was one of the considerations for the efix?
We fix issues all of the time.
We are currently in development for the 15.10 release.
This means that bug fixes have occurred for 13.0, 13.10, 14.0, 14.10, and 15.0.
When bugs are found in newer releases, we often back-ship the fixes to older releases.
It is nearly impossible for me to determine whether any of the 13.0 efixes will take care of the problem you are experiencing.
However, it does not make sense to have you generate traces and logs for us to look at because the code has most likely changed so much between the version you have and the most recent efix.
Yes, you are correct that if the load process works on an 80GB file, it should work on a 300GB file. But I have seen stranger things happen.
That particular error usually means the row encountered is larger than our block size. On the surface that does not make much sense, but the file might have a bad row in it that we are having trouble processing. It is possible that we made changes in one of the efixes to handle it better (not to continue, but maybe to provide a better error message?).
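If you want to rule out an oversized row before upgrading, a quick check is to scan for the longest record in the file. A minimal sketch with standard awk, assuming newline-delimited records (adjust if your rows are binary or fixed-length):

    # Print the length and line number of the longest record.
    # Note: this reads the entire file, so it will take a while on 300+ GB.
    awk '{ if (length($0) > max) { max = length($0); rec = NR } }
        END { print "longest record:", max, "bytes, at line", rec }' /dev/aps/file123.dat

If the reported length is far larger than your defined row size, there is probably a malformed row around that line (for example, a missing record terminator fusing two rows together).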
I have a problem which I think is also a version issue.
I detail the problem below (I have posted it in another topic too):