I get the above error message for a file. The file has Unix-style line endings (i.e. 'LF').
However, if I open the file (in Notepad++) and change the line endings to Windows-style ('CRLF'), it imports successfully.
TPT reports "1 error rows sent to error file out.err", but there are actually two rows in that file, as if TPT missed a line feed. It had, however, processed 109,784 rows before this point without issue.
The lines in the error file look properly formatted.
I have also tried importing the file with the FastLoad utility, and it works fine; only TPT raises the error.
On which platform are you running TPT?
How many rows are in the file?
The end-of-record delimiter must be appropriate for the platform on which the job is being executed.
If TPT is running on the Windows platform, the data file must have the CRLF end-of-record marker.
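Until the root cause is understood, one workaround on the Windows side is to normalise the incoming files' line endings before handing them to TPT. A minimal sketch (file names are placeholders; reading in binary mode avoids Python's own newline translation):

```python
# Convert Unix-style LF line endings to Windows-style CRLF before
# the file is fed to TPT on Windows.

def lf_to_crlf(src_path: str, dst_path: str) -> None:
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        data = src.read()
        # Normalise any existing CRLF to bare LF first, so files with
        # mixed endings are not double-converted, then expand LF to CRLF.
        data = data.replace(b"\r\n", b"\n").replace(b"\n", b"\r\n")
        dst.write(data)
```

The unix2dos utility (or Notepad++'s EOL-conversion menu, as described above) does the same job; this is just a scriptable version for batch processing.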
I'm running on Windows. However, many of the files which need to be uploaded originate externally from Unix-like systems.
LF works 99.9% of the time via TPT on Windows. To be precise, there are something like 109,786 rows in the file. If I truncate the file to around 100,000 rows, it processes fine. If I increase the row count again, the error message changes to "pmRead failed. Buffer overflow (17)".
I have seen this behaviour on a couple of files, both of which have LF line endings and text qualifiers ("). In both cases, either switching to CRLF or removing the text qualifiers allows the file to process correctly.
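Since the problem seems to hinge on the combination of LF endings and quote characters, a quick pre-flight check can flag which incoming files fit that pattern before they reach TPT. A rough sketch (whole-file read, so only suitable for files of this size):

```python
# Report a data file's line-ending style and whether it contains
# double-quote text qualifiers, so files matching the problem
# pattern (LF endings plus quotes) can be flagged for conversion.

def inspect_file(path: str) -> dict:
    with open(path, "rb") as f:
        data = f.read()
    crlf = data.count(b"\r\n")
    lf_only = data.count(b"\n") - crlf   # LFs not preceded by CR
    return {
        "crlf_lines": crlf,
        "lf_only_lines": lf_only,
        "has_quotes": b'"' in data,
    }
```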
Just to confirm: is this expected behaviour that I should work around, rather than a bug?
We are looking into this.
A few questions need answering please:
* You mention having the problems in 15.10.00.03. Did the same jobs work on a previous TPT installation?
* Are you using a multi-byte character set?
* It sounds like you are using quoted data. If so, is QuotedData set to 'yes' or 'optional'?
* Do you have a log and/or sample script we could look at?
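For reference, QuotedData is set in the DataConnector producer operator's attribute list. A fragment might look like the following (the operator name, schema name, and delimiter here are placeholders, not taken from the reporter's script):

```
DEFINE OPERATOR FILE_READER            /* placeholder name   */
TYPE DATACONNECTOR PRODUCER
SCHEMA INPUT_SCHEMA                    /* placeholder schema */
ATTRIBUTES
(
  VARCHAR Format        = 'Delimited',
  VARCHAR TextDelimiter = '|',         /* placeholder delimiter */
  VARCHAR OpenQuoteMark = '"',
  VARCHAR QuotedData    = 'Optional'   /* or 'Yes' / 'No' */
);
```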
I recently upgraded to 15.10 from 15 - I don't recall having these issues before, although that might be a coincidence.
There aren't any non-ASCII characters in my input file.
I get the same behaviour whether QuotedData is 'Yes' or 'Optional'.
I've actually got a test data set of purely dummy data which I could pass to you, together with the scripts and log, if you let me know how to send it.
Please send everything to firstname.lastname@example.org.