I understand that ErrorTable1 has my bad load records in it, but how can I get them back out?
The DataParcel column is defined as: "Entire data record, as provided by the source producer operator."
I see this question is related to another thread, "bteq export and import for _ET dataparcel" (20 Feb 2012). Is there any utility that will dump the error table to a file?
I would also like to see the input record number added to the error table.
We are coming out with a utility to extract the data from the DataParcel field and write it out in readable (delimited) format, if the original record was delimited. If the original record format was binary, then the data will be written out in its original form. This will be available in 15.0 as part of TPT, although the utility itself is a standalone tool.
As for changes to the layout of the error table, right now there are no plans to change that.
Incidentally, just so you know (this is not a trade secret), the information in DataParcel is in the exact format in which we send the data to Teradata. Thus, if your original file contained delimited records, the information in DataParcel is a series of VARCHAR fields (a 2-byte length 'n', followed by 'n' bytes of data, for each field). If the data was loaded by TPT, then each record begins with a 2-byte row length, followed by the indicator bytes, followed by the data. If the format of the input records was binary, then the data portion will be the exact data record from the input file.
True, there are no record numbers (unless that is part of the data), but at least you know that there is no "trick" to the format of the data.
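The layout described above can be sketched in a few lines of Python. This is only an illustration under stated assumptions, not an official decoder: it assumes little-endian 2-byte lengths, a TPT-loaded row (2-byte row length, then one indicator bit per field rounded up to whole bytes), and that you know the field count; the function name and sample data are made up.

```python
import struct

def decode_dataparcel(parcel: bytes, num_fields: int) -> list:
    """Split one DataParcel row into its VARCHAR field values.

    Assumptions (not guaranteed by this post): lengths are
    little-endian unsigned 16-bit integers, and the row was loaded
    by TPT (2-byte row length, then indicator bytes, then data).
    """
    pos = 2                          # skip the 2-byte row length
    pos += (num_fields + 7) // 8     # skip the indicator bytes
    fields = []
    for _ in range(num_fields):
        (n,) = struct.unpack_from("<H", parcel, pos)  # 2-byte 'n'
        pos += 2
        fields.append(parcel[pos:pos + n])            # 'n' bytes
        pos += n
    return fields

# Build a sample 3-field row the same way: row length, one
# indicator byte, then each field as <2-byte length><bytes>.
body = b""
for f in (b"abc", b"", b"42"):
    body += struct.pack("<H", len(f)) + f
row = struct.pack("<H", 2 + 1 + len(body)) + b"\x00" + body
print(decode_dataparcel(row, 3))   # [b'abc', b'', b'42']
```

Whether the stored row length includes itself, and the exact indicator-byte count, should be verified against your own error-table rows before relying on this.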
Any tool that can SELECT from Teradata. :)
You can use BTEQ or FastExport. Some people have used SQL Assistant (SQLA).
However, be aware that the data will be binary. The field is VARBYTE so that character data is not converted according to the client session character set; the VARBYTE field keeps the data, byte for byte, in its original form.
This also means that if you extract the data on a machine whose architecture (in particular, byte order) differs from that of the machine where the data originated, you will need to account for that as well.
In Appendix B of the TPT Reference Manual we list:
DEFINE SCHEMA schema-name 'DBS table name'
DEFINE SCHEMA schema-name DELIMITED 'DBS table name'
as obsolete; do not use them.
The syntax should be:
DEFINE SCHEMA schema-name FROM TABLE 'DBS table name'
DEFINE SCHEMA schema-name FROM TABLE DELIMITED 'DBS table name'
When you tried this:
DEFINE SCHEMA EOD_USAGE FROM TABLE DELIMITED '@X';
it failed because you put the job variable in quotes. We do not parse anything inside quotes, so the error is expected.
As you saw, when using job variables, this is the correct syntax:
DEFINE SCHEMA EOD_USAGE FROM TABLE DELIMITED @X;
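For completeness, here is a minimal sketch of how @X might be supplied; the job variables file name and script file name below are illustrative, not from the original post:
job variables file (e.g., jobvars.txt):
X = 'SONARTEST.EOD_USAGE'
script:
DEFINE SCHEMA EOD_USAGE FROM TABLE DELIMITED @X;
invocation:
tbuild -f myscript.tpt -v jobvars.txt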
As for this:
DEFINE SCHEMA EOD_USAGE FROM TABLE DELIMITED ''' || @X || ''';
That is supposed to fail because there is nothing to concatenate. You just put the string "||@X||" as the table name.
But if you say this does not work:
DEFINE SCHEMA EOD_USAGE FROM TABLE DELIMITED 'SONARTEST.EOD_USAGE';
then that is a problem.
Please re-run just this example and send me the output.