I am having a problem.
I know a BTET transaction is "all or none": if I submit a multistatement transaction and any one statement fails, the entire transaction rolls back in BTET mode. In ANSI mode, however, only the failed statement rolls back, while the others can still be committed with COMMIT.
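To make the BTET behavior concrete, here is a minimal BTEQ sketch (logon string and table names are placeholders):

```
.SET SESSION TRANSACTION BTET;   /* must be set before logon */
.LOGON tdpid/user,password;

BT;                              /* begin explicit transaction */
INSERT INTO tgt SELECT * FROM src1;
INSERT INTO tgt SELECT * FROM src2;  /* if this fails, BOTH inserts roll back */
ET;                              /* end transaction: commit both */

.LOGOFF;
```

In ANSI mode the same two inserts would run in an implicit transaction, and a failure of the second statement alone would not undo the first.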
Also, I know TPT runs in Teradata mode, as defined in DBSControl. I guess FastLoad runs the same way.
Now, I was running a TPT job (using the Load operator) through Informatica. The source file had a NULL value in a column that was defined as NOT NULL in the DDL. My expectation was that the entire transaction would fail. Instead, only the affected records were excluded, with the errors written to the _ET table, and the rest were committed to the target table.
Then I tried to load the same source file using FastLoad with CHECKPOINT 10 and got the same result: only the affected records were excluded, with the errors written to the ET table, and the rest were committed.
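For reference, the FastLoad run was along these lines (a sketch only; the table, error-table, and file names are placeholders):

```
SESSIONS 4;
ERRLIMIT 50;                         /* job fails only if errors exceed 50 */
LOGON tdpid/user,password;

BEGIN LOADING tgt_table
      ERRORFILES tgt_et, tgt_uv      /* ET: data errors, UV: uniqueness violations */
      CHECKPOINT 10;

SET RECORD VARTEXT "|";
DEFINE col1 (VARCHAR(10)),
       col2 (VARCHAR(20))
       FILE = datafile.txt;

INSERT INTO tgt_table VALUES (:col1, :col2);

END LOADING;
LOGOFF;
```

The rows with the NULL in the NOT NULL column ended up in tgt_et, while all other rows were applied to tgt_table.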
Why am I getting this result? What is the logic behind it?
Does it have any relation to the fact that FastLoad/TPT Load sends data in 64K blocks?
Please let me know your replies.
In the Teradata utilities there are two kinds of error tables: one handles uniqueness (UV) violations, and the other stores rows with data errors, which are usually related to data mismatches and anomalies.
So in your case the rows with bad data are automatically moved to the error table, up to the defined error limit, instead of the whole process failing.
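In TPT this limit is controlled by the Load operator's ErrorLimit attribute, alongside the two error tables. A sketch of the operator definition (credential and table values are placeholders):

```
DEFINE OPERATOR LOAD_OP
TYPE LOAD
SCHEMA *
ATTRIBUTES
(
   VARCHAR TdpId        = 'mytdpid',
   VARCHAR UserName     = 'myuser',
   VARCHAR UserPassword = 'mypassword',
   VARCHAR LogTable     = 'tgt_log',
   VARCHAR TargetTable  = 'tgt_table',
   VARCHAR ErrorTable1  = 'tgt_et',   /* rows with data errors        */
   VARCHAR ErrorTable2  = 'tgt_uv',   /* uniqueness violations        */
   INTEGER ErrorLimit   = 50          /* terminate only past 50 errors */
);
```

With ErrorLimit set, bad rows are diverted to the error tables and the job continues; the job terminates only once the error count exceeds the limit.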
Thanks Saeed for your response.
So I guess if it is a multistatement INSERT/SELECT in BTEQ under BTET transaction mode, there will be a rollback to the last checkpoint, whereas the utilities will simply write the bad rows to the corresponding ET and UV tables even in BTET transaction mode. This is true for TPT as well.
Let me try one thing: I will create an error table for a base table to log errors, then run a simple INSERT/SELECT transaction in BTET mode through ODBC. In that case the transaction should not fail but should log the errors instead.
I will let everyone know the result.
Sure, you can run that test, but remember that here the rows are inserted into the error table due to data errors, and the transaction will fail if the errors exceed the defined error limit.
What you are seeing is a difference in behavior between our special load/unload protocols, and generic SQL insert/select.
FastLoad (and TPT Load) are "bulk" loaders. They were designed to load large volumes of data quickly and, if possible, without terminating.
Thus, these protocols have rules.
Certain conditions will be considered fatal.
Certain conditions will just allow for the rows in error to be placed into special error tables and allow for the job to continue.
Thanks Feinholz and Saeed for your reply.
So I think any normal ODBC multistatement transaction (even through a BI tool) will follow the standard commit and rollback rules of its transaction mode (ANSI or BTET), but these utilities are designed to report row-level errors and continue processing without terminating the transaction.
I now get the picture. This approach is really handy.