I'm having an issue between Informatica and Teradata. The source table contains 11397 records. I'm doing a straight pull from the source to the target table with no manipulation of the data. The source is SQL Server and the target is Teradata. The workflow completes successfully, and Workflow Manager indicates that 11397 rows were read, 2 had issues, and 11395 were written to the target table. The 2 rejections are correct: those rows violate a secondary unique key constraint.
My issue is that when I perform a count within Teradata on the target table, I get only 8207, not the 11395 that Informatica indicated. After researching the data, I found that something is causing all the records (which are perfectly fine) that come before the last of the 2 constraint-violating records not to be written to the target table. Also, the number of rows that actually succeed varies based on what the commit interval in Informatica is set to.
Is there a way to leave the default commit interval in Informatica at 10000 but have all successful records be written to the target table, with only the two records that violate the unique secondary key constraint left out?
Say you have 100 records and there is an issue with the record at row 30. Is there a way to have all 99 good records written to the target table instead of only 70 making it there?
I guess your target table is a SET table, which does not allow you to load duplicate rows. Try changing the target table structure to MULTISET and give it another try.
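For reference, a minimal DDL sketch of the difference (database, table, and column names here are made up for illustration):

```sql
-- In a Teradata-mode session, CREATE TABLE defaults to SET, which
-- silently discards exact duplicate rows. MULTISET keeps them.
CREATE MULTISET TABLE sandbox.target_stage (
    id      INTEGER,
    load_dt DATE
)
PRIMARY INDEX (id);
```

Note that MULTISET only changes duplicate-row handling; a unique secondary index on the table would still reject rows with duplicate key values, as in the original poster's case.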
Teradata's default transaction mode rolls back the entire transaction to the last commit point whenever a SQL statement fails. This is unlike ANSI transaction mode, which only fails the current statement and does not roll back the entire transaction.
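You can see the difference from a BTEQ session; this is a rough sketch, assuming a table like target_stage with a unique key on id exists, and with placeholder logon credentials:

```sql
-- BTEQ: the session transaction mode must be set before logon.
.SET SESSION TRANSACTION ANSI;    /* only the failing statement is rejected */
/* .SET SESSION TRANSACTION BTET;    Teradata mode: a failure rolls back
                                     everything since the last commit      */
.LOGON tdpid/user,password;

INSERT INTO sandbox.target_stage VALUES (1, DATE);
INSERT INTO sandbox.target_stage VALUES (1, DATE);  /* duplicate key: fails */
INSERT INTO sandbox.target_stage VALUES (2, DATE);
COMMIT;
/* In ANSI mode, rows 1 and 2 survive the commit.
   In Teradata (BTET) mode, the failed statement would have rolled back
   the whole open transaction, losing row 1 as well. */
.LOGOFF;
```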
You are seeing count mismatches between what Informatica reports as affected rows and what the Teradata table actually has. I experienced this in the past: for some reason either Informatica fails to catch the errors or Teradata doesn't pass the error information back to Informatica, so Informatica is completely unaware of such DML errors in Teradata. It's annoying, and I was very surprised when I first noticed it.
I would suggest you either use loader utilities (this works) or change the transaction mode to ANSI and try.
A couple of questions for you: what type of Informatica connection are you using? When you say the records are "perfectly fine," what exactly do you mean by that? Are there any duplicate records in the source data set? What does your DDL for the target table look like?
Thanks Karteek! We resolved the issue in line with your suggestion. We changed the Teradata ODBC connection to ANSI transaction mode and also set the Custom Properties of the workflow task to "OptimizeODBCWrite=No;".
The source table that I was using contained two records that violated the target table's secondary unique key constraint. Since Informatica was committing rows 10,000 at a time, all the valid records that came before the problem records fell within the same uncommitted batch, so when the error occurred Teradata rolled them back to the last commit point.
Once we made the modifications above, the Informatica monitor record counts matched the actual results in the target table in Teradata. ANSI mode only excludes the records with errors or violations, rather than rolling back entire groups of records as Teradata mode does.
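For anyone hitting the same mismatch, the check that exposed it here was simply comparing the "rows written" figure in the Informatica session log against the table itself (table and column names below are illustrative):

```sql
-- Count the rows that actually landed in the target table and compare
-- with the Informatica session log's applied-rows figure.
SELECT COUNT(*) AS rows_in_target
FROM   sandbox.target_table;

-- Optionally, list source rows that never arrived. This assumes the
-- source data has been staged somewhere reachable from Teradata.
SELECT s.id
FROM   sandbox.source_stage s
LEFT JOIN sandbox.target_table t
       ON t.id = s.id
WHERE  t.id IS NULL;
```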
Thank you all for your input and recommendations. I really appreciate it.