Teradata PT 15.10

STK
Fan

Hello Team

I am very new to Teradata and am trying to work on a TPT script. The script is used for loading data from a flat file into a staging table. We sometimes get data that causes a datatype length overflow, and the load terminates immediately on encountering such records. I am looking for a way to continue the load when the number of error records is less than a pre-defined number (say 5 records). Is there an option in TPT to achieve this?

I see there is a rowErrorFileName option, which would push all the error records into an error file and let the load continue. But the requirement is to have an upper threshold of, say, 5 records (pre-defined), and if the number of error records exceeds 5 then the TPT load should terminate. Please suggest; I appreciate the help.


Senior Apprentice

Re: Teradata PT 15.10

Hi,

If you are using the LOAD operator then you probably want the ErrorLimit attribute.

I couldn't see a "rowErrorFileName" attribute for the LOAD operator, so I'm not sure where that comes from. For the LOAD operator you have the ErrorTable1/ErrorTable2 attributes.
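For illustration, here is a minimal sketch of where ErrorLimit sits in a job script. The schema, credentials, paths, and table names below are all placeholders invented for the example, not your actual job:

    DEFINE JOB LOAD_STG_TABLE
    DESCRIPTION 'Load a flat file into a staging table'
    (
      DEFINE SCHEMA STG_SCHEMA
      (
        COL1 VARCHAR(10),
        COL2 VARCHAR(50)
      );

      /* Producer: reads the flat file on the client system */
      DEFINE OPERATOR FILE_READER
      TYPE DATACONNECTOR PRODUCER
      SCHEMA STG_SCHEMA
      ATTRIBUTES
      (
        VARCHAR DirectoryPath  = '/data/incoming/',
        VARCHAR FileName       = 'stg_file.txt',
        VARCHAR Format         = 'Delimited',
        VARCHAR TextDelimiter  = '|',
        VARCHAR OpenMode       = 'Read',
        VARCHAR RowErrFileName = '/data/errors/stg_file.err'
      );

      /* Consumer: FastLoad-protocol load running on the dbms */
      DEFINE OPERATOR STG_LOAD
      TYPE LOAD
      SCHEMA *
      ATTRIBUTES
      (
        VARCHAR TdpId        = 'mytdp',
        VARCHAR UserName     = 'myuser',
        VARCHAR UserPassword = 'mypassword',
        VARCHAR TargetTable  = 'STG_DB.STG_TABLE',
        VARCHAR LogTable     = 'STG_DB.STG_TABLE_LOG',
        VARCHAR ErrorTable1  = 'STG_DB.STG_TABLE_ET',
        VARCHAR ErrorTable2  = 'STG_DB.STG_TABLE_UV',
        INTEGER ErrorLimit   = 5   /* terminate the job once this many rows land in the error table */
      );

      APPLY ('INSERT INTO STG_DB.STG_TABLE (:COL1, :COL2);')
      TO OPERATOR (STG_LOAD)
      SELECT * FROM OPERATOR (FILE_READER);
    );

You would then run it with something like tbuild -f load_stg.tpt. The exact cut-off semantics (terminate at the limit versus strictly above it) are worth checking against the documentation for your TPT version.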

 

HTH

Dave

Ward Analytics Ltd - information in motion
www: http://www.ward-analytics.com
STK
Fan

Re: Teradata PT 15.10

Thanks Dave. So, as I understand it, I could use the ErrorLimit option to set a threshold. If the number of bad records is less than the assigned threshold, the load still continues and the bad records are loaded into the error table. If the number of error records exceeds the threshold, the load is terminated.

In the case where the bad records are below the threshold, is there a way to push those records to a bad file rather than having to access the error tables to identify them? Appreciate the help.

Senior Apprentice

Re: Teradata PT 15.10

Hi,

Yes, your understanding is correct. No, the bad records can only be written to a table - it is the dbms code that detects the problem, and that code doesn't send the rows back out to a file on your client/ETL server. It only puts them into a table.
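If you really do need a file, one workaround is to export from the error table after the load has finished. A minimal BTEQ sketch, assuming the default ET table structure (the logon details, table name, and output path are placeholders):

    .LOGON mytdp/myuser,mypassword
    .EXPORT REPORT FILE = /data/errors/stg_table_et.txt
    SELECT ErrorCode, ErrorFieldName
    FROM STG_DB.STG_TABLE_ET;
    .EXPORT RESET
    .LOGOFF

Bear in mind that the rejected row itself is held in the DataParcel column as a byte string, so reproducing the source record verbatim in a file is not straightforward.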

 

Cheers,

Dave

Ward Analytics Ltd - information in motion
www: http://www.ward-analytics.com
STK
Fan

Re: Teradata PT 15.10

Hello Dave/Everyone

Thanks for the response. I wanted to do some checks on my side before asking another question or accepting the solution.

Below are my findings.

I am trying to debug an existing script which uses Teradata TPT to load incoming files into empty Teradata tables.

During the load, the schema is defined, the DATA CONNECTOR operator is used as the producer, and the LOAD operator is used for the consumer piece of the equation; tbuild is used to initiate the TPT job. I enabled the RowErrFileName attribute and did some testing. What I noted during error handling is that if there is an error against the schema, say an extra delimiter or an additional column, then the error file gets created with the error records; but when there is an error in a data type conversion or in the data type length (a data overflow), the records are loaded into the _ET table and do not show up in the error file. Is there a reason why the errors do not all land in the same place, either the file or the _ET tables?

My understanding is that the schema matching for all records might be happening in the DataConnector producer phase, which is why the file is getting created, while the rest are database errors that get appended to the _ET tables during the consumer LOAD operator phase. Does this make sense? Also, what kinds of errors other than a schema mismatch will fall into the error file in the above scenario?

If there is some literature suggesting/explaining similar issues, kindly point me to it.

I appreciate everyone's time and effort on this. Thanks.

Senior Apprentice

Re: Teradata PT 15.10

Hi,

I think your statement "My understanding is that the schema matching for all records might be happening in the DataConnector producer phase, which is why the file is getting created, while the rest are database errors that get appended to the _ET tables during the consumer LOAD operator phase" is spot on.

 

As per my previous post, think about which piece of code identifies that there is an error, and where that piece of code is running.

The DataConnector operator code runs on the client system.

The LOAD operator code that detects data errors runs on the dbms platform.

As your comment above says, "the rest are database errors".
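So each side reports in its own medium: the client-side DataConnector writes its rejects to the RowErrFileName file, and the dbms-side LOAD operator writes its rejects to the ET table. To see what the dbms side trapped, a quick query sketch (assuming the default ET table layout; the table name is a placeholder):

    SELECT ErrorCode, ErrorFieldName, COUNT(*) AS ErrCnt
    FROM STG_DB.STG_TABLE_ET
    GROUP BY ErrorCode, ErrorFieldName
    ORDER BY ErrCnt DESC;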

 

Cheers,

Dave

Ward Analytics Ltd - information in motion
www: http://www.ward-analytics.com
STK
Fan

Re: Teradata PT 15.10

Thanks Dave, that was very helpful.