TPT output from ODBC includes Indicator Bytes?

Tools & Utilities
Senior Apprentice

TPT output from ODBC includes Indicator Bytes?

Hi,

I'm trying to transfer data from a MySQL table into a Teradata table. The export uses the ODBC operator and the import uses the LOAD operator.

All records go into the ET table, mostly with 2673 errors (source parcel length incorrect); some fail with a 6760 (invalid timestamp field).

Looking at the DataParcel column, what appears to be happening is that every data record sent to Teradata has an additional 2 bytes added to the front. I'm certain that these two bytes are indicator bytes. They are not a record length (or if they are, then they are completely wrong!) or a field length, because the first field in the output is a BIGINT (i.e. a fixed-length field).

Also, when I interpret them as indicator bytes and compare them to the source data, they match up. I confess that I haven't checked all records, but the first few match.

So it looks like the data parcel includes indicator bytes, which makes sense because the data may include NULLs, but the LOAD operator is not expecting them (and therefore is not telling the DBMS to expect them). There is no 'indicator bytes' attribute on the LOAD operator that I can see.

Looking at the TPT LOAD operator documentation, it appears that you have to use the Data Connector operator to handle indicators. Is this correct? (I have to say that if so, this would seem to be a missing piece of design/functionality in the TPT LOAD operator.)

I am using TPT 14.10.

Cheers,

Dave

Ward Analytics Ltd - information in motion
www: http://www.ward-analytics.com
13 REPLIES
Teradata Employee

Re: TPT output from ODBC includes Indicator Bytes?

Data is sent to Teradata in indicator mode by all of our operators, and it is not something the user can control.

The only option is for the user to tell us whether the incoming data is in indicator mode or not so that the Data Connector operator will know how to process the data.
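
For example, a file-reader definition that declares indicator-mode input might look like this (a sketch; the operator name, schema name, and file name are just placeholders):

DEFINE OPERATOR FILE_READER
TYPE DATACONNECTOR PRODUCER
SCHEMA INPUT_SCHEMA
ATTRIBUTES
(
VARCHAR FileName = 'input.dat',
VARCHAR Format = 'Formatted',
VARCHAR OpenMode = 'Read',
VARCHAR IndicatorMode = 'Y' /* the incoming rows already carry indicator bytes */
);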

During the transfer of data in the data streams from the DC operator to the Load operator, we add the indicator bytes (we also have to account for possible CASE logic in the script language which might NULL the data).

So, yes you are correct that those 2 bytes are indicator bytes.
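
To make that concrete: for a schema with n columns, each row is prefixed with ceiling(n/8) indicator bytes, one bit per column (first column = most significant bit of the first byte), and a bit set to 1 means that column is NULL. A 9- to 16-column schema therefore gets 2 indicator bytes, which matches what you observed. Roughly:

[indicator byte 1][indicator byte 2][BIGINT, 8 bytes][column 2 data]...
bit 7 = column 1   bit 7 = column 9
bit 6 = column 2   bit 6 = column 10
...
bit 0 = column 8   bit 0 = column 16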

And no, we do not have a missing piece of the design. This is all done on purpose.  :)

-- SteveF
Senior Apprentice

Re: TPT output from ODBC includes Indicator Bytes?

Hi,

Glad to hear it's done on purpose, thanks for that. I've resolved my problem: the SCHEMA definition was wrong; I'd misunderstood the data format as it came out of the source.

Cheers,

Dave

Ward Analytics Ltd - information in motion
www: http://www.ward-analytics.com

Re: TPT output from ODBC includes Indicator Bytes?

Use TPT for loading from one database to another database, from a database to a file, or from a file to a database. That should solve your problem.
Enthusiast

Re: TPT output from ODBC includes Indicator Bytes?

Hi, I am trying to load data using TPT, but I am getting an error mentioning a Unicode data identifier between USING and INSERT. I didn't use the USING keyword anywhere in my script. Could you please help me out?

Enthusiast

Re: TPT output from ODBC includes Indicator Bytes?

Thanks Steve and Dave. I'm also facing the same issue while reading data from an Oracle table and loading it into a Teradata table. The TTU version I'm using is 15.10. For reading from Oracle I'm using the ODBC operator, and for loading the data I'm using the UPDATE operator.

So, what is the suggestion here? Should I add a dummy column for the indicator bytes in the schema definition, or is there a better approach to handling this problem?

Thanks & Regards,

Arpan.

Teradata Employee

Re: TPT output from ODBC includes Indicator Bytes?

If you are reading data from Oracle with the ODBC operator, there are no indicator bytes coming from Oracle.

They are added to the data when the rows are moved from the ODBC operator to the loading operator (Load, Update, Stream).

But these indicator bytes will not be the issue.

The issue is most likely with the schema definition.

The loading operator will use the schema definition to build the USING clause for the INSERT statement.

And it is possible that the column definitions in the schema do not match the actual data coming from Oracle.
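
For example (column names and types here are purely illustrative), a two-column schema of an INTEGER and a VARCHAR(12) produces a request along these lines:

USING (COL1 INTEGER,
COL2 VARCHAR(12))
INSERT INTO target_table VALUES (:COL1, :COL2);

If the bytes in the data stream do not line up with those types, the DBS rejects the rows with a length error such as 2673.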

If you can provide the script, and the layout of the source table, I might be able to assist more.

-- SteveF
Enthusiast

Re: TPT output from ODBC includes Indicator Bytes?

Hi Steve,

Thanks for your response.

Below is my TPT control file:

DEFINE JOB LOAD_ACR_ACCTS_RECEIVABLE_TO_TABLE
DESCRIPTION 'export LOAD_ACR_ACCTS_RECEIVABLE_TO_TABLE'
(
DEFINE SCHEMA SCHEMA_ACR_ACCTS_RECEIVABLE
(
AR_CTGRY_CDE_ACR VARCHAR(8),
AR_CTGRY_DESC_ACR VARCHAR(12),
SUB_ACCT_NO_ACR VARCHAR(16)
);

DEFINE OPERATOR o_ODBCOper
TYPE ODBC
SCHEMA SCHEMA_ACR_ACCTS_RECEIVABLE
ATTRIBUTES (
VARCHAR UserName = @UserName
,VARCHAR UserPassword = @UserPassword
,VARCHAR DSNName = @DSNName
,VARCHAR PrivateLogName = 'ODBCloadlog'
,VARCHAR SelectStmt = @SelectClause || @SourceSchema || @SourceTable
,VARCHAR TraceLevel = 'all'
,INTEGER DataBlockSize = 2048
);

DEFINE OPERATOR o_MLoad
TYPE UPDATE
SCHEMA SCHEMA_ACR_ACCTS_RECEIVABLE
ATTRIBUTES (
VARCHAR TdpId = @TdpId
,VARCHAR UserName = @TargetUserName
,VARCHAR UserPassword = @TargetUserPassword
,INTEGER MaxSessions = @MaxSessions
,INTEGER MinSessions = @MinSessions
,VARCHAR TargetTable = @TargetTable
,VARCHAR WorkingDatabase = @WorkingDatabase
,VARCHAR LogTable = @LogTable
,VARCHAR ErrorTable1 = @ErrorTable1
,VARCHAR ErrorTable2 = @ErrorTable2
,VARCHAR WorkTable = @WorkTable
,VARCHAR PrivateLogName = 'Load'
,VARCHAR AmpCheck = 'None'
,VARCHAR AccountId = @TeraAccountId
,VARCHAR TraceLevel = 'all'
);

APPLY
(
'INSERT INTO ' || @WorkingDatabase || '.' || @TargetTable ||
'(
:AR_CTGRY_CDE_ACR,
:AR_CTGRY_DESC_ACR,
:SUB_ACCT_NO_ACR
);')

TO OPERATOR (o_MLoad[@LoadInst])
SELECT * FROM OPERATOR (o_ODBCOper[@ReadInst] ATTRIBUTES (SelectStmt=@SelectClause || @SourceSchema || @SourceTable || ' WHERE SYS_ACR = 8155 AND PRIN_ACR = 7000;'));
);

Below is my sample record:

I've added '|' as a separator just to differentiate the columns:
8155|TAXES|TAXES FEES|1111111111111111

Below is Source table structure:

AR_CTGRY_CDE_ACR  VARCHAR2 (8 Byte)
AR_CTGRY_DESC_ACR VARCHAR2 (12 Byte)
SUB_ACCT_NO_ACR CHAR (16 Byte)

Below is my tbuild command:

tbuild -C -h 100M -f /data/Sample/ctl/acr_accts_receivable.tpt.ctl -v /data/Sample/logon/aroy001c_tpt.logon -u " UserName='XXXXXXX' , UserPassword='YYYYYY' , DSNName='VCMC_Oracle' , load_op=o_ODBCOper , LoadInst=7 , ReadInst=7 ,MaxSessions=5, MinSessions=1 ,WorkingDatabase='NDW_TEMP' , TargetTable='acr_accts_receivable' , LogTable='NDW_TEMP.LG_acr_accts_receivable' , ErrorTable1='NDW_TEMP.ET1_acr_accts_receivable' , ErrorTable2='NDW_TEMP.ET2_acr_accts_receivable' , WorkTable='NDW_TEMP.WT_acr_accts_receivable', LOAD_DTS='2016-07-04 08:21:34' , SkipRows=1 , SourceSchema='SourceSchema.' , SourceTable='ACR_ACCTS_RECEIVABLE', SelectClause='SELECT AR_CTGRY_CDE_ACR, AR_CTGRY_DESC_ACR,SUB_ACCT_NO_ACR FROM '" ACR_ACCTS_RECEIVABLE

Thanks & Regards,

Arpan.

Teradata Employee

Re: TPT output from ODBC includes Indicator Bytes?

Your schema has 3 columns.

Your sample data has 4 fields.

The schema does not match the source table.
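
Either the SELECT should return only the three columns in the schema, or, if you really need all four fields, the schema needs a fourth column. A sketch (I am guessing that the first field is SYS_ACR, and guessing its type):

DEFINE SCHEMA SCHEMA_ACR_ACCTS_RECEIVABLE
(
SYS_ACR INTEGER, /* guessed name and type for the 4th field */
AR_CTGRY_CDE_ACR VARCHAR(8),
AR_CTGRY_DESC_ACR VARCHAR(12),
SUB_ACCT_NO_ACR VARCHAR(16)
);

Whichever way you go, the schema, the SELECT list, and the INSERT statement must all describe the same set of fields.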

Also, remove the -C from the command line; you do not want to round-robin your data to the instances of the Update operator.

It slows down performance.

Let TPT manage how best to process the data.

The -C command line option is only for when you want to write data out to multiple output files and have each file roughly the same size.

Also on the command line, SkipRows will not be in effect; that is only supported by the DataConnector operator (file reader).
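
So a cleaned-up command line would look roughly like this (job variables elided, with -C and SkipRows removed):

tbuild -h 100M -f /data/Sample/ctl/acr_accts_receivable.tpt.ctl -v /data/Sample/logon/aroy001c_tpt.logon -u "UserName='XXXXXXX', ..." ACR_ACCTS_RECEIVABLE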

-- SteveF
Enthusiast

Re: TPT output from ODBC includes Indicator Bytes?

Thanks a lot Steve.

Just wanted to understand: when defining the SCHEMA in the TPT control file, should it describe the source table or the target table? If the source and target tables have different data types (a very common case when moving data from Oracle to Teradata), DATE should be cast to VARCHAR(19). Are there any other data types that need this kind of casting?
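
For example, for a DATE column I am currently doing something like this (the column name is just an example):

In the SelectStmt (Oracle side):
TO_CHAR(SOME_DATE_COL, 'YYYY-MM-DD HH24:MI:SS') AS SOME_DATE_COL

In the schema:
SOME_DATE_COL VARCHAR(19)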

One of my source columns has the data type NUMBER(4), and the target column is INTEGER. When I defined the schema column as INTEGER the job failed, but when I changed it to NUMBER(4) it succeeded.

Thanks & Regards,

Arpan.