Use Teradata Parallel Transporter to Move Data Without Landing the Data to Disk

Enthusiast

Re: Use Teradata Parallel Transporter to Move Data Without Landing the Data to Disk

Sam@7, you have to use the gunzip command to first uncompress the file and then extract it with tar, as follows:

#gunzip filename.tar.Z

#tar -xvf filename.tar

Once you do that, you will find an install.sh file; run it to install the Progress drivers.
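For example, assuming the archive is the Solaris SPARC package mentioned elsewhere in this thread (the file name here is only illustrative), the full sequence looks roughly like this:

gunzip PROGRESS_DATADIRECT_ODBC_ORACLE_SOL_SPARC_32.tar.Z
tar -xvf PROGRESS_DATADIRECT_ODBC_ORACLE_SOL_SPARC_32.tar
./install.sh     # the installer script may sit in a subdirectory, depending on how the archive extracts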

Having said that, if you are using the versions Tony mentioned, the Progress DataDirect ODBC drivers are bundled with TPT and will be installed along with it. You can find more information in the TPT Reference Manual on how to use these drivers.

If you do not have a permanent DataDirect ODBC driver license, please contact your Teradata Account Representative or Teradata Customer Support to procure one. Alternatively, you can order the DataDirect Driver Connector license from Progress Software.

Enthusiast

Re: Use Teradata Parallel Transporter to Move Data Without Landing the Data to Disk

Hi,

I tried what you suggested, but I still get the same error. I downloaded PROGRESS_DATADIRECT_ODBC_ORACLE_SOL_SPARC_32.tar.Z, but it is not working. When I run gunzip on the file in UNIX, it shows the following error:

gunzip PROGRESS_DATADIRECT_ODBC_ORACLE_SOL_SPARC_32.tar.Z;

gzip: PROGRESS_DATADIRECT_ODBC_ORACLE_SOL_SPARC_32.tar.Z: corrupt input. Use zcat to recover some data.

As the error suggests, I tried the zcat command. It shows the following output:

 zcat PROGRESS_DATADIRECT_ODBC_ORACLE_SOL_SPARC_32.tar.Z| tar xvf -

x autorun.dat, 2904 bytes, 6 tape blocks

x etc/lang/license.txt, 59166 bytes, 116 tape blocks

x etc/lang/msg.dat, 30 bytes, 1 tape blocks

x etc/lang/useng.msg, 18233 bytes, 36 tape blocks

uncompress: PROGRESS_DATADIRECT_ODBC_ORACLE_SOL_SPARC_32.tar.Z: corrupt input

It extracts some files, but then fails with the error again.

Please suggest a solution for this.

Thank you.

Enthusiast

Re: Use Teradata Parallel Transporter to Move Data Without Landing the Data to Disk

Sam@7,

It looks like the file is corrupt, hence the message.

Did you download the file directly on the Linux/SPARC box, or did you copy it over from a Windows machine?

Did the file download complete successfully?

If you copied it from another machine, did you transfer the file to the SPARC machine in binary format?
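If the copy went over FTP in ASCII mode, this is exactly the kind of corruption you would see. A minimal sketch of a clean re-transfer (the host name is a placeholder), followed by a checksum you can compare against the source copy:

ftp source_host
ftp> binary
ftp> get PROGRESS_DATADIRECT_ODBC_ORACLE_SOL_SPARC_32.tar.Z
ftp> bye
cksum PROGRESS_DATADIRECT_ODBC_ORACLE_SOL_SPARC_32.tar.Z     # should match the cksum output on the source machine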

Not applicable

Re: Use Teradata Parallel Transporter to Move Data Without Landing the Data to Disk

Hi,

I'm trying to configure TPT to fetch data from WebSphere MQ and write it directly into a Teradata table.

TPT: v15.10 64-bit (Windows)

WebSphere MQ: v8 64-bit

TD database: v13.10

Data in MQ Queue:

a|s|d|f|g

c|d|f|g|h

I managed to fetch one row from MQ and insert it into the target table. But when I put two rows of data onto the MQ queue and ran the same TPT script again, it failed with the error below:

Error:

---------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Teradata Parallel Transporter Version 15.10.01.02 64-Bit

Teradata Parallel Transporter DataConnector Operator Version 15.10.01.02

MQ_READER[2]: Instance 2 directing private log report to 'dataconnector_log-2'.

MQ_READER[1]: Instance 1 directing private log report to 'dataconnector_log-1'.

MQ_READER[1]: DataConnector Producer operator Instances: 2

Teradata Parallel Transporter Stream Operator Version 15.10.01.02

STREAM_OPERATOR: private log specified: stream_log-1

**** 13:31:18 MQA19080 Teradata Websphere MQ AMOD, Version 15.10.01.001

**** 13:31:18 MQA19081 COPYRIGHT 2001-2016, Teradata Corporation.  ALL RIGHTS RESERVED.

**** 13:31:18 MQA19083 Code version 15.10.01.01

**** 13:31:18 MQA19084 Compiled for 64-bit WIN64

**** 13:31:18 MQA19085 pmdcomt_HeaderVersion: 'Common 15.10.00.00' - packing 'pack (push, 1)'

**** 13:31:18 MQA19086 pmddamt_HeaderVersion: 'Common 15.10.00.00' - packing 'none'

**** 13:31:18 MQA19087 MQAM Parameters in effect:

     .        Queue Manager Name: 'xxx_LOCAL'

     .        Queue Name: 'xxx.TPT'

     .        Checkpoint: File-based to pathname CKFILE_xxx.log

     .        Media flush: No

     .        Logging: none

     .        Terminate on duplicate message: yes

     .        MQ convert message data: no

     .        Queue open: NON-EXCLUSIVE.

     .        BlockSize: 32000

     .        MQ open retry tenacity: 5

     .        MQ open wait interval: 5

     .        MQ read wait interval: 15

MQ_READER[1]: ECI operator ID: 'MQ_READER-7592'

MQ_READER[1]: Operator instance 1 processing file 'DD:DATA'.

STREAM_OPERATOR: Start-up Rate: UNLIMITED statements per Minute

STREAM_OPERATOR: Operator Command ID for External Command Interface: STREAM_OPERATOR7588

STREAM_OPERATOR: connecting sessions

STREAM_OPERATOR: The job will use its internal retryable error codes

STREAM_OPERATOR: The job will use its internal data-related error codes

MQ_READER[1]: TPT19134 !ERROR! Fatal data error processing file 'DD:DATA'. Delimited Data Parsing error: Column length overflow(s) in row 1.

MQ_READER[1]: TPT19015 TPT Exit code set to 12.

STREAM_OPERATOR: disconnecting sessions

MQ_READER[1]: Total files processed: 0.

STREAM_OPERATOR: Total processor time used = '0.624004 Second(s)'

STREAM_OPERATOR: Start : Tue Jun 21 13:31:18 2016

STREAM_OPERATOR: End   : Tue Jun 21 13:31:57 2016

Job step MAIN_STEP terminated (status 12)

Job sckhaw terminated (status 12)

Job start: Tue Jun 21 13:31:14 2016

Job end:   Tue Jun 21 13:31:57 2016

---------------------------------------------------------------------- end error ----------------------------------------------------------------------------------------

And here's my script:

---------------------------------------------------------------------- TPT script ----------------------------------------------------------------------------------------

DEFINE JOB MQ_LOAD

DESCRIPTION 'Load a Teradata table using MQSeries'

(

DEFINE SCHEMA MQ_SCHEMA

(

Associate_Name VARCHAR(10),

DOJ VARCHAR(10),

Designation VARCHAR(15),

Loan_Amount VARCHAR(1),

Martial_Status VARCHAR(1)

);

DEFINE OPERATOR MQ_READER

TYPE DATACONNECTOR PRODUCER

SCHEMA MQ_SCHEMA

ATTRIBUTES

(

VARCHAR PrivateLogName = 'dataconnector_log',

VARCHAR FileName = 'DD:DATA',

VARCHAR Format = 'Delimited',

VARCHAR OpenMode = 'Read',

VARCHAR TextDelimiter ='|',

VARCHAR AccessModuleName = 'libmqsc',

VARCHAR AcceptMissingColumns = 'Y',

VARCHAR AccessModuleInitStr = '-qnm xxx.TPT

                               -qmgr ..._LOCAL

                               -TRCL 4 MQTRCE_xxx

                               -CKFILE CKFILE_xxx.log'

);

DEFINE OPERATOR STREAM_OPERATOR

TYPE STREAM

SCHEMA *

ATTRIBUTES

(

VARCHAR PrivateLogName = 'stream_log',

VARCHAR TdpId = 'xxx.xxx.xxx.xxx',

VARCHAR UserName = 'xxx',

VARCHAR UserPassword = 'xxx',

VARCHAR LogTable = 'xxx.sanity_test_MQS_log',

VARCHAR ErrorTable = 'xxx.sanity_test_MQS_error'

);

APPLY

('INSERT INTO xxx.nick_MQS_result VALUES (

:Associate_Name,

:DOJ,

:Designation,

:Loan_Amount,

:Martial_Status

);

')

TO OPERATOR (STREAM_OPERATOR[2])

SELECT * FROM OPERATOR (MQ_READER[2]);

);

---------------------------------------------------------------------- End TPT script -----------------------------------------------------------------------------------

Appreciate your advice on this.

-- Nickkyboy --

Enthusiast

Re: Use Teradata Parallel Transporter to Move Data Without Landing the Data to Disk

Hi All,

I have a TPT script to fetch data from Oracle and load it into a Teradata table. I am using the ODBC operator and the Load operator for this. Now I want to load the data from Oracle into two different TD systems at the same time. I have tried this by defining two Load operators, but only one TD system gets loaded.

Please advise whether it is possible to load two TD servers at the same time.

Thanks in advance :)

Regards,

Karthik

Not applicable

Re: Use Teradata Parallel Transporter to Move Data Without Landing the Data to Disk

Hi All,

I would like to use TPT to extract data from Oracle and load it into Teradata. I understand we need to use the ODBC operator with a DataDirect ODBC connection to Oracle. Is there any way to connect to Oracle without the DataDirect ODBC connection?

For example, using native Oracle client drivers instead of ODBC.

When we use an ODBC connection to export data, it takes more time than the Oracle client. How can I achieve better performance? I need to run this every day for 40 tables.

Thanks,

chandra

Teradata Employee

Re: Use Teradata Parallel Transporter to Move Data Without Landing the Data to Disk

Karthik, yes, it's possible to load to multiple TD servers at the same time.

You need to define a different value for the TdpId attribute on each of the two Load operators in order to load the two TD servers.

See the "Multiple APPLY feature" in the TPT User Guide for more information.

Teradata Employee

Re: Use Teradata Parallel Transporter to Move Data Without Landing the Data to Disk

Chandra, you can use the Simba ODBC drivers.

See the "ODBC Operator" chapter 10 in the TPT 15.10 Reference for more information.

Enthusiast

Re: Use Teradata Parallel Transporter to Move Data Without Landing the Data to Disk

Hi Tony,

I have referred to the manual, and I am able to load multiple target servers using the Multiple APPLY feature of TPT. Thank you for your assistance.

Regards,

Karthik

Enthusiast

Re: Use Teradata Parallel Transporter to Move Data Without Landing the Data to Disk

Hi,

I want to write a TPT script that will sync data from Oracle to a Teradata table. I am confused by the attributes, for example what TdpId, PrivateLogName, etc. should be. Please explain with an example, and also how to write a TPT script step by step. As a beginner, I am not able to understand how to get started with it.

Thank you.