com.teradata.connector.common.exception.ConnectorException: Connection refused


Hello Teradata users, I have a problem when I try to use TPT with Teradata as the producer and HDFS as the consumer.

My Teradata DB is the Teradata Express 16.00.00 virtual machine, and I want to write the data to HDFS running on my host machine (Linux).

I have a working pseudo-distributed Hadoop 2.7 HDFS. Inside the VM with the Teradata DB I have the Hadoop client (edge node), so I can run commands like

hdfs dfs -ls /, hdfs dfs -mkdir, hdfs dfs -rm, etc. (a quick smoke test is shown below).
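
For reference, the kind of smoke test I run from the edge node looks roughly like this (the /user/tdatuser path is just an example from my setup):

    hdfs dfs -ls /
    hdfs dfs -mkdir -p /user/tdatuser/tpt_test
    hdfs dfs -put /etc/hosts /user/tdatuser/tpt_test/
    hdfs dfs -cat /user/tdatuser/tpt_test/hosts
    hdfs dfs -rm -r /user/tdatuser/tpt_test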

But when I run the TPT job with "tbuild -f test04.tpt -v ParamsTPT.txt" I get this output:

TPT19603 Failed to launch TDCH client. See log TDCH-TPT_log_27911.txt in TPT logs directory for details.

After checking the log I see this error:

 

17/05/15 19:20:46 INFO mapreduce.Job: map 0% reduce 0%
17/05/15 19:20:49 INFO mapreduce.Job: Task Id : attempt_1494363673396_0018_m_000000_0, Status : FAILED
Error: com.teradata.connector.common.exception.ConnectorException: Connection refused (Connection refused)
at com.teradata.connector.idatastream.IDataStreamConnection.connect(IDataStreamConnection.java:65)
at com.teradata.connector.idatastream.IDataStreamInputFormat$IDataStreamRecordReader.initialize(IDataStreamInputFormat.java:183)
at com.teradata.connector.common.ConnectorCombineInputFormat$ConnectorCombinePlugedinInputRecordReader.initialize(ConnectorCombineInputFormat.java:505)
at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:548)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:786)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143

 

In the browser I can see the job get accepted, but after a few minutes it FAILS with this error:

Error: com.teradata.connector.common.exception.ConnectorException: Connection refused (Connection refused)

 

I have already tried the following:

1. The hosts file and hostnames.

2. The firewall (telnet to the port, and also with the firewall service stopped).

3. I can reach the database from the host machine to the virtual machine (I can run queries from DbVisualizer).

4. My Hadoop cluster and HDFS are fine; I can write files and create directories.

5. TPT uses teradata-connector-1.4.4.jar, so I tested it directly, like "hadoop jar teradata-connector-1.4.4.jar params ...", and it works (reads from Teradata and writes to HDFS); see the sketch right after this list.

5.1. I tested the jar from the host machine and it worked, and inside the virtual machine it also worked.

5.2. I thought the connector version was the issue, but no; I downloaded the newest connector and tested it the same way as in 5.1, and it works fine.

6. I also put the jars tdgssconfig.jar, teradata-connector-1.4.4.jar, and terajdbc4.jar in the Hadoop lib directory.
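
For reference, the standalone TDCH test from point 5 looked roughly like this (the IP, credentials, and paths are placeholders from my setup, and the option names may vary slightly between TDCH versions):

    hadoop jar teradata-connector-1.4.4.jar com.teradata.connector.common.tool.ConnectorImportTool \
        -url jdbc:teradata://<teradata_vm_ip>/database=database \
        -username user \
        -password pass \
        -jobtype hdfs \
        -fileformat textfile \
        -separator "|" \
        -nummappers 1 \
        -sourcetable database.table \
        -targetpaths /user/tdatuser/table_tdch_test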

 

Do you have any ideas based on the above?

 

Thanks

  • consumer
  • data stream
  • Dataconnector
  • export
  • HDFS
2 REPLIES

Re: com.teradata.connector.common.exception.ConnectorException: Connection refused

Hi, we are also facing the same issue. Did you find any solution to the problem?

Re: com.teradata.connector.common.exception.ConnectorException: Connection refused

Hi,

 

Not yet. I ended up installing my pseudo-distributed Hadoop 2.7 inside the Teradata Express 16.00 VM itself; with that, I could access HDFS via TPT using the same TPT script.

My theory is that the VM with Teradata has some kind of block on inbound/IPC connections, or something similar; a couple of checks for that are sketched below.
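
If you want to test that theory, something like the following should show whether the VM is refusing the connection back from the mappers (the port and process name are placeholders; I have not confirmed which port the TPT data stream listens on):

    # On the Teradata Express VM, while the TPT job is running:
    netstat -tlnp      # look for the port the TPT / data stream process is listening on
    iptables -L -n     # look for REJECT/DROP rules that could explain "Connection refused"

    # From the Hadoop node, test that port (replace <vm_ip> and <port>):
    nc -vz <vm_ip> <port>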

Are you using an edge node / Hadoop client to access your cluster?

Can you share your base config (core-site, yarn-site, etc.)?

I hope my config can help you.

 

This is my Hadoop conf:

core-site.xml

   

    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://192.168.1.**bleep**:9000</value>
    </property>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://192.168.1.**bleep**:8020</value>
    </property>

yarn-site.xml

  <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>192.168.1.**bleep**:8032</value>
    </property>

    <property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>false</value>
    </property>

    <property>
        <name>yarn.nodemanager.vmem-pmem-ratio</name>
        <value>4</value>
    </property>

mapred-site.xml

 

  <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobtracker.address</name>
        <value>192.168.1.**bleep**:9001</value>
    </property>

Now let me show you my TPT file:

 

DEFINE JOB TERADATA_TO_HDFS
DESCRIPTION 'exports table tblXXX'
(
        DEFINE SCHEMA myschema(
                NU_AFIL VARCHAR (7),
                 ......
                 ......
         )
        /*PRODUCER*/
        DEFINE OPERATOR READER_TERADATA
        DESCRIPTION '------'
        TYPE EXPORT
        SCHEMA myschema
        ATTRIBUTES(
                VARCHAR TdpId                   = @SourceTdpId,
                VARCHAR UserName                = @SourceUserName,
                VARCHAR UserPassword            = @SourceUserPassword,
                VARCHAR WorkingDatabase         = @SourceWorkingDatabase,
                VARCHAR SelectStmt              = @ExportSelectStmt,
                INTEGER MaxSessions             = @ExportMaxSessions
        );


        /*CONSUMER*/
        DEFINE OPERATOR WRITER_HDFS
        DESCRIPTION '-------'
        TYPE DATACONNECTOR CONSUMER
        SCHEMA *
        ATTRIBUTES(
                VARCHAR HadoopHost              = @FileWriterHadoopHost,
                VARCHAR HadoopJobType           = @FileWriterHadoopJobType,
                VARCHAR HadoopSeparator         = @FileWriterHadoopSeparator,
                VARCHAR HadoopTargetPaths       = @FileWriterHadoopTargetPaths,
                VARCHAR HadoopUser              = @FileWriterHadoopUser,
                INTEGER HadoopNumMappers        = @FileWriterHadoopNumMappers
        );

        APPLY TO OPERATOR (
                WRITER_HDFS[1]
        )
        SELECT * FROM OPERATOR (
                READER_TERADATA[1]
        );

);

And the params file:

 

/*READER*/
SourceTdpId                     = '192.168.1.**bleep**'
SourceUserName                  = 'user'
SourceUserPassword              = 'pass'
SourceWorkingDatabase           = 'database'
ExportSelectStmt                = 'select * from database.table'
ExportMaxSessions               = 8


/*WRITER*/
FileWriterHadoopHost            = '192.168.1.**bleep**'
FileWriterHadoopJobType         = 'hdfs'
FileWriterHadoopSeparator       = '|'
FileWriterHadoopTargetPaths     = 'hdfs://192.168.1.**bleep**/user/tdatuser/table'
FileWriterHadoopUser            = ''
FileWriterHadoopNumMappers      = 1                   
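
For completeness, this is how I launch it and where I look when it fails (the exact location of the TPT logs directory depends on your install):

    tbuild -f test04.tpt -v ParamsTPT.txt
    # on a TPT19603 failure, open the TDCH-TPT_log_<pid>.txt file that the message points to
    # in the TPT logs directory; the mapper stack trace in my first post came from there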

I hope that helps you a bit more. I'll keep searching for the solution; if you find it first, please share.

 

Regards

Eder