I am trying to export data from a Hive Parquet table (both partitioned and non-partitioned) into Teradata.
I can successfully export from a Textfile Hive table, but when I run an export from a Parquet table, it fails with the following error:
"tool.ConnectorExportTool: com.teradata.connector.common.exception.ConnectorException: java.lang.IllegalArgumentException: The value of property tdch.input.hive.fields.separator must not be null"
The error is misleading, since Parquet has no field separator and we never specify one on the Teradata side either.
I would appreciate any insight into what I am missing. I read somewhere that DECIMAL columns require special conversion, so as a test I created the table with all VARCHAR columns (except two columns that are BIGINT).
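For reference, this is roughly how I confirm the table's storage format and column types before exporting (the database and table names are placeholders, as in the command below):

```shell
# Inspect the Hive table definition to confirm it is stored as Parquet
# and to double-check the column types. <db name> and <tbl_nm> are
# placeholders for the actual source database and table.
hive -e "SHOW CREATE TABLE <db name>.<tbl_nm>;"

# The STORED AS / SerDe portion of the output should reference Parquet,
# confirming there is no textfile field delimiter involved.
```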
My command looks like this:
hadoop jar $USERLIBTDCH com.teradata.connector.common.tool.ConnectorExportTool \
-libjars $LIB_JARS \
-D mapreduce.job.queuename=<queue_name> \
-classname com.teradata.jdbc.TeraDriver \
-url jdbc:teradata://<server>/database=<db> -username <user> \
-password <password> \
-jobtype hive \
-fileformat parquet \
-hiveconf file:///etc/hive/conf/hive-site.xml \
-sourcedatabase <db name> \
-sourcetable <tbl_nm> \
-sourcefieldnames "<field names>" \
-nummappers 1 \
-targettable <tbl_name> \
-targetfieldnames "<same column list>" \
-method batch.insert <----- tried both batch.insert and internal.fastload
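One variant I have not verified, but which seems worth trying since the error names the property directly: setting tdch.input.hive.fields.separator explicitly through the generic Hadoop -D mechanism, the same one the command already uses for the queue name. Whether TDCH honors this property for Parquet input is an assumption on my part, and the separator value itself is arbitrary here:

```shell
# Untested sketch: pass the property named in the error message explicitly,
# alongside the existing -D queue option. Angle-bracket values are
# placeholders, as in the original command; the remaining options from
# the original command would stay unchanged.
hadoop jar $USERLIBTDCH com.teradata.connector.common.tool.ConnectorExportTool \
-libjars $LIB_JARS \
-D mapreduce.job.queuename=<queue_name> \
-D tdch.input.hive.fields.separator=',' \
...
```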
Hey, did this issue get solved? I am getting the same error, so please let me know if you fixed it and what the steps were.