I am using a CSV file (saved from Excel in CSV format) to load a table. I have a numeric field with a thousands separator. When I open the CSV file, the numeric field shows up in double quotes. I have written a BTEQ script to load the table using this CSV file, but when I run the script it throws an error.
However, if I remove the thousands separator, it works. Is there any way to make the BTEQ script work with the thousands separator? Thanks in advance for all the help.
*** Failure 2673 The source parcel length does not match data that was defi
Statement# 1, Info =2
*** A non-retryable error occurred. The repeat was stopped.
A ,B ,C ,D ,E ,F ,G
3,1001,2000,10 ,"52,915,745",2012HO,Deadline 1
3,1001,2000,20 ,"329,579,468",2012HO,Deadline 1
3,1001,2000,30 ,"1,525,491",2012HO,Deadline 1
How do I load data from a CSV file into a table using a BTEQ script, when the data file has a numeric field whose values contain thousands separators?
If you save the Excel file as tab-delimited text, it should avoid the quotes.
You can then specify the format in FastLoad as 'zzz,zzz,zzz,zz9' to get it to accept the commas.
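For instance, once the file is tab-delimited, the BTEQ import could parse the formatted value with Teradata's FORMAT phrase in the CAST. This is an untested sketch: the table name, file name, and column lengths are placeholders, and the delimiter between the quotes in the .IMPORT line is a literal tab character.

```
.IMPORT VARTEXT '	' FILE = mydata.txt;
.REPEAT *
USING (A VARCHAR(5), B VARCHAR(10), C VARCHAR(10), D VARCHAR(5),
       E VARCHAR(20), F VARCHAR(10), G VARCHAR(20))
INSERT INTO mytable (A, B, C, D, E, F, G)
VALUES (:A, :B, :C, :D,
        CAST(:E AS DECIMAL(12,0) FORMAT 'ZZZ,ZZZ,ZZ9'),
        :F, :G);
```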
Jimm, thank you for your reply.
I understand that we can solve this issue by saving the file in a different format. But I am curious to know whether it is possible in the BTEQ script (using ',' as the delimiter in the IMPORT command) to load values with thousands separators as single values from a CSV file. It would also have to ignore the double quotes that the CSV format puts around values containing thousands separators.
The Teradata load utilities don't recognise Microsoft's convention that anything within double quotes should be ignored when evaluating control-type characters. (TPT may be different; I have not used it.)
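Because the load utilities split on every comma regardless of quoting, one common workaround is to pre-process the file before BTEQ sees it: parse it with a CSV-aware reader (which does honour the quotes), strip the thousands separators from numeric fields, and write the rows back out with a delimiter that never appears in the data. A minimal Python sketch, assuming a pipe delimiter and the function name as illustrative choices (not from this thread):

```python
import csv
import io

def strip_thousand_separators(infile, outfile, delimiter="|"):
    """Read a quoted CSV (as Excel writes it) and emit a clean
    delimited file with commas removed from numeric fields, so a
    BTEQ VARTEXT import sees exactly one value per column."""
    reader = csv.reader(infile)  # the csv module honours the double quotes
    writer = csv.writer(outfile, delimiter=delimiter, quoting=csv.QUOTE_NONE)
    for row in reader:
        writer.writerow(
            # drop commas only from fields that are purely numeric
            [f.replace(",", "") if f.replace(",", "").replace(".", "").isdigit()
             else f
             for f in row]
        )

# Example using one of the sample rows from this thread
raw = '3,1001,2000,10 ,"52,915,745",2012HO,Deadline 1\n'
out = io.StringIO()
strip_thousand_separators(io.StringIO(raw), out)
print(out.getvalue().strip())
# -> 3|1001|2000|10 |52915745|2012HO|Deadline 1
```

The BTEQ script would then use `.IMPORT VARTEXT '|'` against the cleaned file, and no FORMAT tricks are needed because the commas are already gone.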