I need to find out which columns contain characters from languages other than English (i.e. Unicode columns that actually hold non-Latin data — not simply columns where dbc.columns.CharType = 2). Teradata version: 13.10
e.g.: tablea => cola => unicode => the data in it is in Chinese
tableb => colb => unicode => the data in it is in English
Expected output: tablea => cola
select translate_check('布第二张专辑' using unicode_to_latin )
The above query fails with the error "6706 - String contains an untranslatable character". I have also tried the WITH ERROR option.
Is there a function that returns 0 or 1 based on the data in the column, so that I could find all columns which fail? (I guess UNICODE_TO_LATIN fails whenever there are characters other than English ones — please correct me if I am wrong.)
If the above one works, then my query would be
sel tablename, columnname , translate_check ( a.columnname using unicode_to_latin) as trns_result from dbc.columns a where databasename='RETAIL' and tablename like '%ITEMPPI%';
I tried to use the above SQL and it is failing with the error "3706 - Datatype columnname does not match a defined type name"
Can you please help me find a solution to the above two issues?
You get the 3706 error because the function name is TRANSLATE_CHK, not translate_check. But even with the correct name, your query against dbc.columns would check the literal name of the column, not the contents of the table it belongs to — so it won't do what you want.
You would have to query each table individually (and potentially scan all rows and aggregate the results). Also note that TRANSLATE_CHK returns zero for many non-English accented Latin characters and some symbols, and returns non-zero for many symbols that are not Chinese (the single-character TM symbol, for example), so "untranslatable to LATIN" is only an approximation of "non-English".
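As a sketch of what the per-table check could look like: TRANSLATE_CHK returns 0 when a value translates cleanly and a non-zero code otherwise, so counting the non-zero rows flags a column. The table and column names below (ITEMPPI_SALES, item_desc) are hypothetical placeholders; only the RETAIL database comes from your question.

-- Hypothetical table/column; run one such query per character column.
SELECT COUNT(*) AS untranslatable_rows
FROM RETAIL.ITEMPPI_SALES
WHERE TRANSLATE_CHK(item_desc USING UNICODE_TO_LATIN) <> 0;

You could also use dbc.columns the other way around — not to run the check, but to generate one such query per Unicode column, then execute the generated statements:

-- Emits one check query per Unicode column in RETAIL (CharType = 2).
SELECT 'SELECT ''' || TRIM(tablename) || '.' || TRIM(columnname) ||
       ''', COUNT(*) FROM RETAIL.' || TRIM(tablename) ||
       ' WHERE TRANSLATE_CHK(' || TRIM(columnname) ||
       ' USING UNICODE_TO_LATIN) <> 0;' (VARCHAR(500)) AS gen_sql
FROM dbc.columns
WHERE databasename = 'RETAIL'
  AND chartype = 2;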