Currently fastloading into an empty stage table from an SAP extractor. One of the columns is defined as UNICODE in our stage table. We are loading through Data Services, specifying the source extractor data as UTF-8. This appears to be working most of the time: the Unicode characters are converting over to UTF-16 on Teradata just fine. This one particular field, in this one particular table, however, is NOT converting. Three records are being kicked out into the ET error table due to the unconverted 'junk' data. I have read that there are instances where a UTF-8 to UTF-16 conversion will not occur on Teradata (anyone run into this?). I suspect this is what is occurring during this load. If that is truly the case, can I do anything to prevent the entire load from failing due to these three records kicking out into the ET table as a result of the conversion issue? Would it be best to try an ODBC load instead? This is our first exposure to both SAP and Unicode data, so any help would be greatly appreciated.
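One way to narrow down which source records are the problem is to pre-scan the extract file before the load. The sketch below is a hypothetical helper (the function name and file handling are my own, not anything from Data Services or Teradata): it flags lines that are not valid UTF-8 at all, and lines containing code points outside the Basic Multilingual Plane, which some Teradata releases cannot store in a UNICODE column and which commonly surface as ET-table rejects.

```python
import sys

def find_suspect_rows(path):
    """Scan a UTF-8 extract file and report lines that may fail
    a UTF-8 -> UTF-16 conversion on load.

    Returns a list of (line_number, reason) tuples."""
    suspects = []
    with open(path, "rb") as f:
        for lineno, raw in enumerate(f, start=1):
            try:
                text = raw.decode("utf-8")
            except UnicodeDecodeError as e:
                # Bytes that are not legal UTF-8 at all ('junk' data).
                suspects.append((lineno, f"invalid UTF-8 at byte {e.start}"))
                continue
            # Code points above U+FFFF (outside the BMP) may be rejected
            # by older Teradata UNICODE (UCS-2-style) columns.
            non_bmp = [c for c in text if ord(c) > 0xFFFF]
            if non_bmp:
                suspects.append((lineno, f"non-BMP chars: {non_bmp!r}"))
    return suspects

if __name__ == "__main__" and len(sys.argv) > 1:
    for lineno, reason in find_suspect_rows(sys.argv[1]):
        print(lineno, reason)
```

Running this against the extract should tell you whether the three rejects are genuinely malformed bytes (an SAP-side extraction issue) or legal-but-unsupported characters (a Teradata character-set limitation), which determines whether changing the load path would help at all.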
Joe, did you solve this? I am on the other side of this equation, managing the same challenges loading Unicode data into Teradata from SAP. I understand SAP very well and worked on Unicode conversions a few years back, so I may be able to help you with the SAP side if you still have questions. I am relatively new to Teradata.
I'm interested in this as well.
I have a different problem though.
I get characters loaded into Teradata that are not valid Unicode values, and it causes problems with our reporting.
Specifically, whenever we try to filter on these columns we get a Teradata error about an untranslatable character.
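The untranslatable-character error typically fires when a UNICODE value is implicitly translated to a narrower character set (e.g. LATIN) during a comparison. A quick way to find the offending values client-side is to check, in whatever script pulls or stages the data, which characters a Latin-style encoding cannot represent. This is a rough sketch only: I'm assuming Teradata's LATIN maps approximately to Latin-1, and the function names are my own.

```python
def untranslatable_chars(value, target="latin-1"):
    """Return the characters in value that the target encoding
    cannot represent (the likely triggers of the Teradata error)."""
    bad = []
    for ch in value:
        try:
            ch.encode(target)
        except UnicodeEncodeError:
            bad.append(ch)
    return bad

def sanitize(value, target="latin-1", replacement="?"):
    """Replace untranslatable characters with a placeholder so that
    filters/comparisons no longer blow up on them."""
    out = []
    for ch in value:
        try:
            ch.encode(target)
            out.append(ch)
        except UnicodeEncodeError:
            out.append(replacement)
    return "".join(out)
```

For example, `untranslatable_chars("café")` comes back empty (Latin-1 covers it), while a snowman or emoji character would be flagged. Whether you sanitize on the way in or just use this to locate the bad rows for cleanup is a judgment call.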