I am working on a data set in SQL Assistant that contains Chinese characters, using UTF-16 as the session character set to read them. However, I am facing an issue when creating a subset of the data with a WHERE condition.
For instance, I have a column called market which contains NULLs and the value 'CHINA ' (trimming doesn't help to reduce 'CHINA ' to 'CHINA').
where market='CHINA ' returns 0 entries
where market='CHINA' returns 0 entries
where market is not null gives the correct result
where market like '%CHINA%' gives the correct result
Can anyone tell me whether this is a common issue and whether there is a workaround?
Thanks a lot,
Looks like there's something else in your market column.
Check the actual data with SELECT CHAR2HEXINT(market) ... WHERE market LIKE '%CHINA%'; for 'CHINA' it should return '004300480049004E0041'.
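As a local sanity check (a minimal Python sketch, outside the database), the expected hex for any literal can be computed by encoding it as big-endian UTF-16, which is what CHAR2HEXINT shows for a UNICODE column:

```python
# Each character of a UNICODE column value becomes one big-endian
# UTF-16 code unit, so 'CHINA' should dump as 004300480049004E0041.
hex_china = "CHINA".encode("utf-16-be").hex().upper()
print(hex_china)  # 004300480049004E0041
```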
Thanks for the help. Actually it's 'CN', not 'CHINA':
SELECT CHAR2HEXINT(market) ... where market like '%CN%' returns 0043004E0000
Looks like market is a CHAR(3), but note that the trailing 0000 is a NUL character (U+0000), not a blank (a blank would be 0020). If it were an ordinary trailing blank, both of these conditions should return data, since trailing blanks are ignored in CHAR comparisons:
where market='CN '
where market='CN'
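Decoding the CHAR2HEXINT dump locally makes the trailing character explicit (a minimal Python sketch, independent of the DBMS; the hex string is taken from the query output above):

```python
# Interpret the CHAR2HEXINT dump as big-endian UTF-16.
raw = bytes.fromhex("0043004E0000").decode("utf-16-be")
print(repr(raw))            # 'CN\x00' -- the third character is a NUL, not a space
print(raw == "CN ")         # False: U+0000 is not U+0020
print(raw.strip() == "CN")  # False: stripping whitespace does not remove a NUL
```

This would explain the symptoms: the padding character is U+0000 rather than a blank, so the equality predicates and TRIM fail while LIKE '%CN%' still matches.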
Can you check in DBQL which SQL text was actually sent to the DBMS?
What does that step look like in the Explain?
I did a HELP TABLE ...; here is the output for market:

Decimal Total Digits:      ?
Decimal Fractional Digits: ?
A column contains Chinese characters, and I need to select rows from the table based on those characters. Can someone please help me with how to select this type of data?
I need some clarification of your question. Are you asking how to detect (e.g., with a SQL function/UDF) whether a Unicode character column contains any Chinese Ideographic script characters?
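If that is the intent, one common approach is a code-point range check against the CJK Unified Ideographs block. A minimal Python sketch of the idea (the function name is illustrative, and the check covers only the basic block, not the extension blocks):

```python
def contains_cjk(text: str) -> bool:
    """Illustrative helper: True if any character falls in the CJK
    Unified Ideographs block (U+4E00..U+9FFF), which covers the
    common Chinese characters; extension blocks are omitted here."""
    return any("\u4e00" <= ch <= "\u9fff" for ch in text)

print(contains_cjk("CHINA"))  # False: Latin letters only
print(contains_cjk("中国"))    # True: both characters are ideographs
```

The same range comparison could be expressed server-side, e.g. in a UDF or a predicate on the column's code points, depending on what your DBMS version supports.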