I use pyodbc and Python to connect to Teradata on Linux. I run a simple query: `SELECT COUNT(*) FROM "my_table";` If I run it in Teradata Studio I get the correct result, e.g. "123". With my Python script I get this value instead: "140376711102716". When I rerun the script I get a different "bigint" (or a timestamp, or I don't know what this value is...). I use the `ANSI=True` flag in my connection string. I don't use the `CHARSET=UTF8` flag, because if I do I still get a "bigint". But when the query result should also contain strings, with `CHARSET=UTF8` I get `u'\u3152\u5453'` instead of the correct string, which is "STR1".
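For reference, the script is essentially the following sketch (the driver name, host, credentials, and table are placeholders, not my real values):

```python
# Minimal sketch of the failing script; DSN/host/credentials are placeholders.
conn_str = (
    "DRIVER={Teradata};"
    "DBCNAME=my_host;"   # placeholder hostname
    "UID=my_user;"
    "PWD=my_pass;"
    "ANSI=True;"         # the flag mentioned above
    # "CHARSET=UTF8;"    # enabling this changes the symptom, not the cause
)

def run_count(connect=None):
    """Run the COUNT(*) query; `connect` defaults to pyodbc.connect."""
    if connect is None:
        import pyodbc  # imported lazily so the sketch parses without the driver
        connect = pyodbc.connect
    conn = connect(conn_str)
    cur = conn.cursor()
    cur.execute('SELECT COUNT(*) FROM "my_table"')
    return cur.fetchone()[0]
```

On Windows this prints the expected row count; on Linux it prints the garbage value described above.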
So this issue occurs with numeric column types and numeric values, but only on Linux. On Windows everything works perfectly with the same query, the same database, etc.
Does anybody have any idea what is causing this?
Edit: if I run `SELECT COUNT(*) FROM "my_table";` the same issue occurs: the result is not the row count (e.g. "456") but something like "140376711129873". So I think the problem is not the column data types but the "conversion" of numeric values between Teradata and Python...