We are getting erroneous output from the UDF below.
void myudf(int *a, float *result, char sqlstate[6])
{
    *result = 1 / (1 + 0.2316419);
}
REPLACE FUNCTION myudf(a INTEGER)
RETURNS DECIMAL(38,9)
LANGUAGE C
NO SQL
EXTERNAL NAME 'CS!myudf!<path>/myudf.c'
PARAMETER STYLE TD_GENERAL;
UDF output: 1.062197830
Expected output, 1 / (1 + 0.2316419): 0.8119243
Could you please help us understand why the result returned by the UDF differs from the expected result?
I'm not a C programmer, but if you define the result data type as FLOAT in the C code and as DECIMAL(38,9) in the CREATE FUNCTION, the two will never match.
The expected output from the UDF is always a decimal, so I am not sure which data type should be used in the C UDF.
Any help on this is highly appreciated! Thanks!
As detailed in the SQL External Routine Programming manual, a DECIMAL(38,6) maps to a "DECIMAL16" structure:

unsigned int int1;
unsigned int int2;
unsigned int int3;
unsigned int int4;

where the four fields together are treated as a single 128-bit signed integer value with the 6 decimal places implied (i.e., the integer value is the decimal value multiplied by 1,000,000).
It may be easier to use FLOAT, or to write the UDF in Java where DECIMAL maps to BigDecimal.
Also, note that you should not be modifying the input parameter a... but that's a different issue.