BUG! i think

Connectivity
Enthusiast

Using CLIv2 on a channel-attached IBM mainframe.

I am trying to load packed decimal fields into Teradata, and I think there is a bug in the way CLI converts packed fields into Teradata format. Basically I have a field defined as a BIGINT in TD (which gets stored as DECIMAL(18,0) for some reason). I have a 6-byte packed decimal field, which can hold at most an 11-digit number, and I have set it to the 10-digit value 9999999999. I use the following Teradata statement to load the field:
USING x1 (DECIMAL(10))
INSERT INTO xxxxx.AAATEST1
(VAL1,VAL2,VAL3)
VALUES ('09','99',:x1);
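
For reference, here is a minimal sketch of what I believe the 6-byte field looks like on the client side, assuming standard COMP-3 packing (eleven digit nibbles, left-padded with one zero, plus a sign nibble). The byte values are my own illustration, not a dump from the actual job:

#include <stdio.h>

int main(void)
{
    /* 9999999999 as a 6-byte packed (COMP-3) field: 11 digit nibbles,   */
    /* left-padded with one zero, then a sign nibble (0xC = positive).   */
    const unsigned char x1[6] = { 0x09, 0x99, 0x99, 0x99, 0x99, 0x9C };
    for (int i = 0; i < 6; i++)
        printf("%02X ", x1[i]);          /* prints: 09 99 99 99 99 9C */
    printf("\n");
    return 0;
}
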
The resulting value of the Teradata field is not 9999999999 but 1410065407. That is exactly what you get when a 32-bit integer overflows: 9999999999 modulo 2^32 is 1410065407. You can verify it with this C snippet:
int val = (int)9999999999LL;     /* conversion typically keeps only the low 32 bits */
printf("Overflow? %d\n", val);   /* prints 1410065407 */

It seems that CLI is converting packed numbers into 32-bit integers before loading them into Teradata, and that obviously overflows whenever the value is greater than 2147483647 (the maximum for a signed 32-bit integer). The Teradata CLI documentation states it can handle packed fields up to 8 bytes long (which would correspond to 15-digit numbers), but that would not be the case if what I suspect is correct.
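
To illustrate, here is a rough sketch of the kind of conversion I suspect is happening; the function name and layout are my own invention, not CLI's code. With a 64-bit accumulator the value survives, but squeeze the result through 32 bits and you get exactly 1410065407:

#include <stdio.h>
#include <stdint.h>

/* Hypothetical packed (COMP-3) to binary conversion, for illustration only. */
static int64_t unpack_comp3(const unsigned char *p, int len)
{
    int64_t v = 0;                        /* 64-bit accumulator */
    for (int i = 0; i < len; i++) {
        v = v * 10 + (p[i] >> 4);         /* high nibble */
        if (i < len - 1)
            v = v * 10 + (p[i] & 0x0F);   /* low nibble (the last one is the sign) */
    }
    return ((p[len - 1] & 0x0F) == 0x0D) ? -v : v;   /* 0xD = negative */
}

int main(void)
{
    const unsigned char x1[6] = { 0x09, 0x99, 0x99, 0x99, 0x99, 0x9C };
    int64_t full = unpack_comp3(x1, 6);
    int32_t narrowed = (int32_t)full;     /* what a 32-bit conversion keeps */
    printf("64-bit: %lld  32-bit: %d\n", (long long)full, (int)narrowed);
    /* prints: 64-bit: 9999999999  32-bit: 1410065407 */
    return 0;
}
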

Could someone confirm this is a bug... and is there a workaround?

Thanks
Mick

4 REPLIES
Teradata Employee

Re: BUG! i think

I have no experience on the mainframe, but I know that CLI simply passes the Data (or IndicData) parcel directly to the Teradata database.

I suggest retrieving the number from Teradata so you can compare the external representation of DECIMAL(10) in the Record parcel with the value in the Data/IndicData parcel, for example "Select Cast(9999999999 as Decimal(10))".
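
If I remember right, a DECIMAL with precision 10 through 18 travels in a Record Mode row as an 8-byte scaled integer, so a sketch like the one below (my own illustration, byte order shown big-endian as on the mainframe) gives you the byte patterns to look for when you dump the response row:

#include <stdio.h>
#include <stdint.h>

/* Print the big-endian 8-byte image of a value, to compare against a dump
   of the DECIMAL(10) column in the Record Mode response row.             */
static void show(const char *label, uint64_t v)
{
    printf("%-10s", label);
    for (int shift = 56; shift >= 0; shift -= 8)
        printf("%02X ", (unsigned)((v >> shift) & 0xFF));
    printf("\n");
}

int main(void)
{
    show("expected:", 9999999999ULL);   /* 00 00 00 02 54 0B E3 FF */
    show("seen:",     1410065407ULL);   /* 00 00 00 00 54 0B E3 FF */
    return 0;
}
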
Enthusiast

Re: BUG! i think

We are getting the mainframe CLI client software upgraded from 8.1 to 8.2, and I noticed the 8.2 release documentation mentions something about extending DECIMAL precision to 31. Let's hope that fixes it...
Teradata Employee

Re: BUG! i think

V2R6.2 supports large decimals up to 38 digits of precision; V2R6.1 supports decimals up to 18 digits. I don't think the upgrade will have any impact on DECIMAL(10).

Are you using Record Mode or Indicator Mode? I have run into cases where the IndicData parcel did not have the extra byte for the NULL indicator(s). That caused what looked like data corruption on the DBS side.
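
As a rough sketch of what I mean, assuming your USING row carries only the one field :x1 and using made-up byte values: in Indicator Mode the row data has to be preceded by one indicator byte per group of up to 8 fields, with the leftmost bit standing for the first field. Leave that byte out and every field shifts, which looks like corruption on the DBS side:

#include <stdio.h>
#include <string.h>

int main(void)
{
    const unsigned char packed[6] = { 0x09, 0x99, 0x99, 0x99, 0x99, 0x9C };

    /* Record Mode (Data parcel): just the field itself. */
    unsigned char data_row[6];
    memcpy(data_row, packed, 6);

    /* Indicator Mode (IndicData parcel): one leading indicator byte for
       up to 8 fields; 0x00 here because :x1 is not NULL.               */
    unsigned char indic_row[7];
    indic_row[0] = 0x00;
    memcpy(indic_row + 1, packed, 6);

    printf("Data parcel row:      ");
    for (int i = 0; i < 6; i++)
        printf("%02X ", data_row[i]);    /* 09 99 99 99 99 9C    */
    printf("\nIndicData parcel row: ");
    for (int i = 0; i < 7; i++)
        printf("%02X ", indic_row[i]);   /* 00 09 99 99 99 99 9C */
    printf("\n");
    return 0;
}
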
Enthusiast

Re: BUG! i think

Don't know; whatever default gets loaded by DBCHINI. Also, I'm quite certain it's not the version of the database that is the issue... I have already confirmed it can handle large decimal fields (say, when loaded from a character or zoned field). The issue is with the client CLI software, which is converting the value to a signed integer before sending it to the database. If the upgrade fails to make a difference, I will investigate the record/indicator mode setting and see if it makes any difference.

Cheers