I am working on a project where we are migrating from the mainframe to Clerity (Linux) servers. I have a FastLoad job running on the mainframe that pulls from a DB2 unload file and loads into a Teradata table. There are 5 columns, one of which is a DECIMAL(11,2) column.
The issue is that when I read from the file on the mainframe, the total record length comes to 26 bytes (6 bytes for the decimal column). When the same FastLoad is run on Clerity with the same input data, the record length comes to 28 bytes (8 bytes for the decimal column). Is there a way to have the decimal column interpreted the same way on both sides?
Here is what my definition looks like:
COL1 OFFSET = 0 LEN = 4 INTEGER
COL2 OFFSET = 4 LEN = 4 CHAR
COL3 OFFSET = 8 LEN = 4 INTEGER
COL4 OFFSET = 12 LEN = 4 INTEGER
COL5 OFFSET = 16 LEN = 8 DECIMAL --- same column comes to 6 bytes on the mainframe
COL6 OFFSET = 24 LEN = 4 INTEGER
TOTAL RECORD LENGTH = 28
Any pointers are highly appreciated.
On the mainframe, the DB2 unload stores the decimal column in packed decimal (COMP-3) format: two digits per byte plus a sign nibble, so a DECIMAL(11,2) occupies ceil((11 + 1) / 2) = 6 bytes.
On non-mainframe platforms, the data is most likely expected in scaled integer format (Teradata's internal representation), where a decimal with precision 10-18 is stored as an 8-byte integer.
That accounts for the 2-byte difference you are seeing.
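To make the arithmetic concrete, here is a small Python sketch of the two storage-size rules (packed decimal size per the COMP-3 convention, and Teradata's internal decimal sizes by precision range). The function names are just illustrative, not part of any tool:

```python
import math

def packed_decimal_bytes(precision):
    # COMP-3 packed decimal: two digits per byte, plus one sign nibble,
    # rounded up to a whole byte.
    return math.ceil((precision + 1) / 2)

def teradata_decimal_bytes(precision):
    # Teradata internal (scaled integer) storage, chosen by precision range:
    # 1-2 digits -> 1 byte, 3-4 -> 2, 5-9 -> 4, 10-18 -> 8, 19-38 -> 16.
    if precision <= 2:
        return 1
    if precision <= 4:
        return 2
    if precision <= 9:
        return 4
    if precision <= 18:
        return 8
    return 16

# DECIMAL(11,2): 6 bytes packed on the mainframe, 8 bytes internal on Linux.
print(packed_decimal_bytes(11))    # -> 6
print(teradata_decimal_bytes(11))  # -> 8
```

That 6-vs-8 gap for COL5 is exactly the 26-vs-28 byte difference in the total record length.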