I have a situation wherein I need to compare an attribute that was landed to a table from a mainframe dataset against a target table.
I have a CHAR(10) attribute with a value of 15108 loaded to a stage table as-is. The actual source data is something like 99298 + 4 bytes of low values + a space. This data is landed in the STAGE table unchanged, and the approach is to build SQLs using the STAGE table and compare this value against the target table value at the same key level.
We have a cleansing rule in place that converts these HIGH/LOW values to SPACES. But since the rule is not implemented at the STAGE table level, we see fallout between source and target even though the values are the same. We are not able to convert these LOW values to SPACES at the SQL level. This issue is observed for several attributes, and we are having difficulty validating them. We can compare manually if the fallout is under 10K rows, by copying the rows into Excel. I would like to know if there is a way to convert these high/low values into spaces and then compare with the target.
If the whole value is high or low, I am able to trim it by removing the trailing '00' or 'FF' bytes, but if it is a mix of data and high/low values, I am not sure how to do it. Please let me know how to handle this using SQL.
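One option, assuming Teradata 14 or later (where OTRANSLATE is a built-in function) and hypothetical table/column names, is to map the low-value (x'00') and high-value (x'FF') bytes to spaces directly in the comparison SQL. OTRANSLATE works byte by byte, so it handles a mix of good data and hex bytes anywhere in the string, not just trailing runs:

```sql
-- Sketch only: stage_tbl, target_tbl, key_col and attr_col are
-- hypothetical names. 'hhhh'XC is Teradata's hexadecimal
-- character-literal syntax, so '00FF'XC is a 2-byte string
-- containing x'00' and x'FF'.
SELECT s.key_col,
       s.attr_col AS stage_val,
       t.attr_col AS target_val
FROM   stage_tbl  s
JOIN   target_tbl t
  ON   t.key_col = s.key_col
WHERE  TRIM(OTRANSLATE(s.attr_col, '00FF'XC, '  '))  -- x'00' -> space, x'FF' -> space
   <>  TRIM(t.attr_col);
```

On releases before TD 14 the same effect can be achieved with nested OREPLACE calls, e.g. `OREPLACE(OREPLACE(attr_col, '00'XC, ' '), 'FF'XC, ' ')`, provided those UDFs are installed on your system.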
Thanks in Advance...
Are you doing this comparison as part of a testing activity?
If not, I would recommend filtering the low and high values at the source itself whenever a data transfer across different platforms is involved (say, mainframe to UNIX), i.e., handle the hex values on the mainframe itself before taking the data to Teradata.
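If cleansing on the mainframe is not feasible, you can at least identify the stage rows that still carry stray hex bytes from inside Teradata. A hedged sketch, again with hypothetical table/column names:

```sql
-- Flag stage rows whose attribute contains a low value (x'00') or
-- high value (x'FF') anywhere in the string; CHAR2HEXINT exposes the
-- raw bytes so the fallout can be inspected visually.
SELECT key_col,
       attr_col,
       CHAR2HEXINT(attr_col) AS attr_hex
FROM   stage_tbl
WHERE  POSITION('00'XC IN attr_col) > 0
   OR  POSITION('FF'XC IN attr_col) > 0;
```

A query like this lets you size the problem per attribute before deciding whether to fix the load job or cleanse inline in the comparison SQL.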