I have a requirement where a single database table with 1600 columns, 2 of which are key columns and part of the primary index, is to be split into 2 tables such that:
Each table will have the above 2 key columns in common; one table will have around 80 columns which have been identified as fast changing, and the other will have the remaining 1520 columns identified as slow changing.
The thing is that when the split happens, rows which were not duplicates in the original table become duplicates in the new table.
For example, consider data like the below in the original table, where CustNo, STRT_DT and END_DT form the composite key:
CustNo | Name | Age | Address    | Tel No | STRT_DT     | END_DT
1      | ABC  | 30  | 1st Street | 9999   | 1-Jan-2000  | 31-Dec-2000
1      | ABC  | 30  | 2nd Street | 9999   | 31-Dec-2000 | 31-Dec-9999
Table 1 contains CustNo, Name, Age, STRT_DT, END_DT
Table 2 contains CustNo, Address, Tel No, STRT_DT, END_DT
So Table 1 will look like:
CustNo | Name | Age | STRT_DT     | END_DT
1      | ABC  | 30  | 1-Jan-2000  | 31-Dec-2000
1      | ABC  | 30  | 31-Dec-2000 | 31-Dec-9999
Table 2 will look like:
CustNo | Address    | Tel No | STRT_DT     | END_DT
1      | 1st Street | 9999   | 1-Jan-2000  | 31-Dec-2000
1      | 2nd Street | 9999   | 31-Dec-2000 | 31-Dec-9999
My requirement is that, post split, Table 1 should have a single row with start date 1-Jan-2000 and end date as the high date (31-Dec-9999), since Name and Age did not change across the two periods.
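One way to sketch that collapse, assuming the dates are stored in a comparable format (ISO dates used here for illustration): group on the slow-changing columns and take the earliest start and latest end per group. A minimal SQLite illustration with the sample data above:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE original (
    CustNo INTEGER, Name TEXT, Age INTEGER,
    Address TEXT, TelNo TEXT, STRT_DT TEXT, END_DT TEXT,
    PRIMARY KEY (CustNo, STRT_DT, END_DT)
);
INSERT INTO original VALUES
    (1, 'ABC', 30, '1st Street', '9999', '2000-01-01', '2000-12-31'),
    (1, 'ABC', 30, '2nd Street', '9999', '2000-12-31', '9999-12-31');
""")

# Table 1 (slow changing): collapse rows whose non-key attributes repeat,
# keeping the earliest start and the latest end per attribute combination.
table1 = con.execute("""
    SELECT CustNo, Name, Age, MIN(STRT_DT), MAX(END_DT)
    FROM original
    GROUP BY CustNo, Name, Age
""").fetchall()
print(table1)  # [(1, 'ABC', 30, '2000-01-01', '9999-12-31')]

# Table 2 (fast changing): the address/phone history keeps both rows.
table2 = con.execute("""
    SELECT CustNo, Address, TelNo, STRT_DT, END_DT FROM original
""").fetchall()
print(len(table2))  # 2
```

One caveat: a plain GROUP BY would also merge non-adjacent periods if an old attribute value recurs later, so a real load would need to group only consecutive periods.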
How about just taking MAX(END_DT) OVER (PARTITION BY key_fields ORDER BY some_fields)? In that case, your Table 1 will be thin.
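A sketch of that window-function idea, again using SQLite (window functions need SQLite 3.25+; ISO dates and column names assumed from the example above): stretch each row's dates to the partition minimum/maximum, then deduplicate.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t1 (CustNo INTEGER, Name TEXT, Age INTEGER,
                 STRT_DT TEXT, END_DT TEXT);
INSERT INTO t1 VALUES
    (1, 'ABC', 30, '2000-01-01', '2000-12-31'),
    (1, 'ABC', 30, '2000-12-31', '9999-12-31');
""")

# Replace each row's dates with the partition-wide MIN/MAX; the two rows
# become identical, and DISTINCT collapses them to a single thin row.
rows = con.execute("""
    SELECT DISTINCT CustNo, Name, Age,
           MIN(STRT_DT) OVER (PARTITION BY CustNo, Name, Age) AS STRT_DT,
           MAX(END_DT)  OVER (PARTITION BY CustNo, Name, Age) AS END_DT
    FROM t1
""").fetchall()
print(rows)  # [(1, 'ABC', 30, '2000-01-01', '9999-12-31')]
```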
You can even think of splitting into three tables: any static fields can be placed in a third table, with one key column for reference.
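A rough sketch of that three-way layout (table and column names here are hypothetical, including the static BirthDt field used only for illustration): truly static fields live in a small table keyed by CustNo alone, with no date-history columns at all.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE cust_static (CustNo INTEGER PRIMARY KEY, BirthDt TEXT);
CREATE TABLE cust_slow   (CustNo INTEGER, Name TEXT, Age INTEGER,
                          STRT_DT TEXT, END_DT TEXT,
                          PRIMARY KEY (CustNo, STRT_DT, END_DT));
CREATE TABLE cust_fast   (CustNo INTEGER, Address TEXT, TelNo TEXT,
                          STRT_DT TEXT, END_DT TEXT,
                          PRIMARY KEY (CustNo, STRT_DT, END_DT));
""")
con.execute("INSERT INTO cust_static VALUES (1, '1970-06-15')")

# Rebuilding a full customer row is then a join back through CustNo.
row = con.execute(
    "SELECT CustNo, BirthDt FROM cust_static WHERE CustNo = 1"
).fetchone()
print(row)  # (1, '1970-06-15')
```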