My scenario is this:
I have two tables, Tab1 and Tab2. Tab1 must not hold more than 5 million rows; once it grows beyond that, the oldest data should be moved to Tab2.
How can I do this in Teradata?
Can you please elaborate on the scenario? You have two tables and Tab1 can hold only 5 million rows? And what is the benchmark for "older" data?
Also, what is the source? Is there a flag that marks the older rows?
Yes, 5 million is the volume constraint.
Or put another way: I don't want any data older than one year in Tab1. It should be moved to Tab2 (which is really a history table by nature).
I think you may need to have a look at the temporal features of Teradata (valid-time, transaction-time, and bitemporal tables) and then use a script to purge data to Tab2.
These features add a time dimension to the table.
Temporal will not help in this case: deleting rows does not remove them from the table, it only marks them as closed.
How do you determine which rows should be moved? If you need exactly 5,000,000 rows you might try to put that logic in a statement trigger, but I don't know whether that is actually possible or performant.
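If an age-based rule is acceptable instead of an exact row count, the move can be done with a plain INSERT/SELECT followed by a DELETE. This is only a sketch; the column name entry_dt is an assumption, since the thread never names the date column:

```sql
-- Hypothetical sketch: move rows older than one year from Tab1 to Tab2.
-- entry_dt is an assumed column name; adjust to your actual schema.
INSERT INTO Tab2
SELECT *
FROM Tab1
WHERE entry_dt < ADD_MONTHS(CURRENT_DATE, -12);

DELETE FROM Tab1
WHERE entry_dt < ADD_MONTHS(CURRENT_DATE, -12);
```

Note that on a large unpartitioned table the DELETE is a full-table scan, which is why the partitioning approach below is usually preferred.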
If it's just one year, simply add partitioning by day covering the next few years and run a daily job that drops the oldest partition, e.g. using WHERE PARTITION = 1 or DROP RANGE together with the WITH INSERT INTO option, so the dropped rows are copied to Tab2.
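A minimal sketch of that approach, assuming a date column entry_dt and illustrative date bounds (both are assumptions, not from the thread):

```sql
-- Sketch: Tab1 partitioned by day so old data can be dropped cheaply.
CREATE TABLE Tab1 (
  id       INTEGER,
  entry_dt DATE
)
PRIMARY INDEX (id)
PARTITION BY RANGE_N (
  entry_dt BETWEEN DATE '2015-01-01' AND DATE '2020-12-31' EACH INTERVAL '1' DAY
);

-- Daily job: drop all partitions older than one year and copy
-- the dropped rows into the history table in the same statement.
ALTER TABLE Tab1
MODIFY PRIMARY INDEX
DROP RANGE WHERE entry_dt < ADD_MONTHS(CURRENT_DATE, -12)
WITH INSERT INTO Tab2;
```

Dropping a partition this way avoids the transient journal overhead of a row-by-row DELETE, which matters at the 5-million-row scale mentioned above.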