Large History Table Design using PPI


Hi All, I am designing an archiving solution for recording 7 years of Call Detail Records. I am thinking of using a single table with a PPI on date, with a 1-month interval. Data will be added to the table one month at a time, and the table will grow by 3 billion records per month: after 1 year there will be 36 billion rows, after 3 years 108 billion, and after 7 years 252 billion. We are using a dedicated single-node Teradata 1555 server with 62 AMPs. The server is used only by a small team of 3 users running about 500 join-free queries a day. It currently runs a daily load process on about 20 tables, including the CDR tables being archived.

Will I have any performance issues with a single PPI table, or would I be better off using individual PPI tables partitioned by date? The latter approach is currently being used for another company's CDR records on the same 1555 server and performs without any issues.
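For reference, a minimal sketch of the single-table design described above might look like this (table name, column names, and the date range are illustrative assumptions, not from an actual schema):

```sql
-- Hypothetical single PPI history table, partitioned by month.
-- All names and the covered date range are assumptions for illustration.
CREATE TABLE cdr_history (
    call_id    DECIMAL(18,0) NOT NULL,
    call_date  DATE          NOT NULL,
    duration   INTEGER,
    caller     VARCHAR(20),
    callee     VARCHAR(20)
)
PRIMARY INDEX (call_id)
PARTITION BY RANGE_N (
    call_date BETWEEN DATE '2011-01-01'
                  AND DATE '2017-12-31'
             EACH INTERVAL '1' MONTH
);
```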

Junior Contributor

Re: Large History Table Design using PPI

Why separate tables?
As long as the optimizer is able to eliminate partitions based on the WHERE condition, there is no reason to create separate tables.

And if the typical date range queried is not exactly a month, it is better to use daily partitions.
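A daily-partition version of the same idea might be sketched as follows (names are again hypothetical). As long as the query constrains the partitioning column directly, the optimizer can scan only the partitions in the requested range:

```sql
-- Same hypothetical table, but with daily partitions.
CREATE TABLE cdr_history (
    call_id    DECIMAL(18,0) NOT NULL,
    call_date  DATE          NOT NULL,
    duration   INTEGER
)
PRIMARY INDEX (call_id)
PARTITION BY RANGE_N (
    call_date BETWEEN DATE '2011-01-01'
                  AND DATE '2017-12-31'
             EACH INTERVAL '1' DAY
);

-- Partition elimination: only the daily partitions covering this
-- date range need to be scanned, regardless of table size.
SELECT COUNT(*)
FROM cdr_history
WHERE call_date BETWEEN DATE '2013-03-15' AND DATE '2013-04-02';
```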


Re: Large History Table Design using PPI

Hi Dieter,

The existing archiving solution uses separate tables, as the data is loaded via a Teradata 5500 machine and then archived to the 1555 server (via backup/restore) after 6 months, one month at a time. This solution has been in place for over 3 years and works well in terms of query performance.

The reason we were thinking of using a 1-month date range instead of daily partitions is that it simplifies dropping data after 7 years via a single partition. I have had my doubts about this, as we could do a MultiLoad delete rather than dropping a monthly partition, and the queries can span any date range, not just month boundaries. My thinking now is to use a daily range, as per your suggestion.
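Even with daily partitions, purging a whole month can still be a partition operation rather than a row-by-row delete. A hedged sketch (the exact ALTER TABLE syntax may vary by Teradata release, and the table name is hypothetical):

```sql
-- Drop the 31 daily partitions covering January 2011 and
-- delete the rows they contain in one operation.
ALTER TABLE cdr_history
MODIFY PRIMARY INDEX
DROP RANGE BETWEEN DATE '2011-01-01' AND DATE '2011-01-31'
           EACH INTERVAL '1' DAY
WITH DELETE;
```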

What I am more concerned about is updating statistics: will that cause any problems on a single PPI table that will contain 36 billion records after 1 year?
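For context, statistics on a PPI table are typically collected on the partitioning column and on the system-derived PARTITION column, along the lines of the following sketch (hypothetical table and column names):

```sql
-- Statistics relevant to partition elimination on a PPI table.
COLLECT STATISTICS ON cdr_history COLUMN PARTITION;
COLLECT STATISTICS ON cdr_history COLUMN (call_date);
```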