We enabled logging on our Teradata system, and the size of DBQLogTbl is now increasing rapidly, so I want to delete data from DBQLogTbl. But before doing that, do we need to consider any other tables that are related to this table?
Also, what is the best practice for managing DBQLogTbl when logging is enabled?
There is no table 'related' to this table per se; however, depending on what level of logging you have enabled, you may have (or want) to copy/purge the other DBQL tables as well.
A general practice is to have a daily job that copies all the data from this table as-is to a 'staging' or 'temporary' table, deletes the data from the table in DBC, and then copies the staging table into a history table.
The reason for using a staging table is to reduce the time the DBC tables are kept locked. Furthermore, the history table can then have additional constructs, such as a PPI or secondary indexes, for ease of querying.
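The daily copy/purge described above might be sketched roughly as follows. This is only a sketch: the database names DBQL_Stg and DBQL_Hist and the staging/history table names are hypothetical placeholders, and on an active system you would normally flush the DBQL caches before deleting from the DBC table.

```sql
/* Rough sketch of the daily job; DBQL_Stg and DBQL_Hist are
   hypothetical names -- substitute your own. */

/* 1. Copy the logged rows out of DBC into a staging table. */
INSERT INTO DBQL_Stg.DBQLogTbl_Stg
SELECT * FROM DBC.DBQLogTbl;

/* 2. Empty the DBC table quickly to minimize the time it is locked. */
DELETE FROM DBC.DBQLogTbl ALL;

/* 3. Move the staged rows into the history table at leisure;
      the history table can carry a PPI / secondary indexes. */
INSERT INTO DBQL_Hist.DBQLogTbl_Hst
SELECT * FROM DBQL_Stg.DBQLogTbl_Stg;

/* 4. Clear the staging table for the next run. */
DELETE FROM DBQL_Stg.DBQLogTbl_Stg ALL;
```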
You might also check whether some ETL process is loading a very large table via ODBC. That would cause your DBQL tables to grow rapidly, since every insert into the target table would also be logged as a row in the DBQL table.
We have a process that moves data from the DBC DBQL tables to DBQL history tables, but of late we are facing a space crunch on these history databases as well.
There is also a process that deletes data from these history (PDCR) tables; e.g., the SQL table has a 3-month purging policy, while the log and object tables are kept for a year.
My questions are:
1. Where do I go to validate/get details of this purging process?
2. Is this done automatically by Teradata routines, or is it specific to a shop (using their own routines)?
Purging the DBQL tables to history is a very common task. How often you run the script and how much history you retain generally depends on a combination of how busy your system is and what level of logging you've enabled.
If you're going to engage Teradata in a PDCR engagement, they like to have a good sample of DBQL data at their fingertips, at least four weeks' worth.
Some shops keep a rolling 13 months' worth of data.
It's generally up to the DBAs to schedule or customize the basic script, which you should be able to find, along with the necessary DDL, at the following link on this site: http://d289lrf5tw1zls.cloudfront.net/database/dbql-scripts/TD13_0_and_TD13_10_DBQL_logging_and_maint...
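On the history side, the retention policy you describe usually boils down to date-predicated deletes run on a schedule. A hedged sketch, assuming hypothetical history table names under a PDCRDATA database and a LogDate column (check your own shop's DDL for the actual names and columns):

```sql
/* Purge SQL-text history older than 3 months. */
DELETE FROM PDCRDATA.DBQLSqlTbl_Hst
WHERE LogDate < ADD_MONTHS(CURRENT_DATE, -3);

/* Purge main log and object history older than a year. */
DELETE FROM PDCRDATA.DBQLogTbl_Hst
WHERE LogDate < ADD_MONTHS(CURRENT_DATE, -12);

DELETE FROM PDCRDATA.DBQLObjTbl_Hst
WHERE LogDate < ADD_MONTHS(CURRENT_DATE, -12);
```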