I am working on an implementation that tells us how indexes are actually being utilized in real-world use. It has two parts.
1) Identify all columns that are regularly used in join processing (found by scanning the DBQL tables) but have no index defined on them. The idea behind this is to put an appropriate index on the right column.
I am thinking of implementing this in the steps below.
a) Get the DBQL data into a flat file on the Unix box (data scanning is much easier in a shell script than in TD).
b) Get the DB name, table name and (non-indexed only) column name, along with the join condition, into a separate file.
c) Dump this result set back into Teradata.
d) In the target table, each combination of DB, table, column and join condition can appear as multiple entries with a deleted flag; another column, Weight, is incremented by one every time the combination reappears, to show how strongly the column needs an index.
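A minimal sketch of step b), with some assumptions: the DBQL export holds one SQL statement per line, and the DB name would come separately from the DBQL object table (here I only emit table, column and join condition, pipe-delimited, ready to load in step c). The function name and file layout are my own, not a fixed design.

```shell
#!/bin/sh
# Sketch: pull predicates of the form  tab.col = tab.col  out of the
# exported query text and emit "TABLE|COLUMN|JOIN_CONDITION" rows.
# Assumption: one SQL statement per line in the flat file ($1).
extract_joins() {
  grep -oEi '[A-Za-z0-9_]+\.[A-Za-z0-9_]+ *= *[A-Za-z0-9_]+\.[A-Za-z0-9_]+' "$1" |
  tr -d ' ' |
  # Only the left-hand side of the predicate is emitted here; a second
  # printf with $3/$4 would capture the right-hand table/column too.
  awk -F'[.=]' '{ printf "%s|%s|%s\n", $1, $2, $0 }'
}
```

Real query text spans multiple lines and uses aliases, so in practice the extraction would need more normalisation than this; the sketch just shows the shape of the shell pass.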
For example, the table contents would look like this:
Entry on 10/18/2010
DATE       DBNAME TABLENAME COLNAME JOIN_CONDITION       Weight DELFLAG
10/18/2010 DB1    TAB1      COL1    TAB1.COL1=TAB2.COL1  1      N
Entry on 10/19/2010
DATE       DBNAME TABLENAME COLNAME JOIN_CONDITION       Weight DELFLAG
10/18/2010 DB1    TAB1      COL1    TAB1.COL1=TAB2.COL1  1      Y
10/19/2010 DB1    TAB1      COL1    TAB1.COL1=TAB2.COL1  1      N
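The weight logic above can also be sketched outside the database, assuming the daily "DB|TABLE|COLUMN|JOIN_CONDITION" files from step b) are concatenated: each repeat of the same key bumps its weight by one, mirroring the increment behaviour described for the target table. The function name is mine.

```shell
#!/bin/sh
# Sketch: count how many times each candidate (DB|TABLE|COLUMN|JOIN)
# line recurs across the daily extract files passed as arguments, and
# emit "key|weight" rows. Assumption: keys contain no whitespace.
weigh_candidates() {
  sort "$@" | uniq -c |
  awk '{ printf "%s|%d\n", $2, $1 }'   # $1 = count from uniq -c, $2 = key
}
```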
2) Identify indexes that exist but are not being used in data processing (dead indexes, so to speak).
For step 2) I am thinking along the same lines as point 1, except that rather than incrementing the weight I will decrement it by 1 each time.
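One way to sketch part 2 in the same shell pass, under assumed file formats of my own choosing: start from the list of defined indexes and strike out every one that shows up in the observed-usage extract, leaving the dead-index candidates.

```shell
#!/bin/sh
# Sketch: $1 = file of defined indexes (one "DB|TABLE|INDEXCOL" per
# line), $2 = file of indexes observed in use over the window.
# Prints defined indexes that never appeared in the usage file.
# Note: the classic NR==FNR idiom misbehaves if the usage file is
# empty, so this assumes at least one observed use.
dead_indexes() {
  awk 'NR==FNR { used[$0]=1; next } !($0 in used)' "$2" "$1"
}
```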
Thoughts are welcome on approach / implementation.