I need to get a list of obsolete tables so I can drop the unused ones. The query is simple, but some tables' LastAccessTimeStamp comes back as more than 180 days old even though the table has been accessed recently according to the DBQL log. The list is huge, so it is very difficult to check tables manually in DBQL (the LIKE ANY is taking time). Can anybody help explain this gap between LOGDATE and LastAccessTimeStamp?
sel b.databasename, a.tablename, a.CreatorName, a.CreateTimeStamp, a.LastAccessTimeStamp,
    sum(currentperm)/(1024*1024) as MB_SIZE
from dbc.tables a
inner join dbc.tablesize b
  on a.tablename = b.tablename
  and a.databasename = b.databasename
  and a.databasename like 'aw%'
  or (lastaccesstimestamp < date - 180 and lastaccesstimestamp is null)
group by 1,2,3,4,5;
sel LOGDATE, CLIENTID, collecttimestamp, sessionid, queryid, STATEMENTTYPE,
    substr(QUERYTEXT,1,159), AMPCPUTIME, ParserCPUTime, SPOOLUSAGE, a.starttime, username,
    ((a.firstresptime - a.starttime) HOUR(4) TO SECOND(2)) AS ElapsedTime
from PDCRINFO.DBQLogTbl_v1400 a
/* join PDCRINFO.DBQLSqlTbl_V1400 b */
where querytext like any ('%AA_SW_DD_Account_Final%', '%AA_SW_DD_ACCT%', '%AA_SW_DD_IPI%', '%AA_SW_FINAL_2015%', '%AA_SW_FINAL_2016%', '%AA_SW_FINAL_COPY%', '%ACXIOM_EXT_DATA_CONS%');
I think your mismatch is probably due to a lack of 'object use logging' on the TD system. This feature is not turned on by default and therefore the columns you're querying will not get updated.
Assuming that you're on TD14 or higher then look at the USECOUNT feature of query logging - see https://info.teradata.com/HTMLPubs/DB_TTU_15_10/index.html#page/SQL_Reference/B035_1144_151K/BEGIN_Q....
As per the DBQL rule, we have object usage set to true whereas the other columns are false for this DB. Does this make a difference? The dates are simply different, not null or blank.
DBC.Tables shows 04/14/2017, whereas DBQL says the last logged SELECT was on 10/09/2017.
Firstly, looking at your queries: how do the following selection criteria relate to each other in your environment? They do not appear to be obviously related to each other.
a.databasename like 'aw%'
and querytext like any ( '%AA_SW_DD_Account_Final%', '%AA_SW_DD_ACCT%', '%AA_SW_DD_IPI%', '%AA_SW_FINAL_2015%', '%AA_SW_FINAL_2016%', '%AA_SW_FINAL_COPY%', '%ACXIOM_EXT_DATA_CONS%' )
I understand what each of those is doing, but if you're expecting to be able to compare the results from each query and make a decision then I'm a little bit confused.
Also, you might not be getting back what you want. If you EXPLAIN your 'dbc' query, I think you'll find that this term is completely ignored:
(lastaccesstimestamp<date-180 and lastaccesstimestamp is null)
This term by itself can never be satisfied and as it is OR'd with everything else it is simply ignored.
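As a quick sketch of the three-valued logic involved (not tied to any particular table):

```sql
-- For a NULL lastaccesstimestamp:
--   lastaccesstimestamp < date - 180   evaluates to UNKNOWN (comparison with null)
--   UNKNOWN AND TRUE                   evaluates to UNKNOWN  -> row rejected
-- For a non-null lastaccesstimestamp:
--   lastaccesstimestamp IS NULL        evaluates to FALSE
--   anything AND FALSE                 evaluates to FALSE    -> row rejected
```

So no row can ever satisfy both sides of the AND at once.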
If you are trying to find tables where 'lastaccesstimestamp' is over 180 days ago or it is null then you need something like:
sel b.databasename, a.tablename, a.CreatorName, a.CreateTimeStamp, a.LastAccessTimeStamp,
    sum(currentperm)/(1024*1024) as MB_SIZE
from dbc.tables a
inner join dbc.tablesize b
  on a.tablename = b.tablename
  and a.databasename = b.databasename
  and a.databasename like 'aw%'
  and (lastaccesstimestamp < date - 180 or lastaccesstimestamp is null)
  and tablekind = 't'
group by 1,2,3,4,5;
You also asked "Also can u tell me which DBS display i can see this for further info on parameter". If the 'parameter' you're referring to is the USECOUNT then please refer to the link I provided in my first response in this topic.
Enable object logging for the databases. It will tell you who accessed the tables and when, and give you the information you need to make the delete decision.
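As a sketch, enabling object logging and then reading the object log could look like the following. The database name 'awdb' is a placeholder, and the column names follow the standard dbc.DBQLObjTbl layout; verify both against your release's documentation.

```sql
-- Enable object-level logging for one database (requires EXECUTE on the
-- DBQLAccessMacro). 'awdb' is a placeholder database name.
begin query logging with objects on awdb;

-- Later, see the most recent access DBQL has recorded per table.
-- Column names assume the standard dbc.DBQLObjTbl layout.
select ObjectDatabaseName, ObjectTableName, max(CollectTimeStamp) as LastSeen
from dbc.DBQLObjTbl
where ObjectDatabaseName = 'awdb'
group by 1, 2;
```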
Thanks... Enabling object logging. So, won't it have logging overhead and also consume system resources? Can I enable object logging for multiple databases using a single line of SQL? Please guide, as it's a must now since we need to find details of access and access counts for columns and tables.
Yes, enabling additional logging in DBQL will add overhead and use extra system resources, but those will mainly be disk space. DBQL logging is efficient and shouldn't really impact your processing.
You can enable usecount logging for multiple objects with a single line of SQL. See https://info.teradata.com/HTMLPubs/DB_TTU_15_10/index.html#page/SQL_Reference/B035_1144_151K/BEGIN_Q... for full details, but the basics are:
begin query logging with usecount on db1,db2,db3...;
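To confirm which DBQL rules are actually in effect, something along these lines should work. The view name (dbc.DBQLRulesV on TD14+) and the flag column names are assumptions to verify against your release's Data Dictionary manual.

```sql
-- List active DBQL rules and whether object logging is on for each.
-- View and column names assume the TD14+ Data Dictionary layout.
select UserName, AccountString, ObjFlag, SqlFlag
from dbc.DBQLRulesV
order by UserName;

-- TD14.10+ alternative that formats the rules for you:
show query logging on all;
```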
Will usecount add numbers to AccessCount in dbc.tables? I want to make sure the DBQL object table is up to date. What exactly do I need to do for that? Will just enabling logging with the OBJECTS option work?
Yes the 'usecount' option will cause lastaccessxxx fields to be updated/have values.
The 'objects' option will cause the dbql object table to be updated.
You say that you want to ensure the "dbql object table is up to date". Remember that with DBQL logging the data is written to these tables on a timer and/or when the buffer is full, not when a query ends. There may be a time delay between a query (that accesses a certain table) completing and that information being available in the dbql tables. By default this delay is no more than 10 minutes.
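Once object logging has been running for a while, the two sources can be put side by side. A reconciliation query along these lines (table and column names follow the standard dbc layout; treat them as assumptions and verify against your release) would show where the dictionary and DBQL disagree:

```sql
-- Compare the dictionary's LastAccessTimeStamp with the most recent access
-- recorded in the DBQL object log. Names assume the standard dbc layout.
select t.databasename, t.tablename, t.LastAccessTimeStamp, o.LastSeenInDBQL
from dbc.tables t
left join (
    select ObjectDatabaseName, ObjectTableName,
           max(CollectTimeStamp) as LastSeenInDBQL
    from dbc.DBQLObjTbl
    group by 1, 2
) o
  on t.databasename = o.ObjectDatabaseName
 and t.tablename = o.ObjectTableName
where t.databasename like 'aw%'
  and t.tablekind = 'T';
```

Tables where LastSeenInDBQL is recent but LastAccessTimeStamp is old (or null) are exactly the gap discussed above, i.e. access that happened before USECOUNT logging was enabled.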