Last Access timestamp and Performance Issue

Fan

For the past month, the last access timestamp has not been updated for any of the objects in our Teradata systems. This could be fixed by restarting the system, but we are not planning to do that for another two weeks.

Meanwhile, I have noticed that we have also been facing some performance issues recently. I have searched for a relation between the last access timestamp issue and the performance problems, but could not find one.
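For context, this is the kind of check that shows the stale timestamps (a minimal sketch assuming the standard DBC.TablesV view; the database name 'Sales' is just a placeholder):

-- List tables whose last access timestamp has not moved in the last 30 days
SELECT DatabaseName,
       TableName,
       LastAccessTimeStamp
FROM   DBC.TablesV
WHERE  DatabaseName = 'Sales'
  AND  LastAccessTimeStamp < CURRENT_TIMESTAMP - INTERVAL '30' DAY
ORDER BY LastAccessTimeStamp;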

 

Has anyone had a similar experience or any ideas? I appreciate your time.

 

 


Junior Supporter

Re: Last Access timestamp and Performance Issue

The process that updates the 'last access timestamp' most likely got disabled while the system was in an overload state, such as flow control, AWT exhaustion, or 100% CPU use. A restart will enable it again.

There is no direct correlation between this and your current performance issue. But the fact that your system was in an overload state before probably means this is still the case, perhaps only at limited times during the day.

Some analysis of your overall system usage would show if/when this is the case.
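As a starting point, a query along these lines gives a rough picture of when the system is busiest (a sketch assuming DBQL query logging is enabled and the standard DBC.QryLogV view; adjust the date filter to the period you want to analyse):

-- Hourly CPU and I/O profile for yesterday's workload
SELECT EXTRACT(HOUR FROM StartTime) AS HourOfDay,
       COUNT(*)                     AS QueryCount,
       SUM(AMPCPUTime)              AS TotalAMPCPU,
       SUM(TotalIOCount)            AS TotalIO
FROM   DBC.QryLogV
WHERE  CAST(StartTime AS DATE) = CURRENT_DATE - 1
GROUP BY 1
ORDER BY 1;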

You need to define your 'performance issue' a bit more clearly. Is it all queries, all the time, or only specific queries?

Are specific queries always an issue, or only sometimes?

You may have specific query issues, or overall system saturation.

Teradata Frank, Certified Master
Fan

Re: Last Access timestamp and Performance Issue

Only a few of the queries have issues. Queries that used to run in less than 5 minutes are now taking around 5 to 6 hours. I found out that stats were not being collected for many of the tables after the "Last Access timestamp" issue started. After we force-collect the stats, the query completes quickly on the next run. But we have to do the manual forced stats collection every time, because the normal stats statement, which relies on the default system threshold, was not working.
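The manual workaround looks roughly like this (a sketch with a placeholder table Sales.Orders; the verification step assumes the standard DBC.StatsV view):

-- Force an explicit recollection on an affected table
COLLECT STATISTICS COLUMN (OrderDate), COLUMN (CustomerId) ON Sales.Orders;

-- Confirm the collection date actually moved
SELECT StatsName,
       LastCollectTimeStamp
FROM   DBC.StatsV
WHERE  DatabaseName = 'Sales'
  AND  TableName    = 'Orders';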

 

My theory is that the "Last Access timestamp" issue is affecting the stats collection on the tables, which in turn is affecting the performance of the queries.

 

Is this a valid/reasonable theory? Is it possible to get such an issue?

 

In that case, what can be done to make sure the stats get updated properly? Collecting stats seems to fix the performance problem.

Teradata Employee

Re: Last Access timestamp and Performance Issue

There is a DR for this, but unfortunately the fix will not be available until 16.20. However, it is something that may be considered for backporting, so I suggest you or your customer call the support center and ask for it. It's DR 183229.

 

What is happening is this: when the object use count (OUC) feature encounters any unrecognized or abnormal error, it automatically disables itself internally. When this happens, the update counts of tables undergoing change do not get recorded. When OUC turns itself off, the optimizer is unaware of it, and DBQL rules will still indicate that USECOUNT is enabled.

 

The only way to revive the disabled OUC tasks is a restart, as you have already noted.

 

Since the update, insert, and delete (UDI) counts are not being updated, the statistics collection routines see the percentage of change for all tables as zero, so recollections are skipped, and skipped, and skipped.
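A sketch of how this shows up in practice (Sales and Sales.Orders are placeholders; the statements are standard, but the behaviour described is specific to the disabled-OUC situation above):

-- Use counts are normally enabled like this, and the rule stays visible
-- in DBQL even while the internal OUC tasks are down:
BEGIN QUERY LOGGING WITH USECOUNT ON Sales;

-- A routine recollection that relies on the system-determined threshold
-- completes without error...
COLLECT STATISTICS COLUMN (OrderDate) ON Sales.Orders;

-- ...but LastCollectTimeStamp does not advance, because the optimizer
-- sees 0% change and skips the actual recollection:
SELECT StatsName, LastCollectTimeStamp
FROM   DBC.StatsV
WHERE  DatabaseName = 'Sales' AND TableName = 'Orders';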

 

Thanks, -Carrie

Fan

Re: Last Access timestamp and Performance Issue

Thanks for the DR details.

 

I have one question here. We are not facing the problem for all the tables. For around 75% of the tables, the last stats collection happened around the time the last access timestamp stopped updating, but for the other 25% the stats are up to date and there is no problem with them. Why is that happening? If the use count feature is disabled and the optimizer is skipping stats collection, then it should skip it for all the tables. Why does it work as usual for some tables without any problem?

 

 

Teradata Employee

Re: Last Access timestamp and Performance Issue

 

What you are describing, where only some of the tables and statistics are impacted and others are not, is entirely possible. Sometimes the skipping logic for the ‘system-determined change-based threshold’ depends on more factors than just the UDI counts. In addition, some statistics are always recollected, for example those on small tables and on highly non-unique NUSIs. See Section 5.1 in the 14.10 Statistics Enhancement orange book for more detail.

 

Another possibility is that some of your statistics collection statements might have been defined with “USING” clauses that override the system threshold functionality and force full recollections each time.
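As an illustration, a definition along these lines overrides the threshold logic and recollects in full every time (Sales.Orders is a placeholder); SHOW STATISTICS will reveal whether your existing statistics carry such a USING clause:

-- NO THRESHOLD disables the system-determined skipping for this statistic
COLLECT STATISTICS USING NO THRESHOLD COLUMN (OrderDate) ON Sales.Orders;

-- Display the existing statistics definitions, including any USING clauses
SHOW STATISTICS ON Sales.Orders;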

 

If you want to nail down what exactly is happening in your particular case, more information would need to be analyzed. It would be best to open an incident with the support center if you require that level of investigation.

 

Regards, -Carrie

Fan

Re: Last Access timestamp and Performance Issue

Thanks Carrie. The information that you provided was useful.