Collecting statistics gives the parsing engine (PE) an idea of the data distribution of the tables involved in a query, so it can come up with an optimised plan for query execution.
Stats collection is a highly CPU-intensive activity.
We have a main Production system and a secondary DR (Disaster Recovery) system for planned/unplanned outages. The DR system is a replica of the Primary system.
Are there any options available in Teradata to collect stats on the Primary system and apply them to the Secondary system, to minimize the CPU overhead there?
What's your TD release?
You can extract stats from Prod and re-apply them to DR using the Teradata System Emulation Tool (TSET).
In TD 14 you can also simply capture the output of a SHOW STATISTICS VALUES ON table statement on your Prod system and re-run it on your DR system.
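For illustration, a minimal sketch of that TD 14 approach (the database and table names are hypothetical placeholders):

```sql
-- On the Production system (TD 14+): emit the full
-- COLLECT STATISTICS statements, including the captured values.
SHOW STATISTICS VALUES ON SalesDB.Orders;

-- The output is a series of COLLECT STATISTICS ... VALUES (...)
-- statements. Save them to a script and execute that script on the
-- DR system (e.g. via BTEQ) to apply the stats there without
-- rescanning the data.
```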
Thanks Dieter for the reply. We are on TD 13.10.
Though we can use TSET to reapply stats, I believe we cannot automate the process of applying stats for a large number of tables using TSET.
Please correct me if I am wrong.
You can run TSET from the command line; check the "Teradata System Emulation Tool User Guide", "Chapter 4: Command-Line Interface".
Hi Team, I have a couple of scenarios in one application. 1) After stats collection on the base tables, queries run faster, but the insert statements show high-impact CPU. 2) In some scenarios, queries run better with no stats, and dropping the stats fixed the high-impact CPU. It looks like the Teradata optimizer is choosing a suboptimal query plan. Have you encountered any such scenarios, and what areas of the code and objects should we be looking at?