Our system goes into a critical state (as shown in Viewpoint) every now and then, even with a very small number of users. Here is one such situation: the system went to 99.99% CPU utilization.
Here are some key columns from Viewpoint for active queries:
| SESSION ID | REQ CPU | CPU Skew Overhead | DURATION | USERNAME | WORKLOAD | IMPACT CPU | SNAPSHOT CPU SKEW | ΔCPU | REQ I/O |
Result from ResUsageSpma at 9:50, when the system went critical:
Can somebody please explain the math behind the CPU calculation?
So the estimated total available CPU seconds per day is #Nodes × #CPUs × 86,400 (seconds per day) = 19,353,600.
Per hour: 806,400 (19,353,600 / 24).
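The arithmetic above can be sketched as follows. Note the node and per-node CPU counts are assumptions for illustration; only their product (224 CPUs) is implied by the 19,353,600 figure in the post, and the hourly consumption number is hypothetical.

```python
# Sketch of the available-CPU-seconds math above.
# ASSUMPTION: 7 nodes x 32 CPUs = 224 CPUs total; the split is illustrative,
# only the product (224) matches the 19,353,600 figure in the post.
nodes = 7
cpus_per_node = 32
seconds_per_day = 24 * 60 * 60              # 86,400

cpu_seconds_per_day = nodes * cpus_per_node * seconds_per_day
cpu_seconds_per_hour = cpu_seconds_per_day // 24

print(cpu_seconds_per_day)                  # 19353600
print(cpu_seconds_per_hour)                 # 806400

# Hourly CPU utilization % is then consumed / available * 100:
consumed_cpu_seconds = 798_000              # hypothetical hourly consumption
utilization = 100 * consumed_cpu_seconds / cpu_seconds_per_hour
print(round(utilization, 2))                # 98.96
```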
Teradata is designed to fully consume the resources of the platform. The goal is to use resources as fully as possible in order to minimize response time. There are many AMPs on each node, each running a portion of each request, and all of them can consume resources simultaneously. This means that even a small number of CPU-intensive pieces of work running simultaneously will fully utilize the CPU available on the platform. With a typical mixed workload, some work is CPU-intensive and some is I/O-intensive at any particular time, so resource utilization is more balanced. But if several (especially long-running) CPU-intensive pieces of work run at the same time, the CPU can easily be fully utilized. On modern platforms with lots of memory and SSDs, this can be even more pronounced when the data being worked on is in memory or on fast storage.
Full utilization of CPU is not a problem by itself. However, if the CPU-intensive work is impacting the response time of higher-priority work, then workload management rules that allocate CPU priority to the higher-priority work may be indicated.
And of course it is always good to take a close look at high-consuming queries to see if there are opportunities to optimize the work.
Can you also help me understand workload distribution? We have simple Timeshare (no Tactical/SLG, no active TASM): Top, High, Medium, Low.
If nothing is running on the system and a Top-priority workload arrives, then obviously all resources will go to it (no 8:4:2:1 split). Now if High and Medium workloads arrive at the same time, how will resources be divided? And what if they are followed by a Low-priority workload?
If only a Low WD is running (taking all resources) and then a Top WD arrives, does it take 8 times everything? I am confused.
When a query runs, how does Viewpoint (in Query Monitor) show resource consumption details? Is it the resources consumed up to the current step, or only for the current step? I get confused because at one point it shows 500 GB of spool and a few minutes later it shows 50 GB. For the total resources consumed by a query, do we need to use DBQL?
On a current system (SLES 11), a query in Top is given the chance to consume 8x the amount of CPU over an elapsed time interval as a query running concurrently in Low. It's all about taking turns and prioritizing who gets to go next.
Yes, many of the metrics in Query Monitor are "snapshot" values (as of the last sample / current step) rather than cumulative. DBQL is a better place to get total or peak values.
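A toy illustration of the difference, using invented numbers: a snapshot metric like spool can drop between samples (spool is released as steps complete), while a DBQL-style cumulative total only grows.

```python
# Toy model with made-up numbers: spool in a Query Monitor-style view is a
# point-in-time snapshot, while CPU totals and peak spool accumulate (DBQL).
steps = [
    {"step": 1, "cpu_s": 120, "spool_gb": 500},
    {"step": 2, "cpu_s": 40,  "spool_gb": 50},   # earlier spool was released
]

cpu_total = 0
peak_spool = 0
for s in steps:
    cpu_total += s["cpu_s"]                      # cumulative: only increases
    peak_spool = max(peak_spool, s["spool_gb"])  # peak: remembers the high point
    print(f"step {s['step']}: cpu so far={cpu_total}s, spool now={s['spool_gb']}GB")

# A snapshot taken during step 2 would show 50 GB of spool even though the
# query peaked at 500 GB; a total/peak record would keep 160s and 500 GB.
print(cpu_total, peak_spool)                     # 160 500
```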
A key to understanding CPU prioritization is that the CPU allocation is continuously re-evaluated at very short intervals. In each interval the relative CPU allocation is granted and enforced. So when a new piece of work enters the system, the allocations to all work are very quickly changed to reflect the new mix. Thus, a low-priority-only workload can consume the entire CPU capacity, but when a high-priority piece of work arrives, the appropriate allocation will be given to the high work at the expense of the low work.
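A minimal sketch of that re-evaluation, assuming the 8:4:2:1 Timeshare access rates mentioned above and that shares simply normalize over whatever is currently active. This is a simplification of the real scheduler (which arbitrates per query at short intervals), but it shows why Low can take 100% alone yet drop to 1/9 when Top arrives.

```python
# Simplified model: relative CPU share = access rate / sum of rates of the
# workloads active right now. Rates are the 8:4:2:1 Timeshare defaults.
ACCESS_RATE = {"Top": 8, "High": 4, "Medium": 2, "Low": 1}

def cpu_shares(active):
    """Fraction of CPU each currently active workload is entitled to."""
    total = sum(ACCESS_RATE[w] for w in active)
    return {w: ACCESS_RATE[w] / total for w in active}

print(cpu_shares(["Low"]))              # {'Low': 1.0} -- alone, it takes it all
print(cpu_shares(["Top", "Low"]))       # Top: 8/9 ~ 0.889, Low: 1/9 ~ 0.111
print(cpu_shares(["Top", "High", "Medium", "Low"]))
# Top 8/15, High 4/15, Medium 2/15, Low 1/15
```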
For more details about scheduling and WLM, I suggest looking for Carrie Ballinger's posts and blogs on the topic.