Some Rules are Meant to be Broken: Things that Don't Obey CPU Limits

Teradata Employee

This content is only relevant to systems using the SLES 10 priority scheduler.

Maybe you want to ensure that your sandbox applications never use more than 2% of the total platform CPU. No problem! Put a CPU limit of 2% on them. Or maybe you’ve got some resource-intensive background work you want to ensure stays in the background. CPU limits are there for you. But if you plan to use CPU limits as part of your workload management scheme, be aware that there are some database operations that simply won’t obey the limits. So let’s take a look at what those special cases are and why they’re allowed to violate the rules.

What is a CPU Limit?

A CPU limit is a Priority Scheduler option that caps how much CPU a given group of users can consume at any point in time. At the most granular level, CPU limits can be added to a Priority Scheduler Allocation Group. For a broader scope they can be applied to Resource Partitions. And under some conditions, such as after a hardware upgrade, a CPU limit may be applied to the entire system. The DBA controls where, when, and whether to use them, and they are easy to change, for example between day and night.

A CPU limit does not hold back resources ahead of time for the work that falls under the control of the limit. It simply dictates how much CPU can be consumed by work running under its control at any point in time. If consumption should ever reach that level, Priority Scheduler slows down access to CPU for that work. It’s a ceiling, but it works like a thermostat. Only when resource usage indicates that the CPU limit has been exceeded will the limit cause Priority Scheduler to hold back CPU temporarily for the group.

This means that if the group under the control of a CPU limit is not consuming up to that limit, the group will not notice any slowdown or have less CPU available for its processing.

Which Database Operations Ignore CPU Limits?

There are several types of operations within the database that do not honor CPU limits. This non-compliance is by design, not by oversight. It’s how the database was built to work. It’s a good thing.

1.  The System Performance Group

The best example of work that will not honor CPU limits is the work that runs in what is referred to as the System Performance Group (PG). The System PG is home to a special collection of very-high-priority internal DBS operations that have been deemed important enough to always run at the highest possible priority on the platform. You can’t change the fixed priority of the System PG, nor can you place a CPU limit on its allocation group.

Because you cannot place a CPU limit on the System PG individually, the fact that work in the System PG does not honor CPU limits is only relevant if a CPU limit has been applied to the entire system. Generally, the System PG uses a small percentage of the total CPU, usually in the low-to-middle single digits. Anyone who monitors CPU usage, whether through schmon -m, Teradata Manager services, or ResUsageSPS, can see what percent of CPU is being used by the System PG.
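If you go the ResUsageSPS route, a rough sketch of the kind of query involved is below. Treat the performance group and CPU column names (PerfGroupName, CpuUExec, CpuUServ) as placeholders, since the exact names vary by release; check the Resource Usage Macros and Tables manual for your system before using it.

/* Sketch only: PerfGroupName, CpuUExec, and CpuUServ are placeholder
   column names; TheDate, TheTime, and NodeID are standard ResUsage columns. */
SELECT TheDate, TheTime, NodeID,
       PerfGroupName,
       CpuUExec + CpuUServ AS PGCpuBusy
FROM   DBC.ResUsageSps
WHERE  TheDate = CURRENT_DATE
ORDER  BY TheDate, TheTime, NodeID, PerfGroupName;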

The critical database operations that run in the System PG include many short tasks such as dispatching, dictionary cache management, and error logging. At the other end of the scale, one type of work usually running in the System PG that can use a large amount of CPU is rollback and abort processing.

2. Rollbacks

Rollbacks can run in the System PG, or at the user’s priority. It is important to note that rollbacks and abort processing will not honor a CPU limit, if one exists, no matter where they are executing. By default, rollbacks run in the System PG and would be immune to CPU limits along with the other work running at that special priority.

But when the DBS Control parameter RollbackPriority is changed from FALSE to TRUE, rollbacks are forced to run at the priority of the user who submitted the request that is being rolled back. And if that user’s Allocation Group includes a CPU limit, that CPU limit will be ignored by the rollback processing.

The thing to remember here is that when the rollback is running at the user’s priority it is allocated CPU at a lower priority, so it will likely get CPU less quickly than it would in the System PG. That is the point of changing the DBS Control setting to TRUE: the rollback competes less aggressively with other active work. However, even though the rollback will run more slowly, it will still be able to violate a CPU limit, if one exists, as it runs to completion.
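To make that concrete, here is a hedged sketch of the kind of work that generates a large rollback; the table name is hypothetical.

BT;                                      /* start an explicit transaction */
DELETE FROM sandbox.big_history
WHERE  sale_date < DATE '2010-01-01';    /* each deleted row is journaled */
ABORT;                                   /* roll the whole transaction back */
/* The rollback that follows runs in the System PG by default, or at the
   submitting user's priority when RollbackPriority = TRUE. Either way it
   will not honor a CPU limit while it runs to completion. */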

3.  Other Cases

In the Teradata Database, some DBS work is deemed too important to interrupt and needs to complete quickly once it begins. This category of operations that ignore CPU limits includes:

• Deadlock detection routines

• CHECK TABLE

• TABLE REBUILD

• Parts of an ALTER TABLE

ALTER TABLE may be issued more frequently than the other operations mentioned above, because of the desire to change the partitioning of a PPI table or to add or change compression. For that reason, ALTER TABLE is of particular interest in this discussion.
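As a concrete, and hedged, illustration, the kind of statement involved might look like the sketch below; the table and column names are hypothetical, and the full syntax options are in the SQL Data Definition Language manual.

/* Sketch: adds multi-value compression to a column of an already-loaded
   table (hypothetical names). Re-writing the existing rows is the portion
   of the work that can exceed a CPU limit. */
ALTER TABLE sandbox.sales_history
  ADD store_region COMPRESS ('NORTH', 'SOUTH', 'EAST', 'WEST');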

ALTER TABLE is a data definition command that changes the underlying structure of a base table. Some of the work done by an ALTER TABLE will honor CPU limits, but a portion of its activity falls into the category of being too important to interrupt. The following graphic illustrates the CPU usage during the life of an ALTER TABLE command that adds compression to an already-loaded table. In this example, the ALTER TABLE activity exceeded the CPU limit that had been set.

[Graphic: CPU usage over the life of an ALTER TABLE adding compression, rising above the defined CPU limit]

Summary

CPU limits provide an opportunity to manage the CPU consumption of specific groups of users. CPU limits are not for everyone. They have a very focused and specialized purpose, and need to be monitored regularly if they are used.

Understanding the cases where the CPU limit will be ignored is useful if you are thinking about applying CPU limits. As with any workload management option, it’s always a good idea to monitor the work under the control of any CPU limits that you have defined. And remember to use CPU limits selectively, as it is possible that some of the platform CPU may be wasted as a result.

7 Comments
Enthusiast
Does imposing a CPU limit result in delay time or a throttle?
An example: in TDWM, there is a WD in an AG, and that AG is assigned a CPU limit of 20% (rel. wt.). Suppose 5 queries classify into this WD.
If 1 query could execute in 1 second by using 20% of the CPU, then all 5 queries could run in 1 second using 100% of the CPU. However, with the 20% limit imposed, those 5 queries effectively take 5 seconds; more granularly, the 1st query waits 0 seconds, the 2nd waits 1 second, the 3rd waits 2 seconds, and so on.
Where can this waiting time be found? Can it be tracked as the difference FirstStepTime - StartTime, or will it be accounted for in DelayTime?
Teradata Employee
Imposing a CPU limit on an allocation group has no impact on TDWM delays that happen as a result of a throttle. Throttles can delay the time when a query begins execution. This shows up in DBQL. Delays caused by CPU limits are different. These are very small delays in accessing CPU and are imposed after the query is already running. These delays cannot be seen in DBQL, although their presence could account for a longer execution time for a query, making the difference between FirstStepTime and FirstRespTime longer.
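As a rough illustration of where those fields live, assuming the standard DBC.DBQLogTbl columns (adjust the date filter to your own logging setup):

/* DelayTime captures throttle (TDWM) delay; a CPU-limit slowdown only shows
   up indirectly, as a longer span from FirstStepTime to FirstRespTime. */
SELECT QueryID,
       UserName,
       DelayTime,
       StartTime,
       FirstStepTime,
       (FirstRespTime - FirstStepTime) DAY(2) TO SECOND(6) AS ExecElapsed
FROM   DBC.DBQLogTbl
WHERE  CAST(StartTime AS DATE) = CURRENT_DATE;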

It is possible to get some very basic information about CPU limit delays, by issuing the following command from the command prompt (you must have root privileges on the node to do this):

schmon -m -L

This command and its output are documented in the Utilities manual, in the Priority Scheduler chapter, under the schmon -m usage notes. The output from the command shows CPU usage for the allocation groups, as well as a count of the delays imposed on each allocation group during the delay age period in order to enforce a CPU limit.

I don't use this command myself. I just assume that if the CPU limit is being enforced effectively, then enough queries are being delayed in their access to CPU to keep the CPU usage down below the limit. There is no easy way to track the schmon -m -L output back to a particular query. It is best to assume that all queries in the allocation group are likely to be impacted by the CPU limit, and then reassign or reclassify the queries that you don't want impacted by a CPU limit.

Enthusiast
Carrie, thanks for the detailed response. I didn't respond earlier because I was (and still am) unable to run schmon on my system (Windows) :(
From your explanation, there is a delay caused by a CPU limit, but it cannot be tracked in conjunction with the queries running at that time, e.g. it cannot be related to the data found in DBQL. Right?

Appreciate your help.
Teradata Employee
That is correct. DBQL will not indicate whether or not a query took longer to complete because it was being held back by a CPU limit.
Enthusiast
Thanks for the confirmation.
Enthusiast
Would a system running in flow control mode with a pretty deep message queue (100s) contribute to the TASM regulations not being honored for the duration of the event? Is this a myth or the truth? Could you help with some explanation?
Teradata Employee
I am not sure that I am correctly reading your question. Please let me know if I have misinterpreted what you are asking.

Whether or not a system is in flow control will not affect how CPU limits work or whether TASM rules are obeyed. A CPU limit will continue to hold back the work running under its control whether or not there is AMP worker task exhaustion or flow control (except for the situations described in the text above). Having CPU limits in place could make a congestion situation worse, because CPU limits usually mean that active work will take longer to complete, and so the queries under the control of the CPU limit may hold on to their AMP worker tasks longer, making the message queue longer and the congestion worse.

In terms of "TASM regulation" not working when the system is in flow control, I don't know of any case where TASM rules will not be followed under those conditions. Filters and throttles will work the same, as will query classification. Exception actions should be similar, for things like demotions, aborting queries, or sending alerts.

In Teradata 12 there are some optional event-based rules that you can set up that will allow you, for example, to alter some of your workload management settings when flow control happens on a specified number of AMPs. Those types of rules can help you automate a response to congestion and manage the system better under those conditions.