Complex Throttle Rules Can Tie You Up In Knots

Chances are that if you’re into workload management at all, you’re using at least one throttle. Maybe you’re using more. On the other hand, maybe you’re just thinking about it. Object Throttles (as opposed to Workload Throttles) allow you to stray from the straight and narrow, because they offer an array of options and can be used in creative combinations. This makes it possible to get in over your head when your throttles overlap or conflict. If you are not able to keep your throttles simple, this article is intended to preview what you are getting yourself into.

First, a brief explanation. Throttles are rules you define in TDWM that allow you to control the concurrency of specific groups of queries. Workload Throttles are only usable when TASM is active, and let you limit the concurrency of all queries that classify to a given Workload, if you choose. Object Throttles (the kind of throttle we are focused on here) are applied to incoming queries before they are classified to Workloads, and can be applied even when TASM is not being used. Once defined, they are usually associated with context or database objects, and that object association establishes the throttle rule's scope.

When a Query is Controlled By Multiple Object Throttles

Here’s the first type of complexity to think about. You can slip into this quandary without intending to, so pay attention. It has to do with defining multiple Object Throttles in such a way that a single query might be caught by more than one of them at the same time. That can happen when different throttles are associated with different types of objects (accounts, profiles, tables, views, users, etc.). I refer to these as “overlapping” throttles.

Using the following scenario as an example, you could set up 3 throttles something like this:

  • Associated with User X with a limit of 2
  • Associated with Account Y with a limit of 12
  • Associated with Profile Z with a limit of 20

If a query comes in that qualifies for control by all 3 of those throttles (that is, the query was submitted by User X, within Account Y, and belongs to Profile Z), the counters of all 3 throttles must be below their respective limits in order for the query to execute. For example, if the query comes in and User X already has 2 queries running, the new query will go into the delay queue. When one of User X’s active queries completes, allowing the new query to begin executing from the User X throttle’s perspective, it is possible that Account Y has already reached its limit of 12. And when the Account Y throttle falls below its limit, it could be that Profile Z has reached its limit of 20.

Multiple Object Throttles that control the same queries may leave some queries in the delay queue for an indeterminable length of time. The counters of all applicable Object Throttle rules must be below their limits at the same time before the query can be freed from the delay queue.
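To make that release condition concrete, here is a minimal sketch (illustrative Python only, not TDWM internals; the throttle names and limits are the ones from the example above, and the counter values are made up):

```python
from dataclasses import dataclass

@dataclass
class Throttle:
    name: str
    limit: int
    running: int = 0                  # current counter for this throttle

    def has_room(self) -> bool:
        return self.running < self.limit

def try_release(query_throttles: list) -> bool:
    """A delayed query runs only if ALL throttles that control it have room."""
    if all(t.has_room() for t in query_throttles):
        for t in query_throttles:
            t.running += 1            # every applicable counter is incremented
        return True
    return False                      # the query stays in the delay queue

user_x    = Throttle("User X", limit=2)
account_y = Throttle("Account Y", limit=12)
profile_z = Throttle("Profile Z", limit=20)

# User X already has 2 active queries, so the new query is delayed:
user_x.running, account_y.running, profile_z.running = 2, 12, 8
print(try_release([user_x, account_y, profile_z]))   # False
user_x.running = 1       # one of User X's queries completes...
print(try_release([user_x, account_y, profile_z]))   # ...still False: Account Y is at its limit
account_y.running = 11   # an Account Y query completes
print(try_release([user_x, account_y, profile_z]))   # True: all three counters are below their limits
```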

When Multiple Object Throttles are Defined on the Same Object

Here’s a different scenario that is equally complex, if not more so. In this situation there are multiple Object Throttles, all associated with the same object, Account X.

Here’s a description of those three Object Throttle rules:

  • Associated with Account X using the “limit user members” option with a limit of 2 (each user/member is counted individually; this means that each user within Account X will be limited to 2 queries at a time)
  • Rule A: Associated with Account X with a step time threshold of 4 seconds and a limit of 5 (this means that only queries with plans that have estimated times of 4 seconds or greater are controlled by this throttle)
  • Rule B: Associated with Account X with a step time threshold of 1 second and a limit of 10 (this means that only queries with estimated times of 1 second or greater are controlled by this throttle)

The use of different estimated processing times is intended to let all queries within Account X with an estimated time of less than 1 second run without ever being delayed, while at the same time giving a higher concurrency limit (10) to queries with estimated times between 1 and 4 seconds than to queries with an estimate of 4 seconds or more (limit of 5).
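To see which rules catch which queries, here is a minimal sketch (illustrative Python only; the thresholds and limits are the ones listed above, and the function itself is hypothetical):

```python
def applicable_rules(estimated_seconds: float) -> list:
    """Return the Account X throttle rules that would control a query."""
    rules = ['"limit user members", limit 2 per user']   # no threshold: applies to every Account X query
    if estimated_seconds >= 1:
        rules.append("Rule B: 1-second threshold, limit 10")
    if estimated_seconds >= 4:
        rules.append("Rule A: 4-second threshold, limit 5")
    return rules

print(applicable_rules(0.5))   # only the per-user rule; never delayed by the step-time rules
print(applicable_rules(2))     # per-user rule + Rule B
print(applicable_rules(6))     # all three rules
```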

The important behavior to note is that TDWM in Teradata 12 (and earlier releases) keeps only one throttle counter for each object. Even though 3 Object Throttles have been defined against Account X, at any point in time Account X will have only a single counter and will enforce a single limit. This changes in Teradata 13, at which time TDWM keeps one counter for each object/rule combination.

So if you are on Teradata 12 or an earlier release, the limit being enforced for the object will change based on the rule that applied to the last request. Keep in mind a second behavior of TDWM in Teradata 12: if more than one throttle rule applies to a query, the one with the lowest limit is used. Let’s examine the implications of these behaviors for 2 of these 3 Object Throttles, Rules A and B.

• Account X submits 6 very long requests

  • Both rules apply, because all 6 queries have estimated times greater than 4 seconds 
  • The rule with the lower limit (Rule A) is applied
  • 5 queries run, 1 query is delayed
  • The throttle limit for Account X is 5, the counter is at 5

• Account X submits a short query (estimated time = 2 sec)

  • Only Rule B applies
  • The throttle limit is switched internally from 5 to 10
  • The short query begins to run and the counter is incremented to 6

• The long delayed query from Rule A is released to run because the limit is now 10 for Account X
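That walkthrough boils down to a few lines. Here is a minimal sketch of the Teradata 12 behavior (illustrative Python only, not TDWM internals; it tracks just Rules A and B and leaves out the per-user rule, as the walkthrough does):

```python
# Single counter and single effective limit for Account X, as in Teradata 12.
account_x_counter = 0
account_x_limit = 0      # set by whichever rule applied to the last request

def submit(estimated_seconds: float) -> str:
    global account_x_counter, account_x_limit
    # Rule A (4-sec threshold, limit 5) and Rule B (1-sec threshold, limit 10).
    # When both apply, the rule with the lower limit (Rule A) is used.
    account_x_limit = 5 if estimated_seconds >= 4 else 10
    if account_x_counter < account_x_limit:
        account_x_counter += 1
        return "runs"
    return "delayed"

for _ in range(6):
    print(submit(60))    # long queries: the first 5 run, the 6th is delayed (limit is 5)
print(submit(2))         # short query: the effective limit flips to 10, so it runs (counter = 6)
# With the limit now 10 and the counter at 6, the delayed long query is released to run.
```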

If you set up throttles in such a way, it will be difficult to know at any point in time which counter and limit are being used to control concurrency. You are unlikely to experience the behavior that you want.

Here’s a Better Approach

Instead of wallowing in complexity and driving your fellow DBAs crazy as they attempt to predict what concurrency will actually be at any point in time, consider using Workload Throttles. Similar functionality can be achieved by defining Workloads whose classification criteria mirror the step time thresholds (in seconds), and then applying Workload Throttles as needed.
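As a rough sketch of the idea (illustrative Python only, not an actual TASM configuration; the workload names, estimated-time bands, and limits are hypothetical stand-ins for the thresholds in the example):

```python
# Hypothetical estimated-time bands, workload names, and throttle limits.
WORKLOAD_THROTTLES = [
    # (lower bound sec, upper bound sec, workload name, throttle limit)
    (0, 1,             "WD_Tactical", None),   # under 1 second: no throttle, never delayed
    (1, 4,             "WD_Short",    10),
    (4, float("inf"),  "WD_Long",     5),
]

def classify(estimated_seconds: float):
    """Each query classifies to exactly one Workload, so exactly one throttle counter applies."""
    for low, high, workload, limit in WORKLOAD_THROTTLES:
        if low <= estimated_seconds < high:
            return workload, limit

print(classify(0.5))   # ('WD_Tactical', None)
print(classify(2))     # ('WD_Short', 10)
print(classify(60))    # ('WD_Long', 5)
```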

This type of setup is less confusing, and will make it easier to project actual concurrency levels. With fewer options, there are fewer opportunities for poor design using Workload Throttles.