How an Odd Use of Throttles Can Go Wrong


I came across an interesting behavior related to Teradata workload management. This behavior is reflected in both TASM and TIWM.  It has to do with using throttles to delay all queries that include access to a certain table (or set of tables) at times when those tables are being dropped and recreated.


A system throttle was set up with target classification of table name and a limit of zero. When this throttle was active, all queries that touched that object (table) were placed in the delay queue while the table maintenance was going on.  The intent was to change the throttle limit to be higher than zero, or deactivate the throttle, when the table maintenance was complete. If all worked as planned, queries could be freely submitted even though the table(s) were physically unavailable at that time.


But things didn’t work out that way. Queries were held in the delay queue and released just fine. But when they started to execute, the AMP code caused them to fail.  All the queries ended up getting an error saying that the Table ID they were using was no longer in existence.


Here’s what was actually happening.


When a query is being pre-processed in the parsing engine, the Resolver Module checks that the table name exists and pulls the Table ID from the data dictionary.  When the concrete steps are built after the Resolver, the Parser, and the Optimizer complete their work, that Table ID is embedded in the query steps which will later be sent to the AMPs.


But as soon as the table is dropped, its Table ID is gone.  When the table is recreated, it is given a new Table ID.


In the scenario described above, TASM classification had already taken place before the query was placed in the delay queue.  Workload management classification is based on table name only, and never considers the Table ID.  As far as TASM/TIWM classification was concerned, these queries sitting in the delay queue were good to run. When the throttle limit was raised, the queries were released without any recognition that the Table ID had changed while the queries were being delayed.
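The mismatch can be illustrated with a small Python simulation (all names here are hypothetical, not Teradata internals): classification keys on the table name, but the plan carries the Table ID that was captured at parse time, before the query entered the delay queue.

```python
import itertools

_next_id = itertools.count(1)

class Catalog:
    """Toy data dictionary: maps table names to generated Table IDs."""
    def __init__(self):
        self.ids = {}
    def create(self, name):
        self.ids[name] = next(_next_id)
    def drop(self, name):
        del self.ids[name]

def parse(catalog, name):
    """Resolver step: embed the table's current Table ID into the plan."""
    return {"table_name": name, "table_id": catalog.ids[name]}

def execute(catalog, plan):
    """AMP-side check: the embedded Table ID must still exist."""
    if catalog.ids.get(plan["table_name"]) != plan["table_id"]:
        raise RuntimeError("Table ID no longer in existence")
    return "ok"

catalog = Catalog()
catalog.create("sales")

plan = parse(catalog, "sales")   # query parsed, Table ID embedded in plan
delay_queue = [plan]             # throttle (limit 0) delays it, by table NAME

catalog.drop("sales")            # table maintenance begins...
catalog.create("sales")          # ...table recreated with a NEW Table ID

released = delay_queue.pop()     # throttle limit raised, query released
# Classification by name still matches, but execution trips on the stale ID:
try:
    execute(catalog, released)
except RuntimeError as e:
    print(e)                     # Table ID no longer in existence
```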


In a situation such as this, there is really only one option: Select the REJECT option when defining the throttle, rather than the DELAY option. With a throttle limit of zero, all queries that classify to the throttle will be rejected immediately at their start time.  The user will then have to resubmit the query after the table of concern has been recreated, assuming the throttle limit has been raised.  At that time, the query will undergo a second round of parsing and optimization, and the new Table ID will be used in the query plan. This will prevent any abort at the AMP level because the Table IDs will be consistent.  The tradeoff is that the abort is pushed up to the workload management level at the time the query is first submitted.
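A minimal sketch of why resubmission works (again with hypothetical names, not Teradata internals): a rejected query gets parsed afresh when it comes back, so its plan picks up the Table ID assigned at recreation, while a plan built before the maintenance window still carries the old one.

```python
catalog = {"sales": 101}          # toy dictionary: table name -> Table ID

def parse(name):
    """Build a plan carrying whatever Table ID the name resolves to now."""
    return {"name": name, "table_id": catalog[name]}

def execute(plan):
    """Fail if the embedded Table ID no longer matches the dictionary."""
    if catalog.get(plan["name"]) != plan["table_id"]:
        raise RuntimeError("Table ID no longer in existence")
    return "ok"

stale = parse("sales")            # plan built before table maintenance
catalog["sales"] = 102            # table dropped and recreated: new Table ID

# DELAY path: the stale plan released from the queue aborts at the AMPs.
# REJECT path: the user resubmits, and the query is parsed all over again:
fresh = parse("sales")
print(execute(fresh))             # ok
```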