Migrating Priority Scheduler settings to TASM Using Teradata Workload Analyzer


Teradata provides advanced workload management capabilities through Teradata Active System Management (TASM). However, some customers are still relying only on Priority Scheduler, without TASM's added capabilities. These customers can easily move forward to TASM. This article shows how to use Teradata Workload Analyzer (TWA) to migrate existing Priority Scheduler settings into TASM.

A Priority Definition Set (PD Set) is a collection of Teradata Priority Scheduler definitions, including Resource Partition, Performance Group, Allocation Group, Performance Period, and other definitions that control how the Priority Scheduler manages session execution. A PD Set is created using Teradata Priority Scheduler Administrator (PSA), a graphical interface that allows a user to define PD Sets and generate schmon scripts to implement them. PSA also provides the capability to define a schedule for when a PD Set is active on the system. Teradata Manager allows a user to schedule a PD Set; a PD Set scheduled this way is referred to as a “scheduled PD Set”, while a PD Set not scheduled through Teradata Manager is referred to as an “unscheduled PD Set”.

Teradata Workload Analyzer can convert Teradata Priority Scheduler settings, as defined in either scheduled or unscheduled PD Sets, to TASM workloads (WDs), as defined in a TASM Rule Set. 

The following diagram illustrates the high-level flow of the PD Set to TASM WD migration process:

Figure 1: High Level Flow of PD Set to TASM WD Migration

Selecting and Merging PD Sets

Multiple PD Sets can be defined through PSA. Creating multiple PD Sets allows a user to define different Priority Scheduler weights at different times. In TASM, the equivalent functionality of changing Priority Scheduler weights is accomplished within one Rule Set with multiple planned environments. To migrate Priority Scheduler settings, the user must merge the PD Sets for the various time periods to create a single TASM Rule Set with the equivalent planned environments associated with those time periods.

TWA prompts the user to select PD Sets for migration.

  • If there are no scheduled PD Sets, the list of available unscheduled PD Sets is displayed. The user can select one or more unscheduled PD Sets for migration into a single TASM Rule Set. For each PD Set selected, the user must define a period event and associated operating environment.
  • When migrating scheduled PD Sets, the operating environments and their associated periods are derived automatically from the scheduling information in the PD Sets.


Figure 2: Selecting and Merging PD Sets

Whether migrating from multiple unscheduled PD Sets or from a scheduled PD Set, the following rules must be adhered to in order to merge multiple PD Sets into one TASM Rule Set:

  • The number of Resource Partitions across the PD Sets must be the same.
  • The number of Performance Groups across the PD Sets must be the same.
  • Performance Group names must match exactly in all Resource Partitions and must be in the same Resource Partition in all PD Sets.
  • The number of performance periods in a Performance Group, the CPU milestones and Allocation Groups must be the same in all PD Sets.
  • If multiple PD Sets are selected to merge, the PD Sets cannot contain Time of Day Performance Periods for any Performance Groups.

The following differences across PD Sets are acceptable:

  • RP Weights, AG Weights and CPU Limits
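The merge rules above can be sketched as a small validation routine. This is a minimal sketch under assumed data shapes: the PD Sets are reduced to plain dicts (`rps` → `pgs` → `periods`), whereas the real PD Set structures carry far more detail.

```python
def mergeable(pd_sets):
    """Return True if the PD Sets satisfy the merge rules for a single
    TASM Rule Set; raise ValueError describing the first violation."""
    first = pd_sets[0]
    for ps in pd_sets[1:]:
        # Rule: the number of Resource Partitions must be the same.
        if len(ps["rps"]) != len(first["rps"]):
            raise ValueError("Resource Partition counts differ")
        for rp_name, rp in first["rps"].items():
            other = ps["rps"].get(rp_name)
            if other is None:
                raise ValueError(f"missing Resource Partition {rp_name}")
            # Rule: Performance Group names must match exactly and sit
            # in the same Resource Partition in all PD Sets.
            if set(rp["pgs"]) != set(other["pgs"]):
                raise ValueError(f"Performance Groups differ in {rp_name}")
            for pg_name, pg in rp["pgs"].items():
                # Rule: performance periods (with their CPU milestones
                # and Allocation Groups) must be the same everywhere.
                if pg["periods"] != other["pgs"][pg_name]["periods"]:
                    raise ValueError(f"periods differ for {pg_name}")
    return True

# Two PD Sets that differ only in weights (weights omitted here) merge cleanly.
morning = {"rps": {"RP1": {"pgs": {"PG_M": {"periods": [("cpu<100", "AG1")]}}}}}
evening = {"rps": {"RP1": {"pgs": {"PG_M": {"periods": [("cpu<100", "AG1")]}}}}}
```

Note that only structure is compared; RP weights, AG weights, and CPU limits are deliberately left out of the checks, since those are the differences the merge is allowed to absorb.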

Figure 3: Existing PSA Setting

Conversion Process

Mapping to Allocation Groups

With the Priority Scheduler, an incoming request is mapped to a Performance Group (PG) based on its logon account string. Embedded within the account string is the name of the PG the request will be run in. The PG’s first performance period indicates the Allocation Group the request will run under.

With TASM, incoming requests are mapped to Workload Definitions (WDs) based on a variety of classification criteria instead of just the account string. The conversion process creates one WD for each Allocation Group, and assigns classification criteria primarily based on the account string of the owning PG. The WD is then mapped to execute its requests in that Allocation Group. Using this approach, a request that used to run in a particular Allocation Group will still run in that same Allocation Group.
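The AG-preserving conversion described above can be illustrated with a short sketch. The dict shapes, the `first_period_ag` field, and the `$PG$` account-string pattern are illustrative assumptions, not the exact TWA data model.

```python
def pdset_to_workloads(pd_set):
    """Create one workload definition per Allocation Group, with
    classification criteria drawn from the owning Performance Group."""
    wds = {}
    for rp in pd_set["rps"].values():
        for pg_name, pg in rp["pgs"].items():
            # The PG's first performance period names the AG its
            # requests run under today; the WD keeps that mapping.
            ag = pg["first_period_ag"]
            wd = wds.setdefault(ag, {"ag": ag, "accounts": []})
            # Requests whose account string embeds this PG name are
            # classified into the WD, so they land in the same AG.
            wd["accounts"].append(f"${pg_name}$*")
    return wds

sample = {"rps": {"RP1": {"pgs": {
    "M": {"first_period_ag": "AG5"},
    "H": {"first_period_ag": "AG6"},
}}}}
wds = pdset_to_workloads(sample)
```

The key invariant is visible in the return value: each WD executes in exactly the Allocation Group its source Performance Group pointed at, so migrated requests keep their old scheduling behavior.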

Migrating Priority Scheduler Resource Partitions

Two options are provided to migrate Resource Partition structures. In both options, the Allocation Group relative weights are kept identical to those in the original PD Sets.

  • “As-is” Mapping: The current Resource Partition, Performance Group, and Allocation Group structures and their relative weights remain the same, i.e., if Allocation Groups were defined within 5 Resource Partitions before, they will still be defined within 5 Resource Partitions after the migration.
  • Simplified Structure Mapping: This is the default and recommended option. WDs and their Allocation Groups are mapped to three basic Resource Partitions:
    • Default: runs only “system” work; no WDs are mapped here.
    • Tactical: WDs with an Enforcement Priority of “tactical” are mapped here.
    • Standard: WDs with an Enforcement Priority of “priority”, “normal” or “background” are mapped here.
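The three-partition mapping above amounts to a simple lookup on Enforcement Priority. The priority names follow the article; the function itself is only an illustrative sketch.

```python
# Enforcement Priority -> simplified Resource Partition, per the list above.
# "Default" holds only system work, so no Enforcement Priority maps to it.
EP_TO_RP = {
    "tactical": "Tactical",
    "priority": "Standard",
    "normal": "Standard",
    "background": "Standard",
}

def simplified_rp(enforcement_priority):
    """Return the Resource Partition a WD lands in under Simplified
    Structure Mapping."""
    return EP_TO_RP[enforcement_priority.lower()]
```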

The Simplified Structure Mapping option enables two specific priority scheduler best practices to be automatically applied to your settings:

  •  Reducing the weight of the default Resource Partition.

Starting with V2R6, the default Resource Partition weight no longer needs to dominate as it did in earlier releases.

  • Moving all tactical work into a common Resource Partition separate from non-tactical work.

This avoids a situation where non-tactical work can exceed the resource assignment of tactical work. With As-Is mapping, when a tactical Allocation Group in one Resource Partition is inactive, its unused resources go to the non-tactical work within the same Resource Partition rather than to tactical work in other Resource Partitions. Simplified Structure Mapping maps all tactical work to the Tactical RP, so any idle resources are given first to other tactical work before being given to non-tactical work in the Standard RP.

The following formula is used to calculate the relative weights when choosing simplified structure mapping:

  • Tactical RP Weight = Sum of the relative weight of all tactical AG’s
  • Standard RP Weight = Sum of the relative weight of all other AG’s
  • Default RP Weight = 0.25 * (Tactical RP Weight + Standard RP Weight)
  • The assigned weights of the Default Resource Partition's Allocation Groups “L”, “M”, “H”, and “R” are 5, 10, 20, and 40, respectively.
  • All other assigned AG Weights = Previous relative weight when default resource partition’s Allocation Groups are inactive.
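The formulas above can be written out directly. This is a sketch of the stated arithmetic only; the input dicts (AG name → relative weight) are an assumed shape, and the sample numbers below are illustrative.

```python
def simplified_weights(tactical_ags, other_ags):
    """Compute Resource Partition weights for Simplified Structure
    Mapping, per the formulas in the article.

    tactical_ags / other_ags: mapping of AG name -> relative weight.
    """
    # Tactical RP weight = sum of the relative weights of all tactical AGs.
    tactical_rp = sum(tactical_ags.values())
    # Standard RP weight = sum of the relative weights of all other AGs.
    standard_rp = sum(other_ags.values())
    # Default RP weight = 0.25 * (Tactical RP weight + Standard RP weight).
    default_rp = 0.25 * (tactical_rp + standard_rp)
    # Fixed assigned weights for the default RP's L/M/H/R Allocation Groups.
    default_ags = {"L": 5, "M": 10, "H": 20, "R": 40}
    return {"Tactical": tactical_rp, "Standard": standard_rp,
            "Default": default_rp, "DefaultAGs": default_ags}

# Example: one tactical AG at weight 30, two standard AGs at 50 and 20.
w = simplified_weights({"TacAG": 30}, {"AG_A": 50, "AG_B": 20})
```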

Conversion Process Examples

Figure 4: Conversion Process

The table below shows the migrated TASM setting as per ‘As is’ and ‘Simplified Structure’ mapping options.


| Classify Condition | Mapped AG | RP Name (Existing PSA) | AG Weight (Map As-Is) | AG Weight (Simplified Structure Mapping) | Enforcement Priority |
| --- | --- | --- | --- | --- | --- |
| Acct=AcctB AND (MinEstTime = 0 AND MaxEstTime = 100sec) |  |  | 14 (RP1) |  |  |
| Acct=AcctB AND (MinEstTime = 100sec AND MaxEstTime = 500sec) |  |  |  |  |  |
| Acct=AcctB AND (MinEstTime = 500sec AND MaxEstTime = 2000sec) |  |  | 14 (RP2) |  |  |
| Acct=AcctF OR Acct=AcctG |  |  | 29 (RP1) |  |  |

Following the merge of PD Sets as previously described, the user is presented with the “PSA to Workloads mapping” screen, which shows the mapping of WDs to AGs and their RPs, the classification criteria, the relative weights of the RPs and AGs (including previous relative weights), the Enforcement Priority, and the name of the Planned Environment. Because migration cannot automatically assign Enforcement Priority settings to the WDs, the user should make this assignment here via the pull-downs provided.

The conversion process creates multiple WDs, one for each unique account found, and assigns classification criteria primarily based on the account string, including the Performance Group definition within that account string. Each WD is mapped to an Allocation Group as previously defined within the PD Set, which designated certain Allocation Group(s) to be associated with a Performance Group. With this approach, a request that used to run in a particular Allocation Group will still run in the same Allocation Group.

The example below (figure 5) uses “As-is” mapping.

Figure 5: Mapping with “As-is” Structure

The next example, shown below (figure 6), illustrates “Simplified Structure” mapping.

A point of clarification: when comparing the As-Is relative weight column to the Simplified Structure relative weight column, they are not the same; Simplified Structure is actually 25% less than As-Is. In this view, the effect of the new default Resource Partition, with a Resource Partition weight of 25%, is factored into these weights. Because no WDs are mapped to the default Resource Partition, it should essentially be inactive. Thus, if the relative weights were recomputed without this inactive default Resource Partition's weight factored in, you would see that the relative weights are indeed the same, and that the conversion to Simplified Structure does maintain the same relative weights as before with respect to active Allocation Groups.
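The arithmetic behind this clarification can be checked with a small sketch. The weights here are illustrative, not taken from the figures.

```python
def relative(weights):
    """Convert absolute RP weights to percentages of the total."""
    total = sum(weights.values())
    return {k: round(100 * v / total, 1) for k, v in weights.items()}

# As-Is: tactical and standard work split a total weight of 100.
as_is = {"Tactical": 30, "Standard": 70}

# Simplified Structure adds a default RP at 25% of (Tactical + Standard).
with_default = dict(as_is, Default=0.25 * 100)

# Including the (idle) default RP scales everything by 100/125 = 0.8,
# which is the "25% less" seen when comparing the two columns.
shares = relative(with_default)

# Dropping the inactive default RP restores the original ratios exactly.
active_shares = relative(as_is)
```

In other words, the displayed percentages shrink only because an inactive partition's weight is included in the denominator; the ratios among active Allocation Groups are unchanged.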

Figure 6: Mapping with Simplified Structure

CPU Distribution pie-chart

Once migration is completed, the CPU Distribution pie-chart is generated from DBQL log data, showing the CPU percentage consumed by each migrated workload.
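The percentage computation behind such a chart can be sketched as a simple aggregation over DBQL-style records. The field names (`wd_name`, `ampcputime`) are assumptions for illustration, not the actual DBQL column names TWA reads.

```python
from collections import defaultdict

def cpu_distribution(dbql_rows):
    """Return each workload's share of total CPU, as a percentage."""
    cpu = defaultdict(float)
    for row in dbql_rows:
        # Accumulate CPU seconds per workload.
        cpu[row["wd_name"]] += row["ampcputime"]
    total = sum(cpu.values()) or 1.0  # avoid division by zero
    return {wd: round(100 * secs / total, 1) for wd, secs in cpu.items()}
```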

Figure 7: CPU Distribution

The user can save workload definitions into a new TASM ruleset by selecting the ‘Tools->Create Workloads…’ menu option.


By using the TWA migration option, users can easily move forward to TASM's more advanced workload management capabilities. TWA guides the user through the migration options and provides a conversion that maintains the same Allocation Group weights in the TASM environment as in the Priority Scheduler environment. From that equivalent baseline, the user can then take advantage of TASM's more advanced workload management capabilities to achieve a sophistication unattainable with Priority Scheduler alone.

Teradata Employee

Re: Migrating Priority Scheduler settings to TASM Using Teradata Workload Analyzer

Thanks BK and Anita for this article. It clears a few lingering questions in my mind. I am mentioning a few points in addition to a few questions:

(a) Can PD Sets and WDs both exist simultaneously? I guess not.
(b) If both PD Sets and WD Sets can exist simultaneously, how does the whole system work then? What's the flow?
Teradata Employee

Re: Migrating Priority Scheduler settings to TASM Using Teradata Workload Analyzer

The answer is no, you need to choose between a priority scheduler PD sets architecture or a TASM WD architecture. They cannot co-exist.