Because they look like just another group of workloads, you might think that SLES11 virtual partitions are the same as SLES10 resource partitions. I’m here to tell you that is not the case. They have quite different capabilities and purposes. So don’t fall victim to retro-conventions and old-school habits that might hold you back from the full value of new technology. Start using SLES11 with fresh eyes and brand new attitudes. Begin at the virtual partition level.
This content is relevant to EDW platforms only.
Use of multiple resource partitions (RPs) in SLES10 originated from early restrictions on how many different priorities each RP could support. The original Teradata priority scheduler had four external performance groups and four internal performance groups contained in a single default RP. Even today, the original RP (RP 0, the default RP) usually supports no more than four default priorities: $L, $M, $H, and $R.
Teradata V2R5 introduced the ability to add resource partitions, but each new resource partition could still support only four different external performance groups, just as RP 0 did. This forced users to branch out to more RPs if they needed a greater number of priority distinctions. It was common to see four or five RPs in use, and some users complained that even that wasn't enough to provide homes for the growing mix of priorities they were trying to support.
In V2R6, priority scheduler was enhanced to allow more than four priority groupings in any RP. At that time we encouraged users to consolidate all their performance groups into three standard partitions for ease of management: Default, Standard, and Tactical. Generally, a Tactical RP was needed to give special protection to short tactical queries. Some internal work still ran in RP 0, so we recommended that you avoid assigning user work there, which necessitated a "Standard" RP to manage all of the non-tactical performance groups. In SLES10 many users embraced this three-RP approach, while others went their own way with subject-area or priority-based divisions among multiple RPs (creating a Batch RP and a User RP, for example).
There are four rationales for the multiple-resource-partition usage patterns in heavy rotation with SLES10 today. For the most part they came into being because of restrictions within the SLES10 priority scheduler, which encouraged out-of-the-box use of multiple RPs on EDW platforms whether you thought you needed them or not.
Very few examples of using resource partitions for business unit divisions have been in evidence among Teradata sites on SLES10, partly because only four usable RPs were available and partly because the SLES10 technology has not been all-encompassing enough to support the degree of separation required.
First, let's address the four key motives (or rationales) users have had for spreading workloads and performance groups across multiple RPs in SLES10, looking at them from the SLES11 perspective.
Applying higher-level (partition-level) resource limits on a group of workloads, as we have seen at some SLES10 sites, is much less likely to be needed in SLES11 (I personally believe it will not be needed at all). That is because the accounting in the SLES11 priority scheduler is more accurate, giving SLES11 the ability to deliver exactly what is specified: no more, no less. There is no longer a performance-protection need for resource limits, or for over- or under-allocation of weight, at the partition level. And because that need has gone away, the argument in favor of separate partitions for performance benefit is less compelling.
A virtual partition in SLES11 is a self-contained microcosm. It has a place for very high priority tactical work in the Tactical tier. It has many places in the SLG Tiers for critical, time-dependent work across all applications, ranging from the very simple to the more complex. And at the base of its structure, in Timeshare, it can accommodate large numbers of different workloads submitting resource-intensive or background work at different access levels, including load jobs, sandbox applications, and long-running queries. Within its self-sufficient world, priorities at the workload level can be changed multiple times every day if you wish, using planned environments in the TASM state matrix.
If you're on an EDW platform with SLES11, you are offered multiple virtual partitions, but their intent is different from SLES10 resource partitions. Virtual partitions were implemented to provide a capability that SLES10 was not well suited to deliver: supporting differences in resource availability across multiple business units, distinct geographic areas, or a collection of tenants.
Virtual partitions provide a method of slicing up available resources among key business divisions of the company on the same hardware platform. Once you are on SLES11, if you set up virtual partitions along lines that made sense in SLES10, you give up the ability to support distinct business units in the future, and you'll be less in harmony with TASM/SLES11 enhancements going forward.
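To make the "slicing up available resources" idea concrete, here is a minimal sketch in Python. It is purely illustrative and not Teradata syntax or behavior: it assumes each partition has a configured allocation percentage, takes only what it actually demands, and any unused capacity is shared proportionally among partitions whose demand exceeds their allocation. The partition names and the `partition_shares` helper are hypothetical.

```python
# Hypothetical illustration (not Teradata syntax): dividing platform
# capacity among virtual partitions by configured allocation percentages,
# with unused capacity redistributed to partitions that still have demand.

def partition_shares(allocations, demands):
    """allocations: {partition: configured percent, summing to 100}.
    demands: {partition: percent of the platform it actually wants}.
    Returns the effective percent each partition receives."""
    # Each partition first gets the lesser of its allocation and its demand.
    shares = {p: min(alloc, demands.get(p, 0)) for p, alloc in allocations.items()}
    unused = 100 - sum(shares.values())

    # Partitions wanting more than their allocation split the leftover
    # in proportion to their unmet demand.
    hungry = {p: demands[p] - shares[p]
              for p in allocations if demands.get(p, 0) > shares[p]}
    total_hunger = sum(hungry.values())
    if total_hunger > 0:
        for p, h in hungry.items():
            shares[p] += min(h, unused * h / total_hunger)
    return shares

shares = partition_shares(
    {"Finance": 50, "Marketing": 30, "Sandbox": 20},   # configured split
    {"Finance": 70, "Marketing": 10, "Sandbox": 20})   # actual demand
print(shares)  # Finance absorbs the capacity Marketing leaves idle
```

The point of the sketch is the division of resources by business unit, each unit insulated from the others' configured shares; the redistribution rule here is just one plausible policy, chosen for clarity.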
New capabilities around virtual partitions, such as virtual partition throttles in 15.0, and other similar enhancements being planned, are all being put in place with the same consistent vision of what a virtual partition is. Keep in step with these enhancements, and position yourself to use them fully, by letting go of previous conventions and embracing the new world of SLES11 possibilities.