Utility Session Management - It's Inside the Database in Teradata 13.10! (UPDATED)

Teradata Employee

How do you pick the number of sessions to assign to your utility jobs?  Chances are you guess.  In Teradata 13.10 the task of deciding the number of sessions has been moved inside the database, meaning one less thing for you to worry about.  Read on for a quick introduction to how this feature works.

Once you are on Teradata 13.10, the database dictates the number of sessions each load job will get.  This happens automatically, without your doing a thing.  Any parameters or config files you have set up will be ignored, and by default you will get between 20 and 100 sessions for each utility you run.

Session management applies to both EDW and Appliance platforms, and the only requirement is that one of the three workload management features in Viewpoint Workload Designer be enabled:  Filters, Throttles, or Workloads.  You can view the enabled features for your platform by going to the Activation tab within the General category.

Default Session Management Rules

There are default session management rules in place that will be in control when you first get on Teradata 13.10.  You can change the number of sessions that these default rules provide if you like.  There are 3 types of default rules:

  1. FastLoad and MultiLoad (including TPT and JDBC variants)
  2. FastExport
  3. Archive and Restore

The number of sessions specified by the session management default rules will be different based on the number of AMPs in your configuration, as illustrated by the following formulas.  First is the formula that the default session management rules use for the load type utilities:

Sessions = 20 + (#AMPs / 20), up to a maximum of 100

FastExport always uses 1 session per AMP, up to an upper limit of 4 sessions per job.

Here is the formula that the default session management rules use to determine the number of sessions for archive and restore jobs:

If #AMPs > 20  ==>  Sessions = 4 + (#AMPs / 50)
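For example, on a 240-AMP configuration (the same size used in an example later in this post), the defaults work out as follows:

Load utilities:      20 + (240 / 20) = 32 sessions
Archive/Restore:      4 + (240 / 50) = 8.8, i.e. 8 or 9 sessions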

User-Defined Session Management Rules

In order to provide more granular control over the number of sessions used by utilities, you may create your own rules.   Which session management rules will be used by a given utility job is determined  by means of a classification process, similar to workload classification.   Classification to the default rules is based on utility name and optionally utility data size.  If you create your own customized session management rules you will have the opportunity to select TASM "who" criteria (user, account, query band, etc.) that will limit which utilities will use that rule.

For example, you may want to make sure that user loads from a sandbox user classify to a rule with a very low number of sessions.  Or you might want to ensure that your loads to your large, dynamic, sales tables have access to a higher number of sessions when they run.

Once you set up additional rules, you will need to specify an evaluation order.  The first session management rule that a utility classifies to will be the rule that determines the number of sessions that it will use.   At the bottom of the evaluation order is the default rule for that utility type, which is where all unmatched utilities will classify.

Is the Session Setting a Max or a Limit?

If you modify the number of sessions in one of the default session rules, or create a new customized rule, you will see that Viewpoint Utility Session screens refer to this number of sessions as a "Session Limit" or  a "Max Number of Sessions".

However, this number of sessions you specify is not an upper limit for the supported utilities (FastLoad, TPTLoad, etc.).  Rather it’s the exact number that must be available for the utility to begin execution, assuming you are on both Teradata 13.10 and TTU13.10 or later releases.  The session number specified in the TASM session rule will override both the minimum and maximum session specifications that have been placed in the script.
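For illustration, here is a hypothetical FastLoad SESSIONS command (the values are made up):

SESSIONS 32 16;   /* requested maximum of 32 sessions and minimum of 16 */

On Teradata 13.10 with TTU13.10, the number in the matching TASM session rule replaces both the 32 and the 16.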

However, for non-conforming utilities that come from 3rd party tools, or for utilities executing on pre-TTU13.10 software, the TASM session rules set a limit.  Only the maximum session specification within the script will be overridden for those jobs, so it is possible that fewer sessions than specified in the TASM rule may be applied.

Viewpoint Utility Session terminology uses the term "limit" to cover all of the situations described above.  However, if you are using the standard Teradata utilities from TTU13.10, you should interpret it as the exact "Number of sessions".

Utility Data Size 

If you know the size of the load job that is typically submitted by a specific user, then you can create a customized session management rule that uses the load type and the user name (or other criteria) to ensure that user's jobs get an appropriate number of sessions.  Knowing the user and creating a rule for them is one way to manage the relationship between data size and number of sessions, while keeping the decision on the number of sessions inside the database.

Another approach is to use a new reserved query band called UtilityDataSize.  There are 2 values you can pass with that query band:  LARGE and SMALL.  If you do not pass a query band, the default number of sessions (equivalent to a MEDIUM size) is used.  Use of query bands puts control in the hands of the client, rather than the database, and assumes, if a 3rd party tool is being used, that the tool has the intelligence to support query bands.

Below is some abbreviated FastLoad code that shows you where you can place the query band. 

sleep 1;
tenacity 1;

.SET SESSION CHARSET "ASCII";
.RUN NODEID.user1;
set record VARTEXT "|" DISPLAY_ERRORS NOSTOP;
errlimit 3;

/* Reserved query band, set at the session level before BEGIN LOADING */
SET QUERY_BAND = 'UtilityDataSize=SMALL;' UPDATE for session;

DELETE FROM fl_table_1;

define
     start_datetime (varchar(200),NULLIF='')
     ...
     ,file=data1;

begin loading    fl_table_1
      errorfiles et_fl_table_1,
                 uv_fl_table_1;

insert into fl_table_1 (
      start_datetime
     ...
     ) values (
     :start_datetime
     ...
     );

end loading;
.logoff;

The session management rule that the utility classifies to has a specific number of sessions.  There are three versions of each default rule: one for SMALL data sizes, one for MEDIUM data sizes (called "Default"), and one for LARGE data sizes.  The Default version represents the standard number of sessions for that utility and will be used if no query band is passed.  If the query band says "LARGE", the job will match the LARGE version of the rule, whose number of sessions is the default increased by 50%.  If the query band says "SMALL", the rule it matches will have half the number of sessions of the default.  Here is an example of the different size-specific rules where the default specifies 32 sessions:

DEFAULT:  32 sessions
LARGE:    32 * 1.5 = 48 sessions
SMALL:    32 * 0.5 = 16 sessions

Number of Sessions with Teradata Parallel Transporter (TPT)

When you’re using TPT you can initiate multiple instances of a load operator within a single load job.  All the instances share a common logon sequence number (LSN), and one utility slot is required per LSN.  In the database, the number of sessions specified by a session management rule is applied to the LSN.  So if there are multiple load operator instances within a load job, they will share the number of sessions given to the load utility.

For example, if the session management rule for a TPT job specifies 10 sessions, and the TPT Load job has 2 load operators, then each operator will get 5 sessions.
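Here is a minimal sketch of the relevant part of a TPT script (the operator names and DML are hypothetical):

APPLY ('INSERT INTO fl_table_1 VALUES (:start_datetime);')
TO OPERATOR (LOAD_OP[2])                   /* 2 instances of the load operator, sharing one LSN */
SELECT * FROM OPERATOR (FILE_READER[2]);

If this job classifies to the 10-session rule in the example above, each LOAD_OP instance gets 5 sessions.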

How To Benefit from the Session Management Feature

You can do nothing, and once you are on Teradata 13.10, the default session management rules will be deciding how many sessions each of your utility jobs will use.

You can simply change the number of sessions that are specified in the default session management rules.    For your site, you may find 32 sessions for a configuration with 240 AMPs (for example) is too high, and you want most users to have a much lower number of sessions.

You can start using query bands to influence the number of sessions based on passing the value of LARGE or SMALL, if appropriate.  In that case, the number of sessions in the default rules or in the rules you have created will be selected based on the size value specified within the query band, as shown in the example above.

Or  you can define additional customized session management rules, based on your knowledge of who is submitting load utilities, and their needs.  Creating your own rules will give you more granular control over session assignments.  Don’t forget the evaluation order.

91 Comments
Enthusiast
Hi Carrie,

I am unsure why GSC is giving us recommendations the way they are!
We have been told that any utility sessions that are in responding status do not hold on to any utility slots.
Could you throw some light on their rationale?

Teradata Employee
Vinay,

I sent you an email directly in response. The specifics involved in your recent comment are probably more suited to an offline discussion.

I think there is some confusion concerning terminology, which any discussion of utility throttles and session management is easily subject to. I have struggled, and continue to struggle, with the same thing myself. One option is to re-open the incident so that there is clarity on all sides.

Thanks, - Carrie
Teradata Employee

Hi Carrie, Great article!!

You mention above that the default number of sessions is obtained using a formula that depends on the number of amps, Data Size etc. 

From my understanding, if my job is a multi-stream job, then the default sessions obtained using the above formula are applied to each data stream.

For example, if I run an ARC job with 5 data streams on a 40 amp system, then the number of sessions would be :

(20 + 40/20) = 22 sessions / Stream. With 5 data streams, it would be 5 * 22 = 110 sessions / job.

Would 110 sessions for my ARC job still be an efficient number to provide optimal archive/restore performance, or is there another formula to determine the number of sessions when running a multi-stream job?

Thanks

Jaison

Teradata Employee

Jaison,

Assuming the configuration has a moderate or low number of AMPs (as your example does), ARC is most efficient when there is 1 session per AMP since each session can only process 1 AMP at a time.  When you run multi-stream, ARC allocates the AMPs evenly among the streams, so it's ideal if the number of AMPs is a multiple of the number of ARC streams (if the math works out).  If you are running with 5 streams on a 40 AMP system, then ARC will allocate 8 AMPs to each stream.  In this case, the most efficient number of sessions to use would be 8 sessions per stream. 

Session management within TASM comes up with a number of sessions, and when TASM gives that number of sessions to ARC, ARC will use that number of sessions for each stream in the ARC job. So, ARC views the TASM number of sessions as a ‘per stream’ number and not a ‘per job’ number.

In your example below, I think your formula should be: 40 AMPs/1 AMP per session/5 streams = 8 sessions/stream and 40 total sessions for the job.

Thanks, -Carrie

Teradata Employee

Hi Carrie. Thanks for the clarification. To get a little more detail :

1. Is there a guideline on what number classifies as small number of amps and what classifies as large number of amps ? 

2. In my above example, if I did have a larger number of amps (say 400), then would ARC still work best if I have 1 session / amp? i.e. 400 sessions 'per job' ? or would  the performance be better if the number of sessions is calculated using the above formula (4 + #amps/50) = (4 + 400/50) = 12 sessions ?

3. If the number of sessions is known, is there any known formula to compute the number of streams (for a multi stream job) rather than always using a single stream that would provide ideal performance ?

Thanks

Jaison

Teradata Employee

Unfortunately, a "large number of AMPs" is arbitrary.  If the number of sessions you would have to specify in order to get one session per AMP becomes unwieldy, too large a number for you to allocate to a single ARC job (as it might with 500 AMPs, for example), then you could consider 500 a large number of AMPs.  In this context "large" means a number of AMPs that would require something beyond what is reasonable at your site in order to have one session per AMP for a single ARC job.

ARC with one session per AMP will usually be more efficient, since each session can only process one AMP at a time.   The alternative is to pick a number of streams that divides evenly into the number of AMPs, so each stream is doing the same amount of work.   I cannot tell you whether the default session management rule's number of sessions would perform better on your platform than a number of sessions that you calculate yourself; you would have to try that out.  But it's always preferable if the math comes out even, based on the number of AMPs:  #AMPs / #AMPs per session / #streams.

I don't have any advice on the number of streams.  You could post that question on Teradata Forum, where it would reach a larger audience, and see what others are doing.  If you are archiving to tape you are probably limited to one stream per tape drive.  You could start by calculating a stream number based on even AMP distribution and then see how that number fits with how many storage devices are available.

Thanks, - Carrie

Not applicable

Hello Carrie,

We did all the changes suggested in this article. We are now on TD 14.0.3 and we flipped the DBS Control internal parameter 198 (DISABLETDWMSESSIONS) to FALSE so that utility sessions are all controlled by the database and it ignores the number of sessions requested by clients. We have set the FEXP default to 18 and the FASTLOAD/MLOAD values to 64. But I am seeing many utility jobs connecting to our box with more sessions than the DEFAULT rules specify: I can see a FastExport job with 1000+ sessions from a Hadoop job, and around 512 sessions from a user running a FASTLOAD.

Are there any other changes we have to do in the "SESSIONS TAB" to make sure we are restricting the session usage?

Thanks!

Jana. 

Not applicable

Carrie,

I think I had an issue in my PDCR table that showed a high number of sessions. Now that I look at the session count again, the rule is working fine for allocating a max of 64 sessions. But I do see some FastLoad jobs that run using fewer sessions, like 27/40/59, instead of connecting 64 sessions. I thought 64 was the minimum number of sessions required to start the utility job?

Thanks!

Jana

Teradata Employee

Jana,

I'm glad that you resolved the issue about the max session counts being honored.   You may see fewer sessions than your session management rules specify on some load jobs if  non-Teradata load utilities are being used. 

There are a few sentences about that situation in the above posting.  It's the 3rd paragraph under the heading: 

Is the Session Setting a Max or a Limit?

=========  

However, for non-conforming utilities that come from 3rd party tools or for utilities executing on pre-TTU13.10 software, the TASM session rules set a limit.  Only the maximum session specification within the script will be overridden for those jobs, so it is possible that fewer sessions than specified in the TASM rule may be applied.

========

Thanks, -Carrie

Not applicable

Thanks Carrie for your response. What I have researched so far is that SESSIONINFO is not logging all of the utility sessions, though LOGONOFF logs all sessions (matching Viewpoint). It's so weird: I traced back a job and its sessions in Viewpoint, and I could see more sessions in Viewpoint (64, as set in the fexp-default rule), whereas sessioninfo doesn't log all the sessions. But that's the only place where we can tie a utility job to its sessions using the LSN.

Any ideas how sessioninfo (dbc.sessiontbl) is populated (I will dig into this more)? I was hoping all utility sessions for a job would be logged in sessioninfo and in logonoff, but they are not.

Not applicable

Also, the jobs where I am seeing fewer sessions are actually from the same client running TTU 13.10 and hitting a TD 14 database. Like I explained above, it's a logging issue; I even scanned the ETL job log for those jobs that show fewer sessions per sessioninfo, and they all say requested sessions: 6 and connected sessions: 64.

Teradata Employee

Jana,

The SessionInfo view only shows currently logged on sessions.

All utility sessions are logged in the DBC.EventLog which is also accessible via the DBC.LogonOff view.  Normally, there should be two rows for each session: logon and logoff.  If a logon fails, then there would be only one row for the session.   
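For example, a quick way to look at those rows is something like the following (a sketch; the columns assume the standard LogonOff view, so adjust to your needs):

SELECT LogDate, LogTime, SessionNo, UserName, Event
FROM DBC.LogonOff
WHERE LogDate = CURRENT_DATE
ORDER BY SessionNo, LogTime;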

If you continue to have problems or questions in this area, I would suggest you contact the support center and work with them to get your issues resolved.

Thanks, -Carrie

Enthusiast

Hi,

I am using TD 12. An application is firing 60 concurrent queries. Our SLA is 10 seconds for a query, but it's taking 20 secs to execute a query. If we increase the number of sessions for a query, can we achieve the SLA? Please advise.

Thanks in advance.

Not applicable

Hi Carrie

I've got a question about calculating the number of sessions to be used as the default.  We have a 16 node system, however our busiest ETL server is connected via a private LAN and that private LAN is only physically connected to 7 nodes.  30 amps per node.

If I use the standard formula for calculating sessions I get 44 if I use 480 amps (16 nodes) but I get 30 if I use 210 amps (7 nodes).  TDWM of course uses 480 and defaults to 44.

So I suppose this question is really about what is behind the calculation; i.e. does it matter that the sessions will only have access to connect to a subset of the nodes? 

Thanks in advance,

Ricky

Fan

Hello Carrie

That was really an informative blog. I have a few questions regarding the TPT Update API using Informatica. Current DBS Control settings are:

MaxLoadTasks: 15 and MaxLoadAWT: zero. When I run 30 parallel jobs, a few ETLs fail with error 2633 "Too many load tasks", even though I have set Tenacity to 4 hours and Sleep to 6 minutes. Why are the jobs failing? Ideally they should be queued.

Thanks

Raghav

Teradata Employee

Ricky,

The calculation used by session management is based on the number of AMPs in the system.  LAN configuration, or even the number of nodes, is not considered.

Thanks, -Carrie

Teradata Employee

Raghav,

If you are relying on MaxLoadTasks to manage the concurrency of utility jobs, you will never get the delay option.  When MaxLoadTasks finds that a new load utility job would exceed the MaxLoadTasks limit, it will reject that job.  It does not have the ability to delay.

If you want the delay option, try using utility throttles, or if you are on 13.10 for both the database and the client, then internal default throttles will delay even if you are not using utility throttles.  Once you have utility throttles defined, MaxLoadTasks will be ignored.  You can specify that you want the delay option on the screens that allow you to define utility throttles.

Another reason for not getting a delay action when you expected to is if you are using non-conforming utilities, which will only be able to reject.   Non-conforming utilities are non-Teradata utilities that do not use the TPT API, but instead rely on their own legacy CLI interfaces.  Most current releases of non-Teradata utilities tend to use the TPT API, so should be able to delay, but you would have to double check that.  

Tenacity and sleep settings will not have an impact on whether or not a load job is delayed or rejected.  Tenacity only plays a role if a job has already been rejected and the client software sees that rejection.  If the job is delayed, it stays under the control of TDWM in the database and will no longer be under the control of the client.  

Thanks, -Carrie

Enthusiast

Hi carrie,

I'm not sure to be in the right topic. If not sorry.

My question is about the session limit  defined in workload Designer/Utility Sessions/Session based utility.

The session limit can be defined for FastLoad, TPT Load operator, Stand Alone FastLoad, Stand Alone MultiLoad ... but something is missing from the list: TPT STREAM.

How can I define a session limit for TPT STREAM utility ?

Thanks

 
Enthusiast

My Teradata Viewpoint version is V14.10.00.04-b38

Teradata Employee

Vincent,

Session management in Workload Designer only covers the load utilities, and TPT Stream is SQL-based, so it is not officially a utility.  The number of sessions is a parameter on the TPT Stream or TPump job, and has to be specified there.  The number of sessions can secondarily be controlled by managing the number of such jobs that are started up at the same time.

You might want to post a question on one of the Teradata Forums and ask what other Teradata practitioners do to manage the number of sessions used by TPT Stream or TPump jobs.  Unfortunately, there is no centralized place to do that.

Thanks, -Carrie

Teradata Employee
Hi Carrie,

There is a limit for load utilities (FastLoad, MultiLoad, etc.) in a Teradata system that is handled by the MaxLoadTasks parameter in DBS Control. According to this, we can increase that field's value up to 15. If we want to increase its value beyond 15, then we also have to set the MaxLoadAWT field in DBS Control to limit the AWTs used by load utilities in the system; otherwise most of the AWTs will be consumed by load utilities alone. We can define a utility limit of 30 at most.

My question is:

If TASM is there, the utility limit is defined in the ruleset's session tab. If we define a utility limit of 23 (for example) without any change in DBS Control, will it work, given that we did not set MaxLoadAWT? If yes, then how will TASM manage the AWTs for load utilities in the system?

please help on this

Teradata Employee

Sandeep,

Once you are using TASM, or even just using the throttle category of TASM, the DBS Control parameters (MaxLoadTasks and MaxLoadAWTs) are no longer honored.

Under those conditions, load jobs will never be allowed to use more than 60% of the total AMP worker tasks.  If you have the default number of AWTs/AMP, which is 80, then load jobs in combination will never be allowed to use more than 48 AWTs (60% of 80) at a time.

Thanks, -Carrie

Teradata Employee

Thanks Carrie.

Enthusiast

Hi Carrie,

I want to start with "Happy New Year" first :)

Coming to my question: some time back I read your posting on Evaluation Order in TD Magazine. In it you mentioned that the order of the workloads may not be a performance concern, but that it is still recommended to have the most popular workloads at the top for some slight savings. Personally, what I took from it is that the most popular WDs should be on top and WD-Default should be at the end (as WD-Default is for unexpected queries). I recently saw a PRODUCTION rule set where the number of WDs is around 40. In the evaluation order, WD-ConsoleH/M/L/R occupy the first 4 places. This is on an Enterprise class system with the SLES10 configuration.

So I just wanted to make myself clear on whether I should bother about having the highly used WDs with strict classification criteria at the top or not.

Can you please clarify?

Enthusiast

And I just forgot to ask a related question on evaluation order: I heard that in 14.0 the eval order was removed and there will be only one evaluation order for all. Is it true?

Teradata Employee

Response to Geeta's first posting above:

If you only have 40 workloads, then it really doesn't matter from the performance perspective if you re-order the evaluation order list with the workloads getting the most hits going first.   The primary reason for setting the evaluation order is the case where ordering is essential for correct classification, such as when you have workloads like this:

MSI-Short:    Account = ‘$M1$MSI&I’, Est Time <= 20 Sec.      (High priority)
MSI-Medium:   Account = ‘$M1$MSI&I’, Est Time <= 500 Sec.     (Medium priority)
MSI-Long:     Account = ‘$M1$MSI&I’, Est Time <= 2000 Sec.    (Low priority)

If you have MSI-Long higher in the evaluation order than MSI-Medium or MSI-Short, all MSI queries will classify there.  So even if MSI-Long was a more popular classification destination, it would be a mistake to put it higher in the evaluation order.

Thanks, -Carrie

Teradata Employee

Response to Geeta's second posting above:

In 14.0, you will still have a workload evaluation order, but you no longer have the ability to change the workload evaluation order across different states.

This is because of the  14.0 enhancement that greatly reduced the overhead of changing states in TASM.

In 14.0 and later releases, requests stay in the delay queue and are re-evaluated in place, rather than the delay queue being flushed and requests being sent back to the source and resubmitted.

Delay queues are no longer repopulated and recounted, significantly reducing message passing overhead.  In addition, internal structures such as the TDWM cache, are not deleted and rebuilt, but rather tweaked in place, a much more modest effort.

As a result, state changes are more efficient and much faster, especially at busy times.  But in order to accomplish this, counters have to be maintained for all throttles across all states, even when a throttle is not active.   For this reason, the ability to change the evaluation order by state had to be removed.

Thanks, -Carrie

Enthusiast

Thank you very much.

Enthusiast

Carrie, 

just one last question.

From your presentation, one important point I heard you emphasize a lot is to separate the utility work into separate workloads having the classification criterion 'Utility'.

For example, assume a PRODUCTION box with 40 workloads, among which 3 workloads (WD-BatchH, BatchM, BatchL) support loading.

For these 3 WDs, the classification criteria are simple: we just included the relevant profiles (and did NOT use Utility as a classification criterion). So for whichever users are included in those profiles, their work falls into the corresponding workload. The most common work from these profile users is loading using utilities, along with some ad hoc INS/UPD/DELs.

Ex:

WD-BatchH classification criterion: Profile_1;    WD-BatchM classification criterion: Profile_2;    WD-BatchL classification criterion: Profile_3;

My question is: why is it so important to prevent the non-utility work from classifying to the same workload?

Is the reason that the utility workload would be mixed up with regular work, making it tough to analyze and report how the load utilities and the affected workloads are performing? Or is there some other performance concern?

Enthusiast

As a continuation to my question above....

I just went through another of your forum postings (at
http://developer.teradata.com/blog/carrie/2012/06/eliminating-unplanned-delays-within-load-utility-j...) related to my question, and I understood that the main reason to have a separate WD for utility work is to avoid any unexpected delays with auxiliary sessions. After going through all the comments and your responses, I put my interpretation in the form of the example below to check my understanding with you on the best-practice recommendation for handling utility jobs to avoid delays. Am I missing anything here?

3 workloads something like below will cover highly critical, Medium and Low priority Batch jobs.

1) WD-BatchNoThrtl -- covers jobs from important batch IDs under BatchSpecialProfile
   ClassfCrit: BatchSpecialProfile + Utility: FL, ML and FE ... all kinds of utilities
   Throttles:  WD Throttle: No;  Utility Throttle: Yes (limit 30);  System Throttle: No
   EvalOrder: 1

2) WD-BatchWthThrtl -- covers jobs from the rest of the non-critical batch IDs under BatchProfile
   ClassfCrit: BatchProfile + Utility: FL, ML and FE ... all kinds of utilities
   Throttles:  WD Throttle: Yes (limit 15);  Utility Throttle: Yes (limit 30);  System Throttle: No
   EvalOrder: 2

3) WD-BatchUtilAux -- covers auxiliary SQL sessions from both BatchSpecialProfile & BatchProfile
   ClassfCrit: BatchSpecialProfile or BatchProfile
   Throttles:  WD Throttle: No;  Utility Throttle: No;  System Throttle: No
   EvalOrder: 3

Thanks

Geeta.

 

Teradata Employee

Geeta,

The reason for separating utility work at the WD level by means of Utility classification is three-fold:  1) as you already mentioned, for reporting and analysis purposes; 2) so the utility unit of work will be recognized; and 3) so no non-utility work will ever classify there.

When the WD classification includes Utility, the DBS knows that only utility jobs will be running in that workload, so it will assume there is a utility unit of work.  Many sites put workload throttles on these utility workloads, and with utility classification the delay will only take place at the beginning of the utility, not several times during the utility's execution.

There are no other performance impacts, beyond the potential for multiple delays within the utility job if a workload throttle is present without utility classification.

Your three examples look like they would work fine.   The main thing to remember is to add "utility" classification to the workloads where the actual utilities will run, and then use the same basic classification criteria for another workload for auxiliary sessions WITHOUT the utility classification and without throttles.

Thanks, -Carrie

Enthusiast

Could you tell me if TPT Stream operator would occupy Utility Slots?

Teradata Employee

TPT Stream will not use a utility slot.  The TPT Stream operator and its predecessor, TPump, use SQL-based load approaches and therefore do not tie up resources to the degree that TPT Load (FastLoad) or TPT Update (MultiLoad) do.  That is why TPT Stream jobs are unlimited.

Thanks, -Carrie 

Enthusiast

Thanks Carrie for the article.

We have a very peculiar problem. We are using TD 14.10. We have a system-level throttle of 15 for TPT MultiLoad. We are trying to run 7 MLOADs through TPT concurrently, which triggers 230 sessions.

Of those, 222 sessions move to IDLE state and the other 8 sessions sit in the delay queue. The 222 sessions remain queued for more than 20 mins without getting activated; they stay in the same state and do not fail either. There is nothing active during this time.

If we run 6 MLOAD jobs concurrently, everything runs smoothly without any issue.

Please let me know what the issue is here.

Thanks in advance.

Teradata Employee

If you mean that you have a system utility throttle for TPT Update jobs (MultiLoad), but some sessions are being delayed, first find out what throttle is causing the delay.

Some sessions within a load utility are for auxiliary work, like a write to the restart log.  Auxiliary session SQL is treated as regular SQL and will classify to a different workload from the utility job, and may be delayed by a system query throttle, if one exists that it can classify to.  Check if a system query throttle is responsible for these delays.  If it is, then modify the system query throttle to add classification that excludes all applications = MULTILOAD, TPTUPD, etc.  This will prevent auxiliary SQL from within a load utility job from being delayed.

Thanks, -Carrie

Enthusiast

Hi Carrie 

I've found the information in this post extremely helpful, but I do have a question regarding limiting sessions for the TPT Stream operator.  We are on a 4-node 2750 appliance running TD 14.10 SLES11.  The application team is using the Informatica TPT API to run load jobs using the Stream operator, and each job initiates 23 sessions, most of which sit idle.  They also at times run several jobs concurrently, and I often see over 100 idle sessions for that user, with just a few active.  They are using the "UtilityDatasizeSmall" query band, which does not have any effect because the Stream operator sessions are not classified as utility sessions.  So my question is: how do I get the Stream operator to limit the sessions per job?  I'm thinking that 12 to 14 might be a better value.

Regards

Mark

Teradata Employee

Mark,

The number of sessions used by an Informatica TPT API Stream workflow should be able to be set in the definition of the Stream connection.  I don't have direct experience in this area, but I have been told that it allows for setting min and max sessions.

Like many Stream/TPump jobs, the client providing the data (the "source" in Informatica terminology) could be the bottleneck.  In your case it may be that the source is not supplying data fast enough to keep many of the sessions busy.  You could try some iterations, changing the number of sessions to see what maximizes throughput.

For additional tips in this area, you could post your question on one of the Teradata Forums, and see how others using Informatica define the number of sessions for their TPT streams jobs.

Thanks, -Carrie

Enthusiast

Thanks very much Carrie - I'll pass this on to the Informatica Team.

kvz
Enthusiast

Hi Carrie,

I want to set session limits for the ARC utility in TASM depending on UtilityDataSize (Large, Small, default).

We have 12 PEs and 240 AMPs in the configuration. Can you help me calculate the number of sessions that should be allowed for each UtilityDataSize (Large, Small, default) through TASM?

kvz
Enthusiast

....CONTINUED 

for Large data size we use = 12 streams

for Default(medium) data size = 8 streams

for Small data size= 1 stream

Now, I want to know: what should the ideal values be for the below rules in TASM in this case?

ARC-Large (UtilityDataSize=Large)

ARC-Default(UtilityDataSize=Any)

ARC-Small (UtilityDataSize=Small)

Teradata Employee

You don't need to  calculate the number of sessions yourself.  The purpose of session management is to do that for you in the database.  The formula that session management uses to determine the default number of sessions for ARC is the following:

If # AMPS > 20 ==>   4 + (#AMPs / 50)

In your case that would be about what you are using now for the medium data size, 8 or 9.  Session management will divide that by 2 to determine the number of sessions for the small data size (that would be 4), and multiply it by 1.5 to determine the large data size (that would be 12 or 13).

I cannot tell you the ideal number of sessions.  I suggest you use the default settings, and adjust them if you need to.   It sounds like the defaults are close to what you are already using anyway.   All of this is documented in the TASM orange book, so you might look there if you need more detail on session management.

Thanks, -Carrie