You may want to skip over this posting. 

Unless you’re curious about some of the more obscure workload classification conventions used by TASM and TIWM.

In a previous blog posting titled “A New, Simplified Approach to Parsing Workload Selection,” I explained how a session workload is determined at the time a session logs onto the database.  Once established, the session workload is used to support the activities required for logging on a session or for parsing queries within the session.

If you have set up a parsing-only workload to serve as a single, high-priority session workload for all of your users, and you happen to monitor that workload, you may find activity running in it that is not part of session logons or query parsing.

This posting explains which other activities may show up in a session workload, and some of the oddities in how the session workload is used by workload management routines.

Session Workload

First, let’s be clear on what a session workload does:  it manages access to resources and the priority of activities that take place on the parsing engine, such as logging on a session or parsing a request.

The session workload responsible for parsing a query is reported in a DBQLogTbl field named SessionWDID.  SessionWDID may or may not be the same workload as the WDID for that request.  If those two fields are the same, parsing and optimizing for the query took place in the same workload as the AMP execution.  Read the blog posting mentioned above for more background, and for detail on the session workload and how it is selected.
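To make the relationship between the two fields concrete, here is a small illustrative Python sketch.  The rows and workload names are made up; only the SessionWDID and WDID field names come from DBQL:

```python
# Illustrative model of DBQLogTbl rows as Python dicts. SessionWDID and
# WDID are real DBQL field names; the row values are invented examples.

def parsed_where_executed(row):
    """True when parsing/optimizing ran in the same workload as AMP work."""
    return row["SessionWDID"] == row["WDID"]

rows = [
    {"SessionWDID": "WD-Parse", "WDID": "WD-Parse"},     # parsed and executed together
    {"SessionWDID": "WD-Parse", "WDID": "WD-Tactical"},  # parsed in one, ran in another
]

for row in rows:
    print(row["WDID"], parsed_where_executed(row))
```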

Now let’s consider some of the activities that you may observe running in SessionWDID.

Stored Procedure CALL Statement

Stored procedure CALL statements always run in the session's SessionWDID workload.  When the dispatcher sees a CALL statement, it places the request in the SessionWDID, so the SessionWDID and the WDID will always be the same for the CALL.
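The dispatcher's special-casing of CALL can be sketched roughly like this.  This is an illustrative Python model, not actual Teradata logic; the function and workload names are assumptions:

```python
# Rough model of the rule described above: CALL statements are forced
# into the session workload; everything else keeps its normal
# classification result. Purely illustrative.

def choose_wdid(statement_type, session_wdid, normal_classification):
    if statement_type == "CALL":
        return session_wdid      # WDID == SessionWDID for every CALL
    return normal_classification

print(choose_wdid("CALL", "WD-Parse", "WD-DSS"))    # → WD-Parse
print(choose_wdid("SELECT", "WD-Parse", "WD-DSS"))  # → WD-DSS
```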

Here’s how a stored procedure CALL would look in DBQLogTbl:

The stored procedure CALL statement incorporates the code and logic, if any, contained in the stored procedure.  It usually represents a very small amount of CPU usage that comes before, between, and after the SQL statements within the stored procedure.  A stored procedure CALL is given its own row in DBQLogTbl, apart from the SQL contained within it.

Below is a different example of DBQL output that illustrates a CALL statement and the three SQL statements within the stored procedure.  Note that DBQL in this example reports zero AMP CPU Time for the CALL, and a slight amount of CPU on the parsing engine.  The request number for all of the SQL statements within the stored procedure will be the same as the request number for the CALL.  However, each SQL statement that is part of the stored procedure will have a distinct internal request number that simply increments the original request number, as shown below.  
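The numbering scheme described above can be modeled in a few lines.  This is a simplified sketch; the actual internal request numbers are assigned by the database:

```python
# Simplified model: the CALL keeps the original request number, and each
# SQL statement inside the procedure gets an internal request number
# that increments from it.

def internal_request_numbers(call_request_no, statement_count):
    return [call_request_no + i for i in range(1, statement_count + 1)]

# A CALL with request number 1000 containing three SQL statements:
print(internal_request_numbers(1000, 3))  # → [1001, 1002, 1003]
```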

In earlier releases stored procedure calls ran in the default workload, called “WD-Default”.  Some Teradata sites were concerned when they saw activity in WD-Default after having been careful to set up workload classification in a way that avoided anything falling into the default workload.  Consequently, starting in 14.10 that changed, and today all CALL statements execute in the SessionWDID for the session.

If you would prefer CALL statements to execute in a workload of your choice, and not in the session workload, you can set up classification criteria on the workload where you want the CALL statements to execute.  Use the Target Criteria named "Stored Procedure".  You can name one or more stored procedures you want to run in that workload, or you can design it so that all stored procedures run in a single workload by using the wild card (*).  But remember, we’re talking here about only the CALL statement itself, the framework of the stored procedure that encapsulates the SQL statements.  All the SQL statements within the stored procedure will run in whatever workload they classify to, independent of the CALL statement.
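Wildcard classification of this kind behaves roughly like the following sketch.  This is illustrative only; the workload names and rule structure are assumptions, and TASM's real matching is internal to the database:

```python
# Illustrative "*"-style matching for stored-procedure target criteria,
# using fnmatch for the wildcards. The first matching rule wins.
from fnmatch import fnmatch

def classify_call(rules, database, proc_name):
    for workload, db_pat, proc_pat in rules:
        if fnmatch(database, db_pat) and fnmatch(proc_name, proc_pat):
            return workload
    return None

rules = [
    ("WD-FinanceSP", "FIN_DB", "MONTH_END*"),  # specific procedures
    ("WD-AllSPs",    "*",      "*"),           # catch-all: every CALL
]

print(classify_call(rules, "FIN_DB", "MONTH_END_CLOSE"))  # → WD-FinanceSP
print(classify_call(rules, "HR_DB", "NEW_HIRE"))          # → WD-AllSPs
```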


CHECK WORKLOAD Statements

CHECK WORKLOAD statements were introduced in Teradata Database 13.10 to support the load utility unit of work.  There are two such statements:

  • CHECK WORKLOAD FOR:    Precedes the utility unit of work.  Passes object information, such as database and table, into the database.
  • CHECK WORKLOAD END:  Begins the utility unit of work.   Determines the number of AMP sessions, triggers workload classification, and performs the utility throttle and resource limit checks.

The CHECK WORKLOAD FOR statement works the same way as the CALL statement in terms of workload classification.  CHECK WORKLOAD FOR will always run in SessionWDID (which will be the parsing-only workload if you have one defined).  This is very light work: all the statement does is take the names of the objects involved in the load utility from parser memory, where they were initially read into the database, and pass them to the dispatcher.  Similar to a stored procedure CALL, there is no AMP activity involved in the CHECK WORKLOAD FOR statement.

A CHECK WORKLOAD END statement classifies to the workload where the load utility will execute.  It uses a different approach to workload classification than does a CHECK WORKLOAD FOR.  A CHECK WORKLOAD END goes through normal request classification and matches to a single workload.  Once that workload is selected, it becomes the load utility's session workload as well.  If you look at DBQL output for a load utility, you will see WDID = SessionWDID for the duration of the load.

Here is an example of how CHECK WORKLOAD FOR and CHECK WORKLOAD END look in DBQL:

CHECK WORKLOAD FOR finds SessionWDID and moves it to the WDID, making them the same. CHECK WORKLOAD END does the opposite.  It finds the WDID and moves it to the SessionWDID, making them the same for the duration of the load utility.
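The two directions can be summarized in a small sketch.  This is an illustrative model of the DBQL fields, not database code:

```python
# CHECK WORKLOAD FOR: WDID is copied from SessionWDID.
# CHECK WORKLOAD END: the classified WDID overwrites SessionWDID.
# Illustrative model only.

def check_workload_for(session_wdid):
    return {"SessionWDID": session_wdid, "WDID": session_wdid}

def check_workload_end(classified_wdid):
    return {"SessionWDID": classified_wdid, "WDID": classified_wdid}

print(check_workload_for("WD-Parse"))  # both fields show WD-Parse
print(check_workload_end("WD-Load"))   # both fields show WD-Load
```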

HELP and SHOW Commands

HELP and SHOW commands are short administrative commands that execute on the parsing engine but may go to the AMPs to get data dictionary information.  Typical HELP/SHOW commands include:

  • HELP DATABASE
  • HELP SESSION
  • SHOW TABLE
  • SHOW STATISTICS

Most of the time these types of commands will use express requests if they require AMP data.  Express requests are streamlined requests that bypass the parser and optimizer and go directly to the data dictionary. 

However, if any of these administrative commands are considered large, they will be executed as normal SQL and will be parsed, optimized, and dispatched to the AMPs.  This would be the case if a spool file needed to be built to return the data.

If an express request is used to satisfy one of these commands, as is likely the case with a HELP DATABASE or HELP SESSION, then you will see only a SessionWDID in DBQL data.  The WDID will be NULL.  An express request bypasses the dispatcher, which is the module where the classification to a WDID is performed.  So it never undergoes workload classification.

If, however, the administrative command is satisfied by issuing SQL, which might be the case when a lengthy histogram is returned from a SHOW STATISTICS command, it will appear in DBQL with both a SessionWDID and a WDID.  They may be the same workload or two different workloads.
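Put together, the two paths would leave DBQL rows looking roughly like this.  This is an illustrative sketch; the command and workload names are made up:

```python
# Express requests bypass the dispatcher and never receive a WDID;
# commands converted to SQL are classified and get one. Illustrative.

def dbql_row(command, uses_express_request, session_wdid, classified_wdid=None):
    return {
        "Command": command,
        "SessionWDID": session_wdid,
        "WDID": None if uses_express_request else classified_wdid,
    }

print(dbql_row("HELP SESSION", True, "WD-Parse"))                # WDID is None
print(dbql_row("SHOW STATISTICS", False, "WD-Parse", "WD-DSS"))  # WDID populated
```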

Here is how DBQL output might look for a series of HELP and SHOW commands:

Notice that some of these commands have a WDID and some do not.  

Throttles Can Delay HELP/SHOW Commands

There’s something else to be aware of concerning these administrative commands.  Sometimes they end up in the throttle delay queue.  

Throttle rules are enforced within the dispatcher before the first step in a request is sent to the AMPs.  Because express requests bypass the dispatcher, they are never subject to being delayed by a workload management throttle rule.  This means that most HELP/SHOW commands, at least the ones that only display a SessionWDID but not a WDID, are not at risk of being placed in the delay queue.  

However, the HELP/SHOW commands that run as regular SQL (the ones that do display a WDID) could be sent to the delay queue.  Whether this happens or not depends on whether such commands classify to a system throttle rule, and whether that system throttle is at its limit at the time the SQL reaches the dispatcher.  Since a virtual partition throttle is a variation of a system throttle, such an unexpected delay could also happen as a result of a virtual partition throttle.
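The delay-eligibility rule described above reduces to a short check.  This is an illustrative sketch, not Teradata code:

```python
# Only commands that passed through the dispatcher (WDID present) can be
# delayed, and only when a matching system or virtual partition throttle
# is at its limit. Illustrative model only.

def can_be_delayed(wdid, throttle_at_limit):
    return wdid is not None and throttle_at_limit

print(can_be_delayed(None, True))      # express request: never delayed
print(can_be_delayed("WD-DSS", True))  # SQL path with throttle at limit
```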

You can see this use of SQL instead of express requests yourself by running an EXPLAIN on a HELP or a SHOW command:

This HELP command will use SQL and can potentially be delayed:


  1) First, we lock DBC.TVM for access.
  2) Next, we do an all-AMPs RETRIEVE step from DBC.TVM by way of an
     all-rows scan with a condition of ("DBC.TVM.Field_1 = '00002F04'XB")
     into Spool 1 (group_amps), which is built locally on the AMPs.
     Then we do a SORT to order Spool 1 by the sort key in spool field1.
  3) Finally, we send out an END TRANSACTION step to all AMPs involved
     in processing the request.
  -> The contents of Spool 1 are sent back to the user as the result of statement 1.


This SHOW command uses an express request and will not be delayed:


  -> The row is sent directly back to the user as the result of statement 1.


The chart below is a subset of the earlier chart, displaying only the HELP/SHOW commands that have both a SessionWDID and a WDID, an indication that they were converted to SQL:

A future enhancement will automatically bypass throttles for such commands.


Conclusion

A session classifies to a session workload to do the logon processing and also to do the parsing engine work of all requests that execute within that session.  However, there are additional types of very light work that will run in the session workload as well.

If you have set up a parsing-only workload, the additional commands and requests that execute at the session priority should not be a performance concern.




StevenSchmid
01 Mar 2016

Hi Carrie
Thanks for the article, very informative as always.
Do you think we could classify the "Call" stored procedure statement in the parsing-only workload, as AMPCPUTime is usually 0, or should they be separate?
Therefore workload definition would look something like:
   Users = *, and
   Target criteria = "Stored Procedure" and object = *     
which would allow SP Call statements to run alongside the parsing-only steps, as they only look at the highest "who" criteria workload.

Steven Schmid
Teradata DBA
Canberra, Australia

carrie
04 Mar 2016

If you want to add target criteria to a workload so that all stored procedure CALL statements will run there, you would select Target Criteria, then Stored Procedure in the drop-down box.  Then in the Database box put "*" and in the Stored Procedure box put "*".  That will catch all SP CALL statements from all databases.  I was a little confused by what you meant by "object=*", but maybe you were referring to the stored procedure object type.
However, there is a problem with this approach:  If you have user classification of "user=*" as well, as you would likely have if this was your parsing workload, then TASM will AND the user and the SP classification together.  So then both the user and the SP classification would have to be satisfied, with the end result that only SP CALL statements will classify there.  There is no OR capability supported between unlike criteria types.
Might be cleaner to just set up a separate workload for CALL statements.  You can still run it at a high priority if you wish.  Doing so would also make it easy for you to break out SP calls independently for monitoring things like resource usage, etc.
Thanks, -Carrie

StevenSchmid
06 Mar 2016

Thanks Carrie, I didn't have a Teradata System available to check the actual settings.

Steven Schmid
Teradata DBA
Canberra, Australia
