Penalty boxes have been around for years.  They come in all shapes and sizes.  Their single, focused purpose is to lock away bad queries and thereby protect good queries, while not condemning the bad ones to the ultimate penalty of being aborted.  If you’re using penalty boxes, I want to encourage you to look inside them once in a while, and open your eyes to some of the side-effects you might not have noticed in the past.

What is a Penalty Box, Really?

First, let me emphasize that only a small subset of Teradata sites feel they need to have a penalty box.  This is not a widespread approach, and you are not missing out if you don’t have one defined.   

That said, a penalty box is a priority scheduler allocation group with a low relative weight, where poorly-written queries (such as one with a missing join constraint) can be relocated.  The real force behind the penalty box is not the low relative weight (relative weights can be yielding) but rather a very tight CPU limit placed on that allocation group, usually (but not always) set at 5% or less.  This keeps CPU usage for penalized queries capped, no matter what else is running on the platform.  As a result, a query moved into the penalty box is slowed down, reined in, caged.
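The difference between a yielding relative weight and a hard CPU limit can be sketched with a toy model. This is an illustrative sketch only; the function, its parameters, and the numbers are assumptions, not Teradata internals:

```python
# Toy model of priority scheduler behavior -- an illustrative sketch, not
# Teradata code. A relative weight yields: when the system is idle, unused
# cycles fall through to the low-weight group. A hard CPU limit caps the
# group no matter how idle the system is.

def cpu_share(relative_weight, total_weight, system_busy, cpu_limit=None):
    """Fraction of platform CPU an allocation group can consume."""
    if system_busy:
        share = relative_weight / total_weight  # weights arbitrate under contention
    else:
        share = 1.0                             # idle cycles fall through
    if cpu_limit is not None:
        share = min(share, cpu_limit)           # the limit is enforced unconditionally
    return share

# A 1%-weight group with no limit soaks up an idle machine:
print(cpu_share(1, 100, system_busy=False))                  # 1.0
# The same group with a 5% CPU limit stays caged even when idle:
print(cpu_share(1, 100, system_busy=False, cpu_limit=0.05))  # 0.05
```

This is why the CPU limit, not the weight, is what cages the query.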

Before TASM became popular, or even existed, DBAs at some sites would be on the lookout for rogue queries that were exhibiting odd behavior, like maintaining very high CPU-to-I/O ratios, or performing in an outrageously skewed manner.  Once the DBA’s attention was lasered onto one of these queries, he could manually evict the query from whatever priority it was running at and force it into this penalty box, if he decided that was the correct thing to do.  From that point on the query faced a life sentence of slowly creeping to completion.


If more than one query at a time was demoted by the DBA, the penalty box might get crowded.  CPU-per-query becomes scarcer when multiple queries share the resources of an allocation group with a low CPU limit.  A crowded penalty box results in lengthier elapsed times for the imprisoned queries, and the queries hold resources while they are still alive, such as AMP worker tasks, locks, and spool files.  But because the pre-TASM DBA was reaching a verdict on each demoted query one at a time, he usually could tell about how full the penalty box was getting at any point.
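A hedged back-of-the-envelope calculation shows how quickly crowding compounds. The percentages here (a 5% cap, a 20% "normal" share) are assumed for illustration only:

```python
# Back-of-the-envelope sketch: N queries evenly sharing a capped allocation
# group. The 5% cap and the 20% "normal" CPU share are illustrative
# assumptions, not measurements.

def per_query_cpu(cap_percent, concurrent_queries):
    """CPU percent each penalty box query receives, assuming an even split."""
    return cap_percent / concurrent_queries

def elapsed_multiplier(normal_percent, cap_percent, concurrent_queries):
    """Rough factor by which elapsed time stretches versus the query's
    normal CPU share (ignoring I/O and other effects)."""
    return normal_percent / per_query_cpu(cap_percent, concurrent_queries)

# One demoted query under a 5% cap that would normally get 20% of the CPU:
print(elapsed_multiplier(20, 5, 1))   # 4.0
# Four demoted queries sharing that same 5% cap:
print(elapsed_multiplier(20, 5, 4))   # 16.0
```

Four queries in the box do not run four times slower than one; each runs sixteen times slower than it would at its normal share, holding its locks, spool, and AMP worker tasks the whole time.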

Enter TASM

With the introduction of TASM, a piece of code called exception handling often replaced the DBA and became the judge and jury when it comes to moving queries into a penalty box.  This automation is good in many ways, and has greatly simplified the life of the DBA.  But at the same time it has taken away the direct knowledge that the DBA used to have about what’s going on in the penalty box.  With TASM automating query demotions, it is possible for the penalty box to become over-populated, unbeknownst to the DBA.

There are several ways that over-population of the penalty box can happen:

  • The DBA is benefiting from workload exceptions to automate the move of a bad query into the penalty box, instead of performing the demotion manually.
  • The exception criteria on the workload don’t match the characteristics of the queries that are running, so too many are being demoted.
  • The exception action does not include sending an alert to the DBA so he knows how many demotions are taking place, and when.
  • Query execution patterns have changed, and more poorly-written queries are being submitted.
  • The CPU limit on the penalty box allocation group is too low, preventing queries relegated there from completing in a reasonable time.

Why Should You Care?

I want you to understand all of this because I have seen cases where AMP worker task exhaustion is directly attributable to over-crowded, overly-restrictive penalty boxes.  This can more easily happen with TASM because the DBA no longer has to be involved in the process.  What can aggravate this tendency is that complex queries that are candidates for demotion often have parallel steps that do things like row redistributions, duplications, or global aggregations, all of which require multiple AMP worker tasks at a time.  Even having just three such queries in the penalty box could tie up 10 or more AMP worker tasks for a long time.  This could make it hard to get other, new work started up.

If You Have a Penalty Box, What Can You Do?

All workload management decisions come with tradeoffs. I know that you understand that.   But sometimes out of sight is out of mind.  So when you do your normal workload management tuning, and when you review your TASM settings, as I’m sure you do regularly, put penalty box health and balance high on your list of things to review. 

gryback 151 comments Joined 12/08
20 Aug 2010

Great article Carrie! I would add that paying attention to TASM exceptions in general is something to be considered as this may be an opportunity for TASM tuning or user education on building good queries.

AbeK 37 comments Joined 08/09
23 Aug 2010

What is your opinion/recommendation on having query limits on the penalty box, trying to allow the bad ones to complete before the next one comes into the workload?  Also, how can we look at/measure the AWTs in use by a particular session, potentially using that as an indicator for alerting the DBA for reaction?

carrie 595 comments Joined 04/08
26 Aug 2010

Managing the level of concurrency in a penalty box is a very good idea. If queries classify directly into a penalty box workload, you could indeed set up a query limit on the workload and keep concurrency down to whatever level makes sense to you.

But it is more difficult to do that if the queries enter the penalty box workload by means of demotion from another workload. Demoted queries are not subject to control by a workload throttle that is defined on the workload they are demoted into. The query limit only impacts queries that classify directly to that workload.

Demoted queries will, however, cause the query counter to be incremented, if there is a workload throttle on the workload where they are demoted. This will make it less likely that new queries that classify there will be able to start running any time soon. But it also means that demoted queries are getting query slots ahead of queries that classify to that workload, which many sites do not like. For that reason, penalty box workloads are often defined in such a way that no queries classify to them, but are only used for demotions. The only downside here is what you have pointed out: you can't control concurrency from demotions.
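That throttle behavior can be captured in a small model. The class and method names here are hypothetical; this is a sketch of the semantics just described, not the TASM implementation:

```python
# Hypothetical model of a workload throttle, sketching the semantics
# described above: queries that classify directly to the workload are
# subject to the query limit, while demoted queries bypass the delay
# queue yet still increment the active counter.

class WorkloadThrottle:
    def __init__(self, query_limit):
        self.query_limit = query_limit
        self.active = 0
        self.delay_queue = []

    def classify(self, query):
        """A query classifying directly to the workload honors the limit."""
        if self.active < self.query_limit:
            self.active += 1
            return "running"
        self.delay_queue.append(query)
        return "delayed"

    def demote_into(self, query):
        """A demoted query is already executing, so it cannot be delayed --
        but it still consumes a slot against the limit."""
        self.active += 1
        return "running"

box = WorkloadThrottle(query_limit=1)
print(box.demote_into("rogue-1"))   # running -- demotion bypasses the throttle
print(box.demote_into("rogue-2"))   # running -- even above the limit
print(box.classify("new-query"))    # delayed -- counter is already past the limit
```

The net effect: demotions can push the box arbitrarily past its query limit, while new arrivals that classify there get stuck behind them.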

That is why it is important to monitor or periodically review activity in the penalty box, if you are using one.

There is no way I know of to view the number of AWTs used by a particular session. You can look at the explain, and if you know what step it is on you can make an educated guess.

Thanks, -Carrie

David.Roth 17 comments Joined 05/09
28 Aug 2010

Also agree, great article! I do have a further question.

In the days before penalty boxes, we used to use TDQM to identify very bad queries and reject them for analysis by the DBAs and developers to look for inefficient code. Now, it seems that clients are not taking that step, thinking they need to let every query run, so instead of rejecting them, they load up the penalty box, leading to the issues you describe.

My opinion is that there are still some queries that are "too bad to run" and that they should be rejected and analyzed instead of allowing them to continue to consume resources. What is your take on this? - Dave

carrie 595 comments Joined 04/08
30 Aug 2010

Hi Dave,

Glad you agree. Thanks for commenting. I did do an earlier post called "rejection can be rewarding" that confirms your viewpoint that some queries are "too bad to run". It's at

Thanks, -Carrie

monisiqbal 17 comments Joined 07/09
02 Sep 2010

Maybe we could create multiple penalty boxes with slightly different characteristics for each of them. Of course this will have implications for each penalty box, but we can deal with them categorically.

AbeK 37 comments Joined 08/09
03 Sep 2010

One of the attributes of the penalty box is the hard CPU cap, and having too many of these partitions will essentially carve out that much CPU from the system to be rendered useless, as the long-running queries will be using it. So too many penalty boxes might be detrimental... I was hesitant to have a second one and had to split the 5% CPU across two as 3% and 2% so we don't waste more than we should... 5% is wasted anyway and that by itself is terrible. Wish we could abort the bad ones... but that's always a battle..

carrie 595 comments Joined 04/08
03 Sep 2010

You could split a single penalty box into two or more penalty boxes, but if you are still using CPU limits on all of them, you may end up with similar issues if you are not aware of how many concurrent queries are running in each. In addition, more active allocation groups are not always a good idea; they may dilute the relative weight of other allocation groups too much. It will really depend on how that approach fits into the entire setup. I think the main point is being aware of what gets moved into penalty boxes, either through alerts or by taking a look once in a while, or doing after-the-fact analysis on a regular basis. Thanks, -Carrie

rksumner 2 comments Joined 04/09
30 Sep 2010

Carrie, great article and explanation of something I've observed but not fully understood (probably still don't) ;-).
Perhaps part of the problem with the penalty box overcrowding since TASM is that it isn't always easy to characterize a bad query. Some good queries do run long and do use a lot of spool. Some foreign keys do have skewed distributions, which cause skewed join steps, and that is normal too, but the DBA generally had knowledge of these conditions and could be selective about demotion. And perhaps the answer is another priority class or workload definition, but I think that this subject needs more evangelizing, as I don't believe that this situation is widely understood. -- Rick


carrie 595 comments Joined 04/08
30 Sep 2010


I agree with your points. Thanks for adding them into the discussion. -Carrie

vkbagare 6 comments Joined 02/10
16 Oct 2010

Hi Carrie,

Even though we do have a Penalty box, I am a strong proponent of nuking a bad workload.
But I agree, we all live in an asymmetrical world; there has to be room for the bad "Workload" too.

Thanks for sharing your insight.
As always, they are informative.

Vinay Bagare

carrie 595 comments Joined 04/08
19 Oct 2010

Thank you for the comments, Vinay. I can't argue with your approach, as I've seen some sites save a non-trivial percent of their platform CPU by being willing to reject and/or abort badly-written queries as they identify them. -Carrie

Random_Thought 2 comments Joined 06/09
27 Oct 2010

Thanks for writing this article. I have been explaining this to various DBAs here and Teradata consultants, and I don't think they actually believed me. After all, the Teradata salesman told them that TASM fixed everything...... (we have CPU here, it's concurrency we are short of! AWTs etc.)

Now you have carved the words into the metaphorical stone tablet the word will spread.

For your next article, can you explain that TASM is only a tool, and not a wand, and that waving the TASM wand in the air and saying "query expedious" will not sort out their workload? That would be great! (tongue firmly in cheek)

House elf. Ministry of Magical banking

teradata.madhu 1 comment Joined 10/10
09 Nov 2010

Really Great Article and interesting too...

AbeK 37 comments Joined 08/09
10 Nov 2010

Habits are hard to break, especially when you go to new shop and find a limitless roof!! Getting your management team to realize the benefit of aborting is an awesome start and once they start seeing the benefits, tightening the knob becomes more acceptable!! TASM is a concept and needs a lot of socialization.

KhalidRehman 2 comments Joined 02/11
16 Feb 2011

From a TASM perspective, can't we just make a new RP for the penalty box, give it a very low relative weight, say 1%, and not specify any CPU limit? Considering the best case, when the system is almost free, making this 1% into 100%, the penalty box can utilise the free resources of the system. By enforcing the CPU limit, if there is nothing else running on the system, the requests in the penalty box won't be able to utilise the system resources (apart from 1%) which are there free... Agree that most of the queries are bad, but we are not rejecting them; we want those to be completed, but with the least priority...
Any comments ?

carrie 595 comments Joined 04/08
18 Feb 2011


Yes, what you are describing is certainly an option. Some people prefer to add a CPU limit onto a low-weighted allocation group because their system is generally quite busy, and they have decided that they do not want poorly performing queries to have more than a limited amount of the resources.

Often an allocation group with a 1% relative weight will consume more than 1% of the CPU, usually because of unused resources falling through to the lower priorities. Relative weight by itself sometimes is not strong enough to really keep low priority work at a low consumption level. On a very busy system things often run better if there is some small amount of unused resources, some breathing room. CPU limits can be useful in making sure there is breathing room. But it is entirely up to you how you design the penalty box.

Thanks, -Carrie

nyjoeyp 1 comment Joined 03/11
06 Mar 2011

Hi Carrie,

I actually understood what you described above for the most part. :-) The resources that are tied up become troublesome on a congested system. A reduced workload trigger in TASM, based on a maximum threshold of AWTs in use on so many AMPs for a predefined period of time, can help.

Joe P.

koebyj 1 comment Joined 10/07
01 Dec 2011

Question: Where in Viewpoint do we set up these rules and demotions for long running queries?

Koeby J

gryback 151 comments Joined 12/08
01 Dec 2011

First and foremost, you need a Teradata platform with entitlement to TASM. Then within the Workload Designer portlet, you will be able to set up TASM exceptions that allow for an exception action of demotion.

carrie 595 comments Joined 04/08
01 Dec 2011


Check the Viewpoint User Guide chapter on Workload Designer. That should help you with where and how to define exceptions on workloads.

Thanks, -Carrie

StevenSchmid 13 comments Joined 07/11
16 Feb 2012

Hi Carrie
I would like to create a penalty box under its own resource partition in TDWM, however there is a limit of 5 resource partitions, and I already have 5 (3 different business groups with multiple AGs & WDs each, 1 tactical RP, and the default RP). I thought I may have read somewhere that the default RP is no longer really used by database utilities, and can be removed. Can you advise if we are able to delete the default RP?

Steven Schmid
Teradata DBA
Canberra, Australia

carrie 595 comments Joined 04/08
16 Feb 2012

Hi Steve,

You can try to delete the default RP, but you will not be allowed to. Neither can you delete or rename any of its performance groups. The default RP is special in that way, it is protected. This is because some small amount of internal DBS code still runs there from time to time (although the really critical stuff has been moved to System).

You do not actually have to have a separate resource partition to set up a penalty box, although I can understand you have nice clean divisions with your current setup. You could put it in any of your RPs (probably not the tactical RP though). As a separate workload associated with its own allocation group, you can give it an allocation group-level CPU limit that will only impact the requests that are moved into that workload.

Among limited choices, my suggestion would be to put the penalty box workload into one of the 3 business group RPs, the one which is likely to have the most queries that require penalization.

Thanks, -Carrie

shankerao 6 comments Joined 03/11
24 Jul 2012

Hi Carrie,

We are facing the issue of penalty box over-population locking up resources. Even though we set the QUERY LIMIT to 1, it's not working as expected. We can see many queries in the penalty box. Can you please explain whether we can delay the queries before they are downgraded from one workload to another? We want to limit queries in the penalty box. Please advise.


carrie 595 comments Joined 04/08
26 Jul 2012

Hi Shanker,

There is no mechanism for delaying queries that have already begun execution. When a query is demoted into a penalty box workload from another workload, that query is already active, so it cannot be delayed.

Also, even if you have a query limit on the workload that represents the penalty box, that will not prevent queries from being demoted into that workload; it will only prevent new queries from starting up in that workload, queries that classify there.

If you have too many queries in the penalty box due to demotions, your choices are the following:

-- Change the demotion criteria so fewer queries are being demoted.

-- Put an exception on the penalty box workload that aborts some of the higher-consuming queries.

-- Manually move queries out of the penalty box to some other workload.

-- Increase the CPU limit on the penalty box workload (if you have one), so the penalty box queries can receive more resources.

Sometimes filter rules can be useful in preventing undesirable queries from even getting started, so you might want to take a look at filters as a way of rejecting some of these queries before they get into the penalty box.

Thanks, -Carrie

SmarakDas 25 comments Joined 02/12
01 Oct 2012

Thanks Carrie for clearing a major doubt. I always thought what would happen if a query is demoted to a workload whose concurrency limit (set by Throttle) is already filled. As per your explanation, such demoted queries are independent of Throttle limit (Throttle Limit is set for Queries that directly classify to that particular Workload) and they would enter the workload increasing the Throttle Counter by one (doing so, further delaying the Delayed Queries of that Workload). I kind of summarized what you explained so well above, but then summarizing feels good. Big Thanks again Carrie.

carrie 595 comments Joined 04/08
04 Oct 2012

You are very welcome. Thanks for taking time to let me know that it helped. - Carrie

geethareddy 145 comments Joined 10/11
06 Jan 2013

Hi Carrie,
We recently migrated to Teradata from Sybase. We have a full license to TASM for our PRODUCTION system, which is on V14, SLES 10 SP3. After going through the above article and different views on the Penalty Box (PB), I am considering that the PB is not a mandatory feature everybody should have; as we are on V14 there are enough features in TASM. As we moved to TD just 10 months ago, we are facing many challenges with heavy skew issues on batch jobs. We are able to figure out that some of them are due to bad PI choices, but there are still things that need to be analyzed and fixed at the same time. TD services are working with us to improve the situation.

I believe that, as of now, we are on a very recent version of the database, VP (14.1), and TASM. I say NO NO to the penalty box, as we would need to keep some % of resources aside and manually keep things on the DBA radar to keep the PBox from getting out of control (overpopulated). I vote for filters and exception handling instead. Currently we are in the data loading phase (from Sybase to TD); we assigned a 1% relative weight per allocation group under 'Background (Enforcement Priority)'. But still I would like to hear from you regarding keeping the PBox.


carrie 595 comments Joined 04/08
09 Jan 2013

Whether or not you want to define a penalty box workload is completely up to you. Most sites that use a penalty box do not have to spend too much time preventing it from becoming over-populated. But overpopulation could become a problem if the conditions under which queries are moved into the penalty box are not designed correctly, or if patterns of demotions into the penalty box change suddenly. So I don't think you have to plan ahead of time to set a certain percent of DBA time aside just to monitor a penalty box, if you decide you want to use one.

I would never advise anyone to decide to include a penalty box workload before they have exhausted other workload management opportunities. You are correct that there are many options in TASM to control the heavy query work (throttles, low priorities, exceptions, filters). So yes, it is a good idea to work with those features first.

However, here is the main reason some sites have turned to penalty boxes: They have some set of queries that consume more than they are intended to, and by doing so they disrupt other active work. Sometimes this is due to a query being very I/O-intensive, which the SLES 10 priority scheduler is not designed to manage. Setting up a penalty box with a low CPU limit has proven to be an effective way to keep those kinds of queries under control. A low relative weight alone may be adequate for keeping resource-intensive or highly-skewed queries under control without using a penalty box. However, I have seen cases where priority alone is not the complete answer.

So I would keep an open mind about the possibility of using penalty boxes in the future, and then only if you find that other opportunities are either not working for you or have tradeoffs that make them less desirable for you to use. And if you use penalty boxes, make sure the conditions under which a query is moved into the penalty box are truly "exceptional" and not the norm. That should help to keep concurrency in the penalty box at a reasonable level.

You are going to want to keep an eye on all your workloads through regular monitoring and evaluation anyway. This blog is simply a reminder that penalty boxes are not exempt from your monitoring effort, and should be checked for both effectiveness and unplanned-for side effects. Just like any workload management technique.

Thanks, - Carrie

geethareddy 145 comments Joined 10/11
10 Jan 2013

Thank you Carrie and I agree with you that WLM has its own trade offs and we need to react and make use of the features available based on the system condition.



macktd 6 comments Joined 09/12
25 Feb 2013

How do we know the number of AWTs used by a particular session?


carrie 595 comments Joined 04/08
27 Feb 2013


Unfortunately, that information is not made available. You can only guess by looking at the explain text.

The ResUsageSps table will tell you AMP worker tasks used by the workload, but if you have more than one query active in the workload, that doesn't really provide what you are looking for.

Thanks, -Carrie

mohdmufid4u 4 comments Joined 10/10
18 Sep 2013

Hi Carrie,
Thanks again for a wonderful article... very helpful, and you are making all of us in the forum a little Carrie :)
Couple of questions I have in my mind.
We are a finance company and aborting batch queries is not an option for us.

  • However I would like to abort the bad queries which are user written adhoc.
  • We have a penalty box where, based on CPU, the queries are demoted to a rogue bucket, but they still consume a lot of CPU. What would we be missing?
  • Most of the time we see that the explain plan is bad, so we demote the query and it doesn't get enough resources, but it actually executes with less CPU and we have users complaining about the delay. How to fix this?
  • In Viewpoint, can we have the query aborted at the query level instead of the session level, because aborting a session may lead to loss of work, as many people use volatile tables?
  • What should be the criterion in TASM for defining a bad query and having it aborted? Should it be time-based or CPU-based, and what threshold would you recommend?

Thanks in advance for your help.

carrie 595 comments Joined 04/08
24 Sep 2013

It's usually a good idea if you can identify poorly-written queries before they run and create filter rules in TASM to reject them before they begin to run.   I have seen such rules be very successful in preventing large unconstrained product joins from getting into the system.   However, aborting bad queries, once they have been identified is also a commonly used approach.
If you do not choose to abort a CPU-intensive query, then it may continue to consume a lot of CPU no matter which workload you move it to, even if that is a penalty box.  The only choice you have if you do not want that much CPU used by a query is to abort the query.
Are you saying that you use the explain estimates to classify queries, and that sometimes explain estimates cause a query to classify to a low priority workload, and then the user complains because the query takes a long time to complete?  If that is the case, it can indicate that you need to collect better statistics, so the optimizer can produce more accurate estimates.  If there are certain queries that are always critical, you may not want to use estimates to classify them, if your estimates are inaccurate.
I cannot advise you when to abort a bad query or whether to abort at the request or the session level.  Those decisions are very site-specific and depend on what business needs you are trying to meet and the importance of the work involved.   But from what I have seen, many sites use CPU time and skew when they decide whether or not to abort a query.  You could post that question on the Teradata Forum, where many people will have the opportunity to respond to you, and see if others can share examples with you.
Thanks, -Carrie
