Have you ever wanted two levels of control on load utilities?  More specifically, have you ever wanted to limit how many load utilities a subset of users are able to run? This was not possible before, but it is supported in the Teradata 13.10 release.  Let me explain how this works.
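To make the idea of "two levels" concrete, here is a minimal sketch of the admission logic in Python. This is illustrative only, not Teradata's implementation: the limits, names, and the `can_start` helper are all made up for the example.

```python
# Illustrative sketch of two-level load-utility throttling (NOT Teradata's
# implementation): a system-wide cap on concurrent load utilities, plus a
# smaller cap for one subset of users. Limits here are hypothetical.

SYSTEM_LIMIT = 15      # hypothetical system-wide load utility limit
GROUP_LIMIT = 3        # hypothetical limit for a subset of users

def can_start(system_running: int, group_running: int, in_group: bool) -> bool:
    """Return True if a new load utility may start, else it must be delayed."""
    if system_running >= SYSTEM_LIMIT:
        return False                      # first level: system-wide cap
    if in_group and group_running >= GROUP_LIMIT:
        return False                      # second level: per-group cap
    return True

# A user in the restricted group is delayed once the group's 3 jobs are
# running, even though the system as a whole still has headroom.
print(can_start(system_running=10, group_running=3, in_group=True))   # False
print(can_start(system_running=10, group_running=3, in_group=False))  # True
```

The point of the second check is exactly the feature described above: the subset of users hits its own smaller ceiling before the system-wide ceiling comes into play.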

If you are planning on using Priority Scheduler CPU limits to restrict the amount of CPU available to the system after a hardware upgrade, there are two questions you will need to answer up front.

  1. What is the desired percent of the total platform CPU that you want to make available?
  2. What CPU limit setting will achieve that?

After answering those two questions, setting the CPU limit is simple to do.  So let’s focus on the harder part: answering those two preparatory questions, starting with how to determine your desired percent of CPU.
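The arithmetic behind the two questions can be sketched in a few lines. The power ratings below are made-up illustrative numbers; in practice you would use the relative power figures for your actual pre- and post-upgrade configurations.

```python
# Back-of-the-envelope arithmetic for the two questions above, using
# hypothetical platform power ratings (illustrative numbers only).

old_power = 100.0    # hypothetical rating of the pre-upgrade system
new_power = 160.0    # hypothetical rating of the post-upgrade system

# Question 1: to make only the old level of CPU available at first,
# the desired percent of the new platform's total CPU is:
desired_percent = old_power / new_power * 100
print(f"desired percent of total CPU: {desired_percent:.1f}%")

# Question 2: if the CPU limit is expressed directly as a percent of the
# platform's total CPU, the limit setting is simply that desired percent.
cpu_limit = round(desired_percent)
print(f"CPU limit setting: {cpu_limit}%")
```

With these example numbers, holding users to the old platform's capacity on a node that is 1.6x more powerful works out to a limit of roughly 62% of the new platform's CPU.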

Long before TASM, there were throttles.  We call them “system throttles” or “object throttles” today in order to differentiate them from TASM’s workload throttles.  These classic concurrency control mechanisms, which can delay queries before they begin execution, are alive and thriving.  And interestingly, they offer sometimes-forgotten power and flexibility that can make them very useful for customers without full TASM implementation or capabilities who still want more control over their workload environment.

In the past, you, like many people, probably considered 62 AMP worker tasks (AWTs) as the logical limit on the number that can be used to support user work at any one point in time.

Have you ever tried to figure out ahead of time how many CPU seconds an application would require after upgrading to a new hardware platform?  I talked about one approach to this problem at the recent Partners Conference in San Diego, and I’d like to share it with you.

First, there are a couple of assumptions we need to agree upon when it comes to converting CPU seconds from one node generation to another:
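The core of any such conversion is a scaling step, which can be sketched as below. This is a simplified illustration, not the method from the talk: it assumes a single power ratio capturing how much more work one CPU second accomplishes on the new nodes, and the numbers are invented.

```python
# Simplified sketch of converting CPU seconds across node generations
# (illustrative only, not the full method): scale measured usage by the
# ratio of per-CPU-second power between the two platforms.

old_cpu_seconds = 3_600.0   # application usage measured on the old nodes
power_ratio = 2.4           # hypothetical: one new CPU second does 2.4x
                            # the work of one old CPU second

new_cpu_seconds = old_cpu_seconds / power_ratio
print(f"estimated CPU seconds on new platform: {new_cpu_seconds:.0f}")
```

With these made-up numbers, an hour of CPU on the old nodes translates to roughly 1,500 CPU seconds on the new ones. The assumptions mentioned above matter precisely because a single ratio like this glosses over differences between CPU-bound and I/O-bound work.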

Penalty boxes have been around for years.  They come in all shapes and sizes.  Their single, focused purpose is to lock away bad queries and thereby protect good queries, while not condemning the bad ones to the ultimate penalty of being aborted.  If you’re using penalty boxes, I want to encourage you to look inside them once in a while, and open your eyes to some of the side-effects you might not have noticed in the past.

I’ve mentioned it before; Marcio has blogged about it; customers have brought it up at the Partners Conferences. It’s cheap, fast, and risk-free, with immediate results. But some of you are still not getting it. Or it could be you’re one of the few who truly don’t need it.

Either way, I’m going to take this opportunity to pound away a little more on the advantages of collecting statistics on your dictionary tables.

Collecting full statistics involves scanning the base table and performing a sort to compute the number of occurrences for each distinct value. For some users, the time and resources required to adequately collect statistics and keep them up-to-date is a challenge, particularly on large tables. Collecting statistics on a sample of the data reduces the resources required and the time to perform statistics collection. This blog gives you some tips on using sampled stats.
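The trade-off described above can be seen in a toy simulation. This is not Teradata's statistics algorithm, just an illustration of the general idea: a small sample costs a fraction of the reads, and for columns whose values repeat heavily it still observes nearly every distinct value.

```python
import random

# Toy illustration of full vs. sampled statistics (NOT Teradata's
# algorithm): compare an exact distinct count over all rows with the
# distinct count seen in a 2% sample.
random.seed(0)
rows = [random.randrange(1000) for _ in range(100_000)]  # fake column values

# Full statistics: scan every row, sort/count exact distinct values.
full_distinct = len(set(rows))

# Sampled statistics: read only 2% of the rows. With ~100 rows per value,
# the sample still sees most of the distinct values at a fraction of the I/O.
sample = random.sample(rows, k=len(rows) // 50)
sampled_distinct = len(set(sample))

print(f"full scan: {full_distinct} distinct; 2% sample: {sampled_distinct}")
```

The flip side, and the reason sampled stats need care, is that for nearly-unique columns a small sample sees almost no repeats and the estimate degrades, which is why tips on where sampling is appropriate matter.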

Last month I talked about things that don’t honor a CPU limit, and explained what a CPU limit is. This month I’d like to look at CPU limits from a slightly different perspective: what happens when you define CPU limits at multiple levels? For example, you may already have a system level CPU limit on your platform, but now you’d like to use CPU limits on one or two of your resource partitions (RP) as well. Yes, you can do this. Read along with me while I explain what you can expect.
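As a rough mental model of multi-level limits, consider the sketch below. This is my simplified illustration, not Teradata's documented semantics (see the post for those): it assumes each limit is expressed as a percent of total platform CPU and that the tighter of the two limits is the one that binds.

```python
# Simplified mental model of nested CPU limits (illustrative assumption:
# each limit is a percent of total platform CPU, and the tighter limit
# wins). All numbers are hypothetical.

system_limit = 80.0                          # system-level CPU limit (%)
rp_limits = {"RP0": 50.0, "RP1": 90.0}       # RP-level CPU limits (%)

# An RP's effective cap can never exceed the system-level limit.
effective = {rp: min(limit, system_limit) for rp, limit in rp_limits.items()}

for rp, cap in effective.items():
    print(f"{rp}: effective cap {cap:.0f}% of platform CPU")
```

Under this assumption, RP0's own 50% limit binds, while RP1's 90% limit is moot because the 80% system limit is tighter.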

Maybe you want to ensure that your sandbox applications never use more than 2% of the total platform CPU. No problem! Put a CPU limit of 2% on them. Or maybe you’ve got some resource-intensive background work you want to ensure stays in the background. CPU limits are there for you. But if you plan to use CPU limits as part of your workload management scheme, be aware that there are some database operations that simply won’t obey the limits. So let’s take a look at what those special cases are and why they’re allowed to violate the rules.