Posted 18 Apr 2011
Have you ever wanted two levels of control on load utilities? More specifically, have you ever wanted to limit how many load utilities a subset of users are able to run? This was not possible before, but it is something that is supported in the Teradata 13.10 release. Let me explain how this works.
A utility throttle evaluates, and may delay, a utility job at the CHECK WORKLOAD END statement. CHECK WORKLOAD END is an SQL statement that precedes the utility logon...
Posted 28 Jan 2011
If you are planning on using Priority Scheduler CPU limits to restrict the amount of CPU available to the system after a hardware upgrade, there are two questions you will need to answer up front.
After answering those two questions, setting the CPU limit is simple to do. So let’s focus on the harder part, answering those two preparatory questions, starting with how to determine your desired percent of CPU.
Posted 04 Jan 2011
Long before TASM, there were throttles. We call them “system throttles” or “object throttles” today in order to differentiate them from TASM’s workload throttles. These classic concurrency-control mechanisms, which can delay queries before they begin execution, are alive and thriving. And interestingly, they offer sometimes-forgotten power and flexibility that can make them very useful for customers without a full TASM implementation, but who still want more control over their workload environment.
Posted 07 Dec 2010
In the past, you, like many people, probably considered 62 AMP worker tasks (AWTs) to be the logical limit available to support user work at any one point in time.
Posted 15 Nov 2010
Have you ever tried to figure out ahead of time how many CPU seconds an application would require after upgrading to a new hardware platform? I talked about one approach to solving this problem at the recent Partners Conference in San Diego, and would like to share my approach with you. First, there are a couple of assumptions we need to agree on when it comes to converting CPU seconds from one node generation to another:
Posted 20 Aug 2010
Penalty boxes have been around for years. They come in all shapes and sizes. Their single, focused purpose is to lock away bad queries and thereby protect good queries, while not condemning the bad ones to the ultimate penalty of being aborted. If you’re using penalty boxes, I want to encourage you to look inside them once in a while, and open your eyes to some of the side-effects you might not have noticed in the past.
Posted 24 May 2010
I’ve mentioned it before, Marcio has blogged about it, customers have brought it up at the Partners Conferences. It’s cheap, fast, risk-free, with immediate results. But some of you are still not getting it. Or it could be you’re one of the few who truly don’t need it. Either way, I’m going to take this opportunity to pound away a little more on the advantages of collecting statistics on your dictionary tables.
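The collections themselves are a one-line habit. As a hedged sketch (the specific dictionary tables and columns worth covering vary by release and workload, so treat these choices as illustrative, not a definitive list):

```sql
-- Illustrative dictionary-table statistics collections (pre-Teradata-14 syntax).
-- Verify the recommended table/column list for your release before adopting.
COLLECT STATISTICS ON DBC.TVM COLUMN TVMNameI;
COLLECT STATISTICS ON DBC.TVM COLUMN DatabaseId;
COLLECT STATISTICS ON DBC.TVFields COLUMN TableId;
COLLECT STATISTICS ON DBC.Indexes COLUMN TableId;
COLLECT STATISTICS ON DBC.Dbase COLUMN DatabaseNameI;
```

Because dictionary tables are small, statements like these complete quickly, and the optimizer gets real demographics to work with whenever queries join the dictionary views.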
Posted 29 Apr 2010
Collecting full statistics involves scanning the base table and performing a sort to compute the number of occurrences for each distinct value. For some users, the time and resources required to adequately collect statistics and keep them up-to-date is a challenge, particularly on large tables. Collecting statistics on a sample of the data reduces the resources required and the time to perform statistics collection. This blog gives you some tips on using sampled stats.
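To make the trade-off concrete, here is a minimal sketch using the 13.10-era syntax; the table and column names are hypothetical:

```sql
-- Full statistics: scans the whole table and sorts to count each distinct value.
COLLECT STATISTICS ON Sales_Table COLUMN (Order_Date);

-- Sampled statistics: reads only a sample of the rows,
-- cutting both the elapsed time and the resources consumed.
COLLECT STATISTICS USING SAMPLE ON Sales_Table COLUMN (Order_Date);

-- Recollecting without naming a column refreshes all existing
-- statistics on the table, reusing each one's original mode.
COLLECT STATISTICS ON Sales_Table;
```

The sampled form is most attractive on large tables with reasonably uniform value distributions, where a sample can still capture the demographics the optimizer needs.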
Posted 15 Mar 2010
Last month I talked about things that don’t honor a CPU limit, and explained what a CPU limit is. This month I’d like to look at CPU limits from a slightly different perspective: what happens when you define CPU limits at multiple levels? For example, you may already have a system-level CPU limit on your platform, but now you’d like to use CPU limits on one or two of your resource partitions (RPs) as well. Yes, you can do this. Read along with me while I explain what you can expect.
Posted 04 Feb 2010
Maybe you want to ensure that your sandbox applications never use more than 2% of the total platform CPU. No problem! Put a CPU limit of 2% on them. Or maybe you’ve got some resource-intensive background work you want to ensure stays in the background. CPU limits are there for you. But if you plan to use CPU limits as part of your workload management scheme, be aware that there are some database operations that simply won’t obey the limits. So let’s take a look at what those special cases are and why they’re allowed to violate the rules. |