If you are planning on using Priority Scheduler CPU limits to restrict the amount of CPU available to the system after a hardware upgrade, there are two questions you will need to answer up front.

  1. What is the desired percent of the total platform CPU that you want to make available?
  2. What CPU limit setting will achieve that?

After answering those two questions, setting the CPU limit itself is simple.  So let’s focus on the harder part, answering those two preparatory questions, starting with how to determine your desired percent of CPU.

Desired CPU Percent on Uniform Configurations

Determining the right percent of CPU to make available after an upgrade is most often thought of in terms of nodes.  For example, you have 6 nodes, and you are going to add 4 more nodes of the same type.  After the upgrade you might want to hold back the CPU from 2 of those nodes, and make that power available at a later date.

The math is fairly easy here:  You want to make the CPU from 8 nodes out of 10 nodes available.  8 / 10 = 80%.  The percent that represents your desired level of CPU is 80% in this case.
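Expressed as code, the uniform case is a one-liner (a trivial sketch; the variable names are just for illustration):

    available_nodes, total_nodes = 8, 10
    desired_cpu_percent = available_nodes / total_nodes * 100   # 80.0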

Desired CPU Percent on Co-existence Systems

Taking the same example of a 6-node system, assume that the 4 new nodes are of a newer generation.  Assume that the old nodes have 10 AMPs per node and the new nodes have 12 AMPs per node.

After the expansion, the total number of nodes in the configuration will be 10.  Suppose you wish to remove two new nodes’ worth of processing power from the configuration initially.

By considering the number of AMPs configured on each node type, the old nodes can be assumed to be 5/6 as powerful as the new nodes (10 AMPs / 12 AMPs). Using that as input, a new-node-equivalent can be calculated for the entire configuration, as one approach to finding the desired CPU ceiling percent:

     Old nodes:  5/6 * 6 nodes = 5 new-node-equivalents

     New nodes:  4 nodes * 1 = 4 new-node-equivalents

     Total new-node-equivalents: 5 + 4 = 9

     Total new-node-equivalents after removing 2 nodes: 9 - 2 = 7

     The percent that represents the CPU ceiling:  7/9 = 77.8%, or roughly 78%

The approach used above to find the correct CPU ceiling in a co-existence system means converting all nodes to new-node-equivalents, then subtracting the number of new-node-equivalents to be removed, and finally dividing the reduced total by the full total.  (The uniform case earlier is just the special case where every node converts at a ratio of 1.)
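The same arithmetic can be expressed as a short Python sketch. This is a minimal illustration, not part of any Teradata tool; the function name and structure are assumptions made here for clarity:

    def desired_cpu_percent(nodes_by_type, amps_by_type, new_type, held_back):
        # Convert every node to new-node-equivalents using the AMP-count
        # ratios, remove the held-back equivalents, and return the
        # remaining share of the platform CPU as a percent.
        new_amps = amps_by_type[new_type]
        total = sum(count * amps_by_type[t] / new_amps
                    for t, count in nodes_by_type.items())
        return (total - held_back) / total * 100

    # The example above: 6 old nodes (10 AMPs each), 4 new nodes
    # (12 AMPs each), holding back 2 new-node-equivalents.
    print(desired_cpu_percent({"old": 6, "new": 4},
                              {"old": 10, "new": 12},
                              "new", 2))   # 77.77... -> roughly 78%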

Let’s consider a more realistic example where a current system composed of four 5580s and four 5555s is being supplemented with six 5650 nodes.  CPU from two of the 5650 nodes is intended to be held back, and released later.  The question being answered below is “What is the percent of CPU we want available on the COD platform after removing CPU equivalent to two 5650 nodes?”


To answer that question, we use the differences in the number of AMPs on each node type to convert all of the older nodes to new-node-equivalent numbers.  Then we add up the new-node-equivalent numbers for all the node types, including the new nodes, and subtract two nodes for the two new nodes we want to hold back.  The final step is finding the ratio between the new-node-equivalent number that includes two fewer new nodes and the new-node-equivalent number that represents all nodes.  That gives us our desired CPU percent on the COD system, in this case 84%.
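Since the AMP counts per node type from the original graphic are not reproduced here, the figures below are assumptions chosen purely to illustrate the method (they happen to land on the 84% quoted above); substitute the real values for your configuration:

    # Hypothetical AMPs per node type -- for illustration only.
    nodes = {"5580": 4, "5555": 4, "5650": 6}
    amps  = {"5580": 20, "5555": 19, "5650": 24}

    new_amps = amps["5650"]
    total_equiv = sum(n * amps[t] / new_amps for t, n in nodes.items())
    # 4*(20/24) + 4*(19/24) + 6 = 12.5 new-node-equivalents
    held_back = 2
    print(round((total_equiv - held_back) / total_equiv * 100))   # 84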

Where to Set the CPU Limit

To answer the second question, the desired CPU percent for the COD platform must be translated into an appropriate Priority Scheduler or TASM CPU limit setting.  This translation is necessary because the CPU limit only restricts the CPU that is seen by Priority Scheduler, which is database-only CPU.

You will want to adjust the desired CPU ceiling percent downwards to account for the CPU that is used by non-database work on the node.  This non-database CPU usually comes from operating system and gateway activity, but could include non-database applications that have not been moved off the node.

The amount of this downward adjustment can be established by comparing CPU Busy % reported in the ResNode macro against the total CPU utilization levels reported by all active allocation groups (including Allocation Group 200, associated with the System Performance Group) in Priority Scheduler monitor output, or similar information reported in the ResUsageSPS table. 

CPU reported by the ResNode macros or the ResUsageSPMA table includes all CPU consumed on the node, including operating system, gateway, and application overhead.

The following graphic illustrates what happens if you try to set a CPU limit at the desired percentage without taking into account the non-database CPU.

As you can see, because the CPU being used outside of Teradata is 10% in this case, if you set the CPU limit parameter to the 80% you desire, you will actually end up seeing 90% CPU consumption.
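In other words, the setting should be the desired total percent minus the non-database percent. A minimal sketch, assuming the non-database share stays roughly constant at the level you measured:

    def cpu_limit_setting(desired_total_pct, non_database_pct):
        # Priority Scheduler caps only database CPU, so subtract the
        # non-database share from the desired total before setting the limit.
        return desired_total_pct - non_database_pct

    # The example above: 80% total CPU desired, 10% used outside Teradata.
    print(cpu_limit_setting(80, 10))   # set the CPU limit at 70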

All system-level CPU limits used for post-expansion capacity control must be adjusted downwards.  It may only be a few percentage points.  Ignoring the adjustment described above may lead to a CPU cap that is too lax.  CPU will then initially be available at a higher level than expected, and as a consequence, less CPU will flow into the system when the limit is removed.

Discussion
Ian Russell 4 comments Joined 08/10
24 Mar 2011

Carrie - is there any level below which you would say not to set this? The reason for asking is that our DR server is set at 35% - which we can increase during a DR event - but I always felt it was so low it must have an impact somewhere - not that I have identified it.

carrie 595 comments Joined 04/08
28 Mar 2011

Hi Ian,

I'm not aware of any lower limit on where you set a system-level CPU limit.

The thing to keep in mind is that CPU is only one of the critical resources on your platform. There is an attempt to balance resources on a given node at the time the system is originally configured. If you set a CPU limit too low, you will be that much more out of balance with other resources on the node, which are not being restricted, such as I/O, memory, etc., including the number of AMPs per node.

The impact on a given query, or a given subset of work, when you only restrict one resource is usually less predictable than if you were to add or remove entire nodes (which of course you are trying to avoid doing using CPU limits). CPU consumption will be held to the level you specify, even if it is quite low, but the impact on individual queries will vary.

IntensityMan 2 comments Joined 02/10
24 May 2011

Why is it that one cannot use workload defs in TASM (TDWM) on a Teradata appliance? Is there a physical/architecture issue causing the limit?

Dave

gryback 151 comments Joined 12/08
25 May 2011

This is more of a PM question than a technical one so I'll help Carrie out here. There are many aspects that direct our appliance workload management strategy but I'll mention a few here for context. Simplicity of the solution which is key in this space. Competitively, maintaining a leadership position. Differentiation for the Teradata Platform family. As part of this overall strategy, WDs were reserved for our full TASM solution. However we constantly review our strategy across all the dynamics and make adjustments where and when necessary.

Fahim Kundi 1 comment Joined 06/11
07 Jul 2011

In our case Teradata CS has applied the limit for Limiting CPU for Capacity on Demand. They have mentioned that they perform this activity through a defined script.
Do they consider the above scenario when they apply the limit, given that it is done through a standard script by the CS team?

carrie 595 comments Joined 04/08
08 Jul 2011

Unfortunately, I am not familiar with the process that was used in your case. The best I can suggest is that you consult with your CS representative to answer that question.

Thanks, -Carrie

kattamadhu 4 comments Joined 02/11
10 Nov 2011

Hi Carrie, can you please give a solution for my question?

question

I am working on TD13 trial version….

CREATE SET TABLE tduser.jn1_emp ,NO FALLBACK ,
NO BEFORE JOURNAL,
NO AFTER JOURNAL,
CHECKSUM = DEFAULT
(
emp_no INTEGER,
emp_loc VARCHAR(12))
UNIQUE PRIMARY INDEX ( emp_no );

INSERT INTO tduser.jn1_emp(1,'hyd');
INSERT INTO tduser.jn1_emp(2,'bang');
INSERT INTO tduser.jn1_emp(3,'visak');

COLLECT STATS ON tduser.jn1_emp INDEX(emp_no);

CREATE SET TABLE tduser.jn2_emp ,NO FALLBACK ,
NO BEFORE JOURNAL,
NO AFTER JOURNAL,
CHECKSUM = DEFAULT
(
pme_no INTEGER,
emp_name VARCHAR(12))
UNIQUE PRIMARY INDEX ( pme_no );

INSERT INTO tduser.jn2_emp(1,'raj');
INSERT INTO tduser.jn2_emp(2,'ravi');
INSERT INTO tduser.jn2_emp(4,'kishore');

COLLECT STATS ON tduser.jn2_emp INDEX(pme_no);

If I try to execute the following, it gives “low confidence” in the explain plan. Can anybody suggest how to make it “high confidence”?

EXPLAIN SELECT * FROM tduser.jn1_emp, tduser.jn2_emp
WHERE emp_no = pme_no;

carrie 595 comments Joined 04/08
11 Nov 2011

This is a duplicate question that was also added as a comment on the utility session management blog entry. The response can be found there, at:

http://developer.teradata.com/blog/carrie/2011/08/utility-session-management-its-inside-the-database-in-teradata-13-10#comment-17546

Thanks, -Carrie

shankerao 6 comments Joined 03/11
11 Jan 2012

Hi Carrie..I have question for you..
I have set up a work load management in our environment..One of the application bath user should be cap to 50% of CPU resources during night window..We have two operating environments one is day and the other is night..So I did one resource partition named SHANK allocated RP weight is 80 RP relative weight is 16.7 RP cpu Limit ..Does that really CAP my CPU limit to 50..Please advice..so currently we have resource partitions..TACTICAL with RP weight as 240 and remaing 3 partitions RP weoght as 80...Issue is when a user under SHANK RP is running collect stats it is consuming 90% of the resources and causing delay for other loads..Please advise...

shanker.k

shankerao 6 comments Joined 03/11
11 Jan 2012

RP CPU LIMIT is 50..Does that mean the user under that RP should take up to only 50% of CPU..Please advise..

shanker.k

carrie 595 comments Joined 04/08
13 Jan 2012

You are not able to place a CPU limit on a user or on an application. A CPU limit can only be placed on an allocation group, on a resource partition, or at the system level. Only those 3 levels. It sounds like you want to place a CPU limit on a resource partition (SHANK). If you want to limit all the work running in the resource partition to 50%, then relative weight assignments will not make that happen. You need to set a separate CPU limit setting on the resource partition at 50%.

If you have a 50% CPU limit on a resource partition, then all requests running there will be limited in combination to 50% of the CPU of the platform.

If you wish to limit how much CPU a collect statistics user is consuming, you can place a CPU limit on the allocation group in which the collect statistics request is running. That would be the lowest level of control. Then everything running in that allocation group, along with the collect statistics request, will be collectively limited to 50% at any point in time. Or you can set the CPU limit at the resource partition level, but then all the other work within the resource partition will also be capped along with the collect statistics request, so you may find with that approach that you are impacting a broader level of other work with the CPU limit.

Thanks, -Carrie

shankerao 6 comments Joined 03/11
16 Jan 2012

Hi Carrie..Thanks a lot for your info..Even though we set up a resource partition..a user running under that partition is consuming more CPU than it is allocated when COLLECT STATS statements are running..Is that expected..Please advise??

shanker.k

carrie 595 comments Joined 04/08
18 Jan 2012

I don't have a clear yes or no answer to your question.

There are some types of requests that do not completely honor a CPU limit. For example, an ALTER TABLE statement will not completely honor CPU limits, because there is some code executing during parts of an ALTER TABLE that are considered too critical to slow down. Also a rollback, with default settings, will not honor CPU limits.

I am not familiar with the COLLECT STATISTICS statement not adhering to CPU limits, nor can I think of any reason that would be the case.

You could see how the CPU limit behaves when the statistics collection is NOT running (and when there is no ALTER TABLE running and no rollback happening in that resource partition, and there is enough CPU demand to exceed the limit). Under those conditions observe whether or not the CPU limit is honored. Then add a COLLECT STATS statement and observe if the CPU limit is exceeded then. That should tell you more.

Either way, if your CPU limit is not holding within 5% or so of where it was set, you might want to open an incident with the support center. If you are on an older Teradata release, there have been a number of enhancements to CPU limit algorithms in more recent releases that might resolve your situation.

Thanks, -Carrie

shankerao 6 comments Joined 03/11
01 May 2012

Hi Carie,

In our workload management we have the RP CPU limit and AG CPU limit set to 100 in order to use all the CPU when the system is empty..whereas the RP relative weights and AG weights differ for each workload based upon the requirements..But we are not getting the use of workload management that we expected?? Is it advisable to set the limit to 100, or to divide the resources as per requirements (standard partitions)? Or if the limit is set to 40 or 50 for the RP CPU limit..can the available CPU still be used by the user?? Please clarify..Thanks a lot in advance..

shanker.k

carrie 595 comments Joined 04/08
04 May 2012

The CPU limit has nothing to do with the relative weight that is used by the different allocation groups. The CPU limit is a ceiling that limits how much CPU can be used by the RP or AG. You only want to set a CPU limit to something less than 100% if you really want to keep that RP or AG from using more than that level of CPU. If you have a CPU limit on all the AGs you have defined, then you will likely have unused CPU on your platform. That is because if one AG at some point in time is not using very much CPU, the other AGs may not be able to use it, because they have a limit that prevents them.

CPU limits are good in special cases, where there is an important reason to limit how much CPU an AG or RP is going to be allowed to use. But they are not intended to be used very widely, because then they may prevent unused CPU from being shared across the different priority groups, and you will see overall throughput go down.

Thanks, -Carrie

ARyan 4 comments Joined 01/08
30 Jul 2012

Hi Carrie,

My question relates to the effect (if any) that system-wide CPU limits might have on the relative importance of CPU and IO when priority scheduler calculates recent resource consumption for AGs that have been active within the age interval. Based on the consumption formulas listed in the Priority Scheduler Orange Book, it appears that IO consumption across all active AGs will always scale to a factor of 1 as long as there has been some IO, whereas CPU consumption across all active AGs will only ever scale to a factor of 1 when the system has been running at 100% CPU for the entire age interval. Therefore, for a CPU-limited system, or even one which has some idle CPU, is it true to say that IO will have a greater influence over priority scheduler consumption calculations? I hope I haven't totally confused you!

Andrew

carrie 595 comments Joined 04/08
01 Aug 2012

Andrew,

When calculating past usage, CPU usage is accounted against what is currently being consumed. If the system is only consuming 40% CPU, then everything is calculated as if that 40% were actually 100% of the system.

I/O works the same way. So even if you have a CPU limit on the allocation group, or on the system, CPU and I/O will weigh equally in terms of past usage.

There have been changes made to some of the internal algorithms in this area since the priority scheduler orange book was written, so I can see how it might be confusing.

Thanks, -Carrie

VasuKillada 12 comments Joined 10/11
10 Nov 2012

Greetings Carrie. I posted this question in TD Master's forum but thought to ask you here as well since I thought this could be a related question here and it might help someone who is also looking for answer like me...
Pasting my question here...

At our site we had a PMCOD done from 100% to 75% this weekend. Earlier, before PMCOD, we had 4800 CPU cycles for one node for a 10-minute interval; we have 8 CPUs per node. So the math I had earlier confirms that the total CPU per node per 10-minute interval is:

10(min)* 60(sec) * 8 = 4800 total CPU cycles per node.

After applying PMCOD (to 75%) I still get the total CPU cycles per node as 4800. I'm sure I'm missing something. I was expecting the CPU cycles to be 4800*0.75 = 3600 per node per 10-minute interval, because PMCOD is nothing but slowing the CPU cycles. Not sure if I got something wrong or something else.

My CPU total is calculated based on this from ResUsage table. SUM(CPUServSec+CPUExecSec+CPUWaitIOSec+CPUIdleSec) AS CPUTotal

Any scoop or insight will help me to take to next level for my analysis ( our PMCOD is both CPU and IO).

Similarly, is there a best way to calculate the IO cycles as well?

Thanks,
Vasu

carrie 595 comments Joined 04/08
12 Nov 2012

Vasu,

Your question has already been answered on the Masters forum, with this comment:

=======
Resusage will still show the same number of CPU seconds and will still show 100% CPU utilization when a full workload is running. But each of those CPU second will do 75% of the work it did before. Eg you will see the CPU seconds used for a query in dbql increase proportionately.
=======

Thanks, -Carrie

VasuKillada 12 comments Joined 10/11
12 Nov 2012

Thanks Carrie.

Thanks,
Vasu

MarkVYoung 20 comments Joined 03/12
06 Jan 2013

Hi Carrie,
I am looking at the Resusage tables and see that there is a column called SpareTmon00 in some of them that the manual says 'contains the COD value'. Our 6650H system shows this value as 500, which seems to indicate 50% which correlates with the proposed CPU COD, but how does that impact on our system when calculating the number of milliseconds of CPU available in each 10 minute interval in Resusage? I am basing this on the calculation of (No. of Nodes * No. of CPUs * 600*1000) which I would assume to indicate the total amount of CPU available on a system with no COD, running with 100% CPU.

carrie 595 comments Joined 04/08
09 Jan 2013

Hi Mark,

The SpareTmon00 column in the ResUsageSPMA table is actually named CODFactor in 14.0. It does indeed carry the COD percent, but only for platform metering COD, not for priority scheduler COD (which uses priority scheduler system level CPU limits, which the ResUsage subsystem cannot see).

So if the column is showing 500 that means your nodes are defined with 50% PM COD right now.

PM COD impacts your system by reducing the work that each CPU second can perform. It uses a metering technique that forces idle time into each CPU second. In your case, 50% of a CPU second's power has been removed, so each CPU second is only delivering 1/2 of the power that it had before. It will take twice as many such CPU seconds to get the same work done.

ResUsageSPMA will report CPU seconds as though they were full, as though you had no PM COD defined. You can calculate the number of CPU milliseconds as you have defined for a 10-minute interval, however that will represent complete, full seconds, not CPU seconds that have had their power reduced by platform metering.

To understand how that number of milliseconds translates to full CPU seconds (without platform metering), you can multiply what you get from your formula by the PM COD percent (50%). In your case that will cut the number of CPU milliseconds you have in a 10-minute interval in half. But be aware, DBQL will be reporting the actual PM COD seconds (the weak seconds) required by a query, as DBQL doesn't know about PM COD. So expect DBQL to report approximately twice as much CPU per query as it did without PM COD.
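As a back-of-the-envelope illustration of that arithmetic (the node and CPU counts below are made up for the example, not taken from your system):

    nodes, cpus_per_node = 4, 8                    # illustrative counts
    pm_cod = 0.50                                  # CODFactor 500 = 50% PM COD
    full_ms = nodes * cpus_per_node * 600 * 1000   # ms in a 10-minute interval
    effective_ms = full_ms * pm_cod                # CPU power actually delivered
    # DBQL reports the metered ("weak") seconds, so expect roughly
    # 1 / pm_cod (here 2x) as much CPU per query as without PM COD.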

Thanks, -Carrie

suhailmemon84 64 comments Joined 09/10
14 May 2013

Hi Carrie,
Based on your earlier response:
"Resusage will still show the same number of CPU seconds and will still show 100% CPU utilization when a full workload is running. But each of those CPU second will do 75% of the work it did before. Eg you will see the CPU seconds used for a query in dbql increase proportionately."
is it true to assume that after releasing the COD%, we can expect the same queries to have comparatively lower values of AMPCPUTIME in DBQLOGTBL?
I'm particularly seeing this scenario when we removed our COD limit a few weeks back for our production database. We were operating at 75% earlier and now after releasing the COD limit completely, we're operating at 100%. Since the day we went to 100%, I'm seeing a reduction in AMPCPUTIME values for the same set of queries.
 
Regards,
Suhail
 

carrie 595 comments Joined 04/08
16 May 2013

Suhail,
 
If you are talking about PM COD (platform metering COD), then the DBQL seconds to perform the same query should go down somewhat proportionately with the change in percentage of PM COD. So what you are observing seems correct to me.
 
Thanks, - Carrie

suhailmemon84 64 comments Joined 09/10
18 May 2013

Thank you Carrie for your response.
Regards,
-Suhail

21 Oct 2014

Hi Carrie,
While doing capacity planning for a system, where I have an idea of the amount of data that I am expecting, how can I decide how many nodes I have to use for that?
Thanks

28 Oct 2014

Can you give some idea of the number of AMPs per node for the latest Teradata models (6690, 6700)?

carrie 595 comments Joined 04/08
28 Oct 2014

It would be best to ask both of your questions of Teradata associates who specialize in hardware setup and configurations. Unfortunately, that is not an area of expertise for me. You can speak with your Teradata account team or someone in Teradata Customer Support who works with your site.
 
Best regards, -Carrie
