Latest blog posts by Carrie

Looking for an extra level of protection for your online, web, or interactive queries? Consider expediting the priority scheduler allocation group that those queries run in.

In the past, expediting priority scheduler allocation groups that support tactical queries was one way to prevent out-of-AMP worker task conditions from slowing down those response-sensitive queries. But even if you never run out of AWTs, there are some less obvious reasons why expediting those allocation groups is a good, proactive idea.

Suppose you have a table with several non-unique secondary indexes (NUSIs). When TPump loads the table, should you expect each row’s INSERT to cause a table-level lock on each of the secondary index sub-tables? And if so, couldn’t this create a lot of blocking across sessions? To answer that, let’s take a closer look at what happens when NUSI sub-tables are updated.

Please note: This posting is relevant only to software releases prior to Teradata Database 14.0. Ignore this recommendation if you are on 14.0 or later software.

You may have already heard the recommendation on collecting statistics on the system-derived column PARTITION on partitioned primary index (PPI) tables. What I want to address in this blog may sound strange, but there are times when collecting statistics on PARTITION makes sense for a NON-PARTITIONED primary index (NPPI) table as well.
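For context, the standard recommendation looks like this; the table name below is hypothetical, and the syntax shown is the general Teradata form for collecting statistics on the system-derived PARTITION column:

```sql
-- "sales_history" is a hypothetical table name.
-- Collect statistics on the system-derived PARTITION column,
-- commonly recommended for PPI tables:
COLLECT STATISTICS ON sales_history COLUMN PARTITION;
```

The same statement is legal on an NPPI table as well, which is the situation this post explores.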

When you push too much paper into your shredder, it jams; when you pour too much coffee into your cup, it overflows; when you eat too much food, you…well, you know. Everything’s got limits.

One thing that grabbed my attention when I first joined Teradata back in '88, and that I still find striking today, is how the database manages the level of work that enters the system. Managing the flow of work inside Teradata is decentralized, low-effort, and scalable, exhibiting control at the lowest possible level—at the AMP.

I was asked the other day about the request cache, specifically, how to keep plans in the cache so the query does not have to undergo repetitive parsing and optimizing. A request cache exists on each parsing engine (PE) and will hold generic plans that have been identified as potentially re-usable. For very short queries, request cache hits can reduce your elapsed time and provide more consistent response time. So the longer those re-usable plans stay in the cache, the better.
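As a rough illustration of what makes a plan re-usable, compare a request with a hard-coded literal to one that passes the value as a parameter; the table and column names here are hypothetical:

```sql
-- A distinct literal in the request text means a distinct cache entry:
SELECT order_total
FROM orders
WHERE order_id = 1001;

-- A parameterized request (e.g., via a USING clause) keeps the request
-- text constant, so a generic plan can be cached and re-used across
-- executions with different values:
USING (v_order_id INTEGER)
SELECT order_total
FROM orders
WHERE order_id = :v_order_id;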

NOTE:  This content is only relevant to Teradata 12.0 and earlier releases.

Chances are that if you’re into workload management at all, you’re using at least one throttle. Maybe you are using more. On the other hand, maybe you’re just thinking about it. Object Throttles (as opposed to Workload Throttles) allow you to stray from the straight and narrow, because they offer you an array of options and can be used in creative combinations. This makes it possible for you to get in over your head when your throttles overlap or conflict. If you are not able to keep your throttles simple, this article is intended to preview what you are getting yourself into.

Michael McIntire’s recent posting on NoPI tables (read it now!) got me thinking more and more about mini-batch, and how it’s growing in popularity. Mini-batch is a technique that lets you update database tables in small batches, many times a day, using batch approaches. This could be a little batch job every hour, or every 5 minutes, whatever frequency you choose.

The question of what Estimated Processing Time actually is comes up a lot. For example, the DBQLogTbl table carries an “EstProcTime” value. If you are EXPLAIN-savvy, then you’ve bumped up against “Estimated Time” numbers in almost every step of every query plan you’ve ever looked at. TASM users frequently rely on “Estimated Processing Time” as a classification criterion to manage the priority that a query will enjoy. Some think Estimated Processing Time is an estimate of clock seconds; others believe it represents CPU seconds. Here’s what it actually is.

Access locking, Teradata’s dirty read capability, is a good match if you want your online application to read through a write lock placed by load activity running at the same time. Works like a charm. But there is one type of lock against which access locking is helpless: an exclusive lock. Here’s what to watch out for, and here are some recommendations to reduce the time your access lock queries will ever have to be kept waiting.
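For readers new to the feature, an access lock is requested with the LOCKING request modifier; the table and column names below are hypothetical, but the modifier syntax is standard Teradata:

```sql
-- "inventory" is a hypothetical table name.
-- The ACCESS lock lets this SELECT read through concurrent WRITE locks,
-- at the cost of possibly seeing uncommitted ("dirty") data:
LOCKING TABLE inventory FOR ACCESS
SELECT item_id, qty_on_hand
FROM inventory;
```

An exclusive lock, typically taken by DDL such as ALTER TABLE, still blocks this request; that is the case this post addresses.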
