Some data models use surrogate keys as part of the primary key/primary index definition.

Examples include industry models and homegrown models.

Surrogate keys are usually considered when loading multiple source systems, each with its own natural keys, into the same integrated data model. This implies converting the various natural keys into a standard surrogate key that can be used across all source systems.
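As a rough illustration of that conversion, here is a minimal Teradata SQL sketch. The table and column names (edw.customer_xref, edw.customer_sales, stg.sales_src_a, cust_nbr, and so on) are hypothetical: a cross-reference table assigns one shared surrogate key per (source system, natural key) pair, and each load resolves its natural keys through a join against it.

  -- Hypothetical cross-reference table: one shared surrogate key per
  -- (source system, natural key) pair.
  CREATE TABLE edw.customer_xref
  ( customer_sk    INTEGER     NOT NULL,  -- surrogate key used by the integrated model
    source_system  VARCHAR(20) NOT NULL,
    natural_key    VARCHAR(50) NOT NULL
  )
  UNIQUE PRIMARY INDEX (source_system, natural_key);

  -- During the load, the source's natural key is converted to the surrogate key.
  -- The staging table stg.sales_src_a and target edw.customer_sales are assumed to exist.
  INSERT INTO edw.customer_sales (customer_sk, sale_dt, sale_amt)
  SELECT x.customer_sk,
         s.sale_dt,
         s.sale_amt
  FROM   stg.sales_src_a s
  JOIN   edw.customer_xref x
    ON   x.source_system = 'SRC_A'
   AND   x.natural_key   = s.cust_nbr;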

It's about time to start considering Aggregate Join Indexes (AJIs) instead of aggregate summary tables and/or aggregate views.

Teradata 13 contains cool new enhancements to the Teradata Optimizer that improve the utilization of AJIs, as listed below. But I have also been using AJIs successfully, with great results, on V2R6 and Teradata 12.
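As a minimal sketch of what replacing a summary table with an AJI can look like (the names edw.sales, edw.calendar_dim, sale_amt, store_id and so on are hypothetical, not a definitive implementation), the join index below pre-joins and pre-aggregates sales by month and store, and the optimizer can typically rewrite matching summary queries to read it instead of the base tables.

  -- Aggregate join index: pre-joined, pre-aggregated sales by month and store.
  CREATE JOIN INDEX edw.sales_month_store_aji AS
  SELECT   d.sale_month,
           s.store_id,
           SUM(s.sale_amt) AS sale_amt,
           COUNT(*)        AS row_cnt
  FROM     edw.sales s,
           edw.calendar_dim d
  WHERE    d.calendar_dt = s.sale_dt
  GROUP BY d.sale_month, s.store_id
  PRIMARY INDEX (sale_month, store_id);

  -- A summary query written against the base tables, like this one, can be
  -- satisfied from the AJI without touching the detail rows.
  SELECT   d.sale_month,
           SUM(s.sale_amt)
  FROM     edw.sales s,
           edw.calendar_dim d
  WHERE    d.calendar_dt = s.sale_dt
  GROUP BY d.sale_month;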

I still remember when PPI was first introduced in V2R5 and all the questions that came up in my mind about its utilization and performance impact. A long time has passed, and most Teradata systems nowadays have at least one large PPI table defined. Most of the questions about performance impact are also gone, especially with the use of "partition elimination" and "dynamic partition elimination". Now that TD12 has introduced multilevel PPI, the same kinds of questions have come up in my mind again about utilization and performance impact, as listed below.
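To make the TD12 syntax concrete, here is a minimal sketch with hypothetical names (edw.txn_hist_mlppi, txn_dt, region_cd). A single-level PPI would use just the first RANGE_N expression on its own, while the multilevel version combines two partitioning expressions so that predicates on either column allow partition elimination.

  -- Multilevel PPI: monthly ranges on the date column plus ranges on a region code.
  CREATE TABLE edw.txn_hist_mlppi
  ( txn_id    INTEGER       NOT NULL,
    txn_dt    DATE          NOT NULL,
    region_cd BYTEINT       NOT NULL,
    txn_amt   DECIMAL(18,2)
  )
  PRIMARY INDEX (txn_id)
  PARTITION BY ( RANGE_N (txn_dt    BETWEEN DATE '2008-01-01'
                                    AND     DATE '2010-12-31'
                                    EACH INTERVAL '1' MONTH),
                 RANGE_N (region_cd BETWEEN 1 AND 10 EACH 1) );

  -- Partition elimination: only the partitions for these three months and
  -- this region code need to be scanned.
  SELECT SUM(txn_amt)
  FROM   edw.txn_hist_mlppi
  WHERE  txn_dt BETWEEN DATE '2010-01-01' AND DATE '2010-03-31'
  AND    region_cd = 3;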

In my opinion, stored procedures can be used to enforce standards for commonly used processes and to avoid the same code being copied into multiple scripts/programs, which creates a maintenance nightmare.
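As a small sketch of that idea (the procedure edw.archive_old_rows and the tables it touches are hypothetical), the repeated logic lives in one procedure and every script simply calls it:

  -- One shared procedure instead of the same INSERT/DELETE pasted into many scripts.
  -- The tables edw.txn_hist and edw.txn_hist_archive are assumed to exist.
  CREATE PROCEDURE edw.archive_old_rows (IN p_cutoff_dt DATE)
  BEGIN
    -- Move rows older than the cutoff date into the archive table.
    INSERT INTO edw.txn_hist_archive
    SELECT * FROM edw.txn_hist
    WHERE  txn_dt < :p_cutoff_dt;

    DELETE FROM edw.txn_hist
    WHERE  txn_dt < :p_cutoff_dt;
  END;

  -- Each load script calls the procedure rather than carrying its own copy of the logic.
  CALL edw.archive_old_rows (DATE '2009-01-01');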

But let's list some of the pros and cons associated with stored procedures.

Continuing from Part I, where we discussed the factors involved in determining when to refresh statistics, we'll now look at a process that I designed/created and have been using for quite a long time. This process may not fit all environments, but it will provide some ideas on how to develop your own. I call the process “Smart Stats.”

The appropriate time to refresh statistics is just before the query or the load process breaks, that is, just before the optimizer changes the access plan from a good plan to a bad one. However, the more relevant question is: how can I determine the right time to refresh statistics on any table or join index? To answer this question, there are a couple of items that are important to know and understand first, as listed below:
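One typical input to that decision is how much a table has changed since statistics were last collected. The sketch below is only an illustration of that kind of check, not the actual Smart Stats process; it assumes a hypothetical snapshot table, edw.stats_snapshot, that records the row count at the time of each collection.

  -- Compare the current row count with the count recorded at the last collection
  -- and flag the table for a refresh once it has drifted by more than 10%.
  SELECT CASE
           WHEN cur.row_cnt > snap.row_cnt * 1.10
             OR cur.row_cnt < snap.row_cnt * 0.90
           THEN 'REFRESH'
           ELSE 'OK'
         END AS stats_action
  FROM  (SELECT CAST(COUNT(*) AS DECIMAL(18,0)) AS row_cnt
         FROM   edw.txn_hist) cur
  CROSS JOIN
        (SELECT row_cnt
         FROM   edw.stats_snapshot
         WHERE  tbl_name = 'txn_hist') snap;

  -- When a refresh is flagged, re-collect on the relevant columns and indexes
  -- (txn_id is assumed to be the table's primary index), then update the snapshot.
  COLLECT STATISTICS ON edw.txn_hist COLUMN (txn_dt);
  COLLECT STATISTICS ON edw.txn_hist INDEX  (txn_id);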