
New DBQL logging options USECOUNT and STATSUSAGE, introduced in Teradata Database 14.10, enable the logging of used and missing statistics. The output of this logging can be used to find used, unused, and missing statistics globally (for all queries) or for just a subset of queries.
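As a minimal sketch of how these options are enabled (the database name SalesDB is a placeholder; scope and output tables should be confirmed against the Teradata 14.10 documentation):

```sql
-- USECOUNT is enabled at the database level and tracks object use counts,
-- which can reveal statistics that are never used.
BEGIN QUERY LOGGING WITH USECOUNT ON SalesDB;

-- STATSUSAGE is enabled per user (or ALL) and writes used- and
-- missing-statistics detail as XML into DBC.DBQLXMLTbl.
BEGIN QUERY LOGGING WITH STATSUSAGE ON ALL;

-- Turn the logging off again once the analysis window is over.
END QUERY LOGGING WITH USECOUNT ON SalesDB;
END QUERY LOGGING WITH STATSUSAGE ON ALL;
```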

Managing Teradata Optimizer statistics can be labor intensive and error prone. Many users struggle to know what columns and indexes to collect statistics on, how to detect and prevent statistics from becoming stale, and how to know if collected statistics are being used.

Hi Guys,
 
I have a question about the Optimizer's confidence levels in the EXPLAIN plan.
 
Below is the scenario that I created:
1. Created two tables, Location and Employee, each with a unique primary index on the EmpNo column
 

This presentation builds on Recorded Webcast #49911, Explaining the EXPLAIN, which did just that (if you have not viewed it, please do so before watching this webcast).

Hi All,
I was reading Carrie's blog on the skew sensitivity of the TD12 optimizer.
http://developer.teradata.com/database/articles/skew-sensitivity-in-the-optimizer-new-in-teradata-12
 

Hi.
I've created a very small table to illustrate an issue I'd like to hear opinions about.
CREATE MULTISET TABLE Table_1 ,NO FALLBACK,
NO BEFORE JOURNAL,
NO AFTER JOURNAL,
CHECKSUM = DEFAULT,
DEFAULT MERGEBLOCKRATIO
(
Textfield CHAR(20) CHARACTER SET LATIN NOT CASESPECIFIC
)

Hi all,
 

I'm having some performance issues due to the way certain queries are being processed by the optimizer. I have a set of views that I want to join together, and each of these views has the same JOIN/WHERE clause combination:
SELECT ...

You would expect it the other way around.... The Optimizer can make estimates that are thrown off from the real count by any number of factors.
I don't recall seeing this kind of situation before - I am on V13.

Good Day Folks...
Three Q's bunched up on stats...
<> When should you collect index stats and also column stats on the component columns, as opposed to just index stats? How do index stats help, and how do stats on the component columns that make up the index help?
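To make the question concrete, here is a sketch of the two collection styles being compared (the table SalesTbl and its index columns are hypothetical):

```sql
-- Stats on the index itself: a single multi-column histogram
-- over the full index value (StoreId, ItemId).
COLLECT STATISTICS ON SalesTbl INDEX (StoreId, ItemId);

-- Stats on the component columns individually: separate histograms,
-- useful when the columns also appear alone in predicates.
COLLECT STATISTICS ON SalesTbl COLUMN (StoreId);
COLLECT STATISTICS ON SalesTbl COLUMN (ItemId);
```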

The estimated cardinality for the following predicate on a TIMESTAMP column is far too high compared with the same predicate on a DATE column.

1) Table DDL:

--------------------------------------

Statistics about tables, columns, and indexes are a critical component in producing good query plans. This session will examine the differences between random-AMP sampling, full statistics collection, and USING SAMPLE collection, and will share details about how each of these three approaches works internally. Statistics histograms, their uses and limitations, will be explored. You'll get some insights into collecting statistics on multiple columns, on indexes, and on the system-derived column PARTITION. Statistics extrapolation will be touched on briefly, and the session will wrap up with some general recommendations for collecting statistics.
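The three approaches mentioned above can be sketched as follows (OrdersTbl and its columns are placeholder names):

```sql
-- 1. Random-AMP sampling: no statement needed. When no statistics have
--    been collected, the Optimizer estimates from a random-AMP sample.

-- 2. Full statistics: every row is read to build the histogram.
COLLECT STATISTICS ON OrdersTbl COLUMN (OrderDate);

-- 3. Sampled statistics: only a portion of the rows is read.
COLLECT STATISTICS USING SAMPLE ON OrdersTbl COLUMN (CustomerId);

-- Stats on the system-derived PARTITION column of a partitioned table:
COLLECT STATISTICS ON OrdersTbl COLUMN (PARTITION);
```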

Hi,

I am new to Teradata. I have complex queries that retrieve data from different tables using different joins. Also, I have to apply some conditions to remove duplicates and to check time periods.

The way I thought Teradata always works is by trying to keep things "local" to the AMP. Therefore, if A joins B on column C, tables A and B will be retrieved with NOT NULL on C, redistributed on the hash code of C, and then joined - do I understand this right? (Of course, there are situations where you would rather duplicate one table to all AMPs, when that table is small enough.) The idea is that each table needs to be distributed by the hash of the equality-join column so that matching rows reside on the same AMP and the join can be performed locally.
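The redistribution described above can be seen directly in a plan. A sketch, with hypothetical tables TableA and TableB joined on a non-primary-index column C:

```sql
-- When C is not the primary index of either table, the plan typically
-- spools and redistributes both inputs by the hash of C before joining.
EXPLAIN
SELECT a.C, b.payload
FROM TableA a
JOIN TableB b
  ON a.C = b.C;
-- Look in the EXPLAIN text for wording along the lines of
-- "redistributed by the hash code of ... C" (or, for a small table,
-- "duplicated on all AMPs" instead).
```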

I gave a presentation at the Teradata Partners Conference last week on the fundamentals of collecting statistics, where I touched briefly on the rules behind confidence levels. I’m using this article to go into further depth, and offer you more examples of how these confidence levels come about.

Cached plans are good. For repetitive queries, cached plans remove the overhead of parsing and optimizing, and this helps short work perform more consistently.

Have you ever had what looks like a great query plan turn into life in the slow lane at run time?