0 - 50 of 83 tags for performance


Hi,


I have a large query with two window sets, one over year and another over quarter. Since the PARTITION BY and ORDER BY clauses are the same for one set of columns and for the other, I think this query can be rewritten to avoid partitioning the rows again and again. Below is the query:
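A sketch of the idea (all table and column names below are hypothetical): when several window functions share a textually identical OVER clause, the optimizer can compute them in a single step, so only one sort/redistribution of the spool is needed per distinct window rather than one per column.

```sql
-- Two window sets: one over year, one over year+quarter (names assumed).
-- yr_total/yr_avg share one window and qtr_total/qtr_avg share the other,
-- so the spool is partitioned twice in total, not four times.
SELECT txn_id, yr, qtr, amt,
       SUM(amt) OVER (PARTITION BY yr)      AS yr_total,
       AVG(amt) OVER (PARTITION BY yr)      AS yr_avg,
       SUM(amt) OVER (PARTITION BY yr, qtr) AS qtr_total,
       AVG(amt) OVER (PARTITION BY yr, qtr) AS qtr_avg
FROM fact_tbl;
```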

I have a requirement to fit 441GB of history data into a table, which makes my table bulky and difficult to query. To explain: it is a fact table and stores amount fields, etc. I have a partition on the business date field.
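A minimal sketch of what such a table might look like (column names and date range are assumed): row partitioning on the business date lets date-qualified queries scan only the partitions they need instead of all 441GB.

```sql
CREATE MULTISET TABLE sales_fact (
    business_dt DATE NOT NULL,
    acct_id     INTEGER NOT NULL,
    amount      DECIMAL(18,2)
) PRIMARY INDEX (acct_id)
PARTITION BY RANGE_N(business_dt BETWEEN DATE '2010-01-01'
                                 AND     DATE '2015-12-31'
                                 EACH INTERVAL '1' MONTH);
```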

Hi all,
We are experiencing relatively slow ODBC throughput from TD to MicroStrategy and I am wondering if this is "as good as it gets" or if the throughput is below standard.

I have a transactional dataset with a date field. My question is whether it is better to bring the date field into the PI for the table I am using or leave it out. The PI is currently made up of a product unique identifier, which is duplicated for each date related to the product.
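The two choices can be sketched like this (all names are assumed). Keeping the date out of the PI preserves single-AMP access by product id alone but puts all rows for a hot product on one AMP; adding the date spreads rows more evenly but means direct PI access needs both values.

```sql
-- Option A: product id only in the PI; the date can drive partitioning instead.
CREATE MULTISET TABLE txn_a ( ... )
PRIMARY INDEX (product_id)
PARTITION BY RANGE_N(txn_dt BETWEEN DATE '2014-01-01'
                            AND     DATE '2016-12-31'
                            EACH INTERVAL '1' DAY);

-- Option B: date included in the PI; better distribution, but a lookup by
-- product_id alone becomes an all-AMP operation.
CREATE MULTISET TABLE txn_b ( ... )
PRIMARY INDEX (product_id, txn_dt);
```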

Given: a large table; 14 of its 50 fields are the responsibility of one group and the rest of a second group; 5 of those 14 fields need to be used by analysts familiar with the large table (i.e. large numbers of scripts are written against the current large table). Business-hour performance on this large table is a primary concern. Updates are typically scheduled after hours.

Hi all,

I have a particular question regarding how the data are stored inside ResUsageSAWT.
When the column TheTime is, for example, '04:10:00.00', does all the AWT data refer to the interval from 04:00 to 04:10, or to the interval from 04:10 to 04:20?
Thanks in advance for your help!

Hi, I am new to Teradata. I have downloaded the VM Express edition for testing purposes, but it is very limited in performance. I want to test with somewhat larger data, but whenever I run a query, especially one with aggregation, it takes many minutes to return the result.

Aster is an analytic discovery application that lets you perform exploratory queries across your data, regardless of whether it is structured or not.

This session focuses on Monitoring for System Performance. 

There is a "poetic query" that takes forever to run.
It looks something like this:
sel
    tb1.col1,
    tb1.col2,
    (case <condition involving tb1 and tb2>) as "Dcol1",
    sum(tb3.col1),
    sum(tb3.col2),
    sum(tb3.col3),
    etc.
from
    tb1 left outer join tb2 on <condition> left outer join tb3 on <conditions>

Hi,
We have a SET table with a NUPI, partitioned by date. My question: when we insert a new record, will the duplicate check be performed against all rows with the same row hash, or will it go to the particular partition and perform the duplicate check there?
Ex: on one AMP
Primary index value (ex: ID1): 10
Partition 1 with date 2015-01-01: 100 rows
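A DDL sketch matching the example (types and names are assumed). With a partitioned primary index, rows on an AMP are ordered by partition first and row hash within it, so the SET duplicate-row check compares the new row only against rows in the same partition with the same row hash, not against every row for ID1 = 10 across all dates.

```sql
CREATE SET TABLE txn_hist (
    id1    INTEGER NOT NULL,
    txn_dt DATE NOT NULL,
    amt    DECIMAL(18,2)
) PRIMARY INDEX (id1)
PARTITION BY RANGE_N(txn_dt BETWEEN DATE '2015-01-01'
                            AND     DATE '2016-12-31'
                            EACH INTERVAL '1' DAY);
```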

Hi Guys, thanks for reading the post. I did do a search for "view on view" and there was no result.

Hi All,

I am trying to debug a query that is taking a lot of time.
The following objects are involved in the query:
 

Have you ever wished for a magic wand that could quickly point out the missing, stale, and unused statistics for you?

This presentation reviews the benefits of collecting performance data on the Teradata platform.

SQL performance is vital for driving value from a data warehouse. With today's growing query complexity, optimizing SQL can be a daunting task ...

Performance Diagnostic Method and Tools is targeted to uncover and resolve performance related issues in three primary areas...

Hi all. We use PDCR to archive our DBQL tables - the primary index on the tables is (logdate,procid,queryid) and they are partitioned by logdate.
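With that PI and partitioning, a lookup that qualifies the partitioning column plus the full PI gets both partition elimination and direct row access. The archive table name and literal values below are illustrative only:

```sql
SELECT *
FROM pdcrdata.dbqlogtbl_hst          -- table name assumed
WHERE logdate BETWEEN DATE '2015-06-01' AND DATE '2015-06-07'  -- partition elimination
  AND procid  = 30720                -- logdate+procid+queryid: PI access
  AND queryid = 163183599192012;
```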
 

Typical 2 years Sales Data Mart contains detail and aggregates in order to support time traverse metrics like 4 Week Shipments, 13 Week Shipments and so on... 
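Those time-traverse metrics can often be computed from weekly detail with moving windows instead of storing a separate aggregate per horizon. A sketch with assumed names:

```sql
SELECT product_id,
       week_no,
       SUM(shipped_qty) OVER (PARTITION BY product_id ORDER BY week_no
                              ROWS 3 PRECEDING)  AS shipments_4wk,
       SUM(shipped_qty) OVER (PARTITION BY product_id ORDER BY week_no
                              ROWS 12 PRECEDING) AS shipments_13wk
FROM weekly_shipments;
```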

How do you determine your system’s performance prior to the dreaded customer call asking why their queries are running longer?  What’s the impact to the business when the warehouse environment is under performing?

What is the difference between a simple system view like dbc.tables, the X views, and the V views, and which one is better performance-wise?
Does every system view like dbc.tables always have a corresponding X and V view? I failed to find any X or V view for dbc.ErrorMsgs.
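Roughly: the V views return long (Unicode) object names, and the X flavor adds a privilege filter so users see only rows for objects they can access; that extra access check is why the non-X views are usually cheaper. Views with no per-object privilege dimension, such as dbc.ErrorMsgs, have no X flavor. An illustration (database name assumed):

```sql
SELECT TableName FROM DBC.Tables   WHERE DatabaseName = 'mydb';  -- older compatibility view
SELECT TableName FROM DBC.TablesV  WHERE DatabaseName = 'mydb';  -- V: long object names
SELECT TableName FROM DBC.TablesVX WHERE DatabaseName = 'mydb';  -- VX: restricted to rows the user may access
```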
 
Thanks

Teradata Intelligent Memory keeps the most frequently used, or hottest, data in memory for in-memory performance without the need to purchase enough memory for all of your data.

Hi ,
Can anybody please tell me which performs better: dynamic or static queries?
I have a procedure full of dynamic queries that could all be written as static queries,
but I am not sure if it is worth the effort.
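A sketch of the contrast inside a stored procedure (the procedure, table, and column names are made up). The static form is parsed when the procedure is compiled; the dynamic form is assembled and parsed on every execution, which typically costs more per call:

```sql
REPLACE PROCEDURE demo_proc (IN p_tbl VARCHAR(30))
BEGIN
    -- Static: text is fixed at compile time.
    UPDATE stage_tbl SET load_flag = 'Y' WHERE load_flag = 'N';

    -- Dynamic: text is built and parsed at run time.
    CALL DBC.SysExecSQL(
        'UPDATE ' || p_tbl || ' SET load_flag = ''Y'' WHERE load_flag = ''N'';');
END;
```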

For a project, we have several groups that will produce numbers using models. Input and output data will be in the DB. Each group is independent and works in their own different way; however, the outputs they produce will need to be combined, aggregated and queried. 

This session gives a close-up picture of what AMP worker tasks are and how they support user work.

Hi Teradatares :-)
I am new to the Teradata database. Currently I am thinking of creating a "big" table that will grow by 200 million rows each week; after a year of data it will be about 10,000,000,000 rows.
After reading some best practices, documentation, and the forum, this is what I understand so far:
Create table test
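One possible shape for such a table (a hypothetical sketch, not the poster's actual DDL; every name and choice is an assumption): MULTISET avoids the SET duplicate-row check on every insert, a NUPI on the most commonly joined key spreads rows across AMPs, and daily range partitioning keeps date-bounded scans from touching all ten billion rows.

```sql
CREATE MULTISET TABLE test (
    event_dt DATE NOT NULL,
    cust_id  BIGINT NOT NULL,
    metric   DECIMAL(18,4)
) PRIMARY INDEX (cust_id)
PARTITION BY RANGE_N(event_dt BETWEEN DATE '2015-01-01'
                              AND     DATE '2016-12-31'
                              EACH INTERVAL '1' DAY);
```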

Hi,
What happens to performance when you load the same amount of data into the same table on two servers, one with 140 vprocs and another with 70 vprocs? Do both cases take the same time to load, or does the one with more vprocs take less time?
 
Thanks,
Kapil

Hi,
 
Is there any way to measure the performance impact on existing queries of enabling LockLogger?
 
I'm planning to enable LockLogger in continuous mode to debug deadlocked queries, but I'm not sure about the side effects on the performance of existing queries. Will it use a lot of CPU? A lot of disk I/O?
 

Hi Experts,
There is a TPump job which is taking 10 hours to process 7,000 records, which earlier took 1 hour. I am not sure why it is taking so long now; the script contains the normal insert/select statement.
I just want to know whether there are any optimal settings for the various TPump parameters, like
PACK
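For reference, these are the knobs usually tuned on the .BEGIN LOAD statement; the values below are placeholders to experiment with, not recommendations:

```
.BEGIN LOAD
     SESSIONS 16     /* number of parallel sessions                      */
     PACK 600        /* DML statements packed per request                */
     RATE 12000      /* statements per minute; omit to leave unthrottled */
     SERIALIZE ON    /* preserve order for rows with the same key       */
     ROBUST ON       /* statement-level restart logging                  */
     ERRLIMIT 1000;
```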

Hi All,
    Is there any way to improve the performance (reducing the CPU consumed is the main aim for now) of queries involving LIKE in the WHERE clause?
 
 
WHERE FIRST_NM LIKE :V_FIRST_NM||'%'
AND LAST_NM LIKE :V_LAST_NM||'%';
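Since the patterns are anchored prefixes, one low-risk first step (table name assumed) is to make sure both columns have current statistics, so the optimizer gets realistic selectivity estimates for the LIKE predicates; compare EXPLAIN output and DBQL CPU before and after:

```sql
COLLECT STATISTICS COLUMN (FIRST_NM), COLUMN (LAST_NM) ON customer_tbl;
```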

I have a few doubts regarding statistics collection.
 
1. Is the order of the columns used in the COLLECT STATS command important?
2. Consider the following scenario 
 
Table A joins with Table B based on Col1 and Col2 and Table A joins with Table C based on Col1, Col2 and Col3.
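For that join scenario, a common starting point is one multi-column statistic per join key set on each side of the join (TD14-style COLLECT STATISTICS syntax; adjust table names to your own):

```sql
COLLECT STATISTICS COLUMN (Col1, Col2)       ON Table_A;   -- supports the A-B join
COLLECT STATISTICS COLUMN (Col1, Col2, Col3) ON Table_A;   -- supports the A-C join
COLLECT STATISTICS COLUMN (Col1, Col2)       ON Table_B;
COLLECT STATISTICS COLUMN (Col1, Col2, Col3) ON Table_C;
```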

Have you ever struggled over whether, or how, to use a new database feature?  In this presentation, we’ll demonstrate how database query log (DBQL) data can help when it comes to determining candidates for MLPPI (Multi-Level Partitioned Primary Index) as well as the new Columnar Partitioning feature of Teradata 14.

Defining workloads in TDWM may qualify as a TASM implementation but is just a small piece of the puzzle.  This presentation will look at this and all the other pieces needed to complete the picture.

Hi All,
I have two tables, let's say Tab1 with 1 million records and Tab2 with 100 million records.
Option 1) sel columns from tab1 inner join tab2
Option 2) sel columns from tab2 inner join tab1
Is option one better than option two? Does it make a difference?

Hi All
I want to understand whether there will be a difference in performance, or on any other front, between
Any inputs will be of great help. I am dealing with huge fact tables and many volatile tables to be created in a stored procedure.
 

Does SQL have better performance if columns have COMPRESS? If so, how does COMPRESS help performance? Is it because of lower spool space usage?
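An illustration (names and value lists are assumed): multi-value compression stores the listed values once in the table header instead of in every row, so more rows fit per data block. That means fewer IOs on scans, and smaller spool whenever the compressed columns are carried into it.

```sql
CREATE MULTISET TABLE orders (
    order_id  INTEGER NOT NULL,
    status_cd CHAR(1)     COMPRESS ('A','C','P'),
    region_nm VARCHAR(20) COMPRESS ('NORTH','SOUTH','EAST','WEST')
) PRIMARY INDEX (order_id);
```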

Hi everybody,
Could anybody tell me how TD analytical functions are implemented under the hood?

I've got Fastload opening a named pipe, and I'm redirecting the output from a SQL*Plus script to the pipe in Windows. I assumed this would be far faster than allowing SQL*Plus to spool the result to a file locally, and fastload to import the flat file. Turns out it's not.

When I call

ResultSet rs = dbmd.getColumns(null, "CUSTDATA", "TRANS", null);

it takes 16 seconds to get an answer back on a fairly unloaded system.  This table has 27 columns.

 

This seems to be excessively slow. Is there anything that can be done to speed this up?
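One workaround worth timing: getColumns() issues a fairly heavy dictionary query behind the scenes, so querying the dictionary view directly may be faster for a known table (identifiers taken from the call above; on older releases use DBC.Columns instead of DBC.ColumnsV):

```sql
SELECT ColumnName, ColumnType, ColumnLength
FROM DBC.ColumnsV
WHERE DatabaseName = 'CUSTDATA'
  AND TableName    = 'TRANS'
ORDER BY ColumnId;
```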

 

Hi,

Could you please let me know whether there will be any difference in performance between the following two queries? As I don't have any knowledge of SQL internals, I have this doubt.

Table A: column alpha, 100 million records
Table B: column beta, 45 million records

 

TPump has been enhanced to dynamically determine the PACK factor and fill up data buffer if there is variable-length data. This feature is available in Teradata TPump 13.00.00.009, 13.10.00.007, 14.00.00.000 and higher releases.

Teradata Parallel Transporter (Teradata PT) has fourteen different operators. Each behaves differently. This article provides a table to help you in selecting the right operator to use for your Teradata PT job. You can view the table as Excel .xls or PDF.

 

Hi all, I am new to Teradata, so sorry if my questions seem trivial and obvious to you.

Do you want to have your Teradata Parallel Transporter (Teradata PT) Stream operator jobs run faster? Are you having difficulty determining the optimal pack factor for your Stream operator jobs? Knowing how to use the Stream operator’s PackMaximum attribute enables you to determine the optimal pack factor and thus improve the performance of your Stream operator job.

We are looking to do some application performance testing and need to simulate a "real world" environment in which the warehouse is under strain.  Are there any tools/scripts out there that folks use to simulate the load typically placed on an EDW where it is generally pegged at 100% all day?

One of the more difficult challenges in database management and administration is determining where and how to implement a new RDBMS feature or function. In this presentation we’ll look at the DBQL data available for evaluation of tables and columns used within a workload and how this data can be leveraged for determining candidates for MLPPI (Multi-Level Partitioned Primary Index).

Everyone is aware of Teradata’s continued commitment to reducing the footprint and resource usage of data by adding compression capabilities, and of the resulting perm-space savings and performance gains from reduced block size, IO, and spool usage.

Please refer to the following Viewpoint article for Viewpoint performance considerations. http://developer.teradata.com/viewpoint/articles/teradata-viewpoint-performance-considerations

Teradata Active System Management (TASM) continues to evolve to increasingly higher levels of automation and usability. Come hear what new TASM offerings are available in release 13.x, and how you can best leverage these new features.