50 - 100 of 161 tags for database


Hi All

 

I may have posted this question before in this forum, but I am reposting it because I did not get any reply at that time. Now I have been asked in a technical meeting to suggest a solution, but I am still not sure how to decide this, as my understanding is still in a grey area.

 

Hi All,
I was reading Carrie's blog on skew sensitivity of TD12 optimizer.
http://developer.teradata.com/database/articles/skew-sensitivity-in-the-optimizer-new-in-teradata-12
 

Hi Folks,
 
I need confirmation on my understanding of Rowhash Match Scan and Sync Scan.
 
1.
I am joining two tables, TableA and TableB, on the TableA.A1 column, which is the PI, and the TableB.B2 column, which is not the PI.
Prior to the join, TableB needs to be spooled for the Rowhash Match Scan.
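For reference, a minimal sketch of the scenario described above (table and column names are from the question; the plan wording is paraphrased, not a guaranteed EXPLAIN output):

```sql
-- TableA.A1 is the primary index (PI); TableB.B2 is not.
SELECT a.A1, b.B2
FROM TableA a
JOIN TableB b
  ON a.A1 = b.B2;

-- Typical plan: TableB is redistributed by the row hash of B2 and
-- sorted into spool, so the join can proceed as a rowhash match scan
-- against TableA, whose rows are already in A1 row-hash order.
```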

This session gives a close-up picture of what AMP worker tasks are and how they support user work.

Hi,
I am not sure if this is the right category for the issues we are currently facing. Unity Director was implemented at our client together with the Teradata database implementation. I am not sure if this is a verified bug, but below is the scenario we are facing.

Everyone with a Teradata Database collects statistics.  No way around it.  Good query plans rely on it, and we’ve all gotten used to it.

But starting in Teradata Database 14.10 there are some big changes.  Numerous enhancements to the optimizer, some important database changes, a new set of APIs, an added data dictionary table, and a brand new Viewpoint portlet all combine to produce the Automated Statistics (AutoStats) feature.  Used fully, AutoStats can identify statistics the optimizer looked for and found missing, flag unused statistics that you no longer need to collect, prioritize and execute the stats collection statements, and organize and run your statistics collection jobs in the most efficient way possible.

There’s a lot going on and a lot of choices involved.  So what’s the easiest way to start getting familiar with AutoStats?  And what’s required with the new DBQL logging options?  If you like to build up confidence in a new feature at your own pace, this blog posting offers a few simple steps designed to introduce you to the key features and benefits of AutoStats.
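As a flavor of the DBQL side, the object-use logging that feeds AutoStats is switched on per database with a QUERY LOGGING option; a sketch (the database name is a placeholder, and the exact option list should be verified against the 14.10 documentation):

```sql
-- Log object use counts for the Sales database; AutoStats reads these
-- counts to spot unused statistics and heavily accessed objects.
BEGIN QUERY LOGGING WITH USECOUNT ON Sales;

-- And to turn the logging off again:
END QUERY LOGGING WITH USECOUNT ON Sales;
```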

Hi All
I am having problems while loading data from Oracle to TD14. In most cases the problem is with data types such as NUMBER(p,s), LONG RAW, and NVARCHAR2, including Unicode, multi-byte data, and different character sets.

Can the following query be rewritten so it does not have to use UNION?  Thanks.
 
SELECT 1 AS FCST_MONTH_KEY, SUM(s.MDU_Plan), SUM(s.Bulk_Plan), SUM(s.DTH_Plan), SUM(s.CP_Plan), SUM(s.D2_Lite_Plan), SUM(s.UNKNOWNPLAN)

FROM 
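The query is truncated here, but the usual pattern for removing a UNION of per-key aggregates is a single scan that derives the key and groups on it; a hedged sketch (the table name is a placeholder, and it assumes FCST_MONTH_KEY can be derived from the data rather than hard-coded per branch):

```sql
-- One scan with GROUP BY instead of one UNION branch per literal key.
SELECT
  s.FCST_MONTH_KEY,                  -- assumed derivable from a column
  SUM(s.MDU_Plan), SUM(s.Bulk_Plan), SUM(s.DTH_Plan),
  SUM(s.CP_Plan),  SUM(s.D2_Lite_Plan), SUM(s.UNKNOWNPLAN)
FROM SalesPlan s                     -- hypothetical table name
GROUP BY s.FCST_MONTH_KEY;
```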

Hi Guys
I am working on a migration from Oracle to Teradata. I am facing a few issues. I would be glad if someone could please clarify the questions below.
 
1.

Hi

 

There may already be a discussion in this forum about Hash and Join Indexes.

Still, I am creating this post because I want to clarify a few queries for my own understanding.

 

I am creating a HASH INDEX: here the PI of the table is (emp_id)

 

Is the Teradata Database Express Windows version still available to download to a local machine for practice and hands-on work, or has it become obsolete?
I can find the TD Express VMware version but not any Windows (32-bit) version.

Hi all

 

I am using Teradata Express 13.10 and I am trying to write a stored procedure that makes multiple references to a dataset defined using a common table expression.

 

I have a test case below:

 

Hi All,
Can anyone tell me how to find the index size and table size using the Ferret utility?
Also, what other methods are there to find index size, apart from creating the table with and without the index and then taking the difference of the two?

Hello All,
I have created a dummy table and inserted one record into it. When I simply select that record, it gives me an unwanted leading space in the answer set.

CREATE TABLE dummy (a INTEGER);

INSERT INTO dummy VALUES (986);

 

SELECT '(' || TO_CHAR(a+1) || ')' FROM dummy;
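One likely explanation, offered as a guess: Oracle-style numeric formatting reserves a leading position for the sign, so positive numbers come back with a leading blank. A TRIM works around it:

```sql
-- TRIM removes the blank reserved for the sign of positive numbers.
SELECT '(' || TRIM(TO_CHAR(a + 1)) || ')' FROM dummy;
-- With a = 986 this should yield (987) rather than ( 987).
```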

 

How do I combine multiple Select queries?

I have 7 queries that run separately. Each query returns 2 columns: a count (of something) and a date.

The date range is fixed for each query, set to 1 week only.
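A common way to combine several per-week count queries into one result set is UNION ALL, optionally tagging each branch; a sketch with hypothetical table and column names:

```sql
-- Each branch is one of the original queries; the src column is
-- optional but makes the origin of each row visible.
SELECT 'query1' AS src, COUNT(*) AS cnt, event_date
FROM   table1
WHERE  event_date BETWEEN DATE '2013-01-01' AND DATE '2013-01-07'
GROUP  BY event_date
UNION ALL
SELECT 'query2', COUNT(*), event_date
FROM   table2
WHERE  event_date BETWEEN DATE '2013-01-01' AND DATE '2013-01-07'
GROUP  BY event_date
-- ... repeat for the remaining five queries ...
ORDER  BY event_date, src;
```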

SEL  CAD.CUST_ACCT_KEY, CATD.CUST_ACCT_MKT_DESC,  CASD.CUST_ACCT_STAT_CODE, ETD.RCVR_TYPE_DESC, ETD.EQUIP_BUS_CAT_DESC  

 

FROM CUST_ACCT_DIM AS CAD

  JOIN CUST_ACCT_STAT_DIM AS CASD 

     ON CAD.CURR_CUST_ACCT_STAT_KEY = CASD.CUST_ACCT_STAT_KEY

     

  JOIN CUST_ACCT_TYPE_DIM AS CATD 

How can I merge these two SQL statements into one query?

 

SELECT MONTH_BGN_DATE 

 FROM MONTHDAY_DIM 

 WHERE MONTHDAY_KEY = ADD_MONTHS(DATE, -4)

 

 SELECT  MONTH_END_DATE 

 FROM MONTHDAY_DIM WHERE 

 MONTHDAY_KEY = ADD_MONTHS(DATE, -3)

 

thanks.
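Since each query returns a single value for a different key, one option is conditional aggregation over a single scan; a sketch using the names from the question:

```sql
-- One pass over MONTHDAY_DIM instead of two separate selects.
SELECT
  MAX(CASE WHEN MONTHDAY_KEY = ADD_MONTHS(DATE, -4)
           THEN MONTH_BGN_DATE END) AS bgn_dt,
  MAX(CASE WHEN MONTHDAY_KEY = ADD_MONTHS(DATE, -3)
           THEN MONTH_END_DATE END) AS end_dt
FROM MONTHDAY_DIM
WHERE MONTHDAY_KEY IN (ADD_MONTHS(DATE, -4), ADD_MONTHS(DATE, -3));
```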

 

Hi All,
I have a table with policy number, status code, and status date.
I want to get the difference between the status dates for each policy (each policy will have more than one record, with different status codes) after sorting by status date within each policy.
The difference between the status dates should only consider certain status codes.
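One hedged approach uses an ordered analytic function to pair each qualifying status row with the previous one for the same policy; the table, column names, and status codes below are assumptions:

```sql
-- For each policy, order the qualifying rows by status_date and
-- subtract the previous status_date from the current one.  The
-- one-row window (1 PRECEDING to 1 PRECEDING) acts as a LAG.
SELECT
  policy_no,
  status_code,
  status_date,
  status_date
    - MIN(status_date) OVER (PARTITION BY policy_no
                             ORDER BY status_date
                             ROWS BETWEEN 1 PRECEDING AND 1 PRECEDING)
      AS days_since_prev_status
FROM policy_status
WHERE status_code IN ('A', 'B');   -- only the status codes of interest
```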

Hi All,
 
Normally, we can find a row number based on the ordering of column values using the ROW_NUMBER() function:
select row_number() over(order by col1) from tablename;
Is there any way in Teradata to get the row number based on the order/time of insertion rather than on the ordering of the columns' contents?
Thanks in advance.
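Teradata does not record insertion order, so ROW_NUMBER() cannot recover it after the fact; a common workaround is to capture arrival order at load time with an identity column or a timestamp default. A sketch with hypothetical names:

```sql
-- The identity value (or insert timestamp) records arrival order so
-- it can later drive ROW_NUMBER().  Note: identity values are handed
-- out in batches per vproc, so they reflect rough, not strict, order.
CREATE TABLE event_log (
  seq_no   INTEGER GENERATED ALWAYS AS IDENTITY,
  ins_ts   TIMESTAMP(6) DEFAULT CURRENT_TIMESTAMP(6),
  payload  VARCHAR(100)
) PRIMARY INDEX (seq_no);

SELECT ROW_NUMBER() OVER (ORDER BY seq_no) AS insertion_rank, payload
FROM event_log;
```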

Has anyone successfully used the Teradata driver with LDAP authentication in the ETL tool Talend?

How can I rewrite the following query to avoid a join condition like this:
AND SOM2.Address = SOM.ADDR_LINE1 || ' ' || ADDR_LINE2 ||' ' || CITY || ' ' || STATE || ' ' || ZIP_CODE
 
Query:

SEL 

SOM.CUST_ACCT_KEY, 

ORD_ID, 

EQUIPDESC,

SHIP_METH_DESC, 

D_DESC, 

SITE, 

Collecting statistics on data dictionary tables is an excellent way to tune long-running queries that access multi-table dictionary views.  Third party tools often access the data dictionary several times, primarily using the X views.  SAS, for example, accesses DBC views including IndicesX and TablesX for metadata discovery.  Without statistics, the optimizer may do a poor job in building plans for these complex views, some of which are composed of over 200 lines of code.

In an earlier blog posting I discussed the value of collecting statistics against data dictionary tables, and provided some suggestions about how you can use DBQL to determine which tables and which columns to include.  Go back and review that posting.  This posting is a more comprehensive list of DBC statistics, updated to include those recommended for JDBC.

Statistics about tables, columns and indexes are a critical component in producing good query plans.
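As a flavor of what such collections look like, a couple of commonly recommended dictionary statistics (the authoritative list should come from the posting's full recommendations):

```sql
-- Ordinary COLLECT STATISTICS statements against dictionary tables
-- that back the X views; run by a user with access to DBC.
COLLECT STATISTICS ON DBC.TVM      COLUMN (DatabaseId, TVMName);
COLLECT STATISTICS ON DBC.TVFields COLUMN (TableId, FieldName);
COLLECT STATISTICS ON DBC.Dbase    COLUMN (DatabaseName);
```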

 

Merge Statement Error code 5758

 

 

Could someone please help with this? I am getting the error below while executing a MERGE statement on Teradata 13.10.

 

 

 

Error :- Teradata code 5758 The search condition must fully specify the Target table primary index and partition column(s)

 

In earlier postings I’ve described how TASM system events can detect such things as AMP worker task shortages and automatically react to change workload management settings.  These system events tell TASM to look out for and gracefully react to resource shortages without your direct intervention, by doing things like temporarily adjusting throttle limits downwards for the less critical work.

This switchover happens as a result of TASM moving you from one Health Condition to another and, as a result, from one state to another.  But how does this shift to a new state actually happen?  And under what conditions will you be moved back to the previous state?

Hi,
My source data has only a two-digit year. When I try to convert it to a timestamp:
sel cast('12/12/12 00:00:00' as timestamp(0) format 'MM/DD/YYBHH:MI:SS')
the output is
12/12/1912 00:00:00
I also tried with current_timestamp(0), e.g.:
sel cast(current_timestamp(0) as timestamp(0) format 'MM/DD/YYBHH:MI:SS')
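With a two-digit year, the 'YY' format element maps into the 1900s; one hedged workaround is to splice the intended century into the string before casting (the '20' is an assumption about the data):

```sql
-- Insert the century so the cast can use a four-digit-year format.
SELECT CAST(SUBSTR('12/12/12 00:00:00', 1, 6) || '20' ||
            SUBSTR('12/12/12 00:00:00', 7)
       AS TIMESTAMP(0) FORMAT 'MM/DD/YYYYBHH:MI:SS');
-- should yield 2012-12-12 00:00:00
```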


 

Help on Tuning

I have created the set of tables below.

 

 

CREATE SET TABLE EDMSIT01_T3D_FND.DIM_PARTY ,NO FALLBACK ,

NO BEFORE JOURNAL,

NO AFTER JOURNAL,

CHECKSUM = DEFAULT,

DEFAULT MERGEBLOCKRATIO

(

Party_Id INTEGER NOT NULL,

Statistical information is vital for the optimizer when it builds query plans.  But collecting statistics can involve time and resources.  By understanding and combining several different statistics gathering techniques, users of Teradata can find the correct balance between good query plans and the time required to ensure adequate statistical information is always available.

This recently updated compilation of statistics collection recommendations is intended for sites that are on any of the Teradata 14.0 software release levels.  Some of these recommendations apply to releases earlier than Teradata 14.0; however, some rely on new features available only in Teradata 14.0.

Contributors:  Carrie Ballinger, Rama Krishna Korlapati, Paul Sinclair, February 12, 2013

As you approach full use of your AMP worker tasks, monitoring in-use counts becomes a more important exercise.  The ResUsageSAWT (SAWT) table provides a wealth of information about AWT usage when you need to dive into details.

ResUsageSAWT provides you with several categories of information, including AMP worker task usage and high-water marks broken out by work type, as well as message queue depth/ flow control frequency metrics. 
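A starting query against the table might look like the sketch below; the column names are taken from the ResUsageSAWT layout as generally documented, and should be verified against your release's Resource Usage Macros and Tables manual:

```sql
-- AWT high-water marks and flow-control activity per AMP and interval.
SELECT TheDate, TheTime, NodeID, VprId,
       InuseMax,           -- peak AWTs in use during the interval
       MailBoxDepth,       -- queued messages awaiting an AWT
       FlowCtlCnt          -- times the AMP entered flow control
FROM   DBC.ResUsageSAWT
ORDER  BY TheDate, TheTime, NodeID, VprId;
```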

I'm relatively new to Teradata and we're trying to get off on the right foot.  As some background, we have 2 Teradata 2690 Appliances.  One is for dev and test.  The other is for production.  In our current data warehouse, we have multiple unrelated star schemas.  Our goal is to forklift over the data in the existing format, with the long term goal of imp

Most of us are aware of AMP worker tasks, and some of us are even obsessive about not running out of this finite resource.  But that’s on the AMP.  What about the tasks supporting user work on the parsing engine (PE)?  Should we be just as vigilant about what’s happening with tasks at the PE level?

Not really.  Here’s why.

I have a table (I'll call this DTM) which has a column of the type FLOAT (I'll call this VAL).  This table currently includes rows in which the value of that column is NaN (not a number).  We are inserting these values through DataStage / Java, which is taking the values of "NaN", "+Infinity" and "-Infinity" (all legit IEEE 754 floats) from a text file an

I gave a presentation on AMP worker tasks at the Teradata User Group conference last week in Washington DC.   A question came from someone in the audience concerning FastExport jobs in response mode, and whether or not they were holding AMP worker tasks.  This post addresses that question.

I read through the forum and couldn't get a clear answer.  I am new to Teradata.  I have about 2000 views I need to break down.  Some views use multiple tables.  I can get the column names of a view but not the table name and columns together.  Could someone provide a statement to retrieve:

Due to a new initiative we are now hiring ~25+ Teradata professionals for the following positions in the Richmond, VA area:
---------------------------------------------
Teradata BTEQ Application Developer
Link to review and apply:

Teradata 14.0 offers some very helpful enhancements to the statistics collection process.   This posting discusses a few of the key ones, with an explanation of how these enhancements can be used to streamline your statistics collection process and help your statistics be more effective.

How do you pick the number of sessions to assign to your utility jobs?  Chances are you guess.  In Teradata 13.10 the task of deciding the number of sessions has been moved inside the database, meaning one less thing for you to worry about.  Read on for a quick introduction to how this feature works.

Good Afternoon,

My question is this: what I want to do is, for a given date, determine the last day of that date's month, which depends on the month.

For example I get:

Starting in Teradata 13.10, there is a single delay queue for all throttles.  This means that queries delayed by system throttles will reside in the same queue as queries delayed by workload throttles.  In earlier releases, delay queues were set up independently by type of throttle, and each workload throttle had its own dedicated queue.   

Bringing together all delayed objects into a single queue streamlines the entire throttling experience and makes it easier and more accurate to manage internally.  However, as a side effect, the DelayTime field in DBQL needs a second look.  DelayTime requires a slightly different interpretation in 13.10 than it did in earlier releases.

The Teradata Temporal feature, available with Teradata 13.10, allows the customer to capture, track, and analyze the full history of evolving business data rather than just the most current updates.
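A minimal sketch of what the feature looks like in practice (table and column names are hypothetical; the valid-time syntax should be checked against the 13.10 Temporal documentation):

```sql
-- A valid-time table: the database tracks each row's period of
-- validity and rewrites temporal queries against it automatically.
CREATE MULTISET TABLE policy_vt (
  policy_no  INTEGER NOT NULL,
  premium    DECIMAL(10,2),
  validity   PERIOD(DATE) NOT NULL AS VALIDTIME
) PRIMARY INDEX (policy_no);

-- "What did this policy look like on a past date?"
VALIDTIME AS OF DATE '2013-01-01'
SELECT * FROM policy_vt WHERE policy_no = 1001;
```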

There’s a comment on an earlier blog posting of mine from 2009 titled Controlling the Flow of Work in Teradata.  The comment poses a question that is more reasonably answered by making second posting on flow control. 

Hi everyone,
I have a problem with the virtual machine (VMware) demo of the TD database, which was working fine until yesterday.
For my purposes I need the ResUsage tables to collect data, and while experimenting with changing the logging rate in CTL from 60 to 600 seconds, the tables stopped collecting rows.
I have restarted the virtual machine many times, but nothing seems to happen; every enabled ResUsage table no longer collects any data!

 

I need to get the query result in the form of a dataset/resultset/tabular format, just like when we get the result from executing a query, so that I can bind the result in my application.

 

I have some Oracle SQL queries which I need to convert to be Teradata-compatible.

I am using Teradata database version 13.10.

The queries use the REPLACE function, but there is no REPLACE function in Teradata. I also saw that if I install the UDF utility, I can get the Oracle functions, like oReplace.
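Once the Oracle UDF package is installed, the call looks like the sketch below (the oReplace signature follows its common documentation; verify it in your installation):

```sql
-- Replace every occurrence of a substring, Oracle-REPLACE style.
SELECT oReplace('ORACLE SQL', 'ORACLE', 'TERADATA');
-- should return 'TERADATA SQL'
```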

A quintillion is 10 raised to the power of 18 (that is, 1 followed by 18 zeros).  9.2 quintillion (or more precisely, 9,223,372,036,854,775,807) is the new limit in Teradata Database 14.0 for the number of combined partitions.  Note that this is the same as the maximum BIGINT value (which is a signed 8-byte integer).  While tables that are defined with 65

AMP worker tasks are the dedicated tasks inside each AMP that get the database work done.  As there are a limited number, this important resource is actively monitored by DBAs on busy systems.  TASM system-level events can take care of the monitoring for you and, based on a threshold you have set, will trigger changes in the workload management setup to manage the AWT shortage.  How AMP worker task events do their monitoring has changed in Teradata Database 13.10, and this posting describes those changes.

Hello,

Statistical information is vital for the optimizer when it builds query plans.  But collecting statistics can involve time and resources.  By understanding and combining several different statistics gathering techniques, users of Teradata can find the correct balance between good query plans and the time required to ensure adequate statistical information is always available.

The following statistics collection recommendations are intended for sites that are on any of the Teradata 13.10 software release levels.  Most of these recommendations apply to releases earlier than Teradata 13.10, however some may be specific to Teradata 13.10 only.