
With row partitioning (for a PPI or column-partitioned table), the Teradata Database makes sure rows are placed in their appropriate partitions.  When the row partitioning for the table is altered, rows may need to move from one partition to another so they are in their appropriate partitions according to the altered partitioning.  The partitioning expression must be deterministic (always computes the same value for the same value of the partitioning column) to properly place and retrieve the row.

For instance, if the table has RANGE_N partitioning, ALTER TABLE will automatically move rows from NO RANGE to newly added ranges as needed.  Also, if a range is dropped, rows from that partition are moved to the NO RANGE (or NO RANGE OR UNKNOWN) partition, or to a newly added range that covers the dropped range.
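As a concrete illustration, here is a minimal sketch (table and column names are hypothetical) of extending RANGE_N partitioning so that rows migrate between partitions:

-- Hypothetical table partitioned by month, with a NO RANGE catch-all.
CREATE TABLE sales_hist
( txn_id   INTEGER NOT NULL
, txn_date DATE NOT NULL
, amount   DECIMAL(10,2)
)
PRIMARY INDEX (txn_id)
PARTITION BY RANGE_N(txn_date BETWEEN DATE '2012-01-01' AND DATE '2012-12-31'
                     EACH INTERVAL '1' MONTH,
                     NO RANGE);

-- Extending the ranges: rows that previously landed in NO RANGE but fall
-- inside the new ranges are moved into them automatically.
ALTER TABLE sales_hist
MODIFY PRIMARY INDEX
ADD RANGE BETWEEN DATE '2013-01-01' AND DATE '2013-12-31'
          EACH INTERVAL '1' MONTH;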

Hi,

How can I display counts and subcounts in a single SELECT query?

Source data set:

Table 1:

cat1 = 101
cat2 = 102
cat3 = 103

Table 2:

Id   Date         catg
12   2012-01-02   101
12   2012-01-02   101
...  (about 9 records like this)
12   2012-01-02   103
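One way to get per-category counts plus a grand total in a single statement is GROUP BY ROLLUP; a minimal sketch, assuming a table named table_2 shaped like the data above:

-- Per-catg rows are the subcounts; the row where catg is NULL is the total.
SELECT catg
     , COUNT(*) AS cnt
FROM table_2
GROUP BY ROLLUP (catg)
ORDER BY 1;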

Don’t miss this opportunity for an in-depth discussion about the “new” Priority Scheduler for Teradata 14, available for the Linux SLES 11 operating system.

Surprisingly, I'm not finding many examples of this out there.

I have a table like:

id  foo
--  ----
1   yes
2   yes
3   no

I want a one-row result set like:

bar
----
1: yes, 2: yes, 3: no
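One way to build such a delimited list is XMLAGG; a minimal sketch, assuming Teradata 14.0 or later (where the XML type is available) and a table named my_table with the columns shown above:

-- Concatenates 'id: foo' pairs in id order, then trims the trailing comma.
SELECT TRIM(TRAILING ',' FROM
         (XMLAGG(TRIM(CAST(id AS VARCHAR(10))) || ': ' || TRIM(foo) || ','
                 ORDER BY id)
          (VARCHAR(10000)))) AS bar
FROM my_table;

On releases without XML support, a recursive query can assemble the same string.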

 

Accurately identifying how much a table has grown since statistics on a column were last collected is an important prerequisite for the statistics extrapolation process.  Once table growth has been detected, stale statistics can be adjusted accordingly.  Extrapolations based on table growth will be more accurate if statistics are being regularly collected on the system-derived column PARTITION, whether or not a table is partitioned.
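The collection itself is a single statement; a minimal sketch, with a hypothetical table name:

-- Cheap to collect, and useful for growth detection even on nonpartitioned tables.
COLLECT STATISTICS ON mydb.sales_hist COLUMN PARTITION;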

I know of a Teradata site that chose to use global temporary tables (GTTs) with their tactical queries in an attempt to get good performance against their fact table. However, there are some known limitations in doing that which I want to share with you.

Can anyone explain to me the difference between a hash join and a product join? Based on what constraints and conditions does the optimizer select them?

Please elaborate.

Q: Can we store flat files in the Teradata data warehouse?

Ans: No.

Reason: Since Teradata is an RDBMS, it stores all data in tables only; it can't store files in the warehouse.

Please suggest whether my assumption is correct or not.

 

 

Hi dude,

I'm going to copy a database between my two Teradata systems. I got the ARCPIPE file successfully, but when I try to copy it to the new database, I get this:

Fact: Teradata loads data into a table in parallel.

Interpretation:
I have created a table EMP with a UPI on the Emp_id column. If we load some 1 million rows into this table EMP, the load happens in parallel.

CREATE SET TABLE EMPLOYEE_TABLE
(
EMPLOYEE INTEGER NOT NULL
,DEPT INTEGER
,FIRST_NAME VARCHAR(20)
,LAST_NAME CHAR(20)
,SALARY DECIMAL(10,2)
)
PRIMARY INDEX (EMPLOYEE)
PARTITION BY DEPT;

Reserving AMP worker tasks (AWT) for tactical applications is a technique to protect business-critical, short queries when the platform is experiencing AMP worker task exhaustion.  If you are thinking about reserving AWTs, there are two different settings for which you will be required to provide values.  This posting discusses what these two settings control.

The FastExport utility is usually seen as a batch-oriented job to use when you need to return large numbers of rows from the database to a user.   It is the ideal tool for efficiently and quickly returning large answer sets, sorted or otherwise, back to the client from the database.   That’s why it was named the way it was:  “FastExport”.  

Hi, could anyone please tell me the maximum number of base tables we can create in one database in TD12?

Thank you

Hi friends,

Please kindly help me resolve this issue.

I have written a BTEQ script that has a few INSERT DMLs and one UPDATE DML for a particular table.
When I execute the BTEQ script, it runs successfully and returns error code 0; in the log file I can even see the number of rows affected for each DML.
The issue: the INSERT DML records are committed to the database table, but the last UPDATE DML shows as executed successfully and displays rows affected in the log, yet when I query the database table, the rows have not been updated.
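A common cause is running the session in ANSI transaction mode, where nothing is made permanent without an explicit COMMIT. A minimal BTEQ sketch (logon values, table, and DML are placeholders) that commits explicitly and checks the return code:

.SET SESSION TRANSACTION ANSI
.LOGON tdpid/user,password

UPDATE mydb.mytab SET col2 = 'b' WHERE col1 = 1;

/* In ANSI mode, changes are not permanent until an explicit COMMIT. */
COMMIT WORK;

.IF ERRORCODE <> 0 THEN .QUIT 8
.LOGOFF
.QUIT 0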

I was recently asked about NO RANGE, UNKNOWN, and NO RANGE OR UNKNOWN partitions for the RANGE_N function.  While these have been available since TD V2R5.0, there is sometimes confusion about what they mean and when to use or not use them.
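For reference, a minimal sketch of a RANGE_N partitioning expression that declares both catch-all partitions (table and columns are hypothetical):

CREATE TABLE orders
( order_id INTEGER NOT NULL
, order_dt DATE
)
PRIMARY INDEX (order_id)
PARTITION BY RANGE_N(order_dt BETWEEN DATE '2011-01-01' AND DATE '2011-12-31'
                     EACH INTERVAL '1' MONTH,
                     NO RANGE,   -- rows whose order_dt falls outside all ranges
                     UNKNOWN);   -- rows whose order_dt is NULL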

Have you ever wondered how many AMP worker tasks (AWT) were actually being used during an archive or a restore?  You're not alone.  Is it one per session?  Could it be one per AMP?  Here's how it works.

Hi All,
Yep, I know this might sound too easy, but I was asked this question and responded with "It does occupy". But I couldn't say how many bits, if so.

Target Table : xyz

CO_ID VARCHAR(3) CHARACTER SET LATIN NOT CASESPECIFIC,
CLI_ID VARCHAR(11) CHARACTER SET LATIN NOT CASESPECIFIC,

Mload Script :

.LAYOUT L_ins ;

.FIELD CO_ID * VARCHAR(200);
.FIELD CLI_ID * VARCHAR(200);

.DML LABEL INS;

INSERT INTO xyz
(
CO_ID
,CLI_ID
)
VALUES
(
:CO_ID
,:CLI_ID
);

.IMPORT INFILE E:\sdfs\sdfs\file_dsfsdfs.dat
FORMAT VARTEXT '~'
LAYOUT L_INS
APPLY INS;

Error:

========================================================================
=                                                                      =
=                      MultiLoad Acquisition Phase                     =
=                                                                      =

Hi friends,

Hope you are doing well.
An issue was raised in my project regarding a BTEQ script.
Our environment is Teradata on the Windows platform.
I want to capture the final return code of a BTEQ script executed from the Windows command prompt.
I learned from the forums how this can be accomplished within a Unix shell script by evaluating the return value with $?.
Is there any built-in environment variable for BTEQ to store the return code, as we have in MLOAD (&sysrec)?
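On Windows, BTEQ sets the process exit code, which the command prompt exposes as %ERRORLEVEL%; a minimal batch sketch (script and log names are placeholders):

bteq < myscript.bteq > myscript.log 2>&1
echo BTEQ return code: %ERRORLEVEL%
if %ERRORLEVEL% NEQ 0 exit /b %ERRORLEVEL%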

Have you ever wanted two levels of control on load utilities?  More specifically, have you ever wanted to limit how many load utilities a subset of users are able to run? This was not possible before, but it is something that is supported in the Teradata 13.10 release.  Let me explain how this works.    

Statistics about tables, columns, and indexes are a critical component in producing good query plans. This session will examine the differences between random AMP sampling, full statistics collection, and USING SAMPLE collection, and will share details about how each of these three approaches works internally.  Statistics histograms, their uses and limitations, will be explored.  You'll get some insights into collecting statistics on multiple columns, on indexes, and on the system-derived column PARTITION.  Statistics extrapolation will be touched on briefly, and the session will wrap up with some general recommendations for collecting statistics.

Hi friends,
Recently someone asked me how to control data distribution in Teradata.
His point is this: if my Teradata database has 10 AMPs and a UPI is created on a column of Table A, then all rows of Table A are evenly distributed across all 10 AMPs.
If a NUPI is created on a column of Table A, then all duplicate rows are stored on the same AMP.

We know that even if the user does not specify a UPI or NUPI when creating Table A, Teradata will create a primary index for Table A to distribute the data among all 10 AMPs.
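To see how the rows of a table actually distribute, a common idiom is to group by the AMP that each row's primary index value hashes to; a minimal sketch with hypothetical names:

-- One output row per AMP that holds data; skew shows up as uneven row_cnt.
SELECT HASHAMP(HASHBUCKET(HASHROW(pi_col))) AS amp_no
     , COUNT(*) AS row_cnt
FROM mydb.table_a
GROUP BY 1
ORDER BY 1;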

Hi, I am trying to import some data from an Oracle database. I used to extract the data from Oracle, convert it to .csv, and then use the import function in SQL Assistant. However, this time the amount of data is so large that this is quite time consuming. Is there any method that can export directly, without going through the .csv step? Thanks.

Hi friends,

Can anyone help me understand the "junk dimension" concept, with an example?

Thanks in advance.

Hi,

I am getting an error while executing a simple INSERT ... SELECT statement. I have a table Product_Change in the NewDB database and a table PC in the OldDB database.

insert into NewDB.Product_Change
(
change_app_code ,
change_number ,
abstract ,
Author_Person_App_Code ,
Author_Person_Id ,
owner_Person_App_Code ,
owner_Person_ID ,
Created_Datetime ,
Last_Change_Datetime ,
TARGET_CLOSE_DATETIME ,
Close_Reason_Code ,
Class_Code ,
Priority_Code ,
Customer_Impact_Code ,
Found_In_Activity_Code ,
Last_Change_Login_ID ,

Difference between the features of V2R5, TD12, and TD13?

Hi Teradata professionals,

Can anyone please tell me the differences between the Teradata versions
V2R5 -------------- TD 12 ------------------- TD 13

(mainly from a developer's perspective)?

Thanks in advance!

If you are planning on using Priority Scheduler CPU limits to restrict the amount of CPU available to the system after a hardware upgrade, there are two questions you will need to answer up front.

This session will focus on block-level compression (BLC), how it works, what compression rates you can expect, and where it is appropriate to define. Examples of the impact on CPU usage and elapsed time when queries access compressed tables will be shared, and the overhead while performing different database operations on these tables will be explored. Plenty of tips and techniques for successful use of BLC are offered.

Long before TASM, there were throttles.  We call them “system throttles” or “object throttles” today in order to differentiate them from TASM’s workload throttles.

In the past, you, like many people, probably considered 62 AMP worker tasks (AWT) as the logical limit that can be used to support user work at any one point in time.

Hi,
I am new to Teradata. My objective is to retrieve the last word in a sentence without hard-coding. I have the SQL query for Oracle but don't know how to accomplish the same in Teradata SQL. Here is the Oracle SQL syntax for retrieving the last word in a sentence:

SELECT SUBSTR(sentence_column, INSTR(sentence_column, ' ', -1) + 1)
FROM Text_table;

Column: sentence_column, data type: VARCHAR2(100)
Example data:
1 All information must be store in table as data values
2 All views that are theoretically updateable are also updateable by the system.
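For comparison, a minimal Teradata sketch, assuming Teradata 14.0 or later where REGEXP_SUBSTR is available:

-- Returns the final run of non-space characters, i.e. the last word.
SELECT REGEXP_SUBSTR(sentence_column, '[^ ]+$')
FROM Text_table;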

Have you ever tried to figure out ahead of time how many CPU seconds an application would require after upgrading to a new hardware platform?    I talked about one approach to solving this problem at the recent Partners Conference in San Diego, and would like to share my approach with you.

Hi there, I keep losing what I type out here on this site. I am attempting to retrieve only hours:minutes:seconds from a database table and column: PAT_ENC.CHECKIN_TIME. This is a DateTime field. Displayed as:
2010-09-15 12:45:00
2010-09-16 05:15:15

I need to retrieve only:
12:45:00
05:15:15

I also need to display this in a CASE statement. Something like the following:
CASE
when Pat_enc.Checkin_time between '18:30:00' and '19:29:00' then '6:30-7:29'
when Pat_enc.Checkin_time between '19:30:00' and '20:30:00' then '7:30-8:30'
END Shift_group
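One approach, sketched below assuming CHECKIN_TIME is a TIMESTAMP column, is to cast it to TIME(0) and compare against TIME literals:

SELECT CAST(pat_enc.checkin_time AS TIME(0)) AS checkin_tod
     , CASE
         WHEN CAST(pat_enc.checkin_time AS TIME(0))
              BETWEEN TIME '18:30:00' AND TIME '19:29:00' THEN '6:30-7:29'
         WHEN CAST(pat_enc.checkin_time AS TIME(0))
              BETWEEN TIME '19:30:00' AND TIME '20:30:00' THEN '7:30-8:30'
       END AS shift_group
FROM pat_enc;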

There is a BI tool-generated SQL query which (full) joins 30 smaller subqueries of the following form:

(select a14.Store_ID Store_ID, sum(a11.CountConnections) WJXBFS1
from Retail_Views.F_ACM_Connections a11, ....
where filterset = 1) pa11

(select a14.Store_ID Store_ID, sum(a11.CountConnections) WJXBFS1
from Retail_Views.F_ACM_Connections a11, ....
where filterset = 2) pa12

....

into a query like:

select coalesce(pa11.Store_ID, ..., pa130.Store_ID) Store_ID, a132.StoreName StoreName, max(pa11.WJXBFS1) WJXBFS1, ..., max(pa130.WJXBFS1) WJXBFS30
from ....

Hi,

We have a lot of tables with skew factors of up to 98%.

We are thinking of using NoPI, i.e., allowing a table with no primary index to act as a staging table; data can be loaded into such a table much more efficiently and therefore faster.
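Declaring one takes a single clause; a minimal sketch with hypothetical names (NoPI tables must be MULTISET):

CREATE MULTISET TABLE mydb.stg_load
( col1 INTEGER
, col2 VARCHAR(50)
)
NO PRIMARY INDEX;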

We have a table:

CREATE MULTISET TABLE NZL.200 ,NO FALLBACK
(
C2O DECIMAL(1,0),
C2PS_POUND DECIMAL(6,0),
C2R DECIMAL(1,0),
C2S DECIMAL(1,0),
C2SAL DECIMAL(3,0),
C2STS CHAR(1) CHARACTER SET LATIN NOT CASESPECIFIC,
C2TYP CHAR(1) CHARACTER SET LATIN NOT CASESPECIFIC,
COR DECIMAL(1,0),

alter table mydb.rtl_transactions drop sales_amt;
 *** Failure 2631 Transaction ABORTed due to deadlock.
                Statement# 1, Info =0

Getting this error while dropping a column. Please suggest how to avoid the deadlock.

Need strictly a Teradata DBA.
Must conduct day-to-day Teradata DBA tasks.
Must have physical modeling experience.
Must have excellent communication skills.

Should have at least 5+ years experience.

Please forward your rate requirement, resume, and contact information to:

Michael White
OSPRO SYS

How can I select values from a table in SQL using a "column name" and primary-key (row) information?

For example, Table1 contains 3 columns (TrackID, var1, and var2) with values:

Table1
__________________
TrackID  var1  var2
1        3     1.2
2        2     1
3        8     2.3

Table2 has Attribute (a "column name" of Table1) and TrackID. How can I select the values from Table1 based on the input from Table2?

Table2
________

Attribute  TrackID
var1       1
var1       2
var1       3
var2       1
var2       2
var2       3

The result will be

Result
----------
3
2
8
1.2
1
2.3
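Since SQL cannot treat a column name stored as data as a live column reference, one approach is a join plus a CASE expression that maps each Attribute value to the corresponding column; a minimal sketch using the tables above:

SELECT CASE t2.Attribute
         WHEN 'var1' THEN t1.var1
         WHEN 'var2' THEN t1.var2
       END AS val
FROM Table2 t2
JOIN Table1 t1
  ON t1.TrackID = t2.TrackID
ORDER BY t2.Attribute, t2.TrackID;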

I’ve mentioned it before, Marcio has blogged about it, customers have brought it up at the Partners Conferences. It’s cheap, fast, risk-free, with immediate results. But some of you are still not getting it. Or it could be you’re one of the few who truly don’t need it.

Time is one of the most powerful dimensions a data warehouse can support. Unfortunately it’s also one of the most problematic. Unlike OLTP environments that focus only on the most current versions of reference data, Data Warehouse environments are often required to present data not only as it currently exists, but also as it previously existed. Implemented correctly, a data warehouse can support several temporal orientations, the three most common being “current,” “point-in-time,” and “periodic.” Implemented incorrectly, you will create a solution that will be impossible to maintain or support.

I want to batch-sync data among four databases. Which software would be most effective? Can you give me any ideas?

Collecting full statistics involves scanning the base table and performing a sort to compute the number of occurrences for each distinct value. For some users, the time and resources required to adequately collect statistics and keep them up-to-date is a challenge, particularly on large tables.

Last month I talked about things that don’t honor a CPU limit, and explained what a CPU limit is. This month I’d like to look at CPU limits from a slightly different perspective—What happens when you define CPU limits at multiple levels?

Data Warehouses often contain data in effect for a time period, denoted by Start and End dates within a row. With time-period data comes a set of frequently asked business questions. For example, of home, auto, and life insurance policies, for what ranges of time have my customers 1) had zero policies active, or 2) at least one active, or 3) multiple active? Solutions to business questions of this type are not obvious using SQL. However, this session will show how to solve this class of problems efficiently with SQL instead of using stored procedures or exporting to another tool. See unique ways to apply correlated subqueries and the Ordered Analytical Functions SUM(), ROW_NUMBER(), and MIN() to this class of problems.
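As a flavor of the technique, here is a minimal sketch (hypothetical policy table with start_dt and end_dt) that turns period starts and ends into +1/-1 events and applies a running SUM() to count policies active at each boundary date:

SELECT event_dt
     , SUM(delta) OVER (ORDER BY event_dt
                        ROWS UNBOUNDED PRECEDING) AS active_policies
FROM (
  SELECT start_dt AS event_dt, 1 AS delta FROM policy
  UNION ALL
  SELECT end_dt, -1 FROM policy
) e;
-- Reading active_policies at the last event of each date answers the
-- zero / one / multiple active-policy questions above.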

Maybe you want to ensure that your sandbox applications never use more than 2% of the total platform CPU. No problem! Put a CPU limit of 2% on them. Or maybe you’ve got some resource-intensive background work you want to ensure stays in the background. CPU limits are there for you.

Looking for an extra level of protection around your online, web, or interactive queries? Think about expediting the priority scheduler allocation group that they run in.

This course was designed for Data Analysts and Business Users of the Teradata Data Warehouse. It covers data distribution, access, storage and Teradata terminology and how to use the Teradata Utility SQL Assistant to submit Structured Query Language (SQL) statements. This course offers practical, hands-on experience with retrieving and manipulating data with Teradata SQL using both ANSI standard conventions and Teradata extensions to the language.

This course offers training in advanced features and techniques for retrieving and manipulating data with Teradata Structured Query Language (SQL), using both ANSI standard conventions and Teradata extensions to the language.