All Forums

Share best practices, knowledge, successes and issues in Analytics, and discuss Analytical Applications.

Bad character in format or data Topic by Meenakshi S 12 Nov 2008

Hi,

I have two tables, say Table1 and Table2. Both tables have the same column, say Phone_Number, but its datatype in Table1 is CHAR(27) and in Table2 is DECIMAL(10). I am using the query below to delete from Table1 the phone numbers that are present in Table2:

DELETE FROM Table1
WHERE Phone_Number IN (SELECT Phone_Number FROM Table2);

The query works fine, except in cases where Table1 has a Phone_Number using the full 27 characters; then I get the error "Bad character in format or data of Table1". Any idea how to resolve this?
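The error comes from the implicit cast: comparing CHAR with DECIMAL makes Teradata convert the character values to numbers, and any value that is not purely numeric fails. A minimal sketch of one workaround, comparing on the character side instead (table and column names from the post; the fixed 10-digit layout is an assumption):

-- Compare as text so non-numeric values in Table1 never go through a DECIMAL cast.
-- FORMAT '9(10)' renders the DECIMAL(10) as a fixed 10-digit string.
DELETE FROM Table1
WHERE TRIM(Phone_Number) IN (
   SELECT CAST(CAST(Phone_Number AS FORMAT '9(10)') AS CHAR(10))
   FROM Table2
);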

51176 views
8 replies, 8 years ago
Create new row using SQL Topic by RRCIL 03 Aug 2011

Here is my requirement. I prefer to do this sql.

TableA:
0123400000,11/01/2008,04/29/2009
0123400000,04/30/2009,07/14/2010
0123400000,07/15/2010,07/16/2010
0123400000,07/17/2010,09/17/2010

TableB:
0123400000,01/01/2010,12/31/2021

select a.col1, a.col2, a.col3, b.col1, b.col2, b.col3
from TableA a
left join TableB b
on a.col1 = b.col1
and a.col2 <= b.col3
and a.col3 >= b.col2

Result:
0123400000,11/01/2008,04/29/2009,0123400000,NULL,NULL
0123400000,04/30/2009,07/14/2010,0123400000,01/01/2010,12/31/2021

5849 views
0 replies
How to break a date range into 2 different rows? Topic by sam_dhse 02 Aug 2011

Hi,

I have Table1
EmployeeNo StartDt EndDt
1 1/1/09 12/31/09
1 1/1/10 12/31/10
1 1/1/11 12/31/11

Table2
EmployeeNo ManagerStartDate ManagerEndDate
1 7/1/10 6/30/11

Employee 1 was promoted to Manager from 7/1/10 to 6/30/11 and then demoted back to employee starting 7/1/11.
Results
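The expected results are cut off in the post, but the standard technique is to split each employment row around the overlapping manager period. A minimal sketch under that reading (names from the post; inclusive date ranges assumed):

-- Piece of the employment row before the promotion.
SELECT t1.EmployeeNo, t1.StartDt, t2.ManagerStartDate - 1 AS EndDt
FROM Table1 t1
JOIN Table2 t2
  ON t1.EmployeeNo = t2.EmployeeNo
 AND t1.StartDt < t2.ManagerStartDate
 AND t1.EndDt >= t2.ManagerStartDate
UNION ALL
-- Piece of the employment row after the demotion.
SELECT t1.EmployeeNo, t2.ManagerEndDate + 1 AS StartDt, t1.EndDt
FROM Table1 t1
JOIN Table2 t2
  ON t1.EmployeeNo = t2.EmployeeNo
 AND t1.StartDt <= t2.ManagerEndDate
 AND t1.EndDt > t2.ManagerEndDate
UNION ALL
-- Rows that do not overlap the manager period pass through unchanged.
SELECT t1.EmployeeNo, t1.StartDt, t1.EndDt
FROM Table1 t1
WHERE NOT EXISTS (
   SELECT 1 FROM Table2 t2
   WHERE t2.EmployeeNo = t1.EmployeeNo
     AND t1.StartDt <= t2.ManagerEndDate
     AND t1.EndDt >= t2.ManagerStartDate
);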

6930 views
1 reply, 8 years ago
terminology Topic by pratiktaank 03 Aug 2011

What is market share, in brief?
What are volume growth and volume growth incentives?
Is there any connection between volume growth and the market share of a particular product?

5601 views
0 replies
Regarding Procedures Topic by Ansh 29 Jul 2011

Dear TD Masters,

I have the below procedure :

REPLACE PROCEDURE ProcedureName ()
BEGIN

DECLARE stmt VARCHAR(1024);

SET stmt = 'REPLACE VIEW DatabaseX.ViewA AS LOCK ROW FOR ACCESS'
        || ' SELECT * FROM DatabaseY.TableA;';
CALL dbc.SysExecSQL(:stmt);

SET stmt = 'REPLACE VIEW DatabaseX.ViewB AS LOCK ROW FOR ACCESS'
        || ' SELECT * FROM DatabaseY.TableB;';
CALL dbc.SysExecSQL(:stmt);

END;

My requirement is that when I execute this procedure, either both statements get executed or neither does.
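Note that Teradata allows at most one DDL statement per transaction, and it must be the last statement, so two REPLACE VIEW statements cannot be made truly atomic. A minimal sketch of the closest alternative (an assumption, not the only approach): stop at the first failure and report the outcome to the caller.

REPLACE PROCEDURE ProcedureName (OUT msg VARCHAR(128))
BEGIN

DECLARE stmt VARCHAR(1024);

-- On any SQL error, stop and report; if the first view was already
-- replaced, it stays replaced (the DDL cannot be rolled back here).
DECLARE EXIT HANDLER FOR SQLEXCEPTION
   SET msg = 'REPLACE VIEW failed; procedure stopped.';

SET stmt = 'REPLACE VIEW DatabaseX.ViewA AS LOCK ROW FOR ACCESS'
        || ' SELECT * FROM DatabaseY.TableA;';
CALL dbc.SysExecSQL(:stmt);

SET stmt = 'REPLACE VIEW DatabaseX.ViewB AS LOCK ROW FOR ACCESS'
        || ' SELECT * FROM DatabaseY.TableB;';
CALL dbc.SysExecSQL(:stmt);

SET msg = 'Both views replaced.';

END;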

6722 views
1 reply, 8 years ago
need SQL help (The Output does not contain correct number of rows) Topic by sk73 19 Jul 2011

Hi,

I need your help on this issue.

I have a query between two tables A & B.

I get >100 rows in the result set when I write a query as follows:-

SELECT
A.id, B.id
FROM
table_a A,
table_b B
where
A.id=B.id;

But, no rows are returned when I write a query as follows:-

6978 views
4 replies, 8 years ago
Teradata Optimization needed on query. Topic by lucky_ipsu 02 Apr 2009

Hi All,

Below is the explain plan for the query that needs to be optimized. Sorry, I could not attach it due to some security problems. Please let me know which part of the explain needs to be addressed to optimize the query.

Explanation
1) First, we lock aedwprd1.membership_fact_ for access, we lock aedwprd1.src_sys_dim_ for access, we lock aedwprd1.prod_var_ernng_cmpnt_dim_ for access, we lock aedwprd1.mbrshp_covg_mth_strt_dim_ for access, we lock aedwprd1.mbrshp_acct_mth_dim_ for access, we lock aedwprd1.firm_dim_ for access, and we lock aedwprd1.chrtfld_dim_ for access.
2) Next, we do an all-AMPs RETRIEVE step from aedwprd1.mbrshp_covg_mth_strt_dim_ by way of an all-rows scan with a condition of ("(aedwprd1.mbrshp_covg_mth_strt_dim_.yr_mth_nbr <= 200810.) AND (aedwprd1.mbrshp_covg_mth_strt_dim_.yr_mth_nbr >= 200801.)") into Spool 4 (all_amps) (compressed columns allowed), which is duplicated on all AMPs. Then we do a SORT to order Spool 4 by row hash. The size of Spool 4 is estimated with low confidence to be 2,310 rows. The estimated time for this step is 0.00 seconds.
3) We execute the following steps in parallel.
   1) We do an all-AMPs JOIN step from Spool 4 (Last Use) by way of an all-rows scan, which is joined to aedwprd1.membership_fact_ by way of a traversal of index # 12 without accessing the base table extracting row ids only. Spool 4 and aedwprd1.membership_fact_ are joined using a nested join, with a join condition of ("dt_dt = aedwprd1.membership_fact_.covg_mth_strt_dt"). The input table aedwprd1.membership_fact_ will not be cached in memory. The result goes into Spool 5 (all_amps), which is built locally on the AMPs. Then we do a SORT to order Spool 5 by field Id 1. The size of Spool 5 is estimated with low confidence to be 7,149,465 rows. The estimated time for this step is 2.59 seconds.
   2) We do an all-AMPs RETRIEVE step from aedwprd1.src_sys_dim_ by way of an all-rows scan with no residual conditions into Spool 6 (all_amps) (compressed columns allowed), which is duplicated on all AMPs. The size of Spool 6 is estimated with high confidence to be 4,830 rows. The estimated time for this step is 0.01 seconds.
4) We execute the following steps in parallel.
   1) We do an all-AMPs JOIN step from Spool 5 (Last Use) by way of an all-rows scan, which is joined to aedwprd1.membership_fact_ by way of an all-rows scan with no residual conditions. Spool 5 and aedwprd1.membership_fact_ are joined using a row id join, with a join condition of ("(1=1)"). The input table aedwprd1.membership_fact_ will not be cached in memory. The result goes into Spool 7 (all_amps) (compressed columns allowed), which is built locally on the AMPs. The size of Spool 7 is estimated with low confidence to be 7,149,465 rows. The estimated time for this step is 2.79 seconds.
   2) We do an all-AMPs RETRIEVE step from aedwprd1.mbrshp_acct_mth_dim_ by way of an all-rows scan with no residual conditions into Spool 8 (all_amps) (compressed columns allowed), which is duplicated on all AMPs. The size of Spool 8 is estimated with high confidence to be 149,730 rows. The estimated time for this step is 0.10 seconds.
   3) We do an all-AMPs RETRIEVE step from aedwprd1.prod_var_ernng_cmpnt_dim_ by way of an all-rows scan with no residual conditions into Spool 9 (all_amps) (compressed columns allowed), which is duplicated on all AMPs. Then we do a SORT to order Spool 9 by row hash. The size of Spool 9 is estimated with high confidence to be 12,754,560 rows. The estimated time for this step is 1.47 seconds.
5) We do an all-AMPs JOIN step from Spool 6 (Last Use) by way of an all-rows scan, which is joined to Spool 7 (Last Use) by way of an all-rows scan. Spool 6 and Spool 7 are joined using a product join, with a join condition of ("src_sys_key = src_sys_key"). The result goes into Spool 10 (all_amps) (compressed columns allowed), which is built locally on the AMPs. Then we do a SORT to order Spool 10 by row hash. The size of Spool 10 is estimated with low confidence to be 7,149,465 rows. The estimated time for this step is 1.45 seconds.
6) We execute the following steps in parallel.
   1) We do an all-AMPs JOIN step from Spool 9 (Last Use) by way of a RowHash match scan, which is joined to Spool 10 (Last Use) by way of a RowHash match scan. Spool 9 and Spool 10 are joined using a merge join, with a join condition of ("prod_var_ec_key = prod_var_ec_key"). The result goes into Spool 11 (all_amps) (compressed columns allowed), which is redistributed by hash code to all AMPs. Then we do a SORT to order Spool 11 by row hash. The size of Spool 11 is estimated with low confidence to be 7,149,465 rows. The estimated time for this step is 3.78 seconds.
   2) We do an all-AMPs JOIN step from Spool 8 (Last Use) by way of an all-rows scan, which is joined to aedwprd1.chrtfld_dim_ by way of an all-rows scan with a condition of ("((aedwprd1.chrtfld_dim_.cf_covg_cd = '100') OR ((aedwprd1.chrtfld_dim_.cf_covg_cd = '130') OR (aedwprd1.chrtfld_dim_.cf_covg_cd = '140'))) AND (((aedwprd1.chrtfld_dim_.cf_fndg_arr_cd = '10') OR (aedwprd1.chrtfld_dim_.cf_fndg_arr_cd = '50')) AND (aedwprd1.chrtfld_dim_.cf_bus_unit_cd = '01800'))"). Spool 8 and aedwprd1.chrtfld_dim_ are joined using a product join, with a join condition of ("(1=1)"). The result goes into Spool 12 (all_amps) (compressed columns allowed), which is redistributed by hash code to all AMPs. Then we do a SORT to order Spool 12 by row hash. The size of Spool 12 is estimated with high confidence to be 10,072,551 rows. The estimated time for this step is 1.46 seconds.
7) We do an all-AMPs JOIN step from aedwprd1.firm_dim_ by way of a RowHash match scan with no residual conditions, which is joined to Spool 11 (Last Use) by way of a RowHash match scan. aedwprd1.firm_dim_ and Spool 11 are joined using a merge join, with a join condition of ("aedwprd1.firm_dim_.firm_key = firm_key"). The result goes into Spool 13 (all_amps) (compressed columns allowed), which is redistributed by hash code to all AMPs. Then we do a SORT to order Spool 13 by row hash. The size of Spool 13 is estimated with low confidence to be 7,149,465 rows. The estimated time for this step is 1.47 seconds.
8) We do an all-AMPs JOIN step from Spool 12 (Last Use) by way of a RowHash match scan, which is joined to Spool 13 (Last Use) by way of a RowHash match scan. Spool 12 and Spool 13 are joined using a merge join, with a join condition of ("(cf_key = cf_key) AND (dt_dt = acct_mth_dt)"). The result goes into Spool 3 (all_amps) (compressed columns allowed), which is built locally on the AMPs. The size of Spool 3 is estimated with low confidence to be 7,149,465 rows. The estimated time for this step is 0.60 seconds.
9) We do an all-AMPs SUM step to aggregate from Spool 3 (Last Use) by way of an all-rows scan, and the grouping identifier in field 1. Aggregate Intermediate Results are computed globally, then placed in Spool 14. The size of Spool 14 is estimated with no confidence to be 5,362,099 rows. The estimated time for this step is 4.75 seconds.
10) We do an all-AMPs RETRIEVE step from Spool 14 (Last Use) by way of an all-rows scan into Spool 1 (group_amps), which is built locally on the AMPs. The size of Spool 1 is estimated with no confidence to be 5,362,099 rows. The estimated time for this step is 0.37 seconds.
11) Finally, we send out an END TRANSACTION step to all AMPs involved in processing the request.
-> The contents of Spool 1 are sent back to the user as the result of statement 1. The total estimated time is 17.80 seconds.

19233 views
4 replies, 8 years ago
TVA 3 in TD13.0 Topic by Dixxie 30 May 2011

Hi,

I'm testing TVA 3 patch 2 (3.00.04.02) on TD 13.00.01.04, but some tasks with INSERT-SELECT run very slowly.

I can't find another patch to apply to TVA 3.

Could you give me some suggestions?

Regards.

5684 views
0 replies
Percentile on MS Excel vs Teradata Topic by jainayush007 03 May 2011 percentile percent_rank quantile

Hello,

My client performs percentile calculations using the PERCENTILE function in MS Excel, and I need to reproduce the same results in Teradata.

I tried both QUANTILE and PERCENT_RANK, and neither returns the same result.

Here is an example:

Value set:

1034
1034
1034
1034
1034
1034
1034
1034
1137
1266
1273
1294
1294
1295
1305
1327
1327
1328
1329
1329
1329
1329
1329
1329
1329
1329
1329
1329
1329
1329
1329
1329
1329
1329
1329
1329
1329
1329
1329
1329
1329
1329
1329
1329
1329
1329
1329
1329
1329
1329
1329
1329
1329
1329
1329
1329
1329
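Excel's PERCENTILE interpolates linearly between order statistics, taking rank rk = 1 + p * (n - 1) and blending the two neighbouring values, while QUANTILE and PERCENT_RANK do not interpolate, which is why the results differ. A minimal sketch of the interpolation in SQL, assuming the values sit in a one-column table myvalues(v) and that FLOOR/CEILING are available on your release; p = 0.25 is just an example:

SELECT val_lo + (rk - FLOOR(rk)) * (val_hi - val_lo) AS excel_percentile
FROM (
   SELECT
      MAX(CASE WHEN rn = FLOOR(rk)   THEN v END) AS val_lo,  -- lower neighbour
      MAX(CASE WHEN rn = CEILING(rk) THEN v END) AS val_hi,  -- upper neighbour
      MAX(rk) AS rk                                          -- same on every row
   FROM (
      SELECT v,
             ROW_NUMBER() OVER (ORDER BY v) AS rn,
             1 + 0.25 * (COUNT(*) OVER () - 1) AS rk         -- p = 0.25 assumed
      FROM myvalues
   ) t
) x;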

8027 views
2 replies, 8 years ago
Concatenate Vertically? Topic by Benjamin Marcus 09 Jul 2007

Dear whoever knows this,

If I had 3 rows of data like this:

Number  CITY
1       USA
2       NYK
3       HKG

In Excel there is an '&' function to concatenate vertically, e.g. =ROW1&","&ROW2&","&ROW3, giving the result USA,NYK,HKG. How do I do this in SQL?

Many thanks.
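Teradata of this vintage has no built-in group-concatenation function, but a recursive query can fold the rows into one string. A minimal sketch, assuming a table cities(Nbr INTEGER, City VARCHAR(10)) numbered consecutively as in the example:

WITH RECURSIVE concat_cte (Nbr, City_List) AS (
   -- Seed: start the list with row number 1.
   SELECT Nbr, CAST(City AS VARCHAR(1000))
   FROM cities
   WHERE Nbr = 1
   UNION ALL
   -- Recurse: append the next row's city to the running list.
   SELECT c.Nbr, cte.City_List || ',' || c.City
   FROM cities c
   JOIN concat_cte cte
     ON c.Nbr = cte.Nbr + 1
)
SELECT City_List
FROM concat_cte
QUALIFY ROW_NUMBER() OVER (ORDER BY Nbr DESC) = 1;  -- keep only the full list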

91711 views
12 replies, 8 years ago
Best ways to migrate code from Mainframes to Unix Topic by enjoycoding 07 Apr 2011 teradata mainframes unix migration

Hi,

We have a huge system with several Teradata boxes for different applications and countries of our organization. All of the overnight batch jobs that process the business logic are BTEQ, FastLoad, and MultiLoad scripts run from JCL in a mainframe environment. This environment gives us facilities to run JCL-based BTEQ jobs with several steps, and we can restart a job from any selected step.

7530 views
1 reply, 8 years ago
How can we Reverse a String in Teradata Topic by Jagdish 16 Aug 2007

Hi,

How can we reverse a string in Teradata? For example, the string 'STAGINGAREA' needs to be reversed to 'AERAGNIGATS'.

Thanks,
Jagdish
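Older Teradata releases have no REVERSE function, but a recursive query can rebuild the string back to front. A minimal sketch using the literal from the post:

WITH RECURSIVE rev (pos, reversed) AS (
   SELECT 0, CAST('' AS VARCHAR(100))
   UNION ALL
   -- Prepend the next character, so the string accumulates reversed.
   SELECT pos + 1,
          SUBSTR('STAGINGAREA', pos + 1, 1) || reversed
   FROM rev
   WHERE pos < CHARACTER_LENGTH('STAGINGAREA')
)
SELECT reversed
FROM rev
QUALIFY ROW_NUMBER() OVER (ORDER BY pos DESC) = 1;  -- row holding all characters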

45596 views
5 replies, 8 years ago
Multi-Value Compression Evaluation Topic by JustMe 01 Apr 2011

Does anyone have an algorithm/SQL to evaluate possible opportunities for multi-value compression on a single table?
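A simple starting point is per-column value-frequency analysis: MVC pays off when a handful of values cover a large share of the rows. A minimal sketch for one column (database, table, and column names are assumptions; 255 is the per-column MVC value limit):

SELECT TOP 255 col1, COUNT(*) AS freq
FROM mydb.mytable
GROUP BY col1
ORDER BY freq DESC;

Repeating this per column and weighing frequency times column width against the compress-list overhead gives a rough candidate list.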

6842 views
0 replies
Counting number of text lines in a character string Topic by mitumanish 23 Jun 2008

Hi, I have a character string with multiple lines (the user presses carriage return). I need to count the number of lines, that is, the number of times a carriage return was entered. Does anyone know how to do it?
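The usual trick is to compare the string's length with its length after removing the carriage-return characters. A minimal sketch, assuming a table notes(txt VARCHAR(5000)) and that the OREPLACE and CHR functions are available (later releases, or the equivalent string UDFs):

-- Each removed CR shortens the string by one, so the difference counts the
-- carriage returns; adding 1 converts breaks to lines.
SELECT CHARACTER_LENGTH(txt)
     - CHARACTER_LENGTH(OREPLACE(txt, CHR(13), '')) + 1 AS line_count
FROM notes;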

18619 views
2 replies, 8 years ago
Dynamic views Topic by lgscheurich 08 Nov 2010 view qualify by date range

We have a query where the rows returned depend on a date range. Here is an example:

SELECT TABLE1.*
FROM TABLE1
INNER JOIN TABLE2
ON TABLE1.pty_id = TABLE2.pty_id
AND TABLE1.evt_end_dte BETWEEN TABLE2.stt_dte AND TABLE2.end_dte
WHERE TABLE1.evt_end_dte BETWEEN '2010-10-01' AND '2010-10-31'
QUALIFY ROW_NUMBER() OVER (PARTITION BY TABLE1.pty_id ORDER BY TABLE1.evt_end_tme) = 1

9595 views
3 replies, 8 years ago
what are my options in tuning this kind of query. In real life what worked best for you Topic by shaftngears 11 Mar 2011

I have a huge table with billions of rows and 3+ TB of data, left outer joined to another similar but smaller table.
The pattern is something like:

SELECT ...
FROM tb1             /* this is the real gorilla table, 2 TB+ */
LEFT OUTER JOIN C2   /* runner-up to tb1 in size */
ON ...
WHERE ...
AND tb1.C5 NOT IN ( < a huge list of values > )
AND Tb4.C12 NOT IN ( < another huge list of values > )
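One rewrite that has often worked in practice is landing the long literal lists in a volatile table and turning NOT IN into an exclusion join, so the optimizer can pick a join plan instead of evaluating a huge per-row predicate (NOT EXISTS also sidesteps NOT IN's NULL pitfalls). A minimal sketch for one of the lists, with assumed names and values:

CREATE VOLATILE TABLE excl_c5 (c5 INTEGER NOT NULL)
PRIMARY INDEX (c5)
ON COMMIT PRESERVE ROWS;

INSERT INTO excl_c5 VALUES (101);
INSERT INTO excl_c5 VALUES (102);   -- one row per excluded value

SELECT tb1.C1
FROM tb1
WHERE NOT EXISTS (
   SELECT 1
   FROM excl_c5 e
   WHERE e.c5 = tb1.C5
);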

8291 views
8 replies, 8 years ago
Object already exists - Workaround to overwrite dataset in teradata Topic by karanberry 28 Feb 2011 object already exisits

Hi,

I use SAS quite often and all my programs connect to the Teradata server to get the data. However, some of the programs also create datasets on the server.

The problem occurs when we run the same program again and Teradata gives the error that the 'object already exists'. Currently we use the following snippet to remove those datasets before the relevant step executes.

%local check;
%let check = %sysfunc(exist(<libname>.<dataset>));
%if &check = 1 %then %do;
   proc datasets lib=<libname> nolist;
      delete <dataset>;
   run;
%end;

However, this snippet works sometimes and doesn't work other times.

6877 views
0 replies
MLOAD Errors Topic by ColumbusKid 28 Jun 2006

Is there a way, within a MultiLoad, to remove rows that have insufficient data in a record, so the MLOAD will continue to load the rows that are of the proper length?

**** 09:25:16 UTY4017 Not enough data in vartext record number 13145033.

23640 views
2 replies, 8 years ago
TeradataR 1.0 Topic by andeek 02 Nov 2010 statistics, analytics, r, teradatar, rodbc

I found this package while researching a connection between R and Teradata. Has anyone seen documentation or a reference manual for the package? Since it's a 3rd-party package (not residing on CRAN), I haven't found a lot of people talking about or using it.

Any experience with or knowledge about TeradataR would be appreciated!

Thanks!

9529 views
9 replies, 9 years ago
Warehouse Miner Literal Parameters Topic by tmonte1 31 Aug 2010 teradata, warehouse miner, literal parameter

Does anyone know if there is a way to dynamically set the value of a literal parameter based on the values returned in a prior analysis? As with all reports, we use a lot of dates and in order to hit the partitions on most of our tables, the dates have to be in a literal (date) or a literal parameter (date).

11101 views
1 reply, 9 years ago
Temporary variable store in SQL Topic by zxz118 26 Oct 2010 temporary variable

Hi,
Is there any way to make SQL aware of a temporary variable whose value varies from record to record? The job that needs to be done is the following:

Suppose C1 and C2 are two columns in the table, and a1 and a2 are two constants. f1(x) to fn(x) are n transformation functions. I want to get:
select
sum(f1(C1*a1+C2*a2)),
sum(f2(C1*a1+C2*a2)),
...
sum(fn(C1*a1+C2*a2))

Is there any way that SQL can first remember a variable y=C1*a1+C2*a2 for each record, then directly use this y to do the select:
sum(f1(y)),
sum(f2(y)),
...
sum(fn(y))
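A derived table does exactly this: compute y once per record in an inner query and refer to it by name outside. A minimal sketch with stand-in pieces (EXP, squaring, and LN stand in for f1, f2, and fn; the constants and table name are assumptions):

SELECT
   SUM(EXP(y))  AS s1,   -- stands in for f1(y)
   SUM(y * y)   AS s2,   -- stands in for f2(y)
   SUM(LN(y))   AS sn    -- stands in for fn(y); assumes y > 0
FROM (
   SELECT C1 * 0.5 + C2 * 1.5 AS y   -- a1 = 0.5, a2 = 1.5 assumed
   FROM mytable
) t;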

8686 views
1 reply, 9 years ago
Passing Case Expression to a IN Operator Topic by sam_dhse 16 Jul 2010

Hi,

How do we pass a CASE expression to an IN operator?

Select * from table
where col1 in (case when parameter is null then col1 else parameter end)

Parameter in this case has the value 'A','B','C'

Sam
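A CASE expression returns a single value, so a comma-separated parameter cannot expand into an IN list. One common workaround tests the parameter separately; a minimal sketch, assuming the parameter arrives as a plain string such as A,B,C and that :parameter host-variable syntax is valid in the calling context (e.g. a macro):

SELECT *
FROM mytable
WHERE :parameter IS NULL
   -- Wrap both sides in commas so 'A' matches only as a whole list element.
   OR POSITION(',' || TRIM(col1) || ',' IN ',' || :parameter || ',') > 0;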

11883 views
6 replies, 9 years ago
Order of joined tables Topic by Zbirda 04 Mar 2010 table, join, tuning

Hi ALL,

I have a simple question: does the order of the joined tables matter? Which is better:

select ...

from large_Table lt

inner join medium_table mt
on...

inner join small_table st
on...

or:

select ...

from small_Table st

inner join medium_table mt
on...

inner join large_table lt
on...

8906 views
2 replies, 9 years ago
Dynamic DDL Topic by saan 22 Jun 2010 ddl, dynamic

Hi,
I need to create a new table based on the data from a query result. The column names in the new table will be the data values from the result of a query.

For example, let's say my source query returns the following records:

Field1 Field2
V1 A1
V2 A2
V3 A3

Then table DDL should be like:

Create table testdb.test_tab
(
Start_dt date,
v1 varchar(30),
v2 varchar(30),
v3 varchar(30),
a1 varchar(30),
a2 varchar(30),
a3 varchar(30)
)
primary index(start_dt);
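Because the column list is only known at run time, the DDL has to be assembled as a string and executed dynamically. A minimal sketch of a stored procedure doing that; the source table src(Field1, Field2) and every name here are assumptions:

REPLACE PROCEDURE testdb.build_test_tab ()
BEGIN

DECLARE ddl  VARCHAR(4000) DEFAULT 'CREATE TABLE testdb.test_tab (Start_dt DATE';
DECLARE col  VARCHAR(30);
DECLARE done INTEGER DEFAULT 0;

-- Field1 values first, then Field2 values, as in the desired DDL.
DECLARE cur CURSOR FOR
   SELECT Field1 FROM src
   UNION ALL
   SELECT Field2 FROM src;

DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

OPEN cur;
FETCH cur INTO col;
WHILE done = 0 DO
   SET ddl = ddl || ', ' || TRIM(col) || ' VARCHAR(30)';   -- one column per value
   FETCH cur INTO col;
END WHILE;
CLOSE cur;

SET ddl = ddl || ') PRIMARY INDEX (Start_dt);';
CALL dbc.SysExecSQL(:ddl);

END;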

10496 views
1 reply, 9 years ago
using dbc.tablesize, determine total tablesize for a specific user Topic by stuffie0912 16 Jun 2010 dbc.tablesize

Using TD SQL Assistant, how can I find the total size of all the tables I have created?

Just for my username.

What statistics are in dbc.TableSize?

Will it report size information for tables I have created even if I did not use COLLECT STATISTICS?

Thank you,

Stephanie
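dbc.TableSize reports physical space (CurrentPerm and PeakPerm per AMP), which Teradata maintains automatically, so COLLECT STATISTICS is not needed for it. A common query to total the size per table, assuming your tables live in a database named after your logon user:

SELECT DatabaseName, TableName, SUM(CurrentPerm) AS total_bytes
FROM dbc.TableSize
WHERE DatabaseName = USER   -- built-in USER keyword: the current logon
GROUP BY 1, 2
ORDER BY total_bytes DESC;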

23911 views
2 replies, 9 years ago
