4378 | 13 Apr 2016 @ 12:02 PDT | Teradata Database on VMWare | Reply | Check partition that a query is accessing. | Hi Jatan,
why do you need that information?
It's not available in DBQL; depending on the SELECT it might be a single partition or 200,000 of them, so how would you store it?
The Explain text found in Q... |
4377 | 12 Apr 2016 @ 11:45 PDT | Database | Reply | Convert TSQL function to Teradata | There's no way to rewrite that function as a SQL-UDF.
For a single calculation you might rewrite it as a Stored Procedure, but you shouldn't run this logic on a larger number of rows becau... |
4376 | 12 Apr 2016 @ 11:17 PDT | Database | Reply | Ephoc to teradata timestamp |
case when EMP_join_date=-99 then NULL else ...
|
4375 | 12 Apr 2016 @ 11:08 PDT | Database | Reply | Ephoc to teradata timestamp | You might simply use a NULL timestamp instead of '-99'.
DATE '1970-01-01' is used because you wrote that it's an epoch:
https://en.wikipedia.org/wiki/Unix_epoch
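A minimal sketch of that conversion, assuming the column holds epoch seconds and -99 marks a missing value (table and column names are hypothetical):

```sql
SELECT CASE
         WHEN emp_join_date = -99 THEN NULL                -- -99 is a missing-value marker
         ELSE DATE '1970-01-01' + (emp_join_date / 86400)  -- whole days since the Unix epoch
       END AS join_dt
FROM emp;
```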
|
4374 | 11 Apr 2016 @ 09:52 PDT | Tools | Reply | Exporting CSV from TPT |
I don't have expertise in reading detailed TPT logs, better wait for Steve Feinholz :-)
Sending the data took only 6 seconds:
**** 15:53:11 ... |
4373 | 11 Apr 2016 @ 09:17 PDT | Database | Reply | Execution modes in teradata | SQLUDFs are plain SQL, there's no compiled code, thus there's no protected mode.
|
4372 | 11 Apr 2016 @ 09:17 PDT | Jobs | Reply | How to reinitiatite the BTEQ after a sleep time if SQL code outputs an error? | You can run a SP within a transaction, but there's no reason to pause a transaction for a minute; transactions should always run as fast as possible, with no user interaction and no pauses.
|
4371 | 11 Apr 2016 @ 09:14 PDT | General | Reply | NEED HELP !! - Merging Consecutive and OVERLAPPING date spans | You can't create a view on top of a CTE; I don't know why.
You can materialize the result using
INSERT INTO target -- or CREATE TABLE newtab AS (
WITH cte...
If you&... |
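The materialization route mentioned above might look like this (a sketch; the target table name is hypothetical, column names are taken from the thread):

```sql
CREATE TABLE merged_spans AS
( WITH cte (pstncd, effper) AS
  ( SELECT CAST(pstncd AS INTEGER), effper
    FROM cte_base
  )
  SELECT pstncd, effper
  FROM cte
) WITH DATA;
```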
4370 | 11 Apr 2016 @ 05:31 PDT | Database | Reply | Execution modes in teradata | Whenever you CREATE or REPLACE a C-UDF it's created in protected mode, you need to
ALTER FUNCTION my_C-UDF EXECUTE NOT PROTECTED;
|
4369 | 11 Apr 2016 @ 05:26 PDT | General | Reply | NEED HELP !! - Merging Consecutive and OVERLAPPING date spans | You can't nest a Common Table Expression (cte = WITH) in a Derived Table (dt = FROM (SELECT)).
WITH cte(pstncd, effper) AS
(
SELECT CAST(pstncd AS INT) , effper
FROM cte_base
)
SELECT ... |
4368 | 11 Apr 2016 @ 12:57 PDT | Database | Reply | mload delete | There's a nice manual: http://www.info.teradata.com/HTMLPubs/DB_TTU_15_10/Load_and_Unload_Utilities/B035_2409_035K/2409apD.032.1.html
|
4367 | 11 Apr 2016 @ 12:18 PDT | Database | Reply | Will increasing block size help handle the error : The maximum possible Row length in the table is too large | Hi Meenakshi,
Greenplum is based on PostgreSQL, which has a much higher record size limit, so you probably need to split the table (or start normalizing it).
|
4366 | 10 Apr 2016 @ 11:56 PDT | Database | Reply | How Tos to run TDExpress on VM | What information exactly?
The newer TDExpress releases no longer ship with pre-installed databases; you create one and then load your own data:
.logon VM.ip.address/dbc,dbc
-- you shouldn'... |
4365 | 10 Apr 2016 @ 11:48 PDT | Database | Reply | qualify rank() over (partition.....question | If there's always a row per month you can simply use:
SUM(travelhours)
OVER (PARTITION BY State, City, Traveler
ORDER BY YR-MNTH
ROWS 11 PRECEDING)
And for the stat... |
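Embedded in a full query, the window above might look like this (table and column names are assumptions; ROWS 11 PRECEDING plus the current row spans 12 monthly rows):

```sql
SELECT state, city, traveler, yr_mnth,
       SUM(travelhours)
       OVER (PARTITION BY state, city, traveler
             ORDER BY yr_mnth
             ROWS 11 PRECEDING) AS hours_12m  -- rolling 12-month total
FROM travel;
```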
4364 | 10 Apr 2016 @ 11:43 PDT | Tools | Reply | Exporting CSV from TPT | You submitted a SQL Select, but used the FastExport protocol, so this is the fastest possible way.
Changing the number of sessions might not help at all, because this is usually set/adjusted by Wo... |
4363 | 10 Apr 2016 @ 11:09 PDT | General | Reply | NEED HELP !! - Merging Consecutive and OVERLAPPING date spans | It works for other data types, too, but you have to define them accordingly.
For a VarChar it must be a matching definition including the character set:
RETURNS (grpcol VARCHAR(n) ... |
4362 | 10 Apr 2016 @ 11:04 PDT | Tools | Reply | Using Rank Numbers are getting Skipped in Teradata 13.10 | TD13.10 is a very old version.
You need DENSE_RANK, which is not supported before TD14.10, but there's a workaround; see:
Missing Functions: DENSE_RANK
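One common emulation on older releases (a sketch; `t` and `val` are hypothetical names): flag each row where the ordering value changes, then running-total the flags:

```sql
SELECT val,
       SUM(chg) OVER (ORDER BY val
                      ROWS UNBOUNDED PRECEDING) AS dense_rnk
FROM ( SELECT val,
              -- 0 when the value equals the previous row's value (a tie), else 1
              CASE WHEN val = MIN(val)
                            OVER (ORDER BY val
                                  ROWS BETWEEN 1 PRECEDING
                                           AND 1 PRECEDING)
                   THEN 0 ELSE 1 END AS chg
       FROM t ) AS d;
```

Ties get a change flag of 0, so they share a rank, and the running sum never skips numbers.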
|
4361 | 09 Apr 2016 @ 03:40 PDT | Tools | Reply | Exporting CSV from TPT | This is probably related to the session's DateForm setting, ansiDate returns a 10-character string 'yyyy-mm-dd' and integerDate returns the internal 4-byte integer.
Try a... |
4360 | 09 Apr 2016 @ 03:23 PDT | Jobs | Reply | How to reinitiatite the BTEQ after a sleep time if SQL code outputs an error? | REPEAT is similar to "="; both repeat a SQL statement n times, the former the following statement, the latter the preceding one.
They don't work with BTEQ commands like HANG.
If you creat... |
4359 | 09 Apr 2016 @ 03:09 PDT | Database | Reply | Split a string parameter in a stored procedure | Try STRTOK_SPLIT_TO_TABLE:
http://forums.teradata.com/forum/analytics/passing-a-string-into-a-macro-for-use-in-a-in-statement#comment-147224
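A minimal call might look like this (the delimiter and the RETURNS sizes are assumptions):

```sql
SELECT d.tokennum, d.token
FROM TABLE (STRTOK_SPLIT_TO_TABLE(1, 'a,b,c', ',')  -- key, input string, delimiters
            RETURNS (outkey   INTEGER,
                     tokennum INTEGER,
                     token    VARCHAR(30) CHARACTER SET UNICODE)
           ) AS d;
```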
|
4358 | 09 Apr 2016 @ 03:05 PDT | Tools | Reply | Execute Oracle's stored procedures from tpt | The RANK is exactly the same in Teradata, there's just a missing table alias directly before WHERE.
Oracle allows omitting that alias, but Standard SQL requires it.
|
4357 | 08 Apr 2016 @ 10:54 PDT | Database | Reply | Subtracting Timestamps as decimals hours |
As long as the difference is less than 27 years you can simply switch to
tracking_time -
(max(tracking_time)
over (partition by member_sk
order by member_sk, tr... |
4356 | 08 Apr 2016 @ 05:27 PDT | Database | Reply | How to Add IDENTITY_COLUMN into existing table ? | There's no way to add an IDENTITY column to a populated table using ALTER TABLE.
And copying 100,000,000 rows to a new table isn't that hard on a Teradata system (of course you sho... |
4355 | 08 Apr 2016 @ 05:22 PDT | Database | Reply | help optimizing GROUP BY query | If you only have a two-digit number of rows per partition, it should be safe to switch to
PRIMARY INDEX (customer_id)
PARTITION BY (
RANGE_N(dt BETWEEN '2000-04-01'
... |
4354 | 06 Apr 2016 @ 02:01 PDT | Database | Reply | help optimizing GROUP BY query | Statistics will hardly help; the estimates might be better, but the actual resource usage will not change.
In theory the fastest way should be
PRIMARY INDEX (customer_id)
PARTITI... |
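The partitioning sketched above might be completed along these lines (the column list, end date, and interval width are assumptions):

```sql
CREATE TABLE tab_new
( customer_id INTEGER NOT NULL,
  dt          DATE NOT NULL,
  amount      DECIMAL(12,2)
)
PRIMARY INDEX (customer_id)
PARTITION BY RANGE_N(dt BETWEEN DATE '2000-04-01'
                            AND DATE '2016-12-31'
                    EACH INTERVAL '1' MONTH);  -- one partition per month
```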